Re: [Gluster-devel] Gluster 9.6 changes to fix gluster NFS bug

2024-03-16 Thread Aravinda
> We ran into some trouble in Gluster 9.3 with the Gluster NFS server. We 
> updated to a supported Gluster  9.6 and reproduced the problem.



Please share the reproducer steps. We can include them in our tests if possible.
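
Something along these lines would help (this is only a guess at the shape of the 
reproducer based on your description; volume, host and mount names below are 
placeholders):

  gluster volume create testvol replica 3 host{1,2,3}:/bricks/b1/data
  gluster volume set testvol nfs.disable off        # enable the built-in gNFS server
  gluster volume start testvol
  mkdir -p /mnt/nfs && mount -t nfs -o vers=3 host1:/testvol /mnt/nfs
  printf 'x' > /mnt/nfs/onebyte                     # the 1 byte file you mentioned
  md5sum /mnt/nfs/onebyte                           # reportedly returns an I/O error without the patch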



> We understand the Gluster team recommends the use of Ganesha for NFS but in 
> our specific environment and use case, Ganesha isn’t fast enough. No 
> disrespect intended; we never got the chance to work with the Ganesha team on 
> it.



That is totally fine. I think gnfs is disabled in the later versions; you have
to build from source to enable it. The only issue I see is that gnfs doesn't
support NFSv4, and the NFS+Gluster team shifted its focus to NFS Ganesha.
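
For reference, enabling it from source looks roughly like this (volume name and
branch are placeholders; `--enable-gnfs` is the configure flag the packaging
discussions refer to):

  # build glusterfs with the gNFS server compiled in
  git clone https://github.com/gluster/glusterfs.git && cd glusterfs
  git checkout release-9
  ./autogen.sh && ./configure --enable-gnfs
  make && sudo make install

  # then turn on the gNFS server for a volume
  gluster volume set myvol nfs.disable off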



> We tried to avoid Ganesha and Gluster NFS altogether, using kernel NFS with 
> fuse mounts exported, and that was faster, but failover didn’t work. We could 
> make the mount point highly available but not open files (so when the IP 
> failover happened, the mount point would still function but the open file – a 
> squashfs in this example – would not fail over).



Was the Gluster backup volfile server option used, or some other method, for high
availability?
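
For reference, the option I have in mind is the mount-time volfile fallback,
roughly like this (host and volume names are placeholders):

  mount -t glusterfs -o backup-volfile-servers=server2:server3 server1:/myvol /mnt/myvol

Note that this only makes fetching the volume configuration highly available; for
open files, the fuse client anyway connects to all bricks of a replicated volume
directly.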



> So we embarked on a mission to try to figure out what was going on with the 
> NFS server. I am not an expert in network code or distributed filesystems. 
> So, someone with a careful eye would need to check these changes out. 
> However, what I generally found was that the Gluster NFS server requires the 
> layers of gluster to report back ‘errno’ to determine if EINVAL is set (to 
> determine is_eof). In some instances, errno was not being passed down the 
> chain or was being reset to 0. This resulted in NFS traces showing multiple 
> READs for a 1 byte file and the NFS client showing an “I/O” error. It seemed 
> like files above 170M worked ok. This is likely due to how the layers of 
> gluster change with changing and certain file sizes. However, we did not 
> track this part down.



> We found in one case disabling the NFS performance IO cache would fix the 
> problem for a non-sharded volume, but the problem persisted in a sharded 
> volume. Testing found our environment takes the disabling of the NFS 
> performance IO cache quite hard anyway, so it wasn’t an option for us.



> We were curious why the fuse client wouldn’t be impacted but our quick look 
> found that fuse doesn’t really use or need errno in the same way Gluster NFS 
> does.



> So, the attached patch fixed the issue. Accessing small files in either case 
> above now work properly. We tried running md5sum against large files over NFS 
> and fuse mounts and everything seemed fine.



> In our environment, the NFS-exported directories tend to contain squashfs 
> files representing read-only root filesystems for compute nodes, and those 
> worked fine over NFS after the change as well.



> If you do not wish to include this patch because Gluster NFS is deprecated, I 
> would greatly appreciate it if someone could validate my work as our solution 
> will need Gluster NFS enabled for the time being. I am concerned I could have 
> missed a nuance and caused a hard to detect problem.



We can surely include this patch in the Gluster repo, since many tests still use
this feature and it remains available for interested users. Thanks for the patch.
Please submit a PR to the GitHub repo; I will follow up with the maintainers and
update you. Let me know if you need any help submitting the PR.
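
Roughly, the flow for the PR would be (branch name and commit message below are
placeholders; glusterfs commits need a Signed-off-by, hence the -s):

  git checkout -b gnfs-errno-fix origin/devel
  git commit -s -a -m "nfs: do not drop errno needed for is_eof detection"
  git push <your-fork> gnfs-errno-fix
  # then open the pull request against the devel branch of gluster/glusterfs on github.com
  # (or, if you use the GitHub CLI: gh pr create --repo gluster/glusterfs --base devel)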



--
Thanks and Regards
Aravinda

Kadalu Technologies







On Thu, 14 Mar 2024 01:32:50 +0530, Jacobson, Erik wrote:



Hello team.

 

We ran into some trouble in Gluster 9.3 with the Gluster NFS server. We updated 
to a supported Gluster  9.6 and reproduced the problem.

 

We understand the Gluster team recommends the use of Ganesha for NFS but in our 
specific environment and use case, Ganesha isn’t fast enough. No disrespect 
intended; we never got the chance to work with the Ganesha team on it.

 

We tried to avoid Ganesha and Gluster NFS altogether, using kernel NFS with 
fuse mounts exported, and that was faster, but failover didn’t work. We could 
make the mount point highly available but not open files (so when the IP failover 
happened, the mount point would still function but the open file – a squashfs 
in this example – would not fail over).

 

So we embarked on a mission to try to figure out what was going on with the NFS 
server. I am not an expert in network code or distributed filesystems. So, 
someone with a careful eye would need to check these changes out. However, what I 
generally found was that the Gluster NFS server requires the layers of gluster 
to report back ‘errno’ to determine if EINVAL is set (to determine is_eof). In 
some instances, errno was not being passed down the chain or was being reset to 
0. This resulted in NFS traces showing multiple READs for a 

[Gluster-devel] Premium Gluster packages

2024-01-31 Thread Aravinda
Hi all,



TL;DR: Tested GlusterFS packages for upstream releases, with long-term support 
for those packages.



Upstream Gluster releases happen once every 6 months, and only the last two 
releases get updates and security fixes. After a year, each release reaches EOL 
and no further updates are available. Upgrading Gluster frequently in production 
is not practical. One can clone the Gluster repository, check out the respective 
branch and build the packages after applying the required patches. Unfortunately, 
it is not as easy as it sounds: patches may not apply cleanly and may have to be 
modified, and picking the required patches from the list of patches merged in the 
main branch is very time consuming and not productive for everybody.
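
(For anyone new to it, the manual flow is roughly the following; the branch name 
and commit below are placeholders:)

  git clone https://github.com/gluster/glusterfs.git && cd glusterfs
  git checkout release-10                 # the branch you run in production
  git cherry-pick <commit-from-devel>     # repeat per required fix; conflicts may need manual fixing
  ./autogen.sh && ./configure && make     # then package for your distribution and test in staging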



A build may take several minutes, and build issues can appear after merging a 
few patches. Once the built packages are available, testing on staging servers 
is required to qualify the build for production use.



Kadalu Technologies is one of the main contributors to the Gluster project. 
Kadalu Technologies is planning to take up some of the burden described above 
and provide stable, tested Gluster packages. If you are interested in premium 
GlusterFS packages, please help us prioritise the distributions and the use 
cases.



https://forms.gle/xUzZUPxMcdBzNjqGA



The survey will take no more than 5 minutes; please participate and share your 
input.



Please note: This will not change anything in upstream Gluster release cycle or 
the release process.



--
Thanks & Regards

Aravinda VK

Kadalu Technologies
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] Gluster project maintenance - State of Gluster project

2023-11-28 Thread Aravinda
Hello Gluster users/developers,



Previously, we from Kadalu Technologies proposed to support the Gluster project 
for three years. Based on the feedback from many maintainers and users, we will 
not be taking up three years of maintenance. Instead, we will maintain it for one 
year, mainly focusing on the following tasks, and see whether it benefits the 
community. A definite list of tasks will help us stay focused.



1. Documentation website and content improvement.

2. Release management - Continue the existing release cycle and manage the 
releases. Support the packages for CentOS, Ubuntu, Fedora and Debian.

3. Website management - Migration to Github pages with a simple theme. Review 
blog posts from the community to publish in gluster.org. Maintain Gluster.org 
domain renewal and other activities.

4. Monthly community meetings - Promote community participation and publish the 
minutes to gluster.org. Notify developers/maintainers of any urgent issues 
reported by users in these meetings.

5. Incoming issues triage - Respond to new issues within 2 working days. Forward 
each issue to the respective developer/maintainer if available. Discuss issues 
in the community meeting if not addressed for two weeks.

6. Backport PRs that are merged in the devel branch and required in released 
versions to the corresponding minor releases.

7. Maintain other tools repo: gdash, gstatus and others (Actively maintained 
repos)



Infrastructure (CI jobs, mailing lists) is maintained by Red Hat. Hopefully RH 
will continue to do this at least for a year.



We hope to reach the funding target before the end of this year, so that we can 
start the maintenance activities in 2024.



Contributions from individuals and companies are welcome. We will create a 
Sponsors page on gluster.org with the logo, name and link of each company, and 
the name, link and photo of each individual (with permission).



Please reply to this email or write to us at glus...@kadalu.tech if you are 
interested in sponsoring the Gluster project maintenance. We will get back to 
you with the next steps. Thanks.



Thanks and Regards

Aravinda VK

On behalf of the Kadalu Team
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] State of the gluster project

2023-10-27 Thread Aravinda
It is very unfortunate that Gluster is not maintained. From Kadalu Technologies, 
we are trying to set up a small team dedicated to maintaining GlusterFS for the 
next three years. This will only be possible if we get funding from the 
community and companies. The details of the proposal are here: 
https://kadalu.tech/gluster/



About Kadalu Technologies: Kadalu Technologies was started in 2019 by a few 
Gluster maintainers to provide persistent storage for applications running in 
Kubernetes. The solution (https://github.com/kadalu/kadalu) is based on 
GlusterFS and doesn't use the management layer Glusterd (it is natively 
integrated using Kubernetes APIs). Kadalu Technologies also maintains many of 
the GlusterFS tools, such as gdash (https://github.com/kadalu/gdash), 
gluster-metrics-exporter (https://github.com/kadalu/gluster-metrics-exporter), 
etc.





Aravinda


https://kadalu.tech




On Fri, 27 Oct 2023 14:21:35 +0530, Diego Zuccato wrote:



Maybe a bit OT...

I'm no expert on either, but the concepts are quite similar.
Both require "extra" nodes (metadata and monitor), but those can be 
virtual machines or you can host the services on OSD machines.

We don't use snapshots, so I can't comment on that.

My experience with Ceph is limited to having it working on Proxmox. No 
experience yet with CephFS.

BeeGFS is more like a "freemium" FS: the base functionality is free, but 
if you need "enterprise" features (quota, replication...) you have to 
pay (quite a lot... probably not to compromise lucrative GPFS licensing).

We also saw more than 30 minutes for an ls on a Gluster directory 
containing about 50 files when we had many millions of files on the fs 
(with one disk per brick, which also led to many memory issues). After the 
last rebuild I created 5-disk RAID5 bricks (about 44TB each) and memory 
pressure went down drastically, but desyncs still happen even though the 
nodes are connected via IPoIB links that are really rock-solid (and in 
the worst case they could fall back to 1Gbps Ethernet connectivity).

Diego

Il 27/10/2023 10:30, Marcus Pedersén ha scritto:
> Hi Diego,
> I have had a look at BeeGFS and it seems more similar
> to Ceph than to Gluster. It requires extra management
> nodes similar to Ceph, right?
> Second of all, there are no snapshots in BeeGFS, as
> I understand it.
> I know Ceph has snapshots, so for us this seems a
> better alternative. What is your experience of Ceph?
> 
> I am sorry to hear about your problems with gluster;
> from my experience we had quite some issues with gluster
> when it was "young". I think the first version we installed
> was 3.5 or so. It was also extremely slow, an ls took forever.
> But later versions have been "kind" to us and worked quite well
> and file access has become really comfortable.
> 
> Best regards
> Marcus
> 
> On Fri, Oct 27, 2023 at 10:16:08AM +0200, Diego Zuccato wrote:
>>
>> Hi.
>>
>> I'm also migrating to BeeGFS and CephFS (depending on usage).
>>
>> What I liked most about Gluster was that files were easily recoverable
>> from bricks even in case of disaster and that it said it supported RDMA.
>> But I soon found that RDMA was being phased out, and I always find
>> entries that are not healing after a couple months of (not really heavy)
>> use, directories that can't be removed because not all files have been
>> deleted from all the bricks and files or directories that become
>> inaccessible with no apparent reason.
>> Given that I currently have 3 nodes with 30 12TB disks each in replica 3
>> arbiter 1 it's become a major showstopper: can't stop production, backup
>> everything and restart from scratch every 3-4 months. And there are no
>> tools helping, just log digging :( Even at version 9.6 seems it's not
>> really "production ready"... More like v0.9.6 IMVHO. And now it being
>> EOLed makes it way worse.
>>
>> Diego
>>
>> Il 27/10/2023 09:40, Zakhar Kirpichenko ha scritto:
>>> Hi,
>>>
>>> Red Hat Gluster Storage is EOL, Red Hat moved Gluster devs to other
>>> projects, so Gluster doesn't get much attention. From my experience, it
>>> has deteriorated since about version 9.0, and we're migrating to
>>> alternatives.
>>>
>>> /Z
>>>
>>> On Fri, 27 Oct 2023 at 10:29, Marcus Pedersén <marcus.peder...@slu.se> wrote:
>>>
>>>  Hi all,
>>>  I just have a general thought about the gluster

Re: [Gluster-devel] [Gluster-users] [Community Announcement] Announcing Kadalu Storage 1.0 Beta

2022-12-05 Thread Aravinda Vishwanathapura
Hi Gilberto,

We are happy to share that the web console is now available with the 1.0.0
version itself. Additional features will be added in upcoming releases.
Please read our Storage Console introduction blog post.

https://kadalu.tech/blog/introducing-storage-console

1.0.0 announcement blog post: https://kadalu.tech/blog/kadalu-storage-1.0.0

--
Thanks & Regards
Aravinda
https://kadalu.tech

On Mon, Aug 8, 2022 at 10:59 PM Gilberto Ferreira <
gilberto.nune...@gmail.com> wrote:

> Ok, thank you.
> ---
> Gilberto Nunes Ferreira
>
>
>
>
>
>
> On Mon, Aug 8, 2022 at 14:21, Aravinda Vishwanathapura wrote:
>
>> It is planned, but not available with the first release.
>>
>> On Mon, 8 Aug 2022 at 9:57 PM, Gilberto Ferreira <
>> gilberto.nune...@gmail.com> wrote:
>>
>>> Is there any web gui?
>>> I would like to try that.
>>>
>>> ---
>>> Gilberto Nunes Ferreira
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Aug 8, 2022 at 13:09, Aravinda Vishwanathapura wrote:
>>>
>>>> Hi All,
>>>>
>>>> Kadalu Storage is a modern storage solution based on GlusterFS. It uses
>>>> core file system layer from GlusterFS and provides a modern management
>>>> interface, ReST APIs, and many features.
>>>>
>>>> We are happy to announce the beta release of Kadalu Storage 1.0. This
>>>> release includes many features from GlusterFS along with many improvements.
>>>>
>>>> Following quick start guide will help you to try out Kadalu Storage.
>>>> Please provide your valuable feedback and feel free to open issues with
>>>> feature requests or bug reports (Github Issues <
>>>> https://github.com/kadalu/moana/issues>)
>>>>
>>>> https://kadalu.tech/storage/quick-start
>>>>
>>>> A few other additional links to understand the similarities/differences
>>>> between Kadalu Storage and Gluster.
>>>>
>>>> - Gluster vs Kadalu Storage: https://kadalu.tech/gluster-vs-kadalu/
>>>> - Try Kadalu Storage with containers:
>>>> https://kadalu.tech/blog/try-kadalu-storage/
>>>> - Project repository: https://github.com/kadalu/moana
>>>>
>>>> Notes:
>>>>
>>>> - 1.0 Beta release of Kubernetes integration is expected in a couple of
>>>> weeks.
>>>> - Packages for other distributions are work in progress and will be
>>>> available after the 1.0 release.
>>>>
>>>> Blog: https://kadalu.tech/blog/announcing-kadalu-storage-1.0-beta
>>>>
>>>> --
>>>> Thanks and Regards
>>>> Aravinda Vishwanathapura
>>>> https://kadalu.tech
>>>> 
>>>>
>>>>
>>>>
>>>> Community Meeting Calendar:
>>>>
>>>> Schedule -
>>>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>>>> Bridge: https://meet.google.com/cpu-eiue-hvk
>>>> Gluster-users mailing list
>>>> gluster-us...@gluster.org
>>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>>>
>>>
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] [Community Announcement] Kadalu Storage 1.0 released

2022-12-05 Thread Aravinda Vishwanathapura
Dear Gluster users/developers,

We are happy to share that Kadalu Storage 1.0 is released.

Kadalu Storage is an opinionated distributed storage solution based on
GlusterFS. The Kadalu Storage project was started as a lightweight project to
integrate with Kubernetes without using the management layer Glusterd. For
non-Kubernetes use cases, we realized that a modern alternative to Glusterd
is required, so we started working on the Kadalu Storage manager, a
lightweight and modern way to manage all the file system resources. It
provides ReST APIs, a CLI and also Web-UI-based storage management.

Highlights:


   - Based on GlusterFS 11 release.
   - Web console to manage Kadalu Storage - We are using an interesting
     approach where the web console is hosted by us, but no data specific to
     your cluster is shared. Kadalu Storage instance details are stored in Local
     storage of your browser and all the API calls are initiated from the
     browser.
   - Natively integrated with Kubernetes - Natively integrated with
     Kubernetes APIs without using Glusterd or Kadalu Storage manager.
     Subdirectories from Kadalu Volumes are exported as PVs of required size.
     Usage is controlled by using a Simple Quota feature.
   - ReST APIs and SDKs - Kadalu Storage provides ReST APIs for all the
     Storage management operations.
   - Supports managing Multiple Storage Pools.


Read our announcement blog post to know more about the release cycle,
features and other details.


   1. https://kadalu.tech/blog/kadalu-storage-1.0.0/
   2. https://kadalu.tech/blog/introducing-storage-console/


--
Thanks and Regards
Aravinda
https://kadalu.tech
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] [Community Announcement] Announcing Kadalu Storage 1.0 Beta

2022-08-08 Thread Aravinda Vishwanathapura
It is planned, but not available with the first release.

On Mon, 8 Aug 2022 at 9:57 PM, Gilberto Ferreira 
wrote:

> Is there any web gui?
> I would like to try that.
>
> ---
> Gilberto Nunes Ferreira
>
>
>
>
>
>
> On Mon, Aug 8, 2022 at 13:09, Aravinda Vishwanathapura wrote:
>
>> Hi All,
>>
>> Kadalu Storage is a modern storage solution based on GlusterFS. It uses
>> core file system layer from GlusterFS and provides a modern management
>> interface, ReST APIs, and many features.
>>
>> We are happy to announce the beta release of Kadalu Storage 1.0. This
>> release includes many features from GlusterFS along with many improvements.
>>
>> Following quick start guide will help you to try out Kadalu Storage.
>> Please provide your valuable feedback and feel free to open issues with
>> feature requests or bug reports (Github Issues <
>> https://github.com/kadalu/moana/issues>)
>>
>> https://kadalu.tech/storage/quick-start
>>
>> A few other additional links to understand the similarities/differences
>> between Kadalu Storage and Gluster.
>>
>> - Gluster vs Kadalu Storage: https://kadalu.tech/gluster-vs-kadalu/
>> - Try Kadalu Storage with containers:
>> https://kadalu.tech/blog/try-kadalu-storage/
>> - Project repository: https://github.com/kadalu/moana
>>
>> Notes:
>>
>> - 1.0 Beta release of Kubernetes integration is expected in a couple of
>> weeks.
>> - Packages for other distributions are work in progress and will be
>> available after the 1.0 release.
>>
>> Blog: https://kadalu.tech/blog/announcing-kadalu-storage-1.0-beta
>>
>> --
>> Thanks and Regards
>> Aravinda Vishwanathapura
>> https://kadalu.tech
>> 
>>
>>
>>
>> Community Meeting Calendar:
>>
>> Schedule -
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://meet.google.com/cpu-eiue-hvk
>> Gluster-users mailing list
>> gluster-us...@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] [Community Announcement] Announcing Kadalu Storage 1.0 Beta

2022-08-08 Thread Aravinda Vishwanathapura
Hi All,

Kadalu Storage is a modern storage solution based on GlusterFS. It uses
core file system layer from GlusterFS and provides a modern management
interface, ReST APIs, and many features.

We are happy to announce the beta release of Kadalu Storage 1.0. This
release includes many features from GlusterFS along with many improvements.

The following quick start guide will help you try out Kadalu Storage. Please
provide your valuable feedback and feel free to open issues with feature
requests or bug reports (GitHub Issues: https://github.com/kadalu/moana/issues).

https://kadalu.tech/storage/quick-start

A few other additional links to understand the similarities/differences
between Kadalu Storage and Gluster.

- Gluster vs Kadalu Storage: https://kadalu.tech/gluster-vs-kadalu/
- Try Kadalu Storage with containers:
https://kadalu.tech/blog/try-kadalu-storage/
- Project repository: https://github.com/kadalu/moana

Notes:

- 1.0 Beta release of Kubernetes integration is expected in a couple of
weeks.
- Packages for other distributions are work in progress and will be
available after the 1.0 release.

Blog: https://kadalu.tech/blog/announcing-kadalu-storage-1.0-beta

--
Thanks and Regards
Aravinda Vishwanathapura
https://kadalu.tech
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] New logging interface

2022-03-25 Thread Aravinda VK
Looks very neat. +1

On Fri, Mar 25, 2022 at 12:03 AM Xavi Hernandez 
wrote:

> Hi all,
>
> I've just posted a proposal for a new logging interface here:
> https://github.com/gluster/glusterfs/pull/3342
>
> There are many comments and the documentation is updated in the PR itself,
> so I won't duplicate all the info here. Please check it if you are
> interested in the details.
>
> As a summary, I think that the new interface is easier to use, more
> powerful, more flexible and more robust.
>
> Since it affects an interface used by every single component of Gluster I
> would like to have some more feedback before deciding whether we merge it
> or not. Feel free to comment here or in the PR itself.
>
> Thank you very much,
>
> Xavi
> ---
>
> Community Meeting Calendar:
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>

-- 
Regards
Aravinda
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] [announcement] Kadalu Storage - Opinionated GlusterFS distribution

2022-01-19 Thread Aravinda VK
Hi All,

Kadalu.io was started in 2019 by a few GlusterFS maintainers with the focus
on improving the GlusterFS ecosystem for on-premise, Kubernetes and Cloud
deployments.

Initially, we experimented with integrating the GlusterFS core filesystem
layer with Kubernetes APIs without using the management layer Glusterd.
That project is liked by many users and developers; check this GitHub repo
(https://github.com/kadalu/kadalu) to know more about it.

Today we are happy to announce our new GlusterFS based project for
non-Kubernetes use cases as well. We started a series of blog posts
explaining the new features and differences compared to Glusterd based
Cluster. Read the first part in the series here (
https://kadalu.io/blog/kadalu-storage-part-1/)

This project is in active development and many features are still being
worked on. Feel free to try it out in your dev setup and give us
feedback. Read the quick start guide here:
https://kadalu.io/docs/kadalu-storage/main/quick-start/

To know more about Kadalu Storage, please join the Kadalu community meeting
happening on Jan 20th, 2022, 4-5 pm IST (10:30-11:30 am UTC). Bridge link:
https://meet.google.com/jtp-kvsh-ggu

Thanks and Regards

Aravinda Vishwanathapura

(On behalf of Kadalu Storage Team)

https://kadalu.io
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-Maintainers] ACTION REQUESTED: Migrate your glusterfs patches from Gerrit to GitHub

2020-10-07 Thread Aravinda VK
This is awesome. Thanks to the infra team for the great effort.

—
Aravinda Vishwanathapura
https://kadalu.io

> On 07-Oct-2020, at 3:16 PM, Deepshikha Khandelwal  wrote:
> 
> Hi folks,
> 
> We have initiated the migration process today. All the patch owners are 
> requested to move their existing patches from Gerrit[1] to Github[2].
> 
> The changes we brought in with this migration:
> 
> - The 'devel' branch[3] is the new default branch on GitHub to get away from 
> master/slave language.
> 
> - This 'devel' branch is the result of the merge of the current branch and 
> the historic repository, thus requiring a new clone. It helps in getting the 
> complete idea of tracing any changes properly to its origin to understand the 
> intentions behind the code.
> 
> - We have switched the glusterfs repo on gerrit to readonly state. So you 
> will not be able to merge the patches on Gerrit from now onwards. Though we 
> are not deprecating gerrit right now, we will work with the remaining 
> users/projects to move to github as well.
> 
> - Changes in the development workflow: 
> - All the required smoke tests would be auto-triggered on submitting a PR.
> - Developers can retrigger the smoke tests using "/recheck smoke" as 
> comment.
> - The "regression" tests would be triggered by a comment "/run 
> regression" from anyone in the gluster-maintainers[4] github group. To run 
> full regression, maintainers need to comment "/run full regression"
> 
> For more information you can go through the contribution guidelines listed in 
> CONTRIBUTING.md[5]
> 
> [1] https://review.gluster.org/#/q/status:open+project:glusterfs
> [2] https://github.com/gluster/glusterfs
> [3] https://github.com/gluster/glusterfs/tree/devel
> [4] https://github.com/orgs/gluster/teams/gluster-maintainers
> [5] https://github.com/gluster/glusterfs/blob/master/CONTRIBUTING.md
> 
> Please reach out to us if you have any queries.
> 
> Thanks,
> Gluster-infra team
> ___
> maintainers mailing list
> maintain...@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers





___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] Rebalance improvement.

2020-08-03 Thread Aravinda VK
Interesting numbers. Thanks for the effort.

What is the unit of old/new numbers? seconds? 

> On 03-Aug-2020, at 12:47 PM, Susant Palai  wrote:
> 
> Centos Users can add the following repo and install the build from the master 
> branch to try out the feature. [Testing purpose only, not ready for 
> consumption in production env.]
> 
> [gluster-nightly-master]
> baseurl=http://artifacts.ci.centos.org/gluster/nightly/master/7/x86_64/
> gpgcheck=0
> keepalive=1
> enabled=1
> repo_gpgcheck = 0
> name=Gluster Nightly builds (master branch)
> 
> A summary of perf numbers from our test lab :
> 
> DirSize - 1 Million     Old   New   %diff
> Depth - 100 (Run 1)     353   74    +377%
> Depth - 100 (Run 2)     348   72    +377~%
> Depth - 50              246   122   +100%
> Depth - 3               174   114   +52%
> 
> Susant
> 
> 
> On Mon, Aug 3, 2020 at 11:16 AM Susant Palai <spa...@redhat.com> wrote:
> Hi,
> Recently, we have pushed some performance improvements for Rebalance 
> Crawl which used to consume a significant amount of time, out of the entire 
> rebalance process.
> 
> 
> The patch [1] is recently merged in upstream and may land as an experimental 
> feature in the upcoming upstream release.
> 
> The improvement currently works only for pure-distribute Volume. (which can 
> be expanded).
> 
> 
> Things to look forward to in future :
>  - Parallel Crawl in Rebalance
>  - Global Layout
> 
> Once these improvements are in place, we would be able to reduce the overall 
> rebalance time by a significant time.
> 
> Would request our community to try out the feature and give us feedback.
> 
> More information regarding the same will follow.
> 
> 
> Thanks & Regards,
> Susant Palai
> 
> 
> [1] https://review.gluster.org/#/c/glusterfs/+/24443/
> 
> 
> 
> Community Meeting Calendar:
> 
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
> 
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users

Aravinda Vishwanathapura
https://kadalu.io



___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Action required] Jobs running under centos ci

2020-07-24 Thread Aravinda VK


> On 24-Jul-2020, at 12:00 PM, Deepshikha Khandelwal wrote:
> 
> These are the jobs that either have been failing for more than a year now or 
> are not active on https://ci.centos.org/view/Gluster/.
> 
> - gluster_ansible-infra
> - gluster_anteater_gcs
> - gluster_block-clang
> - gluster_block-lcov
> - gluster_block_glusto
> - gluster_csi-driver-master
> - gluster_csi-driver-smoke
Deprecated Repo. https://github.com/gluster/gluster-csi-driver

> - gluster_glusto
> - gluster_kubernetes
Deprecated Repo https://github.com/gluster/gluster-kubernetes

> - gluster_libgfapi-python
> - gluster_run-tests-in-vagrant
> - glusterfs-regression
> 
> I'm looking for the job owners who can confirm if we can remove these jobs.
> 
> On Thu, Jul 23, 2020 at 4:28 PM Yaniv Kaul <yk...@redhat.com> wrote:
> 
> 
> On Thu, Jul 23, 2020 at 1:04 PM Deepshikha Khandelwal <dkhan...@redhat.com> wrote:
> 
> FYI, we have the list of jobs running under
> https://ci.centos.org/view/Gluster/
> 
> 1. Delete those that did not pass nor fail in the last year. No one is using 
> them.
> 2. I think you can also delete those that did not pass in the last year. 
> Obviously no one cares about them. Start with deactivating them, perhaps?
> Y.
> 
> 
> Please take a look and start to clean up the ones which are not required.
> -- Forwarded message -
> From: Vipul Siddharth <vi...@redhat.com>
> Date: Thu, Jun 25, 2020 at 1:50 PM
> Subject: [Ci-users] Call for migration to new openshift cluster
> To: ci-users <ci-us...@centos.org>
> 
> 
> Hi all,
> 
> We are done and ready with the new Openshift 4 CI cluster. Now we want
> to start the second phase of it i.e migrating projects to the new
> cluster.
> 
> If you are currently using apps.ci.centos.org (OCP 3.6), please
> (OCP 3.6), please
> contact us off-list so that we can create a namespace for you in the
> new OCP 4.4 cluster, so you can start migrating your workloads.
> 
> It's not a hard deadline but we are hoping to retire the older
> Openshift 3.6 cluster in the upcoming quarter (3 months) and legacy
> (ci.centos.org) environment in a couple of months after that.
> 
> If you are using ci.centos.org, it's a very good time to start working
> on updating your workload
> 
> If you have any questions, please reach out to us
> Thank You and stay safe
> 
> -- 
> Vipul Siddharth
> Fedora | CentOS CI Infrastructure Team
> 
> ___
> CI-users mailing list
> ci-us...@centos.org
> https://lists.centos.org/mailman/listinfo/ci-users
> 
> ___
> 
> Community Meeting Calendar:
> 
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
> 
> 
> 
> 
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
> 
> ___
> 
> Community Meeting Calendar:
> 
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
> 
> 
> 
> 
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
> 

Aravinda Vishwanathapura
https://kadalu.io



___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Removing problematic language in geo-replication

2020-07-22 Thread Aravinda VK
+1

> On 22-Jul-2020, at 2:34 PM, Ravishankar N  wrote:
> 
> Hi,
> 
> The gluster code base has some words and terminology (blacklist, whitelist, 
> master, slave etc.) that can be considered hurtful/offensive to people in a 
> global open source setting. Some of words can be fixed trivially but the 
> Geo-replication code seems to be something that needs extensive rework. More 
> so because we have these words being used in the CLI itself. Two questions 
> that I had were:
> 
> 1. Can I replace master:slave with primary:secondary everywhere in the code 
> and the CLI? Are there any suggestions for more appropriate terminology?

Primary -> Secondary looks good.

> 
> 2. Is it okay to target the changes to a major release (release-9) and *not* 
> provide backward compatibility for the CLI?

Functionality is not affected and the CLI commands remain compatible, since all 
arguments are positional. Changes are needed in:

- Geo-rep status xml output
- Documentation
- CLI help
- Variables and other references in Code.
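
For example, a command like this keeps working unchanged whichever names we 
settle on, since the two sides are identified purely by position (volume and 
host names below are placeholders):

  gluster volume geo-replication primaryvol secondaryhost::secondaryvol status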

> 
> Thanks,
> 
> Ravi
> 
> 
> ___
> 
> Community Meeting Calendar:
> 
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
> 
> 
> 
> 
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
> 

Aravinda Vishwanathapura
https://kadalu.io



___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-Maintainers] Updating the repository's actual 'ACTIVE'ness status

2020-03-25 Thread Aravinda VK
I archived the following projects

- gluster/restapi
- gluster/glusterd2
- gluster/gcs
- gluster/gluster-csi-driver
- gluster/gluster-block-restapi
- gluster/python-gluster-mgmt-client

Please let me know if anyone is interested in unarchiving and maintaining any 
project from the above list.
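
(For reference, archiving can be done from the repository Settings page, or it 
can be scripted via the GitHub API, roughly like this - the repo name below is 
just one example from the list, and a token with admin rights on the repo is 
needed:)

  curl -X PATCH -H "Authorization: token $GITHUB_TOKEN" \
       -d '{"archived": true}' https://api.github.com/repos/gluster/glusterd2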

—
regards
Aravinda Vishwanathapura
https://kadalu.io

> On 25-Mar-2020, at 5:49 PM, Amar Tumballi  wrote:
> 
> Hi all,
> 
> We have 101 repositories in gluster org in github. Only handful of them are 
> being actively managed, and progressing.
> 
> After seeing https://github.com/gluster/gluster-kubernetes/issues/644, I feel 
> we should at least keep the status of the project up-to-date in the repository, 
> so that users can move on to other repos if a project is not maintained. That 
> saves time for them, and they wouldn't form an opinion on the gluster project. 
> But if they spend time setting it up, and later find that it's not working and 
> is not maintained, they would feel bad about the overall project itself.
> 
> So my request to all repository maintainers is to mark such repositories as 
> 'Archived' and update the README (or description) to reflect the same.
> 
> In any case, in the first week of April, we should actively mark repos inactive 
> if no activity is found in the last 15+ months. For other repos, maintainers 
> can take appropriate action.
> 
> Regards,
> Amar
> 
> ___
> maintainers mailing list
> maintain...@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers





___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] gluster_gd2-nightly-rpms - do we need to continue to build for this?

2020-02-17 Thread Aravinda VK
We can stop this job.

regards
Aravinda

> On 18-Feb-2020, at 5:53 AM, Sankarshan Mukhopadhyay 
>  wrote:
> 
> There is no practical work being done on gd2, do we need to continue
> to have a build job?
> 
> On Tue, 18 Feb 2020 at 05:46,  wrote:
>> 
>> See <https://ci.centos.org/job/gluster_gd2-nightly-rpms/643/display/redirect>
>> 
> 
> 
> 
> -- 
> sankarshan mukhopadhyay
> <https://about.me/sankarshan.mukhopadhyay>
> ___
> 
> Community Meeting Calendar:
> 
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/441850968
> 
> 
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/441850968
> 
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
> 

___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968


NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-Maintainers] Proposal to change gNFSstatus

2019-11-21 Thread Aravinda Vishwanathapura Krishna Murthy
>> from gNFS. Also note that, even though the packages are not built, none of
>> the regression tests using gNFS are stopped with latest master, so it is
>> working same from at least last 2 years.
>>
>> I request the package maintainers to please add '--with gnfs' (or
>> --enable-gnfs) back to their release script through this email, so those
>> users wanting to use gNFS happily can continue to use it. Also points to
>> users/admins is that, the status is 'Odd Fixes', so don't expect any
>> 'enhancements' on the features provided by gNFS.
>>
>> Happy to hear feedback, if any.
>>
>> Regards,
>> Amar
>>
>> ___
>> maintainers mailing list
>> maintain...@gluster.org
>> https://lists.gluster.org/mailman/listinfo/maintainers
>>
>
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/441850968
>
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/441850968
>
> Gluster-devel mailing 
> listGluster-devel@gluster.orghttps://lists.gluster.org/mailman/listinfo/gluster-devel
>
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/441850968
>
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/441850968
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>

-- 
regards
Aravinda VK
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968


NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely

2019-10-14 Thread Aravinda Vishwanathapura Krishna Murthy
CentOS CI was voting on the Glusterd2 repo's pull requests.

https://github.com/gluster/centosci/blob/master/jobs/glusterd2.yml

On Mon, Oct 14, 2019 at 8:31 PM Amar Tumballi  wrote:

>
>
> On Mon, 14 Oct, 2019, 5:37 PM Niels de Vos,  wrote:
>
>> On Mon, Oct 14, 2019 at 03:52:30PM +0530, Amar Tumballi wrote:
>> > Any thoughts on this?
>> >
>> > I tried a basic .travis.yml for the unified glusterfs repo I am
>> > maintaining, and it is good enough for getting most of the tests.
>> > Considering we are very close to glusterfs-7.0 release, it is good to
>> time
>> > this after 7.0 release.
>>
>> Is there a reason to move to Travis? GitHub does offer integration with
>> Jenkins, so we should be able to keep using our existing CI, I think?
>>
>
> Yes, that's true. I tried Travis because I don't have complete idea of
> Jenkins infra and trying Travis needed just basic permissions from me on
> repo (it was tried on my personal repo)
>
> Happy to get some help here.
>
> Regards,
> Amar
>
>
>> Niels
>>
>>
>> >
>> > -Amar
>> >
>> > On Thu, Sep 5, 2019 at 5:13 PM Amar Tumballi  wrote:
>> >
>> > > Going through the thread, I see in general positive responses for the
>> > > same, with few points on review system, and not loosing information
>> when
>> > > merging the patches.
>> > >
>> > > While we are working on that, we need to see and understand how our
>> CI/CD
>> > > looks like with github migration. We surely need suggestion and
>> volunteers
>> > > here to get this going.
>> > >
>> > > Regards,
>> > > Amar
>> > >
>> > >
>> > > On Wed, Aug 28, 2019 at 12:38 PM Niels de Vos 
>> wrote:
>> > >
>> > >> On Tue, Aug 27, 2019 at 06:57:14AM +0530, Amar Tumballi Suryanarayan
>> > >> wrote:
>> > >> > On Tue, Aug 27, 2019 at 12:10 AM Niels de Vos 
>> > >> wrote:
>> > >> >
>> > >> > > On Mon, Aug 26, 2019 at 08:36:30PM +0530, Aravinda
>> Vishwanathapura
>> > >> Krishna
>> > >> > > Murthy wrote:
>> > >> > > > On Mon, Aug 26, 2019 at 7:49 PM Joe Julian <
>> j...@julianfamily.org>
>> > >> wrote:
>> > >> > > >
>> > >> > > > > > Comparing the changes between revisions is something
>> > >> > > > > that GitHub does not support...
>> > >> > > > >
>> > >> > > > > It does support that,
>> > >> > > > > actually.___
>> > >> > > > >
>> > >> > > >
>> > >> > > > Yes, it does support. We need to use Squash merge after all
>> review
>> > >> is
>> > >> > > done.
>> > >> > >
>> > >> > > Squash merge would also combine multiple commits that are
>> intended to
>> > >> > > stay separate. This is really bad :-(
>> > >> > >
>> > >> > >
>> > >> > We should treat 1 patch in gerrit as 1 PR in github, then squash
>> merge
>> > >> > works same as how reviews in gerrit are done.  Or we can come up
>> with
>> > >> > label, upon which we can actually do 'rebase and merge' option,
>> which
>> > >> can
>> > >> > preserve the commits as is.
>> > >>
>> > >> Something like that would be good. For many things, including commit
>> > >> message update squashing patches is just loosing details. We dont do
>> > >> that with Gerrit now, and we should not do that when using GitHub
>> PRs.
>> > >> Proper documenting changes is still very important to me, the
>> details of
>> > >> patches should be explained in commit messages. This only works well
>> > >> when developers 'force push' to the branch holding the PR.
>> > >>
>> > >> Niels
>> > >> ___
>> > >>
>> > >> Community Meeting Calendar:
>> > >>
>> > >> APAC Schedule -
>> > >> Every 2nd and 4th Tuesday at 11:30 AM IST
>> > >> Bridge: https://bluejeans.com/836554017
>> > >>
>> > >> NA/EMEA Schedule -
>> > >> Every 1st and 3rd Tuesday at 01:00 PM EDT
>> > >> Bridge: https://bluejeans.com/486278655
>> > >>
>> > >> Gluster-devel mailing list
>> > >> Gluster-devel@gluster.org
>> > >> https://lists.gluster.org/mailman/listinfo/gluster-devel
>> > >>
>> > >>
>>
>> > ___
>> >
>> > Community Meeting Calendar:
>> >
>> > APAC Schedule -
>> > Every 2nd and 4th Tuesday at 11:30 AM IST
>> > Bridge: https://bluejeans.com/118564314
>> >
>> > NA/EMEA Schedule -
>> > Every 1st and 3rd Tuesday at 01:00 PM EDT
>> > Bridge: https://bluejeans.com/118564314
>> >
>> > Gluster-devel mailing list
>> > Gluster-devel@gluster.org
>> > https://lists.gluster.org/mailman/listinfo/gluster-devel
>> >
>>
>> ___
> maintainers mailing list
> maintain...@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
>


-- 
regards
Aravinda VK
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/118564314

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/118564314

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Query regards to expose client-pid to fuse process

2019-10-13 Thread Aravinda Vishwanathapura Krishna Murthy
Geo-replication uses this option to identify itself as an internal client

On Sun, Oct 13, 2019 at 11:41 AM Amar Tumballi  wrote:

>
>
> On Fri, Oct 11, 2019 at 5:05 PM Mohit Agrawal  wrote:
>
>> Hi,
>>
>> Yes, you are right it is not a default value.
>>
>> We can assign the client_pid only while volume has mounted after through
>> a glusterfs binary directly like
>> /usr/local/sbin/glusterfs --process-name fuse
>> --volfile-server=192.168.1.3 --client-pid=-3 --volfile-id=/test /mnt1
>>
>>
> I agree that this is in general risky, and good to fix. But as the check
> for this happens after basic auth check in RPC (ip/user based), it should
> be OK.  Good to open a github issue and have some possible design options
> so we can have more discussions on this.
>
> -Amar
>
>
>
>> Regards,
>> Mohit Agrawal
>>
>>
>> On Fri, Oct 11, 2019 at 4:52 PM Nithya Balachandran 
>> wrote:
>>
>>>
>>>
>>> On Fri, 11 Oct 2019 at 14:56, Mohit Agrawal  wrote:
>>>
>>>> Hi,
>>>>
>>>>   I have a query specific to authenticate a client based on the PID
>>>> (client-pid).
>>>>   It can break the bricks xlator functionality, Usually, on the brick
>>>> side we take a decision about the
>>>>source of fop request based on PID.If PID value is -ve xlator
>>>> considers the request has come from an internal
>>>>   client otherwise it has come from an external client.
>>>>
>>>>   If a user has mounted the volume through fuse after provide
>>>> --client-pid to command line argument similar to internal client PID
>>>>   in that case brick_xlator consider external fop request also as an
>>>> internal and it will break functionality.
>>>>
>>>>   We are checking pid in (lease, posix-lock, worm, trash) xlator to
>>>> know about the source of the fops.
>>>>   Even there are other brick xlators also we are checking specific PID
>>>> value for all internal
>>>>   clients that can be break if the external client has the same pid.
>>>>
>>>>   My query is why we need to expose client-pid as an argument to the
>>>> fuse process?
>>>>
>>>
>>>
>>> I don't think this is a default value to the fuse mount. One place where
>>> this helps us is with the script based file migration and rebalance - we
>>> can provide a negative pid to  the special client mount to ensure these
>>> fops are also treated as internal fops.
>>>
>>> In the meantime I do not see the harm in having this option available as
>>> it allows a specific purpose. Are there any other client processes that use
>>> this?
>>>
>>>I think we need to resolve it. Please share your view on the same.
>>>>
>>>> Thanks,
>>>> Mohit Agrawal
>>>> ___
>>>>
>>>> Community Meeting Calendar:
>>>>
>>>> APAC Schedule -
>>>> Every 2nd and 4th Tuesday at 11:30 AM IST
>>>> Bridge: https://bluejeans.com/118564314
>>>>
>>>> NA/EMEA Schedule -
>>>> Every 1st and 3rd Tuesday at 01:00 PM EDT
>>>> Bridge: https://bluejeans.com/118564314
>>>>
>>>> Gluster-devel mailing list
>>>> Gluster-devel@gluster.org
>>>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>>>
>>>> ___
>>
>> Community Meeting Calendar:
>>
>> APAC Schedule -
>> Every 2nd and 4th Tuesday at 11:30 AM IST
>> Bridge: https://bluejeans.com/118564314
>>
>> NA/EMEA Schedule -
>> Every 1st and 3rd Tuesday at 01:00 PM EDT
>> Bridge: https://bluejeans.com/118564314
>>
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/118564314
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/118564314
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>

-- 
regards
Aravinda VK
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/118564314

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/118564314

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Geo-rep start after snapshot restore makes the geo-rep faulty

2019-09-23 Thread Aravinda Vishwanathapura Krishna Murthy
Hi Shwetha,

Good to see this bug is picked up.

You are right; the fix should be to remove the path from the HTIME file.
An RFE is already available here: https://github.com/gluster/glusterfs/issues/76

There is one more RFE about optimizing changelog storage. Currently, all
changelogs are stored in a single directory, and this needs to be changed.
It affects the above RFE: instead of storing the complete changelog path in
the HTIME file, store it with the prefix used in this RFE.

https://github.com/gluster/glusterfs/issues/154

These two RFEs need to be worked on together.

One major issue with the format change is handling upgrades. A workaround
script would be needed to upgrade the existing HTIME file and the changelog
files to the new directory structure.
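
Until that lands, the workaround you describe below is essentially an in-place
rewrite of the brick path recorded in the HTIME index, something like the sketch
below (rough sketch only: the paths are placeholders, the geo-rep session should
be stopped and the HTIME file backed up first, and I am assuming the index lives
under <brick>/.glusterfs/changelogs/htime/):

  OLD_BRICK="/bricks/b1/data"                       # brick path before the snapshot restore (placeholder)
  NEW_BRICK="/run/gluster/snaps/snap1/b1/data"      # brick path after the restore (placeholder)
  sed -i "s|$OLD_BRICK|$NEW_BRICK|g" "$NEW_BRICK"/.glusterfs/changelogs/htime/HTIME.*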

Let me know if you have any questions.


On Mon, Sep 23, 2019 at 4:14 PM Shwetha Acharya  wrote:

> Hi All,
> I am planning to work on this
> <https://bugzilla.redhat.com/show_bug.cgi?id=1238699> bugzilla issue.
> Here, when we restore the snapshots, and start the geo-replication
> session, we see that the geo-replication goes faulty. It is mainly because,
> the brick path of original session and the session after snapshot restore
> will be different. There is a proposed work around for this issue,
> according to which we replace the old brick path with new brick path inside
> the index file HTIME.xx, which basically solves the issue.
>
> I have some doubts regarding the same.
> We are going with the work around from a long time. Are there any
> limitations stopping us from implementing solution for this, which I am
> currently unaware of?
> Is it important to have paths inside index file? Can we eliminate the
> paths inside them?
> Is there any concerns from snapshot side?
> Are there any other general concerns regarding the same?
>
> Regards,
> Shwetha
>


-- 
regards
Aravinda VK
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/118564314

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/118564314

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely

2019-08-26 Thread Aravinda Vishwanathapura Krishna Murthy
On Mon, Aug 26, 2019 at 8:44 PM Joe Julian  wrote:

> You can also see diffs between force pushes now.
>

Nice.


> On August 26, 2019 8:06:30 AM PDT, Aravinda Vishwanathapura Krishna Murthy
>  wrote:
>>
>>
>>
>> On Mon, Aug 26, 2019 at 7:49 PM Joe Julian  wrote:
>>
>>> > Comparing the changes between revisions is something
>>> that GitHub does not support...
>>>
>>> It does support that,
>>> actually.___
>>>
>>
>> Yes, it does support. We need to use Squash merge after all review is
>> done.
>> A sample pull request is here to see reviews with multiple revisions.
>>
>> https://github.com/aravindavk/reviewdemo/pull/1
>>
>>
>>
>>
>>> maintainers mailing list
>>> maintain...@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/maintainers
>>>
>>
>>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>


-- 
regards
Aravinda VK
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely

2019-08-26 Thread Aravinda Vishwanathapura Krishna Murthy
On Mon, Aug 26, 2019 at 7:49 PM Joe Julian  wrote:

> > Comparing the changes between revisions is something
> that GitHub does not support...
>
> It does support that,
> actually.___
>

Yes, it does support that. We need to use squash merge after all review is done.
A sample pull request is here to see reviews with multiple revisions:

https://github.com/aravindavk/reviewdemo/pull/1




> maintainers mailing list
> maintain...@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
>


-- 
regards
Aravinda VK
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] Project Update: Containers-based distributed tests runner

2019-06-14 Thread Aravinda
**gluster-tester** is a framework to run existing "*.t" test files in
parallel using containers.

Install and usage instructions are available in the following
repository.

https://github.com/aravindavk/gluster-tester

## Completed:
- Create a base container image with all the dependencies installed.
- Create a tester container image with requested refspec(or latest
master) compiled and installed.
- SSH setup in containers required to test Geo-replication
- Take `--num-parallel` option and spawn the containers with ready
infra for running tests
- Split the tests based on the number of parallel jobs specified.
- Execute the tests in parallel in each container and watch for the
status.
- Archive logs only for failed tests (optionally keep logs for successful
tests using `--preserve-success-logs`)
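
To illustrate the splitting step mentioned above, here is a minimal sketch
(my own illustration in Python, not the gluster-tester code; the glob
pattern and the round-robin policy are assumptions):

```python
# Minimal illustration of splitting ".t" test files across N parallel jobs.
# Not the gluster-tester implementation; round-robin assignment keeps the
# per-job counts roughly balanced.
import glob


def split_tests(num_parallel, pattern="tests/**/*.t"):
    test_files = sorted(glob.glob(pattern, recursive=True))
    jobs = [[] for _ in range(num_parallel)]
    for idx, test_file in enumerate(test_files):
        jobs[idx % num_parallel].append(test_file)
    return jobs


if __name__ == "__main__":
    for job_id, tests in enumerate(split_tests(num_parallel=3), start=1):
        print(f"job-{job_id}: {len(tests)} tests")
```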

## Pending:
- NFS related tests are not running since the required changes to the
container image build are pending. (To see the failures, run
gluster-tester with the `--include-nfs-tests` option.)
- Filter support while running the tests (to enable/disable tests at
run time)
- Some loop-based tests are failing (I think due to shared `/dev/loop*`)
- A few tests are timing out (due to this the overall test duration is
longer)
- Once tests are started, showing real-time status is pending (for now
the status is checked in `/regression-.log`, for example
`/var/log/gluster-tester/regression-3.log`)
- If the base image is not built before running tests, it gives an
error. Need to re-trigger the base container image step if not built.
(Issue: https://github.com/aravindavk/gluster-tester/issues/11)
- Creating an archive of core files
- Creating a single archive from all jobs/containers
- Testing `--ignore-from` feature to ignore the tests
- Improvements to the status output
- Cleanup (stop the test containers and delete them)

I opened an issue to collect the details of failed tests. I will
continue to update that issue as and when I capture failed tests in my
setup.
https://github.com/aravindavk/gluster-tester/issues/9

Feel free to suggest any feature improvements. Contributions are
welcome.
https://github.com/aravindavk/gluster-tester/issues

--
Regards
Aravinda
http://aravindavk.in


___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Following up on the "Github teams/repo cleanup"

2019-03-28 Thread Aravinda
On Fri, 2019-03-29 at 00:08 +0530, Sankarshan Mukhopadhyay wrote:
> On Thu, Mar 28, 2019 at 11:34 PM John Strunk 
> wrote:
> > Thanks for bringing this to the list.
> > 
> > I think this is a good set of guidelines, and we should publicly
> > post and enforce them once agreement is reached.
> > The integrity of the gluster github org is important for the future
> > of the project.
> > 
> 
> I agree. And so, I am looking forward to additional
> individuals/maintainers agreeing to this so that we can codify it
> under the Gluster.org Github org too.

We can create a new GitHub organization similar to facebookarchive
(https://github.com/facebookarchive) and move all unmaintained
repositories to that organization.

I strongly feel a separate GitHub organization is required to host new
projects. Once a project becomes mature we can promote it to the Gluster
organization. All new projects should be created under this organization
instead of under the main Gluster organization.

Inspirations:
- https://github.com/rust-lang-nursery
- https://github.com/facebookincubator
- https://github.com/cloudfoundry-incubator

> 
> > On Wed, Mar 27, 2019 at 10:21 PM Sankarshan Mukhopadhyay <
> > sankarshan.mukhopadh...@gmail.com> wrote:
> > > The one at <
> > > https://lists.gluster.org/pipermail/gluster-infra/2018-June/004589.html
> > > >
> > > I am not sure if the proposal from Michael was agreed to
> > > separately
> > > and it was done. Also, do we want to do this periodically?
> > > 
> > > Feedback is appreciated.
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
-- 
regards
Aravinda

___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Path based Geo-replication

2019-02-08 Thread Aravinda
Hi All,

I prepared a design document for the Path based Geo-replication feature.
Similar to the existing GFID based solution, it uses Changelogs to detect
the changes, but converts GFIDs to paths using the GFID-to-path feature
before syncing.
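
As a rough sketch of the conversion step described above (my own
illustration only; `read_changelog_gfids` and `gfid_to_path` are
placeholders for the real Changelog parsing and GFID-to-path lookup, which
this mail does not define):

```python
# Sketch of the sync path described above: detect changes via Changelogs
# (recorded as GFIDs) and convert each GFID to a path before syncing.
# Both helpers are placeholders, not glusterfs APIs.
def read_changelog_gfids(changelog_file):
    """Placeholder: yield the GFIDs recorded in one changelog file."""
    with open(changelog_file) as f:
        for line in f:
            line = line.strip()
            if line:
                yield line


def gfid_to_path(gfid):
    """Placeholder for the GFID-to-path lookup; the real feature differs."""
    return f"<path-of-{gfid}>"


def changed_paths(changelog_file):
    # Convert to paths before handing the entries to the sync process,
    # instead of syncing by GFID as the existing solution does.
    seen = set()
    for gfid in read_changelog_gfids(changelog_file):
        path = gfid_to_path(gfid)
        if path not in seen:
            seen.add(path)
            yield path
```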

Feel free to add comments, suggestions, or any issues or challenges I
have not considered.

https://docs.google.com/document/d/1gW5ETQxNiy9tt4uV1ohRH1g5AMmWLtbQYD3QPs_v8Ec/edit?usp=sharing

-- 
regards
Aravinda

___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Glusterd2 project updates (github.com/gluster/glusterd2)

2018-12-11 Thread Aravinda
Completed
===

- Memory Leak issue fixed - Memory leak was identified when
  client(mount, brick process, self heal daemon etc) disconnects.
  https://github.com/gluster/glusterd2/pull/1371

- Intelligent Volume expand by brick resize. If the volume is auto
  provisioned then it can be expanded by doing `lvresize` of each
  brick if space is available on the device.
  https://github.com/gluster/glusterd2/pull/1281

- Intelligent Replace brick API - Glusterd2 will automatically choose
  the target brick matching the configuration of source brick which
  needs to be replaced.
  https://github.com/gluster/glusterd2/pull/1269

- Resilient Transaction Engine
  https://github.com/gluster/glusterd2/pull/1268

- Brick multiplexing support added
  https://github.com/gluster/glusterd2/pull/1301

- Device Edit support added. A device can be disabled when required
  using the device edit API. If a device is disabled then it will not
  be considered for provisioning while creating new volumes.
  https://github.com/gluster/glusterd2/pull/1118

- Glusterd2 logs are now in UTC.
  https://github.com/gluster/glusterd2/pull/1381

- New group profile for db workload is added(`profile.db`)
  https://github.com/gluster/glusterd2/pull/1370

- Handle local brick mount failures
  https://github.com/gluster/glusterd2/pull/1337

- Client Volfiles were stored in etcd to avoid generating them on all nodes
  when a Volume is created or volume options are updated. Now this is
  changed to generate the client volfile on demand without storing it in
  etcd.
  https://github.com/gluster/glusterd2/pull/1363

- Device information was stored as part of the Peer information itself;
  due to this, multiple marshal and unmarshal steps were required while
  managing device information. Now the device details are stored
  separately in their own namespace.
  https://github.com/gluster/glusterd2/pull/1354
  
- Default Options for new Volumes - Default Options groups for each
  volume types are introduced, which will be applied when a new Volume
  gets created.
  https://github.com/gluster/glusterd2/pull/1376

- Fixed the CLI issue while displaying Volume Size
  https://github.com/gluster/glusterd2/pull/1340
  
- Snapshot feature documentation
  https://github.com/gluster/glusterd2/pull/1106
  
- Tracing feature documentation
  https://github.com/gluster/glusterd2/pull/1149

- Added support for Split brain resolution commands and improved self
  heal e2e tests
  https://github.com/gluster/glusterd2/pull/1173
  https://github.com/gluster/glusterd2/pull/1361

- LVM and Filesystem utilities are re-factored as library
  `pkg/lvmutils` and `pkg/fsutils`
  https://github.com/gluster/glusterd2/pull/1333

- Normalized the size units to bytes everywhere
  https://github.com/gluster/glusterd2/pull/1326
  
- Volume Profile support added
  https://github.com/gluster/glusterd2/pull/962
  
- Tracing support added for Volume Start/Stop and Snapshot
  Create/Delete
  https://github.com/gluster/glusterd2/pull/1255

- Geo-rep feature documentation and e2e tests
  https://github.com/gluster/glusterd2/pull/1044
  https://github.com/gluster/glusterd2/pull/1055
  https://github.com/gluster/glusterd2/pull/1064

In Progress


- Support for Gluster Block Volumes
  https://github.com/gluster/glusterd2/pull/1357
  
- Remove device API
  https://github.com/gluster/glusterd2/pull/1120
  
- Brick multiplexing Configuration related fixes
  https://github.com/gluster/glusterd2/pull/1373
  https://github.com/gluster/glusterd2/pull/1372
  
- Golang profiling for glusterd2 binary
  https://github.com/gluster/glusterd2/pull/1345

  
-- 
regards
Aravinda

___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Gluster Monitoring project updates (github.com/gluster/gluster-prometheus)

2018-10-27 Thread Aravinda
Completed
--

- Configuration format is now changed to `toml` format for ease of
  use. Usage is updated in the README.
  PR: https://github.com/gluster/gluster-prometheus/pull/16

- Enabled Travis tests to validate incoming PRs and added build status
  in README
  PRs: https://github.com/gluster/gluster-prometheus/pull/51,
   https://github.com/gluster/gluster-prometheus/pull/52 and
   https://github.com/gluster/gluster-prometheus/pull/60

- Improved Glusterd1 and Glusterd2 support by introducing Gluster
  interface.
  PR: https://github.com/gluster/gluster-prometheus/pull/47

- Fixed build issues related to dependency library version
  PR: https://github.com/gluster/gluster-prometheus/pull/59
  
- Brick related metrics are now exported only if Volume state is
  Started.
  PR: https://github.com/gluster/gluster-prometheus/pull/58

- Added support for logging. Now each metrics collectors can log any
  failures during the collection.
  PR: https://github.com/gluster/gluster-prometheus/pull/50

- New Metrics: Added `gluster_volume_total_count`,
  `gluster_volume_created_count`, `gluster_volume_started_count` and
  `gluster_volume_brick_count`
  PR: https://github.com/gluster/gluster-prometheus/pull/22

- New utility functions `Peers` and `IsLeader` added with support to
  both glusterd and glusterd2.
  PR: https://github.com/gluster/gluster-prometheus/pull/38


In Progress
---

- Brick disk io related metrics
  PR: https://github.com/gluster/gluster-prometheus/pull/15

- RPM spec file
  PR: https://github.com/gluster/gluster-prometheus/pull/26

- Self Heal related metrics
  Issue: https://github.com/gluster/gluster-prometheus/issues/53

- Prometheus rule to get Volume utilization based on sub volume
  utilization exported from all exporters.
  Issue: https://github.com/gluster/gluster-prometheus/issues/54
  
- Documentation for exported metrics
  PR: https://github.com/gluster/gluster-prometheus/pull/61
  
- Kubernetes integration
  PR: https://github.com/gluster/gluster-prometheus/pull/48


-- 
regards
Aravinda

___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Glusterd2 project updates (github.com/gluster/glusterd2)

2018-10-27 Thread Aravinda
Completed
--

- Environment variable(`GD2_ENDPOINTS`) support to `glustercli` to
  make it work without any configuration in GCS setup.
  (https://github.com/gluster/glusterd2/pull/1292)

- Fixed issues related to nightly rpm generation scripts
  (https://github.com/gluster/glusterd2/pull/1298)

- Error messages cleaned up according to agreed standard.
  (https://github.com/gluster/glusterd2/pull/1297)

- Fixed typo in `glustercli snapshot create` help message.
  (https://github.com/gluster/glusterd2/pull/1296)

- Brick type validation added. In Glusterd1 the last brick in an arbiter
  sub volume is always treated as the arbiter brick. In Glusterd2,
  flexibility is added to specify the arbiter brick anywhere in the list;
  brick type "arbiter" needs to be specified if a brick is an arbiter
  brick. Due to a bug, an Arbiter Volume was treated as a replicate volume
  if the input didn't specify brick type "arbiter" or specified a wrong
  brick type.
  (https://github.com/gluster/glusterd2/pull/1271)

- Improved getting/setting cluster options
  (https://github.com/gluster/glusterd2/pull/1293)

- Moved snapshot error messages to error package
  (https://github.com/gluster/glusterd2/pull/1294)

- Add qsh to the list of required claims(REST API Authentication)
  (https://github.com/gluster/glusterd2/pull/1290)

- Snapshot bricks are now automatically mounted on glusterd2
start/restart.
  (https://github.com/gluster/glusterd2/pull/1053)


In progress


- Replace brick API for auto provisioned volumes
  (https://github.com/gluster/glusterd2/pull/1269)

- Expanding auto provisioned Volumes by resizing the LVs
  (https://github.com/gluster/glusterd2/pull/1281)

- Brick multiplexing
  (https://github.com/gluster/glusterd2/pull/1301)

- Tracing integrations for option set, volume start/stop and snapshot
  create/delete APIs
  (https://github.com/gluster/glusterd2/pull/1300 and
  https://github.com/gluster/glusterd2/pull/1255)

- Transaction framework enhancements
  (https://github.com/gluster/glusterd2/pull/1268)

- Support for Volume profile
  (https://github.com/gluster/glusterd2/pull/962)

- Plugin architecture for brick Provisioners
  (https://github.com/gluster/glusterd2/pull/1256)

- Template support added for Volfiles generation
  (https://github.com/gluster/glusterd2/pull/1229)

- Arbiter brick size calculation for auto provisioned volumes
  (https://github.com/gluster/glusterd2/pull/1267)

- Add group profile for transactional DB workload
  (https://github.com/gluster/glusterd2/pull/1282)


-- 
regards
Aravinda

___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Glusterd2 project updates (github.com/gluster/glusterd2)

2018-10-14 Thread Aravinda
## Completed

- Fixed issue related to wrongly setting brick type on snapshot create.
  (https://github.com/gluster/glusterd2/pull/1279)

- With glusterfs patch https://review.gluster.org/21249, bricks can
  choose their own port. Glusterd2 is enhanced to accept the port used by
  a brick and update its store.
  (https://github.com/gluster/glusterd2/pull/1257)

- Glusterd2 is enhanced to automatically notify firewalld over dbus
  once a brick chooses a port and signs in with glusterd2.
  (https://github.com/gluster/glusterd2/pull/1264)

- Code refactor: Refactored cluster options package for ease of use
  (https://github.com/gluster/glusterd2/pull/1270)

- Fixed GeorepStatusList response
  (https://github.com/gluster/glusterd2/pull/1274)

- Refactor and improve pmap package
  (https://github.com/gluster/glusterd2/pull/1247)

- Snapshot reserve factor specified during Volume create was not
  stored anywhere. Due to this, Volume expand or brick replace will
  not be able to use the same snapshot reserve factor. Glusterd2 is
  now enhanced to store the details about Snapshot reserve factor
  specified during Volume create.
  (https://github.com/gluster/glusterd2/pull/1266)

- Add Version() to REST client
  (https://github.com/gluster/glusterd2/pull/1261)

- Fix issues reported by Go Report Card
  (https://github.com/gluster/glusterd2/pull/1260)

- Volume options are categorized into Basic, Advanced and
  Experimental. The flag names earlier used to set the options were
  confusing, since users think these flags are for creating
  Basic/Advanced/Experimental volumes. These flags are now refactored
  to avoid this confusion.
  (https://github.com/gluster/glusterd2/pull/1259)
  
- Glusterd2 was not cleaning up the stale socket file during start if
  the previous run was terminated abruptly and the socket file was not
  deleted. Glusterd2 now uses a lock file to check whether another
  glusterd2 is running; if glusterd2 can acquire the lock then it unlinks
  the stale socket file and reuses it.
  (https://github.com/gluster/glusterd2/pull/1258)

- Add Volume capacity information in Volume info - Since Volume
  capacity is static information which will not change after volume
  create unless the volume is expanded, the enhancement is to save the
  capacity information in volinfo when an auto-provisioned volume is
  created. (https://github.com/gluster/glusterd2/pull/1193)

- Newly added bricks were not placed properly when a distributed
  replicate volume was expanded. Glusterd2 now handles the Volume
  expansion for all supported Volume types.
  (https://github.com/gluster/glusterd2/pull/1218)


## Ongoing

- An alternative Transaction framework design is proposed to support
  better rollback mechanism and to sync the peer's state if a peer was
  offline during the transaction. A PR is sent for the same
  https://github.com/gluster/glusterd2/pull/1268, design related
  discussions are available here
  https://github.com/gluster/glusterd2/pull/1003

- Template support for Volgen PR is under review, once it is merged
  Glusterd2 will be able to generate Volfiles based on the provided
  template and Volume info.
  https://github.com/gluster/glusterd2/pull/1229

- A PR is under review to make bricks provisioner as plugin
  https://github.com/gluster/glusterd2/pull/1256

- Added validation to fix wrong brick type issue during arbiter volume
  create. https://github.com/gluster/glusterd2/pull/1271
  
- Replace brick API - New brick needs to be provisioned automatically
  if replace brick is called on the Volume which is auto provisioned.
  https://github.com/gluster/glusterd2/pull/1269
  
- Glusterd2 can auto provision the bricks based on the requested size.
  Arbiter bricks require less space compared to other bricks present in
  the same sub volume. Glusterd2 is enhanced to calculate the Arbiter
  brick size based on the provided volume size and the average file size
  to be stored.
  https://github.com/gluster/glusterd2/pull/1267
  
- Volume expand API by Size: If space available in the VG, bricks can
  be expanded to increase the volume size instead of adding one more
  sub volume. The PR is being worked on to add support.
  https://github.com/gluster/glusterd2/issues/852
  
- Tracing support for Volume and Snapshot operations
  (https://github.com/gluster/glusterd2/pull/1255 and
  https://github.com/gluster/glusterd2/pull/1149)

-- 
regards
Aravinda

___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Gluster Monitoring using Prometheus - Status Update

2018-10-12 Thread Aravinda
## Quick start:

```
cd $GOPATH/src/github.com/gluster
git clone https://github.com/gluster/gluster-prometheus.git
cd gluster-prometheus
PREFIX=/usr make
PREFIX=/usr make install

# Enable and start using,
systemctl enable gluster-exporter
systemctl start gluster-exporter
```

Note: By default exporter collects metrics using `glusterd` and
`gluster` CLI. Configure `/etc/gluster-exporter/global.conf` to use
with glusterd2.


## Completed

- All the supported metrics now work with both `glusterd` and
  `glusterd2`. Volume info from glusterd will be upgraded to include
  sub volume details and to match the glusterd2 Volume info. This
  also enables capturing sub-volume related metrics like
  sub volume utilization easily.
  (https://github.com/gluster/gluster-prometheus/pull/35)

- Configuration support added for glusterd/glusterd2 related
  configurations. By default it collects metrics from `glusterd`; update
  the configuration file (`/etc/gluster-exporter/global.conf`) to use
  glusterd2.
  (https://github.com/gluster/gluster-prometheus/pull/24)

- All metrics collectors are enabled by default, metrics can be
  disabled by updating the `/etc/gluster-exporter/collectors.conf`
  file and restarting the `gluster-exporter`
  (https://github.com/gluster/gluster-prometheus/pull/24)

- `gluster-exporter` can be managed as a `systemd` service. Once
  installed, it can be enabled and started using `systemctl enable
  gluster-exporter` and `systemctl start gluster-exporter`
  (https://github.com/gluster/gluster-prometheus/pull/37)
  
- Installation and setup instructions are updated in README file.
  (https://github.com/gluster/gluster-prometheus/pull/40 and
  https://github.com/gluster/gluster-prometheus/pull/35)

- `pkg/glusterutils` package is introduced, which collects the
  required information from both `glusterd` and `glusterd2`. Metrics
  collectors need not worry about handling it for `glusterd` and
  `glusterd2`. For example, `glusterutils.VolumeInfo` internally
  handles glusterd/glusterd2 based on configuration and provides
  uniform interface to metrics
  collectors. (https://github.com/gluster/gluster-prometheus/pull/35)
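
The pattern described in the last item, sketched in Python for brevity (the
project itself is written in Go; the class and method names below are
illustrative only, not the project's API):

```python
# Illustrative sketch of the "uniform interface" idea: a metrics collector
# calls volume_info() and does not care whether glusterd or glusterd2 is
# behind it. Names and return shapes here are made up for illustration.
class GD1Client:
    def volume_info(self):
        # would parse `gluster volume info --xml` output and normalise it
        return [{"name": "gv0", "subvols": []}]


class GD2Client:
    def volume_info(self):
        # would call the glusterd2 REST API and return the same shape
        return [{"name": "gv0", "subvols": []}]


def make_client(config):
    # pick the backend once, based on configuration
    return GD2Client() if config.get("gd2-enabled") else GD1Client()


def volume_count(client):
    # A collector only sees the uniform interface.
    return len(client.volume_info())
```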
  

## In-progress

- RPM generation scripts - Currently the prometheus exporter can be
  installed using a source install (`PREFIX=/usr make` and `PREFIX=/usr
  make install`). The RPM spec file helps to generate the RPM, to
  integrate with GCS, and to integrate with centos-ci.
  (https://github.com/gluster/gluster-prometheus/pull/26)
  
- Understanding Prometheus Operator - A POC project started to try out
  Prometheus Operator. Theoretically the Prometheus operator can detect
  the pods/containers which are annotated as `prometheus.io/scrape:
  "true"`. A custom `Dockerfile` is created to experiment with
  Prometheus Operator till the RPM spec file related changes merge
  and the rpm is included in the official Gluster container.
  (https://github.com/gluster/gluster-prometheus/pull/48)

- Gluster interface - Code is refactored to support glusterd and
  glusterd2 compatibility feature easily.
  (https://github.com/gluster/gluster-prometheus/pull/47)

- Ongoing metrics collectors - Volume count and brick disk io related
  metrics PRs are under review.
  (https://github.com/gluster/gluster-prometheus/pull/22 and
  https://github.com/gluster/gluster-prometheus/pull/15)

- PR related to selecting Leader node/peer is under review. This
  feature will become foundation for sending Cluster related metrics
  only from the leader node.
  (https://github.com/gluster/gluster-prometheus/pull/38)



Install and Usage guide:
https://github.com/gluster/gluster-prometheus/blob/master/README.adoc

Project repo: https://github.com/gluster/gluster-prometheus

-- 
regards
Aravinda
(on behalf of gluster-prometheus Team)

___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Glusterd2 project updates (https://github.com/gluster/glusterd2)

2018-10-01 Thread Aravinda
.(https://github.com/gluster/glusterd2/pull/1228,
  https://github.com/gluster/glusterd2/pull/1053)
- Tracing support for Volume and Snapshot operations
  (https://github.com/gluster/glusterd2/pull/1255 and
  https://github.com/gluster/glusterd2/pull/1149)
- Support for Volume profile - This helps users to understand and
  debug the Gluster Volume. In Glusterd, this feature is supported
  using `gluster volume profile*`
  command.(https://github.com/gluster/glusterd2/pull/962)

-- 
regards
Aravinda

___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] How long should metrics collection on a cluster take?

2018-07-25 Thread Aravinda Vishwanathapura Krishna Murthy
On Tue, Jul 24, 2018 at 10:11 PM Sankarshan Mukhopadhyay <
sankarshan.mukhopadh...@gmail.com> wrote:

> On Tue, Jul 24, 2018 at 9:48 PM, Pranith Kumar Karampuri
>  wrote:
> > hi,
> >   Quite a few commands to monitor gluster at the moment take almost a
> > second to give output.
>
> Is this at the (most) minimum recommended cluster size?
>
> > Some categories of these commands:
> > 1) Any command that needs to do some sort of mount/glfs_init.
> >  Examples: 1) heal info family of commands 2) statfs to find
> > space-availability etc (On my laptop replica 3 volume with all local
> bricks,
> > glfs_init takes 0.3 seconds on average)
> > 2) glusterd commands that need to wait for the previous command to
> unlock.
> > If the previous command is something related to lvm snapshot which takes
> > quite a few seconds, it would be even more time consuming.
> >
> > Nowadays container workloads have hundreds of volumes if not thousands.
> If
> > we want to serve any monitoring solution at this scale (I have seen
> > customers use upto 600 volumes at a time, it will only get bigger) and
> lets
> > say collecting metrics per volume takes 2 seconds per volume(Let us take
> the
> > worst example which has all major features enabled like
> > snapshot/geo-rep/quota etc etc), that will mean that it will take 20
> minutes
> > to collect metrics of the cluster with 600 volumes. What are the ways in
> > which we can make this number more manageable? I was initially thinking
> may
> > be it is possible to get gd2 to execute commands in parallel on different
> > volumes, so potentially we could get this done in ~2 seconds. But quite a
> > few of the metrics need a mount or equivalent of a mount(glfs_init) to
> > collect different information like statfs, number of pending heals, quota
> > usage etc. This may lead to high memory usage as the size of the mounts
> tend
> > to be high.
> >
>
> I am not sure if starting from the "worst example" (it certainly is
> not) is a good place to start from. That said, for any environment
> with that number of disposable volumes, what kind of metrics do
> actually make any sense/impact?
>

This is a really interesting question. When we have a large number of
disposable volumes, I think we need metrics like the available size in the
cluster (to decide whether more volumes can be created) more than the
per-volume utilization. (If we need to observe the usage patterns of
applications then we need per-volume utilization as well.)


>
> > I wanted to seek suggestions from others on how to come to a conclusion
> > about which path to take and what problems to solve.
> >
> > I will be happy to raise github issues based on our conclusions on this
> mail
> > thread.
> >
> > --
> > Pranith
> >
>
>
>
>
>
> --
> sankarshan mukhopadhyay
> <https://about.me/sankarshan.mukhopadhyay>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel


--
regards
Aravinda VK
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] How long should metrics collection on a cluster take?

2018-07-25 Thread Aravinda Vishwanathapura Krishna Murthy
On Wed, Jul 25, 2018 at 11:54 PM Yaniv Kaul  wrote:

>
>
> On Tue, Jul 24, 2018, 7:20 PM Pranith Kumar Karampuri 
> wrote:
>
>> hi,
>>   Quite a few commands to monitor gluster at the moment take almost a
>> second to give output.
>> Some categories of these commands:
>> 1) Any command that needs to do some sort of mount/glfs_init.
>>  Examples: 1) heal info family of commands 2) statfs to find
>> space-availability etc (On my laptop replica 3 volume with all local
>> bricks, glfs_init takes 0.3 seconds on average)
>> 2) glusterd commands that need to wait for the previous command to
>> unlock. If the previous command is something related to lvm snapshot which
>> takes quite a few seconds, it would be even more time consuming.
>>
>> Nowadays container workloads have hundreds of volumes if not thousands.
>> If we want to serve any monitoring solution at this scale (I have seen
>> customers use upto 600 volumes at a time, it will only get bigger) and lets
>> say collecting metrics per volume takes 2 seconds per volume(Let us take
>> the worst example which has all major features enabled like
>> snapshot/geo-rep/quota etc etc), that will mean that it will take 20
>> minutes to collect metrics of the cluster with 600 volumes. What are the
>> ways in which we can make this number more manageable? I was initially
>> thinking may be it is possible to get gd2 to execute commands in parallel
>> on different volumes, so potentially we could get this done in ~2 seconds.
>> But quite a few of the metrics need a mount or equivalent of a
>> mount(glfs_init) to collect different information like statfs, number of
>> pending heals, quota usage etc. This may lead to high memory usage as the
>> size of the mounts tend to be high.
>>
>> I wanted to seek suggestions from others on how to come to a conclusion
>> about which path to take and what problems to solve.
>>
>
> I would imagine that in gd2 world:
> 1. All stats would be in etcd.
>

Only static state information is stored in etcd by gd2. For real-time
status gd2 still has to reach the respective nodes to collect the details.
For example, Volume utilization is changed by multiple mounts which are
external to gd2; to keep track of the real-time status gd2 has to poll
brick utilization on every node and update etcd.
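
To illustrate the "execute commands in parallel on different volumes" idea
discussed earlier in the thread, here is a minimal sketch (my own
illustration; `collect_volume_metrics` is a placeholder for the expensive
per-volume work, not a gd2 API):

```python
# Sketch: collect per-volume metrics concurrently instead of serially, so
# 600 volumes at ~2s each do not take ~20 minutes of wall-clock time.
# collect_volume_metrics() stands in for the expensive per-volume work
# (glfs_init, statfs, heal info, ...); the worker count would need capping
# because of the memory cost of many mounts noted in the thread.
import time
from concurrent.futures import ThreadPoolExecutor


def collect_volume_metrics(volume):
    time.sleep(0.01)  # stand-in for the real per-volume collection
    return volume, {"used_bytes": 0}


def collect_all(volumes, workers=32):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(collect_volume_metrics, volumes))


if __name__ == "__main__":
    volumes = [f"vol{i}" for i in range(600)]
    start = time.time()
    stats = collect_all(volumes)
    print(f"collected {len(stats)} volumes in {time.time() - start:.1f}s")
```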



> 2. There will be a single API call for GetALLVolumesStats or something and
> we won't be asking the client to loop, or there will be a similar efficient
> single API to query and deliver stats for some volumes in a batch ('all
> bricks in host X' for example).
>

A single API is available for Volume stats, but this API is expensive
because the real-time stats are not stored in etcd.


>
> Worth looking how it's implemented elsewhere in K8S.
>
> In any case, when asking for metrics I assume the latest already available
> would be returned and we are not going to fetch them when queried. This is
> both fragile (imagine an entity that doesn't respond well) and adds latency
> and will be inaccurate anyway a split second later.
>
> Y.
>
>
>
>> I will be happy to raise github issues based on our conclusions on this
>> mail thread.
>>
>> --
>> Pranith
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel


--
regards
Aravinda VK
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: LTM release targeted for end of May

2018-03-25 Thread Aravinda

On 03/22/2018 11:34 PM, Shyam Ranganathan wrote:

On 03/21/2018 04:12 AM, Amar Tumballi wrote:

 Current 4.1 project release lane is empty! I cleaned it up, because I
 want to hear from all as to what content to add, than add things marked
 with the 4.1 milestone by default.


I would like to see we have sane default values for most of the options,
or have group options for many use-cases.

Amar, do we have an issue that lists the use-cases and hence the default
groups to be provided for the same?


Also want to propose that,  we include a release
of http://github.com/gluster/gluster-health-report with 4.1, and make
the project more usable.

In the theme of including sub-projects that we want to highlight, what
else should we tag a release for or highlight with 4.1?

@Aravinda, how do you envision releasing this with 4.1? IOW, what
interop tests and hence sanity can be ensured with 4.1 and how can we
tag a release that is sane against 4.1?


Some more changes are required to make it work with Gluster 4.x; I will
work on fixing those issues and the test scripts.


I have not yet started with packaging for Fedora/Ubuntu. As of now it is
available via `pip install`. Let me know if that is fine for the 4.1 release.





Also, we see that some of the patches from FB branch on namespace and
throttling are in, so we would like to call that feature out as
experimental by then.

I would assume we track this against
https://github.com/gluster/glusterfs/issues/408 would that be right?
___
maintainers mailing list
maintain...@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers



--
regards
Aravinda VK

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: LTM release targeted for end of May

2018-03-14 Thread Aravinda

On 03/14/2018 05:38 PM, Shyam Ranganathan wrote:

On 03/14/2018 02:40 AM, Aravinda wrote:

I have the following suggestion for managing release notes. Let me know
if this can be included in the automation document.

Aravinda, I assume this is an additional suggestion, orthogonal to the
current discussion around spec and docs, right?

Yes. Sorry about that.




If a Github issue is used in the commit message as "Fixes: #" then the
Smoke test should fail if the patch does not contain
`$SRC/release-notes/.md` (if `$SRC/release-notes/.md` does not already
exist in the codebase).

On branching, delete all these release-notes from the master branch and
start fresh. The release branch now contains these notes for all the
features that went in after the last release. The release manager's job is
to merge all these release notes into a single release notes document.

We can restrict the format of the release-note to:

     First Line is Title
     Tags: component-name, keywords etc
     --
     Description about the feature, example, links etc

If all patches are submitted with `Updates` instead of `Fixes`, then the
issue can't be closed without submitting a patch with the release-note.

Most of the above is fine and we can thrash out specifics, but...

I am thinking differently, if spec and doc are provided writing a short
1-5 line summary in the release notes is not a problem.


I think the developer who developed the feature is the best person to write
the release notes. Release notes are easy to write while developing the
feature or just after finishing it. Once a developer starts working
on other features/bug fixes it is very difficult to regain interest in
writing a release-note for an already merged feature.

I think extracting a summary from the documentation is also difficult; if
the release maintainer is not aware of all the features then it will be
more time-consuming to write the summary.



The issue at present is that, to get contents into the release notes, at
times even the code has to be read to understand options, defaults and
what not. This being done by one person has been a challenge.

So if we address spec and doc, I/we can check how easy it is to write
release notes from these (over the next couple of releases say) and then
decide if we want authors to provide the release notes as well.

Thoughts?

Shyam



--
regards
Aravinda VK

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] Announcing Gluster release 4.0.0 (Short Term Maintenance)

2018-03-14 Thread Aravinda

On 03/14/2018 07:13 AM, Shyam Ranganathan wrote:

The Gluster community celebrates 13 years of development with this
latest release, Gluster 4.0. This release enables improved integration
with containers, an enhanced user experience, and a next-generation
management framework. The 4.0 release helps cloud-native app developers
choose Gluster as the default scale-out distributed file system.

We’re highlighting some of the announcements, major features and changes
here, but our release notes[1] have announcements, expanded major
changes and features, and bugs addressed in this release.

Major enhancements:

- Management
GlusterD2 (GD2) is a new management daemon for Gluster-4.0. It is a
complete rewrite, with all new internal core frameworks, that make it
more scalable, easier to integrate with and has lower maintenance
requirements. This replaces GlusterD.

A quick start guide [6] is available to get started with GD2.

Although GD2 is in tech preview for this release, it is ready to use for
forming and managing new clusters.

- Monitoring
With this release, GlusterFS enables a lightweight method to access
internal monitoring information.

- Performance
There are several enhancements to performance in the disperse translator
and in the client side metadata caching layers.

- Other enhancements of note
 This release adds: ability to run Gluster on FIPS compliant systems,
ability to force permissions while creating files/directories, and
improved consistency in distributed volumes.

- Developer related
New on-wire protocol version and full type encoding of internal
dictionaries on the wire, Global translator to handle per-daemon
options, improved translator initialization structure, among a few other
improvements, that help streamline development of newer translators.

Release packages (or where to get them) are available at [2] and are
signed with [3]. The upgrade guide for this release can be found at [4]

Related announcements:

- As 3.13 was a short term maintenance release, it will reach end of
life (EOL) with the release of 4.0.0 [5].

- Releases that receive maintenance updates post 4.0 release are, 3.10,
3.12, 4.0 [5].

- With this release, the CentOS storage SIG will not build server
packages for CentOS6. Server packages will be available for CentOS7
only. For ease of migrations, client packages on CentOS6 will be
published and maintained, as announced here [7].

References:
[1] Release notes:
https://docs.gluster.org/en/latest/release-notes/4.0.0.md/

The above link is not working; the actual link is

https://docs.gluster.org/en/latest/release-notes/4.0.0/


[2] Packages: https://download.gluster.org/pub/gluster/glusterfs/4.0/
[3] Packages signed with:
https://download.gluster.org/pub/gluster/glusterfs/4.0/rsa.pub
[4] Upgrade guide:
https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_4.0/
[5] Release schedule: https://www.gluster.org/release-schedule/
[6] GD2 quick start:
https://github.com/gluster/glusterd2/blob/master/doc/quick-start-user-guide.md
[7] CentOS Storage SIG CentOS6 support announcement:
http://lists.gluster.org/pipermail/gluster-users/2018-January/033212.html
___
maintainers mailing list
maintain...@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers



--
regards
Aravinda VK

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: LTM release targeted for end of May

2018-03-14 Thread Aravinda

On 03/13/2018 04:00 PM, Shyam Ranganathan wrote:

On 03/13/2018 03:53 AM, Sankarshan Mukhopadhyay wrote:

On Tue, Mar 13, 2018 at 1:05 PM, Pranith Kumar Karampuri
<pkara...@redhat.com> wrote:


On Tue, Mar 13, 2018 at 7:07 AM, Shyam Ranganathan <srang...@redhat.com>
wrote:

Hi,

As we wind down on 4.0 activities (waiting on docs to hit the site, and
packages to be available in CentOS repositories before announcing the
release), it is time to start preparing for the 4.1 release.

4.1 is where we have GD2 fully functional and shipping with migration
tools to aid Glusterd to GlusterD2 migrations.

Other than the above, this is a call out for features that are in the
works for 4.1. Please *post* the github issues to the *devel lists* that
you would like as a part of 4.1, and also mention the current state of
development.

Further, as we hit end of March, we would make it mandatory for features
to have required spec and doc labels, before the code is merged, so
factor in efforts for the same if not already done.


Could you explain the point above further? Is it just the label or the
spec/doc
that we need merged before the patch is merged?


I'll hazard a guess that the intent of the label is to indicate
availability of the doc. "Completeness" of code is being defined as
including specifications and documentation.

As Sankarshan gleaned, spec and doc should be merged/completed before
the code is merged, as otherwise smoke will not vote a +1 on the same.


I have the following suggestion for managing release notes. Let me know
if this can be included in the automation document.


If a Github issue is used in the commit message as "Fixes: #" then the
Smoke test should fail if the patch does not contain
`$SRC/release-notes/.md` (if `$SRC/release-notes/.md` does not already
exist in the codebase).

On branching, delete all these release-notes from the master branch and
start fresh. The release branch now contains these notes for all the
features that went in after the last release. The release manager's job is
to merge all these release notes into a single release notes document.

We can restrict the format of the release-note to:

    First Line is Title
    Tags: component-name, keywords etc
    --
    Description about the feature, example, links etc

If all patches are submitted with `Updates` instead of `Fixes`, then the
issue can't be closed without submitting a patch with the release-note.
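
A minimal sketch of such a smoke check (my own illustration; the
"Fixes: #<issue>" pattern and the `release-notes/<issue>.md` layout are my
reading of the suggestion above, everything else is assumed):

```python
# Sketch: fail the check when a commit claiming "Fixes: #<issue>" does not
# add a release-notes/<issue>.md file. Illustration only, not the real
# smoke job.
import os
import re
import sys


def check_release_note(commit_message, src_root="."):
    match = re.search(r"Fixes:\s*#(\d+)", commit_message)
    if not match:
        return True  # "Updates:" or no issue reference: nothing to enforce
    issue = match.group(1)
    note = os.path.join(src_root, "release-notes", f"{issue}.md")
    if os.path.exists(note):
        return True
    print(f"Missing release note for issue #{issue}: expected {note}")
    return False


if __name__ == "__main__":
    sys.exit(0 if check_release_note(sys.stdin.read()) else 1)
```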





That said, I'll wait for Shyam to be more elaborate on this.


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel



--
regards
Aravinda VK

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.0: Branched

2018-01-29 Thread Aravinda

Hi Shyam,

The below patches were identified lately while working on the
Geo-replication+Glusterd2 integration. The patches are merged in master and
backported to release-4.0.


https://review.gluster.org/19343  (Ready to submit)
https://review.gluster.org/19364  (Waiting for centos regression)

Please merge the patches. Thanks


On Thursday 25 January 2018 09:49 PM, Shyam Ranganathan wrote:

On 01/23/2018 03:17 PM, Shyam Ranganathan wrote:

4.0 release has been branched!

I will follow this up with a more detailed schedule for the release, and
also the granted feature backport exceptions that we are waiting.

Feature backports would need to make it in by this weekend, so that we
can tag RC0 by the end of the month.

Backports need to be ready for merge on or before Jan, 29th 2018 3:00 PM
Eastern TZ.

Features that requested and hence are granted backport exceptions are as
follows,

1) Dentry fop serializer xlator on brick stack
https://github.com/gluster/glusterfs/issues/397

@Du please backport the same to the 4.0 branch as the patch in master is
merged.

2) Leases support on GlusterFS
https://github.com/gluster/glusterfs/issues/350

@Jiffin and @ndevos, there is one patch pending against master,
https://review.gluster.org/#/c/18785/ please do the needful and backport
this to the 4.0 branch.

3) Data corruption in write ordering of rebalance and application writes
https://github.com/gluster/glusterfs/issues/308

@susant, @du if we can conclude on the strategy here, please backport as
needed.

4) Couple of patches that are tracked for a backport are,
https://review.gluster.org/#/c/19223/
https://review.gluster.org/#/c/19267/ (prep for ctime changes in later
releases)

Other features discussed are not in scope for a backports to 4.0.

If you asked for one and do not see it in this list, shout out!


Only exception could be: https://review.gluster.org/#/c/19223/

Thanks,
Shyam
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
maintainers mailing list
maintain...@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers



--
regards
Aravinda VK

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Health output like mdstat

2017-11-08 Thread Aravinda

On Tuesday 07 November 2017 03:22 PM, Gandalf Corvotempesta wrote:

I think would be useful to add a cumulative cluster health output like
mdstat for mdadm.
So that with a simple command, would be possible to see:

1) how many nodes are UP and DOWN (and which nodes are DOWN)
2) any background operation running (like healing, scrubbing) with
their progress
3) any split brain files that won't be fixed automatically

Currently, we need to run multiple commands to see cluster health.

An even better version would be something similiar to "mdadm --detail /dev/md0"
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Looks like a good candidate for the "gluster-health-report" project. I will
add these items to the issue page. Thanks.


Details:
Project:  https://github.com/aravindavk/gluster-health-report
Issue:    https://github.com/gluster/glusterfs/issues/313
Details mail: 
http://lists.gluster.org/pipermail/gluster-users/2017-October/032758.html


--
regards
Aravinda VK

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Gluster Health Report tool

2017-10-25 Thread Aravinda

Hi,

We started a new project to identify issues/misconfigurations in
Gluster nodes. This project is very young and not yet ready for
production use. Feedback on the existing reports and ideas for more
reports are welcome.

This tool needs to run on every Gluster node to detect the local
issues (for example: parsing log files, checking disk space etc.) on
each node. But some of the reports use the Gluster CLI to identify
issues, and these can be run on any one node. (For example
`gluster-health-report --run-only glusterd-peer-disconnect`)
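
As an illustration of the kind of local check such reports run, here is a
hypothetical disk-space check (this is not the project's actual plugin
interface):

```python
# Hypothetical local check: warn when a filesystem is nearly full.
# The real gluster-health-report plugin interface may differ.
import shutil


def check_disk_space(path="/", warn_percent=90):
    usage = shutil.disk_usage(path)
    used_percent = usage.used * 100 // usage.total
    if used_percent >= warn_percent:
        return f"WARNING: {path} is {used_percent}% full"
    return f"OK: {path} is {used_percent}% full"


if __name__ == "__main__":
    print(check_disk_space("/"))
```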

# Install

    sudo pip install gluster-health-report

# Usage
Run `gluster-health-report --help` for help

    gluster-health-report

Example output is available here 
https://github.com/aravindavk/gluster-health-report


# Project Details
- Issue page: https://github.com/gluster/glusterfs/issues/313
- Project page: https://github.com/aravindavk/gluster-health-report
- Open new issue if you have new report suggestion or found issue with
  existing report 
https://github.com/aravindavk/gluster-health-report/issues


--

regards
Aravinda VK

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] Metrics: and how to get them out from gluster

2017-09-01 Thread Aravinda
an in 'stack.h (and some in xlator.h)'.


Also, we can provide 'option monitoring enable' (or disable) option as 
a default option for every translator, and can handle it at 
xlator_init() time itself. (This is not a blocker for 4.0, but good to 
have). Idea proposed @ github #304 [7].


NOTE: this approach is working pretty good already at 'experimental' 
branch, excluding [7]. Depending on feedback, we can improve it further.


*c) framework for xlators to provide private metrics*

One possible solution is to use statedump functions. But to cause 
least disruption to an existing code, I propose 2 new methods. 
'dump_metrics()', and 'reset_metrics()' to xlator methods, which can 
be dl_open()'d to xlator structure.


'dump_metrics()' dumps the private metrics in the expected format, and 
will be called from the global dump-metrics framework, and 
'reset_metrics()' would be called from a CLI command when someone 
wants to restart metrics from 0 to check / validate few things in a 
running cluster. Helps debug-ability.


Further feedback welcome.

NOTE: a sample code is already implemented in 'experimental' branch, 
and protocol/server xlator uses this framework to dump metrics from 
rpc layer, and client connections.


*d) format of the 'metrics' file.*

If you want any plot-able data on a graph, you need key (should be 
string), and value (should be a number), collected over time. So, this 
file should output data for the monitoring systems and not exactly for 
the debug-ability. We have 'statedump' for debug-ability.


So, I propose a plain text file, where data would be dumped like below.

```
# anything starting from # would be treated as comment.

# anything after the value would be ignored.
```
Any better solutions are welcome. Ideally, we should keep this 
friendly for external projects to consume, like tendrl [8] or 
graphite, prometheus etc. Also note that, once we agree to the format, 
it would be very hard to change it as external projects would use it.


I would like to hear the feedback from people who are experienced with 
monitoring systems here.


NOTE: the above format works fine with 'glustermetrics' project [9] 
and is working decently on 'experimental' branch.


--

** Discussions:*

Let me know how you all want to take the discussion forward?

Should we get to github, and discuss on each issue? or should I rebase 
and send the current patches from experimental to 'master' branch and 
discuss in our review system?  Or should we continue on the email here!


Regards,
Amar

References:

[1] - https://github.com/gluster/glusterfs/issues/137
[2] - https://github.com/gluster/glusterfs/issues/141
[3] - https://github.com/gluster/glusterfs/issues/275
[4] - https://github.com/gluster/glusterfs/issues/168
[5] - 
http://lists.gluster.org/pipermail/maintainers/2017-August/002954.html 
(last email of the thread).

[6] - https://github.com/gluster/glusterfs/issues/303
[7] - https://github.com/gluster/glusterfs/issues/304
[8] - https://github.com/Tendrl
[9] - https://github.com/amarts/glustermetrics

--
Amar Tumballi (amarts)


___
maintainers mailing list
maintain...@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers



--
regards
Aravinda VK
http://aravindavk.in

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Gluster documentation search

2017-08-28 Thread Aravinda

On 08/28/2017 04:44 PM, Nigel Babu wrote:

Hello folks,

I spend some time today mucking about trying to figure out how to make 
our documentation search a better experience. The short answer is, 
search kind of works now.


Long answer: mkdocs creates a client side file which is used for 
search. RTD overrides this by referring people to Elasticsearch. 
However, that doesn't clear out stale entries and we're plagued with a 
whole lot of stale entries. I've made some changes that other 
consumers of RTD have done to override our search to use the JS file 
rather than Elasticsearch.


--
nigelb


___
Gluster-users mailing list
gluster-us...@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Nice.

Please version the generated search_index.json file so that it will be 
easy to invalidate the browser's cache once changed.


--
regards
Aravinda VK

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] geo-rep regression because of node-uuid change

2017-06-20 Thread Aravinda

On 06/20/2017 06:02 PM, Pranith Kumar Karampuri wrote:
Xavi, Aravinda and I had a discussion on #gluster-dev and we agreed to
go with the format Aravinda suggested for now. In future we want some
more changes for dht to detect which subvolume went down and came back
up; at that time we will revisit the solution suggested by Xavi.


Susanth is doing the dht changes
Aravinda is doing geo-rep changes

Done. Geo-rep patch sent for review https://review.gluster.org/17582

--
Aravinda



Thanks to all of you guys for the discussions!

On Tue, Jun 20, 2017 at 5:05 PM, Xavier Hernandez 
<xhernan...@datalab.es <mailto:xhernan...@datalab.es>> wrote:


    Hi Aravinda,

On 20/06/17 12:42, Aravinda wrote:

I think following format can be easily adopted by all components

UUIDs of a subvolume are separated by space and subvolumes are
separated
by comma

For example, node1 and node2 are replica with U1 and U2 UUIDs
respectively and
node3 and node4 are replica with U3 and U4 UUIDs respectively

node-uuid can return "U1 U2,U3 U4"


While this is ok for current implementation, I think this can be
insufficient if there are more layers of xlators that require to
indicate some sort of grouping. Some representation that can
represent hierarchy would be better. For example: "(U1 U2) (U3
U4)" (we can use spaces or comma as a separator).


Geo-rep can split by "," and then split by space and take
first UUID
DHT can split the value by space or comma and get unique UUIDs
list


This doesn't solve the problem I described in the previous email.
Some more logic will need to be added to avoid more than one node
from each replica-set to be active. If we have some explicit
hierarchy information in the node-uuid value, more decisions can
be taken.

An initial proposal I made was this:

DHT[2](AFR[2,0](NODE(U1), NODE(U2)), AFR[2,0](NODE(U1), NODE(U2)))

This is harder to parse, but gives a lot of information: DHT with
2 subvolumes, each subvolume is an AFR with replica 2 and no
arbiters. It's also easily extensible with any new xlator that
changes the layout.

However maybe this is not the moment to do this, and probably we
could implement this in a new xattr with a better name.

Xavi



Another question is about the behavior when a node is down,
existing
node-uuid xattr will not return that UUID if a node is down.
What is the
behavior with the proposed xattr?

Let me know your thoughts.

regards
Aravinda VK

On 06/20/2017 03:06 PM, Aravinda wrote:

Hi Xavi,

On 06/20/2017 02:51 PM, Xavier Hernandez wrote:

Hi Aravinda,

On 20/06/17 11:05, Pranith Kumar Karampuri wrote:

Adding more people to get a consensus about this.

    On Tue, Jun 20, 2017 at 1:49 PM, Aravinda
<avish...@redhat.com <mailto:avish...@redhat.com>
<mailto:avish...@redhat.com
<mailto:avish...@redhat.com>>> wrote:


regards
Aravinda VK


On 06/20/2017 01:26 PM, Xavier Hernandez wrote:

Hi Pranith,

adding gluster-devel, Kotresh and Aravinda,

On 20/06/17 09:45, Pranith Kumar Karampuri
wrote:



On Tue, Jun 20, 2017 at 1:12 PM,
Xavier Hernandez
<xhernan...@datalab.es
<mailto:xhernan...@datalab.es>
<mailto:xhernan...@datalab.es
<mailto:xhernan...@datalab.es>>
<mailto:xhernan...@datalab.es
<mailto:xhernan...@datalab.es>
<mailto:xhernan...@datalab.es
<mailto:xhernan...@datalab.es>>>> wrote:

On 20/06/17 09:31, Pranith Kumar
Karampuri wrote:

The way geo-replication works is:
On each machine, it does
getxattr of node-uuid and
check if its
own uuid
is present in the list. If it
is present then it
will consider
it active
otherwise it will be
considered passive. With this
change we are
  

Re: [Gluster-devel] geo-rep regression because of node-uuid change

2017-06-20 Thread Aravinda

I think the following format can be easily adopted by all components:

UUIDs of a subvolume are separated by space and subvolumes are separated
by comma.


For example, node1 and node2 are replica with U1 and U2 UUIDs 
respectively and

node3 and node4 are replica with U3 and U4 UUIDs respectively

node-uuid can return "U1 U2,U3 U4"

Geo-rep can split by "," and then split by space and take the first UUID.
DHT can split the value by space or comma and get the unique UUIDs list.
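
A small sketch of the parsing described above (illustration only; the
sample value and `my_uuid` are assumptions, and this is not the actual
geo-rep or DHT code):

```python
# Parse the proposed node-uuid format "U1 U2,U3 U4":
# subvolumes separated by ",", UUIDs within a subvolume by space.
def parse_node_uuid(value):
    return [subvol.split() for subvol in value.split(",") if subvol.strip()]


def georep_is_active(value, my_uuid):
    # Geo-rep: take the first UUID of each subvolume as the Active one.
    return any(subvol and subvol[0] == my_uuid
               for subvol in parse_node_uuid(value))


def dht_unique_uuids(value):
    # DHT: flatten and de-duplicate all UUIDs.
    return sorted({u for subvol in parse_node_uuid(value) for u in subvol})


if __name__ == "__main__":
    sample = "U1 U2,U3 U4"
    print(georep_is_active(sample, "U1"))   # True  (first UUID of subvol 1)
    print(georep_is_active(sample, "U2"))   # False (passive)
    print(dht_unique_uuids(sample))         # ['U1', 'U2', 'U3', 'U4']
```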

Another question is about the behavior when a node is down: the existing
node-uuid xattr will not return that UUID if a node is down. What is the
behavior with the proposed xattr?


Let me know your thoughts.

regards
Aravinda VK

On 06/20/2017 03:06 PM, Aravinda wrote:

Hi Xavi,

On 06/20/2017 02:51 PM, Xavier Hernandez wrote:

Hi Aravinda,

On 20/06/17 11:05, Pranith Kumar Karampuri wrote:

Adding more people to get a consensus about this.

On Tue, Jun 20, 2017 at 1:49 PM, Aravinda <avish...@redhat.com
<mailto:avish...@redhat.com>> wrote:


regards
Aravinda VK


On 06/20/2017 01:26 PM, Xavier Hernandez wrote:

Hi Pranith,

adding gluster-devel, Kotresh and Aravinda,

On 20/06/17 09:45, Pranith Kumar Karampuri wrote:



On Tue, Jun 20, 2017 at 1:12 PM, Xavier Hernandez
<xhernan...@datalab.es <mailto:xhernan...@datalab.es>
<mailto:xhernan...@datalab.es
<mailto:xhernan...@datalab.es>>> wrote:

On 20/06/17 09:31, Pranith Kumar Karampuri wrote:

The way geo-replication works is:
On each machine, it does getxattr of node-uuid and
check if its
own uuid
is present in the list. If it is present then it
will consider
it active
otherwise it will be considered passive. With this
change we are
giving
all uuids instead of first-up subvolume. So all
machines think
they are
ACTIVE which is bad apparently. So that is the
reason. Even I
felt bad
that we are doing this change.


And what about changing the content of node-uuid to
include some
sort of hierarchy ?

for example:

a single brick:

NODE()

AFR/EC:

AFR[2](NODE(), NODE())
EC[3,1](NODE(), NODE(), NODE())

DHT:

DHT[2](AFR[2](NODE(), NODE()),
AFR[2](NODE(),
NODE()))

This gives a lot of information that can be used to 
take the

appropriate decisions.


I guess that is not backward compatible. Shall I CC
    gluster-devel and
Kotresh/Aravinda?


Is the change we did backward compatible ? if we only require
the first field to be a GUID to support backward compatibility,
we can use something like this:

No. But the necessary change can be made to Geo-rep code as well if
format is changed, Since all these are built/shipped together.

Geo-rep uses node-id as follows,

list = listxattr(node-uuid)
active_node_uuids = list.split(SPACE)
active_node_flag = True if self.node_id exists in active_node_uuids
else False


How was this case solved ?

suppose we have three servers and 2 bricks in each server. A 
replicated volume is created using the following command:


gluster volume create test replica 2 server1:/brick1 server2:/brick1 
server2:/brick2 server3:/brick1 server3:/brick1 server1:/brick2


In this case we have three replica-sets:

* server1:/brick1 server2:/brick1
* server2:/brick2 server3:/brick1
* server3:/brick2 server2:/brick2

Old AFR implementation for node-uuid always returned the uuid of the 
node of the first brick, so in this case we will get the uuid of the 
three nodes because all of them are the first brick of a replica-set.


Does this mean that with this configuration all nodes are active ? Is 
this a problem ? Is there any other check to avoid this situation if 
it's not good ?
Yes, all Geo-rep workers will become Active and participate in syncing.
Since changelogs will have the same information in replica bricks, this
will lead to duplicate syncing and consume extra network bandwidth.


Node-uuid based Active worker selection has been the default configuration in 
Geo-rep till now. Geo-rep also has Meta Volume based synchronization for Active 
workers using lock files (this can be opted in via Geo-rep configuration; with 
this config node-uuid will not be used).


Kotresh proposed a solution to configure which worker should become 
Active. This will give more control to the Admin to choose Active workers. 
This will become the default configuration from 3.12.

https://github.com/gluster/glusterfs/issues/244

Re: [Gluster-devel] geo-rep regression because of node-uuid change

2017-06-20 Thread Aravinda

Hi Xavi,

On 06/20/2017 02:51 PM, Xavier Hernandez wrote:

Hi Aravinda,

On 20/06/17 11:05, Pranith Kumar Karampuri wrote:

Adding more people to get a consensus about this.

On Tue, Jun 20, 2017 at 1:49 PM, Aravinda <avish...@redhat.com
<mailto:avish...@redhat.com>> wrote:


regards
Aravinda VK


On 06/20/2017 01:26 PM, Xavier Hernandez wrote:

Hi Pranith,

adding gluster-devel, Kotresh and Aravinda,

On 20/06/17 09:45, Pranith Kumar Karampuri wrote:



On Tue, Jun 20, 2017 at 1:12 PM, Xavier Hernandez
<xhernan...@datalab.es <mailto:xhernan...@datalab.es>
<mailto:xhernan...@datalab.es
<mailto:xhernan...@datalab.es>>> wrote:

On 20/06/17 09:31, Pranith Kumar Karampuri wrote:

The way geo-replication works is:
On each machine, it does getxattr of node-uuid and
check if its
own uuid
is present in the list. If it is present then it
will consider
it active
otherwise it will be considered passive. With this
change we are
giving
all uuids instead of first-up subvolume. So all
machines think
they are
ACTIVE which is bad apparently. So that is the
reason. Even I
felt bad
that we are doing this change.


And what about changing the content of node-uuid to
include some
sort of hierarchy ?

for example:

a single brick:

NODE(<uuid>)

AFR/EC:

AFR[2](NODE(<uuid>), NODE(<uuid>))
EC[3,1](NODE(<uuid>), NODE(<uuid>), NODE(<uuid>))

DHT:

DHT[2](AFR[2](NODE(<uuid>), NODE(<uuid>)),
       AFR[2](NODE(<uuid>), NODE(<uuid>)))

This gives a lot of information that can be used to take the
appropriate decisions.


I guess that is not backward compatible. Shall I CC
gluster-devel and
Kotresh/Aravinda?


Is the change we did backward compatible ? if we only require
the first field to be a GUID to support backward compatibility,
we can use something like this:

No. But the necessary change can be made to Geo-rep code as well if
format is changed, Since all these are built/shipped together.

Geo-rep uses node-id as follows,

list = listxattr(node-uuid)
active_node_uuids = list.split(SPACE)
active_node_flag = True if self.node_id exists in active_node_uuids
else False


How was this case solved ?

suppose we have three servers and 2 bricks in each server. A 
replicated volume is created using the following command:


gluster volume create test replica 2 server1:/brick1 server2:/brick1 
server2:/brick2 server3:/brick1 server3:/brick2 server1:/brick2


In this case we have three replica-sets:

* server1:/brick1 server2:/brick1
* server2:/brick2 server3:/brick1
* server3:/brick2 server1:/brick2

Old AFR implementation for node-uuid always returned the uuid of the 
node of the first brick, so in this case we will get the uuid of the 
three nodes because all of them are the first brick of a replica-set.


Does this mean that with this configuration all nodes are active ? Is 
this a problem ? Is there any other check to avoid this situation if 
it's not good ?
Yes all Geo-rep workers will become Active and participate in syncing. 
Since changelogs will have the same information in replica bricks this 
will lead to duplicate syncing and consuming network bandwidth.


Node-uuid based Active worker selection has been the default configuration in 
Geo-rep till now. Geo-rep also has Meta Volume based synchronization for Active 
workers using lock files (this can be opted in via Geo-rep configuration; with 
this config node-uuid will not be used).


Kotresh proposed a solution to configure which worker should become Active. 
This will give more control to the Admin to choose Active workers. This will 
become the default configuration from 3.12.

https://github.com/gluster/glusterfs/issues/244

--
Aravinda



Xavi





Bricks:

<guid>

AFR/EC:
<guid>(<guid>, <guid>)

DHT:
<guid>(<guid>(<guid>, ...), <guid>(<guid>, ...))

In this case, AFR and EC would return the same <guid> they
returned before the patch, but between '(' and ')' they put the
full list of guid's of all nodes. The first <guid> can be used
by geo-replication. The list after the first <guid> can be used
for rebalance.

Not sure if there's any user of node-uuid above DHT.

Xavi




Xavi


On Tue, Jun 20, 2017 at 12:46 PM, Xavier Hernandez
<xhernan...@datalab.es
<mailto:xhernan...@datalab.es> <mailto:xhernan...@data

Re: [Gluster-devel] geo-rep regression because of node-uuid change

2017-06-20 Thread Aravinda


regards
Aravinda VK

On 06/20/2017 01:26 PM, Xavier Hernandez wrote:

Hi Pranith,

adding gluster-devel, Kotresh and Aravinda,

On 20/06/17 09:45, Pranith Kumar Karampuri wrote:



On Tue, Jun 20, 2017 at 1:12 PM, Xavier Hernandez <xhernan...@datalab.es
<mailto:xhernan...@datalab.es>> wrote:

On 20/06/17 09:31, Pranith Kumar Karampuri wrote:

The way geo-replication works is:
On each machine, it does getxattr of node-uuid and check if its
own uuid
is present in the list. If it is present then it will consider
it active
otherwise it will be considered passive. With this change we are
giving
all uuids instead of first-up subvolume. So all machines think
they are
ACTIVE which is bad apparently. So that is the reason. Even I
felt bad
that we are doing this change.


And what about changing the content of node-uuid to include some
sort of hierarchy ?

for example:

a single brick:

NODE(<uuid>)

AFR/EC:

AFR[2](NODE(<uuid>), NODE(<uuid>))
EC[3,1](NODE(<uuid>), NODE(<uuid>), NODE(<uuid>))

DHT:

DHT[2](AFR[2](NODE(<uuid>), NODE(<uuid>)), AFR[2](NODE(<uuid>), NODE(<uuid>)))

This gives a lot of information that can be used to take the
appropriate decisions.


I guess that is not backward compatible. Shall I CC gluster-devel and
Kotresh/Aravinda?


Is the change we did backward compatible ? if we only require the 
first field to be a GUID to support backward compatibility, we can use 
something like this:
No. But the necessary change can be made to Geo-rep code as well if 
format is changed, Since all these are built/shipped together.


Geo-rep uses node-id as follows,

list = listxattr(node-uuid)
active_node_uuids = list.split(SPACE)
active_node_flag = True if self.node_id exists in active_node_uuids else 
False




Bricks:

<guid>

AFR/EC:
<guid>(<guid>, <guid>)

DHT:
<guid>(<guid>(<guid>, ...), <guid>(<guid>, ...))

In this case, AFR and EC would return the same <guid> they returned 
before the patch, but between '(' and ')' they put the full list of 
guid's of all nodes. The first <guid> can be used by geo-replication. 
The list after the first <guid> can be used for rebalance.
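A small parsing sketch for that backward-compatible format (the regex and
helper function are illustrative only, not an existing API):

import re

def parse_node_uuid(value):
    """Split '<guid>(<guid> <guid> ...)' into (first_guid, [guids])."""
    m = re.match(r"^([^(]+)\((.*)\)$", value.strip())
    if not m:
        # old format: a plain GUID with no list
        return value.strip(), [value.strip()]
    first = m.group(1).strip()
    full = m.group(2).replace(",", " ").split()
    return first, full

first, full_list = parse_node_uuid("U1(U1 U2 U3 U4)")
# geo-rep would use `first`; rebalance could use `full_list`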


Not sure if there's any user of node-uuid above DHT.

Xavi





Xavi


On Tue, Jun 20, 2017 at 12:46 PM, Xavier Hernandez
<xhernan...@datalab.es <mailto:xhernan...@datalab.es>
<mailto:xhernan...@datalab.es <mailto:xhernan...@datalab.es>>>
wrote:

Hi Pranith,

On 20/06/17 07:53, Pranith Kumar Karampuri wrote:

hi Xavi,
   We all made the mistake of not sending a note about changing the
behavior of
node-uuid xattr so that rebalance can use multiple nodes
for doing
rebalance. Because of this on geo-rep all the workers
are becoming
active instead of one per EC/AFR subvolume. So we are
frantically trying
to restore the functionality of node-uuid and introduce
a new
xattr for
the new behavior. Sunil will be sending out a patch for
this.


Wouldn't it be better to change geo-rep behavior to use the
new data
? I think it's better as it's now, since it gives more
information
to upper layers so that they can take more accurate 
decisions.


Xavi


--
Pranith





--
Pranith





--
Pranith




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Finding size of volume

2017-04-26 Thread Aravinda

This tool (gluster-df) provides df functionality.

http://aravindavk.in/blog/glusterdf-df-for-gluster-volumes/
https://github.com/aravindavk/glusterfs-tools

Commands are renamed from glusterdf to gluster-df; check the README for 
more details.

https://github.com/aravindavk/glusterfs-tools/blob/master/README.md


regards
Aravinda

On 04/26/2017 02:59 PM, Nux! wrote:

Hello,

No, not as client. I want to get the size of all the volumes on my cluster.
I had hoped "gluster volume status detail" would tell me this, but it does not. 
I need to mount all the volumes and run df to find out.

I am not a coder, not sure how to use gfapi.

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

- Original Message -

From: "Mohammed Rafi K C" <rkavu...@redhat.com>
To: "Nux!" <n...@li.nux.ro>, "gluster-users" <gluster-us...@gluster.org>
Cc: "Gluster Devel" <gluster-devel@gluster.org>
Sent: Wednesday, 26 April, 2017 10:26:35
Subject: Re: [Gluster-devel] Finding size of volume
I assume that you want to get the size from a client machine, rather
than nodes from trusted storage pools. If so, you can use gfapi to do a
fstat and can get the size of the volume.
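A rough sketch using the libgfapi Python bindings (the host and volume names
are placeholders, and the availability of a statvfs() call on the Volume
object is an assumption about the bindings):

from gluster import gfapi   # libgfapi-python bindings

vol = gfapi.Volume("server1", "gv1")   # placeholder host and volume name
vol.mount()
st = vol.statvfs("/")                  # statvfs of the volume root (assumed API)
total_bytes = st.f_blocks * st.f_frsize
free_bytes = st.f_bavail * st.f_frsize
vol.umount()
print(total_bytes, free_bytes)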


Regards

Rafi KC


On 04/26/2017 02:17 PM, Nux! wrote:

Hello,

Is there a way with gluster tools to show size of a volume?
I want to avoid mounting volumes and running df.

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-users mailing list
gluster-us...@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] GFID2 - Proposal to add extra byte to existing GFID

2016-12-18 Thread Aravinda


regards
Aravinda

On 12/16/2016 05:47 PM, Xavier Hernandez wrote:

On 12/16/2016 08:31 AM, Aravinda wrote:

Proposal to add one more byte to GFID to store "Type" information.
Extra byte will represent type(directory: 00, file: 01, Symlink: 02
etc)

For example, if a directory GFID is f4f18c02-0360-4cdc-8c00-0164e49a7afd
then, GFID2 will be 00f4f18c02-0360-4cdc-8c00-0164e49a7afd.

Changes to Backend store

Existing: .glusterfs/gfid[0:2]/gfid[2:4]/gfid
Proposed: .glusterfs/gfid2[0:2]/gfid2[2:4]/gfid2[4:6]/gfid2

Advantages:
---
- Automatic grouping in .glusterfs directory based on file Type.
- Easy identification of Type by looking at GFID in logs/status output
  etc.
- Crawling(Quota/AFR): List of directories can be easily fetched by
  crawling `.glusterfs/gfid2[0:2]/` directory. This enables easy
  parallel Crawling.
- Quota - Marker: Marker translator can mark xtime of current file and
  parent directory. No need to update xtime xattr of all directories
  till root.
- Geo-replication: - Crawl can be multithreaded during initial sync.
  With marker changes above it will be more effective in crawling.

Please add if there are any more advantages.

Disadvantages:

Functionality is not changed with the above change except the length
of the ID. I can't think of any disadvantages except the code changes
to accommodate this change. Let me know if I missed anything here.


One disadvantage is that 17 bytes is a very ugly number for 
structures. Compilers will add padding that will make any structure 
containing a GFID noticeably bigger. This will also cause troubles on 
all binary formats where a GFID is used, making them incompatible. One 
clear case of this is the XDR encoding of the gluster protocol. 
Currently a GFID is defined this way in many places:


opaque gfid[16]

This seems to make it quite complex to allow a mix of gluster versions 
in the same cluster (for example in a middle of an upgrade).


What about this alternative approach:

Based on the RFC4122 [1] that describes the format of an UUID, we can 
define a new structure for new GFID's using the same length.


Currently all GFID's are generated using the "random" method. This 
means that all GFID have this structure:


xxxxxxxx-xxxx-Mxxx-Nxxx-xxxxxxxxxxxx

Where N can be 8, 9, A or B, and M is 4.

There are some special GFID's that have a M=0 and N=0, for example the 
root GFID.


What I propose is to use a new variant of GFID, for example E or F 
(officially marked as reserved for future definition) or even 0 to 7. 
We could use M as an internal version for the GFID structure (defined 
by ourselves when needed). Then we could use the first 4 or 8 bits of 
each GFID as you propose, without needing to extend current GFID 
length nor risking to collide with existing GFID's.


If we are concerned about the collision probability (quite small but 
still bigger than the current version) because we lose some random 
bits, we could use N = 0..7 and leave M random. This way we get 5 more 
random bits, from which we could use 4 to represent the inode type.


I think this way everything will work smoothly with older versions 
with minimal effort.
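A toy sketch of what encoding the type in the 'N' position could look like
(purely illustrative; the type codes and helpers are assumptions, and this is
not how Gluster generates GFIDs today):

import uuid

TYPE_DIR, TYPE_FILE, TYPE_SYMLINK = 0, 1, 2   # illustrative type codes

def gfid_with_type(ftype):
    # Keep the 'N' hex digit (high nibble of byte 8) in the 0..7 range and
    # use it to carry the inode type, leaving the rest of the UUID random.
    assert 0 <= ftype <= 7
    b = bytearray(uuid.uuid4().bytes)
    b[8] = (ftype << 4) | (b[8] & 0x0F)
    return uuid.UUID(bytes=bytes(b))

def gfid_type(gfid):
    return uuid.UUID(str(gfid)).bytes[8] >> 4

g = gfid_with_type(TYPE_SYMLINK)
print(g, gfid_type(g))   # the fourth group of the UUID starts with '2'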


What do you think ?

That is a really nice suggestion.

To get the crawling advantages mentioned above, we need to make the 
backend store .glusterfs/N/gfid[0:2]/gfid[2:4]/gfid


Xavi

[1] https://www.ietf.org/rfc/rfc4122.txt



Changes:
-
- Code changes to accommodate 17 bytes GFID instead of 16 bytes(Read
  and Write)
- Migration Tool to upgrade GFIDs in Volume/Cluster

Let me know your thoughts.





___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] GFID2 - Proposal to add extra byte to existing GFID

2016-12-15 Thread Aravinda

Proposal to add one more byte to GFID to store "Type" information.
Extra byte will represent type(directory: 00, file: 01, Symlink: 02
etc)

For example, if a directory GFID is f4f18c02-0360-4cdc-8c00-0164e49a7afd
then, GFID2 will be 00f4f18c02-0360-4cdc-8c00-0164e49a7afd.

Changes to Backend store

Existing: .glusterfs/gfid[0:2]/gfid[2:4]/gfid
Proposed: .glusterfs/gfid2[0:2]/gfid2[2:4]/gfid2[4:6]/gfid2
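A tiny sketch of the path computation for both layouts (hypothetical helper
functions, shown only to illustrate the extra level of fan-out):

import os

def backend_path_existing(brick, gfid):
    # .glusterfs/gfid[0:2]/gfid[2:4]/gfid
    return os.path.join(brick, ".glusterfs", gfid[0:2], gfid[2:4], gfid)

def backend_path_proposed(brick, gfid2):
    # .glusterfs/gfid2[0:2]/gfid2[2:4]/gfid2[4:6]/gfid2
    return os.path.join(brick, ".glusterfs", gfid2[0:2], gfid2[2:4], gfid2[4:6], gfid2)

print(backend_path_proposed("/bricks/b1", "00f4f18c02-0360-4cdc-8c00-0164e49a7afd"))
# -> /bricks/b1/.glusterfs/00/f4/f1/00f4f18c02-0360-4cdc-8c00-0164e49a7afd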

Advantages:
---
- Automatic grouping in .glusterfs directory based on file Type.
- Easy identification of Type by looking at GFID in logs/status output
  etc.
- Crawling(Quota/AFR): List of directories can be easily fetched by
  crawling `.glusterfs/gfid2[0:2]/` directory. This enables easy
  parallel Crawling.
- Quota - Marker: Marker transator can mark xtime of current file and
  parent directory. No need to update xtime xattr of all directories
  till root.
- Geo-replication: - Crawl can be multithreaded during initial sync.
  With marker changes above it will be more effective in crawling.

Please add if there are any more advantages.

Disadvantages:

Functionality is not changed with the above change except the length
of the ID. I can't think of any disadvantages except the code changes
to accommodate this change. Let me know if I missed anything here.

Changes:
-
- Code changes to accommodate 17 bytes GFID instead of 16 bytes(Read
  and Write)
- Migration Tool to upgrade GFIDs in Volume/Cluster

Let me know your thoughts.

--
regards
Aravinda
http://aravindavk.in

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Tool to find directory issues in Gluster Bricks

2016-12-15 Thread Aravinda

Hi,

Yesterday I created a small utility to scan the brick and find issues 
with directories created by Gluster. The tool can detect the following issues:


- No GFID: If a directory in brick backend doesn't have `trusted.gfid` xattr
- No Parent GFID: Parent directory doesn't have a `trusted.gfid` xattr
- No Symlink: Gluster maintains a symlink in the `$BRICK/.glusterfs` 
directory for each directory; reported if the symlink is not present for a directory
- Wrong Symlink: If the symlink exists but is linked to a different 
directory (same GFID assigned to two directories)

- Invalid GFID: Invalid data in `trusted.gfid` xattr
- Invalid Parent GFID: Invalid data in `trusted.gfid` xattr of parent dir

Installation instructions are available in the repo
https://github.com/aravindavk/gluster-dir-health-check

Usage:
gluster-dir-health-check <brick-path>

Example:

gluster-dir-health-check /exports/bricks/brick1 > 
~/brick1_dir_status.txt


Grep for "NOT OK" for issues recorded in the above output file.

grep "NOT OK" ~/brick1_dir_status.txt

More details about this tool are available 
here (https://github.com/aravindavk/gluster-dir-health-check)


This is a super-young project, please let me know if you face any issues 
while using this utility.


Feel free to open issue/feature request here
https://github.com/aravindavk/gluster-dir-health-check/issues

Note: This tool is created using Rust(https://rust-lang.org)

--
regards
Aravinda

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Geo-replication updates

2016-12-06 Thread Aravinda
The Geo-replication feature got a lot of improvements in the 3.7, 3.8 and 3.9 
releases. We will write about these features soon. A few updates from 
last week:


- Kotresh discovered an awesome option in rsync (`--ignore-missing-args`) 
which helps reduce rsync retries and improve the sync performance if 
your workload involves lots of unlinks. This option is only available 
with Rsync version 3.1.0.

  Patch sent to include this rsync flag by default.
  http://review.gluster.org/16010 (under review)

- Geo-rep creates entries in the slave before triggering rsync for data 
modifications. Rsync will sync data using the aux-gfid-mount. Added the 
`--existing` flag to the rsync command to reduce rsync retries if the entry does 
not exist in the Slave (Note: entry failures are handled separately)

  http://review.gluster.org/16010 (under review)

- Improved local node detection
  The Geo-rep monitor process spawns one worker process for each local 
brick. To identify the local node it was doing a real network check; a patch was 
sent to identify the local node by comparing the Brick host UUID (from 
volume info) with the Host UUID (gluster system:: uuid get)

  http://review.gluster.org/16035 (under review)

- Added additional details to Geo-replication events for Events 
APIs(http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Events%20APIs/)

  http://review.gluster.org/15858 (merged)

- Fixed an issue with status output during Hybrid crawl
  http://review.gluster.org/15869 (merged)

- Patch sent to avoid Geo-rep workers restart when 
`--log-rsync-performance` config changes

  http://review.gluster.org/15816 (under review)

- Patch sent to fix a worker crash during cleanup and exit
  http://review.gluster.org/15686 (under review)

Last but not least, we have rewritten upstream Geo-replication 
documentation. Please let us know if it is useful.

http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Geo%20Replication/

--
regards
Aravinda
http://aravindavk.in

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] glustercli-python project updates

2016-12-06 Thread Aravinda
A project to provide Python wrappers for Gluster CLI commands. These 
wrappers help to integrate with external Python projects easily.


For example to start a Gluster volume "gv1"

from gluster.cli import volume
from gluster.cli import GlusterCmdException  # assumed export; exact import path may vary by version

try:
    volume.start("gv1")
except GlusterCmdException as e:
    print "Volume start failed:", e

These wrappers do additional processing compared to the raw Gluster XML 
outputs to provide meaningful data.

For example,
georep.status joins `gluster volume info` and `gluster georep status` to 
return output in the same sorted order as Volume info and to show offline 
status.
volume.status_detail runs `gluster volume info` and `gluster 
volume status detail` and merges the output to show the offline bricks 
status.
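For instance, fetching the merged status could look like the following (a
sketch; the exact call signature and return structure are assumptions):

from gluster.cli import volume

# Merged `volume info` + `volume status detail`, including offline bricks
status = volume.status_detail()       # a per-volume argument may also be supported
print(status)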



Repo:
-
https://github.com/gluster/glustercli-python


Install:

Install using pip command `sudo pip install glustercli` 
(https://pypi.python.org/pypi/glustercli)



What is available:
---
Volume Operations: 
start/stop/restart/create/delete/info/status_detail/optset/optreset/log_rotate/sync/clear_locks/barrier_enable/barrier_disable/profile_start/profile_stop/profile_info
Geo-rep Operations: 
gsec_create/create/start/stop/restart/delete/pause/resume/config_set/config_reset/status

Snapshot operations: activate/clone/create/deactivate/delete/restore/config
Rebalance operations: fix_layout_start/start/stop/status
Quota operations: 
inode_quota_enable/enable/disable/remove_path/remove_objects/default_soft_limit/limit_usage/limit_objects/alert_time/soft_timeout/hard_timeout

Peer operations: probe/attach/detach/status/pool
NFS Ganesha: enable/disable
Heal operations: enable/disable/full/split_brain
Bricks operations: add/remove_start/remove_stop/remove_commit/replace_commit
Bitrot operations: 
enable/disable/scrub_throttle/scrub_frequency/scrub_pause/scrub_resume

Tier operations: start/attach/detach_start/detach_stop/detach_commit

Who is using:
-
Gluster management REST APIs project: https://github.com/gluster/restapi
Gluster Geo-replication tools: 
http://aravindavk.in/blog/gluster-georep-tools


Call for participation:
---
- Integration with external projects
- Fedora/Ubuntu and other distributions packaging
- API documentation
- Tests
- Following wrappers to be implemented, feel free to send pull 
requests($SRC/gluster/cli/parsers.py)


- Bitrot scrub status
- Rebalance Status
- Quota List Paths
- Quota List Objects
- Geo-rep Config Get
- Remove Brick status
- Tier detach status
- Tier status
- Volumes List
- Heal info
- Heal Statistics
- Snapshot status
- Snapshot info
- Snapshot List
- Volume options

Thanks to Xiaohui Liu(xiaohui) for Volume profile output parsing and 
peer list parsing pull requests


https://github.com/gluster/glustercli-python/commit/bb7cad16d244101f0f298b6359fa053ca4755808
https://github.com/gluster/glustercli-python/commit/f4f0a8c7540bb94af6a9b486baaa2a2fe67c5d04

--
regards
Aravinda
http://aravindavk.in

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] Gluster Test Thursday - Release 3.9

2016-10-27 Thread Aravinda
Ack for Geo-replication and Events API features. No regressions found 
during testing, and verified all the bug fixes made for Release-3.9.


regards
Aravinda

On Wednesday 26 October 2016 08:04 PM, Aravinda wrote:

Gluster 3.9.0rc2 tarball is available here
http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.9.0rc2.tar.gz 



regards
Aravinda

On Tuesday 25 October 2016 04:12 PM, Aravinda wrote:

Hi,

Since Automated test framework for Gluster is in progress, we need 
help from Maintainers and developers to test the features and bug 
fixes to release Gluster 3.9.


In last maintainers meeting Shyam shared an idea about having a Test 
day to accelerate the testing and release.


Please participate in testing your component(s) on Oct 27, 2016. We 
will prepare the rc2 build by tomorrow and share the details before 
Test day.


RC1 Link: 
http://www.gluster.org/pipermail/maintainers/2016-September/001442.html
Release Checklist: 
https://public.pad.fsfe.org/p/gluster-component-release-checklist



Thanks and Regards
Aravinda and Pranith



___
maintainers mailing list
maintain...@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Gluster Test Thursday - Release 3.9

2016-10-26 Thread Aravinda

Gluster 3.9.0rc2 tarball is available here
http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.9.0rc2.tar.gz

regards
Aravinda

On Tuesday 25 October 2016 04:12 PM, Aravinda wrote:

Hi,

Since Automated test framework for Gluster is in progress, we need 
help from Maintainers and developers to test the features and bug 
fixes to release Gluster 3.9.


In last maintainers meeting Shyam shared an idea about having a Test 
day to accelerate the testing and release.


Please participate in testing your component(s) on Oct 27, 2016. We 
will prepare the rc2 build by tomorrow and share the details before 
Test day.


RC1 Link: 
http://www.gluster.org/pipermail/maintainers/2016-September/001442.html
Release Checklist: 
https://public.pad.fsfe.org/p/gluster-component-release-checklist



Thanks and Regards
Aravinda and Pranith



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Gluster Test Thursday - Release 3.9

2016-10-25 Thread Aravinda

Hi,

Since Automated test framework for Gluster is in progress, we need help 
from Maintainers and developers to test the features and bug fixes to 
release Gluster 3.9.


In last maintainers meeting Shyam shared an idea about having a Test day 
to accelerate the testing and release.


Please participate in testing your component(s) on Oct 27, 2016. We will 
prepare the rc2 build by tomorrow and share the details before Test day.


RC1 Link: 
http://www.gluster.org/pipermail/maintainers/2016-September/001442.html
Release Checklist: 
https://public.pad.fsfe.org/p/gluster-component-release-checklist



Thanks and Regards
Aravinda and Pranith

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Disable experimental features in Release-3.9

2016-09-30 Thread Aravinda

Hi,

Following features are not ready or not planned for 3.9 release. Do we 
have any option to disable these features using ./configure options in 
release-3.9 branch or we need to revert all the patches related to the 
feature.


- dht2
- jbr

Please add if we need to disable/remove any other features from 
release-3.9 branch


--
regards
Aravinda

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] GlusterFS 3.9 release Schedule

2016-09-09 Thread Aravinda

Hi All,

Gluster 3.9 release Schedule

Week of Sept 12-16 - Beta Tagging and Start testing
Week of Sept 19-23 - RC tagging
End of Sept 2016   - GA(General Availability) release of 3.9

Considering that beta tagging will be done in next week, is it okay to 
accept any features(Which are already Merged in Master) in release-3.9 
branch till Sept 12?


Other tasks before GA:
- Removing or disabling incomplete features or the features which are 
not ready

- Identifying Packaging issues for different distributions
- Documenting the release process so that it will be helpful for new 
maintainers

- Release notes preparation
- Testing and Documentation completeness checking.
- Blog about the release

Comments and Suggestions are Welcome.

@Pranith, please add if missed anything.

Thanks
Aravinda & Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Release-3.9 branch created

2016-09-02 Thread Aravinda

Hi,

Release 3.9 branch created yesterday. Thanks to all who contributed 
features and bug fixes for 3.9.


Please backport non feature patches to Release-3.9 as well.

Next steps:
- Release-3.9 branch stabilization for GA
- Tests coverage for the features
- RC builds for testing
- Documentation updates and Release notes update
- Initiate discussion on next release 3.10 and update Roadmap page

On behalf of
Aravinda & Pranith

--
regards
Aravinda

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Request to provide PASS flags to a patch in gerrit

2016-08-31 Thread Aravinda

+1

regards
Aravinda

On Wednesday 31 August 2016 04:23 PM, Raghavendra Talur wrote:

Hi All,

We have a test [1] which is causing hangs in NetBSD. We have not been 
able to debug the issue yet.
It could be because the bash script does not comply with posix 
guidelines or that there is a bug in the brick code.


However, as we have 3.9 merge deadline tomorrow this is causing the 
test pipeline to grow a lot and needing manual intervention.
I recommend we disable this test for now. I request Kaushal to provide 
pass flags to the patch [2] for faster merge.



[1] ./tests/features/lock_revocation.t
[2] http://review.gluster.org/#/c/15374/


Thanks,
Raghavendra Talur


___
maintainers mailing list
maintain...@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Events API new Port requirement

2016-08-28 Thread Aravinda

Thanks Niels and Joe,

Changed my patch to use 24009. Dynamic port is not required since one 
process per node.


24005 is already registered by "med-ci" as checked in /etc/services

regards
Aravinda

On Sunday 28 August 2016 02:13 PM, Niels de Vos wrote:

On Sat, Aug 27, 2016 at 02:02:43PM -0700, Joe Julian wrote:

On 08/27/2016 12:15 PM, Niels de Vos wrote:

On Sat, Aug 27, 2016 at 08:52:11PM +0530, Aravinda wrote:

Hi,

As part of Client events support, glustereventsd needs to be configured to
use a port. Following ports are already used by Gluster components.
24007  For glusterd
24008  For glusterd RDMA port management
38465  Gluster NFS service
38466  Gluster NFS service
38467  Gluster NFS service
38468  Gluster NFS service
38469  Gluster NFS service
49152-49664  512 ports for bricks

If I remember correctly, 24009+ ports were used by old GlusterFS(<3.4). In
the patch eventsd is using 24005. Please suggest which port glustereventsd
can use?(Can we use 24009 port)
http://review.gluster.org/15189

I guess you can use 24009, but it would be way better to either:

a. use glusterd-portmapper and get a dynamic port 49152+

Strongly disagree. It's enough that we have to poke dynamic holes in
firewalls already (sure it's easy with firewalld, but with hardware
firewalls or openstack security groups we just have to open a huge hole),
adding one that we also need to use as a service endpoint is too much.

Heh, yes, of course. I also think it needs quite some work in
glustereventsd to be able to use it, and then the clients need to
request the port from glusterd before they can consume events. It is the
way how other gluster clients work atm.


b. register a port at IANA, see /etc/services
 (maybe we should try to register the 24007+24008 ports in any case)

+100

This definitely has my preference too. I've always wanted to try to
register port 24007/8, and maybe the time has come to look into it.

Thanks for sharing your opinion!
Niels


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] 3.9. feature freeze status check

2016-08-27 Thread Aravinda
Updated. We have couple of enhancements in Geo-replication too. I will 
send pull request to update the Geo-rep enhancements.


regards
Aravinda

On Friday 26 August 2016 09:39 PM, Pranith Kumar Karampuri wrote:



On Fri, Aug 26, 2016 at 9:38 PM, Pranith Kumar Karampuri 
<pkara...@redhat.com <mailto:pkara...@redhat.com>> wrote:


hi,
  Now that we are almost near the feature freeze date (31st of
Aug), want to get a sense if any of the status of the features.


I meant "want to get a sense of the status of the features"


Please respond with:
1) Feature already merged
2) Undergoing review will make it by 31st Aug
3) Undergoing review, but may not make it by 31st Aug
4) Feature won't make it for 3.9.

I added the features that were not planned(i.e. not in the 3.9
roadmap page) but made it to the release and not planned but may
make it to release at the end of this mail.
If you added a feature on master that will be released as part of
3.9.0 but forgot to add it to roadmap page, please let me know I
will add it.

Here are the features planned as per the roadmap:
1) Throttling
Feature owner: Ravishankar

2) Trash improvements
Feature owners: Anoop, Jiffin

3) Kerberos for Gluster protocols:
Feature owners: Niels, Csaba

4) SELinux on gluster volumes:
Feature owners: Niels, Manikandan

5) Native sub-directory mounts:
Feature owners: Kaushal, Pranith

6) RichACL support for GlusterFS:
Feature owners: Rajesh Joseph

7) Sharemodes/Share reservations:
Feature owners: Raghavendra Talur, Poornima G, Soumya Koduri,
Rajesh Joseph, Anoop C S

8) Integrate with external resource management software
Feature owners: Kaleb Keithley, Jose Rivera

9) Python Wrappers for Gluster CLI Commands
Feature owners: Aravinda VK


Will be available in python-pip and in external repository
https://github.com/gluster/glustercli-python



10) Package and ship libgfapi-python
Feature owners: Prashant Pai

11) Management REST APIs
Feature owners: Aravinda VK

Will be available as external repository. Distributions packaging is 
stretch goal.

https://github.com/gluster/restapi



12) Events APIs
Feature owners: Aravinda VK

Main Feature is already in. Following feature is in progress, hopefully 
these patches will get merged before Aug 31.

- Client side events support
- SysvInit and other init system support



13) CLI to get state representation of a cluster from the local
glusterd pov
Feature owners: Samikshan Bairagya

14) Posix-locks Reclaim support
Feature owners: Soumya Koduri

15) Deprecate striped volumes
Feature owners: Vijay Bellur, Niels de Vos

16) Improvements in Gluster NFS-Ganesha integration
Feature owners: Jiffin Tony Thottan, Soumya Koduri

*The following need to be added to the roadmap:*

Features that made it to master already but were not planned:
1) Multi threaded self-heal in EC
Feature owner: Pranith (Did this because serkan asked for it. He
has 9PB volume, self-healing takes a long time :-/)

2) Lock revocation (Facebook patch)
Feature owner: Richard Wareing

Features that look like will make it to 3.9.0:
1) Hardware extension support for EC
Feature owner: Xavi

2) Reset brick support for replica volumes:
Feature owner: Anuradha

3) Md-cache perf improvements in smb:
Feature owner: Poornima

-- 
Pranith





--
Pranith


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Events API new Port requirement

2016-08-27 Thread Aravinda

Hi,

As part of Client events support, glustereventsd needs to be configured 
to use a port. Following ports are already used by Gluster components.

24007  For glusterd
24008  For glusterd RDMA port management
38465  Gluster NFS service
38466  Gluster NFS service
38467  Gluster NFS service
38468  Gluster NFS service
38469  Gluster NFS service
49152-49664  512 ports for bricks

If I remember correctly, 24009+ ports were used by old GlusterFS(<3.4). 
In the patch eventsd is using 24005. Please suggest which port 
glustereventsd can use?(Can we use 24009 port)

http://review.gluster.org/15189

--
regards
Aravinda

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Geo-replication]gluster geo-replication pair lost after reboot nodes with gluster version glusterfs 3.7.13

2016-08-25 Thread Aravinda

Hi,

Looks like an issue while showing the status. We will look into this. The 
Geo-rep session is safe; the issue is only with showing the status.


Please try Force Stop Geo-replication and Start

gluster volume geo-replication smb1 110.110.110.14::smb11 stop force
gluster volume geo-replication smb1 110.110.110.14::smb11 start

If issue is not resolved, please share Gluster logs with us.

regards
Aravinda

On Thursday 25 August 2016 07:19 AM, Wei-Ming Lin wrote:

Hi all,

Now I have three node CS135f55, CS1145c7 and  CS1227ac as 
geo-replication source cluster,


source volume info as follow :

Volume Name: smb1
Type: Disperse
Volume ID: ccaf6a49-75ba-48cb-821f-4ced8ed01855
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
*Brick1: CS135f55:/export/IFT_lvol_LICSLxEIxq/fs*
*Brick2: CS1145c7:/export/IFT_lvol_oDC1AuFQDr/fs*
*Brick3: CS1227ac:/export/IFT_lvol_6JG0HAWa2A/fs*
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
storage.batch-fsync-delay-usec: 0
server.allow-insecure: on
performance.stat-prefetch: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
disperse.eager-lock: off
performance.write-behind: off
performance.read-ahead: off
performance.quick-read: off
performance.open-behind: off
performance.io-cache: off
nfs.disable: on
server.manage-gids: on
performance.readdir-ahead: off
cluster.enable-shared-storage: enable
cluster.server-quorum-ratio: 51%

# gluster volume geo-replication status :

MASTER NODE  MASTER VOL  MASTER BRICK                    SLAVE USER  SLAVE                        SLAVE NODE  STATUS   CRAWL STATUS     LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------
CS1227ac     smb1        /export/IFT_lvol_6JG0HAWa2A/fs  root        ssh://110.110.110.14::smb11  CS14b550    Passive  N/A              N/A
CS135f55     smb1        /export/IFT_lvol_LICSLxEIxq/fs  root        ssh://110.110.110.14::smb11  CS1630aa    Passive  N/A              N/A
CS1145c7     smb1        /export/IFT_lvol_oDC1AuFQDr/fs  root        ssh://110.110.110.14::smb11  CS154d98    Active   Changelog Crawl  2016-08-25 08:49:26



Now when I reboot CS135f55, CS1145c7 and CS1227ac at the same time,

after all nodes come back,

I get the geo-replication status again,

and it shows:

"*No active geo-replication sessions*"

So, if I need to keep my geo-replication configuration after a source cluster 
reboot, how can I do that?


or is this a limitation for gluster geo-replication now ?

thanks.

Ivan


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Events API: Adding support for Client Events

2016-08-23 Thread Aravinda


regards
Aravinda

On Tuesday 23 August 2016 09:32 PM, Pranith Kumar Karampuri wrote:



On Tue, Aug 23, 2016 at 9:27 PM, Aravinda <avish...@redhat.com 
<mailto:avish...@redhat.com>> wrote:


Today I discussed the topic with Rajesh, Avra and Kotresh.
Summary as below:

- Instead of exposing eventsd to the external world, why not expose a
Glusterd RPC for gf_event, since Glusterd already has the logic for
backup volfile server.
- Gluster Clients talk to Glusterd using RPC; Glusterd will send the
message to the local eventsd.


Any suggestions for this approach?


If I remember correctly this is something we considered before we 
finalized on exposing eventsd. I think the reason was that this 
approach takes two hops which we didn't like in the discussion at the 
time. Did any other parameter change for reconsidering this approach?

- No extra auth required other than the existing glusterd communication
- Backup volfile server can be used for sending the message to another server's 
glusterd if one server is down

- Events traffic is not heavy, so two hops may be acceptable(?)



regards
Aravinda


On Thursday 04 August 2016 11:04 AM, Aravinda wrote:


regards
Aravinda

On 08/03/2016 09:19 PM, Vijay Bellur wrote:

On 08/02/2016 11:24 AM, Pranith Kumar Karampuri wrote:



On Tue, Aug 2, 2016 at 8:21 PM, Vijay Bellur
<vbel...@redhat.com <mailto:vbel...@redhat.com>
<mailto:vbel...@redhat.com
<mailto:vbel...@redhat.com>>> wrote:

On 08/02/2016 07:27 AM, Aravinda wrote:

Hi,

As many of you aware, Gluster Eventing feature
is available in
Master.
To add support to listen to the Events from
GlusterFS Clients
following
changes are identified

- Change in Eventsd to listen to tcp socket
instead of Unix domain
socket. This enables Client to send message to
Eventsd running in
Storage node.
- On Client connection, share Port and Token
details with Xdata
- Client gf_event will connect to this port
and pushes the
event(Includes Token)
- Eventsd validates Token, publishes events
only if Token is valid.


Is there a lifetime/renewal associated with this
token? Are there
more details on how token management is being
done? Sorry if these
are repeat questions as I might have missed
something along the
review trail!


At least in the discussion it didn't seem like we
needed any new tokens
once it is generated. Do you have any usecase?


No specific usecase right now but I am interested in
understanding more details about token lifecycle
management. Are we planning to use the same token
infrastructure described in Authentication section of [1]?

If we use the same token as in REST API then we can expire the
tokens easily without the overhead of maintaining the token
state in node. If we expire tokens then Clients have to get
new tokens once expired. Let me know if we already have any
best practice with glusterd to client communication.


Thanks,
Vijay

[1]

http://review.gluster.org/#/c/13214/6/under_review/management_rest_api.md

<http://review.gluster.org/#/c/13214/6/under_review/management_rest_api.md>






--
Pranith


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] CFP for Gluster Developer Summit

2016-08-23 Thread Aravinda

Title: Events APIs for GlusterFS
Theme: Gluster.Next

With 3.9 release Gluster will have Events APIs support. Cluster events 
will be pushed to registered Client applications in realtime.


I plan to cover the following,

- Introduction
- Demo
- List of supported Events
- How to consume Events - Example Events Client (CLI client and example 
Web App)

- Future
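As a taste of the consumption side, a minimal webhook receiver might look like
the sketch below (assuming events are delivered as HTTP POSTs with JSON bodies
to a registered webhook URL; the port and field names are assumptions):

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class EventsHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length).decode())
        # Assumed fields: event type and message payload
        print(event.get("event"), event.get("message"))
        self.send_response(200)
        self.end_headers()

HTTPServer(("0.0.0.0", 9000), EventsHandler).serve_forever()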

regards
Aravinda

On Saturday 13 August 2016 03:45 AM, Amye Scavarda wrote:



On Fri, Aug 12, 2016 at 12:48 PM, Vijay Bellur <vbel...@redhat.com 
<mailto:vbel...@redhat.com>> wrote:


Hey All,

Gluster Developer Summit 2016 is fast approaching [1] on us. We
are looking to have talks and discussions related to the following
themes in the summit:

1. Gluster.Next - focusing on features shaping the future of Gluster

2. Experience - Description of real world experience and feedback
from:
   a> Devops and Users deploying Gluster in production
   b> Developers integrating Gluster with other
ecosystems

3. Use cases  - focusing on key use cases that drive Gluster.today
and Gluster.Next

4. Stability & Performance - focusing on current improvements to
reduce our technical debt backlog

5. Process & infrastructure  - focusing on improving current
workflow, infrastructure to make life easier for all of us!

If you have a talk/discussion proposal that can be part of these
themes, please send out your proposal(s) by replying to this
thread. Please clearly mention the theme for which your proposal
is relevant when you do so. We will be ending the CFP by 12
midnight PDT on August 31st, 2016.

If you have other topics that do not fit in the themes listed,
please feel free to propose and we might be able to accommodate
some of them as lightening talks or something similar.

Please do reach out to me or Amye if you have any questions.

Thanks!
Vijay

[1] https://www.gluster.org/events/summit2016/
<https://www.gluster.org/events/summit2016/>


Annoyingly enough, the Google Doc form won't let people outside of the 
Google Apps domain view it, which is not going to be super helpful for 
this.


I'll go ahead and close the Google form, send out the talks that have 
already been added, and have the form link back to this mailing list 
post.

Thanks!

- amye


--
Amye Scavarda | a...@redhat.com <mailto:a...@redhat.com> | Gluster 
Community Lead



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Geo-rep] file in source volume do not cost capacity, but after geo-replication file in target volume does.

2016-08-09 Thread Aravinda
Geo-replication uses rsync with the --inplace option to retain the same GFID. 
Rsync does not allow using both the -S and --inplace options together.


Tar+SSH mode is sparse aware. (gluster volume geo-replication <MASTERVOL> 
<SLAVEHOST>::<SLAVEVOL> config use_tarssh True)


regards
Aravinda

On 08/08/2016 05:35 PM, Wei-Ming Lin wrote:

HI ALL,

I create a geo-rep between two volumes, detail as follow:

SRC volume info:

Volume Name: ivan
Type: Disperse
Volume ID: 6c4a1a7c-0516-47c4-a298-dabf00dd268c
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: giting4:/export/ivan1
Brick2: giting5:/export/ivan1
Brick3: giting6:/export/ivan1
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
performance.readdir-ahead: on

TAR volume info:

Volume Name: tar
Type: Disperse
Volume ID: e5147094-bd19-4b41-ae12-f6b84a89a18b
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: giting1:/export/ec1/fs
Brick2: giting2:/export/ec1/fs
Brick3: giting3:/export/ec1/fs
Options Reconfigured:
performance.readdir-ahead: on
cluster.enable-shared-storage: enable


RR status:
[root@giting4 ivan]# gluster volume geo-replication ivan giting1::tar 
status


MASTER NODE  MASTER VOL  MASTER BRICK   SLAVE USER  SLAVE         SLAVE NODE  STATUS   CRAWL STATUS     LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------
giting4      ivan        /export/ivan1  root        giting1::tar  giting3     Active   Changelog Crawl  2016-08-09 01:14:11
giting6      ivan        /export/ivan1  root        giting1::tar  giting1     Passive  N/A              N/A
giting5      ivan        /export/ivan1  root        giting1::tar  giting2     Passive  N/A              N/A


and when I use cmd :

# dd if=/dev/zero of=/mnt/ivan/10G_seek_file bs=1M count=0 seek=10240

to generate a empty file with 10G size at mount point of ivan (source 
volume),


because "count=0", this file will not cost real capacity,

# du 10G_seek_file -h will get :

8.0K10G_seek_file

and I also expect that target volume will have 10G_seek_file that do 
not cost capacity.


but when I check mount point of tar (target volume),

geo-replication happened,

but 10G_seek_file in target volume does cost capacity.

# du 10G_seek_file -h will get :

10G  10G_seek_file.

I know that geo-rep in gluster using rsync,

so I using rsync to test the same geo-rep,

found that if I do rsync  with "-S" option,

file in target volume will not cost capacity as source volume.

So, is this a bug?

or is there any configure in gluster can handle this?

thanks.









___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Geo-replication: Improving the performance during History Crawl

2016-08-08 Thread Aravinda

Thanks Vijay. Posted the initial patch for the same.
http://review.gluster.org/15110

Answers inline.

regards
Aravinda

On 08/09/2016 01:38 AM, Vijay Bellur wrote:

On 08/05/2016 04:45 AM, Aravinda wrote:

Hi,

Geo-replication has three types of Change detection(To identify the list
of files changed and to sync only those files)

1. XTime based Brick backend Crawl for initial sync
2. Historical Changelogs to sync backlogs(Files created/modified/deleted
between Worker down and start)
3. Live Changelogs - As and when changelog is rolled over, process it
and sync the changes

If initial data available in Master Volume before Geo-replication
session is created, then it does XTime based Crawl(Hybrid Crawl) and
then switches to Live Changelog mode.
After initial sync, Xtime crawl will not be used. On worker restart it
uses Historical changelogs and then switches to Live Changelogs.

Geo-replication is very slow during History Crawl if backlog changelogs
grows(If Geo-rep session was down for long time).



Do we need an upper bound on the duration allowed for the backlog 
changelog to grow? If the backlog grows beyond a certain threshold, 
should we resort to xtime based crawl as in the initial sync?
Added a 15-day cap for processing. The initial sync part is not changed; 
Geo-rep will use Xsync for the initial sync. This optimization applies only when 
the worker is down for a long time after the initial sync.



- If a same file is Created, deleted and again created, Geo-rep is
replaying the changelogs in the same manner in Slave side.
- Data sync happens GFID to GFID, So except the final GFID sync all the
other sync will fail since file not exists in Master(File may exist but
with different GFID)
  Due to these data sync and retries, Geo-rep performance is affected.

Me and Kotresh discussed about the same and came up with following
changes to Geo-replication

While processing History,

- Collect all the entry, data and meta operations in a temporary 
database


Depending on the number of changelogs and operations, creation of this 
database itself might take a non trivial amount of time. If there is 
an archival/WORM workload without any deletions, would this step be 
counter productive from a performance perspective?
The temp database is purged and recreated for each iteration. A little change 
here: entry operations are not stored in the db, only Data and Meta GFIDs 
are stored. Entry operations are processed as and when Changelogs are 
processed.



- Delete all Data and Meta GFIDs which are already unlinked as per
Changelogs


We need to delete only those GFIDs whose link count happens to be zero 
after the unlink. Would this need an additional stat()?

Valid point, will add stat before removing from data/meta list.




- Process all Entry operations in batch
- Process data and meta operations in batch
- Once the sync is complete, Update last Changelog's time as last_synced
time as usual.

Challenges:
- If worker crashes in between while doing above steps, on restart same
changelogs will be reprocessed.(Crawl done in small batches in existing,
so on failure reprocess only last partially completed last batch)
  Some of the retries can be avoided if we start maintaining details
about entry_last_synced(entry_stime) and data_last_synced(stime)
separately.



Right, this can be a significant challenge if we keep crashing at the 
same point due to an external factor or a bug in code. Having a more 
granular tracker can help in reducing the cost of a retry.
Entry operations retries can be optimized by having entry_stime xattr 
separate from stime xattr.


-Vijay





___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Geo-replication: Improving the performance during History Crawl

2016-08-05 Thread Aravinda

Hi,

Geo-replication has three types of Change detection(To identify the list 
of files changed and to sync only those files)


1. XTime based Brick backend Crawl for initial sync
2. Historical Changelogs to sync backlogs(Files created/modified/deleted 
between Worker down and start)
3. Live Changelogs - As and when changelog is rolled over, process it 
and sync the changes


If initial data available in Master Volume before Geo-replication 
session is created, then it does XTime based Crawl(Hybrid Crawl) and 
then switches to Live Changelog mode.
After initial sync, Xtime crawl will not be used. On worker restart it 
uses Historical changelogs and then switches to Live Changelogs.


Geo-replication is very slow during History Crawl if the backlog changelogs 
grow (if the Geo-rep session was down for a long time).


- If the same file is created, deleted and created again, Geo-rep 
replays the changelogs in the same manner on the Slave side.
- Data sync happens GFID to GFID, so except for the final GFID sync, all the 
other syncs will fail since the file does not exist in Master (the file may 
exist but with a different GFID)

  Due to these data syncs and retries, Geo-rep performance is affected.

Me and Kotresh discussed about the same and came up with following 
changes to Geo-replication


While processing History,

- Collect all the entry, data and meta operations in a temporary database
- Delete all Data and Meta GFIDs which are already unlinked as per 
Changelogs

- Process all Entry operations in batch
- Process data and meta operations in batch
- Once the sync is complete, Update last Changelog's time as last_synced 
time as usual.
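A rough sketch of the batching idea with a throwaway sqlite table (the table
layout and names are only illustrative, not the actual Geo-rep implementation):

import sqlite3

db = sqlite3.connect(":memory:")      # temporary db, recreated per crawl batch
db.execute("CREATE TABLE gfid_ops (gfid TEXT, op TEXT)")

def record(gfid, op):                 # op: "DATA", "META" or "UNLINK"
    db.execute("INSERT INTO gfid_ops VALUES (?, ?)", (gfid, op))

# While parsing the backlog changelogs:
record("g1", "DATA"); record("g1", "UNLINK"); record("g2", "DATA")

# Drop data/meta work for GFIDs that were later unlinked, then sync the rest in one batch
db.execute("DELETE FROM gfid_ops WHERE op != 'UNLINK' AND gfid IN "
           "(SELECT gfid FROM gfid_ops WHERE op = 'UNLINK')")
to_sync = [row[0] for row in db.execute(
    "SELECT DISTINCT gfid FROM gfid_ops WHERE op IN ('DATA', 'META')")]
print(to_sync)                        # -> ['g2']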


Challenges:
- If the worker crashes in between while doing the above steps, on restart the 
same changelogs will be reprocessed. (The crawl is done in small batches in the 
existing implementation, so on failure only the last partially completed batch 
is reprocessed.)
  Some of the retries can be avoided if we start maintaining details 
about entry_last_synced(entry_stime) and data_last_synced(stime) separately.


Let us know if any suggestions.

@Kotresh, Please add if I missed anything.

--
regards
Aravinda

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Events API: Adding support for Client Events

2016-08-03 Thread Aravinda


regards
Aravinda

On 08/03/2016 09:19 PM, Vijay Bellur wrote:

On 08/02/2016 11:24 AM, Pranith Kumar Karampuri wrote:



On Tue, Aug 2, 2016 at 8:21 PM, Vijay Bellur <vbel...@redhat.com
<mailto:vbel...@redhat.com>> wrote:

On 08/02/2016 07:27 AM, Aravinda wrote:

Hi,

As many of you aware, Gluster Eventing feature is available in
Master.
To add support to listen to the Events from GlusterFS Clients
following
changes are identified

- Change in Eventsd to listen to tcp socket instead of Unix 
domain
socket. This enables Client to send message to Eventsd 
running in

Storage node.
- On Client connection, share Port and Token details with Xdata
- Client gf_event will connect to this port and pushes the
event(Includes Token)
- Eventsd validates Token, publishes events only if Token is 
valid.



Is there a lifetime/renewal associated with this token? Are there
more details on how token management is being done? Sorry if these
are repeat questions as I might have missed something along the
review trail!


At least in the discussion it didn't seem like we needed any new tokens
once it is generated. Do you have any usecase?



No specific usecase right now but I am interested in understanding 
more details about token lifecycle management. Are we planning to use 
the same token infrastructure described in Authentication section of [1]?
If we use the same token as in REST API then we can expire the tokens 
easily without the overhead of maintaining the token state in node. If 
we expire tokens then Clients have to get new tokens once expired. Let 
me know if we already have any best practice with glusterd to client 
communication.


Thanks,
Vijay

[1] 
http://review.gluster.org/#/c/13214/6/under_review/management_rest_api.md


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Events API: Adding support for Client Events

2016-08-02 Thread Aravinda

Hi,

As many of you aware, Gluster Eventing feature is available in Master. 
To add support to listen to the Events from GlusterFS Clients following 
changes are identified


- Change in Eventsd to listen to tcp socket instead of Unix domain 
socket. This enables Client to send message to Eventsd running in 
Storage node.

- On Client connection, share Port and Token details with Xdata
- Client gf_event will connect to this port and pushes the 
event(Includes Token)

- Eventsd validates Token, publishes events only if Token is valid.


Kaushal, Pranith, Atin Please add if I missed anything.

Ref:
Events API Design: http://review.gluster.org/13115
Events API Intro & Demo: 
http://aravindavk.in/blog/10-mins-intro-to-gluster-eventing/ (CLI name 
changed from "gluster-eventing" to "gluster-eventsapi")


--
regards
Aravinda

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Gluster Events API - Help required to identify the list of Events from each component

2016-07-14 Thread Aravinda

+gluster-users

regards
Aravinda

On 07/13/2016 09:03 PM, Vijay Bellur wrote:

On 07/13/2016 10:23 AM, Aravinda wrote:

Hi,

We are working on Eventing feature for Gluster, Sent feature patch for
the same.
Design: http://review.gluster.org/13115
Patch:  http://review.gluster.org/14248
Demo: http://aravindavk.in/blog/10-mins-intro-to-gluster-eventing

Following document lists the events(mostly user driven events are
covered in the doc). Please let us know the Events from your components
to be supported by the Eventing Framework.

https://docs.google.com/document/d/1oMOLxCbtryypdN8BRdBx30Ykquj4E31JsaJNeyGJCNo/edit?usp=sharing 






Thanks for putting this together, Aravinda! Might be worth to poll 
-users ML also about events of interest.


-Vijay


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Gluster Events API - Help required to identify the list of Events from each component

2016-07-13 Thread Aravinda

Hi,

We are working on Eventing feature for Gluster, Sent feature patch for 
the same.

Design: http://review.gluster.org/13115
Patch:  http://review.gluster.org/14248
Demo: http://aravindavk.in/blog/10-mins-intro-to-gluster-eventing

The following document lists the events (mostly user-driven events are 
covered in the doc). Please let us know the events from your components 
that should be supported by the Eventing framework.


https://docs.google.com/document/d/1oMOLxCbtryypdN8BRdBx30Ykquj4E31JsaJNeyGJCNo/edit?usp=sharing

--
regards
Aravinda

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] GlusterFS 3.9 Planning

2016-06-28 Thread Aravinda

Hi All,

Are you working on any new features or enhancements to Gluster? Gluster 
3.9 is coming :)


As discussed previously in mailing lists and community meetings, we will 
have a GlusterFS release in September 2016.


Please share the details about features/enhancements you are working on. 
(Feature freeze will be around Aug 31, 2016)


If you are working on a feature please open a bug and add it to 3.9.0 
tracker listed below.


https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.9.0


The following list of features/enhancements is copied from the 3.9 roadmap 
page (https://www.gluster.org/community/roadmap/3.9). Let us know if any 
of these features are not planned for the 3.9 release.


- Throttling xlator (Owners: Ravishankar N)
- Trash Improvements (Owners: Anoop C S, Jiffin Tony Thottan)
- Kerberos for Gluster protocols (Owners: Niels de Vos, Csaba Henk)
- SELinux on Gluster Volumes (Owners: Niels de Vos, Manikandan)
- DHT2 (Owners: Shyamsundar Ranganathan, Venky Shankar, Kotresh)
- Native sub-directory mounts (Owners: Kaushal M, Pranith K)
- RichACL support for GlusterFS (Owners: Rajesh Joseph + Volunteers)
- Share modes / Share reservations (Owners: Raghavendra Talur + Poornima 
G, Soumya Koduri, Rajesh Joseph, Anoop C S)
- Integrate with external resource management software (Owners: Kaleb 
Keithley, Jose Rivera)

- Gluster Eventing (Owners: Aravinda VK)
- Gluster Management REST APIs (Owners: Aravinda VK)
- Inotify (Owners: Soumya Koduri)
- pNFS Layout Recall (Owners: Jiffin Tony Thottan, Soumya Koduri)
- iSCSI access for Gluster (Owners: Raghavendra Bhat, Vijay Bellur)
- Directory/Files filters for Geo-replication (Owners: Kotresh, Aravinda)
- Add + Remove brick with Volume Tiering (Owners: Dan Lambright)
- Volume Tiering (Owners: Dan Lambright)
- User and Group Quotas (Owners: Vijaikumar M, Manikandan)

Feel free to add new features/enhancements to the list by creating a 
feature bug and attaching it to the tracker.



--
Thanks
Aravinda & Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Release Management Process change - proposal

2016-06-03 Thread Aravinda

Hi Vijay,

We discovered two regression issues in Geo-rep, identified the fixes, and 
posted patches to master and release-3.7. Waiting for the regression test runs.


Is it possible to take these patches for 3.7.12?

Master:
BZ http://review.gluster.org/14636
http://review.gluster.org/14425

Release-3.7:
http://review.gluster.org/14637
Another patch yet to be posted.

regards
Aravinda

On 06/03/2016 08:25 AM, Vijay Bellur wrote:

On Sun, May 29, 2016 at 9:37 PM, Vijay Bellur <vbel...@redhat.com> wrote:

Since we do not have any objections to this proposal, let us do the
following for 3.7.12:

1. Treat June 1st as the cut-off for patch acceptance in release-3.7.
2. I will tag 3.7.12rc1 on June 2nd.


Gentle reminder - I will be tagging 3.7.12rc1 in about 12 hours from
now. Please merge patches needed for 3.7.12 by then. Post that,
patches would be pushed out to 3.7.13.

Thanks!
Vijay


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Review request - Gluster Eventing Feature

2016-05-24 Thread Aravinda

Hi,

I have submitted a patch for the new feature "Gluster Eventing". Please 
review the patch.


Patch:   http://review.gluster.org/14248
Design:  http://review.gluster.org/13115
Blog: http://aravindavk.in/blog/10-mins-intro-to-gluster-eventing
Demo:    https://www.youtube.com/watch?v=urzong5sKqc

Thanks

--
regards
Aravinda

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Suggest a feature redirect

2016-05-19 Thread Aravinda


regards
Aravinda

On 05/19/2016 12:37 PM, Raghavendra Talur wrote:



On Thu, May 19, 2016 at 12:10 PM, Vijay Bellur <vbel...@redhat.com 
<mailto:vbel...@redhat.com>> wrote:


On 05/19/2016 02:38 AM, Amye Scavarda wrote:



On Thu, May 19, 2016 at 12:06 PM, Vijay Bellur
<vbel...@redhat.com> wrote:

On Thu, May 19, 2016 at 2:21 AM, Raghavendra Talur
<rta...@redhat.com> wrote:
>
>
> On Wed, May 18, 2016 at 2:43 PM, Amye Scavarda
<a...@redhat.com> wrote:
>>
>>
>>
>> On Wed, May 18, 2016 at 1:46 PM, Humble Devassy Chirammal
>> <humble.deva...@gmail.com> wrote:
>>>
>>> Hi Amye,
>>>
>>> afaict, the feature proposal has to start from the glusterfs-specs
>>> (http://review.gluster.org/#/q/project:glusterfs-specs) project under
>>> review.gluster.org
>>>
>>> --Humble
>>>
>>>
>>> On Wed, May 18, 2016 at 12:15 PM, Amye Scavarda
>>> <a...@redhat.com> wrote:
>>>>
>>>> Currently on the gluster.org website,
we're directing the 'Suggest a
>>>> feature' to the old mediawiki site for 'Features'.
>>>>
http://www.gluster.org/community/documentation/index.php/Features
>>>>
>>>> In an ideal world, this would go somewhere else.
Where should
this go?
>>>> Let me know and I'll fix it.
>>>> - amye
>>>>
>>>> --
>>>> Amye Scavarda | a...@redhat.com | Gluster Community Lead
>>>>
>>>> ___
>>>> Gluster-devel mailing list
>>>> Gluster-devel@gluster.org
>>>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>>
>>>
>> Said another way, if you wanted to be able to have someone contribute a
>> feature idea, what would be the best way?
>> Bugzilla? A google form? An email into the -devel list?
>>
>
> If it is more of a request from a user it should be a Bugzilla RFE. If the user
> requested on the -devel list without a bug, then one of the community devs
> should file a bug for it.
> If it is a proposal for a feature with design details/implementation ideas
> then a patch to glusterfs-specs is encouraged.


In any case opening a discussion on gluster-devel would be ideal to
get the attention of all concerned.

-Vijay


So should this stay on the website in the first place?
- amye


Yes, the relevant workflow needs to be in the website. In the
absence of that, newcomers to the community might find it
difficult to understand our workflow.


Currently it is

<a href='http://www.gluster.org/community/documentation/index.php/Features'>Suggest a feature!</a>

which points to the old wiki.

Would it be better to have

<a href="mailto:gluster-devel@gluster.org?subject=Feature Proposal">Suggest a feature</a>

there?
Makes sense. Is it possible to send mail to this list without 
registration? Or can a moderator verify a feature mail from an unsubscribed 
user and allow it through?


Thanks,
Raghavendra Talur



-Vijay






___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Reducing the size of the glusterdocs git repository

2016-05-18 Thread Aravinda

+1 for Prashanth's approach.

regards
Aravinda

On 05/18/2016 03:55 PM, Kaushal M wrote:

On Wed, May 18, 2016 at 3:29 PM, Humble Devassy Chirammal
<humble.deva...@gmail.com> wrote:





On Wed, May 18, 2016 at 3:26 PM, Amye Scavarda <a...@redhat.com> wrote:




All in favor of the 'everyone working on this should clone again'
approach.

Both approaches require cloning again, so which on are we voting for?

1. Prashanth's approach
 - modify existing repo
 - users re clone
 - push force to their forks on github

2. Misc's approach
   - create a new minified repo
   - users clone/fork the new repo

I don't mind either.


+2 on the same thought. :)







___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Idea: Alternate Release process

2016-05-13 Thread Aravinda

Hi,

Based on the discussion in the last community meeting and previous discussions:

1. Too-frequent releases are difficult to manage (without a dedicated 
release manager).

2. Users want to see features early for testing or POC.
3. Backporting patches to more than two release branches is a pain.

Enclosed are visualizations of the existing release and support cycle 
and the proposed alternatives.


- Each grid interval is 6 months
- Green rectangle shows a supported release or LTS
- Black dots are minor releases for as long as the release is supported (once a month)
- Orange rectangle is a non-LTS release with minor releases (support ends 
when the next version is released)


Enclosed are the following images:
1. Existing release cycle and support plan (6-month release cycle, 3 
releases supported at all times)
2. Proposed alternative 1 - One LTS every year and a non-LTS stable 
release once every 2 months
3. Proposed alternative 2 - One LTS every year and a non-LTS stable 
release once every 3 months
4. Proposed alternative 3 - One LTS every year and a non-LTS stable 
release once every 4 months
5. Proposed alternative 4 - One LTS every year and a non-LTS stable 
release once every 6 months (similar to the existing plan, but only every 
alternate release becomes LTS)


Please do vote for the proposed alternatives about release intervals and 
LTS releases. You can also vote for the existing plan.


Do let me know if I missed anything.

regards
Aravinda

On 05/11/2016 12:01 AM, Aravinda wrote:


I couldn't find any solution for the backward incompatible changes. As 
you mentioned this model will not work for LTS.


How about adopting this only for non LTS releases? We will not have 
backward incompatibility problem since we need not release minor 
updates to non LTS releases.


regards
Aravinda
On 05/05/2016 04:46 PM, Aravinda wrote:


regards
Aravinda

On 05/05/2016 03:54 PM, Kaushal M wrote:

On Thu, May 5, 2016 at 11:48 AM, Aravinda <avish...@redhat.com> wrote:

Hi,

Sharing an idea to manage multiple releases without maintaining
multiple release branches and backports.

This idea is heavily inspired by the Rust release model(you may feel
exactly same except the LTS part). I think Chrome/Firefox also follows
the same model.

http://blog.rust-lang.org/2014/10/30/Stability.html

Feature Flag:
--
Compile time variable to prevent compiling featurerelated code when
disabled. (For example, ./configure--disable-geo-replication
or ./configure --disable-xml etc)

Plan
-
- Nightly build with all the features enabled(./build --nightly)

- All new patches will land in Master, if the patch belongs to a
   existing feature then it should be written behind that feature 
flag.


- If a feature is still work in progress then it will be only 
enabled in

   nightly build and not enabled in beta or stable builds.
   Once the maintainer thinks the feature is ready for testing then 
that

   feature will be enabled in beta build.

- Every 6 weeks, beta branch will be created by enabling all the
   features which maintainers thinks it is stable and previous beta
   branch will be promoted as stable.
   All the previous beta features will be enabled in stable unless it
   is marked as unstable during beta testing.

- LTS builds are same as stable builds but without enabling all the
   features. If we decide last stable build will become LTS release,
   then the feature list from last stable build will be saved as
   `features-release-.yaml`, For example:
   features-release-3.9.yaml`
   Same feature list will be used while building minor releases for 
the
   LTS. For example, `./build --stable --features 
features-release-3.8.yaml`


- Three branches, nightly/master, testing/beta, stable

To summarize,
- One stable release once in 6 weeks
- One Beta release once in 6 weeks
- Nightly builds every day
- LTS release once in 6 months or 1 year, Minor releases once in 6 
weeks.


Advantageous:
-
1. No more backports required to different release branches.(only
exceptional backports, discussed below)
2. Non feature Bugfix will never get missed in releases.
3. Release process can be automated.
4. Bugzilla process can be simplified.

Challenges:

1. Enforcing Feature flag for every patch
2. Tests also should be behind feature flag
3. New release process

Backports, Bug Fixes and Features:
--
- Release bug fix - Patch only to Master, which will be available in
   next beta/stable build.
- Urgent bug fix - Patch to Master and Backport to beta and stable
   branch, and early release stable and beta build.
- Beta bug fix - Patch to Master and Backport to Beta branch if 
urgent.
- Security fix - Patch to Master, Beta and last stable branch and 
build

   all LTS releases.
- Features - Patch only to Master, which will be available in
   stable/beta builds once feature becomes stable.

FAQs:
-
- Can a feature development take more than one release cycle(6 weeks)?
Yes, the feature will be enabled only in nightly

Re: [Gluster-devel] Release Management Process change - proposal

2016-05-13 Thread Aravinda

+1 from Geo-rep.

Will the explicit ack be done on the mailing list?

regards
Aravinda

On 05/10/2016 12:01 AM, Vijay Bellur wrote:

Hi All,

We are blocked on 3.7.12 owing to this proposal. Appreciate any
feedback on this!

Thanks,
Vijay

On Thu, Apr 28, 2016 at 11:58 PM, Vijay Bellur <vbel...@redhat.com> wrote:

Hi All,

We have encountered a spate of regressions in recent 3.7.x releases. The
3.7.x maintainers are facing additional burdens to ensure functional,
performance and upgrade correctness. I feel component maintainers should own
these aspects of stability as we own the components and understand our
components better than anybody else. In order to have more active
participation from maintainers for every release going forward, I propose
this process:

1. All component maintainers will need to provide an explicit ack about the
content and quality of their respective components before a release is
tagged.

2. A release will not be tagged if any component is not acked by a
maintainer.

3. Release managers will co-ordinate getting acks from maintainers and
perform necessary housekeeping (closing bugs etc.).

This is not entirely new and a part of this process has been outlined in the
Guidelines for Maintainers [1] document. I am inclined to enforce this
process with more vigor to ensure that we do better on quality & stability.

Thoughts, questions and feedback about the process are very welcome!

Thanks,
Vijay

[1]
http://www.gluster.org/community/documentation/index.php/Guidelines_For_Maintainers



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [smoke failure] Permission denied error while install-pygluypPYTHON

2016-05-13 Thread Aravinda

Refreshed the patch to fix glupy, systemd and mount.glusterfs files.
http://review.gluster.org/14315

regards
Aravinda

On 05/13/2016 10:25 AM, Kaushal M wrote:

On Fri, May 13, 2016 at 9:59 AM, Aravinda <avish...@redhat.com> wrote:

Sent patch to fix glupy installation issue.
http://review.gluster.org/#/c/14315/

regards
Aravinda

On 05/12/2016 11:28 PM, Aravinda wrote:

Sorry miss from my side. Updated list of files/dir which do not honour
--prefix

usr/lib/
usr/lib/systemd
usr/lib/systemd/system
usr/lib/systemd/system/glusterd.service
usr/lib/python2.7
usr/lib/python2.7/site-packages
usr/lib/python2.7/site-packages/gluster
usr/lib/python2.7/site-packages/gluster/__init__.pyo
usr/lib/python2.7/site-packages/gluster/__init__.pyc
usr/lib/python2.7/site-packages/gluster/__init__.py
usr/lib/python2.7/site-packages/gluster/glupy
usr/lib/python2.7/site-packages/gluster/glupy/__init__.pyo
usr/lib/python2.7/site-packages/gluster/glupy/__init__.pyc
usr/lib/python2.7/site-packages/gluster/glupy/__init__.py
sbin/
sbin/mount.glusterfs


Thanks for identifying the list of paths. We need to fix all of this.
I've opened a bug [1] so that this can be correctly tracked and fixed.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1335717


Things I did to find above list.
./autogen.sh
./configure --prefix=/usr/local
DESTDIR=/tmp/glusterfs make install

Then listed all the files which are in /tmp/glusterfs except
/tmp/glusterfs/usr/local

regards
Aravinda

On 05/12/2016 08:56 PM, Aravinda wrote:


regards
Aravinda

On 05/12/2016 08:23 PM, Niels de Vos wrote:

On Thu, May 12, 2016 at 04:28:40PM +0530, Aravinda wrote:

regards
Aravinda

On 05/12/2016 04:08 PM, Kaushal M wrote:

The install path should be `$DESTDIR/$PREFIX/`.

PREFIX should be the path under which the file is going to be installed.

Yes. That is substituted during ./configure if --prefix is passed, otherwise
generated Makefile will have $prefix variable. I think glupy need to
installed on /usr/lib/python2.6/site-packages/  to import python packages
globally while testing. Same rule is used to deploy systemd unit files.
(Prefix is not used)

I'm not convinced about this yet. If someone decides to use --prefix, I
think we should honour that everywhere. If that is not common, we can
introduce an additional ./configure option for the uncommon use-cases
like the Python site-packages.

Do you have a reference where the --prefix option explains that some
contents may not use it?

Following files/dirs are not honoring prefix, I am not sure about the exact
reason(for example, /var/log or /var/lib/glusterd)

sbin
sbin/mount.glusterfs
usr/lib/
usr/lib/systemd
usr/lib/systemd/system
usr/lib/systemd/system/glustereventsd.service
usr/lib/systemd/system/glusterd.service
usr/lib/python2.7
usr/lib/python2.7/site-packages
usr/lib/python2.7/site-packages/gluster
usr/lib/python2.7/site-packages/gluster/__init__.pyo
usr/lib/python2.7/site-packages/gluster/__init__.pyc
usr/lib/python2.7/site-packages/gluster/__init__.py
usr/lib/python2.7/site-packages/gluster/glupy
usr/lib/python2.7/site-packages/gluster/glupy/__init__.pyo
usr/lib/python2.7/site-packages/gluster/glupy/__init__.pyc
usr/lib/python2.7/site-packages/gluster/glupy/__init__.py
var/
var/lib
var/lib/glusterd
var/lib/glusterd/glusterfind
var/lib/glusterd/glusterfind/.keys
var/lib/glusterd/groups
var/lib/glusterd/groups/virt
var/lib/glusterd/hooks
var/lib/glusterd/hooks/1
var/lib/glusterd/hooks/1/delete
var/lib/glusterd/hooks/1/delete/post
var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py
var/lib/glusterd/hooks/1/gsync-create
var/lib/glusterd/hooks/1/gsync-create/post
var/lib/glusterd/hooks/1/gsync-create/post/S56glusterd-geo-rep-create-post.sh
var/lib/glusterd/hooks/1/reset
var/lib/glusterd/hooks/1/reset/post
var/lib/glusterd/hooks/1/reset/post/S31ganesha-reset.sh
var/lib/glusterd/hooks/1/stop
var/lib/glusterd/hooks/1/stop/pre
var/lib/glusterd/hooks/1/stop/pre/S30samba-stop.sh
var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
var/lib/glusterd/hooks/1/start
var/lib/glusterd/hooks/1/start/post
var/lib/glusterd/hooks/1/start/post/S31ganesha-start.sh
var/lib/glusterd/hooks/1/start/post/S30samba-start.sh
var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
var/lib/glusterd/hooks/1/set
var/lib/glusterd/hooks/1/set/post
var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh
var/lib/glusterd/hooks/1/set/post/S30samba-set.sh
var/lib/glusterd/hooks/1/add-brick
var/lib/glusterd/hooks/1/add-brick/pre
var/lib/glusterd/hooks/1/add-brick/pre/S28Quota-enable-root-xattr-heal.sh
var/lib/glusterd/hooks/1/add-brick/post
var/lib/glusterd/hooks/1/add-brick/post/disabled-quota-root-xattr-heal.sh
var/log
var/log/glusterfs
var/run
var/run/gluster

Thanks,
Niels


DESTDIR is a way to make it easier to package builders to collect
installed files.
It shouldn't be used as an alternative to prefix. And I think software
generally shouldn't be run from DESTDIR.

More information is available at
https://www.g

Re: [Gluster-devel] [smoke failure] Permission denied error while install-pygluypPYTHON

2016-05-12 Thread Aravinda

Sent patch to fix glupy installation issue.
http://review.gluster.org/#/c/14315/

regards
Aravinda

On 05/12/2016 11:28 PM, Aravinda wrote:
Sorry miss from my side. Updated list of files/dir which do not honour 
--prefix


usr/lib/
usr/lib/systemd
usr/lib/systemd/system
usr/lib/systemd/system/glusterd.service
usr/lib/python2.7
usr/lib/python2.7/site-packages
usr/lib/python2.7/site-packages/gluster
usr/lib/python2.7/site-packages/gluster/__init__.pyo
usr/lib/python2.7/site-packages/gluster/__init__.pyc
usr/lib/python2.7/site-packages/gluster/__init__.py
usr/lib/python2.7/site-packages/gluster/glupy
usr/lib/python2.7/site-packages/gluster/glupy/__init__.pyo
usr/lib/python2.7/site-packages/gluster/glupy/__init__.pyc
usr/lib/python2.7/site-packages/gluster/glupy/__init__.py
sbin/
sbin/mount.glusterfs


Things I did to find above list.
./autogen.sh
./configure --prefix=/usr/local
DESTDIR=/tmp/glusterfs make install

Then listed all the files which are in /tmp/glusterfs except 
/tmp/glusterfs/usr/local

regards
Aravinda
On 05/12/2016 08:56 PM, Aravinda wrote:


regards
Aravinda
On 05/12/2016 08:23 PM, Niels de Vos wrote:

On Thu, May 12, 2016 at 04:28:40PM +0530, Aravinda wrote:

regards
Aravinda

On 05/12/2016 04:08 PM, Kaushal M wrote:

The install path should be `$DESTDIR/$PREFIX/`.

PREFIX should be the path under which the file is going to be installed.

Yes. That is substituted during ./configure if --prefix is passed, otherwise
generated Makefile will have $prefix variable. I think glupy need to
installed on /usr/lib/python2.6/site-packages/  to import python packages
globally while testing. Same rule is used to deploy systemd unit files.
(Prefix is not used)

I'm not convinced about this yet. If someone decides to use --prefix, I
think we should honour that everywhere. If that is not common, we can
introduce an additional ./configure option for the uncommon use-cases
like the Python site-packages.

Do you have a reference where the --prefix option explains that some
contents may not use it?
Following files/dirs are not honoring prefix, I am not sure about the 
exact reason(for example, /var/log or /var/lib/glusterd)


sbin
sbin/mount.glusterfs
usr/lib/
usr/lib/systemd
usr/lib/systemd/system
usr/lib/systemd/system/glustereventsd.service
usr/lib/systemd/system/glusterd.service
usr/lib/python2.7
usr/lib/python2.7/site-packages
usr/lib/python2.7/site-packages/gluster
usr/lib/python2.7/site-packages/gluster/__init__.pyo
usr/lib/python2.7/site-packages/gluster/__init__.pyc
usr/lib/python2.7/site-packages/gluster/__init__.py
usr/lib/python2.7/site-packages/gluster/glupy
usr/lib/python2.7/site-packages/gluster/glupy/__init__.pyo
usr/lib/python2.7/site-packages/gluster/glupy/__init__.pyc
usr/lib/python2.7/site-packages/gluster/glupy/__init__.py
var/
var/lib
var/lib/glusterd
var/lib/glusterd/glusterfind
var/lib/glusterd/glusterfind/.keys
var/lib/glusterd/groups
var/lib/glusterd/groups/virt
var/lib/glusterd/hooks
var/lib/glusterd/hooks/1
var/lib/glusterd/hooks/1/delete
var/lib/glusterd/hooks/1/delete/post
var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py
var/lib/glusterd/hooks/1/gsync-create
var/lib/glusterd/hooks/1/gsync-create/post
var/lib/glusterd/hooks/1/gsync-create/post/S56glusterd-geo-rep-create-post.sh
var/lib/glusterd/hooks/1/reset
var/lib/glusterd/hooks/1/reset/post
var/lib/glusterd/hooks/1/reset/post/S31ganesha-reset.sh
var/lib/glusterd/hooks/1/stop
var/lib/glusterd/hooks/1/stop/pre
var/lib/glusterd/hooks/1/stop/pre/S30samba-stop.sh
var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
var/lib/glusterd/hooks/1/start
var/lib/glusterd/hooks/1/start/post
var/lib/glusterd/hooks/1/start/post/S31ganesha-start.sh
var/lib/glusterd/hooks/1/start/post/S30samba-start.sh
var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
var/lib/glusterd/hooks/1/set
var/lib/glusterd/hooks/1/set/post
var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh
var/lib/glusterd/hooks/1/set/post/S30samba-set.sh
var/lib/glusterd/hooks/1/add-brick
var/lib/glusterd/hooks/1/add-brick/pre
var/lib/glusterd/hooks/1/add-brick/pre/S28Quota-enable-root-xattr-heal.sh
var/lib/glusterd/hooks/1/add-brick/post
var/lib/glusterd/hooks/1/add-brick/post/disabled-quota-root-xattr-heal.sh
var/log
var/log/glusterfs
var/run
var/run/gluster


Thanks,
Niels



DESTDIR is a way to make it easier to package builders to collect
installed files.
It shouldn't be used as an alternative to prefix. And I think software
generally shouldn't be run from DESTDIR.

More information is available at
https://www.gnu.org/software/automake/manual/html_node/DESTDIR.html
On Thu, May 12, 2016 at 3:55 PM, Aravinda<avish...@redhat.com>  wrote:

regards
Aravinda

On 05/12/2016 02:33 PM, Niels de Vos wrote:

On Thu, May 12, 2016 at 02:01:43PM +0530, Aravinda wrote:

I checked the Makefile.am and configure.ac of glupy, looks good to me. I
don't think we have issue in glupy.

If we run make install with DESTDIR empty then
`${DESTDIR}/usr/lib/python2.

Re: [Gluster-devel] [smoke failure] Permission denied error while install-pygluypPYTHON

2016-05-12 Thread Aravinda
Sorry, a miss from my side. Updated list of files/dirs which do not honour 
--prefix:


usr/lib/
usr/lib/systemd
usr/lib/systemd/system
usr/lib/systemd/system/glusterd.service
usr/lib/python2.7
usr/lib/python2.7/site-packages
usr/lib/python2.7/site-packages/gluster
usr/lib/python2.7/site-packages/gluster/__init__.pyo
usr/lib/python2.7/site-packages/gluster/__init__.pyc
usr/lib/python2.7/site-packages/gluster/__init__.py
usr/lib/python2.7/site-packages/gluster/glupy
usr/lib/python2.7/site-packages/gluster/glupy/__init__.pyo
usr/lib/python2.7/site-packages/gluster/glupy/__init__.pyc
usr/lib/python2.7/site-packages/gluster/glupy/__init__.py
sbin/
sbin/mount.glusterfs


Things I did to find above list.
./autogen.sh
./configure --prefix=/usr/local
DESTDIR=/tmp/glusterfs make install

Then listed all the files which are in /tmp/glusterfs except 
/tmp/glusterfs/usr/local
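
For reference, the same check can be scripted. A minimal sketch, assuming the 
same /tmp/glusterfs staging directory and /usr/local prefix used above:

import os

DESTDIR = "/tmp/glusterfs"
PREFIX = "/usr/local"

# Print every installed path that landed outside the configured prefix.
for root, dirs, files in os.walk(DESTDIR):
    for name in dirs + files:
        rel = os.path.relpath(os.path.join(root, name), DESTDIR)
        if not ("/" + rel).startswith(PREFIX):
            print(rel)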


regards
Aravinda

On 05/12/2016 08:56 PM, Aravinda wrote:


regards
Aravinda
On 05/12/2016 08:23 PM, Niels de Vos wrote:

On Thu, May 12, 2016 at 04:28:40PM +0530, Aravinda wrote:

regards
Aravinda

On 05/12/2016 04:08 PM, Kaushal M wrote:

The install path should be `$DESTDIR/$PREFIX/`.

PREFIX should be the path under which the file is going to be installed.

Yes. That is substituted during ./configure if --prefix is passed, otherwise
generated Makefile will have $prefix variable. I think glupy need to
installed on /usr/lib/python2.6/site-packages/  to import python packages
globally while testing. Same rule is used to deploy systemd unit files.
(Prefix is not used)

I'm not convinced about this yet. If someone decides to use --prefix, I
think we should honour that everywhere. If that is not common, we can
introduce an additional ./configure option for the uncommon use-cases
like the Python site-packages.

Do you have a reference where the --prefix option explains that some
contents may not use it?
Following files/dirs are not honoring prefix, I am not sure about the 
exact reason(for example, /var/log or /var/lib/glusterd)


sbin
sbin/mount.glusterfs
usr/lib/
usr/lib/systemd
usr/lib/systemd/system
usr/lib/systemd/system/glustereventsd.service
usr/lib/systemd/system/glusterd.service
usr/lib/python2.7
usr/lib/python2.7/site-packages
usr/lib/python2.7/site-packages/gluster
usr/lib/python2.7/site-packages/gluster/__init__.pyo
usr/lib/python2.7/site-packages/gluster/__init__.pyc
usr/lib/python2.7/site-packages/gluster/__init__.py
usr/lib/python2.7/site-packages/gluster/glupy
usr/lib/python2.7/site-packages/gluster/glupy/__init__.pyo
usr/lib/python2.7/site-packages/gluster/glupy/__init__.pyc
usr/lib/python2.7/site-packages/gluster/glupy/__init__.py
var/
var/lib
var/lib/glusterd
var/lib/glusterd/glusterfind
var/lib/glusterd/glusterfind/.keys
var/lib/glusterd/groups
var/lib/glusterd/groups/virt
var/lib/glusterd/hooks
var/lib/glusterd/hooks/1
var/lib/glusterd/hooks/1/delete
var/lib/glusterd/hooks/1/delete/post
var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py
var/lib/glusterd/hooks/1/gsync-create
var/lib/glusterd/hooks/1/gsync-create/post
var/lib/glusterd/hooks/1/gsync-create/post/S56glusterd-geo-rep-create-post.sh
var/lib/glusterd/hooks/1/reset
var/lib/glusterd/hooks/1/reset/post
var/lib/glusterd/hooks/1/reset/post/S31ganesha-reset.sh
var/lib/glusterd/hooks/1/stop
var/lib/glusterd/hooks/1/stop/pre
var/lib/glusterd/hooks/1/stop/pre/S30samba-stop.sh
var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
var/lib/glusterd/hooks/1/start
var/lib/glusterd/hooks/1/start/post
var/lib/glusterd/hooks/1/start/post/S31ganesha-start.sh
var/lib/glusterd/hooks/1/start/post/S30samba-start.sh
var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
var/lib/glusterd/hooks/1/set
var/lib/glusterd/hooks/1/set/post
var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh
var/lib/glusterd/hooks/1/set/post/S30samba-set.sh
var/lib/glusterd/hooks/1/add-brick
var/lib/glusterd/hooks/1/add-brick/pre
var/lib/glusterd/hooks/1/add-brick/pre/S28Quota-enable-root-xattr-heal.sh
var/lib/glusterd/hooks/1/add-brick/post
var/lib/glusterd/hooks/1/add-brick/post/disabled-quota-root-xattr-heal.sh
var/log
var/log/glusterfs
var/run
var/run/gluster


Thanks,
Niels



DESTDIR is a way to make it easier to package builders to collect
installed files.
It shouldn't be used as an alternative to prefix. And I think software
generally shouldn't be run from DESTDIR.

More information is available at
https://www.gnu.org/software/automake/manual/html_node/DESTDIR.html
On Thu, May 12, 2016 at 3:55 PM, Aravinda<avish...@redhat.com>  wrote:

regards
Aravinda

On 05/12/2016 02:33 PM, Niels de Vos wrote:

On Thu, May 12, 2016 at 02:01:43PM +0530, Aravinda wrote:

I checked the Makefile.am and configure.ac of glupy, looks good to me. I
don't think we have issue in glupy.

If we run make install with DESTDIR empty then
`${DESTDIR}/usr/lib/python2.6/site-packages/gluster` will become
/usr/lib/python2.6/site-packages/gluster. So we will get that error.

For example,
  DESTDIR= make i

Re: [Gluster-devel] [smoke failure] Permission denied error while install-pygluypPYTHON

2016-05-12 Thread Aravinda


regards
Aravinda

On 05/12/2016 08:23 PM, Niels de Vos wrote:

On Thu, May 12, 2016 at 04:28:40PM +0530, Aravinda wrote:

regards
Aravinda

On 05/12/2016 04:08 PM, Kaushal M wrote:

The install path should be `$DESTDIR/$PREFIX/`.

PREFIX should be the path under which the file is going to be installed.

Yes. That is substituted during ./configure if --prefix is passed, otherwise
generated Makefile will have $prefix variable. I think glupy need to
installed on /usr/lib/python2.6/site-packages/  to import python packages
globally while testing. Same rule is used to deploy systemd unit files.
(Prefix is not used)

I'm not convinced about this yet. If someone decides to use --prefix, I
think we should honour that everywhere. If that is not common, we can
introduce an additional ./configure option for the uncommon use-cases
like the Python site-packages.

Do you have a reference where the --prefix option explains that some
contents may not use it?
The following files/dirs are not honoring the prefix; I am not sure about the 
exact reason (for example, /var/log or /var/lib/glusterd):


sbin
sbin/mount.glusterfs
usr/lib/
usr/lib/systemd
usr/lib/systemd/system
usr/lib/systemd/system/glustereventsd.service
usr/lib/systemd/system/glusterd.service
usr/lib/python2.7
usr/lib/python2.7/site-packages
usr/lib/python2.7/site-packages/gluster
usr/lib/python2.7/site-packages/gluster/__init__.pyo
usr/lib/python2.7/site-packages/gluster/__init__.pyc
usr/lib/python2.7/site-packages/gluster/__init__.py
usr/lib/python2.7/site-packages/gluster/glupy
usr/lib/python2.7/site-packages/gluster/glupy/__init__.pyo
usr/lib/python2.7/site-packages/gluster/glupy/__init__.pyc
usr/lib/python2.7/site-packages/gluster/glupy/__init__.py
var/
var/lib
var/lib/glusterd
var/lib/glusterd/glusterfind
var/lib/glusterd/glusterfind/.keys
var/lib/glusterd/groups
var/lib/glusterd/groups/virt
var/lib/glusterd/hooks
var/lib/glusterd/hooks/1
var/lib/glusterd/hooks/1/delete
var/lib/glusterd/hooks/1/delete/post
var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py
var/lib/glusterd/hooks/1/gsync-create
var/lib/glusterd/hooks/1/gsync-create/post
var/lib/glusterd/hooks/1/gsync-create/post/S56glusterd-geo-rep-create-post.sh
var/lib/glusterd/hooks/1/reset
var/lib/glusterd/hooks/1/reset/post
var/lib/glusterd/hooks/1/reset/post/S31ganesha-reset.sh
var/lib/glusterd/hooks/1/stop
var/lib/glusterd/hooks/1/stop/pre
var/lib/glusterd/hooks/1/stop/pre/S30samba-stop.sh
var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
var/lib/glusterd/hooks/1/start
var/lib/glusterd/hooks/1/start/post
var/lib/glusterd/hooks/1/start/post/S31ganesha-start.sh
var/lib/glusterd/hooks/1/start/post/S30samba-start.sh
var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
var/lib/glusterd/hooks/1/set
var/lib/glusterd/hooks/1/set/post
var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh
var/lib/glusterd/hooks/1/set/post/S30samba-set.sh
var/lib/glusterd/hooks/1/add-brick
var/lib/glusterd/hooks/1/add-brick/pre
var/lib/glusterd/hooks/1/add-brick/pre/S28Quota-enable-root-xattr-heal.sh
var/lib/glusterd/hooks/1/add-brick/post
var/lib/glusterd/hooks/1/add-brick/post/disabled-quota-root-xattr-heal.sh
var/log
var/log/glusterfs
var/run
var/run/gluster



Thanks,
Niels



DESTDIR is a way to make it easier to package builders to collect
installed files.
It shouldn't be used as an alternative to prefix. And I think software
generally shouldn't be run from DESTDIR.

More information is available at
https://www.gnu.org/software/automake/manual/html_node/DESTDIR.html
On Thu, May 12, 2016 at 3:55 PM, Aravinda <avish...@redhat.com> wrote:

regards
Aravinda

On 05/12/2016 02:33 PM, Niels de Vos wrote:

On Thu, May 12, 2016 at 02:01:43PM +0530, Aravinda wrote:

I checked the Makefile.am and configure.ac of glupy, looks good to me. I
don't think we have issue in glupy.

If we run make install with DESTDIR empty then
`${DESTDIR}/usr/lib/python2.6/site-packages/gluster` will become
/usr/lib/python2.6/site-packages/gluster. So we will get that error.

For example,
  DESTDIR= make install
  or
  make install DESTDIR=

Can we check how we are executing smoke test?

I think it is this script, no DESTDIR in there:


https://github.com/gluster/glusterfs-patch-acceptance-tests/blob/master/build.sh

My guess is that the --prefix ./configure option is not honoured?

DESTDIR will not get substitute during ./configure, it is used during make
install. Once we run ./autogen.sh and ./configure(with whatever prefix),
generated Makefile for glupy is


install-pyglupyPYTHON: $(pyglupy_PYTHON)
  @$(NORMAL_INSTALL)
  @list='$(pyglupy_PYTHON)'; dlist=; list2=; test -n "$(pyglupydir)" ||
list=; \
  if test -n "$$list"; then \
echo " $(MKDIR_P) '$(DESTDIR)$(pyglupydir)'"; \
$(MKDIR_P) "$(DESTDIR)$(pyglupydir)" || exit 1; \
  fi; \
  for p in $$list; do \
if test -f "$$p"; then b=; else b="$(srcdir)/";

Re: [Gluster-devel] [smoke failure] Permission denied error while install-pygluypPYTHON

2016-05-12 Thread Aravinda


regards
Aravinda

On 05/12/2016 04:08 PM, Kaushal M wrote:

The install path should be `$DESTDIR/$PREFIX/`.

PREFIX should be the path under which the file is going to be installed.
Yes. That is substituted during ./configure if --prefix is passed; 
otherwise the generated Makefile will have the $prefix variable. I think glupy 
needs to be installed in /usr/lib/python2.6/site-packages/ to import Python 
packages globally while testing. The same rule is used to deploy systemd 
unit files. (Prefix is not used.)

DESTDIR is a way to make it easier to package builders to collect
installed files.
It shouldn't be used as an alternative to prefix. And I think software
generally shouldn't be run from DESTDIR.

More information is available at
https://www.gnu.org/software/automake/manual/html_node/DESTDIR.html




On Thu, May 12, 2016 at 3:55 PM, Aravinda <avish...@redhat.com> wrote:

regards
Aravinda

On 05/12/2016 02:33 PM, Niels de Vos wrote:

On Thu, May 12, 2016 at 02:01:43PM +0530, Aravinda wrote:

I checked the Makefile.am and configure.ac of glupy, looks good to me. I
don't think we have issue in glupy.

If we run make install with DESTDIR empty then
`${DESTDIR}/usr/lib/python2.6/site-packages/gluster` will become
/usr/lib/python2.6/site-packages/gluster. So we will get that error.

For example,
 DESTDIR= make install
 or
 make install DESTDIR=

Can we check how we are executing smoke test?

I think it is this script, no DESTDIR in there:


https://github.com/gluster/glusterfs-patch-acceptance-tests/blob/master/build.sh

My guess is that the --prefix ./configure option is not honoured?

DESTDIR will not get substitute during ./configure, it is used during make
install. Once we run ./autogen.sh and ./configure(with whatever prefix),
generated Makefile for glupy is


install-pyglupyPYTHON: $(pyglupy_PYTHON)
 @$(NORMAL_INSTALL)
 @list='$(pyglupy_PYTHON)'; dlist=; list2=; test -n "$(pyglupydir)" ||
list=; \
 if test -n "$$list"; then \
   echo " $(MKDIR_P) '$(DESTDIR)$(pyglupydir)'"; \
   $(MKDIR_P) "$(DESTDIR)$(pyglupydir)" || exit 1; \
 fi; \
 for p in $$list; do \
   if test -f "$$p"; then b=; else b="$(srcdir)/"; fi; \
   if test -f $$b$$p; then \
 $(am__strip_dir) \
 dlist="$$dlist $$f"; \
 list2="$$list2 $$b$$p"; \
   else :; fi; \
 done; \
 for file in $$list2; do echo $$file; done | $(am__base_list) | \
 while read files; do \
   echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(pyglupydir)'"; \
   $(INSTALL_DATA) $$files "$(DESTDIR)$(pyglupydir)" || exit $$?; \
 done || exit $$?; \
 if test -n "$$dlist"; then \
   $(am__py_compile) --destdir "$(DESTDIR)" \
 --basedir "$(pyglupydir)" $$dlist; \
 else :; fi

If you run `make install` without destdir then it will install to machine's
global path depending on prefix.(If this is the case then their is genuine
"permission denied" error in the machine I think.

If we are packaging or installing to custom target, we should pass DESTDIR.

DESTDIR=/build/install make install



Niels


regards
Aravinda

On 05/12/2016 12:29 PM, Niels de Vos wrote:

On Thu, May 12, 2016 at 01:14:07AM -0400, Raghavendra Gowdappa wrote:

https://build.gluster.org/job/smoke/27674/console

06:09:06 /bin/mkdir: cannot create directory
`/usr/lib/python2.6/site-packages/gluster': Permission denied
06:09:06 make[6]: *** [install-pyglupyPYTHON] Error 1

This definitely is a bug in the installation of glupy. Nothing should
get installed under /usr, teh installation process is instructed to do
its install under /build/install.

Did someone file a bug for this yet?

Thanks,
Niels

06:09:06 make[5]: *** [install-am] Error 2
06:09:06 make[4]: *** [install-recursive] Error 1
06:09:06 make[3]: *** [install-recursive] Error 1
06:09:06 make[2]: *** [install-recursive] Error 1
06:09:06 make[1]: *** [install-recursive] Error 1
06:09:06 make: *** [install-recursive] Error 1
06:09:06 Build step 'Execute shell' marked build as failure
06:09:06 Finished: FAILURE

regards,
Raghavendra
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [smoke failure] Permission denied error while install-pygluypPYTHON

2016-05-12 Thread Aravinda


regards
Aravinda

On 05/12/2016 02:33 PM, Niels de Vos wrote:

On Thu, May 12, 2016 at 02:01:43PM +0530, Aravinda wrote:

I checked the Makefile.am and configure.ac of glupy, looks good to me. I
don't think we have issue in glupy.

If we run make install with DESTDIR empty then
`${DESTDIR}/usr/lib/python2.6/site-packages/gluster` will become
/usr/lib/python2.6/site-packages/gluster. So we will get that error.

For example,
 DESTDIR= make install
 or
 make install DESTDIR=

Can we check how we are executing smoke test?

I think it is this script, no DESTDIR in there:

   
https://github.com/gluster/glusterfs-patch-acceptance-tests/blob/master/build.sh

My guess is that the --prefix ./configure option is not honoured?
DESTDIR does not get substituted during ./configure; it is used during 
make install. Once we run ./autogen.sh and ./configure (with whatever 
prefix), the generated Makefile rule for glupy is:



install-pyglupyPYTHON: $(pyglupy_PYTHON)
@$(NORMAL_INSTALL)
@list='$(pyglupy_PYTHON)'; dlist=; list2=; test -n "$(pyglupydir)" 
|| list=; \

if test -n "$$list"; then \
  echo " $(MKDIR_P) '$(DESTDIR)$(pyglupydir)'"; \
  $(MKDIR_P) "$(DESTDIR)$(pyglupydir)" || exit 1; \
fi; \
for p in $$list; do \
  if test -f "$$p"; then b=; else b="$(srcdir)/"; fi; \
  if test -f $$b$$p; then \
$(am__strip_dir) \
dlist="$$dlist $$f"; \
list2="$$list2 $$b$$p"; \
  else :; fi; \
done; \
for file in $$list2; do echo $$file; done | $(am__base_list) | \
while read files; do \
  echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(pyglupydir)'"; \
  $(INSTALL_DATA) $$files "$(DESTDIR)$(pyglupydir)" || exit $$?; \
done || exit $$?; \
if test -n "$$dlist"; then \
  $(am__py_compile) --destdir "$(DESTDIR)" \
--basedir "$(pyglupydir)" $$dlist; \
else :; fi

If you run `make install` without DESTDIR then it will install to the 
machine's global path depending on the prefix. (If this is the case then 
there is a genuine "permission denied" error on the machine, I think.)


If we are packaging or installing to a custom target, we should pass DESTDIR:

DESTDIR=/build/install make install



Niels



regards
Aravinda

On 05/12/2016 12:29 PM, Niels de Vos wrote:

On Thu, May 12, 2016 at 01:14:07AM -0400, Raghavendra Gowdappa wrote:

https://build.gluster.org/job/smoke/27674/console

06:09:06 /bin/mkdir: cannot create directory 
`/usr/lib/python2.6/site-packages/gluster': Permission denied
06:09:06 make[6]: *** [install-pyglupyPYTHON] Error 1

This definitely is a bug in the installation of glupy. Nothing should
get installed under /usr, teh installation process is instructed to do
its install under /build/install.

Did someone file a bug for this yet?

Thanks,
Niels


06:09:06 make[5]: *** [install-am] Error 2
06:09:06 make[4]: *** [install-recursive] Error 1
06:09:06 make[3]: *** [install-recursive] Error 1
06:09:06 make[2]: *** [install-recursive] Error 1
06:09:06 make[1]: *** [install-recursive] Error 1
06:09:06 make: *** [install-recursive] Error 1
06:09:06 Build step 'Execute shell' marked build as failure
06:09:06 Finished: FAILURE

regards,
Raghavendra
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [smoke failure] Permission denied error while install-pygluypPYTHON

2016-05-12 Thread Aravinda
I checked the Makefile.am and configure.ac of glupy, and they look good to me. I 
don't think we have an issue in glupy.


If we run make install with DESTDIR empty then 
`${DESTDIR}/usr/lib/python2.6/site-packages/gluster` will become 
/usr/lib/python2.6/site-packages/gluster. So we will get that error.


For example,
DESTDIR= make install
or
make install DESTDIR=

Can we check how we are executing the smoke test?

regards
Aravinda

On 05/12/2016 12:29 PM, Niels de Vos wrote:

On Thu, May 12, 2016 at 01:14:07AM -0400, Raghavendra Gowdappa wrote:

https://build.gluster.org/job/smoke/27674/console

06:09:06 /bin/mkdir: cannot create directory 
`/usr/lib/python2.6/site-packages/gluster': Permission denied
06:09:06 make[6]: *** [install-pyglupyPYTHON] Error 1

This definitely is a bug in the installation of glupy. Nothing should
get installed under /usr, teh installation process is instructed to do
its install under /build/install.

Did someone file a bug for this yet?

Thanks,
Niels


06:09:06 make[5]: *** [install-am] Error 2
06:09:06 make[4]: *** [install-recursive] Error 1
06:09:06 make[3]: *** [install-recursive] Error 1
06:09:06 make[2]: *** [install-recursive] Error 1
06:09:06 make[1]: *** [install-recursive] Error 1
06:09:06 make: *** [install-recursive] Error 1
06:09:06 Build step 'Execute shell' marked build as failure
06:09:06 Finished: FAILURE

regards,
Raghavendra
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Introduction and demo of Gluster Eventing Feature

2016-05-11 Thread Aravinda

Hi,

Yesterday I recorded a demo and wrote a blog post about the Gluster Eventing 
feature.


http://aravindavk.in/blog/10-mins-intro-to-gluster-eventing/

Comments and Suggestions Welcome.

--
regards
Aravinda
http://aravindavk.in

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Maintainership

2016-05-11 Thread Aravinda

I would like to propose Kotresh. +2 from my side.

regards
Aravinda

On 05/11/2016 10:25 AM, Venky Shankar wrote:

Hello,

I'm wanting to relinquish maintainership for changelog[1] translator.

For the uninformed, changelog xlator is the supporting infrastructure for 
features
such as Geo-replication, Bitrot and glusterfind. However, this would eventually 
be
replaced by FDL[2] when it's ready and the dependent components either integrate
with the new (and improved) infrastructure or get redesigned.

Interested folks please reply (all) to this email. Although I would prefer 
folks who
have contributed to this feature, it does not mean others cannot speak up. 
There's
always a need for a backup maintainer who can in the course of time contribute 
and
become primary maintainer in the future.

[1]: https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L76
[2]: https://github.com/gluster/glusterfs/tree/master/xlators/experimental/fdl



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Idea: Alternate Release process

2016-05-10 Thread Aravinda
I couldn't find any solution for the backward-incompatible changes. As 
you mentioned, this model will not work for LTS.


How about adopting this only for non-LTS releases? We will not have the 
backward-incompatibility problem since we need not release minor updates 
to non-LTS releases.


regards
Aravinda

On 05/05/2016 04:46 PM, Aravinda wrote:


regards
Aravinda

On 05/05/2016 03:54 PM, Kaushal M wrote:

On Thu, May 5, 2016 at 11:48 AM, Aravinda <avish...@redhat.com> wrote:

Hi,

Sharing an idea to manage multiple releases without maintaining
multiple release branches and backports.

This idea is heavily inspired by the Rust release model(you may feel
exactly same except the LTS part). I think Chrome/Firefox also follows
the same model.

http://blog.rust-lang.org/2014/10/30/Stability.html

Feature Flag:
--
Compile time variable to prevent compiling featurerelated code when
disabled. (For example, ./configure--disable-geo-replication
or ./configure --disable-xml etc)

Plan
-
- Nightly build with all the features enabled(./build --nightly)

- All new patches will land in Master, if the patch belongs to a
   existing feature then it should be written behind that feature flag.

- If a feature is still work in progress then it will be only 
enabled in

   nightly build and not enabled in beta or stable builds.
   Once the maintainer thinks the feature is ready for testing then 
that

   feature will be enabled in beta build.

- Every 6 weeks, beta branch will be created by enabling all the
   features which maintainers thinks it is stable and previous beta
   branch will be promoted as stable.
   All the previous beta features will be enabled in stable unless it
   is marked as unstable during beta testing.

- LTS builds are same as stable builds but without enabling all the
   features. If we decide last stable build will become LTS release,
   then the feature list from last stable build will be saved as
   `features-release-.yaml`, For example:
   features-release-3.9.yaml`
   Same feature list will be used while building minor releases for the
   LTS. For example, `./build --stable --features 
features-release-3.8.yaml`


- Three branches, nightly/master, testing/beta, stable

To summarize,
- One stable release once in 6 weeks
- One Beta release once in 6 weeks
- Nightly builds every day
- LTS release once in 6 months or 1 year, Minor releases once in 6 
weeks.


Advantageous:
-
1. No more backports required to different release branches.(only
exceptional backports, discussed below)
2. Non feature Bugfix will never get missed in releases.
3. Release process can be automated.
4. Bugzilla process can be simplified.

Challenges:

1. Enforcing Feature flag for every patch
2. Tests also should be behind feature flag
3. New release process

Backports, Bug Fixes and Features:
--
- Release bug fix - Patch only to Master, which will be available in
   next beta/stable build.
- Urgent bug fix - Patch to Master and Backport to beta and stable
   branch, and early release stable and beta build.
- Beta bug fix - Patch to Master and Backport to Beta branch if urgent.
- Security fix - Patch to Master, Beta and last stable branch and build
   all LTS releases.
- Features - Patch only to Master, which will be available in
   stable/beta builds once feature becomes stable.

FAQs:
-
- Can a feature development take more than one release cycle(6 weeks)?
Yes, the feature will be enabled only in nightly build and not in
beta/stable builds. Once the feature is complete mark it as
stable so that it will be included in next beta build and stable
build.


---

Do you like the idea? Let me know what you guys think.

This reduces the number of versions that we need to maintain, which I 
like.

Having official test (beta) releases should help get features out to
testers hand faster,
and get quicker feedback.

One thing that's still not quite clear to is the issue of backwards
compatibility.
I'm still thinking it thorough and don't have a proper answer to this 
yet.

Would a new release be backwards compatible with the previous release?
Should we be maintaining compatibility with LTS releases with the
latest release?
Each LTS release will have seperate list of features to be enabled. If 
we make any breaking changes(which are not backward compatible) then 
it will affect LTS releases as you mentioned. But we should not break 
compatibility unless it is major version change like 4.0. I have to 
workout how we can handle backward incompatible changes.



With our current strategy, we at least have a long term release branch,
so we get some guarantees of compatibility with releases on the same 
branch.


As I understand the proposed approach, we'd be replacing a stable
branch with the beta branch.
So we don't have a long-term release branch (apart from LTS).
Stable branch is common for LTS releases also. Builds will be 
different using differen

Re: [Gluster-devel] Idea: Alternate Release process

2016-05-05 Thread Aravinda


regards
Aravinda

On 05/05/2016 03:54 PM, Kaushal M wrote:

On Thu, May 5, 2016 at 11:48 AM, Aravinda <avish...@redhat.com> wrote:

Hi,

Sharing an idea to manage multiple releases without maintaining
multiple release branches and backports.

This idea is heavily inspired by the Rust release model(you may feel
exactly same except the LTS part). I think Chrome/Firefox also follows
the same model.

http://blog.rust-lang.org/2014/10/30/Stability.html

Feature Flag:
--
Compile time variable to prevent compiling featurerelated code when
disabled. (For example, ./configure--disable-geo-replication
or ./configure --disable-xml etc)

Plan
-
- Nightly build with all the features enabled(./build --nightly)

- All new patches will land in Master, if the patch belongs to a
   existing feature then it should be written behind that feature flag.

- If a feature is still work in progress then it will be only enabled in
   nightly build and not enabled in beta or stable builds.
   Once the maintainer thinks the feature is ready for testing then that
   feature will be enabled in beta build.

- Every 6 weeks, beta branch will be created by enabling all the
   features which maintainers thinks it is stable and previous beta
   branch will be promoted as stable.
   All the previous beta features will be enabled in stable unless it
   is marked as unstable during beta testing.

- LTS builds are same as stable builds but without enabling all the
   features. If we decide last stable build will become LTS release,
   then the feature list from last stable build will be saved as
   `features-release-.yaml`, For example:
   features-release-3.9.yaml`
   Same feature list will be used while building minor releases for the
   LTS. For example, `./build --stable --features features-release-3.8.yaml`

- Three branches, nightly/master, testing/beta, stable

To summarize,
- One stable release once in 6 weeks
- One Beta release once in 6 weeks
- Nightly builds every day
- LTS release once in 6 months or 1 year, Minor releases once in 6 weeks.

Advantageous:
-
1. No more backports required to different release branches.(only
exceptional backports, discussed below)
2. Non feature Bugfix will never get missed in releases.
3. Release process can be automated.
4. Bugzilla process can be simplified.

Challenges:

1. Enforcing Feature flag for every patch
2. Tests also should be behind feature flag
3. New release process

Backports, Bug Fixes and Features:
--
- Release bug fix - Patch only to Master, which will be available in
   next beta/stable build.
- Urgent bug fix - Patch to Master and Backport to beta and stable
   branch, and early release stable and beta build.
- Beta bug fix - Patch to Master and Backport to Beta branch if urgent.
- Security fix - Patch to Master, Beta and last stable branch and build
   all LTS releases.
- Features - Patch only to Master, which will be available in
   stable/beta builds once feature becomes stable.

FAQs:
-
- Can a feature development take more than one release cycle(6 weeks)?
Yes, the feature will be enabled only in nightly build and not in
beta/stable builds. Once the feature is complete mark it as
stable so that it will be included in next beta build and stable
build.


---

Do you like the idea? Let me know what you guys think.


This reduces the number of versions that we need to maintain, which I like.
Having official test (beta) releases should help get features out to
testers hand faster,
and get quicker feedback.

One thing that's still not quite clear to is the issue of backwards
compatibility.
I'm still thinking it thorough and don't have a proper answer to this yet.
Would a new release be backwards compatible with the previous release?
Should we be maintaining compatibility with LTS releases with the
latest release?
Each LTS release will have a separate list of features to be enabled. If 
we make any breaking changes (which are not backward compatible) then it 
will affect LTS releases, as you mentioned. But we should not break 
compatibility unless it is a major version change like 4.0. I have to 
work out how we can handle backward-incompatible changes.



With our current strategy, we at least have a long term release branch,
so we get some guarantees of compatibility with releases on the same branch.

As I understand the proposed approach, we'd be replacing a stable
branch with the beta branch.
So we don't have a long-term release branch (apart from LTS).
The stable branch is common for LTS releases also. Builds will differ by
using different lists of features.


The example below shows a stable release every 6 weeks, and two LTS
releases with a 6-month gap (3.8 and 3.12):


LTS 1 : 3.8     3.8.1   3.8.2   3.8.3   3.8.4   3.8.5 ...
LTS 2 :                                 3.12    3.12.1 ...
Stable: 3.8     3.9     3.10    3.11    3.12    3.13 ...

A user would be upgrading from one branch to another for every release.

[Gluster-devel] Idea: Alternate Release process

2016-05-05 Thread Aravinda

Hi,

Sharing an idea to manage multiple releases without maintaining
multiple release branches and backports.

This idea is heavily inspired by the Rust release model (you may feel it
is exactly the same except for the LTS part). I think Chrome/Firefox also
follow the same model.

http://blog.rust-lang.org/2014/10/30/Stability.html

Feature Flag:
--
A compile-time variable to prevent compiling feature-related code when
the feature is disabled. (For example, ./configure --disable-geo-replication
or ./configure --disable-xml etc.)

Plan
-
- Nightly build with all the features enabled (./build --nightly)

- All new patches will land in Master; if a patch belongs to an
  existing feature then it should be written behind that feature's flag.

- If a feature is still a work in progress then it will be enabled only
  in nightly builds and not in beta or stable builds.
  Once the maintainer thinks the feature is ready for testing, that
  feature will be enabled in the beta build.

- Every 6 weeks, a beta branch will be created by enabling all the
  features which maintainers think are stable, and the previous beta
  branch will be promoted to stable.
  All the previous beta features will be enabled in stable unless they
  were marked unstable during beta testing.

- LTS builds are the same as stable builds but without enabling all the
  features. If we decide the last stable build will become an LTS release,
  then the feature list from that stable build will be saved as
  `features-release-<version>.yaml`, for example
  `features-release-3.9.yaml`.
  The same feature list will be used while building minor releases for the
  LTS. For example, `./build --stable --features features-release-3.8.yaml`
  (a rough sketch of such a build wrapper follows after this list).

- Three branches, nightly/master, testing/beta, stable
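
For illustration, a minimal sketch of what such a `./build` wrapper could
look like. This is an assumption, not an existing script: the feature-list
file format (one feature name per line) and the --enable-all-features flag
are made up for the example.

#!/bin/bash
# Hypothetical ./build wrapper: translate a saved feature list into
# ./configure flags. Usage examples from the plan above:
#   ./build --nightly
#   ./build --stable --features features-release-3.8.yaml
MODE="$1"
FEATURES_FILE="$3"   # only used with "--features <file>"

FLAGS=""
if [ "$MODE" = "--nightly" ]; then
    FLAGS="--enable-all-features"   # assumption: nightly enables everything
elif [ "$2" = "--features" ] && [ -n "$FEATURES_FILE" ]; then
    # assumption: the YAML is a simple list, one feature name per line
    while read -r feature; do
        [ -n "$feature" ] && FLAGS="$FLAGS --enable-${feature#- }"
    done < "$FEATURES_FILE"
fi

./autogen.sh && ./configure $FLAGS && make && make dist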

To summarize,
- One stable release once in 6 weeks
- One Beta release once in 6 weeks
- Nightly builds every day
- LTS release once in 6 months or 1 year, Minor releases once in 6 weeks.

Advantages:
-
1. No more backports required to different release branches (only
   exceptional backports, discussed below).
2. Non-feature bug fixes will never be missed in releases.
3. Release process can be automated.
4. Bugzilla process can be simplified.

Challenges:

1. Enforcing a feature flag for every patch
2. Tests should also be behind feature flags
3. New release process

Backports, Bug Fixes and Features:
--
- Release bug fix - Patch only to Master, which will be available in the
  next beta/stable build.
- Urgent bug fix - Patch to Master and backport to the beta and stable
  branches, with an early release of the stable and beta builds.
- Beta bug fix - Patch to Master and backport to the Beta branch if urgent.
- Security fix - Patch to Master, Beta and the last stable branch, and build
  all LTS releases.
- Features - Patch only to Master, which will be available in
  stable/beta builds once the feature becomes stable.

FAQs:
-
- Can feature development take more than one release cycle (6 weeks)?
Yes, the feature will be enabled only in nightly builds and not in
beta/stable builds. Once the feature is complete, mark it as
stable so that it will be included in the next beta and stable
builds.


---

Do you like the idea? Let me know what you guys think.

--
regards
Aravinda

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Proposal for third party sub commands for Gluster CLI

2016-04-18 Thread Aravinda

Hi,

As many of you are aware, git supports external subcommands, which
enables users to extend the functionality around git.

Create a shell script "git-hello" as below and place it anywhere in the
system (it should be available in $PATH):

#!/bin/bash
echo "Hello World"

Make this script executable (chmod +x /usr/local/bin/git-hello).

This can be executed as `git-hello` or `git hello`. (Another example is
the git-review tool, which can be executed as git-review or git review.)

Similarly, we can have subcommand support for the Gluster CLI: any
script/binary available in the PATH env with the name gluster-<subcommand>
could be executed as `gluster <subcommand>`.
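
For illustration, a rough sketch of the dispatch logic such a feature could
use. This is an assumption about how the proposal might be implemented, not
existing gluster behaviour, and the real CLI path is also an assumption:

#!/bin/bash
# Hypothetical "gluster" wrapper: if an external gluster-<subcommand>
# exists in $PATH, run it; otherwise fall through to the real CLI.
subcmd="gluster-$1"
if command -v "$subcmd" >/dev/null 2>&1; then
    shift
    exec "$subcmd" "$@"       # e.g. "gluster hello" runs gluster-hello
fi
exec /usr/sbin/gluster "$@"   # real CLI path is an assumption

With a gluster-hello script in $PATH (mirroring the git-hello example
above), `gluster hello` would then just work.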

Let me know what you guys think about this feature. I am also planning
to add this feature to glustertool
(https://github.com/gluster/glustertool).

--
regards
Aravinda

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] REST APIs and Eventing Framework - Status Update

2016-04-06 Thread Aravinda

Hi,

REST APIs and Eventing designs are in the final stages of discussion;
hopefully they will get merged soon.

http://review.gluster.org/13214
http://review.gluster.org/13115

Submitted a single big (+2885, -5) WIP patch upstream; will split it
into smaller patches for easy review.

http://review.gluster.org/13887

REST APIs:
-
[   DONE] REST Server implementation(Golang)
[   DONE] JWT Authentication(Shared Secret approach)
[   DONE] Use of Autotools for packaging and installation
[   DONE] RPMs generation
[   DONE] --disable-restapi option for ./configure
[   DONE] CLI to manage REST Server apps and configs
[   DONE] REST Client(Written in Python)
[   DONE] Systemd service file
[   DONE] Peers Attach/Detach/List APIs
[   DONE] Volume Create/Start/Stop/Restart/Delete/Info/Status APIs
[   DONE] Configuration to disable Authentication
[IN PROGRESS] Auto-create "gluster" app when REST is enabled.
[IN PROGRESS] Volume Options APIs
[IN PROGRESS] Geo-replication APIs
[IN PROGRESS] Snapshot APIs
[IN PROGRESS] Quota APIs
[IN PROGRESS] Bricks Management APIs (Add/Remove)
[IN PROGRESS] Tier APIs
[IN PROGRESS] Sharding, Bitrot APIs
[IN PROGRESS] REST APIs documentation
[IN PROGRESS] Adding REST APIs Tests
[IN PROGRESS] User/Admin documentation
[NOT STARTED] rc.d service file for non systemd distributions
[NOT STARTED] Go lang package dependency management(glide?)

Eventing:
-
[   DONE] Agent to listen to /var/run/gluster/events.sock
[   DONE] Broadcast messages to all peer nodes
[   DONE] Websocket end point to listen/watch events
[   DONE] CLI tool to list/listen to events
[   DONE] Use of Autotools for packaging and installation
[   DONE] RPMs generation
[   DONE] --disable-events option for ./configure
[   DONE] Systemd service file
[   DONE] CLI to enable/disable/start/stop events
[IN PROGRESS] C Library to send events to agent
[IN PROGRESS] Go Library to send events to agent
[IN PROGRESS] Python Library to send events to agent
[IN PROGRESS] Integration with Gluster code(Add gf_event)
[IN PROGRESS] API documentation
[IN PROGRESS] Volume Create/Start/Stop/Set/Reset/Delete Events
[IN PROGRESS] Peer Attach/Detach Events
[IN PROGRESS] Bricks Add/Remove/Replace Events
[IN PROGRESS] Volume Tier Attach/Detach Events
[IN PROGRESS] Rebalance Start/Stop Events
[IN PROGRESS] Quota Enable/Disable Events
[IN PROGRESS] Self-heal Enable/Disable Events
[IN PROGRESS] Geo-rep Create/Start/Config/Stop/Delete/Pause/Resume Events
[IN PROGRESS] Bitrot Enable/Disable/Config Events
[IN PROGRESS] Sharding Enable/Disable Events
[IN PROGRESS] Snapshot Create/Clone/Restore/Config/Delete/
  Activate/Deactivate Events
[IN PROGRESS] Change in Geo-rep Worker Status Active/Passive/Faulty
  Events
[IN PROGRESS] User/Admin documentation
[NOT STARTED] rc.d service file for non systemd distributions
[NOT STARTED] Go lang package dependency management(glide?)

--
regards
Aravinda

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Snapshot aware Geo-replication

2016-04-05 Thread Aravinda

Hi,

Gluster Snapshots and Geo-replication are not well integrated; a lot of
steps have to be performed to take a snapshot of a Gluster Volume which is
Geo-replicated. This is a proposed enhancement for Geo-replication to
understand Snapshots better and automatically handle the Slave-side snapshot.

Proposed Solution:
--
Take a Gluster Snapshot and set the Geo-replication config
`current_snapshot` using,

gluster volume geo-replication <mastervol> <slavehost>::<slavevol> \
config current_snapshot <snapname>

Geo-rep will automatically restart on config change, and the new config
will act as a switch between using the Snapshot or the Live Volume.

Geo-rep will mount the Snapshot Volume on the Master instead of the Live
Volume, so that Geo-rep can sync the changes from the Snapshot Volume
instead of the Live Volume. Along with the mount, Geo-rep should use the
backend changelogs of the snapshot bricks instead of the live bricks.

The Geo-rep worker will update stime both in the snapshot bricks and the
live bricks; this is required to prevent re-processing changelogs which
were already processed once we switch back to the live changelogs.

Once all the changes from the Snapshot are synced to the slave, the
Geo-rep worker will trigger a snapshot on the slave side. On a successful
slave snapshot, Geo-replication will automatically switch back to the Live
Volume by resetting the current_snapshot option.

Snapshot Restore:
-
Restore both the Slave and Master Volumes to the same snapshot name;
Geo-rep should work without any further changes.
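
For illustration, a rough command-level sketch of the proposal. The volume,
host and snapshot names are examples, and the current_snapshot option (and
its reset) is the proposed enhancement, not an existing geo-rep option:

# 1. Take a snapshot of the master volume
gluster snapshot create snap1 mastervol

# 2. Point Geo-rep at the snapshot; Geo-rep restarts and syncs from snap1
gluster volume geo-replication mastervol slavehost::slavevol \
    config current_snapshot snap1

# 3. After the slave-side snapshot succeeds, switch back to the live
#    volume (reset syntax shown here is an assumption)
gluster volume geo-replication mastervol slavehost::slavevol \
    config '!current_snapshot'

# Restore: stop the volumes and restore both sides to matching snapshots
gluster snapshot restore snap1          # on the master cluster
gluster snapshot restore snap1-slave    # on the slave cluster (example name)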

Challenges:
---
- Geo-rep may not work as expected if we give an old snapshot name after
  the latest snapshot name.
- Detecting the completion of the sync from the Snapshot Volume (Checkpoint?)
- Since changelogs are generated even in the Snapshot Volume, updating stime
  on the live bricks while syncing from the snapshot Volume bricks may cause
  problems when switching back to live.
- Finding the respective snapshot brick path from the live volume brick path
  may be challenging if bricks are removed/added after taking the snapshot.

--
regards
Aravinda

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Update on 3.7.10 - on schedule to be tagged at 2200PDT 30th March.

2016-03-31 Thread Aravinda

Hi Kaushal,

We have a Changelog bug which can lead to data loss if Glusterfind is
enabled (to be specific, when the changelog.capture-del-path and
changelog.changelog options are enabled on a replica volume).


http://review.gluster.org/#/c/13861/

This is a very corner case, but it would be good to have it go with the
release. We tried to merge it before the merge window for 3.7.10, but
regressions are not yet complete :(
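
For reference, a quick sketch of the option combination described above
(the volume name is an example; this assumes an existing replica volume):

gluster volume set myreplicavol changelog.changelog on
gluster volume set myreplicavol changelog.capture-del-path on

# Glusterfind sessions on such a volume rely on the captured delete
# paths, which is where the corner case above comes into play.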


Do you think we should wait for this patch?

@Kotresh can provide more details about this issue.

regards
Aravinda

On 03/31/2016 01:29 PM, Kaushal M wrote:

The last change for 3.7.10 has been merged now. Commit 2cd5b75 will be
used for the release. I'll be preparing release-notes, and tagging the
release soon.

After running verification tests and checking for any perf
improvements, I'll be making the release tarball.

Regards,
Kaushal

On Wed, Mar 30, 2016 at 7:00 PM, Kaushal M <kshlms...@gmail.com> wrote:

Hi all,

I'll be taking over the release duties for 3.7.10. Vijay is busy and
could not get the time to do a scheduled release.

The .10 release has been scheduled for tagging on the 30th (i.e. today).
In the interest of providing some heads up to developers wishing to
get changes merged, I'll be waiting till 10PM PDT, 30th March
(0500UTC/1030IST, 31st March), to tag the release.

So you have ~15 hours to get any changes required merged.

Thanks,
Kaushal

___
maintainers mailing list
maintain...@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Using geo-replication as backup solution using gluster volume snapshot!

2016-03-09 Thread Aravinda


regards
Aravinda

On 03/09/2016 06:22 PM, Shyam wrote:

On 03/09/2016 12:45 AM, Aravinda wrote:


regards
Aravinda

On 03/08/2016 11:34 PM, Atin Mukherjee wrote:


On 03/07/2016 05:13 PM, Kotresh Hiremath Ravishankar wrote:

Added gluster-users.

Thanks and Regards,
Kotresh H R

- Original Message -

From: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
To: "Gluster Devel" <gluster-devel@gluster.org>
Sent: Monday, March 7, 2016 3:03:08 PM
Subject: [Gluster-devel] Using geo-replication as backup solution
using gluster volume snapshot!

Hi All,

Here is the idea: we can use geo-replication as a backup solution using
gluster volume snapshots on the slave side. One of the drawbacks of
geo-replication is that it's a continuous asynchronous replication and
would not help in getting the last week's or yesterday's data. So if we
use gluster snapshots at the slave end, we can use the snapshots to get
the last week's or yesterday's data, making it a candidate for a backup
solution. The limitation is that the snapshots at the slave end can't be
restored as that would break the running geo-replication. They could be
mounted, and we would have access to the data from when the snapshots
were taken. It's just a naive idea. Any suggestions and use cases are
worth discussing :)
When you mention that a gluster snapshot can be taken at the slave end,
how does it guarantee that the data is available till yesterday or last
week, considering the asynchronous nature of the replication strategy?
(I've very limited knowledge on geo-replication, so I may sound stupid!)

Check my response in the same thread. Geo-rep now has a scheduler
script, which can be used to run Geo-replication whenever required.
It does the following to make sure everything is synced to Slave.

1. Stop Geo-replication if Started
2. Start Geo-replication
3. Set Checkpoint
4. Check the Status and see Checkpoint is Complete.(LOOP)
5. If checkpoint complete, Stop Geo-replication


Will this stop at the moment the checkpoint is complete, or is there a
chance that geo-rep would continue with the next changelog, but be
interrupted by this script?
Checkpoint here is just a stop notification for the scheduler script.
Geo-rep may sync some files which were created/modified after the
checkpoint time, but it makes sure that everything created/modified
before the checkpoint is in sync.


IOW, we are polling for the checkpoint, which when we detect has 
happened, may not mean geo-rep has not processed (or started 
processing) the next changelog, would this understanding be right?

It may process next changelogs.


I state/ask this, as we may want to geo-rep upto a checkpoint, which 
is a point of snapshot on the master, and not geo-rep beyond this point.
Geo-rep can't provide a point-in-time snapshot since data changes are not
recorded in changelogs. A file may have been modified after its entry was
recorded in the Changelog.


I guess I need to understand checkpoints better, if my understanding 
is incorrect.
A checkpoint is a kind of indicator that shows that all the files
created/modified before the checkpoint time have been synced.
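
For illustration, a rough sketch of that checkpoint flow using the geo-rep
CLI. The volume/host names are examples, and the exact field names in the
status output are not shown here; the scheduler script linked earlier does
the actual status parsing:

MASTER=mastervol
SLAVE=slavehost::slavevol   # example names

# Set a checkpoint at the current time
gluster volume geo-replication $MASTER $SLAVE config checkpoint now

# Poll the status output until it reports the checkpoint as completed
# (the scheduler script parses this output; a manual check would be):
gluster volume geo-replication $MASTER $SLAVE status detail

# Once the checkpoint is reported complete, everything created/modified
# before the checkpoint time is on the slave; geo-rep can be stopped and
# the slave-side snapshot taken
gluster volume geo-replication $MASTER $SLAVE stop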




Thanks and Regards,
Kotresh H R

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] sub-directory geo-replication, snapshot features

2016-03-08 Thread Aravinda


regards
Aravinda

On 03/08/2016 07:32 PM, Pranith Kumar Karampuri wrote:

hi,
 Late last week I sent a solution for how to achieve 
subdirectory-mount support with access-controls 
(http://www.gluster.org/pipermail/gluster-devel/2016-March/048537.html). 
What follows here is a short description of how other features of 
gluster volumes are implemented for sub-directories.


Please note that the sub-directories are not allowed to be accessed by 
normal mounts i.e. top-level volume mounts. All access to the 
sub-directories goes only through sub-directory mounts.


1) Geo-replication:
The direction in which we are going is to allow geo-replicating just 
some sub-directories and not all of the volume based on options. When 
these options are set, server xlators populate extra information in 
the frames/xdata to write changelog for the fops coming from their 
sub-directory mounts. changelog xlator on seeing this will only 
geo-replicate the files/directories that are in the changelog. Thus 
only the sub-directories are geo-replicated. There is also a 
suggestion from Vijay and Aravinda to have separate domains for 
operations inside sub-directories for changelogs.
We can additionally record subdir/client info in the Changelog to
differentiate I/O belonging to each subdir, instead of having separate
domains for changelogs.


Just a note: Geo-replication expects the target to be a Gluster Volume and
not just a directory. If subdir1 is to be replicated to remote site A and
subdir2 to remote site B, then we should have two Geo-rep sessions from the
Master to two remote volumes in remote site A and site B respectively
(with the subdir filter set accordingly).
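
For illustration, a rough sketch of that two-session layout. The volume and
host names are examples; the create/start commands are the usual geo-rep
setup, while the per-subdir filter is the proposed piece and is shown only
as a hypothetical option:

# One session per destination site, from the same master volume
gluster volume geo-replication mastervol siteA-host::remvolA create push-pem
gluster volume geo-replication mastervol siteB-host::remvolB create push-pem

# proposed (hypothetical syntax): restrict each session to its subdirectory
#   gluster volume geo-replication mastervol siteA-host::remvolA \
#       config subdir-filter /subdir1

gluster volume geo-replication mastervol siteA-host::remvolA start
gluster volume geo-replication mastervol siteB-host::remvolB start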


2) Sub-directory snapshots using lvm
Every time a sub-directory needs to be created, our idea is that the
admin executes a subvolume creation command which creates a mount to an
empty snapshot at the given sub-directory name. All these directories can
be modified in parallel and we can take individual snapshots of each of
the directories. We will be providing a detailed list of commands to do
this once they are fleshed out. At the moment these are the directions we
are taking to increase granularity from volume to subdirectory for the
main features.


Pranith


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Using geo-replication as backup solution using gluster volume snapshot!

2016-03-08 Thread Aravinda

Awesome idea!

Now Geo-rep can be used to take daily/monthly backup. With Gluster
3.7.9, we can run Geo-replication whenever required instead of running
all the time.

http://review.gluster.org/13510 (geo-rep: Script to Schedule
Geo-replication)

Full Backup:

1. Create Geo-rep session.
2. Run Schedule Geo-rep script
3. Take Gluster Volume Snapshot at Slave side.

Incremental(Daily):
---
1. Run Schedule Geo-rep script
2. Take Gluster Volume Snapshot at Slave side

Note: Delete old snapshots regularly.

Restore:

Depending on the snapshot you want to recover, clone the snapshot to
create a new Volume and then establish Geo-replication from the cloned
Volume to a new volume wherever required.

If we need to restore any specific file/directory, then just mount the
snapshot and copy the data to the required location.
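
For illustration, a rough sketch of the daily (incremental) flow above as a
cron-able script. The volume/host names are examples, and the path of the
scheduler script may differ depending on how it is installed:

#!/bin/bash
MASTER=mastervol
SLAVEHOST=slavehost
SLAVEVOL=slavevol

# 1. Run the scheduler script: start geo-rep, sync up to a checkpoint, stop
#    (script path is an assumption; adjust for your installation)
python /usr/share/glusterfs/scripts/schedule_georep.py \
    $MASTER $SLAVEHOST $SLAVEVOL

# 2. Take a Gluster Volume Snapshot at the slave side
ssh root@$SLAVEHOST "gluster snapshot create backup-$(date +%Y%m%d) $SLAVEVOL"

# 3. Deleting old snapshots regularly is left to site policy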

regards
Aravinda

On 03/07/2016 05:13 PM, Kotresh Hiremath Ravishankar wrote:

Added gluster-users.

Thanks and Regards,
Kotresh H R

- Original Message -

From: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
To: "Gluster Devel" <gluster-devel@gluster.org>
Sent: Monday, March 7, 2016 3:03:08 PM
Subject: [Gluster-devel] Using geo-replication as backup solution using gluster 
volume snapshot!

Hi All,

Here is the idea: we can use geo-replication as a backup solution using
gluster volume snapshots on the slave side. One of the drawbacks of
geo-replication is that it's a continuous asynchronous replication and
would not help in getting the last week's or yesterday's data. So if we
use gluster snapshots at the slave end, we can use the snapshots to get
the last week's or yesterday's data, making it a candidate for a backup
solution. The limitation is that the snapshots at the slave end can't be
restored as that would break the running geo-replication. They could be
mounted, and we would have access to the data from when the snapshots
were taken. It's just a naive idea. Any suggestions and use cases are
worth discussing :)


Thanks and Regards,
Kotresh H R

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] REST API authentication: JWT - Shared Token vs Shared Secret

2016-03-03 Thread Aravinda


regards
Aravinda

On 03/03/2016 05:58 PM, Kaushal M wrote:

On Thu, Mar 3, 2016 at 2:39 PM, Aravinda <avish...@redhat.com> wrote:

Thanks.

We can use the shared secret approach if the https requirement can be
completely avoided. I am not sure how to use the same SSL certificates on
all the nodes of the cluster. (REST API server patch set 2 was written
based on the shared secret method, using custom HMAC signing:
http://review.gluster.org/#/c/13214/2/in_progress/management_rest_api.md)

Listing the steps involved in each side with both the
approaches. (Skipping Register steps since it is common to both)

Shared Token:
-
Client side:
1. Add saved token Authorization header and initiate a REST call.
2. If UnAuthorized, call /token and get access_token again and repeat
the step 1

Server side:
1. Verify JWT using the Server's secret.

You forgot the part where server generates the token. :)

Oh Yes. I missed that step :)




Shared Secret:
--
Client side:
1. Hash the Method + URL + Params and include in qsh claim of JWT
2. Using shared secret, create JWT.
3. Add previously generated JWT in Authorization header and initiate
REST call

Server side:
1. Recalculate the hash using same details (Method + URL + Params) and
verify with received qsh
2. Do not trust any claims, validate against the values stored in
Server(role/group/capabilities)
3. Verify JWT using the shared secret


Anyways, I'm still not sure which of the two approaches I like better.
My google research on this topic (ReST api authentication) led to many
results which followed a token approach.
This causes me to lean slightly towards shared tokens.
Yeah, shared token is the widely used method, but https is mandatory to
avoid URL tampering, and using SSL certs in a multi-node setup is tricky
(using the same cert files across all nodes).


With the shared secret approach it is difficult to test using curl or
Postman (https://www.getpostman.com/).




Since I can't decide, I plan to write down the workflows involved for
both and try to compare them that way.
It would probably help arrive at a decision. I'll try to share this
ASAP (probably this weekend).

Thanks.



regards
Aravinda


On 03/03/2016 11:49 AM, Luis Pabon wrote:

Hi Aravinda,
Very good summary.  I would like to rephrase a few parts.

On the shared token approach, the disadvantage is that the server will be
more complicated (not *really* complicated, just more than the shared
token), because it would need a login mechanism.  Server would have to both
authenticate and authorize the user.  Once this has occurred a token with an
expiration date can be handed back to the caller.

On the shared secret approach, I do not consider the client creating a JWT
a disadvantage (unless you are doing it in C), it is pretty trivial for
programs written in Python, Go, Javascript etc to create a JWT on each call.

- Luis

- Original Message -
From: "Aravinda" <avish...@redhat.com>
To: "Gluster Devel" <gluster-devel@gluster.org>
Cc: "Kaushal Madappa" <kmada...@redhat.com>, "Atin Mukherjee"
<amukh...@redhat.com>, "Luis Pabon" <lpa...@redhat.com>,
kmayi...@redhat.com, "Prashanth Pai" <p...@redhat.com>
Sent: Wednesday, March 2, 2016 1:53:00 AM
Subject: REST API authentication: JWT - Shared Token vs Shared Secret

Hi,

For Gluster REST project we are planning to use JSON Web Token for
authentication. There are two approaches to use JWT, please help us to
evaluate between these two options.

http://jwt.io/

For both approaches, the user/app will register with a Username and Secret.

Shared Token Approach: (default as per the JWT website
http://jwt.io/introduction/)
--
The server will generate a JWT with a pre-configured expiry once the user
logs in to the server by providing the Username and Secret. The secret is
encrypted and stored in the Server. Clients should include that JWT in all
requests.

Advantages:
1. Clients need not worry about JWT signing.
2. A single secret at the server side can be used for all token verification.
3. This is a stateless authentication mechanism as the user state is
  never saved in the server memory (http://jwt.io/introduction/).
4. The secret is encrypted and stored in the Server.

Disadvantages:
1. URL tampering can be prevented only by using HTTPS.

Shared Secret Approach:
---
The secret will not be encrypted on the server side because the secret is
required for JWT signing and verification. Clients will sign every
request using the Secret and send that signature along with the
request. The Server will sign again using the same secret to check that
the signatures match.

Advantages:
1. Protection against URL tampering without HTTPS.
2. Different expiry time management based on issued time.

Disadvantages:
1. Clients should be aware of JWT and signing.
2. Shared secrets will be stored in plain text format on the server.
3. Every request should look up the shared secret per user.

Re: [Gluster-devel] REST API authentication: JWT - Shared Token vs Shared Secret

2016-03-03 Thread Aravinda

Thanks.

We can use the shared secret approach if the https requirement can be
completely avoided. I am not sure how to use the same SSL certificates on
all the nodes of the cluster. (REST API server patch set 2 was written
based on the shared secret method, using custom HMAC signing:
http://review.gluster.org/#/c/13214/2/in_progress/management_rest_api.md)

Listing the steps involved on each side with both approaches
(skipping the Register step since it is common to both).

Shared Token:
-
Client side (see the curl sketch below):
1. Add the saved token to the Authorization header and initiate a REST call.
2. If Unauthorized, call /token to get a new access_token and repeat
   step 1.

Server side:
1. Verify the JWT using the Server's secret.
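
For illustration, a rough client-side sketch with curl. The endpoint paths,
port and JSON field names are assumptions made up for the example, not the
actual API:

# Login once and keep the token (naive JSON extraction, sketch only;
# a real client would use a proper JSON parser)
TOKEN=$(curl -s -X POST https://server:8080/token \
        -d 'username=myapp&secret=s3cr3t' \
        | sed 's/.*"access_token" *: *"\([^"]*\)".*/\1/')

# Reuse the token until the server answers 401, then fetch a new one
curl -s -H "Authorization: Bearer $TOKEN" https://server:8080/v1/volumes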


Shared Secret:
--
Client side (see the signing sketch below):
1. Hash the Method + URL + Params and include it in the qsh claim of the JWT.
2. Using the shared secret, create the JWT.
3. Add the previously generated JWT in the Authorization header and initiate
   the REST call.

Server side:
1. Recalculate the hash using the same details (Method + URL + Params) and
   verify it against the received qsh.
2. Do not trust any claims; validate against the values stored in the
   Server (role/group/capabilities).
3. Verify JWT using the shared secret
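
For illustration, a minimal shell sketch of that signing flow using openssl
(HS256). The claim names other than qsh, the exact qsh construction, the
endpoint and the port are assumptions for the example; a real client would
use a JWT library instead:

SECRET="s3cr3t"
APPID="myapp"
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

# qsh: hash of the request method + URL path (+ params), as described above
QSH=$(printf 'GET&/v1/volumes&' | openssl dgst -sha256 | awk '{print $NF}')

HEADER=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
CLAIMS=$(printf '{"iss":"%s","qsh":"%s","exp":%d}' \
         "$APPID" "$QSH" $(( $(date +%s) + 300 )) | b64url)
SIG=$(printf '%s.%s' "$HEADER" "$CLAIMS" | \
      openssl dgst -sha256 -hmac "$SECRET" -binary | b64url)

# Send the signed JWT with the request
curl -s -H "Authorization: Bearer $HEADER.$CLAIMS.$SIG" \
     http://server:8080/v1/volumes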

regards
Aravinda

On 03/03/2016 11:49 AM, Luis Pabon wrote:

Hi Aravinda,
   Very good summary.  I would like to rephrase a few parts.

On the shared token approach, the disadvantage is that the server will be more 
complicated (not *really* complicated, just more than the shared token), 
because it would need a login mechanism.  Server would have to both 
authenticate and authorize the user.  Once this has occurred a token with an 
expiration date can be handed back to the caller.

On the shared secret approach, I do not consider the client creating a JWT a 
disadvantage (unless you are doing it in C), it is pretty trivial for programs 
written in Python, Go, Javascript etc to create a JWT on each call.

- Luis

- Original Message -
From: "Aravinda" <avish...@redhat.com>
To: "Gluster Devel" <gluster-devel@gluster.org>
Cc: "Kaushal Madappa" <kmada...@redhat.com>, "Atin Mukherjee" <amukh...@redhat.com>, "Luis Pabon" 
<lpa...@redhat.com>, kmayi...@redhat.com, "Prashanth Pai" <p...@redhat.com>
Sent: Wednesday, March 2, 2016 1:53:00 AM
Subject: REST API authentication: JWT - Shared Token vs Shared Secret

Hi,

For Gluster REST project we are planning to use JSON Web Token for
authentication. There are two approaches to use JWT, please help us to
evaluate between these two options.

http://jwt.io/

For both approaches, the user/app will register with a Username and Secret.

Shared Token Approach: (default as per the JWT website
http://jwt.io/introduction/)
--
The server will generate a JWT with a pre-configured expiry once the user
logs in to the server by providing the Username and Secret. The secret is
encrypted and stored in the Server. Clients should include that JWT in all
requests.

Advantages:
1. Clients need not worry about JWT signing.
2. A single secret at the server side can be used for all token verification.
3. This is a stateless authentication mechanism as the user state is
 never saved in the server memory (http://jwt.io/introduction/).
4. The secret is encrypted and stored in the Server.

Disadvantages:
1. URL tampering can be prevented only by using HTTPS.

Shared Secret Approach:
---
The secret will not be encrypted on the server side because the secret is
required for JWT signing and verification. Clients will sign every
request using the Secret and send that signature along with the
request. The Server will sign again using the same secret to check that
the signatures match.

Advantages:
1. Protection against URL tampering without HTTPS.
2. Different expiry time management based on issued time.

Disadvantages:
1. Clients should be aware of JWT and signing.
2. Shared secrets will be stored in plain text format on the server.
3. Every request should look up the shared secret per user.



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

