[Gluster-devel] Fwd: [FOSDEM] CfP Software Defined Storage devroom FOSDEM23

2022-11-16 Thread Niels de Vos
Hi!

In a few months' time FOSDEM will host an in-person Software Defined
Storage devroom again. It would be a great opportunity to show what the
Gluster project has been doing, and what is planned for the future.
Please consider proposing a talk!

Thanks,
Niels


- Forwarded message from Jan Fajerski  -

> From: Jan Fajerski 
> To: fos...@lists.fosdem.org
> Cc: devroom-manag...@lists.fosdem.org
> Date: Thu, 10 Nov 2022 10:49:51 +0100
> Subject: [FOSDEM] CfP Software Defined Storage devroom FOSDEM23
> 
> FOSDEM is a free software event that offers open source communities a place to
> meet, share ideas and collaborate.  It is well known for being highly
> developer-oriented and in the past brought together 8000+ participants from
> all over the world.  Its home is in the city of Brussels (Belgium).
> 
> FOSDEM 2023 will take place as an in-person event during the weekend of 
> February
> 4./5. 2023. More details about the event can be found at http://fosdem.org/
> 
> ** Call For Participation
> 
> The Software Defined Storage devroom will go into its seventh round for talks
> around Open Source Software Defined Storage projects, management tools
> and real world deployments.
> 
> Presentation topics could include but are not limited to:
> 
> - Your work on a SDS project like Ceph, Gluster, OpenEBS, CORTX or Longhorn
> 
> - Your work on or with SDS related projects like OpenStack SWIFT or Container
> Storage Interface
> 
> - Management tools for SDS deployments
> 
> - Monitoring tools for SDS clusters
> 
> ** Important dates:
> 
> - Dec 10th 2022:  submission deadline for talk proposals
> - Dec 15th 2022:  announcement of the final schedule
> - Feb  4th 2023:  Software Defined Storage dev room
> 
> Talk proposals will be reviewed by a steering committee:
> - Niels de Vos (Red Hat)
> - Jan Fajerski (Red Hat)
> - TBD
> 
> We also welcome additional volunteers to help with making this devroom a
> success.
> 
> Use the FOSDEM 'pentabarf' tool to submit your proposal:
> https://penta.fosdem.org/submission/FOSDEM23
> 
> - If necessary, create a Pentabarf account and activate it.
> Please reuse your account from previous years if you have
> already created it.
> https://penta.fosdem.org/user/new_account/FOSDEM23
> 
> - In the "Person" section, provide First name, Last name
> (in the "General" tab), Email (in the "Contact" tab)
> and Bio ("Abstract" field in the "Description" tab).
> 
> - Submit a proposal by clicking on "Create event".
> 
> - If you plan to register your proposal in several tracks to increase your
> chances, don't! Register your talk once, in the most accurate track.
> 
> - Presentations have to be pre-recorded before the event and will be streamed
> on   the event weekend.
> 
> - Important! Select the "Software Defined Storage devroom" track
> (on the "General" tab).
> 
> - Provide the title of your talk ("Event title" in the "General" tab).
> 
> - Provide a description of the subject of the talk and the
> intended audience (in the "Abstract" field of the "Description" tab)
> 
> - Provide a rough outline of the talk or goals of the session (a short
> list of bullet points covering topics that will be discussed) in the
> "Full description" field in the "Description" tab
> 
> - Provide an expected length of your talk in the "Duration" field.
>   We suggest a length between 15 and 45 minutes.
> 
> ** Recording of talks
> 
> The FOSDEM organizers plan to have live streaming and recording fully working,
> both for remote/later viewing of talks, and so that people can watch streams
> in the hallways when rooms are full. This requires speakers to consent to
> being recorded and streamed. If you plan to be a speaker, please understand
> that by doing so you implicitly give consent for your talk to be recorded and
> streamed. The recordings will be published under the same license as all
> FOSDEM content (CC-BY).
> 
> Hope to hear from you soon! And please forward this announcement.
> 
> If you have any further questions, please write to the mailing list at
> storage-devr...@lists.fosdem.org and we will try to answer as soon as
> possible.
> 
> Thanks!
> 
> ___
> FOSDEM mailing list
> fos...@lists.fosdem.org
> https://lists.fosdem.org/listinfo/fosdem

- End forwarded message -


signature.asc
Description: PGP signature
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] Fwd: [Bug 2070721] New: gluster-block-target.service uses .include which hasn't been supported since F33

2022-04-01 Thread Niels de Vos
Does anyone know if gluster-block works on recent Linux distributions? I
would appreciate it if someone could look at this bug and propose a fix
for how the service(s) need to be started correctly.

Thanks,
Niels

- Forwarded message from bugzi...@redhat.com -

> From: bugzi...@redhat.com
> To: nde...@redhat.com
> Date: Thu, 31 Mar 2022 18:30:20 +
> Subject: [Bug 2070721] New: gluster-block-target.service uses .include which 
> hasn't been supported since F33
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=2070721
> 
> Bug ID: 2070721
>Summary: gluster-block-target.service uses .include which
> hasn't been supported since F33
>Product: Fedora
>Version: 36
> Status: NEW
>  Component: gluster-block
>   Assignee: nde...@redhat.com
>   Reporter: zbys...@in.waw.pl
> QA Contact: extras...@fedoraproject.org
> CC: jriv...@redhat.com, nde...@redhat.com,
> prasanna.kale...@redhat.com
>   Target Milestone: ---
> Classification: Fedora
> 
> 
> 
> Description of problem:
> systemd-246 dropped support for .include (after it was deprecated and 
> generated
> warnings for a long while). So the unit file is unlikely to be doing what it's
> supposed to.
> 
> Version-Release number of selected component (if applicable):
> gluster-block-0.5-8.fc36.x86_64
> 
> Actual results:
> /usr/lib/systemd/system/gluster-block-target.service:8: Assignment outside of
> section. Ignoring.
> 
> 
> -- 
> You are receiving this mail because:
> You are on the CC list for the bug.
> You are the assignee for the bug.
> https://bugzilla.redhat.com/show_bug.cgi?id=2070721
> 

- End forwarded message -
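
For anyone picking this up: `.include` was superseded by systemd drop-in
directories quite a while ago. A minimal, hedged sketch of the usual
conversion is below; the bug does not show the contents of
gluster-block-target.service, so the unit name, paths and directives
here are illustrative assumptions only, not the actual fix.

  # Instead of a '.include /usr/lib/systemd/system/target.service' line in
  # gluster-block-target.service, ship the extra settings as a drop-in for
  # the unit that used to be included:
  mkdir -p /etc/systemd/system/target.service.d
  cat > /etc/systemd/system/target.service.d/50-gluster-block.conf << 'EOF'
  # every directive that used to follow the .include line goes here,
  # each under its proper [Section] header, e.g. [Unit] or [Service]
  EOF
  systemctl daemon-reload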


signature.asc
Description: PGP signature
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] Fwd: CfP FOSDEM 22 Software Defined Storage

2021-12-09 Thread Niels de Vos
Please be aware that the call for participation is open for the Software
Defined Storage devroom at FOSDEM. We welcome proposals related to
Gluster, and other storage solutions.

Thanks!
Niels


- Forwarded message from Jan Fajerski  -

> From: Jan Fajerski 
> To: fos...@lists.fosdem.org
> Date: Mon, 6 Dec 2021 10:03:56 +0100
> Subject: [FOSDEM] CfP FOSDEM 22 Software Defined Storage
> 
> FOSDEM is a free software event that offers open source communities a place to
> meet, share ideas and collaborate.  It is well known for being highly
> developer-oriented and in the past brought together 8000+ participants from
> all over the world.  Its home is in the city of Brussels (Belgium).
> 
> FOSDEM 2022 will take place as an online event during the weekend of February
> 5./6. 2022. More details about the event can be found at http://fosdem.org/
> 
> ** Call For Participation
> 
> The Software Defined Storage devroom will go into its sixth round for talks
> around Open Source Software Defined Storage projects, management tools
> and real world deployments.
> 
> Presentation topics could include but are not limited to:
> 
> - Your work on a SDS project like Ceph, Gluster, OpenEBS, CORTX or Longhorn
> 
> - Your work on or with SDS related projects like OpenStack SWIFT or Container
> Storage Interface
> 
> - Management tools for SDS deployments
> 
> - Monitoring tools for SDS clusters
> 
> ** Important dates:
> 
> - Jan  7th 2022:  submission deadline for talk proposals
> - Jan 14th 2022:  announcement of the final schedule
> - Feb  6th 2022:  Software Defined Storage dev room
> 
> Talk proposals will be reviewed by a steering committee:
> - Niels de Vos (Red Hat)
> - Jan Fajerski (Red Hat)
> - TBD
> 
> Use the FOSDEM 'pentabarf' tool to submit your proposal:
> https://penta.fosdem.org/submission/FOSDEM22
> 
> - If necessary, create a Pentabarf account and activate it.
> Please reuse your account from previous years if you have
> already created it.
> https://penta.fosdem.org/user/new_account/FOSDEM22
> 
> - In the "Person" section, provide First name, Last name
> (in the "General" tab), Email (in the "Contact" tab)
> and Bio ("Abstract" field in the "Description" tab).
> 
> - Submit a proposal by clicking on "Create event".
> 
> - If you plan to register your proposal in several tracks to increase your
> chances, don't! Register your talk once, in the most accurate track.
> 
> - Presentations have to be pre-recorded before the event and will be streamed
> on   the event weekend.
> 
> - Important! Select the "Software Defined Storage devroom" track
> (on the "General" tab).
> 
> - Provide the title of your talk ("Event title" in the "General" tab).
> 
> - Provide a description of the subject of the talk and the
> intended audience (in the "Abstract" field of the "Description" tab)
> 
> - Provide a rough outline of the talk or goals of the session (a short
> list of bullet points covering topics that will be discussed) in the
> "Full description" field in the "Description" tab
> 
> - Provide an expected length of your talk in the "Duration" field.
>   We suggest a length between 15 and 45 minutes.
> 
> ** For accepted talks
> 
> Once your proposal is accepted we will assign you a volunteer deputy who will
> help you to produce the talk recording.  The volunteer will also try to ensure
> the recording is of good quality, help with uploading it to the system,
> broadcasting it during the event and moderate the Q&A session after the
> broadcast.  Please note that as a presenter you're expected to be available
> online during and especially after the broadcast of your talk.  The schedule
> will be available under
> https://fosdem.org/2022/schedule/track/software_defined_storage/
> 
> Hope to hear from you soon! And please forward this announcement.
> 
> If you have any further questions, please write to the mailing list at
> storage-devr...@lists.fosdem.org
> (https://lists.fosdem.org/listinfo/storage-devroom)
> and we will try to answer as soon as possible.
> 
> Thanks!
> ___
> FOSDEM mailing list
> fos...@lists.fosdem.org
> https://lists.fosdem.org/listinfo/fosdem

- End forwarded message -


signature.asc
Description: PGP signature
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Gluster 10 - RC Build for Testing

2021-10-27 Thread Niels de Vos
On Tue, Oct 26, 2021 at 04:47:23PM +0200, Niels de Vos wrote:
> On Tue, Oct 26, 2021 at 12:04:49PM +0530, Saju Mohammed Noohu wrote:
> > Hello Glusterians,
> > 
> > Ready to test drive the latest Gluster 10 release?
> > RC0 build is ready and available here
> > <https://download.gluster.org/pub/gluster/glusterfs/qa-releases/10.0rc0/>.
> > 
> > The highlight of this release is a major performance improvement of ~20%
> > w.r.t. small-file as well as large-file testing in our controlled lab
> > environments, along with numerous other bug fixes.
> > The details of the above improvement are available here
> > <https://github.com/gluster/glusterfs/issues/2771>.
> > 
> > Request you all to actively participate and give feedback/comments.
> > 
> > Packages are signed. The public key is at
> > https://download.gluster.org/pub/gluster/glusterfs/6/rsa.pub
> 
> CentOS users can also get the Release Candidate packages from the CentOS
> Storage SIG repositories. The centos-release-gluster10 package is not
> available in CentOS Extras yet, so it is required to download and
> install the package to enable the repository; pick the version that
> matches your CentOS version:
> 
>  - 
> https://cbs.centos.org/kojifiles/work/tasks/5018/2585018/centos-release-gluster10-0.1-1.el8.noarch.rpm
>  - 
> https://cbs.centos.org/kojifiles/work/tasks/4948/2584948/centos-release-gluster10-0.1-1.el8s.noarch.rpm
>  - 
> https://cbs.centos.org/kojifiles/work/tasks/5015/2585015/centos-release-gluster10-0.1-1.el9s.noarch.rpm
> 
> The testing repositories should also contain gluster-block and
> glusterfs-coreutils, together with required dependencies. In case
> something is missing or not working as expected, please let us know!

The builds from the CentOS Storage SIG have been updated today and are
versioned like glusterfs-10.0-0.2.rc0. This change was done to enable
using tcmalloc, which should result in the performance benefits that Saju
announced.
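
A quick way to verify that an installed build is actually linked against
tcmalloc (the binary path below is the usual one, adjust it if your
packaging differs):

  rpm -q glusterfs-server
  ldd /usr/sbin/glusterfsd | grep -i tcmalloc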

Please report any test results as a reply to this email, and keep the
lists included. This makes sure that the testing is a community effort
and everyone is informed about the issues.

Thanks,
Niels


signature.asc
Description: PGP signature
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] Gluster 10 - RC Build for Testing

2021-10-27 Thread Niels de Vos
On Tue, Oct 26, 2021 at 06:58:30PM +, Strahil Nikolov wrote:
> Hey Niels,
> I haven't got a chance to test it yet, but can you take a look if the 
> following issue is applicable for v10 :
>  https://github.com/gluster/glusterfs/issues/2844

/var/lib/glusterd/groups/samba is included in the glusterfs-server
package from the CentOS Storage SIG, I just checked that. If it is not
included in the Ubuntu (and Debian?) packages, it should probably be
reported at https://github.com/gluster/glusterfs-debian
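
A quick way to check whether an installed glusterfs-server package ships
the file (package and file names as discussed above; the dpkg variant
assumes the Debian/Ubuntu package uses the same name):

  # RPM based distributions (CentOS, Fedora):
  rpm -ql glusterfs-server | grep groups/samba

  # dpkg based distributions (Debian, Ubuntu):
  dpkg -L glusterfs-server | grep groups/samba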

HTH,
Niels


> Best Regards,
> Strahil Nikolov
> 
> On Tue, Oct 26, 2021 at 17:47, Niels de Vos wrote:
> On Tue, Oct 26, 2021 at 12:04:49PM +0530, Saju Mohammed Noohu wrote:
> > Hello Glusterians,
> > 
> > Ready to test drive the latest Gluster 10 release?
> > RC0 build is ready and available here
> > <https://download.gluster.org/pub/gluster/glusterfs/qa-releases/10.0rc0/>.
> > 
> > The highlight of this release is a major performance improvement of ~20%
> > w.r.t. small-file as well as large-file testing in our controlled lab
> > environments, along with numerous other bug fixes.
> > The details of the above improvement are available here
> > <https://github.com/gluster/glusterfs/issues/2771>.
> > 
> > Request you all to actively participate and give feedback/comments.
> > 
> > Packages are signed. The public key is at
> > https://download.gluster.org/pub/gluster/glusterfs/6/rsa.pub
> 
> CentOS users can also get the Release Candidate packages from the CentOS
> Storage SIG repositories. The centos-release-gluster10 package is not
> available in CentOS Extras yet, so it is required to download and
> install the package to enable the repository; pick the version that
> matches your CentOS version:
> 
>  - 
> https://cbs.centos.org/kojifiles/work/tasks/5018/2585018/centos-release-gluster10-0.1-1.el8.noarch.rpm
>  - 
> https://cbs.centos.org/kojifiles/work/tasks/4948/2584948/centos-release-gluster10-0.1-1.el8s.noarch.rpm
>  - 
> https://cbs.centos.org/kojifiles/work/tasks/5015/2585015/centos-release-gluster10-0.1-1.el9s.noarch.rpm
> 
> The testing repositories should also contain gluster-block and
> glusterfs-coreutils, together with required dependencies. In case
> something is missing or not working as expected, please let us know!
> 
> Thanks,
> Niels
>   


signature.asc
Description: PGP signature
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Gluster 10 - RC Build for Testing

2021-10-26 Thread Niels de Vos
On Tue, Oct 26, 2021 at 12:04:49PM +0530, Saju Mohammed Noohu wrote:
> Hello Glusterians,
> 
> Ready to test drive the latest Gluster 10 release?
> RC0 build is ready and available here
> <https://download.gluster.org/pub/gluster/glusterfs/qa-releases/10.0rc0/>.
> 
> The highlight of this release is a major performance improvement of ~20%
> w.r.t. small-file as well as large-file testing in our controlled lab
> environments, along with numerous other bug fixes.
> The details of the above improvement are available here
> <https://github.com/gluster/glusterfs/issues/2771>.
> 
> Request you all to actively participate and give feedback/comments.
> 
> Packages are signed. The public key is at
> https://download.gluster.org/pub/gluster/glusterfs/6/rsa.pub

CentOS users can also get the Release Candidate packages from the CentOS
Storage SIG repositories. The centos-release-gluster10 package is not
available in CentOS Extras yet, so it is required to download and
install the package to enable the repository; pick the version that
matches your CentOS version:

 - 
https://cbs.centos.org/kojifiles/work/tasks/5018/2585018/centos-release-gluster10-0.1-1.el8.noarch.rpm
 - 
https://cbs.centos.org/kojifiles/work/tasks/4948/2584948/centos-release-gluster10-0.1-1.el8s.noarch.rpm
 - 
https://cbs.centos.org/kojifiles/work/tasks/5015/2585015/centos-release-gluster10-0.1-1.el9s.noarch.rpm

The testing repositories should also contain gluster-block and
glusterfs-coreutils, together with required dependencies. In case
something is missing or not working as expected, please let us know!
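
As an example, a hedged sequence for CentOS 8 Stream could look like the
following; the id of the testing repository that the release package
enables is an assumption, so check `dnf repolist all` after installing
it:

  # install the release package that configures the Storage SIG repositories
  dnf install -y https://cbs.centos.org/kojifiles/work/tasks/4948/2584948/centos-release-gluster10-0.1-1.el8s.noarch.rpm

  # RC builds normally live in the SIG's testing repository (repo id assumed)
  dnf --enablerepo='centos-gluster10*' install -y glusterfs-server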

Thanks,
Niels


signature.asc
Description: PGP signature
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Update on georep failure

2021-02-03 Thread Niels de Vos
On Tue, Feb 02, 2021 at 09:19:23PM +0100, Michael Scherer wrote:
> Le mardi 02 février 2021 à 21:06 +0200, Yaniv Kaul a écrit :
> > On Tue, Feb 2, 2021 at 8:14 PM Michael Scherer 
> > wrote:
> > 
> > > Hi,
> > > 
> > > so we finally found the cause of the georep failure, after several
> > > days
> > > of work from Deepshika and me.
> > > 
> > > Short story:
> > > 
> > > 
> > > side effect of adding libtirpc-devel on EL 7:
> > > https://github.com/gluster/project-infrastructure/issues/115
> > 
> > 
> > Looking at
> > https://github.com/gluster/glusterfs-patch-acceptance-tests/pull/191
> > - we
> > weren't supposed to use it?
> > From
> > https://github.com/gluster/glusterfs/blob/d1d7a6f35c816822fab51c820e25023863c239c1/glusterfs.spec.in#L61
> > :
> > # Do not use libtirpc on EL6, it does not have xdr_uint64_t() and
> > xdr_uint32_t
> > # Do not use libtirpc on EL7, it does not have xdr_sizeof()
> > %if ( 0%{?rhel} && 0%{?rhel} <= 7 )
> > %global _without_libtirpc --without-libtirpc
> > %endif
> > 
> > 
> > CentOS 7 has an ancient version, CentOS 8 has a newer version, so
> > perhaps
> > just one CentOS 8 slaves?
> 
> Fine for me for C8, but if libtirpc on EL7 is missing a function (or
> more), how come the code compiles without trouble, and fails at run time
> in a rather non-obvious way?

From what I remember of the rpc functions, glibc provides an
implementation too. Symbols might get resolved partially from libtirpc
and the missing ones from glibc. Mixing these will not work, as the
internal state/structures are different. Memory corruption and possibly
segfaults would most likely be the result.

If there is something linking against libtirpc, the library will (just
like glibc) be in memory, and symbols might get picked up from the wrong
library, causing issues.
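
One rough way to spot this kind of mixing on a binary is to check which
libraries get mapped and where the xdr_* symbols are bound (the binary
name below is just an example):

  # list the shared objects the binary pulls in
  ldd /usr/sbin/glusterfsd | grep -E 'tirpc|libc\.so'

  # let the dynamic linker report where each xdr_* symbol gets bound
  LD_DEBUG=bindings /usr/sbin/glusterfsd --version 2>&1 | grep 'xdr_'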

HTH,
Niels


signature.asc
Description: PGP signature
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Archiving old projects

2021-02-02 Thread Niels de Vos
On Tue, Feb 02, 2021 at 12:02:28PM +, Strahil Nikolov wrote:
> Are we sure we want to archive the samba extension ?

This is maintained in the main Samba project, so I think there is no
need to keep it around separately.

Anoop?



> Best Regards,
> Strahil Nikolov
> 
> On Tue, Feb 2, 2021 at 14:00, gluster-devel-requ...@gluster.org wrote:
> 
> Today's Topics:
> 
>   1. Archiving old projects (Michael Scherer)
>   2. Re: Archiving old projects (sankarshan)
> 
> 
> --
> 
> Message: 1
> Date: Mon, 01 Feb 2021 13:55:46 +0100
> From: Michael Scherer 
> To: Gluster Devel 
> Subject: [Gluster-devel] Archiving old projects
> Message-ID:
>     <212e1d318042f5a10a7e035dd42e6c0cca91b596.ca...@redhat.com>
> Content-Type: text/plain; charset="utf-8"
> 
> Hi, 
> 
> So, it is this time of the year again, do we want to archive the older
> projects on github ?
> 
> I already archived:
> 
> - gluster/cockpit-gluster (listed as unmaintained)
> - gluster/gluster-plus-one-scale (just a readme)
> - gluster/samba-glusterfs (listed as unmaintained)
> - gluster/anteater (empty, except 1 bug to say to remove it)
> 
> 
> 
> Unless people say "no" this week, I propose those:
> https://github.com/gluster/nagios-server-addons
> https://github.com/gluster/gluster-nagios-common
> https://github.com/gluster/gluster-nagios-addons
> https://github.com/gluster/mod_proxy_gluster
> https://github.com/gluster/xglfs
> https://github.com/gluster/glusterfs-java-filesystem
> https://github.com/gluster/glusterfs-kubernetes-openshift
> https://github.com/gluster/libgfapi-jni
> https://github.com/gluster/gluster-debug-tools
> https://github.com/gluster/gdeploy_config_generator
> https://github.com/gluster/glustertool
> https://github.com/gluster/gluster-zeroconf
> https://github.com/gluster/Gfapi-sys
> https://github.com/gluster/gluster-swift
> 
> I can't find my last mail, but that's everything that wasn't changed in
> 2019 or 2020. Repos that are used just for bug tracking are OK.
> 
> 
> 
> -- 
> Michael Scherer / He/Il/Er/Él
> Sysadmin, Community Infrastructure
> 
> 
> 
> 
> --
> 
> Message: 2
> Date: Mon, 1 Feb 2021 18:40:06 +0530
> From: sankarshan 
> To: Michael Scherer 
> Cc: Gluster Devel 
> Subject: Re: [Gluster-devel] Archiving old projects
> Message-ID:
>     
> Content-Type: text/plain; charset="UTF-8"
> 
> Seems reasonable to do this spring cleaning. I'd like to suggest that
> we add this as an agenda topic to an upcoming meeting and have it on
> record prior to moving ahead? Do you mind terribly to track this via
> an issue which can be referenced in the meeting?
> 
> On Mon, 1 Feb 2021 at 18:26, Michael Scherer  wrote:
> >
> > Hi,
> >
> > So, it is this time of the year again, do we want to archive the older
> > projects on github ?
> >
> > I already archived:
> >
> > - gluster/cockpit-gluster (listed as unmaintained)
> > - gluster/gluster-plus-one-scale (just a readme)
> > - gluster/samba-glusterfs (listed as unmaintained)
> > - gluster/anteater (empty, except 1 bug to say to remove it)
> >
> >
> >
> > Unless people say "no" this week, I propose those:
> > https://github.com/gluster/nagios-server-addons
> > https://github.com/gluster/gluster-nagios-common
> > https://github.com/gluster/gluster-nagios-addons
> > https://github.com/gluster/mod_proxy_gluster
> > https://github.com/gluster/xglfs
> > https://github.com/gluster/glusterfs-java-filesystem
> > https://github.com/gluster/glusterfs-kubernetes-openshift
> > https://github.com/gluster/libgfapi-jni
> > https://github.com/gluster/gluster-debug-tools
> > https://github.com/gluster/gdeploy_config_generator
> > https://github.com/gluster/glustertool
> > https://github.com/gluster/gluster-zeroconf
> > https://github.com/gluster/Gfapi-sys
> > https://github.com/gluster/gluster-swift
> >
> > I can't find my last mail, but that's everything that wasn't changed in
> > 2019 or 2020. Repos that are used just for bug tracking are OK.
> >
> >
> >
> > --
> > Michael Scherer / He/Il/Er/Él
> > Sysadmin, Community Infrastructure
> >

[Gluster-devel] [FOSDEM] CfP Software Defined Storage devroom

2020-12-08 Thread Niels de Vos
FOSDEM is a free software event that offers open source communities a place to 
meet, share ideas and collaborate.  It is well known for being highly 
developer-oriented and in the past brought together 8000+ participants from all 
over the world.  Its home is in the city of Brussels (Belgium).


FOSDEM 2021 will take place as an online event during the weekend of February 
6./7. 2021. More details about the event can be found at http://fosdem.org/


** Call For Participation

The Software Defined Storage devroom will go into its fifth round for talks
around Open Source Software Defined Storage projects, management tools

and real world deployments.

Presentation topics could include but are not limited to:

- Your work on a SDS project like Ceph, Gluster, OpenEBS, CORTX or Longhorn

- Your work on or with SDS related projects like OpenStack SWIFT or Container 
  Storage Interface


- Management tools for SDS deployments

- Monitoring tools for SDS clusters

** Important dates:

- Dec 27th 2020:  submission deadline for talk proposals
- Dec 31st 2020:  announcement of the final schedule
- Feb  6th 2021:  Software Defined Storage dev room

Talk proposals will be reviewed by a steering committee:
- Niels de Vos (OpenShift Container Storage Developer - Red Hat)
- Jan Fajerski (Ceph Developer - SUSE)
- TBD

Use the FOSDEM 'pentabarf' tool to submit your proposal:
https://penta.fosdem.org/submission/FOSDEM21

- If necessary, create a Pentabarf account and activate it.
Please reuse your account from previous years if you have
already created it.
https://penta.fosdem.org/user/new_account/FOSDEM21

- In the "Person" section, provide First name, Last name
(in the "General" tab), Email (in the "Contact" tab)
and Bio ("Abstract" field in the "Description" tab).

- Submit a proposal by clicking on "Create event".

- If you plan to register your proposal in several tracks to increase your chances, 
don't! Register your talk once, in the most accurate track.


- Presentations have to be pre-recorded before the event and will be streamed on 
  the event weekend.


- Important! Select the "Software Defined Storage devroom" track
(on the "General" tab).

- Provide the title of your talk ("Event title" in the "General" tab).

- Provide a description of the subject of the talk and the
intended audience (in the "Abstract" field of the "Description" tab)

- Provide a rough outline of the talk or goals of the session (a short
list of bullet points covering topics that will be discussed) in the
"Full description" field in the "Description" tab

- Provide an expected length of your talk in the "Duration" field.
  We suggest a length between 15 and 45 minutes.

** For accepted talks

Once your proposal is accepted we will assign you a volunteer deputy who will 
help you to produce the talk recording.  The volunteer will also try to ensure 
the recording is of good quality, help with uploading it to the system, 
broadcasting it during the event and moderate the Q&A session after the
broadcast.  Please note that as a presenter you're expected to be available
online during and especially after the broadcast of your talk.  The schedule will
be available under 
https://fosdem.org/2021/schedule/track/software_defined_storage/


Hope to hear from you soon! And please forward this announcement.

If you have any further questions, please write to the mailing list at
storage-devr...@lists.fosdem.org and we will try to answer as soon as
possible.

Thanks!

___
FOSDEM mailing list
fos...@lists.fosdem.org
https://lists.fosdem.org/listinfo/fosdem


signature.asc
Description: PGP signature
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Centos8/RHEL8 Nightly Build for glusterfs needs to be created

2020-10-13 Thread Niels de Vos
On Tue, Oct 13, 2020 at 07:23:56PM +0530, Rinku Kothiya wrote:
> Hi Team,
> 
> I have been trying to run the following command to fix the centos ci
> nightly builds, but it's failing and I am not sure what's wrong here.
> Any help would be appreciated.

For me, the following works:

$ mkdir centos8-gluster
$ cd centos8-gluster
$ vagrant init centos/8
$ vagrant up
$ cat << EOF | vagrant ssh -c 'sudo sh'
dnf -y update
dnf -y install epel-release
curl https://termbin.com/sc70j > nightly-build.sh
export CENTOS_VERSION=8 CENTOS_ARCH=x86_64 GERRIT_BRANCH=master
sh -x nightly-build.sh
EOF

The https://termbin.com/sc70j is a slightly modified version of the
jobs/scripts/nightly-builds/nightly-builds.sh script from the
gluster/centosci repository. It has some adaptations for CentOS-8, which
will need to get merged in a cleaner way into the original script.

You mentioned over chat that this didn't work for you. So I am wondering
if others have problems with the above steps too.

The CI environment provides a clean CentOS system for every run. It is
installed with the minimal set of packages, similar to the Vagrant box.

HTH,
Niels


> 
> # /usr/bin/mock --root epel-8-x86_64 --with=gnfs --resultdir
> /srv/gluster/nightly/master/8/x86_64 --rebuild
> /root/centosci/glusterfs/glusterfs-20201007.d1d7a6f-0.0.autobuild.src.rpm
> .
> .
> .
> RPM build errors:
> error: Directory not found:
> /builddir/build/BUILDROOT/glusterfs-20201007.d1d7a6f-0.0.el8.x86_64/gluster
> error: File not found:
> /builddir/build/BUILDROOT/glusterfs-20201007.d1d7a6f-0.0.el8.x86_64/gluster/__init__.*
> error: File not found:
> /builddir/build/BUILDROOT/glusterfs-20201007.d1d7a6f-0.0.el8.x86_64/gluster/__pycache__
> error: File not found:
> /builddir/build/BUILDROOT/glusterfs-20201007.d1d7a6f-0.0.el8.x86_64/gluster/cliutils
> Directory not found:
> /builddir/build/BUILDROOT/glusterfs-20201007.d1d7a6f-0.0.el8.x86_64/gluster
> File not found:
> /builddir/build/BUILDROOT/glusterfs-20201007.d1d7a6f-0.0.el8.x86_64/gluster/__init__.*
> File not found:
> /builddir/build/BUILDROOT/glusterfs-20201007.d1d7a6f-0.0.el8.x86_64/gluster/__pycache__
> File not found:
> /builddir/build/BUILDROOT/glusterfs-20201007.d1d7a6f-0.0.el8.x86_64/gluster/cliutils
> Finish: rpmbuild glusterfs-20201007.d1d7a6f-0.0.autobuild.src.rpm
> Finish: build phase for glusterfs-20201007.d1d7a6f-0.0.autobuild.src.rpm
> ERROR:
> Exception(/root/centosci/glusterfs/glusterfs-20201007.d1d7a6f-0.0.autobuild.src.rpm)
> Config(epel-8-x86_64) 6 minutes 19 seconds
> INFO: Results and/or logs in: /srv/gluster/nightly/master/8/x86_64
> INFO: Cleaning up build root ('cleanup_on_failure=True')
> Start: clean chroot
> Finish: clean chroot
> ERROR: Command failed:
>  # /usr/bin/systemd-nspawn -q -M e76a1e2f68cc4c88af4159fea2987ad1 -D
> /var/lib/mock/epel-8-x86_64/root -a -u mockbuild --capability=cap_ipc_lock
> --bind=/tmp/mock-resolv.kxdftj8r:/etc/resolv.conf --bind=/dev/loop-control
> --bind=/dev/loop0 --bind=/dev/loop1 --bind=/dev/loop2 --bind=/dev/loop3
> --bind=/dev/loop4 --bind=/dev/loop5 --bind=/dev/loop6 --bind=/dev/loop7
> --bind=/dev/loop8 --bind=/dev/loop9 --bind=/dev/loop10 --bind=/dev/loop11
> --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir
> --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin
> --setenv=PROMPT_COMMAND=printf "\033]0;\007"
> --setenv=PS1= \s-\v\$  --setenv=LANG=C.UTF-8 --resolv-conf=off
> bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps
> /builddir/build/SPECS/glusterfs.spec
> 
> Regards
> Rinku
> 
> On Fri, Oct 9, 2020 at 12:22 PM Niels de Vos  wrote:
> 
> > On Thu, Oct 08, 2020 at 04:21:50PM +0530, Rinku Kothiya wrote:
> > > Hi Niels,
> > >
> > > Yaniv wanted the glusterfs nightly builds generated to be tested for
> > > performance bottlenecks.
> > >
> > > I was working on the below project :
> > > https://github.com/gluster/gluster-performance-test-suite
> > >
> > > As of now we are running the above script on the glusterfs-nightly builds
> > > every night through a jenkins machine.
> > > Currently we have only been using Glusterfs-EL7 Nightly builds. But for
> > > testing the GlusterFS-EL8 nightly build, I need those nightly builds to
> > be
> > > generated.
> > > I raised a request with Deepshikha and she said that you have been
> > looking
> > > at it in the past.
> >
> > Please have a look at the Jenkins Jobs Builder scripts from
> > https://github.com/gluster/centosci/ that are used to run these jobs.
> > For the nightly GlusterFS builds, there are two jobs involved:
> >
> > 1. jobs/build-

Re: [Gluster-devel] Centos8/RHEL8 Nightly Build for glusterfs needs to be created

2020-10-09 Thread Niels de Vos
On Thu, Oct 08, 2020 at 04:21:50PM +0530, Rinku Kothiya wrote:
> Hi Niels,
> 
> Yaniv wanted the glusterfs nightly builds generated to be tested for
> performance bottlenecks.
> 
> I was working on the below project :
> https://github.com/gluster/gluster-performance-test-suite
> 
> As of now we are running the above script on the glusterfs-nightly builds
> every night through a jenkins machine.
> Currently we have only been using Glusterfs-EL7 Nightly builds. But for
> testing the GlusterFS-EL8 nightly build, I need those nightly builds to be
> generated.
> I raised a request with Deepshikha and she said that you have been looking
> at it in the past.

Please have a look at the Jenkins Jobs Builder scripts from
https://github.com/gluster/centosci/ that are used to run these jobs.
For the nightly GlusterFS builds, there are two jobs involved:

1. jobs/build-rpms.yml contains the different versions and architectures
   that the gluster_build-rpms job can build. It needs to be extended to
   support the `release-8` branch

2. jobs/nightly-rpm-builds.yml does the triggering of the 1st job. It
   passes parameters to the job, which then will do the builds. So in
   order to add a build for GlusterFS-8, include a new `trigger-build`
   section.

3. jobs/scripts/nightly-builds/nightly-builds.sh is the script that does
   the actual work. You should be able to execute it manually on a
   CentOS system (set the environment like the parameters from 2) and
   see if it works for GlusterFS-8. Ideally the script does not need any
   changes; a minimal invocation sketch follows below.
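
As a rough sketch of step 3, reusing the environment variables from the
CentOS 8 example earlier in this thread (the branch and version values
here are assumptions, adjust them to what you want to build):

  # on a clean CentOS machine, from a checkout of gluster/centosci
  export CENTOS_VERSION=8 CENTOS_ARCH=x86_64 GERRIT_BRANCH=master
  sh -x jobs/scripts/nightly-builds/nightly-builds.sh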

You can send a PR with changes that you need, and once it is merged (by
Deepshika or one of the other active maintainers), it should
automatically get updated in the Jenkins environment, and the next day
there should be a new "Gluster 8 Nightly" repository.

Adding gluster-devel to CC, as this information should enable others to
add or improve CI jobs too.

Good luck!
Niels


signature.asc
Description: PGP signature
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] Fwd: [fuse-devel] Call for new maintainer(s)

2020-03-23 Thread Niels de Vos
Gluster does not depend much (unfortunately) on libfuse. At one time we
had the idea to replace our own implementation of the fuse bindings with
the well tested and maintained libfuse, possibly in combination with
libgfapi. This would reduce maintenance overhead quite a bit.

If someone still has an interest in pursuing this, you might also want
to support maintaining libfuse. I guess a small group of maintainers is
welcome (not all expected to come from Gluster). Get in touch with
Nikolaus and see what options there are.

Niels


- Forwarded message from Nikolaus Rath  -

> Date: Sat, 21 Mar 2020 10:49:55 +
> From: Nikolaus Rath 
> To: fuse-de...@lists.sourceforge.net
> Subject: [fuse-devel] Call for new maintainer(s)
> 
> Hi all,
> 
> It's been about 5 years since I took over maintainership of libfuse from
> Miklos. Overall, I think this was a productive time: libfuse 3 was
> released, the build system was changed to Meson, a more extensive test
> suite added, the project moved from Sourceforge to Github, and I feel
> like the number of contributors has increased.
> 
> However, since then my circumstances have changed. I am now a lot more
> occupied with work and family. My role in the project has become pretty
> much limited to triaging bugs, merging pull requests and doing the
> occasional release, and I don't expect to have time to do any actual
> development work anytime soon.
> 
> In other words, I think libfuse would benefit from (one or more) new
> maintainers who can spend more time on it.
> 
> If you think this role would be for you, please let me know. Ideally,
> you'd have some history of contribution to libfuse or other open-source
> projects and a lot of energy to drive things forward again.
> 
> 
> Best,
> -Nikolaus
> 
> 
> -- 
> GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F
> 
>  »Time flies like an arrow, fruit flies like a Banana.«




> -- 
> fuse-devel mailing list
> To unsubscribe or subscribe, visit 
> https://lists.sourceforge.net/lists/listinfo/fuse-devel


- End forwarded message -

___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] Intent to retire Heketi package in Fedora

2020-02-16 Thread Niels de Vos
Hi all,

Currently Heketi is (just a little) maintained in Fedora and packaged as an
RPM for stand-alone installation (not containerized).

As Heketi is a Golang application, and the Fedora Packaging Guidelines
have changed since the package was included, the current Heketi
packaging does not follow the current guidelines anymore. It would take
much work to return the package into good shape.

Also remember that Heketi is in "near-maintenance mode", and no major
new work is expected anymore:
 - https://github.com/heketi/heketi#important-notice

Because of this, I intend to retire Heketi from Fedora. This means that
the package will not be available in upcoming Fedora releases anymore.
Installation from the GitHub project repository will stay possible, of
course.

If there is someone that is interested in taking over the responsibility
for Heketi in Fedora, please let me know and I can work with you to
become the packager.

Thanks,
Niels

___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968


NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Do we still support the '"GlusterFS 3.3" RPC client?

2020-01-22 Thread Niels de Vos
On Wed, Jan 22, 2020 at 08:31:59PM +0200, Yaniv Kaul wrote:
> Or can we remove this code?

The GlusterFS 3.3 RPC code is for the version of the protocol, and not
much related to the version of Gluster itself. Newer versions of Gluster
introduced protocol version 4 (maybe with Gluster 6?), but can still
fall back to the old protocol in case deployments have mixed versions.
Online upgrades depend on both protocols being available.

HTH,
Niels

___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968


NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-Maintainers] Proposal to change gNFS status

2019-11-22 Thread Niels de Vos
On Thu, Nov 21, 2019 at 04:01:23PM +0530, Amar Tumballi wrote:
> Hi All,
> 
> As per the discussion on https://review.gluster.org/23645, recently we
> changed the status of gNFS (gluster's native NFSv3 support) feature to
> 'Deprecated / Orphan' state. (ref:
> https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L185..L189).
> With this email, I am proposing to change the status again to 'Odd Fixes'
> (ref: https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L22)

I'd recommend against resurrecting gNFS. The server is not very
extensible and adding new features is pretty tricky without breaking
other (mostly undocumented) use-cases. Even though NFSv3 is stateless,
the actual usage of NFSv3, mounting and locking, is definitely not. The
server keeps track of which clients have an export mounted, and which
clients received grants for locks. These things are currently not very
reliable in combination with high-availability. There is also the
duplicate-reply-cache (DRC), disabled by default, which has always been
very buggy (and is not cluster-aware either).

If we enable gNFS by default again, we're sending out an incorrect
message to our users. gNFS works fine for certain workloads and
environments, but it should not be advertised as 'clustered NFS'.

Instead of going the gNFS route, I suggest making it easier to deploy
NFS-Ganesha, as it is more fully featured, well maintained, and can be
configured for much more reliable high-availability than gNFS.

If someone really wants to maintain gNFS, I won't object much, but they
should know that previous maintainers have had many difficulties just
keeping it working well while other components evolved. Addressing some
of the bugs/limitations will be extremely difficult and may require
large rewrites of parts of gNFS.

Until now, I have not read convincing arguments in this thread that gNFS
is stable enough to be consumed by anyone in the community. Users should
be aware of its limitations and be careful what workloads to run on it.

HTH,
Niels

___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968


NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] Announcing Gluster release 6.6

2019-11-05 Thread Niels de Vos
On Tue, Nov 05, 2019 at 10:25:07AM +0100, Niels de Vos wrote:
> On Tue, Nov 05, 2019 at 04:51:02AM +0200, Strahil wrote:
> > Hi Niels,
> > 
> > It seems that 5 days later, the v6.6 is still missing.
> > Do you have any contacts in CentOS mailing lists that I can ask to check 
> > what's going on?
> 
> There seems to be a hiccup in the publishing of contents on the CentOS
> side. Not only the Gluster repository is affected; others are as well. We're
> waiting for more details on the progress there.

The problematic repository has been identified and fixed. This allowed
the signing and pushing of the other repositories to continue. The
glusterfs-6.6-1.el7 packages are available on the main CentOS mirrors;
other systems around the world should catch up soon too.

Cheers,
Niels


> 
> Niels
> 
> 
> > 
> > 
> > Best Regards,
> > Strahil NikolovOn Oct 31, 2019 10:39, Niels de Vos  
> > wrote:
> > >
> > > On Thu, Oct 31, 2019 at 07:39:56AM +, Strahil Nikolov wrote: 
> > > >  I can't see v6.6 for ovirt/CentOS7.7 .Is it available for CentOS 7.7 ? 
> > >
> > > Packages have been handed off to the CentOS team yesterday. It is 
> > > expected that the RPMs get signed and pushed to the mirrors today. 
> > >
> > > In general, the announcements of releases are a little ahead of the 
> > > availability of the packages in the distributions. 
> > >
> > > HTH, 
> > > Niels 
> > >
> > >
> > > > Here is what I got:Installed Packages 
> > > > glusterfs.x86_64
> > > >   6.5-1.el7 
> > > >     
> > > > @centos-gluster6 
> > > > Available Packages 
> > > > glusterfs.x86_64
> > > >   3.12.2-47.2.el7   
> > > >     
> > > > base 
> > > > glusterfs.x86_64
> > > >   6.0-1.el7 
> > > >     
> > > > centos-gluster6 
> > > > glusterfs.x86_64
> > > >   6.0-1.el7 
> > > >   

___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/118564314

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/118564314

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] Announcing Gluster release 6.6

2019-11-05 Thread Niels de Vos
On Tue, Nov 05, 2019 at 04:51:02AM +0200, Strahil wrote:
> Hi Niels,
> 
> It seems that 5 days later, the v6.6 is still missing.
> Do you have any contacts in CentOS mailing lists that I can ask to check 
> what's going on?

There seems to be a hiccup in the publishing of contents on the CentOS
side. Not only the Gluster repository is affected; others are as well. We're
waiting for more details on the progress there.

Niels


> 
> 
> Best Regards,
> Strahil NikolovOn Oct 31, 2019 10:39, Niels de Vos  wrote:
> >
> > On Thu, Oct 31, 2019 at 07:39:56AM +, Strahil Nikolov wrote: 
> > >  I can't see v6.6 for ovirt/CentOS7.7 .Is it available for CentOS 7.7 ? 
> >
> > Packages have been handed off to the CentOS team yesterday. It is 
> > expected that the RPMs get signed and pushed to the mirrors today. 
> >
> > In general, the announcements of releases are a little ahead of the 
> > availability of the packages in the distributions. 
> >
> > HTH, 
> > Niels 
> >
> >
> > > Here is what I got:Installed Packages 
> > > glusterfs.x86_64  
> > >     6.5-1.el7 
> > >     
> > > @centos-gluster6 
> > > Available Packages 
> > > glusterfs.x86_64  
> > >     3.12.2-47.2.el7   
> > >     base 
> > > glusterfs.x86_64  
> > >     6.0-1.el7 
> > >     
> > > centos-gluster6 
> > > glusterfs.x86_64  
> > >     6.0-1.el7 
> > >   

___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/118564314

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/118564314

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] Announcing Gluster release 6.6

2019-10-31 Thread Niels de Vos
On Thu, Oct 31, 2019 at 07:39:56AM +, Strahil Nikolov wrote:
>  I can't see v6.6 for ovirt/CentOS 7.7. Is it available for CentOS 7.7?

Packages have been handed off to the CentOS team yesterday. It is
expected that the RPMs get signed and pushed to the mirrors today.

In general, the announcements of releases are a little ahead of the
availability of the packages in the distributions.

HTH,
Niels


> Here is what I got:
> 
> Installed Packages
> glusterfs.x86_64    6.5-1.el7          @centos-gluster6
> 
> Available Packages
> glusterfs.x86_64    3.12.2-47.2.el7    base
> glusterfs.x86_64    6.0-1.el7          centos-gluster6
> glusterfs.x86_64    6.0-1.el7          ovirt-4.3-centos-gluster6
> glusterfs.x86_64    6.1-1.el7          centos-gluster6
> glusterfs.x86_64    6.1-1.el7          ovirt-4.3-centos-gluster6
> glusterfs.x86_64    6.3-1.el7          centos-gluster6
> glusterfs.x86_64    6.3-1.el7          ovirt-4.3-centos-gluster6
> glusterfs.x86_64    6.4-1.el7          centos-gluster6
> glusterfs.x86_64    6.4-1.el7          ovirt-4.3-centos-gluster6
> glusterfs.x86_64    6.5-1.el7          centos-gluster6
> glusterfs.x86_64    6.5-1.el7          ovirt-4.3-centos-gluster6
> 
> Best Regards,
> Strahil Nikolov
> 
> On Wednesday, 30 October 2019, 9:39:45 GMT-4, Hari Gowtham wrote:
> 
> Hi,
> 
> The Gluster community is pleased to announce the release of Gluster
> 6.6 (packages available at [1]).
> 
> Release notes for the release can be found at [2].
> 
> Major changes, features and limitations addressed in this release:
> None
> 
> Thanks,
> Gluster community
> 
> [1] Packages for 6.6:
> https://download.gluster.org/pub/gluster/glusterfs/6/6.6/
> 
> [2] Release notes for 6.6:
> https://docs.gluster.org/en/latest/release-notes/6.6/
> 
> 
> -- 
> Regards,
> Hari Gowtham.
> 
> 
> 
> Community Meeting Calendar:
> 
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/118564314
> 
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/118564314
> 
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>   

> 
> 
> Community Meeting Calendar:
> 
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/118564314
> 
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/118564314
> 
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users

___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/118564314

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: 

Re: [Gluster-devel] GlusterFS API to manipulate open file descriptor - glfs_fcntl?

2019-10-15 Thread Niels de Vos
On Tue, Oct 15, 2019 at 12:20:54PM +0530, Anoop C S wrote:
> Hi all,
> 
> This is to check and confirm whether we have an API(or an internal
> implementation which can be exposed as API) to perform operations on an
> open file descriptor as a wrapper around existing fcntl() system call.
> We do have specific APIs for locks(glfs_posix_lock) and file descriptor
> duplication(glfs_dup) which are important among those operations listed
> as per man fcntl(2).
> 
> At present we have a requirement(very recent) from Samba to set file
> descriptor flags through its VFS layer which would need a corresponding
> mechanism inside GlusterFS. Due to its absence, VFS module for
> GlusterFS inside Samba will have to workaround with the hack of
> creating fake local file descriptors outside GlusterFS.
> 
> Thoughts and suggestions are welcome.

The fcntl() operations are split up when FUSE is used. There is no
direct fcntl() call that FUSE passes on; instead it calls lock() and
similar interfaces. I think you refer to the F_GETFD and F_SETFD
commands for fcntl(). For all I can see, these do not exist in FUSE, and
have not been added to gfapi either. I am not sure if the single
supported flag FD_CLOEXEC can have a benefit on Gluster, as glfs_fini()
is expected to clean up everything that gfapi allocates.

Can you explain your use-case a little more?

Also adding intergrat...@gluster.org so that other projects interested
in gfapi can follow and comment on the discussion.

Niels
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/118564314

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/118564314

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely

2019-10-14 Thread Niels de Vos
On Mon, Oct 14, 2019 at 03:52:30PM +0530, Amar Tumballi wrote:
> Any thoughts on this?
> 
> I tried a basic .travis.yml for the unified glusterfs repo I am
> maintaining, and it is good enough for running most of the tests.
> Considering we are very close to the glusterfs-7.0 release, it would be
> good to time this after the 7.0 release.

Is there a reason to move to Travis? GitHub does offer integration with
Jenkins, so we should be able to keep using our existing CI, I think?

Niels


> 
> -Amar
> 
> On Thu, Sep 5, 2019 at 5:13 PM Amar Tumballi  wrote:
> 
> > Going through the thread, I see in general positive responses for the
> > same, with a few points on the review system, and not losing information
> > when merging the patches.
> >
> > While we are working on that, we need to see and understand how our CI/CD
> > looks like with the github migration. We surely need suggestions and volunteers
> > here to get this going.
> >
> > Regards,
> > Amar
> >
> >
> > On Wed, Aug 28, 2019 at 12:38 PM Niels de Vos  wrote:
> >
> >> On Tue, Aug 27, 2019 at 06:57:14AM +0530, Amar Tumballi Suryanarayan
> >> wrote:
> >> > On Tue, Aug 27, 2019 at 12:10 AM Niels de Vos 
> >> wrote:
> >> >
> >> > > On Mon, Aug 26, 2019 at 08:36:30PM +0530, Aravinda Vishwanathapura
> >> Krishna
> >> > > Murthy wrote:
> >> > > > On Mon, Aug 26, 2019 at 7:49 PM Joe Julian 
> >> wrote:
> >> > > >
> >> > > > > > Comparing the changes between revisions is something
> >> > > > > that GitHub does not support...
> >> > > > >
> >> > > > > It does support that,
> >> > > > > actually.___
> >> > > > >
> >> > > >
> >> > > > Yes, it does support. We need to use Squash merge after all review
> >> is
> >> > > done.
> >> > >
> >> > > Squash merge would also combine multiple commits that are intended to
> >> > > stay separate. This is really bad :-(
> >> > >
> >> > >
> >> > We should treat 1 patch in gerrit as 1 PR in github, then squash merge
> >> > works same as how reviews in gerrit are done.  Or we can come up with
> >> > label, upon which we can actually do 'rebase and merge' option, which
> >> can
> >> > preserve the commits as is.
> >>
> >> Something like that would be good. For many things, including commit
> >> message updates, squashing patches is just losing details. We don't do
> >> that with Gerrit now, and we should not do that when using GitHub PRs.
> >> Properly documenting changes is still very important to me; the details of
> >> patches should be explained in commit messages. This only works well
> >> when developers 'force push' to the branch holding the PR.
> >>
> >> Niels
> >> ___
> >>
> >> Community Meeting Calendar:
> >>
> >> APAC Schedule -
> >> Every 2nd and 4th Tuesday at 11:30 AM IST
> >> Bridge: https://bluejeans.com/836554017
> >>
> >> NA/EMEA Schedule -
> >> Every 1st and 3rd Tuesday at 01:00 PM EDT
> >> Bridge: https://bluejeans.com/486278655
> >>
> >> Gluster-devel mailing list
> >> Gluster-devel@gluster.org
> >> https://lists.gluster.org/mailman/listinfo/gluster-devel
> >>
> >>

> ___
> 
> Community Meeting Calendar:
> 
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/118564314
> 
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/118564314
> 
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
> 

___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/118564314

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/118564314

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] CfP: Software Defined Storage devroom at FOSDEM 2020

2019-10-11 Thread Niels de Vos
FOSDEM is a free software event that offers open source communities a
place to meet, share ideas and collaborate.  It is renowned for being
highly developer-oriented and brings together 8000+ participants from
all over the world.  It is held in the city of Brussels (Belgium).

FOSDEM 2020 will take place during the weekend of February 1st-2nd 2020.
More details about the event can be found at http://fosdem.org/

** Call For Participation

The Software Defined Storage devroom will go into its fourth round for
talks around Open Source Software Defined Storage projects, management
tools and real world deployments.

Presentation topics could include but are not limited to:

- Your work on a SDS project like Ceph, Gluster, OpenEBS or LizardFS
- Your work on or with SDS related projects like SWIFT or Container
  Storage Interface
- Management tools for SDS deployments
- Monitoring tools for SDS clusters

** Important dates:

- Nov 24th 2019:  submission deadline for talk proposals
- Dec 15th 2019:  announcement of the final schedule
- Feb  2nd 2020:  Software Defined Storage dev room

Talk proposals will be reviewed by a steering committee:
- Niels de Vos (OpenShift Container Storage Developer - Red Hat)
- Jan Fajerski (Ceph Developer - SUSE)
- Kai Wagner (SUSE)
- Mike Perez (Ceph Community Manager, Red Hat)

Use the FOSDEM 'pentabarf' tool to submit your proposal:
https://penta.fosdem.org/submission/FOSDEM20

- If necessary, create a Pentabarf account and activate it.  Please
  reuse your account from previous years if you have already created it.
  https://penta.fosdem.org/user/new_account/FOSDEM20

- In the "Person" section, provide First name, Last name (in the
  "General" tab), Email (in the "Contact" tab) and Bio ("Abstract" field
  in the "Description" tab).

- Submit a proposal by clicking on "Create event".

- Important! Select the "Software Defined Storage devroom" track (on the
  "General" tab).

- Provide the title of your talk ("Event title" in the "General" tab).

- Provide a description of the subject of the talk and the intended
  audience (in the "Abstract" field of the "Description" tab)

- Provide a rough outline of the talk or goals of the session (a short
  list of bullet points covering topics that will be discussed) in the
  "Full description" field in the "Description" tab

- Provide an expected length of your talk in the "Duration" field.
  Please include at least 5 minutes of discussion in your proposal and
  allow 5 minutes for the handover to the next presenter.
  Suggested talk lengths are 20+5+5 and 45+10+5 minutes. Note that
  short talks are preferred so that more topics can be presented
  during the day.

** Recording of talks

The FOSDEM organizers plan to have live streaming and recording fully
working, both for remote/later viewing of talks, and so that people can
watch streams in the hallways when rooms are full. This requires
speakers to consent to being recorded and streamed. If you plan to be a
speaker, please understand that by doing so you implicitly give consent
for your talk to be recorded and streamed. The recordings will be
published under the same license as all FOSDEM content (CC-BY).

Hope to hear from you soon! And please forward this announcement.

If you have any further questions, please write to the mailinglist at
storage-devr...@lists.fosdem.org and we will try to answer as soon as
possible.

Thanks!





Re: [Gluster-devel] [Gluster-users] [Gluster-Maintainers] GlusterFS - 7.0RC1 - Test day (26th Sep 2019)

2019-09-20 Thread Niels de Vos
On Fri, Sep 20, 2019 at 09:19:24AM -0400, Kaleb Keithley wrote:
> On Fri, Sep 20, 2019 at 8:39 AM Rinku Kothiya  wrote:
> 
> > Hi,
> >
> > Release-7 RC1 packages are built. We are planning to have a test day on
> > 26-Sep-2019, and we request your participation. Do post on the lists any
> > testing done and feedback for the same.
> >
> > Packages for Fedora 29, Fedora 30, RHEL 8 and CentOS are at
> > https://download.gluster.org/pub/gluster/glusterfs/qa-releases/7.0rc1/
> >
> > Packages are signed. The public key is at
> > https://download.gluster.org/pub/gluster/glusterfs/6/rsa.pub
> >
> 
> FYI, there are no CentOS packages there, but there are Debian stretch and
> Debian buster packages.
> 
> Packages for CentOS 7 are built in  CentOS CBS at
> https://cbs.centos.org/koji/buildinfo?buildID=26538 but I don't see them in
> https://buildlogs.centos.org/centos/7/storage/x86_64/.
> 
> @Niels, shouldn't we expect them in buildlogs?

Ai, it seems the requested configuration for syncing is not applied yet:
- https://bugs.centos.org/view.php?id=16363

I've now pinged in #centos-devel on Freenode to get some attention to
the request.

Thanks,
Niels



Re: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely

2019-08-28 Thread Niels de Vos
On Tue, Aug 27, 2019 at 06:57:14AM +0530, Amar Tumballi Suryanarayan wrote:
> On Tue, Aug 27, 2019 at 12:10 AM Niels de Vos  wrote:
> 
> > On Mon, Aug 26, 2019 at 08:36:30PM +0530, Aravinda Vishwanathapura Krishna
> > Murthy wrote:
> > > On Mon, Aug 26, 2019 at 7:49 PM Joe Julian  wrote:
> > >
> > > > > Comparing the changes between revisions is something
> > > > that GitHub does not support...
> > > >
> > > > It does support that,
> > > > actually.
> > > >
> > >
> > > Yes, it does support. We need to use Squash merge after all review is
> > done.
> >
> > Squash merge would also combine multiple commits that are intended to
> > stay separate. This is really bad :-(
> >
> >
> We should treat 1 patch in gerrit as 1 PR in github, then squash merge
> works the same as how reviews in gerrit are done.  Or we can come up with a
> label, upon which we can actually use the 'rebase and merge' option, which can
> preserve the commits as-is.

Something like that would be good. For many things, including commit
message updates, squashing patches just loses details. We don't do
that with Gerrit now, and we should not do that when using GitHub PRs.
Properly documenting changes is still very important to me; the details of
patches should be explained in commit messages. This only works well
when developers 'force push' to the branch holding the PR.
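A rough sketch of that flow, assuming a fork remote called 'origin' and a
PR branch named 'my-fix' (both names are only for illustration):

    # rework the series locally, keeping the commits separate and
    # updating each commit message as needed
    git rebase -i origin/master
    # push the new revision of the series to the branch backing the PR
    git push --force-with-lease origin my-fix

GitHub records each force-push on the PR, so reviewers can still compare
the pushed revisions, much like patchset diffs in Gerrit.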

Niels



Re: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely

2019-08-26 Thread Niels de Vos
On Mon, Aug 26, 2019 at 08:36:30PM +0530, Aravinda Vishwanathapura Krishna 
Murthy wrote:
> On Mon, Aug 26, 2019 at 7:49 PM Joe Julian  wrote:
> 
> > > Comparing the changes between revisions is something
> > that GitHub does not support...
> >
> > It does support that,
> > actually.
> >
> 
> Yes, it does support. We need to use Squash merge after all review is done.

Squash merge would also combine multiple commits that are intended to
stay separate. This is really bad :-(

Niels



Re: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely

2019-08-26 Thread Niels de Vos
On Fri, Aug 23, 2019 at 11:56:53PM -0400, Yaniv Kaul wrote:
> On Fri, 23 Aug 2019, 9:13 Amar Tumballi  wrote:
> 
> > Hi developers,
> >
> > With this email, I want to understand what is the general feeling around
> > this topic.
> >
> > We from the gluster org (in github.com/gluster) have many projects which
> > follow the complete github workflow, whereas there are a few, especially
> > the main one, 'glusterfs', which still uses 'Gerrit'.
> >
> > While this has worked all these years, there is currently a huge amount
> > of mind-share around the github workflow, as many other top projects and
> > similar projects use only github as the place to develop, track issues,
> > run tests, etc. As it is possible to have all of the tools required for
> > this project in github itself (code, PRs, issues, CI/CD, docs), let's
> > look at how we are structured today:
> >
> > Gerrit - glusterfs code + Review system
> > Bugzilla - For bugs
> > Github - For feature requests
> > Trello - (not very much used) for tracking project development.
> > CI/CD - CentOS-ci / Jenkins, etc but maintained from different repo.
> > Docs - glusterdocs - different repo.
> > Metrics - Nothing (other than github itself tracking contributors).
> >
> > While it may cause a minor glitch for many long-time developers who are
> > used to the current flow, moving to github would bring all of these into
> > a single place, make onboarding new contributors easier, and give us
> > uniform development practices for all gluster org repositories.
> >
> > As this is just a proposal, I would like to hear people's thoughts on it,
> > and to conclude in about a month, so that we are clear about this by the
> > glusterfs-8 development time.
> >
> 
> I don't like mixed mode, but I also dislike Github's code review tools, so
> I'd like to point out the option of using http://gerrithub.io/ for code
> review.
> Other than that, I'm in favor of moving over.
> Y.

I agree that using GitHub for code review is not optimal. We have many
patches for the GlusterFS project that need multiple rounds of review
and corrections. Comparing the changes between revisions is something
that GitHub does not support, but Gerrit/GerritHub does.

Before switching over, there also needs to be documentation on how to
structure the issues in GitHub's tracker (which labels to use, what they
mean, etc.). Also, what about migration of bugs from Bugzilla to GitHub?

Except for those topics, I don't have a problem with moving to GitHub.

Niels



Re: [Gluster-devel] [RFC] What if client fuse process crash?

2019-08-06 Thread Niels de Vos
On Tue, Aug 06, 2019 at 04:47:46PM +0800, Changwei Ge wrote:
> Hi Niels,
> 
> On 2019/8/6 3:50 下午, Niels de Vos wrote:
> > On Tue, Aug 06, 2019 at 03:14:46PM +0800, Changwei Ge wrote:
> > > On 2019/8/6 2:57 下午, Ravishankar N wrote:
> > > > On 06/08/19 11:44 AM, Changwei Ge wrote:
> > > > > Hi Ravishankar,
> > > > > 
> > > > > 
> > > > > > Thanks for sharing, it's very useful to me.
> > > > > >
> > > > > > I have been setting up a glusterfs storage cluster recently and the
> > > > > > umount/mount recovery process bothered me.
> > > > Hi Changwei,
> > > > Why are you needing to do frequent remounts? If your gluster fuse client
> > > > is crashing frequently, that should be investigated and fixed. If you
> > > > have a reproducer, please raise a bug with all the details like the
> > > > glusterfs version, core files and log files.
> > > 
> > > Hi Ravi,
> > > 
> > > Actually, the glusterfs client fuse process runs well in my environment.
> > > But high availability and fault tolerance are also big concerns for me.
> > >
> > > So I killed the fuse process to see what would happen. AFAIK, userspace
> > > processes can be killed or crash somehow, which is not under our
> > > control. :-(
> > >
> > > Another scenario is *software upgrade*. Since we have to upgrade the
> > > glusterfs client version in order to gain features and bug fixes, it
> > > would be friendly to applications if the upgrade were transparent.
> > Open files have state associated with them, and that state is lost
> > when the fuse process exits. Restarting the fuse process will then need
> > to restore the state of the open files (and caches, and more). This is
> > not trivial and I do not think any work on this end has been done yet.
> 
> 
> True, tons of work would have to be done if we want to restore all file
> state so that a restarted fuse process can continue to work as if it had
> never been restarted.
> 
> I suppose two methods might be feasible:
> 
>     One is to try to fetch file state from the kernel to restore the files'
> state into the fuse process,
> 
>     the other is to duplicate that state to a standby process, or just
> use the Linux shared memory mechanism?

Restoring the state from the kernel would be my preference. That is the
view of the storage that the application has as well. But it may not be
possible to recover all details that the xlators track. Storing those in
shared memory (or file backed persistent storage) might not even be
sufficient. With upgrades it is possible to get new features in existing
xlators that would need to refresh their state to get the extensions. It
is even possible that new xlators get added, and those will need to get
the state of the files too.

I think, in the end it would boil down to getting the state from the
kernel, and revalidating each inode through the mountpoint to the
server. This is also what happens on graph-switches (new volume layout
or options pushed from the server to client). To get this to work, it
needs to be possible for a FUSE service to re-attach itself to a
mountpoint where the previous FUSE process detached. I do not think this
is possible at the moment, it will require extensions in the FUSE kernel
module (and then re-attaching a new state to all inodes).

> > Some users take an alternative route. Mounted filesystems do indeed have
> > issues with online updating. So maybe you do not need to mount the
> > filesystem at all. Depending on the needs of your applications, using
> > glusterfs-coreutils instead of a FUSE (or NFS) mount might be an option
> > for you. The short-lived processes connect to the Gluster volume when
> > needed, and do not keep a connection open. Updating userspace tools is
> > much simpler than updating long-running processes that are hooked into the
> > kernel.
> > 
> > See https://github.com/gluster/glusterfs-coreutils for details.
> 
> 
> That's helpful, but I think then some POSIX file operations can't be
> performed anymore.

Indeed, glusterfs-coreutils is more of an object storage interface than
a POSIX-compliant filesystem.

Niels


> 
> 
> Thanks,
> 
> Changwei
> 
> 
> > 
> > HTH,
> > Niels
> > 
> > 
> > > 
> > > Thanks,
> > > 
> > > Changwei
> > > 
> > > 
> > > > Regards,
> > > > Ravi
> > > > > 
> > > > > I happened to find some patches[1] on the internet aiming to address
> > > > > such a problem but have no idea why they did not manage to get merged
> > > > > into the glusterfs mainline.

Re: [Gluster-devel] [RFC] What if client fuse process crash?

2019-08-06 Thread Niels de Vos
On Tue, Aug 06, 2019 at 03:14:46PM +0800, Changwei Ge wrote:
> On 2019/8/6 2:57 下午, Ravishankar N wrote:
> > 
> > On 06/08/19 11:44 AM, Changwei Ge wrote:
> > > Hi Ravishankar,
> > > 
> > > 
> > > Thanks for sharing, it's very useful to me.
> > > 
> > > I have been setting up a glusterfs storage cluster recently and the
> > > umount/mount recovery process bothered me.
> > Hi Changwei,
> > Why are you needing to do frequent remounts? If your gluster fuse client
> > is crashing frequently, that should be investigated and fixed. If you
> > have a reproducer, please raise a bug with all the details like the
> > glusterfs version, core files and log files.
> 
> 
> Hi Ravi,
> 
> Actually, the glusterfs client fuse process runs well in my environment.
> But high availability and fault tolerance are also big concerns for me.
> 
> So I killed the fuse process to see what would happen. AFAIK, userspace
> processes can be killed or crash somehow, which is not under our
> control. :-(
> 
> Another scenario is *software upgrade*. Since we have to upgrade the
> glusterfs client version in order to gain features and bug fixes, it
> would be friendly to applications if the upgrade were transparent.

Open files have state associated with them, and that state is lost
when the fuse process exits. Restarting the fuse process will then need
to restore the state of the open files (and caches, and more). This is
not trivial and I do not think any work on this end has been done yet.

Some users take an alternative route. Mounted filesystems do indeed have
issues with online updating. So maybe you do not need to mount the
filesystem at all. Depending on the needs of your applications, using
glusterfs-coreutils instead of a FUSE (or NFS) mount might be an option
for you. The short-lived processes connect to the Gluster volume when
needed, and do not keep a connection open. Updating userspace tools is
much simpler than updating long-running processes that are hooked into the
kernel.

See https://github.com/gluster/glusterfs-coreutils for details.
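For illustration only (volume and path names are made up; the exact tool
names and glfs:// URI syntax are documented in that repository), copying a
file in and out of a volume without any mount could look like:

    # copy a local file into the volume 'myvol' served by server1
    gfcp ./backup.tar glfs://server1/myvol/backups/backup.tar
    # read it back later, again without a long-lived mount
    gfcat glfs://server1/myvol/backups/backup.tar > /tmp/backup.tar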

HTH,
Niels


> 
> 
> Thanks,
> 
> Changwei
> 
> 
> > Regards,
> > Ravi
> > > 
> > > 
> > > I happened to find some patches[1] on the internet aiming to address
> > > such a problem but have no idea why they did not manage to get merged
> > > into the glusterfs mainline.
> > > 
> > > Do you know why?
> > > 
> > > 
> > > Thanks,
> > > 
> > > Changwei
> > > 
> > > 
> > > [1]:
> > > 
> > > https://review.gluster.org/#/c/glusterfs/+/16843/
> > > 
> > > https://github.com/gluster/glusterfs/issues/242
> > > 
> > > 
> > > On 2019/8/6 1:12 下午, Ravishankar N wrote:
> > > > On 05/08/19 3:31 PM, Changwei Ge wrote:
> > > > > Hi list,
> > > > > 
> > > > > If somehow, glusterfs client fuse process dies. All
> > > > > subsequent file operations will be failed with error 'no
> > > > > connection'.
> > > > > 
> > > > > I am curious if the only way to recover is umount and mount again?
> > > > Yes, this is pretty much the case with all fuse based file
> > > > systems. You can use -o auto_unmount
> > > > (https://review.gluster.org/#/c/17230/) to automatically clean up
> > > > and avoid having to manually unmount.
> > > > > 
> > > > > If so, that means all processes working on top of glusterfs
> > > > > have to close files, which sometimes is hard to be
> > > > > acceptable.
> > > > 
> > > > There is
> > > > https://research.cs.wisc.edu/wind/Publications/refuse-eurosys11.html,
> > > > which claims to provide a framework for transparent failovers. I
> > > > can't find any publicly available code though.
> > > > 
> > > > Regards,
> > > > Ravi
> > > > > 
> > > > > 
> > > > > Thanks,
> > > > > 
> > > > > Changwei
> > > > > 
> > > > > 

Re: [Gluster-devel] Release 5.7 or 5.8

2019-07-12 Thread Niels de Vos
On Thu, Jul 11, 2019 at 01:02:48PM +0530, Hari Gowtham wrote:
> Hi,
> 
> We came across an build issue with release 5.7. It was related the
> python version.
> A fix for it ha been posted [ 
> https://review.gluster.org/#/c/glusterfs/+/23028 ]
> Once we take this fix in we need to go ahead with tagging and release it.
> Though we have tagged 5.7, we weren't able to package 5.7 because of this 
> issue.
> 
> Now the question is, to create 5.7.1 or go with 5.8 as recreating a
> tag isn't an option.
> My take is to create 5.8 and mark 5.7 obsolete. And the reasons are as below:
> *) We have moved on to using 5.x.  Going back to 5.x.y will be confusing.
> *) 5.8 is also due as we got delayed a lot in this issue.
> 
> If we have any other opinion, please let us know so we can decide and
> go ahead with the best option.

I would go with 5.7.1. However, if 5.8 would be tagged around the same
time, then only do 5.8.

Niels



Re: [Gluster-devel] Removing glupy from release 5.7

2019-07-08 Thread Niels de Vos
On Mon, Jul 08, 2019 at 02:37:34PM +0530, Hari Gowtham wrote:
> I have a few concerns about adding the python3 devel package and
> continuing the build.
> In the effort to make Gluster python3 compatible,
> https://github.com/gluster/glusterfs/issues/411
> I think we have decided to skip working on Glupy to make it python3 
> compatible.
> (Correct me if I'm wrong.) Glupy was decided to be deprecated,
> though I don't see any mail thread regarding the same.
> I don't see any patches merged to make Glupy python3 compatible either.
> 
> In such a case, I think it's better to make changes to the configure.ac
> of release 5 to work with python2 alone.
> This way, Glupy will not be affected as well. And machines with
> python3 will also work because of the presence of python2.
> And no change will be needed on the infra side as well.

Building when only python3 is available should still keep working as
well. Recent Fedora versions do not have python2 (by default?) anymore,
and that may be true for other distributions too.

configure.ac for release-5 and release-4.1 should probably prefer
python2 before python3.

Niels


> We are a bit too late with the 5 series releases. If we are fine with
> this approach,
> I will send out a mail informing this, work on the patch and push it.
> 
> 
> On Fri, Jul 5, 2019 at 6:48 PM Niels de Vos  wrote:
> >
> > On Thu, Jul 04, 2019 at 05:03:53PM +0200, Michael Scherer wrote:
> > > Le jeudi 04 juillet 2019 à 16:20 +0200, Niels de Vos a écrit :
> > > > On Wed, Jul 03, 2019 at 04:46:11PM +0200, Michael Scherer wrote:
> > > > > Le mercredi 03 juillet 2019 à 20:03 +0530, Deepshikha Khandelwal a
> > > > > écrit :
> > > > > > Misc, is EPEL got recently installed on the builders?
> > > > >
> > > > > No, it has been there since september 2016. What got changed is
> > > > > that
> > > > > python3 wasn't installed before.
> > > > >
> > > > > > Can you please resolve the 'Why EPEL on builders?'. EPEL+python3
> > > > > > on
> > > > > > builders seems not a good option to have.
> > > > >
> > > > >
> > > > > Python 3 is pulled by 'mock', cf
> > > > >
> > > https://lists.gluster.org/pipermail/gluster-devel/2019-June/056347.html
> > > > >
> > > > > So sure, I can remove EPEL, but then it will remove mock. Or I can
> > > > > remove python3, and it will remove mock.
> > > > >
> > > > > But again, the problem is not with the set of installed packages on
> > > > > the
> > > > > builder, that's just showing there is a bug.
> > > > >
> > > > > The configure script do pick the latest python version:
> > > > > https://github.com/gluster/glusterfs/blob/master/configure.ac#L612
> > > > >
> > > > > if there is a python3, it take that, if not, it fall back to
> > > > > python2.
> > > > >
> > > > > then, later:
> > > > > https://github.com/gluster/glusterfs/blob/master/configure.ac#L639
> > > > >
> > > > > it verify the presence of what is required to build.
> > > > >
> > > > > So if there is a runtime version only of python3, it will detect
> > > > > python3, but not build anything, because the -devel subpackage is
> > > > > not h
> > > > > ere.
> > > > >
> > > > > There is 2 solutions:
> > > > > - fix that piece of code, so it doesn't just test the presence of
> > > > > python executable, but do that, and test the presence of headers
> > > > > before
> > > > > deciding if we need to build or not glupy.
> > > > >
> > > > > - use PYTHON env var to force python2, and document that it need to
> > > > > be
> > > > > done.
> > > >
> > > > What about option 3:
> > > >
> > > > - install python3-devel in addition to python3
> > >
> > > That's an option, but I think that's a disservice to the users, since
> > > that's fixing our CI to no longer trigger a corner case, which doesn't
> > > mean the corner case no longer exists, just that we do not trigger it.
> >
> > This is only interesting for building releases/packages, I think. Normal
> > build environments have -devel packages installed for the components
> > that are used during the build process. The weird python2-devel and
> > python3 (without -devel) is definitely a corner case, but not something
> > people would normally have.

Re: [Gluster-devel] Removing glupy from release 5.7

2019-07-05 Thread Niels de Vos
On Thu, Jul 04, 2019 at 05:03:53PM +0200, Michael Scherer wrote:
> Le jeudi 04 juillet 2019 à 16:20 +0200, Niels de Vos a écrit :
> > On Wed, Jul 03, 2019 at 04:46:11PM +0200, Michael Scherer wrote:
> > > Le mercredi 03 juillet 2019 à 20:03 +0530, Deepshikha Khandelwal a
> > > écrit :
> > > > Misc, is EPEL got recently installed on the builders?
> > > 
> > > No, it has been there since september 2016. What got changed is
> > > that
> > > python3 wasn't installed before.
> > > 
> > > > Can you please resolve the 'Why EPEL on builders?'. EPEL+python3
> > > > on
> > > > builders seems not a good option to have.
> > > 
> > > 
> > > Python 3 is pulled by 'mock', cf 
> > > 
> https://lists.gluster.org/pipermail/gluster-devel/2019-June/056347.html
> > > 
> > > So sure, I can remove EPEL, but then it will remove mock. Or I can
> > > remove python3, and it will remove mock.
> > > 
> > > But again, the problem is not with the set of installed packages on
> > > the
> > > builder, that's just showing there is a bug.
> > > 
> > > The configure script do pick the latest python version:
> > > https://github.com/gluster/glusterfs/blob/master/configure.ac#L612
> > > 
> > > if there is a python3, it take that, if not, it fall back to
> > > python2. 
> > > 
> > > then, later:
> > > https://github.com/gluster/glusterfs/blob/master/configure.ac#L639
> > > 
> > > it verify the presence of what is required to build.
> > > 
> > > So if there is a runtime version only of python3, it will detect
> > > python3, but not build anything, because the -devel subpackage is
> > > not h
> > > ere. 
> > > 
> > > There is 2 solutions:
> > > - fix that piece of code, so it doesn't just test the presence of
> > > python executable, but do that, and test the presence of headers
> > > before
> > > deciding if we need to build or not glupy.
> > > 
> > > - use PYTHON env var to force python2, and document that it need to
> > > be
> > > done.  
> > 
> > What about option 3:
> > 
> > - install python3-devel in addition to python3
> 
> That's an option, but I think that's a disservice to the users, since
> that's fixing our CI to no longer trigger a corner case, which doesn't
> mean the corner case no longer exists, just that we do not trigger it.

This is only interesting for building releases/packages, I think. Normal
build environments have -devel packages installed for the components
that are used during the build process. The weird python2-devel and
python3 (without -devel) is definitely a corner case, but not something
people would normally have. And if so, we expect -devel for the python
version that is used, so developers would hopefully just install that on
their build system.

Niels



Re: [Gluster-devel] Removing glupy from release 5.7

2019-07-04 Thread Niels de Vos
On Wed, Jul 03, 2019 at 04:46:11PM +0200, Michael Scherer wrote:
> Le mercredi 03 juillet 2019 à 20:03 +0530, Deepshikha Khandelwal a
> écrit :
> > Misc, is EPEL got recently installed on the builders?
> 
> No, it has been there since september 2016. What got changed is that
> python3 wasn't installed before.
> 
> > Can you please resolve the 'Why EPEL on builders?'. EPEL+python3 on
> > builders seems not a good option to have.
> 
> 
> Python 3 is pulled by 'mock', cf 
> https://lists.gluster.org/pipermail/gluster-devel/2019-June/056347.html
> 
> So sure, I can remove EPEL, but then it will remove mock. Or I can
> remove python3, and it will remove mock.
> 
> But again, the problem is not with the set of installed packages on the
> builder, that's just showing there is a bug.
> 
> The configure script does pick the latest python version:
> https://github.com/gluster/glusterfs/blob/master/configure.ac#L612
> 
> if there is a python3, it takes that; if not, it falls back to python2.
> 
> then, later:
> https://github.com/gluster/glusterfs/blob/master/configure.ac#L639
> 
> it verifies the presence of what is required to build.
> 
> So if there is only a runtime version of python3, it will detect
> python3, but not build anything, because the -devel subpackage is not
> here.
> 
> There are 2 solutions:
> - fix that piece of code, so it doesn't just test for the presence of the
> python executable, but does that and also tests for the presence of the
> headers before deciding whether or not to build glupy.
> 
> - use the PYTHON env var to force python2, and document that it needs to
> be done.

What about option 3:

- install python3-devel in addition to python3
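On a builder that could be as simple as something along these lines (a
rough sketch; the exact package name depends on the repository in use,
e.g. python36-devel from EPEL versus python3-devel on newer releases):

    # see which python bits are present on the builder
    rpm -q python3 python3-devel
    # python3-config only exists once the -devel headers are installed
    python3-config --includes
    # install the headers so configure can find Python.h
    yum install -y python3-devel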

Niels


> 
> 
> > On Thu, Jun 20, 2019 at 6:37 PM Michael Scherer 
> > wrote:
> > 
> > > Le jeudi 20 juin 2019 à 08:38 -0400, Kaleb Keithley a écrit :
> > > > On Thu, Jun 20, 2019 at 7:39 AM Michael Scherer <
> > > > msche...@redhat.com>
> > > > wrote:
> > > > 
> > > > > Le jeudi 20 juin 2019 à 06:57 -0400, Kaleb Keithley a écrit :
> > > > > > AFAICT, working fine right up to when EPEL and python3 were
> > > > > > installed
> > > > > > on
> > > > > > the centos builders.  If it was my decision, I'd undo that
> > > > > > change.
> > > > > 
> > > > > The biggest problem is that mock do pull python3.
> > > > > 
> > > > > 
> > > > 
> > > > That's mock on Fedora — to run a build in a centos-i386 chroot.
> > > > Fedora
> > > > already has python3. I don't see how that can affect what's
> > > > running
> > > > in the
> > > > mock chroot.
> > > 
> > > I am not sure we are talking about the same thing, but mock, the
> > > rpm
> > > package from EPEL 7, do pull python 3:
> > > 
> > > $ cat /etc/redhat-release;   rpm -q --requires mock |grep
> > > 'python(abi'
> > > Red Hat Enterprise Linux Server release 7.6 (Maipo)
> > > python(abi) = 3.6
> > > 
> > > So we do have python3 installed on the Centos 7 builders (and was
> > > after
> > > a upgrade), and we are not going to remove it, because we use mock
> > > for
> > > a lot of stuff.
> > > 
> > > And again, if the configure script is detecting the wrong version
> > > of
> > > python, the fix is not to remove the version of python for the
> > > builders, the fix is to detect the right version of python, or at
> > > least, permit to people to bypass the detection.
> > > 
> > > > Is the build inside mock also installing EPEL and python3
> > > > somehow?
> > > > Now? If so, why?
> > > 
> > > No, I doubt but then, if we are using a chroot, the package
> > > installed
> > > on the builders shouldn't matter, since that's a chroot.
> > > 
> > > So I am kinda being lost.
> > > 
> > > > And maybe the solution for centos regressions is to run those in
> > > > mock, with a centos-x86_64 chroot. Without EPEL or python3.
> > > 
> > > That would likely requires a big refactor of the setup, since we
> > > have
> > > to get the data out of specific place, etc. We would also need to
> > > reinstall the builders to set partitions in a different way, with a
> > > bigger / and/or give more space for /var/lib/mock.
> > > 
> > > I do not see that happening fast, and if my hypothesis of a issue
> > > in
> > > configure is right, then fixing seems the faster way to avoid the
> > > issue.
> > > --
> > > Michael Scherer
> > > Sysadmin, Community Infrastructure
> > > 
> > > 
> > > 
> -- 
> Michael Scherer
> Sysadmin, Community Infrastructure
> 
> 
> 




Re: [Gluster-devel] Removing glupy from release 5.7

2019-06-20 Thread Niels de Vos
On Thu, Jun 20, 2019 at 03:42:06PM +0530, Deepshikha Khandelwal wrote:
> On Thu, Jun 20, 2019 at 3:20 PM Niels de Vos  wrote:
> 
> > On Thu, Jun 20, 2019 at 02:56:51PM +0530, Amar Tumballi Suryanarayan wrote:
> > > On Thu, Jun 20, 2019 at 2:35 PM Niels de Vos  wrote:
> > >
> > > > On Thu, Jun 20, 2019 at 02:11:21PM +0530, Amar Tumballi Suryanarayan
> > wrote:
> > > > > On Thu, Jun 20, 2019 at 1:13 PM Niels de Vos 
> > wrote:
> > > > >
> > > > > > On Thu, Jun 20, 2019 at 11:36:46AM +0530, Amar Tumballi
> > Suryanarayan
> > > > wrote:
> > > > > > > Considering python3 is anyways the future, I vote for taking the
> > > > patch we
> > > > > > > did in master for fixing regression tests with python3 into the
> > > > release-6
> > > > > > > and release-5 branch and getting over this deadlock.
> > > > > > >
> > > > > > > Patch in discussion here is
> > > > > > > https://review.gluster.org/#/c/glusterfs/+/22829/ and if anyone
> > > > > > notices, it
> > > > > > > changes only the files inside 'tests/' directory, which is not
> > > > packaged
> > > > > > in
> > > > > > > a release anyways.
> > > > > > >
> > > > > > > Hari, can we get the backport of this patch to both the release
> > > > branches?
> > > > > >
> > > > > > When going this route, you still need to make sure that the
> > > > > > python3-devel package is available on the CentOS-7 builders. And I
> > > > > > don't know if installing that package is already sufficient, maybe
> > the
> > > > > > backport is not even needed in that case.
> > > > > >
> > > > > >
> > > > > I was thinking, having this patch makes it compatible with both
> > python2
> > > > and
> > > > > python3, so technically, it allows us to move to Fedora30 if we need
> > to
> > > > run
> > > > > regression there. (and CentOS7 with only python2).
> > > > >
> > > > > The above patch made it compatible, not mandatory to have python3.
> > So,
> > > > > treating it as a bug fix.
> > > >
> > > > Well, whatever Python is detected (python3 has preference over
> > python2),
> > > > needs to have the -devel package available too. Detection is done by
> > > > probing the python executable. The Matching header files from -devel
> > > > need to be present in order to be able to build glupy (and others?).
> > > >
> > > > I do not think compatibility for python3/2 is the problem while
> > > > building the tarball.
> > >
> > >
> > > Got it! True. Compatibility is not the problem to build the tarball.
> > >
> > > I noticed the issue of smoke is coming only from strfmt-errors job, which
> > > checks for 'epel-6-i386' mock, and fails right now.
> > >
> > > The backport might become relevant while running
> > > > tests on environments where there is no python2.
> > > >
> > > >
> > > Backport is very important if we are running in a system where we have
> > only
> > > python3. Hence my proposal to include it in releases.
> >
> > I am sure CentOS-7 still has python2. The newer python3 only gets pulled
> > in by some additional packages that get installed from EPEL.
> >
> > > But we are stuck with strfmt-errors job right now, and looking at what it
> > > was intended to catch in first place, mostly our
> > > https://build.gluster.org/job/32-bit-build-smoke/ would be doing same.
> > If
> > > that is the case, we can remove the job altogether.  Also note, this job
> > is
> > > known to fail many smokes with 'Build root is locked by another process'
> > > errors.
> >
> > This error means that there are multiple concurrent jobs running 'mock'
> > with this buildroot. That should not happen and is a configuration error
> > in one or more Jenkins jobs.
> 
>  Adding to this, this error occurs when the last job that was using mock
> was aborted and no proper cleanup/killing of processes in the build root
> happened. I'm planning to call a cleanup function on abort.

Ah, right, that is a possibility too. Jobs should clean up after
themselves, and if that is not happening, it is a bug in the job.
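Such a cleanup on abort could be something as simple as the following
sketch, assuming the job uses the epel-6-i386 mock config mentioned
elsewhere in this thread:

    # kill any processes still running inside the chroot of an aborted job
    mock -r epel-6-i386 --orphanskill
    # then drop the buildroot so the next job starts clean
    mock -r epel-6-i386 --clean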

Re: [Gluster-devel] Removing glupy from release 5.7

2019-06-20 Thread Niels de Vos
On Thu, Jun 20, 2019 at 02:56:51PM +0530, Amar Tumballi Suryanarayan wrote:
> On Thu, Jun 20, 2019 at 2:35 PM Niels de Vos  wrote:
> 
> > On Thu, Jun 20, 2019 at 02:11:21PM +0530, Amar Tumballi Suryanarayan wrote:
> > > On Thu, Jun 20, 2019 at 1:13 PM Niels de Vos  wrote:
> > >
> > > > On Thu, Jun 20, 2019 at 11:36:46AM +0530, Amar Tumballi Suryanarayan
> > wrote:
> > > > > Considering python3 is anyways the future, I vote for taking the
> > patch we
> > > > > did in master for fixing regression tests with python3 into the
> > release-6
> > > > > and release-5 branch and getting over this deadlock.
> > > > >
> > > > > Patch in discussion here is
> > > > > https://review.gluster.org/#/c/glusterfs/+/22829/ and if anyone
> > > > notices, it
> > > > > changes only the files inside 'tests/' directory, which is not
> > packaged
> > > > in
> > > > > a release anyways.
> > > > >
> > > > > Hari, can we get the backport of this patch to both the release
> > branches?
> > > >
> > > > When going this route, you still need to make sure that the
> > > > python3-devel package is available on the CentOS-7 builders. And I
> > > > don't know if installing that package is already sufficient, maybe the
> > > > backport is not even needed in that case.
> > > >
> > > >
> > > I was thinking, having this patch makes it compatible with both python2
> > and
> > > python3, so technically, it allows us to move to Fedora30 if we need to
> > run
> > > regression there. (and CentOS7 with only python2).
> > >
> > > The above patch made it compatible, not mandatory to have python3. So,
> > > treating it as a bug fix.
> >
> > Well, whatever Python is detected (python3 has preference over python2),
> > needs to have the -devel package available too. Detection is done by
> > probing the python executable. The Matching header files from -devel
> > need to be present in order to be able to build glupy (and others?).
> >
> > I do not think compatibility for python3/2 is the problem while
> > building the tarball.
> 
> 
> Got it! True. Compatibility is not the problem to build the tarball.
> 
> I noticed the issue of smoke is coming only from strfmt-errors job, which
> checks for 'epel-6-i386' mock, and fails right now.
> 
> The backport might become relevant while running
> > tests on environments where there is no python2.
> >
> >
> Backport is very important if we are running in a system where we have only
> python3. Hence my proposal to include it in releases.

I am sure CentOS-7 still has python2. The newer python3 only gets pulled
in by some additional packages that get installed from EPEL.

> But we are stuck with strfmt-errors job right now, and looking at what it
> was intended to catch in first place, mostly our
> https://build.gluster.org/job/32-bit-build-smoke/ would be doing same. If
> that is the case, we can remove the job altogether.  Also note, this job is
> known to fail many smokes with 'Build root is locked by another process'
> errors.

This error means that there are multiple concurrent jobs running 'mock'
with this buildroot. That should not happen and is a configuration error
in one or more Jenkins jobs.

> Would be great if disabling strfmt-errors is an option.

I think both jobs do different things. The smoke job is functional, whereas
strfmt-errors catches incorrect string formatting (some maintainers
assume 64-bit everywhere) that has been missed in reviews.
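Roughly speaking, the job just performs a 32-bit build under mock so that
those format-string warnings surface; a sketch of the idea:

    # rebuild the source RPM in the 32-bit EL6 buildroot used by the job
    mock -r epel-6-i386 --rebuild glusterfs-*.src.rpm
    # incorrect 64-bit format assumptions then show up in build.log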

Niels


> 
> Regards,
> 
> > Niels
> >
> >
> > >
> > >
> > > > Niels
> > > >
> > > >
> > > > >
> > > > > Regards,
> > > > > Amar
> > > > >
> > > > > On Thu, Jun 13, 2019 at 7:26 PM Michael Scherer  > >
> > > > wrote:
> > > > >
> > > > > > Le jeudi 13 juin 2019 à 14:28 +0200, Niels de Vos a écrit :
> > > > > > > On Thu, Jun 13, 2019 at 11:08:25AM +0200, Niels de Vos wrote:
> > > > > > > > On Wed, Jun 12, 2019 at 04:09:55PM -0700, Kaleb Keithley wrote:
> > > > > > > > > On Wed, Jun 12, 2019 at 11:36 AM Kaleb Keithley <
> > > > > > > > > kkeit...@redhat.com> wrote:
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > On Wed, Jun 12, 2019 at 10:43 AM Amar Tumballi Suryanarayan wrote:

Re: [Gluster-devel] Removing glupy from release 5.7

2019-06-20 Thread Niels de Vos
On Thu, Jun 20, 2019 at 02:11:21PM +0530, Amar Tumballi Suryanarayan wrote:
> On Thu, Jun 20, 2019 at 1:13 PM Niels de Vos  wrote:
> 
> > On Thu, Jun 20, 2019 at 11:36:46AM +0530, Amar Tumballi Suryanarayan wrote:
> > > Considering python3 is anyways the future, I vote for taking the patch we
> > > did in master for fixing regression tests with python3 into the release-6
> > > and release-5 branch and getting over this deadlock.
> > >
> > > Patch in discussion here is
> > > https://review.gluster.org/#/c/glusterfs/+/22829/ and if anyone
> > notices, it
> > > changes only the files inside 'tests/' directory, which is not packaged
> > in
> > > a release anyways.
> > >
> > > Hari, can we get the backport of this patch to both the release branches?
> >
> > When going this route, you still need to make sure that the
> > python3-devel package is available on the CentOS-7 builders. And I
> > don't know if installing that package is already sufficient, maybe the
> > backport is not even needed in that case.
> >
> >
> I was thinking, having this patch makes it compatible with both python2 and
> python3, so technically, it allows us to move to Fedora30 if we need to run
> regression there. (and CentOS7 with only python2).
> 
> The above patch made it compatible, not mandatory to have python3. So,
> treating it as a bug fix.

Well, whatever Python is detected (python3 has preference over python2),
needs to have the -devel package available too. Detection is done by
probing the python executable. The Matching header files from -devel
need to be present in order to be able to build glupy (and others?).

I do not think compatibility for python3/2 is the problem while
building the tarball. The backport might become relevant while running
tests on environments where there is no python2.

Niels


> 
> 
> > Niels
> >
> >
> > >
> > > Regards,
> > > Amar
> > >
> > > On Thu, Jun 13, 2019 at 7:26 PM Michael Scherer 
> > wrote:
> > >
> > > > Le jeudi 13 juin 2019 à 14:28 +0200, Niels de Vos a écrit :
> > > > > On Thu, Jun 13, 2019 at 11:08:25AM +0200, Niels de Vos wrote:
> > > > > > On Wed, Jun 12, 2019 at 04:09:55PM -0700, Kaleb Keithley wrote:
> > > > > > > On Wed, Jun 12, 2019 at 11:36 AM Kaleb Keithley <
> > > > > > > kkeit...@redhat.com> wrote:
> > > > > > >
> > > > > > > >
> > > > > > > > On Wed, Jun 12, 2019 at 10:43 AM Amar Tumballi Suryanarayan <
> > > > > > > > atumb...@redhat.com> wrote:
> > > > > > > >
> > > > > > > > >
> > > > > > > > > We recently noticed that in one of the package update on
> > > > > > > > > builder (ie,
> > > > > > > > > centos7.x machines), python3.6 got installed as a dependency.
> > > > > > > > > So, yes, it
> > > > > > > > > is possible to have python3 in centos7 now.
> > > > > > > > >
> > > > > > > >
> > > > > > > > EPEL updated from python34 to python36 recently, but C7 doesn't
> > > > > > > > have
> > > > > > > > python3 in the base. I don't think we've ever used EPEL
> > > > > > > > packages for
> > > > > > > > building.
> > > > > > > >
> > > > > > > > And GlusterFS-5 isn't python3 ready.
> > > > > > > >
> > > > > > >
> > > > > > > Correction: GlusterFS-5 is mostly or completely python3
> > > > > > > ready.  FWIW,
> > > > > > > python33 is available on both RHEL7 and CentOS7 from the Software
> > > > > > > Collection Library (SCL), and python34 and now python36 are
> > > > > > > available from
> > > > > > > EPEL.
> > > > > > >
> > > > > > > But packages built for the CentOS Storage SIG have never used the
> > > > > > > SCL or
> > > > > > > EPEL (EPEL not allowed) and the shebangs in the .py files are
> > > > > > > converted
> > > > > > > from /usr/bin/python3 to /usr/bin/python2 during the rpmbuild
> > > > > > > %prep stage.
> > > > > > > > > All the python dependencies for the packages remain the python2
> > > > > > > > > flavors.

Re: [Gluster-devel] Removing glupy from release 5.7

2019-06-20 Thread Niels de Vos
On Thu, Jun 20, 2019 at 11:36:46AM +0530, Amar Tumballi Suryanarayan wrote:
> Considering python3 is anyways the future, I vote for taking the patch we
> did in master for fixing regression tests with python3 into the release-6
> and release-5 branch and getting over this deadlock.
> 
> Patch in discussion here is
> https://review.gluster.org/#/c/glusterfs/+/22829/ and if anyone notices, it
> changes only the files inside 'tests/' directory, which is not packaged in
> a release anyways.
> 
> Hari, can we get the backport of this patch to both the release branches?

When going this route, you still need to make sure that the
python3-devel package is available on the CentOS-7 builders. And I
don't know if installing that package is already sufficient, maybe the
backport is not even needed in that case.

Niels


> 
> Regards,
> Amar
> 
> On Thu, Jun 13, 2019 at 7:26 PM Michael Scherer  wrote:
> 
> > Le jeudi 13 juin 2019 à 14:28 +0200, Niels de Vos a écrit :
> > > On Thu, Jun 13, 2019 at 11:08:25AM +0200, Niels de Vos wrote:
> > > > On Wed, Jun 12, 2019 at 04:09:55PM -0700, Kaleb Keithley wrote:
> > > > > On Wed, Jun 12, 2019 at 11:36 AM Kaleb Keithley <
> > > > > kkeit...@redhat.com> wrote:
> > > > >
> > > > > >
> > > > > > On Wed, Jun 12, 2019 at 10:43 AM Amar Tumballi Suryanarayan <
> > > > > > atumb...@redhat.com> wrote:
> > > > > >
> > > > > > >
> > > > > > > We recently noticed that in one of the package update on
> > > > > > > builder (ie,
> > > > > > > centos7.x machines), python3.6 got installed as a dependency.
> > > > > > > So, yes, it
> > > > > > > is possible to have python3 in centos7 now.
> > > > > > >
> > > > > >
> > > > > > EPEL updated from python34 to python36 recently, but C7 doesn't
> > > > > > have
> > > > > > python3 in the base. I don't think we've ever used EPEL
> > > > > > packages for
> > > > > > building.
> > > > > >
> > > > > > And GlusterFS-5 isn't python3 ready.
> > > > > >
> > > > >
> > > > > Correction: GlusterFS-5 is mostly or completely python3
> > > > > ready.  FWIW,
> > > > > python33 is available on both RHEL7 and CentOS7 from the Software
> > > > > Collection Library (SCL), and python34 and now python36 are
> > > > > available from
> > > > > EPEL.
> > > > >
> > > > > But packages built for the CentOS Storage SIG have never used the
> > > > > SCL or
> > > > > EPEL (EPEL not allowed) and the shebangs in the .py files are
> > > > > converted
> > > > > from /usr/bin/python3 to /usr/bin/python2 during the rpmbuild
> > > > > %prep stage.
> > > > > All the python dependencies for the packages remain the python2
> > > > > flavors.
> > > > > AFAIK the centos-regression machines ought to be building the
> > > > > same way.
> > > >
> > > > Indeed, there should not be a requirement on having EPEL enabled on
> > > > the
> > > > CentOS-7 builders. At least not for the building of the glusterfs
> > > > tarball. We still need to do releases of glusterfs-4.1 and
> > > > glusterfs-5,
> > > > until then it is expected to have python2 as the (only?) version
> > > > for the
> > > > system. Is it possible to remove python3 from the CentOS-7 builders
> > > > and
> > > > run the jobs that require python3 on the Fedora builders instead?
> > >
> > > Actually, if the python-devel package for python3 is installed on the
> > > CentOS-7 builders, things may work too. It still feels like some sort
> > > of
> > > Frankenstein deployment, and we don't expect to see this in
> > > production
> > > environments. But maybe this is a workaround in case something
> > > really,
> > > really, REALLY depends on python3 on the builders.
> >
> > To be honest, people would be surprised by what happens in production
> > (sysadmins tend to talk among themselves, and we all have horror stories of
> > stuff that was supposed to be cleaned up and wasn't, etc.)
> >
> > After all, "frankenstein deployment now" is better than "perfect
> > later", especially since lots of IT departments are under

Re: [Gluster-devel] Removing glupy from release 5.7

2019-06-13 Thread Niels de Vos
On Thu, Jun 13, 2019 at 11:08:25AM +0200, Niels de Vos wrote:
> On Wed, Jun 12, 2019 at 04:09:55PM -0700, Kaleb Keithley wrote:
> > On Wed, Jun 12, 2019 at 11:36 AM Kaleb Keithley  wrote:
> > 
> > >
> > > On Wed, Jun 12, 2019 at 10:43 AM Amar Tumballi Suryanarayan <
> > > atumb...@redhat.com> wrote:
> > >
> > >>
> > >> We recently noticed that in one of the package update on builder (ie,
> > >> centos7.x machines), python3.6 got installed as a dependency. So, yes, it
> > >> is possible to have python3 in centos7 now.
> > >>
> > >
> > > EPEL updated from python34 to python36 recently, but C7 doesn't have
> > > python3 in the base. I don't think we've ever used EPEL packages for
> > > building.
> > >
> > > And GlusterFS-5 isn't python3 ready.
> > >
> > 
> > Correction: GlusterFS-5 is mostly or completely python3 ready.  FWIW,
> > python33 is available on both RHEL7 and CentOS7 from the Software
> > Collection Library (SCL), and python34 and now python36 are available from
> > EPEL.
> > 
> > But packages built for the CentOS Storage SIG have never used the SCL or
> > EPEL (EPEL not allowed) and the shebangs in the .py files are converted
> > from /usr/bin/python3 to /usr/bin/python2 during the rpmbuild %prep stage.
> > All the python dependencies for the packages remain the python2 flavors.
> > AFAIK the centos-regression machines ought to be building the same way.
> 
> Indeed, there should not be a requirement on having EPEL enabled on the
> CentOS-7 builders. At least not for the building of the glusterfs
> tarball. We still need to do releases of glusterfs-4.1 and glusterfs-5,
> until then it is expected to have python2 as the (only?) version for the
> system. Is it possible to remove python3 from the CentOS-7 builders and
> run the jobs that require python3 on the Fedora builders instead?

Actually, if the python-devel package for python3 is installed on the
CentOS-7 builders, things may work too. It still feels like some sort of
Frankenstein deployment, and we don't expect to see this in production
environments. But maybe this is a workaround in case something really,
really, REALLY depends on python3 on the builders.

Niels



Re: [Gluster-devel] Removing glupy from release 5.7

2019-06-13 Thread Niels de Vos
On Wed, Jun 12, 2019 at 04:09:55PM -0700, Kaleb Keithley wrote:
> On Wed, Jun 12, 2019 at 11:36 AM Kaleb Keithley  wrote:
> 
> >
> > On Wed, Jun 12, 2019 at 10:43 AM Amar Tumballi Suryanarayan <
> > atumb...@redhat.com> wrote:
> >
> >>
> >> We recently noticed that in one of the package update on builder (ie,
> >> centos7.x machines), python3.6 got installed as a dependency. So, yes, it
> >> is possible to have python3 in centos7 now.
> >>
> >
> > EPEL updated from python34 to python36 recently, but C7 doesn't have
> > python3 in the base. I don't think we've ever used EPEL packages for
> > building.
> >
> > And GlusterFS-5 isn't python3 ready.
> >
> 
> Correction: GlusterFS-5 is mostly or completely python3 ready.  FWIW,
> python33 is available on both RHEL7 and CentOS7 from the Software
> Collection Library (SCL), and python34 and now python36 are available from
> EPEL.
> 
> But packages built for the CentOS Storage SIG have never used the SCL or
> EPEL (EPEL not allowed) and the shebangs in the .py files are converted
> from /usr/bin/python3 to /usr/bin/python2 during the rpmbuild %prep stage.
> All the python dependencies for the packages remain the python2 flavors.
> AFAIK the centos-regression machines ought to be building the same way.

Indeed, there should not be a requirement on having EPEL enabled on the
CentOS-7 builders. At least not for the building of the glusterfs
tarball. We still need to do releases of glusterfs-4.1 and glusterfs-5,
until then it is expected to have python2 as the (only?) version for the
system. Is it possible to remove python3 from the CentOS-7 builders and
run the jobs that require python3 on the Fedora builders instead?

I guess we could force the release-4.1 and release-5 branches to use
python2 only. This might be done by exporting PYTHON=/usr/bin/python2 in
the environment where './configure' is run. That would likely require
changes to multiple Jenkins jobs...
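A sketch of what that would mean in such a job:

    # force the release-4.1/release-5 build to pick python2
    export PYTHON=/usr/bin/python2
    ./autogen.sh
    ./configure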

Niels



Re: [Gluster-devel] Removing glupy from release 5.7

2019-06-12 Thread Niels de Vos
On Wed, Jun 12, 2019 at 07:54:17PM +0530, Hari Gowtham wrote:
> We haven't sent any patch to fix it.
> Waiting for the decision to be made.
> The bz: https://bugzilla.redhat.com/show_bug.cgi?id=1719778
> The link to the build log:
> https://build.gluster.org/job/strfmt_errors/1/artifact/RPMS/el6/i686/build.log
> 
> The last few messages in the log:
> 
> config.status: creating xlators/features/changelog/lib/src/Makefile
> config.status: creating xlators/features/changetimerecorder/Makefile
> config.status: creating xlators/features/changetimerecorder/src/Makefile
> BUILDSTDERR: config.status: error: cannot find input file:
> xlators/features/glupy/Makefile.in
> RPM build errors:
> BUILDSTDERR: error: Bad exit status from /var/tmp/rpm-tmp.kGZI5V (%build)
> BUILDSTDERR: Bad exit status from /var/tmp/rpm-tmp.kGZI5V (%build)
> Child return code was: 1
> EXCEPTION: [Error()]
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/mockbuild/trace_decorator.py",
> line 96, in trace
> result = func(*args, **kw)
>   File "/usr/lib/python3.6/site-packages/mockbuild/util.py", line 736,
> in do_with_status
> raise exception.Error("Command failed: \n # %s\n%s" % (command,
> output), child.returncode)
> mockbuild.exception.Error: Command failed:
>  # bash --login -c /usr/bin/rpmbuild -bb --target i686 --nodeps
> /builddir/build/SPECS/glusterfs.spec

Those messages are caused by missing files. The 'make dist' that
generates the tarball in the previous step did not include the glupy
files.
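A quick way to confirm that from the 'make dist' output, as a rough sketch
(the exact tarball name depends on the version being built):

    # check whether the glupy files made it into the generated tarball
    tar tzf glusterfs-5.7*.tar.gz | grep glupy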

https://build.gluster.org/job/strfmt_errors/1/console contains the
following message:

configure: WARNING:

-
cannot build glupy. python 3.6 and python-devel/python-dev package 
are required.

-

I am not sure if there have been any recent backports to release-5 that
introduced this behaviour. Maybe it is related to the builder where the
tarball is generated. The job seems to detect python-3.6.8, which is not
included in CentOS-7, as far as I know?

Maybe someone else understands how this can happen?

HTH,
Niels


> 
> On Wed, Jun 12, 2019 at 7:04 PM Niels de Vos  wrote:
> >
> > On Wed, Jun 12, 2019 at 02:44:04PM +0530, Hari Gowtham wrote:
> > > Hi,
> > >
> > > Due to the recent changes we made, we have a build issue because of glupy.
> > > As glupy is already removed from master, we are thinking of removing
> > > it in 5.7 as well rather than fixing the issue.
> > >
> > > The release of 5.7 will be delayed as we have to send a patch to fix this
> > > issue.
> > > And if anyone has any concerns, do let us know.
> >
> > Could you link to the BZ with the build error and patches that attempt
> > fixing it?
> >
> > We normally do not remove features with minor updates. Fixing the build
> > error would be the preferred approach.
> >
> > Thanks,
> > Niels
> 
> 
> 
> -- 
> Regards,
> Hari Gowtham.
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Removing glupy from release 5.7

2019-06-12 Thread Niels de Vos
On Wed, Jun 12, 2019 at 02:44:04PM +0530, Hari Gowtham wrote:
> Hi,
> 
> Due to the recent changes we made, we have a build issue because of glupy.
> As glupy is already removed from master, we are thinking of removing
> it in 5.7 as well rather than fixing the issue.
> 
> The release of 5.7 will be delayed as we have to send a patch to fix this issue.
> And if anyone has any concerns, do let us know.

Could you link to the BZ with the build error and patches that attempt
fixing it?

We normally do not remove features with minor updates. Fixing the build
error would be the preferred approach.

Thanks,
Niels
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Meeting Details on footer of the gluster-devel and gluster-user mailing list

2019-05-08 Thread Niels de Vos
On Tue, May 07, 2019 at 11:37:27AM -0700, Vijay Bellur wrote:
> On Tue, May 7, 2019 at 11:15 AM FNU Raghavendra Manjunath 
> wrote:
> 
> >
> > + 1 to this.
> >
> 
> I have updated the footer of gluster-devel. If that looks ok, we can extend
> it to gluster-users too.
> 
> In case of a month with 5 Tuesdays, we can skip the 5th Tuesday and always
> stick to the first 4 Tuesdays of every month. That will help in describing
> the community meeting schedule better. If we want to keep the schedule
> running on alternate Tuesdays, please let me know and the mailing list
> footers can be updated accordingly :-).
> 
> 
> > There is also one more thing. For some reason, the community meeting is
> > not visible in my calendar (especially NA region). I am not sure if anyone
> > else is also facing this issue.
> >
> 
> I did face this issue. Realized that we had a meeting today and showed up
> at the meeting a while later but did not see many participants. Perhaps,
> the calendar invite has to be made a recurring one.

Maybe a new invite can be sent with the minutes after a meeting has
finished? That would make it easier for people who recently subscribed to
the list to add it to their calendar.

Niels


> 
> Thanks,
> Vijay
> 
> 
> >
> > Regards,
> > Raghavendra
> >
> > On Tue, May 7, 2019 at 5:19 AM Ashish Pandey  wrote:
> >
> >> Hi,
> >>
> >> While we send a mail on gluster-devel or gluster-user mailing list,
> >> following content gets auto generated and placed at the end of mail.
> >>
> >> Gluster-users mailing list
> >> gluster-us...@gluster.org
> >> https://lists.gluster.org/mailman/listinfo/gluster-users
> >>
> >> Gluster-devel mailing list
> >> Gluster-devel@gluster.org
> >> https://lists.gluster.org/mailman/listinfo/gluster-devel
> >>
> >> In a similar way, is it possible to attach the meeting schedule and link
> >> at the end of all such mails?
> >> Like this -
> >>
> >> Meeting schedule -
> >>
> >>
> >>    - APAC friendly hours
> >>       - Tuesday 14th May 2019, 11:30AM IST
> >>       - Bridge: https://bluejeans.com/836554017
> >>    - NA/EMEA
> >>       - Tuesday 7th May 2019, 01:00 PM EDT
> >>       - Bridge: https://bluejeans.com/486278655
> >>
> >> Or just a link to meeting minutes details??
> >>  
> >> https://github.com/gluster/community/tree/master/meetings
> >>
> >> This will help developers and users of the community know when and where
> >> the meetings happen and how to attend them.
> >>
> >> ---
> >> Ashish
> >>
> >>
> >>
> >>
> >>
> >>
> >> ___
> >> Gluster-users mailing list
> >> gluster-us...@gluster.org
> >> https://lists.gluster.org/mailman/listinfo/gluster-users
> >
> > ___
> > Gluster-users mailing list
> > gluster-us...@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users

> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users

___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] gluster-block v0.4 is alive!

2019-05-06 Thread Niels de Vos
On Thu, May 02, 2019 at 11:04:41PM +0530, Prasanna Kalever wrote:
> Hello Gluster folks,
> 
> Gluster-block team is happy to announce the v0.4 release [1].
> 
> This is the new stable version of gluster-block, lots of new and
> exciting features and interesting bug fixes are made available as part
> of this release.
> Please find the big list of release highlights and notable fixes at [2].
> 
> Details about installation can be found in the easy install guide at
> [3]. Find the details about prerequisites and setup guide at [4].
> If you are a new user, check out the demo video attached in the README
> doc [5], which will be a good source of intro to the project.
> There are good examples about how to use gluster-block both in the man
> pages [6] and test file [7] (also in the README).
> 
> gluster-block is part of the Fedora package collection; an updated package
> with release version v0.4 will soon be made available. The
> community-provided packages will soon be made available at [8].

Updates for Fedora are available in the testing repositories:

Fedora 30: https://bodhi.fedoraproject.org/updates/FEDORA-2019-76730d7230
Fedora 29: https://bodhi.fedoraproject.org/updates/FEDORA-2019-cc7cdce2a4
Fedora 28: https://bodhi.fedoraproject.org/updates/FEDORA-2019-9e9a210110

Installation instructions can be found at the above links. Please leave
testing feedback as comments on the Fedora Update pages.
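
For example, pulling the update in from the testing repository should be as
simple as the following (the package name is assumed to match the Fedora
package, gluster-block):

    # dnf --enablerepo=updates-testing install gluster-block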

Thanks,
Niels


> Please spend a minute to report any kind of issue that comes to your
> notice with this handy link [9].
> We look forward to your feedback, which will help gluster-block get better!
> 
> We would like to thank all our users and contributors for bug filing and
> fixes, and also the whole team who was involved in the huge effort of
> pre-release testing.
> 
> 
> [1] https://github.com/gluster/gluster-block
> [2] https://github.com/gluster/gluster-block/releases
> [3] https://github.com/gluster/gluster-block/blob/master/INSTALL
> [4] https://github.com/gluster/gluster-block#usage
> [5] https://github.com/gluster/gluster-block/blob/master/README.md
> [6] https://github.com/gluster/gluster-block/tree/master/docs
> [7] https://github.com/gluster/gluster-block/blob/master/tests/basic.t
> [8] https://download.gluster.org/pub/gluster/gluster-block/
> [9] https://github.com/gluster/gluster-block/issues/new
> 
> Cheers,
> Team Gluster-Block!
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] BZ updates

2019-04-24 Thread Niels de Vos
On Wed, Apr 24, 2019 at 08:44:10AM +0530, Nithya Balachandran wrote:
> All,
> 
> When working on a bug, please ensure that you update the BZ with any
> relevant information as well as the RCA. I have seen several BZs in the
> past which report crashes, however they do not have a bt or RCA captured.
> Having this information in the BZ makes it much easier to see if a newly
> reported issue has already been fixed.
> 
> I propose that maintainers merge patches only if the BZs are updated with
> required information. It will take some time to make this a habit but it
> will pay off in the end.

Great point! I really hope that most of the contributors know that
debugging steps in bugs are extremely valuable. When documented in a
bug, similar issues can be analyzed with the same techniques. As a
reminder for this, I'm proposing this addition to the Maintainer
Guidelines:

   https://github.com/gluster/glusterdocs/pull/471
   - Ensure the related Bug or GitHub Issue has sufficient details about the
     cause of the problem, or description of the introduction for the change.

I'd appreciate it if someone could approve and merge that. Of course,
suggestions for rephrasing are welcome too.

Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Regards glusterd.service is not started automatically after reboot the node

2019-04-17 Thread Niels de Vos
On Wed, Apr 17, 2019 at 11:36:09AM +0530, Mohit Agrawal wrote:
> Hi,
> 
>   We are facing an issue after installing the glusterfs-6 rpms. After a
> reboot, the node's glusterd.service is not started automatically because
> glusterd.service is not enabled by the installation script. I am not able
> to find the patch that removed the command to enable the service from
> glusterfs.spec.in.
>   I have posted a patch (https://review.gluster.org/#/c/glusterfs/+/22584/)
> to resolve the same.

This is not a bug, it is expected behaviour.

Services are not allowed to get automatically enabled through RPM
scriptlets. Distributions that want to enable glusterd by default should
provide a systemd preset as explained in
https://www.freedesktop.org/wiki/Software/systemd/Preset/ . This is
something you could contribute to
https://github.com/CentOS-Storage-SIG/centos-release-gluster/tree/6
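
As a sketch, such a preset would be a one-line file shipped by the release
package; the path and filename below are only an illustration:

    # /usr/lib/systemd/system-preset/90-glusterd.preset
    enable glusterd.service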

HTH,
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] GF_CALLOC to GF_MALLOC conversion - is it safe?

2019-03-27 Thread Niels de Vos
On Tue, Mar 26, 2019 at 02:52:33PM -0700, Vijay Bellur wrote:
> On Thu, Mar 21, 2019 at 8:44 AM Yaniv Kaul  wrote:
> 
> >
> >
> > On Thu, Mar 21, 2019 at 5:23 PM Nithya Balachandran 
> > wrote:
> >
> >>
> >>
> >> On Thu, 21 Mar 2019 at 16:16, Atin Mukherjee  wrote:
> >>
> >>> All,
> >>>
> >>> In the last few releases of glusterfs, with stability as a primary theme
> >>> of the releases, there have been lots of changes done around code
> >>> optimization with the expectation that such changes will help gluster
> >>> provide better performance. While many of these changes do help, of late
> >>> we have started seeing some adverse effects of them, one especially
> >>> being the calloc to malloc conversions. While I do understand that a
> >>> malloc call will eliminate the extra memset bottleneck which calloc
> >>> bears, with recent kernels having in-built strong compiler optimizations
> >>> I am not sure whether that makes any significant difference, but as I
> >>> mentioned earlier, if this isn't done carefully it can certainly
> >>> introduce a lot of bugs, and I'm writing this email to share one such
> >>> experience.
> >>>
> >>> Sanju & I were having trouble for the last two days figuring out why
> >>> https://review.gluster.org/#/c/glusterfs/+/22388/ wasn't working on
> >>> Sanju's system but had no problems running the same fix in my gluster
> >>> containers. After spending a significant amount of time, what we have
> >>> now figured out is that a malloc call [1] (which was a calloc earlier)
> >>> is the culprit here. As you all can see, in this function we allocate
> >>> txn_id and copy the event->txn_id into it through gf_uuid_copy(). But
> >>> when we were debugging this step-wise through gdb, txn_id wasn't an
> >>> exact copy of event->txn_id and it had some junk values, which made
> >>> glusterd_clear_txn_opinfo be invoked with a wrong txn_id later on, so
> >>> the leaks that the fix was originally intended to remove remained.
> >>>
> >>> This was quite painful to debug and we had to spend some time to figure
> >>> this out. Considering we have converted many such calls in the past, I'd
> >>> urge that we review all such conversions and see if there are any side
> >>> effects to them. Otherwise we might end up running into many potential
> >>> memory-related bugs later on. OTOH, going forward I'd request every
> >>> patch owner/maintainer to pay some special attention to these
> >>> conversions and check that they are really beneficial and error-free.
> >>> IMO, the general guideline should be: for bigger buffers, malloc makes
> >>> better sense but has to be done carefully; for smaller sizes, we stick
> >>> to calloc.
> >>>
> >>
> >>> What do others think about it?
> >>>
> >>
> >> I believe that replacing calloc with malloc everywhere without adequate
> >> testing and review is not safe and am against doing so for the following
> >> reasons:
> >>
> >
> > No patch should get in without adequate testing and thorough review.
> >
> 
> 
> There are lots of interesting points to glean in this thread. However, this
> particular one caught my attention. How about we introduce a policy that no
> patch gets merged unless it is thoroughly tested? The onus would be on the
> developer to provide a .t test case to show completeness in the testing of
> that patch. If the developer does not or cannot for any reason, we could
> have the maintainer run tests and add a note in gerrit explaining the tests
> run. This would provide more assurance about the patches being tested
> before getting merged. Obviously, patches that fix typos or that cannot
> affect any functionality need not be subject to this policy.
> 
> As far as review thoroughness is concerned, it might be better to mandate
> acks from respective maintainers before merging a patch that affects
> several components. More eyeballs that specialize in particular
> component(s) will hopefully catch some of these issues during the review
> phase.

Both of these points have always been strongly encouraged. They are also
documented in the Guidelines for Maintainers:
https://docs.gluster.org/en/latest/Contributors-Guide/Guidelines-For-Maintainers/
https://github.com/gluster/glusterdocs/blob/master/docs/Contributors-Guide/Guidelines-For-Maintainers.md
(formatting is broken in the 1st link, but I don't know how to fix it)

We probably need to apply our own guidelines a little better, and
remind developers that > 90% of the patch(series) should come with a
.t file or added test in an existing one.

And a big +1 for getting reviews or at least some involvement of the
component maintainers.

Niels
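
To illustrate the class of bug discussed above, here is a minimal
standalone sketch. It uses plain calloc/malloc rather than Gluster's
GF_CALLOC/GF_MALLOC wrappers, but the initialization difference is the
same: a malloc'ed structure keeps whatever junk happened to be in memory
for every member that is not explicitly written afterwards.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct txn_opinfo {
        unsigned char txn_id[16];   /* uuid-sized identifier */
        int           in_use;       /* flag that later lookups depend on */
    };

    int main(void)
    {
        /* calloc: the whole struct starts out zeroed, 'in_use' is defined */
        struct txn_opinfo *a = calloc(1, sizeof(*a));

        /* malloc: only txn_id gets written below; 'in_use' keeps junk */
        struct txn_opinfo *b = malloc(sizeof(*b));

        if (!a || !b)
            return EXIT_FAILURE;

        memcpy(b->txn_id, a->txn_id, sizeof(b->txn_id));

        printf("calloc'ed in_use=%d, malloc'ed in_use=%d (indeterminate)\n",
               a->in_use, b->in_use);

        free(a);
        free(b);
        return 0;
    }

The conversion is only safe when every member is provably initialized
before use, which is exactly the extra review effort being asked for here.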
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] gfapi: add function to set client-pid

2019-03-12 Thread Niels de Vos
On Tue, Mar 12, 2019 at 03:00:27PM +0530, Ravishankar N wrote:
> Hello,
> 
> I'm planning to expose setting client-pid for gfapi clients via a new api,
> something like `glfs_set_client_pid (fs, pid)`.
> The functionality already exists for fuse mounts via the --client-pid=$PID
> option, where the value is captured in
> glusterfs_ctx_t->cmd_args->client_pid.
> 
> Background:
> 
> If the glusterfs eventing framework is enabled, AFR sends child-up/child-down
> events (via the gf_event() call) in the notify code path whenever there is a
> connect/disconnect at AFR level. While this is okay for normal client
> processes, it does not make much sense if the event is coming from, say,
> glfsheal, which is a gfapi-based program (having the AFR xlator) that is
> invoked when you run the heal info set of commands. Many applications
> periodically run heal info to monitor the heals and display it on a
> dashboard (like Tendrl), leading to a flood of child-up/down messages to
> the application monitoring these events.
> 
> We need to add a unique key=value to all such gf_event() calls in AFR, based
> on which the consumer of the events can decide to ignore them if needed.
> This key-value can be client-pid=$PID, where PID can be
> GF_CLIENT_PID_SELF_HEALD for selfheal daemon, GF_CLIENT_PID_GLFS_HEAL for
> glfsheal etc (these values are already defined in the code). This is why we
> need a way to set the client-pid for gfapi clients as well.
> 
> Another approach would be to add an xlator option (say 'client-name')
> specific to AFR and use that as the key-value pair, but it seems to be
> overkill to do that just for the sake of eventing purposes. Besides, the pid
> approach can also be extended to other gluster processes like rebalance, shd
> and other daemons where AFR is loaded but AFR child-up/down events from it
> are not of any particular interest. These daemons will now have to be spawned
> by glusterd with the --client-pid option.

Sounds good to me. This probably should be a function that is not
available to all gfapi consumers, so please use api/src/glfs-internal.h
for that. With clear documentation as written in the email, it should be
obvious that only Gluster internal processes may use it.
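
A rough sketch of how a gfapi-based tool like glfsheal could use the
proposed call. The glfs_set_client_pid() name and signature are taken from
the proposal above and do not exist in gfapi today; the public calls around
it (glfs_new, glfs_set_volfile_server, glfs_init, glfs_fini) are the
regular gfapi API, and the volume name and client-pid value are purely
illustrative.

    #include <stdlib.h>
    #include <glusterfs/api/glfs.h>

    /* proposed internal API (hypothetical until merged), expected to be
     * declared in api/src/glfs-internal.h rather than the public header */
    extern int glfs_set_client_pid(glfs_t *fs, int pid);

    /* placeholder for the real GF_CLIENT_PID_GLFS_HEAL constant from the
     * internal headers; the value here is only for illustration */
    #define EXAMPLE_CLIENT_PID_GLFS_HEAL (-7)

    int main(void)
    {
        glfs_t *fs = glfs_new("testvol");
        if (!fs)
            return EXIT_FAILURE;

        glfs_set_volfile_server(fs, "tcp", "localhost", 24007);

        /* mark this client as an internal heal tool so AFR can tag the
         * child-up/child-down events it emits with client-pid=... */
        glfs_set_client_pid(fs, EXAMPLE_CLIENT_PID_GLFS_HEAL);

        if (glfs_init(fs) != 0) {
            glfs_fini(fs);
            return EXIT_FAILURE;
        }

        /* ... heal-info style work would go here ... */

        glfs_fini(fs);
        return EXIT_SUCCESS;
    }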

Adding the integration mailinglist on CC, as that is where all
discussions around gfapi should be archived.

Thanks,
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Building Glusterfs-5.3 on armhf

2019-01-21 Thread Niels de Vos
On Sun, Jan 20, 2019 at 01:43:55PM -0500, Richard Betel wrote:
> I've got some ODROID HC2s running Debian 9 that I'd like to run Gluster
> on, but I want to run something current, not 3.8! So I'm trying to build
> 5.3, but I can't get through ./configure.
> 
> At first, I forgot to run autogen, so I was using whatever configure I had,
> and it would error out on sqlite, even though I have the sqlite3 dev
> libraries installed. Anyhow, I realized my mistake and ran autogen.sh.
> Now configure dies on libuuid, which is also installed; before autogen it
> got well past it. Here are the last few lines:
> checking sys/extattr.h usability... no
> checking sys/extattr.h presence... no
> checking for sys/extattr.h... no
> checking openssl/dh.h usability... yes
> checking openssl/dh.h presence... yes
> checking for openssl/dh.h... yes
> checking openssl/ecdh.h usability... yes
> checking openssl/ecdh.h presence... yes
> checking for openssl/ecdh.h... yes
> checking for pow in -lm... yes
> ./configure: line 13788: syntax error near unexpected token `UUID,'
> ./configure: line 13788: `PKG_CHECK_MODULES(UUID, uuid,'
> 
> Here's the config line that fails (with some:
> PKG_CHECK_MODULES(UUID, uuid,
> have_uuid=yes
>  AC_DEFINE(HAVE_LIBUUID, 1, [have libuuid.so])
>  PKGCONFIG_UUID=uuid,
> have_uuid=no)
>  if test x$have_uuid = xyes; then
>   HAVE_LIBUUID_TRUE=
>   HAVE_LIBUUID_FALSE='#'
> else
>   HAVE_LIBUUID_TRUE='#'
>   HAVE_LIBUUID_FALSE=
> fi
> 
> I tried putting "echo FOO"  before the PKG_CHECK_MODULES and it outputs
> correctly, so I'm pretty sure the problem isn't a dropped quote or
> parenthesis.
> 
> Any suggestions on what to look for to debug this?

You might be missing the PKG_CHECK_MODULES macro. Can you make sure you
have pkg-config installed?
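
On a Debian 9 box the usual fix would be along these lines (a sketch;
pkg-config is the stock Debian package that ships the pkg.m4 macro file
that provides PKG_CHECK_MODULES):

    sudo apt-get install pkg-config
    ./autogen.sh     # regenerate configure so the macro gets expanded
    ./configure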

Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [CI-results] gluster_build-rpms - Build # 9068 - Still Failing! (release-3.12 on CentOS-6/x86_64)

2018-12-10 Thread Niels de Vos
On Sat, Dec 08, 2018 at 12:27:56PM +0100, Niels de Vos wrote:
> On Sat, Dec 08, 2018 at 01:17:11AM +, c...@centos.org wrote:
> > gluster_build-rpms - Build # 9068 - Still Failing:
> > 
> > Check console output at https://ci.centos.org/job/gluster_build-rpms/9068/ 
> > to view the results.
> 
> This is strange: with Gluster 5 being released, the
> centos-release-gluster package that gets installed is
> centos-release-gluster-legacy, which disables unmaintained versions.
> Instead of centos-release-gluster-legacy it is expected to get
> centos-release-gluster5.
> 
> I can reproduce this in a clean CentOS 7.1810 Vagrant VM.
> 
> [root@localhost ~]# repoquery --whatprovides centos-release-gluster
> centos-release-gluster-legacy-0:4.0-1.el7.centos.noarch
> centos-release-gluster41-0:1.0-3.el7.centos.noarch
> centos-release-gluster5-0:1.0-1.el7.centos.noarch
> [root@localhost ~]# repoquery --provides centos-release-gluster5
> centos-release-gluster = 5
> centos-release-gluster5 = 1.0-1.el7.centos
> config(centos-release-gluster5) = 1.0-1.el7.centos
> [root@localhost ~]# repoquery --provides centos-release-gluster-legacy
> centos-release-gluster = 3.10
> centos-release-gluster = 3.12
> centos-release-gluster = 3.6
> centos-release-gluster = 3.7
> centos-release-gluster = 3.8
> centos-release-gluster = 4.0
> centos-release-gluster-legacy = 4.0-1.el7.centos
> 
> The highest version for centos-release-gluster comes from
> centos-release-gluster5. It is unclear to me why yum chooses to install
> the -legacy one.
> 
> [root@localhost ~]# yum --verbose install centos-release-gluster
> Loading "fastestmirror" plugin
> Config time: 0.005
> Yum version: 3.4.3
> rpmdb time: 0.000
> Setting up Package Sacks
> Loading mirror speeds from cached hostfile
>  * base: mirror.neostrada.nl
>  * extras: mirror.neostrada.nl
>  * updates: ftp.nluug.nl
> pkgsack time: 0.007
> Checking for virtual provide or file-provide for centos-release-gluster
> looking for ('centos-release', 'GE', ('0', '7', '5.1804.el7.centos.2')) 
> as a requirement of centos-release-gluster41.noarch 0:1.0-3.el7.centos - None
> looking for ('centos-release-storage-common', None, (None, None, None)) 
> as a requirement of centos-release-gluster41.noarch 0:1.0-3.el7.centos - None
> looking for ('centos-release', 'GE', ('0', '7', '5.1804.el7.centos.2')) 
> as a requirement of centos-release-gluster5.noarch 0:1.0-1.el7.centos - None
> looking for ('centos-release-storage-common', None, (None, None, None)) 
> as a requirement of centos-release-gluster5.noarch 0:1.0-1.el7.centos - None
> Obs Init time: 0.057
> Resolving Dependencies
> --> Running transaction check
> ---> Package centos-release-gluster-legacy.noarch 0:4.0-1.el7.centos will 
> be installed
> Checking deps for centos-release-gluster-legacy.noarch 0:4.0-1.el7.centos 
> - u
> --> Finished Dependency Resolution
> Dependency Process ending
> Depsolve time: 0.290
> 
> Dependencies Resolved
> 
> ================================================================================
>  Package                        Arch    Version           Repository    Size
> ================================================================================
> Installing:
>  centos-release-gluster-legacy  noarch  4.0-1.el7.centos  extras        5.0 k
> 
> Transaction Summary
> ================================================================================
> Install  1 Package
> 
> However there seems to be a workaround... If
> centos-release-storage-common is installed already, the -gluster5
> package gets installed?! Possibly yum changed from picking the latest
> version to 'fewest dependencies', or something?
> 
> [root@localhost ~]# yum --verbose install centos-release-storage-common 
> centos-release-gluster 
> Loading "fastestmirror" plugin
> Config time: 0.005
> Yum version: 3.4.3
> rpmdb time: 0.000
> Setting up Package Sacks
> Loading mirror speeds from cached hostfile
>  * base: mirror.neostrada.nl
>  * extras: mirror.neostrada.nl
>  * updates: ftp.nluug.nl
> pkgsack time: 0.008
> Obs Init time: 0.056
> Checking for virtual provide or file-provide for centos-release-gluster
> looking for ('centos-release', 'GE', ('0', '7', '5.1804.el7.centos.2')) 
> as a requirement of centos-release-gluster41.noarch 0:1.0-3.el7.c

[Gluster-devel] Dynamic provisioning does not provide the amount of 'free space' that was requested

2018-11-16 Thread Niels de Vos
The KubeVirt team found a bug in how Heketi provisions requested volumes
(PVCs). The problem that they hit is related to how much overhead a
filesystem needs vs how much free space is expected. This comes down to
the following:

- Heketi gets asked to provision a volume of 4GB
- once the volume is created, checking the available space shows a little less than 4GB
- KubeVirt will not be able to create a 4GB disk image and errors out

It seems that Heketi does not take all overhead into account while
creating bricks, and the Gluster volume. The following overhead items
are identified:

- XFS metadata after formatting the block-device (LVM/LV)
- GlusterFS metadata under .glusterfs
- space reservation with the `storage.reserve` volume option

In order to improve Heketi and fulfill the 'requested size' correctly,
we will need to estimate/calculate the overhead and only then execute
the volume creation operations. Ideas and suggestions for this are most
welcome.
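
A very rough sketch of the kind of sizing calculation this could boil down
to; the overhead percentages are placeholders for whatever the estimation
work concludes, not measured values:

    #include <stdio.h>

    /* illustrative assumptions, not Heketi's real numbers */
    #define XFS_METADATA_PCT    1.0   /* space taken by mkfs.xfs metadata  */
    #define GLUSTERFS_META_PCT  1.0   /* .glusterfs bookkeeping            */
    #define STORAGE_RESERVE_PCT 1.0   /* the storage.reserve volume option */

    /* brick size (GiB) needed so the user still sees 'requested_gib' free */
    static double brick_size_for_request(double requested_gib)
    {
        double overhead = (XFS_METADATA_PCT + GLUSTERFS_META_PCT +
                           STORAGE_RESERVE_PCT) / 100.0;
        return requested_gib / (1.0 - overhead);
    }

    int main(void)
    {
        printf("request 4 GiB -> allocate %.2f GiB\n",
               brick_size_for_request(4.0));
        return 0;
    }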

https://bugzilla.redhat.com/show_bug.cgi?id=1649991 is the main bug for
this. However it is likely that gluster-block, glusterd2 and other smart
provisioners are affected with the same problem.

Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Centos CI automation Retrospective

2018-11-02 Thread Niels de Vos
On Fri, Nov 02, 2018 at 11:32:12AM +0530, Nigel Babu wrote:
> Hello folks,
> 
> On Monday, I merged in the changes that allowed all the jobs in Centos CI
> to be handled in an automated fashion. In the past, it depended on Infra
> team members to review, merge, and apply the changes on Centos CI. I've now
> changed that so that the individual job owners can do their own merges.
> 
> 1. On sending a pull request, a travis-ci job will ensure the YAML is valid
> JJB.
> 2. On merge, we'll apply the changes to ci.centos.org with travis-ci.

Thanks for getting this done, it is a great improvement!

Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Building GD2 from glusterd2-v5.0-0-vendor.tar.xz fails on CentOS-7

2018-10-31 Thread Niels de Vos
On Tue, Oct 30, 2018 at 08:46:36AM -0400, Kaleb S. KEITHLEY wrote:
> On 10/30/18 5:10 AM, Niels de Vos wrote:
> 
> > 
> > Thanks! But even on x86_64 there only seems to be
> > golang-1.8.3-1.2.1.el7.x86_64 in the buildroot. I can not find
> > golang-1.9.4, can you check where it comes from? The build details are
> > in https://cbs.centos.org/koji/taskinfo?taskID=595140 and you can check
> > the root.log for the packages+versions that get installed.
> 
> It's because golang-1.8 was tagged into storage7-gluster-common-candidate:
> 
> % cbs list-tagged storage7-gluster-common-candidate
> 
> Build Tag   Built by
>   
> 
> ...
> golang-1.8.3-1.2.1.el7
> storage7-gluster-common-candidate  tdawson
> ...
> 
> Not sure why it was ever
>  tagged into storage7-gluster-common-candidate. I untagged it. gd2
> builds should get golang-1.9 now.
> 
> I tried to resubmit the task for your build but only the owner or an
> admin can do that.
> 
> thanks to arrfab for helping me untangle the tags

Thanks! The older version might have been tagged for earlier gd2 builds.
I'll get the builds done now and will try to get the Gluster 5 release
from the Storage SIG out of the door today.

Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Building GD2 from glusterd2-v5.0-0-vendor.tar.xz fails on CentOS-7

2018-10-30 Thread Niels de Vos
On Tue, Oct 30, 2018 at 12:45:35PM +0530, Kaushal M wrote:
> On Tue, Oct 30, 2018 at 11:50 AM Kaushal M  wrote:
> >
> > On Tue, Oct 30, 2018 at 2:20 AM Niels de Vos  wrote:
> > >
> > > Hi,
> > >
> > > not sure what is going wrong when building GD2 for the CentOS Storage
> > > SIG, but it seems to fail with some golang import issues:
> > >
> > >   https://cbs.centos.org/kojifiles/work/tasks/5141/595141/build.log
> > >
> > >   + cd glusterd2-v5.0-0
> > >   ++ pwd
> > >   + export GOPATH=/builddir/build/BUILD/glusterd2-v5.0-0:/usr/share/gocode
> > >   + GOPATH=/builddir/build/BUILD/glusterd2-v5.0-0:/usr/share/gocode
> > >   + mkdir -p src/github.com/gluster
> > >   + ln -s ../../../ src/github.com/gluster/glusterd2
> > >   + pushd src/github.com/gluster/glusterd2
> > >   ~/build/BUILD/glusterd2-v5.0-0/src/github.com/gluster/glusterd2 
> > > ~/build/BUILD/glusterd2-v5.0-0
> > >   + /usr/bin/make PREFIX=/usr EXEC_PREFIX=/usr BINDIR=/usr/bin 
> > > SBINDIR=/usr/sbin DATADIR=/usr/share LOCALSTATEDIR=/var/lib 
> > > LOGDIR=/var/log SYSCONFDIR=/etc FASTBUILD=off glusterd2
> > >   Plugins Enabled
> > >   Building glusterd2 v5.0-0
> > >   # github.com/gluster/glusterd2/vendor/github.com/coreos/etcd/clientv3
> > >   vendor/github.com/coreos/etcd/clientv3/client.go:346: cannot use 
> > > c.tokenCred (type *authTokenCredential) as type 
> > > credentials.PerRPCCredentials in argument to grpc.WithPerRPCCredentials:
> > > *authTokenCredential does not implement 
> > > credentials.PerRPCCredentials (wrong type for GetRequestMetadata method)
> > > have GetRequestMetadata("context".Context, ...string) 
> > > (map[string]string, error)
> > > want 
> > > GetRequestMetadata("github.com/gluster/glusterd2/vendor/golang.org/x/net/context".Context,
> > >  ...string) (map[string]string, error)
> > >   vendor/github.com/coreos/etcd/clientv3/client.go:421: cannot use 
> > > client.balancer (type *healthBalancer) as type grpc.Balancer in argument 
> > > to grpc.WithBalancer:
> > > *healthBalancer does not implement grpc.Balancer (wrong type for 
> > > Get method)
> > > have Get("context".Context, grpc.BalancerGetOptions) 
> > > (grpc.Address, func(), error)
> > > want 
> > > Get("github.com/gluster/glusterd2/vendor/golang.org/x/net/context".Context,
> > >  grpc.BalancerGetOptions) (grpc.Address, func(), error)
> > >   vendor/github.com/coreos/etcd/clientv3/retry.go:145: cannot use 
> > > retryKVClient literal (type *retryKVClient) as type etcdserverpb.KVClient 
> > > in return argument:
> > > *retryKVClient does not implement etcdserverpb.KVClient (wrong 
> > > type for Compact method)
> > > have Compact("context".Context, 
> > > *etcdserverpb.CompactionRequest, ...grpc.CallOption) 
> > > (*etcdserverpb.CompactionResponse, error)
> > > want 
> > > Compact("github.com/gluster/glusterd2/vendor/golang.org/x/net/context".Context,
> > >  *etcdserverpb.CompactionRequest, ...grpc.CallOption) 
> > > (*etcdserverpb.CompactionResponse, error)
> > >   ...
> > >
> > > Did anyone else try to build this on CentOS-7 (without EPEL)?
> >
> > This occurs when Go<1.9 is used to build GD2. The updated etcd version
> > we vendor (etcd 3.3) requires Go>=1.9 to compile.
> > But the failure here is strange, because CentOS-7 has golang-1.9.4 in
> > its default repositories.
> > Don't know what's going wrong here.
> 
> Looked at the logs again. This is an aarch64 build. It seems that
> CentOS-7 for aarch64 is still on go1.8.
> So, we could disable aarch64 for GD2 until the newer Go compiler is available.

Thanks! But even on x86_64 there only seems to be
golang-1.8.3-1.2.1.el7.x86_64 in the buildroot. I can not find
golang-1.9.4, can you check where it comes from? The build details are
in https://cbs.centos.org/koji/taskinfo?taskID=595140 and you can check
the root.log for the packages+versions that get installed.

golang-1.10.2-1.el7 is available for x86_64, but that requires some ugly
build workaround (also easy to forget reverting when there is a golang
update). And then there is still the need for aarch64 and ppc64le.
Obviously the goal is to provide an equal set of packages on all
architectures, like we have been doing for a while now.
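
For reference, the temporary aarch64 exclusion suggested above would be a
one-line change in the spec file (sketch only, spec file name assumed):

    # glusterd2.spec: skip aarch64 until golang >= 1.9 is available there
    ExcludeArch: aarch64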

I'll see what replies I get from the CentOS folks, maybe there is a way
to get a golang update outside of the base repository.

Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Building GD2 from glusterd2-v5.0-0-vendor.tar.xz fails on CentOS-7

2018-10-29 Thread Niels de Vos
Hi,

not sure what is going wrong when building GD2 for the CentOS Storage
SIG, but it seems to fail with some golang import issues:

  https://cbs.centos.org/kojifiles/work/tasks/5141/595141/build.log

  + cd glusterd2-v5.0-0
  ++ pwd
  + export GOPATH=/builddir/build/BUILD/glusterd2-v5.0-0:/usr/share/gocode
  + GOPATH=/builddir/build/BUILD/glusterd2-v5.0-0:/usr/share/gocode
  + mkdir -p src/github.com/gluster
  + ln -s ../../../ src/github.com/gluster/glusterd2
  + pushd src/github.com/gluster/glusterd2
  ~/build/BUILD/glusterd2-v5.0-0/src/github.com/gluster/glusterd2 
~/build/BUILD/glusterd2-v5.0-0
  + /usr/bin/make PREFIX=/usr EXEC_PREFIX=/usr BINDIR=/usr/bin 
SBINDIR=/usr/sbin DATADIR=/usr/share LOCALSTATEDIR=/var/lib LOGDIR=/var/log 
SYSCONFDIR=/etc FASTBUILD=off glusterd2
  Plugins Enabled
  Building glusterd2 v5.0-0
  # github.com/gluster/glusterd2/vendor/github.com/coreos/etcd/clientv3
  vendor/github.com/coreos/etcd/clientv3/client.go:346: cannot use c.tokenCred 
(type *authTokenCredential) as type credentials.PerRPCCredentials in argument 
to grpc.WithPerRPCCredentials:
*authTokenCredential does not implement credentials.PerRPCCredentials 
(wrong type for GetRequestMetadata method)
have GetRequestMetadata("context".Context, ...string) 
(map[string]string, error)
want 
GetRequestMetadata("github.com/gluster/glusterd2/vendor/golang.org/x/net/context".Context,
 ...string) (map[string]string, error)
  vendor/github.com/coreos/etcd/clientv3/client.go:421: cannot use 
client.balancer (type *healthBalancer) as type grpc.Balancer in argument to 
grpc.WithBalancer:
*healthBalancer does not implement grpc.Balancer (wrong type for Get 
method)
have Get("context".Context, grpc.BalancerGetOptions) 
(grpc.Address, func(), error)
want 
Get("github.com/gluster/glusterd2/vendor/golang.org/x/net/context".Context, 
grpc.BalancerGetOptions) (grpc.Address, func(), error)
  vendor/github.com/coreos/etcd/clientv3/retry.go:145: cannot use retryKVClient 
literal (type *retryKVClient) as type etcdserverpb.KVClient in return argument:
*retryKVClient does not implement etcdserverpb.KVClient (wrong type for 
Compact method)
have Compact("context".Context, 
*etcdserverpb.CompactionRequest, ...grpc.CallOption) 
(*etcdserverpb.CompactionResponse, error)
want 
Compact("github.com/gluster/glusterd2/vendor/golang.org/x/net/context".Context, 
*etcdserverpb.CompactionRequest, ...grpc.CallOption) 
(*etcdserverpb.CompactionResponse, error)
  ...

Did anyone else try to build this on CentOS-7 (without EPEL)?

Thanks,
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Adding ALUA support for Gluster-Block

2018-10-29 Thread Niels de Vos
On Mon, Oct 29, 2018 at 12:06:53PM +0530, Susant Palai wrote:
> On Fri, Oct 26, 2018 at 6:22 PM Niels de Vos  wrote:
> 
> > On Fri, Oct 26, 2018 at 05:54:28PM +0530, Susant Palai wrote:
> > > Hi,
> > >For ALUA in Gluster-Block, we need fencing support from GlusterFS.
> > This
> > > is targeted mainly to avoid corruption issues during fail-over from
> > > INITIATOR.
> > >
> > > You can find the problem statement, design document at [1] and the GitHub
> > > discussions at [2].
> > >
> > > Requesting your feedback on the same.
> >
> > From a quick glance, this looks very much like leases/delegations that
> > have been added for Samba and NFS-Ganesha. Can you explain why using
> > that is not sufficient?
> >
> Niels, are you suggesting that leases/delegations already solve the
> problem mentioned in the design document that we are trying to solve, or
> just the mandatory lock part?

I would be interested to know if you can use leases/delegations to solve
the issue. If you can not, can leases/delegations be extended instead of
proposing an new API?

In theory, the highly-available NFS-Ganesha and Samba services should
have solved similar problems already.

IIRC Anoop CS And Soumya have been working on this mostly. If you have
specific questions about the implementation in Samba or NFS-Ganesha, ask
on this list and include them on CC.

Also, we do have the (low-volume) integrat...@gluster.org list for
discussions around integrating gfapi with other projects. There might be
others that are interested in these kind of details.

Thanks,
Niels


> 
> >
> > Thanks,
> > Niels
> >
> >
> > >
> > > Thanks,
> > > Susant/Amar/Shyam/Prasanna/Xiubo
> > >
> > >
> > >
> > > [1]
> > >
> > https://docs.google.com/document/d/1up5egL9SxmVKFpZMUEuuYML6xS2mNmBGzyZbMaw1fl0/edit?usp=sharing
> > > [2]
> > >
> > https://github.com/gluster/gluster-block/issues/53#issuecomment-432924044
> >
> > > ___
> > > Gluster-devel mailing list
> > > Gluster-devel@gluster.org
> > > https://lists.gluster.org/mailman/listinfo/gluster-devel
> >
> >
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Adding ALUA support for Gluster-Block

2018-10-26 Thread Niels de Vos
On Fri, Oct 26, 2018 at 05:54:28PM +0530, Susant Palai wrote:
> Hi,
>For ALUA in Gluster-Block, we need fencing support from GlusterFS. This
> is targeted mainly to avoid corruption issues during fail-over from
> INITIATOR.
> 
> You can find the problem statement, design document at [1] and the GitHub
> discussions at [2].
> 
> Requesting your feedback on the same.

From a quick glance, this looks very much like leases/delegations that
have been added for Samba and NFS-Ganesha. Can you explain why using
that is not sufficient?

Thanks,
Niels


> 
> Thanks,
> Susant/Amar/Shyam/Prasanna/Xiubo
> 
> 
> 
> [1]
> https://docs.google.com/document/d/1up5egL9SxmVKFpZMUEuuYML6xS2mNmBGzyZbMaw1fl0/edit?usp=sharing
> [2]
> https://github.com/gluster/gluster-block/issues/53#issuecomment-432924044

> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] FOSDEM Call for Participation: Software Defined Storage devroom

2018-10-12 Thread Niels de Vos
CfP for the Software Defined Storage devroom at FOSDEM 2019 (Brussels,
Belgium, February 3rd).

FOSDEM is a free software event that offers open source communities a
place to meet, share ideas and collaborate. It is renowned for being
highly developer-oriented and brings together 8000+ participants from
all over the world. It is held in the city of Brussels (Belgium).

FOSDEM 2019 will take place during the weekend of February 2nd-3rd 2019.
More details about the event can be found at http://fosdem.org/

** Call For Participation

The Software Defined Storage devroom will go into its third round for
talks around Open Source Software Defined Storage projects, management
tools and real world deployments.

Presentation topics could include but are not limited to:

- Your work on a SDS project like Ceph, Gluster, OpenEBS or LizardFS

- Your work on or with SDS related projects like SWIFT or Container
  Storage Interface

- Management tools for SDS deployments

- Monitoring tools for SDS clusters

** Important dates:

- Nov 25th 2018:  submission deadline for talk proposals
- Dec 17th 2018:  announcement of the final schedule
- Feb  3rd 2019:  Software Defined Storage dev room

Talk proposals will be reviewed by a steering committee:
- Niels de Vos (Gluster Developer - Red Hat)
- Jan Fajerski (Ceph Developer - SUSE)
- other volunteers TBA

Use the FOSDEM 'pentabarf' tool to submit your proposal:
https://penta.fosdem.org/submission/FOSDEM19

- If necessary, create a Pentabarf account and activate it.
  Please reuse your account from previous years if you have already
  created it.

- In the "Person" section, provide First name, Last name
  (in the "General" tab), Email (in the "Contact" tab) and Bio
  ("Abstract" field in the "Description" tab).

- Submit a proposal by clicking on "Create event".

- Important! Select the "Software Defined Storage devroom" track (on the
  "General" tab).

- Provide the title of your talk ("Event title" in the "General" tab).

- Provide a description of the subject of the talk and the intended
  audience (in the "Abstract" field of the "Description" tab)

- Provide a rough outline of the talk or goals of the session (a short
  list of bullet points covering topics that will be discussed) in the
  "Full description" field in the "Description" tab

- Provide an expected length of your talk in the "Duration" field. Please
  count at least 10 minutes of discussion into your proposal plus allow
  5 minutes for the handover to the next presenter.
  Suggested talk length would be 20+10 and 45+15 minutes.

** Recording of talks

The FOSDEM organizers plan to have live streaming and recording fully
working, both for remote/later viewing of talks, and so that people can
watch streams in the hallways when rooms are full. This requires
speakers to consent to being recorded and streamed. If you plan to be a
speaker, please understand that by doing so you implicitly give consent
for your talk to be recorded and streamed. The recordings will be
published under the same license as all FOSDEM content (CC-BY).

Hope to hear from you soon! And please forward this announcement.

If you have any further questions, please write to the mailinglist at
storage-devr...@lists.fosdem.org and we will try to answer as soon as
possible.

Thanks!
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Missing option documentation (need inputs)

2018-10-11 Thread Niels de Vos
On Thu, Oct 11, 2018 at 09:00:50AM -0400, Shyam Ranganathan wrote:
> On 10/10/2018 11:20 PM, Atin Mukherjee wrote:
> > 
> > 
> > On Wed, 10 Oct 2018 at 20:30, Shyam Ranganathan  > > wrote:
> > 
> > The following options were added post 4.1 and are part of 5.0 as the
> > first release for the same. They were added in as part of bugs, and
> > hence looking at github issues to track them as enhancements did not
> > catch the same.
> > 
> > We need to document it in the release notes (and also the gluster doc.
> > site ideally), and hence I would like a some details on what to write
> > for the same (or release notes commits) for them.
> > 
> > Option: cluster.daemon-log-level
> > Attention: @atin
> > Review: https://review.gluster.org/c/glusterfs/+/20442
> > 
> > 
> > This option is only to be used on an extreme-need basis, and this is why
> > it has been marked as GLOBAL_NO_DOC. So ideally this shouldn't be
> > documented.
> > 
> > Do we still want to capture it in the release notes?
> 
> This is an interesting catch-22, when we want users to use the option
> (say to provide better logs for troubleshooting), we have nothing to
> point to, and it would be instructions (repeated over the course of
> time) over mails.
> 
> I would look at adding this into an options section in the docs, but the
> best I can find in there is
> https://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/
> 
> I would say we need to improve the way we deal with options and the
> required submissions around the same.
> 
> Thoughts?

Maybe this should be documented under
https://docs.gluster.org/en/latest/Troubleshooting/ and not the general
"Managing Volumes" part of the docs.

Having it documented *somewhere* is definitely needed. And because it
seems to be related to debugging particular components, the
Troubleshooting section seems appropriate.

Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [ovirt-devel] Re: Status update on "Hyperconverged Gluster oVirt support"

2018-09-30 Thread Niels de Vos
On Sat, Sep 29, 2018 at 05:53:42PM +0300, Yaniv Kaul wrote:
> On Sat, Sep 29, 2018, 5:03 PM Hetz Ben Hamo  wrote:
> 
> > /dev/disk/by-id could be problematic; it only shows disks that have been
> > formatted.
> >
> > For example, I've just created a node with 3 disks and on Anaconda I chose
> > only the first disk. After the node installation and reboot, I see on
> > /dev/disk/by-id only the DM, and the DVD, not the two unformatted disks
> > (which can be seen using lsscsi command).
> > Anaconda, however, does see the disks, details etc...
> >
> 
> That's not what I know. Might be something with udev or some filtering, but
> certainly I was not aware it's related to formatting.

Unfortunately not all disks provide a (stable?) ID. After formatting (at
least with 'pvcreate'), the UUID from the LVM-header is also used for
creating /dev/disk/by-id/... symlinks to the disk/partition.

I'm not so sure about getting the symlinks after formatting with a
filesystem though.
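
A quick way to see which by-id symlinks udev actually creates for a given
disk (the device name below is just an example) is:

    udevadm info --query=symlink --name=/dev/sdb
    ls -l /dev/disk/by-id/ | grep -w sdb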

Related to this: https://github.com/heketi/heketi/issues/1371

Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Python3 build process

2018-09-28 Thread Niels de Vos
On Fri, Sep 28, 2018 at 09:14:15AM -0400, Shyam Ranganathan wrote:
> On 09/28/2018 09:11 AM, Niels de Vos wrote:
> > On Fri, Sep 28, 2018 at 08:57:06AM -0400, Shyam Ranganathan wrote:
> >> On 09/28/2018 06:12 AM, Niels de Vos wrote:
> >>> On Thu, Sep 27, 2018 at 08:40:54AM -0400, Shyam Ranganathan wrote:
> >>>> On 09/27/2018 08:07 AM, Kaleb S. KEITHLEY wrote:
> >>>>>> The thought is,
> >>>>>> - Add a configure option "--enable-py-version-correction" to configure,
> >>>>>> that is disabled by default
> >>>>> "correction" implies there's something that's incorrect. How about
> >>>>> "conversion" or perhaps just --enable-python2
> >>>>>
> >>>> I would not like to go with --enable-python2 as that implies it is a
> >>>> conscious choice with the understanding that py2 is on the box. Given
> >>>> the current ability to detect and hence correct the python shebangs, I
> >>>> would think we should retain it as a more detect and modify the shebangs
> >>>> option name. (I am looking at this more as an option that does the right
> >>>> thing implicitly than someone/tool using this checking explicitly, which
> >>>> can mean different things to different people, if that makes sense)
> >>>>
> >>>> Now "correction" seems like an overkill, maybe "conversion"?
> >>> Is it really needed to have this as an option? Instead of an option in
> >>> configure.ac, can it not be a post-install task in a Makefile.am? The
> >>> number of executable python scripts that get installed are minimal, so I
> >>> do not expect that a lot of changes are needed for this.
> >>
> >> Here is how I summarize this proposal,
> >> - Perform the shebang "correction" for py2 in the post install
> >>   - Keeps the git clone clean
> >> - shebang correction occurs based on a configure time option
> >>   - It is not implicit but an explicit choice to correct the shebangs to
> >> py2, hence we need an option either way
> >> - The configure option would be "--enable-python2"
> >>   - Developers that need py2, can configure it as such
> >>   - Regression jobs that need py2, either because of the platform they
> >> test against, or for py2 compliance in the future, use the said option
> >>   - Package builds are agnostic to these changes (currently) as they
> >> decide at build time based on the platform what needs to be done.
> > 
> > I do not think such a ./configure option is needed. configure.ac can
> > find out the version that is available, and pick python3 if it has both.
> > 
> > Tests should just run with "$PYTHON run-the-test.py" instead of
> > ./run-the-test.py with a #!/usr/bin/python shebang. The testing
> > framework can also find out what version of python is available.
> 
> If we back up a bit here, if all shebangs are cleared, then we do not
> need anything. That is not the situation at the moment, and neither do I
> know if that state can be reached.

Not all shebangs need to go away, only the ones for the test-cases. A
post-install hook can modify the shebangs from python3 to python2
depending on what ./configure detected.
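
A minimal sketch of what such a hook could look like in a Makefile.am (the
script picked here, glusterfind, is just an example of an installed
executable; PYTHON is whatever interpreter ./configure detected, and the
recipe line must be indented with a tab):

    # rewrite the shebang of installed scripts to the detected interpreter
    install-exec-hook:
            sed -i -e '1s|^#!.*python.*$$|#!$(PYTHON)|' \
                    $(DESTDIR)$(bindir)/glusterfind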

> We also need to ensure we work against py2 and py3 for the near future,
> which entails being specific in some regression job at least on the
> python choice, does that correct the shebangs really depends on the
> above conclusion.

Ok, so if both python2 and python3 are available, and you want to run
with python2, just run "python2 my-py2-test.py"?

Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Python3 build process

2018-09-28 Thread Niels de Vos
On Fri, Sep 28, 2018 at 08:57:06AM -0400, Shyam Ranganathan wrote:
> On 09/28/2018 06:12 AM, Niels de Vos wrote:
> > On Thu, Sep 27, 2018 at 08:40:54AM -0400, Shyam Ranganathan wrote:
> >> On 09/27/2018 08:07 AM, Kaleb S. KEITHLEY wrote:
> >>>> The thought is,
> >>>> - Add a configure option "--enable-py-version-correction" to configure,
> >>>> that is disabled by default
> >>> "correction" implies there's something that's incorrect. How about
> >>> "conversion" or perhaps just --enable-python2
> >>>
> >> I would not like to go with --enable-python2 as that implies it is a
> >> conscious choice with the understanding that py2 is on the box. Given
> >> the current ability to detect and hence correct the python shebangs, I
> >> would think we should retain it as a more detect and modify the shebangs
> >> option name. (I am looking at this more as an option that does the right
> >> thing implicitly than someone/tool using this checking explicitly, which
> >> can mean different things to different people, if that makes sense)
> >>
> >> Now "correction" seems like an overkill, maybe "conversion"?
> > Is it really needed to have this as an option? Instead of an option in
> > configure.ac, can it not be a post-install task in a Makefile.am? The
> > number of executable python scripts that get installed are minimal, so I
> > do not expect that a lot of changes are needed for this.
> 
> Here is how I summarize this proposal,
> - Perform the shebang "correction" for py2 in the post install
>   - Keeps the git clone clean
> - shebang correction occurs based on a configure time option
>   - It is not implicit but an explicit choice to correct the shebangs to
> py2, hence we need an option either way
> - The configure option would be "--enable-python2"
>   - Developers that need py2, can configure it as such
>   - Regression jobs that need py2, either because of the platform they
> test against, or for py2 compliance in the future, use the said option
>   - Package builds are agnostic to these changes (currently) as they
> decide at build time based on the platform what needs to be done.

I do not think such a ./configure option is needed. configure.ac can
find out the version that is available, and pick python3 if it has both.

Tests should just run with "$PYTHON run-the-test.py" instead of
./run-the-test.py with a #!/usr/bin/python shebang. The testing
framework can also find out what version of python is available.
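
The detection side could be a small configure.ac fragment along these
lines (a sketch; the error message and variable handling would follow
whatever the build already does):

    # prefer python3, fall back to python2, fail if neither is found
    AC_PATH_PROGS([PYTHON], [python3 python2], [no])
    if test "x$PYTHON" = "xno"; then
        AC_MSG_ERROR([python is required to build GlusterFS])
    fi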


> > There do seem quite some Python files that have a shebang, but do not
> > need it (__init__.py, not executable, no __main__-like functions). This
> > should probably get reviewed as well. When those scripts get their
> > shebang removed, even fewer files need to be 'fixed-up'.
> 
> I propose maintainers/component-owner take this cleanup.

That would be ideal!


> > Is there a BZ or GitHub Issue that I can use to send some fixes?
> 
> See: https://github.com/gluster/glusterfs/issues/411

Thanks,
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Python3 build process

2018-09-28 Thread Niels de Vos
On Thu, Sep 27, 2018 at 08:40:54AM -0400, Shyam Ranganathan wrote:
> On 09/27/2018 08:07 AM, Kaleb S. KEITHLEY wrote:
> >> The thought is,
> >> - Add a configure option "--enable-py-version-correction" to configure,
> >> that is disabled by default
> > "correction" implies there's something that's incorrect. How about
> > "conversion" or perhaps just --enable-python2
> > 
> 
> I would not like to go with --enable-python2 as that implies it is a
> conscious choice with the understanding that py2 is on the box. Given
> the current ability to detect and hence correct the python shebangs, I
> would think we should retain it as a more detect and modify the shebangs
> option name. (I am looking at this more as an option that does the right
> thing implicitly than someone/tool using this checking explicitly, which
> can mean different things to different people, if that makes sense)
> 
> Now "correction" seems like an overkill, maybe "conversion"?

Is it really needed to have this as an option? Instead of an option in
configure.ac, can it not be a post-install task in a Makefile.am? The
number of executable python scripts that get installed are minimal, so I
do not expect that a lot of changes are needed for this.

There do seem quite some Python files that have a shebang, but do not
need it (__init__.py, not executable, no __main__-like functions). This
should probably get reviewed as well. When those scripts get their
shebang removed, even fewer files need to be 'fixed-up'.

Is there a BZ or GitHub Issue that I can use to send some fixes?

Thanks,
Niels


$ for F in $(git ls-files) ; do if ( head -n1 $F | grep -q -E '^#.+python' 
) ; then echo "$F uses Python" ; fi ; done
api/examples/getvolfile.py uses Python
events/eventskeygen.py uses Python
events/src/gf_event.py uses Python
events/src/glustereventsd.py uses Python
events/src/peer_eventsapi.py uses Python
events/tools/eventsdash.py uses Python
extras/create_new_xlator/generate_xlator.py uses Python
extras/distributed-testing/distributed-test-runner.py uses Python
extras/failed-tests.py uses Python
extras/geo-rep/schedule_georep.py.in uses Python
extras/git-branch-diff.py uses Python
extras/gnfs-loganalyse.py uses Python
extras/hook-scripts/S40ufo-stop.py uses Python
extras/profiler/glusterfs-profiler uses Python
extras/quota/quota_fsck.py uses Python
extras/quota/xattr_analysis.py uses Python
extras/rebalance.py uses Python
extras/snap_scheduler/conf.py.in uses Python
extras/snap_scheduler/gcron.py uses Python
extras/snap_scheduler/snap_scheduler.py uses Python
geo-replication/src/peer_georep-sshkey.py.in uses Python
geo-replication/src/peer_mountbroker.in uses Python
geo-replication/src/peer_mountbroker.py.in uses Python
geo-replication/syncdaemon/changelogagent.py uses Python
geo-replication/syncdaemon/conf.py.in uses Python
geo-replication/syncdaemon/gsyncd.py uses Python
geo-replication/syncdaemon/gsyncdstatus.py uses Python
geo-replication/tests/__init__.py uses Python
geo-replication/tests/unit/__init__.py uses Python
geo-replication/tests/unit/test_gsyncdstatus.py uses Python
geo-replication/tests/unit/test_syncdutils.py uses Python
libglusterfs/src/gen-defaults.py uses Python
libglusterfs/src/generator.py uses Python
tools/gfind_missing_files/gfid_to_path.py uses Python
tools/glusterfind/S57glusterfind-delete-post.py uses Python
tools/glusterfind/glusterfind.in uses Python
tools/glusterfind/src/brickfind.py uses Python
tools/glusterfind/src/changelog.py uses Python
tools/glusterfind/src/main.py uses Python
tools/glusterfind/src/nodeagent.py uses Python
xlators/experimental/fdl/src/gen_dumper.py uses Python
xlators/experimental/fdl/src/gen_fdl.py uses Python
xlators/experimental/fdl/src/gen_recon.py uses Python
xlators/experimental/jbr-client/src/gen-fops.py uses Python
xlators/experimental/jbr-server/src/gen-fops.py uses Python
xlators/features/changelog/lib/examples/python/changes.py uses Python
xlators/features/cloudsync/src/cloudsync-fops-c.py uses Python
xlators/features/cloudsync/src/cloudsync-fops-h.py uses Python
xlators/features/glupy/src/__init__.py.in uses Python
xlators/features/utime/src/utime-gen-fops-c.py uses Python
xlators/features/utime/src/utime-gen-fops-h.py uses Python

___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] gluster-gnfs missing in CentOS repos

2018-09-27 Thread Niels de Vos
On Thu, Sep 27, 2018 at 02:28:32PM +0530, Sahina Bose wrote:
> On Mon, Sep 24, 2018 at 3:04 PM Sahina Bose  wrote:
> 
> > Hi all,
> >
> > gluster-gnfs rpms are missing in 4.0/4.1 repos in CentOS storage. Is this
> > intended?
> >
> 
> Rephrasing my question - are there plans to push gluster-gnfs rpms to the
> CentOS repos as well?

No, Gluster/NFS is deprecated and not really maintained anymore. This
was announced with the Gluster 4.0 release and we encourage all users to
move to NFS-Ganesha.

Gluster 3.x is the last series that has Gluster/NFS enabled by default.
At the moment it is still possible to run './configure --with-gnfs' and
get the Gluster/NFS pieces, but if the gNFS maintainers do not improve
their participation, we might remove the code completely at some point.

Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [heketi-devel] Heketi v8.0.0 available for download

2018-09-13 Thread Niels de Vos
On Wed, Sep 12, 2018 at 07:43:47PM -0400, John Mulligan wrote:
> Heketi v8.0.0 is now available [1].
> 
> This is the new stable version of Heketi.
> 
> Major additions in this release:
> * Resumable delete of Volumes and Block Volumes
> * Server administrative modes
> * Throttling of concurrent operations
> * Support configuration of block hosting volume options
> * Heketi cli command to fetch operation counters
> * Support setting restrictions on block hosting volume; to prevent block 
> hosting volumes from taking new block volumes
> * Add an option to destroy data while adding a device to a node
> * Heketi Container: load an initial topology if HEKETI_TOPOLOGY_FILE is set
> 
> This release contains numerous stability and bug fixes. A more detailed 
> changelog is available at the release page [1].

Packages for Fedora 29 (and 30/Rawhide), and CentOS Storage SIG have
been built and are available for testing (if not yet, then very soon).

For Fedora, please run 'dnf --enablerepo=updates-testing install heketi'
and leave feedback in 
https://bodhi.fedoraproject.org/updates/FEDORA-2018-5aa0a9dc9b
(Fedora 29 is in Beta at the moment, which makes testing more difficult.)

On CentOS-7, use this (assumes centos-release-gluster{312,41} is
installed):

  # yum --enablerepo=centos-gluster*-test install heketi

Once packages for CentOS have been tested, email these lists and we'll
mark the package for release.

A container image based on CentOS should land in the CentOS Registry
soon (tomorrow?) as well. Check
https://registry.centos.org/gluster/storagesig-heketi for the latest
builds; the :testing tag will get heketi-8.0.0 with a new image build.

Thanks,
Niels


> Special thanks to Michael Adam and Raghavendra Talur for assisting me with 
> creating my first release.
> 
> -- John M. on behalf of the Heketi team
> 
> 
> [1] https://github.com/heketi/heketi/releases/tag/v8.0.0
> 
> 
> ___
> heketi-devel mailing list
> heketi-de...@gluster.org
> https://lists.gluster.org/mailman/listinfo/heketi-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Clang-Formatter for GlusterFS.

2018-09-13 Thread Niels de Vos
On Thu, Sep 13, 2018 at 02:25:22PM +0530, Ravishankar N wrote:
...
> What rules does clang impose on function/argument wrapping and alignment? I
> somehow found the new code wrapping to be random and highly unreadable. An
> example of 'before and after' the clang format patches went in:
> https://paste.fedoraproject.org/paste/dC~aRCzYgliqucGYIzxPrQ Wondering if
> this is just me or is it some problem of spurious clang fixes.

I agree that this example looks pretty ugly. Looking at random changes
to the code where I am most active does not show this awkward
formatting.

However, I was expecting to see enforcement of braces around
single-line if statements (and while/for/... loops), like this:

if (need_to_do_it) {
 do_it();
}

instead of

if (need_to_do_it)
 do_it();

At least the conversion did not take care of this. But maybe I'm wrong,
as I cannot find the discussion about this in
https://bugzilla.redhat.com/1564149. Does anyone remember what was
decided in the end?

Thanks,
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Proposal to change Gerrit -> Bugzilla updates

2018-09-11 Thread Niels de Vos
On Mon, Sep 10, 2018 at 09:38:04AM -0400, Shyam Ranganathan wrote:
> On 09/10/2018 08:37 AM, Nigel Babu wrote:
> > Hello folks,
> > 
> > We now have review.gluster.org  as an
> > external tracker on Bugzilla. Our current automation when there is a
> > bugzilla attached to a patch is as follows:
> > 
> > 1. When a new patchset has "Fixes: bz#1234" or "Updates: bz#1234", we
> > will post a comment to the bug with a link to the patch and change the
> > status to POST. 2. When the patchset is merged, if the commit said
> > "Fixes", we move the status to MODIFIED.
> > 
> > I'd like to propose the following improvements:
> > 1. Add the Gerrit URL as an external tracker to the bug.
> 
> My assumption here is that for each patch that mentions a BZ, an
> additional tracker would be added to the tracker list, right?
> 
> Further assumption (as I have not used trackers before) is that this
> would reduce noise as comments in the bug itself, right?
> 
> In the past we have reduced noise by not commenting on the bug (or
> github issue) every time the patch changes, so we get 2 comments per
> patch currently, with the above change we would just get one and that
> too as a terse external reference (see [1], based on my test/understanding).

This has my preference. The fact that a patch has been posted has
little relevance for a bug reporter. The bug moving to POST should be an
indication that work is being done. The link to the patch is available
so the status can be tracked pretty easily if needed.

> What we would lose is the commit details when the patch is merged in the
> BZ, as far as I can tell based on the changes below. These are useful
> and would like these to be retained in case they are not.

I agree with this. Especially once a patch has been merged, a comment
with the commit hash, subject and message is extremely helpful.

> > 2. When a patch is merged, only change state of the bug if needed. If
> > there is no state change, do not add an additional message. The external
> > tracker state should change reflecting the state of the review.
> 
> I added a tracker to this bug [1], but not seeing the tracker state
> correctly reflected in BZ, is this work that needs to be done?

That indeed looks close to useless. If there is no summary/subject or
status in the external tracker table, we do not gain a lot. I hope this
can be fixed soon.

> > 3. Assign the bug to the committer. This has edge cases, but it's best
> > to at least handle the easy ones and then figure out edge cases later.
> > The experience is going to be better than what it is right now.
> 
> Is the above a reference to just the "assigned to", or overall process?
> If overall can you elaborate a little more on why this would be better
> (I am not saying it is not, attempting to understand how you see it).

I assume this is the "assigned to" value in Bugzilla. Many BZs are
currently assigned to b...@gluster.org even when the BZ is not in NEW
anymore. Asking for a status update in the BZ is therefore more
difficult than it needs to be (selecting NEEDINFO=assignee is useless).

Thanks,
Niels

> 
> > 
> > Please provide feedback/comments by end of day Friday. I plan to add
> > this activity to the next Infra team sprint that starts on Monday (Sep 17).
> 
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1619423
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Announcing Glusterfs release 3.12.13 (Long Term Maintenance)

2018-08-28 Thread Niels de Vos
On Tue, Aug 28, 2018 at 03:49:38PM +0200, Paolo Margara wrote:
> Hi,
> 
> we’ve now tested version 3.12.13 on our ovirt dev cluster and all seems
> to be ok (obviously it's too early to see if the infamous memory leak
> issue got fixed), I think that should be safe to move related packages
> from -test to release for centos-gluster312

Thanks! I've marked the packages for release and expect them to become
available on the CentOS mirrors during the day tomorrow.

Niels


> Greetings,
> 
>     Paolo M.
> 
> 
> Il 27/08/2018 07:40, Jiffin Tony Thottan ha scritto:
> >
> > The Gluster community is pleased to announce the release of Gluster
> > 3.12.13 (packages available at [1,2,3]).
> >
> > Release notes for the release can be found at [4].
> >
> > Thanks,
> > Gluster community
> >
> >
> > [1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.13/
> > [2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12 
> > [3] https://build.opensuse.org/project/subprojects/home:glusterfs
> > [4] Release notes:
> > https://gluster.readthedocs.io/en/latest/release-notes/3.12.12/
> >
> >
> >
> > ___
> > Gluster-users mailing list
> > gluster-us...@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Problems about acl_get_file used in posix_pacl_get

2018-08-17 Thread Niels de Vos
On Fri, Aug 17, 2018 at 05:22:17PM +0800, Kinglong Mee wrote:
> Hi Niels,
> 
> On 2018/8/17 17:13, Niels de Vos wrote:
> > On Fri, Aug 17, 2018 at 03:04:43PM +0800, Kinglong Mee wrote:
> >> Hello folks,
> >>
> >> nfs-ganesha using the new gfapi named glfs_h_acl_set/glfs_h_acl_get,
> >> at xlator posix, glusterfsd calls acl_get_file/acl_set_file (libacl 
> >> functions) to process xattrs.
> >>
> >> By default, sys_lsetxattr/sys_llistxattr/sys_lgetxattr/sys_lremovexattr 
> >> are used to process xattrs.
> >> But, unfortunately, those two functions do syscall by getxattr/setxattr.
> >> I don't think that is we want.
> >>
> >> Is it a known problem ?
> > 
> > There should not be a problem for libacl to use syscalls directly. The
> > Gluster sources use the sys_* functions so that there can be wrappers for
> > the differences between OSes. In the end, these sys_* functions will
> > mostly call the underlying syscall with (adapted) arguments.
> > 
> > I do not know what problem you are facing, but I can imagine that there
> > is a 'getxattr' symbol in the executable image that gets called by
> > libacl, instead of the 'getxattr' syscall. This will likely result in
> > very strange behaviour, if not segfaults.
> 
> Sorry for my unclear description.
> The real problem here is libacl gets/sets xattrs by getxattr/setxattr which 
> follow symbolic links,
> but, posix xlator get/set xattrs by sys_l*xattr which do not follow symbolic 
> links.

Permission checking is done by the kernel. I do not think setting ACLs
on a symlink makes much sense. More liberal permissions on the symlink
will not help with accessing the contents, and restricting permissions
on a symlink still allows the user to access the contents through its
real filename.

Is there a scenario where having ACLs on a symlink would be beneficial?
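
For reference, the difference Kinglong describes comes down to whether
the xattr call dereferences the symlink. A small C illustration (the
path is made up; this is not Gluster code):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/xattr.h>

    int
    main(void)
    {
        char buf[256];

        /* getxattr() follows the symlink: it reads the ACL xattr of
         * the target file. */
        ssize_t on_target = getxattr("/tmp/some-symlink",
                                     "system.posix_acl_access",
                                     buf, sizeof(buf));

        /* lgetxattr() does not follow the symlink: it operates on the
         * link itself, which normally carries no POSIX ACL at all. */
        ssize_t on_link = lgetxattr("/tmp/some-symlink",
                                    "system.posix_acl_access",
                                    buf, sizeof(buf));

        printf("target: %zd, link: %zd\n", on_target, on_link);
        return 0;
    }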

Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Problems about acl_get_file used in posix_pacl_get

2018-08-17 Thread Niels de Vos
On Fri, Aug 17, 2018 at 03:04:43PM +0800, Kinglong Mee wrote:
> Hello folks,
> 
> nfs-ganesha using the new gfapi named glfs_h_acl_set/glfs_h_acl_get,
> at xlator posix, glusterfsd calls acl_get_file/acl_set_file (libacl 
> functions) to process xattrs.
> 
> By default, sys_lsetxattr/sys_llistxattr/sys_lgetxattr/sys_lremovexattr are 
> used to process xattrs.
> But, unfortunately, those two functions do syscall by getxattr/setxattr.
> I don't think that is we want.
> 
> Is it a known problem ?

There should not be a problem for libacl to use syscalls directly. The
Gluster sources use the sys_* functions so that there can be wrappers for
the differences between OSes. In the end, these sys_* functions will
mostly call the underlying syscall with (adapted) arguments.
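
A simplified sketch of the idea behind such a wrapper (illustrative
only, not the actual Gluster implementation); on Linux it is essentially
a pass-through, while other OSes would map it to their native
extended-attribute API:

    #include <sys/types.h>
    #include <sys/xattr.h>

    /* Hypothetical wrapper in the spirit of the sys_* helpers: callers
     * use one name, the OS-specific call is hidden behind it. */
    ssize_t
    wrapped_lgetxattr(const char *path, const char *name,
                      void *value, size_t size)
    {
        return lgetxattr(path, name, value, size);
    }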

I do not know what problem you are facing, but I can imagine that there
is a 'getxattr' symbol in the executable image that gets called by
libacl, instead of the 'getxattr' syscall. This will likely result in
very strange behaviour, if not segfaults.

None of the Gluster libraries or xlators is allowed to expose symbols
that collide with 'standard' ones. This includes syscalls or symbols
from commonly used libraries.

To fix this, all symbols in the Gluster libraries should have a gf_
prefix. This is not commonly done for xlators, and we have had issues
with that before. All FOPs and callbacks in xlators should in general be
marked static to prevent symbol collisions.
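
As a generic illustration of that rule (plain C with hypothetical names,
not code from the Gluster tree): internal callbacks stay static so the
dynamic linker never resolves another library's call to them, and
everything exported carries a gf_ prefix.

    /* Internal callback: static, so a same-named symbol in libacl,
     * libc or another library can never be resolved to this one by
     * accident. */
    static int
    getxattr_cbk(int op_ret, int op_errno)
    {
        return (op_ret < 0) ? -op_errno : op_ret;
    }

    /* Exported entry point: prefixed, so it cannot shadow a 'standard'
     * symbol. */
    int
    gf_example_finish_getxattr(int op_ret, int op_errno)
    {
        return getxattr_cbk(op_ret, op_errno);
    }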

HTH,
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Access to Docker Hub Gluster organization

2018-08-14 Thread Niels de Vos
On Tue, Aug 14, 2018 at 02:09:59PM +0530, Nigel Babu wrote:
> Hello folks,
> 
> Do we know who's the admin of the Gluster organization on Docker hub? I'd
> like to be added to the org so I can set up nightly builds for all the
> GCS-related containers.

Nice! Which containers are these? The ones from the gluster-containers
repository on GitHub?

I was looking for this as well, but also for the team that exists on
quay.io.

There has been a request to keep clearly identifiable versioning for our
containers. It is something I want to look at, but have not had the time
to do so. A description of which containers we have, where the sources
are kept, the base OS and the possible branches/versions would be needed
(maybe more). If this is documented somewhere, a pointer to it would be
great. Otherwise I'll need assistance in figuring out what the current
state is.

Thanks,
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down: RCA for tests (UNSOLVED ./tests/basic/stats-dump.t)

2018-08-13 Thread Niels de Vos
On Mon, Aug 13, 2018 at 02:32:19PM -0400, Shyam Ranganathan wrote:
> On 08/12/2018 08:42 PM, Shyam Ranganathan wrote:
> > As a means of keeping the focus going and squashing the remaining tests
> > that were failing sporadically, request each test/component owner to,
> > 
> > - respond to this mail changing the subject (testname.t) to the test
> > name that they are responding to (adding more than one in case they have
> > the same RCA)
> > - with the current RCA and status of the same
> > 
> > List of tests and current owners as per the spreadsheet that we were
> > tracking are:
> > 
> > ./tests/basic/stats-dump.t  TBD
> 
> This test fails as follows:
> 
>   01:07:31 not ok 20 , LINENUM:42
>   01:07:31 FAILED COMMAND: grep .queue_size
> /var/lib/glusterd/stats/glusterfsd__d_backends_patchy1.dump
> 
>   18:35:43 not ok 21 , LINENUM:43
>   18:35:43 FAILED COMMAND: grep .queue_size
> /var/lib/glusterd/stats/glusterfsd__d_backends_patchy2.dump
> 
> Basically when grep'ing for a pattern in the stats dump it is not
> finding the second grep pattern of "queue_size" in one or the other bricks.
> 
> The above seems incorrect, if it found "aggr.fop.write.count" it stands
> to reason that it found a stats dump, further there is a 2 second sleep
> as well in the test case and the dump interval is 1 second.
> 
> The only reason for this to fail could hence possibly be that the file
> was just (re)opened (by the io-stats dumper thread) for overwriting
> content, at which point the fopen uses the mode "w+", and the file was
> hence truncated, and the grep CLI also opened the file at the same time,
> and hence found no content.

This sounds like a dangerous approach in any case. Truncating a file
while there are potentially other readers should probably not be done. I
wonder if there is a good reason for this.

A safer solution would be to create a new temporary file, write the
stats to it and, once done, rename it to the expected filename. Any
process reading from the 'old' file will still have its file descriptor
open and can read the previous, consistent contents.
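
A minimal sketch of that write-to-temp-then-rename pattern (hypothetical
filename and content, not the io-stats code itself):

    #include <stdio.h>

    /* Write the dump to a temporary file and atomically move it into
     * place; readers either see the previous complete dump or the new
     * one, never a truncated file. */
    int
    dump_stats_atomically(const char *path)
    {
        char tmp[4096];
        FILE *fp;

        snprintf(tmp, sizeof(tmp), "%s.tmp", path);

        fp = fopen(tmp, "w");
        if (!fp)
            return -1;

        fprintf(fp, "aggr.fop.write.count: %d\n", 42);  /* placeholder */
        fclose(fp);

        /* rename(2) is atomic within one filesystem */
        return rename(tmp, path);
    }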

Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] ASAN Builds!

2018-08-10 Thread Niels de Vos
On Fri, Aug 10, 2018 at 05:50:28PM +0530, Nigel Babu wrote:
> Hello folks,
> 
> Thanks to Niels, we now have ASAN builds compiling and a flag for getting
> it to work locally. The patch[1] is not merged yet, but I can trigger runs
> off the patch for now. The first run is off[2]
> 
> [1]: https://review.gluster.org/c/glusterfs/+/20589/2
> [2]: https://build.gluster.org/job/asan/66/console

There has been a newer version of the patch(es) that make ASAN builds
work on el7 systems too. Nigel started a new run at
https://build.gluster.org/job/asan/68/consoleFull and it has

Enable ASAN  : yes

in the console output.

Other devs who want to test this need to apply a few patches that have
not been merged yet. In case you have git-review installed, the
following should get them (use git-review with the git-remote origin
pointing to review.gluster.org):

1. https://review.gluster.org/c/glusterfs/+/20589
   $ git review -r origin -d 20589

2. https://review.gluster.org/c/glusterfs/+/20688
   $ git review -r origin -d 20688

3. https://review.gluster.org/c/glusterfs/+/20692
   $ git review -r origin -d 20692

With this, you should be able to build with ASAN enabled if you do
either of these:

$ ./autogen.sh && ./configure --enable-asan
$ make dist && rpmbuild --with asan -ta glusterfs*.tar.gz

This could probably be added to some 'how to debug gluster' documents.
Suggestions for the best location are welcome.

Thanks,
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] FreeBSD smoke test may fail for older changes, rebase needed

2018-08-01 Thread Niels de Vos
On Wed, Aug 01, 2018 at 09:47:38AM -0400, Shyam Ranganathan wrote:
> On 07/31/2018 02:12 AM, Niels de Vos wrote:
> > On Mon, Jul 30, 2018 at 02:44:57PM -0400, Shyam Ranganathan wrote:
> >> On 07/28/2018 12:45 PM, Niels de Vos wrote:
> >>> On Sat, Jul 28, 2018 at 03:37:46PM +0200, Niels de Vos wrote:
> >>>> This Friday argp-standalone got installed on the FreeBSD Jenkins
> >>>> slave(s). With the library available, we can now drop the bundled and
> >>>> unmaintained contrib/argp-standlone/ from our glusterfs sources.
> >>>>
> >>>> Unfortunately building on FreeBSD fails if the header/library is
> >>>> installed. This has been corrected with https://review.gluster.org/20581
> >>>> but that means changes posted in Gerrit may need a rebase to include the
> >>>> fix for building on FreeBSD.
> >>>>
> >>>> I think I have rebased all related changes that did not have negative
> >>>> comments asking for corrections/improvement. In case I missed a change,
> >>>> please rebase your patch so the smoke test runs again.
> >>>>
> >>>> Sorry for any inconvenience that this caused,
> >>>> Niels
> >>>
> >>> It just occured to me that the argp-standalone installation also affects
> >>> the release-4.1 and release-3.12 branches. Jiffin, Shyam, do you want to
> >>> cherry-pick https://review.gluster.org/20581 to fix that, or do you
> >>> prefer an alternative that always uses the bundled version of the
> >>> library?
> >>
> >> The outcome is to get existing maintained release branches building and
> >> working on FreeBSD, would that be correct?
> > 
> > 'working' in the way that they were earlier. I do not know of any
> > (automated or manual) tests that verify the correct functioning. It is
> > build tested only. I think.
> > 
> >> If so I think we can use the cherry-picked version, the changes seem
> >> mostly straight forward, and it is possibly easier to maintain.
> > 
> > It is straight forward, but does add a new requirement on a library that
> > should get installed on the system. This is not something that we
> > normally allow during a stable release.
> > 
> >> Although, I have to ask, what is the downside of not taking it in at
> >> all? If it is just FreeBSD, then can we live with the same till release-
> >> is out?
> > 
> > Yes, it is 'just' FreeBSD build testing. Users should still be able to
> > build the stable releases on FreeBSD as long as they do not install
> > argp-standalone. In that case the bundled version will be used as the
> > stable releases still have that in their tree.
> > 
> > If the patch does not get merged, it will cause the smoke tests on
> > FreeBSD to fail. As Nigel mentions, it is possible to disable this test
> > for the stable branches.
> > 
> > An alternative would be to fix the build process, and optionally use the
> > bundled library in case it is not installed on the system. This is what
> > we normally would have done, but it seems to have been broken in the
> > case of FreeBSD + argp-standalone.
> 
> Based on the above reasoning, I would suggest that we do not backport
> this to the release branches, and disable the FreeBSD job on them, and
> if possible enable them for the next release (5).
> 
> Objections?

That is fine with me. It is already prepared for GlusterFS 5, so nothing
needs to be done there. Only for 4.1 and 3.12 does FreeBSD need to be
disabled in the smoke job(s).

I could not find the repo that contains the smoke job, otherwise I would
have tried to send a PR.

Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] FreeBSD smoke test may fail for older changes, rebase needed

2018-07-31 Thread Niels de Vos
On Mon, Jul 30, 2018 at 02:44:57PM -0400, Shyam Ranganathan wrote:
> On 07/28/2018 12:45 PM, Niels de Vos wrote:
> > On Sat, Jul 28, 2018 at 03:37:46PM +0200, Niels de Vos wrote:
> >> This Friday argp-standalone got installed on the FreeBSD Jenkins
> >> slave(s). With the library available, we can now drop the bundled and
> >> unmaintained contrib/argp-standlone/ from our glusterfs sources.
> >>
> >> Unfortunately building on FreeBSD fails if the header/library is
> >> installed. This has been corrected with https://review.gluster.org/20581
> >> but that means changes posted in Gerrit may need a rebase to include the
> >> fix for building on FreeBSD.
> >>
> >> I think I have rebased all related changes that did not have negative
> >> comments asking for corrections/improvement. In case I missed a change,
> >> please rebase your patch so the smoke test runs again.
> >>
> >> Sorry for any inconvenience that this caused,
> >> Niels
> > 
> > It just occured to me that the argp-standalone installation also affects
> > the release-4.1 and release-3.12 branches. Jiffin, Shyam, do you want to
> > cherry-pick https://review.gluster.org/20581 to fix that, or do you
> > prefer an alternative that always uses the bundled version of the
> > library?
> 
> The outcome is to get existing maintained release branches building and
> working on FreeBSD, would that be correct?

'working' in the way that they were earlier. I do not know of any
(automated or manual) tests that verify the correct functioning. It is
build tested only. I think.

> If so I think we can use the cherry-picked version, the changes seem
> mostly straight forward, and it is possibly easier to maintain.

It is straight forward, but does add a new requirement on a library that
should get installed on the system. This is not something that we
normally allow during a stable release.

> Although, I have to ask, what is the downside of not taking it in at
> all? If it is just FreeBSD, then can we live with the same till release-
> is out?

Yes, it is 'just' FreeBSD build testing. Users should still be able to
build the stable releases on FreeBSD as long as they do not install
argp-standalone. In that case the bundled version will be used as the
stable releases still have that in their tree.

If the patch does not get merged, it will cause the smoke tests on
FreeBSD to fail. As Nigel mentions, it is possible to disable this test
for the stable branches.

An alternative would be to fix the build process, and optionally use the
bundled library in case it is not installed on the system. This is what
we normally would have done, but it seems to have been broken in the
case of FreeBSD + argp-standalone.

Niels


> Finally, thanks for checking as the patch is not a simple bug-fix backport.
> 
> > 
> > Niels
> > 


___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] FreeBSD smoke test may fail for older changes, rebase needed

2018-07-28 Thread Niels de Vos
On Sat, Jul 28, 2018 at 03:37:46PM +0200, Niels de Vos wrote:
> This Friday argp-standalone got installed on the FreeBSD Jenkins
> slave(s). With the library available, we can now drop the bundled and
> unmaintained contrib/argp-standlone/ from our glusterfs sources.
> 
> Unfortunately building on FreeBSD fails if the header/library is
> installed. This has been corrected with https://review.gluster.org/20581
> but that means changes posted in Gerrit may need a rebase to include the
> fix for building on FreeBSD.
> 
> I think I have rebased all related changes that did not have negative
> comments asking for corrections/improvement. In case I missed a change,
> please rebase your patch so the smoke test runs again.
> 
> Sorry for any inconvenience that this caused,
> Niels

It just occurred to me that the argp-standalone installation also affects
the release-4.1 and release-3.12 branches. Jiffin, Shyam, do you want to
cherry-pick https://review.gluster.org/20581 to fix that, or do you
prefer an alternative that always uses the bundled version of the
library?

Niels


___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] FreeBSD smoke test may fail for older changes, rebase needed

2018-07-28 Thread Niels de Vos
This Friday argp-standalone got installed on the FreeBSD Jenkins
slave(s). With the library available, we can now drop the bundled and
unmaintained contrib/argp-standalone/ from our glusterfs sources.

Unfortunately building on FreeBSD fails if the header/library is
installed. This has been corrected with https://review.gluster.org/20581
but that means changes posted in Gerrit may need a rebase to include the
fix for building on FreeBSD.

I think I have rebased all related changes that did not have negative
comments asking for corrections/improvement. In case I missed a change,
please rebase your patch so the smoke test runs again.

Sorry for any inconvenience that this caused,
Niels


___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Github teams/repo cleanup

2018-07-25 Thread Niels de Vos
On Wed, Jul 25, 2018 at 02:38:57PM +0200, Michael Scherer wrote:
> On Wednesday, 25 July 2018 at 14:08 +0200, Michael Scherer wrote:
> > On Wednesday, 25 July 2018 at 16:06 +0530, Nigel Babu wrote:
> > > I think our team structure on Github has become unruly. I prefer
> > > that
> > > we
> > > use teams only when we can demonstrate that there is a strong need.
> > > At the
> > > moment, the gluster-maintainers and the glusterd2 projects have
> > > teams
> > > that
> > > have a strong need. If any other repo has a strong need for teams,
> > > please
> > > speak up. Otherwise, I suggest we delete the teams and add the
> > > relevant
> > > people as collaborators on the project.
> > > 
> > > It should be safe to delete the gerrit-hooks repo. These are now
> > > Github
> > > jobs. I'm not in favor of archiving the old projects if they're
> > > going
> > > to be
> > > hidden from someone looking for it. If they just move to the end of
> > > the
> > > listing, it's fine to archive.
> > 
> > So I did a test and just archived gluster/vagrant, and it can still
> > be
> > found.
> > 
> > So I am going to archive at least the salt stuff, and the gerrit-
> > hooks 
> > one. And remove the empty one.
> 
> So while cleaning thing up, I wonder if we can remove this one:
> https://github.com/gluster/jenkins-ssh-slaves-plugin
> 
> We have just a fork, lagging from upstream and I am sure we do not use
> it.

We had someone working on starting/stopping Jenkins slaves in Rackspace
on demand. He has since left Red Hat, and I do not think the infra team
had a great interest in this either (given the move out of Rackspace).

It can be deleted from my point of view.

Niels


___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] RFC: adding support for copy_file_range()

2018-07-04 Thread Niels de Vos
Hi,

at some point we'd like to add support for copy-offloading or
server-side copy to GlusterFS. The FUSE interface should probably map to
the copy_file_range() system call. In order to prepare support in Gluster
for this, I have posted the following changes and would like comments on
them.

- kernel fuse
  https://lkml.org/lkml/2018/6/29/334
  https://marc.info/?l=linux-fsdevel=153029193020575=2

- libfuse
  https://github.com/libfuse/libfuse/pull/259

Thanks,
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Fwd: Clang-format: Update

2018-06-28 Thread Niels de Vos
On Thu, Jun 28, 2018 at 05:28:21PM +0530, Amar Tumballi wrote:
> On Thu, Jun 28, 2018 at 5:20 PM, Niels de Vos  wrote:
> 
> > On Thu, Jun 28, 2018 at 04:52:24PM +0530, Nigel Babu wrote:
> > > Hello folks,
> > >
> > > A while ago we talked about using clang-format for our codebase[1]. We
> > > started doing several pieces of this work asynchronously. Here's an
> > update
> > > on the current state of affairs:
> > >
> > > * Team agrees on a style and a config file representing the style.
> > > This has been happening asynchronously on Github[2]. Amar, Xavi, and Jeff
> > > -- Can we close out this discussion and have a config file in 2 weeks? If
> > > anyone feels strongly about coding style, please participate in the
> > > discussion now.
> > >
> > > * Commit the coding style guide to codebase and make changes in rfc.sh to
> > > use it.
> > > Waiting on 1. I can do this once we have the discussion finalized.
> > >
> > > * gluster-ant commits a single large patch for whole codebase with a
> > > standard clang-format style.
> > > This is waiting on the first two steps and should be trivial to
> > accomplish.
> > > I have access to the gluster-ant account and I can make the necessary
> > > changes.
> >
> > Can this be done as one of the last patches before branching the next
> > release? A large change like this may make backporting to the maintained
> > 3.12 and 4.1 branches more annoying. If it gets changed with a single
> > large patch, there is little need to push it through in the middle of a
> > release IMHO.
> >
> >
> We discussed a bit about it. Even if we do it at the branch of next
> release, it would still be an issue to back port to 4.1 (as it is a
> supported release for year).

Yeah, we will always have to deal with non-cherry-pick backports.

> This improvement would make the review process faster (and at least new
> developer experience better), so better to do it earlier than later IMO.

I am not sure if it makes things faster/better for new developers. But
if that is the case, I will support it too.

Niels


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Fwd: Clang-format: Update

2018-06-28 Thread Niels de Vos
On Thu, Jun 28, 2018 at 04:52:24PM +0530, Nigel Babu wrote:
> Hello folks,
> 
> A while ago we talked about using clang-format for our codebase[1]. We
> started doing several pieces of this work asynchronously. Here's an update
> on the current state of affairs:
> 
> * Team agrees on a style and a config file representing the style.
> This has been happening asynchronously on Github[2]. Amar, Xavi, and Jeff
> -- Can we close out this discussion and have a config file in 2 weeks? If
> anyone feels strongly about coding style, please participate in the
> discussion now.
> 
> * Commit the coding style guide to codebase and make changes in rfc.sh to
> use it.
> Waiting on 1. I can do this once we have the discussion finalized.
> 
> * gluster-ant commits a single large patch for whole codebase with a
> standard clang-format style.
> This is waiting on the first two steps and should be trivial to accomplish.
> I have access to the gluster-ant account and I can make the necessary
> changes.

Can this be done as one of the last patches before branching the next
release? A large change like this may make backporting to the maintained
3.12 and 4.1 branches more annoying. If it gets changed with a single
large patch, there is little need to push it through in the middle of a
release IMHO.

Thanks,
Niels


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: Branched

2018-06-15 Thread Niels de Vos
On Fri, Jun 15, 2018 at 05:03:38PM +0530, Kaushal M wrote:
> On Tue, Jun 12, 2018 at 10:15 PM Niels de Vos  wrote:
> >
> > On Tue, Jun 12, 2018 at 11:26:33AM -0400, Shyam Ranganathan wrote:
> > > On 05/31/2018 09:22 AM, Shyam Ranganathan wrote:
> > > > As brick-mux tests were failing (and still are on master), this was
> > > > holding up the release activity.
> > > >
> > > > We now have a final fix [1] for the problem, and the situation has
> > > > improved over a series of fixes and reverts on the 4.1 branch as well.
> > > >
> > > > So we hope to branch RC0 today, and give a week for package and upgrade
> > > > testing, before getting to GA. The revised calendar stands as follows,
> > > >
> > > > - RC0 Tagging: 31st May, 2018
> > > > - RC0 Builds: 1st June, 2018
> > > > - June 4th-8th: RC0 testing
> > > > - June 8th: GA readiness callout
> > > > - June 11th: GA tagging
> > >
> > > GA has been tagged today, and is off to packaging.
> >
> > The glusterfs packages should land in the testing repositories from the
> > CentOS Storage SIG soon. Currently glusterd2 is still on rc0 though.
> > Please test with the instructions from
> > http://lists.gluster.org/pipermail/packaging/2018-June/000553.html
> >
> > Thanks!
> > Niels
> 
> GlusterD2-v4.1.0 has been tagged and released [1].
> 
> [1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0

Packages should become available in the CentOS Storage SIG's
centos-gluster41-test repository (el7 only) within an hour or so.
Testing can be done following the description from
http://lists.gluster.org/pipermail/packaging/2018-June/000553.html; the
package is called glusterd2.

Please let me know if the build is functioning as required and I'll mark
it for release.

Thanks,
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: Branched

2018-06-12 Thread Niels de Vos
On Tue, Jun 12, 2018 at 11:26:33AM -0400, Shyam Ranganathan wrote:
> On 05/31/2018 09:22 AM, Shyam Ranganathan wrote:
> > As brick-mux tests were failing (and still are on master), this was
> > holding up the release activity.
> > 
> > We now have a final fix [1] for the problem, and the situation has
> > improved over a series of fixes and reverts on the 4.1 branch as well.
> > 
> > So we hope to branch RC0 today, and give a week for package and upgrade
> > testing, before getting to GA. The revised calendar stands as follows,
> > 
> > - RC0 Tagging: 31st May, 2018
> > - RC0 Builds: 1st June, 2018
> > - June 4th-8th: RC0 testing
> > - June 8th: GA readiness callout
> > - June 11th: GA tagging
> 
> GA has been tagged today, and is off to packaging.

The glusterfs packages should land in the testing repositories from the
CentOS Storage SIG soon. Currently glusterd2 is still on rc0 though.
Please test with the instructions from
http://lists.gluster.org/pipermail/packaging/2018-June/000553.html

Thanks!
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] copy_file_range() syscall for offloading copying of files?

2018-06-07 Thread Niels de Vos
Hi Pranith and Amar,

The copy_file_range() syscall can support reflinks on the (local)
filesystem. This is something I'd really like to see in Gluster soonish.
There is https://github.com/gluster/glusterfs/issues/349 which discusses
some of the technical bits, but there has not been an update since the
beginning of April.

If we can support a copy_file_range() FOP in Gluster, support for
reflinks can then be made transparent. The actual data copying will be
done in the bricks, without transporting the data back and forth between
client and server. Distribution of the data might not be optimal, but I
think that is acceptable for many use-cases where the performance of
'file cloning' is important. Many of these environments will not have
distributed volumes in any case.

Note that copy_file_range() does not guarantee that reflinks are used.
This depends on the support and implementation of the backend
filesystem. XFS in Fedora already supports reflinks (it needs special
mkfs options), and we could really benefit from this for large files
like VM disk-images.
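
For reference, this is roughly how a caller drives the syscall: a plain
C sketch that copies between two already-open file descriptors, assuming
a glibc that exposes copy_file_range(). It is not the proposed Gluster
FOP; the kernel and the backend filesystem decide whether this becomes a
reflink, a server-side copy or a plain data copy.

    #define _GNU_SOURCE
    #include <unistd.h>

    ssize_t
    copy_fd_range(int fd_in, int fd_out, size_t len)
    {
        ssize_t ret;
        ssize_t copied = 0;

        while (len > 0) {
            /* NULL offsets: use and update the file offsets of both fds */
            ret = copy_file_range(fd_in, NULL, fd_out, NULL, len, 0);
            if (ret < 0)
                return -1;
            if (ret == 0)          /* end of the source file */
                break;
            copied += ret;
            len -= ret;
        }
        return copied;
    }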

Please provide an updated status by replying to this email, and ideally
adding a note to the GitHub issue.

Thanks!
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Update md-cache after triggering FOP via syncop framework?

2018-06-05 Thread Niels de Vos
On Tue, Jun 05, 2018 at 11:04:00AM +0200, David Spisla wrote:
> Hello Niels,
> 
> thank you. Now I understand this better.
> I am triggering the FOPs via syncop directly from the WORM Xlator which is
> unfortunately below the upcall xlator.
> I don't have a separate xlator, so I am searching for a solution which is
> working inside of the WORM Xlator.
> E.g. the autocommit function of the WORM Xlator is using the syncop
> framework to change the atime
> of a file. I don't know if there is a difference between FOPs triggered by
> syncop or by clients from outside.
> My guess is that there is no difference, but I am not sure.

You can experiment with moving the WORM xlator above upcall in the .vol
files of the bricks; just restart the brick processes after editing the
config files.

I cannot immediately think of a reason why this would cause problems.
You could send a patch that explains your need and changes the location
of WORM (or upcall?) in the graph (see server_graph_table in
xlators/mgmt/glusterd/src/glusterd-volgen.c).

Niels


> 
> Regards
> David
> 
> 2018-06-05 9:51 GMT+02:00 Niels de Vos :
> 
> > On Mon, Jun 04, 2018 at 03:23:05PM +0200, David Spisla wrote:
> > > Dear Gluster-Devels,
> > >
> > > I'm currently using the syncop framework to trigger certain file
> > operations
> > > within the Server Translators stack. At the same time, file attributes
> > such
> > > as file rights and timestamps are changed (atime, mtime). I noticed that
> > > the md-cache does not get the changed attributes or only when the upcall
> > > xlator is activated eg by a READDIR (executing " $ stat * ").
> > > However, I would find it cleaner if right after triggering a file
> > operation
> > > by the syncop framework that would update md-cache. Is there a way to
> > > programmatically do this within the Server Translators stack?
> >
> > Hi David,
> >
> > If you place your xlator above upcall, upcall should inform the clients
> > about the changed attributes. In case it is below upcall, the internal
> > FOPs can not be tracked by upcall.
> >
> > Upcall tracks all clients that have shown interest in a particular
> > inode. If that inode is modified, the callback on the brick stack will
> > trigger a cache-invalidation on the client. I do not think there should
> > be a difference between FOPs from other clients, or locally created ones
> > through the syncop framework.
> >
> > In case this does not help or work, provide a little more details (.vol
> > file?).
> >
> > HTH,
> > Niels
> >
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Update md-cache after triggering FOP via syncop framework?

2018-06-05 Thread Niels de Vos
On Mon, Jun 04, 2018 at 03:23:05PM +0200, David Spisla wrote:
> Dear Gluster-Devels,
> 
> I'm currently using the syncop framework to trigger certain file operations
> within the Server Translators stack. At the same time, file attributes such
> as file rights and timestamps are changed (atime, mtime). I noticed that
> the md-cache does not get the changed attributes or only when the upcall
> xlator is activated eg by a READDIR (executing " $ stat * ").
> However, I would find it cleaner if right after triggering a file operation
> by the syncop framework that would update md-cache. Is there a way to
> programmatically do this within the Server Translators stack?

Hi David,

If you place your xlator above upcall, upcall should inform the clients
about the changed attributes. In case it is below upcall, the internal
FOPs can not be tracked by upcall.

Upcall tracks all clients that have shown interest in a particular
inode. If that inode is modified, the callback on the brick stack will
trigger a cache-invalidation on the client. I do not think there should
be a difference between FOPs from other clients, or locally created ones
through the syncop framework.

In case this does not help or work, please provide a little more detail
(your .vol file?).

HTH,
Niels


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: Branched

2018-06-04 Thread Niels de Vos
On Mon, Jun 04, 2018 at 09:02:30PM +0530, Kaushal M wrote:
> On Mon, Jun 4, 2018 at 8:54 PM Kaushal M  wrote:
> >
> > On Mon, Jun 4, 2018 at 8:39 PM Kaushal M  wrote:
> > >
> > > On Sat, Jun 2, 2018 at 12:11 AM Kaushal M  wrote:
> > > >
> > > > On Fri, Jun 1, 2018 at 6:19 PM Shyam Ranganathan  
> > > > wrote:
> > > > >
> > > > > On 05/31/2018 09:22 AM, Shyam Ranganathan wrote:
> > > > > > As brick-mux tests were failing (and still are on master), this was
> > > > > > holding up the release activity.
> > > > > >
> > > > > > We now have a final fix [1] for the problem, and the situation has
> > > > > > improved over a series of fixes and reverts on the 4.1 branch as 
> > > > > > well.
> > > > > >
> > > > > > So we hope to branch RC0 today, and give a week for package and 
> > > > > > upgrade
> > > > > > testing, before getting to GA. The revised calendar stands as 
> > > > > > follows,
> > > > > >
> > > > > > - RC0 Tagging: 31st May, 2018
> > > > >
> > > > > RC0 Tagged and off to packaging!
> > > >
> > > > GD2 has been tagged as well. [1]
> > > >
> > > > [1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0-rc0
> > >
> > > I've just realized I've made a mistake. I've pushed just the tags,
> > > without updating the branch.
> > > And now, the branch has landed new commits without my additional commits.
> > > So, I've unintentionally created a different branch.
> > >
> > > I'm planning on deleting the tag, and updating the branch with the
> > > release commits, and tagging once again.
> > > Would this be okay?
> >
> > Oh well. Another thing I messed up in my midnight release-attempt.
> > I forgot to publish the release-draft once I'd uploaded the tarballs.
> > But this makes it easier for me. Because of this no one has the
> > mis-tagged release.
> > I'll do what I planned above, and do a proper release this time.
> 
> We have a proper release this time. Source tarballs are available from [1].
> 
> [1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0-rc0

Thanks, I'll get this added to the CentOS Storage SIG repositories.

Niels


> 
> >
> > >
> > > >
> > > > >
> > > > > > - RC0 Builds: 1st June, 2018
> > > > > > - June 4th-8th: RC0 testing
> > > > > > - June 8th: GA readiness callout
> > > > > > - June 11th: GA tagging
> > > > > > - +2-4 days release announcement
> > > > > >
> > > > > > Thanks,
> > > > > > Shyam
> > > > > >
> > > > > > [1] Last fix for mux (and non-mux related):
> > > > > > https://review.gluster.org/#/c/20109/1
> > > > > >
> > > > > > On 05/09/2018 03:41 PM, Shyam Ranganathan wrote:
> > > > > >> Here is the current release activity calendar,
> > > > > >>
> > > > > >> - RC0 tagging: May 14th
> > > > > >> - RC0 builds: May 15th
> > > > > >> - May 15th - 25th
> > > > > >>   - Upgrade testing
> > > > > >>   - General testing and feedback period
> > > > > >> - (on need basis) RC1 build: May 26th
> > > > > >> - GA readiness call out: May, 28th
> > > > > >> - GA tagging: May, 30th
> > > > > >> - +2-4 days release announcement
> > > > > >>
> > > > > >> Thanks,
> > > > > >> Shyam
> > > > > >>
> > > > > >> On 05/06/2018 09:20 AM, Shyam Ranganathan wrote:
> > > > > >>> Hi,
> > > > > >>>
> > > > > >>> Release 4.1 has been branched, as it was done later than 
> > > > > >>> anticipated the
> > > > > >>> calendar of tasks below would be reworked accordingly this week 
> > > > > >>> and
> > > > > >>> posted to the lists.
> > > > > >>>
> > > > > >>> Thanks,
> > > > > >>> Shyam
> > > > > >>>
> > > > > >>> On 03/27/2018 02:59 PM, Shyam Ranganathan wrote:
> > > > >  Hi,
> > > > > 
> > > > >  As we have completed potential scope for 4.1 release (reflected 
> > > > >  here [1]
> > > > >  and also here [2]), it's time to talk about the schedule.
> > > > > 
> > > > >  - Branching date (and hence feature exception date): Apr 16th
> > > > >  - Week of Apr 16th release notes updated for all features in the 
> > > > >  release
> > > > >  - RC0 tagging: Apr 23rd
> > > > >  - Week of Apr 23rd, upgrade and other testing
> > > > >  - RCNext: May 7th (if critical failures, or exception features 
> > > > >  arrive late)
> > > > >  - RCNext: May 21st
> > > > >  - Week of May 21st, final upgrade and testing
> > > > >  - GA readiness call out: May, 28th
> > > > >  - GA tagging: May, 30th
> > > > >  - +2-4 days release announcement
> > > > > 
> > > > >  and, review focus. As in older releases, I am starring reviews 
> > > > >  that are
> > > > >  submitted against features, this should help if you are looking 
> > > > >  to help
> > > > >  accelerate feature commits for the release (IOW, this list is 
> > > > >  the watch
> > > > >  list for reviews). This can be found handy here [3].
> > > > > 
> > > > >  So, branching is in about 4 weeks!
> > > > > 
> > > > >  Thanks,
> > > > >  Shyam
> > > > > 
> > > > >  [1] Issues marked against release 4.1:
> > > > >  

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: Branched

2018-06-02 Thread Niels de Vos
On Sat, Jun 02, 2018 at 12:11:55AM +0530, Kaushal M wrote:
> On Fri, Jun 1, 2018 at 6:19 PM Shyam Ranganathan  wrote:
> >
> > On 05/31/2018 09:22 AM, Shyam Ranganathan wrote:
> > > As brick-mux tests were failing (and still are on master), this was
> > > holding up the release activity.
> > >
> > > We now have a final fix [1] for the problem, and the situation has
> > > improved over a series of fixes and reverts on the 4.1 branch as well.
> > >
> > > So we hope to branch RC0 today, and give a week for package and upgrade
> > > testing, before getting to GA. The revised calendar stands as follows,
> > >
> > > - RC0 Tagging: 31st May, 2018
> >
> > RC0 Tagged and off to packaging!
> 
> GD2 has been tagged as well. [1]
> 
> [1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0-rc0

What is the status of the RPM for GD2? Can the Fedora RPM be rebuilt
directly on CentOS, or does it need additional dependencies? (Note that
CentOS does not allow dependencies from Fedora EPEL.)

Thanks,
Niels


> 
> >
> > > - RC0 Builds: 1st June, 2018
> > > - June 4th-8th: RC0 testing
> > > - June 8th: GA readiness callout
> > > - June 11th: GA tagging
> > > - +2-4 days release announcement
> > >
> > > Thanks,
> > > Shyam
> > >
> > > [1] Last fix for mux (and non-mux related):
> > > https://review.gluster.org/#/c/20109/1
> > >
> > > On 05/09/2018 03:41 PM, Shyam Ranganathan wrote:
> > >> Here is the current release activity calendar,
> > >>
> > >> - RC0 tagging: May 14th
> > >> - RC0 builds: May 15th
> > >> - May 15th - 25th
> > >>   - Upgrade testing
> > >>   - General testing and feedback period
> > >> - (on need basis) RC1 build: May 26th
> > >> - GA readiness call out: May, 28th
> > >> - GA tagging: May, 30th
> > >> - +2-4 days release announcement
> > >>
> > >> Thanks,
> > >> Shyam
> > >>
> > >> On 05/06/2018 09:20 AM, Shyam Ranganathan wrote:
> > >>> Hi,
> > >>>
> > >>> Release 4.1 has been branched, as it was done later than anticipated the
> > >>> calendar of tasks below would be reworked accordingly this week and
> > >>> posted to the lists.
> > >>>
> > >>> Thanks,
> > >>> Shyam
> > >>>
> > >>> On 03/27/2018 02:59 PM, Shyam Ranganathan wrote:
> >  Hi,
> > 
> >  As we have completed potential scope for 4.1 release (reflected here 
> >  [1]
> >  and also here [2]), it's time to talk about the schedule.
> > 
> >  - Branching date (and hence feature exception date): Apr 16th
> >  - Week of Apr 16th release notes updated for all features in the 
> >  release
> >  - RC0 tagging: Apr 23rd
> >  - Week of Apr 23rd, upgrade and other testing
> >  - RCNext: May 7th (if critical failures, or exception features arrive 
> >  late)
> >  - RCNext: May 21st
> >  - Week of May 21st, final upgrade and testing
> >  - GA readiness call out: May, 28th
> >  - GA tagging: May, 30th
> >  - +2-4 days release announcement
> > 
> >  and, review focus. As in older releases, I am starring reviews that are
> >  submitted against features, this should help if you are looking to help
> >  accelerate feature commits for the release (IOW, this list is the watch
> >  list for reviews). This can be found handy here [3].
> > 
> >  So, branching is in about 4 weeks!
> > 
> >  Thanks,
> >  Shyam
> > 
> >  [1] Issues marked against release 4.1:
> >  https://github.com/gluster/glusterfs/milestone/5
> > 
> >  [2] github project lane for 4.1:
> >  https://github.com/gluster/glusterfs/projects/1#column-1075416
> > 
> >  [3] Review focus dashboard:
> >  https://review.gluster.org/#/q/starredby:srangana%2540redhat.com
> >  ___
> >  maintainers mailing list
> >  maintain...@gluster.org
> >  http://lists.gluster.org/mailman/listinfo/maintainers
> > 
> > >>> ___
> > >>> maintainers mailing list
> > >>> maintain...@gluster.org
> > >>> http://lists.gluster.org/mailman/listinfo/maintainers
> > >>>
> > >> ___
> > >> Gluster-devel mailing list
> > >> Gluster-devel@gluster.org
> > >> http://lists.gluster.org/mailman/listinfo/gluster-devel
> > >>
> > > ___
> > > Gluster-devel mailing list
> > > Gluster-devel@gluster.org
> > > http://lists.gluster.org/mailman/listinfo/gluster-devel
> > >
> > ___
> > maintainers mailing list
> > maintain...@gluster.org
> > http://lists.gluster.org/mailman/listinfo/maintainers
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://lists.gluster.org/mailman/listinfo/maintainers
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 3.10, 3.12, 4.0: Tagged for release

2018-04-25 Thread Niels de Vos
On Wed, Apr 25, 2018 at 11:24:42AM -0400, Shyam Ranganathan wrote:
> On 04/24/2018 04:14 PM, Niels de Vos wrote:
> > On Tue, Apr 24, 2018 at 10:43:29AM -0400, Shyam Ranganathan wrote:
> >> Hi,
> >>
> >> The next set of minor updates for 3.10/12 and 4.0 are tagged and under
> >> packaging.
> > 
> > Please see the announcements of the tagging for details on the
> > packaging. I can only do the builds for the official Fedora and CentOS
> > Storage SIG repositories. Someone else should look into the packages for
> > download.gluster.org.
> 
> We currently do not have anyone else to look at these (as far as I
> know), as a result we would need to wait for Kaleb to be back and get
> these going before we announce the release.
> 
> Would we need to wait and ack the CentOS SIG packages, or ack them
> anyway? Thoughts? (ack as in tested status update for the SIG to push
> them to the repos)

As briefly mentioned on IRC, the packages for CentOS Storage SIG have
been requested to be signed+pushed to the mirrors.

Cheers,
Niels


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Monitoring and acting on LVM thin-pool consumption

2018-04-10 Thread Niels de Vos
Recently I have been implementing "volume clone" support in Heketi. This
uses the snapshot+clone functionality from Gluster. In order to create
snapshots and clone them, it is required to use LVM thin-pools on the
bricks. This is where my current problem originates

When there are cloned volumes, the bricks of these volumes use the same
thin-pool as the original bricks. This makes sense, and allows cloning
to be really fast! There is no need to copy data from one brick to a new
one, the thin-pool provides copy-on-write semantics.

Unfortunately it can be rather difficult to estimate how large the
thin-pool should be when the initial Gluster Volume is created.
Over-allocation is likely needed, but by how much? It may not be clear
how many clones will be made, nor what percentage of data will change
on each of the clones.

A wrong estimate can easily cause the thin-pool to become full. When
that happens, the filesystem on the bricks will go read-only. Mounting
the filesystem read-write may not be possible at all. I've even seen
/dev entries for the LV getting removed. This makes for a horrible
Gluster experience, and it can be tricky to recover from it.

In order to make thin-provisioning more stable in Gluster, I would like
to see integrated monitoring of (thin) LVs and some way of acting on
critical events. One idea would be to make the Gluster Volume read-only
when it detects that a brick is almost out-of-space. This is close to
what local filesystems do when their block-device is having issues.

The 'dmeventd' process already monitors LVM, and by default writes to
'dmesg'. Checking dmesg for warnings is not really a nice solution, so
maybe we should write a plugin for dmeventd. Possibly something already
exists that we can use, or take inspiration from.
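
As a very rough sketch of the kind of check such a plugin (or a simple
watchdog) could perform, assuming the 'lvs' tool is available and using
a hypothetical 'vg/thinpool' name:

    #include <stdio.h>
    #include <stdlib.h>

    /* Return the data usage of a thin-pool in percent, or -1.0 on error. */
    double
    thin_pool_usage(const char *vg_lv)
    {
        char cmd[256];
        char line[64];
        FILE *fp;
        double pct = -1.0;

        snprintf(cmd, sizeof(cmd),
                 "lvs --noheadings -o data_percent %s", vg_lv);

        fp = popen(cmd, "r");
        if (!fp)
            return -1.0;
        if (fgets(line, sizeof(line), fp))
            pct = strtod(line, NULL);
        pclose(fp);

        return pct;
    }

    /* A watchdog would call thin_pool_usage("vg/thinpool"), compare the
     * result against a threshold (say 90%) and, for example, switch the
     * affected Gluster volume to read-only. */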

Please provide ideas, thoughts and any other comments. Thanks!
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: LTM release targeted for end of May

2018-03-23 Thread Niels de Vos
On Fri, Mar 23, 2018 at 10:17:28AM +0530, Amar Tumballi wrote:
> On Thu, Mar 22, 2018 at 11:34 PM, Shyam Ranganathan 
> wrote:
> 
> > On 03/21/2018 04:12 AM, Amar Tumballi wrote:
> > > Current 4.1 project release lane is empty! I cleaned it up, because I
> > > want to hear from all as to what content to add, than add things
> > marked
> > > with the 4.1 milestone by default.
> > >
> > >
> > > I would like to see we have sane default values for most of the options,
> > > or have group options for many use-cases.
> >
> > Amar, do we have an issue that lists the use-cases and hence the default
> > groups to be provided for the same?
> >
> >
> Considering group options' task is more in glusterd2, the issue is @
> https://github.com/gluster/glusterd2/issues/614 &
> https://github.com/gluster/glusterd2/issues/454

Maybe somehow link those with
https://github.com/gluster/glusterfs/issues/294 ?

Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Proposing]GlusterFS Test Day

2018-03-16 Thread Niels de Vos
On Wed, Mar 07, 2018 at 07:13:18AM -0500, Sumantro Mukherjee wrote:
> Hey Niels,
> 
> I work for the Fedora QA team and I was wondering if you would like to 
> have a GlusterFS test day which is where we can test GlusterFS for F28 
> (specially the modular server)

Hi Sumantro!

It would be great to have a Gluster test day at one point. I do not know
if the Fedora 28 Modular Server already has been prepared to include
Gluster (glusterfs + additional tools). The only references I am aware of
are https://bugzilla.redhat.com/show_bug.cgi?id=1518150 and
https://bugzilla.redhat.com/show_bug.cgi?id=1523245 . Neither seems to
have made progress in the last few months.

In order to set something up, it would be best to discuss your ideas on
the gluster-devel@gluster.org list (you may need to subscribe). All
developers and many testers are expected to be responsive there.

HTH,
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] gNFS service management from glusterd

2018-02-23 Thread Niels de Vos
On Wed, Feb 21, 2018 at 08:25:21PM +0530, Atin Mukherjee wrote:
> On Wed, Feb 21, 2018 at 4:24 PM, Xavi Hernandez  wrote:
> 
> > Hi all,
> >
> > currently glusterd sends a SIGKILL to stop gNFS, while all other services
> > are stopped with a SIGTERM signal first (this can be seen in
> > glusterd_svc_stop() function of mgmt/glusterd xlator).
> >
> 
> > The question is why it cannot be stopped with SIGTERM as all other
> > services. Using SIGKILL blindly while write I/O is happening can cause
> > multiple inconsistencies at the same time. For a replicated volume this is
> > not a problem because it will take one of the replicas as the "good" one
> > and continue, but for a disperse volume, if the number of inconsistencies
> > is bigger than the redundancy value, a serious problem could appear.
> >
> > The probability of this is very small (I've tried to reproduce this
> > problem on my laptop but have been unable to), but it exists.
> >
> > Is there any known issue that prevents gNFS from being stopped with a
> > SIGTERM, or can it be changed safely?
> >
> 
> I firmly believe that we need to send SIGTERM as that's the right way to
> gracefully shut down a running process, but I'd request the NFS folks to
> confirm whether there's any background on why it was done with SIGKILL.

No background about this is known to me. I had a quick look through the
git logs, but could not find an explanation.

I agree that SIGTERM would be more appropriate.
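
To illustrate the pattern (a plain shell sketch of the equivalent
behaviour, not the actual glusterd_svc_stop() code; $PIDFILE is a
placeholder for wherever glusterd records the gNFS pid):

    PID=$(cat "$PIDFILE")
    kill -TERM "$PID"                      # ask for a clean shutdown first
    for i in $(seq 1 30); do
            kill -0 "$PID" 2>/dev/null || exit 0   # gone, nothing more to do
            sleep 1
    done
    kill -KILL "$PID"                      # only force it after a grace period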

Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] run-tests-in-vagrant

2018-02-16 Thread Niels de Vos
On Fri, Feb 16, 2018 at 10:15:28AM +0100, Niels de Vos wrote:
> On Fri, Feb 16, 2018 at 10:08:51AM +0530, Nigel Babu wrote:
> > So we have a job that's unmaintained and unwatched. If nobody steps up to
> > own it in the next 2 weeks, I'll be deleting this job.
> 
> I fixed the downloading of the Vagrant box with
> https://github.com/gluster/glusterfs-patch-acceptance-tests/pull/122 .

In addition to the download error, I have improved the return status of
the ./run-tests-in-vagrant.sh script so that it returns the status of
the jobs that are run in the VM:
  https://review.gluster.org/19575

Please review that one too :)

Thanks,
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] run-tests-in-vagrant

2018-02-16 Thread Niels de Vos
On Fri, Feb 16, 2018 at 10:08:51AM +0530, Nigel Babu wrote:
> So we have a job that's unmaintained and unwatched. If nobody steps up to
> own it in the next 2 weeks, I'll be deleting this job.

I fixed the downloading of the Vagrant box with
https://github.com/gluster/glusterfs-patch-acceptance-tests/pull/122 .

Maybe Talur can help with updating the box so that geo-replication tests
function? If there is an Ansible role/repository with the changes that
were done on the Jenkins slaves, that could possibly help.

Is it possible to provide a Vagrant box configured similarly to the
Jenkins slaves (without the Jenkins bits and other internal pieces) from
the same deployment as the slaves? That would make things less manual
and much easier to consume.

Thanks!
Niels


> 
> On Wed, Feb 14, 2018 at 4:49 PM, Niels de Vos <nde...@redhat.com> wrote:
> 
> > On Wed, Feb 14, 2018 at 11:15:23AM +0530, Nigel Babu wrote:
> > > Hello,
> > >
> > > Centos CI has a run-tests-in-vagrant job. Do we continue to need this
> > > anymore? It still runs master and 3.8. I don't see this job adding much
> > > value at this point given we only look at results that are on
> > > build.gluster.org. I'd like to use the extra capacity for other tests
> > that
> > > will run on centos-ci.
> >
> > The ./run-tests-in-vagrant.sh script is ideally what developers run
> > before submitting their patches. In case it fails, we should fix it.
> > Being able to run tests locally is something many of the new
> > contributors want to do. Having a controlled setup for the testing can
> > really help with getting new contributors onboard.
> >
> > Hmm, and the script/job definitely seems to be broken with at least two
> > parts:
> > - the Vagrant version on CentOS uses the old URL to get the box
> > - 00-georep-verify-setup.t fails, but the result is marked as SUCCESS
> >
> > It seems we need to get better at watching the CI, or at least be able
> > to receive and handle notifications...
> >
> > Thanks,
> > Niels
> >
> 
> 
> 
> -- 
> nigelb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] run-tests-in-vagrant

2018-02-14 Thread Niels de Vos
On Wed, Feb 14, 2018 at 11:15:23AM +0530, Nigel Babu wrote:
> Hello,
> 
> Centos CI has a run-tests-in-vagrant job. Do we continue to need this
> anymore? It still runs master and 3.8. I don't see this job adding much
> value at this point given we only look at results that are on
> build.gluster.org. I'd like to use the extra capacity for other tests that
> will run on centos-ci.

The ./run-tests-in-vagrant.sh script is ideally what developers run
before submitting their patches. In case it fails, we should fix it.
Being able to run tests locally is something many of the new
contributors want to do. Having a controlled setup for the testing can
really help with getting new contributors onboard.
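
For reference, the intended workflow is roughly this (a sketch, assuming
the script sits at the top of the glusterfs source tree as the ./ prefix
above suggests):

    $ git clone https://github.com/gluster/glusterfs.git
    $ cd glusterfs
    $ ./run-tests-in-vagrant.sh    # brings up the VM and runs the regression tests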

Hmm, and the script/job definitely seems to be broken with at least two
parts:
- the Vagrant version on CentOS uses the old URL to get the box
- 00-georep-verify-setup.t fails, but the result is marked as SUCCESS

It seems we need to get better at watching the CI, or at least be able
to receive and handle notifications...

Thanks,
Niels


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] use-case for 4 replicas and 1 arbiter

2018-02-12 Thread Niels de Vos
Hi Ravi,

Last week I was in a discussion about 4-way replication and one arbiter
(5 bricks per set). It seems that it is not possible to create this
configuration through the CLI. What would it take to make this
available?

The idea is to get highly available storage, split over three
datacenters. Two large datacenters have red and blue racks (separate
power supplies, networking, etc.) and the smaller datacenter can host the
arbiter brick.

 .---------------------------.       .---------------------------.
 |            DC-1           |       |            DC-2           |
 | .---red---.  .---blue---. |       | .---red---.  .---blue---. |
 | |         |  |          | |       | |         |  |          | |
 | |  [b-1]  |  |  [b-2]   | |=======| |  [b-3]  |  |  [b-4]   | |
 | |         |  |          | |       | |         |  |          | |
 | '---------'  '----------' |       | '---------'  '----------' |
 '---------------------------'       '---------------------------'
                  \                     /
                   \                   /
                    \                 /
                   .---------------------.
                   |        DC-3         |
                   |     .---------.     |
                   |     |         |     |
                   |     |  [a-1]  |     |
                   |     |         |     |
                   |     '---------'     |
                   '---------------------'

Creating the volume looks like this, and errors out:

   # gluster volume create red-blue replica 5 arbiter 1 \
   dc1-red-svr1:/bricks/b-1 dc1-blue-svr1:/bricks/b-2 \
   dc2-red-svr1:/bricks/b-3 dc2-blue-svr1:/bricks/b-4 \
   dc3-svr1:/bricks/a-1
   For arbiter configuration, replica count must be 3 and arbiter count
   must be 1. The 3rd brick of the replica will be the arbiter
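
For comparison, the only arbiter layout the CLI does accept today is the
3-brick variant, so something like this (same placeholder brick names,
reduced to a single replica set):

    # gluster volume create red-blue replica 3 arbiter 1 \
        dc1-red-svr1:/bricks/b-1 dc2-red-svr1:/bricks/b-3 \
        dc3-svr1:/bricks/a-1

That obviously loses the red/blue separation within each datacenter,
which is the whole point of the 4+1 request.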

Possibly the thin-arbiter from https://review.gluster.org/19545 could be
a replacement for the 'full' arbiter. But that may require more time to
get stable than the current arbiter?

Thanks,
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Preview of glusterfs packaging with glusterd2 (Fedora and CentOS RPMs)

2018-01-30 Thread Niels de Vos
On Mon, Jan 29, 2018 at 03:37:20PM -0500, Kaleb S. KEITHLEY wrote:
> This is built on top of glusterfs-3.12. Obviously this will change to
> 4.0 (4.0rc0, etc.) This is derived from the
> .../extras/rpms/glusterd2.spec in the glusterd2 source.
> 
> see https://koji.fedoraproject.org/koji/taskinfo?taskID=24543030
> 
> (Having to "bundle" the generated source and the -vendor source tar
> files does make for a big .src.rpm.)
> 
> Question for Debian and Ubuntu users: would you want to see the
> glusterd2 bits included in the -common or -server sub-package or would
> you like to see a separate sub-package for glusterd2?

I am not sure if the advantages outweigh the difficulties of
bundling two projects into one package. My personal preference would be
to have a glusterd2 package and a glusterfs one, completely separate
from each other, just with tight Requires:...

It is not clear to me when/if GD2 gets merged back into the glusterfs
repository. In case the projects stay separate, they should probably get
packaged that way too. In case they get merged, *yuck*, we'll have to
deal with a project that contains C, Python, Bash and Golang sources :-(

I've suggested this before, but did not get much feedback yet, maybe
now?
 - http://lists.gluster.org/pipermail/packaging/2018-January/000439.html
 - http://lists.gluster.org/pipermail/packaging/2018-January/000444.html

Thanks,
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Rafi KC attending DevConf and FOSDEM

2018-01-26 Thread Niels de Vos
On Fri, Jan 26, 2018 at 06:24:36PM +0530, Mohammed Rafi K C wrote:
> Hi All,
> 
> I'm attending both DevConf (25-28) and Fosdem (3-4). If any of you are
> attending the conferences and would like to talk about gluster, please
> feel free to ping me through irc nick rafi on freenode or message me on
> +436649795838

In addition to that at FOSDEM, there is a Gluster stand (+Ceph, and next
to oVirt) on level 1 (ground floor) of the K building[0]. We'll try to
have some of the developers and other contributors to the project around
at all times. Come and talk to us about your use-cases, questions and
words of encouragement ;-)

There are several talks related to Gluster too! On Saturday there is
"Optimizing Software Defined Storage for the Age of Flash" [1], and on
Sunday the Software Defined Storage DevRoom has scheduled many more [2].

Hope to see you there!
Niels


0. https://fosdem.org/2018/schedule/buildings/#k
1. https://fosdem.org/2018/schedule/event/optimizing_sds/
2. https://fosdem.org/2018/schedule/track/software_defined_storage/
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-24 Thread Niels de Vos
On Wed, Jan 24, 2018 at 02:24:06PM +0530, Pranith Kumar Karampuri wrote:
> hi,
> In the same commit you mentioned earlier, there was this code earlier:
> 
> -/* Returns 1 if the stat seems to be filled with zeroes. */
> -int
> -nfs_zero_filled_stat (struct iatt *buf)
> -{
> -        if (!buf)
> -                return 1;
> -
> -        /* Do not use st_dev because it is transformed to store the xlator id
> -         * in place of the device number. Do not use st_ino because by this time
> -         * we've already mapped the root ino to 1 so it is not guaranteed to be
> -         * 0.
> -         */
> -        if ((buf->ia_nlink == 0) && (buf->ia_ctime == 0))
> -                return 1;
> -
> -        return 0;
> -}
> -
> 
> I moved this to a common library function that can be used in afr as well.
> Why was it there in NFS? +Niels for answering that question.

Sorry, I don't know why that was done. It was introduced with the initial
gNFS implementation, long before I started to work with Gluster. The
only reference I have is this from
xlators/nfs/server/src/nfs3-helpers.c:nfs3_stat_to_post_op_attr()

 371 /* Some performance translators return zero-filled stats when they
 372  * do not have up-to-date attributes. Need to handle this by not
 373  * returning these zeroed out attrs.
 374  */

This may not be true for the current situation anymore.

HTH,
Niels


> 
> If I give you a patch which will assert the error condition, would it be
> possible for you to figure out the first xlator which is unwinding the iatt
> with nlink count as zero but ctime as non-zero?
> 
> On Wed, Jan 24, 2018 at 1:03 PM, Lian, George (NSB - CN/Hangzhou) <
> george.l...@nokia-sbell.com> wrote:
> 
> > Hi,  Pranith Kumar,
> >
> >
> >
> > Can you tell me why you need to set buf->ia_nlink to “0” in the function
> > gf_zero_fill_stat(), and which API or application cares about it?
> >
> > If I remove this line and also update the corresponding check in
> > gf_is_zero_filled_stat(), the issue seems to be gone, but I can’t confirm
> > whether it will lead to other issues.
> >
> >
> >
> > So could you please double check it and give your comments?
> >
> >
> >
> > My change is as below:
> >
> >
> >
> > gf_boolean_t
> > gf_is_zero_filled_stat (struct iatt *buf)
> > {
> >         if (!buf)
> >                 return 1;
> >
> >         /* Do not use st_dev because it is transformed to store the xlator id
> >          * in place of the device number. Do not use st_ino because by this time
> >          * we've already mapped the root ino to 1 so it is not guaranteed to be
> >          * 0.
> >          */
> >         //if ((buf->ia_nlink == 0) && (buf->ia_ctime == 0))
> >         if (buf->ia_ctime == 0)
> >                 return 1;
> >
> >         return 0;
> > }
> >
> > void
> > gf_zero_fill_stat (struct iatt *buf)
> > {
> >         //   buf->ia_nlink = 0;
> >         buf->ia_ctime = 0;
> > }
> >
> >
> >
> > Thanks & Best Regards
> >
> > George
> >
> > *From:* Lian, George (NSB - CN/Hangzhou)
> > *Sent:* Friday, January 19, 2018 10:03 AM
> > *To:* Pranith Kumar Karampuri ; Zhou, Cynthia (NSB -
> > CN/Hangzhou) 
> > *Cc:* Li, Deqian (NSB - CN/Hangzhou) ;
> > Gluster-devel@gluster.org; Sun, Ping (NSB - CN/Hangzhou) <
> > ping@nokia-sbell.com>
> >
> > *Subject:* RE: [Gluster-devel] a link issue maybe introduced in a bug fix
> > " Don't let NFS cache stat after writes"
> >
> >
> >
> > Hi,
> >
> > >>> Cool, this works for me too. Send me a mail off-list once you are
> > available and we can figure out a way to get into a call and work on this.
> >
> >
> >
> > Have you reproduced the issue per the step I listed in
> > https://bugzilla.redhat.com/show_bug.cgi?id=1531457 and last mail?
> >
> >
> >
> > If not, I would like you to try it yourself; the only difference between
> > yours and mine is that I create only 2 bricks instead of 6 bricks.
> >
> >
> >
> > And Cynthia could have a session with you if needed, since I am not
> > available next Monday and Tuesday.
> >
> >
> >
> > Thanks & Best Regards,
> >
> > George
> >
> >
> >
> > *From:* gluster-devel-boun...@gluster.org [mailto:gluster-devel-bounces@
> > gluster.org ] *On Behalf Of *Pranith
> > Kumar Karampuri
> > *Sent:* Thursday, January 18, 2018 6:03 PM
> > *To:* Lian, George (NSB - CN/Hangzhou) 
> > *Cc:* Zhou, Cynthia (NSB - CN/Hangzhou) ;
> > Li, Deqian (NSB - CN/Hangzhou) ;
> > Gluster-devel@gluster.org; Sun, Ping (NSB - CN/Hangzhou) <
> > ping@nokia-sbell.com>
> > *Subject:* Re: [Gluster-devel] a link issue maybe introduced in a bug fix
> > " Don't let NFS cache stat after writes"
> >
> >
> >
> >
> >
> >
> >
> > On Thu, Jan 18, 2018 at 12:17 PM, Lian, George 

Re: [Gluster-devel] Rawhide RPM builds failing

2018-01-24 Thread Niels de Vos
On Wed, Jan 24, 2018 at 02:53:40PM +0530, Nigel Babu wrote:
> More details: https://build.gluster.org/job/rpm-rawhide/1182/

With the changes for bug 1536186 it works fine for me. One patch still
needs to get merged though.

The error in the root.log of the job looks unrelated; it may have been
caused by a broken package in Fedora Rawhide. I could not identify a
real bug in the .spec.

Niels


> On Wed, Jan 24, 2018 at 2:03 PM, Niels de Vos <nde...@redhat.com> wrote:
> 
> > On Wed, Jan 24, 2018 at 09:14:51AM +0530, Nigel Babu wrote:
> > > Hello folks,
> > >
> > > Our rawhide rpm builds seem to be failing with what looks like a specfile
> > > issue. It's worth looking into this now before F28 is released in May.
> >
> > Do you have more details? The errors from a build.log from mock would
> > help. Which .spec are you using, the one from the GlusterFS sources, or
> > the one from Fedora?
> >
> > Please report it as a bug, either against Fedora/glusterfs or
> > GlusterFS/build.
> >
> > Thanks!
> > Niels
> >
> 
> 
> 
> -- 
> nigelb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Rawhide RPM builds failing

2018-01-24 Thread Niels de Vos
On Wed, Jan 24, 2018 at 09:14:51AM +0530, Nigel Babu wrote:
> Hello folks,
> 
> Our rawhide rpm builds seem to be failing with what looks like a specfile
> issue. It's worth looking into this now before F28 is released in May.

Do you have more details? The errors from a build.log from mock would
help. Which .spec are you using, the one from the GlusterFS sources, or
the one from Fedora?
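
For example, something along these lines reproduces the Rawhide build
locally and keeps the logs around (the .src.rpm name is a placeholder):

    $ mock -r fedora-rawhide-x86_64 --rebuild glusterfs-<version>.src.rpm
    $ less /var/lib/mock/fedora-rawhide-x86_64/result/build.log
    $ less /var/lib/mock/fedora-rawhide-x86_64/result/root.log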

Please report it as a bug, either against Fedora/glusterfs or
GlusterFS/build.

Thanks!
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Setting up dev environment

2018-01-20 Thread Niels de Vos
On Sat, Jan 20, 2018 at 01:20:00PM +, Ram Ankireddypalle wrote:
> Thanks Niels. Finally managed to get it working:
> 
> 1) Cloned using  https from review.gluster.org/glusterfs.git
> 2) Had to generate a ssh password in review.gluster.org
> 3) Provide the generated ssh password when prompted for the password to 
> authenticate with review.gluster.org

That sounds like there is a problem with your ssh key configuration,
either in your $HOME/.ssh/ or on
https://review.gluster.org/#/settings/ssh-keys . Using ssh-keys is
surely safer than passwords, and probably a little more user-friendly
too. It is probably worth fixing your settings at some point.
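
A minimal sketch of what that looks like (key type and file name are just
an example, use whatever you prefer):

    $ ssh-keygen -t ed25519 -f ~/.ssh/id_gerrit
    (add the contents of ~/.ssh/id_gerrit.pub under
     https://review.gluster.org/#/settings/ssh-keys in the web UI)
    $ ssh -i ~/.ssh/id_gerrit <username>@review.gluster.org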

Niels


> 
> Thanks and Regards,
> Ram
> 
> -----Original Message-
> From: Niels de Vos [mailto:nde...@redhat.com] 
> Sent: Saturday, January 20, 2018 8:13 AM
> To: Ram Ankireddypalle
> Cc: Gluster Devel (gluster-devel@gluster.org)
> Subject: Re: [Gluster-devel] Setting up dev environment
> 
> On Sat, Jan 20, 2018 at 02:17:16AM +, Ram Ankireddypalle wrote:
> > Hi,
> >   I am trying to set  a dev environment and send out the code that I 
> > worked on out for review.
> >   I tried setting up a build environment using doc @ 
> > http://docs.gluster.org/en/latest/Developer-guide/Development-Workflow
> > /
> > 
> >  I am seeing the following error.
> > 
> >  git clone ssh://ram-ankireddypa...@git.gluster.org/glusterfs.git 
> > glusterfs
> >  Cloning into 'glusterfs'...
> >  ssh: connect to host git.gluster.org port 22: Connection timed 
> > out
> > 
> > Please suggest what could be the issue here.
> 
> I normally use review.gluster.org instead of the old(?) alias 
> git.gluster.org. But, that does not really matter and should not cause the 
> problem you are seeing.
> 
> An easier way to debug connection problems is by trying to log in over ssh 
> directly. If it works, you'll get a banner and the server will log you out 
> again.
> 
> $ ssh nixpa...@review.gluster.org
> 
>   Welcome to Gerrit Code Review
> 
>   Hi Niels de Vos, you have successfully connected over SSH.
> 
>   Unfortunately, interactive shells are disabled.
>   To clone a hosted Git repository, use:
> 
>   git clone ssh://nixpa...@review.gluster.org/REPOSITORY_NAME.git
> 
> Connection to review.gluster.org closed.
> 
> To troubleshoot, use 'ssh -vvv ram-ankireddypa...@review.gluster.org'
> and go through the logs.
> 
> Good luck!
> Niels
> ***Legal Disclaimer***
> "This communication may contain confidential and privileged material for the
> sole use of the intended recipient. Any unauthorized review, use or 
> distribution
> by others is strictly prohibited. If you have received the message by mistake,
> please advise the sender by reply email and delete the message. Thank you."
> **
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

