Re: [Gluster-users] What is the right way to bring down a Glusterfs server for maintenance?

2019-07-03 Thread John Strunk
Nope. Just:
* Ensure all volumes are fully healed so you don't run into split brain
* Go ahead and shut down the server needing maintenance
** If you just want gluster down on that server node: stop glusterd and kill the glusterfs bricks, then do what you need to do (see the sketch below)
** If you just want to power off:
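
For reference, a minimal command sketch of the "gluster down on that node" path above, assuming a systemd-based install and a volume named gv1 (both placeholders):

# 1. Confirm there is nothing left to heal (repeat per volume)
gluster volume heal gv1 info summary

# 2. Stop the management daemon on the node going down
systemctl stop glusterd

# 3. Kill the remaining brick and self-heal processes on this node
pkill glusterfsd
pkill glusterfs

# 4. Do the maintenance, then bring gluster back
systemctl start glusterd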

Re: [Gluster-users] Self Heal Confusion

2018-12-20 Thread John Strunk
Assuming your bricks are up... yes, the heal count should be decreasing. There is/was a bug wherein self-heal would stop healing but would still be running. I don't know whether your version is affected, but the remedy is to just restart the self-heal daemon. Force start one of the volumes that
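
A hedged example of the remedy described above (gv0 is a placeholder volume name):

# Force-starting an already-started volume restarts the self-heal daemon
gluster volume start gv0 force

# The "Self-heal Daemon" rows should now show Online = Y on every node
gluster volume status gv0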

[Gluster-users] GCS milestone 0.3

2018-11-15 Thread John Strunk
Today, we are announcing the availability of GCS (Gluster Container Storage) 0.3. GCS is following a release cadence of 2 weeks for all the point releases leading up to 1.0. This enables developers and users to get an overall experience of the latest changes on a more frequent basis. From GCS 1.0

[Gluster-users] GCS release 0.2

2018-11-02 Thread John Strunk
Today, we are announcing the availability of GCS (Gluster Container Storage) 0.2. This is a follow-on to last month’s release of version 0.1. In addition to various bug fixes and enhancements, highlights include: - Update of glusterd2 container to 20181102 nightly - Deploy environment now uses

Re: [Gluster-users] selfheal operation takes infinite to complete

2018-10-23 Thread John Strunk
I'll leave it to others to help debug slow heal... As for 'heal info' taking a long time, you can use `gluster vol heal gv1 info summary` to just get the counts. That will probably get you the stats you are really interested in (whether heal is progressing). -John On Tue, Oct 23, 2018 at 5:31
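
A small sketch of how those counts can be watched over time (the volume name gv1 comes from the thread; the interval is arbitrary):

# Heal is progressing if the per-brick counts trend toward zero
while true; do
    date
    gluster volume heal gv1 info summary
    sleep 60
done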

Re: [Gluster-users] Upgrade from 3.13 ?

2018-09-13 Thread John Strunk
I believe the ask is also around instructions for actually obtaining the new bits from the distros... what repos need to be changed (if any) and whether old packages need to be removed. On Thu, Sep 13, 2018 at 9:16 AM Shyam Ranganathan wrote: > On 09/12/2018 04:05 PM, Nicolas SCHREVEL wrote: >

Re: [Gluster-users] Upgrade from 3.13 ?

2018-09-12 Thread John Strunk
with 3.12 and 3.10 > releases" > > And there is no info about "PPA" Management. > Do I have to remove 3.13 PPA first, install over ... > > When installing on Ubuntu there is information in the documentation: > > https://docs.gluster.org/en/latest/Install-Guide/Install/#for-ubuntu
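
A hedged sketch of what the PPA switch could look like on Ubuntu; the PPA names below are illustrative, so check the Install Guide link above for the current ones:

# Drop the old 3.13 PPA and add the newer series
sudo add-apt-repository --remove ppa:gluster/glusterfs-3.13
sudo add-apt-repository ppa:gluster/glusterfs-4.1
sudo apt update

# Pull in the newer packages
sudo apt install glusterfs-server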

Re: [Gluster-users] Upgrade from 3.13 ?

2018-09-12 Thread John Strunk
The upgrade guide covers live upgrade. https://docs.gluster.org/en/latest/Upgrade-Guide/ -John On Wed, Sep 12, 2018 at 3:10 PM Nicolas SCHREVEL wrote: > Hi, > > I have two clusters with 3 bricks on Ubuntu 16.04 with GlusterFS 3.13 PPA. > > What is the best way to upgrade 3.13.2 to 4.0

Re: [Gluster-users] Redis db permission issue while running GitLab in Kubernetes with Gluster

2017-09-08 Thread John Strunk
> kubernetes.io > Configure a Security Context for a Pod or Container. A > security context defines privilege and access control settings for a Pod or > Container. > > -- > *From:* gluster-users-boun...@gluster.org <gluster-users-bo

Re: [Gluster-users] Redis db permission issue while running GitLab in Kubernetes with Gluster

2017-09-07 Thread John Strunk
I don't think this is a gluster problem... Each container is going to have its own notion of user ids, hence the mystery uid 1000 in the redis container. I suspect that if you exec into the gitlab container, it may be the one running as 1000 (guessing based on the file names). If you want to
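
If the goal is for both containers to agree on ownership of the shared volume, a Pod-level securityContext is the usual mechanism (see the kubernetes.io page quoted in the other message). A hypothetical sketch, with all names, IDs, images, and paths being illustrative rather than taken from the thread:

# Files on the gluster-backed volume are made accessible to GID 1000,
# which every container in the Pod picks up as a supplemental group
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: gitlab-example
spec:
  securityContext:
    fsGroup: 1000
  containers:
  - name: redis
    image: redis:3.2
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: gitlab-data
EOF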