Nope. Just:
* Ensure all volumes are fully healed so you don't run into split-brain
* Go ahead and shut down the server needing maintenance
** If you just want gluster down on that server node: stop glusterd and kill
the glusterfs brick processes, then do what you need to do
** If you just want to power off:
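The per-node steps above can be sketched as a short command sequence. This is a hedged sketch: it assumes glusterd is managed by systemd, brick processes run as `glusterfsd`, and the volume name `gv1` is illustrative.

```shell
# 1. Confirm heals are complete on every volume before touching the node
gluster volume heal gv1 info summary   # gv1 is an illustrative volume name

# 2. Stop the management daemon on the node going down
sudo systemctl stop glusterd

# 3. Kill the brick processes (glusterd does not stop them itself)
sudo pkill glusterfsd

# ...perform maintenance, then restart glusterd; it respawns the bricks
sudo systemctl start glusterd
```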
Assuming your bricks are up... yes, the heal count should be decreasing.
There is/was a bug wherein self-heal would stop healing but would still be
running. I don't know whether your version is affected, but the remedy is
to just restart the self-heal daemon.
Force start one of the volumes that
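The usual way to kick a stuck self-heal daemon is a force start, which respawns the volume's helper processes without disturbing the running bricks. A sketch, with `gv1` again as an illustrative volume name:

```shell
gluster volume start gv1 force        # respawns the self-heal daemon (shd)
gluster volume heal gv1               # trigger a fresh heal pass
gluster volume heal gv1 info summary  # then watch the pending counts fall
```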
Today, we are announcing the availability of GCS (Gluster Container
Storage) 0.3. GCS is following a two-week release cadence for all the
point releases leading up to 1.0, so developers and users can try out the
latest changes more frequently. From
GCS 1.0
Today, we are announcing the availability of GCS (Gluster Container
Storage) 0.2. This is a follow-on to last month’s release of version 0.1.
In addition to various bug fixes and enhancements, highlights include:
- Update of glusterd2 container to 20181102 nightly
- Deploy environment now uses
I'll leave it to others to help debug slow heal...
As for 'heal info' taking a long time, you can use `gluster vol heal gv1
info summary` to just get the counts. That will probably get you the stats
you are really interested in (whether heal is progressing).
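To see whether heal is actually progressing, you can poll the summary and check that the pending-entry counts are decreasing (a sketch; `gv1` is an illustrative volume name):

```shell
# Re-run the summary every 60 seconds and watch the counts
watch -n 60 'gluster volume heal gv1 info summary'
```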
-John
On Tue, Oct 23, 2018 at 5:31
I believe the ask is also around instructions for actually obtaining the
new bits from the distros... what repos need to be changed (if any) and
whether old packages need to be removed.
On Thu, Sep 13, 2018 at 9:16 AM Shyam Ranganathan
wrote:
> On 09/12/2018 04:05 PM, Nicolas SCHREVEL wrote:
>
> with 3.12 and 3.10 releases"
>
> And there is no info about "PPA" Management.
> Do I have to remove the 3.13 PPA first, install over ...
>
> When installing on Ubuntu there is information in the documentation:
>
> https://docs.gluster.org/en/latest/Install-Guide/Install/#for-ubuntu
The upgrade guide covers live upgrade.
https://docs.gluster.org/en/latest/Upgrade-Guide/
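For the PPA side of the question, switching release series on Ubuntu is typically a matter of removing the old PPA and adding the new one before upgrading. A hedged sketch: the exact PPA names below are assumptions, so check the Gluster team's Launchpad page for the current ones.

```shell
# PPA names are assumed; verify them on launchpad.net/~gluster first
sudo add-apt-repository --remove ppa:gluster/glusterfs-3.13
sudo add-apt-repository ppa:gluster/glusterfs-4.0
sudo apt update
sudo apt upgrade          # pulls in the newer glusterfs packages
```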
-John
On Wed, Sep 12, 2018 at 3:10 PM Nicolas SCHREVEL
wrote:
> Hi,
>
> I have two clusters with 3 bricks on Ubuntu 16.04 with GlusterFS 3.13 PPA.
>
> What is the best way to upgrade 3.13.2 to 4.0?
> kubernetes.io
> Edit This Page. Configure a Security Context for a Pod or Container. A
> security context defines privilege and access control settings for a Pod or
> Container.
>
I don't think this is a gluster problem...
Each container is going to have its own notion of user ids, hence the
mystery uid 1000 in the redis container. I suspect that if you exec into
the gitlab container, it may be the one running as 1000 (guessing based on
the file names). If you want to
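A quick way to confirm which container is actually writing as uid 1000 is to compare the uid inside each container with the owner of the files on the volume. The pod, container, and mount names here are hypothetical:

```shell
# Check the effective uid inside each container (names are hypothetical)
kubectl exec gitlab-pod -c gitlab -- id -u
kubectl exec redis-pod -c redis -- id -u

# Compare against the numeric owner of the files on the gluster volume
ls -ln /mnt/glustervol
```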