I would also say do not forget to set "sync=disabled".
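For reference, that is a ZFS dataset property. A minimal sketch, assuming the
bricks live on a dataset named tank/gluster (placeholder name):
zfs set sync=disabled tank/gluster   # disable synchronous writes on the dataset
zfs get sync tank/gluster            # verify the setting
Note this acknowledges writes before they reach stable storage, which is
exactly the trade-off debated later in this thread.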
Original Message
Subject: Re: [Gluster-users] Production cluster planning
Local Time: September 26, 2016 7:59 PM
UTC Time: September 26, 2016 5:59 PM
From: jlawre...@squaretrade.com
To: Lindsay Mathieson
Subject: Re: [Gluster-users] Production cluster planning
Local Time: September 26, 2016 11:08 PM
UTC Time: September 26, 2016 9:08 PM
From: lindsay.mathie...@gmail.com
To: gluster-users@gluster.org
On 27/09/2016 4:13 AM, mabi wrote:
> I would also say do not forget to set "sync=disabled"
Hi,
I have a GlusterFS server version 3.7.12 and mount my volumes on my clients
using the native GlusterFS FUSE client. Now I was wondering: is it safe to upgrade the
GlusterFS client on my clients to 3.7.15 without upgrading my server to 3.7.15?
Regards,
y
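For what it's worth, a quick way to confirm what each side is actually running
(a minimal sketch; both commands just print the installed version):
glusterfs --version   # on the client
gluster --version     # on the server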
it all boils down to this specific
Original Message
Subject: Re: [Gluster-users] Production cluster planning
Local Time: September 30, 2016 12:41 PM
UTC Time: September 30, 2016 10:41 AM
From: lindsay.mathie...@gmail.com
To: mabi <m...@protonmail.ch>, Gluster Users <gluster-users@gluster.org>
On 29/09/2016 4:32 AM, mabi wrote:
> That's not correct. There is no risk of corruption using
> "sync=disabled". In the worst
I was wondering, with the setup you mention, how high are your context
switches? I mean, what is your typical average context-switch rate and what are
your highest peaks (as seen in iostat)?
Best,
M.
Original Message
Subject: Re: [Gluster-users] Production
Hi Lindsay
Just noticed that you have your ZFS logs on a single disk, you like living
dangerously ;-) You should have a mirror for the SLOG to be on the safe side.
Cheers,
M.
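For reference, an existing single-disk log vdev can be converted into a mirror
with zpool attach (a sketch; pool and device names are placeholders):
zpool attach tank /dev/sdx /dev/sdy   # mirror the existing log device /dev/sdx
zpool status tank                     # the log vdev should now show as "mirror"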
Original Message
Subject: Re: [Gluster-users] Improving IOPS
Local Time: November 5, 2016
Hello,
I just upgraded from GlusterFS 3.7.12 to 3.7.16 and checked the op-version for
all my volumes and found out that I am still using op-version 30706 as you can
see below:
Option                                  Value
------                                  -----
cluster.op-version                      30706
My question here is: should I manually set the op-version for all
From: oleksa...@natalenko.name
To: mabi <m...@protonmail.ch>, Gluster Users <gluster-users@gluster.org>
IIRC, the latest op-version is 30712.
On October 22, 2016 1:38:44 PM GMT+02:00, mabi <m...@protonmail.ch> wrote:
>Hello,
>
>I just upgraded from GlusterFS 3.7.12 to 3.7.16 and chec
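For reference, a sketch of checking and raising the op-version once all peers
are upgraded (30712 is the value mentioned above; volume name is a placeholder):
gluster volume get myvolume cluster.op-version   # show the current value
gluster volume set all cluster.op-version 30712  # raise it cluster-wide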
Hello,
Where can I find the upgrade guide to 3.8? I would like to upgrade from 3.7.15
to 3.8.5 soon but on the following page:
http://gluster.readthedocs.io/en/latest/Upgrade-Guide/README/
it is still
Is it possible to set the checkpoint on the master node from the slave node
where the snapshot will actually be done?
Because I can see the problem that you have a "backup" script on the slave node
which actually does the snapshot but somehow it has to run/set the checkpoint
on the master
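One workaround sketch, assuming passwordless SSH from the slave to the master
(hostnames and volume names are placeholders):
ssh root@masternode 'gluster volume geo-replication myvolume slavenode::myvolume-geo config checkpoint now'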
To: gluster-users@gluster.org
On 27/10/2016 6:35 AM, mabi wrote:
> I was wondering, with the setup you mention, how high are your context
> switches? I mean, what is your typical average context-switch rate and what
> are your highest peaks (as seen in iostat)?
Wouldn't that be vmstat?
--
Lindsay
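For reference, vmstat reports context switches in its "cs" column; a minimal
example sampling once per second, five times:
vmstat 1 5   # watch the "cs" column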
Hi,
I would like to compile GlusterFS 3.8.6 manually on a Linux Debian 8 server.
The configure step works fine but a make immediately fails with the following
error:
Makefile:90: *** missing separator (did you mean TAB instead of 8 spaces?).
Stop.
Any ideas what is wrong here and how to fix?
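If the generated Makefile really has recipe lines indented with eight spaces,
replacing them with a hard TAB is the usual fix (a sketch; keep a backup):
sed -i.bak -e 's/^        /\t/' Makefile   # recipes must start with a TAB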
Original Message
Subject: Re: [Gluster-users] Makefile:90: *** missing separator (did you mean
TAB instead of 8 spaces?). Stop.
Local Time: December 8, 2016 7:29 AM
UTC Time: December 8, 2016 6:29 AM
From: anoo...@redhat.com
To: mabi <m...@protonmail.ch>
Gluster Users <gluster-users@gluster.org>
Hello,
I just upgraded from GlusterFS 3.7.17 to 3.8.6 and somehow messed up my volume
by rebooting without shutting things down properly, I suppose. Now I have some
files which need to be healed (17 files on node 1 to be precise and 0 on node
2). I have a replica 2, by the way.
So in order to
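For reference, the usual commands to inspect and trigger healing (a sketch
using the volume name that appears elsewhere in this thread):
gluster volume heal myvolume info              # list entries pending heal
gluster volume heal myvolume                   # kick off an index heal
gluster volume heal myvolume info split-brain  # check for split-brain entries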
Hi Milos,
On a more generic note, the process of expanding an existing volume is
described here in the documentation:
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Managing%20Volumes/#expanding-volumes
Best,
M.
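In short it boils down to add-brick followed by a rebalance (a sketch; on a
replicated volume, bricks must be added in multiples of the replica count):
gluster volume add-brick myvolume node3:/data/myvolume/brick
gluster volume rebalance myvolume start
gluster volume rebalance myvolume status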
Original Message
Subject:
17 4:33 AM
From: khire...@redhat.com
To: mabi <m...@protonmail.ch>
Gluster Users <gluster-users@gluster.org>
Hi Mabi,
What's the rsync version being used?
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "mabi" <m...@protonmail.ch>
> To
Anyone?
Original Message
Subject: Deletion of old CHANGELOG files in .glusterfs/changelogs
Local Time: March 31, 2017 11:22 PM
UTC Time: March 31, 2017 9:22 PM
From: m...@protonmail.ch
To: Gluster Users
Hi,
I have been using geo-replication for over
Hi,
I have been using geo-replication for over a year now on my 3.7.20 GlusterFS
volumes and noticed that the CHANGELOG.<timestamp> files in the .glusterfs/changelogs
directory of a brick never get deleted. I have for example over 120k files in
one of these directories and it is growing constantly.
So my question,
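A quick way to gauge the backlog (a sketch; brick path as used elsewhere in
this thread):
find /data/myvolume/brick/.glusterfs/changelogs -name 'CHANGELOG.*' | wc -l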
Original Message
Subject: Re: [Gluster-users] Geo replication stuck (rsync: link_stat
"(unreachable)")
Local Time: April 11, 2017 9:18 AM
UTC Time: April 11, 2017 7:18 AM
From: khire...@redhat.com
To: mabi <m...@protonmail.ch>
Gluster Users <gluster-users@gluster.org>
Hi,
Then please us
Amazing, thanks Amar for creating the issue!
Cheers,
M.
Original Message
Subject: Re: [Gluster-users] Fw: Deletion of old CHANGELOG files in
.glusterfs/changelogs
Local Time: April 6, 2017 4:08 AM
UTC Time: April 6, 2017 2:08 AM
From: atumb...@redhat.com
To: mabi <m...@protonmail.ch>
files in
.glusterfs/changelogs
Local Time: April 5, 2017 8:44 AM
UTC Time: April 5, 2017 6:44 AM
From: atumb...@redhat.com
To: Mohammed Rafi K C <rkavu...@redhat.com>
mabi <m...@protonmail.ch>, Gluster Users <gluster-users@gluster.org>
Local Time: March 31, 2017 11:22 PM
UTC Time
] Bugfix release GlusterFS 3.8.11 has landed
Local Time: April 20, 2017 5:17 PM
UTC Time: April 20, 2017 3:17 PM
From: nde...@redhat.com
To: mabi <m...@protonmail.ch>
gluster-users@gluster.org
On Wed, Apr 19, 2017 at 01:46:14PM -0400, mabi wrote:
> Sorry for insisting but where can I find the
Hello,
I am planning to upgrade from 3.7.20 to 3.8.11 but unfortunately the "Upgrading
to 3.8" guide is missing:
https://gluster.readthedocs.io/en/latest/Upgrade-Guide/README/
Where can I find the instruction to upgrade to 3.8?
Best regards,
M.
).
Is this issue with DHT fixed in the latest 3.8.x release?
Regards,
M.
Original Message
Subject: Re: [Gluster-users] Geo replication stuck (rsync: link_stat
"(unreachable)")
Local Time: April 13, 2017 7:57 AM
UTC Time: April 13, 2017 5:57 AM
From: khire...@redhat.com
T
has landed
Local Time: April 22, 2017 9:07 AM
UTC Time: April 22, 2017 7:07 AM
From: pkara...@redhat.com
To: mabi <m...@protonmail.ch>
Niels de Vos <nde...@redhat.com>, gluster-users@gluster.org
<gluster-users@gluster.org>
If your volume has replication/erasure coding then it is
Hello,
I am using distributed geo-replication with two of my GlusterFS 3.7.20
replicated volumes and just noticed that the geo-replication for one volume is
not working anymore. It has been stuck since 2017-02-23 22:39; I tried to stop
and restart geo-replication but it still stays stuck at
Does anyone know why this guide is missing?
Regards,
M.
Original Message
Subject: Upgrading to 3.8 guide missing
Local Time: April 14, 2017 11:18 AM
UTC Time: April 14, 2017 9:18 AM
From: m...@protonmail.ch
To: Gluster Users
Hello,
I am planning to
p file?
> Local Time: July 31, 2017 9:25 AM
> UTC Time: July 31, 2017 7:25 AM
> From: ravishan...@redhat.com
> To: mabi <m...@protonmail.ch>
> Gluster Users <gluster-users@gluster.org>
> On 07/31/2017 12:20 PM, mabi wrote:
>
>> I did a find on this inod
essage
> Subject: Re: [Gluster-users] Possible stale .glusterfs/indices/xattrop file?
> Local Time: July 31, 2017 10:55 AM
> UTC Time: July 31, 2017 8:55 AM
> From: ravishan...@redhat.com
> To: mabi <m...@protonmail.ch>
> Gluster Users <gluster-users@gluster.org>
ter to replica 2
> Local Time: July 29, 2017 12:32 PM
> UTC Time: July 29, 2017 10:32 AM
> From: ksan...@redhat.com
> To: mabi <m...@protonmail.ch>, Rahul Hinduja <rhind...@redhat.com>, Kotresh
> Hiremath Ravishankar <khire...@redhat.com>
> Gluster Users <gluster-users@gluster.org>
ct: Re: [Gluster-users] Not possible to stop geo-rep after adding
>> arbiter to replica 2
>> Local Time: July 29, 2017 12:32 PM
>> UTC Time: July 29, 2017 10:32 AM
>> From: ksan...@redhat.com
>> To: mabi <m...@protonmail.ch>, Rahul Hinduja <rhind...@redhat.com>
Hi,
Sorry for mailing again but, as mentioned in my previous mail, I have added an
arbiter node to my replica 2 volume and it seems to have gone fine except for
the fact that there is one single file which needs healing and does not get
healed, as you can see here from the output of a "heal info":
Hello
To my two-node replica volume I have added an arbiter node for safety purposes.
On that volume I also have geo-replication running and would like to stop it:
it is in status "Faulty" and keeps trying over and over to sync without
success. I am using GlusterFS 3.8.11.
So in order to stop geo-rep I
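For reference, a Faulty session that refuses a normal stop can usually be
stopped with force (a sketch, using the session names that appear later in this
thread):
gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo stop force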
now if you need any more
info. I would be glad to resolve this weird file which needs to be healed but
can not.
Best regards,
Mabi
> Original Message
> Subject: Re: [Gluster-users] Possible stale .glusterfs/indices/xattrop file?
> Local Time: July 30, 2017 3:31 AM
> U
Hello,
As you might have read in my previous post on the mailing list, I have added an
arbiter node to my GlusterFS 3.8.11 replica 2 volume. There were some healing
issues, which could be fixed with the help of Ravi, but now I just noticed that
my quotas are all gone.
When I run the following command:
ssage
>> Subject: Re: [Gluster-users] Quotas not working after adding arbiter brick
>> to replica 2
>> Local Time: August 2, 2017 1:06 PM
>> UTC Time: August 2, 2017 11:06 AM
>> From: sunni...@redhat.com
>> To: mabi <m...@protonmail.ch>
>> Glust
gust 2, 2017 11:06 AM
> From: sunni...@redhat.com
> To: mabi <m...@protonmail.ch>
> Gluster Users <gluster-users@gluster.org>
>
> Mabi,
> We have fixed a couple of issues in the quota list path.
> Could you also please attach the quota.conf file
> (/var/lib/gluster
I am still stuck without being able to delete the geo-replication session. Can
anyone help?
> Original Message
> Subject: How to delete geo-replication session?
> Local Time: August 1, 2017 12:15 PM
> UTC Time: August 1, 2017 10:15 AM
> From: m...@protonmail.ch
> To: Gluster
I managed to work around this issue by adding "[arch=amd64]" to my apt source
list for gluster like this:
deb [arch=amd64] http://download.gluster.org/pub/gluster/glusterfs/3.8/3.8.14/Debian/jessie/apt jessie main
In case that can help any others with the same situation (where they also have
Hello,
I just deleted (permanently) my geo-replication session using the following
command:
gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo delete
and noticed that the .glusterfs/changelogs directory on my volume still exists.
Is it safe to delete the whole directory myself with
Hi,
By default the owner and group of a GlusterFS volume seem to be root:root. I
changed this by first mounting my volume using glusterfs/fuse on a client and
then ran the following:
chown 1000:1000 /mnt/myglustervolume
This correctly changed the owner and group of my volume to UID/GID 1000, but
like 1-2
Just found out I needed to set the following two parameters:
gluster volume set myvol storage.owner-uid 1000
gluster volume set myvol storage.owner-gid 1000
In case that helps any one else :)
> Original Message
> Subject: set owner:group on root of volume
> Local Time: July 11, 2017
Unfortunately the root directory of my volume still gets its owner and group
reset to root. Can someone explain why, or help with this issue? I need it to
be set to UID/GID 1000 and stay like that.
Thanks
> Original Message
> Subject: Re: set owner:group on root of volume
>
Regards,
M.
> Original Message
> Subject: Re: [Gluster-users] set owner:group on root of volume
> Local Time: July 23, 2017 8:15 PM
> UTC Time: July 23, 2017 6:15 PM
> From: vbel...@redhat.com
> To: mabi <m...@protonmail.ch>, Gluster Users <gluster-users@gluster.org>
...
> Original Message
> Subject: Re: [Gluster-users] set owner:group on root of volume
> Local Time: July 23, 2017 8:15 PM
> UTC Time: July 23, 2017 6:15 PM
> From: vbel...@redhat.com
> To: mabi <m...@protonmail.ch>, Gluster Users <gluster-users@gluster.org>
Hello,
Today while freeing up some space on my OS disk I just discovered that there is
a /var/lib/misc/glusterfsd directory which seems to save data related to
geo-replication.
In particular there is a hidden sub-directory called ".processed" as you can
see here:
Can anyone tell me how to find out what is going wrong here? In the meantime
the FAILURES count has reached 272 and I can't find anything in the GlusterFS
documentation on how to troubleshoot the FAILURES count in geo-replication.
Thank you.
> Original Message
> Subject: How to deal
Does anyone have an idea? Or shall I open a bug for that?
> Original Message
> Subject: Re: set owner:group on root of volume
> Local Time: July 18, 2017 3:46 PM
> UTC Time: July 18, 2017 1:46 PM
> From: m...@protonmail.ch
> To: Gluster Users
>
Hello,
I have a replica 2 GlusterFS 3.8.11 cluster on 2 Debian 8 physical servers
using ZFS as filesystem. Now in order to avoid a split-brain situation I would
like to add a third node as arbiter.
Regarding the arbiter node I have a few questions:
- can the arbiter node be a virtual machine?
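For reference, converting an existing replica 2 volume to replica 2 plus
arbiter is done with add-brick (a sketch; the arbiter hostname and brick path
are placeholders):
gluster volume add-brick myvolume replica 3 arbiter 1 arbiternode:/data/myvolume/brick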
formance_Enhancements.html
>
> [3]
> http://events.linuxfoundation.org/sites/events/files/slides/glusterfs-arbiter-VAULT-2016.pdf
>
> On 29.06.2017 11:55, Raghavendra Talur wrote:
>
>> On 28-Jun-2017 5:49 PM, "mabi" <m...@protonmail.ch> wrote:
>>
>
ans it has something to do with indexing. But is this
warning normal? Is there anything I can do about it?
Regards,
M.
> Original Message
> Subject: Re: [Gluster-users] Arbiter node as VM
> Local Time: June 29, 2017 11:55 PM
> UTC Time: June 29, 2017 9:55 PM
> From: dougti+glu
Hello,
I have a replica 2 with a remote slave node for geo-replication (GlusterFS
3.8.11 on Debian 8) and saw for the first time a non-zero number in the
FAILURES column when running:
gluster volume geo-replication myvolume remotehost::remotevol status detail
Right now the number under the
Anyone?
> Original Message
> Subject: Persistent storage for docker containers from a Gluster volume
> Local Time: June 25, 2017 6:38 PM
> UTC Time: June 25, 2017 4:38 PM
> From: m...@protonmail.ch
> To: Gluster Users
> Hello,
> I have a two node
Hi,
I have a 3-node replica (including arbiter) volume with GlusterFS 3.8.11 and
this night one of my nodes (node1) ran out of memory for some unknown reason,
and as such the Linux OOM killer killed the glusterd and glusterfs processes.
I restarted the glusterd process but now that node is
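For reference, the usual recovery steps after the daemons were killed (a
sketch; Debian 8 uses systemd):
systemctl start glusterd        # bring the management daemon back
gluster peer status             # check the node rejoined its peers
gluster volume status myvolume  # check the brick processes are running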
cksum = 733515336 on peer arbiternode.intra.oriented.ch
Best regards,
Mabi
> Original Message
> Subject: Re: [Gluster-users] State: Peer Rejected (Connected)
> Local Time: August 6, 2017 9:31 AM
> UTC Time: August 6, 2017 7:31 AM
> From: potato...@potatogim.ne
emote cksum = 733515336 on peer node2.domain.tld
> [2017-08-06 08:16:57.275558] E [MSGID: 106012]
> [glusterd-utils.c:2988:glusterd_compare_friend_volume] 0-management: Cksums
> of quota configuration of volume myvolume differ. local cksum = 3823389269,
> remote cksum = 733515336 o
Original Message
> Subject: Re: [Gluster-users] Possible stale .glusterfs/indices/xattrop file?
> Local Time: July 31, 2017 3:27 AM
> UTC Time: July 31, 2017 1:27 AM
> From: ravishan...@redhat.com
> To: mabi <m...@protonmail.ch>
> Gluster Users <gluster-users
ation.
Alternatively, how can I fix this situation? I think the easiest would be to
delete the geo-replication session.
Regards,
Mabi
> Original Message
> Subject: Re: [Gluster-users] How to delete geo-replication session?
> Local Time: August 8, 2017 7:19 AM
>
luster-users] How to delete geo-replication session?
> Local Time: August 8, 2017 11:20 AM
> UTC Time: August 8, 2017 9:20 AM
> From: avish...@redhat.com
> To: mabi <m...@protonmail.ch>
> Gluster Users <gluster-users@gluster.org>
> Sorry I missed your previous mail.
>
ld explain this behavior.
> Original Message
> Subject: Re: [Gluster-users] GlusterFS 3.8 Debian 8 apt repository broken
> Local Time: August 4, 2017 12:33 PM
> UTC Time: August 4, 2017 10:33 AM
> From: kkeit...@redhat.com
> To: mabi <m...@protonmail.ch>
> Glu
2
> Local Time: August 4, 2017 3:28 PM
> UTC Time: August 4, 2017 1:28 PM
> From: sunni...@redhat.com
> To: mabi <m...@protonmail.ch>
> Gluster Users <gluster-users@gluster.org>
>
> Hi mabi,
> This is a likely issue where the last gfid entry in the quota.conf file i
Hi,
I would really like to get rid of this geo-replication session as I am stuck
with it right now. For example I can't even stop my volume as it complains
about that geo-replication...
Can someone let me know how I can delete it?
Thanks
> Original Message
> Subject: How to
Hi,
When creating a geo-replication session, is gverify.sh used or run
respectively? Or is gverify.sh just an ad-hoc command to test manually whether
creating a geo-replication session would succeed?
Best,
M.
ster-users] self-heal not working
> Local Time: August 22, 2017 11:51 AM
> UTC Time: August 22, 2017 9:51 AM
> From: ravishan...@redhat.com
> To: mabi <m...@protonmail.ch>
> Ben Turner <btur...@redhat.com>, Gluster Users <gluster-users@gluster.org>
>
to stop
the volume before?
Regards,
M.
Original Message
Subject: Re: [Gluster-users] glustershd: unable to get index-dir on
myvolume-client-0
Local Time: May 2, 2017 10:56 AM
UTC Time: May 2, 2017 8:56 AM
From: ravishan...@redhat.com
To: mabi <m...@protonmail.ch>, Glust
UTC Time: May 9, 2017 4:50 AM
From: sunni...@redhat.com
To: mabi <m...@protonmail.ch>
Gluster Users <gluster-users@gluster.org>
Hi mabi,
This bug was fixed recently,
https://bugzilla.redhat.com/show_bug.cgi?id=1414346. It would be available in
3.11 release. I will plan to ba
Subject: Re: [Gluster-users] Quota limits gone after upgrading to 3.8
Local Time: May 10, 2017 8:48 AM
UTC Time: May 10, 2017 6:48 AM
From: sunni...@redhat.com
To: mabi <m...@protonmail.ch>
Gluster Users <gluster-users@gluster.org>
Hi Mabi,
Note that limits are stil
C Time: May 17, 2017 12:37 AM
From: ravishan...@redhat.com
To: mabi <m...@protonmail.ch>, Gluster Users <gluster-users@gluster.org>
On 05/16/2017 11:13 PM, mabi wrote:
Today I even saw up to 400k context switches for around 30 minutes on my
two-node replica... Does anyone el
Today I even saw up to 400k context switches for around 30 minutes on my
two-node replica... Does anyone else see such high context-switch rates on
their GlusterFS nodes?
I am wondering what is "normal" and if I should be worried...
Original Message
Subject: 120k context switches
Hello,
I have a two-node replica GlusterFS 3.8 cluster and am trying to find out the
best way to use a GlusterFS volume as persistent storage for docker containers
to store their data (e.g. web assets).
I was thinking that the simplest method would be to mount my GlusterFS volume
for that
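A minimal sketch of that approach, assuming the volume is FUSE-mounted on the
docker host (nginx is just a stand-in image and the paths are placeholders):
mount -t glusterfs node1:/myvolume /mnt/myvolume
docker run -d --name web -v /mnt/myvolume/webassets:/usr/share/nginx/html nginx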
Dear Krutika,
Sorry for asking so naively, but can you tell me on what factor you base the
recommendation that the client and server event-threads parameters for a
volume should be set to 4? Is this metric for example based on the number of
cores a GlusterFS server has? I am asking because I saw my GlusterFS
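For reference, the two parameters under discussion are set per volume (a
sketch; 4 is the value recommended above):
gluster volume set myvolume client.event-threads 4
gluster volume set myvolume server.event-threads 4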
Subject: Re: [Gluster-users] 120k context switches on GlusterFS nodes
Local Time: May 22, 2017 7:45 PM
UTC Time: May 22, 2017 5:45 PM
From: j...@julianfamily.org
To: gluster-users@gluster.org
On 05/22/17 10:27, mabi wrote:
Sorry for posting again but I was really wondering if it is somehow possible
h Kumar Karampuri <pkara...@redhat.com>, mabi <m...@protonmail.ch>
Gluster Users <gluster-users@gluster.org>, Gluster Devel
<gluster-de...@gluster.org>
On 05/17/2017 11:07 PM, Pranith Kumar Karampuri wrote:
+ gluster-devel
On Wed, May 17, 2017 at 10:50 PM, mabi <m...@pr
AM
UTC Time: May 3, 2017 1:09 AM
From: ravishan...@redhat.com
To: mabi <m...@protonmail.ch>
Gluster Users <gluster-users@gluster.org>
On 05/02/2017 11:48 PM, mabi wrote:
Hi Ravi,
Thanks for the pointer, you are totally right, the "dirty" directory is missing
on my node1.
Hi,
I have a two-node GlusterFS 3.8.11 replicated volume and just noticed today in
the glustershd.log log file a lot of the following warning messages:
[2017-05-01 18:42:18.004747] W [MSGID: 108034]
[afr-self-heald.c:479:afr_shd_index_sweep] 0-myvolume-replicate-0: unable to
get index-dir on
Hello,
Last week I upgraded my 2-node replica GlusterFS cluster from 3.7.20 to 3.8.11
and on one of the volumes I use the quota feature of GlusterFS. Unfortunately,
I just noticed by using the usual command "gluster volume quota myvolume list"
that all my quotas on that volume are gone. I had
Hi, does anyone have any advice about my question below? Thanks!
> Original Message
> Subject: Manually delete .glusterfs/changelogs directory ?
> Local Time: August 16, 2017 5:59 PM
> UTC Time: August 16, 2017 3:59 PM
> From: m...@protonmail.ch
> To: Gluster Users
ocal Time: August 22, 2017 11:51 AM
> UTC Time: August 22, 2017 9:51 AM
> From: ravishan...@redhat.com
> To: mabi <m...@protonmail.ch>
> Ben Turner <btur...@redhat.com>, Gluster Users <gluster-users@gluster.org>
>
> On 08/22/2017 02:30 PM, mabi wrote:
>
>> T
:46.407404779 +0200
Birth: -
> Original Message
> Subject: Re: [Gluster-users] self-heal not working
> Local Time: August 21, 2017 9:34 PM
> UTC Time: August 21, 2017 7:34 PM
> From: btur...@redhat.com
> To: mabi <m...@protonmail.ch>
> Gluster Users
/myvolume/brick
Status: Connected
Number of entries in split-brain: 0
> Original Message
> Subject: Re: [Gluster-users] self-heal not working
> Local Time: August 21, 2017 11:35 PM
> UTC Time: August 21, 2017 9:35 PM
> From: btur...@redhat.com
> To: mabi <m...@protonmail.ch>
rough FUSE? If yes, how long? This is a
production system, that's why I am asking.
> Original Message
> Subject: Re: [Gluster-users] self-heal not working
> Local Time: August 22, 2017 6:26 AM
> UTC Time: August 22, 2017 4:26 AM
> From: ravishan...@redhat.com
> To: ma
> From: sarum...@redhat.com
> To: mabi <m...@protonmail.ch>, Gluster Users <gluster-users@gluster.org>
>
> On Saturday 19 August 2017 02:05 AM, mabi wrote:
>
>> Hi,
>>
>> When creating a geo-replication session is the gverify.sh used or ran
>> respectiv
Hi,
I have a replica 2 with arbiter GlusterFS 3.8.11 cluster and there is
currently one file listed to be healed as you can see below but never gets
healed by the self-heal daemon:
Brick node1.domain.tld:/data/myvolume/brick
/data/appdata_ocpom4nckwru/preview/1344699/64-64-crop.png
Status:
PM
> UTC Time: August 27, 2017 1:45 PM
> From: ravishan...@redhat.com
> To: mabi <m...@protonmail.ch>
> Ben Turner <btur...@redhat.com>, Gluster Users <gluster-users@gluster.org>
>
> Yes, the shds did pick up the file for healing (I saw messages like "
August 28, 2017 5:58 AM
> UTC Time: August 28, 2017 3:58 AM
> From: ravishan...@redhat.com
> To: Ben Turner <btur...@redhat.com>, mabi <m...@protonmail.ch>
> Gluster Users <gluster-users@gluster.org>
>
> On 08/28/2017 01:57 AM, Ben Turner wrote:
>>
f-heal not working
> Local Time: August 28, 2017 10:41 AM
> UTC Time: August 28, 2017 8:41 AM
> From: ravishan...@redhat.com
> To: mabi <m...@protonmail.ch>
> Ben Turner <btur...@redhat.com>, Gluster Users <gluster-users@gluster.org>
>
> On 08/28/2017 01:2
n...@redhat.com
> To: mabi <m...@protonmail.ch>
> Ben Turner <btur...@redhat.com>, Gluster Users <gluster-users@gluster.org>
>
> Great, can you raise a bug for the issue so that it is easier to keep track
> (plus you'll be notified if the patch is posted)
Hi Ravi,
Did you get a chance to have a look at the log files I have attached in my last
mail?
Best,
Mabi
> Original Message
> Subject: Re: [Gluster-users] self-heal not working
> Local Time: August 24, 2017 12:08 PM
> UTC Time: August 24, 2017 10:08
gelogs directory ?
> Local Time: August 31, 2017 8:56 AM
> UTC Time: August 31, 2017 6:56 AM
> From: broglia...@gmail.com
> To: mabi <m...@protonmail.ch>
> Gluster Users <gluster-users@gluster.org>
>
> Hi Mabi,
> If you will not use that geo-replication volume sessi
Hi Aravinda,
Very nice initiative, thank you very much! As a small recommendation it would
be nice to have a "nagios/icinga" mode, maybe through a "-n" parameter, which
would do the health check and output the status in a nagios/icinga-compatible
format. As such this tool could be directly used
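A hypothetical wrapper along those lines, following the nagios exit-code
convention (0 OK, 1 WARNING, 2 CRITICAL; volume name is a placeholder):
#!/bin/sh
# sum the "Number of entries:" counters across all bricks
ENTRIES=$(gluster volume heal myvolume info | awk '/Number of entries:/ {s += $NF} END {print s + 0}')
if [ "$ENTRIES" -eq 0 ]; then
    echo "OK - no entries pending heal"; exit 0
else
    echo "WARNING - $ENTRIES entries pending heal"; exit 1
fi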
right now 3 unsynched
files on my arbiter node like I used to do before upgrading. This problem
started since I upgraded to 3.12.7...
Thank you very much in advance for your advise.
Best regards,
Mabi
‐‐‐ Original Message ‐‐‐
On April 9, 2018 2:31 PM, Ravishankar N <ravis
>
> On 05/15/2018 12:38 PM, mabi wrote:
>
> > Dear all,
> >
> > I have upgraded my replica 3 GlusterFS cluster (and clients) last Friday
> > from 3.12.7 to 3.12.9 in order to fix this bug but unfortunately I notice
> > that I still have exactly the same pro
Hello,
I just wanted to ask if you had time to look into this bug I am encountering,
and whether there is anything else I can do.
For now, in order to get rid of these 3 unsynched files, shall I use the same
method that was suggested to me in this thread?
Thanks,
Mabi
‐‐‐ Original Message
directories, I
don't know if this is relevant or not but thought I would just mention it.
‐‐‐ Original Message ‐‐‐
On May 23, 2018 9:25 AM, Ravishankar N <ravishan...@redhat.com> wrote:
>
>
> On 05/23/2018 12:47 PM, mabi wrote:
>
> > Hello,
> >
>
1613e-2ac0-48bd-8ace-f2f723f3796c/2016.03.15 AVB_Photovoltaik-Versicherung
2013.pdf), client:
nextcloud.domain.com-7972-2018/05/10-20:31:46:163206-myvol-private-client-2-0-0,
error-xlator: myvol-private-posix [Directory not empty]
Best regards,
Mabi
‐‐‐ Original Message ‐‐‐
On May 17,
Hi,
In the past I was using geo-replication but unconfigured it on my two volumes
by using:
gluster volume geo-replication ... stop
gluster volume geo-replication ... delete
Now I found out that I still have some old files in /var/lib/misc/glusterfsd
belonging to my two volumes which were
Hello,
I am running GlusterFS 3.10.7 and just noticed by doing a "gluster volume quota
list" that my quotas on that volume are broken. The command returns
no output and no errors, but by looking in /var/log/glusterfs/cli.log I found
the following errors:
[2018-02-09 19:31:24.242324] E
Would anyone be able to help me fix my quotas again?
Thanks
Original Message
On February 9, 2018 8:35 PM, mabi <m...@protonmail.ch> wrote:
>Hello,
>
> I am running GlusterFS 3.10.7 and just noticed by doing a "gluster volume
> quota list" t
owt...@redhat.com> wrote:
>Hi,
>
> A part of the log won't be enough to debug the issue.
> Need the whole log messages till date.
> You can send it as attachments.
>
> Yes the quota.conf is a binary file.
>
> And I need the volume status output too.
>
> On Tue, Feb 13, 2