Hi WK,
There are a few patches [1] that are still undergoing review. It
would be best to wait a bit longer before trying it out. If you
are interested in testing, I'll be happy to let you know once they get
merged.
[1] https://review.gluster.org/#/c/20095/,
A couple of us have seen https://bugzilla.redhat.com/show_bug.cgi?id=1593826 on
fuse mounts. It seems to be present in 3.12.9 and later, on the client side. The
servers seem fine; it looks like a client-side leak to me. Running client 3.12.8
or .6 against some 3.12.11 servers shows no problems.
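In case it helps anyone reproduce this, a crude way to watch the client's
footprint grow over time (a minimal sketch; the pgrep pattern assumes a single
glusterfs fuse mount on the host):

  # log the resident set size of the fuse client every 5 minutes
  pid=$(pgrep -f 'glusterfs.*fuse' | head -n1)
  while sleep 300; do
      printf '%s %s KiB\n' "$(date +%F.%T)" "$(ps -o rss= -p "$pid")"
  done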
When upgrading oVirt to use Gluster 3.12 I am experiencing memory leaks, and
every week I have to put hosts into maintenance and activate them again to free
the memory. I still have this issue and am hoping for a bug fix in an upcoming
release. I recall a Gluster bug already open for this.
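For the bug report, a statedump from a leaking client is usually the most
useful artifact: glusterfs writes one to /var/run/gluster/ when it receives
SIGUSR1. A minimal sketch, assuming a single fuse client process and the
default dump directory:

  # ask the fuse client for a statedump; compare two dumps taken a few
  # days apart to see which allocation pools grow
  pid=$(pgrep -f 'glusterfs.*fuse' | head -n1)
  kill -USR1 "$pid"
  sleep 2
  ls -t /var/run/gluster/ | head -n1    # newest dump file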
On Aug 2, 2018 18:02,
Yes, you should file a bug to track this issue and to share information.
I would also like to have the logs: whatever is in /var/log/messages, and
especially the mount log (named something like mnt.log).
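If it helps, something along these lines bundles those files for attaching to
the bug (a sketch; the mount log lives under /var/log/glusterfs/ and is named
after the mount point, so mnt.log here is an assumption):

  # bundle the client mount log and syslog for the bug report
  tar czf gluster-client-logs.tar.gz \
      /var/log/glusterfs/mnt.log /var/log/messages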
The following are the points I would like to bring to your notice:
1 - Are you sure that all
Just wondering if anyone else is running into the same behavior with the
disperse volumes described below, and what I might be able to do about it.
I am using Ubuntu 18.04 LTS on Odroid HC-2 hardware (armhf) and have
installed Gluster 4.1.2 via PPA. I have 12 member nodes, each with a single
brick. I
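For context, a dispersed volume across 12 single-brick nodes is created along
these lines (the hostnames and the 8+4 data/redundancy split are hypothetical,
since the message is cut off before the volume details):

  # example: 12-brick dispersed volume, 8 data + 4 redundancy
  gluster volume create dispvol disperse 12 redundancy 4 \
      node{1..12}:/data/brick1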
On 02.08.2018 18:40, Ashish Pandey wrote:
I think it should be rephrased a little bit -
"When one brick is up: Fail FOP with EIO."
should be
"When only one brick is up out of 3 bricks: Fail FOP with EIO."
So we have 2 data bricks and one thin arbiter brick. Out of these 3 bricks if
only one brick is UP then we will fail IO.
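For anyone following along, such a volume is created like this on the CLI
(hostnames are hypothetical; the syntax follows the thin-arbiter documentation
linked later in this thread):

  # replica 2 volume with a thin-arbiter brick as the tie-breaker
  gluster volume create ta-vol replica 2 thin-arbiter 1 \
      host1:/bricks/b1 host2:/bricks/b2 ta-host:/bricks/ta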
---
Could you check for any rsync processes hung on the master or the slave?
On Thu, Aug 2, 2018 at 11:18 AM, Marcus Pedersén wrote:
> Hi Kotresh,
> rsync version 3.1.2 protocol version 31
> All nodes run CentOS 7, updated the last couple of days.
>
> Thanks
> Marcus
Hi Kotresh,
I get the following and then it hangs:
strace: Process 5921 attached
write(2, "rsync: link_stat \"/tmp/gsyncd-au"..., 12811
When sync is running I can see rsync with geouser on the slave node.
Regards
Marcus
Marcus Pedersén
Systemadministrator
On both active master nodes there is an rsync process. As in:
root 5921 0.0 0.0 115424 1176 ? S Aug01 0:00 rsync -aR0
--inplace --files-from=- --super --stats --numeric-ids --no-implied-dirs
--xattrs --acls . -e ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no
-i
Cool, just check whether they are hung by any chance with the following command:
# strace -f -p 5921
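If there are several workers, the same check can be looped over every rsync
that geo-replication spawned (a sketch; it assumes the pgrep pattern matches
only the gsyncd rsync workers):

  # attach briefly to each rsync worker; a hung one prints a single
  # blocked syscall and then nothing more
  for pid in $(pgrep -f 'rsync -aR0'); do
      echo "== PID $pid =="
      timeout 5 strace -f -p "$pid" 2>&1 | head -n 5
  done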
On Thu, Aug 2, 2018 at 12:25 PM, Marcus Pedersén wrote:
> On both active master nodes there is an rsync process. As in:
>
> root 5921 0.0 0.0 115424 1176 ? S Aug01 0:00 rsync
>
Hi Kotresh,
rsync version 3.1.2 protocol version 31
All nodes run CentOS 7, updated the last couple of days.
Thanks
Marcus
Marcus Pedersén
Systemadministrator
Interbull Centre
Sent from my phone
On 2 Aug 2018 06:13, Kotresh Hiremath wrote:
On 01.08.2018 22:04, Amar Tumballi wrote:
This recently added document talks about some of the technicalities of
the feature:
https://docs.gluster.org/en/latest/Administrator%20Guide/Thin-Arbiter-Volumes/
Please go through and see if it answers your questions.
-Amar
Hello!
I have a question: