Hi Marcus,
Geo-rep had a few important fixes in 4.1.3. Is it possible to upgrade and
check whether the issue still occurs?
Thanks,
Kotresh HR
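After upgrading, the installed version and the health of the geo-rep session can be re-checked with the standard CLI. A minimal sketch, assuming a default glusterfs install; no volume names are needed for the session-wide status commands:

```shell
# Confirm the running version after the upgrade to 4.1.3.
gluster --version | head -n1

# Re-check geo-replication session health (all sessions).
gluster volume geo-replication status

# Per-worker detail (crawl status, last synced time, etc.).
gluster volume geo-replication status detail
```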
On Sat, Sep 1, 2018 at 5:08 PM, Marcus Pedersén
wrote:
> Hi again,
>
> I found another problem on the other master node.
>
> The node toggles
On Fri, Aug 31, 2018 at 1:18 PM Hu Bert wrote:
> Hi Pranith,
>
> I just wanted to ask if you were able to get any feedback from your
> colleagues :-)
>
Sorry, I didn't get a chance to. I am working on a customer issue which is
taking away cycles from any other work. Let me get back to you once
Hey,
We need some more information to debug this.
I think you forgot to send the output of 'gluster volume info '.
Can you also provide the bricks, shd and glfsheal logs as well?
In the setup how many peers are present? You also mentioned that "one of
the file servers have two processes for each
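The requested information can typically be gathered in one pass. A sketch, assuming the default log locations under /var/log/glusterfs; "myvol" is a placeholder volume name:

```shell
# Volume configuration and peer count (both were asked for above).
gluster volume info myvol > volume-info.txt
gluster peer status > peer-status.txt

# Brick, self-heal daemon (shd), and glfsheal logs live here by default.
tar czf gluster-logs.tgz /var/log/glusterfs/bricks/ /var/log/glusterfs/glustershd.log /var/log/glusterfs/glfsheal-myvol.log
```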
Hi Krishna,
Indexing is a feature used by the hybrid crawl, and it only makes the crawl
faster. It has nothing to do with missing data sync.
Could you please share the complete log file of the session where the issue
is encountered ?
Thanks,
Kotresh HR
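The session log Kotresh is asking for can be found on the master. A sketch, assuming default paths; the session directory name is derived from the master volume, slave host, and slave volume, and the names below are placeholders:

```shell
# Geo-rep session logs live in a per-session directory on the master.
ls /var/log/glusterfs/geo-replication/
# e.g. /var/log/glusterfs/geo-replication/mastervol_slavehost_slavevol/gsyncd.log

# Dumping the session config also shows the exact log paths in use.
gluster volume geo-replication mastervol slavehost::slavevol config
```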
On Mon, Sep 3, 2018 at 9:33 AM, Krishna Verma
Hi Kotresh/Support,
Requesting your help to get this fixed. My slave is not syncing with the master.
When I restart the session after turning indexing off, the file does show up on
the slave, but it is blank, with zero size.
At master: file size is 5.8 GB.
[root@gluster-poc-noida distvol]#
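The restart sequence Krishna describes would look roughly like the sketch below (volume and host names are placeholders). Note that gluster normally refuses to disable indexing while a geo-rep session exists, and doing so forces a full re-crawl rather than fixing missing data, which is why it is discouraged:

```shell
# Rough reconstruction of the sequence described above.
gluster volume geo-replication mastervol slavehost::slavevol stop

# Discouraged: disabling indexing forces a full re-crawl on restart.
gluster volume set mastervol geo-replication.indexing off

gluster volume geo-replication mastervol slavehost::slavevol start
```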
We've got an odd problem where clients are blocked from writing to Gluster
volumes until the first node of the Gluster cluster is rebooted.
I suspect I've either configured something incorrectly with the arbiter /
replica configuration of the volumes, or there is some sort of bug in the
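For blocked writes on a replica/arbiter volume, the usual first checks are brick availability and pending heals. A minimal sketch, with "myvol" as a placeholder volume name:

```shell
# Are all bricks and the self-heal daemon online?
gluster volume status myvol

# Pending heals per brick; persistent entries can indicate a quorum or
# arbiter problem that would block writes.
gluster volume heal myvol info

# Entries in split-brain, if any.
gluster volume heal myvol info split-brain
```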
Hi all,
We're investigating geo-replication and noticed that when using non-root
geo-replication, the sync user cannot access various gluster commands, e.g.
one of the session commands ends up running this on the slave:
Popen: command returned error cmd=/usr/sbin/gluster
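Non-root geo-replication requires a mountbroker setup on the slave so that the unprivileged sync user can trigger mounts without running gluster commands directly. A sketch of that slave-side setup, where "geogroup", "geoaccount", and "slavevol" are placeholder names:

```shell
# Run on the slave nodes: create the mountbroker root for group "geogroup".
gluster-mountbroker setup /var/mountbroker-root geogroup

# Allow the unprivileged sync user to access the slave volume.
gluster-mountbroker add slavevol geoaccount

# Verify the configuration.
gluster-mountbroker status

# Restart glusterd on the slave nodes for the change to take effect.
```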
Hello,
I wanted to report that I had this morning a similar issue on another server
where a few PHP-FPM processes get blocked on different GlusterFS volume mounted
through a FUSE mount. This GlusterFS volume has no quota enabled, so it might
not be quota related after all.
Here would be the
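When FUSE clients hang like this, a statedump of the client process often shows which locks or calls the blocked processes are waiting on. A sketch, assuming default dump paths; "myvol" is a placeholder, and the pidof line simply picks one client glusterfs process:

```shell
# Sending SIGUSR1 to the glusterfs FUSE client writes a statedump,
# by default under /var/run/gluster/.
kill -USR1 "$(pidof glusterfs | awk '{print $1}')"
ls /var/run/gluster/

# Brick-side statedumps for the same volume.
gluster volume statedump myvol
```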