Answer inline.
On Tue, Sep 11, 2018 at 4:19 PM, Kotte, Christian (Ext) <
christian.ko...@novartis.com> wrote:
> Hi all,
>
>
>
> I use glusterfs 4.1.3 non-root user geo-replication in a cascading setup.
> The gsyncd.log on the master is fine, but I have some strange changelog
> warnings and
We'll be in #gluster-meeting on IRC, agenda lives in:
https://bit.ly/gluster-community-meetings
No agenda items yet, maybe you have some?
- amye
--
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Hello list. I had happily been sharing a Gluster volume with Samba using
vfs_gluster, but it has recently stopped working correctly. I think this may
have happened after updating Samba from 4.6.2 to 4.7.1 (as part of updating
CentOS 7.4 to 7.5). The shares suffer a variety of odd issues,
Hello,
I have a 3-node cluster running two three-way distributed-replicate volumes for
oVirt, and three new nodes that I'd like to add. I've unfortunately not had
time to closely follow this list over the last few months, and am having
trouble finding any status on the corruption issue with
Hi Milind,
I do not know if this will help, but using ausearch on one of the master nodes
gives this:
time->Tue Sep 11 03:28:56 2018
type=PROCTITLE msg=audit(1536629336.548:1202535):
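The msg=audit(...) field in output like the above embeds a Unix epoch timestamp that can be cross-checked against the human-readable "time->" header. A minimal sketch of narrowing the search and decoding that timestamp (the -c process name is an assumption; substitute whatever comm= value appears in your events):

```shell
# Show SELinux AVC denials from today for a given command name.
# 'glusterfsd' is an assumption; adjust to the process seen in your denials.
ausearch -m AVC -ts today -c glusterfsd

# msg=audit(1536629336.548:1202535) embeds an epoch timestamp; decode it
# with date(1). 01:28:56 UTC matches the 03:28:56 local time above (UTC+2).
date -u -d @1536629336
```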
Hi all,
I use glusterfs 4.1.3 non-root user geo-replication in a cascading setup. The
gsyncd.log on the master is fine, but I have some strange changelog warnings
and errors on the interim master:
gsyncd.log
…
[2018-09-11 10:38:35.575464] I [master(worker /bricks/brick1/brick):1460:crawl]
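Since the worker log mixes INFO lines like the one above with the warnings and errors in question, a small filter helps when skimming it. A minimal sketch, assuming the stock log location and the "[timestamp] LEVEL [...]" line format shown above:

```shell
# Print only warning (W) and error (E) lines from the geo-replication logs.
# The path below is the default location and may differ on your install.
grep -hE '^\[[0-9-]+ [0-9:.]+\] [WE] ' \
    /var/log/glusterfs/geo-replication/*/gsyncd.log 2>/dev/null || true

# Session health is also worth checking from the master side, e.g.:
#   gluster volume geo-replication <MASTERVOL> <user>@<slave>::<SLAVEVOL> status detail
```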
Hi Hari,
thank you very much for the explanation and for your important support.
Best regards,
Mauro
> On 11 Sep 2018, at 10:49, Hari Gowtham wrote:
>
> Hi Mauro,
>
> It was because the quota crawl takes some time and it was working on it.
> When we ran the fix-issues
Hi Mauro,
It was because the quota crawl takes some time, and it was still working on it.
When we ran the fix-issues, it made changes to the backend and performed a lookup.
It takes time for the whole thing to be reflected in the quota list command.
Earlier, it didn't reflect the changes because the crawl was still running. So this is
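For reference, the command whose output lags behind the crawl is the quota list; re-running it after the crawl and lookups have finished shows the updated accounting (VOLNAME is a placeholder):

```shell
# Used/Available here reflect the backend accounting only once the
# crawl triggered by the fix has completed.
gluster volume quota <VOLNAME> list
```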
Hi,
Since I feared that the logs would fill up the partition (again), I
checked the systems daily and finally found the reason: the glusterfs
process on the client runs out of memory and gets killed by the OOM killer
after about four days. Since rsync runs for a couple of days longer until it
ends, I never
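One way to confirm the kill and watch the leak build up before it hits is to check the kernel log and sample the client's resident set size periodically. A hedged sketch (the process match is an assumption; on a FUSE mount the client process is typically named glusterfs):

```shell
# Look for the kernel's OOM-killer record (journalctl on systemd hosts):
journalctl -k 2>/dev/null | grep -iE 'out of memory|killed process' || true

# Sample the client's resident set size; run this periodically (e.g. from
# cron) to see the growth over the ~4 days described above.
pid=$(pgrep -o -x glusterfs || true)
if [ -n "$pid" ]; then
    ps -o pid=,rss=,etime= -p "$pid"
fi
```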
Marcus,
Is it possible to send over the SELinux errors that you encountered before
turning it off?
We could inspect and get the SELinux issues fixed as an aside.
On Mon, Sep 10, 2018 at 4:43 PM, Marcus Pedersén wrote:
> Hi Kotresh,
>
> I have been running 4.1.3 from the end of August.
>
>