On 08/15/2018 11:07 PM, Pablo Schandin wrote:
I found another log that I wasn't aware of in
/var/log/glusterfs/brick, that is the mount log; I had confused the log
files. In this file I see a lot of entries like this one:
[2018-08-15 16:41:19.568477] I [addr.c:55:compare_addr_and_update]
Since the issue seems to be critical and the lock on the master branch is
no longer held, I will try to do a 3.12 release ASAP.
--
Jiffin
On Tuesday 14 August 2018 05:48 PM, Nithya Balachandran wrote:
I agree as well. This is a bug that is impacting users.
On 14 August 2018 at 16:30,
Hi again Sunny,
Just a bit curious if you find anything in the logs that is useful and can help
me get the geo-replication running.
Many thanks in advance!
Regards
Marcus
From: gluster-users-boun...@gluster.org on behalf of
Marcus Pedersén
Sent: 13
Is this 'or' of atime settings also present in the 3.10/3.12 versions?
If it is set off in gluster but on in the mount, will atime be updated?
On August 15, 2018 2:15:17 PM EDT, Kotresh Hiremath Ravishankar
wrote:
>Hi David,
>
>The feature is to provide consistent time attributes (atime, ctime,
>mtime)
>across
I am using gluster to host KVM/QEMU images. I am seeing an intermittent
issue where access to an image will hang. I have to do a lazy dismount of
the gluster volume in order to break the lock and then reset the impacted
virtual machine.
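For reference, the recovery described above can be sketched as the following commands (mount point, server, volume, and domain names are all placeholders, not taken from the original report):

```shell
# lazy-unmount the hung glusterfs client mount so blocked I/O gets released
umount -l /mnt/gv1                      # mount point is a placeholder

# remount once the volume is reachable again
mount -t glusterfs server1:/gv1 /mnt/gv1

# hard-reset the impacted guest, e.g. via libvirt
virsh reset vm01                        # domain name is a placeholder
```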
It happened again today and I caught the events below in
Hi David,
The feature is to provide consistent time attributes (atime, ctime, mtime)
across replica set.
The feature is enabled with following two options.
gluster vol set <volname> utime on
gluster vol set <volname> ctime on
The feature currently does not honour mount options related to time
attributes, such as
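Written out fully, the two settings above would look like this; the volume name `gv0` is a placeholder, and the long-form `features.` option names are my assumption of the spelling used in 4.1:

```shell
# assuming a volume named gv0 (placeholder)
gluster volume set gv0 features.utime on
gluster volume set gv0 features.ctime on

# check the current values
gluster volume get gv0 features.ctime
```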
I found another log that I wasn't aware of in /var/log/glusterfs/brick,
that is the mount log; I had confused the log files. In this file I see a
lot of entries like this one:
[2018-08-15 16:41:19.568477] I [addr.c:55:compare_addr_and_update]
0-/mnt/brick1/gv1: allowed = "172.20.36.10", received
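That entry comes from the brick-side address authorization check, so it is logged for each client connection. If you want to inspect or restrict which addresses are allowed, `auth.allow` is the relevant volume option (volume name and address pattern below are placeholders):

```shell
# show the current allow list for the volume
gluster volume get gv1 auth.allow

# restrict access to a subnet (wildcard pattern is an example)
gluster volume set gv1 auth.allow "172.20.36.*"
```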
On Wed, 2018-08-15 at 13:42 +0800, Pui Edylie wrote:
> Hi Karli,
>
> I think Alex is right in regards with the NFS version and state.
>
> I am only using NFSv3 and the failover is working per expectation.
OK, so I've re-run the test and it goes like this:
1) Start copy loop[*]
2) Power
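A minimal sketch of the copy loop in step 1 (paths, file size, and pass count are placeholders; in the real test the destination sits on the NFS mount and the loop runs until the failover kicks in):

```shell
SRC=/tmp/copyloop_src
DST=/tmp/copyloop_dst   # in the real test this would live on the NFS mount

# create a 4 MiB test file
dd if=/dev/urandom of="$SRC" bs=1M count=4 2>/dev/null

# copy repeatedly and verify each pass, so a hang or corruption shows up
for pass in 1 2 3; do
    cp "$SRC" "$DST"
    if cmp -s "$SRC" "$DST"; then
        echo "pass $pass ok"
    else
        echo "pass $pass FAILED"
        break
    fi
done
```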
Dear Gluster Community,
in the "Standalone" chapter, point 3 of the release notes for 4.1.0,
https://docs.gluster.org/en/latest/release-notes/4.1.0/
there is an introduction to the new utime feature. What kind of options are
not allowed if I want to mount a volume? There is "noatime,relatime"
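For context, these atime settings are ordinarily passed at mount time; a native glusterfs mount with such an option would look like this (server, volume, and mount point are placeholders):

```shell
mount -t glusterfs -o noatime server1:/gv1 /mnt/gv1

# or the equivalent /etc/fstab entry:
# server1:/gv1  /mnt/gv1  glusterfs  defaults,noatime  0 0
```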
Hello again :-)
The self heal must have finished, as there are no log entries in the
glustershd.log files anymore. According to munin, disk latency (average
I/O wait) has gone down to 100 ms, and disk utilization has gone down
to ~60%, both on all servers and hard disks.
But now system load on 2