I have not tested with -o attribute-timeout=0 -o entry-timeout=0, but I
think setting those two options to 0 will have an impact on performance.
Turning consistent-metadata on will also have an impact on performance, so
I'd rather keep consistent-metadata off and use the ctime feature to solve
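For reference, a minimal sketch of the FUSE mount being discussed (the host, volume, and mount-point names are assumptions, not from the thread); setting both timeouts to 0 disables client-side attribute and dentry caching, which is why it costs performance:

```shell
# Mount a gluster volume with client-side attribute/entry caching disabled.
# "server1", "export", and "/mnt/export" are example names.
mount -t glusterfs \
    -o attribute-timeout=0 \
    -o entry-timeout=0 \
    server1:/export /mnt/export
```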
Hi,
I know this may not be your intended use of the ctime feature; however,
our dilemma is that
1. the product needs to support the date-change scenario.
2. if we disable the ctime feature, we hit the tar issue.
3. if we set consistent-metadata to on to work around this tar issue, there
is
The whole ctime feature relies on timestamps provided by clients, which are
assumed to be time-synchronized. This patch brings in the time from the
server to compare against the time sent by the client.
As Amar mentioned, this doesn't fit well into the scheme of how ctime is
designed. Definitely keeping it optional and
Ok, thanks for your feedback!
I will run a local test to verify this patch first.
cynthia
From: Amar Tumballi
Sent: March 17, 2020 13:18
To: Zhou, Cynthia (NSB - CN/Hangzhou)
Cc: Kotresh Hiremath Ravishankar ; Gluster Devel
Subject: Re: [Gluster-devel] could you help to check about a glusterfs
On Tue, Mar 17, 2020 at 10:18 AM Zhou, Cynthia (NSB - CN/Hangzhou) <
cynthia.z...@nokia-sbell.com> wrote:
> Hi glusterfs expert,
>
> Our product needs to tolerate the date being changed to the future and
> then changed back.
>
> How about a change like this?
>
Hi glusterfs expert,
Our product needs to tolerate the date being changed to the future and then
changed back.
How about a change like this?
https://review.gluster.org/#/c/glusterfs/+/24229/1/xlators/storage/posix/src/posix-metadata.c
when the time is changed to the future and then back, it should still be able to update
Hi,
One more question: I find each client shows the same future timestamp.
Where are those timestamps from, since they differ from any timestamp
stored on the bricks? And after I modify files from the clients, it remains
the same.
[root@mn-0:/home/robot]
# stat /mnt/export/testfile
File:
Hi,
This is an abnormal test case; however, when it happens it has a big impact
on the apps using those files. And this cannot be recovered automatically
unless some xlators are disabled, which I think is unacceptable for user apps.
cynthia
From: Kotresh Hiremath Ravishankar
Sent: March 12, 2020
All the perf xlators depend on time (mostly mtime, I guess). In my setup,
only quick-read was enabled, and hence disabling it worked for me.
All perf xlators need to be disabled to make it work correctly. But I
still fail to understand how normal this kind of workload is.
Thanks,
Kotresh
On Thu,
When both quick-read and performance.io-cache are disabled, everything is
back to normal.
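The combination that restored correct behavior could be applied with the gluster CLI like this ("export" is the volume name used elsewhere in the thread; a CLI fragment, not runnable without a live cluster):

```shell
# Disable the two perf xlators implicated in the stale-read behavior.
gluster volume set export performance.quick-read off
gluster volume set export performance.io-cache off

# Verify the settings took effect.
gluster volume get export performance.quick-read
gluster volume get export performance.io-cache
```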
I attached the glusterfs trace log taken with only quick-read disabled and
performance.io-cache still on.
When executing the command “cat /mnt/export/testfile”, can you help find
why it still fails to show
From my local test, this issue only goes away when both features.ctime and
ctime.noatime are disabled.
Or, after doing echo 3 > /proc/sys/vm/drop_caches each time some client
changes the file, the cat command can show the correct data (the same as on
the brick).
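The drop_caches workaround described above, as a command sequence (requires root; the mount path is the one used in this thread; a sketch, not runnable without the gluster setup):

```shell
# After another client has modified the file, flush dirty pages and drop
# the kernel page/dentry/inode caches on the reading client, then re-read.
sync
echo 3 > /proc/sys/vm/drop_caches
cat /mnt/export/testfile    # should now match the data on the brick
```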
cynthia
From: Zhou, Cynthia (NSB - CN/Hangzhou)
Sent:
Hi,
Thanks for your response!
I’ve tried disabling quick-read:
[root@mn-0:/home/robot]
# gluster v get export all| grep quick
performance.quick-read off
performance.nfs.quick-read off
However, this issue still exists: two clients see different contents.
It seems
Hi,
I figured out what's happening. The issue is that the file has its c|a|m
times set in the future (the file was created after the date was set to
+30 days). This was done from client-1. On client-2, which has the correct
date, when data is appended it doesn't update the mtime and ctime, because both the mtime and