Hi David,

With this feature enabled, the consistent time attributes (mtime, ctime,
atime) are maintained in an xattr on the file. Gluster will no longer use
the time attributes from the backend filesystem; instead they are served
from the file's xattr, which is consistent across the replica set.
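
On a brick, that xattr can be inspected directly. A sketch, run as root on
one of the brick nodes; the xattr name (trusted.glusterfs.mdata) and the
file path after the brick root are my assumptions -- check the docs for
your release if the name differs:

```shell
# Inspect the consistent-time xattr on a brick copy of a file.
# Brick path is taken from this thread's volume info; the trailing
# file path and the xattr name are illustrative assumptions.
getfattr -n trusted.glusterfs.mdata -e hex \
    /gluster/brick1/glusterbrick/path/to/file
```

If the feature is working, the same xattr value should appear on every
brick in the replica set.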

Thanks,
Kotresh

On Thu, Aug 16, 2018 at 12:28 PM, David Spisla <[email protected]> wrote:

> Hello Kotresh,
> it's no problem for me that the atime will be updated; what is important
> is a consistent mtime and ctime on the bricks of my replica set.
> I have turned on both options you mentioned. After that I created a file
> on my FUSE mount (mounted with noatime). But across the bricks of the
> replica set
> the mtime and ctime are not consistent. What about the brick mount? Is
> there a special mount option?
> I have a four-node cluster and on each node there is only one brick. See
> below all my volume options:
>
> Volume Name: test1
> Type: Replicate
> Volume ID: e6576010-d9e3-4a98-bcfd-d4a452e92198
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 4 = 4
> Transport-type: tcp
> Bricks:
> Brick1: davids-c1-n1:/gluster/brick1/glusterbrick
> Brick2: davids-c1-n2:/gluster/brick1/glusterbrick
> Brick3: davids-c1-n3:/gluster/brick1/glusterbrick
> Brick4: davids-c1-n4:/gluster/brick1/glusterbrick
> Options Reconfigured:
> storage.ctime: on
> features.utime: on
> performance.client-io-threads: off
> nfs.disable: on
> transport.address-family: inet
> user.smb: disable
> features.read-only: off
> features.worm: off
> features.worm-file-level: on
> features.retention-mode: relax
> network.ping-timeout: 10
> features.cache-invalidation: on
> features.cache-invalidation-timeout: 600
> performance.nl-cache: on
> performance.nl-cache-timeout: 600
> client.event-threads: 32
> server.event-threads: 32
> cluster.lookup-optimize: on
> performance.stat-prefetch: on
> performance.cache-invalidation: on
> performance.md-cache-timeout: 600
> performance.cache-samba-metadata: on
> performance.cache-ima-xattrs: on
> performance.io-thread-count: 64
> cluster.use-compound-fops: on
> performance.cache-size: 512MB
> performance.cache-refresh-timeout: 10
> performance.read-ahead: off
> performance.write-behind-window-size: 4MB
> performance.write-behind: on
> storage.build-pgfid: on
> auth.ssl-allow: *
> client.ssl: off
> server.ssl: off
> changelog.changelog: on
> features.bitrot: on
> features.scrub: Active
> features.scrub-freq: daily
> cluster.enable-shared-storage: enable
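
One way to confirm the inconsistency is to compare the times of the same
file on every brick. A sketch, using the hostnames and brick path from the
volume info above; the file name is illustrative and this assumes ssh
access to each node:

```shell
# Compare mtime (%y) and ctime (%z) of one file's brick copies.
# Hostnames and brick path are from this thread; file.txt is a placeholder.
for host in davids-c1-n1 davids-c1-n2 davids-c1-n3 davids-c1-n4; do
  ssh "$host" stat -c '%n %y %z' /gluster/brick1/glusterbrick/file.txt
done
```

With the feature working, all four lines should report identical mtime and
ctime values.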
>
> Regards
> David Spisla
>
> On Wed, Aug 15, 2018 at 8:15 PM, Kotresh Hiremath Ravishankar <
> [email protected]> wrote:
>
>> Hi David,
>>
>> The feature provides consistent time attributes (atime, ctime,
>> mtime) across the replica set.
>> It is enabled with the following two options:
>>
>> gluster vol set <vol> features.utime on
>> gluster vol set <vol> storage.ctime on
>>
>> The feature currently does not honour time-related mount options
>> such as 'noatime'.
>> So even though the volume is mounted with noatime, it will still update
>> atime with this feature
>> enabled.
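
Concretely, even a FUSE mount like the following will still see atime
updates while the feature is on. The hostname and volume name are taken
from this thread's setup; the mount point is illustrative:

```shell
# noatime is accepted but not honoured by the consistent-time feature:
# atime is still updated in the xattr. Names are from this thread's setup.
mount -t glusterfs -o noatime davids-c1-n1:/test1 /mnt/test1
```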
>>
>> Thanks,
>> Kotresh HR
>>
>> On Wed, Aug 15, 2018 at 3:51 PM, David Spisla <[email protected]> wrote:
>>
>>> Dear Gluster Community,
>>> in the "Standalone" chapter, point 3 of the release notes for 4.1.0:
>>> https://docs.gluster.org/en/latest/release-notes/4.1.0/
>>>
>>> there is an introduction to the new utime feature. What kinds of options
>>> are not allowed if I want to mount a volume? "noatime,realatime" is
>>> mentioned there. Does the second mean "relatime"? I have never heard of
>>> "realatime". Is there any recommendation for the mount options?
>>>
>>> Regards
>>> David Spisla
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> [email protected]
>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>
>>
>>
>> --
>> Thanks and Regards,
>> Kotresh H R
>>
>


-- 
Thanks and Regards,
Kotresh H R
