Hi,

I have a replicate x3 volume with the following config:

```
Volume Name: gvol1
Type: Replicate
Volume ID: 384acec2-5b5f-40da-bf0e-5c53d12b3ae2
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: vm0:/srv/brick1/gvol1
Brick2: vm1:/srv/brick1/gvol1
Brick3: vm2:/srv/brick1/gvol1
Options Reconfigured:
storage.ctime: on
features.utime: on
storage.fips-mode-rchecksum: on
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
```
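(The config above is the output of `gluster volume info gvol1`.)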

This volume was actually created on v3.8, but has since been upgraded (version 
by version) to v4.1.4, and it's working fine for the most part:

```
Client connections for volume gvol1
----------------------------------------------
Brick : vm0:/srv/brick1/gvol1
Clients connected : 6
Hostname             BytesRead    BytesWritten    OpVersion
--------             ---------    ------------    ---------
10.X.0.5:49143         2096520         2480212        40100
10.X.0.6:49141           14000           12812        40100
10.X.0.4:49134          258324          333456        40100
10.X.0.4:49141       565372566      1643447105        40100
10.X.0.5:49145       491262003       291782440        40100
10.X.0.6:49139       482629418       328228888        40100
----------------------------------------------
Brick : vm1:/srv/brick1/gvol1
Clients connected : 6
Hostname             BytesRead    BytesWritten    OpVersion
--------             ---------    ------------    ---------
10.X.0.6:49146          658516          508904        40100
10.X.0.5:49133         4142848         7139858        40100
10.X.0.4:49138            4088            3696        40100
10.X.0.4:49140       471405874       284488736        40100
10.X.0.5:49140       585193563      1670630439        40100
10.X.0.6:49138       482407454       330274812        40100
----------------------------------------------
Brick : vm2:/srv/brick1/gvol1
Clients connected : 6
Hostname             BytesRead    BytesWritten    OpVersion
--------             ---------    ------------    ---------
10.X.0.6:49133         1789624         4340938        40100
10.X.0.5:49137         3010064         3005184        40100
10.X.0.4:49143            4268            3744        40100
10.X.0.4:49139       471328402       283798376        40100
10.X.0.5:49139       491404443       293342568        40100
10.X.0.6:49140       561683906       830511730        40100
----------------------------------------------
```
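(The per-client stats above come from `gluster volume status gvol1 clients`.)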

I'm now getting a lot of warnings in the brick log file, like:

```
The message "W [MSGID: 113117] [posix-metadata.c:627:posix_set_ctime] 0-gvol1-posix: posix set mdata failed, No ctime : /srv/brick1/gvol1/.glusterfs/18/d0/18d04927-1ec0-4779-8c5b-7ebb82e4a614 gfid:18d04927-1ec0-4779-8c5b-7ebb82e4a614 [Function not implemented]" repeated 2 times between [2018-09-21 08:21:52.480797] and [2018-09-21 08:22:07.529625]
```

The warning appears for different files, but the most common one is a file that 
the Node.js application (which runs on top of the Gluster volume via a FUSE 
client, glusterfs) stats every 5 seconds to check for changes: 
https://nodejs.org/api/fs.html#fs_fs_stat_path_options_callback
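
That polling is roughly equivalent to the sketch below (illustrative only; the 
mount path is made up and the real logic lives in the superadmin code):

```
// Rough sketch of the access pattern, not the actual application code.
// The mount path below is hypothetical.
import { stat } from "fs";

const FILE = "/mnt/gvol1/applications.json";

setInterval(() => {
  stat(FILE, (err, stats) => {
    if (err) {
      console.error("stat failed:", err.message);
      return;
    }
    // The app compares mtime/size with the previous poll to decide
    // whether the file changed and needs to be re-read.
    console.log("mtime:", stats.mtime.toISOString(), "size:", stats.size);
  });
}, 5000);
```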

I think this is also related to another issue: reading that file sometimes 
returns an empty result, as the app reports:

```
2018-09-21 08:22:00 | [vm0] [nobody] sync hosts: invalid applications.json, response was empty.
```
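
To be clear about the symptom, as far as I can tell the read itself does not 
error out; the content just comes back empty now and then. Roughly (again an 
illustrative sketch, hypothetical mount path):

```
// Illustrative sketch of the symptom, not the actual application code.
// The mount path is hypothetical; the file is the applications.json
// mentioned in the log line above.
import { readFile } from "fs";

const FILE = "/mnt/gvol1/applications.json";

readFile(FILE, "utf8", (err, body) => {
  if (err) throw err; // no I/O error seems to be reported
  if (body.trim().length === 0) {
    // This is the case the app logs as
    // "invalid applications.json, response was empty."
    console.warn("empty read");
    return;
  }
  console.log("parsed", Object.keys(JSON.parse(body)).length, "entries");
});
```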

`gluster volume heal gvol1 info` reports 0 entries for all bricks.

Should I be concerned about the warning? Is it a known issue? If not, what 
could be causing the occasional empty reads, and could the two problems be 
related?

The application running on top of the cluster builds and spawns other Node.js 
applications and works mostly with small files. Do you have any optimization 
tips for that kind of workload?

FYI, it is a slightly modified version of https://github.com/totaljs/superadmin, 
run as a web farm.

Thank you so much for any help in advance,

Kind Regards,
Pedro Maia Costa





