Hi Pat,
Do you still see the problem of missing files? If yes, please provide the
following:
1. gluster volume info
2. ls -l of the directory containing the missing files from the mount point
and from the individual bricks.
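The comparison asked for in step 2 can also be scripted. A minimal sketch (the paths and the `missing_on_mount` helper are hypothetical, and throwaway temp directories stand in for the real mount point and bricks; in practice you would run `ls -l` on the actual directories as requested above):

```python
import os
import tempfile

def missing_on_mount(mount_dir, brick_dirs):
    """Names present on some brick but not visible through the mount."""
    on_mount = set(os.listdir(mount_dir))
    on_bricks = set()
    for brick in brick_dirs:
        on_bricks |= set(os.listdir(brick))
    return sorted(on_bricks - on_mount)

# Demo with throwaway directories standing in for the mount and one brick:
mount = tempfile.mkdtemp()
brick1 = tempfile.mkdtemp()
open(os.path.join(mount, "a.txt"), "w").close()
open(os.path.join(brick1, "a.txt"), "w").close()
open(os.path.join(brick1, "b.txt"), "w").close()   # exists on the brick only
print(missing_on_mount(mount, [brick1]))           # -> ['b.txt']
```

Note this only compares names; the `ls -l` output requested above additionally shows permissions and sizes, which matter for spotting DHT linkto files.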
Regards,
Nithya
On Thu, 29 Aug 2019 at 18:57, Pat Riehecky wrote:
I've observed an interesting behavior in Gluster 5.6. I had a file
which was placed on an incorrect subvolume (apparently by the rebalancing
process). I could stat and read the file just fine over the FUSE mount
point, with this entry appearing in the log file:
[2019-09-18 13:00:04.484683] D [MSGID: 0]
Thanks for the quick answer!
I think I can reduce the data on the "full" bricks, which solves the problem temporarily.
The thing is that the behavior changed from 3.12 to 6.5: 3.12 didn't have
problems with almost-full bricks, so I thought everything was fine. Then, after
the upgrade, I ran into
Dear all,
I have a situation where "mkdir" on a client produces stale file handles.
This happened after upgrading from 3.12 to 6.5.
I believe I found the reason for it:
6.5 (but not 3.12) checks if there is space left on the device before doing a
"mkdir", but calculates the "fullness" in