Hi guys,
I'm seeing "Gfid mismatch detected" in the logs, but no split-brain is
indicated (4-way replica):
Brick
swir-ring8:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER.USER-HOME
Status: Connected
Total Number of entries: 22
Number of entries in heal pending: 22
Number of entries in split-brain: 0
Number
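One way to inspect the gfid directly on the brick is with getfattr (a sketch; substitute one of the 22 pending entries reported by heal info for the placeholder path):

getfattr -n trusted.gfid -e hex /__.aLocalStorages/0/0-GLUSTERs/0GLUSTER.USER-HOME/<path-within-volume>

Comparing the hex value for the same path across all four bricks shows which replica disagrees.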
Hello,
We're having an issue with a geo-replication process: it shows unusually high
CPU use and logs "Entry not present on master. Fixing gfid mismatch in
slave" errors. Can anyone help with this?
We have 3 GlusterFS replica nodes (which we'll call the master), which also push
data to a remote server
I am using an NFS mount of the gluster volume to get better performance.
On Wed, May 27, 2020 at 10:02 AM wrote:
> - # gluster --version
> glusterfs 7.5
>
> - # gluster volume status atlassian
> Status of volume: atlassian
> Gluster process TCP Port RDMA Port Online
>
I am running gluster 7.5 on CentOS 7, which does not ship with gnfs compiled in.
I had to build it from source with ./configure --enable-gnfs
--without-libtirpc [1], and then I could NFS-mount the gluster volume.
[1] https://docs.gluster.org/en/latest/Developer-guide/Building-GlusterFS/
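Roughly, the build boils down to something like this (a sketch, assuming the build dependencies from [1] are already installed; the tag is the one I believe matches 7.5):

git clone https://github.com/gluster/glusterfs.git
cd glusterfs
git checkout v7.5
./autogen.sh
./configure --enable-gnfs --without-libtirpc
make -j"$(nproc)"
make install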
On Fri, May 29,
Hi,
I have GridFTP + a network speedup solution + GlusterFS as the
file system component in a disk-to-disk data transfer scenario. For
glusterfs, I start by creating bricks inside the /dev/sda1 filesystem. During
the file transfer (12 GB), it seems that glusterfs tries to write a
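For context, a minimal sketch of a typical brick setup of the kind described above, with hypothetical hostnames, mount point, and volume name (the actual layout is not shown in the excerpt):

mkfs.xfs -f -i size=512 /dev/sda1
mkdir -p /data/brick1
mount /dev/sda1 /data/brick1
gluster volume create transfervol host1:/data/brick1/brick host2:/data/brick1/brick
gluster volume start transfervol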
I turned off the nfs-server service and now I am getting a different error message:
[root@node1 ~]# mount -vv -t nfs -o vers=3,mountproto=tcp 192.168.1.121:/gv0
/nfs_mount/
mount.nfs: timeout set for Fri May 29 16:43:30 2020
mount.nfs: trying text-based options
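In case it is relevant: with gluster's own NFS server (gnfs), the volume-level NFS export also has to be enabled before the mount can succeed. A sketch, using the volume name from the mount command above:

gluster volume set gv0 nfs.disable off
systemctl start rpcbind        # gnfs registers with rpcbind for NFSv3
gluster volume status gv0      # should show an "NFS Server" process online
showmount -e 192.168.1.121     # should list /gv0
mount -t nfs -o vers=3,mountproto=tcp 192.168.1.121:/gv0 /nfs_mount/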
Correct, every brick is a separate xfs-formatted disk attached to the
machine. There are two disks per machine, the ones mounted in `/data2`
are the newer ones.
Thanks for the reassurance -- that means we can take as long as
necessary to diagnose this. Let me know if there's more data I can
Hi Petr,
It's absolutely safe to use this volume; you will not see any problems even
if the actual used size is greater than the reported total size of the
volume, and it is safe to upgrade as well.
Can you please share the output of the following:
1. lsblk output from all the 3 nodes in the
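For reference, presumably something like the following, run on each of the three nodes (a guess at the intended invocation):

lsblk -f
df -h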
Thanks!
One more question -- I don't really mind having the wrong size
reported by df, but I'm worried whether it is safe to use the volume.
Will it be okay if I write to it? For example, once the actual used
size is greater than the reported total size of the volume, should I
expect problems?
Nope, not for now. I will update you if we figure out any other workaround.
Thanks for your help!
On Fri, May 29, 2020 at 2:50 PM Petr Certik wrote:
> I'm afraid I don't have the resources to try and reproduce from the
> beginning. Is there anything else I can do to get you more
> information?
>
>
On Fri, May 29, 2020 at 1:28 PM jifeng-call <17607319...@163.com> wrote:
> Hi All,
> I have 6 servers that form a glusterfs 2x3 distributed-replicate volume;
> the details are as follows:
>
> [root@node1 ~]# gluster volume info
> Volume Name: ksvd_vol
> Type: Distributed-Replicate
> Volume ID:
I'm afraid I don't have the resources to try and reproduce from the
beginning. Is there anything else I can do to get you more
information?
On Fri, May 29, 2020 at 11:08 AM Sanju Rakonde wrote:
>
> The issue is not with glusterd restart. We need to reproduce from beginning
> and add-bricks to
The issue is not with the glusterd restart. We need to reproduce from the beginning
and add bricks to check the df -h values.
I suggest not trying this on the production environment; if you have any other
machines, please let me know.
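A rough sketch of that reproduction on a scratch setup, with hypothetical host and brick names (not the production volume):

gluster volume create testvol replica 3 n1:/bricks/b1 n2:/bricks/b1 n3:/bricks/b1
gluster volume start testvol
mount -t glusterfs n1:/testvol /mnt/testvol
df -h /mnt/testvol     # note the reported size
gluster volume add-brick testvol replica 3 n1:/bricks/b2 n2:/bricks/b2 n3:/bricks/b2
df -h /mnt/testvol     # check whether the reported size grows as expected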
On Fri, May 29, 2020 at 1:37 PM Petr Certik wrote:
> If you mean the issue
Hi All,
I have 6 servers that form a glusterfs 2x3 distributed-replicate volume; the
details are as follows:
[root@node1 ~]# gluster volume info
Volume Name: abcd_vol
Type: Distributed-Replicate
Volume ID: c9848daa-b06f-4f82-a2f8-1b425b8e869c
Status: Started
Snapshot Count: 0
If you mean the issue during node restart, then yes, I think I could
reproduce that with a custom build. It's a production system, though,
so I'll need to be extremely careful.
We're using Debian glusterfs-server 7.3-1 amd64 -- can you provide a
custom glusterd binary based on that version?
Hi All,
I have 6 servers that form a glusterfs 2x3 distributed-replicate volume; the
details are as follows:
[root@node1 ~]# gluster volume info
Volume Name: ksvd_vol
Type: Distributed-Replicate
Volume ID: c9848daa-b06f-4f82-a2f8-1b425b8e869c
Status: Started
Snapshot Count: 0
Number of
Surprising! Will you be able to reproduce the issue and share the logs if I
provide a custom build with more logs?
On Thu, May 28, 2020 at 1:35 PM Petr Certik wrote:
> Thanks for your help! Much appreciated.
>
> The fsid is the same for all bricks:
>
> imagegluster1:
>
>
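A sketch of how the fsid can be compared across bricks (hypothetical brick paths; /data2 is the mount point mentioned earlier in the thread):

stat -f -c '%i' /data/brick
stat -f -c '%i' /data2/brick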