Re: [Gluster-users] just discovered that OpenShift/OKD dropped GlusterFS storage support...

2020-03-18 Thread Aravinda VK
Hi Arman,

Not sure about OpenShift dropping support for GlusterFS. But a few
GlusterFS maintainers have started a new project, “Kadalu”, to provide persistent
storage solutions based on GlusterFS for applications running in
Kubernetes/OpenShift/Rancher (or any other variant of K8s).

More details about the project are available at https://kadalu.io


A few highlights of Kadalu:

- Based on GlusterFS
- Does not use the GlusterFS management layer (i.e., Glusterd); instead, it integrates
natively with the Kubernetes APIs to manage GlusterFS
- RWO and RWX PVs are supported
- Easy to set up and use

Blog: https://kadalu.io/blog 
Github: https://github.com/kadalu/kadalu 

We are making steady progress toward our goal of making storage easy in
Kubernetes and toward our v1.0 release. Please try Kadalu Storage and provide feedback.
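For context, deploying Kadalu per its docs is a two-step affair: install the operator, then declare storage through a custom resource. A hedged sketch follows; the CRD fields mirror the early Kadalu examples and may differ in your release, and the node/device names are placeholders, not from this mail:

```yaml
# Step 1 (per kadalu.io docs): install the operator, e.g.
#   kubectl create -f https://kadalu.io/operator-latest.yaml
# Step 2: declare a storage pool. Fields follow early Kadalu examples;
# node and device names below are placeholders.
apiVersion: kadalu-operator.storage/v1alpha1
kind: KadaluStorage
metadata:
  name: storage-pool-1
spec:
  type: Replica1
  storage:
    - node: kube-node-1      # placeholder node name
      device: /dev/vdc       # placeholder raw device
```

PVCs can then request the resulting StorageClass (named kadalu.replica1 in those examples).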

Thanks
Aravinda Vishwanathapura
https://kadalu.io

> On 18-Mar-2020, at 10:19 PM, Arman Khalatyan  wrote:
> 
> Hello everybody,
> any reason why  OpenShift/OKD dropped GlusterFS storage support?
> Now the documentation
> https://docs.openshift.com/container-platform/4.2/migration/migrating_3_4/planning-migration-3-to-4.html#migration-considerations
> says only:
> UNSUPPORTED PERSISTENT STORAGE OPTIONS
> Support for the following persistent storage options from OpenShift Container 
> Platform 3.11 has changed in OpenShift Container Platform 4.2:
> 
> GlusterFS is no longer supported.
> 
> CephFS is no longer supported.
> 
> Ceph RBD is no longer supported.
> 
> iSCSI is now Technology Preview.
> 
> migrations to 4.x are becoming really painful
> 
> thanks, 
> Arman.
> 
> 
> 
> 
> Community Meeting Calendar:
> 
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
> 
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Announcing Gluster release 7.3

2020-03-18 Thread Artem Russakovskii
It's there now, thank you.

Sincerely,
Artem

--
Founder, Android Police, APK Mirror, Illogical Robot LLC
beerpla.net | @ArtemR


On Mon, Mar 16, 2020 at 10:39 AM Sheetal Pamecha 
wrote:

> Hi Artem,
>
> It seems the GlusterFS 7.3 package was created but was not built
> successfully due to an error. I have initiated the build process again.
> Expect the package to be in the repo soon.
>
> Regards,
> Sheetal Pamecha
>
>
> On Mon, Mar 16, 2020 at 6:54 PM Artem Russakovskii 
> wrote:
>
>> Hi,
>>
>> I'm not seeing 7.3 here yet:
>> http://download.opensuse.org/repositories/home:/glusterfs:/Leap15.1-7/openSUSE_Leap_15.1/x86_64/
>> .
>>
>> Sincerely,
>> Artem
>>
>> --
>> Founder, Android Police, APK Mirror, Illogical Robot LLC
>> beerpla.net | @ArtemR
>>
>>
>> On Wed, Feb 19, 2020 at 10:48 PM Dmitry Melekhov  wrote:
>>
>>> On 20.02.2020 at 10:33, Rinku Kothiya wrote:
>>> >
>>> > References:
>>> >
>>> > [1] Packages for 7.1:
>>> > https://download.gluster.org/pub/gluster/glusterfs/7/7.3/
>>> >
>>> > [2] Release notes for 7.1:
>>> > https://docs.gluster.org/en/latest/release-notes/7.3/
>>>
>>> I see 2 bugs in this message :-D
>>>
>>>
>>> 
>>>
>>> Community Meeting Calendar:
>>>
>>> APAC Schedule -
>>> Every 2nd and 4th Tuesday at 11:30 AM IST
>>> Bridge: https://bluejeans.com/441850968
>>>
>>> NA/EMEA Schedule -
>>> Every 1st and 3rd Tuesday at 01:00 PM EDT
>>> Bridge: https://bluejeans.com/441850968
>>>
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] just discovered that OpenShift/OKD dropped GlusterFS storage support...

2020-03-18 Thread Arman Khalatyan
OK, looks good. Thanks for the links; let's see how things develop in
OpenShift, and I hope oVirt will stay GlusterFS-friendly :)
(a friend of Gluster since 2.x, LOL)


On Wed, Mar 18, 2020 at 7:56 PM Yaniv Kaul  wrote:

>
>
> On Wed, Mar 18, 2020 at 6:50 PM Arman Khalatyan  wrote:
>
>> Hello everybody,
>> any reason why  OpenShift/OKD dropped GlusterFS storage support?
>> Now the documentation
>> https://docs.openshift.com/container-platform/4.2/migration/migrating_3_4/planning-migration-3-to-4.html#migration-considerations
>> says only:
>> UNSUPPORTED PERSISTENT STORAGE OPTIONS
>> Support for the following persistent storage options from OpenShift
>> Container Platform 3.11 has changed in OpenShift Container Platform 4.2:
>>
>> GlusterFS is no longer supported.
>>
>> CephFS is no longer supported.
>>
>> Ceph RBD is no longer supported.
>>
>> iSCSI is now Technology Preview.
>>
>> migrations to 4.x are becoming really painful
>>
>
> Migration should not be that difficult, see[1].
> Ceph can be used using Rook and Ceph-CSI.
> Y.
>
> [1] https://blog.openshift.com/migrating-your-applications-to-openshift-4/
>
>>
>> thanks,
>> Arman.

Re: [Gluster-users] just discovered that OpenShift/OKD dropped GlusterFS storage support...

2020-03-18 Thread Yaniv Kaul
On Wed, Mar 18, 2020 at 6:50 PM Arman Khalatyan  wrote:

> Hello everybody,
> any reason why  OpenShift/OKD dropped GlusterFS storage support?
> Now the documentation
> https://docs.openshift.com/container-platform/4.2/migration/migrating_3_4/planning-migration-3-to-4.html#migration-considerations
> says only:
> UNSUPPORTED PERSISTENT STORAGE OPTIONS
> Support for the following persistent storage options from OpenShift
> Container Platform 3.11 has changed in OpenShift Container Platform 4.2:
>
> GlusterFS is no longer supported.
>
> CephFS is no longer supported.
>
> Ceph RBD is no longer supported.
>
> iSCSI is now Technology Preview.
>
> migrations to 4.x are becoming really painful
>

Migration should not be that difficult, see[1].
Ceph can be used using Rook and Ceph-CSI.
Y.

[1] https://blog.openshift.com/migrating-your-applications-to-openshift-4/
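As a sketch of the Rook/Ceph-CSI path Yaniv mentions, an application would simply claim storage against the StorageClass that Rook sets up. The StorageClass name below ("rook-ceph-block") follows the stock Rook examples, not anything in this thread; adjust to your cluster:

```yaml
# A PVC bound to the block StorageClass created by the stock
# Rook/Ceph-CSI examples; "rook-ceph-block" is that example's name.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: rook-ceph-block
```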

>
> thanks,
> Arman.

[Gluster-users] Storage really slow on k8s

2020-03-18 Thread Rene Bon Ciric
Hello,

I have a converged setup, as suggested in
https://github.com/gluster/gluster-kubernetes.

It's a pretty standard installation: three dedicated nodes, each with a
system drive plus a data drive. The data drive is 500 GiB.

I have been suffering from extreme slowness for quite a while now. Here are
some network tests.
some network tests.

root@k8s2:~# iperf3 -Vc glusterfs0
iperf 3.1.3
Linux k8s2.test.gva.cloudsigma.com 4.15.0-72-generic #81-Ubuntu SMP
Tue Nov 26 12:20:02 UTC 2019 x86_64
Time: Wed, 11 Mar 2020 01:47:45 GMT
Connecting to host glusterfs0, port 5201
  Cookie: k8s2.test.gva.cloudsigma.com.1583891
  TCP MSS: 1448 (default)
[  4] local 10.0.77.3 port 53642 connected to 10.0.77.33 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting
0 seconds, 10 second test
[ ID] Interval   Transfer Bandwidth   Retr  Cwnd
[  4]   0.00-1.00   sec  1.02 GBytes  8.77 Gbits/sec  1459   3.15 MBytes
[  4]   1.00-2.00   sec  1022 MBytes  8.58 Gbits/sec  2284   3.15 MBytes
[  4]   2.00-3.00   sec  1005 MBytes  8.43 Gbits/sec  2230   3.15 MBytes
[  4]   3.00-4.00   sec   982 MBytes  8.24 Gbits/sec  2334   2.38 MBytes
[  4]   4.00-5.00   sec   988 MBytes  8.29 Gbits/sec  668   2.65 MBytes
[  4]   5.00-6.00   sec   983 MBytes  8.25 Gbits/sec  1370   2.86 MBytes
[  4]   6.00-7.00   sec  1010 MBytes  8.47 Gbits/sec  1017   2.12 MBytes
[  4]   7.00-8.00   sec   995 MBytes  8.35 Gbits/sec  2220   1.27 MBytes
[  4]   8.00-9.00   sec   923 MBytes  7.75 Gbits/sec  324   1.64 MBytes
[  4]   9.00-10.00  sec  1001 MBytes  8.40 Gbits/sec  532   1.96 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval   Transfer Bandwidth   Retr
[  4]   0.00-10.00  sec  9.72 GBytes  8.35 Gbits/sec  14438 sender
[  4]   0.00-10.00  sec  9.72 GBytes  8.35 Gbits/sec  receiver
CPU Utilization: local/sender 28.6% (0.5%u/28.0%s), remote/receiver
22.5% (1.1%u/21.4%s)

iperf Done.
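As an aside, runs like the one above are easier to compare if iperf3 is invoked with -J for JSON output. A minimal sketch to pull out the sender summary (field names per the iperf3 JSON layout; the helper name is mine):

```python
import json

def iperf_summary(raw: str) -> dict:
    """Return sender bandwidth (Gbit/s) and retransmit count from `iperf3 -J` output."""
    sent = json.loads(raw)["end"]["sum_sent"]
    return {
        "gbits_per_sec": round(sent["bits_per_second"] / 1e9, 2),
        "retransmits": sent["retransmits"],
    }

# Trimmed sample; a real run carries many more fields.
sample = '{"end": {"sum_sent": {"bits_per_second": 8.35e9, "retransmits": 14438}}}'
print(iperf_summary(sample))  # {'gbits_per_sec': 8.35, 'retransmits': 14438}
```

The retransmit counts in the runs above (14k-20k over 10 s) are worth tracking this way, since they can matter more than raw bandwidth for Gluster latency.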

root@k8s2:~# iperf3 -Vc glusterfs1
iperf 3.1.3
Linux k8s2.test.gva.cloudsigma.com 4.15.0-72-generic #81-Ubuntu SMP
Tue Nov 26 12:20:02 UTC 2019 x86_64
Time: Wed, 11 Mar 2020 01:49:10 GMT
Connecting to host glusterfs1, port 5201
  Cookie: k8s2.test.gva.cloudsigma.com.1583891
  TCP MSS: 1448 (default)
[  4] local 10.0.77.3 port 42554 connected to 10.0.77.34 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting
0 seconds, 10 second test
[ ID] Interval   Transfer Bandwidth   Retr  Cwnd
[  4]   0.00-1.00   sec   941 MBytes  7.89 Gbits/sec  4309   2.03 MBytes
[  4]   1.00-2.00   sec  1017 MBytes  8.53 Gbits/sec  2577   2.21 MBytes
[  4]   2.00-3.03   sec   962 MBytes  7.83 Gbits/sec  943   2.44 MBytes
[  4]   3.03-4.00   sec   965 MBytes  8.35 Gbits/sec  2441   2.63 MBytes
[  4]   4.00-5.00   sec   951 MBytes  7.98 Gbits/sec  1785   2.86 MBytes
[  4]   5.00-6.00   sec   900 MBytes  7.55 Gbits/sec  1199   3.00 MBytes
[  4]   6.00-7.00   sec   988 MBytes  8.28 Gbits/sec  888    100 KBytes
[  4]   7.00-8.00   sec   892 MBytes  7.49 Gbits/sec  2382   2.34 MBytes
[  4]   8.00-9.00   sec   967 MBytes  8.11 Gbits/sec  808   2.61 MBytes
[  4]   9.00-10.00  sec   979 MBytes  8.21 Gbits/sec  3070   2.76 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval   Transfer Bandwidth   Retr
[  4]   0.00-10.00  sec  9.34 GBytes  8.02 Gbits/sec  20402 sender
[  4]   0.00-10.00  sec  9.34 GBytes  8.02 Gbits/sec  receiver
CPU Utilization: local/sender 27.4% (0.5%u/26.9%s), remote/receiver
24.7% (0.7%u/23.9%s)

iperf Done.

root@k8s2:~# iperf3 -Vc glusterfs2
iperf 3.1.3
Linux k8s2.test.gva.cloudsigma.com 4.15.0-72-generic #81-Ubuntu SMP
Tue Nov 26 12:20:02 UTC 2019 x86_64
Time: Wed, 11 Mar 2020 02:57:33 GMT
Connecting to host glusterfs2, port 5201
  Cookie: k8s2.test.gva.cloudsigma.com.1583895
  TCP MSS: 1448 (default)
[  4] local 10.0.77.3 port 59230 connected to 10.0.77.35 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting
0 seconds, 10 second test
[ ID] Interval   Transfer Bandwidth   Retr  Cwnd
[  4]   0.00-1.00   sec  1020 MBytes  8.56 Gbits/sec  1423   1.96 MBytes
[  4]   1.00-2.00   sec   997 MBytes  8.36 Gbits/sec  2377   2.28 MBytes
[  4]   2.00-3.00   sec  1022 MBytes  8.58 Gbits/sec  1409   2.56 MBytes
[  4]   3.00-4.00   sec   976 MBytes  8.19 Gbits/sec  1184   2.79 MBytes
[  4]   4.00-5.00   sec  1011 MBytes  8.48 Gbits/sec  1063   3.00 MBytes
[  4]   5.00-6.00   sec  1016 MBytes  8.52 Gbits/sec  2608   3.00 MBytes
[  4]   6.00-7.00   sec   979 MBytes  8.21 Gbits/sec  1337   1.73 MBytes
[  4]   7.00-8.00   sec  1011 MBytes  8.48 Gbits/sec  746   2.11 MBytes
[  4]   8.00-9.00   sec  1013 MBytes  8.50 Gbits/sec  2190   2.43 MBytes
[  4]   9.00-10.00  sec   930 MBytes  7.80 Gbits/sec  1284   2.67 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ 

[Gluster-users] just discovered that OpenShift/OKD dropped GlusterFS storage support...

2020-03-18 Thread Arman Khalatyan
Hello everybody,
any reason why OpenShift/OKD dropped GlusterFS storage support?
Now the documentation
https://docs.openshift.com/container-platform/4.2/migration/migrating_3_4/planning-migration-3-to-4.html#migration-considerations
says only:
UNSUPPORTED PERSISTENT STORAGE OPTIONS
Support for the following persistent storage options from OpenShift
Container Platform 3.11 has changed in OpenShift Container Platform 4.2:

GlusterFS is no longer supported.

CephFS is no longer supported.

Ceph RBD is no longer supported.

iSCSI is now Technology Preview.

migrations to 4.x are becoming really painful

thanks,
Arman.

Re: [Gluster-users] geo-replication sync issue

2020-03-18 Thread Strahil Nikolov
On March 18, 2020 1:41:15 PM GMT+02:00, "Etem Bayoğlu"  
wrote:
>Yes, I had tried that. My observation in my case is that the glusterfs
>crawler did not exit from a specific directory that had already been
>synced. Like an infinite loop: it was crawling that directory endlessly.
>I tried so many things as time went on.
>So I gave up and switched to NFS + rsync for now. This issue is getting
>me angry.
>
>
>Thanks to the community for the help. ;)
>
>On 18 Mar 2020 Wed at 09:00 Kotresh Hiremath Ravishankar <
>khire...@redhat.com> wrote:
>
>> Could you try disabling syncing of xattrs and check?
>>
>> gluster vol geo-rep  :: config sync-xattrs
>> false
>>
>> On Fri, Mar 13, 2020 at 1:42 AM Strahil Nikolov
>
>> wrote:
>>
>>> On March 12, 2020 9:41:45 AM GMT+02:00, "Etem Bayoğlu" <
>>> etembayo...@gmail.com> wrote:
>>> >Hello again,
>>> >
>>> >These are gsyncd.log from master on DEBUG level. It tells entering
>>> >directory, synced files , and gfid information
>>> >
>>> >[2020-03-12 07:18:16.702286] D [master(worker
>>> >/srv/media-storage):324:regjob] _GMaster: synced
>>> >file=.gfid/358fe62c-c7e8-449a-90dd-1cc1a3b7a346
>>> >[2020-03-12 07:18:16.702420] D [master(worker
>>> >/srv/media-storage):324:regjob] _GMaster: synced
>>> >file=.gfid/04eb63e3-7fcb-45d2-9f29-6292a5072adb
>>> >[2020-03-12 07:18:16.702574] D [master(worker
>>> >/srv/media-storage):324:regjob] _GMaster: synced
>>> >file=.gfid/4363e521-d81a-4a0f-bfa4-5ee6b92da2b4
>>> >[2020-03-12 07:18:16.702704] D [master(worker
>>> >/srv/media-storage):324:regjob] _GMaster: synced
>>> >file=.gfid/bed30509-2c5f-4c77-b2f9-81916a99abd9
>>> >[2020-03-12 07:18:16.702828] D [master(worker
>>> >/srv/media-storage):324:regjob] _GMaster: synced
>>> >file=.gfid/d86f44cc-3001-4bdf-8bae-6bed2a9c8381
>>> >[2020-03-12 07:18:16.702950] D [master(worker
>>> >/srv/media-storage):324:regjob] _GMaster: synced
>>> >file=.gfid/da40d429-d89e-4dc9-9dda-07922d87b3c8
>>> >[2020-03-12 07:18:16.703075] D [master(worker
>>> >/srv/media-storage):324:regjob] _GMaster: synced
>>> >file=.gfid/befc5e03-b7a1-43dc-b6c2-0a186019b6d5
>>> >[2020-03-12 07:18:16.703198] D [master(worker
>>> >/srv/media-storage):324:regjob] _GMaster: synced
>>> >file=.gfid/4e66035f-99f9-4802-b876-2e01686d18f2
>>> >[2020-03-12 07:18:16.703378] D [master(worker
>>> >/srv/media-storage):324:regjob] _GMaster: synced
>>> >file=.gfid/d1295b51-e461-4766-b504-8e9a941a056f
>>> >[2020-03-12 07:18:16.719875] D [master(worker
>>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>>> >./api/media/listing/2018/06-02/1557813
>>> >[2020-03-12 07:18:17.72679] D [master(worker
>>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>>> >./api/media/listing/2018/06-02/1557205
>>> >[2020-03-12 07:18:17.297362] D [master(worker
>>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>>> >./api/media/listing/2018/06-02/1556880
>>> >[2020-03-12 07:18:17.488224] D [master(worker
>>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>>> >./api/media/listing/2018/06-02/1557769
>>> >[2020-03-12 07:18:17.730181] D [master(worker
>>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>>> >./api/media/listing/2018/06-02/1557028
>>> >[2020-03-12 07:18:17.869410] I [gsyncd(config-get):318:main] :
>>> >Using
>>> >session config file
>>>
>>>
>>path=/var/lib/glusterd/geo-replication/media-storage_slave-node_dr-media/gsyncd.conf
>>> >[2020-03-12 07:18:18.65431] D [master(worker
>>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>>> >./api/media/listing/2018/06-02/1558442
>>> >[2020-03-12 07:18:18.352381] D [master(worker
>>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>>> >./api/media/listing/2018/06-02/1557391
>>> >[2020-03-12 07:18:18.374876] I [gsyncd(config-get):318:main] :
>>> >Using
>>> >session config file
>>>
>>>
>>path=/var/lib/glusterd/geo-replication/media-storage_slave-node_dr-media/gsyncd.conf
>>> >[2020-03-12 07:18:18.482299] I [gsyncd(config-set):318:main] :
>>> >Using
>>> >session config file
>>>
>>>
>>path=/var/lib/glusterd/geo-replication/media-storage_slave-nodem_dr-media/gsyncd.conf
>>> >[2020-03-12 07:18:18.507585] D [master(worker
>>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>>> >./api/media/listing/2018/06-02/1558577
>>> >[2020-03-12 07:18:18.576061] I [gsyncd(config-get):318:main] :
>>> >Using
>>> >session config file
>>>
>>>
>>path=/var/lib/glusterd/geo-replication/media-storage_slave-node_dr-media/gsyncd.conf
>>> >[2020-03-12 07:18:18.582772] D [master(worker
>>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>>> >./api/media/listing/2018/06-02/1556831
>>> >[2020-03-12 07:18:18.684170] I [gsyncd(config-get):318:main] :
>>> >Using
>>> >session config file
>>>
>>>
>>path=/var/lib/glusterd/geo-replication/media-storage_slave-node_dr-media/gsyncd.conf
>>> >[2020-03-12 07:18:18.691845] E [syncdutils(worker
>>> >/srv/media-storage):312:log_raise_exception] : connection to
>peer
>>> >is
>>> >broken
>>> >[2020-03-12 07:18:18.692106] E [syncdutils(worker
>>> >/srv/media-storage):312:log_raise_exception] : 

Re: [Gluster-users] geo-replication sync issue

2020-03-18 Thread Etem Bayoğlu
Yes, I had tried that. My observation in my case is that the glusterfs
crawler did not exit from a specific directory that had already been synced.
Like an infinite loop: it was crawling that directory endlessly. I tried so
many things as time went on.
So I gave up and switched to NFS + rsync for now. This issue is getting me
angry.


Thanks to the community for the help. ;)
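A quick way to confirm the endless re-crawl described here is to count repeated "entering" lines in gsyncd.log. A minimal sketch, with the log format taken from the excerpts quoted in this thread (the helper name is mine):

```python
import re
from collections import Counter

def repeated_entries(log_lines, threshold=2):
    """Count 'entering <dir>' crawler lines per directory; any directory
    seen `threshold` or more times hints at a re-crawl loop."""
    pat = re.compile(r"_GMaster: entering\s+(\S+)")
    counts = Counter(m.group(1) for line in log_lines for m in pat.finditer(line))
    return {d: n for d, n in counts.items() if n >= threshold}

# Two log lines naming the same directory, as in the quoted gsyncd.log:
logs = [
    "[2020-03-12 07:18:16.719875] D [master(worker /srv/media-storage):1792:Xcrawl] _GMaster: entering ./api/media/listing/2018/06-02/1557813",
    "[2020-03-12 07:19:02.100000] D [master(worker /srv/media-storage):1792:Xcrawl] _GMaster: entering ./api/media/listing/2018/06-02/1557813",
]
print(repeated_entries(logs))  # {'./api/media/listing/2018/06-02/1557813': 2}
```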

On 18 Mar 2020 Wed at 09:00 Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:

> Could you try disabling syncing of xattrs and check?
>
> gluster vol geo-rep  :: config sync-xattrs
> false
>
> On Fri, Mar 13, 2020 at 1:42 AM Strahil Nikolov 
> wrote:
>
>> On March 12, 2020 9:41:45 AM GMT+02:00, "Etem Bayoğlu" <
>> etembayo...@gmail.com> wrote:
>> >Hello again,
>> >
>> >These are gsyncd.log from master on DEBUG level. It tells entering
>> >directory, synced files , and gfid information
>> >
>> >[2020-03-12 07:18:16.702286] D [master(worker
>> >/srv/media-storage):324:regjob] _GMaster: synced
>> >file=.gfid/358fe62c-c7e8-449a-90dd-1cc1a3b7a346
>> >[2020-03-12 07:18:16.702420] D [master(worker
>> >/srv/media-storage):324:regjob] _GMaster: synced
>> >file=.gfid/04eb63e3-7fcb-45d2-9f29-6292a5072adb
>> >[2020-03-12 07:18:16.702574] D [master(worker
>> >/srv/media-storage):324:regjob] _GMaster: synced
>> >file=.gfid/4363e521-d81a-4a0f-bfa4-5ee6b92da2b4
>> >[2020-03-12 07:18:16.702704] D [master(worker
>> >/srv/media-storage):324:regjob] _GMaster: synced
>> >file=.gfid/bed30509-2c5f-4c77-b2f9-81916a99abd9
>> >[2020-03-12 07:18:16.702828] D [master(worker
>> >/srv/media-storage):324:regjob] _GMaster: synced
>> >file=.gfid/d86f44cc-3001-4bdf-8bae-6bed2a9c8381
>> >[2020-03-12 07:18:16.702950] D [master(worker
>> >/srv/media-storage):324:regjob] _GMaster: synced
>> >file=.gfid/da40d429-d89e-4dc9-9dda-07922d87b3c8
>> >[2020-03-12 07:18:16.703075] D [master(worker
>> >/srv/media-storage):324:regjob] _GMaster: synced
>> >file=.gfid/befc5e03-b7a1-43dc-b6c2-0a186019b6d5
>> >[2020-03-12 07:18:16.703198] D [master(worker
>> >/srv/media-storage):324:regjob] _GMaster: synced
>> >file=.gfid/4e66035f-99f9-4802-b876-2e01686d18f2
>> >[2020-03-12 07:18:16.703378] D [master(worker
>> >/srv/media-storage):324:regjob] _GMaster: synced
>> >file=.gfid/d1295b51-e461-4766-b504-8e9a941a056f
>> >[2020-03-12 07:18:16.719875] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/06-02/1557813
>> >[2020-03-12 07:18:17.72679] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/06-02/1557205
>> >[2020-03-12 07:18:17.297362] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/06-02/1556880
>> >[2020-03-12 07:18:17.488224] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/06-02/1557769
>> >[2020-03-12 07:18:17.730181] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/06-02/1557028
>> >[2020-03-12 07:18:17.869410] I [gsyncd(config-get):318:main] :
>> >Using
>> >session config file
>>
>> >path=/var/lib/glusterd/geo-replication/media-storage_slave-node_dr-media/gsyncd.conf
>> >[2020-03-12 07:18:18.65431] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/06-02/1558442
>> >[2020-03-12 07:18:18.352381] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/06-02/1557391
>> >[2020-03-12 07:18:18.374876] I [gsyncd(config-get):318:main] :
>> >Using
>> >session config file
>>
>> >path=/var/lib/glusterd/geo-replication/media-storage_slave-node_dr-media/gsyncd.conf
>> >[2020-03-12 07:18:18.482299] I [gsyncd(config-set):318:main] :
>> >Using
>> >session config file
>>
>> >path=/var/lib/glusterd/geo-replication/media-storage_slave-nodem_dr-media/gsyncd.conf
>> >[2020-03-12 07:18:18.507585] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/06-02/1558577
>> >[2020-03-12 07:18:18.576061] I [gsyncd(config-get):318:main] :
>> >Using
>> >session config file
>>
>> >path=/var/lib/glusterd/geo-replication/media-storage_slave-node_dr-media/gsyncd.conf
>> >[2020-03-12 07:18:18.582772] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/06-02/1556831
>> >[2020-03-12 07:18:18.684170] I [gsyncd(config-get):318:main] :
>> >Using
>> >session config file
>>
>> >path=/var/lib/glusterd/geo-replication/media-storage_slave-node_dr-media/gsyncd.conf
>> >[2020-03-12 07:18:18.691845] E [syncdutils(worker
>> >/srv/media-storage):312:log_raise_exception] : connection to peer
>> >is
>> >broken
>> >[2020-03-12 07:18:18.692106] E [syncdutils(worker
>> >/srv/media-storage):312:log_raise_exception] : connection to peer
>> >is
>> >broken
>> >[2020-03-12 07:18:18.694910] E [syncdutils(worker
>> >/srv/media-storage):822:errlog] Popen: command returned error cmd=ssh
>> 

Re: [Gluster-users] geo-replication sync issue

2020-03-18 Thread Kotresh Hiremath Ravishankar
Could you try disabling syncing of xattrs and check?

gluster vol geo-rep  :: config sync-xattrs
false
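For readers of the archive: the command above lost its volume and host arguments when the list software stripped angle-bracketed text. With placeholder names (mastervol, slavehost, slavevol are mine, not from the original mail), the full form would read:

```
# Placeholder names; the archive stripped the originals.
gluster volume geo-replication mastervol slavehost::slavevol config sync-xattrs false
```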

On Fri, Mar 13, 2020 at 1:42 AM Strahil Nikolov 
wrote:

> On March 12, 2020 9:41:45 AM GMT+02:00, "Etem Bayoğlu" <
> etembayo...@gmail.com> wrote:
> >Hello again,
> >
> >These are gsyncd.log from master on DEBUG level. It tells entering
> >directory, synced files , and gfid information
> >
> >[2020-03-12 07:18:16.702286] D [master(worker
> >/srv/media-storage):324:regjob] _GMaster: synced
> >file=.gfid/358fe62c-c7e8-449a-90dd-1cc1a3b7a346
> >[2020-03-12 07:18:16.702420] D [master(worker
> >/srv/media-storage):324:regjob] _GMaster: synced
> >file=.gfid/04eb63e3-7fcb-45d2-9f29-6292a5072adb
> >[2020-03-12 07:18:16.702574] D [master(worker
> >/srv/media-storage):324:regjob] _GMaster: synced
> >file=.gfid/4363e521-d81a-4a0f-bfa4-5ee6b92da2b4
> >[2020-03-12 07:18:16.702704] D [master(worker
> >/srv/media-storage):324:regjob] _GMaster: synced
> >file=.gfid/bed30509-2c5f-4c77-b2f9-81916a99abd9
> >[2020-03-12 07:18:16.702828] D [master(worker
> >/srv/media-storage):324:regjob] _GMaster: synced
> >file=.gfid/d86f44cc-3001-4bdf-8bae-6bed2a9c8381
> >[2020-03-12 07:18:16.702950] D [master(worker
> >/srv/media-storage):324:regjob] _GMaster: synced
> >file=.gfid/da40d429-d89e-4dc9-9dda-07922d87b3c8
> >[2020-03-12 07:18:16.703075] D [master(worker
> >/srv/media-storage):324:regjob] _GMaster: synced
> >file=.gfid/befc5e03-b7a1-43dc-b6c2-0a186019b6d5
> >[2020-03-12 07:18:16.703198] D [master(worker
> >/srv/media-storage):324:regjob] _GMaster: synced
> >file=.gfid/4e66035f-99f9-4802-b876-2e01686d18f2
> >[2020-03-12 07:18:16.703378] D [master(worker
> >/srv/media-storage):324:regjob] _GMaster: synced
> >file=.gfid/d1295b51-e461-4766-b504-8e9a941a056f
> >[2020-03-12 07:18:16.719875] D [master(worker
> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
> >./api/media/listing/2018/06-02/1557813
> >[2020-03-12 07:18:17.72679] D [master(worker
> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
> >./api/media/listing/2018/06-02/1557205
> >[2020-03-12 07:18:17.297362] D [master(worker
> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
> >./api/media/listing/2018/06-02/1556880
> >[2020-03-12 07:18:17.488224] D [master(worker
> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
> >./api/media/listing/2018/06-02/1557769
> >[2020-03-12 07:18:17.730181] D [master(worker
> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
> >./api/media/listing/2018/06-02/1557028
> >[2020-03-12 07:18:17.869410] I [gsyncd(config-get):318:main] :
> >Using
> >session config file
>
> >path=/var/lib/glusterd/geo-replication/media-storage_slave-node_dr-media/gsyncd.conf
> >[2020-03-12 07:18:18.65431] D [master(worker
> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
> >./api/media/listing/2018/06-02/1558442
> >[2020-03-12 07:18:18.352381] D [master(worker
> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
> >./api/media/listing/2018/06-02/1557391
> >[2020-03-12 07:18:18.374876] I [gsyncd(config-get):318:main] :
> >Using
> >session config file
>
> >path=/var/lib/glusterd/geo-replication/media-storage_slave-node_dr-media/gsyncd.conf
> >[2020-03-12 07:18:18.482299] I [gsyncd(config-set):318:main] :
> >Using
> >session config file
>
> >path=/var/lib/glusterd/geo-replication/media-storage_slave-nodem_dr-media/gsyncd.conf
> >[2020-03-12 07:18:18.507585] D [master(worker
> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
> >./api/media/listing/2018/06-02/1558577
> >[2020-03-12 07:18:18.576061] I [gsyncd(config-get):318:main] :
> >Using
> >session config file
>
> >path=/var/lib/glusterd/geo-replication/media-storage_slave-node_dr-media/gsyncd.conf
> >[2020-03-12 07:18:18.582772] D [master(worker
> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
> >./api/media/listing/2018/06-02/1556831
> >[2020-03-12 07:18:18.684170] I [gsyncd(config-get):318:main] :
> >Using
> >session config file
>
> >path=/var/lib/glusterd/geo-replication/media-storage_slave-node_dr-media/gsyncd.conf
> >[2020-03-12 07:18:18.691845] E [syncdutils(worker
> >/srv/media-storage):312:log_raise_exception] : connection to peer
> >is
> >broken
> >[2020-03-12 07:18:18.692106] E [syncdutils(worker
> >/srv/media-storage):312:log_raise_exception] : connection to peer
> >is
> >broken
> >[2020-03-12 07:18:18.694910] E [syncdutils(worker
> >/srv/media-storage):822:errlog] Popen: command returned error cmd=ssh
> >-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
> >/var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto
> >-S
> >/tmp/gsyncd-aux-ssh-WaMqpG/241afba5343394352fc3f9c251909232.sock
> >slave-node
> >/nonexistent/gsyncd slave media-storage slave-node::dr-media
> >--master-node
> >master-node --master-node-id 023cdb20-2737-4278-93c2-0927917ee314
> >--master-brick /srv/media-storage --local-node slave-node
> >--local-node-id
> >cf34fc96-a08a-49c2-b8eb-a3df5a05f757 --slave-timeout 120
> >--slave-log-level
> >DEBUG --slave-gluster-log-level INFO