Re: [Gluster-users] Volume parameters listed more than once

2020-04-21 Thread Kotresh Hiremath Ravishankar
On Mon, Apr 20, 2020 at 9:37 PM Yaniv Kaul wrote: > > > On Mon, Apr 20, 2020 at 5:38 PM Dmitry Antipov > wrote: > >> # gluster volume info >> >> Volume Name: TEST0 >> Type: Distributed-Replicate >> Volume ID: ca63095f-58dd-4ba8-82d6-7149a58c1423 >> Status: Created >> Snapshot Count: 0 >> Number

Re: [Gluster-users] geo-replication sync issue

2020-03-18 Thread Kotresh Hiremath Ravishankar
Could you try disabling xattr syncing and check? gluster vol geo-rep :: config sync-xattrs false On Fri, Mar 13, 2020 at 1:42 AM Strahil Nikolov wrote: > On March 12, 2020 9:41:45 AM GMT+02:00, "Etem Bayoğlu" < > etembayo...@gmail.com> wrote: > >Hello again, > > > >These are gsyncd.log from
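For illustration, with the elided session parts replaced by placeholder names, the full form of the command above would be:
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config sync-xattrs false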

Re: [Gluster-users] Geo-replication /var/lib space question

2020-02-13 Thread Kotresh Hiremath Ravishankar
All '.processed' directories (under working_dir and working_dir/.history) contain processed changelogs and are no longer required by geo-replication except for debugging purposes. Those directories can be cleaned up if they are consuming too much space. On Wed, Feb 12, 2020 at 11:36 PM Sunny Kumar
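A cautious cleanup sketch, assuming the default working directory layout referenced elsewhere in this archive (session and brick path components are placeholders):
# rm -f /var/lib/misc/gluster/gsyncd/<session>/<brick>/.processed/*
# rm -f /var/lib/misc/gluster/gsyncd/<session>/<brick>/.history/.processed/*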

Re: [Gluster-users] Unable to setup geo replication

2019-12-01 Thread Kotresh Hiremath Ravishankar
sender=3.1.3] > > > > The data is synced over to the other machine when I view the file there > > [root@pgsotc10 mnt]# cat file1 > > testdata > > [root@pgsotc10 mnt]# > > > > *From:* Kotresh Hiremath Ravishankar > *Sent:* Wednesday, November 27, 2019 5:25

Re: [Gluster-users] Unable to setup geo replication

2019-11-27 Thread Kotresh Hiremath Ravishankar
col data stream (code 12) at io.c(226) > [sender=3.1.3] > > [root@jfsotc22 mnt]# > > > > *From:* Kotresh Hiremath Ravishankar > *Sent:* Tuesday, November 26, 2019 7:22 PM > *To:* Tan, Jian Chern > *Cc:* gluster-users@gluster.org > *Subject:* Re: [Gluster-users] Unable t

Re: [Gluster-users] Unable to setup geo replication

2019-11-26 Thread Kotresh Hiremath Ravishankar
; version 31, so both are up to date as far as I know. > > Gluster version on both machines are glusterfs 5.10 > > OS on both machines are Fedora 29 Server Edition > > > > *From:* Kotresh Hiremath Ravishankar > *Sent:* Tuesday, November 26, 2019 3:04 PM > *To:* Tan,

Re: [Gluster-users] Unable to setup geo replication

2019-11-25 Thread Kotresh Hiremath Ravishankar
Error code 14 in rsync relates to IPC, where a pipe/fork fails in the rsync code. Please upgrade rsync if you haven't already. Also check that the rsync versions on the master and slave are the same. Which version of gluster are you using? What's the host OS? What's the rsync version? On Tue, Nov 26, 2019 at 11:34

Re: [Gluster-users] Geo_replication to Faulty

2019-11-18 Thread Kotresh Hiremath Ravishankar
a.redhat.com/show_bug.cgi?id=1709248). > > On Tue, Nov 19, 2019 at 11:22 AM Kotresh Hiremath Ravishankar < > khire...@redhat.com> wrote: > >> Which version of gluster are you using? >> >> On Tue, Nov 19, 2019 at 11:00 AM deepu srinivasan >> wrote: >>

Re: [Gluster-users] Geo_replication to Faulty

2019-11-18 Thread Kotresh Hiremath Ravishankar
Which version of gluster are you using? On Tue, Nov 19, 2019 at 11:00 AM deepu srinivasan wrote: > Hi kotresh > Is there a stable release in 6.x series? > > > On Tue, Nov 19, 2019, 10:44 AM Kotresh Hiremath Ravishankar < > khire...@redhat.com> wrote: > >> T

Re: [Gluster-users] Geo_replication to Faulty

2019-11-18 Thread Kotresh Hiremath Ravishankar
This issue has recently been fixed with the following patch and should be available in the latest gluster-6.x: https://review.gluster.org/#/c/glusterfs/+/23570/ On Tue, Nov 19, 2019 at 10:26 AM deepu srinivasan wrote: > > Hi Aravinda > *The below logs are from master end:* > > [2019-11-16

Re: [Gluster-users] Geo-replication does not send filesystem changes

2019-07-05 Thread Kotresh Hiremath Ravishankar
The session has moved from "history crawl" to "changelog crawl". After this point, there are no changelogs to be synced as per the logs. Please check the ".processing" directories for any pending changelogs to be synced at "/var/lib/misc/gluster/gsyncd///.processing". If there are no
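A quick sketch for listing any pending changelogs (the session and brick path components, elided above, are placeholders):
# ls /var/lib/misc/gluster/gsyncd/<session>/<brick>/.processing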

Re: [Gluster-users] Exception in Geo-Replication

2019-07-02 Thread Kotresh Hiremath Ravishankar
You should be looking into the other log file (changes-.log) for the actual failure. In your case "changes-home-sas-gluster-data-code-misc.log". On Tue, Jul 2, 2019 at 12:33 PM deepu srinivasan wrote: > Any Update on this issue ? > > On Mon, Jul 1, 2019 at 4:19 PM deepu srinivasan > wrote: > >> Hi

Re: [Gluster-users] Geo Replication Stop even after migratingto 5.6

2019-06-14 Thread Kotresh Hiremath Ravishankar
GlusterFS >> geo-replication begins synchronizing all the data. All files are compared >> using checksum, which can be a lengthy and high resource utilization >> operation on large data sets. >> >> > On Fri, Jun 14, 2019 at 12:30 PM Kotresh Hiremath Ravishankar <

Re: [Gluster-users] Geo Replication stops replicating

2019-06-05 Thread Kotresh Hiremath Ravishankar
>>> >>> [2019-06-05 08:52:44.426839] I [MSGID: 106496] >>> [glusterd-handler.c:3187:__glusterd_handle_mount] 0-glusterd: Received >>> mount req >>> >>> [2019-06-05 08:52:44.426886] E [MSGID: 106061] >>> [glusterd-mountbroker.c:555:gluster

Re: [Gluster-users] Geo Replication stops replicating

2019-06-04 Thread Kotresh Hiremath Ravishankar
Ccing Sunny, who was investigating a similar issue. On Tue, Jun 4, 2019 at 5:46 PM deepu srinivasan wrote: > Have already added the path in bashrc . Still in faulty state > > On Tue, Jun 4, 2019, 5:27 PM Kotresh Hiremath Ravishankar < > khire...@redhat.com> wrote: > >>

Re: [Gluster-users] Geo Replication stops replicating

2019-06-04 Thread Kotresh Hiremath Ravishankar
09:logerr] Popen: >> /usr/sbin/gluster> 2 : failed with this errno (No such file or directory) > > > On Tue, Jun 4, 2019 at 5:10 PM deepu srinivasan > wrote: > >> Hi >> As discussed I have upgraded gluster from 4.1 to 6.2 version. But the Geo >> replicati

Re: [Gluster-users] Geo Replication stops replicating

2019-05-31 Thread Kotresh Hiremath Ravishankar
is listed in ps aux. Only when i set rsync option to > " " and restart all the process the rsync process is listed in ps aux. > > > On Fri, May 31, 2019 at 4:23 PM Kotresh Hiremath Ravishankar < > khire...@redhat.com> wrote: > >> Yes, rsync config option sh

Re: [Gluster-users] Geo Replication stops replicating

2019-05-31 Thread Kotresh Hiremath Ravishankar
result . > >> 1559298781.338234 write(2, "rsync: link_stat >> \"/tmp/gsyncd-aux-mount-EEJ_sY/.gfid/3fa6aed8-802e-4efe-9903-8bc171176d88\" >> failed: No such file or directory (2)", 128 > > seems like a file is missing ? > > On Fri, May 31, 2019 at 3:25 PM Kotresh H

Re: [Gluster-users] Geo Replication stops replicating

2019-05-31 Thread Kotresh Hiremath Ravishankar
:16 PM deepu srinivasan > wrote: > >> Hi Kotresh >> We have tried the above-mentioned rsync option and we are planning to >> have the version upgrade to 6.0. >> >> On Fri, May 31, 2019 at 11:04 AM Kotresh Hiremath Ravishankar < >> khire...@redhat.com> wrote

Re: [Gluster-users] Geo Replication stops replicating

2019-05-30 Thread Kotresh Hiremath Ravishankar
Hi, this looks like a hang because the stderr buffer filled up with error messages and nothing was reading it. I think this issue is fixed in the latest releases. As a workaround, you can do the following and check if it works. Prerequisite: the rsync version should be > 3.1.0. Workaround: gluster volume

Re: [Gluster-users] [geo-rep] Replication faulty - gsyncd.log OSError: [Errno 13] Permission denied

2019-03-20 Thread Kotresh Hiremath Ravishankar
>> >> [2018-09-25 14:10:39.650958] I [resource(slave >> master/bricks/brick1/brick):1096:connect] GLUSTER: Mounting gluster volume >> locally... >> >> [2018-09-25 14:10:40.729355] I [resource(slave >> master/bricks/brick1/brick):1119:connect] GLUSTER: Mount

Re: [Gluster-users] [Gluster-devel] Bitrot: Time of signing depending on the file size???

2019-03-05 Thread Kotresh Hiremath Ravishankar
s and they already had a signature. I don't know the > reason for this. Maybe the client still keeps the fd open? I opened a bug for > this: > https://bugzilla.redhat.com/show_bug.cgi?id=1685023 > > Regards > David > > On Fri, 1 Mar 2019 at 18:29, Kotresh Hi

Re: [Gluster-users] [Gluster-devel] Bitrot: Time of signing depending on the file size???

2019-03-01 Thread Kotresh Hiremath Ravishankar
Interesting observation! But as discussed in the thread, the bitrot signing process depends on a 2-minute timeout (by default) after the last fd closes. It has no correlation with the size of the file. Did you happen to verify whether the fd was still open for large files for some reason? On Fri, Mar
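One way to check for a lingering open fd on the brick, as a sketch (names in angle brackets are placeholders): note the brick PID from 'gluster volume status <volname>', then list that brick process's open fds:
# ls -l /proc/<brick-pid>/fd | grep <filename>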

[Gluster-users] Elasticsearch on gluster

2018-11-13 Thread Kotresh Hiremath Ravishankar
Hi, I tried setting up elastic search on gluster. Here is a note on it. Hope it helps someone trying to set up the ELK stack on gluster. https://hrkscribbles.blogspot.com/2018/11/elastic-search-on-gluster.html -- Thanks and Regards, Kotresh H R

Re: [Gluster-users] [geo-rep] Replication faulty - gsyncd.log OSError: [Errno 13] Permission denied

2018-09-24 Thread Kotresh Hiremath Ravishankar
eoaccount-ARDW1E > > [2018-09-24 13:51:16.116595] W [glusterfsd.c:1514:cleanup_and_exit] > (-->/lib64/libpthread.so.0(+0x7e25) [0x7fafbc9eee25] > -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x55d5dac5dd65] > -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x55d5dac5db

Re: [Gluster-users] [geo-rep] Replication faulty - gsyncd.log OSError: [Errno 13] Permission denied

2018-09-21 Thread Kotresh Hiremath Ravishankar
The problem occurred on the slave side, and its error was propagated to the master. Generally, any traceback involving repce indicates a problem on the slave. Just check a few lines above in the log to find the slave node the crashed worker is connected to, and get the geo-replication logs there to debug further. On

Re: [Gluster-users] posix set mdata failed, No ctime

2018-09-21 Thread Kotresh Hiremath Ravishankar
You can ignore this error. It is fixed and should be available in the next 4.1.x release. On Sat, 22 Sep 2018, 07:07 Pedro Costa, wrote: > Forgot to mention, I’m running all VMs with 16.04.1-Ubuntu, Kernel > 4.15.0-1023-azure #24 > > > > > > *From:* Pedro Costa > *Sent:* 21 September 2018 10:16

Re: [Gluster-users] 4.1.x geo-replication "changelogs could not be processed completely" issue

2018-09-11 Thread Kotresh Hiremath Ravishankar
Answer inline. On Tue, Sep 11, 2018 at 4:19 PM, Kotte, Christian (Ext) < christian.ko...@novartis.com> wrote: > Hi all, > > > > I use glusterfs 4.1.3 non-root user geo-replication in a cascading setup. > The gsyncd.log on the master is fine, but I have some strange changelog > warnings and

Re: [Gluster-users] GlusterFS 4.1.3, Geo replication unable to setup

2018-09-06 Thread Kotresh Hiremath Ravishankar
Hi Nico, glusterd has crashed on this node. Could you please raise a bug with the core file? If you are finding the geo-rep setup steps difficult, please bring glusterd back up and use the following tool [1] to set up geo-rep, and let us know if it still crashes. [1]

Re: [Gluster-users] Geo-Replication issue

2018-09-06 Thread Kotresh Hiremath Ravishankar
gsync_status_mst_slv] 0-management: /var/lib/glusterd/geo- > replication/glusterdist_gluster-poc-sj_gluster/monitor.status statefile > not present. [No such file or directory] > > > > /Krishna > > > > *From:* Kotresh Hiremath Ravishankar > *Sent:* Thursday, Septem

Re: [Gluster-users] Geo-Replication issue

2018-09-06 Thread Kotresh Hiremath Ravishankar
: geo-replication status > glusterdist gluster-poc-sj::gluster : session is not active > > [2018-09-06 07:56:38.486229] I [MSGID: 106028] [glusterd-geo-rep.c:4903: > glusterd_get_gsync_status_mst_slv] 0-management: /var/lib/glusterd/geo- > replication/glusterdist_gluster-poc-sj_gluster

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-09-06 Thread Kotresh Hiremath Ravishankar
TestInt18.08-b001.t.Z > > du: cannot access ‘/repvol/rflowTestInt18.08-b001.t.Z’: No such file or > directory > > [root@gluster-poc-sj ~]# > > > > File not reached at slave. > > > > /Krishna > > > > *From:* Krishna Verma > *Sent:* Monday, Se

Re: [Gluster-users] Geo-Replication issue

2018-09-06 Thread Kotresh Hiremath Ravishankar
Hi Krishna, the glusterd log file would help here. Thanks, Kotresh HR On Thu, Sep 6, 2018 at 1:02 PM, Krishna Verma wrote: > Hi All, > > > > I am getting issue in geo-replication distributed gluster volume. In a > session status it shows only peer node instead of 2. And I am also not able > to

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-09-03 Thread Kotresh Hiremath Ravishankar
cd(config-get):297:main] : Using > session config file path=/var/lib/glusterd/geo-replication/glusterdist_ > gluster-poc-sj_glusterdist/gsyncd.conf > > [2018-09-03 07:28:01.803858] I [gsyncd(status):297:main] : Using > session config file path=/var/lib/glusterd/geo- > replic

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-09-03 Thread Kotresh Hiremath Ravishankar
t you to please have a look. > > > > /Krishna > > > > > > > > *From:* Kotresh Hiremath Ravishankar > *Sent:* Monday, September 3, 2018 10:19 AM > > *To:* Krishna Verma > *Cc:* Sunny Kumar ; Gluster Users < > gluster-users@gluster.org&g

Re: [Gluster-users] Was: Upgrade to 4.1.2 geo-replication does not work Now: Upgraded to 4.1.3 geo node Faulty

2018-09-02 Thread Kotresh Hiremath Ravishankar
ter.org> on behalf of Marcus Pedersén > *Sent:* 31 August 2018 16:09 > *To:* khire...@redhat.com > > *Cc:* gluster-users@gluster.org > *Subject:* Re: [Gluster-users] Was: Upgrade to 4.1.2 geo-replication does > not work Now: Upgraded to 4.1.3 geo node Faulty > > > I

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-09-02 Thread Kotresh Hiremath Ravishankar
on > > geo-replication.indexing: on > > transport.address-family: inet > > nfs.disable: on > > [root@gluster-poc-noida distvol]# > > > > Please help to fix, I believe its not a normal behavior of gluster rsync. > > > > /Krishna > > *From:* Krishn

Re: [Gluster-users] Was: Upgrade to 4.1.2 geo-replication does not work Now: Upgraded to 4.1.3 geo node Faulty

2018-08-31 Thread Kotresh Hiremath Ravishankar
Hi Marcus, could you attach the full logs? Is the same traceback happening repeatedly? It would be helpful if you attach the corresponding mount log as well. What's the rsync version you are using? Thanks, Kotresh HR On Fri, Aug 31, 2018 at 12:16 PM, Marcus Pedersén wrote: > Hi all, > > I had

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-30 Thread Kotresh Hiremath Ravishankar
there is no distribution and there is only one brick participating in syncing. Could you retest and confirm? > > > /Krishna > > > > > > *From:* Kotresh Hiremath Ravishankar > *Sent:* Thursday, August 30, 2018 3:20 PM > > *To:* Krishna Verma > *Cc:* Sunny Kumar

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-30 Thread Kotresh Hiremath Ravishankar
e count like 3*3 or > 4*3 :- Are you referring to creating a distributed volume with 3 master > nodes and 3 slave nodes? > Yes, that's correct. Please do the test with this. I recommend running the actual workload for which you are planning to use gluster instead of copying a 1GB file and

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-30 Thread Kotresh Hiremath Ravishankar
glusterep /data/gluster/gv0 root > ssh://gluster-poc-sj::glusterep gluster-poc-sj Passive > N/A N/A > > [root@gluster-poc-noida gluster]# > > > > Thanks in advance for your all time support. > > > > /Krishna > > > > *From:* Kotre

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-29 Thread Kotresh Hiremath Ravishankar
ily: inet > > nfs.disable: on > > performance.client-io-threads: off > > geo-replication.indexing: on > > geo-replication.ignore-pid-check: on > > changelog.changelog: on > > [root@gluster-poc-noida glusterfs]# > > > > Could you please help me in that al

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-28 Thread Kotresh Hiremath Ravishankar
a ~]# > > > > Does it look like what we exactly need, or do I need to create any more links? > How do I get the “libgfchangelog.so” file if it is missing? > > > > /Krishna > > > > *From:* Kotresh Hiremath Ravishankar > *Sent:* Tuesday, August 28, 2018 4:22 PM > *To:*

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-28 Thread Kotresh Hiremath Ravishankar
t; > gluster-poc-noida glusterep /data/gluster/gv0 root > gluster-poc-sj::glusterep N/A Faulty N/A N/A > > noi-poc-gluster glusterep /data/gluster/gv0 root > gluster-poc-sj::glusterep N/A Faulty > N/A N/A >

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-28 Thread Kotresh Hiremath Ravishankar
Hi Krishna, since your libraries are in /usr/lib64, you should be running #ldconfig /usr/lib64 and confirming that the below command lists the library: #ldconfig -p | grep libgfchangelog On Tue, Aug 28, 2018 at 3:52 PM, Sunny Kumar wrote: > can you do ldconfig /usr/local/lib and share the output of
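A sketch of that check with the kind of output to expect (the exact library path and arch line will vary):
# ldconfig /usr/lib64
# ldconfig -p | grep libgfchangelog
libgfchangelog.so.0 (libc6,x86-64) => /usr/lib64/libgfchangelog.so.0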

Re: [Gluster-users] Question to utime feature for release 4.1.0

2018-08-16 Thread Kotresh Hiremath Ravishankar
efresh-timeout: 10 performance.read-ahead: > off performance.write-behind-window-size: 4MB performance.write-behind: > on storage.build-pgfid: on auth.ssl-allow: * client.ssl: off server.ssl: > off changelog.changelog: on features.bitrot: on features.scrub: > Active features.scrub-freq: daily cluster

Re: [Gluster-users] Question to utime feature for release 4.1.0

2018-08-15 Thread Kotresh Hiremath Ravishankar
Hi David, the feature provides consistent time attributes (atime, ctime, mtime) across the replica set. The feature is enabled with the following two options: gluster vol set utime on and gluster vol set ctime on. The feature currently does not honour mount options related to time attributes such as
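With a placeholder volume name for the one elided in those commands:
# gluster volume set <volname> utime on
# gluster volume set <volname> ctime on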

Re: [Gluster-users] Geo-replication stops after 4-5 hours

2018-08-02 Thread Kotresh Hiremath Ravishankar
## > Sent from my phone > #### > > On 2 Aug 2018 08:07, Kotresh Hiremath Ravishankar < > khire...@redhat.com> wrote: > Could you look for any rsync processes hung on the master or slave? > > On Thu, Aug 2, 2018 at 11:18 AM, Marcus Pedersén > wrote: >

Re: [Gluster-users] Geo-replication stops after 4-5 hours

2018-08-02 Thread Kotresh Hiremath Ravishankar
## > Marcus Pedersén > System administrator > Interbull Centre > > Sent from my phone > #### > > > On 2 Aug 2018 06:13, Kotresh Hiremath Ravishankar < > khire...@redhat.com> wrote: > > Hi Marcus, > > What's the rsync v

Re: [Gluster-users] Geo-replication stops after 4-5 hours

2018-08-01 Thread Kotresh Hiremath Ravishankar
Hi Marcus, What's the rsync version being used? Thanks, Kotresh HR On Thu, Aug 2, 2018 at 1:48 AM, Marcus Pedersén wrote: > Hi all! > > I upgraded from 3.12.9 to 4.1.1 and had problems with geo-replication. > > With help from the list with some sym links and so on (handled in another >

Re: [Gluster-users] Gluster 3.12.11 geo-replication connection to peer is broken

2018-07-23 Thread Kotresh Hiremath Ravishankar
Hi Pablo, the geo-rep status should go to Faulty if the connection to the peer is broken. Do the node log files fail with the same error? Are these log entries repeating? Does stopping and starting geo-rep give the same error? Thanks, Kotresh HR On Tue, Jul 24, 2018 at 1:47 AM, Pablo J Rebollo Sosa wrote: > Hi,

Re: [Gluster-users] georeplication woes

2018-07-23 Thread Kotresh Hiremath Ravishankar
Looks like gsyncd on the slave is failing for some reason. Please run the below command on the master: #ssh -i /var/lib/glusterd/geo-replication/secret.pem georep@gluster-4.glstr It should run gsyncd on the slave. If there is an error, it should be fixed. Please share the output of the above command. Regards,

Re: [Gluster-users] Upgrade to 4.1.1 geo-replication does not work

2018-07-17 Thread Kotresh Hiremath Ravishankar
-command-dir Thanks, Kotresh HR On Wed, Jul 18, 2018 at 9:28 AM, Kotresh Hiremath Ravishankar < khire...@redhat.com> wrote: > Hi Marcus, > > I am testing out 4.1 myself and I will have some update today. > For this particular traceback, gsyncd is not able to find the librar

Re: [Gluster-users] Upgrade to 4.1.1 geo-replication does not work

2018-07-17 Thread Kotresh Hiremath Ravishankar
0 > [2018-07-16 19:35:16.828056] I [gsyncd(worker /urd-gds/gluster):297:main] > : Using session config file path=/var/lib/glusterd/geo- > replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf > [2018-07-16 19:35:16.828066] I [gsyncd(agent /urd-gds/gluster):297:mai

Re: [Gluster-users] Upgrade to 4.1.1 geo-replication does not work

2018-07-13 Thread Kotresh Hiremath Ravishankar
! > > > Regards > > Marcus > > > -- > *From:* gluster-users-boun...@gluster.org gluster.org> on behalf of Marcus Pedersén > *Sent:* 12 July 2018 08:51 > *To:* Kotresh Hiremath Ravishankar > *Cc:* gluster-users@gluster.org > *Subject:

Re: [Gluster-users] Upgrade to 4.1.1 geo-replication does not work

2018-07-11 Thread Kotresh Hiremath Ravishankar
Hi Marcus, I think the fix [1] is needed in 4.1. Could you please try this out and let us know if it works for you? [1] https://review.gluster.org/#/c/20207/ Thanks, Kotresh HR On Thu, Jul 12, 2018 at 1:49 AM, Marcus Pedersén wrote: > Hi all, > > I have upgraded from 3.12.9 to 4.1.1 and been

Re: [Gluster-users] Old georep files in /var/lib/misc/glusterfsd

2018-06-25 Thread Kotresh Hiremath Ravishankar
Hi Mabi, You can safely delete old files under /var/lib/misc/glusterfsd. Thanks, Kotresh On Mon, Jun 25, 2018 at 7:30 PM, mabi wrote: > Hi, > > In the past I was using geo-replication but unconfigured it on my two > volumes by using: > > gluster volume geo-replication ... stop > gluster
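A cautious sketch, listing before removing (the old session directory name is a placeholder):
# ls /var/lib/misc/glusterfsd
# rm -rf /var/lib/misc/glusterfsd/<old-geo-rep-session-dir>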

Re: [Gluster-users] Need Help to get GEO Replication working - Error: Please check gsync config file. Unable to get statefile's name

2018-06-20 Thread Kotresh Hiremath Ravishankar
l Gruber, Anton Gruber > > Tax number: 141/151/51801 > > > On Mon, 18 June 2018 at 11:30, Kotresh Hiremath Ravishankar < > khire...@redhat.com> wrote: > >> Hi Alex, >> >> Sorry, I lost the context. >> >> Which gluster version are you u

Re: [Gluster-users] Geo-Replication memory leak on slave node

2018-06-20 Thread Kotresh Hiremath Ravishankar
>> this when the system failed. The system was totally unresponsive and >> required a cold power off and then power on in order to recover the server. >> >> Many thanks for your help. >> >> Mark Betham. >> >> On 11 June 2018 at 05:53, Kotresh Hirema

Re: [Gluster-users] Need Help to get GEO Replication working - Error: Please check gsync config file. Unable to get statefile's name

2018-06-18 Thread Kotresh Hiremath Ravishankar
Hi Alex, sorry, I lost the context. Which gluster version are you using? Thanks, Kotresh HR On Sat, Jun 16, 2018 at 2:57 PM, Axel Gruber wrote: > Hello > > I think it's better to open a new thread: > > > I tried to install Geo Replication again - set up the SSH key - prepared > session broker and

Re: [Gluster-users] Understand Geo Replication of a big Gluster

2018-06-14 Thread Kotresh Hiremath Ravishankar
Hi Axel, no, geo-replication can't be used without SSH. It's not configurable. Geo-rep master nodes connect to the slave and transfer data over SSH. I assume you have created the geo-rep session before starting it. In the command above, the syntax is incorrect. It should use "::" and not ":/" gluster
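For illustration with placeholder names, the "::" form of a geo-rep command looks like:
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start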

Re: [Gluster-users] Understand Geo Replication of a big Gluster

2018-06-14 Thread Kotresh Hiremath Ravishankar
Hi Axel, you don't need a single server with 140 TB capacity for replication. The slave (backup) is also a gluster volume, similar to the master volume. So create the slave (backup) gluster volume with 4 or more nodes to meet the capacity of the master, and set up geo-rep between these two volumes.

Re: [Gluster-users] Geo-Replication memory leak on slave node

2018-06-10 Thread Kotresh Hiremath Ravishankar
s-cli-3.12.9-1.el7.x86_64 >> python2-gluster-3.12.9-1.el7.x86_64 >> glusterfs-rdma-3.12.9-1.el7.x86_64 >> glusterfs-fuse-3.12.9-1.el7.x86_64 >> >> I have also attached another screenshot showing the memory usage from the >> Gluster slave for the last 48 hours.

Re: [Gluster-users] Geo-Replication memory leak on slave node

2018-06-06 Thread Kotresh Hiremath Ravishankar
Hi Mark, a few questions. 1. Is this traceback consistently hit? I just wanted to confirm whether it's transient, occurring once in a while before getting back to normal. 2. Please upload the complete geo-rep logs from both master and slave. Thanks, Kotresh HR On Wed, Jun 6, 2018 at 7:10 PM, Mark

Re: [Gluster-users] Geo-Replication memory leak on slave node

2018-06-06 Thread Kotresh Hiremath Ravishankar
Hi Mark, a few questions. 1. Is this traceback consistently hit? I just wanted to confirm whether it's transient, occurring once in a while before getting back to normal. 2. Please upload the complete geo-rep logs from both master and slave. 3. Are the gluster versions the same across master and slave?

Re: [Gluster-users] New Style Replication in Version 4

2018-05-01 Thread Kotresh Hiremath Ravishankar
Hi John Hearns, thanks for considering gluster. The feature you are requesting is Active-Active, and it is not available with geo-replication in 4.0. So the use case can't be achieved using a single gluster volume. But your use case can be achieved if we keep two volumes, one for analysis file and

Re: [Gluster-users] trashcan on dist. repl. volume with geo-replication

2018-03-12 Thread Kotresh Hiremath Ravishankar
Hi Dietmar, I am trying to understand the problem and have a few questions. 1. Is the trashcan enabled only on the master volume? 2. Is the 'rm -rf' done on the master volume synced to the slave? 3. If the trashcan is disabled, does the issue go away? The geo-rep error just says that it failed to create the directory

Re: [Gluster-users] geo replication

2018-03-06 Thread Kotresh Hiremath Ravishankar
Hi, it is failing to get the virtual xattr value of "trusted.glusterfs.volume-mark" at the master volume root. Could you share the geo-replication logs under /var/log/glusterfs/geo-replication/*.gluster.log? I think if there are any transient errors, stopping geo-rep and restarting the master volume
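That virtual xattr can be queried from a client mount of the master volume, as a sketch (the mount point is a placeholder):
# getfattr -n trusted.glusterfs.volume-mark -e hex /<master-mountpoint>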

Re: [Gluster-users] Geo replication snapshot error

2018-02-21 Thread Kotresh Hiremath Ravishankar
Hi, thanks for reporting the issue. This seems to be a bug. Could you please raise a bug at https://bugzilla.redhat.com/ under community/glusterfs? We will take a look at it and fix it. Thanks, Kotresh HR On Wed, Feb 21, 2018 at 2:01 PM, Marcus Pedersén wrote: > Hi

Re: [Gluster-users] georeplication over ssh.

2018-02-07 Thread Kotresh Hiremath Ravishankar
Ccing glusterd team for information On Thu, Feb 8, 2018 at 10:02 AM, Alvin Starr <al...@netvel.net> wrote: > That makes for an interesting problem. > > I cannot open port 24007 to allow RPC access. > > On 02/07/2018 11:29 PM, Kotresh Hiremath Ravishankar wrote: >

Re: [Gluster-users] georeplication over ssh.

2018-02-07 Thread Kotresh Hiremath Ravishankar
Hi Alvin, yes, geo-replication sync happens via SSH. The server port 24007 belongs to glusterd. glusterd listens on this port, and all volume management communication happens via RPC. Thanks, Kotresh HR On Wed, Feb 7, 2018 at 8:29 PM, Alvin Starr wrote: > I am running

Re: [Gluster-users] geo-replication

2018-02-07 Thread Kotresh Hiremath Ravishankar
> Many thanks in advance! > > Regards > Marcus > > > On Wed, Feb 07, 2018 at 06:39:20PM +0530, Kotresh Hiremath Ravishankar > wrote: > > We are happy to help you out. Please find the answers inline. > > > > On Tue, Feb 6, 2018 at 4:39 PM, Marcus Pedersén <marcu

Re: [Gluster-users] add geo-replication "passive" node after node replacement

2018-02-07 Thread Kotresh Hiremath Ravishankar
Hi, when S3 is added to the master volume from a new node, the following commands should be run to generate and distribute ssh keys: 1. Generate ssh keys from the new node: #gluster system:: execute gsec_create 2. Push those ssh keys of the new node to the slave: #gluster vol geo-rep :: create push-pem
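Spelled out with placeholder session names (the 'force' flag is an assumption for the case where the session already exists):
# gluster system:: execute gsec_create
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> create push-pem force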

Re: [Gluster-users] geo-replication

2018-02-07 Thread Kotresh Hiremath Ravishankar
Answers inline. On Tue, Feb 6, 2018 at 6:24 PM, Marcus Pedersén wrote: > Hi again, > I made some more tests and the behavior I get is that if any of > the slaves are down the geo-replication stops working. > Is this the way distributed volumes work, if one server goes

Re: [Gluster-users] geo-replication

2018-02-07 Thread Kotresh Hiremath Ravishankar
We are happy to help you out. Please find the answers inline. On Tue, Feb 6, 2018 at 4:39 PM, Marcus Pedersén wrote: > Hi all, > > I am planning my new gluster system and tested things out in > a bunch of virtual machines. > I need a bit of help to understand how

Re: [Gluster-users] geo-replication command rsync returned with 3

2018-02-06 Thread Kotresh Hiremath Ravishankar
Hi, as a quick workaround to get geo-replication working, please configure the following option: gluster vol geo-replication :: config access_mount true The above option will not do the lazy umount, and as a result, all the master and slave volume mounts maintained by geo-replication can be
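With placeholder names for the elided session parts:
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config access_mount true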

Re: [Gluster-users] geo-replication initial setup with existing data

2018-01-30 Thread Kotresh Hiremath Ravishankar
Hi, geo-replication expects the gfids (unique identifiers similar to inode numbers in backend file systems) to be the same for a file on both the master and slave gluster volumes. If the data is copied directly by means other than geo-replication, the gfids will be different. The crashes you are seeing are

Re: [Gluster-users] geo-replication command rsync returned with 3

2018-01-24 Thread Kotresh Hiremath Ravishankar
It is clear that rsync is failing. Are the rsync versions on all master and slave nodes the same? I have seen that cause problems sometimes. -Kotresh HR On Wed, Jan 24, 2018 at 10:29 PM, Dietmar Putz wrote: > Hi all, > i have made some tests on the latest Ubuntu

Re: [Gluster-users] Deploying geo-replication to local peer

2018-01-16 Thread Kotresh Hiremath Ravishankar
Hi Viktor, Answers inline On Wed, Jan 17, 2018 at 3:46 AM, Viktor Nosov wrote: > Hi, > > I'm looking for glusterfs feature that can be used to transform data > between > volumes of different types provisioned on the same nodes. > It could be, for example, transformation

Re: [Gluster-users] active-active georeplication?

2017-10-24 Thread Kotresh Hiremath Ravishankar
Hi, no, gluster doesn't support active-active geo-replication. It's not planned in the near future. We will let you know when it's planned. Thanks, Kotresh HR On Tue, Oct 24, 2017 at 11:19 AM, atris adam wrote: > hi everybody, > > Have glusterfs released a feature named

Re: [Gluster-users] how to verify bitrot signed file manually?

2017-09-29 Thread Kotresh Hiremath Ravishankar
Hi Amudhan, sorry for the late response; I was busy with other things. You are right, bitrot uses sha256 for the checksum. If file-1 and file-2 are marked bad, the I/O should error out with EIO. If that is not happening, we need to look further into it. But what are the file contents of file-1 and
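For a manual cross-check of a signature on a brick, a sketch (the brick path is a placeholder; note the signature xattr embeds a small header before the sha256 bytes):
# sha256sum /<brick-path>/file-1
# getfattr -n trusted.bit-rot.signature -e hex /<brick-path>/file-1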

Re: [Gluster-users] Advice needed for geo replication

2017-05-05 Thread Kotresh Hiremath Ravishankar
Hi Felipe, all the observations you have made are correct. AFR is synchronous replication where the client replicates the data, which is limited by the speed of the slowest node (in your case the HDD node). AFR replicates each brick and is part of a single volume. In the end, you will have

Re: [Gluster-users] [Gluster-devel] Announcing release 3.11 : Scope, schedule and feature tracking

2017-04-25 Thread Kotresh Hiremath Ravishankar
This is an RFE, so it would be available from 3.11 and would not be backported to 3.10.x. Thanks and Regards, Kotresh H R - Original Message - > From: "Serkan Çoban" <cobanser...@gmail.com> > To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com> > Cc: &quo

Re: [Gluster-users] [Gluster-devel] Announcing release 3.11 : Scope, schedule and feature tracking

2017-04-25 Thread Kotresh Hiremath Ravishankar
Hi https://github.com/gluster/glusterfs/issues/188 is merged in master and needs to go in 3.11 Thanks and Regards, Kotresh H R - Original Message - > From: "Kaushal M" > To: "Shyam" > Cc: gluster-users@gluster.org, "Gluster Devel"

Re: [Gluster-users] High load on glusterfsd process

2017-04-25 Thread Kotresh Hiremath Ravishankar
impossible to upgrade to the latest version, at least 3.7.20 would do. It has minimal conflicts. I can help you out with that. Thanks and Regards, Kotresh H R - Original Message - > From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com> > To: "Kotresh Hiremath Ravishankar&

Re: [Gluster-users] High load on glusterfsd process

2017-04-24 Thread Kotresh Hiremath Ravishankar
..@redhat.com> > Cc: "Gluster Devel" <gluster-de...@gluster.org>, "gluster-users" > <gluster-users@gluster.org>, "Kotresh Hiremath > Ravishankar" <khire...@redhat.com> > Sent: Monday, April 24, 2017 11:30:57 AM > Subject:

Re: [Gluster-users] Geo replication stuck (rsync: link_stat "(unreachable)")

2017-04-16 Thread Kotresh Hiremath Ravishankar
Answers inline. Thanks and Regards, Kotresh H R - Original Message - > From: "mabi" <m...@protonmail.ch> > To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com> > Cc: "Gluster Users" <gluster-users@gluster.org> > Sent: Thurs

Re: [Gluster-users] Geo replication stuck (rsync: link_stat "(unreachable)")

2017-04-12 Thread Kotresh Hiremath Ravishankar
might need to clean up the problematic directory on the slave from the backend. Thanks and Regards, Kotresh H R - Original Message - > From: "mabi" <m...@protonmail.ch> > To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com> > Cc: "Gluster Users

Re: [Gluster-users] Geo replication stuck (rsync: link_stat "(unreachable)")

2017-04-11 Thread Kotresh Hiremath Ravishankar
Hi, then please set the following rsync config and let us know if it helps: gluster vol geo-rep :: config rsync-options "--ignore-missing-args" Thanks and Regards, Kotresh H R - Original Message - > From: "mabi" <m...@protonmail.ch> > To: "
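With placeholder names for the elided session parts:
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config rsync-options "--ignore-missing-args"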

Re: [Gluster-users] Geo replication stuck (rsync: link_stat "(unreachable)")

2017-04-09 Thread Kotresh Hiremath Ravishankar
Hi Mabi, What's the rsync version being used? Thanks and Regards, Kotresh H R - Original Message - > From: "mabi" > To: "Gluster Users" > Sent: Saturday, April 8, 2017 4:20:25 PM > Subject: [Gluster-users] Geo replication stuck (rsync:

Re: [Gluster-users] Geo-Replication not detecting changes

2017-03-30 Thread Kotresh Hiremath Ravishankar
ng iptables on both master and slave nodes and check again? #iptables -F Thanks and Regards, Kotresh H R - Original Message - > From: "Jeremiah Rothschild" <jerem...@franz.com> > To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com> > Cc: gluster-user

Re: [Gluster-users] Geo-Replication not detecting changes

2017-03-29 Thread Kotresh Hiremath Ravishankar
Hi Jeremiah, that's really strange. Please enable DEBUG logs for geo-replication as below and send us the logs under "/var/log/glusterfs/geo-replication//*.log" from the master node: gluster vol geo-rep :: config log-level DEBUG Geo-rep has two ways to detect changes. 1. changelog (Changelog
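With placeholder names for the elided session parts:
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config log-level DEBUG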

Re: [Gluster-users] why is geo-rep so bloody impossible?

2017-02-20 Thread Kotresh Hiremath Ravishankar
This could happen if two copies of the same public ssh key, one with "command=..." and one without, were distributed to the slave's ~/.ssh/authorized_keys. Please check and remove the one without "command=..."; it should work then. For the passwordless SSH connection, a separate ssh key pair should be created. Thanks and
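A quick way to audit the slave's authorized_keys for this, as a sketch (the geo-rep user account is a placeholder):
# grep -c 'command=' /home/<georep-user>/.ssh/authorized_keys
# grep -v 'command=' /home/<georep-user>/.ssh/authorized_keys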

Re: [Gluster-users] should geo repl pick up changes to a vol?

2017-02-08 Thread Kotresh Hiremath Ravishankar
Hi lejeczek, Try stop force. gluster vol geo-rep :: stop force Thanks and Regards, Kotresh H R - Original Message - > From: "lejeczek" <pelj...@yahoo.co.uk> > To: "Kotresh Hiremath Ravishankar" <khire...@redhat.c

Re: [Gluster-users] should geo repl pick up changes to a vol?

2017-02-02 Thread Kotresh Hiremath Ravishankar
Hi, the following steps need to be followed when a brick is added from a new node on the master. 1. Stop geo-rep. 2. Run the following command on the master node where the passwordless SSH connection is configured, in order to create a common pem pub file: # gluster system:: execute gsec_create
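A sketch of those first two steps with placeholder session names (the remaining steps are truncated in this snippet):
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> stop
# gluster system:: execute gsec_create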

Re: [Gluster-users] geo repl status: faulty & errors

2017-02-02 Thread Kotresh Hiremath Ravishankar
Answers inline. Thanks and Regards, Kotresh H R - Original Message - > From: "lejeczek" > To: gluster-users@gluster.org > Sent: Wednesday, February 1, 2017 5:48:55 PM > Subject: [Gluster-users] geo repl status: faulty & errors > > hi everone, > > trying geo-repl

Re: [Gluster-users] Geo-replication failed to delete from slave file partially written to master volume.

2016-12-26 Thread Kotresh Hiremath Ravishankar
miss syncing a few files in an ENOSPC scenario. Thanks and Regards, Kotresh H R - Original Message - > From: "Viktor Nosov" <vno...@stonefly.com> > To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com> > Cc: gluster-users@gluster.org > Sent: Satu

Re: [Gluster-users] Geo-replication failed to delete from slave file partially written to master volume.

2016-12-09 Thread Kotresh Hiremath Ravishankar
rom: "Viktor Nosov" <vno...@stonefly.com> > To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com> > Cc: gluster-users@gluster.org > Sent: Wednesday, December 7, 2016 10:48:52 PM > Subject: RE: [Gluster-users] Geo-replication failed to delete from slave file

Re: [Gluster-users] Geo-replication failed to delete from slave file partially written to master volume.

2016-12-06 Thread Kotresh Hiremath Ravishankar
Hi Viktor, please share the geo-replication slave mount logs from the slave nodes. Thanks and Regards, Kotresh H R - Original Message - > From: "Viktor Nosov" > To: gluster-users@gluster.org > Cc: vno...@stonefly.com > Sent: Tuesday, December 6, 2016 7:13:22 AM > Subject:

Re: [Gluster-users] How to force remove geo session?

2016-11-20 Thread Kotresh Hiremath Ravishankar
Hi, glad you could get it rectified. But having the same slave volume for two different geo-rep sessions is never recommended; the two sessions end up writing to the same slave node. If required, the configuration should always be one master volume to many different slave volumes. If ssh-keys are deleted on
