Hi John,
There seems to be a bug in the CLI command-line parsing which does not allow
options prefixed with a double hyphen, apart from a few. As a workaround
for now, follow these steps.
1. Stop geo-replication session.
2. Add the following at the end of the file
Hi Dave,
Are you hitting the bug below, which would explain why you are not able to sync symlinks?
https://bugzilla.redhat.com/show_bug.cgi?id=1105283
Does the geo-rep status say "Not Started"?
Thanks and Regards,
Kotresh H R
- Original Message -
From: David Gibbons david.c.gibb...@gmail.com
To: gluster-users
Thanks and Regards,
Kotresh H R
- Original Message -
From: David Gibbons david.c.gibb...@gmail.com
To: Kotresh Hiremath Ravishankar khire...@redhat.com
Cc: gluster-users Gluster-users@gluster.org, vno...@stonefly.com
Sent: Tuesday, December 9, 2014 6:16:03 PM
Subject: Re: [Gluster-users] Geo-Replication
Kotresh H R
- Original Message -
From: David Gibbons david.c.gibb...@gmail.com
To: Kotresh Hiremath Ravishankar khire...@redhat.com
Cc: gluster-users Gluster-users@gluster.org, vno...@stonefly.com
Sent: Wednesday, December 10, 2014 6:12:00 PM
Subject: Re: [Gluster-users] Geo-Replication Issue
Hi,
The setup is failing during the compatibility test between the master and slave
clusters.
The gverify.sh script is failing to get the master volume details.
Could you run the following and paste the output here?
bash -x /usr/local/libexec/glusterfs/gverify.sh master-vol-name root
Hi Cyril,
From the brick logs, it seems the changelog-notifier thread has been killed for
some reason,
as notify is failing with EPIPE.
Try the following. It should probably help:
1. Stop geo-replication.
2. Disable changelog: gluster vol set master-vol-name changelog.changelog off
3. Enable changelog again: gluster vol set master-vol-name changelog.changelog on
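Putting the steps together, a minimal sketch of the whole sequence (the slave host and volume names are placeholders, not from the original mail):
# gluster volume geo-replication master-vol-name <slavehost>::<slavevol> stop
# gluster volume set master-vol-name changelog.changelog off
# gluster volume set master-vol-name changelog.changelog on
# gluster volume geo-replication master-vol-name <slavehost>::<slavevol> start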
Yes, geo-rep internally uses a FUSE mount.
I will explore further and get back to you
if there is a way.
Thanks and Regards,
Kotresh H R
- Original Message -
From: Cyril N PEPONNET (Cyril) cyril.pepon...@alcatel-lucent.com
To: Kotresh Hiremath Ravishankar khire...@redhat.com
Cc: gluster
Hi Cyril,
That's great that you got it working!
Sorry, you were already on the latest 3.7 :)
Thanks and Regards,
Kotresh H R
- Original Message -
From: Cyril N PEPONNET (Cyril) cyril.pepon...@alcatel-lucent.com
To: Kotresh Hiremath Ravishankar khire...@redhat.com
Cc: gluster-users gluster
Hi Cyril,
Need some clarifications. Comments inline.
Thanks and Regards,
Kotresh H R
- Original Message -
From: Cyril N PEPONNET (Cyril) cyril.pepon...@alcatel-lucent.com
To: Kotresh Hiremath Ravishankar khire...@redhat.com
Cc: gluster-users gluster-users@gluster.org
Sent: Tuesday
Hi Cyril,
Replies inline.
Thanks and Regards,
Kotresh H R
- Original Message -
From: Cyril N PEPONNET (Cyril) cyril.pepon...@alcatel-lucent.com
To: Kotresh Hiremath Ravishankar khire...@redhat.com
Cc: gluster-users gluster-users@gluster.org
Sent: Wednesday, May 27, 2015 9:28:00 PM
Hi Cyril,
Could you please attach the geo-replication logs?
Thanks and Regards,
Kotresh H R
- Original Message -
From: Cyril N PEPONNET (Cyril) cyril.pepon...@alcatel-lucent.com
To: Kotresh Hiremath Ravishankar khire...@redhat.com
Cc: gluster-users gluster-users@gluster.org
Sent
Hi Wodel,
Is the sync mode tar over SSH (i.e., config use_tarssh is true)?
If yes, there is a known issue with it, and a patch is already up in master.
But it can be resolved in either of two ways (see the sketch below).
1. If the required sync mode is tar over SSH, just disable sync_xattrs, which is true
by default.
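For the first option, the config change would presumably look like this (a sketch; the session names are placeholders and the option name is taken from above):
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config sync_xattrs false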
Hi Cyril,
Answers inline
Thanks and Regards,
Kotresh H R
- Original Message -
From: Cyril N PEPONNET (Cyril) cyril.pepon...@alcatel-lucent.com
To: Kotresh Hiremath Ravishankar khire...@redhat.com
Cc: gluster-users gluster-users@gluster.org
Sent: Friday, May 22, 2015 9:34:47 PM
Hi Marco,
'gf_changelog_register' is an API exposed from the shared library
'libgfchangelog.so'.
Please check whether 'libgfchangelog.so' is available to the linker using the
following command.
#ldconfig -p | grep libgfchangelog
If it is not found, please find where libgfchangelog.so is located.
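A sketch of locating and registering the library, assuming it turns up under /usr/lib64 (a later reply in this archive uses the same ldconfig approach):
# find /usr -name 'libgfchangelog.so*'
# ldconfig /usr/lib64
# ldconfig -p | grep libgfchangelog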
Great, hope that works. Let's see.
Thanks and Regards,
Kotresh H R
- Original Message -
From: Cyril N PEPONNET (Cyril) cyril.pepon...@alcatel-lucent.com
To: Kotresh Hiremath Ravishankar khire...@redhat.com
Cc: gluster-users gluster-users@gluster.org
Sent: Friday, May 22, 2015 5
Hi John,
Could you share which version of Gluster you are running?
Thanks and Regards,
Kotresh H R
- Original Message -
From: John Gardeniers jgardeni...@objectmastery.com
To: Gluster-users@gluster.org List gluster-users@gluster.org
Sent: Monday, July 13, 2015 3:50:25 AM
Subject:
Hi Milos,
Geo-replication expects the gfids of files on the master and slaves to be the same for
data syncing.
Interfering with the geo-rep sync through any manual method that could change
the gfids is not
recommended, and data for those files will not be synced.
To bring things back to a sane state,
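The recovery steps are truncated above, but to illustrate the gfid requirement: the gfid of a file can be compared on the master and slave brick backends (the brick path below is hypothetical), and geo-rep can sync the file only if both sides report the same value.
# getfattr -n trusted.gfid -e hex /bricks/brick1/dir1/file1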
Hi Aravinda,
I used it yesterday. It greatly simplifies the geo-rep setup.
It would be great if it were enhanced to troubleshoot what's
wrong in an already broken setup.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Aravinda"
> To: "Gluster Devel"
> From: "vyyy杨雨阳" <yuyangy...@ctrip.com>
> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
> Cc: "Saravanakumar Arumugam" <sarum...@redhat.com>,
> Gluster-users@gluster.org, "Aravinda Vishwanathapura Krishna
> Murthy" <avish..
Comments inline
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "vyyy杨雨阳" <yuyangy...@ctrip.com>
> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
> Cc: "Saravanakumar Arumugam" <sarum...@redhat.com>,
>
Hi,
Thanks for reporting the issue. Please raise a bug so we don't lose track
of this issue.
Also, please upload the geo-replication logs and glusterd logs. We will look into it.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "ML mail"
> To:
Added gluster-users.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
> To: "Gluster Devel" <gluster-de...@gluster.org>
> Sent: Monday, March 7, 2016 3:03:08 PM
> Subject: [Gl
mugam" <sarum...@redhat.com>,
> Gluster-users@gluster.org, "Aravinda Vishwanathapura Krishna
> Murthy" <avish...@redhat.com>, "Kotresh Hiremath Ravishankar"
> <khire...@redhat.com>
> Sent: Thursday, May 19, 2016 2:15:34 PM
> Subject: Re: Re: [Glust
/lib/glusterd/geo-replication' i.e.
(<mastervol>_<slavehost>_<slavevol>)
Please check and let us know.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "vyyy杨雨阳" <yuyangy...@ctrip.com>
> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
> Cc: "Saravanakum
this master volume or only one?
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "vyyy杨雨阳" <yuyangy...@ctrip.com>
> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
> Cc: "Saravanakumar Arumugam" <sarum..
Hi Brian,
Thanks for reporting the issue.
Could you please post the geo-replication logs?
It would help us find out why geo-replication has failed to sync.
You can find master geo-rep logs under /var/log/glusterfs/geo-replication
and slave logs under /var/log/glusterfs/geo-replication-slaves
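For example, to follow the master-side log of a session live (the session directory name is a placeholder):
# tail -f /var/log/glusterfs/geo-replication/<session>/*.log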
Answers inline
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "vyyy杨雨阳" <yuyangy...@ctrip.com>
> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
> Cc: "Saravanakumar Arumugam" <sarum...@redhat.com>,
>
listed (unless it has become corrupted again).
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Ashish Pandey" <aspan...@redhat.com>
> To: "itlinux_team" <itli...@imppc.org>
> Cc: gluster-users@gluster.org, "Kotresh Hiremat
Hi,
You don't need to manually delete files from the quarantine directory.
Bitrot should take care of it.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "itlinux_team" <itli...@imppc.org>
> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.c
Hi,
The following steps need to be followed when a brick is added from a new node on the
master.
1. Stop geo-rep.
2. Run the following command on the master node where the passwordless SSH
connection is configured, in order to create a common pem pub file:
# gluster system:: execute gsec_create
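The remaining steps are cut off here; based on the similar procedure quoted later in this archive, the continuation is roughly as follows (the names are placeholders, not the original text):
3. Push the pem keys to the slave and restart the session:
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> create push-pem force
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start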
Answers inline
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "lejeczek"
> To: gluster-users@gluster.org
> Sent: Wednesday, February 1, 2017 5:48:55 PM
> Subject: [Gluster-users] geo repl status: faulty & errors
>
> hi everone,
>
> trying geo-repl
Hi lejeczek,
Try stop force:
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> stop force
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "lejeczek" <pelj...@yahoo.co.uk>
> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.c
This could happen if two copies of the same ssh pub key, one with "command=..." and one
without,
were distributed to the slave's ~/.ssh/authorized_keys. Please check and remove the one
without "command=..".
It should then work. For the passwordless SSH connection, a separate ssh key pair should
be created.
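A quick way to inspect the slave's authorized_keys for such duplicates (a sketch, assuming the session runs as root):
# grep gsyncd /root/.ssh/authorized_keys
The entries distributed by geo-rep carry a command="...gsyncd" prefix; any bare copy of the same key is the one to remove.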
Thanks and Regards,
Kotresh H R
ng happens. If so, we need to figure out who holds this fd for such a
long time.
We also need to figure out whether this issue is specific to EC volumes.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Amudhan P" <amudha...@gmail.com>
> To: "Kotresh Hirem
Hi Amudhan,
Thanks for testing out the bitrot feature and sorry for the delayed response.
Please find the answers inline.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Amudhan P"
> To: "Gluster Users"
> Sent: Friday,
for such a long time.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Amudhan P" <amudha...@gmail.com>
> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
> Sent: Wednesday, September 21, 2016 8:15:33 PM
> Subject: Re: [G
-
> From: "Amudhan P" <amudha...@gmail.com>
> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
> Cc: "Gluster Users" <gluster-users@gluster.org>
> Sent: Thursday, September 22, 2016 11:25:28 AM
> Subject: Re: 3.8.3 Bitrot signatur
a look at it.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Amudhan P" <amudha...@gmail.com>
> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
> Cc: "Gluster Users" <gluster-users@gluster.org>
> Sent: Thursda
gfstst-node5:/media/disk2/brick2/data/G$ time md5sum test54-bs10M-c10.nul
> bed3c0a4a1407f584989b4009e9ce33f test54-bs10M-c10.nul
>
> real    0m0.166s
> user    0m0.062s
> sys     0m0.011s
>
> As you can see that 'test54-bs10M-c10.nul' file took around 12 minutes t
Hi,
We would like to propose the following talk.
Title: Gluster Geo-replication
Theme: Stability and Performance
We plan to cover the following things.
- Introduction
- New Features
- Stability and Usability Improvements
- Performance Improvements.
-
“scrub status”, do you have a sample output for a positive hit you could
> share easily?
>
> > On Oct 26, 2016, at 12:05 AM, Kotresh Hiremath Ravishankar
> > <khire...@redhat.com> wrote:
> >
> > Correcting the command..I had missed 'scrub' ke
Hi Jackie,
"gluster vol bitrot status" should show the gfids of the corrupted files.
If you want to get the info from the logs, it will be logged as below:
[2016-10-26 05:21:20.767774] A [MSGID: 118023]
[bit-rot-scrub.c:246:bitd_compare_ckum] 0-master-bit-rot-0: CORRUPTION
DETECTED: Object /dir1/file1
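The same information can also be pulled from the CLI (the volume name is a placeholder; a later mail in this archive corrects the command to include the scrub keyword):
# gluster volume bitrot <volname> scrub status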
Hi,
Could you please restart glusterd in DEBUG mode and share the glusterd logs?
* Start glusterd in DEBUG mode as follows:
#glusterd -LDEBUG
* Stop the volume:
#gluster vol stop <volname>
* Share the glusterd logs.
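A sketch of the full sequence on a systemd-managed node (the service name and log path are the usual defaults, assumed here):
# systemctl stop glusterd
# glusterd -LDEBUG
# gluster vol stop <volname>
Then collect /var/log/glusterfs/glusterd.log.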
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Chao-Ping
Hi Ping,
That's good to hear. Let us know if you face any further issues.
We are happy to help you.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Chao-Ping Chien" <cch...@eqbank.ca>
> To: "Kotresh Hiremath Ravishankar" <khire
Hi,
Glad you could get it rectified. But having the same slave volume for two
different
geo-rep sessions is never recommended. The two sessions end up writing to the
same slave node. It's always one master volume to many different slave volumes,
if such a configuration is required. If ssh-keys are deleted on
Correcting the command: I had missed the 'scrub' keyword.
"gluster vol bitrot scrub status"
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
> To: "Jackie Tung" <jac...@dri
> From: "Viktor Nosov" <vno...@stonefly.com>
> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
> Cc: gluster-users@gluster.org
> Sent: Wednesday, December 7, 2016 10:48:52 PM
> Subject: RE: [Gluster-users] Geo-replication failed to delete from slave file
Hi Viktor,
Please share the geo-replication slave mount logs from the slave nodes.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Viktor Nosov"
> To: gluster-users@gluster.org
> Cc: vno...@stonefly.com
> Sent: Tuesday, December 6, 2016 7:13:22 AM
> Subject:
miss syncing a few files in an ENOSPC scenario.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Viktor Nosov" <vno...@stonefly.com>
> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
> Cc: gluster-users@gluster.org
> Sent: Satu
ng iptables on both master and slave nodes and check again?
#iptables -F
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Jeremiah Rothschild" <jerem...@franz.com>
> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
> Cc: gluster-user
Hi Jeremiah,
That's really strange. Please enable DEBUG logs for geo-replication as below
and send
us the logs under "/var/log/glusterfs/geo-replication/<session>/*.log" from the
master node:
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config log-level DEBUG
Geo-rep has two ways to detect changes.
1. changelog (Changelog
Hi Mabi,
What's the rsync version being used?
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "mabi"
> To: "Gluster Users"
> Sent: Saturday, April 8, 2017 4:20:25 PM
> Subject: [Gluster-users] Geo replication stuck (rsync:
Hi,
Then please set the following rsync config and let us know if it helps:
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config rsync-options
"--ignore-missing-args"
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "mabi" <m...@protonmail.ch>
> To: "
might need to clean up the problematic
directory on the slave from the backend.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "mabi" <m...@protonmail.ch>
> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
> Cc: "Gluster Users
Answers inline.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "mabi" <m...@protonmail.ch>
> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
> Cc: "Gluster Users" <gluster-users@gluster.org>
> Sent: Thurs
..@redhat.com>
> Cc: "Gluster Devel" <gluster-de...@gluster.org>, "gluster-users"
> <gluster-users@gluster.org>, "Kotresh Hiremath
> Ravishankar" <khire...@redhat.com>
> Sent: Monday, April 24, 2017 11:30:57 AM
> Subject:
impossible to upgrade to the
latest version, at least 3.7.20 would do. It has minimal
conflicts. I can help you out with that.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
> To: "Kotresh Hiremath Ravishankar&
Hi
https://github.com/gluster/glusterfs/issues/188 is merged in master
and needs to go into 3.11.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Kaushal M"
> To: "Shyam"
> Cc: gluster-users@gluster.org, "Gluster Devel"
this is an RFE, it will be available from 3.11 and will not
be backported to 3.10.x.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Serkan Çoban" <cobanser...@gmail.com>
> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
> Cc: &quo
Hi Felipe,
All the observations you have made are correct. AFR is synchronous replication,
where the client replicates the data, and throughput is limited by the speed of the
slowest
node (in your case the HDD node). AFR replicates each brick and is part of a
single
volume. In the end, you will have
Hi Amudhan,
Sorry for the late response, as I was busy with other things. You are right,
bitrot uses sha256 for its checksum.
If file-1 and file-2 are marked bad, the I/O should error out with EIO.
If that is not happening, we need
to look further into it. But what are the file contents of file-1 and
Hi,
No, gluster doesn't support active-active geo-replication, and it's not planned
for the near future. We will let you know when it's planned.
Thanks,
Kotresh HR
On Tue, Oct 24, 2017 at 11:19 AM, atris adam wrote:
> hi everybody,
>
> Have glusterfs released a feature named
>> glusterfs-cli-3.12.9-1.el7.x86_64
>> python2-gluster-3.12.9-1.el7.x86_64
>> glusterfs-rdma-3.12.9-1.el7.x86_64
>> glusterfs-fuse-3.12.9-1.el7.x86_64
>>
>> I have also attached another screenshot showing the memory usage from the
>> Gluster slave for the last 48 hours.
Hi Alex,
Sorry, I lost the context.
Which gluster version are you using?
Thanks,
Kotresh HR
On Sat, Jun 16, 2018 at 2:57 PM, Axel Gruber wrote:
> Hello
>
> i think its better to open a new Thread:
>
>
> I tryed to install Geo Replication again - setup SSH Key - prepared
> session Broker and
Hi Axel,
You don't need a single server with 140 TB capacity for replication. The
slave (backup) is also a gluster volume, similar to the master volume.
So create the slave (backup) gluster volume with 4 or more nodes to meet
the capacity of the master, and set up geo-rep between these two volumes.
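A sketch of what that could look like; all host and volume names below are made up for illustration:
# on the slave cluster: a distributed volume across 4 nodes
# gluster volume create backupvol snode1:/bricks/b1 snode2:/bricks/b1 snode3:/bricks/b1 snode4:/bricks/b1
# gluster volume start backupvol
# on a master node: create and start the session
# gluster volume geo-replication mastervol snode1::backupvol create push-pem
# gluster volume geo-replication mastervol snode1::backupvol start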
Hi Axel,
No, geo-replication can't be used without SSH; it's not configurable.
Geo-rep master nodes connect to the slave and transfer data over SSH.
I assume you have created the geo-rep session before starting it.
In the command above, the syntax is incorrect: it should use "::" and not
":/"
gluster
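The corrected command is cut off above; it would follow the standard form (placeholders):
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start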
Hi Mark,
A few questions:
1. Is this traceback hit consistently? I just wanted to confirm whether
it's transient, occurring once in a while and then getting back to normal.
2. Please upload the complete geo-rep logs from both master and slave.
3. Are the gluster versions the same across master and slave?
Thanks,
Kotresh HR
On Wed, Jun 6, 2018 at 7:10 PM, Mark
Hi Mabi,
You can safely delete old files under /var/lib/misc/glusterfsd.
Thanks,
Kotresh
On Mon, Jun 25, 2018 at 7:30 PM, mabi wrote:
> Hi,
>
> In the past I was using geo-replication but unconfigured it on my two
> volumes by using:
>
> gluster volume geo-replication ... stop
> gluster
l Gruber, Anton Gruber
>
> Steuernummer: 141/151/51801
>
>
> Am Mo., 18. Juni 2018 um 11:30 Uhr schrieb Kotresh Hiremath Ravishankar <
> khire...@redhat.com>:
>
>> Hi Alex,
>>
>> Sorry, I lost the context.
>>
>> Which gluster version are you u
Hi John Hearns,
Thanks for considering gluster. The feature you are requesting is
Active-Active, and it is not available with geo-replication in 4.0.
So the use case can't be achieved using a single gluster volume. But your
use case can be achieved if we keep two volumes,
one for the analysis file and
Hi Viktor,
Answers inline
On Wed, Jan 17, 2018 at 3:46 AM, Viktor Nosov wrote:
> Hi,
>
> I'm looking for glusterfs feature that can be used to transform data
> between
> volumes of different types provisioned on the same nodes.
> It could be, for example, transformation
It is clear that rsync is failing. Are the rsync versions on all master
and slave nodes the same?
I have seen that cause problems sometimes.
-Kotresh HR
On Wed, Jan 24, 2018 at 10:29 PM, Dietmar Putz
wrote:
> Hi all,
> i have made some tests on the latest Ubuntu
Hi,
Geo-replication expects the gfid (a unique identifier, similar to an inode
number in backend file systems) of a file to be the same
on both the master and slave gluster volumes. If the data is copied directly
by means other than geo-replication,
the gfids will be different. The crashes you are seeing are
Hi Alvin,
Yes, geo-replication sync happens via SSH. The server port 24007 belongs to
glusterd.
glusterd listens on this port, and all volume management
communication
happens via RPC.
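A quick way to confirm that glusterd is the listener on 24007 (a sketch using ss; netstat works equally well):
# ss -tlnp | grep 24007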
Thanks,
Kotresh HR
On Wed, Feb 7, 2018 at 8:29 PM, Alvin Starr wrote:
> I am running
CCing the glusterd team for information.
On Thu, Feb 8, 2018 at 10:02 AM, Alvin Starr <al...@netvel.net> wrote:
> That makes for an interesting problem.
>
> I cannot open port 24007 to allow RPC access.
>
> On 02/07/2018 11:29 PM, Kotresh Hiremath Ravishankar wrote:
>
Hi,
As a quick workaround for geo-replication to work, please configure the
following option:
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config
access_mount true
The above option will not do the lazy umount, and as a result, all the
master and slave volume mounts
maintained by geo-replication can be
Answers inline.
On Tue, Feb 6, 2018 at 6:24 PM, Marcus Pedersén
wrote:
> Hi again,
> I made some more tests and the behavior I get is that if any of
> the slaves are down the geo-replication stops working.
> It this the way distributed volumes work, if one server goes
Hi,
When S3 is added to the master volume from a new node, the following commands should
be run to generate and distribute the ssh keys:
1. Generate ssh keys from the new node:
#gluster system:: execute gsec_create
2. Push those ssh keys of the new node to the slave:
#gluster volume geo-replication <mastervol> <slavehost>::<slavevol> create
push-pem
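As a hedged note: if the session already exists, the create command may need force appended:
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> create push-pem force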
> Many thanks in advance!
>
> Regards
> Marcus
>
>
> On Wed, Feb 07, 2018 at 06:39:20PM +0530, Kotresh Hiremath Ravishankar
> wrote:
> > We are happy to help you out. Please find the answers inline.
> >
> > On Tue, Feb 6, 2018 at 4:39 PM, Marcus Pedersén <marcu
We are happy to help you out. Please find the answers inline.
On Tue, Feb 6, 2018 at 4:39 PM, Marcus Pedersén
wrote:
> Hi all,
>
> I am planning my new gluster system and tested things out in
> a bunch of virtual machines.
> I need a bit of help to understand how
Hi,
Thanks for reporting the issue. This seems to be a bug.
Could you please raise a bug at https://bugzilla.redhat.com/ under
community/glusterfs ?
We will take a look at it and fix it.
Thanks,
Kotresh HR
On Wed, Feb 21, 2018 at 2:01 PM, Marcus Pedersén
wrote:
> Hi
Hi Marcus,
What's the rsync version being used?
Thanks,
Kotresh HR
On Thu, Aug 2, 2018 at 1:48 AM, Marcus Pedersén
wrote:
> Hi all!
>
> I upgraded from 3.12.9 to 4.1.1 and had problems with geo-replication.
>
> With help from the list with some sym links and so on (handled in another
>
##
> Marcus Pedersén
> Systemadministrator
> Interbull Centre
>
> Sent from my phone
> ####
>
>
> Den 2 aug. 2018 06:13 skrev Kotresh Hiremath Ravishankar <
> khire...@redhat.com>:
>
> Hi Marcus,
>
> What's the rsync v
##
> Sent from my phone
> ####
>
> Den 2 aug. 2018 08:07 skrev Kotresh Hiremath Ravishankar <
> khire...@redhat.com>:
> Could you look of any rsync processes hung in master or slave?
>
> On Thu, Aug 2, 2018 at 11:18 AM, Marcus Pedersén
> wrote:
>
Hi David,
The feature provides consistent time attributes (atime, ctime, mtime)
across the replica set.
The feature is enabled with the following two options:
gluster vol set <volname> utime on
gluster vol set <volname> ctime on
The feature currently does not honour mount options related to time
attributes, such as
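To confirm the options took effect, the volume options can be listed (the volume name is a placeholder):
# gluster volume get <volname> all | grep -E 'ctime|utime'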
efresh-timeout: 10
> performance.read-ahead: off
> performance.write-behind-window-size: 4MB
> performance.write-behind: on
> storage.build-pgfid: on
> auth.ssl-allow: *
> client.ssl: off
> server.ssl: off
> changelog.changelog: on
> features.bitrot: on
> features.scrub: Active
> features.scrub-freq: daily
> cluster
> gluster-poc-noida   glusterep   /data/gluster/gv0   root   gluster-poc-sj::glusterep   N/A   Faulty   N/A   N/A
>
> noi-poc-gluster     glusterep   /data/gluster/gv0   root   gluster-poc-sj::glusterep   N/A   Faulty   N/A   N/A
>
Hi Krishna,
Since your libraries are in /usr/lib64, you should be doing
#ldconfig /usr/lib64
Confirm that below command lists the library
#ldconfig -p | grep libgfchangelog
On Tue, Aug 28, 2018 at 3:52 PM, Sunny Kumar wrote:
> can you do ldconfig /usr/local/lib and share the output of
a ~]#
>
>
>
> Is it looks good what we exactly need or di I need to create any more link
> or How to get “libgfchangelog.so” file if missing.
>
>
>
> /Krishna
>
>
>
> *From:* Kotresh Hiremath Ravishankar
> *Sent:* Tuesday, August 28, 2018 4:22 PM
> *To:*
-command-dir
Thanks,
Kotresh HR
On Wed, Jul 18, 2018 at 9:28 AM, Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:
> Hi Marcus,
>
> I am testing out 4.1 myself and I will have some update today.
> For this particular traceback, gsyncd is not able to find the librar
0
> [2018-07-16 19:35:16.828056] I [gsyncd(worker /urd-gds/gluster):297:main]
> : Using session config file path=/var/lib/glusterd/geo-
> replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf
> [2018-07-16 19:35:16.828066] I [gsyncd(agent /urd-gds/gluster):297:mai
Looks like gsyncd on the slave is failing for some reason.
Please run the command below on the master:
#ssh -i /var/lib/glusterd/geo-replication/secret.pem georep@gluster-4.glstr
It should run gsyncd on the slave. If there is an error, it should be fixed.
Please share the output of the above command.
Regards,
Hi Pablo,
The geo-rep status should go to Faulty if the connection to the peer is broken.
Do the node log files show the same error? Are these logs repeating?
Do stopping and starting geo-rep give the same error?
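It may also help to capture the detailed session status from a master node (placeholders):
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> status detail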
Thanks,
Kotresh HR
On Tue, Jul 24, 2018 at 1:47 AM, Pablo J Rebollo Sosa wrote:
> Hi,
Hi Nico,
glusterd has crashed on this node. Please raise a bug with the core file.
Please use the following tool [1] to set up geo-rep, after bringing glusterd
back up,
if you are finding the geo-rep setup steps difficult, and let us know
if it still crashes.
[1]
: geo-replication status
> glusterdist gluster-poc-sj::gluster : session is not active
>
> [2018-09-06 07:56:38.486229] I [MSGID: 106028] [glusterd-geo-rep.c:4903:
> glusterd_get_gsync_status_mst_slv] 0-management: /var/lib/glusterd/geo-
> replication/glusterdist_gluster-poc-sj_gluster
gsync_status_mst_slv] 0-management: /var/lib/glusterd/geo-
> replication/glusterdist_gluster-poc-sj_gluster/monitor.status statefile
> not present. [No such file or directory]
>
>
>
> /Krishna
>
>
>
> *From:* Kotresh Hiremath Ravishankar
> *Sent:* Thursday, Septem
Hi Krishna,
The glusterd log file would help here.
Thanks,
Kotresh HR
On Thu, Sep 6, 2018 at 1:02 PM, Krishna Verma wrote:
> Hi All,
>
>
>
> I am getting issue in geo-replication distributed gluster volume. In a
> session status it shows only peer node instead of 2. And I am also not able
> to
TestInt18.08-b001.t.Z
>
> du: cannot access ‘/repvol/rflowTestInt18.08-b001.t.Z’: No such file or
> directory
>
> [root@gluster-poc-sj ~]#
>
>
>
> File not reached at slave.
>
>
>
> /Krishna
>
>
>
> *From:* Krishna Verma
> *Sent:* Monday, Se
e count like 3*3 or
> 4*3 :- Are you refereeing to create a distributed volume with 3 master
> node and 3 slave node?
>
Yes, that's correct. Please do the test with this. I recommend you run
the actual workload for which you are planning to use gluster, instead of
copying a 1 GB file and