On Saturday 19 August 2017 02:05 AM, mabi wrote:
Hi,
When creating a geo-replication session, is gverify.sh actually used/run?
Yes, it is executed as part of geo-replication session creation.
or is gverify.sh just an ad-hoc command to test manually if creating a
Hi,
Thanks all who joined!
Next week at the same time (Tuesday 12:00 UTC) we will have another bug
triage meeting to catch the bugs that have not yet been handled by developers and
maintainers. We'll keep repeating this meeting as a safety net so that
bugs get the initial attention and developers can
Hi,
This meeting is scheduled for anyone who is interested in learning more
about, or assisting with, the Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
(in
interface.
A Docker container accesses the Gluster volume to get/put objects.
We are working on a custom solution which will avoid gluster-swift
altogether.
We will update here once it is ready. Stay tuned.
2017-03-08 9:53 GMT+01:00 Saravanakumar Arumugam <sarum...@redhat.com>:
Hi,
I have posted a blog about accessing Gluster volume via S3 interface.[1]
Here, the Gluster volume is exposed as object storage.
The object storage functionality is implemented with changes to Swift, and
the swift3 plugin is used to expose the S3 interface. [4]
gluster-object is available as
Hi,
This meeting is scheduled for anyone who is interested in learning more
about, or assisting with, the Bug Triage.
This is the first bug triage meeting using hackmd.io (as
http://public.pad.fsfe.org/ is going to be decommissioned).
Agenda and Group Triage links updated.
Meeting
On Sat, Nov 19, 2016 at 11:16 AM, Saravanakumar Arumugam
<sarum...@redhat.com <mailto:sarum...@redhat.com>> wrote:
On 11/19/2016 01:39 AM, Alexandr Porunov wrote:
Hello,
I am trying to enable shared storage for Geo-Replication, but I am not sure
that I am doing it properly.
Here is what I do:
# gluster volume set all cluster.enable-shared-storage enable
volume set: success
# mount -t glusterfs
On 11/15/2016 03:46 AM, Shirwa Hersi wrote:
Hi,
I'm using glusterfs geo-replication on version 3.7.11; one of the
bricks becomes faulty and does not replicate to the slave after I
start the geo-replication session.
Following are the logs related to the faulty brick, can someone please
On 11/11/2016 09:09 PM, Sander Eikelenboom wrote:
Friday, November 11, 2016, 4:28:36 PM, you wrote:
Feature requests go in Bugzilla anyway.
Create your volume with the populated brick as brick one. Start it and "heal
full".
gluster> volume create testvolume transport tcp
gluster>
Hi,
I am working on moving fsfe pages to the github wiki (as discussed in the
Gluster community meeting yesterday).
Identified the following links present in the fsfe etherpad.
I need your help to check whether the link (maybe created by you) needs
to be moved to github wiki.
Also, if any other
Hi all,
The minutes of today's meeting:
Meeting summary
agenda: https://public.pad.fsfe.org/p/gluster-bug-triage
(Saravanakmr, 12:01:19)
Roll call (Saravanakmr, 12:01:28)
Next week’s meeting host (Saravanakmr, 12:05:25)
ACTION: skoduri will host bug triage meeting on
Hi,
On 08/23/2016 03:09 PM, Beard Lionel (BOSTON-STORAGE) wrote:
Hi,
I have noticed that when using a distribute volume, if a brick is not
accessible, the volume is still accessible in read-write mode, but some
files can't be created (depending on the filename).
Is it possible to force a
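The filename-dependent failures above follow from DHT hashing each filename to exactly one brick. A toy sketch of the idea (not Gluster's actual elastic-hash algorithm; brick names and the hashing scheme here are made up):

```shell
# Map each filename to one of three bricks via a hash (toy stand-in for DHT).
bricks=(brick1 brick2 brick3)
down="brick2"   # pretend this brick is unreachable

for name in a.txt b.txt c.txt; do
    # Hash the name to a number, then pick a brick by modulo.
    h=$(printf '%s' "$name" | cksum | cut -d' ' -f1)
    target=${bricks[$((h % 3))]}
    if [ "$target" = "$down" ]; then
        echo "$name -> $target: creation FAILS while brick is down"
    else
        echo "$name -> $target: ok"
    fi
done
```

Whichever names hash to the unavailable brick fail to create while the rest succeed, which matches the "depending on filename" symptom.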
On 08/08/2016 08:59 PM, Atin Mukherjee wrote:
On Mon, Aug 8, 2016 at 3:18 PM, Niels de Vos <nde...@redhat.com> wrote:
On Mon, Aug 08, 2016 at 02:37:43PM +0530, Saravanakumar Arumugam
wrote:
>
> On 08/07
On 08/07/2016 04:17 PM, ML mail wrote:
Hi,
Can someone explain to me what the op-version everybody is speaking about on
the mailing list is?
op-version is a way to determine which gluster version you are running.
This is quite useful during the upgrade process, to check for backward
On 07/25/2016 10:29 AM, Saravanakumar Arumugam wrote:
Hi,
1.
Can you check /root/.ssh/authorized_keys (in master host) ?
Sorry, typo: this is on the slave host (ks4 in your case).
It should contain only entries starting with "command=" .
If there is any duplicate entry withou
Hi,
1.
Can you check /root/.ssh/authorized_keys (in master host) ?
It should contain only entries starting with "command=".
If there is any duplicate entry without "command=", delete it,
and check the geo-rep status again.
2.
This is to confirm ssh connection between master and
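The authorized_keys check in item 1 can be scripted. A minimal sketch, using a made-up sample file and placeholder key material instead of the real /root/.ssh/authorized_keys on the slave (the gsyncd path is also hypothetical):

```shell
# Build a sample authorized_keys file (hypothetical contents).
cat > /tmp/sample_authorized_keys <<'EOF'
command="/usr/libexec/glusterfs/gsyncd" ssh-rsa AAAAB3... root@master
ssh-rsa AAAAB3... root@master
EOF

# Entries that do NOT start with "command=" are the duplicates to delete.
grep -v '^command=' /tmp/sample_authorized_keys
```

On a real slave you would run the grep against /root/.ssh/authorized_keys and remove the lines it prints.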
On 07/11/2016 03:59 PM, Kaleb Keithley wrote:
Starting with the 3.8 releases, EPEL packages are in the CentOS Storage SIG
repos.
If you want to stay on 3.7, edit your /etc/yum.repos.d/glusterfs-epel.repo file
and change .../LATEST/... to .../3.7/LATEST/...
(There have been several emails to
Hi all,
The Gluster Bug Triage Meeting will start in approx. 1 hour 30 minutes from now.
Please join if you are interested in getting a decent status of bugs
that have been filed recently and that maintainers/developers have not picked
up yet.
The meeting also includes a little bit about testing and
Hi,
Please find minutes of June 14 Bug Triage meeting.
Meeting summary
1. agenda: https://public.pad.fsfe.org/p/gluster-bug-triage
(Saravanakmr
Hi,
Please find the meeting minutes and summary:
Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-05-24/gluster_bug_triage.2016-05-24-12.00.html
Minutes (text):
From: Saravanakumar Arumugam [mailto:sarum...@redhat.com]
Sent: Thursday, May 19, 2016 1:59 PM
To: vyyy杨雨阳 <yuyangy...@ctrip.com>; Gluster-users@gluster.org
Subject: Re: [Gluster-users] Re: geo-replication status partial faulty
Hi,
There seems to be some issue in glusterfs01.sh3.ctripcorp.com slave node.
Can you share the complete logs ?
You can increase verbosity of debug messages like this:
gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config log-level DEBUG
Also, check /root/.ssh/authorized_keys in
On 04/28/2016 07:26 PM, Scott Creeley wrote:
Is there a way to set an option to return errors directly to the caller?
Looking at the man page, it doesn't appear so, but I'm wondering if there is
some trick to accomplish this, or how hard it would be to implement.
For example, current behavior:
Hi,
Thanks for the participation. Please find meeting summary below.
Meeting ended Tue Apr 19 12:58:58 2016 UTC. Information about MeetBot at
http://wiki.debian.org/MeetBot .
Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-04-19/gluster_bug_triage.2016-04-19-12.00.html
Minutes
Hi,
Replies inline.
Thanks,
Saravana
On 03/31/2016 04:00 AM, Gmail wrote:
I’ve rebuilt the cluster again, making a fresh installation. And now
the error is different.
MASTER NODE    MASTER VOL    MASTER BRICK    SLAVE USER    SLAVE    SLAVE NODE
Hi,
Please find the minutes of today's Gluster Community Bug Triage meeting
below. Thanks to everyone who have attended the meeting.
Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-03-22/gluster_bug_triage.2016-03-22-12.00.html
Minutes (text):
Hi,
On 03/09/2016 01:06 PM, jayakrishnan mm wrote:
Hi,
I have installed the 3.7.6 version from .deb on my Ubuntu14.04 PC.
It is working fine.
Now I have built 3.7.6 from sources as follows.
1. ./autogen.sh
2. ./configure --enable-debug
3. make
4. sudo make install DESTDIR=/
On 03/03/2016 05:38 PM, Yannick Perret wrote:
Hello,
I can't find whether it is possible to set a preferred server on a
per-client basis for replica volumes, so I'm asking the question here.
The context: we have 2 storage servers, each in one building. We also
have several virtual machines on each
:14 PM, Saravanakumar Arumugam
<sarum...@redhat.com> wrote:
On 02/01/2016 07:22 PM, ML mail wrote:
I just found out I needed to run getfattr on a mount and not on the
glusterfs server directly. So here is the additional output you asked for:
# getfattr -n glusterfs.gfid.string -m . logo-
arried out on this gfid)
Regards
ML
On Monday, February 1, 2016 1:30 PM, Saravanakumar Arumugam
<sarum...@redhat.com> wrote:
Hi,
On 02/01/2016 02:14 PM, ML mail wrote:
Hello,
I just set up distributed geo-replication to a slave on my 2-node replicated
volume and noticed quite a few error messages (around 70 of them) in the
slave's brick log file:
The exact log file is:
Hi,
Replies inline..
Thanks,
Saravana
On 12/18/2015 10:02 PM, Dietmar Putz wrote:
Hello again...
after having some big trouble with an xfs issue in kernel 3.13.0-x and
3.19.0-39 which has been 'solved' by downgrading to 3.8.4
Hi,
This seems like an XFS filesystem issue.
Can you report this error to the XFS mailing list?
Thanks,
Saravana
On 12/06/2015 05:23 AM, Julius Thomas wrote:
Dear Gluster Users,
after fixing the problem in the last mail from my colleague by
upgrading to kernel 3.19.0-39-generic in case of
Hi Wade,
There seems to be some issue in syncing the existing data in the volume
using Xsync crawl.
(To give some background: when geo-rep is started, it performs a filesystem
crawl (Xsync) and syncs all the data to the slave, and then the session
switches to CHANGELOG mode.)
We are looking in to
er I wanted it to run under.
As far as I know, it should display the specific user which you have
set up.
Please share complete command details and logs. (Also, review all your
commands to check whether everything is set up as mentioned).
On Monday, September 21, 2015 8:07 AM, Saravanakum
log, if you still face any issues.
Also, report back if it helps, so that we can fix it here.
On Saturday, September 19, 2015 6:18 AM, Saravanakumar Arumugam
<sarum...@redhat.com> wrote:
Hi,
The underlying filesystem you are using seems to be ZFS.
I don't have much experience with ZFS. You may want to check this link:
http://www.gluster.org/community/documentation/index.php/GlusterOnZFS
As far as the error is concerned, it is trying to use the stat command to
get inode details.
(stat
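As a quick side illustration (not from the original thread), the CLI analogue of that stat call, using GNU coreutils stat on a throwaway file:

```shell
# Create a throwaway file and ask stat for its inode number and file type.
touch /tmp/georep_stat_demo
stat -c '%i %F' /tmp/georep_stat_demo
```

If stat fails here on the real brick path, the filesystem (ZFS in this case) is the first thing to suspect.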
-January/020080.html)
Regards
ML
On Tuesday, September 15, 2015 9:16 AM, Saravanakumar Arumugam
<sarum...@redhat.com> wrote:
Hi,
You are right, this tool may not be compatible with 3.6.5.
I tried it myself with 3.6.5 and hit this error:
==
georepset
s
ML
On Monday, September 14, 2015 9:38 AM, Saravanakumar Arumugam
<sarum...@redhat.com> wrote:
Hi,
<< Unable to fetch slave volume details. Please check the slave cluster
and slave volume. geo-replication command failed
Have you checked whether you are able to reach the Slave node from the
master node?
There is a super simple way of setting up geo-rep written by Aravinda.
Refer:
Hi Atin/Kaushal,
I am interested in taking up the selective read-only mode feature (Bug #829042).
I will look into this and talk to you further.
Thanks,
Saravana
On 08/13/2015 08:58 PM, Atin Mukherjee wrote:
Can we have some volunteers of these BZs?
-Atin
Sent from one plus one
On Aug 12, 2015
humble.deva...@gmail.com wrote:
Hi All,
As we maintain 3 releases (currently 3.5, 3.6 and 3.7) of
GlusterFS and average one release per week, we need
more helping hands on this task.
The responsibility includes building
3.6.3 and 3.7.2 that broke it?
Possibly this commit broke it in 3.7:
commit f1ac02a52f4019e7890ce501af7e825ef703d14d
Author: Saravanakumar Arumugam <sarum...@redhat.com>
Date: Tue May 5 17:03:39 2015 +0530
geo-rep: rename handling in dht volume(changelog changes)
I have sent out a patch