A better solution to the pid file problem is to add "-p
/var/run/ganesha.nfsd.pid" to the OPTIONS in /etc/sysconfig/ganesha, so that it
becomes:
# cat /etc/sysconfig/ganesha
OPTIONS="-L /var/log/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT -p
/var/run/ganesha.nfsd.pid"
This is
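A quick way to verify that change, assuming the daemon is managed by the stock nfs-ganesha systemd unit (adjust if pacemaker is driving it in your HA setup), is to restart ganesha and confirm that the pid file ganesha_mon expects now exists and points at the running daemon:
# systemctl restart nfs-ganesha
# cat /var/run/ganesha.nfsd.pid
# ps -p "$(cat /var/run/ganesha.nfsd.pid)" -o comm=
ganesha.nfsd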
On 06/09/2015 02:48 PM, Alessandro De Salvo wrote:
Hi,
OK, the problem with the VIPs not starting is due to the ganesha_mon
heartbeat script looking for a pid file called
/var/run/ganesha.nfsd.pid, while by default ganesha.nfsd v.2.2.0 is
creating /var/run/ganesha.pid; this needs to be
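Until the script (or ganesha's -p option) is adjusted, one possible stopgap, purely a sketch and not something proposed in this thread, is to make the name ganesha_mon looks for point at the pid file ganesha.nfsd actually writes. Note that /var/run is a tmpfs on EL7, so the link would need recreating after a reboot:
# ln -sf /var/run/ganesha.pid /var/run/ganesha.nfsd.pid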
Anyone who could help? We just ran into this exact same problem again. I
just noticed we are running GlusterFS 3.7.1 on the clients (oVirt
hosts/VDSM). Could this be an issue?
On 8 June 2015 at 11:40, Tiemen Ruiten t.rui...@rdmedia.com wrote:
Some extra points:
- 10.100.3.41 is one of the
On 06/09/2015 02:06 PM, Alessandro De Salvo wrote:
Hi Soumya,
On 09 Jun 2015, at 08:06, Soumya Koduri skod...@redhat.com wrote:
On 06/09/2015 01:31 AM, Alessandro De Salvo wrote:
OK, I found at least one of the bugs.
The /usr/libexec/ganesha/ganesha.sh has the
Hi,
OK, the problem with the VIPs not starting is due to the ganesha_mon heartbeat
script looking for a pid file called /var/run/ganesha.nfsd.pid, while by
default ganesha.nfsd v.2.2.0 is creating /var/run/ganesha.pid; this needs to be
corrected. The file is in
Hi Vijay,
Thanks for having replied.
Unfortunately, I checked each brick in my storage pool and didn't find any backup
file... too bad!
Thank you again!
Good luck and see you,
Geoffrey
--
Geoffrey Letessier
IT manager, engineer
Hi,
On 09 Jun 2015, at 11:46, Soumya Koduri skod...@redhat.com wrote:
On 06/09/2015 02:48 PM, Alessandro De Salvo wrote:
Hi,
OK, the problem with the VIPs not starting is due to the ganesha_mon
heartbeat script looking for a pid file called
Hi Vijay,
it’s the latest one, I guess:
# gluster --version
glusterfs 3.7.1 built on Jun 1 2015 17:53:10
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. http://www.gluster.com
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies
Hi Soumya,
On 09 Jun 2015, at 08:06, Soumya Koduri skod...@redhat.com wrote:
On 06/09/2015 01:31 AM, Alessandro De Salvo wrote:
OK, I found at least one of the bugs.
The /usr/libexec/ganesha/ganesha.sh has the following lines:
if [ -e /etc/os-release ]; then
On Tuesday 09 June 2015 01:08 PM, Geoffrey Letessier wrote:
Hi,
Yes of course:
[root@lucifer ~]# pdsh -w cl-storage[1,3] du -s
/export/brick_home/brick*/amyloid_team
cl-storage1: 1608522280 /export/brick_home/brick1/amyloid_team
cl-storage3: 1619630616 /export/brick_home/brick1/amyloid_team
Hi all,
This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
On 06/09/2015 01:31 AM, Alessandro De Salvo wrote:
OK, I found at least one of the bugs.
The /usr/libexec/ganesha/ganesha.sh has the following lines:
if [ -e /etc/os-release ]; then
        RHEL6_PCS_CNAME_OPTION=""
fi
This is OK for RHEL < 7, but does not work for >= 7. I have
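For comparison, a hedged sketch of what a version-aware check could look like; the "--name" default and the use of VERSION_ID from /etc/os-release are assumptions for illustration, not the script's actual code:
RHEL6_PCS_CNAME_OPTION="--name"          # assumed default needed by pcs on EL6
if [ -e /etc/os-release ]; then
    . /etc/os-release
    # clear the option only when the major version is 7 or newer
    [ "${VERSION_ID%%.*}" -ge 7 ] 2>/dev/null && RHEL6_PCS_CNAME_OPTION=""
fi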
Hi Brian,
which version were you using before 3.7.1? Did that problem occur with
version 3.7.1 meanwhile?
Greetings,
Roger Lehmann
Am 04.06.2015 um 16:55 schrieb Andrus, Brian Contractor:
I have similar issues with gluster and am starting to wonder if it really is
stable for VM images.
My
Hi,
Is there really any difference if you heal a volume by running the 'gluster
volume
heal' command compared to if you run the 'find vol -exec stat {} \;' command?
As far as I understand, both will sync the bricks, although I guess the
gluster
command is asynchronous.
Regards
Andreas
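For reference, the two approaches being compared would look roughly like the following; the volume name test-vol and mount point /mnt/test-vol are placeholders. The gluster command only asks the self-heal daemons to do the work (so it returns immediately), while the find/stat crawl drives heals through client-side lookups:
# gluster volume heal test-vol          # heal entries recorded in the indices
# gluster volume heal test-vol full     # crawl and heal the whole volume
# find /mnt/test-vol -exec stat {} \; > /dev/null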
On Monday 08 June 2015 07:11 PM, Geoffrey Letessier wrote:
In addition, I notice a very big difference between the sum of du on
each brick and the « quota list » display, as you can read below:
[root@lucifer ~]# pdsh -w cl-storage[1,3] du -sh
/export/brick_home/brick*/amyloid_team
cl-storage1:
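For anyone following along, the other half of that comparison would typically come from the quota list command; the volume name vol_home is a placeholder, since only the brick paths are shown in the thread:
[root@lucifer ~]# gluster volume quota vol_home list /amyloid_team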
Hi Alessandro,
We have recently fixed one issue related to the below warning message.
Please provide the output of 'gluster --version' and we will check if the
issue is the same.
Thanks,
Vijay
On Monday 08 June 2015 06:43 PM, Alessandro De Salvo wrote:
OK, many thanks Rajesh.
I just wanted to
Hi,
Yes of course:
[root@lucifer ~]# pdsh -w cl-storage[1,3] du -s
/export/brick_home/brick*/amyloid_team
cl-storage1: 1608522280 /export/brick_home/brick1/amyloid_team
cl-storage3: 1619630616 /export/brick_home/brick1/amyloid_team
cl-storage1: 1614057836 /export/brick_home/brick2/amyloid_team
On 6/8/2015 5:55 PM, Brian Ericson wrote:
Am I misunderstanding cluster.read-subvolume/cluster.read-subvolume-index?
I have two regions, A and B, with servers a and b in, respectively,
each region. I have clients in both regions. Intra-region communication is
fast, but the pipe between the
Hi,
I detected a directory in split-brain on my file system, but I'm surprised that
'gluster volume heal vol info split-brain' doesn't report it.
I would have expected it to show only split-brain issues, compared to 'gluster
volume
heal vol info' that shows more info
regarding the healing
On 06/09/2015 09:21 AM, Ted Miller wrote:
On 6/8/2015 5:55 PM, Brian Ericson wrote:
Am I misunderstanding
cluster.read-subvolume/cluster.read-subvolume-index?
I have two regions, A and B, with servers a and b in,
respectively, each region. I have clients in both regions.
Intra-region
Am I misunderstanding cluster.read-subvolume/cluster.read-subvolume-index?
I have two regions, A and B, with servers a and b in,
respectively, each region. I have clients in both regions. Intra-region
communication is fast, but the pipe between the regions is terrible.
I'd like to minimize
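If the aim is simply to pin reads to the nearby copy, a hedged sketch with the volume-level knobs mentioned above; the volume name gvol and the child index/name are assumptions, with the index following the order of the replica's children in the volfile:
# gluster volume set gvol cluster.read-subvolume-index 0
# gluster volume set gvol cluster.read-subvolume gvol-client-0
The caveat is that options set this way apply to every client of the volume, which is presumably why per-client mount options were being tried in the first place.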
Sorry for neglecting to mention the version, it's 3.7.1.
I've filed a bug to track this.
https://bugzilla.redhat.com/show_bug.cgi?id=1229808
Another update: the fact that I was unable to use vol set ganesha.enable
was due to another bug in the ganesha scripts. In short, they are all
using the following line to get the location of the conf file:
CONF=$(cat /etc/sysconfig/ganesha | grep CONFFILE | cut -f 2 -d =)
First of all by default
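A more defensive version of that line might look like the sketch below: it drops the redundant cat, strips any quotes, and falls back to the stock path when CONFFILE is not set at all (the fallback path is an assumption based on the defaults shown earlier in the thread):
CONF=$(grep -E '^CONFFILE=' /etc/sysconfig/ganesha | cut -f2 -d= | tr -d '"')
CONF=${CONF:-/etc/ganesha/ganesha.conf}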
On 06/09/2015 09:47 PM, Alessandro De Salvo wrote:
Another update: the fact that I was unable to use vol set ganesha.enable
was due to another bug in the ganesha scripts. In short, they are all
using the following line to get the location of the conf file:
CONF=$(cat /etc/sysconfig/ganesha |
Hello,
I have a gluster 3.7 setup that I recently created. I can cd into
directories that should be there and then do an ls, but if I do an ls from
the parent directory, the directory that I know is there does not show up. I
should also mention that I created the gluster bricks with preexisting
On 06/09/2015 11:10 AM, Jeff Darcy wrote:
So, maybe passing these options as a mount command doesn't work/is a
no-op, but what I don't understand is why -- given that there is no
measure by which glusterfs should ever conclude the replica in the
other region is ever faster than the replica in
So, maybe passing these options as a mount command doesn't work/is a
no-op, but what I don't understand is why -- given that there is no
measure by which glusterfs should ever conclude the replica in the
other region is ever faster than the replica in the same region.
If read-subvolume or
Roger
I was using the latest 3.7.0 and before that 3.6.3
So far I have NOT had the issue with 3.7.1, so that makes me quite happy.
Brian Andrus
-Original Message-
From: Roger Lehmann [mailto:roger.lehm...@marktjagd.de]
Sent: Monday, June 08, 2015 11:46 PM
To: Andrus, Brian
On 06/09/2015 10:37 AM, Jeff Darcy wrote:
Am I misunderstanding cluster.read-subvolume/cluster.read-subvolume-index?
I have two regions, A and B, with servers a and b in,
respectively, each region. I have clients in both regions. Intra-region
communication is fast, but the pipe between the
Hi,
I have enabled the full debug already, but I see nothing special. Before
exporting any volume the log shows no error, even when I do a showmount (the
log is attached, ganesha.log.gz). If I do the same after exporting a volume,
nfs-ganesha does not even start, complaining about not being able
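For anyone trying to reproduce this, the log level can be raised through the same OPTIONS line shown earlier and the export list checked once a volume has been enabled; NIV_FULL_DEBUG is ganesha's most verbose level, and the paths are the defaults used above:
# grep OPTIONS /etc/sysconfig/ganesha
OPTIONS="-L /var/log/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_FULL_DEBUG"
# showmount -e localhost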
Hello Vijay,
Quota-verify has now been running for several hours (more than 10) and the
output files (4 files, because there are 4 bricks per replica) are huge: around
800MB per file on the first server and 5GB per file on the second one. Do you
still want them? How can I send them to you?
Hi Ben,
Can you tell me more about it? Creation and setup steps, etc.
Thanks in advance.
Geoffrey
--
Geoffrey Letessier
IT manager, systems engineer
UPR 9080 - CNRS - Laboratoire de Biochimie Théorique
Institut de Biologie
OK, I can confirm that the ganesha.nfsd process is actually not
answering the calls. Here is what I see:
# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
10
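Two quick checks that might narrow this down, neither of them specific to ganesha: whether the NFS services ever got registered with rpcbind, and whether anything is actually listening on port 2049:
# rpcinfo -p | grep -E 'nfs|mountd|nlockmgr'
# ss -tlnp | grep 2049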
Hi Ted!
Thanks for your reply. For the implementation of Service N2 (AS2
Mendelson B45 + gluster 3.3.1-15.el6.x86_64, 1 brick)
we have implemented one brick.
As you can see, Mendelson has a brick and uses it to process the
Purchase Orders feed in SOA21 via a MULE process
[root@mendelson
Minutes:
http://meetbot.fedoraproject.org/gluster-meeting/2015-06-09/gluster-meeting.2015-06-09-12.00.html
Minutes (text):
http://meetbot.fedoraproject.org/gluster-meeting/2015-06-09/gluster-meeting.2015-06-09-12.00.txt
Log:
I just found this on Bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=1134305
'rpc actor failed to complete successfully' messages in Glusterd
Is it related?
On 9 June 2015 at 11:30, Tiemen Ruiten t.rui...@rdmedia.com wrote:
Anyone who could help? We just ran into this exact same problem
`info split-brain` has been re-implemented in glusterfs 3.7 (the
logic is now in glfs-heal.c) and should work correctly. CC'ing Anuradha,
who implemented it, for confirmation.
-Ravi
On 06/09/2015 08:08 PM, Andreas Hollaus wrote:
Hi,
I detected a directory in split-brain on my file system,
On 6/9/2015 8:30 AM, Pablo Silva wrote:
Hi Ted!
Thanks for your reply. For the implementation of Service N2 (AS2
Mendelson B45 + gluster 3.3.1-15.el6.x86_64, 1 brick)
we have implemented one brick.
As you can see, Mendelson has a brick and uses it to process the
Purchase Orders feed
Thanks Ted, now .. there we go
-Pablo
On Tue, Jun 9, 2015 at 10:28 AM, Ted Miller tmil...@sonsetsolutions.org
wrote:
On 6/9/2015 8:30 AM, Pablo Silva wrote:
Hi Ted!
Thanks for your reply. For the implementation of Service N2 (AS2
Mendelson B45 + gluster 3.3.1-15.el6.x86_64 (1