>> Also can you please mention which version of Ganesha and details of
>> ganesha.conf. Latest stable release for
I am using ganesha.nfsd release V2.3.2.
ganesha.conf uses the standard GlusterFS FSAL.
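For reference, a minimal EXPORT block for the GLUSTER FSAL looks roughly like the sketch below; the hostname and volume name are placeholders rather than values from this setup:

EXPORT {
    Export_Id = 1;                  # unique id for this export
    Path = "/test-volume";          # exported path
    Pseudo = "/test-volume";        # NFSv4 pseudo-filesystem path
    Access_Type = RW;
    Squash = No_root_squash;
    FSAL {
        Name = GLUSTER;             # GlusterFS FSAL
        Hostname = "localhost";     # a server in the trusted pool
        Volume = "test-volume";     # gluster volume to export
    }
}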
>> I am wondering why it is sending a getattr call on "security.selinux".
FYI, SELinux is
On 12/08/16 07:23, Deepak Naidu wrote:
I tried more things to figure out the issue, like upgrading NFS-Ganesha to the
latest version (as the earlier version had a bug that caused crashes); that
helped a bit.
But still the ls -ls or rm -rf on files were hanging, though not as much as
earlier.
OK, I also tried kernel NFS (i.e. no NFS-Ganesha) and it hangs as well. So as
you said, Vijay, the issue might be with the network side of my virtual setup.
--
Deepak
-----Original Message-----
From: Deepak Naidu
Sent: Thursday, August 11, 2016 6:54 PM
To: 'Vijay Bellur'; 'gluster-users@gluster.org'
Subject: RE:
I tried more things to figure out the issue, like upgrading NFS-Ganesha to the
latest version (as the earlier version had a bug that caused crashes); that
helped a bit.
But still the ls -ls or rm -rf on files were hanging, though not as much as
earlier. So the upgrade of NFS-Ganesha to the stable version
Hi,
If I have an existing 2-node Distributed-Replicated setup like below:
gluster volume create test-volume replica 2 \
  node1:/exp1/brick1 node2:/exp2/brick2 \
  node1:/exp1/brick3 node2:/exp2/brick4 \
  node1:/exp1/brick5 node2:/exp2/brick6
And now I want to add another new node to the gluster
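A hedged sketch of how that is typically done, with node3 and its brick paths as placeholders: probe the new node into the trusted pool, add bricks in multiples of the replica count, then rebalance.

# join the new node to the trusted pool
gluster peer probe node3
# add one new replica pair (two bricks for replica 2)
gluster volume add-brick test-volume node3:/exp3/brick7 node3:/exp3/brick8
# spread existing data onto the new bricks
gluster volume rebalance test-volume start

Note that putting both bricks of a pair on the same node leaves that replica set without node-level redundancy; pairing a brick on the new node with one on an existing node avoids this.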
On 08/11/2016 02:05 PM, Gandalf Corvotempesta wrote:
On 11 Aug 2016 7:21 PM, "Dan Lavu" wrote:
>
> Is it possible? Looking at everything, it just seems like I need the
content of the bricks and whatever is in /etc/glusterd and
2016-08-11 16:13 GMT+02:00 Gandalf Corvotempesta
:
> # gluster volume heal gv0 info heal-failed
> Gathering list of heal failed entries on volume gv0 has been
> unsuccessful on bricks that are down. Please check if all brick
> processes are running.
Healing is now
On 11 Aug 2016 7:21 PM, "Dan Lavu" wrote:
>
> Is it possible? Looking at everything, it just seems like I need the
content of the bricks and whatever is in /etc/glusterd and
/var/lib/glusterd maintaining the same hostname, IP and the same Gluster
version?
>
>
Why not start
/etc/glusterd is normally left defaulted so you only need that if you've
changed it.
Other than that, you're exactly right.
On 08/11/2016 10:21 AM, Dan Lavu wrote:
Is it possible? Looking at everything, it just seems like I need the
content of the bricks and whatever is in /etc/glusterd and
/var/lib/glusterd maintaining the same hostname, IP and the same Gluster
version?
Thanks,
Dan
2016-08-11 16:34 GMT+02:00 Joe Julian :
> When you replace a brick, the new filesystem will not have that attribute so
> "force" is required to override that safety check.
IMHO, as we are not *starting* the volume (it is already started),
another command should be used.
If
Just to be clear, I’m talking about the usage reported by the gluster vol quota
list command.
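(For reference, the full form of that command, with a placeholder volume name, is:

gluster volume quota <VOLNAME> list

and it can optionally take a path argument to list a single limit.)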
> On Aug 11, 2016, at 9:31 AM, Sergei Gerasenko wrote:
>
> Hi Selvaganesh,
>
> Thanks so much for your help. I didn’t have that option on probably because I
> originally had a
You are correct. Needs to be changed. Will edit it in the next few days.
- Original Message -
> From: "Gandalf Corvotempesta"
> To: "Anuradha Talur"
> Cc: "gluster-users"
> Sent: Thursday, August 11, 2016
start ... force is, indeed, a dangerous command. If your brick failed to mount,
gluster will not find the volume-id extended attribute, will recognize that
the path is not a brick, and will not start the brick daemon, which prevents
your root partition from filling up with replicated files.
When you
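A quick way to see the safety marker described above is to read the volume-id xattr on the brick root; the path below is just the example used earlier in this digest:

# prints trusted.glusterfs.volume-id in hex if the path is a recognised brick
getfattr -n trusted.glusterfs.volume-id -e hex /exp1/brick1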
Hi Selvaganesh,
Thanks so much for your help. I didn’t have that option on, probably because I
originally had a lower version of Gluster and then upgraded. I turned the
option on just now.
The usage is still off. Should I wait a certain time?
Thanks,
Sergei
> On Aug 9, 2016, at 7:26 AM,
2016-08-11 16:25 GMT+02:00 Anuradha Talur :
> The replace-brick you did, mentioned in the previous mails was correct and
> fine.
> You said you have different names for old and new brick, so it works.
> setfattr is *not* required in this case.
>
> In the above case that you
- Original Message -
> From: "Gandalf Corvotempesta"
> To: "Anuradha Talur"
> Cc: "gluster-users"
> Sent: Thursday, August 11, 2016 7:46:37 PM
> Subject: Re: [Gluster-users] Clarification on common tasks
>
2016-08-11 16:13 GMT+02:00 Anuradha Talur :
> There was a document that I'm not able to locate right now.
> The first step after mounting the brick was to set volume ID using
> setfattr -n trusted.glusterfs.volume-id -v <volume-id> <brick-path>
> I think there were more steps, I will update once I
2016-08-11 16:09 GMT+02:00 Lindsay Mathieson :
> gluster volume heal statistics heal-count
>
> much quicker.
Perfect. Thanks.
By running "gluster volume heal gv0 statistics" i can see some failed entries:
Starting time of crawl: Thu Aug 11 15:57:44 2016
Crawl is
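For anyone following along, the concrete heal-count form for this volume would be:

gluster volume heal gv0 statistics heal-count

which prints only the number of entries pending heal per brick instead of the full file list.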
- Original Message -
> From: "Anuradha Talur"
> To: "Gandalf Corvotempesta"
> Cc: "gluster-users"
> Sent: Thursday, August 11, 2016 5:47:12 PM
> Subject: Re: [Gluster-users] Clarification on common tasks
>
On 12/08/2016 12:05 AM, Gandalf Corvotempesta wrote:
Any way to show the heal status without dumping the whole file list?
I'm using shards as a test; I have thousands of shard files, and the output is
very, very long and almost unusable.
Is something shorter available? A progress or
2016-08-11 14:17 GMT+02:00 Anuradha Talur :
> After you mount the new-brick, you will have to run
> gluster v replace-brick old_brick new_brick commit force.
Tried this. It worked immediately; the brick was replaced and healing is running.
Any way to show the heal status without
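A concrete form of the replace-brick command quoted above, with placeholder brick paths, would be:

gluster volume replace-brick gv0 node1:/bricks/failed_brick node1:/bricks/new_brick commit force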
2016-08-11 13:08 GMT+02:00 Lindsay Mathieson :
> Also "gluster volume status" lists the pid's of all the bricks processes.
Ok, let's break everything., just to try.
This is a working cluster. I have 3 server with 1 brick each, in
replica 3, thus, all files are
Thanks for the update :-).
On Thu, Aug 11, 2016 at 6:14 PM, Dan Lavu wrote:
> I've since ordered a different switch, the same manufacturer as the HBAs.
> We have decided to rebuild the lab since we were having issues with oVirt
> as well. We can disregard this, unless the issue
On Thu, Aug 11, 2016 at 4:29 PM, Serkan Çoban wrote:
> I can wait for the patch to complete, please inform me when you are ready.
> If it will take too much time to solve the crawl issue I can test
> without it too...
>
I don't know the root cause of the problem, so I am not
I've since ordered a different switch, the same manufacturer as the HBAs.
We have decided to rebuild the lab since we were having issues with oVirt
as well. We can disregard this unless the issue is reproducible with the
new equipment; I believe it is equipment related.
On Thu, Aug 11, 2016 at
- Original Message -
> From: "Gandalf Corvotempesta"
> To: "gluster-users"
> Sent: Thursday, August 11, 2016 2:43:34 PM
> Subject: [Gluster-users] Clarification on common tasks
>
> I would like to make some clarification on
On 11/08/2016 7:13 PM, Gandalf Corvotempesta wrote:
1) kill the brick process (how can I identify which brick process
to kill)?
glusterfsd is the brick process.
Also "gluster volume status" lists the PIDs of all the brick processes.
2) unmount the brick, for example:
umount /dev/sdc
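A hedged sketch of those two steps (gv0 is the volume used in this thread; the PID, brick path and device are placeholders):

# find the PID of the failed brick's glusterfsd process
gluster volume status gv0
# stop that specific brick process (PID taken from the output above)
kill -15 <brick-pid>
# then unmount the brick filesystem
umount /dev/sdc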
I can wait for the patch to complete, please inform me when you are ready.
If it will take too much time to solve the crawl issue, I can test
without it too...
Serkan
On Thu, Aug 11, 2016 at 5:52 AM, Pranith Kumar Karampuri
wrote:
>
>
> On Wed, Aug 10, 2016 at 1:58 PM, Serkan
I would like to get some clarification on common tasks needed by
gluster administrators.
A) Let's assume a disk/brick has failed (or is going to fail) and I
would like to replace it.
What is the proper way to do so with no data loss or downtime?
Looking on the mailing list, it seems to be the following:
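Roughly, and assembled from the replies elsewhere in this digest (gv0, the device and the brick paths are placeholders):

# 1. stop the failed brick's glusterfsd process (PID from "gluster volume status gv0")
kill -15 <brick-pid>
# 2. unmount the failed disk and mount the replacement at a new path
umount /bricks/old_brick
mount /dev/sdX /bricks/new_brick
# 3. point the volume at the new brick; healing then starts
gluster volume replace-brick gv0 server1:/bricks/old_brick server1:/bricks/new_brick commit force
# 4. watch heal progress
gluster volume heal gv0 statistics heal-count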
The heal completed, but I will try this by simulating a disk failure in the
cluster and reply to you. Thanks for the help.
On Thu, Aug 11, 2016 at 9:52 AM, Pranith Kumar Karampuri
wrote:
>
>
> On Fri, Aug 5, 2016 at 8:37 PM, Serkan Çoban wrote:
>>
>> Hi again,
>>
I don't think these will help. We need to trigger parallel heals, I gave
the command as a reply to one of your earlier threads. Sorry again for the
delay :-(.
On Tue, Aug 9, 2016 at 3:53 PM, Serkan Çoban wrote:
> Does increasing any of the below values help EC heal speed?
>
Added Rafi and Raghavendra, who work on RDMA.
On Mon, Aug 8, 2016 at 7:58 AM, Dan Lavu wrote:
> Hello,
>
> I'm having some major problems with Gluster and oVirt, I've been ripping
> my hair out with this, so if anybody can provide insight, that will be
> fantastic. I've tried both
On Fri, Aug 5, 2016 at 8:37 PM, Serkan Çoban wrote:
> Hi again,
>
> I am seeing the above situation in a production environment now.
> One disk on one of my servers broke. I killed the brick process,
> replaced the disk, mounted it, and then did a gluster v start force.
>
>
Hi All,
The meeting minutes and logs for this week's meeting are available at
the links below.
- Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-08-10/weekly_community_meeting_10aug2016.2016-08-10-12.00.html
- Minutes (text):
On Fri, Aug 5, 2016 at 2:53 PM, luca giacomelli
wrote:
> Hi, I'm trying to implement something similar to
> http://blog.gluster.org/2016/04/using-lio-with-gluster/ and
> https://pkalever.wordpress.com/2016/06/29/non-shared-persistent-gluster-storage-with-kubernetes/
>
>
I included Prashanth and Rejy who know about SELinux in detail.
On Fri, Aug 5, 2016 at 4:05 PM, tecforte jason
wrote:
> Hi,
>
> I have my GlusterFS configured in distributed-replicated mode. The total
> node count is 3.
>
> I get the below error when issuing the command service glusterd
+Prasanna, author of the blog post.
On Fri, Aug 5, 2016 at 2:53 PM, luca giacomelli
wrote:
> Hi, I'm trying to implement something similar to
> http://blog.gluster.org/2016/04/using-lio-with-gluster/ and
> https://pkalever.wordpress.com/2016/06/29/non-shared-persist
>