[Gluster-users] glusterfs-nagios 1.1.0 on debian Jessie

2016-05-02 Thread prost pierrick
Hi everyone, we tried to install glusterfs-nagios on our GlusterFS cluster on Debian Jessie. When I tried to build from source, I got blocked on this dependency: the glusternagios module. What is the difference between the Nagios addon and the Nagios server addons?

[Gluster-users] Any way to remove the last brick or change it?

2016-05-02 Thread TomK
Hey All, New here and first time posting. I've made a typo and entered: gluster volume create mdsglusterv01 transport rdma mdskvm-p01:/mnt/p01-d01/glusterv01/ but the volume couldn't start since the rdma transport didn't exist: [root@mdskvm-p01 glusterfs]# ls -altri /usr/lib64/glusterfs/3.7.11/rpc-transport/*

Re: [Gluster-users] Hey All, Any way to remove the last brick or change it? Added erroneously.

2016-05-02 Thread TomK
Thanks very much! Did that, then recovered the ID as well using the following, found via Google: vol=mdsglusterv01 brick=/mnt/p01-d01/glusterv01 setfattr -n trusted.glusterfs.volume-id \ -v 0x$(grep volume-id /var/lib/glusterd/vols/$vol/info \ | cut -d= -f2 | sed 's/-//g') $brick Another question I
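The quoted one-liner, unwrapped into a runnable sketch (volume name and brick path taken from the thread; the interesting part is the pipeline that turns glusterd's recorded UUID into the dashless hex string setfattr expects):

```shell
#!/bin/sh
# Sketch of the volume-id recovery quoted above. Run on the server hosting
# the brick, and only for a brick that genuinely belongs to this volume.
vol=mdsglusterv01
brick=/mnt/p01-d01/glusterv01

# glusterd stores "volume-id=<uuid>" in the volume's info file; strip the
# dashes so the value can be passed as raw hex after the 0x prefix.
id=$(grep volume-id "/var/lib/glusterd/vols/$vol/info" \
    | cut -d= -f2 | sed 's/-//g')

# Re-apply the extended attribute the brick lost.
setfattr -n trusted.glusterfs.volume-id -v "0x$id" "$brick"
```

The xattr is what glusterd checks at brick start to confirm the directory really belongs to the volume, which is why restoring it lets the brick come back.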

Re: [Gluster-users] geo-replication going offline in 3.5 and 3.6 within 30 days

2016-05-02 Thread Kotresh Hiremath Ravishankar
Hi Brian, Thanks for reporting the issue. Could you please post the geo-replication logs? They would help us find out why geo-replication has failed to sync. You can find the master geo-rep logs under /var/log/glusterfs/geo-replication and the slave logs under /var/log/glusterfs/geo-replication-slaves

[Gluster-users] Hey All, Any way to remove the last brick or change it? Added erroneously.

2016-05-02 Thread TomK
Hey All, New here and first time posting. I've made a typo in the configuration and entered: gluster volume create mdsglusterv01 transport rdma mdskvm-p01:/mnt/p01-d01/glusterv01/ but the volume couldn't start since the rdma transport didn't exist: [root@mdskvm-p01 glusterfs]# ls -altri

Re: [Gluster-users] SSD tier experimenting

2016-05-02 Thread Mohammed Rafi K C
comments are inline. On 05/02/2016 09:42 PM, Dan Lambright wrote: > > - Original Message - >> From: "Sergei Hanus" >> To: "Mohammed Rafi K C" >> Cc: "Dan Lambright" >> Sent: Monday, May 2, 2016 9:40:22 AM >> Subject: Re:

Re: [Gluster-users] Hey All, Any way to remove the last brick or change it? Added erroneously.

2016-05-02 Thread Chen Chen
Hi Tom, You may try: gluster volume set volname config.transport tcp ref: http://www.gluster.org/community/documentation/index.php/RDMA_Transport#Changing_Transport_of_Volume Best regards, Chen On 5/3/2016 9:55 AM, TomK wrote: Hey All, New here and first time posting. I've made a typo in
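Chen's suggestion, expanded into a full sequence as a sketch. The stop/start around the set is an assumption based on the linked RDMA transport page (a transport change only takes effect once the volume is restarted); this obviously needs a live gluster cluster to run:

```shell
# Switch the mistyped rdma volume over to tcp transport.
# Assumes no clients have the volume mounted while it is stopped.
gluster volume stop mdsglusterv01
gluster volume set mdsglusterv01 config.transport tcp
gluster volume start mdsglusterv01
```

After the restart, `gluster volume info mdsglusterv01` should report `Transport-type: tcp`.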

Re: [Gluster-users] how to detach the offline peer, which carries data

2016-05-02 Thread Alan Millar
> From: 袁仲 > I have 3 peers, e.g. P1, P2 and P3, and each of them has 2 bricks: > P1 has 2 bricks, b1 and b2. > P2 has 2 bricks, b3 and b4. > P3 has 2 bricks, b5 and b6. > > Based on the above, I created a volume (afr volume) like this: > > b1 and b3

[Gluster-users] SSD tier experimenting

2016-05-02 Thread Sergei Hanus
Good day, colleagues. I'm experimenting with adding a tier to a gluster volume. The problem I face: after adding the tier, the volume status is stuck at "in progress": Task Status of Volume data -- Task : Tier

Re: [Gluster-users] [Gluster-devel] Fwd: dht_is_subvol_filled messages on client

2016-05-02 Thread Serkan Çoban
I started a rebalance and it did not fix the issue... On Mon, May 2, 2016 at 9:11 AM, Serkan Çoban wrote: >>1. What is the output of du -hs ? Please get this >>information for each of the bricks that are part of disperse. > There are 20 bricks in disperse-56 and the du -hs

Re: [Gluster-users] gluster 3.7.9 permission denied and mv errors

2016-05-02 Thread Sakshi Bansal
Regarding the ENOTEMPTY error I need some more information: 1) Are there multiple clients running the same script or performing rmdir on the directory that complained? 2) Was there a rebalance running when you saw the error? 3) Before you ran the second rmdir that successfully removed the

Re: [Gluster-users] how to detach the offline peer, which carries data

2016-05-02 Thread 袁仲
I am sorry for the misunderstanding. Actually I can stop the volume and even delete it. What I really wanted to express is that the volume should not be stopped or deleted because some virtual machines are running on it. In the case above, P1 has crashed and I have to reinstall the system for

Re: [Gluster-users] Freezing during heal

2016-05-02 Thread Kevin Lemonnier
Hi, So after some testing, it is a lot better but I do still have some problems with 3.7.11. When I reboot a server it seems to have some strange behaviour sometimes, but I need to test that better. Removing a server from the network, waiting for a while then adding it back and letting it heal

Re: [Gluster-users] Volume Tiering

2016-05-02 Thread Lindsay Mathieson
On 2/05/2016 10:15 PM, Mohammed Rafi K C wrote: - I presume the files are promoted across all bricks, i.e. you can't have different files promoted per brick. I didn't get your question correctly, but I will try to answer generically. File movement happens from one tier to another tier, as

Re: [Gluster-users] Volume Tiering

2016-05-02 Thread Mohammed Rafi K C
comments are inline. On 05/02/2016 05:06 PM, Lindsay Mathieson wrote: > On 2/05/2016 2:29 PM, Mohammed Rafi K C wrote: >> Hi Lindsay, >> >> Volume level data tiering was made fully supported in latest 3.7 >> releases, though an active effort is still going on to increase the >> small file

Re: [Gluster-users] how to detach the offline peer, which carries data

2016-05-02 Thread Atin Mukherjee
On 05/02/2016 01:30 PM, 袁仲 wrote: > I am sorry for the misunderstanding. > Actually I can stop the volume and even delete it. What I really wanted to > express is that the volume should not be stopped or deleted because > some virtual machines are running on it. > In the case above, P1 has

Re: [Gluster-users] [Gluster-devel] Exporting Gluster Volume

2016-05-02 Thread Niels de Vos
On Mon, May 02, 2016 at 04:14:01PM +0530, ABHISHEK PALIWAL wrote: > Hi Team, > > I am exporting a gluster volume using GlusterNFS with ACL support, but at the NFS > client, while running the 'setfacl' command, I get "setfacl: /tmp/e: Remote I/O > error" > > > Following is the NFS option status for

[Gluster-users] dict_get errors in brick log

2016-05-02 Thread Serkan Çoban
Hi, I am getting dict_get errors in the brick log. I found the following and it got merged to master: https://bugzilla.redhat.com/show_bug.cgi?id=1319581 How can I find out whether it has been merged into 3.7? I am using 3.7.11 and am affected by the problem. Thanks, Serkan

Re: [Gluster-users] dict_get errors in brick log

2016-05-02 Thread Jiffin Tony Thottan
On 02/05/16 16:52, Serkan Çoban wrote: Hi, I am getting dict_get errors in the brick log. I found the following and it got merged to master: https://bugzilla.redhat.com/show_bug.cgi?id=1319581 How can I find out whether it has been merged into 3.7? You can track changes for 3.7 using
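Independently of the review tracker, a local clone answers the "did the fix reach release X?" question with `git tag --contains`. A hypothetical demo of the technique — the repository, commits, and tag below are synthetic stand-ins, not the real glusterfs history:

```shell
#!/bin/sh
# Demo: if a release tag is listed by `git tag --contains <fix-commit>`,
# that release includes the fix. Built in a throwaway repo for illustration.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=t -c user.email=t@t commit -q --allow-empty -m "fix: dict_get error"
fix=$(git rev-parse HEAD)         # the backported fix commit
git tag v3.7.11                   # release cut after the fix landed
git -c user.name=t -c user.email=t@t commit -q --allow-empty -m "unrelated later work"

git tag --contains "$fix"         # prints v3.7.11
```

In a real glusterfs checkout you would locate the backport commit first (e.g. by searching `git log release-3.7` for the bug ID) and then run the same `--contains` query against it.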

Re: [Gluster-users] [Gluster-devel] Exporting Gluster Volume

2016-05-02 Thread ABHISHEK PALIWAL
Hi Niels, Here is the output of rpcinfo -p $NFS_SERVER:
root@128:/# rpcinfo -p $NFS_SERVER
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
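One sanity check worth applying to rpcinfo output like the above: a working Gluster NFS export should also register program 100003 (nfs) and 100005 (mountd) with the portmapper, not just portmapper itself. A small sketch of that check — the sample lines here are synthetic, not taken from the real listing:

```shell
#!/bin/sh
# Check an rpcinfo -p listing for an NFS registration (program 100003).
# In practice you would pipe `rpcinfo -p $NFS_SERVER` in; a synthetic
# sample is used here so the sketch is self-contained.
sample='100000 4 tcp 111 portmapper
100003 3 tcp 38465 nfs
100005 3 tcp 38465 mountd'

echo "$sample" \
    | awk '$1 == 100003 { found = 1 } END { exit !found }' \
    && echo "nfs service registered"
```

If the check fails on a live server, the Gluster NFS translator never came up, which is worth ruling out before debugging ACL behaviour.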

Re: [Gluster-users] Volume Tiering

2016-05-02 Thread Lindsay Mathieson
On 2/05/2016 2:29 PM, Mohammed Rafi K C wrote: Hi Lindsay, Volume level data tiering was made fully supported in latest 3.7 releases, though an active effort is still going on to increase the small file performance. You can find a starting point through the blog post by Dan [1]. Let me

Re: [Gluster-users] Freezing during heal

2016-05-02 Thread Kevin Lemonnier
After some more testing it looks like rebooting a server is fine; everything continues to work during the reboot and then during the heal, exactly like when I simulate a network outage. I guess my problems were just leftovers from adding a brick earlier; looks like that causes real problems and

Re: [Gluster-users] [Gluster-devel] Fwd: dht_is_subvol_filled messages on client

2016-05-02 Thread Serkan Çoban
>1. What is the output of du -hs ? Please get this >information for each of the bricks that are part of disperse. There are 20 bricks in disperse-56 and the du -hs output is like: 80K /bricks/20 80K /bricks/20 80K /bricks/20 80K /bricks/20 80K /bricks/20 80K /bricks/20 80K /bricks/20 80K
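The per-brick usage being asked for above can be gathered with a simple loop. A sketch, using a temporary directory as a stand-in for the real brick mount points (the /bricks/NN layout from the thread would replace it on an actual server):

```shell
#!/bin/sh
# Print disk usage for every brick directory, one line per brick.
# A temp-dir layout stands in for real brick mounts in this demo.
set -e
root=$(mktemp -d)
mkdir "$root/brick1" "$root/brick2"
dd if=/dev/zero of="$root/brick1/data" bs=1024 count=64 2>/dev/null

for b in "$root"/brick*; do
    du -sh "$b"
done
```

On a real node the loop would be `for b in /bricks/*; do du -sh "$b"; done`, run on each server holding bricks of the disperse subvolume.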

Re: [Gluster-users] Freezing during heal

2016-05-02 Thread Krutika Dhananjay
Could you attach the glusterfs client and shd logs? -Krutika On Mon, May 2, 2016 at 2:35 PM, Kevin Lemonnier wrote: > Hi, > > So after some testing, it is a lot better but I do still have some > problems with 3.7.11. > When I reboot a server it seems to have some strange

Re: [Gluster-users] SSD tier experimenting

2016-05-02 Thread Mohammed Rafi K C
On 05/02/2016 01:19 PM, Sergei Hanus wrote: > Good day, colleagues. > > I'm experimenting with adding tier to gluster volume. The problem I > face - after adding tier, the volume status is stuck "in progress" : > > Task Status of Volume data >

[Gluster-users] Exporting Gluster Volume

2016-05-02 Thread ABHISHEK PALIWAL
Hi Team, I am exporting a gluster volume using GlusterNFS with ACL support, but at the NFS client, running the 'setfacl' command fails with "setfacl: /tmp/e: Remote I/O error". Following is the NFS option status for the volume: nfs.enable-ino32 no nfs.mem-factor 15 nfs.export-dirs on nfs.export-volumes on