Re: [Gluster-users] Introducing gdash - A simple GlusterFS dashboard

2014-12-04 Thread Arman Khalatyan
Wow cool thanks. Would be good to have it as a ovirt plugin. A. On Dec 4, 2014 3:03 AM, Aravinda m...@aravindavk.in wrote: Hi All, I created a small local installable web app called gdash, a simple dashboard for GlusterFS. gdash is a super-young project, which shows GlusterFS volume

Re: [Gluster-users] Introducing gdash - A simple GlusterFS dashboard

2014-12-04 Thread Gene Liverman
Very nice! I see a small Puppet module and a Vagrant setup in my immediate future for using this. Thanks for sharing! -- Gene Liverman Systems Administrator Information Technology Services University of West Georgia glive...@westga.edu ITS: Making Technology Work for You! This e-mail and any

[Gluster-users] Problem starting glusterd on CentOS 6

2014-12-04 Thread Jan-Hendrik Zab
Hello, we have a glusterfs 3.6.1 installation on CentOS 6 with two servers and one brick each. We just saw some trouble with inodes not being able to be read and decided to shut down one of our servers and run xfs_repair on the filesystem. (This turned up nothing.) After which we tried to

[Gluster-users] Sudden interference between 2 independent gluster volumes

2014-12-04 Thread Peter B.
Hi all, Since the strange hiccup I had on Monday (files disappearing from the volume, although existing on the bricks), another very strange (and horrible) thing happened: Out of the blue, gluster Volume-A is now pointing to the bricks of another, completely separate and independent Volume-B.

Re: [Gluster-users] Problem starting glusterd on CentOS 6

2014-12-04 Thread Jan-Hendrik Zab
Here is also a complete debug log from starting glusterd: -jhz http://www.l3s.de/~zab/glusterfs.log ___ Gluster-users mailing list Gluster-users@gluster.org
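For anyone trying to reproduce such a log: running glusterd in the foreground with debug logging is the usual way to capture it (a sketch; the service name assumes a standard CentOS 6 package install):

```shell
# Stop the background service first, then run glusterd in the
# foreground so debug-level messages go straight to the terminal.
service glusterd stop
glusterd --debug 2>&1 | tee /tmp/glusterd-debug.log
```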

[Gluster-users] replication and balancing issues

2014-12-04 Thread Kiebzak, Jason M.
It seems that I have two issues: 1) Data is not balanced between all bricks 2) one replication pair is not staying in sync I have four servers/peers, each with one brick, all running 3.6.1. There are two volumes, each running as distributed replicated volume. Below, I've included some

Re: [Gluster-users] [Gluster-devel] Proposal for more sub-maintainers

2014-12-04 Thread Niels de Vos
On Fri, Nov 28, 2014 at 01:08:29PM +0530, Vijay Bellur wrote: Hi All, To supplement our ongoing effort of better patch management, I am proposing the addition of more sub-maintainers for various components. The rationale behind this proposal: the responsibilities of maintainers continue to

Re: [Gluster-users] Folder disappeared on volume, but exists on bricks.

2014-12-04 Thread Peter B.
I think I know now what happened: - 2 gluster volumes on 2 different servers: A and B. - A=production, B=testing - On server B we will soon expand, so I read up on how to add new nodes. - Therefore on server B I ran gluster probe A, assuming that probe was just reading *if* A would be available.

Re: [Gluster-users] Sudden interference between 2 independent gluster volumes

2014-12-04 Thread Peter B.
This is actually directly related to my problem mentioned here on Monday: Folder disappeared on volume, but exists on bricks. I probed node A from server B, which caused all this. My bad. :( No data is lost, but is there any way to recover the volume information in /var/lib/glusterd on

Re: [Gluster-users] Sudden interference between 2 independent gluster volumes

2014-12-04 Thread Peter B.
Status update: Server A and B now consider themselves gluster peers: Each one lists the other one as peer ($ gluster peer status). However, gluster volume info volume_name only lists the bricks of B. To solve my problem and restore autonomy of A, I think I could do the following: On server A:
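The detach path outlined above would look roughly like this (a sketch; serverB is a placeholder hostname, and `force` may be required while volume definitions still reference the peer):

```shell
# On server A: verify the unwanted peering, then detach server B
gluster peer status
gluster peer detach serverB force

# Confirm A is standalone again and its volume definition is intact
gluster peer status
gluster volume info
```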

Re: [Gluster-users] Problem starting glusterd on CentOS 6

2014-12-04 Thread Jan-Hendrik Zab
On 04/12/14 16:05 +0100, Jan-Hendrik Zab wrote: Here is also a complete debug log from starting glusterd: -jhz http://www.l3s.de/~zab/glusterfs.log Apparently we fixed that specific problem. We deactivated iptables and glusterd can again communicate with the other processes. We
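Rather than disabling iptables outright, the usual alternative is to open only the ports GlusterFS needs (a sketch; the brick port range assumes GlusterFS 3.4+, where bricks are allocated ports from 49152 upward, one per brick):

```shell
# glusterd management port (and 24008 for legacy RDMA/portmapper use)
iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT
# brick ports: one per brick, starting at 49152 on 3.4 and later
iptables -A INPUT -p tcp --dport 49152:49251 -j ACCEPT
# persist the rules across reboots (CentOS 6)
service iptables save
```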

Re: [Gluster-users] Introducing gdash - A simple GlusterFS dashboard

2014-12-04 Thread Aravinda
Added --gluster option to specify the gluster path if you are using source install. If you already installed gdash, upgrade using `sudo pip install -U gdash` Usage: sudo gdash --gluster /usr/local/sbin/gluster Updated the same in blog: http://aravindavk.in/blog/introducing-gdash/ --

Re: [Gluster-users] Sudden interference between 2 independent gluster volumes

2014-12-04 Thread Atin Mukherjee
On 12/04/2014 09:08 PM, Peter B. wrote: This is actually directly related to my problem is mentioned here on Monday: Folder disappeared on volume, but exists on bricks. I probed node A from server B, which caused all this. My bad. :( No data is lost, but is there any way to recover

Re: [Gluster-users] Sudden interference between 2 independent gluster volumes

2014-12-04 Thread Joe Julian
If the volumes had the same names, I have no idea what the result of that would be. If they had different names, it sounds like the volume data only sync'ed one direction. Theoretically, you can look in /var/lib/glusterd/vols and ensure that both volume directories exist on both servers (I
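Checking whether the volume definitions survived on both sides, as suggested above, comes down to comparing the configuration trees under /var/lib/glusterd (a sketch; Volume-A and serverB are placeholders):

```shell
# On each server: list the volume definitions glusterd knows about
ls /var/lib/glusterd/vols/

# Compare one volume's info file between the two servers
diff /var/lib/glusterd/vols/Volume-A/info \
     <(ssh serverB cat /var/lib/glusterd/vols/Volume-A/info)

# After restoring any missing volume directories from the peer,
# restart glusterd so it re-reads the configuration.
service glusterd restart
</imports>
```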

Re: [Gluster-users] replication and balancing issues

2014-12-04 Thread Kiebzak, Jason M.
As a follow up: I created another replicated striped volume - two brick replica, striped against two sets of servers (four servers in all) - same config as mentioned below. I started pouring data into it, and here's my output from `du`: peer1 - 47G peer2 - 47G peer3 - 47G peer4 -
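For the original report (a distribute imbalance plus one replica pair out of sync), the usual first checks are a heal status and a rebalance (a sketch; myvol is a placeholder volume name):

```shell
# List files the self-heal daemon still considers out of sync
gluster volume heal myvol info

# Redistribute existing data across the distribute subvolumes
gluster volume rebalance myvol start
gluster volume rebalance myvol status
```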

[Gluster-users] A year's worth of Gluster

2014-12-04 Thread Franco Broi
1 DHT volume comprising 16 50TB bricks spread across 4 servers. Each server has 10Gbit Ethernet. Each brick is a ZoL RAIDZ2 pool with a single filesystem.
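For reference, a brick backed by a ZFS-on-Linux RAIDZ2 pool with a single filesystem could be laid out like this (a sketch; pool, disk, and dataset names are placeholders, not taken from the post):

```shell
# One RAIDZ2 pool per brick (tolerates two disk failures)
zpool create brick1 raidz2 sdb sdc sdd sde sdf sdg
zfs create brick1/gluster
# The brick path handed to `gluster volume create` would then be
# something like server1:/brick1/gluster
```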

[Gluster-users] Hello, small simple question about creating volume

2014-12-04 Thread Alexey Shalin
Hello, I have 3 servers and 1 client. Each of the 3 servers has a /data folder with 300 GB of space. Can you tell me the best combination to create a volume? gluster volume create my_volume replica 3 transport tcp node1:/data node2:/data node3:/data Is this ok for best performance? Thx
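With three servers contributing one brick each, replica 3 is the straightforward layout; using a subdirectory rather than the mount point itself as the brick path is commonly recommended. A sketch of the full sequence (brick subdirectory and mount point are illustrative):

```shell
# Create, start, and verify a 3-way replicated volume
gluster volume create my_volume replica 3 transport tcp \
    node1:/data/brick node2:/data/brick node3:/data/brick
gluster volume start my_volume
gluster volume info my_volume

# On the client: mount via the native FUSE client
mount -t glusterfs node1:/my_volume /mnt/gluster
```

Note that replica 3 writes every file to all three servers, so usable capacity is 300 GB, not 900 GB; it favors availability over raw write throughput.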

[Gluster-users] Empty bricks .... :( df -h hangs

2014-12-04 Thread Alexey Shalin
Hello I create volume with next command: gluster volume create opennebula replica 3 transport tcp node1:/data node2:/data node3:/data # gluster volume info Volume Name: opennebula Type: Replicate Status: Started Number of Bricks: 3 Transport-type: tcp Bricks: Brick1: node1:/data Brick2:

Re: [Gluster-users] Empty bricks .... :( df -h hangs

2014-12-04 Thread Alexey Shalin
seems the issue was in auth.allow: 176.126.164.0/22 hmmm... the client and nodes were from this subnet.. anyway, I recreated the volume and now everything looks good thx --- Senior Systems Administrator Alexey Shalin, OsOO Hoster kg -
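If the subnet restriction is still wanted, auth.allow can be reapplied using the wildcard patterns that 3.x releases accepted (CIDR notation such as 176.126.164.0/22 was not valid there). A sketch, expanding that /22 into its four /24 wildcards:

```shell
# Restrict client access to the four /24s making up 176.126.164.0/22
gluster volume set opennebula auth.allow \
    "176.126.164.*,176.126.165.*,176.126.166.*,176.126.167.*"
gluster volume info opennebula
```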

Re: [Gluster-users] Empty bricks .... :( df -h hangs

2014-12-04 Thread Joe Julian
On 12/04/2014 10:13 PM, Alexey Shalin wrote: seems issue was in auth.allow: 176.126.164.0/22 hmmm... client and nodes was from this subnet.. any way .. I recreated volume and now everything looks good The documentation states that Valid IP address which includes wild card patterns

Re: [Gluster-users] Empty bricks .... :( df -h hangs

2014-12-04 Thread Alexey Shalin
Yeah. sorry :) --- Senior Systems Administrator Alexey Shalin, OsOO Hoster kg - http://www.hoster.kg Akhunbaev St. 123 (BGTS building) h...@hoster.kg ___ Gluster-users mailing list

Re: [Gluster-users] [Gluster-devel] Proposal for more sub-maintainers

2014-12-04 Thread Pranith Kumar Karampuri
On 12/04/2014 08:32 PM, Niels de Vos wrote: On Fri, Nov 28, 2014 at 01:08:29PM +0530, Vijay Bellur wrote: Hi All, To supplement our ongoing effort of better patch management, I am proposing the addition of more sub-maintainers for various components. The rationale behind this proposal: the

Re: [Gluster-users] [Gluster-devel] Proposal for more sub-maintainers

2014-12-04 Thread Raghavendra Bhat
On Thursday 04 December 2014 08:32 PM, Niels de Vos wrote: On Fri, Nov 28, 2014 at 01:08:29PM +0530, Vijay Bellur wrote: Hi All, To supplement our ongoing effort of better patch management, I am proposing the addition of more sub-maintainers for various components. The rationale behind this

Re: [Gluster-users] [Gluster-devel] Proposal for more sub-maintainers

2014-12-04 Thread Vijaikumar M
On Thursday 04 December 2014 08:32 PM, Niels de Vos wrote: On Fri, Nov 28, 2014 at 01:08:29PM +0530, Vijay Bellur wrote: Hi All, To supplement our ongoing effort of better patch management, I am proposing the addition of more sub-maintainers for various components. The rationale behind this

[Gluster-users] Something wrong with glusterfs

2014-12-04 Thread Alexey Shalin
Hello again, something is wrong with my gluster install. OS: Debian, cat /etc/debian_version 7.6 Package: glusterfs-server Version: 3.2.7-3+deb7u1 Description: I have 3 servers with bricks (192.168.1.1 - node1, 192.168.1.2 - node2, 192.168.1.3 - node3) volume created by: gluster volume create