Wow cool thanks.
It would be good to have it as an oVirt plugin.
A.
On Dec 4, 2014 3:03 AM, Aravinda m...@aravindavk.in wrote:
Hi All,
I created a small, locally installable web app called gdash, a simple
dashboard for GlusterFS.
gdash is a super-young project, which shows GlusterFS volume
Very nice! I see a small Puppet module and a Vagrant setup in my immediate
future for using this. Thanks for sharing!
--
Gene Liverman
Systems Administrator
Information Technology Services
University of West Georgia
glive...@westga.edu
ITS: Making Technology Work for You!
Hello,
we have a glusterfs 3.6.1 installation on CentOS 6 with two servers and
one brick each. We just saw some trouble with inodes that could not be
read and decided to shut down one of our servers and run xfs_repair on
the filesystem. (This turned up nothing.)
After which we tried to
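As a rough sketch of that procedure (the device and mount point below are placeholders, not the actual brick from this setup), repairing a brick's XFS filesystem generally means stopping the Gluster processes on that node and unmounting the brick before running xfs_repair:

service glusterd stop          # stop the management daemon on this node
pkill glusterfsd               # stop any brick processes still serving this node's bricks
umount /export/brick1          # the brick must be unmounted for xfs_repair
xfs_repair /dev/sdb1           # check/repair the brick filesystem
mount /export/brick1
service glusterd start         # brick processes are respawned on start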
Hi all,
Since the strange hiccup I had on Monday (files disappearing from the
volume, although existing on the bricks), another very strange (and
horrible) thing happened:
Out of the blue, gluster Volume-A is now pointing to the bricks of
another, completely separate and independent Volume-B.
Here is also a complete debug log from starting glusterd:
-jhz
http://www.l3s.de/~zab/glusterfs.log
It seems that I have two issues:
1) Data is not balanced between all bricks
2) One replication pair is not staying in sync
I have four servers/peers, each with one brick, all running 3.6.1. There are
two volumes, each running as distributed replicated volume. Below, I've
included some
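For reference (these are not the poster's actual commands; host and brick names are placeholders), a two-way distributed-replicated volume over four peers with one brick each is typically created along these lines:

gluster volume create myvol replica 2 transport tcp \
    server1:/export/brick server2:/export/brick \
    server3:/export/brick server4:/export/brick
gluster volume start myvol

With that brick order, server1/server2 form one replica pair and server3/server4 the other; DHT then distributes files across the two pairs, so if data ends up unevenly spread, running "gluster volume rebalance myvol start" after any layout change is the usual first thing to check.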
On Fri, Nov 28, 2014 at 01:08:29PM +0530, Vijay Bellur wrote:
Hi All,
To supplement our ongoing effort of better patch management, I am proposing
the addition of more sub-maintainers for various components. The rationale
behind this proposal is that the responsibilities of maintainers continue to
I think I know now what happened:
- 2 gluster volumes on 2 different servers: A and B.
- A=production, B=testing
- On server B we will soon expand, so I read up on how to add new nodes.
- Therefore on server B I ran gluster peer probe A, assuming that probe
would just check *whether* A was available (see the note below).
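For anyone reading later: peer probe is not a read-only check; it joins the probed host into the caller's trusted storage pool and syncs configuration between them. Hostnames below are placeholders:

gluster peer probe serverA     # joins serverA into this pool and exchanges volume configuration
gluster peer status            # shows the current pool members
gluster peer detach serverA    # removes a peer again, but only if none of its bricks are part of a volume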
This is actually directly related to my problem mentioned here on Monday:
Folder disappeared on volume, but exists on bricks.
I probed node A from server B, which caused all this. My bad.
:(
No data is lost, but is there any way to recover the volume information in
/var/lib/glusterd on
Status update:
Servers A and B now consider themselves gluster peers: each one lists
the other as a peer ($ gluster peer status).
However, gluster volume info volume_name only lists the bricks of B.
To solve my problem and restore autonomy of A, I think I could do the
following:
On server A:
On 04/12/14 16:05 +0100, Jan-Hendrik Zab wrote:
Here is also a complete debug log from starting glusterd:
-jhz
http://www.l3s.de/~zab/glusterfs.log
Apparently we fixed that specific problem. We deactivated iptables and
glusterd can again communicate with the other processes. We
Added a --gluster option to specify the gluster path if you are using a
source install.
If you already installed gdash, upgrade using `sudo pip install -U
gdash`
Usage:
sudo gdash --gluster /usr/local/sbin/gluster
Updated the same in blog: http://aravindavk.in/blog/introducing-gdash/
--
On 12/04/2014 09:08 PM, Peter B. wrote:
This is actually directly related to my problem mentioned here on Monday:
Folder disappeared on volume, but exists on bricks.
I probed node A from server B, which caused all this. My bad.
:(
No data is lost, but is there any way to recover
If the volumes had the same names, I have no idea what the result of
that would be.
If they had different names, it sounds like the volume data only synced
in one direction. Theoretically, you can look in /var/lib/glusterd/vols and
ensure that both volume directories exist on both servers (I
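A minimal way to compare the two sides, assuming a volume named volA and the default paths (adjust names to your install):

ls /var/lib/glusterd/vols/                     # on server A: one directory per volume known to this node
ssh serverB ls /var/lib/glusterd/vols/         # the same listing as seen by server B
diff /var/lib/glusterd/vols/volA/info <(ssh serverB cat /var/lib/glusterd/vols/volA/info)

If a volume definition is missing or differs on one node, stop glusterd there before restoring or copying the directory, then start it again so the configuration is re-read.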
As a follow up:
I created another replicated striped volume - two-brick replica, striped
across two sets of servers (four servers in all) - same config as mentioned
below. I started pouring data into it, and here's my output from `du`:
peer1 - 47G
peer2 - 47G
peer3 - 47G
peer4 -
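For comparison, a stripe 2 / replica 2 volume over four peers like the one described would have been created with something along these lines on 3.x (volume and brick names here are illustrative, not the poster's):

gluster volume create stripevol stripe 2 replica 2 transport tcp \
    peer1:/export/brick peer2:/export/brick \
    peer3:/export/brick peer4:/export/brick
gluster volume start stripevol

Adjacent bricks form the replica pairs, and each file's stripes are spread over both pairs, which is consistent with du reporting roughly the same usage on every peer.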
1 DHT volume comprising 16 50TB bricks spread across 4 servers. Each
server has 10Gbit Ethernet.
Each brick is a ZOL RAIDZ2 pool with a single filesystem.
Hello
I have 3 servers and 1 client.
The 3 servers each have a /data folder with 300 GB of space.
Can you tell me the best combination to create the volume?
gluster volume create my_volume replica 3 transport tcp node1:/data node2:/data
node3:/data
Is this OK for best performance?
Thx
Hello
I created the volume with the following command:
gluster volume create opennebula replica 3 transport tcp node1:/data
node2:/data node3:/data
# gluster volume info
Volume Name: opennebula
Type: Replicate
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node1:/data
Brick2:
Seems the issue was in
auth.allow: 176.126.164.0/22
hmmm... the client and nodes were from this subnet. Anyway, I
recreated the volume and now everything looks good.
thx
---
Senior System Administrator
Alexey Shalin
Hoster kg LLC -
On 12/04/2014 10:13 PM, Alexey Shalin wrote:
Seems the issue was in
auth.allow: 176.126.164.0/22
hmmm... the client and nodes were from this subnet. Anyway, I
recreated the volume and now everything looks good.
The documentation states that valid IP addresses can include wildcard
patterns
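For example, on these older releases auth.allow appears to match literal addresses and wildcard patterns rather than CIDR blocks, which would explain the /22 not working. The addresses below are placeholders; a /22 covers 176.126.164.0 through 176.126.167.255:

gluster volume set opennebula auth.allow 176.126.164.*,176.126.165.*,176.126.166.*,176.126.167.*
gluster volume info opennebula        # the option shows up under "Options Reconfigured"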
Yeah. sorry :)
---
Senior System Administrator
Alexey Shalin
Hoster kg LLC - http://www.hoster.kg
123 Akhunbaeva St. (BGTS building)
h...@hoster.kg
On 12/04/2014 08:32 PM, Niels de Vos wrote:
On Fri, Nov 28, 2014 at 01:08:29PM +0530, Vijay Bellur wrote:
Hi All,
To supplement our ongoing effort of better patch management, I am proposing
the addition of more sub-maintainers for various components. The rationale
behind this proposal the
On Thursday 04 December 2014 08:32 PM, Niels de Vos wrote:
On Fri, Nov 28, 2014 at 01:08:29PM +0530, Vijay Bellur wrote:
Hi All,
To supplement our ongoing effort of better patch management, I am proposing
the addition of more sub-maintainers for various components. The rationale
behind this
Hello, again
Something is wrong with my gluster install.
OS: Debian
cat /etc/debian_version: 7.6
Package: glusterfs-server
Versions:
3.2.7-3+deb7u1
Description: I have 3 servers with bricks (192.168.1.1 - node1,
192.168.1.2 - node2, 192.168.1.3 - node3)
Volume created by:
gluster volume create