XiaoZan,
The socket_connect error messages you are observing are not spam, but
from this log snippet alone it is impossible to say why you are seeing
them. The message repeats because Gluster's transport layer retries the
connection once every 3 seconds until it succeeds.
On 09/09/2014 11:35 AM, Ilya Ivanov wrote:
Ahh, thank you, now I get it. I deleted it on one node and it
replicated to another one. Now I get the following output:
[root@gluster1 var]# gluster volume heal gv01 info
Brick gluster1:/home/gluster/gv01/
gfid:d3def9e1-c6d0-4b7d-a322-b5019305182e
What's a gfid split-brain and how is it different from normal split-brain?
I accessed the file with stat, but heal info still shows Number of
entries: 1
[root@gluster1 gluster]# getfattr -d -m. -e hex gv01/123
# file: gv01/123
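For reference, a gfid printed by "heal info" corresponds to a hard link kept under the brick's .glusterfs directory, sharded by the first two hex-character pairs of the gfid (.glusterfs/d3/de/d3def9e1-...). A minimal sketch of computing that path (the helper name and example brick path are mine, not from this thread):

```shell
# Sketch: map a gfid reported by "gluster volume heal <vol> info" to the
# hard-link path Gluster keeps under the brick's .glusterfs directory.
# The function name and the example brick path are illustrative only.
gfid_path() {
    brick=$1
    gfid=$2
    # Gluster shards .glusterfs by the first two hex-character pairs
    # of the gfid, e.g. .glusterfs/d3/de/<full-gfid>
    p1=$(printf '%s' "$gfid" | cut -c1-2)
    p2=$(printf '%s' "$gfid" | cut -c3-4)
    printf '%s/.glusterfs/%s/%s/%s\n' "$brick" "$p1" "$p2" "$gfid"
}

gfid_path /home/gluster/gv01 d3def9e1-c6d0-4b7d-a322-b5019305182e
# -> /home/gluster/gv01/.glusterfs/d3/de/d3def9e1-c6d0-4b7d-a322-b5019305182e
```

Stat-ing the file through the client mount point is what triggers self-heal; inspecting the hard link directly on each brick can help identify which copy is stale when resolving a gfid split-brain.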
Anyone is welcome to join us for this IRC meeting on Freenode. It is
happening in #gluster-meeting, starting about 5 minutes from now.
Please see the agenda:
- https://public.pad.fsfe.org/p/gluster-bug-triage
Thanks,
Niels
Minutes:
http://meetbot.fedoraproject.org/gluster-meeting/2014-09-09/gluster-meeting.2014-09-09-12.03.html
Minutes (text):
http://meetbot.fedoraproject.org/gluster-meeting/2014-09-09/gluster-meeting.2014-09-09-12.03.txt
Log:
Greetings all,
I am setting up a new pair of servers in the data center to be used
exclusively as Gluster storage servers for VMWare. Each server is
identical with 16G RAM, dual 10G NICs, 12x 600G 15K-RPM SAS drives, and an
LSI RAID controller. Each will be running CentOS 6.5 with the latest
On 08/06/2014 06:26 PM, Justin Clift wrote:
- Original Message -
Did we get to break the tie? :)
Yep. Latest results are:
* 5:30 PM IST / 12:00 UTC - 47 votes (52%)
* 6:30 PM IST / 13:00 UTC - 9 votes (10%)
* 7:30 PM IST / 14:00 UTC - 12 votes (13%)
* 8:30 PM IST /
I'm currently using GlusterFS 3.5.2 on a pair of production servers to
share uploaded files and it's been reliable and working well with just the
two servers. I've done some local tests of trying to add and remove servers
and the results have not been good. I'm starting to think I have the wrong
I have a very simple two-node setup.
After I rebooted one of the nodes, a directory deep inside the hierarchy on
the other node has become dead. By dead I mean any process trying to
access its contents becomes stuck.
Absolutely nothing appears in the logs.
How can I fix that very very quickly?
TIA