Hi Gluster people ...
On my geo-replication remote (far) server I am getting a LOT of these
messages in the log. What is going on?
[2014-09-19 13:06:42.088341] W [fuse-bridge.c:1214:fuse_err_cbk] 0-glusterfs-fuse: 7979073: SETXATTR() /.gfid/47407ff4-a921-4e47-b452-32785d0f641c => -1 (Invalid
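If you want to see which file a gfid like the one in that message refers to, one option (a sketch only; the brick path is a placeholder, and this assumes the standard .glusterfs backend layout where the first two hex byte-pairs of the gfid form the directory levels) is to look it up directly on a brick:

```shell
# Hypothetical brick path; the gfid is taken from the log line above.
# Bricks keep a link for each gfid under .glusterfs/<first-2-hex>/<next-2-hex>/.
ls -l /export/brick/.glusterfs/47/40/47407ff4-a921-4e47-b452-32785d0f641c
```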
On 15/09/2014 08:03 AM, Aravinda wrote:
Please share the log snippets (from every master brick node) under
/var/log/glusterfs/geo-replication//*.log if you see any
errors.
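A rough way to pull out just the warning/error lines from those logs is a grep like the following (a sketch only; the session directory name under geo-replication/ will vary per setup):

```shell
# Scan each geo-replication log on a master brick node for Warning/Error
# lines; gluster log entries carry a severity letter: [timestamp] E [...] / W [...]
for f in /var/log/glusterfs/geo-replication/*/*.log; do
    [ -e "$f" ] || continue            # nothing to do if no logs exist
    echo "== $f =="
    grep -E '\] [EW] \[' "$f" | tail -n 20
done
```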
--
regards
Aravinda
On 09/13/2014 07:02 PM, HL wrote:
Hello
I've upgraded all my nodes from 3.3.x to 3.5.2 glusterfs
since the geo-replication was configured under 3.3.x, I deleted all configs
and files on the remote system ...
Created a new volume as the howtos say ...
however, after stopping and starting geo-replication a zillion times on the
re
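For anyone following along, stopping and starting a geo-replication session is done with commands of the following form (a sketch only; mastervol, slavehost, and slavevol are placeholder names):

```shell
# Sketch only; substitute your own master volume, slave host, and slave volume.
gluster volume geo-replication mastervol slavehost::slavevol stop
gluster volume geo-replication mastervol slavehost::slavevol start
# The session state can then be checked with:
gluster volume geo-replication mastervol slavehost::slavevol status
```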
Could someone fix this?
On page
https://forge.gluster.org/gluster-docs-project/pages/GlusterFS_34_Release_Notes
the link
"RDMA connection manager needs IPoIB for connection establishment. More
details can be found *here*."
where here =
"https://github.com/gluster/glusterfs/blob/master/doc/rd
Is there a time schedule for these updates to land in a Debian repo?
What is the policy on this issue?
Cheers and thanks.
Harry
On 15/07/2013 07:38 PM, Vijay Bellur wrote:
Hi All,
3.4.0 and 3.3.2 releases of GlusterFS are now available. GlusterFS
3.4.0 can be downloaded from [1]
and release
Been there ...
Here is my ten-cent advice:
a) Prepare for tomorrow
b) Rest
c) Think
d) Plan
e) Act
I am sure it will work for you once you have calmed down.
Tech hints:
ifconfig <iface> mtu 9000, or whatever your NIC can afford.
Having a 100 Mbit link is not a good idea.
I've recently located a dual-port 1 Gbit NIC on eBay.
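As a sketch of the MTU hint above (the interface name eth0 is just a placeholder; make sure your NIC, driver, and any switch in the path actually support jumbo frames before raising it):

```shell
# Check the current MTU first, then raise it (needs root; eth0 assumed).
ip link show eth0
ifconfig eth0 mtu 9000         # classic tool, as in the hint
ip link set dev eth0 mtu 9000  # iproute2 equivalent
```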
On 10/07/2013 04:05 PM, Vijay Bellur wrote:
A lot of volumes or a lot of delta to self-heal could trigger this crash.
3.3.2 containing this fix should be out real soon now. Appreciate
your patience in this regard.
Thanks,
Vijay
I hope this update will reach the Debian wheezy repo.
Regards
Dear List,
I am currently testing glusterfs in a small-scale non-production environment
with ordinary NICs.
I would like to purchase a couple of InfiniBand NICs in order to
connect 3 servers in point-to-point mode,
that is, with no switches in between.
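For reference, a point-to-point InfiniBand setup would still use the normal volume-create syntax, just with the rdma transport (a sketch only; hostnames and brick paths below are invented, and note that the RDMA connection manager needs IPoIB configured for connection establishment):

```shell
# Sketch only: a 3-node replicated volume over RDMA transport.
gluster volume create ibvol replica 3 transport rdma \
    node01:/export/brick node02:/export/brick node03:/export/brick
gluster volume start ibvol
```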
Since I've noticed that some of you have this
On 04/07/2013 07:39 PM, Vijay Bellur wrote:
On 07/04/2013 07:36 PM, HL wrote:
Hello list,
I have a 2 node replica clusterfs
Volume Name: GLVol
Type: Replicate
Volume ID: 2106bc3d-d184-42db-ab6e-e547ff816113
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node01:/export/fm_glust
Brick2: node02:/export/fe_glust
Options Reconfigured:
auth.all