On 10/21/2012 02:18 PM, Israel Shirk wrote:
> Haris, try the NFS mount. Gluster typically triggers healing through
> the client, so if you skip the client, nothing heals.
Not true anymore. With 3.3 there's a self-heal daemon that will handle
the heals. You do risk reading stale data if you don't read through the
client though.
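For reference, the 3.3 self-heal daemon can be inspected and kicked from
the CLI. A quick sketch, with "myvol" standing in as a placeholder for
your volume name:

    # list entries the self-heal daemon still needs to heal
    gluster volume heal myvol info

    # trigger a heal of everything currently flagged
    gluster volume heal myvol

    # force a full crawl of the bricks (e.g. after replacing a brick)
    gluster volume heal myvol full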
> The native Gluster client tends to be really @#$@#$@# stupid. It'll
> send reads to Singapore while you're in Virginia (and there are bricks
> 0.2ms away),
False. The client reads from the first brick to respond. Yes, if Singapore
is responding faster than Virginia you might want to figure out why
Virginia is so overloaded that it's taking more than 200ms to respond,
but really that shouldn't be the case.
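If you really do want to pin reads to one replica rather than racing the
bricks, there is an AFR option you can experiment with. A sketch only:
the volume and subvolume names below are placeholders, and the subvolume
name (e.g. "myvol-client-0") comes from your volfile:

    # prefer a specific replica subvolume for reads, if it isn't stale
    gluster volume set myvol cluster.read-subvolume myvol-client-0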
> then when healing is needed it will take a bunch of time to do that,
> all the while it's blocking your application or web server, which
> under heavy loads will cause your entire application to buckle.
False. 3.3 uses granular locking, so self-heal won't block your application.
> The NFS client is dumb, which in my mind is a lot better - it'll just
> do what you tell it to do and allow you to compensate for connectivity
> issues yourself using something like Linux-HA.
The "NFS client" is probably more apt than you meant. It is both
GlusterFS client and NFS server, and it connects to the bricks and
performs reads and self-heal in exactly the same way as the fuse client.
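To make that concrete: since every brick server runs the Gluster NFS
server, a client can mount the volume over plain NFSv3. Hostname and
volume name here are placeholders; Gluster's NFS server only speaks
NFSv3 over TCP:

    mount -t nfs -o vers=3,tcp web1-prod:/production /mnt/production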
> You have to keep in mind when using gluster that 99% of the people
> using it are running their tests on a single server (see the recent
> notes about how testing patches are only performed on a single server),
False. There are many more testers than that, most of whom are outside
of the development team.
> and most applications don't distribute or mirror to bricks more than a
> few hundred yards away. Their idea of geo-replication is that you
> send your writes to the other side of the world (which may or may not
> be up at the moment), then twiddle your thumbs for a while and hope it
> gets back to you. So, that said, it's possible to get it to work, and
> it's almost better than lsyncd, but it'll still make you cry periodically.
>
> Ok, back to happy time :)
>> Hi everyone,
>>
>> I am using Gluster in replication mode.
>> I have 3 bricks on 3 different physical servers connected over a WAN.
>> This makes writing, but also reading, files on the Gluster-mounted
>> volume very slow.
>> To remedy this I have made my web application read Gluster files from
>> the brick directly (I make a read-only bind mount of the brick), but
>> write to the GlusterFS-mounted volume so that the files will instantly
>> replicate to all 3 servers. At least, "instant replication" is what I
>> envision Gluster will do for me :)
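For what it's worth, a minimal sketch of the setup described above. The
volume name and brick path are invented for illustration; only the
/home/gluster/r mount point comes from the listings below:

    # writes go through the replicated volume
    mount -t glusterfs localhost:/production /home/gluster/r

    # reads come straight off the local brick via a read-only bind mount
    mount --bind /export/brick1 /var/www/brick-ro
    mount -o remount,ro,bind /var/www/brick-ro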
>> My problem is that files sometimes do not replicate to all 3 servers
>> instantly. There are certainly short network outages which may prevent
>> instant replication, and I have situations like this:
>> ssh web1-prod ls -l /home/gluster/r/production/zdrava.tv/web/uploads/24/playlist/38/19_10_2012_18.js
>> -rw-r--r-- 1 apache apache 75901 Oct 19 18:00 /home/gluster/r/production/zdrava.tv/web/uploads/24/playlist/38/19_10_2012_18.js
>>
>> ssh web2-prod ls -l /home/gluster/r/production/zdrava.tv/web/uploads/24/playlist/38/19_10_2012_18.js
>> -rw-r--r-- 1 apache apache 0 Oct 19 18:00 /home/gluster/r/production/zdrava.tv/web/uploads/24/playlist/38/19_10_2012_18.js
>>
>> ssh web3-prod ls -l /home/gluster/r/production/zdrava.tv/web/uploads/24/playlist/38/19_10_2012_18.js
>> -rw-r--r--. 1 apache apache 75901 Oct 19 18:00 /home/gluster/r/production/zdrava.tv/web/uploads/24/playlist/38/19_10_2012_18.js
>> The file on the web2 server's brick has a size of 0, so serving this
>> file from web2 causes errors in my application.
>>
>> I have had a split-brain situation a couple of times and resolved it
>> manually. The above kind of situation is not split-brain; it resolves
>> and (re-)replicates completely with a simple "ls -l" on the file in
>> question from any of the servers.
>> My question is:
>> I suppose that the problem here is incomplete replication of the file
>> in question due to temporary network problems.
>> How do I ensure complete replication immediately after the network has
>> been restored?
>> kind regards
>> Haris Zukanovic
>>
>> --
>> Haris Zukanovic
_______________________________________________
Gluster-users mailing list
[email protected]
http://supercolony.gluster.org/mailman/listinfo/gluster-users