Hi,
I have a cluster of 2 servers running 3.7.3 with replication, and standard
NFS (no Ganesha). This is on CentOS 6.
I use CTDB with 2 virtual IPs (one for each server in a normal
situation) to share the volume over NFS and CIFS (samba).
fcntl() file locking doesn't seem to work when the
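(As a side note: before digging into the locking itself, it is worth confirming
that CTDB is healthy and seeing which node currently hosts each virtual IP.
A minimal check, assuming a standard CTDB install, would be:
  ctdb status   # every node should be listed as OK, not UNHEALTHY or BANNED
  ctdb ip       # shows which node currently serves each public/virtual IP
If a node is unhealthy, CTDB itself can interfere with NFS/CIFS failover and
lock recovery.)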
Sorry, I only read "replicated" in your first mail and overlooked the
"distributed" part :'(
Regards,
Mathieu CHATEAU
http://www.lotp.fr
2015-08-04 9:47 GMT+02:00 Florian Oppermann gluster-us...@flopwelt.de:
In my current configuration I have a distributed and replicated volume
which is (to my
Minutes :
http://meetbot.fedoraproject.org/gluster-meeting/2015-08-04/gluster-meeting.2015-08-04-12.08.html/
Minutes (text) :
http://meetbot-raw.fedoraproject.org/gluster-meeting/2015-08-04/gluster-meeting.2015-08-04-12.08.txt
Log :
As you are in replicate mode, all writes will be sent synchronously to all
bricks, and in your case to a single HDD.
I thought that every file would be sent to 2 bricks synchronously, but if
I write several files they are distributed between the three pairs of
bricks. Therefore the performance
In my current configuration I have a distributed and replicated volume
which is (to my understanding) similar to a RAID 10.
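(For illustration, a volume with three replica pairs, the RAID-10-like layout
described above, is typically created along these lines; the volume, host and
brick names here are only placeholders:
  gluster volume create myvol replica 2 \
    srv1:/bricks/b1 srv2:/bricks/b1 \
    srv1:/bricks/b2 srv2:/bricks/b2 \
    srv1:/bricks/b3 srv2:/bricks/b3
Consecutive bricks on the command line form a replica pair, and each file is
placed on exactly one of the three pairs.)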
On 04.08.2015 08:51, Mathieu Chateau wrote:
In a replicated scheme, it's like a RAID 1 (mirror).
You write as slowly as the slowest disk. The client will wait for all brick
In a replicated scheme, it's like a RAID 1 (mirror).
You write as slowly as the slowest disk. The client will wait for write
confirmation from all bricks.
In this scheme, you wouldn't want much more than 3 bricks.
I think you mixed it up with the distributed scheme, which is like a RAID 0 (striped).
This one gets more
On 08/03/2015 10:07 PM, Prasun Gera wrote:
Does the 3.7 client work for precise?
There is no 3.7 client for precise. The build is 'all or nothing'.
3.6 for precise ought to work with 3.7 servers. Try it and see.
It would be great if someone in the community wanted to step up and
figure out
Hi Vijay,
The du command can round off the values; could you check the values with 'du -sk'?
It’s ongoing. I’ll let you know the new value ASAP.
We will investigate on this issue and update you soon on the same.
FYI it mainly concerns 1 brick per replica set (and thus its replica brick:
OK, after a couple of hours (the last message was blocked but was sent around
9:30 AM French time), here is the result:
# du -sk /home/sterpone_team/
10583360073 /home/sterpone_team/
In other words: ~9.86TB
So, as you can read below, the result is globally the same as before (with
'du
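(For reference, the conversion from the du -sk figure, which is in KiB, to
terabytes is straightforward:
  echo "scale=3; 10583360073 / 1024 / 1024 / 1024" | bc
which comes out at about 9.856, i.e. the ~9.86TB quoted above.)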
Hi all,
I'm back and tested those things.
Michael was right. I enabled the read-ahead option and nothing changed.
So the thing that causes the problem with libgfapi and the d8 virtio drivers
is performance.write-behind. If it is off, everything works perfectly. If I
set it on, different problems result
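(For anyone hitting the same thing: the option can be toggled per volume, and
the current value is visible under "Options Reconfigured" in the volume info.
The volume name below is a placeholder:
  gluster volume set myvol performance.write-behind off
  gluster volume info myvol)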
On 08/05/2015 04:49 AM, Peter Becker wrote:
qmaster@srvamqpy01:~$ gluster --version
glusterfs 3.2.5 built on Jan 31 2012 07:39:59
FWIW, this is a rather old release. Can you see if the issue is
recurring with glusterfs 3.7?
-Ravi
Hi Ravi,
Not easily: 3.2.5 is what comes with Ubuntu 12.04 and based on previous posts
on this list it seems I can't just go and compile a newer one.
If our setup should work and an upgrade might fix the issues we see, then we
could try Ubuntu 14.04 - it's not a supported environment in our
Thanks, Ravi - I was not aware of that.
I'll update and try again.
Peter
From: Ravishankar N [mailto:ravishan...@redhat.com]
Sent: Wednesday, 5 August 2015 12:03 PM
To: Peter Becker; Gluster-users@gluster.org
Subject: Re: [Gluster-users] Split brain after rebooting half of a two-node
Hello,
We are trying to run a pair of ActiveMQ nodes on top of glusterfs, using the
approach described in
http://activemq.apache.org/shared-file-system-master-slave.html
This seemed to work at first, but if I start rebooting machines while under
load I seem to quickly get into this problem:
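(If this turns out to be the usual AFR split-brain, then on a current release,
such as the 3.7 suggested elsewhere in this thread, the first thing to look at
is the heal status; the volume name is a placeholder:
  gluster volume heal myvol info
  gluster volume heal myvol info split-brain
These commands are not available on 3.2.x, which is another argument for the
upgrade discussed here.)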
2015-08-04 19:25 GMT+02:00 Atin Mukherjee atin.mukherje...@gmail.com:
Creating data from the back end is definitely not recommended, as features like
replication depend heavily on the extended attributes set on the file, and
data created directly on the back end wouldn't have the xattrs set.
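(Those attributes are ordinary extended attributes stored on each brick copy;
they can be inspected directly on a brick, for example with getfattr. The path
below is a placeholder:
  getfattr -d -m . -e hex /bricks/b1/path/to/file
On a replicated volume this typically shows trusted.gfid plus the
trusted.afr.* changelog attributes that self-heal relies on.)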
Thank you Mathieu for your answer!
2015-08-03 20:24 GMT+02:00 Mathieu Chateau mathieu.chat...@lotp.fr:
date is not the same but is content different ?
Yes, the content is different.
The file on the node1 is the updated one, the other two nodes had the
old version.
You may have disabled the
-Atin
Sent from one plus one
On Aug 4, 2015 10:27 PM, shacky shack...@gmail.com wrote:
Thank you Mathieu for your answer!
2015-08-03 20:24 GMT+02:00 Mathieu Chateau mathieu.chat...@lotp.fr:
date is not the same but is content different ?
Yes, the content is different.
The file on the
Small update: stopping and restarting the volume made it accessible
again with acceptable performance. It is significantly slower than the
local hard drive when it comes to metadata access: find . | wc -l took
almost 2 minutes for 36k files with only 2% CPU time. But that is okay.
Nevertheless the
Is there any RESTful API available for version 3.6.4? I was trying options to
manage the cluster using APIs.
I have one more question: is there any way to execute the gluster peer
probe commands from the new server? I know that, because of the trusted pool,
only existing nodes in the cluster need to execute commands to
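(For what it's worth, the usual flow, with host names as placeholders, is to
run the probe from a node that is already in the trusted pool, and then
optionally probe back from the new node so the pool records its hostname:
  gluster peer probe newnode        # on an existing cluster member
  gluster peer probe existingnode   # on newnode, to register its hostname
  gluster peer status               # on either side, to verify
Probing into an existing pool from a completely unknown node is rejected by
design.)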
Hi there,
Is it a matter of a simple:
chmod 700 /data
Or have I missed something?
Obviously root could still access /data, but unprivileged users would no
longer have access to it.
Thibault.
On 4 Aug 2015 6:31 pm, shacky shack...@gmail.com wrote:
2015-08-04 19:25 GMT+02:00 Atin Mukherjee
Now it's faster, but it's interesting to notice a difference (a decrease) in
the du -sk output:
[root@lucifer ~]# du -sk /home/sterpone_team/
10583360073 /home/sterpone_team/
[root@lucifer ~]# time du -sk /home/sterpone_team/
10583360057 /home/sterpone_team/
real    21m21.068s
user
On 5 August 2015 at 00:01, John S bun...@gmail.com wrote:
Is there any restful api available for 3.6.4 version. Was trying options
to manage cluster using apis.
There is no RESTful API support in Gluster to manage your cluster; not yet,
that is. AFAIK it might make it into glusterfs-3.8 or
On 08/05/2015 07:01 AM, Peter Becker wrote:
Hi Ravi,
Not easily: 3.2.5 is what comes with Ubuntu 12.04 and based on
previous posts on this list it seems I can’t just go and compile a
newer one.
Hi Peter,
Would adding the PPA (https://launchpad.net/~gluster) not work? 3.5
seems to be
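(If the PPA route works for you, the steps would be roughly the following;
the exact PPA name is an assumption here, so check the Launchpad page above
for the series you want:
  sudo add-apt-repository ppa:gluster/glusterfs-3.5
  sudo apt-get update
  sudo apt-get install glusterfs-server glusterfs-client)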