Hello,
You are right, I found my mistake. I forgot to extend the second server's
partition.
I installed the latest version (3.4) on CentOS 6.4.
In order to benchmark GlusterFS without being limited by the disk
subsystem, I created a 15GB ramdrive on two servers.
The major constraint is that I
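For anyone wanting to reproduce this kind of test, here is a minimal sketch of a 15GB RAM-backed brick using tmpfs; the mount point and brick directory are hypothetical:

    # Create a 15GB tmpfs and a brick directory inside it (paths are examples)
    mkdir -p /mnt/ramdisk
    mount -t tmpfs -o size=15g tmpfs /mnt/ramdisk
    mkdir -p /mnt/ramdisk/brick1
    # /mnt/ramdisk/brick1 can then be given as a brick path to "gluster volume create"

Note that tmpfs contents vanish on reboot, so this only makes sense for throwaway benchmarks.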
Amar Tumballi atumb...@redhat.com writes:
On 11/12/2013 12:54 PM, Øystein Viggen wrote:
Should I file a bug about this somewhere? It seems easy enough to
replicate.
https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
Thanks. I've tried to describe it as best I can, and linked back
Hello all,
I'm new to Gluster. In order to gain some knowledge and test a few
things, I decided to install it on three servers and play around with
it a bit.
My setup:
Three servers: dc1-09, dc2-09, dc2-10, all running RHEL 6.4 and Gluster
3.4.0 (from RHS 2.1).
Each server has three disks, mounted
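As a concrete illustration of such a test setup, a distribute/replicate volume across those servers could look like the sketch below; the server names are from the message, but the brick paths and volume name are hypothetical:

    # With replica 2, consecutive bricks form the replica pairs
    gluster volume create testvol replica 2 \
        dc1-09:/bricks/disk1/b1 dc2-09:/bricks/disk1/b1 \
        dc2-09:/bricks/disk2/b2 dc2-10:/bricks/disk2/b2
    gluster volume start testvol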
Hi,
I've just noticed one of my FUSE clients has an extremely large log file
for one of its volumes. /var/log/glusterfs/home.log is 470GB right now
and contains millions (billions?) of entries like the following:
[2013-11-12 05:21:57.671118] I [dict.c:370:dict_get]
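If the flood is only informational, one way to reclaim the space without unmounting is sketched below; the log path and volume name are taken from the message, and truncating in place is an assumption about what is safe here:

    # Truncate in place so the running FUSE client keeps its open descriptor
    : > /var/log/glusterfs/home.log
    # Server-side brick logs can be rotated through the CLI
    gluster volume log rotate home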
Hi All,
I am interested in some feedback on putting multiple bricks on one physical
disk, with each brick assigned to a different volume. Here is the scenario:
4 disks per server, 4 servers, 2x2 distribute/replicate.
I would prefer to have just one volume but need to do geo-replication on
some
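A sketch of that layout, with all names hypothetical: one brick directory per volume on each disk, so that only the volume that needs it gets geo-replicated:

    # On each server: one brick directory per volume on the same disk
    mkdir -p /bricks/disk1/vol_geo /bricks/disk1/vol_plain
    gluster volume create vol_geo replica 2 \
        server1:/bricks/disk1/vol_geo server2:/bricks/disk1/vol_geo
    gluster volume create vol_plain replica 2 \
        server1:/bricks/disk1/vol_plain server2:/bricks/disk1/vol_plain

The replies below point out the main drawback: volumes sharing a partition also share, and double-count, its free space.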
On 11/12/2013 03:03 PM, Øystein Viggen wrote:
Amar Tumballi atumb...@redhat.com writes:
On 11/12/2013 12:54 PM, Øystein Viggen wrote:
Should I file a bug about this somewhere? It seems easy enough to
replicate.
https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
Thanks. I've tried
On 11/06/2013 10:53 AM, B.K.Raghuram wrote:
Here are the steps that I took to reproduce the problem. Essentially,
if you try to remove a brick that is not on the localhost, it seems to
migrate the files on the localhost brick instead, and hence there is a
lot of data loss. If
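For reference, the 3.4-era decommissioning sequence looks like the sketch below (volume and brick names are hypothetical); the report above concerns "start" migrating data off the wrong brick:

    gluster volume remove-brick testvol server2:/bricks/b1 start
    gluster volume remove-brick testvol server2:/bricks/b1 status
    # Commit only after status reports the migration as completed
    gluster volume remove-brick testvol server2:/bricks/b1 commit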
I would suggest using different partitions for each brick. We use LVM
and start off with a relatively small amount of allocated space, then
grow the partitions as needed. If you were to place 2 bricks on the same
partition, then the free space is not going to show correctly. Example:
1TB
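A sketch of the grow step with hypothetical names, for an XFS-backed brick:

    # Grow the logical volume, then the filesystem on top of it
    lvextend -L +100G /dev/vg_gluster/brick_a
    xfs_growfs /bricks/brick_a    # for ext4: resize2fs /dev/vg_gluster/brick_a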
Like Eric, I too use LVM to partition off bricks for different volumes.
You can even specify which physical device a brick is on when you're
creating your brick, e.g. lvcreate -n myvol_brick_a -l50 vg_gluster
/dev/sda1. This is handy if you have to replace the disk while the old
one is still
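One plausible continuation of that replacement workflow, with hypothetical devices, is to migrate the extents onto the new disk while both are still online:

    pvcreate /dev/sdb1
    vgextend vg_gluster /dev/sdb1
    pvmove /dev/sda1 /dev/sdb1      # move all extents off the old disk
    vgreduce vg_gluster /dev/sda1   # then drop it from the volume group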
Ira,
Thank you for the response. I suspect that your patch will resolve this
issue as well; however, an upgrade to Samba 3.6.20 continues to display
the total volume size rather than the expected GlusterFS folder-quota
size. I note that your patch was accepted into 3.6.next but
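Independent of what Samba reports, the quota translator's own view can be checked from the CLI; the volume name and path below are hypothetical:

    gluster volume quota myvol list /projects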
On 11/9/2013 2:39 AM, Shawn Heisey wrote:
They are from the same log file - the one that I put in my Dropbox
account and linked in the original message. They are consecutive log
entries.
Further info from our developer who is looking deeper into these problems:
Ouch. I know