Ok, ignore this issue. User and group ownership was not set correctly on the
brick directories.
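In case anyone hits the same symptom: a quick way to check and, if needed,
correct the ownership is something like this (the brick path, user, and group
are placeholders for whatever your setup expects):

# ls -ld /bricks/brick1/gv0
# chown -R root:root /bricks/brick1/gv0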
KR
Davy
On 14 Sep 2015, at 14:13, Davy Croonen wrote:
Hi all
We currently have a production cluster with 2 nodes in a distributed
Bump...
Does anybody have any clues as to how I can identify the cause of
the slowness?
Diego
On Wed, Sep 9, 2015 at 7:42 PM, Diego Remolina wrote:
> Hi,
>
> I am running two glusterfs servers as replicas. I have a 3rd server
> which provides quorum. Since gluster was
Hi Diego,
I think it's the overhead of fstat() calls. Gluster keeps its metadata
on the bricks themselves, and this has to be looked up for every file
access. For big files this is not an issue as it only happens once, but
when accessing lots of small files this overhead rapidly builds up,
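A quick way to see this effect from a client mount is to time a metadata-only
pass over a tree of small files and compare it with the same run on a local
disk (the path below is just an example):

# time find /mnt/gv0/profiles -type f -exec stat {} + > /dev/null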
Hi all
We currently have a production cluster with 2 nodes in a distributed-replicated
setup running glusterfs version 3.6.4, which was updated from gluster version
3.5.x. I just expanded the cluster with 2 extra nodes with glusterfs version
3.6.4 installed, but when running the rebalancing command
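(For reference, the rebalance operation referred to here is typically started
and monitored like this; the volume name is a placeholder:)

# gluster volume rebalance <volname> start
# gluster volume rebalance <volname> status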
Hi Alex,
Thanks for the reply. I was aware of the performance issues with small
files, but never expected an order of magnitude slower. I understand
some improvements were made in 3.7.x to help with poor small-file
performance; however, I did not see any big changes after upgrading
from 3.6.x to
- Original Message -
> From: "Diego Remolina"
> To: "Alex Crow"
> Cc: gluster-users@gluster.org
> Sent: Monday, September 14, 2015 9:26:17 AM
> Subject: Re: [Gluster-users] Very slow roaming profiles on top of glusterfs
>
> Hi Alex,
>
>
See below
On Mon, Sep 14, 2015 at 11:06 AM, Ben Turner wrote:
> - Original Message -
>> From: "Diego Remolina"
>> To: "Alex Crow"
>> Cc: gluster-users@gluster.org
>> Sent: Monday, September 14, 2015 9:26:17 AM
>>
Any ideas?
2015-09-13 20:41 GMT+08:00 Yaroslav Molochko:
> So, I've done:
> root@PSC01SERV008:/var/log# tail -f syslog | grep -Ev
> 'docker|kubelet|kube-proxy'
> Sep 13 12:18:16 psc01serv008 systemd[1]: Stopped GlusterFS, a clustered
> file-system server.
> Sep 13 12:19:21
Hey guys,
I'm trying to run an experiment with glusterfs and am having some problems;
I'd appreciate it if someone could point out what I'm doing wrong.
I've created replicated volume across two nodes.
# gluster volume info
Volume Name: gv0
Type: Replicate
Volume ID:
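For context, a two-node replicated volume like the above would typically have
been created and started along these lines (hostnames and brick paths are
placeholders):

# gluster volume create gv0 replica 2 node1:/bricks/brick1/gv0 node2:/bricks/brick1/gv0
# gluster volume start gv0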
I noticed that several sharding bugs relating to file size were closed when
3.7.4 was released. However, file sizes are still messed up in 3.7.4. I know
this is a beta feature, but you cannot even write a single file without
getting the wrong file size.
Neither buffered nor direct IO works:
#
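A minimal reproducer would be something like the following (the mount point is
a placeholder); the size reported by stat can then be compared with the number
of bytes actually written:

# dd if=/dev/zero of=/mnt/gv0/testfile bs=1M count=4 oflag=direct
# stat -c %s /mnt/gv0/testfile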
Have you considered a disperse volume? We'd normally advocate 6 servers
for a +2 redundancy factor, though.
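For example, a 4+2 dispersed volume over 6 servers would be created along
these lines (server names and brick paths are placeholders):

# gluster volume create dispvol disperse 6 redundancy 2 server{1..6}:/bricks/brick1/dispvol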
Paul C
On Tue, Sep 15, 2015 at 5:47 AM, aa...@ajserver.com wrote:
> Gluster users,
>
> I am looking to implement GlusterFS on my network for large, expandable,
> and redundant storage.
On 09/14/2015 10:47 AM, aa...@ajserver.com wrote:
simple replication
that requires that at least 3 of the 5 bricks have a copy of the data, so I can
lose any 2 bricks without data loss.
This is not possible with GlusterFS; you have to specify up front which
brick holds which replica.
I have tried
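To illustrate: replica sets are fixed by the brick order given at volume
creation time, so every file lands on one predetermined set of bricks rather
than "any 3 of 5" (server names and paths below are placeholders):

# gluster volume create repvol replica 3 server1:/bricks/b1/repvol server2:/bricks/b1/repvol server3:/bricks/b1/repvol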
Hi Elvind,
At the moment, the focus is on making sharding work with the gluster-as-VM-store
use case (which is a typical large-files-with-single-writer workload).
After this, I do plan to make it work for general-purpose use cases too.
With respect to this specific issue, could you set
Hi,
<< Unable to fetch slave volume details. Please check the slave cluster
and slave volume. geo-replication command failed
Have you checked whether you are able to reach the Slave node from the
master node?
There is a super simple way of setting up geo-rep written by Aravinda.
Refer:
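The basic checks would be along these lines (host and volume names are
placeholders):

# ping <slavehost>
# ssh root@<slavehost> gluster volume info <slavevol>
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> status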
Gluster users,
I am looking to implement GlusterFS on my network for large, expandable,
and redundant storage.
I have 5 servers with 1 brick each. All I want is a simple replication
that requires that at least 3 of the 5 bricks have a copy of the data, so I can
lose any 2 bricks without data loss. I
Yes, I can ping the slave node by its name and IP address; I've even entered
its name manually in /etc/hosts.
Does this nice Python script also work for Gluster 3.6? The blog post only
mentions 3.7...
Regards
ML
On Monday, September 14, 2015 9:38 AM, Saravanakumar Arumugam
Could you try
* disabling iptables (and firewalld, if enabled)
* restarting the rpcbind service
* restarting glusterd
If this doesn't work, add the line below (mentioned in one of the forums)
to the '/etc/hosts.allow' file:
ALL: 127.0.0.1 : ALLOW
Then restart the rpcbind and glusterd services (sketched below).
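A rough sketch of those steps on a systemd-based distro (adjust for your init
system; firewalld may not be installed everywhere):

# systemctl stop firewalld
# systemctl restart rpcbind
# systemctl restart glusterd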
Thanks,
Soumya
On
So I now tried the tool, but it doesn't work either, as you can see here:
[OK] gfs1geo.domain.com is Reachable(Port 22)
[OK] SSH Connection established r...@gfs1geo.domain.com
[OK] Master Volume and Slave Volume are compatible (Version: 3.6.5)
[OK] Common secret pub file present
Atin,
I performed a gluster volume set performance.flush-behind off/on
toggle on both volumes, and after that the probe was successful.
So many thanks for your support.
Some additional info: in our lab I did some tests starting with gluster version
3.6.4 and was not able to reproduce the
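For the record, the toggle described above amounts to (volume name is a
placeholder):

# gluster volume set <volname> performance.flush-behind off
# gluster volume set <volname> performance.flush-behind on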
On 09/14/2015 02:33 PM, Davy Croonen wrote:
> Atin,
>
> I performed a gluster volume set performance.flush-behind off/on toggle
> on both volumes, and after that the probe was successful.
>
> So many thanks for your support.
>
> Some additional info: in our lab I did some tests starting