Re: [Gluster-users] Slow performance of gluster volume

2017-09-04 Thread Krutika Dhananjay
I'm assuming you are using this volume to store VM images, because I see shard in the options list. Speaking from the shard translator's POV, one thing you can do to improve performance is to use preallocated images. This will at least eliminate the need for shard to perform multiple steps as part of
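The preallocation suggested above can be done when the image file is created; a minimal sketch using `fallocate` (the path and size here are demo placeholders — a real VM image would be larger and live on the gluster mount):

```shell
# Preallocate a raw image file: all blocks are reserved up front, so the
# shard translator never has to create a new shard file on the write path.
fallocate -l 64M /tmp/vm-image.raw
stat -c '%s bytes allocated for %n' /tmp/vm-image.raw
```

Tools such as qemu-img can do the same at image-creation time (e.g. a `preallocation` option for raw/qcow2 formats).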

Re: [Gluster-users] Poor performance with shard

2017-09-04 Thread Krutika Dhananjay
Hi, Speaking from the shard translator's POV, one thing you can do to improve performance is to use preallocated images. This will at least eliminate the need for shard to perform multiple steps as part of the writes - such as creating the shard, then writing to it, then updating the aggregated

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-09-04 Thread Ivan Rossi
The latter one is the one I have been referring to. And it is pretty dangerous IMHO. On 31/Aug/2017 01:19, wrote: > Solved as of 3.7.12. The only bug left is when adding new bricks to > create a new replica set; not sure where we are now on that bug but > that's not a

Re: [Gluster-users] Glusterd process hangs on reboot

2017-09-04 Thread Atin Mukherjee
On Fri, Sep 1, 2017 at 8:47 AM, Milind Changire wrote: > Serkan, > I have gone through the other mails in the thread as well, but am responding > to this one specifically. > > Is this a source install or an RPM install? > If this is an RPM install, could you please install the

Re: [Gluster-users] Glusterd process hangs on reboot

2017-09-04 Thread Serkan Çoban
>1. On an 80-node cluster, did you reboot only one node or multiple ones? Tried both; the result is the same, but the logs/stacks are from stopping and starting glusterd on only one server while the others are running. >2. Are you sure that the pstack output was always pointing at strcmp >being stuck?
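The stuck-in-strcmp observation above comes from stack sampling; a sketch of how such a hang is usually confirmed (paths are placeholders, and useful symbol names assume the matching debuginfo packages are installed):

```shell
# Sample glusterd's stacks twice a few seconds apart; if the stacks are
# identical across samples, the process is genuinely stuck, not just busy.
pstack $(pidof glusterd) > /tmp/glusterd-stack-1.txt
sleep 5
pstack $(pidof glusterd) > /tmp/glusterd-stack-2.txt
diff /tmp/glusterd-stack-1.txt /tmp/glusterd-stack-2.txt
```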

Re: [Gluster-users] heal info OK but statistics not working

2017-09-04 Thread Atin Mukherjee
Please provide the output of gluster volume info, gluster volume status and gluster peer status. On Mon, Sep 4, 2017 at 4:07 PM, lejeczek wrote: > hi all > > this: > $ gluster vol heal $_vol info > outputs ok and exit code is 0 > But if I want to see statistics: > $ gluster vol

[Gluster-users] heal info OK but statistics not working

2017-09-04 Thread lejeczek
hi all, this: $ gluster vol heal $_vol info outputs ok and the exit code is 0. But if I want to see statistics: $ gluster vol heal $_vol statistics Gathering crawl statistics on volume GROUP-WORK has been unsuccessful on bricks that are down. Please check if all brick processes are running. I suspect -
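The error message points at a brick (or self-heal daemon) being down; a typical check sequence for this situation, using the volume name from the message (gluster CLI sketch, not runnable outside a gluster cluster):

```shell
# "statistics" crawls every brick, so a single offline brick process
# makes it fail even when "heal info" still succeeds.
gluster volume status GROUP-WORK        # are all brick PIDs/ports online?
gluster volume heal GROUP-WORK info     # works even with a brick down
gluster volume heal GROUP-WORK statistics heal-count
```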

Re: [Gluster-users] [Gluster-devel] docs.gluster.org

2017-09-04 Thread Niels de Vos
On Fri, Sep 01, 2017 at 06:21:38PM -0400, Amye Scavarda wrote: > On Fri, Sep 1, 2017 at 9:42 AM, Michael Scherer wrote: > > Le vendredi 01 septembre 2017 à 14:02 +0100, Michael Scherer a écrit : > >> Le mercredi 30 août 2017 à 12:11 +0530, Nigel Babu a écrit : > >> > Hello, >

[Gluster-users] Poor performance with shard

2017-09-04 Thread Roei G
Hey everyone! I have deployed gluster on 3 nodes with 4 SSDs each and a 10Gb Ethernet connection. The storage is configured with 3 gluster volumes; every volume has 12 bricks (4 bricks on every server, 1 per SSD). With 'features.shard' off, my write speed (using the 'dd'
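For comparing runs like the one above, the shard settings involved can be inspected and changed per volume (VOLNAME is a placeholder; note that toggling sharding on a volume that already holds data is unsafe):

```shell
gluster volume get VOLNAME features.shard             # current on/off state
gluster volume set VOLNAME features.shard on
gluster volume get VOLNAME features.shard-block-size  # shard size, 64MB by default
```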

Re: [Gluster-users] Glusterd process hangs on reboot

2017-09-04 Thread Atin Mukherjee
On Mon, 4 Sep 2017 at 20:04, Serkan Çoban wrote: > I have been using a 60-server, 1560-brick 3.7.11 cluster without > problems for 1 year. I did not see this problem with it. > Note that this problem does not happen when I install packages & start > glusterd & peer probe

Re: [Gluster-users] heal info OK but statistics not working

2017-09-04 Thread Atin Mukherjee
Ravi/Karthick, If one of the self-heal processes is down, will the statistics heal-count command work? On Mon, Sep 4, 2017 at 7:24 PM, lejeczek wrote: > 1) one peer, out of four, got separated from the network, from the rest of > the cluster. > 2) that peer (while it

Re: [Gluster-users] Glusterd process hangs on reboot

2017-09-04 Thread Serkan Çoban
I have been using a 60-server, 1560-brick 3.7.11 cluster without problems for 1 year. I did not see this problem with it. Note that this problem does not happen when I install packages & start glusterd & peer probe and create the volumes; it happens only after a glusterd restart. Also note that this still

Re: [Gluster-users] peer rejected but connected

2017-09-04 Thread Gaurav Yadav
Executing "gluster volume set all cluster.op-version " on all the existing nodes will solve this problem. If the issue still persists, please provide me the following logs (working cluster + newly added peer): 1. the glusterd.info file from /var/lib/glusterd on all nodes 2. glusterd.logs from all nodes 3.
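A sketch of the suggested command (the op-version value is a placeholder; pick the highest value every peer supports, which recent gluster releases can report directly):

```shell
# Check the highest op-version the cluster can use, then raise the
# cluster-wide setting to it (bumping it is one-way; it cannot be lowered).
gluster volume get all cluster.max-op-version
gluster volume set all cluster.op-version <desired-op-version>
```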

Re: [Gluster-users] heal info OK but statistics not working

2017-09-04 Thread lejeczek
1) one peer, out of four, got separated from the network, from the rest of the cluster. 2) that peer (while it was unavailable) got detached with the "gluster peer detach" command, which succeeded, so now the cluster comprises three peers 3) the self-heal daemon (for some reason) does not

[Gluster-users] high (sys) cpu util and high load

2017-09-04 Thread Ingard Mevåg
Hi, I'm seeing quite high CPU sys utilisation and an increased system load over the past few days on my servers. It appears it doesn't start at exactly the same time on the different servers, but I've not (yet) been able to pin the CPU usage to a specific task or to entries in the logs. The cluster is
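Pinning high %sys usage to a task is usually done with per-process/per-thread CPU sampling; a generic sketch (pidstat ships in the sysstat package, and the exact columns vary by version):

```shell
# Per-process CPU breakdown (%usr vs %system) sampled over 5 seconds:
pidstat -u 5 1
# Per-thread view in top, batch mode (interactively, press H):
top -b -H -n 1 | head -30
```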

Re: [Gluster-users] Glusterd process hangs on reboot

2017-09-04 Thread Atin Mukherjee
On Mon, Sep 4, 2017 at 5:28 PM, Serkan Çoban wrote: > >1. On an 80-node cluster, did you reboot only one node or multiple ones? > Tried both; the result is the same, but the logs/stacks are from stopping and > starting glusterd on only one server while the others are running. > > >2.

[Gluster-users] Slow performance of gluster volume

2017-09-04 Thread Abi Askushi
Hi all, I have a gluster volume used to host several VMs (managed through oVirt). The volume is a replica 3 with arbiter, and the 3 servers use a 1 Gbit network for the storage. When testing with dd (dd if=/dev/zero of=testfile bs=1G count=1 oflag=direct) outside of the volume (e.g. writing at /root/)
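A dd run like the one above can over- or understate real throughput depending on caching; a variant that includes the final flush in the reported figure (the path and sizes are arbitrary):

```shell
# Write 64 MiB and include the flush-to-disk in dd's reported throughput.
# conv=fdatasync works on any filesystem; oflag=direct (as in the original
# test) additionally bypasses the page cache but needs direct-I/O support
# on the target filesystem.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
rm -f /tmp/ddtest
```

Running the same command once on a local disk and once on the gluster mount separates network/replication overhead from raw disk speed.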