I made the changes (one brick from the 09th server and one replica from the
10th server, and continued in this order) and re-tested. Nothing changed.
Still slow. (Exactly the same result.)
-Gencer.
From: Gandalf Corvotempesta [mailto:gandalf.corvotempe...@gmail.com]
Sent: Friday, June 30, 2017 8:19 PM
Gluster Monthly Newsletter, June 2017
Important happenings for Gluster for June:
---
Gluster Summit 2017!
Gluster Summit 2017 will be held in Prague, Czech Republic on October
27 and 28th.
More details at:
https://www.gluster.org/events/summit2017
---
Our weekly community meeting has changed: we'
I can ask our other engineer, but I don't have those figures.
-Alastair
On 30 June 2017 at 13:52, Serkan Çoban wrote:
> Did you test healing by increasing disperse.shd-max-threads?
> What are your heal times per brick now?
>
> On Fri, Jun 30, 2017 at 8:01 PM, Alastair Neil
> wrote:
> > We are us
I'm delighted to announce that registration and the call for proposals
for Gluster Summit 2017 in Prague, CZ is open.
We're changing it up a bit this year: anyone can register, and if
you'd like to apply for travel funding, please indicate this on the
registration form. Don't worry, you'll get a c
Hello,
I have a replica 2 with a remote slave node for geo-replication (GlusterFS
3.8.11 on Debian 8) and saw for the first time a non-zero number in the
FAILURES column when running:
gluster volume geo-replication myvolume remotehost:remotevol status detail
Right now the number under the FAILURES
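Not something stated in this message, but a plausible first place to look for the cause of those failures would be the master-side gsyncd logs (default path assumed):

less /var/log/glusterfs/geo-replication/myvolume/*.log   # gsyncd log for this geo-rep session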
Did you test healing by increasing disperse.shd-max-threads?
What are your heal times per brick now?
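For reference, a minimal sketch of the option and the check I have in mind; VOLNAME is a placeholder:

gluster volume set VOLNAME disperse.shd-max-threads 8   # more parallel self-heal threads on disperse volumes (default 1)
gluster volume heal VOLNAME info                        # pending entries per brick; watching this drain gives a feel for heal time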
On Fri, Jun 30, 2017 at 8:01 PM, Alastair Neil wrote:
> We are using 3.10 and have a 7 PB cluster. We decided against 16+3 as the
> rebuild times are bottlenecked by matrix operations which scale a
On 30 Jun 2017 3:51 PM, wrote:
Note: I also noticed that you said “order”. Do you mean that when we create the
volume we have to specify an order for the bricks? I thought gluster handles
that (and does the math) itself.
Yes, you have to specify the exact order
Gluster is not flexible in this way and doe
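As a minimal sketch of what that "exact order" means at create time (host and brick names here are illustrative, not the ones from this thread): with replica 2, each consecutive pair of bricks on the command line becomes one replica set, so the pairs have to alternate between the two servers:

gluster volume create testvol replica 2 \
    server1:/bricks/b1 server2:/bricks/b1 \
    server1:/bricks/b2 server2:/bricks/b2 \
    ...
# listing server1:/b1 server1:/b2 next to each other would instead put both
# copies of that replica set on the same machine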
We are using 3.10 and have a 7 PB cluster. We decided against 16+3 as the
rebuild times are bottlenecked by matrix operations which scale as the
square of the number of data stripes. There are some savings because of
larger data chunks, but we ended up using 8+3 and heal times are about half
compar
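Rough arithmetic behind that trade-off, taking the quadratic scaling at face value: 16 data stripes means on the order of 16^2 = 256 units of matrix work per rebuild versus 8^2 = 64 for 8 stripes, i.e. roughly 4x the decode cost for 16+3 before the chunk-size savings mentioned above are counted.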
Hi,
I was wondering if there were any additional tests we could perform to
help debug the group write-permissions issue?
Thanks
Pat
On 06/27/2017 12:29 PM, Pat Haley wrote:
Hi Soumya,
One example, we have a common working directory dri_fleat in the
gluster volume
drwxrwsr-x 22 root dr
Thanks for the hints.
Now I added the arbiter 1 to my replica 2 using the volume add-brick command,
and it is now healing in order to copy all the metadata files to my
arbiter node.
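For context, the commands involved look roughly like this; the volume and brick names are placeholders, not the ones actually used here:

gluster volume add-brick myvol replica 3 arbiter 1 arbiternode:/bricks/arb
gluster volume heal myvol info    # watch the metadata being healed onto the arbiter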
On one of my replica nodes in the brick log file for that particular volume I
notice a lot of the fo
I already tried 512MB, but I retried it now and the results are the same. Both
without tuning:
Stripe 2 replica 2: dd performs ~250 MB/s but shard gives 77 MB/s.
I attached two logs (shard and stripe logs)
Note: I also noticed that you said “order”. Do you mean when we create via
volume set w
Just noticed that the way you have configured your brick order during
volume-create makes both replicas of every set reside on the same machine.
That apart, do you see any difference if you change shard-block-size to
512MB? Could you try that?
If it doesn't help, could you share the volume-profil
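For what it's worth, a minimal sketch of both suggestions (the shard-block-size change and a profile capture), using the volume name testvol from the volume-info shown elsewhere in the thread:

gluster volume set testvol features.shard-block-size 512MB
# capture a profile around one dd run and share the output
gluster volume profile testvol start
# ... run the dd test from the client ...
gluster volume profile testvol info
gluster volume profile testvol stop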
On Thu, 2017-06-29 at 17:13 +0200, Dietmar Putz wrote:
> Hello Anoop,
>
> thank you for your reply
>
> answers inside...
>
> best regards
>
> Dietmar
>
>
> On 29.06.2017 10:48, Anoop C S wrote:
> > On Wed, 2017-06-28 at 14:42 +0200, Dietmar Putz wrote:
> > > Hello,
> > >
> > > recently w
Hi Krutika,
Sure, here is volume info:
root@sr-09-loc-50-14-18:/# gluster volume info testvol
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 30426017-59d5-4091-b6bc-279a905b704a
Status: Started
Snapshot Count: 0
Number of Bricks: 10 x 2 = 20
Transport-type: tcp
Brick
Could you please provide the volume-info output?
-Krutika
On Fri, Jun 30, 2017 at 4:23 PM, wrote:
> Hi,
>
>
>
> I have 2 nodes with 20 bricks in total (10+10).
>
>
>
> First test:
>
>
>
> 2 Nodes with Distributed – Striped – Replicated (2 x 2)
>
> 10GbE Speed between nodes
>
>
>
> “dd” perfo
On Fri, Jun 30, 2017 at 1:31 AM, Jan wrote:
> Hi all,
>
> Gluster and Ganesha are amazing. Thank you for this great work!
>
> I’m struggling with one issue and I think that you might be able to help
> me.
>
> I spent some time playing with Gluster and Ganesha and after I gained
> some experience
Hi,
I have 2 nodes with 20 bricks in total (10+10).
First test:
2 Nodes with Distributed - Striped - Replicated (2 x 2)
10GbE Speed between nodes
"dd" performance: 400mb/s and higher
Downloading a large file from internet and directly to the gluster:
250-300mb/s
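The exact dd invocation isn't given in the thread; a hypothetical test along these lines is what such numbers usually come from (file size, path and flags are assumptions):

dd if=/dev/zero of=/mnt/gluster/testfile bs=1M count=10240 conv=fsync   # ~10 GB sequential write against the mount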
Now same t
On Thu, 29 Jun 2017 at 22:51, Victor Nomura wrote:
> Thanks for the reply. What would be the best course of action? The data
> on the volume isn’t important right now but I’m worried when our setup goes
> to production we don’t have the same situation and really need to recover
> our Gluster se
Hi,
Jan, by multiple times I meant whether you were able to do the whole
setup multiple times and hit the same issue, so that we have a
consistent reproducer to work on.
Since grepping shows that the process doesn't exist, the bug I mentioned
doesn't hold good.
Seems like another issue irrelevant to
Hi Jan,
It is not recommended to automate a script that runs 'volume start
force'.
Bricks do not go offline just like that; there will be some genuine issue
which triggers this. Could you please attach the entire glusterd.log and
the brick logs from around that time so that someone would be able to l
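For clarity, the manual steps being discussed here, rather than scripting them, are roughly the following (VOLNAME is a placeholder):

gluster volume status VOLNAME        # find bricks whose Online column shows N
# glusterd.log and the brick logs usually live under /var/log/glusterfs/
gluster volume start VOLNAME force   # the step being automated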
Hi Hari,
thank you for your support!
Did I try to check offline bricks multiple times?
Yes – I gave it enough time (at least 20 minutes) to recover but it stayed
offline.
Version?
All nodes are 100% equal – I tried fresh installation several times during
my testing. Every time it is CentOS Minim
On 06/30/2017 12:53 PM, Gandalf Corvotempesta wrote:
Yes, but why does killing gluster notify all clients while a graceful
shutdown doesn't?
I think this is a bug; if I'm shutting down a server, it's obvious
that all clients should stop connecting to it.
Oh, it is a bug (or a known issue ;-) ) alrig
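The workaround usually suggested for this, sketched here as an assumption rather than something from this thread: stop the brick processes explicitly before shutting the node down, so clients see the TCP connections close immediately instead of waiting out network.ping-timeout (42 seconds by default):

pkill glusterfsd          # brick processes; clients fail over right away
systemctl stop glusterd   # management daemon
# then proceed with the normal OS shutdown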
Yes, but why does killing gluster notify all clients while a graceful shutdown
doesn't?
I think this is a bug; if I'm shutting down a server, it's obvious that all
clients should stop connecting to it.
On 30 Jun 2017 3:24 AM, "Ravishankar N" wrote:
> On 06/30/2017 12:40 AM, Renaud Fortier wrote:
Hi Jan,
comments inline.
On Fri, Jun 30, 2017 at 1:31 AM, Jan wrote:
> Hi all,
>
> Gluster and Ganesha are amazing. Thank you for this great work!
>
> I’m struggling with one issue and I think that you might be able to help me.
>
I spent some time playing with Gluster and Ganesha and after