> handle two node failures. We are using 28 disk servers but our next
> cluster will use 68 disk servers.
>
>
> On Thu, Apr 20, 2017 at 1:19 PM, Ingard Mevåg <ing...@jotta.no> wrote:
> > Hi
> >
> > We've been looking at supermicro 60 and 90 bay servers. Are
Hi
We've been looking at supermicro 60 and 90 bay servers. Is anyone else
using these models (or similar density) for gluster?
Specifically I'd like to setup a distributed disperse volume with 8 of
these servers.
Any insights, dos and don'ts, or best-practice guidelines would be
appreciated :)
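For illustration, a distributed disperse volume over 8 such servers could be
created along these lines. This is only a sketch: the volume name, brick
paths and the redundancy count are assumptions, not details from the thread.

    # hypothetical layout: 8 bricks, any 2 of which may fail (6+2)
    gluster volume create myvol disperse 8 redundancy 2 \
        server{1..8}:/bricks/brick1/data
    gluster volume start myvol

With one brick per server this gives a single 6+2 disperse set; real
60/90-bay deployments would carry many bricks per server.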
Hi
I've been playing with disperse volumes the past week, and so far I cannot
get more than 12MB/s when I do a write test. I've tried a distributed
volume on the same bricks and got close to gigabit speeds. iperf
confirms gigabit speeds to all three servers in the storage pool.
The three
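A minimal version of that kind of write test, assuming the volume is
FUSE-mounted at /mnt/gluster (the exact commands used are not shown in the
thread):

    # sequential write test against the mounted volume
    dd if=/dev/zero of=/mnt/gluster/testfile bs=1M count=1024 conv=fsync
    # raw network throughput to one of the storage servers
    iperf -c server1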
2017-04-25 9:03 GMT+02:00 Xavier Hernandez <xhernan...@datalab.es>:
> Hi Ingard,
>
> On 24/04/17 14:43, Ingard Mevåg wrote:
>
>> I've done some more testing with tc and introduced latency on one of my
>> testservers. With 9ms latency artificially introduced using
when testing DC1 <-> DC2 (which has ~9ms ping).
I know distribute volumes were more sensitive to latency in the past. At
least I can max out a 1gig link with 9-10ms latency when using distribute.
Disperse seems to max at 12-14MB/s with 8-10ms latency.
ingard
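The artificial latency described above can be reproduced with netem; a
sketch, assuming eth0 is the interface facing the other data centre:

    # inject 9ms of delay on outgoing traffic
    tc qdisc add dev eth0 root netem delay 9ms
    # remove it again after the test
    tc qdisc del dev eth0 root netem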
2017-04-24 14:03 GMT+02:00 Ingard
It'll take ages to heal a drive with that file count...
>
> On Mon, May 8, 2017 at 3:59 PM, Ingard Mevåg <ing...@jotta.no> wrote:
> > With attachments :)
> >
> > 2017-05-08 14:57 GMT+02:00 Ingard Mevåg <ing...@jotta.no>:
> >>
> >> Hi
> &
Hi
We've got 3 servers with 60 drives each, set up with an EC volume running on
gluster 3.10.0.
The servers are connected via 10gigE.
We've made the changes recommended here:
https://bugzilla.redhat.com/show_bug.cgi?id=1349953#c17 and we're able to
max out the network with the iozone tests.
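A sketch of an iozone run of the kind referred to; the thread count, sizes
and paths here are assumptions:

    # 8 threads, 1MB records, 4GB per file, sequential write then read
    iozone -i 0 -i 1 -r 1m -s 4g -t 8 \
        -F /mnt/gluster/f1 /mnt/gluster/f2 /mnt/gluster/f3 /mnt/gluster/f4 \
           /mnt/gluster/f5 /mnt/gluster/f6 /mnt/gluster/f7 /mnt/gluster/f8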
2017-09-05 19:58 GMT+02:00 Vijay Bellur <vbel...@redhat.com>:
> On 09/04/2017 10:00 AM, Ingard Mevåg wrote:
>
>> Hi
>>
>> I'm seeing quite high cpu sys utilisation and an increased system load
>> the past few days on my servers. It appears it doesn
Hi
We're seeing a high(?) number of pending calls on two of our glusterfs 3.10
clusters.
We have not tried to tune anything except changing server.event-threads: 2.
"gluster volume status callpool | grep Pending" results in various numbers,
but more often than not a fair few of the bricks have 200-400
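For reference, one way to keep an eye on those per-brick numbers (the volume
name is a placeholder):

    # refresh the pending call count per brick every 5 seconds
    watch -n 5 'gluster volume status myvol callpool | grep Pending'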
Hi
I'm seeing quite high cpu sys utilisation and an increased system load the
past few days on my servers. It appears it doesn't start at exactly the
same time on the different servers, but I've not (yet) been able to pin
the cpu usage to a specific task or to entries in the logs.
The cluster is
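One way to try to pin that down, assuming perf is available on the servers:

    # sample stacks system-wide for 30s, then inspect the hot paths
    perf record -a -g -- sleep 30
    perf report
    # or watch per-thread cpu usage of the brick processes
    top -H -p $(pgrep -d, glusterfsd)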
Hi
We've recently done some testing with a 3.12 disperse cluster. The
performance of filesystem stat calls was terrible, taking multiple seconds.
We dumped client side stats to see what was going on and noticed gluster
STAT was the culprit. tcpdump shows a STAT call being sent and replied to
After discussing with Xavi in #gluster-dev we found out that we could
eliminate the slow lstats by disabling disperse.eager-lock.
There is an open issue here:
https://bugzilla.redhat.com/show_bug.cgi?id=1546732
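For anyone hitting the same thing, the workaround amounts to (the volume
name is a placeholder):

    # disable eager locking on the disperse volume
    gluster volume set myvol disperse.eager-lock off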
If you could replicate the problem you had and provide the volume info +
profile that was requested by the Red Hat guys, that would help in trying
to understand what is happening with your workload. Also, if possible, the
script you used to generate the load.
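For reference, that information can be gathered along these lines (the
volume name is a placeholder):

    gluster volume info myvol
    # enable profiling, run the workload, then dump the stats and stop
    gluster volume profile myvol start
    gluster volume profile myvol info
    gluster volume profile myvol stop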
We've had our share of difficulties
We got extremely slow stat calls on our disperse cluster running latest
3.12 with clients also running 3.12.
When we downgraded clients to 3.10 the slow stat problem went away.
We later found out that by disabling disperse.eager-lock we could run the
3.12 clients without much issue (a little bit