Thanks for pointing me to the documentation. That's perfect, I can now plan my
upgrade to 3.8.11. By the way, I was wondering why a self-heal is part of the
upgrade procedure. Is it just a precaution, or is it mandatory?
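For reference, heal state is usually verified before and during a rolling upgrade; a sketch of the usual checks (the volume name `myvol` is a placeholder):

```shell
# Per-brick count of entries still pending heal
gluster volume heal myvol statistics heal-count

# List the files/directories that still need healing
gluster volume heal myvol info
```

The upgrade guides generally recommend waiting until the pending-heal count reaches zero before moving on to the next node.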
Regards
M.
Original Message
Subject: Re: [Gluster-users]
On 04/19/2017 04:11 PM, Eric K. Miller wrote:
We have a requirement to stay on CentOS 7.2 for a while (due to some
bugs in 7.3 components related to libvirt). So we have the yum repos
set to CentOS 7.2, not 7.3. When installing Gluster (latest version in
the repo, which turns out to be
On Wed, Apr 19, 2017 at 8:42 AM, Tom Zhou wrote:
> Setup:
>
> server : ubuntu 16.04
> glusterfs version: 3.10
>
> volume type: Disperse volume (4+2) nodes
>
> mount type: glusterfs fuse
>
>
> Problem:
>
> when grepping heavily on a mounted Disperse volume, "transport disconnected"
Hi Pranith,
> 1) At the moment heals happen in parallel only for files, not directories,
> i.e. the same shd process doesn't heal 2 directories at a time. But it can
> do as many file heals as the shd-max-threads option allows. That could be
> the reason why Amudhan saw better performance after a while, but it
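The option mentioned above is tunable per volume; a sketch (the volume name `myvol` is a placeholder):

```shell
# Allow the self-heal daemon to heal up to 8 files in parallel
# (the default is 1; higher values trade CPU/IO for faster heals)
gluster volume set myvol cluster.shd-max-threads 8
```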
Hi
We've been looking at Supermicro 60- and 90-bay servers. Is anyone else
using these models (or similar density) for Gluster?
Specifically I'd like to setup a distributed disperse volume with 8 of
these servers.
Any insights, dos and don'ts, or best-practice guidelines would be
appreciated :)
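As a starting point, a disperse volume spanning 8 servers could look like the sketch below (host names and brick paths are hypothetical; one brick per server shown for brevity):

```shell
# One (6+2) disperse set across 8 servers, one brick each:
# disperse 8 = total bricks per set, redundancy 2 = bricks that may fail
gluster volume create bigvol disperse 8 redundancy 2 \
    server{1..8}:/data/brick1/bigvol
gluster volume start bigvol
```

Adding further bricks (in multiples of the set size) turns this into a distributed disperse volume, with each additional set acting as another distribute subvolume.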
On Thu, Apr 20, 2017 at 06:58:51AM -0400, Kaleb S. KEITHLEY wrote:
> On 04/19/2017 04:11 PM, Eric K. Miller wrote:
> > We have a requirement to stay on CentOS 7.2 for a while (due to some
> > bugs in 7.3 components related to libvirt). So we have the yum repos
> > set to CentOS 7.2, not 7.3.
Hello and Thanks,
Yes, I know this repository, but I need it for the Raspberry Pi; for regular
Debian 8 I already have one. The current Raspberry Pi repository only has
version 3.5.2.
Mario Roeber
er...@port-x.de
Would you like to exchange encrypted emails with me? Here is my public key.
Thanks Amar and Mohamed!
My question was mainly aiming at things like programmatic limitations.
We're already running 2 Gluster clusters with 4 nodes each.
3 bricks per node = 100 TB/node = 400 TB total.
So with Gluster 3.x it's 8 PB, possibly more with Gluster 4.x.
Right?
Thank you very much again!
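The per-cluster arithmetic above can be double-checked with a quick sketch (node count and per-node capacity are taken from the thread; the 8 PB figure is the thread's own scale estimate, not verified here):

```python
# Capacity of one 4-node cluster, as described in the thread
nodes = 4
tb_per_node = 100   # 3 bricks totalling 100 TB on each node

total_tb = nodes * tb_per_node
print(total_tb)     # 400
```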
On Wed, Apr 19, 2017 at 01:46:14PM -0400, mabi wrote:
> Sorry for insisting but where can I find the upgrading to 3.8 guide?
> This is the only guide missing from the docs... I would like to
> upgrade from 3.7 and would like to follow the documentation to make
> sure everything goes well.
The
On Wed, Apr 19, 2017 at 06:31:45PM +, Mahdi Adnan wrote:
> Hi,
>
>
> I think bug 1440635 has not been fixed yet.
> https://bugzilla.redhat.com/show_bug.cgi?id=1440635
Indeed, that bug has been re-opened. Some fixes were merged for the bug,
so there might be specific corner cases where the
On Thu, Apr 20, 2017 at 12:32 PM, Peter B. wrote:
> Thanks Amar and Mohamed!
>
> My question was mainly aiming at things like programmatic limitations.
> We're already running 2 Gluster-Clusters with 4 nodes each.
> 3 bricks = 100 TB/node = 400 TB total.
>
> So with
What is your use case? Disperse is good for archive workloads and big files.
I suggest you buy 10 servers and use an 8+2 EC configuration; that way you
can tolerate two node failures. We are using 28-disk servers, but our next
cluster will use 68-disk servers.
On Thu, Apr 20, 2017 at 1:19 PM, Ingard
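The trade-off in the suggested 8+2 layout can be sketched numerically (the per-server raw capacity below is a placeholder, not a figure from the thread):

```python
# 8+2 erasure coding, as suggested above
data, redundancy = 8, 2
set_size = data + redundancy             # bricks (here: servers) per disperse set

usable_fraction = data / set_size        # share of raw capacity left for data
tolerated_failures = redundancy          # simultaneous node losses survivable

raw_tb_per_server = 280                  # placeholder: e.g. 28 x 10 TB disks
usable_tb = set_size * raw_tb_per_server * usable_fraction
print(usable_fraction, tolerated_failures, usable_tb)  # 0.8 2 2240.0
```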
On Thu, Apr 13, 2017 at 8:17 PM, Shyam wrote:
> On 02/28/2017 10:17 AM, Shyam wrote:
>>
>> Hi,
>>
>> With release 3.10 shipped [1], it is time to set the dates for release
>> 3.11 (and subsequently 4.0).
>>
>> This mail has the following sections, so please read or revisit as
No, but a few disk failures have happened. Since my volume type is disperse,
I replaced the disks in one of the disperse sets, mounted them at the same
mount points on the node, and started the volume with force to bring it back
into service.
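For reference, the disk-replacement flow described above usually runs along these lines (volume and path names are placeholders):

```shell
# After mounting the replacement disk at the old brick path:
gluster volume start myvol force   # restart the brick process on the new disk
gluster volume heal myvol full     # trigger a full heal onto the empty brick
gluster volume heal myvol info     # watch progress until no entries remain
```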
On Wed, Apr 19, 2017 at 9:46 PM, Amar Tumballi