Virt node 4.3.7 with
>> the shipped gluster version. I upgraded to 4.3.8 with Gluster 6.7;
>> let's see how production-ready this really is.
>>
>> -Chris.
>>
>> On 07/02/2020 08:46, Paolo Margara wrote:
>>> Hi,
>>>
>>> th
Hi,
this is interesting. Does this happen always with gluster 6.6, or only in
certain cases?
I ask because I have two oVirt clusters with gluster, both on
gluster v6.6; in one case I upgraded from 6.5 to 6.6, as Strahil did, and
I haven't hit this bug.
When upgrading my clusters I follow exactly
Hi all,
I have the same problem while upgrading to gluster 6.6, in one case from
gluster 5 and in the other from gluster 3.12.
Is it safe to ignore these messages, or is there some issue in our
configuration? Or a bug, a packaging issue, or something else?
Any suggestions are appreciated.
Hi,
will this release be the last of the 3.12.x branch before it reaches EOL?
Greetings,
Paolo
On 16/10/18 17:41, Jiffin Tony Thottan wrote:
>
> The Gluster community is pleased to announce the release of Gluster
> 3.12.15 (packages available at [1,2,3]).
>
> Release notes for the
Hi list,
on a dev system I'm testing some options that are supposed to give
improved performance. I'm running oVirt with gfapi enabled on gluster
3.12.13, and when I set "cluster.use-compound-fops" to "on" all VMs are
paused due to a storage I/O error while the file system continues to be
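For anyone hitting the same pauses, a minimal sketch of checking and reverting the option (assuming a hypothetical volume named `vmstore`; substitute your own volume name):

```shell
# Check the current value of the option on the affected volume
gluster volume get vmstore cluster.use-compound-fops

# Revert it to the default ("off") if VMs are pausing on storage I/O errors
gluster volume set vmstore cluster.use-compound-fops off
```

Paused VMs can then be resumed from the oVirt engine once the option is back at its default.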
Hi,
we’ve now tested version 3.12.13 on our oVirt dev cluster and all seems
to be OK (obviously it's too early to tell whether the infamous memory leak
issue is fixed). I think it should be safe to move the related packages
from -test to release for centos-gluster312.
Greetings,
Paolo M.
On 12/07/2018 14:23, Niels de Vos wrote:
> On Wed, Jul 11, 2018 at 11:23:59AM +0200, Niels de Vos wrote:
>> On Wed, Jul 11, 2018 at 09:26:45AM +0200, Paolo Margara wrote:
>>> Hi Niels,
>>>
>>> I want just report that packages for release 3.12.10 and
Hi Niels,
I just want to report that packages for releases 3.12.10 and 3.12.11 are
still not available on the mirrors.
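A quick way to verify what has actually landed on the mirrors (a sketch, assuming CentOS 7 with the centos-gluster312 repo enabled; the package name and version pattern are from the thread):

```shell
# List every glusterfs-server build visible in the enabled repos,
# then filter for the 3.12.10/3.12.11 releases mentioned above
yum --showduplicates list glusterfs-server | grep "3.12.1"
```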
Greetings,
Paolo
On 04/07/2018 09:11, Niels de Vos wrote:
> On Tue, Jul 03, 2018 at 05:20:44PM -0500, Darrell Budic wrote:
>> I’ve now tested 3.12.11 on my centos 7.5
Dear all,
I encountered the same issue. I saw that this is fixed in 3.12.7, but I
cannot find this release in the main repo (CentOS Storage SIG), only in
the test one.
When is this release expected to be available in the main repo?
Greetings,
Paolo
On 09/03/2018 10:41, Stefan
Hi,
is this patch already available in the community version of gluster
3.12? In which version? If not, is there a plan to backport it?
Greetings,
Paolo
On 16/03/2018 13:24, Atin Mukherjee wrote:
> Have sent a backport request https://review.gluster.org/19730 at
> release-3.10
in the oVirt GUI. Is there anything that
I could do from the gluster perspective to solve this issue? Considering
that 3.8 is near EOL, upgrading to 3.10 could also be an option.
Greetings,
Paolo
On 20/07/2017 15:37, Paolo Margara wrote:
>
> OK, on my nagios instance I've disabled g
volume are run simultaneously, which can
> result in transaction collisions, and you can end up with one command
> succeeding and the others failing. Ideally, if you are running the volume
> status command for monitoring, it is suggested to run it from only one node.
>
> On Thu, Jul 20, 2017 at 3
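The advice above (run the status check from a single node) can be sketched as a guard in the monitoring script; `MONITOR_NODE` is an assumed hostname, not something from the thread:

```shell
# Only one designated peer issues "gluster volume status", so concurrent
# volume transactions from several monitoring agents cannot collide.
MONITOR_NODE="node1"   # assumed: the one host allowed to poll status

if [ "$(hostname -s)" = "$MONITOR_NODE" ]; then
    gluster volume status all
fi
```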
34b73
* (node3) virtnode-0-2-gluster: d9047ecd-26b5-467b-8e91-50f76a0c4d16
In this case restarting glusterd on node3 usually solves the issue.
What could be the root cause of this behavior? How can I fix this once
and for all?
If needed I could provide the full log file.
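The workaround described above, as a sketch to run on the affected node (node3 in this case):

```shell
# Restart the management daemon on the node whose peer state is stale
systemctl restart glusterd

# Then verify that every peer is back in
# "State: Peer in Cluster (Connected)"
gluster peer status
```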
Greetings,
Paolo Margara
On 29/06/2017 16:27, Pranith Kumar Karampuri wrote:
>
>
> On Thu, Jun 29, 2017 at 7:48 PM, Paolo Margara
> <paolo.marg...@polito.it <mailto:paolo.marg...@polito.it>> wrote:
>
> Hi Pranith,
>
> I'm using this guide
>
> ht
follow for the upgrade? We can fix the
> documentation if there are any issues.
>
> On Thu, Jun 29, 2017 at 2:07 PM, Ravishankar N <ravishan...@redhat.com
> <mailto:ravishan...@redhat.com>> wrote:
>
> On 06/29/2017 01:08 PM, Paolo Margara wrote:
>>
>>
not stopped the brick processes as well?
Now how can I recover from this issue? Is restarting all brick processes
enough?
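If restarting the bricks is indeed the way out, a minimal sketch (assuming a hypothetical volume named `vmstore`): `gluster volume start ... force` respawns only the brick processes that are not running, without disturbing bricks that are already up.

```shell
# Respawn any brick processes that are down for this volume
gluster volume start vmstore force

# Confirm every brick now shows Online "Y" with a PID
gluster volume status vmstore
```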
Greetings,
Paolo Margara
On 28/06/2017 18:41, Pranith Kumar Karampuri wrote:
>
>
> On Wed, Jun 28, 2017 at 9:45 PM, Ravishankar N <ravishan..
meanwhile.
Thanks.
Greetings,
Paolo Margara
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users