Re: [Gluster-users] [ovirt-users] Re: ACL issue v6.6, v6.7, v7.1, v7.2

2020-02-07 Thread Paolo Margara
> oVirt node 4.3.7 with the shipped gluster version. I upgraded to 4.3.8 with Gluster 6.7 and let's see how production-ready this really is. -Chris. > On 07/02/2020 08:46, Paolo Margara wrote: >> Hi, th…

Re: [Gluster-users] [ovirt-users] ACL issue v6.6, v6.7, v7.1, v7.2

2020-02-07 Thread Paolo Margara
Hi, this is interesting; does this always happen with gluster 6.6, or only in certain cases? I ask because I have two oVirt clusters with gluster, both on gluster v6.6; in one case I upgraded from 6.5 to 6.6 like Strahil, and I haven't hit this bug. When upgrading my clusters I follow exactly…
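For reference, the one-node-at-a-time upgrade pattern mentioned here can be sketched as follows (a rough outline in the spirit of the general Gluster rolling-upgrade procedure; the package manager, volume name, and service names are illustrative assumptions, not details from the thread):

```shell
# On each node in turn (assumes a replicated volume with no pending heals):
systemctl stop glusterd
yum update glusterfs-server        # pull the new 6.x packages
systemctl start glusterd

# Before moving on to the next node, wait for self-heal to catch up:
gluster volume heal <VOLNAME> info # should report 0 entries per brick
```

Proceeding only after `heal info` is clean is what keeps client I/O safe while each replica is briefly down.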

Re: [Gluster-users] socket.so: undefined symbol: xlator_api - bd.so: cannot open shared object file & crypt.so: cannot open shared object file: No such file or directory

2019-11-12 Thread Paolo Margara
Hi all, I have the same problem while upgrading to gluster 6.6, in one case from gluster 5 and in the other from gluster 3.12. Is it safe to ignore these messages, or is there some issue in our configuration? Or is it a bug, a packaging issue, or something else? Any suggestions are appreciated.

Re: [Gluster-users] Announcing Glusterfs release 3.12.15 (Long Term Maintenance)

2018-10-17 Thread Paolo Margara
Hi, will this release be the last of the 3.12.x branch before it reaches EOL? Greetings,     Paolo. On 16/10/18 17:41, Jiffin Tony Thottan wrote: > The Gluster community is pleased to announce the release of Gluster 3.12.15 (packages available at [1,2,3]). > Release notes for the…

[Gluster-users] Issue enabling use-compound-fops with gfapi

2018-09-14 Thread Paolo Margara
Hi list, on a dev system I'm testing some options that are supposed to give improved performance. I'm running oVirt with gfapi enabled on gluster 3.12.13, and when I set "cluster.use-compound-fops" to "on", every VM is paused due to a storage I/O error while the file system continues to be…
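For anyone wanting to reproduce or back out of this state, the option in question is toggled per volume; a minimal sketch (VOLNAME is a placeholder, and `reset` returns the option to its default):

```shell
# Enable the option that triggered the VM pauses (gluster 3.12):
gluster volume set VOLNAME cluster.use-compound-fops on

# Revert to the default after the I/O errors appear:
gluster volume reset VOLNAME cluster.use-compound-fops
```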

Re: [Gluster-users] Announcing Glusterfs release 3.12.13 (Long Term Maintenance)

2018-08-28 Thread Paolo Margara
Hi, we’ve now tested version 3.12.13 on our oVirt dev cluster and all seems to be OK (obviously it's too early to tell whether the infamous memory leak issue got fixed). I think it should be safe to move the related packages from -test to release for centos-gluster312. Greetings,     Paolo M. …

Re: [Gluster-users] Announcing Glusterfs release 3.12.10 (Long Term Maintenance)

2018-07-12 Thread Paolo Margara
On 12/07/2018 14:23, Niels de Vos wrote: > On Wed, Jul 11, 2018 at 11:23:59AM +0200, Niels de Vos wrote: >> On Wed, Jul 11, 2018 at 09:26:45AM +0200, Paolo Margara wrote: >>> Hi Niels, I just want to report that packages for release 3.12.10 and…

Re: [Gluster-users] Announcing Glusterfs release 3.12.10 (Long Term Maintenance)

2018-07-11 Thread Paolo Margara
Hi Niels, I just want to report that packages for releases 3.12.10 and 3.12.11 are still not available on the mirrors. Greetings,     Paolo. On 04/07/2018 09:11, Niels de Vos wrote: > On Tue, Jul 03, 2018 at 05:20:44PM -0500, Darrell Budic wrote: >> I’ve now tested 3.12.11 on my CentOS 7.5…

Re: [Gluster-users] wrong size displayed with df after upgrade to 3.12.6

2018-04-12 Thread Paolo Margara
Dear all, I encountered the same issue. I saw that this is fixed in 3.12.7, but I cannot find that release in the main repo (CentOS Storage SIG), only in the test one. When is this release expected to be available in the main repo? Greetings,     Paolo. On 09/03/2018 10:41, Stefan…

Re: [Gluster-users] Bug 1442983 on 3.10.11 Unable to acquire lock for gluster volume leading to 'another transaction in progress' error

2018-03-17 Thread Paolo Margara
Hi, is this patch already available in the community version of gluster 3.12? In which version? If not, is there a plan to backport it? Greetings,     Paolo. On 16/03/2018 13:24, Atin Mukherjee wrote: > I have sent a backport request https://review.gluster.org/19730 at release-3.10…

Re: [Gluster-users] glusterd-locks.c:572:glusterd_mgmt_v3_lock

2017-07-26 Thread Paolo Margara
in the oVirt GUI, is there anything I could do from the gluster perspective to solve this issue? Considering that 3.8 is near EOL, upgrading to 3.10 could also be an option. Greetings, Paolo. On 20/07/2017 15:37, Paolo Margara wrote: > OK, on my nagios instance I've disabled g…

Re: [Gluster-users] glusterd-locks.c:572:glusterd_mgmt_v3_lock

2017-07-20 Thread Paolo Margara
> volume are run simultaneously, which can result in transaction collisions; you can end up with one command succeeding and the others failing. Ideally, if you are running the volume status command for monitoring, it is suggested to run it from only one node. > On Thu, Jul 20, 2017 at 3…

[Gluster-users] glusterd-locks.c:572:glusterd_mgmt_v3_lock

2017-07-20 Thread Paolo Margara
34b73 * (node3) virtnode-0-2-gluster: d9047ecd-26b5-467b-8e91-50f76a0c4d16 In this case restarting glusterd on node3 usually solve the issue. What could be the root cause of this behavior? How can I fix this once and for all? If needed I could provide the full log file. Greetings, Paolo Margara
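The workaround described above can be sketched as follows (node and peer names are the ones from the example; this clears the stale management lock but, as the poster notes, does not address the root cause):

```shell
# On the node holding the stale mgmt_v3 lock (node3 in the example above):
systemctl restart glusterd

# Then verify from any peer that management transactions succeed again:
gluster volume status
```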

Re: [Gluster-users] afr-self-heald.c:479:afr_shd_index_sweep

2017-06-29 Thread Paolo Margara
On 29/06/2017 16:27, Pranith Kumar Karampuri wrote: > On Thu, Jun 29, 2017 at 7:48 PM, Paolo Margara <paolo.marg...@polito.it> wrote: > Hi Pranith, I'm using this guide ht…

Re: [Gluster-users] afr-self-heald.c:479:afr_shd_index_sweep

2017-06-29 Thread Paolo Margara
follow for the upgrade? We can fix the documentation if there are any issues. > On Thu, Jun 29, 2017 at 2:07 PM, Ravishankar N <ravishan...@redhat.com> wrote: > On 06/29/2017 01:08 PM, Paolo Margara wrote: …

Re: [Gluster-users] afr-self-heald.c:479:afr_shd_index_sweep

2017-06-29 Thread Paolo Margara
not also stopped the brick processes? Now how can I recover from this issue? Is restarting all brick processes enough? Greetings, Paolo Margara. On 28/06/2017 18:41, Pranith Kumar Karampuri wrote: > On Wed, Jun 28, 2017 at 9:45 PM, Ravishankar N <ravishan..
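Assuming the bricks really were left stopped, a `start force` restarts any missing brick processes without touching the ones already running; a sketch in line with general Gluster practice (VOLNAME is a placeholder; the thread does not confirm this exact fix):

```shell
# Restart any brick process that is down; running bricks are left untouched:
gluster volume start VOLNAME force

# Confirm every brick now shows Online "Y":
gluster volume status VOLNAME
```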

[Gluster-users] afr-self-heald.c:479:afr_shd_index_sweep

2017-06-28 Thread Paolo Margara
meanwhile. Thanks. Greetings, Paolo Margara ___ Gluster-users mailing list Gluster-users@gluster.org http://lists.gluster.org/mailman/listinfo/gluster-users