Nope, not in any official repo. I only use those suggested by oVirt, i.e.:
http://centos.bhs.mirrors.ovh.net/ftp.centos.org/7/storage/x86_64/gluster-3.7/
No 3.7.14 there. Thanks though.
Scott
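A quick way to check which GlusterFS builds a repository actually ships is to list every available version with yum (assuming the repo is already enabled in a .repo file):
$ yum clean metadata
$ yum --showduplicates list glusterfs-server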
On Sat, Aug 13, 2016 at 11:23 AM David Gossage wrote:
> On Sat, Aug 13, 2016 at 11:00 AM, David Gossage wrote:
Sounds good, except they aren't even that ("suggestions during the testing
phase"). They will flat out break the configuration. So they shouldn't be
suggested as tests AT ALL. They shouldn't be anything except the "don't do this."
Thanks.
Scott
On Sat, Aug 13, 2016 at 11:01 AM David Gossage wrote:
> On Sat, Aug 13, 2016 at 11:00 AM, David Gossage wrote:
On Sat, Aug 13, 2016 at 11:00 AM, David Gossage wrote:
> On Sat, Aug 13, 2016 at 8:19 AM, Scott wrote:
>
>> Had a chance to upgrade my cluster to Gluster 3.7.14 and can confirm it
>> works for me too where 3.7.12/13 did not.
>>
>> I did find that you should NOT turn off network.remote-dio or turn
>> on performance.strict-o-direct as suggested earlier in the thread.
On Sat, Aug 13, 2016 at 8:19 AM, Scott wrote:
> Had a chance to upgrade my cluster to Gluster 3.7.14 and can confirm it
> works for me too where 3.7.12/13 did not.
>
> I did find that you should NOT turn off network.remote-dio or turn
> on performance.strict-o-direct as suggested earlier in the thread.
Had a chance to upgrade my cluster to Gluster 3.7.14 and can confirm it
works for me too where 3.7.12/13 did not.
I did find that you should NOT turn off network.remote-dio or turn
on performance.strict-o-direct as suggested earlier in the thread. They
will prevent dd (using direct flag) and othe
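For reference, a minimal direct-I/O check of the kind described above can be run against the FUSE mount; the mount point below is only an illustration, so substitute the real glusterSD mount path on the host:
$ dd if=/dev/zero of=/mnt/glustervol/ddtest.img bs=1M count=100 oflag=direct
With the problematic option combination set, a dd like this fails instead of completing, which is the symptom being described.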
So far gluster 3.7.14 seems to have resolved the issues, at least on my test
box. dd commands that failed previously now work with sharding on a ZFS
backend.
Where before I couldn't even mount a new storage domain, it now mounted and
I have a test VM being created.
Still have to let the VM run for a few days
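A quick way to confirm the installed version and that sharding is still enabled on the volume after the upgrade (the volume name is just an example):
$ gluster --version
$ gluster volume info datavol | grep -i shard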
On Tue, Jul 26, 2016 at 4:37 AM, Krutika Dhananjay wrote:
> Hi,
>
> 1. Could you please attach the glustershd logs from all three nodes?
>
Here are ccgl1 and ccgl2. As previously mentioned, the third node, ccgl3, was
down from a bad NIC, so no relevant logs would be on that node.
>
> 2. Also, so far what we know is that the 'Operation not permitted' errors
Hi,
1. Could you please attach the glustershd logs from all three nodes?
2. Also, so far what we know is that the 'Operation not permitted' errors
are on the main vm image itself and not its individual shards (e.g.
deb61291-5176-4b81-8315-3f1cf8e3534d). Could you do the following:
Get the inode number
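For what it's worth, a sketch of how that inode number (and the gfid that the shard files are named after) can be read directly on a brick; the brick and image paths are purely illustrative:
$ stat -c '%i' /gluster/brick1/data/<domain-uuid>/images/<image-uuid>/<vm-image>
$ getfattr -n trusted.gfid -e hex /gluster/brick1/data/<domain-uuid>/images/<image-uuid>/<vm-image>
The gfid returned by getfattr is the name the corresponding shards carry under the .shard/ directory on the bricks.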
On Mon, Jul 25, 2016 at 3:48 PM, Alexander Wels wrote:
> On Monday, July 25, 2016 01:49:32 PM David Gossage wrote:
> > On Mon, Jul 25, 2016 at 1:39 PM, Alexander Wels wrote:
> > > On Monday, July 25, 2016 01:37:47 PM David Gossage wrote:
> > > > > > My test install of ovirt 3.6.7 and gluster 3
On Monday, July 25, 2016 01:49:32 PM David Gossage wrote:
> On Mon, Jul 25, 2016 at 1:39 PM, Alexander Wels wrote:
> > On Monday, July 25, 2016 01:37:47 PM David Gossage wrote:
> > > > > My test install of ovirt 3.6.7 and gluster 3.7.13 with 3 bricks on a
> > > > > local disk right now isn't allowing me to add the gluster storage at all.
On Mon, Jul 25, 2016 at 1:39 PM, Alexander Wels wrote:
> On Monday, July 25, 2016 01:37:47 PM David Gossage wrote:
> > > > My test install of ovirt 3.6.7 and gluster 3.7.13 with 3 bricks on a
> > > > local disk right now isn't allowing me to add the gluster storage at all.
On Monday, July 25, 2016 01:37:47 PM David Gossage wrote:
> > > My test install of ovirt 3.6.7 and gluster 3.7.13 with 3 bricks on a
> > > local disk right now isn't allowing me to add the gluster storage at all.
> > >
> > > Keep getting some type of UI error
> >
> > Y
>
> > My test install of ovirt 3.6.7 and gluster 3.7.13 with 3 bricks on a
> > local disk right now isn't allowing me to add the gluster storage at all.
> >
> > Keep getting some type of UI error
>
> Yes that is definitely a UI error. To get a better stack trace can you
> in
On Mon, Jul 25, 2016 at 1:07 PM, David Gossage wrote:
>
> On Mon, Jul 25, 2016 at 1:00 PM, David Gossage <dgoss...@carouselchecks.com> wrote:
>
>>
>> On Mon, Jul 25, 2016 at 9:58 AM, Krutika Dhananjay wrote:
>>
>>> OK, could you try the following:
>>>
>>> i. Set network.remote-dio to off
>
On Monday, July 25, 2016 01:00:58 PM David Gossage wrote:
> On Mon, Jul 25, 2016 at 9:58 AM, Krutika Dhananjay wrote:
> > OK, could you try the following:
> >
> > i. Set network.remote-dio to off
> >
> > # gluster volume set <VOLNAME> network.remote-dio off
> >
> > ii. Set performance.strict-o-direct to on
On Mon, Jul 25, 2016 at 1:00 PM, David Gossage wrote:
>
> On Mon, Jul 25, 2016 at 9:58 AM, Krutika Dhananjay wrote:
>
>> OK, could you try the following:
>>
>> i. Set network.remote-dio to off
>> # gluster volume set <VOLNAME> network.remote-dio off
>>
>> ii. Set performance.strict-o-direct to on
On Mon, Jul 25, 2016 at 9:58 AM, Krutika Dhananjay wrote:
> OK, could you try the following:
>
> i. Set network.remote-dio to off
> # gluster volume set <VOLNAME> network.remote-dio off
>
> ii. Set performance.strict-o-direct to on
> # gluster volume set <VOLNAME> performance.strict-o-direct on
>
>
OK, could you try the following:
i. Set network.remote-dio to off
# gluster volume set <VOLNAME> network.remote-dio off
ii. Set performance.strict-o-direct to on
# gluster volume set <VOLNAME> performance.strict-o-direct on
iii. Stop the affected vm(s) and start again
and tell me if you notice an
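Putting those steps together against a concrete volume (the name "datavol" is only an example; `gluster volume set` always takes the volume name first):
# gluster volume set datavol network.remote-dio off
# gluster volume set datavol performance.strict-o-direct on
# gluster volume info datavol
The last command should show both options under "Options Reconfigured"; after that, power the affected VM(s) off and on again so the new options apply to their open files.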
Hi,
> On 25 Jul 2016, at 12:34, David Gossage wrote:
>
> On Mon, Jul 25, 2016 at 1:01 AM, Krutika Dhananjay wrote:
> Hi,
>
> Thanks for the logs. So I have identified one issue from the logs for which
> the fix is this: http://review.gluster.org/#/c/14669/. Because of a bug in
> the code,
On Mon, Jul 25, 2016 at 1:01 AM, Krutika Dhananjay wrote:
> Hi,
>
> Thanks for the logs. So I have identified one issue from the logs for
> which the fix is this: http://review.gluster.org/#/c/14669/. Because of a
> bug in the code, ENOENT was getting converted to EPERM and being propagated
> up
Hi,
Thanks for the logs. So I have identified one issue from the logs for which
the fix is this: http://review.gluster.org/#/c/14669/. Because of a bug in
the code, ENOENT was getting converted to EPERM and being propagated up the
stack causing the reads to bail out early with 'Operation not permitted'.
Hi David,
Could you also share the brick logs from the affected volume? They're
located at /var/log/glusterfs/bricks/<hyphenated-brick-path>.log.
Also, could you share the volume configuration (output of `gluster volume
info <VOLNAME>`) for the affected volume(s) AND at the time you actually saw
this issue?
-Krutika
On Thu
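To illustrate that log naming (the brick path here is made up): the brick log file name is the brick's export path with the slashes replaced by dashes, so a brick exported from /gluster/brick1/data would log to
/var/log/glusterfs/bricks/gluster-brick1-data.log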
As far as my cluster goes, the problem is at a higher level: you can't
activate the domains on FUSE; sanlock can't acquire the lock due to the
permission errors visible in the brick log.
On Thursday, 21.07.2016 at 19:17, Scott wrote:
> > You change the cache mode using a custom property per-VM I bel
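One common way to reproduce the sanlock symptom by hand is to read the domain's ids file through the FUSE mount with O_DIRECT as the vdsm user; the path below only sketches the usual /rhev layout, so substitute the real server, volume and storage-domain UUID:
$ sudo -u vdsm dd if=/rhev/data-center/mnt/glusterSD/<server>:_<volume>/<sd-uuid>/dom_md/ids \
      of=/dev/null bs=4096 count=1 iflag=direct
If that fails with 'Operation not permitted' while the same read without iflag=direct works, it is the same direct-I/O failure sanlock runs into.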
You change the cache mode using a custom property per-VM I believe. I
don't know if this would work for the hosted engine.
I've already downgraded my system, but once you have the test machine up,
perhaps you can try it. The custom property would be:
viodiskcache=writethrough
or
viodiskcache
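If that custom property isn't defined on the engine yet, it is normally exposed with engine-config; a sketch, assuming a 3.6 cluster level (adjust --cver) and that the matching vdsm hook is installed on the hosts:
# engine-config -s UserDefinedVMProperties='viodiskcache=^(none|writeback|writethrough)$' --cver=3.6
# systemctl restart ovirt-engine
After the restart, viodiskcache=writethrough can be set per VM under its custom properties, as suggested above.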
On 21 Jul 2016 8:39 PM, Scott wrote:
>
> CentOS 7 for me as well, using the zfsonlinux.org packages.
Ok, so whatever problems there may be in those packages, at least you have them
in common. That can also be a comfort:)
/K
>
> On Thu, Jul 21, 2016 at 1:26 PM David Gossage wrote:
>>
>> On
CentOS 7 for me as well, using the zfsonlinux.org packages.
On Thu, Jul 21, 2016 at 1:26 PM David Gossage wrote:
> On Thu, Jul 21, 2016 at 1:24 PM, Karli Sjöberg wrote:
>
>>
>> On 21 Jul 2016 7:54 PM, David Gossage wrote:
>> >
>> > On Thu, Jul 21, 2016 at 11:47 AM, Scott wrote:
>> >>
>>
On Thu, Jul 21, 2016 at 1:24 PM, Karli Sjöberg wrote:
>
> On 21 Jul 2016 7:54 PM, David Gossage wrote:
> >
> > On Thu, Jul 21, 2016 at 11:47 AM, Scott wrote:
> >>
> >> Hi David,
> >>
> >> My backend storage is ZFS.
> >>
> >> I thought about moving from FUSE to NFS mounts for my Gluster volumes
On 21 Jul 2016 7:54 PM, David Gossage wrote:
>
> On Thu, Jul 21, 2016 at 11:47 AM, Scott wrote:
>>
>> Hi David,
>>
>> My backend storage is ZFS.
>>
>> I thought about moving from FUSE to NFS mounts for my Gluster volumes to
>> help test. But since I use hosted engine this would be a real pain.
On Thu, Jul 21, 2016 at 11:47 AM, Scott wrote:
> Hi David,
>
> My backend storage is ZFS.
>
> I thought about moving from FUSE to NFS mounts for my Gluster volumes to
> help test. But since I use hosted engine this would be a real pain. It's
> difficult to modify the storage domain type/path in
Hi David,
My backend storage is ZFS.
I thought about moving from FUSE to NFS mounts for my Gluster volumes to
help test. But since I use hosted engine this would be a real pain. It's
difficult to modify the storage domain type/path in the
hosted-engine.conf. And I don't want to go through the p
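For context, the values that would have to change live in /etc/ovirt-hosted-engine/hosted-engine.conf on each host; an illustrative excerpt with made-up host and volume names:
storage=gluster1.example.com:/engine
domainType=glusterfs
mnt_options=backup-volfile-servers=gluster2.example.com:gluster3.example.com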
What back end storage do you run gluster on? xfs/zfs/ext4 etc?
*David Gossage*
*Carousel Checks Inc. | System Administrator*
*Office* 708.613.2284
On Thu, Jul 21, 2016 at 8:18 AM, Scott wrote:
> I get similar problems with oVirt 4.0.1 and hosted engine. After
> upgrading all my hosts to Glust
I'm creating a test box I can mess with more thoroughly so I can submit
something to Bugzilla. Since my errors all popped up while trying to get
ovirt and gluster functional again, rather than while thoroughly gathering
logs and testing, my data is kinda sketchy.
*David Gossage*
*Carousel Checks Inc. | System
I get similar problems with oVirt 4.0.1 and hosted engine. After upgrading
all my hosts to Gluster 3.7.13 (client and server), I get the following:
$ sudo hosted-engine --set-maintenance --mode=none
Traceback (most recent call last):
File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
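When --set-maintenance dies like that, one quick sanity check is the state of the HA services and their view of the storage, using the standard hosted-engine tooling:
$ sudo hosted-engine --vm-status
$ sudo systemctl status ovirt-ha-agent ovirt-ha-broker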
Hey David,
I have the very same problem on my test cluster, despite it running
ovirt 4.0.
If you access your volumes via NFS all is fine; the problem is FUSE. I
stayed on 3.7.13, but have no solution yet, so for now I use NFS.
Frank
On Thursday, 21.07.2016 at 04:28 -0500, David Gossage wrote:
> > > > >
On Thu, Jul 21, 2016 at 11:28 AM, David Gossage wrote:
> Anyone running one of recent 3.6.x lines and gluster using 3.7.13? I am
> looking to upgrade gluster from 3.7.11->3.7.13 for some bug fixes, but have
> been told by users on gluster mail list due to some gluster changes I'd
> need to change the disk parameters to use writeback cache. Something to do
Anyone running one of recent 3.6.x lines and gluster using 3.7.13? I am
looking to upgrade gluster from 3.7.11->3.7.13 for some bug fixes, but have
been told by users on the gluster mailing list that, due to some gluster
changes, I'd need to change the disk parameters to use writeback cache.
Something to do
wi