Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-08-13 Thread Scott
Nope, not in any official repo. I only use those suggested by oVirt, i.e.: http://centos.bhs.mirrors.ovh.net/ftp.centos.org/7/storage/x86_64/gluster-3.7/ No 3.7.14 there. Thanks though. Scott On Sat, Aug 13, 2016 at 11:23 AM David Gossage wrote: > On Sat, Aug 13,

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-08-13 Thread Scott
Sounds good, except they aren't even that ("suggestions during testing phase"). They will flat out break the configuration. So they shouldn't be tests AT ALL. They shouldn't be anything except the "don't do this." Thanks. Scott On Sat, Aug 13, 2016 at 11:01 AM David Gossage

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-08-13 Thread David Gossage
On Sat, Aug 13, 2016 at 11:00 AM, David Gossage wrote: > On Sat, Aug 13, 2016 at 8:19 AM, Scott wrote: > >> Had a chance to upgrade my cluster to Gluster 3.7.14 and can confirm it >> works for me too where 3.7.12/13 did not. >> >> I did find

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-08-13 Thread David Gossage
On Sat, Aug 13, 2016 at 8:19 AM, Scott wrote: > Had a chance to upgrade my cluster to Gluster 3.7.14 and can confirm it > works for me too where 3.7.12/13 did not. > > I did find that you should NOT turn off network.remote-dio or turn > on performance.strict-o-direct as

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-08-13 Thread Scott
Had a chance to upgrade my cluster to Gluster 3.7.14 and can confirm it works for me too where 3.7.12/13 did not. I did find that you should NOT turn off network.remote-dio or turn on performance.strict-o-direct as suggested earlier in the thread. They will prevent dd (using direct flag) and
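Scott's dd check can be reproduced with a probe like the sketch below. The target path is an assumption (it stands in for a file on the FUSE-mounted gluster volume); on a healthy mount the O_DIRECT write should succeed, and the sketch falls back to a buffered write on filesystems that lack O_DIRECT support so it runs anywhere.

```shell
# Hypothetical direct-I/O probe; /tmp/gluster-probe stands in for a
# path on the FUSE-mounted gluster volume (not a path from the thread).
mkdir -p /tmp/gluster-probe
F=/tmp/gluster-probe/probe.img

# Try an O_DIRECT write first (the case that broke on 3.7.12/13); fall
# back to a buffered write where the filesystem lacks O_DIRECT support.
dd if=/dev/zero of="$F" bs=4096 count=256 oflag=direct 2>/dev/null \
  || dd if=/dev/zero of="$F" bs=4096 count=256

stat -c '%s' "$F"   # expect 1048576 bytes either way
```

Note that `bs=4096` keeps the write size aligned, which O_DIRECT generally requires.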

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-08-02 Thread David Gossage
So far gluster 3.7.14 seems to have resolved issues, at least on my test box. dd commands that failed previously now work with sharding on the zfs backend. Where before I couldn't even mount a new storage domain, it now mounted and I have a test vm being created. Still have to let the VM run for a few

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-26 Thread David Gossage
On Tue, Jul 26, 2016 at 4:37 AM, Krutika Dhananjay wrote: > Hi, > > 1. Could you please attach the glustershd logs from all three nodes? > Here are ccgl1 and ccgl2. As previously mentioned, the third node, ccgl3, was down from a bad NIC, so no relevant logs would be on that node.

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-26 Thread Krutika Dhananjay
Hi, 1. Could you please attach the glustershd logs from all three nodes? 2. Also, so far what we know is that the 'Operation not permitted' errors are on the main vm image itself and not its individual shards (e.g. deb61291-5176-4b81-8315-3f1cf8e3534d). Could you do the following: Get the inode

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-25 Thread David Gossage
On Mon, Jul 25, 2016 at 3:48 PM, Alexander Wels wrote: > On Monday, July 25, 2016 01:49:32 PM David Gossage wrote: > > On Mon, Jul 25, 2016 at 1:39 PM, Alexander Wels > wrote: > > > On Monday, July 25, 2016 01:37:47 PM David Gossage wrote: > > > > > > My test

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-25 Thread Alexander Wels
On Monday, July 25, 2016 01:49:32 PM David Gossage wrote: > On Mon, Jul 25, 2016 at 1:39 PM, Alexander Wels wrote: > > On Monday, July 25, 2016 01:37:47 PM David Gossage wrote: > > > > > My test install of ovirt 3.6.7 and gluster 3.7.13 with 3 bricks on a > > > > > > > > local

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-25 Thread David Gossage
On Mon, Jul 25, 2016 at 1:39 PM, Alexander Wels wrote: > On Monday, July 25, 2016 01:37:47 PM David Gossage wrote: > > > > My test install of ovirt 3.6.7 and gluster 3.7.13 with 3 bricks on a > > > > > > local > > > > > > > disk right now isn't allowing me to add the gluster

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-25 Thread Alexander Wels
On Monday, July 25, 2016 01:37:47 PM David Gossage wrote: > > > My test install of ovirt 3.6.7 and gluster 3.7.13 with 3 bricks on a > > > > local > > > > > disk right now isn't allowing me to add the gluster storage at all. > > > > > > > > > > > > Keep getting some type of UI error > > > >

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-25 Thread David Gossage
> > > > > > My test install of ovirt 3.6.7 and gluster 3.7.13 with 3 bricks on a > local > > > disk right now isn't allowing me to add the gluster storage at all. > > > > > > Keep getting some type of UI error > > > > > > > Yes that is definitely a UI error. To get a better stack trace can you >

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-25 Thread David Gossage
On Mon, Jul 25, 2016 at 1:00 PM, David Gossage wrote: > > On Mon, Jul 25, 2016 at 9:58 AM, Krutika Dhananjay > wrote: > >> OK, could you try the following: >> >> i. Set network.remote-dio to off >> # gluster volume set

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-25 Thread David Gossage
On Mon, Jul 25, 2016 at 9:58 AM, Krutika Dhananjay wrote: > OK, could you try the following: > > i. Set network.remote-dio to off > # gluster volume set network.remote-dio off > > ii. Set performance.strict-o-direct to on > # gluster volume set

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-25 Thread Krutika Dhananjay
OK, could you try the following: i. Set network.remote-dio to off # gluster volume set network.remote-dio off ii. Set performance.strict-o-direct to on # gluster volume set performance.strict-o-direct on iii. Stop the affected vm(s) and start again and tell me if you notice
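The two `volume set` steps Krutika suggests would be applied roughly as below (note that later messages in this thread report these settings break direct-flag dd and should not be used). VOLNAME is a placeholder, since the volume name is not given in the preview.

```shell
# Sketch of the suggested (and later retracted) tuning; VOLNAME is a
# placeholder -- substitute the affected gluster volume's name.
gluster volume set VOLNAME network.remote-dio off
gluster volume set VOLNAME performance.strict-o-direct on

# Verify the current values before stopping/starting the affected VMs.
gluster volume get VOLNAME network.remote-dio
gluster volume get VOLNAME performance.strict-o-direct
```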

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-25 Thread Samuli Heinonen
Hi, > On 25 Jul 2016, at 12:34, David Gossage wrote: > > On Mon, Jul 25, 2016 at 1:01 AM, Krutika Dhananjay > wrote: > Hi, > > Thanks for the logs. So I have identified one issue from the logs for which > the fix is this:

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-25 Thread David Gossage
On Mon, Jul 25, 2016 at 1:01 AM, Krutika Dhananjay wrote: > Hi, > > Thanks for the logs. So I have identified one issue from the logs for > which the fix is this: http://review.gluster.org/#/c/14669/. Because of a > bug in the code, ENOENT was getting converted to EPERM and

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-22 Thread Krutika Dhananjay
Hi David, Could you also share the brick logs from the affected volume? They're located at /var/log/glusterfs/bricks/.log. Also, could you share the volume configuration (output of `gluster volume info `) for the affected volume(s) AND at the time you actually saw this issue? -Krutika On
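Collecting what Krutika asks for might look like the sketch below; VOLNAME and the output directory are assumptions for illustration, not names from the thread.

```shell
# Hypothetical collection script; VOLNAME and /tmp/gluster-debug are
# placeholders. Run on each brick host for the affected volume.
OUT=/tmp/gluster-debug
mkdir -p "$OUT"

# Volume configuration at the time the issue was seen.
gluster volume info VOLNAME > "$OUT/volume-info.txt"

# Brick logs live under /var/log/glusterfs/bricks/ on each brick host.
cp /var/log/glusterfs/bricks/*.log "$OUT/" 2>/dev/null

# Bundle everything for attaching to the thread.
tar -czf /tmp/gluster-debug.tar.gz -C /tmp gluster-debug
```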

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-21 Thread Frank Rothenstein
As per my cluster, the problem is at a higher level: you can't activate the domains on FUSE; sanlock can't acquire the lock due to the permission errors visible in the brick log. On Thursday, 21.07.2016, 19:17 +, Scott wrote: > > You change the cache mode using a custom property per-VM I

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-21 Thread Scott
You change the cache mode using a custom property per-VM I believe. I don't know if this would work for the hosted engine. I've already downgraded my system, but once you have the test machine up, perhaps you can try it. The custom property would be: viodiskcache=writethrough or
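If memory serves, a per-VM custom property has to be declared on the engine before it can be set in the UI; a sketch, where the allowed-values regex is an assumption for illustration and only `viodiskcache=writethrough`/`writeback` come from the thread:

```shell
# Hypothetical: declare the viodiskcache custom property on the engine
# host so it can then be set per-VM; the value regex is an assumption.
engine-config -s "UserDefinedVMProperties=viodiskcache=^(none|writeback|writethrough)$"
systemctl restart ovirt-engine
```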

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-21 Thread Karli Sjöberg
On 21 Jul 2016, 8:39 PM, Scott wrote: > > CentOS 7 for me as well, using the zfsonlinux.org packages. Ok, so whatever problems there may be in those packages, at least you have them in common. That can also be a comfort :) /K > > On Thu, Jul 21, 2016 at 1:26 PM David

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-21 Thread Scott
CentOS 7 for me as well, using the zfsonlinux.org packages. On Thu, Jul 21, 2016 at 1:26 PM David Gossage wrote: > On Thu, Jul 21, 2016 at 1:24 PM, Karli Sjöberg > wrote: > >> >> On 21 Jul 2016, 7:54 PM, David Gossage wrote

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-21 Thread David Gossage
On Thu, Jul 21, 2016 at 1:24 PM, Karli Sjöberg wrote: > > On 21 Jul 2016, 7:54 PM, David Gossage wrote: > > > > On Thu, Jul 21, 2016 at 11:47 AM, Scott wrote: > >> > >> Hi David, > >> > >> My backend storage is ZFS. > >> >

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-21 Thread Karli Sjöberg
On 21 Jul 2016, 7:54 PM, David Gossage wrote: > > On Thu, Jul 21, 2016 at 11:47 AM, Scott wrote: >> >> Hi David, >> >> My backend storage is ZFS. >> >> I thought about moving from FUSE to NFS mounts for my Gluster volumes to >> help test. But

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-21 Thread David Gossage
On Thu, Jul 21, 2016 at 11:47 AM, Scott wrote: > Hi David, > > My backend storage is ZFS. > > I thought about moving from FUSE to NFS mounts for my Gluster volumes to > help test. But since I use hosted engine this would be a real pain. It's > difficult to modify the storage

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-21 Thread Scott
Hi David, My backend storage is ZFS. I thought about moving from FUSE to NFS mounts for my Gluster volumes to help test. But since I use hosted engine this would be a real pain. It's difficult to modify the storage domain type/path in the hosted-engine.conf. And I don't want to go through the

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-21 Thread David Gossage
What back end storage do you run gluster on? xfs/zfs/ext4 etc? *David Gossage* *Carousel Checks Inc. | System Administrator* *Office* 708.613.2284 On Thu, Jul 21, 2016 at 8:18 AM, Scott wrote: > I get similar problems with oVirt 4.0.1 and hosted engine. After > upgrading

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-21 Thread David Gossage
I'm creating a test box I can more thoroughly mess with so I can submit something to Bugzilla. Since my errors all popped up while trying to get ovirt and gluster functional again, rather than while thoroughly gathering logs and testing, my data is kinda sketchy. *David Gossage* *Carousel Checks Inc. |

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-21 Thread Scott
I get similar problems with oVirt 4.0.1 and hosted engine. After upgrading all my hosts to Gluster 3.7.13 (client and server), I get the following: $ sudo hosted-engine --set-maintenance --mode=none Traceback (most recent call last): File "/usr/lib64/python2.7/runpy.py", line 162, in

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-21 Thread Frank Rothenstein
Hey David, I have the very same problem on my test-cluster, despite running ovirt 4.0. If you access your volumes via NFS all is fine; the problem is FUSE. I stayed on 3.7.13, but have no solution yet, so now I use NFS. Frank On Thursday, 21.07.2016, 04:28 -0500, David Gossage wrote: > > > >

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-21 Thread Sandro Bonazzola
On Thu, Jul 21, 2016 at 11:28 AM, David Gossage wrote: > Anyone running one of the recent 3.6.x lines and gluster using 3.7.13? I am > looking to upgrade gluster from 3.7.11->3.7.13 for some bug fixes, but have > been told by users on the gluster mail list that due to some

[ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-07-21 Thread David Gossage
Anyone running one of the recent 3.6.x lines and gluster using 3.7.13? I am looking to upgrade gluster from 3.7.11->3.7.13 for some bug fixes, but have been told by users on the gluster mail list that due to some gluster changes I'd need to change the disk parameters to use writeback cache. Something to do