On Sat, Jul 9, 2016 at 7:45 PM, Lindsay Mathieson < [email protected]> wrote:
> On 10/07/2016 5:17 AM, David Gossage wrote:
>
>> Came in this morning to update to 3.7.12 and noticed that 3.7.13 had
>> been released, so I shut down the VMs and gluster volumes and updated.
>> The update process itself went smoothly, but on starting up the oVirt
>> engine the main gluster storage volume didn't activate. I manually
>> activated it and it came up, but oVirt wouldn't report how much space
>> was used. The oVirt nodes did mount the volume and allowed me to start
>> VMs. However, after a few minutes it would claim to be inactive again,
>> even though the nodes themselves still had the volumes mounted and the
>> VMs were still running. Found these errors flooding the gluster logs
>> on the nodes.
>
> Hi David, I did a quick test this morning with Proxmox and 3.7.13 and
> was able to get it working with the fuse mount *and* libgfapi.
>
> One caveat - you *have* to enable qemu caching, either write-back or
> write-through. 3.7.12 & 13 seem to now disable aio support, and qemu
> requires that when caching is turned off.
>
> There are settings for aio in gluster that I haven't played with yet.

Comparing the settings you posted, I noticed I had one difference:

performance.stat-prefetch: off

What effect does this have? My current line-up:

Options Reconfigured:
features.shard-block-size: 64MB
features.shard: on
server.allow-insecure: on
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.owner-gid: 36
storage.owner-uid: 36
performance.readdir-ahead: on
cluster.self-heal-window-size: 1024
cluster.background-self-heal-count: 16
performance.strict-write-ordering: off
nfs.disable: on
nfs.addr-namelookup: off
nfs.enable-ino32: off

> --
> Lindsay Mathieson
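For anyone wanting to try the caching caveat above, a sketch of what the per-disk cache mode looks like when launching qemu directly against a gluster image (the server name, volume name, and image path here are placeholders, and the exact invocation depends on how your hypervisor stack drives qemu):

```shell
# Sketch only: start a guest from a gluster-hosted image with
# write-back caching. "server1", "datastore" and "vm1.qcow2" are
# placeholders for your own server, volume and image.
qemu-system-x86_64 \
    -drive file=gluster://server1/datastore/vm1.qcow2,format=qcow2,cache=writeback

# With cache=none qemu opens the image with O_DIRECT, which depends on
# aio support in the storage layer - the support 3.7.12/13 reportedly
# no longer provide, hence write-back or write-through being required.
```

In oVirt or Proxmox the equivalent is the per-disk cache setting in the VM's disk configuration rather than a raw qemu command line.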
_______________________________________________
Gluster-users mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-users
