I got that too, so I upgraded to gluster6-rc0, but still, this morning one
engine brick is down:
[2019-03-04 01:33:22.492206] E [MSGID: 101191]
[event-epoll.c:765:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler
[2019-03-04 01:38:34.601381] I [addr.c:54:compare_addr_and_update]
0-/glu
We recently upgraded to 4.3.0 and have found that changing disk QoS
settings on VMs while IO-Threads is enabled causes them to segfault and the VM
to reboot. We've been able to replicate this across several VMs. VMs with
IO-Threads disabled do not segfault when changing the QoS.
I have tried bumping to 5.4 now and am still getting a lot of "Failed
Eventhandler" errors in the logs. Any ideas, guys?
On Sun, Mar 3, 2019 at 09:03, Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:
> Gluster 5.4 is released but not yet in official repository
> If like me you can no
The problem is that anything on a budget doesn't have decent network plus enough
storage slots.
Maybe a homemade workstation with an AMD Ryzen could do the trick - but that is
way over budget compared to Raspberry Pis.
Best Regards,
Strahil Nikolov

On Mar 3, 2019 12:22, Jonathan Baecker wrote:
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system
may not be provisioned according to the playbook results: please check the logs
for the issue, fix accordingly or re-deploy from scratch.\n"}
I keep getting this when I try to deploy the Engine VM from the menu. I cha
Hello everybody!
Does anyone here have experience with a cheap, energy-saving GlusterFS
storage solution? I'm thinking of something that has more power than a
Raspberry Pi, with 3 x 2 TB (SSD) storage, but doesn't cost much more and
doesn't consume much more power.
Would that be possible? I know th
Hi!
512 emulation was intended to support drivers that do only a fraction of
their I/O in blocks smaller than 4 KB. It is not optimized for performance in
any way. Under the covers, VDO is still operating on 4 KB physical blocks, so
each 512-byte read is potentially amplified to a 4 KB read, and each
512-
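The amplification described above can be pictured with a small sketch. This is an illustrative model only, not VDO's actual code: the `physical_bytes_read` helper is hypothetical, and it assumes every logical read must fetch each whole 4 KiB physical block it touches.

```python
# Sketch: read amplification when a 512-byte-sector workload runs on top of
# a device with 4 KiB physical blocks (as with VDO's 512-byte emulation).
PHYSICAL_BLOCK = 4096  # bytes per physical block
LOGICAL_SECTOR = 512   # bytes per emulated sector

def physical_bytes_read(offset: int, length: int) -> int:
    """Bytes read from the 4 KiB-block device to satisfy a logical read
    of `length` bytes starting at byte `offset` (whole blocks only)."""
    first_block = offset // PHYSICAL_BLOCK
    last_block = (offset + length - 1) // PHYSICAL_BLOCK
    return (last_block - first_block + 1) * PHYSICAL_BLOCK

# A single aligned 512-byte read still pulls in one full 4 KiB block,
# an 8x amplification (4096 / 512).
print(physical_bytes_read(0, LOGICAL_SECTOR))  # → 4096
```

A 512-byte read that happens to straddle a block boundary would, under this model, cost two full blocks (8 KiB), which is why sub-4 KB I/O on 512e devices is best kept to a small fraction of the workload.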
I had this issue, I believe that when I tried to fix the network manually so
that ovirt could sync the correct config, vdsm was kicking in and overwriting
my changes with what it had stored in /var/lib/vdsm/persistence/netconf/ before
the sync took place. For whatever reason this was dhcp.
Gluster 5.4 is released but not yet in official repository
If, like me, you cannot wait for the official release of Gluster 5.4 with the
instability bugfixes (planned for around March 12, hopefully), you can use
the following repository:
For Gluster 5.4-1 :
#/etc/yum.repos.d/Gluster5-Testing.repo
[Glu