[ovirt-users] Re: Advice around ovirt 4.3 / gluster 5.x

2019-03-03 Thread Guillaume Pavese
I got that too, so I upgraded to gluster6-rc0, but still, this morning one
engine brick is down:

[2019-03-04 01:33:22.492206] E [MSGID: 101191]
[event-epoll.c:765:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler
[2019-03-04 01:38:34.601381] I [addr.c:54:compare_addr_and_update]
0-/gluster_bricks/engine/engine: allowed = "*", received addr =
"10.199.211.5"
[2019-03-04 01:38:34.601410] I [login.c:110:gf_auth] 0-auth/login: allowed
user names: 9e360b5b-34d3-4076-bc7e-ed78e4e0dc01
[2019-03-04 01:38:34.601421] I [MSGID: 115029]
[server-handshake.c:550:server_setvolume] 0-engine-server: accepted client
from
CTX_ID:f7603ec6-9914-408b-85e6-e64e9844e326-GRAPH_ID:0-PID:300490-HOST:ps-inf-int-kvm-fr-305-210.hostics.fr-PC_NAME:engine-client-0-RECON_NO:-0
(version: 6.0rc0) with subvol /gluster_bricks/engine/engine
[2019-03-04 01:38:34.610400] I [MSGID: 115036]
[server.c:498:server_rpc_notify] 0-engine-server: disconnecting connection
from
CTX_ID:f7603ec6-9914-408b-85e6-e64e9844e326-GRAPH_ID:0-PID:300490-HOST:ps-inf-int-kvm-fr-305-210.hostics.fr-PC_NAME:engine-client-0-RECON_NO:-0
[2019-03-04 01:38:34.610531] I [MSGID: 101055]
[client_t.c:436:gf_client_unref] 0-engine-server: Shutting down connection
CTX_ID:f7603ec6-9914-408b-85e6-e64e9844e326-GRAPH_ID:0-PID:300490-HOST:ps-inf-int-kvm-fr-305-210.hostics.fr-PC_NAME:engine-client-0-RECON_NO:-0
[2019-03-04 01:38:34.610574] E [MSGID: 101191]
[event-epoll.c:765:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler
[2019-03-04 01:39:18.520347] I [addr.c:54:compare_addr_and_update]
0-/gluster_bricks/engine/engine: allowed = "*", received addr =
"10.199.211.5"
[2019-03-04 01:39:18.520373] I [login.c:110:gf_auth] 0-auth/login: allowed
user names: 9e360b5b-34d3-4076-bc7e-ed78e4e0dc01
[2019-03-04 01:39:18.520383] I [MSGID: 115029]
[server-handshake.c:550:server_setvolume] 0-engine-server: accepted client
from
CTX_ID:f3be82ea-6340-4bd4-afb3-aa9db432f779-GRAPH_ID:0-PID:300885-HOST:ps-inf-int-kvm-fr-305-210.hostics.fr-PC_NAME:engine-client-0-RECON_NO:-0
(version: 6.0rc0) with subvol /gluster_bricks/engine/engine
[2019-03-04 01:39:19.711947] I [MSGID: 115036]
[server.c:498:server_rpc_notify] 0-engine-server: disconnecting connection
from
CTX_ID:f3be82ea-6340-4bd4-afb3-aa9db432f779-GRAPH_ID:0-PID:300885-HOST:ps-inf-int-kvm-fr-305-210.hostics.fr-PC_NAME:engine-client-0-RECON_NO:-0
[2019-03-04 01:39:19.712431] I [MSGID: 101055]
[client_t.c:436:gf_client_unref] 0-engine-server: Shutting down connection
CTX_ID:f3be82ea-6340-4bd4-afb3-aa9db432f779-GRAPH_ID:0-PID:300885-HOST:ps-inf-int-kvm-fr-305-210.hostics.fr-PC_NAME:engine-client-0-RECON_NO:-0
[2019-03-04 01:39:19.712484] E [MSGID: 101191]
[event-epoll.c:765:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler


Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group


On Mon, Mar 4, 2019 at 3:56 AM Endre Karlson 
wrote:

> I have tried bumping to 5.4 now and am still getting a lot of "Failed
> Eventhandler" errors in the logs. Any ideas, guys?
>
> On Sun, Mar 3, 2019 at 09:03, Guillaume Pavese <
> guillaume.pav...@interactiv-group.com> wrote:
>
>> Gluster 5.4 is released but not yet in the official repository.
>> If, like me, you cannot wait for the official release of Gluster 5.4 with
>> the instability bugfixes (hopefully planned for around March 12), you can
>> use the following repository:
>>
>> For Gluster 5.4-1 :
>>
>> #/etc/yum.repos.d/Gluster5-Testing.repo
>> [Gluster5-Testing]
>> name=Gluster5-Testing $basearch
>> baseurl=https://cbs.centos.org/repos/storage7-gluster-5-testing/os/$basearch/
>> enabled=1
>> #metadata_expire=60m
>> gpgcheck=0
>>
>>
>> If adventurous ;) Gluster 6-rc0:
>>
>> #/etc/yum.repos.d/Gluster6-Testing.repo
>> [Gluster6-Testing]
>> name=Gluster6-Testing $basearch
>> baseurl=https://cbs.centos.org/repos/storage7-gluster-6-testing/os/$basearch/
>> enabled=1
>> #metadata_expire=60m
>> gpgcheck=0
>>
>>
>> GLHF
>>
>> Guillaume Pavese
>> Ingénieur Système et Réseau
>> Interactiv-Group
>>
>>
>> On Sun, Mar 3, 2019 at 6:16 AM Endre Karlson 
>> wrote:
>>
>>> Hi, should we downgrade / reinstall our cluster? We have a 4 node
>>> cluster that's breaking apart daily due to the issues with GlusterFS after
>>> upgrading from 4.2.8, which was rock solid. I am wondering why 4.3 was
>>> released as a stable version at all?? **FRUSTRATION**
>>>
>>> Endre
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3TJKJGGWCANXWZED2WF5ZHTSRS2DVHR2/
>>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/

[ovirt-users] Changing disk QoS causes segfault with IO-Threads enabled (oVirt 4.3.0.4-1.el7)

2019-03-03 Thread jloh
We recently upgraded to 4.3.0 and have found that changing disk QoS settings
on VMs whilst IO-Threads is enabled causes them to segfault and the VM to
reboot. We've been able to replicate this across several VMs. VMs with
IO-Threads disabled/turned off do not segfault when changing the QoS.

Mar  1 11:49:06 srvXX kernel: IO iothread1[30468]: segfault at fff8 
ip 557649f2bd24 sp 7f80de832f60 error 5 in qemu-kvm[5576498dd000+a03000]
Mar  1 11:49:06 srvXX abrt-hook-ccpp: invalid number 'iothread1'
Mar  1 11:49:11 srvXX libvirtd: 2019-03-01 00:49:11.116+: 13365: error : 
qemuMonitorIORead:609 : Unable to read from monitor: Connection reset by peer

Happy to supply some more logs to someone if they'll help, but just wondering
whether anyone else has experienced this or knows of a current fix other than
turning IO-Threads off.

Cheers.
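For anyone trying to reproduce or triage this, the first thing to check is whether a given VM actually runs with IO-Threads. A hedged, self-contained sketch (on a real host you would inspect the live domain with `virsh dumpxml <vm-name>`; here a sample XML stands in so the snippet runs anywhere, and the domain layout follows the usual libvirt schema):

```shell
# Write a small sample domain XML (stand-in for `virsh dumpxml <vm-name>` output)
cat > /tmp/vm-sample.xml <<'EOF'
<domain type='kvm'>
  <name>testvm</name>
  <iothreads>1</iothreads>
</domain>
EOF

# A VM is using IO-Threads if its domain XML carries an <iothreads> element
if grep -q '<iothreads>' /tmp/vm-sample.xml; then
  echo "IO-Threads enabled"
fi
```

VMs without the `<iothreads>` element would be the ones the poster reports as unaffected by the QoS change.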
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UEVUBKWDS7NHHEDFK3G5K7HFRU3NJUKV/


[ovirt-users] Re: Advice around ovirt 4.3 / gluster 5.x

2019-03-03 Thread Endre Karlson
I have tried bumping to 5.4 now and am still getting a lot of "Failed
Eventhandler" errors in the logs. Any ideas, guys?

On Sun, Mar 3, 2019 at 09:03, Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:

> Gluster 5.4 is released but not yet in the official repository.
> If, like me, you cannot wait for the official release of Gluster 5.4 with
> the instability bugfixes (hopefully planned for around March 12), you can
> use the following repository:
>
> For Gluster 5.4-1 :
>
> #/etc/yum.repos.d/Gluster5-Testing.repo
> [Gluster5-Testing]
> name=Gluster5-Testing $basearch
> baseurl=https://cbs.centos.org/repos/storage7-gluster-5-testing/os/$basearch/
> enabled=1
> #metadata_expire=60m
> gpgcheck=0
>
>
> If adventurous ;) Gluster 6-rc0:
>
> #/etc/yum.repos.d/Gluster6-Testing.repo
> [Gluster6-Testing]
> name=Gluster6-Testing $basearch
> baseurl=https://cbs.centos.org/repos/storage7-gluster-6-testing/os/$basearch/
> enabled=1
> #metadata_expire=60m
> gpgcheck=0
>
>
> GLHF
>
> Guillaume Pavese
> Ingénieur Système et Réseau
> Interactiv-Group
>
>
> On Sun, Mar 3, 2019 at 6:16 AM Endre Karlson 
> wrote:
>
>> Hi, should we downgrade / reinstall our cluster? We have a 4 node cluster
>> that's breaking apart daily due to the issues with GlusterFS after upgrading
>> from 4.2.8, which was rock solid. I am wondering why 4.3 was released as a
>> stable version at all?? **FRUSTRATION**
>>
>> Endre
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3TJKJGGWCANXWZED2WF5ZHTSRS2DVHR2/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PYUE337CBDNJSICY3Z3CRW2OFSGLX2Q2/


[ovirt-users] Re: low power, low cost glusterfs storage

2019-03-03 Thread Strahil
The problem is that anything on a budget doesn't have a decent network plus
enough storage slots.
Maybe a homemade workstation with an AMD Ryzen could do the trick - but that is
way over budget compared to Raspberry Pis.

Best Regards,
Strahil Nikolov

On Mar 3, 2019 12:22, Jonathan Baecker
wrote:
>
> Hello everybody! 
>
> Does anyone here have experience with a cheap, energy-saving glusterfs 
> storage solution? I'm thinking of something that has more power than a 
> Raspberry Pi, 3 x 2 TB (SSD) storage, but doesn't cost much more and 
> doesn't consume much more power. 
>
> Would that be possible? I know the "Red Hat Gluster Storage" 
> requirements, but are they generally so high? Only a few VM images would 
> have to be on it... 
>
> Greetings 
>
> Jonathan 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NWOA24SHB2CV6SDVBIYPL5PJELJDZRND/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CZ2T7LKCVMTKMFJNB32TUST2JEC5HUTJ/


[ovirt-users] oVirt 4.3.0 - deploy Engine from VM

2019-03-03 Thread a . e . pool
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system 
may not be provisioned according to the playbook results: please check the logs 
for the issue, fix accordingly or re-deploy from scratch.\n"}

I keep getting this when I try to deploy the Engine VM from the menu. I changed
out hardware thinking it might be the culprit, checked internet connections,
etc. Same old, same old.

Any assistance will be appreciated.

Thanks,
Andre
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/T3XLTYWO5LSNU6G6YDVCPDKOSO2N3WWA/


[ovirt-users] low power, low cost glusterfs storage

2019-03-03 Thread Jonathan Baecker

Hello everybody!

Does anyone here have experience with a cheap, energy-saving glusterfs 
storage solution? I'm thinking of something that has more power than a 
Raspberry Pi, 3 x 2 TB (SSD) storage, but doesn't cost much more and 
doesn't consume much more power.


Would that be possible? I know the "Red Hat Gluster Storage" 
requirements, but are they generally so high? Only a few VM images would 
have to be on it...


Greetings

Jonathan
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NWOA24SHB2CV6SDVBIYPL5PJELJDZRND/


[ovirt-users] Re: [Vdo-devel] impact of --emulate512 setting for VDO volumes

2019-03-03 Thread Michael Sclafani
Hi!

512 emulation was intended to support drivers that do only a fraction of
their I/O in blocks smaller than 4KB. It is not optimized for performance in any
way. Under the covers, VDO is still operating on 4KB physical blocks, so
each 512-byte read is potentially amplified to a 4KB read, and each
512-byte write to a 4KB read followed by a 4KB write. A workload consisting
exclusively of 512-byte randomly-distributed writes could effectively be
amplified by a factor of 16.

We have a suite of automated tests we run in 512e mode on a nightly basis.
That suite is a subset of our regular tests, containing only ones we expect
would be likely to expose problems specific to the emulation.

There should be no penalty to having emulation enabled on a volume that no
longer uses it. If the I/O is 4KB-aligned and 4KB or larger, having it
enabled won't affect it.
It does not appear the setting can be modified by the VDO manager, but I
cannot remember at this moment why that should be so.

Hope this helps.
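The factor-of-16 figure above falls straight out of the arithmetic: under 512e, a worst-case 512-byte random write costs a 4 KB read-modify cycle plus a 4 KB write. A quick sketch of that calculation:

```shell
# Worst-case write amplification under 512e, per the explanation above
SECTOR=512                          # emulated sector size in bytes
BLOCK=4096                          # VDO physical block size in bytes
io_per_write=$((BLOCK + BLOCK))     # each 512 B write: 4 KB read + 4 KB write
amplification=$((io_per_write / SECTOR))
echo "amplification factor: $amplification"
```

That is, 8192 bytes of physical I/O are issued to commit 512 bytes of payload, giving 8192 / 512 = 16. Aligned 4 KB-or-larger writes avoid the read-modify step entirely, which matches the statement that such I/O is unaffected by the setting.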

On Fri, Mar 1, 2019 at 2:24 PM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:

> Hello,
>
> We are planning to deploy VDO with oVirt 4.3 on centos 7.6 (on SSD
> devices).
> As oVirt does not support 4K devices yet, VDO volumes are created with the
> parameter "--emulate512 enabled"
>
> What are the implications of this setting? Does it impact performance? If
> so, is it IOPS or throughput that is impacted? What about reliability (is
> that mode equally tested as standard mode)?
>
> As I saw on RH Bugzilla, support for 4K devices in oVirt will need to wait
> at least for Centos 7.7
> Once that is supported, would it be possible to transition/upgrade an
> emulate512 vdo volume to a standard one?
>
> Thanks,
>
> Guillaume Pavese
> Ingénieur Système et Réseau
> Interactiv-Group
> ___
> Vdo-devel mailing list
> vdo-de...@redhat.com
> https://www.redhat.com/mailman/listinfo/vdo-devel
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZFYDJ7EETYDDWGPCON6GMBNB3SNBDOX6/


[ovirt-users] Re: Node losing management network address?

2019-03-03 Thread jajadating
I had this issue. I believe that when I tried to fix the network manually so
that oVirt could sync the correct config, vdsm kicked in and overwrote my
changes with what it had stored in /var/lib/vdsm/persistence/netconf/ before
the sync took place. For whatever reason, the persisted config was DHCP.
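The mechanism described above can be illustrated with a minimal simulation (paths under /tmp are stand-ins for /var/lib/vdsm/persistence/netconf/ and the live config file; the file names and `bootproto` values are hypothetical, not real vdsm file layout):

```shell
# Stand-in for vdsm's persistence directory
mkdir -p /tmp/vdsm-demo/persistence/netconf/nets
echo "bootproto=dhcp" > /tmp/vdsm-demo/persistence/netconf/nets/ovirtmgmt

# A manual fix to the "live" config...
echo "bootproto=static" > /tmp/vdsm-demo/ovirtmgmt

# ...is lost when the persisted version is restored over it,
# which is what appeared to happen before the sync completed
cp /tmp/vdsm-demo/persistence/netconf/nets/ovirtmgmt /tmp/vdsm-demo/ovirtmgmt
cat /tmp/vdsm-demo/ovirtmgmt
```

The takeaway is that a manual fix only sticks if the persisted copy is updated (or removed) as well, so that the restore step has nothing stale to write back.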
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/23MLUL2YU7VYFGQXSHZJ3CTC3ZMIHAMR/


[ovirt-users] Re: Advice around ovirt 4.3 / gluster 5.x

2019-03-03 Thread Guillaume Pavese
Gluster 5.4 is released but not yet in the official repository.
If, like me, you cannot wait for the official release of Gluster 5.4 with
the instability bugfixes (hopefully planned for around March 12), you can
use the following repository:

For Gluster 5.4-1 :

#/etc/yum.repos.d/Gluster5-Testing.repo
[Gluster5-Testing]
name=Gluster5-Testing $basearch
baseurl=https://cbs.centos.org/repos/storage7-gluster-5-testing/os/$basearch/
enabled=1
#metadata_expire=60m
gpgcheck=0


If adventurous ;) Gluster 6-rc0:

#/etc/yum.repos.d/Gluster6-Testing.repo
[Gluster6-Testing]
name=Gluster6-Testing $basearch
baseurl=https://cbs.centos.org/repos/storage7-gluster-6-testing/os/$basearch/
enabled=1
#metadata_expire=60m
gpgcheck=0


GLHF
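Putting the repo file above in place and pulling the packages can be sketched as follows (assumptions: a CentOS 7 host using yum, run as root; the demo writes to /tmp first so the file can be checked unprivileged before copying it into /etc/yum.repos.d):

```shell
# Write the testing repo file from the message above ('EOF' quoted so
# $basearch stays literal for yum to expand)
cat > /tmp/Gluster5-Testing.repo <<'EOF'
[Gluster5-Testing]
name=Gluster5-Testing $basearch
baseurl=https://cbs.centos.org/repos/storage7-gluster-5-testing/os/$basearch/
enabled=1
gpgcheck=0
EOF

# Sanity-check the file before installing it
grep -q '^baseurl=https' /tmp/Gluster5-Testing.repo && echo "repo file ok"

# Then, as root:
#   cp /tmp/Gluster5-Testing.repo /etc/yum.repos.d/
#   yum clean metadata
#   yum --enablerepo=Gluster5-Testing update glusterfs\*
```

Note that gpgcheck=0 means the testing packages are installed unverified, which is another reason to treat this as a stopgap until 5.4 lands in the official repository.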

Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group


On Sun, Mar 3, 2019 at 6:16 AM Endre Karlson 
wrote:

> Hi, should we downgrade / reinstall our cluster? We have a 4 node cluster
> that's breaking apart daily due to the issues with GlusterFS after upgrading
> from 4.2.8, which was rock solid. I am wondering why 4.3 was released as a
> stable version at all?? **FRUSTRATION**
>
> Endre
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3TJKJGGWCANXWZED2WF5ZHTSRS2DVHR2/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FGTDLXVU5AHAQUWT4HT5HNMR7HBYNKKU/