[ovirt-users] After Upgrade OVirt 4.4.9 > Version 4.4.10.7-1.el8 VM Kernel Crashes after migration

2023-01-19 Thread Ralf Schenk
[164024.594544]  secondary_startup_64+0xa4/0xb0

Jan 19 19:28:51 myvmXX systemd-sysctl[413]: Not setting
net/ipv4/conf/all/promote_secondaries (explicit setting exists).


--
Databay AG Logo

*Ralf Schenk
*
fon:+49 2405 40837-0
mail:   r...@databay.de
web:www.databay.de <https://www.databay.de>

Databay AG
Jens-Otto-Krag-Str. 11
52146 Würselen



Sitz/Amtsgericht Aachen • HRB: 8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm. 
Philipp Hermanns

Aufsichtsratsvorsitzender: Dr. Jan Scholzen

Datenschutzhinweise für Kunden: Hier nachlesen 
<https://www.databay.de/datenschutzhinweise-fuer-kunden>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DJZOXN45TTIJWC3XBGSEGY5UQR62BYEF/


[ovirt-users] Re: Not able to log in as admin after successful deployment of hosted engine (OVirt 4.5.1)

2022-07-15 Thread Ralf Schenk

Hello list,

"admin@localhost" did the trick That was frustrating but searching 
through the list helped !
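
For reference, a minimal sketch for double-checking the internal admin
account from the engine VM (assuming the default aaa-jdbc extension;
exact file and property names may differ on your install):

# list the configured authentication profile names
grep -h "ovirt.engine.aaa.authn.profile.name" /etc/ovirt-engine/extensions.d/*.properties

# reset the admin password and make sure the account is usable
ovirt-aaa-jdbc-tool user password-reset admin --password-valid-to="2030-01-01 00:00:00Z"
ovirt-aaa-jdbc-tool user unlock admin

The portal login is then admin@<profile name>, which here turned out to
be "admin@localhost" instead of the usual "admin@internal".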


Bye

On 15.07.2022 at 21:26, Ralf Schenk wrote:


Hello List,

I successfully deployed a fresh hosted-engine, but I'm not able to 
log in to the Administration Portal. I'm perfectly sure about the password 
I had to type multiple times.


I'm running ovirt-node-ng-4.5.1-0.20220622.0 and deployed engine via 
cli-based ovirt-hosted-engine-setup.


Neither "admin" nor "admin@internal" are working (A profile cannot be 
choosen as in earlier versions).


I can log in to the monitoring part (Grafana!) and also Cockpit, but 
not to the Administration Portal or the VM Portal.


I can SSH into the engine and look up the user database, which contains the user.

[root@engine02 ~]# ovirt-aaa-jdbc-tool query --what=user
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
-- User admin(2be16cf0-5eb7-4b0e-923e-7bdc7bc2aa6f) --
Namespace: *
Name: admin
ID: 2be16cf0-5eb7-4b0e-923e-7bdc7bc2aa6f
Display Name:
Email: root@localhost
First Name: admin
Last Name:
Department:
Title:
Description:
Account Disabled: false
Account Locked: false
Account Unlocked At: 1970-01-01 00:00:00Z
Account Valid From: 2022-07-15 18:23:47Z
Account Valid To: -07-15 18:23:47Z
Account Without Password: false
Last successful Login At: 1970-01-01 00:00:00Z
Last unsuccessful Login At: 1970-01-01 00:00:00Z
Password Valid To: -05-28 18:23:49Z

However, there are no groups by default?

[root@engine02 ~]# ovirt-aaa-jdbc-tool query --what=group
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false

Any solution? I don't want to repeat the hosted-engine deployment a 
fourth time after I mastered all the problems with NFS permissions, the GUI 
deployment not accepting my perfectly fine bond called "bond0", 
etc.


Bye

--
Databay AG Logo

*Ralf Schenk
*
fon:02405 / 40 83 70
mail:   r...@databay.de
web:www.databay.de <https://www.databay.de>

Databay AG
Jens-Otto-Krag-Str. 11
52146 Würselen



Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.Kfm. 
Philipp Hermanns

Aufsichtsratsvorsitzender: Dr. Jan Scholzen


___
Users mailing list --users@ovirt.org
To unsubscribe send an email tousers-le...@ovirt.org
Privacy Statement:https://www.ovirt.org/privacy-policy.html
oVirt Code of 
Conduct:https://www.ovirt.org/community/about/community-guidelines/
List 
Archives:https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZU2X36SQEBD5WIC7S6X4F6LPJ2XZ4WRK/

--
Databay AG Logo

*Ralf Schenk
*
fon:02405 / 40 83 70
mail:   r...@databay.de
web:www.databay.de <https://www.databay.de>

Databay AG
Jens-Otto-Krag-Str. 11
52146 Würselen



Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.Kfm. 
Philipp Hermanns

Aufsichtsratsvorsitzender: Dr. Jan Scholzen
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q2BGCWEQMP67HMUUARGO43N2VIY7KHHW/


[ovirt-users] Not able to log in as admin after successful deployment of hosted engine (OVirt 4.5.1)

2022-07-15 Thread Ralf Schenk

Hello List,

I successfully deployed a fresh hosted-engine, but I'm not able to log in 
to the Administration Portal. I'm perfectly sure about the password I had 
to type multiple times.


I'm running ovirt-node-ng-4.5.1-0.20220622.0 and deployed engine via 
cli-based ovirt-hosted-engine-setup.


Neither "admin" nor "admin@internal" are working (A profile cannot be 
choosen as in earlier versions).


I can log in to the monitoring part (Grafana!) and also Cockpit, but not 
to the Administration Portal or the VM Portal.


I can SSH into the engine and look up the user database, which contains the user.

[root@engine02 ~]# ovirt-aaa-jdbc-tool query --what=user
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
-- User admin(2be16cf0-5eb7-4b0e-923e-7bdc7bc2aa6f) --
Namespace: *
Name: admin
ID: 2be16cf0-5eb7-4b0e-923e-7bdc7bc2aa6f
Display Name:
Email: root@localhost
First Name: admin
Last Name:
Department:
Title:
Description:
Account Disabled: false
Account Locked: false
Account Unlocked At: 1970-01-01 00:00:00Z
Account Valid From: 2022-07-15 18:23:47Z
Account Valid To: -07-15 18:23:47Z
Account Without Password: false
Last successful Login At: 1970-01-01 00:00:00Z
Last unsuccessful Login At: 1970-01-01 00:00:00Z
Password Valid To: -05-28 18:23:49Z

However, there are no groups by default?

[root@engine02 ~]# ovirt-aaa-jdbc-tool query --what=group
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false

Any solution? I don't want to repeat the hosted-engine deployment a 
fourth time after I mastered all the problems with NFS permissions, the GUI 
deployment not accepting my perfectly fine bond called "bond0", 
etc.


Bye

--
Databay AG Logo

*Ralf Schenk
*
fon:02405 / 40 83 70
mail:   r...@databay.de
web:www.databay.de <https://www.databay.de>

Databay AG
Jens-Otto-Krag-Str. 11
52146 Würselen



Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.Kfm. 
Philipp Hermanns

Aufsichtsratsvorsitzender: Dr. Jan Scholzen
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZU2X36SQEBD5WIC7S6X4F6LPJ2XZ4WRK/


[ovirt-users] Hyperconverged Ovirt 4.3.10.4: Unresponsive Host

2022-06-06 Thread ralf
7T05:02:13 GMT', 
'cpuUser': '0.27', 'memFree': 126719, 'cpuIdle': '99.55', 'vmActive': 0, 
'v2vJobs': {}, 'cpuSysVdsmd': '0.13'}} from=::1,54704 (api:54)
2022-06-07 05:02:13,726+ INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call 
Host.getStats succeeded in 0.03 seconds (__init__:312)
2022-06-07 05:02:13,973+ INFO  (vmrecovery) [vdsm.api] START 
getConnectedStoragePoolsList(options=None) from=internal, 
task_id=65629949-c74d-4f66-846a-1e2c40be9157 (api:48)
2022-06-07 05:02:13,973+ INFO  (vmrecovery) [vdsm.api] FINISH 
getConnectedStoragePoolsList return={'poollist': []} from=internal, 
task_id=65629949-c74d-4f66-846a-1e2c40be9157 (api:54)
2022-06-07 05:02:13,973+ INFO  (vmrecovery) [vds] recovery: waiting for 
storage pool to go up (clientIF:723)
2022-06-07 05:02:14,468+ INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call 
Host.ping2 succeeded in 0.00 seconds (__init__:312)
2022-06-07 05:02:14,471+ INFO  (jsonrpc/5) [vdsm.api] START 
repoStats(domains=[u'5d6276cc-08ab-47f6-81e7-4e64aac3d386']) from=::1,54704, 
task_id=6918f715-bbf4-4c7f-af00-1764294ab665 (api:48)
2022-06-07 05:02:14,472+ INFO  (jsonrpc/5) [vdsm.api] FINISH repoStats 
return={} from=::1,54704, task_id=6918f715-bbf4-4c7f-af00-1764294ab665 (api:54)
2022-06-07 05:02:14,472+ INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call 
Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:312)

Gluster volumes are all online.

systemctl status ovirt-ha-broker:
Jun 06 08:47:08 ovirt3.os-s.de systemd[1]: Started oVirt Hosted Engine High 
Availability Communications Broker.
Jun 06 08:47:59 ovirt3.os-s.de ovirt-ha-broker[1524]: ovirt-ha-broker 
mgmt_bridge.MgmtBridge ERROR Failed to getVdsStats: No 'network' in result
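
For what it's worth, a minimal set of checks from the affected host (a
sketch, assuming vdsm-client is installed and the management bridge has
the default name ovirtmgmt):

# does VDSM report a 'network' section in its stats at all?
vdsm-client Host getStats | grep -A3 network

# is the management bridge present and up?
ip -br link show ovirtmgmt

# overall hosted-engine view from the host
hosted-engine --vm-status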

Any ideas? 

Ralf
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XT5WJCZKLIEDNI65IDTBY56VO7X7JOLT/


[ovirt-users] Move self hosted engine to a different gluster volume

2020-12-17 Thread ralf
Hi,
I apparently successfully upgraded a hyperconverged self-hosted setup from 4.3 
to 4.4. During this process the self-hosted engine required a new gluster volume 
(/engine-new). I used temporary storage for that. Is it possible to move the 
SHE back to the original volume (/engine)?
What steps would be needed? Could I just do:
1. global maintenance
2. stop engine and SHE guest
3. copy all files from glusterfs /engine-new to /engine
4. use hosted-engine --set-shared-config storage :/engine
hosted-engine --set-shared-config mnt_options
backup-volfile-servers=:
5. disable maintenance
Or are additional steps required?
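
For reference, a rough sketch of how steps 1-5 could map to commands
(host names and mount points are placeholders; not verified, and
additional steps may well be needed):

# 1. global maintenance
hosted-engine --set-maintenance --mode=global
# 2. shut down the engine VM
hosted-engine --vm-shutdown
# 3. copy the files on the gluster level
mount -t glusterfs <host>:/engine-new /mnt/src
mount -t glusterfs <host>:/engine /mnt/dst
cp -a /mnt/src/. /mnt/dst/
# 4. point the HA configuration at the old volume
hosted-engine --set-shared-config storage <host>:/engine
hosted-engine --set-shared-config mnt_options backup-volfile-servers=<host2>:<host3>
# 5. leave maintenance again
hosted-engine --set-maintenance --mode=none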

Kind regards,
Ralf
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RLJP6VK2TND3QQBJR6K534AZ5XNHZTDG/


[ovirt-users] Re: Upgrade hyperconverged self-hosted ovirt from 4.3 to 4.4

2020-11-20 Thread ralf
Hi,
Today I tested the steps. It actually worked, but I needed to add a few things.

0. Gluster snapshots on all volumes 
I did not need that.
1. Set a node in maintenance
2. Create a full backup of the engine
3. Set global maintenance and power off the current engine
4. Backup all gluster config files
Backup /etc/glusterfs/ /var/lib/glusterd /etc/fstab and the directories in 
/gluster_bricks

5. Reinstall the node that was set to maintenance (step 1)
6. Install glusterfs, restore the configs from step 4
Modify the /etc/fstab, create directories in /gluster_bricks, mount the 
bricks 
I had to remove the lvm_filter to scan the logical volume group used for 
the bricks.
7. Restart glusterd and check that all bricks are up
I had to force the volumes to start
8. Wait for healing to end
9. Deploy the new HE on a new Gluster Volume, using the backup/restore 
procedure for HE
I could create a new gluster volume in the existing thin pool. During the 
deployment I specified the gluster volume:
station1:/engine-new
I used the option backup-volfile-servers=station2:station3

10. Add the other nodes from the oVirt cluster 
 Actually I did not need to add the nodes. They were directly available in 
the engine. 
11. Set EL7-based hosts to maintenance and power off
 My setup is based on oVirt Nodes. I put the first node in maintenance, 
created a backup of the gluster configuration and manually installed 
ovirt-node 4.4. I reinstalled the gluster setup and waited for the gluster to 
heal.
 I then copied the SSH key from a different node and reinstalled the node 
via the web interface. 
 I manually set the hosted-engine option to deploy during reinstallation.
 I repeated these steps for all hosts.
12. I put the old engine gluster domain in maintenance, detached it and removed 
the domain. 
 Then I could remove the old storage volume as well.

In the end I was running 4.4, migrations work.
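
A condensed sketch of the gluster backup/restore part (steps 4-7), with
example volume and brick names; adjust paths to your own layout:

# before reinstalling the node: save the gluster configuration
tar czf /root/gluster-config.tar.gz /etc/glusterfs /var/lib/glusterd /etc/fstab

# after reinstalling: restore the configuration and brick mounts
tar xzf /root/gluster-config.tar.gz -C /
mkdir -p /gluster_bricks/engine /gluster_bricks/data   # example brick directories
mount -a        # check /etc/lvm/lvm.conf filters if the bricks are on LVM
systemctl restart glusterd

# force-start the volumes if needed and wait for self-heal to finish
gluster volume start engine force      # "engine" is an example volume name
gluster volume heal engine info
gluster volume status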
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MQKE2VQU6EP42ZDAA2CUWSGJM3MXYLFV/


[ovirt-users] Re: Upgrade hyperconverged self-hosted ovirt from 4.3 to 4.4

2020-11-10 Thread ralf
Thanks a lot for the suggestions. I will try to follow your routine on a test 
setup next week.
Kind regards,

Ralf
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SO42UHYYO4EZZJCUO7M34EBQRKHLLXG2/


[ovirt-users] Re: Upgrade hyperconverged self-hosted ovirt from 4.3 to 4.4

2020-11-09 Thread ralf
I have managed to work around the hanging host by reducing the memory of the 
hosted engine during the deploy. But unfortunately the deploy still fails.
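
For completeness, this is roughly how the engine VM memory can be
lowered for the deployment; a sketch only - the answer-file key below is
taken from generated answers.conf files and may differ between versions:

# interactive: answer the "memory size of the VM in MB" prompt with a
# smaller value, or pre-seed it via an answer file:
cat > /root/he-small-mem.conf <<'EOF'
[environment:default]
OVEHOSTED_VM/vmMemSizeMB=str:8192
EOF
hosted-engine --deploy --config-append=/root/he-small-mem.conf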

There is no real error message in the deployment log:
2020-11-09 08:58:55,337+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.hosted_engine_setup : Wait for 
the host to be up]
2020-11-09 09:02:24,776+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 ok: [localhost]
2020-11-09 09:02:26,380+0100 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
TASK [ovirt.hosted_engine_setup : debug]
2020-11-09 09:02:27,883+0100 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
host_result_up_check: {'changed': False, 'ovirt_hosts': [{'href': 
'/ovirt-engine/api/hosts/9e504890-bcb8-40b1-813f-ee123547b3f9', 'comment': '', 
'id': '9e504890-bcb8-40b1-813f-ee123547b3f9', 'name': 'station5.example.com', 
'address': 'station5.example.com', 'affinity_labels': [], 'auto_numa_status': 
'unknown', 'certificate': {'organization': 'example.com', 'subject': 
'O=example.com,CN=station5.example.com'}, 'cluster': {'href': 
'/ovirt-engine/api/clusters/1e67ce6a-2011-11eb-8029-00163e28a2ed', 'id': 
'1e67ce6a-2011-11eb-8029-00163e28a2ed'}, 'cpu': {'name': 'Intel(R) Core(TM) 
i5-3470 CPU @ 3.20GHz', 'speed': 3554.0, 'topology': {'cores': 4, 'sockets': 1, 
'threads': 1}, 'type': 'Intel SandyBridge IBRS SSBD MDS Family'}, 
'device_passthrough': {'enabled': False}, 'devices': [], 
'external_network_provider_configurations': [], 'external_status': 'ok', 
'hardware_information': {'
 family': '103C_53307F G=D', 'manufacturer': 'Hewlett-Packard', 'product_name': 
'HP Compaq Pro 6300 SFF', 'serial_number': 'CZC41045SC', 
'supported_rng_sources': ['random', 'hwrng'], 'uuid': 
'F748FD00-9E43-11E3-9BDA-A0481C87CA32', 'version': ''}, 'hooks': [], 'iscsi': 
{'initiator': 'iqn.1994-05.com.redhat:a668abd829a1'}, 'katello_errata': [], 
'kdump_status': 'disabled', 'ksm': {'enabled': False}, 'libvirt_version': 
{'build': 0, 'full_version': 'libvirt-6.0.0-25.2.el8', 'major': 6, 'minor': 0, 
'revision': 0}, 'max_scheduling_memory': 16176381952, 'memory': 16512974848, 
'network_attachments': [], 'nics': [], 'numa_nodes': [], 'numa_supported': 
False, 'os': {'custom_kernel_cmdline': '', 'reported_kernel_cmdline': 
'BOOT_IMAGE=(hd0,msdos1)//ovirt-node-ng-4.4.2-0.20200918.0+1/vmlinuz-4.18.0-193.19.1.el8_2.x86_64
 crashkernel=auto resume=/dev/mapper/onn-swap 
rd.lvm.lv=onn/ovirt-node-ng-4.4.2-0.20200918.0+1 rd.lvm.lv=onn/swap rhgb quiet 
root=/dev/onn/ovirt-node-ng-4.4.2-0.20200918.0+1 boot=UU
 ID=78682d93-a122-4ea9-8593-224fa32b7ab4 rootflags=discard 
img.bootid=ovirt-node-ng-4.4.2-0.20200918.0+1', 'type': 'RHEL', 'version': 
{'full_version': '8 - 2.2004.0.1.el8', 'major': 8}}, 'permissions': [], 'port': 
54321, 'power_management': {'automatic_pm_enabled': True, 'enabled': False, 
'kdump_detection': True, 'pm_proxies': []}, 'protocol': 'stomp', 'se_linux': 
{'mode': 'enforcing'}, 'spm': {'priority': 5, 'status': 'none'}, 'ssh': 
{'fingerprint': 'SHA256:pUi4oFo/5DGLYbWN39rEiap3bUfVK1C/6OPEecf8GFg', 'port': 
22}, 'statistics': [], 'status': 'non_operational', 'status_detail': 
'storage_domain_unreachable', 'storage_connection_extensions': [], 'summary': 
{'active': 1, 'migrating': 0, 'total': 1}, 'tags': [], 
'transparent_huge_pages': {'enabled': True}, 'type': 'rhel', 
'unmanaged_networks': [], 'update_available': False, 'version': {'build': 26, 
'full_version': 'vdsm-4.40.26.3-1.el8', 'major': 4, 'minor': 40, 'revision': 
3}, 'vgpu_placement': 'consolidated'}], 'failed': False, 'attem
 pts': 21}
2020-11-09 09:02:29,386+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.hosted_engine_setup : Notify the 
user about a failure]
2020-11-09 09:02:30,891+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 skipping: [localhost]
2020-11-09 09:02:32,395+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.hosted_engine_setup : set_fact]
2020-11-09 09:02:33,800+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 ok: [localhost]
2020-11-09 09:02:35,404+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.hosted_engine_setup : Collect 
error events from the Engine]
2020-11-09 09:02:37,410+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 ok: [localhost]
2020-11-09 09:02:39,114+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.hosted_engine_setup : Generate 
the error message from the engine events]
2020-11-09 09:02:40,819+0100 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
{'msg': "The task includes 

[ovirt-users] Upgrade hyperconverged self-hosted ovirt from 4.3 to 4.4

2020-11-08 Thread ralf
Hi,
has anyone attempted an upgrade from 4.3 to 4.4 in a hyperconverged self-hosted 
setup?
The posted guidelines seem a bit contradictory and incomplete. 
Has anyone tried it and could share their experience? I am currently having 
problems when deploying the hosted engine and restoring. The host becomes 
unresponsive and has hung tasks.

Kind regards,

Ralf
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/R6C5DAKNT4ZA42FLC2YGYYUNQLXXHHHZ/


[ovirt-users] Re: oVirt on a dedicated server with a single NIC and IP

2019-12-22 Thread Ralf Schenk
Hello,

This is overkill. Use plain KVM, e.g. based on Ubuntu 18.04 LTS or the
latest CentOS, and, if you need a graphical user interface, virt-manager
(https://virt-manager.org/) to manage a few virtual domains on a single
server.
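
On a single dedicated server that usually boils down to something like
the following (a sketch, assuming an existing bridge br0 and a local
installer ISO):

virt-install \
  --name vm01 \
  --memory 4096 --vcpus 2 \
  --disk size=40 \
  --cdrom /var/lib/libvirt/images/ubuntu-18.04.iso \
  --network bridge=br0 \
  --os-variant ubuntu18.04 \
  --graphics vnc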

Bye

On 22.12.2019 at 12:20, alexcus...@gmail.com wrote:
> Hi, My goal is to set up the latest oVirt on a dedicated server (Hetzner) 
> with a single NIC and IP. Then using a Reverse Proxy and/or VPN (on the same 
> host) to manage connections to VMs. I expected I could add a subnet as an 
> alias to the main interface, or set it up on VLAN, or using vSwitch at the 
> last resort, but all my attempts failed. Some configurations work until the 
> first reboot, then VDSM screw up the main interface and I lost connection to 
> the server, or Hosted Engine doesn't start from global maintenance. I've run 
> out of good ideas already. I'm extremely sorry if this question already been 
> discussed somewhere, but I found nothing helpful. I would be very appreciated 
> for any hint on how to achieve this or at least a confirmation that the 
> desired configuration is viable.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5JFQH7BY5P2WREW4ALZAF3ITK5HICEUS/
-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>
    
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/W5BDBBG4GJ3M66FTRTQMEETFLVY7I52Q/


[ovirt-users] Re: HCL: 4.3.7: Hosted engine fails

2019-12-12 Thread Ralf Schenk
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MXW3L2HAXJCYYQZDPKKNAC7GQPUGBQHA/
-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>
    
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4F37WXTAQCG4PCUA3U74Q7UQB4RVQJRC/


[ovirt-users] Re: HostedEngine Deployment fails on AMD EPYC 7402P 4.3.7

2019-11-28 Thread Ralf Schenk
Hello,

I did something like that via "virsh edit HostedEngine".

But how is the change written back to the shared storage
("hosted_storage"), so it stays permanent for the HA engine?

I was able to boot up the HostedEngine manually via virsh start after
removing the "require" flag from the XML (I first added a user to the sasldb in
/etc/libvirt/passwd.db to be able to log into libvirt).
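
The manual workaround, roughly (a sketch; the SASL user name is only an
example):

# allow virsh write access (creates an entry in /etc/libvirt/passwd.db)
saslpasswd2 -a libvirt admin@ovirt

# dump, edit and redefine the VM
virsh -c qemu:///system dumpxml HostedEngine > /root/HostedEngine.xml
# ... remove the line requiring the "virt-ssbd" feature ...
virsh -c qemu:///system undefine HostedEngine
virsh -c qemu:///system define /root/HostedEngine.xml
virsh -c qemu:///system start HostedEngine

As far as I know this stays local to the host: ovirt-ha-agent recreates
the VM definition from the OVF store on the shared storage domain, which
is exactly the open question above.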

Bye


On 28.11.2019 at 05:51, Strahil wrote:
>
> Hi Ralf,
> When the deployment fails, you can dump the XML from virsh, edit it,
> undefine the current HostedEngine and define your modified
> HostedEngine's XML.
>
> Once you do that, you can try to start the
> VM.
>
> Good luck.
>
> Best Regards,
> Strahil Nikolov
>
> On Nov 27, 2019 18:28, Ralf Schenk  wrote:
>
> Hello,
>
> This week I tried to deploy Hosted Engine on Ovirt-Node-NG 4.3.7
> based Host.
>
> At the time the locally deployed engine is copied to
> hosted-storage (in my case NFS) and deployment tries to start the
> engine (via ovirt-ha-agent), this fails.
>
> QEMU log (/var/log/libvirt/qemu/HostedEngine.log) only shows
> "2019-11-27 16:17:16.833+: shutting down, reason=failed".
>
> Researching it, the cause is: the built libvirt VM XML includes the
> feature "virt-ssbd" as a requirement, which is simply not there.
>
> From VM XML:
>
>   [VM XML snippet: a <cpu> definition with model EPYC and a required
>   "virt-ssbd" feature; the XML tags were stripped by the mail archive]
>
>
> from cat /proc/cpuinfo:
>
> processor   : 47
> vendor_id   : AuthenticAMD
> cpu family  : 23
> model   : 49
> model name  : AMD EPYC 7402P 24-Core Processor
> stepping    : 0
> microcode   : 0x830101c
> cpu MHz : 2800.000
> cache size  : 512 KB
> physical id : 0
> siblings    : 48
> core id : 30
> cpu cores   : 24
> apicid  : 61
> initial apicid  : 61
> fpu : yes
> fpu_exception   : yes
> cpuid level : 16
> wp  : yes
> flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr
> pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx
> mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl
> xtopology nonstop_tsc extd_apicid aperfmperf eagerfpu pni
> pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes
> xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy
> abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce
> topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb cat_l3 cdp_l3
> hw_pstate sme retpoline_amd ssbd ibrs ibpb stibp vmmcall fsgsbase
> bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb
> sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total
> cqm_mbm_local clzero irperf xsaveerptr arat npt lbrv svm_lock
> nrip_save tsc_scale vmcb_clean flushbyasid decodeassists
> pausefilter pfthreshold avic v_vmsave_vmload vgif umip
> overflow_recov succor smca
> bogomips    : 5600.12
> TLB size    : 3072 4K pages
> clflush size    : 64
> cache_alignment : 64
> address sizes   : 43 bits physical, 48 bits virtual
> power management: ts ttp tm hwpstate cpb eff_freq_ro [13] [14]
>
> Any solution/workaround available ?
>
-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>
    
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I3ZSUNSVA3RVQRYKBFEJOGB4DZC5NUYB/


[ovirt-users] HostedEngine Deployment fails on AMD EPYC 7402P 4.3.7

2019-11-27 Thread Ralf Schenk
Hello,

This week I tried to deploy Hosted Engine on Ovirt-Node-NG 4.3.7 based Host.

At the time the locally deployed engine is copied to hosted-storage (in
my case NFS) and deployment tries to start the engine (via
ovirt-ha-agent), this fails.

QEMU log (/var/log/libvirt/qemu/HostedEngine.log) only shows
"2019-11-27 16:17:16.833+: shutting down, reason=failed".

Researching it, the cause is: the built libvirt VM XML includes the feature
"virt-ssbd" as a requirement, which is simply not there.

From the VM XML:

  
  [VM XML snippet: a <cpu> definition with model EPYC and a required
  "virt-ssbd" feature; the XML tags were stripped by the mail archive]

from cat /proc/cpuinfo:

processor   : 47
vendor_id   : AuthenticAMD
cpu family  : 23
model   : 49
model name  : AMD EPYC 7402P 24-Core Processor
stepping    : 0
microcode   : 0x830101c
cpu MHz : 2800.000
cache size  : 512 KB
physical id : 0
siblings    : 48
core id : 30
cpu cores   : 24
apicid  : 61
initial apicid  : 61
fpu : yes
fpu_exception   : yes
cpuid level : 16
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext
fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl xtopology
nonstop_tsc extd_apicid aperfmperf eagerfpu pni pclmulqdq monitor ssse3
fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm
cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch
osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2
cpb cat_l3 cdp_l3 hw_pstate sme retpoline_amd ssbd ibrs ibpb stibp
vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap
clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc
cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr arat npt lbrv
svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists
pausefilter pfthreshold avic v_vmsave_vmload vgif umip overflow_recov
succor smca
bogomips    : 5600.12
TLB size    : 3072 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 43 bits physical, 48 bits virtual
power management: ts ttp tm hwpstate cpb eff_freq_ro [13] [14]

Any solution/workaround available ?

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>
    
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EX2IOKWLHBXW6IP4TDTCRET673L7LNTZ/


[ovirt-users] numa pinning and reserved hugepages (1G) Bug in scheduler calculation or decision ?

2019-08-21 Thread Ralf Schenk
Hello List,

I ran into problems using NUMA pinning and reserved hugepages.

- My EPYC 7281 based servers (dual socket) have 8 NUMA nodes, each with
32 GB of memory, for a total of 256 GB of system memory.

- I'm using 192 x 1 GB hugepages reserved on the kernel cmdline
(default_hugepagesz=1G hugepagesz=1G hugepages=192). This reserves 24
hugepages on each NUMA node.

I wanted to pin a MariaDB VM using 32 GB (custom property
hugepages=1048576) to NUMA nodes 0-3 of CPU socket 1. Pinning in the GUI
etc. was no problem.

When trying to start the VM this can't be done, since oVirt claims that
the host can't fulfill the memory requirements - which is simply not
correct, since there were > 164 hugepages free.

It should have taken 8 hugepages from each NUMA node 0-3 to fulfill the
32 GB memory requirement.

I also freed the system completely of other VMs, but that didn't work
either.

Is it possible that the scheduler only takes into account the "free
memory" (as seen in numactl -H below) *not reserved* by hugepages for
its decisions? Since the host has only < 8 GB of free memory per NUMA
node, I can understand that the VM was not able to start under that
condition.
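
For reference, the per-node reservation can be inspected (and changed at
runtime) through sysfs, which is where the scheduler would have to look
(a sketch):

# reserved and free 1G pages per NUMA node
grep "" /sys/devices/system/node/node[0-3]/hugepages/hugepages-1048576kB/nr_hugepages
grep "" /sys/devices/system/node/node[0-3]/hugepages/hugepages-1048576kB/free_hugepages

# example: re-reserve 24 pages on node 0 at runtime
echo 24 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages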

The VM is running and using 32 hugepages without pinning, but a warning states
"VM dbserver01b does not fit to a single NUMA node on host
myhost.mydomain.de. This may negatively impact its performance. Consider
using vNUMA and NUMA pinning for this VM."

This is the NUMA hardware layout and hugepage usage now, with other VMs
running:

from cat /proc/meminfo

HugePages_Total: 192
HugePages_Free:  160
HugePages_Rsvd:    0
HugePages_Surp:    0

I can confirm that, also with other VMs running, there are at least 8
hugepages free on each of NUMA nodes 0-3:

grep ""
/sys/devices/system/node/*/hugepages/hugepages-1048576kB/free_hugepages
/sys/devices/system/node/node0/hugepages/hugepages-1048576kB/free_hugepages:8
/sys/devices/system/node/node1/hugepages/hugepages-1048576kB/free_hugepages:23
/sys/devices/system/node/node2/hugepages/hugepages-1048576kB/free_hugepages:20
/sys/devices/system/node/node3/hugepages/hugepages-1048576kB/free_hugepages:22
/sys/devices/system/node/node4/hugepages/hugepages-1048576kB/free_hugepages:16
/sys/devices/system/node/node5/hugepages/hugepages-1048576kB/free_hugepages:5
/sys/devices/system/node/node6/hugepages/hugepages-1048576kB/free_hugepages:19
/sys/devices/system/node/node7/hugepages/hugepages-1048576kB/free_hugepages:24

numactl -H:

available: 8 nodes (0-7)
node 0 cpus: 0 1 2 3 32 33 34 35
node 0 size: 32673 MB
node 0 free: 3779 MB
node 1 cpus: 4 5 6 7 36 37 38 39
node 1 size: 32767 MB
node 1 free: 6162 MB
node 2 cpus: 8 9 10 11 40 41 42 43
node 2 size: 32767 MB
node 2 free: 6698 MB
node 3 cpus: 12 13 14 15 44 45 46 47
node 3 size: 32767 MB
node 3 free: 1589 MB
node 4 cpus: 16 17 18 19 48 49 50 51
node 4 size: 32767 MB
node 4 free: 2630 MB
node 5 cpus: 20 21 22 23 52 53 54 55
node 5 size: 32767 MB
node 5 free: 2487 MB
node 6 cpus: 24 25 26 27 56 57 58 59
node 6 size: 32767 MB
node 6 free: 3279 MB
node 7 cpus: 28 29 30 31 60 61 62 63
node 7 size: 32767 MB
node 7 free: 5513 MB
node distances:
node   0   1   2   3   4   5   6   7
  0:  10  16  16  16  32  32  32  32
  1:  16  10  16  16  32  32  32  32
  2:  16  16  10  16  32  32  32  32
  3:  16  16  16  10  32  32  32  32
  4:  32  32  32  32  10  16  16  16
  5:  32  32  32  32  16  10  16  16
  6:  32  32  32  32  16  16  10  16
  7:  32  32  32  32  16  16  16  10

--


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>
    
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GXIWD6B6E3UMWARXARUDW6TIN6WWYQFU/


[ovirt-users] Either allow 2 CD-ROMs or selectable *.vfd Floppy Images from Storage via Run Once other than from deprecated ISO-Storage

2019-08-12 Thread Ralf Schenk
Hello,

when installing Windows VMs on oVirt we need two CD-ROMs attached as
ISO files (the installer ISO and the virtio-win ISO) to be able to
install to virtio-(SCSI) disks.

In oVirt 4.3.4 it is not possible to attach two CD-ROMs to a VM, so we
have to use attached floppy images (virtio-win-*.vfd) to install the
drivers within the installer.

We need to use "Run Once" to attach floppy disks. Only *.vfd files
located on an ISO storage domain are selectable, and that domain type
will be deprecated sooner or later.

-> We won't be able to install Windows VMs from unmodified installer
ISOs without an ISO storage domain, or without making *.vfd files
selectable via "Run Once".

When will that be available?

Bye

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>
    
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3G5BEWATSBZCUKLPS5ZNOAFDHIVNAYQV/


[ovirt-users] Re: Does Ovirt 4.3.4 have support for NFS 4/4.1/4.2 or pNFS

2019-07-13 Thread Ralf Schenk
Hello,

I used oVirt with Ganesha and pNFS (on Gluster not managed by oVirt).

I'm now using storage based on Linux with ZFS and kernel NFS, using NFS
version 4.2. Thin provisioning and trim on VM disks work fine, so that
VM disks (sparse files on NFS) shrink when trimming inside the VM. You have
to check "Enable Discard" on the VM disks. Disks are created in a few
seconds, as stated.
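
A quick way to see the effect (a sketch; the image path on the NFS
server is only an example):

# inside the guest, with "Enable Discard" checked on the virtual disk
fstrim -av

# on the NFS server: allocated size should drop, apparent size stays the same
du -h /srv/nfs/ovirt/data/<image-uuid>
du -h --apparent-size /srv/nfs/ovirt/data/<image-uuid>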

Bye

On 13.07.2019 at 14:52, Nir Soffer wrote:
>
>
> On Sat, Jul 13, 2019, 00:30 Erick Perez  <mailto:epe...@quadrianweb.com>> wrote:
>
> I have read the archives and the most recent discussion was 5
> years ago. So I better ask again.
> My NAS runs Centos with NFS4.2 (and I am testing Ganesha in
> another server)
> Does Ovirt 4.3.4 have support for NFS 4/4.1/4.2 or pNFS
>
>
> We support NFS 4.2 for a while (4.2 or 4.3), but the default is still
> auto (typically NFS 3).
>
> Specially version 4.2 due to:
> Server-Side Copy: NFSv4.2 supports copy_file_range() system call,
> which allows the NFS client to efficiently copy data without
> wasting network resources.
>
>
> qemu support copy_file_range since RHEL 7.6, but I think it has some
> issues and does not perform well, and we don't enable this mode yet.
>
> What work with NFS 4.2 is sparseness and fallocate support, speeding
> copy/move/clone/upload of spares disks.
>
> For example creating 100g preallocated image would take less than a
> second.
>
>
> But this will only happen if the Ovirt (which I know is Centos
> based) supports NFS 4.2.
> Not sure If I update the NFS toolset on the Ovirt install, it will
> break something or worst.
>
>
> It should work and much better.
>
> Nir
>
> ___
> Users mailing list -- users@ovirt.org <mailto:users@ovirt.org>
> To unsubscribe send an email to users-le...@ovirt.org
> <mailto:users-le...@ovirt.org>
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q4YOSOY4ZF2D6YTIORIPMUD6YJNACVB3/
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/YOBLWT6H4THVXOY2PXSDYGWXU6UYJWIM/
-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>
    
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TGFPTVLD3VGJ6VX6ZNQDCIXZCMYT7FYX/


[ovirt-users] Re: Hosted Engine Deploy - Error at the end... [NFS ?]

2019-07-03 Thread Ralf Schenk
Hello,

that's not sufficient, since later libvirt tries to start up the
hosted engine via qemu, and that might fail because it runs as user
qemu, which also needs to read and write the files on the hosted_storage.

I always change /etc/libvirt/qemu.conf to user="vdsm", since I can't get
any group mapping via group vdsm to succeed on my storage.

So you should also try:

sudo -u qemu mkdir test
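
The usual way to make both vdsm and qemu happy on the NFS side is to
hand the export to UID/GID 36 (vdsm:kvm); a sketch, with an example
export path:

# on the NFS server
chown 36:36 /exports/nfs_ovirt_engine
chmod 0755 /exports/nfs_ovirt_engine

# /etc/exports: all_squash + anonuid/anongid force vdsm:kvm for every client user
# /exports/nfs_ovirt_engine *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
exportfs -ra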


On 03.07.2019 at 12:48, csi-la...@cisco.com wrote:
> Yes, as I wrote at the start, I'm able to mount NFS manually.
> Here is the results of the command you gave. [i did "cd" to the nfs share of 
> course] :
>
> [root@luli-ovirt-01 nfs_ovirt_engine]# sudo -u vdsm mkdir test
> [root@luli-ovirt-01 nfs_ovirt_engine]# ls
> test
> [root@luli-ovirt-01 nfs_ovirt_engine]#
>
> it's working manually 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FI44SCRS7CLG7UB2X4RZ4PIWBUXRPJW5/
-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>
    
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XI677MXRDT4MMRSK536O373ROYWUVBEP/


[ovirt-users] Re: Mix CPU

2019-07-01 Thread Ralf Schenk
Hello,

you cannot mix them in one cluster. I have 2 EPYC hosts and 8 Intel hosts
managed by one HostedEngine VM. I cannot live-migrate VMs between the two,
but I can configure the VMs to run on either cluster, using the same
storage.

Bye

On 01.07.2019 at 13:39, supo...@logicworks.pt wrote:
> Hello,
>
> Can we mix an Intel cpu with an AMD cpu in the same cluster?
>
> Thanks
>
> -- 
> 
> Jose Ferradeira
> http://www.logicworks.pt
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/EQN7SEWJ6NXCWTVX6OESLYND4ZSDQCEZ/
-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>
    
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VKL6QWK7JS5N5CW76MJENKLZLV2SYYFT/


[ovirt-users] Re: HostedEngine Deployment fails activating NFS Storage Domain hosted_storage via GUI Step 4

2019-04-30 Thread Ralf Schenk
Hello,

that is definitely not my problem. I did a completely new deployment (after
rebooting the host).

Before deploying on my storage:
root@storage-rx:/srv/nfs/ovirt/hosted_storage# ls -al
total 17
drwxrwxr-x 2 vdsm vdsm 2 Apr 30 13:53 .
drwxr-xr-x 8 root root 8 Apr  2 18:02 ..

While deploying in late stage:
root@storage-rx:/srv/nfs/ovirt/hosted_storage# ls -al
total 18
drwxrwxr-x 3 vdsm vdsm 4 Apr 30 14:51 .
drwxr-xr-x 8 root root 8 Apr  2 18:02 ..
drwxr-xr-x 4 vdsm vdsm 4 Apr 30 14:51 d26e4a31-8d73-449d-bebc-f2ce7a979e5d
-rwxr-xr-x 1 vdsm vdsm 0 Apr 30 14:51 __DIRECT_IO_TEST__

Immediately the error occurs in the GUI:
[ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
"[]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
"Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP
response code is 400."}
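
Between attempts the state can be reset on both sides (a sketch; the
export path is the one shown above):

# on the host: wipe the leftovers of the failed deployment
/usr/sbin/ovirt-hosted-engine-cleanup

# on the NFS server: empty the export again (including __DIRECT_IO_TEST__)
rm -rf /srv/nfs/ovirt/hosted_storage/*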


On 30.04.2019 at 13:48, Simone Tiraboschi wrote:
>
>
> On Tue, Apr 30, 2019 at 1:35 PM Ralf Schenk  <mailto:r...@databay.de>> wrote:
>
> Hello,
>
> I'm deploying HostedEngine to an NFS storage. HostedEngineLocal is
> set up and running already. But step 4 (moving to the hosted_storage
> domain on NFS) fails. The host is Node-NG 4.3.3.1 based.
>
> The intended NFS domain gets mounted on the host, but activation (I
> think via the Engine API) fails:
>
> [ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail
> is "[]". HTTP response code is 400.
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
> "Fault reason is \"Operation Failed\". Fault detail is \"[]\".
> HTTP response code is 400."}
>
> mount in host shows:
>
> storage.rxmgmt.databay.de:/ovirt/hosted_storage on
> /rhev/data-center/mnt/storage.rxmgmt.databay.de:_ovirt_hosted__storage
> type nfs4
> 
> (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=172.16.252.231,local_lock=none,addr=172.16.252.3)
>
> I also sshd into the locally running engine vi 192.168.122.XX and
> the VM can mount the storage domain, too:
>
> [root@engine01 ~]# mount
> storage.rxmgmt.databay.de:/ovirt/hosted_storage /mnt/ -o vers=4.1
> [root@engine01 ~]# mount | grep nfs
> sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
> storage.rxmgmt.databay.de:/ovirt/hosted_storage on /mnt type nfs4
> 
> (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.122.71,local_lock=none,addr=172.16.252.3)
> [root@engine01 ~]# ls -al /mnt/
> total 18
> drwxrwxr-x.  3 vdsm kvm    4 Apr 30 12:59 .
> dr-xr-xr-x. 17 root root 224 Apr 16 14:31 ..
> drwxr-xr-x.  4 vdsm kvm    4 Apr 30 12:40
> 4dc42146-b3fb-47ec-bf06-8d9bf7cdf893
> -rwxr-xr-x.  1 vdsm kvm    0 Apr 30 12:55 __DIRECT_IO_TEST__
>
> Anything I can do ?
>
>
> 99% that folder was dirty (it already contained something) when you
> started the deployment.
> I can only suggest to clean that folder and start from scratch.
>  
>
> Log-Extract of
> ovirt-hosted-engine-setup-ansible-create_storage_domain included.
>
>
>
> -- 
>
>
> *Ralf Schenk*
>     fon +49 (0) 24 05 / 40 83 70
> fax +49 (0) 24 05 / 40 83 759
> mail *r...@databay.de* <mailto:r...@databay.de>
>       
> *Databay AG*
> Jens-Otto-Krag-Straße 11
> D-52146 Würselen
> *www.databay.de* <http://www.databay.de>
>
> Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
> Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari,
> Dipl.-Kfm. Philipp Hermanns
> Aufsichtsratsvorsitzender: Wilhelm Dohmen
>
> 
> ___
> Users mailing list -- users@ovirt.org <mailto:users@ovirt.org>
> To unsubscribe send an email to users-le...@ovirt.org
> <mailto:users-le...@ovirt.org>
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
>     List Archives:
> 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/37WY4NUSJYMA7PMZWYSU5KCMF

[ovirt-users] Re: Ovirt Cluster completely unstable

2019-02-14 Thread Ralf Schenk
Hello,

my problems on gluster started with 4.2.6 or 4.2.7, around the end of
September. I still have VMs paused every other day, and they are
reactivated either by HA or manually. So I want to confirm your
experiences. Even though I'm using bonded network connections, there are
communication problems without heavy load or other tasks running.

My new cluster on EPYC hardware, running on NFS 4.2 storage volumes based
on ZFS, runs rock solid and VMs are much faster regarding I/O. Gluster
3.12.5 sucks!

Bye

On 14.02.2019 at 07:52, Jayme wrote:
> I have a three node HCI gluster which was previously running 4.2 with
> zero problems.  I just upgraded it yesterday.  I ran in to a few bugs
> right away with the upgrade process, but aside from that I also
> discovered other users with severe GlusterFS problems since the
> upgrade to new GlusterFS version.  It is less than 24 hours since I
> upgrade my cluster and I just got a notice that one of my GlusterFS
> bricks is offline.  There does appear to be a very real and serious
> issue here with the latest updates.
>
>
> On Wed, Feb 13, 2019 at 7:26 PM  <mailto:dsc...@umbctraining.com>> wrote:
>
> I'm abandoning my production ovirt cluster due to instability.   I
> have a 7 host cluster running about 300 vms and have been for over
> a year.  It has become unstable over the past three days.  I have
> random hosts both, compute and storage disconnecting.  AND many
> vms disconnecting and becoming unusable.
>
> 7 host are 4 compute hosts running Ovirt 4.2.8 and three glusterfs
> hosts running 3.12.5.  I submitted a bugzilla bug and they
> immediately assigned it to the storage people but have not
> responded with any meaningful information.  I have submitted
> several logs. 
>
> I have found some discussion on problems with instability with
> gluster 3.12.5.  I would be willing to upgrade my gluster to a
> more stable version if that's the culprit.  I installed gluster
> using the ovirt gui and this is the version the ovirt gui installed.
>
> Is there an ovirt health monitor available?  Where should I be
> looking to get a resolution the problems I'm facing.
> ___
> Users mailing list -- users@ovirt.org <mailto:users@ovirt.org>
> To unsubscribe send an email to users-le...@ovirt.org
> <mailto:users-le...@ovirt.org>
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BL4M3JQA3IEXCQUY4IGQXOAALRUQ7TVB/
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QULCBXHTKSCPKH4UV6GLMOLJE6J7M5UW/
-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>
        
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RLO37RGAEVJHRQW3YJT4LE5I7X52CUPU/


[ovirt-users] Re: ISCSI Domain & LVM

2019-01-09 Thread Ralf Schenk
Hello,

Thanks for your help. I implemented your suggestions regarding the
multipath config ("no_path_retry queue" only on the root wwid) and also
applied the patch from https://gerrit.ovirt.org/c/93301/, and now my root
on ISCSI is filtered from the GUI and is hopefully rock solid.
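
For readers hitting the same issue, the resulting per-device section would
look roughly like the sketch below (wwid and alias taken from the config
quoted further down in this thread; only the no_path_retry line is new, so
oVirt's multipath defaults for shared LUNs stay untouched):

multipaths {
    multipath {
        wwid            36001405a26254e2bfd34b179d6e98ba4
        alias           mpath-myhostname-disk1
        # queue I/O instead of failing it when all paths to the root disk are down
        no_path_retry   queue
    }
}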

Bye


On 08.01.2019 at 17:33, Nir Soffer wrote:
> On Tue, Jan 8, 2019 at 5:49 PM Ralf Schenk <r...@databay.de> wrote:
> ...
>
> multipaths {
>
>     multipath {
>     wwid 36001405a26254e2bfd34b179d6e98ba4
>     alias    mpath-myhostname-disk1
>     }
> }
>
> ... 
>
> So you suggest to add
>
> "no_path_retry queue"
>
> to above config according to your statements in
> https://bugzilla.redhat.com/show_bug.cgi?id=1435335 ?
>
>
> Yes
> ...
>
> And yes the disks get listed and is also shown in GUI. How can I
> filter this out ? I think https://gerrit.ovirt.org/c/93301/ shows
> a way to do this. Will this be in 4.3 ?
>
>
> Yes, it is available, but I'm not sure using 4.3 at this point is a
> good idea. It would be
> safer to apply this small patch to 4.2.
> Nir
>
-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>
    
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VTYECWBPJIAVVL6PM7JD4NFSFZ2EIZIF/


[ovirt-users] Re: ISCSI Domain & LVM

2019-01-08 Thread Ralf Schenk
Hello,

I see I already have

devices {
    device {
    vendor "LIO-ORG"
    hardware_handler   "1 alua"
    features   "1 queue_if_no_path"
    path_grouping_policy   "failover"
    path_selector  "queue-length 0"
    failback   immediate
    path_checker   directio
    #path_checker   tur
    prio   alua
    prio_args  exclusive_pref_bit
    #fast_io_fail_tmo   25
    *no_path_retry  queue*
    }
}

Which should result in the same behaviour, correct ?

On 08.01.2019 at 16:48, Ralf Schenk wrote:
>
> Hello,
>
> I manually renamed them via multipath.conf
>
> multipaths {
>     multipath {
>     wwid 36001405a26254e2bfd34b179d6e98ba4
>     alias    mpath-myhostname-disk1
>     }
> }
>
> Before I configured multipath (I installed without first before I knew
> "mpath" parameter in setup !) I had the problem of readonly root a few
> times but the hints and settings (for iscsid.conf) I found out didn't
> help. Thats why I had to tweak dracut ramdisk to set up multipath and
> understand LVM activation and so on of ovirt-ng the hard way.
>
> So you suggest to add
>
> "no_path_retry queue"
>
-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>
    
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EZIJNG5MHG7F7TGSSVBVMBMWUJCA6RAJ/


[ovirt-users] Re: ISCSI Domain & LVM

2019-01-08 Thread Ralf Schenk
Hello,

I manually renamed them via multipath.conf

multipaths {
    multipath {
    wwid 36001405a26254e2bfd34b179d6e98ba4
    alias    mpath-myhostname-disk1
    }
}

Before I configured multipath (I installed without it at first, before I
knew about the "mpath" parameter in setup!) I had the problem of a
read-only root a few times, but the hints and settings (for iscsid.conf)
I found didn't help. That's why I had to tweak the dracut ramdisk to set
up multipath and understand LVM activation and so on of ovirt-ng the hard way.

So you suggest adding

"no_path_retry queue"

to the above config, according to your statements in
https://bugzilla.redhat.com/show_bug.cgi?id=1435335 ?

I cannot access https://bugzilla.redhat.com/show_bug.cgi?id=1436415

And yes, the disk gets listed and is also shown in the GUI. How can I
filter this out ? I think https://gerrit.ovirt.org/c/93301/ shows a way
to do this. Will this be in 4.3 ?

So far thanks for your good hints.

[
    {
    "status": "used",
    "vendorID": "LIO-ORG",
    "GUID": "mpath-myhostname-disk1",
    "capacity": "53687091200",
    "fwrev": "4.0",
    "discard_zeroes_data": 0,
    "vgUUID": "",
    "pathlist": [
    {
    "initiatorname": "default",
    "connection": "172.16.1.3",
    "iqn": "iqn.2018-01.com.fqdn:storage01.myhostname-disk1",
    "portal": "1",
    "user": "myhostname",
    "password": "l3tm31scs1-2018",
    "port": "3260"
    },
    {
    "initiatorname": "ovirtmgmt",
    "connection": "192.168.1.3",
    "iqn": "iqn.2018-01.com.fqdn:storage01.myhostname-disk1",
    "portal": "1",
    "user": "myhostname",
    "password": "l3tm31scs1-2018",
    "port": "3260"
    },
    {
    "initiatorname": "ovirtmgmt",
    "connection": "192.168.1.3",
    "iqn": "iqn.2018-01.com.fqdn:storage01.myhostname-disk1",
    "portal": "1",
    "user": "myhostname",
    "password": "l3tm31scs1-2018",
    "port": "3260"
    }
    ],
    "pvsize": "",
    "discard_max_bytes": 4194304,
    "pathstatus": [
    {
    "capacity": "53687091200",
    "physdev": "sda",
    "type": "iSCSI",
    "state": "active",
    "lun": "0"
    },
    {
    "capacity": "53687091200",
    "physdev": "sdb",
    "type": "iSCSI",
    "state": "active",
    "lun": "0"
    },
    {
    "capacity": "53687091200",
    "physdev": "sdc",
    "type": "iSCSI",
    "state": "active",
    "lun": "0"
    }
    ],
    "devtype": "iSCSI",
    "physicalblocksize": "512",
    "pvUUID": "",
    "serial":
"SLIO-ORG_myhostname-disk_a26254e2-bfd3-4b17-9d6e-98ba4ab45902",
    "logicalblocksize": "512",
    "productID": "myhostname-disk"
    }
]

On 08.01.2019 at 16:19, Nir Soffer wrote:
> On Tue, Jan 8, 2019 at 4:57 PM Ralf Schenk <r...@databay.de> wrote:
>
> Hello,
>
> I cannot tell if this is expected on a node-ng system. I worked
> hard to get it up and running like this. 2 Diskless Hosts (EPYC
> 2x16 Core, 256GB RAM) boot via ISCSI ibft from Gigabit Onboard,
> Initial-Ramdisk establishes multipathing and thats what I get (and
> want). So i've redundant connections (1x1GB, 2x10GB Ethernet as
> bond) to my storage (Ubuntu Box 1xEPYC 16 Core, 128Gig RAM
> currently 8 disks, 2xNvME SSD with ZFS exporting NFS 4.2 and
> targetcli-fb ISCSI targets). All disk images on NFS 4.2 and via
> ISCSI are thin-provisioned

[ovirt-users] Re: ISCSI Domain & LVM

2019-01-08 Thread Ralf Schenk
Hello,

I cannot tell if this is expected on a node-ng system. I worked hard to
get it up and running like this. 2 diskless hosts (EPYC 2x16 cores, 256 GB
RAM) boot via ISCSI ibft from the onboard Gigabit NIC, the initial ramdisk
establishes multipathing, and that's what I get (and want). So I've got
redundant connections (1x1 GbE, 2x10 GbE as a bond) to my storage (Ubuntu
box, 1xEPYC 16 cores, 128 GB RAM, currently 8 disks, 2xNVMe SSD with ZFS
exporting NFS 4.2 and targetcli-fb ISCSI targets). All disk images on NFS
4.2 and via ISCSI are thin-provisioned, and the sparse files grow and
shrink when discarding in the VMs/host via fstrim.
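
(For reference, the discard pass itself is just a one-liner run inside the
guests or on the host, assuming the virtual disks have discard enabled:)

# trim all mounted filesystems that support discard, verbosely
fstrim -av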

My exercises doing this also as a UEFI boot were stopped by the node-ng
installer partitioning, which refused to set up a UEFI FAT boot partition
on the already accessible ISCSI targets. So the systems do legacy BIOS
boot now.

This works happily even if I unplug one of the Ethernet cables.

[root@myhostname ~]# multipath -ll
mpath-myhostname-disk1 (36001405a26254e2bfd34b179d6e98ba4) dm-0 LIO-ORG
,myhostname-disk
size=50G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| `- 3:0:0:0 sdb 8:16 active ready running
|-+- policy='queue-length 0' prio=50 status=enabled
| `- 4:0:0:0 sdc 8:32 active ready running
`-+- policy='queue-length 0' prio=50 status=enabled
  `- 0:0:0:0 sda 8:0  active ready running

See attached lsblk.


On 08.01.2019 at 14:46, Nir Soffer wrote:
> On Tue, Jan 8, 2019 at 1:29 PM Ralf Schenk <r...@databay.de> wrote:
>
> Hello,
>
> I'm running my Ovirt-Node-NG based Hosts off ISCSI root. That is
> what "vdsm-tool config-lvm-filter" suggests. Is this correct ?
>
> [root@myhostname ~]# vdsm-tool config-lvm-filter
> Analyzing host...
> Found these mounted logical volumes on this host:
>
>   logical volume:  /dev/mapper/onn_myhostname--iscsi-home
>   mountpoint:  /home
>   devices: /dev/mapper/mpath-myhostname-disk1p2
>
>   logical volume: 
> /dev/mapper/onn_myhostname--iscsi-ovirt--node--ng--4.2.7.1--0.20181209.0+1
>   mountpoint:  /
>   devices: /dev/mapper/mpath-myhostname-disk1p2
>
>   logical volume:  /dev/mapper/onn_myhostname--iscsi-swap
>   mountpoint:  [SWAP]
>   devices: /dev/mapper/mpath-myhostname-disk1p2
>
>   logical volume:  /dev/mapper/onn_myhostname--iscsi-tmp
>   mountpoint:  /tmp
>   devices: /dev/mapper/mpath-myhostname-disk1p2
>
>   logical volume:  /dev/mapper/onn_myhostname--iscsi-var
>   mountpoint:  /var
>   devices: /dev/mapper/mpath-myhostname-disk1p2
>
>   logical volume:  /dev/mapper/onn_myhostname--iscsi-var_crash
>   mountpoint:  /var/crash
>   devices: /dev/mapper/mpath-myhostname-disk1p2
>
>   logical volume:  /dev/mapper/onn_myhostname--iscsi-var_log
>   mountpoint:  /var/log
>   devices: /dev/mapper/mpath-myhostname-disk1p2
>
>   logical volume:  /dev/mapper/onn_myhostname--iscsi-var_log_audit
>   mountpoint:  /var/log/audit
>   devices: /dev/mapper/mpath-myhostname-disk1p2
>
> This is the recommended LVM filter for this host:
>
>   filter = [ "a|^/dev/mapper/mpath-myhostname-disk1p2$|", "r|.*|" ]
>
> This filter allows LVM to access the local devices used by the
> hypervisor, but not shared storage owned by Vdsm. If you add a new
> device to the volume group, you will need to edit the filter manually.
>
> Configure LVM filter? [yes,NO]
>
>
> Yoval, is /dev/mapper/mpath-myhostname-disk1p2 expected on node system?
>
> Ralf, can you share the output of:
> lsblk
> multipath -ll
> multipathd show paths format "%d %P"
>
> Nir
>
>
>
>
> On 05.01.2019 at 19:34, teh...@take3.ro wrote:
>> Hello Greg,
>>
>> this is what i was looking for.
>>
>> After running "vdsm-tool config-lvm-filter" on all hosts (and rebooting 
>> them) all PVs, VGs and LVs from ISCSI domain were not visible anymore to 
>> local LVM on the ovirt hosts.
>>
>> Additionally i made following tests:
>> - Cloning + running a VM on the ISCSI domain
>> - Detaching + (re-)attaching of the ISCSI domain
>> - Detaching, removing + (re-)import of the ISCSI domain 
>> - Creating new ISCSI domain (well, i needed to use "force operation" 
>> because creating on same ISCSI target)
>>
>> All tests were successful.
>>
>> As you wished i filed a bug: 
>> <https://github.com/oVirt/ovirt-site/issues/1857> 
>> <https://github.com/oVir

[ovirt-users] Re: ISCSI Domain & LVM

2019-01-08 Thread Ralf Schenk
Hello,

I'm running my Ovirt-Node-NG based Hosts off ISCSI root. That is what
"vdsm-tool config-lvm-filter" suggests. Is this correct ?

[root@myhostname ~]# vdsm-tool config-lvm-filter
Analyzing host...
Found these mounted logical volumes on this host:

  logical volume:  /dev/mapper/onn_myhostname--iscsi-home
  mountpoint:  /home
  devices: /dev/mapper/mpath-myhostname-disk1p2

  logical volume: 
/dev/mapper/onn_myhostname--iscsi-ovirt--node--ng--4.2.7.1--0.20181209.0+1
  mountpoint:  /
  devices: /dev/mapper/mpath-myhostname-disk1p2

  logical volume:  /dev/mapper/onn_myhostname--iscsi-swap
  mountpoint:  [SWAP]
  devices: /dev/mapper/mpath-myhostname-disk1p2

  logical volume:  /dev/mapper/onn_myhostname--iscsi-tmp
  mountpoint:  /tmp
  devices: /dev/mapper/mpath-myhostname-disk1p2

  logical volume:  /dev/mapper/onn_myhostname--iscsi-var
  mountpoint:  /var
  devices: /dev/mapper/mpath-myhostname-disk1p2

  logical volume:  /dev/mapper/onn_myhostname--iscsi-var_crash
  mountpoint:  /var/crash
  devices: /dev/mapper/mpath-myhostname-disk1p2

  logical volume:  /dev/mapper/onn_myhostname--iscsi-var_log
  mountpoint:  /var/log
  devices: /dev/mapper/mpath-myhostname-disk1p2

  logical volume:  /dev/mapper/onn_myhostname--iscsi-var_log_audit
  mountpoint:  /var/log/audit
  devices: /dev/mapper/mpath-myhostname-disk1p2

This is the recommended LVM filter for this host:

  filter = [ "a|^/dev/mapper/mpath-myhostname-disk1p2$|", "r|.*|" ]

This filter allows LVM to access the local devices used by the
hypervisor, but not shared storage owned by Vdsm. If you add a new
device to the volume group, you will need to edit the filter manually.

Configure LVM filter? [yes,NO]
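
(Answering "yes" lets vdsm-tool write the filter for you; doing it by hand
would mean putting the same line into the devices section of
/etc/lvm/lvm.conf, roughly like this:)

devices {
    # allow only the multipath partition backing the hypervisor's own LVs,
    # reject everything else (including LUNs used by oVirt storage domains)
    filter = [ "a|^/dev/mapper/mpath-myhostname-disk1p2$|", "r|.*|" ]
}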


On 05.01.2019 at 19:34, teh...@take3.ro wrote:
> Hello Greg,
>
> this is what I was looking for.
>
> After running "vdsm-tool config-lvm-filter" on all hosts (and rebooting them) 
> all PVs, VGs and LVs from ISCSI domain were not visible anymore to local LVM 
> on the ovirt hosts.
>
> Additionally I made the following tests:
> - Cloning + running a VM on the ISCSI domain
> - Detaching + (re-)attaching of the ISCSI domain
> - Detaching, removing + (re-)import of the ISCSI domain 
> - Creating new ISCSI domain (well, i needed to use "force operation" because 
> creating on same ISCSI target)
>
> All tests were successful.
>
> As you wished, I filed a bug: <https://github.com/oVirt/ovirt-site/issues/1857>
> Thank you.
>
> Best regards,
> Robert
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VD6WCL343YGB2JVBWPZG54IMSQZ5VZ7M/
-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>
    
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HX3LO46HW6GERQXDRCLRYUUN2NLURQTK/


[ovirt-users] Upload via GUI to VMSTORE possible but not ISO Domain

2018-12-20 Thread Ralf Schenk
Hello,

I can successfully upload disks to my data domain ("VMSTORE"), which is
NFS. I can also upload .iso files there (no problems with SSL or
imageio-proxy). Why is the ISO domain not available for upload via the GUI ?
Does a separate ISO domain still make sense ? The ISO domain is up and
running. And is it possible to filter out the hosted_storage domain, where
the engine lives, from uploads ?



-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>
    
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/73PMFYEQ64QN2K23A4UKNUC2JHAFNA35/


[ovirt-users] Re: hosted-engine --deploy fails on Ovirt-Node-NG 4.2.7

2018-12-03 Thread Ralf Schenk
Hello,

attached is the qemu log.

This is the problem:

Could not open
'/var/run/vdsm/storage/1dacf1ea-0934-4840-bed4-e9d023572f59/2b1332f6-3bb6-495b-87fe-c5b85e0ac495/39d45b33-5f29-430b-8b58-14a8ea20fb08':
Permission denied

When I do "su - vdsm -s /bin/bash"
I can hexdump the file !

-bash-4.2$ id
uid=36(vdsm) gid=36(kvm) groups=36(kvm),107(qemu),179(sanlock)
-bash-4.2$ hexdump -Cn 512
/var/run/vdsm/storage/1dacf1ea-0934-4840-bed4-e9d023572f59/2b1332f6-3bb6-495b-87fe-c5b85e0ac495/39d45b33-5f29-430b-8b58-14a8ea20fb08
  eb 63 90 10 8e d0 bc 00  b0 b8 00 00 8e d8 8e c0 
|.c..|
0010  fb be 00 7c bf 00 06 b9  00 02 f3 a4 ea 21 06 00 
|...|.!..|
0020  00 be be 07 38 04 75 0b  83 c6 10 81 fe fe 07 75 
|8.uu|
0030  f3 eb 16 b4 02 b0 01 bb  00 7c b2 80 8a 74 01 8b 
|.|...t..|
0040  4c 02 cd 13 ea 00 7c 00  00 eb fe 00 00 00 00 00 
|L.|.|
0050  00 00 00 00 00 00 00 00  00 00 00 80 01 00 00 00 
||
0060  00 00 00 00 ff fa 90 90  f6 c2 80 74 05 f6 c2 70 
|...t...p|
0070  74 02 b2 80 ea 79 7c 00  00 31 c0 8e d8 8e d0 bc 
|ty|..1..|
0080  00 20 fb a0 64 7c 3c ff  74 02 88 c2 52 be 05 7c  |.
..d|<.t...R..||
0090  b4 41 bb aa 55 cd 13 5a  52 72 3d 81 fb 55 aa 75 
|.A..U..ZRr=..U.u|
00a0  37 83 e1 01 74 32 31 c0  89 44 04 40 88 44 ff 89 
|7...t21..D.@.D..|
00b0  44 02 c7 04 10 00 66 8b  1e 5c 7c 66 89 5c 08 66 
|D.f..\|f.\.f|
00c0  8b 1e 60 7c 66 89 5c 0c  c7 44 06 00 70 b4 42 cd 
|..`|f.\..D..p.B.|
00d0  13 72 05 bb 00 70 eb 76  b4 08 cd 13 73 0d 5a 84 
|.r...p.vs.Z.|
00e0  d2 0f 83 de 00 be 85 7d  e9 82 00 66 0f b6 c6 88 
|...}...f|
00f0  64 ff 40 66 89 44 04 0f  b6 d1 c1 e2 02 88 e8 88 
|d.@f.D..|
0100  f4 40 89 44 08 0f b6 c2  c0 e8 02 66 89 04 66 a1 
|.@.D...f..f.|
0110  60 7c 66 09 c0 75 4e 66  a1 5c 7c 66 31 d2 66 f7 
|`|f..uNf.\|f1.f.|
0120  34 88 d1 31 d2 66 f7 74  04 3b 44 08 7d 37 fe c1 
|4..1.f.t.;D.}7..|
0130  88 c5 30 c0 c1 e8 02 08  c1 88 d0 5a 88 c6 bb 00 
|..0Z|
0140  70 8e c3 31 db b8 01 02  cd 13 72 1e 8c c3 60 1e 
|p..1..r...`.|
0150  b9 00 01 8e db 31 f6 bf  00 80 8e c6 fc f3 a5 1f 
|.1..|
0160  61 ff 26 5a 7c be 80 7d  eb 03 be 8f 7d e8 34 00 
|a.|..}}.4.|
0170  be 94 7d e8 2e 00 cd 18  eb fe 47 52 55 42 20 00 
|..}...GRUB .|
0180  47 65 6f 6d 00 48 61 72  64 20 44 69 73 6b 00 52  |Geom.Hard
Disk.R|
0190  65 61 64 00 20 45 72 72  6f 72 0d 0a 00 bb 01 00  |ead.
Error..|
01a0  b4 0e cd 10 ac 3c 00 75  f4 c3 00 00 00 00 00 00 
|.<.u|
01b0  00 00 00 00 00 00 00 00  4f ee 04 00 00 00 80 20 
|O.. |
01c0  21 00 83 aa 28 82 00 08  00 00 00 00 20 00 00 aa 
|!...(... ...|
01d0  29 82 8e fe ff ff 00 08  20 00 00 f8 5b 05 00 fe  |)...
...[...|
01e0  ff ff 83 fe ff ff 00 00  7c 05 00 f8 c3 00 00 00 
||...|
01f0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 55 aa 
|..U.|
0200
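
Since vdsm can read the image but qemu cannot, a quick comparison of what
both accounts see might help (a hedged diagnostic sketch; the qemu.conf
option names are assumed from a stock libvirt install):

ls -lZ /var/run/vdsm/storage/1dacf1ea-0934-4840-bed4-e9d023572f59/2b1332f6-3bb6-495b-87fe-c5b85e0ac495/39d45b33-5f29-430b-8b58-14a8ea20fb08
id qemu                                        # is the qemu user in the kvm group?
grep -E '^(user|group|dynamic_ownership)' /etc/libvirt/qemu.conf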


On 03.12.2018 at 16:43, Simone Tiraboschi wrote:
>
>
> On Mon, Dec 3, 2018 at 2:07 PM Ralf Schenk <r...@databay.de> wrote:
>
> Hello,
>
> I am trying to deploy hosted-engine to an NFS share accessible by
> (currently) two hosts. The host is running the latest ovirt-node-ng 4.2.7.
>
> hosted-engine --deploy constantly fails at a late stage when trying
> to run the engine from NFS. It already ran as "HostedEngineLocal" and
> I think it is then migrated to NFS storage.
>
> Engine seems to be deployed to NFS already:
>
> [root@epycdphv02 ~]# ls -al
> /rhev/data-center/mnt/storage01.office.databay.de:_ovirt_engine
> total 23
> drwxrwxrwx 3 vdsm kvm    4 Dec  3 13:01 .
> drwxr-xr-x 3 vdsm kvm 4096 Dec  1 17:11 ..
> drwxr-xr-x 6 vdsm kvm    6 Dec  3 13:09
> 1dacf1ea-0934-4840-bed4-e9d023572f59
> -rwxr-xr-x 1 vdsm kvm    0 Dec  3 13:42 __DIRECT_IO_TEST__
>
> NFS Mount:
>
> storage01.office.databay.de:/ovirt/engine on
> /rhev/data-center/mnt/storage01.office.databay.de:_ovirt_engine
> type nfs4
> 
> (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.1.121,local_lock=none,addr=192.168.1.3)
>
> Libvirt quemu states an error:
>
> Could not open
> 
> '/var/run/vdsm/storage/1dacf1ea-0934-4840-bed4-e9d023572f59/2b1332f6-3bb6-495b-87fe-c5b85e0ac495/39d45b33-5f29-430b-8b58-14a8ea20fb08':
> Permission denied
>
> Even the permissions of the mentioned file seem to be ok. SELinux is
> disabled since I had a lot of problems with earlier versions
> trying to deploy hosted-engine.
>
> You can keep it on without any known issue.
>  
>
> [root@epycdphv02 ~]# ls -al
> 
> '/

[ovirt-users] hosted-engine --deploy fails on Ovirt-Node-NG 4.2.7

2018-12-03 Thread Ralf Schenk
ok: [localhost]
[ INFO  ] TASK [Find the local appliance image]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Set local_vm_disk_path]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Give the vm time to flush dirty buffers]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Copy engine logs]
[ INFO  ] TASK [include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Remove local vm dir]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Remove temporary entry in /etc/hosts for the local VM]
[ INFO  ] ok: [localhost]
[ INFO  ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20181203132110.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: please check the logs for the
issue, fix accordingly or re-deploy from scratch.
  Log file is located at
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20181203124703-45t4ja.log
-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>
    
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


<>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NK2TNNPA7TFLPKWNAU34BL4T7QANUDKB/


[ovirt-users] Re: Upgrade 4.1 to 4.2 lost gfapi Disk Access for VM's

2018-08-20 Thread Ralf Schenk
Hello,

after enabling it, restarting the ovirt-engine service and restarting the
VM, I got back gfapi-based disks:

    [disk XML from "virsh -r dumpxml"; the tags were stripped by the list
    archive. The disk is again served via gfapi (source protocol='gluster'),
    serial d5f3657f-ac7a-4d89-8a83-e7c47ee0ef05.]

On 20.08.2018 at 15:15, Alex K wrote:
> Hi,
>
> On Mon, Aug 20, 2018 at 10:45 AM Ralf Schenk <r...@databay.de> wrote:
>
> Hello,
>
> very interesting output. Feature Lost...
>
> [root@engine-mciii ~]# engine-config -g LibgfApiSupported
> LibgfApiSupported: false version: 3.6
> LibgfApiSupported: false version: 4.0
> LibgfApiSupported: true version: 4.1
> LibgfApiSupported: false version: 4.2
>
> Did you enable it? Did it fix your issue?
>
> Bye
>
>
> On 17.08.2018 at 17:57, Alex K wrote:
>> CORRECTION
>>
>> On Fri, Aug 17, 2018 at 6:55 PM Alex K <rightkickt...@gmail.com> wrote:
>>
>> Hi,
>>
>> On Fri, Aug 17, 2018 at 6:21 PM Ralf Schenk <r...@databay.de> wrote:
>>
>> Hello,
>>
>> after upgradeing my whole cluster of 8 Hosts to ovirt
>> 4.2.5 and setting compability of cluster and datacenter
>> to 4.2 my existing virtual machines start using FUSE
>> mounted disk-images on my gluster volumes.
>>
>> One bad and slow thing I thought that I got rid of
>> starting with 4.1.x finally !
>>
>> Disk definition from virsh -r dumpxml VM of old running
>> vm (not restarted yet !):
>>
>>     
>>   > error_policy='stop' io='threads'/>
>>   > 
>> name='gv0/5d99af76-33b5-47d8-99da-1f32413c7bb0/images/d5f3657f-ac7a-4d89-8a83-e7c47ee0ef05/f99c6cb4-1791-4a55-a0b9-2ff0ec1a4dd7'>
>>     
>>   
>>   
>>   
>>   d5f3657f-ac7a-4d89-8a83-e7c47ee0ef05
>>   
>>   
>>   > target='0' unit='0'/>
>>     
>>
>> Disk definition from virsh -r dumpxml VM of new started
>> running vm:
>>
>>     
>>   > error_policy='stop' io='threads'/>
>>   > 
>> file='/rhev/data-center/mnt/glusterSD/glusterfs.mylocal.domain:_gv0/5d99af76-33b5-47d8-99da-1f32413c7bb0/images/1f0db0ef-a6af-4e3e-90e6-f681d071496b/1b3c3a
>>   
>>   
>>   1f0db0ef-a6af-4e3e-90e6-f681d071496b
>>   
>>   
>>   > target='0' unit='0'/>
>>     
>>
>>
>> How do I get back my gfapi gluster-based Disks ?
>>
>> Can you check at engine the gfapi support with: 
>>
>> engine-config -g LibgfApiSupported
>>
>> If not enabled you can try: 
>>
>> engine-config -s LibgfApiSupported=true 
>>
>>
>> Then restart ovirt-engine service, then shutdown/power up one VM to 
>> see.
>>
>>
>>
>> -- 
>>
>>
>> *Ralf Schenk*
>> fon +49 (0) 24 05 / 40 83 70
>> fax +49 (0) 24 05 / 40 83 759
>> mail *r...@databay.de* <mailto:r...@databay.de>
>>      
>> *Databay AG*
>> Jens-Otto-Krag-Straße 11
>> D-52146 Würselen
>> *www.databay.de* <http://www.databay.de>
>>
>> Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
>> Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch
>> Yavari, Dipl.-Kfm. Philipp Hermanns
>> Aufsichtsratsvorsitzender: Wilhelm Dohmen
>>
>> 
>> ----
>> ___
>> Users mailing list -- users@ovirt.org
>> <mailto:users@ovirt.org>
>> To unsubscribe send an email to users-le...@ovirt.org
>> <mailto:users-le...@ovirt.org>
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> h

[ovirt-users] Upgrade 4.1 to 4.2 lost gfapi Disk Access for VM's

2018-08-17 Thread Ralf Schenk
Hello,

after upgrading my whole cluster of 8 hosts to oVirt 4.2.5 and setting the
compatibility of cluster and datacenter to 4.2, my existing virtual
machines start using FUSE-mounted disk images on my gluster volumes.

One bad and slow thing I thought I had finally gotten rid of starting with
4.1.x !

Disk definition from virsh -r dumpxml VM of the old running vm (not
restarted yet !):

    [disk XML stripped by the list archive: a gfapi disk with source
    protocol='gluster',
    name='gv0/5d99af76-33b5-47d8-99da-1f32413c7bb0/images/d5f3657f-ac7a-4d89-8a83-e7c47ee0ef05/f99c6cb4-1791-4a55-a0b9-2ff0ec1a4dd7',
    serial d5f3657f-ac7a-4d89-8a83-e7c47ee0ef05.]

Disk definition from virsh -r dumpxml VM of the newly started running vm:

    [disk XML stripped by the list archive: a file-backed disk on the FUSE
    mount /rhev/data-center/mnt/glusterSD/glusterfs.mylocal.domain:_gv0/...,
    serial 1f0db0ef-a6af-4e3e-90e6-f681d071496b.]


How do I get back my gfapi gluster-based Disks ?
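
(As the replies in this thread show, the flag just has to be re-enabled on
the engine after the 4.2 upgrade; a sketch of the sequence, with the
--cver option assumed for setting the per-cluster-level value:)

engine-config -g LibgfApiSupported               # check the current values
engine-config -s LibgfApiSupported=true --cver=4.2
systemctl restart ovirt-engine
# then shut the VM down and start it again; a reboot inside the guest is not enough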



-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>
    
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/27UBFV326G5O4PABY5JBKSR4GQSS4XKY/


[ovirt-users] Ovirt Node NG (4.2.3.1-0.20180530) Boot fail ISCSI using ibft after installation

2018-06-17 Thread Ralf Schenk
Hello,

I successfully installed Ovirt Node NG from ISO to an ISCSI target
attached via the first network interface by using the following extensions
to the grub cmdline:

"rd.iscsi.ibft=1 ip=ibft ip=eno2:dhcp"

I want to use the server as a diskless ovirt-node-ng server.

After a successful install the system reboots and starts up, but it fails
later in dracut even though it has correctly detected the disk and all the LVs.

I think "iscsistart" is run multiple times even after it is already logged
in to the ISCSI target, and that finally fails like this:

*[  147.644872] localhost dracut-initqueue[1075]: iscsistart: initiator
reported error (15 - session exists)*
[  147.645588] localhost dracut-initqueue[1075]: iscsistart: Logging
into iqn.2018-01.de.databay.office:storage01.epycdphv02-disk1
172.16.1.3:3260,1
[  147.651027] localhost dracut-initqueue[1075]: Warning: 'iscsistart -b
' failed with return code 0
[  147.807510] localhost systemd[1]: Starting Login iSCSI Target...
[  147.809293] localhost iscsistart[6716]: iscsistart: TargetName not
set. Exiting iscsistart
[  147.813625] localhost systemd[1]: iscsistart_iscsi.service: main
process exited, code=exited, status=7/NOTRUNNING
[  147.824897] localhost systemd[1]: Failed to start Login iSCSI Target.
[  147.825050] localhost systemd[1]: Unit iscsistart_iscsi.service
entered failed state.
[  147.825185] localhost systemd[1]: iscsistart_iscsi.service failed.
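
(From the dracut emergency shell the duplicate login is easy to confirm:
the ibft/firmware session should already be listed before the extra
iscsistart units fire. A hedged check, assuming iscsiadm is available in
the initramfs:)

# list the iSCSI sessions that are already established
iscsiadm -m session -P 1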

-- 
Kind regards

Ralf Schenk

+ cat /lib/dracut/dracut-033-535.el7
dracut-033-535.el7
+ cat /proc/cmdline
BOOT_IMAGE=/ovirt-node-ng-4.2.3.1-0.20180530.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
 root=/dev/onn/ovirt-node-ng-4.2.3.1-0.20180530.0+1 ro crashkernel=auto 
bootdev=ibft0 ip=ibft ip=eno2:dhcp netroot=iscsi rd.iscsi.ibft=1 
rd.iscsi.firmware=1 rd.auto rd.lvm.lv=onn/var_log rd.lvm.lv=onn/tmp 
rd.lvm.lv=onn/var_log_audit rd.lvm.lv=onn/swap 
rd.lvm.lv=onn/ovirt-node-ng-4.2.3.1-0.20180530.0+1 rd.lvm.lv=onn/home 
plymouth.enable=0 rd.lvm.lv=onn/ovirt-node-ng-4.2.3.1-0.20180530.0+1 
img.bootid=ovirt-node-ng-4.2.3.1-0.20180530.0+1
+ '[' -f /etc/cmdline ']'
+ for _i in '/etc/cmdline.d/*.conf'
+ '[' -f /etc/cmdline.d/40-ibft.conf ']'
+ echo /etc/cmdline.d/40-ibft.conf
/etc/cmdline.d/40-ibft.conf
+ cat /etc/cmdline.d/40-ibft.conf
ip=172.16.1.121:::255.255.255.0::ibft0:none
+ for _i in '/etc/cmdline.d/*.conf'
+ '[' -f /etc/cmdline.d/45-ifname.conf ']'
+ echo /etc/cmdline.d/45-ifname.conf
/etc/cmdline.d/45-ifname.conf
+ cat /etc/cmdline.d/45-ifname.conf
ifname=ibft0:0c:c4:7a:fa:23:92
+ for _i in '/etc/cmdline.d/*.conf'
+ '[' -f /etc/cmdline.d/dracut-neednet.conf ']'
+ echo /etc/cmdline.d/dracut-neednet.conf
/etc/cmdline.d/dracut-neednet.conf
+ cat /etc/cmdline.d/dracut-neednet.conf
rd.neednet=1
+ cat /proc/self/mountinfo
0 0 0:1 / / rw shared:1 - rootfs rootfs rw
18 0 0:18 / /sys rw,nosuid,nodev,noexec,relatime shared:2 - sysfs sysfs rw
19 0 0:3 / /proc rw,nosuid,nodev,noexec,relatime shared:8 - proc proc rw
20 0 0:5 / /dev rw,nosuid shared:9 - devtmpfs devtmpfs 
rw,size=131896204k,nr_inodes=32974051,mode=755
21 18 0:17 / /sys/kernel/security rw,nosuid,nodev,noexec,relatime shared:3 - 
securityfs securityfs rw
22 20 0:19 / /dev/shm rw,nosuid,nodev shared:10 - tmpfs tmpfs rw
23 20 0:12 / /dev/pts rw,nosuid,noexec,relatime shared:11 - devpts devpts 
rw,gid=5,mode=620,ptmxmode=000
24 0 0:20 / /run rw,nosuid,nodev shared:12 - tmpfs tmpfs rw,mode=755
25 18 0:21 / /sys/fs/cgroup ro,nosuid,nodev,noexec shared:4 - tmpfs tmpfs 
ro,mode=755
26 25 0:22 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:5 - 
cgroup cgroup 
rw,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd
27 18 0:23 / /sys/fs/pstore rw,nosuid,nodev,noexec,relatime shared:6 - pstore 
pstore rw
28 18 0:24 / /sys/firmware/efi/efivars rw,nosuid,nodev,noexec,relatime shared:7 
- efivarfs efivarfs rw
29 25 0:25 / /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:13 - 
cgroup cgroup rw,blkio
30 25 0:26 / /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:14 - 
cgroup cgroup rw,freezer
31 25 0:27 / /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime 
shared:15 - cgroup cgroup rw,cpuacct,cpu
32 25 0:28 / /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime 
shared:16 - cgroup cgroup rw,net_prio,net_cls
33 25 0:29 / /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:17 - 
cgroup cgroup rw,devices
34 25 0:30 / /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime 
shared:18 - cgroup cgroup rw,perf_event
35 25 0:31 / /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:19 - 
cgroup cgroup rw,hugetlb
36 25 0:32 / /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:20 - 
cgroup cgroup rw,memory
37 25 0:33 / /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:21 - 
cgroup cgroup rw,cpuset
38 25 0:34 / /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:22 - 
cgroup cgroup rw,pids
39 0 0:35 / /var/lib/nfs/rpc_pipefs rw,relatime sh

[ovirt-users] Ovirt Node NG (4.2.3.1-0.20180530) Boot fail ISCSI using ibft after installation

2018-06-05 Thread Ralf Schenk
Hello,

I successfully installed Ovirt Node NG from ISO to an ISCSI target
attached via the first network interface by using the following extensions
to the grub cmdline:

"rd.iscsi.ibft=1 ip=ibft ip=eno2:dhcp"

I want to use the server as a diskless ovirt-node-ng server.

After a successful install the system reboots and starts up, but it fails
later in dracut even though it has correctly detected the disk and all the LVs.

I think "iscsistart" is run multiple times even after it is already logged
in to the ISCSI target, and that finally fails like this:

*[  147.644872] localhost dracut-initqueue[1075]: iscsistart: initiator
reported error (15 - session exists)*
[  147.645588] localhost dracut-initqueue[1075]: iscsistart: Logging
into iqn.2018-01.de.databay.office:storage01.epycdphv02-disk1
172.16.1.3:3260,1
[  147.651027] localhost dracut-initqueue[1075]: Warning: 'iscsistart -b
' failed with return code 0
[  147.807510] localhost systemd[1]: Starting Login iSCSI Target...
[  147.809293] localhost iscsistart[6716]: iscsistart: TargetName not
set. Exiting iscsistart
[  147.813625] localhost systemd[1]: iscsistart_iscsi.service: main
process exited, code=exited, status=7/NOTRUNNING
[  147.824897] localhost systemd[1]: Failed to start Login iSCSI Target.
[  147.825050] localhost systemd[1]: Unit iscsistart_iscsi.service
entered failed state.
[  147.825185] localhost systemd[1]: iscsistart_iscsi.service failed.

After a long timeout dracut drops to a shell.
I attach my shortened and cleaned rdsosreport.txt. Can someone help me
find a workaround ?

Bye


-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>
    
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


+ cat /lib/dracut/dracut-033-535.el7
dracut-033-535.el7
+ cat /proc/cmdline
BOOT_IMAGE=/ovirt-node-ng-4.2.3.1-0.20180530.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
 root=/dev/onn/ovirt-node-ng-4.2.3.1-0.20180530.0+1 ro crashkernel=auto 
bootdev=ibft0 ip=ibft ip=eno2:dhcp netroot=iscsi rd.iscsi.ibft=1 
rd.iscsi.firmware=1 rd.auto rd.lvm.lv=onn/var_log rd.lvm.lv=onn/tmp 
rd.lvm.lv=onn/var_log_audit rd.lvm.lv=onn/swap 
rd.lvm.lv=onn/ovirt-node-ng-4.2.3.1-0.20180530.0+1 rd.lvm.lv=onn/home 
plymouth.enable=0 rd.lvm.lv=onn/ovirt-node-ng-4.2.3.1-0.20180530.0+1 
img.bootid=ovirt-node-ng-4.2.3.1-0.20180530.0+1
+ '[' -f /etc/cmdline ']'
+ for _i in '/etc/cmdline.d/*.conf'
+ '[' -f /etc/cmdline.d/40-ibft.conf ']'
+ echo /etc/cmdline.d/40-ibft.conf
/etc/cmdline.d/40-ibft.conf
+ cat /etc/cmdline.d/40-ibft.conf
ip=172.16.1.121:::255.255.255.0::ibft0:none
+ for _i in '/etc/cmdline.d/*.conf'
+ '[' -f /etc/cmdline.d/45-ifname.conf ']'
+ echo /etc/cmdline.d/45-ifname.conf
/etc/cmdline.d/45-ifname.conf
+ cat /etc/cmdline.d/45-ifname.conf
ifname=ibft0:0c:c4:7a:fa:23:92
+ for _i in '/etc/cmdline.d/*.conf'
+ '[' -f /etc/cmdline.d/dracut-neednet.conf ']'
+ echo /etc/cmdline.d/dracut-neednet.conf
/etc/cmdline.d/dracut-neednet.conf
+ cat /etc/cmdline.d/dracut-neednet.conf
rd.neednet=1
+ cat /proc/self/mountinfo
0 0 0:1 / / rw shared:1 - rootfs rootfs rw
18 0 0:18 / /sys rw,nosuid,nodev,noexec,relatime shared:2 - sysfs sysfs rw
19 0 0:3 / /proc rw,nosuid,nodev,noexec,relatime shared:8 - proc proc rw
20 0 0:5 / /dev rw,nosuid shared:9 - devtmpfs devtmpfs 
rw,size=131896204k,nr_inodes=32974051,mode=755
21 18 0:17 / /sys/kernel/security rw,nosuid,nodev,noexec,relatime shared:3 - 
securityfs securityfs rw
22 20 0:19 / /dev/shm rw,nosuid,nodev shared:10 - tmpfs tmpfs rw
23 20 0:12 / /dev/pts rw,nosuid,noexec,relatime shared:11 - devpts devpts 
rw,gid=5,mode=620,ptmxmode=000
24 0 0:20 / /run rw,nosuid,nodev shared:12 - tmpfs tmpfs rw,mode=755
25 18 0:21 / /sys/fs/cgroup ro,nosuid,nodev,noexec shared:4 - tmpfs tmpfs 
ro,mode=755
26 25 0:22 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:5 - 
cgroup cgroup 
rw,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd
27 18 0:23 / /sys/fs/pstore rw,nosuid,nodev,noexec,relatime shared:6 - pstore 
pstore rw
28 18 0:24 / /sys/firmware/efi/efivars rw,nosuid,nodev,noexec,relatime shared:7 
- efivarfs efivarfs rw
29 25 0:25 / /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:13 - 
cgroup cgroup rw,blkio
30 25 0:26 / /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:14 - 
cgroup cgroup rw,freezer
31 25 0:27 / /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime 
shared:15 - cgroup cgroup rw,cpuacct,cpu
32 25 0:28 / /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime 
shared:16 - cgroup cgroup rw,net_prio,net_cls
33 25 0:29 / /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,rel

[ovirt-users] No way to install on EFI and ISCSI

2018-04-04 Thread Ralf Schenk
Hello,

I have an EPYC-based Supermicro server that should act as an oVirt host,
using ovirt-node-next installed on an ISCSI boot target. I already
installed an EFI-bootable OS (Ubuntu 16.04) on the hardware using an ISCSI
target as the boot disk. So far that works.

If I try to install ovirt-node-next (tried
ovirt-node-ng-installer-ovirt-4.2-2018032906.iso and
ovirt-node-ng-installer-ovirt-4.2-2018040306.iso mounted via KVM as CD)
I'm not able to use either auto-partitioning or manual partitioning to set
the system up. Accessing the ISCSI target (50 GB) works. I created all the
mount points, including /boot and the EFI system partition /boot/efi. The
installer automatically proposes /, /var, /home and swap on thin-pool LVs,
which is ok.

After having it ready, the installer doesn't accept "Done", as if no EFI
partition had been prepared. If I highlight the EFI partition and click
Done it works, but the installer overview doesn't allow me to start the
installation and the storage configuration is marked with a warning sign.

The system definitely boots via EFI from the installer CD.

Bye
-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>
    
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] update to centos 7.4

2017-09-23 Thread Ralf Schenk
Hello all,

I updated all my 8 hosts to CentOS 7.4 and the engine VM as well,
including the engine update from 4.1.5 to 4.1.6, and have had no issues so far.
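
(For anyone repeating this, it was the usual minor-upgrade flow, roughly
sketched below; each host was put into maintenance first and done one at a
time:)

# on each host, while it is in maintenance
yum update && reboot
# on the engine VM
yum update ovirt\*setup\*
engine-setup
yum update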

Bye
On 14.09.2017 at 16:11, FERNANDO FREDIANI wrote:
> It was released yesterday. I don't think such a quick upgrade is
> recommended. It might work well, but I wouldn't find it strange if there
> are issues until this is fully tested with current oVirt versions.
>
> Fernando
>
> On 14/09/2017 11:01, Nathanaël Blanchet wrote:
>> Hi all,
>>
>> Now that CentOS 7.4 is available, is it recommended to update nodes (and
>> the engine OS), knowing that oVirt 4.1 is officially supported for 7.3 or
>> later?
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>
    
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.1.5 Finally GlusterFS via lbgfapi

2017-08-25 Thread Ralf Schenk
Hello,

Progress: I finally tried to migrate the machine to other hosts in the
cluster. For one of them this was working !

See the attached vdsm.log. The migration to host microcloud25 worked as
expected, and migrating back to the initial host microcloud22 also worked.
The other hosts (microcloud21, microcloud23, microcloud24) were not
working at all as a migration target.

Perhaps the working ones were the two that I rebooted after upgrading
all hosts to oVirt 4.1.5. I'll check with another host to reboot it and
try again. Perhaps some other daemon (libvirt/supervdsm or I don't know what)
has to be restarted.
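
(If it really is stale daemons, restarting the virt stack on a host in
maintenance should be equivalent to the reboot; a sketch, assuming the
standard service names on an oVirt 4.1 host:)

systemctl restart libvirtd supervdsmd vdsmd
systemctl status vdsmd --no-pager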

Bye.

On 25.08.2017 at 14:14, Ralf Schenk wrote:
>
> Hello,
>
> setting DNS glusterfs.rxmgmt.databay.de to only one IP didn't change
> anything.
>
> [root@microcloud22 ~]# dig glusterfs.rxmgmt.databay.de
>
> ; <<>> DiG 9.9.4-RedHat-9.9.4-50.el7_3.1 <<>> glusterfs.rxmgmt.databay.de
> ;; global options: +cmd
> ;; Got answer:
> ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35135
> ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 6
>
> ;; OPT PSEUDOSECTION:
> ; EDNS: version: 0, flags:; udp: 4096
> ;; QUESTION SECTION:
> ;glusterfs.rxmgmt.databay.de.   IN  A
>
> ;; ANSWER SECTION:
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.121
>
> ;; AUTHORITY SECTION:
> rxmgmt.databay.de.  84600   IN  NS  ns3.databay.de.
> rxmgmt.databay.de.  84600   IN  NS  ns.databay.de.
>
> vdsm.log still shows:
> 2017-08-25 14:02:38,476+0200 INFO  (periodic/0) [vdsm.api] FINISH
> repoStats return={u'7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96': {'code': 0,
> 'actual': True, 'version': 4, 'acquired': True, 'delay':
> '0.000295126', 'lastCheck': '0.8', 'valid': True},
> u'2b2a44fc-f2bd-47cd-b7af-00be59e30a35': {'code': 0, 'actual': True,
> 'version': 0, 'acquired': True, 'delay': '0.000611748', 'lastCheck':
> '3.6', 'valid': True}, u'5d99af76-33b5-47d8-99da-1f32413c7bb0':
> {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay':
> '0.000324379', 'lastCheck': '3.6', 'valid': True},
> u'a7fbaaad-7043-4391-9523-3bedcdc4fb0d': {'code': 0, 'actual': True,
> 'version': 0, 'acquired': True, 'delay': '0.000718626', 'lastCheck':
> '4.1', 'valid': True}} from=internal,
> task_id=ec205bf0-ff00-4fac-97f0-e6a7f3f99492 (api:52)
> 2017-08-25 14:02:38,584+0200 ERROR (migsrc/ffb71f79) [virt.vm]
> (vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') failed to initialize
> gluster connection (src=0x7fd82001fc30 priv=0x7fd820003ac0): Success
> (migration:287)
> 2017-08-25 14:02:38,619+0200 ERROR (migsrc/ffb71f79) [virt.vm]
> (vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') Failed to migrate
> (migration:429)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
> 411, in run
>     self._startUnderlyingMigration(time.time())
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
> 487, in _startUnderlyingMigration
>     self._perform_with_conv_schedule(duri, muri)
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
> 563, in _perform_with_conv_schedule
>     self._perform_migration(duri, muri)
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
> 529, in _perform_migration
>     self._vm._dom.migrateToURI3(duri, params, flags)
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line
> 69, in f
>     ret = attr(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",
> line 123, in wrapper
>     ret = f(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 944, in
> wrapper
>     return func(inst, *args, **kwargs)
>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1939, in
> migrateToURI3
>     if ret == -1: raise libvirtError ('virDomainMigrateToURI3()
> failed', dom=self)
> libvirtError: failed to initialize gluster connection
> (src=0x7fd82001fc30 priv=0x7fd820003ac0): Success
>
>
> One thing I noticed in destination vdsm.log:
> 2017-08-25 10:38:03,413+0200 ERROR (jsonrpc/7) [virt.vm]
> (vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') *Alias not found for
> device type disk during migration at destination host (vm:4587)*
> 2017-08-25 10:38:03,478+0200 INFO  (jsonrpc/7) [root]  (hooks:108)
> 2017-08-25 10:38:03,492+0200 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer]
> RPC call VM.migrationCreate succeeded in 0.51 seconds (__init__:539)
> 2017-08-25 10:38:03,669+0200 INFO  (jsonrpc/2) [vdsm.api] START
> destroy(gracefulAttempts=1) from=:::172.16.252.122,45736 (api:46)
> 2017-08-25 10:38:03,669+0200 INFO  (jsonrpc/2) [virt.vm]
> (vmId='ffb71f79-54cd-4

Re: [ovirt-users] oVirt 4.1.5 Finally GlusterFS via lbgfapi

2017-08-25 Thread Ralf Schenk
Hello,

setting DNS glusterfs.rxmgmt.databay.de to only one IP didn't change
anything.

[root@microcloud22 ~]# dig glusterfs.rxmgmt.databay.de

; <<>> DiG 9.9.4-RedHat-9.9.4-50.el7_3.1 <<>> glusterfs.rxmgmt.databay.de
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35135
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 6

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;glusterfs.rxmgmt.databay.de.   IN  A

;; ANSWER SECTION:
glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.121

;; AUTHORITY SECTION:
rxmgmt.databay.de.  84600   IN  NS  ns3.databay.de.
rxmgmt.databay.de.  84600   IN  NS  ns.databay.de.

vdsm.log still shows:
2017-08-25 14:02:38,476+0200 INFO  (periodic/0) [vdsm.api] FINISH
repoStats return={u'7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96': {'code': 0,
'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000295126',
'lastCheck': '0.8', 'valid': True},
u'2b2a44fc-f2bd-47cd-b7af-00be59e30a35': {'code': 0, 'actual': True,
'version': 0, 'acquired': True, 'delay': '0.000611748', 'lastCheck':
'3.6', 'valid': True}, u'5d99af76-33b5-47d8-99da-1f32413c7bb0': {'code':
0, 'actual': True, 'version': 4, 'acquired': True, 'delay':
'0.000324379', 'lastCheck': '3.6', 'valid': True},
u'a7fbaaad-7043-4391-9523-3bedcdc4fb0d': {'code': 0, 'actual': True,
'version': 0, 'acquired': True, 'delay': '0.000718626', 'lastCheck':
'4.1', 'valid': True}} from=internal,
task_id=ec205bf0-ff00-4fac-97f0-e6a7f3f99492 (api:52)
2017-08-25 14:02:38,584+0200 ERROR (migsrc/ffb71f79) [virt.vm]
(vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') failed to initialize
gluster connection (src=0x7fd82001fc30 priv=0x7fd820003ac0): Success
(migration:287)
2017-08-25 14:02:38,619+0200 ERROR (migsrc/ffb71f79) [virt.vm]
(vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') Failed to migrate
(migration:429)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
411, in run
    self._startUnderlyingMigration(time.time())
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
487, in _startUnderlyingMigration
    self._perform_with_conv_schedule(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
563, in _perform_with_conv_schedule
    self._perform_migration(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
529, in _perform_migration
    self._vm._dom.migrateToURI3(duri, params, flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line
69, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",
line 123, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 944, in
wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1939, in
migrateToURI3
    if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed',
dom=self)
libvirtError: failed to initialize gluster connection
(src=0x7fd82001fc30 priv=0x7fd820003ac0): Success


One thing I noticed in destination vdsm.log:
2017-08-25 10:38:03,413+0200 ERROR (jsonrpc/7) [virt.vm]
(vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') *Alias not found for
device type disk during migration at destination host (vm:4587)*
2017-08-25 10:38:03,478+0200 INFO  (jsonrpc/7) [root]  (hooks:108)
2017-08-25 10:38:03,492+0200 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer]
RPC call VM.migrationCreate succeeded in 0.51 seconds (__init__:539)
2017-08-25 10:38:03,669+0200 INFO  (jsonrpc/2) [vdsm.api] START
destroy(gracefulAttempts=1) from=:::172.16.252.122,45736 (api:46)
2017-08-25 10:38:03,669+0200 INFO  (jsonrpc/2) [virt.vm]
(vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') Release VM resources (vm:4254)
2017-08-25 10:38:03,670+0200 INFO  (jsonrpc/2) [virt.vm]
(vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') Stopping connection
(guestagent:430)
2017-08-25 10:38:03,671+0200 INFO  (jsonrpc/2) [vdsm.api] START
teardownImage(sdUUID=u'5d99af76-33b5-47d8-99da-1f32413c7bb0',
spUUID=u'0001-0001-0001-0001-00b9',
imgUUID=u'9c007b27-0ab7-4474-9317-a294fd04c65f', volUUID=None)
from=:::172.16.252.122,45736,
task_id=4878dd0c-54e9-4bef-9ec7-446b110c9d8b (api:46)
2017-08-25 10:38:03,671+0200 INFO  (jsonrpc/2) [vdsm.api] FINISH
teardownImage return=None from=:::172.16.252.122,45736,
task_id=4878dd0c-54e9-4bef-9ec7-446b110c9d8b (api:52)
2017-08-25 10:38:03,672+0200 INFO  (jsonrpc/2) [virt.vm]
(vmId='ffb71f79-54cd-4f0e-b6b5-3670236cb497') Stopping connection
(guestagent:430)




On 25.08.2017 at 14:03, Denis Chaplygin wrote:
> Hello!
>
> On Fri, Aug 25, 2017 at 1:40 PM, Ralf Schenk <r...@databay.de> wrote:
>
> Hello,
>
> I'm usi

Re: [ovirt-users] oVirt 4.1.5 Finally GlusterFS via lbgfapi

2017-08-25 Thread Ralf Schenk
Hello,

I have been using the DNS-balanced gluster hostname for years now, not
only with oVirt. No software so far had a problem with it. And setting the
hostname to only one host of course breaks one advantage of a
distributed/replicated cluster file system, namely load-balancing the
connections to the storage and/or failover if one host is missing. In
earlier oVirt it wasn't possible to specify something like
"backupvolfile-server" for a highly available hosted-engine rollout (which
I use).

I have already used live migration in such a setup. This was done with a
pure libvirt/virsh setup and later using OpenNebula.
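
(For reference, newer oVirt versions cover at least the failover part
without round-robin DNS by letting you list fallback servers in the
storage domain's mount options; an assumed sketch:)

Path:          172.16.252.121:/gv0
Mount options: backup-volfile-servers=172.16.252.122:172.16.252.123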

Bye



On 25.08.2017 at 13:11, Denis Chaplygin wrote:
> Hello!
>
> On Fri, Aug 25, 2017 at 11:05 AM, Ralf Schenk <r...@databay.de> wrote:
>
>
> I replayed migration (10:38:02 local time) and recorded vdsm.log
> of source and destination as attached. I can't find anything in
> the gluster logs that shows an error. One information: my FQDN
> glusterfs.rxmgmt.databay.de <http://glusterfs.rxmgmt.databay.de>
> points to all the gluster hosts:
>
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.121
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.125
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.127
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.122
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.124
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.123
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.126
> glusterfs.rxmgmt.databay.de. 84600 IN   A   172.16.252.128
>
> I double-checked all gluster hosts. They are all configured the
> same regarding "option rpc-auth-allow-insecure on". No iptables
> rules on the hosts.
>
>
> Do you use 'glusterfs.rxmgmt.databay.de' as a storage domain host name?
> I'm not a gluster guru, but I'm afraid that some internal gluster
> client code may go crazy when it receives a different address or
> several IP addresses every time. Is it possible to try with separate
> names? You can create a storage domain using 172.16.252.121, for
> example, and it should work, bypassing your DNS. If it is possible,
> could you please do that and retry live migration?

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>
    
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.1.5 Finally GlusterFS via lbgfapi

2017-08-24 Thread Ralf Schenk
Hello,

I don't want to migrate a "storage domain" but a VM from host to host.
Live migration of a storage domain migrates all the contents of a storage
domain to another one, as I understand it.
Bye


On 24.08.2017 at 15:55, Gianluca Cecchi wrote:
>
>
> On Thu, Aug 24, 2017 at 3:07 PM, Ralf Schenk <r...@databay.de> wrote:
>
> Dear List,
>
> finally its there : Ovirt VM's can use native gluster via
> libgfapi. I was able to start a vm on gluster after setting
> "engine-config -s LibgfApiSupported=true"
>
>
> [snip] 
>
> But I'm not able to migrate the machine live to another host in
> the cluster. The manager only states "Migration failed".
>
> I did this already years ago without a management interface, by only
> using libvirt commands on gluster. Why is this still not working ?
>
> Since gluster is a networked protocol I can't see any reason for it.
>
> I followed all the bugs like
> https://bugzilla.redhat.com/show_bug.cgi?id=1022961 since last
> year and saw them being worked on.
>
>
> Based on official announcement here, it is a known problem:
> http://lists.ovirt.org/pipermail/users/2017-August/083771.html
>
> "
> Due  to a known issue [6], using this will break live storage migration.
> This is expected to be fixed soon. If you do not use live storage
> migration you can give it a try.
>
> ...
>
> [6] https://bugzilla.redhat.com/show_bug.cgi?id=1306562
> "
> I see that it is currently in ASSIGNED state with high priority and
> severity
>
> HIH,
> Gianluca
>

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>
    
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.1.5 Finally GlusterFS via lbgfapi

2017-08-24 Thread Ralf Schenk
Hello,

nice to hear it worked for you.

Attached you'll find the vdsm.log (from the migration source) including
the error, and the engine.log, which looks ok.

Hostnames/IP addresses are correct and use the ovirtmgmt network.

I checked (on both hosts):

[root@microcloud22 glusterfs]# gluster volume get gv0 storage.owner-uid
Option  Value
--  -
storage.owner-uid   36
[root@microcloud22 glusterfs]# gluster volume get gv0 storage.owner-gid
Option  Value
--  -
storage.owner-gid   36
[root@microcloud22 glusterfs]# gluster volume get gv0 server.allow-insecure
Option  Value
--  -
server.allow-insecure   on

and /etc/glusterfs/glusterd.vol:

[root@microcloud22 glusterfs]# cat /etc/glusterfs/glusterd.vol
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off
    option ping-timeout 0
    option event-threads 1
    option rpc-auth-allow-insecure on
#   option transport.address-family inet6
#   option base-port 49152
end-volume

Bye


On 24.08.2017 at 15:25, Denis Chaplygin wrote:
> Hello!
>
> On Thu, Aug 24, 2017 at 3:07 PM Ralf Schenk <r...@databay.de> wrote:
>
> Responsiveness of the VM is much better (already noticeable when updating OS
> packages).
>
> But I'm not able to live-migrate the machine to another host in
> the cluster. The manager only states "Migration failed"
>
>
> Live migration worked for me. 
>
> Could you please provide some details? Engine/vdsm logs from +/- 10
> minutes around the migration failure.

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>
    
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


2017-08-24 15:29:13,761+0200 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call 
Host.getAllVmStats succeeded in 0.00 seconds (__init__:539)
2017-08-24 15:29:13,803+0200 INFO  (jsonrpc/7) [vdsm.api] START 
repoStats(options=None) from=:::172.16.252.200,56136, flow_id=1acd6778, 
task_id=dbf6819c-a29d-4bb7-bb72-a92acd82d887 (api:46)
2017-08-24 15:29:13,803+0200 INFO  (jsonrpc/7) [vdsm.api] FINISH repoStats 
return={u'7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96': {'code': 0, 'actual': True, 
'version': 4, 'acquired': True, 'delay': '0.000297791', 'lastCheck': '0.8', 
'valid': True}, u'2b2a44fc-f2bd-47cd-b7af-00be59e30a35': {'code': 0, 'actual': 
True, 'version': 0, 'acquired': True, 'delay': '0.000710249', 'lastCheck': 
'0.8', 'valid': True}, u'5d99af76-33b5-47d8-99da-1f32413c7bb0': {'code': 0, 
'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000295381', 
'lastCheck': '0.8', 'valid': True}, u'a7fbaaad-7043-4391-9523-3bedcdc4fb0d': 
{'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': 
'0.000791434', 'lastCheck': '0.8', 'valid': True}} 
from=:::172.16.252.200,56136, flow_id=1acd6778, 
task_id=dbf6819c-a29d-4bb7-bb72-a92acd82d887 (api:52)
2017-08-24 15:29:13,807+0200 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call 
Host.getStats succeeded in 0.00 seconds (__init__:539)
2017-08-24 15:29:14,296+0200 INFO  (jsonrpc/1) [vdsm.api] START 
migrate(params={u'incomingLimit': 2, u'src': u'microcloud22.rxmgmt.databay.de', 
u'dstqemu': u'172.16.252.124', u'autoConverge': u'true', u'tunneled': u'false', 
u'enableGuestEvents': True, u'dst': u'microcloud24.rxmgmt.databay.de:54321', 
u'convergenceSchedule': {u'init': [{u'params': [u'100'], u'name': 
u'setDowntime'}], u'stalling': [{u'action': {u'params': [u'150'], u'name': 
u'setDowntime'}, u'limit': 1}, {u'action': {u'params': [u'200'], u'name': 
u'setDowntime'}, u'limit': 2}, {u'action': {u'params': [u'300'], u'name': 
u'setDowntime'}, u'limit': 3}, {u'action': {u'params': [u'400'], u'name': 
u'setDowntime'}, u'limit': 4}, {u'action': {u'params': [u'500'], u'name': 
u'setDowntime'}, u'limit': 6}, {u'action': {u'params': [], u'name': u'abort'}, 
u'limit': -1}]}, u'vmId': u'ffb71f79-54cd-4f0e-b6b5-3670236cb497', 
u'abortOnError': u'true', u'outgoingLimit': 2, u'compressed': u'false', 
u'method': u'online', 'mode': 'remote'}) from=:::172.16.252.200,56136, 
flow_id=6c91f368-aa5d-47f4-8d4a-29f6f3005dc6 (api:46)
2017-08-24 15:29:14,298+0200 INFO  (jsonrpc/1) [vdsm.api

[ovirt-users] oVirt 4.1.5 Finally GlusterFS via libgfapi

2017-08-24 Thread Ralf Schenk
Dear List,

finally it's there: oVirt VMs can use native gluster via libgfapi. I
was able to start a VM on gluster after setting "engine-config -s
LibgfApiSupported=true".
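
For anyone following along, enabling it amounts to roughly this on the engine machine (a sketch; on some versions the setting is per cluster level and needs an extra --cver argument, which is an assumption here):

    engine-config -s LibgfApiSupported=true
    systemctl restart ovirt-engine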

This was the disk definition as seen by "virsh -r dumpxml myvm" on the host:

    [the disk XML markup was stripped by the list archive; only the disk serial 9c007b27-0ab7-4474-9317-a294fd04c65f survives]

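For comparison, a libgfapi-backed disk in libvirt domain XML typically looks roughly like this (values are illustrative, not the actual definition of this VM):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='gluster' name='gv0/path/to/image.raw'>
        <host name='glusterfs.example.com' port='24007'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>
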
Responsiveness of the VM is much better (already noticeable when updating OS
packages).

But I'm not able to live-migrate the machine to another host in the
cluster. The manager only states "Migration failed".

I already did this years ago, without a management interface, using only
libvirt commands on gluster. Why is this still not working?

Since gluster is a network protocol, I can't see any reason for it.

I followed all the bugs like
https://bugzilla.redhat.com/show_bug.cgi?id=1022961 since last year and
saw them being worked on.

Now I'm really disappointed. Running something like MySQL in a VM on
gluster via FUSE is crap. :-(

Not being able to migrate machines in an oVirt cluster is crap too, if
one wants to stay current. A few VMs on my oVirt have been running for ~200
days now while I constantly upgrade to the latest oVirt release. That, of
course, requires live migration.

Bye


-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>
    
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Very poor GlusterFS performance

2017-06-19 Thread Ralf Schenk
Hello,

Gluster performance is bad. That's why I asked for native qemu/libgfapi
access to gluster volumes for oVirt VMs, which I thought had been possible
since 3.6.x. The documentation is misleading, and even in 4.1.2 oVirt is
still using FUSE to mount gluster-based VM disks.

Bye


Am 19.06.2017 um 17:23 schrieb Darrell Budic:
> Chris-
>
> You probably need to head over to gluster-us...@gluster.org
> <mailto:gluster-us...@gluster.org> for help with performance issues.
>
> That said, what kind of performance are you getting, via some form or
> testing like bonnie++ or even dd runs? Raw bricks vs gluster
> performance is useful to determine what kind of performance you’re
> actually getting.
>
> Beyond that, I’d recommend dropping the arbiter bricks and re-adding
> them as full replicas, they can’t serve distributed data in this
> configuration and may be slowing things down on you. If you’ve got a
> storage network setup, make sure it’s using the largest MTU it can,
> and consider adding/testing these settings that I use on my main
> storage volume:
>
> performance.io <http://performance.io>-thread-count: 32
> client.event-threads: 8
> server.event-threads: 3
> performance.stat-prefetch: on
>
> Good luck,
>
>   -Darrell
>
>
>> On Jun 19, 2017, at 9:46 AM, Chris Boot <bo...@bootc.net
>> <mailto:bo...@bootc.net>> wrote:
>>
>> Hi folks,
>>
>> I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
>> configuration. My VMs run off a replica 3 arbiter 1 volume comprised of
>> 6 bricks, which themselves live on two SSDs in each of the servers (one
>> brick per SSD). The bricks are XFS on LVM thin volumes straight onto the
>> SSDs. Connectivity is 10G Ethernet.
>>
>> Performance within the VMs is pretty terrible. I experience very low
>> throughput and random IO is really bad: it feels like a latency issue.
>> On my oVirt nodes the SSDs are not generally very busy. The 10G network
>> seems to run without errors (iperf3 gives bandwidth measurements of >=
>> 9.20 Gbits/sec between the three servers).
>>
>> To put this into perspective: I was getting better behaviour from NFS4
>> on a gigabit connection than I am with GlusterFS on 10G: that doesn't
>> feel right at all.
>>
>> My volume configuration looks like this:
>>
>> Volume Name: vmssd
>> Type: Distributed-Replicate
>> Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 2 x (2 + 1) = 6
>> Transport-type: tcp
>> Bricks:
>> Brick1: ovirt3:/gluster/ssd0_vmssd/brick
>> Brick2: ovirt1:/gluster/ssd0_vmssd/brick
>> Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
>> Brick4: ovirt3:/gluster/ssd1_vmssd/brick
>> Brick5: ovirt1:/gluster/ssd1_vmssd/brick
>> Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
>> Options Reconfigured:
>> nfs.disable: on
>> transport.address-family: inet6
>> performance.quick-read: off
>> performance.read-ahead: off
>> performance.io <http://performance.io>-cache: off
>> performance.stat-prefetch: off
>> performance.low-prio-threads: 32
>> network.remote-dio: off
>> cluster.eager-lock: enable
>> cluster.quorum-type: auto
>> cluster.server-quorum-type: server
>> cluster.data-self-heal-algorithm: full
>> cluster.locking-scheme: granular
>> cluster.shd-max-threads: 8
>> cluster.shd-wait-qlength: 1
>> features.shard: on
>> user.cifs: off
>> storage.owner-uid: 36
>> storage.owner-gid: 36
>> features.shard-block-size: 128MB
>> performance.strict-o-direct: on
>> network.ping-timeout: 30
>> cluster.granular-entry-heal: enable
>>
>> I would really appreciate some guidance on this to try to improve things
>> because at this rate I will need to reconsider using GlusterFS
>> altogether.
>>
>> Cheers,
>> Chris
>>
>> -- 
>> Chris Boot
>> bo...@bootc.net <mailto:bo...@bootc.net>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-06 Thread Ralf Schenk
Hello,

I set the host to maintenance mode and tried to undeploy the engine via the
GUI. The action in the GUI doesn't show an error, but afterwards it still
shows only "Undeploy" on the hosted-engine tab of the host.

Even removing the host from the cluster doesn't work, because the GUI
says "The hosts marked with * still have hosted engine deployed on
them. Hosted engine should be undeployed before they are removed".

Bye
Am 06.02.2017 um 11:44 schrieb Simone Tiraboschi:
>
>
> On Sat, Feb 4, 2017 at 11:52 AM, Ralf Schenk <r...@databay.de
> <mailto:r...@databay.de>> wrote:
>
> Hello,
>
> I have set up 3 hosts for the engine; 2 of them are working correctly.
> There is no other host that even has the broker/agent installed. Is it
> possible that the error occurs because the hosts are multihomed
> (management IP, IP for storage) and can communicate via different
> IPs?
>
> Having multiple logical networks for storage, management and so on is
> a good practice and it's advised so I tend to exclude any issue there.
> The point is why your microcloud27.sub.mydomain.de
> <http://microcloud27.sub.mydomain.de> fails acquiring a lock as host 3.
> Probably the simplest fix is just setting it in maintenance mode from
> the engine, removing it and deploying it from the engine as an hosted
> engine host again. 
>
>  

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-04 Thread Ralf Schenk
Hello,

I have set up 3 hosts for the engine; 2 of them are working correctly. There
is no other host that even has the broker/agent installed. Is it possible
that the error occurs because the hosts are multihomed (management IP, IP
for storage) and can communicate via different IPs?

hosted-engine --vm-status on both working hosts looks correct (3 is out
of order...):

[root@microcloud21 ~]# hosted-engine --vm-status


--== Host 1 status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : microcloud21.sub.mydomain.de
Host ID: 1
Engine status  : {"health": "good", "vm": "up",
"detail": "up"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : 5941227d
local_conf_timestamp   : 152316
Host timestamp : 152302
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=152302 (Sat Feb  4 11:49:29 2017)
host-id=1
score=3400
vm_conf_refresh_time=152316 (Sat Feb  4 11:49:43 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineUp
stopped=False


--== Host 2 status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : microcloud24.sub.mydomain.de
Host ID: 2
Engine status  : {"reason": "vm not running on this
host", "health": "bad", " vm": "down",
"detail": "unknown"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : 77e25433
local_conf_timestamp   : 157637
Host timestamp : 157623
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=157623 (Sat Feb  4 11:49:34 2017)
host-id=2
score=3400
vm_conf_refresh_time=157637 (Sat Feb  4 11:49:48 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False


--== Host 3 status ==--

conf_on_shared_storage : True
Status up-to-date  : False
Hostname   : microcloud27.sub.mydomain.de
Host ID: 3
Engine status  : unknown stale-data
Score  : 0
stopped: True
Local maintenance  : False
crc32  : 74798986
local_conf_timestamp   : 77946
Host timestamp : 77932
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=77932 (Fri Feb  3 15:19:25 2017)
host-id=3
score=0
vm_conf_refresh_time=77946 (Fri Feb  3 15:19:39 2017)
conf_on_shared_storage=True
maintenance=False
    state=AgentStopped
stopped=True

Am 03.02.2017 um 19:20 schrieb Simone Tiraboschi:

>
>
> On Fri, Feb 3, 2017 at 5:22 PM, Ralf Schenk <r...@databay.de
> <mailto:r...@databay.de>> wrote:
>
> Hello,
>
> of course:
>
> [root@microcloud27 mnt]# sanlock client status
> daemon 8a93c9ea-e242-408c-a63d-a9356bb22df5.microcloud
> p -1 helper
> p -1 listener
> p -1 status
>
> sanlock.log attached. (Beginning 2017-01-27 where everything was fine)
>
> Thanks, the issue is here:
> 2017-02-02 19:01:22+0100 4848 [1048]: s36 lockspace 
> 7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96:3:/rhev/data-center/mnt/glusterSD/glusterfs.sub.mydomain.de:_engine/7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96/dom_md/ids:0
> 2017-02-02 19:03:42+0100 4988 [12983]: s36 delta_acquire host_id 3 busy1 3 15 
> 13129 7ad427b1-fbb6-4cee-b9ee-01f596fddfbb.microcloud
> 2017-02-02 19:03:43+0100 4989 [1048]: s36 add_lockspace fail result -262
> Could you please check if you have other hosts contending for the same
> ID (id=3 in this case).
>  
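
A quick way to see which ID each hosted-engine host claims (assuming the default config location) is to run this on every host and check that the values are unique:

    grep host_id /etc/ovirt-hosted-engine/hosted-engine.conf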

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Klaus Scholzen (RA)


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Ralf Schenk
domain FilesystemBackend, options
{'dom_type': 'glusterfs', 'sd_uuid':
'7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96'}: Request failed: 
2017-02-03 15:29:51,095 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC
call Host.getAllVmStats succeeded in 0.00 seconds (__init__:515)
2017-02-03 15:29:51,219 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC
call Host.setKsmTune succeeded in 0.00 seconds (__init__:515)
2017-02-03 15:30:01,444 INFO  (periodic/1) [dispatcher] Run and protect:
repoStats(options=None) (logUtils:49)
2017-02-03 15:30:01,444 INFO  (periodic/1) [dispatcher] Run and protect:
repoStats, Return response: {} (logUtils:52)
2017-02-03 15:30:01,448 ERROR (periodic/1) [root] failed to retrieve
Hosted Engine HA info (api:252)



Am 03.02.2017 um 13:39 schrieb Simone Tiraboschi:
> I see there an ERROR on stopMonitoringDomain but I cannot see the
> correspondent  startMonitoringDomain; could you please look for it?
>
> On Fri, Feb 3, 2017 at 1:16 PM, Ralf Schenk <r...@databay.de
> <mailto:r...@databay.de>> wrote:
>
> Hello,
>
> attached is my vdsm.log from the host with hosted-engine-ha around
> the time-frame of agent timeout that is not working anymore for
> engine (it works in Ovirt and is active). It simply isn't working
> for engine-ha anymore after Update.
>
> At 2017-02-02 19:25:34,248 you'll find an error corresponoding to
> agent timeout error.
>
> Bye
>
>
>
> Am 03.02.2017 um 11:28 schrieb Simone Tiraboschi:
>>
>> 3. Three of my hosts have the hosted engine deployed for
>> ha. First all three where marked by a crown (running was
>> gold and others where silver). After upgrading the 3 Host
>> deployed hosted engine ha is not active anymore.
>>
>> I can't get this host back with working
>> ovirt-ha-agent/broker. I already rebooted, manually
>> restarted the services but It isn't able to get cluster
>> state according to
>> "hosted-engine --vm-status". The other hosts state the
>> host status as "unknown stale-data"
>>
>> I already shut down all agents on all hosts and issued a
>> "hosted-engine --reinitialize-lockspace" but that didn't
>> help.
>>
>> Agents stops working after a timeout-error according to log:
>>
>> MainThread::INFO::2017-02-02
>> 
>> 19:24:52,040::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>> VDSM domain monitor status: PENDING
>> MainThread::INFO::2017-02-02
>> 
>> 19:24:59,185::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>> VDSM domain monitor status: PENDING
>> MainThread::INFO::2017-02-02
>> 
>> 19:25:06,333::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>> VDSM domain monitor status: PENDING
>> MainThread::INFO::2017-02-02
>> 
>> 19:25:13,554::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>> VDSM domain monitor status: PENDING
>> MainThread::INFO::2017-02-02
>> 
>> 19:25:20,710::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>> VDSM domain monitor status: PENDING
>> MainThread::INFO::2017-02-02
>> 
>> 19:25:27,865::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
>> VDSM domain monitor status: PENDING
>> MainThread::ERROR::2017-02-02
>> 
>> 19:25:27,866::hosted_engine::815::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_domain_monitor)
>> Failed to start monitoring domain
>> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96,
>> host_id=3): timeout during domain acquisition
>> MainThread::WARNING::2017-02-02
>> 
>> 19:25:27,866::hosted_engine::469::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
>> Error while monitoring engine: Failed to start monitoring
>> domain (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96,
>> host_id=3): timeout during domain acquisition
>> MainThread::WARNING::2017-02-02
>> 

Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Ralf Schenk
ad96): Storage domain
> is member of pool: u'domain=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96'
> MainThread::INFO::2017-02-02
> 
> 19:25:34,254::agent::143::ovirt_hosted_engine_ha.agent.agent.Agent::(run)
> Agent shutting down
>
> Simone, Martin, can you please follow up on this?
>
>
> Ralph, could you please attach vdsm logs from on of your hosts for the
> relevant time frame?

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen




vdsm.log.gz
Description: application/gzip
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Ralf Schenk
Hello,

In reality my cluster is a hyper-converged cluster. But how do I tell
oVirt Engine that? Of course I activated the "Gluster" checkbox
(some versions ago, around 4.0.x), but that didn't change anything.

Bye
Am 03.02.2017 um 11:18 schrieb Ramesh Nachimuthu:
>> 2. I'm missing any gluster specific management features as my gluster is not
>> managable in any way from the GUI. I expected to see my gluster now in
>> dashboard and be able to add volumes etc. What do I need to do to "import"
>> my existing gluster (Only one volume so far) to be managable ?
>>
>>
> If it is a hyperconverged cluster, then all your hosts are already managed by 
> ovirt. So you just need to enable 'Gluster Service' in the Cluster, gluster 
> volume will be imported automatically when you enable gluster service. 
>
> If it is not a hyperconverged cluster, then you have to create a new cluster 
> and enable only 'Gluster Service'. Then you can import or add the gluster 
> hosts to this Gluster cluster. 
>
> You may also need to define a gluster network if you are using a separate 
> network for gluster data traffic. More at 
> http://www.ovirt.org/develop/release-management/features/network/select-network-for-gluster/
>
>
>

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Ralf Schenk
stedEngine::(_stop_domain_monitor)
Failed to stop monitoring domain
(sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96): Storage domain is member
of pool: u'domain=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96'
MainThread::INFO::2017-02-02
19:25:34,254::agent::143::ovirt_hosted_engine_ha.agent.agent.Agent::(run)
Agent shutting down

The gluster volume of the engine is mounted correctly on the host and
accessible. Files are also readable, etc. No clue what to do.

4. Last but not least: oVirt is still using FUSE to access VM disks on
gluster. I know it is scheduled for 4.1.1, but it was already promised in
3.5.x and has been scheduled for every release since then. I had this
feature with OpenNebula two years ago and performance is so much
better. So please GET IT IN!

Bye



Am 02.02.2017 um 13:19 schrieb Sandro Bonazzola:
> Hi,
> did you install/update to 4.1.0? Let us know your experience!
> We end up knowing only when things doesn't work well, let us know it
> works fine for you :-)

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] new qemu-kvm-ev available

2017-01-28 Thread Ralf Schenk
Hello,

As I did it without any problem on 8 hosts, migrating VMs from host to
host while upgrading, I think you can do this as well...
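
A rough sketch of that rolling approach, per host (the engine-side steps are GUI actions, shown as comments):

    # 1. put the host into maintenance in the engine UI (running VMs migrate away)
    # 2. on the host itself:
    yum update qemu-kvm-ev
    # 3. activate the host again in the engine UI, then continue with the next host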

Bye


Am 27.01.2017 um 13:51 schrieb Gianluca Cecchi:
> This morning I noticed the message about updates availalble for my
> CentOS 7.3 host.
>
> A yum update on it proposed qemu-kvm-ev 2.6.0-28.el7_3.3.1 and related
> packages
>
> The changelog from the current one (2.6.0-27.1.el7) seems quite big
> (see below)...
> I applied it and I'm going to test.
>
> After working on the first host and activating it inside the cluster, do
> you think I can live migrate a VM from a 2.6.0-27.1.el7 host to
> a 2.6.0-28.el7_3.3.1 one?
>
> Gianluca
>
> * Fri Jan 20 2017 Sandro Bonazzola <sbona...@redhat.com
> <mailto:sbona...@redhat.com>> - ev-2.6.0-28.el7_3.3.1
> - Removing RH branding from package name
>
> * Wed Jan 04 2017 Miroslav Rezanina <mreza...@redhat.com
> <mailto:mreza...@redhat.com>> - rhev-2.6.0-28.el7_3.3
> - kvm-pc_piix-fix-compat-props-typo-for-RHEL6-machine-type.patch
> [bz#1408122]
> - kvm-net-don-t-poke-at-chardev-internal-QemuOpts.patch [bz#1410200]
> - Resolves: bz#1408122
>   (Opteron_G4 CPU model broken in QEMU 2.6 with RHEL 6 machine type)
> - Resolves: bz#1410200
>   (qemu gets SIGSEGV when hot-plug a vhostuser network)
>
> * Fri Dec 09 2016 Miroslav Rezanina <mreza...@redhat.com
> <mailto:mreza...@redhat.com>> - rhev-2.6.0-28.el7_3.2
> - kvm-numa-do-not-leak-NumaOptions.patch [bz#1397745]
> - kvm-char-free-the-tcp-connection-data-when-closing.patch [bz#1397745]
> - kvm-char-free-MuxDriver-when-closing.patch [bz#1397745]
> - kvm-ahci-free-irqs-array.patch [bz#1397745]
> - kvm-virtio-input-free-config-list.patch [bz#1397745]
> - kvm-usb-free-USBDevice.strings.patch [bz#1397745]
> - kvm-usb-free-leaking-path.patch [bz#1397745]
> - kvm-ahci-fix-sglist-leak-on-retry.patch [bz#1397745]
> - kvm-virtio-add-virtqueue_rewind.patch [bz#1402509]
> - kvm-virtio-balloon-fix-stats-vq-migration.patch [bz#1402509]
> - kvm-virtio-blk-Release-s-rq-queue-at-system_reset.patch [bz#1393041]
> - kvm-virtio-blk-Remove-stale-comment-about-draining.patch [bz#1393041]
> - Resolves: bz#1393041
>   (system_reset should clear pending request for error (virtio-blk))
> - Resolves: bz#1397745
>   (Backport memory leak fixes from QEMU 2.7)
> - Resolves: bz#1402509
>   (virtio-balloon stats virtqueue does not migrate properly)
>
> * Wed Nov 30 2016 Miroslav Rezanina <mreza...@redhat.com
> <mailto:mreza...@redhat.com>> - rhev-2.6.0-28.el7_3.1
> - kvm-ide-fix-halted-IO-segfault-at-reset.patch [bz#1393043]
> - kvm-atapi-fix-halted-DMA-reset.patch [bz#1393043]
> - kvm-ahci-clear-aiocb-in-ncq_cb.patch [bz#1393736]
> - kvm-Workaround-rhel6-ctrl_guest_offloads-machine-type-mi.patch
> [bz#1392876]
> - kvm-Postcopy-vs-xbzrle-Don-t-send-xbzrle-pages-once-in-p.patch
> [bz#1395360]
> - kvm-ui-fix-refresh-of-VNC-server-surface.patch [bz#1392881]
> - Resolves: bz#1392876
>   (windows guests migration from rhel6.8-z to rhel7.3 with
> virtio-net-pci fail)
> - Resolves: bz#1392881
>   (Graphic can't be showed out quickly if guest graphic mode is vnc)
> - Resolves: bz#1393043
>   (system_reset should clear pending request for error (IDE))
> - Resolves: bz#1393736
>   (qemu core dump when there is an I/O error on AHCI)
> - Resolves: bz#1395360
>   (Post-copy migration fails with XBZRLE compression)
>
> * Tue Sep 27 2016 Miroslav Rezanina <mreza...@redhat.com
> <mailto:mreza...@redhat.com>> - rhev-2.6.0-28.el7
> - kvm-ARM-ACPI-fix-the-AML-ID-format-for-CPU-devices.patch [bz#1373733]
> - Resolves: bz#1373733
>   (failed to run a guest VM with >= 12 vcpu under ACPI mode)
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Klaus Scholzen (RA)


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ANN] oVirt 4.1.0 First Release Candidate is now available

2017-01-26 Thread Ralf Schenk
Hello,

Reading the release notes at http://www.ovirt.org/release/4.1.0/ I see a
lot of changes regarding gluster, but I'm missing the most important
piece of information:

Will oVirt 4.1.0 finally use libgfapi to access gluster-based volumes, or
will we still see FUSE-based access?

This feature, which was misleadingly described years ago in many
blog posts, was really important to me when setting up a cluster with oVirt
(3.6.x at the beginning), so I was really disappointed with the FUSE access
that I got.
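
A quick way to see which access path a running VM actually got is to look at its disk element on the host (a sketch; the VM name is illustrative):

    virsh -r dumpxml myvm | grep -E "disk type|source "

A file-based source under /rhev/data-center/mnt/glusterSD/... means FUSE; a network source with protocol='gluster' means libgfapi.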

Bye


Am 23.01.2017 um 13:35 schrieb Sandro Bonazzola:
> The oVirt Project is pleased to announce the availability of the First
> Release candidate of oVirt 4.1.0 for testing, as of January 23rd, 2016
>
> This is pre-release software. Please take a look at our community page[1]
> to know how to ask questions and interact with developers and users.
> All issues or bugs should be reported via oVirt Bugzilla[2].
> This pre-release should not to be used in production.
>
> This update is the first release candidate of the 4.1 release series.
> 4.1.0 brings more than 250 enhancements and more than 700 bugfixes,
> including more than 300 high or urgent
> severity fixes, on top of oVirt 4.0 series
> See the release notes [3] for installation / upgrade instructions and a
> list of new features and bugs fixed.
>
>
> This release is available now for:
> * Fedora 24 (tech preview)
> * Red Hat Enterprise Linux 7.3 or later
> * CentOS Linux (or similar) 7.3 or later
>
> This release supports Hypervisor Hosts running:
> * Red Hat Enterprise Linux 7.3 or later
> * CentOS Linux (or similar) 7.3 or later
> * Fedora 24 (tech preview)
> * oVirt Node 4.1
>
> See the release notes draft [3] for installation / upgrade
> instructions and
> a list of new features and bugs fixed.
>
> Notes:
> - oVirt Live iso is already available[5]
> - oVirt Node NG iso is already available[5]
> - Hosted Engine appliance is already available
>
> A release management page including planned schedule is also available[4]
>
>
> Additional Resources:
> * Read more about the oVirt 4.1.0 release
> highlights:http://www.ovirt.org/release/4.1.0/
> * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
> * Check out the latest project news on the oVirt
> blog:http://www.ovirt.org/blog/
>
> [1] https://www.ovirt.org/community/
> [2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
> [3] http://www.ovirt.org/release/4.1.0/
> [4]
> http://www.ovirt.org/develop/release-management/releases/4.1/release-management/
> [5] http://resources.ovirt.org/pub/ovirt-4.1-pre/iso/
>
>
> -- 
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com <http://redhat.com>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Official Hyperconverged Gluster oVirt upgrade procedure?

2017-01-26 Thread Ralf Schenk
Hello,

I would appreciate any hint, too. I've been on 4.0.6 on CentOS 7.3 since
yesterday, but I'm worried about what I will need to do to upgrade and then
be able to manage gluster from the GUI.

Bye


Am 25.01.2017 um 21:32 schrieb Hanson:
> Hi Guys,
>
> Just wondering if we have an updated manual or whats the current
> procedure for upgrading the nodes in a hyperconverged ovirt gluster pool?
>
> Ie Nodes run 4.0 oVirt, as well as GlusterFS, and hosted-engine
> running in a gluster storage domain.
>
> Put node in maintenance mode and disable glusterfs from ovirt gui, run
> yum update?
>
> Thanks!
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [HEADS UP] CentOS 7.3 is rolling out, need qemu-kvm-ev 2.6

2016-12-13 Thread Ralf Schenk
Hello

By browsing the repository at
http://mirror.centos.org/centos/7/virt/x86_64/kvm-common/ I can't see
any qemu-kvm-ev-2.6.* RPM.

I think this will break if I update the oVirt hosts...

[root@microcloud21 yum.repos.d]# yum check-update | grep libvirt
libvirt.x86_64  2.0.0-10.el7_3.2
updates
libvirt-client.x86_64   2.0.0-10.el7_3.2
updates
libvirt-daemon.x86_64   2.0.0-10.el7_3.2
updates
libvirt-daemon-config-network.x86_642.0.0-10.el7_3.2
updates
libvirt-daemon-config-nwfilter.x86_64   2.0.0-10.el7_3.2
updates
libvirt-daemon-driver-interface.x86_64  2.0.0-10.el7_3.2
updates
libvirt-daemon-driver-lxc.x86_642.0.0-10.el7_3.2
updates
libvirt-daemon-driver-network.x86_642.0.0-10.el7_3.2
updates
libvirt-daemon-driver-nodedev.x86_642.0.0-10.el7_3.2
updates
libvirt-daemon-driver-nwfilter.x86_64   2.0.0-10.el7_3.2
updates
libvirt-daemon-driver-qemu.x86_64   2.0.0-10.el7_3.2
updates
libvirt-daemon-driver-secret.x86_64 2.0.0-10.el7_3.2
updates
libvirt-daemon-driver-storage.x86_642.0.0-10.el7_3.2
updates
libvirt-daemon-kvm.x86_64   2.0.0-10.el7_3.2
updates
libvirt-lock-sanlock.x86_64 2.0.0-10.el7_3.2
updates
libvirt-python.x86_64   2.0.0-2.el7 
base

[root@microcloud21 yum.repos.d]# yum check-update | grep qemu*
ipxe-roms-qemu.noarch   20160127-5.git6366fa7a.el7  
base
libvirt-daemon-driver-qemu.x86_64   2.0.0-10.el7_3.2
updates

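For what it's worth, a quick way to check whether a 2.6 build becomes visible once the Virt SIG release package mentioned in the reply below is installed (a sketch):

    yum clean expire-cache
    yum list available qemu-kvm-ev --showduplicates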

Am 13.12.2016 um 08:43 schrieb Sandro Bonazzola:
>
>
> On Mon, Dec 12, 2016 at 6:38 PM, Chris Adams <c...@cmadams.net
> <mailto:c...@cmadams.net>> wrote:
>
> Once upon a time, Sandro Bonazzola <sbona...@redhat.com
> <mailto:sbona...@redhat.com>> said:
> > In terms of ovirt repositories, qemu-kvm-ev 2.6 is available
> right now in
> > ovirt-master-snapshot-static, ovirt-4.0-snapshot-static, and
> ovirt-4.0-pre
> > (contains 4.0.6 RC4 rpms going to be announced in a few minutes.)
>
> Will qemu-kvm-ev 2.6 be added to any of the oVirt repos for prior
> versions (such as 3.5 or 3.6)?
>
>
> You can enable CentOS Virt SIG repo by running "yum install
> centos-release-qemu-ev" on your CentOS 7 systems.
> and you'll have updated qemu-kvm-ev.
>
>  
>
> --
> Chris Adams <c...@cmadams.net <mailto:c...@cmadams.net>>
> ___
> Users mailing list
> Users@ovirt.org <mailto:Users@ovirt.org>
> http://lists.phx.ovirt.org/mailman/listinfo/users
> <http://lists.phx.ovirt.org/mailman/listinfo/users>
>
>
>
>
> -- 
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com <http://redhat.com>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.phx.ovirt.org/mailman/listinfo/users

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list
Users@ovirt.org
http://lists.phx.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt and Ubuntu>=14.04?

2016-10-21 Thread Ralf Schenk
Hello,

you will enter hell when you try to achieve this on Ubuntu, at least in a
setup using glusterfs and LVM with a thin pool.

I tried hard to do it on 14.04.x, since I'm an Ubuntu fan, and went to
CentOS 7 after many wasted hours.

Bye


Am 21.10.2016 um 01:24 schrieb Charles Kozler:
> Probably not. There are a lot of difference between CentOS/RHEL &
> Ubuntu/Debian in this regard - paths like /etc/sysconfig and similar
> that it expects. You can run Ubuntu VMs, of course, which is why you
> found the guest utilities. If you want Debian-backed KVM solution you
> can look to Proxmox
>
> On Thu, Oct 20, 2016 at 7:22 PM, Jon Forrest
> <jon.forr...@locationlabs.com <mailto:jon.forr...@locationlabs.com>>
> wrote:
>
>
>
> On 10/20/16 4:11 PM, Charles Kozler wrote:
>
> oVirt is the upstream source project for RedHat Enterprise
> Virtualization (RHEV). As expected, its only supported on
> CentOS 7 (and
> older versions on 6)
>
>
> This makes sense. But, do either of these components work on Ubuntu,
> and, if so, how well?
>
> Thanks for any information.
>
> Jon Forrest
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to start service 'ovirt-imageio-proxy' after engine-setup

2016-08-25 Thread Ralf Schenk
Thank you,

I solved it by copying the websocket-proxy cert/key pair.
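
A sketch of what that copy typically amounts to, assuming the default oVirt PKI layout (adjust paths if yours differ):

    cp /etc/pki/ovirt-engine/certs/websocket-proxy.cer /etc/pki/ovirt-engine/certs/imageio-proxy.cer
    cp /etc/pki/ovirt-engine/keys/websocket-proxy.key.nopass /etc/pki/ovirt-engine/keys/imageio-proxy.key.nopass
    systemctl restart ovirt-imageio-proxy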

Bye


Am 24.08.2016 um 22:15 schrieb Yedidyah Bar David:
> On Wed, Aug 24, 2016 at 9:57 PM, Ralf Schenk <r...@databay.de
> <mailto:r...@databay.de>> wrote:
>
> Hello List,
>
> After upgrading ovirt-hosted-engine setup failed with "Failed to
> start service 'ovirt-imageio-proxy'".
>
> If try to start it manually via
>
> journalctl -xe shows following error:
> Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]:
> self._secure_server(config, server)
> Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]:
> File
> "/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/server.py", line
> Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]:
> server_side=True)
> Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]:
> File "/usr/lib64/python2.7/ssl.py", line 913, in wrap_socket
> Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]:
> ciphers=ciphers)
> Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]:
> File "/usr/lib64/python2.7/ssl.py", line 526, in __init__
> Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]:
> self._context.load_cert_chain(certfile, keyfile)
> Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]:
> IOError: [Errno 2] No such file or directory
> Aug 24 20:51:10 engine.mydomain.local systemd[1]:
> ovirt-imageio-proxy.service: main process exited, code=exited,
> status=1/FAILURE
> Aug 24 20:51:10 engine.mydomain.local systemd[1]: Failed to start
> oVirt ImageIO Proxy.
>
> Config (/etc/ovirt-imageio-proxy/ovirt-imageio-proxy.conf) points
> to a non-existing cert
> /etc/pki/ovirt-engine/certs/imageio-proxy.cer
> and nonexisting key:
> /etc/pki/ovirt-engine/keys/imageio-proxy.key.nopass
>
>
> Right, will be fixed in 4.0.3:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1365451
>  
>
>
> How to generate a correct cert and key for correct startup ?
>
>
> There is no simple single one-liner to do that for now.
>
> Search for "pki-enroll-pkcs12.sh" for examples if you really want to.
>
> You can copy e.g. websocket-proxy key/cert.
>
> You can use the 4.0 nightly repo - bugs in MODIFIED are already fixed
> there:
>
> http://www.ovirt.org/develop/dev-process/install-nightly-snapshot/
>
> You can answer 'no' to 'configure image i/o proxy?', if you don't need it.
> It's only needed for the new image uploader:
>
> http://www.ovirt.org/develop/release-management/features/storage/image-upload/
>
> Or you can wait for 4.0.3.
>
> Best,
>  
>
>
> Bye
> -- 
>
>
> *Ralf Schenk*
> fon +49 (0) 24 05 / 40 83 70
> <tel:%2B49%20%280%29%2024%2005%20%2F%2040%2083%2070>
> fax +49 (0) 24 05 / 40 83 759
> <tel:%2B49%20%280%29%2024%2005%20%2F%2040%2083%20759>
> mail *r...@databay.de* <mailto:r...@databay.de>
>   
> *Databay AG*
> Jens-Otto-Krag-Straße 11
> D-52146 Würselen
> *www.databay.de* <http://www.databay.de>
>
> Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
> Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari,
> Dipl.-Kfm. Philipp Hermanns
> Aufsichtsratsvorsitzender: Klaus Scholzen (RA)
>
> 
>
> ___
> Users mailing list
> Users@ovirt.org <mailto:Users@ovirt.org>
> http://lists.ovirt.org/mailman/listinfo/users
> <http://lists.ovirt.org/mailman/listinfo/users>
>
>
>
>
> -- 
> Didi

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Failed to start service 'ovirt-imageio-proxy' after engine-setup

2016-08-24 Thread Ralf Schenk
Hello List,

After upgrading, the hosted-engine setup failed with "Failed to start
service 'ovirt-imageio-proxy'".

If I try to start it manually,

journalctl -xe shows the following error:
Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]:
self._secure_server(config, server)
Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]: File
"/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/server.py", line
Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]:
server_side=True)
Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]: File
"/usr/lib64/python2.7/ssl.py", line 913, in wrap_socket
Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]:
ciphers=ciphers)
Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]: File
"/usr/lib64/python2.7/ssl.py", line 526, in __init__
Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]:
self._context.load_cert_chain(certfile, keyfile)
Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]:
IOError: [Errno 2] No such file or directory
Aug 24 20:51:10 engine.mydomain.local systemd[1]:
ovirt-imageio-proxy.service: main process exited, code=exited,
status=1/FAILURE
Aug 24 20:51:10 engine.mydomain.local systemd[1]: Failed to start oVirt
ImageIO Proxy.

Config (/etc/ovirt-imageio-proxy/ovirt-imageio-proxy.conf) points to a
non-existing cert
/etc/pki/ovirt-engine/certs/imageio-proxy.cer
and nonexisting key:
/etc/pki/ovirt-engine/keys/imageio-proxy.key.nopass

How do I generate a correct cert and key so the service starts up correctly?

Bye
-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Klaus Scholzen (RA)


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to connect Host to the Storage Domains hosted_storage.

2016-07-22 Thread Ralf Schenk
Hello,

I also see from the logs that all your storage domains that work are
mounted with nfsVersion='V4', but ovirt-nfs.netsec:/ovirt/hosted-engine is
mounted with nfsVersion='null'.
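
A quick way to confirm which NFS version a host actually negotiated for a given mount (illustrative; grep for whatever part of the export path applies):

    nfsstat -m
    mount | grep hosted-engine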

Bye


Am 22.07.2016 um 16:17 schrieb Simone Tiraboschi:
> On Fri, Jul 22, 2016 at 3:47 PM, Robert Story <rst...@tislabs.com> wrote:
>> Hello,
>>
>> I'm in the process of upgrading from 3.5.x to 3.6.x. My hosted engine and
>> hosts in the primary cluster are all upgraded and appear to be running fine.
>>
>> I have a second cluster of 2 machines which are just regular hosts, without
>> the hosted-engine. Both have been marked non-operational, with the
>> following messages logged about every 5 minutes:
>>
>>
>> Failed to connect Host perses to Storage Pool Default
>>
>> Host perses cannot access the Storage Domain(s) hosted_storage attached to 
>> the Data Center Default. Setting Host state to Non-Operational.
>>
>> Host perses reports about one of the Active Storage Domains as Problematic.
>>
>> Failed to connect Host perses to Storage Servers
>>
>> Failed to connect Host perses to the Storage Domains hosted_storage.
>>
>>
>> I could see the normal storage/iso/export domains mounted on the host, and
>> the VMs running on the host are fine.
> In 3.5 only the hosts involved in hosted-engine have to access the
> hosted-engine storage domain.
> With 3.6 we introduced the capabilities to manage the engine VM from
> the engine itself so the engine has to import in the hosted-engine
> storage domain.
> This means that all the hosts in the datacenter that contains the
> cluster with the hosted-engine hosts have now to be able to connect
> the hosted-engine storage domain.
>
> Can you please check the ACL on the storage server (NFS or iSCSI) that
> you use to expose the hosted-engine storage domain?
>
>> I shut down the VMs on one host, put it in maintenance mode, installed 3.6
>> repo and ran yum update. All went well, but when I activated the host, same
>> deal.
>>
>> I've attached the engine log snippet for the activation attempt.
>>
>> Robert
>>
>> --
>> Senior Software Engineer @ Parsons
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt 4.0 Login Issue

2016-07-06 Thread Ralf Schenk
Hello,

I've got this, too. But I thought it was normal since the
session timeout had been reached.

Bye


Am 06.07.2016 um 04:08 schrieb Melissa Mesler:
> I am running 4.0 on CentOS 7.2. Sometimes when I first log in to the
> admin page, it will give me and error that says "Request state does not
> match session state." Then if I go through the process of logging in
> again, it will go through with no issue. It doesn't do this every time
> but it does do it quite often. Any ideas on why?
>
> - MeLLy
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to Access Engine UI after Upgrade to OVirt 4.0

2016-06-29 Thread Ralf Schenk


Am 29.06.2016 um 14:35 schrieb Yedidyah Bar David:
> On Wed, Jun 29, 2016 at 3:28 PM, Ralf Schenk <r...@databay.de
> <mailto:r...@databay.de>> wrote:
>
>
>
> Am 29.06.2016 um 13:58 schrieb Yedidyah Bar David:
>>     On Wed, Jun 29, 2016 at 12:50 PM, Ralf Schenk <r...@databay.de> wrote:
>>
>> Hello,
>>
>> i upgraded hosted engine 3.6.6 to 4.0. Now I can't access the
>> Web-UI:
>>
>> If I access the UI via https://engine-mciii.rxmgmt.databay.de
>> I get the Error-Page "The client is not authorized to request
>> an authorization. It's required to access the system using FQDN."
>>
>> I used the same FQDN during setup and even in the Upgrade
>> Process this was used according to "Configuration Preview" below:
>>
>> What can I do to get UI working ? Do I have a problem with
>> case sensitivity ? (The triple-"I" in the FQDN) ?
>>
>>
>>   --== CONFIGURATION PREVIEW ==--
>>
>>   Default SAN wipe after delete   : False
>>   Firewall manager: firewalld
>>   Update Firewall : False
>>   Host FQDN   :
>> engine-mcIII.rxmgmt.databay.de
>> <http://engine-mcIII.rxmgmt.databay.de>
>>
>>
>> Since you input it with capital I's, that's what the server expects.
>> Most likely your browser converts it to lowercase.
>>
>> For now, I suggest to cleanup and setup again with lowercase
>> letters only.
>>
>> You might also open one or more bugs about this:
>>
>> 1. engine-setup should require lower-case letters only. Not sure
>> about that.
>> Perhaps it should at least warn.
>>
>> 2. The engine should convert to lowercase before comparing, when
>> deciding if
>> the fqdn is correct.
>>
>> For reference:
>>
>> FQDNs _can_ be both lowercase and uppercase (and mixed case), but are
>> case _insensitive_. Meaning, software that has to decide if two names
>> are equal has to ignore case-only differences.
>>
>> However, it seems that when your browser converts to lowercase, it's
>> doing the right thing. Didn't check myself all the links in the
>> conclusion of [1], but that's what it implies.
>>
>> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1059169
>>
>> Best,
>>
>>
>
> Hello,
>
> I know that FQDNs are case insensitive. I only used that name
> because it contains a Roman numeral 3, "III". It was only for better
> visibility/readability.
>
> Nevertheless the string-comparison in engine should be done
> case-insensitive.
>
>
> Did you open a bug about that?
>
>

Hello,

reported as: https://bugzilla.redhat.com/show_bug.cgi?id=1351217

Bye

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to Access Engine UI after Upgrade to OVirt 4.0

2016-06-29 Thread Ralf Schenk


Am 29.06.2016 um 14:35 schrieb Yedidyah Bar David:
> On Wed, Jun 29, 2016 at 3:28 PM, Ralf Schenk <r...@databay.de
> <mailto:r...@databay.de>> wrote:
>
>
>
> Am 29.06.2016 um 13:58 schrieb Yedidyah Bar David:
>>     On Wed, Jun 29, 2016 at 12:50 PM, Ralf Schenk <r...@databay.de
>> <mailto:r...@databay.de>> wrote:
>>
>> Hello,
>>
>> i upgraded hosted engine 3.6.6 to 4.0. Now I can't access the
>> Web-UI:
>>
>> If I access the UI via https://engine-mciii.rxmgmt.databay.de
>> I get the Error-Page "The client is not authorized to request
>> an authorization. It's required to access the system using FQDN."
>>
>> I used the same FQDN during setup and even in the Upgrade
>> Process this was used according to "Configuration Preview" below:
>>
>> What can I do to get UI working ? Do I have a problem with
>> case sensitivity ? (The triple-"I" in the FQDN) ?
>>
>>
>>   --== CONFIGURATION PREVIEW ==--
>>
>>   Default SAN wipe after delete   : False
>>   Firewall manager: firewalld
>>   Update Firewall : False
>>   Host FQDN   :
>> engine-mcIII.rxmgmt.databay.de
>> <http://engine-mcIII.rxmgmt.databay.de>
>>
>>
>> Since you input it with capital I's, that's what the server expects.
>> Most likely your browser converts it to lowercase.
>>
>> For now, I suggest to cleanup and setup again with lowercase
>> letters only.
>>
>> You might also open one or more bugs about this:
>>
>> 1. engine-setup should require lower-case letters only. Not sure
>> about that.
>> Perhaps it should at least warn.
>>
>> 2. The engine should convert to lowercase before comparing, when
>> deciding if
>> the fqdn is correct.
>>
>> For reference:
>>
>> FQDNs _can_ be both lowercase and uppercase (and mixed case), but are
>> case _insensitive_. Meaning, software that has to decide if two names
>> are equal has to ignore case-only differences.
>>
>> However, it seems that when your browser converts to lowercase, it's
>> doing the right thing. Didn't check myself all the links in the
>> conclusion of [1], but that's what it implies.
>>
>> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1059169
>>
>> Best,
>>
>>
>
> Hello,
>
> I know that FQDNs are case insensitive. I only used that name
> because it contains a Roman numeral 3, "III". It was only for better
> visibility/readability.
>
> Nevertheless the string-comparison in engine should be done
> case-insensitive.
>
>
> Did you open a bug about that?
>
>

Not yet; first I had to open this one, which is more relevant for a
hosted-engine-on-gluster setup:
https://bugzilla.redhat.com/show_bug.cgi?id=1351203

Bye

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to Access Engine UI after Upgrade to OVirt 4.0

2016-06-29 Thread Ralf Schenk
Ok guys,

It's a case-sensitive comparison somewhere! I renamed my hosted engine from
"engine-mcIII.rxmgmt.databay.de" like this:
/usr/share/ovirt-engine/setup/bin/ovirt-engine-rename
--newname=engine-mciii.rxmgmt.databay.de
and now I can log in again.

Phew.


Am 29.06.2016 um 11:50 schrieb Ralf Schenk:
>
> Hello,
>
> i upgraded hosted engine 3.6.6 to 4.0. Now I can't access the Web-UI:
>
> If I access the UI via https://engine-mciii.rxmgmt.databay.de I get
> the Error-Page "The client is not authorized to request an
> authorization. It's required to access the system using FQDN."
>
> I used the same FQDN during setup and even in the Upgrade Process this
> was used according to "Configuration Preview" below:
>
> What can I do to get UI working ? Do I have a problem with case
> sensitivity ? (The triple-"I" in the FQDN) ?
>
>
>   --== CONFIGURATION PREVIEW ==--
>
>   Default SAN wipe after delete   : False
>   Firewall manager: firewalld
>   Update Firewall : False
>   Host FQDN   :
> engine-mcIII.rxmgmt.databay.de
>   Require packages rollback   : False
>   Upgrade packages: True
>   Engine database secured connection  : False
>   Engine database host: localhost
>   Engine database user name   : engine
>   Engine database name: engine
>   Engine database port: 5432
>   Engine database host name validation: False
>   DWH database secured connection : False
>   DWH database host   : localhost
>   DWH database user name  : ovirt_engine_history
>   DWH database name   : ovirt_engine_history
>   DWH database port   : 5432
>   DWH database host name validation   : False
>   Engine installation : True
>   PKI organization: rxmgmt.databay.de
>   DWH installation: True
>   Configure local DWH database: True
>   Engine Host FQDN:
> engine-mcIII.rxmgmt.databay.de
>   Configure VMConsole Proxy   : True
>   Configure WebSocket Proxy   : True
>
> bye
> -- 
>
>
> *Ralf Schenk*
> fon +49 (0) 24 05 / 40 83 70
> fax +49 (0) 24 05 / 40 83 759
> mail *r...@databay.de* <mailto:r...@databay.de>
>   
> *Databay AG*
> Jens-Otto-Krag-Straße 11
> D-52146 Würselen
> *www.databay.de* <http://www.databay.de>
>
> Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
> Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari,
> Dipl.-Kfm. Philipp Hermanns
> Aufsichtsratsvorsitzender: Wilhelm Dohmen
>
> 
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Unable to Access Engine UI after Upgrade to OVirt 4.0

2016-06-29 Thread Ralf Schenk
Hello,

I upgraded the hosted engine from 3.6.6 to 4.0. Now I can't access the web UI:

If I access the UI via https://engine-mciii.rxmgmt.databay.de I get the
error page "The client is not authorized to request an authorization.
It's required to access the system using FQDN."

I used the same FQDN during setup, and even during the upgrade process it
was used, according to the "Configuration Preview" below:

What can I do to get the UI working? Do I have a problem with case
sensitivity (the triple "I" in the FQDN)?


  --== CONFIGURATION PREVIEW ==--

  Default SAN wipe after delete   : False
  Firewall manager: firewalld
  Update Firewall : False
  Host FQDN   :
engine-mcIII.rxmgmt.databay.de
  Require packages rollback   : False
  Upgrade packages: True
  Engine database secured connection  : False
  Engine database host: localhost
  Engine database user name   : engine
  Engine database name: engine
  Engine database port: 5432
  Engine database host name validation: False
  DWH database secured connection : False
  DWH database host   : localhost
  DWH database user name  : ovirt_engine_history
  DWH database name   : ovirt_engine_history
  DWH database port   : 5432
  DWH database host name validation   : False
  Engine installation : True
  PKI organization: rxmgmt.databay.de
  DWH installation: True
  Configure local DWH database: True
  Engine Host FQDN:
engine-mcIII.rxmgmt.databay.de
  Configure VMConsole Proxy   : True
  Configure WebSocket Proxy   : True

bye
-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Error mounting hosted engine Volume (Glusterfs) via VDSM

2016-06-25 Thread Ralf Schenk
Hello,

I think the options for mounting the hosted-engine volume changed in the latest
vdsm to support mounting from backup-volfile-servers.

[root@microcloud28 ~]# rpm -qi vdsm
Name: vdsm
Version : 4.17.28
Release : 1.el7
Architecture: noarch
Install Date: Fri 10 Jun 2016 11:17:37 AM CEST
Group   : Applications/System
Size: 3828639
License : GPLv2+
Signature   : RSA/SHA1, Fri 03 Jun 2016 12:53:20 AM CEST, Key ID
7aebbe8261e8806c
Source RPM  : vdsm-4.17.28-1.el7.src.rpm

Now my hosts have problems mounting the volume. During hosted-engine setup I
configured the (replica 3) volume as
"glusterfs.rxmgmt.databay.de:/engine", which is a round-robin DNS name for my
gluster hosts and _not_ the DNS name of any gluster brick.

Now VDSM logs:

jsonrpc.Executor/3::DEBUG::2016-06-25 19:40:02,520::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/glusterSD/glusterfs.rxmgmt.databay.de:_engine mode: None
jsonrpc.Executor/3::DEBUG::2016-06-25 19:40:02,520::storageServer::364::Storage.StorageServer.MountConnection::(_get_backup_servers_option) Using bricks: ['microcloud21.rxmgmt.databay.de', 'microcloud24.rxmgmt.databay.de', 'microcloud27.rxmgmt.databay.de']
jsonrpc.Executor/3::WARNING::2016-06-25 19:40:02,520::storageServer::370::Storage.StorageServer.MountConnection::(_get_backup_servers_option) gluster server u'glusterfs.rxmgmt.databay.de' is not in bricks ['microcloud21.rxmgmt.databay.de', 'microcloud24.rxmgmt.databay.de', 'microcloud27.rxmgmt.databay.de'], possibly mounting duplicate servers
jsonrpc.Executor/3::DEBUG::2016-06-25 19:40:02,520::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset --cpu-list 0-7 /usr/bin/sudo -n /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount -o backup-volfile-servers=microcloud21.rxmgmt.databay.de:microcloud24.rxmgmt.databay.de:microcloud27.rxmgmt.databay.de glusterfs.rxmgmt.databay.de:/engine /rhev/data-center/mnt/glusterSD/glusterfs.rxmgmt.databay.de:_engine (cwd None)
jsonrpc.Executor/3::ERROR::2016-06-25 19:40:02,540::hsm::2473::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2470, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 237, in connect
    six.reraise(t, v, tb)
  File "/usr/share/vdsm/storage/storageServer.py", line 229, in connect
    self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
  File "/usr/share/vdsm/storage/mount.py", line 225, in mount
    return self._runcmd(cmd, timeout)
  File "/usr/share/vdsm/storage/mount.py", line 241, in _runcmd
    raise MountError(rc, ";".join((out, err)))
MountError: (32, ';Running scope as unit run-13461.scope.\nmount.nfs: an incorrect mount option was specified\n')

So the mount is attempted as NFS, which doesn't support the option "-o
backup-volfile-servers=...".
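
For illustration, a minimal sketch (not vdsm's code) of building the glusterfs
mount with backup-volfile-servers taken from the brick list while forcing
"-t glusterfs", so mount(8) never falls back to mount.nfs; servers and paths
are taken from the log above:

import subprocess

def mount_gluster(volfile_server, volume, mountpoint, bricks):
    # Every brick except the entry point becomes a backup volfile server.
    backups = [b for b in bricks if b != volfile_server]
    cmd = ["mount", "-t", "glusterfs"]
    if backups:
        cmd += ["-o", "backup-volfile-servers=" + ":".join(backups)]
    cmd += ["%s:/%s" % (volfile_server, volume), mountpoint]
    return subprocess.call(cmd)

mount_gluster("glusterfs.rxmgmt.databay.de", "engine",
              "/rhev/data-center/mnt/glusterSD/glusterfs.rxmgmt.databay.de:_engine",
              ["microcloud21.rxmgmt.databay.de",
               "microcloud24.rxmgmt.databay.de",
               "microcloud27.rxmgmt.databay.de"])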

As a consequence, the host is deactivated in the engine. The only way to get it
up again is to mount the volume manually to
/rhev/data-center/mnt/glusterSD/glusterfs.rxmgmt.databay.de:_engine and
activate the host manually in the management GUI.

Should and can I change the hosted_storage entry point globally to e.g.
"microcloud21.rxmgmt.databay.de", or wouldn't it be better if VDSM always
used "-t glusterfs" to mount the gluster volume, regardless of which DNS
name is used for the gluster service?

Ovirt is:

ovirt-release36.noarch 1:3.6.6-1

Bye
-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Klaus Scholzen (RA)


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] VMs using hugepages

2016-05-31 Thread Ralf Schenk
Hello,

I am trying to get VMs to use hugepages by default. We use them on our manually
set up libvirt VMs and see performance advantages. I installed
vdsm-hook-hugepages, but according to
http://www.ovirt.org/develop/developer-guide/vdsm/hook/hugepages/ I have
to set hugepages=SIZE. The engine web frontend doesn't show an option
anywhere to specify this.

I want the VMs to have:

  

  

Any hint ?
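
For what it's worth, a minimal sketch of a before_vm_start hook in the spirit
of vdsm-hook-hugepages (not its actual code); it assumes vdsm's hooking module
with read_domxml()/write_domxml() and that the 'hugepages' custom property is
exported to hooks via the environment:

#!/usr/bin/python
import os

import hooking

# If a 'hugepages' custom property is set for the VM, make sure the domain
# XML carries <memoryBacking><hugepages/></memoryBacking>.
if 'hugepages' in os.environ:
    domxml = hooking.read_domxml()
    domain = domxml.getElementsByTagName('domain')[0]
    if not domain.getElementsByTagName('memoryBacking'):
        backing = domxml.createElement('memoryBacking')
        backing.appendChild(domxml.createElement('hugepages'))
        domain.appendChild(backing)
    hooking.write_domxml(domxml)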

Versions:

vdsm.noarch   4.17.28-0.el7.centos   @ovirt-3.6
vdsm-cli.noarch   4.17.28-0.el7.centos   @ovirt-3.6
vdsm-gluster.noarch   4.17.28-0.el7.centos   @ovirt-3.6
vdsm-hook-hugepages.noarch4.17.28-0.el7.centos   @ovirt-3.6
vdsm-hook-vmfex-dev.noarch4.17.28-0.el7.centos   @ovirt-3.6
vdsm-infra.noarch 4.17.28-0.el7.centos   @ovirt-3.6
vdsm-jsonrpc.noarch   4.17.28-0.el7.centos   @ovirt-3.6
vdsm-python.noarch4.17.28-0.el7.centos   @ovirt-3.6
vdsm-xmlrpc.noarch4.17.28-0.el7.centos   @ovirt-3.6
vdsm-yajsonrpc.noarch 4.17.28-0.el7.centos   @ovirt-3.6

Engine:
ovirt-engine.noarch3.6.6.2-1.el7.centos  
@ovirt-3.6

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Klaus Scholzen (RA)


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt 3.6.6 on Centos 7.2 not using native gluster (gfapi)

2016-05-30 Thread Ralf Schenk
Hello,

thanks for the hint, but wasn't it already there? Many documents and
screenshots show the radio button to enable gluster on the cluster tab
of the engine web interface.

Bye


On 30.05.2016 at 20:28, Yaniv Kaul wrote:
> In the short term roadmap (covered
> by https://bugzilla.redhat.com/show_bug.cgi?id=1022961 ).
> Y.
>
> On Mon, May 30, 2016 at 3:30 PM, Ralf Schenk <r...@databay.de
> <mailto:r...@databay.de>> wrote:
>
> Hello,
>
> I set up 8 hosts and a self-hosted engine running HA on 3 of them
> from a gluster replica 3 volume. HA is working: I can set one of the
> 3 hosts configured for hosted-engine to maintenance and the engine
> migrates to another host. I did the hosted-engine --deploy with type
> gluster, and my hosted-engine storage is accessed as
> glusterfs.mydomain.de:/engine
>
> I set up another gluster volume (distributed replicated 4x2=8) as
> data storage for my virtual machines, which is accessible as
> glusterfs.mydomain.de:/gv0. ISO and export volumes are defined
> from an NFS server.
>
> When I set up a VM on the gluster storage I expected it to run
> with native gluster support. However, if I dumpxml the libvirt
> machine definition, I see something like this in its config:
>
> [...]
>
> <disk type='file' device='disk'>
>   <driver [...] error_policy='stop' io='threads'/>
>   <source
>    file='/rhev/data-center/0001-0001-0001-0001-00b9/5d99af76-33b5-47d8-99da-1f32413c7bb0/images/011ab08e-71af-4d5b-a6a8-9b843a10329e/3f71d6c7-9b6d-4872-abc6-01a2b3329656'/>
>   [...]
>   <serial>011ab08e-71af-4d5b-a6a8-9b843a10329e</serial>
>   [...]
>   <address [...] function='0x0'/>
> </disk>
>
> I expected to have something like this:
>
> <disk type='network' device='disk'>
>   <driver [...] error_policy='stop' io='threads'/>
>   <source protocol='gluster'
>    name='gv0/5d99af76-33b5-47d8-99da-1f32413c7bb0/images/011ab08e-71af-4d5b-a6a8-9b843a10329e/3f71d6c7-9b6d-4872-abc6-01a2b3329656'>
>     <host [...]/>
>   </source>
>   [...]
>
> All hosts have vdsm-gluster installed:
> [root@microcloud21 libvirt]# yum list installed | grep vdsm-*
> vdsm.noarch   4.17.28-0.el7.centos  
> @ovirt-3.6
> vdsm-cli.noarch   4.17.28-0.el7.centos  
> @ovirt-3.6
> vdsm-gluster.noarch   4.17.28-0.el7.centos  
> @ovirt-3.6
> vdsm-hook-hugepages.noarch4.17.28-0.el7.centos  
> @ovirt-3.6
> vdsm-hook-vmfex-dev.noarch4.17.28-0.el7.centos  
> @ovirt-3.6
> vdsm-infra.noarch 4.17.28-0.el7.centos  
> @ovirt-3.6
> vdsm-jsonrpc.noarch   4.17.28-0.el7.centos  
> @ovirt-3.6
> vdsm-python.noarch4.17.28-0.el7.centos  
> @ovirt-3.6
> vdsm-xmlrpc.noarch4.17.28-0.el7.centos  
> @ovirt-3.6
> vdsm-yajsonrpc.noarch 4.17.28-0.el7.centos  
> @ovirt-3.6
>
> How do I get my most-wanted feature, native gluster support, running?
>
> -- 
>
>
> *Ralf Schenk*
> fon +49 (0) 24 05 / 40 83 70
> <tel:%2B49%20%280%29%2024%2005%20%2F%2040%2083%2070>
> fax +49 (0) 24 05 / 40 83 759
> <tel:%2B49%20%280%29%2024%2005%20%2F%2040%2083%20759>
> mail *r...@databay.de* <mailto:r...@databay.de>
>   
> *Databay AG*
> Jens-Otto-Krag-Straße 11
> D-52146 Würselen
> *www.databay.de* <http://www.databay.de>
>
> Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
> Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari,
> Dipl.-Kfm. Philipp Hermanns
>     Aufsichtsratsvorsitzender: Klaus Scholzen (RA)
>
> 
>
> ___
> Users mailing list
> Users@ovirt.org <mailto:Users@ovirt.org>
> http://lists.ovirt.org/mailman/listinfo/users
>
>

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Klaus Scholzen (RA)


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Ovirt 3.6.6 on Centos 7.2 not using native gluster (gfapi)

2016-05-30 Thread Ralf Schenk
Hello,

I set up 8 hosts and a self-hosted engine running HA on 3 of them from a
gluster replica 3 volume. HA is working: I can set one of the 3 hosts
configured for hosted-engine to maintenance and the engine migrates to
another host. I did the hosted-engine --deploy with type gluster, and my
hosted-engine storage is accessed as glusterfs.mydomain.de:/engine

I set up another gluster volume (distributed replicated 4x2=8) as data
storage for my virtual machines, which is accessible as
glusterfs.mydomain.de:/gv0. ISO and export volumes are defined from an NFS
server.

When I set up a VM on the gluster storage I expected it to run with
native gluster support. However, if I dumpxml the libvirt machine
definition, I see something like this in its config:

[...]

<disk type='file' device='disk'>
  <driver [...] error_policy='stop' io='threads'/>
  <source
   file='/rhev/data-center/0001-0001-0001-0001-00b9/5d99af76-33b5-47d8-99da-1f32413c7bb0/images/011ab08e-71af-4d5b-a6a8-9b843a10329e/3f71d6c7-9b6d-4872-abc6-01a2b3329656'/>
  [...]
  <serial>011ab08e-71af-4d5b-a6a8-9b843a10329e</serial>
  [...]
  <address [...] function='0x0'/>
</disk>

I expected to have something like this:

<disk type='network' device='disk'>
  <driver [...] error_policy='stop' io='threads'/>
  <source protocol='gluster'
   name='gv0/5d99af76-33b5-47d8-99da-1f32413c7bb0/images/011ab08e-71af-4d5b-a6a8-9b843a10329e/3f71d6c7-9b6d-4872-abc6-01a2b3329656'>
    <host [...]/>
  </source>
  [...]

All hosts have vdsm-gluster installed:
[root@microcloud21 libvirt]# yum list installed | grep vdsm-*
vdsm.noarch   4.17.28-0.el7.centos   @ovirt-3.6
vdsm-cli.noarch   4.17.28-0.el7.centos   @ovirt-3.6
vdsm-gluster.noarch   4.17.28-0.el7.centos   @ovirt-3.6
vdsm-hook-hugepages.noarch4.17.28-0.el7.centos   @ovirt-3.6
vdsm-hook-vmfex-dev.noarch4.17.28-0.el7.centos   @ovirt-3.6
vdsm-infra.noarch 4.17.28-0.el7.centos   @ovirt-3.6
vdsm-jsonrpc.noarch   4.17.28-0.el7.centos   @ovirt-3.6
vdsm-python.noarch4.17.28-0.el7.centos   @ovirt-3.6
vdsm-xmlrpc.noarch4.17.28-0.el7.centos   @ovirt-3.6
vdsm-yajsonrpc.noarch 4.17.28-0.el7.centos   @ovirt-3.6

How do I get my most-wanted feature, native gluster support, running?
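
A minimal sketch of checking whether a running VM is actually attached via
gfapi or just via a file on the FUSE mount; it assumes the libvirt Python
bindings on the host, and the VM name is hypothetical:

import xml.etree.ElementTree as ET

import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("myvm")  # hypothetical VM name
root = ET.fromstring(dom.XMLDesc(0))
for disk in root.findall("./devices/disk"):
    src = disk.find("source")
    if src is None:
        continue
    if disk.get("type") == "network" and src.get("protocol") == "gluster":
        # gfapi: qemu talks to gluster directly
        print("gfapi disk: %s" % src.get("name"))
    else:
        # plain file, i.e. going through the FUSE mount under /rhev
        print("file disk: %s" % src.get("file"))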

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Klaus Scholzen (RA)


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Changing Cluster CPU Type in a single Host with Hosted Engine environment

2016-05-28 Thread Ralf Braendli
Hi

I tried this change, but I can't find a table "cluster" in the postgres database.

Best Regards

Ralf

On 26.05.2016 at 15:22, Martin Polednik
<mpoled...@redhat.com> wrote:

On 26/05/16 13:01 +, Ralf Braendli wrote:
Hi

Thanks a lot for your help.
Just to be sure:
the database would be the database on the HostedEngine, right?

Right.

After this operation, should it work directly, or is a restart required?

You should most likely restart the machine (to avoid hitting cached
values).

And for the Bug report this should be done here 
https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt ?

Yes (ovirt-engine, virt team).

Best Regards

Ralf Brändli

On 26.05.2016 at 14:42, Martin Polednik
<mpoled...@redhat.com> wrote:

On 26/05/16 07:12 +, Ralf Braendli wrote:
Hi

I have the problem that I selected the wrong CPU type during the setup process.
Is it possible to change it without a new installation?

Hi!

I'm afraid this may not be possible using the "regular" approach. You
could do this by directly changing the CPU type in the database, but this
is not a supported operation.

Just an example of what I would do in this case (but proceed carefully
before changing anything in the DB):

$ su - postgres -c "psql -t engine -c \"SELECT
split_part(trim(regexp_split_to_table(option_value, ';')), ':', 2)
FROM vdc_options WHERE option_name = 'ServerCPUList' AND version =
'3.5';\""

gives you a nice list of supported CPU names (the name in the database must
be exact, so it's better to paste from that list):

Intel Conroe Family
Intel Penryn Family
Intel Nehalem Family
Intel Westmere Family
Intel SandyBridge Family
Intel Haswell-noTSX Family
Intel Haswell Family
Intel Broadwell-noTSX Family
Intel Broadwell Family
AMD Opteron G1
AMD Opteron G2
AMD Opteron G3
AMD Opteron G4
AMD Opteron G5
IBM POWER8

Then you can update the cluster directly:

$ su - postgres -c "psql -t engine -c \"UPDATE cluster SET cpu_name =
'YOUR CPU NAME' WHERE name = 'YOUR CLUSTER NAME';\""

('YOUR CPU NAME' and 'YOUR CLUSTER NAME' must of course correspond to
the cpu name from the list above and the name of the cluster
respectively)

Also, could you open a bug on this? I think we should be able to
change the CPU type without all this.

Thanks,
mpolednik

We have a single Host with a Hosted Engine installed.
With this installation I can’t put the Host into Maintenance Mode because the 
Hosted Engine will run on this Host.

The version we use is 3.5.5-1.

Best Regards

Ralf Brändli
___
Users mailing list
Users@ovirt.org<mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org<mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Changing Cluster CPU Type in a single Host with Hosted Engine environment

2016-05-26 Thread Ralf Braendli
Hi

Thanks a lot for your help.
Just to be sure:
the database would be the database on the HostedEngine, right?
After this operation, should it work directly, or is a restart required?

And for the Bug report this should be done here 
https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt ?

Best Regards

Ralf Brändli

> On 26.05.2016 at 14:42, Martin Polednik <mpoled...@redhat.com> wrote:
> 
> On 26/05/16 07:12 +, Ralf Braendli wrote:
>> Hi
>> 
>> I have the problem that I selected the wrong CPU type during the setup
>> process.
>> Is it possible to change it without a new installation?
> 
> Hi!
> 
> I'm afraid this may not be possible using the "regular" approach. You
> could do this by directly changing the CPU type in the database, but this
> is not a supported operation.
> 
> Just an example of what I would do in this case (but proceed carefully
> before changing anything in the DB):
> 
> $ su - postgres -c "psql -t engine -c \"SELECT
> split_part(trim(regexp_split_to_table(option_value, ';')), ':', 2)
> FROM vdc_options WHERE option_name = 'ServerCPUList' AND version =
> '3.5';\""
> 
> gives you a nice list of supported CPU names (the name in the database must
> be exact, so it's better to paste from that list):
> 
> Intel Conroe Family
> Intel Penryn Family
> Intel Nehalem Family
> Intel Westmere Family
> Intel SandyBridge Family
> Intel Haswell-noTSX Family
> Intel Haswell Family
> Intel Broadwell-noTSX Family
> Intel Broadwell Family
> AMD Opteron G1
> AMD Opteron G2
> AMD Opteron G3
> AMD Opteron G4
> AMD Opteron G5
> IBM POWER8
> 
> Then you can update the cluster directly:
> 
> $ su - postgres -c "psql -t engine -c \"UPDATE cluster SET cpu_name =
> 'YOUR CPU NAME' WHERE name = 'YOUR CLUSTER NAME';\""
> 
> ('YOUR CPU NAME' and 'YOUR CLUSTER NAME' must of course correspond to
> the cpu name from the list above and the name of the cluster
> respectively)
> 
> Also, could you open a bug on this? I think we should be able to
> change the CPU type without all this.
> 
> Thanks,
> mpolednik
> 
>> We have a single Host with a Hosted Engine installed.
>> With this installation I can’t put the Host into Maintenance Mode because 
>> the Hosted Engine will run on this Host.
>> 
>> The version we use is 3.5.5-1.
>> 
>> Best Regards
>> 
>> Ralf Brändli
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Changing Cluster CPU Type in a single Host with Hosted Engine environment

2016-05-26 Thread Ralf Braendli
Hi 

I have the problem that I selected the wrong CPU type during the setup process.
Is it possible to change it without a new installation?

We have a single Host with a Hosted Engine installed.
With this installation I can’t put the Host into Maintenance Mode because the 
Hosted Engine will run on this Host.

The version we use is 3.5.5-1.

Best Regards 

Ralf Brändli
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hosted engine HA does not migrate on maintenance

2016-05-23 Thread Ralf Schenk
Hello,

I've set up 3 Hosts of my 8 to be able to run hosted-engine (on
glusterfs). When I set the host running the HostedEngine to maintenance
the hosted-engine VM doesn't migrate to one of the other hosts.

I think a state as shown below (Host 1 in local maintenance but the engine
running on it) shouldn't be possible. When setting maintenance and
observing the engine VM state in the management interface, I can see the
status "migrating from.." for a short time, but the VM is not
migrated and the status switches back to "Up" on the host that should be
prepared for maintenance.

[root@microcloud21 ~]# hosted-engine --vm-status

--== Host 1 status ==--

Status up-to-date  : True
Hostname   : microcloud21.mydomain
Host ID: 1
Engine status  : {"health": "good", "vm": "up",
"detail": "up"}
Score  : 0
stopped: False
Local maintenance  : True
crc32  : 922bd955
Host timestamp : 426722


--== Host 2 status ==--

Status up-to-date  : True
Hostname   : microcloud24.mydomain
Host ID: 2
Engine status  : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : a35888e4
Host timestamp : 312349


--== Host 3 status ==--

Status up-to-date  : True
Hostname   : microcloud27.mydomain
Host ID: 3
Engine status  : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : 06e20597
Host timestamp : 312061
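
Just to illustrate the inconsistency, a rough sketch (not part of oVirt) that
scans the text output of "hosted-engine --vm-status" as pasted above and flags
a host that reports local maintenance while the engine VM is still up on it:

import re
import subprocess

out = subprocess.check_output(["hosted-engine", "--vm-status"]).decode()
# One block per "--== Host N status ==--" section of the report.
for block in re.split(r"--== Host .* ==--", out)[1:]:
    host = re.search(r"Hostname\s*:\s*(\S+)", block)
    maint = re.search(r"Local maintenance\s*:\s*(\S+)", block)
    engine_up = '"vm": "up"' in block
    if host and maint and maint.group(1) == "True" and engine_up:
        print("%s is in local maintenance but still runs the engine VM"
              % host.group(1))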


-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Klaus Scholzen (RA)


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt hosted-engine setup stuck in final phase registering host to engine

2016-05-19 Thread Ralf Schenk
Hello,

don't waste time on it. I reinstalled ovirt-hosted-engine-ha.noarch and
then, after some time, the engine magically started. I'm now adding hosts to
the engine and will deploy two other instances of the engine on two
other hosts to get it highly available. So far my gluster seems usable
by the engine and the hosts.

If it's interesting for you: I also set up HA nfs-ganesha on the hosts
to provide NFS shares to multiple VMs (they will be php-fpm backends for
Nginx) in an efficient way. I also tested and benchmarked (only
sysbench) using one host as MDS for pNFS with the gluster FSAL. So I'm able
to mount my gluster via "mount ... type nfs4 -o minorversion=1" and am
rewarded with pnfs=LAYOUT_NFSV4_1_FILES in "/proc/self/mountstats". I
can see good network distribution and connections to multiple servers of
the cluster when benchmarking an NFS mount.
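
A small sketch (assuming the usual /proc/self/mountstats layout) of checking
whether a given mountpoint negotiated pNFS by looking for
LAYOUT_NFSV4_1_FILES, as described above; the mountpoint below is hypothetical:

def mount_uses_pnfs(mountpoint):
    current, blocks = None, {}
    with open("/proc/self/mountstats") as stats:
        for line in stats:
            if line.startswith("device "):
                # "device server:/vol mounted on /mnt with fstype nfs4 ..."
                current = line.split()[4]
                blocks[current] = []
            elif current:
                blocks[current].append(line)
    return any("LAYOUT_NFSV4_1_FILES" in l for l in blocks.get(mountpoint, []))

print(mount_uses_pnfs("/mnt/gluster-nfs"))  # hypothetical mountpoint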

What I don't understand: the engine and also the setup seem to have a problem
with my type 6 bond. That type proved to be best for glusterfs and NFS
performance and distribution over my 2 network interfaces. Additionally,
I'm losing my IPMI on the shared LAN if I use a type 4 (802.3ad) bond.

That's what I have:

eth0___bond0_br0 (192.168.252.x) for VMs/Hosts
eth1__/   \__bond0.10_ovirtmgmt (172.16.252.x, VLAN 10) for
Gluster, NFS, Migration, Management

Is this OK?

Thanks a lot for your effort. I hope that I can give back something to
the community by actively using the mailing-list.

Bye

On 18.05.2016 at 16:36, Simone Tiraboschi wrote:
> Really really strange,
> adding Martin here.
>
>
>
> On Wed, May 18, 2016 at 4:32 PM, Ralf Schenk <r...@databay.de
> <mailto:r...@databay.de>> wrote:
>
> Hello,
>
> When I restart (systemctl restart ovirt-ha-broker ovirt-ha-agent)
> broker seems to fail: (from journalctl -xe)
>
> -- Unit ovirt-ha-agent.service has begun starting up.
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: Traceback
> (most recent call last):
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: File
> "/usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker", line 25, in
> <module>
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]:
> broker.Broker().run()
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: File
> 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/broker.py",
> line 56, in run
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]:
> self._initialize_logging(options.daemon)
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: File
> 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/broker.py",
> line 131, in _initialize_logging
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]:
> level=logging.DEBUG)
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: File
> "/usr/lib64/python2.7/logging/__init__.py", line 1529, in basicConfig
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: hdlr =
> FileHandler(filename, mode)
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: File
> "/usr/lib64/python2.7/logging/__init__.py", line 902, in __init__
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]:
> StreamHandler.__init__(self, self._open())
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: File
> "/usr/lib64/python2.7/logging/__init__.py", line 925, in _open
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: stream =
> open(self.baseFilename, self.mode)
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: IOError:
> [Errno 6] No such device or address: '/dev/stdout'
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: Traceback (most
> recent call last):
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: File
> "/usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent", line 25, in
> <module>
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: agent.Agent().run()
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py",
> line 77, in run
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]:
> self._initialize_logging(options.daemon)
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py",
> line 159, in _initialize_logging
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]:
> level=logging.DEBUG)
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: File
> "/usr/lib64/python2.7/logging/__init__.py", line 1529, in basicConfig
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: hdlr =
> FileHandler(filename, mode)
>

Re: [ovirt-users] oVirt hosted-engine setup stuck in final phase registering host to engine

2016-05-18 Thread Ralf Schenk
Hello,

thanks a lot. Yesterday I stood at the edge of a cliff. Now I'm one
step beyond...

Setup finished successfully and the engine VM was shut down - but no HA config
and no running engine VM afterwards. :-(

Restarting ovirt-ha-agent/ovirt-ha-broker doesn't work. Neither does rebooting
the host. There is no (mounted) /var/run/ovirt-hosted-engine-ha/vm.conf.

I can do "hosted-engine --connect-storage" which mounts my replica 3
gluster under
/rhev/data-center/mnt/glusterSD/glusterfs.rxmgmt.databay.de:_engine.

I found the tar file with the configs on my volume and I am able to extract
all the files (fhanswers.conf, hosted-engine.conf, broker.conf, vm.conf)
from the conf volume, which is a tar file
(/rhev/data-center/mnt/glusterSD/glusterfs.rxmgmt.databay.de:_engine/de9c1cb5-a11c-4f0d-b436-7e37378702c3/images/2d2145f5-9406-40c3-b85a-c17be924dd16/b64c8bac-f978-441e-b71c-2fa1d0fc078f).

Is it possible to persist that and get HA up and running? I think it
was not persisted in vdsm, am I right?

I found your post
http://lists.ovirt.org/pipermail/users/2015-December/036305.html that
looked like a similar problem.

(tar -cO * | dd
of=/rhev/data-center/mnt/blockSD/$sdUUID/images/$conf_image_UUID/$conf_volume_UUID
oflag=direct) ?

Bye


On 18.05.2016 at 11:38, Simone Tiraboschi wrote:
> The issue is here:
> 2016-05-18 00:16:17 DEBUG otopi.context context.dumpEnvironment:510
> ENV OVEHOSTED_ENGINE/appHostName=str:''hosted_engine_1''
> You are quoting it twice and so it will try to use the string with the
> inner quote which is not a valid name.
>
> Does it come from your custom '/root/engine_answers.conf'? If so,
> please fix it there and try again.
>
>
> On Wed, May 18, 2016 at 11:22 AM, Ralf Schenk <r...@databay.de
> <mailto:r...@databay.de>> wrote:
>
> Hello,
>
> of course. See attached File.
>
> Bye
>
>
>> On 18.05.2016 at 11:13, Simone Tiraboschi wrote:
>> It looks, correct.
>>
>> Can you please share the whole log file?
>>
>> On Wed, May 18, 2016 at 11:03 AM, Ralf Schenk <r...@databay.de
>> <mailto:r...@databay.de>> wrote:
>>
>> Hello,
>>
>> from the last setup Log:
>>
>> 2016-05-11 18:19:31 DEBUG otopi.context
>> context.dumpEnvironment:510 ENV
>> OVEHOSTED_ENGINE/appHostName=str:'hosted_engine_1'
>>
>>
>>> On 18.05.2016 at 10:59, Simone Tiraboschi wrote:
>>>
>>>
>>> On Wed, May 18, 2016 at 10:33 AM, Ralf Schenk <r...@databay.de
>>> <mailto:r...@databay.de>> wrote:
>>>
>>> Dear list,
>>>
>>> I've been working for days on a hosted-engine setup on
>>> glusterfs and I'm stuck in the last phase of
>>> hosted-engine --deploy, see the text output (for two days now
>>> :-( ). I have only the ovirtmgmt bridge up and running and all
>>> DNS names seen in the screenshot are resolvable. It
>>> shouldn't be related to your problem regarding multiple
>>> NICs in the engine VM. As seen, the hosted-engine setup
>>> is already successful and the engine VM is running, but the
>>> setup fails when registering the host (microcloud21...) in
>>> the engine.
>>>
>>>
>>> [...]
>>>
>>>   |- [ INFO  ] Starting engine service
>>>   |- [ INFO  ] Restarting httpd
>>>   |- [ INFO  ] Restarting ovirt-vmconsole proxy
>>> service
>>>   |- [ INFO  ] Stage: Clean up
>>>   |-   Log file is located at
>>> 
>>> /var/log/ovirt-engine/setup/ovirt-engine-setup-20160517222119-ndcyxn.log
>>>   |- [ INFO  ] Generating answer file
>>> '/var/lib/ovirt-engine/setup/answers/20160517223055-setup.conf'
>>>   |- [ INFO  ] Stage: Pre-termination
>>>   |- [ INFO  ] Stage: Termination
>>>   |- [ INFO  ] Execution of setup completed
>>> successfully
>>>   |- HE_APPLIANCE_ENGINE_SETUP_SUCCESS
>>> [ INFO  ] Engine-setup successfully completed
>>> [ INFO  ] Engine is still unreachable
>>> [ INFO  ] Engine is still not reachable, waiting...
>>> [ INFO  ] Engine replied: DB Up!Welcome to Health Status!
>>> [ INFO  ] Acquiring internal CA cert from the engine
>>> [ INFO  ] The fo

[ovirt-users] oVirt hosted-engine setup stuck in final phase registering host to engine

2016-05-18 Thread Ralf Schenk
Dear list,

I've been working for days on a hosted-engine setup on glusterfs and I'm stuck
in the last phase of hosted-engine --deploy, see the text output (for two
days now :-( ). I have only the ovirtmgmt bridge up and running and all
DNS names seen in the screenshot are resolvable. It shouldn't be related
to your problem regarding multiple NICs in the engine VM. As seen, the
hosted-engine setup is already successful and the engine VM is running, but the
setup fails when registering the host (microcloud21...) in the engine.


[...]

  |- [ INFO  ] Starting engine service
  |- [ INFO  ] Restarting httpd
  |- [ INFO  ] Restarting ovirt-vmconsole proxy service
  |- [ INFO  ] Stage: Clean up
  |-   Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20160517222119-ndcyxn.log
  |- [ INFO  ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20160517223055-setup.conf'
  |- [ INFO  ] Stage: Pre-termination
  |- [ INFO  ] Stage: Termination
  |- [ INFO  ] Execution of setup completed successfully
  |- HE_APPLIANCE_ENGINE_SETUP_SUCCESS
[ INFO  ] Engine-setup successfully completed
[ INFO  ] Engine is still unreachable
[ INFO  ] Engine is still not reachable, waiting...
[ INFO  ] Engine replied: DB Up!Welcome to Health Status!
[ INFO  ] Acquiring internal CA cert from the engine
[ INFO  ] The following CA certificate is going to be used, please
immediately interrupt if not correct:
[ INFO  ] Issuer: C=US, O=rxmgmt.databay.de,
CN=engine-mcIII.rxmgmt.databay.de.77792, Subject: C=US,
O=rxmgmt.databay.de, CN=engine-mcIII.rxmgmt.databay.de.77792,
Fingerprint (SHA-1): D71A5EA7C5C2C2450F1473523C3A1E022945430
[ INFO  ] Connecting to the Engine
[ ERROR ] Cannot automatically add the host to cluster Default: Host
name must be formed of alphanumeric characters, numbers or "-_."


  Please check Engine VM configuration.

  Make a selection from the options below:
  (1) Continue setup - Engine VM configuration has been fixed
  (2) Abort setup

  (1, 2)[1]: 1

  Checking for oVirt-Engine status at
engine-mcIII.rxmgmt.databay.de...
[ INFO  ] Engine replied: DB Up!Welcome to Health Status!
[ ERROR ] Cannot automatically add the host to cluster Default: Host
name must be formed of alphanumeric characters, numbers or "-_."


  Please check Engine VM configuration.

  Make a selection from the options below:
  (1) Continue setup - Engine VM configuration has been fixed
  (2) Abort setup

  (1, 2)[1]:

Additionally I can see the error in the eninge-vm in
/var/log/ovirt-engine/engine.log:

2016-05-17 14:08:15,296 WARN  [org.ovirt.engine.core.bll.hostdeploy.AddVdsCommand] (default task-13) [9277b0a] CanDoAction of action 'AddVds' failed for user admin@internal. Reasons: VAR__ACTION__ADD,VAR__TYPE__HOST,$server microcloud21.rxmgmt.databay.de,VALIDATION_VDS_NAME_INVALID
2016-05-17 14:08:15,360 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-13) [] Operation Failed: [Host name must be formed of alphanumeric characters, numbers or "-_."]

The hostname that should be registered is the correct FQDN of the
virtualization host that runs the engine VM; it points to the IP of the
ovirtmgmt bridge, and the IP is also reverse-resolvable to the hostname.
I checked /etc/hostname on the host for non-printable characters, and as
you can see it is a valid hostname.
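
For a quick sanity check, a minimal sketch of a host-name test along the lines
of the error message above ("alphanumeric characters, numbers or '-_.'"); the
regex is an approximation, not the engine's actual validation pattern:

import re

VDS_NAME_RE = re.compile(r"^[A-Za-z0-9._-]+$")  # rough approximation

def is_valid_vds_name(name):
    return bool(VDS_NAME_RE.match(name))

print(is_valid_vds_name("microcloud21.rxmgmt.databay.de"))  # True
print(is_valid_vds_name("'hosted_engine_1'"))  # False: stray quotes are rejected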

Everything is running on CentOS 7. Versions:
ovirt-engine-appliance.noarch 3.6-20160420.1.el7.centos  @ovirt-3.6
ovirt-engine-sdk-python.noarch3.6.5.0-1.el7.centos   @ovirt-3.6
ovirt-host-deploy.noarch  1.4.1-1.el7.centos @ovirt-3.6
ovirt-hosted-engine-ha.noarch 1.3.5.3-1.1.el7   
@centos-ovirt36
ovirt-hosted-engine-setup.noarch  1.3.5.0-1.1.el7   
@centos-ovirt36
ovirt-setup-lib.noarch1.0.1-1.el7.centos @ovirt-3.6
ovirt-vmconsole.noarch1.0.0-1.el7.centos @ovirt-3.6
ovirt-vmconsole-host.noarch   1.0.0-1.el7.centos @ovirt-3.6

Since I have already repeated the setup many times now, I need your help.
-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* <mailto:r...@databay.de>

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Klaus Scholzen (RA)


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users