[ovirt-users] Re: unable to login cockpit using root after upgrading to 4.4.6

2021-05-18 Thread Jason Beard
I fixed the permission error with btmp but it made no difference. 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5LIUDZCLVWRX4QFEJAGJDJ463ELX5ZHF/


[ovirt-users] Re: unable to login cockpit using root after upgrading to 4.4.6

2021-05-18 Thread J Beard
Thanks everyone for the troubleshooting so far. I agree that the cockpit auth
file is the same as in 4.4.5, and the timestamps are from before the upgrade too.
I get the same errors in the secure log. I also found an error in the messages
log; looking at it now.

secure log
May 18 21:50:57  unix_chkpwd[529704]: check pass; user unknown
May 18 21:50:57  unix_chkpwd[529705]: check pass; user unknown
May 18 21:50:57  unix_chkpwd[529705]: password check failed for user (root)

messages
May 18 22:00:23  cockpit-ws[532424]: cockpit-session: open(/var/log/btmp) failed: Permission denied
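
For anyone hitting the same open(/var/log/btmp) error, a minimal check/fix sketch (assuming the usual EL8 defaults of owner root:utmp and mode 0600 for btmp) would be:

ls -l /var/log/btmp            # compare against the expected root:utmp 0600
chown root:utmp /var/log/btmp
chmod 0600 /var/log/btmp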
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PGJT54LVJFKONRAAUZBTFLNSXFCYM4MI/


[ovirt-users] Re: Unable start self hosted engine after accidental shut down

2021-05-18 Thread Eugène Ngontang
I connected to the hypervisor and started the hosted engine from there
successfully.

However the node status is still bad.

I've set the maintenance mode back to none, but it still shows that the
cluster is in global maintenance mode.

Furthermore, it seems like the system has a disk storage issue
(the root fs seems full).

[root@milhouse-main ~]# hosted-engine --set-maintenance --mode=none
You have new mail in /var/spool/mail/root
[root@milhouse-main ~]# hosted-engine --vm-status


!! Cluster is in GLOBAL MAINTENANCE mode !!



--== Host milhouse-main.envrmnt.local (id: 1) status ==--

conf_on_shared_storage : True
Status up-to-date : True
Hostname : milhouse-main.envrmnt.local
Host ID : 1
Engine status : {"reason": "bad vm status", "health": "bad", "vm":
"down_unexpected", "detail": "Down"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : c3dd57b9
local_conf_timestamp : 1652129
Host timestamp : 1652129
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1652129 (Tue May 18 22:40:57 2021)
host-id=1
score=3400
vm_conf_refresh_time=1652129 (Tue May 18 22:40:57 2021)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False


!! Cluster is in GLOBAL MAINTENANCE mode !!

You have new mail in /var/spool/mail/root
[root@milhouse-main ~]# virsh -r list
setlocale: No such file or directory
Id Name State

2 hp_gpu-node11 paused
3 fp_gpu-node5 paused
4 hp_gpu-node10 paused
5 hp_gpu-node7 paused
6 cpu-node3 paused
7 hp_gpu-node5 paused
8 fp_gpu-node1 paused
9 fp_gpu-node0 paused
10 cpu-node1 paused
11 fp_gpu-node6 paused
12 hp_gpu-node8 paused
13 fp_gpu-node10 paused
14 fp_gpu-node4 paused
15 fp_gpu-node9 paused
16 hp_gpu-node4 paused
17 fp_gpu-node15 paused
18 fp_gpu-node8 paused
19 hp_gpu-node0 paused
20 fp_gpu-node14 paused
21 fp_gpu-node2 paused
22 fp_gpu-node11 paused
23 hp_gpu-node9 paused
24 cpu-node2 paused
25 hp_gpu-node1 paused
26 hp_gpu-node2 paused
27 fp_gpu-node12 paused
28 hp_gpu-node3 paused
29 hp_gpu-node6 paused
30 infra-vm paused
31 cpu-node0 paused
32 fp_gpu-node3 paused
33 fp_gpu-node7 paused
34 fp_gpu-node13 paused
35 bigip-16.1x-milhouse paused
37 HostedEngine running

You have new mail in /var/spool/mail/root
[root@milhouse-main ~]# nodectl check
Status: FAILED
Bootloader ... OK
Layer boot entries ... OK
Valid boot entries ... OK
Mount points ... OK
Separate /var ... OK
Discard is used ... OK
Basic storage ... OK
Initialized VG ... OK
Initialized Thin Pool ... OK
Initialized LVs ... OK
Thin storage ... FAILED - It looks like the LVM layout is not correct. The
reason could be an incorrect installation.
Checking available space in thinpool ... FAILED - Data or Metadata usage is
above threshold. Check the output of `lvs`
Checking thinpool auto-extend ... OK
vdsmd ... OK
[root@milhouse-main ~]# lvs
LV                          VG   Attr       LSize    Pool   Origin                    Data%  Meta%  Move Log Cpy%Sync Convert
home                        rhvh Vwi-aotz--    1.00g pool00                            4.79
pool00                      rhvh twi-aotz--  422.84g                                  89.91  15.03
rhvh-4.3.7.1-0.20191211.0   rhvh Vwi---tz-k <325.37g pool00 root
rhvh-4.3.7.1-0.20191211.0+1 rhvh Vwi-a-tz-- <325.37g pool00 rhvh-4.3.7.1-0.20191211.0 18.30
rhvh-4.3.8.1-0.20200126.0   rhvh Vri---tz-k <325.37g pool00
rhvh-4.3.8.1-0.20200126.0+1 rhvh Vwi-aotz-- <325.37g pool00 rhvh-4.3.8.1-0.20200126.0 95.65
root                        rhvh Vri---tz-k <325.37g pool00
swap                        rhvh -wi-ao        4.00g
tmp                         rhvh Vwi-aotz--    1.00g pool00                            5.15
var                         rhvh Vwi-aotz--   15.00g pool00                            6.58
var_crash                   rhvh Vwi-aotz--   10.00g pool00                           60.48
var_log                     rhvh Vwi-aotz--    8.00g pool00                           22.84
var_log_audit               rhvh Vwi-aotz--    2.00g pool00                            5.75
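
For reference, if the rhvh VG still has free extents, the thinpool warning above (pool00 Data% at ~90%) is usually addressed by growing the pool; a rough sketch (sizes are only examples):

vgs rhvh                                     # check free space in the VG
lvextend -L +20G rhvh/pool00                 # grow the thin pool data
lvextend --poolmetadatasize +1G rhvh/pool00  # grow the pool metadata if Meta% is also high
lvs rhvh                                     # re-check Data%/Meta%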
[root@milhouse-main ~]# df -kh
Filesystem Size Used Avail Use% Mounted on
devtmpfs 252G 0 252G 0% /dev
tmpfs 252G 16K 252G 1% /dev/shm
tmpfs 252G 1.4G 251G 1% /run
tmpfs 252G 0 252G 0% /sys/fs/cgroup
/dev/mapper/rhvh-rhvh--4.3.8.1--0.20200126.0+1 321G 306G 0 100% /
/dev/mapper/rhvh-home 976M 2.6M 907M 1% /home
/dev/mapper/3600508b1001c9cd336275e31e675f593p2 976M 367M 543M 41% /boot
/dev/mapper/rhvh-tmp 976M 2.6M 907M 1% /tmp
/dev/mapper/3600508b1001c9cd336275e31e675f593p1 200M 9.7M 191M 5% /boot/efi
/dev/mapper/rhvh-var 15G 631M 14G 5% /var
/dev/mapper/rhvh-var_crash 9.8G 5.8G 3.5G 63% /var/crash
/dev/mapper/rhvh-var_log 7.8G 1.6G 5.9G 21% /var/log
/dev/mapper/rhvh-var_log_audit 2.0G 26M 1.8G 2% /var/log/audit
192.168.36.64:/exports/data 321G 306G 0 100% /rhev/data-center/mnt/192.168.36.64:_exports_data
tmpfs 51G 0 51G 0% /run/user/0
[root@milhouse-main ~]#
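
To see what is actually filling the root LV (note that the NFS data domain is also at 100%, which would explain the paused VMs), something like this usually helps:

du -xh --max-depth=1 / 2>/dev/null | sort -h | tail -20   # -x stays on the root filesystem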


On Tue, May 18, 2021 at 11:47 PM, marcel d'heureuse wrote:

> But the score is 3400. The engine Image should be ok.
>
> Is the engine volume mounted and available as brick?
>
> gluster volume status engine
>
> Br
> Marcel
>
> Am 18. Mai 2021 23:37:38 MESZ schrieb Edward Berger :
>>
>> With all the other VMs paused, I would guess all the VM disk image
>> storage is offline or unreachable
>> from the hypervisor.
>>
>> login to this hypervisor host. df -kh to see whats mounted
>>
>> 

[ovirt-users] Re: Unable start self hosted engine after accidental shut down

2021-05-18 Thread marcel d'heureuse
But the score is 3400. The engine Image should be ok. 

Is the engine volume mounted and available as brick?

gluster volume status engine

Br
Marcel

Am 18. Mai 2021 23:37:38 MESZ schrieb Edward Berger :
>With all the other VMs paused, I would guess all the VM disk image
>storage
>is offline or unreachable
>from the hypervisor.
>
>login to this hypervisor host. df -kh to see whats mounted
>
>check the fileserving from hosts there.
>
>
>
>On Tue, May 18, 2021 at 4:33 PM Eugène Ngontang 
>wrote:
>
>> Hi,
>>
>> Our self hosted engine has been accidentally shut down by a teammate
>and
>> now I'm trying hard to get it back up without success.
>>
>> I've tried the --vm-start command but it says the VM is in
>WaitForLaunch
>> status.
>>
>> I've set the global maintenance mode but it does nothing.
>>
>> root@milhouse-main ~]# hosted-engine --vm-start
>> VM exists and is down, cleaning up and restarting
>> VM in WaitForLaunch
>>
>> [root@milhouse-main ~]# hosted-engine --set-maintenance --mode=global
>> [root@milhouse-main ~]# hosted-engine --vm-status
>>
>>
>> !! Cluster is in GLOBAL MAINTENANCE mode !!
>>
>>
>>
>> --== Host milhouse-main.envrmnt.local (id: 1) status ==--
>>
>> conf_on_shared_storage : True
>> Status up-to-date  : False
>> Hostname   : milhouse-main.envrmnt.local
>> Host ID: 1
>> Engine status  : unknown stale-data
>> Score  : 3400
>> stopped: False
>> Local maintenance  : False
>> crc32  : 931b2db9
>> local_conf_timestamp   : 1642052
>> Host timestamp : 1642052
>> Extra metadata (valid at timestamp):
>>  metadata_parse_version=1
>>  metadata_feature_version=1
>>  timestamp=1642052 (Tue May 18 19:52:59 2021)
>>  host-id=1
>>  score=3400
>>  vm_conf_refresh_time=1642052 (Tue May 18 19:53:00 2021)
>>  conf_on_shared_storage=True
>>  maintenance=False
>>  state=EngineDown
>>  stopped=False
>>
>>
>> !! Cluster is in GLOBAL MAINTENANCE mode !!
>>
>> You have new mail in /var/spool/mail/root
>> [root@milhouse-main ~]# hosted-engine --vm-start
>> VM exists and is down, cleaning up and restarting
>> VM in WaitForLaunch
>>
>> [root@milhouse-main ~]#
>>
>> And when I list all vms, I can see the hosted engine is in the Shut
>Off status and the managed vms are all paused
>>
>>
>> [root@milhouse-main ~]# virsh -r list --all
>> setlocale: No such file or directory
>>  IdName   State
>> 
>>  2 hp_gpu-node11  paused
>>  3 fp_gpu-node5   paused
>>  4 hp_gpu-node10  paused
>>  5 hp_gpu-node7   paused
>>  6 cpu-node3  paused
>>  7 hp_gpu-node5   paused
>>  8 fp_gpu-node1   paused
>>  9 fp_gpu-node0   paused
>>  10cpu-node1  paused
>>  11fp_gpu-node6   paused
>>  12hp_gpu-node8   paused
>>  13fp_gpu-node10  paused
>>  14fp_gpu-node4   paused
>>  15fp_gpu-node9   paused
>>  16hp_gpu-node4   paused
>>  17fp_gpu-node15  paused
>>  18fp_gpu-node8   paused
>>  19hp_gpu-node0   paused
>>  20fp_gpu-node14  paused
>>  21fp_gpu-node2   paused
>>  22fp_gpu-node11  paused
>>  23hp_gpu-node9   paused
>>  24cpu-node2  paused
>>  25hp_gpu-node1   paused
>>  26hp_gpu-node2   paused
>>  27fp_gpu-node12  paused
>>  28hp_gpu-node3   paused
>>  29hp_gpu-node6   paused
>>  30infra-vm   paused
>>  31cpu-node0  paused
>>  32fp_gpu-node3   paused
>>  33fp_gpu-node7   paused
>>  34fp_gpu-node13  paused
>>  35bigip-16.1x-milhouse   paused
>>  - HostedEngine   shut off
>>
>> [root@milhouse-main ~]#
>>
>>
>> I don't want to reboot the host server, because I could lose all my
>> VMs.
>>
>> Can someone help here please?
>>
>> Thanks.
>>
>> Regards,
>> Eugène NG
>> --
>> LesCDN 
>> engont...@lescdn.com
>> 
>> *Aux hommes il faut un chef, et au*
>>
>> * chef il faut des hommes!L'habit ne fait pas le moine, mais
>lorsqu'on te
>> voit on te juge!*
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: 

[ovirt-users] Re: Unable start self hosted engine after accidental shut down

2021-05-18 Thread Edward Berger
With all the other VMs paused, I would guess all the VM disk image storage
is offline or unreachable
from the hypervisor.

Log in to this hypervisor host and run 'df -kh' to see what's mounted.

Check the file serving from the hosts there.



On Tue, May 18, 2021 at 4:33 PM Eugène Ngontang  wrote:

> Hi,
>
> Our self hosted engine has been accidentally shut down by a teammate and
> now I'm trying hard to get it back up without success.
>
> I've tried the --vm-start command but it says the VM is in WaitForLaunch
> status.
>
> I've set the global maintenance mode but it does nothing.
>
> root@milhouse-main ~]# hosted-engine --vm-start
> VM exists and is down, cleaning up and restarting
> VM in WaitForLaunch
>
> [root@milhouse-main ~]# hosted-engine --set-maintenance --mode=global
> [root@milhouse-main ~]# hosted-engine --vm-status
>
>
> !! Cluster is in GLOBAL MAINTENANCE mode !!
>
>
>
> --== Host milhouse-main.envrmnt.local (id: 1) status ==--
>
> conf_on_shared_storage : True
> Status up-to-date  : False
> Hostname   : milhouse-main.envrmnt.local
> Host ID: 1
> Engine status  : unknown stale-data
> Score  : 3400
> stopped: False
> Local maintenance  : False
> crc32  : 931b2db9
> local_conf_timestamp   : 1642052
> Host timestamp : 1642052
> Extra metadata (valid at timestamp):
>   metadata_parse_version=1
>   metadata_feature_version=1
>   timestamp=1642052 (Tue May 18 19:52:59 2021)
>   host-id=1
>   score=3400
>   vm_conf_refresh_time=1642052 (Tue May 18 19:53:00 2021)
>   conf_on_shared_storage=True
>   maintenance=False
>   state=EngineDown
>   stopped=False
>
>
> !! Cluster is in GLOBAL MAINTENANCE mode !!
>
> You have new mail in /var/spool/mail/root
> [root@milhouse-main ~]# hosted-engine --vm-start
> VM exists and is down, cleaning up and restarting
> VM in WaitForLaunch
>
> [root@milhouse-main ~]#
>
> And when I list all vms, I can see the hosted engine is in the Shut Off 
> status and the managed vms are all paused
>
>
> [root@milhouse-main ~]# virsh -r list --all
> setlocale: No such file or directory
>  IdName   State
> 
>  2 hp_gpu-node11  paused
>  3 fp_gpu-node5   paused
>  4 hp_gpu-node10  paused
>  5 hp_gpu-node7   paused
>  6 cpu-node3  paused
>  7 hp_gpu-node5   paused
>  8 fp_gpu-node1   paused
>  9 fp_gpu-node0   paused
>  10cpu-node1  paused
>  11fp_gpu-node6   paused
>  12hp_gpu-node8   paused
>  13fp_gpu-node10  paused
>  14fp_gpu-node4   paused
>  15fp_gpu-node9   paused
>  16hp_gpu-node4   paused
>  17fp_gpu-node15  paused
>  18fp_gpu-node8   paused
>  19hp_gpu-node0   paused
>  20fp_gpu-node14  paused
>  21fp_gpu-node2   paused
>  22fp_gpu-node11  paused
>  23hp_gpu-node9   paused
>  24cpu-node2  paused
>  25hp_gpu-node1   paused
>  26hp_gpu-node2   paused
>  27fp_gpu-node12  paused
>  28hp_gpu-node3   paused
>  29hp_gpu-node6   paused
>  30infra-vm   paused
>  31cpu-node0  paused
>  32fp_gpu-node3   paused
>  33fp_gpu-node7   paused
>  34fp_gpu-node13  paused
>  35bigip-16.1x-milhouse   paused
>  - HostedEngine   shut off
>
> [root@milhouse-main ~]#
>
>
> I don't want to reboot the host server, because I could lose all my VMs.
>
> Can someone help here please?
>
> Thanks.
>
> Regards,
> Eugène NG
> --
> LesCDN 
> engont...@lescdn.com
> 
> *Aux hommes il faut un chef, et au*
>
> * chef il faut des hommes!L'habit ne fait pas le moine, mais lorsqu'on te
> voit on te juge!*
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZA7FHNC3K7TXF3P47LZP7JNKNO4QCB4M/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe 

[ovirt-users] Re: Gluster volumes not healing (perhaps after host maintenance?)

2021-05-18 Thread Marco Fais
Hi David,

I just spotted this post from a couple of weeks ago -- I have had the same
problem (Gluster volumes not healing) since the upgrade from 7.x to 8.4:
the exact same errors in glustershd.log, and the same errors if I try to heal
manually.

Typically I can get the volume healed by killing the specific brick
processes manually and forcing a volume start (to restart the failed
bricks).
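
For reference, that manual workaround is roughly the following (VOLNAME is a placeholder):

gluster volume heal VOLNAME info      # list entries pending heal
gluster volume status VOLNAME         # note the PID of the stuck brick process
kill <brick-pid>                      # stop only that brick
gluster volume start VOLNAME force    # restart the killed brick(s)
gluster volume heal VOLNAME           # trigger the heal again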

Just wondering if you've got any progress on your side?

I have also tried upgrading to 9.1 on one of the clusters (I have three
different ones affected), but it didn't solve the issue.

Regards.
Marco

On Mon, 26 Apr 2021 at 21:55, David White via Users  wrote:

> I did have my /etc/hosts set up on all 3 of the oVirt hosts in the format
> you described, with the exception of the trailing "host1" and "host2". I
> only had the FQDN in there.
>
> I had an outage of almost an hour this morning that may or may not be
> related to this. An "ETL Service" started, at which point a lot of things
> broke down, and I saw a lot of storage-related errors. Everything came back
> on its own, though.
>
> See my other thread that I just started on that topic.
> As of now, there are NOT indications that any of the volumes or disks are
> out of sync.
>
>
> Sent with ProtonMail  Secure Email.
>
> ‐‐‐ Original Message ‐‐‐
> On Sunday, April 25, 2021 1:43 AM, Strahil Nikolov via Users <
> users@ovirt.org> wrote:
>
> A/ & PTR records are pretty important.
> As long as you set up your /etc/hosts in a format like this, you will be
> OK:
>
> 10.10.10.10 host1.anysubdomain.domain host1
> 10.10.10.11 host2.anysubdomain.domain host2
>
> Usually the hostname is defined for each peer in
> /var/lib/glusterd/peers. Can you check the contents on all nodes?
>
> Best Regards,
> Strahil Nikolov
>
> On Sat, Apr 24, 2021 at 21:57, David White via Users
>  wrote:
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/CYPYALTFM7ITZZENSI6R5E6ZNT7TRY5Y/
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NU6PXEUVVSCHVUIYTJRFOO72ZCJBWGVG/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4YLOE6ZX4W4XXEY72Q5ZJIZDKMNPEDO2/


[ovirt-users] Unable start self hosted engine after accidental shut down

2021-05-18 Thread Eugène Ngontang
Hi,

Our self hosted engine has been accidentally shut down by a teammate and
now I'm trying hard to get it back up without success.

I've tried the --vm-start command but it says the VM is in WaitForLaunch
status.

I've set the global maintenance mode but it does nothing.

root@milhouse-main ~]# hosted-engine --vm-start
VM exists and is down, cleaning up and restarting
VM in WaitForLaunch

[root@milhouse-main ~]# hosted-engine --set-maintenance --mode=global
[root@milhouse-main ~]# hosted-engine --vm-status


!! Cluster is in GLOBAL MAINTENANCE mode !!



--== Host milhouse-main.envrmnt.local (id: 1) status ==--

conf_on_shared_storage : True
Status up-to-date  : False
Hostname   : milhouse-main.envrmnt.local
Host ID: 1
Engine status  : unknown stale-data
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : 931b2db9
local_conf_timestamp   : 1642052
Host timestamp : 1642052
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1642052 (Tue May 18 19:52:59 2021)
host-id=1
score=3400
vm_conf_refresh_time=1642052 (Tue May 18 19:53:00 2021)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False


!! Cluster is in GLOBAL MAINTENANCE mode !!

You have new mail in /var/spool/mail/root
[root@milhouse-main ~]# hosted-engine --vm-start
VM exists and is down, cleaning up and restarting
VM in WaitForLaunch

[root@milhouse-main ~]#

And when I list all vms, I can see the hosted engine is in the Shut
Off status and the managed vms are all paused


[root@milhouse-main ~]# virsh -r list --all
setlocale: No such file or directory
 IdName   State

 2 hp_gpu-node11  paused
 3 fp_gpu-node5   paused
 4 hp_gpu-node10  paused
 5 hp_gpu-node7   paused
 6 cpu-node3  paused
 7 hp_gpu-node5   paused
 8 fp_gpu-node1   paused
 9 fp_gpu-node0   paused
 10cpu-node1  paused
 11fp_gpu-node6   paused
 12hp_gpu-node8   paused
 13fp_gpu-node10  paused
 14fp_gpu-node4   paused
 15fp_gpu-node9   paused
 16hp_gpu-node4   paused
 17fp_gpu-node15  paused
 18fp_gpu-node8   paused
 19hp_gpu-node0   paused
 20fp_gpu-node14  paused
 21fp_gpu-node2   paused
 22fp_gpu-node11  paused
 23hp_gpu-node9   paused
 24cpu-node2  paused
 25hp_gpu-node1   paused
 26hp_gpu-node2   paused
 27fp_gpu-node12  paused
 28hp_gpu-node3   paused
 29hp_gpu-node6   paused
 30infra-vm   paused
 31cpu-node0  paused
 32fp_gpu-node3   paused
 33fp_gpu-node7   paused
 34fp_gpu-node13  paused
 35bigip-16.1x-milhouse   paused
 - HostedEngine   shut off

[root@milhouse-main ~]#
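
A few checks that usually help in this situation (a sketch, not a guaranteed fix):

systemctl status ovirt-ha-agent ovirt-ha-broker   # both must be running for the HA state to update
tail -n 50 /var/log/ovirt-hosted-engine-ha/agent.log
tail -n 50 /var/log/ovirt-hosted-engine-ha/broker.log
df -h /rhev/data-center/mnt/                      # the hosted-engine storage domain must be mounted
hosted-engine --set-maintenance --mode=none       # leave global maintenance so the agent can start the VM
hosted-engine --vm-status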


I don't want to reboot the host server, because I could lose all my VMs.

Can someone help here please?

Thanks.

Regards,
Eugène NG
-- 
LesCDN 
engont...@lescdn.com

*Aux hommes il faut un chef, et au*

* chef il faut des hommes!L'habit ne fait pas le moine, mais lorsqu'on te
voit on te juge!*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZA7FHNC3K7TXF3P47LZP7JNKNO4QCB4M/


[ovirt-users] Re: unable to login cockpit using root after upgrading to 4.4.6

2021-05-18 Thread Glenn Farmer
Gianluca, I hope my frustration didn't come across too strongly - I apologize
if so. I certainly now understand your posting of 4.4.5 as a diff source
against 4.4.6 - thanks! - regards - Glenn
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CSDN3CMTA65LQOK7Z3O3E5ZBIV4PVZHW/


[ovirt-users] Re: unable to login cockpit using root after upgrading to 4.4.6

2021-05-18 Thread Edward Berger
I swapped out the /etc/authselect login and system files, and it seems that
the updated Node 4.4.6 PAM stack is calling /usr/sbin/unix_chkpwd and that
fails for all cockpit users, root and otherwise.

for root
May 18 13:03:02 br014 unix_chkpwd[14186]: check pass; user unknown
May 18 13:03:02 br014 unix_chkpwd[14187]: check pass; user unknown
May 18 13:03:02 br014 unix_chkpwd[14187]: password check failed for user (root)

for local user account >1000 UID
May 18 13:03:28 br014 unix_chkpwd[14309]: could not obtain user info (e##)
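
Since the PAM files themselves look unchanged, comparing the active authselect profile and verifying the files shipped by the pam package might narrow it down (a sketch, not a known fix):

authselect current                 # which profile/features are active
authselect check                   # validate the current configuration
rpm -Vf /usr/sbin/unix_chkpwd      # verify mode/ownership of the pam package's files
ls -l /usr/sbin/unix_chkpwd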


On Tue, May 18, 2021 at 12:02 PM Edward Berger  wrote:

> /etc/pam.d/cockpit under node 4.4.6 is the same as you posted.
> Something else changed.
>
> #%PAM-1.0
> # this MUST be first in the "auth" stack as it sets PAM_USER
> # user_unknown is definitive, so die instead of ignore to avoid subsequent
> modules mess up the error code
> -auth  [success=done new_authtok_reqd=done user_unknown=die
> default=ignore]   pam_cockpit_cert.so
> auth   required pam_sepermit.so
> auth   substack password-auth
> auth   include  postlogin
> auth   optional pam_ssh_add.so
> accountrequired pam_nologin.so
> accountinclude  password-auth
> password   include  password-auth
> # pam_selinux.so close should be the first session rule
> sessionrequired pam_selinux.so close
> sessionrequired pam_loginuid.so
> # pam_selinux.so open should only be followed by sessions to be executed
> in the user context
> sessionrequired pam_selinux.so open env_params
> sessionoptional pam_keyinit.so force revoke
> sessionoptional pam_ssh_add.so
> sessioninclude  password-auth
> sessioninclude  postlogin
>
>
> On Tue, May 18, 2021 at 11:50 AM Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> On Tue, May 18, 2021 at 4:50 PM Glenn Farmer 
>> wrote:
>>
>>> The current thread is about 4.4.6 - nice that you can login to your
>>> 4.4.5.
>>>
>>
>> The subject of the thread says it all... ;-)
>> My point was to ask if you see differences in /etc/pam.d/cockpit in your
>> 4.4.6, in respect with the version I pasted for my 4.4.5 or if they are the
>> same.
>> I cannot compare as I have not yet 4.4.6 installed
>>
>>
>>> I changed the admin password on the engine - still cannot access the
>>> Cockpit GUI on any of my hosts.
>>>
>>
>> The cockpit gui for the host is accessed through users defined on the
>> hosts, not on engine side. It is not related to the admin engine web admi
>> gui...
>> I think you can configure a normal user on your hypervisor host and see
>> if you can use it to connect to the cockpit gui or if you receive error.
>> Do you need any particular functionality to use the root user?
>>
>> HIH,
>> Gianluca
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VSM4BLBD36MFNXR5OXS4QWWHHGQXXZIP/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QADZ4TFIUNUKCCCBXE7LT6MHFXDNVARG/


[ovirt-users] Re: Gluster Geo-Replication Fails

2021-05-18 Thread Strahil Nikolov via Users
Now, to make it perfect, leave it running and analyze the AVCs with semanage.
In the end SELinux will remain enabled and geo-rep should keep running as well.
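
A rough outline of that AVC analysis (the module name below is just a placeholder):

ausearch -m avc -ts recent                                 # list recent denials
sealert -a /var/log/audit/audit.log                        # human-readable analysis
ausearch -m avc -ts recent | audit2allow -M georep-local   # build a local policy module
semodule -i georep-local.pp                                # load it, then re-test while enforcing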

I've previously tried the rpm generated from 
https://github.com/gluster/glusterfs-selinux but it didn't help at that time. 
If possible, give it a try.
Best Regards,
Strahil Nikolov

On Tue, May 18, 2021 at 16:09, Simon Scott wrote:
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GB23MBKEQKZNKLHBUN2EC5VFLXVZKDCV/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Y2DUGZNZMMHZ6AM3VH2DY4VZDCC7WF3T/


[ovirt-users] Re: unable to login cockpit using root after upgrading to 4.4.6

2021-05-18 Thread Edward Berger
/etc/pam.d/cockpit under node 4.4.6 is the same as you posted.
Something else changed.

#%PAM-1.0
# this MUST be first in the "auth" stack as it sets PAM_USER
# user_unknown is definitive, so die instead of ignore to avoid subsequent
modules mess up the error code
-auth  [success=done new_authtok_reqd=done user_unknown=die
default=ignore]   pam_cockpit_cert.so
auth   required pam_sepermit.so
auth   substack password-auth
auth   include  postlogin
auth   optional pam_ssh_add.so
accountrequired pam_nologin.so
accountinclude  password-auth
password   include  password-auth
# pam_selinux.so close should be the first session rule
sessionrequired pam_selinux.so close
sessionrequired pam_loginuid.so
# pam_selinux.so open should only be followed by sessions to be executed in
the user context
sessionrequired pam_selinux.so open env_params
sessionoptional pam_keyinit.so force revoke
sessionoptional pam_ssh_add.so
sessioninclude  password-auth
sessioninclude  postlogin


On Tue, May 18, 2021 at 11:50 AM Gianluca Cecchi 
wrote:

> On Tue, May 18, 2021 at 4:50 PM Glenn Farmer 
> wrote:
>
>> The current thread is about 4.4.6 - nice that you can login to your 4.4.5.
>>
>
> The subject of the thread says it all... ;-)
> My point was to ask if you see differences in /etc/pam.d/cockpit in your
> 4.4.6, in respect with the version I pasted for my 4.4.5 or if they are the
> same.
> I cannot compare as I have not yet 4.4.6 installed
>
>
>> I changed the admin password on the engine - still cannot access the
>> Cockpit GUI on any of my hosts.
>>
>
> The cockpit gui for the host is accessed through users defined on the
> hosts, not on engine side. It is not related to the admin engine web admi
> gui...
> I think you can configure a normal user on your hypervisor host and see if
> you can use it to connect to the cockpit gui or if you receive error.
> Do you need any particular functionality to use the root user?
>
> HIH,
> Gianluca
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VSM4BLBD36MFNXR5OXS4QWWHHGQXXZIP/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4ZXGJTICEWBOLFZJ2FLQULS3BZE4CP5G/


[ovirt-users] Re: unable to login cockpit using root after upgrading to 4.4.6

2021-05-18 Thread Gianluca Cecchi
On Tue, May 18, 2021 at 4:50 PM Glenn Farmer 
wrote:

> The current thread is about 4.4.6 - nice that you can login to your 4.4.5.
>

The subject of the thread says it all... ;-)
My point was to ask whether you see differences in /etc/pam.d/cockpit in your
4.4.6 with respect to the version I pasted from my 4.4.5, or if they are the
same. I cannot compare, as I don't have 4.4.6 installed yet.


> I changed the admin password on the engine - still cannot access the
> Cockpit GUI on any of my hosts.
>

The Cockpit GUI for the host is accessed through users defined on the
hosts, not on the engine side. It is not related to the engine web admin
GUI...
I think you can configure a normal user on your hypervisor host and see if
you can use it to connect to the Cockpit GUI, or if you receive an error.
Do you need any particular functionality from the root user?
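
For example, a throwaway local user for that test could be created like this (Cockpit listens on port 9090 by default):

useradd cockpittest
passwd cockpittest

Then browse to https://<host>:9090 and try logging in as that user.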

HIH,
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VSM4BLBD36MFNXR5OXS4QWWHHGQXXZIP/


[ovirt-users] Re: unable to login cockpit using root after upgrading to 4.4.6

2021-05-18 Thread Edward Berger
I see the same thing on a test cluster. I presumed it was due to using
sssd and Kerberos for local user logins.
Here's an example of what gets written to /var/log/secure when a root login
to cockpit fails.
to cockpit fails.
May 18 11:29:07 br014 unix_chkpwd[26429]: check pass; user unknown
May 18 11:29:07 br014 unix_chkpwd[26430]: check pass; user unknown
May 18 11:29:07 br014 unix_chkpwd[26430]: password check failed for user (root)
May 18 11:29:07 br014 cockpit-session[26427]: pam_unix(cockpit:auth): authentication failure; logname= uid=993 euid=993 tty= ruser= rhost=:::128.182.79.36  user=root
May 18 11:29:07 br014 cockpit-session[26427]: pam_succeed_if(cockpit:auth): requirement "uid >= 1000" not met by user "root"

The uid test is obviously an issue. I'm not sure why the password check
always reports "user unknown". I tried putting SELinux in permissive mode;
same issue.
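
To see where that "uid >= 1000" requirement comes from, something like this should show the responsible PAM/authselect files (just a sketch):

grep -rn pam_succeed_if /etc/pam.d /etc/authselect
authselect current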


On Tue, May 18, 2021 at 10:50 AM Glenn Farmer 
wrote:

> The current thread is about 4.4.6 - nice that you can login to your 4.4.5.
>
> I changed the admin password on the engine - still cannot access the
> Cockpit GUI on any of my hosts.
>
> Do I have to reboot them?  Restart Cockpit - tried that - failed.
>
> Cannot access Cockpit on all hosts in a cluster after upgrading to 4.4.6
> really should be considered a bug.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VZPGTQUWDUPJWVRHFNUARRSB3EDX7PLX/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BCPXOJ2GKWEEONMQ53775H6RD2FXFOWP/


[ovirt-users] Re: unable to login cockpit using root after upgrading to 4.4.6

2021-05-18 Thread Glenn Farmer
The current thread is about 4.4.6 - nice that you can login to your 4.4.5.

I changed the admin password on the engine - still cannot access the Cockpit 
GUI on any of my hosts.

Do I have to reboot them?  Restart Cockpit - tried that - failed.

Being unable to access Cockpit on any host in a cluster after upgrading to 4.4.6
really should be considered a bug.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VZPGTQUWDUPJWVRHFNUARRSB3EDX7PLX/


[ovirt-users] Re: oVirt deploy new HE Host problem

2021-05-18 Thread Marko Vrgotic
Hi Yedidyah,

This feels like we are both just repeating ourselves. I have been mentioning
from the beginning that this is on the latest 4.3. I understand the oVirt dream
team cannot support all versions, but as I already mentioned, I have a big
production platform that I cannot just upgrade overnight. This is also why I have
been debugging so much on my own and pasting info, expecting to get some help with direction.

Regarding logs, no problem, I will upload entire bunch and share the link.

Just to be clear, I do not expect my platform to be fixed with a patch (EOL is a
clear enough message), but I do want to understand:

  *   what went wrong
  *   why it is behaving like this
  *   if more damage is going to occur, and how serious it is
  *   if it can be fixed without redeploying, great, but then the question is:
 *   whether the damage done is going to allow a restore
 *   and whether I can even upgrade to 4.4 in this state.

-
kind regards/met vriendelijke groeten

Marko Vrgotic
Sr. System Engineer @ System Administration

ActiveVideo
o: +31 (35) 6774131
m: +31 (65) 5734174
e: m.vrgo...@activevideo.com
w: www.activevideo.com

ActiveVideo Networks BV. Mediacentrum 3745 Joop van den Endeplein 1.1217 WJ 
Hilversum, The Netherlands. The information contained in this message may be 
legally privileged and confidential. It is intended to be read only by the 
individual or entity to whom it is addressed or by their designee. If the 
reader of this message is not the intended recipient, you are on notice that 
any distribution of this message, in any form, is strictly prohibited.  If you 
have received this message in error, please immediately notify the sender 
and/or ActiveVideo Networks, LLC by telephone at +1 408.931.9200 and delete or 
destroy any copy of this message.



From: Yedidyah Bar David 
Date: Tuesday, 18 May 2021 at 15:44
To: Marko Vrgotic 
Cc: Strahil Nikolov , users@ovirt.org 
Subject: Re: [ovirt-users] Re: oVirt deploy new HE Host problem

On Mon, May 17, 2021 at 10:34 AM Marko Vrgotic
 wrote:
>
> Hi gentleman,
>
>
>
> Hope you had a great weekend.
>
> Can I assume that you will be able to look into log files this week ?
>
>
>
> As per Yedidyah’s comment, I stopped troubleshooting .
>
>
>
> Kindly awaiting your reply.


Hi Marko,

Please upload somewhere all of /var/log from all hosts and the engine,
and share a link. Thanks.

In particular, you didn't include 'hosted-engine --deploy' logs from
/var/log/ovirt-hosted-engine-setup.

Also: the attached ovirt-host-deploy log indicates this is on 4.3,
which is EOL and unsupported.

Best regards,
--
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7KGC3CJD6NUK7RLVMMT4RPHZ3NCTTLRY/


[ovirt-users] Re: oVirt deploy new HE Host problem

2021-05-18 Thread Yedidyah Bar David
On Mon, May 17, 2021 at 10:34 AM Marko Vrgotic
 wrote:
>
> Hi gentleman,
>
>
>
> Hope you had a great weekend.
>
> Can I assume that you will be able to look into log files this week ?
>
>
>
> As per Yedidyah’s comment, I stopped troubleshooting .
>
>
>
> Kindly awaiting your reply.


Hi Marko,

Please upload somewhere all of /var/log from all hosts and the engine,
and share a link. Thanks.

In particular, you didn't include 'hosted-engine --deploy' logs from
/var/log/ovirt-hosted-engine-setup.
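
One simple way to collect everything (a sketch; adjust as needed) is to run, on each host and on the engine:

tar czf /tmp/$(hostname -s)-var-log.tar.gz /var/log

and then upload the resulting tarballs somewhere and share the link.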

Also: the attached ovirt-host-deploy log indicates this is on 4.3,
which is EOL and unsupported.

Best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GGBBHH7QBHHN2E3MFS6JRL6QDJDZZJ4Q/


[ovirt-users] Re: Gluster Geo-Replication Fails

2021-05-18 Thread Simon Scott
Perfect, worked a treat - thanks Strahil 


From: Strahil Nikolov 
Sent: Tuesday 18 May 2021 04:10
To: Simon Scott ; users@ovirt.org 
Subject: Re: [ovirt-users] Re: Gluster Geo-Replication Fails

If you are running on EL8 -> It's the SELINUX.
To verify that,  stop the session and use 'setenforce 0' on both source and 
destination.

To make it work with SELINUX , you will need to use 'sealert -a' extensively 
(yum whatprovides '*/sealert').
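
A rough sequence for that test (MASTERVOL and SLAVEHOST::SLAVEVOL are placeholders for the actual session):

gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL stop
setenforce 0     # on both source and destination nodes
gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL start
gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL status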

Best Regards,
Strahil Nikolov

Typo - That's TWO sites...

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to 
users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZHPGXOENFSY6XILYSSXAX6CAQ6WFJVQ7/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GB23MBKEQKZNKLHBUN2EC5VFLXVZKDCV/


[ovirt-users] Re: poweroff and reboot with ovirt_vm ansible module

2021-05-18 Thread Alessio B.
Thank you very much for the clear help!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2KO3C5IYUGPQTC7G2NB4FA6CP2WSMEJW/


[ovirt-users] Re: Problems provisioning 4.4.6 hosted engine

2021-05-18 Thread Yedidyah Bar David
On Thu, May 13, 2021 at 3:34 PM Sketch  wrote:
>
> This is a new system with CentOS 8.3, the oVirt-4.4 repo, and all
> updates applied.  When I try to install the hosted engine with my engine
> backup from 4.3.10, the installation fails with a too many open files
> error.  My 8.3 hosts already had 1M system max files, which is more than
> any of my CentOS 7/oVirt 4.3 hosts have.  I tried increasing it to 2M with
> no luck, so my suspicion is that the error is on the engine itself?
>
> I tried provisioning a new engine just to test, and I get SSH key errors
> instead of this one.
>
> Any suggestions?
>
> 2021-05-12 23:09:44,731-0700 ERROR ansible failed {
>  "ansible_host": "localhost",
>  "ansible_playbook": 
> "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
>  "ansible_result": {
>  "_ansible_no_log": false,
>  "exception": "Traceback (most recent call last):\n  File 
> \"/usr/lib/python3.6/site-packages/ansible/executor/task_executor.py\", line 
> 665, in _execute\nresult = self._handler.run(task_vars=variables)\n  File 
> \"/usr/lib/python3.6/site-packages/ansible/plugins/action/wait_for_connection.py\",
>  line 122, in run\n
> self._remove_tmp_path(self._connection._shell.tmpdir)\n  File 
> \"/usr/lib/python3.6/site-packages/ansible/plugins/action/__init__.py\", line 
> 417, in _remove_tmp_path\ntmp_rm_res = 
> self._low_level_execute_command(cmd, sudoable=False)\n  File 
> \"/usr/lib/python3.6/site-packages/ansible/plugins/action/__init__.py\", line 
> 1085, in _low_level_execute_command\nrc, stdout, stderr = 
> self._connection.exec_command(cmd, in_data=in_data, sudoable=sudoable)\n  
> File \"/usr/lib/python3.6/site-packages/ansible/plugins/connection/ssh.py\", 
> line 1191, in exec_command\ncmd = self._build_command(*args)\n  File 
> \"/usr/lib/python3.6/site-packages/ansible/plugins/connection/s
>  sh.py\", line 562, in _build_command\nself.sshpass_pipe = 
> os.pipe()\nOSError: [Errno 24] Too many open files\n\nDuring handling of the 
> above exception, another exception occurred:\n\nTraceback (most recent call 
> last):\n  File 
> \"/usr/lib/python3.6/site-packages/ansible/executor/task_executor.py\", line 
> 147, in run\nres = self._execute()\n  File 
> \"/usr/lib/python3.6/site-packages/ansible/executor/task_executor.py\", line 
> 673, in _execute\nself._handler.cleanup()\n  File 
> \"/usr/lib/python3.6/site-packages/ansible/plugins/action/__init__.py\", line 
> 128, in cleanup\nself._remove_tmp_path(self._connection._shell.tmpdir)\n  
> File \"/usr/lib/python3.6/site-packages/ansible/plugins/action/__init__.py\", 
> line 417, in _remove_tmp_path\ntmp_rm_res = 
> self._low_level_execute_command(cmd, sudoable=False)\n  File 
> \"/usr/lib/python3.6/site-packages/ansible/plugins/action/__init__.py\", line 
> 1085, in _low_level_execute_command\nrc, stdout, stderr = 
> self._connection.exec_command
>  (cmd, in_data=in_data, sudoable=sudoable)\n  File 
> \"/usr/lib/python3.6/site-packages/ansible/plugins/connection/ssh.py\", line 
> 1191, in exec_command\ncmd = self._build_command(*args)\n  File 
> \"/usr/lib/python3.6/site-packages/ansible/plugins/connection/ssh.py\", line 
> 562, in _build_command\nself.sshpass_pipe = os.pipe()\nOSError: [Errno 
> 24] Too many open files\n",
>  "msg": "Unexpected failure during module execution.",
>  "stdout": ""
>  },
>  "ansible_task": "Wait for the local VM",
>  "ansible_type": "task",
>  "status": "FAILED",
>  "task_duration": 3605

So I suppose it failed after 3605 seconds, or 721 attempts (of 5 seconds each).

Do you see the VM in 'virsh list'?
Can you see the VM running (e.g. 'ps auxww | grep qemu')?
Can you try to ssh to it from the host (search the logs for
local_vm_ip for its local/private temporary address)?
Perhaps open its console (Perhaps 'virsh console HostedEngineLocal')?

That said, I'd personally also consider it a bug in ansible, unless
you made some relevant custom changes - the bug is that it seems to
leak open files.

Thanks and best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ICRTPOJ6HNZI3RUP6OFWKEKCWOF7J7KG/


[ovirt-users] [ANN] Async release for oVirt 4.4.6

2021-05-18 Thread Lev Veyde
oVirt 4.4.6 Async update #3


On May 18th 2021 the oVirt project released an async update to the
following packages:

   - Vdsm 4.40.60.7
   - oVirt Node 4.4.6.3

Fixing the following bugs:

   - Bug 1959945 - [NBDE] RHVH 4.4.6 host fails to startup, without prompting for passphrase
   - Bug 1955571 - Verify if we still need to omit ifcfg and clevis dracut modules for properly working bridged network
   - Bug 1950209 - Leaf images used by the VM is deleted by the engine during snapshot merge

oVirt Node Changes:

- Consume above oVirt updates

- Updated to Gluster 8.5


Full diff list:

--- ovirt-node-ng-image-4.4.6.2.manifest-rpm 2021-05-14 08:58:12.581488678 +0200
+++ ovirt-node-ng-image-4.4.6.3.manifest-rpm 2021-05-18 13:09:07.858527812 +0200
@@ -220,7 +220,7 @@
-glusterfs-8.4-1.el8.x86_64
-glusterfs-cli-8.4-1.el8.x86_64
-glusterfs-client-xlators-8.4-1.el8.x86_64
-glusterfs-events-8.4-1.el8.x86_64
-glusterfs-fuse-8.4-1.el8.x86_64
-glusterfs-geo-replication-8.4-1.el8.x86_64
-glusterfs-server-8.4-1.el8.x86_64
+glusterfs-8.5-1.el8.x86_64
+glusterfs-cli-8.5-1.el8.x86_64
+glusterfs-client-xlators-8.5-1.el8.x86_64
+glusterfs-events-8.5-1.el8.x86_64
+glusterfs-fuse-8.5-1.el8.x86_64
+glusterfs-geo-replication-8.5-1.el8.x86_64
+glusterfs-server-8.5-1.el8.x86_64
@@ -383,6 +383,6 @@
-libgfapi0-8.4-1.el8.x86_64
-libgfchangelog0-8.4-1.el8.x86_64
-libgfrpc0-8.4-1.el8.x86_64
-libgfxdr0-8.4-1.el8.x86_64
-libglusterd0-8.4-1.el8.x86_64
-libglusterfs0-8.4-1.el8.x86_64
+libgfapi0-8.5-1.el8.x86_64
+libgfchangelog0-8.5-1.el8.x86_64
+libgfrpc0-8.5-1.el8.x86_64
+libgfxdr0-8.5-1.el8.x86_64
+libglusterd0-8.5-1.el8.x86_64
+libglusterfs0-8.5-1.el8.x86_64
@@ -643 +643 @@
-ovirt-node-ng-image-update-placeholder-4.4.6.2-1.el8.noarch
+ovirt-node-ng-image-update-placeholder-4.4.6.3-1.el8.noarch
@@ -651,2 +651,2 @@
-ovirt-release-host-node-4.4.6.2-1.el8.noarch
-ovirt-release44-4.4.6.2-1.el8.noarch
+ovirt-release-host-node-4.4.6.3-1.el8.noarch
+ovirt-release44-4.4.6.3-1.el8.noarch
@@ -754 +754 @@
-python3-gluster-8.4-1.el8.x86_64
+python3-gluster-8.5-1.el8.x86_64
@@ -940,15 +940,15 @@
-vdsm-4.40.60.6-1.el8.x86_64
-vdsm-api-4.40.60.6-1.el8.noarch
-vdsm-client-4.40.60.6-1.el8.noarch
-vdsm-common-4.40.60.6-1.el8.noarch
-vdsm-gluster-4.40.60.6-1.el8.x86_64
-vdsm-hook-ethtool-options-4.40.60.6-1.el8.noarch
-vdsm-hook-fcoe-4.40.60.6-1.el8.noarch
-vdsm-hook-openstacknet-4.40.60.6-1.el8.noarch
-vdsm-hook-vhostmd-4.40.60.6-1.el8.noarch
-vdsm-hook-vmfex-dev-4.40.60.6-1.el8.noarch
-vdsm-http-4.40.60.6-1.el8.noarch
-vdsm-jsonrpc-4.40.60.6-1.el8.noarch
-vdsm-network-4.40.60.6-1.el8.x86_64
-vdsm-python-4.40.60.6-1.el8.noarch
-vdsm-yajsonrpc-4.40.60.6-1.el8.noarch
+vdsm-4.40.60.7-1.el8.x86_64
+vdsm-api-4.40.60.7-1.el8.noarch
+vdsm-client-4.40.60.7-1.el8.noarch
+vdsm-common-4.40.60.7-1.el8.noarch
+vdsm-gluster-4.40.60.7-1.el8.x86_64
+vdsm-hook-ethtool-options-4.40.60.7-1.el8.noarch
+vdsm-hook-fcoe-4.40.60.7-1.el8.noarch
+vdsm-hook-openstacknet-4.40.60.7-1.el8.noarch
+vdsm-hook-vhostmd-4.40.60.7-1.el8.noarch
+vdsm-hook-vmfex-dev-4.40.60.7-1.el8.noarch
+vdsm-http-4.40.60.7-1.el8.noarch
+vdsm-jsonrpc-4.40.60.7-1.el8.noarch
+vdsm-network-4.40.60.7-1.el8.x86_64
+vdsm-python-4.40.60.7-1.el8.noarch
+vdsm-yajsonrpc-4.40.60.7-1.el8.noarch
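
After a node picks up this async update, the result can be verified with something like:

nodectl info                    # shows the installed image layers
rpm -q vdsm glusterfs-server    # should report 4.40.60.7 / 8.5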

-- 
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
l...@redhat.com | lve...@redhat.com
TRIED. TESTED. TRUSTED.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/X5G2USCO4FK6XFN6PEPKRFOT6OZQCUEU/


[ovirt-users] Re: Ovirt Engine -- Connection Refused to all hosts

2021-05-18 Thread Artur Socha
Hi Nick,
Could you post some more information about your setup?
In particular it would be useful to have the following:
1) ovirt-engine version
2) vdsm-jsonrpc-java version
3) vdsm logs from the host (/var/log/vdsm/{vdsm,supervdsm}.log, check for
errors & warnings)
4) libvirt logs (if any), journalctl -u libvirtd
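
A quick way to collect these (just a sketch; run each command where the package/log lives):

rpm -q ovirt-engine vdsm-jsonrpc-java                        # on the engine
rpm -q vdsm                                                  # on the host
grep -iE 'error|warn' /var/log/vdsm/vdsm.log | tail -n 50    # on the host
journalctl -u libvirtd --since today                         # on the host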

best,
Artur


On Tue, May 18, 2021 at 8:01 AM Yedidyah Bar David  wrote:

> On Tue, May 18, 2021 at 8:37 AM Nick Polites  wrote:
> >
> > Hi All,
> >
> > I am not sure if my original post is being reviewed before posting but
> trying again in case it failed to send.
> >
> > I tried logging in this morning to oVirt and see that all of my hosts
> are unresponsive. I am seeing a connection refused error in the engine
> logs. I am able to SSH and ping the host from the engine. Any help would be
> appreciated.
> >
> > 2021-05-15 15:19:21,041Z ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesAsyncVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-65) [] Command
> 'GetCapabilitiesAsyn
> > cVDSCommand(HostName = hlkvm03,
> VdsIdAndVdsVDSCommandParametersBase:{hostId='2186eca7-4d9d-482f-b1b7-b63ac46b96aa',
> vds='Host[hlkvm03,2186eca7-4d9d-482f-b1b7
> > -b63ac46b96aa]'})' execution failed: java.net.ConnectException:
> Connection refused
>
> Is vdsmd up on your hosts? Accessible? Can you check its logs?
>
> Good luck and best regards,
>
> Didi
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/SV4ENLDTVHIPV7EKFCA4EPQNRHAPDV4N/
>


-- 
Artur Socha
Senior Software Engineer, RHV
Red Hat
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QCWT7RHE5LIQHQGAA6A3R7OC7ODOBYC4/


[ovirt-users] Re: Ooops! in last step of Hyperconverged deployment

2021-05-18 Thread Harry O
Cockpit crashes with an "Ooops!" and therefore closes the ansible output console,
so we need to find the file with that output.
/ovirt-dashboard just shows a blank white screen.
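
If this is the standard cockpit-ovirt hyperconverged wizard, the ansible output is usually also written to a log file on the host; the paths below are the usual defaults and worth verifying:

journalctl -u cockpit                                          # cockpit's own crash messages
less /var/log/cockpit/ovirt-dashboard/gluster-deployment.log   # gluster wizard log
ls -lt /var/log/ovirt-hosted-engine-setup/                     # hosted-engine deployment logs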
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SFS5AEKSNDHMZG6LYFADMNFPN57O7QF5/


[ovirt-users] Re: unable to login cockpit using root after upgrading to 4.4.6

2021-05-18 Thread Gianluca Cecchi
On Tue, May 18, 2021 at 7:39 AM  wrote:

> Hello.
> I'm having the same issue with cockpit on the nodes. I'm unable to login
> as root or local user. I went from 4.4.5 to 4.4.6. It worked fine before
> the upgrade. I know the password is correct because I can log into the node
> via console and ssh. On one of the nodes I created a local account and have
> the same issue. The admin account works fine on the hosted engine VM.
>
>
I don't have 4.4.6 yet, but could it be a change in the /etc/pam.d/cockpit file?

On my 4.4.5 CentOS 8.3 based host, where I can connect as root in cockpit
host console, I currently have this:

#%PAM-1.0
# this MUST be first in the "auth" stack as it sets PAM_USER
# user_unknown is definitive, so die instead of ignore to avoid subsequent
modules mess up the error code
-auth  [success=done new_authtok_reqd=done user_unknown=die
default=ignore]   pam_cockpit_cert.so
auth   required pam_sepermit.so
auth   substack password-auth
auth   include  postlogin
auth   optional pam_ssh_add.so
accountrequired pam_nologin.so
accountinclude  password-auth
password   include  password-auth
# pam_selinux.so close should be the first session rule
sessionrequired pam_selinux.so close
sessionrequired pam_loginuid.so
# pam_selinux.so open should only be followed by sessions to be executed in
the user context
sessionrequired pam_selinux.so open env_params
sessionoptional pam_keyinit.so force revoke
sessionoptional pam_ssh_add.so
sessioninclude  password-auth
sessioninclude  postlogin

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L3MWSV5IDQ4IZCJIDPZ2NWEVRENEAXMJ/


[ovirt-users] Re: Ovirt Engine -- Connection Refused to all hosts

2021-05-18 Thread Yedidyah Bar David
On Tue, May 18, 2021 at 8:37 AM Nick Polites  wrote:
>
> Hi All,
>
> I am not sure if my original post is being reviewed before posting but trying 
> again in case it failed to send.
>
> I tried logging in this morning to oVirt and see that all of my hosts are 
> unresponsive. I am seeing a connection refused error in the engine logs. I am 
> able to SSH and ping the host from the engine. Any help would be appreciated.
>
> 2021-05-15 15:19:21,041Z ERROR 
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesAsyncVDSCommand] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-65) [] Command 
> 'GetCapabilitiesAsyn
> cVDSCommand(HostName = hlkvm03, 
> VdsIdAndVdsVDSCommandParametersBase:{hostId='2186eca7-4d9d-482f-b1b7-b63ac46b96aa',
>  vds='Host[hlkvm03,2186eca7-4d9d-482f-b1b7
> -b63ac46b96aa]'})' execution failed: java.net.ConnectException: Connection 
> refused

Is vdsmd up on your hosts? Accessible? Can you check its logs?
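
For example, on each host, something like this would show whether vdsm is up and listening:

systemctl status vdsmd
ss -tlnp | grep 54321                      # the port the engine connects to
tail -n 50 /var/log/vdsm/vdsm.log
journalctl -u vdsmd --since "1 hour ago"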

Good luck and best regards,

Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SV4ENLDTVHIPV7EKFCA4EPQNRHAPDV4N/