[ovirt-users] Re: unable to login cockpit using root after upgrading to 4.4.6

2021-05-18 Thread Glenn Farmer
Gianluca, I hope my frustration didn't come across too strongly - I apologize 
if so.  I certainly understand now why you posted your 4.4.5 output as a diff source 
against 4.4.6 - thanks! - regards - Glenn


[ovirt-users] Re: unable to login cockpit using root after upgrading to 4.4.6

2021-05-18 Thread Glenn Farmer
The current thread is about 4.4.6 - nice that you can log in to your 4.4.5.

I changed the admin password on the engine, but I still cannot access the Cockpit 
GUI on any of my hosts.

Do I have to reboot them?  I tried restarting Cockpit - that failed.

Not being able to access Cockpit on any host in a cluster after upgrading to 4.4.6 
really should be considered a bug.
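
For anyone else hitting this: I suspect (but have not confirmed for every setup) that 
the newer Cockpit in the EL 8.4 base of 4.4.6 refuses root logins via 
/etc/cockpit/disallowed-users.  Something worth checking on a host:

cat /etc/cockpit/disallowed-users                  # if "root" is listed, root logins are refused
sed -i '/^root$/d' /etc/cockpit/disallowed-users   # remove the entry (or log in as a non-root admin user instead)
systemctl restart cockpit.socket

If that file is missing or does not list root, the cause is something else.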


[ovirt-users] Re: ovirt-node-ng-image-update 4.2.4 to 4.2.5.1 fails

2018-08-22 Thread Glenn Farmer
Yuval, thanks for your assistance & guidance.

I just wanted to confirm that with /var/crash mounted (and the leftover v4.2.5.1 LV 
from the previous failed installation removed), I was able to successfully upgrade 
from v4.2.4 to v4.2.5.1.

Thanks again - Glenn


[ovirt-users] Re: ovirt-node-ng-image-update 4.2.4 to 4.2.5.1 fails

2018-08-21 Thread Glenn Farmer
Luckily I had a good node that upgraded to 4.2.5.1 successfully - so I 
duplicated the mount as:

mount -t ext4 -o rw,relatime,seclabel,discard,stripe=16,data=ordered /dev/mapper/onn-var_crash /var/crash

Then I removed the two 4.2.5.1 Logical Volumes.

Then ran: yum reinstall ovirt-node-ng-image-update-4.2.5.1
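
For reference, the cleanup plus reinstall looked roughly like this (the LV names are 
the ones from my hosts - confirm yours with lvs before removing anything):

lvs onn                                              # confirm the leftover 4.2.5.1 layer volumes
lvremove onn/ovirt-node-ng-4.2.5.1-0.20180731.0+1    # remove the writable (+1) layer first
lvremove onn/ovirt-node-ng-4.2.5.1-0.20180731.0      # then the read-only base layer
yum reinstall ovirt-node-ng-image-update-4.2.5.1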


[ovirt-users] ovirt-node-ng-image-update 4.2.4 to 4.2.5.1 fails

2018-08-19 Thread Glenn Farmer
yum update ends with:

warning: %post(ovirt-node-ng-image-update-4.2.5.1-1.el7.noarch) scriptlet failed, exit status 1
Non-fatal POSTIN scriptlet failure in rpm package ovirt-node-ng-image-update-4.2.5.1-1.el7.noarch

It creates the layers:

ovirt-node-ng-4.2.5.1-0.20180731.0   onn Vri---tz-k 6.00g pool00
ovirt-node-ng-4.2.5.1-0.20180731.0+1 onn Vwi-a-tz-- 6.00g pool00 ovirt-node-ng-4.2.5.1-0.20180731.0

But no grub2 boot entry.

nodectl info:

layers:
  ovirt-node-ng-4.2.4-0.20180626.0:
    ovirt-node-ng-4.2.4-0.20180626.0+1
  ovirt-node-ng-4.2.5.1-0.20180731.0:
    ovirt-node-ng-4.2.5.1-0.20180731.0+1
  ovirt-node-ng-4.2.2-0.20180405.0:
    ovirt-node-ng-4.2.2-0.20180405.0+1
bootloader:
  default: ovirt-node-ng-4.2.4-0.20180626.0+1
  entries:
    ovirt-node-ng-4.2.2-0.20180405.0+1:
      index: 1
      title: ovirt-node-ng-4.2.2-0.20180405.0+1
      kernel: /boot/ovirt-node-ng-4.2.2-0.20180405.0+1/vmlinuz-3.10.0-693.21.1.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn/ovirt-node-ng-4.2.2-0.20180405.0+1 img.bootid=ovirt-node-ng-4.2.2-0.20180405.0+1"
      initrd: /boot/ovirt-node-ng-4.2.2-0.20180405.0+1/initramfs-3.10.0-693.21.1.el7.x86_64.img
      root: /dev/onn/ovirt-node-ng-4.2.2-0.20180405.0+1
    ovirt-node-ng-4.2.4-0.20180626.0+1:
      index: 0
      title: ovirt-node-ng-4.2.4-0.20180626.0+1
      kernel: /boot/ovirt-node-ng-4.2.4-0.20180626.0+1/vmlinuz-3.10.0-862.3.3.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn/ovirt-node-ng-4.2.4-0.20180626.0+1 img.bootid=ovirt-node-ng-4.2.4-0.20180626.0+1"
      initrd: /boot/ovirt-node-ng-4.2.4-0.20180626.0+1/initramfs-3.10.0-862.3.3.el7.x86_64.img
      root: /dev/onn/ovirt-node-ng-4.2.4-0.20180626.0+1
current_layer: ovirt-node-ng-4.2.4-0.20180626.0+1

Just posting for others that might have the same issue.
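
If you hit the same non-fatal %post failure, the scriptlet is driven by imgbased, and 
its log (I believe /var/log/imgbased.log on oVirt Node - the path may differ between 
versions) should show why the layer was created but no boot entry was added:

tail -n 50 /var/log/imgbased.log
grep -iE 'error|traceback' /var/log/imgbased.log

In my case the fix turned out to be the missing /var/crash mount - see my follow-up 
messages above.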


[ovirt-users] ONN v4.2.3 Hosted Engine Deployment - Auto addition of Gluster Hosts

2018-05-12 Thread glenn . farmer
The new oVirt Node installation script in v4.2.3 automatically adds the gluster 
hosts after the initial hosted engine setup.  This caused major problems because I had 
used different FQDNs on a different VLAN to initially set up my gluster nodes 
[to isolate storage traffic] - but they were then put onto the "ovirtmgmt" network.  
Because they were gluster nodes, I could not remove them from the ovirtmgmt 
network to re-install them on the proper management VLAN.

I recommend removing the automatic addition of the other gluster nodes - or at 
least providing an option to decline it.