[ovirt-users] Re: HCI - oVirt for CEPH

2021-04-26 Thread penguin pages


"...Tuning Gluster with VDO bellow is quite difficult and the overhead of using 
VDO could
reduce performance " Yup. hense creation of a dedicated data00 volume from 
the 1TB SSD each server had.  Matched options listed in oVirt.. but still OCP 
would not address the drive as target for deployment. That is when I opened 
ticket with RH and they noted Gluster is not a supported target for OCP. Hense 
then off to check if we could do CEPH HCI.. nope. 

"..I would try with VDO compression and dedup disabled.If your SSD has 512 byte 
physical..& logical size, you can skip VDO at all to check performance."  
Yes.. VDO removed was/ is next test.  But your note about 512 is yes.. Are 
their tuning parameters for Gluster with this?


"...Also FS mount options are very important for XFS"  - What options do 
you use / recommend?  Do you have a link to said tuning manual page where I 
could review and knowing the base HCI volume is  VDO + XFS + Gluster.   But 
second volume for OCP will be just  XFS + Gluster I would assume this may 
change recommendations.
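
For reference while I wait on that link, the brick mount options I keep seeing 
suggested for XFS-on-Gluster (not from any ticket, just what I have gathered, 
so treat the device path as illustrative and verify against the RHHI / Gluster 
tuning guide):

# /etc/fstab entry for a gluster brick -- device name is hypothetical
/dev/mapper/gluster_vg_sdb-gluster_lv_data00  /gluster_bricks/data00  xfs  inode64,noatime  0 0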

Thanks,
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JHRUZV6JS2F25AN4FEPBFBIDGPKCLGQL/


[ovirt-users] Re: HCI - oVirt for CEPH

2021-04-26 Thread penguin pages

The response was that when I select the oVirt HCI storage volumes to deploy to 
(with VDO enabled), which sit on a single 512GB SSD with only one small IDM VM 
running, the IPI OCP 4.7 deployment fails.  RH closed the ticket because "the 
gluster volume is too slow".

I then tried to create a gluster volume without VDO from the other 1TB SSD in 
each server to see if that worked.. but though I matched all the settings / 
gluster options oVirt set, IPI OCP would not show the disk as a deployable 
option.  I then figured I would use the GUI to create the bricks instead of 
ansible (trying to be good and stop doing direct shell, and build ansible 
playbooks instead).. but that test is on hold because the servers are being 
re-tasked for another POC for the next two weeks.  So I figured I would do some 
rethinking about whether oVirt HCI on RHEV 4.5 with Gluster is a rat hole that 
will never work.   

Below is the output from the fio test the OCP team asked me to run to show 
gluster was too slow.
##
ansible@LT-A0070501:/mnt/c/GitHub/penguinpages_cluster_devops/cluster_devops$ 
ssh core@172.16.100.184
[core@localhost ~]$ journalctl -b -f -u release-image.service -u 
bootkube.service
-- Logs begin at Sun 2021-04-11 19:18:07 UTC. --
Apr 13 11:50:23 localhost bootkube.sh[1276476]: [#404] failed to fetch 
discovery: Get "https://localhost:6443/api?timeout=32s": x509: certificate has 
expired or is not yet valid: current time 2021-04-13T11:50:23Z is after 
2021-04-12T19:12:30Z
Apr 13 11:50:23 localhost bootkube.sh[1276476]: [#405] failed to fetch 
discovery: Get "https://localhost:6443/api?timeout=32s": x509: certificate has 
expired or is not yet valid: current time 2021-04-13T11:50:23Z is after 
2021-04-12T19:12:30Z

[core@localhost ~]$ su -
Password: 
su: Authentication failure
[core@localhost ~]$ sudo podman run --volume /var/lib/etcd:/var/lib/etcd:Z 
quay.io/openshift-scale/etcd-perf Trying to pull 
quay.io/openshift-scale/etcd-perf
Trying to pull quay.io/openshift-scale/etcd-perf...
Getting image source signatures
Copying blob a3ed95caeb02 done
Copying blob a3ed95caeb02 done
Copying blob a3ed95caeb02 done
Copying blob a3ed95caeb02 skipped: already exists
Copying blob fcc022b71ae4 done
Copying blob a93d706457b7 done
Copying blob 763b3f36c462 done
Writing manifest to image destination
Storing signatures
 Running fio 
---

{
  "fio version" : "fio-3.7",
  "timestamp" : 1618315279,
  "timestamp_ms" : 1618315279798,
  "time" : "Tue Apr 13 12:01:19 2021",
  "global options" : {
"rw" : "write",
"ioengine" : "sync",
"fdatasync" : "1",
"directory" : "/var/lib/etcd",
"size" : "22m",
"bs" : "2300"
  },
  "jobs" : [
{
  "jobname" : "etcd_perf",
  "groupid" : 0,
  "error" : 0,
  "eta" : 0,
  "elapsed" : 507,
  "job options" : {
"name" : "etcd_perf"
  },
  "read" : {
"io_bytes" : 0,
"io_kbytes" : 0,
"bw_bytes" : 0,
"bw" : 0,
"iops" : 0.00,
"runtime" : 0,
"total_ios" : 0,
"short_ios" : 10029,
"drop_ios" : 0,
"slat_ns" : {
  "min" : 0,
  "max" : 0,
  "mean" : 0.00,
  "stddev" : 0.00
},
"clat_ns" : {
  "min" : 0,
  "max" : 0,
  "mean" : 0.00,
  "stddev" : 0.00,
  "percentile" : {
"1.00" : 0,
"5.00" : 0,
"10.00" : 0,
"20.00" : 0,
"30.00" : 0,
"40.00" : 0,
"50.00" : 0,
"60.00" : 0,
"70.00" : 0,
"80.00" : 0,
"90.00" : 0,
"95.00" : 0,
"99.00" : 0,
"99.50" : 0,
"99.90" : 0,
"99.95" : 0,
"99.99" : 0
  }
},
"lat_ns" : {
  "min" : 0,
  "max" : 0,
  "mean" : 0.00,
  "stddev" : 0.00
},
"bw_min" : 0,
"bw_max" : 0,
"bw_agg" : 0.00,
"bw_mean" : 0.00,
"bw_dev" : 0.00,
"bw_samples" : 0,
"iops_min" : 0,
"iops_max" : 0,
"iops_mean" : 0.00,
"iops_stddev" : 0.00,
"iops_samples" : 0
  },
  "write" : {
"io_bytes" : 23066700,
"io_kbytes" : 22526,
"bw_bytes" : 45589,
"bw" : 44,
"iops" : 19.821372,
"runtime" : 505969,
"total_ios" : 10029,
"short_ios" : 0,
"drop_ios" : 0,
"slat_ns" : {
  "min" : 0,
  "max" : 0,
  "mean" : 0.00,
  "stddev" : 0.00
},
"clat_ns" : {
  "min" : 12011,
  "max" : 680340,
  "mean" : 26360.617210,
  "stddev" : 15390.749240,
  "percentile" : {
"1.00" 

[ovirt-users] Re: HCI - oVirt for CEPH

2021-04-26 Thread penguin pages

It was on a support ticket / call I was having.  I googled around and the only 
article I found was the one about features being removed.. but I am not sure if 
that affects oVirt / HCI.

My ticket was about trying to deploy OCP on an all-SSD cluster of three nodes; 
disk performance over 10Gb was too slow, and RH support's answer was "We don't 
support use of gluster for OCP.. you need to move off gluster to Ceph."

So I opened another ticket about Ceph on HCI.. and was told "not supported.. 
Ceph nodes must be external."  So for my small three-server work office and 
demo stack, I am now rethinking having to go to another stack / vendor such as 
VMware and vSAN, just because I can't get a Linux stack that meets the needs of 
a small HCI setup.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/K4QWNUSG6QDJDAOBOPJWEYIBE4O5HTF7/


[ovirt-users] HCI - oVirt for CEPH

2021-04-26 Thread penguin pages

I have been building out an HCI stack with KVM/RHEV + oVirt using the HCI 
deployment process.  This is very nice for small / remote site use cases, but 
with Gluster being announced as EOL in 18 months, what is the replacement plan?

Are there working projects and plans to replace Gluster with Ceph?
Are there deployment plans to get an HCI stack onto a supported file system?

I liked gluster for the control plane for the oVirt engine and smaller utility 
VMs: each system has a full copy, so I can retrieve / extract a copy of a VM 
without having all bricks back... it was just "easy" to use.  Ceph just means 
more complexity.. and though it scales better and has better features, repair 
means having a critical mass of nodes up before you can extract data (vs. any 
disk can be pulled out of a gluster node, plugged into my laptop, and I can at 
least extract the data).

I guess I am not trying to debate shifting to Ceph.. it does not matter.. that 
ship sailed...  What I am asking is when / what are the plans for replacing 
Gluster for HCI.  Because right now, for small HCI sites, once Gluster is no 
longer supported.. and Ceph does not make it in... the only option is VMware 
and vSAN or some other totally different stack.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MJHR2GFVIVHQBYDF2SU4KUVH5RXFMVOE/


[ovirt-users] Host Add - Host not found check DNS or /etc/hosts

2021-03-18 Thread penguin pages

Fresh install of CentOS8 Streams.  The Gluster wizard runs and deploys.  The 
engine installs on a node.

I copy /etc/hosts for the primary site resources over to the oVirt engine so it 
can find the primary cluster nodes without DNS being booted / up at that time.

I check that the oVirt engine can ssh to the three nodes... so as root I 
generate an SSH key and do ssh-copy-id to all three nodes.

So, as much as I can validate that communication is working.. I have.

When I "add host" -> use FQDN -> input root password 

But it shows up in the GUI host list with "install fails". 


The only message is:
[root@ovirt01 ~]# cat /var/log/ovirt-engine/ansible-runner-service.log
2021-03-18 09:44:15,998 - runner_service.controllers.hosts - DEBUG - Adding 
host odin.penguinpages.local to group ovirt
2021-03-18 09:44:16,344 - runner_service.services.hosts - ERROR - SSH - 
NOCONN:SSH error - 'odin.penguinpages.local' not found; check DNS or /etc/hosts
2021-03-18 10:07:15,970 - root - INFO - Analysing local configuration options 
from /etc/ansible-runner-service/config.yaml
2021-03-18 10:07:15,975 - root - INFO - - setting playbooks_root_dir to 
/usr/share/ovirt-engine/ansible-runner-service-project
2021-03-18 10:07:15,975 - root - INFO - - setting ssh_private_key to 
/etc/pki/ovirt-engine/keys/engine_id_rsa
2021-03-18 10:07:15,975 - root - INFO - - setting port to 50001
2021-03-18 10:07:15,975 - root - INFO - - setting target_user to root
2021-03-18 10:07:15,975 - root - INFO - - setting log_path to 
/var/log/ovirt-engine
2021-03-18 10:07:15,975 - root - INFO - Analysing runtime overrides from 
environment variables
2021-03-18 10:07:15,975 - root - INFO - No configuration settings overridden
2021-03-18 10:07:15,995 - root - INFO - Loaded logging configuration from 
/etc/ansible-runner-service/logging.yaml
2021-03-18 10:07:16,004 - runner_service.controllers.hosts - DEBUG - Request 
received, content-type :None
2021-03-18 10:07:16,005 - runner_service.controllers.hosts - INFO - 127.0.0.1 - 
GET /api/v1/hosts/thor.penguinpages.local
2021-03-18 10:07:16,013 - runner_service.controllers.hosts - DEBUG - Request 
received, content-type :application/json; charset=UTF-8
2021-03-18 10:07:16,013 - runner_service.controllers.hosts - INFO - 127.0.0.1 - 
POST /api/v1/hosts/thor.penguinpages.local/groups/ovirt
2021-03-18 10:07:16,013 - runner_service.controllers.hosts - DEBUG - additional 
args received
2021-03-18 10:07:16,014 - runner_service.controllers.hosts - DEBUG - Adding 
host thor.penguinpages.local to group ovirt
2021-03-18 10:07:16,339 - runner_service.services.hosts - ERROR - SSH - 
NOCONN:SSH error - 'thor.penguinpages.local' not found; check DNS or /etc/hosts
2021-03-18 10:07:38,333 - runner_service.controllers.hosts - DEBUG - Request 
received, content-type :None
2021-03-18 10:07:38,334 - runner_service.controllers.hosts - INFO - 127.0.0.1 - 
GET /api/v1/hosts/thor.penguinpages.local
2021-03-18 10:07:38,339 - runner_service.controllers.hosts - DEBUG - Request 
received, content-type :application/json; charset=UTF-8
2021-03-18 10:07:38,339 - runner_service.controllers.hosts - INFO - 127.0.0.1 - 
POST /api/v1/hosts/thor.penguinpages.local/groups/ovirt
2021-03-18 10:07:38,339 - runner_service.controllers.hosts - DEBUG - additional 
args received
2021-03-18 10:07:38,340 - runner_service.controllers.hosts - DEBUG - Adding 
host thor.penguinpages.local to group ovirt
2021-03-18 10:07:38,713 - runner_service.services.hosts - ERROR - SSH - 
NOCONN:SSH error - 'thor.penguinpages.local' not found; check DNS or /etc/hosts


But ssh to all the hosts works fine.
[root@ovirt01 ~]# ssh thor.penguinpages.local
Web console: https://thor.penguinpages.local:9090/ or 
https://172.16.100.101:9090/

Last login: Wed Mar 17 11:27:34 2021 from 172.16.101.103
[root@thor ~]# exit
logout
Connection to thor.penguinpages.local closed.
[root@ovirt01 ~]# ssh medusa.penguinpages.local
Activate the web console with: systemctl enable --now cockpit.socket

Last login: Thu Mar 18 10:06:40 2021 from 172.16.100.186
[root@medusa ~]# exit
logout
Connection to medusa.penguinpages.local closed.
[root@ovirt01 ~]# ssh odin.penguinpages.local
Activate the web console with: systemctl enable --now cockpit.socket

Last login: Wed Mar 17 16:02:46 2021 from 172.16.100.186
[root@odin ~]# exit
logout
Connection to odin.penguinpages.local closed.
[root@ovirt01 ~]#
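
Since plain ssh clearly works, the only thing left I can think to prove is name 
resolution inside the engine VM itself, given the error says "check DNS or 
/etc/hosts".  Quick checks I am running (assuming ansible-runner-service 
resolves names the same way libc / python do; these are just stock commands, 
not anything from the oVirt docs):

[root@ovirt01 ~]# getent hosts thor.penguinpages.local odin.penguinpages.local medusa.penguinpages.local
[root@ovirt01 ~]# python3 -c 'import socket; print(socket.getaddrinfo("thor.penguinpages.local", 22))'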


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/22L6ZLIB6ARWKOGOAQ2YLZ3OA6EGJJN6/


[ovirt-users] Re: Update Package Conflict

2021-03-10 Thread penguin pages
Well.. I figured the package removal was a means to get rid of "upgrade 
pending", which would then allow engine failover to start working, but...  
ya.. don't do that.

How to destroy the engine:
1) yum update --allowerasing 
2) reboot 
3) no more engine starting.  
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/self-hosted_engine_guide/troubleshooting
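
Assuming the packages that --allowerasing pulled out are the ones shown in the 
dnf transaction (cockpit-ovirt-dashboard, ovirt-host, 
ovirt-hosted-engine-setup), putting them back is presumably step one before 
following that guide.  Untested sketch only:

[root@thor ~]# dnf install --nobest cockpit-ovirt-dashboard ovirt-host ovirt-hosted-engine-setup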
 

Validated that the services look ok:
[root@thor ~]# systemctl status ovirt-ha-proxy
Unit ovirt-ha-proxy.service could not be found.
[root@thor ~]# systemctl status ovirt-ha-agent
● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring 
Agent
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled; 
vendor preset: disabled)
   Active: active (running) since Wed 2021-03-10 14:55:17 EST; 14min ago
 Main PID: 6390 (ovirt-ha-agent)
Tasks: 2 (limit: 1080501)
   Memory: 25.8M
   CGroup: /system.slice/ovirt-ha-agent.service
   └─6390 /usr/libexec/platform-python 
/usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent

Mar 10 14:55:17 thor.penguinpages.local systemd[1]: Started oVirt Hosted Engine 
High Availability Monitoring Agent.
[root@thor ~]# systemctl status -l ovirt-ha-agent
● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring 
Agent
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled; 
vendor preset: disabled)
   Active: active (running) since Wed 2021-03-10 14:55:17 EST; 16min ago
 Main PID: 6390 (ovirt-ha-agent)
Tasks: 2 (limit: 1080501)
   Memory: 25.6M
   CGroup: /system.slice/ovirt-ha-agent.service
   └─6390 /usr/libexec/platform-python 
/usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent

Mar 10 14:55:17 thor.penguinpages.local systemd[1]: Started oVirt Hosted Engine 
High Availability Monitoring Agent.
[root@thor ~]# journalctl -u ovirt-ha-agent

-- Logs begin at Wed 2021-03-10 14:47:34 EST, end at Wed 2021-03-10 15:12:12 
EST. --
Mar 10 14:48:35 thor.penguinpages.local systemd[1]: Started oVirt Hosted Engine 
High Availability Monitoring Agent.
Mar 10 14:48:37 thor.penguinpages.local ovirt-ha-agent[3463]: ovirt-ha-agent 
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Failed to start 
necessary monitors
Mar 10 14:48:37 thor.penguinpages.local ovirt-ha-agent[3463]: ovirt-ha-agent 
ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceback (most recent call 
last):
  File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 85, in start_monitor
    response = self._proxy.start_monitor(type, options)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1112, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1452, in __request
    verbose=self.__verbose
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1154, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1166, in single_request
    http_conn = self.send_request(host, handler, request_body, verbose)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1279, in send_request
    self.send_content(connection, request_body)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1309, in send_content
    connection.endheaders(request_body)
  File "/usr/lib64/python3.6/http/client.py", line 1264, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib64/python3.6/http/client.py", line 1040, in _send_output
    self.send(msg)
  File "/usr/lib64/python3.6/http/client.py", line 978, in send
    self.connect()
  File 

[ovirt-users] Re: Update Package Conflict

2021-03-10 Thread penguin pages
I did make that post, but that was more about the CentOS 8 to Streams 
conversion fubar-ing my cluster... ya.. still trying to get it back on its feet.

I have been trying to move to IaC-based deployment, but.. I have kind of given 
up on that, as oVirt seems to really need its last step, the "HCI Wizard": 

yum install ovirt-hosted-engine-setup

# What I wish is that it would spit out an ansible playbook so I could copy it 
# over and run it as a playbook.  Same for the "gluster" sub-wizard.  This was 
# sort of posted here:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZTLI55VFCFSK3F7MATAHGJIGRJZBTDLA/
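
The closest I have found to a headless run is reusing the answers file the 
wizard writes, rather than a real playbook.  Sketch only -- paths are from my 
own hosts, and as far as I can tell --config-append can feed hosted-engine a 
prebuilt answers file, but I have not re-tested this end to end:

# save the wizard-generated answers before any cleanup wipes it
cp /etc/ovirt-hosted-engine/answers.conf /root/he-answers.conf
# later, redeploy non-interactively from the saved answers
hosted-engine --deploy --config-append=/root/he-answers.conf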
   

The issue: I have some of the cluster working, but until I can trust that it is 
stable and can deploy and maintain VMs, I don't want to move it into production 
to take VMs.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WLNGHPUDCQC25NOJNDY43AMB3W7ZORGY/


[ovirt-users] Update Package Conflict

2021-03-10 Thread penguin pages

Fresh install of minimal CentOS8

Then deploy: 
- EPEL
- Add ovirt repo https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm

Install all nodes:
- cockpit-ovirt-dashboard
- gluster-ansible-roles 
- vdsm-gluster
- ovirt-host
- ovirt-ansible-roles
- ovirt-ansible-infra

Install on "first node of cluster"
- ovirt-engine-appliance



Now each node is stuck with the same package conflict error (and this blocks 
GUI "upgrades"):

[root@medusa ~]# yum update
Last metadata expiration check: 0:55:35 ago on Wed 10 Mar 2021 08:14:22 AM EST.
Error:
 Problem 1: package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, 
but none of the providers can be installed
  - package cockpit-bridge-238.1-1.el8.x86_64 conflicts with cockpit-dashboard 
< 233 provided by cockpit-dashboard-217-1.el8.noarch
  - cannot install the best update candidate for package 
ovirt-host-4.4.1-4.el8.x86_64
  - cannot install the best update candidate for package 
cockpit-bridge-217-1.el8.x86_64
 Problem 2: problem with installed package ovirt-host-4.4.1-4.el8.x86_64
  - package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but none 
of the providers can be installed
  - package cockpit-system-238.1-1.el8.noarch obsoletes cockpit-dashboard 
provided by cockpit-dashboard-217-1.el8.noarch
  - cannot install the best update candidate for package 
cockpit-dashboard-217-1.el8.noarch
 Problem 3: package ovirt-hosted-engine-setup-2.4.9-1.el8.noarch requires 
ovirt-host >= 4.4.0, but none of the providers can be installed
  - package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but none 
of the providers can be installed
  - package ovirt-host-4.4.1-1.el8.x86_64 requires cockpit-dashboard, but none 
of the providers can be installed
  - package ovirt-host-4.4.1-2.el8.x86_64 requires cockpit-dashboard, but none 
of the providers can be installed
  - package ovirt-host-4.4.1-3.el8.x86_64 requires cockpit-dashboard, but none 
of the providers can be installed
  - package cockpit-system-238.1-1.el8.noarch obsoletes cockpit-dashboard 
provided by cockpit-dashboard-217-1.el8.noarch
  - cannot install the best update candidate for package 
ovirt-hosted-engine-setup-2.4.9-1.el8.noarch
  - cannot install the best update candidate for package 
cockpit-system-217-1.el8.noarch
(try to add '--allowerasing' to command line to replace conflicting packages or 
'--skip-broken' to skip uninstallable packages or '--nobest' to use not only 
best candidate packages)
[root@medusa ~]# yum update --allowerasing
Last metadata expiration check: 0:55:56 ago on Wed 10 Mar 2021 08:14:22 AM EST.
Dependencies resolved.
==========================================================================================
 Package                     Architecture   Version          Repository         Size
==========================================================================================
Upgrading:
 cockpit-bridge              x86_64         238.1-1.el8      baseos            535 k
 cockpit-system              noarch         238.1-1.el8      baseos            3.4 M
     replacing  cockpit-dashboard.noarch 217-1.el8
Removing dependent packages:
 cockpit-ovirt-dashboard     noarch         0.14.17-1.el8    @ovirt-4.4         16 M
 ovirt-host                  x86_64         4.4.1-4.el8      @ovirt-4.4         11 k
 ovirt-hosted-engine-setup   noarch         2.4.9-1.el8      @ovirt-4.4        1.3 M

Transaction Summary
==========================================================================================
Upgrade  2 Packages
Remove   3 Packages
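
Rather than letting --allowerasing take the ovirt packages out with it, a 
workaround that might hold things together until ovirt-host catches up with 
the newer cockpit (a sketch, not something I have verified) is to keep the 
conflicting cockpit update out of the transaction:

[root@medusa ~]# dnf update --exclude=cockpit-bridge --exclude=cockpit-system
# or let dnf settle on a non-best but non-conflicting candidate set
[root@medusa ~]# dnf update --nobest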



##

Initially I assumed this was because the path I was taking was not standard.. but 
[ovirt-users] Re: engine - gluster volume import

2021-03-10 Thread penguin pages


Thanks..  That worked.  Now the engine, data, and vmstore gluster volumes are 
under "engine" control.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4OASWKINVFSSMPX7A6KEN6NBFRYC566N/


[ovirt-users] engine - gluster volume import

2021-03-09 Thread penguin pages

I keep going through cycles trying to get an HCI cluster to deploy.  


Gluster is working fine.  Standard from HCI wizard:
Brick thorst.penguinpages.local:/gluster_bricks/vmstore/vmstore    49154   0    Y   40968
Brick odinst.penguinpages.local:/gluster_bricks/vmstore/vmstore    49157   0    Y   3771
Brick medusast.penguinpages.local:/gluster_bricks/vmstore/vmstore  49154   0    Y   5138
Self-heal Daemon on localhost                                      N/A     N/A  Y   41172
Self-heal Daemon on medusast.penguinpages.local                    N/A     N/A  Y   5150
Self-heal Daemon on odinst.penguinpages.local                      N/A     N/A  Y   3142


What I think happens when the engine is installed "on existing disk" is that it 
does not install the gluster components needed to present those volumes so they 
can be deployed on.
What is the service / package / procedure to get the engine to "add" gluster 
volumes? 

Ex:  When I click -> Storage -> (no volumes listed) -> New volume -> 
  all the drop-downs are blank and cannot be populated.
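
For what it is worth, these are the checks I am running to see whether the 
gluster management pieces are even in place on the hosts (stock commands, 
nothing oVirt-specific); and, if I read the docs right, the cluster in the 
Admin Portal also needs "Enable Gluster Service" ticked:

[root@thor ~]# rpm -q vdsm-gluster glusterfs-server   # management packages present?
[root@thor ~]# systemctl status glusterd              # gluster daemon running?
[root@thor ~]# gluster volume list                    # volumes visible host-side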

 





___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5PP664PUCO6CBUPVXIFFFVDSGMROY7KQ/


[ovirt-users] Re: HCI Fresh Deploy - Cannot upgrade 4.4. to 4.5 or add more hosts

2021-03-08 Thread penguin pages


There may be an aspect of this where the HCI engine.. composes gluster bricks 
into the engine hosted storage... and then is not able to add hosts into the 
cluster.

Below is the looping error: 
[root@ovirte01 ~]# tail -f /var/log/ovirt-engine/engine.log
2021-03-08 10:40:19,192-05 INFO  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-60) [] 
START, GlusterServersListVDSCommand(HostName = medusa.penguinpages.local, 
VdsIdVDSCommandParametersBase:{hostId='ea26e8ad-e762-4852-bf71-b0a6b2b69853'}), 
log id: 76bd303b
2021-03-08 10:40:19,395-05 INFO  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-60) [] 
FINISH, GlusterServersListVDSCommand, return: [172.16.101.103/24:CONNECTED, 
thorst.penguinpages.local:CONNECTED, odinst.penguinpages.local:CONNECTED], log 
id: 76bd303b
2021-03-08 10:40:19,400-05 INFO  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-60) [] 
START, GlusterVolumesListVDSCommand(HostName = medusa.penguinpages.local, 
GlusterVolumesListVDSParameters:{hostId='ea26e8ad-e762-4852-bf71-b0a6b2b69853'}),
 log id: 2ebcd203
2021-03-08 10:40:19,507-05 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-60) [] 
Could not add brick 'thorst.penguinpages.local:/gluster_bricks/data/data' to 
volume '78117441-467f-4c9c-ae10-4608de688404' - server uuid 
'165ecdcd-10c1-4b34-aefa-9a0a6d3d8751' not found in cluster 
'91fde852-8013-11eb-8054-00163e7e2056'
2021-03-08 10:40:19,508-05 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-60) [] 
Could not add brick 'odinst.penguinpages.local:/gluster_bricks/data/data' to 
volume '78117441-467f-4c9c-ae10-4608de688404' - server uuid 
'835293d4-38b8-4e1b-80ff-9e654effa3c3' not found in cluster 
'91fde852-8013-11eb-8054-00163e7e2056'
2021-03-08 10:40:19,511-05 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-60) [] 
Could not associate brick 
'medusast.penguinpages.local:/gluster_bricks/data/data' of volume 
'78117441-467f-4c9c-ae10-4608de688404' with correct network as no gluster 
network found in cluster '91fde852-8013-11eb-8054-00163e7e2056'
2021-03-08 10:40:19,511-05 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-60) [] 
Could not add brick 'thorst.penguinpages.local:/gluster_bricks/engine/engine' 
to volume '40f9f0b8-1861-45d6-b63b-d59ec264422d' - server uuid 
'165ecdcd-10c1-4b34-aefa-9a0a6d3d8751' not found in cluster 
'91fde852-8013-11eb-8054-00163e7e2056'
2021-03-08 10:40:19,512-05 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-60) [] 
Could not add brick 'odinst.penguinpages.local:/gluster_bricks/engine/engine' 
to volume '40f9f0b8-1861-45d6-b63b-d59ec264422d' - server uuid 
'835293d4-38b8-4e1b-80ff-9e654effa3c3' not found in cluster 
'91fde852-8013-11eb-8054-00163e7e2056'
2021-03-08 10:40:19,514-05 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-60) [] 
Could not associate brick 
'medusast.penguinpages.local:/gluster_bricks/engine/engine' of volume 
'40f9f0b8-1861-45d6-b63b-d59ec264422d' with correct network as no gluster 
network found in cluster '91fde852-8013-11eb-8054-00163e7e2056'
2021-03-08 10:40:19,515-05 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-60) [] 
Could not add brick 'thorst.penguinpages.local:/gluster_bricks/vmstore/vmstore' 
to volume '59cb23d6-325b-48ea-a03c-6fb43809f714' - server uuid 
'165ecdcd-10c1-4b34-aefa-9a0a6d3d8751' not found in cluster 
'91fde852-8013-11eb-8054-00163e7e2056'
2021-03-08 10:40:19,515-05 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-60) [] 
Could not add brick 'odinst.penguinpages.local:/gluster_bricks/vmstore/vmstore' 
to volume '59cb23d6-325b-48ea-a03c-6fb43809f714' - server uuid 
'835293d4-38b8-4e1b-80ff-9e654effa3c3' not found in cluster 
'91fde852-8013-11eb-8054-00163e7e2056'
2021-03-08 10:40:19,517-05 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-60) [] 
Could not associate brick 
'medusast.penguinpages.local:/gluster_bricks/vmstore/vmstore' of volume 
'59cb23d6-325b-48ea-a03c-6fb43809f714' with correct 

[ovirt-users] Re: HCI Fresh Deploy - Cannot upgrade 4.4. to 4.5 or add more hosts

2021-03-08 Thread penguin pages
<<< Update >>>

As I try to add the node, I am able to add the fingerprint.. and below is the 
output from the engine:

[root@ovirte01 ~]# tail -f /var/log/ovirt-engine/server.log
...
2021-03-08 10:23:21,542-05 WARNING [javax.persistence.spi] (default task-6) 
javax.persistence.spi::No valid providers found.
2021-03-08 10:23:21,719-05 INFO  
[org.apache.sshd.common.io.DefaultIoServiceFactoryFactory] (default task-6) No 
detected/configured IoServiceFactoryFactory using Nio2ServiceFactoryFactory
2021-03-08 10:23:21,744-05 INFO  
[org.apache.sshd.client.config.hosts.DefaultConfigFileHostEntryResolver] 
(default task-6) resolveEffectiveResolver(dummy@thor.penguinpages.local:22) 
loaded 0 entries from /var/lib/ovirt-engine/.ssh/config
2021-03-08 10:23:30,737-05 WARN  
[org.apache.sshd.client.session.ClientConnectionService] 
(sshd-SshClient[4f22a453]-nio2-thread-3) 
globalRequest(ClientConnectionService[ClientSessionImpl[root@thor.penguinpages.local/172.16.100.101:22]])[hostkeys...@openssh.com,
 want-reply=false] failed (SshException) to process: EdDSA provider not 
supported

Does anyone know where else logs would be for engine failure to add node?
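
Since the only warning in there is the "EdDSA provider not supported" one, I am 
also going to check which host key types the node actually offers, on the guess 
that the engine's ssh client can only cope with some of them (plain 
ssh-keyscan, nothing oVirt-specific):

[root@ovirte01 ~]# ssh-keyscan -t rsa,ecdsa,ed25519 thor.penguinpages.local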
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PJGXKG4LFWEAAIZLH24KRN7WSLVUUFIX/


[ovirt-users] HCI Fresh Deploy - Cannot upgrade 4.4. to 4.5 or add more hosts

2021-03-08 Thread penguin pages

I reinstalled the OS on all nodes with CentOS 8 Streams.

Installed the cockpit engine and ran through the HCI deploy wizard with gluster.  
This deployed the 4.4 version of the engine.. then said a 4.5 version is 
available.  

I try to add hosts and get the error "Error while executing action: Server 
thor.penguinpages.local is already part of another cluster." (or the other node).  
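
One guess on my part: "another cluster" may be referring to the gluster trusted 
pool the HCI wizard already built rather than an oVirt cluster, so I am 
checking what the nodes think they belong to (speculation, standard gluster 
commands):

[root@thor ~]# gluster peer status
[root@thor ~]# gluster pool list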
 

I ssh into the oVirt engine host and attempt to ssh from that host to the three 
nodes of the HCI cluster, and it works fine.  I added ssh keys for passwordless 
login, but no change.

I changed the cluster and then the datacenter to 4.5, then attempted to run the 
upgrade, but I get the same error.


Seems like a chicken / egg issue.  I can't add nodes.  I can't upgrade 4.4 to 
4.5 because I have no nodes to allow maintenance mode.  And the engine shows no 
updates needed:

[root@ovirte01 ~]# yum update
Last metadata expiration check: 1:39:16 ago on Mon 08 Mar 2021 08:36:40 AM EST.
Dependencies resolved.
Nothing to do.
Complete!

Suggestions?  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F4FCKB5VXMDIQFSSXZFDNMMLFKXBZPC7/


[ovirt-users] Re: HCI Deployment via Ansible

2021-01-25 Thread penguin pages
HCI creates three volumes

engine
data
vmstore

engine is dedicated to just the hosted engine and its database... this one I 
think of as a control plane that I hope, with tasks like snapshots, upgrades, 
and patches, could be deployed with some aspect of recovery.. but if that 
fails, redeployment would at least be isolated.

data and vmstore.. I think these are for hosting the VM disks and oVirt ISO 
storage, but if the engine is down.. and its database.. there is no way to 
stitch the VMs back together.  For some VMs I just need a streamlined way to 
ingest them back into a newly deployed engine.  Maybe someone has a script to 
parse those volumes and offer "would you like to import these VMs". 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OUEUXT3FZME74S7TX2OYRTGQKSVODXUU/


[ovirt-users] HCI Deployment via Ansible

2021-01-25 Thread penguin pages

I am trying to teach myself new and better means to deploy and control oVirt 
and the HCI deployment.

Does anyone have notes, experience, examples, a website, etc. where someone 
managed the deployment and administration of an HCI stack (CentOS + Cockpit + 
gluster + oVirt) and, instead of using shell and manual / GUI deployment, did 
it as "Infrastructure as Code"?

Ex Step 1:
1) Foundation with kickstart: three nodes with "minimal install", all with 
passwordless ssh login for the user "ansible".
2) Keys exchanged between nodes for passwordless login for the root and ansible 
users.
3) Using ansible, publish the HCI deployment playbook:
   -> Define the drive for each host to use
   -> Define the name and host / IP of the engine
   -> Install the HCI engine with gluster
   -> Add SSH keys for ansible access so future oVirt interaction can be 
done via ansible

I am just starting to research this.. but hoping others have blazed this path 
and I can leverage their process.

Concerns:
1) How to get HCI to deploy only the engine layer (in my case... the 512GB SSD 
with VDO) and avoid destroying VMs not related to the engine.. aka, if I have 
to repair the engine again, will it keep nuking the "data" and "vmstore" 
volumes, which I think the HCI deployment wants to do if it has to redeploy 
fresh.
2) If the above is always going to be an issue.. can I make the default VM 
deployment target a dedicated gluster volume (in my case a 1TB SSD with VDO in 
each server) such that a rebuild of the engine layer can just slurp the VMs 
back in from that volume.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZTLI55VFCFSK3F7MATAHGJIGRJZBTDLA/


[ovirt-users] Re: ovirt-ha-agent

2021-01-25 Thread penguin pages

So..  I redeployed the three physical systems with CentOS8-Stream...  and was 
gearing up to install oVirt via cockpit or the CLI, but got a bug response back 
from RH on bugzilla... I need input from the oVirt team:

https://bugzilla.redhat.com/show_bug.cgi?id=1917011

Questions:
1) For those of us who need more advanced channels than the "free RHEL...", 
CentOS Streams is our only path.. or we move to something like Ubuntu etc., 
which has its own set of issues.  Does the oVirt team have issues with CentOS 
Streams, or am I sticking my neck out to get chopped off?
2) When I run the HCI deployment... is there any work or means to scan the data 
volume and import the VMs it finds?  Rebuilding the five or six VMs worth 
keeping would take a few hours.. my concern is what it would take when I have a 
few dozen.. or... how many tidbits of disks / files will be left around because 
maybe I did not clean up manually very well.
3) In the future.. I think I need to export VMs to a dedicated "backup volume" 
on some kind of schedule.  Are there any tools or thoughts on this?  Ex: my two 
node DNS/LDAP/Kerberos / IPLB / DHCP / Plex VMs.. I just can't afford this 
recovery time to happen again because of a patch.
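
For item 3, the only piece I know exists today is engine-backup for the engine 
DB / config side; scheduling that is just cron, and VM-level exports would 
still be a separate exercise.  Sketch only (the NAS mount path is illustrative):

# /etc/cron.d/engine-backup on the engine VM -- nightly DB/config backup
0 2 * * * root engine-backup --mode=backup --file=/mnt/nas/engine-$(date +\%F).tar.gz --log=/var/log/engine-backup-$(date +\%F).log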
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HK5VGGCEX62RDMSDUR6HKB53AQV6KYMO/


[ovirt-users] Re: ovirt-ha-agent

2021-01-18 Thread penguin pages
I was avoiding reloading the OS.  To me this was like "reboot as a fix"... 
wiping the environment out and restarting, vs. repairing, where I learn how to 
debug.

But after weeks... I am running out of time.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CMLU7X2QYLZUB57EACBYSGNPIGIGSUIQ/


[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-18 Thread penguin pages


After looking into the logs..  I think the issue is about the storage where it 
should deploy.  The wizard did not seem to focus on that..  I A$$umed it was 
aware of the volume from the previously detected deployment... but...


2021-01-18 10:34:07,917-0500 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : Clean 
local storage pools]
2021-01-18 10:34:08,418-0500 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 ok: [localhost]
2021-01-18 10:34:08,919-0500 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : 
Destroy local storage-pool {{ he_local_vm_dir | basename }}]
2021-01-18 10:34:09,320-0500 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
{'msg': 'Unexpected templating type error occurred on (virsh -c 
qemu:///system?authfile={{ he_libvirt_authfile }} pool-destroy {{ 
he_local_vm_dir | basename }}): expected str, bytes or os.PathLike object, not 
NoneType', '_ansible_no_log': False}
2021-01-18 10:34:09,421-0500 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
ignored: [localhost]: FAILED! => {"msg": "Unexpected templating type error 
occurred on (virsh -c qemu:///system?authfile={{ he_libvirt_authfile }} 
pool-destroy {{ he_local_vm_dir | basename }}): expected str, bytes or 
os.PathLike object, not NoneType"}
2021-01-18 10:34:09,821-0500 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : 
Undefine local storage-pool {{ he_local_vm_dir | basename }}]
2021-01-18 10:34:10,223-0500 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
{'msg': 'Unexpected templating type error occurred on (virsh -c 
qemu:///system?authfile={{ he_libvirt_authfile }} pool-undefine {{ 
he_local_vm_dir | basename }}): expected str, bytes or os.PathLike object, not 
NoneType', '_ansible_no_log': False}
2021-01-18 10:34:10,323-0500 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
ignored: [localhost]: FAILED! => {"msg": "Unexpected templating type error 
occurred on (virsh -c qemu:///system?authfile={{ he_libvirt_authfile }} 
pool-undefine {{ he_local_vm_dir | basename }}): expected str, bytes or 
os.PathLike object, not NoneType"}
2021-01-18 10:34:10,724-0500 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : 
Destroy local storage-pool {{ local_vm_disk_path.split('/')[5] }}]
2021-01-18 10:34:11,125-0500 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
{'msg': 'The task includes an option with an undefined variable. The error was: 
\'local_vm_disk_path\' is undefined\n\nThe error appears to be in 
\'/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/clean_local_storage_pools.yml\':
 line 16, column 5, but may\nbe elsewhere in the file depending on the exact 
syntax problem.\n\nThe offending line appears to be:\n\nchanged_when: 
true\n  - name: Destroy local storage-pool {{ 
local_vm_disk_path.split(\'/\')[5] }}\n^ here\nWe could be wrong, but this 
one looks like it might be an issue with\nmissing quotes. Always quote template 
expression brackets when they\nstart a value. For instance:\n\n
with_items:\n  - {{ foo }}\n\nShould be written as:\n\nwith_items:\n
  - "{{ foo }}"\n', '_ansible_no_log': False}
2021-01-18 10:34:11,226-0500 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
ignored: [localhost]: FAILED! => {"msg": "The task includes an option with an 
undefined variable. The error was: 'local_vm_disk_path' is undefined\n\nThe 
error appears to be in 
'/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/clean_local_storage_pools.yml':
 line 16, column 5, but may\nbe elsewhere in the file depending on the exact 
syntax problem.\n\nThe offending line appears to be:\n\nchanged_when: 
true\n  - name: Destroy local storage-pool {{ local_vm_disk_path.split('/')[5] 
}}\n^ here\nWe could be wrong, but this one looks like it might be an issue 
with\nmissing quotes. Always quote template expression brackets when 
they\nstart a value. For instance:\n\nwith_items:\n  - {{ foo 
}}\n\nShould be written as:\n\nwith_items:\n  - \"{{ foo }}\"\n"}
2021-01-18 10:34:11,626-0500 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : 
Undefine local storage-pool {{ local_vm_disk_path.split('/')[5] }}]
2021-01-18 10:34:12,028-0500 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
{'msg': 'The task includes an option with an undefined variable. The error was: 

[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-18 Thread penguin pages


Following this document to redeploy the engine...

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/self-hosted_engine_guide/cleaning_up_a_failed_self-hosted_engine_deployment

### From the host which had the engine listed in its inventory ### 
[root@medusa ~]# /usr/sbin/ovirt-hosted-engine-cleanup
 This will de-configure the host to run ovirt-hosted-engine-setup from scratch.
Caution, this operation should be used with care.

Are you sure you want to proceed? [y/n]
y
  -=== Destroy hosted-engine VM ===-
error: failed to get domain 'HostedEngine'

  -=== Stop HA services ===-
  -=== Shutdown sanlock ===-
shutdown force 1 wait 0
shutdown done 0
  -=== Disconnecting the hosted-engine storage domain ===-
  -=== De-configure VDSM networks ===-
ovirtmgmt
 A previously configured management bridge has been found on the system, this 
will try to de-configure it. Under certain circumstances you can loose network 
connection.
Caution, this operation should be used with care.

Are you sure you want to proceed? [y/n]
y
  -=== Stop other services ===-
Warning: Stopping libvirtd.service, but it can still be activated by:
  libvirtd.socket
  libvirtd-ro.socket
  libvirtd-admin.socket
  -=== De-configure external daemons ===-
Removing database file /var/lib/vdsm/storage/managedvolume.db
  -=== Removing configuration files ===-
? /etc/init/libvirtd.conf already missing
- removing /etc/libvirt/nwfilter/vdsm-no-mac-spoofing.xml
? /etc/ovirt-hosted-engine/answers.conf already missing
- removing /etc/ovirt-hosted-engine/hosted-engine.conf
- removing /etc/vdsm/vdsm.conf
- removing /etc/pki/vdsm/certs/cacert.pem
- removing /etc/pki/vdsm/certs/vdsmcert.pem
- removing /etc/pki/vdsm/keys/vdsmkey.pem
- removing /etc/pki/vdsm/libvirt-migrate/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-migrate/server-cert.pem
- removing /etc/pki/vdsm/libvirt-migrate/server-key.pem
- removing /etc/pki/vdsm/libvirt-spice/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-spice/server-cert.pem
- removing /etc/pki/vdsm/libvirt-spice/server-key.pem
- removing /etc/pki/vdsm/libvirt-vnc/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-vnc/server-cert.pem
- removing /etc/pki/vdsm/libvirt-vnc/server-key.pem
- removing /etc/pki/CA/cacert.pem
- removing /etc/pki/libvirt/clientcert.pem
- removing /etc/pki/libvirt/private/clientkey.pem
? /etc/pki/ovirt-vmconsole/*.pem already missing
- removing /var/cache/libvirt/qemu
? /var/run/ovirt-hosted-engine-ha/* already missing
? /var/tmp/localvm* already missing
  -=== Removing IP Rules ===-
[root@medusa ~]# 
[root@medusa ~]# hosted-engine --deploy
[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
  During customization use CTRL-D to abort.
  Continuing will configure this host for serving as hypervisor and 
will create a local VM with a running engine.
  The locally running engine will be used to configure a new storage 
domain and create a VM there.


1) Error about firewall
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The conditional check 
'firewalld_s.status.SubState != 'running' or firewalld_s.status.LoadState == 
'masked'' failed. The error was: error while evaluating conditional 
(firewalld_s.status.SubState != 'running' or firewalld_s.status.LoadState == 
'masked'): 'dict object' has no attribute 'SubState'\n\nThe error appears to be 
in 
'/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_firewalld.yml':
 line 8, column 5, but may\nbe elsewhere in the file depending on the exact 
syntax problem.\n\nThe offending line appears to be:\n\nregister: 
firewalld_s\n  - name: Enforce firewalld status\n^ here\n"}

###  Hmm.. that is dumb.. it's disabled to avoid issues
[root@medusa ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor 
preset: enabled)
   Active: inactive (dead)
 Docs: man:firewalld(1)
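
Assuming the deploy role genuinely requires firewalld to be running (that is 
literally what the failed conditional is checking), the fix before re-running 
is presumably just:

[root@medusa ~]# systemctl enable --now firewalld
[root@medusa ~]# hosted-engine --deploy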


2) Error about ssh to host ovirte01.penguinpages.local 
[ ERROR ] fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed 
to connect to the host via ssh: ssh: connect to host 
ovirte01.penguinpages.local port 22: No route to host", "skip_reason": "Host 
localhost is unreachable", "unreachable": true}

###.. Hmm.. well.. no kidding.. it is supposed to deploy the engine, so the IP 
should be offline until it does.  And as the VMs that run DNS are down.. I am 
using a hosts file to ignite the environment.  Not sure what it expects: 
[root@medusa ~]# cat /etc/hosts |grep ovir
172.16.100.31 ovirte01.penguinpages.local ovirte01



It did not go well. 

Attached are the deployment details as well as logs. 

Maybe someone can point out what I am doing wrong.  Last time I did this via 
the HCI wizard.. but the hosted engine dashboard for "Virtualization" in 
cockpit (https://172.16.100.101:9090/ovirt-dashboard#/he) no longer offers a 
deployment UI option.



## Deployment 

[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-16 Thread penguin pages
I do not know what happened to my other VMs.  Two are important: ns01 and ns02, 
which are my IDM cluster nodes, and also run Plex and other utilities / services.

Most of the rest are throwaway VMs for testing / OCP / OKD.

I think I may have to redeploy... but my concerns are:

1) CentOS8 Streams has package conflicts with cockpit and ovirt
https://bugzilla.redhat.com/show_bug.cgi?id=1917011

2) I do have a backup.. and I was hoping the deployment could redeploy and use 
the existing PostgreSQL DB... and so save a rebuild.  But the backup is weeks 
old.. and so lots of things have changed.  (I need to automate backups to my 
NAS.. on the todo list now.) 

I think I will try to redeploy and see how it goes...  Thanks for the help.. I 
am sure this drama fest is not over.  More to come.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AKRPZ6UAIC6VVFLNG3PZILFNBXXOMQLC/


[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-16 Thread penguin pages

Thanks for the help; following up below.

1) Auth to libvirtd and list VMs: it shows the hosted engine, but also "ns01", 
now that I manually registered it per the above.
[root@medusa ~]# vdsm-client Host getVMList
[
{
"status": "Down",
"statusTime": "2218288798",
"vmId": "69ab4f82-1a53-42c8-afca-210a3a2715f1"
}
]
[root@medusa ~]# virsh -c 
qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
   'quit' to quit

virsh # list --all
 Id   NameState

 -HostedEngineshut off
 -HostedEngineLocal   shut off
 -ns01shut off

2) Start the VM, but it seems the network is needed first
virsh # start HostedEngine
error: Failed to start domain HostedEngine
error: Network not found: no network with matching name 'vdsm-ovirtmgmt'

virsh # start HostedEngineLocal
error: Failed to start domain HostedEngineLocal
error: Requested operation is not valid: network 'default' is not active

3) Start the networks.  This is a "next next" HCI + Gluster build, so it called 
the bridge "ovirtmgmt".
virsh # net-list
 Name  StateAutostart   Persistent

 ;vdsmdummy;   active   no  no
virsh # net-autostart --network default
Network default marked as autostarted
virsh # net-start default
Network default started
virsh # start HostedEngineLocal
error: Failed to start domain HostedEngineLocal
error: Cannot access storage file '/var/tmp/localvmn4khg_ak/seed.iso': No such 
file or directory

<<<>>

virsh # dumpxml HostedEngineLocal

  (domain XML trimmed -- the list archive stripped most of the tags; the 
   recoverable details: name HostedEngineLocal, uuid 
   bb2006ce-838b-47a3-a049-7e3e5c7bb049, libosinfo os id 
   http://redhat.com/rhel/8.0, 16 GiB memory, 4 vCPUs, emulator 
   /usr/libexec/qemu-kvm, a /dev/random RNG device, and the cdrom / disk paths 
   under /var/tmp/localvmn4khg_ak/ referenced below)


virsh #

## So, not sure why the hosted engine needs an ISO image.  Can I remove this?
virsh # change-media HostedEngineLocal /var/tmp/localvmn4khg_ak/seed.iso --eject
Successfully ejected media.

virsh # start HostedEngineLocal
error: Failed to start domain HostedEngineLocal
error: Cannot access storage file 
'/var/tmp/localvmn4khg_ak/images/e2e4d97c-3430-4880-888e-84c283a80052/0f78b6f7-7755-4fe5-90e3-d41df791a645'
 (as uid:107, gid:107): No such file or directory
[root@medusa 3afc47ba-afb9-413f-8de5-8d9a2f45ecde]# tree |grep 
e2e4d97c-3430-4880-888e-84c283a80052/0f78b6f7-7755-4fe5-90e3-d41df791a645
[root@medusa 3afc47ba-afb9-413f-8de5-8d9a2f45ecde]# pwd
/gluster_bricks/engine/engine/3afc47ba-afb9-413f-8de5-8d9a2f45ecde
[root@medusa 3afc47ba-afb9-413f-8de5-8d9a2f45ecde]# tree
.
├── dom_md
│   ├── ids
│   ├── inbox
│   ├── leases
│   ├── metadata
│   ├── outbox
│   └── xleases
├── ha_agent
│   ├── hosted-engine.lockspace -> 
/run/vdsm/storage/3afc47ba-afb9-413f-8de5-8d9a2f45ecde/6023f2b1-ea6e-485b-9ac2-8decd5f7820d/b38a5e37-fac4-4c23-a0c4-7359adff619c
│   └── hosted-engine.metadata -> 
/run/vdsm/storage/3afc47ba-afb9-413f-8de5-8d9a2f45ecde/77082dd8-7cb5-41cc-a69f-0f4c0380db23/38d552c5-689d-47b7-9eea-adb308da8027
├── images
│   ├── 1dc69552-dcc6-484d-8149-86c93ff4b8cc
│   │   ├── e4e26573-09a5-43fa-91ec-37d12de46480
│   │   ├── e4e26573-09a5-43fa-91ec-37d12de46480.lease
│   │   └── e4e26573-09a5-43fa-91ec-37d12de46480.meta
│   ├── 375d2483-ee83-4cad-b421-a5a70ec06ba6
│   │   ├── f936d4be-15e3-4983-8bf0-9ba5b97e638a
│   │   ├── f936d4be-15e3-4983-8bf0-9ba5b97e638a.lease
│   │   └── f936d4be-15e3-4983-8bf0-9ba5b97e638a.meta
│   ├── 6023f2b1-ea6e-485b-9ac2-8decd5f7820d
│   │   ├── b38a5e37-fac4-4c23-a0c4-7359adff619c
│   │   ├── b38a5e37-fac4-4c23-a0c4-7359adff619c.lease
│   │   └── b38a5e37-fac4-4c23-a0c4-7359adff619c.meta
│   ├── 685309b1-1ae9-45f3-90c3-d719a594482d
│   │   ├── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0
│   │   ├── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0.lease
│   │   └── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0.meta
│   ├── 74f1b2e7-2483-4e4d-8301-819bcd99129e
│   │   ├── c1888b6a-c48e-46ce-9677-02e172ef07af
│   │   ├── c1888b6a-c48e-46ce-9677-02e172ef07af.lease
│   │   └── c1888b6a-c48e-46ce-9677-02e172ef07af.meta
│   └── 77082dd8-7cb5-41cc-a69f-0f4c0380db23
│   ├── 38d552c5-689d-47b7-9eea-adb308da8027
│   ├── 38d552c5-689d-47b7-9eea-adb308da8027.lease
│   └── 38d552c5-689d-47b7-9eea-adb308da8027.meta
└── master
├── tasks
│   ├── 150927c5-bae6-45e4-842c-a7ba229fc3ba

[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-15 Thread penguin pages

Thanks for replies.

Here is where it is at:

# Two nodes think no VMs exist
[root@odin ~]# vdsm-client Host getVMList
[]

#One showing one VM but down
[root@medusa ~]# vdsm-client Host getVMList
[
{
"status": "Down",
"statusTime": "2153886148",
"vmId": "69ab4f82-1a53-42c8-afca-210a3a2715f1"
}
]
[root@medusa ~]# vdsm-client Host getAllVmStats
[
{
"exitCode": 1,
"exitMessage": "VM terminated with error",
"exitReason": 1,
"status": "Down",
"statusTime": "2153916276",
"vmId": "69ab4f82-1a53-42c8-afca-210a3a2715f1"
}
]
[root@medusa ~]# vdsm-client VM cont vmID="69ab4f82-1a53-42c8-afca-210a3a2715f1"
vdsm-client: Command VM.cont with args {'vmID': 
'69ab4f82-1a53-42c8-afca-210a3a2715f1'} failed:
(code=16, message=Unexpected exception)


# Assuming that ID represents the hosted-engine I tried to start it
[root@medusa ~]# hosted-engine --vm-start
The hosted engine configuration has not been retrieved from shared storage. 
Please ensure that ovirt-ha-agent is running and the storage server is 
reachable.

# Back to ovirt-ha-agent being fubar and stopping things.

I have about 8 or so VMs on the cluster.  Two are my IDM nodes, which have DNS 
and other core services.. which is what I am really trying to get up.. even if 
manually, until I figure out the oVirt issue.  I think you are correct: the 
"engine" volume is just for the engine.  Data is where the other VMs are.

[root@medusa images]# tree
.
├── 335c6b1a-d8a5-4664-9a9c-39744d511af8
│   ├── 579323ad-bf7b-479b-b682-6e1e234a7908
│   ├── 579323ad-bf7b-479b-b682-6e1e234a7908.lease
│   └── 579323ad-bf7b-479b-b682-6e1e234a7908.meta
├── d318cb8f-743a-461b-b246-75ffcde6bc5a
│   ├── c16877d0-eb23-42ef-a06e-a3221ea915fc
│   ├── c16877d0-eb23-42ef-a06e-a3221ea915fc.lease
│   └── c16877d0-eb23-42ef-a06e-a3221ea915fc.meta
└── junk
├── 296163f2-846d-4a2c-9a4e-83a58640b907
│   ├── 376b895f-e0f2-4387-b038-fbef4705fbcc
│   ├── 376b895f-e0f2-4387-b038-fbef4705fbcc.lease
│   └── 376b895f-e0f2-4387-b038-fbef4705fbcc.meta
├── 45a478d7-4c1b-43e8-b106-7acc75f066fa
│   ├── b5249e6c-0ba6-4302-8e53-b74d2b919d20
│   ├── b5249e6c-0ba6-4302-8e53-b74d2b919d20.lease
│   └── b5249e6c-0ba6-4302-8e53-b74d2b919d20.meta
├── d8b708c1-5762-4215-ae1f-0e57444c99ad
│   ├── 2536ca6d-3254-4cdc-bbd8-349ec1b8a0e9
│   ├── 2536ca6d-3254-4cdc-bbd8-349ec1b8a0e9.lease
│   └── 2536ca6d-3254-4cdc-bbd8-349ec1b8a0e9.meta
└── eaf12f3c-301f-4b61-b5a1-0c6d0b0a7f7b
├── fbf3bf59-a23a-4c6f-b66e-71369053b406
├── fbf3bf59-a23a-4c6f-b66e-71369053b406.lease
└── fbf3bf59-a23a-4c6f-b66e-71369053b406.meta

7 directories, 18 files
[root@medusa images]# cd /media/engine/
[root@medusa engine]# ls
3afc47ba-afb9-413f-8de5-8d9a2f45ecde
[root@medusa engine]# tree
.
└── 3afc47ba-afb9-413f-8de5-8d9a2f45ecde
├── dom_md
│   ├── ids
│   ├── inbox
│   ├── leases
│   ├── metadata
│   ├── outbox
│   └── xleases
├── ha_agent
├── images
│   ├── 1dc69552-dcc6-484d-8149-86c93ff4b8cc
│   │   ├── e4e26573-09a5-43fa-91ec-37d12de46480
│   │   ├── e4e26573-09a5-43fa-91ec-37d12de46480.lease
│   │   └── e4e26573-09a5-43fa-91ec-37d12de46480.meta
│   ├── 375d2483-ee83-4cad-b421-a5a70ec06ba6
│   │   ├── f936d4be-15e3-4983-8bf0-9ba5b97e638a
│   │   ├── f936d4be-15e3-4983-8bf0-9ba5b97e638a.lease
│   │   └── f936d4be-15e3-4983-8bf0-9ba5b97e638a.meta
│   ├── 6023f2b1-ea6e-485b-9ac2-8decd5f7820d
│   │   ├── b38a5e37-fac4-4c23-a0c4-7359adff619c
│   │   ├── b38a5e37-fac4-4c23-a0c4-7359adff619c.lease
│   │   └── b38a5e37-fac4-4c23-a0c4-7359adff619c.meta
│   ├── 685309b1-1ae9-45f3-90c3-d719a594482d
│   │   ├── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0
│   │   ├── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0.lease
│   │   └── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0.meta
│   ├── 74f1b2e7-2483-4e4d-8301-819bcd99129e
│   │   ├── c1888b6a-c48e-46ce-9677-02e172ef07af
│   │   ├── c1888b6a-c48e-46ce-9677-02e172ef07af.lease
│   │   └── c1888b6a-c48e-46ce-9677-02e172ef07af.meta
│   └── 77082dd8-7cb5-41cc-a69f-0f4c0380db23
│   ├── 38d552c5-689d-47b7-9eea-adb308da8027
│   ├── 38d552c5-689d-47b7-9eea-adb308da8027.lease
│   └── 38d552c5-689d-47b7-9eea-adb308da8027.meta
└── master
├── tasks
│   ├── 150927c5-bae6-45e4-842c-a7ba229fc3ba
│   │   └── 150927c5-bae6-45e4-842c-a7ba229fc3ba.job.0
│   ├── 21bba697-26e6-4fd8-ac7c-76f86b458368.temp
│   ├── 26c580b8-cdb2-4d21-9bea-96e0788025e6.temp
│   ├── 2e0e347c-fd01-404f-9459-ef175c82c354.backup
│   │   └── 2e0e347c-fd01-404f-9459-ef175c82c354.task
│   ├── 43f17022-e003-4e9f-81ec-4a01582223bd.backup
│   │   └── 43f17022-e003-4e9f-81ec-4a01582223bd.task
│   ├── 5055f61a-4cc8-459f-8fe5-19427b74a4f2.temp
│   ├── 6826c8f5-b9df-498e-a576-af0c4e7fe69c
│   │   └── 

[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-15 Thread penguin pages


I found this document, which was useful in explaining some details on how to debug 
and what the roles are. 
https://www.ovirt.org/images/Hosted-Engine-4.3-deep-dive.pdf

But still stuck with engine not starting.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WPUHDT7XL7UMMCOIDHAFJPFVJJXOVNMX/


[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-15 Thread penguin pages

So only two things that jump out are just

1)  ovirt-ha-agent not starting... back to Python silliness that I have no idea how 
to debug (see the quick broker check sketched after the log below)
[root@medusa ~]# systemctl status ovirt-ha-agent.service
● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring 
Agent
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled; 
vendor preset: disabled)
   Active: activating (auto-restart) (Result: exit-code) since Fri 2021-01-15 
11:54:52 EST; 6s ago
  Process: 16116 ExecStart=/usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent 
(code=exited, status=157)
 Main PID: 16116 (code=exited, status=157)
[root@medusa ~]# tail /var/log/messages
Jan 15 11:55:02 medusa systemd[1]: Started oVirt Hosted Engine High 
Availability Monitoring Agent.
Jan 15 11:55:02 medusa journal[16137]: ovirt-ha-agent 
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Failed to start 
necessary monitors
Jan 15 11:55:02 medusa journal[16137]: ovirt-ha-agent 
ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceback (most recent call 
last):#012  File 
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", 
line 85, in start_monitor#012response = self._proxy.start_monitor(type, 
options)#012  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1112, in 
__call__#012return self.__send(self.__name, args)#012  File 
"/usr/lib64/python3.6/xmlrpc/client.py", line 1452, in __request#012
verbose=self.__verbose#012  File "/usr/lib64/python3.6/xmlrpc/client.py", line 
1154, in request#012return self.single_request(host, handler, request_body, 
verbose)#012  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1166, in 
single_request#012http_conn = self.send_request(host, handler, 
request_body, verbose)#012  File "/usr/lib64/python3.6/xmlrpc/client.py", line 
1279, in send_request#012self.send_content(connection, request_body)#012  
File "/usr/lib64/python3.6/xmlrpc/client.py", line 1309, in send_content#012
connection.endheaders(request_body)#012  File 
"/usr/lib64/python3.6/http/client.py", line 1264, in endheaders#012
self._send_output(message_body, encode_chunked=encode_chunked)#012  File 
"/usr/lib64/python3.6/http/client.py", line 1040, in _send_output#012
self.send(msg)#012  File "/usr/lib64/python3.6/http/client.py", line 978, in 
send#012self.connect()#012  File 
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/unixrpc.py", line 
74, in connect#012
self.sock.connect(base64.b16decode(self.host))#012FileNotFoundError: [Errno 2] 
No such file or directory#012#012During handling of the above exception, 
another exception occurred:#012#012Traceback (most recent call last):#012  File 
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 
131, in _run_agent#012return action(he)#012  File 
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 
55, in action_proper#012return he.start_monitoring()#012  File 
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
 line 437, in start_monitoring#012self._initialize_broker()#012  File 
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
 line 561, in _initialize_broker#012m.get('options', {}))#012  File 
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", 
line 91, in start_monitor#012).format(t=type, o=options, 
e=e)#012ovirt_hosted_engine_ha.lib.exceptions.RequestError: brokerlink - failed 
to start monitor via ovirt-ha-broker: [Errno 2] No such file or directory, 
[monitor: 'network', options: {'addr': '172.16.100.1', 'network_test': 'dns', 
'tcp_t_address': '', 'tcp_t_port': ''}]
Jan 15 11:55:02 medusa journal[16137]: ovirt-ha-agent 
ovirt_hosted_engine_ha.agent.agent.Agent ERROR Trying to restart agent
Jan 15 11:55:02 medusa systemd[1]: ovirt-ha-agent.service: Main process exited, 
code=exited, status=157/n/a
Jan 15 11:55:02 medusa systemd[1]: ovirt-ha-agent.service: Failed with result 
'exit-code'.
Jan 15 11:55:05 medusa upsmon[1530]: Poll UPS [nutmonitor@172.16.100.102] 
failed - [nutmonitor] does not exist on server 172.16.100.102
Jan 15 11:55:06 medusa vdsm[14589]: WARN unhandled write event
Jan 15 11:55:08 medusa vdsm[14589]: WARN unhandled close event
Jan 15 11:55:10 medusa upsmon[1530]: Poll UPS [nutmonitor@172.16.100.102] 
failed - [nutmonitor] does not exist on server 172.16.100.102
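The FileNotFoundError at the bottom of that traceback is the agent failing to reach 
the ovirt-ha-broker unix socket, so the quick broker check mentioned above is just 
(sketch only; the socket path is the package default on my install, adjust if yours 
differs):

# is the broker the agent talks to actually up?
systemctl status ovirt-ha-broker.service
# does its unix socket exist? (assumed default path from ovirt-hosted-engine-ha)
ls -l /var/run/ovirt-hosted-engine-ha/broker.socket
# and what the broker itself logged recently
journalctl -u ovirt-ha-broker -n 50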


2) Notes from vdsmd about the hosted engine "setup not finished"... but this may be 
an issue with the ha-agent as the source
[root@medusa ~]# systemctl status vdsmd.service
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor 
preset: disabled)
   Active: active (running) since Fri 2021-01-15 11:49:27 EST; 6min ago
 Main PID: 14589 (vdsmd)
Tasks: 72 (limit: 410161)
   Memory: 77.8M
   CGroup: /system.slice/vdsmd.service
   ├─14589 /usr/bin/python3 /usr/share/vdsm/vdsmd
   ├─14686 

[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-15 Thread penguin pages


Maybe this is the rathole cause?

[root@medusa system]# systemctl status vdsmd.service
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor 
preset: disabled)
   Active: active (running) since Fri 2021-01-15 10:53:56 EST; 5s ago
  Process: 32306 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh 
--pre-start (code=exited, status=0/SUCCESS)
 Main PID: 32364 (vdsmd)
Tasks: 77 (limit: 410161)
   Memory: 77.1M
   CGroup: /system.slice/vdsmd.service
   ├─32364 /usr/bin/python3 /usr/share/vdsm/vdsmd
   ├─32480 /usr/libexec/ioprocess --read-pipe-fd 44 --write-pipe-fd 43 
--max-threads 10 --max-queued-requests 10
   ├─32488 /usr/libexec/ioprocess --read-pipe-fd 50 --write-pipe-fd 49 
--max-threads 10 --max-queued-requests 10
   ├─32494 /usr/libexec/ioprocess --read-pipe-fd 55 --write-pipe-fd 54 
--max-threads 10 --max-queued-requests 10
   ├─32501 /usr/libexec/ioprocess --read-pipe-fd 61 --write-pipe-fd 60 
--max-threads 10 --max-queued-requests 10
   └─32514 /usr/libexec/ioprocess --read-pipe-fd 65 --write-pipe-fd 61 
--max-threads 10 --max-queued-requests 10

Jan 15 10:53:55 medusa.penguinpages.local vdsmd_init_common.sh[32306]: vdsm: 
Running nwfilter
Jan 15 10:53:55 medusa.penguinpages.local vdsmd_init_common.sh[32306]: vdsm: 
Running dummybr
Jan 15 10:53:56 medusa.penguinpages.local vdsmd_init_common.sh[32306]: vdsm: 
Running tune_system
Jan 15 10:53:56 medusa.penguinpages.local vdsmd_init_common.sh[32306]: vdsm: 
Running test_space
Jan 15 10:53:56 medusa.penguinpages.local vdsmd_init_common.sh[32306]: vdsm: 
Running test_lo
Jan 15 10:53:56 medusa.penguinpages.local systemd[1]: Started Virtual Desktop 
Server Manager.
Jan 15 10:53:57 medusa.penguinpages.local vdsm[32364]: WARN MOM not available. 
Error: [Errno 111] Connection refused
Jan 15 10:53:57 medusa.penguinpages.local vdsm[32364]: WARN MOM not available, 
KSM stats will be missing. Error:
Jan 15 10:53:57 medusa.penguinpages.local vdsm[32364]: WARN Failed to retrieve 
Hosted Engine HA info, is Hosted Engine setup finished?
Jan 15 10:53:59 medusa.penguinpages.local vdsm[32364]: WARN Not ready yet, 
ignoring event '|virt|VM_status|69ab4f82-1a53-42c8-afca-210a3a2715f1' 
args={'69ab4f82-1a53-42c8-afca-210a3a2715f1': {'status': 'Down', 'vmId': 
'69ab4f82-1a53>
[root@medusa system]#


I googled around and the hits talk about re-running the engine deployment.. is there 
some kind of flow diagram of how to get oVirt back on its feet when it dies like 
this?  I feel like I am poking in the dark here.
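
For what it is worth, the minimal "is the HA stack alive" pass I would run on one 
host (sketch only; these are the same hosted-engine / systemd commands used elsewhere 
in this thread, nothing exotic):

# both services need to be healthy before the engine VM can come up
systemctl status ovirt-ha-broker ovirt-ha-agent
# what the HA agents think of the engine VM across hosts
hosted-engine --vm-status
# only once --vm-status shows the VM down / cleanly stopped
hosted-engine --vm-start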
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4ZWNO3CQGRTUIBESB2YC4S2C2LI3ODCC/


[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-15 Thread penguin pages
[root@medusa ~]# virsh net-list
Please enter your authentication name: admin
Please enter your password:
 Name          State    Autostart   Persistent
------------------------------------------------
 ;vdsmdummy;   active   no          no


# Hmm.. not sure if this is expected with oVirt.. but the defined networks I use are 
still present..
[root@medusa ~]# ls /var/lib/vdsm/persistence/netconf/nets/
101_Storage  102_DMZ  ovirtmgmtStorage

# The one the ovirt engine is bound to is the default one, named "ovirtmgmt"
[root@medusa ~]# cat /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt
{
"netmask": "255.255.255.0",
"ipv6autoconf": false,
"nic": "enp0s29u1u4",
"bridged": true,
"ipaddr": "172.16.100.103",
"defaultRoute": true,
"dhcpv6": false,
"gateway": "172.16.100.1",
"mtu": 1500,
"switch": "legacy",
"stp": false,
"bootproto": "none",
"nameservers": [
"172.16.100.40",
"8.8.8.8"
]
}
[root@medusa ~]#

# Looks fine to me...
[root@medusa ~]# virsh net-start ovirtmgmt
Please enter your authentication name: admin
Please enter your password:
error: failed to get network 'ovirtmgmt'
error: Network not found: no network with matching name 'ovirtmgmt'

[root@medusa ~]#
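
One thing that may matter here (hedged, going off how VDSM normally wires things up, 
and the vdsm-ovirtmgmt.xml that shows up under /var/run/libvirt/network on these 
hosts): "ovirtmgmt" itself is a Linux bridge that VDSM creates and persists, while 
the libvirt-defined network VDSM registers is named "vdsm-ovirtmgmt".  So virsh not 
finding a network literally named "ovirtmgmt" may be expected.  A quick sanity check:

# the bridge should exist even if no libvirt network by that name does
ip -br link show ovirtmgmt
ip -br addr show ovirtmgmt
# the libvirt network VDSM defines is the vdsm- prefixed one, if any
virsh net-list --all | grep -i vdsm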


... back to googling...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GYXN5J3JW4CMEAWYVZBXQIY2VJHKOOW3/


[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-15 Thread penguin pages


Seems a fresh cup of coffee is helping 

Post-Streams-update fubar issues
# Fix dependency issues
yum update --allowerasing
# List VMs read-only to bypass the password issue..  uh.. bueller... I know it has 
VMs..  
[root@odin ~]# virsh --readonly list
 Id   Name   State

# Set password so virsh with admin account works
[root@odin ~]# saslpasswd2 -a libvirt admin
Password:
Again (for verification):
[root@odin ~]# virsh list --all
Please enter your authentication name: admin
Please enter your password:
 Id   Name                State
------------------------------------
 -    HostedEngineLocal   shut off

[root@odin ~]# virsh start HostedEngineLocal
Please enter your authentication name: admin
Please enter your password:
error: Failed to start domain HostedEngineLocal
error: Requested operation is not valid: network 'default' is not active

[root@odin ~]#
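
A hedged fix for that specific error (plain libvirt commands, nothing oVirt-specific; 
my understanding is that HostedEngineLocal is the bootstrap VM and it references 
libvirt's stock "default" NAT network):

virsh net-list --all          # confirm 'default' exists but is inactive
virsh net-start default       # bring it up for this boot
virsh net-autostart default   # have libvirtd start it automatically next time

(virsh will prompt for the same SASL admin credentials as above.)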



Now looking into the OVS side.  But I am game for other suggestions, as this seems 
like a bit of a hack to get it working



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/M6SHNBILQBXBWFW6BXBCPWVDB6UGT3XL/


[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-15 Thread penguin pages
Thanks for the reply..  seems guestfish is a tool I need to do some RTFM on.

This would, I think, allow me to read in a disk from the "engine" storage and 
manipulate files within it.


But is there a way to just start the VMs?

I guess I jumped from "old school virsh" to relying on the oVirt GUI... and need 
to buff up on tools that let me debug when the engine is down.
[root@thor ~]# virsh list --all
Please enter your authentication name: admin
Please enter your password:
error: failed to connect to the hypervisor
error: authentication failed: authentication failed

<<< and I only ever use ONE password for all systems / accounts, but I think 
virsh has been deprecated... so maybe this is why >

I am currently poking around with 

[root@thor ~]# virt-
virt-admin   virt-clone   virt-diff
virt-host-validate   virt-ls  virt-resize  
virt-tar-in  virt-xml
virt-alignment-scan  virt-copy-in virt-edit
virt-index-validate  virt-make-fs virt-sanlock-cleanup 
virt-tar-out virt-xml-validate
virt-builder virt-copy-outvirt-filesystems 
virt-inspector   virt-pki-validatevirt-sparsify
virt-v2v
virt-builder-repository  virt-customize   virt-format  
virt-install virt-qemu-runvirt-sysprep 
virt-v2v-copy-to-local
virt-cat virt-df  virt-get-kernel  
virt-log virt-rescue  virt-tail
virt-what
[root@thor ~]#

Does anyone have an example of:
1) listing the VMs
2) starting a VM named "foo"
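
Something like this seems to be the minimal virsh answer I was after (sketch, using 
the SASL "admin" account set up with saslpasswd2 earlier in the thread; "foo" is just 
a placeholder VM name):

# 1) list VMs, running and shut off
virsh -c qemu:///system list --all
# 2) start the VM named "foo"
virsh -c qemu:///system start foo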
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BFY2A6L4QEVFCCZ7TVAO7YB324LFCKWJ/


[ovirt-users] Re: Q: Node SW Install from oVirt Engine UI Failed

2021-01-15 Thread penguin pages
What is in /var/log/messages?


What steps did you take to deploy after the CentOS 8 Streams installation?  Did you 
launch the deployment from the Cockpit UI?

The GUI deployment process does output reasonable logs
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RPYUG7T5724BOHTUEKVAEBKP2WYJNA5R/


[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-15 Thread penguin pages
Update:

[root@thor ~]#  dnf update --allowerasing
Last metadata expiration check: 0:00:09 ago on Fri 15 Jan 2021 10:02:05 AM EST.
Dependencies resolved.
=========================================================================================
 Package                    Architecture   Version         Repository     Size
=========================================================================================
Upgrading:
 cockpit-bridge             x86_64         234-1.el8       baseos        597 k
 cockpit-system             noarch         234-1.el8       baseos        3.1 M
     replacing  cockpit-dashboard.noarch 217-1.el8
Removing dependent packages:
 cockpit-ovirt-dashboard    noarch         0.14.17-1.el8   @ovirt-4.4     16 M
 ovirt-host                 x86_64         4.4.1-4.el8     @ovirt-4.4     11 k
 ovirt-hosted-engine-setup  noarch         2.4.9-1.el8     @ovirt-4.4    1.3 M

Transaction Summary
=========================================================================================
Upgrade  2 Packages
Remove   3 Packages

Total download size: 3.7 M
Is this ok [y/N]: y
Downloading Packages:
(1/2): cockpit-bridge-234-1.el8.x86_64.rpm        160 kB/s | 597 kB   00:03
(2/2): cockpit-system-234-1.el8.noarch.rpm        746 kB/s | 3.1 MB   00:04
-----------------------------------------------------------------------------------------
Total                                             499 kB/s | 3.7 MB   00:07
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing:                                                              1/1
  Upgrading: cockpit-bridge-234-1.el8.x86_64                              1/8
  Upgrading: cockpit-system-234-1.el8.noarch                              2/8
  Erasing  : ovirt-host-4.4.1-4.el8.x86_64                                3/8
  Obsoleting   : cockpit-dashboard-217-1.el8.noarch                       4/8
  Cleanup  : cockpit-system-217-1.el8.noarch                              5/8
  Erasing  : 

[ovirt-users] oVirt Engine no longer Starting

2021-01-15 Thread penguin pages


3 node cluster.  Gluster for shared storage.

CentOS8

Updated to CentOS 8 Streams :P -> 
https://bugzilla.redhat.com/show_bug.cgi?id=1911910


After several weeks .. I am really in need of direction to get this fixed.

I saw several postings about oVirt package issues but not found a fix.



[root@thor ~]# dnf update
Last metadata expiration check: 2:54:29 ago on Fri 15 Jan 2021 06:49:16 AM EST.
Error:
 Problem 1: package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, 
but none of the providers can be installed
  - package cockpit-bridge-234-1.el8.x86_64 conflicts with cockpit-dashboard < 
233 provided by cockpit-dashboard-217-1.el8.noarch
  - cannot install the best update candidate for package 
ovirt-host-4.4.1-4.el8.x86_64
  - cannot install the best update candidate for package 
cockpit-bridge-217-1.el8.x86_64
 Problem 2: problem with installed package ovirt-host-4.4.1-4.el8.x86_64
  - package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but none 
of the providers can be installed
  - package cockpit-system-234-1.el8.noarch obsoletes cockpit-dashboard 
provided by cockpit-dashboard-217-1.el8.noarch
  - cannot install the best update candidate for package 
cockpit-dashboard-217-1.el8.noarch
 Problem 3: package ovirt-hosted-engine-setup-2.4.9-1.el8.noarch requires 
ovirt-host >= 4.4.0, but none of the providers can be installed
  - package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but none 
of the providers can be installed
  - package ovirt-host-4.4.1-1.el8.x86_64 requires cockpit-dashboard, but none 
of the providers can be installed
  - package ovirt-host-4.4.1-2.el8.x86_64 requires cockpit-dashboard, but none 
of the providers can be installed
  - package ovirt-host-4.4.1-3.el8.x86_64 requires cockpit-dashboard, but none 
of the providers can be installed
  - package cockpit-system-234-1.el8.noarch obsoletes cockpit-dashboard 
provided by cockpit-dashboard-217-1.el8.noarch
  - cannot install the best update candidate for package 
ovirt-hosted-engine-setup-2.4.9-1.el8.noarch
  - cannot install the best update candidate for package 
cockpit-system-217-1.el8.noarch
(try to add '--allowerasing' to command line to replace conflicting packages or 
'--skip-broken' to skip uninstallable packages or '--nobest' to use not only 
best candidate packages)
[root@thor ~]# yum install cockpit-dashboard --nobest
Last metadata expiration check: 2:54:52 ago on Fri 15 Jan 2021 06:49:16 AM EST.
Package cockpit-dashboard-217-1.el8.noarch is already installed.
Dependencies resolved.

 Problem: problem with installed package ovirt-host-4.4.1-4.el8.x86_64
  - package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but none 
of the providers can be installed
  - package ovirt-host-4.4.1-1.el8.x86_64 requires cockpit-dashboard, but none 
of the providers can be installed
  - package ovirt-host-4.4.1-2.el8.x86_64 requires cockpit-dashboard, but none 
of the providers can be installed
  - package ovirt-host-4.4.1-3.el8.x86_64 requires cockpit-dashboard, but none 
of the providers can be installed
  - package cockpit-system-234-1.el8.noarch obsoletes cockpit-dashboard 
provided by cockpit-dashboard-217-1.el8.noarch
  - cannot install the best candidate for the job
=========================================================================================
 Package      Architecture   Version       Repository   Size
=========================================================================================
Skipping packages with broken dependencies:
 ovirt-host   x86_64         4.4.1-1.el8   ovirt-4.4    13 k
 ovirt-host   x86_64         4.4.1-2.el8   ovirt-4.4    13 k
 ovirt-host   x86_64         4.4.1-3.el8   ovirt-4.4    13 k

Transaction Summary
=========================================================================================
Skip  3 Packages

Nothing to do.

[ovirt-users] Re: CEPH - Opinions and ROI

2020-10-01 Thread penguin pages
Thanks for response.

Seems a bit too far into "bleeding edge" .. such that I should kick the tires 
virtually vs committing plugins to oVirt + Gluster, where upgrades and other 
issues may happen.   Seems like an alpha-stage feature (no thin provisioning, issues 
with deleting volumes, no export / import .. which is a big one for me). 

Do we have a direction on where / if it will become more of a first-class citizen in 
oVirt?  4.?? 

Maybe others in community have it and it is working for them.  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/J2NL5C2INUZPQEWIMSQPJIR6STKGQ3IR/


[ovirt-users] Re: CEPH - Opinions and ROI

2020-10-01 Thread penguin pages

These are all storage-rich servers.   

Drives:
USB 3 64GB Boot / OS
512GB SSD (Gluster HCI:  volumes "engine", "data", "vmstore", and "iso"; I added the 
last one to .. well, to learn if I could extend with LVM ;)
1TB SSD (VDO + Gluster, manual build due to brick and fqdn issues in oVirt.  It did 
import once it was created, so that is good..)
1TB SSD/NVMe (?? CEPH ??)

The goal is that I can learn the technology and play.. but have several independent 
volumes where I can move important systems to / from / back up, so if my playing 
around messes things up.. I have a fallback.

I would try Red Hat Container Storage.. but it is a home lab, so my budget is all 
used up on hardware, hence CentOS.  I am hoping oVirt has a similar setup process 
like "yum install -y gluster-ansible-roles" but for CEPH.

This video implies something of that ilk exists..  
https://www.youtube.com/watch?v=wIw7RjHPhzs  but it jumps right into 
setup.. and fails to mention "how did you get that plugin into cockpit"... and 
whether there is an "oVirt" version.


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5BOD7NGNWF6QVEX2SMUFKXHYNKZITQR4/


[ovirt-users] Re: java.lang.reflect.UndeclaredThrowableException - oVirt engine UI

2020-10-01 Thread penguin pages
Sorry.. this was a duplicate post..  I added this via web browser...  waited 5 
min... it did not show.. so I assumed it failed to post..  so sent again via 
email

https://lists.ovirt.org/archives/list/users@ovirt.org/thread/45KKF5TN5PRQ3R7MDOWIQTSYZXZRVDIZ/

Fixed
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FIPQKLTF7AK2GX2S3F46CJQHBRUFLFVC/


[ovirt-users] Re: Gluster Volumes - Correct Peer Connection

2020-09-30 Thread penguin pages
I think this is the issue.  When HCI deployed the nodes, it consumed the drives 
and set up "engine", "data", and "vmstore".

The GUI was set for the "storage" network via hostnames correctly.  And I think, 
based on watching replication traffic, it is using the 10Gb "storage" network.  The 
CLI shows peers on that LAN also.

BUT..  oVirt keeps referring to nodes by the "hostname"  
EX:   
thor.penguinpages.local   (ovirtmanagment lan 1Gb 172.16.100.0/24) 
vs
thorst.penguinpages.local (storage lan 10Gb 172.16.101.0/24)

We see this error when I notice a brick having a replication issue.  (Which the 
CLI does not show.. but that is a different topic :)
# node "medusa" showing three unsynced files.  Select brick -> reset

But when I say "reset brick" to restart its replication I get an error

Error while executing action Start Gluster Volume Reset Brick: Volume reset 
brick start failed: rc=-1 out=() err=['brick: 
medusa_penguinpages_local:/gluster_bricks/vmstore/vmstore does not exist in 
volume: vmstore']

##

the real brick name is
[root@odin ~]# gluster volume status vmstore
Status of volume: vmstore
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick thorst.penguinpages.local:/gluster_br
icks/vmstore/vmstore49155 0  Y   14179
Brick odinst.penguinpages.local:/gluster_br
icks/vmstore/vmstore49156 0  Y   8776
Brick medusast.penguinpages.local:/gluster_
bricks/vmstore/vmstore  49156 0  Y   11985
Self-heal Daemon on localhost   N/A   N/AY   2698
Self-heal Daemon on thorst.penguinpages.loc
al  N/A   N/AY   14256
Self-heal Daemon on medusast.penguinpages.l
ocalN/A   N/AY   12363

Task Status of Volume vmstore
--
There are no active volume tasks
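
To line up what gluster itself calls the brick hosts versus the name oVirt is passing 
down, a quick check from any node (plain gluster CLI, nothing oVirt-specific):

# the hostnames gluster knows its peers by
gluster peer status
# the exact host:path strings recorded in the volume definition
gluster volume info vmstore | grep -i brick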

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7U2KOVJ4HQ26TKD46XCWYA6QTFFCEQNN/


[ovirt-users] Re: Gluster Volumes - Correct Peer Connection

2020-09-30 Thread penguin pages
I have a network called "Storage"   but not called "gluster logical network"   

Front end  172.16.100.0/24 for mgmt and vms (1Gb)  "ovirtmgmt"

Back end 172.16.101.0/24 for storage (10Gb) "Storage"

and yes.. I was never able to figure out how to use the UI to create bricks.. so I 
just was bad and went to the CLI and made them.

But it would be valuable to learn the oVirt "Best Practice" way... though the HCI 
wizard setup SHOULD have done this, in that the wizard allows it and I supplied the 
front vs back end networks.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5COXTYU3QRPIU4YP3OTSMTA7Y4E5UGEV/


[ovirt-users] Version 4.4.2.6-1.el8 -Console Error: java.lang.reflect.UndeclaredThrowableException

2020-09-30 Thread penguin pages

Got this message this AM when I tried to log in to the oVirt Engine, which up till 
now has been working fine.

I can supply username and password and get portal to choose "Administration 
Portal" or "VM Portal"

I have tested both.. both have same response about 
java.lang.reflect.UndeclaredThrowableException

I restarted the engine
#
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown
hosted-engine --vm-status
#make sure that the status is shutdown before restarting
hosted-engine --vm-start
hosted-engine --vm-status
#make sure the status is health before leaving maintenance mode
hosted-engine --set-maintenance --mode=none

#
--== Host thor.penguinpages.local (id: 1) status ==--

Host ID: 1
Host timestamp : 70359
Score  : 3400
Engine status  : {"vm": "down", "health": "bad", "detail": 
"unknown", "reason": "vm not running on this host"}
Hostname   : thor.penguinpages.local
Local maintenance  : False
stopped: False
crc32  : 25adf6d0
conf_on_shared_storage : True
local_conf_timestamp   : 70359
Status up-to-date  : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=70359 (Wed Sep 30 09:35:22 2020)
host-id=1
score=3400
vm_conf_refresh_time=70359 (Wed Sep 30 09:35:22 2020)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False


--== Host medusa.penguinpages.local (id: 3) status ==--

Host ID: 3
Host timestamp : 92582
Score  : 3400
Engine status  : {"vm": "up", "health": "good", "detail": 
"Up"}
Hostname   : medusa.penguinpages.local
Local maintenance  : False
stopped: False
crc32  : 623359d2
conf_on_shared_storage : True
local_conf_timestamp   : 92582
Status up-to-date  : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=92582 (Wed Sep 30 09:35:25 2020)
host-id=3
score=3400
vm_conf_refresh_time=92582 (Wed Sep 30 09:35:25 2020)
conf_on_shared_storage=True
maintenance=False
state=EngineUp
stopped=False
##

I downloaded and installed the key from the portal.. thinking that may have been the 
issue.. it was not.

I googled around / searched the forum and nothing jumped out.   (The only hit I found 
in the forum was https://lists.ovirt.org/pipermail/users/2015-June/033421.html  but 
with no note about a fix.)

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/45KKF5TN5PRQ3R7MDOWIQTSYZXZRVDIZ/


[ovirt-users] Re: oVirt Change hosts to FQDN

2020-09-28 Thread penguin pages
I saw the note about the holiday.. and I wish all well.   Just kind of stuck here: I 
am afraid to move forward building the stack with the nodes left in limbo with 
gluster / the cluster.  I just need to repair the hosts, which are set to connect via 
IP instead of DNS.

Any ideas.. or is this a wipe and rebuild of engine again?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SNZ4LEKE5P6LB43BCIXAD5F7RR7IVIB6/


[ovirt-users] Re: oVirt - Engine - VM Reconstitute

2020-09-25 Thread penguin pages
 there is a "Import VM" tab which
will allow you to import your VMs.


Anyone have an idea of how this is done? 

I can see .meta and .lease files, but I am not sure how these would be used to 
import VMs.

It would be nice to tell it to review the current "disks / images" and compare them 
to what is in the library ...  because I think those VMs are orphaned, and the ones I 
imported... are now dead / garbage on the volumes.  And I see no easy way to clean 
up.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GHVOZ3RRVIHSTWWUES4RG47H22HUIJ3S/


[ovirt-users] Re: oVirt - Engine - VM Reconstitute

2020-09-25 Thread penguin pages
Thanks for reply.. It really is appreciated.

1) Note about VM import
-> Can you provide details / an example?  For me I click on
oVirt -> Compute -> Virtual Machines -> Import
-> Source Option (the only one making any sense would be "Export Domain"; the rest 
are unrelated, and the KVM one would need the xml.. which I think is gone now).  So.. 
if I choose "Export Domain", the "path" points to what file to import the VM from?


2) Note about importing from a live VM..  When the engine is gone.. this would be 
interesting to try.. but as I rebooted.. and re-installed the engine.. I think this 
cleared any hope of getting the libvirt xml files out
[root@medusa libvirt]# tree /var/run/libvirt/

.
├── hostdevmgr
├── interface
│   └── driver.pid
├── libvirt-admin-sock
├── libvirt-sock
├── libvirt-sock-ro
├── network
│   ├── autostarted
│   ├── driver.pid
│   ├── nwfilter.leases
│   ├── ;vdsmdummy;.xml
│   └── vdsm-ovirtmgmt.xml
├── nodedev
│   └── driver.pid
├── nwfilter
│   └── driver.pid
├── nwfilter-binding
│   └── vnet0.xml
├── qemu
│   ├── autostarted
│   ├── driver.pid
│   ├── HostedEngine.pid
│   ├── HostedEngine.xml
│   └── slirp
├── secrets
│   └── driver.pid
├── storage
│   ├── autostarted
│   └── driver.pid
├── virtlockd-sock
├── virtlogd-admin-sock
└── virtlogd-sock

10 directories, 22 files
[root@medusa libvirt]#
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VUXIOEH2V4SY3MVAOLTS2V5YQO6MZCMQ/


[ovirt-users] Re: oVirt - Engine - VM Reconstitute

2020-09-25 Thread penguin pages


Ok.. digging around a bit.

##
[root@medusa vmstore]# tree -h /media/data/
/media/data/
├── [   48]  7801d608-0416-4e5e-a469-2fefa2398d06
│   ├── [   89]  dom_md
│   │   ├── [ 1.0M]  ids
│   │   ├── [  16M]  inbox
│   │   ├── [ 2.0M]  leases
│   │   ├── [  549]  metadata
│   │   ├── [  16M]  outbox
│   │   └── [ 1.2M]  xleases
│   ├── [ 8.0K]  images
│   │   ├── [ 8.0K]  060cb15c-efb4-45fd-82e3-9001312cffdf
│   │   │   ├── [ 160K]  d3f7ab6e-d371-4748-bc6d-26557ce9812a
│   │   │   ├── [ 1.0M]  d3f7ab6e-d371-4748-bc6d-26557ce9812a.lease
│   │   │   └── [  430]  d3f7ab6e-d371-4748-bc6d-26557ce9812a.meta
│   │   ├── [  149]  138a359c-13e6-4448-b543-533894e41fca
│   │   │   ├── [ 1.7G]  ece912a4-6756-4944-803c-c7ac58713ef4
│   │   │   ├── [ 1.0M]  ece912a4-6756-4944-803c-c7ac58713ef4.lease
│   │   │   └── [  304]  ece912a4-6756-4944-803c-c7ac58713ef4.meta
│   │   ├── [  149]  26def4e7-1153-417c-88c1-fd3dfe2b0fb9
│   │   │   ├── [ 100G]  0136657f-1f6f-4140-8c7b-f765316d4e3a
│   │   │   ├── [ 1.0M]  0136657f-1f6f-4140-8c7b-f765316d4e3a.lease
│   │   │   └── [  316]  0136657f-1f6f-4140-8c7b-f765316d4e3a.meta
│   │   ├── [  149]  2d684975-06e1-442e-a785-1cfcc70a9490
│   │   │   ├── [ 4.3G]  688ce708-5be2-4082-9337-7209081082bf
│   │   │   ├── [ 1.0M]  688ce708-5be2-4082-9337-7209081082bf.lease
│   │   │   └── [  343]  688ce708-5be2-4082-9337-7209081082bf.meta
│   │   ├── [ 8.0K]  444ee51d-da70-419e-8d8e-a94aed28d0fa
│   │   │   ├── [ 112M]  23608ebc-f8a4-4482-875c-e12eaa69c8eb
│   │   │   ├── [ 1.0M]  23608ebc-f8a4-4482-875c-e12eaa69c8eb.lease
│   │   │   ├── [  377]  23608ebc-f8a4-4482-875c-e12eaa69c8eb.meta
│   │   │   ├── [ 683M]  3f2f0fc7-cb3c-4bbc-9f7b-4f196588c78c
│   │   │   ├── [ 1.0M]  3f2f0fc7-cb3c-4bbc-9f7b-4f196588c78c.lease
│   │   │   └── [  369]  3f2f0fc7-cb3c-4bbc-9f7b-4f196588c78c.meta
│   │   ├── [  149]  54cfa3af-9045-4e9b-ba8d-4ac7181490da
│   │   │   ├── [ 113M]  a1807521-7009-4896-9325-6a2a7c0e29ef
│   │   │   ├── [ 1.0M]  a1807521-7009-4896-9325-6a2a7c0e29ef.lease
│   │   │   └── [  288]  a1807521-7009-4896-9325-6a2a7c0e29ef.meta
│   │   ├── [  149]  5917ba35-689c-409b-a89c-37bd08f06e76
│   │   │   ├── [ 7.7G]  ea6610cd-c0b9-457f-aaf5-d199a3bd1a83
│   │   │   ├── [ 1.0M]  ea6610cd-c0b9-457f-aaf5-d199a3bd1a83.lease
│   │   │   └── [  351]  ea6610cd-c0b9-457f-aaf5-d199a3bd1a83.meta
│   │   ├── [ 8.0K]  6914d63d-e57f-4e9f-9ca2-a378ad2f0a4f
│   │   │   ├── [ 683M]  3f2f0fc7-cb3c-4bbc-9f7b-4f196588c78c
│   │   │   ├── [ 1.0M]  3f2f0fc7-cb3c-4bbc-9f7b-4f196588c78c.lease
│   │   │   ├── [  369]  3f2f0fc7-cb3c-4bbc-9f7b-4f196588c78c.meta
│   │   │   ├── [ 100M]  b0deed57-f5fe-4e44-8d0a-105029bdeae5
│   │   │   ├── [ 1.0M]  b0deed57-f5fe-4e44-8d0a-105029bdeae5.lease
│   │   │   └── [  377]  b0deed57-f5fe-4e44-8d0a-105029bdeae5.meta
│   │   ├── [  149]  7daa2083-29d8-4b64-a50a-d09ab1428513
│   │   │   ├── [ 100G]  d96bf89f-351a-4c86-9865-9531d8f7a97b
│   │   │   ├── [ 1.0M]  d96bf89f-351a-4c86-9865-9531d8f7a97b.lease
│   │   │   └── [  316]  d96bf89f-351a-4c86-9865-9531d8f7a97b.meta
│   │   ├── [ 8.0K]  7e523f3d-311a-4caf-ae34-6cd455274d5f
│   │   │   ├── [ 683M]  3f2f0fc7-cb3c-4bbc-9f7b-4f196588c78c
│   │   │   ├── [ 1.0M]  3f2f0fc7-cb3c-4bbc-9f7b-4f196588c78c.lease
│   │   │   ├── [  369]  3f2f0fc7-cb3c-4bbc-9f7b-4f196588c78c.meta
│   │   │   ├── [ 4.6G]  55ebf216-b404-4c09-8cbc-91882f42cb94
│   │   │   ├── [ 1.0M]  55ebf216-b404-4c09-8cbc-91882f42cb94.lease
│   │   │   └── [  316]  55ebf216-b404-4c09-8cbc-91882f42cb94.meta
│   │   ├── [  149]  9ccb26cf-dd4a-4c9a-830c-ee084074d7a1
│   │   │   ├── [  11G]  a704ef38-6883-4857-b2fa-423033058927
│   │   │   ├── [ 1.0M]  a704ef38-6883-4857-b2fa-423033058927.lease
│   │   │   └── [  311]  a704ef38-6883-4857-b2fa-423033058927.meta
│   │   ├── [  149]  a2bbc814-015a-4a37-8f58-68aa6ef73f8e
│   │   │   ├── [  19G]  a5cb25e3-28a0-4d88-b2ca-d3732765b5fb
│   │   │   ├── [ 1.0M]  a5cb25e3-28a0-4d88-b2ca-d3732765b5fb.lease
│   │   │   └── [  311]  a5cb25e3-28a0-4d88-b2ca-d3732765b5fb.meta
│   │   ├── [  149]  adecc80f-8ce0-4ce0-9d73-d5de8f4a72e1
│   │   │   ├── [  10G]  cc010af4-ce51-4917-9f2c-db0ec9353103
│   │   │   ├── [ 1.0M]  cc010af4-ce51-4917-9f2c-db0ec9353103.lease
│   │   │   └── [  326]  cc010af4-ce51-4917-9f2c-db0ec9353103.meta
│   │   ├── [  149]  b8bd6924-fcd3-4479-a7da-6b255431a308
│   │   │   ├── [  20G]  f5a891db-4492-49e4-bf6a-72182ba4bf15
│   │   │   ├── [ 1.0M]  f5a891db-4492-49e4-bf6a-72182ba4bf15.lease
│   │   │   └── [  314]  f5a891db-4492-49e4-bf6a-72182ba4bf15.meta
│   │   ├── [  149]  ce4133ad-562f-4f23-add6-cd168a906267
│   │   │   ├── [ 118M]  a09c8a84-1904-4632-892e-beb55abc873a
│   │   │   ├── [ 1.0M]  a09c8a84-1904-4632-892e-beb55abc873a.lease
│   │   │   └── [  313]  a09c8a84-1904-4632-892e-beb55abc873a.meta
│   │   ├── [  149]  d0038fa8-eee1-4548-82b9-b7f79adb182c
│   │   │   ├── [ 7.7G]  7568e474-8ab5-4953-8cd3-a9c9b8df3595
│   │   │   ├── [ 1.0M]  7568e474-8ab5-4953-8cd3-a9c9b8df3595.lease
│   │   │   

[ovirt-users] Re: oVirt host "unregistered"

2020-09-24 Thread penguin pages

No root cause, but I found a workaround.

Even though the hosts to be added are fully in /etc/hosts, resolve fine, and 
passwordless ssh login works...

[root@thor ~]# ping odin
PING odin.penguinpages.local (172.16.100.102) 56(84) bytes of data.
64 bytes from odin.penguinpages.local (172.16.100.102): icmp_seq=1 ttl=64 
time=0.085 ms
^C
--- odin.penguinpages.local ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms
[root@thor ~]# ping odin.penguinpages.local
PING odin.penguinpages.local (172.16.100.102) 56(84) bytes of data.
64 bytes from odin.penguinpages.local (172.16.100.102): icmp_seq=1 ttl=64 
time=0.083 ms
^C
--- odin.penguinpages.local ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms


Changing the target system to the IP address allows the host to be added. 

Hmm..  my guess is something is hard-coded to the IP and not right here..  

I will post if I find more.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ATAJ2XOTH22R53K5ZCB6V5PX6YXJCGCM/


[ovirt-users] Re: oVirt host "unregistered"

2020-09-24 Thread penguin pages


After googling around ... it seemed that, with the "reinstall" error noted for the 
server running the engine.. it was just time to try that.

I ran "ovirt-hosted-engine-cleanup" on all three nodes.

Then made sure the gluster bricks were happy.. and reran the ovirt-engine setup 
wizard from cockpit.

After a few gluster option adjustments the deployment completed.   It came up 
and noted that I had two more nodes to set up that it had detected via gluster..  I 
tried that wizard but it just hung... so I closed it.   I then set the CPU type (one 
of my systems is an older Sandy Bridge, so the cluster was set to that) and other 
cluster parameters...

Went to add node(s) and get error:

"Error while executing action: Cannot add Host. Connecting to host via SSH has 
failed, verify that the host is reachable (IP address, routable address etc.) 
You may refer to the engine.log file for further details."

Tested SSH between all nodes and it works without a password.

Told it in the UI to "fetch" the ssh key..

Enter host fingerprint or fetch manually from host
Error in fetching fingerprint

I googled around and noted stuff about ovn key issues.   Any ideas on where this key 
issue comes from?  How do I root cause why I can't re-add the nodes?
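
Since the "fetch fingerprint" call is made by the engine itself, a hedged sanity 
check is to run the equivalent from the engine VM rather than between the hosts 
(odin used here just as the example target):

# run these from the engine VM (ovirte01), not from a peer host
ping -c1 odin.penguinpages.local
nc -zv odin.penguinpages.local 22               # is sshd reachable from the engine at all?
ssh-keyscan -t ecdsa odin.penguinpages.local    # can a fingerprint actually be fetched?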
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HEBLFFAVNYEP656A5BVITPE6NBWGHBXF/


[ovirt-users] Re: Restart oVirt-Engine

2020-09-24 Thread penguin pages


Googled around and had just missed the right hit. 


#
[root@thor iso]# hosted-engine --vm-status


--== Host thor.penguinpages.local (id: 1) status ==--

Host ID: 1
Host timestamp : 109144
Score  : 3400
Engine status  : {"vm": "up", "health": "good", "detail": 
"Up"}
Hostname   : thor.penguinpages.local
Local maintenance  : False
stopped: False
crc32  : 87e5facd
conf_on_shared_storage : True
local_conf_timestamp   : 109144
Status up-to-date  : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=109144 (Thu Sep 24 14:37:09 2020)
host-id=1
score=3400
vm_conf_refresh_time=109144 (Thu Sep 24 14:37:09 2020)
conf_on_shared_storage=True
maintenance=False
state=EngineUp
stopped=False
[root@thor iso]# hosted-engine --vm-shutdown
[root@thor iso]# hosted-engine --vm-status


--== Host thor.penguinpages.local (id: 1) status ==--

Host ID: 1
Host timestamp : 109464
Score  : 0
Engine status  : {"vm": "down_unexpected", "health": "bad", 
"detail": "Down", "reason": "bad vm status"}
Hostname   : thor.penguinpages.local
Local maintenance  : False
stopped: False
crc32  : 22ff75b4
conf_on_shared_storage : True
local_conf_timestamp   : 109464
Status up-to-date  : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=109464 (Thu Sep 24 14:42:29 2020)
host-id=1
score=0
vm_conf_refresh_time=109464 (Thu Sep 24 14:42:29 2020)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Fri Jan  2 01:29:44 1970
[root@thor iso]# hosted-engine --vm-start
VM exists and is Down, cleaning up and restarting
VM in WaitForLaunch
[root@thor iso]#
###

Sorry for the noise.   The restart allowed the host to join back under management.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6XVUKODX5TDRT726SNPHRZPN5ONVQYNR/


[ovirt-users] Re: ISO Repo

2020-09-23 Thread penguin pages

This is close to all I need.

I did upload the iso as an image.. but how do you attach it to the VM?   I assumed 
that issue was a 4.4 change / constraint.
It was a misinterpretation on my part that ISO use was relegated to being only an 
independent "volume".

As such.. I went back to uploading iso images.  They now show up as "images" under 
Disks.

Going to VM: select vm -> Disks -> VM Devices -> "edit virtual machine" -> Boot 
Options -> "attach CD" -> "whoot!!!"

ISO images now show as an attachable option.

Thanks.. All good.



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NKQNSPLSCUMSWORTEGPE7AH6RJWHM5LL/


[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-22 Thread penguin pages


The email client integration with this forum is a bit .. I was told that via this web 
interface I could post images (embedded ones in email get scraped out)...  but I am 
not seeing how that is done.  Seems to be text only.



1) ..."I would give the engine a 'Windows'-style fix (a.k.a. reboot)"  how 
does one restart just the oVirt-engine?

2) In the shell I now show 3 nodes, each with one brick for data, vmstore, and 
engine (plus an ISO one I am trying to make), all online and replicating.   But 
the GUI shows thor (the first server, running the engine) offline and needing to be 
reloaded.  Now the volumes show two bricks.. one online, one offline.  And there is 
no option to start / force restart.

3) I have tried a graceful reboot several times to see if the startup sequence was 
the issue.   I tore down the VLANs and bridges to make it flat: 1 x 1Gb mgmt, 1 x 
10Gb storage.   SSH between nodes is fine... a copy test was great.   I don't think 
it is the nodes.

4) To the question of "did I add the third node later":  I would attach the 
deployment guide I am building ... but I can't do that in this forum.  But this is as 
simple as I can make it: 3 generic Intel servers, each with 1 x boot drive, 1 x 512GB 
SSD, and 2 x 1TB SSD.   Wipe all data and configuration, fresh CentOS 8 minimal 
install.. set up SSH, set up basic networking... install cockpit.. run the HCI wizard 
for all three nodes.  That is all.

Trying to learn and support the concept of oVirt as a viable platform, but still 
working through how to root cause, kick the tires, and debug / recover when things go 
down .. as they will.

Help is appreciated.  The main concern I have is the gap between what the engine sees 
and what the CLI shows.  Can someone show me where to get logs?  The GUI log when I 
try to "activate" the thor server, "Status of host thor was set to NonOperational."  
"Gluster command [] failed on server ."   is very unhelpful.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LKD7LJMC4X3LG5SEZ2M64YN5UKX36RAS/


[ovirt-users] Re: oVirt - KVM QCow2 Import

2020-09-21 Thread penguin pages

I pasted an old / incorrect file path in the example above.  But here is a cleaner 
version with the error I am trying to root cause

[root@odin vmstore]# python3 
/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py --engine-url 
https://ovirte01.penguinpages.local/ --username admin@internal --password-file 
/gluster_bricks/vmstore/vmstore/.ovirt.password --cafile 
/gluster_bricks/vmstore/vmstore/.ovirte01_pki-resource.cer --sd-name vmstore 
--disk-sparse /gluster_bricks/vmstore/vmstore/ns01.qcow2
Checking image...
Image format: qcow2
Disk format: cow
Disk content type: data
Disk provisioned size: 21474836480
Disk initial size: 431751168
Disk name: ns01.qcow2
Disk backup: False
Connecting...
Creating disk...
Traceback (most recent call last):
  File "/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py", line 
262, in 
name=args.sd_name
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/services.py", line 7697, 
in add
return self._internal_add(disk, headers, query, wait)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 232, in 
_internal_add
return future.wait() if wait else future
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 55, in 
wait
return self._code(response)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 229, in 
callback
self._check_fault(response)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 132, in 
_check_fault
self._raise_error(response, body)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 118, in 
_raise_error
raise error
ovirtsdk4.NotFoundError: Fault reason is "Operation Failed". Fault detail is 
"Entity not found: vmstore". HTTP response code is 404.
[root@odin vmstore]#
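
Since the 404 literally says the engine has no storage domain named "vmstore", one 
way to check the exact names it has registered (a hedged sketch against the standard 
REST API, reusing the same password file and CA cert as the upload command above) is:

# list the storage domain names the engine knows; --sd-name must match one exactly
curl -s --cacert /gluster_bricks/vmstore/vmstore/.ovirte01_pki-resource.cer \
  -u admin@internal:"$(cat /gluster_bricks/vmstore/vmstore/.ovirt.password)" \
  https://ovirte01.penguinpages.local/ovirt-engine/api/storagedomains | grep -i '<name>'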
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EMLWBA4FSQUMPH4CTAXSADIKD46PDQQZ/


[ovirt-users] Re: oVirt - KVM QCow2 Import

2020-09-21 Thread penguin pages
Thanks for the reply.  I read this late at night and assumed the "engine url" meant 
the old KVM system .. but it means the oVirt engine.  I then translated your 
helpful notes... but am likely missing some parameter.
#
# Install import client
dnf install ovirt-imageio-client python3-ovirt-engine-sdk4

# save the oVirt engine cert on a gluster share (had to use the GUI for now as I 
could not figure out a wget method)
https://ovirte01.penguinpages.local/ovirt-engine/
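# (hedged alternative to the GUI download: the engine publishes its CA certificate at
#  the standard pki-resource URL below; -k only because we do not trust that CA yet)
curl -k -o /gluster_bricks/engine/engine/ovirte01_pki-resource.cer \
  'https://ovirte01.penguinpages.local/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'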



mv /gluster_bricks/engine/engine/ovirte01_pki-resource.cer 
/gluster_bricks/engine/engine/.ovirte01_pki-resource.cer
chmod 440 /gluster_bricks/engine/engine/.ovirte01_pki-resource.cer
chown root:kvm /gluster_bricks/engine/engine/.ovirte01_pki-resource.cer
# Put oVirt Password in a file for use
echo "blahblahblah" > /gluster_bricks/engine/engine/.ovirt.password
chmod 440 /gluster_bricks/engine/engine/.ovirt.password
chown root:kvm /gluster_bricks/engine/engine/.ovirt.password
# upload the qcow2 images to oVirt
[root@odin vmstore]# pwd
/gluster_bricks/vmstore/vmstore
[root@odin vmstore]# ls -alh
total 385M
drwxr-xr-x.   7 vdsm kvm  8.0K Sep 21 13:20 .
drwxr-xr-x.   3 root root   21 Sep 16 23:42 ..
-rw-r--r--.   1 root root0 Sep 21 13:20 example.log
drwxr-xr-x.   6 vdsm kvm    64 Sep 17 21:28 f118dcae-6162-4e9a-89e4-f30ffcfb9ccf
drw---. 262 root root 8.0K Sep 17 01:29 .glusterfs
drwxr-xr-x.   2 root root   45 Sep 17 08:15 isos
-rwxr-xr-x.   2 root root  64M Sep 17 00:08 ns01_20200910.tgz
-rw-rw.   2 qemu qemu  64M Sep 17 11:20 ns01.qcow2
-rw-rw.   2 qemu qemu  64M Sep 17 13:34 ns01_var.qcow2
-rwxr-xr-x.   2 root root  64M Sep 17 00:09 ns02_20200910.tgz
-rw-rw.   2 qemu qemu  64M Sep 17 11:21 ns02.qcow2
-rw-rw.   2 qemu qemu  64M Sep 17 13:34 ns02_var.qcow2
drwxr-xr-x.   2 root root   38 Sep 17 10:19 qemu
drwxr-xr-x.   3 root root 280K Sep 21 08:21 .shard
[root@odin vmstore]# python3 
/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py \
> --engine-url https://ovirte01.penguinpages.local/ \
> --username admin@internal \
> --password-file /gluster_bricks/engine/engine/.ovirt.password \
> --cafile /gluster_bricks/engine/engine/.ovirte01_pki-resource.cer \
> --sd-name vmstore \
> --disk-sparse \
> /gluster_bricks/vmstore/vmstore.qcow2
Checking image...
qemu-img: Could not open '/gluster_bricks/vmstore/vmstore.qcow2': Could not 
open '/gluster_bricks/vmstore/vmstore.qcow2': No such file or directory
Traceback (most recent call last):
  File "/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py", line 
210, in 
image_info = get_image_info(args.filename)
  File "/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py", line 
133, in get_image_info
["qemu-img", "info", "--output", "json", filename])
  File "/usr/lib64/python3.6/subprocess.py", line 356, in check_output
**kwargs).stdout
  File "/usr/lib64/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['qemu-img', 'info', '--output', 
'json', '/gluster_bricks/vmstore/vmstore.qcow2']' returned non-zero exit status 
1.
[root@odin vmstore]#
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LA4CAJLVHZKU4ZNXGMWUBVRP37PS5OBL/