[ovirt-users] Re: CEPH - Opinions and ROI
@kushwaha, jumping into this thread, I am also interested in your setup. Thanks.

Erick

On Thu, Oct 1, 2020 at 11:14 AM Kushwaha, Tarun Kumar <ta...@synergysystemsindia.com> wrote:
> hi
> i am running production with oVirt Ceph HCI for the last 1 year
>
> https://skyvirt.tech is running over it
>
> i can share how to set it up
>
> On Thu, 1 Oct 2020, 21:31 Philip Brown, wrote:
>> ceph through an iscsi gateway is very.. very.. slow.
>>
>> ----- Original Message -----
>> From: "Matthew Stier"
>> To: "Jeremey Wise" , "users"
>> Sent: Wednesday, September 30, 2020 10:03:34 PM
>> Subject: [ovirt-users] Re: CEPH - Opinions and ROI
>>
>> If you can't go direct, how about roundabout, with an iSCSI gateway?
>>
>> From: Jeremey Wise
>> Sent: Wednesday, September 30, 2020 11:33 PM
>> To: users
>> Subject: [ovirt-users] CEPH - Opinions and ROI
>>
>> I have for many years used gluster because.. well, 3 nodes.. and as long as I can pull a drive out, I can get my data.. and with three copies, I have a much higher chance of getting it.
>>
>> Downsides to gluster: it is slower (it's my home.. meh... and I have SSDs to avoid MTBF issues), and with VDO and thin provisioning I have not had issues.
>>
>> BUT gluster seems to be falling out of favor, especially as I move towards OCP.
>>
>> So.. CEPH. I have one SSD in each of the three servers, so I have some space to play.
>>
>> I googled around and found no clean deployment notes or guides on CEPH + oVirt.
>>
>> Comments or ideas..
>>
>> --
>> penguinpages (jeremey.w...@gmail.com)
--
Erick Perez
Quadrian Enterprises S.A. - Panama, Republica de Panama
Skype chat: eaperezh
WhatsApp IM: +507-6675-5083

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/T7WLJSUL7XCPBYUK4NIKBEWJVETTQE3J/
[ovirt-users] Re: Failed to add vm as host (CPU_TYPE_UNSUPPORTED_IN_THIS_CLUSTER_VERSION)
Please check this bug report: https://bugzilla.redhat.com/show_bug.cgi?id=1670152
Are you running an unsupported CPU? Can you post /proc/cpuinfo?

On Mon, Feb 10, 2020 at 3:18 AM Ritesh Chikatwar wrote:
> Hello,
>
> The error is: Host host moved to Non-Operational state as host CPU type is not supported in this cluster compatibility version or is not supported at all.
>
> The VM is running CentOS 8.
> I enabled nested virtualization this way:
> cat /sys/module/kvm_intel/parameters/nested
> vi /etc/modprobe.d/kvm-nested.conf  ---> contains:
>   options kvm-intel nested=1
>   options kvm-intel enable_shadow_vmcs=1
>   options kvm-intel enable_apicv=1
>   options kvm-intel ept=1
> modprobe -r kvm_intel
> modprobe -a kvm_intel
> cat /sys/module/kvm_intel/parameters/nested  --> 1
>
> Engine log: https://pastebin.com/hgrwzRhE
>
> The ovirt-engine cluster version is 4.4.
> I am running ovirt-engine on Fedora 30.
> Cluster CPU type: Intel Nehalem Family.
>
> Any thoughts will be appreciated.
>
> Rit

--
Erick Perez
Quadrian Enterprises S.A. - Panama, Republica de Panama
Skype chat: eaperezh
WhatsApp IM: +507-6675-5083
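The nested-virtualization steps quoted above only work if the CPU actually exposes hardware virtualization to the guest, which shows up as the vmx (Intel) or svm (AMD) flag in /proc/cpuinfo. A minimal sketch of that check; the sample flags string is hypothetical so the snippet runs anywhere, and on a real host you would read /proc/cpuinfo instead:

```shell
# Classify a cpuinfo "flags" line. The sample string below is hypothetical;
# on a real host use: flags=$(grep -m1 '^flags' /proc/cpuinfo)
flags="fpu vme de pse tsc msr pae vmx ept"

if echo "$flags" | grep -qw vmx; then
  echo "intel-vt"   # Intel VT-x present; kvm_intel can load
elif echo "$flags" | grep -qw svm; then
  echo "amd-v"      # AMD-V present; kvm_amd can load
else
  echo "no-virt"    # no hardware virtualization exposed to this (virtual) CPU
fi
```

If the flag is missing inside the VM, nested virtualization was not passed through by the outer hypervisor, and the host will stay Non-Operational regardless of the cluster CPU type setting.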
[ovirt-users] Re: Fencing Agent using SuperMicro IPMI Fails
Congratulations! In my case I failed to see that message in the logs quite some time ago, so I normally configure two or more hosts first and then set up fencing.

On Sun, Oct 20, 2019, 3:37 PM wrote:
> Oh wait! Now I see what you mean:
> Test failed: Failed to run fence status-check on host 'vmh.cyber-range.lan'. No other host was available to serve as proxy for the operation.
>
> The 2nd host would be the proxy... Well, bummer.
[ovirt-users] Re: Fencing Agent using SuperMicro IPMI Fails
Without a developer chiming in on this thread, I believe it makes sense that this does not work with a single host: if you are trying to test fencing (via the power management setup) with only one host, how would that host fence itself and validate the result?

I also believe the option is labeled incorrectly in oVirt: it should not be called "power management" if you are performing fencing with it. There should be an explicit button or setup option for fencing and another one for power management. This leads to confusing terms, and it is one of the things that sometimes keeps people from testing systems like oVirt and makes them stick with Hyper-V or VMware, where the options are clearly described.

Once again, it makes no sense to me to test fencing with a single host in my environment. That is why I believe the developers failed to document the fact that you need at least two hosts to test fencing. I could be totally wrong, but as I learn oVirt I have never been able to test fencing with a single host, only with two or more hosts previously configured.

On Sun, Oct 20, 2019, 3:26 PM wrote:
> No, just a single host. I didn't see any documentation that said you need two or more hosts. You could be right.
>
> I did some further testing by installing ipmitool on my laptop:
> [root@lt ~]# ipmitool -v -I lanplus -H 172.30.50.2 -U ADMIN session info all
> Password:
> session handle  : 0
> slot count      : 16
> active sessions : 1
> user id         : 2
> privilege level : ADMINISTRATOR
> session type    : IPMIv2/RMCP+
> channel number  : 0x01
>
> Maybe one of the devs could chime in and confirm whether this is expected behavior?
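The ipmitool session test above proves the BMC answers, but oVirt's power-management test actually runs a fence agent (fence_ipmilan) from a proxy host. A dry-run sketch of the equivalent manual check; the address and user are taken from the ipmitool test in the thread, while the password placeholder and the print-instead-of-execute approach are mine:

```shell
# Build the fence_ipmilan status check a proxy host would run against the BMC.
# Printed instead of executed so it can be reviewed first; replace <password>
# and run the command by hand on a second host to test fencing outside oVirt.
BMC_ADDR="172.30.50.2"   # IPMI address from the ipmitool test above
BMC_USER="ADMIN"
cmd="fence_ipmilan -P -a ${BMC_ADDR} -l ${BMC_USER} -p <password> -o status"
echo "$cmd"
```

Running that by hand on the second host (-P selects IPMI lanplus, -o status only queries power state) separates "the BMC credentials work" from "oVirt can find a proxy host", which is the failure reported above.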
[ovirt-users] Re: Fencing Agent using SuperMicro IPMI Fails
You need to have two hosts up and running, both members of the same oVirt setup. It seems that when you set up fencing, one host performs the test on the other. In my case, I have never been able to complete the fencing setup properly unless I have a minimum of two nodes working. I use Tyan servers, with the same setup/params as yours. Do you have two hosts up and running?

On Sun, Oct 20, 2019, 12:58 PM wrote:
> I did totally remove that value. It still fails.
[ovirt-users] Re: Engine died while creating VM
This post can be ignored: the main error was the NFS server dying due to excess read/write load on the SATA-based storage. Timeouts were huge, making the NFS-based storage domain unusable.

--
Erick Perez

On Mon, Aug 26, 2019 at 4:44 AM Erick Perez - Quadrian Enterprises wrote:
>
> Hi,
> I was just creating a VM on a freshly installed 2-node self-hosted
> oVirt setup with an NFS backend.
> NFS connections are v4.2 for the VM data domain.
>
> What other log is needed?
>
> Thanks in advance.
>
> [snip: quoted /var/log/messages output, identical to the original post]
[ovirt-users] Error while trying to import Xenserver OVA to Ovirt 4.3.5
Hi there,

I have a XenServer 6.2 host whose VMs we want to migrate to oVirt 4.3.5 on CentOS hosts. I exported the VMs from Xen to OVA files and placed one OVA file on one of the oVirt nodes. While using the import-OVA web interface I get this error:

VDSM hvm001.unachi.ac.pa command GetOvaInfoVDS failed: Internal JSON-RPC error: {'reason': "Attempt to call function: > with arguments: (u'/opt/UNACHI VMs2.ova',) error: not all arguments converted during string formatting"}

What would the correct procedure be?

thanks,
--
Erick Perez
[ovirt-users] Re: Engine died while creating VM
Continuing with this message: I have found that the NFS server stops responding. Which logs from the NFS server (Linux CentOS 7.6 exporting NFS v4.2) do I need?

Aug 30 10:42:01 hvm002 libvirtd: 2019-08-30 15:42:01.239+: 20196: warning : qemuDomainObjBeginJobInternal:6724 : Cannot start job (modify, none, none) for domain CentosPrueba; current job is (query, none, none) owned by (20200 remoteDispatchConnectGetAllDomainStats, 0 , 0 (flags=0x0)) for (4072s, 0s, 0s)
Aug 30 10:42:01 hvm002 libvirtd: 2019-08-30 15:42:01.239+: 20196: error : qemuDomainObjBeginJobInternal:6746 : Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchConnectGetAllDomainStats)
Aug 30 10:42:01 hvm002 vdsm[25926]: ERROR Error running VM callback#012Traceback (most recent call last):#012 File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 663, in dispatchLibvirtEvents#012v.onIOError(devAlias, reason, action)#012 File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 5074, in onIOError#012self._update_metadata()#012 File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 5129, in _update_metadata#012self._sync_metadata()#012 File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 5142, in _sync_metadata#012self._md_desc.dump(self._dom)#012 File "/usr/lib/python2.7/site-packages/vdsm/virt/metadata.py", line 515, in dump#0120)#012 File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 108, in f#012raise toe#012TimeoutError: Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchConnectGetAllDomainStats)
Aug 30 10:46:23 hvm002 kernel: nfs: server 10.10.10.2 not responding, timed out
Aug 30 10:46:23 hvm002 kernel: nfs: server 10.10.10.2 not responding, timed out
Aug 30 10:48:32 hvm002 kernel: nfs: server 10.10.10.2 not responding, timed out
Aug 30 10:48:32 hvm002 kernel: nfs: server 10.10.10.2 not responding, timed out
Aug 30 10:50:01 hvm002 systemd: Created slice User Slice of root.
Aug 30 10:50:01 hvm002 systemd: Started Session 719 of user root.
Aug 30 10:50:01 hvm002 systemd: Removed slice User Slice of root.
Aug 30 10:53:24 hvm002 kernel: nfs: server 10.10.10.2 not responding, timed out
Aug 30 10:53:24 hvm002 kernel: nfs: server 10.10.10.2 not responding, timed out
Aug 30 10:55:09 hvm002 systemd-logind: New session 720 of user root.
Aug 30 10:55:09 hvm002 systemd: Created slice User Slice of root.
Aug 30 10:55:09 hvm002 systemd: Started Session 720 of user root.
Aug 30 10:55:33 hvm002 kernel: nfs: server 10.10.10.2 not responding, timed out
Aug 30 10:55:33 hvm002 kernel: nfs: server 10.10.10.2 not responding, timed out
Aug 30 10:57:36 hvm002 kernel: nfs: server 10.10.10.2 not responding, timed out
Aug 30 10:57:36 hvm002 kernel: nfs: server 10.10.10.2 not responding, timed out
Aug 30 10:57:36 hvm002 kernel: nfs: server 10.10.10.2 not responding, timed out
Aug 30 10:57:36 hvm002 kernel: nfs: server 10.10.10.2 not responding, timed out
Aug 30 11:00:01 hvm002 systemd: Started Session 721 of user root.
Aug 30 11:01:01 hvm002 systemd: Started Session 722 of user root.
[root@hvm002 ~]#

--
Erick Perez
Soluciones Tacticas Pasivas/Activas de Inteligencia y Analitica de Datos para Gobiernos
Quadrian Enterprises S.A. - Panama, Republica de Panama
Skype chat: eaperezh
WhatsApp IM: +507-6675-5083

On Mon, Aug 26, 2019 at 4:44 AM Erick Perez - Quadrian Enterprises wrote:
>
> Hi,
> I was just creating a VM on a freshly installed 2-node self-hosted
> oVirt setup with an NFS backend.
> NFS connections are v4.2 for the VM data domain.
>
> What other log is needed?
>
> Thanks in advance.
>
> [snip: quoted /var/log/messages output, identical to the original post]
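When chasing `nfs: server ... not responding` storms like the one above, a quick first step is to count the timeout events per server address before digging into the NFS server itself. A small sketch over an embedded excerpt of the log (the here-doc holds sample lines for illustration; on the host you would pipe /var/log/messages in instead):

```shell
# Count "nfs: server X not responding" events per server address.
# The here-doc holds a sample excerpt; replace it with: cat /var/log/messages
awk '/nfs: server .* not responding/ { print $8 }' <<'EOF' | sort | uniq -c
Aug 30 10:46:23 hvm002 kernel: nfs: server 10.10.10.2 not responding, timed out
Aug 30 10:46:23 hvm002 kernel: nfs: server 10.10.10.2 not responding, timed out
Aug 30 10:48:32 hvm002 kernel: nfs: server 10.10.10.2 not responding, timed out
EOF
```

For this excerpt the pipeline prints one line per server with its event count, which makes it easy to see whether a single storage backend (here 10.10.10.2) is responsible and how the storm correlates with guest I/O in time.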
[ovirt-users] Engine died while creating VM
Hi,
I was just creating a VM on a freshly installed 2-node self-hosted oVirt setup with an NFS backend. NFS connections are v4.2 for the VM data domain.

What other log is needed?

Thanks in advance.

Here is the output of the last 300 lines of /var/log/messages:
[root@hvm002 ~]# tail /var/log/messages -n 300
Aug 26 04:34:56 hvm002 systemd: ovirt-ha-agent.service: main process exited, code=exited, status=157/n/a
Aug 26 04:34:56 hvm002 systemd: Unit ovirt-ha-agent.service entered failed state.
Aug 26 04:34:56 hvm002 systemd: ovirt-ha-agent.service failed.
Aug 26 04:35:06 hvm002 systemd: ovirt-ha-agent.service holdoff time over, scheduling restart.
Aug 26 04:35:06 hvm002 systemd: Cannot add dependency job for unit lvm2-lvmetad.socket, ignoring: Unit is masked.
Aug 26 04:35:06 hvm002 systemd: Stopped oVirt Hosted Engine High Availability Monitoring Agent.
Aug 26 04:35:06 hvm002 systemd: Started oVirt Hosted Engine High Availability Monitoring Agent.
Aug 26 04:35:07 hvm002 journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Failed to start necessary monitors
Aug 26 04:35:07 hvm002 journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceback (most recent call last):#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 131, in _run_agent#012return action(he)#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 55, in action_proper#012return he.start_monitoring()#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 432, in start_monitoring#012self._initialize_broker()#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 556, in _initialize_broker#012m.get('options', {}))#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 86, in start_monitor#012).format(t=type, o=options, e=e)#012RequestError: brokerlink - failed to start monitor via ovirt-ha-broker: [Errno 2] No such file or directory, [monitor: 'network', options: {'tcp_t_address': '', 'network_test': 'dns', 'tcp_t_port': '', 'addr': '172.21.48.1'}]
Aug 26 04:35:07 hvm002 journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Trying to restart agent
Aug 26 04:35:07 hvm002 systemd: ovirt-ha-agent.service: main process exited, code=exited, status=157/n/a
Aug 26 04:35:07 hvm002 systemd: Unit ovirt-ha-agent.service entered failed state.
Aug 26 04:35:07 hvm002 systemd: ovirt-ha-agent.service failed.
Aug 26 04:35:08 hvm002 vdsm[7369]: ERROR failed to retrieve Hosted Engine HA score '[Errno 2] No such file or directory'Is the Hosted Engine setup finished?
Aug 26 04:35:15 hvm002 systemd: Cannot add dependency job for unit lvm2-lvmetad.socket, ignoring: Unit is masked.
Aug 26 04:35:15 hvm002 systemd: Starting Cockpit Web Service...
Aug 26 04:35:15 hvm002 systemd: Started Cockpit Web Service.
Aug 26 04:35:15 hvm002 cockpit-ws: Using certificate: /etc/cockpit/ws-certs.d/0-self-signed.cert
Aug 26 04:35:15 hvm002 cockpit-ws: couldn't read from connection: Peer sent fatal TLS alert: Certificate is bad
Aug 26 04:35:17 hvm002 systemd: ovirt-ha-agent.service holdoff time over, scheduling restart.
Aug 26 04:35:17 hvm002 systemd: Cannot add dependency job for unit lvm2-lvmetad.socket, ignoring: Unit is masked.
Aug 26 04:35:17 hvm002 systemd: Stopped oVirt Hosted Engine High Availability Monitoring Agent.
Aug 26 04:35:17 hvm002 systemd: Started oVirt Hosted Engine High Availability Monitoring Agent.
Aug 26 04:35:17 hvm002 journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Failed to start necessary monitors
Aug 26 04:35:17 hvm002 journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceback (most recent call last):#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 131, in _run_agent#012return action(he)#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 55, in action_proper#012return he.start_monitoring()#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 432, in start_monitoring#012self._initialize_broker()#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 556, in _initialize_broker#012m.get('options', {}))#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 86, in start_monitor#012).format(t=type, o=options, e=e)#012RequestError: brokerlink - failed to start monitor via ovirt-ha-broker: [Errno 2] No such file or directory, [monitor: 'network', options: {'tcp_t_address': '', 'network_test': 'dns', 'tcp_t_port': '', 'addr': '172.21.48.1'}]
Aug 26 04:35:17 hvm002 journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Trying to restart agent
Aug 26 04:35:17 hvm002 systemd: ovirt-ha-agent.servi
[ovirt-users] Re: Moving ovirt engine disk to another storage volume
I answered way too fast; so sorry to all. Just update the profile to a "personal" profile.

--
Erick Perez
Soluciones Tacticas Pasivas/Activas de Inteligencia y Analitica de Datos para Gobiernos
Quadrian Enterprises S.A. - Panama, Republica de Panama
Skype chat: eaperezh
WhatsApp IM: +507-6675-5083

On Sat, Aug 24, 2019 at 12:58 PM Erick Perez - Quadrian Enterprises wrote:
>
> It seems that's not true:
> "You're attempting to access content that requires a Red Hat login with a complete profile."
> It seems developer profiles are "not" complete profiles.
>
> On Sat, Aug 24, 2019 at 11:44 AM Scott Worthington wrote:
> >
> > Subscriptions are free; please join the developer program with Red Hat (also free) to see the article.
> >
> > On Sat, Aug 24, 2019, 12:11 PM Erick Perez - Quadrian Enterprises wrote:
> >>
> >> I found this article link from Red Hat. Unfortunately it needs a subscription:
> >> https://access.redhat.com/solutions/2998291
> >>
> >> On Sat, Aug 24, 2019 at 10:43 AM Erick Perez - Quadrian Enterprises wrote:
> >> >
> >> > Good morning,
> >> >
> >> > I am running oVirt 4.3.5 on CentOS 7.6 with one virt node and an NFS storage node. I did the self-hosted engine setup and plan to add a second virt host in a few days.
> >> >
> >> > I need to do heavy maintenance on the storage node (VDO and mdadm things) and would like to know how (or a link to an article on how) I can move the oVirt engine disk to another storage.
> >> >
> >> > Currently the NFS storage has two volumes (volA, volB) and the physical host has spare space too. Virtual machines are in volB and the engine is in volA.
> >> >
> >> > I would like to move the engine disk from volA to volB or to local storage.
> >> >
> >> > BTW, I am not sure if I should say "move the engine" or "move the hosted_storage domain".
> >> >
> >> > Thanks in advance.
> >> >
> >> > --
> >> > Erick Perez
[ovirt-users] Re: Moving ovirt engine disk to another storage volume
It seems that's not true:
"You're attempting to access content that requires a Red Hat login with a complete profile."
It seems developer profiles are "not" complete profiles.

--
Erick Perez
Soluciones Tacticas Pasivas/Activas de Inteligencia y Analitica de Datos para Gobiernos
Quadrian Enterprises S.A. - Panama, Republica de Panama
Skype chat: eaperezh
WhatsApp IM: +507-6675-5083

On Sat, Aug 24, 2019 at 11:44 AM Scott Worthington wrote:
>
> Subscriptions are free; please join the developer program with Red Hat (also free) to see the article.
>
> On Sat, Aug 24, 2019, 12:11 PM Erick Perez - Quadrian Enterprises wrote:
>>
>> I found this article link from Red Hat. Unfortunately it needs a subscription:
>> https://access.redhat.com/solutions/2998291
>>
>> On Sat, Aug 24, 2019 at 10:43 AM Erick Perez - Quadrian Enterprises wrote:
>> >
>> > Good morning,
>> >
>> > I am running oVirt 4.3.5 on CentOS 7.6 with one virt node and an NFS storage node. I did the self-hosted engine setup and plan to add a second virt host in a few days.
>> >
>> > I need to do heavy maintenance on the storage node (VDO and mdadm things) and would like to know how (or a link to an article on how) I can move the oVirt engine disk to another storage.
>> >
>> > Currently the NFS storage has two volumes (volA, volB) and the physical host has spare space too. Virtual machines are in volB and the engine is in volA.
>> >
>> > I would like to move the engine disk from volA to volB or to local storage.
>> >
>> > BTW, I am not sure if I should say "move the engine" or "move the hosted_storage domain".
>> >
>> > Thanks in advance.
>> >
>> > --
>> > Erick Perez
[ovirt-users] Re: Moving ovirt engine disk to another storage volume
I found this article link from Red Hat. Unfortunately it needs a subscription:
https://access.redhat.com/solutions/2998291

--
Erick Perez
Soluciones Tacticas Pasivas/Activas de Inteligencia y Analitica de Datos para Gobiernos
Quadrian Enterprises S.A. - Panama, Republica de Panama
Skype chat: eaperezh
WhatsApp IM: +507-6675-5083

On Sat, Aug 24, 2019 at 10:43 AM Erick Perez - Quadrian Enterprises wrote:
>
> Good morning,
>
> I am running oVirt 4.3.5 on CentOS 7.6 with one virt node and an NFS storage node. I did the self-hosted engine setup and plan to add a second virt host in a few days.
>
> I need to do heavy maintenance on the storage node (VDO and mdadm things) and would like to know how (or a link to an article on how) I can move the oVirt engine disk to another storage.
>
> Currently the NFS storage has two volumes (volA, volB) and the physical host has spare space too. Virtual machines are in volB and the engine is in volA.
>
> I would like to move the engine disk from volA to volB or to local storage.
>
> BTW, I am not sure if I should say "move the engine" or "move the hosted_storage domain".
>
> Thanks in advance.
>
> --
> Erick Perez
[ovirt-users] Moving ovirt engine disk to another storage volume
Good morning,

I am running oVirt 4.3.5 on CentOS 7.6 with one virt node and an NFS storage node. I did the self-hosted engine setup and plan to add a second virt host in a few days.

I need to do heavy maintenance on the storage node (VDO and mdadm things) and would like to know how (or a link to an article on how) I can move the oVirt engine disk to another storage.

Currently the NFS storage has two volumes (volA, volB) and the physical host has spare space too. Virtual machines are in volB and the engine is in volA.

I would like to move the engine disk from volA to volB or to local storage.

BTW, I am not sure if I should say "move the engine" or "move the hosted_storage domain".

Thanks in advance.

--
Erick Perez
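For context on the question above: the hosted_storage domain cannot simply be live-migrated between storage domains in oVirt 4.3; the usual route is to back up the engine and redeploy it onto the new storage. A dry-run sketch of that sequence; the command names are real oVirt tools, but the file names are placeholders and the exact flow is an assumption that should be verified against the hosted-engine backup/restore documentation before running anything:

```shell
# Outline of moving a self-hosted engine to new storage. Printed, not executed,
# so the sequence can be reviewed; run each step by hand after reading the docs.
# engine.bak is a placeholder file name.
steps="
hosted-engine --set-maintenance --mode=global
engine-backup --mode=backup --file=engine.bak --log=backup.log
hosted-engine --deploy --restore-from-file=engine.bak
"
echo "$steps"
```

The backup is taken inside the engine VM, and the redeploy step is then pointed at the new storage domain (volB or local storage, in the setup described above) when it prompts for storage.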