[ovirt-users] Re: High performance VM cannot migrate due to TSC frequency

2020-12-17 Thread Gianluca Cecchi
On Thu, Dec 17, 2020 at 5:30 PM Milan Zamazal  wrote:

> Gianluca Cecchi  writes:
>
> > On Wed, Dec 16, 2020 at 8:59 PM Milan Zamazal 
> wrote:
> >
> >>
> >> If the checkbox is unchecked, the migration shouldn't be prevented.
> >> I think the TSC frequency shouldn't be written to the VM domain XML in
> >> such a case and then there should be no restrictions (and no guarantees)
> >> on the frequency.
> >>
> >> Do you mean you can't migrate even with the checkbox unchecked?  If so,
> >> what error message do you get in such a case?
> >>
> >
> > Yes, exactly.
> > I powered off the VM and then disabled the check and then powered on the
> > VM again; it is running on host ov301. And I have two other hosts: ov300
> > and ov200.
> > From the web admin GUI, if I select the VM and click the "migrate" button,
> > I cannot select the destination host, and inside the box there are the
> > words "No available host to migrate VMs to". Going to engine.log, as soon
> > as I click the "migrate" button I see these new lines:
>
> I see, I can reproduce it.  It looks like a bug in Engine.  While the VM
> is correctly started without TSC frequency set, the migration filter in
> Engine apparently still applies.
>
> I'll add a note about it to the TSC migration bug.
>
> Regards,
> Milan
>
>
Ok, thanks.
In the meantime, do I have any sort of workaround to be able to migrate the
VM? E.g. I could set the VM as non-High Performance, or is there a better
option?

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SEBFTUV6W5EPEJVEZPZ6QT26DXCHD67W/


[ovirt-users] Re: fence_xvm for testing

2020-12-17 Thread Alex K
On Thu, Dec 17, 2020, 14:43 Strahil Nikolov  wrote:

> Sadly no. I have used it on test Clusters with KVM VMs.
>
You mean clusters managed with pacemaker?

>
> If you manage to use oVirt as a nested setup, fencing works quite well
> with ovirt.
>
I have set up nested oVirt 4.3 on top of a KVM host running CentOS 8 Stream.

>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Thursday, 17 December 2020 at 11:16:47 GMT+2, Alex K <
> rightkickt...@gmail.com> wrote:
>
>
>
>
>
> Hi Strahil,
>
> Do you have a working setup with fence_xvm for ovirt 4.3?
>
> On Mon, Dec 14, 2020 at 8:59 PM Strahil Nikolov 
> wrote:
> > Fence_xvm requires a key is deployed on both the Host and the VMs in
> order to succeed. What is happening when you use the cli on any of the VMs ?
> > Also, the VMs require an open tcp port to receive the necessary output
> of each request.
> >
> > Best Regards,
> > Strahil Nikolov
> >
> >
> >
> >
> >
> >
> > On Monday, 14 December 2020 at 10:57:11 GMT+2, Alex K <
> > rightkickt...@gmail.com> wrote:
> >
> >
> >
> >
> >
> > Hi friends,
> >
> > I was wondering what is needed to setup fence_xvm in order to use for
> power management in virtual nested environments for testing purposes.
> >
> > I have followed the following steps:
> > https://github.com/rightkick/Notes/blob/master/Ovirt-fence_xmv.md
> >
> > I tried also engine-config -s
> CustomFenceAgentMapping="fence_xvm=_fence_xvm"
> > From command line all seems fine and I can get the status of the host
> VMs, but I was not able to find what is needed to set this up at engine UI:
> >
> >
> > At username and pass I just filled dummy values as they should not be
> needed for fence_xvm.
> > I always get an error at GUI while engine logs give:
> >
> >
> > 2020-12-14 08:53:48,343Z WARN
>  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID:
> VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host
> kvm0.lab.local.Internal JSON-RPC error
> > 2020-12-14 08:53:48,343Z INFO
>  [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (default
> task-4) [07c1d540-6d8d-419c-affb-181495d75759] FINISH, FenceVdsVDSCommand,
> return: FenceOperationResult:{status='ERROR', powerStatus='UNKNOWN',
> message='Internal JSON-RPC error'}, log id: 2437b13c
> > 2020-12-14 08:53:48,400Z WARN
>  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID:
> FENCE_OPERATION_USING_AGENT_AND_PROXY_FAILED(9,021), Execution of power
> management status on Host kvm0.lab.local using Proxy Host kvm1.lab.local
> and Fence Agent fence_xvm:225.0.0.12 failed.
> > 2020-12-14 08:53:48,400Z WARN
>  [org.ovirt.engine.core.bll.pm.FenceAgentExecutor] (default task-4)
> [07c1d540-6d8d-419c-affb-181495d75759] Fence action failed using proxy host
> 'kvm1.lab.local', trying another proxy
> > 2020-12-14 08:53:48,485Z ERROR 
> > [org.ovirt.engine.core.bll.pm.FenceProxyLocator]
> (default task-4) [07c1d540-6d8d-419c-affb-181495d75759] Can not run fence
> action on host 'kvm0.lab.local', no suitable proxy host was found.
> > 2020-12-14 08:53:48,486Z WARN
>  [org.ovirt.engine.core.bll.pm.FenceAgentExecutor] (default task-4)
> [07c1d540-6d8d-419c-affb-181495d75759] Failed to find another proxy to
> re-run failed fence action, retrying with the same proxy 'kvm1.lab.local'
> > 2020-12-14 08:53:48,582Z WARN
>  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID:
> VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host
> kvm0.lab.local.Internal JSON-RPC error
> > 2020-12-14 08:53:48,582Z INFO
>  [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (default
> task-4) [07c1d540-6d8d-419c-affb-181495d75759] FINISH, FenceVdsVDSCommand,
> return: FenceOperationResult:{status='ERROR', powerStatus='UNKNOWN',
> message='Internal JSON-RPC error'}, log id: 8607bc9
> > 2020-12-14 08:53:48,637Z WARN
>  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID:
> FENCE_OPERATION_USING_AGENT_AND_PROXY_FAILED(9,021), Execution of power
> management status on Host kvm0.lab.local using Proxy Host kvm1.lab.local
> and Fence Agent fence_xvm:225.0.0.12 failed.
> >
> >
> > Any idea?
> >
> > Thanx,
> > Alex
> >
> >
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/B7IHC4MYY5LJFJMEJMLRRFSTMD7IK23I/
> >
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org

[ovirt-users] [ANN] oVirt 4.4.4 Sixth Release Candidate is now available for testing

2020-12-17 Thread Lev Veyde
oVirt 4.4.4 Sixth Release Candidate is now available for testing

The oVirt Project is pleased to announce the availability of oVirt 4.4.4
Sixth Release Candidate for testing, as of December 17th, 2020.

This update is the fourth in a series of stabilization updates to the 4.4
series.
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1

Note: Upgrading from 4.4.2 GA or later should not require re-doing these
steps, if already performed while upgrading from 4.4.1 to 4.4.2 GA. These
are only required to be done once.

Due to Bug 1837864  -
Host enter emergency mode after upgrading to latest build

If you have your root file system on a multipath device on your hosts you
should be aware that after upgrading from 4.4.1 to 4.4.4 you may get your
host entering emergency mode.

In order to prevent this be sure to upgrade oVirt Engine first, then on
your hosts:

   1. Remove the current lvm filter while still on 4.4.1, or in emergency mode
      (if rebooted).
   2. Reboot.
   3. Upgrade to 4.4.4 (redeploy in case of already being on 4.4.4).
   4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in
      place.
   5. Only if not using oVirt Node: run "dracut --force --add multipath" to
      rebuild the initramfs with the correct filter configuration.
   6. Reboot (a consolidated command sketch of steps 1-6 follows below).
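
As a rough illustration only (not a substitute for the numbered steps or the
linked documentation), the host-side commands could look like this; the sed
line assumes the old filter lives in /etc/lvm/lvm.conf, which is where vdsm
normally writes it:

   # step 1: comment out the old lvm filter (assumed to be in /etc/lvm/lvm.conf)
   sed -i.bak 's/^\( *filter = \)/# \1/' /etc/lvm/lvm.conf
   # step 2: reboot
   reboot
   # step 3: upgrade the host to 4.4.4 (or redeploy if already on 4.4.4), then:
   # step 4: install and confirm the new filter
   vdsm-tool config-lvm-filter -y
   # step 5: only if not using oVirt Node, rebuild the initramfs
   dracut --force --add multipath
   # step 6: reboot again
   reboot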

Documentation

   - If you want to try oVirt as quickly as possible, follow the instructions
     on the Download page.
   - For complete installation, administration, and usage instructions, see
     the oVirt Documentation.
   - For upgrading from a previous version, see the oVirt Upgrade Guide.
   - For a general overview of oVirt, see About oVirt.

Important notes before you try it

Please note this is a pre-release build.

The oVirt Project makes no guarantees as to its suitability or usefulness.

This pre-release must not be used in production.
Installation instructions

For installation instructions and additional information please refer to:

https://ovirt.org/documentation/

This release is available now on x86_64 architecture for:

* Red Hat Enterprise Linux 8.3 or newer

* CentOS Linux (or similar) 8.3 or newer

This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:

* Red Hat Enterprise Linux 8.3 or newer

* CentOS Linux (or similar) 8.3 or newer

* oVirt Node 4.4 based on CentOS Linux 8.3 (available for x86_64 only)

See the release notes [1] for installation instructions and a list of new
features and bugs fixed.

Notes:

- oVirt Appliance is already available for CentOS Linux 8

- oVirt Node NG is already available for CentOS Linux 8

Additional Resources:

* Read more about the oVirt 4.4.4 release highlights:
http://www.ovirt.org/release/4.4.4/

* Get more oVirt project updates on Twitter: https://twitter.com/ovirt

* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/


[1] http://www.ovirt.org/release/4.4.4/

[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/

-- 

Lev Veyde

Senior Software Engineer, RHCE | RHCVA | MCITP

Red Hat Israel



l...@redhat.com | lve...@redhat.com

TRIED. TESTED. TRUSTED. 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ADVD2UQO7YQMSYKPFPUOVM7BWRPAPFHR/


[ovirt-users] Re: High performance VM cannot migrate due to TSC frequency

2020-12-17 Thread Milan Zamazal
Gianluca Cecchi  writes:

> On Wed, Dec 16, 2020 at 8:59 PM Milan Zamazal  wrote:
>
>>
>> If the checkbox is unchecked, the migration shouldn't be prevented.
>> I think the TSC frequency shouldn't be written to the VM domain XML in
>> such a case and then there should be no restrictions (and no guarantees)
>> on the frequency.
>>
>> Do you mean you can't migrate even with the checkbox unchecked?  If so,
>> what error message do you get in such a case?
>>
>
> Yes, exactly.
> I powered off the VM and then disabled the check and then powered on the VM
> again; it is running on host ov301. And I have two other hosts: ov300 and
> ov200.
> From the web admin GUI, if I select the VM and click the "migrate" button, I
> cannot select the destination host, and inside the box there are the words
> "No available host to migrate VMs to". Going to engine.log, as soon as I
> click the "migrate" button I see these new lines:

I see, I can reproduce it.  It looks like a bug in Engine.  While the VM
is correctly started without TSC frequency set, the migration filter in
Engine apparently still applies.

I'll add a note about it to the TSC migration bug.
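
As a side note for anyone hitting the same thing: whether the TSC frequency
really was left out of the domain XML can be checked on the host the VM is
running on, e.g. (with "myvm" as a placeholder VM name):

   virsh -r dumpxml myvm | grep -i tsc
   # no <timer name='tsc' frequency='...'/> line in the output means no TSC
   # frequency was written to the domain XML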

Regards,
Milan

> 2020-12-16 23:13:27,949+01 INFO
>  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-41)
> [308a29e2-2c4f-45fe-bdce-b032b36d4656] Candidate host 'ov300'
> ('07b979fb-4779-4477-89f2-6a96093c06f7') was filtered out by
> 'VAR__FILTERTYPE__INTERNAL' filter 'Migration-Tsc-Frequency' (correlation
> id: null)
> 2020-12-16 23:13:27,949+01 INFO
>  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-41)
> [308a29e2-2c4f-45fe-bdce-b032b36d4656] Candidate host 'ov200'
> ('949d0087-2c24-4759-8427-f9eade1dd2cc') was filtered out by
> 'VAR__FILTERTYPE__INTERNAL' filter 'Migration-Tsc-Frequency' (correlation
> id: null)
> 2020-12-16 23:13:28,032+01 INFO
>  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-38)
> [5837b695-c70d-4f45-a452-2c7c1b4ea69b] Candidate host 'ov300'
> ('07b979fb-4779-4477-89f2-6a96093c06f7') was filtered out by
> 'VAR__FILTERTYPE__INTERNAL' filter 'Migration-Tsc-Frequency' (correlation
> id: null)
> 2020-12-16 23:13:28,032+01 INFO
>  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-38)
> [5837b695-c70d-4f45-a452-2c7c1b4ea69b] Candidate host 'ov200'
> ('949d0087-2c24-4759-8427-f9eade1dd2cc') was filtered out by
> 'VAR__FILTERTYPE__INTERNAL' filter 'Migration-Tsc-Frequency' (correlation
> id: null)
>
> On all three nodes I have this kind of running kernel and package versions:
>
> [root@ov300 vdsm]# rpm -q qemu-kvm libvirt-daemon systemd
> qemu-kvm-4.2.0-34.module_el8.3.0+555+a55c8938.x86_64
> libvirt-daemon-6.0.0-28.module_el8.3.0+555+a55c8938.x86_64
> systemd-239-41.el8_3.x86_64
>
> and
> [root@ov300 vdsm]# uname -r
> 4.18.0-240.1.1.el8_3.x86_64
> [root@ov300 vdsm]#
>
> Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WX4PA75LAOXIN6PKYDTZ5UZ4OMZICXEY/


[ovirt-users] Re: Adding host to hosted engine fails

2020-12-17 Thread Aries Ahito
I'm using oVirt 4.4.
Of course glusterd is running, since I already managed to deploy the
hosted-engine on node1.
I'll attach the vdsm logs again.

On Thu, Dec 17, 2020 at 5:29 PM Ritesh Chikatwar 
wrote:

> Hello,
>
>
> Which version of ovirt are you using?
> Can you check whether the gluster service is running, because I see the
> error "Could not connect to storageServer".
> Also, please share the engine log as well, and a few more lines after the
> error occurred in vdsm.
>
> Ritesh
>
> On Thu, Dec 17, 2020 at 12:00 PM Ariez Ahito 
> wrote:
>
>> here is our setup
>> stand alone glusterfs storage replica3
>> 10.33.50.33
>> 10.33.50.34
>> 10.33.50.35
>>
>> we deployed hosted-engine and managed to connect to our glusterfs storage
>>
>> now we are having issues adding hosts
>>
>> here is the logs
>> dsm.gluster.exception.GlusterVolumesListFailedException: Volume list
>> failed: rc=1 out=() err=['Command {self.cmd} failed with rc={self.rc}
>> out={self.out!r} err={self.err!r}']
>> 2020-12-17 14:22:27,106+0800 INFO  (jsonrpc/4)
>> [storage.StorageDomainCache] Invalidating storage domain cache (sdc:74)
>> 2020-12-17 14:22:27,106+0800 INFO  (jsonrpc/4) [vdsm.api] FINISH
>> connectStorageServer return={'statuslist': [{'id':
>> 'afa2d41a-d817-4f4a-bd35-5ffedd1fa65b', 'status': 4149}]}
>> from=:::10.33.0.10,50058, flow_id=6170eaa3,
>> task_id=f00d28fa-077f-403a-8024-9f9b533bccb5 (api:54)
>> 2020-12-17 14:22:27,107+0800 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer]
>> RPC call StoragePool.connectStorageServer took more than 1.00 seconds to
>> succeed: 3.34 (__init__:316)
>> 2020-12-17 14:22:27,213+0800 INFO  (jsonrpc/6) [vdsm.api] START
>> connectStorageServer(domType=7,
>> spUUID='1abdb9e4-3f85-11eb-9994-00163e4e4935', conList=[{'password':
>> '', 'vfs_type': 'glusterfs', 'port': '', 'mnt_options':
>> 'backup-volfile-servers=gluster3:gluster4', 'iqn': '', 'connection':
>> 'gluster3:/VOL2', 'ipv6_enabled': 'false', 'id':
>> '2fb6989d-b26b-42e7-af35-4e4cf718eebf', 'user': '', 'tpgt': '1'},
>> {'password': '', 'vfs_type': 'glusterfs', 'port': '',
>> 'mnt_options': 'backup-volfile-servers=gluster3:gluster4', 'iqn': '',
>> 'connection': 'gluster3:/VOL3', 'ipv6_enabled': 'false', 'id':
>> 'b7839bcd-c0e3-422c-8f2c-47351d24b6de', 'user': '', 'tpgt': '1'}],
>> options=None) from=:::10.33.0.10,50058, flow_id=6170eaa3,
>> task_id=cfeb3401-54b9-4756-b306-88d4275c0690 (api:48)
>> 2020-12-17 14:22:29,058+0800 INFO  (periodic/1) [vdsm.api] START
>> repoStats(domains=()) from=internal,
>> task_id=e9648d47-2ffb-4387-9a72-af41ab51adf7 (api:48)
>> 2020-12-17 14:22:29,058+0800 INFO  (periodic/1) [vdsm.api] FINISH
>> repoStats return={} from=internal,
>> task_id=e9648d47-2ffb-4387-9a72-af41ab51adf7 (api:54)
>> 2020-12-17 14:22:30,512+0800 ERROR (jsonrpc/6) [storage.HSM] Could not
>> connect to storageServer (hsm:2444)
>>
>>
>> in the events  tab
>> The error message for connection gluster3:/ISO returned by VDSM was:
>> Failed to fetch Gluster Volume List
>> The error message for connection gluster3:/VOL1 returned by VDSM was:
>> Failed to fetch Gluster Volume List
>>
>> thanks
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/AJZNXHOIFHWNDJJ7INI3VNLT46TB3EAW/
>>
>

-- 
Aristotle D. Ahito
--
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6JHSPVTJ7YDWQF4PZMSRPBCD6HH4AGDG/


[ovirt-users] Re: Adding host to hosted engine fails

2020-12-17 Thread Strahil Nikolov via Users
I would start simpler and mount the volume via FUSE on any of the oVirt
hosts:

mount -t glusterfs <gluster_server>:/volume /mnt

Then browse /mnt and verify that you can read and write as the vdsm user:

sudo -u vdsm touch /mnt/testfile
sudo -u vdsm mkdir /mnt/testdir
sudo -u vdsm touch /mnt/testdir/testfile
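
A consolidated version of the same check, using gluster3:/VOL1 from the quoted
vdsm logs below purely as an example volume:

   mount -t glusterfs gluster3:/VOL1 /mnt          # example server:/volume from the logs
   sudo -u vdsm touch /mnt/testfile && echo "vdsm can write"
   sudo -u vdsm mkdir /mnt/testdir
   sudo -u vdsm touch /mnt/testdir/testfile
   sudo -u vdsm rm -rf /mnt/testfile /mnt/testdir  # clean up the test files
   umount /mnt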


Best Regards,
Strahil Nikolov






On Thursday, 17 December 2020 at 11:45:45 GMT+2, Ritesh Chikatwar 
 wrote: 





Hello,


Which version of ovirt are you using?
Can you check whether the gluster service is running, because I see the error 
"Could not connect to storageServer".
Also, please share the engine log as well, and a few more lines after the error 
occurred in vdsm.

Ritesh

On Thu, Dec 17, 2020 at 12:00 PM Ariez Ahito  wrote:
> here is our setup
> stand alone glusterfs storage replica3
> 10.33.50.33
> 10.33.50.34
> 10.33.50.35
> 
> we deployed hosted-engine and managed to connect to our glusterfs storage
> 
> now we are having issues adding hosts 
> 
> here is the logs
> dsm.gluster.exception.GlusterVolumesListFailedException: Volume list failed: 
> rc=1 out=() err=['Command {self.cmd} failed with rc={self.rc} 
> out={self.out!r} err={self.err!r}']
> 2020-12-17 14:22:27,106+0800 INFO  (jsonrpc/4) [storage.StorageDomainCache] 
> Invalidating storage domain cache (sdc:74)
> 2020-12-17 14:22:27,106+0800 INFO  (jsonrpc/4) [vdsm.api] FINISH 
> connectStorageServer return={'statuslist': [{'id': 
> 'afa2d41a-d817-4f4a-bd35-5ffedd1fa65b', 'status': 4149}]} 
> from=:::10.33.0.10,50058, flow_id=6170eaa3, 
> task_id=f00d28fa-077f-403a-8024-9f9b533bccb5 (api:54)
> 2020-12-17 14:22:27,107+0800 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC 
> call StoragePool.connectStorageServer took more than 1.00 seconds to succeed: 
> 3.34 (__init__:316)
> 2020-12-17 14:22:27,213+0800 INFO  (jsonrpc/6) [vdsm.api] START 
> connectStorageServer(domType=7, 
> spUUID='1abdb9e4-3f85-11eb-9994-00163e4e4935', conList=[{'password': 
> '', 'vfs_type': 'glusterfs', 'port': '', 'mnt_options': 
> 'backup-volfile-servers=gluster3:gluster4', 'iqn': '', 'connection': 
> 'gluster3:/VOL2', 'ipv6_enabled': 'false', 'id': 
> '2fb6989d-b26b-42e7-af35-4e4cf718eebf', 'user': '', 'tpgt': '1'}, 
> {'password': '', 'vfs_type': 'glusterfs', 'port': '', 'mnt_options': 
> 'backup-volfile-servers=gluster3:gluster4', 'iqn': '', 'connection': 
> 'gluster3:/VOL3', 'ipv6_enabled': 'false', 'id': 
> 'b7839bcd-c0e3-422c-8f2c-47351d24b6de', 'user': '', 'tpgt': '1'}], 
> options=None) from=:::10.33.0.10,50058, flow_id=6170eaa3, 
> task_id=cfeb3401-54b9-4756-b306-88d4275c0690 (api:48)
> 2020-12-17 14:22:29,058+0800 INFO  (periodic/1) [vdsm.api] START 
> repoStats(domains=()) from=internal, 
> task_id=e9648d47-2ffb-4387-9a72-af41ab51adf7 (api:48)
> 2020-12-17 14:22:29,058+0800 INFO  (periodic/1) [vdsm.api] FINISH repoStats 
> return={} from=internal, task_id=e9648d47-2ffb-4387-9a72-af41ab51adf7 (api:54)
> 2020-12-17 14:22:30,512+0800 ERROR (jsonrpc/6) [storage.HSM] Could not 
> connect to storageServer (hsm:2444)
> 
> 
> in the events  tab
> The error message for connection gluster3:/ISO returned by VDSM was: Failed 
> to fetch Gluster Volume List
> The error message for connection gluster3:/VOL1 returned by VDSM was: Failed 
> to fetch Gluster Volume List
> 
> thanks
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/AJZNXHOIFHWNDJJ7INI3VNLT46TB3EAW/
> 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UTFH7VWFPBGSQIZGJQXXLBODXTBPPJT2/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XPUOIZL5K25CJQRUT5ICJONFY6IQDT7U/


[ovirt-users] Re: fence_xvm for testing

2020-12-17 Thread Strahil Nikolov via Users
Sadly no. I have used it on test Clusters with KVM VMs.

If you manage to use oVirt as a nested setup, fencing works quite well with 
ovirt.

Best Regards,
Strahil Nikolov
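
For reference, the two requirements mentioned in the quoted message further
down (a shared key and an open TCP port on the VMs) could be checked on one of
the nested oVirt hosts roughly as follows; 225.0.0.12 is the multicast address
from the logs in this thread, while the key path and TCP port 1229 are only
the usual fence_xvm defaults and may differ in your setup:

   ls -l /etc/cluster/fence_xvm.key      # same key file as on the physical KVM host
   firewall-cmd --add-port=1229/tcp --permanent && firewall-cmd --reload
   fence_xvm -o list -a 225.0.0.12 -k /etc/cluster/fence_xvm.key   # should list the VMs known to fence_virtd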






On Thursday, 17 December 2020 at 11:16:47 GMT+2, Alex K 
 wrote: 





Hi Strahil, 

Do you have a working setup with fence_xvm for ovirt 4.3?

On Mon, Dec 14, 2020 at 8:59 PM Strahil Nikolov  wrote:
> Fence_xvm requires a key is deployed on both the Host and the VMs in order to 
> succeed. What is happening when you use the cli on any of the VMs ?
> Also, the VMs require an open tcp port to receive the necessary output of 
> each request.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> On Monday, 14 December 2020 at 10:57:11 GMT+2, Alex K 
>  wrote: 
> 
> 
> 
> 
> 
> Hi friends, 
> 
> I was wondering what is needed to setup fence_xvm in order to use for power 
> management in virtual nested environments for testing purposes. 
> 
> I have followed the following steps: 
> https://github.com/rightkick/Notes/blob/master/Ovirt-fence_xmv.md
> 
> I tried also engine-config -s CustomFenceAgentMapping="fence_xvm=_fence_xvm"
> From command line all seems fine and I can get the status of the host VMs, 
> but I was not able to find what is needed to set this up at engine UI: 
> 
> 
> At username and pass I just filled dummy values as they should not be needed 
> for fence_xvm. 
> I always get an error at GUI while engine logs give: 
> 
> 
> 2020-12-14 08:53:48,343Z WARN  
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (default task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID: 
> VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host 
> kvm0.lab.local.Internal JSON-RPC error
> 2020-12-14 08:53:48,343Z INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (default 
> task-4) [07c1d540-6d8d-419c-affb-181495d75759] FINISH, FenceVdsVDSCommand, 
> return: FenceOperationResult:{status='ERROR', powerStatus='UNKNOWN', 
> message='Internal JSON-RPC error'}, log id: 2437b13c
> 2020-12-14 08:53:48,400Z WARN  
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (default task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID: 
> FENCE_OPERATION_USING_AGENT_AND_PROXY_FAILED(9,021), Execution of power 
> management status on Host kvm0.lab.local using Proxy Host kvm1.lab.local and 
> Fence Agent fence_xvm:225.0.0.12 failed.
> 2020-12-14 08:53:48,400Z WARN  
> [org.ovirt.engine.core.bll.pm.FenceAgentExecutor] (default task-4) 
> [07c1d540-6d8d-419c-affb-181495d75759] Fence action failed using proxy host 
> 'kvm1.lab.local', trying another proxy
> 2020-12-14 08:53:48,485Z ERROR 
> [org.ovirt.engine.core.bll.pm.FenceProxyLocator] (default task-4) 
> [07c1d540-6d8d-419c-affb-181495d75759] Can not run fence action on host 
> 'kvm0.lab.local', no suitable proxy host was found.
> 2020-12-14 08:53:48,486Z WARN  
> [org.ovirt.engine.core.bll.pm.FenceAgentExecutor] (default task-4) 
> [07c1d540-6d8d-419c-affb-181495d75759] Failed to find another proxy to re-run 
> failed fence action, retrying with the same proxy 'kvm1.lab.local'
> 2020-12-14 08:53:48,582Z WARN  
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (default task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID: 
> VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host 
> kvm0.lab.local.Internal JSON-RPC error
> 2020-12-14 08:53:48,582Z INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (default 
> task-4) [07c1d540-6d8d-419c-affb-181495d75759] FINISH, FenceVdsVDSCommand, 
> return: FenceOperationResult:{status='ERROR', powerStatus='UNKNOWN', 
> message='Internal JSON-RPC error'}, log id: 8607bc9
> 2020-12-14 08:53:48,637Z WARN  
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (default task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID: 
> FENCE_OPERATION_USING_AGENT_AND_PROXY_FAILED(9,021), Execution of power 
> management status on Host kvm0.lab.local using Proxy Host kvm1.lab.local and 
> Fence Agent fence_xvm:225.0.0.12 failed.
> 
> 
> Any idea?
> 
> Thanx, 
> Alex
> 
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/B7IHC4MYY5LJFJMEJMLRRFSTMD7IK23I/
> 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LVCO67OSNVWAD37FCH6C4YQPMYJD67OM/


[ovirt-users] Re: Network Teamd support

2020-12-17 Thread Strahil Nikolov via Users
Hey Dominik,

it was mentioned several times before why teaming is "better" than bonding ;)

Best Regards,
Strahil Nikolov






On Wednesday, 16 December 2020 at 16:59:20 GMT+2, Dominik Holler 
 wrote: 







On Fri, Dec 11, 2020 at 1:19 AM Carlos C  wrote:
> Hi folks,
> 
> Does Ovirt 4.4.4 support or will support Network Teamd? Or only bonding?
> 


Currently, oVirt does not support teaming,
but you are welcome to share which feature you are missing in the current 
oVirt bonding implementation in
https://bugzilla.redhat.com/show_bug.cgi?id=1351510

Thanks
Dominik

 
>  regards
> Carlos
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ABGHHQYZBLO34YXBP4BKX6UGLIOL7IVU/
> 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UWYK4MADCU2ZPDQETOETSSDX546HDR6Q/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CJWRRNRRIHBIXOTVQ7KESIEWSBMFFBKM/


[ovirt-users] Re: Cannot connect Glusterfs storage to Ovirt

2020-12-17 Thread Strahil Nikolov via Users
Did you mistype that in the e-mail, or did you really put "/" ?
For Gluster, there should be a ":" character between the Gluster volume server 
and the volume:

<server>:<volume> and <server>:/<volume> are both valid ways to define the volume.
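
For example, taking the server and volume name from the quoted message below
(10.33.50.33 and VOL1) purely as an illustration, the storage connection field
would be:

   10.33.50.33:VOL1      # or, equivalently, 10.33.50.33:/VOL1
   # rather than 10.33.50.33/VOL1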

Best Regards,
Strahil Nikolov






On Wednesday, 16 December 2020 at 02:37:45 GMT+2, Ariez Ahito 
 wrote: 





Hi guys, I have installed the oVirt 4.4 hosted engine and a separate glusterfs 
storage.
Now during hosted-engine deployment I try to choose:
STORAGE TYPE: gluster
Storage connection: 10.33.50.33/VOL1
Mount Option:

when I try to connect

this gives me an error:
[ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is 
"[Problem while trying to mount target]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault 
reason is \"Operation Failed\". Fault detail is \"[Problem while trying to 
mount target]\". HTTP response code is 400."}
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KLPN6P4FMY6LJAD4ETRYLV5PCA7BAV6J/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NV2WNQOLT2BET4DVNYYRWOQO3QZ5QBRZ/


[ovirt-users] Re: Adding host to hosted engine fails

2020-12-17 Thread Ritesh Chikatwar
Hello,


Which version of ovirt are you using?
Can you check whether the gluster service is running, because I see the error
"Could not connect to storageServer".
Also, please share the engine log as well, and a few more lines after the
error occurred in vdsm.

Ritesh
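
A quick way to answer the gluster-service question above, run on each of the
standalone gluster servers (10.33.50.33/34/35 in this thread); VOL1 is just
one of the volume names mentioned in the logs:

   systemctl status glusterd
   gluster peer status
   gluster volume status VOL1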

On Thu, Dec 17, 2020 at 12:00 PM Ariez Ahito 
wrote:

> here is our setup
> stand alone glusterfs storage replica3
> 10.33.50.33
> 10.33.50.34
> 10.33.50.35
>
> we deployed hosted-engine and managed to connect to our glusterfs storage
>
> now we are having issues adding hosts
>
> here is the logs
> dsm.gluster.exception.GlusterVolumesListFailedException: Volume list
> failed: rc=1 out=() err=['Command {self.cmd} failed with rc={self.rc}
> out={self.out!r} err={self.err!r}']
> 2020-12-17 14:22:27,106+0800 INFO  (jsonrpc/4)
> [storage.StorageDomainCache] Invalidating storage domain cache (sdc:74)
> 2020-12-17 14:22:27,106+0800 INFO  (jsonrpc/4) [vdsm.api] FINISH
> connectStorageServer return={'statuslist': [{'id':
> 'afa2d41a-d817-4f4a-bd35-5ffedd1fa65b', 'status': 4149}]}
> from=:::10.33.0.10,50058, flow_id=6170eaa3,
> task_id=f00d28fa-077f-403a-8024-9f9b533bccb5 (api:54)
> 2020-12-17 14:22:27,107+0800 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC
> call StoragePool.connectStorageServer took more than 1.00 seconds to
> succeed: 3.34 (__init__:316)
> 2020-12-17 14:22:27,213+0800 INFO  (jsonrpc/6) [vdsm.api] START
> connectStorageServer(domType=7,
> spUUID='1abdb9e4-3f85-11eb-9994-00163e4e4935', conList=[{'password':
> '', 'vfs_type': 'glusterfs', 'port': '', 'mnt_options':
> 'backup-volfile-servers=gluster3:gluster4', 'iqn': '', 'connection':
> 'gluster3:/VOL2', 'ipv6_enabled': 'false', 'id':
> '2fb6989d-b26b-42e7-af35-4e4cf718eebf', 'user': '', 'tpgt': '1'},
> {'password': '', 'vfs_type': 'glusterfs', 'port': '',
> 'mnt_options': 'backup-volfile-servers=gluster3:gluster4', 'iqn': '',
> 'connection': 'gluster3:/VOL3', 'ipv6_enabled': 'false', 'id':
> 'b7839bcd-c0e3-422c-8f2c-47351d24b6de', 'user': '', 'tpgt': '1'}],
> options=None) from=:::10.33.0.10,50058, flow_id=6170eaa3,
> task_id=cfeb3401-54b9-4756-b306-88d4275c0690 (api:48)
> 2020-12-17 14:22:29,058+0800 INFO  (periodic/1) [vdsm.api] START
> repoStats(domains=()) from=internal,
> task_id=e9648d47-2ffb-4387-9a72-af41ab51adf7 (api:48)
> 2020-12-17 14:22:29,058+0800 INFO  (periodic/1) [vdsm.api] FINISH
> repoStats return={} from=internal,
> task_id=e9648d47-2ffb-4387-9a72-af41ab51adf7 (api:54)
> 2020-12-17 14:22:30,512+0800 ERROR (jsonrpc/6) [storage.HSM] Could not
> connect to storageServer (hsm:2444)
>
>
> in the events  tab
> The error message for connection gluster3:/ISO returned by VDSM was:
> Failed to fetch Gluster Volume List
> The error message for connection gluster3:/VOL1 returned by VDSM was:
> Failed to fetch Gluster Volume List
>
> thanks
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/AJZNXHOIFHWNDJJ7INI3VNLT46TB3EAW/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UTFH7VWFPBGSQIZGJQXXLBODXTBPPJT2/


[ovirt-users] Re: fence_xvm for testing

2020-12-17 Thread emesika
On Tue, Dec 15, 2020 at 1:59 PM Alex K  wrote:

>
>
> On Tue, Dec 15, 2020 at 1:43 PM emesika  wrote:
>
>> The problem is that the custom fencing configuration is not defined well
>>
>> Please follow [1] and retry
>>
>> [1]
>> https://www.ovirt.org/develop/developer-guide/engine/custom-fencing.html
>>
> Yes, I followed that.
> I cannot see what I am missing:
>
> [root@manager ~]# engine-config -g CustomVdsFenceType
> CustomVdsFenceType: fence_xvm version: general
>
It should be only "xvm".

> [root@manager ~]# engine-config -g CustomFenceAgentMapping
> CustomFenceAgentMapping: fence_xvm=xvm version: general
>
Not needed; please keep it empty.

> [root@manager ~]# engine-config -g CustomVdsFenceOptionMapping
> CustomVdsFenceOptionMapping: fence_xvm: version: general
>
This one seems not OK; you should list here all the options for the agent.
Please check the doc again.
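
Pulling the three comments above together, a rough sketch of the corrected
values could look like the lines below. The CustomVdsFenceOptionMapping line
is only an illustrative guess at the option-mapping syntax and should be
checked against the custom fencing documentation linked earlier; also note
that engine-config changes only take effect after an engine restart:

   engine-config -s CustomVdsFenceType="xvm"        # agent type only, no "fence_" prefix
   engine-config -s CustomFenceAgentMapping=""      # keep this one empty
   engine-config -s CustomVdsFenceOptionMapping="xvm:port=port"   # guess; list the agent's options here
   systemctl restart ovirt-engine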

>
>
>>
>> On Tue, Dec 15, 2020 at 12:56 PM Alex K  wrote:
>>
>>>
>>>
>>> On Tue, Dec 15, 2020 at 12:34 PM Martin Perina 
>>> wrote:
>>>


 On Tue, Dec 15, 2020 at 11:18 AM Alex K 
 wrote:

>
>
> On Tue, Dec 15, 2020 at 11:59 AM Martin Perina 
> wrote:
>
>> Hi,
>>
>> could you please provide engine.log? And also vdsm.log from a host
>> which was acting as a fence proxy?
>>
>
> At proxy host (kvm1) I see the following vdsm.log:
>
> 2020-12-15 10:13:03,933+ INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer]
> RPC call Host.fenceNode failed (error 1) in 0.01 seconds (__init__:312)
> 2020-12-15 10:13:04,376+ INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer]
> RPC call Host.fenceNode failed (error 1) in 0.01 seconds (__init__:312)
>

 Isn't there stdout and stderr content of fence_xvm execution a few
 lines above, which should reveal the exact error? If not, then could you
 please turn on debug logging using below command:

 vdsm-client Host setLogLevel level=DEBUG

 This should be executed on the host which acts as a fence proxy (if you 
 have multiple hosts, then you would need to turn on debug on all, because 
 the fence proxy is selected randomly).

 Once we will have vdsm.log with fence_xvm execution details, then you can 
 change log level to INFO again by running:

 I had to set engine-config -s CustomFenceAgentMapping="fence_xvm=xvm"
>>> at engine, as it seems the host prepends fence_.
>>> After that I got the following at the proxy host with DEBUG enabled:
>>>
>>> 2020-12-15 10:51:57,891+ DEBUG (jsonrpc/7) [jsonrpc.JsonRpcServer]
>>> Calling 'Host.fenceNode' in bridge with {u'username': u'root', u'addr':
>>> u'225.0.0.12', u'agent': u'xvm', u'options': u'port=ovirt-node0',
>>> u'action': u'status', u'password': '', u'port': u'0'} (__init__:329)
>>> 2020-12-15 10:51:57,892+ DEBUG (jsonrpc/7) [root] /usr/bin/taskset
>>> --cpu-list 0-3 /usr/sbin/fence_xvm (cwd None) (commands:198)
>>> 2020-12-15 10:51:57,911+ INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer]
>>> RPC call Host.fenceNode failed (error 1) in 0.02 seconds (__init__:312)
>>> 2020-12-15 10:51:58,339+ DEBUG (jsonrpc/5) [jsonrpc.JsonRpcServer]
>>> Calling 'Host.fenceNode' in bridge with {u'username': u'root', u'addr':
>>> u'225.0.0.12', u'agent': u'xvm', u'options': u'port=ovirt-node0',
>>> u'action': u'status', u'password': '', u'port': u'0'} (__init__:329)
>>> 2020-12-15 10:51:58,340+ DEBUG (jsonrpc/5) [root] /usr/bin/taskset
>>> --cpu-list 0-3 /usr/sbin/fence_xvm (cwd None) (commands:198)
>>> 2020-12-15 10:51:58,356+ INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer]
>>> RPC call Host.fenceNode failed (error 1) in 0.01 seconds (__init__:312
>>>
>>> while at engine at got:
>>> 2020-12-15 10:51:57,873Z INFO
>>>  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (default task-5) [a4f30921-37a9-45c1-97e5-26152f844d72] EVENT_ID:
>>> FENCE_OPERATION_USING_AGENT_AND_PROXY_STARTED(9,020), Executing power
>>> management status on Host kvm0.lab.local using Proxy Host kvm1.lab.local
>>> and Fence Agent xvm:225.0.0.12.
>>> 2020-12-15 10:51:57,888Z INFO
>>>  [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (default
>>> task-5) [a4f30921-37a9-45c1-97e5-26152f844d72] START,
>>> FenceVdsVDSCommand(HostName = kvm1.lab.local,
>>> FenceVdsVDSCommandParameters:{hostId='91c81bbe-5933-4ed0-b9c5-2c8c277e44c7',
>>> targetVdsId='b5e8fe3d-cbea-44cb-835a-f88d6d70c163', action='STATUS',
>>> agent='FenceAgent:{id='null', hostId='null', order='1', type='xvm',
>>> ip='225.0.0.12', port='0', user='root', password='***',
>>> encryptOptions='false', options='port=ovirt-node0'}', policy='null'}), log
>>> id: e6d3e8c
>>> 2020-12-15 10:51:58,008Z WARN
>>>  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (default task-5) [a4f30921-37a9-45c1-97e5-26152f844d72] EVENT_ID:
>>> VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host
>>> kvm0.lab.local.Internal JSON-RPC error
>>> 

[ovirt-users] Re: fence_xvm for testing

2020-12-17 Thread Alex K
Hi Strahil,

Do you have a working setup with fence_xvm for ovirt 4.3?

On Mon, Dec 14, 2020 at 8:59 PM Strahil Nikolov 
wrote:

> Fence_xvm requires a key is deployed on both the Host and the VMs in order
> to succeed. What is happening when you use the cli on any of the VMs ?
> Also, the VMs require an open tcp port to receive the necessary output of
> each request.
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Monday, 14 December 2020 at 10:57:11 GMT+2, Alex K <
> rightkickt...@gmail.com> wrote:
>
>
>
>
>
> Hi friends,
>
> I was wondering what is needed to setup fence_xvm in order to use for
> power management in virtual nested environments for testing purposes.
>
> I have followed the following steps:
> https://github.com/rightkick/Notes/blob/master/Ovirt-fence_xmv.md
>
> I tried also engine-config -s
> CustomFenceAgentMapping="fence_xvm=_fence_xvm"
> From command line all seems fine and I can get the status of the host VMs,
> but I was not able to find what is needed to set this up at engine UI:
>
>
> At username and pass I just filled dummy values as they should not be
> needed for fence_xvm.
> I always get an error at GUI while engine logs give:
>
>
> 2020-12-14 08:53:48,343Z WARN
>  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID:
> VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host
> kvm0.lab.local.Internal JSON-RPC error
> 2020-12-14 08:53:48,343Z INFO
>  [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (default
> task-4) [07c1d540-6d8d-419c-affb-181495d75759] FINISH, FenceVdsVDSCommand,
> return: FenceOperationResult:{status='ERROR', powerStatus='UNKNOWN',
> message='Internal JSON-RPC error'}, log id: 2437b13c
> 2020-12-14 08:53:48,400Z WARN
>  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID:
> FENCE_OPERATION_USING_AGENT_AND_PROXY_FAILED(9,021), Execution of power
> management status on Host kvm0.lab.local using Proxy Host kvm1.lab.local
> and Fence Agent fence_xvm:225.0.0.12 failed.
> 2020-12-14 08:53:48,400Z WARN
>  [org.ovirt.engine.core.bll.pm.FenceAgentExecutor] (default task-4)
> [07c1d540-6d8d-419c-affb-181495d75759] Fence action failed using proxy host
> 'kvm1.lab.local', trying another proxy
> 2020-12-14 08:53:48,485Z ERROR 
> [org.ovirt.engine.core.bll.pm.FenceProxyLocator]
> (default task-4) [07c1d540-6d8d-419c-affb-181495d75759] Can not run fence
> action on host 'kvm0.lab.local', no suitable proxy host was found.
> 2020-12-14 08:53:48,486Z WARN
>  [org.ovirt.engine.core.bll.pm.FenceAgentExecutor] (default task-4)
> [07c1d540-6d8d-419c-affb-181495d75759] Failed to find another proxy to
> re-run failed fence action, retrying with the same proxy 'kvm1.lab.local'
> 2020-12-14 08:53:48,582Z WARN
>  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID:
> VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host
> kvm0.lab.local.Internal JSON-RPC error
> 2020-12-14 08:53:48,582Z INFO
>  [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (default
> task-4) [07c1d540-6d8d-419c-affb-181495d75759] FINISH, FenceVdsVDSCommand,
> return: FenceOperationResult:{status='ERROR', powerStatus='UNKNOWN',
> message='Internal JSON-RPC error'}, log id: 8607bc9
> 2020-12-14 08:53:48,637Z WARN
>  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID:
> FENCE_OPERATION_USING_AGENT_AND_PROXY_FAILED(9,021), Execution of power
> management status on Host kvm0.lab.local using Proxy Host kvm1.lab.local
> and Fence Agent fence_xvm:225.0.0.12 failed.
>
>
> Any idea?
>
> Thanx,
> Alex
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/B7IHC4MYY5LJFJMEJMLRRFSTMD7IK23I/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2DRBMQPDNN3KJL2CPODXVAJBA5X5OJ4J/


[ovirt-users] Move self hosted engine to a different gluster volume

2020-12-17 Thread ralf
Hi,
I apparently successfully upgraded a hyperconverged self-hosted setup from 4.3
to 4.4. During this process the self-hosted engine required a new gluster
volume (/engine-new). I used temporary storage for that. Is it possible to
move the SHE back to the original volume (/engine)?
What steps would be needed? Could I just do:
1. global maintenance
2. stop engine and SHE guest
3. copy all files from glusterfs /engine-new to /engine
4. use hosted-engine --set-shared-config storage <server>:/engine
hosted-engine --set-shared-config mnt_options
backup-volfile-servers=<server2>:<server3>
5. disable maintenance
Or are additional steps required?
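
Restating steps 3-4 only, with hypothetical gluster host names (gfs1, gfs2,
gfs3) and mount points, and without any claim that these steps are sufficient
(that is exactly the open question above):

   cp -a /mnt/engine-new/. /mnt/engine/      # step 3, assuming both volumes are mounted there
   hosted-engine --set-shared-config storage gfs1:/engine
   hosted-engine --set-shared-config mnt_options backup-volfile-servers=gfs2:gfs3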

Kind regards,
Ralf
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RLJP6VK2TND3QQBJR6K534AZ5XNHZTDG/