[ovirt-users] Re: Multiple hosts stuck in Connecting state waiting for storage pool to go up.

2023-05-03 Thread Murilo Morais
Yesterday I went through the same situation after our router crashed and
broke the connections with the Hosts.

The solution is quite simple and already documented by Red Hat. [1]

Just restarting the engine service solves the problem:
`systemctl restart ovirt-engine` (run on the Hosted Engine VM)

[1] https://access.redhat.com/solutions/4292981

On Tue, May 2, 2023 at 09:14,  wrote:

> Hi!
>
> We have a problem with multiple hosts stuck in Connecting state, which I
> hoped somebody here could help us wrap our heads around.
>
> All hosts, except one, seem to have very similar symptoms but I'll focus
> on one host that represents the rest.
>
> So, the host is stuck in Connecting state, and this is what we see in the
> oVirt log files.
>
>  /var/log/ovirt-engine/engine.log:
>
> 2023-04-20 09:51:53,021+03 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesAsyncVDSCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-37) []
> Command 'GetCapabilitiesAsyncVDSCommand(HostName = ABC010-176-XYZ,
> VdsIdAndVdsVDSCommandParametersBase:{hostId='2c458562-3d4d-4408-afc9-9a9484984a91',
> vds='Host[ABC010-176-XYZ,2c458562-3d4d-4408-afc9-9a9484984a91]'})'
> execution failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException:
> SSL session is invalid
> 2023-04-20 09:55:16,556+03 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-67) []
> EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM ABC010-176-XYZ command
> Get Host Capabilities failed: Message timeout which can be caused by
> communication issues
>
> /var/log/vdsm/vdsm.log:
>
> 2023-04-20 17:48:51,977+0300 INFO  (vmrecovery) [vdsm.api] START
> getConnectedStoragePoolsList() from=internal,
> task_id=ebce7c8c-6ded-454e-9aee-86edf72764ef (api:31)
> 2023-04-20 17:48:51,977+0300 INFO  (vmrecovery) [vdsm.api] FINISH
> getConnectedStoragePoolsList return={'poollist': []} from=internal,
> task_id=ebce7c8c-6ded-454e-9aee-86edf72764ef (api:37)
> 2023-04-20 17:48:51,978+0300 INFO  (vmrecovery) [vds] recovery: waiting
> for storage pool to go up (clientIF:723)
>
> Both engine.log and vdsm.log are flooded with these messages, repeated at
> regular intervals ad infinitum. This is one common symptom shared by
> multiple hosts in our deployment: they all have these message loops in
> their engine.log and vdsm.log files.
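One quick way to confirm that the loop looks the same on every affected host is to count its signature line in vdsm.log. A minimal sketch; the here-doc is a canned stand-in for /var/log/vdsm/vdsm.log, which you would grep directly on a real host:

```shell
# Count vmrecovery loop iterations in a (canned) vdsm.log sample.
log=$(cat <<'EOF'
2023-04-20 17:48:51,978+0300 INFO (vmrecovery) [vds] recovery: waiting for storage pool to go up (clientIF:723)
2023-04-20 17:53:51,980+0300 INFO (vmrecovery) [vds] recovery: waiting for storage pool to go up (clientIF:723)
EOF
)
count=$(printf '%s\n' "$log" | grep -c 'waiting for storage pool to go up')
echo "loop iterations in sample: $count"
```

On a real host the equivalent is `grep -c 'waiting for storage pool to go up' /var/log/vdsm/vdsm.log`, run periodically to see whether the count keeps growing.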
>
> Running vdsm-client Host getConnectedStoragePools also returns an empty
> list, represented by [], on all hosts (interestingly, there is one host
> that showed a Storage Pool UUID and yet was still stuck in Connecting state).
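The per-host check described above can be scripted. A hedged sketch, with a canned value standing in for the real call (on a host you would set `pools=$(vdsm-client Host getConnectedStoragePools)`):

```shell
# Flag a host whose VDSM reports no connected storage pools.
pools='[]'   # canned sample; replace with the vdsm-client output on a real host
if [ "$pools" = "[]" ]; then
    echo "no connected storage pools - host will sit in Connecting"
else
    echo "connected pools: $pools"
fi
```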
>
> This particular host (ABC010-176-XYZ) is connected to 3 Ceph iSCSI Storage
> Domains, and lsblk shows 3 block devices with matching UUIDs among their
> device components. So the storage seems to be connected, but the Storage
> Pool is not? How is that even possible?
>
> Now, what's even weirder is that we tried rebooting the host (via
> Administrator Portal) and it didn't help. We even tried removing and
> re-adding the host in Administrator Portal but to no avail.
>
> Additionally, the host refused to go into Maintenance mode so we had to
> enforce it by manually updating Engine DB.
>
> We also tried reinstalling the host via Administrator Portal and ran into
> another weird problem, which I'm not sure if it's a related one or a
> problem that deserves a dedicated discussion thread but, basically, the
> underlying Ansible playbook exited with the following error message:
>
> "stdout" : "fatal: [10.10.10.176]: UNREACHABLE! => {\"changed\": false,
> \"msg\": \"Data could not be sent to remote host \\\"10.10.10.176\\\". Make
> sure this host can be reached over ssh: \", \"unreachable\": true}",
>
> Counterintuitively, just before running Reinstall via Administrator Portal
> we had been able to reboot the same host (which as you know oVirt does via
> Ansible as well). So, no changes on the host in between, just different
> Ansible playbooks. To confirm that we actually had access to the host over
> ssh we successfully ran ssh -p $PORT root@10.10.10.176 -i
> /etc/pki/ovirt-engine/keys/engine_id_rsa and it worked.
>
> That made us scratch our heads for a while, but what seems to have fixed
> Ansible's ssh access problems was a manual full stop of all VDSM-related
> systemd services on the host. It was just a wild guess, but as soon as we
> stopped all VDSM services Ansible stopped complaining about not being able
> to reach the target host and successfully did its job.
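The workaround described above can be scripted. A hedged sketch, shown as a dry run: the unit names below are the usual VDSM-related set, but verify them first with `systemctl list-units 'vdsm*' 'ovirt*'` and swap `echo` for `systemctl stop` on the actual host:

```shell
# Dry run: list the VDSM-related units that would be stopped.
stopped=""
for unit in vdsmd supervdsmd mom-vdsm ovirt-imageio; do
    echo "would stop: $unit"
    stopped="$stopped$unit "
done
```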
>
> I'm sure you'd like to see more logs but I'm not certain what exactly is
> relevant. There are a ton of logs as this deployment is comprised of nearly
> 80 hosts. So, I guess it's best if you just request to see specific logs,
> messages or configuration details and I'll cherry-pick what's relevant.
>
> We don't really understand what's going on and would appreciate any help.
> We tried just about anything we could think of to resolve this issue and
> are 

[ovirt-users] Error 500 when adding a new host

2023-03-16 Thread Murilo Morais
Good afternoon everybody.

I'm using oVirt 4.4.10 at the moment.

After activating Cinder, I believe some packages were updated, which is
causing problems. One of them occurs when I try to add a Host: a 500 error
is shown. The same happens when editing a Host.

The following error is happening in the server.log:
2023-03-16 10:54:39,651-04 ERROR [io.undertow.servlet] (default task-133)
Exception while dispatching incoming RPC call:
com.google.gwt.user.server.rpc.UnexpectedException: Service method 'public
abstract java.util.ArrayList
org.ovirt.engine.ui.frontend.gwtservices.GenericApiGWTService.runMultipleQueries(java.util.ArrayList,java.util.ArrayList)'
threw an unexpected exception: javax.ejb.EJBException: WFLYEJB0442:
Unexpected Error
at
deployment.engine.ear.webadmin.war//com.google.gwt.user.server.rpc.RPC.encodeResponseForFailure(RPC.java:416)
at
deployment.engine.ear.webadmin.war//com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:605)
at
deployment.engine.ear.webadmin.war//com.google.gwt.user.server.rpc.RemoteServiceServlet.processCall(RemoteServiceServlet.java:333)
at
deployment.engine.ear.webadmin.war//com.google.gwt.user.server.rpc.RemoteServiceServlet.processCall(RemoteServiceServlet.java:303)
at
deployment.engine.ear.webadmin.war//com.google.gwt.user.server.rpc.RemoteServiceServlet.processPost(RemoteServiceServlet.java:373)
at
deployment.engine.ear.webadmin.war//com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62)
at javax.servlet.api@2.0.0.Final
//javax.servlet.http.HttpServlet.service(HttpServlet.java:523)
at
deployment.engine.ear.webadmin.war//org.ovirt.engine.ui.frontend.server.gwt.GenericApiGWTServiceImpl.service(GenericApiGWTServiceImpl.java:78)
at javax.servlet.api@2.0.0.Final
//javax.servlet.http.HttpServlet.service(HttpServlet.java:590)
at io.undertow.servlet@2.2.5.Final
//io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:74)
at io.undertow.servlet@2.2.5.Final
//io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129)
at
org.ovirt.engine.core.utils//org.ovirt.engine.core.utils.servlet.HeaderFilter.doFilter(HeaderFilter.java:94)
at io.undertow.servlet@2.2.5.Final
//io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet@2.2.5.Final
//io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at
org.ovirt.engine.core.utils//org.ovirt.engine.core.utils.servlet.CachingFilter.doFilter(CachingFilter.java:133)
at io.undertow.servlet@2.2.5.Final
//io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet@2.2.5.Final
//io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at
deployment.engine.ear.webadmin.war//org.ovirt.engine.core.branding.BrandingFilter.doFilter(BrandingFilter.java:73)
at io.undertow.servlet@2.2.5.Final
//io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet@2.2.5.Final
//io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at
org.ovirt.engine.core.utils//org.ovirt.engine.core.utils.servlet.LocaleFilter.doFilter(LocaleFilter.java:65)
at io.undertow.servlet@2.2.5.Final
//io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet@2.2.5.Final
//io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at io.undertow.servlet@2.2.5.Final
//io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)
at io.undertow.servlet@2.2.5.Final
//io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
at io.undertow.servlet@2.2.5.Final
//io.undertow.servlet.handlers.ServletChain$1.handleRequest(ServletChain.java:68)
at io.undertow.servlet@2.2.5.Final
//io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
at org.wildfly.extension.undertow@23.0.2.Final
//org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
at io.undertow.core@2.2.5.Final
//io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.servlet@2.2.5.Final
//io.undertow.servlet.handlers.RedirectDirHandler.handleRequest(RedirectDirHandler.java:68)
at io.undertow.servlet@2.2.5.Final
//io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:117)
at io.undertow.servlet@2.2.5.Final

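The WFLYEJB0442 code in the trace above is a convenient string to pivot on when digging through server.log. A minimal sketch, with a canned line standing in for /var/log/ovirt-engine/server.log:

```shell
# Extract the WildFly error code from a server.log line (canned sample).
line='... javax.ejb.EJBException: WFLYEJB0442: Unexpected Error'
code=$(echo "$line" | grep -o 'WFLYEJB[0-9]*')
echo "$code"
```

On the engine host, `grep -B2 'WFLYEJB0442' /var/log/ovirt-engine/server.log` usually exposes the root-cause exception just above the wrapper.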
[ovirt-users] Re: RBD Mirror support

2023-03-14 Thread Murilo Morais
Of course OpenStack would be much better; unfortunately we need this in
oVirt until we implement OpenStack.

I am aware it is a hack on top of a hack on top of another hack. We'll have
to live with that until we implement an alternative to oVirt.

Konstantin, thank you so much for replying.

I'll see what I can do around here. I'll post any news here on the list.

On Tue, Mar 14, 2023 at 10:26, Konstantin Shalygin wrote:

> Cinderlib is Cinder, and Cinder has always had a native RBD driver, but it
> seems the oVirt developers don't know about it.
> Yes, you can write a vdsm hook and modify the domain XML, but take a
> look - it's just another hack on top of a hack on top of a hack. Maybe it's
> better to stop hacking oVirt and choose OpenStack or Proxmox with proper
> RBD support?
>
>
> Cheers,
> k
>
>
> On 14 Mar 2023, at 19:17, Murilo Morais  wrote:
>
> I understand. Is there no setting in Cinder to make this possible?
>
> I'm thinking of writing a hook specifically for this, I don't see any
> other alternative.
>
> On Tue, Mar 14, 2023 at 03:37, Konstantin Shalygin wrote:
>
>> Hi,
>>
>> Recently I created a BZ for librbd [1]; it was closed as WONTFIX. So it is
>> still impossible to even upgrade the RBD driver without a reboot. A typical
>> misunderstanding between developers and operations.
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1997241
>> k
>>
>>
>> On 14 Mar 2023, at 08:43, Murilo Morais  wrote:
>>
>> Good evening everyone!
>>
>> Guys, I managed to bring up RBD through Cinder without problems.
>> Everything works, including removing the Storage Domain (through postgres).
>>
>> The initial objective was to set up RBD Mirror, but I'm not succeeding:
>> Cinder is attaching the volume through krbd, which doesn't support
>> journaling, and that ends up breaking the mirror...
>>
>> Is there any way/configuration to make Cinder start the machine using
>> librbd instead of krbd? In my scenario we have to use mirroring.
>>
>> Thanks in advance.
>>
>>
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I7MOGM6QP3VIQID5LZEIU4QLIN72J5W3/


[ovirt-users] Re: RBD Mirror support

2023-03-14 Thread Murilo Morais
I understand. Is there no setting in Cinder to make this possible?

I'm thinking of writing a hook specifically for this; I don't see any other
alternative.

On Tue, Mar 14, 2023 at 03:37, Konstantin Shalygin wrote:

> Hi,
>
> Recently I created a BZ for librbd [1]; it was closed as WONTFIX. So it is
> still impossible to even upgrade the RBD driver without a reboot. A typical
> misunderstanding between developers and operations.
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1997241
> k
>
>
> On 14 Mar 2023, at 08:43, Murilo Morais  wrote:
>
> Good evening everyone!
>
> Guys, I managed to bring up RBD through Cinder without problems.
> Everything works, including removing the Storage Domain (through postgres).
>
> The initial objective was to set up RBD Mirror, but I'm not succeeding:
> Cinder is attaching the volume through krbd, which doesn't support
> journaling, and that ends up breaking the mirror...
>
> Is there any way/configuration to make Cinder start the machine using
> librbd instead of krbd? In my scenario we have to use mirroring.
>
> Thanks in advance.
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZV5IV674P2JFBBOM5RRF6FLQI3Z7ZCDC/


[ovirt-users] RBD Mirror support

2023-03-13 Thread Murilo Morais
Good evening everyone!

Guys, I managed to bring up RBD through Cinder without problems. Everything
works, including removing the Storage Domain (through postgres).

The initial objective was to set up RBD Mirror, but I'm not succeeding:
Cinder is attaching the volume through krbd, which doesn't support
journaling, and that ends up breaking the mirror...

Is there any way/configuration to make Cinder start the machine using
librbd instead of krbd? In my scenario we have to use mirroring.
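For reference, journal-based RBD mirroring needs the journaling feature (plus exclusive-lock) enabled on the image, and krbd refuses to map images with journaling enabled, which is exactly the conflict above. A hedged sketch of the image-side commands (pool and image names are placeholders; the strings are echoed here as a dry run, to be executed with `rbd` on a Ceph admin node):

```shell
pool=mypool; image=vm-disk-01   # placeholder names
enable_journaling="rbd feature enable $pool/$image journaling"
echo "rbd feature enable $pool/$image exclusive-lock"   # journaling requires it
echo "$enable_journaling"
echo "rbd mirror image enable $pool/$image journal"
```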

Thanks in advance.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XT6MX3XX73AEB6TKGUOZNNVIZMW6KMZ2/


[ovirt-users] Re: Failed to remove MBS

2023-03-13 Thread Murilo Morais
Benny, it worked perfectly. Thank you very much!

On Mon, Mar 13, 2023 at 08:02, Murilo Morais wrote:

> Hello Benny, good morning!
>
> I will test the solution and post any news here.
>
> Thanks a lot for answering!
>
> On Sun, Mar 12, 2023 at 06:32, Benny Zlotnik wrote:
>
>> I think there are more tables, perhaps running the stored
>> procedure Force_Delete_storage_domain(v_storage_domain_id UUID) would be
>> enough
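Benny's suggestion boils down to a single SQL call against the engine database. A hedged sketch: the UUID is a placeholder (take the real one from storage_domain_static), and the database/user names assume a default engine setup:

```shell
sd_uuid='00000000-0000-0000-0000-000000000000'   # placeholder storage domain ID
sql="SELECT Force_Delete_storage_domain('$sd_uuid');"
echo "$sql"
# On the engine host this would be run as:
#   sudo -u postgres psql engine -c "$sql"
```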
>>
>> On Sat, Mar 11, 2023 at 5:17 PM Murilo Morais 
>> wrote:
>>
>>> Good afternoon everybody!
>>>
>>> I have an MBS (Managed Block Storage) Storage Domain that we no longer
>>> use and want to remove. We are using version 4.4.10.
>>>
>>> When trying to put the Storage Domain into Maintenance, a message appears
>>> saying that it could not be removed because there is a Task still being
>>> executed. I looked for the task but couldn't find it.
>>>
>>> Therefore, I cannot put the Storage Domain into Maintenance in the
>>> Datacenter, making it impossible to carry out the removal.
>>>
>>> According to a Bug Report [1], the problem has been fixed in version
>>> 4.5.0; the problem is that we cannot perform the update.
>>>
>>> In the DB I found two references to this Storage Domain, one in the
>>> `storage_domain_static` table and another in the `cinder_storage` table. Is
>>> removing these two references enough to remove this Storage Domain?
>>>
>>> Is there any other way to perform this process manually?
>>>
>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1959385
>>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EI4YINICPWKQ5XIL4PNNGDJDIP2OWA5D/


[ovirt-users] Re: Failed to remove MBS

2023-03-13 Thread Murilo Morais
Hello Benny, good morning!

I will test the solution and post any news here.

Thanks a lot for answering!

On Sun, Mar 12, 2023 at 06:32, Benny Zlotnik wrote:

> I think there are more tables, perhaps running the stored
> procedure Force_Delete_storage_domain(v_storage_domain_id UUID) would be
> enough
>
> On Sat, Mar 11, 2023 at 5:17 PM Murilo Morais 
> wrote:
>
>> Good afternoon everybody!
>>
>> I have an MBS (Managed Block Storage) Storage Domain that we no longer
>> use and want to remove. We are using version 4.4.10.
>>
>> When trying to put the Storage Domain into Maintenance, a message appears
>> saying that it could not be removed because there is a Task still being
>> executed. I looked for the task but couldn't find it.
>>
>> Therefore, I cannot put the Storage Domain into Maintenance in the
>> Datacenter, making it impossible to carry out the removal.
>>
>> According to a Bug Report [1], the problem has been fixed in version
>> 4.5.0; the problem is that we cannot perform the update.
>>
>> In the DB I found two references to this Storage Domain, one in the
>> `storage_domain_static` table and another in the `cinder_storage` table. Is
>> removing these two references enough to remove this Storage Domain?
>>
>> Is there any other way to perform this process manually?
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1959385
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/55KE5C4VPORKOZMQLNVNMH6IGEKUZGYS/


[ovirt-users] Failed to remove MBS

2023-03-11 Thread Murilo Morais
Good afternoon everybody!

I have an MBS (Managed Block Storage) Storage Domain that we no longer use
and want to remove. We are using version 4.4.10.

When trying to put the Storage Domain into Maintenance, a message appears
saying that it could not be removed because there is a Task still being
executed. I looked for the task but couldn't find it.

Therefore, I cannot put the Storage Domain into Maintenance in the
Datacenter, making it impossible to carry out the removal.

According to a Bug Report [1], the problem has been fixed in version
4.5.0; the problem is that we cannot perform the update.

In the DB I found two references to this Storage Domain, one in the
`storage_domain_static` table and another in the `cinder_storage` table. Is
removing these two references enough to remove this Storage Domain?

Is there any other way to perform this process manually?

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1959385
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/53CJYIAKUA2OCQ7XHX7SUAJSZRYEQFN2/


[ovirt-users] Move disk between POSIX FS and Managed Block Storage

2023-03-10 Thread Murilo Morais
Good morning everybody!

Guys, I managed to connect my oVirt cluster to Ceph RBD; the VM disks are
stored in CephFS (POSIX FS). How can I move the disks from the POSIX FS
domain to Managed Block Storage?

Thanks in advance!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XZLN6IFQIQSXDA7VNGWQ3O7NIJTHKXT7/


[ovirt-users] Re: Hosted Engine No Longer Has IP Address - Help!

2023-01-06 Thread Murilo Morais
If you want to access the Console you can use the following:
hosted-engine --console [1]

If that doesn't work (sometimes it doesn't for me) you can access the
console via VNC:
hosted-engine --add-console-password --password= [2]

If you just want to restart the VM you can do the following:
hosted-engine --vm-shutdown
# Check with --vm-status until the VM dies
hosted-engine --vm-start


Perform these procedures in global maintenance mode.


[1]
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/html/self-hosted_engine_guide/troubleshooting
[2] https://access.redhat.com/solutions/2221461

On Fri, Jan 6, 2023 at 03:56, Matthew J Black <
matt...@peregrineit.net> wrote:

> Hi Guys,
>
> I've gone and shot myself in the foot - and I'm looking for some first-aid.
>
> I've managed to remove the IP Address of the oVirt Self-Hosted Engine and
> so have lost contact with it (don't ask how - let's just say I f*cked-up).
> I *think* its still running, I've got it set to DHCP, and I've got access
> to the Host its running on, so my question(s) is:
>
> - (The preferred method) How can I re-establish (console?) contact - I'm
> thinking via the Host Server and some kvm-commands, so I can issue a
> `dhclient` command
> - (The most drastic) How can I get it to reboot ie is there a command /
> command sequence to do this
>
> Any help would be appreciated.
>
> Cheers
>
> Dulux-Oz
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PY3WVEH22P7SGQ3N7XK2PXRILYTTV6PW/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2ZLG6RZY5SCIUPNQ62RMFGL4USZO4XHE/


[ovirt-users] VM stuck reading and writing to disk

2022-12-28 Thread Murilo Morais
Good morning, people. I'm experiencing a weird problem.

I have a specific VM whose IO simply freezes.

During the backup window, this VM runs mysqldump and tries to compress the
dump; after a long period of compressing, the jbd2 process appears, using
100% of the CPU and driving up WA (iowait in top). Numerous other VMs
perform the same process, but only this one gets stuck in this state. When
jbd2 appears, a forced reboot is necessary, because it stops practically
everything.

I'm using CephFS as the storage backend. Inside the VM I can get rates of
at least 600 MB/s and can generate gigs and more gigs of data without any
problem.

The VMs are using Virtio instead of Virtio-blk.

Hosts are interconnected on 10G SFP+ interfaces, Storages on 40G interfaces.

I believe that the problem is not the Storage, as this behavior does not
exist in other VMs.

What I find strange is that jbd2 wakes up and spikes to 100%.

What could be happening? Could it be something in the Guest itself?
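When jbd2 pegs the CPU with high iowait, one standard in-guest check is how much dirty data is queued for writeback. These are plain Linux /proc interfaces, nothing oVirt-specific:

```shell
# Show pages waiting to be written back inside the guest.
grep -E '^(Dirty|Writeback):' /proc/meminfo
# Large, non-draining values point at the storage path rather than
# mysqldump/compression itself.
```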
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WBRIMINRNTLZAGJHW5F4352CTIJD5RY3/


[ovirt-users] RBD support

2022-12-26 Thread Murilo Morais
Good evening, everyone!

How well is Ceph (RBD) supported through Cinderlib? What about latency? Is
it more performant to use CephFS or RBD through Cinderlib?

Thanks a lot in advance!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NPMNEMC3Y3HU36AXZI5XYKAAK7G2RT3U/


[ovirt-users] Re: Forced restart when losing communication with the Storages

2022-12-13 Thread Murilo Morais
Unfortunately, I haven't found anything like this.
At least this type of event hasn't happened again, but I just can't find
anything in the logs that justifies these reboots. I've read everything
line by line, so if anything had been recorded there I would have seen it.

On Wed, Nov 30, 2022 at 14:20, Volenbovskyi, Konstantin <
konstantin.volenbovs...@haufe.com> wrote:

> Hi,
>
> “I'm assuming it used ssh as the hosts in question have a configuration
> problem with their power management and cannot be reset currently by the PDU
> ”
>
> It would be interesting to find the events in the logs where ovirt-engine
> actually reboots oVirt hosts via SSH in this case/any cases.
>
> I guess that /var/log/messages and probably some engine logs should
> provide evidence if that is the case (and the engine logs can contain
> additional information on what functionality is behind
>
> 'restart host that lost a connection to storage domain')
>
>
>
> BR,
>
> Konstantin
>
>
>
> *From: *Patrick Hibbs 
> *Date: *Wednesday, 30 November 2022 at 16:31
> *To: *"users@ovirt.org" 
> *Subject: *[ovirt-users] Re: Forced restart when losing communication
> with the Storages
>
>
>
> Hello,
>
> I've seen something similar to this too, although for me it occurred when
> a standalone engine attempted to allocate a new disk on a Gluster storage
> while the cluster's VMs were experiencing high virtual disk I/O. (Found out
> later they were doing updates at an odd time...)
>
> The result was random VMs being forced off until it had cleared enough of
> the bottleneck, and one host was rebooted after around 3 minutes of wait
> time. I'm assuming it used ssh, as the hosts in question have a
> configuration problem with their power management and cannot be reset
> currently by the PDU. But it was still an odd occurrence given that the
> engine host itself was the cause of the storage "outage."
>
> Is this the correct behavior of oVirt?
>
> -Patrick Hibbs
>
> On 11/30/22 07:45, Murilo Morais wrote:
>
> Konstantin, thank you very much for the explanation, it was very
> enlightening.
>
> I believe I left something open in the previous message.
>
> I'm using Hosted Engine, all VMs have HA enabled and Power Management is
> disabled on all hosts. No IPMI configured (at least I didn't configure
> anything about iLO/IPMI in oVirt).
>
> There was a loss of communication with the Storage for approximately 3
> minutes and this caused all Hosts to reboot.
>
>
>
> On Wed, Nov 30, 2022 at 08:50, Volenbovskyi, Konstantin <
> konstantin.volenbovs...@haufe.com> wrote:
>
> Hi,
>
> I would say that you observed 'fencing' - not SSH soft fencing, but an
> actual reboot via IPMI.
>
> https://www.ovirt.org/develop/developer-guide/engine/automatic-fencing.html
>
> You can disable Power Management for hosts.
>
> Before doing that you need to understand the following:
>
> - What is the impact on VMs when this happens? The working assumption is
> that your VMs keep working just fine, but you need to think about other
> cases where VMs lose their storage and/or network.
>
> To me it seems that this was a storage domain that is not a VM storage
> domain, so the VMs' disks were just fine. Maybe it was the hosted_storage
> domain in your case…
>
> - Are any of those VMs High-Availability VMs? Once you disable Power
> Management, they will no longer be restarted automatically on different
> hosts.
>
> You need to understand that the idea of fencing is to recover a host
> automatically, possibly restart its VMs, and make sure that there are no
> duplicated VMs.
>
> Of the 100% of cases where fencing is used, there is a subset, X%, where
> you would consider the behavior suboptimal. The drawback of disabling
> fencing is that you might get suboptimal behavior in the remaining Y% of
> cases (100% minus X%).
>
>
>
> BR,
>
> Konstantin
>
>
>
> *From: *Murilo Morais 
> *Date: *Wednesday, 30 November 2022 at 12:13
> *To: *users 
> *Subject: *[ovirt-users] Forced restart when losing communication with
> the Storages
>
>
>
> Good morning everyone!
>
> Is there a way to disable the forced reboot of the machines? This morning
> there was an event in our infrastructure where the hosts lost communication
> with the Storage, which caused all the hosts to restart abruptly.
>
> Would this be the correct behavior of oVirt? Is there any way to disable
> this?
>
>
>

[ovirt-users] Re: Forced restart when losing communication with the Storages

2022-11-30 Thread Murilo Morais
Konstantin, thank you very much for the explanation, it was very
enlightening.

I believe I left something open in the previous message.

I'm using Hosted Engine, all VMs have HA enabled and Power Management is
disabled on all hosts. No IPMI configured (at least I didn't configure
anything about iLO/IPMI in oVirt).

There was a loss of communication with the Storage for approximately 3
minutes and this caused all Hosts to reboot.

On Wed, Nov 30, 2022 at 08:50, Volenbovskyi, Konstantin <
konstantin.volenbovs...@haufe.com> wrote:

> Hi,
>
> I would say that you observed ‘fencing’ and not SSH soft fencing, but
> actual reboot via IPMI.
>
> https://www.ovirt.org/develop/developer-guide/engine/automatic-fencing.html
>
>
>
> You can disable Power management for hosts.
>
> Before doing that you need to understand the following:
>
> - What is the impact on VMs when this happens?
>
> - The working assumption is that your VMs
> work just fine, but you need to think about other cases where VMs lose
> their storage and/or network.
>
> It seems to me that this was a storage domain that is not a VM storage
> domain, so the VMs’ disks were just fine.
>
> Maybe it was hosted_storage domain in your case…
>
> - Are any of those VMs high-availability VMs? Once you
> disable Power Management, those will no longer be restarted automatically
> on different hosts.
>
> You need to understand that the idea of fencing is to recover the host
> automatically, possibly restart its VMs, and make sure that there are no
> duplicated VMs.
>
> Out of 100% of the cases where fencing is used, there is a subset of
> those, X%, where you would consider its behavior suboptimal.
>
> The drawback of disabling fencing is that you get suboptimal behavior in
> the remaining Y% of cases (100% minus X%)
>
>
>
> BR,
>
> Konstantin
>
>
>
> *From: *Murilo Morais 
> *Date: *Wednesday, 30 November 2022 at 12:13
> *To: *users 
> *Subject: *[ovirt-users] Forced restart when losing communication with
> the Storages
>
>
>
> Good morning everyone!
>
> Is there a way to disable the forced reboot of the machines? This morning
> there was an event in our infrastructure where the hosts lost communication
> with the Storage but this caused all the hosts to restart abruptly.
>
> Would this be the correct behavior of oVirt? Is there any way to disable
> this?
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JVNIMYBAXJE3YTM2BKB57VGYES2GIRF3/


[ovirt-users] Forced restart when losing communication with the Storages

2022-11-30 Thread Murilo Morais
Good morning everyone!

Is there a way to disable the forced reboot of the machines? This morning
there was an event in our infrastructure where the hosts lost communication
with the Storage but this caused all the hosts to restart abruptly.

Would this be the correct behavior of oVirt? Is there any way to disable
this?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OMQBUIV3IS7EYH2XYQHOUEY6ZTENGIQE/


[ovirt-users] Re: oVirt/Ceph iSCSI Issues

2022-11-29 Thread Murilo Morais
Matthew, good morning.

Is iSCSI Target configured with ACL?
Do all Gateways have the same number of active sessions? It could be that
one of the Gateways has hung sessions (specifically Gateway 3).

If you are not actually using the iSCSI Storage Domain, I recommend the
following:
1- Log out through oVirt
2- Check whether any initiator paths remain in multipath on each oVirt Host
3- Log out of all sessions and delete the node records through iscsiadm on
each oVirt Host
4- Check whether there is still an active session in Ceph
5- Restart all Gateway daemons in Ceph; it may take a while if there is a
stuck session
6- Try to perform Discovery again through oVirt
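For reference, steps 2 and 3 on each oVirt Host can be sketched roughly as
follows. This is only a sketch: the target IQN below is a placeholder, and
the exact device names will depend on your setup.

```shell
# Step 2: check whether multipath still holds paths to the iSCSI target
multipath -ll

# List any iSCSI sessions that survived the oVirt logout
iscsiadm -m session

# Step 3: log out of all sessions for the target and delete the node records
# (replace the IQN with your actual Ceph iSCSI target)
iscsiadm -m node -T iqn.2003-01.com.example:target1 -u
iscsiadm -m node -T iqn.2003-01.com.example:target1 -o delete

# Flush any now-unused multipath maps left behind
multipath -F
```

After this, the host should no longer show up as "logged_in" on the Ceph
side, and Discovery can be retried from oVirt.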

Em ter., 29 de nov. de 2022 às 03:28, Matthew J Black <
matt...@peregrineit.net> escreveu:

> Hi All,
>
> I've got some issues with connecting my oVirt Cluster to my Ceph Cluster
> via iSCSI. There are two issues, and I don't know if one is causing the
> other, if they are related at all, or if they are two separate, unrelated
> issues. Let me explain.
>
> The Situation
> -
> - I have a working three node Ceph Cluster (Ceph Quincy on Rocky Linux 8.6)
> - The Ceph Cluster has four Storage Pools of between 4 and 8 TB each
> - The Ceph Cluster has three iSCSI Gateways
> - There is a single iSCSI Target on the Ceph Cluster
> - The iSCSI Target has all three iSCSI Gateways attached
> - The iSCSI Target has all four Storage Pools attached
> - The four Storage Pools have been assigned LUNs 0-3
> - I have set up (Discovery) CHAP Authorisation on the iSCSI Target
> - I have a working three node self-hosted oVirt Cluster (oVirt v4.5.3 on
> Rocky Linux 8.6)
> - The oVirt Cluster has (in addition to the hosted_storage Storage Domain)
> three GlusterFS Storage Domains
> - I can ping all three Ceph Cluster Nodes to/from all three oVirt Hosts
> - The iSCSI Target on the Ceph Cluster has all three oVirt Hosts
> Initiators attached
> - Each Initiator has all four Ceph Storage Pools attached
> - I have set up CHAP Authorisation on the iSCSI Target's Initiators
> - The Ceph Cluster Admin Portal reports that all three Initiators are
> "logged_in"
> - I have previously connected Ceph iSCSI LUNs to the oVirt Cluster
> successfully (as an experiment), but had to remove and re-instate them for
> the "final" version(?).
> - The oVirt Admin Portal (ie HostedEngine) reports that Initiators 1 &
> 2 (ie oVirt Hosts 1 & 2) are "logged_in" to all three iSCSI Gateways
> - The oVirt Admin Portal reports that Initiator 3 (ie oVirt Host 3) is
> "logged_in" to iSCSI Gateways 1 & 2
> - I can "force" Initiator 3 to become "logged_in" to iSCSI Gateway 3, but
> when I do this it is *not* persistent
> - oVirt Hosts 1 & 2 can/have discovered all three iSCSI Gateways
> - oVirt Hosts 1 & 2 can/have discovered all four LUNs/Targets on all three
> iSCSI Gateways
> - oVirt Host 3 can only discover 2 of the iSCSI Gateways
> - For Target/LUN 0 oVirt Host 3 can only "see" the LUN provided by iSCSI
> Gateway 1
> - For Targets/LUNs 1-3 oVirt Host 3 can only "see" the LUNs provided by
> iSCSI Gateways 1 & 2
> - oVirt Host 3 can *not* "see" any of the Targets/LUNs provided by iSCSI
> Gateway 3
> - When I create a new oVirt Storage Domain for any of the four LUNs:
>   - I am presented with a message saying "The following LUNs are already
> in use..."
>   - I am asked to "Approve operation" via a checkbox, which I do
>   - As I watch the oVirt Admin Portal I can see the new iSCSI Storage
> Domain appear in the Storage Domain list, and then after a few minutes it
> is removed
>   - After those few minutes I am presented with this failure message:
> "Error while executing action New SAN Storage Domain: Network error during
> communication with the Host."
> - I have looked in the engine.log and all I could find that was relevant
> (as far as I know) was this:
> ~~~
> 2022-11-28 19:59:20,506+11 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
> (default task-1) [77b0c12d] Command 'CreateStorageDomainVDSCommand(HostName
> = ovirt_node_1.mynet.local,
> CreateStorageDomainVDSCommandParameters:{hostId='967301de-be9f-472a-8e66-03c24f01fa71',
> storageDomain='StorageDomainStatic:{name='data',
> id='2a14e4bd-c273-40a0-9791-6d683d145558'}',
> args='s0OGKR-80PH-KVPX-Fi1q-M3e4-Jsh7-gv337P'})' execution failed:
> VDSGenericException: VDSNetworkException: Message timeout which can be
> caused by communication issues
>
> 2022-11-28 19:59:20,507+11 ERROR
> [org.ovirt.engine.core.bll.storage.domain.AddSANStorageDomainCommand]
> (default task-1) [77b0c12d] Command
> 'org.ovirt.engine.core.bll.storage.domain.AddSANStorageDomainCommand'
> failed: EngineException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
> VDSGenericException: VDSNetworkException: Message timeout which can be
> caused by communication issues (Failed with error VDS_NETWORK_ERROR and
> code 5022)
> ~~~
>
> I cannot see/detect any "communication issue" - but then again I'm not
> 100% sure what I should be looking for.

[ovirt-users] How are statistics collected for disk and network?

2022-11-25 Thread Murilo Morais
Good afternoon everyone.

Is there any way to collect disk and network statistics from VMs without
installing qemu-guest-agent? From what I've noticed, this data only appears
in Grafana after installing that package on each VM. Is that really the
case? Is there no way to collect this data without the package? For the
network interface, at least, I can see the VM interface's traffic, but this
data is not shown in Grafana.

Thanks in advance!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5MQD5A4EZO5NSWPP6SAJDJDGKPMBEIO5/


[ovirt-users] Re: Trouble with Ovirt mirrors

2022-11-23 Thread Murilo Morais
Good afternoon.

I can confirm exactly the same.

Em qua., 23 de nov. de 2022 às 14:09, Wesley Stewart 
escreveu:

> Trying to upgrade from 4.4 to 4.5 and following the directions.
>
> oVirt mirrors aren't working for me... Is this just me? Or is anyone else
> seeing this?
>
> [root@ovirt ~]# dnf install -y centos-release-ovirt45
> Updating Subscription Management repositories.
> oVirt upstream for CentOS Stream 8 - oVirt 4.5
>
> Errors during downloading metadata for repository 'ovirt-45-upstream':
>   - Status code: 503 for
> https://mirrorlist.ovirt.org/mirrorlist-ovirt-4.5-el8 (IP: 8.43.85.224)
> Error: Failed to download metadata for repo 'ovirt-45-upstream': Cannot
> prepare internal mirrorlist: Status code: 503 for
> https://mirrorlist.ovirt.org/mirrorlist-ovirt-4.5-el8 (IP: 8.43.85.224)
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QBPES2T6A5L7ZSX4XJDJNNFICURK7TS4/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MAQBK4DZQAD6HQTH3U533HS3MYVDT3B5/


[ovirt-users] Re: Problem trying to deploy oVirt 4.5 hosted engine on Ceph iSCSI

2022-11-21 Thread Murilo Morais
Hello, good afternoon!

Install the ansible-core package in version 2.12 to work around the issue.
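On EL8 that can look something like the following. The version glob is an
assumption; adjust it to whatever 2.12.x build your enabled repos actually
carry.

```shell
# Remove the too-new ansible-core that conflicts with hosted-engine-setup
dnf remove -y ansible-core

# Install a 2.12.x build, which ovirt-hosted-engine-setup accepts
dnf install -y 'ansible-core-2.12*'

# Confirm the installed version before retrying the setup
ansible --version
```

With ansible-core 2.12 in place, `dnf install ovirt-hosted-engine-setup`
should no longer hit the conflict.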

Em seg., 21 de nov. de 2022 às 11:40,  escreveu:

> Hello:
>
> I am following Sandro's guide to deploy hosted engine using Ceph and
> iSCSI.
>
> I have deployed a 3-node Ceph cluster and set up the iSCSI gateway. I am
> using Rocky Linux 8.6 as host OS.
>
> These are the steps I have taken:
>
> Install Rocky Linux on the hosts.
> Deploy Ceph Quincy cluster using cephadm and containers.
> Install and configure ceph-iscsi from repos pointed out in Sandro's
> guide.
> Provision an iSCSI target and lun in gwcli.
> Configure dnf and repos for RedHat derivatives
> (https://www.ovirt.org/download/install_on_rhel.html)
> Configure ovirt 4.5 repo:
> dnf install -y centos-release-ovirt45
> Reset and configure virt module:
> dnf module reset virt
> dnf module enable virt:rhel
> dnf distro-sync --nobest
> Install hosted apliance:
>  dnf install ovirt-engine-appliance
> Install hosted-engine-setup
> dnf install ovirt-hosted-engine-setup
>
> At this point I get an error about hosted-engine-setup conflicting with
> ansible-core:
>
> # dnf install ovirt-hosted-engine-setup
> Failed to set locale, defaulting to C.UTF-8
> Last metadata expiration check: 2:37:30 ago on Thu Nov 17 09:16:13 2022.
> Error:
>   Problem: package ovirt-hosted-engine-setup-2.6.6-1.el8.noarch requires
> ansible-core >= 2.12, but none of the providers can be installed
>- package ovirt-hosted-engine-setup-2.6.6-1.el8.noarch conflicts with
> ansible-core >= 2.13 provided by ansible-core-2.13.3-1.el8.x86_64
>- cannot install the best candidate for the job
> (try to add '--allowerasing' to command line to replace conflicting
> packages or '--skip-broken' to skip uninstallable packages or '--nobest'
> to use not only best candidate packages)
>
>
>
> Can you help me with this error? I am not sure about the order of the
> steps and the configured repos.
>
> Thanks in advance
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/DHOBTJU4B4H4TCL7QRBYJF7WRLCSS7QG/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YHT7UZDHKHT5GQFIG5WVEJWNTFBKJJPT/


[ovirt-users] Re: HostedEngine deployment failure - Help

2022-11-21 Thread Murilo Morais
Could you post the deployment logs? By the way, have you tried deploying
with the bridge (ovirtmgmt) already created?

Em seg., 21 de nov. de 2022 às 07:27,  escreveu:

> Is all oVirt Support now gone - is oVirt dead in the water?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/WL6MQJ44FOIHLVW7JIF4WHXGULFSGHZK/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VIZ3CDZYJU44E3ESFWWAW35ZSNS6BMUM/


[ovirt-users] Re: Blog Post - Using Ceph Only Storage For oVirt Datacenter by Sandro Bonazzola – Wednesday 14 July 2021

2022-11-10 Thread Murilo Morais
Matthew, good morning!

If using iSCSI there is no need to copy /etc/ceph/.

Configure the credentials through the Dashboard. Do not use the Mutual
User/Password (you can leave it blank), and in the Target configuration
uncheck the "ACL authentication" option.

Em qui., 10 de nov. de 2022 às 05:20, Matthew J Black <
matt...@peregrineit.net> escreveu:

> So, a follow-up (now that I'm in an actual position to go ahead and
> implement this):
>
> In the Blog post it says to:
>
> ~~~
> 1) Copy /etc/ceph directory from your ceph node to ovirt-engine host.
> 2) Change ownership of the files in /etc/ceph on the ovirt-engine host
> making them readable from the engine process:
>  # chown ovirt /etc/ceph/*
> ~~~
>
> I can discover the three Ceph iSCSI Gateways when I go to set up the
> storage, but I can't log into them (yes, I am using the correct CHAP
> username and p/word)
>
> The "ovirt" user does not exist on the host (pre- or post- engine install)
> - so my question is: Which user *should* own that folder once it is copied
> to the host?
>
> Or am I backing up the wrong tree?
>
> Anyone else using Ceph iSCSI Gateways with oVirt?
>
> Cheers
>
> Dulux-Oz
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/X7C4RIHBR3EUPLNS54SHETZJ3AFTRGIJ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OQGTRMLAWF6VDGDO7L47WJB7WWCBG2K6/


[ovirt-users] Host restarting with error

2022-11-03 Thread Murilo Morais
Good morning gentlemen.

I have a weird problem with a setup I finished.

I'm using oVirt 4.4 Hosted Engine with NFS as the Storage Domain. If the
Storage disconnects, within a few minutes the machine restarts and the red
LED lights up as if there had been a crash.

I managed to capture a screenshot of this moment once: OOM-Killer killed
modprobe, which doesn't make much sense since there is no VM other than the
Hosted Engine and the machine has 128G of RAM.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PERILTPPEDZFVUZ7QEIBXSZRKU7D3RKV/


[ovirt-users] Re: OVS Task Error when Add EL Host

2022-10-25 Thread Murilo Morais
Downgrade ansible-core to ansible-core-2.12. [1], [2]

[1]
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/NIANNOHGJHC6BFVUBRVMBWSDOGNQ6C4C/#7QKZWCMWEQ2UTJX4WN6MZYHK4FINL5MT
[2]
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/JI72US3JIOXBWTMTVVGDLVAZV7UJXBYF/#JI72US3JIOXBWTMTVVGDLVAZV7UJXBYF
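If the 2.12.x packages are still available in your enabled repos, the
downgrade and a pin against future updates might look like this. The
version glob and the versionlock plugin package name are assumptions for
EL8; verify them against your repos.

```shell
# Downgrade in place to a 2.12.x build
dnf downgrade -y 'ansible-core-2.12*'

# Optionally pin it so a later 'dnf update' does not pull 2.13 back in
# (requires the versionlock plugin)
dnf install -y python3-dnf-plugin-versionlock
dnf versionlock add ansible-core
```

The pin can be released later with `dnf versionlock delete ansible-core`
once the packaging conflict is resolved upstream.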

Em ter., 25 de out. de 2022 às 09:36, Ada  escreveu:

> I have the same issue.
>
>
>
> Error message while installing Ovirt hosted engine on newly installed
> node4.5.3.
>
>
>
> “ The ipaddr filter requires python’s netaddr to be installed on the
> ansible controller”
>
>
>
> Meanwhile netaddr is installed, as illustrated below
>
>
>
>
>
>
>
> Please advise on how to proceed.
>
>
>
> Sent from Mail for Windows
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/SBJTAMH3NBVNIHZQPLCN4NUUNGM5WQA6/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TYD4VDSZPW7FQTYF6NYZJMR3NI25H6WY/


[ovirt-users] Re: Start Hosted Engine - new installation 4.5.3.1

2022-10-23 Thread Murilo Morais
Try downgrading ansible-core. [1]

[1]
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/JI72US3JIOXBWTMTVVGDLVAZV7UJXBYF/#JI72US3JIOXBWTMTVVGDLVAZV7UJXBYF

Em sáb., 22 de out. de 2022 às 11:42,  escreveu:

> Hi,
> I am trying to configure "Hosted Engine Deployment".
> I get a message
> "The ipaddr filter requires python's netaddr be installed on the ansible
> controller".
> I think I have the necessary components installed. How to solve the
> problem?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/S5VUUDGJXHEWYNOFFO2N5F26D63PHL72/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EC6WAEXEEXFLHT3NGKLS2RWBUULBSE35/


[ovirt-users] Local and remote storage

2022-10-05 Thread Murilo Morais
Good evening everyone.

Guys, I have two machines running oVirt. I managed to set up CephFS as a
storage domain and everything is working perfectly, but each of these two
machines also has 4 NVMe drives, and I would like to know if there is any
way to use their local storage. I don't mind if the host dies and the VM
can't be brought up on the other, as this would be for very specific cases,
so there's no problem with that.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BJGSPVQBP7MJHHXUWMHQOUVTNPYDTRRG/