[ovirt-users] Re: Delete snapshots task hung

2019-10-16 Thread Leo David
Thank you for help Strahil,

But although there were 4 images with status 4 in the database, and I did
the update query on them, I got the same bloody message and the VMs won't start.
Eventually, I've decided to delete the VMs and do a from-scratch
installation. The persistent OpenShift VMs are still OK, so I should be able
to reuse the volumes somehow.
This is why a subscription is sometimes good, when there is a lack of
knowledge on my side. Production systems should not rely on upstream builds
unless there is a strong understanding of the product.
Again, thank you so much for trying to help me out!
Cheers,

Leo

On Tue, Oct 15, 2019, 07:00 Leo David  wrote:

> Thank you Strahil,
> I'll proceed with these steps and come back to you.
> Cheers,
>
> Leo
>
> On Tue, Oct 15, 2019, 06:45 Strahil  wrote:
>
>> Have you checked this thread :
>> https://lists.ovirt.org/pipermail/users/2016-April/039277.html
>>
>> You can switch to the postgres user, then 'source
>> /opt/rhn/postgresql10/enable' and then 'psql engine'.
>>
>> As per the thread you can find illegal snapshots via '*select
>> image_group_id,imagestatus from images where imagestatus =4;*'
>>
>> And then update them via '*update images set imagestatus =1 where
>> imagestatus = 4 and ;** commit'*
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Oct 13, 2019 15:45, Leo David  wrote:
>>
>> >
>> > Hi Everyone,
>> > I'm still not able to start the VMs... Could anyone give me
>> advice on sorting this out?
>> > Still having the "Bad volume specification" error, although the disk is
>> present on the storage.
>> > This issue would force me to reinstall a 10-node OpenShift cluster
>> from scratch, which would not be so funny..
>> > Thanks,
>> >
>> > Leo.
>> >
>> > On Fri, Oct 11, 2019 at 7:12 AM Strahil  wrote:
>>
>> >>
>> >> Nah...
>> >> It's done directly on the DB and I wouldn't recommend such action for
>> Production Cluster.
>> >> I've done it only once and it was based on some old mailing lists.
>> >>
>> >> Maybe someone from the dev can assist?
>> >>
>> >> On Oct 10, 2019 13:31, Leo David  wrote:
>>
>> >>>
>> >>> Thank you Strahil,
>> >>> Could you tell me what do you mean by changing status ? Is this
>> something to be done in the UI ?
>> >>>
>> >>> Thanks,
>> >>>
>> >>> Leo
>> >>>
>> >>> On Thu, Oct 10, 2019, 09:55 Strahil  wrote:
>>
>> 
>>  Maybe you can change the status of the VM in order for the engine to
>> know that it has to blockcommit the snapshots.
>> 
>>  Best Regards,
>>  Strahil Nikolov
>> 
>>  On Oct 9, 2019 09:02, Leo David  wrote:
>>
>> >
>> > Hi Everyone,
>> > Please let me know if you have any thoughts or recommendations that could
>> help me solve this issue..
>> > The real bad luck in this outage is that these 5 VMs are part of an
>> OpenShift deployment, and now we are not able to start it up...
>> > Before trying to sort this at the OCP platform level by replacing the
>> failed nodes with new VMs, I would rather prefer to do it at the oVirt
>> level and have the VMs starting, since the disks are still present on
>> gluster.
>> > Thank you so much !
>> >
>> >
>> > Leo
>>
>> >
>> >
>> >
>> > --
>> > Best regards, Leo David
>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WYE2EO4AOCTWK4EWGMDQ7KSTF3M6JR6Q/
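For readers landing on this thread: the fix Strahil describes (list images stuck in status 4, i.e. ILLEGAL, then reset them to 1, i.e. OK) can be illustrated with a throwaway SQLite table. This is only a sketch of the SQL logic; the real statements run in psql against the engine database, whose `images` table has many more columns, and the database should be backed up before any manual update.

```python
import sqlite3

# Throwaway stand-in for the engine's "images" table (the real schema has
# many more columns); status 4 = ILLEGAL, status 1 = OK.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE images (image_group_id TEXT, imagestatus INTEGER)")
db.executemany(
    "INSERT INTO images VALUES (?, ?)",
    [("disk-a", 1), ("disk-b", 4), ("disk-c", 4)],
)

# Step 1: list the illegal images (the SELECT from the thread).
illegal = db.execute(
    "SELECT image_group_id FROM images WHERE imagestatus = 4"
).fetchall()
print([row[0] for row in illegal])  # ['disk-b', 'disk-c']

# Step 2: reset them to OK (the UPDATE from the thread). On the real engine
# DB, constrain the WHERE clause to the affected image_group_ids only.
db.execute("UPDATE images SET imagestatus = 1 WHERE imagestatus = 4")
db.commit()
print(db.execute(
    "SELECT COUNT(*) FROM images WHERE imagestatus = 4").fetchone()[0])  # 0
```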


[ovirt-users] Re: oVirt 4.3.5/6 HC: Reinstall fails from WEB UI

2019-10-16 Thread adrianquintero
Strahil,
this is what I see for each service:
all services are active and running except for ovirt-ha-agent, which says 
"activating". Even though the rest of the services are active/running, they 
still show a few errors and warnings.

---
● sanlock.service - Shared Storage Lease Manager
   Loaded: loaded (/usr/lib/systemd/system/sanlock.service; disabled; vendor 
preset: disabled)
   Active: active (running) since Thu 2019-10-17 00:47:20 EDT; 2min 1s ago
  Process: 16495 ExecStart=/usr/sbin/sanlock daemon (code=exited, 
status=0/SUCCESS)
 Main PID: 2023
Tasks: 10
   CGroup: /system.slice/sanlock.service
   └─2023 /usr/sbin/sanlock daemon

Oct 17 00:47:20 host1.example.com systemd[1]: Starting Shared Storage Lease 
Manager...
Oct 17 00:47:20 host1.example.com systemd[1]: Started Shared Storage Lease 
Manager.
Oct 17 00:47:20 host1.example.com sanlock[16496]: 2019-10-17 00:47:20 33920 
[16496]: lockfile setlk error /var/run/sanlock/sanlock.pid: Resource 
temporarily unavailable
● supervdsmd.service - Auxiliary vdsm service for running helper functions as 
root
   Loaded: loaded (/usr/lib/systemd/system/supervdsmd.service; static; vendor 
preset: enabled)
   Active: active (running) since Thu 2019-10-17 00:43:06 EDT; 6min ago
 Main PID: 15277 (supervdsmd)
Tasks: 5
   CGroup: /system.slice/supervdsmd.service
   └─15277 /usr/bin/python2 /usr/share/vdsm/supervdsmd --sockfile 
/var/run/vdsm/svdsm.sock

Oct 17 00:43:06 host1.example.com systemd[1]: Started Auxiliary vdsm service 
for running helper functions as root.
Oct 17 00:43:07 host1.example.com supervdsmd[15277]: failed to load module 
nvdimm: libbd_nvdimm.so.2: cannot open shared object file: No such file or 
directory
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor 
preset: enabled)
   Active: active (running) since Thu 2019-10-17 00:47:27 EDT; 1min 54s ago
  Process: 16402 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh 
--post-stop (code=exited, status=0/SUCCESS)
  Process: 16499 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh 
--pre-start (code=exited, status=0/SUCCESS)
 Main PID: 16572 (vdsmd)
Tasks: 38
   CGroup: /system.slice/vdsmd.service
   └─16572 /usr/bin/python2 /usr/share/vdsm/vdsmd

Oct 17 00:47:28 host1.example.com vdsm[16572]: WARN MOM not available.
Oct 17 00:47:28 host1.example.com vdsm[16572]: WARN MOM not available, KSM 
stats will be missing.
Oct 17 00:47:28 host1.example.com vdsm[16572]: ERROR failed to retrieve Hosted 
Engine HA score '[Errno 2] No such file or directory'Is the Hosted Engine setup 
finished?
Oct 17 00:47:43 host1.example.com vdsm[16572]: ERROR failed to retrieve Hosted 
Engine HA score '[Errno 2] No such file or directory'Is the Hosted Engine setup 
finished?
Oct 17 00:47:58 host1.example.com vdsm[16572]: ERROR failed to retrieve Hosted 
Engine HA score '[Errno 2] No such file or directory'Is the Hosted Engine setup 
finished?
Oct 17 00:48:13 host1.example.com vdsm[16572]: ERROR failed to retrieve Hosted 
Engine HA score '[Errno 2] No such file or directory'Is the Hosted Engine setup 
finished?
Oct 17 00:48:28 host1.example.com vdsm[16572]: ERROR failed to retrieve Hosted 
Engine HA score '[Errno 2] No such file or directory'Is the Hosted Engine setup 
finished?
Oct 17 00:48:43 host1.example.com vdsm[16572]: ERROR failed to retrieve Hosted 
Engine HA score '[Errno 2] No such file or directory'Is the Hosted Engine setup 
finished?
Oct 17 00:48:58 host1.example.com vdsm[16572]: ERROR failed to retrieve Hosted 
Engine HA score '[Errno 2] No such file or directory'Is the Hosted Engine setup 
finished?
Oct 17 00:49:13 host1.example.com vdsm[16572]: ERROR failed to retrieve Hosted 
Engine HA score '[Errno 2] No such file or directory'Is the Hosted Engine setup 
finished?
● ovirt-ha-broker.service - oVirt Hosted Engine High Availability 
Communications Broker
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service; enabled; 
vendor preset: disabled)
   Active: active (running) since Thu 2019-10-17 00:44:11 EDT; 5min ago
 Main PID: 16379 (ovirt-ha-broker)
Tasks: 2
   CGroup: /system.slice/ovirt-ha-broker.service
   └─16379 /usr/bin/python 
/usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker

Oct 17 00:44:11 host1.example.com systemd[1]: Started oVirt Hosted Engine High 
Availability Communications Broker.
● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring 
Agent
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled; 
vendor preset: disabled)
   Active: activating (auto-restart) (Result: exit-code) since Thu 2019-10-17 
00:49:13 EDT; 8s ago
  Process: 16925 ExecStart=/usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent 
(code=exited, status=157)
 Main PID: 16925 (code=exited, status=157)

Oct 17 00:49:13 host1.example.com systemd[1]: Unit ovirt-ha-agent.s

[ovirt-users] Re: oVirt 4.3.5/6 HC: Reinstall fails from WEB UI

2019-10-16 Thread Strahil
Ssh to host and check the status of :
sanlock.service
supervdsmd.service
vdsmd.service
ovirt-ha-broker.service
ovirt-ha-agent.service

For example, if sanlock is working but supervdsmd is not, try to restart it.
If it fails, run:
systemctl cat supervdsmd.service

And execute the commands in sections:
ExecStartPre
ExecStart

And report any issues, then follow the next service in the chain.
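The "cat the unit, then run its Exec commands by hand" step above can be sketched as a small filter. The unit body below is a sample assembled from the supervdsmd status output quoted in this thread, not the authoritative unit file; in practice the input comes from `systemctl cat supervdsmd.service`.

```shell
# Pull the ExecStartPre/ExecStart command lines out of a unit file so they
# can be executed manually; also strip systemd's special exec prefixes
# ("-", "@", "+", "!") so the remainder is a plain command line.
extract_exec() {
  grep -E '^Exec(StartPre|Start)=' | sed 's/^[^=]*=//; s/^[-@+!]*//'
}

extract_exec <<'EOF'
[Service]
ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start
ExecStart=/usr/bin/python2 /usr/share/vdsm/supervdsmd --sockfile /var/run/vdsm/svdsm.sock
EOF
# prints:
# /usr/libexec/vdsm/vdsmd_init_common.sh --pre-start
# /usr/bin/python2 /usr/share/vdsm/supervdsmd --sockfile /var/run/vdsm/svdsm.sock
```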

Best Regards,
Strahil Nikolov

On Oct 16, 2019 23:52, adrianquint...@gmail.com wrote:
> Hi,
> I am trying to re-install a host from the web UI in oVirt 4.3.5, but it always
> fails and goes to "Setting Host state to Non-Operational"
> [...]

[ovirt-users] oVirt 4.3.5/6 HC: Reinstall fails from WEB UI

2019-10-16 Thread adrianquintero
Hi, 
I am trying to re-install a host from the web UI in oVirt 4.3.5, but it always 
fails and goes to "Setting Host state to Non-Operational"

From the engine.log I see the following WARN/ERROR:
2019-10-16 16:32:57,263-04 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-43) [491c8bd9] EVENT_ID: 
VDS_SET_NONOPERATIONAL_DOMAIN(522), Host host1.example.com cannot access the 
Storage Domain(s)  attached to the Data Center Default-DC1. Setting 
Host state to Non-Operational.
2019-10-16 16:32:57,271-04 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-43) [491c8bd9] EVENT_ID: 
VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host 
host1.example.com.There is no other host in the data center that can be used to 
test the power management settings. 
2019-10-16 16:32:57,276-04 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-43) [491c8bd9] EVENT_ID: 
CONNECT_STORAGE_POOL_FAILED(995), Failed to connect Host host1.example.com to 
Storage Pool Default-DC1
2019-10-16 16:35:06,151-04 ERROR [org.ovirt.engine.core.bll.InitVdsOnUpCommand] 
(EE-ManagedThreadFactory-engine-Thread-137245) [] Could not connect host 
'host1.example.com' to pool 'Default-DC1': Error storage pool connection: 
(u"spUUID=7d3fb14c-ebf0-11e9-9ee5-00163e05e135, 
msdUUID=4b87a5de-c976-4982-8b62-7cffef4a22d8, masterVersion=1, hostID=1, 
domainsMap={u'8c2df9c6-b505-4499-abb9-0d15db80f33e': u'active', 
u'4b87a5de-c976-4982-8b62-7cffef4a22d8': u'active', 
u'5d9f7d05-1fcc-4f99-9470-4e57cd15f128': u'active', 
u'fe24d88e-6acf-42d7-a857-eaf1f8deb24a': u'active'}",)
2019-10-16 16:35:06,248-04 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-91) [690baf86] EVENT_ID: 
VDS_SET_NONOPERATIONAL_DOMAIN(522), Host host1.example.com cannot access the 
Storage Domain(s)  attached to the Data Center Default-DC1. Setting 
Host state to Non-Operational.
2019-10-16 16:35:06,256-04 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-91) [690baf86] EVENT_ID: 
VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host 
host1.example.com.There is no other host in the data center that can be used to 
test the power management settings. 
2019-10-16 16:35:06,261-04 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-91) [690baf86] EVENT_ID: 
CONNECT_STORAGE_POOL_FAILED(995), Failed to connect Host host1.example.com to 
Storage Pool Default-DC1
2019-10-16 16:37:46,011-04 ERROR 
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] 
Connection timeout for host 'host1.example.com', last response arrived 1501 ms 
ago.
2019-10-16 16:41:57,095-04 ERROR [org.ovirt.engine.core.bll.InitVdsOnUpCommand] 
(EE-ManagedThreadFactory-engine-Thread-137527) [17f3aadd] Could not connect 
host 'host1.example.com' to pool 'Default-DC1': Error storage pool connection: 
(u"spUUID=7d3fb14c-ebf0-11e9-9ee5-00163e05e135, 
msdUUID=4b87a5de-c976-4982-8b62-7cffef4a22d8, masterVersion=1, hostID=1, 
domainsMap={u'8c2df9c6-b505-4499-abb9-0d15db80f33e': u'active', 
u'4b87a5de-c976-4982-8b62-7cffef4a22d8': u'active', 
u'5d9f7d05-1fcc-4f99-9470-4e57cd15f128': u'active', 
u'fe24d88e-6acf-42d7-a857-eaf1f8deb24a': u'active'}",)
2019-10-16 16:41:57,199-04 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-22) [508ddb44] EVENT_ID: 
VDS_SET_NONOPERATIONAL_DOMAIN(522), Host host1.example.com cannot access the 
Storage Domain(s)  attached to the Data Center Default-DC1. Setting 
Host state to Non-Operational.
2019-10-16 16:41:57,211-04 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-22) [508ddb44] EVENT_ID: 
VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host 
host1.example.com.There is no other host in the data center that can be used to 
test the power management settings. 
2019-10-16 16:41:57,216-04 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-22) [508ddb44] EVENT_ID: 
CONNECT_STORAGE_POOL_FAILED(995), Failed to connect Host host1.example.com to 
Storage Pool Default-DC1
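One way to triage an engine.log like the one above is to count how often each audit EVENT_ID recurs. The here-doc below reuses abbreviated lines from this post as sample input; on a real engine VM the input would be /var/log/ovirt-engine/engine.log.

```shell
# Count recurring EVENT_IDs in engine.log-style output. Tied counts may
# appear in either order depending on sort's tie-breaking.
grep -oE 'EVENT_ID: [A-Z_]+\([0-9,]+\)' <<'EOF' | sort | uniq -c | sort -rn
2019-10-16 16:32:57,263-04 WARN  [...AuditLogDirector] EVENT_ID: VDS_SET_NONOPERATIONAL_DOMAIN(522), Host host1.example.com ...
2019-10-16 16:32:57,271-04 WARN  [...AuditLogDirector] EVENT_ID: VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed ...
2019-10-16 16:32:57,276-04 WARN  [...AuditLogDirector] EVENT_ID: CONNECT_STORAGE_POOL_FAILED(995), Failed to connect Host ...
2019-10-16 16:35:06,248-04 WARN  [...AuditLogDirector] EVENT_ID: VDS_SET_NONOPERATIONAL_DOMAIN(522), Host host1.example.com ...
EOF
# The most frequent event sorts first, e.g.:
#   2 EVENT_ID: VDS_SET_NONOPERATIONAL_DOMAIN(522)
```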

Any ideas why this might be happening?
I have researched but have not been able to find a solution.

Thanks,

Adrian

[ovirt-users] Re: [oVirt HC] Gluster traffic still flows on mgmt even after choosing a different Gluster nw

2019-10-16 Thread Jayme
I had originally set up my cluster following this older guide at
https://blogs.ovirt.org/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/
and the FQDN steps did not exist at that time. After deployment I then
followed steps to add a gluster network and set that network for storage and
migration. What is the difference between what I did and the new method
gdeploy uses? If my gluster config is using hostnames on the management
network subnet, but the gluster network is set for gluster and migration
traffic, what happens?

On Wed, Oct 16, 2019 at 12:05 PM Jayme  wrote:

> Is there a way to fix this on an HCI deployment which is already in
> operation? I do have a separate gluster network which is chosen for
> migration and gluster traffic, but when I originally deployed I used just
> one set of host names, which resolve to the management network subnet.
>
> I appear to have a situation where gluster traffic may be going through
> both networks: I'm seeing what looks like gluster traffic on both the
> gluster interface and oVirt management.
>
> On Wed, Oct 16, 2019 at 11:34 AM Stefano Stagnaro <
> stefa...@prismatelecomtesting.com> wrote:
>
>> Thank you Simone for the clarifications.
>>
>> I've redeployed with both management and storage FQDNs; now everything
>> seems to be in its place.
>>
>> I only have a couple of questions:
>>
>> 1) In the Gluster deployment Wizard, section 1 (Hosts) and 2 (Additional
>> Hosts) are misleading; should be renamed in something like "Host
>> Configuration: Storage side" / "Host Configuration: Management side".
>>
>> 2) what is the real function of the "Gluster Network" cluster traffic
>> type? What it actually does?
>>
>> Thanks,
>> Stefano.
>>
>


[ovirt-users] Re: [oVirt HC] Gluster traffic still flows on mgmt even after choosing a different Gluster nw

2019-10-16 Thread Jayme
Is there a way to fix this on an HCI deployment which is already in
operation? I do have a separate gluster network which is chosen for
migration and gluster traffic, but when I originally deployed I used just
one set of host names, which resolve to the management network subnet.

I appear to have a situation where gluster traffic may be going through
both networks: I'm seeing what looks like gluster traffic on both the gluster
interface and oVirt management.

On Wed, Oct 16, 2019 at 11:34 AM Stefano Stagnaro <
stefa...@prismatelecomtesting.com> wrote:

> Thank you Simone for the clarifications.
>
> I've redeployed with both management and storage FQDNs; now everything
> seems to be in its place.
>
> I only have a couple of questions:
>
> 1) In the Gluster deployment Wizard, section 1 (Hosts) and 2 (Additional
> Hosts) are misleading; should be renamed in something like "Host
> Configuration: Storage side" / "Host Configuration: Management side".
>
> 2) what is the real function of the "Gluster Network" cluster traffic
> type? What it actually does?
>
> Thanks,
> Stefano.
>


[ovirt-users] Re: [oVirt HC] Gluster traffic still flows on mgmt even after choosing a different Gluster nw

2019-10-16 Thread Stefano Stagnaro
Thank you Simone for the clarifications.

I've redeployed with both management and storage FQDNs; now everything seems to 
be in its place.

I only have a couple of questions:

1) In the Gluster deployment wizard, sections 1 (Hosts) and 2 (Additional Hosts) 
are misleading; they should be renamed to something like "Host Configuration: 
Storage side" / "Host Configuration: Management side".

2) What is the real function of the "Gluster Network" cluster traffic type? 
What does it actually do?

Thanks,
Stefano.


[ovirt-users] Re: [oVirt HC] Gluster traffic still flows on mgmt even after choosing a different Gluster nw

2019-10-16 Thread Jayme
This is an interesting topic. I have a 3-node HCI cluster configured the
same way, and now I'm questioning whether or not my gluster traffic has been
using my 1Gb network instead of my 10Gb network on a separate subnet.

On Wed, Oct 16, 2019 at 10:08 AM Simone Tiraboschi 
wrote:

>
>
> On Wed, Oct 16, 2019 at 2:16 PM Stefano Stagnaro <
> stefa...@prismatelecomtesting.com> wrote:
>
>> Hi,
>>
>> I've deployed an oVirt HC starting with latest oVirt Node 4.3.6; this is
>> my simple network plan (FQDNs only resolves the front-end addresses):
>>
>> front-end   back-end
>> engine.ovirt192.168.110.10
>> node1.ovirt 192.168.110.11  192.168.210.11
>> node2.ovirt 192.168.110.12  192.168.210.12
>> node3.ovirt 192.168.110.13  192.168.210.13
>>
>>
> The storage traffic allocation over multiple subnets is implicitly set by
> name resolution and routing rules.
>
> Please use two distinct hostnames for each host: the first one should
> resolve only as an address on the management network and the second one as
> an address on the storage network.
>
> In the cockpit wizard for the hyperconverged deployment you will be
> prompted twice about the name of the three hosts: on the first tab (named
> 'Hosts') use the three host-names that resolves on the storage network.
> On the second tab ('Additional Hosts') please use the hostnames that are
> going to be resolved over the management network.
>
>
> at the end I followed the RHHI-V 1.6 Deployment Guide where, at chapter 9
>> [1], it suggests to create a logical network for Gluster traffic. Now I can
>> see, indeed, back-end addresses added in the address pool:
>>
>> [root@node1 ~]# gluster peer status
>> Number of Peers: 2
>>
>> Hostname: node3.ovirt
>> Uuid: 3fe33e8b-d073-4d7a-8bda-441c42317c92
>> State: Peer in Cluster (Connected)
>> Other names:
>> 192.168.210.13
>>
>> Hostname: node2.ovirt
>> Uuid: a95a9233-203d-4280-92b9-04217fa338d8
>> State: Peer in Cluster (Connected)
>> Other names:
>> 192.168.210.12
>>
>> The problem is that the Gluster traffic seems still to flow on the
>> management interfaces:
>>
>> [root@node1 ~]# tcpdump -i ovirtmgmt portrange 49152-49664
>>
>>
>> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
>> listening on ovirtmgmt, link-type EN10MB (Ethernet), capture size 262144
>> bytes
>> 14:04:58.746574 IP node2.ovirt.49129 > node1.ovirt.49153: Flags [.], ack
>> 484303246, win 18338, options [nop,nop,TS val 6760049 ecr 6760932], length 0
>> 14:04:58.753050 IP node2.ovirt.49131 > node1.ovirt.49152: Flags [P.], seq
>> 2507489191:2507489347, ack 2889633200, win 20874, options [nop,nop,TS val
>> 6760055 ecr 6757892], length 156
>> 14:04:58.753131 IP node2.ovirt.49131 > node1.ovirt.49152: Flags [P.], seq
>> 156:312, ack 1, win 20874, options [nop,nop,TS val 6760055 ecr 6757892],
>> length 156
>> 14:04:58.753142 IP node2.ovirt.49131 > node1.ovirt.49152: Flags [P.], seq
>> 312:468, ack 1, win 20874, options [nop,nop,TS val 6760055 ecr 6757892],
>> length 156
>> 14:04:58.753148 IP node2.ovirt.49131 > node1.ovirt.49152: Flags [P.], seq
>> 468:624, ack 1, win 20874, options [nop,nop,TS val 6760055 ecr 6757892],
>> length 156
>> 14:04:58.753203 IP node2.ovirt.49131 > node1.ovirt.49152: Flags [P.], seq
>> 624:780, ack 1, win 20874, options [nop,nop,TS val 6760055 ecr 6757892],
>> length 156
>> 14:04:58.753216 IP node2.ovirt.49131 > node1.ovirt.49152: Flags [P.], seq
>> 780:936, ack 1, win 20874, options [nop,nop,TS val 6760055 ecr 6757892],
>> length 156
>> 14:04:58.753231 IP node1.ovirt.49152 > node2.ovirt.49131: Flags [.], ack
>> 936, win 15566, options [nop,nop,TS val 6760978 ecr 6760055], length 0
>> ...
>>
>> and not yet on the eth1 I dedicated to gluster:
>>
>> [root@node1 ~]# tcpdump -i eth1 portrange 49152-49664
>> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
>> listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
>>
>> What am I missing here? What can I do to force the Gluster traffic to
>> really flow on dedicated Gluster network?
>>
>> Thank you,
>> Stefano.
>>
>> [1] https://red.ht/2MiZ4Ge
>>
>

[ovirt-users] Re: [oVirt HC] Gluster traffic still flows on mgmt even after choosing a different Gluster nw

2019-10-16 Thread Simone Tiraboschi
On Wed, Oct 16, 2019 at 2:16 PM Stefano Stagnaro <
stefa...@prismatelecomtesting.com> wrote:

> Hi,
>
> I've deployed an oVirt HC starting with latest oVirt Node 4.3.6; this is
> my simple network plan (FQDNs only resolves the front-end addresses):
>
> front-end   back-end
> engine.ovirt192.168.110.10
> node1.ovirt 192.168.110.11  192.168.210.11
> node2.ovirt 192.168.110.12  192.168.210.12
> node3.ovirt 192.168.110.13  192.168.210.13
>
>
The storage traffic allocation over multiple subnets is implicitly set by
name resolution and routing rules.

Please use two distinct hostnames for each host: the first one should
resolve only as an address on the management network and the second one as
an address on the storage network.

In the cockpit wizard for the hyperconverged deployment you will be
prompted twice about the name of the three hosts: on the first tab (named
'Hosts') use the three host-names that resolves on the storage network.
On the second tab ('Additional Hosts') please use the hostnames that are
going to be resolved over the management network.
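A minimal /etc/hosts sketch of this dual-hostname scheme, using the addresses from Stefano's plan; the `-storage` names are hypothetical placeholders for whatever back-end hostnames you choose:

```
# front-end (management) names - use these on the 'Additional Hosts' tab
192.168.110.11  node1.ovirt
192.168.110.12  node2.ovirt
192.168.110.13  node3.ovirt

# back-end (storage) names - use these on the first 'Hosts' tab,
# so gluster peers and bricks bind to the storage subnet
192.168.210.11  node1-storage.ovirt
192.168.210.12  node2-storage.ovirt
192.168.210.13  node3-storage.ovirt
```

With this split, gluster only ever sees names that resolve on the 192.168.210.0/24 subnet, which is what steers the brick traffic off the management network.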


> [Stefano's original message, including the full tcpdump output, appears
> later in this digest; quote trimmed here]
>
> What am I missing here? What can I do to force the Gluster traffic to
> really flow on dedicated Gluster network?
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NGQG6SIWR7R4B22HDZNBG53K2AWBF3CS/


[ovirt-users] [oVirt HC] Gluster traffic still flows on mgmt even after choosing a different Gluster nw

2019-10-16 Thread Stefano Stagnaro
Hi,

I've deployed an oVirt HC setup starting from the latest oVirt Node 4.3.6; this 
is my simple network plan (the FQDNs only resolve to the front-end addresses):

front-end   back-end
engine.ovirt192.168.110.10
node1.ovirt 192.168.110.11  192.168.210.11
node2.ovirt 192.168.110.12  192.168.210.12
node3.ovirt 192.168.110.13  192.168.210.13

In the end I followed the RHHI-V 1.6 Deployment Guide, which in chapter 9 [1] 
suggests creating a logical network for Gluster traffic. Indeed, I can now see 
the back-end addresses added to the address pool:

[root@node1 ~]# gluster peer status
Number of Peers: 2

Hostname: node3.ovirt
Uuid: 3fe33e8b-d073-4d7a-8bda-441c42317c92
State: Peer in Cluster (Connected)
Other names:
192.168.210.13

Hostname: node2.ovirt
Uuid: a95a9233-203d-4280-92b9-04217fa338d8
State: Peer in Cluster (Connected)
Other names:
192.168.210.12

The problem is that the Gluster traffic still seems to flow over the management 
interfaces:

[root@node1 ~]# tcpdump -i ovirtmgmt portrange 49152-49664
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ovirtmgmt, link-type EN10MB (Ethernet), capture size 262144 bytes
14:04:58.746574 IP node2.ovirt.49129 > node1.ovirt.49153: Flags [.], ack 
484303246, win 18338, options [nop,nop,TS val 6760049 ecr 6760932], length 0
14:04:58.753050 IP node2.ovirt.49131 > node1.ovirt.49152: Flags [P.], seq 
2507489191:2507489347, ack 2889633200, win 20874, options [nop,nop,TS val 
6760055 ecr 6757892], length 156
14:04:58.753131 IP node2.ovirt.49131 > node1.ovirt.49152: Flags [P.], seq 
156:312, ack 1, win 20874, options [nop,nop,TS val 6760055 ecr 6757892], length 
156
14:04:58.753142 IP node2.ovirt.49131 > node1.ovirt.49152: Flags [P.], seq 
312:468, ack 1, win 20874, options [nop,nop,TS val 6760055 ecr 6757892], length 
156
14:04:58.753148 IP node2.ovirt.49131 > node1.ovirt.49152: Flags [P.], seq 
468:624, ack 1, win 20874, options [nop,nop,TS val 6760055 ecr 6757892], length 
156
14:04:58.753203 IP node2.ovirt.49131 > node1.ovirt.49152: Flags [P.], seq 
624:780, ack 1, win 20874, options [nop,nop,TS val 6760055 ecr 6757892], length 
156
14:04:58.753216 IP node2.ovirt.49131 > node1.ovirt.49152: Flags [P.], seq 
780:936, ack 1, win 20874, options [nop,nop,TS val 6760055 ecr 6757892], length 
156
14:04:58.753231 IP node1.ovirt.49152 > node2.ovirt.49131: Flags [.], ack 936, 
win 15566, options [nop,nop,TS val 6760978 ecr 6760055], length 0
...

and none yet on eth1, the interface I dedicated to Gluster:

[root@node1 ~]# tcpdump -i eth1 portrange 49152-49664
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes

What am I missing here? What can I do to force the Gluster traffic to actually 
flow over the dedicated Gluster network?
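
For what it's worth, one thing to check (a sketch, not a verified fix):
Gluster sends brick traffic to the hostname each brick was registered with,
so if the volumes were created with the front-end FQDNs, traffic will stay
on ovirtmgmt no matter what 'Other names' the peers have. The volume name
and brick path below are placeholders:

```shell
# Show which hostnames the bricks were registered under (placeholder volume).
gluster volume info engine | grep '^Brick'

# If the bricks use front-end names, they can be re-registered on the
# back-end addresses with reset-brick. Caution: run this one brick at a
# time, on a healthy replica set only.
gluster volume reset-brick engine \
    node1.ovirt:/gluster_bricks/engine/engine start
gluster volume reset-brick engine \
    node1.ovirt:/gluster_bricks/engine/engine \
    192.168.210.11:/gluster_bricks/engine/engine commit force
```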

Thank you,
Stefano.

[1] https://red.ht/2MiZ4Ge
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/U3ZAM3DGE3EBGCWBIM37PTKFNULN2KTF/


[ovirt-users] Re: ovirt and jackson security

2019-10-16 Thread Martin Perina
On Wed, Oct 16, 2019 at 12:12 PM Fabrice Bacchella <
fabrice.bacche...@icloud.com> wrote:

> When I launch ovirt 4.3.6, I see in the command line of the ovirt-engine:
>
> -Djackson.deserialization.whitelist.packages=org,com,java,javax
>
> That whitelist almost everything. Isn't that dangerous ?
>

There is no other easy way to do it, because we serialize a huge number of
classes into JSON. The backward-incompatible way the Jackson CVE was fixed
forced this broad whitelist, but oVirt is not actually affected by the CVE:
we use Jackson directly only when storing data in the database and for the
internal engine-to-VDSM communication. So unless an attacker can tamper with
the data in your database, or sits on the internal network and can
masquerade as a valid host and return malicious JSON to the engine, you are
not affected.
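
To illustrate Fabrice's point (this is a toy stand-in, not oVirt's actual
check, and the class names are made up): a package-prefix whitelist with
entries like org,com,java,javax admits nearly any class name.

```shell
# Toy stand-in for a package-prefix whitelist check; not oVirt code.
is_whitelisted() {
  case "$1" in
    org.*|com.*|java.*|javax.*) echo allowed ;;
    *)                          echo blocked ;;
  esac
}

is_whitelisted org.ovirt.engine.core.SomeDto   # prints "allowed"
is_whitelisted com.example.gadget.Exploit      # prints "allowed" as well
is_whitelisted sun.misc.Unsafe                 # prints "blocked"
```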


> When I read this:
> https://medium.com/@cowtowncoder/on-jackson-cves-dont-panic-here-is-what-you-need-to-know-54cd0d6e8062
> I think the white list should be as small as possible.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/GZODZPENEN2RU5LJDWXSEYKVRCFPIHOU/
>


-- 
Martin Perina
Manager, Software Engineering
Red Hat Czech s.r.o.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MLLQEJEVP64YRPMVVA7F3VMFGJU7KDMY/


[ovirt-users] ovirt and jackson security

2019-10-16 Thread Fabrice Bacchella
When I launch ovirt 4.3.6, I see in the command line of the ovirt-engine:

-Djackson.deserialization.whitelist.packages=org,com,java,javax

That whitelists almost everything. Isn't that dangerous?

When I read this: 
https://medium.com/@cowtowncoder/on-jackson-cves-dont-panic-here-is-what-you-need-to-know-54cd0d6e8062
I think the whitelist should be as small as possible.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GZODZPENEN2RU5LJDWXSEYKVRCFPIHOU/


[ovirt-users] Re: oVirt Self-Hosted Engine - VLAN error

2019-10-16 Thread Dominik Holler
On Mon, Oct 14, 2019 at 8:06 PM  wrote:

> Hi folks,
>
> We have spent some days trying to deploy the oVirt Self-Hosted Engine, but
> something seems to be wrong.
>
> The deploy process fails at this point:
>
>
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Wait for the host to be up]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.hosted_engine_setup : set_fact]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Collect error events from the
> Engine]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Generate the error message
> from the engine events]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Fail with error description]
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The
> host has been set in non_operational status, deployment errors:   code 505:
> Host srv-virt7.cloud.blueit installation failed. Failed to configure
> management network on the host.,code 9000: Failed to verify Power
> Management configuration for Host srv-virt7.cloud.blueit.,   fix
> accordingly and re-deploy."}
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Fetch logs from the engine VM]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Set destination directory path]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Create destination directory]
> [ INFO  ] changed: [localhost]
>
> and this
>
>
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Remove local vm dir]
> [ INFO  ] changed: [localhost]
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Remove temporary entry in
> /etc/hosts for the local VM]
> [ INFO  ] changed: [localhost]
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Notify the user about a
> failure]
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The
> system may not be provisioned according to the playbook results: please
> check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
> [ ERROR ] Failed to execute stage 'Closing up': Failed executing
> ansible-playbook
> [ INFO  ] Stage: Clean up
> [ INFO  ] Cleaning temporary resources
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Execute just a specific set of
> steps]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Force facts gathering]
>
>
> But we did not find anything relevant in the logs.
>
>
Can you share the relevant part of vdsm.log and engine.log (inside engine
VM, if you have a way to access the VM)?


> Our environtment is:
>
> OS Version: RHEL - 7 - 7.1908.0.el7.centos
> OS Description: oVirt Node 4.3.6
> Kernel Version: 3.10.0 - 1062.1.1.el7.x86_64
> KVM Version: 2.12.0 - 33.1.el7
> LIBVIRT Version: libvirt-4.5.0-23.el7_7.1
> VDSM Version: vdsm-4.30.33-1.el7
>
> The ovirtmgmt bridge has TAGGED VLAN 10
>
> [root@srv-virt7 ~]# brctl show
> bridge name bridge id   STP enabled interfaces
> ovirtmgmt   8000.6cae8b284832   no  eno2.10
>
>
>
> Does anyone have any idea or tip about this?
>
>
> Regards
>
> Carlos
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6AGHARXVEZ5ZUU6AALH2GP7AK62Q7DZ7/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GHT4TNCZDWRU42CQ6S6JNAJ5SCOLQRCA/


[ovirt-users] Re: Cannot enable maintenance mode

2019-10-16 Thread Lukas Svaty
Did you "consider manual intervention", i.e. stop or migrate the VMs that
are running on that host?

Putting a host into maintenance migrates all of its VMs somewhere else, so
you may simply be hitting a migration problem: try migrating the VMs to a
different destination host manually, or power them off.
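
If the engine still reports VMs that vdsClient can no longer see, it may
help to ask the engine directly which VMs it associates with the host. A
sketch via the REST API; the engine FQDN, credentials, and host name below
are placeholders:

```shell
# List the VMs the engine believes run on the host (all values are
# placeholders; the search syntax matches the one used in the admin portal).
curl -s -k -u 'admin@internal:PASSWORD' -H 'Accept: application/xml' \
  'https://engine.example.com/ovirt-engine/api/vms?search=host%3DCentOS-H1'
```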

On Wed, Oct 16, 2019 at 9:15 AM Bruno Martins 
wrote:

> Hey guys,
>
> There are really no options left here? Is there something else I should
> check?
>
> Thank you!
>
> [earlier messages in this thread trimmed; the full exchange appears later
> in this digest]
>


-- 

Lukas Svaty

RHV QE
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XP25AFFHG7Q6UYLH3CULKRPB36GI36OZ/


[ovirt-users] How to activate Windows systems in stateless pool mode?

2019-10-16 Thread zhou...@vip.friendtimes.net
I activated the Windows 10 system and made it into a template, then used the
template to create a stateless pool with 20 VMs. But those VMs cannot be
activated. How can I activate them?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/THNEGOQG5BMMVS7PJAVLBVS3UJ7NJSJQ/


[ovirt-users] Re: Cannot enable maintenance mode

2019-10-16 Thread Bruno Martins
Hey guys,

There are really no options left here? Is there something else I should check?

Thank you!

-Original Message-
From: Bruno Martins  
Sent: 3 de outubro de 2019 22:41
To: Benny Zlotnik 
Cc: users@ovirt.org
Subject: [ovirt-users] Re: Cannot enable maintenance mode

Hello Benny,

I did. No luck, still...

Cheers!

-Original Message-
From: Benny Zlotnik 
Sent: 2 de outubro de 2019 19:19
To: Bruno Martins 
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Re: Cannot enable maintenance mode

Did you try the "Confirm Host has been rebooted" button?

On Wed, Oct 2, 2019 at 9:17 PM Bruno Martins  wrote:
>
> Hello guys,
>
> No ideas for this issue?
>
> Thanks for your cooperation!
>
> Kind regards,
>
> -Original Message-
> From: Bruno Martins 
> Sent: 29 de setembro de 2019 16:16
> To: users@ovirt.org
> Subject: [ovirt-users] Cannot enable maintenance mode
>
> Hello guys,
>
> I am being unable to put a host from a two nodes cluster into maintenance 
> mode in order to remove it from the cluster afterwards.
>
> This is what I see in engine.log:
>
> 2019-09-27 16:20:58,364 INFO  
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (org.ovirt.thread.pool-6-thread-45) [4cc251c9] Correlation ID: 4cc251c9, Job 
> ID: 65731fbb-db34-49a9-ab56-9fba59bc0ee0, Call Stack: null, Custom Event ID: 
> -1, Message: Host CentOS-H1 cannot change into maintenance mode - not all Vms 
> have been migrated successfully. Consider manual intervention: 
> stopping/migrating Vms: Non interactive user (User: admin).
>
> Host has been rebooted multiple times. vdsClient shows no VM's running.
>
> What else can I do?
>
> Kind regards,
>
> Bruno Martins
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org Privacy
> Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/X5FJWFW7
> GXNW6YWRPFWOKA6VU3RH4WD3/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org Privacy
> Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/DD5DW6KK
> OOHGL3WFEKIIIS57BN3VWMAQ/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: 
https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/64GZQKZA7LX7KLMXZ5K2BS46AJVVAMPZ/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JJPEC7RDG3AUSAQAYJO4EZNKONUA3D5F/