[ovirt-users] Re: Number of data bricks per oVirt HCI host

2020-10-13 Thread Strahil Nikolov via Users
Imagine you have a host with 60 spinning disks: I would recommend splitting them into 
groups of 10-12 disks, and each group then becomes one brick (5-6 bricks per host).

Keep in mind that once you start using many bricks (some articles say hundreds, but 
no exact number is given), you should consider brick multiplexing 
(cluster.brick-multiplex).

So you can use as many bricks as you want, but each brick requires CPU time 
(a separate thread), a TCP port and memory.
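
For reference, multiplexing is a cluster-wide Gluster option; a minimal sketch of 
turning it on with the standard gluster CLI (run on any node of the trusted pool):

gluster volume set all cluster.brick-multiplex on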

In my setup I use multiple bricks in order to spread the load, via LACP, over 
several small (1 GbE) NICs.


The only "limitation" is to keep your data copies on separate hosts, so when you 
create the volume it is strongly advisable to follow this model (see the example 
command after the list):

hostA:/path/to/brick
hostB:/path/to/brick
hostC:/path/to/brick
hostA:/path/to/brick2
hostB:/path/to/brick2
hostC:/path/to/brick2
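
As a sketch only (the volume name is a placeholder and the brick paths are the ones 
from the list above; 'replica 3' matches the JBOD note below), the corresponding 
create command would look like:

gluster volume create myvol replica 3 \
    hostA:/path/to/brick hostB:/path/to/brick hostC:/path/to/brick \
    hostA:/path/to/brick2 hostB:/path/to/brick2 hostC:/path/to/brick2

Gluster groups every 3 consecutive bricks into a replica set, which is why the host 
ordering matters.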

Keep in mind that in JBOD mode Red Hat supports only 'replica 3' volumes.

From my perspective, JBOD is suitable for NVMe/SSD devices, while spinning disks 
should sit behind some type of RAID (maybe RAID10 for performance).


Best Regards,
Strahil Nikolov






On Wednesday, October 14, 2020, 06:34:17 GMT+3, C Williams 
 wrote: 





Hello,

I am getting some questions from others on my team.

I have some hosts that could provide up to 6 JBOD disks for oVirt data (not 
arbiter) bricks.

Would this be workable / advisable? I'm under the impression there should not 
be more than 1 data brick per HCI host.

Please correct me if I'm wrong.

Thank You For Your Help !


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XP3KR6S47ZIVK3K3AWIXCJQG7ZKTTO7Q/


[ovirt-users] Number of data bricks per oVirt HCI host

2020-10-13 Thread C Williams
Hello,

I am getting some questions from others on my team.

I have some hosts that could provide up to 6 JBOD disks for oVirt data (not
arbiter) bricks.

Would this be workable / advisable? I'm under the impression there should
not be more than 1 data brick per HCI host.

Please correct me if I'm wrong.

Thank You For Your Help !
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OGZISNFLEG3GJPDQGWTT7TWRPPAMLPFQ/


[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-13 Thread Jeff Bailey


On 10/13/2020 5:01 PM, Michael Thomas wrote:
After getting past the proxy issue, I was finally able to run the 
engine-setup --reconfigure-optional-components.  The new 
ManagedBlockStorage storage domain exists, and I was able to create a 
disk.  However, I am unable to attach the disk to a running VM.


The engine.log shows the following, with a reference to a possible 
cinderlib error ("cinderlib execution failed"):


2020-10-13 15:15:23,508-05 INFO 
[org.ovirt.engine.core.bll.UpdateVmCommand] (default task-13) 
[c73386d0-a713-4c37-bc9b-e7c4f9083f78] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[grafana=VM_NAME]', 
sharedLocks='[5676d441-660e-4d9f-a586-e53ff0ea054b=VM]'}'
2020-10-13 15:15:23,522-05 INFO 
[org.ovirt.engine.core.bll.UpdateVmCommand] (default task-13) 
[c73386d0-a713-4c37-bc9b-e7c4f9083f78] Running command: 
UpdateVmCommand internal: false. Entities affected :  ID: 
5676d441-660e-4d9f-a586-e53ff0ea054b Type: VMAction group 
EDIT_VM_PROPERTIES with role type USER
2020-10-13 15:15:23,536-05 INFO 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(default task-13) [c73386d0-a713-4c37-bc9b-e7c4f9083f78] EVENT_ID: 
USER_UPDATE_VM(35), VM grafana configuration was updated by 
michael.thomas@internal-authz.
2020-10-13 15:15:23,539-05 INFO 
[org.ovirt.engine.core.bll.UpdateVmCommand] (default task-13) 
[c73386d0-a713-4c37-bc9b-e7c4f9083f78] Lock freed to object 
'EngineLock:{exclusiveLocks='[grafana=VM_NAME]', 
sharedLocks='[5676d441-660e-4d9f-a586-e53ff0ea054b=VM]'}'
2020-10-13 15:15:24,129-05 INFO 
[org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] 
(default task-13) [f8829338-b040-46d0-a838-3cf28869637c] Lock Acquired 
to object 
'EngineLock:{exclusiveLocks='[5419640e-445f-4b3f-a29d-b316ad031b7a=DISK]', 
sharedLocks=''}'
2020-10-13 15:15:24,147-05 INFO 
[org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] 
(default task-13) [f8829338-b040-46d0-a838-3cf28869637c] Running 
command: AttachDiskToVmCommand internal: false. Entities affected :  
ID: 5676d441-660e-4d9f-a586-e53ff0ea054b Type: VMAction group 
CONFIGURE_VM_STORAGE with role type USER,  ID: 
5419640e-445f-4b3f-a29d-b316ad031b7a Type: DiskAction group 
ATTACH_DISK with role type USER
2020-10-13 15:15:24,152-05 INFO 
[org.ovirt.engine.core.bll.storage.disk.managedblock.ConnectManagedBlockStorageDeviceCommand] 
(default task-13) [7cb262cc] Running command: 
ConnectManagedBlockStorageDeviceCommand internal: true.
2020-10-13 15:15:26,006-05 ERROR 
[org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor] 
(default task-13) [7cb262cc] cinderlib execution failed:
2020-10-13 15:15:26,011-05 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] 
(default task-13) [7cb262cc] START, HotPlugDiskVDSCommand(HostName = 
ovirt4-mgmt.ldas.ligo-la.caltech.edu, 
HotPlugDiskVDSParameters:{hostId='61da4cdf-638b-4cbd-9921-5be820998d31', 
vmId='5676d441-660e-4d9f-a586-e53ff0ea054b', 
diskId='5419640e-445f-4b3f-a29d-b316ad031b7a'}), log id: 660ebc9e
2020-10-13 15:15:26,012-05 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] 
(default task-13) [7cb262cc] Failed in 'HotPlugDiskVDS' method, for 
vds: 'ovirt4-mgmt.ldas.ligo-la.caltech.edu'; host: 
'ovirt4-mgmt.ldas.ligo-la.caltech.edu': null
2020-10-13 15:15:26,012-05 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] 
(default task-13) [7cb262cc] Command 'HotPlugDiskVDSCommand(HostName = 
ovirt4-mgmt.ldas.ligo-la.caltech.edu, 
HotPlugDiskVDSParameters:{hostId='61da4cdf-638b-4cbd-9921-5be820998d31', 
vmId='5676d441-660e-4d9f-a586-e53ff0ea054b', 
diskId='5419640e-445f-4b3f-a29d-b316ad031b7a'})' execution failed: null
2020-10-13 15:15:26,012-05 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] 
(default task-13) [7cb262cc] FINISH, HotPlugDiskVDSCommand, return: , 
log id: 660ebc9e
2020-10-13 15:15:26,012-05 ERROR 
[org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] 
(default task-13) [7cb262cc] Command 
'org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand' failed: 
EngineException: java.lang.NullPointerException (Failed with error 
ENGINE and code 5001)
2020-10-13 15:15:26,013-05 ERROR 
[org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] 
(default task-13) [7cb262cc] Transaction rolled-back for command 
'org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand'.
2020-10-13 15:15:26,021-05 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(default task-13) [7cb262cc] EVENT_ID: 
USER_FAILED_ATTACH_DISK_TO_VM(2,017), Failed to attach Disk 
testvm_disk to VM grafana (User: michael.thomas@internal-authz).
2020-10-13 15:15:26,021-05 INFO 
[org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] 
(default task-13) [7cb262cc] Lock freed to object 
'EngineLock:{exclusiveLocks='[5419640e-445f-4b3f-a29d-b316ad031b7a=DISK]', 
sharedLocks=''}'


The /var/log/cinder/ directory on the ovirt node is empty, and doesn't 
exist on the engine itself.

[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-13 Thread Michael Thomas
After getting past the proxy issue, I was finally able to run the 
engine-setup --reconfigure-optional-components.  The new 
ManagedBlockStorage storage domain exists, and I was able to create a 
disk.  However, I am unable to attach the disk to a running VM.


The engine.log shows the following, with a reference to a possible 
cinderlib error ("cinderlib execution failed"):


2020-10-13 15:15:23,508-05 INFO 
[org.ovirt.engine.core.bll.UpdateVmCommand] (default task-13) 
[c73386d0-a713-4c37-bc9b-e7c4f9083f78] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[grafana=VM_NAME]', 
sharedLocks='[5676d441-660e-4d9f-a586-e53ff0ea054b=VM]'}'
2020-10-13 15:15:23,522-05 INFO 
[org.ovirt.engine.core.bll.UpdateVmCommand] (default task-13) 
[c73386d0-a713-4c37-bc9b-e7c4f9083f78] Running command: UpdateVmCommand 
internal: false. Entities affected :  ID: 
5676d441-660e-4d9f-a586-e53ff0ea054b Type: VMAction group 
EDIT_VM_PROPERTIES with role type USER
2020-10-13 15:15:23,536-05 INFO 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(default task-13) [c73386d0-a713-4c37-bc9b-e7c4f9083f78] EVENT_ID: 
USER_UPDATE_VM(35), VM grafana configuration was updated by 
michael.thomas@internal-authz.
2020-10-13 15:15:23,539-05 INFO 
[org.ovirt.engine.core.bll.UpdateVmCommand] (default task-13) 
[c73386d0-a713-4c37-bc9b-e7c4f9083f78] Lock freed to object 
'EngineLock:{exclusiveLocks='[grafana=VM_NAME]', 
sharedLocks='[5676d441-660e-4d9f-a586-e53ff0ea054b=VM]'}'
2020-10-13 15:15:24,129-05 INFO 
[org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default 
task-13) [f8829338-b040-46d0-a838-3cf28869637c] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[5419640e-445f-4b3f-a29d-b316ad031b7a=DISK]', 
sharedLocks=''}'
2020-10-13 15:15:24,147-05 INFO 
[org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default 
task-13) [f8829338-b040-46d0-a838-3cf28869637c] Running command: 
AttachDiskToVmCommand internal: false. Entities affected :  ID: 
5676d441-660e-4d9f-a586-e53ff0ea054b Type: VMAction group 
CONFIGURE_VM_STORAGE with role type USER,  ID: 
5419640e-445f-4b3f-a29d-b316ad031b7a Type: DiskAction group ATTACH_DISK 
with role type USER
2020-10-13 15:15:24,152-05 INFO 
[org.ovirt.engine.core.bll.storage.disk.managedblock.ConnectManagedBlockStorageDeviceCommand] 
(default task-13) [7cb262cc] Running command: 
ConnectManagedBlockStorageDeviceCommand internal: true.
2020-10-13 15:15:26,006-05 ERROR 
[org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor] 
(default task-13) [7cb262cc] cinderlib execution failed:
2020-10-13 15:15:26,011-05 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] 
(default task-13) [7cb262cc] START, HotPlugDiskVDSCommand(HostName = 
ovirt4-mgmt.ldas.ligo-la.caltech.edu, 
HotPlugDiskVDSParameters:{hostId='61da4cdf-638b-4cbd-9921-5be820998d31', 
vmId='5676d441-660e-4d9f-a586-e53ff0ea054b', 
diskId='5419640e-445f-4b3f-a29d-b316ad031b7a'}), log id: 660ebc9e
2020-10-13 15:15:26,012-05 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] 
(default task-13) [7cb262cc] Failed in 'HotPlugDiskVDS' method, for vds: 
'ovirt4-mgmt.ldas.ligo-la.caltech.edu'; host: 
'ovirt4-mgmt.ldas.ligo-la.caltech.edu': null
2020-10-13 15:15:26,012-05 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] 
(default task-13) [7cb262cc] Command 'HotPlugDiskVDSCommand(HostName = 
ovirt4-mgmt.ldas.ligo-la.caltech.edu, 
HotPlugDiskVDSParameters:{hostId='61da4cdf-638b-4cbd-9921-5be820998d31', 
vmId='5676d441-660e-4d9f-a586-e53ff0ea054b', 
diskId='5419640e-445f-4b3f-a29d-b316ad031b7a'})' execution failed: null
2020-10-13 15:15:26,012-05 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] 
(default task-13) [7cb262cc] FINISH, HotPlugDiskVDSCommand, return: , 
log id: 660ebc9e
2020-10-13 15:15:26,012-05 ERROR 
[org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default 
task-13) [7cb262cc] Command 
'org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand' failed: 
EngineException: java.lang.NullPointerException (Failed with error 
ENGINE and code 5001)
2020-10-13 15:15:26,013-05 ERROR 
[org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default 
task-13) [7cb262cc] Transaction rolled-back for command 
'org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand'.
2020-10-13 15:15:26,021-05 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(default task-13) [7cb262cc] EVENT_ID: 
USER_FAILED_ATTACH_DISK_TO_VM(2,017), Failed to attach Disk testvm_disk 
to VM grafana (User: michael.thomas@internal-authz).
2020-10-13 15:15:26,021-05 INFO 
[org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default 
task-13) [7cb262cc] Lock freed to object 
'EngineLock:{exclusiveLocks='[5419640e-445f-4b3f-a29d-b316ad031b7a=DISK]', 
sharedLocks=''}'


The /var/log/cinder/ directory on the ovirt node is empty, and doesn't 
exist on the engine itself.


To verify that 

[ovirt-users] Re: Latest ManagedBlockDevice documentation

2020-10-13 Thread Michael Thomas

Setting both http_proxy and https_proxy fixed the issue.

Thanks for the tip!

--Mike


I am not sure, it's been a long time since I tried that.

Feel free to file a bug.

You can also try setting env var 'http_proxy' for engine-setup, e.g.:

http_proxy=MY_PROXY_URL engine-setup --reconfigure-optional-components

Alternatively, you can also add '--offline' to the engine-setup command, and then it
won't do any package management (it will not try to update, check for updates, etc.).

Best regards,

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1392312



--Mike


On 9/30/20 2:19 AM, Benny Zlotnik wrote:

When you ran `engine-setup` did you enable cinderlib preview (it will
not be enabled by default)?
It should handle the creation of the database automatically, if you
didn't you can enable it by running:
`engine-setup --reconfigure-optional-components`


On Wed, Sep 30, 2020 at 1:58 AM Michael Thomas  wrote:


Hi Benny,

Thanks for the confirmation.  I've installed openstack-ussuri and ceph
Octopus.  Then I tried using these instructions, as well as the deep
dive that Eyal has posted at https://www.youtube.com/watch?v=F3JttBkjsX8.

I've done this a couple of times, and each time the engine fails when I
try to add the new managed block storage domain.  The error on the
screen indicates that it can't connect to the cinder database.  The
error in the engine log is:

2020-09-29 17:02:11,859-05 WARN
[org.ovirt.engine.core.bll.storage.domain.AddManagedBlockStorageDomainCommand]
(default task-2) [d519088c-7956-4078-b5cf-156e5b3f1e59] Validation of
action 'AddManagedBlockStorageDomain' failed for user
admin@internal-authz. Reasons:
VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__ADD,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED,ACTION_TYPE_FAILED_CINDERLIB_DATA_BASE_REQUIRED

I had created the db on the engine with this command:

su - postgres -c "psql -d template1 -c \"create database cinder owner
engine template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8'
lc_ctype 'en_US.UTF-8';\""

...and added the following to the end of /var/lib/pgsql/data/pg_hba.conf:

   host    cinder  engine  ::0/0       md5
   host    cinder  engine  0.0.0.0/0   md5
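
(A hedged aside, not part of the original mail: changes to pg_hba.conf only take
effect once PostgreSQL re-reads its configuration, for example with

su - postgres -c "psql -c 'SELECT pg_reload_conf();'"

or a restart of the postgresql service.)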

Is there anywhere else I should look to find out what may have gone wrong?

--Mike

On 9/29/20 3:34 PM, Benny Zlotnik wrote:

The feature is currently in tech preview, but it's being worked on.
The feature page is outdated,  but I believe this is what most users
in the mailing list were using. We held off on updating it because the
installation instructions have been a moving target, but it is more
stable now and I will update it soon.

Specifically speaking, the openstack version should be updated to
train (it is likely ussuri works fine too, but I haven't tried it) and
cinderlib has an RPM now (python3-cinderlib)[1], so it can be
installed instead of using pip, same goes for os-brick. The rest of
the information is valid.


[1] http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/p/
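
A hedged illustration of that last point (the os-brick package name is an assumption,
not confirmed above), with the openstack repo from [1] enabled:

dnf install python3-cinderlib python3-os-brick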

On Tue, Sep 29, 2020 at 10:37 PM Michael Thomas  wrote:


I'm looking for the latest documentation for setting up a Managed Block
Device storage domain so that I can move some of my VM images to ceph rbd.

I found this:

https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html

...but it has a big note at the top that it is "...not user
documentation and should not be treated as such."

The oVirt administration guide[1] does not talk about managed block devices.

I've found a few mailing list threads that discuss people setting up a
Managed Block Device with ceph, but didn't see any links to
documentation steps that folks were following.

Is the Managed Block Storage domain a supported feature in oVirt 4.4.2,
and if so, where is the documentation for using it?

--Mike
[1] ovirt.org/documentation/administration_guide/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KHCLXVOCELHOR3G7SH3GDPGRKITCW7UY/














___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2KT3QQZZESXOTSQFBZZXYDH5WNZVKMJZ/


[ovirt-users] Re: How to make oVirt + GlusterFS bulletproof

2020-10-13 Thread Strahil Nikolov via Users
One recommendation is to get rid of the multipath for your SSD.
Replica 3 volumes are quite resilient and I'm really surprised it happened to 
you.

For the multipath stuff, you can create something like this:
[root@ovirt1 ~]# cat /etc/multipath/conf.d/blacklist.conf  
blacklist {
   wwid Crucial_CT256MX100SSD1_14390D52DCF5
}

As you are already running multipath, just run the following to get the wwid 
of your SSD:
multipath -v4 | grep 'got wwid of'
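
Once the blacklist entry is in place, a hedged follow-up (the usual multipath-tools 
commands; adjust to your distribution):

multipath -r     # reload the multipath maps so the blacklist takes effect
multipath -ll    # the SSD should no longer show up as a multipath map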

What were the gluster volume options you were running with? oVirt runs the 
volume with 'performance.strict-o-direct' and Direct I/O, so you should not 
lose any data.
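
A hedged way to double-check what is actually set on the volume (the volume name 
is a placeholder):

gluster volume get myvol performance.strict-o-direct
gluster volume info myvol    # lists every option explicitly reconfigured on the volume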


Best Regards,
Strahil Nikolov



 





On Tuesday, October 13, 2020, 16:35:26 GMT+3, Jarosław Prokopowski 
 wrote: 





Hi Nikolov,

Thanks for the very interesting answer :-)

I do not use any raid controller. I was hoping glusterfs would take care of 
fault tolerance but apparently it failed.
I have one Samsung 1TB SSD drive in each server for VM storage. I see it is of 
type "multipath". There is an XFS filesystem over standard LVM (not thin). 
Mount options are: inode64,noatime,nodiratime
SELinux was in permissive mode.

I must read more about the things you described, as I have never dived into them.
Please let me know if you have any suggestions :-)

Thanks a lot!
Jarek



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/476TCRSMAV2T4FKER4LLN2EPSEZRE7SH/


[ovirt-users] Re: Upgrade from ovirt-node-ng 4.4.1 to 4.4.2 fails with "Local storage domains were found on the same filesystem as / ! Please migrate the data to a new LV before upgrading, or you will

2020-10-13 Thread Nir Soffer
On Tue, Oct 13, 2020 at 5:56 PM Nir Levy  wrote:
>
>
>
> On Tue, Oct 13, 2020 at 5:51 PM Nir Soffer  wrote:
>>
>> On Tue, Oct 13, 2020 at 4:20 PM Yedidyah Bar David  wrote:
>> >
>> > On Tue, Oct 13, 2020 at 1:11 PM Yedidyah Bar David  wrote:
>> > >
>> > > On Thu, Oct 8, 2020 at 1:01 PM gantonjo-ovirt--- via Users
>> > >  wrote:
>> > > >
>> > > > So, we have a cluster of 3 servers running oVirt Node 4.4.1. Now we 
>> > > > are attempting to upgrade it to latest version, 4.4.2, but it fails as 
>> > > > shown below. Problem is that the Storage domains listed are all 
>> > > > located on an external iSCSI SAN. The Storage Domains were created in 
>> > > > another cluster we had (oVirt Node 4.3 based) and detached from the 
>> > > > old cluster and imported successfully into the new cluster through the 
>> > > > oVirt Management interface. As I understand, oVirt itself has created 
>> > > > the mount points under /rhev/data-center/mnt/blockSD/ for each of the 
>> > > > iSCSI domains, and as such they are not really storaged domains on the 
>> > > > / filesystem.
>> > > >
>> > > > I do believe the solution to the mentioned BugZilla bug has caused a 
>> > > > new bug, but I may be wrong. I cannot see what we have done wrong when 
>> > > > importing these storage domains to the cluster (well, actually, some 
>> > > > were freshly created in this cluster, thus fully managed by oVirt 4.4 
>> > > > manager interface).
>> > >
>> > > This is likely caused by the fix for:
>> > > https://bugzilla.redhat.com/show_bug.cgi?id=1850378 .
>> > >
>> > > Adding Nir.
>> > >
>> > > >
>> > > > What can we do to proceed in upgrading the hosts to latest oVirt Node?
>> > >
>> > > Right now, without another fix? Make sure that the following command:
>> > >
>> > > find / -xdev -path "*/dom_md/metadata" -not -empty
>> > >
>> > > Returns an empty output.
>> > >
>> > > You might need to move the host to maintenance and then manually
>> > > umount your SDs, or something like that.
>> > >
>> > > Please open a bug so that we can refine this command further.
>> >
>> > Nir (Levy) - perhaps we should change this command to something like:
>> >
>> > find / -xdev -path "*/dom_md/metadata" -not -empty -not -type l
>> >
>> > >
>> > > Thanks and best regards,
>> > >
>> > > >
>> > > > Dependencies resolved.
>> > > > ================================================================================
>> > > >  Package                     Architecture  Version      Repository  Size
>> > > > ================================================================================
>> > > > Upgrading:
>> > > >  ovirt-node-ng-image-update  noarch        4.4.2-1.el8  ovirt-4.4   782 M
>> > > >      replacing  ovirt-node-ng-image-update-placeholder.noarch  4.4.1.5-1.el8
>> > > >
>> > > > Transaction Summary
>> > > > ================================================================================
>> > > > Upgrade  1 Package
>> > > >
>> > > > Total download size: 782 M
>> > > > Is this ok [y/N]: y
>> > > > Downloading Packages:
>> > > > ovirt-node-ng-image-update-4.4.2-1.el8.noarch.rpm      8.6 MB/s | 782 MB  01:31
>> > > > --------------------------------------------------------------------------------
>> > > 

[ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & frustrated]

2020-10-13 Thread info
I hope it is not too forward of me to only send it to you.

Yours Sincerely,
 
Henni 


-Original Message-
From: Yedidyah Bar David  
Sent: Tuesday, 13 October 2020 16:42
To: i...@worldhostess.com
Cc: Edward Berger ; users 
Subject: [ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & 
frustrated]

Please share all of /var/log/ovirt-hosted-engine-setup:

cd /var/log/
tar czf ovirt-hosted-engine-setup.tar.gz ovirt-hosted-engine-setup

Then upload ovirt-hosted-engine-setup.tar.gz to some file sharing service (e.g. 
dropbox, google drive etc.) and share the link.

Thanks!

On Tue, Oct 13, 2020 at 10:56 AM  wrote:
>
> Hope this can help. It seems it crashes every time I install.
--
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WW27EIM2YU4AAVZFT4ZZQIWYXOHR/


[ovirt-users] Re: Upgrade from ovirt-node-ng 4.4.1 to 4.4.2 fails with "Local storage domains were found on the same filesystem as / ! Please migrate the data to a new LV before upgrading, or you will

2020-10-13 Thread Nir Levy
On Tue, Oct 13, 2020 at 5:51 PM Nir Soffer  wrote:

> On Tue, Oct 13, 2020 at 4:20 PM Yedidyah Bar David 
> wrote:
> >
> > On Tue, Oct 13, 2020 at 1:11 PM Yedidyah Bar David 
> wrote:
> > >
> > > On Thu, Oct 8, 2020 at 1:01 PM gantonjo-ovirt--- via Users
> > >  wrote:
> > > >
> > > > So, we have a cluster of 3 servers running oVirt Node 4.4.1. Now we
> are attempting to upgrade it to latest version, 4.4.2, but it fails as
> shown below. Problem is that the Storage domains listed are all located on
> an external iSCSI SAN. The Storage Domains were created in another cluster
> we had (oVirt Node 4.3 based) and detached from the old cluster and
> imported successfully into the new cluster through the oVirt Management
> interface. As I understand, oVirt itself has created the mount points under
> /rhev/data-center/mnt/blockSD/ for each of the iSCSI domains, and as such
> they are not really storaged domains on the / filesystem.
> > > >
> > > > I do believe the solution to the mentioned BugZilla bug has caused a
> new bug, but I may be wrong. I cannot see what we have done wrong when
> importing these storage domains to the cluster (well, actually, some were
> freshly created in this cluster, thus fully managed by oVirt 4.4 manager
> interface).
> > >
> > > This is likely caused by the fix for:
> > > https://bugzilla.redhat.com/show_bug.cgi?id=1850378 .
> > >
> > > Adding Nir.
> > >
> > > >
> > > > What can we do to proceed in upgrading the hosts to latest oVirt
> Node?
> > >
> > > Right now, without another fix? Make sure that the following command:
> > >
> > > find / -xdev -path "*/dom_md/metadata" -not -empty
> > >
> > > Returns an empty output.
> > >
> > > You might need to move the host to maintenance and then manually
> > > umount your SDs, or something like that.
> > >
> > > Please open a bug so that we can refine this command further.
> >
> > Nir (Levy) - perhaps we should change this command to something like:
> >
> > find / -xdev -path "*/dom_md/metadata" -not -empty -not -type l
> >
> > >
> > > Thanks and best regards,
> > >
> > > >
> > > > Dependencies resolved.
> > > > ================================================================================
> > > >  Package                     Architecture  Version      Repository  Size
> > > > ================================================================================
> > > > Upgrading:
> > > >  ovirt-node-ng-image-update  noarch        4.4.2-1.el8  ovirt-4.4   782 M
> > > >      replacing  ovirt-node-ng-image-update-placeholder.noarch  4.4.1.5-1.el8
> > > >
> > > > Transaction Summary
> > > > ================================================================================
> > > > Upgrade  1 Package
> > > >
> > > > Total download size: 782 M
> > > > Is this ok [y/N]: y
> > > > Downloading Packages:
> > > > ovirt-node-ng-image-update-4.4.2-1.el8.noarch.rpm      8.6 MB/s | 782 MB  01:31
> > > > --------------------------------------------------------------------------------
> > > > Total                                                  8.6 MB/s | 782 MB  01:31
> > > > Running transaction check
> > > > Transaction check succeeded.
> > > > Running transaction test
> > > > Transaction test succeeded.
> > > > Running transaction
> > > >   Preparing:                                                               1/1
> > > >   Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch         1/3
> > > > Local storage domains were found on the same filesystem as / ! Please migrate
> > > > the data to a new LV before upgrading, or you will lose the VMs
> > > > See: https://bugzilla.redhat.com/show_bug.cgi?id=1550205#c3
> > > > Storage domains were found in:
> > > >  /rhev/data-center/mnt/blockSD/c3df4c98-ca97-4486-a5d4-d0321a0fb801/dom_md
> > > >
>  

[ovirt-users] Re: oVirt-node

2020-10-13 Thread Philip Brown
What is the longevity status of cockpit-machines-ovirt ?
I understand it was removed from automatically being in node, due to perceived 
redundancy.

But is the software package itself going away and/or not going to be usable in 
the future?


- Original Message -
From: "Sandro Bonazzola" 
To: "Budur Nagaraju" 
Cc: "users" 
Sent: Monday, October 12, 2020 5:05:01 AM
Subject: [ovirt-users] Re: oVirt-node

On Mon, Oct 12, 2020 at 12:36, Budur Nagaraju <nbud...@gmail.com> wrote: 



Hi 

Is there a way to deploy vms on the ovirt node without using the oVirt engine? 

Hi, 
if you mean: 
"Can I use oVirt Node for running VMs without using oVirt Engine?" 
then yes, you can. 

oVirt Node is a CentOS Linux derivative and as such you can use virt-manager 
from your laptop to connect to it and manage VMs there as if it was a normal 
CentOS. 
You can also use cockpit for creating local VMs. 

If you mean: 
"Can I create VMs from oVirt Node and also manage them from the engine?" 
the short answer is no. 
The long answer is: you can still try using cockpit-machines-ovirt 
(https://cockpit-project.org/guide/172/feature-ovirtvirtualmachines.html), 
which was deprecated in oVirt 4.3 and removed in 4.4. 
Or run VMs on oVirt Node and try to make them visible to the engine using the KVM 
provider 
(https://www.ovirt.org/documentation/administration_guide/#Adding_KVM_as_an_External_Provider). 
But I wouldn't recommend using these flows. 

-- 

Sandro Bonazzola 

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV 

Red Hat EMEA (https://www.redhat.com/) 

sbona...@redhat.com 

Red Hat respects your work life balance. Therefore there is no need to answer 
this email out of your office hours. 


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CYGFMNCKPW2I56FUV7IRQXHYK7IQNCDC/


[ovirt-users] Re: Upgrade from ovirt-node-ng 4.4.1 to 4.4.2 fails with "Local storage domains were found on the same filesystem as / ! Please migrate the data to a new LV before upgrading, or you will

2020-10-13 Thread Nir Soffer
On Tue, Oct 13, 2020 at 4:20 PM Yedidyah Bar David  wrote:
>
> On Tue, Oct 13, 2020 at 1:11 PM Yedidyah Bar David  wrote:
> >
> > On Thu, Oct 8, 2020 at 1:01 PM gantonjo-ovirt--- via Users
> >  wrote:
> > >
> > > So, we have a cluster of 3 servers running oVirt Node 4.4.1. Now we are 
> > > attempting to upgrade it to latest version, 4.4.2, but it fails as shown 
> > > below. Problem is that the Storage domains listed are all located on an 
> > > external iSCSI SAN. The Storage Domains were created in another cluster 
> > > we had (oVirt Node 4.3 based) and detached from the old cluster and 
> > > imported successfully into the new cluster through the oVirt Management 
> > > interface. As I understand, oVirt itself has created the mount points 
> > > under /rhev/data-center/mnt/blockSD/ for each of the iSCSI domains, and 
> > > as such they are not really storaged domains on the / filesystem.
> > >
> > > I do believe the solution to the mentioned BugZilla bug has caused a new 
> > > bug, but I may be wrong. I cannot see what we have done wrong when 
> > > importing these storage domains to the cluster (well, actually, some were 
> > > freshly created in this cluster, thus fully managed by oVirt 4.4 manager 
> > > interface).
> >
> > This is likely caused by the fix for:
> > https://bugzilla.redhat.com/show_bug.cgi?id=1850378 .
> >
> > Adding Nir.
> >
> > >
> > > What can we do to proceed in upgrading the hosts to latest oVirt Node?
> >
> > Right now, without another fix? Make sure that the following command:
> >
> > find / -xdev -path "*/dom_md/metadata" -not -empty
> >
> > Returns an empty output.
> >
> > You might need to move the host to maintenance and then manually
> > umount your SDs, or something like that.
> >
> > Please open a bug so that we can refine this command further.
>
> Nir (Levy) - perhaps we should change this command to something like:
>
> find / -xdev -path "*/dom_md/metadata" -not -empty -not -type l
>
> >
> > Thanks and best regards,
> >
> > >
> > > Dependencies resolved.
> > > ================================================================================
> > >  Package                     Architecture  Version      Repository  Size
> > > ================================================================================
> > > Upgrading:
> > >  ovirt-node-ng-image-update  noarch        4.4.2-1.el8  ovirt-4.4   782 M
> > >      replacing  ovirt-node-ng-image-update-placeholder.noarch  4.4.1.5-1.el8
> > >
> > > Transaction Summary
> > > ================================================================================
> > > Upgrade  1 Package
> > >
> > > Total download size: 782 M
> > > Is this ok [y/N]: y
> > > Downloading Packages:
> > > ovirt-node-ng-image-update-4.4.2-1.el8.noarch.rpm      8.6 MB/s | 782 MB  01:31
> > > --------------------------------------------------------------------------------
> > > Total                                                  8.6 MB/s | 782 MB  01:31
> > > Running 

[ovirt-users] Maximum domains per data center

2020-10-13 Thread Tommaso - Shellrent via Users

Hi to all.

Can someone confirm the maximum number of storage domains per data center on 
oVirt 4.4?


We found only this for RHEV: https://access.redhat.com/articles/906543

Regards,
Tommaso.

--
--  
Shellrent - Il primo hosting italiano Security First

*Tommaso De Marchi*
/COO - Chief Operating Officer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155  | Fax 04441492177

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AQFGCDRW2FF7EJRD77OZ6AEN4VXRWLIN/


[ovirt-users] Re: oVirt 4.4.2.6-1.el8 (SHE). Grafana integration not configured. The link to the Monitoring portal is not displayed on the Manager home page.

2020-10-13 Thread Dmitry Kharlamov
Many thanks, Didi, Gianluca!

Via Invite + usern...@ad.domain.name everything worked out! )))

Is it possible to use the file /etc/grafana/ldap.toml to configure 
authentication against Active Directory? 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RLOZCQRCQ6V7H4PC5HLLBYLFDV5GMK5T/


[ovirt-users] Re: Upgrade from ovirt-node-ng 4.4.1 to 4.4.2 fails with "Local storage domains were found on the same filesystem as / ! Please migrate the data to a new LV before upgrading, or you will

2020-10-13 Thread gantonjo-ovirt--- via Users
Hi, Didi.

The command "find / -xdev -path "*/dom_md/metadata" -not -empty -not -type l" 
shows no storage domains, so I guess this would solve the "false positive" list 
of storage domains that are not really mounted to the node we want to update.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GPM7LBRKVPQAQF2CTV66CG7E6LOMAABT/


[ovirt-users] Re: oVirt 4.4.2.6-1.el8 (SHE). Grafana integration not configured. The link to the Monitoring portal is not displayed on the Manager home page.

2020-10-13 Thread Gianluca Cecchi
On Tue, Oct 13, 2020 at 3:27 PM Yedidyah Bar David  wrote:

> On Tue, Oct 13, 2020 at 3:48 PM Dmitry Kharlamov 
> wrote:
> >
> > Good afternoon, Lucie! Thanks for the info.
> >
> > After reinstalling engine with --reconfigure-optional-components Grafana
> is work and link to Monitoring portal present on home page.
> > SSO works the same way, but only for Internal users. Users
> ActiveDirectory cannot enter the Monitoring Portal - "Invalid User or
> Password".
>
> This is, for now, by design. You have to invite them, and then SSO
> works. See also:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1846256
>
> Best regards,
> --
> Didi
> __
>

Nice, it works.
It would be nice to have the domain line also in Grafana as in web admin
gui login page (the "Profile" line in 4.4).

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/B2YHODT6OCTPGGZ43UZBY542COSCDMEV/


[ovirt-users] Re: How to make oVirt + GlusterFS bulletproof

2020-10-13 Thread Jarosław Prokopowski
Hi Nikolov,

Thanks for the very interesting answer :-)

I do not use any raid controller. I was hoping glusterfs would take care of 
fault tolerance but apparently it failed.
I have one Samsung 1TB SSD drive in each server for VM storage. I see it is of 
type "multipath". There is an XFS filesystem over standard LVM (not thin). 
Mount options are: inode64,noatime,nodiratime
SELinux was in permissive mode.

I must read more about the things you described, as I have never dived into them.
Please let me know if you have any suggestions :-)

Thanks a lot!
Jarek

 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RBIDHY6P3KKTXFMPXP32YQ2FDZNXDB4L/


[ovirt-users] Re: oVirt 4.4.2.6-1.el8 (SHE). Grafana integration not configured. The link to the Monitoring portal is not displayed on the Manager home page.

2020-10-13 Thread Yedidyah Bar David
On Tue, Oct 13, 2020 at 3:48 PM Dmitry Kharlamov  wrote:
>
> Good afternoon, Lucie! Thanks for the info.
>
> After reinstalling engine with --reconfigure-optional-components Grafana is 
> work and link to Monitoring portal present on home page.
> SSO works the same way, but only for Internal users. Users ActiveDirectory 
> cannot enter the Monitoring Portal - "Invalid User or Password".

This is, for now, by design. You have to invite them, and then SSO
works. See also:

https://bugzilla.redhat.com/show_bug.cgi?id=1846256

Best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/U35P6NFZ57ZFVTQUQ54M7FIOX76WRSE4/


[ovirt-users] Re: Upgrade from ovirt-node-ng 4.4.1 to 4.4.2 fails with "Local storage domains were found on the same filesystem as / ! Please migrate the data to a new LV before upgrading, or you will

2020-10-13 Thread Yedidyah Bar David
On Tue, Oct 13, 2020 at 1:11 PM Yedidyah Bar David  wrote:
>
> On Thu, Oct 8, 2020 at 1:01 PM gantonjo-ovirt--- via Users
>  wrote:
> >
> > So, we have a cluster of 3 servers running oVirt Node 4.4.1. Now we are 
> > attempting to upgrade it to latest version, 4.4.2, but it fails as shown 
> > below. Problem is that the Storage domains listed are all located on an 
> > external iSCSI SAN. The Storage Domains were created in another cluster we 
> > had (oVirt Node 4.3 based) and detached from the old cluster and imported 
> > successfully into the new cluster through the oVirt Management interface. 
> > As I understand, oVirt itself has created the mount points under 
> > /rhev/data-center/mnt/blockSD/ for each of the iSCSI domains, and as such 
> > they are not really storaged domains on the / filesystem.
> >
> > I do believe the solution to the mentioned BugZilla bug has caused a new 
> > bug, but I may be wrong. I cannot see what we have done wrong when 
> > importing these storage domains to the cluster (well, actually, some were 
> > freshly created in this cluster, thus fully managed by oVirt 4.4 manager 
> > interface).
>
> This is likely caused by the fix for:
> https://bugzilla.redhat.com/show_bug.cgi?id=1850378 .
>
> Adding Nir.
>
> >
> > What can we do to proceed in upgrading the hosts to latest oVirt Node?
>
> Right now, without another fix? Make sure that the following command:
>
> find / -xdev -path "*/dom_md/metadata" -not -empty
>
> Returns an empty output.
>
> You might need to move the host to maintenance and then manually
> umount your SDs, or something like that.
>
> Please open a bug so that we can refine this command further.

Nir (Levy) - perhaps we should change this command to something like:

find / -xdev -path "*/dom_md/metadata" -not -empty -not -type l

>
> Thanks and best regards,
>
> >
> > Dependencies resolved.
> > ================================================================================
> >  Package                     Architecture  Version      Repository  Size
> > ================================================================================
> > Upgrading:
> >  ovirt-node-ng-image-update  noarch        4.4.2-1.el8  ovirt-4.4   782 M
> >      replacing  ovirt-node-ng-image-update-placeholder.noarch  4.4.1.5-1.el8
> >
> > Transaction Summary
> > ================================================================================
> > Upgrade  1 Package
> >
> > Total download size: 782 M
> > Is this ok [y/N]: y
> > Downloading Packages:
> > ovirt-node-ng-image-update-4.4.2-1.el8.noarch.rpm      8.6 MB/s | 782 MB  01:31
> > --------------------------------------------------------------------------------
> > Total                                                  8.6 MB/s | 782 MB  01:31
> > Running transaction check
> > Transaction check succeeded.
> > Running transaction test
> > Transaction test succeeded.
> > Running transaction
> >   Preparing:
> >  

[ovirt-users] Re: oVirt 4.4.2.6-1.el8 (SHE). Grafana integration not configured. The link to the Monitoring portal is not displayed on the Manager home page.

2020-10-13 Thread Dmitry Kharlamov
Good afternoon, Lucie! Thanks for the info. 

After reinstalling engine with --reconfigure-optional-components Grafana is 
work and link to Monitoring portal present on home page.
SSO works the same way, but only for Internal users. Users ActiveDirectory 
cannot enter the Monitoring Portal - "Invalid User or Password".
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MLXTIM5BA5XVVZNXTCINAGC5KKQYVHPT/


[ovirt-users] Re: Upgrade from ovirt-node-ng 4.4.1 to 4.4.2 fails with "Local storage domains were found on the same filesystem as / ! Please migrate the data to a new LV before upgrading, or you will

2020-10-13 Thread gantonjo-ovirt--- via Users
Hi again.

Just did a test. I unmapped the node from the iSCSI SAN and rebooted the node. 
After the reboot, the storage domains were still listed as / storage domains. In 
other words, this was not a solution. 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/D2LXOYU2PK3V45BRT3SUIBNODUBXV5GZ/


[ovirt-users] Re: Upgrade from ovirt-node-ng 4.4.1 to 4.4.2 fails with "Local storage domains were found on the same filesystem as / ! Please migrate the data to a new LV before upgrading, or you will

2020-10-13 Thread gantonjo-ovirt--- via Users
Hi, Didi.

Thanks for your answer. Unfortunately the suggested command shows the same 
storage domains as I listed above. The node is in Maintenance (as it would be 
when installing an update from the GUI). None of these storage domains are 
mounted on the node (at least they are not visible when running the command 
"mount"), thus I am not able to unmount them. I guess they are visible to the 
node's OS due to the fact that they are iSCSI domains, even if the node itself 
makes no use of them.

That said, looking at the files listed by the find command you gave me, the 
storage domains all have links to non-existent /dev/ locations, like the 
following:
/rhev/data-center/mnt/blockSD/0d57fcd3-4622-41cc-ab23-744b93d175a0/dom_md/:
total 0
lrwxrwxrwx. 1 vdsm kvm 45 Oct  6 09:36 ids -> 
/dev/0d57fcd3-4622-41cc-ab23-744b93d175a0/ids
lrwxrwxrwx. 1 vdsm kvm 47 Oct  6 09:36 inbox -> 
/dev/0d57fcd3-4622-41cc-ab23-744b93d175a0/inbox
lrwxrwxrwx. 1 vdsm kvm 48 Oct  6 09:36 leases -> 
/dev/0d57fcd3-4622-41cc-ab23-744b93d175a0/leases
lrwxrwxrwx. 1 vdsm kvm 48 Oct  6 09:36 master -> 
/dev/0d57fcd3-4622-41cc-ab23-744b93d175a0/master
lrwxrwxrwx. 1 vdsm kvm 50 Oct  6 09:36 metadata -> 
/dev/0d57fcd3-4622-41cc-ab23-744b93d175a0/metadata
lrwxrwxrwx. 1 vdsm kvm 48 Oct  6 09:36 outbox -> 
/dev/0d57fcd3-4622-41cc-ab23-744b93d175a0/outbox
lrwxrwxrwx. 1 vdsm kvm 49 Oct  6 09:36 xleases -> 
/dev/0d57fcd3-4622-41cc-ab23-744b93d175a0/xleases

ls -l  /dev/0d57fcd3-4622-41cc-ab23-744b93d175a0/ids
ls: cannot access '/dev/0d57fcd3-4622-41cc-ab23-744b93d175a0/ids': No such file 
or directory

I guess I can remove the export from the iSCSI SAN towards the node, reboot the 
node and then try to upgrade the node via "dnf update". However, the node will 
then not be able to serve the storage domains to its VMs when taken out of 
Maintenance. 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3BXBEK6MSNQFZ35EOHAN5XZUOZSMRGDB/


[ovirt-users] Collectd version downgrade on oVirt engine

2020-10-13 Thread kushagra2agarwal
I am trying to downgrade the collectd version on the oVirt engine from 5.10.0 to 5.8.4 
using Ansible, but I am getting an error while doing so. Can someone help fix this issue?

--

Ansible yml file

---

- name: Perform a yum clean
  command: /usr/bin/yum clean all

- name: downgrade collectd version to 5.8.1
  yum:
    name:
      - collectd-5.8.1-4.el7.x86_64
      - collectd-disk-5.8.1-4.el7.x86_64
    state: present
    allow_downgrade: true
    update_cache: true
  become: true

- error

 {"changed": false, "changes": {"installed": ["collectd-5.8.1-4.el7.x86_64"]}, 
"msg": "Error: Package: collectd-write_http-5.10.0-2.el7.x86_64 
(@ovirt-4.3-centos-opstools)\n   Requires: collectd(x86-64) = 
5.10.0-2.el7\n   Removing: collectd-5.10.0-2.el7.x86_64 
(@ovirt-4.3-centos-opstools)\n   collectd(x86-64) = 5.10.0-2.el7\n  
 Downgraded By: collectd-5.8.1-4.el7.x86_64 
(ovirt-4.3-centos-opstools)\n   collectd(x86-64) = 5.8.1-4.el7\n
   Available: collectd-5.7.2-1.el7.x86_64 (ovirt-4.3-centos-opstools)\n 
  collectd(x86-64) = 5.7.2-1.el7\n   Available: 
collectd-5.7.2-3.el7.x86_64 (ovirt-4.3-centos-opstools)\n   
collectd(x86-64) = 5.7.2-3.el7\n   Available: 
collectd-5.8.0-2.el7.x86_64 (ovirt-4.3-centos-opstools)\n   
collectd(x86-64) = 5.8.0-2.el7\n   Available: 
collectd-5.8.0-3.el7.x86_64 (ovirt-4.3-centos-opstools)\n   
collectd(x86-64) = 5.8.0-3.el7\n   Available: 
collectd-5.8.0-5.el7.x86_64 (ovirt-4.3-centos-opstools)\n   
collectd(x86-64) = 5.8.0-5.el7\n   Available: 
collectd-5.8.0-6.1.el7.x86_64 (ovirt-4.3-centos-opstools)\n   
collectd(x86-64) = 5.8.0-6.1.el7\n   Available: 
collectd-5.8.1-1.el7.x86_64 (epel)\n   collectd(x86-64) = 
5.8.1-1.el7\n   Available: collectd-5.8.1-2.el7.x86_64 
(ovirt-4.3-centos-opstools)\n   collectd(x86-64) = 5.8.1-2.el7\n
   Available: collectd-5.8.1-3.el7.x86_64 (ovirt-4.3-centos-opstools)\n 
  collectd(x86-64) = 5.8.1-3.el7\n   Available: 
collectd-5.8.1-5.el7.x86_64 (ovirt-4.3-centos-opstools)\n   
collectd(x86-64) = 5.8.1-5.el7\nError: Package: 
collectd-disk-5.10.0-2.el7.x86_64 (@ovirt-4.3-centos-opstools)\n   
Requires: collectd(x86-64) = 5.10.0-2.el7\n   Removing: 
collectd-5.10.0-2.el7.x86_64 (@ovirt-4.3-centos-opstools)\n   
collectd(x86-64) = 5.10.0-2.el7\n   Downgraded By: 
collectd-5.8.1-4.el7.x86_64 (ovirt-4.3-centos-opstools)\n   
collectd(x86-64) = 5.8.1-4.el7\n   Available: 
collectd-5.7.2-1.el7.x86_64 (ovirt-4.3-centos-opstools)\n   
collectd(x86-64) = 5.7.2-1.el7\n   Available: 
collectd-5.7.2-3.el7.x86_64 (ovirt-4.3-centos-opstools)\n   
collectd(x86-64) = 5.7.2-3.el7\n   Available: 
collectd-5.8.0-2.el7.x86_64 (ovirt-4.3-centos-opstools)\n   
collectd(x86-64) = 5.8.0-2.el7\n   Available: 
collectd-5.8.0-3.el7.x86_64 (ovirt-4.3-centos-opstools)\n   
collectd(x86-64) = 5.8.0-3.el7\n   Available: 
collectd-5.8.0-5.el7.x86_64 (ovirt-4.3-centos-opstools)\n   
collectd(x86-64) = 5.8.0-5.el7\n   Available: 
collectd-5.8.0-6.1.el7.x86_64 (ovirt-4.3-centos-opstools)\n   
collectd(x86-64) = 5.8.0-6.1.el7\n   Available: 
collectd-5.8.1-1.el7.x86_64 (epel)\n   collectd(x86-64) = 
5.8.1-1.el7\n   Available: collectd-5.8.1-2.el7.x86_64 
(ovirt-4.3-centos-opstools)\n   collectd(x86-64) = 5.8.1-2.el7\n
   Available: collectd-5.8.1-3.el7.x86_64 (ovirt-4.3-centos-opstools)\n 
  collectd(x86-64) = 5.8.1-3.el7\n   Available: 
collectd-5.8.1-5.el7.x86_64 (ovirt-4.3-centos-opstools)\n   
collectd(x86-64) = 5.8.1-5.el7\nError: Package: 
collectd-postgresql-5.10.0-2.el7.x86_64 (@ovirt-4.3-centos-opstools)\n  
 Requires: collectd(x86-64) = 5.10.0-2.el7\n   Removing: 
collectd-5.10.0-2.el7.x86_64 (@ovirt-4.3-centos-opstools)\n   
collectd(x86-64) = 5.10.0-2.el7\n   Downgraded By: 
collectd-5.8.1-4.el7.x86_64 (ovirt-4.3-centos-opstools)\n   
collectd(x86-64) = 5.8.1-4.el7\n   Available: 
collectd-5.7.2-1.el7.x86_64 (ovirt-4.3-centos-opstools)\n   
collectd(x86-64) = 5.7.2-1.el7\n   Available: 
collectd-5.7.2-3.el7.x86_64 (ovirt-4.3-centos-opstools)\n   
collectd(x86-64) = 5.7.2-3.el7\n   Available: 
collectd-5.8.0-2.el7.x86_64 (ovirt-4.3-centos-opstools)\n   
collectd(x86-64) = 5.8.0-2.el7\n   Available: 
collectd-5.8.0-3.el7.x86_64 (ovirt-4.3-centos-opstools)\n   
collectd(x86-64) = 5.8.0-3.el7\n   Available: 
collectd-5.8.0-5.el7.x86_64 (ovirt-4.3-centos-opstools)\n   
collectd(x86-64) = 5.8.0-5.el7\n   Available: 

[ovirt-users] Could not read L1 table: Input/output error (libgfapi enabled)

2020-10-13 Thread Николаев Алексей
Hi community!

I have the same issue as https://bugzilla.redhat.com/show_bug.cgi?id=1855321
A VM created from a template can't start either. Cluster version 4.2 (LibgfApiSupported=true).

VM rXX-TEST is down with error. Exit message: internal error: qemu unexpectedly closed the monitor:
[2020-10-13 10:29:08.503719] E [MSGID: 108006] [afr-common.c:5323:__afr_handle_child_down_event] 0-169-gluster-volume02-replicate-0: All subvolumes are down. Going offline until at least one of them comes back up.
[2020-10-13 10:29:08.503926] I [io-stats.c:4027:fini] 0-169-gluster-volume02: io-stats translator unloaded
2020-10-13T10:29:10.346661Z qemu-kvm: -drive file=gluster://10.169.2.227:24007/169-gluster-volume02/a706d3a9-c73d-474e-9544-6ca9491037d2/images/cf44e622-3f88-4dff-a883-ac18a1ff29f5/94ad4bcf-f56a-4695-9b8e-3340dd3d7665,file.debug=4,format=qcow2,if=none,id=drive-ua-cf44e622-3f88-4dff-a883-ac18a1ff29f5,serial=cf44e622-3f88-4dff-a883-ac18a1ff29f5,werror=stop,rerror=stop,cache=none,aio=native: Could not read L1 table: Input/output error.

rpm -qa gluster*
glusterfs-client-xlators-7.7-1.el7.x86_64
glusterfs-rdma-7.7-1.el7.x86_64
glusterfs-api-7.7-1.el7.x86_64
glusterfs-libs-7.7-1.el7.x86_64
glusterfs-fuse-7.7-1.el7.x86_64
glusterfs-geo-replication-7.7-1.el7.x86_64
glusterfs-server-7.7-1.el7.x86_64
glusterfs-cli-7.7-1.el7.x86_64
glusterfs-7.7-1.el7.x86_64
glusterfs-events-7.7-1.el7.x86_64

Setting the value to LibgfApiSupported=false for a new cluster (version 4.3) solves the problem with starting the VM.
Is there a workaround to fix LibgfApi?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BX4MMWVPCPDKC7YRA2SODS5LGW44RDVB/


[ovirt-users] Re: Upgrade from ovirt-node-ng 4.4.1 to 4.4.2 fails with "Local storage domains were found on the same filesystem as / ! Please migrate the data to a new LV before upgrading, or you will

2020-10-13 Thread Yedidyah Bar David
On Thu, Oct 8, 2020 at 1:01 PM gantonjo-ovirt--- via Users
 wrote:
>
> So, we have a cluster of 3 servers running oVirt Node 4.4.1. Now we are 
> attempting to upgrade it to latest version, 4.4.2, but it fails as shown 
> below. Problem is that the Storage domains listed are all located on an 
> external iSCSI SAN. The Storage Domains were created in another cluster we 
> had (oVirt Node 4.3 based) and detached from the old cluster and imported 
> successfully into the new cluster through the oVirt Management interface. As 
> I understand, oVirt itself has created the mount points under 
> /rhev/data-center/mnt/blockSD/ for each of the iSCSI domains, and as such 
> they are not really storaged domains on the / filesystem.
>
> I do believe the solution to the mentioned BugZilla bug has caused a new bug, 
> but I may be wrong. I cannot see what we have done wrong when importing these 
> storage domains to the cluster (well, actually, some were freshly created in 
> this cluster, thus fully managed by oVirt 4.4 manager interface).

This is likely caused by the fix for:
https://bugzilla.redhat.com/show_bug.cgi?id=1850378 .

Adding Nir.

>
> What can we do to proceed in upgrading the hosts to latest oVirt Node?

Right now, without another fix? Make sure that the following command:

find / -xdev -path "*/dom_md/metadata" -not -empty

Returns an empty output.

You might need to move the host to maintenance and then manually
umount your SDs, or something like that.

Please open a bug so that we can refine this command further.

Thanks and best regards,

>
> Dependencies resolved.
> ================================================================================
>  Package                     Architecture  Version      Repository  Size
> ================================================================================
> Upgrading:
>  ovirt-node-ng-image-update  noarch        4.4.2-1.el8  ovirt-4.4   782 M
>      replacing  ovirt-node-ng-image-update-placeholder.noarch  4.4.1.5-1.el8
>
> Transaction Summary
> ================================================================================
> Upgrade  1 Package
>
> Total download size: 782 M
> Is this ok [y/N]: y
> Downloading Packages:
> ovirt-node-ng-image-update-4.4.2-1.el8.noarch.rpm      8.6 MB/s | 782 MB  01:31
> --------------------------------------------------------------------------------
> Total                                                  8.6 MB/s | 782 MB  01:31
> Running transaction check
> Transaction check succeeded.
> Running transaction test
> Transaction test succeeded.
> Running transaction
>   Preparing:                                                               1/1
>   Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch
>  

[ovirt-users] Re: problems installing standard Linux as nodes in 4.4

2020-10-13 Thread Gianluca Cecchi
On Sat, Oct 10, 2020 at 10:13 AM Martin Perina  wrote:

[snip]


>> Can I replicate the command that the engine would run on host through ssh?
>>
>
> I don't think so there is an easy way to do it
> Let's see what else we can get from the logs...
>
> Martin
>
>
Hi,
I've run the following command on the engine:
ovirt-log-collector --no-hypervisors
but the archive potentially contains a lot of sensitive information (like the dump
of the database).

Is there any particular file in that archive that you are more interested in, which
I could share?

BTW: can I put engine in debug for the time I'm trying to add the host so
that we can see if more messages are shown?
In that case how can I do?

Another thing I have noticed is that when the add-host operation from the
web admin GUI suddenly fails, the ov200 host is nevertheless present in the
host list, with the down icon and "Install failed" info.
If I click on it and go to the General subtab, in the "Action Items" section I
see 3 items with an exclamation mark in front of them:

1) Power Management is not configured for this Host.
Enable Power Management
--> OK, I skipped it

2) Host has no default route.
---> I don't know why it says this.

[root@ov200 log]# ip route show
default via 10.4.192.254 dev bond0.68 proto static metric 400
10.4.192.0/24 dev bond0.68 proto kernel scope link src 10.4.192.32 metric
400
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
linkdown
[root@ov200 log]#

On the host that is still active on CentOS 7 I have:

[root@ov300 ~]# ip route show
default via 10.4.192.254 dev ovirtmgmntZ2Z3
10.4.187.0/24 dev p1p2.187 proto kernel scope link src 10.4.187.100
10.4.192.0/24 dev ovirtmgmntZ2Z3 proto kernel scope link src 10.4.192.33
10.10.100.0/24 dev p1p2 proto kernel scope link src 10.10.100.88
10.10.100.0/24 dev p1p1.100 proto kernel scope link src 10.10.100.87
[root@ov300 ~]#

[root@ov300 ~]# brctl show ovirtmgmntZ2Z3
bridge name bridge id STP enabled interfaces
ovirtmgmntZ2Z3 8000.1803730ba369 no bond0.68
[root@ov300 ~]#

Could it be that, for historical reasons, my mgmt network is not named
ovirtmgmt but ovirtmgmntZ2Z3, and that this confuses the installer, which
expects to set up ovirtmgmt and therefore erroneously reports the no default
route message?

3) The host CPU does not match the Cluster CPU Type and is running in a
degraded mode. It is missing the following CPU flags: vmx, ssbd, nx,
model_Westmere, aes, spec_ctrl. Please update the host CPU microcode or
change the Cluster CPU Type.

The cluster is set as "Intel Westmere IBRS SSBD Family".
All the hosts are the same hardware (Dell PE M610), with the same processor.

Host installed in CentOS 8:
[root@ov200 log]# cat /proc/cpuinfo | grep "model name" | sort -u
model name : Intel(R) Xeon(R) CPU   X5690  @ 3.47GHz
[root@ov200 log]#

Host still in CentOS 7:
[root@ov300 ~]# cat /proc/cpuinfo | grep "model name" | sort -u
model name : Intel(R) Xeon(R) CPU   X5690  @ 3.47GHz
[root@ov300 ~]#

If I compare the CPU flags inside the OS I see:

CentOS 8:
[root@ov200 log]# cat /proc/cpuinfo | grep flags | sort -u
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb
rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology
nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx
est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm pti
ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat
flush_l1d
[root@ov200 log]#

CentOS 7:
[root@ov300 ~]# cat /proc/cpuinfo | grep flags | sort -u
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb
rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology
nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx
est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm ssbd
ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat
spec_ctrl intel_stibp flush_l1d
[root@ov300 ~]#

When it was still on CentOS 7, ov200 had the same flags as ov300.
ov200 now has these additional flags:
cpuid pti

ov200 is now missing these flags:
eagerfpu spec_ctrl intel_stibp
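
A quick way to reproduce this comparison (host and file names here are only
illustrative) is to diff the sorted flag sets:

# Hedged sketch: dump each host's CPU flags and compare the two sets.
ssh ov200 "grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort -u" > ov200.flags
ssh ov300 "grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort -u" > ov300.flags
comm -23 ov200.flags ov300.flags   # flags present only on ov200
comm -13 ov200.flags ov300.flags   # flags present only on ov300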

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/E6F7RRQYXEUDLYMIL2UV362WKZWTAFWR/


[ovirt-users] Re: oVirt 4.4.2.6-1.el8 (SHE). Grafana integration not configured. The link to the Monitoring portal is not displayed on the Manager home page.

2020-10-13 Thread Lucie Leistnerova

Hi Dmitry,

On 10/10/20 8:36 AM, Dmitry Kharlamov wrote:

Good day!

I made a fresh installation of 4.4.2 Self Hosted Engine (not an upgrade from
4.4.1); however, there is no link to the Monitoring portal on the home page.

The ovirt-engine-dwhd service started and works. The grafana-server service is
not present in /etc/systemd and is not configured.

Please tell me what needs to be done to make the monitoring portal work?
Grafana was disabled in the HE installation; see 
https://bugzilla.redhat.com/show_bug.cgi?id=1866780


To reinstall, run on the engine: engine-setup 
--reconfigure-optional-components
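
A minimal sketch of that flow (the service name and port below are the usual
Grafana defaults and may differ per setup):

# Hedged sketch: re-run setup for optional components, then check Grafana.
engine-setup --reconfigure-optional-components
systemctl status grafana-server --no-pager
ss -tlnp | grep ':3000'   # Grafana's default listen port (assumed)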

PS// In the previous 4.4.1 SHE installation, the grafana-server worked
immediately after installation and the link to the monitoring portal was
available without any additional steps.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YLIJKMO5H7EJHE7RMQTBYSNWCMQ32VI3/

Best regards,
--

Lucie Leistnerova
Associate Manager, Quality Engineering, RHV - QE Core & Tools
GChat: lleistne @ Virtualization 

Red Hat EMEA 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SI4VIKPPRBWV234ZPXJ77O6PJX5J7GMN/


[ovirt-users] Re: Imageio Daemon not listening on port 54323

2020-10-13 Thread Nir Soffer
On Tue, Oct 13, 2020 at 12:25 PM Tim Bordemann via Users
 wrote:
>
>
>
> > On 13. Oct 2020, at 10:01, Vojtech Juranek  wrote:
> >
> > On Tuesday, October 13, 2020 at 9:37:11 CEST Tim Bordemann wrote:
> >>> On 12. Oct 2020, at 12:15, Vojtech Juranek  wrote:
> >>>
> >>> On pátek 9. října 2020 19:02:32 CEST tim-nospam--- via Users wrote:
>  Hello.
> 
>  After an upgrade I am not able to upload images anymore via the ovirt ui.
>  When testing the connection, I always get the error message "Connection
>  to
>  ovirt-imageio-proxy service has failed. Make sure the service is
>  installed,
>  configured, and ovirt-engine certificate is registered as a valid CA in
>  the
>  browser.".
> 
>  I found out that the imageio daemon doesn't listen on port 54323 anymore,
>  so the browser can not connect to it. The daemon is configured to listen
>  on port 54323 though:
> 
>  # cat /etc/ovirt-imageio/conf.d/50-engine.conf
>  [...]
>  [remote]
>  port = 54323
>  [...]
> 
>  The imageio daemon has been started successfully on the engine host as
>  well
>  as on the other hosts.
> 
>  I am currently stuck, what should I do next?
>  The ovirt version I am using is 4.4.
> >>>
> >>> what is the exact version of imageio? (rpm -qa|grep imageio)
> >>
> >> # rpm -qa|grep imageio
> >> ovirt-engine-setup-plugin-imageio-4.4.2.6-1.el8.noarch
> >> ovirt-imageio-daemon-2.0.10-1.el8.x86_64
> >> ovirt-imageio-common-2.0.10-1.el8.x86_64
> >> ovirt-imageio-client-2.0.10-1.el8.x86_64
> >>
> >>> On which port does imageio listen? You can use e.g. netstat etc. Also please
> >>> check imageio logs (/var/log/ovirt-imageio/daemon.log), what is there,
> >>> there should
> >>> be something like this:
> >> # netstat -tulpn | grep 543
> >> tcp    0      0 0.0.0.0:54322     0.0.0.0:*    LISTEN    2527872/platform-py
> >> tcp    0      0 127.0.0.1:54324   0.0.0.0:*    LISTEN    2527872/platform-py
> >>> 2020-10-08 08:37:48,906 INFO(MainThread) [services] remote.service
> >>> listening on ('::', 54323)
> >>
> >> 2020-10-09 17:46:38,216 INFO(MainThread) [server] Starting 
> >> (pid=2527872,
> >> version=2.0.10) 2020-10-09 17:46:38,220 INFO(MainThread) [services]
> >> remote.service listening on ('0.0.0.0', 54322) 2020-10-09 17:46:38,221 INFO
> >>   (MainThread) [services] control.service listening on ('127.0.0.1',
> >> 54324) 2020-10-09 17:46:38,227 INFO(MainThread) [server] Ready for
> >> requests
> >>
> >> No entries for port 54323 in the last 3 months. I found log entries in July
> >> though:
> >>
> >> [root@helios ~]# cat /var/log/ovirt-imageio/daemon.log | grep 54323
> >> 2020-07-11 10:13:24,777 INFO(MainThread) [services] remote.service
> >> listening on ('::', 54323) [...]
> >> 2020-07-16 19:54:13,398 INFO(MainThread) [services] remote.service
> >> listening on ('::', 54323) 2020-07-16 19:54:36,715 INFO(MainThread)
> >> [services] remote.service listening on ('::', 54323)
> >>> Also, please check if there are any other config files (*.conf) in
> >>> /etc/ovirt-imageio/conf.d or in /usr/lib/ovirt-imageio/conf.d
> >>
> >> I have, but I couldn't find anything interesting in those two files:
> >>
> >> # ls -l /etc/ovirt-imageio/conf.d/
> >> total 8
> >> -rw-r--r--. 1 root root 1458 Oct  9 17:54 50-engine.conf
> >> -rw-r--r--. 1 root root 1014 Sep 15 11:16 50-vdsm.conf
> >
> > 50-vdsm.conf overwrites 50-engine.conf (later config taken alphabetically 
> > overwrites previous config if there is any).
> >
> > If you don't use engine as a host at the same time, stop and uninstall vdsm 
> > from engine (should remove also 50-vdsm.conf) and restart ovirt-imageio 
> > service.
> >
> > If you use engine as a host at the same time, note that this is 
> > unsupported. However, there were some patches in this area recently, but 
> > they are not released yet AFAICT. See [1, 2] for more details.
> >
> > [1] https://bugzilla.redhat.com/1871348
> > [2] 
> > https://lists.ovirt.org/archives/list/users@ovirt.org/thread/W4OTINLXYDWG3YSF2OUQU3NW3ADRPGUR/
>
> Uninstalling vdsm would also remove nearly 500 packages on the machine. I 
> have disabled and masked the vdsm service for now and restarted the imageio 
> service. Uploading an image via the webui now works again.
> I will remove vdsm during a maintenance window and maybe even reinstall the 
> machine completely.

Uninstalling vdsm and its 500 dependencies is safe; it is not needed
on the engine side unless you are using the same host as engine and
hypervisor host (all-in-one setup).

But using maintenance windows is always safer.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 

[ovirt-users] Re: Upgrade oVirt Host from 4.4.0 to 4.4.2 fails

2020-10-13 Thread Dana Elfassy
Hi,
Fixes were added in the 2 tasks that were mentioned:
1. The command that we run to check for updates:
https://gerrit.ovirt.org/#/c/110713/
2. Making sure that cache is up to date is done only once and not for each
package: https://gerrit.ovirt.org/#/c/111419/
Thanks,
Dana
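
Until those fixes land, a hedged variant of the workaround quoted below
(skipping the .src entries that cannot be reinstalled) could look like:

# Sketch only: build the update list while dropping source-RPM entries.
yum check-update -q \
    | awk 'NF && $1 !~ /\.src$/ {print $1}' \
    >> /tmp/yum_updates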


On Mon, Oct 12, 2020 at 10:52 AM Yedidyah Bar David  wrote:

> On Sat, Oct 3, 2020 at 1:04 AM Erez Zarum  wrote:
> >
> > Seems the problem, at least part of it (because still, it doesn't get to
> the part of creating the imgbase layer) is related to the /tmp/yum_updates
> file.
> >
> /usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-host-check-upgrade/tasks/main.yml
> > yum check-update -q | cut -d ' ' -f1 | sed '/^$/d' >> /tmp/yum_updates
> > For some reason it also lists those packages:
> > gluster-ansible-cluster.src
> > gluster-ansible-infra.src
> > gluster-ansible-maintenance.src
> > gluster-ansible-roles.src
> >
> > Which do not exist; those are source RPMs, so the ansible playbook for
> the host upgrade fails.
> >
> > I did a quick workaround and added an "egrep -v src"
> > yum check-update -q | egrep -v src | cut -d ' ' -f1 | sed '/^$/d' >>
> /tmp/yum_updates
> >
> > So now the ansible-playbook doesn't fail; it says the upgrade was
> successful and the host reboots, but it doesn't get upgraded.
> >
> > Also, I noticed that the playbook makes sure that the dnf cache is
> up to date (update_cache is set to true) when first checking the ovirt-host
> package, but it also does this for every single package in the following task,
> so there's no need for update_cache there.
>
> This is tracked in: https://bugzilla.redhat.com/show_bug.cgi?id=1880962
>
> >
> > As a workaround to upgrade to 4.4.2 I have to reinstall every host with
> oVirt Node 4.4.2, as it seems the upgrade process is broken.
>
> Adding Dana.
>
> Would you like to open a bug about this? Not sure about the exact flow
> - I guess it's not strictly about having src packages but about having
> packages that are not re-installable (meaning, you installed them not
> from a repo, or removed the repo after installation, or they were
> removed from their repo, etc.).
>
> Thanks and best regards,
> --
> Didi
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FZEODPMSVPR27FQZLEXQ7L6NTWKRPYMA/


[ovirt-users] Re: Imageio Daemon not listening on port 54323

2020-10-13 Thread Tim Bordemann via Users


> On 13. Oct 2020, at 10:01, Vojtech Juranek  wrote:
> 
> On Tuesday, October 13, 2020 at 9:37:11 CEST Tim Bordemann wrote:
>>> On 12. Oct 2020, at 12:15, Vojtech Juranek  wrote:
>>> 
>>> On pátek 9. října 2020 19:02:32 CEST tim-nospam--- via Users wrote:
 Hello.
 
 After an upgrade I am not able to upload images anymore via the ovirt ui.
 When testing the connection, I always get the error message "Connection
 to
 ovirt-imageio-proxy service has failed. Make sure the service is
 installed,
 configured, and ovirt-engine certificate is registered as a valid CA in
 the
 browser.".
 
 I found out that the imageio daemon doesn't listen on port 54323 anymore,
 so the browser can not connect to it. The daemon is configured to listen
 on port 54323 though:
 
 # cat /etc/ovirt-imageio/conf.d/50-engine.conf
 [...]
 [remote]
 port = 54323
 [...]
 
 The imageio daemon has been started successfully on the engine host as
 well
 as on the other hosts.
 
 I am currently stuck, what should I do next?
 The ovirt version I am using is 4.4.
>>> 
> >>> what is the exact version of imageio? (rpm -qa|grep imageio)
>> 
>> # rpm -qa|grep imageio
>> ovirt-engine-setup-plugin-imageio-4.4.2.6-1.el8.noarch
>> ovirt-imageio-daemon-2.0.10-1.el8.x86_64
>> ovirt-imageio-common-2.0.10-1.el8.x86_64
>> ovirt-imageio-client-2.0.10-1.el8.x86_64
>> 
> >>> On which port does imageio listen? You can use e.g. netstat etc. Also please
> >>> check imageio logs (/var/log/ovirt-imageio/daemon.log), what is there,
> >>> there should
>>> be something like this:
>> # netstat -tulpn | grep 543
> >> tcp    0      0 0.0.0.0:54322     0.0.0.0:*    LISTEN    2527872/platform-py
> >> tcp    0      0 127.0.0.1:54324   0.0.0.0:*    LISTEN    2527872/platform-py
>>> 2020-10-08 08:37:48,906 INFO(MainThread) [services] remote.service
>>> listening on ('::', 54323)
>> 
>> 2020-10-09 17:46:38,216 INFO(MainThread) [server] Starting (pid=2527872,
>> version=2.0.10) 2020-10-09 17:46:38,220 INFO(MainThread) [services]
>> remote.service listening on ('0.0.0.0', 54322) 2020-10-09 17:46:38,221 INFO
>>   (MainThread) [services] control.service listening on ('127.0.0.1',
>> 54324) 2020-10-09 17:46:38,227 INFO(MainThread) [server] Ready for
>> requests
>> 
> >> No entries for port 54323 in the last 3 months. I found log entries in July
>> though:
>> 
>> [root@helios ~]# cat /var/log/ovirt-imageio/daemon.log | grep 54323
>> 2020-07-11 10:13:24,777 INFO(MainThread) [services] remote.service
>> listening on ('::', 54323) [...]
>> 2020-07-16 19:54:13,398 INFO(MainThread) [services] remote.service
>> listening on ('::', 54323) 2020-07-16 19:54:36,715 INFO(MainThread)
>> [services] remote.service listening on ('::', 54323)
>>> Also, please check if there are any other config files (*.conf) in
> >>> /etc/ovirt-imageio/conf.d or in /usr/lib/ovirt-imageio/conf.d
>> 
>> I have, but I couldn't find anything interesting in those two files:
>> 
>> # ls -l /etc/ovirt-imageio/conf.d/
>> total 8
>> -rw-r--r--. 1 root root 1458 Oct  9 17:54 50-engine.conf
>> -rw-r--r--. 1 root root 1014 Sep 15 11:16 50-vdsm.conf
> 
> 50-vdsm.conf overwrites 50-engine.conf (later config taken alphabetically 
> overwrites previous config if there is any).
> 
> If you don't use engine as a host at the same time, stop and uninstall vdsm 
> from engine (should remove also 50-vdsm.conf) and restart ovirt-imageio 
> service.
> 
> If you use engine as a host at the same time, note that this is unsupported. 
> However, there were some patches in this area recently, but they are not 
> released yet AFAICT. See [1, 2] for more details.
> 
> [1] https://bugzilla.redhat.com/1871348
> [2] 
> https://lists.ovirt.org/archives/list/users@ovirt.org/thread/W4OTINLXYDWG3YSF2OUQU3NW3ADRPGUR/

Uninstalling vdsm would also remove nearly 500 packages on the machine. I have 
disabled and masked the vdsm service for now and restarted the imageio service. 
Uploading an image via the webui now works again.
I will remove vdsm during a maintenance window and maybe even reinstall the 
machine completely.

Thanks for your help!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AOYV6Z4WEVMTJKUFB3S32ZHNUGFXA5ZK/


[ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & frustrated]

2020-10-13 Thread Yedidyah Bar David
Please share all of /var/log/ovirt-hosted-engine-setup:

cd /var/log/
tar czf ovirt-hosted-engine-setup.tar.gz ovirt-hosted-engine-setup

Then upload ovirt-hosted-engine-setup.tar.gz to some file sharing
service (e.g. dropbox, google drive etc.) and share the link.

Thanks!

On Tue, Oct 13, 2020 at 10:56 AM  wrote:
>
> Hope this can help. It seems it crashes every time I install.
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6OQZM2KYHQ622XUBYCTVZLQZ4AGLKT2R/


[ovirt-users] Re: Imageio Daemon not listening on port 54323

2020-10-13 Thread Vojtech Juranek
On Tuesday, October 13, 2020 at 9:37:11 CEST Tim Bordemann wrote:
> > On 12. Oct 2020, at 12:15, Vojtech Juranek  wrote:
> > 
> > On Friday, October 9, 2020 at 19:02:32 CEST tim-nospam--- via Users wrote:
> >> Hello.
> >> 
> >> After an upgrade I am not able to upload images anymore via the ovirt ui.
> >> When testing the connection, I always get the error message "Connection
> >> to
> >> ovirt-imageio-proxy service has failed. Make sure the service is
> >> installed,
> >> configured, and ovirt-engine certificate is registered as a valid CA in
> >> the
> >> browser.".
> >> 
> >> I found out that the imageio daemon doesn't listen on port 54323 anymore,
> >> so the browser can not connect to it. The daemon is configured to listen
> >> on port 54323 though:
> >> 
> >> # cat /etc/ovirt-imageio/conf.d/50-engine.conf
> >> [...]
> >> [remote]
> >> port = 54323
> >> [...]
> >> 
> >> The imageio daemon has been started successfully on the engine host as
> >> well
> >> as on the other hosts.
> >> 
> >> I am currently stuck, what should I do next?
> >> The ovirt version I am using is 4.4.
> > 
> > what is the exact version of imageio? (rpm -qa|grep imageio)
> 
> # rpm -qa|grep imageio
> ovirt-engine-setup-plugin-imageio-4.4.2.6-1.el8.noarch
> ovirt-imageio-daemon-2.0.10-1.el8.x86_64
> ovirt-imageio-common-2.0.10-1.el8.x86_64
> ovirt-imageio-client-2.0.10-1.el8.x86_64
> 
> > On which port does imageio listen? You can use e.g. netstat etc. Also please
> > check imageio logs (/var/log/ovirt-imageio/daemon.log), what is there,
> > there should
> > be something like this:
> # netstat -tulpn | grep 543
> tcp    0      0 0.0.0.0:54322     0.0.0.0:*    LISTEN    2527872/platform-py
> tcp    0      0 127.0.0.1:54324   0.0.0.0:*    LISTEN    2527872/platform-py
> > 2020-10-08 08:37:48,906 INFO(MainThread) [services] remote.service
> > listening on ('::', 54323)
> 
> 2020-10-09 17:46:38,216 INFO(MainThread) [server] Starting (pid=2527872,
> version=2.0.10) 2020-10-09 17:46:38,220 INFO(MainThread) [services]
> remote.service listening on ('0.0.0.0', 54322) 2020-10-09 17:46:38,221 INFO
>(MainThread) [services] control.service listening on ('127.0.0.1',
> 54324) 2020-10-09 17:46:38,227 INFO(MainThread) [server] Ready for
> requests
> 
> No entries for port 54323 in the last 3 months. I found log entries in July
> though:
> 
> [root@helios ~]# cat /var/log/ovirt-imageio/daemon.log | grep 54323
> 2020-07-11 10:13:24,777 INFO(MainThread) [services] remote.service
> listening on ('::', 54323) [...]
> 2020-07-16 19:54:13,398 INFO(MainThread) [services] remote.service
> listening on ('::', 54323) 2020-07-16 19:54:36,715 INFO(MainThread)
> [services] remote.service listening on ('::', 54323)
> > Also, please check if there are any other config files (*.conf) in
> > /etc/ovirt-imageio/conf.d or in /usr/lib/ovirt-imageio/conf.d
> 
> I have, but I couldn't find anything interesting in those two files:
> 
> # ls -l /etc/ovirt-imageio/conf.d/
> total 8
> -rw-r--r--. 1 root root 1458 Oct  9 17:54 50-engine.conf
> -rw-r--r--. 1 root root 1014 Sep 15 11:16 50-vdsm.conf

50-vdsm.conf overwrites 50-engine.conf (later config taken alphabetically 
overwrites previous config if there is any).
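
As an illustration (a sketch only, assuming the usual vendor-then-/etc,
alphabetically sorted read order), you can print every drop-in's port setting
in that order; the last value shown is the one that wins:

# Hedged sketch: show which drop-in sets the remote port last.
for f in /usr/lib/ovirt-imageio/conf.d/*.conf /etc/ovirt-imageio/conf.d/*.conf; do
    [ -e "$f" ] || continue
    printf '%s -> %s\n' "$f" "$(grep -E '^[[:space:]]*port[[:space:]]*=' "$f" | tail -n1)"
done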

If you don't use engine as a host at the same time, stop and uninstall vdsm from 
engine (should remove also 50-vdsm.conf) and restart ovirt-imageio service.

If you use engine as a host at the same time, note that this is unsupported. 
However, there were some patches in this area recently, but they are not 
released yet AFAICT. See [1, 2] for more details.

[1] https://bugzilla.redhat.com/1871348
[2] 
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/W4OTINLXYDWG3YSF2OUQU3NW3ADRPGUR/

> The imageio daemon should listen on Port 54323:
> 
> 
> # /etc/ovirt-imageio/conf.d/50-engine.conf
> [...]
> [remote]
> # Port cannot be changed as it's currently hardcoded in engine code.
> port = 54323
> [...]
> 
> >> There is one machine running the ovirt
> >> engine and there are 2 additional hosts. The OS on the machines is Centos
> >> 8.
> >> 
> >> 
> >> 
> >> 
> >> Thank you,
> >> Tim
> >> ___
> >> Users mailing list -- users@ovirt.org
> >> To unsubscribe send an email to users-le...@ovirt.org
> >> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >> oVirt Code of Conduct:
> >> https://www.ovirt.org/community/about/community-guidelines/ List
> >> Archives:
> >> https://lists.ovirt.org/archives/list/users@ovirt.org/message/GWIVRHYHNGU
> >> VJ
> >> NSQTYCDQBOG6VFXCZPB/



signature.asc
Description: This is a digitally signed message part.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 

[ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & frustrated]

2020-10-13 Thread Yedidyah Bar David
Hi,

Please share all of /var/log/ovirt-hosted-engine-setup.

The last point in the logs you attached is in task 'Check engine VM health'.
This runs a loop of 'hosted-engine --vm-status --json' but did not
finish (yet?).
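
To see what that loop is waiting for, the same status call can be run by hand
on the host (a sketch; the JSON layout varies between versions):

# Hedged: inspect engine VM health the way the setup loop does.
hosted-engine --vm-status --json | python3 -m json.tool
# Human-readable variant:
hosted-engine --vm-status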

Thanks and best regards,

On Tue, Oct 13, 2020 at 5:35 AM  wrote:
>
> Hope this can help; I am not sure which files are best. Below is the last part of 
> "ovirt-hosted-engine-setup-ansible-bootstrap_local_vm-"
>
>
>
> 2020-10-12 15:05:40,126+0200 INFO ansible task start {'status': 'OK', 
> 'ansible_type': 'task', 'ansible_playbook': 
> '/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 
> 'ansible_task': 'ovirt.hosted_engine_setup : Generate the error message from 
> the engine events'}
>
> 2020-10-12 15:05:40,126+0200 DEBUG ansible on_any args TASK: 
> ovirt.hosted_engine_setup : Generate the error message from the engine events 
> kwargs is_conditional:False
>
> 2020-10-12 15:05:40,126+0200 DEBUG ansible on_any args localhostTASK: 
> ovirt.hosted_engine_setup : Generate the error message from the engine events 
> kwargs
>
> 2020-10-12 15:05:41,045+0200 INFO ansible skipped {'status': 'SKIPPED', 
> 'ansible_type': 'task', 'ansible_playbook': 
> '/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 
> 'ansible_task': 'Generate the error message from the engine events', 
> 'ansible_host': 'localhost'}
>
> 2020-10-12 15:05:41,045+0200 DEBUG ansible on_any args 
>  kwargs
>
> 2020-10-12 15:05:41,991+0200 INFO ansible task start {'status': 'OK', 
> 'ansible_type': 'task', 'ansible_playbook': 
> '/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 
> 'ansible_task': 'ovirt.hosted_engine_setup : Fail with error description'}
>
> 2020-10-12 15:05:41,992+0200 DEBUG ansible on_any args TASK: 
> ovirt.hosted_engine_setup : Fail with error description kwargs 
> is_conditional:False
>
> 2020-10-12 15:05:41,992+0200 DEBUG ansible on_any args localhostTASK: 
> ovirt.hosted_engine_setup : Fail with error description kwargs
>
> 2020-10-12 15:05:42,917+0200 INFO ansible skipped {'status': 'SKIPPED', 
> 'ansible_type': 'task', 'ansible_playbook': 
> '/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 
> 'ansible_task': 'Fail with error description', 'ansible_host': 'localhost'}
>
> 2020-10-12 15:05:42,917+0200 DEBUG ansible on_any args 
>  kwargs
>
> 2020-10-12 15:05:43,865+0200 INFO ansible task start {'status': 'OK', 
> 'ansible_type': 'task', 'ansible_playbook': 
> '/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 
> 'ansible_task': 'ovirt.hosted_engine_setup : Fail with generic error'}
>
> 2020-10-12 15:05:43,865+0200 DEBUG ansible on_any args TASK: 
> ovirt.hosted_engine_setup : Fail with generic error kwargs 
> is_conditional:False
>
> 2020-10-12 15:05:43,865+0200 DEBUG ansible on_any args localhostTASK: 
> ovirt.hosted_engine_setup : Fail with generic error kwargs
>
> 2020-10-12 15:05:44,780+0200 INFO ansible skipped {'status': 'SKIPPED', 
> 'ansible_type': 'task', 'ansible_playbook': 
> '/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 
> 'ansible_task': 'Fail with generic error', 'ansible_host': 'localhost'}
>
> 2020-10-12 15:05:44,781+0200 DEBUG ansible on_any args 
>  kwargs
>
> 2020-10-12 15:05:44,782+0200 INFO ansible stats {
>
> "ansible_playbook": 
> "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
>
> "ansible_playbook_duration": "13:34 Minutes",
>
> "ansible_result": "type: \nstr: {'localhost': {'ok': 224, 
> 'failures': 0, 'unreachable': 0, 'changed': 73, 'skipped': 102, 'rescued': 0, 
> 'ignored': 1}}",
>
> "ansible_type": "finish",
>
> "status": "FAILED"
>
> }
>
> 2020-10-12 15:05:44,782+0200 INFO SUMMARY:
>
> Duration  Task Name
>
>   
>
> [ < 1 sec ] Execute just a specific set of steps
>
> [  00:01  ] Force facts gathering
>
> [  00:02  ] Install oVirt Hosted Engine packages
>
> [ < 1 sec ] System configuration validations
>
> [ < 1 sec ] Detecting interface on existing management bridge
>
> [ < 1 sec ] Generate output list
>
> [ < 1 sec ] Collect interface types
>
> [ < 1 sec ] Get list of Team devices
>
> [ < 1 sec ] Filter unsupported interface types
>
> [ < 1 sec ] Prepare getent key
>
> [ < 1 sec ] Get full hostname
>
> [ < 1 sec ] Get host address resolution
>
> [ < 1 sec ] Parse host address resolution
>
> [ < 1 sec ] Get target address from selected interface (IPv4)
>
> [ < 1 sec ] Get target address from selected interface (IPv6)
>
> [ < 1 sec ] Check for alias
>
> [ < 1 sec ] Filter resolved address list
>
> [ < 1 sec ] Get engine FQDN resolution
>
> [ < 1 sec ] Parse engine he_fqdn resolution
>
> [ < 1 sec ] Define he_cloud_init_host_name
>
> [ < 1 sec ] Get uuid
>
> [ < 1 sec ] Set he_vm_uuid
>
> [ < 1 sec ] Get uuid
>
> [ < 1 sec ] Set he_nic_uuid
>
> [ < 1 sec ] Get uuid
>
> [ < 1 sec ] Set he_cdrom_uuid
>
> [ < 1 sec ] get timezone
>

[ovirt-users] Re: Imageio Daemon not listening on port 54323

2020-10-13 Thread Tim Bordemann via Users


> On 12. Oct 2020, at 12:15, Vojtech Juranek  wrote:
> 
> On Friday, October 9, 2020 at 19:02:32 CEST tim-nospam--- via Users wrote:
>> Hello.
>> 
>> After an upgrade I am not able to upload images anymore via the ovirt ui.
>> When testing the connection, I always get the error message "Connection to
>> ovirt-imageio-proxy service has failed. Make sure the service is installed,
>> configured, and ovirt-engine certificate is registered as a valid CA in the
>> browser.".
> 
>> I found out that the imageio daemon doesn't listen on port 54323 anymore, so
>> the browser can not connect to it. The daemon is configured to listen on
>> port 54323 though:
> 
>> # cat /etc/ovirt-imageio/conf.d/50-engine.conf
>> [...]
>> [remote]
>> port = 54323
>> [...]
>> 
>> The imageio daemon has been started successfully on the engine host as well
>> as on the other hosts.
> 
>> I am currently stuck, what should I do next?
>> The ovirt version I am using is 4.4.
> 
> what is the exact version of imageio? (rpm -qa|grep imageio)

# rpm -qa|grep imageio
ovirt-engine-setup-plugin-imageio-4.4.2.6-1.el8.noarch
ovirt-imageio-daemon-2.0.10-1.el8.x86_64
ovirt-imageio-common-2.0.10-1.el8.x86_64
ovirt-imageio-client-2.0.10-1.el8.x86_64

> On which port does imageio listen? You can use e.g. netstat etc. Also please check 
> imageio logs (/var/log/ovirt-imageio/daemon.log), what is there, there should 
> be something like this:

# netstat -tulpn | grep 543
tcp    0      0 0.0.0.0:54322     0.0.0.0:*    LISTEN    2527872/platform-py
tcp    0      0 127.0.0.1:54324   0.0.0.0:*    LISTEN    2527872/platform-py

> 2020-10-08 08:37:48,906 INFO(MainThread) [services] remote.service 
> listening on ('::', 54323)

2020-10-09 17:46:38,216 INFO(MainThread) [server] Starting (pid=2527872, 
version=2.0.10)
2020-10-09 17:46:38,220 INFO(MainThread) [services] remote.service 
listening on ('0.0.0.0', 54322)
2020-10-09 17:46:38,221 INFO(MainThread) [services] control.service 
listening on ('127.0.0.1', 54324)
2020-10-09 17:46:38,227 INFO(MainThread) [server] Ready for requests

No entries for port 54323 in the last 3 months. I found log entries in July 
though:

[root@helios ~]# cat /var/log/ovirt-imageio/daemon.log | grep 54323
2020-07-11 10:13:24,777 INFO(MainThread) [services] remote.service 
listening on ('::', 54323)
[...]
2020-07-16 19:54:13,398 INFO(MainThread) [services] remote.service 
listening on ('::', 54323)
2020-07-16 19:54:36,715 INFO(MainThread) [services] remote.service 
listening on ('::', 54323)

> 
> Also, please check if there are any other config files (*.conf) in
> /etc/ovirt-imageio/conf.d or in /usr/lib/ovirt-imageio/conf.d

I have, but I couldn't find anything interesting in those two files:

# ls -l /etc/ovirt-imageio/conf.d/
total 8
-rw-r--r--. 1 root root 1458 Oct  9 17:54 50-engine.conf
-rw-r--r--. 1 root root 1014 Sep 15 11:16 50-vdsm.conf

The imageio daemon should listen on Port 54323:


# /etc/ovirt-imageio/conf.d/50-engine.conf
[...]
[remote]
# Port cannot be changed as it's currently hardcoded in engine code.
port = 54323
[...]

> 
>> There is one machine running the ovirt
>> engine and there are 2 additional hosts. The OS on the machines is Centos
>> 8.
> 
> 
> 
> 
>> Thank you,
>> Tim
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/ List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/GWIVRHYHNGUVJ
>> NSQTYCDQBOG6VFXCZPB/
> 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DIDPODSIFQ3S5W7IOV2XBQD3SYX2UAOH/


[ovirt-users] Re: Imageio Daemon not listening on port 54323

2020-10-13 Thread Tim Bordemann via Users


> On 12. Oct 2020, at 13:56, Nir Soffer  wrote:
> 
> 
> 
> On Sun, Oct 11, 2020, 11:58 tim-nospam--- via Users  wrote:
> Hello.
> 
> After an upgrade I am not able to upload images anymore via the ovirt ui. 
> When testing the connection, I always get the error message "Connection to 
> ovirt-imageio-proxy service has failed. Make sure the service is installed, 
> configured, and ovirt-engine certificate is registered as a valid CA in the 
> browser.".
> 
> I found out that the imageio daemon doesn't listen on port 54323 anymore, so 
> the browser can not connect to it. The daemon is configured to listen on port 
> 54323 though:
> 
> # cat /etc/ovirt-imageio/conf.d/50-engine.conf
> [...]
> [remote]
> port = 54323
> [...]
> 
> The imageio daemon has been started successfully on the engine host as well 
> as on the other hosts.
> 
> I am currently stuck, what should I do next?
> The ovirt version I am using is 4.4.
> 
> 4.4 is not specific enough, can you share the complete package versions?

Sure. The list of installed ovirt packages is quite long, I hope the shortened 
list is enough though.

# rpm -qa | grep ovirt-
ovirt-openvswitch-ovn-common-2.11-0.2020061801.el8.noarch
ovirt-hosted-engine-setup-2.4.6-1.el8.noarch
ovirt-openvswitch-2.11-0.2020061801.el8.noarch
ovirt-engine-metrics-1.4.1.1-1.el8.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.4.2.6-1.el8.noarch
ovirt-engine-tools-backup-4.4.1.8-1.el8.noarch
ovirt-ansible-repositories-1.2.5-1.el8.noarch
ovirt-engine-dwh-setup-4.4.2.1-1.el8.noarch
ovirt-engine-setup-plugin-cinderlib-4.4.2.6-1.el8.noarch
ovirt-engine-extension-aaa-jdbc-1.2.0-1.el8.noarch
ovirt-engine-dbscripts-4.4.1.8-1.el8.noarch
ovirt-engine-tools-4.4.1.8-1.el8.noarch
ovirt-engine-setup-4.4.2.6-1.el8.noarch
[...]

> There is one machine running the ovirt engine and there are 2 additional 
> hosts. The OS on the machines is Centos 8.
> 
> Do you use the engine host as another hypervisor, or is it only used for the engine?

I don't use the ovirt engine as a hypervisor.

> If you used engine host as hypervisor in the past, you will have vdsm 
> configuration (60-vdsm.conf) overriding engine configuration.

So vdsm shouldn't be running on the engine host?
Even though I didn't intend to configure the engine host as a hypervisor, 
there is vdsm running on it and there is also a config file in the imageio 
config folder:

# ls -l /etc/ovirt-imageio/conf.d/
total 8
-rw-r--r--. 1 root root 1458 Oct  9 17:54 50-engine.conf
-rw-r--r--. 1 root root 1014 Sep 15 11:16 50-vdsm.conf
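
A quick way to cross-check both the drop-ins and what is actually bound
(a sketch; 54322/54323/54324 are the usual imageio ports):

ls -l /etc/ovirt-imageio/conf.d/ /usr/lib/ovirt-imageio/conf.d/ 2>/dev/null
ss -tlnp | grep -E ':5432[234]'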

> Nir
> 
> 
> Thank you,
> Tim
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/GWIVRHYHNGUVJNSQTYCDQBOG6VFXCZPB/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6JRNR5MDCDDTTF3LPBUPC2AHDWNO5JRC/