[ovirt-users] Re: oVirt 4.3 DWH with Grafana

2021-09-16 Thread Tommy Sway
So, 4.3 cannot resolve it?

Thanks.



-Original Message-
From: users-boun...@ovirt.org  On Behalf Of Kiran 
Rajendra
Sent: Thursday, September 16, 2021 4:54 PM
To: users@ovirt.org
Subject: [ovirt-users] Re: oVirt 4.3 DWH with Grafana

Hi Tommy,

In OLVM/oVirt 4.3, Column count_threads_as_cores is not present in table 
cluster_configuration

ovirt_engine_history=# \d cluster_configuration
                               Table "public.cluster_configuration"
              Column              |           Type           | Collation | Nullable |                Default
----------------------------------+--------------------------+-----------+----------+----------------------------------------
 history_id                       | integer                  |           | not null | nextval('configuration_seq'::regclass)
 cluster_id                       | uuid                     |           | not null |
 cluster_name                     | character varying(40)    |           | not null |
 cluster_description              | character varying(4000)  |           |          |
 datacenter_id                    | uuid                     |           |          |
 cpu_name                         | character varying(255)   |           |          |
 compatibility_version            | character varying(40)    |           | not null | '2.2'::character varying
 datacenter_configuration_version | integer                  |           |          |
 create_date                      | timestamp with time zone |           |          |
 update_date                      | timestamp with time zone |           |          |
 delete_date                      | timestamp with time zone |           |          |
Indexes:
    "cluster_configuration_pkey" PRIMARY KEY, btree (history_id)
    "cluster_configuration_cluster_id_idx" btree (cluster_id)
    "cluster_configuration_datacenter_id_idx" btree (datacenter_id)


It is added in oVirt 4.4 release.
https://gerrit.ovirt.org/c/ovirt-dwh/+/114568
https://gerrit.ovirt.org/c/ovirt-dwh/+/114568/11/packaging/dbscripts/create_views_4_4.sql
 

Thank you
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: 
https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JOYM3DBPSHJIQUCBFV5STKSPLG47X4CY/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FYNIMRZX2FY3ZTLX2JADQAWE4XWL3QJZ/


[ovirt-users] Re: oVirt Node - master - CentOS Stream 9 based ISO available for testing

2021-09-16 Thread Sandro Bonazzola
lvchange -a y /dev/mapper/onn-home

and, if needed, activating the remaining mappings as well (grep mapper /etc/fstab
to see them),

and then exiting the rescue shell leads to a node up and running for me.
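
(A rough loop form of the same manual steps, assuming the remaining fstab entries
use /dev/mapper/onn-* names — adjust the pattern to your layout:)

for dev in $(grep -o '/dev/mapper/onn[^[:space:]]*' /etc/fstab); do
    lvchange -a y "$dev"    # activate each mapped LV referenced by fstab
done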



On Thu, Sep 16, 2021 at 6:34 PM Sandro Bonazzola <sbona...@redhat.com> wrote:

> Sounds like we hit https://bugzilla.redhat.com/show_bug.cgi?id=2002640 ,
> so it may be more complicated than expected to get the host up.
>
> On Thu, Sep 16, 2021 at 6:14 PM Sandro Bonazzola <sbona...@redhat.com> wrote:
>
>>
>>
>> On Thu, Sep 16, 2021 at 6:06 PM Gianluca Cecchi <gianluca.cec...@gmail.com> wrote:
>>
>>> On Thu, Sep 16, 2021 at 5:35 PM Sandro Bonazzola 
>>> wrote:
>>>
 Hi,
 I'm still working on it but I have a first ISO ready for giving a first
 run at

 https://resources.ovirt.org/pub/ovirt-master-snapshot-static/iso/ovirt-node-ng-installer/4.5.0-2021091610/el9/ovirt-node-ng-installer-4.5.0-2021091610.el9.iso

 Known limitations:
 - No hosted engine setup available


>>> Nice!
>>> If SHE not available, what would be the procedure to install the
>>> standalone engine before deploying the host?
>>> Or could I try to deploy the node using a 4.4.8 standalone engine in its
>>> own DC/Cluster?
>>>
>>
>> You can give it a run with a 4.4.8 standalone engine in its own
>> DC/Cluster as a start.
>> Or you can deploy a new engine as in the 4.4.8 flow but using
>> https://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm for
>> providing the repositories
>> Please note also these cases have never been tested yet.
>>
>>
>>
>>>
>>> Gianluca
>>>
>>
>>
>> --
>>
>> Sandro Bonazzola
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>
>> Red Hat EMEA 
>>
>> sbona...@redhat.com
>> 
>>
>> *Red Hat respects your work life balance. Therefore there is no need to
>> answer this email out of your office hours.
>> *
>>
>>
>>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
>
> *Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.
> *
>
>
>

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UAHV33VCJN2YDXYVZZAPSYRZLMF6DTJS/


[ovirt-users] Re: oVirt Node - master - CentOS Stream 9 based ISO available for testing

2021-09-16 Thread Sandro Bonazzola
Sounds like we hit https://bugzilla.redhat.com/show_bug.cgi?id=2002640 , so
it may be more complicated than expected to get the host up.

On Thu, Sep 16, 2021 at 6:14 PM Sandro Bonazzola <sbona...@redhat.com> wrote:

>
>
> On Thu, Sep 16, 2021 at 6:06 PM Gianluca Cecchi <gianluca.cec...@gmail.com> wrote:
>
>> On Thu, Sep 16, 2021 at 5:35 PM Sandro Bonazzola 
>> wrote:
>>
>>> Hi,
>>> I'm still working on it but I have a first ISO ready for giving a first
>>> run at
>>>
>>> https://resources.ovirt.org/pub/ovirt-master-snapshot-static/iso/ovirt-node-ng-installer/4.5.0-2021091610/el9/ovirt-node-ng-installer-4.5.0-2021091610.el9.iso
>>>
>>> Known limitations:
>>> - No hosted engine setup available
>>>
>>>
>> Nice!
>> If SHE not available, what would be the procedure to install the
>> standalone engine before deploying the host?
>> Or could I try to deploy the node using a 4.4.8 standalone engine in its
>> own DC/Cluster?
>>
>
> You can give it a run with a 4.4.8 standalone engine in its own DC/Cluster
> as a start.
> Or you can deploy a new engine as in the 4.4.8 flow but using
> https://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm for
> providing the repositories
> Please note also these cases have never been tested yet.
>
>
>
>>
>> Gianluca
>>
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
>
> *Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.
> *
>
>
>

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L72ZW7AECD63XQXUK4M5TMYFGLWKCHAK/


[ovirt-users] Re: oVirt Node - master - CentOS Stream 9 based ISO available for testing

2021-09-16 Thread Sandro Bonazzola
On Thu, Sep 16, 2021 at 6:06 PM Gianluca Cecchi <gianluca.cec...@gmail.com> wrote:

> On Thu, Sep 16, 2021 at 5:35 PM Sandro Bonazzola 
> wrote:
>
>> Hi,
>> I'm still working on it but I have a first ISO ready for giving a first
>> run at
>>
>> https://resources.ovirt.org/pub/ovirt-master-snapshot-static/iso/ovirt-node-ng-installer/4.5.0-2021091610/el9/ovirt-node-ng-installer-4.5.0-2021091610.el9.iso
>>
>> Known limitations:
>> - No hosted engine setup available
>>
>>
> Nice!
> If SHE not available, what would be the procedure to install the
> standalone engine before deploying the host?
> Or could I try to deploy the node using a 4.4.8 standalone engine in its
> own DC/Cluster?
>

You can give it a run with a 4.4.8 standalone engine in its own DC/Cluster
as a start.
Or you can deploy a new engine as in the 4.4.8 flow but using
https://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm for
providing the repositories
Please note also these cases have never been tested yet.
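
(A minimal sketch of that untested flow on the engine machine, assuming the rest of
the 4.4.8 standalone-engine procedure — module streams, answers, and so on — stays
the same:)

dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm
# enable whatever module streams the 4.4.8 install guide lists, then:
dnf install ovirt-engine
engine-setup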



>
> Gianluca
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GOKUJ3QC2ST3H23M4V5RWSYD5XZLOJ22/


[ovirt-users] Re: oVirt Node - master - CentOS Stream 9 based ISO available for testing

2021-09-16 Thread Gianluca Cecchi
On Thu, Sep 16, 2021 at 5:35 PM Sandro Bonazzola 
wrote:

> Hi,
> I'm still working on it but I have a first ISO ready for giving a first
> run at
>
> https://resources.ovirt.org/pub/ovirt-master-snapshot-static/iso/ovirt-node-ng-installer/4.5.0-2021091610/el9/ovirt-node-ng-installer-4.5.0-2021091610.el9.iso
>
> Known limitations:
> - No hosted engine setup available
>
>
Nice!
If SHE is not available, what would be the procedure to install the standalone
engine before deploying the host?
Or could I try to deploy the node using a 4.4.8 standalone engine in its
own DC/Cluster?

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PJY465PTKKOJMCPUQGJEKTUSUFZJGBJ4/


[ovirt-users] oVirt Node - master - CentOS Stream 9 based ISO available for testing

2021-09-16 Thread Sandro Bonazzola
Hi,
I'm still working on it, but I have a first ISO ready for a first run at
https://resources.ovirt.org/pub/ovirt-master-snapshot-static/iso/ovirt-node-ng-installer/4.5.0-2021091610/el9/ovirt-node-ng-installer-4.5.0-2021091610.el9.iso

Known limitations:
- No hosted engine setup available
- No ansible available
- No collectd support
- No cinderlib support

Everything else should be there but it's totally untested.

Help giving it a try and providing feedback is appreciated.

Thanks
-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/X3PDXUXTIGOF4MBVZZCNZ25DUB3IQYBT/


[ovirt-users] [ANN] oVirt 4.4.9 First Release Candidate is now available for testing

2021-09-16 Thread Sandro Bonazzola
oVirt 4.4.9 First Release Candidate is now available for testing

The oVirt Project is pleased to announce the availability of oVirt 4.4.9
First Release Candidate for testing, as of September 16th, 2021.

This update is the ninth in a series of stabilization updates to the 4.4
series.
Documentation

   - If you want to try oVirt as quickly as possible, follow the instructions
     on the Download page.
   - For complete installation, administration, and usage instructions, see
     the oVirt Documentation.
   - For upgrading from a previous version, see the oVirt Upgrade Guide.
   - For a general overview of oVirt, see About oVirt.

Important notes before you try it

Please note this is a pre-release build.

The oVirt Project makes no guarantees as to its suitability or usefulness.

This pre-release must not be used in production.
Installation instructions

For installation instructions and additional information please refer to:

https://ovirt.org/documentation/

This release is available now on x86_64 architecture for:

* Red Hat Enterprise Linux 8.4 or similar

* CentOS Stream 8

This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:

* Red Hat Enterprise Linux 8.4 or similar

* CentOS Stream 8

* oVirt Node 4.4 based on CentOS Stream 8 (available for x86_64 only)

See the release notes [1] for installation instructions and a list of new
features and bugs fixed.

Notes:

- oVirt Appliance is already available based on CentOS Stream 8

- oVirt Node NG is already available based on CentOS Stream 8

Additional Resources:

* Read more about the oVirt 4.4.9 release highlights:
http://www.ovirt.org/release/4.4.9/

* Get more oVirt project updates on Twitter: https://twitter.com/ovirt

* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/


[1] http://www.ovirt.org/release/4.4.9/

[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RQBRMB2NKAF2MPJKVKNVHVB2HYGARBWF/


[ovirt-users] Re: Cannot activate a Storage Domain after an oVirt crash

2021-09-16 Thread Roman Bednar
Make sure the VG name is correct; vgcfgrestore won't complain if the name is wrong.

Also you can check if the backups are enabled on the hosts, to be sure:

# lvmconfig --typeconfig current | egrep "backup|archive"
backup {
backup=1
backup_dir="/etc/lvm/backup"
archive=1
archive_dir="/etc/lvm/archive"


If the backups are not available I'm afraid there's not much you can do at
this point.
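
(Since the config above says both backup and archive are enabled, a quick hedged
check of whether anything was ever written there, using the paths from that output:)

ls -lt /etc/lvm/backup/ /etc/lvm/archive/ | head -20   # newest saved metadata first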

On Thu, Sep 16, 2021 at 2:56 PM  wrote:

> Hi Roman,
>
> Unfortunately, step 1 returns nothing:
>
> kvmr03:~# vgcfgrestore --list Usi3y8-S4eq-EXtl-FA58-MA3K-b4vE-4d9SCp
>No archives found in /etc/lvm/archive
>
> I tried several hosts and noone has a copy.
>
> Any other way to get a backup of the VG?
>
> On 2021-09-16 13:42, Roman Bednar wrote:
> > Hi Nicolas,
> >
> > You can try to recover VG metadata from a backup or archive which lvm
> > automatically creates by default.
> >
> > 1) To list all available backups for given VG:
> >
> > #vgcfgrestore --list Usi3y8-S4eq-EXtl-FA58-MA3K-b4vE-4d9SCp
> >
> > Select the latest one which sounds right, something with a description
> > along the lines of "Created *before* lvremove".
> > You might want to select something older than the latest as lvm does a
> > backup also *after* running some command.
> >
> > 2) Find UUID of your broken PV (filter might not be needed, depends on
> > your local conf):
> >
> > #pvs -o pv_name,pv_uuid --config='devices/filter = ["a|.*|"]'
> > /dev/mapper/36001405063455cf7cd74c20bc06e9304
> >
> > 3) Create a new PV on a different partition or disk (/dev/sdX) using
> > the UUID found in previous step and restorefile option:
> >
> > #pvcreate --uuid <pv_uuid> --restorefile <backup_file> <new_device>
> >
> > 4) Try to display the VG:
> >
> > # vgdisplay Usi3y8-S4eq-EXtl-FA58-MA3K-b4vE-4d9SCp
> >
> > -Roman
> >
> > On Thu, Sep 16, 2021 at 1:47 PM  wrote:
> >
> >> I can also see...
> >>
> >> kvmr03:~# lvs | grep 927f423a-6689-4ddb-8fda-b3375c3bbca3
> >> /dev/mapper/36001405063455cf7cd74c20bc06e9304: Checksum error at
> >> offset 2198927383040
> >> Couldn't read volume group metadata from
> >> /dev/mapper/36001405063455cf7cd74c20bc06e9304.
> >> Metadata location on
> >> /dev/mapper/36001405063455cf7cd74c20bc06e9304 at
> >> 2198927383040 has invalid summary for VG.
> >> Failed to read metadata summary from
> >> /dev/mapper/36001405063455cf7cd74c20bc06e9304
> >> Failed to scan VG from
> >> /dev/mapper/36001405063455cf7cd74c20bc06e9304
> >>
> >> Seems to me like metadata from that VG has been corrupted. Is there
> >> a
> >> way to recover?
> >>
> >> On 2021-09-16 11:19, nico...@devels.es wrote:
> >>> The most relevant log snippet I have found is the following. I
> >> assume
> >>> it cannot scan the Storage Domain, but I'm unsure why, as the
> >> storage
> >>> domain backend is up and running.
> >>>
> >>> 021-09-16 11:16:58,884+0100 WARN  (monitor/219fa16) [storage.LVM]
> >>> Command ['/usr/sbin/lvm', 'vgs', '--config', 'devices {
> >>> preferred_names=["^/dev/mapper/"]  ignore_suspended_devices=1
> >>> write_cache_state=0  disable_after_error_count=3
> >>>
> >>
> >
>
> filter=["a|^/dev/mapper/36001405063455cf7cd74c20bc06e9304$|^/dev/mapper/360014056481868b09dd4d05bee5b4185$|^/dev/mapper/360014057d9d4bc57df046888b8d8b6eb$|^/dev/mapper/360014057e612d2079b649d5b539e5f6a$|^/dev/mapper/360014059b49883b502a4fa9b81add3e4$|^/dev/mapper/36001405acece27e83b547e3a873b19e2$|^/dev/mapper/36001405dc03f6be1b8c42219e8912fbd$|^/dev/mapper/36001405f3ab584afde347d3a8855baf0$|^/dev/mapper/3600c0ff00052a0fe013ec65f0100$|^/dev/mapper/3600c0ff00052a0fe033ec65f0100$|^/dev/mapper/3600c0ff00052a0fe1b40c65f0100$|^/dev/mapper/3600c0ff00052a0fe2294c75f0100$|^/dev/mapper/3600c0ff00052a0fe2394c75f0100$|^/dev/mapper/3600c0ff00052a0fe2494c75f0100$|^/dev/mapper/3600c0ff00052a0fe2594c75f0100$|^/dev/mapper/3600c0ff00052a0fe2694c75f0100$|^/dev/mapper/3600c0ff00052a0fee293c75f0100$|^/dev/mapper/3600c0ff00052a0fee493c75f0100$|^/dev/mapper/3600c0ff00064835b628d30610100$|^/dev/mapper/3600c0ff00064835b628d30610300$|^/dev/mapper/3600c0ff000648
> >>>
> >>
> >
>
> 

[ovirt-users] Re: Cannot activate a Storage Domain after an oVirt crash

2021-09-16 Thread nicolas

Hi Roman,

Unfortunately, step 1 returns nothing:

kvmr03:~# vgcfgrestore --list Usi3y8-S4eq-EXtl-FA58-MA3K-b4vE-4d9SCp
  No archives found in /etc/lvm/archive

I tried several hosts and no one has a copy.

Any other way to get a backup of the VG?

On 2021-09-16 13:42, Roman Bednar wrote:

Hi Nicolas,

You can try to recover VG metadata from a backup or archive which lvm
automatically creates by default.

1) To list all available backups for given VG:

#vgcfgrestore --list Usi3y8-S4eq-EXtl-FA58-MA3K-b4vE-4d9SCp

Select the latest one which sounds right, something with a description
along the lines of "Created *before* lvremove".
You might want to select something older than the latest as lvm does a
backup also *after* running some command.

2) Find UUID of your broken PV (filter might not be needed, depends on
your local conf):

#pvs -o pv_name,pv_uuid --config='devices/filter = ["a|.*|"]'
/dev/mapper/36001405063455cf7cd74c20bc06e9304

3) Create a new PV on a different partition or disk (/dev/sdX) using
the UUID found in previous step and restorefile option:

#pvcreate --uuid <pv_uuid> --restorefile <backup_file> <new_device>

4) Try to display the VG:

# vgdisplay Usi3y8-S4eq-EXtl-FA58-MA3K-b4vE-4d9SCp

-Roman

On Thu, Sep 16, 2021 at 1:47 PM  wrote:


I can also see...

kvmr03:~# lvs | grep 927f423a-6689-4ddb-8fda-b3375c3bbca3
/dev/mapper/36001405063455cf7cd74c20bc06e9304: Checksum error at
offset 2198927383040
Couldn't read volume group metadata from
/dev/mapper/36001405063455cf7cd74c20bc06e9304.
Metadata location on
/dev/mapper/36001405063455cf7cd74c20bc06e9304 at
2198927383040 has invalid summary for VG.
Failed to read metadata summary from
/dev/mapper/36001405063455cf7cd74c20bc06e9304
Failed to scan VG from
/dev/mapper/36001405063455cf7cd74c20bc06e9304

Seems to me like metadata from that VG has been corrupted. Is there
a
way to recover?

On 2021-09-16 11:19, nico...@devels.es wrote:

The most relevant log snippet I have found is the following. I

assume

it cannot scan the Storage Domain, but I'm unsure why, as the

storage

domain backend is up and running.

021-09-16 11:16:58,884+0100 WARN  (monitor/219fa16) [storage.LVM]
Command ['/usr/sbin/lvm', 'vgs', '--config', 'devices {
preferred_names=["^/dev/mapper/"]  ignore_suspended_devices=1
write_cache_state=0  disable_after_error_count=3






filter=["a|^/dev/mapper/36001405063455cf7cd74c20bc06e9304$|^/dev/mapper/360014056481868b09dd4d05bee5b4185$|^/dev/mapper/360014057d9d4bc57df046888b8d8b6eb$|^/dev/mapper/360014057e612d2079b649d5b539e5f6a$|^/dev/mapper/360014059b49883b502a4fa9b81add3e4$|^/dev/mapper/36001405acece27e83b547e3a873b19e2$|^/dev/mapper/36001405dc03f6be1b8c42219e8912fbd$|^/dev/mapper/36001405f3ab584afde347d3a8855baf0$|^/dev/mapper/3600c0ff00052a0fe013ec65f0100$|^/dev/mapper/3600c0ff00052a0fe033ec65f0100$|^/dev/mapper/3600c0ff00052a0fe1b40c65f0100$|^/dev/mapper/3600c0ff00052a0fe2294c75f0100$|^/dev/mapper/3600c0ff00052a0fe2394c75f0100$|^/dev/mapper/3600c0ff00052a0fe2494c75f0100$|^/dev/mapper/3600c0ff00052a0fe2594c75f0100$|^/dev/mapper/3600c0ff00052a0fe2694c75f0100$|^/dev/mapper/3600c0ff00052a0fee293c75f0100$|^/dev/mapper/3600c0ff00052a0fee493c75f0100$|^/dev/mapper/3600c0ff00064835b628d30610100$|^/dev/mapper/3600c0ff00064835b628d30610300$|^/dev/mapper/3600c0ff000648







35b628d30610500$|^/dev/mapper/3600c0ff00064835b638d30610100$|^/dev/mapper/3600c0ff00064835b638d30610300$|^/dev/mapper/3600c0ff00064835b638d30610500$|^/dev/mapper/3600c0ff00064835b638d30610700$|^/dev/mapper/3600c0ff00064835b638d30610900$|^/dev/mapper/3600c0ff00064835b638d30610b00$|^/dev/mapper/3600c0ff00064835cb98f30610100$|^/dev/mapper/3600c0ff00064835cb98f30610300$|^/dev/mapper/3600c0ff00064835cb98f30610500$|^/dev/mapper/3600c0ff00064835cb98f30610700$|^/dev/mapper/3600c0ff00064835cb98f30610900$|^/dev/mapper/3600c0ff00064835cba8f30610100$|^/dev/mapper/3600c0ff00064835cba8f30610300$|^/dev/mapper/3600c0ff00064835cba8f30610500$|^/dev/mapper/3600c0ff00064835cba8f30610700$|^/dev/mapper/3634b35410019574796dcb0e30007$|^/dev/mapper/3634b35410019574796dcdffc0008$|^/dev/mapper/3634b354100195747999c2dc50003$|^/dev/mapper/3634b354100195747999c3c4a0004$|^/dev/mapper/3634b3541001957479c2b9c640001$|^/dev/mapper/3634

b3541001957479c2baba50002$|", "r|.*|"] } global {

locking_type=4

prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 } backup

{

retain_min=50  retain_days=0 }', '--noheadings', '--units', 'b',
'--nosuffix', '--separator', '|', '--ignoreskippedcluster', '-o',




'uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name',

'--select', 'vg_name = 219fa16f-13c9-44e4-a07d-a40c0a7fe206']
succeeded with warnings: ['
/dev/mapper/36001405063455cf7cd74c20bc06e9304: Checksum error at
offset 2198927383040', "  Couldn't read volume group metadata from
/dev/mapper/36001405063455cf7cd74c20bc06e9304.", '  

[ovirt-users] Re: Cannot activate a Storage Domain after an oVirt crash

2021-09-16 Thread Roman Bednar
Hi Nicolas,

You can try to recover VG metadata from a backup or archive which lvm
automatically creates by default.

1) To list all available backups for given VG:

#vgcfgrestore --list Usi3y8-S4eq-EXtl-FA58-MA3K-b4vE-4d9SCp

Select the latest one which sounds right, something with a description
along the lines of "Created *before* lvremove".
You might want to select something older than the latest as lvm does a
backup also *after* running some command.


2) Find UUID of your broken PV (filter might not be needed, depends on your
local conf):

#pvs -o pv_name,pv_uuid --config='devices/filter = ["a|.*|"]'
/dev/mapper/36001405063455cf7cd74c20bc06e9304


3) Create a new PV on a different partition or disk (/dev/sdX) using the
UUID found in previous step and restorefile option:

#pvcreate --uuid <pv_uuid> --restorefile <backup_file> <new_device>


4) Try to display the VG:

# vgdisplay Usi3y8-S4eq-EXtl-FA58-MA3K-b4vE-4d9SCp
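
(A hedged addition: after the pvcreate in 3), the VG metadata itself still has to be
written back from the same backup file before the VG becomes usable again —
<backup_file> is a placeholder and the VG identifier is the one used above:)

vgcfgrestore -f <backup_file> Usi3y8-S4eq-EXtl-FA58-MA3K-b4vE-4d9SCp   # write the saved metadata back
vgchange -a y Usi3y8-S4eq-EXtl-FA58-MA3K-b4vE-4d9SCp                   # then try activating the VG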



-Roman

On Thu, Sep 16, 2021 at 1:47 PM  wrote:

> I can also see...
>
> kvmr03:~# lvs | grep 927f423a-6689-4ddb-8fda-b3375c3bbca3
>/dev/mapper/36001405063455cf7cd74c20bc06e9304: Checksum error at
> offset 2198927383040
>Couldn't read volume group metadata from
> /dev/mapper/36001405063455cf7cd74c20bc06e9304.
>Metadata location on /dev/mapper/36001405063455cf7cd74c20bc06e9304 at
> 2198927383040 has invalid summary for VG.
>Failed to read metadata summary from
> /dev/mapper/36001405063455cf7cd74c20bc06e9304
>Failed to scan VG from /dev/mapper/36001405063455cf7cd74c20bc06e9304
>
>
> Seems to me like metadata from that VG has been corrupted. Is there a
> way to recover?
>
> On 2021-09-16 11:19, nico...@devels.es wrote:
> > The most relevant log snippet I have found is the following. I assume
> > it cannot scan the Storage Domain, but I'm unsure why, as the storage
> > domain backend is up and running.
> >
> > 021-09-16 11:16:58,884+0100 WARN  (monitor/219fa16) [storage.LVM]
> > Command ['/usr/sbin/lvm', 'vgs', '--config', 'devices {
> > preferred_names=["^/dev/mapper/"]  ignore_suspended_devices=1
> > write_cache_state=0  disable_after_error_count=3
> >
>
> filter=["a|^/dev/mapper/36001405063455cf7cd74c20bc06e9304$|^/dev/mapper/360014056481868b09dd4d05bee5b4185$|^/dev/mapper/360014057d9d4bc57df046888b8d8b6eb$|^/dev/mapper/360014057e612d2079b649d5b539e5f6a$|^/dev/mapper/360014059b49883b502a4fa9b81add3e4$|^/dev/mapper/36001405acece27e83b547e3a873b19e2$|^/dev/mapper/36001405dc03f6be1b8c42219e8912fbd$|^/dev/mapper/36001405f3ab584afde347d3a8855baf0$|^/dev/mapper/3600c0ff00052a0fe013ec65f0100$|^/dev/mapper/3600c0ff00052a0fe033ec65f0100$|^/dev/mapper/3600c0ff00052a0fe1b40c65f0100$|^/dev/mapper/3600c0ff00052a0fe2294c75f0100$|^/dev/mapper/3600c0ff00052a0fe2394c75f0100$|^/dev/mapper/3600c0ff00052a0fe2494c75f0100$|^/dev/mapper/3600c0ff00052a0fe2594c75f0100$|^/dev/mapper/3600c0ff00052a0fe2694c75f0100$|^/dev/mapper/3600c0ff00052a0fee293c75f0100$|^/dev/mapper/3600c0ff00052a0fee493c75f0100$|^/dev/mapper/3600c0ff00064835b628d30610100$|^/dev/mapper/3600c0ff00064835b628d30610300$|^/dev/mapper/3600c0ff000648
> >
>
> 35b628d30610500$|^/dev/mapper/3600c0ff00064835b638d30610100$|^/dev/mapper/3600c0ff00064835b638d30610300$|^/dev/mapper/3600c0ff00064835b638d30610500$|^/dev/mapper/3600c0ff00064835b638d30610700$|^/dev/mapper/3600c0ff00064835b638d30610900$|^/dev/mapper/3600c0ff00064835b638d30610b00$|^/dev/mapper/3600c0ff00064835cb98f30610100$|^/dev/mapper/3600c0ff00064835cb98f30610300$|^/dev/mapper/3600c0ff00064835cb98f30610500$|^/dev/mapper/3600c0ff00064835cb98f30610700$|^/dev/mapper/3600c0ff00064835cb98f30610900$|^/dev/mapper/3600c0ff00064835cba8f30610100$|^/dev/mapper/3600c0ff00064835cba8f30610300$|^/dev/mapper/3600c0ff00064835cba8f30610500$|^/dev/mapper/3600c0ff00064835cba8f30610700$|^/dev/mapper/3634b35410019574796dcb0e30007$|^/dev/mapper/3634b35410019574796dcdffc0008$|^/dev/mapper/3634b354100195747999c2dc50003$|^/dev/mapper/3634b354100195747999c3c4a0004$|^/dev/mapper/3634b3541001957479c2b9c640001$|^/dev/mapper/3634
> > b3541001957479c2baba50002$|", "r|.*|"] } global {  locking_type=4
> > prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 } backup {
> > retain_min=50  retain_days=0 }', '--noheadings', '--units', 'b',
> > '--nosuffix', '--separator', '|', '--ignoreskippedcluster', '-o',
> >
> 'uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name',
> > '--select', 'vg_name = 219fa16f-13c9-44e4-a07d-a40c0a7fe206']
> > succeeded with warnings: ['
> > /dev/mapper/36001405063455cf7cd74c20bc06e9304: Checksum error at
> > offset 2198927383040', "  Couldn't read volume group metadata from
> > /dev/mapper/36001405063455cf7cd74c20bc06e9304.", '  Metadata location
> > on /dev/mapper/36001405063455cf7cd74c20bc06e9304 at 2198927383040 has
> > invalid summary for VG.', '  Failed to read metadata summary from
> > 

[ovirt-users] Re: Cannot activate a Storage Domain after an oVirt crash

2021-09-16 Thread nicolas

I can also see...

kvmr03:~# lvs | grep 927f423a-6689-4ddb-8fda-b3375c3bbca3
  /dev/mapper/36001405063455cf7cd74c20bc06e9304: Checksum error at 
offset 2198927383040
  Couldn't read volume group metadata from 
/dev/mapper/36001405063455cf7cd74c20bc06e9304.
  Metadata location on /dev/mapper/36001405063455cf7cd74c20bc06e9304 at 
2198927383040 has invalid summary for VG.
  Failed to read metadata summary from 
/dev/mapper/36001405063455cf7cd74c20bc06e9304

  Failed to scan VG from /dev/mapper/36001405063455cf7cd74c20bc06e9304


Seems to me like metadata from that VG has been corrupted. Is there a 
way to recover?
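
(A read-only probe that sometimes helps before giving up: LVM keeps its metadata as
plain text in the metadata area near the start of the PV, so dumping the first MiB
and paging through the strings may show whether an older, intact copy of the VG
metadata is still on disk — this only reads the device and changes nothing:)

dd if=/dev/mapper/36001405063455cf7cd74c20bc06e9304 bs=1M count=1 2>/dev/null | strings | less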


On 2021-09-16 11:19, nico...@devels.es wrote:

The most relevant log snippet I have found is the following. I assume
it cannot scan the Storage Domain, but I'm unsure why, as the storage
domain backend is up and running.

021-09-16 11:16:58,884+0100 WARN  (monitor/219fa16) [storage.LVM]
Command ['/usr/sbin/lvm', 'vgs', '--config', 'devices {
preferred_names=["^/dev/mapper/"]  ignore_suspended_devices=1
write_cache_state=0  disable_after_error_count=3


filter=["a|^/dev/mapper/36001405063455cf7cd74c20bc06e9304$|^/dev/mapper/360014056481868b09dd4d05bee5b4185$|^/dev/mapper/360014057d9d4bc57df046888b8d8b6eb$|^/dev/mapper/360014057e612d2079b649d5b539e5f6a$|^/dev/mapper/360014059b49883b502a4fa9b81add3e4$|^/dev/mapper/36001405acece27e83b547e3a873b19e2$|^/dev/mapper/36001405dc03f6be1b8c42219e8912fbd$|^/dev/mapper/36001405f3ab584afde347d3a8855baf0$|^/dev/mapper/3600c0ff00052a0fe013ec65f0100$|^/dev/mapper/3600c0ff00052a0fe033ec65f0100$|^/dev/mapper/3600c0ff00052a0fe1b40c65f0100$|^/dev/mapper/3600c0ff00052a0fe2294c75f0100$|^/dev/mapper/3600c0ff00052a0fe2394c75f0100$|^/dev/mapper/3600c0ff00052a0fe2494c75f0100$|^/dev/mapper/3600c0ff00052a0fe2594c75f0100$|^/dev/mapper/3600c0ff00052a0fe2694c75f0100$|^/dev/mapper/3600c0ff00052a0fee293c75f0100$|^/dev/mapper/3600c0ff00052a0fee493c75f0100$|^/dev/mapper/3600c0ff00064835b628d30610100$|^/dev/mapper/3600c0ff00064835b628d30610300$|^/dev/mapper/3600c0ff000648



35b628d30610500$|^/dev/mapper/3600c0ff00064835b638d30610100$|^/dev/mapper/3600c0ff00064835b638d30610300$|^/dev/mapper/3600c0ff00064835b638d30610500$|^/dev/mapper/3600c0ff00064835b638d30610700$|^/dev/mapper/3600c0ff00064835b638d30610900$|^/dev/mapper/3600c0ff00064835b638d30610b00$|^/dev/mapper/3600c0ff00064835cb98f30610100$|^/dev/mapper/3600c0ff00064835cb98f30610300$|^/dev/mapper/3600c0ff00064835cb98f30610500$|^/dev/mapper/3600c0ff00064835cb98f30610700$|^/dev/mapper/3600c0ff00064835cb98f30610900$|^/dev/mapper/3600c0ff00064835cba8f30610100$|^/dev/mapper/3600c0ff00064835cba8f30610300$|^/dev/mapper/3600c0ff00064835cba8f30610500$|^/dev/mapper/3600c0ff00064835cba8f30610700$|^/dev/mapper/3634b35410019574796dcb0e30007$|^/dev/mapper/3634b35410019574796dcdffc0008$|^/dev/mapper/3634b354100195747999c2dc50003$|^/dev/mapper/3634b354100195747999c3c4a0004$|^/dev/mapper/3634b3541001957479c2b9c640001$|^/dev/mapper/3634

b3541001957479c2baba50002$|", "r|.*|"] } global {  locking_type=4
prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 } backup {
retain_min=50  retain_days=0 }', '--noheadings', '--units', 'b',
'--nosuffix', '--separator', '|', '--ignoreskippedcluster', '-o',
'uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name',
'--select', 'vg_name = 219fa16f-13c9-44e4-a07d-a40c0a7fe206']
succeeded with warnings: ['
/dev/mapper/36001405063455cf7cd74c20bc06e9304: Checksum error at
offset 2198927383040', "  Couldn't read volume group metadata from
/dev/mapper/36001405063455cf7cd74c20bc06e9304.", '  Metadata location
on /dev/mapper/36001405063455cf7cd74c20bc06e9304 at 2198927383040 has
invalid summary for VG.', '  Failed to read metadata summary from
/dev/mapper/36001405063455cf7cd74c20bc06e9304', '  Failed to scan VG
from /dev/mapper/36001405063455cf7cd74c20bc06e9304'] (lvm:462)
2021-09-16 11:16:58,909+0100 ERROR (monitor/219fa16) [storage.Monitor]
Setting up monitor for 219fa16f-13c9-44e4-a07d-a40c0a7fe206 failed
(monitor:330)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py",
line 327, in _setupLoop
self._setupMonitor()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py",
line 349, in _setupMonitor
self._produceDomain()
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 159, in 
wrapper

value = meth(self, *a, **kw)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py",
line 367, in _produceDomain
self.domain = sdCache.produce(self.sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line
110, in produce
domain.getRealDomain()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line
51, in getRealDomain
return self._cache._realProduce(self._sdUUID)
  File 

[ovirt-users] Re: Cannot activate a Storage Domain after an oVirt crash

2021-09-16 Thread nicolas
The most relevant log snippet I have found is the following. I assume it 
cannot scan the Storage Domain, but I'm unsure why, as the storage 
domain backend is up and running.


021-09-16 11:16:58,884+0100 WARN  (monitor/219fa16) [storage.LVM] 
Command ['/usr/sbin/lvm', 'vgs', '--config', 'devices {  
preferred_names=["^/dev/mapper/"]  ignore_suspended_devices=1  
write_cache_state=0  disable_after_error_count=3  
filter=["a|^/dev/mapper/36001405063455cf7cd74c20bc06e9304$|^/dev/mapper/360014056481868b09dd4d05bee5b4185$|^/dev/mapper/360014057d9d4bc57df046888b8d8b6eb$|^/dev/mapper/360014057e612d2079b649d5b539e5f6a$|^/dev/mapper/360014059b49883b502a4fa9b81add3e4$|^/dev/mapper/36001405acece27e83b547e3a873b19e2$|^/dev/mapper/36001405dc03f6be1b8c42219e8912fbd$|^/dev/mapper/36001405f3ab584afde347d3a8855baf0$|^/dev/mapper/3600c0ff00052a0fe013ec65f0100$|^/dev/mapper/3600c0ff00052a0fe033ec65f0100$|^/dev/mapper/3600c0ff00052a0fe1b40c65f0100$|^/dev/mapper/3600c0ff00052a0fe2294c75f0100$|^/dev/mapper/3600c0ff00052a0fe2394c75f0100$|^/dev/mapper/3600c0ff00052a0fe2494c75f0100$|^/dev/mapper/3600c0ff00052a0fe2594c75f0100$|^/dev/mapper/3600c0ff00052a0fe2694c75f0100$|^/dev/mapper/3600c0ff00052a0fee293c75f0100$|^/dev/mapper/3600c0ff00052a0fee493c75f0100$|^/dev/mapper/3600c0ff00064835b628d30610100$|^/dev/mapper/3600c0ff00064835b628d30610300$|^/dev/mapper/3600c0ff000648

35b628d30610500$|^/dev/mapper/3600c0ff00064835b638d30610100$|^/dev/mapper/3600c0ff00064835b638d30610300$|^/dev/mapper/3600c0ff00064835b638d30610500$|^/dev/mapper/3600c0ff00064835b638d30610700$|^/dev/mapper/3600c0ff00064835b638d30610900$|^/dev/mapper/3600c0ff00064835b638d30610b00$|^/dev/mapper/3600c0ff00064835cb98f30610100$|^/dev/mapper/3600c0ff00064835cb98f30610300$|^/dev/mapper/3600c0ff00064835cb98f30610500$|^/dev/mapper/3600c0ff00064835cb98f30610700$|^/dev/mapper/3600c0ff00064835cb98f30610900$|^/dev/mapper/3600c0ff00064835cba8f30610100$|^/dev/mapper/3600c0ff00064835cba8f30610300$|^/dev/mapper/3600c0ff00064835cba8f30610500$|^/dev/mapper/3600c0ff00064835cba8f30610700$|^/dev/mapper/3634b35410019574796dcb0e30007$|^/dev/mapper/3634b35410019574796dcdffc0008$|^/dev/mapper/3634b354100195747999c2dc50003$|^/dev/mapper/3634b354100195747999c3c4a0004$|^/dev/mapper/3634b3541001957479c2b9c640001$|^/dev/mapper/3634
b3541001957479c2baba50002$|", 
"r|.*|"] } global {  locking_type=4  prioritise_write_locks=1  
wait_for_locks=1  use_lvmetad=0 } backup {  retain_min=50  retain_days=0 
}', '--noheadings', '--units', 'b', '--nosuffix', '--separator', '|', 
'--ignoreskippedcluster', '-o', 
'uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name', 
'--select', 'vg_name = 219fa16f-13c9-44e4-a07d-a40c0a7fe206'] succeeded 
with warnings: ['  /dev/mapper/36001405063455cf7cd74c20bc06e9304: 
Checksum error at offset 2198927383040', "  Couldn't read volume group 
metadata from /dev/mapper/36001405063455cf7cd74c20bc06e9304.", '  
Metadata location on /dev/mapper/36001405063455cf7cd74c20bc06e9304 at 
2198927383040 has invalid summary for VG.', '  Failed to read metadata 
summary from /dev/mapper/36001405063455cf7cd74c20bc06e9304', '  Failed 
to scan VG from /dev/mapper/36001405063455cf7cd74c20bc06e9304'] 
(lvm:462)
2021-09-16 11:16:58,909+0100 ERROR (monitor/219fa16) [storage.Monitor] 
Setting up monitor for 219fa16f-13c9-44e4-a07d-a40c0a7fe206 failed 
(monitor:330)

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 
327, in _setupLoop

self._setupMonitor()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 
349, in _setupMonitor

self._produceDomain()
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 159, in 
wrapper

value = meth(self, *a, **kw)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 
367, in _produceDomain

self.domain = sdCache.produce(self.sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 110, 
in produce

domain.getRealDomain()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51, 
in getRealDomain

return self._cache._realProduce(self._sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 134, 
in _realProduce

domain = self._findDomain(sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 151, 
in _findDomain

return findMethod(sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 176, 
in _findUnfetchedDomain

raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: 
(u'219fa16f-13c9-44e4-a07d-a40c0a7fe206',)



On 2021-09-16 08:28, Vojtech Juranek wrote:

On Wednesday, 15 September 2021 14:52:27 CEST nico...@devels.es wrote:

Hi,

We're running oVirt 4.3.8 and we recently had a 

[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-09-16 Thread Strahil Nikolov via Users
It should be:
  system_u:object_r:glusterd_brick_t:s0
Best Regards, Strahil Nikolov
 
I'm having this same issue on 4.4.8 with a fresh 3-node install as well.

Same errors as the OP.  

Potentially relevant test command: 

[root@ovirt-node0 ~]# semanage fcontext -a -t glusterd_brick_t 
"/gluster_bricks(/.*)?"
ValueError: Type glusterd_brick_t is invalid, must be a file or device type

Seems like the glusterd selinux fcontexts are missing.  Are they provided by 
glusterd_selinux?

[root@ovirt-node0 ~]# dnf install selinux-policy
Last metadata expiration check: 0:03:51 ago on Wed 15 Sep 2021 11:31:59 AM UTC.
Package selinux-policy-3.14.3-79.el8.noarch is already installed.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7KJYS43SATHCHCRZUHIJA5475SAJ/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LVEKFE6HUHZGIHX36FDQV4KIMTTSQYHF/


[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-09-16 Thread Strahil Nikolov via Users
I think that in the UI there is an option to edit. Find and replace
glusterd_brick_t with system_u:object_r:glusterd_brick_t:s0 and run again.
Best Regards, Strahil Nikolov
 
  On Thu, Sep 16, 2021 at 12:33, bgrif...@affinityplus.org wrote:

I had the same issue with a new 3-node deploy on 4.4.8

[root@ovirt-node0 ~]# dnf list ovirt*
Last metadata expiration check: 0:20:41 ago on Wed 15 Sep 2021 10:54:31 AM UTC.
Installed Packages
ovirt-ansible-collection.noarch                          1.6.2-1.el8            
              @System
ovirt-host.x86_64                                        4.4.8-1.el8            
              @System


TASK [gluster.infra/roles/backend_setup : Set Gluster specific SeLinux context 
on the bricks] ***
failed: [ovirt-node1] (item={'path': '/gluster_bricks/engine', 'lvname': 
'gluster_lv_engine', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": 
"item", "changed": false, "item": {"lvname": "gluster_lv_engine", "path": 
"/gluster_bricks/engine", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: Type 
glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [ovirt-node0] (item={'path': '/gluster_bricks/engine', 'lvname': 
'gluster_lv_engine', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": 
"item", "changed": false, "item": {"lvname": "gluster_lv_engine", "path": 
"/gluster_bricks/engine", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: Type 
glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [ovirt-node2] (item={'path': '/gluster_bricks/engine', 'lvname': 
'gluster_lv_engine', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": 
"item", "changed": false, "item": {"lvname": "gluster_lv_engine", "path": 
"/gluster_bricks/engine", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: Type 
glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [ovirt-node1] (item={'path': '/gluster_bricks/data', 'lvname': 
'gluster_lv_data', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": "item", 
"changed": false, "item": {"lvname": "gluster_lv_data", "path": 
"/gluster_bricks/data", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: Type 
glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [ovirt-node0] (item={'path': '/gluster_bricks/data', 'lvname': 
'gluster_lv_data', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": "item", 
"changed": false, "item": {"lvname": "gluster_lv_data", "path": 
"/gluster_bricks/data", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: Type 
glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [ovirt-node2] (item={'path': '/gluster_bricks/data', 'lvname': 
'gluster_lv_data', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": "item", 
"changed": false, "item": {"lvname": "gluster_lv_data", "path": 
"/gluster_bricks/data", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: Type 
glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [ovirt-node1] (item={'path': '/gluster_bricks/vmstore', 'lvname': 
'gluster_lv_vmstore', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": 
"item", "changed": false, "item": {"lvname": "gluster_lv_vmstore", "path": 
"/gluster_bricks/vmstore", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: 
Type glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [ovirt-node0] (item={'path': '/gluster_bricks/vmstore', 'lvname': 
'gluster_lv_vmstore', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": 
"item", "changed": false, "item": {"lvname": "gluster_lv_vmstore", "path": 
"/gluster_bricks/vmstore", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: 
Type glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [ovirt-node2] (item={'path': '/gluster_bricks/vmstore', 'lvname': 
'gluster_lv_vmstore', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": 
"item", "changed": false, "item": {"lvname": "gluster_lv_vmstore", "path": 
"/gluster_bricks/vmstore", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: 
Type glusterd_brick_t is invalid, must be a file or device type\n"}

NO MORE HOSTS LEFT *

NO MORE HOSTS LEFT *

PLAY RECAP *
ovirt-node0    : ok=52  changed=13  unreachable=0    failed=1    skipped=117  
rescued=0    ignored=1  
ovirt-node1  : ok=51  changed=12  unreachable=0    failed=1    skipped=117  
rescued=0    ignored=1  
ovirt-node2  : ok=51  changed=12  unreachable=0    failed=1    skipped=117  
rescued=0    ignored=1  

Please check /var/log/cockpit/ovirt-dashboard/gluster-deployment.log for more 
informations.




Just to confirm: 

[root@ovirt-node0 ~]# semanage fcontext -a -t glusterd_brick_t 
"/gluster_bricks(/.*)?"
ValueError: Type glusterd_brick_t is invalid, must be a file or device type
[root@ovirt-node0 ~]#
___
Users mailing list -- users@ovirt.org
To 

[ovirt-users] Re: about the power management of the hosts

2021-09-16 Thread Strahil Nikolov via Users
If your host is stale and holds the SPM role, some tasks will fail. Also, the VMs
on such a host won't be recovered unless VM HA is enabled (with a storage lease).
For production, I would set that up. Keep in mind that fencing is issued by another
host in the cluster, so you need a minimum of 2 hosts in a cluster before starting
to set it up and test it.
Best Regards, Strahil Nikolov
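
(A hedged way to sanity-check the BMC side before relying on fencing: oVirt drives
power management through the standard fence agents, so a status query from another
host is a reasonable test. The address and credentials below are placeholders, and
option spellings can vary with the fence-agents version — see fence_ipmilan -h:)

fence_ipmilan --ip=<bmc_address> --username=<user> --password=<password> --action=status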
 
 
  On Thu, Sep 16, 2021 at 10:32, Tommy Sway wrote:   

Everybody, hi:

I would like to ask, after the configuration of power management on the host, 
under what circumstances will work? 

That is, when does the engine send a request to the IPMI module to restart the 
power supply?

  

Is it necessary to configure power management modules in the production 
environment? 

Are there any risks?

  

Thanks!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C3FVP7FH57KOYW7MP2LFKUVQF3ROGL6M/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KC5AO644BRAOLLFWXHG7S2I3HJ4XWQP4/


[ovirt-users] Windows guest VM SSO doesn't work perfectly

2021-09-16 Thread wharton wang
Description of problem:
   I am trying to deploy oVirt 4.3 for Windows guest VMs and encountered the same
problem as https://bugzilla.redhat.com/show_bug.cgi?id=1940707, so I checked this
solution: https://access.redhat.com/solutions/5893381. Oddly, there is no solution
on that page. Another odd point: it is said that oVirt 4.4 will remove SSO support.
Anyway, I just need to find a workaround for this.
   Thanks to Google, I found an ovirtcredprov.dll that works for Windows SSO, but
SSO only works when the VM starts up; it does NOT work after Windows is locked or
signed out. I checked the vdsm log, and vdsm simply doesn't send the desktopLogin
command to the OGA when I unlock or sign in again after a lock/sign-out. How can I
force VDSM to send the desktopLogin command whenever I connect to the Windows VM
console, not just when I power on the VM? One more thing: I don't know whether the
failure below contributes to this problem (SSO does work at power-on). This is the
vdsm log from when I power on the Windows VM:
2021-09-15 16:17:08,766+0800 INFO  (jsonrpc/6) [api.virt] START 
desktopLogin(domain=u'LIDC.LE', username=u'chris', password='') 
from=:::192.168.18.42,47756, flow_id=0e204198-ddee-4dfb-9223-5789b05fc699, 
vmId=719047ed-1a5b-4948-875c-ef1014df392e (api:48)
2021-09-15 16:17:08,766+0800 INFO  (jsonrpc/6) [api.virt] FINISH desktopLogin 
return={'status': {'message': 'Guest agent non-responsive', 'code': 19}} 
from=:::192.168.18.42,47756, flow_id=0e204198-ddee-4dfb-9223-5789b05fc699, 
vmId=719047ed-1a5b-4948-875c-ef1014df392e (api:54)
2021-09-15 16:17:08,767+0800 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call 
VM.desktopLogin failed (error 19) in 0.00 seconds (__init__:312)
2021-09-15 16:17:09,875+0800 WARN  (qgapoller/1) [virt.periodic.VmDispatcher] 
could not run  at 0x7faf604b46e0> on 
['719047ed-1a5b-4948-875c-ef1014df392e'] (periodic:289)
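
(For manual testing, a hedged sketch of invoking the same API directly on the host
with vdsm-client — the verb and parameter names are inferred from the log above and
from the VM.desktopLogin API, so verify them with vdsm-client VM --help first:)

vdsm-client VM desktopLogin vmID=719047ed-1a5b-4948-875c-ef1014df392e \
    domain=LIDC.LE username=chris password=''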
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2SFOBLVOFVRGNWVRL5PK7NMY2DIVX7ZA/


[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-09-16 Thread bgriffis
I'm having this same issue on 4.4.8 with a fresh 3-node install as well.

Same errors as the OP.  

Potentially relevant test command: 

[root@ovirt-node0 ~]# semanage fcontext -a -t glusterd_brick_t 
"/gluster_bricks(/.*)?"
ValueError: Type glusterd_brick_t is invalid, must be a file or device type

Seems like the glusterd selinux fcontexts are missing.  Are they provided by 
glusterd_selinux?

[root@ovirt-node0 ~]# dnf install selinux-policy
Last metadata expiration check: 0:03:51 ago on Wed 15 Sep 2021 11:31:59 AM UTC.
Package selinux-policy-3.14.3-79.el8.noarch is already installed.
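
(A few hedged checks that may help narrow this down — the exact package that is
supposed to ship the glusterd_brick_t type differs between repos, so the names
below are guesses to verify rather than instructions:)

semodule -l | grep -i gluster                     # is any gluster SELinux module loaded?
seinfo -t 2>/dev/null | grep -i glusterd          # does the loaded policy define glusterd types? (seinfo is in setools-console)
dnf list available | grep -i 'gluster.*selinux'   # look for a separate gluster SELinux policy package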
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7KJYS43SATHCHCRZUHIJA5475SAJ/


[ovirt-users] oVirt - No supported package manager found in your system

2021-09-16 Thread German Sandoval
Probably this isn't the place to ask, but I'm testing on an AlmaLinux physical
host, trying to install a standalone instance, and I get this error when I run
engine-setup. I'm following a CentOS Stream guide.

[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
  Configuration files: 
/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf, 
/etc/ovirt-engine-setup.conf.d/10-packaging.conf
  Log file: 
/var/log/ovirt-engine/setup/ovirt-engine-setup-20210915140413-hsjs2f.log
  Version: otopi-1.9.5 (otopi-1.9.5-1.el8)
[ ERROR ] Failed to execute stage 'Environment setup': No supported package 
manager found in your system
[ INFO  ] Stage: Clean up
  Log file is located at 
/var/log/ovirt-engine/setup/ovirt-engine-setup-20210915140413-hsjs2f.log
[ INFO  ] Generating answer file 
'/var/lib/ovirt-engine/setup/answers/20210915140414-setup.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Execution of setup failed
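
(A hedged note on where to look: otopi reports "No supported package manager found"
when neither its yum nor its dnf packager can be used — the debug log below shows
dnfpackagerEnabled set to False — so checking that the python3 dnf bindings import
cleanly is a reasonable first step. Package names here are the usual ones and may
differ on AlmaLinux:)

python3 -c 'import dnf; print(dnf.__file__)'   # the otopi dnf packager needs these python3 bindings
rpm -q python3-dnf otopi-common ovirt-engine-setup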

2021-09-15 14:12:16,421-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV OVESETUP_SYSTEM/groupKvm=str:'kvm'
2021-09-15 14:12:16,421-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV OVESETUP_SYSTEM/groupVmConsole=str:'ovirt-vmconsole'
2021-09-15 14:12:16,421-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV 
OVESETUP_SYSTEM/hostileServices=str:'ovirt-engine-dwhd,ovirt-engine-notifier'
2021-09-15 14:12:16,422-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV OVESETUP_SYSTEM/memCheckEnabled=bool:'True'
2021-09-15 14:12:16,422-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV OVESETUP_SYSTEM/memCheckMinimumMB=int:'4096'
2021-09-15 14:12:16,422-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV OVESETUP_SYSTEM/memCheckRecommendedMB=int:'16384'
2021-09-15 14:12:16,422-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV OVESETUP_SYSTEM/memCheckThreshold=int:'90'
2021-09-15 14:12:16,422-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV OVESETUP_SYSTEM/nfsConfigEnabled=NoneType:'None'
2021-09-15 14:12:16,423-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV OVESETUP_SYSTEM/nfsConfigEnabled_legacyInPostInstall=bool:'False'
2021-09-15 14:12:16,423-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV OVESETUP_SYSTEM/nfsServiceName=NoneType:'None'
2021-09-15 14:12:16,423-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV OVESETUP_SYSTEM/reservedPorts=set:'set()'
2021-09-15 14:12:16,423-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV OVESETUP_SYSTEM/selinuxBooleans=list:'[]'
2021-09-15 14:12:16,423-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV OVESETUP_SYSTEM/selinuxContexts=list:'[]'
2021-09-15 14:12:16,424-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV OVESETUP_SYSTEM/selinuxPorts=list:'[]'
2021-09-15 14:12:16,424-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV OVESETUP_SYSTEM/selinuxRestorePaths=list:'[]'
2021-09-15 14:12:16,424-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV OVESETUP_SYSTEM/shmmax=int:'68719476736'
2021-09-15 14:12:16,424-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV OVESETUP_SYSTEM/userApache=str:'apache'
2021-09-15 14:12:16,424-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV OVESETUP_SYSTEM/userEngine=str:'ovirt'
2021-09-15 14:12:16,425-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV OVESETUP_SYSTEM/userPostgres=str:'postgres'
2021-09-15 14:12:16,425-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV OVESETUP_SYSTEM/userRoot=str:'root'
2021-09-15 14:12:16,425-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV OVESETUP_SYSTEM/userVdsm=str:'vdsm'
2021-09-15 14:12:16,425-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV OVESETUP_SYSTEM/userVmConsole=str:'ovirt-vmconsole'
2021-09-15 14:12:16,426-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV OVESETUP_VMCONSOLE_PROXY_CONFIG/vmconsoleProxyConfig=NoneType:'None'
2021-09-15 14:12:16,426-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV OVESETUP_VMCONSOLE_PROXY_CONFIG/vmconsoleProxyPort=int:''
2021-09-15 14:12:16,426-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV OVESETUP_WSP_RPMDISTRO_PACKAGES=str:'ovirt-engine-websocket-proxy'
2021-09-15 14:12:16,426-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV 
OVESETUP_WSP_RPMDISTRO_PACKAGES_SETUP=str:'ovirt-engine-setup-plugin-websocket-proxy'
2021-09-15 14:12:16,426-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV PACKAGER/dnfDisabledPlugins=list:'[]'
2021-09-15 14:12:16,427-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV PACKAGER/dnfExpireCache=bool:'True'
2021-09-15 14:12:16,427-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV PACKAGER/dnfRollback=bool:'True'
2021-09-15 14:12:16,427-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV PACKAGER/dnfpackagerEnabled=bool:'False'
2021-09-15 14:12:16,427-0400 DEBUG otopi.context context.dumpEnvironment:775 
ENV 

[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-09-16 Thread bgriffis
I had the same issue with a new 3-node deploy on 4.4.8

[root@ovirt-node0 ~]# dnf list ovirt*
Last metadata expiration check: 0:20:41 ago on Wed 15 Sep 2021 10:54:31 AM UTC.
Installed Packages
ovirt-ansible-collection.noarch   1.6.2-1.el8   
   @System
ovirt-host.x86_64 4.4.8-1.el8   
   @System


TASK [gluster.infra/roles/backend_setup : Set Gluster specific SeLinux context 
on the bricks] ***
failed: [ovirt-node1] (item={'path': '/gluster_bricks/engine', 'lvname': 
'gluster_lv_engine', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": 
"item", "changed": false, "item": {"lvname": "gluster_lv_engine", "path": 
"/gluster_bricks/engine", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: Type 
glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [ovirt-node0] (item={'path': '/gluster_bricks/engine', 'lvname': 
'gluster_lv_engine', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": 
"item", "changed": false, "item": {"lvname": "gluster_lv_engine", "path": 
"/gluster_bricks/engine", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: Type 
glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [ovirt-node2] (item={'path': '/gluster_bricks/engine', 'lvname': 
'gluster_lv_engine', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": 
"item", "changed": false, "item": {"lvname": "gluster_lv_engine", "path": 
"/gluster_bricks/engine", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: Type 
glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [ovirt-node1] (item={'path': '/gluster_bricks/data', 'lvname': 
'gluster_lv_data', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": "item", 
"changed": false, "item": {"lvname": "gluster_lv_data", "path": 
"/gluster_bricks/data", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: Type 
glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [ovirt-node0] (item={'path': '/gluster_bricks/data', 'lvname': 
'gluster_lv_data', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": "item", 
"changed": false, "item": {"lvname": "gluster_lv_data", "path": 
"/gluster_bricks/data", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: Type 
glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [ovirt-node2] (item={'path': '/gluster_bricks/data', 'lvname': 
'gluster_lv_data', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": "item", 
"changed": false, "item": {"lvname": "gluster_lv_data", "path": 
"/gluster_bricks/data", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: Type 
glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [ovirt-node1] (item={'path': '/gluster_bricks/vmstore', 'lvname': 
'gluster_lv_vmstore', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": 
"item", "changed": false, "item": {"lvname": "gluster_lv_vmstore", "path": 
"/gluster_bricks/vmstore", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: 
Type glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [ovirt-node0] (item={'path': '/gluster_bricks/vmstore', 'lvname': 
'gluster_lv_vmstore', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": 
"item", "changed": false, "item": {"lvname": "gluster_lv_vmstore", "path": 
"/gluster_bricks/vmstore", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: 
Type glusterd_brick_t is invalid, must be a file or device type\n"}
failed: [ovirt-node2] (item={'path': '/gluster_bricks/vmstore', 'lvname': 
'gluster_lv_vmstore', 'vgname': 'gluster_vg_sdb'}) => {"ansible_loop_var": 
"item", "changed": false, "item": {"lvname": "gluster_lv_vmstore", "path": 
"/gluster_bricks/vmstore", "vgname": "gluster_vg_sdb"}, "msg": "ValueError: 
Type glusterd_brick_t is invalid, must be a file or device type\n"}

NO MORE HOSTS LEFT *

NO MORE HOSTS LEFT *

PLAY RECAP *
ovirt-node0 : ok=52   changed=13   unreachable=0   failed=1   skipped=117   rescued=0   ignored=1
ovirt-node1 : ok=51   changed=12   unreachable=0   failed=1   skipped=117   rescued=0   ignored=1
ovirt-node2 : ok=51   changed=12   unreachable=0   failed=1   skipped=117   rescued=0   ignored=1

Please check /var/log/cockpit/ovirt-dashboard/gluster-deployment.log for more 
informations.




Just to confirm: 

[root@ovirt-node0 ~]# semanage fcontext -a -t glusterd_brick_t 
"/gluster_bricks(/.*)?"
ValueError: Type glusterd_brick_t is invalid, must be a file or device type
[root@ovirt-node0 ~]#
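
For what it's worth, that ValueError usually means the SELinux policy loaded
on the node simply does not define the glusterd_brick_t type, so semanage has
nothing to apply. A rough check/workaround sketch follows; the
glusterfs-selinux package name and its availability in the 4.4.8 node image
are assumptions on my part, not something I have verified:

# is any gluster SELinux policy module loaded at all?
semodule -l | grep -i gluster
semanage fcontext -l | grep glusterd_brick_t

# if the type is missing, loading the policy package and retrying the
# relabel is one possible way forward
dnf install -y glusterfs-selinux        # package name is an assumption
semanage fcontext -a -t glusterd_brick_t "/gluster_bricks(/.*)?"
restorecon -Rv /gluster_bricks

If glusterd_brick_t shows up after the module is loaded, re-running the
wizard may get past this task.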
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 

[ovirt-users] Re: oVirt 4.3 DWH with Grafana

2021-09-16 Thread Kiran Rajendra
Hi Tommy,

In OLVM/oVirt 4.3, Column count_threads_as_cores is not present in table 
cluster_configuration

ovirt_engine_history=# \d cluster_configuration
Table "public.cluster_configuration"
  Column  |   Type   | Collation | 
Nullable |Default
--+--+---+--+
 history_id   | integer  |   | not 
null | nextval('configuration_seq'::regclass)
 cluster_id   | uuid |   | not 
null |
 cluster_name | character varying(40)|   | not 
null |
 cluster_description  | character varying(4000)  |   |  
|
 datacenter_id| uuid |   |  
|
 cpu_name | character varying(255)   |   |  
|
 compatibility_version| character varying(40)|   | not 
null | '2.2'::character varying
 datacenter_configuration_version | integer  |   |  
|
 create_date  | timestamp with time zone |   |  
|
 update_date  | timestamp with time zone |   |  
|
 delete_date  | timestamp with time zone |   |  
|
Indexes:
"cluster_configuration_pkey" PRIMARY KEY, btree (history_id)
"cluster_configuration_cluster_id_idx" btree (cluster_id)
"cluster_configuration_datacenter_id_idx" btree (datacenter_id)


It was added in the oVirt 4.4 release:
https://gerrit.ovirt.org/c/ovirt-dwh/+/114568
https://gerrit.ovirt.org/c/ovirt-dwh/+/114568/11/packaging/dbscripts/create_views_4_4.sql
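
If you want to confirm what your own DWH schema contains (for example after
an upgrade), a quick look at information_schema is enough. A sketch, assuming
the default database name ovirt_engine_history; on a 4.3 engine the
PostgreSQL binaries may live in the rh-postgresql10 software collection:

# list the columns the local DWH actually has
# (on 4.3 you may need: scl enable rh-postgresql10 -- psql ...)
sudo -u postgres psql -d ovirt_engine_history \
  -c "SELECT column_name FROM information_schema.columns
      WHERE table_name = 'cluster_configuration';"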
 

Thank you
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JOYM3DBPSHJIQUCBFV5STKSPLG47X4CY/


[ovirt-users] Windows guest VM SSO doesn't work perfectly

2021-09-16 Thread wharton wang
I am trying to deploy oVirt 4.3 for Windows VMs and encountered the same
problem as https://bugzilla.redhat.com/show_bug.cgi?id=1940707, so I checked
this solution: https://access.redhat.com/solutions/5893381, but there is no
actual fix on that verified-solution page. It seems that oVirt will remove
SSO support for Windows; in any case, I just need a workaround. Thanks to
Google, I found an OvirtCredProv.dll that works, BUT only at VM power-on. It
does NOT work when unlocking or signing in again after the session was locked
or signed off. I checked the VDSM log and found that VDSM does not send the
desktopLogin command to the oVirt Guest Agent (OGA) on unlock/sign-in. How
can I force VDSM to send the desktopLogin command to OGA whenever a console
connection is made, not just at power-on? I am also not sure whether the
failure in the VDSM log below matters; it is from powering the VM on, and SSO
works even though the log says desktopLogin failed.

2021-09-15 16:17:08,766+0800 INFO  (jsonrpc/6) [api.virt] START 
desktopLogin(domain=u'LIDC.LE', username=u'chris', password='') 
from=:::192.168.18.42,47756, flow_id=0e204198-ddee-4dfb-9223-5789b05fc699, 
vmId=719047ed-1a5b-4948-875c-ef1014df392e (api:48)
2021-09-15 16:17:08,766+0800 INFO  (jsonrpc/6) [api.virt] FINISH desktopLogin 
return={'status': {'message': 'Guest agent non-responsive', 'code': 19}} 
from=:::192.168.18.42,47756, flow_id=0e204198-ddee-4dfb-9223-5789b05fc699, 
vmId=719047ed-1a5b-4948-875c-ef1014df392e (api:54)
2021-09-15 16:17:08,767+0800 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call 
VM.desktopLogin failed (error 19) in 0.00 seconds (__init__:312)
2021-09-15 16:17:09,875+0800 WARN  (qgapoller/1) [virt.periodic.VmDispatcher] 
could not run  at 0x7faf604b46e0> on 
['719047ed-1a5b-4948-875c-ef1014df392e'] (periodic:289)
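
One thing that might help narrow this down is sending the desktopLogin verb
to VDSM by hand and watching whether OGA reacts. A rough sketch, assuming
vdsm-client is installed on the 4.3 host; the verb and parameter values below
are simply taken from the VM.desktopLogin call in the log above, and the
empty password is only for illustration:

# send the same desktopLogin request the engine would send, from the host
vdsm-client VM desktopLogin \
    vmID=719047ed-1a5b-4948-875c-ef1014df392e \
    domain=LIDC.LE \
    username=chris \
    password=''

# watch whether the guest agent channel answers
tail -f /var/log/vdsm/vdsm.log | grep -i desktopLogin

If the manual call succeeds while unlocking from the console still does not,
the gap is likely between the console-connect event and the engine rather
than inside VDSM or OGA.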
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XQBUTOBL2SR3JDPLNNFMP43VY7MY6MGU/


[ovirt-users] Re: Cannot activate a Storage Domain after an oVirt crash

2021-09-16 Thread Vojtech Juranek
On Wednesday, 15 September 2021 14:52:27 CEST nico...@devels.es wrote:
> Hi,
> 
> We're running oVirt 4.3.8 and we recently had an oVirt crash after moving
> too many disks between storage domains.
> 
> Concretely, one of the Storage Domains reports status "Unknown", 
> "Total/Free/Guaranteed free spaces" are "[N/A]".
> 
> After trying to activate it in the Domain Center we see messages like 
> these from all of the hosts:
> 
>  VDSM hostX command GetVGInfoVDS failed: Volume Group does not exist: 
> (u'vg_uuid: Usi3y8-S4eq-EXtl-FA58-MA3K-b4vE-4d9SCp',)
> 
> I tried putting the Storage Domain in maintenance and it fails with 
> messages like:
> 
>  Storage Domain iaasb13 (Data Center KVMRojo) was deactivated by 
> system because it's not visible by any of the hosts.
>  Failed to update OVF disks 8661acd1-d1c4-44a0-a4d4-ddee834844e9, OVF 
> data isn't updated on those OVF stores (Data Center KVMRojo, Storage 
> Domain iaasb13).
>  Failed to update VMs/Templates OVF data for Storage Domain iaasb13 
> in Data Center KVMRojo.
> 
> I'm sure the storage domain backend is up and running, and the LUN being 
> exported.
> 
> Any hints on how I can debug this problem and restore the Storage Domain?

I'd suggest sshing to any of the hosts in the given data center and checking
manually whether the device is visible to the host (e.g. using lsblk), and
then looking through /var/log/messages to determine where the problem could be.
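
A minimal checklist for that might look like the sketch below; the UUID
fragment in the grep comes from the GetVGInfoVDS error above, so adjust the
patterns to your own LUN and VG:

# is the LUN visible to the kernel and to multipath?
lsblk
multipath -ll

# does LVM on the host still see the domain's volume group?
vgs -o vg_name,vg_uuid | grep Usi3y8 || echo "VG not visible to LVM"
pvs -o pv_name,vg_name,pv_size

# anything suspicious around iSCSI/FC/multipath/LVM in the system log?
grep -iE 'multipath|iscsi|lvm' /var/log/messages | tail -n 200

Keep in mind that VDSM's LVM filter can hide devices from plain pvs/vgs, so
an empty result is not conclusive on its own; vdsm-client Host getDeviceList
(if available on your hosts) shows what VDSM itself sees.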

 
> Thanks.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/ List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UNXKR7HRCRDTT
> WLEYO6FFM4WOLD6YATW/



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Y6BBLM6M5Z5S25HGLS7N3GJ7OX7M2L6U/


[ovirt-users] about the power management of the hosts

2021-09-16 Thread Tommy Sway
Hi everybody,

I would like to ask: after power management is configured on a host, under
what circumstances does it actually come into play?

That is, when does the engine send a request to the IPMI module to
power-cycle the host?

Is it necessary to configure power management in a production environment?

Are there any risks?

Thanks!
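
In case it helps while waiting for answers: the engine only drives the fence
agent in specific situations (most notably when a host stops responding, and
when you trigger power actions from the administration portal), and you can
sanity-check the BMC side yourself. A rough sketch, assuming an IPMI-capable
BMC and the standard fence-agents/ipmitool packages; the angle-bracket
placeholders are obviously not real values:

# ask the BMC for power status the same way the ipmilan fence agent would
fence_ipmilan --ip=<bmc-address> --username=<user> --password=<pass> \
    --lanplus --action=status

# or with plain ipmitool
ipmitool -I lanplus -H <bmc-address> -U <user> -P <pass> chassis power status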

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C3FVP7FH57KOYW7MP2LFKUVQF3ROGL6M/