[ovirt-users] Re: Update Package Conflict

2021-03-10 Thread Yedidyah Bar David
On Wed, Mar 10, 2021 at 10:16 PM penguin pages  wrote:
>
> well.. I figured the package removal was meant to get rid of the "upgrade
> pending" state, which would then allow engine failover to start working
> but... ya.. don't do that.

If you refer to "Use --allowerasing without fully understanding what's
going to be erased", then I definitely agree - don't do that.

>
> How to destroy engine:
> 1) yum update --allowerasing

What did it remove? If this includes vdsm, it will definitely prevent
starting the engine vm.
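
A quick way to answer that from the host itself (a sketch, assuming the
dnf/yum history database survived the reboot):

dnf history list            # find the ID of the "update --allowerasing" run
dnf history info <ID>       # the "Removed"/"Erase" lines show what was dropped
# If vdsm and the hosted-engine packages are in that list, reinstalling them
# (e.g. "dnf install vdsm ovirt-hosted-engine-ha") comes before anything else.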

> 2) reboot
> 3) no more engine starting.  
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/self-hosted_engine_guide/troubleshooting
>
> Validated services look ok
> [root@thor ~]# systemctl status ovirt-ha-proxy
> Unit ovirt-ha-proxy.service could not be found.
> [root@thor ~]# systemctl status ovirt-ha-agent
> ● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring Agent
>    Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled; vendor preset: disabled)
>    Active: active (running) since Wed 2021-03-10 14:55:17 EST; 14min ago
>  Main PID: 6390 (ovirt-ha-agent)
>     Tasks: 2 (limit: 1080501)
>    Memory: 25.8M
>    CGroup: /system.slice/ovirt-ha-agent.service
>            └─6390 /usr/libexec/platform-python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent
>
> Mar 10 14:55:17 thor.penguinpages.local systemd[1]: Started oVirt Hosted Engine High Availability Monitoring Agent.
> [root@thor ~]# systemctl status -l ovirt-ha-agent
> ● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring Agent
>    Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled; vendor preset: disabled)
>    Active: active (running) since Wed 2021-03-10 14:55:17 EST; 16min ago
>  Main PID: 6390 (ovirt-ha-agent)
>     Tasks: 2 (limit: 1080501)
>    Memory: 25.6M
>    CGroup: /system.slice/ovirt-ha-agent.service
>            └─6390 /usr/libexec/platform-python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent
>
> Mar 10 14:55:17 thor.penguinpages.local systemd[1]: Started oVirt Hosted Engine High Availability Monitoring Agent.
> [root@thor ~]# journalctl -u ovirt-ha-agent
>
> -- Logs begin at Wed 2021-03-10 14:47:34 EST, end at Wed 2021-03-10 15:12:12 EST. --
> Mar 10 14:48:35 thor.penguinpages.local systemd[1]: Started oVirt Hosted Engine High Availability Monitoring Agent.
> Mar 10 14:48:37 thor.penguinpages.local ovirt-ha-agent[3463]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Failed to start necessary monitors
> Mar 10 14:48:37 thor.penguinpages.local ovirt-ha-agent[3463]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 85, in start_monitor

I think this is while trying to connect to ovirt-ha-broker, you might
want to check the status of that one.
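
For example (a sketch; these are the service and log locations shipped with
ovirt-hosted-engine-ha):

systemctl status ovirt-ha-broker
journalctl -u ovirt-ha-broker -b                       # broker messages since boot
tail -n 50 /var/log/ovirt-hosted-engine-ha/broker.log  # the broker's own log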

>     response = self._proxy.start_monitor(type, options)
>   File "/usr/lib64/python3.6/xmlrpc/client.py", line 1112, in __call__
>     return self.__send(self.__name, args)
>   File "/usr/lib64/python3.6/xmlrpc/client.py", line 1452, in __request
>     verbose=self.__verbose
>   File "/usr/lib64/python3.6/xmlrpc/client.py", line 1154, in request
>     return self.single_request(host, handler, request_body, verbose)
>   File "/usr/lib64/python3.6/xmlrpc/client.py", line 1166, in single_request
>     http_conn = self.send_request(host, handler, request_body, verbose)
>   File "/usr/lib64/python3.6/xmlrpc/client.py", line 1279, in send_request
>     self.send_content(connection, request_body)
>   File "/usr/lib64/python3.6/xmlrpc/client.py", line 1309, in send_content
>     connection.endheaders(request_body)
>   File "/usr/lib64/python3.6/http/client.py", line 1264, in endheaders
>     self._send_output(message_body, 

[ovirt-users] Re: Update Package Conflict

2021-03-10 Thread Yedidyah Bar David
On Wed, Mar 10, 2021 at 7:19 PM penguin pages  wrote:
>
> I did make that post, but that was more about how converting CentOS 8 to
> Streams fubar'd my cluster... ya.. still trying to get it back on its feet.
>
> I have been trying to move to IaC-based deployment, but I have kind of given
> up on that, as oVirt seems to really need its last step, the "HCI Wizard":
>
> yum install ovirt-hosted-engine-setup

This is just a wrapper around a set of ansible playbooks/roles. See also:

https://github.com/oVirt/ovirt-ansible-collection/

and specifically:

https://github.com/oVirt/ovirt-ansible-collection/blob/master/roles/hosted_engine_setup/README.md
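
A minimal sketch of driving that role directly, for the IaC route (variable
names follow the role's README; the values and vault references here are
illustrative placeholders, and the role needs more answers than shown):

  # he-deploy.yml
  - name: Deploy hosted engine without the cockpit wizard
    hosts: localhost
    connection: local
    vars:
      he_fqdn: engine.example.com
      he_admin_password: "{{ vault_admin_pw }}"
      he_appliance_password: "{{ vault_appliance_pw }}"
    roles:
      - ovirt.ovirt.hosted_engine_setup

  # run with: ansible-playbook -i localhost, he-deploy.yml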

We also used to have code that used this directly in
ovirt-system-tests. But it was broken for a long time and eventually
removed. Might be revived one day, one can always hope:

https://gerrit.ovirt.org/c/ovirt-system-tests/+/113217

Best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QDTT6QHXO5JOEQB26COD2C3WUYY27Y2R/


[ovirt-users] Re: Update Package Conflict

2021-03-10 Thread penguin pages
well.. I figured the package removal was meant to get rid of the "upgrade
pending" state, which would then allow engine failover to start working
but... ya.. don't do that.

How to destroy engine:
1) yum update --allowerasing 
2) reboot 
3) no more engine starting.  
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/self-hosted_engine_guide/troubleshooting
 

Validated services look ok
[root@thor ~]# systemctl status ovirt-ha-proxy
Unit ovirt-ha-proxy.service could not be found.
[root@thor ~]# systemctl status ovirt-ha-agent
● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring Agent
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-03-10 14:55:17 EST; 14min ago
 Main PID: 6390 (ovirt-ha-agent)
    Tasks: 2 (limit: 1080501)
   Memory: 25.8M
   CGroup: /system.slice/ovirt-ha-agent.service
           └─6390 /usr/libexec/platform-python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent

Mar 10 14:55:17 thor.penguinpages.local systemd[1]: Started oVirt Hosted Engine High Availability Monitoring Agent.
[root@thor ~]# systemctl status -l ovirt-ha-agent
● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring Agent
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-03-10 14:55:17 EST; 16min ago
 Main PID: 6390 (ovirt-ha-agent)
    Tasks: 2 (limit: 1080501)
   Memory: 25.6M
   CGroup: /system.slice/ovirt-ha-agent.service
           └─6390 /usr/libexec/platform-python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent

Mar 10 14:55:17 thor.penguinpages.local systemd[1]: Started oVirt Hosted Engine High Availability Monitoring Agent.
[root@thor ~]# journalctl -u ovirt-ha-agent

-- Logs begin at Wed 2021-03-10 14:47:34 EST, end at Wed 2021-03-10 15:12:12 EST. --
Mar 10 14:48:35 thor.penguinpages.local systemd[1]: Started oVirt Hosted Engine High Availability Monitoring Agent.
Mar 10 14:48:37 thor.penguinpages.local ovirt-ha-agent[3463]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Failed to start necessary monitors
Mar 10 14:48:37 thor.penguinpages.local ovirt-ha-agent[3463]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 85, in start_monitor
    response = self._proxy.start_monitor(type, options)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1112, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1452, in __request
    verbose=self.__verbose
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1154, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1166, in single_request
    http_conn = self.send_request(host, handler, request_body, verbose)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1279, in send_request
    self.send_content(connection, request_body)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1309, in send_content
    connection.endheaders(request_body)
  File "/usr/lib64/python3.6/http/client.py", line 1264, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib64/python3.6/http/client.py", line 1040, in _send_output
    self.send(msg)
  File "/usr/lib64/python3.6/http/client.py", line 978, in send
    self.connect()
  File 

[ovirt-users] Re: How to replace a failed oVirt Hyperconverged Host

2021-03-10 Thread Prajith Kesava Prasad
Hi Ramon,

We have an ansible playbook [2] for replacing a failed host in a
gluster-enabled cluster; do check out the README [1] and see if that would
work for you.

[1]
https://github.com/gluster/gluster-ansible/blob/master/playbooks/hc-ansible-deployment/README#L57
[2]https://github.com/gluster/gluster-ansible
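
If it helps, the invocation is roughly as follows (a sketch; treat the
directory and file names as illustrative and follow the README above for the
authoritative inventory layout and variables):

cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment
# describe the failed/replacement host in the sample inventory, then:
ansible-playbook -i <your_replace_inventory>.yml tasks/replace_node.yml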


Regards,
Prajith Kesava Prasad

On Wed, Mar 10, 2021 at 11:20 PM Ramon Sierra  wrote:

> Hi,
>
> We have a three-host hyperconverged oVirt setup. A few weeks ago one of
> the hosts failed and we lost a RAID5 array on it. We removed it from the
> cluster and repaired it. We are trying to set it up and add it back to the
> cluster, but we are not clear on how to proceed. There is a replica 2
> with 1 arbiter Gluster setup on the cluster, and I have no idea how to
> recreate the LVM partitions and gluster bricks, and then add the host back
> to the cluster in order to start the healing process.
>
> Any help on how to proceed with this scenario will be very welcome.
>
> Ramon
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/YIS5OHW7BMTGIDPD4O3TTB2YEZUVQ3QQ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XGBXL4U4M3T5TQHLQBVZUQCLWIBCADAY/


[ovirt-users] How to replace a failed oVirt Hyperconverged Host

2021-03-10 Thread Ramon Sierra

Hi,

We have a three-host hyperconverged oVirt setup. A few weeks ago one of 
the hosts failed and we lost a RAID5 array on it. We removed it from the 
cluster and repaired it. We are trying to set it up and add it back to the 
cluster, but we are not clear on how to proceed. There is a replica 2 
with 1 arbiter Gluster setup on the cluster, and I have no idea how to 
recreate the LVM partitions and gluster bricks, and then add the host back 
to the cluster in order to start the healing process.


Any help on how to proceed with this scenario will be very welcome.

Ramon
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YIS5OHW7BMTGIDPD4O3TTB2YEZUVQ3QQ/


[ovirt-users] Re: Update Package Conflict

2021-03-10 Thread penguin pages
I did make that post, but that was more about how converting CentOS 8 to
Streams fubar'd my cluster... ya.. still trying to get it back on its feet.

I have been trying to move to IaC-based deployment, but I have kind of given
up on that, as oVirt seems to really need its last step, the "HCI Wizard":

yum install ovirt-hosted-engine-setup

# What I wish: that it would spit out an ansible playbook, so I could copy it
over and run it as a playbook. Same for the "gluster" sub-wizard.
This was sort of posted here:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZTLI55VFCFSK3F7MATAHGJIGRJZBTDLA/

The issue: I have some of the cluster working, but until I can trust that it
is stable and can deploy and maintain VMs, I don't want to move it into
production to take VMs.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WLNGHPUDCQC25NOJNDY43AMB3W7ZORGY/


[ovirt-users] Re: Update Package Conflict

2021-03-10 Thread Yedidyah Bar David
On Wed, Mar 10, 2021 at 4:57 PM penguin pages  wrote:
>
>
> Fresh install of minimal CentOS8
>
> Then deploy:
> - EPEL
> - Add ovirt repo https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
>
> Install all nodes:
> - cockpit-ovirt-dashboard
> - gluster-ansible-roles
> - vdsm-gluster
> - ovirt-host
> - ovirt-ansible-roles
> - ovirt-ansible-infra
>
> Install on "first node of cluster"
> - ovirt-engine-appliance
>
>
>
> Now each node is stuck with the same package conflict error (and this blocks
> GUI "upgrades"):
>
> [root@medusa ~]# yum update
> Last metadata expiration check: 0:55:35 ago on Wed 10 Mar 2021 08:14:22 AM 
> EST.
> Error:
>  Problem 1: package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, 
> but none of the providers can be installed
>   - package cockpit-bridge-238.1-1.el8.x86_64 conflicts with 
> cockpit-dashboard < 233 provided by cockpit-dashboard-217-1.el8.noarch
>   - cannot install the best update candidate for package 
> ovirt-host-4.4.1-4.el8.x86_64
>   - cannot install the best update candidate for package 
> cockpit-bridge-217-1.el8.x86_64
>  Problem 2: problem with installed package ovirt-host-4.4.1-4.el8.x86_64
>   - package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but 
> none of the providers can be installed
>   - package cockpit-system-238.1-1.el8.noarch obsoletes cockpit-dashboard 
> provided by cockpit-dashboard-217-1.el8.noarch
>   - cannot install the best update candidate for package 
> cockpit-dashboard-217-1.el8.noarch
>  Problem 3: package ovirt-hosted-engine-setup-2.4.9-1.el8.noarch requires 
> ovirt-host >= 4.4.0, but none of the providers can be installed
>   - package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but 
> none of the providers can be installed
>   - package ovirt-host-4.4.1-1.el8.x86_64 requires cockpit-dashboard, but 
> none of the providers can be installed
>   - package ovirt-host-4.4.1-2.el8.x86_64 requires cockpit-dashboard, but 
> none of the providers can be installed
>   - package ovirt-host-4.4.1-3.el8.x86_64 requires cockpit-dashboard, but 
> none of the providers can be installed
>   - package cockpit-system-238.1-1.el8.noarch obsoletes cockpit-dashboard 
> provided by cockpit-dashboard-217-1.el8.noarch
>   - cannot install the best update candidate for package 
> ovirt-hosted-engine-setup-2.4.9-1.el8.noarch
>   - cannot install the best update candidate for package 
> cockpit-system-217-1.el8.noarch
> (try to add '--allowerasing' to command line to replace conflicting packages 
> or '--skip-broken' to skip uninstallable packages or '--nobest' to use not 
> only best candidate packages)
> [root@medusa ~]# yum update --allowerasing
> Last metadata expiration check: 0:55:56 ago on Wed 10 Mar 2021 08:14:22 AM 
> EST.
> Dependencies resolved.
> ================================================================================
>  Package                     Architecture   Version         Repository     Size
> ================================================================================
> Upgrading:
>  cockpit-bridge              x86_64         238.1-1.el8     baseos        535 k
>  cockpit-system              noarch         238.1-1.el8     baseos        3.4 M
>      replacing  cockpit-dashboard.noarch 217-1.el8
> Removing dependent packages:
>  cockpit-ovirt-dashboard     noarch         0.14.17-1.el8   @ovirt-4.4     16 M
>  ovirt-host                  x86_64         4.4.1-4.el8     @ovirt-4.4     11 k
>  ovirt-hosted-engine-setup   noarch         2.4.9-1.el8     @ovirt-4.4    1.3 M
>
> Transaction Summary
> ================================================================================

[ovirt-users] Re: Update Package Conflict

2021-03-10 Thread Giorgio Biacchi

On 3/10/21 3:56 PM, penguin pages wrote:


Fresh install of minimal CentOS8

Then deploy:
- EPEL
- Add ovirt repo https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm

Install all nodes:
 - cockpit-ovirt-dashboard
 - gluster-ansible-roles
 - vdsm-gluster
 - ovirt-host
 - ovirt-ansible-roles
 - ovirt-ansible-infra

Install on "first node of cluster"
- ovirt-engine-appliance



Now each node is stuck with the same package conflict error (and this blocks
GUI "upgrades"):

[root@medusa ~]# yum update
Last metadata expiration check: 0:55:35 ago on Wed 10 Mar 2021 08:14:22 AM EST.
Error:
  Problem 1: package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, 
but none of the providers can be installed
   - package cockpit-bridge-238.1-1.el8.x86_64 conflicts with cockpit-dashboard 
< 233 provided by cockpit-dashboard-217-1.el8.noarch
   - cannot install the best update candidate for package 
ovirt-host-4.4.1-4.el8.x86_64
   - cannot install the best update candidate for package 
cockpit-bridge-217-1.el8.x86_64
  Problem 2: problem with installed package ovirt-host-4.4.1-4.el8.x86_64
   - package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but none 
of the providers can be installed
   - package cockpit-system-238.1-1.el8.noarch obsoletes cockpit-dashboard 
provided by cockpit-dashboard-217-1.el8.noarch
   - cannot install the best update candidate for package 
cockpit-dashboard-217-1.el8.noarch
  Problem 3: package ovirt-hosted-engine-setup-2.4.9-1.el8.noarch requires 
ovirt-host >= 4.4.0, but none of the providers can be installed
   - package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but none 
of the providers can be installed
   - package ovirt-host-4.4.1-1.el8.x86_64 requires cockpit-dashboard, but none 
of the providers can be installed
   - package ovirt-host-4.4.1-2.el8.x86_64 requires cockpit-dashboard, but none 
of the providers can be installed
   - package ovirt-host-4.4.1-3.el8.x86_64 requires cockpit-dashboard, but none 
of the providers can be installed
   - package cockpit-system-238.1-1.el8.noarch obsoletes cockpit-dashboard 
provided by cockpit-dashboard-217-1.el8.noarch
   - cannot install the best update candidate for package 
ovirt-hosted-engine-setup-2.4.9-1.el8.noarch
   - cannot install the best update candidate for package 
cockpit-system-217-1.el8.noarch
(try to add '--allowerasing' to command line to replace conflicting packages or 
'--skip-broken' to skip uninstallable packages or '--nobest' to use not only 
best candidate packages)
[root@medusa ~]# yum update --allowerasing
Last metadata expiration check: 0:55:56 ago on Wed 10 Mar 2021 08:14:22 AM EST.
Dependencies resolved.
================================================================================
 Package                     Architecture   Version         Repository     Size
================================================================================
Upgrading:
 cockpit-bridge              x86_64         238.1-1.el8     baseos        535 k
 cockpit-system              noarch         238.1-1.el8     baseos        3.4 M
     replacing  cockpit-dashboard.noarch 217-1.el8
Removing dependent packages:
 cockpit-ovirt-dashboard     noarch         0.14.17-1.el8   @ovirt-4.4     16 M
 ovirt-host                  x86_64         4.4.1-4.el8     @ovirt-4.4     11 k
 ovirt-hosted-engine-setup   noarch         2.4.9-1.el8     @ovirt-4.4    1.3 M

Transaction Summary
================================================================================
Upgrade  2 Packages
Remove   3 Packages



##


[ovirt-users] Re: oVirt 4.3.6 and Security Measures

2021-03-10 Thread scroodj
Ales, Nir thank you for the fast response.

> On Tue, Mar 9, 2021, 14:21 Ales Musil wrote:
> Sanlock uses 0775 for a good reason. Sanlock is started as root, and it needs
> permissions to create the pid file before dropping privileges. It may be
> possible to solve this with a better selinux policy, but nobody has
> contributed this.

> Can you explain what is the actual issue with this configuration?
I got an answer from a colleague to that question:
The user sanlock is still the owner of the folder and should be able to create
files in there, especially since sanlock is started as root. We just want to
lower the rights for the group, which is root. This might be a more or less
abstract potential risk, as a non-root user being a member of group root is
probably not that common. Still, it is standard procedure on our servers that
a user's home folder has r-x for the user's group, and our security check
marks this as a potential risk.
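
For reference, the check in question is along these lines (a sketch; adjust
the path to whatever directory the security scan actually flagged):

ls -ld /run/sanlock                 # shows the 0775 mode under discussion
stat -c '%a %U:%G' /run/sanlock     # mode plus owner:group in one line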

BR
Aleksandr
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NQ77N3WFD4UMIPJDHUNN7MWMGDHDRHKY/


[ovirt-users] Update Package Conflict

2021-03-10 Thread penguin pages

Fresh install of minimal CentOS8

Then deploy: 
- EPEL
- Add ovirt repo https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm

Install all nodes:
- cockpit-ovirt-dashboard
- gluster-ansible-roles 
- vdsm-gluster
- ovirt-host
- ovirt-ansible-roles
- ovirt-ansible-infra

Install on "first node of cluster"
- ovirt-engine-appliance



Now each node is stuck with the same package conflict error (and this blocks
GUI "upgrades"):

[root@medusa ~]# yum update
Last metadata expiration check: 0:55:35 ago on Wed 10 Mar 2021 08:14:22 AM EST.
Error:
 Problem 1: package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, 
but none of the providers can be installed
  - package cockpit-bridge-238.1-1.el8.x86_64 conflicts with cockpit-dashboard 
< 233 provided by cockpit-dashboard-217-1.el8.noarch
  - cannot install the best update candidate for package 
ovirt-host-4.4.1-4.el8.x86_64
  - cannot install the best update candidate for package 
cockpit-bridge-217-1.el8.x86_64
 Problem 2: problem with installed package ovirt-host-4.4.1-4.el8.x86_64
  - package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but none 
of the providers can be installed
  - package cockpit-system-238.1-1.el8.noarch obsoletes cockpit-dashboard 
provided by cockpit-dashboard-217-1.el8.noarch
  - cannot install the best update candidate for package 
cockpit-dashboard-217-1.el8.noarch
 Problem 3: package ovirt-hosted-engine-setup-2.4.9-1.el8.noarch requires 
ovirt-host >= 4.4.0, but none of the providers can be installed
  - package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but none 
of the providers can be installed
  - package ovirt-host-4.4.1-1.el8.x86_64 requires cockpit-dashboard, but none 
of the providers can be installed
  - package ovirt-host-4.4.1-2.el8.x86_64 requires cockpit-dashboard, but none 
of the providers can be installed
  - package ovirt-host-4.4.1-3.el8.x86_64 requires cockpit-dashboard, but none 
of the providers can be installed
  - package cockpit-system-238.1-1.el8.noarch obsoletes cockpit-dashboard 
provided by cockpit-dashboard-217-1.el8.noarch
  - cannot install the best update candidate for package 
ovirt-hosted-engine-setup-2.4.9-1.el8.noarch
  - cannot install the best update candidate for package 
cockpit-system-217-1.el8.noarch
(try to add '--allowerasing' to command line to replace conflicting packages or 
'--skip-broken' to skip uninstallable packages or '--nobest' to use not only 
best candidate packages)
[root@medusa ~]# yum update --allowerasing
Last metadata expiration check: 0:55:56 ago on Wed 10 Mar 2021 08:14:22 AM EST.
Dependencies resolved.
================================================================================
 Package                     Architecture   Version         Repository     Size
================================================================================
Upgrading:
 cockpit-bridge              x86_64         238.1-1.el8     baseos        535 k
 cockpit-system              noarch         238.1-1.el8     baseos        3.4 M
     replacing  cockpit-dashboard.noarch 217-1.el8
Removing dependent packages:
 cockpit-ovirt-dashboard     noarch         0.14.17-1.el8   @ovirt-4.4     16 M
 ovirt-host                  x86_64         4.4.1-4.el8     @ovirt-4.4     11 k
 ovirt-hosted-engine-setup   noarch         2.4.9-1.el8     @ovirt-4.4    1.3 M

Transaction Summary
================================================================================
Upgrade  2 Packages
Remove   3 Packages



##
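
A stopgap that updates the rest of the system without letting dnf erase the
oVirt host stack is to hold the conflicting cockpit packages back until a
fixed ovirt-host lands in the repo (a workaround sketch, not an official fix):

yum update --exclude=cockpit-bridge --exclude=cockpit-system
# or persist it in /etc/dnf/dnf.conf:
# exclude=cockpit-bridge cockpit-system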

Initially I assumed this was down to a non-standard path I was taking.. but 

[ovirt-users] Set host CPU type to kvm64 for a single VM

2021-03-10 Thread Andrei Verovski
Hi !


Is it possible to set host CPU type to kvm64 for a single VM ?


Thanks.
Andrei

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/J3T7JF3GBPAKQMRJZ5WDMG3BRXCDVYKE/


[ovirt-users] Re: engine - gluster volume import

2021-03-10 Thread penguin pages


Thanks.. that worked. Now the engine, data, and vmstore gluster volumes are
under "engine" control.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4OASWKINVFSSMPX7A6KEN6NBFRYC566N/


[ovirt-users] Re: Gluster volume engine stuck in healing with 1 unsynched entry & HostedEngine paused

2021-03-10 Thread Maria Souvalioti
Should I delete the file and restart glusterd on the ov-no1 server?


Thank you very much


On 3/10/21 10:21 AM, Strahil Nikolov via Users wrote:
> It seems to me that ov-no1 didn't update the file properly.
>
> What was the output of the gluster volume heal command ?
>
> Best Regards,
> Strahil Nikolov
>
> The output of the getfattr command on the nodes was the following:
>
> Node1:
> [root@ov-no1 ~]# getfattr -d -m . -e hex /gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> getfattr: Removing leading '/' from absolute path names
> # file: gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
> trusted.afr.dirty=0x0394
> trusted.afr.engine-client-2=0x
> trusted.gfid=0x3fafabf3d0cd4b9a8dd743145451f7cf
> trusted.gfid2path.06f4f1065c7ed193=0x36313936323032302d386431342d343261372d613565332d3233346365656635343035632f61343835353566342d626532332d343436372d386135342d343030616537626166396437
> trusted.glusterfs.mdata=0x015fec62872f5849585fec62872f5849585d791c1a00ba286e
> trusted.glusterfs.shard.block-size=0x0400
> trusted.glusterfs.shard.file-size=0x00190092040b
>
>
> Node2:
> [root@ov-no2 ~]# getfattr -d -m . -e hex /gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> getfattr: Removing leading '/' from absolute path names
> # file: gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
> trusted.afr.dirty=0x
> trusted.afr.engine-client-0=0x043a
> trusted.afr.engine-client-2=0x
> trusted.gfid=0x3fafabf3d0cd4b9a8dd743145451f7cf
> trusted.gfid2path.06f4f1065c7ed193=0x36313936323032302d386431342d343261372d613565332d3233346365656635343035632f61343835353566342d626532332d343436372d386135342d343030616537626166396437
> trusted.glusterfs.mdata=0x015fec62872f5849585fec62872f5849585d791c1a00ba286e
> trusted.glusterfs.shard.block-size=0x0400
> trusted.glusterfs.shard.file-size=0x00190092040b
>
>
> Node3:
> [root@ov-no3 ~]# getfattr -d -m . -e hex /gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> getfattr: Removing leading '/' from absolute path names
> # file: gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
> trusted.afr.dirty=0x
> trusted.afr.engine-client-0=0x0444
> trusted.gfid=0x3fafabf3d0cd4b9a8dd743145451f7cf
> trusted.gfid2path.06f4f1065c7ed193=0x36313936323032302d386431342d343261372d613565332d3233346365656635343035632f61343835353566342d626532332d343436372d386135342d343030616537626166396437
> trusted.glusterfs.mdata=0x015fec62872f5849585fec62872f5849585d791c1a00ba286e
> trusted.glusterfs.shard.block-size=0x0400
> trusted.glusterfs.shard.file-size=0x00190092040b
>
>
> ___
> Users mailing list -- users@ovirt.org 
> To unsubscribe send an email to users-le...@ovirt.org
> 
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PUVBESAIZEJ7URDMDQ7LDUPNS6YDBVAS/
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> 

[ovirt-users] Re: Gluster volume engine stuck in healing with 1 unsynched entry & HostedEngine paused

2021-03-10 Thread Maria Souvalioti
The gluster volume heal engine command didn't output anything in the CLI.


The gluster volume heal engine info gives:


# gluster volume heal engine info
Brick ov-no1.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0

Brick ov-no2.ariadne-t.local:/gluster_bricks/engine/engine
/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7

Status: Connected
Number of entries: 1

Brick ov-no3.ariadne-t.local:/gluster_bricks/engine/engine
/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7

Status: Connected
Number of entries: 1  


And gluster volume heal engine info summary gives:

# gluster volume heal engine info summary
Brick ov-no1.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Total Number of entries: 1
Number of entries in heal pending: 1
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Brick ov-no2.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Total Number of entries: 1
Number of entries in heal pending: 1
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Brick ov-no3.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Total Number of entries: 1
Number of entries in heal pending: 1
Number of entries in split-brain: 0
Number of entries possibly healing: 0


Also I found the following warning message in the logs that has been
repeating itself since the problem started:

[2021-03-10 10:08:11.646824] W [MSGID: 114061]
[client-common.c:2644:client_pre_fsync_v2] 0-engine-client-0: 
(3fafabf3-d0cd-4b9a-8dd7-43145451f7cf) remote_fd is -1. EBADFD [File
descriptor in bad state]


And from what I see in the logs, the healing process seems to be still
trying to fix the volume.


[2021-03-10 10:47:34.820229] I [MSGID: 108026]
[afr-self-heal-common.c:1741:afr_log_selfheal] 0-engine-replicate-0:
Completed data selfheal on 3fafabf3-d0cd-4b9a-8dd7-43145451f7cf.
sources=1 [2]  sinks=0
The message "I [MSGID: 108026]
[afr-self-heal-common.c:1741:afr_log_selfheal] 0-engine-replicate-0:
Completed data selfheal on 3fafabf3-d0cd-4b9a-8dd7-43145451f7cf.
sources=1 [2]  sinks=0 " repeated 8 times between [2021-03-10
10:47:34.820229] and [2021-03-10 10:48:00.088805]
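
If it never converges, a few things are worth checking from any node (a
sketch; "engine" is the volume name from the output above):

gluster volume heal engine info summary              # is the pending count dropping?
gluster volume heal engine full                      # request a full self-heal sweep
gluster volume get engine cluster.self-heal-daemon   # the shd option must be "on"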



On 3/10/21 10:21 AM, Strahil Nikolov via Users wrote:
> It seems to me that ov-no1 didn't update the file properly.
>
> What was the output of the gluster volume heal command ?
>
> Best Regards,
> Strahil Nikolov
>
> The output of the getfattr command on the nodes was the following:
>
> Node1:
> [root@ov-no1 ~]# getfattr -d -m . -e hex /gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> getfattr: Removing leading '/' from absolute path names
> # file: gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
> trusted.afr.dirty=0x0394
> trusted.afr.engine-client-2=0x
> trusted.gfid=0x3fafabf3d0cd4b9a8dd743145451f7cf
> trusted.gfid2path.06f4f1065c7ed193=0x36313936323032302d386431342d343261372d613565332d3233346365656635343035632f61343835353566342d626532332d343436372d386135342d343030616537626166396437
> trusted.glusterfs.mdata=0x015fec62872f5849585fec62872f5849585d791c1a00ba286e
> trusted.glusterfs.shard.block-size=0x0400
> trusted.glusterfs.shard.file-size=0x00190092040b
>
>
> Node2:
> [root@ov-no2 ~]# getfattr -d -m . -e hex /gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> getfattr: Removing leading '/' from absolute path names
> # file: gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
> trusted.afr.dirty=0x
> trusted.afr.engine-client-0=0x043a
> trusted.afr.engine-client-2=0x
> trusted.gfid=0x3fafabf3d0cd4b9a8dd743145451f7cf
> 
> 

[ovirt-users] Re: Commvault

2021-03-10 Thread Tony Brian Albers
Oh, I just checked, and Bareos 20 now does agentless oVirt/RHV backups 
too. You should probably have a close look at it. It's quite simple to 
manage and very, very stable.

/tony

On 10/03/2021 11:47, Tony Brian Albers wrote:
> Hmm.. It seems that NW does not support RHV; I was quite sure it did, but I
> must have remembered it wrong. I think what we used to do is use an oVirt
> python backup tool that snapshotted the VMs, and then we could fetch the
> snaps via the NetWorker client on the fileserver that stores the snapshots.
> 
> Using a NetWorker client on all the vm's is also possible of course, and
> quite easy to maintain.
> 
> 
> IMO the best option is probably storware, reach out to Pawel Maczka
> -he's very helpful.
> 
> /tony
> 
> On 10/03/2021 11:16, Colin Coe wrote:
>> Thanks all for the input so far.
>>
>> Does EMC Networker do agentless RHV backups, or do I install
>> the software on all nodes?
>>
>> On Wed, 10 Mar 2021 at 16:41, Gianluca Cecchi wrote:
>>
>>     On Wed, Mar 10, 2021 at 7:57 AM Tony Brian Albers wrote:
>>
>>  I agree with Dan, however EMC NetWorker can also backup RHEV.
>>
>>
>>  Can you give a pointer about Networker capabilities?
>>  I know Netbackup included RHV support since its 8.2 version.
>>  Eg for 8.3 you have this "NetBackup™ Web UI RHV Administrator's
>>  Guide" and other documents:
>>  
>> https://www.veritas.com/content/support/en_US/doc/138617403-138789763-0/v141695751-138789763
>>
>>  Do you have a similar pointer for NetWorker?
>>
>>  Thanks,
>>  Gianluca
>>
>>
>>  ___
>>  Users mailing list -- users@ovirt.org 
>>  To unsubscribe send an email to users-le...@ovirt.org
>>  
>>  Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>  oVirt Code of Conduct:
>>  https://www.ovirt.org/community/about/community-guidelines/
>>  List Archives:
>>  
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MSEA5XCFCZVTCLM3RSI2QT4A7JYI7IOY/
>>
> 
> 


-- 
Tony Albers - Systems Architect - IT Development Royal Danish Library, 
Victor Albecks Vej 1, 8000 Aarhus C, Denmark
Tel: +45 2566 2383 - CVR/SE: 2898 8842 - EAN: 5798000792142
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BGP63VHVZ6H5QYLEA7UHBNSDMNSHMK2Y/


[ovirt-users] Re: Commvault

2021-03-10 Thread Tony Brian Albers
Hmm.. It seems that NW does not support RHV; I was quite sure it did, but I
must have remembered it wrong. I think what we used to do is use an oVirt
python backup tool that snapshotted the VMs, and then we could fetch the
snaps via the NetWorker client on the fileserver that stores the snapshots.
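
That snapshot step can also be driven straight from the REST API these days;
a sketch with placeholder credentials, engine FQDN and VM id:

curl -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
  -X POST https://engine.example.com/ovirt-engine/api/vms/VM_ID/snapshots \
  -d '<snapshot><description>backup</description></snapshot>'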

Using a NetWorker client on all the vm's is also possible of course, and 
quite easy to maintain.


IMO the best option is probably storware, reach out to Pawel Maczka 
-he's very helpful.

/tony

On 10/03/2021 11:16, Colin Coe wrote:
> Thanks all for the input so far.
> 
> Does EMC Networker do agentless RHV backups, or do I install 
> the software on all nodes?
> 
> On Wed, 10 Mar 2021 at 16:41, Gianluca Cecchi wrote:
> 
> On Wed, Mar 10, 2021 at 7:57 AM Tony Brian Albers wrote:
> 
> I agree with Dan, however EMC NetWorker can also backup RHEV.
> 
> 
> Can you give a pointer about Networker capabilities?
> I know Netbackup included RHV support since its 8.2 version.
> Eg for 8.3 you have this "NetBackup™ Web UI RHV Administrator's
> Guide" and other documents:
> 
> https://www.veritas.com/content/support/en_US/doc/138617403-138789763-0/v141695751-138789763
> 
> Do you have a similar pointer for NetWorker?
> 
> Thanks,
> Gianluca
> 
> 
> ___
> Users mailing list -- users@ovirt.org 
> To unsubscribe send an email to users-le...@ovirt.org
> 
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MSEA5XCFCZVTCLM3RSI2QT4A7JYI7IOY/
> 


-- 
Tony Albers - Systems Architect - IT Development Royal Danish Library, 
Victor Albecks Vej 1, 8000 Aarhus C, Denmark
Tel: +45 2566 2383 - CVR/SE: 2898 8842 - EAN: 5798000792142
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2YSI4WR4POK2J5EQY4752FR26UJDWWFC/


[ovirt-users] Re: Commvault

2021-03-10 Thread Colin Coe
Thanks all for the input so far.

Does EMC Networker do agentless RHV backups, or do I install the software
on all nodes?

On Wed, 10 Mar 2021 at 16:41, Gianluca Cecchi 
wrote:

> On Wed, Mar 10, 2021 at 7:57 AM Tony Brian Albers  wrote:
>
>> I agree with Dan, however EMC NetWorker can also backup RHEV.
>>
>
> Can you give a pointer about Networker capabilities?
> I know Netbackup included RHV support since its 8.2 version.
> Eg for 8.3 you have this "NetBackup™ Web UI RHV Administrator's Guide" and
> other documents:
>
> https://www.veritas.com/content/support/en_US/doc/138617403-138789763-0/v141695751-138789763
>
> Do you have a similar pointer for NetWorker?
>
> Thanks,
> Gianluca
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MSEA5XCFCZVTCLM3RSI2QT4A7JYI7IOY/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IMOPQEBPQ6KI2U227WGCPSMRSK7N2COH/


[ovirt-users] Re: Setup Hosts Network Disabled

2021-03-10 Thread Andrei Verovski
Hi,

I’ve got around this.
Simply created a new logical network. The buggy one should NOT be deleted;
simply rename it to temp/dummy/unused or whatever.


> On 10 Mar 2021, at 08:41, Ales Musil  wrote:
> 
> 
> 
> On Tue, Mar 9, 2021 at 4:25 PM Andrei Verovski wrote:
> Hi !
> 
> Hi,
>  
> 
> I run into a problem which looks like a software bug.
> 
> there was no change in this part for a long time but it might be possible 
> that there is some hidden bug. 
>  
> 
> Network -> Networks -> My_Net_Name -> Hosts
> Setup Hosts Network button is disabled (greyed out). I deleted this network, 
> created again, restarted hosted engine - no changes.
> 
> The button here does not work if you have selected multiple hosts at once. 
> 
> 
> Is it  possible to fix this for example from command line ?
> 
> Actually no, there is nothing that you could do from command line, but you 
> can still access the dialog by going into:
> Compute -> Hosts -> $YOUR_HOST -> Network Interfaces -> Setup Host Networks.
> 
> Also there is possibility to do it through REST API.
> 
> Hopefully it helps.
> 
> Best regards,
> Ales
>  
> 
> Thanks in advance.
> ___
> Users mailing list -- users@ovirt.org 
> To unsubscribe send an email to users-le...@ovirt.org 
> 
> Privacy Statement: https://www.ovirt.org/privacy-policy.html 
> 
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/ 
> 
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/GN5GSC7JKH5FQCBQAYFSHF5CECZWXTMZ/
>  
> 
> 
> 
> -- 
> Ales Musil
> Software Engineer - RHV Network
> Red Hat EMEA 
> amu...@redhat.com IM: amusil
>   
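
For the REST API route Ales mentions above, the host-level setupnetworks
action is the relevant call; a sketch with placeholder host id, NIC name and
credentials:

curl -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
  -X POST https://engine.example.com/ovirt-engine/api/hosts/HOST_ID/setupnetworks \
  -d '<action>
        <modified_network_attachments>
          <network_attachment>
            <network><name>My_Net_Name</name></network>
            <host_nic><name>eth1</name></host_nic>
          </network_attachment>
        </modified_network_attachments>
      </action>'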
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I5XK6BOJQJUIBFR2LUCBWCYVX77ACFJQ/


[ovirt-users] Re: Commvault

2021-03-10 Thread Gianluca Cecchi
On Wed, Mar 10, 2021 at 7:57 AM Tony Brian Albers  wrote:

> I agree with Dan, however EMC NetWorker can also backup RHEV.
>

Can you give a pointer about Networker capabilities?
I know Netbackup included RHV support since its 8.2 version.
Eg for 8.3 you have this "NetBackup™ Web UI RHV Administrator's Guide" and
other documents:
https://www.veritas.com/content/support/en_US/doc/138617403-138789763-0/v141695751-138789763

Do you have a similar pointer for NetWorker?

Thanks,
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MSEA5XCFCZVTCLM3RSI2QT4A7JYI7IOY/


[ovirt-users] Re: ERROR: Installing oVirt Node & Hosted-Engine on one physical server

2021-03-10 Thread ivanpashchuk
Thanks for your answer!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PAKRPFISSYXYIMVONFVWDCBOPAPVUKAB/


[ovirt-users] Re: Gluster volume engine stuck in healing with 1 unsynched entry & HostedEngine paused

2021-03-10 Thread Strahil Nikolov via Users
It seems to me that ov-no1 didn't update the file properly.

What was the output of the gluster volume heal command?

Best Regards,
Strahil Nikolov

The output of the getfattr command on the nodes was the following:

Node1:
[root@ov-no1 ~]# getfattr -d -m . -e hex 
/gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
getfattr: Removing leading '/' from absolute path names
# file: 
gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x0394
trusted.afr.engine-client-2=0x
trusted.gfid=0x3fafabf3d0cd4b9a8dd743145451f7cf
trusted.gfid2path.06f4f1065c7ed193=0x36313936323032302d386431342d343261372d613565332d3233346365656635343035632f61343835353566342d626532332d343436372d386135342d343030616537626166396437
trusted.glusterfs.mdata=0x015fec62872f5849585fec62872f5849585d791c1a00ba286e
trusted.glusterfs.shard.block-size=0x0400
trusted.glusterfs.shard.file-size=0x00190092040b


Node2:
[root@ov-no2 ~]#  getfattr -d -m . -e hex 
/gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
getfattr: Removing leading '/' from absolute path names
# file: 
gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x
trusted.afr.engine-client-0=0x043a
trusted.afr.engine-client-2=0x
trusted.gfid=0x3fafabf3d0cd4b9a8dd743145451f7cf
trusted.gfid2path.06f4f1065c7ed193=0x36313936323032302d386431342d343261372d613565332d3233346365656635343035632f61343835353566342d626532332d343436372d386135342d343030616537626166396437
trusted.glusterfs.mdata=0x015fec62872f5849585fec62872f5849585d791c1a00ba286e
trusted.glusterfs.shard.block-size=0x0400
trusted.glusterfs.shard.file-size=0x00190092040b


Node3:
[root@ov-no3 ~]#  getfattr -d -m . -e hex 
/gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
getfattr: Removing leading '/' from absolute path names
# file: 
gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x
trusted.afr.engine-client-0=0x0444
trusted.gfid=0x3fafabf3d0cd4b9a8dd743145451f7cf
trusted.gfid2path.06f4f1065c7ed193=0x36313936323032302d386431342d343261372d613565332d3233346365656635343035632f61343835353566342d626532332d343436372d386135342d343030616537626166396437
trusted.glusterfs.mdata=0x015fec62872f5849585fec62872f5849585d791c1a00ba286e
trusted.glusterfs.shard.block-size=0x0400
trusted.glusterfs.shard.file-size=0x00190092040b
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PUVBESAIZEJ7URDMDQ7LDUPNS6YDBVAS/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/R3ODLVEODDFWP3IVLPFNQXNLBCPPSZTR/