[ovirt-users] Gluster JSON-RPC errors

2018-10-04 Thread Maton, Brett
I'm seeing the following errors appear in the event log every 10 minutes
for each participating host in the gluster cluster

GetGlusterVolumeHealInfoVDS failed: Internal JSON-RPC error: {'reason':
"'bool' object has no attribute 'getiterator'"}

Gluster brick health is good

Any ideas?

oVirt 4.2.7.2-1.el7
CentOS 7
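
For reference, the same health data can be cross-checked directly on one of the
gluster hosts while the engine keeps logging the error. A rough sketch (the
volume name "data" is an assumption, replace it with yours):

# Brick and self-heal daemon status as the gluster CLI sees it
gluster volume status data
# Pending heal entries per brick; a healthy volume lists 0 entries
gluster volume heal data info
# vdsm parses the XML form of this output; if your gluster version supports
# the --xml flag here, confirm it returns well-formed XML and not an error
gluster volume heal data info --xml | head -n 20
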
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2KLFU2C5UHDFLTH3XUHZ5DGF7WVNGNJZ/


[ovirt-users] Error when running hosted-engine deploy on system with bond of bonds

2018-10-04 Thread Ben Webber
Hi,

I'm trying to set up oVirt using the hosted-engine --deploy command on CentOS 7, 
but am encountering an error. I am running a slightly unusual network 
configuration: I have two fairly basic non-stacked gigabit switches with port 
channels connecting the two switches together. I have an LACP bond of 4 ports 
from the host to each switch (bond1 and bond2). I have then created an 
active-backup bond (bond0) using the two LACP bonds as slaves, in the hope of 
creating HA at the switch layer with my basic switches. There is then a VLAN 
(101) on bond0.

This network configuration runs fine on the host; however, after running for a 
short while, the hosted-engine --deploy command outputs the following error:

...

[ INFO  ] TASK [Force host-deploy in offline mode]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Add host]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Wait for the host to be up]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Check host status]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The host 
has been set in non_operational status, please check engine logs, fix 
accordingly and re-deploy.\n"}

...


Looking in /var/log/ovirt-engine/engine.log on the machine created, I can see 
the following errors logged:

...

2018-10-04 21:51:30,116+01 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-1) [59fb360a] START, 
HostSetupNetworksVDSCommand(HostName = ov1.test.local, 
HostSetupNetworksVdsCommandParameters:{hostId='7440c9b9-e530-4341-a317-d3a9041dc777',
 vds='Host[ov1.test.local,7440c9b9-e530-4341-a317-d3a9041dc777]', 
rollbackOnFailure='true', connectivityTimeout='120', 
networks='[HostNetwork:{defaultRoute='true', bonding='true', 
networkName='ovirtmgmt', vdsmName='ovirtmgmt', nicName='bond0', vlan='101', 
vmNetwork='true', stp='false', properties='null', ipv4BootProtocol='STATIC_IP', 
ipv4Address='192.168.1.11', ipv4Netmask='255.255.255.0', 
ipv4Gateway='192.168.1.1', ipv6BootProtocol='AUTOCONF', ipv6Address='null', 
ipv6Prefix='null', ipv6Gateway='null', nameServers='null'}]', 
removedNetworks='[]', bonds='[]', removedBonds='[]', 
clusterSwitchType='LEGACY'}), log id: 4f0c7eaa
2018-10-04 21:51:30,121+01 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-1) [59fb360a] FINISH, 
HostSetupNetworksVDSCommand, log id: 4f0c7eaa
2018-10-04 21:51:30,645+01 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-1) [59fb360a] Failed in 
'HostSetupNetworksVDS' method
2018-10-04 21:51:30,687+01 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engine-Thread-1) [59fb360a] EVENT_ID: 
VDS_BROKER_COMMAND_FAILURE(10,802), VDSM ov1.test.local command 
HostSetupNetworksVDS failed: Unknown nics in: ['bond1', 'bond2']
2018-10-04 21:51:30,688+01 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-1) [59fb360a] Error: 
VDSGenericException: VDSErrorException: Failed to HostSetupNetworksVDS, error = 
Unknown nics in: ['bond1', 'bond2'], code = 23

...


It looks like when HostSetupNetworksVDS runs, it checks that the slave 
interfaces of each bond are physical network devices. Because the slaves of 
bond0 are bond1 and bond2, rather than physical devices, it throws the error 
Unknown nics in: ['bond1', 'bond2'].
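
For what it is worth, you can compare what VDSM itself reports as nics and
bondings on the host, which is what the engine validates against. A sketch,
assuming vdsm-client is installed on the host:

# Everything VDSM reports to the engine (nics, bondings, vlans), as JSON
vdsm-client Host getCapabilities | less
# Kernel view of the nested bond, for comparison
cat /proc/net/bonding/bond0
ip -d link show bond0
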

Is there anything I can do, or any configuration I can apply anywhere, to make 
it work with this "stacked bond" configuration, or does oVirt simply not work when 
bonds are set up like this?

Thanks in advance,

Ben




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XPHQTPUINKZBSZVUDP2G66UPA5OJL3J7/


[ovirt-users] Re: master domain wont activate

2018-10-04 Thread Oliver Riesener
Hi Vincent,

nice to hear the news :-)
 
I have read the BZ and see you ran into NFS trouble and have solved it now.

I took a look at my CentOS server for NFS data domains and
see the server running V4 and the clients (node) mounting with protocol vers=4.1.

I run the latest (and greatest) stable oVirt 4.2.6.4-1 on CentOS 7.5+ with
the engine installed, plus an ovirt-node 4.2.6.4.

* If you can migrate your running VMs and can switch your SPM, 
  I would upgrade and reboot the hosts one by one, now.

* A reboot seems to be the minimum; remember you do that `virt. thing´,
  therefore you can access and boot your bare metal and host OS ;-)


OK, back to iSCSI: I have also had an EQUALOGIC running as an iSCSI target for years.

* I have allowed multi-host access to the volumes which oVirt uses.
  The access control lists contain the raw IP addresses of my oVirt hosts.

  oVirt handles the volume access skilfully with multipathd and LVM VGs and LVs;
  unused LVs are offline (host specific) and released volumes are deactivated.
  
* It’s also possible that you will have to reinstall your hosts (from the GUI),
  to upgrade or install the needed packages which handle iSCSI client access.

* If you are then free of errors and your iSCSI data domain is still missing, we can talk
  about VG activation and domain import (see the commands sketched below).
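
A rough sketch of the checks I mean (nothing specific to the EQUALOGIC here;
adjust to your setup):

# iSCSI sessions and multipath devices as seen by the host
iscsiadm -m session -P 1
multipath -ll

# LVM view: the storage domain VG and its LVs should show up here
pvs && vgs && lvs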

Cheers

Oliver


> On 04.10.2018 at 22:00, Vincent Royer wrote:
> 
> Ok, getting somewhere here. 
> 
> did a rpcinfo -p and found no nfs entries in portmap. 
> 
> systemctl stop nfs
> systemctl start nfs
> 
> Suddenly shares are mounted and datacenter is up again. 
> 
> was able to add export domain over NFS.  
> 
> Why would nfs shit the bed?  
> 
> still can't seem to get iscsi mounted properly now, and that's where all the 
> disks are located :/
> 
> 
> On Thu, Oct 4, 2018 at 11:00 AM Vincent Royer wrote:
> Thanks for your help Oliver, 
> 
> To give you some background here:
> 
> Host 1 on Ovirt 4.2 attached to NFS storage
> Host 2 I upgraded to Ovirt 4.2.5 and then 4.2.6, since then it has had 
> troubles with NFS due to this bug 
> https://bugzilla.redhat.com/show_bug.cgi?id=1595549.  The host was up and 
> could run the hosted engine, but could not migrate any VMs to it. 
> 
> I decided to switch from NFS to ISCSI so that I could stay on current 
> releases.   So I began the work of attaching iscsi domain. 
> 
> The iscsi domain attached, and I transferred most of the disks to it.  Then 
> it started melting down saying that Host 1 could not mount it, and the whole 
> DC went down. 
> 
> Current status is data center "non responsive".  Keeps trying "Reconstructing 
> master domain on Data Center"  over and over again but always fails.  Master 
> domain status is "inactive".  Clicking activate fails.  The new ISCSI domain, 
> I put in maintenance until I figure the rest out.  I can't add or remove any 
> other domains, Ovirt says I need to attach the master first. 
> 
> Both hosts are "UP".   Host 1 health is "bad"   Host 2 health is "ok", and it 
> is running HE.  Host 1 (the 4.2 host) says "this host needs to be 
> reinstalled".  But the reinstall option is grayed out. 
> 
> I am wary about updating host1, because of the NFS storage bug... I fear it 
> won't ever be able to attach the old domain again. 
> 
> If I try mounting the NFS shares in cockpit from either node, they say 
> "mount.nfs: Remote I/O error".   However on another blank centos machine 
> sitting on the same network, I can mount the shares normally. 
> 
> Vincent Royer
> 778-825-1057
> 
> 
>  
> SUSTAINABLE MOBILE ENERGY SOLUTIONS
> 
> 
> 
> 
> 
> On Thu, Oct 4, 2018 at 1:04 AM Oliver Riesener wrote:
> When your hosts are up and running and your Domain didn't go active within 
> minutes
> * Activate your Storage Domain under:
> 
> Storage -> Storage Domain -> (Open your Domain)  -> Data Center -> (Right 
> Click Your Data Center Name) -> Activate.
> On 10/4/18 9:50 AM, Oliver Riesener wrote:
>> Hi Vincent,
>> 
>> OK, your master domain isn't available at the moment, but no panic.
>> First of all we need the status of your hosts. No HOSTS -> No Storage!
>> * Did you reboot them hard, without confirming "Host has been rebooted"?
>> 
>> * Are they activated in the DataCenter / Cluster? Green arrow?
>> 
>> 
>> On 10/4/18 7:46 AM, Vincent Royer wrote:
>>> I was attempting to migrate from nfs to iscsi storage domains.  I have 
>>> reached a state where I can no longer activate the old master storage 
>>> domain, and thus no others will activate either. 
>>> 
>>> I'm ready to give up on the installation and just move to an HCI deployment 
>>> instead.  Wipe all the hosts clean and start again. 
>>> 
>>> My plan was to create and use an export domain, then wipe the nodes and set 
>>> them up HCI where I could re-import.  But without being able to activate a 
>>> master domain, I can't create the export domain.
>>> 
>>> I'm not sure why 

[ovirt-users] Re: master domain wont activate

2018-10-04 Thread Vincent Royer
Ok, getting somewhere here.

did a rpcinfo -p and found no nfs entries in portmap.

systemctl stop nfs
systemctl start nfs

Suddenly shares are mounted and datacenter is up again.

was able to add export domain over NFS.

Why would nfs shit the bed?

still can't seem to get iscsi mounted properly now, and that's where all
the disks are located :/
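
In case it happens again, a rough checklist for the NFS side (a sketch; the
server name is a placeholder):

# On the NFS server: is the service registered with rpcbind and exporting?
systemctl status rpcbind nfs-server
rpcinfo -p | grep -E 'nfs|mountd'
exportfs -v

# From an oVirt host: is the export visible at all?
showmount -e nfs.example.com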


On Thu, Oct 4, 2018 at 11:00 AM Vincent Royer  wrote:

> Thanks for your help Oliver,
>
> To give you some background here:
>
> Host 1 on Ovirt 4.2 attached to NFS storage
> Host 2 I upgraded to Ovirt 4.2.5 and then 4.2.6, since then it has had
> troubles with NFS due to this bug
> https://bugzilla.redhat.com/show_bug.cgi?id=1595549.   The host was up
> and could run the hosted engine, but could not migrate any VMs to it.
>
> I decided to switch from NFS to ISCSI so that I could stay on current
> releases.   So I began the work of attaching iscsi domain.
>
> The iscsi domain attached, and I transferred most of the disks to it.
> Then it started melting down saying that Host 1 could not mount it, and the
> whole DC went down.
>
> Current status is data center "non responsive".  Keeps trying
> "Reconstructing master domain on Data Center"  over and over again but
> always fails.  Master domain status is "inactive".  Clicking activate
> fails.  The new ISCSI domain, I put in maintenance until I figure the rest
> out.  I can't add or remove any other domains, Ovirt says I need to attach
> the master first.
>
> Both hosts are "UP".   Host 1 health is "bad"   Host 2 health is "ok", and
> it is running HE.  Host 1 (the 4.2 host) says "this host needs to be
> reinstalled".  But the reinstall option is grayed out.
>
> I am wary about updating host1, because of the NFS storage bug... I fear
> it won't ever be able to attach the old domain again.
>
> If I try mounting the NFS shares in cockpit from either node, they say
> "mount.nfs: Remote I/O error".   However on another blank centos machine
> sitting on the same network, I can mount the shares normally.
>
> *Vincent Royer*
> *778-825-1057*
>
>
> 
> *SUSTAINABLE MOBILE ENERGY SOLUTIONS*
>
>
>
>
>
> On Thu, Oct 4, 2018 at 1:04 AM Oliver Riesener <
> oliver.riese...@hs-bremen.de> wrote:
>
>> When your hosts are up and running and your Domain didn't go active
>> within minutes
>>
>> * Activate your Storage Domain under:
>>
>> Storage -> Storage Domain -> (Open your Domain)  -> Data Center -> (Right
>> Click Your Data Center Name) -> Activate.
>> On 10/4/18 9:50 AM, Oliver Riesener wrote:
>>
>> Hi Vincent,
>>
>> OK, your master domain isn't available at the moment, but no panic.
>>
>> First of all we need the status of your hosts. No HOSTS -> No Storage!
>>
>> * Did you reboot them hard, without confirming "Host has been rebooted"?
>>
>> * Are they activated in the DataCenter / Cluster? Green arrow?
>>
>>
>> On 10/4/18 7:46 AM, Vincent Royer wrote:
>>
>> I was attempting to migrate from nfs to iscsi storage domains.  I have
>> reached a state where I can no longer activate the old master storage
>> domain, and thus no others will activate either.
>>
>> I'm ready to give up on the installation and just move to an HCI
>> deployment instead.  Wipe all the hosts clean and start again.
>>
>> My plan was to create and use an export domain, then wipe the nodes and
>> set them up HCI where I could re-import.  But without being able to
>> activate a master domain, I can't create the export domain.
>>
>> I'm not sure why it can't find the master anymore, as nothing has
>> happened to the NFS storage, but the error in vdsm says it just can't find
>> it:
>>
>> StoragePoolMasterNotFound: Cannot find master domain:
>> u'spUUID=5a77bed1-0238-030c-0122-03b3,
>> msdUUID=d3165759-07c2-46ae-b7b8-b6226a929d68'
>> 2018-10-03 22:40:33,751-0700 INFO  (jsonrpc/3) [storage.TaskManager.Task]
>> (Task='83f33db5-90f3-4064-87df-0512ab9b6378') aborting: Task is aborted:
>> "Cannot find master domain: u'spUUID=5a77bed1-0238-030c-0122-03b3,
>> msdUUID=d3165759-07c2-46ae-b7b8-b6226a929d68'" - code 304 (task:1181)
>> 2018-10-03 22:40:33,751-0700 ERROR (jsonrpc/3) [storage.Dispatcher]
>> FINISH connectStoragePool error=Cannot find master domain:
>> u'spUUID=5a77bed1-0238-030c-0122-03b3,
>> msdUUID=d3165759-07c2-46ae-b7b8-b6226a929d68' (dispatcher:82)
>> 2018-10-03 22:40:33,751-0700 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer]
>> RPC call StoragePool.connect failed (error 304) in 0.17 seconds
>> (__init__:573)
>> 2018-10-03 22:40:34,200-0700 INFO  (jsonrpc/1) [api.host] START
>> getStats() from=:::172.16.100.13,39028 (api:46)
>>
>> When I look in cockpit on the hosts, the storage domain is mounted and
>> seems fine.
>>
>>
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> 

[ovirt-users] VM Portal noVNC Console invocation

2018-10-04 Thread briwils2
When I use the VM Portal and invoke a console, I'm not sure how I can leverage 
the HTML5 noVNC version of it.  I can only get the .vv file and would like to 
use a web-based console, or rather provide one to users of an engine.
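
From what I understand, the engine can be told to hand out the browser-based
noVNC client instead of the .vv file for VNC consoles. A sketch, run on the
engine machine (double-check the option name against your engine version):

# Make noVNC the default VNC client implementation, then restart the engine
engine-config -s ClientModeVncDefault=NoVnc
systemctl restart ovirt-engine

# Verify the current value
engine-config -g ClientModeVncDefault

Note that the VM's console has to use the VNC graphics protocol (not SPICE)
for this setting to apply.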

TIA
Brian
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CK5W3SWPZPPZBHWBODAGPMVCBLZM6VUB/


[ovirt-users] Re: [ANN] oVirt Node 4.2.6 async update is now available

2018-10-04 Thread Staniforth, Paul
Thanks Sandro,

Has the bug with wrong network threshold limits on VLAN networks been fixed?



https://bugzilla.redhat.com/show_bug.cgi?id=1625098


Regards,

Paul S.


From: Sandro Bonazzola 
Sent: 04 October 2018 09:34
To: annou...@ovirt.org; users
Subject: [ovirt-users] [ANN] oVirt Node 4.2.6 async update is now available

The oVirt Team has just released a new version of the oVirt Node image including 
the latest CentOS updates,
fixing a regression introduced in the kernel package [1] that broke IP over 
InfiniBand.
We recommend that users upgrade to this new release.

Errata included:
CEEA-2018:2397 CentOS 7 microcode_ctl Enhancement Update 

CESA-2018:2748 Important CentOS 7 kernel Security Update 

CEBA-2018:2760 CentOS 7 ipa BugFix Update 

CESA-2018:2768 Moderate CentOS 7 nss Security Update 

CEBA-2018:2753 CentOS 7 systemd BugFix Update 

CEBA-2018:2756 CentOS 7 sssd BugFix Update 

CEBA-2018:2769 CentOS 7 libvirt BugFix Update 

CEBA-2018:2761 CentOS 7 kexec-tools BugFix Update 

CEBA-2018:2758 CentOS 7 firewalld BugFix Update 

CEBA-2018:2764 CentOS 7 initscripts BugFix Update 

CESA-2018:2731 Important CentOS 7 spice Security Update 


Ansible 2.6.5: 
https://github.com/ansible/ansible/blob/v2.6.5/changelogs/CHANGELOG-v2.6.rst
Cockpit 176: https://cockpit-project.org/blog/cockpit-176.html

Updates included:
+ansible-2.6.5-1.el7.noarch
+cockpit-176-2.el7.centos.x86_64
+cockpit-bridge-176-2.el7.centos.x86_64
+cockpit-dashboard-176-2.el7.centos.x86_64
+cockpit-machines-ovirt-176-2.el7.centos.noarch
+cockpit-storaged-176-2.el7.centos.noarch
+cockpit-system-176-2.el7.centos.noarch
+cockpit-ws-176-2.el7.centos.x86_64
+firewalld-0.4.4.4-15.el7_5.noarch
+firewalld-filesystem-0.4.4.4-15.el7_5.noarch
+initscripts-9.49.41-1.el7_5.2.x86_64
+ipa-client-4.5.4-10.el7.centos.4.4.x86_64
+ipa-client-common-4.5.4-10.el7.centos.4.4.noarch
+ipa-common-4.5.4-10.el7.centos.4.4.noarch
+kernel-3.10.0-862.14.4.el7.x86_64
+kernel-tools-3.10.0-862.14.4.el7.x86_64
+kernel-tools-libs-3.10.0-862.14.4.el7.x86_64
+kexec-tools-2.0.15-13.el7_5.2.x86_64
+libgudev1-219-57.el7_5.3.x86_64
+libipa_hbac-1.16.0-19.el7_5.8.x86_64
+libsss_autofs-1.16.0-19.el7_5.8.x86_64
+libsss_certmap-1.16.0-19.el7_5.8.x86_64
+libsss_idmap-1.16.0-19.el7_5.8.x86_64
+libsss_nss_idmap-1.16.0-19.el7_5.8.x86_64
+libsss_sudo-1.16.0-19.el7_5.8.x86_64
+libvirt-3.9.0-14.el7_5.8.x86_64
+libvirt-client-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-config-network-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-config-nwfilter-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-interface-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-lxc-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-network-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-nodedev-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-nwfilter-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-qemu-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-secret-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-core-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-disk-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-gluster-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-iscsi-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-logical-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-mpath-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-rbd-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-scsi-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-kvm-3.9.0-14.el7_5.8.x86_64
+libvirt-libs-3.9.0-14.el7_5.8.x86_64
+libvirt-lock-sanlock-3.9.0-14.el7_5.8.x86_64
+microcode_ctl-2.1-29.16.el7_5.x86_64
+mokutil-12-2.el7.x86_64
+nss-3.36.0-7.el7_5.x86_64
+nss-sysinit-3.36.0-7.el7_5.x86_64
+nss-tools-3.36.0-7.el7_5.x86_64
+ovirt-node-ng-image-update-placeholder-4.2.6.2-1.el7.noarch
+ovirt-node-ng-nodectl-4.2.0-0.20181003.0.el7.noarch
+ovirt-release-host-node-4.2.6.2-1.el7.noarch
+ovirt-release42-4.2.6.2-1.el7.noarch
+python-firewall-0.4.4.4-15.el7_5.noarch
+python-libipa_hbac-1.16.0-19.el7_5.8.x86_64

[ovirt-users] Re: master domain wont activate

2018-10-04 Thread Vincent Royer
Thanks for your help Oliver,

To give you some background here:

Host 1 on Ovirt 4.2 attached to NFS storage
Host 2 I upgraded to Ovirt 4.2.5 and then 4.2.6, since then it has had
troubles with NFS due to this bug
https://bugzilla.redhat.com/show_bug.cgi?id=1595549.   The host was up and
could run the hosted engine, but could not migrate any VMs to it.

I decided to switch from NFS to ISCSI so that I could stay on current
releases.   So I began the work of attaching iscsi domain.

The iscsi domain attached, and I transferred most of the disks to it.  Then
it started melting down saying that Host 1 could not mount it, and the
whole DC went down.

Current status is data center "non responsive".  Keeps trying
"Reconstructing master domain on Data Center"  over and over again but
always fails.  Master domain status is "inactive".  Clicking activate
fails.  The new ISCSI domain, I put in maintenance until I figure the rest
out.  I can't add or remove any other domains, Ovirt says I need to attach
the master first.

Both hosts are "UP".   Host 1 health is "bad"   Host 2 health is "ok", and
it is running HE.  Host 1 (the 4.2 host) says "this host needs to be
reinstalled".  But the reinstall option is grayed out.

I am wary about updating host1, because of the NFS storage bug... I fear
it won't ever be able to attach the old domain again.

If I try mounting the NFS shares in cockpit from either node, they say
"mount.nfs: Remote I/O error".   However on another blank centos machine
sitting on the same network, I can mount the shares normally.

*Vincent Royer*
*778-825-1057*



*SUSTAINABLE MOBILE ENERGY SOLUTIONS*





On Thu, Oct 4, 2018 at 1:04 AM Oliver Riesener 
wrote:

> When your hosts are up and running and your Domain didn't go active within
> minutes
>
> * Activate your Storage Domain under:
>
> Storage -> Storage Domain -> (Open your Domain)  -> Data Center -> (Right
> Click Your Data Center Name) -> Activate.
> On 10/4/18 9:50 AM, Oliver Riesener wrote:
>
> Hi Vincent,
>
> OK, your master domain isn't available at the moment, but no panic.
>
> First of all we need the status of your hosts. No HOSTS -> No Storage!
>
> * Did you reboot them hard, without confirming "Host has been rebooted"?
>
> * Are they activated in the DataCenter / Cluster? Green arrow?
>
>
> On 10/4/18 7:46 AM, Vincent Royer wrote:
>
> I was attempting to migrate from nfs to iscsi storage domains.  I have
> reached a state where I can no longer activate the old master storage
> domain, and thus no others will activate either.
>
> I'm ready to give up on the installation and just move to an HCI
> deployment instead.  Wipe all the hosts clean and start again.
>
> My plan was to create and use an export domain, then wipe the nodes and
> set them up HCI where I could re-import.  But without being able to
> activate a master domain, I can't create the export domain.
>
> I'm not sure why it can't find the master anymore, as nothing has happened
> to the NFS storage, but the error in vdsm says it just can't find it:
>
> StoragePoolMasterNotFound: Cannot find master domain:
> u'spUUID=5a77bed1-0238-030c-0122-03b3,
> msdUUID=d3165759-07c2-46ae-b7b8-b6226a929d68'
> 2018-10-03 22:40:33,751-0700 INFO  (jsonrpc/3) [storage.TaskManager.Task]
> (Task='83f33db5-90f3-4064-87df-0512ab9b6378') aborting: Task is aborted:
> "Cannot find master domain: u'spUUID=5a77bed1-0238-030c-0122-03b3,
> msdUUID=d3165759-07c2-46ae-b7b8-b6226a929d68'" - code 304 (task:1181)
> 2018-10-03 22:40:33,751-0700 ERROR (jsonrpc/3) [storage.Dispatcher] FINISH
> connectStoragePool error=Cannot find master domain:
> u'spUUID=5a77bed1-0238-030c-0122-03b3,
> msdUUID=d3165759-07c2-46ae-b7b8-b6226a929d68' (dispatcher:82)
> 2018-10-03 22:40:33,751-0700 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC
> call StoragePool.connect failed (error 304) in 0.17 seconds (__init__:573)
> 2018-10-03 22:40:34,200-0700 INFO  (jsonrpc/1) [api.host] START getStats()
> from=:::172.16.100.13,39028 (api:46)
>
> When I look in cockpit on the hosts, the storage domain is mounted and
> seems fine.
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LTZ6SIFYDFEMSZ4ACUNVC5KETWG7BBIZ/
>
> --
> Mit freundlichem Gruß
>
>
> Oliver Riesener
>
> --
> Hochschule Bremen
> Elektrotechnik und Informatik
> Oliver Riesener
> Neustadtswall 30
> D-28199 Bremen
>
> Tel: 0421 5905-2405, Fax: -2400, e-mail: oliver.riese...@hs-bremen.de
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> 

[ovirt-users] Re: iso domain don't list available images

2018-10-04 Thread Nathanaël Blanchet



On 04/10/2018 at 07:08, Elad Ben Aharon wrote:

During detach, have you by any chance marked the format domain checkbox?

I reattached it to the old ovirt, and all images are still present.

If not, what is the storage domain status in the fresh 4.2 ovirt instance?

all domain statuses are up


On Wed, Oct 3, 2018 at 6:53 PM, Nathanaël Blanchet wrote:


Hello,

I detached an existing ISO domain from another oVirt instance and
attached it to a fresh 4.2 install, but no images are
available. Is it a known bug?

-- 
Nathanaël Blanchet


Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala


34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr 
___
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org

Privacy Statement: https://www.ovirt.org/site/privacy-policy/

oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/

List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/PGTZWT6QMTXWS2VAW4ZMUVQVNIE2KDOK/






--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NNI4WTZ5UFQD2V5XUDMDKFQDUMMLFXS6/


[ovirt-users] Re: Unable to mount CephFS as POSIX Compliant FS

2018-10-04 Thread Oliver Riesener
When filesystems are mounted, the inner attributes of the mounted media (.) are 
used for user/group/permissions.

Change them within your storage, or better, on the mount point while mounted:

chown 36:36 10.78.71.1:_vmdata/.
chmod 0770  10.78.71.1:_vmdata/.

Cheers
 Olri

 
> On 04.10.2018 at 17:58, ryan.terps...@gmail.com wrote:
> 
> I put selinux in permissive mode, but same issue.
> 
> I have found that the directory being made by vdsm for the mount is owned by 
> root:
> 
> drwxr-xr-x  1 root root  0 Oct  4 02:28 10.78.71.1:_vmdata
> 
> If I chown vdsm:kvm and make it 777, then try again, the storage domain is 
> successfully created.  Why would vdsm be making a directory with the wrong 
> ownership?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JTWHFDBSJB5EWDPXMTTKITWUHOICWHB2/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZM5OVFRYN5MLEYLHJAFKM6R7A6O7RY6L/


[ovirt-users] Re: Unable to mount CephFS as POSIX Compliant FS

2018-10-04 Thread ryan . terpstra
I put selinux in permissive mode, but same issue.

I have found that the directory being made by vdsm for the mount is owned by 
root:

drwxr-xr-x  1 root root  0 Oct  4 02:28 10.78.71.1:_vmdata

If I chown vdsm:kvm and make it 777, then try again, the storage domain is 
successfully created.  Why would vdsm be making a directory with the wrong 
ownership?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JTWHFDBSJB5EWDPXMTTKITWUHOICWHB2/


[ovirt-users] Re: [ANN] oVirt Node 4.2.6 async update is now available

2018-10-04 Thread Staniforth, Paul
Thanks,

 Regards,

Paul S.


From: Sandro Bonazzola 
Sent: 04 October 2018 10:37
To: Staniforth, Paul
Cc: users
Subject: Re: [ovirt-users] [ANN] oVirt Node 4.2.6 async update is now available



On Thu, 4 Oct 2018 at 11:36, Staniforth, Paul
<p.stanifo...@leedsbeckett.ac.uk> wrote:

Thanks Sandro,

Has the bug with wrong network threshold limits on VLAN networks been fixed?



https://bugzilla.redhat.com/show_bug.cgi?id=1625098

This one is targeted at 4.2.7; we are releasing a second release candidate 
today including that fix.



Regards,

Paul S.


From: Sandro Bonazzola <sbona...@redhat.com>
Sent: 04 October 2018 09:34
To: annou...@ovirt.org; users
Subject: [ovirt-users] [ANN] oVirt Node 4.2.6 async update is now available

The oVirt Team has just released a new version of the oVirt Node image including 
the latest CentOS updates,
fixing a regression introduced in the kernel package [1] that broke IP over 
InfiniBand.
We recommend that users upgrade to this new release.

Errata included:
CEEA-2018:2397 CentOS 7 microcode_ctl Enhancement Update 

CESA-2018:2748 Important CentOS 7 kernel Security Update 

CEBA-2018:2760 CentOS 7 ipa BugFix Update 

CESA-2018:2768 Moderate CentOS 7 nss Security Update 

CEBA-2018:2753 CentOS 7 systemd BugFix Update 

CEBA-2018:2756 CentOS 7 sssd BugFix Update 

CEBA-2018:2769 CentOS 7 libvirt BugFix Update 

CEBA-2018:2761 CentOS 7 kexec-tools BugFix Update 

CEBA-2018:2758 CentOS 7 firewalld BugFix Update 

CEBA-2018:2764 CentOS 7 initscripts BugFix Update 

CESA-2018:2731 Important CentOS 7 spice Security Update 


Ansible 2.6.5: 
https://github.com/ansible/ansible/blob/v2.6.5/changelogs/CHANGELOG-v2.6.rst
Cockpit 176: https://cockpit-project.org/blog/cockpit-176.html

Updates included:
+ansible-2.6.5-1.el7.noarch
+cockpit-176-2.el7.centos.x86_64
+cockpit-bridge-176-2.el7.centos.x86_64
+cockpit-dashboard-176-2.el7.centos.x86_64
+cockpit-machines-ovirt-176-2.el7.centos.noarch
+cockpit-storaged-176-2.el7.centos.noarch
+cockpit-system-176-2.el7.centos.noarch
+cockpit-ws-176-2.el7.centos.x86_64
+firewalld-0.4.4.4-15.el7_5.noarch
+firewalld-filesystem-0.4.4.4-15.el7_5.noarch
+initscripts-9.49.41-1.el7_5.2.x86_64
+ipa-client-4.5.4-10.el7.centos.4.4.x86_64
+ipa-client-common-4.5.4-10.el7.centos.4.4.noarch
+ipa-common-4.5.4-10.el7.centos.4.4.noarch
+kernel-3.10.0-862.14.4.el7.x86_64
+kernel-tools-3.10.0-862.14.4.el7.x86_64
+kernel-tools-libs-3.10.0-862.14.4.el7.x86_64
+kexec-tools-2.0.15-13.el7_5.2.x86_64
+libgudev1-219-57.el7_5.3.x86_64
+libipa_hbac-1.16.0-19.el7_5.8.x86_64
+libsss_autofs-1.16.0-19.el7_5.8.x86_64
+libsss_certmap-1.16.0-19.el7_5.8.x86_64
+libsss_idmap-1.16.0-19.el7_5.8.x86_64
+libsss_nss_idmap-1.16.0-19.el7_5.8.x86_64
+libsss_sudo-1.16.0-19.el7_5.8.x86_64
+libvirt-3.9.0-14.el7_5.8.x86_64
+libvirt-client-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-config-network-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-config-nwfilter-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-interface-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-lxc-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-network-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-nodedev-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-nwfilter-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-qemu-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-secret-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-core-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-disk-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-gluster-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-iscsi-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-logical-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-mpath-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-rbd-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-scsi-3.9.0-14.el7_5.8.x86_64

[ovirt-users] Re: Network configuration for self-hosted engine deployement oVirt node 4.2

2018-10-04 Thread Simone Tiraboschi
On Thu, Oct 4, 2018 at 2:46 PM  wrote:

> Dear all,
>
> I am going to deploy a self-hosted engine in order to manage 3 servers
> with 8 network interfaces each. I couldn't find in the documentation when
> the network shall be configured to perform the self-hosted engine
> deployment. In other words, what should be done at the oVirt Node level and
> what should be managed afterwards via the Engine once the self-hosted engine
> deployment is complete.
>
> Node version: virt-node-ng-installer-ovirt-4.2-2018062610
> Storage: SAN via iSCSI
>
> I tried this order in my previous tests:
> 1. Install the oVirt node on every server with DNS configured
> 2. Configure IP + bond for the administration network (2 interfaces)
> 3. Configure IP for iSCSI network (4 interfaces)
> 4. Configure IP + bond for the VM network
> 5. self-hosted --deploy
> At this point, oVirt complained that it could not find a network interface
> available for the deployment. Must the bond be configured afterwards via
> the Engine?
>

What you did looks fine, but please take care that bonds should be named
bondX where X is [0, 1, 2...].
Please note also that not all bonding modes can be used for VM networks, and
the engine VM must be able to reach the hosts over the management network:
https://www.ovirt.org/documentation/admin-guide/chap-Logical_Networks/#bonding-modes
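
A quick way to verify both points on the host before re-running the deploy, as
a sketch (bond0 is just the example name from above):

# Bond names must match bond0, bond1, ... for the deploy to pick them up
ip -o link show type bond

# The bonding mode actually in use; not every mode supports bridged VM networks
grep -i "bonding mode" /proc/net/bonding/bond0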



> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6R3ER42JVQRLKDTG4W5XCL36FKQHRMT5/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/V36WJ2U332WKTLKD2PQQ3MN73E35AH2I/


[ovirt-users] Re: no space left to upgrade hosted-engine

2018-10-04 Thread Nathanaël Blanchet
Thank you Simone, so you confirm that the only way is to upgrade the 
appliance?



On 04/10/2018 at 13:57, Simone Tiraboschi wrote:

Hi Nathanaël,
manually extending the HE VM disk can be quite a complex task; we 
are working on an Ansible role to perform that task easily. It will be 
available soon.


On Thu, Oct 4, 2018 at 1:01 PM Nathanaël Blanchet wrote:


I've deployed a hosted-engine with the provided hosted-engine
appliance.

When running hosted-engine --upgrade-appliance on one host, it
tells me
"If your engine VM is already based on el7 you can also simply
upgrade
the engine there."

So I launched a yum update on the hosted-engine, but it complains
there
is no space left on /

[root@infra ~]# df -h
Sys. de fichiers    Taille Utilisé Dispo Uti% Monté sur
/dev/vda3 6,2G    6,2G   20K 100% /
devtmpfs  7,8G   0  7,8G   0% /dev
tmpfs 7,8G    4,0K  7,8G   1% /dev/shm
tmpfs 7,8G    9,0M  7,8G   1% /run
tmpfs 7,8G   0  7,8G   0% /sys/fs/cgroup
/dev/mapper/ovirt-home   1014M 33M  982M   4% /home
/dev/mapper/ovirt-tmp 2,0G 33M  2,0G   2% /tmp
/dev/vda1    1014M    162M  853M  16% /boot
/dev/mapper/ovirt-var  20G    631M   20G   4% /var
/dev/mapper/ovirt-log  10G 58M   10G   1% /var/log
/dev/mapper/ovirt-audit  1014M 59M  956M   6% /var/log/audit
tmpfs 1,6G   0  1,6G   0% /run/user/0

Is the only way to deploy the new appliance?

-- 
Nathanaël Blanchet


Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr 
___
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org

Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/OENYXBK3FVDP6WF6AKO4DYFDGZIM66YI/



--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XL7DXOZP64YA5MP6F7ZRZ4Y2YCIY5UFG/


[ovirt-users] Re: Bad libvirt-spice certificate - regenerate?

2018-10-04 Thread Chris Adams
Is there a way to force the libvirt-spice certificates to be renewed now
(since they are invalid and keeping me from connecting to VM consoles)?
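
For the record, this is how the offending fields can be inspected on a host; a
sketch, assuming the usual file name inside that directory:

# Show the notBefore/notAfter fields of the spice certificate
openssl x509 -in /etc/pki/vdsm/libvirt-spice/server-cert.pem -noout -startdate -enddate

# Full validity block, to see exactly how the dates are encoded
openssl x509 -in /etc/pki/vdsm/libvirt-spice/server-cert.pem -noout -text | grep -A 2 Validity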

Once upon a time, Staniforth, Paul  said:
> Hello Chris,
>engine-setup should renew the certificates, the event 
> notifier can send warnings about expired or expiring certificates.
> 
> Regards,
>  Paul S.
> 
> From: Chris Adams 
> Sent: 02 October 2018 15:04
> To: users@ovirt.org
> Subject: [ovirt-users] Bad libvirt-spice certificate - regenerate?
> 
> I have an oVirt 4.1 cluster that was initially installed with 3.5 in
> 2014.  The SSL certificates on the physical hosts in
> /etc/pki/vdsm/libvirt-spice have a problem - the "not before" date is
> invalid (it doesn't include a time zone), and so I can't connect to VM
> consoles from a client with OpenSSL 1.1.0i (up to date Fedora 27).
> 
> How can I regenerate these certificates?
> 
> Also, I noticed they expire next year - is that expiration handled
> automatically?
> 
> --
> Chris Adams 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3EMV5VLZMMT7MGQKDZKYQUQXN3FARG4D/
> To view the terms under which this email is distributed, please go to:-
> http://disclaimer.leedsbeckett.ac.uk/disclaimer/disclaimer.html

-- 
Chris Adams 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WDGV3QIRKEE4IGSWMTRVXIQUGDKPTLZQ/


[ovirt-users] Network configuration for self-hosted engine deployement oVirt node 4.2

2018-10-04 Thread debec . arnaud
Dear all,

I am going to deploy a self-hosted engine in order to manage 3 servers with 8 
network interfaces each. I couldn't find in the documentation when the network 
should be configured to perform the self-hosted engine deployment. In other 
words, what should be done at the oVirt Node level and what should be managed 
afterwards via the Engine once the self-hosted engine deployment is complete.

Node version: virt-node-ng-installer-ovirt-4.2-2018062610
Storage: SAN via iSCSI

I tried this order in my previous tests:
1. Install the oVirt node on every server with DNS configured
2. Configure IP + bond for the administration network (2 interfaces)
3. Configure IP for iSCSI network (4 interfaces)
4. Configure IP + bond for the VM network
5. self-hosted --deploy
At this point, oVirt complained that it could not find a network interface available 
for the deployment. Must the bond be configured afterwards via the Engine?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6R3ER42JVQRLKDTG4W5XCL36FKQHRMT5/


[ovirt-users] Failed to lock byte 100 - when booting VM with attached CD

2018-10-04 Thread Simon Vincent
I have found recently that I can't boot any VMs which have a CD attached to them.
I get the following error.

VM Test is down with error. Exit message: internal error: qemu unexpectedly
closed the monitor: 2018-10-04T11:53:07.031875Z qemu-kvm: -device
ide-cd,bus=ide.1,unit=0,drive=drive-ua-5ccd6310-a1e5-4019-b989-795529c70794,id=ua-5ccd6310-a1e5-4019-b989-795529c70794,bootindex=2:
Failed to lock byte 100 2018-10-04T11:53:07.032264Z qemu-kvm: -device
ide-cd,bus=ide.1,unit=0,drive=drive-ua-5ccd6310-a1e5-4019-b989-795529c70794,id=ua-5ccd6310-a1e5-4019-b989-795529c70794,bootindex=2:
Failed to lock byte 100 2018-10-04T11:53:07.032285Z qemu-kvm: -device
ide-cd,bus=ide.1,unit=0,drive=drive-ua-5ccd6310-a1e5-4019-b989-795529c70794,id=ua-5ccd6310-a1e5-4019-b989-795529c70794,bootindex=2:
Failed to lock byte 100 Unexpected error in raw_apply_lock_bytes() at
block/file-posix.c:642: 2018-10-04T11:53:07.032665Z qemu-kvm: -device
ide-cd,bus=ide.1,unit=0,drive=drive-ua-5ccd6310-a1e5-4019-b989-795529c70794,id=ua-5ccd6310-a1e5-4019-b989-795529c70794,bootindex=2:
Failed to lock byte 100.

I have tried relaxing the permissions on the ISO NFS share and even removed
the ISO domain and added a new one, but I still get the problem.
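
The message comes from qemu's image locking, so it may also be worth checking
how the ISO share is actually mounted on the host. A sketch (the mount path is
a placeholder):

# NFS version and lock-related mount options of the ISO domain
nfsstat -m
mount | grep -E 'type nfs'

# Can the vdsm user read the ISO files under the domain mount point?
sudo -u vdsm ls -l /rhev/data-center/mnt/<server>:_<export>/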

Regards

Simon
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/T3BGVIPZ2IW23DBOXIYWJMVNYX6GWNVA/


[ovirt-users] Re: no space left to upgrade hosted-engine

2018-10-04 Thread Simone Tiraboschi
Hi Nathanaël,
Manually extending the HE VM disk can be quite a complex task; we are
working on an Ansible role to perform that task easily. It will be
available soon.
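
Until the role is available, it may at least help to find what is filling the
root filesystem of the engine VM; a rough sketch, nothing HE-specific in it:

# Largest directories directly under /, staying on the root filesystem only
du -x -h --max-depth=1 / 2>/dev/null | sort -h

# Largest individual files on /
find / -xdev -type f -size +100M -exec ls -lh {} \; 2>/dev/null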

On Thu, Oct 4, 2018 at 1:01 PM Nathanaël Blanchet  wrote:

> I've deployed a hosted-engine with the provided hosted-engine appliance.
>
> When running hosted-engine --upgrade-appliance on one host, it tells me
> "If your engine VM is already based on el7 you can also simply upgrade
> the engine there."
>
> So I launched a yum update on the hosted-engine, but it complains there
> is no space left on /
>
> [root@infra ~]# df -h
> Sys. de fichiersTaille Utilisé Dispo Uti% Monté sur
> /dev/vda3 6,2G6,2G   20K 100% /
> devtmpfs  7,8G   0  7,8G   0% /dev
> tmpfs 7,8G4,0K  7,8G   1% /dev/shm
> tmpfs 7,8G9,0M  7,8G   1% /run
> tmpfs 7,8G   0  7,8G   0% /sys/fs/cgroup
> /dev/mapper/ovirt-home   1014M 33M  982M   4% /home
> /dev/mapper/ovirt-tmp 2,0G 33M  2,0G   2% /tmp
> /dev/vda11014M162M  853M  16% /boot
> /dev/mapper/ovirt-var  20G631M   20G   4% /var
> /dev/mapper/ovirt-log  10G 58M   10G   1% /var/log
> /dev/mapper/ovirt-audit  1014M 59M  956M   6% /var/log/audit
> tmpfs 1,6G   0  1,6G   0% /run/user/0
>
> Is the only way to deploy the new appliance?
>
> --
> Nathanaël Blanchet
>
> Supervision réseau
> Pôle Infrastrutures Informatiques
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5
> Tél. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> blanc...@abes.fr
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OENYXBK3FVDP6WF6AKO4DYFDGZIM66YI/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NLQFTLODMRVD774RA5TXJACDQOLJ4DXU/


[ovirt-users] ODL and Openstack issue

2018-10-04 Thread Soujanya Bargavi
I am running one controller and one compute node. The controller node is running 
both ODL and OpenStack. I created tenants, and under them I created networks and 
launched instances on them. All I see on the ODL web GUI is 3 switches, and I 
guess those are br-int and br-ex of the controller and br-int of the compute node, 
and the links are missing too. Is there any way I can see my whole OpenStack 
https://goo.gl/6CgufL topology on the ODL GUI with the links?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YYICXIQGHX4GWU7SLONE6PODINEEHYIY/


[ovirt-users] Re: Reg: Enabling Nested virtualization.

2018-10-04 Thread Elad Ben Aharon
Please make sure you have vdsm-hook-nestedvt installed on your hypervisor
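
Roughly, on the physical oVirt host (not inside the guest); a sketch assuming
an Intel CPU, use kvm_amd otherwise:

# Install the hook that exposes the virtualization flag to guests
yum install -y vdsm-hook-nestedvt

# Check whether nesting is enabled in the kernel module (should print Y or 1)
cat /sys/module/kvm_intel/parameters/nested

# If it prints N: enable it persistently and reload the module (no VMs running)
echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
modprobe -r kvm_intel && modprobe kvm_intel

# Restart vdsmd so the hook is picked up, then power-cycle the guest VM
systemctl restart vdsmd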

On Thu, Oct 4, 2018 at 1:10 PM,  wrote:

> Dear Team,
>
>
>
> Need help to enable nested virtualization in Ovirt VM’s.
>
>
>
> As when I checked my cpu info and kvm acceleration info, it is giving
> below output.
>
>
>
> *# kvm-ok*
>
> INFO: Your CPU does not support KVM extensions
>
> KVM acceleration can NOT be used
>
>
>
> *# cat /sys/module/kvm/parameters/nested*
>
> cat: /sys/module/kvm/parameters/nested: No such file or directory
>
>
>
> *# virsh -r list*
>
> IdName   State
>
> 
>
>
>
>
>
>
>
>
>
>
> *Thanks & Regards,*
>
> *Syed Abdul Qadeer.*
>
> *7660022818.*
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/AD6WMDWA6MV5QVXFTIMV2Q4OSCYERCWF/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HDVK6YLKFD2AICOHGO5I43ZI4ELZRPIR/


[ovirt-users] Re: Need to register with oVirt community

2018-10-04 Thread Evgheni Dereveanchin
Hi Manish,

If you want to deploy oVirt and use it, please follow our installation
guide:
https://www.ovirt.org/documentation/install-guide/Installation_Guide/

If you'd like to contribute to the project, there are several ways, like
submitting patches to the code or helping with documentation or infra:
https://www.ovirt.org/community/

Please let me know if you have any further questions.

On Wed, Oct 3, 2018 at 10:12 AM Manish Shukla 
wrote:

> Dear Team,
>
>
>
> Please suggest to me how to join the community. I would like to do an
> implementation of the oVirt infrastructure.
>
>
>
>
>
>
>
>
>
> Thanks & Regards,
>
>
>
> Manish SHUKLA
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/2NMDN6K643COC6FHFABJDCDESH5TNC2T/
>


-- 
Regards,
Evgheni Dereveanchin
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RNYBPSAX746552JFTINO6F7FX2LFNADC/


[ovirt-users] [ANN] oVirt 4.2.7 Second Release Candidate is now available

2018-10-04 Thread Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.2.7 Second Release Candidate, as of October 4th, 2018.

This update is a release candidate of the seventh in a series of
stabilization updates to the 4.2 series.
This is pre-release software. This pre-release should not be used in
production.

This release is available now for:
* Red Hat Enterprise Linux 7.5 or later
* CentOS Linux (or similar) 7.5 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.5 or later
* CentOS Linux (or similar) 7.5 or later

See the release notes [1] for installation / upgrade instructions and
a list of new features and bugs fixed.

Notes:
- oVirt Appliance is available
- oVirt Node is available [2]

Additional Resources:
* Read more about the oVirt 4.2.7 release highlights:
http://www.ovirt.org/release/4.2.7/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/4.2.7/
[2] http://resources.ovirt.org/pub/ovirt-4.2-pre/iso/

-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YUU6VSM3D3EYT7TQI3FIGTFNY63Q5ML3/


[ovirt-users] no space left to upgrade hosted-engine

2018-10-04 Thread Nathanaël Blanchet

I've deployed a hosted-engine with the provided hosted-engine appliance.

When running hosted-engine --upgrade-appliance on one host, it tells me 
"If your engine VM is already based on el7 you can also simply upgrade 
the engine there."


So I launched a yum update on the hosted-engine, but it complains there 
is no space left on /


[root@infra ~]# df -h
Sys. de fichiers    Taille Utilisé Dispo Uti% Monté sur
/dev/vda3 6,2G    6,2G   20K 100% /
devtmpfs  7,8G   0  7,8G   0% /dev
tmpfs 7,8G    4,0K  7,8G   1% /dev/shm
tmpfs 7,8G    9,0M  7,8G   1% /run
tmpfs 7,8G   0  7,8G   0% /sys/fs/cgroup
/dev/mapper/ovirt-home   1014M 33M  982M   4% /home
/dev/mapper/ovirt-tmp 2,0G 33M  2,0G   2% /tmp
/dev/vda1    1014M    162M  853M  16% /boot
/dev/mapper/ovirt-var  20G    631M   20G   4% /var
/dev/mapper/ovirt-log  10G 58M   10G   1% /var/log
/dev/mapper/ovirt-audit  1014M 59M  956M   6% /var/log/audit
tmpfs 1,6G   0  1,6G   0% /run/user/0

Is the only way to deploy the new appliance?

--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OENYXBK3FVDP6WF6AKO4DYFDGZIM66YI/


[ovirt-users] Re: lost connection to hosted engine

2018-10-04 Thread Sahina Bose
On Tue, Oct 2, 2018 at 4:16 PM Artem Tambovskiy 
wrote:

> Hi,
>
> Just ran into the issue during a cluster upgrade from 4.2.4 to 4.2.6.1. I'm
> running a small cluster with 2 hosts and gluster storage. Once I upgraded one
> of the hosts to 4.2.6.1 something went wrong (it looks like it tried to start
> the HE instance) and I can't connect to the hosted-engine any longer.
>
> As I can see HostedEngine is still running on the second host (and another
> yet 7 VM's) , but I can't stop it.
> ovirt-ha-agent and ovirt-ha-broker are failing to start. hosted-engine
> --vm-status gives nothing but error message
> "The hosted engine configuration has not been retrieved from shared
> storage. Please ensure that ovirt-ha-agent is running and the storage
> server is reachable."
>

Is the storage available? Check gluster volume status <volname> and
gluster volume heal <volname> info (replace <volname> with the name of
the gluster volume hosting your HE disk).
You mention a cluster with 2 hosts - replica 2? You're likely to run into
split-brain scenarios.

>
> ps -ef shows plenty of vdsm processes in defunct state; that's probably the
> reason why the agent and broker can't start. Just wondering what is the good
> way to start problem resolution here to minimize downtime for the running VMs?
>
> Restart vdsm and try again restarting agent and broker or just reboot the
> whole host?
>

If storage is available, try restarting the vdsm, agent and broker services.
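
Something along these lines on the affected host, as a sketch (restarting
vdsmd should not stop running guests, but proceed carefully anyway):

# Restart VDSM and the hosted-engine HA services
systemctl restart vdsmd
systemctl restart ovirt-ha-broker ovirt-ha-agent

# Then watch the HA state converge
hosted-engine --vm-status
journalctl -u ovirt-ha-agent -f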


> Regards,
> Artem
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BKU2N2UOEHWJ3XKJ5DRTERKBTQZ4X7EB/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FEDK4MCPPS7J4PVX6FKRMN6LVAZ44VO6/


[ovirt-users] removal of 3 vdsm hooks

2018-10-04 Thread Dan Kenigsberg
I've identified 3 ancient vdsm-hooks that have been obsoleted by
proper oVirt features.

vdsm-hook-isolatedvlan: obsoleted in ovirt-4.2.6  by
clean-traffic-gateway filter
https://gerrit.ovirt.org/#/q/I396243e1943eca245ab4da64bb286da19f9b47ec

vdsm-hook-qos: obsoleted in ovirt-3.3 by
https://www.ovirt.org/documentation/sla/network-qos/

vdsm-hook-noipspoof: obsoleted in ovirt-4.0 by choosing the
"clean-traffic" filter https://ovirt.org/feature/networkfilter

I would like to remove this code from vdsm-4.30, destined for
ovirt-4.3. Is there any objection to that? Is anybody still using
them?

Regards,
Dan.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GG2D4PPNNYJLZLPJ5Y5ELMRETG37WEYQ/


[ovirt-users] Reg: Enabling Nested virtualization.

2018-10-04 Thread syedquadeer
Dear Team,

 

I need help enabling nested virtualization in oVirt VMs.

 

When I checked my CPU info and KVM acceleration info, it gave the output
below.

 

# kvm-ok

INFO: Your CPU does not support KVM extensions

KVM acceleration can NOT be used

 

# cat /sys/module/kvm/parameters/nested

cat: /sys/module/kvm/parameters/nested: No such file or directory

 

# virsh -r list

Id    Name                           State
----------------------------------------------------


 

 

 

 



Thanks & Regards,

Syed Abdul Qadeer.

7660022818.

 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AD6WMDWA6MV5QVXFTIMV2Q4OSCYERCWF/


[ovirt-users] Re: [ANN] oVirt Node 4.2.6 async update is now available

2018-10-04 Thread Oliver Riesener
Manually removing cockpit-networkmanager-172-1.el7.noarch and updating oVirt 
again fixes this issue for me.
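
That is, roughly (a sketch of the two steps, package name as in the error 
below):

# Remove the conflicting cockpit sub-package, then retry the update
yum remove cockpit-networkmanager
yum update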


On 10/4/18 11:20 AM, Maton, Brett wrote:
Having trouble upgrading my test instance (4.2.7.1-1.el7); there 
appear to be some dependency issues:


Transaction check error:
  file /usr/share/cockpit/networkmanager/manifest.json from install of 
cockpit-system-176-2.el7.centos.noarch conflicts with file from 
package cockpit-networkmanager-172-1.el7.noarch
  [...]




On Thu, 4 Oct 2018 at 09:44, Sandro Bonazzola wrote:


The oVirt Team has just released a new version of oVirt Node image
including latest CentOS updates,
fixing a regression introduced in kernel package [1] breaking IP
over infiniband.
We recommend that users upgrade to this new release.

Errata included:
CEEA-2018:2397 CentOS 7 microcode_ctl Enhancement Update


[ovirt-users] Re: [ANN] oVirt Node 4.2.6 async update is now available

2018-10-04 Thread Sandro Bonazzola
On Thu, Oct 4, 2018 at 11:36 Staniforth, Paul <p.stanifo...@leedsbeckett.ac.uk> wrote:

> Thanks Sandro,
>
> Has the bug with wrong network threshold limits on VLAN networks been
> fixed?
>
>
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1625098
>

This one is targeted at 4.2.7; we are releasing a second release candidate
today that includes the fix.


>
> Regards,
>
> Paul S.
> --
> *From:* Sandro Bonazzola 
> *Sent:* 04 October 2018 09:34
> *To:* annou...@ovirt.org; users
> *Subject:* [ovirt-users] [ANN] oVirt Node 4.2.6 async update is now
> available
>
> The oVirt Team has just released a new version of oVirt Node image
> including latest CentOS updates,
> fixing a regression introduced in kernel package [1] breaking IP over
> infiniband.
> We recommend that users upgrade to this new release.
>
> Errata included:
> [...]

[ovirt-users] Re: [ANN] oVirt Node 4.2.6 async update is now available

2018-10-04 Thread Sandro Bonazzola
On Thu, Oct 4, 2018 at 11:21 Maton, Brett <mat...@ltresources.co.uk> wrote:

> Having trouble upgrading my test instance (4.2.7.1-1.el7), there appear
> to be some dependency issues:
>


> Transaction check error:
>   file /usr/share/cockpit/networkmanager/manifest.json from install of
> cockpit-system-176-2.el7.centos.noarch conflicts with file from package
> cockpit-networkmanager-172-1.el7.noarch
>


This is unrelated to the 4.2.6 async release; it's due to an update of the
cockpit-related packages in CentOS.
I tagged cockpit-176-1.el7
(http://cbs.centos.org/koji/buildinfo?buildID=24244) for release; it should
solve this issue as soon as the build lands on the mirrors and updates
cockpit-networkmanager.
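
Once that build has reached the mirrors, refreshing the metadata and updating
the cockpit packages should be enough; a rough sketch, exact versions may
differ:

# yum clean metadata
# yum update 'cockpit*'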




> [...]
>
>
>
> On Thu, 4 Oct 2018 at 09:44, Sandro Bonazzola  wrote:
>
>> The oVirt Team has just released a new version of oVirt Node image
>> including 

[ovirt-users] Re: [ANN] oVirt Node 4.2.6 async update is now available

2018-10-04 Thread Maton, Brett
Having trouble upgrading my test instance (4.2.7.1-1.el7), there appear to
be some dependency issues:

Transaction check error:
  file /usr/share/cockpit/networkmanager/manifest.json from install of
cockpit-system-176-2.el7.centos.noarch conflicts with file from package
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.ca.js.gz from install of
cockpit-system-176-2.el7.centos.noarch conflicts with file from package
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.cs.js.gz from install of
cockpit-system-176-2.el7.centos.noarch conflicts with file from package
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.de.js.gz from install of
cockpit-system-176-2.el7.centos.noarch conflicts with file from package
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.es.js.gz from install of
cockpit-system-176-2.el7.centos.noarch conflicts with file from package
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.eu.js.gz from install of
cockpit-system-176-2.el7.centos.noarch conflicts with file from package
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.fi.js.gz from install of
cockpit-system-176-2.el7.centos.noarch conflicts with file from package
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.fr.js.gz from install of
cockpit-system-176-2.el7.centos.noarch conflicts with file from package
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.hr.js.gz from install of
cockpit-system-176-2.el7.centos.noarch conflicts with file from package
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.hu.js.gz from install of
cockpit-system-176-2.el7.centos.noarch conflicts with file from package
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.ja.js.gz from install of
cockpit-system-176-2.el7.centos.noarch conflicts with file from package
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.ko.js.gz from install of
cockpit-system-176-2.el7.centos.noarch conflicts with file from package
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.my.js.gz from install of
cockpit-system-176-2.el7.centos.noarch conflicts with file from package
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.nl.js.gz from install of
cockpit-system-176-2.el7.centos.noarch conflicts with file from package
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.pa.js.gz from install of
cockpit-system-176-2.el7.centos.noarch conflicts with file from package
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.pl.js.gz from install of
cockpit-system-176-2.el7.centos.noarch conflicts with file from package
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.pt.js.gz from install of
cockpit-system-176-2.el7.centos.noarch conflicts with file from package
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.pt_BR.js.gz from install of
cockpit-system-176-2.el7.centos.noarch conflicts with file from package
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.tr.js.gz from install of
cockpit-system-176-2.el7.centos.noarch conflicts with file from package
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.uk.js.gz from install of
cockpit-system-176-2.el7.centos.noarch conflicts with file from package
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.zh_CN.js.gz from install of
cockpit-system-176-2.el7.centos.noarch conflicts with file from package
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/firewall.css.gz from install of
cockpit-system-176-2.el7.centos.noarch conflicts with file from package
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/network.min.js.gz from install of
cockpit-system-176-2.el7.centos.noarch conflicts with file from package
cockpit-networkmanager-172-1.el7.noarch



On Thu, 4 Oct 2018 at 09:44, Sandro Bonazzola  wrote:

> The oVirt Team has just released a new version of oVirt Node image
> including latest CentOS updates,
> fixing a regression introduced in kernel package [1] breaking IP over
> infiniband.
> We recommend that users upgrade to this new release.
>
> Errata included:
> CEEA-2018:2397 CentOS 7 microcode_ctl Enhancement Update
> 
> CESA-2018:2748 Important CentOS 7 kernel Security Update
> 
> CEBA-2018:2760 CentOS 7 ipa BugFix Update
> 

[ovirt-users] Re: hosted-engine --deploy error

2018-10-04 Thread Simone Tiraboschi
Hi,
can you please attach the whole content of
/var/log/ovirt-hosted-engine-setup?
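
One convenient way to collect them, for example:

# tar czf /tmp/ovirt-hosted-engine-setup-logs.tar.gz /var/log/ovirt-hosted-engine-setup/

and attach the resulting archive.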

On Thu, Oct 4, 2018 at 9:25 AM  wrote:

> I configured an oVirt self-hosted engine and then cleaned it up with this
> command: /usr/sbin/ovirt-hosted-engine-cleanup
> Then I wanted to re-deploy the self-hosted engine, but this error appeared:
>
> [root@ovirtnode44 ~]# hosted-engine --deploy
> [ INFO  ] Stage: Initializing
> [ INFO  ] Stage: Environment setup
>   During customization use CTRL-D to abort.
>   Continuing will configure this host for serving as hypervisor
> and create a local VM with a running engine.
>   The locally running engine will be used to configure a storage
> domain and create a VM there.
>   At the end the disk of the local VM will be moved to the shared
> storage.
>   Are you sure you want to continue? (Yes, No)[Yes]:
>   It has been detected that this program is executed through an
> SSH connection without using screen.
>   Continuing with the installation may lead to broken installation
> if the network connection fails.
>   It is highly recommended to abort the installation and run it
> inside a screen session using command "screen".
>   Do you want to continue anyway? (Yes, No)[No]: yes
>   Configuration files: []
>   Log file:
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20181004025609-bzthh6.log
>   Version: otopi-1.7.8 (otopi-1.7.8-1.el7)
> [ INFO  ] Stage: Environment packages setup
> [ INFO  ] Stage: Programs detection
> [ INFO  ] Stage: Environment setup
> [ INFO  ] Stage: Environment customization
>
>   --== STORAGE CONFIGURATION ==--
>
>
>   --== HOST NETWORK CONFIGURATION ==--
>
> [ INFO  ] Bridge ovirtmgmt already created
>   Please indicate a pingable gateway IP address [192.168.3.2]:
> [ INFO  ] TASK [Gathering Facts]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [Detecting interface on existing management bridge]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [Get all active network interfaces]
> [ INFO  ] TASK [Filter bonds with bad naming]
> [ INFO  ] TASK [Generate output list]
> [ INFO  ] ok: [localhost]
>   Please indicate a nic to set ovirtmgmt bridge on: (enp24s0f0)
> [enp24s0f0]:
>
>   --== VM CONFIGURATION ==--
>
>   If you want to deploy with a custom engine appliance image,
>   please specify the path to the OVA archive you would like to use
>   (leave it empty to skip, the setup will use
> ovirt-engine-appliance rpm installing it if missing):
> [ INFO  ] Detecting host timezone.
>   Please provide the FQDN you would like to use for the engine
> appliance.
>   Note: This will be the FQDN of the engine VM you are now going
> to launch,
>   it should not point to the base host or to any other existing
> machine.
>   Engine VM FQDN:  []: ovirtengine.exalt.ps
>   Please provide the domain name you would like to use for the
> engine appliance.
>   Engine VM domain: [exalt.ps]
>   Enter root password that will be used for the engine appliance:
>   Confirm appliance root password:
>   Enter ssh public key for the root user that will be used for the
> engine appliance (leave it empty to skip):
> [WARNING] Skipping appliance root ssh public key
>   Do you want to enable ssh access for the root user (yes, no,
> without-password) [yes]:
>   Please specify the number of virtual CPUs for the VM (Defaults
> to appliance OVF value): [4]:
>   Please specify the memory size of the VM in MB (Defaults to
> appliance OVF value): [16384]:
>   You may specify a unicast MAC address for the VM or accept a
> randomly generated default [00:16:3e:03:a8:09]:
>   How should the engine VM network be configured (DHCP,
> Static)[DHCP]? Static
>   Please enter the IP address to be used for the engine VM
> [192.168.0.1]: 192.168.200.45
> [ INFO  ] The engine VM will be configured to use 192.168.200.45/16
>   Please provide a comma-separated list (max 3) of IP addresses of
> domain name servers for the engine VM
>   Engine VM DNS (leave it empty to skip) [192.168.200.6]:
>   Add lines for the appliance itself and for this host to
> /etc/hosts on the engine VM?
>   Note: ensuring that this host could resolve the engine VM
> hostname is still up to you
>   (Yes, No)[No]
>
>   --== HOSTED ENGINE CONFIGURATION ==--
>
>   Please provide the name of the SMTP server through which we will
> send notifications [localhost]:
>   Please provide the TCP port number of the SMTP server [25]:
>   Please provide the email address from which notifications will
> be sent [root@localhost]:
>   Please provide a comma-separated list of email addresses which
> will get notifications [root@localhost]:
>   Enter engine admin password:
>   Confirm engine admin password:
> [ 

[ovirt-users] Re: IPoIB broken with ovirt 4.2.6

2018-10-04 Thread Sandro Bonazzola
On Wed, Oct 3, 2018 at 12:58 Sandro Bonazzola <sbona...@redhat.com> wrote:

>
>
> On Wed, Oct 3, 2018 at 12:37 Giulio Casella wrote:
>
>> On 04/09/2018 12:54, Sandro Bonazzola wrote:
>> >
>> >
>> > 2018-09-03 16:22 GMT+02:00 Giulio Casella:
>> >
>> > Hi,
>> > latest ovirt node stable (4.2.6 today) introduced a bug in the kernel:
>> > IP over infiniband is not working anymore after an upgrade, due to
>> > kernel 3.10.0-862.11.6.el7.x86_64.
>> >
>> > You can find some detail here:
>> >
>> > https://bugs.centos.org/view.php?id=15193
>> > 
>> >
>> > dmesg is full of "failed to modify QP to RTR: -22", and the networking
>> > stack (in my case used to connect to storage) is broken. The interface
>> > can obtain an address via DHCP, but even a simple ICMP ping fails.
>> >
>> > Does someone have news about a fix for this issue?
>> >
>> >
>> > Thanks for reporting, I wasn't aware of this issue.
>> > We'll issue an async respin as soon as a new kernel will be available.
>> > Adding this to release notes.
>> >
>>
>> Hi Sandro, kernel 3.10.0-862.14.4.el7 for CentOS (which should fix this
>> issue) has been out since 9/28. Any news about the respin of oVirt Node?
>>
>
> Thanks for the heads up! We are preparing oVirt 4.2.7 RC2 today; I'll
> issue an oVirt Node 4.2.6 async 2 in parallel. Both should go out tomorrow.
>

Released
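
After upgrading a node, the running kernel should be the fixed one shipped in
this async image, e.g.:

# uname -r
3.10.0-862.14.4.el7.x86_64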


>
>
>
>>
>> Ciao,
>> gc
>>
>
>
> --
>
> SANDRO BONAZZOLA
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
> 
>


-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Y2MV75653ZCEDTR73S23ZQN3FWR67X2Y/


[ovirt-users] [ANN] oVirt Node 4.2.6 async update is now available

2018-10-04 Thread Sandro Bonazzola
The oVirt Team has just released a new version of oVirt Node image
including latest CentOS updates,
fixing a regression introduced in kernel package [1] breaking IP over
infiniband.
We recommend that users upgrade to this new release.

Errata included:
CEEA-2018:2397 CentOS 7 microcode_ctl Enhancement Update

CESA-2018:2748 Important CentOS 7 kernel Security Update

CEBA-2018:2760 CentOS 7 ipa BugFix Update

CESA-2018:2768 Moderate CentOS 7 nss Security Update


CEBA-2018:2753 CentOS 7 systemd BugFix Update

CEBA-2018:2756 CentOS 7 sssd BugFix Update

CEBA-2018:2769 CentOS 7 libvirt BugFix Update


CEBA-2018:2761 CentOS 7 kexec-tools BugFix Update

CEBA-2018:2758 CentOS 7 firewalld BugFix Update

CEBA-2018:2764 CentOS 7 initscripts BugFix Update

CESA-2018:2731 Important CentOS 7 spice Security Update


Ansible 2.6.5:
https://github.com/ansible/ansible/blob/v2.6.5/changelogs/CHANGELOG-v2.6.rst
Cockpit 176: https://cockpit-project.org/blog/cockpit-176.html

Updates included:
+ansible-2.6.5-1.el7.noarch
+cockpit-176-2.el7.centos.x86_64
+cockpit-bridge-176-2.el7.centos.x86_64
+cockpit-dashboard-176-2.el7.centos.x86_64
+cockpit-machines-ovirt-176-2.el7.centos.noarch
+cockpit-storaged-176-2.el7.centos.noarch
+cockpit-system-176-2.el7.centos.noarch
+cockpit-ws-176-2.el7.centos.x86_64
+firewalld-0.4.4.4-15.el7_5.noarch
+firewalld-filesystem-0.4.4.4-15.el7_5.noarch
+initscripts-9.49.41-1.el7_5.2.x86_64
+ipa-client-4.5.4-10.el7.centos.4.4.x86_64
+ipa-client-common-4.5.4-10.el7.centos.4.4.noarch
+ipa-common-4.5.4-10.el7.centos.4.4.noarch
+kernel-3.10.0-862.14.4.el7.x86_64
+kernel-tools-3.10.0-862.14.4.el7.x86_64
+kernel-tools-libs-3.10.0-862.14.4.el7.x86_64
+kexec-tools-2.0.15-13.el7_5.2.x86_64
+libgudev1-219-57.el7_5.3.x86_64
+libipa_hbac-1.16.0-19.el7_5.8.x86_64
+libsss_autofs-1.16.0-19.el7_5.8.x86_64
+libsss_certmap-1.16.0-19.el7_5.8.x86_64
+libsss_idmap-1.16.0-19.el7_5.8.x86_64
+libsss_nss_idmap-1.16.0-19.el7_5.8.x86_64
+libsss_sudo-1.16.0-19.el7_5.8.x86_64
+libvirt-3.9.0-14.el7_5.8.x86_64
+libvirt-client-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-config-network-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-config-nwfilter-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-interface-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-lxc-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-network-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-nodedev-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-nwfilter-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-qemu-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-secret-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-core-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-disk-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-gluster-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-iscsi-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-logical-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-mpath-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-rbd-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-driver-storage-scsi-3.9.0-14.el7_5.8.x86_64
+libvirt-daemon-kvm-3.9.0-14.el7_5.8.x86_64
+libvirt-libs-3.9.0-14.el7_5.8.x86_64
+libvirt-lock-sanlock-3.9.0-14.el7_5.8.x86_64
+microcode_ctl-2.1-29.16.el7_5.x86_64
+mokutil-12-2.el7.x86_64
+nss-3.36.0-7.el7_5.x86_64
+nss-sysinit-3.36.0-7.el7_5.x86_64
+nss-tools-3.36.0-7.el7_5.x86_64
+ovirt-node-ng-image-update-placeholder-4.2.6.2-1.el7.noarch
+ovirt-node-ng-nodectl-4.2.0-0.20181003.0.el7.noarch
+ovirt-release-host-node-4.2.6.2-1.el7.noarch
+ovirt-release42-4.2.6.2-1.el7.noarch
+python-firewall-0.4.4.4-15.el7_5.noarch
+python-libipa_hbac-1.16.0-19.el7_5.8.x86_64
+python-perf-3.10.0-862.14.4.el7.x86_64
+python-sss-murmur-1.16.0-19.el7_5.8.x86_64
+python-sssdconfig-1.16.0-19.el7_5.8.noarch
+python2-ipaclient-4.5.4-10.el7.centos.4.4.noarch
+python2-ipalib-4.5.4-10.el7.centos.4.4.noarch
+shim-x64-12-2.el7.x86_64
+spice-server-0.14.0-2.el7_5.5.x86_64
+sssd-1.16.0-19.el7_5.8.x86_64
+sssd-ad-1.16.0-19.el7_5.8.x86_64
+sssd-client-1.16.0-19.el7_5.8.x86_64

[ovirt-users] Re: master domain wont activate

2018-10-04 Thread Oliver Riesener
When your hosts are up and running and your storage domain still doesn't go
active within a few minutes:


* Activate your storage domain under:

Storage -> Storage Domain -> (Open your Domain) -> Data Center ->
(Right Click Your Data Center Name) -> Activate.
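
The same activation can also be triggered through the REST API; a rough
sketch with placeholder values (ENGINE_FQDN, PASSWORD, DC_ID and SD_ID are
yours to fill in):

# curl -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
    -d '<action/>' \
    https://ENGINE_FQDN/ovirt-engine/api/datacenters/DC_ID/storagedomains/SD_ID/activate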


On 10/4/18 9:50 AM, Oliver Riesener wrote:


Hi Vincent,

OK, your master domain isn't available at the moment, but no panic.

First of all we need the status of your hosts. No hosts -> no storage!

* Did you reboot them hard, without confirming "Host has been rebooted"?

* Are they activated in the Data Center / Cluster? Green arrow?


On 10/4/18 7:46 AM, Vincent Royer wrote:
I was attempting to migrate from nfs to iscsi storage domains.  I 
have reached a state where I can no longer activate the old master 
storage domain, and thus no others will activate either.


I'm ready to give up on the installation and just move to an HCI 
deployment instead.  Wipe all the hosts clean and start again.


My plan was to create and use an export domain, then wipe the nodes 
and set them up HCI where I could re-import.  But without being able 
to activate a master domain, I can't create the export domain.


I'm not sure why it can't find the master anymore, as nothing has 
happened to the NFS storage, but the error in vdsm says it just can't 
find it:


StoragePoolMasterNotFound: Cannot find master domain: 
u'spUUID=5a77bed1-0238-030c-0122-03b3, 
msdUUID=d3165759-07c2-46ae-b7b8-b6226a929d68'
2018-10-03 22:40:33,751-0700 INFO  (jsonrpc/3) 
[storage.TaskManager.Task] 
(Task='83f33db5-90f3-4064-87df-0512ab9b6378') aborting: Task is 
aborted: "Cannot find master domain: 
u'spUUID=5a77bed1-0238-030c-0122-03b3, 
msdUUID=d3165759-07c2-46ae-b7b8-b6226a929d68'" - code 304 (task:1181)
2018-10-03 22:40:33,751-0700 ERROR (jsonrpc/3) [storage.Dispatcher] 
FINISH connectStoragePool error=Cannot find master domain: 
u'spUUID=5a77bed1-0238-030c-0122-03b3, 
msdUUID=d3165759-07c2-46ae-b7b8-b6226a929d68' (dispatcher:82)
2018-10-03 22:40:33,751-0700 INFO  (jsonrpc/3) 
[jsonrpc.JsonRpcServer] RPC call StoragePool.connect failed (error 
304) in 0.17 seconds (__init__:573)
2018-10-03 22:40:34,200-0700 INFO  (jsonrpc/1) [api.host] START 
getStats() from=:::172.16.100.13,39028 (api:46)


When I look in cockpit on the hosts, the storage domain is mounted 
and seems fine.




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/LTZ6SIFYDFEMSZ4ACUNVC5KETWG7BBIZ/

--
Mit freundlichem Gruß


Oliver Riesener

--
Hochschule Bremen
Elektrotechnik und Informatik
Oliver Riesener
Neustadtswall 30
D-28199 Bremen

Tel: 0421 5905-2405, Fax: -2400
e-mail: oliver.riese...@hs-bremen.de

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/V72KMULZJAT3XIR3GBTOCA5RLACVQSRC/


[ovirt-users] Re: master domain wont activate

2018-10-04 Thread Oliver Riesener

Hi Vincent,

OK, your master domain isn't available at the moment, but no panic.

First of all we need the status of your hosts. No hosts -> no storage!

* Did you reboot them hard, without confirming "Host has been rebooted"?

* Are they activated in the Data Center / Cluster? Green arrow?


On 10/4/18 7:46 AM, Vincent Royer wrote:
I was attempting to migrate from nfs to iscsi storage domains.  I have 
reached a state where I can no longer activate the old master storage 
domain, and thus no others will activate either.


I'm ready to give up on the installation and just move to an HCI 
deployment instead.  Wipe all the hosts clean and start again.


My plan was to create and use an export domain, then wipe the nodes 
and set them up HCI where I could re-import.  But without being able 
to activate a master domain, I can't create the export domain.


I'm not sure why it can't find the master anymore, as nothing has 
happened to the NFS storage, but the error in vdsm says it just can't 
find it:


StoragePoolMasterNotFound: Cannot find master domain: 
u'spUUID=5a77bed1-0238-030c-0122-03b3, 
msdUUID=d3165759-07c2-46ae-b7b8-b6226a929d68'
2018-10-03 22:40:33,751-0700 INFO  (jsonrpc/3) 
[storage.TaskManager.Task] 
(Task='83f33db5-90f3-4064-87df-0512ab9b6378') aborting: Task is 
aborted: "Cannot find master domain: 
u'spUUID=5a77bed1-0238-030c-0122-03b3, 
msdUUID=d3165759-07c2-46ae-b7b8-b6226a929d68'" - code 304 (task:1181)
2018-10-03 22:40:33,751-0700 ERROR (jsonrpc/3) [storage.Dispatcher] 
FINISH connectStoragePool error=Cannot find master domain: 
u'spUUID=5a77bed1-0238-030c-0122-03b3, 
msdUUID=d3165759-07c2-46ae-b7b8-b6226a929d68' (dispatcher:82)
2018-10-03 22:40:33,751-0700 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] 
RPC call StoragePool.connect failed (error 304) in 0.17 seconds 
(__init__:573)
2018-10-03 22:40:34,200-0700 INFO  (jsonrpc/1) [api.host] START 
getStats() from=:::172.16.100.13,39028 (api:46)


When I look in cockpit on the hosts, the storage domain is mounted and 
seems fine.




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LTZ6SIFYDFEMSZ4ACUNVC5KETWG7BBIZ/


--
Mit freundlichem Gruß


Oliver Riesener

--
Hochschule Bremen
Elektrotechnik und Informatik
Oliver Riesener
Neustadtswall 30
D-28199 Bremen

Tel: 0421 5905-2405, Fax: -2400
e-mail: oliver.riese...@hs-bremen.de

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EYM2TUFODFDZIVYOHC464O334YK3YGFZ/


[ovirt-users] hosted-engine --deploy error

2018-10-04 Thread mustafa . taha . mu95
I configured an oVirt self-hosted engine and then cleaned it up with this
command: /usr/sbin/ovirt-hosted-engine-cleanup
Then I wanted to re-deploy the self-hosted engine, but this error appeared:

[root@ovirtnode44 ~]# hosted-engine --deploy
[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
  During customization use CTRL-D to abort.
  Continuing will configure this host for serving as hypervisor and 
create a local VM with a running engine.
  The locally running engine will be used to configure a storage domain 
and create a VM there.
  At the end the disk of the local VM will be moved to the shared 
storage.
  Are you sure you want to continue? (Yes, No)[Yes]:
  It has been detected that this program is executed through an SSH 
connection without using screen.
  Continuing with the installation may lead to broken installation if 
the network connection fails.
  It is highly recommended to abort the installation and run it inside 
a screen session using command "screen".
  Do you want to continue anyway? (Yes, No)[No]: yes
  Configuration files: []
  Log file: 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20181004025609-bzthh6.log
  Version: otopi-1.7.8 (otopi-1.7.8-1.el7)
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup
[ INFO  ] Stage: Environment customization

  --== STORAGE CONFIGURATION ==--


  --== HOST NETWORK CONFIGURATION ==--

[ INFO  ] Bridge ovirtmgmt already created
  Please indicate a pingable gateway IP address [192.168.3.2]:
[ INFO  ] TASK [Gathering Facts]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Detecting interface on existing management bridge]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Get all active network interfaces]
[ INFO  ] TASK [Filter bonds with bad naming]
[ INFO  ] TASK [Generate output list]
[ INFO  ] ok: [localhost]
  Please indicate a nic to set ovirtmgmt bridge on: (enp24s0f0) 
[enp24s0f0]:

  --== VM CONFIGURATION ==--

  If you want to deploy with a custom engine appliance image,
  please specify the path to the OVA archive you would like to use
  (leave it empty to skip, the setup will use ovirt-engine-appliance 
rpm installing it if missing):
[ INFO  ] Detecting host timezone.
  Please provide the FQDN you would like to use for the engine 
appliance.
  Note: This will be the FQDN of the engine VM you are now going to 
launch,
  it should not point to the base host or to any other existing machine.
  Engine VM FQDN:  []: ovirtengine.exalt.ps
  Please provide the domain name you would like to use for the engine 
appliance.
  Engine VM domain: [exalt.ps]
  Enter root password that will be used for the engine appliance:
  Confirm appliance root password:
  Enter ssh public key for the root user that will be used for the 
engine appliance (leave it empty to skip):
[WARNING] Skipping appliance root ssh public key
  Do you want to enable ssh access for the root user (yes, no, 
without-password) [yes]:
  Please specify the number of virtual CPUs for the VM (Defaults to 
appliance OVF value): [4]:
  Please specify the memory size of the VM in MB (Defaults to appliance 
OVF value): [16384]:
  You may specify a unicast MAC address for the VM or accept a randomly 
generated default [00:16:3e:03:a8:09]:
  How should the engine VM network be configured (DHCP, Static)[DHCP]? 
Static
  Please enter the IP address to be used for the engine VM 
[192.168.0.1]: 192.168.200.45
[ INFO  ] The engine VM will be configured to use 192.168.200.45/16
  Please provide a comma-separated list (max 3) of IP addresses of 
domain name servers for the engine VM
  Engine VM DNS (leave it empty to skip) [192.168.200.6]:
  Add lines for the appliance itself and for this host to /etc/hosts on 
the engine VM?
  Note: ensuring that this host could resolve the engine VM hostname is 
still up to you
  (Yes, No)[No]

  --== HOSTED ENGINE CONFIGURATION ==--

  Please provide the name of the SMTP server through which we will send 
notifications [localhost]:
  Please provide the TCP port number of the SMTP server [25]:
  Please provide the email address from which notifications will be 
sent [root@localhost]:
  Please provide a comma-separated list of email addresses which will 
get notifications [root@localhost]:
  Enter engine admin password:
  Confirm engine admin password:
[ INFO  ] Stage: Setup validation
[ INFO  ] Stage: Transaction setup
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Package installation
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Transaction commit
[ INFO  ] Stage: Closing up
[ INFO  ] Cleaning previous attempts
[ INFO  ] TASK 

[ovirt-users] Re: master domain wont activate

2018-10-04 Thread Elad Ben Aharon
As a workaround, you can re-initialize the data center from an unattached
domain:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/html/administration_guide/sect-data_center_tasks#Re-initializing_a_Data_Center
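
Before re-initializing, it may be worth checking from one of the hosts
whether the old master's metadata is still readable at all; a rough check,
assuming the default NFS mount layout and using the msdUUID from the log
above (the mount directory name depends on your NFS server and export path):

# ls /rhev/data-center/mnt/
# grep ROLE /rhev/data-center/mnt/<your_nfs_mount_dir>/d3165759-07c2-46ae-b7b8-b6226a929d68/dom_md/metadata

A healthy master domain should still report ROLE=Master there.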

On Thu, Oct 4, 2018 at 8:46 AM, Vincent Royer  wrote:

> I was attempting to migrate from nfs to iscsi storage domains.  I have
> reached a state where I can no longer activate the old master storage
> domain, and thus no others will activate either.
>
> I'm ready to give up on the installation and just move to an HCI
> deployment instead.  Wipe all the hosts clean and start again.
>
> My plan was to create and use an export domain, then wipe the nodes and
> set them up HCI where I could re-import.  But without being able to
> activate a master domain, I can't create the export domain.
>
> I'm not sure why it can't find the master anymore, as nothing has happened
> to the NFS storage, but the error in vdsm says it just can't find it:
>
> StoragePoolMasterNotFound: Cannot find master domain:
> u'spUUID=5a77bed1-0238-030c-0122-03b3,
> msdUUID=d3165759-07c2-46ae-b7b8-b6226a929d68'
> 2018-10-03 22:40:33,751-0700 INFO  (jsonrpc/3) [storage.TaskManager.Task]
> (Task='83f33db5-90f3-4064-87df-0512ab9b6378') aborting: Task is aborted:
> "Cannot find master domain: u'spUUID=5a77bed1-0238-030c-0122-03b3,
> msdUUID=d3165759-07c2-46ae-b7b8-b6226a929d68'" - code 304 (task:1181)
> 2018-10-03 22:40:33,751-0700 ERROR (jsonrpc/3) [storage.Dispatcher] FINISH
> connectStoragePool error=Cannot find master domain:
> u'spUUID=5a77bed1-0238-030c-0122-03b3,
> msdUUID=d3165759-07c2-46ae-b7b8-b6226a929d68' (dispatcher:82)
> 2018-10-03 22:40:33,751-0700 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC
> call StoragePool.connect failed (error 304) in 0.17 seconds (__init__:573)
> 2018-10-03 22:40:34,200-0700 INFO  (jsonrpc/1) [api.host] START getStats()
> from=:::172.16.100.13,39028 (api:46)
>
> When I look in cockpit on the hosts, the storage domain is mounted and
> seems fine.
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/LTZ6SIFYDFEMSZ4ACUNVC5KETWG7BBIZ/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WIMTGHZOIYEDQQO3M5ZUAM43IQL2Y3GX/