[ovirt-users] Best practice for iSCSI storage domains

2017-12-06 Thread Richard Chan
What is the best practice for iSCSI storage domains:

Many small targets vs a few large targets?

Specific example: if you wanted an 8TB storage domain, would you prepare a
single 8TB LUN or (for example) 8 x 1TB LUNs?



-- 
Richard Chan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how to setup image-io-proxy after initially disabling it

2017-12-06 Thread Nir Soffer
On Thu, Dec 7, 2017 at 1:22 AM Gianluca Cecchi 
wrote:

> On Wed, Dec 6, 2017 at 11:42 PM, Nir Soffer  wrote:
>
>>
>>
>>>
>>> BTW: I notice that the disk seems preallocated even if original qcow2 is
>>> thin... is this expected?
>>> This obviously also impacts the time to upload (20Gb virtual disk with
>>> actual 1.2Gb occupied needs the time equivalent for full 20Gb...)
>>>
>>
>> We upload exactly the file you provided, there is no way we can upload
>> 20G from 1.2G file :-)
>>
>
> But the upload process, at an average rate of 40-50MB/s, lasted about 9
> minutes, which confirms the 20GB size.
> The disk at the source was created as virtio type and qcow2 format from
> virt-manager, and then only a CentOS 7.2 OS with an infrastructure
> server configuration was installed.
> Apart from qemu-img, ls also shows:
> # ls -lhs c7lab1.qcow2
> 1.3G -rw--- 1 root root 21G Dec  6 23:05 c7lab1.qcow2
>

The file size is 21G - matching what you see. This is the size we upload.

1.3G is the used size on the file system; we cannot upload only the used blocks.
qemu-img info "Disk size" is not the file size but the used size, so it is not
useful for upload.

Maybe this file was created with preallocation=full?

To optimize this file for upload, you can do:

qemu-img convert -p -f qcow2 -O qcow2 c7lab1.qcow2 c7lab1-opt.qcow2

I think the output file size will shrink.

You can also compress it:

qemu-img convert -p -c -f qcow2 -O qcow2 c7lab1.qcow2 c7lab1-compressed.qcow2

Please share the results.
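
As a quick sanity check after the conversion (file names reused from the
commands above), comparing the allocated and apparent sizes should show
whether it helped:

ls -lhs c7lab1.qcow2 c7lab1-opt.qcow2
qemu-img info c7lab1-opt.qcow2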


>
> On target after upload:
>
> [root@ovirt01 251063f6-5570-4bdc-b28f-21e82aa5e185]# ls -lhs
> 77aacfb3-9e67-4ad2-96f6-242b5ba4d9e0
> 21G -rw-rw. 1 vdsm kvm 21G Dec  6 23:44
> 77aacfb3-9e67-4ad2-96f6-242b5ba4d9e0
> [root@ovirt01 251063f6-5570-4bdc-b28f-21e82aa5e185]#
>
>
>
>> But maybe we created the file in the wrong format?
>>
>> Can you share vdsm logs from the spm, showing how the disk was created?
>>
>>
>>>
>>>
> First message at beginning of upload:
>
> 2017-12-06 23:09:50,183+0100 INFO  (jsonrpc/4) [vdsm.api] START
> createVolume(sdUUID=u'572eabe7-15d0-42c2-8fa9-0bd773e22e2e',
> spUUID=u'0001-0001-0001-0001-0343',
> imgUUID=u'251063f6-5570-4bdc-b28f-21e82aa5e185', size=u'22548578304',
> volFormat=4, preallocate=2, diskType=2,
> volUUID=u'77aacfb3-9e67-4ad2-96f6-242b5ba4d9e0',
> desc=u'{"DiskAlias":"c7lab1","DiskDescription":""}',
> srcImgUUID=u'----',
> srcVolUUID=u'----', initialSize=None)
> from=192.168.1.212,56846, flow_id=18c6bd3b-76ab-45f9-b8c7-09c727f44c91,
> task_id=e7cc67e6-4b61-4bb3-81b1-6bc687ea5ee9 (api:46)
>
>
> Last message that corresponds to completion of upload:
>
> 2017-12-06 23:21:03,914+0100 INFO  (jsonrpc/4) [storage.VolumeManifest]
> 572eabe7-15d0-42c2-8fa9-0bd773e22e2e/251063f6-5570-4bdc-b28f-21e82aa5e185/77aacfb3-9e67-4ad2-96f6-242b5ba4d9e0
> info is {'status': 'OK', 'domain': '572eabe7-15d0-42c2-8fa9-0bd773e22e2e',
> 'voltype': 'LEAF', 'description':
> '{"DiskAlias":"c7lab1","DiskDescription":""}', 'parent':
> '----', 'format': 'COW', 'generation': 0,
> 'image': '251063f6-5570-4bdc-b28f-21e82aa5e185', 'ctime': '1512598190',
> 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize':
> '21496266752', 'children': [], 'pool': '', 'capacity': '22548578304',
> 'uuid': u'77aacfb3-9e67-4ad2-96f6-242b5ba4d9e0', 'truesize': '21496602624',
> 'type': 'SPARSE', 'lease': {'owners': [], 'version': None}} (volume:272)
> 2017-12-06 23:21:03,915+0100 INFO  (jsonrpc/4) [vdsm.api] FINISH
> getVolumeInfo return={'info': {'status': 'OK', 'domain':
> '572eabe7-15d0-42c2-8fa9-0bd773e22e2e', 'voltype': 'LEAF', 'description':
> '{"DiskAlias":"c7lab1","DiskDescription":""}', 'parent':
> '----', 'format': 'COW', 'generation': 0,
> 'image': '251063f6-5570-4bdc-b28f-21e82aa5e185', 'ctime': '1512598190',
> 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize':
> '21496266752', 'children': [], 'pool': '', 'capacity': '22548578304',
> 'uuid': u'77aacfb3-9e67-4ad2-96f6-242b5ba4d9e0', 'truesize': '21496602624',
> 'type': 'SPARSE', 'lease': {'owners': [], 'version': None}}}
> from=192.168.1.212,56840, flow_id=18c6bd3b-76ab-45f9-b8c7-09c727f44c91,
> task_id=6b268651-4a7f-4366-9d01-829deaa16bfd (api:52)
>
> Full vdsm.log.gz in between here:
>
>
> https://drive.google.com/file/d/1IZIKDXyNN3bc6035C5Rc5WiI_UZ_g0ZV/view?usp=sharing
>
>
>
>>
>> NFS version?
>>
>
> The mount done from the host is this:
>
> [root@ovirt01 /] # mount | grep NFS_DOMAIN
> ovirt01:/NFS_DOMAIN on /rhev/data-center/mnt/ovirt01:_NFS__DOMAIN type nfs
> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=192.168.1.211,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=192.168.1.211)
>
> This is a test system so that I only have one host and the NFS mount is
> done over an XFS local filesystem 

Re: [ovirt-users] how to setup image-io-proxy after initially disabling it

2017-12-06 Thread Gianluca Cecchi
On Wed, Dec 6, 2017 at 11:42 PM, Nir Soffer  wrote:

>
>
>>
>> BTW: I notice that the disk seems preallocated even if original qcow2 is
>> thin... is this expected?
>> This obviously also impacts the time to upload (20Gb virtual disk with
>> actual 1.2Gb occupied needs the time equivalent for full 20Gb...)
>>
>
> We upload exactly the file you provided, there is no way we can upload 20G
> from 1.2G file :-)
>

But the upload process, at an average rate of 40-50MB/s, lasted about 9
minutes, which confirms the 20GB size.
The disk at the source was created as virtio type and qcow2 format from
virt-manager, and then only a CentOS 7.2 OS with an infrastructure
server configuration was installed.
Apart from qemu-img, ls also shows:
# ls -lhs c7lab1.qcow2
1.3G -rw--- 1 root root 21G Dec  6 23:05 c7lab1.qcow2

On target after upload:

[root@ovirt01 251063f6-5570-4bdc-b28f-21e82aa5e185]# ls -lhs
77aacfb3-9e67-4ad2-96f6-242b5ba4d9e0
21G -rw-rw. 1 vdsm kvm 21G Dec  6 23:44
77aacfb3-9e67-4ad2-96f6-242b5ba4d9e0
[root@ovirt01 251063f6-5570-4bdc-b28f-21e82aa5e185]#



> But maybe we created the file in the wrong format?
>
> Can you share vdsm logs from the spm, showing how the disk was created?
>
>
>>
>>
First message at beginning of upload:

2017-12-06 23:09:50,183+0100 INFO  (jsonrpc/4) [vdsm.api] START
createVolume(sdUUID=u'572eabe7-15d0-42c2-8fa9-0bd773e22e2e',
spUUID=u'0001-0001-0001-0001-0343',
imgUUID=u'251063f6-5570-4bdc-b28f-21e82aa5e185', size=u'22548578304',
volFormat=4, preallocate=2, diskType=2,
volUUID=u'77aacfb3-9e67-4ad2-96f6-242b5ba4d9e0',
desc=u'{"DiskAlias":"c7lab1","DiskDescription":""}',
srcImgUUID=u'----',
srcVolUUID=u'----', initialSize=None)
from=192.168.1.212,56846, flow_id=18c6bd3b-76ab-45f9-b8c7-09c727f44c91,
task_id=e7cc67e6-4b61-4bb3-81b1-6bc687ea5ee9 (api:46)


Last message that corresponds to completion of upload:

2017-12-06 23:21:03,914+0100 INFO  (jsonrpc/4) [storage.VolumeManifest]
572eabe7-15d0-42c2-8fa9-0bd773e22e2e/251063f6-5570-4bdc-b28f-21e82aa5e185/77aacfb3-9e67-4ad2-96f6-242b5ba4d9e0
info is {'status': 'OK', 'domain': '572eabe7-15d0-42c2-8fa9-0bd773e22e2e',
'voltype': 'LEAF', 'description':
'{"DiskAlias":"c7lab1","DiskDescription":""}', 'parent':
'----', 'format': 'COW', 'generation': 0,
'image': '251063f6-5570-4bdc-b28f-21e82aa5e185', 'ctime': '1512598190',
'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize':
'21496266752', 'children': [], 'pool': '', 'capacity': '22548578304',
'uuid': u'77aacfb3-9e67-4ad2-96f6-242b5ba4d9e0', 'truesize': '21496602624',
'type': 'SPARSE', 'lease': {'owners': [], 'version': None}} (volume:272)
2017-12-06 23:21:03,915+0100 INFO  (jsonrpc/4) [vdsm.api] FINISH
getVolumeInfo return={'info': {'status': 'OK', 'domain':
'572eabe7-15d0-42c2-8fa9-0bd773e22e2e', 'voltype': 'LEAF', 'description':
'{"DiskAlias":"c7lab1","DiskDescription":""}', 'parent':
'----', 'format': 'COW', 'generation': 0,
'image': '251063f6-5570-4bdc-b28f-21e82aa5e185', 'ctime': '1512598190',
'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize':
'21496266752', 'children': [], 'pool': '', 'capacity': '22548578304',
'uuid': u'77aacfb3-9e67-4ad2-96f6-242b5ba4d9e0', 'truesize': '21496602624',
'type': 'SPARSE', 'lease': {'owners': [], 'version': None}}}
from=192.168.1.212,56840, flow_id=18c6bd3b-76ab-45f9-b8c7-09c727f44c91,
task_id=6b268651-4a7f-4366-9d01-829deaa16bfd (api:52)

Full vdsm.log.gz in between here:

https://drive.google.com/file/d/1IZIKDXyNN3bc6035C5Rc5WiI_UZ_g0ZV/view?usp=sharing



>
> NFS version?
>

The mount done from the host is this:

[root@ovirt01 /] # mount | grep NFS_DOMAIN
ovirt01:/NFS_DOMAIN on /rhev/data-center/mnt/ovirt01:_NFS__DOMAIN type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=192.168.1.211,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=192.168.1.211)

This is a test system, so I only have one host, and the NFS mount is
backed by a local XFS filesystem exported by the host itself, but I think this
should not be relevant for this particular test...

Another note: it seems that in events there is no message related to image
upload completion. I only see:

Dec 6, 2017 11:09:51 PM Add-Disk operation of 'c7lab1' was initiated by the
system.
Dec 6, 2017 11:09:49 PM Image Upload with disk c7lab1 was initiated by
admin@internal-authz.

and no message around 23:21 when the upload completes.
Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how to setup image-io-proxy after initially disabling it

2017-12-06 Thread Nir Soffer
On Thu, Dec 7, 2017 at 12:33 AM Gianluca Cecchi 
wrote:

> On Wed, Dec 6, 2017 at 5:23 PM, Paolo Margara 
> wrote:
>
>> I think that it could be useful.
>>
>>
>> Greetings,
>>
>> Paolo
>>
>
> +1
>
> BTW: I notice that the disk seems preallocated even if original qcow2 is
> thin... is this expected?
> This obviously also impacts the time to upload (20Gb virtual disk with
> actual 1.2Gb occupied needs the time equivalent for full 20Gb...)
>

We upload exactly the file you provided, there is no way we can upload 20G
from 1.2G file :-)

But maybe we created the file in the wrong format?

Can you share vdsm logs from the spm, showing how the disk was created?


>
> On source (created from virt-manager in Fedora 26):
>
> # qemu-img info c7lab1.qcow2
> image: c7lab1.qcow2
> file format: qcow2
> virtual size: 20G (21474836480 bytes)
> disk size: 1.2G
> cluster_size: 65536
> Format specific information:
> compat: 1.1
> lazy refcounts: true
> refcount bits: 16
> corrupt: false
>
> After uploading on NFS storage domain:
>

NFS version?


>
> [root@ovirt01 251063f6-5570-4bdc-b28f-21e82aa5e185]# qemu-img info
> 77aacfb3-9e67-4ad2-96f6-242b5ba4d9e0
> image: 77aacfb3-9e67-4ad2-96f6-242b5ba4d9e0
> file format: qcow2
> virtual size: 20G (21474836480 bytes)
> disk size: 20G
> cluster_size: 65536
> Format specific information:
> compat: 1.1
> lazy refcounts: true
> refcount bits: 16
> corrupt: false
> [root@ovirt01 251063f6-5570-4bdc-b28f-21e82aa5e185]#
>

> Thanks,
> Gianluca
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how to setup image-io-proxy after initially disabling it

2017-12-06 Thread Gianluca Cecchi
On Wed, Dec 6, 2017 at 5:23 PM, Paolo Margara 
wrote:

> I think that it could be useful.
>
>
> Greetings,
>
> Paolo
>

+1

BTW: I notice that the disk seems preallocated even if the original qcow2 is
thin... is this expected?
This obviously also impacts the upload time (a 20GB virtual disk with
only 1.2GB actually occupied needs the time equivalent of the full 20GB...)

On source (created from virt-manager in Fedora 26):

# qemu-img info c7lab1.qcow2
image: c7lab1.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 1.2G
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: true
refcount bits: 16
corrupt: false

After uploading on NFS storage domain:

[root@ovirt01 251063f6-5570-4bdc-b28f-21e82aa5e185]# qemu-img info
77aacfb3-9e67-4ad2-96f6-242b5ba4d9e0
image: 77aacfb3-9e67-4ad2-96f6-242b5ba4d9e0
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 20G
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: true
refcount bits: 16
corrupt: false
[root@ovirt01 251063f6-5570-4bdc-b28f-21e82aa5e185]#

Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Scheduling daily Snapshot

2017-12-06 Thread Maor Lipchuk
On Wed, Dec 6, 2017 at 6:01 PM, Jason Lelievre 
wrote:

> Hello,
>
> What is the best way to set up a daily live snapshot for all VM, and have
> the possibility to recover, for example, a specific VM to a specific day?
>
> I use a Hyperconverged Infrastructure with 3 nodes, gluster storage.
>
> Thank you,
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
One idea is to use crontab to run a daily script which uses the
engine-sdk to list all VMs and create a snapshot for each one, for example:
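
A minimal sketch with the Python SDK (ovirtsdk4); the engine URL, credentials
and CA file path below are placeholders for your environment, and there is no
error handling or cleanup of old snapshots:

#!/usr/bin/env python
# Daily-snapshot sketch using ovirt-engine-sdk4 (placeholder URL/credentials).
import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)
vms_service = connection.system_service().vms_service()
for vm in vms_service.list():
    snapshots_service = vms_service.vm_service(vm.id).snapshots_service()
    # Request a snapshot without saving the memory state.
    snapshots_service.add(
        types.Snapshot(
            description='daily-%s' % time.strftime('%Y-%m-%d'),
            persist_memorystate=False,
        )
    )
connection.close()

Dropping such a script into /etc/cron.daily/ (or adding a crontab entry for it)
would run it once a day; pruning old snapshots would still need to be handled
separately.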
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how to setup image-io-proxy after initially disabling it

2017-12-06 Thread Paolo Margara
I think that it could be useful.


Greetings,

    Paolo


On 06/12/2017 13:22, Nir Soffer wrote:
> Thanks for sharing the solutions.
>
> Maybe we need to mention this in the documentation?
>
> On Wed, Dec 6, 2017 at 1:27 PM Gianluca Cecchi
> > wrote:
>
> On Wed, Dec 6, 2017 at 9:03 AM, Paolo Margara
> > wrote:
>
> Hi Gianluca,
>
> if you execute "engine-config --get ImageProxyAddress" the
> value of that attribute is your engine's fqdn?
>
> I had a similar issue in the past with my setup and in my case
> the problem was the wrong value of the attribute
> ImageProxyAddress, I hope this could be useful also for you.
>
> Greetings,
>
>     Paolo
>
>
>
> Oh, yes... I remember in the past something like this I forgot ...
>
> Indeed
> [root@ovirt ovirt-engine]# engine-config --get ImageProxyAddress
> ImageProxyAddress: localhost:54323 version: general
> [root@ovirt ovirt-engine]# 
>
> So after executing:
>
> engine-config -s ImageProxyAddress=ovirt:54323
>
> where "ovirt" is the hostname of my engine, engine service restart
> and ovirt-imageio-proxy service restart I was able to upload an image
> Thanks!
> Gianluca
>
> ___
> Users mailing list
> Users@ovirt.org 
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Scheduling daily Snapshot

2017-12-06 Thread Jason Lelievre
Hello,

What is the best way to set up a daily live snapshot for all VM, and have
the possibility to recover, for example, a specific VM to a specific day?

I use a Hyperconverged Infrastructure with 3 nodes, gluster storage.

Thank you,
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] critical production issue for a vm

2017-12-06 Thread Maor Lipchuk
On Wed, Dec 6, 2017 at 12:30 PM, Nicolas Ecarnot 
wrote:

> On 06/12/2017 11:21, Nathanaël Blanchet wrote:
>
>> Hi all,
>>
>> I'm about to lose one very important vm. I shut down this vm for
>> maintenance and then I moved the four disks to a new created lun. This vm
>> has 2 snapshots.
>>
>> After successful move, the vm refuses to start with this message:
>>
>> Bad volume specification {u'index': 0, u'domainID':
>> u'961ea94a-aced-4dd0-a9f0-266ce1810177', 'reqsize': '0', u'format':
>> u'cow', u'bootOrder': u'1', u'discard': False, u'volumeID':
>> u'a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b', 'apparentsize': '2147483648',
>> u'imageID': u'4a95614e-bf1d-407c-aa72-2df414abcb7a',
>> u'specParams': {}, u'readonly': u'false', u'iface': u'virtio', u'optional':
>> u'false', u'deviceId': u'4a95614e-bf1d-407c-aa72-2df414abcb7a',
>> 'truesize': '2147483648', u'poolID':
>> u'48ca3019-9dbf-4ef3-98e9-08105d396350', u'device': u'disk', u'shared':
>> u'false', u'propagateErrors': u'off', u'type': u'disk'}.
>>
>> I tried to merge the snaphots, export , clone from snapshot, copy disks,
>> or deactivate disks and every action fails when it is about disk.
>>
>> I began to dd lv group to get a new vm intended to a standalone
>> libvirt/kvm, the vm quite boots up but it is an outdated version before the
>> first snapshot. There is a lot of disks when doing a "lvs | grep 961ea94a"
>> supposed to be disks snapshots. Which of them must I choose to get the last
>> vm before shutting down? I'm not used to deal snapshot with virsh/libvirt,
>> so some help will be much appreciated.
>>
>
The disks which you want to copy should contain the entire volume chain.
Based on the log you mentioned, It looks like this image is problematic:

  storage id: '961ea94a-aced-4dd0-a9f0-266ce1810177',
  imageID': u'4a95614e-bf1d-407c-aa72-2df414abcb7a
  volumeID': u'a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b'

What if you try to deactivate this image and try to run the VM, will it run?




>
>> Is there some unknown command to recover this vm into ovirt?
>>
>> Thank you in advance.
>>
>>
>>




>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
> Beside specific oVirt answers, did you try to get informations about the
> snapshot tree with qemu-img info --backing-chain on the adequate /dev/...
> logical volume?
> As you know how to dd from LVs, you could extract every needed snapshots
> files and rebuild your VM outside of oVirt.
> Then take time to re-import it later and safely.
>
> --
> Nicolas ECARNOT
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How do I connect to my VM from another PC?

2017-12-06 Thread Wesley Stewart
Did you add the NIC after you did the OS install?  Most installers are
pretty good about detecting and activating NICs when you do the install.
But if you add a NIC afterwards sometimes you have to activate it manually.

What OS is your guest? Version?

If you run "sudo nmtui" does that get you to the text based network manager?

On Dec 6, 2017 12:08 AM, "José Manuel Noguerol"  wrote:

> Hi all.
>
> After a month, I have installed my first VM. And it works almost perfect…
>
>
> I added a vnic but there is no ethernet interface in the VM…
>
>
> I got one more question… Supposing that my VM is completely installed… How
> do I access to it from a different PC. I mean, a PC apart from the one
> which has the ovirt engine. I got another host in the hosts list but I
> can’t access to the portal.
>
> Thanks for your time!
>
> Regards.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Logical network setup with neutron

2017-12-06 Thread Marcin Mirecki
Hi Lakshmi,

Yes, only the v2 keystone api is supported at the moment.

The "Physical network" and "Interface mappings" are related.
"Physical network" defines the physical (external) network,
while "Interface mappings" maps this network to a specific interface
on the host.

"Physical network" is mapped to the "provider:physical_network"
parameter in a neutron network.
"Interface mappings" is mapped to the "physical_interface_mappings"
parameter in the neutron agent on the host.
Please look at the following link for a better explanation of how to connect
neutron to an interface:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_openstack_platform/7/html/networking_guide/sec-connect-instance
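
A rough sketch of how the two settings line up, reusing the "red" label from the
quoted message below; the interface name, network name and config path are
placeholders and depend on which neutron agent is in use (this example assumes
the linuxbridge agent):

# on the host, e.g. /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = red:eth0

# when creating the network, the same label goes into provider:physical_network
neutron net-create red-net --provider:network_type flat --provider:physical_network red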



On Mon, Nov 27, 2017 at 3:02 PM, Lakshmi Narasimhan Sundararajan <
lakshm...@msystechnologies.com> wrote:

> Hi Team,
> I am looking at integrating openstack neutron with oVirt.
> Reading the docs so far, and through my setup experiments, I can see that
> oVirt and neutron do seem to understand each other.
>
> But I need some helpful pointers to help me understand a few items
> during configuration.
>
> 1) During External Provider registration,
>
> a) although openstack keystone is currently supporting v3 api
> endpoints, only configuring v2 works. I see an exception otherwise.I
> have a feeling only v2 auth is supported with oVirt.
>
> b) Interface mappings.
> This I believe is a way for logical networks to switch/route traffic
> back to physical networks. This is of the form label:interface. Where
> label is placed on each Hosts network setting to point to the right
> physical interface.
>
> I did map label "red" when I setup Host networks to a physical Nic.
> And used "red:br-red, green:br-green" here, wherein my intention is to
> create a bridge br-red on each Host for this logical network and
> switch/route packets over the "red" label mapped physical nic on each
> host. And every vm attached to "red" logical network shall have a vnic
> placed on "br-red" Is my understanding correct?
>
> 2) Now I finally create a logical network using external provider
> "openstack neutron". Herein "Physical Network" parameter that I
> totally do not understand.
> If the registration were to have many interface mappings, is this a
> way of pinning to the right interface?
>
> I cannot choose, red, red:br-red... I can only leave it empty,
>
> So what is the IP address of the physical address argument part of
> logical network creation?
>
> "Optionally select the Create on external provider check box. Select
> the External Provider from the drop-down list and provide the IP
> address of the Physical Network". What this field means?
>
> I would appreciate some clarity and helpful pointers here.
>
> Best regards
>
> --
>
>
> DISCLAIMER
>
> The information in this e-mail is confidential and may be subject to legal
> privilege. It is intended solely for the addressee. Access to this e-mail
> by anyone else is unauthorized. If you have received this communication in
> error, please address with the subject heading "Received in error," send to
> i...@msystechnologies.com,  then delete the e-mail and destroy any copies of
> it. If you are not the intended recipient, any disclosure, copying,
> distribution or any action taken or omitted to be taken in reliance on it,
> is prohibited and may be unlawful. The views, opinions, conclusions and
> other information expressed in this electronic mail and any attachments are
> not given or endorsed by the company unless otherwise indicated by an
> authorized representative independent of this message.
> MSys cannot guarantee that e-mail communications are secure or error-free,
> as information could be intercepted, corrupted, amended, lost, destroyed,
> arrive late or incomplete, or contain viruses, though all reasonable
> precautions have been taken to ensure no viruses are present in this
> e-mail.
> As our company cannot accept responsibility for any loss or damage arising
> from the use of this e-mail or attachments we recommend that you subject
> these to your virus checking procedures prior to use
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] bonding mode-alb

2017-12-06 Thread Demeter Tibor
Dear members, 

I would like to use two switches to make a high-availability network connection
for my NFS storage.
Unfortunately, these switches do not support 802.3ad LACP (really, I can't
stack them), but I've read about the mode-alb and mode-tlb bonding modes.
I know these modes are available in oVirt, but how do they work? And how
safe are they? Are they for HA or for load balancing?

I've read some forums where these modes are not recommended for use in oVirt.
What is the truth?
I would like to use this only for storage traffic; it will be separated from other
network traffic. I have two 10GbE switches and two 10GbE ports in my nodes.
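For reference, balance-tlb is bonding mode 5 and balance-alb is mode 6. On a
plain CentOS 7 host such a bond is defined purely by the bonding options; a
minimal sketch (device names are placeholders, and under oVirt the bond would
normally be created through Setup Host Networks rather than by editing files
directly):

# /etc/sysconfig/network-scripts/ifcfg-bond0 (sketch only)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=balance-alb miimon=100"
BOOTPROTO=none
ONBOOT=yes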

Thanks in advance, 

R 

Tibor 






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VDSM multipath.conf - prevent automatic management of local devices

2017-12-06 Thread Nir Soffer
On Tue, Nov 28, 2017 at 12:25 AM Ben Bradley  wrote:

> On 23/11/17 06:46, Maton, Brett wrote:
> > Might not be quite what you're after but adding
> >
> > # RHEV PRIVATE
> >
> > To /etc/multipath.conf will stop vdsm from changing the file.
>
> Hi there. Thanks for the reply.
> Yes I am aware of that and it seems that's what I will have to do.
> I have no problem with VDSM managing the file, I just wish it didn't
> automatically load local storage devices into multipathd.
>

We don't load anything, this is multipathd default behavior.

We cannot blacklist local devices since we don't know which devices
are local; this is something the administrator of the machine should handle.

In RHEL/CentOS 7.5 we will have a good way to blacklist most local devices,
see https://bugzilla.redhat.com/show_bug.cgi?id=1456955#c0

Once this is available, your multipath.conf will be upgraded to use the
new feature when you upgrade vdsm.


>
> I'm still not clear on the purpose of this automatic management though.
>  From what I can tell there is no difference to hosts/clusters made
> through this automatic management - i.e. you still have to add storage
> domains manually in oVirt.
>

The purpose of this configuration is to allow Vdsm to manage shared
storage - we require certain configuration.

In 4.2 we documented every setting in multipath.conf, please see
https://github.com/oVirt/vdsm/blob/913c2e202de718ec828b240642e37e059a86dac9/lib/vdsm/tool/configurators/multipath.py#L66
to understand why we require each value.

If the default multipath.conf does not work for you, you can make the file
private; then you are responsible for using any new features provided
by multipath, and we will never touch the file.

You can also file a bug if you think the default configuration can be
improved to support some use case.

Nir
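
As an illustration of the "private file" route mentioned above, a minimal
sketch: the first line is the marker quoted earlier in the thread that stops
vdsm from rewriting /etc/multipath.conf, and the blacklisted device below is
only a placeholder for a local disk (wwid entries can be used instead of
devnode patterns), appended to the existing vdsm-generated settings:

# RHEV PRIVATE
blacklist {
    devnode "^sda$"
}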


>
> Could anyone give any info on the purpose of this auto-management of
> local storage devices into multipathd in VDSM?
> Then I will be able to make an informed decision as to the benefit of
> letting it continue.
>
> Thanks, Ben
>
> >
> > On 22 November 2017 at 22:42, Ben Bradley  > > wrote:
> >
> > Hi All
> >
> > I have been running ovirt in a lab environment on CentOS 7 for
> > several months but have only just got around to really testing
> things.
> > I understand that VDSM manages multipath.conf and I understand that
> > I can make changes to that file and set it to private to prevent
> > VDSM making further changes.
> >
> > I don't mind VDSM managing the file but is it possible to set to
> > prevent local devices being automatically added to multipathd?
> >
> > Many times I have had to flush local devices from multipath when
> > they are added/removed or re-partitioned or the system is rebooted.
> > It doesn't even look like oVirt does anything with these devices
> > once they are setup in multipathd.
> >
> > I'm assuming it's the VDSM additions to multipath that are causing
> > this. Can anyone else confirm this?
> >
> > Is there a way to prevent new or local devices being added
> > automatically?
> >
> > Regards
> > Ben
> > ___
> > Users mailing list
> > Users@ovirt.org 
> > http://lists.ovirt.org/mailman/listinfo/users
> > 
> >
> >
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how to setup image-io-proxy after initially disabling it

2017-12-06 Thread Nir Soffer
Thanks for sharing the solutions.

Maybe we need to mention this in the documentation?

On Wed, Dec 6, 2017 at 1:27 PM Gianluca Cecchi 
wrote:

> On Wed, Dec 6, 2017 at 9:03 AM, Paolo Margara 
> wrote:
>
>> Hi Gianluca,
>>
>> if you execute "engine-config --get ImageProxyAddress" the value of that
>> attribute is your engine's fqdn?
>>
>> I had a similar issue in the past with my setup and in my case the
>> problem was the wrong value of the attribute ImageProxyAddress, I hope this
>> could be useful also for you.
>>
>> Greetings,
>>
>> Paolo
>>
>>
>
> Oh, yes... I remember in the past something like this I forgot ...
>
> Indeed
> [root@ovirt ovirt-engine]# engine-config --get ImageProxyAddress
> ImageProxyAddress: localhost:54323 version: general
> [root@ovirt ovirt-engine]#
>
> So after executing:
>
> engine-config -s ImageProxyAddress=ovirt:54323
>
> where "ovirt" is the hostname of my engine, engine service restart and
> ovirt-imageio-proxy service restart I was able to upload an image
> Thanks!
> Gianluca
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how to setup image-io-proxy after initially disabling it

2017-12-06 Thread Gianluca Cecchi
On Wed, Dec 6, 2017 at 9:03 AM, Paolo Margara 
wrote:

> Hi Gianluca,
>
> if you execute "engine-config --get ImageProxyAddress" the value of that
> attribute is your engine's fqdn?
>
> I had a similar issue in the past with my setup and in my case the problem
> was the wrong value of the attribute ImageProxyAddress, I hope this could
> be useful also for you.
>
> Greetings,
>
> Paolo
>
>

Oh, yes... I remember in the past something like this I forgot ...

Indeed
[root@ovirt ovirt-engine]# engine-config --get ImageProxyAddress
ImageProxyAddress: localhost:54323 version: general
[root@ovirt ovirt-engine]#

So after executing:

engine-config -s ImageProxyAddress=ovirt:54323

where "ovirt" is the hostname of my engine, engine service restart and
ovirt-imageio-proxy service restart I was able to upload an image
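For reference, the two restarts were simply (assuming the standard systemd
unit names, as shown elsewhere in this thread):

systemctl restart ovirt-engine
systemctl restart ovirt-imageio-proxy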
Thanks!
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed deply of ovirt-engine using ovirt-node-ng-installer-ovirt-4.2-pre-2017120512.iso

2017-12-06 Thread Roberto Nunin
Yes, both times on Cockpit.

2017-12-06 11:43 GMT+01:00 Simone Tiraboschi :

>
>
> On Wed, Dec 6, 2017 at 11:38 AM, Roberto Nunin  wrote:
>
>> Ciao Simone
>> thanks for really quick answer.
>>
>> 2017-12-06 11:05 GMT+01:00 Simone Tiraboschi :
>>
>>> Ciao Roberto,
>>>
>>> On Wed, Dec 6, 2017 at 10:02 AM, Roberto Nunin 
>>> wrote:
>>>
 I'm having trouble to deploy one three host hyperconverged lab using
 iso-image named above.

>>>
>>> Please note that ovirt-node-ng-installer-ovirt-4.2-pre-2017120512.iso
>>> is still a pre-release software.
>>> Your contribute testing it is really appreciated!
>>>
>>
>> ​It's a pleasure !.​
>>
>>
>>>
>>>
>>>

 My test environment is based on HPE BL680cG7 blade servers.
 These servers has 6 physical 10GB network interfaces (flexNIC), each
 one with four profiles (ethernet,FCoE,iSCSI,etc).

 I choose one of these six phys interfaces (enp5s0f0) and assigned it a
 static IPv4 address, for each node.

 After node reboot, interface ONBOOT param is still set to no.
 Changed via iLO interface to yes and restarted network. Fine.

 After gluster setup, with gdeploy script under Cockpit interface,
 avoiding errors coming from :
 /usr/share/gdepply/scripts/blacklist_all_disks.sh, start hosted-engine
 deploy.

 With the new version, I'm having an error never seen before:

 The Engine VM (10.114.60.117) and this host (10.114.60.134/24) will
 not be in the same IP subnet. Static routing configuration are not
 supported on automatic VM configuration.
  Failed to execute stage 'Environment customization': The Engine VM
 (10.114.60.117) and this host (10.114.60.134/24) will not be in the
 same IP subnet. Static routing configuration are not supported on automatic
 VM configuration.
  Hosted Engine deployment failed.

 There's no input field for HE subnet mask. Anyway in our class-c ovirt
 management network these ARE in the same subnet.
 How to recover from this ? I cannot add /24 CIDR in HE Static IP
 address field, it isn't allowed.

>>>
>>> 10.114.60.117 and 10.114.60.134/24 are in the same IPv4 /24 subnet so
>>> it should't fail.
>>> The issue here seams different:
>>>
>>> From hosted-engine-setup log I see that you passed the VM IP address via
>>> answerfile:
>>> 2017-12-06 09:14:30,195+0100 DEBUG otopi.context
>>> context.dumpEnvironment:831 ENV OVEHOSTED_VM/cloudinitVMStatic
>>> CIDR=str:'10.114.60.117'
>>>
>>> while the right syntax should be:
>>> OVEHOSTED_VM/cloudinitVMStaticCIDR=str:10.114.60.117/24
>>>
>>> Did you wrote the answerfile by yourself or did you entered the IP
>>> address in the cockpit wizard? if so we probably have a regression there.
>>>
>>
>> ​I've inserted it while providing data for setup, using Cockpit
>> interface. Tried to add CIDR (/24), but it isn't allowed from Cockpit web
>> interface.​ No manual update of answer file.
>>
>>>
>>>
>>>

 Moreover, VM FQDN is asked two times during the deploy process. It's
 correct ?

>>>
>>> No, I don't think so but I don't see it from you logs.
>>> Could you please explain it?
>>>
>>
>> ​Yes: first time is requested during initial setup of HE VM deploy
>>
>> The second one, instead, is asked (at least to me ) in this step, after
>> initial setup:
>>
>
> So both on cockpit side?
>
>
>>
>> [image: embedded image 1]
>>
>>>
>>>

 Some additional, general questions:
 NetworkManager: must be disabled deploying HCI solution ? In my
 attempt, wasn't disabled.

>>>
>> ​Simone, could you confirm or not that NM must stay in place while
>> deploying ? This qustion was struggling since 3.6.. what is the "best
>> practice" ?
>> All of mine RHV environments (3.6 - 4.0.1 - 4.1.6 ) have it disabled, but
>> I wasn't able to find any mandatory rule.
>>
>
> In early 3.6 you had to disable it but now you can safely keep it on.
>
>
>
>> ​
>>
>>> There's some document to follow to perform a correct deploy ?
 Is this one still "valid" ? : https://ovirt.org/blog/2017/
 04/up-and-running-with-ovirt-4.1-and-gluster-storage/

 Attached hosted-engine-setup log.
 TIA



 --
 Roberto
 110-006-970​

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


>>>
>>
>>
>> --
>> Roberto Nunin
>>
>>
>>
>>
>


-- 
Roberto Nunin
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed deply of ovirt-engine using ovirt-node-ng-installer-ovirt-4.2-pre-2017120512.iso

2017-12-06 Thread Simone Tiraboschi
On Wed, Dec 6, 2017 at 11:38 AM, Roberto Nunin  wrote:

> Ciao Simone
> thanks for really quick answer.
>
> 2017-12-06 11:05 GMT+01:00 Simone Tiraboschi :
>
>> Ciao Roberto,
>>
>> On Wed, Dec 6, 2017 at 10:02 AM, Roberto Nunin 
>> wrote:
>>
>>> I'm having trouble to deploy one three host hyperconverged lab using
>>> iso-image named above.
>>>
>>
>> Please note that ovirt-node-ng-installer-ovirt-4.2-pre-2017120512.iso
>> is still a pre-release software.
>> Your contribute testing it is really appreciated!
>>
>
> ​It's a pleasure !.​
>
>
>>
>>
>>
>>>
>>> My test environment is based on HPE BL680cG7 blade servers.
>>> These servers has 6 physical 10GB network interfaces (flexNIC), each one
>>> with four profiles (ethernet,FCoE,iSCSI,etc).
>>>
>>> I choose one of these six phys interfaces (enp5s0f0) and assigned it a
>>> static IPv4 address, for each node.
>>>
>>> After node reboot, interface ONBOOT param is still set to no.
>>> Changed via iLO interface to yes and restarted network. Fine.
>>>
>>> After gluster setup, with gdeploy script under Cockpit interface,
>>> avoiding errors coming from :
>>> /usr/share/gdepply/scripts/blacklist_all_disks.sh, start hosted-engine
>>> deploy.
>>>
>>> With the new version, I'm having an error never seen before:
>>>
>>> The Engine VM (10.114.60.117) and this host (10.114.60.134/24) will not
>>> be in the same IP subnet. Static routing configuration are not supported on
>>> automatic VM configuration.
>>>  Failed to execute stage 'Environment customization': The Engine VM
>>> (10.114.60.117) and this host (10.114.60.134/24) will not be in the
>>> same IP subnet. Static routing configuration are not supported on automatic
>>> VM configuration.
>>>  Hosted Engine deployment failed.
>>>
>>> There's no input field for HE subnet mask. Anyway in our class-c ovirt
>>> management network these ARE in the same subnet.
>>> How to recover from this ? I cannot add /24 CIDR in HE Static IP address
>>> field, it isn't allowed.
>>>
>>
>> 10.114.60.117 and 10.114.60.134/24 are in the same IPv4 /24 subnet so it
>> should't fail.
>> The issue here seams different:
>>
>> From hosted-engine-setup log I see that you passed the VM IP address via
>> answerfile:
>> 2017-12-06 09:14:30,195+0100 DEBUG otopi.context
>> context.dumpEnvironment:831 ENV OVEHOSTED_VM/cloudinitVMStatic
>> CIDR=str:'10.114.60.117'
>>
>> while the right syntax should be:
>> OVEHOSTED_VM/cloudinitVMStaticCIDR=str:10.114.60.117/24
>>
>> Did you wrote the answerfile by yourself or did you entered the IP
>> address in the cockpit wizard? if so we probably have a regression there.
>>
>
> ​I've inserted it while providing data for setup, using Cockpit interface.
> Tried to add CIDR (/24), but it isn't allowed from Cockpit web interface.​
> No manual update of answer file.
>
>>
>>
>>
>>>
>>> Moreover, VM FQDN is asked two times during the deploy process. It's
>>> correct ?
>>>
>>
>> No, I don't think so but I don't see it from you logs.
>> Could you please explain it?
>>
>
> ​Yes: first time is requested during initial setup of HE VM deploy
>
> The second one, instead, is asked (at least to me ) in this step, after
> initial setup:
>

So both on cockpit side?


>
> [image: embedded image 1]
>
>>
>>
>>>
>>> Some additional, general questions:
>>> NetworkManager: must be disabled deploying HCI solution ? In my attempt,
>>> wasn't disabled.
>>>
>>
> Simone, could you confirm whether NM must stay in place while
> deploying? This question has been around since 3.6... what is the "best
> practice"?
> All of my RHV environments (3.6 - 4.0.1 - 4.1.6) have it disabled, but
> I wasn't able to find any mandatory rule.
>

In early 3.6 you had to disable it but now you can safely keep it on.



> ​
>
>> There's some document to follow to perform a correct deploy ?
>>> Is this one still "valid" ? : https://ovirt.org/blog/2017/
>>> 04/up-and-running-with-ovirt-4.1-and-gluster-storage/
>>>
>>> Attached hosted-engine-setup log.
>>> TIA
>>>
>>>
>>>
>>> --
>>> Roberto
>>> 110-006-970​
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>
>
> --
> Roberto Nunin
>
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] critical production issue for a vm

2017-12-06 Thread Nicolas Ecarnot

On 06/12/2017 11:21, Nathanaël Blanchet wrote:

Hi all,

I'm about to lose one very important vm. I shut down this vm for 
maintenance and then I moved the four disks to a new created lun. This 
vm has 2 snapshots.


After successful move, the vm refuses to start with this message:

Bad volume specification {u'index': 0, u'domainID': 
u'961ea94a-aced-4dd0-a9f0-266ce1810177', 'reqsize': '0', u'format': 
u'cow', u'bootOrder': u'1', u'discard': False, u'volumeID': 
u'a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b', 'apparentsize': '2147483648', 
u'imageID': u'4a95614e-bf1d-407c-aa72-2df414abcb7a', u'specParams': {}, 
u'readonly': u'false', u'iface': u'virtio', u'optional': u'false', 
u'deviceId': u'4a95614e-bf1d-407c-aa72-2df414abcb7a', 'truesize': 
'2147483648', u'poolID': u'48ca3019-9dbf-4ef3-98e9-08105d396350', 
u'device': u'disk', u'shared': u'false', u'propagateErrors': u'off', 
u'type': u'disk'}.


I tried to merge the snaphots, export , clone from snapshot, copy disks, 
or deactivate disks and every action fails when it is about disk.


I began to dd lv group to get a new vm intended to a standalone 
libvirt/kvm, the vm quite boots up but it is an outdated version before 
the first snapshot. There is a lot of disks when doing a "lvs | grep 
961ea94a" supposed to be disks snapshots. Which of them must I choose to 
get the last vm before shutting down? I'm not used to deal snapshot with 
virsh/libvirt, so some help will be much appreciated.


Is there some unknown command to recover this vm into ovirt?

Thank you in advance.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



Besides specific oVirt answers, did you try to get information about the
snapshot tree with qemu-img info --backing-chain on the adequate
/dev/... logical volume (for example, as sketched below)?
As you know how to dd from LVs, you could extract every needed snapshot
file and rebuild your VM outside of oVirt.

Then you can take time to re-import it later, safely.
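
For example, a sketch of what that could look like on the host: the VG and LV
names below are simply the storage domain and volume UUIDs from the error
message above, the LV has to be activated first, and qcow2 backing files may
still need to be resolved relative to the image directory:

lvchange -ay 961ea94a-aced-4dd0-a9f0-266ce1810177/a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b
qemu-img info --backing-chain /dev/961ea94a-aced-4dd0-a9f0-266ce1810177/a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b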

--
Nicolas ECARNOT
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted engine and about more than 1000 vms

2017-12-06 Thread Yaniv Kaul
On Fri, Dec 1, 2017 at 3:42 AM, 董青龙  wrote:

> Hi, all
> We want to deploy an environment of hosted engine which will
> manage more than 1000 vms. How many vcpus and memory should we give to
> hosted engine? And are there any other things should we pay attention to?
> Hope someone can help. Thanks!
>

More than memory and CPU, you should be concerned with disk performance -
specifically for the database. The slowest part of the system would be
queries to the database, those that are not cached or are fairly large and
therefore require some temp. storage space.
Ensuring it's on high-performance media (SSD/NVMe) is probably very
important.

Other than that, I think a 32GB RAM and 8 vCPUs should suffice. But you can
increase later if needed.
Y.


>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] critical production issue for a vm

2017-12-06 Thread Nathanaël Blanchet

Hi all,

I'm about to lose one very important VM. I shut down this VM for 
maintenance and then moved its four disks to a newly created LUN. This 
VM has 2 snapshots.


After successful move, the vm refuses to start with this message:

Bad volume specification {u'index': 0, u'domainID': 
u'961ea94a-aced-4dd0-a9f0-266ce1810177', 'reqsize': '0', u'format': 
u'cow', u'bootOrder': u'1', u'discard': False, u'volumeID': 
u'a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b', 'apparentsize': '2147483648', 
u'imageID': u'4a95614e-bf1d-407c-aa72-2df414abcb7a', u'specParams': {}, 
u'readonly': u'false', u'iface': u'virtio', u'optional': u'false', 
u'deviceId': u'4a95614e-bf1d-407c-aa72-2df414abcb7a', 'truesize': 
'2147483648', u'poolID': u'48ca3019-9dbf-4ef3-98e9-08105d396350', 
u'device': u'disk', u'shared': u'false', u'propagateErrors': u'off', 
u'type': u'disk'}.


I tried to merge the snapshots, export, clone from snapshot, copy disks, 
or deactivate disks, and every action fails as soon as a disk is involved.


I began to dd the LV group to get a new VM intended for a standalone 
libvirt/kvm host; the VM more or less boots up, but it is an outdated version 
from before the first snapshot. There are a lot of LVs when doing "lvs | grep 
961ea94a", supposedly the disk snapshots. Which of them must I choose to 
get the last state of the VM before shutdown? I'm not used to dealing with 
snapshots via virsh/libvirt, so some help would be much appreciated.


Is there some unknown command to recover this vm into ovirt?

Thank you in advance.

--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

 Event ID: -1, Message: VM hortensia was started by sblanc...@levant.abes.fr@abes.fr-authz (Host: aquilon).
2017-12-06 11:01:16,292+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-10) [] VM 'f337aa89-6e4e-4cc9-b78e-a5bd9ee946ec' was reported as Down on VDS 'b692c250-4f71-4569-801f-6bfd3b8f50b9'(aquilon)
2017-12-06 11:01:16,294+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-10) [] START, DestroyVDSCommand(HostName = aquilon, DestroyVmVDSCommandParameters:{runAsync='true', hostId='b692c250-4f71-4569-801f-6bfd3b8f50b9', vmId='f337aa89-6e4e-4cc9-b78e-a5bd9ee946ec', force='false', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 6ddce93f
2017-12-06 11:01:17,301+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-10) [] FINISH, DestroyVDSCommand, log id: 6ddce93f
2017-12-06 11:01:17,301+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-10) [] VM 'f337aa89-6e4e-4cc9-b78e-a5bd9ee946ec'(hortensia) moved from 'WaitForLaunch' --> 'Down'
2017-12-06 11:01:17,399+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-10) [] EVENT_ID: VM_DOWN_ERROR(119), Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: VM hortensia is down with error. Exit message: Bad volume specification {u'index': 0, u'domainID': u'961ea94a-aced-4dd0-a9f0-266ce1810177', 'reqsize': '0', u'format': u'cow', u'bootOrder': u'1', u'discard': False, u'volumeID': u'a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b', 'apparentsize': '2147483648', u'imageID': u'4a95614e-bf1d-407c-aa72-2df414abcb7a', u'specParams': {}, u'readonly': u'false', u'iface': u'virtio', u'optional': u'false', u'deviceId': u'4a95614e-bf1d-407c-aa72-2df414abcb7a', 'truesize': '2147483648', u'poolID': u'48ca3019-9dbf-4ef3-98e9-08105d396350', u'device': u'disk', u'shared': u'false', u'propagateErrors': u'off', u'type': u'disk'}.
2017-12-06 11:01:17,400+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-10) [] add VM 'f337aa89-6e4e-4cc9-b78e-a5bd9ee946ec'(hortensia) to rerun treatment
2017-12-06 11:01:17,404+01 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-10) [] Rerun VM 'f337aa89-6e4e-4cc9-b78e-a5bd9ee946ec'. Called from VDS 'aquilon'
2017-12-06 11:01:17,466+01 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-39) [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Correlation ID: d6fc6f3b-b3b2-466d-8fcd-c145d3cf645a, Job ID: 5674a186-14c2-46f3-9008-99fd9d3fd979, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Failed to run VM hortensia on Host aquilon.
2017-12-06 11:01:17,474+01 INFO  [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-7-thread-39) [] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f337aa89-6e4e-4cc9-b78e-a5bd9ee946ec=VM]', sharedLocks=''}'
2017-12-06 11:01:17,525+01 INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (org.ovirt.thread.pool-7-thread-39) [] START, IsVmDuringInitiatingVDSCommand( 

Re: [ovirt-users] Fedora support (was: [ANN] oVirt 4.2.0 Second Beta Release is now available for testing)

2017-12-06 Thread Yedidyah Bar David
On Thu, Nov 30, 2017 at 11:13 PM, Blaster  wrote:
> Thank you.
>
> The mention of Fedora then should be removed from the release notes, maybe
> even stating that it's not recommended?

Now pushed:

https://github.com/oVirt/ovirt-site/pull/1396

>
>
> On 11/30/2017 4:21 AM, Yedidyah Bar David wrote:
>>
>> On Wed, Nov 29, 2017 at 7:29 PM, Blaster  wrote:
>>>
>>> Is Fedora not supported anymore?
>>>
>>> I've read the release notes for the 4.2r2 beta and 4.1.7, they mention
>>> specific versions of RHEL and CentOS, but only mention Fedora by name,
>>> with
>>> no specific version information.
>>
>> We currently have too many problems with fedora to call it even 'Technical
>> Preview', as was done in the past.
>>
>> You can still use the nightly snapshots, and most things work,
>> more-or-less,
>> with some issues having known workarounds. See e.g.:
>>
>>
>> https://bugzilla.redhat.com/showdependencytree.cgi?id=1460625&hide_resolved=1
>>
>> And also:
>>
>> http://lists.ovirt.org/pipermail/devel/2017-August/030990.html
>>
>> (not sure that one is still relevant for Fedora 27, didn't check
>> recently).
>>
>>> On 11/15/2017 9:17 AM, Sandro Bonazzola wrote
>>>
>>>
>>> This release is available now on x86_64 architecture for:
>>>
>>> * Red Hat Enterprise Linux 7.4 or later
>>>
>>> * CentOS Linux (or similar) 7.4 or later
>>>
>>>
>>> This release supports Hypervisor Hosts on x86_64 and ppc64le
>>> architectures
>>> for:
>>>
>>> * Red Hat Enterprise Linux 7.4 or later
>>>
>>> * CentOS Linux (or similar) 7.4 or later
>>>
>>> * oVirt Node 4.2 (available for x86_64 only)
>>>
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>
>>
>



-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed deply of ovirt-engine using ovirt-node-ng-installer-ovirt-4.2-pre-2017120512.iso

2017-12-06 Thread Simone Tiraboschi
Ciao Roberto,

On Wed, Dec 6, 2017 at 10:02 AM, Roberto Nunin  wrote:

> I'm having trouble to deploy one three host hyperconverged lab using
> iso-image named above.
>

Please note that ovirt-node-ng-installer-ovirt-4.2-pre-2017120512.iso is still
pre-release software.
Your contribution in testing it is really appreciated!


>
> My test environment is based on HPE BL680cG7 blade servers.
> These servers has 6 physical 10GB network interfaces (flexNIC), each one
> with four profiles (ethernet,FCoE,iSCSI,etc).
>
> I choose one of these six phys interfaces (enp5s0f0) and assigned it a
> static IPv4 address, for each node.
>
> After node reboot, interface ONBOOT param is still set to no.
> Changed via iLO interface to yes and restarted network. Fine.
>
> After gluster setup, with gdeploy script under Cockpit interface, avoiding
> errors coming from :
> /usr/share/gdepply/scripts/blacklist_all_disks.sh, start hosted-engine
> deploy.
>
> With the new version, I'm having an error never seen before:
>
> The Engine VM (10.114.60.117) and this host (10.114.60.134/24) will not
> be in the same IP subnet. Static routing configuration are not supported on
> automatic VM configuration.
>  Failed to execute stage 'Environment customization': The Engine VM
> (10.114.60.117) and this host (10.114.60.134/24) will not be in the same
> IP subnet. Static routing configuration are not supported on automatic VM
> configuration.
>  Hosted Engine deployment failed.
>
> There's no input field for HE subnet mask. Anyway in our class-c ovirt
> management network these ARE in the same subnet.
> How to recover from this ? I cannot add /24 CIDR in HE Static IP address
> field, it isn't allowed.
>

10.114.60.117 and 10.114.60.134/24 are in the same IPv4 /24 subnet, so it
shouldn't fail.
The issue here seems different:

From hosted-engine-setup log I see that you passed the VM IP address via
answerfile:
2017-12-06 09:14:30,195+0100 DEBUG otopi.context
context.dumpEnvironment:831 ENV OVEHOSTED_VM/cloudinitVMStaticCIDR=str:'10.114.60.117'

while the right syntax should be:
OVEHOSTED_VM/cloudinitVMStaticCIDR=str:10.114.60.117/24

Did you write the answer file yourself, or did you enter the IP address
in the cockpit wizard? If so, we probably have a regression there.
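
For reference, when deploying from the command line rather than from cockpit,
the value can be passed through an answer file given to
hosted-engine --deploy --config-append=<file>; a minimal sketch of such a file
(only the relevant line, values taken from the log above):

[environment:default]
OVEHOSTED_VM/cloudinitVMStaticCIDR=str:10.114.60.117/24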


>
> Moreover, VM FQDN is asked two times during the deploy process. It's
> correct ?
>

No, I don't think so, but I don't see it in your logs.
Could you please explain it?


>
> Some additional, general questions:
> NetworkManager: must be disabled deploying HCI solution ? In my attempt,
> wasn't disabled.
> There's some document to follow to perform a correct deploy ?
> Is this one still "valid" ? : https://ovirt.org/blog/2017/
> 04/up-and-running-with-ovirt-4.1-and-gluster-storage/
>
> Attached hosted-engine-setup log.
> TIA
>
>
>
> --
> Roberto
> 110-006-970​
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how to setup image-io-proxy after initially disabling it

2017-12-06 Thread Paolo Margara
Hi Gianluca,

if you execute "engine-config --get ImageProxyAddress" the value of that
attribute is your engine's fqdn?

I had a similar issue in the past with my setup and in my case the
problem was the wrong value of the attribute ImageProxyAddress, I hope
this could be useful also for you.

Greetings,

    Paolo

On 06/12/2017 07:22, Gianluca Cecchi wrote:
> On Tue, Dec 5, 2017 at 10:22 PM, Gianluca Cecchi
> > wrote:
>
> Hello,
> I'm on oVirt 4.1.7, the latest in 4.1 right now.
> Initially in engine-setup when prompted I set Image I/O Proxy to
> false.
>
>           Configure Image I/O Proxy               : False
>
> Now instead I would like to enable it, but if I run engine-setup I
> can't find a way to do it. I can only confirm the settings in a
> whole or exit the setup...
> How can I do?
>
> Currently I have these packages already installed on the system
>
> 
> ovirt-imageio-proxy-setup-1.0.0-0.201701151456.git89ae3b4.el7.centos.noarch
> ovirt-imageio-proxy-1.0.0-0.201701151456.git89ae3b4.el7.centos.noarch
> and
> [root@ovirt ~]# systemctl status ovirt-imageio-proxy
> ● ovirt-imageio-proxy.service - oVirt ImageIO Proxy
>    Loaded: loaded
> (/usr/lib/systemd/system/ovirt-imageio-proxy.service; disabled;
> vendor preset: disabled)
>    Active: inactive (dead)
> [root@ovirt ~]#
>
> Can I simply and manually enable/start the service through
> systemctl commands?
>
> The same question arises in case I had not / had
> enabled  VMConsole Proxy and/or WebSocket Proxy during install and
> in a second time I want to enable / diable them.
>
> Thanks,
> Gianluca
>
>
> After reading the help page for engine-setup and discovering that the
> option "--reconfigure-optional-components" has no effect in 4.1 for
> Image I/O Proxy, I used the workaround offered throughout this bugzilla:
> https://bugzilla.redhat.com/show_bug.cgi?id=1486753
> 
>
> Now it seems all ok, but the upload fails after going into pause just
> at the beginning with the message
>
> "
> Unable to upload image to disk de28015a-39e9-44e4-acb2-2d2e3b9cdc7f
> due to a network error. Make sure ovirt-imageio-proxy service is
> installed and configured, and ovirt-engine's certificate is registered
> as a valid CA in the browser. The certificate can be fetched from
> https:///ovirt-engine/services/pki-resource?resource=ca-certificate=X509-PEM-CA
> "
>
> I already fetched into firefox the certificate and selected all 3
> check boxes when configuring...
>
> Any other thing to check?
>
> On engine:
>
> [root@ovirt ~]# firewall-cmd --list-all
> public (active)
>   target: default
>   icmp-block-inversion: no
>   interfaces: eth0
>   sources:
>   services: ovirt-websocket-proxy ovirt-vmconsole-proxy ovirt-http
> dhcpv6-client ovirt-https ssh ovirt-postgres
> ovirt-fence-kdump-listener ovirt-imageio-proxy
>   ports:
>   protocols:
>   masquerade: no
>   forward-ports:
>   source-ports:
>   icmp-blocks:
>   rich rules:
>    
> [root@ovirt ~]# 
>
> [root@ovirt ~]# systemctl status ovirt-imageio-proxy.service -l
> ● ovirt-imageio-proxy.service - oVirt ImageIO Proxy
>    Loaded: loaded
> (/usr/lib/systemd/system/ovirt-imageio-proxy.service; enabled; vendor
> preset: disabled)
>    Active: active (running) since Wed 2017-12-06 00:07:48 CET; 7h ago
>  Main PID: 9402 (ovirt-imageio-p)
>    CGroup: /system.slice/ovirt-imageio-proxy.service
>    └─9402 /usr/bin/python /usr/bin/ovirt-imageio-proxy
>
> Dec 06 00:07:47 ovirt systemd[1]: Starting oVirt ImageIO Proxy...
> Dec 06 00:07:48 ovirt systemd[1]: Started oVirt ImageIO Proxy.
> [root@ovirt ~]#
>
> In  /var/log/ovirt-imageio-proxy/image-proxy.log
>
> (MainThread) INFO 2017-12-06 00:07:48,460 image_proxy:26:root:(main)
> Server started, successfully notified systemd
>
> In engine.log
>
> 2017-12-06 00:10:07,534+01 INFO 
> [org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater]
> (DefaultQuartzScheduler1) [cefb309e-1127-4811-b69b-a7d83c3f7df6]
> Updating image upload 208cc8c5-66ae-40aa-9748-04ee90c022a6 (image
> bad43962-dcc4-4f16-8b8e-dafc26573e2c) phase to Transferring (message:
> 'Initiating new upload')
> 2017-12-06 00:10:07,537+01 INFO 
> [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
> (DefaultQuartzScheduler1) [cefb309e-1127-4811-b69b-a7d83c3f7df6]
> Returning from proceedCommandExecution after starting transfer session
> for image transfer command '208cc8c5-66ae-40aa-9748-04ee90c022a6'
> 2017-12-06 00:10:10,506+01 INFO 
> [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand]
> (default task-1) [dae9a850-5db2-40be-92ca-a6fff0d54aac] Running
> command: TransferImageStatusCommand internal: false. Entities affected
> :  ID: aaa0----123456789aaa Type: SystemAction group
> CREATE_DISK with role type USER
>