Re: [ovirt-users] CBT question

2017-09-12 Thread Pavel Gashev
The dirty bitmap support is already working well enough in QEMU 2.6. It can be used via 
libvirt's qemu-monitor-command. The only issue is that dirty bitmaps do not 
survive VM restarts, snapshot creation/deletion, or VM live migration. 
However, this is not a big issue if you perform backups more often than these 
operations. Since backups are in the qcow2 format, they can be placed on the 
Export storage domain and restored directly from oVirt.

It would be relatively simple to implement a third-party backup solution if the disk 
locking mechanics were accessible via the oVirt API. Implementing this directly in 
oVirt would be even simpler, as a Live Export feature, since dirty bitmaps work 
much like snapshots and oVirt already has most of the required logic.
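
For reference, this is roughly how a bitmap can be driven through libvirt's QEMU
monitor passthrough - a minimal sketch assuming the libvirt-python bindings, a
domain named "myvm" and a drive node named "drive-virtio-disk0" (both are
placeholders and will differ per VM):

# Sketch: add and query a QEMU dirty bitmap via libvirt's QMP passthrough.
# Domain name and drive node name are placeholders for illustration only.
import json
import libvirt
import libvirt_qemu

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('myvm')

def qmp(command, **arguments):
    # Send a raw QMP command to the guest's monitor and return the parsed reply.
    payload = json.dumps({'execute': command, 'arguments': arguments})
    return json.loads(libvirt_qemu.qemuMonitorCommand(dom, payload, 0))

# Start tracking changed blocks on one drive.
qmp('block-dirty-bitmap-add', node='drive-virtio-disk0', name='bitmap0')

# Later, the bitmap state shows up in the block query output.
print(qmp('query-block'))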


From:  on behalf of Yaniv Kaul 
Date: Monday, 11 September 2017 at 17:14
To: Demeter Tibor 
Cc: users 
Subject: Re: [ovirt-users] CBT question



On Mon, Sep 11, 2017 at 4:17 PM, Demeter Tibor 
> wrote:
Dear Listmembers,

Does somebody know when the CBT (changed block tracking) feature will be 
available in oVirt/RHEV?

It has to be implemented first in QEMU - where it is still being worked on.
We plan to take advantage of it after it is ready in QEMU (and libvirt of 
course).

We are looking for a usable backup solution for our oVirt guests, but I've seen 
there are still some API limitations.

But nevertheless there are various backup solutions that can be used today, 
even if not as efficient as CBT.
Y.


Thanks in advance,
R

Tibor

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Disk image upload via CLI?

2017-09-12 Thread Matthias Leopold

Hi,

is there a way to upload disk images (not OVF files, not ISO files) to 
oVirt storage domains via CLI? I need to upload an 800GB file and this is 
not really comfortable via the browser. I looked at ovirt-shell and 
https://www.ovirt.org/develop/release-management/features/storage/image-upload/, 
but I didn't find an option in either of them.


thx
matthias

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hyperconverged question

2017-09-12 Thread Charles Kozler
Hey All -

So I haven't tested this yet, but what I do know is that I did set up the
backup-volfile-servers option when I added the data gluster volume; however, the
mount options in the mount -l output do not show it being used

n1:/data on /rhev/data-center/mnt/glusterSD/n1:_data type fuse.glusterfs
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

I will delete it and re-add it, but I think this might be part of the
problem. Perhaps Jim and I have the same issue because oVirt is not actually
passing the additional mount options from the web UI to the backend to
mount with said parameters?

Thoughts?

On Mon, Sep 4, 2017 at 10:51 AM, FERNANDO FREDIANI <
fernando.fredi...@upx.com> wrote:

> I had the very same impression. It doesn't look like it works then.
> So for a fully redundant setup where you can lose a complete host, you must have
> at least 3 nodes then?
>
> Fernando
>
> On 01/09/2017 12:53, Jim Kusznir wrote:
>
> Huh... OK, how do I convert the arbiter to a full replica, then?  I was
> misinformed when I created this setup.  I thought the arbiter held
> enough metadata that it could validate or repudiate any one replica (kinda
> like the parity drive for a RAID-4 array).  I was also under the impression
> that one replica + arbiter is enough to keep the array online and
> functional.
>
> --Jim
>
> On Fri, Sep 1, 2017 at 5:22 AM, Charles Kozler 
> wrote:
>
>> @ Jim - you have only two data volumes and lost quorum. Arbitrator only
>> stores metadata, no actual files. So yes, you were running in degraded mode
>> so some operations were hindered.
>>
>> @ Sahina - Yes, this actually worked fine for me once I did that.
>> However, the issue I am still facing, is when I go to create a new gluster
>> storage domain (replica 3, hyperconverged) and I tell it "Host to use" and
>> I select that host. If I fail that host, all VMs halt. I do not recall this
>> in 3.6 or early 4.0. This to me makes it seem like this is "pinning" a node
>> to a volume and vice versa like you could, for instance, for a singular
>> hyperconverged to ex: export a local disk via NFS and then mount it via
>> ovirt domain. But of course, this has its caveats. To that end, I am using
>> gluster replica 3, when configuring it I say "host to use: " node 1, then
>> in the connection details I give it node1:/data. I fail node1, all VMs
>> halt. Did I miss something?
>>
>> On Fri, Sep 1, 2017 at 2:13 AM, Sahina Bose  wrote:
>>
>>> To the OP question, when you set up a gluster storage domain, you need
>>> to specify backup-volfile-servers=server2:server3, where server2 and
>>> server3 also have bricks running. When server1 is down and the volume is
>>> mounted again, server2 or server3 are queried to get the gluster volfiles.
>>>
>>> @Jim, if this does not work, are you using 4.1.5 build with libgfapi
>>> access? If not, please provide the vdsm and gluster mount logs to analyse
>>>
>>> If VMs go to paused state - this could mean the storage is not
>>> available. You can check "gluster volume status " to see if
>>> at least 2 bricks are running.
>>>
>>> On Fri, Sep 1, 2017 at 11:31 AM, Johan Bernhardsson 
>>> wrote:
>>>
 If gluster drops in quorum so that it has fewer votes than it should, it
 will stop file operations until quorum is back to normal. If I remember it
 right, you need two bricks writable for quorum to be met, and the
 arbiter is only a vote to avoid split brain.


 Basically what you have is a RAID5 solution without a spare. And when
 one disk dies it will run in degraded mode. And some RAID systems will stop
 the array until you have removed the disk or forced it to run anyway.

 You can read up on it here: https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/

 /Johan

 On Thu, 2017-08-31 at 22:33 -0700, Jim Kusznir wrote:

 Hi all:

 Sorry to hijack the thread, but I was about to start essentially the
 same thread.

 I have a 3 node cluster, all three are hosts and gluster nodes (replica
 2 + arbiter).  I DO have the mnt_options=backup-volfile-servers= set:

 storage=192.168.8.11:/engine
 mnt_options=backup-volfile-servers=192.168.8.12:192.168.8.13

 I had an issue today where 192.168.8.11 went down.  ALL VMs immediately
 paused, including the engine (all VMs were running on host2:192.168.8.12).
 I couldn't get any gluster stuff working until host1 (192.168.8.11) was
 restored.

 What's wrong / what did I miss?

 (this was set up "manually" through the article on setting up a
 self-hosted gluster cluster back when 4.0 was new. I've upgraded it to 4.1
 since).

 Thanks!
 --Jim


 On Thu, Aug 31, 2017 at 12:31 PM, Charles Kozler 
 wrote:

 Typo..."Set it up and then failed that **HOST**"

 And upon that host going down, the 

Re: [ovirt-users] hyperconverged question

2017-09-12 Thread Charles Kozler
The same goes for my engine storage domain. Shouldn't we see the mount options in
the mount -l output? It appears fault tolerance worked (sort of - see more
below) during my test.

[root@appovirtp01 ~]# grep -i mnt_options
/etc/ovirt-hosted-engine/hosted-engine.conf
mnt_options=backup-volfile-servers=n2:n3

[root@appovirtp02 ~]# grep -i mnt_options
/etc/ovirt-hosted-engine/hosted-engine.conf
mnt_options=backup-volfile-servers=n2:n3

[root@appovirtp03 ~]# grep -i mnt_options
/etc/ovirt-hosted-engine/hosted-engine.conf
mnt_options=backup-volfile-servers=n2:n3

Meanwhile not visible in mount -l output:

[root@appovirtp01 ~]# mount -l | grep -i n1:/engine
n1:/engine on /rhev/data-center/mnt/glusterSD/n1:_engine type
fuse.glusterfs
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

[root@appovirtp02 ~]# mount -l | grep -i n1:/engine
n1:/engine on /rhev/data-center/mnt/glusterSD/n1:_engine type
fuse.glusterfs
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

[root@appovirtp03 ~]# mount -l | grep -i n1:/engine
n1:/engine on /rhev/data-center/mnt/glusterSD/n1:_engine type
fuse.glusterfs
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

So since everything is "pointed" at node 1 for engine storage, I decided to
hard shut down node 1 while the hosted engine VM was running on node 3.

The result was that after ~30 seconds the engine crashed, likely because of
the gluster 42-second timeout. The hosted engine VM came back up (with node
1 still down) after about 5-7 minutes.

Is it expected for the VM to go down? I thought the gluster FUSE client mounted all
bricks in the volume
(http://lists.gluster.org/pipermail/gluster-users/2015-May/021989.html), so I
would have imagined this to be more seamless?




On Tue, Sep 12, 2017 at 7:04 PM, Charles Kozler 
wrote:

> Hey All -
>
> So I havent tested this yet but what I do know is that I did setup
> backupvol option when I added the data gluster volume, however, mount
> options on mount -l do not show it as being used
>
> n1:/data on /rhev/data-center/mnt/glusterSD/n1:_data type fuse.glusterfs
> (rw,relatime,user_id=0,group_id=0,default_permissions,
> allow_other,max_read=131072)
>
> I will delete it and re-add it, but I think this might be part of the
> problem. Perhaps me and Jim have the same issue because oVirt is actually
> not passing the additional mount options from the web UI to the backend to
> mount with said parameters?
>
> Thoughts?
>
> On Mon, Sep 4, 2017 at 10:51 AM, FERNANDO FREDIANI <
> fernando.fredi...@upx.com> wrote:
>
>> I had the very same impression. It doesn't look like it works then.
>> So for a fully redundant setup where you can lose a complete host, you must have
>> at least 3 nodes then?
>>
>> Fernando
>>
>> On 01/09/2017 12:53, Jim Kusznir wrote:
>>
>> Huh... OK, how do I convert the arbiter to a full replica, then?  I was
>> misinformed when I created this setup.  I thought the arbiter held
>> enough metadata that it could validate or repudiate any one replica (kinda
>> like the parity drive for a RAID-4 array).  I was also under the impression
>> that one replica + arbiter is enough to keep the array online and
>> functional.
>>
>> --Jim
>>
>> On Fri, Sep 1, 2017 at 5:22 AM, Charles Kozler 
>> wrote:
>>
>>> @ Jim - you have only two data volumes and lost quorum. Arbitrator only
>>> stores metadata, no actual files. So yes, you were running in degraded mode
>>> so some operations were hindered.
>>>
>>> @ Sahina - Yes, this actually worked fine for me once I did that.
>>> However, the issue I am still facing, is when I go to create a new gluster
>>> storage domain (replica 3, hyperconverged) and I tell it "Host to use" and
>>> I select that host. If I fail that host, all VMs halt. I do not recall this
>>> in 3.6 or early 4.0. This to me makes it seem like this is "pinning" a node
>>> to a volume and vice versa like you could, for instance, for a singular
>>> hyperconverged to ex: export a local disk via NFS and then mount it via
>>> ovirt domain. But of course, this has its caveats. To that end, I am using
>>> gluster replica 3, when configuring it I say "host to use: " node 1, then
>>> in the connection details I give it node1:/data. I fail node1, all VMs
>>> halt. Did I miss something?
>>>
>>> On Fri, Sep 1, 2017 at 2:13 AM, Sahina Bose  wrote:
>>>
 To the OP question, when you set up a gluster storage domain, you need
 to specify backup-volfile-servers=server2:server3, where server2
 and server3 also have bricks running. When server1 is down, and the volume
 is mounted again - server2 or server3 are queried to get the gluster
 volfiles.

 @Jim, if this does not work, are you using 4.1.5 build with libgfapi
 access? If not, please provide the vdsm and gluster mount logs to analyse

 If VMs go to paused state - this could mean the storage is not
 available. You can check "gluster volume 

[ovirt-users] Hyper converged network setup

2017-09-12 Thread Tailor, Bharat
Hi,

I am trying to deploy 3 hosts hyper converged setup.
I am using Centos and installed KVM on all hosts.

Host-1
Hostname - test1.localdomain
 eth0 - 192.168.100.15/24
GW - 192.168.100.1

Host-2
Hostname - test2.localdomain
eth0 - 192.168.100.16/24
GW - 192.168.100.1

Host-3
Hostname - test3.localdomain
eth0 - 192.168.100.16/24
GW - 192.168.100.1

I have created two gluster volumes, "engine" & "data", with replica 3.
I have added FQDN entries in /etc/hosts on all hosts for name resolution.

I want to deploy the oVirt self-hosted engine to manage all the hosts and
production VMs, and my ovirt-engine VM should have HA enabled.

I found multiple docs on the internet for deploying the self-hosted engine, but I
don't know what kind of network configuration I have to do on the CentOS network
cards & KVM. The KVM docs suggest that I have to create a bridge from the physical
NIC to the virtual NIC. If I configure a bridge br0 on eth0, I can't see
eth0 at the NIC card choice while deploying the ovirt-engine setup.

Kindly help me with the correct configuration of the CentOS hosts, KVM &
the ovirt-engine VM for an HA-enabled DC.
Regards
Bharat Kumar

G15- Vinayak Nagar complex,Opp.Maa Satiya, ayad
Udaipur (Raj.)
313001
Mob: +91-9950-9960-25
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vmware ova import problem

2017-09-12 Thread Michal Skrivanek

> On 11 Sep 2017, at 14:12, Jiří Sléžka  wrote:
> 
> Hi,
> 
> today I tested the VMware OVA import for the first time but unfortunately it
> failed. (btw. it would be great to have the possibility to upload an OVA
> directly from the manager…)

in 4.1 you can only upload qcow disks; in the future we will support complete OVAs 
(native oVirt OVAs, though)

> 
> problem seems to be this (vdsm/import log)
> 
> ...
> libguestfs: trace: v2v: inspect_get_type = "linux"
> i_root = /dev/sda1
> i_type = linux
> i_distro = debian
> i_arch = i386
> i_major_version = 9
> i_minor_version = 0
> i_package_format = deb
> i_package_management = apt
> i_product_name = 9.0
> i_product_variant = unknown
> i_uefi = false
> ...
> [  13.7] Converting 9.0 to run on KVM
> virt-v2v: error: virt-v2v is unable to convert this guest type (linux/debian)
> rm -rf '/var/tmp/ova.lB3mEN'
> rm -rf '/var/tmp/null.kaX2uZ'
> ...
> 
> so is debian really unsupported to be imported from a vmware ova?

it is supported only in libguestfs 1.36+, which is shipped in EL 7.4

> 
> I believe this vm could run as is, even without v2v convert.
> 
> It would be nice to have the possibility to bypass the v2v process and import the
> appliance as is, just disk(s) and VM definition. Does it make sense?

the disk needs conversion, as well as changes to be able to boot (that's the 
part originally missing for Debian-based guests)

Thanks,
michal

> 
> Cheers,
> 
> Jiri Slezka
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Disk image upload via CLI?

2017-09-12 Thread Fred Rolland
Hi,

You can check this example:
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py
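
In case it helps, the flow that example follows is roughly this - a condensed
sketch against the Python SDK (ovirt-engine-sdk4); the engine URL, credentials,
storage domain name and sizes below are placeholders, and the actual HTTP upload
loop is in the full example:

# Condensed sketch of the upload flow from the example above (ovirt-engine-sdk4).
# Engine URL, credentials, storage domain name and sizes are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

# 1. Create the target disk on the storage domain.
disks_service = connection.system_service().disks_service()
disk = disks_service.add(
    types.Disk(
        name='uploaded_disk',
        format=types.DiskFormat.COW,
        provisioned_size=800 * 2**30,   # virtual size of the image
        storage_domains=[types.StorageDomain(name='mydata')],
    )
)

# 2. Start an image transfer for that disk; the example script then waits for
#    the transfer to become active and PUTs the file's bytes to the transfer's
#    proxy URL before finalizing it.
transfers_service = connection.system_service().image_transfers_service()
transfer = transfers_service.add(
    types.ImageTransfer(image=types.Image(id=disk.id))
)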

Regards,
Fred

On Tue, Sep 12, 2017 at 11:49 AM, Matthias Leopold <
matthias.leop...@meduniwien.ac.at> wrote:

> Hi,
>
> is there a way to upload disk images (not OVF files, not ISO files) to
> oVirt storage domains via CLI? I need to upload a 800GB file and this is
> not really comfortable via browser. I looked at ovirt-shell and
> https://www.ovirt.org/develop/release-management/features/st
> orage/image-upload/, but i didn't find an option in either of them.
>
> thx
> matthias
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Disk image upload via CLI?

2017-09-12 Thread Yaniv Kaul
On Tue, Sep 12, 2017 at 2:15 PM, Fred Rolland  wrote:

> Hi,
>
> You can check this example:
> https://github.com/oVirt/ovirt-engine-sdk/blob/master/
> sdk/examples/upload_disk.py
>

Or via Ansible:
https://github.com/oVirt/ovirt-ansible/blob/master/roles/ovirt-image-template/README.md

Y.


>
> Regards,
> Fred
>
> On Tue, Sep 12, 2017 at 11:49 AM, Matthias Leopold <
> matthias.leop...@meduniwien.ac.at> wrote:
>
>> Hi,
>>
>> is there a way to upload disk images (not OVF files, not ISO files) to
>> oVirt storage domains via CLI? I need to upload a 800GB file and this is
>> not really comfortable via browser. I looked at ovirt-shell and
>> https://www.ovirt.org/develop/release-management/features/st
>> orage/image-upload/, but i didn't find an option in either of them.
>>
>> thx
>> matthias
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] CBT question

2017-09-12 Thread Yaniv Kaul
On Tue, Sep 12, 2017 at 11:25 AM, Demeter Tibor  wrote:

> Dear Yaniv,
>
> Thank you for your reply.
>
> I have to back up ~35 VMs that use around 7 TB of disk. Could you show me a
> usable solution/script/software for backing up these VMs?
>
> I don't want to do full backups every day because it takes too much time
> and space...
>

Of course. You back up snapshots.
You can integrate with the backup API; see [1].
You can see an implementation by the community in [2].
Y.


[1]
https://www.ovirt.org/documentation/admin-guide/chap-Backups_and_Migration/#backing-up-and-restoring-virtual-machines-using-the-backup-and-restore-api
[2] https://github.com/wefixit-AT/oVirtBackup
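
To illustrate the snapshot side of that flow, a minimal sketch with the Python
SDK (ovirtsdk4); the VM name is a placeholder, and the rest of the backup API
sequence (attaching the snapshot's disks to a backup VM, copying the data,
removing the snapshot) is described in [1]:

# Sketch: create a backup snapshot for one VM with the Python SDK.
# The engine URL, credentials and VM name are placeholders; see [1] for the
# full backup/restore sequence.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]

snapshots_service = vms_service.vm_service(vm.id).snapshots_service()
snapshots_service.add(
    types.Snapshot(description='backup', persist_memorystate=False)
)

connection.close()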


>
> Thanks in advance,
> R
> Tibor
>
>
>
> - On 11 Sept 2017, at 16:14, Yaniv Kaul wrote:
>
>
>
> On Mon, Sep 11, 2017 at 4:17 PM, Demeter Tibor 
> wrote:
>
>> Dear Listmembers,
>>
>> Somebody know when will be available the CBT (changed block tracking)
>> feature in ovirt/rhev?
>>
>
> It has to be implemented first in QEMU - where it is still being worked on.
> We plan to take advantage of it after it is ready in QEMU (and libvirt of
> course).
>
>
>> We looking for an usable backup solution for our ovirt guests, but I've
>> see, there are some API limitation yet.
>>
>
> But nevertheless there are various backup solutions that can be used
> today, even if not as efficient as CBT.
> Y.
>
>
>>
>> Thanks in advance,
>> R
>>
>> Tibor
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] CBT question

2017-09-12 Thread Demeter Tibor
Dear Yaniv, 

Thank you for your reply. 

I have to back up ~35 VMs that use around 7 TB of disk. Could you show me a 
usable solution/script/software for backing up these VMs? 

I don't want to do full backups every day because it takes too much time and 
space... 

Thanks in advance, 
R 
Tibor 

- On 11 Sept 2017, at 16:14, Yaniv Kaul wrote: 

> On Mon, Sep 11, 2017 at 4:17 PM, Demeter Tibor <tdeme...@itsmart.hu> wrote:

>> Dear Listmembers,

>> Somebody know when will be available the CBT (changed block tracking) 
>> feature in
>> ovirt/rhev?

> It has to be implemented first in QEMU - where it is still being worked on.
> We plan to take advantage of it after it is ready in QEMU (and libvirt of
> course).

>> We looking for an usable backup solution for our ovirt guests, but I've see,
>> there are some API limitation yet.

> But nevertheless there are various backup solutions that can be used today, 
> even
> if not as efficient as CBT.
> Y.

>> Thanks in advance,
>> R

>> Tibor

>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] epel and collectd

2017-09-12 Thread Fabrice Bacchella
In the release notes, even for the 4.1.6 RC, I see:

https://www.ovirt.org/release/4.1.6/
...
OpsTools currently includes collectd 5.7.0, and the write_http plugin is 
packaged separately.

But if I check the current state:
yum list collectd-write_http collectd
...
collectd.x86_64              5.7.2-1.el7    @centos-opstools-release
collectd-write_http.x86_64   5.7.2-1.el7    @centos-opstools-release

So I think the warning is not needed any more. One can use both oVirt and EPEL 
without any special check.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hyper converged network setup

2017-09-12 Thread Charles Kozler
Interestingly enough I literally just went through this same thing with a
slight variation.

A note on the below: I am not sure if this would be considered best practice
or good for long-term support, but I made do with what I had.

I had 10Gb cards for my storage network but no 10Gb switch, so I direct
connected them with some fun routing and /etc/hosts settings. I also didn't
want my storage network on a routed network (I have firewalls in the way of
VLANs) and I wanted the network separate from my ovirtmgmt - and, as I
said, had no switches for 10Gb. Here is what you need at a bare minimum.
Adapt / change it as you need

1 dedicated NIC on each node for ovirtmgmt. Ex: eth0

1 dedicated NIC to direct connect node 1 and node 2 - eth1 node1
1 dedicated NIC to direct connect node 1 and node 3 - eth2 node1

1 dedicated NIC to direct connect node 2 and node 1 - eth1 node2
1 dedicated NIC to direct connect node 2 and node 3 - eth2 node2

1 dedicated NIC to direct connect node 3 and node 1 - eth1 node3
1 dedicated NIC to direct connect node 3 and node 2 - eth2 node3

You'll need custom routes too:

Route to node 3 from node 1 via eth2
Route to node 3 from node 2 via eth2
Route to node 2 from node 3 via eth2

Finally, entries in your /etc/hosts which match to your routes above

Then, advisably, a dedicated NIC per box for the VM network, but you can
leverage ovirtmgmt if you are just proving this out.

At this point if you can reach all of your nodes via this direct connect
IPs then you setup gluster as you normally would referencing your entries
in /etc/hosts when you call "gluster volume create"

In my setup, as I said, I had 2x 2-port PCIe 10Gb cards per server, so I
set up LACP as well, as you can see below.

This is what my Frankenstein POC looked like: http://i.imgur.com/iURL9jv.png


You can optionally choose to set up this network in oVirt as well (and add
the NICs to each host), but don't configure it as a VM network. Then you can
also, with some other minor tweaks, use these direct connects as migration
networks rather than ovirtmgmt or the VM network.

On Tue, Sep 12, 2017 at 9:12 AM, Tailor, Bharat <
bha...@synergysystemsindia.com> wrote:

> Hi,
>
> I am trying to deploy 3 hosts hyper converged setup.
> I am using Centos and installed KVM on all hosts.
>
> Host-1
> Hostname - test1.localdomain
>  eth0 - 192.168.100.15/24
> GW - 192.168.100.1
>
> Hoat-2
> Hostname - test2.localdomain
> eth0 - 192.168.100.16/24
> GW - 192.168.100.1
>
> Host-3
> Hostname - test3.localdomain
> eth0 - 192.168.100.16/24
> GW - 192.168.100.1
>
> I have created two gluster volume "engine" & "data" with replica 3.
> I have add fqdn entry in /etc/hosts for all host for DNS resolution.
>
> I want to deploy Ovirt engine self hosted OVA to manage all the hosts and
> production VM and my ovirt-engine VM should have HA enabled.
>
> I found multiple docs over internet to deply Self-hosted-engine-ova but I
> don't what kind of network configuration I've to do on Centos network card
> & KVM. As KVM docs suggest that I've to create a bridge network for Pnic to
> Vnic bridge. If I configure a bridge br0 for eth0 bridge that I can't see
> eth0 while deploying ovirt-engine setup at NIC card choice.
>
> Kindly help me to do correct configuration for Centos hosts, KVM &
> ovirt-engine-vm for HA enabled DC.
> Regrards
> Bharat Kumar
>
> G15- Vinayak Nagar complex,Opp.Maa Satiya, ayad
> Udaipur (Raj.)
> 313001
> Mob: +91-9950-9960-25
>
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hyper converged network setup

2017-09-12 Thread Tailor, Bharat
Hi Charles,

Thank you so much for sharing this cool stuff with us.

Some of my doubts are still not cleared, though:


   1. What if I have only a single physical network adaptor? Can't I use it
   for both the management network & the production network?
   2. If I have two physical network adaptors, can I configure NIC teaming
   like in VMware ESXi?
   3. What if my oVirt engine machine fails during production? In VMware we
   can access the ESXi hosts and VMs without vCenter and do all the stuff. Can we
   do the same with oVirt & KVM?
   4. To deploy the ovirt-engine VM, what kind of configuration will I have to do
   on the network adaptors? (e.g. just configure an IP on the physical network, or do I
   have to create a br0 for it?)
   5. Can I make multiple VM networks for VLAN configuration?


Regards
Bharat Kumar

G15- Vinayak Nagar complex,Opp.Maa Satiya, ayad
Udaipur (Raj.)
313001
Mob: +91-9950-9960-25





On Tue, Sep 12, 2017 at 9:30 PM, Charles Kozler 
wrote:

>
> Interestingly enough I literally just went through this same thing with a
> slight variation.
>
> Note to the below: I am not sure if this would be considerd best practice
> or good for something long term support but I made due with what I had
>
> I had 10Gb cards for my storage network but no 10Gb switch, so I direct
> connected them with some fun routing and /etc/hosts settings. I also didnt
> want my storage network on a routed network (have firewalls in the way of
> VLANs) and I wanted the network separate from my ovirtmgmt - and, as I
> said, had no switches for 10Gb. Here is what you need at a bare minimum.
> Adapt / change it as you need
>
> 1 dedicated NIC on each node for ovirtmgmt. Ex: eth0
>
> 1 dedicated NIC to direct connect node 1 and node 2 - eth1 node1
> 1 dedicated NIC to direct connect node 1 and node 3 - eth2 node1
>
> 1 dedicated NIC to direct connect node 2 and node 1 - eth1 node2
> 1 dedicated NIC to direct connect node 2 and node 3 - eth2 node2
>
> 1 dedicated NIC to direct connect node 3 and node 1 - eth1 node3
> 1 dedicated NIC to direct connect node 3 and node 2 - eth2 node3
>
> You'll need custom routes too:
>
> Route to node 3 from node 1 via eth2
> Route to node 3 from node 2 via eth2
> Route to node 2 from node 3 via eth2
>
> Finally, entries in your /etc/hosts which match to your routes above
>
> Then, advisably, a dedicated NIC per box for VM network but you can
> leverage ovirtmgmt if you are just proofing this out
>
> At this point if you can reach all of your nodes via this direct connect
> IPs then you setup gluster as you normally would referencing your entries
> in /etc/hosts when you call "gluster volume create"
>
> In my setup, as I said, I had 2x 2 port PCIe 10Gb cards per server so I
> setup LACP as well as you can see below
>
> This is what my Frankenstein POC looked like: http://i.imgur.com/
> iURL9jv.png
>
> You can optionally choose to setup this network in ovirt as well (and add
> the NICs to each host) but dont configure it as a VM network. Then you can
> also, with some other minor tweaks, use these direct connects as migration
> networks rather than ovirtmgmt or VM network
>
> On Tue, Sep 12, 2017 at 9:12 AM, Tailor, Bharat <
> bha...@synergysystemsindia.com> wrote:
>
>> Hi,
>>
>> I am trying to deploy 3 hosts hyper converged setup.
>> I am using Centos and installed KVM on all hosts.
>>
>> Host-1
>> Hostname - test1.localdomain
>>  eth0 - 192.168.100.15/24
>> GW - 192.168.100.1
>>
>> Hoat-2
>> Hostname - test2.localdomain
>> eth0 - 192.168.100.16/24
>> GW - 192.168.100.1
>>
>> Host-3
>> Hostname - test3.localdomain
>> eth0 - 192.168.100.16/24
>> GW - 192.168.100.1
>>
>> I have created two gluster volume "engine" & "data" with replica 3.
>> I have add fqdn entry in /etc/hosts for all host for DNS resolution.
>>
>> I want to deploy Ovirt engine self hosted OVA to manage all the hosts and
>> production VM and my ovirt-engine VM should have HA enabled.
>>
>> I found multiple docs over internet to deply Self-hosted-engine-ova but I
>> don't what kind of network configuration I've to do on Centos network card
>> & KVM. As KVM docs suggest that I've to create a bridge network for Pnic to
>> Vnic bridge. If I configure a bridge br0 for eth0 bridge that I can't see
>> eth0 while deploying ovirt-engine setup at NIC card choice.
>>
>> Kindly help me to do correct configuration for Centos hosts, KVM &
>> ovirt-engine-vm for HA enabled DC.
>> Regrards
>> Bharat Kumar
>>
>> G15- Vinayak Nagar complex,Opp.Maa Satiya, ayad
>> Udaipur (Raj.)
>> 313001
>> Mob: +91-9950-9960-25
>>
>>
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hyper converged network setup

2017-09-12 Thread Donny Davis
1. Yes, you can do this
2. Yes, in Linux it's called bonding, and this can be done from the UI
3. You can get around using the engine machine if required with virsh or
virt-manager - however, I would just wait for the manager to migrate and
start on another host in the cluster
4. The deployment will take care of everything for you. You just need an IP
5. Yes, you can use VLANs or virtual networking (NSX-ish) called OVS in
oVirt.

I noticed that in your deployment machines 2 and 3 have the same IP. You might want
to fix that before deploying.

Happy trails
~D


On Tue, Sep 12, 2017 at 2:00 PM, Tailor, Bharat <
bha...@synergysystemsindia.com> wrote:

> Hi Charles,
>
> Thank you so much to share a cool stuff with us.
>
> My doubts are still not cleared.
>
>
>1. What If I have only single Physical network adaptor? Can't I use it
>for management network & production network both.
>2. If I have two Physical network adaptor, Can I configure NIC teaming
>as like Vmware ESXi.
>3. What If my ovirt machine fails during production period? In vmware
>we can access ESXi hosts and VM without Vcenter and do all the stuffs. Can
>we do the same with Ovirt & KVM.
>4. To deploy ovirt engine VM, what kind of configuration I'll have to
>do on network adaptors? (eg. just configure IP on physical network or have
>to create br0 for it.)
>5. Can I make multiple VM networks for vlan configuration?
>
>
> Regrards
> Bharat Kumar
>
> G15- Vinayak Nagar complex,Opp.Maa Satiya, ayad
> Udaipur (Raj.)
> 313001
> Mob: +91-9950-9960-25
>
>
>
>
>
> On Tue, Sep 12, 2017 at 9:30 PM, Charles Kozler 
> wrote:
>
>>
>> Interestingly enough I literally just went through this same thing with a
>> slight variation.
>>
>> Note to the below: I am not sure if this would be considerd best practice
>> or good for something long term support but I made due with what I had
>>
>> I had 10Gb cards for my storage network but no 10Gb switch, so I direct
>> connected them with some fun routing and /etc/hosts settings. I also didnt
>> want my storage network on a routed network (have firewalls in the way of
>> VLANs) and I wanted the network separate from my ovirtmgmt - and, as I
>> said, had no switches for 10Gb. Here is what you need at a bare minimum.
>> Adapt / change it as you need
>>
>> 1 dedicated NIC on each node for ovirtmgmt. Ex: eth0
>>
>> 1 dedicated NIC to direct connect node 1 and node 2 - eth1 node1
>> 1 dedicated NIC to direct connect node 1 and node 3 - eth2 node1
>>
>> 1 dedicated NIC to direct connect node 2 and node 1 - eth1 node2
>> 1 dedicated NIC to direct connect node 2 and node 3 - eth2 node2
>>
>> 1 dedicated NIC to direct connect node 3 and node 1 - eth1 node3
>> 1 dedicated NIC to direct connect node 3 and node 2 - eth2 node3
>>
>> You'll need custom routes too:
>>
>> Route to node 3 from node 1 via eth2
>> Route to node 3 from node 2 via eth2
>> Route to node 2 from node 3 via eth2
>>
>> Finally, entries in your /etc/hosts which match to your routes above
>>
>> Then, advisably, a dedicated NIC per box for VM network but you can
>> leverage ovirtmgmt if you are just proofing this out
>>
>> At this point if you can reach all of your nodes via this direct connect
>> IPs then you setup gluster as you normally would referencing your entries
>> in /etc/hosts when you call "gluster volume create"
>>
>> In my setup, as I said, I had 2x 2 port PCIe 10Gb cards per server so I
>> setup LACP as well as you can see below
>>
>> This is what my Frankenstein POC looked like: http://i.imgur.com/iURL9
>> jv.png
>>
>> You can optionally choose to setup this network in ovirt as well (and add
>> the NICs to each host) but dont configure it as a VM network. Then you can
>> also, with some other minor tweaks, use these direct connects as migration
>> networks rather than ovirtmgmt or VM network
>>
>> On Tue, Sep 12, 2017 at 9:12 AM, Tailor, Bharat <
>> bha...@synergysystemsindia.com> wrote:
>>
>>> Hi,
>>>
>>> I am trying to deploy 3 hosts hyper converged setup.
>>> I am using Centos and installed KVM on all hosts.
>>>
>>> Host-1
>>> Hostname - test1.localdomain
>>>  eth0 - 192.168.100.15/24
>>> GW - 192.168.100.1
>>>
>>> Hoat-2
>>> Hostname - test2.localdomain
>>> eth0 - 192.168.100.16/24
>>> GW - 192.168.100.1
>>>
>>> Host-3
>>> Hostname - test3.localdomain
>>> eth0 - 192.168.100.16/24
>>> GW - 192.168.100.1
>>>
>>> I have created two gluster volume "engine" & "data" with replica 3.
>>> I have add fqdn entry in /etc/hosts for all host for DNS resolution.
>>>
>>> I want to deploy Ovirt engine self hosted OVA to manage all the hosts
>>> and production VM and my ovirt-engine VM should have HA enabled.
>>>
>>> I found multiple docs over internet to deply Self-hosted-engine-ova but
>>> I don't what kind of network configuration I've to do on Centos network
>>> card & KVM. As KVM docs suggest that I've to create a bridge network for
>>> Pnic to Vnic bridge. If I configure a bridge br0 for eth0 bridge that I

Re: [ovirt-users] Hyper converged network setup

2017-09-12 Thread Charles Kozler
Bharat -

1. Yes. You will need to configure the switch port as a trunk and set up your VLANs
and VLAN IDs
2. Yes
3. You can still access the hosts. The engine itself crashing or being down
won't stop your VMs or hosts or anything (unless fencing kicks in). You can use virsh
4. My suggestion here is to start immediately after a fresh server install and
yum update. The installer does a lot and checks a lot and won't like things such as an
ovirtmgmt bridged network you set up yourself
5. Yes. See #1. Usually what I do is give each oVirt node an IP ending in
.5, then .6, and so on. This way I can be sure my network itself is working
before adding a VM and attaching that NIC to it

On Tue, Sep 12, 2017 at 4:41 PM, Donny Davis  wrote:

> 1. Yes, you can do this
> 2. Yes, In linux it's called bonding and this can be done from the UI
> 3. You can get around using the Engine machine if required with virsh or
> virt-manager - however I would just wait for the manager to migrate and
> start on another host in the cluster
> 4.  The deployment will take care of everything for you. You just need an
> IP
> 5. Yes, you can use vlans or virtual networking(NSXish) called OVS in
> oVirt.
>
> I noticed on your deployment machines 2 and 3 have the same IP. Might want
> to fix that before deploying
>
> Happy trails
> ~D
>
>
> On Tue, Sep 12, 2017 at 2:00 PM, Tailor, Bharat <
> bha...@synergysystemsindia.com> wrote:
>
>> Hi Charles,
>>
>> Thank you so much to share a cool stuff with us.
>>
>> My doubts are still not cleared.
>>
>>
>>1. What If I have only single Physical network adaptor? Can't I use
>>it for management network & production network both.
>>2. If I have two Physical network adaptor, Can I configure NIC
>>teaming as like Vmware ESXi.
>>3. What If my ovirt machine fails during production period? In vmware
>>we can access ESXi hosts and VM without Vcenter and do all the stuffs. Can
>>we do the same with Ovirt & KVM.
>>4. To deploy ovirt engine VM, what kind of configuration I'll have to
>>do on network adaptors? (eg. just configure IP on physical network or have
>>to create br0 for it.)
>>5. Can I make multiple VM networks for vlan configuration?
>>
>>
>> Regrards
>> Bharat Kumar
>>
>> G15- Vinayak Nagar complex,Opp.Maa Satiya, ayad
>> Udaipur (Raj.)
>> 313001
>> Mob: +91-9950-9960-25
>>
>>
>>
>>
>>
>> On Tue, Sep 12, 2017 at 9:30 PM, Charles Kozler 
>> wrote:
>>
>>>
>>> Interestingly enough I literally just went through this same thing with
>>> a slight variation.
>>>
>>> Note to the below: I am not sure if this would be considerd best
>>> practice or good for something long term support but I made due with what I
>>> had
>>>
>>> I had 10Gb cards for my storage network but no 10Gb switch, so I direct
>>> connected them with some fun routing and /etc/hosts settings. I also didnt
>>> want my storage network on a routed network (have firewalls in the way of
>>> VLANs) and I wanted the network separate from my ovirtmgmt - and, as I
>>> said, had no switches for 10Gb. Here is what you need at a bare minimum.
>>> Adapt / change it as you need
>>>
>>> 1 dedicated NIC on each node for ovirtmgmt. Ex: eth0
>>>
>>> 1 dedicated NIC to direct connect node 1 and node 2 - eth1 node1
>>> 1 dedicated NIC to direct connect node 1 and node 3 - eth2 node1
>>>
>>> 1 dedicated NIC to direct connect node 2 and node 1 - eth1 node2
>>> 1 dedicated NIC to direct connect node 2 and node 3 - eth2 node2
>>>
>>> 1 dedicated NIC to direct connect node 3 and node 1 - eth1 node3
>>> 1 dedicated NIC to direct connect node 3 and node 2 - eth2 node3
>>>
>>> You'll need custom routes too:
>>>
>>> Route to node 3 from node 1 via eth2
>>> Route to node 3 from node 2 via eth2
>>> Route to node 2 from node 3 via eth2
>>>
>>> Finally, entries in your /etc/hosts which match to your routes above
>>>
>>> Then, advisably, a dedicated NIC per box for VM network but you can
>>> leverage ovirtmgmt if you are just proofing this out
>>>
>>> At this point if you can reach all of your nodes via this direct connect
>>> IPs then you setup gluster as you normally would referencing your entries
>>> in /etc/hosts when you call "gluster volume create"
>>>
>>> In my setup, as I said, I had 2x 2 port PCIe 10Gb cards per server so I
>>> setup LACP as well as you can see below
>>>
>>> This is what my Frankenstein POC looked like: http://i.imgur.com/iURL9
>>> jv.png
>>>
>>> You can optionally choose to setup this network in ovirt as well (and
>>> add the NICs to each host) but dont configure it as a VM network. Then you
>>> can also, with some other minor tweaks, use these direct connects as
>>> migration networks rather than ovirtmgmt or VM network
>>>
>>> On Tue, Sep 12, 2017 at 9:12 AM, Tailor, Bharat <
>>> bha...@synergysystemsindia.com> wrote:
>>>
 Hi,

 I am trying to deploy 3 hosts hyper converged setup.
 I am using Centos and installed KVM on all hosts.

 Host-1
 Hostname - 

Re: [ovirt-users] Disk image upload via CLI?

2017-09-12 Thread Matthias Leopold

Thanks, I tried this script and it _almost_ worked ;-)

I uploaded two images I created with
qemu-img create -f qcow2 -o preallocation=full
and
qemu-img create -f qcow2 -o preallocation=falloc

For initial_size and provisioned_size I took the value reported by 
"qemu-img info" in "virtual size" (same as "disk size" in this case).


The upload goes to 100% and then fails with:

200 OK Completed 100%
Traceback (most recent call last):
  File "./upload_disk.py", line 157, in 
headers=upload_headers,
  File "/usr/lib64/python2.7/httplib.py", line 1017, in request
self._send_request(method, url, body, headers)
  File "/usr/lib64/python2.7/httplib.py", line 1051, in _send_request
self.endheaders(body)
  File "/usr/lib64/python2.7/httplib.py", line 1013, in endheaders
self._send_output(message_body)
  File "/usr/lib64/python2.7/httplib.py", line 864, in _send_output
self.send(msg)
  File "/usr/lib64/python2.7/httplib.py", line 840, in send
self.sock.sendall(data)
  File "/usr/lib64/python2.7/ssl.py", line 746, in sendall
v = self.send(data[count:])
  File "/usr/lib64/python2.7/ssl.py", line 712, in send
v = self._sslobj.write(data)
socket.error: [Errno 104] Connection reset by peer

In the web GUI the disk stays in Status: "Transferring via API";
it can only be removed after manually unlocking it (unlock_entity.sh).

engine.log tells nothing interesting

I attached the last lines of ovirt-imageio-proxy/image-proxy.log and 
ovirt-imageio-daemon/daemon.log (from the executing node).


the HTTP status 403 in ovirt-imageio-daemon/daemon.log doesn't look too 
nice to me


can you explain what happens?

ovirt engine is 4.1.5
ovirt node is 4.1.3 (is that a problem?)

thx
matthias



Am 2017-09-12 um 13:15 schrieb Fred Rolland:

Hi,

You can check this example:
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py

Regards,
Fred

On Tue, Sep 12, 2017 at 11:49 AM, Matthias Leopold 
> wrote:


Hi,

is there a way to upload disk images (not OVF files, not ISO files)
to oVirt storage domains via CLI? I need to upload a 800GB file and
this is not really comfortable via browser. I looked at ovirt-shell
and

https://www.ovirt.org/develop/release-management/features/storage/image-upload/

,
but i didn't find an option in either of them.

thx
matthias

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200
2017-09-12 16:07:10,046 INFO(Thread-632) [web] xxx.yyy.215.2 - PUT /1e12aa19-f122-4f6c-bfad-ce84abe2684e 200 0 (0.28s)
2017-09-12 16:07:10,171 INFO(Thread-633) [images] Writing 8388608 bytes at offset 5301600256 to /rhev/data-center/0001-0001-0001-0001-0311/ebb620c9-6dfe-43a8-9867-20b9a93c76b5/images/54b6da51-1c67-42e9-b128-0a218fa1e8b7/e0c1ab33-a817-4207-b1f9-32f1aa4e46be for ticket 1e12aa19-f122-4f6c-bfad-ce84abe2684e
2017-09-12 16:07:10,439 INFO(Thread-633) [web] xxx.yyy.215.2 - PUT /1e12aa19-f122-4f6c-bfad-ce84abe2684e 200 0 (0.27s)
2017-09-12 16:07:10,556 INFO(Thread-634) [images] Writing 8388608 bytes at offset 5309988864 to /rhev/data-center/0001-0001-0001-0001-0311/ebb620c9-6dfe-43a8-9867-20b9a93c76b5/images/54b6da51-1c67-42e9-b128-0a218fa1e8b7/e0c1ab33-a817-4207-b1f9-32f1aa4e46be for ticket 1e12aa19-f122-4f6c-bfad-ce84abe2684e
2017-09-12 16:07:10,819 INFO(Thread-634) [web] xxx.yyy.215.2 - PUT /1e12aa19-f122-4f6c-bfad-ce84abe2684e 200 0 (0.26s)
2017-09-12 16:07:10,924 INFO(Thread-635) [images] Writing 8388608 bytes at offset 5318377472 to /rhev/data-center/0001-0001-0001-0001-0311/ebb620c9-6dfe-43a8-9867-20b9a93c76b5/images/54b6da51-1c67-42e9-b128-0a218fa1e8b7/e0c1ab33-a817-4207-b1f9-32f1aa4e46be for ticket 1e12aa19-f122-4f6c-bfad-ce84abe2684e
2017-09-12 16:07:11,219 INFO(Thread-635) [web] xxx.yyy.215.2 - PUT /1e12aa19-f122-4f6c-bfad-ce84abe2684e 200 0 (0.30s)
2017-09-12 16:07:11,336 INFO(Thread-636) [images] Writing 8388608 bytes at offset 5326766080 to /rhev/data-center/0001-0001-0001-0001-0311/ebb620c9-6dfe-43a8-9867-20b9a93c76b5/images/54b6da51-1c67-42e9-b128-0a218fa1e8b7/e0c1ab33-a817-4207-b1f9-32f1aa4e46be for ticket 1e12aa19-f122-4f6c-bfad-ce84abe2684e
2017-09-12 16:07:11,595 INFO(Thread-636) [web] xxx.yyy.215.2 - PUT /1e12aa19-f122-4f6c-bfad-ce84abe2684e 200 0 (0.26s)
2017-09-12 16:07:11,711 INFO(Thread-637) [images] Writing 8388608 bytes at offset 5335154688 to