Re: [ovirt-users] How do you oVirt? Here the answers!

2017-04-14 Thread Yura Poltoratskiy

Hi,

On 14.04.2017 11:27, Sandro Bonazzola wrote:

- Within the storage, there isn't a "winner" between NFS, Gluster and
iSCSI. Within Other Storage, Fiber Channel is the most used. We had
also: Ceph, DAS, EMC ScaleIO, GPFS


> EMC ScaleIO
It would be interesting to know in what way. I did a Google search for 
oVirt+ScaleIO - only a few threads came up.




--

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R

Red Hat EMEA

TRIED. TESTED. TRUSTED.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




Re: [ovirt-users] Export VM

2017-02-15 Thread Yura Poltoratskiy



On 15.02.2017 10:10, Yaniv Kaul wrote:



On Wed, Feb 15, 2017 at 10:08 AM, Yura Poltoratskiy 
<yurapolt...@gmail.com> wrote:


A VM can be exported only to the EXPORT domain. Basically, the export
domain is (or can be) on a public network that is not part of the
oVirt infrastructure. The export process itself converts a VM's disk
from the DATA domain to the EXPORT domain, and that can be done only
by the host with the SPM role (Storage Pool Manager). So the main idea
is: the SPM host must have access to the EXPORT domain; that domain is
on the public net, and the host has its default gateway on the
management network. So the export traffic should go through the
management network.


Wouldn't it be more accurate to say that it goes over the same network 
as the storage traffic?

Y.

OK, let's put it this way :)

if (the export domain is on a network that is not part of oVirt)
then
    traffic will go through the management network
else
    traffic will go through whichever network the export domain is connected to
fi
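
A quick way to check in practice is to ask the kernel which route it would
take; this is just a sketch, and 10.0.50.10 is a made-up address of the
export domain's storage server:

[root@compute ~]# ip route get 10.0.50.10
10.0.50.10 via 10.0.1.1 dev ovirtmgmt src 10.0.1.102

If the answer points at the ovirtmgmt bridge, the export traffic will leave
through the management network.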



With an unusual network config you can push the export traffic
through the Migration network, but that is a very specific scenario.
On 15.02.2017 09:27, Yura Poltoratskiy wrote:


Hi.

Through the management.

I can explain in more detail if needed.


On 15.02.2017 09:09, David David wrote:

Hi.
There are several network roles, such as Management, VM and Migration.
Which network role does the Export VM action go through?
Thanks.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Export VM

2017-02-15 Thread Yura Poltoratskiy
A VM can be exported only to the EXPORT domain. Basically, the export domain 
is (or can be) on a public network that is not part of the oVirt 
infrastructure. The export process itself converts a VM's disk from the DATA 
domain to the EXPORT domain, and that can be done only by the host with the 
SPM role (Storage Pool Manager). So the main idea is: the SPM host must have 
access to the EXPORT domain; that domain is on the public net, and the host 
has its default gateway on the management network. So the export traffic 
should go through the management network.


With an unusual network config you can push the export traffic 
through the Migration network, but that is a very specific scenario.

On 15.02.2017 09:27, Yura Poltoratskiy wrote:


Hi.

Through the management.

I can explain in more detail if needed.


On 15.02.2017 09:09, David David wrote:

Hi.
There are several network roles, such as Management, VM and Migration.
Which network role does the Export VM action go through?
Thanks.






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Export VM

2017-02-14 Thread Yura Poltoratskiy

Hi.

Through the management.

I can explain in more detail if needed.


On 15.02.2017 09:09, David David wrote:

Hi.
There are several network roles, such as Management, VM and Migration.
Which network role does the Export VM action go through?
Thanks.




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Yura Poltoratskiy

I upgraded ovirt-engine yesterday. There were two problems.

The first: packages from the EPEL repo. It was solved by disabling the repo 
and downgrading the packages to the versions available in the 
ovirt-release40 repo (yes, the documentation does mention the EPEL repo).


The second (and this is not only for the current version): engine-setup 
never completes successfully, because ovirt-engine-notifier.service cannot 
be started after the upgrade; the notifier error is that MAIL_SERVER is not 
set. Every time I upgrade the engine I hit the same error. Then I add 
MAIL_SERVER=127.0.0.1 to 
/usr/share/ovirt-engine/services/ovirt-engine-notifier/ovirt-engine-notifier.conf 
and the notifier starts without problems. Is it my mistake?
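
For reference, the workaround I apply looks roughly like this. Putting the
override under /etc/ovirt-engine/notifier/notifier.conf.d/ instead of editing
the file under /usr/share is an assumption based on the header of the default
config, so please double-check the path on your installation:

[root@engine ~]# echo 'MAIL_SERVER=127.0.0.1' > /etc/ovirt-engine/notifier/notifier.conf.d/99-mail-server.conf
[root@engine ~]# systemctl restart ovirt-engine-notifier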


And one more question. In the Events tab I can see "User vasya@internal 
logged out.", but there is no message that 'vasya' logged in. Could 
someone tell me how to debug this issue?
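
A rough way to check on the engine side (just a sketch; the exact wording of
the log messages may differ between versions):

[root@engine ~]# grep -i "vasya" /var/log/ovirt-engine/engine.log | grep -i logg

If the login never shows up in engine.log either, it is probably not just an
Events-tab filtering issue.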



On 02.02.2017 14:19, Sandro Bonazzola wrote:

Hi,
did you install/update to 4.1.0? Let us know your experience!
We usually end up hearing about it only when things don't work well, so let 
us know if it works fine for you :-)


If you're not planning an update to 4.1.0 in the near future, let us 
know why.

Maybe we can help.

Thanks!
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com 




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Multipath handling in oVirt

2017-02-01 Thread Yura Poltoratskiy

Here you are:

iSCSI multipathing (screenshot)

network setup of a host (screenshot)





On 01.02.2017 15:31, Nicolas Ecarnot wrote:

Hello,

Before replying further, may I ask you, Yura, to post a screenshot of 
your iSCSI multipathing setup in the web GUI?


And also the same for the network setup of a host?

Thank you.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Multipath handling in oVirt

2017-02-01 Thread Yura Poltoratskiy

Hi,

As for me personally, I have the following config: compute nodes with 4x1G 
NICs, storage nodes with 2x1G NICs, and 2 switches (not stackable). All 
servers run CentOS 7.x (7.3 at the moment).


On the compute nodes I have a bond of nic1 and nic2 (attached to different 
switches) for the mgmt and VM networks, and the other two NICs, nic3 and 
nic4, without bonding (also attached to different switches). On the storage 
nodes I have no bonding; nic1 and nic2 are connected to different switches.


I have two networks for iSCSI: 10.0.2.0/24 and 10.0.3.0/24. nic1 of the 
storage and nic3 of the computes are connected to one network; nic2 of the 
storage and nic4 of the computes to the other.


In the web UI I created the networks iSCSI1 and iSCSI2 for nic3 and nic4, 
and also configured iSCSI multipathing. To get active/active links with 
double bandwidth I added 'path_grouping_policy "multibus"' to the defaults 
section of /etc/multipath.conf.
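
The relevant part of my /etc/multipath.conf looks roughly like the sketch
below. Note that VDSM manages this file, and as far as I remember it needs a
"# VDSM PRIVATE" marker near the top so that VDSM does not overwrite local
changes - please verify that detail for your version:

# VDSM PRIVATE
defaults {
    path_grouping_policy    "multibus"
}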


After all of that, I get 200+ MB/s throughput to the storage (like RAID0 
with 2 SATA HDDs) and I can lose one NIC/link/switch without stopping 
the VMs.


[root@compute02 ~]# multipath -ll
360014052f28c9a60 dm-6 LIO-ORG ,ClusterLunHDD
size=902G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 6:0:0:0 sdc 8:32  active ready running
  `- 8:0:0:0 sdf 8:80  active ready running
36001405551a9610d09b4ff9aa836b906 dm-40 LIO-ORG ,SSD_DOMAIN
size=915G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:0 sde 8:64  active ready running
  `- 9:0:0:0 sdh 8:112 active ready running
360014055eb8d30a91044649bda9ee620 dm-5 LIO-ORG ,ClusterLunSSD
size=135G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 6:0:0:1 sdd 8:48  active ready running
  `- 8:0:0:1 sdg 8:96  active ready running

[root@compute02 ~]# iscsiadm -m session
tcp: [1] 10.0.3.200:3260,1 iqn.2015-09.lab.lnx-san:storage (non-flash)
tcp: [2] 10.0.3.203:3260,1 iqn.2016-10.local.ntu:storage3 (non-flash)
tcp: [3] 10.0.3.200:3260,1 iqn.2015-09.lab.lnx-san:storage (non-flash)
tcp: [4] 10.0.3.203:3260,1 iqn.2016-10.local.ntu:storage3 (non-flash)

[root@compute02 ~]# ip route show | head -4
default via 10.0.1.1 dev ovirtmgmt
10.0.1.0/24 dev ovirtmgmt  proto kernel  scope link  src 10.0.1.102
10.0.2.0/24 dev enp5s0.2  proto kernel  scope link  src 10.0.2.102
10.0.3.0/24 dev enp2s0.3  proto kernel  scope link  src 10.0.3.102

[root@compute02 ~]# brctl show ovirtmgmt
bridge name bridge id   STP enabled interfaces
ovirtmgmt   8000.000475b4f262   no bond0.1001

[root@compute02 ~]# cat /proc/net/bonding/bond0 | grep "Bonding\|Slave Interface"

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: fault-tolerance (active-backup)
Slave Interface: enp4s6
Slave Interface: enp6s0


On 01.02.2017 12:50, Nicolas Ecarnot wrote:

Hello,

I'm starting over on this subject because I wanted to clarify what the 
oVirt way of managing multipathing is.


(Here I will talk only about the data/iSCSI/SAN/LUN/you name it 
networks.)
According to what I see in the host network setup, one can assign 
*ONE* data network to an interface or to a group of interfaces.


That implies that if my host has two data-dedicated interfaces, I can
- either group them using bonding (and oVirt is handy for that in the 
host network setup), then assign the data network to this bond,
- or assign each NIC a different IP in a different VLAN, then create two 
different data networks and assign one to each NIC. I have never played 
this game and don't know where it leads.


First of all, may the oVirt storage experts comment on the above and check 
that it is correct.


Then, as many users here, our hardware is this :
- Hosts : Dell poweredge, mostly blades (M610,620,630), or rack servers
- SANs : Equallogic PS4xxx and PS6xxx

Equallogic's recommendation is that bonding is evil for iSCSI access; to 
them, multipath is the only true way. After reading tons of docs and working 
with Dell support, everything tells me to use at least two different NICs 
with different IPs, not bonded (using the same network is bad, but OK).


How can oVirt handle that?



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] New Install Issues

2017-01-30 Thread Yura Poltoratskiy
Two days ago the same question was asked here with the subject "Ovirt FQDN". 
I took the method from that neighbouring thread and it works for me. The 
method is quite simple - just create a file with a single line:


[root@ovirt ~]# cat /etc/ovirt-engine/engine.conf.d/99-alternate-fqdn.conf
SSO_ALTERNATE_ENGINE_FQDNS="your.fqdn.there"

I did not try it with an IP, but with an FQDN different from the one I used 
to deploy the engine it works fine. You can give it a try and put an IP 
instead of the FQDN.
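
If you want to try the IP variant, an untested sketch would be the following
(192.0.2.10 is just a placeholder; as far as I know the engine has to be
restarted to pick up the change):

[root@ovirt ~]# echo 'SSO_ALTERNATE_ENGINE_FQDNS="192.0.2.10"' > /etc/ovirt-engine/engine.conf.d/99-alternate-fqdn.conf
[root@ovirt ~]# systemctl restart ovirt-engine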


On 28.01.2017 22:10, Talk Jesus wrote:


“Please consult following bugs targeted to oVirt 4.0.4 which should 
fix this limitation: https://bugzilla.redhat.com/1325746 
https://bugzilla.redhat.com/1362196”


I checked both links, and I see nothing about how to fix it so that I can 
access via the IP address.


*From:*Martin Perina [mailto:mper...@redhat.com]
*Sent:* Saturday, January 28, 2017 2:56 PM
*To:* Talk Jesus 
*Cc:* users ; Nir Soffer 
*Subject:* Re: [ovirt-users] New Install Issues

On Sat, Jan 28, 2017 at 8:50 PM, Nir Soffer wrote:


On Sat, Jan 28, 2017 at 8:46 PM, Talk Jesus wrote:
> Hey team,
>
> Just installed oVirt 4.x on CentOS 7. Like many who have reported this
> issue, I cannot log into the web GUI via IP address. I get this:
>
> The client is not authorized to request an authorization. It's required
> to access the system using FQDN.

Hi,

please read the release notes [1]; it's written in the section "Install / 
Upgrade from previous versions", step 2.

For details on how to configure engine access via an IP address, please 
take a look at [2], as mentioned in the release notes.


Thanks

Martin

[1] http://www.ovirt.org/release/4.0.6/
[2] https://bugzilla.redhat.com/1325746


This happens when you try to access the server via an address other than
the one set when you installed your engine.

For example, you selected the address foo.bar.com when you installed your
engine, and you are trying to access it as https://foo/ (maybe you have
an alias).

Check how the engine was configured and use the same address when
you access it.

Nir

>
>
>
> I can’t figure out a fix for this. Not wanting to use a domain, just IP
> for access for testing.
>
>





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] posix compliant fs with ceph rbd

2017-01-26 Thread Yura Poltoratskiy



On 26.01.2017 11:11, Nir Soffer wrote:

On Wed, Jan 25, 2017 at 8:55 PM, Yura Poltoratskiy
<yurapolt...@gmail.com> wrote:

Hi,

I want to use Ceph with oVirt in a somewhat non-standard way. The main idea is
to map an rbd volume to all computes and get the same block device, say
/dev/foo/bar, across all nodes, and then use the "POSIX compliant file systems"
option to add a Storage Domain.

Am I crazy?

Yes

Thnx :)




If not, what should I do next: create a file system on top of
/dev/foo/bar, say XFS, and add a DATA domain as POSIX compliant? Does it work -
I mean, is oVirt compatible with a non-clustered file system in this
scenario?

This can work only with a clustered file system, not with XFS. Double mounting
will quickly corrupt the file system.

Can you tell me which FS I should choose for some experiments?

And in general: what are the use cases for an option like "POSIX compliant FS"?




Mostly, I want to use rbd the way oVirt uses iSCSI storage, just to have
scalability and high availability (for example, when one storage node
fails).

You have two ways to use Ceph:

- via Cinder - you will get the best performance and scalability
- via CephFS - you will get all the features; it works like a fault-tolerant NFS

Nir
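
If I understand the CephFS option correctly, it would look roughly like the
sketch below. This is untested; the monitor hostname, the secret file path and
the "New Domain" field values are my assumptions, not something I have verified:

# what each host would effectively mount
mount -t ceph mon1.example.com:6789:/ /mnt/test -o name=admin,secretfile=/etc/ceph/admin.secret

# in the "New Domain" dialog, storage type "POSIX compliant FS":
#   Path:          mon1.example.com:6789:/
#   VFS Type:      ceph
#   Mount Options: name=admin,secretfile=/etc/ceph/admin.secret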


Thanks for advice.

PS. Yes, I know about Gluster but want to use Ceph :)




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] posix compliant fs with ceph rbd

2017-01-26 Thread Yura Poltoratskiy
2017-01-25 21:01 GMT+02:00 Logan Kuhn <supp...@jac-properties.com>:

> We prefer Ceph too and we've got our ovirt instance configured in two
> different ways.
>
> 1. Openstack Cinder, each VM's disk will have a single volume in ceph with
> all volumes being under the same pool.
>
I am familiar with OpenStack, but I do not want to deploy parts of it. That's
why I want to just map the rbd and use it the way VMware uses a mapped
datastore: create a file system on it and create one file per VM as a virtual
block device, or even skip the file system entirely and just use LVM.

This scenario is not far from iSCSI: one block device (with LVM on top) is
mapped across all computes, the oVirt agent manages the volumes on that block
device, and the agent also manages the mappings itself. My idea is to do the
block-device mapping by hand and leave all the rest of the process to oVirt.
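
To make the manual part concrete, what I have in mind is roughly this sketch
(pool and image names are made up; rbd create takes the size in MB by default):

# on one node: create the image
rbd create ovirt_data --pool rbd --size 1048576     # ~1 TB

# on every compute node: map it, the same device shows up everywhere
rbd map rbd/ovirt_data                              # e.g. /dev/rbd0

# then either mkfs on the device or put LVM on top and hand it over to oVirt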


> 2. Export an RBD via NFS from a gateway machine, this can be a trivially
> small physical or virtual machine that just exports the NFS share that is
> pointed at whatever RBD you choose to use.
>
I can see two cons:
1. A single point of failure.
2. A potential increase in latency.


>
> Not a direct answer to your question, but hopefully it helps.
>
> Regards,
> Logan
>
> On Wed, Jan 25, 2017 at 12:55 PM, Yura Poltoratskiy <yurapolt...@gmail.com
> > wrote:
>
>> Hi,
>>
>> I want to use Ceph with oVirt in a somewhat non-standard way. The main idea
>> is to map an rbd volume to all computes and get the same block device, say
>> /dev/foo/bar, across all nodes, and then use the "POSIX compliant file
>> systems" option to add a Storage Domain.
>>
>> Am I crazy? If not, what should I do next: create a file system on top of
>> /dev/foo/bar, say XFS, and add a DATA domain as POSIX compliant? Does it
>> work - I mean, is oVirt compatible with a non-clustered file system in this
>> scenario?
>>
>> Mostly, I want to use rbd the way oVirt uses iSCSI storage, just to have
>> scalability and high availability (for example, when one storage node
>> fails).
>>
>> Thanks for advice.
>>
>> PS. Yes, I know about Gluster but want to use Ceph :)
>>
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] posix compliant fs with ceph rbd

2017-01-25 Thread Yura Poltoratskiy
Hi,

I want to use Ceph with oVirt in a somewhat non-standard way. The main idea is
to map an rbd volume to all computes and get the same block device, say
/dev/foo/bar, across all nodes, and then use the "POSIX compliant file systems"
option to add a Storage Domain.

Am I crazy? If not, what should I do next: create a file system on top of
/dev/foo/bar, say XFS, and add a DATA domain as POSIX compliant? Does it work -
I mean, is oVirt compatible with a non-clustered file system in this
scenario?

Mostly, I want to use rbd the way oVirt uses iSCSI storage, just to have
scalability and high availability (for example, when one storage node
fails).

Thanks for advice.

PS. Yes, I know about Gluster but want to use Ceph :)
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users