Re: [ovirt-users] Networking setup

2017-04-14 Thread FERNANDO FREDIANI
Hello Alexis.

Why use all those physical NICs instead of simplifying with fewer NICs and
less cabling?

Things like Management, Migration and Display you may put on the same set
of NICs (bond0); as Management and Display traffic are marginal, you end up
with an almost dedicated NIC for Migration traffic.
Then yes, it is good to separate VM traffic and iSCSI onto different sets of NICs.

If you really want to use all your physical NICs, add the remaining one in
this scenario either to give more bandwidth to VM traffic or to use as
active/backup (also with bonding) if you have two non-stackable switches.
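
Just to illustrate the kind of layout I mean (a rough sketch only; the NIC
numbering is hypothetical and in practice you would build the bonds and assign
the logical networks via Setup Host Networks in the engine):

    bond0 (NIC 1 + NIC 2): ovirtmgmt + Display + Migration (as separate logical networks/VLANs)
    bond1 (NIC 5 + NIC 6): VM traffic
    NIC 3, NIC 4         : iSCSI, left unbonded as two separate paths for multipath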

Regards
Fernando

2017-04-14 10:37 GMT-03:00 Sandro Bonazzola :

> Adding Dan and Marcin
>
> On Wed, Apr 12, 2017 at 9:29 AM, Alexis HAUSER <
> alexis.hau...@imt-atlantique.fr> wrote:
>
>> Hi,
>>
>>
>> I have an oVirt installation with 3 nodes (5 soon), containing 6 network
>> cards (8 soon) and a multipath iSCSI array, and I would like to know how you
>> would advise me to choose which links to bond or not.
>>
>> I thought about :
>>
>> 1+2 : ovirtmgmt (bond)
>> 3+4 : iSCSI (multipath)
>> 5 : VM and Display
>> 6 : Migration
>>
>> What do you think about this configuration ?
>> Is it a bad idea to set VM and display on the same network interface ?
>> Does ovirtmgmt need high bandwidth?
>> In terms of bandwidth, is it a bad idea to have one single NIC for
>> Migration ?
>>
>>
>> Thanks in advance for your suggestions
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
> Red Hat EMEA 
> 
> TRIED. TESTED. TRUSTED. 
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] storage redundancy in Ovirt

2017-04-15 Thread FERNANDO FREDIANI
Well, make it not go through host1: dedicate a storage server to running
NFS and make both hosts connect to it.
In my view NFS is much easier to manage than any other type of storage,
especially FC and iSCSI, and performance is pretty much the same, so you
won't get better results just by moving to another type of storage.
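
As a rough sketch of what I mean (hostname, path and network are examples
only; the key point for oVirt is that the export is owned by vdsm:kvm, i.e.
36:36):

    # on the dedicated storage server
    mkdir -p /exports/ovirt-data && chown 36:36 /exports/ovirt-data
    echo '/exports/ovirt-data 192.168.10.0/24(rw,anonuid=36,anongid=36,all_squash)' >> /etc/exports
    exportfs -ra
    # then add storage.example.com:/exports/ovirt-data as an NFS storage domain in the engine;
    # every host mounts it directly, which you can confirm on each host with: mount | grep ovirt-data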

Fernando

2017-04-15 5:25 GMT-03:00 Konstantin Raskoshnyi :

> Hi guys,
> I have one nfs storage,
> it's connected through host1.
> host2 also has access to it, I can easily migrate vms between them.
>
> The question is - if host1 is down - all infrastructure is down, since all
> traffic goes through host1,
> is there any way in oVirt to use redundant storage?
>
> Only glusterfs?
>
> Thanks
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] storage redundancy in Ovirt

2017-04-15 Thread FERNANDO FREDIANI
Hello Konstantin.

That doesn't make much sense; it makes the whole cluster depend on a single host.
From what I know, any host talks directly to the NFS storage array or whatever
other shared storage you have.
Have you tested whether that host going down affects the others when the NFS
is mounted directly from an NFS storage array?

Fernando

2017-04-15 12:42 GMT-03:00 Konstantin Raskoshnyi :

> In ovirt you have to attach storage through specific host.
> If host goes down storage is not available.
>
> On Sat, Apr 15, 2017 at 7:31 AM FERNANDO FREDIANI <
> fernando.fredi...@upx.com> wrote:
>
>> Well, make it not go through host1 and dedicate a storage server for
>> running NFS and make both hosts connect to it.
>> In my view NFS is much easier to manage than any other type of storage,
>> specially FC and iSCSI and performance is pretty much the same, so you
>> won`t get better results other than management going to other type.
>>
>> Fernando
>>
>> 2017-04-15 5:25 GMT-03:00 Konstantin Raskoshnyi :
>>
>>> Hi guys,
>>> I have one nfs storage,
>>> it's connected through host1.
>>> host2 also has access to it, I can easily migrate vms between them.
>>>
>>> The question is - if host1 is down - all infrastructure is down, since
>>> all traffic goes through host1,
>>> is there any way in oVirt to use redundant storage?
>>>
>>> Only glusterfs?
>>>
>>> Thanks
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] storage redundancy in Ovirt

2017-04-17 Thread FERNANDO FREDIANI
If it really works this way then it's a huge downside in the architecture.
Perhaps someone can clarify in more detail.


Fernando


On 15/04/2017 14:53, Konstantin Raskoshnyi wrote:

Hi Fernando,
I see each host has a direct NFS mount, but yes, if the main host through
which I connected the NFS storage goes down, the storage becomes
unavailable and all VMs are down.



On Sat, Apr 15, 2017 at 10:37 AM FERNANDO FREDIANI 
mailto:fernando.fredi...@upx.com>> wrote:


Hello Konstantin.

That doesn`t make much sense make a whole cluster depend on a
single host. From what I know any host talk directly to NFS
Storage Array or whatever other Shared Storage you have.
Have you tested that host going down if that affects the other
with the NFS mounted directlly in a NFS Storage array ?

Fernando

2017-04-15 12:42 GMT-03:00 Konstantin Raskoshnyi
mailto:konra...@gmail.com>>:

In ovirt you have to attach storage through specific host.
If host goes down storage is not available.

On Sat, Apr 15, 2017 at 7:31 AM FERNANDO FREDIANI
mailto:fernando.fredi...@upx.com>>
wrote:

Well, make it not go through host1 and dedicate a storage
server for running NFS and make both hosts connect to it.
In my view NFS is much easier to manage than any other
type of storage, specially FC and iSCSI and performance is
pretty much the same, so you won`t get better results
other than management going to other type.

Fernando

2017-04-15 5:25 GMT-03:00 Konstantin Raskoshnyi
mailto:konra...@gmail.com>>:

Hi guys,
I have one nfs storage,
it's connected through host1.
host2 also has access to it, I can easily migrate
vms between them.

The question is - if host1 is down - all
infrastructure is down, since all traffic goes through
host1,
is there any way in oVirt to use redundant storage?

Only glusterfs?

Thanks


___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] storage redundancy in Ovirt

2017-04-17 Thread FERNANDO FREDIANI
mount, but yes, if main host to which I connected nfs storage going down
the storage becomes unavailable and all vms are down


On Sat, Apr 15, 2017 at 10:37 AM FERNANDO FREDIANI
mailto:fernando.fredi...@upx.com>> wrote:

Hello Konstantin.

That doesn`t make much sense make a whole cluster depend on a single
host. From what I know any host talk directly to NFS Storage Array or
whatever other Shared Storage you have.
Have you tested that host going down if that affects the other with the
NFS mounted directlly in a NFS Storage array ?

Fernando

2017-04-15 12:42 GMT-03:00 Konstantin Raskoshnyi
mailto:konra...@gmail.com>>:

In ovirt you have to attach storage through specific h

[ovirt-users] Latency threshold between Hosted Engine and Hosts

2017-04-17 Thread FERNANDO FREDIANI

Hello.

I have an Engine which is hosted in an optimal location for the people who
access it, and this Engine manages multiple datacenters, some close by and
some far away in terms of latency.


What is the maximum latency advised between the Engine and the hosts for
healthy operation, or does that not matter much as long as the Engine can
always reach the hosts?


Currently the maximum latency I have between Engine and hosts is 110ms,
and sometimes, when there is a non-optimal route, latency goes up to
170ms. Should I be concerned about this?


Thanks
Fernando

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] CentOS 7 and kernel 4.x

2017-04-19 Thread FERNANDO FREDIANI

Hi folks

Is anyone using KVM nodes running CentOS with an upgraded kernel from
ELRepo, either 4.5 (lt) or 4.10 (ml), and have you noticed any improvements
because of that?


What about oVirt Node NG? I don't really like to make many changes to the
oVirt Node image, but I wanted to hear from whoever may have done that and
is having good and stable results. And if so, whether there is a way to
build an install image with one of those newer kernels.
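
For reference, this is roughly how I have been pulling the ELRepo kernel on
a plain CentOS 7 host (the release RPM version below is an example; check
elrepo.org for the current one):

    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
    yum install https://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
    yum --enablerepo=elrepo-kernel install kernel-ml   # or kernel-lt for the long-term branch
    grub2-set-default 0 && grub2-mkconfig -o /boot/grub2/grub.cfg && reboot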


Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread FERNANDO FREDIANI

Hello.

Out of curiosity, why do you and people in general use replica 3 more
than replica 2?


If I understand correctly this seems overkill and a waste of storage, as 2
copies of the data (replica 2) seem pretty reasonable, similar to RAID 1,
and even in the worst case the data can be replicated again after a failure.
I see that replica 3 helps more on performance at the cost of space.


Fernando


On 24/04/2017 08:33, Sven Achtelik wrote:


Hi All,

my oVirt setup is 3 hosts with gluster and replica 3. I always try to
stay on the current version and I'm applying updates/upgrades if there
are any. For this I put a host in maintenance and also use the "Stop
Gluster Service" checkbox. After it's done updating I'll set it back
to active and wait until the engine sees all bricks again, and then
I'll go for the next host.


This worked fine for me over the last months, and now that I have more and
more VMs running, the changes that are written to the gluster volume
while a host is in maintenance become a lot bigger and it takes pretty
long for the healing to complete. What I don't understand is that I
don't really see a lot of network usage in the GUI during that time
and it feels quite slow. The network for gluster is 10G and I'm
quite happy with its performance; it's just the healing that
takes long. I noticed that because I couldn't update the third host
because of unsynced gluster volumes.


Is there any limiting variable that slows down traffic during healing 
that needs to be configured ? Or should I maybe change my updating 
process somehow to avoid having so many changes in queue?


Thank you,

Sven



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread FERNANDO FREDIANI

But then quorum doesn't replicate data 3 times, does it ?

Fernando


On 24/04/2017 10:24, Denis Chaplygin wrote:

Hello!

On Mon, Apr 24, 2017 at 3:02 PM, FERNANDO FREDIANI 
mailto:fernando.fredi...@upx.com>> wrote:


Out of curiosity, why do you and people in general use more
replica 3 than replica 2 ?


The answer is simple - quorum. With just two participants you don't
know what to do when your peer is unreachable. When you have three
participants, you are able to establish a majority. In that case, when
two participants are able to communicate, they know that they form the
majority, while the lesser part of the cluster knows that it should not
accept any changes.


If I understand correctly this seems overkill and waste of storage
as 2 copies of data (replica 2) seems pretty reasonable similar to
RAID 1 and still in the worst case the data can be replicated
after a fail. I see that replica 3 helps more on performance at
the cost of space.


You are absolutely right. You need two copies of the data to provide data
redundancy and you need three (or more) members in the cluster to provide
a distinguishable majority. Therefore we have arbiter volumes, thus
solving that issue [1].


[1] 
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread FERNANDO FREDIANI

Ok, great, thanks for the clarification.

Therefore a replica 3 configuration with an arbiter means the raw storage
space cost is 'similar' to RAID 1, and the actual data exists only twice,
on two different servers.
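
For anyone else following the thread, if I understood Denis correctly such a
volume is created with something along these lines (host names and brick
paths are just examples):

    gluster volume create data-vol replica 3 arbiter 1 \
        host1:/gluster/data/brick host2:/gluster/data/brick host3:/gluster/arbiter/brick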


Regards
Fernando


On 24/04/2017 11:35, Denis Chaplygin wrote:
With arbiter volume you still have a replica 3 volume, meaning that 
you have three participants in your quorum. But only two of those 
participants keep the actual data. Third one, the arbiter, stores only 
some metadata, not the files content, so data is not replicated 3 times.


On Mon, Apr 24, 2017 at 3:33 PM, FERNANDO FREDIANI 
mailto:fernando.fredi...@upx.com>> wrote:


But then quorum doesn't replicate data 3 times, does it ?

Fernando


On 24/04/2017 10:24, Denis Chaplygin wrote:

Hello!

On Mon, Apr 24, 2017 at 3:02 PM, FERNANDO FREDIANI
mailto:fernando.fredi...@upx.com>> wrote:

Out of curiosity, why do you and people in general use more
replica 3 than replica 2 ?


The answer is simple - quorum. With just two participants you
don't know what to do, when your peer is unreachable. When you
have three participants, you are able to establish a majority. In
that case, when two partiticipants are able to communicate, they
now, that lesser part of cluster knows, that it should not accept
any changes.

If I understand correctly this seems overkill and waste of
storage as 2 copies of data (replica 2)  seems pretty
reasonable similar to RAID 1 and still in the worst case the
data can be replicated after a fail. I see that replica 3
helps more on performance at the cost of space.


You are absolutely right. You need two copies of data to provide
data redundancy and you need three (or more) members in cluster
to provide distinguishable majority. Therefore we have arbiter
volumes, thus solving that issue [1].

[1]

https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/

<https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/>





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Bonding type

2017-04-25 Thread FERNANDO FREDIANI
If they are 2 switches and they are not stacked then you have to use
active-backup mode. If they are stacked you may just use mode 4 (802.3ad)
and aggregate the bandwidth.
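
For reference, the difference boils down to the Linux bonding options you end
up with (a sketch; these can be typed into the custom bonding mode box in
Setup Host Networks):

    mode=1 miimon=100              # active-backup: one link carries traffic, the other is standby
    mode=4 miimon=100 lacp_rate=1  # 802.3ad/LACP: aggregates both links, needs the switch ports in one LAG (stacked/MLAG)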


Fernando


On 25/04/2017 05:31, Alexis HAUSER wrote:

Hi,

I would like to bond 2 NICs from the RHV side. These 2 links would go to 2
separate switches.
Which kind of bond would you advise me to use (between the 4 proposed
modes or the custom mode)?


Regardes




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-25 Thread FERNANDO FREDIANI

RAID 6 doesn't make exactly 3 copies of the data.

I think storage is expensive enough, compared to the total cost of the
platform, that 3 copies is a waste of storage or a luxury, given that if you
have a permanent failure you can still make a new second copy of the data,
provided you have storage left for that.



On 25/04/2017 10:26, Donny Davis wrote:
I personally want three copies of my data, more akin to RAID 6(ish) so 
in my case replica 3 makes perfect sense.


On Mon, Apr 24, 2017 at 11:34 AM, Denis Chaplygin <mailto:dchap...@redhat.com>> wrote:


Hello!

On Mon, Apr 24, 2017 at 5:08 PM, FERNANDO FREDIANI
mailto:fernando.fredi...@upx.com>> wrote:

Hi Denis, understood.
What if in the case of adding a fourth host to the running
cluster, will the copy of data be kept only twice in any of
the 4 servers ?


replica volumes can be built only from 2 or 3 bricks. There is no
way to make a replica volume from 4 bricks.

But you may combine distributed volumes and replica volumes [1]:

gluster volume create test-volume replica 2 transport tcp
server1:/b1 server2:/b2 server3:/b3 server4:/b4

test-volume would be like a RAID 10 - you will have two replica
volumes, b1+b2 and b3+b4, combined into a single distributed volume.
In that case you will have only two copies of your data. Part of your
data will be stored twice on b1 and b2 and another part will be stored
twice on b3 and b4.
You will be able to extend that distributed volume by adding new
replicas.


[1]

https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/#creating-distributed-replicated-volumes

<https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/#creating-distributed-replicated-volumes>

___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Performance differences between ext4 and XFS

2017-06-07 Thread FERNANDO FREDIANI
Just wanted to find out what filesystem people are using to host Virtual 
Machines in qcow2 files in a filesystem in Localstorage, ext4 or XFS ?


I normally like XFS for big files which is the case fo VMs, but wondered 
if anyone could see any performance advantage when compared with ext4.


Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Performance differences between ext4 and XFS

2017-06-08 Thread FERNANDO FREDIANI
Many thanks for your input Markus. It helps to decide before putting the
server into production.


Regards
Fernando


On 08/06/2017 02:19, Markus Stockhausen wrote:

Hi Fernando,

we personally like XFS very much. But XFS + qcow2 (even for snapshots
in oVirt) comes close to a no-go these days. We are experiencing excessive
fragmentation.

For more info see unresolved Redhat Info:

https://access.redhat.com/solutions/532663

Even with tuning the XFS allocation policy on the qcow2 directory with

xfs_io -c 'extsize -R 2M' 

A nice 3rd party explanation can be found here:

https://blog.codecentric.de/en/2017/04/xfs-possible-memory-allocation-deadlock-kmem_alloc/

Markus


*From:* users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
"FERNANDO FREDIANI" [fernando.fredi...@upx.com]

*Sent:* Wednesday, 7 June 2017 23:35
*To:* users@ovirt.org
*Subject:* [ovirt-users] Performance differences between ext4 and XFS

Just wanted to find out what filesystem people are using to host 
Virtual Machines in qcow2 files in a filesystem in Localstorage, ext4 
or XFS ?


I normally like XFS for big files which is the case fo VMs, but 
wondered if anyone could see any performance advantage when compared 
with ext4.


Fernando


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Host install common error

2017-06-08 Thread FERNANDO FREDIANI

Hello folks.

One of the most (if not the most) annoying problems of oVirt is the
well-known message "... installation failed. Command returned failure code 1
during SSH session ...", which happens quite often in several situations.


Scrubbing the installation logs, it seems that most things go well, but then
it stops with a message saying: "ERROR otopi.context
context._executeMethod:151 Failed to execute stage 'Setup validation':
Cannot locate vdsm package, possible cause is incorrect channels" -
followed by another message: "DEBUG otopi.context
context.dumpEnvironment:770 ENV BASE/exceptionInfo=list:'[(<type 'exceptions.RuntimeError'>, RuntimeError('Cannot locate vdsm package,
possible cause is incorrect channels',), )]'"


I am not sure why it would complain about the repositories, as this is a
minimal CentOS 7 install and the oVirt repository is added by
oVirt Engine itself, so I assumed it added the most appropriate one for
its own version.
I even tried to copy over the same repositories used on the hosts that
are installed and working fine, but that message shows up again on the
install retries.
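
For the record, the checks I have been doing directly on the host are nothing
fancy, just confirming the package is resolvable from the configured repos:

    yum clean all
    yum repolist enabled            # the ovirt 4.1 repos should be listed here
    yum info vdsm                   # should resolve to a vdsm package from those repos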


Does anyone have any other hints where to look ?

For reference my engine version running is: 4.1.1.6-1.el7.centos.

Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Question about Clusters and Storage usage

2017-06-12 Thread FERNANDO FREDIANI

Hello folks.

I have here a scenario where I have one Datacenter and inside it I have 
one Cluster which has multiple hosts with Shared storage between them.


Now I am willing to add another standalone host with local storage only,
and logic tells me to add it to the same datacenter already created, as they
are in fact in the same physical datacenter; but as this host has only local
storage I obviously shouldn't add it to the existing cluster.


Question is: as the datacenter was created with storage type Shared,
    - Should I create a new cluster and add the host to it, even though the
new host has only local storage?
    - Or should I create another datacenter with storage type Local, a
cluster within it, and then add the host there?


Thanks
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Host name already in use in oVirt Engine

2017-06-13 Thread FERNANDO FREDIANI

Hello.

I had a previous datacenter and cluster with a host in it which I have
removed completely from oVirt Engine. In order to remove it I did the
following steps:


- Removed all virtual machines on top of the host
- Put the only local datastore in maintenance mode (it didn't allow me to
remove it for some reason; it said I had to remove the datacenter instead)
- As the datastore couldn't be removed, nor the host, I then removed the
datacenter and it removed everything.


Then I created a new cluster and tried to add the same host with the
same hostname in it, and I am getting the message: "Cannot add Host. The
Host name is already in use, please choose a unique name and try again."


It seems that something was left behind in the database which still
believes that host exists in oVirt Engine. How can I clean that up and
add it again, given that I am not willing to change its name?


Thanks
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Host name already in use in oVirt Engine

2017-06-13 Thread FERNANDO FREDIANI

Replying on my own email.

I managed to add the host again but had to do some very manual steps:

- Removed /etc/vdsm/vdsm.id from the host
- engine=# select vds_id from vds_static where host_name = 'host.name';
- engine=# delete from vds_statistics where vds_id = 'host.name.uuid';
- engine=# delete from vds_dynamic where vds_id = 'host.name.uuid';
- engine=# delete from vds_static where vds_id = 'host.name.uuid';
  (here 'host.name.uuid' is the vds_id returned by the select above)
- Ran uuidgen > /etc/vdsm/vdsm.id on the host

Should I report this as a bug? When I removed everything from the Admin
Web Interface it should have done all this cleanup in the database.


Fernando

On 13/06/2017 10:04, FERNANDO FREDIANI wrote:

Hello.

I had a previous Datacenter and Cluster with a Host in it which I have 
removed completelly from oVirt Engine. In order to remove I did the 
following steps:


- Removed all Virtual Machines on the top of the Host
- Put the only Local Datastore in Maintenence mode (It didn't allow to 
remove it for some reason. Said I had to remove the Datacenter instead)
- As the Datastore couldn't be remove so the Host. I then removed the 
Datacenter and it removed everything.


Then I created a new Cluster and tried to add the same Host with the 
same hostname in it and I am getting the message: "Cannot add Host. 
The Host name is already in use, please choose a unique name and try 
again."


It seems that something was left lost in the Database which still 
beleives that host exists in oVirt Engine. How can I clean that up and 
add it again as I am not willing to change its name.


Thanks
Fernand


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt storage best practise

2017-06-14 Thread FERNANDO FREDIANI
I normally assume that any performance gains from directly attaching a
LUN to a virtual machine, rather than using it in the traditional way, are
so small that they don't compensate for the extra hassle of doing that. I
would avoid it as much as I can, unless there is some very special reason
where you cannot do it any other way. The only real usage for it I have
seen so far was Microsoft SQL Server clustering requirements.


Fernando


On 14/06/2017 03:23, Idan Shaby wrote:
Direct luns are disks that are not managed by oVirt. Ovirt 
communicates directly with the lun itself, without any other layer in 
between (like lvm in image disks).
The advantage of the direct lun is that it should have better 
performance since there's no overhead of another layer in the middle.
The disadvantage is that you can't take a snapshot of it (when 
attached to a vm, of course), can't make it a part of a template, 
export it, and in general - you don't manage it.



Regards,
Idan

On Mon, Jun 12, 2017 at 10:10 PM, Stefano Bovina > wrote:


Thank you very much.
What about "direct lun" usage and database example?


2017-06-08 16:40 GMT+02:00 Elad Ben Aharon mailto:ebena...@redhat.com>>:

Hi,
Answer inline

On Thu, Jun 8, 2017 at 1:07 PM, Stefano Bovina
mailto:bov...@gmail.com>> wrote:

Hi,
does a storage best practise document for oVirt exist?


Some examples:

oVirt allows to extend an existing storage domain: Is it
better to keep a 1:1 relation between LUN and oVirt
storage domain?

What do you mean by 1:1 relation? Between storage domain and
the number of LUNs the domain reside on?

If not, is it better to avoid adding LUNs to an already
existing storage domain?

No problems with storage domain extension.


Following the previous questions:

Is it better to have 1 Big oVirt storage domain or many
small oVirt storage domains?

Depends on your needs; be aware of the following:
- Each domain has its own metadata which allocates ~5GB of the
domain size.
- Each domain is being constantly monitored by the system, so a
large number of domains can decrease the system performance.
There are also downsides to having big domains, like less
flexibility

There is a max num VM/disks for storage domain?


In which case is it better to use "direct attached lun"
with respect to an image on an oVirt storage domain?


Example:

Simple web server: -> image
Large database (simple example):
   - root, swap etc: 30GB -> image?
   - data disk: 500GB -> (direct or image?)

Regards,

Stefano

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ovirt-guest-agent - Ubuntu 16.04

2017-06-28 Thread FERNANDO FREDIANI

Hello

Is the maintainer of ovirt-guest-agent for Ubuntu on this mail list ?

I have noticed that if you install the ovirt-guest-agent package from the
Ubuntu repositories it doesn't start. It throws an error about Python and
never starts. Has anyone noticed the same? The OS in this case is a clean
minimal install of Ubuntu 16.04.


Installing it from the following repository works fine - 
http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04/xUbuntu_16.04
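
For anyone hitting the same thing, this is roughly what I did to use that
repository (the Release.key path follows the usual OBS layout, so double-check
it against the page above):

    echo 'deb http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04/xUbuntu_16.04/ /' \
        > /etc/apt/sources.list.d/ovirt-guest-agent.list
    wget -qO - http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04/xUbuntu_16.04/Release.key | apt-key add -
    apt-get update && apt-get install ovirt-guest-agent
    systemctl status ovirt-guest-agent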


Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Virtual Machine looses connectivity with no clear explanation

2017-07-03 Thread FERNANDO FREDIANI
I have a rather strange issue which is affecting one of my most recently
deployed hypervisors. It is a CentOS 7 host (not an oVirt Node) which runs
only 3 virtual machines.


One of these VMs has reasonable output traffic at peaks (500 -
700Mbps), and the hypervisor underneath is connected to the switch via a
bond (mode=2) which in turn carries bond0.XX VLAN interfaces that are
connected to different bridges for each network. The VM in question is
connected to the bridge "ovirtmgmt".


When the problem happens the VM stops passing traffic and cannot reach
even the router or other VMs in the same layer 2 segment. It seems the
bridge stops passing traffic for that particular VM. Other VMs have worked
fine since they were created. When this problem happens I just need to go
to its console and run a reboot (Ctrl-Alt-Del); I don't even need to power
it off and on again using oVirt Engine.
I have even re-installed this VM's operating system from scratch but the
problem persists. I have also changed the vNIC MAC address in case of
conflicting MAC addresses somewhere in that layer 2 segment (already checked).


Lastly, my hypervisor machine (due to a mistake) has been running with
SELinux disabled; I am not sure whether that could have anything to do
with this behavior.


Anyway, has anyone ever seen any behavior like that?

Thanks
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] download/export a VM image

2017-07-03 Thread FERNANDO FREDIANI

I have exactly the same doubt here as well.


On 03/07/2017 12:05, aduckers wrote:

Running a 4.1 cluster with FC SAN storage.  I’ve got a VM that I’ve customized, 
and would now like to pull that out of oVirt in order to share with folks 
outside the environment.
What’s the easiest way to do that?
I see that the export domain is being deprecated, though I can still set one up 
at this time.  Even in the case of an NFS export domain though, it looks like 
I’d need to drill down into the exported file system and find the correct image 
based on VMID (I think..).

Is there a simple way to grab a VM image?

Thanks

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt-guest-agent - Ubuntu 16.04

2017-07-04 Thread FERNANDO FREDIANI
I am still getting problems with ovirt-guest-agent on Ubuntu machines in
any scenario, new or upgraded installation.


One of the VMs has been upgraded to Ubuntu 17.04 (zesty) and the
upgraded version of ovirt-guest-agent also doesn't start, due to something
related to Python.


When trying to run it manually with: "/usr/bin/python 
/usr/share/ovirt-guest-agent/ovirt-guest-agent.py" I get the following 
error:
root@hostname:~# /usr/bin/python 
/usr/share/ovirt-guest-agent/ovirt-guest-agent.py

*** stack smashing detected ***: /usr/bin/python terminated
Aborted (core dumped)

I also tried to install the previous version (16.04) from evilissimo but
it doesn't work either.


Fernando


On 30/06/2017 06:16, Sandro Bonazzola wrote:
Adding Laszlo Boszormenyi (GCS) <mailto:g...@debian.org>> which is the maintainer according to 
http://it.archive.ubuntu.com/ubuntu/ubuntu/ubuntu/pool/universe/o/ovirt-guest-agent/ovirt-guest-agent_1.0.13.dfsg-1.dsc 



On Wed, Jun 28, 2017 at 5:37 PM, FERNANDO FREDIANI 
mailto:fernando.fredi...@upx.com>> wrote:


Hello

Is the maintainer of ovirt-guest-agent for Ubuntu on this mail list ?

I have noticed that if you install ovirt-guest-agent package from
Ubuntu repositories it doesn't start. Throws an error about python
and never starts. Has anyone noticied the same ? OS in this case
is a clean minimal install of Ubuntu 16.04.

Installing it from the following repository works fine -

http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04/xUbuntu_16.04

<http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04/xUbuntu_16.04>

Fernando

___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>




--

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA <https://www.redhat.com/>

<https://red.ht/sig>  
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Bizzare oVirt network problem

2017-07-12 Thread FERNANDO FREDIANI

Hello.

I am facing a pretty bizarre problem on two of my nodes running oVirt. A
given VM passing a few hundred Mbps of traffic simply stops passing
traffic and only recovers after a reboot. Checking the bridge with
'brctl showmacs BRIDGE' I see the VM's MAC address missing during this
event.
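
For anyone wanting to reproduce the check, this is what I run when it happens
(the MAC below is just a placeholder for the affected VM's vNIC):

    brctl showmacs ovirtmgmt | grep -i 00:1a:4a:16:01:51
    bridge fdb show br ovirtmgmt | grep -i 00:1a:4a:16:01:51
    # while the VM is healthy both commands list the MAC on the VM's vnet port;
    # during the event the entry is simply gone until the VM is rebooted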


It seems the bridge simply unlearns the VM's MAC address, which only
returns when the VM is rebooted.
This problem has happened on two different nodes running on different
hardware, in different datacenters, with different network architectures,
different switch vendors and different bonding modes.


The main differences these nodes have compared to others I have, which
don't show this problem, are:

- The CentOS 7 installed is a minimal installation instead of oVirt Node NG
- The kernel used is 4.12 (ELRepo) instead of the default 3.10
- The ovirtmgmt network is also used for the virtual machine
showing this problem.


Does anyone have any idea whether it may have anything to do with oVirt (any
filters) or with any of the components that differ from an oVirt Node NG installation?


Thanks
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Bizzare oVirt network problem

2017-07-12 Thread FERNANDO FREDIANI

Hello Pavel

What do you mean by another oVirt instance? In one datacenter it has 2
different clusters (or datacenters, in the oVirt way of organizing things),
but in the other datacenter the oVirt Node is standalone.


Let me know.

Fernando


On 12/07/2017 16:49, Pavel Gashev wrote:


Fernando,

It looks like you have another oVirt instance in the same network 
segment(s). Don’t you?


*From: * on behalf of FERNANDO FREDIANI 


*Date: *Wednesday, 12 July 2017 at 16:21
*To: *"users@ovirt.org" 
*Subject: *[ovirt-users] Bizzare oVirt network problem

Hello.

I am facing a pretty bizzare problem in two of my Nodes running oVirt. 
A given VM running a few hundred Mbps of traffic simply stops passing 
traffic and only recovers after a reboot. Checking the bridge with 
'brctl showmacs BRIDGE' I see the VM's MAC address missing during this 
event.


It seems the bridge simply unlearn the VM's mac address which only 
returns when the VM is rebooted.
This problems happened in two different Nodes running in different 
hardware, in different datacenter, in different network architecture, 
different switch vendors and different bonding modes.


The main differences these Nodes have compared to others I have and 
which don't show this problem are:

- The CentOS 7 installed is a Minimal installation instead of oVirt-NG
- The Kernel used is 4.12 (elrepo) instead of the default 3.10
- The ovirtmgmt network is used also for the Virtual Machine 
showing this problem.


Has anyone have any idea if it may have anything to do with oVirt (any 
filters) or any of the components different from a oVirt-NG installation ?


Thanks
Fernando



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Backup oVirt Node configuration

2017-07-18 Thread Fernando Frediani
Folks, I had to reinstall an oVirt Node a few times these days. This
meant reconfiguring it all in order to add it back to oVirt Engine.

What is the best way to back up an oVirt Node configuration, so that when you
reinstall it, or if it fails completely, you can just reinstall and restore
the backed-up files with network configuration, UUID, VDSM, etc.?
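
So far I have been doing it by hand, just tarring up what I believe matters
before a reinstall (a sketch; this list of paths is my own guess, not an
official one):

    tar czf /root/node-backup-$(hostname)-$(date +%F).tar.gz \
        /etc/sysconfig/network-scripts /etc/vdsm /etc/hosts /etc/iscsi /etc/multipath.conf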

Thanks
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt on sdcard?

2017-07-20 Thread FERNANDO FREDIANI
The proposal seems interesting, but it is manual and error-prone. I would
much rather have this come out of the box, as it does with VMware ESXi.


A 'squashfs' type of image boots up and runs completely in memory. Any
logging is written and rotated also in memory, which keeps only the
recent period of logs needed for quick troubleshooting. Whoever wants
more than that can easily set up an rsyslog server to collect and keep the
logs for a longer period. With this, only the modified node
configuration is written to the SD card/USB stick when it changes, which
is not often, and that makes it a reliable solution.
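
Shipping the logs off the node is then a one-liner on the client side
(logserver.example.com is obviously a placeholder):

    echo '*.* @@logserver.example.com:514' > /etc/rsyslog.d/90-remote.conf
    systemctl restart rsyslog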


I personally have a Linux + libvirt solution installed and running on a
USB stick that does exactly this (writes all the logs in memory) and
it has been running for 3+ years without any issues.


Fernando


On 20/07/2017 03:54, Lionel Caignec wrote:

Ok thank you,

for now i'm not so advanced on architecture design i'm just thinking of what 
can i do.

Lionel

- Mail original -
De: "Yedidyah Bar David" 
À: "Lionel Caignec" 
Cc: "users" 
Envoyé: Jeudi 20 Juillet 2017 08:03:50
Objet: Re: [ovirt-users] ovirt on sdcard?

On Wed, Jul 19, 2017 at 10:16 PM, Lionel Caignec  wrote:

Hi,

i'm planning to install some new hypervisors (ovirt) and i'm wondering if it's 
possible to get it installed on sdcard.
I know there is write limitation on this kind of storage device.
Is it a viable solution? there is somewhere some tuto about tuning ovirt on 
this kind of storage?

Perhaps provide some more details about your plans?

The local disk is normally used only for standard OS-level stuff -
mostly logging. If you put /var/log on NFS/iSCSI/whatever, I think
you should not expect much other local writing.
Didn't test this myself.

People are doing many other things, including putting all of the
root filesystem on remote storage. There are many options, depending
on your hardware, your existing infrastructure, etc.

Best,


Thanks

--
Lionel
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Problemas with ovirtmgmt network used to connect VMs

2017-07-21 Thread FERNANDO FREDIANI

Has anyone had problem when using the ovirtmgmt bridge to connect VMs ?

I am still facing a bizarre problem where some VMs connected to this
bridge stop passing traffic. Checking the problem further I see that the
VM's MAC address stops being learned by the bridge, and the problem is
resolved only with a VM reboot.


When I last saw the problem I ran 'brctl showmacs ovirtmgmt' and it showed
me the VM's MAC address with an ageing timer of 200.19. After the VM reboot
I see the same MAC with an ageing timer of 0.00.
I don't see this in another environment where ovirtmgmt is not used
for VMs.


Does anyone have any clue about this type of behavior ?

Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problemas with ovirtmgmt network used to connect VMs

2017-07-24 Thread FERNANDO FREDIANI
I have not tried this yet, Edward, but will do next time it happens. The
source MAC address should be the same MAC as the VM's; I don't see any
reason for it to change, either from within the VM or outside.


What type of things would make the bridge stop learning a given VM's MAC
address?


Fernando


On 23/07/2017 07:51, Edward Haas wrote:
Have you tried to use tcpdump at the VM vNIC to examine if there is 
traffic trying to get out from there? And with what source mac address?


Thanks,
Edy,

On Fri, Jul 21, 2017 at 5:36 PM, FERNANDO FREDIANI 
mailto:fernando.fredi...@upx.com>> wrote:


Has anyone had problem when using the ovirtmgmt bridge to connect
VMs ?

I am still facing a bizarre problem where some VMs connected to
this bridge stop passing traffic. Checking the problem further I
see its mac address stops being learned by the bridge and the
problem is resolved only with a VM reboot.

When I last saw the problem I run brctl showmacs ovirtmgmt and it
shows me the VM's mac adress with agening timer 200.19. After the
VM reboot I see the same mac with agening timer 0.00.
I don't see it in another environment where the ovirtmgmt is not
used for VMs.

Does anyone have any clue about this type of behavior ?

Fernando
___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problemas with ovirtmgmt network used to connect VMs

2017-07-24 Thread FERNANDO FREDIANI
Hello Edward, this happened again today and I was able to check more 
details.


So:

- The VM stopped passing any network traffic.
- Checking 'brctl showmacs ovirtmgmt' it showed the VM's mac address 
missing.
- I then went to oVirt Engine, under VM's 'Network Interfaces' tab, 
clicked Edit and changed the Link State to Down then to Up and it 
recovered its connectivity.
- Another 'brctl showmacs ovirtmgmt' showed the VM's mac address learned 
again by the bridge.


This Node server has the particularity of sharing the ovirtmgmt with 
VMs. Could it possibly be the cause of the issue in any way ?


Thanks
Fernando


On 24/07/2017 09:47, FERNANDO FREDIANI wrote:


Not tried this yet Edwardh, but will do at next time it happens. THe 
source mac address should be the mac as the VM. I don't see any reason 
for it to change from within the VM ou outside.


What type of things would make the bridge stop learning a given VM mac 
address ?


Fernando


On 23/07/2017 07:51, Edward Haas wrote:
Have you tried to use tcpdump at the VM vNIC to examine if there is 
traffic trying to get out from there? And with what source mac address?


Thanks,
Edy,

On Fri, Jul 21, 2017 at 5:36 PM, FERNANDO FREDIANI 
mailto:fernando.fredi...@upx.com>> wrote:


Has anyone had problem when using the ovirtmgmt bridge to connect
VMs ?

I am still facing a bizarre problem where some VMs connected to
this bridge stop passing traffic. Checking the problem further I
see its mac address stops being learned by the bridge and the
problem is resolved only with a VM reboot.

When I last saw the problem I run brctl showmacs ovirtmgmt and it
shows me the VM's mac adress with agening timer 200.19. After the
VM reboot I see the same mac with agening timer 0.00.
I don't see it in another environment where the ovirtmgmt is not
used for VMs.

Does anyone have any clue about this type of behavior ?

Fernando
___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt Node

2017-07-25 Thread FERNANDO FREDIANI
Josep, were these hosts CentOS minimal installs or oVirt Node NG
images? If they were CentOS minimal installs you must install vdsm
before adding the host to oVirt Engine.


Fernando


On 25/07/2017 14:13, Jose Vicente Rosello Vila wrote:


Hello users,

I installed ovirt engine 4.1.3.5-1.el7.centos and I tried to install 2 
hosts, but the result was “ install failed”.


Both nodes have been installed from the CD image.

What can I do?

Thanks,




Josep Vicent Roselló Vila

Àrea de Sistemes d’Informació i Comunicacions

*Universitat Politècnica de València *



Camí de Vera, s/n

46022 VALÈNCIA

Edifici 4L




Tel. +34 963 879 075 (ext.78746)

rose...@asic.upv.es 



Before printing this message, consider whether it is necessary.
Caring for the environment is everyone's responsibility!



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt VM backups

2017-07-27 Thread FERNANDO FREDIANI
One thing that I cannot understand when doing oVirt backups is the need to
clone the VM in order to copy it. Why isn't it possible, as in VMware, to
just snapshot and copy the read-only disk?

Fernando


On 27/07/2017 07:14, Abi Askushi wrote:

Hi All,

For VM backups I am using some python script to automate the snapshot 
-> clone -> export -> delete steps (although with some issues when 
trying to backups a Windows 10 VM)


I was wondering if there is there any plan to integrate VM backups in 
the GUI or what other recommended ways exist out there.


Thanx,
Abi


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problemas with ovirtmgmt network used to connect VMs

2017-07-28 Thread FERNANDO FREDIANI

Hello Edward and all.

I keep getting these disconnects; were you able to find anything that you
would suggest changing?


As I mentioned, this machine, unlike the others where it never
happened, uses the ovirtmgmt network as a VM network and has kernel 4.12
instead of the default 3.10 from CentOS 7.3. It seems a particular
situation is triggering this behavior, but I could not gather any hints
yet.


I have tried running a regular arping to force the bridge to always learn
the VM's MAC address, but it doesn't seem to work, and every once in a
while the bridge 'forgets' that particular VM's MAC address.
I have also rebuilt the VM completely, changing its operating system
from Ubuntu 16.04 to CentOS 7.3, and the same problem happened.
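
Next time it drops I will capture at the VM's vNIC as Edward suggested, with
something like this (vnet0 is a guess; 'virsh domiflist <vm>' shows the right
tap device):

    tcpdump -i vnet0 -e -n -c 20
    # -e prints the source MAC of each frame, so it should show whether the VM
    # is still sending with its own MAC while the bridge has already forgotten it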


Fernando


On 24/07/2017 18:20, FERNANDO FREDIANI wrote:


Hello Edward, this happened again today and I was able to check more 
details.


So:

- The VM stopped passing any network traffic.
- Checking 'brctl showmacs ovirtmgmt' it showed the VM's mac address 
missing.
- I then went to oVirt Engine, under VM's 'Network Interfaces' tab, 
clicked Edit and changed the Link State to Down then to Up and it 
recovered its connectivity.
- Another 'brctl showmacs ovirtmgmt' showed the VM's mac address 
learned again by the bridge.


This Node server has the particularity of sharing the ovirtmgmt with 
VMs. Could it possibly be the cause of the issue in any way ?


Thanks
Fernando


On 24/07/2017 09:47, FERNANDO FREDIANI wrote:


Not tried this yet Edwardh, but will do at next time it happens. THe 
source mac address should be the mac as the VM. I don't see any 
reason for it to change from within the VM ou outside.


What type of things would make the bridge stop learning a given VM 
mac address ?


Fernando


On 23/07/2017 07:51, Edward Haas wrote:
Have you tried to use tcpdump at the VM vNIC to examine if there is 
traffic trying to get out from there? And with what source mac address?


Thanks,
Edy,

On Fri, Jul 21, 2017 at 5:36 PM, FERNANDO FREDIANI 
mailto:fernando.fredi...@upx.com>> wrote:


Has anyone had problem when using the ovirtmgmt bridge to
connect VMs ?

I am still facing a bizarre problem where some VMs connected to
this bridge stop passing traffic. Checking the problem further I
see its mac address stops being learned by the bridge and the
problem is resolved only with a VM reboot.

When I last saw the problem I run brctl showmacs ovirtmgmt and
it shows me the VM's mac adress with agening timer 200.19. After
the VM reboot I see the same mac with agening timer 0.00.
I don't see it in another environment where the ovirtmgmt is not
used for VMs.

Does anyone have any clue about this type of behavior ?

Fernando
___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>








___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Communication Problems between Engine and Hosts

2017-08-02 Thread FERNANDO FREDIANI

Hello.

Yesterday I had a pretty strange problem in one of our architectures. My
oVirt Engine, which runs in one datacenter and controls nodes both locally
and remotely, lost communication with the remote nodes in another datacenter.
Up to this point nothing was wrong, as the nodes can continue working as
expected and running their virtual machines, each without depending on
the oVirt Engine.


What happened at some point is that when the communication between
Engine and hosts came back, the hosts got confused and initiated a live
migration of ALL VMs from one to the other. I also had to restart the vdsmd
agent on all hosts in order to bring my environment back to sanity.
What adds even more strangeness to this scenario is that one of the
affected hosts doesn't belong to the same cluster as the others and still
had to have vdsmd restarted.


I understand the hosts can survive without the Engine online, with
reduced capabilities, and can communicate between themselves, but without
affecting the VMs or needing to do what happened in this scenario.


Am I wrong in any of these assumptions?

Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-07 Thread FERNANDO FREDIANI
For any RAID 5 or 6 configuration I normally follow a simple golden rule
which has given good results so far:

- up to 4 disks: RAID 5
- 5 or more disks: RAID 6

However I didn't really understand the recommendation to use any
RAID with GlusterFS. I always thought that GlusterFS likes to work in
JBOD mode and control the disks (bricks) directly, so you can create
whatever distribution rule you wish, and if a single disk fails you just
replace it and the data is obviously replicated from another one.
The only downside of using it this way is that the replication traffic
will flow across all servers, but that is not much of a big issue.


Can anyone elaborate on using RAID + GlusterFS versus JBOD + GlusterFS?

Thanks
Regards
Fernando


On 07/08/2017 03:46, Devin Acosta wrote:


Moacir,

I have recently installed multiple Red Hat Virtualization hosts for 
several different companies, and have dealt with the Red Hat Support 
Team in depth about optimal configuration in regards to setting up 
GlusterFS most efficiently and I wanted to share with you what I learned.


In general Red Hat Virtualization team frowns upon using each DISK of 
the system as just a JBOD, sure there is some protection by having the 
data replicated, however, the recommendation is to use RAID 6 
(preferred) or RAID-5, or at least RAID-1 at the very least.


Here is the direct quote from Red Hat when I asked about RAID and Bricks:

"A typical Gluster configuration would use RAID underneath the
bricks. RAID 6 is most typical as it gives you 2 disk failure
protection, but RAID 5 could be used too. Once you have the RAIDed
bricks, you'd then apply the desired replication on top of that. The
most popular way of doing this would be distributed replicated with 2x
replication. In general you'll get better performance with larger
bricks. 12 drives is often a sweet spot. Another option would be to
create a separate tier using all SSD's."


In order to do SSD tiering, from my understanding you would need 1 x NVMe
drive in each server, or 4 x SSD hot tier (it needs to be distributed,
replicated for the hot tier if not using NVMe). So with you only
having 1 SSD drive in each server, I'd suggest maybe looking into the
NVMe option.


Since you're using only 3 servers, what I'd probably suggest is to do
(2 Replicas + Arbiter Node); this setup actually doesn't require the
3rd server to have big drives at all as it only stores meta-data about
the files and not actually a full copy.


Please see the attached document that was given to me by Red Hat to
get more information on this. Hope this information helps you.

--

Devin Acosta, RHCA, RHVCA
Red Hat Certified Architect

On August 6, 2017 at 7:29:29 PM, Moacir Ferreira 
(moacirferre...@hotmail.com ) wrote:


I am willing to assemble a oVirt "pod", made of 3 servers, each with 
2 CPU sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The idea is 
to use GlusterFS to provide HA for the VMs. The 3 servers have a dual 
40Gb NIC and a dual 10Gb NIC. So my intention is to create a loop 
like a server triangle using the 40Gb NICs for virtualization files 
(VMs .qcow2) access and to move VMs around the pod (east /west 
traffic) while using the 10Gb interfaces for giving services to the 
outside world (north/south traffic).



This said, my first question is: How should I deploy GlusterFS in 
such oVirt scenario? My questions are:



1 - Should I create 3 RAID (i.e.: RAID 5), one on each oVirt node, 
and then create a GlusterFS using them?


2 - Instead, should I create a JBOD array made of all server's disks?

3 - What is the best Gluster configuration to provide for HA while 
not consuming too much disk space?


4 - Does a oVirt hypervisor pod like I am planning to build, and the 
virtualization environment, benefits from tiering when using a SSD 
disk? And yes, will Gluster do it by default or I have to configure 
it to do so?



At the bottom line, what is the good practice for using GlusterFS in 
small pods for enterprises?



You opinion/feedback will be really appreciated!

Moacir

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-07 Thread FERNANDO FREDIANI
Moacir, I believe that to use the 3 servers directly connected to each
other without a switch you have to have a bridge on each server for every
2 physical interfaces, to allow the traffic to pass through at layer 2 (is
it possible to create this from the oVirt Engine web interface?). If your
ovirtmgmt network is separate from the others (it really should be), that
should be fine to do.



Fernando


On 07/08/2017 07:13, Moacir Ferreira wrote:


Hi, in-line responses.


Thanks,

Moacir



*From:* Yaniv Kaul 
*Sent:* Monday, August 7, 2017 7:42 AM
*To:* Moacir Ferreira
*Cc:* users@ovirt.org
*Subject:* Re: [ovirt-users] Good practices


On Sun, Aug 6, 2017 at 5:49 PM, Moacir Ferreira 
mailto:moacirferre...@hotmail.com>> wrote:


I am willing to assemble a oVirt "pod", made of 3 servers, each
with 2 CPU sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The
idea is to use GlusterFS to provide HA for the VMs. The 3 servers
have a dual 40Gb NIC and a dual 10Gb NIC. So my intention is to
create a loop like a server triangle using the 40Gb NICs for
virtualization files (VMs .qcow2) access and to move VMs around
the pod (east /west traffic) while using the 10Gb interfaces for
giving services to the outside world (north/south traffic).


Very nice gear. How are you planning the network exactly? Without a 
switch, back-to-back? (sounds OK to me, just wanted to ensure this is 
what the 'dual' is used for). However, I'm unsure if you have the 
correct balance between the interface speeds (40g) and the disks (too 
many HDDs?).


Moacir:The idea is to have a very high performance network for the 
distributed file system and to prevent bottlenecks when we move one VM 
from a node to another. Using 40Gb NICs I can just connect the servers 
back-to-back. In this case I don't need the expensive 40Gb switch, I 
get very high speed and no contention between north/south traffic with 
east/west.



This said, my first question is: How should I deploy GlusterFS in
such oVirt scenario? My questions are:


1 - Should I create 3 RAID (i.e.: RAID 5), one on each oVirt node,
and then create a GlusterFS using them?

I would assume RAID 1 for the operating system (you don't want a 
single point of failure there?) and the rest JBODs. The SSD will be 
used for caching, I reckon? (I personally would add more SSDs instead 
of HDDs, but it does depend on the disk sizes and your space requirements.


Moacir: Yes, I agree that I need a RAID-1 for the OS. Now, generic
JBOD or a JBOD assembled using RAID-5 "disks" created by the server's
disk controller?


2 - Instead, should I create a JBOD array made of all server's disks?

3 - What is the best Gluster configuration to provide for HA while
not consuming too much disk space?


Replica 2 + Arbiter sounds good to me.
Moacir:I agree, and that is what I am using.

4 - Does a oVirt hypervisor pod like I am planning to build, and
the virtualization environment, benefits from tiering when using a
SSD disk? And yes, will Gluster do it by default or I have to
configure it to do so?


Yes, I believe using lvmcache is the best way to go.

Moacir: Are you sure? I say that because the qcow2 files will be
quite big. So if tiering is "file based" the SSD would have to be
very, very big unless Gluster tiering do it by "chunks of data".


At the bottom line, what is the good practice for using GlusterFS
in small pods for enterprises?


Don't forget jumbo frames. libgfapi (coming hopefully in 4.1.5). 
Sharding (enabled out of the box if you use a hyper-converged setup 
via gdeploy).
*Moacir:* Yes! This is another reason to have separate networks for 
north/south and east/west. In that way I can use the standard MTU on 
the 10Gb NICs and jumbo frames on the file/move 40Gb NICs.
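
As a sanity check once jumbo frames are enabled on the 40Gb logical network
(the MTU can be set per logical network in the engine), something like this
confirms a 9000-byte path end to end (interface name and address are
illustrative):

    ip link show enp94s0f0 | grep -o 'mtu [0-9]*'   # should report mtu 9000
    ping -M do -s 8972 -c 3 10.10.40.2              # 8972 = 9000 - 20 (IP) - 8 (ICMP), no fragmentation allowed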


Y.


You opinion/feedback will be really appreciated!

Moacir


___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-07 Thread FERNANDO FREDIANI

Thanks for the clarification Erekle.

However, I am surprised by this way of operating GlusterFS, as it adds 
another layer of complexity to the system (either hardware or software 
RAID) underneath the Gluster configuration and increases the system's 
overall cost.


An important point to consider: in a RAID configuration you already 
have space 'wasted' in order to build redundancy (whether RAID 1, 5, or 
6). When you then put GlusterFS on top of several RAIDs, the data is 
replicated once more, so the same data ends up consuming space twice: 
once inside each RAID group and again across the Gluster replicas (with 
RAID 1 bricks, for example, the same data is stored 4 times).


Yet another downside of having a RAID (especially RAID 5 or 6) is that 
it considerably reduces write speeds, as each group of disks ends up 
with roughly the write speed of a single disk, since all the other disks 
in that group have to wait for each other to complete the write as well.


Therefore, if Gluster already replicates data, why is replacing a brick 
the big pain you mentioned? The data is replicated somewhere else and 
can still be retrieved both to serve clients and to reconstruct the 
equivalent disk when it is replaced.


Fernando


On 07/08/2017 10:26, Erekle Magradze wrote:


Hi Frenando,

Here is my experience: if you use a particular hard drive as a brick 
for a gluster volume and it dies, i.e. it becomes inaccessible, it's a 
huge hassle to discard that brick and exchange it with another one, 
since gluster still tries to access the broken brick and that causes 
(at least it caused for me) a big pain. Therefore it's better to have a 
RAID as the brick, i.e. RAID 1 (mirroring) for each brick; in that case 
if a disk is down you can easily exchange it and rebuild the RAID 
without going offline, i.e. without switching off the volume, doing 
brick manipulations and switching it back on.
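
For reference, when a dead brick does have to be swapped out, the procedure is
roughly the following (volume name and brick paths are illustrative):

    # point the volume at a freshly formatted and mounted brick, then let self-heal rebuild it
    gluster volume replace-brick myvol server1:/bricks/old/brick server1:/bricks/new/brick commit force
    gluster volume heal myvol full
    gluster volume heal myvol info     # watch until the pending heal count drops to zero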


Cheers

Erekle


On 08/07/2017 03:04 PM, FERNANDO FREDIANI wrote:


For any RAID 5 or 6 configuration I normally follow a simple golden 
rule which has given good results so far:

- up to 4 disks RAID 5
- 5 or more disks RAID 6

However, I didn't really understand the recommendation to use any 
RAID with GlusterFS. I always thought that GlusterFS likes to work in 
JBOD mode and control the disks (bricks) directly, so you can create 
whatever distribution rule you wish, and if a single disk fails you 
just replace it, since the data is obviously replicated from another 
one. The only downside of using it this way is that the replication 
traffic will flow across all servers, but that is not a big issue.


Can anyone elaborate on using RAID + GlusterFS versus JBOD + GlusterFS?

Thanks
Regards
Fernando


On 07/08/2017 03:46, Devin Acosta wrote:


Moacir,

I have recently installed multiple Red Hat Virtualization hosts for 
several different companies, and have dealt with the Red Hat Support 
Team in depth about optimal configuration in regards to setting up 
GlusterFS most efficiently and I wanted to share with you what I 
learned.


In general the Red Hat Virtualization team frowns upon using each DISK 
of the system as just a JBOD. Sure, there is some protection by 
having the data replicated; however, the recommendation is to use 
RAID 6 (preferred) or RAID 5, or RAID 1 at the very least.


Here is the direct quote from Red Hat when I asked about RAID and 
Bricks:

"A typical Gluster configuration would use RAID underneath the 
bricks. RAID 6 is most typical as it gives you 2 disk failure 
protection, but RAID 5 could be used too. Once you have the RAIDed 
bricks, you'd then apply the desired replication on top of that. The 
most popular way of doing this would be distributed replicated with 
2x replication. In general you'll get better performance with larger 
bricks. 12 drives is often a sweet spot. Another option would be to 
create a separate tier using all SSD’s.” 


In order to do SSD tiering, from my understanding you would need 1 x 
NVMe drive in each server, or 4 x SSD hot tier (it needs to be 
distributed, replicated for the hot tier if not using NVMe). So with 
you only having 1 SSD drive in each server, I’d suggest maybe 
looking into the NVMe option. 

Since you're using only 3 servers, what I’d probably suggest is to do 
(2 Replicas + Arbiter Node). This setup actually doesn’t require the 
3rd server to have big drives at all, as it only stores meta-data 
about the files and not actually a full copy. 

Please see the attached document that was given to me by Red Hat to 
get more information on this. Hope this information helps you.


--

Devin Acosta, RHCA, RHVCA
Red Hat Certified Architect

On August 6, 2017 at 7:29:29 PM, Moacir Ferreira 
(moacirferre...@hotmail.com <mailto:moacirferre...@hotmail.com>) wrote:


I am willing to assemble a oVirt "pod", made of 3 servers, each 
with 2 CPU sockets of 12 cores, 256GB RAM, 7 H

Re: [ovirt-users] Good practices

2017-08-07 Thread FERNANDO FREDIANI
What you mentioned is a specific case and not a generic situation. The 
main point there is that RAID 5 or 6 impacts write performance compared 
to when you write to only 2 given disks at a time. That was the comparison 
being made.


Fernando


On 07/08/2017 16:49, Fabrice Bacchella wrote:


On 7 Aug 2017, at 17:41, FERNANDO FREDIANI <fernando.fredi...@upx.com> wrote:




Yet another downside of having a RAID (specially RAID 5 or 6) is that 
it reduces considerably the write speeds as each group of disks will 
end up having the write speed of a single disk as all other disks of 
that group have to wait for each other to write as well.




That's not true if you have a medium- to high-range hardware RAID 
controller. For example, HP Smart Array controllers come with a flash 
cache of about 1 or 2 GB that hides that from the OS. 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-07 Thread FERNANDO FREDIANI

Thanks for the detailed answer Erekle.

I conclude that in any scenario it is worth having an arbiter node in 
order to avoid wasting more disk space on RAID X plus Gluster replication 
on top of it. The cost seems much lower if you consider the running 
costs of the whole storage and compare them with the cost of building the 
arbiter node. Even a fully redundant arbiter service with 2 nodes 
would be worth it on a larger deployment.


Regards
Fernando

On 07/08/2017 17:07, Erekle Magradze wrote:


Hi Fernando (sorry for misspelling your name, I used a different 
keyboard),


So let's go with the following scenarios:

1. Let's say you have two servers (replication factor 2), i.e. two 
bricks per volume. In this case it is strongly recommended to have the 
arbiter node, the metadata storage that guarantees avoiding the 
split-brain situation. For the arbiter you don't even need a disk with 
lots of space; it's enough to have a tiny SSD, but hosted on a separate 
server. The advantage of such a setup is that you don't need RAID 1 for 
each brick: you have the metadata stored on the arbiter node and brick 
replacement is easy.


2. If you have an odd number of bricks (let's say 3, i.e. replication 
factor 3) in your volume and you created neither an arbiter node nor a 
quorum configuration, then the entire load for keeping the volume 
consistent resides on all 3 servers: each of them is important, each 
brick contains key information, and they need to cross-check each other 
(that's what people usually do on their first try of gluster :) ). In 
this case replacing a brick is a big pain and RAID 1 is a good option 
to have (that's the disadvantage, i.e. losing the space and not having 
the JBOD option); the advantage is that you don't have to have an 
additional arbiter node.


3. You have an odd number of bricks and a configured arbiter node. In 
this case you can easily go with JBOD; however, a good practice would 
be to have RAID 1 for the arbiter disks (tiny 128GB SSDs are perfectly 
sufficient for volumes tens of TBs in size).


That's basically it

The rest about the reliability and setup scenarios you can find in 
gluster documentation, especially look for quorum and arbiter node 
configs+options.
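
For completeness, scenario 3 would be created with something like the
following (host names and paths are illustrative); the last brick listed
becomes the arbiter and stores only metadata:

    gluster volume create data replica 3 arbiter 1 \
        node1:/bricks/data/brick node2:/bricks/data/brick node3:/bricks/arbiter/brick
    gluster volume info data    # should show the (2 + 1) brick layout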


Cheers

Erekle

P.S. What I was mentioning, regarding a good practice is mostly 
related to the operations of gluster not installation or deployment, 
i.e. not the conceptual understanding of gluster (conceptually it's a 
JBOD system).


On 08/07/2017 05:41 PM, FERNANDO FREDIANI wrote:


Thanks for the clarification Erekle.

However I get surprised with this way of operating from GlusterFS as 
it adds another layer of complexity to the system (either a hardware 
or software RAID) before the gluster config and increase the system's 
overall costs.


An important point to consider is: In RAID configuration you already 
have space 'wasted' in order to build redundancy (either RAID 1, 5, 
or 6). Then when you have GlusterFS on the top of several RAIDs you 
have again more data replicated so you end up with the same data 
consuming more space in a group of disks and again on the top of 
several RAIDs depending on the Gluster configuration you have (in a 
RAID 1 config the same data is replicated 4 times).


Yet another downside of having a RAID (specially RAID 5 or 6) is that 
it reduces considerably the write speeds as each group of disks will 
end up having the write speed of a single disk as all other disks of 
that group have to wait for each other to write as well.


Therefore if Gluster already replicates data why does it create this 
big pain you mentioned if the data is replicated somewhere else, can 
still be retrieved to both serve clients and reconstruct the 
equivalent disk when it is replaced ?


Fernando


On 07/08/2017 10:26, Erekle Magradze wrote:


Hi Frenando,

Here is my experience, if you consider a particular hard drive as a 
brick for gluster volume and it dies, i.e. it becomes not accessible 
it's a huge hassle to discard that brick and exchange with another 
one, since gluster some tries to access that broken brick and it's 
causing (at least it cause for me) a big pain, therefore it's better 
to have a RAID as brick, i.e. have RAID 1 (mirroring) for each 
brick, in this case if the disk is down you can easily exchange it 
and rebuild the RAID without going offline, i.e switching off the 
volume doing brick manipulations and switching it back on.


Cheers

Erekle


On 08/07/2017 03:04 PM, FERNANDO FREDIANI wrote:


For any RAID 5 or 6 configuration I normally follow a simple gold 
rule which gave good results so far:

- up to 4 disks RAID 5
- 5 or more disks RAID 6

However I didn't really understand well the recommendation to use 
any RAID with GlusterFS. I always thought that GlusteFS likes to 
work in JBOD mode and control 

Re: [ovirt-users] Problemas with ovirtmgmt network used to connect VMs

2017-08-07 Thread FERNANDO FREDIANI

Hello.

Although I didn't get any more feedback on this topic, I just wanted to 
let people know that since I moved the VM to another oVirt cluster 
running oVirt Node NG and kernel 3.10 the problem stopped happening. 
Although I still don't know the cause of it, I suspect it may have to do 
with the kernel the other host (hypervisor) is running (4.12), as that 
is the only one running this kernel, for a specific reason.
Supporting this suspicion, in the past I had another hypervisor also 
running kernel 4.12 and a VM doing the same job had the same issue. 
After I rebooted that hypervisor back to the default kernel (3.10) the 
problem didn't happen anymore.


If anyone ever faces this or anything similar please let me know as I am 
always interested to find out the root of this issue.


Regards
Fernando


On 28/07/2017 15:01, FERNANDO FREDIANI wrote:


Hello Edwardh and all.

I keep getting these disconnects. Were you able to find anything, or is 
there anything you would suggest changing?


As I mentioned, this machine, unlike the others where it never 
happened, uses the ovirtmgmt network as the VM network and has kernel 
4.12 instead of the default 3.10 from CentOS 7.3. It seems to be a 
particular situation that is triggering this behavior, but I could not 
gather any hints yet.


I have tried to run a regular arping to force the bridge to keep 
learning the VM's MAC address, but it doesn't seem to work, and every 
once in a while the bridge 'forgets' that particular VM MAC address.
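
In case it helps anyone reproduce this, the kind of thing I run while
debugging (interface, gateway and MAC are illustrative):

    # inside the VM: generate ARP traffic so the bridge re-learns the source MAC
    arping -I eth0 -c 3 192.168.0.1
    # on the host: confirm the entry is back in the bridge forwarding table
    brctl showmacs ovirtmgmt | grep -i 00:1a:4a:16:01:51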
I have even rebuilt the VM completely, changing its operating 
system from Ubuntu 16.04 to CentOS 7.3, and the same problem happened.


Fernando


On 24/07/2017 18:20, FERNANDO FREDIANI wrote:


Hello Edward, this happened again today and I was able to check more 
details.


So:

- The VM stopped passing any network traffic.
- Checking 'brctl showmacs ovirtmgmt' it showed the VM's mac address 
missing.
- I then went to oVirt Engine, under VM's 'Network Interfaces' tab, 
clicked Edit and changed the Link State to Down then to Up and it 
recovered its connectivity.
- Another 'brctl showmacs ovirtmgmt' showed the VM's mac address 
learned again by the bridge.


This Node server has the particularity of sharing the ovirtmgmt with 
VMs. Could it possibly be the cause of the issue in any way ?


Thanks
Fernando


On 24/07/2017 09:47, FERNANDO FREDIANI wrote:


Not tried this yet Edwardh, but I will next time it happens. The 
source MAC address should be the same MAC as the VM's. I don't see any 
reason for it to change, either from within the VM or outside.


What type of things would make the bridge stop learning a given VM 
mac address ?


Fernando


On 23/07/2017 07:51, Edward Haas wrote:
Have you tried to use tcpdump at the VM vNIC to examine if there is 
traffic trying to get out from there? And with what source mac address?


Thanks,
Edy,

On Fri, Jul 21, 2017 at 5:36 PM, FERNANDO FREDIANI 
mailto:fernando.fredi...@upx.com>> wrote:


Has anyone had problem when using the ovirtmgmt bridge to
connect VMs ?

I am still facing a bizarre problem where some VMs connected to
this bridge stop passing traffic. Checking the problem further,
I see that their MAC address stops being learned by the bridge and
the problem is resolved only with a VM reboot.

When I last saw the problem I ran brctl showmacs ovirtmgmt and
it showed me the VM's MAC address with ageing timer 200.19.
After the VM reboot I see the same MAC with ageing timer 0.00.
I don't see this in another environment where ovirtmgmt is
not used for VMs.
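
Another thing that may be worth checking is the bridge ageing time itself;
brctl prints it in seconds, while the sysfs value is in hundredths of a
second:

    cat /sys/class/net/ovirtmgmt/bridge/ageing_time            # default 30000 = 300 s
    echo 60000 > /sys/class/net/ovirtmgmt/bridge/ageing_time   # temporarily double it while debugging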

Does anyone have any clue about this type of behavior ?

Fernando
___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>










___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-07 Thread FERNANDO FREDIANI
Moacir, I understand that if you do this type of configuration you will be
severely impacted on storage performance, especially for writes. Even if you
have a hardware RAID controller with write-back cache you will have a
significant performance penalty and may not fully use all the resources you
mentioned you have.

Fernando

2017-08-07 10:03 GMT-03:00 Moacir Ferreira :

> Hi Colin,
>
>
> Take a look on Devin's response. Also, read the doc he shared that gives
> some hints on how to deploy Gluster.
>
>
> It is more like that if you want high-performance you should have the
> bricks created as RAID (5 or 6) by the server's disk controller and them
> assemble a JBOD GlusterFS. The attached document is Gluster specific and
> not for oVirt. But at this point I think that having SSD will not be a plus
> as using the RAID controller Gluster will not be aware of the SSD.
> Regarding the OS, my idea is to have a RAID 1, made of 2 low cost HDDs, to
> install it.
>
>
> So far, based on the information received I should create a single RAID 5
> or 6 on each server and then use this disk as a brick to create my Gluster
> cluster, made of 2 replicas + 1 arbiter. What is new for me is the detail
> that the arbiter does not need a lot of space as it only keeps meta data.
>
>
> Thanks for your response!
> Moacir
>
> --
> *From:* Colin Coe 
> *Sent:* Monday, August 7, 2017 12:41 PM
>
> *To:* Moacir Ferreira
> *Cc:* users@ovirt.org
> *Subject:* Re: [ovirt-users] Good practices
>
> Hi
>
> I just thought that you'd do hardware RAID if you had the controller or
> JBOD if you didn't.  In hindsight, a server with 40Gbps NICs is pretty
> likely to have a hardware RAID controller.  I've never done JBOD with
> hardware RAID.  I think having a single gluster brick on hardware JBOD
> would be riskier than multiple bricks, each on a single disk, but thats not
> based on anything other than my prejudices.
>
> I thought gluster tiering was for the most frequently accessed files, in
> which case all the VMs disks would end up in the hot tier.  However, I have
> been wrong before...
>
> I just wanted to know where the OS was going as I didn't see it mentioned
> in the OP.  Normally, I'd have the OS on a RAID1 but in your case thats a
> lot of wasted disk.
>
> Honestly, I think Yaniv's answer was far better than my own and made the
> important point about having an arbiter.
>
> Thanks
>
> On Mon, Aug 7, 2017 at 5:56 PM, Moacir Ferreira <
> moacirferre...@hotmail.com> wrote:
>
>> Hi Colin,
>>
>>
>> I am in Portugal, so sorry for this late response. It is quite confusing
>> for me, please consider:
>>
>>
>> 1 - What if the RAID is done by the server's disk controller, not by
>> software?
>>
>> 2 - For JBOD I am just using gdeploy to deploy it. However, I am not
>> using the oVirt node GUI to do this.
>>
>>
>> 3 - As the VM .qcow2 files are quite big, tiering would only help if
>> made by an intelligent system that uses SSD for chunks of data not for the
>> entire .qcow2 file. But I guess this is a problem everybody else has. So,
>> Do you know how tiering works in Gluster?
>>
>>
>> 4 - I am putting the OS on the first disk. However, would you do
>> differently?
>>
>>
>> Moacir
>>
>> --
>> *From:* Colin Coe 
>> *Sent:* Monday, August 7, 2017 4:48 AM
>> *To:* Moacir Ferreira
>> *Cc:* users@ovirt.org
>> *Subject:* Re: [ovirt-users] Good practices
>>
>> 1) RAID5 may be a performance hit-
>>
>> 2) I'd be inclined to do this as JBOD by creating a distributed disperse
>> volume on each server.  Something like
>>
>> echo gluster volume create dispersevol disperse-data 5 redundancy 2 \
>> $(for SERVER in a b c; do for BRICK in $(seq 1 5); do echo -e "server${SERVER}:/brick/brick-${SERVER}${BRICK}/brick \c"; done; done)
>>
>> 3) I think the above.
>>
>> 4) Gluster does support tiering, but IIRC you'd need the same number of
>> SSD as spindle drives.  There may be another way to use the SSD as a fast
>> cache.
>>
>> Where are you putting the OS?
>>
>> Hope I understood the question...
>>
>> Thanks
>>
>> On Sun, Aug 6, 2017 at 10:49 PM, Moacir Ferreira <
>> moacirferre...@hotmail.com> wrote:
>>
>>> I am willing to assemble a oVirt "pod", made of 3 servers, each with 2
>>> CPU sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The idea is to use
>>> GlusterFS to provide HA for the VMs. The 3 servers have a dual 40Gb NIC and
>>> a dual 10Gb NIC. So my intention is to create a loop like a server triangle
>>> using the 40Gb NICs for virtualization files (VMs .qcow2) access and to
>>> move VMs around the pod (east /west traffic) while using the 10Gb
>>> interfaces for giving services to the outside world (north/south traffic).
>>>
>>>
>>> This said, my first question is: How should I deploy GlusterFS in such
>>> oVirt scenario? My questions are:
>>>
>>>
>>> 1 - Should I create 3 RAID (i.e.: RAID 5), one on each oVirt node, and
>>> then create a GlusterFS using them?
>>>
>>> 2 - Instead, should I crea

Re: [ovirt-users] Good practices

2017-08-08 Thread FERNANDO FREDIANI
That's just the way RAID works, regardless of how 'super-ultra' 
powerful a hardware controller you may have. RAID 5 or 6 will never 
have the same write performance as RAID 10 or 0, for example. 
Write-back caches can deal with bursts well, but they have a limit; 
therefore there will always be a penalty compared to what else you 
could have.


If you have a continuous stream of data (a big VM deployment or a large 
data copy) there will be continuous writes, and that will likely fill up 
the cache, making the disks underneath the bottleneck.
That's why in some other scenarios, like ZFS, people have multiple 
groups of RAID 6 (called RAIDZ2), as it improves write speeds for these 
types of workloads.


In the scenario given in this thread, with just 3 servers each with a 
RAID 6, there will be a hard limit on write performance, especially for 
streamed data, no matter how powerful a write-back-capable hardware 
controller you have.


Also, I agree the 40Gb NICs may not be used fully and 10Gb could do the 
job well, but if they were available from the beginning, why not use them.


Fernando


On 08/08/2017 03:16, Fabrice Bacchella wrote:

Le 8 août 2017 à 04:08, FERNANDO FREDIANI  a écrit :
Even if you have a Hardware RAID Controller with Writeback cache you will have 
a significant performance penalty and may not fully use all the resources you 
mentioned you have.


Nope again. From my experience with HP Smart Array and write-back cache, writes 
that go into the cache are even faster than reads that must go to the disks. 
Of course, if the writes are too fast and too big, they will overflow the 
cache. But today's controllers have multi-gigabyte caches; you must write a 
lot to fill them. And if you can afford a 40Gb card, you can afford a decent 
controller.





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-08 Thread FERNANDO FREDIANI

Exactly Moacir, that is my point.


A proper distributed filesystem should not rely on any type of RAID, as 
it can build its own redundancy without having to rely on any underlying 
layer (look at Ceph). Using RAID may help with management and, in certain 
scenarios, with replacing a faulty disk, but at a cost, and not a cheap one.
That's why, in terms of resource savings, if replica 3 brings the issues 
mentioned, it is well worth having a small arbiter somewhere instead of 
wasting a significant amount of disk space.



Fernando


On 08/08/2017 06:09, Moacir Ferreira wrote:


Fernando,


Let's see what people say... But this is what I understood Red Hat 
says is the best performance model. This is the main reason to open 
this discussion because as long as I can see, some of you in the 
community, do not agree.



But when I think about a "distributed file system" that can make any 
number of copies you want, it does not make sense to use a RAIDed 
brick; what makes sense is to use JBOD.



Moacir



*From:* fernando.fredi...@upx.com.br  on 
behalf of FERNANDO FREDIANI 

*Sent:* Tuesday, August 8, 2017 3:08 AM
*To:* Moacir Ferreira
*Cc:* Colin Coe; users@ovirt.org
*Subject:* Re: [ovirt-users] Good practices
Moacir, I understand that if you do this type of configuration you 
will be severely impacted on storage performance, specially for 
writes. Even if you have a Hardware RAID Controller with Writeback 
cache you will have a significant performance penalty and may not 
fully use all the resources you mentioned you have.


Fernando

2017-08-07 10:03 GMT-03:00 Moacir Ferreira <mailto:moacirferre...@hotmail.com>>:


Hi Colin,


Take a look on Devin's response. Also, read the doc he shared that
gives some hints on how to deploy Gluster.


It is more like that if you want high-performance you should have
the bricks created as RAID (5 or 6) by the server's disk
controller and them assemble a JBOD GlusterFS. The attached
document is Gluster specific and not for oVirt. But at this point
I think that having SSD will not be a plus as using the RAID
controller Gluster will not be aware of the SSD. Regarding the OS,
my idea is to have a RAID 1, made of 2 low cost HDDs, to install it.


So far, based on the information received I should create a single
RAID 5 or 6 on each server and then use this disk as a brick to
create my Gluster cluster, made of 2 replicas + 1 arbiter. What is
new for me is the detail that the arbiter does not need a lot of
space as it only keeps meta data.


Thanks for your response!

Moacir


*From:* Colin Coe mailto:colin@gmail.com>>
*Sent:* Monday, August 7, 2017 12:41 PM

*To:* Moacir Ferreira
*Cc:* users@ovirt.org <mailto:users@ovirt.org>
*Subject:* Re: [ovirt-users] Good practices
Hi

I just thought that you'd do hardware RAID if you had the
controller or JBOD if you didn't.  In hindsight, a server with
40Gbps NICs is pretty likely to have a hardware RAID controller. 
I've never done JBOD with hardware RAID.  I think having a single

gluster brick on hardware JBOD would be riskier than multiple
bricks, each on a single disk, but thats not based on anything
other than my prejudices.

I thought gluster tiering was for the most frequently accessed
files, in which case all the VMs disks would end up in the hot
tier.  However, I have been wrong before...

I just wanted to know where the OS was going as I didn't see it
mentioned in the OP.  Normally, I'd have the OS on a RAID1 but in
your case thats a lot of wasted disk.

Honestly, I think Yaniv's answer was far better than my own and
made the important point about having an arbiter.

Thanks

On Mon, Aug 7, 2017 at 5:56 PM, Moacir Ferreira
mailto:moacirferre...@hotmail.com>>
wrote:

Hi Colin,


I am in Portugal, so sorry for this late response. It is quite
confusing for me, please consider:

1 - What if the RAID is done by the server's disk
controller, not by software?

2 - For JBOD I am just using gdeploy to deploy it. However, I
am not using the oVirt node GUI to do this.


3 - As the VM .qcow2 files are quite big, tiering would only
help if made by an intelligent system that uses SSD for chunks
of data not for the entire .qcow2 file. But I guess this is a
problem everybody else has. So, Do you know how tiering works
in Gluster?


4 - I am putting the OS on the first disk. However, would you
do differently?


Moacir


  

Re: [ovirt-users] Issues getting agent working on Ubuntu 17.04

2017-08-08 Thread FERNANDO FREDIANI

Wesley, it doesn't work at all. It seems to be something to do with Python, but I'm not sure.

It has been reported here before and the person who maintains it was 
involved in the thread but didn't reply.


Fernando


On 08/08/2017 16:59, Wesley Stewart wrote:
I am having trouble getting the ovirt agent working on Ubuntu 17.04 
(perhaps it just isnt there yet)


Currently I have two test machines a 16.04 and a 17.04 ubuntu servers.


*On the 17.04 server*:
Currently installed:
ovirt-guest-agent (1.0.12.2.dfsg-2), and service --status-all reveals 
a few virtualization agents:

 [ - ]  open-vm-tools
 [ - ]  ovirt-guest-agent
 [ + ]  qemu-guest-agent

I can't seem to start ovirt-guest-agent
sudo service ovirt-guest-agent start/restart does nothing

Running sudo systemctl status ovirt-guest-agent.service
Aug 08 15:31:50 ubuntu-template systemd[1]: Starting oVirt Guest Agent...
Aug 08 15:31:50 ubuntu-template systemd[1]: Started oVirt Guest Agent.
Aug 08 15:31:51 ubuntu-template python[1219]: *** stack smashing 
detected ***: /usr/bin/python terminated
Aug 08 15:31:51 ubuntu-template systemd[1]: ovirt-guest-agent.service: 
Main process exited, code=killed, status=6/ABRT
Aug 08 15:31:51 ubuntu-template systemd[1]: ovirt-guest-agent.service: 
Unit entered failed state.
Aug 08 15:31:51 ubuntu-template systemd[1]: ovirt-guest-agent.service: 
Failed with result 'signal'.


sudo systemctl enable ovirt-guest-agent.service
also does not seem to do anything.

Doing more research, I found:
http://lists.ovirt.org/pipermail/users/2017-July/083071.html
So perhaps the ovirt-guest-agent is broken for Ubuntu 17.04?


*On the 16.04 Server I have:*
Took some fiddling, but I eventually got it working





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Communication Problems between Engine and Hosts

2017-08-16 Thread FERNANDO FREDIANI

Hello Piotr. Thanks for your reply

I was running version 4.1.1, but since that day I have upgraded to 4.1.5 
(the Engine only; the hosts remain on 4.1.1). I am not sure the logs 
still exist (how long are they normally kept?).


Just to clarify: the hosts didn't become unresponsive, but the 
communication between the Engine and the hosts in question (each in a 
different datacenter) was interrupted, while locally the hosts were fine 
and accessible. What was strange was that since the hosts could not 
talk to the Engine they seem to have got 'confused' and started several 
VM live migrations, which was not expected. As a note, I don't have any 
fencing policy enabled.


Regards
Fernando


On 16/08/2017 07:00, Piotr Kliczewski wrote:

Fernando,

Which ovirt version are you running? Please share the logs so I could
check what caused the hosts to become unresponsive.

Thanks,
Piotr

On Wed, Aug 2, 2017 at 5:11 PM, FERNANDO FREDIANI
 wrote:

Hello.

Yesterday I had a pretty strange problem in one of our architectures. My
oVirt Engine, which runs in one datacenter and controls nodes locally and also
remotely, lost communication with the remote nodes in another datacenter.
To this point nothing wrong as the Nodes can continue working as expected
and running their Virtual Machines each without dependency of the oVirt
Engine.

What happened at some point is that when the communication between the Engine
and the hosts came back, the hosts got confused and initiated a live migration of ALL
VMs from one to the other. I also had to restart the vdsmd agent on all hosts in
order to bring my environment back to sanity.
What adds up even more strangeness to this scenario is that one of the Hosts
affected doesn't belong to the same Cluster as the others and had to have
the vdsmd restarted.

I understand the hosts can survive without the Engine online, with reduced
capabilities but still able to communicate among themselves, without affecting the
VMs or needing to do what happened in this scenario.

Am I wrong on any of the assumptions ?

Fernando

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt Node with bcache

2017-08-16 Thread FERNANDO FREDIANI

Hello

I just wanted to share a scenario with you and perhaps exchange more 
information with other people that may also have a similar scenario.


For a couple of months I have been running an oVirt node (CentOS 7.3 
Minimal) with bcache (https://bcache.evilpiepirate.org/), using an SSD 
to cache HDD disks. The setup is simple and was made as a proof of 
concept, and since then it has been working better than expected.
This is a standalone host with 4 disks: 1 for the operating system, 2 
x 2TB 7200 RPM in software RAID 1, and 1 x PCIe NVMe 400GB SSD which 
acts as the caching device for both reads and writes. The VM storage 
folder is mounted as an ext4 partition on the logical device created by 
bcache (/dev/bcache0). All this is transparent to oVirt, as all it sees 
is a folder to put the VMs in.
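
For anyone wanting to try something similar, a rough sketch of such an
assembly (device names are illustrative; check the bcache documentation for
the exact options of your version):

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc   # the 2 x 2TB mirror
    make-bcache -B /dev/md0 -C /dev/nvme0n1                  # backing device plus NVMe cache, attached together
    echo writeback > /sys/block/bcache0/bcache/cache_mode    # cache writes as well as reads
    mkfs.ext4 /dev/bcache0
    mount /dev/bcache0 /data/vmstorage                       # the folder oVirt then uses as local storage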


We monitor the IOPS on all block devices individually and see the 
behavior exactly as expected: random writes are all done on the SSD 
first and then streamed sequentially to the mechanical drives, with 
pretty impressive performance. Also, in the beginning, while the total 
amount of data was less than 400GB, ALL reads used to come from the 
caching device and therefore didn't use IOPS from the mechanical drives, 
leaving them free to do basically just writes. Finally, sequential I/O 
(as detected by bcache) is intelligently passed directly to the mechanical 
drives (but there is not much of it).


Although bcache is present in kernel 3.10, I had to use kernel-ml 4.12 
(from ELRepo), and I also had to compile bcache-tools as I could not 
find it available in any repository.


Regards
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Inter Cluster Traffic

2017-08-22 Thread FERNANDO FREDIANI
How do you make the new cluster use the same storage domain as the 
original one? Storage domains in oVirt are a bit confusing and not very 
flexible, and I am not sure it allows this. Does it?
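
Regarding the two ports mentioned below, opening them with firewalld would
look like the sketch here. Whether they are sufficient I cannot confirm; from
what I recall the migration data itself also uses a separate port range
(49152-49216/tcp by default), so it is worth comparing against the firewall
rules the engine applies to an existing host:

    firewall-cmd --permanent --add-port=16514/tcp   # libvirt TLS (migration control)
    firewall-cmd --permanent --add-port=54321/tcp   # vdsm
    firewall-cmd --reload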



On 22/08/2017 12:23, Alan Griffiths wrote:

Hi,

I'm in the process of building a second ovirt cluster within the 
default DC. This new cluster will use the same storage domains as the 
original cluster, and I will slowly migrate VMs from the old cluster 
to the new.


Given that the old and new cluster hosts have a firewall between them 
I need to ensure that all relevant ports are open, with particular 
attention to the correct operation of SPM.


Is it sufficient to open TCP ports 16514 and 54321 to achieve this?

Thanks,

Alan


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hyperconverged question

2017-09-04 Thread FERNANDO FREDIANI
I had the very same impression. It doesn't look like it works that way, then. 
So for a fully redundant setup, where you can lose a complete host, you must 
have at least 3 nodes, then?


Fernando


On 01/09/2017 12:53, Jim Kusznir wrote:
Huh...OK, how do I convert the arbiter to a full replica, then?  I was 
misinformed when I created this setup.  I thought the arbiter held 
enough metadata that it could validate or repudiate any one replica 
(kinda like the parity drive for a RAID-4 array).  I was also under 
the impression that one replica + arbiter is enough to keep the 
array online and functional.


--Jim

On Fri, Sep 1, 2017 at 5:22 AM, Charles Kozler > wrote:


@ Jim - you have only two data volumes and lost quorum. Arbitrator
only stores metadata, no actual files. So yes, you were running in
degraded mode so some operations were hindered.

@ Sahina - Yes, this actually worked fine for me once I did that.
However, the issue I am still facing, is when I go to create a new
gluster storage domain (replica 3, hyperconverged) and I tell it
"Host to use" and I select that host. If I fail that host, all VMs
halt. I do not recall this in 3.6 or early 4.0. This to me makes
it seem like this is "pinning" a node to a volume and vice versa
like you could, for instance, for a singular hyperconverged to ex:
export a local disk via NFS and then mount it via ovirt domain.
But of course, this has its caveats. To that end, I am using
gluster replica 3, when configuring it I say "host to use: " node
1, then in the connection details I give it node1:/data. I fail
node1, all VMs halt. Did I miss something?

On Fri, Sep 1, 2017 at 2:13 AM, Sahina Bose mailto:sab...@redhat.com>> wrote:

To the OP question, when you set up a gluster storage domain,
you need to specify backup-volfile-servers=:
where server2 and server3 also have bricks running. When
server1 is down, and the volume is mounted again - server2 or
server3 are queried to get the gluster volfiles.
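
In plain mount terms that corresponds to something like the following
(addresses as in Jim's configuration further down; the mount point is
illustrative):

    mount -t glusterfs \
        -o backup-volfile-servers=192.168.8.12:192.168.8.13 \
        192.168.8.11:/engine /mnt/engine-test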

@Jim, if this does not work, are you using 4.1.5 build with
libgfapi access? If not, please provide the vdsm and gluster
mount logs to analyse

If VMs go to paused state - this could mean the storage is not
available. You can check "gluster volume status " to
see if atleast 2 bricks are running.

On Fri, Sep 1, 2017 at 11:31 AM, Johan Bernhardsson
mailto:jo...@kafit.se>> wrote:

If gluster drops in quorum so that it has less votes than
it should it will stop file operations until quorum is
back to normal.If i rember it right you need two bricks to
write for quorum to be met and that the arbiter only is a
vote to avoid split brain.


Basically what you have is a raid5 solution without a
spare. And when one disk dies it will run in degraded
mode. And some raid systems will stop the raid until you
have removed the disk or forced it to run anyway.

You can read up on it here:

https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/



/Johan

On Thu, 2017-08-31 at 22:33 -0700, Jim Kusznir wrote:

Hi all:

Sorry to hijack the thread, but I was about to start
essentially the same thread.

I have a 3 node cluster, all three are hosts and gluster
nodes (replica 2 + arbitrar).  I DO have the
mnt_options=backup-volfile-servers= set:

storage=192.168.8.11:/engine
mnt_options=backup-volfile-servers=192.168.8.12:192.168.8.13

I had an issue today where 192.168.8.11 went down.  ALL
VMs immediately paused, including the engine (all VMs
were running on host2:192.168.8.12).  I couldn't get any
gluster stuff working until host1 (192.168.8.11) was
restored.

What's wrong / what did I miss?

(this was set up "manually" through the article on
setting up self-hosted gluster cluster back when 4.0 was
new..I've upgraded it to 4.1 since).

Thanks!
--Jim


On Thu, Aug 31, 2017 at 12:31 PM, Charles Kozler
mailto:ckozler...@gmail.com>> wrote:

Typo..."Set it up and then failed that **HOST**"

And upon that host going down, the storage domain went
down. I only have hosted storage domain and this new one
- is this why the DC went down and no SPM could be elected?

I dont recall this working this way in early 4.0 or 3.6

On Thu, Aug 31, 2017 at 3:30 PM, Charles Kozler
mailto:ckozler...@gmai

Re: [ovirt-users] update to centos 7.4

2017-09-14 Thread FERNANDO FREDIANI
It was released yesterday. I don't think such a quick upgrade is 
recommended. It might work well, but I wouldn't find it strange if there 
were issues until it has been fully tested with the current oVirt versions.


Fernando

On 14/09/2017 11:01, Nathanaël Blanchet wrote:

Hi all,

Now that CentOS 7.4 is available, is it recommended to update the nodes 
(and the engine OS), knowing that oVirt 4.1 is officially supported on 
7.3 or later?




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Manual transfer of VMs from DC to DC

2017-09-17 Thread FERNANDO FREDIANI
Alex, porting VMs in oVirt is not as flexible as some may expect or
commonly look for. Perhaps in future versions there will be things
like host-to-host transfer with no need to run commands to convert VMs.
For now you need to use exports (mount, umount, mount again) and so on.

2017-09-17 18:18 GMT-03:00 Alex K :
> Thanx, I can confirm that this way i may transfer VMs, but I was thinking a
> more dirty and perhaps portable way.
>
> Say I want to get to a external disk just one VM from DC A and copy/import
> it on DC B that has no access to the export domain of DC A.
>
> I've seen also articles converting the VM disk to qcow or raw then importing
> it with some perl script.
>
> I guess that the OVA import/export feature, still to be implemented, is what
> I need for this case.
>
> Thanx,
> Alex
>
> On Sep 17, 2017 10:13, "Fred Rolland"  wrote:
>>
>> Hi,
>>
>> You could import the storage domain from a DC to another DC with all the
>> VMs and disks.
>> See in [1], there is also a video explaining how to do it.
>>
>> Regards,
>> Fred
>>
>> [1]
>> https://www.ovirt.org/develop/release-management/features/storage/importstoragedomain/
>>
>> On Fri, Sep 15, 2017 at 10:40 AM, Abi Askushi 
>> wrote:
>>>
>>> Hi all,
>>>
>>> Is there any way of transferring VMs manually from DC to DC, without the
>>> DCs having connectivity with each other?
>>>
>>> I was thinking to backup all the export domain directory, and then later
>>> rsync this directory VMs to a new NFS share, then import this NFS share as
>>> an export domain on the other DC.
>>>
>>> What do you think?
>>>
>>> Thanx,
>>> Alex
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] PFSense VLAN Trunking

2017-09-18 Thread FERNANDO FREDIANI
I also wanted to know this; it is pretty useful for these scenarios. 
Great question!


Fernando Frediani


On 17/09/2017 23:33, LukeFlynn wrote:

Hello,

I'm wondering if there is a way to trunk all VLANs to a PFSense VM 
similar to using the "4095" tag in ESXi. I've tried using an untagged 
interface on the same bond to no avail.





Anyone have any ideas? Perhaps it's a problem with the virtio drivers 
and not the network setup itself?


Thanks,

Luke


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt's VM backup

2017-09-21 Thread FERNANDO FREDIANI

Is it just me who finds the way oVirt/RHEV does backups strange?

At present you have to snapshot the VM (fine with that), but then you 
have to clone AND export it to an export domain, and then delete the 
cloned VM. That means three copies of the same VM somewhere.


Wouldn't it be more logical to take a snapshot, take the now read-only 
disk and export it directly from any host that can read it, and finally 
remove the snapshot?


Why the need to clone AND export? What is the limitation preventing us 
from pulling the VM directly from a host, reducing the time the overall 
process takes and, mainly, the amount of storage necessary to do the job?
Oh, and before I forget: with this workflow the disks are hammered a lot 
more, decreasing their lifetime and possibly causing performance issues, 
mainly during the clone process.


Fernando


On 21/09/2017 14:59, Nathanaël Blanchet wrote:


Yes, it seems to be good and the UI is very nice, but I didn't manage to make 
a single backup even though the connection to the API is okay. I followed the 
README but nothing happens when launching the backup process...



On 21/09/2017 19:34, Niyazi Elvan wrote:

Hi,

You may check my project Bacchus at 
https://github.com/openbacchus/bacchus





On Sep 21, 2017 19:54, "Bernardo Juanicó" > wrote:


I didn't know that. We may adapt it in the future, but at first we
will probably just write a basic set of scripts for minimal
backup functionality, since our dev time is limited.

Ill keep you in mind when looking into it.

Regards,

Bernardo

PGP Key

Skype: mattraken

2017-09-21 13:08 GMT-03:00 Nathanaël Blanchet mailto:blanc...@abes.fr>>:

Hi Bernardo,

Thanks, I knew this tool, but it is based on sdk3 which will
be removed in the next version 4.2, so I'm looking at sdk4
project.

You may want to adapt it?


On 21/09/2017 17:08, Bernardo Juanicó wrote:

Hi Nathanael,

You may want to take a look at this too:

https://github.com/bjuanico/oVirtBackup


Regards,

Bernardo

PGP Key

Skype: mattraken

2017-09-21 11:00 GMT-03:00 Nathanaël Blanchet
mailto:blanc...@abes.fr>>:

Hello Victor,

I have some questions about your script


On 07/07/2017 23:40, Victor José Acosta Domínguez wrote:

Hello everyone, i created a python tool to backup and
restore oVirt's VMs.

Also i created a little "how to" on my blog:
http://blog.infratic.com/2017/07/create-ovirtrhevs-vm-backup/



  * Backup step is okay, and I get a usable qcow2 image
of the snapshot vm in the backup vm. It seems to be
compliant with the official backup API, except on
the step 2.

 1. Take a snapshot of the virtual machine to be backed
up - (existing oVirt REST API operation)
 2. Back up the virtual machine configuration at the
time of the snapshot (the disk configuration can be
backed up as well if needed) - (added capability to
oVirt as part of the Backup API)

I can't see any vm configuration anywhere but only the
qcow2 disk itself

 1. Attach the disk snapshots that were created in (1)
to the virtual appliance for data backup - (added
capability to oVirt as part of the Backup API)
 3. Detach the disk snapshots that were attached in (4)
from the virtual appliance - (added capability to
oVirt as part of the Backup API)

Another case is when the vm to back up has more than one
disk. After I tested it, I found that only one qcow2
disk is saved on the backup vm. This really matters
when the original vm has many disks that are part of an
LVM setup; it makes the vm restoration unusable.

  * About vm restoration, it seems that you are using
the upload_disk api, so the disk is uploaded to the
pre-defined storage domain, so it is not a real vm
restoration.

Do you plan to backup and restore a full VM (disks + vm
definition) in a next release?



I hope it help someone else

Regards

Victor Acosta




___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users
   

Re: [ovirt-users] [ovirt-devel] Cockpit oVirt support

2017-10-18 Thread FERNANDO FREDIANI

This is pretty interesting and nice to have.

I tried to find the screenshots and new features to see what the new 
webadmin UI looks like, but I am not sure if I am searching in the right place.


https://github.com/oVirt/cockpit-machines-ovirt-provider
or
https://www.ovirt.org/develop/release-management/features/integration/cockpit/

Fernando


On 18/10/2017 09:32, Barak Korren wrote:



On 18 October 2017 at 10:24, Michal Skrivanek 
mailto:michal.skriva...@redhat.com>> wrote:


Hi all,
I’m happy to announce that we finally finished initial
contribution of oVirt specific support into the Cockpit management
platform
See below for more details

There are only limited amount of operations you can do at the
moment, but it may already be interesting for troubleshooting and
simple admin actions where you don’t want to launch the full blown
webadmin UI

Worth noting that if you were ever intimidated by the complexity
of the GWT UI of oVirt portals and it held you back from
contributing, please take another look!

Thanks,
michal


Very nice work!

Where is this going? Are all WebAdmin features planned to be supported 
at some point? Its kinda nice to be able to access and manage the 
systems from any one of the hosts instead of having to know where the 
engine is...



--
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com  | TRIED. TESTED. TRUSTED. | 
redhat.com/trusted 



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Install oVirt in CentOS 7 Node

2017-10-18 Thread FERNANDO FREDIANI

Hi.

I have a host on which I installed a minimal CentOS 7 and turned it into an 
oVirt node, so it didn't come with Cockpit installed and 
configured as it does on oVirt Node NG.


Comparing both types of Hosts I have the following packages below in 
each scenario.

The only package missing between both is "cockpit-ovirt-dashboard".

However, I have already tried installing it, and it was unable to show 
the virtual machines correctly or control them. Is any specific or 
custom configuration needed in the Cockpit config files to make it work 
properly?


- oVirt-Node-NG host:
    cockpit-ws-130-1.el7.centos.x86_64
    cockpit-docker-130-1.el7.centos.x86_64
    cockpit-ovirt-dashboard-0.10.7-0.0.6.el7.centos.noarch
    cockpit-system-130-1.el7.centos.noarch
    cockpit-networkmanager-130-1.el7.centos.noarch
    cockpit-storaged-130-1.el7.centos.noarch
    cockpit-130-1.el7.centos.x86_64
    cockpit-bridge-130-1.el7.centos.x86_64
    cockpit-dashboard-130-1.el7.centos.x86_64

- CentOS 7 Minimal install
    cockpit-system-141-3.el7.centos.noarch
    cockpit-ws-141-3.el7.centos.x86_64
    cockpit-docker-141-3.el7.centos.x86_64
    cockpit-dashboard-141-3.el7.centos.x86_64
    cockpit-141-3.el7.centos.x86_64
    cockpit-bridge-141-3.el7.centos.x86_64
    cockpit-storaged-141-3.el7.centos.noarch
    cockpit-networkmanager-141-3.el7.centos.noarch
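
For reference, a minimal install path on a plain CentOS 7 host would be
something like this (assuming the oVirt release repository is already
configured and firewalld is in use):

    yum install -y cockpit cockpit-dashboard cockpit-ovirt-dashboard
    systemctl enable --now cockpit.socket
    firewall-cmd --permanent --add-service=cockpit && firewall-cmd --reload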

Thanks
Fernando


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Node Upgrade failing

2017-10-24 Thread FERNANDO FREDIANI
How are upgrades done and tested for oVirt Node NG? Every time I have 
tried to run one from the Engine interface it has failed somehow.


The last image I installed was 
ovirt-node-ng-installer-ovirt-4.1-2017091913, and after installing I 
basically do two things before adding it to the Engine: 1) change the 
SSH port and 2) install the Zabbix agent. Then I add the host to the 
Engine, run Check for Upgrade, and it returns the message: 'found updates 
for packages ovirt-node-ng-image-update-4.1.6-1.el7.centos'.


Next I do an 'Upgrade' and it stays there for quite a while, and 
afterwards the download of several packages fails.

Watching the /var/log/ovirt-engine/engine.log I see:

2017-10-24 10:04:10,196-02 ERROR 
[org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase] (pool-5-thread-4) 
[0558504e-0595-4241-acbb-6b6a517132a1] Error during host 
hostname.fqdm.hidden install
2017-10-24 10:04:10,197-02 ERROR 
[org.ovirt.engine.core.bll.host.HostUpgradeManager] (pool-5-thread-4) 
[0558504e-0595-4241-acbb-6b6a517132a1] Failed to update host 
'hostname.fqdm.hidden' packages 'ovirt-node-ng-image-update': Command 
returned failure code 1 during SSH session 'r...@hostname.fqdn.hidden:55000'
2017-10-24 10:04:10,200-02 INFO 
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] 
(pool-5-thread-4) [0558504e-0595-4241-acbb-6b6a517132a1] START, 
SetVdsStatusVDSCommand(HostName = hostname.fqdn.hidden, 
SetVdsStatusVDSCommandParameters:{runAsync='true', 
hostId='6e99e7bd-3bd5-4de4-9794-5549f83b31a6', status='InstallFailed', 
nonOperationalReason='NONE', stopSpmFailureLogged='false', 
maintenanceReason='null'}), log id: 65815347
2017-10-24 10:04:10,224-02 INFO 
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] 
(pool-5-thread-4) [0558504e-0595-4241-acbb-6b6a517132a1] FINISH, 
SetVdsStatusVDSCommand, log id: 65815347
2017-10-24 10:04:10,258-02 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(pool-5-thread-4) [0558504e-0595-4241-acbb-6b6a517132a1] EVENT_ID: 
HOST_UPGRADE_FAILED(841), Correlation ID: 
0558504e-0595-4241-acbb-6b6a517132a1, Call Stack: null, Custom ID: null, 
Custom Event ID: -1, Message: Failed to upgrade Host 
hostname.fqdn.hidden (User: admin@internal-authz).


Where else could I look for the root of the problem?
Could it be related to the different SSH port used, or anything else?
Is there an alternative way to upgrade the host via the console?
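
For the last question, a console-based path that should be roughly equivalent
(based on the package name reported above; treat it as a sketch): put the host
into maintenance in the engine first, then on the host run:

    yum clean metadata
    yum update ovirt-node-ng-image-update   # the package "Check for Upgrade" found
    nodectl info                            # confirm the new image layer is listed
    reboot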

Thanks
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Storage Performance

2017-10-26 Thread FERNANDO FREDIANI
That was my impression too, but unfortunately someone said on this mailing 
list recently that Gluster isn't clever enough to work without RAID 
controllers, and that when disks fail it imposes some difficulties for 
replacement. Perhaps someone with more knowledge could clarify this 
point, which would certainly benefit people.


Fernando


On 26/10/2017 10:59, Juan Pablo wrote:
Hi, can you check IOPS? and state # of VM's ? do : iostat -x 1 for a 
while =)


Isnt RAID discouraged ? AFAIK gluster likes JBOD, am I wrong?


regards,
JP

2017-10-25 12:05 GMT-03:00 Bryan Sockel >:


Have a question in regards to storage performance.  I have a
gluster replica 3 volume that we are testing for performance.  In
my current configuration is 1 server has 16X1.2TB( 10K 2.5
Inch) drives configured in Raid 10 with a 256k stripe. My 2nd
server is configured with 4X6TB (3.5 Inch Drives) configured Raid
10 with a 256k stripe.  Each server is configured with 802.3 Bond
(4X1GB) network links.  Each server is configured with write-back
on the raid controller.
I am seeing a lot of network usage (a solid 3 Gbps) when I perform
file copies on the VM attached to that gluster volume, but I see
spikes on the disk I/O when watching the dashboard through the
cockpit interface.  The spikes are up to 1.5 Gbps, but I would say
the average throughput is maybe 256 Mbps.
Is this to be expected, or should it be a solid activity in the
graphs for disk IO.  Is it better to use a 256K stripe or a 512
strip on the hardware raid configuration?
Eventually i plan on having the hardware match up for better
performance.
Thanks
Bryan

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How is everyone performing backups?

2017-10-27 Thread FERNANDO FREDIANI

Thanks for that.

Does anyone know of any way to back up VMs in OVF format, or even to output 
them to a .zip, .gz, etc.? Is there any way for a server which is not 
necessarily on the same LAN (an offsite backup store) to receive these VMs 
compressed into a single file?


In other words, is there any way to perform these backups without 
necessarily needing to use export domains?


Fernando


On 27/10/2017 15:14, Niyazi Elvan wrote:

Hi,

You may take a look at https://github.com/openbacchus/bacchus

Cheers.



On 27 October 2017 at 18:27, Wesley Stewart > wrote:


Originally, I used a script I found on github, but since updating
I can't seem to get that to work again.

I was just curious if there were any other more elegant type
solutions?  I am currently running a single host and local
storage, but I would love to backup VM's automatically once a week
or so to an NFS share.

Just curious if anyone had tackled this issue.

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





--
Niyazi Elvan


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] New post on oVirt blog: Introducing High Performance Virtual Machines

2017-10-31 Thread FERNANDO FREDIANI

Hi.

Does the virtualization layer cause any significant impact on VM 
performance, even for a high-CPU VM that would justify the use of this feature?

DPDK for sure is a fantastic feature for networking environments.

Fernando

On 31/10/2017 05:56, Yaniv Kaul wrote:



On Mon, Oct 30, 2017 at 9:33 PM, Vinícius Ferrão > wrote:


Hello John,

This is very interesting news for HPC guys. According to the
blog post there's a new “CPU passthrough” function, which is
interesting.

Which market are you targeting? I’m looking forward to
virtual nodes in an HPC environment.


Any intensive workload, CPU and memory bound especially, would benefit 
from the configuration.
In memory DBs (SAP Hana, Redis and friends) for example, MapReduce 
(Hadoop), etc.


For some workloads, low latency networking is also important 
(especially for nodes inter-communication) and we are looking at DPDK 
for it. See[1].


Y.

[1] https://www.ovirt.org/blog/2017/09/ovs-dpdk/


Thanks,
V.


On 30 Oct 2017, at 07:38, John Marks mailto:jma...@redhat.com>> wrote:

Hello!
Just a quick heads up that there is a new post on the oVirt blog:

Introducing High Performance Virtual Machines


In a nutshell:

oVirt 4.2.0 Alpha, released on September 28, features a new high
performance virtual machine type. It brings VM performance closer
to bare metal performance. Read the blog post.


See you on the oVirt blog!

Best,

John
-- 
John Marks

Technical Writer, oVirt
redhat Israel
Cell: +972 52 8644 491

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] New post on oVirt blog: Introducing High Performance Virtual Machines

2017-10-31 Thread FERNANDO FREDIANI


On 31/10/2017 11:11, Yaniv Kaul wrote:




DPDK for sure is a fantastic feature for networking environments.


A bit over-rated, for most workloads, if you ask me...
Currently requires a bit too much configuration (in my opinion), but 
certainly there are workloads who critically need it.

Y.

Agreed




Fernando


On 31/10/2017 05:56, Yaniv Kaul wrote:



On Mon, Oct 30, 2017 at 9:33 PM, Vinícius Ferrão
mailto:fer...@if.ufrj.br>> wrote:

Hello John,

This is very interesting news for HPC guys. Accordingly to
the blog post there's a new “CPU passthrough” function. Which
is interesting.

Do you guys are targeting which market? I’m looking forward
for virtual nodes on a HPC environment.


Any intensive workload, CPU and memory bound especially, would
benefit from the configuration.
In memory DBs (SAP Hana, Redis and friends) for example,
MapReduce (Hadoop), etc.

For some workloads, low latency networking is also important
(especially for nodes inter-communication) and we are looking at
DPDK for it. See[1].

Y.

[1] https://www.ovirt.org/blog/2017/09/ovs-dpdk/



Thanks,
V.


On 30 Oct 2017, at 07:38, John Marks mailto:jma...@redhat.com>> wrote:

Hello!
Just a quick heads up that there is a new post on the oVirt
blog:

Introducing High Performance Virtual Machines


In a nutshell:

oVirt 4.2.0 Alpha, released on September 28, features a new
high performance virtual machine type. It brings VM
performance closer to bare metal performance. Read the blog
post.


See you on the oVirt blog!

Best,

John
-- 
John Marks

Technical Writer, oVirt
redhat Israel
Cell: +972 52 8644 491

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ANN] oVirt 4.2.0 First Beta Release is now available for testing

2017-10-31 Thread FERNANDO FREDIANI
Great. A much better Admin Portal than the usual one. Congratulations. 
Hope it keeps getting improvements, as it's very much welcome and needed.


Fernando

On 31/10/2017 10:13, Sandro Bonazzola wrote:


The oVirt Project is pleased to announce the availability of the First 
Beta Release of oVirt 4.2.0, as of October 31st, 2017



This is pre-release software. This pre-release should not be used 
in production.


Please take a look at our community page[1] to learn how to ask 
questions and interact with developers and users.All issues or bugs 
should be reported via oVirt Bugzilla[2].


This update is the first beta release of the 4.2.0 version. This 
release brings more than 230 enhancements and more than one thousand 
bug fixes, including more than 380 high or urgent severity fixes, on 
top of oVirt 4.1 series.



What's new in oVirt 4.2.0?

 *

The Administration Portal has been completely redesigned using
Patternfly, a widely adopted standard in web application design.
It now features a cleaner, more intuitive design, for an improved
user experience.

 *

There is an all-new VM Portal for non-admin users.

 *

A new High Performance virtual machine type has been added to the
New VM dialog box in the Administration Portal.

 *

Open Virtual Network (OVN) adds support for Open vSwitch software
defined networking (SDN).

 *

oVirt now supports Nvidia vGPU.

 *

The ovirt-ansible-roles package helps users with common
administration tasks.

 *

Virt-v2v now supports Debian/Ubuntu based VMs.


For more information about these and other features, check out the 
oVirt 4.2.0 blog post.



This release is available now on x86_64 architecture for:

* Red Hat Enterprise Linux 7.4 or later

* CentOS Linux (or similar) 7.4 or later


This release supports Hypervisor Hosts on x86_64 and ppc64le 
architectures for:


* Red Hat Enterprise Linux 7.4 or later

* CentOS Linux (or similar) 7.4 or later

* oVirt Node 4.2 (available for x86_64 only)


See the release notes draft [3] for installation / upgrade 
instructions and a list of new features and bugs fixed.



Notes:

- oVirt Appliance is already available.

- An async release of oVirt Node will follow soon.


Additional Resources:

* Read more about the oVirt 4.2.0 release highlights: 
http://www.ovirt.org/release/4.2.0/ 


* Get more oVirt project updates on Twitter: https://twitter.com/ovirt

* Check out the latest project news on the oVirt blog: 
http://www.ovirt.org/blog/



[1] https://www.ovirt.org/community/ 

[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt 



[3] http://www.ovirt.org/release/4.2.0/ 



[4] http://resources.ovirt.org/pub/ovirt-4.2-pre/iso/ 



--

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA 

  
TRIED. TESTED. TRUSTED. 




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Upload img file to Storage Domain

2016-06-08 Thread Fernando Frediani

Hi there,

I'm spending a fair amount of time trying to find out how (if possible) to 
upload a .img image to an oVirt Storage Domain and be able to mount it in 
a VM as a disk.


It is an OpenWRT image and there is no OVF for it, so it's a raw image 
which I wanted to use as a disk. I tried both engine-iso-uploader and 
engine-image-uploader but they refuse for different reasons.


Is that possible at all ?

Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upload img file to Storage Domain

2016-06-09 Thread Fernando Frediani

Thanks for that.
That worked.

I installed a temporary Linux distribution, downloaded the .img file to 
its filesystem, did a dd to /dev/vdb, removed the temporary Linux 
distribution disk, and then let the VM boot from the remaining disk. Far 
from ideal, but at least it worked.
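
Roughly, the steps inside the helper VM were something like the sketch 
below (the image URL and paths are just hypothetical examples; /dev/vdb is 
the blank disk that stays attached to the final VM):

  # inside the temporary helper VM
  wget http://example.com/openwrt-x86-64-combined-ext4.img -O /tmp/openwrt.img
  # write the raw image onto the second virtual disk
  dd if=/tmp/openwrt.img of=/dev/vdb bs=1M conv=fsync
  sync
  # shut down, detach /dev/vdb and build the new VM around that disk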


Fernando

Em 08/06/2016 10:12, Barak Korren escreveu:

On 8 June 2016 at 15:58, Fernando Frediani  wrote:

Hi there,

I'm spending a fair amount of time to find out how (if possible) to upload a
.img image to a oVirt Storage Domain and be able to mount it in a VM as a
disk.

It is a OpenWRT image and there is no OVF from it, so it's a raw image which
I wanted to use as a disc. Tried both with engine-iso-uploader and
engine-image-uploader but they refure for diferent reasons.

Is that possible at all ?


Uploading of QCOW images from GUI will hopefully land in 4.0.

In the meantime you can work around it like this:
1. Create a VM and install centos/some other Linux on it
2. Create a new VM disk and attach to VM, not the disk device
3. Copy the image from the file to the disk device with virt-resize.
4. Detach the disk from the VM and build a new VM around it

Optionally:
1. Convert the VM from step #4 to a template
2. Re use the VM from step #1 to upload more images.

HTH,



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] which NIC/network NFS storage is using

2016-06-14 Thread Fernando Frediani
I guess what the colleague wants to know is how to specify an interface 
in a different VLAN on top of the 10Gb LACP bond in order for the NFS 
traffic to flow through it.
In the VMware world that would be a vmkernel interface, so a new 
network/interface with a different IP address than Management (ovirtmgmt).
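
In oVirt terms, I believe that means creating a non-VM logical network on 
the 10GbE bond (via Setup Host Networks) with an IP in the same subnet as 
the NFS server; the kernel then routes the NFS traffic over that interface 
by itself. A quick way to verify on a host (the address and the bond1.100 
name below are hypothetical examples):

  # which interface will the NFS server be reached through?
  ip route get 10.10.10.5          # should show dev bond1.100 (or the bridge on top of it)
  # confirm the established NFS (TCP/2049) connection uses it
  ss -tn dst 10.10.10.5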


Fernando

Em 14/06/2016 13:52, Ryan Mahoney escreveu:

Right, but how do you specify which network the nfs traffic is using?

On Tue, Jun 14, 2016 at 12:41 PM, Nir Soffer > wrote:


On Tue, Jun 14, 2016 at 5:26 PM, Ryan Mahoney
mailto:r...@beaconhillentertainment.com>> wrote:
> On my hosts, I have configured a 1gbe nic for ovirtmgmt whose
usage is
> currently setup for Management, Display, VM and Migration. I
also have a 2
> 10gbe nics bonded LACP which are VLAN tagged and assigned the
dozen or so
> VLANS needed for the various VM's to access.  I have NFS storage
mounted to
> the Data Center, and I would like to know how I check/specify
which network
> connection ovirt is using for that NFS storage.  I want to make
sure it is
> utilizing the 10gbe bond on each host vs using the 1gbe connection.

We don't configured anything regarding network used for nfs
storage, so it works
just like any other nfs mount you create yourself.

Nir




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Storage types in oVirt

2016-06-14 Thread Fernando Frediani

Hi there,

I see that the supported storage types in oVirt are: iSCSI, FCoE, NFS, Local 
and Gluster.
Specifically speaking about iSCSI and FCoE, I see they use LVM at the 
block storage level to store the Virtual Machines.


I just wanted to understand why the choice was to have LVM and if that 
is the only option at the moment.


Was it ever considered to have something like GFS2 or OCFS2, in comparison 
with VMFS5, with VMs running as qcow2 or raw files on top of it?


I don't like LVM and have a strong preference for anything related to 
storage that doesn't use it, which is the reason I'm looking for a different 
way to use block storage without it having to be LVM.


Thanks
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Storage types in oVirt

2016-06-14 Thread Fernando Frediani

Hi Nir,
Thanks for clarification.

Answering your questions: the intent was to use a POSIX-like filesystem 
similar to VMFS5 (GFS2, OCFS2, or other) where you have no choice about 
how the block storage is presented to multiple servers. Yes, I heard 
about GFS2 scaling issues in the past, but thought they had been resolved 
by now; it seems not.


I had the impression that qcow2 images have both thin-provisioning and 
snapshot capabilities.


Regarding LVM, I don't like the idea of having VMs buried in an LVM 
volume, nor the idea of troubleshooting LVM volumes when necessary. 
Dealing with qcow2 images for every VM separately makes things much 
easier for several tasks. I would say that people coming from 
VMware would prefer to deal with a VMDK rather than an RDM LUN. On the 
other hand, I have nothing to say about LVM performance.


Best
Fernando


Em 14/06/2016 16:35, Nir Soffer escreveu:

On Tue, Jun 14, 2016 at 8:59 PM, Fernando Frediani
 wrote:

Hi there,

I see that supported storage types in oVirt are: iSCSI, FCoE NFS, Local and
Gluster.

We support iSCSI, FC, FCoE, NFS, Gluster, Ceph, Local and any posix like
shared file system.


Specifically speaking about iSCSI and FCoE I see they use LVM on the block
storage level to store the Virtual Machines.

To be more specific, we use lvm to create volumes. Each virtual machine disk
use one volume and additional volume for each snapshot.


I just wanted to understand why the choice was to have LVM

What would use use instead?


and if that is
the only option at the moment.

This is the only option for block storage if you need snapshots or thin
provisioning.

If preallocated disk without snapshots is good enough for you, you
can attach a LUN directly to a vm. This will give the best performance.


Was ever considered to have something like GFS2 ou OCFS2 in comparison with
VMFS5 and VMs running in qcow2 ou raw files on the top of it ?

Any posix compatible file system can be used, using raw or qcow2 files.

You can use GFS2, but I heard that it does not scale well.


I don't like LVM and have a strong preference for anything related to
storage that doesn't use it so the reason I'm looking for a different way to
use block storage without it having to be a LVM.

You can use one of the file based storage options, or ceph.

Whats wrong with lvm?


Nir


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Storage types in oVirt

2016-06-14 Thread Fernando Frediani

Hi Nir,

I wouldn't say that the performance coming from LVM is significantly 
better than from a filesystem if the latter is well built. In VMware the 
performance of a VMDK running on top of VMFS5 versus an RDM shows 
no significant gain of one over the other. I've always preferred to have 
machines in a filesystem for the ease of management. In some cases, with 
hundreds of them in a single filesystem, I never faced performance issues. 
The bottleneck is normally down to the storage architecture (storage 
controller, RAID config, etc).


The multipath is certainly a plus that helps in certain cases.

I guess the answer to my original question is clear. If I want to use 
block storage shared among different hosts there is no choice in oVirt 
other than LVM.
In one particular case I have storage shared via a kind of internal SAS 
backplane to all servers. The only alternative to that would be to dedicate 
a server to own the storage and export it as NFS, but in that case there 
would be some losses in terms of hardware and reliability.


Thanks
Fernando

On Tue, Jun 14, 2016 at 11:23 PM, Fernando Frediani 
 wrote:

Hi Nir,
Thanks for clarification.

Answering your questions: The intent was to use a Posix like filesystem
similar to VMFS5 (GFS2, OCFS2, or other) where you have no choice for how
the block storage is presented to multiple servers. Yes I heard about GFS2
escalation issues in the past, but thought it had been gone now a days, it
seems not.

I had the impression that qcow2 images have both thin-provisioning and
snapshot capabilities.

Yes, using file based storage, you have both snapshots and thin provisioning,
this is the most reliable way to get thin provisioning in ovirt.

But then you pay for the file system overhead, where in block storage the qemu
image is using the lv directly.

In block storage we use multipath, so if you have mutiple nics and networks,
you get better reliability and performance.


Regarding LVM I don't like the idea of having VMs buried into a LVM volume
nor the idea of troubleshooting LVM volumes when necessary. Dealing with
qcow2 images for every VM separately makes things much easier for doing
several tasks. I would say that people coming from VMware would prefer to
deal with a VMDK rather than a RDM LUN. In the other hand I have nothing to
say about LVM performance.

LVM has its own issues with many lvs on the same vg - we recommend to
use up to 350 lvs per vg. If you need more, you need to use another vg.

The best would be to try both and use the best storage for the particular
use case.

Nir


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Install oVirt-Node to USB Stick / SD Card

2016-06-16 Thread Fernando Frediani

Hi,

What is the current status of installing oVirt-Node to an internal USB 
Stick or SD Card ?
Is it customized to run in memory after boot and only write config 
changes to the storage or are there any concerns or caveats necessary 
when using it that way ?


Thanks
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine on Ceph RBD

2016-06-16 Thread Fernando Frediani

+1


On 16/06/2016 23:14, Bond, Darryl wrote:

Has there been any consideration of allowing the hosted engine to be installed 
on a Ceph rbd.

I'm not suggesting using cinder but addressing the rbd directly in the hosted 
engine install process.


This would allow ceph only hosting of oVirt without another replicated storage 
for the engine.


Darryl




The contents of this electronic message and any attachments are intended only 
for the addressee and may contain legally privileged, personal, sensitive or 
confidential information. If you are not the intended addressee, and have 
received this email, any transmission, distribution, downloading, printing or 
photocopying of the contents of this message or attachments is strictly 
prohibited. Any legal privilege or confidentiality attached to this message and 
attachments is not waived, lost or destroyed by reason of delivery to any 
person other than intended addressee. If you have received this message and are 
not the intended addressee you should notify the sender by return email and 
destroy all copies of the message and any attachments. Unless expressly 
attributed, the views expressed in this email do not necessarily represent the 
views of the company.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Install oVirt-Node to USB Stick / SD Card

2016-06-20 Thread Fernando Frediani

Thanks Yaniv,

This is something pretty relevant when designing a platform. Shaving 
the cost of disks can be significant, as well as possibly the cost of 
an unnecessary RAID controller.

I personally prefer to run diskless Hypervisor nodes.

Regards,
Fernando

Em 20/06/2016 07:24, Yaniv Dary escreveu:
The next gen node should work similarly to CentOS or Fedora in this 
aspect.
I would check if this is possible via the platform and it should work 
the same for the virt use case.


Yaniv Dary Technical Product Manager Red Hat Israel Ltd. 34 Jerusalem 
Road Building A, 4th floor Ra'anana, Israel 4350109 Tel : +972 (9) 
7692306 8272306 Email: yd...@redhat.com <mailto:yd...@redhat.com> IRC 
: ydary


On Thu, Jun 16, 2016 at 11:50 PM, Fernando Frediani 
<fernando.fredi...@upx.com.br> wrote:


Hi,

What is the current status of installing oVirt-Node to an internal
USB Stick or SD Card ?
Is it customized to run in memory after boot and only write config
changes to the storage or are there any concerns or caveats
necessary when using it that way ?

Thanks
Fernando
___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Setup new enviroment

2016-06-21 Thread Fernando Frediani

Hello,

If you have 3 x 2TB disks, one on each server, why not use a Distributed 
Storage and have redundancy ?


Fernando

Em 21/06/2016 02:32, Andy Michielsen escreveu:

Hello all,

I was just wondering what your opinions would be in setting up a new oVirt 
enviroment.

I have 4 old servers, 3 with 64 Gigs of ram and 2 tera's of disk space an one 
with 32 Gigs of ram and 1,2 Gb. Each has 2 hexcore cpu's. Each server has at 
least 2 nic's.

I would like to use each server in a seperate cluster with there own local 
storage as this would all only be used as a test enviroment but still would 
like to manage them from one interface. Deploy new vm's from templates etc.

Can I still use the all in one installation for engine and node ? Or can I 
install the engine on a seperate host, physical or virtual.

What would be a good way to use the network ?

Any advice would be greatly appriciated. Thanks in advance.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Setup new enviroment

2016-06-21 Thread Fernando Frediani
Then you still have this option, and by spreading the VMs across the 3 hosts 
I believe you will have the same space to handle things, plus the 
redundancy a cluster gives.
If you don't want to waste any disk space you may still use distributed 
storage but with no redundancy (striped volumes). The downside is that 
you cannot turn off any of the servers without turning off all the VMs.
Or, if you can compromise a bit, you can have a middle ground, which is 
disperse volumes.
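
Just as a rough reference, on the Gluster side the difference would look 
something like this (brick paths and the volume name are made up):

  # full copy on every host, 1/3 of the raw space usable
  gluster volume create vmstore replica 3 \
      host1:/gluster/brick1 host2:/gluster/brick1 host3:/gluster/brick1
  # dispersed middle ground: 2 data + 1 redundancy, roughly 2/3 usable
  gluster volume create vmstore disperse 3 redundancy 1 \
      host1:/gluster/brick1 host2:/gluster/brick1 host3:/gluster/brick1
  gluster volume start vmstore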


Fernando

Em 21/06/2016 14:04, Andy Michielsen escreveu:

Hello Fernando,

I would like to run as much of windows vm's as I can. I would like to run 30 or 
more with each 6 Gb of ram and 100Gb of storage. And I need a DEV, QUA and PRD 
enviroment. So I would like to keep as much as disk spacecas posible.

Kind regards

Verstuurd vanaf mijn iPad


Op 21 jun. 2016 om 14:20 heeft Fernando Frediani  
het volgende geschreven:

Hello,

If you have 3 x 2TB disks, one on each server, why not use a Distributed 
Storage and have redundancy ?

Fernando

Em 21/06/2016 02:32, Andy Michielsen escreveu:

Hello all,

I was just wondering what your opinions would be in setting up a new oVirt 
enviroment.

I have 4 old servers, 3 with 64 Gigs of ram and 2 tera's of disk space an one 
with 32 Gigs of ram and 1,2 Gb. Each has 2 hexcore cpu's. Each server has at 
least 2 nic's.

I would like to use each server in a seperate cluster with there own local 
storage as this would all only be used as a test enviroment but still would 
like to manage them from one interface. Deploy new vm's from templates etc.

Can I still use the all in one installation for engine and node ? Or can I 
install the engine on a seperate host, physical or virtual.

What would be a good way to use the network ?

Any advice would be greatly appriciated. Thanks in advance.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.0.0 Fourth Release candidate is now available for testing

2016-06-21 Thread Fernando Frediani

Nice Rafael. Thanks for the update.

Is the Next Generation Node the one that will allow a supported 
installation and running from a USB Stick or SD Card instead of a 
traditional disk?


Fernando

Em 21/06/2016 12:16, Rafael Martins escreveu:

The oVirt Project is pleased to announce the availability of the Fourth
Release Candidate of oVirt 4.0.0 for testing, as of June 21th, 2016

This is pre-release software. Please take a look at our community page[1]
to know how to ask questions and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This pre-release should not to be used in production.

This release is available now for:
* Fedora 23
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later
* Fedora 23
* oVirt Next Generation Node 4.0

See the release notes draft [3] for installation / upgrade instructions and
a list of new features and bugs fixed.

Notes:
* A new oVirt Live ISO is already available. [4]
* A new oVirt Next Generation Node will be available soon.
* A new oVirt Engine Appliance is already available.
* A new oVirt Guest Tools ISO will be available soon.
* Mirrors[5] might need up to one day to synchronize.

Additional Resources:
* Read more about the oVirt 4.0.0 release candidate highlights:
   http://www.ovirt.org/release/4.0.0/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
   http://www.ovirt.org/blog/

[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.0.0/
[4] http://resources.ovirt.org/pub/ovirt-4.0-pre/iso/
[5] http://www.ovirt.org/Repository_mirrors#Current_mirrors

--
Rafael Martins
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Network redundancy with Manual balancing

2016-06-23 Thread Fernando Frediani

Hello,

In VMware it is possible to bond two network interfaces and, for each 
Portgroup (equivalent to a VLAN), it is possible to tell which of the 
physical interfaces you wish the traffic to flow through primarily and which 
stays as secondary (bond mode=1 equivalent). So for certain VLANs 
(Management, Live Migration, etc.) it is possible to force the traffic to 
flow via one physical NIC of the bond, while for other VLANs (Virtual 
Machines' traffic) it goes out via the other NIC, with failover to each 
other should a cable or switch fail.


In oVirt it is also possible to have bonds, but would it still be 
possible to do the same and favor the traffic on a per-VLAN basis?


Thanks

Fernando

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt/RHEV and HP Blades and HP iSCSI SAN

2016-06-24 Thread Fernando Frediani

Hello Colin,

I know well all the equipment you have in your hands, as I used to work 
with it for a long time. Great stuff, I can say.


All seems OK from what you describe, except the iSCSI network, which 
should not be a bond but two independent VLANs (and subnets) using 
iSCSI multipath. A bond works, but it's not the recommended setup for 
these scenarios.
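
For reference, the host-side setup usually looks something like the sketch 
below (interface names and the portal IP are hypothetical; one iSCSI iface 
per NIC, each on its own VLAN/subnet, and I believe newer oVirt versions 
can also manage this from the Data Center's iSCSI multipathing settings):

  iscsiadm -m iface -I iscsi-eno5 --op new
  iscsiadm -m iface -I iscsi-eno5 --op update -n iface.net_ifacename -v eno5
  iscsiadm -m iface -I iscsi-eno6 --op new
  iscsiadm -m iface -I iscsi-eno6 --op update -n iface.net_ifacename -v eno6
  # discover and log in through both interfaces, then multipathd aggregates the paths
  iscsiadm -m discovery -t st -p 10.1.10.10 -I iscsi-eno5 -I iscsi-eno6
  iscsiadm -m node -L all
  multipath -ll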


Fernando

On 24/06/2016 22:12, Colin Coe wrote:

Hi all

We run four RHEV datacenters, two PROD, one DEV and one 
TEST/Training.  They are all  working OK but I'd like a definitive 
answer on how I should be configuring the networking side as I'm 
pretty sure we're getting sub-optimal networking performance.


All datacenters are housed in HP C7000 Blade enclosures. The PROD 
datacenters use HP 4730 iSCSI SAN clusters, each datacenter has a 
cluster of two 4730s. These are configured RAID5 internally with 
NRAID1. The DEV and TEST datacenters are using P4500 iSCSI SANs and 
each datacenter has a cluster of three P4500s configured with RAID10 
internally and NRAID5.


The HP C7000 each have two Flex10/10D interconnect modules configured 
in a redundant ring so that we can upgrade the interconnects without 
dropping network connectivity to the infrastructure. We use fat RHEL-H 
7.2 hypervisors (HP BL460) and these are all configured with six 
network interfaces:

- eno1 and eno2 are bond0 which is the rhevm interface
- eno3 and eno4 are bond1 and all the VM VLANs are trunked over this 
bond using 802.1q

- eno5 and eno6 are bond2 and dedicated to iSCSI traffic

Is this the "correct" way to do this?  If not, what should I be doing 
instead?


Thanks

CC


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Network redundancy with Manual balancing per VLAN

2016-06-25 Thread Fernando Frediani

Hello,

In VMware it is possible to bond two network interfaces and, for each 
Portgroup (equivalent to a VLAN), it is possible to tell which of the 
physical interfaces underneath it you wish the traffic to flow through 
primarily and which stays as secondary (bond mode=1 equivalent). So for 
certain VLANs (Management, Live Migration, etc.) it is possible to force 
the traffic to flow via one physical NIC of the bond, while for other 
VLANs (Virtual Machines' traffic) it goes out via the other NIC, with 
failover to each other should a cable or switch fail.


This is especially good to better utilize the few NICs available and 
still have redundancy.


In oVirt it is also possible to have bonds, but would it still be 
possible to do the same and favor the traffic on a per-VLAN basis? I guess 
it is something related to the Linux bonding module, but perhaps someone 
has done this already.


Thanks

Fernando

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt and Ceph

2016-06-25 Thread Fernando Frediani

This solution looks interesting.

If I understand it correctly, you first build your Ceph pool, then you 
export an RBD to an iSCSI target, which exports it to oVirt, which will 
then create LVs on top of it?


Could you share more details about your experience ? Looks like a way to 
get CEPH + oVirt without Cinder.
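
If that is the case, I imagine the gateway side is roughly something like 
the sketch below (pool, image and IQN names are made up, and ACLs/portals 
are left out):

  # on the iSCSI gateway host
  rbd create ovirt-pool/ovirt-lun0 --size 1T
  rbd map ovirt-pool/ovirt-lun0            # shows up as /dev/rbd0
  targetcli /backstores/block create name=ovirt-lun0 dev=/dev/rbd0
  targetcli /iscsi create iqn.2016-06.local.lab:ceph-gw
  targetcli /iscsi/iqn.2016-06.local.lab:ceph-gw/tpg1/luns create /backstores/block/ovirt-lun0
  targetcli saveconfig

oVirt would then just see a normal iSCSI LUN and put its LVM storage 
domain on top of it.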


Thanks

Fernando

On 25/06/2016 17:47, Nicolás wrote:

Hi,

We're using Ceph along with an iSCSI gateway, so our storage domain is 
actually an iSCSI backend. So far, we have had zero issues with cca. 
50 high IO rated VMs. Perhaps [1] might shed some light on how to set 
it up.


Regards.

[1]: 
https://www.suse.com/documentation/ses-2/book_storage_admin/data/cha_ceph_iscsi.html
En 24/6/2016 9:28 p. m., Charles Gomes  
escribió:


Hello

I’ve been reading lots of material about implementing oVirt with
Ceph, however all talk about using Cinder.

Is there a way to get oVirt with Ceph without having to implement
entire Openstack ?

I’m already currently using Foreman to deploy Ceph and KVM nodes,
trying to minimize the amount of moving parts. I heard something
about oVirt providing a managed Cinder appliance, have any seen this ?



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Network redundancy with Manual balancing per VLAN

2016-06-27 Thread Fernando Frediani

Thanks for the reply.

Perhaps it is a case of contacting the maintainer of the Linux bonding 
module and seeing if there is room for this feature to be implemented at 
some point. OVS is great for the near future, but the bonding module is 
still something very handy that simplifies things a lot.


Thanks
Fernando

Em 27/06/2016 03:00, Edward Haas escreveu:



On Sun, Jun 26, 2016 at 4:37 PM, Yevgeny Zaspitsky 
mailto:yzasp...@redhat.com>> wrote:


Dan, Edy,

Could you guys answer this?

IIUC, the requirements are:

  * stream the traffic of few VLANs(network roles) through a
single bond
  * be able to bind a VLAN to a bond slave with an option of fallback
  * have redundancy
  * assign different QoS to every VLAN (my addition)

I guess this is a new RFC that we do not support currently, but
would we be able to provide in any future?

-- Forwarded message --
From: *Fernando Frediani* mailto:fernando.fredi...@upx.com.br>>
Date: Sat, Jun 25, 2016 at 11:17 PM
Subject: [ovirt-users] Network redundancy with Manual balancing
per VLAN
To: users@ovirt.org <mailto:users@ovirt.org>


Hello,

In VMware it is possible to bond two network interfaces and for
each Portgroup (equivalent to a VLAN) is possible to tell which of
the physical interfaces underneath it you wish the traffic to flow
primarily and which stays as secondary(bond mode=1 equivalent). So
for certain VLANs (Management, Live Migration, etc) is possible to
force traffic flow via one physical NIC of the bond and for other
VLANs (Virtual Machine's traffic) outs via the other NIC with
failover to each other should a cable or switch fails.

This is specially good for better utilize the fewer NICs available
and still have redundancy.

In oVirt it is also possible to have bonds, but would it still be
possible to do that same and favor the traffic per VLAN basis ? I
guess it is something related to Linux Bond module but perhaps
someone has done this already.


Thanks

Fernando

___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users


Hello Fernando,

As you mentioned, oVirt is using the Linux Bond and the solution you 
are looking for is not supported.
The oVirt way to handle this is by applying QoS on the networks, 
providing the guaranteed rates for each and utilizing the bond for 
throughput beyond the one link limit.


With the introduction of OVS as an alternative networking 
infrastructure for the hosts, you could create a hook that implements 
some special functionality, but ovs is not in yet.


Thanks,
Edy.




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Network redundancy with Manual balancing per VLAN

2016-06-27 Thread Fernando Frediani

Well, yes. That's actually the name VMware uses.

If it brings similar functionality then that's the solution. Thanks for 
sharing; it looks pretty interesting.
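
From a quick look, the team driver at least lets you pin a preferred port 
with per-port priorities; it is still per-team rather than per-VLAN, so one 
team per traffic group would be needed to mimic the VMware behaviour. A 
rough sketch, with made-up interface names:

  cat > /etc/teamd/team0.conf <<'EOF'
  {
    "device": "team0",
    "runner": { "name": "activebackup" },
    "link_watch": { "name": "ethtool" },
    "ports": {
      "eno1": { "prio": 100, "sticky": true },
      "eno2": { "prio": 50 }
    }
  }
  EOF
  teamd -d -f /etc/teamd/team0.conf
  teamdctl team0 state     # shows which port is currently active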


Fernando

Em 27/06/2016 09:39, Fabrice Bacchella escreveu:

Isn't teaming[1] the futur, instead of bonding ?

[1] http://rhelblog.redhat.com/2014/06/23/team-driver/


Le 27 juin 2016 à 14:31, Fernando Frediani 
mailto:fernando.fredi...@upx.com.br>> 
a écrit :


Thanks for the reply.

Perhaps is the case of contacting the maintainer of Linux Bond module 
and see if there is room for this feature to be implement anytime. 
OVS is great in the coming future, but Bond module is still something 
very handy that simplify the things a lot.


Thanks
Fernando

Em 27/06/2016 03:00, Edward Haas escreveu:



On Sun, Jun 26, 2016 at 4:37 PM, Yevgeny Zaspitsky 
 wrote:


Dan, Edy,

Could you guys answer this?

IIUC, the requirements are:

  * stream the traffic of few VLANs(network roles) through a
single bond
  * be able to bind a VLAN to a bond slave with an option of
fallback
  * have redundancy
  * assign different QoS to every VLAN (my addition)

I guess this is a new RFC that we do not support currently, but
would we be able to provide in any future?

-- Forwarded message --
From: *Fernando Frediani* 
Date: Sat, Jun 25, 2016 at 11:17 PM
Subject: [ovirt-users] Network redundancy with Manual balancing
per VLAN
To: users@ovirt.org <mailto:users@ovirt.org>


Hello,

In VMware it is possible to bond two network interfaces and for
each Portgroup (equivalent to a VLAN) is possible to tell which
of the physical interfaces underneath it you wish the traffic to
flow primarily and which stays as secondary(bond mode=1
equivalent). So for certain VLANs (Management, Live Migration,
etc) is possible to force traffic flow via one physical NIC of
the bond and for other VLANs (Virtual Machine's traffic) outs
via the other NIC with failover to each other should a cable or
switch fails.

This is specially good for better utilize the fewer NICs
available and still have redundancy.

In oVirt it is also possible to have bonds, but would it still
be possible to do that same and favor the traffic per VLAN basis
? I guess it is something related to Linux Bond module but
perhaps someone has done this already.


Thanks

Fernando

___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users


Hello Fernando,

As you mentioned, oVirt is using the Linux Bond and the solution you 
are looking for is not supported.
The oVirt way to handle this is by applying QoS on the networks, 
providing the guaranteed rates for each and utilizing the bond for 
throughput beyond the one link limit.


With the introduction of OVS as an alternative networking 
infrastructure for the hosts, you could create a hook that 
implements some special functionality, but ovs is not in yet.


Thanks,
Edy.




___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Run oVirt Node in SD Card/USB Stick

2016-07-06 Thread Fernando Frediani

Hello there,

With the oVirt 4.0 release, is running oVirt Node from an SD Card or USB 
Stick supported, where the system boots into memory and only writes 
configuration changes to permanent storage, similar to what VMware ESXi does?


This is very useful and can save a significant amount on CAPEX and 
running costs depending on the size of the cluster.


Thanks
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Run oVirt Node in SD Card/USB Stick

2016-07-06 Thread Fernando Frediani

Hi Yaniv,

I have already done a fair amount of tuning to run a minimal OS from a 
USB stick and it seems to work reasonably well over time, but nothing 
rock solid, and of course I wouldn't try it myself on a production oVirt 
Node if that's not official.


Even if it's not running in memory, it's just a question of creating a 
scheme to avoid all unnecessary writes to permanent storage. Logs can be 
limited to a short period in memory (on another console) or sent to a 
remote syslog server. It doesn't change much for the base OS to read 
anything it needs.
I thought I had seen this years ago during the development of the first 
versions of oVirt Node, but maybe I misunderstood or it was not 
considered for newer versions.
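
The write-avoidance part is simple enough to sketch (the tmpfs size and the 
log server name below are arbitrary examples):

  # keep /var/log in RAM
  echo "tmpfs /var/log tmpfs defaults,size=256m 0 0" >> /etc/fstab
  # and forward everything to a remote syslog server over TCP
  echo '*.* @@logserver.example.com:514' > /etc/rsyslog.d/90-remote.conf
  systemctl restart rsyslog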


Perhaps there is something around this on some roadmap. As I mentioned, 
this is a significant saving for any platform not having to use any 
disks in the Compute Nodes.


Regards,
Fernando


Em 06/07/2016 10:40, Yaniv Dary escreveu:

oVirt node depends on the base OS support of the feature (Fedora\CentOS).
I have seen people do this online, but nothing official, so you can 
try it.



Yaniv Dary Technical Product Manager Red Hat Israel Ltd. 34 Jerusalem 
Road Building A, 4th floor Ra'anana, Israel 4350109 Tel : +972 (9) 
7692306 8272306 Email: yd...@redhat.com <mailto:yd...@redhat.com> IRC 
: ydary


On Wed, Jul 6, 2016 at 3:23 PM, Fernando Frediani 
mailto:fernando.fredi...@upx.com.br>> 
wrote:


Hello there,

With oVirt 4.0 Release is running oVirt Node in a SD Card or USB
Stick supported where the system boots in memory and only writes
configuration changes to permanent storage similar to what VMware
ESXi does ?

This is very useful and can save a significant amount on CAPEX and
running costs depending on the size of the cluster.

Thanks
Fernando
___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Multiple FC SAN's and Hosts LSM etc

2016-07-15 Thread Fernando Frediani
One of the things I don't like very much in oVirt is the LVM on Shared 
Block storage, but unfortunately there are no other options. One day 
perhaps a VMFS5 equivalent will come up somewhere.


I would avoid it and put a server in between the SAN and the oVirt nodes, 
using NFS in order to abstract it; that would give you more 
flexibility and keep you away from LVM. But in your case, if you don't 
have 10GbE interfaces, you will lose performance using 1GbE interfaces.
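
On that middle server the export side would be no more than something like 
this (device, path and subnet are hypothetical; uid/gid 36:36 is what 
vdsm/kvm expect on an oVirt NFS domain):

  mkfs.xfs /dev/mapper/san_lun0
  mkdir -p /exports/vmstore
  mount /dev/mapper/san_lun0 /exports/vmstore
  chown 36:36 /exports/vmstore
  echo "/exports/vmstore 10.0.0.0/24(rw,no_root_squash)" >> /etc/exports
  exportfs -ra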


With regard to the migration of VMs between different CPU families, I'm 
not sure what oVirt's tolerance is. Depending on the CPUs 
and the cluster CPU type you may be able to do migrations, so I would keep 
different CPU families in different clusters.


Fernando

Em 15/07/2016 07:07, Neil escreveu:

Hi guys,

I'm soon going to have the following equipment and I'd like to find 
the best way to utilise it


1.) One NEW FC SAN, and 3 new Dell Hosts with FC cards.

2.) One OLD FC SAN with 2 older HP hosts with FC cards. (old VMWare 
environment)


3.) Another OLDER FC SAN with 2 older HP Hosts with FC cards. (old 
VMWare environment)


4.) I have an existing oVirt 3.5 DR cluster with two hosts and NFS 
storage that is current in use and works well.


Each of the above SAN's will only have FC ports to connect to their 
existing hosts, so all hosts won't be connected to all SAN's. All 
hosts would be the same Centos 7.x release etc.


All existing VM's are going to be moved to the option 1 via a 
baremetal restore from backup onto a NEW oVirt platform. Once 
installed I'd then like to re-commission 2 and 3 above to make use of 
the old hardware and SAN's as secondary or possibly a "new" DR 
platform to replace or improve on option 4.


Bearing in mind the older hardware will be different CPU generations, 
would it be best to add the older hosts and SAN's as new clusters 
within the same NEW oVirt installation? Or should I rather just keep 
2, 3 and 4 as separate oVirt installations?


I know in the past live migration wouldn't work with different CPU 
generations, and of course my SAN's won't be physically connected to 
each of the hosts.


In order to move VM's between 1, 2 and 3 would I need to shut the VM 
down and export and import, or is there another way to do this?


Could LSM work between across all three SANS and hosts?

I know I can do a baremetal restore from backup directly onto either 
1, 2 or 3 if needed, but I'd like to try tie all of this into one 
platform if there is good reason to do so. Any thoughts, suggestions 
or ideas here?


Any guidance is greatly appreciated.

Thank you

Regards.

Neil Wilson.






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt + Gluster Hyperconverged

2016-07-15 Thread Fernando Frediani

Hi folks,

I have a few servers with a reasonable amount of raw storage, but there 
are 3 of them with only 8GB of memory each.
I wanted to set them up as oVirt hyperconverged + Gluster, mainly to 
take advantage of the storage spread between them and to have the 
ability to live migrate VMs.


Question is: does running Gluster on the same hypervisor nodes consume 
so much memory that there won't be much left for running VMs?


Thanks
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HTML5 Console

2016-07-31 Thread Fernando Frediani
The console in oVirt is pretty annoying to get working in certain cases. 
This is certainly something for the developers to take up and think about 
how to make it a bit easier and more straightforward.


On 31/07/2016 00:12, Anantha Raghava wrote:


Hi,

How do we start the HTML5 console for the Virtual Machines?

Under console options, when we set the client console as HTML 5, only 
black screen appears. The same is the case for noVNC as well.



Any suggestions to set this right?
--

Thanks & Regards,

Anantha Raghava




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] LVM2 Thinprovisioned

2016-08-09 Thread Fernando Frediani

Hello all.

When you use oVirt with block storage the only option available to 
store the VMs is LVM.


Does LVM in oVirt use thin provisioning (supported in LVM2) instead of 
having to use the SAN's thin provisioning features?


Fernando

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Convert OnApp VM to oVirt

2016-08-11 Thread Fernando Frediani

Hi,

Has anyone done a VM conversion from OnApp (therefore running in LVM) to 
oVirt/RHEV format, either buried in LVM or in a QCOW2 file?


I've seen some instructions using qemu-img, but wanted to find out if 
anyone has found any issues in this process. Any adjustments to be done 
before booting the VM for the first time on the new platform?

Both hypervisors are KVM.
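
For what it's worth, the conversion itself should be a one-liner (the LV 
path and output name below are hypothetical, and the VM should be shut 
down first):

  qemu-img convert -p -f raw -O qcow2 /dev/onapp-vg/vm-disk-1 /tmp/vm-disk-1.qcow2
  qemu-img info /tmp/vm-disk-1.qcow2

After importing the qcow2 into oVirt, the usual things to check inside the 
guest are /etc/fstab and the bootloader/initrd, in case the disk device 
names change (e.g. vda vs sda) between the two platforms.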

Fernando

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] LVM2 Thinprovisioned

2016-08-11 Thread Fernando Frediani

Really? That's pretty bad and unfortunately another downside.

The fact that the only option for block storage is LVM (so there is no 
suitable clustered filesystem to run QCOW2 files on), and now that LVM2 
thin provisioning is not supported, can be a real issue where 
thin provisioned LUNs are not available in the storage controller.


Have these topics ever come up in the feature/product management 
meetings, and are any of them being considered?
The same way CPU and memory overprovisioning are key features to 
justify the overall solution cost, storage thin provisioning is equally 
necessary, otherwise the storage ends up costing more than half of the 
whole platform solution.


Thanks for the answer anyway. Hopefully at least LVM2 thin provisioning 
comes along sometime soon.


Fernando


Em 11/08/2016 12:54, Nir Soffer escreveu:

On Tue, Aug 9, 2016 at 3:16 PM, Fernando Frediani
 wrote:

Hello all.

When you use oVirt with a Block Storage the only option available to store
de VMs is LVM.

Does LVM in oVirt use Thinprovisoned (supported in LVM2) instead of having
to use the SAN Thinprovisioned features ?

No, we use regular lvs. thin pool are not supported in a cluster.

Using thin provisioned LUN for ovirt storage domain is the best option
(supported
since 4.0). We discard removed lvs, so you get back the storage on the storage
server and can use it for other thin provisioned LUNs.

Nir


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] LVM2 Thinprovisioned

2016-08-11 Thread Fernando Frediani
I use LVM2 and thin provisioned LVs to put filesystems on, and it works 
with no issues. It's just a question of handling it correctly and telling 
it how to create each storage chunk that way. In the same way, those LVs 
can be used to run VMs just as they are in traditional LVM.
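
On a standalone (non-clustered) host what I do is just this (sizes and 
names are arbitrary):

  lvcreate -L 500G -T vg0/thinpool
  lvcreate -V 100G -T vg0/thinpool -n vm_disk1
  lvs -a vg0        # Data% shows how much of the pool is really allocated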


Not sure what you mean by core Linux not supporting it.

Fernando


Em 11/08/2016 14:43, Chris Adams escreveu:

Once upon a time, Fernando Frediani  said:

Thanks for the answer anyway. Hopefully at least LVM2
Thinprovisioning comes up anytime soon.

This has nothing to do with oVirt; it is something the core Linux LVM
code does not support.  Last time I looked, nobody was working on it
upstream.

You can still thin-provision VMs in oVirt, there's just not a way to
release space if a VM image shrinks significantly.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] LVM2 Thinprovisioned

2016-08-11 Thread Fernando Frediani

Ok Chris, got what you mean.

Thanks for clarifying.

Fernando


Em 11/08/2016 15:12, Chris Adams escreveu:

Once upon a time, Fernando Frediani  said:

I use LVM2 and Thinprovisioned LVs to put Filesystems and it works
with no issues. It's just a question of handling it correctly to
tell it how to create each storage chunk that way. The same way
those LVs can be used to run VMs as they are in traditional LVM.

Not sure what you mean by cote Linux not supporting it.

To do that with multiple access, you have to be running in clustered LVM
mode, and thin provisioning is not supported with CLVM.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Gluster replication on 1Gb interfaces

2016-08-16 Thread Fernando Frediani

Hi all.

I understand that using 10Gb interfaces with Gluster is advised to 
help with data replication, especially in situations where a node went 
down for a while and needs to re-sync data.


However, can anyone tell if using one dedicated 1Gb interface for it on 
hosts with 1.8 TB of raw storage would still be OK, or can it cause a 
severe impact on performance? What are the chances of a 1Gb NIC being 
saturated during normal operation?
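
As a rough back-of-the-envelope number (assuming the worst case of a full 
heal of all 1.8 TB at 1Gb line rate, about 110 MB/s): 1.8 TB / 110 MB/s is 
roughly 16,000 seconds, i.e. around 4.5 hours with the link saturated. 
During normal operation the replication traffic only mirrors what the VMs 
actually write, so it should usually stay well below that.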


Thanks
Fernando

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt on a single server

2016-09-05 Thread Fernando Frediani
Adding Kimchi to oVirt Node is perhaps the easiest option. It can be 
pretty useful for many situations and doesn't need things like 
mounting NFS on localhost.


It is not nice to no longer have a stable All-in-One solution, as it 
can help with adoption for later growth.


oVirt-Cockpit looks nice and interesting.

Fernando


On 05/09/2016 05:18, Barak Korren wrote:

On 4 September 2016 at 23:45, zero four  wrote:
...

I understand and acknowledge that oVirt is not targeted towards homelab
setups, or at least small homelab setups.  However I believe that having a
solid configuration for such use cases would be a benefit to the project as
a whole.

As others have already mentioned, using the full oVirt  with engine in
a single host scenario can work, but is not currently actively
maintained or tested.

There are other options originating from the oVirt community however.

One notable option is to use the Cockpit-oVirt plugin [1] which can
use VDSM to manage VMs on a single host.

Another option is to use the Kimchi project [2] for which discussion
for making it an oVirt project had taken part in the past [3]. It
seems that also some work for inclusion in oVirt node was also planned
at some point [4].

[1]: http://www.ovirt.org/develop/release-management/features/cockpit/
[2]: https://github.com/kimchi-project/kimchi
[3]: http://lists.ovirt.org/pipermail/board/2013-July/000921.html
[4]: http://www.ovirt.org/develop/release-management/features/node/kimchiplugin/



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hardware setup

2016-09-09 Thread Fernando Frediani
What is the reason for that? A lot of stuff is moving towards 
hyperconvergence, and that saves a lot of hardware and power consumption.


The only thing I would look at with more caution is the amount of memory 
Gluster itself consumes, but if there are enough resources in each host 
for the distributed storage + compute stuff I see no problem with that, 
unless someone has some strong technical reason not to.


Fernando


On 09/09/2016 12:43, Fernando Fuentes wrote:

Bryan,

Just my opinion but I would separate storage away from your compute nodes.

Regards,

--
Fernando Fuentes
ffuen...@txweather.org
http://www.txweather.org



On Fri, Sep 9, 2016, at 10:28 AM, Bryan Sockel wrote:

Hi,

I am looking to put together a configuration setup for a new install 
and was wondering if it is ok for have both gluster and ovirt running 
on the same systems, or if it was better to separate my storage on to 
anther platform?


I am currently looking into 3 servers with 196 GB Ram, dual 6 core 
proc's on each server.  Gluster would be installed on each server 
with a 3 way replica, ovirt would be running as an appliance and each 
node would be part of the ovirt cluster.



_
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Minimal resources for Engine

2016-09-09 Thread Fernando Frediani

Hi there.

I was reading the interesting URL someone sent a while ago 
regarding the hyperconvergence topic 
(https://www.ovirt.org/blog/2016/08/up-and-running-with-ovirt-4-0-and-gluster-storage/) 
and found the point about the optimal amount of resources for an Engine: 
16GB of RAM.


I just wanted to ask what component or feature eats up so much memory 
for that amount to be the recommendation. Or is it just for a hyperconverged 
scenario?


Are there any optional components that could reduce the amount 
of memory needed to run the Engine?


Also, if the Data Warehouse runs on a separate host, what would be 
the reduction in resource consumption, especially memory?


Thanks

Fernando

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Question about Bridge and macvtap

2016-09-22 Thread Fernando Frediani

Hi.

A quick question about the Linux bridge, which is used by default in oVirt, 
versus macvtap, which can be used in libvirt/KVM.


What are the downsides or limitations of using macvtap? Does it have 
any significant performance improvement over a bridge?


Thanks

Fernando

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] GFS2 and OCFS2 for Shared Storage

2016-11-23 Thread Fernando Frediani
Has anyone managed to use GFS2 or OCFS2 for shared block storage between 
hosts? How scalable was it, and which of the two works better?


Using traditional CLVM is far from a good starting point because of the 
lack of thin provisioning, so I'm willing to consider either of the filesystems.


Thanks

Fernando

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GFS2 and OCFS2 for Shared Storage

2016-11-23 Thread Fernando Frediani

Hello Nicolas. Thanks for your reply.

As you correctly said, GlusterFS is not block storage but distributed 
storage. There are scenarios where it simply doesn't apply, 
like shared block storage between physical servers in a chassis, or 
simply shared DAS (Direct Attached Storage). Otherwise you would 
unnecessarily use network throughput which could be better used for other 
things, like legitimate VM traffic, and not get the best performance you 
could by reading/writing directly from/to shared block storage.


Distributed storage is always a great mindset for newer scenarios, but 
it doesn't apply to all of them, and I wouldn't think Red Hat would 
direct people towards a single way.


Fernando


On 23/11/2016 11:11, Nicolas Ecarnot wrote:

Le 23/11/2016 à 13:03, Fernando Frediani a écrit :

Has anyone managed to use GFS2 or OCFS2 for Shared Block Storage between
hosts ? How scalable was it and which of the two work better ?

Using traditional CLVM is far from good starting because of the lack of
Thinprovision so I'm willing to consider either of the Filesystems.

Thanks

Fernando

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Hello Fernando,

Redhat took a clear direction towards the use of GlusterFS for its 
Software-defined storage, and lots of efforts are made to make 
oVirt/RHEV work together smoothly.
I know GlusterFS is not a block storage, but it's worth considering 
it, especially if you intend to setup hyper-converged clusters.




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GFS2 and OCFS2 for Shared Storage

2016-11-23 Thread Fernando Frediani

Are you sure Pavel ?

As far as I know, and it has been discussed on this list before, the 
limitation is in CLVM, which doesn't support thin provisioning yet. LVM2 
does, but not in clustered mode. I tried to use GFS2 in the past 
for other non-virtualization related stuff and didn't have much success 
either.


What about OCFS2 ? Has anyone ?

Fernando


On 23/11/2016 11:26, Pavel Gashev wrote:

Fernando,

oVirt supports thin provisioning for shared block storages (DAS or iSCSI). It 
works using QCOW2 disk images directly on LVM volumes. oVirt extends volumes 
when QCOW2 is growing.

I tried GFS2. It's slow, and blocks other hosts on a host failure.

-Original Message-
From:  on behalf of Fernando Frediani 

Date: Wednesday 23 November 2016 at 15:03
To: "users@ovirt.org" 
Subject: [ovirt-users] GFS2 and OCFS2 for Shared Storage

Has anyone managed to use GFS2 or OCFS2 for Shared Block Storage between
hosts ? How scalable was it and which of the two work better ?

Using traditional CLVM is far from good starting because of the lack of
Thinprovision so I'm willing to consider either of the Filesystems.

Thanks

Fernando

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GFS2 and OCFS2 for Shared Storage

2016-11-23 Thread Fernando Frediani

Right Pavel. Then where is it, or where is the reference to it?

The only way I have heard of is using thin provisioning at the SAN level.

With regard to OCFS2, if anyone has any experience with it, I would like 
to hear about their success (or not) using it.


Thanks

Fernando


On 23/11/2016 11:46, Pavel Gashev wrote:

Fernando,

Clustered LVM doesn’t support lvmthin(7) 
http://man7.org/linux/man-pages/man7/lvmthin.7.html
There is an oVirt LVM-based thin provisioning implementation.

-Original Message-
From: Fernando Frediani 
Date: Wednesday 23 November 2016 at 16:31
To: Pavel Gashev , "users@ovirt.org" 
Subject: Re: [ovirt-users] GFS2 and OCFS2 for Shared Storage

Are you sure Pavel ?

As far as I know and it has been discussed in this list before, the
limitation is in CLVM which doesn't support Thinprovisioning yet. LVM2
does, but it is not in Clustered mode. I tried to use GFS2 in the past
for other non-virtualization related stuff and didn't have much success
either.

What about OCFS2 ? Has anyone ?

Fernando


On 23/11/2016 11:26, Pavel Gashev wrote:

Fernando,

oVirt supports thin provisioning for shared block storages (DAS or iSCSI). It 
works using QCOW2 disk images directly on LVM volumes. oVirt extends volumes 
when QCOW2 is growing.

I tried GFS2. It's slow, and blocks other hosts on a host failure.

-Original Message-
From:  on behalf of Fernando Frediani 

Date: Wednesday 23 November 2016 at 15:03
To: "users@ovirt.org" 
Subject: [ovirt-users] GFS2 and OCFS2 for Shared Storage

Has anyone managed to use GFS2 or OCFS2 for Shared Block Storage between
hosts ? How scalable was it and which of the two work better ?

Using traditional CLVM is far from good starting because of the lack of
Thinprovision so I'm willing to consider either of the Filesystems.

Thanks

Fernando

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users







___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Regarding DAS for Shared Storage

2016-12-22 Thread Fernando Frediani

Hello rex.

I have a very similar situation and I'm interested in finding out how 
people are managing to use DAS in these types of environments.


Thanks
Fernando

On 22/12/2016 10:07, rex wrote:

Hi,

Have a VRTX Chassis enclosure populated with two Dell PowerEdge M620 
Blades and has a built-in DAS storage. Created a 1 TB Virtual Disk 
from the DAS storage using the Dell iDRAC interface. This block device 
is accessible from the two blades (CentOS 7 OS) like shown below,


 [root@node1 ~]# lsblk /dev/sdc

 NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
 sdc8:32   0 1024G  0 disk

 [root@node2 ~]# lsblk /dev/sdc

 NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
 sdc8:32   0 1024G  0 disk

 oVirt rpms has been installed on both the blades with the aim of 
configuring them as oVirt nodes with Live Migration feature. But other 
than the DAS we don't have any storage which can be used as shared 
storage. So can some one please tell me whether the above block disk 
can be used as a shared storage (i.e mounted on the two nodes at the 
same time) and if it is possible how can this be done. Thank you.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Using zRam with oVirt Nodes

2016-12-30 Thread Fernando Frediani

Hello folks.

On plain libvirt/KVM hosts, in order to improve RAM usage and avoid 
swapping to disk, I use zRam with the swap-to-RAM technique. So I create 
zRam swap devices totalling half of the host memory, divided by the 
number of CPU cores. Works pretty well.
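
As a minimal sketch of what I do (assuming, say, 4 cores and 64 GB of RAM, 
so 4 devices of 8 GB each):

  modprobe zram num_devices=4
  for i in 0 1 2 3; do
      echo 8G > /sys/block/zram$i/disksize
      mkswap /dev/zram$i
      swapon -p 10 /dev/zram$i      # higher priority than the disk swap
  done
  swapon -s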


Has anyone tried it with oVirt Nodes or even has it been considered as a 
feature to simply turn on/off ?


Thanks
Fernando

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Using zRam with oVirt Nodes

2016-12-30 Thread Fernando Frediani

Hello, it's the same sort of thing as zswap.

The use case is to be able to put more stuff on a single host without it 
needing to swap to slow disks. You sacrifice some CPU and avoid the much 
slower swap to disk.


Fernando

On 30/12/2016 16:41, Yaniv Kaul wrote:



On Dec 30, 2016 7:06 PM, "Fernando Frediani" 
<fernando.fredi...@upx.com.br> wrote:


Hello folks.

On simple libvirt/KVM hosts in order to improve RAM usage and
avoid swap to disk I use zRam with swap to RAM technique. So
create half of amount of host memory in zRam disk divided by the
number of CPU cores. Works pretty well.

Has anyone tried it with oVirt Nodes or even has it been
considered as a feature to simply turn on/off ?


What exactly is the use case?
I'd use zram for temporary disks, using the VDSM hook for them.
I think you are referring to zswap?
Y.


Thanks
Fernando

___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

