Re: [ovirt-users] Users Digest, Vol 71, Issue 37

2017-08-08 Thread Moacir Ferreira
But if you receive a 9000 MTU frame on an "input" interface and it has to be sent out on an interface with a 1500 MTU, then if the DF bit is set the frame will just be dropped by the router. If you want your data to cross a path with a different MTU, then you cannot set DF to 1. This
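As a quick illustration of the DF behaviour being discussed (a sketch using Linux ping; the target address 10.0.0.2 is a placeholder):

    # -M do sets the DF bit; -s 8972 is a 9000-byte MTU minus 28 bytes
    # of IP + ICMP headers, so this probes a jumbo path end to end.
    ping -M do -s 8972 -c 3 10.0.0.2
    # If any hop on the way only supports 1500, the probe fails with
    # "Frag needed and DF set" instead of being silently fragmented.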

Re: [ovirt-users] Users Digest, Vol 71, Issue 37

2017-08-08 Thread Moacir Ferreira
Exactly Fabrice! In this case the router will fragment the "bigger" MTU frame to fit the "smaller" MTU, but only when DF is not set. However, fragmentation on routers is done by the control plane, meaning you will overload the router CPU if you force it to do too much fragmentation. On a good NIC the announced

Re: [ovirt-users] Good practices

2017-08-08 Thread Johan Bernhardsson
You attach the SSD as a hot tier with a gluster command. I don't think that gdeploy or the oVirt GUI can do it. The Gluster docs and Red Hat docs explain tiering quite well. /Johan On August 8, 2017 07:06:42 Moacir Ferreira wrote: Hi Devin, Please consider that
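For readers looking for the command Johan refers to, attaching a hot tier in GlusterFS 3.8+ looks roughly like this (a sketch; the volume name and brick paths are hypothetical, and the exact syntax varies by Gluster version, so check the tiering docs for yours):

    # Attach one SSD brick per host as a replica-3 hot tier
    gluster volume tier myvolume attach replica 3 \
        host1:/gluster/ssd/brick host2:/gluster/ssd/brick host3:/gluster/ssd/brick
    # Watch promotion/demotion activity, or detach again later
    gluster volume tier myvolume status
    gluster volume tier myvolume detach start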

Re: [ovirt-users] Good practices

2017-08-08 Thread Yaniv Kaul
On Tue, Aug 8, 2017 at 12:03 AM, FERNANDO FREDIANI < fernando.fredi...@upx.com> wrote: > Thanks for the detailed answer Erekle. > > I conclude that it is worth it in any scenario to have an arbiter node in > order to avoid wasting more disk space on RAID X + Gluster replication on > top of it.

Re: [ovirt-users] Good practices

2017-08-08 Thread Fabrice Bacchella
> On 8 Aug 2017 at 08:50, Yaniv Kaul wrote: > > Storage is usually the slowest link in the chain. I personally believe that > spending the money on NVMe drives makes more sense than 40Gb (except [1], > which is suspiciously cheap!) > > Y. > [1] http://a.co/4hsCTqG

[ovirt-users] Reg: Ovirt mouse not responding

2017-08-08 Thread syedquad...@ctel.in
Dear Team, I am using oVirt 3.x on a 3-node CentOS cluster, and Ubuntu 14.04 64-bit VMs are installed on it. The end users who use these VMs are facing some issues daily, as mentioned below: 1. The keyboard stops responding intermittently, and after checking the log

[ovirt-users] Move VM from FC storage cluster to local-storage in another cluster

2017-08-08 Thread Neil
Hi guys, I need to move a VM from one cluster (cluster1), which uses FC storage and has 4 hosts, to a separate cluster (cluster2) with only 1 NEW host that has local storage. What would be the best way to do this? All I aim to achieve is a single NEW host with local storage that I can

Re: [ovirt-users] Good practices

2017-08-08 Thread Yaniv Kaul
On Tue, Aug 8, 2017 at 9:16 AM, Fabrice Bacchella < fabrice.bacche...@orange.fr> wrote: > > On 8 Aug 2017 at 04:08, FERNANDO FREDIANI wrote: > > Even if you have a hardware RAID controller with writeback cache you > will have a significant performance penalty

Re: [ovirt-users] Good practices

2017-08-08 Thread Fabrice Bacchella
> On 8 Aug 2017 at 04:08, FERNANDO FREDIANI wrote: > Even if you have a hardware RAID controller with writeback cache you will > have a significant performance penalty and may not fully use all the > resources you mentioned you have. > Nope again, from my

Re: [ovirt-users] Users Digest, Vol 71, Issue 37

2017-08-08 Thread Yaniv Kaul
On Tue, Aug 8, 2017 at 12:42 AM, Moacir Ferreira wrote: > Fabrice, > > > If you choose to have jumbo frames all over, then when the traffic goes > outside of your "jumbo frames" enabled network it will need to be > fragmented back down again on the way to the destination

Re: [ovirt-users] Good practices

2017-08-08 Thread Moacir Ferreira
Thanks Johan, you brought "light" into my darkness! I went looking for the GlusterFS tiering how-to and it looks quite simple to attach an SSD as a hot tier. For those willing to read about it, go here: http://blog.gluster.org/2016/03/automated-tiering-in-gluster/ Now, I still have a

Re: [ovirt-users] How to shutdown an oVirt cluster with Gluster and hosted engine

2017-08-08 Thread Kasturi Narra
Hi, You can follow the steps below to do that. 1) Stop all the virtual machines. 2) Move all the storage domains other than hosted_storage to maintenance, which will unmount them from all the nodes. 3) Move HE to global maintenance: 'hosted-engine --set-maintenance --mode=global' 4)
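In command form, the hosted-engine part of this sequence looks roughly like the following (a sketch; run on one of the hosted-engine hosts):

    # 3) Put hosted-engine HA in global maintenance
    hosted-engine --set-maintenance --mode=global
    # Then cleanly shut down the engine VM itself
    hosted-engine --vm-shutdown
    # ...power off the nodes; when bringing the cluster back up later:
    hosted-engine --set-maintenance --mode=none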

Re: [ovirt-users] How to shutdown an oVirt cluster with Gluster and hosted engine

2017-08-08 Thread Moacir Ferreira
Sorry Erekle, I am just a beginner... From the hosted engine I can put the two other servers, the ones not hosting the hosted-engine, into maintenance, and that was what I did. When I tried to put the last one into maintenance it did not allow me, due to the hosted-engine, so I forced it

Re: [ovirt-users] Users Digest, Vol 71, Issue 37

2017-08-08 Thread Moacir Ferreira
True! But at some point in the network it may be necessary to bring the MTU back to 1500. For example, if your data needs to cross the Internet, the border router between your LAN and the Internet will have to fragment a large frame back to a normal one to send it over the Internet. This router will

Re: [ovirt-users] Users Digest, Vol 71, Issue 37

2017-08-08 Thread Fabrice Bacchella
The border router will behave like any other router in the world. If the DF bit is set (the common case), or if it's IPv6, it will not fragment but will send an ICMP message instead. > On 8 Aug 2017 at 13:34, Moacir Ferreira wrote: > > True! But at some point in the network it may be
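To see this on the wire, you can watch for the ICMP "fragmentation needed" messages the router sends back (a sketch with tcpdump; eth0 is a placeholder interface):

    # ICMP type 3 (destination unreachable), code 4 (frag needed, DF set);
    # the IPv6 equivalent is ICMPv6 "packet too big" (type 2).
    tcpdump -ni eth0 'icmp[icmptype] == 3 and icmp[icmpcode] == 4'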

Re: [ovirt-users] Good practices

2017-08-08 Thread Moacir Ferreira
Ok, the 40Gb NICs that I got were free. But anyway, if you were working with 6 HDDs + 1 SSD per server, then you get 21 disks in your cluster. As data in a JBOD setup is replicated all over the network, traffic can be really intensive, especially depending on the number of replicas you choose for

Re: [ovirt-users] Users Digest, Vol 71, Issue 37

2017-08-08 Thread Fabrice Bacchella
> On 8 Aug 2017 at 11:49, Moacir Ferreira wrote: > > This is far more complex. A good NIC will have an offload engine (LSO - > Large Segment Offload) and, if so, the NIC driver will report an MTU of 64K to > the IP stack. The IP stack will then send data to

Re: [ovirt-users] Users Digest, Vol 71, Issue 37

2017-08-08 Thread Moacir Ferreira
This is far more complex. A good NIC will have an offload engine (LSO - Large Segment Offload) and, if so, the NIC driver will report an MTU of 64K to the IP stack. The IP stack will then send data to the NIC as if the MTU were 64K and the NIC will fragment it to the size of the "declared"
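To check whether the offload engine Moacir describes is active on a Linux host (a sketch; eth0 is a placeholder interface):

    # TSO/GSO let the stack hand the NIC chunks far larger than the wire MTU
    ethtool -k eth0 | grep -E 'tcp-segmentation-offload|generic-segmentation-offload'
    # Disable TSO temporarily for testing
    ethtool -K eth0 tso off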

Re: [ovirt-users] Good practices

2017-08-08 Thread Johan Bernhardsson
On oVirt, Gluster uses sharding, so all large files are broken up into small pieces on the Gluster bricks. /Johan On August 8, 2017 12:19:39 Moacir Ferreira wrote: Thanks Johan, you brought "light" into my darkness! I went looking for the GlusterFS tiering how-to
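The sharding Johan mentions is a per-volume option, typically enabled by default on oVirt-managed volumes. Roughly (a sketch; myvolume is a hypothetical volume name):

    # Enable sharding and set the shard size (oVirt setups commonly use 64MB)
    gluster volume set myvolume features.shard on
    gluster volume set myvolume features.shard-block-size 64MB
    gluster volume get myvolume features.shard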

Re: [ovirt-users] Move VM from FC storage cluster to local-storage in another cluster

2017-08-08 Thread Neil
I'm replying to my own email as I managed to resolve the issue. Sometimes it helps to RTFM. I had to remove the export domain that I created on cluster2, as well as detach the original export domain on cluster1, before I could attach the original export domain to cluster2, as you can only have
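For reference, the detach/attach dance with the export domain can also be scripted against the REST API (a sketch; the engine host, credentials, and IDs are placeholders, with paths as in the oVirt 4.x API):

    # Detach the export domain from the old data center...
    curl -k -u admin@internal:PASSWORD -X DELETE \
        https://engine.example.com/ovirt-engine/api/datacenters/DC1_ID/storagedomains/EXPORT_SD_ID
    # ...then attach it to the new one (only one export domain may be attached at a time)
    curl -k -u admin@internal:PASSWORD -X POST -H 'Content-Type: application/xml' \
        -d '<storage_domain id="EXPORT_SD_ID"/>' \
        https://engine.example.com/ovirt-engine/api/datacenters/DC2_ID/storagedomains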

Re: [ovirt-users] Good practices

2017-08-08 Thread Moacir Ferreira
Fernando, Let's see what people say... But this is what I understood Red Hat says is the best performance model. This is the main reason to open this discussion, because as far as I can see, some of you in the community do not agree. But when I think about a "distributed file system", that

Re: [ovirt-users] Good practices

2017-08-08 Thread Fabrice Bacchella
> On 8 Aug 2017 at 15:24, FERNANDO FREDIANI wrote: > > That's just the way RAID works, regardless of what 'super-ultra' > powerful hardware controller you may have. RAID 5 or 6 will never have the > same write performance as a RAID 10 or 0, for

Re: [ovirt-users] Users Digest, Vol 71, Issue 37

2017-08-08 Thread Moacir Ferreira
Sorry... I guess our discussion here is in line with the "good practices" discussion. For a long time I have seen a lot of mentions of having a front-end and a back-end network when dealing with distributed file systems like Gluster and Ceph. What I would like to hear from those who already

Re: [ovirt-users] Good practices

2017-08-08 Thread FERNANDO FREDIANI
Exactly Moacir, that is my point. A proper distributed filesystem should not rely on any type of RAID, as it can provide its own redundancy without having to rely on any underlying layer (look at Ceph). Using RAID may help with management and, in certain scenarios, with replacing a faulty disk, but at

Re: [ovirt-users] Users Digest, Vol 71, Issue 37

2017-08-08 Thread Yaniv Kaul
On Tue, Aug 8, 2017 at 4:50 PM, Fabrice Bacchella < fabrice.bacche...@orange.fr> wrote: > > On 8 Aug 2017 at 14:53, Moacir Ferreira wrote: > > But if you receive a 9000 MTU frame on an "input" interface that results > in sending it out on an interface of a 1500 MTU,

Re: [ovirt-users] Good practices

2017-08-08 Thread FERNANDO FREDIANI
That's just the way RAID works, regardless of what 'super-ultra' powerful hardware controller you may have. RAID 5 or 6 will never have the same write performance as a RAID 10 or 0, for example. Writeback caches can deal with bursts well, but they have a limit, therefore there will

Re: [ovirt-users] Good practices

2017-08-08 Thread Karli Sjöberg
On tis, 2017-08-08 at 10:24 -0300, FERNANDO FREDIANI wrote: > That's just the way RAID works, regardless of what 'super-ultra' > powerful hardware controller you may have. RAID 5 or 6 > will never have the same write performance as a RAID 10 or 0, for > example. > Writeback caches

Re: [ovirt-users] Users Digest, Vol 71, Issue 37

2017-08-08 Thread Fabrice Bacchella
> On 8 Aug 2017 at 14:53, Moacir Ferreira wrote: > > But if you receive a 9000 MTU frame on an "input" interface that results > in sending it out on an interface of a 1500 MTU, then if the DF bit is set the > frame will just be dropped by the router. The frame will

Re: [ovirt-users] Good practices

2017-08-08 Thread Moacir Ferreira
Thanks once again Johan! What would be your approach: JBOD straight or JBOD made of RAIDed bricks? Moacir From: Johan Bernhardsson Sent: Tuesday, August 8, 2017 11:24 AM To: Moacir Ferreira; Devin Acosta; users@ovirt.org Subject: Re:

Re: [ovirt-users] Issues getting agent working on Ubuntu 17.04

2017-08-08 Thread FERNANDO FREDIANI
Wesley, it doesn't work at all. It seems to be something to do with Python, not sure. It has been reported here before and the person who maintains it has been involved but didn't reply. Fernando On 08/08/2017 16:59, Wesley Stewart wrote: I am having trouble getting the ovirt agent working on Ubuntu

Re: [ovirt-users] Issues getting agent working on Ubuntu 17.04

2017-08-08 Thread Wesley Stewart
Actually just got it working. 17.04 seems to have version 0.12-1 of python-ethtool. I simply removed it and the ovirt agent: sudo apt-get remove python-ethtool. And then I found the same version of python-ethtool as on my 16.04 Ubuntu server (0.11-3) and found a deb file for it: wget
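Reconstructing Wesley's fix as a command sequence (a sketch; the deb URL is a placeholder since the original message is truncated here — the 0.11-3 package comes from the Ubuntu 16.04/xenial archive):

    # Remove the broken 0.12-1 build shipped with 17.04 (pulls out the agent too)
    sudo apt-get remove python-ethtool
    # Fetch and install the 0.11-3 build known to work on 16.04
    wget http://ARCHIVE-URL/python-ethtool_0.11-3_amd64.deb   # placeholder URL
    sudo dpkg -i python-ethtool_0.11-3_amd64.deb
    # Reinstall the agent that was removed alongside, then restart it
    sudo apt-get install ovirt-guest-agent
    sudo systemctl restart ovirt-guest-agent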

Re: [ovirt-users] Issues getting agent working on Ubuntu 17.04

2017-08-08 Thread Johan Bernhardsson
And it would have been good if I had read the whole email :) On Tue, 2017-08-08 at 22:04 +0200, Johan Bernhardsson wrote: > It is a bug that is also present in 16.04. The log directory in > /var/log/ovirt-guest-agent has the wrong user (or permissions). It > should have ovirtagent as user and

[ovirt-users] Issues getting agent working on Ubuntu 17.04

2017-08-08 Thread Wesley Stewart
I am having trouble getting the ovirt agent working on Ubuntu 17.04 (perhaps it just isn't there yet). Currently I have two test machines, a 16.04 and a 17.04 Ubuntu server. *On the 17.04 server*: Currently installed: ovirt-guest-agent (1.0.12.2.dfsg-2), and service --status-all reveals a few

Re: [ovirt-users] Issues getting agent working on Ubuntu 17.04

2017-08-08 Thread Johan Bernhardsson
It is a bug that is also present in 16.04. The log directory in /var/log/ovirt-guest-agent has the wrong user (or permissions). It should have ovirtagent as user and group. /Johan On Tue, 2017-08-08 at 15:59 -0400, Wesley Stewart wrote: > I am having trouble getting the ovirt agent working on
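A minimal fix for the permission bug Johan describes (a sketch, using the ovirtagent user/group named in his message):

    # Hand the log directory back to the agent's service user and restart
    sudo chown -R ovirtagent:ovirtagent /var/log/ovirt-guest-agent
    sudo systemctl restart ovirt-guest-agent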

Re: [ovirt-users] Issues getting agent working on Ubuntu 17.04

2017-08-08 Thread Wesley Stewart
Oddly enough, on Ubuntu 16.04 I had similar issues, but different outputs. The "fix" for me was pretty much the same. The oddest part was that apt reported that python-ethtool was on version 0.11-3. I removed it and installed via a deb file anyway. sudo apt-get remove python-ethtool *(even

Re: [ovirt-users] Install ovirt on Azure

2017-08-08 Thread Staniforth, Paul
Although I haven't had time to try it, HVX in Ravello can do nested virtualization for KVM. https://www.ravellosystems.com/technology/nested-virtualization Regards, Paul S. From: users-boun...@ovirt.org on behalf of

Re: [ovirt-users] Good practices

2017-08-08 Thread Pavel Gashev
Fernando, I agree that, by common sense, RAID is not required here. The only reason to set up RAID is the lack of manageability of GlusterFS. So you just buy manageability at the cost of extra hardware and some write performance in certain scenarios. That is it. On 08/08/2017, 16:24, "users-boun...@ovirt.org on