Hello
Is the maintainer of ovirt-guest-agent for Ubuntu on this mailing list?
I have noticed that if you install the ovirt-guest-agent package from the Ubuntu
repositories it doesn't start. It throws an error about Python and never
starts. Has anyone noticed the same? The OS in this case is a clean
minimal
Hello.
Out of curiosity, why do you and people in general use replica 3 more
often than replica 2?
If I understand correctly this seems overkill and a waste of storage, as 2
copies of the data (replica 2) seem pretty reasonable, similar to RAID 1,
and even in the worst case the data can still be replicated
a replica 3 volume, meaning that
you have three participants in your quorum. But only two of those
participants keep the actual data. The third one, the arbiter, stores only
some metadata, not the file contents, so data is not replicated 3 times.
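To put rough numbers on the storage cost being debated, here is a small sketch comparing usable capacity of the three layouts (the 4 TB brick size is a made-up example, not a value from the thread):

```shell
# Hypothetical brick size; adjust to taste.
BRICK_TB=4

# replica 2: two full copies of the data across 2 bricks.
echo "replica 2:           $((BRICK_TB * 2)) TB raw -> ${BRICK_TB} TB usable"

# plain replica 3: three full copies across 3 bricks.
echo "replica 3:           $((BRICK_TB * 3)) TB raw -> ${BRICK_TB} TB usable"

# replica 3 arbiter 1: two data bricks plus a small metadata-only
# arbiter brick, so usable space matches replica 2 while keeping
# a 3-way quorum for split-brain protection.
echo "replica 3 arbiter 1: $((BRICK_TB * 2)) TB raw (+ tiny arbiter) -> ${BRICK_TB} TB usable"
```

So the arbiter layout buys replica-3 quorum at roughly replica-2 storage cost.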
On Mon, Apr 24, 2017 at 3:33 PM, FERNANDO FREDIANI
Hello!
On Mon, Apr 24, 2017 at 5:08 PM, FERNANDO FREDIANI
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:
Hi Denis, understood.
What about the case of adding a fourth host to the running
cluster? Will the copies of the data still be kept only twice
address.
I have also even rebuilt the VM completely changing its operating system
from Ubuntu 16.04 to CentOS 7.3 and the same problem happened.
Fernando
On 24/07/2017 18:20, FERNANDO FREDIANI wrote:
Hello Edward, this happened again today and I was able to check more
details.
So:
- The VM
Moacir, I understand that if you do this type of configuration you will be
severely impacted on storage performance, especially for writes. Even if you
have a hardware RAID controller with writeback cache you will have a
significant performance penalty and may not fully use all the resources you
Hello.
Yesterday I had a pretty strange problem in one of our architectures. My
oVirt, which runs in one Datacenter and controls Nodes locally and also
remotely, lost communication with the remote Nodes in another Datacenter.
Up to this point nothing wrong, as the Nodes can continue working as
n make any
number of copies you want, it does not make sense to use a RAIDed
brick; what makes sense is to use JBOD.
Moacir
*From:* fernando.fredi...@upx.com.br <fernando.fredi...@upx.com.br> on
behalf of F
your hardware controller can do
write-back.
Also I agree the 40Gb NICs may not be fully used and 10Gb can do the job
well, but if they were available at the beginning, why not use them.
Fernando
On 08/08/2017 03:16, Fabrice Bacchella wrote:
On 8 August 2017 at 04:08, FERNANDO FREDIANI
Wesley, it doesn't work at all. It seems to be something to do with Python, not sure.
Has been reported here before and the person who maintains it has been
involved but didn't reply.
Fernando
On 08/08/2017 16:59, Wesley Stewart wrote:
I am having trouble getting the ovirt agent working on Ubuntu
2017 at 17:41, FERNANDO FREDIANI <fernando.fredi...@upx.com
<mailto:fernando.fredi...@upx.com>> wrote:
Yet another downside of having RAID (especially RAID 5 or 6) is that
it considerably reduces write speeds, as each group of disks will
end up having the write speed of a
the Hypervisor back to default kernel (3.10) the
problem didn't happen anymore.
If anyone ever faces this or anything similar please let me know, as I am
always interested in finding out the root cause of this issue.
Regards
Fernando
On 28/07/2017 15:01, FERNANDO FREDIANI wrote:
Hello Edwardh and all.
I
to the operations of Gluster, not installation or deployment,
i.e. not the conceptual understanding of Gluster (conceptually it's a
JBOD system).
On 08/07/2017 05:41 PM, FERNANDO FREDIANI wrote:
Thanks for the clarification Erekle.
However I am surprised by this way of operating of GlusterFS
For any RAID 5 or 6 configuration I normally follow a simple golden rule
which has given good results so far:
- up to 4 disks: RAID 5
- 5 or more disks: RAID 6
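That rule of thumb is simple enough to write down as a tiny helper; this is just a sketch of the heuristic above, not an official sizing tool:

```shell
# Pick a RAID level from the number of disks, per the golden rule above.
raid_level() {
    if [ "$1" -le 4 ]; then
        echo "RAID 5"
    else
        echo "RAID 6"
    fi
}

raid_level 4   # -> RAID 5
raid_level 6   # -> RAID 6
```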
However I didn't really understand the recommendation to use any
RAID with GlusterFS. I always thought that GlusterFS likes to work in
Moacir, I believe that to use the 3 servers directly connected to each
other without a switch you have to have a bridge on each server for every
2 physical interfaces to allow the traffic to pass through at layer 2 (is it
possible to create this from the oVirt Engine Web Interface?). If your
ovirtmgmt
it and
rebuild the RAID without going offline, i.e. switching off the volume,
doing brick manipulations and switching it back on.
Cheers
Erekle
On 08/07/2017 03:04 PM, FERNANDO FREDIANI wrote:
For any RAID 5 or 6 configuration I normally follow a simple golden
rule which has given good results so far:
- up
How do you make the new cluster use the same storage domain as the
original one? Storage Domains in oVirt are a bit confusing and less
flexible, and I am not sure it allows that, does it?
On 22/08/2017 12:23, Alan Griffiths wrote:
Hi,
I'm in the process of building a second ovirt cluster
Hello folks.
I have here a scenario where I have one Datacenter, and inside it one
Cluster which has multiple hosts with shared storage between them.
Now I want to add another standalone host with Local Storage only,
and logic tells me to add it to the same Datacenter created, as they
Just wanted to find out what filesystem people are using to host Virtual
Machines in qcow2 files on Local Storage, ext4 or XFS?
I normally like XFS for big files, which is the case for VMs, but wondered
if anyone could see any performance advantage compared with ext4.
://blog.codecentric.de/en/2017/04/xfs-possible-memory-allocation-deadlock-kmem_alloc/
Markus
*From:* users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf
of FERNANDO FREDIANI [fernando.fredi...@upx.com]
*Sent:* Wednesday
Hello folks.
One of the most (if not the most) annoying problems of oVirt is the
well-known message "... installation failed. Command returned failure code 1
during SSH session ..." which happens quite often in several situations.
Scrubbing the installation logs, it seems that most stuff goes well,
Hello.
I had a previous Datacenter and Cluster with a Host in it which I have
removed completely from the oVirt Engine. In order to remove it I did the
following steps:
- Removed all Virtual Machines on top of the Host
- Put the only Local Datastore in Maintenance mode (it didn't allow to
up in the Database ?
Fernando
On 13/06/2017 10:04, FERNANDO FREDIANI wrote:
Hello.
I had a previous Datacenter and Cluster with a Host in it which I have
removed completely from the oVirt Engine. In order to remove it I did the
following steps:
- Removed all Virtual Machines on top of the Host
-
I normally assume that any performance gains from directly attaching a
LUN to a Virtual Machine rather than using it in the traditional way are
too little to compensate for the extra hassle of doing that. I would avoid
it as much as I can, unless it is for some very special reason where you
cannot do in
It was released yesterday. I don't think such a quick upgrade is
recommended. It might work well, but I wouldn't find it strange if there are
issues until it is fully tested with current oVirt versions.
Fernando
On 14/09/2017 11:01, Nathanaël Blanchet wrote:
Hi all,
Now centos 7.4 is
I also wanted to know this, which is pretty useful for these scenarios.
Great question!
Fernando Frediani
On 17/09/2017 23:33, LukeFlynn wrote:
Hello,
I'm wondering if there is a way to trunk all VLANs to a PFSense VM
similar to using the "4095" tag in ESXi. I've tried using a
Alex, porting VMs in oVirt is not as flexible as some may expect or
commonly look for. Perhaps in future versions there will be things
like host-to-host transfer without the need to run commands to convert VMs.
For now you need to use Exports (mount, umount, mount again) and so on.
2017-09-17 18:18
Is it just me who finds strange the way oVirt/RHEV does backups?
At present you have to snapshot the VM (fine by that), but then you
have to clone AND export it to an Export Domain, then delete the cloned
VM. That means three copies of the same VM somewhere.
Wouldn't it be more logical
I had the very same impression. It doesn't look like it works, then.
So for a fully redundant setup where you can lose a complete host you must
have at least 3 nodes?
Fernando
On 01/09/2017 12:53, Jim Kusznir wrote:
Huh... OK, how do I convert the arbiter to a full replica, then? I was
This is pretty interesting and nice to have.
I tried to find the screenshots and new features to see what the new
webadmin UI looks like, but I'm not sure if I am searching in the right place.
https://github.com/oVirt/cockpit-machines-ovirt-provider
or
Hi.
I have a host on which I installed a minimal CentOS 7 and turned it into
an oVirt Node, therefore it didn't come with Cockpit installed and
configured as it does in oVirt-Node-NG.
Comparing both types of Hosts I have the following packages below in
each scenario.
The only package missing
Hello Rudi
Nice specs.
I wouldn't use GlusterFS for this setup with the third server in a
different location. Just have that server as a standalone and replicate
the VMs there. You won't have real-time replication, but much less
hassle, and you probably won't have constant failures, especially
gains. Do you think that is feasible ?
Fernando
On 28/11/2017 06:12, Irit Goihman wrote:
Hi Fernando,
On Mon, Nov 27, 2017 at 9:25 PM, FERNANDO FREDIANI
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:
Hello
As some may have seen recently OVS D
Hello folks.
Our oVirt (4.1.7.3-1.el7.centos), which runs in one Datacenter and
controls Nodes locally and also remotely, lost communication with the
remote Nodes in another Datacenter.
Up to this point nothing wrong, as the Nodes can continue working as
expected and running their Virtual Machines
Hello
Has anyone installed a Windows 10 Virtual Machine?
I am having serious console performance issues even after installing the
Red Hat QXL driver from the virtio-win ISO.
Someone reported in a forum having similar issues and having resolved them by
increasing the graphics card memory to
Hello
As some may have seen, OVS DPDK has recently been introduced to oVirt
(https://ovirt.org/blog/2017/09/ovs-dpdk/). This is a very interesting
feature which can make a huge difference in terms of network
performance.
Just wanted to ask if anyone has tested it in any environment
I hope the same as well.
Actually I hope this concept could be retired, making it possible to add any
type of storage to any type of DC, whether it has 1 host or multiple.
Regards
Fernando
On 23/11/2017 17:22, Matt . wrote:
Hi Guys,
I'm wondering at the moment what the actual difference is between
confident that once vram is increased it should resolve
the issue with not only Windows 10 VMs, but others as well.
Can anyone give a hint about the correct procedure to apply this change?
Thanks in advance.
Fernando
On 23/11/2017 10:46, FERNANDO FREDIANI wrote:
Hello
Has anyone installed
(over LVM?), as I don't have
dedicated hardware RAID cards, would mdraid add any benefit?
On Mon, Nov 13, 2017 at 1:47 PM, FERNANDO FREDIANI
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:
Hello Rudi
Nice specs.
I wouldn't use GlusterFS fo
early adopters.
Regards
Fernando Frediani
On 03/11/2017 10:21, Marcin Jessa wrote:
Hi guys.
I have a VM with a hosted engine and a two-node setup. When the beta came out I was
“OH! New shiny tech! Let’s try it!”. You all know the feeling.
Unfortunately the upgrade process did not go as expected.
I
nterface, but
I’m never going to use it for primary work. Please prioritize ease
of use on Desktop over Mobile!
----
*From:* FERNANDO FREDIANI <fernando.fredi...@upx.com
<mailto:fernando.fredi...@up
Thanks Lev. Could you please mention in a short sentence what this
critical issue is related to ?
Coincidentally or not, today I had a major issue in our node related to
networking which was only resolved after I live-migrated all VMs off the
node and rebooted it. It seems this network issue was
So it doesn't look related, but regardless if the issue returns can
you please open a bug with all the details?
Thanks in advance,
On Thu, Nov 9, 2017 at 6:14 PM, FERNANDO FREDIANI
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:
Thanks Lev. Could you pl
Agreed. Otherwise it would be 'one size fits all' as mentioned, and
that is not the case.
Applying guidelines is something very good to do; removing stuff that
may only be a 'recent trend' or 'buzz stuff', considering the audience that
will use it, is even better practice. I don't think
Fantastic Greg and thanks for the feedback about this as well.
Keep up the good work.
Fernando
On 07/11/2017 16:45, Greg Sheremeta wrote:
On Wed, Nov 1, 2017 at 3:19 PM, Yaniv Kaul <yk...@redhat.com
<mailto:yk...@redhat.com>> wrote:
On Wed, Nov 1, 2017 at 7:36 PM, FERNA
Thanks Simone.
Let me ask about two bugfixes I am eagerly waiting for in 4.1.x and they
are already on the 4.2.0 release notes.
BZ 1448831 - Issues with automating the configuration of VMs
(cloud-init) - https://bugzilla.redhat.com/show_bug.cgi?id=1448831
BZ 1464043 - Cloud-init network
only in the 4.2.
Thanks in advance,
On Fri, Dec 1, 2017 at 1:58 PM, FERNANDO FREDIANI
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:
Thanks Simone.
Let me ask about two bugfixes I am eagerly waiting for in 4.1.x and
they are already on the 4.2.0 rel
That's one of the reasons I prefer file storage (like NFS) over iSCSI or
Fibre Channel. A lot more flexible and manageable.
In the past, for VMFS5, I used to work with 4TB LUNs. Nowadays something
between 4TB and 8TB may be OK given the bigger size of VMs, depending on
your environment of
Hello
If you have 10Gb ports you hardly need to use this aggregation in order
to have more bandwidth. 10Gb is enough for A LOT of things. Just use
bonding mode=1 (active/backup) if your switches don't support stacking.
Doing mode-tlb and alb is not always as straightforward as mode 1 or mode
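For reference, a minimal active-backup bond on an EL7 host can be described with an ifcfg file like the sketch below; the device names and the ovirtmgmt bridge attachment are assumptions, not taken from the thread:

```shell
# Sketch of /etc/sysconfig/network-scripts/ifcfg-bond0 (EL7 style).
# ifcfg files use shell-variable syntax; names here are examples only.
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=1 miimon=100"   # mode=1 = active/backup, no switch support required
ONBOOT=yes
BOOTPROTO=none
BRIDGE=ovirtmgmt                   # assumption: the bond carries the ovirtmgmt bridge
```

Each slave NIC then gets `MASTER=bond0` and `SLAVE=yes` in its own ifcfg file.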
How are upgrades done and tested for oVirt Node NG? Every time I have
tried to use it from the Engine interface it failed somehow.
The last image I installed was
ovirt-node-ng-installer-ovirt-4.1-2017091913, and after installing I
basically do two things before adding it to the Engine: 1)
On 31/10/2017 11:11, Yaniv Kaul wrote:
DPDK for sure is a fantastic feature for networking environments.
A bit over-rated, for most workloads, if you ask me...
Currently requires a bit too much configuration (in my opinion), but
certainly there are workloads who critically need it.
Great. Much better Admin Portal than the usual one. Congratulations.
Hope it keeps getting improvements, as it's very much welcome and needed.
Fernando
On 31/10/2017 10:13, Sandro Bonazzola wrote:
The oVirt Project is pleased to announce the availability of the First
Beta Release of oVirt
Hi.
Does the virtualization layer cause any significant impact on VM
performance, even for a high-CPU VM that would justify the use of this feature?
DPDK for sure is a fantastic feature for networking environments.
Fernando
On 31/10/2017 05:56, Yaniv Kaul wrote:
On Mon, Oct 30, 2017 at 9:33
on Desktop over Mobile!
----
*From:* FERNANDO FREDIANI <fernando.fredi...@upx.com
<mailto:fernando.fredi...@upx.com>>
*Subject:* Re: [ovirt-users] [ANN] oVirt 4.2.0 First Beta Release
is now available
Just a note about this topic.
I miss a TUI. I know it existed before and it's something pretty handy
when adding new hosts and doing some troubleshooting.
Fernando
On 31/10/2017 13:12, Nathanaël Blanchet wrote:
On 31/10/2017 at 15:52, Stephen Liu wrote:
Ryan,
Thanks for your advice.
My
On 31/10/2017 13:43, Alexander Wels wrote:
Will the right-click dialog be available in the final release? Because
currently in 4.2 we need to go to the top right corner to interact with an
object (migrate, maintenance...)
Short answer: No, we removed it on purpose.
Long answer: No, here are
Updates are often problematic.
Whenever someone manages to do a 4.1 to 4.2 upgrade, could they possibly
post it to the Wiki? That would help a lot of people.
Fernando
On 21/12/2017 08:24, Sandro Bonazzola wrote:
2017-12-21 11:03 GMT+01:00 Giorgio Biacchi
That was my impression too, but unfortunately someone said on this mailing
list recently that Gluster isn't clever enough to work without RAID
controllers, and that when disks fail it imposes some difficulties for
replacement. Perhaps someone with more knowledge could clarify this
point, which certainly
Thanks for that.
Does anyone know any way to back up VMs in OVF format, or even output
them to a .zip, .gz, etc.? Any way for a server which is not necessarily on the
same LAN (an offsite backup storage) to receive these VMs compressed in a
single file?
In other words, any way to perform these backups
Quick questions about Nodes (in order not to hijack the other thread).
As they don't have much notion of persistence, how can I:
- Have a simple configuration backup so that, if the Operating
System disk fails, I can reinstall the Node, restore the config and
everything comes back up fine?
I really don't see much need for hardware RAID if you have an
SSD-only environment. You will get little benefit from the hardware cache
memory, and to guarantee the writes you may have the filesystem doing
always-sync operations, similar to what ZFS does. You just need to use an
HBA to
Has anyone tried the command below under the hood between two oVirt Nodes
(in the same Datacenter or between two different (local) ones)? Does it
work?
virsh migrate --live --persistent --undefinesource --copy-storage-all \
--verbose --desturi
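For context, a complete invocation of that command would look something like the sketch below; the VM name and destination URI are placeholders I made up, not values from the thread, and it assumes key-based SSH access between the hosts:

```shell
# Sketch only: "myvm" and "dest-host" are hypothetical placeholders.
# --copy-storage-all streams the local disks, so no shared storage is needed.
virsh migrate --live --persistent --undefinesource --copy-storage-all \
    --verbose myvm --desturi qemu+ssh://root@dest-host/system
```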
This is such a fantastic feature for
:
On 28 Dec 2017, at 19:56, FERNANDO FREDIANI
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:
Has anyone tried the command below under the hood between two oVirt
Nodes (in the same Datacenter or between two different (local) ones)?
Does it work?
It seems an upgrade guide is more than needed for the 4.1 to 4.2 upgrade.
Perhaps with all the feedback coming in, that can be done.
On 21/12/2017 11:55, Misak Khachatryan wrote:
Did the upgrade to 4.2 yesterday.
Everything went smoothly except for a few glitches.
I have a 4-host install - 3 gluster and one
Sure Sandro, but I was talking about documentation which lacks for
certain procedures that are very common.
Regards
Fernando
On 21/12/2017 12:32, Sandro Bonazzola wrote:
2017-12-21 12:55 GMT+01:00 FERNANDO FREDIANI
<fernando.fredi...@upx.com <mailto:fernando.fredi...@u
That is certainly going to be a very welcome feature and, if it isn't yet,
it should be at the top of the roadmap. For planned maintenance it solves
almost all downtime problems.
Fernando
On 21/12/2017 12:19, Pujan Shah wrote:
We have a bit odd setup where some of our clients have dedicated hosts
and
Shouldn't this type of content come from the dev team before the
product is in GA?
On 21/12/2017 15:53, Yaniv Kaul wrote:
On Dec 21, 2017 6:17 PM, "FERNANDO FREDIANI"
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:
Sure Sandro, b
Yeah, I noticed the same thing and almost didn't select any of these,
which are already pretty old.
On 17/01/2018 05:03, Barak Korren wrote:
On 17 January 2018 at 01:02, ~Stack~ wrote:
Greetings,
FYI, your Ubuntu options are antiquated.
12.10, 13.04, 13.10 are all
, or am I missing anything ?
Fernando
On 07/03/2018 15:43, Michal Skrivanek wrote:
On 7 Mar 2018, at 14:03, FERNANDO FREDIANI <fernando.fredi...@upx.com
<mailto:fernando.fredi...@upx.com>> wrote:
Hello Gianluca
Resurrecting this topic. I made the changes as per your instruc
enough and
have seen it crashing several times as well.
Fernando
On 07/03/2018 16:59, Gianluca Cecchi wrote:
On Wed, Mar 7, 2018 at 7:43 PM, Michal Skrivanek
<michal.skriva...@redhat.com <mailto:michal.skriva...@redhat.com>> wrote:
On 7 Mar 2018, at 14:03, FERNA
memory.
Let me know.
Thanks
Fernando Frediani
On 24/11/2017 20:45, Gianluca Cecchi wrote:
On Fri, Nov 24, 2017 at 5:50 PM, FERNANDO FREDIANI
<fernando.fredi...@upx.com <mailto:fernando.fredi...@upx.com>> wrote:
I have made a Export of the same VM created in oVirt to a server
I always found replica 3 complete overkill. I don't know why people decided
it was necessary. It just looks good and costs a lot with little benefit.
Normally when using magnetic disks 2 copies are fine for most scenarios,
but if using SSDs for similar scenarios, depending on the configuration of
Stage: Termination
>> [ ERROR ] Hosted Engine deployment failed
>> Log file is located at /var/log/ovirt-hosted-engine-s
>> etup/ovirt-hosted-engine-setup-20180408214515-4vofq6.log
>>
>>
>>
>> - Original Message -
>> From: Alex K <rightki
ostedEngine) and then you can add an additional host with hosted engine
> bits directly from the webadmin UI (HostedEngine side tab of Add new host
> dialog, select Deploy).
>
> Best regards
>
> Martin Sivak
>
> On Mon, Apr 9, 2018 at 6:21 PM, FERNANDO FREDIANI <
> fern
It's quite possible you will get more performance from an NFS server
compared to Gluster, especially if on your NFS server you have something
like ZFS + SSD for L2ARC or ext4 + bcache, but you get no redundancy. If
your NFS server dies everything stops working, which is not the case with
Hello
Is it possible to snapshot the Self-Hosted Engine before an Upgrade ? If so
how ?
Thanks
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
at function once the engine
> is stopped. It’s the easiest way I can think of right now. What kind of
> storage are you using ?
>
>
>
> Sven
>
>
>
> *From:* users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] *On
> behalf of* FERNANDO FREDIANI
> *Sent:*
not take into account such things as ovirt
> or proxmox or any linux overlay to hypervisors like it does for vmware /
> vcenter which is no fault of their own. They assume flat KVM host (or 2 if
> clustered) whereas stuff like ovirt can introduce variables (eg: no MAC
> spoofing)
>
Is it enough to deploy the Self-Hosted Engine on just one host of the
cluster, or is it necessary to repeat the process on each of the nodes that
must be able to run it?
Thanks
Fernando
2018-04-03 2:01 GMT-03:00 Vincent Royer :
> Same thing, the engine in this case is
Hello
As I mentioned in another thread, I am migrating a 'bare-metal' oVirt Engine
to a Self-Hosted Engine.
For that I am following this documentation:
https://ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_Metal_to_an_EL-Based_Self-Hosted_Environment/
However I think called me
there.
Thanks
Fernando
2018-03-19 13:38 GMT-03:00 FERNANDO FREDIANI <fernando.fredi...@upx.com>:
> Hello folks
>
> I currently have an oVirt Engine which runs in a Dedicated Virtual Machine
> in another and separate environment. It is very nice to have it like that
> because every time
Out of curiosity, how much traffic can it handle running in these Virtual
Machines on top of reasonable hardware?
Fernando
2018-03-23 4:58 GMT-03:00 Joop :
> On 22-3-2018 10:17, Yaniv Kaul wrote:
>
>
>
> On Wed, Mar 21, 2018 at 10:37 PM, Charles Kozler <