Hi Fernando,
we personally like XFS very much, but XFS + qcow2 (even for snapshots in oVirt)
comes close to a no-go these days: we are experiencing excessive fragmentation.
For more info, see this unresolved Red Hat article:
https://access.redhat.com/solutions/532663
Even with tuning the XFS allocation
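As a hedged aside (not from the original mail; device and file names are
placeholders): you can gauge how bad the fragmentation is, and blunt it by
preallocating the image:
# report extent fragmentation for files on an XFS device (read-only)
xfs_db -r -c "frag -f" /dev/mapper/vg_data-lv_images
# show the extent map of a single image file
filefrag -v /var/lib/libvirt/images/vm1.qcow2
# create a fully preallocated qcow2 image, which fragments far less
qemu-img create -f qcow2 -o preallocation=falloc vm1.qcow2 100G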
Thanks! I executed "ovirt-engine-extension-aaa-ldap-setup", but I got an error.
Is there anything wrong?
[root@engine ~]# ovirt-engine-extension-aaa-ldap-setup
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files:
Trying a separate build on a CentOS VM again results in a freeze
at the same point as stated below:
[INFO] weaveinfo Join point 'constructor-execution(void
com.google.gwt.dev.jjs.impl.ControlFlowAnalyzer.(com.google.gwt.dev.jjs.ast.JProgram))'
in Type
Just wanted to find out what filesystem people are using to host Virtual
Machines in qcow2 files in a filesystem in Localstorage, ext4 or XFS ?
I normally like XFS for big files, which is the case for VMs, but wondered
if anyone could see any performance advantage when compared with ext4.
On Jun 7, 2017 18:14, "Anthony.Fillmore"
wrote:
Awesome, this is exactly what I was looking for! Thank you!
One last thing - Is there a data dictionary available somewhere for the
oVirt PostgreSQL DB tables and views? Some way I can view the full schema
and
I just used all the default settings when installing from the ISO.
ansible v2.3.0.0
gdeploy v2.0.2
On 2017-06-07 09:41, knarra wrote:
On 06/07/2017 03:15 AM, ov...@fateknollogee.com wrote:
I finally figured out what the error was all about
The default location for the gdeploy script is:
/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh
The oVirt node installer
"ovirt-node-ng-installer-ovirt-4.1-2017060504.iso"
On Wed, Jun 7, 2017 at 9:00 AM, knarra wrote:
> On 06/07/2017 03:15 AM, ov...@fateknollogee.com wrote:
>
>> I finally figured out what the error was all about
>>
>> The default location for the gdeploy script is:
>>
Hi Anton,
Thanks for the suggestions; our engine has the same default values as
you posted. However, it seems our engine tried to start each VM exactly 3
times: once on each host in the cluster, all within about 15 seconds,
and never tried again.
The engine logs don't appear to shed any useful
Ok - I was able to get taskcleaner.sh working. I stopped the ovirt-engine service,
ran the script and cleared out active and zombie tasks. Restarted the engine,
put the host into maintenance mode and was FINALLY able to remove it.
Thanks a ton for your assistance, it is truly appreciated.
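For the archives, a hedged sketch of that sequence (flag meanings taken from
the script's usage text; run it with no arguments first to confirm them on
your version):
systemctl stop ovirt-engine
# clear zombie tasks only
/usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh -z
# or remove all tasks
/usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh -R
systemctl start ovirt-engine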
On Wed, Jun 7, 2017 at 6:10 AM, wrote:
> Since this blog post is from 2013, how much of it is still relevant today?
> Do you still need to carry out all the steps?
>
> http://community.redhat.com/blog/2013/08/testing-ovirt-3-3-with-nested-kvm/
You need sudo yum install
Awesome, this is exactly what I was looking for! Thank you!
One last thing - Is there a data dictionary available somewhere for the oVirt
PostgreSQL DB tables and views? Some way I can view the full schema and
understand what data is located where? Documentation online seems very scarce
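As a hedged aside: short of an official data dictionary, the schema can be
walked with psql on the engine host ('engine' is the default database name):
su - postgres -c "psql engine"
engine=# \dt       (list tables)
engine=# \dv       (list views)
engine=# \d+ vms   (describe the vms view)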
Hey Alexander,
I did those exact steps roughly two days ago...the host is still stuck in
preparing for maintenance mode. Confirming the host has been rebooted seems to
have no effect.
Any other ideas? Some way to hit the oVirt database and manipulate the value
for what state the host is in?
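As a hedged illustration only (table and column names are from the engine
schema; the numeric status codes come from the engine's VDSStatus enum, so
verify them for your version before changing anything):
su - postgres -c "psql engine"
-- look up the stuck host's current status
SELECT s.vds_name, d.status
  FROM vds_static s JOIN vds_dynamic d ON s.vds_id = d.vds_id
 WHERE s.vds_name = 'myhost';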
On Wednesday, June 7, 2017 10:50:56 AM EDT Anthony.Fillmore wrote:
> Hey Alexander,
>
> The query pieces you gave me allowed me to successfully set the host in
> maintenance mode. When I go to remove the host, I get the following
> error: 'Cannot remove host. Related operation is currently
Hi,
there are a couple of possible reasons; one of them is that some other
host is considered significantly better than the current one where the
engine VM is running.
This can be caused by CPU load, memory load, gateway ping failures,
storage issues... we have a couple of checks.
Best regards
Under engine-config, I can see two variables that are connected to the
restart of HA VMs:
MaxNumOfTriesToRunFailedAutoStartVm: "Number of attempts to restart highly
available VM that went down unexpectedly" (Value Type: Integer)
RetryToRunAutoStartVmIntervalInSeconds: "How often to try to restart
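A hedged sketch of inspecting and raising these values with engine-config on
the engine host (the value 10 is just an example; ovirt-engine must be
restarted for a change to take effect):
engine-config -g MaxNumOfTriesToRunFailedAutoStartVm
engine-config -g RetryToRunAutoStartVmIntervalInSeconds
engine-config -s MaxNumOfTriesToRunFailedAutoStartVm=10
systemctl restart ovirt-engine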
On Wed, Jun 7, 2017 at 4:48 PM, wrote:
> Why would the hosted engine power itself off? I did not issue a power off
> command
>
> Broadcast message from root@ovirt-engine (Wed 2017-06-07 13:45:10 UTC):
>
> "VM is shutting down!"
> The system is going down for power-off at
Why would the hosted engine power itself off? I did not issue a power
off command
Broadcast message from root@ovirt-engine (Wed 2017-06-07 13:45:10 UTC):
"VM is shutting down!"
The system is going down for power-off at Wed 2017-06-07 13:46:10 UTC!
Hello,
I followed this guide:
https://jebpages.com/2013/01/08/ovirt-on-ovirt-nested-kvm-fu/ - it's more
or less the same.
Luca
On Wed, Jun 7, 2017 at 3:10 PM, wrote:
> Since this blog post is from 2013, how much of it is still relevant today?
> Do you still need to carry
Since this blog post is from 2013, how much of it is still relevant
today?
Do you still need to carry out all the steps?
http://community.redhat.com/blog/2013/08/testing-ovirt-3-3-with-nested-kvm/
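The core step - enabling nested virtualization for KVM - still looks roughly
like this on an Intel host (a hedged sketch; AMD hosts use the kvm_amd module
instead, and the module must be reloaded with all VMs stopped):
# check whether nesting is already on ('Y' or '1' means enabled)
cat /sys/module/kvm_intel/parameters/nested
# enable it persistently
echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm_intel.conf
# reload the module
modprobe -r kvm_intel && modprobe kvm_intel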
Hello,
In our oVirt hosts, we are using a Dell EqualLogic SAN, with each server
connecting to the SAN via 2 physical interfaces. Since both interfaces share the
same network (an EqualLogic limitation), we must patch the Linux kernel to allow
iSCSI multipath with multiple NICs in the same subnet with
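As a hedged aside, the usual userspace approach for same-subnet multipathing
(before resorting to kernel patches) is to bind each iSCSI session to a
specific NIC and relax the ARP behavior; interface names are placeholders:
# create one iscsi iface per physical NIC
iscsiadm -m iface -I ieth0 --op=new
iscsiadm -m iface -I ieth0 --op=update -n iface.net_ifacename -v eth0
iscsiadm -m iface -I ieth1 --op=new
iscsiadm -m iface -I ieth1 --op=update -n iface.net_ifacename -v eth1
# answer ARP only on the interface that owns the address
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2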
As requested:
[root@node-1 ~]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor
preset: enabled)
Active: active (running) since Tue 2017-06-06 05:51:29 PDT; 23h ago
Process: 3052
Sorry, please ignore my suggestion. Now I realize that you actually want to
avoid re-ordering.
On Wed, Jun 7, 2017 at 2:05 PM, Yevgeny Zaspitsky
wrote:
> You can activate reordering by using oVirt REST API. Sending POST request
> to
On Wed, Jun 7, 2017 at 12:16 PM, Artyom Lukianov
wrote:
> The only thing that I can think of is that the HE VM FQDN is not resolvable
> via DNS, so when the HE deployment tries to reach it, it fails.
> Can you check that you can connect to the HE VM via FQDN "happyhourovirt"
You can activate reordering by using oVirt REST API. Sending POST request
to http://${engine_address}/vms/${vm_id}/reordermacaddresses URL should do
the job.
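As a hedged illustration with curl (engine address, credentials, certificate
handling, and VM id are all placeholders; recent API versions live under
/ovirt-engine/api):
curl -X POST -k -u 'admin@internal:password' \
     -H 'Content-Type: application/xml' -d '<action/>' \
     "https://engine.example.com/ovirt-engine/api/vms/<vm_id>/reordermacaddresses"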
Please note that it would reorder all vNICs of the VM that have no PCI address
assigned to them; in other words, the VM wasn't run since the
Hi,
I didn't try Guest Tools version 4.1.5 on w2016, but it was the same with
w2016. When trying to transfer a 300GB file, the network behaves like
"stop and go": sometimes it goes down for a few seconds and then restarts
transferring again, and keeps slowing down.
And also, I find no
The only thing that I can think of is that the HE VM FQDN is not resolvable
via DNS, so when the HE deployment tries to reach it, it fails.
Can you check that you can connect to the HE VM via FQDN "happyhourovirt"
after the engine-setup?
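For example (a hedged sketch; substitute the real FQDN if it differs):
getent hosts happyhourovirt      # does the name resolve?
ping -c 3 happyhourovirt         # is the VM reachable?
curl -k https://happyhourovirt/ovirt-engine/   # does the engine answer?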
Best Regards
On Wed, Jun 7, 2017 at 11:46 AM, Ramachandra
Hi,
right now I'm thinking of doing the same. I couldn't find any explanation why
the Win2016 Server behaves like it does at this point. I do think that it might
be an issue that comes up only when running in the oVirt/RHEV setup, maybe
the guest drivers or anything else? I've tried
I mean for 2016 as a whole, but I didn't test it. I read in some forums that
people are complaining about w2016.
I tested Windows 2012 R2 Foundation with the new oVirt drivers; it's working
De: "Gianluca Cecchi"
Para: supo...@logicworks.pt
Cc: "Sven Achtelik"
Hi all,
We've got a three-node "hyper-converged" oVirt 4.1.2 + GlusterFS cluster
on brand new hardware. It's not quite in production yet but, as these
things always go, we already have some important VMs on it.
Last night the servers (which aren't yet on UPS) suffered a brief power
failure. They
Hi Sahina,
Did you have a chance to check the logs, and do you have any idea how this may
be addressed?
Thanx,
Alex
On Mon, Jun 5, 2017 at 12:14 PM, Sahina Bose wrote:
> Can we have the gluster mount logs and brick logs to check if it's the
> same issue?
>
> On Sun, Jun 4, 2017
Hi all,
Yanir is right, the local vm.conf is just a cache of what was
retrieved from the engine.
It might be interesting to check what the configuration of the engine
VM shows when edited using the webadmin. Or enable debug logging [1]
for hosted engine and add the OVF dump we send there now and
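A hedged sketch of enabling that debug logging on the host (the config layout
may differ between versions; adjust the logger levels in agent-log.conf to
match what your file actually contains):
# set the ovirt-ha-agent loggers to DEBUG, then restart the agent
sed -i 's/level=INFO/level=DEBUG/' /etc/ovirt-hosted-engine-ha/agent-log.conf
systemctl restart ovirt-ha-agent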
On Tue, Jun 6, 2017 at 9:40 PM, Ling Ho wrote:
> Hello,
>
> I'm trying to migrate my Hosted Engine from an old NFS storage domain to
> a new NFS storage domain. I am running 4.1 now.
>
> I've searched around and found this reference
>
On Wed, Jun 7, 2017 at 11:04 AM, wrote:
> Hi,
>
> Well, we moved to Windows 2012, since we have read that w2016 is not too
> stable. With w2012 everything is back to normal.
>
>
Do you mean from the oVirt/RHEV guest point of view, or for 2016 as a whole...?
If I'm not mistaken, the values of vm.conf are repopulated from the database,
but I wouldn't recommend meddling with DB data.
Maybe the network device wasn't set properly during the hosted engine setup?
On Wed, Jun 7, 2017 at 11:47 AM, Arsène Gschwind
wrote:
> Hi,
>
>
Hi,
Well, we moved to Windows 2012, since we have read that w2016 is not too stable.
With w2012 everything is back to normal.
De: "Sven Achtelik"
Para: supo...@logicworks.pt
Enviadas: Terça-feira, 6 De Junho de 2017 14:20:21
Assunto: AW: [ovirt-users] windows
Hi,
Any chance to get a hint on how to change the vm.conf file so it will not
be overwritten constantly?
Thanks a lot.
Arsène
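As a hedged pointer (key names and config types vary by version; check the
tool's help output first): the persistent copy behind vm.conf can be read and
written through the hosted-engine tool instead of editing the cache, e.g.:
hosted-engine --get-shared-config <key> --type=he_shared
hosted-engine --set-shared-config <key> <value> --type=he_shared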
On 06/06/2017 09:50 AM, Arsène Gschwind wrote:
Hi,
I've migrated our oVirt engine to a hosted engine located on an FC
storage LUN; so far so good.
For some reason I'm
Hi, can you please provide the ovirt-hosted-engine-setup log?
On Wed, Jun 7, 2017 at 9:45 AM, Ramachandra Reddy Ankireddypalle <
rcreddy.ankireddypa...@gmail.com> wrote:
> Hi,
> I created a bonded interface consisting of two network interfaces.
> When I tried to install hosted engine over
Hi,
Thanks for looking into the issue. It turns out that the error happens
if I use a bonded interface. If I use a regular eth interface everything works
fine. Please suggest what needs to be done for bonding to work.
Thanks and Regards,
Ram
On Wed, Jun 7, 2017 at 3:09 AM, knarra
On 06/07/2017 12:06 AM, Ramachandra Reddy Ankireddypalle wrote:
Hi,
hosted engine unattended install fails with the following error:
[ ERROR ] Cannot automatically add the host to cluster Default:
400 Bad Request Bad Request
Your browser sent a request that this server could not
On 06/07/2017 03:15 AM, ov...@fateknollogee.com wrote:
I finally figured out what the error was all about
The default location for the gdeploy script is:
/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh
The oVirt node installer
"ovirt-node-ng-installer-ovirt-4.1-2017060504.iso"
Or you can try the migration tool:
https://github.com/oVirt/ovirt-engine-kerbldap-migration
Check the README; there are instructions on how to proceed.
On Wed, Jun 7, 2017 at 8:33 AM, Latchezar Filtchev wrote:
> This can help you:
>
>
>
>
Hi,
I created a bonded interface consisting of two network interfaces.
When I tried to install the hosted engine over the bonded interface, it failed
at the end with the following error message:
[ ERROR ] Cannot automatically add the host to cluster Default:
400
Bad Request Bad Request
Hi Brendan,
Can you please send the output of systemctl status vdsmd and journalctl -u
vdsmd.service?
Thanks,
On Wed, Jun 7, 2017 at 9:32 AM, Sandro Bonazzola
wrote:
>
>
> On Tue, Jun 6, 2017 at 2:56 PM, Brendan Hartzell wrote:
>
>> Upon login to the
This can help you:
http://lists.ovirt.org/pipermail/users/2016-September/042937.html
Best,
Latcho
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of
qinglong.d...@horebdata.cn
Sent: Wednesday, June 07, 2017 4:57 AM
To: users
Subject: [ovirt-users] active directory
Hi
On Tue, Jun 6, 2017 at 2:56 PM, Brendan Hartzell wrote:
> Upon login to the server, to watch terminal output, I noticed that the
> node status is degraded.
>
> [root@node-1 ~]# nodectl check
> Status: WARN
> Bootloader ... OK
> Layer boot entries ... OK
> Valid boot entries
On Wed, Jun 7, 2017 at 3:29 AM, Leni Kadali Mutungi
wrote:
> On 6/6/17, Yedidyah Bar David wrote:
>> On Tue, Jun 6, 2017 at 6:55 AM, Leni Kadali Mutungi
>> wrote:
>> So, did the engine also start successfully?
> Successfully ran
>