Nothing special:
1) Upgraded one of the nodes (we have 7 in total); it became green.
2) Ran engine-setup (it runs on a separate node); answered yes to the db
vacuum etc.
It went without errors.
3) Tried to put the next node into maintenance; the error appeared.
4) Migration failed, but SPM selection succeeded.
i ha
Can you list the steps you did for the upgrade procedure? (Did you follow
a specific guide, perhaps?)
On Tue, Aug 1, 2017 at 5:37 PM, Arman Khalatyan wrote:
> It is unclear now; I am not able to reproduce the error... probably
> changing the policy fixed the "null" record in the database.
> My
It is unclear now; I am not able to reproduce the error... probably changing
the policy fixed the "null" record in the database.
My upgrade went without errors from 4.1.3 to 4.1.4.
The engine.log from yesterday is here, with the password: BUG
https://cloud.aip.de/index.php/s/N6xY0gw3GdEf63H (I hope I am n
Yes, "none" is a valid policy, assuming you don't need any special
considerations when running a VM.
If you could gather the relevant log entries and the error you see and open
a new bug, it will help us track and fix the issue.
Please specify exactly from which engine version you upgraded and to which.
Thank you for your response.
I am now looking into the records of the "Scheduling Policy" menu: there is
an entry "none"; is it supposed to be there?
Because when I select it, the error occurs.
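For what it's worth, one way to cross-check whether the stray entry is the built-in "None" policy or a genuinely broken (NULL) row is to query the engine database directly. The real engine DB is PostgreSQL, and the table/column names below (`cluster_policies`, `name`) are assumptions that should be verified against your schema; the sqlite3 mock here only illustrates the shape of the query:

```python
import sqlite3

# Mock of the engine DB table. The real engine uses PostgreSQL, and the
# table/column names (cluster_policies, name) are ASSUMPTIONS -- check
# your actual schema before running anything against the live DB.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cluster_policies (id TEXT, name TEXT)")
conn.executemany(
    "INSERT INTO cluster_policies VALUES (?, ?)",
    [("p1", "vm_evenly_distributed"), ("p2", "none"), ("p3", None)],
)

# Rows that could render as the suspicious "none" entry in the UI:
# either a literal 'none' name or a NULL one.
suspect = conn.execute(
    "SELECT id, name FROM cluster_policies "
    "WHERE name IS NULL OR lower(name) = 'none'"
).fetchall()
print(suspect)  # rows worth double-checking against the UI
```

A literal "none" row is expected (it is a valid built-in policy); a NULL name would point at a damaged record from the upgrade.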
On Tue, Aug 1, 2017 at 10:35 AM, Yanir Quinn wrote:
> Thanks for the update, we will check if there i
> Ok I found the ERROR:
> After the upgrade the scheduling policy was "none". I don't know why it was
> moved to none, but to fix the problem I did the following:
> Edit Cluster -> Scheduling Policy -> Select Policy: vm_evenly_distributed
> Now I can run/migrate the VMs.
>
> I think there might be a bug i
Thanks for the update; we will check whether there is a bug in the upgrade
process.
On Mon, Jul 31, 2017 at 6:32 PM, Arman Khalatyan wrote:
> Ok, I found the ERROR:
> After the upgrade the scheduling policy was "none". I don't know why it was
> moved to none, but to fix the problem I did the following:
> Edit Clus
Ok, I found the ERROR:
After the upgrade the scheduling policy was "none". I don't know why it was
moved to none, but to fix the problem I did the following:
Edit Cluster -> Scheduling Policy -> Select Policy: vm_evenly_distributed
Now I can run/migrate the VMs.
I think there might be a bug in the upgrade
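In case anyone wants to script the same fix instead of clicking through the UI, it should also be possible via the engine's REST API with a PUT on the cluster. This is only a sketch: the endpoint path (`/ovirt-engine/api/clusters/{id}`) and the body shape (policy referenced by name) are assumptions based on the v4 API, so check the API docs for your version; the actual request is left commented out.

```python
# Sketch of changing a cluster's scheduling policy via the REST API.
# The endpoint path and XML body shape are ASSUMPTIONS based on the v4
# API -- verify against /ovirt-engine/api on your engine before using.

def build_payload(policy_name: str) -> str:
    """Return the XML body for a PUT on /ovirt-engine/api/clusters/{id}."""
    return (
        "<cluster>"
        "<scheduling_policy><name>{}</name></scheduling_policy>"
        "</cluster>"
    ).format(policy_name)

body = build_payload("vm_evenly_distributed")
print(body)

# The actual call would look roughly like this (needs the `requests`
# package; URL, cluster ID and credentials are placeholders):
# requests.put(
#     "https://engine.example.org/ovirt-engine/api/clusters/CLUSTER_ID",
#     data=body,
#     auth=("admin@internal", "password"),
#     headers={"Content-Type": "application/xml"},
#     verify="ca.pem",
# )
```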
Looks like a renewed-certificates problem; in the
ovirt-engine-setup-xx-xx.log I found the following lines.
Is there a way to fix it?
2017-07-31 15:14:28 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.pki.ca
ca._enrollCertificates:330 processing: 'engine'[renew=True]
2017-07-31 15:14:28 DEBUG
otopi
Sorry, I forgot to mention the error.
This error is thrown every time I try to start the VM:
2017-07-31 16:51:07,297+02 ERROR [org.ovirt.engine.core.bll.RunVmCommand]
(default task-239) [7848103c-98dc-45d1-b99a-4713e3b8e956] Error during
ValidateFailure.: java.lang.NullPointerException
a
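Before attaching the whole engine.log, something like this can pull out just the relevant ERROR entries and their correlation IDs. A rough sketch: the default log location is /var/log/ovirt-engine/engine.log and the line format below matches the snippet above, but both are worth confirming on your install.

```python
import re

# Extract ERROR entries mentioning ValidateFailure, plus their
# correlation IDs, from engine.log text. The line format is inferred
# from the log snippet quoted in this thread.
LINE_RE = re.compile(
    r"ERROR \[(?P<cls>[\w.]+)\] \([^)]*\) \[(?P<correlation>[0-9a-f-]+)\]"
)

def find_validate_failures(log_text: str):
    hits = []
    for line in log_text.splitlines():
        if "ERROR" in line and "ValidateFailure" in line:
            m = LINE_RE.search(line)
            hits.append((m.group("correlation") if m else None, line))
    return hits

# Sample line taken verbatim from the error above:
sample = (
    "2017-07-31 16:51:07,297+02 ERROR [org.ovirt.engine.core.bll.RunVmCommand] "
    "(default task-239) [7848103c-98dc-45d1-b99a-4713e3b8e956] Error during "
    "ValidateFailure.: java.lang.NullPointerException"
)
print(find_validate_failures(sample))
```

Grepping the log for the correlation ID then gives the full context of the failed validation.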
Hi,
Please provide the engine log so we can figure out which validation fails.
On Mon, Jul 31, 2017 at 4:57 PM, Arman Khalatyan wrote:
> Hi,
> I am running in to trouble with 4.1.4 after engine upgrade I am not able
> to start or migrate virtual machines:
> getting following error:
> General com
Hi,
I am running into trouble with 4.1.4: after the engine upgrade I am not able
to start or migrate virtual machines.
I am getting the following error:
General command validation failure
Are there any workarounds?
___
Users mailing list
Users@ovirt.org
http://list