I can think of two possibilities. One: it got there when we moved the
servers from our lab desk to the hosting site; we had some problems
getting it running. Two: a couple of weeks ago two servers rebooted
after high load, which might have damaged the file.
I did manage to move all servers from
On Tue, Aug 1, 2017 at 6:05 PM, Doug Ingham wrote:
> Hi All,
> Just today I noticed that guests can now pass discards to the underlying
> shared filesystem.
Or block storage. Most likely it works better on shared block storage.
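A quick way to verify from inside a guest, sketched on the assumption
that it runs a reasonably recent Linux (both commands are standard
util-linux tools):

    # Non-zero DISC-GRAN/DISC-MAX columns mean the virtual disk
    # advertises discard support down to the storage.
    lsblk --discard

    # Trim all mounted filesystems that support it, verbosely.
    fstrim -av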
On Tue, Aug 1, 2017 at 3:07 PM Richard Chan wrote:
> Could this be the cause of the access issue (old system upgraded from 3.4
> all the way to 4.1)?
> ## change in python class name??
On 07/28/2017 06:03 PM, Davide Ferrari wrote:
On 28/07/17 17:46, Juan Hernández wrote:
The oVirt access log indeed shows that three disks are added to the
virtual machine. Could it be that Foreman thinks it has to
explicitly add a boot disk? Ohad, Ivan, any idea?
Just today I noticed that guests can now pass discards to the underlying
Is this supported by all of the main Linux guest OSes running the virt
On Tue, Aug 1, 2017 at 3:36 PM, Fabrice Moyen wrote:
> I'm running oVirt 4.1.2 on ppc64le platforms (running CentOS 7 LE). I
> see that my oVirt manager has an issue upgrading my two hosts. Checking
> the engine.log, I can see the following error
It is unclear now; I am not able to reproduce the error... probably
changing the policy fixed the "null" record in the database.
My upgrade went w/o error from 4.1.3 to 4.1.4.
The engine.log from yesterday is here:
https://cloud.aip.de/index.php/s/N6xY0gw3GdEf63H with the password: BUG
(I hope I am
I'm running oVirt 4.1.2 on ppc64le platforms (running CentOS 7 LE). I see that my oVirt manager has an issue upgrading my two hosts. Checking the engine.log, I can see the following error message:
2017-07-28 12:14:42,747+02 ERROR
I bought an external certificate from GoDaddy, and they sent me only
one .crt file. I saw this:
I don't know how I can generate a .p12 certificate.
Can someone help me?
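A minimal sketch, assuming you still have the private key that was used
for the CSR and that GoDaddy also shipped an intermediate CA bundle;
all file names below are placeholders for your own paths:

    # Bundle certificate, CA chain and private key into one .p12 file.
    openssl pkcs12 -export \
        -in your_domain.crt \
        -inkey your_domain.key \
        -certfile gd_bundle.crt \
        -out your_domain.p12

openssl will prompt for an export password to protect the .p12.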
This sounds like something that might fit your needs:
and it looks alive.
I didn't try it myself though.
If you're not satisfied with this or any other existing distro, you
could either slim down an existing distro (e.g. creating your own lean
Could this be the cause of the access issue (old system upgraded from 3.4
all the way to 4.1)?
## change in python class name??
When I copied svdsm.logger.conf.rpmnew over
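For anyone hitting the same thing, a minimal sketch of the usual .rpmnew
workflow, assuming the packaged defaults are what you want (supervdsmd
is the stock VDSM service name):

    # Compare the active config with the version shipped by the new package.
    diff /etc/vdsm/svdsm.logger.conf /etc/vdsm/svdsm.logger.conf.rpmnew

    # If there are no local customisations worth keeping, adopt the
    # packaged version and restart the service.
    cp /etc/vdsm/svdsm.logger.conf.rpmnew /etc/vdsm/svdsm.logger.conf
    systemctl restart supervdsmd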
Yes, "none" is a valid policy, assuming you don't need any special
considerations when running a VM.
If you could gather the relevant log entries and the error you see and
open a new bug, it would help us track and fix the issue.
Please specify exactly from which engine version you upgraded and into
which.
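A minimal sketch of what to attach, assuming a stock engine install with
the default log location:

    # Exact engine version, before and after the upgrade.
    rpm -q ovirt-engine

    # Pull the most recent errors out of the engine log for the bug report.
    grep ERROR /var/log/ovirt-engine/engine.log | tail -n 50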
Thank you for your response.
I am now looking into the entries of the "Scheduling Policy" menu: there
is an entry "none"; is it supposed to be there?
Because the error occurs when I select it.
On Tue, Aug 1, 2017 at 10:35 AM, Yanir Quinn wrote:
> Thanks for the update, we will check if there is a bug in the upgrade
> Ok I found the ERROR:
> After the upgrade the scheduling policy was "none". I don't know why it was
> moved to "none", but to fix the problem I did the following:
> Edit Cluster -> Scheduling Policy -> Select Policy: vm_evenly_distributed
> Now I can run/migrate the VMs.
> I think there must be a bug somewhere
Where did you manually install the packages? On the engine or the host
machine?
Can you provide the engine.log to see what might cause the error on the
Maybe roll back the last yum transaction you did and try to run the engine
As for the host installation, you might need to
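If you go the rollback route, a sketch with plain yum; double-check the
transaction ID before undoing anything:

    # List recent transactions and find the one that pulled in the packages.
    yum history list

    # Undo that transaction; replace <ID> with the number from the list,
    # or use "last" for the most recent transaction.
    yum history undo <ID>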
Seems like I already bumped into such an issue a while ago -
but now I see what's wrong - it happens after failing to read
/etc/vdsm/svdsm.logger.conf for some reason;
then we have a bug in our fallback which tries to run
Thanks for the update, we will check if there is a bug in the upgrade
On Mon, Jul 31, 2017 at 6:32 PM, Arman Khalatyan wrote:
> Ok I found the ERROR:
> After the upgrade the scheduling policy was "none". I don't know why it was
> moved to "none", but to fix the problem I did