Re: [ovirt-users] Network configuration validation error

2018-02-11 Thread Artyom Lukianov
This option is relevant only for the upgrade from 3.6 to 4.0 (where the engine
had different OS major versions); in all other cases the upgrade flow is very
similar to the upgrade flow of a standard engine environment. The steps are
listed below, followed by an equivalent command sketch.

   1. Put the hosted-engine environment into GlobalMaintenance (you can do it
   via the UI)
   2. Update the engine packages (# yum update -y)
   3. Run engine-setup
   4. Disable GlobalMaintenance
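
A minimal command-line sketch of these steps (an assumption about where each
command runs: the maintenance commands on one of the hosted-engine hosts, the
update and engine-setup inside the engine VM):

# hosted-engine --set-maintenance --mode=global    (on a hosted-engine host)
# yum update -y                                    (on the engine VM)
# engine-setup                                     (on the engine VM)
# hosted-engine --set-maintenance --mode=none      (on a hosted-engine host)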

Best Regards

On Fri, Feb 9, 2018 at 1:21 PM,  wrote:

> Hi,
> Could someone explain me at least what "Cluster PROD is at version 4.2
> which is not supported by this upgrade flow. Please fix it before
> upgrading." means ? As far as I know 4.2 is the most recent branch
> available, isn't it ?
> Regards
>
>
>
> Le 08-Feb-2018 09:59:32 +0100, mbur...@redhat.com a écrit:
>
> Not sure I understand from which version you are trying to upgrade and what
> the exact upgrade flow is. If I got it correctly, it seems that you upgraded
> the hosts to 4.2, but the engine is still 4.1?
> What exactly were the upgrade steps? Please explain the flow: what have you
> done after upgrading the hosts, and to what version?
>
> Cheers)
>
> On Wed, Feb 7, 2018 at 3:00 PM,  wrote:
>
>> Hi,
>> Thanks a lot for your answer.
>>
>> I applied some updates at node level, but I forgot to upgrade the engine !
>>
>> When I try to do so I get a strange error : "Cluster PROD is at version
>> 4.2 which is not supported by this upgrade flow. Please fix it before
>> upgrading."
>>
>> Here are the installed packages on my nodes:
>> python-ovirt-engine-sdk4-4.2.2-2.el7.centos.x86_64
>> ovirt-imageio-common-1.2.0-1.el7.centos.noarch
>> ovirt-hosted-engine-setup-2.2.3-1.el7.centos.noarch
>> ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
>> ovirt-setup-lib-1.1.4-1.el7.centos.noarch
>> ovirt-release42-4.2.0-1.el7.centos.noarch
>> ovirt-imageio-daemon-1.2.0-1.el7.centos.noarch
>> ovirt-host-dependencies-4.2.0-1.el7.centos.x86_64
>> ovirt-host-4.2.0-1.el7.centos.x86_64
>> ovirt-host-deploy-1.7.0-1.el7.centos.noarch
>> ovirt-hosted-engine-ha-2.2.2-1.el7.centos.noarch
>> ovirt-provider-ovn-driver-1.2.2-1.el7.centos.noarch
>> ovirt-engine-appliance-4.2-20171219.1.el7.centos.noarch
>> ovirt-vmconsole-host-1.0.4-1.el7.noarch
>> cockpit-ovirt-dashboard-0.11.3-0.1.el7.centos.noarch
>> ovirt-vmconsole-1.0.4-1.el7.noarch
>>
>> What am I supposed to do? I see no newer packages available.
>>
>> Regards
>>
>>
>>
>> Le 07-Feb-2018 13:23:43 +0100, mbur...@redhat.com a écrit:
>>
>> Hi
>>
>> This is a bug and it was already fixed in 4.2.1.1-0.1.el7 -
>> https://bugzilla.redhat.com/show_bug.cgi?id=1528906
>>
>> The no default route bug was fixed in - https://bugzilla.redhat.com/show_bug.cgi?id=1477589
>>
>> Thanks,
>>
>> On Wed, Feb 7, 2018 at 1:15 PM,  wrote:
>>
>>>
>>> Hi,
>>> I am experiencing a new problem : when I try to modify something in the
>>> network setup on the second node (added to the cluster after installing the
>>> engine on the other one) using the Engine GUI, I get the following error
>>> when validating :
>>>
>>> must match "^\b((25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)\_){3}(25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)"
>>> Attribut : ipConfiguration.iPv4Addresses[0].gateway
>>>
>>> Moreover, on the general status of the server, I have a "Host has no
>>> default route" alert.
>>>
>>> The ovirtmgmt network has a defined gateway of course, and the storage
>>> network has none because it is not required. Both servers have the same
>>> setup, with different addresses of course :-)
>>>
>>> I have not been able to find anything useful in the logs.
>>>
>>> Is this a bug or am I doing something wrong ?
>>>
>>> Regards
>>>
>>> --
>>> FreeMail powered by mail.fr
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>>
>> --
>>
>> Michael Burman
>>
>> Senior Quality engineer - rhv network - redhat israel
>>
>> Red Hat
>>
>> 
>>
>> mbur...@redhat.comM: 0545355725 IM: mburman
>>
>>
>>
>> --
>> FreeMail powered by mail.fr
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
> --
>
> Michael Burman
>
> Senior Quality engineer - rhv network - redhat israel
>
> Red Hat
>
> 
>
> mbur...@redhat.comM: 0545355725 IM: mburman
>
>
>
> --
> FreeMail powered by mail.fr
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 4-2rc hosted-engine don't boot error:cannot allocate kernel buffer

2017-12-11 Thread Artyom Lukianov
I opened the bug because I had the same issue
https://bugzilla.redhat.com/show_bug.cgi?id=1524331.

Best Regards


On Mon, Dec 11, 2017 at 11:32 AM, Maton, Brett 
wrote:

> Hi Roberto can you check how much RAM is allocated to the HE VM ?
>
>
> virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf
>
> virsh # dominfo HostedEngine
>
>
> The last update I did seems to have changed the HE RAM from 4GB to 4MB!
>
>
> On 11 December 2017 at 09:08, Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Mon, Dec 11, 2017 at 9:47 AM, Roberto Nunin 
>> wrote:
>>
>>> Hello all
>>>
>>> during weekend, I've re-tried to deploy my 4.2_rc lab.
>>> Everything was fine, apart the fact host 2 and 3 weren't imported. I had
>>> to add them to the cluster manually, with the NEW function.
>>> After this Gluster volumes were added fine to the environment.
>>>
>>> Next engine deploy on nodes 2 and 3, ended with ok status.
>>>
>>> Trying to migrate HE from host 1 to host 2 was fine, the same from host 2
>>> to host 3.
>>>
>>> After these two attempts, no way to migrate HE back to any host.
>>> Tried Maintenance mode set to global, reboot the HE and now I'm in the
>>> same condition reported below, not anymore able to boot the HE.
>>>
>>> Here's hosted-engine --vm-status:
>>>
>>> !! Cluster is in GLOBAL MAINTENANCE mode !!
>>>
>>>
>>>
>>> --== Host 1 status ==--
>>>
>>> conf_on_shared_storage : True
>>> Status up-to-date  : True
>>> Hostname   : aps-te61-mng.example.com
>>> Host ID: 1
>>> Engine status  : {"reason": "vm not running on this
>>> host", "health": "bad", "vm": "down", "detail": "unknown"}
>>> Score  : 3400
>>> stopped: False
>>> Local maintenance  : False
>>> crc32  : 7dfc420b
>>> local_conf_timestamp   : 181953
>>> Host timestamp : 181952
>>> Extra metadata (valid at timestamp):
>>> metadata_parse_version=1
>>> metadata_feature_version=1
>>> timestamp=181952 (Mon Dec 11 09:21:46 2017)
>>> host-id=1
>>> score=3400
>>> vm_conf_refresh_time=181953 (Mon Dec 11 09:21:47 2017)
>>> conf_on_shared_storage=True
>>> maintenance=False
>>> state=GlobalMaintenance
>>> stopped=False
>>>
>>>
>>> --== Host 2 status ==--
>>>
>>> conf_on_shared_storage : True
>>> Status up-to-date  : True
>>> Hostname   : aps-te64-mng.example.com
>>> Host ID: 2
>>> Engine status  : {"reason": "vm not running on this
>>> host", "health": "bad", "vm": "down", "detail": "unknown"}
>>> Score  : 3400
>>> stopped: False
>>> Local maintenance  : False
>>> crc32  : 67c7dd1d
>>> local_conf_timestamp   : 181946
>>> Host timestamp : 181946
>>> Extra metadata (valid at timestamp):
>>> metadata_parse_version=1
>>> metadata_feature_version=1
>>> timestamp=181946 (Mon Dec 11 09:21:49 2017)
>>> host-id=2
>>> score=3400
>>> vm_conf_refresh_time=181946 (Mon Dec 11 09:21:49 2017)
>>> conf_on_shared_storage=True
>>> maintenance=False
>>> state=GlobalMaintenance
>>> stopped=False
>>>
>>>
>>> --== Host 3 status ==--
>>>
>>> conf_on_shared_storage : True
>>> Status up-to-date  : True
>>> Hostname   : aps-te68-mng.example.com
>>> Host ID: 3
>>> Engine status  : {"reason": "failed liveliness
>>> check", "health": "bad", "vm": "up", "detail": "Up"}
>>> Score  : 3400
>>> stopped: False
>>> Local maintenance  : False
>>> crc32  : 4daea041
>>> local_conf_timestamp   : 181078
>>> Host timestamp : 181078
>>> Extra metadata (valid at timestamp):
>>> metadata_parse_version=1
>>> metadata_feature_version=1
>>> timestamp=181078 (Mon Dec 11 09:21:53 2017)
>>> host-id=3
>>> score=3400
>>> vm_conf_refresh_time=181078 (Mon Dec 11 09:21:53 2017)
>>> conf_on_shared_storage=True
>>> maintenance=False
>>> state=GlobalMaintenance
>>> stopped=False
>>>
>>>
>>> !! Cluster is in GLOBAL MAINTENANCE mode !!
>>>
>>> (it is in global maintenance to avoid messages to be sent to admin
>>> mailbox).
>>>
>>
>> As soon as you exit the global maintenance mode, one of the hosts should
>> take care of automatically restarting the engine VM within a couple of
>> minutes.
>>
>> If you want to manually start the engine VM 

Re: [ovirt-users] Slow booting host - restart loop

2017-09-06 Thread Artyom Lukianov
It can be a result of the bug
https://bugzilla.redhat.com/show_bug.cgi?id=1477700.

Best Regards

On Tue, Sep 5, 2017 at 6:34 PM, Bernardo Juanicó  wrote:

> Hi Eli,
>
> I could not access the psql prompt, i tried with the credentials and
> values on /etc/ovirt-engine/engine.conf.d/10-setup-database.conf.
>
> I tried interactively and also with a .pgpass file containing:
> localhost:5432:engine:engine:PASSWORD
>
> And i get the following error:
>
> psql: FATAL:  Peer authentication failed for user "engine"
>
> Thanks!
>
> Bernardo
>
>
> PGP Key 
> Skype: mattraken
>
> 2017-09-05 12:14 GMT-03:00 Eli Mesika :
>
>> Hi Bernardo
>>
>> I would like to suggest a workaround to this problem , can you please
>> check that :
>>
>> We have a configuration value named FenceQuietTimeBetweenOperationsInSec.
>> It controls the minimal timeout to wait between fence operation (stop,
>> start),
>> currently, it is defaulted to 180 sec , The key is not exposed to
>> engine-config, so, I would suggest to
>>
>> 1) Change this key value to 900 by running the following from psql prompt
>> :
>>
>> update vdc_options set option_value = '900' where option_name =
>> 'FenceQuietTimeBetweenOperationsInSec';
>>
>> 2) Restart the engine
>>
>> 3) Repeat the scenario
>>
>> Now, the engine will require 15 min between fencing operations and your
>> host can be up again without being fenced again.
>>
>> Please let me know if this workaround is working for you
>>
>> Thanks
>>
>> Eli
>>
>> On Tue, Sep 5, 2017 at 4:20 PM, Bernardo Juanicó 
>> wrote:
>>
>>> Martin, thanks for your reply, i was aware of the [1] BUG and the
>>> implemented solution, changing ServerRebootTimeout to 1200 didnt change a
>>> thing...
>>> Now i know about [2] and ill test the fix once it gets released.
>>>
>>> Regards,
>>>
>>> Bernardo
>>>
>>> PGP Key 
>>> Skype: mattraken
>>>
>>> 2017-09-05 8:23 GMT-03:00 Martin Perina :
>>>
 Hi Bernardo,

 we have added timeout to wait until host is booted [1] in oVirt 4.1.2.
 This timeout is by default 5 minutes, but it can be extended using
 following command:

engine-config -s ServerRebootTimeout=NNN

 where NNN is number of seconds you want to wait until host is booted up.

 But be aware that you may be affected by [2], which we are currently
 trying to fix.

 Regards

 Martin Perina


 [1] https://bugzilla.redhat.com/show_bug.cgi?id=1423657
 [2] https://bugzilla.redhat.com/show_bug.cgi?id=1477700


 On Fri, Sep 1, 2017 at 7:54 PM, Bernardo Juanicó 
 wrote:

> Hi everyone,
>
> I installed 2 hosts on a new cluster and the servers take a really
> long to boot up (about 8 minutes).
>
> When a host crashes or is powered off the ovirt-manager starts it via
> power management, since the servers takes all that time to boot up the
> ovirt-manager thinks it failed to start and proceeds to reboot it, several
> times before giving up, when the server is finally started (about 20
> minutes after the failure)
>
> I changed some engine variables with engine-config trying to set a
> higher timeout, but the problem persists.
>
> Any ideas??
>
>
> Regards,
> Bernardo
>
>
> PGP Key
> 
> Skype: mattraken
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>

>>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Scheduling policy time interval and others

2017-08-13 Thread Artyom Lukianov
You can set these parameters under Cluster Edit Dialog -> Scheduling
Policies.

Best Regards

On Sat, Aug 12, 2017 at 9:19 PM, wodel youchi 
wrote:

> Hi,
>
> When reading about scheduling policy, I am faced with some variables, but
> I cannot find any explanation on how they are defined or where are they
> defined.
>
> For example :
> - HighUtilization: Expressed as a percentage. If the host runs with CPU
> usage at or above the high utilization value for the* defined time
> interval*.
>
> - CpuOverCommitDurationMinutes: Sets the time (in minutes) that a host can
> run a CPU load outside of the* defined utilization values* before the
> scheduling policy takes action.
>
> Time interval and utilization values, where are these values?
>
> Regards.
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cannot set a quota to limit resources for each user

2017-08-03 Thread Artyom Lukianov
Hi Soumya,

   1. Yes, you are correct: a quota is shared among all quota consumers.
   2. No, it is not possible; the only option, as you said, is to create
   a separate quota for each user. I believe it was designed in the first
   place to apply a quota to a group of users, but I think it could be a
   good RFE :)

Best Regards

On Thu, Aug 3, 2017 at 10:54 AM, Soumya Koduri  wrote:

> Hi,
>
> We have a use-case to limit VM resources for each user and were following
> guidelines specified in the admin guide to set quota and limit resources
> for each user [section:  16.8. Using Quota to Limit Resources by User].
> However looks like that quota is shared by all the users added as consumers.
>
> Suppose I have created a quota (say quota1) to limit the storage capacity
> to 100GB for each user. Once I add user1 and user2 as consumers to that
> quota, seems like both users combined are entitled to 100GB. Is my
> understanding correct?
>
> Please let me know if there is any way to configure a single quota which
> could be applied for each user individually (i.e, in the above eg., each
> user should be limited to 100GB storage capacity).
>
> Or is the only way this can be done is by creating separate quota for each
> user [which seems like tedious process and cannot scale]?
>
>
> Thanks,
> Soumya
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine increase memory

2017-07-30 Thread Artyom Lukianov
I believe it depends on the version of oVirt that you use; Yanir's solution
is relevant only for the 3.5 version. Starting from version 3.6 you can edit
the HE VM memory via the UI without any trouble. The only thing you must be
aware of is that under 4.1 you will need to wait for some time until the
engine updates the HE VM OVF file (in 4.2 you do not need to wait).

   1. Enable global maintenance
   2. To reduce the wait time you can change # engine-config -s
   OvfUpdateIntervalInMinutes=1 && systemctl restart ovirt-engine
   3. Change the HE VM memory via the UI and wait for 1-2 minutes
   4. Change the OvfUpdateIntervalInMinutes parameter back to its default value
   5. Restart the HE VM: # hosted-engine --vm-poweroff && hosted-engine
   --vm-start
   6. Disable global maintenance

In 4.2 you do not need to apply steps 2-4 (a command sketch of the 4.1 flow
follows below).
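
A rough command sketch of the 4.1 flow; the interval value of 1 minute and the
default of 60 are assumptions, so check your current value first with
# engine-config -g OvfUpdateIntervalInMinutes:

# hosted-engine --set-maintenance --mode=global          (on a host)
# engine-config -s OvfUpdateIntervalInMinutes=1 && systemctl restart ovirt-engine   (on the engine VM)
  ... change the HE VM memory via the UI and wait 1-2 minutes ...
# engine-config -s OvfUpdateIntervalInMinutes=60 && systemctl restart ovirt-engine  (restore the default)
# hosted-engine --vm-poweroff && hosted-engine --vm-start   (on a host)
# hosted-engine --set-maintenance --mode=none               (on a host)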

Best Regards


On Sun, Jul 30, 2017 at 11:25 AM, Yanir Quinn  wrote:

> Hi,
> In order to increase the memory allocation or the cpu cores of the hosted
> engine VM, the VM must be brought down and the vm.conf configuration file
> must be edited on every hypervisor:
>
> Set global maintenance mode from the hypervisor the VM is currently
> running on:
> # hosted-engine --set-maintenance --mode=global
>
> Cleanly shutdown the engine VM:
> # hosted-engine --vm-shutdown
> Makes sure the VM is Down and not 'Powering Off' before proceeding:
> # hosted-engine --vm-status
> Edit /etc/ovirt-hosted-engine/vm.conf on *each* hypervisor, and change these 
> lines to whatever amount of RAM in MB / CPU cores you wish the VM to have:
>
> memSize=4096
> smp=1   *smp = number of virtual cores*
>
> Start the VM again:# hosted-engine --vm-start
> And take it out of maintenance mode:
> # hosted-engine --set-maintenance --mode=none
>
> Regards,
> Yanir Quinn
>
>
> On Sat, Jul 29, 2017 at 1:36 AM, gregor  wrote:
>
>> Hi,
>>
>> is it possible to increase the memory of an hosted engine after the
>> installation?
>>
>> cheers
>> gregor
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Power failure recovery

2017-06-07 Thread Artyom Lukianov
Under engine-config, I can see two variables connected to the restart of
HA VMs:
MaxNumOfTriesToRunFailedAutoStartVm: "Number of attempts to restart highly
available VM that went down unexpectedly" (Value Type: Integer)
RetryToRunAutoStartVmIntervalInSeconds: "How often to try to restart highly
available VM that went down unexpectedly (in seconds)" (Value Type: Integer)
And their default parameters are:
# engine-config -g MaxNumOfTriesToRunFailedAutoStartVm
MaxNumOfTriesToRunFailedAutoStartVm: 10 version: general
# engine-config -g RetryToRunAutoStartVmIntervalInSeconds
RetryToRunAutoStartVmIntervalInSeconds: 30 version: general

So check the engine.log: if you do not see that the engine tries to restart
the HA VMs ten times, it is definitely a bug; otherwise, you can just play
with these parameters to adapt them to your case.
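
For example, to give the storage more time to come up before the engine gives
up on the HA VMs, a sketch with illustrative values (not a recommendation for
every setup) would be:

# engine-config -s MaxNumOfTriesToRunFailedAutoStartVm=20
# engine-config -s RetryToRunAutoStartVmIntervalInSeconds=120
# systemctl restart ovirt-engine

With these values the engine would keep retrying for roughly 40 minutes
instead of the default ~5 minutes (10 tries x 30 seconds).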
Best Regards

On Wed, Jun 7, 2017 at 12:52 PM, Chris Boot  wrote:

> Hi all,
>
> We've got a three-node "hyper-converged" oVirt 4.1.2 + GlusterFS cluster
> on brand new hardware. It's not quite in production yet but, as these
> things always go, we already have some important VMs on it.
>
> Last night the servers (which aren't yet on UPS) suffered a brief power
> failure. They all booted up cleanly and the hosted engine started up ~10
> minutes afterwards (presumably once the engine GlusterFS volume was
> sufficiently healed and the HA stack realised). So far so good.
>
> As soon at the HostedEngine started up it tried to start all our Highly
> Available VMs. Unfortunately our master storage domain was as yet
> inactive as GlusterFS was presumably still trying to get it healed.
> About 10 minutes later the master domain was activated and
> "reconstructed" and an SPM was selected, but oVirt had tried and failed
> to start all the HA VMs already and didn't bother trying again.
>
> All the VMs started just fine this morning when we realised what
> happened and logged-in to oVirt to start them.
>
> Is this known and/or expected behaviour? Can we do anything to delay
> starting HA VMs until the storage domains are there? Can we get oVirt to
> keep trying to start HA VMs when they fail to start?
>
> Is there a bug for this already or should I be raising one?
>
> Thanks,
> Chris
>
> --
> Chris Boot
> bo...@bootc.net
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted engine install fails for bonded interface

2017-06-07 Thread Artyom Lukianov
The only thing that I can think of is that the HE VM FQDN is not resolvable
via DNS, so when the HE deployment tries to reach it, it fails.
Can you check that you can connect to the HE VM via the FQDN "happyhourovirt"
after engine-setup?
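
A quick sketch of checks you can run from the host (replace the name below
with your actual engine FQDN):

# getent hosts happyhourovirt                   (should resolve to the engine VM IP)
# ping -c 3 happyhourovirt                      (the engine VM should answer)
# curl -k https://happyhourovirt/ovirt-engine/  (should return the welcome page once engine-setup has finished)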

Best Regards

On Wed, Jun 7, 2017 at 11:46 AM, Ramachandra Reddy Ankireddypalle <
rcreddy.ankireddypa...@gmail.com> wrote:

> Please find attached hosted engine setup log.
> hosted-engine deploy is stuck in the following loop.
>
>   |- HE_APPLIANCE_ENGINE_SETUP_SUCCESS
> [ INFO  ] Engine-setup successfully completed
> [ INFO  ] Engine is still unreachable
> [ INFO  ] Engine is still not reachable, waiting...
>
>
> On Wed, Jun 7, 2017 at 3:40 AM, Artyom Lukianov <aluki...@redhat.com>
> wrote:
>
>> Hi, can you please provide the ovirt-hosted-engine-setup log?
>>
>> On Wed, Jun 7, 2017 at 9:45 AM, Ramachandra Reddy Ankireddypalle <
>> rcreddy.ankireddypa...@gmail.com> wrote:
>>
>>> Hi,
>>>  I created a bonded interface consisting of two network interfaces.
>>> When I tried to install hosted engine over the bonded interface, it fails
>>> at the end with the following error message:
>>>
>>>  [ ERROR ] Cannot automatically add the host to cluster Default:
>>>   400
>>> Bad Request  Bad Request Your browser sent
>>> a request that this server could not understand.  
>>>
>>>  If I try to install over the eth interface instead of the bonded
>>> interface the install succeeds. Please suggest what needs to be done to use
>>> bonded interface.
>>>
>>> Thanks and Regards,
>>> Ram
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted engine install fails for bonded interface

2017-06-07 Thread Artyom Lukianov
Hi, can you please provide the ovirt-hosted-engine-setup log?

On Wed, Jun 7, 2017 at 9:45 AM, Ramachandra Reddy Ankireddypalle <
rcreddy.ankireddypa...@gmail.com> wrote:

> Hi,
>  I created a bonded interface consisting of two network interfaces.
> When I tried to install hosted engine over the bonded interface, it fails
> at the end with the following error message:
>
>  [ ERROR ] Cannot automatically add the host to cluster Default:
>   400
> Bad Request  Bad Request Your browser sent
> a request that this server could not understand.  
>
>  If I try to install over the eth interface instead of the bonded
> interface the install succeeds. Please suggest what needs to be done to use
> bonded interface.
>
> Thanks and Regards,
> Ram
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] The host xxx did not satisfy internal filter Memory because its available memory is too low

2017-03-28 Thread Artyom Lukianov
Try to use "Print Screen" button :)
Also, I wanted to correct my answer. it looks like a bug only in the case
when the host does not have other VM's that run on it.
In the case when you have other VM's that run on the host, possible the
situation when the host free memory will be greater than "Max free Memory
for scheduling new VMs".
The variable "Max free Memory for scheduling new VMs" has straight relation
to the guaranteed memory on the VM's that run on the host, but it does not
the real amount of the allocated memory on the host.

Best Regards


On Mon, Mar 27, 2017 at 8:32 PM, yimao <yiima...@gmail.com> wrote:

> I am sorry, how can I get the screenshot of the host in the engine?
>
> 2017-03-27 16:57 GMT+08:00 Artyom Lukianov <aluki...@redhat.com>:
> > Looks like a bug, the "Max free Memory for scheduling new VMs" in the
> case
> > when we do not have memory optimization must be equal to the free memory
> on
> > the host(minus some small amount of the reserved memory). Can you please
> > provide the screenshot of the host in the engine and also output of the
> > command # free -h on the problematic host.
> >
> > Best Regards
> >
> > On Sun, Mar 26, 2017 at 6:13 PM, yimao <yiima...@gmail.com> wrote:
> >>
> >> Hi,
> >>
> >> When I create vm after I have installed ovirt Node 4.1.1 successfully,
> >> I got this error message "The host  did not satisfy internal
> >> filter Memory because its available memory is too low".
> >>
> >> I use ovirt node:ovirt-node-ng-installer-ovirt-4.1-2017032304.iso and
> >> ovirt-engine-appliance-4.1-20170322.1.el7.centos.noarch.rpm. And I
> >> followed the steps in http://www.ovirt.org/node/.
> >>
> >> I found the informations in  "hosts"->"general"->"info" page:
> >> "Max free Memory for scheduling new VMs: 356 MB"
> >> "Physical Memory: 15747 MB total, 5039 MB used, 10708 MB free"
> >>
> >> Why the free memory for scheduling new vms is so little? Is there
> >> anything wrong with my configuration?
> >>
> >> Thanks in advance,
> >> Yiimao Yang.
> >> ___
> >> Users mailing list
> >> Users@ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/users
> >
> >
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] The host xxx did not satisfy internal filter Memory because its available memory is too low

2017-03-27 Thread Artyom Lukianov
Looks like a bug: when we do not have memory optimization, "Max free Memory
for scheduling new VMs" must be equal to the free memory on the host (minus
some small amount of reserved memory). Can you please provide a screenshot
of the host in the engine and also the output of the command # free -h on
the problematic host?

Best Regards

On Sun, Mar 26, 2017 at 6:13 PM, yimao  wrote:

> Hi,
>
> When I create vm after I have installed ovirt Node 4.1.1 successfully,
> I got this error message "The host  did not satisfy internal
> filter Memory because its available memory is too low".
>
> I use ovirt node:ovirt-node-ng-installer-ovirt-4.1-2017032304.iso and
> ovirt-engine-appliance-4.1-20170322.1.el7.centos.noarch.rpm. And I
> followed the steps in http://www.ovirt.org/node/.
>
> I found the informations in  "hosts"->"general"->"info" page:
> "Max free Memory for scheduling new VMs: 356 MB"
> "Physical Memory: 15747 MB total, 5039 MB used, 10708 MB free"
>
> Why the free memory for scheduling new vms is so little? Is there
> anything wrong with my configuration?
>
> Thanks in advance,
> Yiimao Yang.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cloud init and vms

2017-03-25 Thread Artyom Lukianov
Also, you need to be aware of the fact that the engine applies the
cloud-init configuration only when you start the VM for the first time. So
the scenario must be:

   1. Create a VM for the template
   2. Install and enable cloud-init on it, and seal it
   3. Make a template from the VM
   4. Make a new VM from the template and fill in all relevant cloud-init
   parameters
   5. Start the VM

If you are not sure whether the VM attached the cloud-init file, you can
check the VM devices via # virsh -r dumpxml <vm_name>
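
For example (a sketch; the VM name is a placeholder), on the host that runs
the VM:

# virsh -r dumpxml my-cloudinit-vm | grep -A5 'device="cdrom"'

If the cloud-init payload was attached, you should typically see a cdrom
device whose source points to a temporary payload image generated by VDSM.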
Best Regards

On Thu, Mar 23, 2017 at 5:33 PM, Endre Karlson <endre.karl...@gmail.com>
wrote:

> Yeah i tried that with ubuntu but it looks to just fail whebmn it starts
> the vm because it tries to contact the metadata api instead of using the
> cdrom source
>
> 23. mar. 2017 4:26 p.m. skrev "Artyom Lukianov" <aluki...@redhat.com>:
>
>> Just be sure that the cloud-init service enabled before you create the
>> template, otherwise it will fail to initialize a VM.
>> Best Regards
>>
>> On Thu, Mar 23, 2017 at 1:06 PM, Endre Karlson <endre.karl...@gmail.com>
>> wrote:
>>
>>> Hi, is there any prerequisite setup on a Ubuntu vm that is turned into a
>>> template that needs to be done except installing cloud init packages and
>>> sealing the template?
>>>
>>> Endre
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] appliance upgrade 4.0 to 4.1

2017-03-25 Thread Artyom Lukianov
The correct way to upgrade the appliance from 4.0 to 4.1 (a command sketch
follows the steps below):

   1. Enable GlobalMaintenance
   2. Add the 4.1 repositories to the appliance
   3. Update the relevant packages
   4. Run # engine-setup
   5. Disable GlobalMaintenance
   6. Change cluster and datacenter compatibility version
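
A command sketch of this flow (the release RPM URL follows the pattern of the
other release packages mentioned on this list; verify it for your version):

# hosted-engine --set-maintenance --mode=global                             (on a host)
# yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm   (on the engine VM)
# yum update ovirt\*setup\*                                                 (on the engine VM)
# engine-setup                                                              (on the engine VM)
# yum update                                                                (remaining packages, on the engine VM)
# hosted-engine --set-maintenance --mode=none                               (on a host)

After that, raise the cluster and data center compatibility version from the
Administration Portal.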

The --upgrade-appliance command was added only for the purpose of making an
easier transition from 3.6 to 4.0; all other scenarios are not supported.
Best Regards

On Fri, Mar 24, 2017 at 5:35 PM, Nelson Lameiras <
nelson.lamei...@lyra-network.com> wrote:

> (I should add that I'm trying to upgrade "oVirt Engine Appliance 4.0" to
> "oVirt Engine Appliance 4.1")
>
> Hello,
>
> I'm trying to upgrade an appliance based oVirt install (2 nodes with
> centos 7.3, ovirt 4.0) using "hosted-engine --upgrade-appliance" on one
> host.
>
> After multiples tries, I always get this error at the end :
>
> [ INFO  ] Running engine-setup on the appliance
>   |- Preparing to restore:
>   |- - Unpacking file '/root/engine_backup.tar.gz'
>   |- FATAL: Backup was created by version '4.0' and can not be
> restored using the installed version 4.1
>   |- HE_APPLIANCE_ENGINE_RESTORE_FAIL
> [ ERROR ] Engine backup restore failed on the appliance
>
> is this normal?
> is this process not yet compatible with oVirt 4.1 appliance?
> which is the "official" way to update oVirt from 4.0 to 4.1
>
> I tried to do a "yum update && engine setup && reboot" on engine, and
> indeed it works, but there is no rollback possible, so it seems a little
> dangerours (?)
>
>
>
> cordialement, regards,
>
> 
> Nelson LAMEIRAS
> Ingénieur Systèmes et Réseaux / Systems and Networks engineer
> Tel: +33 5 32 09 09 70 <+33%205%2032%2009%2009%2070>
> nelson.lamei...@lyra-network.com
> www.lyra-network.com | www.payzen.eu 
> 
> 
> 
> 
> --
> Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cloud init and vms

2017-03-23 Thread Artyom Lukianov
Just be sure that the cloud-init service is enabled before you create the
template; otherwise it will fail to initialize the VM.
Best Regards

On Thu, Mar 23, 2017 at 1:06 PM, Endre Karlson 
wrote:

> Hi, is there any prerequisite setup on a Ubuntu vm that is turned into a
> template that needs to be done except installing cloud init packages and
> sealing the template?
>
> Endre
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Hosted Engine Setup fails

2017-03-05 Thread Artyom Lukianov
I found this one under the vdsm log:
libvirtError: internal error: process exited while connecting to monitor:
Could not access KVM kernel module: Permission denied
failed to initialize KVM: Permission denied
Thread-70::INFO::2017-03-05
16:00:04,325::vm::1330::virt.vm::(setDownStatus)
vmId=`ed786811-0321-431e-be4b-2d03764c1b02`::Changed state to Down:
internal error: process exited while connecting to monitor: Could not
access KVM kernel module: Permission denied
failed to initialize KVM: Permission denied (code=1)
Thread-70::INFO::2017-03-05 16:00:04,325::guestagent::430::virt.vm::(stop)
vmId=`ed786811-0321-431e-be4b-2d03764c1b02`::Stopping connection
Thread-70::DEBUG::2017-03-05
16:00:04,325::vmchannels::238::vds::(unregister) Delete fileno 52 from
listener.
Thread-70::DEBUG::2017-03-05
16:00:04,325::vmchannels::66::vds::(_unregister_fd) Failed to unregister FD
from epoll (ENOENT): 52
Thread-70::DEBUG::2017-03-05
16:00:04,326::__init__::209::jsonrpc.Notification::(emit) Sending event
{"params": {"ed786811-0321-431e-be4b-2d03764c1b02": {"status": "Down",
"exitReason": 1, "exitMessage": "internal error: process exited while
connecting to monitor: Could not access KVM kernel module: Permission
denied\nfailed to initialize KVM: Permission denied", "exitCode": 1},
"notify_time": 4339924730}, "jsonrpc": "2.0", "method":
"|virt|VM_status|ed786811-0321-431e-be4b-2d03764c1b02"}

Can you check if you have the KVM modules loaded? Also, check the group
owner of "/dev/kvm".
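
A quick sketch of those checks on the host:

# lsmod | grep kvm      (should list kvm plus kvm_intel or kvm_amd)
# ls -l /dev/kvm        (the group owner is normally kvm, and qemu/vdsm must be able to access it)

If the modules are not loaded, try # modprobe kvm_intel (or kvm_amd) and make
sure virtualization support is enabled in the BIOS/firmware.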
Best Regards


On Sat, Mar 4, 2017 at 4:24 PM, Manuel Luis Aznar <
manuel.luis.az...@gmail.com> wrote:

> Hello there again,
>
> The error on the first email was using the repo ovirt-release41.rpm (
> http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm), so as I
> were getting the same error again and again I am currently trying with
> ovirt-release41-snapshot.rpm (http://resources.ovirt.org/
> pub/yum-repo/ovirt-release41-snapshot.rpm) and the result is nearly the
> same.
>
> After creating the VM on the installation I got the same error with the
> command "systemctl status vdsmd":
>
> mar 04 14:10:19 host1.bajada.es vdsm[20443]: vdsm root ERROR failed to
> retrieve Hosted Engine HA info
>
> Traceback (most recent call last):
>File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line
> 231, in _getHaInfo
>   stats = instance.get_all_stats()
>File 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
> line 102, in get_all_stats
>   with broker.connection(self._retries, self._wait):
>File "/usr/lib64/python2.7/contextlib.py", line 17, in
> __enter__
>   return self.gen.next()
>File 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
> line 99, in connection
>   self.connect(retries, wait)
>File 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
> line 78, in connect
>   raise BrokerConnectionError(error_msg)
>  BrokerConnectionError: Failed to connect to broker, the number of
> errors has exceeded the limit (1)
>
> mar 04 14:10:34 host1.bajada.es vdsm[20443]: vdsm
> ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink ERROR Failed to connect
> to broker, the number of errors has exceeded the limit (1)
>
> I have noticed that the ovirt-ha-agent and ovirt-ha-broker services was
> not running. I guess if this have something to do with the error in vsmd
> service log.
>
> But in this case the ovirt-hosted-engine-installation prints the vnc
> connection and I can connect to the engine VM.
>
> Thanks for all in advance
> Any help would be appreciated
> Manuel Luis Aznar
>
> 2017-03-03 21:48 GMT+00:00 Manuel Luis Aznar 
> :
>
>> Hello there,
>>
>> I am having some trouble when deploying an oVirt 4.1 hosted engine
>> installation.
>>
>> When I m just to end the installation and the hosted engine setup script
>> is about to start the Vm engine (appliance) it fails saying "The VM is not
>> powring up".
>>
>> If I double check the service vdsmd i get this error all the time:
>>
>> vdsm root ERROR failed to retrieve Hosted Engine HA info
>>  Traceback (most recent call last):
>>  File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 231,
>> in _getHaInfo
>>  stats = instance.get_all_stats()
>>  File 
>> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
>> line 102, in get_all_stats
>>  with broker.connection(self._retries, self._wait):
>>  File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
>>  return self.gen.next()
>>  File 
>> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
>> line 99, in connection
>>  self.connect(retries, wait)
>>  File 
>> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
>> line 78, in connect
>>  raise 

Re: [ovirt-users] oVirt Hosted Engine Setup fails

2017-03-04 Thread Artyom Lukianov
Please provide the ovirt-hosted-engine-setup log.

Best Regards

On Sat, Mar 4, 2017 at 4:24 PM, Manuel Luis Aznar <
manuel.luis.az...@gmail.com> wrote:

> Hello there again,
>
> The error on the first email was using the repo ovirt-release41.rpm (
> http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm), so as I
> were getting the same error again and again I am currently trying with
> ovirt-release41-snapshot.rpm (http://resources.ovirt.org/
> pub/yum-repo/ovirt-release41-snapshot.rpm) and the result is nearly the
> same.
>
> After creating the VM on the installation I got the same error with the
> command "systemctl status vdsmd":
>
> mar 04 14:10:19 host1.bajada.es vdsm[20443]: vdsm root ERROR failed to
> retrieve Hosted Engine HA info
>
> Traceback (most recent call last):
>File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line
> 231, in _getHaInfo
>   stats = instance.get_all_stats()
>File 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
> line 102, in get_all_stats
>   with broker.connection(self._retries, self._wait):
>File "/usr/lib64/python2.7/contextlib.py", line 17, in
> __enter__
>   return self.gen.next()
>File 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
> line 99, in connection
>   self.connect(retries, wait)
>File 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
> line 78, in connect
>   raise BrokerConnectionError(error_msg)
>  BrokerConnectionError: Failed to connect to broker, the number of
> errors has exceeded the limit (1)
>
> mar 04 14:10:34 host1.bajada.es vdsm[20443]: vdsm
> ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink ERROR Failed to connect
> to broker, the number of errors has exceeded the limit (1)
>
> I have noticed that the ovirt-ha-agent and ovirt-ha-broker services was
> not running. I guess if this have something to do with the error in vsmd
> service log.
>
> But in this case the ovirt-hosted-engine-installation prints the vnc
> connection and I can connect to the engine VM.
>
> Thanks for all in advance
> Any help would be appreciated
> Manuel Luis Aznar
>
> 2017-03-03 21:48 GMT+00:00 Manuel Luis Aznar 
> :
>
>> Hello there,
>>
>> I am having some trouble when deploying an oVirt 4.1 hosted engine
>> installation.
>>
>> When I m just to end the installation and the hosted engine setup script
>> is about to start the Vm engine (appliance) it fails saying "The VM is not
>> powring up".
>>
>> If I double check the service vdsmd i get this error all the time:
>>
>> vdsm root ERROR failed to retrieve Hosted Engine HA info
>>  Traceback (most recent call last):
>>  File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 231,
>> in _getHaInfo
>>  stats = instance.get_all_stats()
>>  File 
>> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
>> line 102, in get_all_stats
>>  with broker.connection(self._retries, self._wait):
>>  File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
>>  return self.gen.next()
>>  File 
>> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
>> line 99, in connection
>>  self.connect(retries, wait)
>>  File 
>> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
>> line 78, in connect
>>  raise BrokerConnectionError(error_msg)
>> BrokerConnectionError: Failed to connect to broker, the number of errors
>> has exceeded the limit (1)
>>
>> Did anyone have experimented the same problem?¿? Any hint on How to
>> solved it?¿? I have tried several times with clean installations and always
>> getting the same...
>>
>> The host where I am trying to do the installation have CentOS 7...
>>
>>
>> Thanks for all in advance
>> Will be waiting for any hint to see what I am doing wrong...
>> Manuel Luis Aznar
>>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] add vm numa node message

2017-02-12 Thread Artyom Lukianov
Looks like a bug...
Under the engine log I can see:
2017-02-12 03:18:03,160-05 INFO
 [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-22)
[43c019b6-06b8-4c2f-bb2b-0639ebd17df9] Running command: Upd
ateVmCommand internal: false. Entities affected :  ID:
5df59ddd-a70a-405e-b528-f02bdcf0d3c9 Type: VMAction group
EDIT_VM_PROPERTIES with role type USER
2017-02-12 03:18:03,171-05 INFO
 [org.ovirt.engine.core.bll.numa.vm.SetVmNumaNodesCommand] (default
task-22) [6bf7ace6] Running command: SetVmNumaNodesCom
mand internal: true.
2017-02-12 03:18:03,180-05 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-22) [6bf7ace6] EVENT_ID: NUMA_ADD_VM
_NUMA_NODE_SUCCESS(1,300), Correlation ID: 6bf7ace6, Call Stack: null,
Custom Event ID: -1, Message: Add VM NUMA node successfully.
But at least it does not change the VM NUMA mode; I will open a bug for this
issue.

Thanks

On Sat, Feb 11, 2017 at 5:43 PM, Gianluca Cecchi 
wrote:

> Hello,
> yesterday I saw this message 3 times
>
> Add VM NUMA node successfully.
>
> and it seems to me that it was at the same time when I edited some VMs and
> in
> Console --> Advance parameters
> I set "disable strict user checking"
> Are they indeed related? What is the relation with NUMA?
>
> Thanks,
> Gianluca
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine migration problems

2017-02-12 Thread Artyom Lukianov
It looks like the engine does not recognize the engine1 and engine3 hosts as
good candidates for the HE VM migration; can you please check the HE score
of these hosts before you run the migration?
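
A sketch of how to check the score of each host before migrating:

# hosted-engine --vm-status | grep -E 'Hostname|Score'

A healthy HA host normally reports a score of 3400; a host with a score of 0,
or without the HA services running at all, will be filtered out as a
migration target for the HE VM.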

On Sat, Feb 11, 2017 at 8:18 AM, Jim Kusznir  wrote:

> Hi again:
>
> I thought I had fixed the hosted engine migration that was preventing me
> from updating the host the engine was running on.  Today it let me migrate
> it from ovirt1 to ovirt2, and perform needed updates on ovirt1.  When I
> tried to migrate it back to ovirt1 after the updates, I got errors that it
> failed migration.  I tried an auto-migrate, and it claimed that the other
> two nodes (including the node it was running on) do not meet minimum
> requrements, specifically that they are not HA nodesBut I did
> explicitly set them up as HA nodes.
>
> Here's the engine.log output from the command:
>
> 2017-02-11 06:12:03,078 INFO  
> [org.ovirt.engine.core.bll.scheduling.SchedulingManager]
> (default task-41) [252e1f97] Candidate host 'engine1'
> ('1e182fb9-8057-42ed-abd6-bc5bc343ccc6') was filtered out by
> 'VAR__FILTERTYPE__INTERNAL' filter 'HA' (correlation id: null)
> 2017-02-11 06:12:03,078 INFO  
> [org.ovirt.engine.core.bll.scheduling.SchedulingManager]
> (default task-41) [252e1f97] Candidate host 'engine3'
> ('bac8ace2-cf7e-48ea-9113-b82343cd87f7') was filtered out by
> 'VAR__FILTERTYPE__INTERNAL' filter 'HA' (correlation id: null)
> 2017-02-11 06:12:03,081 INFO  
> [org.ovirt.engine.core.bll.scheduling.SchedulingManager]
> (default task-41) [252e1f97] Candidate host 'engine2'
> ('76c075fc-1dfb-479d-98ef-57575ec11787') was filtered out by
> 'VAR__FILTERTYPE__INTERNAL' filter 'Migration' (correlation id: null)
> 2017-02-11 06:12:03,081 WARN  [org.ovirt.engine.core.bll.MigrateVmCommand]
> (default task-41) [252e1f97] Validation of action 'MigrateVm' failed for
> user admin@internal-authz. Reasons: VAR__ACTION__MIGRATE,VAR__
> TYPE__VM,SCHEDULING_ALL_HOSTS_FILTERED_OUT,VAR__FILTERTYPE__INTERNAL,$hostName
> engine1,$filterName HA,VAR__DETAIL__NOT_HE_HOST,SCHEDULING_HOST_FILTERED_
> REASON_WITH_DETAIL,VAR__FILTERTYPE__INTERNAL,$hostName
> engine3,$filterName HA,VAR__DETAIL__NOT_HE_HOST,SCHEDULING_HOST_FILTERED_
> REASON_WITH_DETAIL,VAR__FILTERTYPE__INTERNAL,$hostName
> engine2,$filterName Migration,VAR__DETAIL__SAME_
> HOST,SCHEDULING_HOST_FILTERED_REASON_WITH_DETAIL
>
> I'm a bit confused by thisI followed the ovirt+gluster howto
> referenced from the contributed documentation page.
>
> --Jim
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.0.x - hosted-engine was not starting properly

2016-09-29 Thread Artyom Lukianov
We have the same configuration in the file
/etc/httpd/conf.d/z-ovirt-engine-proxy.conf for the regular engine under 3.6
and 4.0, so I am not sure if it relates to the problem.
About the entropy level, check the bug
https://bugzilla.redhat.com/show_bug.cgi?id=1357246.
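
A sketch of the entropy check, run inside the engine VM:

# cat /proc/sys/kernel/random/entropy_avail

If the value stays low (a few hundred or less), installing an entropy daemon
such as haveged, or adding a virtio-rng device to the engine VM (see the bug
above), usually helps.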
Best Regards

On Thu, Sep 29, 2016 at 1:47 PM, Martin Perina  wrote:

> Hi,
>
> please take a look at my inline comments:
>
> On Tue, Sep 27, 2016 at 7:23 PM, Gervais de Montbrun <
> gerv...@demontbrun.com> wrote:
>
>> Hey All,
>>
>> Since updating to 4.0.x of oVirt, I have had an issue with my hosted
>> engine. After a some poking around, I think I have figured out my issue and
>> thought I would share to see what others think.
>> The issue has existed with 4.0, 4.0.1, 4.0.2, 4.0.3, and still exists in
>> 4.0.4.
>>
>> Description:
>> When my hosted engine starts it reports that it is in a degraded state
>> with 7 or 8 services still not started when I run systemctl status. It
>> takes about 6 or 7 minutes to eventually start all the services and come
>> online. If I don't set my cluster to Global-Maintenance mode it eventually
>> thinks that my hosted-engine needs to be rebooted and restarts it before it
>> can start everything.
>>
>
> ​Could you please share with us logs gathered by ovirt-log-collector?
>
> It's just a guess but could you please take a look if you HE VM has enough
> entropy?
>
>   cat /proc/sys/kernel/random/entropy_avail
>
> If the value is low (below or around 200),  you really need to install and
> configure some entropy generator such as haveged
>
>
>> Solution:
>> I realized that Apache was the culprit and found that the proxy to the
>> ovirt-engine in /etc/httpd/conf.d/z-ovirt-engine-proxy.conf has a super
>> long timeout with many retries. I changed the settings and now everything
>> works for me.
>>
>> -> Before change:
>>
>> > RHEVManagerWeb/|OvirtEngineWeb/|ca.crt$|engine.ssh.key.txt$|
>> rhevm.ssh.key.txt$)>
>> ProxyPassMatch ajp://127.0.0.1:8702 timeout=3600 retry=5
>>
>> 
>> AddOutputFilterByType DEFLATE text/javascript text/css
>> text/html text/xml text/json application/xml application/json
>> application/x-yaml
>> 
>> 
>>
>>
>> -> After change:
>>
>> 
>> ProxyPassMatch ajp://127.0.0.1:8702 timeout=5 retry=2
>>
>> 
>> AddOutputFilterByType DEFLATE text/javascript text/css
>> text/html text/xml text/json application/xml application/json
>> application/x-yaml
>> 
>> 
>>
>>
> ​This one is correct for 4.0​
> ​, not sure why it was not updated during upgrade from 3.6. @Simone?
> ​
>
>
>>
>> If I read the timeout settings correctly, it will wait 60 minutes with 5
>> retries. 5 hours is way too long for my little server to hold onto all
>> those apache processes.
>>
> The change I made allows for there to be an error, and also releases
>> apache's hold on the process. Once everything is ready, apache is ready to
>> serve requests and everything/everyone is happy. Before making the change,
>> I just get a whitescreen in my browser and then nothing works until I
>> restart Apache (or I end up in an endless loop of ovirt-ha services
>> restarting my hosted-engine.
>>
>
> ​Well, if you have an issue with too many apache processes waiting for
> engine to respond, then there's some issue in engine. As I wrote above
> please share the logs with us and check entropy.
>
> Thanks
>
> Martin Perina
> ​
>
>
>>
>> I noticed that this setting reverts to the original setting, so oVirt
>> must be writing this file. Perhaps these number can be changed in oVirt? If
>> not, I will just setup and ansible play to revert the settings with working
>> values and restart apache on my engine.
>> :-)
>>
>> Cheers,
>> Gervais
>>
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Scheduling policy for oVirt cluster "vm_evenly_distributed"

2016-09-27 Thread Artyom Lukianov
Can you please show a screenshot of Clusters -> Edit Cluster -> Scheduling
Policy, and also specify which host is the SPM?

On Tue, Sep 27, 2016 at 10:09 AM, <aleksey.maksi...@it-kb.ru> wrote:

> These parameters can be seen in the screenshot in my first message
>
> 27.09.2016, 09:51, "Artyom Lukianov" <aluki...@redhat.com>:
> > Please provide all policy parameters(HighVmCount, SpmVmGrace and
> MigrationThreshold)
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Scheduling policy for oVirt cluster "vm_evenly_distributed"

2016-09-27 Thread Artyom Lukianov
Please provide all policy parameters (HighVmCount, SpmVmGrace and
MigrationThreshold), and also specify which host is the SPM.
You can find some additional information on how this policy works here:
http://www.ovirt.org/develop/release-management/features/sla/even-vm-count-distribution/
Best Regards

On Mon, Sep 26, 2016 at 12:44 PM,  wrote:

> Hello oVirt guru`s!
>
> oVirt Engine Version: 4.0.3-1.el7.centos
>
> I test load balancing (Scheduling policy for oVirt cluster)
> I created a copy of the policy "vm_evenly_distributed" (see
> My_vm_evenly_distributed.png)
> The policy specified limit 2 VMs on the host.
>
> I have a 4 host in oVirt cluster. On the first host are 5 virtual
> machines. Other hosts do not have the VM.
>
> HOST1 - 5VM
> HOST2 - 0VM
> HOST3 - 0VM
> HOST4 - 0VM
>
> Now I apply the policy on a cluster and I expect that oVirt start to
> migrate VMs from first host to another, e.g.
>
> HOST1 - 2 VM
> HOST2 - 2 VM
> HOST3 - 1 VM
> HOST4 - 0 VM
>
> But in fact, migration is performed only one virtual machine:
>
> HOST1 - 4 VM
> HOST2 - 1 VM
> HOST3 - 0 VM
> HOST4 - 0 VM
>
> All hosts on the same hardware.
> VMs not have Affinity Groups.
> All VMs are HA-enabled and automigration-enabled
>
> What am I doing wrong?
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt hosted-engine deploy never works

2016-09-01 Thread Artyom Lukianov
Just run hosted-engine --vm-status on your host.

On Thu, Sep 1, 2016 at 5:47 PM, Osvaldo ALVAREZ POZO 
wrote:

> hello,
>
> I managed to install hoste-engine.
> I think
>
> yum localinstall http://dl.fedoraproject.org/
> pub/epel/7/x86_64/h/haveged-1.9.1-1.el7.x86_64.rpm
>
>  -systemctl enable haveged
>
> systemctl start haveged
>
>
> cat /proc/sys/kernel/random/entropy_avail
>
> 1888
> it was helpfull
>
> I lost connection for a while to the ovirt-node
>
> But I think the hosted-engine vm is not in the iscsi LUN.
> How can I see where is the VM located?
>
> Thanks
>
> --
> *De: *"Simone Tiraboschi" 
> *À: *"alvarez" 
> *Cc: *"Gianluca Cecchi" , "users" <
> users@ovirt.org>
> *Envoyé: *Jeudi 1 Septembre 2016 12:12:39
>
> *Objet: *Re: [ovirt-users] ovirt hosted-engine deploy never works
>
>
>
> On Thu, Sep 1, 2016 at 12:06 PM, Osvaldo ALVAREZ POZO <
> alva...@xtra-mail.fr> wrote:
>
>> Hello,
>>
>> but the problem is that i am not able to finish the hosted_engine deploy.
>> So, I am not able to modify the Vm, because procces stoped :-(
>>
>> thanks for the help
>>
>>
> Yes, adding entropy to the host should still help.
>
>
>> --
>> *De: *"Simone Tiraboschi" 
>> *À: *"Gianluca Cecchi" 
>> *Cc: *"alvarez" , "users" 
>> *Envoyé: *Jeudi 1 Septembre 2016 12:00:22
>> *Objet: *Re: [ovirt-users] ovirt hosted-engine deploy never works
>>
>>
>>
>> On Thu, Sep 1, 2016 at 11:48 AM, Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>> On Thu, Sep 1, 2016 at 11:39 AM, Osvaldo ALVAREZ POZO <
>>> alva...@xtra-mail.fr> wrote:
>>>
 hello,

 but normally it is not possible to install paquets on ovirt node
 that's way I wonder how to install  haveged?

 yum localinstall is this the command?

 Thanks


>>> Actually the haveged workaround is to apply on engine server, tipically
>>> when it is deployed as a VM, not the host.
>>> HIH,
>>> Gianluca
>>>
>>>
>> The effectiveness of HAVEGE (HArdware Volatile Entropy Gathering and
>> Expansion) on virtual machines is quite debated.
>> Having more entropy on the host could also help, but yes, we also have to
>> add a virtio-rng device to the engine VM:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1357246
>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt hosted-engine deploy never works

2016-08-31 Thread Artyom Lukianov
Please attach the HE setup log(/var/log/ovirt-hosted-engine-setup/).

On Wed, Aug 31, 2016 at 1:27 PM, Osvaldo ALVAREZ POZO 
wrote:

> Hello,
>
> I am trying to deploy ovirt hosted-engine (over iscsi, i do not have NFS)
> from cockpit web interface.
> I have:
> -1 ovirt-node (ovirt-node-ng-installer-ovirt-4.0-2016062412)
> -le rpm for the appliance ovirt-engine-appliance-4.0-20160819.1.el7.
> centos.noarch
>
> I have four network nics let's say nic1, nic2, nic3 and nic4. I use nic3
> and nic4 for ISCSi
> I have the ovirt-node ip on nic1.
>
> Any time I try to deploy ovirt hosted engine from ovirt-node cockpit
> interface the result is the same, I receive a "time out message", and the
> ovirt-node cockpit web interface does not work anymore.
>
> The documentation does not talk about network card requirement I do not
> know where to look.
>
> should I use nic2 instead of nic1 when deploying hosted-engine?
>
> Any advice?
>
> Thanks
>
> Inle
>
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine 4.0 randomly stopping

2016-07-18 Thread Artyom Lukianov
Can you please provide agent.log from /var/log/ovirt-hosted-engine-ha?

On Mon, Jul 18, 2016 at 3:24 PM, Matt .  wrote:

> Hi,
>
> I see an odd behavour on the 4.0 hosted engine, it randomly stops
> running and needs a new start.
>
> What is the solution for this ? There seems to be some settings for
> the HA-broker to make which will fix it ?
>
> Any details are welcome.
>
> Thanks,
>
> Matt
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt-ha-agent keeps quitting - 4.0.0

2016-07-17 Thread Artyom Lukianov
We had a bug related to this issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1343005.
It should be fixed in recent versions.
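
If you want to verify whether you are still hitting the leak on your version,
a rough sketch (<PID> is a placeholder for the agent PID returned by pgrep):

# pgrep -f ovirt-ha-agent                    (find the agent PID)
# ls /proc/<PID>/fd | wc -l                  (count its open file descriptors)
# grep 'open files' /proc/<PID>/limits       (compare against the limit)

If the descriptor count keeps growing towards the limit, you are most likely
hitting the bug above.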
Best Regards

On Thu, Jul 14, 2016 at 8:14 PM, Gervais de Montbrun  wrote:

> Hey Folks,
>
> I upgraded my oVirt cluster from 3.6.7 to 4.0.0 yesterday and am
> experiencing a bunch of issues.
>
> 1) I can't update the Compatibility Version to 4.0 because it tells me
> that all my VMs have to be off to do so, but I have a hosted engine. I
> found some info online about how you plan to fix this. Do we know if the
> fix will be in 4.0.1?
>
> 2) More alarming... the ovirt-ha-agent keeps quitting. The agent.log shows:
>
> MainThread::ERROR::2016-07-13
> 16:38:57,100::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: '[Errno 24] Too many open files' - trying to restart agent
> MainThread::ERROR::2016-07-13
> 16:39:02,104::config::122::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_load)
> Configuration file '/etc/ovirt-hosted-engine/hosted-engine.conf' not
> available [[Errno 24] Too many open files:
> '/etc/ovirt-hosted-engine/hosted-engine.conf']
> MainThread::ERROR::2016-07-13
> 16:39:02,105::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: '[Errno 24] Too many open files' - trying to restart agent
> MainThread::ERROR::2016-07-13
> 16:39:07,110::agent::210::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Too many errors occurred, giving up. Please review the log and consider
> filing a bug.
> MainThread::ERROR::2016-07-13
> 17:44:03,499::hosted_engine::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Shutting down the agent because of 3 failures in a row!
> MainThread::ERROR::2016-07-13
> 17:44:03,515::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: '(24, 'Sanlock lockspace remove failure', 'Too many open files')' -
> trying to restart agent
> MainThread::ERROR::2016-07-13
> 17:44:08,520::config::122::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_load)
> Configuration file '/etc/ovirt-hosted-engine/hosted-engine.conf' not
> available [[Errno 24] Too many open files:
> '/etc/ovirt-hosted-engine/hosted-engine.conf']
> MainThread::ERROR::2016-07-13
> 17:44:08,523::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: '[Errno 24] Too many open files' - trying to restart agent
> MainThread::ERROR::2016-07-13
> 17:44:13,529::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: '[Errno 24] Too many open files' - trying to restart agent
> MainThread::ERROR::2016-07-13
> 17:44:18,535::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: '[Errno 24] Too many open files' - trying to restart agent
> MainThread::ERROR::2016-07-13
> 17:44:23,541::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: '[Errno 24] Too many open files' - trying to restart agent
> MainThread::ERROR::2016-07-13
> 17:44:28,546::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: '[Errno 24] Too many open files' - trying to restart agent
> MainThread::ERROR::2016-07-13
> 17:44:33,552::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: '[Errno 24] Too many open files' - trying to restart agent
> MainThread::ERROR::2016-07-13
> 17:44:38,556::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: '[Errno 24] Too many open files' - trying to restart agent
> MainThread::ERROR::2016-07-13
> 17:44:43,561::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: '[Errno 24] Too many open files' - trying to restart agent
> MainThread::ERROR::2016-07-13
> 17:44:48,566::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: '[Errno 24] Too many open files' - trying to restart agent
> MainThread::ERROR::2016-07-13
> 17:44:53,571::agent::210::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Too many errors occurred, giving up. Please review the log and consider
> filing a bug.
> MainThread::ERROR::2016-07-13
> 18:47:40,048::hosted_engine::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Shutting down the agent because of 3 failures in a row!
> MainThread::ERROR::2016-07-14
> 10:32:29,184::hosted_engine::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Shutting down the agent because of 3 failures in a row!
> MainThread::ERROR::2016-07-14
> 11:10:07,223::brokerlink::279::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(_communicate)
> Connection closed: Connection closed
> MainThread::ERROR::2016-07-14
> 11:10:07,224::brokerlink::148::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(get_monitor_status)
> Exception getting monitor status: Connection closed
> MainThread::ERROR::2016-07-14
> 11:10:07,224::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: 

Re: [ovirt-users] AIO UPG Error from 3.5 to 3.6

2015-11-04 Thread Artyom Lukianov
Run 'yum update --skip-broken' and afterwards run engine-setup.
engine-setup will download and upgrade all the necessary packages.
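
In other words, a minimal sketch of the upgrade sequence on the AIO host (run as root):

# yum update --skip-broken
# engine-setup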

- Original Message -
From: "Christian Rebel" 
To: users@ovirt.org
Sent: Wednesday, November 4, 2015 8:15:18 PM
Subject: [ovirt-users] AIO UPG Error from 3.5 to 3.6



Getting the following Error on an AIO Upgrade from 3.5 to 3.6, any ideas? 



--> Processing Dependency: vdsm-jsonrpc-java < 1.1.0 for package: 
ovirt-engine-backend-3.5.4.2-1.el7.centos.noarch 

--> Finished Dependency Resolution 



Error: Package: ovirt-engine-backend-3.5.4.2-1.el7.centos.noarch (@ovirt-3.5) 

Requires: vdsm-jsonrpc-java < 1.1.0 

Removing: vdsm-jsonrpc-java-1.0.15-1.el7.noarch (@ovirt-3.5) 

vdsm-jsonrpc-java = 1.0.15-1.el7 

Updated By: vdsm-jsonrpc-java-1.1.5-1.el7.centos.noarch (ovirt-3.6) 

vdsm-jsonrpc-java = 1.1.5-1.el7.centos 

Available: vdsm-jsonrpc-java-1.0.14-1.el7.noarch (ovirt-3.5) 

vdsm-jsonrpc-java = 1.0.14-1.el7 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] To start vm (runonce) with Rest API

2015-10-06 Thread Artyom Lukianov
You run a POST request on
https://your_rhevm_ip/ovirt-engine/api/vms/6ea131f6-814c-430f-9594-7ee214a44c94/start
with an XML body (the body was stripped when this message was archived).
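A hypothetical sketch of what such a run-once body can look like (element names are from
memory of the 3.x schema, so verify them against the RSDL, e.g. GET /ovirt-engine/api?rsdl,
before use):

POST /ovirt-engine/api/vms/<vm-id>/start
Content-Type: application/xml

<action>
  <vm>
    <os>
      <boot dev="cdrom"/>
    </os>
  </vm>
</action>

A floppy for the first boot is attached by adding a <payloads> element with a
<payload type="floppy"> child inside <vm>; the exact nesting of the file/content elements
changed between API versions, so check the RSDL for the release you run.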
- Original Message -
From: "Qingyun Ao" 
To: users@ovirt.org
Sent: Wednesday, September 30, 2015 9:20:51 AM
Subject: [ovirt-users] To start vm (runonce) with Rest API

Hello, all, 


Before starting a VM for the first time (run once), we can attach a floppy drive
to the VM. How can I compose the XML to do this with the oVirt REST API?


-- 
Best regards, 
Ao Qingyun 
aoqing...@gmail.com 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] unable to assign quota to users/groups

2015-08-27 Thread Artyom Lukianov
Doron, as I said, I have not encountered such behavior in 3.5.
Thanks

- Original Message -
From: Doron Fediuck dfedi...@redhat.com
To: Artyom Lukianov aluki...@redhat.com
Cc: Jorick Astrego j.astr...@netbulae.eu, users users@ovirt.org
Sent: Thursday, August 27, 2015 2:48:41 PM
Subject: Re: [ovirt-users] unable to assign quota to users/groups

Please open a BZ and attach a screenshot.

Thanks,
Doron

On Tue, Aug 25, 2015 at 5:45 PM, Artyom Lukianov aluki...@redhat.com
wrote:

 Not in 3.5; in 3.6 we have a REST API for quota. But it's really strange;
 can you attach a screenshot of how the details panel looks when you click on
 a specific quota?
 Thanks

 - Original Message -
 From: Jorick Astrego j.astr...@netbulae.eu
 To: users@ovirt.org
 Sent: Tuesday, August 25, 2015 1:06:34 PM
 Subject: [ovirt-users] unable to assign quota to users/groups

 In 3.5.3, I'm trying to set a quota for each user. But I cannot find how
 to do it anymore as the Quota consumers tab is not there in my 3.5.3
 install.


 17.8. Using Quota to Limit Resources by User
 Summary
 This procedure describes how to use quotas to limit the resources a user
 has access to.
 ⁠
 Procedure 17.5. Assigning a User to a Quota

 In the tree, click the Data Center with the quota you want to associate
 with a User.
 Click the Quota tab in the navigation pane.
 Select the target quota in the list in the navigation pane.
 Click the Consumers tab in the details pane.
 Click Add at the top of the details pane.
 In the Search field, type the name of the user you want to associate with
 the quota.
 Click GO.
 Select the check box at the left side of the row containing the name of
 the target user.
 Click OK in the bottom right of the Assign Users and Groups to Quota
 window.
 Result
 After a short time, the user will appear in the Consumers tab of the
 details pane.

 What happened to the details pane? Is there another way to assign a quota
 to users/groups?



 Met vriendelijke groet, With kind regards,

 Jorick Astrego

 Netbulae Virtualization Experts

 Tel: 053 20 30 270   i...@netbulae.eu   Staalsteden 4-3A   KvK 08198180
 Fax: 053 20 30 271   www.netbulae.eu    7547 TA Enschede   BTW NL821234584B01



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] unable to assign quota to users/groups

2015-08-25 Thread Artyom Lukianov
Not in 3.5; in 3.6 we have a REST API for quota. But it's really strange; can you
attach a screenshot of how the details panel looks when you click on a specific quota?
Thanks

- Original Message -
From: Jorick Astrego j.astr...@netbulae.eu
To: users@ovirt.org
Sent: Tuesday, August 25, 2015 1:06:34 PM
Subject: [ovirt-users] unable to assign quota to users/groups

In 3.5.3, I'm trying to set a quota for each user. But I cannot find how to do 
it anymore as the Quota consumers tab is not there in my 3.5.3 install. 


17.8. Using Quota to Limit Resources by User 
Summary 
This procedure describes how to use quotas to limit the resources a user has 
access to. 
⁠ 
Procedure 17.5. Assigning a User to a Quota 

In the tree, click the Data Center with the quota you want to associate with a 
User. 
Click the Quota tab in the navigation pane. 
Select the target quota in the list in the navigation pane. 
Click the Consumers tab in the details pane. 
Click Add at the top of the details pane. 
In the Search field, type the name of the user you want to associate with the 
quota. 
Click GO. 
Select the check box at the left side of the row containing the name of the 
target user. 
Click OK in the bottom right of the Assign Users and Groups to Quota window. 
Result 
After a short time, the user will appear in the Consumers tab of the details 
pane. 

What happened to the details pane? Is there another way to assign a quota to 
users/groups? 



Met vriendelijke groet, With kind regards, 

Jorick Astrego 

Netbulae Virtualization Experts 

Tel: 053 20 30 270   i...@netbulae.eu   Staalsteden 4-3A   KvK 08198180
Fax: 053 20 30 271   www.netbulae.eu    7547 TA Enschede   BTW NL821234584B01



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] CPU Threads Help!!!

2015-08-12 Thread Artyom Lukianov
Can you please provide the output of lscpu for your host?
Thanks
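
For reference, the topology fields that matter for this can be pulled out like so (a
generic sketch, not an oVirt-specific command):

# lscpu | grep -E '^CPU\(s\)|Thread|Core|Socket'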

- Original Message -
From: zhangjian2011 zhangjian2...@cn.fujitsu.com
To: users@ovirt.org
Sent: Wednesday, August 12, 2015 1:06:11 PM
Subject: [ovirt-users] CPU Threads Help!!!



Hi guys,

Recently I have been investigating the cluster Optimization Policy and am trying
to use “CPU Threads”.

As the manual describes:

For example, a 24-core system with 2 threads per core (48 threads total) can run
virtual machines with up to 48 cores each.

My host CPU is an i3-2120 (2 cores, 4 threads).

So I think the expected results are:

1. If “Count Threads As Cores” is disabled, we cannot run a VM with more than 2 vCPUs.

2. If “Count Threads As Cores” is enabled, we cannot run a VM with more than 4 vCPUs.

Now the problem is with expected result 1:

1. When I set “Count Threads As Cores” to disabled, I can still run a VM with 4 vCPUs.

Can anyone help me explain this?

Regards, 
Jian 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Data Center becomes Non Responsive when I reboot a host

2015-08-04 Thread Artyom Lukianov
Maybe I am mistaken, but from the logs it looks like the host where you stopped the
network is also the SPM host, so can you try right-clicking on the host and choosing
'Confirm Host has been Rebooted'?
If that does not help, maybe someone from the devs can help us with this error from
the engine log:
2015-08-03 10:54:29,044 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand] 
(DefaultQuartzScheduler_Worker-35) Command SpmStatusVDSCommand(HostName = 
ovhv00.mytld, HostId = 0759994d-a704-4374-b6fa-d8c62f46760a, storagePoolId = 
0002-0002-0002-0002-01ac) execution failed. Exception: 
VDSErrorException: VDSGenericException: VDSErrorException: Failed to 
SpmStatusVDS, error = (107, 'Sanlock resource read failure', 'Transport 
endpoint is not connected'), code = 100
Thanks

- Original Message -
From: Konstantinos Christidis kochr...@ekt.gr
To: Artyom Lukianov aluki...@redhat.com
Cc: users@ovirt.org
Sent: Monday, August 3, 2015 11:20:21 AM
Subject: Re: [ovirt-users] Data Center becomes Non Responsive when I reboot a   
host

Hello,

Sorry for my late response. I reproduced the error in a lab environment 
(oVirt3.5/CentOS7.1) with 2 hosts (ovhv00 ovhv01) and a replicated 
glusterfs.
I activated the maintenance mode in host ovhv01 and then I stopped 
network.service (instead of a reboot).
The result is always the same. Data Center becomes Non Responsive, my 
storage becomes red and inactive, and most VMs become paused due to 
unknown storage error.

This is the engine log
https://paste.fedoraproject.org/250877/58925314/raw/

Thanks,

K.



On 07/29/2015 12:21 PM, Artyom Lukianov wrote:
 Can you please provide engine log(/var/log/ovirt-engine/engine.log)?

 - Original Message -
 From: Konstantinos Christidis kochr...@ekt.gr
 To: users@ovirt.org
 Cc: Artyom Lukianov aluki...@redhat.com
 Sent: Wednesday, July 29, 2015 9:40:26 AM
 Subject: Re: [ovirt-users] Data Center becomes Non Responsive when I reboot a 
 host

 Maintenance mode is already enabled. All VMs finish migration successfully.
 Now I stop glusterd service on this host (systemctl stop
 glusterd.service) and nothing bad happens, which means that distributed
 replica glusterfs works fine.
 Then I stop vdsmd service (systemctl stop vdsmd.service) and everything
 works fine.
 When I administratively set ovirtmgmt network down or reboot this host,
 my Data Center becomes Non Responsive, my storage becomes red and
 inactive, and most VMs become paused due to unknown storage error.

 K.




 On 07/28/2015 06:09 PM, Artyom Lukianov wrote:
 Just put host to maintenance mode, if it have vms it will migrate them 
 automatically on other host.

 - Original Message -
 From: Konstantinos Christidis kochr...@ekt.gr
 To: users@ovirt.org
 Sent: Tuesday, July 28, 2015 1:15:15 PM
 Subject: [ovirt-users] Data Center becomes Non Responsive when I reboot a
 host

 Hello ovirt users,

 I have 4 hosts with a distributed replicated 2x2 GlusterFS storage.
 (oVirt3.5/CentOS7)

 When I reboot a host (in maintenance mode and not my SPM host) my Data
 Center becomes Non Responsive, my storage becomes red and inactive,
 and many VMs become paused due to unknown storage error. The same
 happens if I administratively set ovirtmgmt network down (to a host in
 maintenance mode and not my SPM host) with ifconfig ovirtmgmt down.
 I know that management network (ovirtmgmt) is required by default and is
 part of oVirt monitoring process but is there anything I can do in order
 to reboot a host without causing this mess?

 Thanks,

 K.
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Data Center becomes Non Responsive when I reboot a host

2015-07-29 Thread Artyom Lukianov
Can you please provide engine log(/var/log/ovirt-engine/engine.log)?

- Original Message -
From: Konstantinos Christidis kochr...@ekt.gr
To: users@ovirt.org
Cc: Artyom Lukianov aluki...@redhat.com
Sent: Wednesday, July 29, 2015 9:40:26 AM
Subject: Re: [ovirt-users] Data Center becomes Non Responsive when I reboot a   
host

Maintenance mode is already enabled. All VMs finish migration successfully.
Now I stop glusterd service on this host (systemctl stop 
glusterd.service) and nothing bad happens, which means that distributed 
replica glusterfs works fine.
Then I stop vdsmd service (systemctl stop vdsmd.service) and everything 
works fine.
When I administratively set ovirtmgmt network down or reboot this host, 
my Data Center becomes Non Responsive, my storage becomes red and 
inactive, and most VMs become paused due to unknown storage error.

K.




On 07/28/2015 06:09 PM, Artyom Lukianov wrote:
 Just put host to maintenance mode, if it have vms it will migrate them 
 automatically on other host.

 - Original Message -
 From: Konstantinos Christidis kochr...@ekt.gr
 To: users@ovirt.org
 Sent: Tuesday, July 28, 2015 1:15:15 PM
 Subject: [ovirt-users] Data Center becomes Non Responsive when I reboot a 
 host

 Hello ovirt users,

 I have 4 hosts with a distributed replicated 2x2 GlusterFS storage.
 (oVirt3.5/CentOS7)

 When I reboot a host (in maintenance mode and not my SPM host) my Data
 Center becomes Non Responsive, my storage becomes red and inactive,
 and many VMs become paused due to unknown storage error. The same
 happens if I administratively set ovirtmgmt network down (to a host in
 maintenance mode and not my SPM host) with ifconfig ovirtmgmt down.
 I know that management network (ovirtmgmt) is required by default and is
 part of oVirt monitoring process but is there anything I can do in order
 to reboot a host without causing this mess?

 Thanks,

 K.
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Data Center becomes Non Responsive when I reboot a host

2015-07-28 Thread Artyom Lukianov
Just put the host into maintenance mode; if it has VMs, it will migrate them
automatically to another host.

- Original Message -
From: Konstantinos Christidis kochr...@ekt.gr
To: users@ovirt.org
Sent: Tuesday, July 28, 2015 1:15:15 PM
Subject: [ovirt-users] Data Center becomes Non Responsive when I reboot a   
host

Hello ovirt users,

I have 4 hosts with a distributed replicated 2x2 GlusterFS storage. 
(oVirt3.5/CentOS7)

When I reboot a host (in maintenance mode and not my SPM host) my Data 
Center becomes Non Responsive, my storage becomes red and inactive, 
and many VMs become paused due to unknown storage error. The same 
happens if I administratively set ovirtmgmt network down (to a host in 
maintenance mode and not my SPM host) with ifconfig ovirtmgmt down.
I know that management network (ovirtmgmt) is required by default and is 
part of oVirt monitoring process but is there anything I can do in order 
to reboot a host without causing this mess?

Thanks,

K.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] This VM is not managed by the engine

2015-07-12 Thread Artyom Lukianov
Also, please provide the vdsm log from the host (/var/log/vdsm/vdsm.log) for further
investigation.
Thanks

- Original Message -
From: Mark Steele mste...@telvue.com
To: Roy Golan rgo...@redhat.com
Cc: Artyom Lukianov aluki...@redhat.com, users@ovirt.org
Sent: Sunday, July 12, 2015 4:21:34 PM
Subject: Re: [ovirt-users] This VM is not managed by the engine

[root@hv-02 etc]# vdsClient -s 0 destroy
41703d5c-6cdb-42b4-93df-d78be2776e2b

Unexpected exception

Not sure I'm getting any closer :-)




***
*Mark Steele*
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
800.885.8886 x128 | mste...@telvue.com | http://www.telvue.com
twitter: http://twitter.com/telvue | facebook:
https://www.facebook.com/telvue

On Sun, Jul 12, 2015 at 9:13 AM, Mark Steele mste...@telvue.com wrote:

 OK - I think I'm getting closer - now that I'm on the correct box.

 here is the output of the vdsClient command - which is the device id - the
 first line?

 [root@hv-02 etc]# vdsClient -s 0 list

 41703d5c-6cdb-42b4-93df-d78be2776e2b
 Status = Up
 acpiEnable = true
 emulatedMachine = rhel6.5.0
 afterMigrationStatus =
 pid = 27304
 memGuaranteedSize = 2048
 transparentHugePages = true
 displaySecurePort = 5902
 spiceSslCipherSuite = DEFAULT
 cpuType = SandyBridge
 smp = 2
 numaTune = {'nodeset': '0,1', 'mode': 'interleave'}
 custom =
 {'device_86f1aa5a-aa3f-4e47-b546-aafcc86fcbb6device_ebd4c73d-12c4-435e-8cc5-f180d8f20a72':
 'VmDevice {vmId=41703d5c-6cdb-42b4-93df-d78be2776e2b,
 deviceId=ebd4c73d-12c4-435e-8cc5-f180d8f20a72, device=unix, type=CHANNEL,
 bootOrder=0, specParams={}, address={bus=0, controller=0,
 type=virtio-serial, port=2}, managed=false, plugged=true, readOnly=false,
 deviceAlias=channel1, customProperties={}, snapshotId=null}',
 'device_86f1aa5a-aa3f-4e47-b546-aafcc86fcbb6device_ebd4c73d-12c4-435e-8cc5-f180d8f20a72device_ffd2796f-7644-4008-b920-5f0970b0ef0e':
 'VmDevice {vmId=41703d5c-6cdb-42b4-93df-d78be2776e2b,
 deviceId=ffd2796f-7644-4008-b920-5f0970b0ef0e, device=unix, type=CHANNEL,
 bootOrder=0, specParams={}, address={bus=0, controller=0,
 type=virtio-serial, port=1}, managed=false, plugged=true, readOnly=false,
 deviceAlias=channel0, customProperties={}, snapshotId=null}',
 'device_86f1aa5a-aa3f-4e47-b546-aafcc86fcbb6': 'VmDevice
 {vmId=41703d5c-6cdb-42b4-93df-d78be2776e2b,
 deviceId=86f1aa5a-aa3f-4e47-b546-aafcc86fcbb6, device=ide, type=CONTROLLER,
 bootOrder=0, specParams={}, address={slot=0x01, bus=0x00, domain=0x,
 type=pci, function=0x1}, managed=false, plugged=true, readOnly=false,
 deviceAlias=ide0, customProperties={}, snapshotId=null}',
 'device_86f1aa5a-aa3f-4e47-b546-aafcc86fcbb6device_ebd4c73d-12c4-435e-8cc5-f180d8f20a72device_ffd2796f-7644-4008-b920-5f0970b0ef0edevice_6693d023-9c1f-433c-870e-e9771be8474b':
 'VmDevice {vmId=41703d5c-6cdb-42b4-93df-d78be2776e2b,
 deviceId=6693d023-9c1f-433c-870e-e9771be8474b, device=spicevmc,
 type=CHANNEL, bootOrder=0, specParams={}, address={bus=0, controller=0,
 type=virtio-serial, port=3}, managed=false, plugged=true, readOnly=false,
 deviceAlias=channel2, customProperties={}, snapshotId=null}'}
 vmType = kvm
 memSize = 2048
 smpCoresPerSocket = 1
 vmName = connect-turbo-stage-03
 nice = 0
 bootMenuEnable = false
 copyPasteEnable = true
 displayIp = 10.1.90.161
 displayPort = -1
 smartcardEnable = false
 clientIp =
 fileTransferEnable = true
 nicModel = rtl8139,pv
 keyboardLayout = en-us
 kvmEnable = true
 pitReinjection = false
 displayNetwork = ovirtmgmt
 devices = [{'target': 2097152, 'specParams': {'model': 'none'}, 'alias':
 'balloon0', 'deviceType': 'balloon', 'device': 'memballoon', 'type':
 'balloon'}, {'device': 'unix', 'alias': 'channel0', 'address': {'bus': '0',
 'controller': '0', 'type': 'virtio-serial', 'port': '1'}, 'deviceType':
 'channel', 'type': 'channel'}, {'device': 'unix', 'alias': 'channel1',
 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port':
 '2'}, 'deviceType': 'channel', 'type': 'channel'}, {'device': 'spicevmc',
 'alias': 'channel2', 'address': {'bus': '0', 'controller': '0', 'type':
 'virtio-serial', 'port': '3'}, 'deviceType': 'channel', 'type': 'channel'},
 {'index': '0', 'alias': 'scsi0', 'specParams': {}, 'deviceType':
 'controller', 'deviceId': '88db8cb9-0960-4797-bd41-1694bf14b8a9',
 'address': {'slot': '0x04', 'bus': '0x00', 'domain': '0x', 'type':
 'pci', 'function': '0x0'}, 'device': 'scsi', 'model': 'virtio-scsi',
 'type': 'controller'}, {'alias': 'virtio-serial0', 'specParams': {},
 'deviceType': 'controller', 'deviceId':
 '4bb9c112-e027-4e7d-8b1c-32f99c7040ee', 'address': {'slot': '0x05', 'bus':
 '0x00', 'domain': '0x', 'type': 'pci', 'function': '0x0'}, 'device':
 'virtio-serial', 'type': 'controller'}, {'device': 'usb', 'alias': 'usb0',
 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x', 'type':
 'pci', 'function': '0x2'}, 'deviceType': 'controller', 'type':
 'controller'}, {'device': 'ide', 'alias': 'ide0', 'address': {'slot':
 '0x01', 'bus

Re: [ovirt-users] This VM is not managed by the engine

2015-07-09 Thread Artyom Lukianov
Can you see via the engine on which host the VM runs?
Anyway, if the VM really is running on a host, you can try to find it with 'ps aux
| grep qemu'; if that returns a process, you can just kill the process via
'kill pid'.
I hope this helps.
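
Concretely, a sketch of that sequence on the host (the VM name and PID are placeholders):

# ps aux | grep qemu | grep <vm-name>    # find the VM's qemu process
# kill <pid>                             # ask it to terminate
# kill -9 <pid>                          # only if it refuses to exit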

- Original Message -
From: Mark Steele mste...@telvue.com
To: Artyom Lukianov aluki...@redhat.com
Cc: users@ovirt.org
Sent: Thursday, July 9, 2015 5:42:20 PM
Subject: Re: [ovirt-users] This VM is not managed by the engine

Artyom,

Thank you - I don't have vdsClient installed - can you point me to the
download?


***
*Mark Steele*
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
800.885.8886 x128 | mste...@telvue.com | http://www.telvue.com
twitter: http://twitter.com/telvue | facebook:
https://www.facebook.com/telvue

On Thu, Jul 9, 2015 at 10:11 AM, Artyom Lukianov aluki...@redhat.com
wrote:

 Please check host where VM run(vdsClient -s 0 list table), and you can
 destroy it via vdsClient(vdsClient -s 0 destroy vm_id).
 Thanks

 - Original Message -
 From: Mark Steele mste...@telvue.com
 To: users@ovirt.org
 Sent: Thursday, July 9, 2015 4:38:32 PM
 Subject: [ovirt-users] This VM is not managed by the engine

 I have a VM that was not started and is now showing as running. When I
 attempt to suspend or stop it in the ovirt-shell, I get the message:

 status: 400
 reason: bad request
 detail: Cannot hibernate VM. This VM is not managed by the engine.

 Not sure how the VM was initially created on the ovirt manager. This VM is
 not needed - how can I 'shutdown' and remove this VM?

 Thanks

 ***
 Mark Steele
 CIO / VP Technical Operations | TelVue Corporation
 TelVue - We Share Your Vision
 800.885.8886 x128 | mste...@telvue.com | http://www.telvue.com
 twitter: http://twitter.com/telvue | facebook:
 https://www.facebook.com/telvue

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] This VM is not managed by the engine

2015-07-09 Thread Artyom Lukianov
Please check the host where the VM runs (vdsClient -s 0 list table), and you can destroy
it via vdsClient (vdsClient -s 0 destroy vm_id).
Thanks

- Original Message -
From: Mark Steele mste...@telvue.com
To: users@ovirt.org
Sent: Thursday, July 9, 2015 4:38:32 PM
Subject: [ovirt-users] This VM is not managed by the engine

I have a VM that was not started and is now showing as running. When I attempt 
to suspend or stop it in the ovirt-shell, I get the message: 

status: 400 
reason: bad request 
detail: Cannot hibernate VM. This VM is not managed by the engine. 

Not sure how the VM was initially created on the ovirt manager. This VM is not 
needed - how can I 'shutdown' and remove this VM? 

Thanks 

*** 
Mark Steele 
CIO / VP Technical Operations | TelVue Corporation 
TelVue - We Share Your Vision 
800.885.8886 x128 | mste...@telvue.com | http://www.telvue.com 
twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Nested KVM on AMD

2015-04-21 Thread Artyom Lukianov
I enabled nested virtualization via:
1) echo options kvm-amd nested=1 > /etc/modprobe.d/kvm-amd.conf
2) modprobe -r kvm-amd
3) modprobe kvm-amd

After that I can see the svm flag on the VM's CPU, but for some reason I still receive
the same error "No virtualization Hardware was detected" when I try to deploy the VM as
a host in the engine.

The same scenario for Intel CPUs works fine and I can use nested virtualization:
1) echo options kvm-intel nested=1 > /etc/modprobe.d/kvm-intel.conf
2) modprobe -r kvm-intel
3) modprobe kvm-intel

So it looks like a problem in host deploy with AMD CPUs; maybe Alon Bar-Lev can
help with it.
Also check your kernel; it must be >= 3.10.
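
For reference, a quick sanity-check sketch after reloading the module (use kvm_amd or
kvm_intel as appropriate for the host):

# cat /sys/module/kvm_amd/parameters/nested    # should report 1 (or Y)
# uname -r                                     # kernel must be >= 3.10
# grep -Ec 'svm|vmx' /proc/cpuinfo             # run inside the nested VM, must be non-zero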

- Original Message -
From: Winfried de Heiden w...@dds.nl
To: users@ovirt.org
Sent: Tuesday, April 21, 2015 10:45:06 AM
Subject: [ovirt-users] Nested KVM on AMD

Hi all, 

For testing purposes I installed vdsm-hook-nestedvt: 

rpm -qi vdsm-hook-nestedvt.noarch 

Name : vdsm-hook-nestedvt Relocations: (not relocatable) 
Version : 4.16.10 Vendor: (none) 
Release : 8.gitc937927.el6 Build Date: ma 12 jan 2015 13:21:31 CET 
Install Date: do 16 apr 2015 15:37:00 CEST Build Host: fc21-vm03.phx.ovirt.org 
Group : Applications/System Source RPM: vdsm-4.16.10-8.gitc937927.el6.src.rpm 
Size : 1612 License: GPLv2+ 
Signature : RSA/SHA1, wo 21 jan 2015 15:32:39 CET, Key ID ab8c4f9dfe590cb7 
URL : http://www.ovirt.org/wiki/Vdsm 
Summary : Nested Virtualization support for VDSM 
Description : 
If the nested virtualization is enabled in your kvm module 
this hook will expose it to the guests. 

Installation looks fine on the oVirt (AMD cpu) KVM-host: 

[root@bigvirt ~]# cat /sys/module/kvm_amd/parameters/nested 
1 

and in oVirt manager 50_nestedvt will show up in Host Hooks. However, trying 
to install oVirt Node Hypervisor 3.5 it will warn No virtualization Hardware 
was detected. Also, the svm flag is not shown on the guest machine. 
Am I missing something? Why is nested kvm not working? 

Winfried 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Migration failed, No available host found

2015-04-06 Thread Artyom Lukianov
The engine tried to migrate the VM to some available host, but the migration failed, so
the engine tried another host. For some reason the migration failed on all hosts:
(org.ovirt.thread.pool-8-thread-38) [71f97a52] Command 
MigrateStatusVDSCommand(HostName = virt2, HostId = 
1d1d1fbb-3067-4703-8b51-e0a231d344e6, 
vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af) execution failed. Exception: 
VDSErrorException: VDSGenericException: VDSErrorException: Failed to 
MigrateStatusVDS, error = Fatal error during migration, code = 12

For further investigation we need the vdsm logs (/var/log/vdsm/vdsm.log) from the source
and also from the destination hosts.
Thanks
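
When collecting them, a quick way to pull out the relevant lines is (a generic sketch):

# grep -iE 'migrat|libvirtError' /var/log/vdsm/vdsm.log | tail -n 50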


- Original Message -
From: Jason Keltz j...@cse.yorku.ca
To: users users@ovirt.org
Sent: Monday, April 6, 2015 3:47:23 PM
Subject: [ovirt-users] Migration failed, No available host found

Hi.

I have 3 nodes in one cluster and 1 VM running on node2.  I'm trying to 
move the VM to node 1 or node 3, and it fails with the error: Migration 
failed, No available host found

I'm unable to decipher engine.log to determine the cause of the 
problem.  Below is what seems to be the relevant lines from the log.  
Any help would be appreciated.

Thank you!

Jason.

---

2015-04-06 08:31:56,554 INFO 
[org.ovirt.engine.core.bll.MigrateVmCommand] (ajp--127.0.0.1-8702-5) 
[3b191496] Lock Acquired to object EngineLock [exclusiveLocks= key: 
9de649ca-c9a9-4ba7-bb2c-61c44e2819af value: VM
, sharedLocks= ]
2015-04-06 08:31:56,686 INFO 
[org.ovirt.engine.core.bll.MigrateVmCommand] 
(org.ovirt.thread.pool-8-thread-20) [3b191496] Running command: 
MigrateVmCommand internal: false. Entities affected :  ID: 
9de649ca-c9a9-4ba7-bb2c-61c44e2819af Type: VMAction group MIGRATE_VM 
with role type USER,  ID: 9de649ca-c9a9-4ba7-bb2c-61c44e2819af Type: 
VMAction group EDIT_VM_PROPERTIES with role type USER,  ID: 
8d432949-e03c-4950-a91a-160727f7bdf2 Type: VdsGroupsAction group 
CREATE_VM with role type USER
2015-04-06 08:31:56,703 INFO 
[org.ovirt.engine.core.bll.scheduling.policyunits.HaReservationWeightPolicyUnit]
 
(org.ovirt.thread.pool-8-thread-20) [3b191496] Started HA reservation 
scoring method
2015-04-06 08:31:56,727 INFO 
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] 
(org.ovirt.thread.pool-8-thread-20) [3b191496] START, 
MigrateVDSCommand(HostName = virt2, HostId = 
1d1d1fbb-3067-4703-8b51-e0a231d344e6, 
vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af, srcHost=192.168.0.35, 
dstVdsId=3429b1fc-36d5-4078-831c-a5b4370a8bfc, 
dstHost=192.168.0.36:54321, migrationMethod=ONLINE, 
tunnelMigration=false, migrationDowntime=0), log id: 7555acbd
2015-04-06 08:31:56,728 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(org.ovirt.thread.pool-8-thread-20) [3b191496] START, 
MigrateBrokerVDSCommand(HostName = virt2, HostId = 
1d1d1fbb-3067-4703-8b51-e0a231d344e6, 
vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af, srcHost=192.168.0.35, 
dstVdsId=3429b1fc-36d5-4078-831c-a5b4370a8bfc, 
dstHost=192.168.0.36:54321, migrationMethod=ONLINE, 
tunnelMigration=false, migrationDowntime=0), log id: 6d98fb94
2015-04-06 08:31:56,734 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(org.ovirt.thread.pool-8-thread-20) [3b191496] FINISH, 
MigrateBrokerVDSCommand, log id: 6d98fb94
2015-04-06 08:31:56,769 INFO 
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] 
(org.ovirt.thread.pool-8-thread-20) [3b191496] FINISH, 
MigrateVDSCommand, return: MigratingFrom, log id: 7555acbd
2015-04-06 08:31:56,778 INFO 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-8-thread-20) [3b191496] Correlation ID: 3b191496, 
Job ID: 0f8c2d21-201e-454f-9876-dce9a1ca56fd, Call Stack: null, Custom 
Event ID: -1, Message: Migration started (VM: nindigo, Source: virt2, 
Destination: virt3, User: admin@internal).
2015-04-06 08:33:17,633 INFO 
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-35) [71f97a52] VM nindigo 
9de649ca-c9a9-4ba7-bb2c-61c44e2819af moved from MigratingFrom -- Up
2015-04-06 08:33:17,633 INFO 
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-35) [71f97a52] Adding VM 
9de649ca-c9a9-4ba7-bb2c-61c44e2819af to re-run list
2015-04-06 08:33:17,661 ERROR 
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-35) [71f97a52] Rerun vm 
9de649ca-c9a9-4ba7-bb2c-61c44e2819af. Called from vds virt2
2015-04-06 08:33:17,666 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(org.ovirt.thread.pool-8-thread-38) [71f97a52] START, 
MigrateStatusVDSCommand(HostName = virt2, HostId = 
1d1d1fbb-3067-4703-8b51-e0a231d344e6, 
vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af), log id: 6c3c9923
2015-04-06 08:33:17,669 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(org.ovirt.thread.pool-8-thread-38) [71f97a52] Failed in 
MigrateStatusVDS method
2015-04-06 08:33:17,670 INFO 

Re: [ovirt-users] running vm when its configured memory is bigger than host memory

2015-03-25 Thread Artyom Lukianov
As far as I know, in 3.6 we will have the possibility of memory
hotplug (http://www.ovirt.org/Features/Memory_Hotplug), so you can increase VM
memory without stopping the VM. So I believe your colleague can try to implement an
external load balancing
module (http://www.ovirt.org/Features/oVirt_External_Scheduler):
1) Run the VMs
2) If one of the tasks fails to run because of memory limitations on its host, migrate
it to a host with enough memory
3) Hotplug the VM memory to the new size

I hope this helps.
Thanks

- Original Message -
From: Jiří Sléžka jiri.sle...@slu.cz
To: users@ovirt.org
Sent: Tuesday, March 24, 2015 6:04:16 PM
Subject: [ovirt-users] running vm when its configured memory is bigger than 
host memory

Hello,

my colleague uses oVirt 3.5 for scientific purposes and has a question.
As far as I know, he needs to run a virtual machine with a huge amount of
over-allocated memory (more than one host has) and, when it is really needed,
migrate it to a host with much more memory.

It looks to me like nice use case for oVirt.

But here is his own question.

 Currently, the virtual machine of given memory size S cannot be run
 on the host with less than S physical memory.

 We need to run several virtual machines (with unpredictable memory
 requirements) on the cluster consisting of several different hosts
 (with different amount of physical memory) in such the way that any
 virtual machine can be run on any host.

 Due to Ovirt limitation, virtual machines memory sizes has to be set
 to the MINIMUM of host physical memory sizes (in order to be able to
 run any virtual machine on any host). As far as I know, this rule has
 no connection to cluster's Memory optimization 'Max Memory Over
 Commitment' parameter.

 But we cannot predict memory needs of our virtual machines so we need
 to set the memory size of all of them to the MAXIMUM of host's
 physical memory sizes.

 Explanation:

 We are running several computational tasks (every one on single
 independent virtual machine). We have several (and different) host
 machines (see Figure 1).

 1. At the beginning, every task consumes a decent amount of memory.

 2. After a while, some task(s) allocate a huge amount of memory
 (Figure 2). At this moment, some of them cannot continue (due to
 unavailable memory on its current host) without migration to the host
 with higher memory available.

 3. After migration (Figure 3), all tasks may continue.

 4. Some tasks finally consumes a LOT of memory (Figure 4).

 The algorithm above cannot be realized, since every virtual machine
 (i.e. task) has predefined (and fixed  when running) its memory size
 set to the MINIMUM of hosts physical memory sizes.


Thanks in advance

Jiri Slezka


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted-Engine --vm-status results

2015-03-18 Thread Artyom Lukianov
At the moment hosted-engine does not support cleanly removing hosts from the HE
environment; we have a PRD bug for this issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1136009. You can edit the metadata on the
HE storage and remove the inactive hosts, but it is better to ask the developers for
the correct way to do this.

- Original Message -
From: Filipe Guarino guari...@gmail.com
To: users@ovirt.org
Sent: Monday, March 9, 2015 1:03:03 AM
Subject: [ovirt-users] Hosted-Engine --vm-status results

Hello guys 
I installed oVirt using the hosted-engine procedure with six physical hosts and
more than 60 VMs, and until now everything is OK and my environment works fine.
I decided to use some of my hosts for other tasks, so I removed four of
my six hosts and took them out of my environment.
After a few days, my second host (hosted_engine_2) started to fail. It's a hardware
issue: its 10GbE interface stopped working. I decided to put my host 4 in as the second
hosted_engine_2.
It works fine, but when I use the command hosted-engine --vm-status, it still
returns all of the old members of the hosted-engine setup (1 to 6).
How can I fix it so that only the active nodes are listed?
See below the output of my hosted-engine --vm-status:



[root@bmh0001 ~]# hosted-engine --vm-status 

--== Host 1 status ==-- 

Status up-to-date : True 
Hostname : bmh0001.place.brazil 
Host ID : 1 
Engine status : {reason: vm not running on this host, health: bad, 
vm: down, detail: unknown} 
Score : 2400 
Local maintenance : False 
Host timestamp : 68830 
Extra metadata (valid at timestamp): 
metadata_parse_version=1 
metadata_feature_version=1 
timestamp=68830 (Sun Mar 8 17:38:05 2015) 
host-id=1 
score=2400 
maintenance=False 
state=EngineDown 


--== Host 2 status ==-- 

Status up-to-date : True 
Hostname : bmh0004.place.brazil 
Host ID : 2 
Engine status : {health: good, vm: up, detail: up} 
Score : 2400 
Local maintenance : False 
Host timestamp : 2427 
Extra metadata (valid at timestamp): 
metadata_parse_version=1 
metadata_feature_version=1 
timestamp=2427 (Sun Mar 8 17:38:09 2015) 
host-id=2 
score=2400 
maintenance=False 
state=EngineUp 


--== Host 3 status ==-- 

Status up-to-date : False 
Hostname : bmh0003.place.brazil 
Host ID : 3 
Engine status : unknown stale-data 
Score : 0 
Local maintenance : True 
Host timestamp : 331389 
Extra metadata (valid at timestamp): 
metadata_parse_version=1 
metadata_feature_version=1 
timestamp=331389 (Tue Mar 3 14:48:25 2015) 
host-id=3 
score=0 
maintenance=True 
state=LocalMaintenance 


--== Host 4 status ==-- 

Status up-to-date : False 
Hostname : bmh0004.place.brazil 
Host ID : 4 
Engine status : unknown stale-data 
Score : 0 
Local maintenance : True 
Host timestamp : 364358 
Extra metadata (valid at timestamp): 
metadata_parse_version=1 
metadata_feature_version=1 
timestamp=364358 (Tue Mar 3 16:10:36 2015) 
host-id=4 
score=0 
maintenance=True 
state=LocalMaintenance 


--== Host 5 status ==-- 

Status up-to-date : False 
Hostname : bmh0005.place.brazil 
Host ID : 5 
Engine status : unknown stale-data 
Score : 0 
Local maintenance : True 
Host timestamp : 241930 
Extra metadata (valid at timestamp): 
metadata_parse_version=1 
metadata_feature_version=1 
timestamp=241930 (Fri Mar 6 09:40:31 2015) 
host-id=5 
score=0 
maintenance=True 
state=LocalMaintenance 


--== Host 6 status ==-- 

Status up-to-date : False 
Hostname : bmh0006.place.brazil 
Host ID : 6 
Engine status : unknown stale-data 
Score : 0 
Local maintenance : True 
Host timestamp : 77376 
Extra metadata (valid at timestamp): 
metadata_parse_version=1 
metadata_feature_version=1 
timestamp=77376 (Wed Mar 4 09:11:17 2015) 
host-id=6 
score=0 
maintenance=True 
state=LocalMaintenance 
[root@bmh0001 ~]# hosted-engine --vm-status 


--== Host 1 status ==-- 

Status up-to-date : True 
Hostname : bmh0001.place.brazil 
Host ID : 1 
Engine status : {reason: bad vm status, health: bad, vm: down, 
detail: down} 
Score : 2400 
Local maintenance : False 
Host timestamp : 68122 
Extra metadata (valid at timestamp): 
metadata_parse_version=1 
metadata_feature_version=1 
timestamp=68122 (Sun Mar 8 17:26:16 2015) 
host-id=1 
score=2400 
maintenance=False 
state=EngineStarting 


--== Host 2 status ==-- 

Status up-to-date : True 
Hostname : bmh0004.place.brazil 
Host ID : 2 
Engine status : {reason: bad vm status, health: bad, vm: up, 
detail: powering up} 
Score : 2400 
Local maintenance : False 
Host timestamp : 1719 
Extra metadata (valid at timestamp): 
metadata_parse_version=1 
metadata_feature_version=1 
timestamp=1719 (Sun Mar 8 17:26:21 2015) 
host-id=2 
score=2400 
maintenance=False 
state=EngineStarting 


--== Host 3 status ==-- 

Status up-to-date : False 
Hostname : bmh0003.place.brazil 
Host ID : 3 
Engine status : unknown stale-data 
Score : 0 
Local maintenance : True 
Host timestamp : 331389 
Extra metadata (valid at timestamp): 
metadata_parse_version=1 
metadata_feature_version=1 
timestamp=331389 (Tue Mar 3 14:48:25 2015) 
host-id=3 
score=0 

Re: [ovirt-users] Cannot run VM. Memory size exceeds supported limit for given cluster version.

2015-02-17 Thread Artyom Lukianov
Can you please provide the version of the cluster where you try to run the VM and also
which OS type you chose under the VM properties?
Thanks
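
Note that the 64 GB failure point matches the VM32BitMaxMemorySizeInMB value of 65536 in
the engine-config output quoted below, which is why the OS type chosen for the VM matters.
If the configured limit itself ever needs to be raised, here is a sketch of the
engine-config invocation (the value is a placeholder, and the engine service must be
restarted afterwards):

# engine-config -s VM64BitMaxMemorySizeInMB=8388608 --cver=3.5
# systemctl restart ovirt-engine    # or 'service ovirt-engine restart' on el6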

- Original Message -
From: Punit Dambiwal hypu...@gmail.com
To: users@ovirt.org, Martin Pavlik mpav...@redhat.com, Kanagaraj 
kmayi...@redhat.com
Sent: Tuesday, February 17, 2015 4:00:32 AM
Subject: [ovirt-users] Cannot run VM. Memory size exceeds supported limit   
for given cluster version.

Hi, 

I am running Ovirt 3.5.1 and have the following settings in the engine config 
for VM memory :- 

- 
[root@ccr01 ~]# engine-config -g VM32BitMaxMemorySizeInMB 
VM32BitMaxMemorySizeInMB: 65536 version: general 

[root@ccr01 ~]# engine-config -g VM64BitMaxMemorySizeInMB 
VM64BitMaxMemorySizeInMB: 524288 version: 3.0 
VM64BitMaxMemorySizeInMB: 2097152 version: 3.1 
VM64BitMaxMemorySizeInMB: 2097152 version: 3.2 
VM64BitMaxMemorySizeInMB: 4096000 version: 3.4 
VM64BitMaxMemorySizeInMB: 4096000 version: 3.5 
VM64BitMaxMemorySizeInMB: 2097152 version: 3.3 
[root@ccr01 ~]# 
 

When I create a guest VM with more than 64GB of memory, the VM fails to start with the 
following error: 

Cannot run VM. Memory size exceeds supported limit for given cluster version. 

Steps to Reproduce: 
1. Create new VM or edit existing VM Memory Size to 65536 MB from Ovirt GUI. 
2. Created the guest VM through template 64-bit... so it will support more than 
16 GB memory. 
3. Now, Start the VM. 

Thanks, 
Punit 






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [SLA] can't start vm due to host did not satisfy internal filter Memory.

2015-02-10 Thread Artyom Lukianov
Can you also provide the value of the "Max free Memory for scheduling new VMs"
parameter (max_scheduling_memory under REST)?
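
For reference, a sketch of reading it over REST (engine address, credentials and host id
are placeholders; older releases expose the same resource under /api):

# curl -k -u admin@internal:password \
    https://<engine-fqdn>/ovirt-engine/api/hosts/<host-id> | grep max_scheduling_memory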

- Original Message -
From: Sven Kieske s.kie...@mittwald.de
To: users@ovirt.org List Users@ovirt.org
Sent: Tuesday, February 10, 2015 12:57:57 PM
Subject: [ovirt-users] [SLA] can't start vm due to host did not satisfy 
internal filter Memory.

Hi,

I haven't got any trouble with ovirt for a long time
but know I hit some wall:

Situation: local storage DC with a single vm in it.
ovirt-engine version 3.3.3 (yeah I know, old..)

error i get: vm does not boot anymore.

excerpt from engine.log:

2015-02-09 13:25:06,926 ERROR
[org.ovirt.engine.api.restapi.resource.AbstractBackendResource]
(ajp--127.0.0.1-8702-14) Operation Failed: [Cannot run VM. There are no
available running Hosts with sufficient memor
y in VM's Cluster ., Cannot run VM. There is no host that satisfies
current scheduling constraints. See bellow for details:, The host
$REDACTED did not satisfy internal filter Memory.]

and before this:

2015-02-09 13:25:06,914 INFO
[org.ovirt.engine.core.bll.scheduling.SchedulingManager]
(ajp--127.0.0.1-8702-14) [41867e4b] Candidate host $REDACTED
(efbc6306-d072-4601-a171-7dd79f169687) was filtered out by VAR
__FILTERTYPE__INTERNAL filter Memory


previously this vm could be started, the configuration was not altered.

configuration:

cluster: memory overcommitment: 200%

host: ovirt reports 32068 MB physical memory

vm was defined with max 32768 MB RAM
and 16384 MB RAM guaranteed

imho I should be able to start this vm.

however it refused to start until
I lowered the max ram to around 28 GB.

I could provide full logs in private, if needed.

I have other vms in other virtual DCs running
fine with half this amount of ram, and more.

the vm got created via REST.

mom seems to work just fine.

any help would be appreciated.

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH  Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [SLA] can't start vm due to host did not satisfy internal filter Memory.

2015-02-10 Thread Artyom Lukianov
I checked a little in the code, and we have a somewhat complicated formula for memory:
memory_that_vm_needs = host.mem_commited + host.pending_vmem_size +
host.guest_overhead + host.reserved_mem + vm.guaranteed_memory
You can check all the host parameters in the vds_dynamic table in the database, and you
can also enable debug logging for the engine log to get more
details (http://www.ovirt.org/OVirt_Engine_Development_Environment#Enable_DEBUG_log).
I hope this helps.
Thanks
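
For example, a rough sketch of reading those columns straight from the engine database
(the database name 'engine' is the default; column names follow the formula above):

# su - postgres -c "psql engine -c 'select vds_id, mem_commited, pending_vmem_size, guest_overhead, reserved_mem from vds_dynamic;'"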

- Original Message -
From: Sven Kieske s.kie...@mittwald.de
To: Artyom Lukianov aluki...@redhat.com
Cc: users@ovirt.org List Users@ovirt.org
Sent: Tuesday, February 10, 2015 3:29:13 PM
Subject: Re: [ovirt-users] [SLA] can't start vm due to host did not satisfy 
internal filter Memory.



On 10/02/15 14:03, Artyom Lukianov wrote:
 Can you also provide value for Max free Memory for scheduling new VMs 
 parameter, under REST max_scheduling_memory.

sure:

with the vm being launched with 28 GB max ram it is:
Max free Memory for scheduling new VMs:
35617 MB

Before it was way beyond that (I can't shutdown the vm now), obviously
because of the 200% overcommitment on the cluster.

So imho it should be fine running a vm with 32768 MB RAM defined.

Thanks!

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH  Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt and power saving

2015-01-13 Thread Artyom Lukianov
I don't really understand you; what do you mean by "you can pause the VM and so trigger
the low
utilization parameter"?

- Original Message -
From: Mario Giammarco mgiamma...@gmail.com
To: Artyom Lukianov aluki...@redhat.com
Cc: users@ovirt.org
Sent: Tuesday, January 13, 2015 3:05:06 PM
Subject: Re: [ovirt-users] Ovirt and power saving

Thanks for the reply.
I just want to be sure that you can pause a VM and so trigger the low
utilization parameter.

2015-01-13 12:45 GMT+01:00 Artyom Lukianov aluki...@redhat.com:

 We have power saving policy that also have possibility to poweroff
 hosts(via power management), you can configure power saving policy
 parameters for you purpose(HighUtilization can have values from 50-100 and
 LowUtilization 0-49), so you can set LowUtilization=0 and
 HighUtilization=50, so load balancing will try migrate all vms on one
 host(if cpu utilization less that 50). And also you can set parameter
 HostsInReserve to 0 if you do not want additional hosts in reserve.

 - Original Message -
 From: Mario Giammarco mgiamma...@gmail.com
 To: users@ovirt.org
 Sent: Tuesday, January 13, 2015 1:30:38 PM
 Subject: [ovirt-users] Ovirt and power saving

 Hello,
 I would like to ask if it is possible to do this use case with ovirt:

 1) two servers powered on
 2) operator suspend some virtual machines
 3) load falls down
 4) ovirt shutdown one server

 Then operator unpauses virtual machines and ovirt starts again the 2nd
 server.

 Thanks,
 Mario

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt and power saving

2015-01-13 Thread Artyom Lukianov
We have a power saving policy that also has the possibility of powering off hosts (via
power management). You can configure the power saving policy parameters for your
purpose (HighUtilization can have values from 50-100 and LowUtilization from 0-49):
if you set LowUtilization=0 and HighUtilization=50, load balancing will
try to migrate all VMs onto one host (if CPU utilization is less than 50). You can also
set the parameter HostsInReserve to 0 if you do not want additional hosts in
reserve.

- Original Message -
From: Mario Giammarco mgiamma...@gmail.com
To: users@ovirt.org
Sent: Tuesday, January 13, 2015 1:30:38 PM
Subject: [ovirt-users] Ovirt and power saving

Hello, 
I would like to ask if it is possible to implement this use case with oVirt: 

1) two servers are powered on 
2) the operator suspends some virtual machines 
3) the load drops 
4) oVirt shuts down one server 

Then the operator unpauses the virtual machines and oVirt starts the second server again. 

Thanks, 
Mario 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt and power saving

2015-01-13 Thread Artyom Lukianov
I don't think that suspending a VM affects power saving in a special way, because a
suspended VM does not generate CPU load on the host. So if you have two VMs and two hosts,
when you suspend the VMs the engine will shut down all hosts (if HostsInReserve=0), so it
can be a good idea to set this parameter to 1, to have at least one host up when
you need to run VMs.
Best regards

- Original Message -
From: Mario Giammarco mgiamma...@gmail.com
To: Artyom Lukianov aluki...@redhat.com
Cc: users@ovirt.org
Sent: Tuesday, January 13, 2015 5:22:59 PM
Subject: Re: [ovirt-users] Ovirt and power saving

I mean that I know I can set a power saving policy; it is a very nice
thing.
I need to be sure that I can pause/suspend (not shut down) virtual machines.
I also need that, if I suspend enough virtual machines, oVirt shuts down a
server to save power, following the power saving policy.

Otherwise, is it possible to manually shut down a server, forcing oVirt to
migrate VMs to the only one still powered up?

Basically it is a manual power saving mode.

Thanks again,
Mario

2015-01-13 14:21 GMT+01:00 Artyom Lukianov aluki...@redhat.com:

 Not really understand you, what you mean by you can pause vm and so
 trigger the low
 utilization parameter?

 - Original Message -
 From: Mario Giammarco mgiamma...@gmail.com
 To: Artyom Lukianov aluki...@redhat.com
 Cc: users@ovirt.org
 Sent: Tuesday, January 13, 2015 3:05:06 PM
 Subject: Re: [ovirt-users] Ovirt and power saving

 Thanks for reply.
 I just want now to  be sure that you can pause vm and so trigger the low
 utilization parameter

 2015-01-13 12:45 GMT+01:00 Artyom Lukianov aluki...@redhat.com:

  We have power saving policy that also have possibility to poweroff
  hosts(via power management), you can configure power saving policy
  parameters for you purpose(HighUtilization can have values from 50-100
 and
  LowUtilization 0-49), so you can set LowUtilization=0 and
  HighUtilization=50, so load balancing will try migrate all vms on one
  host(if cpu utilization less that 50). And also you can set parameter
  HostsInReserve to 0 if you do not want additional hosts in reserve.
 
  - Original Message -
  From: Mario Giammarco mgiamma...@gmail.com
  To: users@ovirt.org
  Sent: Tuesday, January 13, 2015 1:30:38 PM
  Subject: [ovirt-users] Ovirt and power saving
 
  Hello,
  I would like to ask if it is possible to do this use case with ovirt:
 
  1) two servers powered on
  2) operator suspend some virtual machines
  3) load falls down
  4) ovirt shutdown one server
 
  Then operator unpauses virtual machines and ovirt starts again the 2nd
  server.
 
  Thanks,
  Mario
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM failover with ovirt3.5

2015-01-11 Thread Artyom Lukianov
1) If you want to test a VM crash, you can emulate a kernel panic in the VM, or
just kill the VM's process on the host. So if you have an HA VM that runs on the first
host and you kill the VM process, the VM must be restarted on the second host.

2) The reason why we drop the VM to an unknown status and do not start it automatically on
a second host (in case of some host problem) is that the engine does not really know
whether it was a power outage or just some connectivity problem with the host, and the VM
may still exist on the first host and continue writing to storage; if you then start the
same VM on a second host writing to the same storage, you can get data
corruption. I think you can write your own script that, for example, checks
connectivity to the VM or to the power management interface of the host (if you have one)
and, if it really is a power outage, performs 'Confirm Host has been Rebooted' via REST or
the SDK (manual fence).

I hope this helps.
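
For the scripted 'Confirm Host has been Rebooted' step, a rough sketch of the REST call;
treat the endpoint and the fence_type value as assumptions to verify against your
version's API reference:

POST /ovirt-engine/api/hosts/<host-id>/fence
Content-Type: application/xml

<action>
  <fence_type>manual</fence_type>
</action>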

- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Artyom Lukianov aluki...@redhat.com
Cc: cong yue yuecong1...@gmail.com, stira...@redhat.com, users@ovirt.org, 
Jiri Moskovcak jmosk...@redhat.com, Yedidyah Bar David d...@redhat.com, 
Sandro Bonazzola sbona...@redhat.com
Sent: Thursday, January 8, 2015 7:03:55 PM
Subject: RE: [ovirt-users] VM failover with ovirt3.5

Thanks for the advice.

For
2) If something happen with host where HA vm run(network problem, power 
outage), vm dropped to unknown state, and if you want from engine to start 
this vm on another host, you need click under problematic host menu Confirm 
Host has been Rebooted, when you confirm this, engine will start vm on 
another host and also release SPM role from problematic host(if it SPM sure).

Is there any way to make the VM failover (moving to another host in the
same cluster) happen automatically, as sometimes the administrator may
not recognize a sudden power outage immediately?

Also, how can I test the following in my environment? Please kindly advise.
1) If vm crash(from some reason) it restarted automatically on another host in 
the same cluster.


Thanks,
Cong

-Original Message-
From: Artyom Lukianov [mailto:aluki...@redhat.com]
Sent: Thursday, January 08, 2015 8:46 AM
To: Yue, Cong
Cc: cong yue; stira...@redhat.com; users@ovirt.org; Jiri Moskovcak; Yedidyah 
Bar David; Sandro Bonazzola
Subject: Re: [ovirt-users] VM failover with ovirt3.5

So, behavior for not HE HA vm is:
1) If vm crash(from some reason) it restarted automatically on another host in 
the same cluster.
2) If something happen with host where HA vm run(network problem, power 
outage), vm dropped to unknown state, and if you want from engine to start this 
vm on another host, you need click under problematic host menu Confirm Host 
has been Rebooted, when you confirm this, engine will start vm on another host 
and also release SPM role from problematic host(if it SPM sure).


- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Artyom Lukianov aluki...@redhat.com
Cc: cong yue yuecong1...@gmail.com, stira...@redhat.com, users@ovirt.org, 
Jiri Moskovcak jmosk...@redhat.com, Yedidyah Bar David d...@redhat.com, 
Sandro Bonazzola sbona...@redhat.com
Sent: Wednesday, January 7, 2015 3:00:26 AM
Subject: RE: [ovirt-users] VM failover with ovirt3.5

For case 1, I got the advice that I need to change the
'migration_max_time_per_gib_mem' value inside vdsm.conf; I am doing that and
when I get the result, I will share it with you. Thanks.

For case 2, do you mean I tested normal VM failover the wrong way? Even
though I shut down host 3 forcibly, the VM on top of it does not fail
over.
What is your advice for this?

Thanks,
Cong



-Original Message-
From: Artyom Lukianov [mailto:aluki...@redhat.com]
Sent: Tuesday, January 06, 2015 12:34 AM
To: Yue, Cong
Cc: cong yue; stira...@redhat.com; users@ovirt.org; Jiri Moskovcak; Yedidyah 
Bar David; Sandro Bonazzola
Subject: Re: [ovirt-users] VM failover with ovirt3.5

Case 1:
In vdsm.log I can see this one:
Thread-674407::ERROR::2015-01-05 12:09:43,264::migration::259::vm.Vm::(run) 
vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Failed to migrate
Traceback (most recent call last):
  File /usr/share/vdsm/virt/migration.py, line 245, in run
self._startUnderlyingMigration(time.time())
  File /usr/share/vdsm/virt/migration.py, line 324, in 
_startUnderlyingMigration
None, maxBandwidth)
  File /usr/share/vdsm/virt/vm.py, line 670, in f
ret = attr(*args, **kwargs)
  File /usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py, line 111, 
in wrapper
ret = f(*args, **kwargs)
  File /usr/lib64/python2.7/site-packages/libvirt.py, line 1264, in 
migrateToURI2
if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', 
dom=self)
libvirtError: operation aborted: migration job: canceled by client
I see that this kind of thing can happen because the migration time exceeded the 
configured maximum time for migrations, but anyway we need help from the devs; I 
added some to CC.

Case 2:
HA vm must migrate only in case of some failure on host3, so if your host_3 is OK the 
vm will continue to run on it.

Re: [ovirt-users] VM Affinity groups ovirt 3.4.4

2015-01-08 Thread Artyom Lukianov
At present we have only an affinity filter and an affinity weight module, so you can 
see affinity groups at work only when you start or manually migrate vms. If you have a 
hard positive affinity group that includes two vms, and the first vm was started on the 
first host, the second vm must also start on the first host. I don't know if the devs 
will also add a balancing module for affinity groups, but you can open a PRD bug for 
this one.
Thanks

- Original Message -
From: Gary Lloyd g.ll...@keele.ac.uk
To: users@ovirt.org
Sent: Thursday, January 8, 2015 1:12:01 PM
Subject: [ovirt-users] VM Affinity groups ovirt 3.4.4

Hi, we have recently updated our production environment to oVirt 3.4.4. 

I have created a positive enforcing VM Affinity Group with 2 vms in one of our 
clusters, but they don't seem to be moving (they are currently on different hosts). Is 
there something else I need to activate? 

Thanks 

Gary Lloyd 
-- 
IT Services 
Keele University 
--- 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM failover with ovirt3.5

2015-01-08 Thread Artyom Lukianov
So, the behavior for a non-HE HA vm is:
1) If the vm crashes (for some reason), it is restarted automatically on another host in 
the same cluster.
2) If something happens to the host where the HA vm runs (network problem, power 
outage), the vm is dropped to an unknown state, and if you want the engine to start this 
vm on another host, you need to click Confirm Host has been Rebooted in the problematic 
host's menu; when you confirm this, the engine will start the vm on another host and 
also release the SPM role from the problematic host (if it was the SPM, of course).
 

- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Artyom Lukianov aluki...@redhat.com
Cc: cong yue yuecong1...@gmail.com, stira...@redhat.com, users@ovirt.org, 
Jiri Moskovcak jmosk...@redhat.com, Yedidyah Bar David d...@redhat.com, 
Sandro Bonazzola sbona...@redhat.com
Sent: Wednesday, January 7, 2015 3:00:26 AM
Subject: RE: [ovirt-users] VM failover with ovirt3.5

For case 1, I got the advice that I need to change the 
'migration_max_time_per_gib_mem' value inside vdsm.conf. I am doing it, and when I get 
the result I will also share it with you. Thanks.

For case 2, do you mean I tested normal VM failover the wrong way? Even though I 
forcibly shut down host 3, the vm on top of it does not fail over.
What is your advice for this?

Thanks,
Cong



-Original Message-
From: Artyom Lukianov [mailto:aluki...@redhat.com]
Sent: Tuesday, January 06, 2015 12:34 AM
To: Yue, Cong
Cc: cong yue; stira...@redhat.com; users@ovirt.org; Jiri Moskovcak; Yedidyah 
Bar David; Sandro Bonazzola
Subject: Re: [ovirt-users] VM failover with ovirt3.5

Case 1:
In vdsm.log I can see this one:
Thread-674407::ERROR::2015-01-05 12:09:43,264::migration::259::vm.Vm::(run) 
vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Failed to migrate
Traceback (most recent call last):
  File /usr/share/vdsm/virt/migration.py, line 245, in run
self._startUnderlyingMigration(time.time())
  File /usr/share/vdsm/virt/migration.py, line 324, in 
_startUnderlyingMigration
None, maxBandwidth)
  File /usr/share/vdsm/virt/vm.py, line 670, in f
ret = attr(*args, **kwargs)
  File /usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py, line 111, 
in wrapper
ret = f(*args, **kwargs)
  File /usr/lib64/python2.7/site-packages/libvirt.py, line 1264, in 
migrateToURI2
if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', 
dom=self)
libvirtError: operation aborted: migration job: canceled by client
I see that this kind of thing can happen because the migration time exceeded the 
configured maximum time for migrations, but anyway we need help from the devs; I 
added some to CC.

Case 2:
HA vm must migrate only in case of some failure on host3, so if your host_3 is OK the 
vm will continue to run on it.


- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Artyom Lukianov aluki...@redhat.com
Cc: cong yue yuecong1...@gmail.com, stira...@redhat.com, users@ovirt.org
Sent: Monday, January 5, 2015 7:38:08 PM
Subject: RE: [ovirt-users] VM failover with ovirt3.5

I collected the agent.log and vdsm.log in 2 cases.

Case 1: HE VM failover trial
What I did:
1. Make all hosts be engine-up.
2. Set host1 to local maintenance mode. On host1 there is the HE VM.
3. The HE VM then tries to migrate, but it finally fails. This can be seen in 
agent.log_hosted_engine_1.
As the logs are very large, I uploaded them to Google Drive. The link is:
https://drive.google.com/drive/#folders/0B9Pi5vvimKTdNU82bWVpZDhDQmM/0B9Pi5vvimKTdRGJhUXUwejNGRHc
The logs are for the 3 hosts in my environment.

Case 2: non-HE VM failover trial
1. Make all hosts be engine-up.
2. Set host2 to local maintenance mode. On host3 there is one vm with HA enabled. Also, 
for the cluster, 'Enable HA reservation' is on and the resilience policy is set to 
'Migrating virtual machines'.
3. But the vm on top of host3 does not migrate at all.
The logs are uploaded to Google Drive:
https://drive.google.com/drive/#folders/0B9Pi5vvimKTdNU82bWVpZDhDQmM/0B9Pi5vvimKTdd3MzTXZBbmxpNmc


Thanks,
Cong




-Original Message-
From: Artyom Lukianov [mailto:aluki...@redhat.com]
Sent: Sunday, January 04, 2015 3:22 AM
To: Yue, Cong
Cc: cong yue; stira...@redhat.com; users@ovirt.org
Subject: Re: [ovirt-users] VM failover with ovirt3.5

Can you provide vdsm logs:
1) for HE vm case
2) for not HE vm case
Thanks

- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Artyom Lukianov aluki...@redhat.com
Cc: cong yue yuecong1...@gmail.com, stira...@redhat.com, users@ovirt.org
Sent: Thursday, January 1, 2015 2:32:18 AM
Subject: Re: [ovirt-users] VM failover with ovirt3.5

Thanks for the advice. I applied the patch for clientIF.py as
- port = config.getint('addresses', 'management_port')
+ port = config.get('addresses', 'management_port')

Now there is no fatal error in beam.log, and migration starts to happen when I set the 
host where the HE VM is to local maintenance mode. But it finally fails with the 
following log. Also, the HE VM cannot do live migration in my environment.

Re: [ovirt-users] VM failover with ovirt3.5

2015-01-06 Thread Artyom Lukianov
Case 1:
In vdsm.log I can see this one:
Thread-674407::ERROR::2015-01-05 12:09:43,264::migration::259::vm.Vm::(run) 
vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Failed to migrate
Traceback (most recent call last):
  File /usr/share/vdsm/virt/migration.py, line 245, in run
self._startUnderlyingMigration(time.time())
  File /usr/share/vdsm/virt/migration.py, line 324, in 
_startUnderlyingMigration
None, maxBandwidth)
  File /usr/share/vdsm/virt/vm.py, line 670, in f
ret = attr(*args, **kwargs)
  File /usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py, line 111, 
in wrapper
ret = f(*args, **kwargs)
  File /usr/lib64/python2.7/site-packages/libvirt.py, line 1264, in 
migrateToURI2
if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', 
dom=self)
libvirtError: operation aborted: migration job: canceled by client
I see that this kind of thing can happen because the migration time exceeded the 
configured maximum time for migrations, but anyway we need help from the devs; I 
added some to CC.

Case 2:
HA vm must migrate only in case of some failure on host3, so if your host_3 is OK the 
vm will continue to run on it.


- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Artyom Lukianov aluki...@redhat.com
Cc: cong yue yuecong1...@gmail.com, stira...@redhat.com, users@ovirt.org
Sent: Monday, January 5, 2015 7:38:08 PM
Subject: RE: [ovirt-users] VM failover with ovirt3.5

I collected the agent.log and vdsm.log in 2 cases.

Case 1: HE VM failover trial
What I did:
1. Make all hosts be engine-up.
2. Set host1 to local maintenance mode. On host1 there is the HE VM.
3. The HE VM then tries to migrate, but it finally fails. This can be seen in 
agent.log_hosted_engine_1.
As the logs are very large, I uploaded them to Google Drive. The link is:
https://drive.google.com/drive/#folders/0B9Pi5vvimKTdNU82bWVpZDhDQmM/0B9Pi5vvimKTdRGJhUXUwejNGRHc
The logs are for the 3 hosts in my environment.

Case 2: non-HE VM failover trial
1. Make all hosts be engine-up.
2. Set host2 to local maintenance mode. On host3 there is one vm with HA enabled. Also, 
for the cluster, 'Enable HA reservation' is on and the resilience policy is set to 
'Migrating virtual machines'.
3. But the vm on top of host3 does not migrate at all.
The logs are uploaded to Google Drive:
https://drive.google.com/drive/#folders/0B9Pi5vvimKTdNU82bWVpZDhDQmM/0B9Pi5vvimKTdd3MzTXZBbmxpNmc


Thanks,
Cong




-Original Message-
From: Artyom Lukianov [mailto:aluki...@redhat.com]
Sent: Sunday, January 04, 2015 3:22 AM
To: Yue, Cong
Cc: cong yue; stira...@redhat.com; users@ovirt.org
Subject: Re: [ovirt-users] VM failover with ovirt3.5

Can you provide vdsm logs:
1) for HE vm case
2) for not HE vm case
Thanks

- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Artyom Lukianov aluki...@redhat.com
Cc: cong yue yuecong1...@gmail.com, stira...@redhat.com, users@ovirt.org
Sent: Thursday, January 1, 2015 2:32:18 AM
Subject: Re: [ovirt-users] VM failover with ovirt3.5

Thanks for the advice. I applied the patch for clientIF.py as
- port = config.getint('addresses', 'management_port')
+ port = config.get('addresses', 'management_port')

Now there is no fatal error in beam.log, and migration starts to happen when I set the 
host where the HE VM is to local maintenance mode. But it finally fails with the 
following log. Also, the HE VM cannot do live migration in my environment.

MainThread::INFO::2014-12-31
19:08:06,197::states::759::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Continuing to monitor migration
MainThread::INFO::2014-12-31
19:08:06,430::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineMigratingAway (score: 2000)
MainThread::INFO::2014-12-31
19:08:06,430::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::ERROR::2014-12-31
19:08:16,490::hosted_engine::867::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitor_migration)
Failed to migrate
Traceback (most recent call last):
 File 
/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py,
line 863, in _monitor_migration
   vm_id,
 File 
/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/vds_client.py,
line 85, in run_vds_client_cmd
   response['status']['message'])
DetailedError: Error 47 from migrateStatus: Migration canceled
MainThread::INFO::2014-12-31
19:08:16,501::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Trying: notify time=1420070896.5 type=state_transition
detail=EngineMigratingAway-ReinitializeFSM hostname='compute2-3'
MainThread::INFO::2014-12-31
19:08:16,502::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Success, was notification of state_transition
(EngineMigratingAway-ReinitializeFSM) sent? ignored
MainThread::INFO::2014-12-31
19:08:16,805

Re: [ovirt-users] Affinity

2015-01-05 Thread Artyom Lukianov
If you want each vm to run on a different host, you can:
1) just not pin it to a specific host (placement and migration options under the vm 
properties);
2) create two clusters, each cluster having hosts from a different physical 
datacenter, and administer a different set of vms for each cluster;
3) and finally, you can use the external scheduler: write your own Python script and 
use it as a filter, weight, or balance module for the oVirt scheduler (a sketch of the 
filtering logic follows below):
   http://www.ovirt.org/Features/oVirt_External_Scheduling_Proxy
   http://www.ovirt.org/External_Scheduler_Samples 

   You need to install the ovirt-scheduler-proxy package and also enable it via 
engine-config -s ExternalSchedulerEnabled=True.
   Additional information can be found under /usr/share/doc/ovirt-scheduler-proxy*.
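As a rough illustration of the filtering logic such a module would implement (the
mapping and the function signature below are placeholders of mine, not the exact
scheduler-proxy plugin interface; take the real plugin skeleton from the samples
shipped with ovirt-scheduler-proxy and drop logic like this into it):

# Sketch only: keep just the hosts that sit in a chosen physical datacenter.
# PHYSICAL_DC and filter_hosts() are illustrative; the real plugin class and
# method names come from the ovirt-scheduler-proxy sample plugins.

PHYSICAL_DC = {            # assumed mapping of host name -> physical datacenter
    "Hyp1": "Datacenter1",
    "Hyp2": "Datacenter2",
    "Hyp3": "Datacenter1",
    "Hyp4": "Datacenter2",
}


def filter_hosts(host_names, wanted_dc):
    """Return only the host names located in the wanted physical datacenter."""
    return [name for name in host_names if PHYSICAL_DC.get(name) == wanted_dc]


if __name__ == "__main__":
    # Example: scheduling a vm that must stay in Datacenter2
    print(filter_hosts(["Hyp1", "Hyp2", "Hyp3", "Hyp4"], "Datacenter2"))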

I hope it will help you.
Thanks

- Original Message -
From: Koen Vanoppen vanoppen.k...@gmail.com
To: Doron Fediuck dfedi...@redhat.com, users@ovirt.org
Sent: Monday, January 5, 2015 11:26:26 AM
Subject: Re: [ovirt-users] Affinity

Thanks for your reply. But maybe I explained it badly... All 4 hypervisors are in the 
same datacenter in oVirt, but physically they are in different datacenters. 
That is why I want to force the VMs to run on different hypervisors: if one physical 
datacenter goes down (power failure or whatever), at least one VM will continue to run. 

2015-01-05 10:21 GMT+01:00 Doron Fediuck  dfedi...@redhat.com  : 




On 05/01/15 10:54, Koen Vanoppen wrote: 



Hi All, 

First of all, let me say a Happy New Year with all the best wishes!! 

Now, let's get down to business :-). 

I would like to implement the Affinity option in our datacenter. 
I already activated it, put the 2 vms in it and set it to negative so they 
won't run together on the same hypervisor. Now, the question... 
Is there a way I can force the VMs to run on hypervisors that are not in the same 
datacenter? We have this: 

Hyp1 - Datacenter1 
Hyp2 - Datacenter2 
Hyp3 - Datacenter1 
Hyp4 - Datacenter2 

For the moment they run on a different Hypervisor, but they are in the same 
Datacenter. I can manually now move them to the other one, but I would like 
this that oVirt manages this... 
Is this possible...? 

Kind regards, 

Koen 


Hi Koen. 

VMs cannot move between data centers while still running. 
Just to provide some basic concepts in terms of hierarchy, we have: 

Data-Center1 
| 
|\ Cluster A 
| | 
| |\ Host (a) 
| |\ Host (b) 
| 
|\ Cluster B 
| | 
| |\ Host (c) 
| |\ Host (d) 
| |\ Host (e) 

Data-Center2 
| 
|\ Cluster A 
| | 
| |\ Host (f) 
| 
|\ Cluster B 
| | 
| |\ Host (g) 
| |\ Host (h) 
| |\ Host (i) 

As you can see, a host may be a part of a single cluster and a VM 
will run on one of the hosts. Live migration can be done between 
hosts of the same cluster. The only way to move a VM between clusters 
and DCs is manually, while the VM is down. 

So Affinity works at the cluster level, and the rules are valid for the 
specific cluster that currently owns the VM. 
In your case, if Hyp1 and Hyp3 belong to the same cluster, the affinity 
rules will apply every time you start or migrate a VM. However, there is 
no rule which spans Datacenter1 and Datacenter2. 

HTH, 
Doron 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted engine status=WaitForLaunch

2015-01-04 Thread Artyom Lukianov
Further to the answer from d...@redhat.com: just put your environment into global 
maintenance mode before you destroy the HE vm (hosted-engine --set-maintenance 
--mode=global), because otherwise the agent will restart the HE vm.

- Original Message -
From: Will K yetanotherw...@yahoo.com
To: users@ovirt.org
Sent: Sunday, January 4, 2015 7:03:36 AM
Subject: [ovirt-users] hosted engine status=WaitForLaunch

Hi all, 

I'm pretty new to the hosted engine. 

1. Is there a way to re-run the deploy from scratch? 
2. if not, how can I restart a bad setup giving me status=WaitForLaunch? 


a. I'm sure I gave bad option to run the VM installation 
b. I'm sure I picked a wrong parameter for the hosted-engine command 

Now when I run 
# hosted-engine --vm-start 
VM exists and is down, destroying it 
Machine destroyed 

bc13d2d0-f2b5-4ac2-bc5a-ef98e046e2a9 
Status = WaitForLaunch 
nicModel = rtl8139,pv 
emulatedMachine = pc 

Thanks in Advance and Happy New Year. 

Will 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM failover with ovirt3.5

2015-01-04 Thread Artyom Lukianov
Can you provide vdsm logs:
1) for HE vm case
2) for not HE vm case
Thanks

- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Artyom Lukianov aluki...@redhat.com
Cc: cong yue yuecong1...@gmail.com, stira...@redhat.com, users@ovirt.org
Sent: Thursday, January 1, 2015 2:32:18 AM
Subject: Re: [ovirt-users] VM failover with ovirt3.5

Thanks for the advice. I applied the patch for clientIF.py as
- port = config.getint('addresses', 'management_port')
+ port = config.get('addresses', 'management_port')

Now there is no fatal error in beam.log, and migration starts to happen when I set the 
host where the HE VM is to local maintenance mode. But it finally fails with the 
following log. Also, the HE VM cannot do live migration in my environment.

MainThread::INFO::2014-12-31
19:08:06,197::states::759::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Continuing to monitor migration
MainThread::INFO::2014-12-31
19:08:06,430::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineMigratingAway (score: 2000)
MainThread::INFO::2014-12-31
19:08:06,430::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::ERROR::2014-12-31
19:08:16,490::hosted_engine::867::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitor_migration)
Failed to migrate
Traceback (most recent call last):
 File 
/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py,
line 863, in _monitor_migration
   vm_id,
 File 
/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/vds_client.py,
line 85, in run_vds_client_cmd
   response['status']['message'])
DetailedError: Error 47 from migrateStatus: Migration canceled
MainThread::INFO::2014-12-31
19:08:16,501::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Trying: notify time=1420070896.5 type=state_transition
detail=EngineMigratingAway-ReinitializeFSM hostname='compute2-3'
MainThread::INFO::2014-12-31
19:08:16,502::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Success, was notification of state_transition
(EngineMigratingAway-ReinitializeFSM) sent? ignored
MainThread::INFO::2014-12-31
19:08:16,805::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state ReinitializeFSM (score: 0)
MainThread::INFO::2014-12-31
19:08:16,805::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)

Besides, I tried with other VMs instead of the HE VM, but failover does not happen 
(migration does not even start to be attempted). I set HA for those VMs. Is there some 
log I can check for this?

Please kindly advise.

Thanks,
Cong


 On 2014/12/31, at 0:14, Artyom Lukianov aluki...@redhat.com wrote:

 Ok I found this one:
 Thread-1807180::ERROR::2014-12-30 
 13:02:52,164::migration::165::vm.Vm::(_recover) 
 vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Failed to destroy remote VM
 Traceback (most recent call last):
 File /usr/share/vdsm/virt/migration.py, line 163, in _recover
  self.destServer.destroy(self._vm.id)
 AttributeError: 'SourceThread' object has no attribute 'destServer'
 Thread-1807180::ERROR::2014-12-30 13:02:52,165::migration::259::vm.Vm::(run) 
 vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Failed to migrate
 Traceback (most recent call last):
 File /usr/share/vdsm/virt/migration.py, line 229, in run
  self._setupVdsConnection()
 File /usr/share/vdsm/virt/migration.py, line 92, in _setupVdsConnection
  self._dst, self._vm.cif.bindings['xmlrpc'].serverPort)
 File /usr/lib/python2.7/site-packages/vdsm/vdscli.py, line 91, in 
 cannonizeHostPort
  return addr + ':' + port
 TypeError: cannot concatenate 'str' and 'int' objects

 We have bug that already verified for this one 
 https://bugzilla.redhat.com/show_bug.cgi?id=1163771, so patch must be 
 included in latest builds, but you can also take a look on patch, and edit 
 files by yourself on all you machines and restart vdsm.

 - Original Message -
 From: cong yue yuecong1...@gmail.com
 To: aluki...@redhat.com, stira...@redhat.com, users@ovirt.org
 Cc: Cong Yue cong_...@alliedtelesis.com
 Sent: Tuesday, December 30, 2014 8:22:47 PM
 Subject: Re: [ovirt-users] VM failover with ovirt3.5

 The vdsm.log just after I turned the host where HE VM is to local.

 In the log, there is some part like

 ---
 GuestMonitor-HostedEngine::DEBUG::2014-12-30
 13:01:03,988::vm::486::vm.Vm::(_getUserCpuTuneInfo)
 vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Domain Metadata is not
 set
 GuestMonitor-HostedEngine::DEBUG::2014-12-30
 13:01:03,989::vm::486::vm.Vm::(_getUserCpuTuneInfo)
 vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Domain Metadata is not
 set
 GuestMonitor-HostedEngine::DEBUG::2014-12-30
 13:01:03,990::vm::486::vm.Vm::(_getUserCpuTuneInfo)
 vmId

Re: [ovirt-users] VM failover with ovirt3.5

2014-12-31 Thread Artyom Lukianov
Ok I found this one:
Thread-1807180::ERROR::2014-12-30 
13:02:52,164::migration::165::vm.Vm::(_recover) 
vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Failed to destroy remote VM
Traceback (most recent call last):
  File /usr/share/vdsm/virt/migration.py, line 163, in _recover
self.destServer.destroy(self._vm.id)
AttributeError: 'SourceThread' object has no attribute 'destServer'
Thread-1807180::ERROR::2014-12-30 13:02:52,165::migration::259::vm.Vm::(run) 
vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Failed to migrate
Traceback (most recent call last):
  File /usr/share/vdsm/virt/migration.py, line 229, in run
self._setupVdsConnection()
  File /usr/share/vdsm/virt/migration.py, line 92, in _setupVdsConnection
self._dst, self._vm.cif.bindings['xmlrpc'].serverPort)
  File /usr/lib/python2.7/site-packages/vdsm/vdscli.py, line 91, in 
cannonizeHostPort
return addr + ':' + port
TypeError: cannot concatenate 'str' and 'int' objects

We have a bug that is already verified for this one, 
https://bugzilla.redhat.com/show_bug.cgi?id=1163771, so the patch should be included 
in the latest builds, but you can also take a look at the patch, edit the files 
yourself on all your machines and restart vdsm.

- Original Message -
From: cong yue yuecong1...@gmail.com
To: aluki...@redhat.com, stira...@redhat.com, users@ovirt.org
Cc: Cong Yue cong_...@alliedtelesis.com
Sent: Tuesday, December 30, 2014 8:22:47 PM
Subject: Re: [ovirt-users] VM failover with ovirt3.5

The vdsm.log just after I turned the host where HE VM is to local.

In the log, there is some part like

---
GuestMonitor-HostedEngine::DEBUG::2014-12-30
13:01:03,988::vm::486::vm.Vm::(_getUserCpuTuneInfo)
vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Domain Metadata is not
set
GuestMonitor-HostedEngine::DEBUG::2014-12-30
13:01:03,989::vm::486::vm.Vm::(_getUserCpuTuneInfo)
vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Domain Metadata is not
set
GuestMonitor-HostedEngine::DEBUG::2014-12-30
13:01:03,990::vm::486::vm.Vm::(_getUserCpuTuneInfo)
vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Domain Metadata is not
set
JsonRpc (StompReactor)::DEBUG::2014-12-30
13:01:04,675::stompReactor::98::Broker.StompAdapter::(handle_frame)
Handling message StompFrame command='SEND'
JsonRpcServer::DEBUG::2014-12-30
13:01:04,676::__init__::504::jsonrpc.JsonRpcServer::(serve_requests)
Waiting for request
Thread-1806995::DEBUG::2014-12-30
13:01:04,677::stompReactor::163::yajsonrpc.StompServer::(send) Sending
response
JsonRpc (StompReactor)::DEBUG::2014-12-30
13:01:04,678::stompReactor::98::Broker.StompAdapter::(handle_frame)
Handling message StompFrame command='SEND'
JsonRpcServer::DEBUG::2014-12-30
13:01:04,679::__init__::504::jsonrpc.JsonRpcServer::(serve_requests)
Waiting for request
Thread-1806996::DEBUG::2014-12-30
13:01:04,681::vm::486::vm.Vm::(_getUserCpuTuneInfo)
vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Domain Metadata is not
set
---

Is something wrong with this?

Thanks,
Cong


 From: Artyom Lukianov aluki...@redhat.com
 Date: 2014年12月29日 23:13:45 GMT-8
 To: Yue, Cong cong_...@alliedtelesis.com
 Cc: Simone Tiraboschi stira...@redhat.com, users@ovirt.org
 users@ovirt.org
 Subject: Re: [ovirt-users] VM failover with ovirt3.5

 HE vm migrated only by ovirt-ha-agent and not by engine, but FatalError it's
 more interesting, can you provide vdsm.log for this one please.

 - Original Message -
 From: Cong Yue cong_...@alliedtelesis.com
 To: Artyom Lukianov aluki...@redhat.com
 Cc: Simone Tiraboschi stira...@redhat.com, users@ovirt.org
 Sent: Monday, December 29, 2014 8:29:04 PM
 Subject: Re: [ovirt-users] VM failover with ovirt3.5

 I disabled local maintenance mode for all hosts, and then only set the host
 where HE VM is there to local maintenance mode. The logs are as follows.
 During the migration of HE VM , it shows some fatal error happen. By the
 way, also HE VM can not work with live migration. Instead, other VMs can do
 live migration.

 ---
 [root@compute2-3 ~]# hosted-engine --set-maintenance --mode=local
 You have new mail in /var/spool/mail/root
 [root@compute2-3 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
 MainThread::INFO::2014-12-29
 13:16:12,435::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
 Best remote host 10.0.0.92 (id: 3, score: 2400)
 MainThread::INFO::2014-12-29
 13:16:22,711::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
 Current state EngineUp (score: 2400)
 MainThread::INFO::2014-12-29
 13:16:22,711::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
 Best remote host 10.0.0.92 (id: 3, score: 2400)
 MainThread::INFO::2014-12-29
 13:16:32,978::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
 Current state EngineUp (score: 2400)
 MainThread::INFO::2014-12-29
 13:16:32,978::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine

Re: [ovirt-users] VM failover with ovirt3.5

2014-12-29 Thread Artyom Lukianov
Can you also provide the output of hosted-engine --vm-status please? Previous time it 
was useful, because I do not see anything unusual.
Thanks

- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Artyom Lukianov aluki...@redhat.com
Cc: Simone Tiraboschi stira...@redhat.com, users@ovirt.org
Sent: Monday, December 29, 2014 7:15:24 AM
Subject: Re: [ovirt-users] VM failover with ovirt3.5

Also, I changed the maintenance mode to local on another host, but the VM on this host 
still cannot be migrated. The logs are as follows.

[root@compute2-2 ~]# hosted-engine --set-maintenance --mode=local
[root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-28
21:09:04,184::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
21:09:14,603::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-28
21:09:14,603::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
21:09:24,903::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-28
21:09:24,904::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
21:09:35,026::states::437::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm is running on host 10.0.0.94 (id 1)
MainThread::INFO::2014-12-28
21:09:35,236::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-28
21:09:35,236::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
21:09:45,604::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-28
21:09:45,604::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
21:09:55,691::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-28
21:09:55,701::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Trying: notify time=1419829795.7 type=state_transition
detail=EngineDown-LocalMaintenance hostname='compute2-2'
MainThread::INFO::2014-12-28
21:09:55,761::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Success, was notification of state_transition
(EngineDown-LocalMaintenance) sent? sent
MainThread::INFO::2014-12-28
21:09:55,990::states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
Score is 0 due to local maintenance mode
MainThread::INFO::2014-12-28
21:09:55,990::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-28
21:09:55,991::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
^C
You have new mail in /var/spool/mail/root
[root@compute2-2 ~]# ps -ef | grep qemu
root 18420  2777  0 21:10 pts/0    00:00:00 grep --color=auto qemu
qemu 29809 1  0 Dec19 ?01:17:20 /usr/libexec/qemu-kvm
-name testvm2-2 -S -machine rhel6.5.0,accel=kvm,usb=off -cpu Nehalem
-m 500 -realtime mlock=off -smp
1,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
c31e97d0-135e-42da-9954-162b5228dce3 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=7-0.1406.el7.centos.2.5,serial=4C4C4544-0059-3610-8033-B4C04F395931,uuid=c31e97d0-135e-42da-9954-162b5228dce3
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/testvm2-2.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2014-12-19T20:17:17,driftfix=slew 
-no-kvm-pit-reinjection
-no-hpet -no-shutdown -boot strict=on -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
-drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
-drive 
file=/rhev/data-center/0002-0002-0002-0002-01e4/1dc71096-27c4-4256-b2ac-bd7265525c69/images/5cbeb8c9

Re: [ovirt-users] VM failover with ovirt3.5

2014-12-29 Thread Artyom Lukianov
I see that the HE vm runs on the host with ip 10.0.0.94, and the two other hosts are in 
Local Maintenance state, so the vm will not migrate to either of them. Can you try 
disabling local maintenance on all hosts in the HE environment, then enable local 
maintenance on the host where the HE vm runs, and also provide the output of 
hosted-engine --vm-status.
Failover works in the following way:
1) if the host where the HE vm runs has a score lower by 800 than some other host in the 
HE environment, the HE vm will migrate to the host with the best score
2) if something happens to the vm (kernel panic, crash of a service...), the agent will 
restart the HE vm on another host in the HE environment with a positive score
3) if you put the host with the HE vm into local maintenance, the vm will migrate to 
another host with a positive score
Thanks.

- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Artyom Lukianov aluki...@redhat.com
Cc: Simone Tiraboschi stira...@redhat.com, users@ovirt.org
Sent: Monday, December 29, 2014 6:30:42 PM
Subject: Re: [ovirt-users] VM failover with ovirt3.5

Thanks and the --vm-status log is as follows:
[root@compute2-2 ~]# hosted-engine --vm-status


--== Host 1 status ==--

Status up-to-date  : True
Hostname   : 10.0.0.94
Host ID: 1
Engine status  : {health: good, vm: up,
detail: up}
Score  : 2400
Local maintenance  : False
Host timestamp : 1008087
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1008087 (Mon Dec 29 11:25:51 2014)
host-id=1
score=2400
maintenance=False
state=EngineUp


--== Host 2 status ==--

Status up-to-date  : True
Hostname   : 10.0.0.93
Host ID: 2
Engine status  : {reason: vm not running on
this host, health: bad, vm: down, detail: unknown}
Score  : 0
Local maintenance  : True
Host timestamp : 859142
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=859142 (Mon Dec 29 08:25:08 2014)
host-id=2
score=0
maintenance=True
state=LocalMaintenance


--== Host 3 status ==--

Status up-to-date  : True
Hostname   : 10.0.0.92
Host ID: 3
Engine status  : {reason: vm not running on
this host, health: bad, vm: down, detail: unknown}
Score  : 0
Local maintenance  : True
Host timestamp : 853615
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=853615 (Mon Dec 29 08:25:57 2014)
host-id=3
score=0
maintenance=True
state=LocalMaintenance
You have new mail in /var/spool/mail/root
[root@compute2-2 ~]#

Could you please explain how VM failover works inside ovirt? Is there any other 
debug option I can enable to check the problem?

Thanks,
Cong


On 2014/12/29, at 1:39, Artyom Lukianov aluki...@redhat.com wrote:

Can you also provide output of hosted-engine --vm-status please, previous time 
it was useful, because I do not see something unusual.
Thanks

- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Artyom Lukianov aluki...@redhat.com
Cc: Simone Tiraboschi stira...@redhat.com, users@ovirt.org
Sent: Monday, December 29, 2014 7:15:24 AM
Subject: Re: [ovirt-users] VM failover with ovirt3.5

Also I change the maintenance mode to local in another host. But also the VM in 
this host can not be migrated. The logs are as follows.

[root@compute2-2 ~]# hosted-engine --set-maintenance --mode=local
[root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-28
21:09:04,184::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
21:09:14,603::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-28
21:09:14,603::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
21:09:24,903::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-28
21:09:24,904::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
21:09:35,026::states::437::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm is running on host

Re: [ovirt-users] VM failover with ovirt3.5

2014-12-29 Thread Artyom Lukianov
The HE vm is migrated only by ovirt-ha-agent and not by the engine, but the FatalError 
is more interesting; can you provide the vdsm.log for this one please.

- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Artyom Lukianov aluki...@redhat.com
Cc: Simone Tiraboschi stira...@redhat.com, users@ovirt.org
Sent: Monday, December 29, 2014 8:29:04 PM
Subject: Re: [ovirt-users] VM failover with ovirt3.5

I disabled local maintenance mode for all hosts, and then set only the host where the 
HE VM is running to local maintenance mode. The logs are as follows. During the 
migration of the HE VM, a fatal error shows up. By the way, the HE VM also cannot do 
live migration; other VMs, however, can.

---
[root@compute2-3 ~]# hosted-engine --set-maintenance --mode=local
You have new mail in /var/spool/mail/root
[root@compute2-3 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-29
13:16:12,435::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.92 (id: 3, score: 2400)
MainThread::INFO::2014-12-29
13:16:22,711::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-29
13:16:22,711::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.92 (id: 3, score: 2400)
MainThread::INFO::2014-12-29
13:16:32,978::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-29
13:16:32,978::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-29
13:16:43,272::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-29
13:16:43,272::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-29
13:16:53,316::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm running on localhost
MainThread::INFO::2014-12-29
13:16:53,562::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-29
13:16:53,562::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-29
13:17:03,600::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-29
13:17:03,611::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Trying: notify time=1419877023.61 type=state_transition
detail=EngineUp-LocalMaintenanceMigrateVm hostname='compute2-3'
MainThread::INFO::2014-12-29
13:17:03,672::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Success, was notification of state_transition
(EngineUp-LocalMaintenanceMigrateVm) sent? sent
MainThread::INFO::2014-12-29
13:17:03,911::states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
Score is 0 due to local maintenance mode
MainThread::INFO::2014-12-29
13:17:03,912::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenanceMigrateVm (score: 0)
MainThread::INFO::2014-12-29
13:17:03,912::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-29
13:17:03,960::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Trying: notify time=1419877023.96 type=state_transition
detail=LocalMaintenanceMigrateVm-EngineMigratingAway
hostname='compute2-3'
MainThread::INFO::2014-12-29
13:17:03,980::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Success, was notification of state_transition
(LocalMaintenanceMigrateVm-EngineMigratingAway) sent? sent
MainThread::INFO::2014-12-29
13:17:04,218::states::66::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_penalize_memory)
Penalizing score by 400 due to low free memory
MainThread::INFO::2014-12-29
13:17:04,218::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineMigratingAway (score: 2000)
MainThread::INFO::2014-12-29
13:17:04,219::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::ERROR::2014-12-29
13:17:14,251::hosted_engine::867::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine

Re: [ovirt-users] VM failover with ovirt3.5

2014-12-29 Thread Artyom Lukianov
If you want to enable failover for some vm, you can go to the vm properties - High 
Availability and enable the Highly Available checkbox (a small SDK sketch follows). The 
HE vm is already automatically Highly Available.
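A minimal sketch of doing the same through the Python SDK of that era
(ovirt-engine-sdk-python 3.x); the URL, credentials, vm name and priority value are
placeholders of mine, not values from this thread:

# Sketch only: mark an existing vm as Highly Available via the oVirt 3.x Python SDK.
from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url="https://engine.example.com/api",
          username="admin@internal",
          password="password",
          insecure=True)   # insecure=True skips CA validation; prefer ca_file in production
try:
    vm = api.vms.get(name="testvm2-2")   # the vm to protect
    vm.set_high_availability(params.HighAvailability(enabled=True, priority=50))
    vm.update()                          # push the change to the engine
finally:
    api.disconnect()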

- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Artyom Lukianov aluki...@redhat.com
Cc: Simone Tiraboschi stira...@redhat.com, users@ovirt.org
Sent: Monday, December 29, 2014 7:49:58 PM
Subject: Re: [ovirt-users] VM failover with ovirt3.5

Thanks for the detailed explanation. Do you mean only the HE VM can fail over? I want 
to try with a VM on any host to check whether a VM can fail over to another host 
automatically, like on VMware or XenServer.
I will try as you advised and provide the logs for your further advice.

Thanks,
Cong



 On 2014/12/29, at 8:43, Artyom Lukianov aluki...@redhat.com wrote:

 I see that HE vm run on host with ip 10.0.0.94, and two another hosts in 
 Local Maintenance state, so vm will not migrate to any of them, can you try 
 disable local maintenance on all hosts in HE environment and after enable 
 local maintenance on host where HE vm run, and provide also output of 
 hosted-engine --vm-status.
 Failover works in next way:
 1) if host where run HE vm have score less by 800 that some other host in HE 
 environment, HE vm will migrate on host with best score
 2) if something happen to vm(kernel panic, crash of service...), agent will 
 restart HE vm on another host in HE environment with positive score
 3) if put to local maintenance host with HE vm, vm will migrate to another 
 host with positive score
 Thanks.

 - Original Message -
 From: Cong Yue cong_...@alliedtelesis.com
 To: Artyom Lukianov aluki...@redhat.com
 Cc: Simone Tiraboschi stira...@redhat.com, users@ovirt.org
 Sent: Monday, December 29, 2014 6:30:42 PM
 Subject: Re: [ovirt-users] VM failover with ovirt3.5

 Thanks and the --vm-status log is as follows:
 [root@compute2-2 ~]# hosted-engine --vm-status


 --== Host 1 status ==--

 Status up-to-date  : True
 Hostname   : 10.0.0.94
 Host ID: 1
 Engine status  : {health: good, vm: up,
 detail: up}
 Score  : 2400
 Local maintenance  : False
 Host timestamp : 1008087
 Extra metadata (valid at timestamp):
 metadata_parse_version=1
 metadata_feature_version=1
 timestamp=1008087 (Mon Dec 29 11:25:51 2014)
 host-id=1
 score=2400
 maintenance=False
 state=EngineUp


 --== Host 2 status ==--

 Status up-to-date  : True
 Hostname   : 10.0.0.93
 Host ID: 2
 Engine status  : {reason: vm not running on
 this host, health: bad, vm: down, detail: unknown}
 Score  : 0
 Local maintenance  : True
 Host timestamp : 859142
 Extra metadata (valid at timestamp):
 metadata_parse_version=1
 metadata_feature_version=1
 timestamp=859142 (Mon Dec 29 08:25:08 2014)
 host-id=2
 score=0
 maintenance=True
 state=LocalMaintenance


 --== Host 3 status ==--

 Status up-to-date  : True
 Hostname   : 10.0.0.92
 Host ID: 3
 Engine status  : {reason: vm not running on
 this host, health: bad, vm: down, detail: unknown}
 Score  : 0
 Local maintenance  : True
 Host timestamp : 853615
 Extra metadata (valid at timestamp):
 metadata_parse_version=1
 metadata_feature_version=1
 timestamp=853615 (Mon Dec 29 08:25:57 2014)
 host-id=3
 score=0
 maintenance=True
 state=LocalMaintenance
 You have new mail in /var/spool/mail/root
 [root@compute2-2 ~]#

 Could you please explain how VM failover works inside ovirt? Is there any 
 other debug option I can enable to check the problem?

 Thanks,
 Cong


 On 2014/12/29, at 1:39, Artyom Lukianov aluki...@redhat.com wrote:

 Can you also provide output of hosted-engine --vm-status please, previous 
 time it was useful, because I do not see something unusual.
 Thanks

 - Original Message -
 From: Cong Yue cong_...@alliedtelesis.com
 To: Artyom Lukianov aluki...@redhat.com
 Cc: Simone Tiraboschi stira...@redhat.com, users@ovirt.org
 Sent: Monday, December 29, 2014 7:15:24 AM
 Subject: Re: [ovirt-users] VM failover with ovirt3.5

 Also I change the maintenance mode to local in another host. But also the VM 
 in this host can not be migrated. The logs are as follows.

 [root@compute2-2 ~]# hosted-engine --set-maintenance --mode=local
 [root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
 MainThread::INFO::2014-12-28
 21:09:04,184::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
 Best

Re: [ovirt-users] VM failover with ovirt3.5

2014-12-28 Thread Artyom Lukianov
I see that you set local maintenance on host3, which does not have the engine vm on it, 
so there is nothing to migrate from this host.
If you set local maintenance on host1, the vm must migrate to another host with a 
positive score.
Thanks

- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Simone Tiraboschi stira...@redhat.com
Cc: users@ovirt.org
Sent: Saturday, December 27, 2014 6:58:32 PM
Subject: Re: [ovirt-users] VM failover with ovirt3.5

Hi

I tried hosted-engine --set-maintenance --mode=local on
compute2-1, which is host 3 in my cluster. The log shows that
maintenance mode is detected, but migration does not happen.

The logs are as follows. Is there any other config I need to check?

[root@compute2-1 vdsm]# hosted-engine --vm-status


--== Host 1 status ==--

Status up-to-date  : True
Hostname   : 10.0.0.94
Host ID: 1
Engine status  : {health: good, vm: up,
detail: up}
Score  : 2400
Local maintenance  : False
Host timestamp : 836296
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=836296 (Sat Dec 27 11:42:39 2014)
host-id=1
score=2400
maintenance=False
state=EngineUp


--== Host 2 status ==--

Status up-to-date  : True
Hostname   : 10.0.0.93
Host ID: 2
Engine status  : {reason: vm not running on
this host, health: bad, vm: down, detail: unknown}
Score  : 2400
Local maintenance  : False
Host timestamp : 687358
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=687358 (Sat Dec 27 08:42:04 2014)
host-id=2
score=2400
maintenance=False
state=EngineDown


--== Host 3 status ==--

Status up-to-date  : True
Hostname   : 10.0.0.92
Host ID: 3
Engine status  : {reason: vm not running on
this host, health: bad, vm: down, detail: unknown}
Score  : 0
Local maintenance  : True
Host timestamp : 681827
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=681827 (Sat Dec 27 08:42:40 2014)
host-id=3
score=0
maintenance=True
state=LocalMaintenance
[root@compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-27
08:42:41,109::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:42:51,198::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-27
08:42:51,420::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-27
08:42:51,420::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:43:01,507::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-27
08:43:01,773::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-27
08:43:01,773::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:43:11,859::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-27
08:43:12,072::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-27
08:43:12,072::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)



[root@compute2-3 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-27
11:36:28,855::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27
11:36:39,130::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27
11:36:39,130::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27

Re: [ovirt-users] How to stick VM to hosts?

2014-11-30 Thread Artyom Lukianov
I am sure you know about vm pinning, but that only lets you run a vm on one specific 
host; in your case you want to run the vm on some range of hosts, excluding some 
specific hosts. At the moment oVirt does not have such a policy, but you can write your 
own filter for the scheduler proxy that will filter out all hosts except the ones that 
you need:
http://www.ovirt.org/Features/oVirt_External_Scheduling_Proxy
http://www.ovirt.org/External_Scheduler_Samples 

You need to install the ovirt-scheduler-proxy package and also enable it via 
engine-config -s ExternalSchedulerEnabled=True.
Additional information can be found under /usr/share/doc/ovirt-scheduler-proxy*.

I hope it will help you.
Thanks

- Original Message -
From: Arman Khalatyan arm2...@gmail.com
To: users users@ovirt.org
Sent: Friday, November 28, 2014 11:51:40 AM
Subject: [ovirt-users] How to stick VM to hosts?

Hello, 
I have 2 VMs with the negative Affinity. 
I am looking for some way to force the VMs to run on different selected hosts. 
Assuming I have hosts c1-c8: can I tell a VM to run on c1-c8 but not on c2 and c4? 
Thanks, 
Arman. 

*** 
Dr. Arman Khalatyan eScience -SuperComputing Leibniz-Institut für Astrophysik 
Potsdam (AIP) An der Sternwarte 16, 14482 Potsdam, Germany 
*** 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] DataCenters/QoS/CPU

2014-11-26 Thread Artyom Lukianov
You need to specify the limit for host CPU load in %, so if you want your vm not to 
utilize more than 30% of the host CPU you need to set this value to 30.
Thanks

- Original Message -
From: Luciano Giacchetta luciano.giacche...@osit.com.ar
To: users@ovirt.org
Sent: Tuesday, November 25, 2014 5:47:17 PM
Subject: [ovirt-users] DataCenters/QoS/CPU

Hello, 

I want to implement a new CPU QoS but I don't know what data goes in the Limit: 
field. 
I saw this page, http://www.ovirt.org/Features/CPU_SLA but as far as I can see it only 
references http://libvirt.org/formatdomain.html#elementsCPUTuning 

Does the Limit field need to be filled with some options from cputune? 
Is there any related information? 

Ovirt: 3.5.1 
Centos: 6.6 

Regards, 

-- 
Luciano Giacchetta | Tecnología Informática 
+Linea: +54 (11) 6091 7601 +iNum: +883 510 009 903 145 
+eMail: luciano.giacche...@osit.com.ar +Site: http://www.osit.com.ar 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to configure 'Custom properties' in VM config?

2014-09-01 Thread Artyom Lukianov
If you want to add new custom properties, this guide can be very helpful: 
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.4/html/Administration_Guide/VDSM_hooks_defining_custom_properties.html
 Just change rhevm to engine in the case of oVirt.

- Original Message -
From: Grzegorz Szypa grzegorz.sz...@gmail.com
To: users@ovirt.org
Sent: Monday, September 1, 2014 2:32:59 PM
Subject: [ovirt-users] How to configure 'Custom properties' in VM config?

Hi. 

I find information, How to configure 'Custom properties' in VM config? 


-- 
Grzegorz Szypa 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Spam oVirt Power management

2014-06-26 Thread Artyom Lukianov
If I understand correctly, it must be in the vds table of the engine database: select 
pm_password from vds; (that is for the password).

- Original Message -
From: Maurice James mja...@media-node.com
To: users users@ovirt.org
Sent: Thursday, June 26, 2014 3:45:57 PM
Subject: [ovirt-users] Spam  oVirt Power management

Does anyone know where on the filesystem or in the database the engine stores the 
power management information? Username and IP info for DRAC? 



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM HostedEngie is down. Exist message: internal error Failed to acquire lock error -243

2014-06-09 Thread Artyom Lukianov
I just blocked the connection to storage for testing, and as a result I got this 
error: Failed to acquire lock, error -243, so I added it to the reproduction steps.
If you know other steps to reproduce this error without blocking the connection to 
storage, it would also be wonderful if you could provide them.
Thanks

- Original Message -
From: Andrew Lau and...@andrewklau.com
To: combuster combus...@archlinux.us
Cc: users users@ovirt.org
Sent: Monday, June 9, 2014 3:47:00 AM
Subject: Re: [ovirt-users] VM HostedEngie is down. Exist message: internal 
error Failed to acquire lock error -243

I just ran a few extra tests, I had a 2 host, hosted-engine running
for a day. They both had a score of 2400. Migrated the VM through the
UI multiple times, all worked fine. I then added the third host, and
that's when it all fell to pieces.
Other two hosts have a score of 0 now.

I'm also curious, in the BZ there's a note about:

where engine-vm block connection to storage domain(via iptables -I
INPUT -s sd_ip -j DROP)

What's the purpose for that?

On Sat, Jun 7, 2014 at 4:16 PM, Andrew Lau and...@andrewklau.com wrote:
 Ignore that, the issue came back after 10 minutes.

 I've even tried a gluster mount + nfs server on top of that, and the
 same issue has come back.

 On Fri, Jun 6, 2014 at 6:26 PM, Andrew Lau and...@andrewklau.com wrote:
 Interesting, I put it all into global maintenance, shut it all down
 for ~10 minutes, and it's regained its sanlock control and doesn't
 seem to have that issue coming up in the log.

 On Fri, Jun 6, 2014 at 4:21 PM, combuster combus...@archlinux.us wrote:
 It was pure NFS on a NAS device. They all had different ids (there were no
 redeployments of nodes before the problem occurred).

 Thanks Jirka.


 On 06/06/2014 08:19 AM, Jiri Moskovcak wrote:

 I've seen that problem in other threads; the common denominator was NFS
 on top of gluster. So if you have this setup, then it's a known problem.
 Otherwise, you should double check that your hosts have different ids;
 if not, they would be trying to acquire the same lock.

 --Jirka

 On 06/06/2014 08:03 AM, Andrew Lau wrote:

 Hi Ivan,

 Thanks for the in depth reply.

 I've only seen this happen twice, and only after I added a third host
 to the HA cluster. I wonder if that's the root problem.

 Have you seen this happen on all your installs or only just after your
 manual migration? It's a little frustrating this is happening as I was
 hoping to get this into a production environment. It was all working
 except that log message :(

 Thanks,
 Andrew


 On Fri, Jun 6, 2014 at 3:20 PM, combuster combus...@archlinux.us wrote:

 Hi Andrew,

 this is something that I saw in my logs too, first on one node and then on
 the other three. When that happened on all four of them, the engine was
 corrupted beyond repair.

 First of all, I think that message is saying that sanlock can't get a lock
 on the shared storage that you defined for the hosted engine during
 installation. I got this error when I tried to manually migrate the
 hosted engine. There is an unresolved bug there and I think it's related to
 this one:
 
 [Bug 1093366 - Migration of hosted-engine vm put target host score to zero]
 https://bugzilla.redhat.com/show_bug.cgi?id=1093366

 This is a blocker bug (or should be) for the self-hosted engine and, from my
 own experience with it, it shouldn't be used in a production environment
 (not until it's fixed).

 Nothing that I did could fix the fact that the score for the target
 node was zero: I tried to reinstall the node, reboot the node, restarted
 several services, tailed tons of logs etc., but to no avail. When only one
 node was left (the one that was actually running the hosted engine), I brought
 the engine's VM down gracefully (hosted-engine --vm-shutdown, I believe) and
 after that, when I tried to start the VM, it wouldn't load. Running VNC showed
 that the filesystem inside the VM was corrupted, and even after I ran fsck and
 finally got it started up, it was too badly damaged. I managed to start the
 engine itself (after repairing the postgresql service, which wouldn't
 start), but the database was damaged enough that it acted pretty weird
 (it showed that storage domains were down while the VMs were running fine, etc.).
 Luckily, I had already exported all of the VMs at the first sign of trouble and
 then installed ovirt-engine on a dedicated server and attached the export
 domain.

 So while it is a really useful feature, and it's working for the most part
 (i.e., automatic migration works), manually migrating the hosted-engine VM
 will lead to trouble.

 I hope that my experience with it will be of use to you. It happened to me
 two weeks ago; ovirt-engine was current (3.4.1) and there was no fix
 available.

 Regards,

 Ivan

 On 06/06/2014 05:12 AM, Andrew Lau wrote:

 Hi,

 I'm seeing this weird message in my engine log

 2014-06-06 03:06:09,380 INFO
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
 (DefaultQuartzScheduler_Worker-79) 

Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to become operational...

2014-05-25 Thread Artyom Lukianov
I see that I verified it on version 
ovirt-hosted-engine-setup-1.1.2-5.el6ev.noarch, so it should work from this 
version and above.
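To check which version you have installed, a plain rpm query should do (just a sketch):

rpm -q ovirt-hosted-engine-setup ovirt-hosted-engine-ha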
Thanks
- Original Message -
From: Andrew Lau and...@andrewklau.com
To: Artyom Lukianov aluki...@redhat.com
Cc: users users@ovirt.org, Sandro Bonazzola sbona...@redhat.com
Sent: Saturday, May 24, 2014 2:51:15 PM
Subject: Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to 
become operational...

Simply starting the ha-agents manually seems to bring up the VM;
however, the service doesn't show up in the chkconfig list.
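
For reference, what I'm doing manually is roughly this (a sketch, assuming the el6 service names ovirt-ha-broker and ovirt-ha-agent):

# start the HA services by hand
service ovirt-ha-broker start
service ovirt-ha-agent start
# register them so they start on boot
chkconfig ovirt-ha-broker on
chkconfig ovirt-ha-agent on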

The next host that gets configured works fine. What steps get
configured in that final stage that perhaps I could manually run
rather than rerolling for a third time?

On Sat, May 24, 2014 at 9:42 PM, Andrew Lau and...@andrewklau.com wrote:
 Hi,

 Are these patches merged into 3.4.1? I seem to be hitting this issue
 now, twice in a row.
 The second BZ is also marked as private.

 On Fri, May 2, 2014 at 1:21 AM, Artyom Lukianov aluki...@redhat.com wrote:
 There are a number of bugs about the same issue:
 https://bugzilla.redhat.com/show_bug.cgi?id=1080513
 https://bugzilla.redhat.com/show_bug.cgi?id=1088572 - the fix for this one is
 already merged, so if you take the latest oVirt it should include it.
 The one thing you can do until then is to restart the host and start the
 deployment process from the beginning.
 Thanks

 - Original Message -
 From: Tobias Honacker tob...@honacker.info
 To: users@ovirt.org
 Sent: Thursday, May 1, 2014 6:06:47 PM
 Subject: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to 
 become operational...

 Hi all,

 I hit this bug yesterday.

 Packages:

 ovirt-host-deploy-1.2.0-1.el6.noarch
 ovirt-engine-sdk-python-3.4.0.7-1.el6.noarch
 ovirt-hosted-engine-setup-1.1.2-1.el6.noarch
 ovirt-release-11.2.0-1.noarch
 ovirt-hosted-engine-ha-1.1.2-1.el6.noarch

 After setting up the hosted engine (running great), the setup was canceled with 
 this message:

 [ INFO  ] The VDSM Host is now operational
 [ ERROR ] Waiting for cluster 'Default' to become operational...
 [ ERROR ] Failed to execute stage 'Closing up': 'NoneType' object has no 
 attribute '__dict__'
 [ INFO  ] Stage: Clean up
 [ INFO  ] Stage: Pre-termination
 [ INFO  ] Stage: Termination

 What is the next step I have to take so that the HA features of the 
 hosted-engine will take care of keeping the VM alive?

 best regards
 tobias

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Recovering from an aborted hosted-engine --deploy

2014-05-19 Thread Artyom Lukianov
It should be enough just to destroy the VM as Jiri proposed.
Thanks

- Original Message -
From: Bob Doolittle b...@doolittle.us.com
To: Sandro Bonazzola sbona...@redhat.com, Jiri Moskovcak 
jmosk...@redhat.com, Bob Doolittle bdoolit...@teradici.com, users 
users@ovirt.org
Sent: Monday, May 19, 2014 4:32:52 PM
Subject: Re: [ovirt-users] Recovering from an aborted hosted-engine --deploy


On 05/19/2014 04:13 AM, Sandro Bonazzola wrote:
 Il 19/05/2014 09:14, Jiri Moskovcak ha scritto:
 On 05/16/2014 09:12 PM, Bob Doolittle wrote:
 Hi,

 I had an issue at the end of my hosted-engine --deploy.

 My VM was stuck during OS installation because I was unable to configure
 the network for some reason.

 So I chose the final option 3 to abort the deployment.

 Now, I seem to be stuck. If I try to re-run --deploy it says it's
 already installed.
 What's the exact error message? If it's just saying that there is already a 
 VM on that host, then you can run the following commands on the host to
 remove it:

 vdsClient -s 0 list
 vdsClient -s 0 destroy <ID of the VM you get from the first command>
 or hosted-engine --vm-poweroff.
 Waiting for the error message for understanding what's blocking you.


The error was:

 [ ERROR ] The following VMs has been found: 
 cee35901-04f7-49de-8bdc-709c1f5e3df7
 [ ERROR ] Failed to execute stage 'Environment setup': Cannot setup 
 Hosted Engine with other VMs running

I was trying to install the VM using a netinst ISO, which I think was 
too ambitious. Since I couldn't get the VM network up for some reason, I 
didn't get very far with the installation. I want to try again with a 
normal install ISO. Can I destroy the VM as Jiri suggested, and then 
re-run --deploy? Or do I need to do the cleanup actions Didi suggested?
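
If so, I guess the sequence would look roughly like this (just my sketch of the advice above; the ID is the one from my error message):

# find and destroy the leftover VM, then redeploy
vdsClient -s 0 list
vdsClient -s 0 destroy cee35901-04f7-49de-8bdc-709c1f5e3df7
hosted-engine --deploy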

Thanks,
  Bob

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to become operational...

2014-05-01 Thread Artyom Lukianov
There are a number of bugs about the same issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1080513
https://bugzilla.redhat.com/show_bug.cgi?id=1088572 - the fix for this one is 
already merged, so if you take the latest oVirt it should include it.
The one thing you can do until then is to restart the host and start the deployment 
process from the beginning.
Thanks

- Original Message -
From: Tobias Honacker tob...@honacker.info
To: users@ovirt.org
Sent: Thursday, May 1, 2014 6:06:47 PM
Subject: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to become 
operational...

Hi all, 

I hit this bug yesterday. 

Packages: 

ovirt-host-deploy-1.2.0-1.el6.noarch 
ovirt-engine-sdk-python-3.4.0.7-1.el6.noarch 
ovirt-hosted-engine-setup-1.1.2-1.el6.noarch 
ovirt-release-11.2.0-1.noarch 
ovirt-hosted-engine-ha-1.1.2-1.el6.noarch 

After setting up the hosted engine (running great), the setup was canceled with this 
message: 

[ INFO  ] The VDSM Host is now operational 
[ ERROR ] Waiting for cluster 'Default' to become operational... 
[ ERROR ] Failed to execute stage 'Closing up': 'NoneType' object has no 
attribute '__dict__' 
[ INFO  ] Stage: Clean up
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination 

What is the next step I have to take so that the HA features of the hosted-engine 
will take care of keeping the VM alive? 

best regards 
tobias 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users