Michal Skrivanek wrote:
On 11 Jun 2014, at 14:31, noc wrote:

On 26-5-2014 16:22, Gilad Chaplik wrote:
Hi Nathanaël,

happy to assist :) I hope it will work on the first run:

1) install the proxy and ovirtsdk.
2) put the attached file in the right place (according to the docs: ".../plugins");
make sure to edit the file with your ovirt's ip, user@domain and PW
(a rough sketch of such a plugin follows the steps below).
3) restart the proxy service.
4) use the config tool (engine-config -s) to configure ovirt-engine:
* "ExternalSchedulerServiceURL"="http://<ip>:18781/"
* "ExternalSchedulerEnabled"=true
5) restart the ovirt-engine service.
6) under configure->cluster_policy, check that the weight function
memory_even_distribution was added (it should appear under manage policy
units or similar; you will see it in the main dialog as well).
7) clone/copy the current cluster's used cluster policy (probably none; prefer
it to have no balancing modules, to avoid conflicts), name it 'your_name' and
attach the memory_even_distribution weight (you can leave it as the only
module in the weight section, to avoid configuring factors).
8) replace the cluster's cluster policy with the newly created one.
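
For reference, a rough sketch of the general shape of such a plugin, in
Python with ovirtsdk; the attached file is the authoritative version, and
the class/method names, signature and score convention here are assumptions
based on the external scheduler docs:

    # Sketch only -- use the attached file. Method name, signature and
    # score convention are assumptions from the external scheduler docs.
    from ovirtsdk.api import API

    class memory_even_distribution():
        '''weigh hosts so free memory evens out across the cluster'''

        properties_validation = ''  # no custom parameters

        def do_score(self, hosts_ids, vm_id, args_map):
            # Edit to match step 2: your ovirt's ip, user@domain and PW.
            api = API(url='https://<ovirt-ip>',
                      username='user@domain',
                      password='PW',
                      insecure=True)
            scores = []
            for host_id in hosts_ids:
                host = api.hosts.get(id=host_id)
                stats = dict((s.get_name(),
                              s.get_values().get_value()[0].get_datum())
                             for s in host.statistics.list())
                # assumption: a higher score makes a host more attractive
                scores.append((host_id, int(stats['memory.free'])))
            api.disconnect()
            # the proxy collects the plugin's result from stdout
            print(scores)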

try it out and let me know how it goes :-)


Ok, progress of some sort :-)

I added the weight function to the cluster, and when I replace my DNS name
with localhost in ExternalSchedulerServiceURL, engine.log shows that it can
contact the scheduler. I expected a rebalance but nothing happened. Stopping
and starting a VM does provoke a reaction, though: an error :-(

From scheduler.log I can see that the engine contacts it and pushes some
information; the log also shows that some information is returned, and then
there is a big error message in the engine's log.

xmlrpc is infamous for not being able to handle numbers like
9223372010239819775
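
For illustration, XML-RPC's <int> element only covers signed 32-bit values,
so Python's standard marshaller refuses anything near 2**63 (a quick,
self-contained check):

    import xmlrpc.client  # xmlrpclib on Python 2

    try:
        xmlrpc.client.dumps((9223372010239819775,))
    except OverflowError as err:
        print("marshalling failed:", err)  # int exceeds XML-RPC limits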

Then oVirt should either not use that kind of numbers or not use xmlrpc.

Sorry, but that's a non-answer and doesn't help anybody.

How do we solve this problem? Do you need a BZ?

Joop

