Just to clarify: I see this regardless of the host OS (el7 or el6), and I run many VMs on a single host (up to 15) with many networks (up to 10). It is always the same: once vdsmd has consumed all available memory, the host becomes unreachable and VMs begin to migrate. The only way to stop this is to restart vdsmd.
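As a minimal sketch (not from vdsm itself), one way to chart this growth is to poll the process's VmRSS from /proc on a Linux host. The process-name match ("vdsm") and the sampling interval are assumptions for illustration:

```python
import os
import time


def rss_kb(pid):
    """Return a process's resident set size (VmRSS) in kB, read from /proc."""
    with open("/proc/%d/status" % pid) as status:
        for line in status:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    raise ValueError("no VmRSS entry for pid %d" % pid)


def find_pid(name):
    """Return the first pid whose comm matches name (e.g. 'vdsm'), or None.

    The comm value to match is an assumption; check /proc/<pid>/comm
    on your own host.
    """
    for entry in os.listdir("/proc"):
        if entry.isdigit():
            try:
                with open("/proc/%s/comm" % entry) as comm:
                    if comm.read().strip() == name:
                        return int(entry)
            except IOError:
                continue
    return None


if __name__ == "__main__":
    # Demonstrate on our own pid; on a real host, pass find_pid("vdsm")
    # and sample every few minutes to see whether RSS climbs steadily.
    pid = os.getpid()
    for _ in range(3):
        print(rss_kb(pid))
        time.sleep(1)
```

Logging the samples over a day or so makes it easy to confirm whether the growth is linear and whether it tracks the number of configured networks, as discussed below.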

On 30/03/2015 15:40, Kapetanakis Giannis wrote:
On 26/03/15 18:12, Darrell Budic wrote:
Yes, this script leaks quickly. It started out at an RSS of 21000ish, was already at 26744 a minute in, and about 5 minutes later it’s at 39384 and climbing.

I’ve been abusing a production server for those simple tests, but didn’t want to run valgrind against it right this minute. I did run it against the test.py script above though, and got this (fpaste.org didn’t like it, too long maybe?): http://tower.onholyground.com/valgrind-test.log

To comment on some other posts in this thread: I also see leaks on my test system, which is running CentOS 6.6, but it only has 3 VMs across 2 servers and 3 configured networks, and it leaks MUCH slower. I suspect people don’t notice this on test systems because they don’t have a lot of VMs/interfaces running and don’t leave them up for weeks at a time. That’s why I was running these tests on my production box: to have more VMs up.

I don't think it's related directly to the number of VMs running.
Maybe indirectly if it's related to the number of network interfaces (so vm interfaces add to the leak).

We've seen the leak on nodes under maintenance...

G
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

