Thanks Ruben.

I've realised that the performance drop happens each time a new VM is in the PROLOG state or when I create/clone an image, so I guess it is all related to disk access. As a test, I cloned an image and ran iotop on the server where OpenNebula and the NFS server are running.

This is a sample output:

   PID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO> COMMAND
3238 be/7 oneadmin 1783.88 K/s 1783.88 K/s 0.00 % 93.98 % cp -f /var/lib/one/datastores/1/9fedfe0d5cb02961~ne/datastores/1/72da3ba9d86fdc573b944c03253561ae

Sometimes there is another process (flush-147:0) with significant I/O usage:
   PID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO> COMMAND
 8028 be/4 root     1358.87 B/s    7.96 K/s  0.00 %  4.15 % [flush-147:0]


How could I apply "ionice -c 3" to that "cp" command? I would like to do it in a global, persistent way, so that all future cp operations (triggered by the creation or cloning of a VM or an image) run with a lower I/O priority, not just this one. Perhaps I should set ionice in the driver configuration by "exporting" a new global variable in /etc/one/defaultrc, in the same way the priority is set, shouldn't I? But how do I configure ionice so it interacts with the proper driver?
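Something along these lines is what I have in mind, although the IONICE variable and the idea that the drivers would pick it up are pure guesses on my part (the stock file only defines PRIORITY):

```shell
# /etc/one/defaultrc (sketch -- IONICE is a hypothetical variable;
# nothing in the stock file or in the drivers reads it today)

# CPU priority already supported by the drivers (19 = least favourable):
PRIORITY=19

# Hypothetical I/O settings for driver-spawned commands such as cp;
# "-c 3" is the idle I/O scheduling class:
IONICE="-c 3"
```

The driver scripts would then presumably have to prepend `ionice $IONICE` wherever they spawn cp, which I suppose means patching the TM driver scripts themselves.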


Thanks for your help.

Carlos.


On 11/13/2012 11:51 AM, Ruben S. Montero wrote:
Hi

OpenNebula drivers are the pieces of software that deal with the underlying subsystems. Each driver starts a new thread/process for each operation, which should inherit the driver's priority.

If you take a look at /etc/one/defaultrc, you can change the CPU priority assigned to the drivers. It defaults to 19, the least favourable to the process. You may want to try setting a different I/O scheduling class with ionice, or the blkio controller of cgroups... or any other tool for adjusting the I/O priority of a process.
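For example, with the util-linux ionice tool you can move an already running process into the idle class (a quick sketch; the PID would just be whatever cp shows up in iotop):

```shell
# Start a long-running process as a stand-in for the driver's cp ...
sleep 30 &
pid=$!

# ... and move it into the idle I/O scheduling class (class 3),
# so it only gets disk time when no other process wants it.
ionice -c 3 -p "$pid"

# Query the class back to confirm the change.
ionice -p "$pid"

kill "$pid"
```

This changes one process after the fact; making it persistent for every driver-spawned cp is a separate question.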

Cheers

Ruben




On Mon, Nov 12, 2012 at 5:59 PM, Carlos Jiménez <[email protected] <mailto:[email protected]>> wrote:

    Hi all,

    I have a server running OpenNebula 3.8 and acting as the NFS
    server for the storage, plus another host running KVM and acting
    as an NFS client for that storage. I have one VM running, and
    then I try to create another VM using Sunstone. While the new VM
    is being created, the performance of the running VM drops. I
    guess the OpenNebula I/O processes on the shared disk have a
    better priority/nice/ionice than the disk accesses of the
    already running VM. The question is: is there any way to control
    this so that running VMs don't lose performance? How do you
    reduce the priority/nice/ionice of new VM creation?

    Thanks in advance,

    Carlos.
    _______________________________________________
    Users mailing list
    [email protected] <mailto:[email protected]>
    http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




--
Ruben S. Montero, PhD
Project co-Lead and Chief Architect
OpenNebula - The Open Source Solution for Data Center Virtualization
www.OpenNebula.org <http://www.OpenNebula.org> | [email protected] <mailto:[email protected]> | @OpenNebula

