Re: [PATCH v3] powerpc: kvm: make _PAGE_NUMA take effect

2014-04-14 Thread Alexander Graf


On 13.04.14 04:27, Liu ping fan wrote:

On Fri, Apr 11, 2014 at 10:03 PM, Alexander Graf ag...@suse.de wrote:

On 11.04.2014, at 13:45, Liu Ping Fan pingf...@linux.vnet.ibm.com wrote:


When we mark pte with _PAGE_NUMA we already call 
mmu_notifier_invalidate_range_start
and mmu_notifier_invalidate_range_end, which will mark existing guest hpte
entry as HPTE_V_ABSENT. Now we need to do that when we are inserting new
guest hpte entries.

What happens when we don't? Why do we need the check? Why isn't it done 
implicitly? What happens when we treat a NUMA marked page as non-present? Why 
does it work out for us?

Assume you have no idea what PAGE_NUMA is, but try to figure out what this 
patch does and whether you need to cherry-pick it into your downstream kernel. 
The description as is still is not very helpful for that. It doesn't even 
explain what really changes with this patch applied.


Yeah.  what about appending the following description?  Can it make
the context clear?
Guest should not setup a hpte for the page whose pte is marked with
_PAGE_NUMA, so on the host, the numa-fault mechanism can take effect
to check whether the page is placed correctly or not.


Try to come up with a text that answers the following questions in order:

  - What does _PAGE_NUMA mean?
  - How does page migration with _PAGE_NUMA work?
  - Why should we not map pages when _PAGE_NUMA is set?
  - Which part of what needs to be done did the previous _PAGE_NUMA 
patch address?

  - What's the situation without this patch?
  - Which scenario does this patch fix?

Once you have a text that answers those, you should have a good patch 
description :).


Alex

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Linux 3.13: BUG: soft lockup - CPU#1 stuck for 22s! [qemu-kvm:2653]

2014-04-14 Thread Jason S. Wagner
I didn't end up with any time to investigate this issue further due to 
Life Stuff.  I've just returned from vacation and upgraded all the 
software on my system.  I noticed I got a new version of spice and 
allowed a new kernel.  


Just wanted to report that everything seems fine now.

On Wed, Mar 12, 2014 at 12:38 PM, Jason S. Wagner 
jasonswag...@gmail.com wrote:

Hi Paolo,

Hmm, the symptoms don't seem to match entirely -- I already am on a 
64-bit kernel.  I didn't recognize high CPU utilization, but I also 
admit I wasn't looking for it.  Leaving the instance running doesn't 
eventually hang my system, but it did cause strange system behavior.


Some apps would stall while launching, others that were already 
running would not be able to perform some operations.  Nautilus, for 
example, couldn't render directory views and would display 
"Loading..." in the corner of the window.  I assume this is because 
CPU1 is locked by KVM.  Stopping the instance would allow any pending 
operations to complete; apps stuck launching all started at once, and 
Nautilus was again able to show me my local disk.


I believe it also prevented clean shutdown.  My keyboard's LEDs would 
stop responding and after allowing a reasonable amount of time, I 
reached for the reset button several times.


I'll reinstall Linux 3.13 tonight and try to confirm these details.

On Wed, Mar 12, 2014 at 7:08 AM, Paolo Bonzini pbonz...@redhat.com 
wrote:

On 11/03/2014 21:01, Jason S. Wagner wrote:

Hi all,

Over the weekend, Linux 3.13.6 was installed on my machine. When I
started gnome-boxes on Monday morning, my VM halted during boot. Only a
portion of the VESA BIOS init output was displayed on-screen before the
halt. Backtraces occasionally appeared in dmesg
(http://pastebin.com/uUiNLmJ6). Some other programs were prevented from
performing some tasks while the VM was running.

While investigating, I was directed to an earlier KVM bug report
(https://lkml.org/lkml/2014/2/11/487). I downgraded to Linux 3.12.9 and
rebooted, and now my VM is starting. Unfortunately, I'm very new to
Linux, so I'm not sure what the next steps would be.


Yeah, it's quite likely this is the issue you're facing.

And the author of the report at 
http://www.spinics.net/lists/kvm/msg100196.html also told me 
privately that a 64-bit kernel made it go away, so it's also likely to 
be the same thing.  Thomas, do you also see a soft lockup message?


Paolo




Re: [PATCH v3] powerpc: kvm: make _PAGE_NUMA take effect

2014-04-14 Thread liu ping fan
On Mon, Apr 14, 2014 at 2:43 PM, Alexander Graf ag...@suse.de wrote:

 On 13.04.14 04:27, Liu ping fan wrote:

 On Fri, Apr 11, 2014 at 10:03 PM, Alexander Graf ag...@suse.de wrote:

 On 11.04.2014, at 13:45, Liu Ping Fan pingf...@linux.vnet.ibm.com
 wrote:

 When we mark pte with _PAGE_NUMA we already call
 mmu_notifier_invalidate_range_start
 and mmu_notifier_invalidate_range_end, which will mark existing guest
 hpte
 entry as HPTE_V_ABSENT. Now we need to do that when we are inserting new
 guest hpte entries.

 What happens when we don't? Why do we need the check? Why isn't it done
 implicitly? What happens when we treat a NUMA marked page as non-present?
 Why does it work out for us?

 Assume you have no idea what PAGE_NUMA is, but try to figure out what
 this patch does and whether you need to cherry-pick it into your downstream
 kernel. The description as is still is not very helpful for that. It doesn't
 even explain what really changes with this patch applied.

 Yeah.  what about appending the following description?  Can it make
 the context clear?
 Guest should not setup a hpte for the page whose pte is marked with
 _PAGE_NUMA, so on the host, the numa-fault mechanism can take effect
 to check whether the page is placed correctly or not.


 Try to come up with a text that answers the following questions in order:

I divide them into 3 groups, and answer them in 3 sections. Seems that
it tells the whole story :)
Please take a look.

   - What does _PAGE_NUMA mean?
Group 1 - section 2

   - How does page migration with _PAGE_NUMA work?
   - Why should we not map pages when _PAGE_NUMA is set?
Group 2 - section 1
(Note: for the 1st question in this group, I am not sure about the
details, except that we can fix numa balancing by moving the task or
moving the page.  So I comment as "migration should be involved to cut
down the distance between the cpu and pages")

   - Which part of what needs to be done did the previous _PAGE_NUMA patch
 address?
   - What's the situation without this patch?
   - Which scenario does this patch fix?

Group 3 - section 3


Numa fault is a method which helps to achieve auto numa balancing.
When such a page fault takes place, the page fault handler will check
whether the page is placed correctly. If not, migration should be
involved to cut down the distance between the cpu and the page.

A pte with _PAGE_NUMA helps to implement numa fault. It means the MMU
is not allowed to access the page directly, so a page fault is triggered
and the numa fault handler gets the opportunity to run the checker.

As for the access of the MMU, we need special handling for guests on
powernv. When we mark a pte with _PAGE_NUMA, we already call the
mmu_notifier to invalidate it in the guest's htab, but when we try to
re-insert them, we first try to fix it in real mode. Only after this
fails do we fall back to virt mode, and most importantly, we run the
numa fault handler in virt mode.  This patch guards the real-mode path
to ensure that if a pte is marked with _PAGE_NUMA, it will NOT be fixed
in real mode; instead, it will be fixed in virt mode and have the
opportunity to be checked for placement.


Thx,
Fan


 Once you have a text that answers those, you should have a good patch
 description :).

 Alex




Re: [PATCH v3] powerpc: kvm: make _PAGE_NUMA take effect

2014-04-14 Thread Alexander Graf


On 14.04.14 10:08, liu ping fan wrote:

On Mon, Apr 14, 2014 at 2:43 PM, Alexander Graf ag...@suse.de wrote:

On 13.04.14 04:27, Liu ping fan wrote:

On Fri, Apr 11, 2014 at 10:03 PM, Alexander Graf ag...@suse.de wrote:

On 11.04.2014, at 13:45, Liu Ping Fan pingf...@linux.vnet.ibm.com
wrote:


When we mark pte with _PAGE_NUMA we already call
mmu_notifier_invalidate_range_start
and mmu_notifier_invalidate_range_end, which will mark existing guest
hpte
entry as HPTE_V_ABSENT. Now we need to do that when we are inserting new
guest hpte entries.

What happens when we don't? Why do we need the check? Why isn't it done
implicitly? What happens when we treat a NUMA marked page as non-present?
Why does it work out for us?

Assume you have no idea what PAGE_NUMA is, but try to figure out what
this patch does and whether you need to cherry-pick it into your downstream
kernel. The description as is still is not very helpful for that. It doesn't
even explain what really changes with this patch applied.


Yeah.  what about appending the following description?  Can it make
the context clear?
Guest should not setup a hpte for the page whose pte is marked with
_PAGE_NUMA, so on the host, the numa-fault mechanism can take effect
to check whether the page is placed correctly or not.


Try to come up with a text that answers the following questions in order:


I divide them into 3 groups, and answer them in 3 sections. Seems that
it tells the whole story :)
Please take a look.


   - What does _PAGE_NUMA mean?

Group 1 - section 2


   - How does page migration with _PAGE_NUMA work?
   - Why should we not map pages when _PAGE_NUMA is set?

Group 2 - section 1
(Note: for the 1st question in this group, I am not sure about the
details, except that we can fix numa balancing by moving the task or
moving the page.  So I comment as "migration should be involved to cut
down the distance between the cpu and pages")


   - Which part of what needs to be done did the previous _PAGE_NUMA patch
address?
   - What's the situation without this patch?
   - Which scenario does this patch fix?


Group 3 - section 3


Numa fault is a method which helps to achieve auto numa balancing.
When such a page fault takes place, the page fault handler will check
whether the page is placed correctly. If not, migration should be
involved to cut down the distance between the cpu and the page.

A pte with _PAGE_NUMA helps to implement numa fault. It means the MMU
is not allowed to access the page directly, so a page fault is triggered
and the numa fault handler gets the opportunity to run the checker.

As for the access of the MMU, we need special handling for guests on
powernv. When we mark a pte with _PAGE_NUMA, we already call the
mmu_notifier to invalidate it in the guest's htab, but when we try to
re-insert them, we first try to fix it in real mode. Only after this
fails do we fall back to virt mode, and most importantly, we run the
numa fault handler in virt mode.  This patch guards the real-mode path
to ensure that if a pte is marked with _PAGE_NUMA, it will NOT be fixed
in real mode; instead, it will be fixed in virt mode and have the
opportunity to be checked for placement.


s/fixed/mapped/g

Otherwise works as patch description for me :).


Alex



ssh from host to guest using qemu to boot VM

2014-04-14 Thread Jobin Raju George
Hey!

How do I setup ssh from the host to the guest using qemu?

1) I am able to use port redirection when I boot the VM without any
special parameter (explained in point 2) as follows:

/usr/bin/qemu-system-x86_64 -hda ubuntu1204 -m 512 -redir tcp:::8001

2) But when I try to boot using the following

/usr/bin/qemu-system-x86_64 \
-m 1024 \
-name vserialtest \
-cdrom ubuntu-12.04-desktop-amd64.iso \
-hda ubuntu1204-virtio-serial \
-chardev socket,host=localhost,port=,server,nowait,id=port1-char \
-device virtio-serial \
-device virtserialport,id=port1,chardev=port1-char,name=org.fedoraproject.port.0
\
-net user,hostfwd=tcp:::8001

I get the following error and the VM does not boot:

qemu-system-x86_64: -net user,hostfwd=tcp:::8001: invalid host
forwarding rule 'tcp:::8001'
qemu-system-x86_64: -net user,hostfwd=tcp:::8001: Device 'user'
could not be initialized

Please note that I am able to boot the VM without the -net parameter
without any issues, however, I want to setup ssh from the host to the
guest. ssh from guest to host works fine as expected.

-- 

Thanks and regards,
Jobin Raju George
Final Year, Information Technology
College of Engineering Pune
Alternate e-mail: georgejr10...@coep.ac.in


Re: ssh from host to guest using qemu to boot VM

2014-04-14 Thread Fam Zheng
On Mon, 04/14 17:14, Jobin Raju George wrote:
 Hey!
 
 How do I setup ssh from the host to the guest using qemu?
 
 1) I am able to use port redirection when I boot the VM without any
 special parameter(explained in point 2) as follows:
 
 /usr/bin/qemu-system-x86_64 -hda ubuntu1204 -m 512 -redir tcp:::8001
 
 2) But when I try to boot using the following
 
 /usr/bin/qemu-system-x86_64 \
 -m 1024 \
 -name vserialtest \
 -cdrom ubuntu-12.04-desktop-amd64.iso \
 -hda ubuntu1204-virtio-serial \
 -chardev socket,host=localhost,port=,server,nowait,id=port1-char \
 -device virtio-serial \
 -device 
 virtserialport,id=port1,chardev=port1-char,name=org.fedoraproject.port.0
 \
 -net user,hostfwd=tcp:::8001
 
 I get the following error and the VM does not boot:
 
 qemu-system-x86_64: -net user,hostfwd=tcp:::8001: invalid host
 forwarding rule 'tcp:::8001'
 qemu-system-x86_64: -net user,hostfwd=tcp:::8001: Device 'user'
 could not be initialized

Format:
hostfwd=[tcp|udp]:[hostaddr]:hostport-[guestaddr]:guestport

Try:

-net user,hostfwd=tcp::-:8001

Fam
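For reference, a complete rule with concrete port numbers — the host port 2222 and guest port 22 below are assumed for illustration (the original commands had their port numbers elided), with sshd presumed to listen on 22 inside the guest:

```shell
# Full format: hostfwd=[tcp|udp]:[hostaddr]:hostport-[guestaddr]:guestport
# Assumed ports for illustration: host 2222 -> guest 22 (sshd).
HOSTFWD="tcp::2222-:22"
# qemu-system-x86_64 -hda guest.img -m 512 \
#     -net nic -net user,hostfwd=${HOSTFWD}
# Then, from the host:
#     ssh -p 2222 user@localhost
echo "hostfwd=${HOSTFWD}"
```

Note that with `-net user`, a `-net nic` is also needed so the guest actually has a NIC attached to the user-mode stack.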

 
 Please note that I am able to boot the VM without the -net parameter
 without any issues, however, I want to setup ssh from the host to the
 guest. ssh from guest to host works fine as expected.
 
 -- 
 
 Thanks and regards,
 Jobin Raju George
 Final Year, Information Technology
 College of Engineering Pune
 Alternate e-mail: georgejr10...@coep.ac.in


Re: ssh from host to guest using qemu to boot VM

2014-04-14 Thread Jobin Raju George
I retried using:

/usr/bin/qemu-system-x86_64 \
-m 1024 \
-name vserialtest \
-cdrom ubuntu-12.04-desktop-amd64.iso -hda ubuntu1204-virtio-serial \
-chardev socket,host=localhost,port=,server,nowait,id=port1-char \
-device virtio-serial \
-device virtserialport,id=port1,chardev=port1-char,name=org.fedoraproject.port.0
\
-net user,hostfwd=tcp:127.0.0.1:-:8001

but get the following:

qemu-system-x86_64: -net user,hostfwd=tcp:127.0.0.1:-:8001: could
not set up host forwarding rule 'tcp:127.0.0.1:-:8001'
qemu-system-x86_64: -net user,hostfwd=tcp:127.0.0.1:-:8001: Device
'user' could not be initialized


Also tried:

-net user,hostfwd=tcp::-:8001

but get the following error:

qemu-system-x86_64: -net user,hostfwd=tcp::-:8001: could not set
up host forwarding rule 'tcp::-:8001'
qemu-system-x86_64: -net user,hostfwd=tcp::-:8001: Device 'user'
could not be initialized


On Mon, Apr 14, 2014 at 5:31 PM, Fam Zheng f...@redhat.com wrote:
 On Mon, 04/14 17:14, Jobin Raju George wrote:
 Hey!

 How do I setup ssh from the host to the guest using qemu?

 1) I am able to use port redirection when I boot the VM without any
 special parameter(explained in point 2) as follows:

 /usr/bin/qemu-system-x86_64 -hda ubuntu1204 -m 512 -redir tcp:::8001

 2) But when I try to boot using the following

 /usr/bin/qemu-system-x86_64 \
 -m 1024 \
 -name vserialtest \
 -cdrom ubuntu-12.04-desktop-amd64.iso \
 -hda ubuntu1204-virtio-serial \
 -chardev socket,host=localhost,port=,server,nowait,id=port1-char \
 -device virtio-serial \
 -device 
 virtserialport,id=port1,chardev=port1-char,name=org.fedoraproject.port.0
 \
 -net user,hostfwd=tcp:::8001

 I get the following error and the VM does not boot:

 qemu-system-x86_64: -net user,hostfwd=tcp:::8001: invalid host
 forwarding rule 'tcp:::8001'
 qemu-system-x86_64: -net user,hostfwd=tcp:::8001: Device 'user'
 could not be initialized

 Format:
 hostfwd=[tcp|udp]:[hostaddr]:hostport-[guestaddr]:guestport

 Try:

 -net user,hostfwd=tcp::-:8001

 Fam


 Please note that I am able to boot the VM without the -net parameter
 without any issues, however, I want to setup ssh from the host to the
 guest. ssh from guest to host works fine as expected.

 --

 Thanks and regards,
 Jobin Raju George
 Final Year, Information Technology
 College of Engineering Pune
 Alternate e-mail: georgejr10...@coep.ac.in



-- 
Thanks and regards,

Jobin Raju George

Final Year, Information Technology

College of Engineering Pune

Alternate e-mail: georgejr10...@coep.ac.in


Re: ssh from host to guest using qemu to boot VM

2014-04-14 Thread Fam Zheng
On Mon, 04/14 17:36, Jobin Raju George wrote:
 I retried using:
 
 /usr/bin/qemu-system-x86_64 \
 -m 1024 \
 -name vserialtest \
 -cdrom ubuntu-12.04-desktop-amd64.iso -hda ubuntu1204-virtio-serial \
 -chardev socket,host=localhost,port=,server,nowait,id=port1-char \
 -device virtio-serial \
 -device 
 virtserialport,id=port1,chardev=port1-char,name=org.fedoraproject.port.0
 \
 -net user,hostfwd=tcp:127.0.0.1:-:8001
 
 but get the following:
 
 qemu-system-x86_64: -net user,hostfwd=tcp:127.0.0.1:-:8001: could
 not set up host forwarding rule 'tcp:127.0.0.1:-:8001'
 qemu-system-x86_64: -net user,hostfwd=tcp:127.0.0.1:-:8001: Device
 'user' could not be initialized
 
 
 Also tried:
 
 -net user,hostfwd=tcp::-:8001
 
 but get the following error:
 
 qemu-system-x86_64: -net user,hostfwd=tcp::-:8001: could not set
 up host forwarding rule 'tcp::-:8001'
 qemu-system-x86_64: -net user,hostfwd=tcp::-:8001: Device 'user'
 could not be initialized

Is the port busy? What does netstat -ltn say?

Fam
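One quick way to check whether something is already listening on the host port — a sketch using bash's `/dev/tcp` pseudo-device so it works even without netstat; the port number 2222 is only an example, since the actual port was elided from the commands above:

```shell
# Print "busy" if something is listening on localhost:$1, else "free".
# Uses bash's /dev/tcp pseudo-device; port 2222 is just an example.
port_state() {
  if (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null; then
    exec 3>&-
    echo busy
  else
    echo free
  fi
}
port_state 2222
```

If the port reports busy, either stop the conflicting service or pick another host port for the hostfwd rule.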

 
 
 On Mon, Apr 14, 2014 at 5:31 PM, Fam Zheng f...@redhat.com wrote:
  On Mon, 04/14 17:14, Jobin Raju George wrote:
  Hey!
 
  How do I setup ssh from the host to the guest using qemu?
 
  1) I am able to use port redirection when I boot the VM without any
  special parameter(explained in point 2) as follows:
 
  /usr/bin/qemu-system-x86_64 -hda ubuntu1204 -m 512 -redir tcp:::8001
 
  2) But when I try to boot using the following
 
  /usr/bin/qemu-system-x86_64 \
  -m 1024 \
  -name vserialtest \
  -cdrom ubuntu-12.04-desktop-amd64.iso \
  -hda ubuntu1204-virtio-serial \
  -chardev socket,host=localhost,port=,server,nowait,id=port1-char \
  -device virtio-serial \
  -device 
  virtserialport,id=port1,chardev=port1-char,name=org.fedoraproject.port.0
  \
  -net user,hostfwd=tcp:::8001
 
  I get the following error and the VM does not boot:
 
  qemu-system-x86_64: -net user,hostfwd=tcp:::8001: invalid host
  forwarding rule 'tcp:::8001'
  qemu-system-x86_64: -net user,hostfwd=tcp:::8001: Device 'user'
  could not be initialized
 
  Format:
  hostfwd=[tcp|udp]:[hostaddr]:hostport-[guestaddr]:guestport
 
  Try:
 
  -net user,hostfwd=tcp::-:8001
 
  Fam
 
 
  Please note that I am able to boot the VM without the -net parameter
  without any issues, however, I want to setup ssh from the host to the
  guest. ssh from guest to host works fine as expected.
 
  --
 
  Thanks and regards,
  Jobin Raju George
  Final Year, Information Technology
  College of Engineering Pune
  Alternate e-mail: georgejr10...@coep.ac.in
 
 
 
 -- 
 Thanks and regards,
 
 Jobin Raju George
 
 Final Year, Information Technology
 
 College of Engineering Pune
 
 Alternate e-mail: georgejr10...@coep.ac.in


Re: [RFC PATCH v2 6/6] KVM: emulate: remove memopp and rip_relative

2014-04-14 Thread Bandan Das
Paolo Bonzini pbonz...@redhat.com writes:

 On 10/04/2014 20:03, Bandan Das wrote:
  /* Fields above regs are cleared together. */

 This comment is not accurate anymore after patch 4.  Since you're
 fixing it, please add another comment saying where the cleared fields
 start, too.

Oops, I forgot to change the comment appropriately. 
Will fix in next version.

Bandan

 +ctxt->memop.addr.mem.ea = (u32)ctxt->memop.addr.mem.ea;

 This is missing if (ctxt->ad_bytes != 8).

 +if (rip_relative)
 +ctxt->memop.addr.mem.ea += ctxt->_eip;

 Paolo


[Bug 73721] KVM hv-time

2014-04-14 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=73721

--- Comment #1 from flyse...@gmx.de ---
I can confirm the problem. It happens on my gentoo system as well. Let me know,
if you want me to create debug logs or similar.

-- 
You are receiving this mail because:
You are watching the assignee of the bug.


[Bug 73721] KVM hv-time

2014-04-14 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=73721

Rasmus Eskola fruit...@gmail.com changed:

   What|Removed |Added

 CC||fruit...@gmail.com

--- Comment #2 from Rasmus Eskola fruit...@gmail.com ---
I can also confirm this on my Arch Linux system. Near bare-metal performance on
3.13 kernel with hv-time, but getting these error messages in dmesg on 3.14.



KVM call agenda for 2014-04-15

2014-04-14 Thread Juan Quintela

Hi

Please, send any topic that you are interested in covering.

Thanks, Juan.

Call details:

15:00 CEST
13:00 UTC
09:00 EDT

Every two weeks

If you need phone number details,  contact me privately.


[GIT PULL] KVM fixes for 3.15-rc1

2014-04-14 Thread Marcelo Tosatti

Linus,

Please pull from

git://git.kernel.org/pub/scm/virt/kvm/kvm.git master

To receive the following KVM fixes:

- Fix for guest triggerable BUG_ON (CVE-2014-0155)
- CR4.SMAP support
- Spurious WARN_ON() fix

Feng Wu (5):
  KVM: Remove SMAP bit from CR4_RESERVED_BITS
  KVM: Add SMAP support when setting CR4
  KVM: Disable SMAP for guests in EPT realmode and EPT unpaging mode
  KVM: expose SMAP feature to guest
  KVM: Rename variable smep to cr4_smep

Marcelo Tosatti (1):
  KVM: x86: remove WARN_ON from get_kernel_ns()

Paolo Bonzini (2):
  KVM: ioapic: fix assignment of ioapic->rtc_status.pending_eoi 
(CVE-2014-0155)
  KVM: ioapic: try to recover if pending_eoi goes out of range

 arch/x86/include/asm/kvm_host.h |2 -
 arch/x86/kvm/cpuid.c|2 -
 arch/x86/kvm/cpuid.h|8 +++
 arch/x86/kvm/mmu.c  |   40 ++--
 arch/x86/kvm/mmu.h  |   44 
 arch/x86/kvm/paging_tmpl.h  |2 -
 arch/x86/kvm/vmx.c  |   11 +-
 arch/x86/kvm/x86.c  |   10 +++--
 virt/kvm/ioapic.c   |   25 +-
 9 files changed, 114 insertions(+), 30 deletions(-)


Re:

2014-04-14 Thread Marcus White
Hello,
A friendly bump to see if anyone has any ideas:-)

Cheers!

On Sun, Apr 13, 2014 at 2:01 PM, Marcus White
roastedseawee...@gmail.com wrote:
 Hello,
 I had some basic questions regarding KVM, and would appreciate any help:)

 I have been reading about the KVM architecture, and as I understand
 it, the guest shows up as a regular process in the host itself..

 I had some questions around that..

 1.  Are the guest processes implemented as a control group within the
 overall VM process itself? Is the VM a kernel process or a user
 process?

 2. Is there a way for me to force some specific CPU/s to a guest, and
 those CPUs to be not used for any work on the host itself?  Pinning is
 just making sure the vCPU runs on the same physical CPU always, I am
 looking for something more than that..

 3. If the host is compiled as a non-preemptible kernel, kernel
 processes run to completion until they give up the CPU themselves. In
 the context of a guest, I am trying to understand what that would mean
 in the context of KVM and guest VMs. If the VM is a user process, it
 means nothing, but I wasn't sure as per (1).

 Cheers!
 M


[PATCH 0/3] migration dirty bitmap support ARMv7

2014-04-14 Thread Mario Smarduch

The patch set implements migration dirty bitmap support for arm-kvm.
Splitting of pmd's to pte's, as suggested, is implemented on demand
when migration is started. 

I tested it on 4-way SMP ARMv7, with SMP guests:
2GB VMs with dirty shared memory segments up to 1.8 GB 
and relatively fast update rates of 16MB/5ms. 

Next course of action would be rmap support, which 
scales much better on bigger systems. One thing
that confused me: x86 migrations were sometimes
10 to 15 times slower; I think it must be something 
wrong with my configuration.


Mario Smarduch (3):
  headers for migration dirtybitmap support
  initial write protect of VM address space and on dirty log read
  hooks to interface with QEMU for initial write protect, dirty log read

 arch/arm/include/asm/kvm_host.h |9 +++
 arch/arm/kvm/arm.c  |   62 ++-
 arch/arm/kvm/mmu.c  |  158 ++-
 3 files changed, 226 insertions(+), 3 deletions(-)
-- 
1.7.9.5



[PATCH 3/3] migration dirtybitmap support ARMv7

2014-04-14 Thread Mario Smarduch

- support QEMU interface for initial VM write protect
- QEMU dirty bitmap log retrieval


Signed-off-by: Mario Smarduch m.smard...@samsung.com
---
 arch/arm/kvm/arm.c |   62 +++-
 1 file changed, 61 insertions(+), 1 deletion(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index bd18bb8..9076e3d 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -241,6 +241,8 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
   const struct kvm_memory_slot *old,
   enum kvm_mr_change change)
 {
+   if ((change != KVM_MR_DELETE) && (mem->flags & KVM_MEM_LOG_DIRTY_PAGES))
+   kvm_mmu_slot_remove_write_access(kvm, mem->slot);
 }
 
 void kvm_arch_flush_shadow_all(struct kvm *kvm)
@@ -773,9 +775,67 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
}
 }
 
+/*
+ * Walks the memslot dirty bitmap, write protects dirty pages for the next
+ * round, and stores the dirty bitmap for QEMU retrieval.
+ */
 int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
 {
-   return -EINVAL;
+   int r;
+   struct kvm_memory_slot *memslot;
+   unsigned long n, i;
+   unsigned long *dirty_bitmap;
+   unsigned long *dirty_bitmap_buffer;
+   bool is_dirty = false;
+   gfn_t offset;
+
+   mutex_lock(&kvm->slots_lock);
+   r = -EINVAL;
+
+   if (log->slot >= KVM_USER_MEM_SLOTS)
+   goto out;
+
+   memslot = id_to_memslot(kvm->memslots, log->slot);
+   dirty_bitmap = memslot->dirty_bitmap;
+
+   r = -ENOENT;
+   if (!dirty_bitmap)
+   goto out;
+
+   n = kvm_dirty_bitmap_bytes(memslot);
+   dirty_bitmap_buffer = dirty_bitmap + n / sizeof(long);
+   memset(dirty_bitmap_buffer, 0, n);
+
+   spin_lock(&kvm->mmu_lock);
+   for (i = 0; i < n / sizeof(long); i++) {
+   unsigned long mask;
+
+   if (!dirty_bitmap[i])
+   continue;
+
+   is_dirty = true;
+   offset = i * BITS_PER_LONG;
+   kvm_mmu_write_protect_pt_masked(kvm, memslot, offset,
+   dirty_bitmap[i]);
+   mask = dirty_bitmap[i];
+   dirty_bitmap_buffer[i] = mask;
+   dirty_bitmap[i] = 0;
+   }
+
+   if (is_dirty)
+   kvm_tlb_flush_vmid(kvm);
+
+   spin_unlock(&kvm->mmu_lock);
+   r = -EFAULT;
+
+   if (copy_to_user(log->dirty_bitmap, dirty_bitmap_buffer, n))
+   goto out;
+
+   r = 0;
+out:
+   mutex_unlock(kvm-slots_lock);
+   return r;
 }
 
 static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm,
-- 
1.7.9.5



[PATCH 2/3] migration dirtybitmap support ARMv7

2014-04-14 Thread Mario Smarduch

- Support write protection of entire VM address space
- Split pmds section in migration mode
- Write protect dirty pages on Dirty log read 

Signed-off-by: Mario Smarduch m.smard...@samsung.com
---
 arch/arm/kvm/mmu.c |  158 +++-
 1 file changed, 156 insertions(+), 2 deletions(-)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 7789857..502e776 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -56,6 +56,13 @@ static void kvm_tlb_flush_vmid_ipa(struct kvm *kvm, 
phys_addr_t ipa)
kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, kvm, ipa);
 }
 
+void kvm_tlb_flush_vmid(struct kvm *kvm)
+{
+   phys_addr_t x;
+   /* based on the function description the 2nd argument is irrelevant */
+   kvm_tlb_flush_vmid_ipa(kvm, x);
+}
+
 static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache,
  int min, int max)
 {
@@ -639,6 +646,143 @@ static bool transparent_hugepage_adjust(pfn_t *pfnp, 
phys_addr_t *ipap)
return false;
 }
 
+/*
+ * Called when QEMU retrieves the dirty log and write protects dirty pages
+ * for the next QEMU call to retrieve the dirty log.
+ */
+void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
+   struct kvm_memory_slot *slot,
+   gfn_t gfn_offset, unsigned long mask)
+{
+   phys_addr_t ipa;
+   pgd_t *pgdp = kvm->arch.pgd, *pgd;
+   pud_t *pud;
+   pmd_t *pmd;
+   pte_t *pte, new_pte;
+
+   while (mask) {
+   ipa = (slot->base_gfn + gfn_offset + __ffs(mask)) << PAGE_SHIFT;
+   pgd = pgdp + pgd_index(ipa);
+   if (!pgd_present(*pgd))
+   goto update_mask;
+   pud = pud_offset(pgd, ipa);
+   if (!pud_present(*pud))
+   goto update_mask;
+   pmd = pmd_offset(pud, ipa);
+   if (!pmd_present(*pmd))
+   goto update_mask;
+   pte = pte_offset_kernel(pmd, ipa);
+   if (!pte_present(*pte))
+   goto update_mask;
+   if ((*pte & L_PTE_S2_RDWR) == L_PTE_S2_RDONLY)
+   goto update_mask;
+   new_pte = pfn_pte(pte_pfn(*pte), PAGE_S2);
+   *pte = new_pte;
+update_mask:
+   mask &= mask - 1;
+   }
+}
+
+/*
+ * During migration, split PMDs into PTEs so dirty pages can be tracked at
+ * page granularity. Without splitting, even light guest execution dirties
+ * whole huge pages and prevents migration from converging.
+ */
+bool split_pmd(struct kvm *kvm, pmd_t *pmd, u64 addr)
+{
+   struct page *page;
+   pfn_t pfn = pmd_pfn(*pmd);
+   pte_t *pte, new_pte;
+   int i;
+
+   page = alloc_page(GFP_KERNEL);
+   if (page == NULL)
+   return false;
+
+   pte = page_address(page);
+   for (i = 0; i < PMD_SIZE/PAGE_SIZE; i++) {
+   new_pte = pfn_pte(pfn+i, PAGE_S2);
+   pte[i] = new_pte;
+   }
+   kvm_clean_pte(pte);
+   pmd_populate_kernel(NULL, pmd, pte);
+
+   /*
+    * flush the whole TLB for the VM, relying on hardware broadcast
+    */
+   kvm_tlb_flush_vmid(kvm);
+   get_page(virt_to_page(pte));
+   return true;
+}
+
+/*
+ * Called from QEMU when migration dirty logging is started: write protect
+ * the currently mapped set of pages. Subsequent write faults are tracked
+ * through write protection applied again each time the dirty log is read.
+ */
+
+void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot)
+{
+   pgd_t *pgd;
+   pud_t *pud;
+   pmd_t *pmd;
+   pte_t *pte, new_pte;
+   pgd_t *pgdp = kvm->arch.pgd;
+   struct kvm_memory_slot *memslot = id_to_memslot(kvm->memslots, slot);
+   u64 start = memslot->base_gfn << PAGE_SHIFT;
+   u64 end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
+   u64 addr = start, addr1;
+
+   spin_lock(&kvm->mmu_lock);
+   kvm->arch.migration_in_progress = true;
+   while (addr < end) {
+   if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
+   kvm_tlb_flush_vmid(kvm);
+   cond_resched_lock(&kvm->mmu_lock);
+   }
+
+   pgd = pgdp + pgd_index(addr);
+   if (!pgd_present(*pgd)) {
+   addr = pgd_addr_end(addr, end);
+   continue;
+   }
+
+   pud = pud_offset(pgd, addr);
+   if (pud_huge(*pud) || !pud_present(*pud)) {
+   addr = pud_addr_end(addr, end);
+   continue;
+   }
+
+   pmd = pmd_offset(pud, addr);
+   if (!pmd_present(*pmd)) {
+   addr = pmd_addr_end(addr, end);
+   continue;
+   }
+
+   if (kvm_pmd_huge(*pmd)) {
+   if (!split_pmd(kvm, pmd, addr)) {
+   kvm->arch.migration_in_progress = false;
+   spin_unlock(&kvm->mmu_lock);
+   return;
+   }
+   addr = pmd_addr_end(addr, end);

[PATCH 1/3] migration dirtybitmap support ARMv7

2014-04-14 Thread Mario Smarduch

Headers for migration, prototypes

Signed-off-by: Mario Smarduch m.smard...@samsung.com
---
 arch/arm/include/asm/kvm_host.h |9 +
 1 file changed, 9 insertions(+)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 098f7dd..9b71f13 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -67,6 +67,7 @@ struct kvm_arch {
 
/* Interrupt controller */
struct vgic_distvgic;
+   int migration_in_progress;
 };
 
 #define KVM_NR_MEM_OBJS 40
@@ -228,4 +229,12 @@ int kvm_perf_teardown(void);
 u64 kvm_arm_timer_get_reg(struct kvm_vcpu *, u64 regid);
 int kvm_arm_timer_set_reg(struct kvm_vcpu *, u64 regid, u64 value);
 
+void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot);
+
+void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
+   struct kvm_memory_slot *slot,
+   gfn_t gfn_offset, unsigned long mask);
+
+void kvm_tlb_flush_vmid(struct kvm *kvm);
+
 #endif /* __ARM_KVM_HOST_H__ */
-- 
1.7.9.5



Re: [PATCH v3] powerpc: kvm: make _PAGE_NUMA take effect

2014-04-14 Thread liu ping fan
On Mon, Apr 14, 2014 at 2:43 PM, Alexander Graf ag...@suse.de wrote:

 On 13.04.14 04:27, Liu ping fan wrote:

 On Fri, Apr 11, 2014 at 10:03 PM, Alexander Graf ag...@suse.de wrote:

 On 11.04.2014, at 13:45, Liu Ping Fan pingf...@linux.vnet.ibm.com
 wrote:

 When we mark pte with _PAGE_NUMA we already call
 mmu_notifier_invalidate_range_start
 and mmu_notifier_invalidate_range_end, which will mark existing guest
 hpte
 entry as HPTE_V_ABSENT. Now we need to do that when we are inserting new
 guest hpte entries.

 What happens when we don't? Why do we need the check? Why isn't it done
 implicitly? What happens when we treat a NUMA marked page as non-present?
 Why does it work out for us?

 Assume you have no idea what PAGE_NUMA is, but try to figure out what
 this patch does and whether you need to cherry-pick it into your downstream
 kernel. The description as is still is not very helpful for that. It doesn't
 even explain what really changes with this patch applied.

 Yeah.  what about appending the following description?  Can it make
 the context clear?
 Guest should not setup a hpte for the page whose pte is marked with
 _PAGE_NUMA, so on the host, the numa-fault mechanism can take effect
 to check whether the page is placed correctly or not.


 Try to come up with a text that answers the following questions in order:

I've divided them into 3 groups and answer them in 3 sections below;
together they should tell the whole story :)
Please take a look.

   - What does _PAGE_NUMA mean?
Group 1 - section 2

   - How does page migration with _PAGE_NUMA work?
   - Why should we not map pages when _PAGE_NUMA is set?
Group 2 - section 1
(Note: for the 1st question in this group, I am not sure about the
details, except that numa balancing can be fixed either by moving the
task or by moving the page. So I describe it as: migration should be
involved to cut down the distance between the cpu and pages.)

   - Which part of what needs to be done did the previous _PAGE_NUMA patch
 address?
   - What's the situation without this patch?
   - Which scenario does this patch fix?

Group 3 - section 3


Numa fault is a mechanism that helps to achieve automatic numa balancing.
When such a page fault takes place, the page fault handler checks
whether the page is placed correctly. If not, migration is involved to
cut down the distance between the cpu and the pages.

A pte marked with _PAGE_NUMA helps to implement the numa fault. It means
the MMU is not allowed to access the page directly, so a page fault is
triggered and the numa fault handler gets the opportunity to run its
placement check.

As for MMU access, we need special handling for the powernv guest.
When we mark a pte with _PAGE_NUMA, we already call the mmu_notifier to
invalidate it in the guest's htab, but when we try to re-insert the
entry, we first try to fix it up in real mode. Only after that fails do
we fall back to virt mode, and, most importantly, the numa fault handler
runs in virt mode. This patch guards the real-mode path to ensure that a
pte marked with _PAGE_NUMA will NOT be fixed up in real mode; instead it
will be fixed up in virt mode, where its placement can be checked.


Thx,
Fan


 Once you have a text that answers those, you should have a good patch
 description :).

 Alex

