Please take patch A or B.
Takuya
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
When this was introduced, kvm_flush_remote_tlbs() could be called
without holding mmu_lock. It is now acknowledged that the function
must be called before releasing mmu_lock, and all callers have already
been changed to do so.
There is no need to use smp_mb() and cmpxchg() any more.
When this was introduced, kvm_flush_remote_tlbs() could be called
without holding mmu_lock. It is now acknowledged that the function
must be called before releasing mmu_lock, and all callers have already
been changed to do so.
This patch adds a comment explaining this.
Signed-off-by: Takuya
On 18/02/2014 09:22, Takuya Yoshikawa wrote:
When this was introduced, kvm_flush_remote_tlbs() could be called
without holding mmu_lock. It is now acknowledged that the function
must be called before releasing mmu_lock, and all callers have already
been changed to do so.
There is no
On 14/02/2014 08:37, Fernando Luis Vázquez Cao wrote:
These days hv_clock allocation is memblock based (i.e. the percpu
allocator is not involved), which means that the physical address
of each of the per-cpu hv_clock areas is guaranteed to remain
unchanged through all its lifetime and we
https://bugzilla.kernel.org/show_bug.cgi?id=69361
Paolo Bonzini bonz...@gnu.org changed:
What|Removed |Added
CC||bonz...@gnu.org
---
On 17/01/2014 20:52, Radim Krčmář wrote:
We should open NMI window right after an iret, but SVM exits before it.
We wanted to single step using the trap flag and then open it.
(or we could emulate the iret instead)
We don't do it since commit 3842d135ff2 (likely), because the iret exit
https://bugzilla.kernel.org/show_bug.cgi?id=69361
--- Comment #10 from robert...@intel.com ---
(In reply to Paolo Bonzini from comment #9)
Yes, commit 215393bc1fab3d61a5a296838bdffce22f27ffda.
May I know which branch it was committed to?
--
You are receiving this mail because:
You are watching the assignee of the bug.
On 22/01/2014 13:03, Paolo Bonzini wrote:
On 22/01/2014 06:29, Liu, Jinsong wrote:
These patches are version 3 to enable Intel MPX for KVM.
Version 1:
* Add some Intel MPX definitions
* Fix a cpuid(0x0d, 0) exposing bug, dynamic per XCR0 features
enable/disable
* vmx and msr
Paolo Bonzini wrote:
On 22/01/2014 13:03, Paolo Bonzini wrote:
On 22/01/2014 06:29, Liu, Jinsong wrote:
These patches are version 3 to enable Intel MPX for KVM.
Version 1:
* Add some Intel MPX definitions
* Fix a cpuid(0x0d, 0) exposing bug, dynamic per XCR0 features
https://bugzilla.kernel.org/show_bug.cgi?id=69361
--- Comment #11 from Paolo Bonzini bonz...@gnu.org ---
It is in v3.14-rc1
--
--
On 02/18/2014 04:22 PM, Takuya Yoshikawa wrote:
When this was introduced, kvm_flush_remote_tlbs() could be called
without holding mmu_lock. It is now acknowledged that the function
must be called before releasing mmu_lock, and all callers have already
been changed to do so.
I have already
These days hv_clock allocation is memblock based (i.e. the percpu
allocator is not involved), which means that the physical address
of each of the per-cpu hv_clock areas is guaranteed to remain
unchanged through all its lifetime and we do not need to update
its location after CPU bring-up.
On Tue, Feb 18, 2014 at 02:38:40AM +, Zhanghaoyu (A) wrote:
Hi, all
The VM will get stuck for a while (about 6s for a VM with 20GB memory) when
attaching a pass-through PCI card to the non-pass-through VM for the first
time.
The reason is that the host will build the whole VT-d
On 17/02/2014 21:24, Alex Williamson wrote:
VFIO now has support for using the IOMMU_CACHE flag and a mechanism
for an external user to test the current operating mode of the IOMMU.
Add support for this to the kvm-vfio pseudo device so that we only
register noncoherent DMA when necessary.
On 18/02/2014 11:38, Michael S. Tsirkin wrote:
What if you detach and re-attach?
Is it fast then?
If yes this means the issue is COW breaking that occurs
with get_user_pages, not translation as such.
Try hugepages with prealloc - does it help?
I agree it's either COW breaking or
On Tue, Feb 18, 2014 at 11:42:19AM +0100, Paolo Bonzini wrote:
On 18/02/2014 11:38, Michael S. Tsirkin wrote:
What if you detach and re-attach?
Is it fast then?
If yes this means the issue is COW breaking that occurs
with get_user_pages, not translation as such.
Try hugepages with
Hi, all
The VM will get stuck for a while (about 6s for a VM with 20GB memory) when
attaching a pass-through PCI card to the non-pass-through VM for the first
time.
The reason is that the host will build the whole VT-d GPA-HPA DMAR
page-table, which needs a lot of time, and during this
On 18/02/2014 11:51, Michael S. Tsirkin wrote:
I agree it's either COW breaking or (similarly) locking pages that
the guest hasn't touched yet.
You can use prealloc or -rt mlock=on to avoid this problem.
Paolo
Or the new shared flag - IIRC shared VMAs don't do COW either.
Only if
On Tue, Feb 18, 2014 at 12:05:19PM +0100, Paolo Bonzini wrote:
On 18/02/2014 11:51, Michael S. Tsirkin wrote:
I agree it's either COW breaking or (similarly) locking pages that
the guest hasn't touched yet.
You can use prealloc or -rt mlock=on to avoid this problem.
Paolo
Or the
On 18/02/2014 11:09, Fernando Luis Vázquez Cao wrote:
These days hv_clock allocation is memblock based (i.e. the percpu
allocator is not involved), which means that the physical address
of each of the per-cpu hv_clock areas is guaranteed to remain
unchanged through all its lifetime and we
What if you detach and re-attach?
Is it fast then?
If yes this means the issue is COW breaking that occurs with
get_user_pages, not translation as such.
Try hugepages with prealloc - does it help?
I agree it's either COW breaking or (similarly) locking pages that the guest
hasn't touched
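The prealloc/mlock suggestions in this thread translate into QEMU command-line options roughly as follows. This is a sketch: the exact spelling varies by QEMU version, and in the 1.x/2.x era of this thread the memory-locking knob was spelled -realtime mlock=on (Michael's "-rt mlock=on" is shorthand for it).

```shell
# Preallocate guest RAM (optionally hugepage-backed) and lock it, so all
# pages are touched up front and no COW break / fault storm happens when
# the pass-through device is attached later. Remaining options elided.
qemu-system-x86_64 \
    -m 20480 \
    -mem-path /dev/hugepages -mem-prealloc \
    -realtime mlock=on
```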
On Mon, 2014-02-17 at 10:29 +, David Vrabel wrote:
On 15/02/14 02:59, Luis R. Rodriguez wrote:
From: Luis R. Rodriguez mcg...@suse.com
The purpose of using a static MAC address of FE:FF:FF:FF:FF:FF
was to prevent our backend interfaces from being used by the
bridge and nominating
2014-02-18
--
* [Qemu-devel] [PATCH V17 00/11] Add support for binding guest numa
nodes
Any news about this? (Vinod)
* Should we change anything to get more people to sign for the call?
There hasn't been a call in quite a long time. Ideas? (me)
* x2apic?
- Pending patch for
2014-02-18
--
* x2apic?
- Pending patch for cpu feature flag.
- Should this be the default?
- It is not for 32bits, but should it be for 64bit?
- Does libvirt always use x2apic, unconditionally?
- What happens if one side of migration uses -m cpu_something and the
other -m
When we run a guest with cache disabled, we don't flush the cache to
the Point of Coherency, hence possibly missing bits of data that have
been written in the cache, but have not yet reached memory.
We also have the opposite issue: when a guest enables its cache,
whatever sits in the cache is
The use of p*d_addr_end with stage-2 translation is slightly dodgy,
as the IPA is 40bits, while all the p*d_addr_end helpers are
taking an unsigned long (arm64 is fine with that as unsigned long
is 64bit).
The fix is to introduce 64bit clean versions of the same helpers,
and use them in the
So far, KVM/ARM used a fixed HCR configuration per guest, except for
the VI/VF/VA bits to control the interrupt in absence of VGIC.
With the upcoming need to dynamically reconfigure trapping, it becomes
necessary to allow the HCR to be changed on a per-vcpu basis.
The fix here is to mimic what
HCR.TVM traps (among other things) accesses to AMAIR0 and AMAIR1.
In order to minimise the amount of surprise a guest could generate by
trying to access these registers with caches off, add them to the
list of registers we switch/handle.
Signed-off-by: Marc Zyngier marc.zyng...@arm.com
In order to be able to detect the point where the guest enables
its MMU and caches, trap all the VM related system registers.
Once we see the guest enabling both the MMU and the caches, we
can go back to a saner mode of operation, which is to leave these
registers in complete control of the
In order for a guest with caches disabled to observe data written
contained in a given page, we need to make sure that page is
committed to memory, and not just hanging in the cache (as guest
accesses are completely bypassing the cache until it decides to
enable it).
For this purpose, hook into
Commit 240e99cbd00a (ARM: KVM: Fix 64-bit coprocessor handling)
changed the way we match the 64bit coprocessor access from
user space, but didn't update the trap handler for the same
set of registers.
The effect is that a trapped 64bit access is never matched, leading
to a fault being injected
When the guest runs with caches disabled (like in an early boot
sequence, for example), all the writes are directly going to RAM,
bypassing the caches altogether.
Once the MMU and caches are enabled, whatever sits in the cache
becomes suddenly visible, which isn't what the guest expects.
A way to
Compiling with THP enabled leads to the following warning:
arch/arm/kvm/mmu.c: In function ‘unmap_range’:
arch/arm/kvm/mmu.c:177:39: warning: ‘pte’ may be used uninitialized in this
function [-Wmaybe-uninitialized]
if (kvm_pmd_huge(*pmd) || page_empty(pte)) {
The current handling of AArch32 trapping is slightly less than
perfect, as it is not possible (from a handler point of view)
to distinguish it from an AArch64 access, nor to tell a 32bit
from a 64bit access either.
Fix this by introducing two additional flags:
- is_aarch32: true if the access was
In order for the guest with caches off to observe data written
contained in a given page, we need to make sure that page is
committed to memory, and not just hanging in the cache (as
guest accesses are completely bypassing the cache until it
decides to enable it).
For this purpose, hook into the
In order to be able to detect the point where the guest enables
its MMU and caches, trap all the VM related system registers.
Once we see the guest enabling both the MMU and the caches, we
can go back to a saner mode of operation, which is to leave these
registers in complete control of the
Commit 240e99cbd00a (ARM: KVM: Fix 64-bit coprocessor handling)
added an ordering dependency for the 64bit registers.
The order described is: CRn, CRm, Op1, Op2, 64bit-first.
Unfortunately, the implementation is: CRn, 64bit-first, CRm...
Move the 64bit test to be last in order to match the
On Tue, Feb 18, 2014 at 03:27:25PM +, Marc Zyngier wrote:
The use of p*d_addr_end with stage-2 translation is slightly dodgy,
as the IPA is 40bits, while all the p*d_addr_end helpers are
taking an unsigned long (arm64 is fine with that as unsigned long
is 64bit).
The fix is to introduce
On Tue, Feb 18, 2014 at 03:27:25PM +, Marc Zyngier wrote:
The use of p*d_addr_end with stage-2 translation is slightly dodgy,
as the IPA is 40bits, while all the p*d_addr_end helpers are
taking an unsigned long (arm64 is fine with that as unsigned long
is 64bit).
The fix is to introduce
On Tue, Feb 18, 2014 at 03:27:33PM +, Marc Zyngier wrote:
Compiling with THP enabled leads to the following warning:
arch/arm/kvm/mmu.c: In function ‘unmap_range’:
arch/arm/kvm/mmu.c:177:39: warning: ‘pte’ may be used uninitialized in this
function [-Wmaybe-uninitialized]
if
On Mon, Feb 17, 2014 at 2:27 AM, David Vrabel david.vra...@citrix.com wrote:
On 15/02/14 02:59, Luis R. Rodriguez wrote:
From: Luis R. Rodriguez mcg...@suse.com
This v2 series changes the approach from my original virtualization
multicast patch series [0] by abandoning completely the
On Mon, Feb 17, 2014 at 6:36 AM, Zoltan Kiss zoltan.k...@citrix.com wrote:
There is a valid scenario to put IP addresses on the backend VIFs:
http://wiki.xen.org/wiki/Xen_Networking#Routing
This is useful thanks!
Also, the backend is not necessarily Dom0, you can connect two guests with
On Mon, 2014-02-10 at 11:05 -0800, Nicholas A. Bellinger wrote:
SNIP
Hi Yan,
So recently I've been doing some KVM guest performance comparisons
between the scsi-mq prototype using virtio-scsi + vhost-scsi, and
Windows Server 2012 with vioscsi.sys (virtio-win-0.1-74.iso) +
On Tue, Feb 18, 2014 at 7:27 AM, Marc Zyngier marc.zyng...@arm.com wrote:
When we run a guest with cache disabled, we don't flush the cache to
the Point of Coherency, hence possibly missing bits of data that have
been written in the cache, but have not yet reached memory.
We also have the
On Sun, Feb 16, 2014 at 10:57 AM, Stephen Hemminger
step...@networkplumber.org wrote:
On Fri, 14 Feb 2014 18:59:37 -0800
Luis R. Rodriguez mcg...@do-not-panic.com wrote:
From: Luis R. Rodriguez mcg...@suse.com
It doesn't make sense for some interfaces to become a root bridge
at any point in
On Tue, 2014-02-18 at 13:00 -0800, Nicholas A. Bellinger wrote:
On Mon, 2014-02-10 at 11:05 -0800, Nicholas A. Bellinger wrote:
SNIP
Hi Yan,
So recently I've been doing some KVM guest performance comparisons
between the scsi-mq prototype using virtio-scsi + vhost-scsi,
On Mon, Feb 17, 2014 at 12:23 PM, Dan Williams d...@redhat.com wrote:
On Fri, 2014-02-14 at 18:59 -0800, Luis R. Rodriguez wrote:
From: Luis R. Rodriguez mcg...@suse.com
Some interfaces do not need to have any IPv4 or IPv6
addresses, so enable an option to specify this. One
example where
On Tue, Feb 18, 2014 at 3:22 AM, Ian Campbell ian.campb...@citrix.com wrote:
On Mon, 2014-02-17 at 10:29 +, David Vrabel wrote:
On 15/02/14 02:59, Luis R. Rodriguez wrote:
From: Luis R. Rodriguez mcg...@suse.com
The purpose of using a static MAC address of FE:FF:FF:FF:FF:FF
was to
On Tue, 18 Feb 2014 13:19:15 -0800
Luis R. Rodriguez mcg...@do-not-panic.com wrote:
Sure, but note that both the disable_ipv6 and accept_dad sysctl
parameters are global. ipv4 and ipv6 interfaces are created upon
NETDEVICE_REGISTER, which will get triggered when a driver calls
From: Rik van Riel r...@redhat.com
Normally task_numa_work scans over a fairly small amount of memory,
but it is possible to run into a large unpopulated part of virtual
memory, with no pages mapped. In that case, task_numa_work can run
for a while, and it may make sense to reschedule as
The NUMA scanning code can end up iterating over many gigabytes
of unpopulated memory, especially in the case of a freshly started
KVM guest with lots of memory.
This results in the mmu notifier code being called even when
there are no mapped pages in a virtual address range. The amount
of time
From: Rik van Riel r...@redhat.com
The NUMA scanning code can end up iterating over many gigabytes
of unpopulated memory, especially in the case of a freshly started
KVM guest with lots of memory.
This results in the mmu notifier code being called even when
there are no mapped pages in a virtual
From: Rik van Riel r...@redhat.com
Reorganize the order of ifs in change_pmd_range a little, in
preparation for the next patch.
Signed-off-by: Rik van Riel r...@redhat.com
Cc: Peter Zijlstra pet...@infradead.org
Cc: Andrea Arcangeli aarca...@redhat.com
Reported-by: Xing Gang gang.x...@hp.com
3.4-stable review patch. If anyone has any objections, please let me know.
--
From: Asias He as...@redhat.com
commit 2c95a3290919541b846bee3e0fbaa75860929f53 upstream.
Block layer will allocate a spinlock for the queue if the driver does
not provide one in blk_init_queue().
On 18 February 2014 15:05, Juan Quintela quint...@redhat.com wrote:
* Maintenance?
- How is being handling patches for the stable release?
CC: stable@ for stable maintenance.
- add documentation about what/how to send patches for stable release
- Regressions deserve a backport almost always?
(2014/02/18 18:43), Xiao Guangrong wrote:
On 02/18/2014 04:22 PM, Takuya Yoshikawa wrote:
When this was introduced, kvm_flush_remote_tlbs() could be called
without holding mmu_lock. It is now acknowledged that the function
must be called before releasing mmu_lock, and all callers have already
(2014/02/18 18:07), Paolo Bonzini wrote:
On 18/02/2014 09:22, Takuya Yoshikawa wrote:
When this was introduced, kvm_flush_remote_tlbs() could be called
without holding mmu_lock. It is now acknowledged that the function
must be called before releasing mmu_lock, and all callers have already
On Tue, 18 Feb 2014, r...@redhat.com wrote:
From: Rik van Riel r...@redhat.com
Reorganize the order of ifs in change_pmd_range a little, in
preparation for the next patch.
Signed-off-by: Rik van Riel r...@redhat.com
Cc: Peter Zijlstra pet...@infradead.org
Cc: Andrea Arcangeli
On Tue, 18 Feb 2014, r...@redhat.com wrote:
From: Rik van Riel r...@redhat.com
The NUMA scanning code can end up iterating over many gigabytes
of unpopulated memory, especially in the case of a freshly started
KVM guest with lots of memory.
This results in the mmu notifier code being
On 02/18/2014 09:24 PM, David Rientjes wrote:
On Tue, 18 Feb 2014, r...@redhat.com wrote:
From: Rik van Riel r...@redhat.com
The NUMA scanning code can end up iterating over many gigabytes
of unpopulated memory, especially in the case of a freshly started
KVM guest with lots of memory.
On Tue, 18 Feb 2014 18:24:36 -0800 (PST)
David Rientjes rient...@google.com wrote:
Acked-by: David Rientjes rient...@google.com
Might have been cleaner to move the
mmu_notifier_invalidate_range_{start,end}() to hugetlb_change_protection()
as well, though.
Way cleaner! Second version
On Tue, 2014-02-18 at 13:11 -0800, Nicholas A. Bellinger wrote:
On Tue, 2014-02-18 at 13:00 -0800, Nicholas A. Bellinger wrote:
On Mon, 2014-02-10 at 11:05 -0800, Nicholas A. Bellinger wrote:
SNIP
Hi Yan,
So recently I've been doing some KVM guest performance