unlikely that these two introduced a regression. My PV tests worked
fine (except for a warning in the guest, but again, I had the same
warning before changes).
Boris Ostrovsky (2):
libxl: Wait until QEMU removed the device before tearing it down
libxl: Simplify cleanup in do_pci_remove
On 11/12/2014 01:58 AM, Hu, Robert wrote:
2. Failed to hotplug a VT-d device with XEN4.5-RC1
http://bugzilla-archived.xenproject.org/bugzilla/show_bug.cgi?id=1894
This should be addressed by these two:
http://lists.xenproject.org/archives/html/xen-devel/2014-11/msg00875.html
On 11/14/2014 11:01 AM, Ian Jackson wrote:
Boris Ostrovsky writes (Re: [PATCH 1/2] libxl: Wait until QEMU removed the device
before tearing it down):
And I believe we still need part of the second patch --- the one that
removes the call to xc_domain_irq_permission() for PV guests (after your
On 11/14/2014 11:31 AM, Ian Jackson wrote:
Boris Ostrovsky writes (Re: [PATCH 1/2] libxl: Wait until QEMU removed the device
before tearing it down):
On 11/14/2014 11:01 AM, Ian Jackson wrote:
What call to xc_physdev_unmap_pirq ? I see one in the PV path.
At the skip1 label.
`goto skip1'
On 11/14/2014 11:36 AM, Ian Jackson wrote:
Boris Ostrovsky writes (Re: [PATCH 1/2] libxl: Wait until QEMU removed the device
before tearing it down):
On 11/14/2014 11:31 AM, Ian Jackson wrote:
`goto skip1' only appears in the PV path AFAICT.
Right, this is all about PV code path.
So now I
On 11/14/2014 04:20 PM, Sander Eikelenboom wrote:
Friday, November 14, 2014, 10:09:04 PM, you wrote:
I don't know about detach but I apparently can't even properly attach a
second device --- I get complaints about it already being in xenstore.
But the device does show up in the guest.
And then I
vmx_add_host_load_msr() and vmx_add_guest_msr() share a fair amount of code. Merge
them to simplify code maintenance.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Kevin Tian kevin.t...@intel.com
Reviewed-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Reviewed-by: Dietmar
Introduce vpmu_are_all_set that allows testing multiple bits at once. Convert
macros
into inlines for better compiler checking.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Kevin Tian kevin.t...@intel.com
Reviewed-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Reviewed
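The multi-bit test described in that commit can be sketched as a minimal, self-contained version; the struct layout and names here are simplified stand-ins for Xen's actual vpmu code, not the real definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for Xen's struct vpmu_struct. */
struct vpmu_struct {
    uint32_t flags;
};

/* Test whether any of the bits in mask are set. */
static inline bool vpmu_is_set(const struct vpmu_struct *vpmu, uint32_t mask)
{
    return (vpmu->flags & mask) != 0;
}

/* Test whether *all* bits in mask are set at once. */
static inline bool vpmu_are_all_set(const struct vpmu_struct *vpmu,
                                    uint32_t mask)
{
    return (vpmu->flags & mask) == mask;
}
```

Unlike a macro, the inline form gives the compiler a typed prototype to check callers against, which is the "better compiler checking" the commit message refers to.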
The two routines share most of their logic.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Kevin Tian kevin.t...@intel.com
Reviewed-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Reviewed-by: Dietmar Hahn dietmar.h...@ts.fujitsu.com
Tested-by: Dietmar Hahn dietmar.h
is currently always done prior to calling
vpmu_save_force() let's both set and clear it there.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Reviewed-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Reviewed-by: Dietmar Hahn dietmar.h...@ts.fujitsu.com
Tested-by: Dietmar Hahn dietmar.h
MSR_CORE_PERF_GLOBAL_CTRL register should be set to zero initially. It is up to
the guest to set it so that counters are enabled.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Kevin Tian kevin.t...@intel.com
Reviewed-by: Dietmar Hahn dietmar.h...@ts.fujitsu.com
Tested
only Intel's BTS is currently supported.
Mode and flags are set via HYPERVISOR_xenpmu_op hypercall.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Kevin Tian kevin.t...@intel.com
Reviewed-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Reviewed-by: Dietmar Hahn dietmar.h
context_saved()
before vpmu_switch_to() is executed. (Note that while this change could have
been delayed until that later patch, the changes are harmless to existing code
and so we do it here)
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Jan Beulich jbeul...@suse.com
Reviewed
is being written, as opposed to postponing this
until later.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Kevin Tian kevin.t...@intel.com
Reviewed-by: Dietmar Hahn dietmar.h...@ts.fujitsu.com
Tested-by: Dietmar Hahn dietmar.h...@ts.fujitsu.com
---
xen/arch/x86/hvm/svm/svm.c
Code for initializing/tearing down PMU for PV guests
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Kevin Tian kevin.t...@intel.com
Reviewed-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Reviewed-by: Dietmar Hahn dietmar.h...@ts.fujitsu.com
Tested-by: Dietmar Hahn
Remove struct pmumsr and core2_pmu_enable. Replace static MSR structures with
fields in core2_vpmu_context.
Call core2_get_pmc_count() once, during initialization.
Properly clean up when core2_vpmu_alloc_resource() fails.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Kevin
to hypervisor.
Since the interrupt handler may now force VPMU context save (i.e. set
VPMU_CONTEXT_SAVE flag) we need to make changes to amd_vpmu_save() which
until now expected this flag to be set only when the counters were stopped.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked
Move some VPMU initialization operations into __initcalls to avoid performing
the same tests and calculations for each vcpu.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Tested-by: Dietmar Hahn dietmar.h...@ts.fujitsu.com
---
xen/arch/x86/hvm/svm/vpmu.c | 115
only access PMU MSRs with {rd,wr}msrl() (not the _safe versions
which would not be NMI-safe).
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Jan Beulich jbeul...@suse.com
Reviewed-by: Dietmar Hahn dietmar.h...@ts.fujitsu.com
Tested-by: Dietmar Hahn dietmar.h...@ts.fujitsu.com
subtree
Boris Ostrovsky (21):
common/symbols: Export hypervisor symbols to privileged guest
x86/VPMU: Manage VPMU_CONTEXT_SAVE flag in vpmu_save_force()
x86/VPMU: Set MSR bitmaps only for HVM/PVH guests
x86/VPMU: Make vpmu macros a bit more efficient
intel/VPMU: Clean up Intel VPMU code
Don't have the hypervisor update APIC_LVTPC when _it_ thinks the vector should
be updated. Instead, handle guest's APIC_LVTPC accesses and write what the guest
explicitly wanted.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Kevin Tian kevin.t...@intel.com
Reviewed
Export Xen's symbols as {address, type, name} triplet via new XENPF_get_symbol
hypercall
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Daniel De Graaf dgde...@tycho.nsa.gov
Reviewed-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Reviewed-by: Dietmar Hahn dietmar.h
On 11/17/2014 11:59 AM, Andrew Cooper wrote:
On 17/11/14 16:58, Ian Campbell wrote:
On Mon, 2014-11-17 at 15:19 +, Andrew Cooper wrote:
c/s d1b93ea causes substantial functional regressions in pygrub's ability to
parse bootloader configuration files.
Please can you and Boris both provide
On 11/20/2014 11:15 AM, Ian Campbell wrote:
On Thu, 2014-11-20 at 16:08 +, Andrew Cooper wrote:
On 20/11/14 16:00, Ian Campbell wrote:
On Mon, 2014-11-17 at 15:19 +, Andrew Cooper wrote:
c/s d1b93ea causes substantial functional regressions in pygrub's ability to
parse bootloader
On 11/21/2014 06:12 AM, Wei Liu wrote:
On Thu, Nov 20, 2014 at 04:27:34PM -0500, Boris Ostrovsky wrote:
When parsing bitmap objects the JSON parser will create a libxl_bitmap
of the smallest size needed.
This can cause problems when saved image file specifies CPU affinity.
For example
On 11/21/2014 06:26 AM, Dario Faggioli wrote:
On Thu, 2014-11-20 at 16:27 -0500, Boris Ostrovsky wrote:
diff --git a/tools/libxl/libxl_utils.c b/tools/libxl/libxl_utils.c
index 58df4f3..2a08bef 100644
--- a/tools/libxl/libxl_utils.c
+++ b/tools/libxl/libxl_utils.c
@@ -614,6 +614,13 @@ void
On 11/21/2014 10:14 AM, Konrad Rzeszutek Wilk wrote:
On Fri, Nov 21, 2014 at 11:08:37AM +0100, Juergen Gross wrote:
Hi,
during tests of my linear p2m list patches I stumbled over some
WARNs issued during xl save and xl restore of a pv-domU with
unpatched linux 3.18-rc5:
Boris had an patch for
On 11/21/2014 12:09 PM, Konrad Rzeszutek Wilk wrote:
On Fri, Nov 21, 2014 at 01:32:13PM +, Andrew Cooper wrote:
That does look plausibly like it would fix the issue.
However, I can't help but feeing that this is hacking around a broken
patch in the first place.
I cannot think of a reason
, in fact, fixes a regression) but the second bug
will probably take me a while (need to make detaching call chain
asynchronous).
== Linux ==
* vAPIC in PVHVM guests (Linux side) (none)
- Boris Ostrovsky
This, I believe, is queued for 3.19.
boris
is not available.
The commit message describes performance improvements that this change brings.
Boris Ostrovsky (2):
xen/pci: Defer initialization of MSI ops on HVM guests until after
x2APIC has been set up
xen/pci: Use APIC directly when APIC virtualization is supported by
hardware
On 11/25/2014 03:45 AM, Jan Beulich wrote:
@@ -1429,6 +1429,12 @@ int vlapic_init(struct vcpu *v)
HVM_DBG_LOG(DBG_LEVEL_VLAPIC, "%d", v->vcpu_id);
+if ( is_pvh_vcpu(v) )
+{
+vlapic->hw.disabled = VLAPIC_HW_DISABLED;
I did consider doing that but I thought that this
/nodes).
Reported-by: Boris Ostrovsky boris.ostrov...@oracle.com
Signed-off-by: Wei Liu wei.l...@citrix.com
Cc: Ian Campbell ian.campb...@citrix.com
Cc: Ian Jackson ian.jack...@eu.citrix.com
Cc: Dario Faggioli dario.faggi...@citrix.com
If this ends up being the approach, it can have the following
On 11/25/2014 07:06 AM, Stefano Stabellini wrote:
On Mon, 24 Nov 2014, Boris Ostrovsky wrote:
If the hardware supports APIC virtualization we may decide not to use pirqs
and instead use APIC/x2APIC directly, meaning that we don't want to set
x86_msi.setup_msi_irqs and x86_msi.teardown_msi_irq
On 11/25/2014 07:48 AM, Jan Beulich wrote:
On 17.11.14 at 00:07, boris.ostrov...@oracle.com wrote:
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -103,6 +103,10 @@
! vcpu_set_singleshot_timer vcpu.h
? xenoprof_init xenoprof.h
?
containing kernel
We should explicitly check the type of default in image_index() and process it
appropriately.
Reported-by: Andrew Cooper andrew.coop...@citrix.com
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
---
Commit message is Andrew's with the exception of the last sentence.
I only tested
On 11/25/2014 09:55 AM, Jan Beulich wrote:
Regardless, do you think that disabling VPMU for PVH is worthwhile anyway?
That depends on what (bad) consequences not doing so has.
I haven't seen anything (besides VAPIC accesses) but I think it would be
prudent to prevent any VPMU activity from
No, this happens before guests are started.
On November 26, 2014 4:45:22 AM EST, Ian Campbell ian.campb...@citrix.com
wrote:
On Tue, 2014-11-25 at 15:23 -0500, Boris Ostrovsky wrote:
We have a regression due to (5195c14c8: netfilter: conntrack: fix
race
in __nf_conntrack_confirm against
On 11/25/2014 08:36 AM, Jan Beulich wrote:
+static long vpmu_sched_checkin(void *arg)
+{
+int cpu = cpumask_next(smp_processor_id(), cpu_online_map);
unsigned int.
+int ret;
+
+/* Mode change shouldn't take more than a few (say, 5) seconds. */
+if ( NOW()
On 11/25/2014 08:49 AM, Jan Beulich wrote:
On 17.11.14 at 00:07, boris.ostrov...@oracle.com wrote:
@@ -244,19 +256,19 @@ void vpmu_initialise(struct vcpu *v)
switch ( vendor )
{
case X86_VENDOR_AMD:
-if ( svm_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
-
On 11/25/2014 09:28 AM, Jan Beulich wrote:
+else
+{
+struct segment_register seg;
+
+hvm_get_segment_register(sampled, x86_seg_cs, seg);
+r->cs = seg.sel;
+hvm_get_segment_register(sampled, x86_seg_ss,
so we
need to postpone setting these ops until later, when we know which APIC mode
is used.
(Note that currently x2APIC is never initialized on HVM guests. This may
change in the future)
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Stefano Stabellini stefano.stabell
260 CPUID
36542 HLT
174 INJ_VIRQ
27250 INTR
222 INTR_WINDOW
20 NPF
24999 TRAP
381812 vlapic_accept_pic_intr
166480 VMENTRY
166479 VMEXIT
77208 VMMCALL
81 wrap_buffer
ApacheBench results (ab -n 1 -c 200) improve by about 10%
Signed-off-by: Boris Ostrovsky
enable x2APIC).
2. Set x86_msi ops to use pirqs only when APIC virtualization is not available.
The commit message describes performance improvements that this change brings.
Boris Ostrovsky (2):
xen/pci: Defer initialization of MSI ops on HVM guests until after
x2APIC has been set up
xen
that it can return both CPU and
device topology data. Add corresponding libxl interface
* Use new interface to query the hypervisor about topology and print
it with 'xl info -n'
* Replace all users of old cpu topology interface with the new
call. This patch is optional.
Boris Ostrovsky (4):
pci
Add support to XEN_SYSCTL_topologyinfo to return IO topology data.
Modify libxl_get_topology() to request this data, provide OS-dependent
helper functions that determine which devices we are inquiring about
(Linux only).
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
---
tools/libxl
Make current users of libxl_get_cpu_topology() call
libxl_get_topology() instead.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
---
tools/libxl/libxl.c | 25 -
tools/libxl/libxl_numa.c | 14 +++---
tools/libxl/libxl_utils.c | 24
XEN_SYSCTL_INTERFACE_VERSION
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
---
tools/libxl/libxl.c | 79 ---
tools/libxl/libxl.h | 4 ++
tools/libxl/libxl_types.idl | 12 ++
tools/libxl/libxl_utils.c | 6 +++
tools/misc
On 11/21/2014 05:17 PM, Konrad Rzeszutek Wilk wrote:
The commit "xen/pciback: Don't deadlock when unbinding." was using
the version of pci_reset_function which would lock the device lock.
That is no good as we can dead-lock. As such we swapped to using
the lock-less version and requiring that the
that
-EOPNOTSUPP is more appropriate here.
Fix all of the above issues.
This patch is based on the original patch by Laszlo Ersek and a comment by
Jeff Moyer.
Signed-off-by: Vitaly Kuznetsov vkuzn...@redhat.com
Reviewed-by: Laszlo Ersek ler...@redhat.com
Reviewed-by: Boris Ostrovsky boris.ostrov
On 11/27/2014 03:57 AM, Jan Beulich wrote:
On 26.11.14 at 15:32, boris.ostrov...@oracle.com wrote:
On 11/25/2014 08:49 AM, Jan Beulich wrote:
On 17.11.14 at 00:07, boris.ostrov...@oracle.com wrote:
@@ -244,19 +256,19 @@ void vpmu_initialise(struct vcpu *v)
switch ( vendor )
{
On 11/27/2014 03:59 AM, Jan Beulich wrote:
On 26.11.14 at 15:39, boris.ostrov...@oracle.com wrote:
On 11/25/2014 09:28 AM, Jan Beulich wrote:
+else
+{
+struct segment_register seg;
+
+hvm_get_segment_register(sampled, x86_seg_cs, seg);
+
On 12/04/2014 03:51 AM, Jan Beulich wrote:
On 03.12.14 at 21:13, boris.ostrov...@oracle.com wrote:
On 11/27/2014 03:57 AM, Jan Beulich wrote:
On 26.11.14 at 15:32, boris.ostrov...@oracle.com wrote:
On 11/25/2014 08:49 AM, Jan Beulich wrote:
On 17.11.14 at 00:07, boris.ostrov...@oracle.com
On 12/05/2014 10:53 AM, Jan Beulich wrote:
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -56,6 +56,8 @@ struct pci_dev {
u8 phantom_stride;
+int node; /* NUMA node */
I don't think we currently support node IDs wider than 8 bits.
I used an int because
On 12/08/2014 09:17 AM, Vitaly Kuznetsov wrote:
flush_op is unambiguously defined by feature_flush:
REQ_FUA | REQ_FLUSH -> BLKIF_OP_WRITE_BARRIER
REQ_FLUSH -> BLKIF_OP_FLUSH_DISKCACHE
0 -> 0
and thus can be removed. This is just a cleanup.
The patch was suggested by Boris Ostrovsky
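The mapping above is a pure function of feature_flush, which is why the stored flush_op field can be dropped. A minimal sketch of that function; the constant values are illustrative stand-ins for the block-layer and blkif definitions:

```c
#include <assert.h>

/* Illustrative stand-ins for the block-layer and blkif constants. */
#define REQ_FLUSH                (1u << 0)
#define REQ_FUA                  (1u << 1)
#define BLKIF_OP_WRITE_BARRIER   2
#define BLKIF_OP_FLUSH_DISKCACHE 8

/* flush_op is fully determined by feature_flush, so it need not be stored. */
static unsigned int flush_op_from_feature(unsigned int feature_flush)
{
    if ((feature_flush & (REQ_FUA | REQ_FLUSH)) == (REQ_FUA | REQ_FLUSH))
        return BLKIF_OP_WRITE_BARRIER;
    if (feature_flush & REQ_FLUSH)
        return BLKIF_OP_FLUSH_DISKCACHE;
    return 0;
}
```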
We need to make sure that last_vcpu is not pointing to the VCPU whose
VPMU is being destroyed. Otherwise we may try to dereference it in
the future, when the VCPU is gone.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
---
xen/arch/x86/hvm/vpmu.c | 22 ++
1 files changed
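The race this patch addresses is why a compare-and-swap rather than a plain store is needed when clearing the pointer. A minimal single-file sketch, with names simplified and GCC's builtin standing in for Xen's cmpxchg:

```c
#include <assert.h>
#include <stddef.h>

struct vcpu { int id; };

/* Stand-in for the per-pcpu last_vcpu pointer. */
static struct vcpu *last_vcpu;

/*
 * On VPMU destruction, clear last_vcpu only if it still points at the
 * vcpu being torn down; a plain NULL store could race with a context
 * switch that has just installed a different vcpu.
 */
static void vpmu_clear_last(struct vcpu *v)
{
    (void)__sync_val_compare_and_swap(&last_vcpu, v, NULL);
}
```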
On 12/11/2014 01:04 PM, Juergen Gross wrote:
diff --git a/scripts/xen-hypercalls.sh b/scripts/xen-hypercalls.sh
new file mode 100644
index 000..e6447b7
--- /dev/null
+++ b/scripts/xen-hypercalls.sh
@@ -0,0 +1,11 @@
+#!/bin/sh
+out=$1
+shift
+in=$@
+
+for i in $in; do
+ eval $CPP
On 12/13/2014 02:08 PM, Konrad Rzeszutek Wilk wrote:
On Fri, Dec 12, 2014 at 04:20:48PM -0500, Boris Ostrovsky wrote:
We need to make sure that last_vcpu is not pointing to the VCPU whose
VPMU is being destroyed. Otherwise we may try to dereference it in
the future, when the VCPU is gone.
Signed-off
testing and clearing of
last_vcpu.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
---
xen/arch/x86/hvm/vpmu.c | 20
1 files changed, 20 insertions(+), 0 deletions(-)
Changes in v2:
* Test last_vcpu locally before IPI
* Don't handle local pcpu as a special case
* priv_enable: dom0 only profiling. dom0 collects samples for everyone.
Sampling
in guests is suspended.
* /proc/xen/xensyms file exports hypervisor's symbols to dom0 (similar to
/proc/kallsyms)
* VPMU infrastructure is now used for HVM, PV and PVH and therefore has been
moved
up from hvm subtree
Boris
vmx_add_host_load_msr() and vmx_add_guest_msr() share a fair amount of code. Merge
them to simplify code maintenance.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Kevin Tian kevin.t...@intel.com
Reviewed-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Reviewed-by: Dietmar
vpmu structure will be used for both HVM and PV guests. Move it from
hvm_vcpu to arch_vcpu.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Jan Beulich jbeul...@suse.com
Reviewed-by: Kevin Tian kevin.t...@intel.com
Reviewed-by: Dietmar Hahn dietmar.h...@ts.fujitsu.com
Tested
Remove struct pmumsr and core2_pmu_enable. Replace static MSR structures with
fields in core2_vpmu_context.
Call core2_get_pmc_count() once, during initialization.
Properly clean up when core2_vpmu_alloc_resource() fails.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Kevin
Don't have the hypervisor update APIC_LVTPC when _it_ thinks the vector should
be updated. Instead, handle guest's APIC_LVTPC accesses and write what the guest
explicitly wanted.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Kevin Tian kevin.t...@intel.com
Reviewed
context_saved()
before vpmu_switch_to() is executed. (Note that while this change could have
been delayed until that later patch, the changes are harmless to existing code
and so we do it here)
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Jan Beulich jbeul...@suse.com
Reviewed
Export Xen's symbols as {address, type, name} triplet via new XENPF_get_symbol
hypercall
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Daniel De Graaf dgde...@tycho.nsa.gov
Reviewed-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Reviewed-by: Dietmar Hahn dietmar.h
MSR_CORE_PERF_GLOBAL_CTRL register should be set to zero initially. It is up to
the guest to set it so that counters are enabled.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Kevin Tian kevin.t...@intel.com
Reviewed-by: Dietmar Hahn dietmar.h...@ts.fujitsu.com
Tested
only Intel's BTS is currently supported.
Mode and flags are set via HYPERVISOR_xenpmu_op hypercall.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Kevin Tian kevin.t...@intel.com
Reviewed-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Reviewed-by: Dietmar Hahn dietmar.h
-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Jan Beulich jbeul...@suse.com
Acked-by: Kevin Tian kevin.t...@intel.com
Reviewed-by: Dietmar Hahn dietmar.h...@ts.fujitsu.com
Tested-by: Dietmar Hahn dietmar.h...@ts.fujitsu.com
---
xen/arch/x86/domain.c | 3 +--
xen/arch/x86/hvm/vmx
Code for initializing/tearing down PMU for PV guests
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Kevin Tian kevin.t...@intel.com
Acked-by: Jan Beulich jbeul...@suse.com
Reviewed-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Reviewed-by: Dietmar Hahn dietmar.h
is being written, as opposed to postponing this
until later.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Kevin Tian kevin.t...@intel.com
Reviewed-by: Dietmar Hahn dietmar.h...@ts.fujitsu.com
Tested-by: Dietmar Hahn dietmar.h...@ts.fujitsu.com
---
xen/arch/x86/hvm/svm/svm.c
is currently always done prior to calling
vpmu_save_force() let's both set and clear it there.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Reviewed-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Reviewed-by: Dietmar Hahn dietmar.h...@ts.fujitsu.com
Tested-by: Dietmar Hahn dietmar.h
to them.
While making these updates, also:
* Remove unused vpmu_domain() macro from vpmu.h
* Convert msraddr_to_bitpos() into an inline and make it a little faster by
realizing that all Intel's PMU-related MSRs are in the lower MSR range.
Signed-off-by: Boris Ostrovsky boris.ostrov
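The lower-MSR-range observation makes the inline nearly trivial: in the VMX MSR bitmap the low range (0x0-0x1fff) maps to bit positions one-to-one, so no high-range offset is needed. A hedged sketch of that simplification (not the exact Xen definition):

```c
#include <assert.h>

/*
 * VMX MSR bitmaps cover MSRs 0x00000000-0x00001fff directly and
 * 0xc0000000-0xc0001fff at an offset.  All Intel PMU-related MSRs
 * fall in the low range, so the bit position is simply the MSR
 * address itself.
 */
static inline unsigned int msraddr_to_bitpos(unsigned int msr)
{
    assert(msr == (msr & 0x1fff));   /* must be a low-range MSR */
    return msr;
}
```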
The failure to initialize VPMU may be temporary so we shouldn't disable VPMU
forever.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
---
xen/arch/x86/hvm/vpmu.c | 15 ---
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86
Introduce vpmu_are_all_set that allows testing multiple bits at once. Convert
macros
into inlines for better compiler checking.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Kevin Tian kevin.t...@intel.com
Reviewed-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Reviewed
to hypervisor.
Since the interrupt handler may now force VPMU context save (i.e. set
VPMU_CONTEXT_SAVE flag) we need to make changes to amd_vpmu_save() which
until now expected this flag to be set only when the counters were stopped.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked
/vpmu.h -> include/asm-x86/vpmu.h
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Jan Beulich jbeul...@suse.com
Reviewed-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Reviewed-by: Dietmar Hahn dietmar.h...@ts.fujitsu.com
Tested-by: Dietmar Hahn dietmar.h...@ts.fujitsu.com
only access PMU MSRs with {rd,wr}msrl() (not the _safe versions
which would not be NMI-safe).
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
Acked-by: Jan Beulich jbeul...@suse.com
Reviewed-by: Dietmar Hahn dietmar.h...@ts.fujitsu.com
Tested-by: Dietmar Hahn dietmar.h...@ts.fujitsu.com
simplifies some of vpmu code.
For symmetry also modify vpmu_save() (and vpmu_save_force()) to use vpmu
instead of vcpu.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
---
xen/arch/x86/domain.c | 4 ++--
xen/arch/x86/hvm/svm/vpmu.c | 23 +++
xen
On 12/17/2014 12:28 PM, Jan Beulich wrote:
Boris Ostrovsky boris.ostrov...@oracle.com 12/17/14 4:10 PM
+/* Need to clear last_vcpu in case it points to v */
+(void)cmpxchg(last, v, NULL);
In a (later) reply to v2 I had indicated that it doesn't seem safe to do so here but
rely
On 12/18/2014 07:15 AM, Dietmar Hahn wrote:
Am Mittwoch 17 Dezember 2014, 10:38:31 schrieb Boris Ostrovsky:
Version 16 of PV(H) PMU patches.
Hi Boris,
I did a fast and simple test on a small Intel machine and all went fine. I'll
do some more after my vacation in 2015.
Thanks!
After
On 12/18/2014 09:10 AM, Jan Beulich wrote:
On 17.12.14 at 16:38, boris.ostrov...@oracle.com wrote:
The failure to initialize VPMU may be temporary so we shouldn't disable VPMU
forever.
Reported-by: Jan Beulich jbeul...@suse.com
(or Suggested-by if you like that better)
Signed-off-by: Boris
testing and clearing of
last_vcpu.
We should also check for VPMU_CONTEXT_ALLOCATED in vpmu_destroy() to
avoid unnecessary percpu tests and arch-specific destroy ops. Thus
checks in AMD and Intel routines are no longer needed.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
---
xen/arch
- just fit the two together. Note
that the hook can (and should) be used irrespective of whether being in
Dom0, as accessing port 0x61 in a DomU would be even worse, while the
shared info field would just hold zero all the time.
Reviewed-by: Boris Ostrovsky boris.ostrov...@oracle.com
Signed-off
With 250a1ac685f (x86, smpboot: Remove pointless preempt_disable() in
native_smp_prepare_cpus()) HVM guests no longer boot since we are
hitting BUG_ON(preemptible()) in xen_setup_cpu_clockevents().
I don't think we need this test (PV or HVM), do we?
-boris
On 12/22/2014 07:39 PM, Andy Lutomirski wrote:
The pvclock vdso code was too abstracted to understand easily and
excessively paranoid. Simplify it for a huge speedup.
This opens the door for additional simplifications, as the vdso no
longer accesses the pvti for any vcpu other than vcpu 0.
On 12/23/2014 10:14 AM, Paolo Bonzini wrote:
On 23/12/2014 16:14, Boris Ostrovsky wrote:
+do {
+version = pvti-version;
+
+/* This is also a read barrier, so we'll read version first. */
+rdtsc_barrier();
+tsc = __native_read_tsc();
This will cause VMEXIT
On 12/26/2014 02:02 PM, Konrad Rzeszutek Wilk wrote:
On Thu, Dec 25, 2014 at 02:58:06AM +, Hu, Robert wrote:
Hi
This is test report on Xen 4.5 RC4, from Intel OTC VMM Team.
Thank you!
Platform: Grantley-EP, Ivytown-EP
We found these issue blow. Hoping corresponding patches can be
On 01/26/2015 09:49 AM, Andrew Cooper wrote:
On 26/01/15 11:38, Jan Beulich wrote:
On 26.01.15 at 12:04, jbeul...@suse.com wrote:
On 24.01.15 at 13:54, ian.jack...@eu.citrix.com wrote:
test-amd64-amd64-xl-qemut-win7-amd64 7 windows-install fail REGR. vs. 33637
Jan 24 00:35:16.262627
On 01/26/2015 09:56 AM, Andrew Cooper wrote:
On 26/01/15 14:51, Boris Ostrovsky wrote:
On 01/26/2015 09:49 AM, Andrew Cooper wrote:
On 26/01/15 11:38, Jan Beulich wrote:
On 26.01.15 at 12:04, jbeul...@suse.com wrote:
On 24.01.15 at 13:54, ian.jack...@eu.citrix.com wrote:
test-amd64-amd64
On 01/30/2015 08:31 AM, Jan Beulich wrote:
On 05.01.15 at 22:44, boris.ostrov...@oracle.com wrote:
+static long vpmu_unload_next(void *arg)
+{
+struct vcpu *last;
+int ret;
+unsigned int thiscpu = smp_processor_id();
+
+if ( thiscpu != vpmu_next_unload_cpu )
+{
+/*
NMI watchdog sets APIC_LVTPC register to generate an NMI when PMU counter
overflow occurs. This may be overwritten by VPMU code later, effectively
turning off the watchdog.
We should disable VPMU when NMI watchdog is running.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
---
docs
Use NMI_NONE when testing whether NMI watchdog is off.
Remove unused NMI_INVALID macro.
Signed-off-by: Boris Ostrovsky boris.ostrov...@oracle.com
---
xen/arch/x86/nmi.c | 4 ++--
xen/arch/x86/traps.c | 3 ++-
xen/include/asm-x86/apic.h | 1 -
3 files changed, 4 insertions(+), 4
On 01/30/2015 09:54 AM, Jan Beulich wrote:
On 05.01.15 at 22:44, boris.ostrov...@oracle.com wrote:
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -497,3 +497,39 @@ long do_xenpmu_op(int op,
XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
return ret;
}
+
+static int
On 02/05/2015 07:41 AM, David Vrabel wrote:
Hypercalls submitted by user space tools via the privcmd driver can
take a long time (potentially many 10s of seconds) if the hypercall
has many sub-operations.
A fully preemptible kernel may deschedule such a task in any upcall
called from a
On 02/05/2015 11:14 AM, David Vrabel wrote:
On 05/02/15 16:11, Boris Ostrovsky wrote:
On 02/05/2015 07:41 AM, David Vrabel wrote:
+
+void xen_maybe_preempt_hcall(void)
+{
+if (__this_cpu_read(xen_in_preemptible_hcall)) {
Can you check should_resched() here?
_cond_resched() already does
On 01/19/2015 12:32 PM, Ian Campbell wrote:
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 0a123f1..eb83f0a 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -1070,6 +1070,10 @@ void libxl_vminfo_list_free(libxl_vminfo *list, int
nb_vm);
libxl_cputopology
On 01/20/2015 05:37 AM, Jan Beulich wrote:
On 05.01.15 at 22:43, boris.ostrov...@oracle.com wrote:
Changes in v17:
* Disable VPMU when unknown CPU vendor is detected (patch #2)
* Remove unnecessary vendor tests in vendor-specific init routines (patch #14)
* Remember first CPU that starts mode
On 01/20/2015 10:21 AM, Ian Campbell wrote:
On Tue, 2015-01-20 at 10:15 -0500, Boris Ostrovsky wrote:
diff --git a/tools/libxl/libxl_linux.c b/tools/libxl/libxl_linux.c
index ea5d8c1..07428c0 100644
--- a/tools/libxl/libxl_linux.c
+++ b/tools/libxl/libxl_linux.c
@@ -279,3 +279,74
On 01/13/2015 11:33 AM, Boris Ostrovsky wrote:
On 01/13/2015 11:17 AM, Boris Ostrovsky wrote:
On 01/13/2015 11:07 AM, David Vrabel wrote:
On 13/01/15 15:42, Boris Ostrovsky wrote:
On 01/13/2015 04:52 AM, David Vrabel wrote:
On 13/01/15 08:14, Imre Palik wrote:
From: Palik, Imre im
@@ -982,6 +1075,9 @@ static void __end_block_io_op(struct pending_req
*pending_req, int error)
* the grant references associated with 'request' and provide
* the proper response on the ring.
*/
+ if (atomic_dec_and_test(&pending_req->pendcnt))
+