On Mon, Dec 22, 2014 at 11:21 PM, Paolo Bonzini pbonz...@redhat.com wrote:
On 23/12/2014 01:39, Andy Lutomirski wrote:
This is a dramatic simplification and speedup of the vdso pvclock read
code. Is it correct?
Andy Lutomirski (2):
x86, vdso: Use asm volatile in __getcpu
x86, vdso,
flight 32585 qemu-mainline real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/32585/
Failures :-/ but no regressions.
Regressions which are regarded as allowable (not blocking):
test-amd64-i386-pair 17 guest-migrate/src_host/dst_host fail like 32571
Tests which did not
On 23/12/2014 09:16, Andy Lutomirski wrote:
Any thoughts as to whether it should be tagged for stable? I haven't
looked closely enough at the old pvclock code or the generated code to
have much of an opinion there. It'll be a big speedup for non-pvclock
users at least.
Yes, please.
Paolo
This is the xc-side wrapper for XEN_SYSCTL_PSR_CMT_get_l3_event_mask
of XEN_SYSCTL_psr_cmt_op. An additional check of the event id against
the value returned by this routine is also added.
Signed-off-by: Chao Peng chao.p.p...@linux.intel.com
---
tools/libxc/include/xenctrl.h | 1 +
tools/libxc/xc_psr.c
L3 event mask indicates the event types supported in the host, including
the cache occupancy event as well as local/total memory bandwidth events
for Memory Bandwidth Monitoring (MBM). Expose it so all these events
can be monitored in user space.
Signed-off-by: Chao Peng chao.p.p...@linux.intel.com
---
Make some internal routines common so that the total/local memory
bandwidth monitoring in the next patch can make use of them.
Signed-off-by: Chao Peng chao.p.p...@linux.intel.com
---
tools/libxl/libxl_psr.c | 42 ++
tools/libxl/xl_cmdimpl.c | 51
Intel Memory Bandwidth Monitoring (MBM) is a new hardware feature
which builds on the CMT infrastructure to allow monitoring of system
memory bandwidth. Event codes are provided to monitor both total
and local bandwidth, meaning bandwidth over QPI and other external
links can be monitored.
For
Add Memory Bandwidth Monitoring (MBM) for VMs. Two types of monitoring
are supported: total and local memory bandwidth monitoring. To use it,
CMT should be enabled in the hypervisor.
Signed-off-by: Chao Peng chao.p.p...@linux.intel.com
---
docs/man/xl.pod.1 |2 ++
On Mon, Dec 22, 2014 at 5:35 PM, Herbert Xu
herb...@gondor.apana.org.au wrote:
On Mon, Dec 22, 2014 at 04:18:33PM +0800, Jason Wang wrote:
btw, looks like at least caif_virtio has the same issue.
Good catch.
-- >8 --
The commit d75b1ade567ffab085e8adbbdacf0092d10cd09c (net: less
On 23/12/14 00:39, Andy Lutomirski wrote:
The pvclock vdso code was too abstracted to understand easily and
excessively paranoid. Simplify it for a huge speedup.
This opens the door for additional simplifications, as the vdso no
longer accesses the pvti for any vcpu other than vcpu 0.
On 22/12/14 18:33, Boris Ostrovsky wrote:
There is no reason for having it and, with commit 250a1ac685f1 (x86,
smpboot: Remove pointless preempt_disable() in
native_smp_prepare_cpus()), it prevents HVM guests from booting.
Applied to stable/for-linus-3.19, thanks.
For arch-specific fixes I
On 19/12/14 16:16, Jan Beulich wrote:
Using the native code here can't work properly, as the hypervisor would
normally have cleared the two reason bits by the time Dom0 gets to see
the NMI (if passed to it at all). There's a shared info field for this,
and there's an existing hook to use -
flight 32596 libvirt real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/32596/
Failures :-/ but no regressions.
Tests which did not succeed, but are not blocking:
test-amd64-i386-libvirt 9 guest-start fail never pass
test-amd64-amd64-libvirt 9
Hi,
On 23/12/2014 04:43, manish jaggi wrote:
In gic.c, gic_update_one_lr, gic_hw_ops is called to read and write to an LR.
The function gic_update_one_lr is only used to update the LRs of the
current vCPU.
why is read/write not done on the LRs stored in the vcpu context ?
The LR array
On 23.12.14 at 07:52, kevin.t...@intel.com wrote:
From: Jan Beulich [mailto:jbeul...@suse.com]
Sent: Friday, December 19, 2014 7:26 PM
This can (and will) be legitimately the case when sharing page tables
with EPT (more of a problem before p2m_access_rwx became zero, but
still possible
Ping.
On 12/03/14 08:15, Don Slutz wrote:
From: Stefano Stabellini stefano.stabell...@eu.citrix.com
Increase maxmem before calling xc_domain_populate_physmap_exact to
avoid the risk of running out of guest memory. This way we can also
avoid complex memory calculations in libxl at domain
flight 32594 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/32594/
Failures :-/ but no regressions.
Tests which are failing intermittently (not blocking):
test-amd64-i386-xl-qemuu-ovmf-amd64 5 xen-boot fail pass in 32574
Regressions which are regarded as
On 12/22/2014 07:39 PM, Andy Lutomirski wrote:
The pvclock vdso code was too abstracted to understand easily and
excessively paranoid. Simplify it for a huge speedup.
This opens the door for additional simplifications, as the vdso no
longer accesses the pvti for any vcpu other than vcpu 0.
On 23/12/2014 16:14, Boris Ostrovsky wrote:
+	do {
+		version = pvti->version;
+
+		/* This is also a read barrier, so we'll read version first. */
+		rdtsc_barrier();
+		tsc = __native_read_tsc();
This will cause VMEXIT on Xen with TSC_MODE_ALWAYS_EMULATE
On 12/23/2014 10:14 AM, Paolo Bonzini wrote:
On 23/12/2014 16:14, Boris Ostrovsky wrote:
+	do {
+		version = pvti->version;
+
+		/* This is also a read barrier, so we'll read version first. */
+		rdtsc_barrier();
+		tsc = __native_read_tsc();
This will cause VMEXIT
On 23/12/2014 08:54, Chao Peng wrote:
Intel Memory Bandwidth Monitoring (MBM) is a new hardware feature
which builds on the CMT infrastructure to allow monitoring of system
memory bandwidth. Event codes are provided to monitor both total
and local bandwidth, meaning bandwidth over QPI and other
On 19/12/2014 18:14, Konrad Rzeszutek Wilk wrote:
On Fri, Dec 19, 2014 at 03:19:44PM +, Andrew Cooper wrote:
There will be another full nightly test happening tonight (based on c/s
7e88c23 libxl: Tell qemu to use raw format when using a tapdisk), and
some stress and scale tests if time
On 23/12/2014 08:54, Chao Peng wrote:
L3 event mask indicates the event types supported in the host, including
the cache occupancy event as well as local/total memory bandwidth events
for Memory Bandwidth Monitoring (MBM). Expose it so all these events
can be monitored in user space.
Signed-off-by: Chao
Julian Sivertsen julian.sivertsen+xen at gmail.com writes:
== Hardware ==
HP DL380 G5 (2x Intel Xeon 5130)
== Dom0 ==
Arch Linux 64-bit
== Functionality tested ==
Building xen 4.5 RC4 from the tarball, installing it and booting it up
with Arch Linux as the Dom0.
== Comments ==
flight 32598 qemu-mainline real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/32598/
Failures :-/ but no regressions.
Regressions which are regarded as allowable (not blocking):
test-amd64-i386-pair 17 guest-migrate/src_host/dst_host fail like 32585
Tests which did not
From: Herbert Xu herb...@gondor.apana.org.au
Date: Sun, 21 Dec 2014 07:16:25 +1100
This patch rearranges the loop in net_rx_action to reduce the
amount of jumping back and forth when reading the code.
Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
Applied.
From: Herbert Xu herb...@gondor.apana.org.au
Date: Sun, 21 Dec 2014 07:16:21 +1100
This patch creates a new function napi_poll and moves the napi
polling code from net_rx_action into it.
Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
Applied.
flight 32600 linux-next real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/32600/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl-qemut-winxpsp3 7 windows-install fail REGR. vs. 32564
Tests which are