initialize backend memory objects in
parallel", 2024-02-06)
Cc: Mark Kanda
Signed-off-by: Paolo Bonzini
Reviewed-by: Mark Kanda
Thanks/regards,
-Mark
---
util/oslib-posix.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/util/oslib-posix.c b/util/oslib-posix.c
index 3
to
ensure optimal thread placement, asynchronous initialization requires prealloc
context threads to be in use.
Signed-off-by: Mark Kanda
Signed-off-by: David Hildenbrand
---
backends/hostmem.c | 7 ++-
hw/virtio/virtio-mem.c | 4 +-
include/hw/qdev-core.h | 5 ++
include/qemu/osdep.h
the command line. In certain scenarios, such as memory being
preallocated across multiple numa nodes, this approach is not optimal due to
the unnecessary serialization.
This series addresses this issue by initializing the backend memory objects in
parallel.
Mark Kanda (1):
oslib-posix: initialize backend memory objects in parallel
On 1/31/24 8:57 AM, David Hildenbrand wrote:
On 31.01.24 15:48, Mark Kanda wrote:
On 1/31/24 8:30 AM, David Hildenbrand wrote:
OK. I'll call it 'PHASE_LATE_BACKENDS_CREATED' (to make it consistent
with code comments/function name).
But then, you should set it at the very end of the function (not sure
if that would be a problem with the other devices that are getting
On 1/31/24 8:04 AM, David Hildenbrand wrote:
On 31.01.24 14:48, Mark Kanda wrote:
QEMU initializes preallocated backend memory as the objects are parsed from
the command line. This is not optimal in some cases (e.g. memory spanning
multiple NUMA nodes) because the memory objects
to
ensure optimal thread placement, asynchronous initialization requires prealloc
context threads to be in use.
Signed-off-by: Mark Kanda
Signed-off-by: David Hildenbrand
---
backends/hostmem.c | 8 ++-
hw/virtio/virtio-mem.c | 4 +-
include/qemu/osdep.h | 18 +-
system/vl.c
by initializing the backend memory objects in
parallel.
Mark Kanda (1):
oslib-posix: initialize backend memory objects in parallel
backends/hostmem.c | 8 ++-
hw/virtio/virtio-mem.c | 4 +-
include/qemu/osdep.h | 18 +-
system/vl.c | 8 +++
util/oslib-posix.c | 131
On 1/29/24 1:11 PM, David Hildenbrand wrote:
On 22.01.24 16:32, Mark Kanda wrote:
Ping.
Any comments?
Thanks/regards,
-Mark
On 1/22/24 9:32 AM, Mark Kanda wrote:
v2:
- require MADV_POPULATE_WRITE (simplify the implementation)
- require prealloc context threads to ensure optimal thread placement
- use machine phase 'initialized' to determine when to allow parallel init
optimal
thread placement, parallel initialization requires prealloc context threads
to be in use.
Signed-off-by: Mark Kanda
---
backends/hostmem.c | 8 ++--
hw/virtio/virtio-mem.c | 4 ++--
include/qemu/osdep.h | 14 --
system/vl.c | 6 ++
util/oslib-posix.c
the command line. In certain scenarios, such as memory being
preallocated across multiple numa nodes, this approach is not optimal due to
the unnecessary serialization.
This series addresses this issue by initializing the backend memory objects in
parallel.
Mark Kanda (2):
oslib-posix: refactor
Refactor the memory prealloc threads support:
- Make memset context a global qlist
- Move the memset thread join/cleanup code to a separate routine
This is functionally equivalent and facilitates multiple memset contexts
(used in a subsequent patch).
Signed-off-by: Mark Kanda
---
util/oslib
On 1/9/24 8:25 AM, David Hildenbrand wrote:
On 09.01.24 15:15, Daniel P. Berrangé wrote:
On Tue, Jan 09, 2024 at 03:02:00PM +0100, David Hildenbrand wrote:
On 08.01.24 19:40, Mark Kanda wrote:
On 1/8/24 9:40 AM, David Hildenbrand wrote:
On 08.01.24 16:10, Mark Kanda wrote:
Refactor
On 1/8/24 9:40 AM, David Hildenbrand wrote:
On 08.01.24 16:10, Mark Kanda wrote:
Refactor the memory prealloc threads support:
- Make memset context a global qlist
- Move the memset thread join/cleanup code to a separate routine
This is functionally equivalent and facilitates multiple memset
by initializing the backend memory objects in
parallel.
Mark Kanda (2):
oslib-posix: refactor memory prealloc threads
oslib-posix: initialize backend memory objects in parallel
include/qemu/osdep.h | 6 ++
system/vl.c | 2 +
util/oslib-posix.c | 150
is
significant and scales with the number of objects. On a 2 socket Skylake VM
with 128GB and 2 init threads per socket (256GB total), the memory init time
decreases from ~27 seconds to ~14 seconds.
Signed-off-by: Mark Kanda
---
include/qemu/osdep.h | 6 ++
system/vl.c | 2 ++
util
Hi Stefano,
On 7/5/2023 7:36 AM, Stefano Garzarella wrote:
Hi Mark,
On Wed, Jul 05, 2023 at 07:28:05AM -0500, Mark Kanda wrote:
On 7/5/2023 2:15 AM, Stefano Garzarella wrote:
This reverts commit 8cc5583abe6419e7faaebc9fbd109f34f4c850f2.
That commit causes several problems in Linux
Hi Stefano,
On 7/4/2023 9:14 AM, Stefano Garzarella wrote:
Hi Mark,
we have a bug [1] possibly related to this patch.
I saw this Oracle Linux errata [2] where you reverted this patch, but
there are no details.
Do you think we should revert it upstream as well?
Do you have any details about
https://linux.oracle.com/errata/ELSA-2023-12065.html
Suggested-by: Thomas Huth
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2176702
Cc: qemu-sta...@nongnu.org
Cc: Mark Kanda
Signed-off-by: Stefano Garzarella
Reviewed-by: Mark Kanda
Thanks/regards,
-Mark
---
include/hw/scsi/scsi.h | 1 -
hw
s the step for qapi/stats.json.
Said commit explains the transformation in more detail. The invariant
violations mentioned there do not occur here.
Cc: Mark Kanda
Cc: Paolo Bonzini
Signed-off-by: Markus Armbruster
Reviewed-by: Mark Kanda
Thanks/regards,
-Mark
---
monitor/qmp-cmds
On 5/23/2022 10:07 AM, Paolo Bonzini wrote:
From: Mark Kanda
Add an HMP command to retrieve statistics collected at run-time.
The command will retrieve and print either all VM-level statistics,
or all vCPU-level statistics for the currently selected CPU.
As I'm credited as the 'poster
On 3/23/2022 1:56 PM, Philippe Mathieu-Daudé wrote:
On 23/3/22 18:17, Philippe Mathieu-Daudé wrote:
From: Mark Kanda
Create cpu_address_space_destroy() to free a CPU's cpu_ases list.
This seems incorrect...
vCPU hotunplug related leak reported by Valgrind:
==132362== 216 bytes in 1
Thanks Philippe,
In the patch subject, 'generic_destroy_vcpu_thread()' should be changed to
'common_vcpu_thread_destroy()'.
Same goes for the next patch (Free cpu->halt_cond).
Thanks/regards,
-Mark
On 3/23/2022 12:17 PM, Philippe Mathieu-Daudé wrote:
From: Mark Kanda
Free cpu->
: start_thread (in /usr/lib64/libpthread-2.28.so)
==132362== by 0x9D45DD2: clone (in /usr/lib64/libc-2.28.so)
Reported-by: Mark Kanda
Signed-off-by: Philippe Mathieu-Daudé
---
Based on a series from Mark:
https://lore.kernel.org/qemu-devel/20220321141409.3112932-1-mark.ka...@oracle.com
Thanks Philippe,
On 3/21/2022 5:12 PM, Philippe Mathieu-Daudé wrote:
On 21/3/22 15:14, Mark Kanda wrote:
vCPU hotunplug related leak reported by Valgrind:
==102631== 56 bytes in 1 blocks are definitely lost in loss record 5,089 of
8,555
==102631== at 0x4C3ADBB: calloc
On 3/21/2022 9:55 AM, Paolo Bonzini wrote:
On 3/21/22 14:50, Markus Armbruster wrote:
Mark Kanda writes:
Thank you Markus.
On 3/11/2022 7:06 AM, Markus Armbruster wrote:
Are the stats bulky enough to justify the extra complexity of
filtering?
If this was only for KVM, the complexity
-monitor.c:713)
Signed-off-by: Mark Kanda
---
accel/accel-common.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/accel/accel-common.c b/accel/accel-common.c
index 623df43cc3..297d4e4ef1 100644
--- a/accel/accel-common.c
+++ b/accel/accel-common.c
@@ -140,4 +140,5 @@ type_init
631==by 0x933D3B: qdev_realize (qdev.c:333)
==102631==by 0x455EC4: qdev_device_add_from_qdict (qdev-monitor.c:713)
Signed-off-by: Mark Kanda
---
accel/accel-common.c | 6 ++
accel/hvf/hvf-accel-ops.c | 1 +
accel/kvm/kvm-accel-ops.c | 1 +
accel/qtest/qtes
Add destroy_vcpu_thread() to AccelOps as a method for vcpu thread cleanup.
This will be used in subsequent patches.
Suggested-by: Philippe Mathieu-Daudé
Signed-off-by: Mark Kanda
Reviewed-by: Philippe Mathieu-Daudé
---
include/sysemu/accel-ops.h | 1 +
softmmu/cpus.c | 3 +++
2
one (in /usr/lib64/libc-2.28.so)
Signed-off-by: Mark Kanda
---
accel/hvf/hvf-accel-ops.c | 11 ++-
accel/kvm/kvm-accel-ops.c | 11 ++-
2 files changed, 20 insertions(+), 2 deletions(-)
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index b23a67881c..bc53890352 100
related cleanup
(Philippe)
Mark Kanda (5):
accel: Introduce AccelOpsClass::destroy_vcpu_thread()
softmmu/cpus: Free cpu->thread in generic_destroy_vcpu_thread()
softmmu/cpus: Free cpu->halt_cond in generic_destroy_vcpu_thread()
cpu: Free cpu->cpu_ases in cpu_address_space_destroy()
)
==132362==by 0x455E9A: qdev_device_add_from_qdict (qdev-monitor.c:713)
Signed-off-by: Mark Kanda
---
cpu.c | 1 +
include/exec/cpu-common.h | 7 +++
softmmu/physmem.c | 5 +
3 files changed, 13 insertions(+)
diff --git a/cpu.c b/cpu.c
index be1f8b074c
On 3/18/2022 11:32 AM, Philippe Mathieu-Daudé wrote:
On 18/3/22 16:15, Mark Kanda wrote:
vCPU hotunplug related leak reported by Valgrind:
==132362== 4,096 bytes in 1 blocks are definitely lost in loss record 8,440
of 8,549
==132362== at 0x4C3B15F: memalign (vg_replace_malloc.c:1265
On 3/18/2022 11:26 AM, Philippe Mathieu-Daudé wrote:
On 18/3/22 16:15, Mark Kanda wrote:
vCPU hotunplug related leak reported by Valgrind:
==132362== 216 bytes in 1 blocks are definitely lost in loss record 7,119 of
8,549
==132362== at 0x4C3ADBB: calloc (vg_replace_malloc.c:1117)
==132362
Add destroy_vcpu_thread() to AccelOps as a method for vcpu thread cleanup.
This will be used in subsequent patches.
Suggested-by: Philippe Mathieu-Daudé
Signed-off-by: Mark Kanda
---
include/sysemu/accel-ops.h | 1 +
softmmu/cpus.c | 3 +++
2 files changed, 4 insertions(+)
diff
This series addresses a few vCPU hotunplug related leaks (found with Valgrind).
v2: Create AccelOpsClass::destroy_vcpu_thread() for vcpu thread related cleanup
(Philippe)
Mark Kanda (5):
accel: Introduce AccelOpsClass::destroy_vcpu_thread()
softmmu/cpus: Free cpu->thr
: kvm_vcpu_thread_fn (kvm-accel-ops.c:40)
==132362==by 0xB2EB26: qemu_thread_start (qemu-thread-posix.c:556)
==132362==by 0x7EB2159: start_thread (in /usr/lib64/libpthread-2.28.so)
==132362==by 0x9D45DD2: clone (in /usr/lib64/libc-2.28.so)
Signed-off-by: Mark Kanda
---
target/i386/cpu.c | 5
-monitor.c:713)
Signed-off-by: Mark Kanda
---
cpu.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/cpu.c b/cpu.c
index be1f8b074c..6a3475022f 100644
--- a/cpu.c
+++ b/cpu.c
@@ -173,6 +173,7 @@ void cpu_exec_unrealizefn(CPUState *cpu)
if (tcg_enabled()) {
tcg_exec_unrealizefn(cpu
-monitor.c:713)
Signed-off-by: Mark Kanda
---
accel/accel-common.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/accel/accel-common.c b/accel/accel-common.c
index 80b0d909b2..ae71a27799 100644
--- a/accel/accel-common.c
+++ b/accel/accel-common.c
@@ -140,4 +140,5 @@ type_init
Thank you Markus.
On 3/11/2022 7:06 AM, Markus Armbruster wrote:
Mark Kanda writes:
Introduce QMP support for querying stats. Provide a framework for adding new
stats and support for the following commands:
- query-stats
Returns a list of all stats per target type (only VM and vCPU to start
Gentle ping - any thoughts on this series?
Thanks/regards,
-Mark
On 1/26/2022 8:29 AM, Mark Kanda wrote:
This series addresses a few vCPU hotunplug related leaks (found with Valgrind).
Mark Kanda (4):
softmmu/cpus: Free cpu->thread in cpu_remove_sync()
softmmu/cpus: Free cpu->hal
exponent/unit
combinations (related to seconds and bytes)
Mark Kanda (3):
qmp: Support for querying stats
hmp: Support for querying stats
kvm: Support for querying fd-based stats
accel/kvm/kvm-all.c | 393
hmp-commands-info.hx| 28 +++
inclu
"provider": "xyz",
"stats": [ ... ] } ] },
{ "path": "/machine/unattached/device[4]"
"providers": [
{ "provider": "kvm",
"stats": [ { "name": "l1d_flush
nanoseconds): 419637402657
- Display all VM stats for provider KVM:
(qemu) info stats * vm kvm
vm
provider: kvm
max_mmu_page_hash_collisions (peak): 0
max_mmu_rmap_size (peak): 0
nx_lpage_splits (instant): 51
...
Signed-off-by: Mark Kanda
---
hmp-commands-info.hx | 28
include
Add support for querying fd-based KVM stats - as introduced by Linux kernel
commit:
cb082bfab59a ("KVM: stats: Add fd-based API to read binary stats data")
Signed-off-by: Mark Kanda
---
accel/kvm/kvm-all.c | 393
qapi/stats.json |
On 2/1/2022 6:08 AM, Daniel P. Berrangé wrote:
+##
+# @StatsResults:
+#
+# Target specific results.
+#
+# Since: 7.0
+##
+{ 'union': 'StatsResults',
+ 'base': { 'target': 'StatsTarget' },
+ 'discriminator': 'target',
+ 'data': { 'vcpu': 'VCPUStatsResults',
+'vm': 'VMStatsResults'
On 2/3/2022 12:30 PM, Daniel P. Berrangé wrote:
On Thu, Feb 03, 2022 at 12:12:57PM -0600, Mark Kanda wrote:
Thanks Daniel,
On 2/1/2022 6:08 AM, Daniel P. Berrangé wrote:
+#
+# Since: 7.0
+##
+{ 'enum' : 'StatType',
+ 'data' : [ 'cumulative', 'instant', 'peak',
+ 'linear-hist', 'log-hist', 'unknown' ] }
IMHO 'unknown' shouldn't exist at all.
I added the 'unknown' member here (and in
"stats": [
{ "name": "l1d_flush", "value": 24902 },
{ "name": "exits", "value": 74374 }
] }
],
"path": "/machine/unattached/device[4]"
}
],
Add support for querying fd-based KVM stats - as introduced by Linux kernel
commit:
cb082bfab59a ("KVM: stats: Add fd-based API to read binary stats data")
Signed-off-by: Mark Kanda
---
accel/kvm/kvm-all.c | 308
qapi/misc.json |
)
This patchset adds QEMU support for querying fd-based KVM stats. The
kernel support was introduced by:
cb082bfab59a ("KVM: stats: Add fd-based API to read binary stats data")
[1] https://lore.kernel.org/all/2029195153.11815-1-mark.ka...@oracle.com/
Mark Kanda (3):
qm
-monitor.c:711)
Signed-off-by: Mark Kanda
---
softmmu/cpus.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/softmmu/cpus.c b/softmmu/cpus.c
index 1d8380d4aa..edaa36f6dc 100644
--- a/softmmu/cpus.c
+++ b/softmmu/cpus.c
@@ -604,6 +604,7 @@ void cpu_remove_sync(CPUState *cpu)
qemu_thread_join
-monitor.c:711)
Signed-off-by: Mark Kanda
---
softmmu/cpus.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/softmmu/cpus.c b/softmmu/cpus.c
index 23bca46b07..1d8380d4aa 100644
--- a/softmmu/cpus.c
+++ b/softmmu/cpus.c
@@ -603,6 +603,7 @@ void cpu_remove_sync(CPUState *cpu
This series addresses a few vCPU hotunplug related leaks (found with Valgrind).
Mark Kanda (4):
softmmu/cpus: Free cpu->thread in cpu_remove_sync()
softmmu/cpus: Free cpu->halt_cond in cpu_remove_sync()
cpu: Free cpu->cpu_ases in cpu_exec_unrealizefn()
i386/cpu: Free env-&
-monitor.c:711)
Signed-off-by: Mark Kanda
---
cpu.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/cpu.c b/cpu.c
index 016bf06a1a..d5c730164a 100644
--- a/cpu.c
+++ b/cpu.c
@@ -170,6 +170,7 @@ void cpu_exec_unrealizefn(CPUState *cpu)
if (tcg_enabled()) {
tcg_exec_unrealizefn(cpu
==by 0xAA72F0: qemu_thread_start (qemu-thread-posix.c:556)
==377357==by 0x8EE8159: start_thread (in /usr/lib64/libpthread-2.28.so)
==377357==by 0x91FCDD2: clone (in /usr/lib64/libc-2.28.so)
Signed-off-by: Mark Kanda
---
target/i386/cpu.c | 2 ++
1 file changed, 2 insertions(+)
diff --git
On 1/18/2022 6:52 AM, Daniel P. Berrangé wrote:
On Tue, Jan 18, 2022 at 01:26:32PM +0100, Paolo Bonzini wrote:
On 1/17/22 16:17, Mark Kanda wrote:
I agree except that I think this and StatsResults should be unions,
even if it means running multiple query-stats commands.
IIUC, making
On 1/17/2022 6:05 AM, Paolo Bonzini wrote:
On 12/7/21 19:42, Daniel P. Berrangé wrote:
Now onto the values being reported. AFAICT from the kernel
docs, for all the types of data it currently reports
(cumulative, instant, peak, none), there is only ever going
to be a single value. I assume the
20:51, Mark Kanda wrote:
v2: [Paolo]
- generalize the interface
- add support for querying stat schema and instances
- add additional HMP semantic processing for a few exponent/unit
combinations (related to seconds and bytes)
This patchset adds QEMU support for querying fd-based KVM stats. The
was introduced by:
cb082bfab59a ("KVM: stats: Add fd-based API to read binary stats data")
Mark Kanda (3):
qmp: Support for querying stats
hmp: Support for querying stats
kvm: Support for querying fd-based stats
accel/kvm/kvm-all.c | 399 ++
hm
Add support for querying fd-based KVM stats - as introduced by Linux
kernel commit:
cb082bfab59a ("KVM: stats: Add fd-based API to read binary stats data")
Signed-off-by: Mark Kanda
---
accel/kvm/kvm-all.c | 399
qapi/misc.json |
vcpu_1 kvm-vcpu
vcpu_0 kvm-vcpu
vm kvm-vm
Signed-off-by: Mark Kanda
---
hmp-commands-info.hx | 40 ++
include/monitor/hmp.h | 3 +
monitor/hmp-cmds.c | 125 ++
3 files changed, 168 insertions
"base": 10,
"val": [ 0 ],
"exponent": 0,
"type": "peak" },
...
{ "execute": "query-stats-schemas" }
{ "return": [
{ "type": "kvm-vcpu",
"stat
"name": "vm",
"stats": [] },
{ "name": "vcpu_0",
"stats": [
{ "name": "req_event",
"unit": "none",
"base": 10,
"val": [ 500 ],
vcpu_0:
req_event (cumulative): 538
nmi_injections (cumulative): 0
...
(qemu) info kvmstats halt_poll_fail_ns
vm:
vcpu_0:
halt_poll_fail_ns (cumulative): 20*10^-9 seconds
vcpu_1:
halt_poll_fail_ns (cumulative): 30*10^-9 seconds
Signed-off-by: Mark Kanda
---
hmp-commands-info.hx | 13
This patchset adds QEMU support for querying fd-based KVM stats. The kernel
support is provided by:
cb082bfab59a ("KVM: stats: Add fd-based API to read binary stats data")
Patch 1 adds QMP support; patch 2 adds HMP support.
Mark Kanda (2):
qmp: Support fd-based KVM stats query
hm
On 10/29/2020 10:58 PM, Chuan Zheng wrote:
Remove redundant blank line which is left by Commit 662770af7c6e8c,
also take this opportunity to remove redundant includes in dirtyrate.c.
Signed-off-by: Chuan Zheng
Reviewed-by: Mark Kanda
---
migration/dirtyrate.c | 5 -
1 file changed
Gentle ping - I would like to confirm this patch is acceptable.
Thanks/regards,
-Mark
On 7/17/2019 9:38 AM, Mark Kanda wrote:
The halt poll control MSR should only be enabled on hosts which
support it.
Fixes: ("kvm: i386: halt poll control MSR support")
Signed-off-by: Mark Kand
The halt poll control MSR should only be enabled on hosts which
support it.
Fixes: ("kvm: i386: halt poll control MSR support")
Signed-off-by: Mark Kanda
---
v2: Remove unnecessary hunks which break migration with older hosts (Paolo)
---
target/i386/cpu.c | 1 -
1 file changed,
The halt poll control MSR should only be enabled on hosts which
support it.
Fixes: ("kvm: i386: halt poll control MSR support")
Signed-off-by: Mark Kanda
---
target/i386/cpu.c | 8 +++-
target/i386/kvm.c | 2 --
target/i386/machine.c | 1 -
3 files changed, 7 insert
This patch addresses an issue with the 'queued, but not yet applied' halt
polling MSR patch. With this patch, halt polling is enabled 'by default';
this causes issues with hosts which don't support halt polling. The fix is
to only enable halt polling if it is supported by the host.
Mark Kanda (1
On 7/16/2019 4:15 PM, Paolo Bonzini wrote:
On 16/07/19 23:09, Paolo Bonzini wrote:
As such, I think we should only enable halt polling if it is supported
on the host - see the attached patch.
...thoughts?
No, it should not be enabled by default at all, at least not until we
can require kernel
= poll_control_msr_needed,
+.fields = (VMStateField[]) {
+VMSTATE_UINT64(env.poll_control_msr, X86CPU),
+VMSTATE_END_OF_LIST()
+}
+};
+
static bool fpop_ip_dp_needed(void *opaque)
{
X86CPU *cpu = opaque;
@@ -1062,6 +1081,7 @@ VMStateDescription vmstate_x86_cpu = {
such, QEMU should not
increment the emulated RUC.
Fixes: 3b2743017749 ("e1000: Implementing various counters")
Reviewed-by: Mark Kanda
Reviewed-by: Bhavesh Davda
Signed-off-by: Chris Kenna
---
hw/net/e1000.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/hw/net/e1000.c b/hw/net/e
On 2/27/2019 4:05 PM, Mark Kanda wrote:
Hi all,
I noticed nested SVM is enabled only in pc-i440fx-2.1 (default is
disabled); this was added when 2.1 was the latest:
75d373ef97 ("target-i386: Disable SVM by default in KVM mode")
However, this change was not carried forward to newer machine types. Is
this an oversight? Is
For CVE-2018-16847, I just noticed Kevin pulled in Li's previous fix (as
opposed to this one). Was this done in error?
Thanks,
-Mark
On 11/16/2018 3:31 AM, Paolo Bonzini wrote:
Because the CMB BAR has a min_access_size of 2, if you read the last
byte it will try to memcpy *2* bytes from
On 10/26/2018 1:37 PM, P J P wrote:
+-- On Fri, 26 Oct 2018, Mark Kanda wrote --+
| Deja vu requested that we include the following text in the commit message:
|
| Discovered by Deja vu Security. Reported by Oracle.
|
| Would that be acceptable?
Generally an email-id is used/preferred
On 10/26/2018 4:25 AM, P J P wrote:
+-- On Thu, 25 Oct 2018, Ameya More wrote --+
| While Mark and I reported this issue to you, it was actually discovered by
| Dejavu Security and they should receive credit for reporting this issue.
| http://www.dejavusecurity.com
I see; Would it be
On 8/15/2018 12:57 PM, prasad.singamse...@oracle.com wrote:
From: Prasad Singamsetty
qemu command fails to process -overcommit option. Add the missing
call to qemu_add_opts() in vl.c.
Signed-off-by: Prasad Singamsetty
Reviewed-by: Mark Kanda
---
vl.c | 1 +
1 file changed, 1
On 3/12/2018 5:59 AM, Gerd Hoffmann wrote:
Typically the scanline length and the line offset are identical. But
in case they are not our calculation for region_end is incorrect. Using
line_offset is fine for all scanlines, except the last one where we have
to use the actual scanline length.
On 1/29/2018 9:41 AM, Kevin Wolf wrote:
Am 24.01.2018 um 12:31 hat Stefan Hajnoczi geschrieben:
On Mon, Jan 22, 2018 at 09:01:49AM -0600, Mark Kanda wrote:
Add a BlockDriverState NULL check to virtio_blk_handle_request()
to prevent a segfault if the drive is forcibly removed using HMP
Add a BlockDriverState NULL check to virtio_blk_handle_request()
to prevent a segfault if the drive is forcibly removed using HMP
'drive_del' (without performing a hotplug 'device_del' first).
Signed-off-by: Mark Kanda <mark.ka...@oracle.com>
Reviewed-by: Karl Heubaum <karl.heub...@o
On 12/11/2017 4:30 AM, Stefan Hajnoczi wrote:
Hi Mark,
Please resend as a top level email thread so the continuous integration
and patch management tools will detect your patch series.
Apologies. I've just resent the series.
Thanks,
-Mark
v2: add check for maximum queue size [Stefan]
This series is for two minor virtio-blk changes. The first patch
makes the virtio-blk queue size user configurable. The second patch
rejects logical block size > physical block configurations (similar
to a recent change in virtio-scsi).
Mark Kanda
Depending on the configuration, it can be beneficial to adjust the virtio-blk
queue size to something other than the current default of 128. Add a new
property to make the queue size configurable.
Signed-off-by: Mark Kanda <mark.ka...@oracle.com>
Reviewed-by: Karl Heubaum <karl.heub...@o
equals the physical block size.
This is identical to commit 3da023b5827543ee4c022986ea2ad9d1274410b2
but applied to virtio-blk (instead of virtio-scsi).
Signed-off-by: Mark Kanda <mark.ka...@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.w...@oracle.com>
Reviewed-by: Ameya M
This series is for two minor virtio-blk changes. The first patch
makes the virtio-blk queue size user configurable. The second patch
rejects logical block size > physical block configurations (similar
to a recent change in virtio-scsi).
Mark Kanda (2):
virtio-blk: make queue size configura
C:\Users\Administrator>fsutil fsinfo ntfsinfo F:
...
Bytes Per Sector : 4096
Bytes Per Physical Sector : 4096
Bytes Per Cluster : 4096
Bytes Per FileRecord Segment: 4096
...
Signed-off-by: Mark Kanda <mark.ka...@oracle.com>
Reviewed-by: Konrad Rz