Re: [PATCH v2 RESEND] xen: Fix SEGV on domain disconnect

2023-04-26 Thread Tim Smith
On Mon, Apr 24, 2023 at 2:51 PM Paul Durrant wrote:
> So if you drop the ring drain then this patch should still stop the
> SEGVs, right?
I think that's worth a few test runs. I recall some coredumps in that condition when I was investigating early on, but I don't have them in my collection [...]

Re: [PATCH v2 RESEND] xen: Fix SEGV on domain disconnect

2023-04-24 Thread Tim Smith
On Mon, Apr 24, 2023 at 1:08 PM Mark Syms wrote:
> Copying in Tim who did the final phase of the changes.
>
> On Mon, 24 Apr 2023 at 11:32, Paul Durrant wrote:
> > On 20/04/2023 12:02, mark.s...@citrix.com wrote:
> > > From: Mark Syms
> > >
> > > Ensure the PV ring is drained on disconnect [...]

Re: [Qemu-devel] Very slow finding extents in QCOW2-backed nbd

2019-01-28 Thread Tim Smith
On Monday, 28 January 2019 14:41:35 GMT Vladimir Sementsov-Ogievskiy wrote:
> 28.01.2019 14:58, Tim Smith wrote:
> > Hi all, I have a question about the intent of the last call to
> > bdrv_co_block_status() in bdrv_co_block_status(), in block/io.c about
> > line [...]

[Qemu-devel] Very slow finding extents in QCOW2-backed nbd

2019-01-28 Thread Tim Smith
[...]ll removed, and the only discernable difference was that everything went a lot faster. So I'm wondering what the intent is for that code, and in what circumstances it is useful?
--
Tim Smith

[Qemu-devel] [PATCH 1/3] Improve xen_disk batching behaviour

2018-11-02 Thread Tim Smith
[...] amount proportional to the number which were already in flight at the time we started reading the ring.

Signed-off-by: Tim Smith
---
 hw/block/xen_disk.c | 30 ++
 1 file changed, 30 insertions(+)

diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
index 36eff94f84 [...]
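The pattern the snippet describes, reading a batch bounded by the in-flight count taken before the ring walk and flushing it with a single submit, can be sketched roughly as below. This is an illustrative toy, not QEMU's actual xen_disk code: the struct fields and helper names are invented stand-ins, and the exact batch limit here is an assumption.

```c
#include <stdbool.h>

/* Toy ring with stub helpers; names are illustrative, not QEMU's. */
struct ring {
    unsigned inflight;     /* requests submitted but not yet completed */
    unsigned available;    /* requests waiting on the ring */
    unsigned queued;       /* pulled off the ring during this pass */
    unsigned submits;      /* how many times we flushed to the backend */
};

static bool ring_has_request(struct ring *r) { return r->available > 0; }
static void queue_request(struct ring *r)    { r->available--; r->queued++; }
static void submit_batch(struct ring *r)     { r->submits++; }

/* Snapshot the in-flight count once, pull a batch bounded by it, then
 * issue one submit for the whole batch instead of one per request. */
static void handle_ring(struct ring *r)
{
    unsigned limit = r->inflight;   /* taken before we start reading */
    unsigned batched = 0;

    while (ring_has_request(r)) {
        queue_request(r);           /* queue only; no submit here */
        if (++batched > limit) {
            break;                  /* cap proportional to in-flight */
        }
    }
    submit_batch(r);                /* single flush per ring pass */
}
```

The point of snapshotting `inflight` up front is that the bound stays fixed while completions race in, so one pass cannot grow without limit.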

[Qemu-devel] [PATCH 2/3] Improve xen_disk response latency

2018-11-02 Thread Tim Smith
[...] reads as soon as possible adds latency to the guest. To alleviate that, complete IO requests as soon as they come back. blk_send_response() already returns a value indicating whether a notify should be sent, which is all the batching we need.

Signed-off-by: Tim Smith
---
 hw/block/xen_disk.c | 43 [...]
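The completion-path idea above, push each response onto the ring the moment its IO returns, and let the send-response return value decide when a notify is actually needed, looks roughly like this. A minimal sketch with invented names and a toy notify rule; the real logic lives in QEMU's xen_disk and the blkif ring macros.

```c
#include <stdbool.h>

struct dev {
    unsigned responses;    /* responses pushed onto the ring */
    unsigned notifies;     /* event-channel kicks actually sent */
};

/* Stand-in for blk_send_response(): pushes one response and reports
 * whether the front end needs a kick. Toy rule (an assumption): only
 * the first response of a batch wants a notify. */
static bool send_response(struct dev *d)
{
    d->responses++;
    return d->responses == 1;
}

/* Completion callback: respond immediately instead of holding the
 * response back for the rest of the batch; the return value already
 * coalesces notifies, so no extra batching layer is needed. */
static void io_complete(struct dev *d)
{
    if (send_response(d)) {
        d->notifies++;
    }
}
```

Completions reach the guest one IO sooner, while the number of event-channel notifies stays batched exactly as before.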

[Qemu-devel] [PATCH 3/3] Avoid repeated memory allocation in xen_disk

2018-11-02 Thread Tim Smith
[...] BLKIF_MAX_SEGMENTS_PER_REQUEST pages (currently 11 pages) when the ioreq is created, and keep that allocation until it is destroyed. Since the ioreqs themselves are re-used via a free list, this should actually improve memory usage.

Signed-off-by: Tim Smith
---
 hw/block/xen_disk.c | 11 [...]
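The allocation scheme described, one worst-case buffer allocated when the ioreq is created and kept alive across free-list reuse, can be sketched as below. This is a hypothetical stand-in, not QEMU's `struct ioreq`; only the constants mirror the description (11 segments, page-sized units).

```c
#include <stdlib.h>

#define MAX_SEGS 11u      /* BLKIF_MAX_SEGMENTS_PER_REQUEST */
#define PAGE_SZ  4096u

struct ioreq {
    void *buf;            /* worst-case buffer, allocated exactly once */
    struct ioreq *next;   /* free-list link */
};

static struct ioreq *freelist;

static struct ioreq *ioreq_get(void)
{
    struct ioreq *r = freelist;
    if (r) {
        freelist = r->next;   /* reuse: buffer survives on the free list */
        return r;
    }
    r = calloc(1, sizeof(*r));
    /* One page-aligned allocation for the request's lifetime. */
    if (!r || posix_memalign(&r->buf, PAGE_SZ, MAX_SEGS * PAGE_SZ)) {
        free(r);
        return NULL;
    }
    return r;
}

static void ioreq_put(struct ioreq *r)
{
    r->next = freelist;       /* keep r->buf for the next user */
    freelist = r;
}
```

Because ioreqs cycle through the free list anyway, the steady state allocates nothing per request, which is how a bigger per-ioreq buffer can still shrink the dirty heap.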

[Qemu-devel] [PATCH 0/3] Performance improvements for xen_disk v2

2018-11-02 Thread Tim Smith
[...]posix_memalign() reduced the dirty heap from 25MB to 5MB in the case of a single datapath process while also improving performance. v2 removes some checkpatch complaints and fixes the CCs.
---
Tim Smith (3):
  Improve xen_disk batching behaviour
  Improve xen_disk response latency
  Avoid repeated memory allocation in xen_disk

[Qemu-devel] [PATCH 3/3] Avoid repeated memory allocation in xen_disk

2018-11-02 Thread Tim Smith
[...] BLKIF_MAX_SEGMENTS_PER_REQUEST pages (currently 11 pages) when the ioreq is created, and keep that allocation until it is destroyed. Since the ioreqs themselves are re-used via a free list, this should actually improve memory usage.

Signed-off-by: Tim Smith
---
 hw/block/xen_disk.c | 10 [...]

[Qemu-devel] [PATCH 1/3] Improve xen_disk batching behaviour

2018-11-02 Thread Tim Smith
[...] amount proportional to the number which were already in flight at the time we started reading the ring.

Signed-off-by: Tim Smith
---
 hw/block/xen_disk.c | 29 +
 1 file changed, 29 insertions(+)

diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
index 36eff94f84 [...]

[Qemu-devel] [PATCH 2/3] Improve xen_disk response latency

2018-11-02 Thread Tim Smith
[...] reads as soon as possible adds latency to the guest. To alleviate that, complete IO requests as soon as they come back. blk_send_response() already returns a value indicating whether a notify should be sent, which is all the batching we need.

Signed-off-by: Tim Smith
---
 hw/block/xen_disk.c | 43 [...]

[Qemu-devel] [PATCH 0/3] Performance improvements for xen_disk

2018-11-02 Thread Tim Smith
[...]posix_memalign() reduced the dirty heap from 25MB to 5MB in the case of a single datapath process while also improving performance.
---
Tim Smith (3):
  Improve xen_disk batching behaviour
  Improve xen_disk response latency
  Avoid repeated memory allocation in xen_disk

 hw/block[...]

[Qemu-devel] [PATCH 2/3] Improve xen_disk response latency

2018-09-07 Thread Tim Smith
[...] reads as soon as possible adds latency to the guest. To alleviate that, complete IO requests as soon as they come back. blk_send_response() already returns a value indicating whether a notify should be sent, which is all the batching we need.

Signed-off-by: Tim Smith
---
 hw/block/xen_disk.c | 43 [...]

[Qemu-devel] [PATCH 3/3] Avoid repeated memory allocation in xen_disk

2018-09-07 Thread Tim Smith
[...] BLKIF_MAX_SEGMENTS_PER_REQUEST pages (currently 11 pages) when the ioreq is created, and keep that allocation until it is destroyed. Since the ioreqs themselves are re-used via a free list, this should actually improve memory usage.

Signed-off-by: Tim Smith
---
 hw/block/xen_disk.c | 10 [...]

[Qemu-devel] [PATCH 1/3] Improve xen_disk batching behaviour

2018-09-07 Thread Tim Smith
[...] amount proportional to the number which were already in flight at the time we started reading the ring.

Signed-off-by: Tim Smith
---
 hw/block/xen_disk.c | 29 +
 1 file changed, 29 insertions(+)

diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
index 36eff94f84 [...]