On Tue, Dec 01, 2015 at 05:55:48PM +0000, Julien Grall wrote:
> Hi Konrad,
>
> On 01/12/15 15:37, Konrad Rzeszutek Wilk wrote:
> > On Wed, Nov 18, 2015 at 06:57:23PM +0000, Julien Grall wrote:
> >> Hi all,
> >>
> >> This is a follow-up on the previous discussion [1] about guests using
> >> 64KB page granularity, which don't boot when the backend isn't using
> >> indirect descriptors.
> >>
> >> This has been successfully tested on ARM64 with both 64KB and 4KB page
> >> granularity guests and QEMU as the backend. Indeed, QEMU doesn't support
> >> indirect descriptors.
> >>
> >> This series is based on xentip/for-linus-4.4, which includes the support
> >> for 64KB Linux guests.
> >
> > In the meantime the multi-queue patches have been put in the queue:
> >
> > git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
> > #devel/for-jens-4.5
> >
> > I will try rebasing the patches on top of that.
>
> It will likely clash with the multiqueue changes. I will rebase this
> patch series and resend it.
I got patch #1 ported over (see attached). Testing it now.

> Regards,
>
> --
> Julien Grall
>From ebbda22e54e6557188298a3e1d6c0dcf4b04da26 Mon Sep 17 00:00:00 2001
From: Julien Grall <julien.gr...@citrix.com>
Date: Wed, 18 Nov 2015 18:57:24 +0000
Subject: [PATCH] block/xen-blkfront: Introduce blkif_ring_get_request
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The code to get a request is always the same. Therefore we can factor
it out into a single function.

Signed-off-by: Julien Grall <julien.gr...@citrix.com>
Acked-by: Roger Pau Monné <roger....@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.w...@oracle.com>
---
 drivers/block/xen-blkfront.c | 30 ++++++++++++++++++++----------
 1 file changed, 20 insertions(+), 10 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 4f77d36..38af260 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -481,6 +481,24 @@ static int blkif_ioctl(struct block_device *bdev, fmode_t mode,
 	return 0;
 }
 
+static unsigned long blkif_ring_get_request(struct blkfront_ring_info *rinfo,
+					    struct request *req,
+					    struct blkif_request **ring_req)
+{
+	unsigned long id;
+	struct blkfront_info *info = rinfo->dev_info;
+
+	*ring_req = RING_GET_REQUEST(&rinfo->ring, rinfo->ring.req_prod_pvt);
+	rinfo->ring.req_prod_pvt++;
+
+	id = get_id_from_freelist(rinfo);
+	rinfo->shadow[id].request = req;
+
+	(*ring_req)->u.rw.id = id;
+
+	return id;
+}
+
 static int blkif_queue_discard_req(struct request *req, struct blkfront_ring_info *rinfo)
 {
 	struct blkfront_info *info = rinfo->dev_info;
@@ -488,9 +506,7 @@ static int blkif_queue_discard_req(struct request *req, struct blkfront_ring_inf
 	unsigned long id;
 
 	/* Fill out a communications ring structure. */
-	ring_req = RING_GET_REQUEST(&rinfo->ring, rinfo->ring.req_prod_pvt);
-	id = get_id_from_freelist(rinfo);
-	rinfo->shadow[id].request = req;
+	id = blkif_ring_get_request(rinfo, req, &ring_req);
 
 	ring_req->operation = BLKIF_OP_DISCARD;
 	ring_req->u.discard.nr_sectors = blk_rq_sectors(req);
@@ -501,8 +517,6 @@ static int blkif_queue_discard_req(struct request *req, struct blkfront_ring_inf
 	else
 		ring_req->u.discard.flag = 0;
 
-	rinfo->ring.req_prod_pvt++;
-
 	/* Keep a private copy so we can reissue requests when recovering. */
 	rinfo->shadow[id].req = *ring_req;
 
@@ -635,9 +649,7 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 	}
 
 	/* Fill out a communications ring structure. */
-	ring_req = RING_GET_REQUEST(&rinfo->ring, rinfo->ring.req_prod_pvt);
-	id = get_id_from_freelist(rinfo);
-	rinfo->shadow[id].request = req;
+	id = blkif_ring_get_request(rinfo, req, &ring_req);
 
 	BUG_ON(info->max_indirect_segments == 0 &&
 	       GREFS(req->nr_phys_segments) > BLKIF_MAX_SEGMENTS_PER_REQUEST);
@@ -716,8 +728,6 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 	if (setup.segments)
 		kunmap_atomic(setup.segments);
 
-	rinfo->ring.req_prod_pvt++;
-
 	/* Keep a private copy so we can reissue requests when recovering. */
 	rinfo->shadow[id].req = *ring_req;
 
-- 
2.1.0
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel