Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation

2020-03-13 Thread Anchal Agarwal
On Thu, Mar 12, 2020 at 10:04:35AM +0100, Roger Pau Monné wrote:
>
> On Wed, Mar 11, 2020 at 10:25:15PM +0000, Agarwal, Anchal wrote:
> > Hi Roger,
> > I am trying to understand your comments on indirect descriptors without
> > polluting the mailing list, hence emailing you personally.
> 
> IMO it's better to send to the mailing list. The issues or questions
> you have about indirect descriptors can be helpful to others in the
> future. If there's no confidential information please send to the
> list next time.
> 
> Feel free to forward this reply to the list also.
>
Sure no problem at all.
> > Hope that's ok by you.  Please see my response inline.
> >
> > On Fri, Mar 06, 2020 at 06:40:33PM +0000, Anchal Agarwal wrote:
> > > On Fri, Feb 21, 2020 at 03:24:45PM +0100, Roger Pau Monné wrote:
> > > > On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> > > > >   blkfront_gather_backend_features(info);
> > > > >   /* Reset limits changed by blk_mq_update_nr_hw_queues(). */
> > > > >   blkif_set_queue_limits(info);
> > > > > @@ -2046,6 +2063,9 @@ static int blkif_recover(struct blkfront_info *info)
> > > > >   kick_pending_request_queues(rinfo);
> > > > >   }
> > > > >
> > > > > + if (frozen)
> > > > > + return 0;
> > > >
> > > > I have to admit my memory is fuzzy here, but don't you need to
> > > > re-queue requests in case the backend has different limits of indirect
> > > > descriptors per request for example?
> > > >
> > > > Or do we expect that the frontend is always going to be resumed on the
> > > > same backend, and thus features won't change?
> > > >
> > > So to understand your question better here, AFAIU the maximum number of
> > > indirect grefs is fixed by the backend, but the frontend can issue
> > > requests with any number of indirect segments as long as it's less than
> > > the number provided by the backend. So by your question you mean this max
> > > number of MAX_INDIRECT_SEGMENTS, 256 on the backend, can change?
> >
> > Yes, number of indirect descriptors supported by the backend can
> > change, because you moved to a different backend, or because the
> > maximum supported by the backend has changed. It's also possible to
> > resume on a backend that has no indirect descriptors support at all.
> >
> > AFAIU, the code for requeuing the requests is only for xen suspend/resume.
> > These requests in the queue are the same ones that get added to the
> > queuelist in blkfront_resume. Also, even if indirect descriptors change on
> > resume, they just need to be advertised to the frontend, which would just
> > mean that a request can process more data.
> 
> Or less data. You could legitimately migrate from a host that has
> indirect descriptors to one without, in which case requests would need
> to be smaller to fit the ring slots.
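
For reference, the limit Roger refers to is applied by blkif_set_queue_limits();
a paraphrase of the mainline logic of that era, trimmed to the segment limit
(the full function also sets sector and discard limits):

	static void blkif_set_queue_limits(struct blkfront_info *info)
	{
		struct request_queue *rq = info->rq;
		/* Fall back to the ring-slot maximum when the backend offers
		 * no indirect descriptors. */
		unsigned int segments = info->max_indirect_segments ? :
					BLKIF_MAX_SEGMENTS_PER_REQUEST;

		blk_queue_max_segments(rq, segments / GRANTS_PER_PSEG);
	}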
> 
> > We do set up indirect descriptors on the frontend in blkif_recover before
> > returning, and queue limits are set up accordingly.
> > Am I missing anything here?
> 
> Calling blkif_recover should take care of it AFAICT. As it resets the
> queue limits according to the data announced on xenstore.
> 
> I think I got confused, using blkif_recover should be fine, sorry.
> 
Ok. Thanks for confirming. I will fixup other suggestions in the patch and send
out a v4.
> >
> > > > > @@ -2625,6 +2671,62 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
> > > > >   mutex_unlock(&blkfront_mutex);
> > > > >  }
> > > > >
> > > > > +static int blkfront_freeze(struct xenbus_device *dev)
> > > > > +{
> > > > > + unsigned int i;
> > > > > + struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> > > > > + struct blkfront_ring_info *rinfo;
> > > > > + /* This would be a reasonable timeout as used in xenbus_dev_shutdown() */
> > > > > + unsigned int timeout = 5 * HZ;
> > > > > + int err = 0;
> > > > > +
> > > > > + info->connected = BLKIF_STATE_FREEZING;
> > > > > +
> > > > > + blk_mq_freeze_queue(info->rq);
> > > > > + blk_mq_quiesce_queue(info->rq);
> > > >
> > > > Don't you need to also drain the queue and make sure it's empty?
> > > >
> > > blk_mq_freeze_queue and blk_mq_quiesce_queue should take care of running
> > > HW queues synchronously and making sure all the ongoing dispatches have
> > > finished. Did I understand your question right?
> >
> > Can you please add some check to that end? (ie: that there are no
> > pending requests on any queue?)
> >
> > Well a check to see if there are any unconsumed responses could be done.
> > I haven't come across a use case in my testing where this failed, but maybe
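
For reference, a minimal sketch of the kind of drained-rings check discussed
above. The helper is hypothetical (not part of the posted series) and assumes
the blkfront_info/blkfront_ring_info layout visible in the quoted patch:

	/*
	 * Sketch only: return true when no ring has work in flight. Meant to
	 * be called after blk_mq_freeze_queue()/blk_mq_quiesce_queue() in
	 * blkfront_freeze().
	 */
	static bool blkfront_rings_idle(struct blkfront_info *info)
	{
		unsigned int i;

		for (i = 0; i < info->nr_rings; i++) {
			struct blkfront_ring_info *rinfo = &info->rinfo[i];

			/* Responses posted by the backend, not consumed yet. */
			if (RING_HAS_UNCONSUMED_RESPONSES(&rinfo->ring))
				return false;

			/* Requests issued to the backend, not answered yet. */
			if (rinfo->ring.req_prod_pvt != rinfo->ring.rsp_cons)
				return false;
		}
		return true;
	}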

Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation

2020-03-09 Thread Roger Pau Monné
On Fri, Mar 06, 2020 at 06:40:33PM +0000, Anchal Agarwal wrote:
> On Fri, Feb 21, 2020 at 03:24:45PM +0100, Roger Pau Monné wrote:
> > On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> > >   blkfront_gather_backend_features(info);
> > >   /* Reset limits changed by blk_mq_update_nr_hw_queues(). */
> > >   blkif_set_queue_limits(info);
> > > @@ -2046,6 +2063,9 @@ static int blkif_recover(struct blkfront_info *info)
> > >   kick_pending_request_queues(rinfo);
> > >   }
> > >  
> > > + if (frozen)
> > > + return 0;
> > 
> > I have to admit my memory is fuzzy here, but don't you need to
> > re-queue requests in case the backend has different limits of indirect
> > descriptors per request for example?
> > 
> > Or do we expect that the frontend is always going to be resumed on the
> > same backend, and thus features won't change?
> > 
> So to understand your question better here, AFAIU the maximum number of
> indirect 
> grefs is fixed by the backend, but the frontend can issue requests with any 
> number of indirect segments as long as it's less than the number provided by 
> the backend. So by your question you mean this max number of 
> MAX_INDIRECT_SEGMENTS 
> 256 on the backend, can change?

Yes, number of indirect descriptors supported by the backend can
change, because you moved to a different backend, or because the
maximum supported by the backend has changed. It's also possible to
resume on a backend that has no indirect descriptors support at all.
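
For context, on (re)connect the frontend simply re-reads the backend's
advertised limit from xenstore; a sketch mirroring what
blkfront_gather_backend_features() already does in mainline
(xenbus_read_unsigned() is the existing helper):

	unsigned int indirect_segments;

	/* Returns 0 if the (possibly new) backend has no indirect support. */
	indirect_segments = xenbus_read_unsigned(info->xbdev->otherend,
				"feature-max-indirect-segments", 0);
	/* Clamp to what this frontend is willing to use. */
	info->max_indirect_segments = min(indirect_segments,
					  xen_blkif_max_segments);

The queue limits derived from max_indirect_segments then follow whatever the
new backend advertises.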

> > > @@ -2625,6 +2671,62 @@ static void blkif_release(struct gendisk *disk, 
> > > fmode_t mode)
> > >   mutex_unlock(&blkfront_mutex);
> > >  }
> > >  
> > > +static int blkfront_freeze(struct xenbus_device *dev)
> > > +{
> > > + unsigned int i;
> > > + struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> > > + struct blkfront_ring_info *rinfo;
> > > + /* This would be a reasonable timeout as used in xenbus_dev_shutdown() */
> > > + unsigned int timeout = 5 * HZ;
> > > + int err = 0;
> > > +
> > > + info->connected = BLKIF_STATE_FREEZING;
> > > +
> > > + blk_mq_freeze_queue(info->rq);
> > > + blk_mq_quiesce_queue(info->rq);
> > 
> > Don't you need to also drain the queue and make sure it's empty?
> > 
> blk_mq_freeze_queue and blk_mq_quiesce_queue should take care of running HW 
> queues synchronously
> and making sure all the ongoing dispatches have finished. Did I understand 
> your question right?

Can you please add some check to that end? (ie: that there are no
pending requests on any queue?)

Thanks, Roger.
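
For orientation, the quoted blkfront_freeze() excerpt stops at the quiesce
call. A hedged sketch of how the handler can continue, assembled from the
pieces visible in the patch (the 5*HZ timeout and the
wait_backend_disconnected completion) — a reconstruction, not a verbatim
quote of v3:

	/* Kick the backend to disconnect, then wait for it. */
	xenbus_switch_state(dev, XenbusStateClosing);
	if (!wait_for_completion_timeout(&info->wait_backend_disconnected,
					 timeout)) {
		/* wait_for_completion_timeout() returns 0 on timeout. */
		err = -EBUSY;
		xenbus_dev_error(dev, err, "Freezing timed out");
	}

	return err;

The completion would be fired from blkback_changed() once the backend reaches
the Closed state.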


Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation

2020-03-06 Thread Anchal Agarwal
On Fri, Feb 21, 2020 at 03:24:45PM +0100, Roger Pau Monné wrote:
> On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> > From: Munehisa Kamata
> >
> > Add freeze, thaw and restore callbacks for PM suspend and hibernation
> > support. All frontend drivers that need to use PM_HIBERNATION/PM_SUSPEND
> > events need to implement these xenbus_driver callbacks.
> > The freeze handler stops a block-layer queue and disconnects the
> > frontend from the backend while freeing ring_info and associated resources.
> > The restore handler re-allocates ring_info and re-connects to the
> > backend, so the rest of the kernel can continue to use the block device
> > transparently. Also, the handlers are used for both PM suspend and
> > hibernation so that we can keep the existing suspend/resume callbacks for
> > Xen suspend without modification. Before disconnecting from backend,
> > we need to prevent any new IO from being queued and wait for existing
> > IO to complete. Freeze/unfreeze of the queues will guarantee that there
> > are no requests in use on the shared ring.
> > 
> > Note: For older backends, if a backend doesn't have commit '12ea729645ace'
> > (xen/blkback: unmap all persistent grants when frontend gets disconnected),
> > the frontend may see a massive amount of grant table warnings when freeing
> > resources.
> > [   36.852659] deferring g.e. 0xf9 (pfn 0x)
> > [   36.855089] xen:grant_table: WARNING: g.e. 0x112 still in use!
> > 
> > In this case, persistent grants would need to be disabled.
> > 
> > [Anchal Changelog: Removed timeout/request during blkfront freeze.
> > Fixed major part of the code to work with blk-mq]
> > Signed-off-by: Anchal Agarwal 
> > Signed-off-by: Munehisa Kamata 
> > ---
> >  drivers/block/xen-blkfront.c | 119 ---
> >  1 file changed, 112 insertions(+), 7 deletions(-)
> > 
> > diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> > index 478120233750..d715ed3cb69a 100644
> > --- a/drivers/block/xen-blkfront.c
> > +++ b/drivers/block/xen-blkfront.c
> > @@ -47,6 +47,8 @@
> >  #include 
> >  #include 
> >  #include 
> > +#include 
> > +#include 
> >  
> >  #include 
> >  #include 
> > @@ -79,6 +81,8 @@ enum blkif_state {
> > BLKIF_STATE_DISCONNECTED,
> > BLKIF_STATE_CONNECTED,
> > BLKIF_STATE_SUSPENDED,
> > +   BLKIF_STATE_FREEZING,
> > +   BLKIF_STATE_FROZEN
> >  };
> >  
> >  struct grant {
> > @@ -220,6 +224,7 @@ struct blkfront_info
> > struct list_head requests;
> > struct bio_list bio_list;
> > struct list_head info_list;
> > +   struct completion wait_backend_disconnected;
> >  };
> >  
> >  static unsigned int nr_minors;
> > @@ -261,6 +266,7 @@ static DEFINE_SPINLOCK(minor_lock);
> >  static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo);
> >  static void blkfront_gather_backend_features(struct blkfront_info *info);
> >  static int negotiate_mq(struct blkfront_info *info);
> > +static void __blkif_free(struct blkfront_info *info);
> 
> I'm not particularly fond of adding underscore prefixes to functions,
> I would rather use a more descriptive name if possible.
> blkif_free_{queues/rings} maybe?
>
Apologies for the delayed response as I was OOTO.
Appreciate your feedback. Will fix
> >  
> >  static int get_id_from_freelist(struct blkfront_ring_info *rinfo)
> >  {
> > @@ -995,6 +1001,7 @@ static int xlvbd_init_blk_queue(struct gendisk *gd, 
> > u16 sector_size,
> > info->sector_size = sector_size;
> > info->physical_sector_size = physical_sector_size;
> > blkif_set_queue_limits(info);
> > +   init_completion(>wait_backend_disconnected);
> >  
> > return 0;
> >  }
> > @@ -1218,6 +1225,8 @@ static void xlvbd_release_gendisk(struct 
> > blkfront_info *info)
> >  /* Already hold rinfo->ring_lock. */
> >  static inline void kick_pending_request_queues_locked(struct 
> > blkfront_ring_info *rinfo)
> >  {
> > +   if (unlikely(rinfo->dev_info->connected == BLKIF_STATE_FREEZING))
> > +   return;
> 
> Do you really need this check here?
> 
> The queue will be frozen and quiesced in blkfront_freeze when the state
> is set to BLKIF_STATE_FREEZING, and then the call to
> blk_mq_start_stopped_hw_queues is just a noop as long as the queue is
> quiesced (see blk_mq_run_hw_queue).
> 
You are right. Will fix it. May have skipped this part of the patch when fixing
blkfront_freeze.
> > if (!RING_FULL(&rinfo->ring))
> > blk_mq_start_stopped_hw_queues(rinfo->dev_info->rq, true);
> >  }
> > @@ -1341,8 +1350,6 @@ static void blkif_free_ring(struct blkfront_ring_info 
> > *rinfo)
> >  
> >  static void blkif_free(struct blkfront_info *info, int suspend)
> >  {
> > -   unsigned int i;
> > -
> > /* Prevent new requests being issued until we fix things up. */
> > info->connected = suspend ?
> > BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
> > @@ -1350,6 +1357,13 @@ static void blkif_free(struct blkfront_info *info, 
> > int suspend)
> >

Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation

2020-02-21 Thread Roger Pau Monné
On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> From: Munehisa Kamata
>
> Add freeze, thaw and restore callbacks for PM suspend and hibernation
> support. All frontend drivers that need to use PM_HIBERNATION/PM_SUSPEND
> events need to implement these xenbus_driver callbacks.
> The freeze handler stops a block-layer queue and disconnects the
> frontend from the backend while freeing ring_info and associated resources.
> The restore handler re-allocates ring_info and re-connects to the
> backend, so the rest of the kernel can continue to use the block device
> transparently. Also, the handlers are used for both PM suspend and
> hibernation so that we can keep the existing suspend/resume callbacks for
> Xen suspend without modification. Before disconnecting from backend,
> we need to prevent any new IO from being queued and wait for existing
> IO to complete. Freeze/unfreeze of the queues will guarantee that there
> are no requests in use on the shared ring.
> 
> Note: For older backends, if a backend doesn't have commit '12ea729645ace'
> (xen/blkback: unmap all persistent grants when frontend gets disconnected),
> the frontend may see a massive amount of grant table warnings when freeing
> resources.
> [   36.852659] deferring g.e. 0xf9 (pfn 0x)
> [   36.855089] xen:grant_table: WARNING: g.e. 0x112 still in use!
> 
> In this case, persistent grants would need to be disabled.
> 
> [Anchal Changelog: Removed timeout/request during blkfront freeze.
> Fixed major part of the code to work with blk-mq]
> Signed-off-by: Anchal Agarwal 
> Signed-off-by: Munehisa Kamata 
> ---
>  drivers/block/xen-blkfront.c | 119 ---
>  1 file changed, 112 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index 478120233750..d715ed3cb69a 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -47,6 +47,8 @@
>  #include 
>  #include 
>  #include 
> +#include 
> +#include 
>  
>  #include 
>  #include 
> @@ -79,6 +81,8 @@ enum blkif_state {
>   BLKIF_STATE_DISCONNECTED,
>   BLKIF_STATE_CONNECTED,
>   BLKIF_STATE_SUSPENDED,
> + BLKIF_STATE_FREEZING,
> + BLKIF_STATE_FROZEN
>  };
>  
>  struct grant {
> @@ -220,6 +224,7 @@ struct blkfront_info
>   struct list_head requests;
>   struct bio_list bio_list;
>   struct list_head info_list;
> + struct completion wait_backend_disconnected;
>  };
>  
>  static unsigned int nr_minors;
> @@ -261,6 +266,7 @@ static DEFINE_SPINLOCK(minor_lock);
>  static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo);
>  static void blkfront_gather_backend_features(struct blkfront_info *info);
>  static int negotiate_mq(struct blkfront_info *info);
> +static void __blkif_free(struct blkfront_info *info);

I'm not particularly fond of adding underscore prefixes to functions,
I would rather use a more descriptive name if possible.
blkif_free_{queues/rings} maybe?

>  
>  static int get_id_from_freelist(struct blkfront_ring_info *rinfo)
>  {
> @@ -995,6 +1001,7 @@ static int xlvbd_init_blk_queue(struct gendisk *gd, u16 
> sector_size,
>   info->sector_size = sector_size;
>   info->physical_sector_size = physical_sector_size;
>   blkif_set_queue_limits(info);
> + init_completion(&info->wait_backend_disconnected);
>  
>   return 0;
>  }
> @@ -1218,6 +1225,8 @@ static void xlvbd_release_gendisk(struct blkfront_info 
> *info)
>  /* Already hold rinfo->ring_lock. */
>  static inline void kick_pending_request_queues_locked(struct 
> blkfront_ring_info *rinfo)
>  {
> + if (unlikely(rinfo->dev_info->connected == BLKIF_STATE_FREEZING))
> + return;

Do you really need this check here?

The queue will be frozen and quiesced in blkfront_freeze when the state
is set to BLKIF_STATE_FREEZING, and then the call to
blk_mq_start_stopped_hw_queues is just a noop as long as the queue is
quiesced (see blk_mq_run_hw_queue).
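
For reference, the guard Roger points at, paraphrased from block/blk-mq.c of
that era (a quiesced queue reports no need to run, so kicking it is a no-op):

	/* Inside blk_mq_run_hw_queue(), roughly: */
	need_run = !blk_queue_quiesced(hctx->queue) &&
		   blk_mq_hctx_has_pending(hctx);
	if (need_run)
		__blk_mq_delay_run_hw_queue(hctx, async, 0);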

>   if (!RING_FULL(&rinfo->ring))
>   blk_mq_start_stopped_hw_queues(rinfo->dev_info->rq, true);
>  }
> @@ -1341,8 +1350,6 @@ static void blkif_free_ring(struct blkfront_ring_info 
> *rinfo)
>  
>  static void blkif_free(struct blkfront_info *info, int suspend)
>  {
> - unsigned int i;
> -
>   /* Prevent new requests being issued until we fix things up. */
>   info->connected = suspend ?
>   BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
> @@ -1350,6 +1357,13 @@ static void blkif_free(struct blkfront_info *info, int 
> suspend)
>   if (info->rq)
>   blk_mq_stop_hw_queues(info->rq);
>  
> + __blkif_free(info);
> +}
> +
> +static void __blkif_free(struct blkfront_info *info)
> +{
> + unsigned int i;
> +
>   for (i = 0; i < info->nr_rings; i++)
>   blkif_free_ring(&info->rinfo[i]);
>  
> @@ -1553,8 +1567,10 @@ static irqreturn_t blkif_interrupt(int irq, void 
> *dev_id)
>   struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
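
To make the shape of the series concrete: a hedged sketch of how the
freeze/thaw/restore callbacks from the commit message would be wired into
blkfront's xenbus_driver, assuming the series adds .freeze/.thaw/.restore
fields to struct xenbus_driver (that hunk is not part of the quoted excerpt):

	static struct xenbus_driver blkfront_driver = {
		.ids = blkfront_ids,
		.probe = blkfront_probe,
		.remove = blkfront_remove,
		.resume = blkfront_resume,	/* Xen (xenbus) suspend path */
		.otherend_changed = blkback_changed,
		.is_ready = blkfront_is_ready,
		.freeze = blkfront_freeze,	/* PM suspend / hibernation */
		.thaw = blkfront_restore,
		.restore = blkfront_restore,
	};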

Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation

2020-02-21 Thread Roger Pau Monné
On Fri, Feb 21, 2020 at 10:33:42AM +0000, Durrant, Paul wrote:
> > -Original Message-
> > From: Roger Pau Monné 
> > Sent: 21 February 2020 10:22
> > To: Durrant, Paul 
> > Cc: Agarwal, Anchal ; Valentin, Eduardo
> > ; len.br...@intel.com; pet...@infradead.org;
> > b...@kernel.crashing.org; x...@kernel.org; linux...@kvack.org;
> > pa...@ucw.cz; h...@zytor.com; t...@linutronix.de; sstabell...@kernel.org;
> > fllin...@amaozn.com; Kamata, Munehisa ;
> > mi...@redhat.com; xen-devel@lists.xenproject.org; Singh, Balbir
> > ; ax...@kernel.dk; konrad.w...@oracle.com;
> > b...@alien8.de; boris.ostrov...@oracle.com; jgr...@suse.com;
> > net...@vger.kernel.org; linux...@vger.kernel.org; r...@rjwysocki.net;
> > linux-ker...@vger.kernel.org; vkuzn...@redhat.com; da...@davemloft.net;
> > Woodhouse, David 
> > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks
> > for PM suspend and hibernation
> > 
> > On Fri, Feb 21, 2020 at 09:56:54AM +0000, Durrant, Paul wrote:
> > > > -Original Message-
> > > > From: Roger Pau Monné 
> > > > Sent: 21 February 2020 09:22
> > > > To: Durrant, Paul 
> > > > Cc: Agarwal, Anchal ; Valentin, Eduardo
> > > > ; len.br...@intel.com; pet...@infradead.org;
> > > > b...@kernel.crashing.org; x...@kernel.org; linux...@kvack.org;
> > > > pa...@ucw.cz; h...@zytor.com; t...@linutronix.de;
> > sstabell...@kernel.org;
> > > > fllin...@amaozn.com; Kamata, Munehisa ;
> > > > mi...@redhat.com; xen-devel@lists.xenproject.org; Singh, Balbir
> > > > ; ax...@kernel.dk; konrad.w...@oracle.com;
> > > > b...@alien8.de; boris.ostrov...@oracle.com; jgr...@suse.com;
> > > > net...@vger.kernel.org; linux...@vger.kernel.org; r...@rjwysocki.net;
> > > > linux-ker...@vger.kernel.org; vkuzn...@redhat.com;
> > da...@davemloft.net;
> > > > Woodhouse, David 
> > > > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add
> > callbacks
> > > > for PM suspend and hibernation
> > > >
> > > > On Thu, Feb 20, 2020 at 05:01:52PM +0000, Durrant, Paul wrote:
> > > > > > > Hopefully what I said above illustrates why it may not be 100%
> > > > > > > common.
> > > > > >
> > > > > > Yes, that's fine. I don't expect it to be 100% common (as I guess
> > > > > > that the hooks will have different prototypes), but I expect
> > > > > > that routines can be shared, and that the approach taken can be
> > the
> > > > > > same.
> > > > > >
> > > > > > For example one necessary difference will be that xenbus initiated
> > > > > > suspend won't close the PV connection, in case suspension fails.
> > > > > > On PM suspend you seem to always close the connection beforehand,
> > > > > > so you will always have to re-negotiate on resume even if
> > > > > > suspension failed.
> > > > > >
> > > > > > What I'm mostly worried about is the different approach to ring
> > > > > > draining. Ie: either xenbus is changed to freeze the queues and
> > > > > > drain the shared rings, or PM uses the already existing logic of not
> > > > > > flushing the rings and re-issuing in-flight requests on resume.
> > > > > >
> > > > >
> > > > > Yes, that needs consideration. I don’t think the same semantic can be
> > > > > suitable for both. E.g. in a xen-suspend we need to freeze with as
> > > > > little processing as possible to avoid dirtying RAM late in the
> > > > > migration cycle, and we know that in-flight data can wait. But in a
> > > > > transition to S4 we need to make sure that at least all the in-flight
> > > > > blkif requests get completed, since they probably contain bits of the
> > > > > guest's memory image and that's not going to get saved any other way.
> > > >
> > > > Thanks, that makes sense and something along these lines should be
> > > > added to the commit message IMO.
> > > >
> > > > Wondering about S4, shouldn't we expect the queues to already be
> > > > empty? As any subsystem that wanted to store something to disk should
> > > > make sure requests have been successfully completed before
> > > > suspending.
> > >
> > > What about writing the suspend image itself? Normal

Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation

2020-02-21 Thread Durrant, Paul
> -Original Message-
> From: Roger Pau Monné 
> Sent: 21 February 2020 10:22
> To: Durrant, Paul 
> Cc: Agarwal, Anchal ; Valentin, Eduardo
> ; len.br...@intel.com; pet...@infradead.org;
> b...@kernel.crashing.org; x...@kernel.org; linux...@kvack.org;
> pa...@ucw.cz; h...@zytor.com; t...@linutronix.de; sstabell...@kernel.org;
> fllin...@amaozn.com; Kamata, Munehisa ;
> mi...@redhat.com; xen-devel@lists.xenproject.org; Singh, Balbir
> ; ax...@kernel.dk; konrad.w...@oracle.com;
> b...@alien8.de; boris.ostrov...@oracle.com; jgr...@suse.com;
> net...@vger.kernel.org; linux...@vger.kernel.org; r...@rjwysocki.net;
> linux-ker...@vger.kernel.org; vkuzn...@redhat.com; da...@davemloft.net;
> Woodhouse, David 
> Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks
> for PM suspend and hibernation
> 
> > On Fri, Feb 21, 2020 at 09:56:54AM +0000, Durrant, Paul wrote:
> > > -Original Message-
> > > From: Roger Pau Monné 
> > > Sent: 21 February 2020 09:22
> > > To: Durrant, Paul 
> > > Cc: Agarwal, Anchal ; Valentin, Eduardo
> > > ; len.br...@intel.com; pet...@infradead.org;
> > > b...@kernel.crashing.org; x...@kernel.org; linux...@kvack.org;
> > > pa...@ucw.cz; h...@zytor.com; t...@linutronix.de;
> sstabell...@kernel.org;
> > > fllin...@amaozn.com; Kamata, Munehisa ;
> > > mi...@redhat.com; xen-devel@lists.xenproject.org; Singh, Balbir
> > > ; ax...@kernel.dk; konrad.w...@oracle.com;
> > > b...@alien8.de; boris.ostrov...@oracle.com; jgr...@suse.com;
> > > net...@vger.kernel.org; linux...@vger.kernel.org; r...@rjwysocki.net;
> > > linux-ker...@vger.kernel.org; vkuzn...@redhat.com;
> da...@davemloft.net;
> > > Woodhouse, David 
> > > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add
> callbacks
> > > for PM suspend and hibernation
> > >
> > > On Thu, Feb 20, 2020 at 05:01:52PM +0000, Durrant, Paul wrote:
> > > > > > Hopefully what I said above illustrates why it may not be 100%
> > > > > > common.
> > > > >
> > > > > Yes, that's fine. I don't expect it to be 100% common (as I guess
> > > > > that the hooks will have different prototypes), but I expect
> > > > > that routines can be shared, and that the approach taken can be the
> > > > > same.
> > > > >
> > > > > For example one necessary difference will be that xenbus initiated
> > > > > suspend won't close the PV connection, in case suspension fails.
> > > > > On PM suspend you seem to always close the connection beforehand,
> > > > > so you will always have to re-negotiate on resume even if suspension
> > > > > failed.
> > > > >
> > > > > What I'm mostly worried about is the different approach to ring
> > > > > draining. Ie: either xenbus is changed to freeze the queues and drain
> > > > > the shared rings, or PM uses the already existing logic of not
> > > > > flushing the rings and re-issuing in-flight requests on resume.
> > > > >
> > > >
> > > > Yes, that needs consideration. I don’t think the same semantic can be
> > > > suitable for both. E.g. in a xen-suspend we need to freeze with as
> > > > little processing as possible to avoid dirtying RAM late in the
> > > > migration cycle, and we know that in-flight data can wait. But in a
> > > > transition to S4 we need to make sure that at least all the in-flight
> > > > blkif requests get completed, since they probably contain bits of the
> > > > guest's memory image and that's not going to get saved any other way.
> > >
> > > > Thanks, that makes sense and something along these lines should be
> > > added to the commit message IMO.
> > >
> > > Wondering about S4, shouldn't we expect the queues to already be
> > > empty? As any subsystem that wanted to store something to disk should
> > > make sure requests have been successfully completed before
> > > suspending.
> >
> > What about writing the suspend image itself? Normal filesystem I/O
> > will have been flushed of course, but whatever vestigial kernel
> > actually writes out the hibernation file may well expect a final
> > D0->D3 on the storage device to cause a flush.
> 
> Hm, I have no idea really. I think whatever writes to the disk before
> suspend should actually make sure requests have completed, but what
> you suggest might also be a possibility.
> 
> Can you figure out whether there are requests on the ring or in the
> queue before suspending?

Well there's clearly pending stuff in the ring if rsp_prod != req_prod :-) As 
for internal queues, I don't know how blkfront manages that (or whether it has 
any pending work queue at all).
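
In ring.h terms Paul's check is a comparison of the shared ring indices; a
tiny per-ring sketch (blkfront field names as in the quoted patch):

	/* Anything still in flight on this ring's shared page? */
	bool busy = rinfo->ring.sring->req_prod !=
		    rinfo->ring.sring->rsp_prod;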

  Paul

> 
> > Again, I don't know the specifics for Linux (and Windows actually
> > uses an incarnation of the crash kernel to do the job, which brings
> > with it a whole other set of complexity as far as PV drivers go).
> 
> That seems extremely complex, I'm sure there's a reason for it :).
> 
> Thanks, Roger.

Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation

2020-02-21 Thread Roger Pau Monné
On Fri, Feb 21, 2020 at 09:56:54AM +0000, Durrant, Paul wrote:
> > -Original Message-
> > From: Roger Pau Monné 
> > Sent: 21 February 2020 09:22
> > To: Durrant, Paul 
> > Cc: Agarwal, Anchal ; Valentin, Eduardo
> > ; len.br...@intel.com; pet...@infradead.org;
> > b...@kernel.crashing.org; x...@kernel.org; linux...@kvack.org;
> > pa...@ucw.cz; h...@zytor.com; t...@linutronix.de; sstabell...@kernel.org;
> > fllin...@amaozn.com; Kamata, Munehisa ;
> > mi...@redhat.com; xen-devel@lists.xenproject.org; Singh, Balbir
> > ; ax...@kernel.dk; konrad.w...@oracle.com;
> > b...@alien8.de; boris.ostrov...@oracle.com; jgr...@suse.com;
> > net...@vger.kernel.org; linux...@vger.kernel.org; r...@rjwysocki.net;
> > linux-ker...@vger.kernel.org; vkuzn...@redhat.com; da...@davemloft.net;
> > Woodhouse, David 
> > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks
> > for PM suspend and hibernation
> > 
> > On Thu, Feb 20, 2020 at 05:01:52PM +0000, Durrant, Paul wrote:
> > > > > Hopefully what I said above illustrates why it may not be 100%
> > > > > common.
> > > >
> > > > Yes, that's fine. I don't expect it to be 100% common (as I guess
> > > > that the hooks will have different prototypes), but I expect
> > > > that routines can be shared, and that the approach taken can be the
> > > > same.
> > > >
> > > > For example one necessary difference will be that xenbus initiated
> > > > suspend won't close the PV connection, in case suspension fails. On PM
> > > > suspend you seem to always close the connection beforehand, so you
> > > > will always have to re-negotiate on resume even if suspension failed.
> > > >
> > > > What I'm mostly worried about is the different approach to ring
> > > > draining. Ie: either xenbus is changed to freeze the queues and drain
> > > > the shared rings, or PM uses the already existing logic of not
> > > > flushing the rings and re-issuing in-flight requests on resume.
> > > >
> > >
> > > Yes, that needs consideration. I don’t think the same semantic can be
> > > suitable for both. E.g. in a xen-suspend we need to freeze with as little
> > > processing as possible to avoid dirtying RAM late in the migration cycle,
> > > and we know that in-flight data can wait. But in a transition to S4 we
> > > need to make sure that at least all the in-flight blkif requests get
> > > completed, since they probably contain bits of the guest's memory image
> > > and that's not going to get saved any other way.
> > 
> > Thanks, that makes sense and something along these lines should be
> > added to the commit message IMO.
> > 
> > Wondering about S4, shouldn't we expect the queues to already be
> > empty? As any subsystem that wanted to store something to disk should
> > make sure requests have been successfully completed before
> > suspending.
> 
> What about writing the suspend image itself? Normal filesystem I/O
> will have been flushed of course, but whatever vestigial kernel
> actually writes out the hibernation file may well expect a final
> D0->D3 on the storage device to cause a flush.

Hm, I have no idea really. I think whatever writes to the disk before
suspend should actually make sure requests have completed, but what
you suggest might also be a possibility.

Can you figure out whether there are requests on the ring or in the
queue before suspending?

> Again, I don't know the specifics for Linux (and Windows actually
> uses an incarnation of the crash kernel to do the job, which brings
> with it a whole other set of complexity as far as PV drivers go).

That seems extremely complex, I'm sure there's a reason for it :).

Thanks, Roger.


Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation

2020-02-21 Thread Durrant, Paul
> -Original Message-
> From: Roger Pau Monné 
> Sent: 21 February 2020 09:22
> To: Durrant, Paul 
> Cc: Agarwal, Anchal ; Valentin, Eduardo
> ; len.br...@intel.com; pet...@infradead.org;
> b...@kernel.crashing.org; x...@kernel.org; linux...@kvack.org;
> pa...@ucw.cz; h...@zytor.com; t...@linutronix.de; sstabell...@kernel.org;
> fllin...@amaozn.com; Kamata, Munehisa ;
> mi...@redhat.com; xen-devel@lists.xenproject.org; Singh, Balbir
> ; ax...@kernel.dk; konrad.w...@oracle.com;
> b...@alien8.de; boris.ostrov...@oracle.com; jgr...@suse.com;
> net...@vger.kernel.org; linux...@vger.kernel.org; r...@rjwysocki.net;
> linux-ker...@vger.kernel.org; vkuzn...@redhat.com; da...@davemloft.net;
> Woodhouse, David 
> Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks
> for PM suspend and hibernation
> 
> On Thu, Feb 20, 2020 at 05:01:52PM +0000, Durrant, Paul wrote:
> > > > Hopefully what I said above illustrates why it may not be 100%
> > > > common.
> > >
> > > Yes, that's fine. I don't expect it to be 100% common (as I guess
> > > that the hooks will have different prototypes), but I expect
> > > that routines can be shared, and that the approach taken can be the
> > > same.
> > >
> > > For example one necessary difference will be that xenbus initiated
> > > suspend won't close the PV connection, in case suspension fails. On PM
> > > suspend you seem to always close the connection beforehand, so you
> > > will always have to re-negotiate on resume even if suspension failed.
> > >
> > > What I'm mostly worried about is the different approach to ring
> > > draining. Ie: either xenbus is changed to freeze the queues and drain
> > > the shared rings, or PM uses the already existing logic of not
> > > flushing the rings and re-issuing in-flight requests on resume.
> > >
> >
> > Yes, that needs consideration. I don’t think the same semantic can be
> > suitable for both. E.g. in a xen-suspend we need to freeze with as little
> > processing as possible to avoid dirtying RAM late in the migration cycle,
> > and we know that in-flight data can wait. But in a transition to S4 we
> > need to make sure that at least all the in-flight blkif requests get
> > completed, since they probably contain bits of the guest's memory image
> > and that's not going to get saved any other way.
> 
> Thanks, that makes sense and something along these lines should be
> added to the commit message IMO.
> 
> Wondering about S4, shouldn't we expect the queues to already be
> empty? As any subsystem that wanted to store something to disk should
> make sure requests have been successfully completed before
> suspending.

What about writing the suspend image itself? Normal filesystem I/O will have 
been flushed of course, but whatever vestigial kernel actually writes out the 
hibernation file may well expect a final D0->D3 on the storage device to cause 
a flush. Again, I don't know the specifics for Linux (and Windows actually uses 
an incarnation of the crash kernel to do the job, which brings with it a whole 
other set of complexity as far as PV drivers go).

  Paul

> 
> Thanks, Roger.

Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation

2020-02-21 Thread Roger Pau Monné
On Fri, Feb 21, 2020 at 12:49:18AM +0000, Anchal Agarwal wrote:
> On Thu, Feb 20, 2020 at 10:01:52AM -0700, Durrant, Paul wrote:
> > > -Original Message-
> > > From: Roger Pau Monné 
> > > Sent: 20 February 2020 16:49
> > > To: Durrant, Paul 
> > > Cc: Agarwal, Anchal ; Valentin, Eduardo
> > > ; len.br...@intel.com; pet...@infradead.org;
> > > b...@kernel.crashing.org; x...@kernel.org; linux...@kvack.org;
> > > pa...@ucw.cz; h...@zytor.com; t...@linutronix.de; sstabell...@kernel.org;
> > > fllin...@amaozn.com; Kamata, Munehisa ;
> > > mi...@redhat.com; xen-devel@lists.xenproject.org; Singh, Balbir
> > > ; ax...@kernel.dk; konrad.w...@oracle.com;
> > > b...@alien8.de; boris.ostrov...@oracle.com; jgr...@suse.com;
> > > net...@vger.kernel.org; linux...@vger.kernel.org; r...@rjwysocki.net;
> > > linux-ker...@vger.kernel.org; vkuzn...@redhat.com; da...@davemloft.net;
> > > Woodhouse, David 
> > > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks
> > > for PM suspend and hibernation
> > > For example one necessary difference will be that xenbus initiated
> > > suspend won't close the PV connection, in case suspension fails. On PM
> > > suspend you seem to always close the connection beforehand, so you
> > > will always have to re-negotiate on resume even if suspension failed.
> > >
> I don't get what you mean by 'suspension failure': failure while
> disconnecting the frontend from the backend? [as in this case we mark the
> frontend closed and then wait for completion]
> Or do you mean suspension failing in general after the backend is
> disconnected from the frontend for blkfront?

I don't think you strictly need to disconnect from the backend when
suspending. Just waiting for all requests to finish should be enough.

This has the benefit of not having to renegotiate if the suspension
fails, and thus you can recover from suspension faster in case of
failure. Since you haven't closed the connection with the backend just
unfreezing the queues should get you working again, and avoids all the
renegotiation.
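
A sketch of that alternative, under Roger's stated assumption that the PV
connection stays up. The helpers are hypothetical, built only from existing
blk-mq APIs (blk_mq_freeze_queue() waits for in-flight requests to drain):

	static int blkfront_freeze_keep_connection(struct blkfront_info *info)
	{
		/* Wait out in-flight requests, then stop new dispatches. */
		blk_mq_freeze_queue(info->rq);
		blk_mq_quiesce_queue(info->rq);
		/* Rings drained; no xenbus state change, nothing to
		 * renegotiate on resume. */
		return 0;
	}

	static void blkfront_thaw_keep_connection(struct blkfront_info *info)
	{
		blk_mq_unquiesce_queue(info->rq);
		blk_mq_unfreeze_queue(info->rq);
	}

On a failed suspend the thaw path alone recovers the device.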

> In case of the latter, if anything fails after dpm_suspend(), things need
> to be thawed or set back up, so it should be ok to always re-negotiate just
> to avoid errors.
> 
> > > What I'm mostly worried about is the different approach to ring
> > > draining. Ie: either xenbus is changed to freeze the queues and drain
> > > the shared rings, or PM uses the already existing logic of not
> > > flushing the rings and re-issuing in-flight requests on resume.
> > > 
> > 
> > Yes, that needs consideration. I don’t think the same semantic can be
> > suitable for both. E.g. in a xen-suspend we need to freeze with as little 
> > processing as possible to avoid dirtying RAM late in the migration cycle, 
> > and we know that in-flight data can wait. But in a transition to S4 we need 
> > to make sure that at least all the in-flight blkif requests get completed, 
> > since they probably contain bits of the guest's memory image and that's not 
> > going to get saved any other way.
> > 
> >   Paul
> I agree with Paul here. Just so you know, I did try a hacky way to re-queue
> requests in the past and failed miserably.

Well, it works AFAIK for xenbus initiated suspension, so I would be
interested to know why it doesn't work with PM suspension.

> I doubt [just from my experimentation] re-queuing the requests will work
> for PM hibernation, for the same reason Paul mentioned above, unless you
> give me a pressing reason why it should work.

My main reason is that I don't want to maintain two different
approaches to suspend/resume without a technical argument for it. I'm
not happy to take a bunch of new code just because the current one
doesn't seem to work in your use-case.

That being said, if there's a justification for doing it differently
it needs to be stated clearly in the commit. From the current commit
message I didn't grasp that there was a reason for not using the
current xenbus suspend/resume logic.

> Also, won't it affect the migration time if we start waiting for all the
> inflight requests to complete[last min page faults] ?

Well, it's going to dirty pages that would have to be re-send to the
destination side.

Roger.


Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation

2020-02-21 Thread Roger Pau Monné
On Thu, Feb 20, 2020 at 05:01:52PM +0000, Durrant, Paul wrote:
> > > Hopefully what I said above illustrates why it may not be 100% common.
> > 
> > Yes, that's fine. I don't expect it to be 100% common (as I guess
> > that the hooks will have different prototypes), but I expect
> > that routines can be shared, and that the approach taken can be the
> > same.
> > 
> > For example one necessary difference will be that xenbus initiated
> > suspend won't close the PV connection, in case suspension fails. On PM
> > suspend you seem to always close the connection beforehand, so you
> > will always have to re-negotiate on resume even if suspension failed.
> > 
> > What I'm mostly worried about is the different approach to ring
> > draining. Ie: either xenbus is changed to freeze the queues and drain
> > the shared rings, or PM uses the already existing logic of not
> > flushing the rings and re-issuing in-flight requests on resume.
> > 
> 
> Yes, that needs consideration. I don’t think the same semantic can be
> suitable for both. E.g. in a xen-suspend we need to freeze with as little 
> processing as possible to avoid dirtying RAM late in the migration cycle, and 
> we know that in-flight data can wait. But in a transition to S4 we need to 
> make sure that at least all the in-flight blkif requests get completed, since 
> they probably contain bits of the guest's memory image and that's not going 
> to get saved any other way.

Thanks, that makes sense and something along these lines should be
added to the commit message IMO.

Wondering about S4, shouldn't we expect the queues to already be
empty? As any subsystem that wanted to store something to disk should
make sure requests have been successfully completed before
suspending.

Thanks, Roger.


Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation

2020-02-20 Thread Anchal Agarwal
On Thu, Feb 20, 2020 at 10:01:52AM -0700, Durrant, Paul wrote:
> > -Original Message-
> > From: Roger Pau Monné 
> > Sent: 20 February 2020 16:49
> > To: Durrant, Paul 
> > Cc: Agarwal, Anchal ; Valentin, Eduardo
> > ; len.br...@intel.com; pet...@infradead.org;
> > b...@kernel.crashing.org; x...@kernel.org; linux...@kvack.org;
> > pa...@ucw.cz; h...@zytor.com; t...@linutronix.de; sstabell...@kernel.org;
> > fllin...@amaozn.com; Kamata, Munehisa ;
> > mi...@redhat.com; xen-devel@lists.xenproject.org; Singh, Balbir
> > ; ax...@kernel.dk; konrad.w...@oracle.com;
> > b...@alien8.de; boris.ostrov...@oracle.com; jgr...@suse.com;
> > net...@vger.kernel.org; linux...@vger.kernel.org; r...@rjwysocki.net;
> > linux-ker...@vger.kernel.org; vkuzn...@redhat.com; da...@davemloft.net;
> > Woodhouse, David 
> > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks
> > for PM suspend and hibernation
> > 
> > On Thu, Feb 20, 2020 at 04:23:13PM +0000, Durrant, Paul wrote:
> > > > -Original Message-
> > > > From: Roger Pau Monné 
> > > > Sent: 20 February 2020 15:45
> > > > To: Durrant, Paul 
> > > > Cc: Agarwal, Anchal ; Valentin, Eduardo
> > > > ; len.br...@intel.com; pet...@infradead.org;
> > > > b...@kernel.crashing.org; x...@kernel.org; linux...@kvack.org;
> > > > pa...@ucw.cz; h...@zytor.com; t...@linutronix.de;
> > sstabell...@kernel.org;
> > > > fllin...@amaozn.com; Kamata, Munehisa ;
> > > > mi...@redhat.com; xen-devel@lists.xenproject.org; Singh, Balbir
> > > > ; ax...@kernel.dk; konrad.w...@oracle.com;
> > > > b...@alien8.de; boris.ostrov...@oracle.com; jgr...@suse.com;
> > > > net...@vger.kernel.org; linux...@vger.kernel.org; r...@rjwysocki.net;
> > > > linux-ker...@vger.kernel.org; vkuzn...@redhat.com;
> > da...@davemloft.net;
> > > > Woodhouse, David 
> > > > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add
> > callbacks
> > > > for PM suspend and hibernation
> > > >
> > > > On Thu, Feb 20, 2020 at 08:54:36AM +0000, Durrant, Paul wrote:
> > > > > > -Original Message-
> > > > > > From: Xen-devel  On Behalf
> > Of
> > > > > > Roger Pau Monné
> > > > > > Sent: 20 February 2020 08:39
> > > > > > To: Agarwal, Anchal 
> > > > > > Cc: Valentin, Eduardo ; len.br...@intel.com;
> > > > > > pet...@infradead.org; b...@kernel.crashing.org; x...@kernel.org;
> > linux-
> > > > > > m...@kvack.org; pa...@ucw.cz; h...@zytor.com; t...@linutronix.de;
> > > > > > sstabell...@kernel.org; fllin...@amaozn.com; Kamata, Munehisa
> > > > > > ; mi...@redhat.com; xen-
> > > > de...@lists.xenproject.org;
> > > > > > Singh, Balbir ; ax...@kernel.dk;
> > > > > > konrad.w...@oracle.com; b...@alien8.de; boris.ostrov...@oracle.com;
> > > > > > jgr...@suse.com; net...@vger.kernel.org; linux...@vger.kernel.org;
> > > > > > r...@rjwysocki.net; linux-ker...@vger.kernel.org;
> > vkuzn...@redhat.com;
> > > > > > da...@davemloft.net; Woodhouse, David 
> > > > > > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add
> > > > callbacks
> > > > > > for PM suspend and hibernation
> > > > > >
> > > > > > Thanks for this work, please see below.
> > > > > >
> > > > > > On Wed, Feb 19, 2020 at 06:04:24PM +0000, Anchal Agarwal wrote:
> > > > > > > On Tue, Feb 18, 2020 at 10:16:11AM +0100, Roger Pau Monné wrote:
> > > > > > > > On Mon, Feb 17, 2020 at 11:05:53PM +0000, Anchal Agarwal wrote:
> > > > > > > > > On Mon, Feb 17, 2020 at 11:05:09AM +0100, Roger Pau Monné wrote:
> > > > > > > > > > On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> > > > > > > > > Quiescing the queue seemed a better option here as we want to
> > > > > > > > > make sure ongoing request dispatches are totally drained.
> > > > > > > > > I should accept that some of this notion is borrowed from how
> > > > > > > > > nvme freeze/unfreeze is done, although it's not an
> > > > > > > > > apples-to-apples comparison.

Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation

2020-02-20 Thread Durrant, Paul
> -Original Message-
> From: Roger Pau Monné 
> Sent: 20 February 2020 16:49
> To: Durrant, Paul 
> Cc: Agarwal, Anchal ; Valentin, Eduardo
> ; len.br...@intel.com; pet...@infradead.org;
> b...@kernel.crashing.org; x...@kernel.org; linux...@kvack.org;
> pa...@ucw.cz; h...@zytor.com; t...@linutronix.de; sstabell...@kernel.org;
> fllin...@amaozn.com; Kamata, Munehisa ;
> mi...@redhat.com; xen-devel@lists.xenproject.org; Singh, Balbir
> ; ax...@kernel.dk; konrad.w...@oracle.com;
> b...@alien8.de; boris.ostrov...@oracle.com; jgr...@suse.com;
> net...@vger.kernel.org; linux...@vger.kernel.org; r...@rjwysocki.net;
> linux-ker...@vger.kernel.org; vkuzn...@redhat.com; da...@davemloft.net;
> Woodhouse, David 
> Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks
> for PM suspend and hibernation
> 
> > On Thu, Feb 20, 2020 at 04:23:13PM +0000, Durrant, Paul wrote:
> > > -Original Message-
> > > From: Roger Pau Monné 
> > > Sent: 20 February 2020 15:45
> > > To: Durrant, Paul 
> > > Cc: Agarwal, Anchal ; Valentin, Eduardo
> > > ; len.br...@intel.com; pet...@infradead.org;
> > > b...@kernel.crashing.org; x...@kernel.org; linux...@kvack.org;
> > > pa...@ucw.cz; h...@zytor.com; t...@linutronix.de;
> sstabell...@kernel.org;
> > > fllin...@amaozn.com; Kamata, Munehisa ;
> > > mi...@redhat.com; xen-devel@lists.xenproject.org; Singh, Balbir
> > > ; ax...@kernel.dk; konrad.w...@oracle.com;
> > > b...@alien8.de; boris.ostrov...@oracle.com; jgr...@suse.com;
> > > net...@vger.kernel.org; linux...@vger.kernel.org; r...@rjwysocki.net;
> > > linux-ker...@vger.kernel.org; vkuzn...@redhat.com;
> da...@davemloft.net;
> > > Woodhouse, David 
> > > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add
> callbacks
> > > for PM suspend and hibernation
> > >
> > > On Thu, Feb 20, 2020 at 08:54:36AM +0000, Durrant, Paul wrote:
> > > > > -Original Message-
> > > > > From: Xen-devel  On Behalf
> Of
> > > > > Roger Pau Monné
> > > > > Sent: 20 February 2020 08:39
> > > > > To: Agarwal, Anchal 
> > > > > Cc: Valentin, Eduardo ; len.br...@intel.com;
> > > > > pet...@infradead.org; b...@kernel.crashing.org; x...@kernel.org;
> linux-
> > > > > m...@kvack.org; pa...@ucw.cz; h...@zytor.com; t...@linutronix.de;
> > > > > sstabell...@kernel.org; fllin...@amaozn.com; Kamata, Munehisa
> > > > > ; mi...@redhat.com; xen-
> > > de...@lists.xenproject.org;
> > > > > Singh, Balbir ; ax...@kernel.dk;
> > > > > konrad.w...@oracle.com; b...@alien8.de; boris.ostrov...@oracle.com;
> > > > > jgr...@suse.com; net...@vger.kernel.org; linux...@vger.kernel.org;
> > > > > r...@rjwysocki.net; linux-ker...@vger.kernel.org;
> vkuzn...@redhat.com;
> > > > > da...@davemloft.net; Woodhouse, David 
> > > > > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add
> > > callbacks
> > > > > for PM suspend and hibernation
> > > > >
> > > > > Thanks for this work, please see below.
> > > > >
> > > > > On Wed, Feb 19, 2020 at 06:04:24PM +0000, Anchal Agarwal wrote:
> > > > > > On Tue, Feb 18, 2020 at 10:16:11AM +0100, Roger Pau Monné wrote:
> > > > > > > On Mon, Feb 17, 2020 at 11:05:53PM +0000, Anchal Agarwal wrote:
> > > > > > > > On Mon, Feb 17, 2020 at 11:05:09AM +0100, Roger Pau Monné wrote:
> > > > > > > > > On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> > > > > > > > Quiescing the queue seemed a better option here as we want to
> > > > > > > > make sure ongoing request dispatches are totally drained.
> > > > > > > > I should accept that some of this notion is borrowed from how
> > > > > > > > nvme freeze/unfreeze is done, although it's not an
> > > > > > > > apples-to-apples comparison.
> > > > > > >
> > > > > > > That's fine, but I would still like to request that you use the
> > > > > > > same logic (as much as possible) for both the Xen and the PM
> > > > > > > initiated suspension.
> > > > > > >
> > > > > > > So you either apply this freeze/unfreeze to the Xen suspension
> > > > > > > (an

Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation

2020-02-20 Thread Roger Pau Monné
On Thu, Feb 20, 2020 at 04:23:13PM +0000, Durrant, Paul wrote:
> > -Original Message-
> > From: Roger Pau Monné 
> > Sent: 20 February 2020 15:45
> > To: Durrant, Paul 
> > Cc: Agarwal, Anchal ; Valentin, Eduardo
> > ; len.br...@intel.com; pet...@infradead.org;
> > b...@kernel.crashing.org; x...@kernel.org; linux...@kvack.org;
> > pa...@ucw.cz; h...@zytor.com; t...@linutronix.de; sstabell...@kernel.org;
> > fllin...@amaozn.com; Kamata, Munehisa ;
> > mi...@redhat.com; xen-devel@lists.xenproject.org; Singh, Balbir
> > ; ax...@kernel.dk; konrad.w...@oracle.com;
> > b...@alien8.de; boris.ostrov...@oracle.com; jgr...@suse.com;
> > net...@vger.kernel.org; linux...@vger.kernel.org; r...@rjwysocki.net;
> > linux-ker...@vger.kernel.org; vkuzn...@redhat.com; da...@davemloft.net;
> > Woodhouse, David 
> > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks
> > for PM suspend and hibernation
> > 
> > On Thu, Feb 20, 2020 at 08:54:36AM +0000, Durrant, Paul wrote:
> > > > -Original Message-
> > > > From: Xen-devel  On Behalf Of
> > > > Roger Pau Monné
> > > > Sent: 20 February 2020 08:39
> > > > To: Agarwal, Anchal 
> > > > Cc: Valentin, Eduardo ; len.br...@intel.com;
> > > > pet...@infradead.org; b...@kernel.crashing.org; x...@kernel.org; linux-
> > > > m...@kvack.org; pa...@ucw.cz; h...@zytor.com; t...@linutronix.de;
> > > > sstabell...@kernel.org; fllin...@amaozn.com; Kamata, Munehisa
> > > > ; mi...@redhat.com; xen-
> > de...@lists.xenproject.org;
> > > > Singh, Balbir ; ax...@kernel.dk;
> > > > konrad.w...@oracle.com; b...@alien8.de; boris.ostrov...@oracle.com;
> > > > jgr...@suse.com; net...@vger.kernel.org; linux...@vger.kernel.org;
> > > > r...@rjwysocki.net; linux-ker...@vger.kernel.org; vkuzn...@redhat.com;
> > > > da...@davemloft.net; Woodhouse, David 
> > > > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add
> > callbacks
> > > > for PM suspend and hibernation
> > > >
> > > > Thanks for this work, please see below.
> > > >
> > > > On Wed, Feb 19, 2020 at 06:04:24PM +0000, Anchal Agarwal wrote:
> > > > > On Tue, Feb 18, 2020 at 10:16:11AM +0100, Roger Pau Monné wrote:
> > > > > > On Mon, Feb 17, 2020 at 11:05:53PM +0000, Anchal Agarwal wrote:
> > > > > > > On Mon, Feb 17, 2020 at 11:05:09AM +0100, Roger Pau Monné wrote:
> > > > > > > > On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> > > > > > > Quiescing the queue seemed a better option here as we want to
> > > > > > > make sure ongoing request dispatches are totally drained.
> > > > > > > I should accept that some of this notion is borrowed from how
> > > > > > > nvme freeze/unfreeze is done, although it's not an
> > > > > > > apples-to-apples comparison.
> > > > > >
> > > > > > That's fine, but I would still like to request that you use the
> > > > > > same logic (as much as possible) for both the Xen and the PM
> > > > > > initiated suspension.
> > > > > >
> > > > > > So you either apply this freeze/unfreeze to the Xen suspension (and
> > > > > > drop the re-issuing of requests on resume) or adapt the same
> > > > > > approach as the Xen initiated suspension. Keeping two completely
> > > > > > different approaches to suspension / resume on blkfront is not
> > > > > > suitable long term.
> > > > > >
> > > > > I agree with you that an overhaul of xen suspend/resume wrt blkfront
> > > > > is a good idea; however, IMO that is work for the future and this
> > > > > patch series should not be blocked on it. What do you think?
> > > >
> > > > It's not so much that I think an overhaul of suspend/resume in
> > > > blkfront is needed, it's just that I don't want to have two completely
> > > > different suspend/resume paths inside blkfront.
> > > >
> > > > So from my PoV I think the right solution is to either use the same
> > > > code (as much as possible) as it's currently used by Xen initiated
> > > > suspend/resume, or to also switch Xen initiated suspension to use the
> > > > newly introduced code.
> > >

Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation

2020-02-20 Thread Durrant, Paul
> -Original Message-
> From: Roger Pau Monné 
> Sent: 20 February 2020 15:45
> To: Durrant, Paul 
> Cc: Agarwal, Anchal ; Valentin, Eduardo
> ; len.br...@intel.com; pet...@infradead.org;
> b...@kernel.crashing.org; x...@kernel.org; linux...@kvack.org;
> pa...@ucw.cz; h...@zytor.com; t...@linutronix.de; sstabell...@kernel.org;
> fllin...@amaozn.com; Kamata, Munehisa ;
> mi...@redhat.com; xen-devel@lists.xenproject.org; Singh, Balbir
> ; ax...@kernel.dk; konrad.w...@oracle.com;
> b...@alien8.de; boris.ostrov...@oracle.com; jgr...@suse.com;
> net...@vger.kernel.org; linux...@vger.kernel.org; r...@rjwysocki.net;
> linux-ker...@vger.kernel.org; vkuzn...@redhat.com; da...@davemloft.net;
> Woodhouse, David 
> Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks
> for PM suspend and hibernation
> 
> On Thu, Feb 20, 2020 at 08:54:36AM +0000, Durrant, Paul wrote:
> > > -Original Message-
> > > From: Xen-devel  On Behalf Of
> > > Roger Pau Monné
> > > Sent: 20 February 2020 08:39
> > > To: Agarwal, Anchal 
> > > Cc: Valentin, Eduardo ; len.br...@intel.com;
> > > pet...@infradead.org; b...@kernel.crashing.org; x...@kernel.org; linux-
> > > m...@kvack.org; pa...@ucw.cz; h...@zytor.com; t...@linutronix.de;
> > > sstabell...@kernel.org; fllin...@amaozn.com; Kamata, Munehisa
> > > ; mi...@redhat.com; xen-
> de...@lists.xenproject.org;
> > > Singh, Balbir ; ax...@kernel.dk;
> > > konrad.w...@oracle.com; b...@alien8.de; boris.ostrov...@oracle.com;
> > > jgr...@suse.com; net...@vger.kernel.org; linux...@vger.kernel.org;
> > > r...@rjwysocki.net; linux-ker...@vger.kernel.org; vkuzn...@redhat.com;
> > > da...@davemloft.net; Woodhouse, David 
> > > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add
> callbacks
> > > for PM suspend and hibernation
> > >
> > > Thanks for this work, please see below.
> > >
> > > On Wed, Feb 19, 2020 at 06:04:24PM +0000, Anchal Agarwal wrote:
> > > > On Tue, Feb 18, 2020 at 10:16:11AM +0100, Roger Pau Monné wrote:
> > > > > On Mon, Feb 17, 2020 at 11:05:53PM +0000, Anchal Agarwal wrote:
> > > > > > On Mon, Feb 17, 2020 at 11:05:09AM +0100, Roger Pau Monné wrote:
> > > > > > > On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> > > > > > Quiescing the queue seemed a better option here as we want to make
> > > > > > sure ongoing request dispatches are totally drained.
> > > > > > I should accept that some of this notion is borrowed from how nvme
> > > > > > freeze/unfreeze is done, although it's not an apples-to-apples
> > > > > > comparison.
> > > > >
> > > > > That's fine, but I would still like to request that you use the same
> > > > > logic (as much as possible) for both the Xen and the PM initiated
> > > > > suspension.
> > > > >
> > > > > So you either apply this freeze/unfreeze to the Xen suspension (and
> > > > > drop the re-issuing of requests on resume) or adapt the same approach
> > > > > as the Xen initiated suspension. Keeping two completely different
> > > > > approaches to suspension / resume on blkfront is not suitable long
> > > > > term.
> > > > >
> > > > I agree with you that an overhaul of xen suspend/resume wrt blkfront
> > > > is a good idea; however, IMO that is work for the future and this patch
> > > > series should not be blocked on it. What do you think?
> > >
> > > It's not so much that I think an overhaul of suspend/resume in
> > > blkfront is needed, it's just that I don't want to have two completely
> > > different suspend/resume paths inside blkfront.
> > >
> > > So from my PoV I think the right solution is to either use the same
> > > code (as much as possible) as it's currently used by Xen initiated
> > > suspend/resume, or to also switch Xen initiated suspension to use the
> > > newly introduced code.
> > >
> > > Having two different approaches to suspend/resume in the same driver
> > > is a recipe for disaster IMO: it adds complexity by forcing developers
> > > to take into account two different suspend/resume approaches when
> > > there's no need for it.
> >
> > I disagree. S3 or S4 suspend/resume (or perhaps we should call them power
> > state transitions to avoid confusion) are quite different from Xen
> > suspend/resume.

Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation

2020-02-20 Thread Roger Pau Monné
On Thu, Feb 20, 2020 at 08:54:36AM +, Durrant, Paul wrote:
> > -Original Message-
> > From: Xen-devel  On Behalf Of
> > Roger Pau Monné
> > Sent: 20 February 2020 08:39
> > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks
> > for PM suspend and hibernation
> > 
> > Thanks for this work, please see below.
> > 
> > On Wed, Feb 19, 2020 at 06:04:24PM +, Anchal Agarwal wrote:
> > > On Tue, Feb 18, 2020 at 10:16:11AM +0100, Roger Pau Monné wrote:
> > > > On Mon, Feb 17, 2020 at 11:05:53PM +, Anchal Agarwal wrote:
> > > > > On Mon, Feb 17, 2020 at 11:05:09AM +0100, Roger Pau Monné wrote:
> > > > > > On Fri, Feb 14, 2020 at 11:25:34PM +, Anchal Agarwal wrote:
> > > > > Quiescing the queue seemed a better option here as we want to make
> > > > > sure ongoing request dispatches are totally drained.
> > > > > I should admit that some of these notions are borrowed from how
> > > > > nvme freeze/unfreeze is done, although it's not an apples-to-apples
> > > > > comparison.
> > > >
> > > > That's fine, but I would still like to request that you use the same
> > > > logic (as much as possible) for both the Xen and the PM initiated
> > > > suspension.
> > > >
> > > > So you either apply this freeze/unfreeze to the Xen suspension (and
> > > > drop the re-issuing of requests on resume) or adopt the same approach
> > > > as the Xen initiated suspension. Keeping two completely different
> > > > approaches to suspension / resume on blkfront is not suitable long
> > > > term.
> > > >
> > > I agree with you that an overhaul of xen suspend/resume wrt blkfront
> > > is a good idea; however, IMO that is work for the future and this patch
> > > series should not be blocked on it. What do you think?
> > 
> > It's not so much that I think an overhaul of suspend/resume in
> > blkfront is needed, it's just that I don't want to have two completely
> > different suspend/resume paths inside blkfront.
> > 
> > So from my PoV I think the right solution is to either use the same
> > code (as much as possible) as it's currently used by Xen initiated
> > suspend/resume, or to also switch Xen initiated suspension to use the
> > newly introduced code.
> > 
> > Having two different approaches to suspend/resume in the same driver
> > is a recipe for disaster IMO: it adds complexity by forcing developers
> > to take into account two different suspend/resume approaches when
> > there's no need for it.
> 
> I disagree. S3 or S4 suspend/resume (or perhaps we should call them power 
> state transitions to avoid confusion) are quite different from Xen 
> suspend/resume.
> Power state transitions ought to be, and indeed are, visible to the software 
> running inside the guest. Applications, as well as drivers, can receive 
> notification and take whatever action they deem appropriate.
> Xen suspend/resume OTOH is used when a guest is migrated and the code should 
> go to all lengths possible to make any software running inside the guest 
> (other than Xen specific enlightened code, such as PV drivers) completely 
> unaware that anything has actually happened.
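
[A small aside to make Paul's distinction concrete: power state transitions
are visible inside the guest, e.g. kernel code can subscribe to them with a
PM notifier, while Xen suspend/resume has no such general notification.
Illustrative sketch only, using the standard PM notifier API; the function
and variable names are made up.]

#include <linux/suspend.h>
#include <linux/notifier.h>

/* Runs on PM transitions; Xen suspend/resume never triggers these events. */
static int example_pm_event(struct notifier_block *nb, unsigned long event,
			    void *data)
{
	switch (event) {
	case PM_SUSPEND_PREPARE:	/* S3 about to happen */
	case PM_HIBERNATION_PREPARE:	/* S4 about to happen */
		/* take whatever action is deemed appropriate */
		break;
	case PM_POST_SUSPEND:
	case PM_POST_HIBERNATION:
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block example_pm_nb = {
	.notifier_call = example_pm_event,
};
/* registered during init with register_pm_notifier(&example_pm_nb) */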

So from what you say above, PM state transitions are notified to all
drivers, while Xen suspend/resume is only notified to PV drivers. Here
we are speaking about blkfront, which is a PV driver and should get
notified in both cases, so I'm unsure why the same (or at least very
similar) approach can't be used for both.

The suspend/resume approach proposed by this patch is completely
different from the one used by a xenbus initiated suspend/resume, and
I don't see a technical reason that warrants this difference.

I'm not saying that the approach used here is wrong, it's just that I
don't see the point in having two different ways to do suspend/resume
in the same driver, unless there's a technical reason for it.

Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation

2020-02-20 Thread Durrant, Paul
> -Original Message-
> From: Xen-devel  On Behalf Of
> Roger Pau Monné
> Sent: 20 February 2020 08:39
> Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks
> for PM suspend and hibernation
> 
> Thanks for this work, please see below.
> 
> On Wed, Feb 19, 2020 at 06:04:24PM +, Anchal Agarwal wrote:
> > On Tue, Feb 18, 2020 at 10:16:11AM +0100, Roger Pau Monné wrote:
> > > On Mon, Feb 17, 2020 at 11:05:53PM +, Anchal Agarwal wrote:
> > > > On Mon, Feb 17, 2020 at 11:05:09AM +0100, Roger Pau Monné wrote:
> > > > > On Fri, Feb 14, 2020 at 11:25:34PM +, Anchal Agarwal wrote:
> > > > > > From: Munehisa Kamata
> > > > > >
> > > > > > Add freeze, thaw and restore callbacks for PM suspend and
> > > > > > hibernation support. All frontend drivers that need to use
> > > > > > PM_HIBERNATION/PM_SUSPEND events need to implement these
> > > > > > xenbus_driver callbacks.
> > > > > > The freeze handler stops the block-layer queue and disconnects
> > > > > > the frontend from the backend while freeing ring_info and
> > > > > > associated resources.
> > > > > > The restore handler re-allocates ring_info and re-connects to the
> > > > > > backend, so the rest of the kernel can continue to use the block
> > > > > > device transparently. Also, the handlers are used for both PM
> > > > > > suspend and hibernation so that we can keep the existing
> > > > > > suspend/resume callbacks for Xen suspend without modification.
> > > > > > Before disconnecting from the backend, we need to prevent any new
> > > > > > IO from being queued and wait for existing IO to complete.
> > > > >
> > > > > This is different from Xen (xenstore) initiated suspension, as in
> > > > > that case Linux doesn't flush the rings or disconnect from the
> > > > > backend.
> > > > Yes, AFAIK in xen initiated suspension the backend takes care of it.
> > >
> > > No, in Xen initiated suspension the backend doesn't take care of flushing
> > > the rings, the frontend has a shadow copy of the ring contents and it
> > > re-issues the requests on resume.
> > >
> > Yes, I meant suspension in general, where both xenstore and the backend
> > know the system is going under suspension, not the flushing of rings.
> 
> The backend has no idea the guest is going to be suspended. Backend code
> is completely agnostic to suspension/resume.
> 
> > That happens in the frontend when the backend indicates that its state
> > is closing, and so on. I may have written it in the wrong context.
> 
> I'm afraid I'm not sure I fully understand this last sentence.
> 
> > > > > > +static int blkfront_freeze(struct xenbus_device *dev)
> > > > > > +{
> > > > > > +   unsigned int i;
> > > > > > +   struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> > > > > > +   struct blkfront_ring_info *rinfo;
> > > > > > +   /* This would be a reasonable timeout as used in
> > > > > > xenbus_dev_shutdown() */
> > > > > > +   unsigned int timeout = 5 * HZ;
> > > > > > +   int err = 0;
> > > > > > +
> > > > > > +   info->connected = BLKIF_STATE_FREEZING;
> > > > > > +
> > > > > > +   blk_mq_freeze_queue(info->rq);
> > > > > > +   blk_mq_quiesce_queue(info->rq);
> > > > > > +
> > > > > > +   for (i = 0; i < info->nr_rings; i++) {
> > > > > > +   rinfo = &info->rinfo[i];
> > > > > > +
> > > > > > +   gnttab_cancel_free_callback(&rinfo->callback);
> > > > > > +   flush_work(&rinfo->work);
> > > > > > +   }
> > > > > > +
> > > > > > +   /* Kick the backend to disconnect */
> > > > > > +   xenbus_switch_state(dev, XenbusStateClosing);

Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation

2020-02-20 Thread Roger Pau Monné
Thanks for this work, please see below.

On Wed, Feb 19, 2020 at 06:04:24PM +, Anchal Agarwal wrote:
> On Tue, Feb 18, 2020 at 10:16:11AM +0100, Roger Pau Monné wrote:
> > On Mon, Feb 17, 2020 at 11:05:53PM +, Anchal Agarwal wrote:
> > > On Mon, Feb 17, 2020 at 11:05:09AM +0100, Roger Pau Monné wrote:
> > > > On Fri, Feb 14, 2020 at 11:25:34PM +, Anchal Agarwal wrote:
> > > > > From: Munehisa Kamata
> > > > >
> > > > > Add freeze, thaw and restore callbacks for PM suspend and
> > > > > hibernation support. All frontend drivers that need to use
> > > > > PM_HIBERNATION/PM_SUSPEND events need to implement these
> > > > > xenbus_driver callbacks.
> > > > > The freeze handler stops the block-layer queue and disconnects the
> > > > > frontend from the backend while freeing ring_info and associated
> > > > > resources.
> > > > > The restore handler re-allocates ring_info and re-connects to the
> > > > > backend, so the rest of the kernel can continue to use the block
> > > > > device transparently. Also, the handlers are used for both PM
> > > > > suspend and hibernation so that we can keep the existing
> > > > > suspend/resume callbacks for Xen suspend without modification.
> > > > > Before disconnecting from the backend, we need to prevent any new
> > > > > IO from being queued and wait for existing IO to complete.
> > > > 
> > > > This is different from Xen (xenstore) initiated suspension, as in that
> > > > case Linux doesn't flush the rings or disconnect from the backend.
> > > Yes, AFAIK in xen initiated suspension the backend takes care of it.
> > 
> > No, in Xen initiated suspension the backend doesn't take care of flushing
> > the rings, the frontend has a shadow copy of the ring contents and it
> > re-issues the requests on resume.
> > 
> Yes, I meant suspension in general, where both xenstore and the backend
> know the system is going under suspension, not the flushing of rings.

The backend has no idea the guest is going to be suspended. Backend code
is completely agnostic to suspension/resume.

> That happens in the frontend when the backend indicates that its state
> is closing, and so on. I may have written it in the wrong context.

I'm afraid I'm not sure I fully understand this last sentence.

> > > > > +static int blkfront_freeze(struct xenbus_device *dev)
> > > > > +{
> > > > > + unsigned int i;
> > > > > + struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> > > > > + struct blkfront_ring_info *rinfo;
> > > > > + /* This would be a reasonable timeout as used in
> > > > > xenbus_dev_shutdown() */
> > > > > + unsigned int timeout = 5 * HZ;
> > > > > + int err = 0;
> > > > > +
> > > > > + info->connected = BLKIF_STATE_FREEZING;
> > > > > +
> > > > > + blk_mq_freeze_queue(info->rq);
> > > > > + blk_mq_quiesce_queue(info->rq);
> > > > > +
> > > > > + for (i = 0; i < info->nr_rings; i++) {
> > > > > + rinfo = &info->rinfo[i];
> > > > > +
> > > > > + gnttab_cancel_free_callback(&rinfo->callback);
> > > > > + flush_work(&rinfo->work);
> > > > > + }
> > > > > +
> > > > > + /* Kick the backend to disconnect */
> > > > > + xenbus_switch_state(dev, XenbusStateClosing);
> > > > 
> > > > Are you sure this is safe?
> > > > 
> > > In my testing, running multiple fio jobs and other test scenarios
> > > running a memory loader work fine. I did not come across a scenario
> > > that would have failed resume due to blkfront issues, unless you can
> > > suggest some?
> > 
> > AFAICT you don't wait for the in-flight requests to be finished, and
> > just rely on blkback to finish processing those. I'm not sure all
> > blkback implementations out there can guarantee that.
> > 
> > The approach used by Xen initiated suspension is to re-issue the
> > in-flight requests when resuming. I have to admit I don't think this
> > is the best approach, but I would like to keep both the Xen and the PM
> > initiated suspension using the same logic, and hence I would request
> > that you try to re-use the existing resume logic (blkfront_resume).
> > 
> > > > I don't think you wait for all requests pending on the ring to be
> > > > finished by the backend, and hence you might lose requests as the
> > > > ones on the ring would not be re-issued by blkfront_restore AFAICT.
> > > > 
> > > AFAIU, blk_mq_freeze_queue/blk_mq_quiesce_queue should ensure there
> > > are no in-use requests on the shared ring. Also, I want to pause the
> > > queue and flush all the pending requests in the shared ring before
> > > disconnecting from the backend.
> > 
> > Oh, so blk_mq_freeze_queue does wait for in-flight requests to be
> > finished. I guess it's fine then.
> > 
> Ok.
> > > Quiescing the queue seemed a better option here as we want to make
> > > sure ongoing request dispatches are totally drained.
> > > I should admit that some of these notions are borrowed from how nvme
> > > freeze/unfreeze is done, although it's not an apples-to-apples
> > > comparison.
> > 
> > That's fine, but I would still like to request that you use the same
> > logic (as much as possible) for both the Xen and the PM initiated
> > suspension.

Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation

2020-02-19 Thread Anchal Agarwal
On Tue, Feb 18, 2020 at 10:16:11AM +0100, Roger Pau Monné wrote:
> On Mon, Feb 17, 2020 at 11:05:53PM +, Anchal Agarwal wrote:
> > On Mon, Feb 17, 2020 at 11:05:09AM +0100, Roger Pau Monné wrote:
> > > On Fri, Feb 14, 2020 at 11:25:34PM +, Anchal Agarwal wrote:
> > > > From: Munehisa Kamata
> > > >
> > > > Add freeze, thaw and restore callbacks for PM suspend and hibernation
> > > > support. All frontend drivers that need to use
> > > > PM_HIBERNATION/PM_SUSPEND events need to implement these
> > > > xenbus_driver callbacks.
> > > > The freeze handler stops the block-layer queue and disconnects the
> > > > frontend from the backend while freeing ring_info and associated
> > > > resources.
> > > > The restore handler re-allocates ring_info and re-connects to the
> > > > backend, so the rest of the kernel can continue to use the block
> > > > device transparently. Also, the handlers are used for both PM suspend
> > > > and hibernation so that we can keep the existing suspend/resume
> > > > callbacks for Xen suspend without modification. Before disconnecting
> > > > from the backend, we need to prevent any new IO from being queued and
> > > > wait for existing IO to complete.
> > > 
> > > This is different from Xen (xenstore) initiated suspension, as in that
> > > case Linux doesn't flush the rings or disconnect from the backend.
> > Yes, AFAIK in xen initiated suspension the backend takes care of it.
> 
> No, in Xen initiated suspension the backend doesn't take care of flushing
> the rings, the frontend has a shadow copy of the ring contents and it
> re-issues the requests on resume.
> 
Yes, I meant suspension in general, where both xenstore and the backend
know the system is going under suspension, not the flushing of rings.
That happens in the frontend when the backend indicates that its state
is closing, and so on. I may have written it in the wrong context.
> > > > +static int blkfront_freeze(struct xenbus_device *dev)
> > > > +{
> > > > +   unsigned int i;
> > > > +   struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> > > > +   struct blkfront_ring_info *rinfo;
> > > > +   /* This would be a reasonable timeout as used in
> > > > xenbus_dev_shutdown() */
> > > > +   unsigned int timeout = 5 * HZ;
> > > > +   int err = 0;
> > > > +
> > > > +   info->connected = BLKIF_STATE_FREEZING;
> > > > +
> > > > +   blk_mq_freeze_queue(info->rq);
> > > > +   blk_mq_quiesce_queue(info->rq);
> > > > +
> > > > +   for (i = 0; i < info->nr_rings; i++) {
> > > > +   rinfo = &info->rinfo[i];
> > > > +
> > > > +   gnttab_cancel_free_callback(&rinfo->callback);
> > > > +   flush_work(&rinfo->work);
> > > > +   }
> > > > +
> > > > +   /* Kick the backend to disconnect */
> > > > +   xenbus_switch_state(dev, XenbusStateClosing);
> > > 
> > > Are you sure this is safe?
> > > 
> > In my testing, running multiple fio jobs and other test scenarios
> > running a memory loader work fine. I did not come across a scenario
> > that would have failed resume due to blkfront issues, unless you can
> > suggest some?
> 
> AFAICT you don't wait for the in-flight requests to be finished, and
> just rely on blkback to finish processing those. I'm not sure all
> blkback implementations out there can guarantee that.
> 
> The approach used by Xen initiated suspension is to re-issue the
> in-flight requests when resuming. I have to admit I don't think this
> is the best approach, but I would like to keep both the Xen and the PM
> initiated suspension using the same logic, and hence I would request
> that you try to re-use the existing resume logic (blkfront_resume).
> 
> > > I don't think you wait for all requests pending on the ring to be
> > > finished by the backend, and hence you might lose requests as the
> > > ones on the ring would not be re-issued by blkfront_restore AFAICT.
> > > 
> > AFAIU, blk_mq_freeze_queue/blk_mq_quiesce_queue should ensure there are
> > no in-use requests on the shared ring. Also, I want to pause the queue
> > and flush all the pending requests in the shared ring before
> > disconnecting from the backend.
> 
> Oh, so blk_mq_freeze_queue does wait for in-flight requests to be
> finished. I guess it's fine then.
> 
Ok.
> > Quiescing the queue seemed a better option here as we want to make sure
> > ongoing request dispatches are totally drained.
> > I should admit that some of these notions are borrowed from how nvme
> > freeze/unfreeze is done, although it's not an apples-to-apples
> > comparison.
> 
> That's fine, but I would still like to request that you use the same
> logic (as much as possible) for both the Xen and the PM initiated
> suspension.
> 
> So you either apply this freeze/unfreeze to the Xen suspension (and
> drop the re-issuing of requests on resume) or adopt the same approach
> as the Xen initiated suspension. Keeping two completely different
> approaches to suspension / resume on blkfront is not suitable long
> term.
> 
I agree with you that an overhaul of xen suspend/resume wrt blkfront is
a good idea; however, IMO that is work for the future and this patch
series should not be blocked on it. What do you think?
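
[The tail of blkfront_freeze() is cut off in this archive. Given the
wait_backend_disconnected completion and the 5*HZ timeout declared in the
hunk quoted above, the handler presumably finishes by waiting for
blkback_changed() to report the Closed state. A hedged sketch of that final
step; the error handling shown is an assumption, not the patch text:]

	/* Wait for blkback_changed() to complete wait_backend_disconnected
	 * once the backend reaches XenbusStateClosed (see the hunks quoted
	 * later in this thread). The -EBUSY error path is illustrative only.
	 */
	if (!wait_for_completion_timeout(&info->wait_backend_disconnected,
					 timeout)) {
		xenbus_dev_error(dev, -EBUSY,
				 "timed out waiting for backend to disconnect");
		return -EBUSY;
	}

	return 0;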

Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation

2020-02-18 Thread Roger Pau Monné
On Mon, Feb 17, 2020 at 11:05:53PM +, Anchal Agarwal wrote:
> On Mon, Feb 17, 2020 at 11:05:09AM +0100, Roger Pau Monné wrote:
> > On Fri, Feb 14, 2020 at 11:25:34PM +, Anchal Agarwal wrote:
> > > From: Munehisa Kamata
> > >
> > > Add freeze, thaw and restore callbacks for PM suspend and hibernation
> > > support. All frontend drivers that need to use
> > > PM_HIBERNATION/PM_SUSPEND events need to implement these xenbus_driver
> > > callbacks.
> > > The freeze handler stops the block-layer queue and disconnects the
> > > frontend from the backend while freeing ring_info and associated
> > > resources.
> > > The restore handler re-allocates ring_info and re-connects to the
> > > backend, so the rest of the kernel can continue to use the block device
> > > transparently. Also, the handlers are used for both PM suspend and
> > > hibernation so that we can keep the existing suspend/resume callbacks
> > > for Xen suspend without modification. Before disconnecting from the
> > > backend, we need to prevent any new IO from being queued and wait for
> > > existing IO to complete.
> > 
> > This is different from Xen (xenstore) initiated suspension, as in that
> > case Linux doesn't flush the rings or disconnect from the backend.
> Yes, AFAIK in xen initiated suspension the backend takes care of it.

No, in Xen initiated suspension the backend doesn't take care of flushing
the rings, the frontend has a shadow copy of the ring contents and it
re-issues the requests on resume.

> > > +static int blkfront_freeze(struct xenbus_device *dev)
> > > +{
> > > + unsigned int i;
> > > + struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> > > + struct blkfront_ring_info *rinfo;
> > > + /* This would be a reasonable timeout as used in xenbus_dev_shutdown() */
> > > + unsigned int timeout = 5 * HZ;
> > > + int err = 0;
> > > +
> > > + info->connected = BLKIF_STATE_FREEZING;
> > > +
> > > + blk_mq_freeze_queue(info->rq);
> > > + blk_mq_quiesce_queue(info->rq);
> > > +
> > > + for (i = 0; i < info->nr_rings; i++) {
> > > + rinfo = &info->rinfo[i];
> > > +
> > > + gnttab_cancel_free_callback(&rinfo->callback);
> > > + flush_work(&rinfo->work);
> > > + }
> > > +
> > > + /* Kick the backend to disconnect */
> > > + xenbus_switch_state(dev, XenbusStateClosing);
> > 
> > Are you sure this is safe?
> > 
> In my testing, running multiple fio jobs and other test scenarios
> running a memory loader work fine. I did not come across a scenario
> that would have failed resume due to blkfront issues, unless you can
> suggest some?

AFAICT you don't wait for the in-flight requests to be finished, and
just rely on blkback to finish processing those. I'm not sure all
blkback implementations out there can guarantee that.

The approach used by Xen initiated suspension is to re-issue the
in-flight requests when resuming. I have to admit I don't think this
is the best approach, but I would like to keep both the Xen and the PM
initiated suspension using the same logic, and hence I would request
that you try to re-use the existing resume logic (blkfront_resume).

> > I don't think you wait for all requests pending on the ring to be
> > finished by the backend, and hence you might lose requests as the
> > ones on the ring would not be re-issued by blkfront_restore AFAICT.
> > 
> AFAIU, blk_mq_freeze_queue/blk_mq_quiesce_queue should ensure there are
> no in-use requests on the shared ring. Also, I want to pause the queue
> and flush all the pending requests in the shared ring before
> disconnecting from the backend.

Oh, so blk_mq_freeze_queue does wait for in-flight requests to be
finished. I guess it's fine then.
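
[For readers following the blk-mq discussion: a minimal self-contained
sketch of the freeze/quiesce pairing being talked about. This is
illustrative only, not part of the patch; the blk_mq_* calls are the
standard block-layer API, the wrapper function is made up.]

#include <linux/blk-mq.h>

/* Drain a blk-mq queue before touching shared state, then undo it.
 * blk_mq_freeze_queue() waits for all in-flight requests to complete;
 * blk_mq_quiesce_queue() stops the hardware queues from dispatching
 * new ones.
 */
static void example_drain_and_restart(struct request_queue *q)
{
	blk_mq_freeze_queue(q);		/* wait for in-flight requests */
	blk_mq_quiesce_queue(q);	/* stop further dispatch */

	/* ... no requests are in use on the shared ring here ... */

	blk_mq_unquiesce_queue(q);	/* allow dispatch again */
	blk_mq_unfreeze_queue(q);	/* admit new requests */
}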

> Quiescing the queue seemed a better option here as we want to make sure
> ongoing request dispatches are totally drained.
> I should admit that some of these notions are borrowed from how nvme
> freeze/unfreeze is done, although it's not an apples-to-apples comparison.

That's fine, but I would still like to request that you use the same
logic (as much as possible) for both the Xen and the PM initiated
suspension.

So you either apply this freeze/unfreeze to the Xen suspension (and
drop the re-issuing of requests on resume) or adopt the same approach
as the Xen initiated suspension. Keeping two completely different
approaches to suspension / resume on blkfront is not suitable long
term.

Thanks, Roger.


Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation

2020-02-17 Thread Anchal Agarwal
On Mon, Feb 17, 2020 at 11:05:09AM +0100, Roger Pau Monné wrote:
> On Fri, Feb 14, 2020 at 11:25:34PM +, Anchal Agarwal wrote:
> > From: Munehisa Kamata
> >
> > Add freeze, thaw and restore callbacks for PM suspend and hibernation
> > support. All frontend drivers that need to use PM_HIBERNATION/PM_SUSPEND
> > events need to implement these xenbus_driver callbacks.
> > The freeze handler stops the block-layer queue and disconnects the
> > frontend from the backend while freeing ring_info and associated resources.
> > The restore handler re-allocates ring_info and re-connects to the
> > backend, so the rest of the kernel can continue to use the block device
> > transparently. Also, the handlers are used for both PM suspend and
> > hibernation so that we can keep the existing suspend/resume callbacks for
> > Xen suspend without modification. Before disconnecting from the backend,
> > we need to prevent any new IO from being queued and wait for existing
> > IO to complete.
> 
> This is different from Xen (xenstore) initiated suspension, as in that
> case Linux doesn't flush the rings or disconnect from the backend.
Yes, AFAIK in xen initiated suspension the backend takes care of it.
> 
> This is done so that in case suspension fails the recovery doesn't
> need to reconnect the PV devices, and in order to speed up suspension
> time (ie: waiting for all queues to be flushed can take time as Linux
> supports multiqueue, multipage rings and indirect descriptors), and
> the backend could be contended if there's a lot of IO pressure from
> guests.
> 
> Linux already keeps a shadow of the ring contents, so in-flight
> requests can be re-issued after the frontend has reconnected during
> resume.
> 
> > Freeze/unfreeze of the queues will guarantee that there
> > are no requests in use on the shared ring.
> > 
> > Note: For older backends, if a backend doesn't have commit '12ea729645ace'
> > ("xen/blkback: unmap all persistent grants when frontend gets disconnected"),
> > the frontend may see a massive amount of grant table warnings when freeing
> > resources.
> > [   36.852659] deferring g.e. 0xf9 (pfn 0x)
> > [   36.855089] xen:grant_table: WARNING:e.g. 0x112 still in use!
> > 
> > In this case, persistent grants would need to be disabled.
> > 
> > [Anchal Changelog: Removed timeout/request during blkfront freeze.
> > Fixed major part of the code to work with blk-mq]
> > Signed-off-by: Anchal Agarwal 
> > Signed-off-by: Munehisa Kamata 
> > ---
> >  drivers/block/xen-blkfront.c | 119 ---
> >  1 file changed, 112 insertions(+), 7 deletions(-)
> > 
> > diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> > index 478120233750..d715ed3cb69a 100644
> > --- a/drivers/block/xen-blkfront.c
> > +++ b/drivers/block/xen-blkfront.c
> > @@ -47,6 +47,8 @@
> >  #include 
> >  #include 
> >  #include 
> > +#include 
> > +#include 
> >  
> >  #include 
> >  #include 
> > @@ -79,6 +81,8 @@ enum blkif_state {
> > BLKIF_STATE_DISCONNECTED,
> > BLKIF_STATE_CONNECTED,
> > BLKIF_STATE_SUSPENDED,
> > +   BLKIF_STATE_FREEZING,
> > +   BLKIF_STATE_FROZEN
> >  };
> >  
> >  struct grant {
> > @@ -220,6 +224,7 @@ struct blkfront_info
> > struct list_head requests;
> > struct bio_list bio_list;
> > struct list_head info_list;
> > +   struct completion wait_backend_disconnected;
> >  };
> >  
> >  static unsigned int nr_minors;
> > @@ -261,6 +266,7 @@ static DEFINE_SPINLOCK(minor_lock);
> >  static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo);
> >  static void blkfront_gather_backend_features(struct blkfront_info *info);
> >  static int negotiate_mq(struct blkfront_info *info);
> > +static void __blkif_free(struct blkfront_info *info);
> >  
> >  static int get_id_from_freelist(struct blkfront_ring_info *rinfo)
> >  {
> > @@ -995,6 +1001,7 @@ static int xlvbd_init_blk_queue(struct gendisk *gd, 
> > u16 sector_size,
> > info->sector_size = sector_size;
> > info->physical_sector_size = physical_sector_size;
> > blkif_set_queue_limits(info);
> > +   init_completion(&info->wait_backend_disconnected);
> >  
> > return 0;
> >  }
> > @@ -1218,6 +1225,8 @@ static void xlvbd_release_gendisk(struct 
> > blkfront_info *info)
> >  /* Already hold rinfo->ring_lock. */
> >  static inline void kick_pending_request_queues_locked(struct 
> > blkfront_ring_info *rinfo)
> >  {
> > +   if (unlikely(rinfo->dev_info->connected == BLKIF_STATE_FREEZING))
> > +   return;
> > if (!RING_FULL(&rinfo->ring))
> > blk_mq_start_stopped_hw_queues(rinfo->dev_info->rq, true);
> >  }
> > @@ -1341,8 +1350,6 @@ static void blkif_free_ring(struct blkfront_ring_info 
> > *rinfo)
> >  
> >  static void blkif_free(struct blkfront_info *info, int suspend)
> >  {
> > -   unsigned int i;
> > -
> > /* Prevent new requests being issued until we fix things up. */
> > info->connected = suspend ?
> > BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
> 

Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation

2020-02-17 Thread Roger Pau Monné
On Fri, Feb 14, 2020 at 11:25:34PM +, Anchal Agarwal wrote:
> From: Munehisa Kamata  
> Add freeze, thaw and restore callbacks for PM suspend and hibernation
> support. All frontend drivers that need to use PM_HIBERNATION/PM_SUSPEND
> events need to implement these xenbus_driver callbacks.
> The freeze handler stops the block-layer queue and disconnects the
> frontend from the backend while freeing ring_info and associated resources.
> The restore handler re-allocates ring_info and re-connects to the
> backend, so the rest of the kernel can continue to use the block device
> transparently. Also, the handlers are used for both PM suspend and
> hibernation so that we can keep the existing suspend/resume callbacks for
> Xen suspend without modification. Before disconnecting from the backend,
> we need to prevent any new IO from being queued and wait for existing
> IO to complete.

This is different from Xen (xenstore) initiated suspension, as in that
case Linux doesn't flush the rings or disconnect from the backend.

This is done so that in case suspension fails the recovery doesn't
need to reconnect the PV devices, and in order to speed up suspension
time (ie: waiting for all queues to be flushed can take time as Linux
supports multiqueue, multipage rings and indirect descriptors), and
the backend could be contended if there's a lot of IO pressure from
guests.

Linux already keeps a shadow of the ring contents, so in-flight
requests can be re-issued after the frontend has reconnected during
resume.
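
[A sketch of the re-issue path described above, modeled on the
blkif_recover() hunks quoted later in this thread; 'info', 'segs' and the
saved requests/bio_list fields come from that context. Not new code, just
the existing logic condensed for illustration.]

	struct request *req, *n;

	/* Re-queue the requests saved before disconnecting, now that the
	 * ring is reconnected; the shadow copy guarantees none are lost.
	 */
	list_for_each_entry_safe(req, n, &info->requests, queuelist) {
		list_del_init(&req->queuelist);
		BUG_ON(req->nr_phys_segments > segs);
		blk_mq_requeue_request(req, false);
	}
	blk_mq_start_stopped_hw_queues(info->rq, true);
	blk_mq_kick_requeue_list(info->rq);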

> Freeze/unfreeze of the queues will guarantee that there
> are no requests in use on the shared ring.
> 
> Note: For older backends, if a backend doesn't have commit '12ea729645ace'
> ("xen/blkback: unmap all persistent grants when frontend gets disconnected"),
> the frontend may see a massive amount of grant table warnings when freeing
> resources.
> [   36.852659] deferring g.e. 0xf9 (pfn 0x)
> [   36.855089] xen:grant_table: WARNING:e.g. 0x112 still in use!
> 
> In this case, persistent grants would need to be disabled.
> 
> [Anchal Changelog: Removed timeout/request during blkfront freeze.
> Fixed major part of the code to work with blk-mq]
> Signed-off-by: Anchal Agarwal 
> Signed-off-by: Munehisa Kamata 
> ---
>  drivers/block/xen-blkfront.c | 119 ---
>  1 file changed, 112 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index 478120233750..d715ed3cb69a 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -47,6 +47,8 @@
>  #include 
>  #include 
>  #include 
> +#include 
> +#include 
>  
>  #include 
>  #include 
> @@ -79,6 +81,8 @@ enum blkif_state {
>   BLKIF_STATE_DISCONNECTED,
>   BLKIF_STATE_CONNECTED,
>   BLKIF_STATE_SUSPENDED,
> + BLKIF_STATE_FREEZING,
> + BLKIF_STATE_FROZEN
>  };
>  
>  struct grant {
> @@ -220,6 +224,7 @@ struct blkfront_info
>   struct list_head requests;
>   struct bio_list bio_list;
>   struct list_head info_list;
> + struct completion wait_backend_disconnected;
>  };
>  
>  static unsigned int nr_minors;
> @@ -261,6 +266,7 @@ static DEFINE_SPINLOCK(minor_lock);
>  static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo);
>  static void blkfront_gather_backend_features(struct blkfront_info *info);
>  static int negotiate_mq(struct blkfront_info *info);
> +static void __blkif_free(struct blkfront_info *info);
>  
>  static int get_id_from_freelist(struct blkfront_ring_info *rinfo)
>  {
> @@ -995,6 +1001,7 @@ static int xlvbd_init_blk_queue(struct gendisk *gd, u16 
> sector_size,
>   info->sector_size = sector_size;
>   info->physical_sector_size = physical_sector_size;
>   blkif_set_queue_limits(info);
> + init_completion(&info->wait_backend_disconnected);
>  
>   return 0;
>  }
> @@ -1218,6 +1225,8 @@ static void xlvbd_release_gendisk(struct blkfront_info 
> *info)
>  /* Already hold rinfo->ring_lock. */
>  static inline void kick_pending_request_queues_locked(struct 
> blkfront_ring_info *rinfo)
>  {
> + if (unlikely(rinfo->dev_info->connected == BLKIF_STATE_FREEZING))
> + return;
>   if (!RING_FULL(&rinfo->ring))
>   blk_mq_start_stopped_hw_queues(rinfo->dev_info->rq, true);
>  }
> @@ -1341,8 +1350,6 @@ static void blkif_free_ring(struct blkfront_ring_info 
> *rinfo)
>  
>  static void blkif_free(struct blkfront_info *info, int suspend)
>  {
> - unsigned int i;
> -
>   /* Prevent new requests being issued until we fix things up. */
>   info->connected = suspend ?
>   BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
> @@ -1350,6 +1357,13 @@ static void blkif_free(struct blkfront_info *info, int 
> suspend)
>   if (info->rq)
>   blk_mq_stop_hw_queues(info->rq);
>  
> + __blkif_free(info);
> +}
> +
> +static void __blkif_free(struct blkfront_info *info)
> +{
> + unsigned int i;
> +
>   for (i = 0; i < info->nr_rings; i++)
>   blkif_free_ring(&info->rinfo[i]);

[Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation

2020-02-14 Thread Anchal Agarwal
From: Munehisa Kamata 
Signed-off-by: Munehisa Kamata 
---
 drivers/block/xen-blkfront.c | 119 ---
 1 file changed, 112 insertions(+), 7 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 478120233750..d715ed3cb69a 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -47,6 +47,8 @@
 #include 
 #include 
 #include 
+#include 
+#include 
 
 #include 
 #include 
@@ -79,6 +81,8 @@ enum blkif_state {
BLKIF_STATE_DISCONNECTED,
BLKIF_STATE_CONNECTED,
BLKIF_STATE_SUSPENDED,
+   BLKIF_STATE_FREEZING,
+   BLKIF_STATE_FROZEN
 };
 
 struct grant {
@@ -220,6 +224,7 @@ struct blkfront_info
struct list_head requests;
struct bio_list bio_list;
struct list_head info_list;
+   struct completion wait_backend_disconnected;
 };
 
 static unsigned int nr_minors;
@@ -261,6 +266,7 @@ static DEFINE_SPINLOCK(minor_lock);
 static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo);
 static void blkfront_gather_backend_features(struct blkfront_info *info);
 static int negotiate_mq(struct blkfront_info *info);
+static void __blkif_free(struct blkfront_info *info);
 
 static int get_id_from_freelist(struct blkfront_ring_info *rinfo)
 {
@@ -995,6 +1001,7 @@ static int xlvbd_init_blk_queue(struct gendisk *gd, u16 
sector_size,
info->sector_size = sector_size;
info->physical_sector_size = physical_sector_size;
blkif_set_queue_limits(info);
+   init_completion(&info->wait_backend_disconnected);
 
return 0;
 }
@@ -1218,6 +1225,8 @@ static void xlvbd_release_gendisk(struct blkfront_info 
*info)
 /* Already hold rinfo->ring_lock. */
 static inline void kick_pending_request_queues_locked(struct 
blkfront_ring_info *rinfo)
 {
+   if (unlikely(rinfo->dev_info->connected == BLKIF_STATE_FREEZING))
+   return;
if (!RING_FULL(&rinfo->ring))
blk_mq_start_stopped_hw_queues(rinfo->dev_info->rq, true);
 }
@@ -1341,8 +1350,6 @@ static void blkif_free_ring(struct blkfront_ring_info 
*rinfo)
 
 static void blkif_free(struct blkfront_info *info, int suspend)
 {
-   unsigned int i;
-
/* Prevent new requests being issued until we fix things up. */
info->connected = suspend ?
BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
@@ -1350,6 +1357,13 @@ static void blkif_free(struct blkfront_info *info, int 
suspend)
if (info->rq)
blk_mq_stop_hw_queues(info->rq);
 
+   __blkif_free(info);
+}
+
+static void __blkif_free(struct blkfront_info *info)
+{
+   unsigned int i;
+
for (i = 0; i < info->nr_rings; i++)
blkif_free_ring(&info->rinfo[i]);
 
@@ -1553,8 +1567,10 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
struct blkfront_info *info = rinfo->dev_info;
 
-   if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
-   return IRQ_HANDLED;
+   if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) {
+   if (info->connected != BLKIF_STATE_FREEZING)
+   return IRQ_HANDLED;
+   }
 
spin_lock_irqsave(&rinfo->ring_lock, flags);
  again:
@@ -2020,6 +2036,7 @@ static int blkif_recover(struct blkfront_info *info)
struct bio *bio;
unsigned int segs;
 
+   bool frozen = info->connected == BLKIF_STATE_FROZEN;
blkfront_gather_backend_features(info);
/* Reset limits changed by blk_mq_update_nr_hw_queues(). */
blkif_set_queue_limits(info);
@@ -2046,6 +2063,9 @@ static int blkif_recover(struct blkfront_info *info)
kick_pending_request_queues(rinfo);
}
 
+   if (frozen)
+   return 0;
+
list_for_each_entry_safe(req, n, &info->requests, queuelist) {
/* Requeue pending requests (flush or discard) */
list_del_init(>queuelist);
@@ -2359,6 +2379,7 @@ static void blkfront_connect(struct blkfront_info *info)
 
return;
case BLKIF_STATE_SUSPENDED:
+   case BLKIF_STATE_FROZEN:
/*
 * If we are recovering from suspension, we need to wait
 * for the backend to announce it's features before
@@ -2476,12 +2497,37 @@ static void blkback_changed(struct xenbus_device *dev,
break;
 
case XenbusStateClosed:
-   if (dev->state == XenbusStateClosed)
+   if (dev->state == XenbusStateClosed) {
+   if (info->connected == BLKIF_STATE_FREEZING) {
+   __blkif_free(info);
+   info->connected = BLKIF_STATE_FROZEN;
+   complete(&info->wait_backend_disconnected);
+   break;
+   }
+
break;
+   }
+
+   /*
+  

[Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation

2020-02-12 Thread Anchal Agarwal
From: Munehisa Kamata 

Add freeze, thaw and restore callbacks for PM suspend and hibernation
support. All frontend drivers that need to use PM_HIBERNATION/PM_SUSPEND
events need to implement these xenbus_driver callbacks.
The freeze handler stops the block-layer queue and disconnects the
frontend from the backend while freeing ring_info and associated resources.
The restore handler re-allocates ring_info and re-connects to the
backend, so the rest of the kernel can continue to use the block device
transparently. Also, the handlers are used for both PM suspend and
hibernation so that we can keep the existing suspend/resume callbacks for
Xen suspend without modification. Before disconnecting from the backend,
we need to prevent any new IO from being queued and wait for existing
IO to complete. Freeze/unfreeze of the queues will guarantee that there
are no requests in use on the shared ring.
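
[To make the registration concrete: a hedged sketch of how the frontend
might wire up the new callbacks, assuming the series extends struct
xenbus_driver with freeze/thaw/restore members as this description implies.
The existing blkfront callbacks are real; the exact member names are
inferred, not quoted from the patch.]

static struct xenbus_driver blkfront_driver = {
	.ids = blkfront_ids,
	.probe = blkfront_probe,
	.remove = blkfront_remove,
	.resume = blkfront_resume,		/* Xen (xenstore) suspend path */
	.otherend_changed = blkback_changed,
	/* New PM callbacks from this series (member names assumed): */
	.freeze = blkfront_freeze,
	.thaw = blkfront_restore,
	.restore = blkfront_restore,
};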

Note: For older backends, if a backend doesn't have commit '12ea729645ace'
("xen/blkback: unmap all persistent grants when frontend gets disconnected"),
the frontend may see a massive amount of grant table warnings when freeing
resources.
[   36.852659] deferring g.e. 0xf9 (pfn 0x)
[   36.855089] xen:grant_table: WARNING:e.g. 0x112 still in use!

In this case, persistent grants would need to be disabled.

[Anchal Changelog: Removed timeout/request during blkfront freeze.
Fixed major part of the code to work with blk-mq]
Signed-off-by: Anchal Agarwal 
Signed-off-by: Munehisa Kamata 

---
Changes since V2: None
---
 drivers/block/xen-blkfront.c | 119 ---
 1 file changed, 112 insertions(+), 7 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 478120233750..d715ed3cb69a 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -47,6 +47,8 @@
 #include 
 #include 
 #include 
+#include 
+#include 
 
 #include 
 #include 
@@ -79,6 +81,8 @@ enum blkif_state {
BLKIF_STATE_DISCONNECTED,
BLKIF_STATE_CONNECTED,
BLKIF_STATE_SUSPENDED,
+   BLKIF_STATE_FREEZING,
+   BLKIF_STATE_FROZEN
 };
 
 struct grant {
@@ -220,6 +224,7 @@ struct blkfront_info
struct list_head requests;
struct bio_list bio_list;
struct list_head info_list;
+   struct completion wait_backend_disconnected;
 };
 
 static unsigned int nr_minors;
@@ -261,6 +266,7 @@ static DEFINE_SPINLOCK(minor_lock);
 static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo);
 static void blkfront_gather_backend_features(struct blkfront_info *info);
 static int negotiate_mq(struct blkfront_info *info);
+static void __blkif_free(struct blkfront_info *info);
 
 static int get_id_from_freelist(struct blkfront_ring_info *rinfo)
 {
@@ -995,6 +1001,7 @@ static int xlvbd_init_blk_queue(struct gendisk *gd, u16 
sector_size,
info->sector_size = sector_size;
info->physical_sector_size = physical_sector_size;
blkif_set_queue_limits(info);
+   init_completion(&info->wait_backend_disconnected);
 
return 0;
 }
@@ -1218,6 +1225,8 @@ static void xlvbd_release_gendisk(struct blkfront_info 
*info)
 /* Already hold rinfo->ring_lock. */
 static inline void kick_pending_request_queues_locked(struct 
blkfront_ring_info *rinfo)
 {
+   if (unlikely(rinfo->dev_info->connected == BLKIF_STATE_FREEZING))
+   return;
if (!RING_FULL(&rinfo->ring))
blk_mq_start_stopped_hw_queues(rinfo->dev_info->rq, true);
 }
@@ -1341,8 +1350,6 @@ static void blkif_free_ring(struct blkfront_ring_info 
*rinfo)
 
 static void blkif_free(struct blkfront_info *info, int suspend)
 {
-   unsigned int i;
-
/* Prevent new requests being issued until we fix things up. */
info->connected = suspend ?
BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
@@ -1350,6 +1357,13 @@ static void blkif_free(struct blkfront_info *info, int 
suspend)
if (info->rq)
blk_mq_stop_hw_queues(info->rq);
 
+   __blkif_free(info);
+}
+
+static void __blkif_free(struct blkfront_info *info)
+{
+   unsigned int i;
+
for (i = 0; i < info->nr_rings; i++)
blkif_free_ring(&info->rinfo[i]);
 
@@ -1553,8 +1567,10 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
struct blkfront_info *info = rinfo->dev_info;
 
-   if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
-   return IRQ_HANDLED;
+   if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) {
+   if (info->connected != BLKIF_STATE_FREEZING)
+   return IRQ_HANDLED;
+   }
 
spin_lock_irqsave(&rinfo->ring_lock, flags);
  again:
@@ -2020,6 +2036,7 @@ static int blkif_recover(struct blkfront_info *info)
struct bio *bio;
unsigned int segs;
 
+   bool frozen = info->connected == BLKIF_STATE_FROZEN;