> Subject: Re: [PATCH] tpm: tpm_crb: relinquish locality on error path.
> 
> On Fri, Apr 20, 2018 at 01:19:12PM +0000, Winkler, Tomas wrote:
> > > > On Tue, 2018-04-10 at 09:00 +0000, Winkler, Tomas wrote:
> > > > > >
> > > > > > On Sat, 2018-04-07 at 19:12 +0300, Tomas Winkler wrote:
> > > > > > > In crb_map_io() function, __crb_request_locality() is called
> > > > > > > prior to crb_cmd_ready(), but if one of the subsequent
> > > > > > > functions fails, the flow bails out instead of trying to
> > > > > > > relinquish the locality.
> > > > > > > This patch adds goto jump to __crb_relinquish_locality() on
> > > > > > > the error path.
> > > > > > >
> > > > > > > Fixes: 888d867df441 (tpm: cmd_ready command can be issued only after granting locality)
> > > > > > > Signed-off-by: Tomas Winkler <tomas.wink...@intel.com>
> > > > > > > ---
> > > > > > >  drivers/char/tpm/tpm_crb.c | 10 +++++++---
> > > > > > >  1 file changed, 7 insertions(+), 3 deletions(-)
> > > > > > >
> > > > > > > diff --git a/drivers/char/tpm/tpm_crb.c b/drivers/char/tpm/tpm_crb.c
> > > > > > > index 7f78482cd157..34fbc6cb097b 100644
> > > > > > > --- a/drivers/char/tpm/tpm_crb.c
> > > > > > > +++ b/drivers/char/tpm/tpm_crb.c
> > > > > > > @@ -511,8 +511,10 @@ static int crb_map_io(struct acpi_device *device, struct crb_priv *priv,
> > > > > > >
> > > > > > >   priv->regs_t = crb_map_res(dev, priv, &io_res, buf->control_address,
> > > > > > >                              sizeof(struct crb_regs_tail));
> > > > > > > - if (IS_ERR(priv->regs_t))
> > > > > > > -         return PTR_ERR(priv->regs_t);
> > > > > > > + if (IS_ERR(priv->regs_t)) {
> > > > > > > +         ret = PTR_ERR(priv->regs_t);
> > > > > > > +         goto out_relinquish_locality;
> > > > > > > + }
> > > > > > >
> > > > > > >   /*
> > > > > > >    * PTT HW bug w/a: wake up the device to access
> > > > > > > @@ -520,7 +522,7 @@ static int crb_map_io(struct acpi_device *device, struct crb_priv *priv,
> > > > > > >    */
> > > > > > >   ret = crb_cmd_ready(dev, priv);
> > > > > > >   if (ret)
> > > > > > > -         return ret;
> > > > > > > +         goto out_relinquish_locality;
> > > > > > >
> > > > > > >   pa_high = ioread32(&priv->regs_t->ctrl_cmd_pa_high);
> > > > > > >   pa_low  = ioread32(&priv->regs_t->ctrl_cmd_pa_low);
> > > > > > > @@ -565,6 +567,8 @@ static int crb_map_io(struct acpi_device *device, struct crb_priv *priv,
> > > > > > >
> > > > > > >   crb_go_idle(dev, priv);
> > > > > > >
> > > > > > > +out_relinquish_locality:
> > > > > > > +
> > > > > > >   __crb_relinquish_locality(dev, priv, 0);
> > > > > > >
> > > > > > >   return ret;
> > > > > >
> > > > > > Thanks, please just call it before returning in the error path.
> > > > >
> > > > > Can you please elaborate why? Isn't centralized exiting of
> > > > > functions the preferred kernel coding style?
> > > > > https://www.kernel.org/doc/html/v4.11/process/coding-style.html#centralized-exiting-of-functions
> > > >
> > > > You exit only from one location (not multiple) and not from a
> > > > nested context. Here you just add more complexity by doing this.
> > >
> > > Where is the complexity? I see it as a standard way of undoing things on exit.
> > > Tomas
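
For reference, the centralized-exit idiom that chapter describes looks
roughly like this (a generic sketch adapted from the coding-style
document, not code from this driver):

	int fun(int a)
	{
		int result = 0;
		char *buffer;

		buffer = kmalloc(SIZE, GFP_KERNEL);
		if (!buffer)
			return -ENOMEM;

		if (condition1) {
			while (loop1) {
				...
			}
			result = 1;
			goto out_free_buffer;
		}
		...
	out_free_buffer:
		kfree(buffer);
		return result;
	}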
> >
> > Jarkko, can you please respond.
> > Thanks
> > Tomas
> 
> I was away for Mon-Wed last week and did not work on TPM for Thu-Fri.
> 
> My earlier comment was incorrect as there are two locations to exit (not
> sure how I managed to overlook the patch that way).
> 
> Thus, I have only two very minor requests:
> 
> * Remove the extra newline (the last line addition in the patch).
Okay
> * Use just a label named 'out', as we have only one exception handler.
Cannot do that, as the bail-out happens prior to the cmd_ready request,
so there is no need for go_idle, which would sit under an 'out' label.
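
To illustrate, the resulting flow looks roughly like this (a simplified
sketch with the details elided; see the patch above for the real code):

	static int crb_map_io(struct acpi_device *device, struct crb_priv *priv,
			      struct acpi_table_tpm2 *buf)
	{
		...
		ret = __crb_request_locality(dev, priv, 0);
		if (ret)
			return ret;
		...
		ret = crb_cmd_ready(dev, priv);
		if (ret)
			goto out_relinquish_locality;	/* must skip crb_go_idle() */
		...
		crb_go_idle(dev, priv);			/* success path only */

	out_relinquish_locality:
		__crb_relinquish_locality(dev, priv, 0);

		return ret;
	}

A plain 'out' label would suggest a single common exit that every path
runs through, which is not the case here.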
> 
> I'll move on to testing, and if it passes, I can do those updates myself.
Thanks, but I prefer to resend it myself.

Tomas
