On Wed, Sep 07, 2016 at 09:14:35PM +0000, Winkler, Tomas wrote:

> > Jarkko and I have been talking about higher level locking (eg to sequence a
> > series of operations) it might make more sense to move the power
> > management up to that layer. 
> 
> I'm not sure this has much benefit; it would be more error prone and
> unnecessarily more complicated than just protecting the low-level API
> in one place.

Well, we will have the API, so there shouldn't be much risk. I guess it
can change later.

> Yes, of course I've considered runtime_pm, but I've held off using it
> for now: first, it has many side effects that need to be mapped out;
> second, because of the HW bug we cannot safely detect whether the
> device is in the idle state, so more complicated bookkeeping would
> have to be implemented. Lastly, we need a simple solution to backport.

So are you going to send a runtime_pm patch too? What is the downside
to the idle state that requires it to be controlled by the host and
not internally?

> > > +static int __crb_go_idle(struct device *dev, struct crb_priv *priv) {
> > > +static int crb_go_idle(struct tpm_chip *chip) {
> > > + struct crb_priv *priv = dev_get_drvdata(&chip->dev);
> > > +
> > > + return __crb_go_idle(&chip->dev, priv);
> > 
> > Hurm, these ugly wrappers should probably be part of the next patch, since
> > that is what needs them.
> 
> You are right, it's missing a bit of context if you're looking at a
> single patch, but the paradigm is used widely in the kernel.

The widely used paradigm has the wrapper actually do something useful,
e.g. lock/unlock. This one does nothing useful until you realize it is
needed because you don't have a chip yet in crb_map_io due to the work
around. Which raises the question of whether this is even the right
approach at all, or whether the chip should be allocated sooner (as
tpm_tis does)..

So I'd rather see it in the next patch, and ideally not see it at all:
move the chip allocation up instead..

Jason

------------------------------------------------------------------------------
_______________________________________________
tpmdd-devel mailing list
tpmdd-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/tpmdd-devel