RE: [PATCH v5 3/3] dmaengine: xilinx_dma: Fix race condition in the driver for multiple descriptor scenario

2017-01-12 Thread Appana Durga Kedareswara Rao
Hi Vinod,

Thanks for the review...

[Snip]
> > > Btw, how and when does DMA stop? Assuming it is circular it never
> > > would; isn't there a valid/stop flag associated with a descriptor
> > > which tells the DMA engine what to do next?
> >
> > There are two registers that control the DMA transfers:
> > the current descriptor and the tail descriptor register.
> > When the current descriptor reaches the tail descriptor, the DMA engine pauses.
> >
> > When the tail descriptor is reprogrammed, the DMA engine starts fetching
> > descriptors again.
> >
> > But with the existing driver flow, if we reprogram the tail descriptor,
> > the tail descriptor's next-descriptor field points to an invalid
> > location, causing data corruption...
> 
> So the solution is..?

This patch.
If we keep the descriptors in a chain (arranged in a circular manner, so
the last descriptor points back to the first), the next-descriptor field
always points to a valid descriptor.

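As a minimal sketch of that invariant (illustrative only: simplified types
and made-up names such as struct bd and link_bd_ring, not the driver's
actual code), pre-linking every BD's next pointer at allocation time, with
the last BD wrapping back to the first, means the hardware can never fetch
through a dangling pointer:

	/* Illustrative sketch: link n BDs into a ring so that next_desc
	 * is always a valid DMA address, including for the last BD,
	 * which wraps back to the first one. */
	struct bd {
		u32 next_desc;	/* DMA address of the next BD */
		/* ... buffer address, control and status words ... */
	};

	static void link_bd_ring(struct bd *bds, u32 bds_phys, unsigned int n)
	{
		unsigned int i;

		for (i = 0; i < n; i++)
			bds[i].next_desc = bds_phys +
					   sizeof(struct bd) * ((i + 1) % n);
	}
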
Will update the patch commit description in the next version...

> 
> > > Btw, there is something wrong with your MUA; perhaps lines are
> > > titlecased for no reason. This is typically the behaviour of a
> > > non-Linux tool, which may not be a great tool for this work.
> >
> > Thanks for pointing it out.
> > I usually reply from Outlook on a Windows machine.
> > Will check with others in my team how they configured their mail clients.
> 
> Yeah, that isn't the right tool for the job. See
> Documentation/process/email-clients.rst
> 
> FWIW, I use mutt with vim as the editor against Exchange servers; it seems
> to work well for me now.

Thanks for pointing it out; I will go through it.
Will install mutt on my Linux machine and start replying from that.

Regards,
Kedar.

> 
> --
> ~Vinod


Re: [PATCH v5 3/3] dmaengine: xilinx_dma: Fix race condition in the driver for multiple descriptor scenario

2017-01-12 Thread Vinod Koul
On Fri, Jan 13, 2017 at 06:00:44AM +0000, Appana Durga Kedareswara Rao wrote:
> Hi Vinod,
> 
>   Thanks for the review...
> 
> [Snip]
> > >
> > > > On Sat, Jan 07, 2017 at 12:15:30PM +0530, Kedareswara rao Appana wrote:
> > > > > When driver is handling AXI DMA SoftIP When user submits multiple
> > > > > descriptors back to back on the S2MM(recv) side with the current
> > > > > driver flow the last buffer descriptor next bd points to a invalid
> > > > > location resulting the invalid data or errors in the DMA engine.
> > > >
> > > > Can you rephrase this? It is a bit hard to understand.
> > >
> > > When DMA is receiving packets h/w expects the descriptors Should be in
> > > the form of a ring (I mean h/w buffer descriptor Next descriptor field
> > > should always point to valid address So that when DMA engine go and
> > > fetch that next descriptor it always Sees a valid address).
> > >
> > >
> > > But with the current driver implementation when user queues Multiple
> > > descriptors the last descriptor next descriptor field Pointing to an
> > > invalid location causing data corruption or Errors from the DMA h/w
> > > engine...
> > >
> > > To avoid this issue creating a Buffer descriptor Chain during Channel
> > > allocation and using those buffer descriptors for processing User
> > > requested data.
> > 
> > Is it not doable to modify the next pointer to point to the subsequent
> > transaction? IOW, you are modifying the tail descriptor to point to the
> > subsequent descriptor.
> > 
> > Btw, how and when does DMA stop? Assuming it is circular it never would;
> > isn't there a valid/stop flag associated with a descriptor which tells
> > the DMA engine what to do next?
> 
> There are two registers that control the DMA transfers:
> the current descriptor and the tail descriptor register.
> When the current descriptor reaches the tail descriptor, the DMA engine pauses.
> 
> When the tail descriptor is reprogrammed, the DMA engine starts fetching
> descriptors again.
> 
> But with the existing driver flow, if we reprogram the tail descriptor,
> the tail descriptor's next-descriptor field points to an invalid location,
> causing data corruption...

So the solution is..?

> > Btw, there is something wrong with your MUA; perhaps lines are titlecased
> > for no reason. This is typically the behaviour of a non-Linux tool, which
> > may not be a great tool for this work.
> 
> Thanks for pointing it out.
> I usually reply from Outlook on a Windows machine.
> Will check with others in my team how they configured their mail clients.

Yeah, that isn't the right tool for the job. See
Documentation/process/email-clients.rst

FWIW, I use mutt with vim as the editor against Exchange servers; it seems
to work well for me now.

-- 
~Vinod


RE: [PATCH v5 3/3] dmaengine: xilinx_dma: Fix race condition in the driver for multiple descriptor scenario

2017-01-12 Thread Appana Durga Kedareswara Rao
Hi Vinod,

Thanks for the review...

[Snip]
> >
> > > On Sat, Jan 07, 2017 at 12:15:30PM +0530, Kedareswara rao Appana wrote:
> > > > When driver is handling AXI DMA SoftIP When user submits multiple
> > > > descriptors back to back on the S2MM(recv) side with the current
> > > > driver flow the last buffer descriptor next bd points to a invalid
> > > > location resulting the invalid data or errors in the DMA engine.
> > >
> > > Can you rephrase this? It is a bit hard to understand.
> >
> > When DMA is receiving packets h/w expects the descriptors Should be in
> > the form of a ring (I mean h/w buffer descriptor Next descriptor field
> > should always point to valid address So that when DMA engine go and
> > fetch that next descriptor it always Sees a valid address).
> >
> >
> > But with the current driver implementation when user queues Multiple
> > descriptors the last descriptor next descriptor field Pointing to an
> > invalid location causing data corruption or Errors from the DMA h/w
> > engine...
> >
> > To avoid this issue creating a Buffer descriptor Chain during Channel
> > allocation and using those buffer descriptors for processing User
> > requested data.
> 
> Is it not doable to modify the next pointer to point to the subsequent
> transaction? IOW, you are modifying the tail descriptor to point to the
> subsequent descriptor.
> 
> Btw, how and when does DMA stop? Assuming it is circular it never would;
> isn't there a valid/stop flag associated with a descriptor which tells
> the DMA engine what to do next?

There are two registers that control the DMA transfers:
the current descriptor and the tail descriptor register.
When the current descriptor reaches the tail descriptor, the DMA engine pauses.

When the tail descriptor is reprogrammed, the DMA engine starts fetching
descriptors again.

But with the existing driver flow, if we reprogram the tail descriptor,
the tail descriptor's next-descriptor field points to an invalid location,
causing data corruption...

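A rough sketch of that start/pause/resume handshake (the helpers
chan_write()/chan_read() and the register constants below are illustrative
stand-ins, not the driver's actual names):

	/* Illustrative register map, not the real offsets. */
	#define REG_CONTROL	0x00
	#define REG_CURDESC	0x08
	#define REG_TAILDESC	0x10
	#define CR_RUNSTOP	(1U << 0)

	static void ring_start(struct xilinx_dma_chan *chan,
			       dma_addr_t head, dma_addr_t tail)
	{
		/* Point the engine at the head of the BD chain... */
		chan_write(chan, REG_CURDESC, head);
		/* ...set run/stop... */
		chan_write(chan, REG_CONTROL,
			   chan_read(chan, REG_CONTROL) | CR_RUNSTOP);
		/* ...and program the tail: the engine fetches BDs until
		 * the current descriptor reaches the tail, then pauses. */
		chan_write(chan, REG_TAILDESC, tail);
	}

	static void ring_submit_more(struct xilinx_dma_chan *chan,
				     dma_addr_t new_tail)
	{
		/* Every next_desc is already valid (the chain is circular),
		 * so moving the tail forward simply resumes fetching. */
		chan_write(chan, REG_TAILDESC, new_tail);
	}
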
> 
> 
> Btw, there is something wrong with your MUA; perhaps lines are titlecased
> for no reason. This is typically the behaviour of a non-Linux tool, which
> may not be a great tool for this work.

Thanks for pointing it out.
I usually reply from Outlook on a Windows machine.
Will check with others in my team how they configured their mail clients.

> 
> >
> > Please let me know if the above explanation is not clear will explain in 
> > detail
> >
> > >
> > > >
> > > > This patch fixes this issue by creating a BD Chain during
> > >
> > > what's a BD?
> >
> > Buffer descriptor.
> 
> That's nowhere mentioned...

Yep, sorry, I should have mentioned it...

Regards,
Kedar.


Re: [PATCH v5 3/3] dmaengine: xilinx_dma: Fix race condition in the driver for multiple descriptor scenario

2017-01-12 Thread Vinod Koul
On Thu, Jan 12, 2017 at 02:19:49PM +0000, Appana Durga Kedareswara Rao wrote:
> Hi Vinod,
> 
>   Thanks for the review...
> 
> > On Sat, Jan 07, 2017 at 12:15:30PM +0530, Kedareswara rao Appana wrote:
> > > When driver is handling AXI DMA SoftIP When user submits multiple
> > > descriptors back to back on the S2MM(recv) side with the current
> > > driver flow the last buffer descriptor next bd points to a invalid
> > > location resulting the invalid data or errors in the DMA engine.
> > 
> > Can you rephrase this? It is a bit hard to understand.
> 
> When DMA is receiving packets h/w expects the descriptors
> Should be in the form of a ring (I mean h/w buffer descriptor
> Next descriptor field should always point to valid address
> So that when DMA engine go and fetch that next descriptor it always 
> Sees a valid address).
>
> 
> But with the current driver implementation when user queues
> Multiple descriptors the last descriptor next descriptor field
> Pointing to an invalid location causing data corruption or 
> Errors from the DMA h/w engine...
> 
> To avoid this issue creating a Buffer descriptor Chain during 
> Channel allocation and using those buffer descriptors for processing
> User requested data.

Is it not doable to modify the next pointer to point to the subsequent
transaction? IOW, you are modifying the tail descriptor to point to the
subsequent descriptor.

Btw, how and when does DMA stop? Assuming it is circular it never would;
isn't there a valid/stop flag associated with a descriptor which tells
the DMA engine what to do next?


Btw, there is something wrong with your MUA; perhaps lines are titlecased
for no reason. This is typically the behaviour of a non-Linux tool, which
may not be a great tool for this work.

> 
> Please let me know if the above explanation is not clear will explain in 
> detail
> 
> > 
> > >
> > > This patch fixes this issue by creating a BD Chain during
> > 
> > what's a BD?
> 
> Buffer descriptor.

That's nowhere mentioned...

> 
> > 
> > > channel allocation itself and use those BD's.
> > >
> > > Signed-off-by: Kedareswara rao Appana 
> > > ---
> > >
> > >  drivers/dma/xilinx/xilinx_dma.c | 133
> > > +---
> > >  1 file changed, 83 insertions(+), 50 deletions(-)
> > >
> > > diff --git a/drivers/dma/xilinx/xilinx_dma.c
> > > b/drivers/dma/xilinx/xilinx_dma.c index 0e9c02e..af2159d 100644
> > > --- a/drivers/dma/xilinx/xilinx_dma.c
> > > +++ b/drivers/dma/xilinx/xilinx_dma.c
> > > @@ -163,6 +163,7 @@
> > >  #define XILINX_DMA_BD_SOP  BIT(27)
> > >  #define XILINX_DMA_BD_EOP  BIT(26)
> > >  #define XILINX_DMA_COALESCE_MAX  255
> > > +#define XILINX_DMA_NUM_DESCS 255
> > 
> > why 255?
> 
> It is not an h/w limitation 
> Allocating 255 descriptors (Each descriptor is capable of sending 7MB data)
> So roughly using allocated descriptors DMA engine can transfer 1GB data
> And in the driver we are reusing the allocated descriptors when they are free.
> 
> Regards,
> Kedar.

-- 
~Vinod


RE: [PATCH v5 3/3] dmaengine: xilinx_dma: Fix race condition in the driver for multiple descriptor scenario

2017-01-12 Thread Appana Durga Kedareswara Rao
Hi Vinod,

Thanks for the review...

> On Sat, Jan 07, 2017 at 12:15:30PM +0530, Kedareswara rao Appana wrote:
> > When driver is handling AXI DMA SoftIP When user submits multiple
> > descriptors back to back on the S2MM(recv) side with the current
> > driver flow the last buffer descriptor next bd points to a invalid
> > location resulting the invalid data or errors in the DMA engine.
> 
> Can you rephrase this? It is a bit hard to understand.

When DMA is receiving packets h/w expects the descriptors
Should be in the form of a ring (I mean h/w buffer descriptor
Next descriptor field should always point to valid address
So that when DMA engine go and fetch that next descriptor it always 
Sees a valid address).

But with the current driver implementation when user queues
Multiple descriptors the last descriptor next descriptor field
Pointing to an invalid location causing data corruption or 
Errors from the DMA h/w engine...

To avoid this issue creating a Buffer descriptor Chain during 
Channel allocation and using those buffer descriptors for processing
User requested data.

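For concreteness, a sketch of what creating that chain during channel
allocation looks like, consistent with the seg_v/seg_p and free_seg_list
fields in the diff quoted below (paraphrased, not the literal patch hunk):

	/* Allocate all BDs once and pre-link their next pointers into a
	 * ring, so the hardware always fetches a valid descriptor. */
	int i;

	chan->seg_v = dma_alloc_coherent(chan->dev,
			sizeof(*chan->seg_v) * XILINX_DMA_NUM_DESCS,
			&chan->seg_p, GFP_KERNEL);
	if (!chan->seg_v)
		return -ENOMEM;

	for (i = 0; i < XILINX_DMA_NUM_DESCS; i++) {
		dma_addr_t next = chan->seg_p + sizeof(*chan->seg_v) *
				  ((i + 1) % XILINX_DMA_NUM_DESCS);

		chan->seg_v[i].hw.next_desc = lower_32_bits(next);
		chan->seg_v[i].hw.next_desc_msb = upper_32_bits(next);
		chan->seg_v[i].phys = chan->seg_p + sizeof(*chan->seg_v) * i;
		/* BDs start on the free list and are recycled back to it
		 * by xilinx_dma_free_tx_segment(). */
		list_add_tail(&chan->seg_v[i].node, &chan->free_seg_list);
	}
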
Please let me know if the above explanation is not clear will explain in 
detail

> 
> >
> > This patch fixes this issue by creating a BD Chain during
> 
> what's a BD?

Buffer descriptor.

> 
> > channel allocation itself and use those BD's.
> >
> > Signed-off-by: Kedareswara rao Appana 
> > ---
> >
> >  drivers/dma/xilinx/xilinx_dma.c | 133
> > +---
> >  1 file changed, 83 insertions(+), 50 deletions(-)
> >
> > diff --git a/drivers/dma/xilinx/xilinx_dma.c
> > b/drivers/dma/xilinx/xilinx_dma.c index 0e9c02e..af2159d 100644
> > --- a/drivers/dma/xilinx/xilinx_dma.c
> > +++ b/drivers/dma/xilinx/xilinx_dma.c
> > @@ -163,6 +163,7 @@
> >  #define XILINX_DMA_BD_SOP  BIT(27)
> >  #define XILINX_DMA_BD_EOP  BIT(26)
> >  #define XILINX_DMA_COALESCE_MAX  255
> > +#define XILINX_DMA_NUM_DESCS   255
> 
> why 255?

It is not an h/w limitation 
Allocating 255 descriptors (Each descriptor is capable of sending 7MB data)
So roughly using allocated descriptors DMA engine can transfer 1GB data
And in the driver we are reusing the allocated descriptors when they are free.

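(Taking the stated ~7 MB per descriptor at face value, the pre-allocated
chain can describe about 255 x 7 MB, i.e. roughly 1.7 GB, of outstanding
data before the driver has to recycle a freed descriptor.)
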
Regards,
Kedar.


Re: [PATCH v5 3/3] dmaengine: xilinx_dma: Fix race condition in the driver for multiple descriptor scenario

2017-01-10 Thread Vinod Koul
On Sat, Jan 07, 2017 at 12:15:30PM +0530, Kedareswara rao Appana wrote:
> When driver is handling AXI DMA SoftIP
> When user submits multiple descriptors back to back on the S2MM(recv)
> side with the current driver flow the last buffer descriptor next bd
> points to a invalid location resulting the invalid data or errors in the
> DMA engine.

Can you rephrase this? It is a bit hard to understand.

> 
> This patch fixes this issue by creating a BD Chain during

what's a BD?

> channel allocation itself and use those BD's.
> 
> Signed-off-by: Kedareswara rao Appana 
> ---
> Changes for v5:
> ---> None.
> Changes for v4:
> ---> None.
> Changes for v3:
> ---> None.
> Changes for v2:
> ---> None.
> 
>  drivers/dma/xilinx/xilinx_dma.c | 133 
> +---
>  1 file changed, 83 insertions(+), 50 deletions(-)
> 
> diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
> index 0e9c02e..af2159d 100644
> --- a/drivers/dma/xilinx/xilinx_dma.c
> +++ b/drivers/dma/xilinx/xilinx_dma.c
> @@ -163,6 +163,7 @@
>  #define XILINX_DMA_BD_SOP  BIT(27)
>  #define XILINX_DMA_BD_EOP  BIT(26)
>  #define XILINX_DMA_COALESCE_MAX  255
> +#define XILINX_DMA_NUM_DESCS 255

why 255?

>  #define XILINX_DMA_NUM_APP_WORDS 5
>  
>  /* Multi-Channel DMA Descriptor offsets*/
> @@ -310,6 +311,7 @@ struct xilinx_dma_tx_descriptor {
>   * @pending_list: Descriptors waiting
>   * @active_list: Descriptors ready to submit
>   * @done_list: Complete descriptors
> + * @free_seg_list: Free descriptors
>   * @common: DMA common channel
>   * @desc_pool: Descriptors pool
>   * @dev: The dma device
> @@ -331,7 +333,9 @@ struct xilinx_dma_tx_descriptor {
>   * @desc_submitcount: Descriptor h/w submitted count
>   * @residue: Residue for AXI DMA
>   * @seg_v: Statically allocated segments base
> + * @seg_p: Physical allocated segments base
>   * @cyclic_seg_v: Statically allocated segment base for cyclic transfers
> + * @cyclic_seg_p: Physical allocated segments base for cyclic dma
>   * @start_transfer: Differentiate b/w DMA IP's transfer
>   */
>  struct xilinx_dma_chan {
> @@ -342,6 +346,7 @@ struct xilinx_dma_chan {
>   struct list_head pending_list;
>   struct list_head active_list;
>   struct list_head done_list;
> + struct list_head free_seg_list;
>   struct dma_chan common;
>   struct dma_pool *desc_pool;
>   struct device *dev;
> @@ -363,7 +368,9 @@ struct xilinx_dma_chan {
>   u32 desc_submitcount;
>   u32 residue;
>   struct xilinx_axidma_tx_segment *seg_v;
> + dma_addr_t seg_p;
>   struct xilinx_axidma_tx_segment *cyclic_seg_v;
> + dma_addr_t cyclic_seg_p;
>   void (*start_transfer)(struct xilinx_dma_chan *chan);
>   u16 tdest;
>  };
> @@ -569,17 +576,31 @@ static struct xilinx_axidma_tx_segment *
>  xilinx_axidma_alloc_tx_segment(struct xilinx_dma_chan *chan)
>  {
>   struct xilinx_axidma_tx_segment *segment;
> - dma_addr_t phys;
> -
> - segment = dma_pool_zalloc(chan->desc_pool, GFP_ATOMIC, &phys);
> - if (!segment)
> - return NULL;
> + unsigned long flags;
>  
> - segment->phys = phys;
> + spin_lock_irqsave(&chan->lock, flags);
> + if (!list_empty(&chan->free_seg_list)) {
> + segment = list_first_entry(&chan->free_seg_list,
> +struct xilinx_axidma_tx_segment,
> +node);
> + list_del(&segment->node);
> + }
> + spin_unlock_irqrestore(&chan->lock, flags);
>  
>   return segment;
>  }
>  
> +static void xilinx_dma_clean_hw_desc(struct xilinx_axidma_desc_hw *hw)
> +{
> + u32 next_desc = hw->next_desc;
> + u32 next_desc_msb = hw->next_desc_msb;
> +
> + memset(hw, 0, sizeof(struct xilinx_axidma_desc_hw));
> +
> + hw->next_desc = next_desc;
> + hw->next_desc_msb = next_desc_msb;
> +}
> +
>  /**
>   * xilinx_dma_free_tx_segment - Free transaction segment
>   * @chan: Driver specific DMA channel
> @@ -588,7 +609,9 @@ xilinx_axidma_alloc_tx_segment(struct xilinx_dma_chan *chan)
>  static void xilinx_dma_free_tx_segment(struct xilinx_dma_chan *chan,
>   struct xilinx_axidma_tx_segment *segment)
>  {
> - dma_pool_free(chan->desc_pool, segment, segment->phys);
> + xilinx_dma_clean_hw_desc(&segment->hw);
> +
> + list_add_tail(&segment->node, &chan->free_seg_list);
>  }
>  
>  /**
> @@ -713,16 +736,26 @@ static void xilinx_dma_free_descriptors(struct xilinx_dma_chan *chan)
>  static void xilinx_dma_free_chan_resources(struct dma_chan *dchan)
>  {
>   struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> + unsigned long flags;
>  
>   dev_dbg(chan->dev, "Free all channel resources.\n");
>  
>   xilinx_dma_free_descriptors(chan);
> +
>   if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) {
> - xilinx_dma_free_tx_segment(chan, chan->cyclic_seg_v);
> - xilinx_dma_free_tx_segment(chan, chan->seg_v);

[PATCH v5 3/3] dmaengine: xilinx_dma: Fix race condition in the driver for multiple descriptor scenario

2017-01-06 Thread Kedareswara rao Appana
When driver is handling AXI DMA SoftIP
When user submits multiple descriptors back to back on the S2MM(recv)
side with the current driver flow the last buffer descriptor next bd
points to a invalid location resulting the invalid data or errors in the
DMA engine.

This patch fixes this issue by creating a BD Chain during
channel allocation itself and use those BD's.

Signed-off-by: Kedareswara rao Appana 
---
Changes for v5:
---> None.
Changes for v4:
---> None.
Changes for v3:
---> None.
Changes for v2:
---> None.

 drivers/dma/xilinx/xilinx_dma.c | 133 +---
 1 file changed, 83 insertions(+), 50 deletions(-)

diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
index 0e9c02e..af2159d 100644
--- a/drivers/dma/xilinx/xilinx_dma.c
+++ b/drivers/dma/xilinx/xilinx_dma.c
@@ -163,6 +163,7 @@
 #define XILINX_DMA_BD_SOP  BIT(27)
 #define XILINX_DMA_BD_EOP  BIT(26)
 #define XILINX_DMA_COALESCE_MAX  255
+#define XILINX_DMA_NUM_DESCS   255
 #define XILINX_DMA_NUM_APP_WORDS   5
 
 /* Multi-Channel DMA Descriptor offsets*/
@@ -310,6 +311,7 @@ struct xilinx_dma_tx_descriptor {
  * @pending_list: Descriptors waiting
  * @active_list: Descriptors ready to submit
  * @done_list: Complete descriptors
+ * @free_seg_list: Free descriptors
  * @common: DMA common channel
  * @desc_pool: Descriptors pool
  * @dev: The dma device
@@ -331,7 +333,9 @@ struct xilinx_dma_tx_descriptor {
  * @desc_submitcount: Descriptor h/w submitted count
  * @residue: Residue for AXI DMA
  * @seg_v: Statically allocated segments base
+ * @seg_p: Physical allocated segments base
  * @cyclic_seg_v: Statically allocated segment base for cyclic transfers
+ * @cyclic_seg_p: Physical allocated segments base for cyclic dma
  * @start_transfer: Differentiate b/w DMA IP's transfer
  */
 struct xilinx_dma_chan {
@@ -342,6 +346,7 @@ struct xilinx_dma_chan {
struct list_head pending_list;
struct list_head active_list;
struct list_head done_list;
+   struct list_head free_seg_list;
struct dma_chan common;
struct dma_pool *desc_pool;
struct device *dev;
@@ -363,7 +368,9 @@ struct xilinx_dma_chan {
u32 desc_submitcount;
u32 residue;
struct xilinx_axidma_tx_segment *seg_v;
+   dma_addr_t seg_p;
struct xilinx_axidma_tx_segment *cyclic_seg_v;
+   dma_addr_t cyclic_seg_p;
void (*start_transfer)(struct xilinx_dma_chan *chan);
u16 tdest;
 };
@@ -569,17 +576,31 @@ static struct xilinx_axidma_tx_segment *
 xilinx_axidma_alloc_tx_segment(struct xilinx_dma_chan *chan)
 {
struct xilinx_axidma_tx_segment *segment;
-   dma_addr_t phys;
-
-   segment = dma_pool_zalloc(chan->desc_pool, GFP_ATOMIC, &phys);
-   if (!segment)
-   return NULL;
+   unsigned long flags;
 
-   segment->phys = phys;
+   spin_lock_irqsave(&chan->lock, flags);
+   if (!list_empty(&chan->free_seg_list)) {
+   segment = list_first_entry(&chan->free_seg_list,
+  struct xilinx_axidma_tx_segment,
+  node);
+   list_del(&segment->node);
+   }
+   spin_unlock_irqrestore(&chan->lock, flags);
 
return segment;
 }
 
+static void xilinx_dma_clean_hw_desc(struct xilinx_axidma_desc_hw *hw)
+{
+   u32 next_desc = hw->next_desc;
+   u32 next_desc_msb = hw->next_desc_msb;
+
+   memset(hw, 0, sizeof(struct xilinx_axidma_desc_hw));
+
+   hw->next_desc = next_desc;
+   hw->next_desc_msb = next_desc_msb;
+}
+
 /**
  * xilinx_dma_free_tx_segment - Free transaction segment
  * @chan: Driver specific DMA channel
@@ -588,7 +609,9 @@ xilinx_axidma_alloc_tx_segment(struct xilinx_dma_chan *chan)
 static void xilinx_dma_free_tx_segment(struct xilinx_dma_chan *chan,
struct xilinx_axidma_tx_segment *segment)
 {
-   dma_pool_free(chan->desc_pool, segment, segment->phys);
+   xilinx_dma_clean_hw_desc(&segment->hw);
+
+   list_add_tail(&segment->node, &chan->free_seg_list);
 }
 
 /**
@@ -713,16 +736,26 @@ static void xilinx_dma_free_descriptors(struct xilinx_dma_chan *chan)
 static void xilinx_dma_free_chan_resources(struct dma_chan *dchan)
 {
struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+   unsigned long flags;
 
dev_dbg(chan->dev, "Free all channel resources.\n");
 
xilinx_dma_free_descriptors(chan);
+
if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) {
-   xilinx_dma_free_tx_segment(chan, chan->cyclic_seg_v);
-   xilinx_dma_free_tx_segment(chan, chan->seg_v);
+   spin_lock_irqsave(&chan->lock, flags);
+   INIT_LIST_HEAD(&chan->free_seg_list);
+   spin_unlock_irqrestore(&chan->lock, flags);
+
+   /* Free Memory that is allocated for cyclic DMA Mode */
+   dma_free_coherent(chan->dev, sizeof(*chan->cyclic_seg_v),
+  
