Re: Deprecated spi_master.transfer and prepared SPI messages for an optimized pipelined SPI-DMA driver

2013-10-29 Thread Linus Walleij
On Mon, Oct 28, 2013 at 11:42 AM, Martin Sperl mar...@sperl.org wrote:

> (...) I thought of moving away from spi_transfer_one_message and back to
> the simpler transfer interface, where the preprocessing (DMA control-block
> chain generation) would get done up front and the result then appended to
> the existing (possibly running) DMA chain.

OK, quite a cool idea.

But I hope that you have the necessary infrastructure in the dmaengine
subsystem for this, or that the required changes will be proposed there
first or together with these changes.

As you will be using dmaengine (I guess?), maybe a lot of this can
actually be handled directly in the core, since that code should be
pretty generic, or in a separate file like spi-dmaengine-chain.c?
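
A very rough sketch of what such a generic helper could look like, assuming
the standard dmaengine slave API (dmaengine_prep_slave_single,
dmaengine_submit, dma_async_issue_pending); the helper name and the
spi_dma_chain struct with its fields are invented for illustration only:

/*
 * Sketch only: chain a whole spi_message into the DMA engine, as it might
 * live in a spi-dmaengine-chain.c.  Only the last descriptor of a message
 * raises an interrupt, so the transfers run back-to-back in the engine.
 */
#include <linux/dmaengine.h>
#include <linux/spi/spi.h>

struct spi_dma_chain {
	struct dma_chan *tx_chan;	/* dmaengine channel feeding the SPI FIFO */
};

static int spi_dmaengine_chain_message(struct spi_dma_chain *chain,
				       struct spi_message *msg)
{
	struct spi_transfer *xfer;
	struct dma_async_tx_descriptor *desc;

	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
		bool last = list_is_last(&xfer->transfer_list, &msg->transfers);

		/* assumes the buffers were already DMA-mapped into xfer->tx_dma */
		desc = dmaengine_prep_slave_single(chain->tx_chan, xfer->tx_dma,
						   xfer->len, DMA_MEM_TO_DEV,
						   last ? DMA_PREP_INTERRUPT : 0);
		if (!desc)
			return -ENOMEM;

		if (last) {
			/* complete the spi_message from the DMA callback */
			desc->callback = msg->complete;
			desc->callback_param = msg->context;
		}
		dmaengine_submit(desc);
	}

	/* kick the channel; a no-op if it is already running */
	dma_async_issue_pending(chain->tx_chan);
	return 0;
}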

> But just yesterday I was looking through the code and came to the message:
> "master is unqueued, this is deprecated" (drivers/spi/spi.c, line 1167).
> This came in with commit ffbbdd21329f3e15eeca6df2d4bc11c04d9d91c0 and got
> included in 3.4.

> So I am wondering why you would deprecate this interface

Simply because none of the in-kernel users was doing what you are
trying to do now. And no one said anything about such future use cases,
so how could we know?

> Now this brings me to a different question:
> Could we implement some additional functions for preparing
> an SPI message (...)
> The interface could look something like this:
> int spi_prepare_message(struct spi_device *spi, struct spi_message *msg);
> int spi_unprepare_message(struct spi_device *spi, struct spi_message *msg);

Maybe? I cannot tell from the above how this would look, so
I think it is better if you send a patch showing how this improves
efficiency.

Yours,
Linus Walleij



Deprecated spi_master.transfer and prepared SPI messages for an optimized pipelined SPI-DMA driver

2013-10-28 Thread Martin Sperl
Hi!

I am currently writing an SPI driver for the Raspberry Pi that relies solely on
DMA for the whole transfer.
It can already handle a full spi_message with multiple transfers, cs_change,
and speed_hz changes using DMA alone, so it is using the DMA to the max...
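
To give an idea of the kind of message this covers (buffer names, sizes and
clock rates below are made up; the spi_message/spi_transfer calls are the
standard client interface):

/*
 * Example of a message the driver already handles purely in DMA: several
 * transfers, a cs_change in the middle and per-transfer speed_hz settings.
 */
#include <linux/spi/spi.h>

static int example_message(struct spi_device *spi, void *cmd, void *resp)
{
	struct spi_transfer xfers[2] = {
		{
			.tx_buf    = cmd,
			.len       = 4,
			.cs_change = 1,		/* deassert CS between the transfers */
			.speed_hz  = 10000000,	/* 10 MHz for the command */
		}, {
			.rx_buf    = resp,
			.len       = 16,
			.speed_hz  = 2000000,	/* slower read-back */
		},
	};
	struct spi_message msg;

	spi_message_init(&msg);
	spi_message_add_tail(&xfers[0], &msg);
	spi_message_add_tail(&xfers[1], &msg);

	return spi_sync(spi, &msg);
}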

Right now (for ease of development) the driver is still using the
transfer_one_message interface.
But unfortunately this interface introduces latencies/gaps between individual
messages because of the latency chain: interrupt -> wakeup_spinlock ->
schedule message_pump_thread.

So, as the DMA driver is already handling a single SPI transfer very
efficiently, there is also the option to chain multiple SPI transfers in the
DMA engine and only get interrupts for completion.

This is why I thought of moving away from spi_transfer_one_message and back to
the simpler transfer interface, where the preprocessing (DMA control-block
chain generation) would get done up front and the result then appended to the
existing (possibly running) DMA chain.
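
To illustrate the idea (the real BCM2835 control blocks have more fields; the
struct and function below are purely schematic and the names are invented):

#include <linux/types.h>
#include <asm/barrier.h>

/* schematic hardware scatter-gather "control block" with a link to the next */
struct dma_cb {
	u32 src;	/* source bus address */
	u32 dst;	/* destination bus address */
	u32 len;	/* number of bytes to transfer */
	u32 next;	/* bus address of the next control block, 0 = stop */
};

/*
 * Preprocessing turns one spi_message into such a chain (including the
 * register writes needed for cs_change and speed_hz).  Scheduling it is then
 * only a matter of linking it to the tail of the possibly still running
 * chain.  The race where the engine stops just before the link is patched is
 * glossed over here - the real driver has to detect that and restart the
 * channel.
 */
static void dma_cb_chain_append(struct dma_cb *running_tail, u32 new_head_busaddr)
{
	running_tail->next = new_head_busaddr;
	wmb();	/* make the link visible before the engine can follow it */
}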

But just yesterday I was looking through the code and came to the message:
"master is unqueued, this is deprecated" (drivers/spi/spi.c, line 1167).
This came in with commit ffbbdd21329f3e15eeca6df2d4bc11c04d9d91c0 and got
included in 3.4.

So I am wondering why you would deprecate this interface - I agree that for
most drivers moving to the transfer_one_message interface is the way forward
and reduces code size and bugs.
And the deprecation message will hopefully motivate driver authors to rewrite
their code to make use of the new interface.

But there may be some drivers that would really benefit from using the more
complex interface without it getting deprecated and at some point possibly
removed.
And I believe that the pipelined DMA driver is a good candidate to show that
under some circumstances the transfer interface may still be the better
solution...
Obviously I could work around the inner workings of the message pump by
massaging the data before some of the calls, which I would say is even more
deprecated!

Now this brings me to a different question:
Could we implement some additional functions for preparing an SPI message, so
that the repeated computation for an spi_message is avoided by keeping a
prepared transaction that only needs to be scheduled? Changing the data itself
would not be a problem, but changing the data block addresses would be a
change that requires recalculating the prepared data.

The interface could look something like this:
int spi_prepare_message(struct spi_device *spi, struct spi_message *msg);
int spi_unprepare_message(struct spi_device *spi, struct spi_message *msg);

together with an additional member (void *prepared) in the spi_message
structure for keeping such prepared data...
OK, abusing the existing queue and state members could work, but - if I read
the comments correctly - this data is only guaranteed to be available to the
driver between spi_async and the corresponding callback (outside of that it is
used by the message pump and gets cleared in the spi_finalize_current_message
function).
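
To make the intended use a bit more concrete (nothing of this exists today;
the prototypes are just the ones proposed above and stream_samples is a
made-up example client):

#include <linux/spi/spi.h>

/* proposed, non-existing core calls */
int spi_prepare_message(struct spi_device *spi, struct spi_message *msg);
int spi_unprepare_message(struct spi_device *spi, struct spi_message *msg);

/* prepare the message once, reuse it many times, unprepare when done */
static int stream_samples(struct spi_device *spi, struct spi_message *msg,
			  int count)
{
	int i, ret;

	ret = spi_prepare_message(spi, msg);	/* build the DMA chain once */
	if (ret)
		return ret;

	for (i = 0; i < count; i++) {
		/* buffers may be refilled in place, their addresses must not change */
		ret = spi_sync(spi, msg);
		if (ret)
			break;
	}

	spi_unprepare_message(spi, msg);	/* drop msg->prepared again */
	return ret;
}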


Thanks,
  Martin