Hi Mark,

> From: Mark Brown [mailto:[email protected]]
> > On Thu, Oct 17, 2013 at 05:54:53PM +0530, Sourav Poddar wrote:
> >   Setup:
> >     Here, the actual memcpy is done in the spi controller, and flash
> >     communicates to the qspi controller to do the memcpy using the
> >     SPI framework. This is what is proposed in the $subject patch.
> 
> >   Setup:
> >     Here, the actual memcpy is done in the mtd read API itself, by
> >     getting the memmap address from the spi controller.
> 
> > So, time reduced almost to half while bypassing the SPI framework.
> 
> The interesting case for benchmarking here is more a comparison between
> normal DMA driven transfers and the memcpy().  Some consideration of the
> CPU load would also be interesting here, if the SoC is waiting for the
> flash then it's probably useful if it can make progress on other things.
> 
So in CASE-2, where the SPI framework is bypassed, mtd_read() becomes:
mtd_read()
{
        if (flash->mmap_mode) {
                if (dma_available)
                        read_via_dma(destination, source, length);
                else
                        memcpy(destination, source, length);
        } else {
                /* use the SPI framework by default */
        }
}
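
To be concrete, the memcpy branch above could look roughly like the
below (untested sketch; mmap_flash_read and mmap_base are illustrative
names, not taken from the actual patch):

#include <linux/io.h>
#include <linux/types.h>

/* CPU-driven read from a memory-mapped flash window */
static int mmap_flash_read(void __iomem *mmap_base, loff_t from,
                           size_t len, u_char *buf)
{
        /* copy straight out of the mapped window, no SPI framework */
        memcpy_fromio(buf, mmap_base + from, len);
        return 0;
}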

Are you looking for a comparison between read_via_dma() and memcpy()?

If yes, then unfortunately we are a bit constrained, because our
controller does not support DMA, so we have to depend on CPU-based
memcpy() alone. However, DMA support can be added as an independent
patch on top of this CASE-2 patch, along the lines of the sketch below.
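
For reference, read_via_dma() on a DMA-capable controller could be
built on the generic dmaengine memcpy API, something like the below
(untested; all names are illustrative, and dst/src are assumed to be
already-mapped DMA addresses):

#include <linux/dmaengine.h>
#include <linux/errno.h>

static int read_via_dma(dma_addr_t dst, dma_addr_t src, size_t len)
{
        dma_cap_mask_t mask;
        struct dma_chan *chan;
        struct dma_async_tx_descriptor *tx;
        dma_cookie_t cookie;

        /* grab any channel that can do a plain memcpy */
        dma_cap_zero(mask);
        dma_cap_set(DMA_MEMCPY, mask);
        chan = dma_request_channel(mask, NULL, NULL);
        if (!chan)
                return -ENODEV;

        tx = chan->device->device_prep_dma_memcpy(chan, dst, src, len,
                                                  DMA_PREP_INTERRUPT);
        if (!tx) {
                dma_release_channel(chan);
                return -EIO;
        }

        cookie = dmaengine_submit(tx);
        dma_async_issue_pending(chan);
        /* poll for completion; a real driver would use a callback */
        dma_sync_wait(chan, cookie);

        dma_release_channel(chan);
        return 0;
}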

So, will the base patch for CASE-2 (with the SPI framework bypassed) help?


with regards, pekon
