On Saturday 07 May 2011, Per Forlin wrote:
> > The mmc queue never runs empty until the end of the transfer. The
> > requests are 128 blocks (a 64k limit set in the mmc host driver)
> > compared to 256 blocks before. This will not improve performance
> > much, since the transfers are now smaller than before. The latency
> > is minimal, but the extra number of transfers causes more mmc cmd
> > overhead. I added prints to print the wait time in
> > lock_page_killable too. I wonder if I can achieve a non-empty mmc
> > block queue without compromising mmc host driver performance.
> >
> There is actually a performance increase from 16.5 MB/s to 18.4 MB/s
> when lowering max_req_size to 64k.
> I ran a dd test on a pandaboard using a 2.6.39-rc5 kernel.

I've noticed with a number of cards that 64k writes are faster than any
other size. What I have not figured out yet is whether this is a common
hardware optimization for MS Windows (which always uses 64K I/O when it
can), or a software effect, in which case we could actually make it go
faster on Linux by tuning for other sizes.
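For reference, a throughput comparison like the one above can be done
with a simple dd sweep over request sizes. This is only a sketch: the
TARGET path is an assumption (point it at a file on the card under
test), and on a real device you would likely also want oflag=direct so
the card sees requests of the size you ask for rather than page-cache
writeback.

```shell
#!/bin/sh
# Hypothetical dd block-size sweep; TARGET is an assumption --
# substitute a file on the SD card you want to measure.
TARGET=${TARGET:-/tmp/ddtest}

for bs in 16k 32k 64k 128k 256k; do
    sync
    echo "bs=$bs"
    # conv=fsync makes dd flush before reporting, so the printed
    # throughput includes the actual write to the medium.
    dd if=/dev/zero of="$TARGET" bs="$bs" count=64 conv=fsync 2>&1 | tail -n 1
done
rm -f "$TARGET"
```

On GNU coreutils the last line of dd's stderr output is the throughput
summary, which is what the tail picks out for each size.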

        Arnd
