Re: mass storage behaviour

2015-10-17 Thread Paul Jones
On 17 Oct 2015, at 12:30, Alan Stern wrote: > Paul, can you tell your email client to wrap lines after 72 columns or > so? It would make replying a lot easier… Mac Mail is not very friendly in this respect, but I can pay attention not to make long lines :) > ...

Re: mass storage behaviour

2015-10-17 Thread Alan Stern
On Sat, 17 Oct 2015, Paul Jones wrote: > On 17 Oct 2015, at 12:30, Alan Stern wrote: > > > Paul, can you tell your email client to wrap lines after 72 columns or > > so? It would make replying a lot easier… > Mac Mail is not very friendly in this respect, but I can

Re: mass storage behaviour

2015-10-17 Thread Paul Jones
On 17 Oct 2015, at 16:06, Alan Stern wrote: > On Sat, 17 Oct 2015, Paul Jones wrote: > >> On 17 Oct 2015, at 12:30, Alan Stern wrote: >> >>> Paul, can you tell your email client to wrap lines after 72 columns or >>> so? It would make

Re: mass storage behaviour

2015-10-17 Thread Alan Stern
Paul, can you tell your email client to wrap lines after 72 columns or so? It would make replying a lot easier... On Fri, 16 Oct 2015, Paul Jones wrote: > >> The slow queuing of transfers seems to be due to waiting in a spin lock on > >> ep->dev->lock (4+ us). > >> I’m guessing this is caused
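
A minimal sketch of how that lock wait could be timed inside the UDC driver, assuming a net2280-style request-queue path that takes ep->dev->lock; this is a fragment, and the ktime arithmetic and pr_debug format are illustrative, not the instrumentation actually used in this thread:

  /* Fragment only, not standalone: time how long we wait for dev->lock. */
  {
      ktime_t before = ktime_get();
      unsigned long flags;

      spin_lock_irqsave(&ep->dev->lock, flags);
      pr_debug("%s: waited %lld ns for dev->lock\n", ep->ep.name,
               ktime_to_ns(ktime_sub(ktime_get(), before)));

      /* ... build and start the DMA descriptor as usual ... */

      spin_unlock_irqrestore(&ep->dev->lock, flags);
  }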

Re: mass storage behaviour

2015-10-16 Thread Paul Jones
On 06 Oct 2015, at 12:34, Paul Jones wrote: > On 06 Oct 2015, at 16:44, Alan Stern wrote: > >> On Tue, 6 Oct 2015, Paul Jones wrote: >> >>> On 05 Oct 2015, at 23:09, Alan Stern wrote: >>> On Mon, 5 Oct 2015,

Re: mass storage behaviour

2015-10-16 Thread Alan Stern
On Fri, 16 Oct 2015, Paul Jones wrote: > Added some debugging statements in f_mass_storage/net2280 to get an idea of > what is going on on the wire (as I unfortunately don’t have any tools to > figure it out any other way). > Additional debug statement locations: ... > Log for a single large
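
For context, debug statements of this kind would typically look something like the fragment below (illustrative only; the actual locations and format strings from Paul's patch are not shown in this archive). A monotonic timestamp is logged when a request is handed to the UDC and again when its completion callback fires, so the gap between the two lines gives the on-the-wire time per request.

  /* In the submission path: */
  pr_info("fsg: queue    req=%p len=%u t=%lld ns\n",
          req, req->length, ktime_to_ns(ktime_get()));
  rc = usb_ep_queue(ep, req, GFP_ATOMIC);

  /* In the request's completion handler: */
  pr_info("fsg: complete req=%p actual=%u status=%d t=%lld ns\n",
          req, req->actual, req->status, ktime_to_ns(ktime_get()));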

Re: mass storage behaviour

2015-10-16 Thread Felipe Balbi
Hi, Paul Jones writes: > First DMA to first IRQ = 116us. > > Second DMA to second IRQ = 107us. > > These seem fairly stable as the last request in a 500+MB transfer had > exactly the same timings. > > Each IRQ is taking around 14us to handle, during which most of the > time
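
A quick sanity check on those figures, assuming each request carries f_mass_storage's default 16 KiB buffer (an assumption, not something stated in the quoted text): 16 KiB delivered every ~110 us is roughly 150 MB/s, which matches the throughput reported at the start of the thread and suggests the per-request DMA-to-IRQ gap, rather than raw bus bandwidth, is the limiting factor.

  #include <stdio.h>

  int main(void)
  {
      /* Assumed request size: f_mass_storage's default 16 KiB buffer.
       * 110e-6 s is roughly the DMA-to-IRQ gap quoted above (116/107 us). */
      const double bytes_per_req = 16384.0;
      const double secs_per_req  = 110e-6;

      printf("approx. throughput: %.0f MB/s\n",
             bytes_per_req / secs_per_req / 1e6);
      return 0;
  }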

Re: mass storage behaviour

2015-10-16 Thread Paul Jones
On 16 Oct 2015, at 15:59, Alan Stern wrote: > On Fri, 16 Oct 2015, Paul Jones wrote: > >> Added some debugging statements in f_mass_storage/net2280 to get an idea of >> what is going on on the wire (as I unfortunately don’t have any tools to >> figure it out any

Re: mass storage behaviour

2015-10-16 Thread Paul Jones
On 16 Oct 2015, at 18:06, Felipe Balbi wrote: > > Hi, > > Paul Jones writes: >> First DMA to first IRQ = 116us. >> >> Second DMA to second IRQ = 107us. >> >> These seem fairly stable as the last request in a 500+MB transfer had >> exactly the same timings.

Re: mass storage behaviour

2015-10-10 Thread Paul Jones
On 06 Oct 2015, at 13:26, Paul Zimmerman wrote: > On Tue, Oct 6, 2015 at 10:01 AM, Alan Stern wrote: >> On Tue, 6 Oct 2015, Felipe Balbi wrote: >> > In my experience, you need to do at least the following to get max > performance from the

Re: mass storage behaviour

2015-10-10 Thread Alan Stern
On Sat, 10 Oct 2015, Paul Jones wrote: > >> Why is Windows so much faster? Or to put it another way, why is Linux > >> slow? How can we improve things? > > > > I don't know. We were doing our performance demos using Windows, so we > > never looked into why Linux was slower. But I do know the

Re: mass storage behaviour

2015-10-10 Thread Paul Jones
On 10 Oct 2015, at 16:34, Alan Stern wrote: > On Sat, 10 Oct 2015, Paul Jones wrote: > Why is Windows so much faster? Or to put it another way, why is Linux slow? How can we improve things? >>> >>> I don't know. We were doing our performance demos using

Re: mass storage behaviour

2015-10-07 Thread Greg KH
On Tue, Oct 06, 2015 at 10:26:08AM -0700, Paul Zimmerman wrote: > On Tue, Oct 6, 2015 at 10:01 AM, Alan Stern wrote: > > On Tue, 6 Oct 2015, Felipe Balbi wrote: > > > >> >> In my experience, you need to do at least the following to get max > >> >> performance from the

Re: mass storage behaviour

2015-10-07 Thread Paul Jones
On 07 Oct 2015, at 10:13, Greg KH wrote: > On Tue, Oct 06, 2015 at 10:26:08AM -0700, Paul Zimmerman wrote: >> On Tue, Oct 6, 2015 at 10:01 AM, Alan Stern >> wrote: >>> On Tue, 6 Oct 2015, Felipe Balbi wrote: >>> >> In my experience,

Re: mass storage behaviour

2015-10-06 Thread Alan Stern
On Tue, 6 Oct 2015, Paul Jones wrote: > On 05 Oct 2015, at 23:09, Alan Stern wrote: > > > On Mon, 5 Oct 2015, Paul Jones wrote: > > > >>> Increasing the max_sectors_kb value, on the other hand, might remove > >>> overhead by allowing a higher percentage of the

Re: mass storage behaviour

2015-10-06 Thread Felipe Balbi
John Youn writes: > Hi Paul, > > Good to see you're still hanging around. > > On 10/5/2015 3:38 PM, Paul Zimmerman wrote: >> On Mon, 5 Oct 2015, Alan Stern wrote: >>> On Mon, 5 Oct 2015, Paul Jones wrote: >>> > Increasing the max_sectors_kb value, on the other hand,

Re: mass storage behaviour

2015-10-06 Thread Paul Jones
On 05 Oct 2015, at 23:09, Alan Stern wrote: > On Mon, 5 Oct 2015, Paul Jones wrote: > >>> Increasing the max_sectors_kb value, on the other hand, might remove >>> overhead by allowing a higher percentage of the transfer to consist of >>> real data as opposed to CBW

Re: mass storage behaviour

2015-10-06 Thread Paul Jones
On 06 Oct 2015, at 16:44, Alan Stern wrote: > On Tue, 6 Oct 2015, Paul Jones wrote: > >> On 05 Oct 2015, at 23:09, Alan Stern wrote: >> >>> On Mon, 5 Oct 2015, Paul Jones wrote: >>> > Increasing the max_sectors_kb value, on the other

Re: mass storage behaviour

2015-10-06 Thread Alan Stern
On Tue, 6 Oct 2015, Felipe Balbi wrote: > >> In my experience, you need to do at least the following to get max > >> performance from the mass storage gadget: > >> > >> - Use Windows 8 or higher on the host. It's much faster than Linux. Why is Windows so much faster? Or to put it another way,

Re: mass storage behaviour

2015-10-06 Thread Paul Zimmerman
On Tue, Oct 6, 2015 at 10:01 AM, Alan Stern wrote: > On Tue, 6 Oct 2015, Felipe Balbi wrote: > >> >> In my experience, you need to do at least the following to get max >> >> performance from the mass storage gadget: >> >> >> >> - Use Windows 8 or higher on the host.

Re: mass storage behaviour

2015-10-06 Thread Alan Stern
On Tue, 6 Oct 2015, Paul Jones wrote: > I changed /sys/block//device/max_sectors to 4096 and > /sys/block//queue/max_sectors_kb to 2048 > That improves matters slightly from 140MB/s to 160MB/s. > > Using Paul Zimmerman’s suggestion I can increase that to 174MB/s using a 160k > buffer. >
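
A small userspace sketch of that tuning, in case anyone wants to script it. The device name is a placeholder argument rather than the (elided) device from the message above, and whether the SCSI layer accepts 4096 for max_sectors depends on the kernel version and host controller.

  #include <stdio.h>

  /* Write one value to a sysfs attribute, e.g.
   *   sysfs_write("/sys/block/sdX/queue/max_sectors_kb", "2048");
   * "sdX" is a placeholder for the actual device node. */
  static int sysfs_write(const char *path, const char *value)
  {
      FILE *f = fopen(path, "w");

      if (!f) {
          perror(path);
          return -1;
      }
      fprintf(f, "%s\n", value);
      return fclose(f);
  }

  int main(int argc, char **argv)
  {
      char path[256];
      const char *dev = (argc > 1) ? argv[1] : "sdX";  /* placeholder */

      snprintf(path, sizeof(path), "/sys/block/%s/device/max_sectors", dev);
      sysfs_write(path, "4096");

      snprintf(path, sizeof(path), "/sys/block/%s/queue/max_sectors_kb", dev);
      sysfs_write(path, "2048");
      return 0;
  }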

mass storage behaviour

2015-10-05 Thread Paul Jones
I’m investigating the (lack of) performance (around 150MB/s) of the USB3380 gadget in mass storage mode. Whilst tracing on a Linux 4.1 host I noticed that the Linux mass storage driver is requesting 240 blocks, 16 blocks, 240 blocks, 16 blocks, etc. when doing a dd directly on the device:

Re: mass storage behaviour

2015-10-05 Thread Alan Stern
On Mon, 5 Oct 2015, Paul Jones wrote: > I’m investigating the (lack of) performance (around 150MB/s) of the USB3380 > gadget in mass storage mode. > Whilst tracing on a Linux 4.1 host I noticed that the Linux mass storage > driver is requesting 240 blocks, 16 blocks, 240 blocks, 16 blocks, etc.

Re: mass storage behaviour

2015-10-05 Thread Alan Stern
On Mon, 5 Oct 2015, Paul Jones wrote: > > g_mass_storage, by default, uses 2 struct usb_request, try increasing that > > to 4 > > (can be done from make menuconfig itself) and see if anything changes. > If you are talking about the ‘number of storage pipeline buffers’ I already > have them at
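
The "number of storage pipeline buffers" option controls how many buffer/usb_request pairs the gadget keeps in flight, so one buffer can be filled from storage while another is out on the bus. Below is a rough sketch of the general pattern using the generic gadget API, not the actual f_mass_storage code; bulk_in, buffers[], buflen and num_buffers are placeholders.

  /* Fragment only: prime num_buffers requests on the bulk-in endpoint.
   * Each completion refills its buffer and requeues it, so the UDC is
   * never left idle waiting for data (2 buffers by default, 4 suggested). */
  static void bulk_in_complete(struct usb_ep *ep, struct usb_request *req)
  {
      /* refill req->buf from the backing storage, then requeue */
      usb_ep_queue(ep, req, GFP_ATOMIC);
  }

  for (i = 0; i < num_buffers; i++) {
      struct usb_request *req = usb_ep_alloc_request(bulk_in, GFP_KERNEL);

      req->buf      = buffers[i];
      req->length   = buflen;
      req->complete = bulk_in_complete;
      usb_ep_queue(bulk_in, req, GFP_KERNEL);
  }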

Re: mass storage behaviour

2015-10-05 Thread Paul Jones
On 05 Oct 2015, at 20:29, Alan Stern wrote: > On Mon, 5 Oct 2015, Paul Jones wrote: > >> I’m investigating the (lack of) performance (around 150MB/s) of the USB3380 >> gadget in mass storage mode. >> Whilst tracing on a Linux 4.1 host I noticed that the Linux mass

Re: mass storage behaviour

2015-10-05 Thread Felipe Balbi
On Mon, Oct 05, 2015 at 07:30:05PM +0200, Paul Jones wrote: > I’m investigating the (lack of) performance (around 150MB/s) of the USB3380 > gadget in mass storage mode. Whilst tracing on a Linux 4.1 host I noticed > that the Linux mass storage driver is requesting 240 blocks, 16 blocks, 240 >

Re: mass storage behaviour

2015-10-05 Thread Paul Jones
On 05 Oct 2015, at 20:08, Felipe Balbi wrote: > On Mon, Oct 05, 2015 at 07:30:05PM +0200, Paul Jones wrote: >> I’m investigating the (lack of) performance (around 150MB/s) of the USB3380 >> gadget in mass storage mode. Whilst tracing on a Linux 4.1 host I noticed >> that the

Re: mass storage behaviour

2015-10-05 Thread Alan Stern
On Mon, 5 Oct 2015, Paul Jones wrote: > > Increasing the max_sectors_kb value, on the other hand, might remove > > overhead by allowing a higher percentage of the transfer to consist of > > real data as opposed to CBW and CSW packets. This depends to some > > extent on other factors (such as
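
Worked numbers for that byte overhead, taking the Bulk-Only Transport framing of a 31-byte CBW plus a 13-byte CSW per SCSI command: at the default 240-sector (120 KiB) command size the framing is already only about 0.04% of the bytes moved, so whatever larger commands buy back comes mainly from fewer per-command turnarounds rather than from the framing bytes themselves.

  #include <stdio.h>

  int main(void)
  {
      /* Bulk-Only Transport framing per command: 31-byte CBW + 13-byte CSW. */
      const double framing = 31.0 + 13.0;
      const double sizes_kib[] = { 120.0, 2048.0 };  /* 240 sectors vs. 2048 KiB */

      for (int i = 0; i < 2; i++) {
          double data = sizes_kib[i] * 1024.0;
          printf("%7.0f KiB per command: framing is %.4f%% of the bytes\n",
                 sizes_kib[i], 100.0 * framing / (data + framing));
      }
      return 0;
  }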

Re: mass storage behaviour

2015-10-05 Thread Alan Stern
On Mon, 5 Oct 2015, Paul Jones wrote: > >> Any ideas why the driver is requesting varying block sizes? > > > > The usb-storage driver requests what the block layer tells it to > > request. > I’m running a dd with a block size of 64k, so it seems to be > aggregating 2 requests and then splits
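
The arithmetic behind that observation, assuming the 240-sector limit visible in the request sizes is the host's usb-storage max_sectors: two 64 KiB dd blocks merge into one 128 KiB (256-sector) request in the block layer, and the SCSI layer then cuts it at 240 sectors, leaving a 16-sector remainder, which is exactly the alternating 240/16 pattern from the first message.

  #include <stdio.h>

  int main(void)
  {
      const unsigned dd_bs       = 64 * 1024;    /* dd block size */
      const unsigned merged      = 2 * dd_bs;    /* two adjacent blocks merged */
      const unsigned sector      = 512;
      const unsigned max_sectors = 240;          /* limit visible in the trace */
      unsigned sectors = merged / sector;        /* 256 */

      printf("merged request: %u sectors -> split into %u + %u\n",
             sectors, max_sectors, sectors - max_sectors);
      return 0;
  }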

Re: mass storage behaviour

2015-10-05 Thread Paul Jones
On 05 Oct 2015, at 20:54, Alan Stern wrote: > On Mon, 5 Oct 2015, Paul Jones wrote: > >>> g_mass_storage, by default, uses 2 struct usb_request, try increasing that >>> to 4 >>> (can be done from make menuconfig itself) and see if anything changes. >> If you are

Re: mass storage behaviour

2015-10-05 Thread Paul Zimmerman
On Mon, 5 Oct 2015, Alan Stern wrote: > On Mon, 5 Oct 2015, Paul Jones wrote: > >>> Increasing the max_sectors_kb value, on the other hand, might remove >>> overhead by allowing a higher percentage of the transfer to consist of >>> real data as opposed to CBW and CSW packets. This depends to some

Re: mass storage behaviour

2015-10-05 Thread John Youn
Hi Paul, Good to see you're still hanging around. On 10/5/2015 3:38 PM, Paul Zimmerman wrote: > On Mon, 5 Oct 2015, Alan Stern wrote: >> On Mon, 5 Oct 2015, Paul Jones wrote: >> Increasing the max_sectors_kb value, on the other hand, might remove overhead by allowing a higher