On 17 Oct 2015, at 12:30, Alan Stern wrote:
> Paul, can you tell your email client to wrap lines after 72 columns or
> so? It would make replying a lot easier…
Mac Mail is not very friendly in this respect, but I can take care not
to write long lines :)
> ...
Paul, can you tell your email client to wrap lines after 72 columns or
so? It would make replying a lot easier...
On Fri, 16 Oct 2015, Paul Jones wrote:
> >> The slow queuing of transfers seems to be due to waiting in a spin lock on
> >> ep->dev->lock (4+ us).
> >> I'm guessing this is caused
On Fri, 16 Oct 2015, Paul Jones wrote:
> Added some debugging statements in f_mass_storage/net2280 to get an idea of
> what is going on on the wire (as I unfortunately don’t have any tools to
> figure it out any other way).
> Additional debug statement locations:
...
> Log for a single large
Hi,
Paul Jones writes:
> First DMA to first IRQ = 116us.
>
> Second DMA to second IRQ = 107us.
>
> These seem fairly stable as the last request in a 500+MB transfer had
> exactly the same timings.
>
> Each IRQ is taking around 14us to handle, during which most of the
> time
On 06 Oct 2015, at 13:26, Paul Zimmerman wrote:
> On Tue, Oct 6, 2015 at 10:01 AM, Alan Stern wrote:
>> On Tue, 6 Oct 2015, Felipe Balbi wrote:
>>
> In my experience, you need to do at least the following to get max
> performance from the
On Sat, 10 Oct 2015, Paul Jones wrote:
> >> Why is Windows so much faster? Or to put it another way, why is Linux
> >> slow? How can we improve things?
> >
> > I don't know. We were doing our performance demos using Windows, so we
> > never looked into why Linux was slower. But I do know the
John Youn writes:
> Hi Paul,
>
> Good to see you're still hanging around.
>
On Tue, 6 Oct 2015, Felipe Balbi wrote:
> >> In my experience, you need to do at least the following to get max
> >> performance from the mass storage gadget:
> >>
> >> - Use Windows 8 or higher on the host. It's much faster than Linux.
Why is Windows so much faster? Or to put it another way,
On Tue, 6 Oct 2015, Paul Jones wrote:
> I changed /sys/block//device/max_sectors to 4096 and
> /sys/block//queue/max_sectors_kb to 2048.
> That improves matters slightly, from 140MB/s to 160MB/s.
>
> Using Paul Zimmerman’s suggestion I can increase that to 174MB/s using a 160k
> buffer.
>
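The sysfs tuning above can be sketched as follows; note this is a hedged sketch, not quoted from the thread. The original message elides the device name (the double slash in the paths), so `sdX` below is a placeholder for whatever disk name the gadget enumerates as on the host:

```shell
#!/bin/sh
# Hedged sketch of the sysfs tuning described above; "sdX" is a
# placeholder for the gadget's disk name, elided in the original.
DEV=sdX

# max_sectors counts 512-byte sectors and max_sectors_kb counts KiB,
# so 4096 sectors and 2048 KiB describe the same 2 MiB request cap:
# 4096 * 512 = 2097152 bytes = 2048 KiB.
MAX_SECTORS=4096
MAX_SECTORS_KB=$((MAX_SECTORS * 512 / 1024))   # = 2048

# Only write when the attributes exist and are writable.
[ -w "/sys/block/$DEV/device/max_sectors" ] &&
    echo "$MAX_SECTORS" > "/sys/block/$DEV/device/max_sectors"
[ -w "/sys/block/$DEV/queue/max_sectors_kb" ] &&
    echo "$MAX_SECTORS_KB" > "/sys/block/$DEV/queue/max_sectors_kb"
echo "$MAX_SECTORS sectors = $MAX_SECTORS_KB KiB"
```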
I’m investigating the (lack of) performance (around 150MB/s) of the USB3380
gadget in mass storage mode.
Whilst tracing on a Linux 4.1 host I noticed that the Linux mass storage driver
is requesting 240 blocks, 16 blocks, 240 blocks, 16 blocks, etc. when doing a
dd directly on the device:
On Mon, 5 Oct 2015, Paul Jones wrote:
> > g_mass_storage, by default, uses 2 struct usb_request, try increasing that
> > to 4
> > (can be done from make menuconfig itself) and see if anything changes.
> If you are talking about the "number of storage pipeline buffers" I already
> have them at
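For context, the "number of storage pipeline buffers" Felipe and Paul refer to is a build-time menuconfig setting; assuming the usual Kconfig symbol name (an assumption, not quoted from this thread), checking and raising it would look like:

```shell
# Hedged sketch: "Number of storage pipeline buffers" is a Kconfig
# prompt; the symbol name below is assumed, so verify it in your tree.
grep USB_GADGET_STORAGE_NUM_BUFFERS .config
# then set it and rebuild the gadget, e.g.:
# CONFIG_USB_GADGET_STORAGE_NUM_BUFFERS=4
```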
On Mon, 5 Oct 2015, Paul Jones wrote:
> > Increasing the max_sectors_kb value, on the other hand, might remove
> > overhead by allowing a higher percentage of the transfer to consist of
> > real data as opposed to CBW and CSW packets. This depends to some
> > extent on other factors (such as
On Mon, 5 Oct 2015, Paul Jones wrote:
> >> Any ideas why the driver is requesting varying block sizes?
> >
> > The usb-storage driver requests what the block layer tells it to
> > request.
> I’m running a dd with a block size of 64k, so it seems to be
> aggregating 2 requests and then splits
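The 240/16 pattern is consistent with this: assuming 512-byte logical blocks (an assumption, not stated in the thread), two merged 64 KiB dd requests total exactly 256 blocks, which the block layer then splits as 240 + 16. A quick sanity check of the arithmetic:

```shell
# Assuming 512-byte logical blocks: two merged 64 KiB dd requests
# total 128 KiB = 256 blocks, matching the observed 240 + 16 split.
echo $((240 * 512))          # 122880 bytes = 120 KiB
echo $((16 * 512))           # 8192 bytes  = 8 KiB
echo $(((240 + 16) * 512))   # 131072 bytes = 128 KiB = 2 * 64 KiB
```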