Mike Christie wrote:
>> - the kfifo_put call is safe since there's one consumer (xmit flow under
>>lock for passthrough, and xmit worker for non passthrough) and one
>>producer (response flow has single tasklet instance).
> I do not know what you mean by there being one consumer. There
On 02/03/2010 03:09 AM, Or Gerlitz wrote:
[PATCH RFC] remove some of the locking in iser scsi command response flow
Currently the iser recv completion flow takes the session lock twice. Optimize it to avoid the first acquisition by letting iser_task_rdma_finalize() be called only from the cleanup_task call
On 02/03/2010 01:50 AM, Erez Zilber wrote:
It looks like I posted it at Red Hat and never got a response, and I
probably then forgot about it and never asked upstream. Will send mail
upstream now.
Which list are you sending it to? I thought it was lkml, but didn't
find any discussion there.
On 02/02/2010 09:57 PM, Jack Z wrote:
Hi all,
In the source code of ver 2.0.871 there's a struct
struct iscsi_segment {
	unsigned char *data;
	unsigned int size;
	unsigned int copied;
	unsigned int total_size;
	unsign
On Wed, Feb 3, 2010 at 11:30 AM, Mike Christie wrote:
> On 02/03/2010 01:50 AM, Erez Zilber wrote:
>>>
>>> It looks like I posted it at Red Hat and never got a response, and I
>>> probably then forgot about it and never asked upstream. Will send mail
>>> upstream now.
>>
>> Which list are you sending it to?
The following patch set removes some inefficiencies in the iser data path
through simplification and reducing the amount of code, using fewer atomic
operations, avoiding TX interrupts, moving to iscsi passthrough mode,
etc. I did my best to build it as a sequence of patches and not as one
big re-write.
Mike Christie wrote:
> Doh forgot about the original issue. Nice idea and patch. Looks ok to me.
let's see... I was kind of under the impression that the --original-- or major
issue here is the session lock being held for too much time, introducing too
much contention both between the co
Or Gerlitz wrote:
> I can't go over this max when applying my patch of lockless flow for
> queuecommand / passthrough
this is the patch I was using until today, when I started to suspect it yields
no, or very little, IOPS gain over the rest of the patches.
---
drivers/infiniband/ulp/iser/i
Mike Christie wrote:
> So in the end the only lock we would have in the io path is the per task one
and what were you thinking the patch buys us? Less CPU usage? More IOPS? Did
you have some proof-of-concept testbed that demonstrated this with the patch?
Or.
On Wed, Feb 3, 2010 at 4:30 PM, Or Gerlitz wrote:
> The following patch set removes some inefficiencies in the iser data path
> through simplification and reducing the amount of code, using fewer atomic
> operations, avoiding TX interrupts, moving to iscsi passthrough mode,
> etc. I did my best t
On 02/03/2010 10:03 AM, Or Gerlitz wrote:
Mike Christie wrote:
Doh forgot about the original issue. Nice idea and patch. Looks ok to me.
let's see... I was kind of under the impression that the --original-- or major
issue here
It is. Your patch was an incremental change that removed the extra
On 02/03/2010 06:07 AM, Erez Zilber wrote:
On Wed, Feb 3, 2010 at 11:30 AM, Mike Christie wrote:
On 02/03/2010 01:50 AM, Erez Zilber wrote:
It looks like I posted it at Red Hat and never got a response, and I
probably then forgot about it and never asked upstream. Will send mail
upstream now.
Hi Mike,
Thank you for being so helpful!
>
> What are you working on btw?
I observed a throughput degradation of open-iscsi over long RTT links.
I'm trying to understand the nature of this performance degradation
and possibly come up with some solution.
Thanks again,
Jack