Hi Rob,

Sorry for the late reply.

On Tue, Oct 12, 2010 at 6:08 AM, Robert Love <[email protected]> wrote:
> On Fri, 2010-10-08 at 22:57 +0800, Hillf Danton wrote:
>
> Hi Hillf,
>
> Can you please prefix the libfc patches with "libfc: <title>"? Also, for

Yes, I can.

> your other patches that have a "SCSI" prefix, I think James' system adds
> the "[SCSI]" prefix when he applies patches so you don't need to add it.

I will drop the SCSI prefix.

>
>> The Fibre Channel exchange pool is set up per cpu, with the entire
>> exchange id range divided equally across all cpus.
>>
>> When a cpu goes offline, the corresponding range of exchange ids
>> becomes unavailable until it comes online again.
>>
>> This work tries to fix that unavailability using a notifier_block.
>>
>> Signed-off-by: Hillf Danton <[email protected]>
>> ---
>>
>> --- o/linux-2.6.36-rc4/drivers/scsi/libfc/fc_exch.c   2010-09-13 07:07:38.000000000 +0800
>> +++ m/linux-2.6.36-rc4/drivers/scsi/libfc/fc_exch.c   2010-10-08 22:24:48.000000000 +0800
>
> Most patches I've seen on the various mailing lists are created by
> diffing the files within the kernel directory, so that you don't have
> the "/linux-2.6.36-rc/" in each hunk. It's the difference between using
> 'patch -p1' or 'patch -p2' when applying the patch.

I wanted to make the kernel version clear to patch reviewers.

>
> I think this is preventing git-am from applying the patches. To import
> them I ended up having git-am fail and then manually using the 'patch
> -p2' command, so that I could get both the commit message from git-am
> and then the patch itself from 'patch'. I don't know about anyone else,
> but if you post patches without the kernel directory in the diff line it
> would help me out. Thanks.

Sorry for the trouble.
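
Going forward I will create the diff from inside the tree so it applies
with 'patch -p1' and imports directly with git-am, and mention the kernel
version in the changelog text instead. Something like this, for example
(just one way to do it; the a/ and b/ directory names are placeholders):

    cp -a linux-2.6.36-rc4 a
    cp -a linux-2.6.36-rc4 b
    vi b/drivers/scsi/libfc/fc_exch.c
    diff -up a/drivers/scsi/libfc/fc_exch.c \
             b/drivers/scsi/libfc/fc_exch.c > fc_exch.patch

From a git tree, 'git diff' or 'git format-patch' produce the a/ and b/
prefixes by default, so the result applies with -p1 as well.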

>
>> @@ -26,6 +26,9 @@
>>  #include <linux/timer.h>
>>  #include <linux/slab.h>
>>  #include <linux/err.h>
>> +#include <linux/notifier.h>
>> +#include <linux/cpu.h>
>> +#include <linux/cpumask.h>
>>
>>  #include <scsi/fc/fc_fc2.h>
>>
>> @@ -40,6 +43,8 @@ static u16  fc_cpu_order;   /* 2's power to
>>  static struct kmem_cache *fc_em_cachep;             /* cache for exchanges */
>>  struct workqueue_struct *fc_exch_workqueue;
>>
>> +static cpumask_t fc_offline_cpu_mask = CPU_MASK_NONE;
>> +
>>  /*
>>   * Structure and function definitions for managing Fibre Channel Exchanges
>>   * and Sequences.
>> @@ -666,6 +671,7 @@ static struct fc_exch *fc_exch_em_alloc(
>>       unsigned int cpu;
>>       u16 index;
>>       struct fc_exch_pool *pool;
>> +     unsigned int index = -1;
>
> This patch doesn't compile:

This is my fault; I added that declaration when preparing the patch based
on the for_each_cpu() comment in cpumask.h. Sorry about that.

/**
 * for_each_cpu - iterate over every cpu in a mask
 * @cpu: the (optionally unsigned) integer iterator
 * @mask: the cpumask pointer
 *
 * After the loop, cpu is >= nr_cpu_ids.
 */
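
For v2 I plan to keep the existing 'u16 index' for the pool slot and use a
separate plain int as the for_each_cpu() iterator, so the conflicting
declaration goes away. Roughly like this (only a sketch; 'off_cpu' is a
placeholder name, not from the posted patch):

	unsigned int cpu;
	u16 index;
	int off_cpu;	/* separate iterator for for_each_cpu() over
			 * fc_offline_cpu_mask, instead of redeclaring
			 * 'index' as unsigned int */

	for_each_cpu(off_cpu, &fc_offline_cpu_mask) {
		/* try this offline cpu's exchange pool as well */
	}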

>
> drivers/scsi/libfc/fc_exch.c: In function ‘fc_exch_em_alloc’:
> drivers/scsi/libfc/fc_exch.c:674: error: conflicting types for ‘index’
> drivers/scsi/libfc/fc_exch.c:672: note: previous declaration of ‘index’
> was here
> make[3]: *** [drivers/scsi/libfc/fc_exch.o] Error 1
>
> In general I think the idea is fine. I haven't run into any
> circumstances where the per-CPU pools are exhausted. I know others have

The exch pool design is an outstandingly good model for applications on
multi-cpu systems; I will keep it in mind in my own work.

> when using NPIV, but there were other ways to fix those issues.
>
>>
>>       /* allocate memory for exchange */
>>       ep = mempool_alloc(mp->ep_pool, GFP_ATOMIC);
>> @@ -679,6 +685,7 @@ static struct fc_exch *fc_exch_em_alloc(
>>       pool = per_cpu_ptr(mp->pool, cpu);
>>       spin_lock_bh(&pool->lock);
>>       put_cpu();
>> +again:
>>       index = pool->next_index;
>>       /* allocate new exch from pool */
>>       while (fc_exch_ptr_get(pool, index)) {
>> @@ -719,6 +726,17 @@ out:
>>  err:
>>       spin_unlock_bh(&pool->lock);
>>       atomic_inc(&mp->stats.no_free_exch_xid);
>
> I think this should be moved below the offline pool check, when we've

Noted.

> really failed to allocate an exchange. It's not a per-CPU pool stat,
> it's an overall resource allocation failure stat, which you're now
> helping to happen less frequently. With the current placement we'll be
> bumping up the counter even though we have allocated an exchange.
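
Right, in v2 I will move the counter so it is only bumped once every pool,
including those of offline cpus, has been tried. Roughly like this (a
sketch only; 'tried_offline' and the 'retry' label are placeholders for
whatever the final patch ends up using):

	err:
		spin_unlock_bh(&pool->lock);
		if (!cpumask_empty(&fc_offline_cpu_mask) && !tried_offline)
			goto retry;	/* try an offline cpu's pool first */
		/* a real allocation failure: every pool has been tried */
		atomic_inc(&mp->stats.no_free_exch_xid);
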
>
> Thanks, //Rob
>
>

Thank you for sharing so much.

//Hillf
_______________________________________________
devel mailing list
[email protected]
http://www.open-fcoe.org/mailman/listinfo/devel
