>>>> There is no special meaning to the pool size. If a flutter of more than
>>>> 25 events occurs, the sas event notify functions will return an error,
>>>> and the further handling depends on the LLDD drivers.
>>>> I hope libsas could do more in this case, but for now that seems a
>>>> little difficult; this patch may be an interim fix until we find a
>>>> better solution.
>>>
>>> The principle of having a fixed-size pool is fine, even though the pool
>>> size needs more consideration.
>>>
>>> However, my issue is how to handle pool exhaustion. For a start, relaying
>>> to the LLDD that the event notification failed is probably not the way
>>> to go. I only now noticed that "scsi: sas: scsi_queue_work can fail, so
>>> make callers aware" made it into the kernel; as I mentioned in response
>>> to that patch, the LLDD does not know how to handle this (and no LLDDs
>>> actually do).
>>>
>>> I would say it is better to shut down the PHY from libsas (as Dan
>>> mentioned in the v1 series) when the pool is exhausted, under the
>>> assumption that the PHY has gone into some erroneous state. The user can
>>> later re-enable the PHY from sysfs, if required.
>>
>> I considered this suggestion, and two things worry me. First, if we
>> disable the phy once the sas event pool is exhausted, it may hurt the
>> processing of sas events that have already been queued,
> 
> I don't see how it affects currently queued events - they should just be
> processed normally. As for events the LLDD reports while the pool is
> exhausted, they are simply lost.

So if we disable a phy, it does not affect the processing of already queued
sas events, including those that access the phy to find target devices?

> 
>> Second, if the phy is disabled and no one triggers a re-enable via sysfs,
>> the LLDD has no way to post new sas phy events.
> 
> For the extreme scenario of the pool becoming exhausted and the PHY being
> disabled, it should remain disabled until the user takes some action to fix
> the originating problem.

So we should print an explicit message to tell the user what happened and
how to fix it.

Thanks!
Yijing.

>>>
>>> Much appreciated,
>>> John