On Thu, Oct 30, 2008 at 10:26 PM, Sumit Gupta <[EMAIL PROTECTED]> wrote:
> Cyril Plisko wrote:
>>
>> On Thu, Oct 30, 2008 at 8:19 PM, Sumit Gupta <[EMAIL PROTECTED]> wrote:
>>
>>>
>>> Cyril Plisko wrote:
>>>
>>>>
>>>> On Thu, Oct 30, 2008 at 7:15 PM, Sumit Gupta <[EMAIL PROTECTED]>
>>>> wrote:
>>>>
>>>>
>>>>>
>>>>> On Oct 30, 2008, at 9:25 AM, Cyril Plisko wrote:
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>> On Thu, Oct 30, 2008 at 6:01 PM, Sumit Gupta <[EMAIL PROTECTED]>
>>>>>> wrote:
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> Hi Cyril,
>>>>>>>
>>>>>>> The LU entry points are called from kernel threads (called worker
>>>>>>> threads inside the framework). The thread pool is grown dynamically
>>>>>>> from 4 to 256 (a soft limit) by stmf, based on the load, and tasks
>>>>>>> are spread round robin among all the workers. Regarding sleeping,
>>>>>>> the framework is designed to allow both synchronous and asynchronous
>>>>>>> operation, so you can choose whichever model suits your application
>>>>>>> best. As a guideline, if you have to sleep for several milliseconds,
>>>>>>> you should not block the worker thread; use the lu_poll entry point
>>>>>>> instead (by calling stmf_task_poll_lu()). But if the sleep merely
>>>>>>> implements a synchronous operation and your backend is reasonably
>>>>>>> fast, it's OK to block the worker.
>>>>>>>
>>>>>>>
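A rough sketch of how I read that guideline, for the archives: an
lu_new_task entry point that either finishes the command inline or asks
for a later lu_task_poll callback via stmf_task_poll_lu(). The my_*
names are made up, and the exact headers, signatures and timeout units
should be checked against stmf.h/lpif.h.

#include <sys/scsi/scsi.h>
#include <sys/lpif.h>
#include <sys/stmf.h>

/* Hypothetical backend helpers; a real LU provider defines its own. */
static boolean_t my_backend_would_block(scsi_task_t *task);
static void my_do_fast_io(scsi_task_t *task, stmf_data_buf_t *dbuf);

static void
my_lu_new_task(scsi_task_t *task, stmf_data_buf_t *initial_dbuf)
{
        if (my_backend_would_block(task)) {
                /*
                 * The backend would keep us waiting for several
                 * milliseconds, so don't tie up the stmf worker thread;
                 * ask the framework to call our lu_task_poll entry
                 * point again later instead.
                 */
                stmf_task_poll_lu(task, 10);    /* timeout units per stmf.h */
                return;
        }

        /* Short synchronous operation: blocking the worker briefly is fine. */
        my_do_fast_io(task, initial_dbuf);
        stmf_scsilib_send_status(task, STATUS_GOOD, 0);
}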
>>>>>>
>>>>>> Sumit,
>>>>>>
>>>>>> Thanks for the explanation. As for the duration of the sleep - I was
>>>>>> wondering about calling kmem_alloc() with KM_SLEEP vs. KM_NOSLEEP
>>>>>> inside an LU entry point. I have no idea how long that call might
>>>>>> sleep.
>>>>>>
>>>>>>
>>>>>
>>>>> It is OK to use KM_SLEEP inside LU entry points: in most cases memory
>>>>> is available and the call won't sleep at all. If you are running out
>>>>> of memory, your system's performance is not going to be good anyway -
>>>>> which is also the effect of a worker sleeping for a long time.
>>>>>
>>>>>
>>>>>
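So inside an entry point an allocation can be as simple as the sketch
below, with no failure path at all (my_* names are made up):

#include <sys/types.h>
#include <sys/kmem.h>

/*
 * With KM_SLEEP the allocation cannot fail, so there is no NULL check;
 * the call only blocks when the system is already short on memory.
 */
static void *
my_alloc_xfer_buf(size_t len)
{
        return (kmem_alloc(len, KM_SLEEP));
}

/* kmem_free() wants the original length back. */
static void
my_free_xfer_buf(void *buf, size_t len)
{
        kmem_free(buf, len);
}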
>>>>
>>>> I see, thanks.
>>>>
>>>>
>>>
>>> One more thing I forgot to mention: you should use the lu_task_alloc
>>> entry point for most of your per-task allocations. The benefit is that
>>> those allocations are cached by the framework, so you won't have to
>>> repeat them for subsequent tasks. The cache is drained automatically
>>> (slowly) by stmf, which calls the lu_task_free entry point after a few
>>> seconds of inactivity.
>>>
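If I follow, that means something along these lines: a per-task
structure hung off task_lu_private in lu_task_alloc (as sbd does) and
released in lu_task_free when stmf drains the cache. Rough sketch only;
my_task_priv_t is made up and the declarations should be checked
against lpif.h/stmf.h.

#include <sys/kmem.h>
#include <sys/lpif.h>
#include <sys/stmf.h>

/* Hypothetical per-task state for an LU provider. */
typedef struct my_task_priv {
        uint32_t        mtp_flags;
} my_task_priv_t;

/*
 * lu_task_alloc: called by stmf when it sets up a task; the allocation
 * stays cached with the task and is reused for subsequent commands.
 */
static stmf_status_t
my_lu_task_alloc(scsi_task_t *task)
{
        task->task_lu_private = kmem_alloc(sizeof (my_task_priv_t), KM_SLEEP);
        return (STMF_SUCCESS);
}

/*
 * lu_task_free: called by stmf when it slowly drains the task cache
 * after a few seconds of inactivity.
 */
static void
my_lu_task_free(scsi_task_t *task)
{
        kmem_free(task->task_lu_private, sizeof (my_task_priv_t));
}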
>>
>> Right, that's what I understood.
>> BTW, in sbd_scsi.c:sbd_task_alloc() kmem_alloc() is called with the
>> KM_NOSLEEP flag. Any reason to avoid KM_SLEEP there? In the latter case
>> there wouldn't be any allocation errors to take care of.
>>
>
> The lu_task_alloc entry point in particular is designed to handle
> allocation failures, so using KM_NOSLEEP there simply takes advantage of
> that. It's not a requirement, just an implementation choice. The
> framework works either way, but the behavior may differ depending on
> which KM_ flag you use (assuming you really are running out of kernel
> memory).
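Right - so a KM_NOSLEEP variant simply reports the shortage back to the
framework instead of sleeping, roughly like the sketch below (my
approximation of what sbd_task_alloc() appears to do, reusing the
my_task_priv_t from the earlier sketch):

static stmf_status_t
my_lu_task_alloc_nosleep(scsi_task_t *task)
{
        task->task_lu_private = kmem_alloc(sizeof (my_task_priv_t),
            KM_NOSLEEP);
        if (task->task_lu_private == NULL) {
                /* Let the framework cope with the shortage. */
                return (STMF_ALLOC_FAILURE);
        }
        return (STMF_SUCCESS);
}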


Sumit,

Thanks for your time.


-- 
Regards,
        Cyril
_______________________________________________
storage-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/storage-discuss
