Hi Jungtaek,

Awesome information, thanks! I will proceed accordingly.

--John

On Sun, Mar 27, 2016 at 12:16 AM, Jungtaek Lim <[email protected]> wrote:

> Hi John,
>
> Your understanding is right, and I think it makes sense: since your bolt
> is CPU-intensive, you probably don't want cores spending their time
> enqueueing and dequeueing. Lowering parallelism could help in that case.
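>
> For reference, here is a minimal sketch of how the parallelism hint and
> worker count are set when wiring a topology. MySpout, MyCpuBolt, and the
> exact executor counts are placeholders, and I'm assuming the Storm 1.x
> package names:
>
>     import org.apache.storm.Config;
>     import org.apache.storm.topology.TopologyBuilder;
>
>     TopologyBuilder builder = new TopologyBuilder();
>     // parallelism hint = number of executors for this component
>     builder.setSpout("spout", new MySpout(), 2);
>     builder.setBolt("cpu-bolt", new MyCpuBolt(), 4)   // e.g. fewer executors than 10
>            .shuffleGrouping("spout");
>
>     Config conf = new Config();
>     conf.setNumWorkers(2);   // JVM worker processes for the topology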
>
> Btw, experimenting yourself is the best way to find out. Just give it a
> try and watch the usage of each core.
>
> Thanks,
> Jungtaek Lim (HeartSaVioR)
>
> On Sat, Mar 26, 2016 at 8:34 PM, John Yost <[email protected]> wrote:
>
>> Hi Everyone,
>>
>> I have a very CPU-intensive bolt that requires a high (at least I think
>> it's high) number of executors per worker--in my case, 10. I am finding
>> that the CPU-intensive bolt executor threads spend about 40% of their
>> time in the LMAX Disruptor messaging layer.
>>
>> My understanding of Storm executor internals is that each executor has
>> two threads handling LMAX Disruptor messages--one for the incoming queue
>> and one for the outgoing queue--so I suspect I am getting a lot of
>> context switching due to the large number of executors overall in my
>> workers (15 between bolts and spouts). Consequently, I am thinking of
>> multithreading the CPU-intensive bolt itself so that I keep the same
>> number of threads processing the data while reducing the context
>> switching I believe is occurring between all of the CPU-intensive bolt
>> executor threads (see the sketch below).
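>>
>> To make that concrete, here is a rough sketch of what I have in mind: a
>> single bolt executor that fans tuples out to an internal thread pool.
>> MultiThreadedCpuBolt and heavyProcess() are placeholders, and I'm
>> assuming the OutputCollector needs external synchronization since it may
>> not be safe to call from my own worker threads (Storm 1.x-style API):
>>
>>     import java.util.Map;
>>     import java.util.concurrent.ExecutorService;
>>     import java.util.concurrent.Executors;
>>
>>     import org.apache.storm.task.OutputCollector;
>>     import org.apache.storm.task.TopologyContext;
>>     import org.apache.storm.topology.OutputFieldsDeclarer;
>>     import org.apache.storm.topology.base.BaseRichBolt;
>>     import org.apache.storm.tuple.Fields;
>>     import org.apache.storm.tuple.Tuple;
>>     import org.apache.storm.tuple.Values;
>>
>>     public class MultiThreadedCpuBolt extends BaseRichBolt {
>>         private transient ExecutorService pool;
>>         private transient OutputCollector collector;
>>
>>         @Override
>>         public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
>>             this.collector = collector;
>>             this.pool = Executors.newFixedThreadPool(10);  // same total thread count as before
>>         }
>>
>>         @Override
>>         public void execute(Tuple input) {
>>             // hand the tuple to a worker thread; execute() returns immediately
>>             pool.submit(() -> {
>>                 Object result = heavyProcess(input);       // the CPU-intensive work
>>                 synchronized (collector) {                 // guard emit/ack across threads
>>                     collector.emit(input, new Values(result));
>>                     collector.ack(input);
>>                 }
>>             });
>>         }
>>
>>         private Object heavyProcess(Tuple input) {
>>             return input.getValue(0);                      // placeholder for the real processing
>>         }
>>
>>         @Override
>>         public void cleanup() {
>>             pool.shutdown();
>>         }
>>
>>         @Override
>>         public void declareOutputFields(OutputFieldsDeclarer declarer) {
>>             declarer.declare(new Fields("result"));
>>         }
>>     }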
>>
>> Question--does this make sense? Any thoughts from people who understand
>> Storm better than I do are always welcome. :)
>>
>> Thanks
>>
>> --John
>>
>
