Thanks

On Tue, Sep 9, 2014 at 2:00 PM, 潘臻轩 <[email protected]> wrote:

> We use Ganglia to monitor ZK. The 40 machines are just a test value; the
> workers were maybe around 200, with about 4k executors.
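>
> Roughly, the check looks like this; a minimal sketch, assuming ZooKeeper's
> standard "mntr" four-letter command is enabled, and with a placeholder host
> and port:
>
> import socket
>
> def zk_mntr(host="zk1.example.com", port=2181):
>     # Send the "mntr" four-letter command and parse the
>     # tab-separated key/value lines ZooKeeper returns.
>     with socket.create_connection((host, port), timeout=5) as sock:
>         sock.sendall(b"mntr")
>         raw = sock.makefile().read()
>     return dict(line.split("\t", 1) for line in raw.splitlines() if "\t" in line)
>
> # Latency and request backlog are the first signs of a ZK bottleneck.
> stats = zk_mntr()
> print(stats["zk_avg_latency"], stats["zk_outstanding_requests"])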
>
> 2014-09-09 18:12 GMT+08:00 Vladi Feigin <[email protected]>:
>
>> How do you monitor ZK? How did you discover that ZK became a bottleneck
>> in a 40-machine cluster?
>> How many workers and executors do you have?
>> Thanks
>>
>> On Tue, Sep 9, 2014 at 12:48 PM, 潘臻轩 <[email protected]> wrote:
>>
>>> We have 5 ZK nodes, and I made some optimizations to Storm that reduce
>>> its use of ZK.
>>> Storm uses ZK to exchange heartbeats and assignments:
>>> heartbeats put write pressure on ZK,
>>> and assignments put read pressure on it.
>>> One ZK node is not enough; a ZK cluster should have at least 3 machines,
>>> so it can keep a quorum if one node fails.
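>>>
>>> If you want to look at this state directly, here is a rough sketch using
>>> the kazoo Python client; it assumes Storm's default ZK root of /storm,
>>> and the host is a placeholder:
>>>
>>> from kazoo.client import KazooClient
>>>
>>> zk = KazooClient(hosts="zk1.example.com:2181")  # placeholder ensemble address
>>> zk.start()
>>> # workerbeats holds the heartbeats (the write pressure);
>>> # assignments holds the schedules every worker reads (the read pressure).
>>> for node in ("workerbeats", "assignments", "supervisors", "storms"):
>>>     print(node, zk.get_children("/storm/" + node))
>>> zk.stop()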
>>>
>>> 2014-09-09 17:29 GMT+08:00 Spico Florin <[email protected]>:
>>>
>>>> Hello!
>>>>  How many ZK nodes are you using? If I add more ZK nodes, will the load
>>>> then be well balanced for the Storm cluster? What information is
>>>> exchanged via ZK, and how can I see it? I had a look at Exhibitor, but
>>>> it did not help me get this information.
>>>> I have a topology with approximately 900 executors and a single ZK node,
>>>> and Storm cannot cope with this configuration.
>>>> Regards,
>>>> Florin
>>>>
>>>> On Tue, Sep 9, 2014 at 12:11 PM, 潘臻轩 <[email protected]> wrote:
>>>>
>>>>> I have tested Storm with 40 machines and a job with 4k executors; ZK
>>>>> becomes the bottleneck.
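>>>>>
>>>>> As a rough back-of-envelope, assuming Storm's default
>>>>> task.heartbeat.frequency.secs of 3 and one heartbeat znode written per
>>>>> worker per interval (the worker count is my earlier estimate, not a
>>>>> measurement):
>>>>>
>>>>> workers = 200        # estimated, from the figures above
>>>>> executors = 4000     # executors in the test job
>>>>> hb_interval_s = 3    # Storm default task.heartbeat.frequency.secs
>>>>>
>>>>> # Each worker writes one heartbeat znode per interval, and its payload
>>>>> # carries stats for every executor in that worker.
>>>>> print(workers / hb_interval_s)   # ~67 ZK writes per second
>>>>> print(executors / workers)       # ~20 executors' stats per heartbeat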
>>>>>
>>>>> 2014-09-09 17:05 GMT+08:00 潘臻轩 <[email protected]>:
>>>>>
>>>>>> I maintain a Storm platform that has 700 machines.
>>>>>>
>>>>>> 2014-09-09 17:03 GMT+08:00 Kobi Salant <[email protected]>:
>>>>>>
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> Please see Nathan's answer.
>>>>>>> https://groups.google.com/forum/#!topic/storm-user/Ffscv10iF-g
>>>>>>>
>>>>>>> Kobi
>>>>>>>
>>>>>>> On Tue, Sep 9, 2014 at 7:18 AM, Vladi Feigin <[email protected]>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi All,
>>>>>>>>
>>>>>>>> What is the recommended cluster size, in terms of supervisors and
>>>>>>>> workers, that Nimbus can handle without management or performance
>>>>>>>> problems?
>>>>>>>> We already have a few hundred workers and a few dozen topologies in
>>>>>>>> the cluster.
>>>>>>>> We observe that Nimbus sometimes (not too rarely) gets stuck during
>>>>>>>> rebalancing, so we suspect we are at the limit.
>>>>>>>> Can you share how big your clusters are? When do you split a
>>>>>>>> cluster, if you do at all?
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Vladi Feigin
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
