Hi Maheshakya,

I will look into this.

With your settings, the expected layout is:
node1 -> Spark master (active) + worker
node2 -> Spark master (standby) + worker
node3 -> worker
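For reference, the master count is normally driven by a property in spark-defaults.conf; a minimal sketch of the relevant entry (the property name carbon.spark.master.count is assumed from the DAS defaults, so please verify it against the spark-defaults.conf shipped in your pack):

    # spark-defaults.conf: number of Spark masters to elect in the cluster
    # (property name assumed; confirm against your DAS distribution)
    carbon.spark.master.count  2

With this set to 2 on a 3-node cluster, the first two members to join should become masters (one active, one standby) and the remaining member should start as a worker only.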

Did you start the servers all at once, or one by one?

rgds

On Mon, Jul 27, 2015 at 11:07 AM, Anjana Fernando <[email protected]> wrote:

> Hi,
>
> Actually, when the 3rd server has started up, all 3 servers should have
> worker instances. This seems to be a bug. @Niranda, please check it out
> ASAP.
>
> Cheers,
> Anjana.
>
> On Mon, Jul 27, 2015 at 11:01 AM, Maheshakya Wijewardena <
> [email protected]> wrote:
>
>> Hi,
>>
>> I have tried to create a Spark cluster with DAS using Carbon clustering.
>> 3 DAS nodes are configured, and the number of Spark masters is set to 2.
>> With this setting, one of the 3 nodes should start as a Spark worker only,
>> but all 3 nodes are starting as Spark masters. What could be the reason
>> for this?
>>
>> Configuration files (one of *axis2.xml* files of DAS clusters and
>> *spark-defaults.conf*) of DAS are attached herewith.
>>
>> Best regards.
>> --
>> Pruthuvi Maheshakya Wijewardena
>> Software Engineer
>> WSO2 : http://wso2.com/
>> Email: [email protected]
>> Mobile: +94711228855
>>
>>
>>
>
>
> --
> *Anjana Fernando*
> Senior Technical Lead
> WSO2 Inc. | http://wso2.com
> lean . enterprise . middleware
>



-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
_______________________________________________
Dev mailing list
[email protected]
http://wso2.org/cgi-bin/mailman/listinfo/dev
