Hi François,

Yes, I looked at Mesos, which can be used for resource scheduling (i.e. one 
master and many executors), but we would like to run the master in 
active/passive mode, which I'm afraid Mesos will not provide.

Our data source is TCP sockets.

I also posted a query regarding high availability. Please share your view.
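Since the source is TCP sockets, Spark Streaming's `socketTextStream` receiver simply connects to a TCP server and consumes newline-delimited records. A minimal sketch of such a feeder, in plain Python (the host, port, and sample records are assumptions for illustration, not part of our actual setup):

```python
import socket
import threading

def serve_lines(lines, host="127.0.0.1", port=0):
    """Serve newline-terminated records over TCP to one client, then exit.

    This is the wire format Spark's socketTextStream receiver expects:
    plain UTF-8 text, one record per line.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))  # port=0 lets the OS pick a free port
    srv.listen(1)
    actual_port = srv.getsockname()[1]

    def _run():
        conn, _ = srv.accept()
        for line in lines:
            conn.sendall((line + "\n").encode("utf-8"))
        conn.close()
        srv.close()

    threading.Thread(target=_run, daemon=True).start()
    return actual_port

# On the Spark side one would then point the receiver at this server, e.g.:
#   ssc.socketTextStream("127.0.0.1", port)
```

Note that the built-in socket receiver runs on a single executor and does not replay data on failure, which is worth keeping in mind for the high-availability discussion.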

On Monday, April 20, 2015 at 7:32:36 PM UTC+5:30, François Garillot wrote:
>
> Hi Tomer,
>
> 1. Have you looked into Apache Mesos?
>
> 2. What is your streaming data source? Is it Kafka?
>
> Cheers,
> -- 
> FG
>
> On Sunday, April 19, 2015 at 11:04:15 AM UTC+2, tomerneeraj wrote:
>>
>> Hi, 
>>
>> We would like to use Spark without Hadoop. When Spark runs in a highly 
>> scalable, highly available mode, the YARN and HDFS APIs serve the purposes 
>> of resource scheduling and shared storage. Our data is stored on separate 
>> (non-shared) disks. A couple of queries regarding this: 
>>
>> 1. Can we replace YARN with Akka Cluster for resource scheduling (i.e. 
>> distributing work between master and worker nodes)? 
>>
>> 2. Is a shared file system necessary for Spark Streaming? Can the master 
>> and workers each use their own standalone disks, with resource scheduling 
>> done without sharing any disk between Spark nodes? 
>>
>> 3. What algorithm does the master node use to distribute traffic to worker 
>> nodes, and how does Spark Streaming scale? Is there any way an Akka cluster 
>> could help with this? 
>>
>> Regards 
>> Neeraj 
>>
>

--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.
