On Fri, Dec 4, 2015 at 11:24 AM, Anfernee Xu <anfernee...@gmail.com> wrote:

> If multiple users are looking at the same data set, then it's a good choice
> to share the SparkContext.
>
> But my use cases are different: users are looking at different data (I use a
> custom Hadoop InputFormat to load data from my data source based on the
> user's input), and the data might not have any overlap. For now I'm taking the
> approach below
>

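A minimal sketch of the kind of per-user load described above, assuming Spark's newAPIHadoopRDD API. The configuration key, input path, and the use of TextInputFormat as a stand-in for the poster's custom InputFormat are illustrative assumptions, not the actual code from the thread:

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.io.{LongWritable, Text}
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat
    import org.apache.spark.{SparkConf, SparkContext}

    object PerUserLoad {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("per-user-load"))

        // Pass the user's input to the InputFormat through the Hadoop Configuration.
        // "custom.user.query" is a made-up key; a real custom InputFormat would read
        // it in getSplits()/createRecordReader() to decide what data to produce.
        val hadoopConf = new Configuration(sc.hadoopConfiguration)
        hadoopConf.set("custom.user.query", args.headOption.getOrElse(""))
        hadoopConf.set("mapreduce.input.fileinputformat.inputdir", "/tmp/input")

        // TextInputFormat stands in here for the custom InputFormat from the thread.
        val rdd = sc.newAPIHadoopRDD(
          hadoopConf,
          classOf[TextInputFormat],
          classOf[LongWritable],
          classOf[Text])

        println(s"records: ${rdd.count()}")
        sc.stop()
      }
    }
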
Still, if you want fine-grained sharing of compute resources as well, you want
to use a single SparkContext.
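A minimal sketch of that setup, assuming one long-lived SparkContext with the FAIR scheduler and one fair-scheduler pool per user. The pool-per-user naming and the textFile job are illustrative assumptions only:

    import org.apache.spark.{SparkConf, SparkContext}

    object SharedContext {
      // One SparkContext shared by all user requests in the server process.
      val conf = new SparkConf()
        .setAppName("shared-context")
        .set("spark.scheduler.mode", "FAIR") // concurrent jobs share executors fairly
      val sc = new SparkContext(conf)

      // Run one user's job in its own fair-scheduler pool so concurrent users
      // share the executors of the single context instead of queueing FIFO.
      def runForUser(user: String, path: String): Long = {
        sc.setLocalProperty("spark.scheduler.pool", user) // pool name per user (illustrative)
        try {
          sc.textFile(path).count()
        } finally {
          sc.setLocalProperty("spark.scheduler.pool", null) // clear the thread-local pool
        }
      }
    }

Because spark.scheduler.pool is a thread-local property, each user request can run on its own thread and its jobs share the single context's resources rather than waiting behind other users' jobs.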
