I finally isolated the issue to the ActorSystem I reuse from
SparkEnv.get.actorSystem. This ActorSystem contains the configuration
defined in my application jar's reference.conf both in the local cluster
case and in the case where I use it directly in an extension of
BaseRelation's buildScan. The configuration is in the jar I passed in. If I
do not create my own RDD for partitioned loading, everything is fine, and in
that case the task still runs on an executor, right? So it seems some
special call path before my RDD's compute is triggered makes the
configuration 'lost'.
I will try to see if I can d
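One way to narrow this down (a minimal diagnostic sketch, not part of the original thread; ConfigVisibilityCheck is a hypothetical helper name) is to list every classpath location that provides a reference.conf resource, both on the driver and from inside a task, e.g. via sc.parallelize(1 to 1).map(_ => ConfigVisibilityCheck.referenceConfLocations()).collect(). If the executor side comes back empty while the driver side shows the entry from the application jar, the configuration really is missing from the executor classloader rather than being read and then overridden:

```scala
// Hypothetical diagnostic: enumerate every classpath location that
// provides a "reference.conf" resource, as seen by the current thread's
// context classloader. Typesafe Config (used by Akka and Spray) merges
// all such resources, so an empty result here means the ActorSystem
// cannot pick up the settings from the application jar.
object ConfigVisibilityCheck {
  def referenceConfLocations(): List[String] = {
    val urls = Thread.currentThread().getContextClassLoader
      .getResources("reference.conf")
    val found = scala.collection.mutable.ListBuffer.empty[String]
    while (urls.hasMoreElements) found += urls.nextElement().toString
    found.toList
  }

  def main(args: Array[String]): Unit = {
    val locations = referenceConfLocations()
    if (locations.isEmpty) println("no reference.conf on the classpath")
    else locations.foreach(println)
  }
}
```

In a bare JVM with no Akka/Spray jars on the classpath the list is typically empty; with the application jar present it should include a jar: URL pointing at that jar's reference.conf entry.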
This looks like a specific Spray configuration issue (or how Spray reads
config files). Maybe Spray is reading some local config file that doesn't
exist on your executors?
You might need to email the Spray list.
On Fri, Apr 24, 2015 at 2:38 PM, Yang Lei wrote:
> forward to dev.
>
> On Mon, Apr 20, 2015 at 10:46 AM, Yang Lei wrote:
forward to dev.
On Mon, Apr 20, 2015 at 10:46 AM, Yang Lei wrote:
> I implemented two kinds of DataSource, one load data during buildScan,
> the other returning my RDD class with partition information for future
> loading.
>
> My RDD's compute gets actorSystem from SparkEnv.get.actorSystem, the