Yeah, unfortunately that will be up to them to fix, though it wouldn't hurt to
send them a JIRA mentioning this.
Matei
> On Nov 25, 2014, at 2:58 PM, Corey Nolet wrote:
>
> I was wiring up my job in the shell while I was learning Spark/Scala. I'm
> getting more comfortable with them both now
I was wiring up my job in the shell while I was learning Spark/Scala. I'm
getting more comfortable with them both now, so I've been mostly testing
through IntelliJ with mock data as inputs.
I think the problem lies more with Hadoop than Spark, as the Job object seems
to check its state and throw an exception.
How are you creating the object in your Scala shell? Maybe you can write a
function that directly returns the RDD, without assigning the object to a
temporary variable.
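Something along these lines, for example (an untested sketch: MyInputFormat stands
in for your custom format, LongWritable/Text are placeholder key/value types, and
it assumes the Hadoop 2.x Job API):

    import org.apache.hadoop.io.{LongWritable, Text}
    import org.apache.hadoop.mapreduce.Job
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat

    def customRdd(path: String) = {
      // The Job only exists inside the function, so the shell never binds it
      // to a value and never calls toString() on it.
      val job = Job.getInstance(sc.hadoopConfiguration)
      FileInputFormat.setInputPaths(job, path)
      sc.newAPIHadoopRDD(job.getConfiguration,
        classOf[MyInputFormat], classOf[LongWritable], classOf[Text])
    }

    val rdd = customRdd("hdfs:///some/input")  // the RDD prints fine; the Job never escapes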
Matei
> On Nov 5, 2014, at 2:54 PM, Corey Nolet wrote:
>
> The closer I look at the stack trace in the Scala shell, it appears to be
The closer I look at the stack trace in the Scala shell, it appears to be
the call to toString() that is causing the construction of the Job object
to fail. Is there a way to suppress this output, since it appears to be
hindering my ability to new up this object?
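For reference, the failure mode looks roughly like this (assuming Hadoop 2.x,
where Job.toString() runs a state check that throws until the job has actually
been submitted):

    scala> import org.apache.hadoop.mapreduce.Job
    scala> val job = Job.getInstance(sc.hadoopConfiguration)
    java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING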
On Wed, Nov 5, 2014 at 5:49 PM, Corey Nolet wrote:
I'm trying to use a custom input format with SparkContext.newAPIHadoopRDD.
Creating the new RDD works fine, but setting up the configuration via the
static methods on input formats that require a Hadoop Job object is
proving to be difficult.
Trying to new up my own Job object with the SparkContext's Hadoop
configuration is where I'm running into trouble.
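Roughly the shape of what I'm attempting (a sketch with placeholders: MyInputFormat
stands in for the custom format, LongWritable/Text for its key/value types, and the
Hadoop 2.x Job API is assumed):

    import org.apache.hadoop.io.{LongWritable, Text}
    import org.apache.hadoop.mapreduce.Job
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat

    // Newing up the Job at the top level of the shell is the step that blows up,
    // because the shell tries to print it (see the toString() discussion above).
    val job = Job.getInstance(sc.hadoopConfiguration)

    // The kind of static, Job-taking configuration method in question.
    FileInputFormat.setInputPaths(job, "hdfs:///some/input")

    val rdd = sc.newAPIHadoopRDD(job.getConfiguration,
      classOf[MyInputFormat], classOf[LongWritable], classOf[Text])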