Hello David,

Let me make it clearer:


  *   There is no Spark installation on the Windows laptop, just IntelliJ and 
the related dependencies (a rough sbt sketch follows this list).
  *   SparkLauncher looks like a good starting point for submitting a job 
programmatically, but I am not sure whether my current problem is related to 
the job execution strategy.
  *   I am not even using spark-submit.
  *   I have one notebook running IntelliJ with the code below, plus one 
Ubuntu master and two Ubuntu slaves on my network.
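
For context, the "related dependencies" are just library entries in the project 
build, roughly something like this (versions matching the code below; the exact 
build file may differ slightly):

scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  // Spark SQL/streaming plus the Kinesis connector, same versions as in the code
  "org.apache.spark" %% "spark-sql"                   % "2.1.0",
  "org.apache.spark" %% "spark-streaming"             % "2.1.0",
  "org.apache.spark" %% "spark-streaming-kinesis-asl" % "2.1.0"
)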

Here is the code:

import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.kinesis.KinesisUtils
import org.apache.spark.streaming.{Milliseconds, Seconds, StreamingContext}

/**
 * Created by serkan on 23.04.2017.
 */
object KinesisStreamCluster {

  val spark: SparkSession = SparkSession.builder()
    .config("spark.jars.packages",
      "org.apache.spark:spark-streaming-kinesis-asl_2.11:2.1.0")
    .master("spark://xxx.xxx.xxx.xxx:7077")
    .getOrCreate()

  def main(args: Array[String]): Unit = {

    val ssc = new StreamingContext(spark.sparkContext, Seconds(20))

    val kinesisCheckpointInterval = Milliseconds(20000)

    // Receiver-based Kinesis stream: (ssc, Kinesis app name, stream name,
    // endpoint URL, region, initial position, checkpoint interval, storage level)
    val kinesisStream = KinesisUtils.createStream(ssc, "scala",
      "stream_name_for_kinesis",
      "kinesis.eu-west-1.amazonaws.com",
      "eu-west-1",
      InitialPositionInStream.LATEST, kinesisCheckpointInterval,
      StorageLevel.MEMORY_AND_DISK_2)

    kinesisStream.print()

    ssc.start()
    ssc.awaitTermination()
  }
}


and I am getting the following error while executing the code:



17/05/11 09:52:25 WARN TaskSetManager: Lost task 0.0 in stage 2.0 (TID 70,
10.240.65.189, executor 1): java.io.IOException:
java.lang.ClassNotFoundException:
org.apache.spark.streaming.kinesis.KinesisReceiver
at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1276)
at
org.apache.spark.rdd.ParallelCollectionPartition.readObject(ParallelCollectionRDD.scala:70)
at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1058)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2122)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
at
org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
at
org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:258)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)



I have modified spark-defaults.conf on the master and the slaves and inserted 
the definition below, but nothing changed.


spark.jars.packages    org.apache.spark:spark-streaming-kinesis-asl_2.11:2.1.0



Nothing changed!
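
For reference, the same dependency can also be handed to the executors as 
explicit jar files rather than a package coordinate, via the "spark.jars" 
setting; a rough sketch (the local jar paths below are placeholders):

import org.apache.spark.sql.SparkSession

// Sketch only: the paths are placeholders and must point at jars that exist
// on the driver machine. "spark.jars" ships the listed files to the executors
// when the application starts.
val spark: SparkSession = SparkSession.builder()
  .master("spark://xxx.xxx.xxx.xxx:7077")
  .config("spark.jars",
    "/path/to/spark-streaming-kinesis-asl_2.11-2.1.0.jar," +
    "/path/to/amazon-kinesis-client.jar")
  .getOrCreate()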

David Kaczynski <dkaczyn...@gmail.com> wrote (10 May 2017 15:29):

Do you have Spark installed locally on your laptop with IntelliJ?  Are you 
using the SparkLauncher class or your local spark-submit script?  A while back, 
I was trying to submit a spark job from my local workstation to a remote 
cluster using the SparkLauncher class, but I didn't actually have SPARK_HOME 
set or the spark-submit script on my local machine yet, so the submit was 
failing.  I think the error I was getting was that the SPARK_HOME environment 
variable was not set, though.
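
For illustration, the SparkLauncher route looks roughly like this (Spark home, 
master URL, jar path and class name are all placeholders, and it does require a 
local Spark installation to point SPARK_HOME at):

import org.apache.spark.launcher.SparkLauncher

object SubmitSketch {
  def main(args: Array[String]): Unit = {
    // All values below are placeholders for your own environment.
    val handle = new SparkLauncher()
      .setSparkHome("/opt/spark")                       // local Spark installation
      .setMaster("spark://xxx.xxx.xxx.xxx:7077")
      .setAppResource("/path/to/your-app-assembly.jar") // jar containing the job
      .setMainClass("KinesisStreamCluster")
      .setConf("spark.jars.packages",
        "org.apache.spark:spark-streaming-kinesis-asl_2.11:2.1.0")
      .startApplication()
    // handle.getState() can be polled to track how the submission is doing.
  }
}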

On Wed, May 10, 2017 at 5:51 AM s t <serkan_...@hotmail.com> wrote:
Hello,

I am trying to run Spark code from my laptop with IntelliJ. I have a cluster of 
two worker nodes and a master. When I start the program from IntelliJ, it fails 
with errors about some missing classes.

I am aware that some jars need to be distributed to the workers, but I do not 
know whether that is possible programmatically. spark-submit or a Jupyter 
notebook handles the issue, but IntelliJ does not.

Can anyone give me some advice?

