Greetings,
I modified the ActorWordCount example slightly so that it uses a simple case class as the Streaming message instead of a plain String.
I also changed the launch code to not use the run-example script; instead it sets the Spark master in the code and attaches the jar with all the classes via setJars(...).
I just checked it with 1.0.2. Still the same exception.
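For reference, a minimal sketch of the two changes described above. WordMessage is a hypothetical name standing in for the case class used instead of the stock example's String; the master URL and jar path are placeholders:

```scala
// Hypothetical message type replacing the plain String used by the
// stock ActorWordCount example; WordMessage is my name, not Spark's.
case class WordMessage(text: String)

// Launch-side sketch (requires spark-core on the classpath), replacing the
// run-example script: master URL and application jar are set in code so the
// executors can load and deserialize WordMessage.
//
// val conf = new org.apache.spark.SparkConf()
//   .setAppName("ActorWordCount")
//   .setMaster("spark://host:7077")              // placeholder master URL
//   .setJars(Seq("target/actor-wordcount.jar"))  // jar with all the classes

println(WordMessage("hello"))  // prints WordMessage(hello)
```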
From: Anton Brazhnyk [mailto:anton.brazh...@genesys.com]
Sent: Wednesday, August 27, 2014 6:46 PM
To: Tathagata Das
Cc: user@spark.apache.org
Subject: RE: [Streaming] Akka-based receiver with messages defined in uploaded jar
Sorry for the delay.
Is the exception thrown from either Spark or application code?
From: Tathagata Das [mailto:tathagata.das1...@gmail.com]
Sent: Friday, August 29, 2014 7:21 PM
To: Anton Brazhnyk
Cc: user@spark.apache.org
Subject: Re: [Streaming] Akka-based receiver with messages defined in uploaded jar
Can you try adding the JAR
Just a guess.
updateStateByKey has overloaded variants that take a partitioner as a parameter. Could that help?
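For illustration, a minimal update function of the shape those overloads expect; the partitioner wiring itself is only sketched in a comment, since it needs spark-streaming on the classpath and a real DStream:

```scala
// Running-count update function with the (Seq[V], Option[S]) => Option[S]
// signature that updateStateByKey expects.
val updateFunc = (newValues: Seq[Int], state: Option[Int]) =>
  Some(newValues.sum + state.getOrElse(0))

// Hypothetical wiring with the partitioner overload (needs spark-streaming;
// pairStream and the partition count are placeholders):
// pairStream.updateStateByKey(updateFunc, new org.apache.spark.HashPartitioner(8))

println(updateFunc(Seq(1, 2, 3), Some(4)))  // prints Some(10)
```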
-----Original Message-----
From: qihong [mailto:qc...@pivotal.io]
Sent: Tuesday, September 09, 2014 9:13 PM
To: u...@spark.incubator.apache.org
Subject: Re: how to setup steady state stream
Greetings,
The first comment on the issue says that the reason multiple contexts are not supported is that there are numerous assumptions in the code base that use a shared cache, thread-local variables, or global identifiers, which prevent using multiple SparkContexts.
Can it be worked around?
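For what it's worth, Spark 1.x did expose an experimental flag that only silences the multiple-context check; it does not remove the shared-state assumptions described above, so behavior with two contexts remains undefined:

```
# spark-defaults.conf (experimental in Spark 1.x):
# bypasses only the multiple-context check, not the underlying assumptions
spark.driver.allowMultipleContexts  true
```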
Check your dependencies. It looks like you have a conflict around the servlet-api jars.
Maven's dependency:tree, some exclusions and some luck :) could help.
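As an illustration of that approach, a pom.xml sketch: the groupId/artifactId below are placeholders for whichever dependency `mvn dependency:tree` shows dragging in the conflicting servlet-api:

```xml
<dependency>
  <groupId>some.group</groupId>            <!-- placeholder: offending dependency -->
  <artifactId>some-artifact</artifactId>   <!-- placeholder -->
  <version>1.0</version>
  <exclusions>
    <exclusion>
      <groupId>javax.servlet</groupId>
      <artifactId>servlet-api</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```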
From: Ralph Bergmann | the4thFloor.eu [ra...@the4thfloor.eu]
Sent: Tuesday, February 17, 2015 4:14 PM
Greetings,
I have a relatively complex application in which Spark, Jetty and Guava (16) do not fit together.
The exception happens when some components try to use a mix of Guava classes (including Spark's pieces) that are loaded by different classloaders:
java.lang.LinkageError: loader constraint
The problem is with 1.3.1. It has the Function class (mentioned in the exception) in spark-network-common_2.10-1.3.1.jar.
Our current resolution is actually to roll back to 1.2.2, which is working fine.
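A common workaround for this class of LinkageError (not the rollback this thread settled on) is relocating the application's Guava inside its jar with the maven-shade-plugin, so it cannot collide with the Guava classes Spark loads. A sketch, inside the shade plugin's configuration:

```xml
<!-- maven-shade-plugin relocation sketch: moves the app's Guava 16 classes
     to a private package so they can't clash with Spark's Guava -->
<relocation>
  <pattern>com.google.common</pattern>
  <shadedPattern>myapp.shaded.com.google.common</shadedPattern>
</relocation>
```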
From: Marcelo Vanzin [mailto:van...@cloudera.com]
Sent: Thursday, May 14, 2015 6:27 PM
To: Anton Brazhnyk
That’s why I proposed putting them into a separate Maven artifact, where they could simply be excluded in the build of an app that depends on Spark.
From: Marcelo Vanzin [mailto:van...@cloudera.com]
Sent: Friday, May 15, 2015 11:55 AM
To: Anton Brazhnyk
Cc: user@spark.apache.org
Subject: Re: Spark's