Hi,
Every time I run my Spark application on Mesos, I get log lines in my console
of the form:
2016-08-26 15:25:30,949:960521(0x7f6bccff9700):ZOO_INFO@log_env
2016-08-26 15:25:30,949:960521(0x7f6bccff9700):ZOO_INFO@log_env
2016-08-26 15:25:30,949:960521(0x7f6bccff9700):ZOO_INFO@log_env
2016-08-26
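If these lines were coming from the JVM-side ZooKeeper client, you could raise its log level from the driver; a minimal sketch using log4j 1.x, which Spark 1.x ships. Note that the ZOO_INFO format suggests the Mesos native library's C client, which JVM logging settings may not affect:

import org.apache.log4j.{Level, Logger}

// Silence INFO chatter from the JVM-side ZooKeeper client.
// This likely does not reach the Mesos native library's C client,
// which may be what actually prints the ZOO_INFO lines above.
Logger.getLogger("org.apache.zookeeper").setLevel(Level.WARN)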
Sorry, I have not been able to solve the issue. I used speculation mode as a
workaround.
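For reference, enabling speculative execution looks roughly like this (a sketch; spark.speculation is the real setting, the app name and tuning value are just illustrative):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("s3-job") // placeholder name
  // Relaunch suspiciously slow tasks (e.g. one hung on an S3 read)
  // on another executor and take whichever copy finishes first.
  .set("spark.speculation", "true")
  .set("spark.speculation.multiplier", "1.5") // illustrative tuning value
val sc = new SparkContext(conf)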
--
Any hints?
--
Some other stats:
The number of files in the folder is 48.
The number of partitions used when reading the data is 7315.
The largest single file is 14 GB.
The total size of the folder is around 270 GB.
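If the 7315 partitions are being requested explicitly, the read would look roughly like this sketch (the bucket path is a placeholder):

// The second argument is a *minimum* partition count; the Hadoop
// input splits over 48 files totalling ~270 GB can only increase it.
val data = sc.textFile("s3n://my-bucket/my-folder", 7315)
println(data.partitions.length)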
--
Any help on this? This is really blocking me and I haven't found a feasible
solution yet.
Thanks.
--
Hi guys, when reading data from S3 on AWS using Spark 1.5.1, one of the
tasks hangs while reading, in a way that cannot be reproduced: sometimes
it hangs, sometimes it doesn't.
This is the thread dump from the hung task:
"Executor task launch worker-3" daemon prio=10 tid=0x7f419c023000
Hi guys,
It happens quite often that when the locality level of a task goes
beyond PROCESS_LOCAL (NODE_LOCAL, RACK_LOCAL, etc.), I get some of the
following exceptions: too many open files, encountered unregistered class
ID, cannot cast X to Y.
I do not get any exceptions during shuffling (which means
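On the "too many open files" part specifically, two things that sometimes helped on Spark 1.x (hedged suggestions, not confirmed fixes for this case): raising the OS open-file ulimit on the workers, and shuffle file consolidation:

import org.apache.spark.SparkConf

// Spark 1.x only (the setting was removed in later versions): write
// fewer, consolidated shuffle files per core instead of one per map task.
val conf = new SparkConf().set("spark.shuffle.consolidateFiles", "true")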
Hi guys,
I get Kryo exceptions of the type "unregistered class ID" and "cannot cast
to class" when the locality level of the tasks goes beyond PROCESS_LOCAL.
However, I get no Kryo exceptions during shuffle operations.
If the locality level never goes beyond PROCESS_LOCAL, everything works fine.
Is there a special
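The message is truncated here, but one thing worth checking (a sketch, assuming Spark 1.2+ where registerKryoClasses exists; older versions register classes through a custom KryoRegistrator via spark.kryo.registrator): registering the classes explicitly keeps Kryo's numeric class IDs consistent across JVMs, which is one common cause of "unregistered class ID" mismatches.

import org.apache.spark.SparkConf

// Hypothetical payload type; substitute the classes actually sent between nodes.
case class Record(id: Long, payload: Array[Byte])

val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .registerKryoClasses(Array(classOf[Record]))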
Hello guys,
I'm using Spark 1.0.0 and Kryo serialization.
In the Spark shell, when I create a class that has the SparkContext as an
attribute, in this way:
class AAA(val s: SparkContext) { }
val aaa = new AAA(sc)
and I execute any action using that attribute, like:
val myNumber = 5
Marcelo Vanzin wrote:
Do you expect to be able to use the spark context on the remote task?
Not at all. What I want to create is a wrapper around the SparkContext, to
be used only on the driver node.
I would like this AAA wrapper to have several attributes, such as the
SparkContext and other
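A common fix for exactly this pattern (a sketch, not necessarily what was done in this thread): mark the SparkContext field @transient, so closures that accidentally capture the wrapper do not try to serialize the context along with it.

import org.apache.spark.SparkContext

// Driver-side wrapper: @transient keeps the SparkContext out of any
// serialized closure that happens to capture the wrapper instance.
// (The field will be null if the object is ever deserialized remotely.)
class AAA(@transient val s: SparkContext) extends Serializable {
  // driver-only helpers go here
}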
Hello,
On Mon, Nov 24, 2014 at 12:07 PM, aecc wrote:
This is the stacktrace:
org.apache.spark.SparkException: Job aborted due to stage failure: Task not
serializable: java.io.NotSerializableException: $iwC$$iwC$$iwC$$iwC$AAA
[...] to be serialized, and this:
sc.parallelize(1 to 10).filter(_ == myNumber).count
does not.
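For context, a minimal sketch of the contrast being described (to run in spark-shell with the AAA class above; the failing variant is hypothetical, since the original line was elided):

// Referencing the wrapper inside the closure drags `aaa` (and the
// spark-shell $iwC wrapper objects around it) into the serialized task:
// sc.parallelize(1 to 10).filter(_ == aaa.hashCode).count  // NotSerializableException

// Capturing only a plain local value serializes fine:
val myNumber = 5
sc.parallelize(1 to 10).filter(_ == myNumber).count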
2014-11-24 23:13 GMT+01:00 Marcelo Vanzin [via Apache Spark User List]:
On Mon, Nov 24, 2014 at 1:56 PM, aecc wrote:
Hello, I would like to have a kind of sub-windows. The idea is to have 3
windows covering consecutive slices of the stream, from past to future:

past ----[ w1 ][ w2 ][ w3 ]----> future

So I can do some processing with the
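The message is truncated, but a sketch of trailing windows with Spark Streaming's window operator (batch size, source, and window lengths are all assumptions):

import org.apache.spark.streaming.{Seconds, StreamingContext}

// Hypothetical setup: 10-second batches over a placeholder socket source.
val ssc = new StreamingContext(sc, Seconds(10))
val lines = ssc.socketTextStream("localhost", 9999)

// window(windowLength, slideInterval); both must be multiples of the batch interval.
val w3 = lines.window(Seconds(10), Seconds(10)) // most recent batch
val w2 = lines.window(Seconds(20), Seconds(10)) // last two batches
val w1 = lines.window(Seconds(30), Seconds(10)) // last three batches

Note these are trailing (overlapping) windows; there is no direct "the slice from 20 to 30 seconds ago" operator, so disjoint sub-windows would have to be derived from these.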
Hi, I would like to know the correct way to add Kafka to my project in
standalone/YARN mode, given that it is now in a different artifact than
Spark core.
I tried adding the dependency to my project, but I get a
ClassNotFoundException for my main class. Also, that makes my jar file very
big,
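The usual sbt arrangement for this (a sketch; the version is a placeholder, and spark-streaming-kafka is the Spark 1.x artifact name) is to mark the artifacts the cluster already provides as "provided", so only the Kafka connector ends up in the assembly jar:

// build.sbt (sketch)
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"            % "1.5.1" % "provided", // supplied by the cluster
  "org.apache.spark" %% "spark-streaming"       % "1.5.1" % "provided",
  "org.apache.spark" %% "spark-streaming-kafka" % "1.5.1"               // bundled into your jar
)

On Spark 1.3+, spark-submit --packages org.apache.spark:spark-streaming-kafka_2.10:1.5.1 is another option; it pulls the connector at submit time and keeps the application jar small.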