> For the dynamic allocation feature, I need spark-xxx-yarn-shuffle.jar. In my
> local Spark build I can see it, but I can't find it in Maven Central, and my
> build script pulls all jars from Maven Central. Is the only option to check
> this jar into git?
>
> Thanks,
>
> -Neal
>
--
-Dhruve Ashar
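[On the shuffle-jar question above: the shaded YARN shuffle jar ships inside the Spark distribution (it is not published as a standalone artifact on Maven Central, as far as I can tell), so the usual route is to copy it from the distribution onto the NodeManager classpath rather than commit it to git. A yarn-site.xml sketch of the standard dynamic-allocation wiring, per the Spark on YARN docs — verify the property names against your Spark version:]

```xml
<!-- yarn-site.xml: register Spark's external shuffle service on each NodeManager -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle,spark_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
  <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>
```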
l.
>> catalyst.expressions.GeneratedClass$SpecificOrdering" grows beyond 64 KB
>>
>
> Unfortunately I'm not clear on how to even isolate the source of this
> problem. I didn't have this problem in Spark 1.6.1.
>
> Any clues?
>
--
-Dhruve Ashar
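[Context for the error above: 64 KB is the JVM's per-method bytecode limit, which Spark's generated code can exceed on very wide orderings or schemas. One hedged workaround is to disable whole-stage code generation — a spark-defaults.conf sketch for Spark 2.0-era configs; note this may not cover every generated-class path (SpecificOrdering comes from a separate code generator), so treat it as something to try, not a guaranteed fix:]

```properties
# Disable whole-stage codegen so Spark falls back to smaller generated methods
# (trades some performance for avoiding the 64 KB method-size limit).
spark.sql.codegen.wholeStage   false
```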
stopped! Dropping event
>> SparkListenerExecutorMetricsUpdate(70,WrappedArray())
>> 16/08/02 16:59:33 WARN yarn.YarnAllocator: Expected to find pending
>> requests, but found none.
>> 16/08/02 16:59:33 WARN netty.Dispatcher: Message
>> RemoteProcessDisconnected(17.138.53.26:55338) dropped. Could not find
>> MapOutputTracker.
>>
>>
>>
>> Cheers,
>>
>>
>> LZ
>>
>>
>
--
-Dhruve Ashar
f.scala:454)
>
>
> /*Please note the same works on CDH 5.4 with Spark 1.3.0.*/
>
> Regards,
> Sam
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/spark-driver-extraJavaOptions-tp27389.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>
--
-Dhruve Ashar
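[One point worth noting for the spark.driver.extraJavaOptions thread above: per the Spark configuration guide, in client mode the driver JVM is already running by the time SparkConf is read, so the option must be set at launch (via spark-defaults.conf or spark-submit's --driver-java-options), not in application code. A spark-defaults.conf sketch — the JVM flag shown is a placeholder, not taken from the original message:]

```properties
# spark-defaults.conf: driver JVM options must be supplied at launch time;
# setting them programmatically is too late in client mode.
spark.driver.extraJavaOptions   -XX:MaxPermSize=512m
```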
INFO ContextCleaner: Cleaned RDD 14
> 16/07/11 14:28:46 INFO BlockManagerInfo: Removed broadcast_11_piece0 on
> 10.101.230.154:35192 in memory (size: 25.5 KB, free: 37.1 GB)
> ...
>
> In fact, the job is still running, Spark's UI shows uptime of 20.6 hours
> with last job finishing
re.
>
> Would appreciate any help, thanks
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-hangs-at-Removed-broadcast-tp27320.html
>
> --
s, -t) unlimited
> max user processes (-u) 241204
> virtual memory (kbytes, -v) unlimited
> file locks (-x) unlimited
>
> but when my Spark application crashes, it shows the error " Failed to
> write core dump. Core dumps have been disabled. To enable core du
m/Error-report-file-is-deleted-automatically-after-spark-application-finished-tp27247.html
>
>
>
--
-Dhruve Ashar
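[On the core-dump error above: the JVM prints that message when the core-file size ulimit is 0. A minimal sketch of enabling core dumps — the limit is per-process and inherited, so it has to be raised in the shell (or YARN container environment) that actually launches the JVM:]

```shell
# Raise the core-file size limit before starting the JVM; child processes
# inherit it. "unlimited" may be capped by the hard limit on some systems.
ulimit -c unlimited
ulimit -c    # report the effective soft limit
```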
Leader election in Curator and says that no method found
>> (getProcess), and the master doesn't get started.
>>
>> Just wondering what could be causing the issue.
>>
>> I am using same zookeeper cluster for HDFS High availability and it is
>> working just fine.
>>
>>
>> Thanks
>>
>
--
-Dhruve Ashar