DStream.scala#L463).
>
> You can control this behaviour with StreamingContext#remember to some extent.
>
> // maropu
>
> On Fri, Jan 20, 2017 at 3:17 AM, Andrew Milkowski
> wrote:
hello

using spark 2.0.2, while running the sample streaming app with kinesis I
noticed (in the admin UI Storage tab) that "Stream Blocks" for each worker
keeps climbing up. Then, also on the same UI page, in the Blocks section I
see blocks such as input-0-1484753367056 that are marked as Memory Serialized.
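The StreamingContext#remember knob mentioned in the reply can be sketched as
follows; the app name, master, and both durations here are hypothetical and
would need tuning for a real kinesis job:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Sketch only: app name, master, and intervals are assumptions, not from the thread.
val conf = new SparkConf().setAppName("kinesis-demo").setMaster("local[2]")
val ssc = new StreamingContext(conf, Seconds(10)) // 10s batch interval

// Keep each generated RDD (and the input blocks behind it) for at most
// 2 minutes; older blocks become eligible for cleanup instead of piling
// up under the Storage tab.
ssc.remember(Seconds(120))

ssc.stop() // sketch ends here; a real job would define a DStream and call start()
```

Note remember() must be set before the context is started, and it can only
extend retention relative to the batch interval, not shrink it arbitrarily.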
Hello, a question: we are seeing the exceptions below, and at the moment are
enabling a JVM profiler to look into GC activity on the workers. If you have
any other suggestions please let us know; we don't want to just increase the
RPC timeout (from 120 to, say, 600 seconds) but rather get to the reason why
the workers time out.
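For reference, a hedged sketch of the two knobs involved while triaging:
raising the network/RPC timeout (spark.network.timeout defaults to 120s) and
turning on GC logging on the executors. The app name is hypothetical; treat
this as a diagnostic starting point, not a fix:

```scala
import org.apache.spark.SparkConf

// Sketch only: raise the timeout while profiling, and enable GC logging on
// executors to see whether long pauses explain the worker timeouts.
val conf = new SparkConf()
  .setAppName("timeout-triage")          // hypothetical app name
  .set("spark.network.timeout", "600s")  // default is 120s
  .set("spark.executor.extraJavaOptions",
       "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps")

println(conf.get("spark.network.timeout"))
```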
Dear community, never mind! Although I was using spark 1.0.0 everywhere, I had
not updated my spark client. Changing the pom versions (from 0.9.0) to
1.0.0-cdh5.1.0 fixed the problem.
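A minimal sketch of the kind of pom change involved; only the version string
comes from the thread, while the groupId/artifactId below are assumptions (a
typical spark-core dependency from the Cloudera repository):

```xml
<!-- hypothetical fragment: only the 1.0.0-cdh5.1.0 version bump is from the thread -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>1.0.0-cdh5.1.0</version>
</dependency>
```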
Hello community

Using the following distros:

spark:
http://archive.cloudera.com/cdh5/cdh/5/spark-1.0.0-cdh5.1.0-src.tar.gz
mesos: http://archive.apache.org/dist/mesos/0.19.0/mesos-0.19.0.tar.gz

both assembled with scala 2.10.4 and java 7
my spark-env.sh looks as follows:

#!/usr/bin/env bash
/*,
$HADOOP_YARN_HOME/share/hadoop/yarn/*,
$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
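A hedged reconstruction of what such a spark-env.sh fragment typically does;
the variable name (SPARK_CLASSPATH) is an assumption, since the original
lines above are truncated:

```shell
#!/usr/bin/env bash
# Sketch: put the YARN jars on Spark's classpath so the client links against
# the Hadoop 2 API (assumes HADOOP_YARN_HOME is already exported).
export SPARK_CLASSPATH="${SPARK_CLASSPATH}:${HADOOP_YARN_HOME}/share/hadoop/yarn/*:${HADOOP_YARN_HOME}/share/hadoop/yarn/lib/*"
```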
On Wed, Jul 16, 2014 at 1:47 PM, Andrew Milkowski
wrote:
> Sandy, perfect! You saved me tons of time! I added this in yarn-site.xml and
> the job ran to completion.
>
> Can you do me (us) a favor and
>
>> Somewhere in here, you are not actually running vs Hadoop 2 binaries.
>> Your cluster is certainly Hadoop 2, but your client is not using the
>> Hadoop libs you think it is (or your compiled binary is linking
>> against Hadoop 1, which is the default for Spark -- did you change
>> it?)
>>
>
> On Wed, Jul 16, 2014 at 5:45 PM, Andrew Milkowski
> wrote:
Hello community,
tried to run a storm app on yarn, using the cloudera hadoop and spark distros
(from http://archive.cloudera.com/cdh5/cdh/5)
hadoop version: hadoop-2.3.0-cdh5.0.3.tar.gz
spark version: spark-0.9.0-cdh5.0.3.tar.gz
DEFAULT_YARN_APPLICATION_CLASSPATH is part of hadoop-api-yarn jar ...
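Given the reply quoted earlier (the client may be linking against Spark's
Hadoop 1 default), a hedged sketch of rebuilding Spark against the cluster's
CDH Hadoop 2 artifacts; the flags follow Spark's documented Maven build, but
the exact profile names for this Spark version are worth double-checking:

```shell
# Sketch: build Spark against the cluster's Hadoop 2 version so the client
# links Hadoop 2 rather than the Hadoop 1 default.
mvn -Pyarn -Dhadoop.version=2.3.0-cdh5.0.3 -Dyarn.version=2.3.0-cdh5.0.3 \
    -DskipTests clean package
```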