But is it possible to make it resizable? When we don't have many RDDs to
cache, we can give some memory to other uses.
2014-09-04 13:45 GMT+08:00 Patrick Wendell pwend...@gmail.com:
Changing this is not supported; it is immutable, similar to other Spark
configuration settings.
On Wed, Sep 3, 2014
You don’t need to. It is not statically allocated to the RDD cache; it is just an
upper limit.
If the RDD cache doesn’t use up the memory, it is always available for other
usage, except for the portions also controlled by other memoryFraction confs, e.g.
spark.shuffle.memoryFraction, which also sets an upper limit.
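To make this concrete, a minimal sketch of how the two caps are set when the
context is created (values shown are the 1.x defaults; the app name is made up):

import org.apache.spark.{SparkConf, SparkContext}

// Both fractions are upper limits on shares of the executor heap, fixed at
// startup; they cannot be resized on a live SparkContext.
val conf = new SparkConf()
  .setAppName("memory-fraction-sketch")        // hypothetical app name
  .set("spark.storage.memoryFraction", "0.6")  // cap for cached RDD blocks (default 0.6)
  .set("spark.shuffle.memoryFraction", "0.2")  // cap for shuffle buffers (default 0.2)
val sc = new SparkContext(conf)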
Thanks, Raymond.
I duplicated the question. Please see the reply here.
2014-09-04 14:27 GMT+08:00 牛兆捷 nzjem...@gmail.com:
But is it possible to make it resizable? When we don't have many RDDs to
cache, we can give some memory to other uses.
2014-09-04 13:45 GMT+08:00 Patrick Wendell
I think there is no public API available to do this. In this case, the best you
can do might be to unpersist some RDDs manually. The problem is that this is done
per RDD, not per block. And if the storage level includes the disk
level, the data on disk will be removed too.
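A minimal sketch of that manual route (assuming an existing SparkContext sc;
the HDFS path is hypothetical); note it drops the whole RDD, including any
blocks spilled to disk:

import org.apache.spark.storage.StorageLevel

// Cache with a level that may spill to disk, then free it all at once.
val data = sc.textFile("hdfs:///some/path").persist(StorageLevel.MEMORY_AND_DISK)
data.count()                     // materializes the cached blocks
data.unpersist(blocking = true)  // evicts every block of this RDD, memory and disk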
Best
OK. So can I use logic similar to what the block manager does when space
fills up?
2014-09-04 15:05 GMT+08:00 Liu, Raymond raymond@intel.com:
I think there is no public API available to do this. In this case, the
best you can do might be to unpersist some RDDs manually. The problem is that
I am trying to use Kinesis as a source for Spark Streaming and have run into a
dependency issue that can't be resolved without making my own custom Spark
build. The issue is that Spark is transitively dependent
on org.apache.httpcomponents:httpclient:jar:4.1.2 (I think because of
libfb303 coming from
Dumb question -- are you using a Spark build that includes the Kinesis
dependency? That build would have resolved conflicts like this for
you. Your app would need to use the same version of the Kinesis client
SDK, ideally.
All of these ideas are well-known, yes. In cases of super-common
Hi,
I ran into the same issue, and apart from the ideas Aniket mentioned, I could
only find a nasty workaround: adding my custom PoolingClientConnectionManager
to my classpath.
custom spark builds should not be the answer. at least not if spark ever
wants to have a vibrant community for spark apps.
spark does support a user-classpath-first option, which would deal with
some of these issues, but I don't think it works.
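For reference, a minimal sketch of the flag in question as documented
(experimental) for Spark 1.x; whether it actually resolves a given conflict is
exactly the doubt raised above:

import org.apache.spark.SparkConf

// Ask executors to prefer user-added jars over Spark's own when loading classes.
// Experimental in Spark 1.x, and known not to cover every conflict.
val conf = new SparkConf()
  .set("spark.files.userClassPathFirst", "true")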
On Sep 4, 2014 9:01 AM, Felix Garcia Borrego
I've experienced something related to what we discussed. NaïveBayes crashes
with native blas/lapack libraries for breeze/netlib on Windows:
https://issues.apache.org/jira/browse/SPARK-3403
I've also attached to the issue another example with gradient that crashes in
runMiniBatchSGD, probably
+1. Ran spark on yarn on hadoop 0.23 and 2.x.
Tom
On Wednesday, September 3, 2014 2:25 AM, Patrick Wendell pwend...@gmail.com
wrote:
Please vote on releasing the following candidate as Apache Spark version 1.1.0!
The tag to be voted on is v1.1.0-rc4 (commit 2f9b2bd):
On 09/03/2014 04:23 PM, Nicholas Chammas wrote:
On Wed, Sep 3, 2014 at 3:24 AM, Patrick Wendell pwend...@gmail.com wrote:
== What default changes should I be aware of? ==
1. The default value of spark.io.compression.codec is now snappy
-- Old behavior can be restored by switching to lzf
2.
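A one-line sketch of the restore mentioned above:

import org.apache.spark.SparkConf

// Switch back to the pre-1.1 default compression codec.
val conf = new SparkConf().set("spark.io.compression.codec", "lzf")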
LICENSE and NOTICE files are good
Hash files are good
Signature files are good
No 3rd party executables
Source compiled
Ran local and standalone tests
Tested persist off-heap with Tachyon; looks good
+1
- Henry
On Wed, Sep 3, 2014 at 12:24 AM, Patrick Wendell pwend...@gmail.com wrote:
Please
i am trying to get things up and running, but it looks like either the
firewall gateway or jenkins server itself is down. i'll update as soon as
i know more.
looks like a power outage in soda hall. more updates as they happen.
On Thu, Sep 4, 2014 at 12:25 PM, shane knapp skn...@berkeley.edu wrote:
i am trying to get things up and running, but it looks like either the
firewall gateway or jenkins server itself is down. i'll update as soon as
i
+1
Compiled, ran a simple job on yarn-hadoop-2.3.
2014-09-04 22:22 GMT+04:00 Henry Saputra henry.sapu...@gmail.com:
LICENSE and NOTICE files are good
Hash files are good
Signature files are good
No 3rd party executables
Source compiled
Ran local and standalone tests
Tested persist off
looks like some hardware failed, and we're swapping in a replacement. i
don't have more specific information yet -- including *what* failed, as our
sysadmin is super busy ATM. the root cause was an incorrect circuit being
switched off during building maintenance.
on a side note, this incident
it's a faulty power switch on the firewall, which has been swapped out.
we're about to reboot and be good to go.
On Thu, Sep 4, 2014 at 1:19 PM, shane knapp skn...@berkeley.edu wrote:
looks like some hardware failed, and we're swapping in a replacement. i
don't have more specific
On Thu, Sep 4, 2014 at 1:50 PM, Gurvinder Singh gurvinder.si...@uninett.no
wrote:
There is a regression when using pyspark to read data
from HDFS.
Could you open a JIRA (http://issues.apache.org/jira/) with a brief repro?
We'll look into it.
(You could also provide a repro in a separate
+1
I have a Java class which calls SparkSubmit.scala with all the arguments to
run a Spark job in a thread. I am running them in local mode for now but
also want to run them in yarn-cluster mode later.
Now, I want to kill the running Spark job (which can be in local or
yarn-cluster mode)
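One route that works when the driver runs in your own JVM (local mode) is
Spark's job-group API; a sketch under that assumption (the group id and dummy
workload are made up):

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("killable"))

// Job groups are thread-local, so tag the jobs from the thread that submits them.
val work = Future {
  sc.setJobGroup("killable-job", "work we may want to stop", interruptOnCancel = true)
  sc.parallelize(1 to 1000000).map(_ * 2).count()
}

// Later, from any other thread (e.g. your kill handler):
sc.cancelJobGroup("killable-job")

In yarn-cluster mode the driver lives in the cluster, so the usual route is to
kill the YARN application itself (e.g. yarn application -kill <appId>).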
AND WE'RE UP!
sorry that this took so long... i'll send out a more detailed explanation
of what happened soon.
now, off to back up jenkins.
shane
On Thu, Sep 4, 2014 at 1:27 PM, shane knapp skn...@berkeley.edu wrote:
it's a faulty power switch on the firewall, which has been swapped out.
Woohoo! Thanks Shane.
Do you know if queued PR builds will automatically be picked up? Or do we
have to ping the Jenkinmensch manually from each PR?
Nick
On Thu, Sep 4, 2014 at 5:37 PM, shane knapp skn...@berkeley.edu wrote:
AND WE'RE UP!
sorry that this took so long... i'll send out a
It appears that our main man
(https://amplab.cs.berkeley.edu/jenkins/view/Pull%20Request%20Builders/job/SparkPullRequestBuilder/)
is having trouble hearing new requests
(https://github.com/apache/spark/pull/2277#issuecomment-54549106).
Do we need some smelling salts?
On Thu, Sep 4, 2014 at 5:49
Hm yeah it seems that it hasn't been polling since 3:45.
On Thu, Sep 4, 2014 at 4:21 PM, Nicholas Chammas
nicholas.cham...@gmail.com wrote:
It appears that our main man is having trouble hearing new requests.
Do we need some smelling salts?
On Thu, Sep 4, 2014 at 5:49 PM, shane knapp
looking
On Thu, Sep 4, 2014 at 4:21 PM, Nicholas Chammas nicholas.cham...@gmail.com
wrote:
It appears that our main man
(https://amplab.cs.berkeley.edu/jenkins/view/Pull%20Request%20Builders/job/SparkPullRequestBuilder/)
is having trouble hearing new requests
i'm going to restart jenkins and see if that fixes things.
On Thu, Sep 4, 2014 at 4:56 PM, shane knapp skn...@berkeley.edu wrote:
looking
On Thu, Sep 4, 2014 at 4:21 PM, Nicholas Chammas
nicholas.cham...@gmail.com wrote:
It appears that our main man is having trouble
+1
Compiled, ran newly-introduced PySpark Hadoop input/output examples.
On Thu, Sep 4, 2014 at 1:10 PM, Egor Pahomov pahomov.e...@gmail.com wrote:
+1
Compiled, ran a simple job on yarn-hadoop-2.3.
2014-09-04 22:22 GMT+04:00 Henry Saputra henry.sapu...@gmail.com:
LICENSE and NOTICE files
Looks like during the last build
(https://amplab.cs.berkeley.edu/jenkins/view/Pull%20Request%20Builders/job/SparkPullRequestBuilder/19797/console)
Jenkins was unable to execute a git fetch?
On Thu, Sep 4, 2014 at 7:58 PM, shane knapp skn...@berkeley.edu wrote:
i'm going to restart jenkins and
yep. that's exactly the behavior i saw earlier, and i will be figuring it out
first thing tomorrow morning. i bet it's an environment issue on the
slaves.
On Thu, Sep 4, 2014 at 7:10 PM, Nicholas Chammas nicholas.cham...@gmail.com
wrote:
Looks like during the last build