Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/548#issuecomment-34511462
What happened here? Is Jenkins dead?
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/548#issuecomment-34532680
Why would this affect the correctness of test cases in streaming? And this
error does not happen every time...
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/548#issuecomment-34534153
Passed. What I changed after the previous failure:
1. made LRU scheduling optional, i.e. the default case is
"round-robin"
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/548#issuecomment-34534298
I am still confused by the previous failure: how can this change interact
with the streaming recovery mechanism?
Actually, even without the above two
Github user CodingCat closed the pull request at:
https://github.com/apache/incubator-spark/pull/556
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/472#issuecomment-34581720
@pwendell sure, the more eyes the better!
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/472#issuecomment-34601727
added some test cases for the feature, waiting for more feedback from the
users..
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/548#issuecomment-34722573
added a test case
GitHub user CodingCat opened a pull request:
https://github.com/apache/incubator-spark/pull/588
[SPARK-1041] remove dead code in start script, remind user to set that in
spark-env.sh
the lines in start-master.sh and start-slave.sh no longer work;
in EC2, the host name has
Github user CodingCat closed the pull request at:
https://github.com/apache/incubator-spark/pull/391
GitHub user CodingCat opened a pull request:
https://github.com/apache/incubator-spark/pull/599
[SPARK-1090] improvement on spark_shell (help information, configure memory)
spark-shell should print help information about its parameters and should allow
the user to configure executor memory
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/599#discussion_r9767385
Hi @aarondav, just a bit confused. From the code:
private[spark] val executorMemory = conf.getOption("spark.executor.memory")
.orE
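For context, the fallback chain that truncated snippet is reaching for can be sketched as below. `SimpleConf` and `resolveExecutorMemory` are hypothetical stand-ins (not Spark's actual `SparkConf` code), and the `512m` default is an assumption based on 0.9-era Spark defaults:

```scala
// Hypothetical stand-in for SparkConf, kept dependency-free for illustration.
class SimpleConf(settings: Map[String, String]) {
  def getOption(key: String): Option[String] = settings.get(key)
}

// Sketch of the discussed fallback: explicit config first, then the
// deprecated SPARK_MEM environment variable, then an assumed default.
def resolveExecutorMemory(conf: SimpleConf, env: Map[String, String]): String =
  conf.getOption("spark.executor.memory") // explicit configuration wins
    .orElse(env.get("SPARK_MEM"))         // deprecated fallback
    .getOrElse("512m")                    // assumed 0.9-era default
```

For example, with an empty conf and `SPARK_MEM=2g` in the environment, `resolveExecutorMemory` returns `"2g"`.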
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/599#discussion_r9768064
OK, I got it
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/599#issuecomment-35135807
added a new parameter to set the memory used by spark-shell driver
GitHub user CodingCat opened a pull request:
https://github.com/apache/incubator-spark/pull/602
[SPARK-1092] remove SPARK_MEM usage in sparkcontext.scala
https://spark-project.atlassian.net/browse/SPARK-1092?jql=project%20%3D%20SPARK
Currently, users will usually set
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/599#issuecomment-35138442
@aarondav I totally agree that we should deprecate SPARK_MEM; since that
PR is still in progress (seems dead), I think we should avoid reading this
variable
Github user CodingCat closed the pull request at:
https://github.com/apache/incubator-spark/pull/602
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/602#issuecomment-35139787
@pwendell @aarondav thanks for pointing this out
agree, closed
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/602#issuecomment-35139930
@pwendell @aarondav can any of you help to close the related JIRA?
https://spark-project.atlassian.net/browse/SPARK-1092
Thank you
GitHub user CodingCat reopened a pull request:
https://github.com/apache/incubator-spark/pull/602
[SPARK-1092] remove SPARK_MEM usage in sparkcontext.scala
https://spark-project.atlassian.net/browse/SPARK-1092?jql=project%20%3D%20SPARK
Currently, users will usually set
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/602#issuecomment-35141292
Hi @pwendell @aarondav, I reopened it and changed my commit to print
warning information.
I just think that the executor and driver share the same env
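The "warn instead of silently reading SPARK_MEM" idea could look roughly like this; the function name and message wording are illustrative, not Spark's actual code:

```scala
// Hypothetical sketch: emit a warning when the deprecated SPARK_MEM
// variable is set, steering users to the newer configuration keys.
def warnIfSparkMemSet(env: Map[String, String], warn: String => Unit): Unit =
  env.get("SPARK_MEM").foreach { mem =>
    warn(s"SPARK_MEM ($mem) is deprecated; please set spark.executor.memory " +
         "in your configuration or in spark-env.sh instead.")
  }
```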
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/602#issuecomment-35157378
Hi, @mridulm , I think it will be used in local, mesos, and standalone mode
1. local
case LOCAL_CLUSTER_REGEX(numSlaves, coresPerSlave
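The `LOCAL_CLUSTER_REGEX` match being quoted extracts the slave count, cores per slave, and memory per slave from a `local-cluster[n, c, m]` master URL. The regex below is an approximation written for illustration, not copied from Spark's source:

```scala
// Approximate version of Spark's LOCAL_CLUSTER_REGEX for illustration.
val LocalClusterRegex = """local-cluster\[\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*\]""".r

// Pattern-match a master URL the way SparkContext does, returning the
// parsed (numSlaves, coresPerSlave, memoryPerSlave) triple when it fits.
def parseLocalCluster(master: String): Option[(Int, Int, Int)] = master match {
  case LocalClusterRegex(numSlaves, coresPerSlave, memoryPerSlave) =>
    Some((numSlaves.toInt, coresPerSlave.toInt, memoryPerSlave.toInt))
  case _ => None
}
```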
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/602#issuecomment-35207540
@aarondav just fixed
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. To do so, please top-post
Github user CodingCat closed the pull request at:
https://github.com/apache/incubator-spark/pull/602
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/599#discussion_r9780199
maybe --execmem?
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/599#discussion_r9780202
still ugly...
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/599#discussion_r9780299
I thought that "Created spark context" signals a
significant step in starting spark-shell, so we'd better write it to the log file
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/599#discussion_r9780400
fixed the above two
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/599#issuecomment-35225862
@aarondav thank you for the comments, another round of fix
GitHub user CodingCat opened a pull request:
https://github.com/apache/incubator-spark/pull/614
[SPARK-1089] fix the problem in 0.9 where the ADD_JARS value was not recognized
https://spark-project.atlassian.net/browse/SPARK-1089
load jar in process() and work around for scala
GitHub user CodingCat opened a pull request:
https://github.com/apache/incubator-spark/pull/616
fix site scala version error in doc
fix site scala version error
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/CodingCat/incubator
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/616#issuecomment-35454588
thanks
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/616#issuecomment-35457861
Sorry, @pwendell , I oversimplified the situation
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/616#issuecomment-35459785
@pwendell, yes, I just grepped the string; it seems so.
I will fix this
GitHub user CodingCat opened a pull request:
https://github.com/apache/incubator-spark/pull/618
[SPARK-1105] fix site scala version error in docs
https://spark-project.atlassian.net/browse/SPARK-1105
fix site scala version error
You can merge this pull request into a Git
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/618#issuecomment-35460726
@pwendell @aarondav I'm sorry, I will be more careful next time
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/619#issuecomment-35497321
@mengxr the DOI link may not be accessible to non-paying users; I think the Yahoo
research link is relatively stable
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/618#issuecomment-35530603
done
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/618#issuecomment-35530650
thank you very much for your comments @pwendell @aarondav
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/619#issuecomment-35568283
@mengxr good point, I agree
GitHub user CodingCat opened a pull request:
https://github.com/apache/incubator-spark/pull/626
[SPARK-1100] prevent Spark from overwriting a directory silently and leaving a
dirty directory
Thanks to Diana Carroll for reporting this issue.
the current saveAsTextFile/SequenceFile
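A minimal sketch of the guard this PR proposes: fail fast when the output directory already exists instead of silently clobbering it. The real patch goes through the Hadoop `FileSystem` API; `java.io.File` and the function name are used here only to keep the example self-contained:

```scala
import java.io.File

// Hypothetical overwrite guard in the spirit of SPARK-1100: throw before
// writing if the target output directory is already present on disk.
def ensureOutputDirDoesNotExist(path: String): Unit = {
  if (new File(path).exists())
    throw new IllegalArgumentException(s"Output directory $path already exists")
}
```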
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/626#issuecomment-35731089
@mridulm Thank you for telling me the standard solution, I will revise my
patch today. I learnt a lot from the discussions with you on my other patches
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/626#issuecomment-35731273
@jyotiska that would be nice!
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/626#issuecomment-35760516
@mridulm I tested that and found that it is actually not handled in Spark,
.map(line => ("a", &
GitHub user CodingCat opened a pull request:
https://github.com/apache/incubator-spark/pull/634
[SPARK-1055] fix the SCALA_VERSION and SPARK_VERSION in docker file
As reported in https://spark-project.atlassian.net/browse/SPARK-1055
"The used Spark version in the ...
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/548#issuecomment-35805718
rebased
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/472#issuecomment-35805772
rebased
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/588#issuecomment-35806076
Has anyone noticed this PR?
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/614#issuecomment-35806096
any discussion on this?
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/614#discussion_r9972360
--- Diff: repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala ---
@@ -876,7 +876,14 @@ class SparkILoop(in0: Option[BufferedReader
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/634#discussion_r9972514
--- Diff: docker/spark-test/base/Dockerfile ---
@@ -25,8 +25,8 @@ RUN apt-get update
# install a few other useful packages plus Open Jdk 7
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/614#discussion_r9972883
--- Diff: repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala ---
@@ -876,7 +876,14 @@ class SparkILoop(in0: Option[BufferedReader
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/588#issuecomment-35813920
@shivaram yes, I checked; it works as long as the user sets SPARK_PUBLIC_DNS in
spark-env.sh. I remember I made a PR to spark-ec2, and you merged it
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/614#discussion_r9972921
--- Diff: repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala ---
@@ -876,7 +876,14 @@ class SparkILoop(in0: Option[BufferedReader
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/634#issuecomment-35819274
@pwendell @aarondav thanks
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/588#issuecomment-35823609
@shivaram thanks
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/635#issuecomment-35824430
I think they now require that every PR corresponds to a certain
JIRA... https://spark-project.atlassian.net/browse/SPARK
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/635#issuecomment-35824526
@coderxiang I mean they suggest including the JIRA id in the PR title, I
remember so... there was a discussion on the dev list... cc: @pwendell
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/635#issuecomment-35824561
ah, it's here:
http://apache-spark-developers-list.1001551.n3.nabble.com/Proposal-for-JIRA-and-Pull-Request-Policy-td505.html
GitHub user CodingCat opened a pull request:
https://github.com/apache/incubator-spark/pull/636
[SPARK-1102] Create a saveAsNewAPIHadoopDataset method
Create a saveAsNewAPIHadoopDataset method
By @mateiz: "Right now RDDs can only be saved as files using the new Hadoop
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/626#issuecomment-35838665
OK, fixed some bugs and squashed the commits, I think it's ready for
further review
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/626#issuecomment-35841703
@pwendell Thanks for the comments. I also considered what you mentioned,
but will that prevent other components like Spark Streaming from doing the
right job?
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/626#issuecomment-35842285
@pwendell the second situation can be avoided; sorry, just a brain-freeze
moment... the only issue is whether a component relies on the fact that
Spark allows
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/638#discussion_r9979453
--- Diff: core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala
---
@@ -617,6 +617,10 @@ class PairRDDFunctions[K: ClassTag, V: ClassTag
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/626#issuecomment-35851751
I just went through the Spark Streaming documentation; it seems that it's safe
to follow your suggestion @pwendell
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/626#issuecomment-35851996
but why not just prevent users from overwriting the directory, regardless of
whether it contains part-* files?
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/636#discussion_r9988672
--- Diff: core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala
---
@@ -686,6 +649,47 @@ class PairRDDFunctions[K: ClassTag, V: ClassTag
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/614#issuecomment-35880521
@ScrapCodes thank you very much
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/636#issuecomment-35905921
Jenkins, are you OK?
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/614#issuecomment-35908324
@ScrapCodes I'm looking at your suggestion; the difficulty in moving to
createInterpreter() is that you cannot pass the parameter "settings"
Github user CodingCat commented on the pull request:
https://github.com/apache/incubator-spark/pull/136#issuecomment-35947344
Is anyone still looking at this? I think an application-specific spread-out
option is good