Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/964#issuecomment-46398850
Btw, we shouldn't use default parameters in method definitions. They are
convenient in Scala but not Java friendly. Also, this makes it hard for us to
maintain binary compatibility.
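The point above can be sketched as follows. This is a hypothetical illustration, not the actual RowMatrix API: a Scala default parameter compiles to a synthetic `$default$N` method in bytecode, which Java callers cannot use idiomatically, and which makes binary-compatibility checks harder. Explicit overloads avoid both problems:

```scala
// Hypothetical example: explicit overloads instead of a default parameter.
// Instead of:
//   def computeSVD(k: Int, computeU: Boolean = false): String
// which emits a synthetic computeSVD$default$2 method, write:
object SvdExample {
  // Java-friendly convenience overload delegating to the full method.
  def computeSVD(k: Int): String = computeSVD(k, computeU = false)

  def computeSVD(k: Int, computeU: Boolean): String =
    s"svd(k=$k, computeU=$computeU)"
}
```

From Java, both `computeSVD(5)` and `computeSVD(5, true)` are now ordinary method calls, and adding a new overload later does not change the existing ones' signatures.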
Github user gregakespret closed the pull request at:
https://github.com/apache/spark/pull/1109
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature
Github user gregakespret commented on the pull request:
https://github.com/apache/spark/pull/1109#issuecomment-46399053
@rxin Sure, PR closed. At the time I created this PR, the other one hadn't
been merged yet, I suppose.
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1109#issuecomment-46399205
Yup, looks like a race condition (in a good way). Thanks a lot for
catching this!
Github user rezazadeh commented on a diff in the pull request:
https://github.com/apache/spark/pull/964#discussion_r13900614
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/RowMatrix.scala
---
@@ -220,16 +247,43 @@ class RowMatrix(
}
Github user harishreedharan commented on a diff in the pull request:
https://github.com/apache/spark/pull/807#discussion_r13900807
--- Diff:
external/flume-sink/src/main/scala/org/apache/spark/flume/sink/SparkSink.scala
---
@@ -0,0 +1,432 @@
+/*
+ * Licensed to the Apache
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/813#issuecomment-46400984
Merged build triggered.
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/813#discussion_r13901312
--- Diff: python/pyspark/join.py ---
@@ -79,15 +79,15 @@ def dispatch(seq):
return _do_python_join(rdd, other, numPartitions, dispatch)
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/813#discussion_r13901329
--- Diff: python/pyspark/rdd.py ---
@@ -1324,11 +1324,11 @@ def mapValues(self, f):
return self.map(map_values_fn, preservesPartitioning=True)
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/813#discussion_r13901345
--- Diff: python/pyspark/join.py ---
@@ -79,15 +79,15 @@ def dispatch(seq):
return _do_python_join(rdd, other, numPartitions, dispatch)
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/1106#issuecomment-46401368
Another situation is that the worker list changes frequently, which will
cause drivers to be relaunched a lot.
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/813#issuecomment-46401560
Hey @douglaz, thanks for updating this. One thing missing here is tests in
each of the languages -- please add them so that this code will be tested later.
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/813#discussion_r13901535
--- Diff: python/pyspark/join.py ---
@@ -79,15 +79,15 @@ def dispatch(seq):
return _do_python_join(rdd, other, numPartitions, dispatch)
Github user dorx commented on the pull request:
https://github.com/apache/spark/pull/1025#issuecomment-46402513
@srowen Hey Sean, turns out Colt(or breeze for that matter) doesn't have
inverseCDF for the distributions, which I need for the implementation. The
original thought was to
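When a library exposes only the CDF, a generic inverse CDF can be approximated numerically. The sketch below uses bisection on a monotone CDF; the object name, bounds, and tolerance are illustrative, not from the PR:

```scala
// Illustrative sketch: invert a monotone CDF by bisection when the
// library (Colt/Breeze at the time) provides no inverseCDF directly.
object InverseCdf {
  def invert(cdf: Double => Double, p: Double,
             lo: Double, hi: Double, tol: Double = 1e-9): Double = {
    require(p > 0 && p < 1, "p must be in (0, 1)")
    var a = lo
    var b = hi
    // Shrink [a, b] until it brackets the p-quantile within tol.
    while (b - a > tol) {
      val mid = (a + b) / 2
      if (cdf(mid) < p) a = mid else b = mid
    }
    (a + b) / 2
  }
}
```

For example, inverting the Uniform(0, 1) CDF (`x => x`) at `p = 0.25` returns approximately 0.25. The cost is O(log((hi - lo) / tol)) CDF evaluations per quantile.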
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/813#issuecomment-46403384
Merged build finished. All automated tests passed.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/813#issuecomment-46403385
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15866/
Github user BaiGang commented on the pull request:
https://github.com/apache/spark/pull/1104#issuecomment-46403426
Thanks, @dbtsai .
Just did a git rebase. Will wait for the conclusion of consolidated
interfaces.
Github user vrilleup commented on the pull request:
https://github.com/apache/spark/pull/964#issuecomment-46405519
Hi Xiangrui,
Thank you for the comments! For the API, I think separating svd and svds
would be a better design. The user should choose which implementation (dense or
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1091#issuecomment-46405754
Thanks for working on this, @ScrapCodes. I talked with Matei and while we
both agree compression would be better set on a per-RDD basis, adding another
boolean flag to
Github user vrilleup commented on a diff in the pull request:
https://github.com/apache/spark/pull/964#discussion_r13903174
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/RowMatrix.scala
---
@@ -220,16 +247,43 @@ class RowMatrix(
}
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1105#discussion_r13903361
--- Diff: core/src/main/scala/org/apache/spark/util/MetadataCleaner.scala
---
@@ -91,8 +91,13 @@ private[spark] object MetadataCleaner {
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1103#discussion_r13903517
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -363,6 +363,12 @@ private[spark] class BlockManager(
val info =
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1103#issuecomment-46406863
This LGTM actually. Makes sense to do another check within the synchronized
block in case a block is being removed by another thread.
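The pattern being approved can be sketched as follows. `BlockStore`, `blockInfo`, and `doGetLocal` here are illustrative stand-ins, not the actual BlockManager internals: after acquiring the lock, the code re-checks that the entry still exists, because another thread may have removed it between the first lookup and lock acquisition.

```scala
import scala.collection.mutable

// Sketch of the double-check pattern discussed above (names are
// illustrative, not the real BlockManager code).
class BlockStore {
  private val blockInfo = mutable.Map[String, AnyRef]()

  def put(blockId: String, value: AnyRef): Unit =
    blockInfo.synchronized { blockInfo(blockId) = value }

  def doGetLocal(blockId: String): Option[AnyRef] =
    blockInfo.get(blockId).flatMap { info =>
      info.synchronized {
        // Double-check: the block may have been removed by another
        // thread while we were waiting for the lock.
        if (blockInfo.contains(blockId)) Some(info) else None
      }
    }
}
```

Without the second check, a reader could act on stale info for a block that a concurrent remove has already dropped.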
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1095#issuecomment-46407129
Jenkins, test this please.
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1095#issuecomment-46407125
Do you mind updating the pull request title to say something like
[SPARK-2151] Recognize memory format for spark-submit?
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1095#issuecomment-46407449
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1095#issuecomment-46407455
Merged build started.
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1087#issuecomment-46408239
Just leaving a note that this PR has been reverted, because changing the
parameter name in Scala could make the function no longer source-compatible
...
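The source-compatibility hazard is that Scala callers may use named arguments, which depend on parameter names even though the JVM signature is unchanged. A hypothetical example (these objects are not from the PR):

```scala
// Hypothetical before/after of a parameter rename.
object V1 {
  def persist(useDisk: Boolean): String = s"persist($useDisk)"
}

// After renaming useDisk -> toDisk, the JVM signature is identical,
// but any caller written as `persist(useDisk = true)` no longer
// compiles against V2: the parameter name is part of the source API.
object V2 {
  def persist(toDisk: Boolean): String = s"persist($toDisk)"
}
```

So a rename that is binary-compatible can still be a source-breaking change, which is why reverting was the safe call here.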
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1103#issuecomment-46408334
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1103#issuecomment-46408343
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1095#issuecomment-46410627
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1095#issuecomment-46410629
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15867/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1103#issuecomment-46411394
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1103#issuecomment-46411395
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15868/
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/1104#issuecomment-46411430
Jenkins, test this please.
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/1104#discussion_r13905413
--- Diff:
mllib/src/test/scala/org/apache/spark/mllib/optimization/LBFGSSuite.scala ---
@@ -195,4 +195,39 @@ class LBFGSSuite extends FunSuite with
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/1112
SPARK-1291: Link the spark UI to RM ui in yarn-client mode
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK-1291
Alternatively
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/1104#discussion_r13905479
--- Diff:
mllib/src/test/scala/org/apache/spark/mllib/optimization/LBFGSSuite.scala ---
@@ -195,4 +195,39 @@ class LBFGSSuite extends FunSuite with
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/1104#discussion_r13905461
--- Diff:
mllib/src/test/scala/org/apache/spark/mllib/optimization/LBFGSSuite.scala ---
@@ -195,4 +195,39 @@ class LBFGSSuite extends FunSuite with
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1104#issuecomment-46411830
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1104#issuecomment-46411812
Merged build triggered.
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/1104#issuecomment-46411779
The change looks good to me. Let us wait for Jenkins and MIMA.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1112#issuecomment-46411809
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1112#issuecomment-46411829
Merged build started.
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/1104#discussion_r13905548
--- Diff:
mllib/src/test/scala/org/apache/spark/mllib/optimization/LBFGSSuite.scala ---
@@ -195,4 +195,39 @@ class LBFGSSuite extends FunSuite with
Github user dbtsai commented on the pull request:
https://github.com/apache/spark/pull/1104#issuecomment-46412293
I think changing the signature will be a problem for MIMA.
GitHub user lianhuiwang opened a pull request:
https://github.com/apache/spark/pull/1113
add ability to submit multiple jars for Driver
add ability to submit multiple jars for Driver
You can merge this pull request into a Git repository by running:
$ git pull
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1113#issuecomment-46413148
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1113#issuecomment-46413162
Merged build started.
Github user nishkamravi2 commented on the pull request:
https://github.com/apache/spark/pull/1095#issuecomment-46413404
Thanks @vanzin, @rxin , updated the title
GitHub user lianhuiwang opened a pull request:
https://github.com/apache/spark/pull/1114
discarded exceeded completedDrivers
When the number of completedDrivers exceeds the threshold, the first
max(spark.deploy.retainedDrivers, 1) entries will be discarded.
You can merge this pull request into a
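The retention policy described above amounts to a capped history buffer. A minimal sketch, with illustrative names (`retained` mirrors `spark.deploy.retainedDrivers`); the exact batch size the patch discards is a detail of the PR, so here the oldest entries beyond the cap are simply dropped:

```scala
import scala.collection.mutable.ArrayBuffer

// Illustrative sketch of a capped completed-drivers history.
object DriverRetention {
  def trimCompletedDrivers(completed: ArrayBuffer[String],
                           retained: Int): Unit = {
    if (completed.size > retained) {
      // Drop the oldest entries so at most `retained` remain.
      completed.remove(0, completed.size - retained)
    }
  }
}
```

This keeps the master's memory bounded no matter how many drivers complete over its lifetime.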
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1114#issuecomment-46414499
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1114#issuecomment-46414512
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1112#issuecomment-46415412
Merged build finished. All automated tests passed.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1104#issuecomment-46415414
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15870/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1112#issuecomment-46415415
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15869/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1104#issuecomment-46415413
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1113#issuecomment-46416660
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15871/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1113#issuecomment-46416659
Merged build finished. All automated tests passed.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1114#issuecomment-46417682
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15872/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1114#issuecomment-46417681
Merged build finished.
Github user epahomov commented on a diff in the pull request:
https://github.com/apache/spark/pull/1107#discussion_r13908570
--- Diff: repl/src/main/scala/org/apache/spark/repl/SparkIMain.scala ---
@@ -102,7 +102,8 @@ import org.apache.spark.util.Utils
val
Github user epahomov commented on a diff in the pull request:
https://github.com/apache/spark/pull/1107#discussion_r13908643
--- Diff:
core/src/main/scala/org/apache/spark/network/ConnectionManager.scala ---
@@ -102,7 +102,24 @@ private[spark] class ConnectionManager(port: Int,
Github user epahomov commented on a diff in the pull request:
https://github.com/apache/spark/pull/1107#discussion_r13908739
--- Diff: core/src/main/scala/org/apache/spark/HttpServer.scala ---
@@ -41,45 +41,73 @@ private[spark] class ServerStateException(message:
String) extends
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1107#discussion_r13908816
--- Diff:
core/src/main/scala/org/apache/spark/network/ConnectionManager.scala ---
@@ -102,7 +102,24 @@ private[spark] class ConnectionManager(port: Int,
Github user epahomov commented on a diff in the pull request:
https://github.com/apache/spark/pull/1107#discussion_r13908948
--- Diff:
core/src/main/scala/org/apache/spark/network/ConnectionManager.scala ---
@@ -102,7 +102,24 @@ private[spark] class ConnectionManager(port: Int,
Github user BaiGang commented on a diff in the pull request:
https://github.com/apache/spark/pull/1104#discussion_r13910039
--- Diff:
mllib/src/test/scala/org/apache/spark/mllib/optimization/LBFGSSuite.scala ---
@@ -195,4 +195,39 @@ class LBFGSSuite extends FunSuite with
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/1087#issuecomment-46435921
Is there a place we're collecting a backlog of changes queued for the next
API breaking release?
On Jun 18, 2014 4:23 AM, Reynold Xin notificati...@github.com
Github user markhamstra commented on the pull request:
https://github.com/apache/spark/pull/999#issuecomment-46436035
Hmmm, that doesn't precisely match my recollection or understanding.
Certainly we discussed that alpha components aren't required to maintain a
stable API, but I
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/1105#issuecomment-46437289
Updated.
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/906#issuecomment-46443977
I think this is good to go. The initial test passed, but recent ones
errored out. Just to double-check:
Jenkins, test this please.
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/980#issuecomment-46444091
Pardon, could I ping this issue for review and consideration for commit? I
think it's a clean fix and improvement.
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1106#issuecomment-46446620
I'm not sure about the efficiency of changing to another mode for the extreme
case where some worker (exactly the one running a lot of drivers) joins
and leaves
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/906#issuecomment-46448348
Jenkins, test this please.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/906#issuecomment-46448599
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/906#issuecomment-46448580
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1100#issuecomment-46449272
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1115#issuecomment-46458901
Can one of the admins verify this patch?
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/1103#issuecomment-46460187
Hm, ok. Maybe at the least we want to logWarning then? This currently
obscures a potential exception for the DISK_ONLY case
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/906#issuecomment-46460807
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15873/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1100#issuecomment-46460806
Merged build finished. All automated tests passed.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1100#issuecomment-46460809
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15874/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/906#issuecomment-46460805
Merged build finished.
Github user etrain commented on a diff in the pull request:
https://github.com/apache/spark/pull/886#discussion_r13926351
--- Diff:
examples/src/main/scala/org/apache/spark/examples/mllib/DecisionTreeRunner.scala
---
@@ -49,6 +49,7 @@ object DecisionTreeRunner {
case class
Github user AndreSchumacher commented on a diff in the pull request:
https://github.com/apache/spark/pull/360#discussion_r13926401
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/parquet/ParquetConverter.scala ---
@@ -0,0 +1,667 @@
+/*
+ * Licensed to the Apache
Github user etrain commented on a diff in the pull request:
https://github.com/apache/spark/pull/886#discussion_r13926460
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/tree/DecisionTree.scala ---
@@ -45,7 +46,7 @@ class DecisionTree (private val strategy: Strategy)
Github user AndreSchumacher commented on a diff in the pull request:
https://github.com/apache/spark/pull/360#discussion_r13926479
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/parquet/ParquetConverter.scala ---
@@ -0,0 +1,667 @@
+/*
+ * Licensed to the Apache
Github user etrain commented on a diff in the pull request:
https://github.com/apache/spark/pull/886#discussion_r13926555
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/tree/DecisionTree.scala ---
@@ -212,7 +211,9 @@ object DecisionTree extends Serializable with Logging {
Github user etrain commented on a diff in the pull request:
https://github.com/apache/spark/pull/886#discussion_r13926606
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/tree/DecisionTree.scala ---
@@ -233,13 +234,73 @@ object DecisionTree extends Serializable with Logging
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1116#issuecomment-46465139
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1116#issuecomment-46465158
Merged build started.
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/1115#issuecomment-46464659
(This PR has way more than you intend -- thousands of files changed,
hundreds of commits. You need to rebase the branch on master.)
Github user etrain commented on the pull request:
https://github.com/apache/spark/pull/886#issuecomment-46465872
I've taken a first pass at this and at a high level it looks good.
The two main things I'd say are
1) I think an implicit that converts LabeledPoint to
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/1116#issuecomment-46466207
Good catch, thanks :-)
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/917#issuecomment-46467351
Another friendly ping. Could I get some eyes on this change? It's pretty
trivial.
Github user concretevitamin commented on the pull request:
https://github.com/apache/spark/pull/1116#issuecomment-46469383
Sorry for introducing the bug in the first place. Just throwing a thought
out there: I have found that there are a lot of arguably hidden assumptions and
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1116#issuecomment-46469732
That's not a bad idea. Also we should add more documentation. While Spark
SQL code in general is extremely concise, it can be hard to understand
(especially the optimizer
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1116#issuecomment-46469830
Thanks. Merging this in master & branch-1.0.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1116
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1103#issuecomment-46470613
Thanks. I'm merging this in master.