Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2427#issuecomment-55977057
LGTM, retest this please!
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/2378#discussion_r17701227
--- Diff: python/pyspark/mllib/linalg.py ---
@@ -61,16 +195,19 @@ def __init__(self, size, *args):
if type(pairs) == dict:
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/2419#issuecomment-55977238
add to whitelist
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/2419#issuecomment-55977249
this is ok to test
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/2428#issuecomment-55977471
I've already investigated the other backends, but I think this issue occurs
only in SparkDeploySchedulerBackend.
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/2294#issuecomment-55977479
Adding new methods to a trait is a breaking change. We can mark `Vector` and
`Matrix` as sealed so that no one can extend them. From the Jenkins log:
~~~
[error] *
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/2378#discussion_r17701626
--- Diff: python/pyspark/mllib/linalg.py ---
@@ -61,16 +195,19 @@ def __init__(self, size, *args):
if type(pairs) == dict:
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2333#issuecomment-55977729
@sarutak In the long run, I'd be interested in re-writing the UI in terms
of a richer REST API that exposes data as JSON, exactly for the visualization
use-case that
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/577#discussion_r17701696
--- Diff: core/src/main/scala/org/apache/spark/storage/DiskStore.scala ---
@@ -73,7 +73,21 @@ private[spark] class DiskStore(blockManager:
BlockManager,
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/1977#issuecomment-55978000
Can you guys also add some tests that do rdd.groupByKey().filter().map(),
and skip some of the groups? As well as tests that iterate over the values in a
SameKey object
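A Spark-free Python analog can sketch the kind of test being asked for: group pairs by key, skip some groups, then map over the survivors, mirroring rdd.groupByKey().filter(...).map(...). Here `group_by_key` is an illustrative helper, not Spark's implementation.

```python
from collections import defaultdict

def group_by_key(pairs):
    # Collect all values for each key, like a local groupByKey.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return list(groups.items())

pairs = [("a", 1), ("b", 2), ("a", 3), ("c", 4)]
grouped = group_by_key(pairs)
# Skip group "b" (the filter step), then iterate each surviving
# group's values (the map step).
result = {key: sum(values) for key, values in grouped if key != "b"}
# result == {"a": 4, "c": 4}
```

A real test along these lines would also iterate a group's values more than once, which is exactly the behavior the later comments question.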
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/1977#issuecomment-55978058
Also, can you see what happens when you do rdd.groupByKey().cache()? Can we
serialize and deserialize these objects back to Scala-land? It's okay if we
can't cache overly
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2428#issuecomment-55978105
Ok, thanks! I'm merging this into master and 1.1.
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/2403#issuecomment-55978195
Yeah, these seem like false positives. @ScrapCodes can you take a look and
suggest how to update the MIMA rules if these are indeed false positives?
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/2403#issuecomment-55978382
Actually it may also be the case that the object doesn't work quite the way
the default companion object for a case class should. Can you double check that
stuff like
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/2403#issuecomment-55978495
Yeah I think this is an actual problem, see
https://issues.scala-lang.org/browse/SI-3664. Maybe we should just go with
Durations for simplicity.
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/2333#issuecomment-55978533
Thanks @JoshRosen !
So, for now, I'll use my own implementation for #2342, and I'll switch to the
feature you'll be trying later.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/2428
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/1358#issuecomment-55978678
I see, so maybe the problem is that an executor dies, and another is
launched on the same Mesos machine with the same executor ID, which then breaks
assumptions elsewhere
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/1358#issuecomment-55978771
BTW the delta from the original pull request would be that we only
increment our counter when the old executor fails. If you want to implement
that, please create a JIRA
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/2378#discussion_r17702050
--- Diff: python/pyspark/mllib/linalg.py ---
@@ -257,10 +410,34 @@ def stringify(vector):
Vectors.stringify(Vectors.dense([0.0, 1.0]))
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/2378#discussion_r17702101
--- Diff: python/pyspark/mllib/linalg.py ---
@@ -257,10 +410,34 @@ def stringify(vector):
Vectors.stringify(Vectors.dense([0.0, 1.0]))
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2413#issuecomment-55979003
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20504/consoleFull)
for PR 2413 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2435#issuecomment-55979465
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20499/consoleFull)
for PR 2435 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2413#issuecomment-55980504
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20505/consoleFull)
for PR 2413 at commit
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/2405#issuecomment-55980788
What will happen if I use this syntax in predicates?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2413#issuecomment-55980849
**[Tests timed
out](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20496/consoleFull)**
after a configured wait of `120m`.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2439#issuecomment-55981192
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20500/consoleFull)
for PR 2439 at commit
Github user OdinLin commented on the pull request:
https://github.com/apache/spark/pull/2423#issuecomment-55981845
Got it!
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2226#discussion_r17703451
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/SparkHadoopWriter.scala ---
@@ -0,0 +1,213 @@
+/*
+ * Licensed to the Apache Software
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/2378#discussion_r17703466
--- Diff: python/pyspark/mllib/tests.py ---
@@ -198,41 +212,36 @@ def test_serialize(self):
lil[1, 0] = 1
lil[3, 0] = 2
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2226#discussion_r17703479
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/SparkHadoopWriter.scala ---
@@ -0,0 +1,213 @@
+/*
+ * Licensed to the Apache Software
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2226#discussion_r17703524
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/SparkHadoopWriter.scala ---
@@ -0,0 +1,213 @@
+/*
+ * Licensed to the Apache Software
Github user brkyvz commented on the pull request:
https://github.com/apache/spark/pull/2294#issuecomment-55982010
Jenkins, test this please
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/2378#discussion_r17703595
--- Diff: python/pyspark/mllib/tree.py ---
@@ -90,53 +89,24 @@ class DecisionTree(object):
EXPERIMENTAL: This is an experimental API.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2294#issuecomment-55982351
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20506/consoleFull)
for PR 2294 at commit
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/2226#discussion_r17703788
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/SparkHadoopWriter.scala ---
@@ -0,0 +1,213 @@
+/*
+ * Licensed to the Apache Software
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/2344#discussion_r17703765
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/HiveTypeCoercion.scala
---
@@ -221,19 +221,35 @@ trait HiveTypeCoercion
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/2344#discussion_r17703775
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/HiveTypeCoercion.scala
---
@@ -221,19 +221,35 @@ trait HiveTypeCoercion
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/2379#issuecomment-55982531
OK, the code has been updated.
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/2344#discussion_r17703854
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala
---
@@ -95,6 +100,8 @@ case class Cast(child: Expression,
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2379#issuecomment-55982667
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20507/consoleFull)
for PR 2379 at commit
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/1977#issuecomment-55982725
@mateiz In this patch, the values in SameKey can only be iterated once; I
will fix this later.
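The one-shot-iteration pitfall being acknowledged here can be shown with a plain Python generator (a Spark-free analog, not the SameKey implementation itself): once consumed, a second pass over the same iterator sees nothing.

```python
def one_shot_values():
    # A generator exposes values as a single-pass iterator.
    yield from [1, 2, 3]

values = one_shot_values()
first_pass = sum(values)   # consumes the iterator -> 6
second_pass = sum(values)  # already exhausted -> 0
```

This is why callers who expect collection-like semantics (iterate, then iterate again) would see silently wrong results, which the reviewers treat as a blocking regression.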
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/2226#discussion_r17703914
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/SparkHadoopWriter.scala ---
@@ -0,0 +1,213 @@
+/*
+ * Licensed to the Apache Software
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/2344#discussion_r17703950
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala
---
@@ -140,6 +147,39 @@ case class Cast(child:
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/2419#issuecomment-55982927
Last build had a [problem fetching from
GitHub](https://amplab.cs.berkeley.edu/jenkins/view/Pull%20Request%20Builders/job/SparkPullRequestBuilder/20502/console).
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/2344#discussion_r17704012
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala
---
@@ -140,6 +147,39 @@ case class Cast(child:
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/2344#discussion_r17704067
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala
---
@@ -140,6 +147,39 @@ case class Cast(child:
GitHub user sryza opened a pull request:
https://github.com/apache/spark/pull/2440
SPARK-3574. Shuffle finish time always reported as -1
The included test waits 100 ms after job completion for task completion
events to come in so it can verify they have reasonable finish times.
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/2344#discussion_r17704236
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala
---
@@ -177,6 +223,8 @@ case class Cast(child: Expression,
GitHub user nchammas opened a pull request:
https://github.com/apache/spark/pull/2441
[Build] Test selective testing
This is a dummy PR to test the work done in #2420 and #2437.
Do not merge it.
You can merge this pull request into a Git repository by running:
$ git
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2226#issuecomment-55983475
Addressed @yhuai's comments except for adding more tests, will add them
soon.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2440#issuecomment-55983581
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20508/consoleFull)
for PR 2440 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2413#issuecomment-55983614
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20505/consoleFull)
for PR 2413 at commit
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2226#discussion_r17704323
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/SparkHadoopWriter.scala ---
@@ -0,0 +1,213 @@
+/*
+ * Licensed to the Apache Software
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2226#issuecomment-55983602
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20509/consoleFull)
for PR 2226 at commit
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/2344#issuecomment-55983850
A few comments on returning null vs. raising an exception in `Cast`, and on
which is the wider type between `Date` and `Timestamp`; otherwise LGTM.
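The "return null vs. raise" trade-off in `Cast` can be illustrated with a small Python analog (the function names and int target type are illustrative, not Spark's Cast API): the null-returning flavor swallows bad input as None, while the strict flavor propagates an error.

```python
def cast_to_int_or_null(value):
    # Hive-style semantics: an invalid cast yields null (None) instead of failing.
    try:
        return int(value)
    except (ValueError, TypeError):
        return None

def cast_to_int_strict(value):
    # Strict semantics: an invalid cast raises, surfacing the bad row immediately.
    return int(value)

assert cast_to_int_or_null("42") == 42
assert cast_to_int_or_null("abc") is None
```

The null-returning form is forgiving in large batch jobs; the strict form catches data problems earlier. The review comment is asking which contract `Cast` should commit to.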
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2441#issuecomment-55983920
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20510/consoleFull)
for PR 2441 at commit
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/2419#issuecomment-55983870
Jenkins, retest this please.
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2294#discussion_r1770
--- Diff: mllib/src/main/scala/org/apache/spark/mllib/linalg/BLAS.scala ---
@@ -197,4 +201,368 @@ private[mllib] object BLAS extends Serializable {
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2294#discussion_r17704453
--- Diff: mllib/src/main/scala/org/apache/spark/mllib/linalg/BLAS.scala ---
@@ -197,4 +201,368 @@ private[mllib] object BLAS extends Serializable {
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2294#discussion_r17704450
--- Diff: mllib/src/main/scala/org/apache/spark/mllib/linalg/BLAS.scala ---
@@ -197,4 +201,368 @@ private[mllib] object BLAS extends Serializable {
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2294#discussion_r17704455
--- Diff: mllib/src/main/scala/org/apache/spark/mllib/linalg/BLAS.scala ---
@@ -197,4 +201,368 @@ private[mllib] object BLAS extends Serializable {
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2294#discussion_r17704445
--- Diff: mllib/src/main/scala/org/apache/spark/mllib/linalg/BLAS.scala ---
@@ -197,4 +201,368 @@ private[mllib] object BLAS extends Serializable {
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2294#discussion_r17704447
--- Diff: mllib/src/main/scala/org/apache/spark/mllib/linalg/BLAS.scala ---
@@ -197,4 +201,368 @@ private[mllib] object BLAS extends Serializable {
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2294#discussion_r17704457
--- Diff: mllib/src/main/scala/org/apache/spark/mllib/linalg/BLAS.scala ---
@@ -197,4 +201,368 @@ private[mllib] object BLAS extends Serializable {
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2294#discussion_r17704449
--- Diff: mllib/src/main/scala/org/apache/spark/mllib/linalg/BLAS.scala ---
@@ -197,4 +201,368 @@ private[mllib] object BLAS extends Serializable {
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2294#discussion_r17704452
--- Diff: mllib/src/main/scala/org/apache/spark/mllib/linalg/BLAS.scala ---
@@ -197,4 +201,368 @@ private[mllib] object BLAS extends Serializable {
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2294#discussion_r17704454
--- Diff: mllib/src/main/scala/org/apache/spark/mllib/linalg/BLAS.scala ---
@@ -197,4 +201,368 @@ private[mllib] object BLAS extends Serializable {
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2294#discussion_r17704446
--- Diff: mllib/src/main/scala/org/apache/spark/mllib/linalg/BLAS.scala ---
@@ -197,4 +201,368 @@ private[mllib] object BLAS extends Serializable {
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2421#issuecomment-55984128
Use 127 instead; it is the largest prime number less than 128.
How about it, guys?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2419#issuecomment-55984228
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20512/consoleFull)
for PR 2419 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2421#issuecomment-55984246
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20511/consoleFull)
for PR 2421 at commit
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2427#issuecomment-55984199
@andrewor14 Looks like Jenkins is not triggered?
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2421#discussion_r17704596
--- Diff: sbin/start-thriftserver.sh ---
@@ -27,7 +27,7 @@ set -o posix
FWDIR=$(cd `dirname $0`/..; pwd)
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/1977#issuecomment-55984553
We can't merge this patch until that's done then, because that would be a
regression. In general we try to keep even master free of regressions because
quite a few people
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2441#issuecomment-55984847
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20513/consoleFull)
for PR 2441 at commit
Github user larryxiao commented on the pull request:
https://github.com/apache/spark/pull/1903#issuecomment-55985270
Thanks Ankur!
I learned something :)
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/577#discussion_r17704914
--- Diff: core/src/main/scala/org/apache/spark/storage/DiskStore.scala ---
@@ -73,7 +73,21 @@ private[spark] class DiskStore(blockManager:
BlockManager,
Github user zsxwing commented on the pull request:
https://github.com/apache/spark/pull/1541#issuecomment-55985572
@JoshRosen do you think it's OK?
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2427#issuecomment-55985750
test this please
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2421#issuecomment-55985869
@WangTaoTheTonic According to the wiki page @vanzin pointed out, values
above 125 are used by bash for special purposes. Since the purpose of this PR
is to reduce the
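The shell convention under discussion can be checked directly: a POSIX shell reserves exit statuses above 125 for its own signaling, in particular 127 for "command not found" (and 126 for "found but not executable"). A minimal sketch, assuming a POSIX shell, with a deliberately nonexistent command name:

```python
import subprocess

# Running a nonexistent command through the shell: the shell itself
# reports status 127, since the command could not be found.
result = subprocess.run(
    "definitely_not_a_real_command_xyz",
    shell=True,
    capture_output=True,  # suppress the "command not found" noise
)
print(result.returncode)  # 127 on a POSIX shell
```

This is why an application choosing its own exit codes (as this PR does) should stay at or below 125 to avoid colliding with the shell's reserved values.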
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/2232#issuecomment-55985896
Hmm. My feeling is that it's better to be consistent here and consider the
old behavior a bug than to maintain compatibility just to support a cornerish
case for a
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/2407#issuecomment-55985941
+1, LGTM. I think this can go first, as it's simpler and cleaner (and may
fix some unknown bugs); besides, it blocks #2344.
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/2441#issuecomment-55986032
Jenkins, retest this please.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2441#issuecomment-55986258
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20517/consoleFull)
for PR 2441 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2294#issuecomment-55986305
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20506/consoleFull)
for PR 2294 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2226#issuecomment-55986374
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20509/consoleFull)
for PR 2226 at commit
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2421#issuecomment-55986443
Sorry for not noticing: "If a command is not found, the child process
created to execute it returns a status of 127. If a command is found but is not
executable,
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2419#issuecomment-55986521
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20512/consoleFull)
for PR 2419 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2421#issuecomment-55986533
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20518/consoleFull)
for PR 2421 at commit
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2427#issuecomment-55986575
He might be tired. -_-
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2379#issuecomment-55986568
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20507/consoleFull)
for PR 2379 at commit
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/2441#issuecomment-55986600
@marmbrus @pwendell @rxin
Here are the Jenkins outputs for when:
* [Only SQL
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2440#issuecomment-55986579
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20508/consoleFull)
for PR 2440 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2441#issuecomment-55986709
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20510/consoleFull)
for PR 2441 at commit
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2232#issuecomment-55986797
Yes, probably. This list is by no means complete. For `spark.yarn.dist.*`
however, it seems that there is some prior discussion on keeping it unresolved
(i.e.
Github user jkbradley commented on the pull request:
https://github.com/apache/spark/pull/2378#issuecomment-55987147
@davies This looks like a great PR! I don't see major issues, though +1
to the remarks about checking for performance regressions. Pending performance
testing and
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2379#issuecomment-55987569
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20519/consoleFull)
for PR 2379 at commit
Github user cloud-fan commented on the pull request:
https://github.com/apache/spark/pull/2405#issuecomment-55987610
@yhuai It's hard to define the semantics of f1.f11 f2.f22, as they are
arbitrarily nested arrays. What if the array sizes are not equal? What if the
nesting level is not
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2421#issuecomment-55989137
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20511/consoleFull)
for PR 2421 at commit