Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4948#issuecomment-77841961
Can one of the admins verify this patch?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4948#issuecomment-77855093
As I mentioned, I don't think it's efficient to try to make changes like
this one line at a time. There are a number of warnings like this, and other
build warnings in
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4950#discussion_r26036340
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/recommendation/MatrixFactorizationModel.scala
---
@@ -199,12 +199,12 @@ object MatrixFactorizationModel
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4950#issuecomment-77856603
[Test build #28392 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28392/consoleFull)
for PR 4950 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4938#issuecomment-77860204
[Test build #28391 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28391/consoleFull)
for PR 4938 at commit
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4899#discussion_r26038573
--- Diff:
examples/src/main/scala/org/apache/spark/examples/mllib/LDAExample.scala ---
@@ -174,6 +174,7 @@ object LDAExample {
// Get dataset
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4916
Github user haiyangsea commented on a diff in the pull request:
https://github.com/apache/spark/pull/4929#discussion_r26027360
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -287,6 +282,20 @@ class SQLQuerySuite extends QueryTest with
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4950#discussion_r26036401
--- Diff:
external/kafka/src/test/java/org/apache/spark/streaming/kafka/JavaKafkaRDDSuite.java
---
@@ -19,23 +19,19 @@
import
Github user viirya closed the pull request at:
https://github.com/apache/spark/pull/4949
Github user haiyangsea commented on a diff in the pull request:
https://github.com/apache/spark/pull/4939#discussion_r26028022
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/commands.scala ---
@@ -208,6 +209,87 @@ case class DescribeCommand(
}
/**
Github user epahomov closed the pull request at:
https://github.com/apache/spark/pull/2731
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4947#issuecomment-77836711
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user epahomov commented on the pull request:
https://github.com/apache/spark/pull/2731#issuecomment-77836797
My PR is too old for the current architecture, and I have already found too
much to improve in it. I'll do better and resubmit.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4947#issuecomment-77836705
[Test build #28388 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28388/consoleFull)
for PR 4947 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4938#issuecomment-77848860
[Test build #28391 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28391/consoleFull)
for PR 4938 at commit
GitHub user viirya opened a pull request:
https://github.com/apache/spark/pull/4949
[SPARK-6224][SQL] Also collect NamedExpressions in PhysicalOperation
Currently in `PhysicalOperation`, only `Alias` expressions are collected.
Similarly, `NamedExpression` can be collected for
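The change this PR describes can be sketched with a toy expression hierarchy. The classes below only mimic the shape of Catalyst's `Alias`/`NamedExpression` types and are not the real Spark SQL classes; this is an illustrative sketch of the pattern, not code from the patch:

```scala
// Toy stand-ins for Catalyst expression types (illustrative only).
sealed trait Expression
case class AttributeReference(name: String) extends Expression
case class Alias(child: Expression, name: String) extends Expression
case class Literal(value: Any) extends Expression

object CollectNamed {
  // Per SPARK-6224's description: collect substitutions for any named
  // expression in a project list, not only for Alias.
  def collectNamed(projectList: Seq[Expression]): Map[String, Expression] =
    projectList.collect {
      case Alias(child, name)             => name -> child
      case ref @ AttributeReference(name) => name -> ref
    }.toMap

  def main(args: Array[String]): Unit = {
    val subs = collectNamed(Seq(Alias(Literal(1), "one"), AttributeReference("id")))
    println(subs.keys.toList.sorted.mkString(","))  // id,one
  }
}
```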
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4949#issuecomment-77848861
[Test build #28390 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28390/consoleFull)
for PR 4949 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4949#issuecomment-77853437
[Test build #28390 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28390/consoleFull)
for PR 4949 at commit
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/4950
SPARK-6225 [CORE] [SQL] [STREAMING] Resolve most build warnings, 1.3.0
edition
Resolve javac, scalac warnings of various types -- deprecations, Scala
lang, unchecked cast, etc.
You can merge this
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4929#issuecomment-77834974
[Test build #28389 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28389/consoleFull)
for PR 4929 at commit
GitHub user vinodkc opened a pull request:
https://github.com/apache/spark/pull/4948
[SPARK-6223][SQL] Fix build warning: make implicit value
scala.language.existentials visible
You can merge this pull request into a Git repository by running:
$ git pull
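The feature warning this PR targets can be reproduced in a few lines of plain Scala; everything below is an illustrative sketch under the assumption that the warning in question is scalac's `-feature` existential-type warning, not code from the patch:

```scala
// Without this import, the existential type inferred below makes scalac
// (under -feature) emit a "reflective/existential feature" warning.
import scala.language.existentials

object ExistentialsDemo {
  // The least upper bound of Class[String] and Class[Thread] involves an
  // existential type, which is what triggers the warning.
  val classes = List(classOf[String], classOf[Thread])

  def main(args: Array[String]): Unit =
    println(classes.map(_.getSimpleName).mkString(","))  // String,Thread
}
```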
Github user yanboliang commented on the pull request:
https://github.com/apache/spark/pull/4911#issuecomment-77854374
@mengxr Yes, it makes sense. I will try to implement the save/load operation
in Python, doing the same thing as in Scala.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4929#issuecomment-77844025
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4929#issuecomment-77844015
[Test build #28389 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28389/consoleFull)
for PR 4929 at commit
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4950#discussion_r26036283
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -1104,7 +1104,7 @@ class SparkContext(config: SparkConf) extends Logging
with
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4948#discussion_r26036226
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/sources/ddl.scala ---
@@ -18,6 +18,7 @@
package org.apache.spark.sql.sources
import
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4951#issuecomment-77861051
[Test build #28393 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28393/consoleFull)
for PR 4951 at commit
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/4947#discussion_r26033358
--- Diff:
core/src/main/scala/org/apache/spark/serializer/KryoSerializer.scala ---
@@ -158,7 +158,13 @@ private[spark] class KryoSerializerInstance(ks:
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/4938#discussion_r26034326
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/columnar/ColumnAccessor.scala ---
@@ -107,24 +110,28 @@ private[sql] class
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4949#issuecomment-77853448
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4938#issuecomment-77860216
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/4938#discussion_r26034375
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/columnar/ColumnAccessor.scala ---
@@ -107,24 +110,28 @@ private[sql] class
GitHub user yinxusen opened a pull request:
https://github.com/apache/spark/pull/4951
[SPARK-5986][MLLib] Add save/load for k-means
This PR adds save/load for K-means as described in SPARK-5986. Python
version will be added in another PR.
You can merge this pull request into a Git
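A minimal sketch of the save/load round trip such a PR provides. The real MLlib implementation persists models through Spark's data sources; this standalone version uses plain Java serialization on a hypothetical stand-in model class, so all names below are illustrative assumptions:

```scala
import java.io._

// Toy stand-in for KMeansModel: just the cluster centers.
case class SimpleKMeansModel(clusterCenters: Array[Array[Double]]) extends Serializable

object KMeansPersistence {
  // Write the model's parameters out to a file.
  def save(model: SimpleKMeansModel, path: String): Unit = {
    val out = new ObjectOutputStream(new FileOutputStream(path))
    try out.writeObject(model) finally out.close()
  }

  // Read the parameters back and reconstruct an equivalent model.
  def load(path: String): SimpleKMeansModel = {
    val in = new ObjectInputStream(new FileInputStream(path))
    try in.readObject().asInstanceOf[SimpleKMeansModel] finally in.close()
  }

  def main(args: Array[String]): Unit = {
    val path = File.createTempFile("kmeans-model", ".bin").getPath
    save(SimpleKMeansModel(Array(Array(0.0, 0.0), Array(1.0, 1.0))), path)
    val restored = load(path)
    println(restored.clusterCenters.map(_.mkString(" ")).mkString("; "))
  }
}
```

The contract being illustrated: `load(save(model))` yields a model with the same cluster centers.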
Github user levkhomich commented on a diff in the pull request:
https://github.com/apache/spark/pull/4947#discussion_r26041766
--- Diff:
core/src/main/scala/org/apache/spark/serializer/KryoSerializer.scala ---
@@ -158,7 +158,13 @@ private[spark] class KryoSerializerInstance(ks:
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4947#issuecomment-77874158
[Test build #28397 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28397/consoleFull)
for PR 4947 at commit
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/4945#discussion_r26044965
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/HiveTypeCoercion.scala
---
@@ -220,6 +220,22 @@ trait HiveTypeCoercion {
Github user levkhomich commented on a diff in the pull request:
https://github.com/apache/spark/pull/4947#discussion_r26038649
--- Diff:
core/src/main/scala/org/apache/spark/serializer/KryoSerializer.scala ---
@@ -158,7 +158,13 @@ private[spark] class KryoSerializerInstance(ks:
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4951#issuecomment-77861968
[Test build #28394 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28394/consoleFull)
for PR 4951 at commit
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4873#issuecomment-77863414
Sorry to bug @pwendell again but I think you may also be familiar with this
script. I went to the extreme and removed the check for Hive jars entirely.
Datanucleus goes
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/4926#discussion_r26041674
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUdfs.scala
---
@@ -179,7 +179,12 @@ private[hive] case class
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/4945#discussion_r26044586
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/HiveTypeCoercion.scala
---
@@ -269,6 +285,14 @@ trait HiveTypeCoercion {
Github user levkhomich commented on a diff in the pull request:
https://github.com/apache/spark/pull/4947#discussion_r26044200
--- Diff:
core/src/main/scala/org/apache/spark/serializer/KryoSerializer.scala ---
@@ -158,7 +158,13 @@ private[spark] class KryoSerializerInstance(ks:
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/3597#issuecomment-77874151
Mind closing this PR? I do not think this change is right for Spark.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4602#issuecomment-77879123
[Test build #28395 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28395/consoleFull)
for PR 4602 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4873#issuecomment-77880845
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/1611#issuecomment-77875648
Mind closing this PR?
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4950#issuecomment-77875679
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user koeninger commented on the pull request:
https://github.com/apache/spark/pull/4805#issuecomment-77882744
As it stands now, no offsets are stored by Spark unless you're
checkpointing. Does it really make sense to have an option to
automatically store offsets in
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4634#issuecomment-77862472
@pwendell @rxin I'd like to merge this, and while I'm all but sure the API
change question is OK, I'd feel better if a maintainer could give it a look.
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2110#issuecomment-77870526
I think this contribution may have timed out, along with
https://github.com/apache/spark/pull/2096 . They're probably good
implementations, but I am not clear if this
Github user koeninger commented on a diff in the pull request:
https://github.com/apache/spark/pull/4805#discussion_r26048829
--- Diff:
external/kafka/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala
---
@@ -84,6 +83,11 @@ class
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/4919#issuecomment-77883455
Yeah, if @JoshRosen (who wrote the original `setup_boto()` function) can't
take a look, maybe @shivaram can give this a look.
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4919#issuecomment-77869609
Obviously I'd like to get another actual active EC2 user to review this,
but the principle looks fine. This is refactoring the boto-specific mechanism
to be general and
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2096#issuecomment-77870482
I think this contribution may have timed out, along with
https://github.com/apache/spark/pull/2110 . They're probably good
implementations, but I am not clear if this
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4951#issuecomment-77878863
[Test build #28394 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28394/consoleFull)
for PR 4951 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4951#issuecomment-77878875
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4873#issuecomment-77880824
[Test build #28396 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28396/consoleFull)
for PR 4873 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4873#issuecomment-77863875
[Test build #28396 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28396/consoleFull)
for PR 4873 at commit
Github user vinodkc closed the pull request at:
https://github.com/apache/spark/pull/4948
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4947#discussion_r26041428
--- Diff:
core/src/main/scala/org/apache/spark/serializer/KryoSerializer.scala ---
@@ -158,7 +158,13 @@ private[spark] class KryoSerializerInstance(ks:
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4947#issuecomment-77874328
LGTM. I'll wait a bit longer for more comments.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4950#issuecomment-77875660
[Test build #28392 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28392/consoleFull)
for PR 4950 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4602#issuecomment-77879139
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4602#issuecomment-77862925
[Test build #28395 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28395/consoleFull)
for PR 4602 at commit
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/4926#issuecomment-77866817
`SELECT 1` seems not to work in Hive 0.12; it was probably introduced in Hive
0.13. See: https://issues.apache.org/jira/browse/HIVE-4144
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/4947#discussion_r26041118
--- Diff:
core/src/main/scala/org/apache/spark/serializer/KryoSerializer.scala ---
@@ -158,7 +158,13 @@ private[spark] class KryoSerializerInstance(ks:
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2477#issuecomment-77875745
Mind closing this PR?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4951#issuecomment-77877303
[Test build #28393 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28393/consoleFull)
for PR 4951 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4951#issuecomment-77877321
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user koeninger commented on a diff in the pull request:
https://github.com/apache/spark/pull/4805#discussion_r26048624
--- Diff:
external/kafka/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala
---
@@ -118,6 +123,7 @@ class
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4947#issuecomment-77892155
[Test build #28397 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28397/consoleFull)
for PR 4947 at commit
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/4906#discussion_r26056478
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/tree/GradientBoostedTrees.scala ---
@@ -69,6 +74,42 @@ class GradientBoostedTrees(private val
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4435#discussion_r26061359
--- Diff:
core/src/main/scala/org/apache/spark/status/StatusJsonHandler.scala ---
@@ -0,0 +1,168 @@
+/*
+ * Licensed to the Apache Software
Github user tzolov closed the pull request at:
https://github.com/apache/spark/pull/1611
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/4945#discussion_r26054740
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/HiveTypeCoercion.scala
---
@@ -269,6 +285,14 @@ trait HiveTypeCoercion {
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/4947#issuecomment-77892679
Is this not needed for `serializeStream` as well?
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/4491#issuecomment-77904109
Hi @kellyzly ,
Renaming the PR sounds fine. But I see that the PR still has the old code.
Are you planning on having the updated code up here soon? Otherwise, as
Github user tzolov commented on the pull request:
https://github.com/apache/spark/pull/2477#issuecomment-77895022
I'm closing this PR as this functionality is deprecated.
Github user tzolov commented on the pull request:
https://github.com/apache/spark/pull/1611#issuecomment-77895099
I'm closing this PR as this functionality is deprecated.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4634#discussion_r26056981
--- Diff: core/src/main/scala/org/apache/spark/api/java/JavaPairRDD.scala
---
@@ -233,18 +235,44 @@ class JavaPairRDD[K, V](val rdd: RDD[(K, V)])
def
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/4901#discussion_r26060596
--- Diff: ec2/spark_ec2.py ---
@@ -872,9 +886,13 @@ def deploy_files(conn, root_dir, opts, master_nodes,
slave_nodes, modules):
if "." in
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/4901#issuecomment-77911329
Thanks @uronce-cc. The change looks good to me except for the minor comment
inline.
@nchammas -- Any other comments ?
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4634#discussion_r26056992
--- Diff: core/src/main/scala/org/apache/spark/api/java/JavaPairRDD.scala
---
@@ -233,18 +235,44 @@ class JavaPairRDD[K, V](val rdd: RDD[(K, V)])
def
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/4142#issuecomment-77899873
Ping.
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/4634#issuecomment-77903115
Serializer seems ok to add.
One thing I am not sure about is the mapSideCombine thing -- I'm never a
fan of that parameter even though I added it myself, for the
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4435#discussion_r26061003
--- Diff:
core/src/main/scala/org/apache/spark/status/StatusJsonHandler.scala ---
@@ -0,0 +1,168 @@
+/*
+ * Licensed to the Apache Software
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4435#discussion_r26061127
--- Diff:
core/src/main/scala/org/apache/spark/status/api/ApplicationInfo.scala ---
@@ -0,0 +1,26 @@
+/*
+ * Licensed to the Apache Software
Github user tzolov closed the pull request at:
https://github.com/apache/spark/pull/2477
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/4945#discussion_r26054354
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/HiveTypeCoercion.scala
---
@@ -269,6 +285,14 @@ trait HiveTypeCoercion {
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/4906#discussion_r26056473
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/tree/GradientBoostedTrees.scala ---
@@ -69,6 +74,42 @@ class GradientBoostedTrees(private val
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/4919#issuecomment-77905979
This seems fine to me. I guess the alternatives would be
1. storing the libraries in our source tree, which is a bad option for
several reasons, including
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4435#discussion_r26060515
--- Diff:
core/src/test/scala/org/apache/spark/status/JsonRequestHandlerTest.scala ---
@@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache Software
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4947#issuecomment-77892167
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4953#issuecomment-77946908
It's strictly a code move, from child to parent module. Although I've never
been that familiar with this code, I understand the motivation: to use it from
the other child
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/4950#discussion_r26078523
--- Diff:
external/kafka/src/test/java/org/apache/spark/streaming/kafka/JavaKafkaRDDSuite.java
---
@@ -19,23 +19,19 @@
import
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3916#discussion_r26079948
--- Diff: bin/spark-sql ---
@@ -43,15 +46,12 @@ function usage {
echo
echo "CLI options:"
"$FWDIR"/bin/spark-class $CLASS --help 2>&1 | grep
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3916#discussion_r26079926
--- Diff: bin/spark-sql ---
@@ -25,12 +25,15 @@ set -o posix
# NOTE: This exact class name is matched downstream by SparkSubmit.
# Any
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3916#discussion_r26079987
--- Diff: bin/spark-submit ---
@@ -17,58 +17,18 @@
# limitations under the License.
#
-# NOTE: Any changes in this file must be reflected
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3916#discussion_r26079978
--- Diff: bin/spark-submit ---
@@ -17,58 +17,18 @@
# limitations under the License.
#
-# NOTE: Any changes in this file must be reflected