Github user lianhuiwang commented on the pull request:
https://github.com/apache/spark/pull/10058#issuecomment-164809399
retest please
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this
Github user nikit-os commented on the pull request:
https://github.com/apache/spark/pull/10294#issuecomment-164819873
I made these duplicates because I didn't want to break the current
implementation. Do you have any ideas on how to consolidate these two
versions? Maybe we
Github user zsxwing commented on the pull request:
https://github.com/apache/spark/pull/10305#issuecomment-164837584
Merging to master and 1.6. Thanks @jerryshao
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10312#discussion_r47672816
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala ---
@@ -108,6 +108,34 @@ abstract class JdbcDialect extends Serializable {
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10312#discussion_r47672535
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -303,7 +316,32 @@ final class DataFrameWriter private[sql](df:
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/10311#issuecomment-164846751
Scala always puts the parameter names in the `scalasig` and you can get
them from the `Type` (which they describe how to get
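The claim above, that Scala preserves constructor parameter names in the ScalaSignature and exposes them through runtime reflection, can be sketched as follows. This is an illustrative example, not the code under review; the helper name `constructorParamNames` is made up for this sketch.

```scala
import scala.reflect.runtime.universe._

// Any case class will do for illustration.
case class Person(name: String, age: Int)

// Recover the primary constructor's parameter names from the
// ScalaSignature via scala-reflect (no special compiler flags needed).
def constructorParamNames[T: TypeTag]: List[String] =
  typeOf[T]
    .decl(termNames.CONSTRUCTOR)
    .asMethod
    .paramLists
    .head
    .map(_.name.toString)
```

Calling `constructorParamNames[Person]` yields `List("name", "age")`, which is what makes schema inference from Scala types possible without Java's erased parameter names.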
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10312#discussion_r47672127
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -253,6 +253,7 @@ final class DataFrameWriter private[sql](df: DataFrame)
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10312#discussion_r47672218
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -265,10 +266,22 @@ final class DataFrameWriter private[sql](df:
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10312#discussion_r47672324
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -265,10 +266,22 @@ final class DataFrameWriter private[sql](df:
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/10290#discussion_r47676426
--- Diff: docs/sparkr.md ---
@@ -148,7 +148,7 @@ printSchema(people)
The data sources API can also be used to save out DataFrames into
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/10290#discussion_r47676515
--- Diff: docs/sparkr.md ---
@@ -387,3 +387,10 @@ The following functions are masked by the SparkR
package:
Since part of SparkR is modeled on the
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10312#discussion_r47672941
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala ---
@@ -108,6 +108,34 @@ abstract class JdbcDialect extends Serializable {
Github user BrianLondon commented on the pull request:
https://github.com/apache/spark/pull/10256#issuecomment-164834977
I removed the explicit dependence on the AWS Java SDK. There's a newline
that was added to `KinesisReceiver.scala` since the last test run. I think if
tests are
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/10208#discussion_r47673951
--- Diff: core/src/main/scala/org/apache/spark/HttpFileServer.scala ---
@@ -20,6 +20,7 @@ package org.apache.spark
import java.io.File
import
Github user 3ourroom commented on the pull request:
https://github.com/apache/spark/pull/10312#issuecomment-164840588
NAVER - http://www.naver.com/
[Auto-reply from 3ourr...@naver.com, translated from mojibake Korean:] "The mail you sent <[spark] [SPARK-12010][SQL] Add
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10312#discussion_r47672516
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -303,7 +316,32 @@ final class DataFrameWriter private[sql](df:
Github user 3ourroom commented on the pull request:
https://github.com/apache/spark/pull/10294#issuecomment-164821463
NAVER - http://www.naver.com/
[Auto-reply from 3ourr...@naver.com, translated from mojibake Korean:] "The mail you sent, for the following reason
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10312#discussion_r47672403
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -303,7 +316,32 @@ final class DataFrameWriter private[sql](df:
Github user 3ourroom commented on the pull request:
https://github.com/apache/spark/pull/10312#issuecomment-164841412
NAVER - http://www.naver.com/
[Auto-reply from 3ourr...@naver.com, translated from mojibake Korean:] "The mail you sent, for the following reason
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10312#discussion_r47672664
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -234,7 +235,8 @@ object JdbcUtils extends Logging {
Github user CK50 commented on a diff in the pull request:
https://github.com/apache/spark/pull/10312#discussion_r47673153
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -511,13 +511,20 @@ def jdbc(self, url, table, mode=None,
properties=None):
:param properties:
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/8645#issuecomment-164853890
retest this please
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/10171#issuecomment-164852205
@gatorsmile I reported the spam to Github, who said Apache had to block
them. I was about to contact them but realized I don't see the spam comments
anymore. Is it
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/8645#issuecomment-164857576
**[Test build #47744 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/47744/consoleFull)**
for PR 8645 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/10215#discussion_r47681732
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -698,47 +733,47 @@ class Dataset[T] private[sql](
def takeAsList(num:
Github user JoshRosen closed the pull request at:
https://github.com/apache/spark/pull/9427
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/10171#issuecomment-164861506
@srowen! It sounds like his mail app has been hacked. Hopefully it will not
happen again. Thank you!
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/8980#issuecomment-164863631
**[Test build #47746 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/47746/consoleFull)**
for PR 8980 at commit
Github user naveenminchu commented on the pull request:
https://github.com/apache/spark/pull/10313#issuecomment-164862829
Hi @JoshRosen, are you suggesting we go ahead and review the
`Runtime.getRuntime.addShutdownHook` usages and get rid of them?
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/10313#issuecomment-164864825
@naveenminchu, yeah, I'm suggesting that we add a rule to
`scalastyle-config.xml` (like
GitHub user CK50 opened a pull request:
https://github.com/apache/spark/pull/10312
[SPARK-12010][SQL] Add columnMapping support
In the past Spark JDBC write only worked with technologies which support
the following INSERT statement syntax (JdbcUtils.scala: insertStatement()):
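For context on the limitation this PR addresses, the difference between the two INSERT shapes can be sketched as below. These helpers are illustrative stand-ins, not the actual `JdbcUtils.insertStatement()` code:

```scala
// The positional form: only works when the DataFrame's columns line up
// one-to-one, in order, with the target table's columns.
def positionalInsert(table: String, numCols: Int): String =
  s"INSERT INTO $table VALUES (${List.fill(numCols)("?").mkString(", ")})"

// The mapped form (what a columnMapping feature enables): target columns
// are named explicitly, so their order and subset need not match the table.
def mappedInsert(table: String, cols: Seq[String]): String =
  s"INSERT INTO $table (${cols.mkString(", ")}) VALUES (${cols.map(_ => "?").mkString(", ")})"
```

For example, `positionalInsert("people", 2)` produces `INSERT INTO people VALUES (?, ?)`, while `mappedInsert("people", Seq("name", "age"))` produces `INSERT INTO people (name, age) VALUES (?, ?)`, which databases that reject the bare positional syntax can accept.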
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/10221#issuecomment-164871044
Why not just implement `unhandledFilters`
`https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/sources/interfaces.scala#L247`?
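The `unhandledFilters` hook mentioned above lets a data source tell Spark which pushed-down filters it could not evaluate, so Spark re-applies only those. A minimal sketch, using simplified stand-in case classes for `org.apache.spark.sql.sources.Filter` rather than the real Spark types:

```scala
// Simplified stand-ins for the Spark sources Filter hierarchy.
sealed trait Filter
case class EqualTo(attribute: String, value: Any) extends Filter
case class GreaterThan(attribute: String, value: Any) extends Filter

// A hypothetical source that can only push down equality predicates:
// everything else is returned as "unhandled" and Spark evaluates it
// on top of the scan.
def unhandledFilters(filters: Array[Filter]): Array[Filter] =
  filters.filterNot(_.isInstanceOf[EqualTo])
```

Given `Array(EqualTo("a", 1), GreaterThan("b", 2))`, this returns only the `GreaterThan` filter, signaling that Spark must still check it itself.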
Github user mccheah closed the pull request at:
https://github.com/apache/spark/pull/8438
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/7739#issuecomment-164873314
I'm not super convinced that we should add something like this. This patch
introduces a config that's not really necessary and has potentially unclear
semantics.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10314#issuecomment-164870635
Can one of the admins verify this patch?
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/5483#issuecomment-164878820
Let's close this patch for now since there has not been activity for more
than 3 months. We can always re-open it later if there is more interest.
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/5598#issuecomment-164879041
retest this please
@ankurdave
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/8048#issuecomment-164880222
This patch has gone stale; we now have a `SQLListener` that may be relevant
here. I would recommend that we close this patch and re-open it later against
the latest
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/8128#issuecomment-164880475
Is this still an issue given all the latest memory management changes?
@chenghao-intel are you still able to reproduce this in master?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/8312#issuecomment-164881073
**[Test build #47748 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/47748/consoleFull)**
for PR 8312 at commit
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10312#discussion_r47672425
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -303,7 +316,32 @@ final class DataFrameWriter private[sql](df:
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10312#discussion_r47672570
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -303,7 +316,32 @@ final class DataFrameWriter private[sql](df:
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/9168#issuecomment-164852995
If we're waiting on the HDFS JIRA to be resolved, can we close this PR for
now? We can always re-open it later once the other issue is addressed.
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/9113#issuecomment-164857542
Jenkins, this is ok to test.
@xiaowangyu can you describe your test setup briefly?
@liancheng if you have time please review.
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/8647#discussion_r47686635
--- Diff: python/pyspark/sql/column.py ---
@@ -329,6 +329,8 @@ def cast(self, dataType):
[Row(ages=u'2'), Row(ages=u'5')]
>>>
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/8915#issuecomment-164874758
@zhichao-li can you rebase to master?
@liancheng please have a look. Does this still apply?
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/10149#issuecomment-164874665
This seems reasonable, but if we are going to do work on `sqlContext.range`
it would be nice if we could take care of a bigger issue. Right now this code
is much
Github user nongli commented on a diff in the pull request:
https://github.com/apache/spark/pull/10260#discussion_r47687871
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/encoders/ExpressionEncoder.scala
---
@@ -251,6 +251,25 @@ case class
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/8750#issuecomment-164877084
@viirya @yhuai Has this been fixed in master already?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10260#issuecomment-164877081
**[Test build #47747 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/47747/consoleFull)**
for PR 10260 at commit
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/10260#issuecomment-164877638
This looks great now!
LGTM pending tests / conflict resolution
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/8312#issuecomment-164881673
**[Test build #47748 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/47748/consoleFull)**
for PR 8312 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/8312#issuecomment-164881692
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/10305
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10312#discussion_r47672003
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -511,13 +511,20 @@ def jdbc(self, url, table, mode=None,
properties=None):
:param properties:
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10312#issuecomment-164839152
Can one of the admins verify this patch?
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/10313#issuecomment-164856111
LGTM, but not sure how we missed this one. It makes me wonder if there's a
particular reason for it. But I don't see a reason the current implementation
would guarantee
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/9427#issuecomment-164855721
@JoshRosen Something seems to be wrong in this PR; if we can't fix it
easily, would you mind closing this one?
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/8980#issuecomment-164860554
@alexrovner actually this PR is opened against the wrong branch. Would you
mind closing it and re-opening against the master branch?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10215#issuecomment-164860412
**[Test build #47745 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/47745/consoleFull)**
for PR 10215 at commit
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/10215#discussion_r47678074
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -698,47 +733,47 @@ class Dataset[T] private[sql](
def takeAsList(num:
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/8647#issuecomment-164870848
@davies @GayathriMurali any updates on this? Is this patch still active?
Github user BryanCutler commented on the pull request:
https://github.com/apache/spark/pull/10284#issuecomment-164873259
Hi @andrewor14, it looks like the default RPC `NettyRpcEnv` will not
serialize a local message, so if I were to send the message
`AttachCompletedRebuildUI(appId:
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/5978#issuecomment-164879912
Let's close this patch for now since there hasn't been activity for more
than 3 months. We can always re-open it later if there's more interest.
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/3794#issuecomment-164882462
@markhamstra @kayousterhout could you have a look?
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4674#discussion_r47691009
--- Diff:
graphx/src/test/scala/org/apache/spark/graphx/GraphLoaderSuite.scala ---
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4674#issuecomment-164881132
retest this please (this should fail scala style tests because of the issue
I brought up)
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10312#discussion_r47672592
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -61,16 +61,16 @@ object JdbcUtils extends Logging {
Github user mccheah commented on the pull request:
https://github.com/apache/spark/pull/8438#issuecomment-164870966
Sorry, yeah, I haven't had the bandwidth to look at this further. I agree
that `groupByKey` should be avoided anyway, so it might not be worthwhile to
pursue this further.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/8980#issuecomment-164872820
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/8980#issuecomment-164872685
**[Test build #47746 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/47746/console)**
for PR 8980 at commit
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/8625#issuecomment-164878623
@ankurdave is this something we want?
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/8625#issuecomment-164878633
ok to test
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/5754#issuecomment-164880698
ping @szheng79 can you rebase to master?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/8625#issuecomment-164880637
**[Test build #47749 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/47749/consoleFull)**
for PR 8625 at commit
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/8195#issuecomment-164880876
ok to test. @kbastani can you address the comments? Also maybe this is of
interest to @ankurdave.
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/5848#issuecomment-164884554
I wonder if we'll ever want `1e6` samples, which seems like a lot. Echoing
@srowen's thinking aloud earlier, does it make sense to change this to
something like
Github user CK50 commented on a diff in the pull request:
https://github.com/apache/spark/pull/10066#discussion_r47671041
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/jdbc/CassandraDialect.scala ---
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software
Github user zsxwing commented on the pull request:
https://github.com/apache/spark/pull/10305#issuecomment-164837498
LGTM
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/8645#issuecomment-164853867
This seems like a low hanging fruit to merge. @davies @yhuai can you have a
look?
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10313#issuecomment-164856021
Can one of the admins verify this patch?
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/10125#discussion_r47681228
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/JavaTypeInference.scala
---
@@ -75,6 +75,7 @@ object JavaTypeInference {
case
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10312#discussion_r47673067
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCWriteSuite.scala ---
@@ -96,6 +96,16 @@ class JDBCWriteSuite extends SharedSQLContext with
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/10311#issuecomment-164848287
Other small comments:
- instead of `node-name` I would probably call it `class` and use the
fully qualified classname.
- I think that the best test would be
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/9113#discussion_r47681729
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/HiveThriftServer2.scala
---
@@ -57,6 +57,11 @@ object
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/10313#issuecomment-164857807
Hey, can we add a linter rule to ban uses of
"Runtime.getRuntime.addShutdownHook", similar to our existing rule for
"Class.forName", which recommends using our own
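A rule like the one proposed could be expressed with scalastyle's generic `RegexChecker`, mirroring the existing `Class.forName` ban. This is a sketch; the `customId` and message wording here are illustrative, not the rule that was actually merged:

```xml
<check customId="runtimeaddshutdownhook" level="error"
       class="org.scalastyle.file.RegexChecker" enabled="true">
  <parameters>
    <!-- Flag any direct use of the JVM shutdown-hook API. -->
    <parameter name="regex">Runtime\.getRuntime\.addShutdownHook</parameter>
  </parameters>
  <customMessage><![CDATA[
    Are you sure you want to use Runtime.getRuntime.addShutdownHook?
    In most cases, ShutdownHookManager should be used instead.
    If you must use it, wrap the code in
    // scalastyle:off runtimeaddshutdownhook ... // scalastyle:on
  ]]></customMessage>
</check>
```

Legitimate call sites can then opt out locally with a `scalastyle:off`/`scalastyle:on` comment pair, which is how the `Class.forName` rule is handled.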
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/9427#issuecomment-164857620
Yeah, I'm going to close this for now; I don't think that this is a
high-priority issue to fix for 1.5.x since it's been around forever and nobody
reported problems
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/8981#issuecomment-164867369
@mccheah @JoshRosen does this patch buy us anything if we're not merging
#8438?
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/8647#discussion_r47686610
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -943,13 +969,48 @@ def dropDuplicates(self, subset=None):
+---+--+-+
|
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/8885#issuecomment-164875069
@lianhuiwang would you mind rebasing to master?
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/8885#issuecomment-164875349
@sarutak @vanzin
Github user markgrover commented on the pull request:
https://github.com/apache/spark/pull/7739#issuecomment-164877416
Hey Andrew, thanks for your feedback.
> What happens if I specify different versions of some jar on both common
and executor class path? Which one gets
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/8382#issuecomment-164879608
@cloud-fan this patch has mostly gone stale and I think given the changes
in master it would take significant effort to resolve the conflicts. Can you
close this
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/8625#issuecomment-164882503
Merged build finished. Test FAILed.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/8312#issuecomment-164881687
Merged build finished. Test FAILed.
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4674#issuecomment-164881339
@maropu would you mind bringing this up to date in master?
@ankurdave this looks like an OK change. Is it good to merge once it's
rebased?
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/3221#issuecomment-164884922
@mengxr @jkbradley is this still relevant given the recent changes in ALS?
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/10290#issuecomment-164852902
One minor thing: I'm not sure it's a good idea to hard-code these
version numbers here. For example, we could have an RC3, and then this might be in
Spark 1.6
GitHub user naveenminchu opened a pull request:
https://github.com/apache/spark/pull/10313
[SPARK-9886][CORE] Fix to use ShutdownHookManager in
ExternalBlockStore.scala
You can merge this pull request into a Git repository by running:
$ git pull
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/8980#issuecomment-164857678
ok to test