Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18968
I agree that the original check should be in `checkAnalysis` instead of
`checkInputDataTypes`.
The additional check added by this change can be put in `resolved`. Sounds
good to me.
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18986
In Hive, does `1 = 'true'` return `true`? Does `19157170390056973L =
'19157170390056971'` return `true`?
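Whatever Hive returns here, the second comparison shows a real hazard: if both sides are implicitly cast to double, distinct integers above 2**53 can round to the same IEEE-754 value. A minimal sketch of that collapse, in plain Python rather than Hive or Spark SQL:

```python
# Plain-Python illustration (not Hive/Spark) of the precision hazard in the
# comparison above: doubles carry a 53-bit significand, so distinct integers
# beyond 2**53 can round to the same double after an implicit cast.
a = 19157170390056973
b = 19157170390056971
print(a == b)                 # False: distinct as 64-bit integers
print(float(a) == float(b))   # True: both round to the same double
```

So an equality that casts both operands to double can report `true` for longs that actually differ, which is exactly the concern raised above.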
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18315
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80884/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18315
**[Test build #80884 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80884/testReport)**
for PR 18315 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18315
Merged build finished. Test PASSed.
---
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/15770
@WeichenXu123 I have made changes based on your comments. Thanks!
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/18866
In the current change:
1. `ClusteredDistribution` becomes `ClusteredDistribution(clustering:
Seq[Expression], clustersOpt: Option[Int] = None, useHiveHash: Boolean =
false)` -- a) number and
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19002
**[Test build #80886 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80886/testReport)**
for PR 19002 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19001
**[Test build #80885 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80885/testReport)**
for PR 19001 at commit
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/19001
Jenkins retest this please
---
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/19001#discussion_r134106172
--- Diff:
sql/hive/src/main/java/org/apache/hadoop/hive/ql/io/BucketizedSparkRecordReader.java
---
@@ -0,0 +1,147 @@
+/**
+ * Licensed to the
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18315
**[Test build #80884 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80884/testReport)**
for PR 18315 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18968
[Moving such checks from `In.checkInputDataTypes` to
`checkAnalysis`](https://github.com/dilipbiswal/spark/commit/185159e82572fd8c41f482bae54f791d2bf1b56a)
looks cleaner to me. What we are
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19002
LGTM except a minor comment.
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18990
LGTM. I agree that in theory there is no reason we should depend on the
exact shuffle distribution here. It should be beneficial to have a more even
distribution.
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/19002#discussion_r134105895
--- Diff:
external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/OracleIntegrationSuite.scala
---
@@ -255,6 +256,18 @@ class
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19002
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19002
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80880/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19002
**[Test build #80880 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80880/testReport)**
for PR 19002 at commit
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/18953
Hi, @cloud-fan .
The PR is ready for review again. Thank you!
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19004
**[Test build #80883 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80883/testReport)**
for PR 19004 at commit
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/19004
cc @cloud-fan , @gatorsmile , @hvanhovell , @sameeragarwal , @rxin .
---
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/19004
[SPARK-21791][SQL] ORC should support column names with dot
## What changes were proposed in this pull request?
Currently, the Apache Spark ORC data source doesn't support field names
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18953
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18953
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80881/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18953
**[Test build #80881 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80881/testReport)**
for PR 18953 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19003
**[Test build #80882 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80882/testReport)**
for PR 19003 at commit
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/19003
[SPARK-21769] [SQL] Add a table-specific option for always respecting
schemas inferred/controlled by Spark SQL
## What changes were proposed in this pull request?
For Hive-serde tables, we
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19001
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19001
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80879/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19001
**[Test build #80879 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80879/testReport)**
for PR 19001 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18954
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80878/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18954
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18954
**[Test build #80878 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80878/testReport)**
for PR 18954 at commit
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18986
@gatorsmile Seems cast to `double` is correct.
```
hive> create table spark_21646(c1 string, c2 string);
hive> insert into spark_21646
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18953
**[Test build #80881 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80881/testReport)**
for PR 18953 at commit
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/18953
Retest this please.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/18953
Jenkins failed twice with different R test suites. Those errors are
irrelevant.
- test_mllib_tree.R (Test build #80875)
```
1. Error: spark.gbt (@test_mllib_tree.R#120)
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/19002#discussion_r134104208
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala ---
@@ -39,7 +39,6 @@ import org.apache.spark.sql.catalyst.plans.PlanTest
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18953
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80877/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18953
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18953
**[Test build #80877 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80877/testReport)**
for PR 18953 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19002
**[Test build #80880 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80880/testReport)**
for PR 19002 at commit
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/19002
[SPARK-21790][TESTS][FOLLOW-UP] Add filter pushdown verification back.
## What changes were proposed in this pull request?
The previous PR(https://github.com/apache/spark/pull/19000)
Github user sureshthalamati commented on the issue:
https://github.com/apache/spark/pull/18994
Sure. DDL that changes the table name, column name, or data type of the
referenced primary key will affect foreign key definitions. I will check the
Spark DDL that does schema changes and
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/19001#discussion_r134103839
--- Diff:
sql/hive/src/main/java/org/apache/hadoop/hive/ql/io/BucketizedSparkRecordReader.java
---
@@ -0,0 +1,147 @@
+/**
+ * Licensed to
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18522
Thanks @srowen
I think it's better to keep them the same in `driverRunner` and
`ExecutorRunner`. @cloud-fan
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18711
@JoshRosen @cloud-fan Could you help review it? Thanks.
---
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/19001
cc @cloud-fan @gatorsmile @sameeragarwal @rxin
---
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/18954
I have a new PR (https://github.com/apache/spark/pull/19001) which
supersedes this one. It has everything this PR does (i.e. writer-side changes)
plus reader-side changes.
---
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18954#discussion_r134103430
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/exchange/EnsureRequirements.scala
---
@@ -50,7 +50,9 @@ case class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19001
**[Test build #80879 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80879/testReport)**
for PR 19001 at commit
GitHub user tejasapatil opened a pull request:
https://github.com/apache/spark/pull/19001
[SPARK-19256][SQL] Hive bucketing support
## What changes were proposed in this pull request?
This PR implements both read and write side changes for supporting hive
bucketing in
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18954
**[Test build #80878 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80878/testReport)**
for PR 18954 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18029
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80876/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18029
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18029
**[Test build #80876 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80876/testReport)**
for PR 18029 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18953
**[Test build #80877 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80877/testReport)**
for PR 18953 at commit
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/18953
Retest this please
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18029
**[Test build #80876 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80876/testReport)**
for PR 18029 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18953
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18953
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80875/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18953
**[Test build #80875 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80875/testReport)**
for PR 18953 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18994
Could you check the impact of other DDL on the constraints? For example,
rename.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18953
**[Test build #80875 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80875/testReport)**
for PR 18953 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/19000#discussion_r134098359
--- Diff:
external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/OracleIntegrationSuite.scala
---
@@ -255,15 +255,6 @@ class
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/18953
Retest this please.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/19000
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19000
Thanks! Merging to master.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19000
LGTM
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18866
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80873/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18866
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18866
**[Test build #80873 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80873/testReport)**
for PR 18866 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19000
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80872/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19000
Merged build finished. Test PASSed.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18986
@stanzhai @wangyum How about Hive? I think your usage scenarios are for
Hive compatibility, right?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19000
**[Test build #80872 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80872/testReport)**
for PR 19000 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18985
```
spark-sql> create database test;
17/08/19 10:29:33 WARN ObjectStore: Failed to get database test, returning
NoSuchObjectException
spark-sql> use test;
spark-sql> create table
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18975
**[Test build #80874 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80874/testReport)**
for PR 18975 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18975
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18975
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80874/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18975
**[Test build #80874 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80874/testReport)**
for PR 18975 at commit
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18986
There is an issue if a string value is out of double range; see
[SPARK-21646](https://issues.apache.org/jira/browse/SPARK-21646).
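To make the symptom concrete, here is a hedged sketch in plain Python, not Spark (the values are borrowed from the discussion above, and `Decimal` merely stands in for an exact numeric type): once such a string is cast to double, distinct values can collapse, while an exact decimal comparison keeps them apart.

```python
from decimal import Decimal

# Sketch of the SPARK-21646 symptom: numeric strings longer than double
# precision collapse under a double cast but stay distinct as exact decimals.
s1, s2 = "19157170390056973", "19157170390056971"
print(float(s1) == float(s2))      # True: lossy cast to double
print(Decimal(s1) == Decimal(s2))  # False: exact comparison
```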
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19000
**[Test build #80872 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80872/testReport)**
for PR 19000 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18866
**[Test build #80873 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80873/testReport)**
for PR 18866 at commit
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/19000
[SPARK-21790][TESTS] Fix Docker-based Integration Test errors.
## What changes were proposed in this pull request?
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18940#discussion_r134092710
--- Diff: core/src/main/scala/org/apache/spark/SparkConf.scala ---
@@ -597,7 +597,8 @@ private[spark] object SparkConf extends Logging {
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/18999#discussion_r134092696
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -659,19 +659,77 @@ def distinct(self):
return DataFrame(self._jdf.distinct(),
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18985
Is your code change based on the latest `InsertIntoHiveTable`? Can you sync
up with the master branch?
---
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/18492
LGTM
---
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/18940
LGTM, also cc @cloud-fan
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18999#discussion_r134092119
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -659,19 +659,77 @@ def distinct(self):
return DataFrame(self._jdf.distinct(), self.sql_ctx)
Github user liupc commented on the issue:
https://github.com/apache/spark/pull/18985
You can try this very simple test case:
```
use test; -- change db
create table t1 as select * from t2;
```
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18029
**[Test build #80871 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80871/testReport)**
for PR 18029 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18029
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80871/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18029
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18029
**[Test build #80871 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80871/testReport)**
for PR 18029 at commit
Github user yssharma commented on the issue:
https://github.com/apache/spark/pull/18029
@brkyvz I have made the suggested modifications to the code. Please have a
look when you get time. Thanks
---
Github user yssharma commented on the issue:
https://github.com/apache/spark/pull/18029
Thanks @budde for the review. Love the suggestions.
---
Github user LiShuMing commented on the issue:
https://github.com/apache/spark/pull/18905
ping @jerryshao
I found a method to check disk in Hadoop: