Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15393
Actually, I hit this before. Sorry, I did not catch it when I merged the
PR. Next time, I will be more careful about it.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15393
retest this please
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15393
test this please
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15334
Saw @viirya submitted a PR for the same issue:
https://github.com/apache/spark/pull/7793
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15388
Why can the following case pass?
```Scala
setTest(1,
bUpper > 100 ||
aUpper <= 10 &&
aUpper > bUpper,
bUpper > 100 ||
```
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15388
BTW, we need to add more mixed test cases when supporting
`And` and `Or`.
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15388
LGTM
Forgot that the precedence of `AND` is higher than `OR`.
BTW, I also checked SQL. It follows the same precedence order; that is, `AND`
is higher than `OR`.
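As an illustration of the precedence point, here is a minimal sketch in Python, whose `and`/`or` follow the same relative precedence as SQL's `AND`/`OR`; the names `a_upper`/`b_upper` and the helper `check` are hypothetical, echoing the test snippet quoted earlier in the thread:

```python
# Python's `and`/`or` have the same relative precedence as SQL's AND/OR:
# AND binds tighter than OR.
def check(a_upper: int, b_upper: int) -> bool:
    # Without parentheses, this parses as:
    #   b_upper > 100 or ((a_upper <= 10) and (a_upper > b_upper))
    implicit = b_upper > 100 or a_upper <= 10 and a_upper > b_upper
    explicit = b_upper > 100 or (a_upper <= 10 and a_upper > b_upper)
    return implicit == explicit

# The two forms agree for any inputs, so the unparenthesized condition
# is grouped as OR(gt, AND(le, gt)).
assert all(check(a, b) for a in (5, 50) for b in (3, 200))
```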
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15388
@rxin Agree. Sorry for that. Will be more careful in the future.
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15292#discussion_r82537727
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCOptions.scala
---
@@ -17,47 +17,132 @@
package
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15399
First, I am not familiar with the code in this component. Thus, I am not
the right person to review it.
Second, when I was going over the pending JIRA list, I found many bugs that are
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15316#discussion_r82643571
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/AnalysisException.scala ---
@@ -43,6 +43,11 @@ class AnalysisException protected[sql
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15358#discussion_r82655758
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -52,9 +52,15 @@ case class CatalogStorageFormat
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15316
retest this please
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15417
Is this the only one?
If you add the commands in the function `comparePlans` of
`PlanTest.scala`, you can see more.
```Scala
SimpleAnalyzer.checkAnalysis(plan1
```
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15190
cc @yhuai @cloud-fan Based on the above PR discussion, it sounds like this
PR is ok to merge. What do you think? Thank you!
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15360
Will review this tonight. Thanks!
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15292
Thank you for working on it! Please update the PR description. Ping me when
it is done. Then, I can merge it. : )
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15398
Also cc @yhuai and @JoshRosen @mengxr Please check whether the changes here
can satisfy what you want. Thanks!
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15292
Sorry, I did not explain it in detail. In this PR, we had a bug fix. We
need a separate bullet in the PR description.
Previously, when attempting to make a database connection, we pass
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15292
Thanks! Merging to master!
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14702
retest this please
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15360#discussion_r82730527
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/StatisticsSuite.scala ---
@@ -358,50 +358,180 @@ class StatisticsSuite extends QueryTest with
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15360#discussion_r82730881
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/StatisticsSuite.scala ---
@@ -358,50 +358,180 @@ class StatisticsSuite extends QueryTest with
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15360#discussion_r82731661
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeColumnCommand.scala
---
@@ -62,7 +62,7 @@ case class
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15360#discussion_r82731979
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/StatisticsSuite.scala ---
@@ -358,50 +358,180 @@ class StatisticsSuite extends QueryTest with
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15230
cc @cloud-fan Could you review it again? Thanks!
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15230
retest this please
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15417
You can give it a try.
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15398
I see. Agree. Will do an investigation with @jodersky Thanks!
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14847
@viirya This is pretty interesting. Based on the suggestion from @rxin,
let us discuss it here. Also cc @ioana-delaney @nsyca
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15190
Added them to your test cases. : )
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15295
LGTM
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15295
Merging to master! Thanks!
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15434
Just FYI. Hive allows the following changes:
```SQL
ALTER TABLE db1.tbl RENAME TO db2.tbl2
```
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15230#discussion_r82940270
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -225,6 +225,11 @@ case class AlterTableSetPropertiesCommand
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15230
retest this please
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15434#discussion_r82944802
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -459,11 +459,20 @@ class SessionCatalog
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15434
LGTM
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15434
cc @zhzhan Just FYI, the behavior of this DDL is different from Hive's. If
your team is migrating from Hive to Spark, you need to double-check your
scripts containing ALTER TABLE RENAME TO
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15360#discussion_r82947471
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/StatisticsSuite.scala ---
@@ -358,53 +358,189 @@ class StatisticsSuite extends QueryTest with
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15360
cc @cloud-fan I do not have any more comments. Could you check this please?
Thanks!
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15230#discussion_r83006784
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -225,6 +225,11 @@ case class AlterTableSetPropertiesCommand
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/9973#discussion_r83048584
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -175,14 +187,18 @@ object JdbcUtils extends
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/9973#discussion_r83049327
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -175,14 +187,18 @@ object JdbcUtils extends
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15048
Yeah, we should backport it to 2.0.
Yeah, it affects both data source tables and Hive serde tables. To fix it
in Spark 2.0, we need to rewrite the fix, since Spark 2.0 does not have a
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15432
Not sure whether you realize it. Since this PR changes the input parameter of
`Rand` and `Randn`, you also change the external support.
Now, users can do something like
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15432
Since you are running MySQL, is the output of `rand(0)` the same as
`rand(null)`?
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15048
Yeah, based on my understanding, it should cover Hive serde tables. I
will submit a PR to verify it and also include the test case you provided
above. Thank you!
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/9973
The basic problem is that multiple connections work on the same transaction. It
is doable but might not be applicable as a general JDBC data source connector.
Let us keep it as an open problem. If
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15432
I have a very general comment about the work you are doing. As with the
`LIKE` operation, we did an investigation of the ANSI standard,
and all the mainstream data stores
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15432
Let me show you an example:
https://www.ibm.com/support/knowledgecenter/SSEPEK_11.0.0/sqlref/src/tpc/db2z_bif_rand.html
This is the official document of `rand` in DB2 z/OS. Below is
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15230#discussion_r83138864
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -111,6 +111,10 @@ private[spark] class HiveExternalCatalog
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15432
Unfortunately, not all of these things have a standard to follow. That is why I
suggested you do some research on it. Oracle, for example, does not have such a
function in its SQL function list
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15432
First, we do not strictly follow Hive; you can easily find many such differences
in Spark. I do not think this is an urgent JIRA, right? As @srowen replied
in the JIRA, he does not think this is a bug
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15432
What is the behavior of `PostgreSQL`? Treating `NULL` as zero?
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15432
Now we have at least four options for when users set `NULL` as the seed
for `rand`:
1. Hive/MySQL - `NULL` is equivalent to `0`
2. DB2 - when the seed is `NULL`, `rand` returns
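A minimal sketch of option 1 (the Hive/MySQL semantics), in Python for illustration; `rand_with_seed` is a hypothetical stand-in for the SQL function, not Spark's implementation:

```python
import random

def rand_with_seed(seed=None):
    """Hypothetical sketch of option 1 (Hive/MySQL semantics): a NULL
    seed (None here) is treated the same as seed 0, so rand(NULL)
    returns the same value as rand(0)."""
    effective_seed = 0 if seed is None else seed
    return random.Random(effective_seed).random()

# Under these semantics, NULL and 0 yield identical output.
assert rand_with_seed(None) == rand_with_seed(0)
```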
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15432
@HyukjinKwon Could you document the behavior in the description of the `rand`
function? Could you also check whether we have any missing test cases? Not sure
whether you can also check whether `rand` in R and
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15230#discussion_r83149074
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -111,6 +111,10 @@ private[spark] class HiveExternalCatalog
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15458
If we turn this flag on, I am just afraid that some DDL statements might
not work well. I assume this flag is really meant for our Spark SQL
developers.
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/15459
[SPARK-17409] [SQL] [FOLLOW-UP] Do Not Optimize Query in CTAS More Than Once
### What changes were proposed in this pull request?
This follow-up PR is for addressing the
[comment](https
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/9973
Checked with @huaxingao, who worked on a JDBC driver team before. Yeah, we
are unable to do it using JDBC. In my previous team, we did it using native
connection methods instead of JDBC. It
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15398#discussion_r83349251
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/RegexpExpressionsSuite.scala
---
@@ -54,20 +57,32 @@ class
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15398#discussion_r83349295
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/RegexpExpressionsSuite.scala
---
@@ -90,6 +105,28 @@ class
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15398#discussion_r83349306
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/RegexpExpressionsSuite.scala
---
@@ -90,6 +105,28 @@ class
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15460
LGTM except one comment about test case
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15460#discussion_r83349707
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -636,6 +637,13 @@ private[spark] class HiveExternalCatalog
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/15478
[SPARK-17899][SQL][DO-NOT-MERGE] Check the Impact When Turning On
spark.sql.debug
### What changes were proposed in this pull request?
**DO NOT MERGE**
This PR is just to do a
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15190
retest this please
Github user gatorsmile closed the pull request at:
https://github.com/apache/spark/pull/15478
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15478
`ALTER TABLE`, `ANALYZE TABLE` and `CREATE TABLE LIKE` do not work well
when `spark.sql.debug` is set to true.
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15458
Just FYI. https://github.com/apache/spark/pull/15478 shows at least three
DDL commands do not work properly when `spark.sql.debug` is set to `true`:
`ALTER TABLE`, `ANALYZE TABLE` and `CREATE
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15478
Yeah, I see. This is just for showing the impact. Thanks!
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15465
Not sure what the reason was in the original design.
If we write data before creating the table, users will never get partial
results. If we first create a table, and then write data
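The "write first, publish second" ordering described above can be sketched with a plain-file analogy (a hypothetical helper, not Spark's code): the final path only appears once the data is complete, so readers never observe a partial result.

```python
import os
import tempfile

def write_then_publish(data: bytes, final_path: str) -> None:
    """Write the data to a temporary file first, then atomically rename it
    to the final path. Readers of final_path never see partial results,
    because the path only appears once the write is complete."""
    directory = os.path.dirname(final_path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.replace(tmp_path, final_path)  # atomic publish step
    except Exception:
        os.unlink(tmp_path)  # clean up the partial temp file on failure
        raise
```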
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15190
LGTM
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15190
Merging to master! Thanks!
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15458#discussion_r83496528
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -915,4 +915,9 @@ object StaticSQLConf {
.internal
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15190
It sounds like the other merged PRs made changes that impact your code.
Please resolve the conflicts. Thanks!
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/15494
[SPARK-17947] [SQL] Add Doc and Comment about spark.sql.debug
### What changes were proposed in this pull request?
Just document the impact of `spark.sql.debug`:
When enabling the
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15434
retest this please
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15434
cc @yhuai
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15398#discussion_r83509641
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/RegexpExpressionsSuite.scala
---
@@ -74,6 +107,31 @@ class
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15398
As @jodersky said, the SQL:2003 syntax is like
```
WHERE expression [NOT] LIKE string_pattern [ESCAPE escape_sequence]
```
```
- ESCAPE escape_sequence
Allows you to search for
```
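As a hedged sketch of what the ESCAPE clause does, here is a small Python translation of a LIKE pattern into a regex. The helper `like_to_regex` is hypothetical (not Spark's parser); an escaped `%` or `_` is matched literally.

```python
import re

def like_to_regex(pattern, escape=None):
    """Hypothetical sketch (not Spark's parser): translate a SQL LIKE
    pattern, with an optional ESCAPE character, into a Python regex.
    `%` matches any sequence, `_` matches one character, and a character
    preceded by the escape character is matched literally."""
    out = []
    i = 0
    while i < len(pattern):
        ch = pattern[i]
        if escape is not None and ch == escape and i + 1 < len(pattern):
            out.append(re.escape(pattern[i + 1]))  # escaped char: literal
            i += 2
        elif ch == "%":
            out.append(".*")
            i += 1
        elif ch == "_":
            out.append(".")
            i += 1
        else:
            out.append(re.escape(ch))
            i += 1
    return "^" + "".join(out) + "$"

# With ESCAPE '\', the pattern '10\%' matches only the literal "10%".
assert re.match(like_to_regex(r"10\%", escape="\\"), "10%")
assert not re.match(like_to_regex(r"10\%", escape="\\"), "105")
```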
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15398
@jodersky Please follow @rxin 's suggestion and submit a separate PR to
support it. Thanks!
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15398#discussion_r83512274
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/RegexpExpressionsSuite.scala
---
@@ -74,6 +107,31 @@ class
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/15502
[SPARK-17892] [SQL] [2.0] Do Not Optimize Query in CTAS More Than Once
#15048
### What changes were proposed in this pull request?
This PR is to backport https://github.com/apache/spark
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15494
You can see the failed test cases in the PR:
https://github.com/apache/spark/pull/15478
`ANALYZE TABLE` will fail due to the
[checking](https://github.com/apache/spark/blob
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15495
@yhuai Do you think it is good enough to merge? Thank you!
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15398
Also cc @yhuai , @JoshRosen and @mengxr
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15398#discussion_r83537154
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/RegexpExpressionsSuite.scala
---
@@ -74,6 +107,31 @@ class
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15432
As I mentioned above, the current change in this PR also allows
`rand` to take an expression that returns a value of a built-in integer/long
data type. Thus, you also need to test them
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15432#discussion_r83537639
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/randomExpressions.scala
---
@@ -50,22 +54,17 @@ abstract class RDG
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15432
Here, I am talking about black-box testing. If you add a new
capability to any external function, you should cover it in the test cases. This
is very fundamental when we develop
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15432
One more suggestion for your future PRs. Whenever you submit a PR, please try
to improve the test case coverage. This can help you find bugs in your code
and also benefit the whole community
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15432
Regarding the test cases for R and Python, I am fine if you do not add them
to the code base. However, please at least run them manually. We hit many
surprising bugs in the past just because
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14531
Sure, will submit a PR for it. Thanks!
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15218#discussion_r83560546
--- Diff: docs/configuration.md ---
@@ -1334,6 +1334,17 @@ Apart from these, the following properties are also
available, and may be useful
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15218#discussion_r83560691
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala ---
@@ -61,6 +59,21 @@ private[spark] class TaskSchedulerImpl
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15218#discussion_r83561571
--- Diff: docs/configuration.md ---
@@ -1334,6 +1334,17 @@ Apart from these, the following properties are also
available, and may be useful
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15218#discussion_r83562599
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/TaskSchedulerImplSuite.scala ---
@@ -408,4 +474,5 @@ class TaskSchedulerImplSuite extends
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15218#discussion_r83562596
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/TaskSchedulerImplSuite.scala ---
@@ -109,6 +109,72 @@ class TaskSchedulerImplSuite extends