Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12645#discussion_r72834461
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -83,27 +83,4 @@ private[hive] trait HiveStrategies {
Nil
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12645#discussion_r72825428
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -83,27 +83,4 @@ private[hive] trait HiveStrategies {
Nil
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12645#discussion_r72823630
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -83,27 +83,4 @@ private[hive] trait HiveStrategies {
Nil
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12645#discussion_r72822407
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -83,27 +83,4 @@ private[hive] trait HiveStrategies {
Nil
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14395#discussion_r72820228
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -20,7 +20,7 @@ package org.apache.spark.sql
import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13830#discussion_r72515446
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/ListingFileCatalog.scala
---
@@ -73,21 +73,67 @@ class ListingFileCatalog
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14284
Thanks for the review. I am merging this to master and branch 2.0.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14353#discussion_r72182390
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -33,13 +33,24 @@ case class CreateArray
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14353#discussion_r72182316
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -33,13 +33,24 @@ case class CreateArray
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14132#discussion_r72181762
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1774,6 +1775,49 @@ class Analyzer
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14284
yea. that's a good point.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14284
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14350
LGTM
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13585
@chenghao-intel Will you have time to update this PR?
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14297#discussion_r72104591
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/views.scala ---
@@ -44,7 +50,11 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14284#discussion_r72083334
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/WindowExec.scala ---
@@ -625,10 +643,12 @@ private[execution] final class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14284#discussion_r72083182
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLWindowFunctionSuite.scala
---
@@ -357,14 +356,59 @@ class SQLWindowFunctionSuite extends
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14204
In my test, the column does not exist.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14204#discussion_r71997820
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/ExecutorData.scala ---
@@ -34,5 +34,6 @@ private[cluster] class ExecutorData
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14204
@nblintao I tried `./bin/spark-shell --master=local-cluster[2,1,1024]`.
It seems those worker links do not show up. Maybe something has changed and
the links no longer appear?
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13620#discussion_r71997586
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -369,3 +375,246 @@ private[ui] class AllJobsPage(parent: JobsTab)
extends
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13620
@nblintao Can you comment on your PR to explain which parts are new code
and which parts are based on existing code?
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13620#discussion_r71997489
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -210,64 +214,69 @@ private[ui] class AllJobsPage(parent: JobsTab)
extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13620#discussion_r71997400
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -210,64 +214,69 @@ private[ui] class AllJobsPage(parent: JobsTab)
extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14331#discussion_r71996213
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -365,9 +365,6 @@ private[hive] class HiveClientImpl
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14302#discussion_r71995943
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -520,7 +522,7 @@ case class DescribeTableCommand(table
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14283
LGTM pending Jenkins (I triggered the tests again in case some changes merged
in the past 4 days are causing issues with this one).
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14283
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14295
@liancheng Can you also change `First`? I think that one is also broken
for this case.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14318
Let's create a jira :)
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14036
Having a query just to test this expression is good.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14284#discussion_r71782889
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLWindowFunctionSuite.scala
---
@@ -357,14 +356,59 @@ class SQLWindowFunctionSuite extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14284#discussion_r71781548
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLWindowFunctionSuite.scala
---
@@ -357,14 +356,59 @@ class SQLWindowFunctionSuite extends
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14289
I am merging this PR to master and branch 2.0
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14289
Hopefully this can make the test more stable by using different temp dirs
for different tests.
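The per-test temp-dir idea above can be sketched in plain Scala. The
`withTempDir` helper below is a hypothetical illustration, not Spark's
actual test utility:

```scala
import java.nio.file.{Files, Path}

// Hypothetical helper: every test gets its own scratch directory, so stale
// files left behind by one test can never be seen by another test.
def withTempDir[T](testName: String)(body: Path => T): T = {
  val dir = Files.createTempDirectory(s"spark-test-$testName-")
  try body(dir)
  finally Files.deleteIfExists(dir) // safe here: the directory is left empty
}

val dirA = withTempDir("test-a")(_.toString)
val dirB = withTempDir("test-b")(_.toString)
assert(dirA != dirB) // distinct directories, so no cross-test interference
```

Because `createTempDirectory` appends a unique suffix per call, two tests
sharing a name still get disjoint directories.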
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14281
Thanks. I am merging this to master and branch 2.0.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14014
Thank you! I am going to merge this to master.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14278#discussion_r71626483
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java
---
@@ -136,7 +137,9 @@ public void
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14289
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14289
test this please
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14289
[SPARK-16656] [SQL] Try to make CreateTableAsSelectSuite more stable
## What changes were proposed in this pull request?
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62593
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14281#discussion_r71614119
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/plans/ConstraintPropagationSuite.scala
---
@@ -79,13 +79,15 @@ class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14284#discussion_r71598723
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLWindowFunctionSuite.scala
---
@@ -367,4 +367,50 @@ class SQLWindowFunctionSuite
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14284#discussion_r71598678
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLWindowFunctionSuite.scala
---
@@ -357,14 +356,59 @@ class SQLWindowFunctionSuite extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14284#discussion_r71588935
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLWindowFunctionSuite.scala
---
@@ -367,4 +367,50 @@ class SQLWindowFunctionSuite
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14284
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14284
Without a good reason and a way to make lead and lag respect nulls, we
should not change the behavior.
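As an assumption-level illustration of the semantics under discussion: a
`lead(1)` that respects nulls surfaces a null input value as-is rather than
skipping past it to the next non-null value. (Spark's WindowExec implements
this very differently; the sketch only shows the contract.)

```scala
// lead(1) over an ordered column, respecting nulls: a null input value is
// returned as-is, never skipped in search of the next non-null value.
def lead[A](xs: Seq[Option[A]], offset: Int = 1): Seq[Option[A]] =
  xs.indices.map { i =>
    val j = i + offset
    if (j < xs.length) xs(j) else None // past the partition end => null
  }

lead(Seq(Some(1), None, Some(3))) // Seq(None, Some(3), None): the null survives
```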
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14284#discussion_r71489063
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/WindowExec.scala ---
@@ -582,25 +582,43 @@ private[execution] final class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14284#discussion_r71488537
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/windowExpressions.scala
---
@@ -382,7 +382,7 @@ abstract class
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14284
[SPARK-16633] [SPARK-16642] Fixes three issues related to window functions
## What changes were proposed in this pull request?
This PR contains three changes.
First, this PR changes the
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14272
yea. I think the fix is pretty safe. After discussion with @liancheng, it
seems the more general fix is to just use the requested catalyst schema to
initialize the vectorized reader.
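The idea of driving the reader from the requested schema can be sketched as
follows. The `Field`/`readRow` names are hypothetical; this is not the
vectorized reader's actual code:

```scala
// Sketch: drive reading from the *requested* (catalyst) schema instead of
// the file's physical schema, so a column the file lacks yields null
// instead of causing a schema mismatch.
case class Field(name: String)

def readRow(fileRow: Map[String, Any], requested: Seq[Field]): Seq[Any] =
  requested.map(f => fileRow.getOrElse(f.name, null))

readRow(Map("a" -> 1), Seq(Field("a"), Field("b"))) // Seq(1, null)
```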
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14267
[SPARK-15705] [SQL] Change the default value of
spark.sql.hive.convertMetastoreOrc to false.
## What changes were proposed in this pull request?
In 2.0, we add a new logic to convert
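After the default flips to false, a user who still wants the conversion
would opt back in explicitly, e.g. in `spark-defaults.conf` (the flag name
is taken from the PR title above):

```
# spark-defaults.conf -- opt back in to metastore ORC conversion
spark.sql.hive.convertMetastoreOrc  true
```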
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14014
Let's also update the description.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14014#discussion_r71277147
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala
---
@@ -442,13 +445,23 @@ private[parquet
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14014#discussion_r71276489
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRecordMaterializer.scala
---
@@ -30,10 +30,11 @@ import
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14245
Thanks. Merging to master and branch 2.0.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r71273081
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -146,6 +151,15 @@ case class CatalogTable
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r71272934
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -303,6 +303,7 @@ object
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r71272434
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -313,18 +313,48 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r71272290
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -146,6 +151,15 @@ case class CatalogTable
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14036
@techaddict Can you test the performance with and without your change?
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14249
I am merging this PR to master and branch 2.0.
Thanks @adrian-wang
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14249#discussion_r71227856
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -1329,7 +1332,7 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14028
Merged to master.
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14249
[SPARK-16515][SQL]set default record reader and writer for script
transformation
## What changes were proposed in this pull request?
In ScriptInputOutputSchema, we read default RecordReader and
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14169#discussion_r71192358
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -1306,7 +1306,7 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14245
LGTM. Can we reuse an existing JIRA number?
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14169#discussion_r71102534
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -1340,10 +1340,17 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71097210
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonParser.scala
---
@@ -35,184 +34,306 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71096802
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonParser.scala
---
@@ -35,184 +34,306 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71096761
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonParser.scala
---
@@ -35,184 +34,306 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71096584
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonParser.scala
---
@@ -35,184 +34,306 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71096571
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonParser.scala
---
@@ -35,184 +34,306 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71096401
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonParser.scala
---
@@ -35,184 +34,306 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71096388
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonParser.scala
---
@@ -35,184 +34,306 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71096347
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonParser.scala
---
@@ -35,184 +34,306 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71095725
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JSONOptions.scala
---
@@ -51,7 +53,8 @@ private[sql] class JSONOptions
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14028
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14028
LGTM pending jenkins.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14169#discussion_r71058385
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -1329,7 +1329,7 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user yhuai closed the pull request at:
https://github.com/apache/spark/pull/14139
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
Thank you! I am merging this PR to branch 1.6.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14139#discussion_r70843685
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -273,6 +273,20 @@ private[hive] class HiveMetastoreCatalog(val
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
@rxin I think this version is the minimal change. Since the partition
discovery logic is inside HadoopFsRelation in 1.6 and the refresh is
triggered by using a lazy val, passing a flag down will
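The refresh mechanism described above can be illustrated with a minimal
sketch (assumed names; this is not the actual 1.6 `HadoopFsRelation` code):

```scala
// Sketch of the mechanism: a value computed lazily and cached, with refresh
// implemented by dropping the cache so the next access recomputes it.
class CachedRelation(load: () => Seq[String]) {
  private var cache: Option[Seq[String]] = None
  def get: Seq[String] = cache.getOrElse { val v = load(); cache = Some(v); v }
  def refresh(): Unit = cache = None // next get re-runs partition discovery
}

var calls = 0
val rel = new CachedRelation(() => { calls += 1; Seq("part=1") })
rel.get; rel.get // loaded once, second access hits the cache
rel.refresh()
rel.get          // loaded again after the refresh
println(calls)   // 2
```

The downside noted in the comment follows from this shape: because the load
happens implicitly on first access, threading an extra flag into it requires
touching every access path rather than one call site.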
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14139#discussion_r70727924
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -273,6 +273,22 @@ private[hive] class HiveMetastoreCatalog(val
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14148
LGTM. Merging to master and branch 2.0
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14148#discussion_r70571914
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -413,38 +413,36 @@ case class DescribeTableCommand(table
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14148#discussion_r70570551
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -105,7 +105,7 @@ case class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14148#discussion_r70570489
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -431,7 +431,7 @@ case class DescribeTableCommand(table
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13701
@viirya Thank you for updating this. Our schedules are pretty packed for
the release. We can take a look at it once 2.0 is released.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
let me take another look to see if there is a better change.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
cc @marmbrus
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/11317
lgtm. Merging to master.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/11317
ok to test
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/11317
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14020
lgtm. Merging to master