Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15703#discussion_r88170694
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
---
@@ -263,8 +265,19 @@ private[hive] case class HiveGenericUDTF
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15703#discussion_r88170550
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
---
@@ -263,8 +265,19 @@ private[hive] case class HiveGenericUDTF
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15703#discussion_r88168751
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
---
@@ -263,8 +265,19 @@ private[hive] case class HiveGenericUDTF
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15703#discussion_r88140970
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
---
@@ -365,4 +380,66 @@ private[hive] case class HiveUDAFFunction(
val
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15703#discussion_r88141512
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveUDAFSuite.scala
---
@@ -0,0 +1,152 @@
+/*
+ * Licensed to the Apache
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15703#discussion_r88140760
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
---
@@ -365,4 +380,66 @@ private[hive] case class HiveUDAFFunction(
val
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15703#discussion_r88141072
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
---
@@ -365,4 +380,66 @@ private[hive] case class HiveUDAFFunction(
val
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15703#discussion_r88140933
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
---
@@ -365,4 +380,66 @@ private[hive] case class HiveUDAFFunction(
val
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15703#discussion_r88141492
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveUDAFSuite.scala
---
@@ -0,0 +1,152 @@
+/*
+ * Licensed to the Apache
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15703#discussion_r88140713
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
---
@@ -289,73 +302,75 @@ private[hive] case class HiveUDAFFunction
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15703#discussion_r88140381
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
---
@@ -289,73 +302,75 @@ private[hive] case class HiveUDAFFunction
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15857
Seems this breaks the scala 2.10 build?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15857
```
[error] [warn]
/home/jenkins/workspace/spark-master-compile-sbt-scala-2.10/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala:439:
Cannot check match for
Repository: spark
Updated Branches:
refs/heads/master f14ae4900 -> 745ab8bc5
[SPARK-18379][SQL] Make the parallelism of parallelPartitionDiscovery
configurable.
## What changes were proposed in this pull request?
The largest parallelism in PartitioningAwareFileIndex#listLeafFilesInParallel
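The change described above replaces a hard-coded cap on the number of tasks used for parallel partition discovery with a user-configurable one. A minimal sketch of that idea, in Java for illustration only (the method name, class name, and default below are made up, not Spark's actual config keys or code, which are in Scala):

```java
// Illustrative sketch of SPARK-18379's idea: bound the parallelism of a
// parallel file-listing job by a configurable ceiling instead of a constant.
// All names and the default value here are hypothetical.
public class ListingParallelism {
    static final int DEFAULT_MAX = 10000; // hypothetical ceiling

    // One task per path to list, but never more than the configured maximum,
    // and always at least one task.
    static int parallelism(int numPaths, int configuredMax) {
        return Math.max(1, Math.min(numPaths, configuredMax));
    }

    public static void main(String[] args) {
        System.out.println(parallelism(50, DEFAULT_MAX)); // few paths: 50 tasks
        System.out.println(parallelism(1_000_000, 32));   // capped at 32
    }
}
```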
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15829
lgtm. Merging to master.
Repository: spark
Updated Branches:
refs/heads/branch-2.0 c8628e877 -> 6e7310590
[SPARK-18368][SQL] Fix regexp replace when serialized
## What changes were proposed in this pull request?
This makes the result value both transient and lazy, so that if the
RegExpReplace object is initialized t
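The fix described makes the cached result transient and lazy, so a deserialized copy rebuilds it on first use instead of failing. Spark's actual change is in the Scala `RegExpReplace` expression (presumably a `@transient lazy val`); the Java sketch below only illustrates the transient-plus-lazy pattern, and every name in it is made up:

```java
import java.io.*;
import java.util.regex.Pattern;

public class TransientLazyDemo {
    // Hypothetical stand-in for an expression that caches a compiled regex.
    // The compiled Pattern is marked transient: it is not written out during
    // task serialization and is recompiled lazily after deserialization.
    static class RegexpReplaceLike implements Serializable {
        private static final long serialVersionUID = 1L;
        private final String regex;
        private transient Pattern compiled; // null after deserialization

        RegexpReplaceLike(String regex) { this.regex = regex; }

        String replace(String input, String replacement) {
            if (compiled == null) {            // lazy (re)initialization
                compiled = Pattern.compile(regex);
            }
            return compiled.matcher(input).replaceAll(replacement);
        }
    }

    // Serialize and deserialize an object, as task shipping would.
    static Object roundTrip(Object o) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(bos);
            oos.writeObject(o);
            oos.flush();
            ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
            return ois.readObject();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        RegexpReplaceLike original = new RegexpReplaceLike("a+");
        original.replace("aaab", "x");         // force compilation before shipping
        RegexpReplaceLike copy = (RegexpReplaceLike) roundTrip(original);
        System.out.println(copy.replace("aaab", "x")); // works after deserialization
    }
}
```

Without the transient marker, serialization would fail (or ship stale state); without the lazy rebuild, the deserialized copy would hit a null field.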
Repository: spark
Updated Branches:
refs/heads/branch-2.1 626f6d6d4 -> 80f58510a
[SPARK-18368][SQL] Fix regexp replace when serialized
## What changes were proposed in this pull request?
This makes the result value both transient and lazy, so that if the
RegExpReplace object is initialized t
Repository: spark
Updated Branches:
refs/heads/master 47636618a -> d4028de97
[SPARK-18368][SQL] Fix regexp replace when serialized
## What changes were proposed in this pull request?
This makes the result value both transient and lazy, so that if the
RegExpReplace object is initialized then
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15834
Great. Thanks!
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15834
![image](https://cloud.githubusercontent.com/assets/2072857/20150618/9d871bf2-a66b-11e6-8d21-1a9bb6eb27d7.png)
Since tests have already passed, I am merging this PR to
master/branch-2.1
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15834
Awesome! btw looks like your original changes in
`ExpressionEvalHelper.scala`
(https://github.com/apache/spark/pull/15816/files#diff-41747ec3f56901eb7bfb95d2a217e94d)
uncovered issues with other
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15829#discussion_r87253993
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -396,6 +396,13 @@ object SQLConf {
.intConf
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15816
@rdblue Can you send a new pr with the fix? Thanks!
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15816
Reverted from master/branch-2.1/branch-2.0.
Repository: spark
Updated Branches:
refs/heads/branch-2.0 bdddc661b -> c8628e877
Revert "[SPARK-18368] Fix regexp_replace with task serialization."
This reverts commit b9192bb3ffc319ebee7dbd15c24656795e454749.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-
Repository: spark
Updated Branches:
refs/heads/branch-2.1 5bd31dc9d -> 626f6d6d4
Revert "[SPARK-18368] Fix regexp_replace with task serialization."
This reverts commit b9192bb3ffc319ebee7dbd15c24656795e454749.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-
Repository: spark
Updated Branches:
refs/heads/master 06a13ecca -> 47636618a
Revert "[SPARK-18368] Fix regexp_replace with task serialization."
This reverts commit b9192bb3ffc319ebee7dbd15c24656795e454749.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.a
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15816
oh, seems the last commit did not pass build.
Sorry. I am going to revert this patch.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15816
I am wondering if it breaks some tests?
```
org.apache.spark.sql.catalyst.expressions.MathExpressionsSuite.e
org.apache.spark.sql.catalyst.expressions.MathExpressionsSuite.pi
Repository: spark
Updated Branches:
refs/heads/master 02c5325b8 -> 205e6d586
[SPARK-18338][SQL][TEST-MAVEN] Fix test case initialization order under Maven
builds
## What changes were proposed in this pull request?
Test case initialization order under Maven and SBT is different. Maven always
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15802
I am going to merge this to fix maven build.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15812
Yea. Warehouse location should not be session-specific. Since we will
propagate it to hive, it is shared by all sessions.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15802
Seems it does not work with sbt?
Repository: spark
Updated Branches:
refs/heads/master 4cee2ce25 -> 0e3312ee7
[SPARK-18256] Improve the performance of event log replay in HistoryServer
## What changes were proposed in this pull request?
This patch significantly improves the performance of event log replay in the
HistoryServ
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15756
Cool. Merging to master!
Repository: spark
Updated Branches:
refs/heads/branch-2.1 e51978c3d -> 0a303a694
[SPARK-18167] Re-enable the non-flaky parts of SQLQuerySuite
## What changes were proposed in this pull request?
It seems the proximate cause of the test failures is that `cast(str as
decimal)` in derby will rai
Repository: spark
Updated Branches:
refs/heads/master 550cd56e8 -> 4cee2ce25
[SPARK-18167] Re-enable the non-flaky parts of SQLQuerySuite
## What changes were proposed in this pull request?
It seems the proximate cause of the test failures is that `cast(str as
decimal)` in derby will raise a
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15725
lgtm. merging to master and branch 2.1.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15756
LGTM
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14750#discussion_r86470657
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -417,11 +429,12 @@ private[spark] class HiveExternalCatalog(conf
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14750#discussion_r86469470
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -95,8 +95,11 @@ private[spark] class HiveExternalCatalog(conf
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14750#discussion_r86471858
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -537,22 +559,11 @@ private[spark] class HiveExternalCatalog(conf
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14750#discussion_r86469682
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -255,6 +267,12 @@ private[spark] class HiveExternalCatalog(conf
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14750#discussion_r86471149
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -475,18 +490,27 @@ private[spark] class HiveExternalCatalog(conf
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14750#discussion_r86472353
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -620,7 +667,9 @@ private[spark] class HiveExternalCatalog(conf
Repository: spark
Updated Branches:
refs/heads/master 66a99f4a4 -> 27daf6bcd
[SPARK-17949][SQL] A JVM object based aggregate operator
## What changes were proposed in this pull request?
This PR adds a new hash-based aggregate operator named
`ObjectHashAggregateExec` that supports `TypedImper
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15590
LGTM. Merging to master.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15024
OK. Let's get https://github.com/apache/spark/pull/14750 updated to fix
SPARK-17183.
Repository: spark
Updated Branches:
refs/heads/branch-2.1 2aff2ea81 -> 5ea2f9e5e
[SPARK-17470][SQL] unify path for data source table and locationUri for hive
serde table
## What changes were proposed in this pull request?
Due to a limitation of the hive metastore (table location must be a directory
Repository: spark
Updated Branches:
refs/heads/master fd90541c3 -> 3a1bc6f47
[SPARK-17470][SQL] unify path for data source table and locationUri for hive
serde table
## What changes were proposed in this pull request?
Due to a limitation of the hive metastore (table location must be a directory pat
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15024
LGTM. Merging to master and branch 2.1.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15024#discussion_r86273774
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala
---
@@ -517,15 +517,15 @@ class SQLQuerySuite extends QueryTest
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15024#discussion_r86272489
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -189,66 +188,39 @@ private[spark] class HiveExternalCatalog(conf
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15024#discussion_r86070024
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -91,7 +73,8 @@ case class CreateTableLikeCommand
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15024#discussion_r86070455
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -207,6 +207,9 @@ private[spark] class HiveExternalCatalog(conf
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15024#discussion_r86069832
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -85,14 +86,7 @@ case class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15024#discussion_r86069964
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -665,15 +665,7 @@ case class AlterTableSetLocationCommand
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15024#discussion_r86070624
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -383,8 +389,22 @@ private[spark] class HiveExternalCatalog(conf
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15024#discussion_r86070959
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -513,6 +555,16 @@ private[spark] class HiveExternalCatalog(conf
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15024#discussion_r86070329
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/sources/PathOptionSuite.scala ---
@@ -0,0 +1,97 @@
+/*
+* Licensed to the Apache Software
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15024#discussion_r86070568
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -259,10 +266,9 @@ private[spark] class HiveExternalCatalog(conf
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15024#discussion_r86070169
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -541,3 +434,123 @@ case class DataSource
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15024#discussion_r86066888
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/InMemoryCatalog.scala
---
@@ -196,18 +196,32 @@ class InMemoryCatalog
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15725
lgtm
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15701
merging to master. Thanks
Repository: spark
Updated Branches:
refs/heads/master de3f87fa7 -> 6633b97b5
[SPARK-18167][SQL] Also log all partitions when the SQLQuerySuite test flakes
## What changes were proposed in this pull request?
One possibility for this test flaking is that we have corrupted the partition
schema
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15701
lgtm
Repository: spark
Updated Branches:
refs/heads/master 26b07f190 -> 8bfc3b7aa
[SPARK-17972][SQL] Add Dataset.checkpoint() to truncate large query plans
## What changes were proposed in this pull request?
### Problem
Iterative ML code may easily create query plans that grow exponentially. We
f
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15651
lgtm pending jenkins
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15024
> 3. before passing storage properties to DataSource as data source
options, add locationUri as path option, to keep the previous behaviour, i.e.
the path option always exists(even users did
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15024
What do we do for data source tables if the path is a single file?
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15651#discussion_r85645164
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ExistingRDD.scala ---
@@ -130,17 +130,40 @@ case class ExternalRDDScanExec[T
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15651#discussion_r85645138
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ExistingRDD.scala ---
@@ -130,17 +130,40 @@ case class ExternalRDDScanExec[T
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15676#discussion_r85627218
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala ---
@@ -585,7 +586,19 @@ private[client] class Shim_v0_13 extends Shim_v0_12
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15657
test this please
Repository: spark
Updated Branches:
refs/heads/master 79fd0cc05 -> ccb115430
http://git-wip-us.apache.org/repos/asf/spark/blob/ccb11543/sql/hive/src/test/scala/org/apache/spark/sql/hive/PartitionProviderCompatibilitySuite.scala
---
[SPARK-17970][SQL] store partition spec in metastore for data source table
## What changes were proposed in this pull request?
We should follow hive table and also store partition spec in metastore for data
source table.
This brings 2 benefits:
1. It's more flexible to manage the table data fil
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15515
Cool. I am merging this pr to unblock other tasks.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15515#discussion_r85420521
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeColumnCommand.scala
---
@@ -50,7 +50,8 @@ case class AnalyzeColumnCommand
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15515#discussion_r85415502
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -387,7 +388,15 @@ final class DataFrameWriter[T] private[sql](ds:
Dataset
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15515
Looks good. I left a few questions. Let me know if you want to address them
in follow-up prs.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15515#discussion_r85421683
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -531,6 +529,11 @@ case class AlterTableRecoverPartitionsCommand
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15515#discussion_r85421410
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -232,6 +238,15 @@ case class
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15661
LGTM
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15657
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15657
test this please
Repository: spark
Updated Branches:
refs/heads/branch-2.0 dcf2f090c -> 1a4be51d6
[SPARK-18132] Fix checkstyle
This PR fixes checkstyle.
Author: Yin Huai
Closes #15656 from yhuai/fix-format.
(cherry picked from commit d3b4831d009905185ad74096ce3ecfa934bc191d)
Signed-off-by: Yin H
Repository: spark
Updated Branches:
refs/heads/master dd4f088c1 -> d3b4831d0
[SPARK-18132] Fix checkstyle
This PR fixes checkstyle.
Author: Yin Huai
Closes #15656 from yhuai/fix-format.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/re
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15656
merging to master and branch 2.0.
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/15656
[SPARK-18132] Fix checkstyle
This PR fixes checkstyle.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/yhuai/spark fix-format
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15651#discussion_r85259326
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala
---
@@ -919,6 +922,44 @@ class DatasetSuite extends QueryTest with
SharedSQLContext
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15520
lgtm
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15590
lgtm!
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15024
For a CatalogTable, will its option still have `path` set?
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15024#discussion_r84984048
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/InMemoryCatalog.scala
---
@@ -196,18 +196,30 @@ class InMemoryCatalog
Repository: spark
Updated Branches:
refs/heads/branch-2.0 1c1e847bc -> 7c8d9a557
[SPARK-18070][SQL] binary operator should not consider nullability when
comparing input types
## What changes were proposed in this pull request?
A binary operator requires its inputs to be of the same type, but it sh
Repository: spark
Updated Branches:
refs/heads/master c5fe3dd4f -> a21791e31
[SPARK-18070][SQL] binary operator should not consider nullability when
comparing input types
## What changes were proposed in this pull request?
A binary operator requires its inputs to be of the same type, but it should
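The commit above makes binary operators compare input types while ignoring nullability, so that, for example, an array of nullable ints and an array of non-nullable ints count as the same type for type checking. The Java sketch below is purely illustrative of that comparison rule (Spark's real types and checks are in Scala; every name here is made up):

```java
// Illustrative only: a tiny "array type" whose comparison for operator
// type checking ignores the nullability flag, per the commit message above.
public class SameTypeDemo {
    static final class ArrayType {
        final String elementType;
        final boolean containsNull;

        ArrayType(String elementType, boolean containsNull) {
            this.elementType = elementType;
            this.containsNull = containsNull;
        }

        // Over-strict check: element type AND nullability must both match.
        boolean strictEquals(ArrayType other) {
            return elementType.equals(other.elementType)
                && containsNull == other.containsNull;
        }

        // What a binary operator should use: nullability is ignored.
        boolean sameType(ArrayType other) {
            return elementType.equals(other.elementType);
        }
    }

    public static void main(String[] args) {
        ArrayType nonNullable = new ArrayType("int", false);
        ArrayType nullable = new ArrayType("int", true);
        System.out.println(nonNullable.strictEquals(nullable)); // false: rejects valid inputs
        System.out.println(nonNullable.sameType(nullable));     // true: nullability ignored
    }
}
```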
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15606
LGTM. Merging to master and branch 2.0.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15394
This breaks the scala 2.10 build. Can you fix the problem?
```
[error]
/home/jenkins/workspace/spark-master-compile-sbt-scala-2.10/mllib/src/test/scala/org/apache/spark/ml/optim
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15024
Can you update the description to explain how we handle sources that are
not file-based (e.g. jdbc)?