Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14289
test this please
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14289
[SPARK-16656] [SQL] Try to make CreateTableAsSelectSuite more stable
## What changes were proposed in this pull request?
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62593
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14281#discussion_r71614119
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/plans/ConstraintPropagationSuite.scala
---
@@ -79,13 +79,15 @@ class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14284#discussion_r71598723
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLWindowFunctionSuite.scala
---
@@ -367,4 +367,50 @@ class SQLWindowFunctionSuite
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14284#discussion_r71598678
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLWindowFunctionSuite.scala
---
@@ -357,14 +356,59 @@ class SQLWindowFunctionSuite extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14284#discussion_r71588935
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLWindowFunctionSuite.scala
---
@@ -367,4 +367,50 @@ class SQLWindowFunctionSuite
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14284
test this please
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14284
Without a good reason and a way to make lead and lag respect
nulls, we should not change the behavior.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14284#discussion_r71489063
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/WindowExec.scala ---
@@ -582,25 +582,43 @@ private[execution] final class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14284#discussion_r71488537
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/windowExpressions.scala
---
@@ -382,7 +382,7 @@ abstract class
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14284
[SPARK-16633] [SPARK-16642] Fixes three issues related to window functions
## What changes were proposed in this pull request?
This PR contains three changes.
First, this PR changes the
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14272
yea. I think the fix is pretty safe. After discussion with @liancheng, it
seems the more general fix is to just use the requested Catalyst schema to
initialize the vectorized reader.
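The idea behind that fix, driving reader initialization from the requested Catalyst schema rather than the full file schema, can be sketched with hypothetical `Field`/`Schema` types (this is not Spark's actual vectorized-reader API):

```scala
// Hypothetical stand-ins for Catalyst/Parquet schema types; the point is that
// the reader is constructed from the *requested* schema, so column pruning is
// applied before any reading happens.
case class Field(name: String, dataType: String)
case class Schema(fields: Seq[Field])

class VectorizedReaderSketch(requestedSchema: Schema) {
  // Only the requested columns are ever materialized.
  def columnsToRead: Seq[String] = requestedSchema.fields.map(_.name)
}

val fileSchema = Schema(Seq(
  Field("a", "int"), Field("b", "string"), Field("c", "double")))
// The query asks only for columns a and c.
val requested = Schema(fileSchema.fields.filterNot(_.name == "b"))
val reader    = new VectorizedReaderSketch(requested)
println(reader.columnsToRead.mkString(","))  // a,c
```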
---
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14267
[SPARK-15705] [SQL] Change the default value of
spark.sql.hive.convertMetastoreOrc to false.
## What changes were proposed in this pull request?
In 2.0, we add a new logic to convert
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14014
Let's also update the description.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14014#discussion_r71277147
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala
---
@@ -442,13 +445,23 @@ private[parquet
Repository: spark
Updated Branches:
refs/heads/branch-2.0 24ea87519 -> ef2a6f131
[SPARK-16303][DOCS][EXAMPLES] Minor Scala/Java example update
## What changes were proposed in this pull request?
This PR moves one and the last hard-coded Scala example snippet from the SQL
programming guide in
Repository: spark
Updated Branches:
refs/heads/master e5fbb182c -> 1426a0805
[SPARK-16303][DOCS][EXAMPLES] Minor Scala/Java example update
## What changes were proposed in this pull request?
This PR moves one and the last hard-coded Scala example snippet from the SQL
programming guide into `
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14014#discussion_r71276489
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRecordMaterializer.scala
---
@@ -30,10 +30,11 @@ import
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14245
Thanks. Merging to master and branch 2.0.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r71273081
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -146,6 +151,15 @@ case class CatalogTable
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r71272934
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -303,6 +303,7 @@ object
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r71272434
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -313,18 +313,48 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r71272290
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -146,6 +151,15 @@ case class CatalogTable
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14036
@techaddict Can you test the performance with and without your change?
---
ark, but would fail now.
## How was this patch tested?
added a test case in SQLQuerySuite.
Closes #14169
Author: Daoyuan Wang
Author: Yin Huai
Closes #14249 from yhuai/scriptTransformation.
(cherry picked from commit 96e9afaae93318250334211cc80ed0fee3d055b9)
Signed-off-by: Yin Huai
Proj
ark, but would fail now.
## How was this patch tested?
added a test case in SQLQuerySuite.
Closes #14169
Author: Daoyuan Wang
Author: Yin Huai
Closes #14249 from yhuai/scriptTransformation.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14249
I am merging this PR to master and branch 2.0.
Thanks @adrian-wang
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14249#discussion_r71227856
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -1329,7 +1332,7 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14028
Merged to master.
---
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14249
[SPARK-16515][SQL]set default record reader and writer for script
transformation
## What changes were proposed in this pull request?
In ScriptInputOutputSchema, we read default RecordReader and
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14169#discussion_r71192358
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -1306,7 +1306,7 @@ class SparkSqlAstBuilder(conf: SQLConf
Repository: spark
Updated Branches:
refs/heads/master 8ea3f4eae -> 2877f1a52
[SPARK-16351][SQL] Avoid per-record type dispatch in JSON when writing
## What changes were proposed in this pull request?
Currently, `JacksonGenerator.apply` is doing type-based dispatch for each row
to write appro
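The technique here is general: instead of matching on the data type for every record, resolve one writer function per field from the schema up front and reuse it for each row. A minimal illustration (hypothetical types, not Spark's actual `JacksonGenerator` internals):

```scala
// Toy data types standing in for Catalyst types.
sealed trait DataType
case object IntType extends DataType
case object StringType extends DataType

type ValueWriter = Any => String

// Type dispatch happens here, once per schema field...
def makeWriter(dt: DataType): ValueWriter = dt match {
  case IntType    => v => v.toString
  case StringType => v => "\"" + v + "\""
}

val schema  = Seq(IntType, StringType)
val writers = schema.map(makeWriter)

// ...so the per-record path is just closure application, with no matching.
def writeRow(row: Seq[Any]): String =
  row.zip(writers).map { case (v, w) => w(v) }.mkString("[", ",", "]")

println(writeRow(Seq(1, "a")))  // [1,"a"]
```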
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14245
LGTM. Can we reuse an existing JIRA number?
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14169#discussion_r71102534
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -1340,10 +1340,17 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71097210
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonParser.scala
---
@@ -35,184 +34,306 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71096802
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonParser.scala
---
@@ -35,184 +34,306 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71096761
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonParser.scala
---
@@ -35,184 +34,306 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71096584
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonParser.scala
---
@@ -35,184 +34,306 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71096571
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonParser.scala
---
@@ -35,184 +34,306 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71096401
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonParser.scala
---
@@ -35,184 +34,306 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71096388
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonParser.scala
---
@@ -35,184 +34,306 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71096347
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonParser.scala
---
@@ -35,184 +34,306 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71095725
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JSONOptions.scala
---
@@ -51,7 +53,8 @@ private[sql] class JSONOptions
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14028
test this please
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14028
LGTM pending jenkins.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14169#discussion_r71058385
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -1329,7 +1329,7 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user yhuai closed the pull request at:
https://github.com/apache/spark/pull/14139
---
## How was this patch tested?
Manually tested.
**Note: This is a backport of https://github.com/apache/spark/pull/13987**
Author: Yin Huai
Closes #14139 from yhuai/SPARK-16313-branch-1.6.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/com
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
Thank you! I am merging this PR to branch 1.6.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14139#discussion_r70843685
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -273,6 +273,20 @@ private[hive] class HiveMetastoreCatalog(val
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
@rxin I think this version is the minimal change. Since the partition
discovery logic is inside HadoopFsRelation in 1.6 and the refresh is triggered
by using a lazy val, passing a flag down will
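The lazy-val point above is worth spelling out: the body of a `lazy val` runs only on first access, so any listing or refresh work (and any warning it would emit) is deferred past construction. A small illustration with hypothetical names:

```scala
// Tracks how many times discovery actually runs.
var refreshCount = 0

class RelationSketch(discoverPartitions: () => Seq[String]) {
  // Nothing happens at construction; the refresh fires on first access only.
  lazy val partitions: Seq[String] = {
    refreshCount += 1
    discoverPartitions()
  }
}

val rel = new RelationSketch(() => Seq("part=1", "part=2"))
assert(refreshCount == 0)   // "table creation" alone triggered no refresh
rel.partitions
rel.partitions
assert(refreshCount == 1)   // refreshed exactly once, lazily
```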
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14139#discussion_r70727924
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -273,6 +273,22 @@ private[hive] class HiveMetastoreCatalog(val
Repository: spark
Updated Branches:
refs/heads/branch-2.0 9e3a59858 -> 550d0e7dc
[SPARK-16482][SQL] Describe Table Command for Tables Requiring Runtime Inferred
Schema
What changes were proposed in this pull request?
If we create a table pointing to a parquet/json datasets without specif
Repository: spark
Updated Branches:
refs/heads/master fb2e8eeb0 -> c5ec87982
[SPARK-16482][SQL] Describe Table Command for Tables Requiring Runtime Inferred
Schema
What changes were proposed in this pull request?
If we create a table pointing to a parquet/json datasets without specifying
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14148
LGTM. Merging to master and branch 2.0
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14148#discussion_r70571914
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -413,38 +413,36 @@ case class DescribeTableCommand(table
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14148#discussion_r70570551
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -105,7 +105,7 @@ case class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14148#discussion_r70570489
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -431,7 +431,7 @@ case class DescribeTableCommand(table
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13701
@viirya Thank you for updating this. Our schedules are pretty packed for
the release. We can take a look at it once 2.0 is released.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
let me take another look to see if there is a better change.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
cc @marmbrus
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
test this please
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
test this please
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
test this please
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
test this please
---
Repository: spark
Updated Branches:
refs/heads/master 9cc74f95e -> b1e5281c5
[SPARK-12639][SQL] Mark Filters Fully Handled By Sources with *
## What changes were proposed in this pull request?
In order to make it clear which filters are fully handled by the
underlying datasource we will mark
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/11317
lgtm. Merging to master.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/11317
ok to test
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/11317
test this please
---
Repository: spark
Updated Branches:
refs/heads/master 7f38b9d5f -> b4fbe140b
[SPARK-16349][SQL] Fall back to isolated class loader when classes not found.
Some Hadoop classes needed by the Hive metastore client jars are not present
in Spark's packaging (for example, "org/apache/hadoop/mapred/M
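The fallback idea can be sketched generically: attempt the primary loader and, on `ClassNotFoundException`, delegate to another. This shows the pattern only, not Spark's actual `IsolatedClientLoader` code:

```scala
def loadWithFallback(name: String,
                     primary: ClassLoader,
                     fallback: ClassLoader): Class[_] =
  try primary.loadClass(name)
  catch { case _: ClassNotFoundException => fallback.loadClass(name) }

// A primary loader that never finds anything, to exercise the fallback path.
val primary = new ClassLoader(null) {
  override def loadClass(name: String): Class[_] =
    throw new ClassNotFoundException(name)
}
val fallback = ClassLoader.getSystemClassLoader

println(loadWithFallback("java.util.ArrayList", primary, fallback).getName)
```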
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14020
lgtm. Merging to master
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14020#discussion_r70337582
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/IsolatedClientLoader.scala
---
@@ -220,9 +220,15 @@ private[hive] class
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14020
also cc @marmbrus
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14020#discussion_r70335850
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/IsolatedClientLoader.scala
---
@@ -220,9 +220,15 @@ private[hive] class
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14020
Will putting that jar in Spark's classpath work? Seems so?
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13973
@srowen Seems this commit breaks 1.6 builds
(https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-branch-1.6-test-sbt-hadoop-1.0/248/)?
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
Let me see if we can have a flag to determine if we want to swallow the FNF
(like what https://github.com/apache/spark/pull/13987/files does).
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
test this please
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
I think there will be one warning when we create a table. Or maybe there is
no warning during table creation because the refresh is called lazily.
---
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14139
[SPARK-16313][SQL][BRANCH-1.6] Spark should not silently drop exceptions in
file listing
## What changes were proposed in this pull request?
Spark silently drops exceptions during file listing
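A sketch of the flag-controlled behavior discussed in this thread: propagate listing failures by default, and swallow `FileNotFoundException` only when explicitly asked. The `ignoreMissingFiles` name here is illustrative, not an actual Spark option:

```scala
import java.io.{File, FileNotFoundException}

def listFileNames(dir: File, ignoreMissingFiles: Boolean): Seq[String] =
  try {
    val children = dir.listFiles()
    // File.listFiles returns null when the path does not exist.
    if (children == null) throw new FileNotFoundException(dir.getPath)
    children.map(_.getName).toSeq
  } catch {
    case e: FileNotFoundException if ignoreMissingFiles =>
      // Not silent: at least record what was skipped.
      Console.err.println(s"Ignoring missing path: ${e.getMessage}")
      Seq.empty
  }
```

With `ignoreMissingFiles = false`, the exception reaches the caller instead of being dropped silently.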
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13991
OK. Thanks. Then, it will be good to add more tests for cases that are not
covered by those hive tests.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/11317
test this please
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13991
As a follow-up task, can you take a look at the following query files and
add useful tests in your test suite? Thanks.
```
.//sql/hive/src/test/resources/ql/src/test/queries/clientpositive
```
by
release-build.sh.
Author: Yin Huai
Closes #14108 from yhuai/SPARK-16453.
(cherry picked from commit 60ba436b7010436c77dfe5219a9662accc25bffa)
Signed-off-by: Yin Huai
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/07f562f5
T
ase-build.sh.
Author: Yin Huai
Closes #14108 from yhuai/SPARK-16453.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/60ba436b
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/60ba436b
Diff: http://git-wip-us.apache.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14108
Thanks. Merging to master and branch 2.0.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14108
@srowen Does it look good?
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14108#discussion_r70144675
--- Diff: dev/create-release/release-build.sh ---
@@ -258,7 +258,7 @@ if [[ "$1" == "publish-snapshot" ]]; then
-Phive
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14108#discussion_r70144693
--- Diff: dev/create-release/release-build.sh ---
@@ -258,7 +258,7 @@ if [[ "$1" == "publish-snapshot" ]]; then
-Phive
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14108#discussion_r70144390
--- Diff: dev/create-release/release-build.sh ---
@@ -258,7 +258,7 @@ if [[ "$1" == "publish-snapshot" ]]; then
-Phive
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14108
cc @JoshRosen @rxin
---
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14108
[SPARK-16453] [BUILD] release-build.sh is missing hive-thriftserver for
scala 2.10
## What changes were proposed in this pull request?
This PR adds hive-thriftserver profile to scala 2.10 build
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14014#discussion_r70030627
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala
---
@@ -482,13 +482,105 @@ private[parquet
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14014#discussion_r70030569
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala
---
@@ -482,13 +482,105 @@ private[parquet
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14014#discussion_r70030381
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala
---
@@ -482,13 +482,105 @@ private[parquet
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14014#discussion_r70030343
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala
---
@@ -482,13 +482,105 @@ private[parquet
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14014#discussion_r70029947
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala
---
@@ -482,13 +482,105 @@ private[parquet
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14014#discussion_r70029907
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala
---
@@ -482,13 +482,105 @@ private[parquet
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14014#discussion_r70029843
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala
---
@@ -482,13 +482,105 @@ private[parquet
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14028#discussion_r69936170
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonGenerator.scala
---
@@ -17,74 +17,180 @@
package