GitHub user tejasapatil opened a pull request:
https://github.com/apache/spark/pull/19054
[SPARK-18067] Avoid shuffling child if join keys are superset of child's
partitioning keys
Jira : https://issues.apache.org/jira/browse/SPARK-18067
## What problem is being addressed
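The title's idea can be sketched as standalone Java (hypothetical names, not Spark's actual `EnsureRequirements` logic): if a child is already hash-partitioned on keys that all appear among the join keys, rows with equal join keys are already co-located, so that child need not be reshuffled.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Standalone sketch of the SPARK-18067 idea (hypothetical, not Spark's code):
// skip the shuffle when the join keys are a superset of the keys the child
// is already hash-partitioned on.
class ShuffleCheck {
    // A shuffle can be skipped when the child is partitioned at all and
    // every one of its partitioning keys also appears among the join keys.
    static boolean canSkipShuffle(Set<String> joinKeys, Set<String> childPartitioningKeys) {
        return !childPartitioningKeys.isEmpty() && joinKeys.containsAll(childPartitioningKeys);
    }

    public static void main(String[] args) {
        Set<String> joinKeys = new HashSet<>(Arrays.asList("a", "b"));
        // Child hash-partitioned on {a}, a subset of the join keys: no shuffle.
        System.out.println(canSkipShuffle(joinKeys, new HashSet<>(Arrays.asList("a"))));  // true
        // Child partitioned on {c}, which is not a join key: shuffle required.
        System.out.println(canSkipShuffle(joinKeys, new HashSet<>(Arrays.asList("c"))));  // false
    }
}
```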
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/19001
Jenkins retest this please
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/19001#discussion_r134106172
--- Diff:
sql/hive/src/main/java/org/apache/hadoop/hive/ql/io/BucketizedSparkRecordReader.java
---
@@ -0,0 +1,147 @@
+/**
+ * Licensed
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/19001
cc @cloud-fan @gatorsmile @sameeragarwal @rxin
---
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/18954
I have a new PR (https://github.com/apache/spark/pull/19001) which
supersedes this one. It has everything this PR does (i.e., writer-side changes)
plus reader-side changes.
---
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18954#discussion_r134103430
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/exchange/EnsureRequirements.scala
---
@@ -50,7 +50,9 @@ case class EnsureRequirements
GitHub user tejasapatil opened a pull request:
https://github.com/apache/spark/pull/19001
[SPARK-19256][SQL] Hive bucketing support
## What changes were proposed in this pull request?
This PR implements both read and write side changes for supporting hive
bucketing
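For context, Hive's bucket assignment (as commonly described; verify against Hive's `ObjectInspectorUtils` before relying on it) routes each row to one of N bucket files by hashing the bucketing column modulo N, and the writer and reader must agree on that function for bucketed joins to line up:

```java
// Sketch of Hive-style bucket assignment (an assumption-labeled sketch, not
// this PR's code): for an int column Hive's hash is the value itself, and
// the bucket id is that hash, sign-masked, modulo the number of buckets.
class HiveBucketing {
    static int bucketId(int hash, int numBuckets) {
        // Mask the sign bit so negative hashes still yield a valid bucket.
        return (hash & Integer.MAX_VALUE) % numBuckets;
    }

    public static void main(String[] args) {
        System.out.println(bucketId(42, 8));   // 2
        System.out.println(bucketId(-42, 8));  // 6 (masked before the modulo)
    }
}
```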
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/18954
Jenkins retest this please
---
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/18954
cc @cloud-fan @gatorsmile @sameeragarwal @rxin
---
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/18954
Jenkins retest this please
---
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/18954
Jenkins test this please
---
GitHub user tejasapatil opened a pull request:
https://github.com/apache/spark/pull/18954
[SPARK-17654] [SQL] Enable creating hive bucketed tables
## What changes were proposed in this pull request?
### Semantics:
- If the Hive table is bucketed, then INSERT node expect
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/16985
jenkins test this please
---
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/16985#discussion_r132715845
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/sources/BucketedReadSuite.scala ---
@@ -543,6 +551,68 @@ abstract class BucketedReadSuite
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/18843
jenkins test this please
---
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18843#discussion_r132621976
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -1139,9 +1154,14 @@ class SQLConf extends Serializable
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/16985
BTW: The "summary of this patch" in your comment accurately captures what
this PR is doing.
---
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/16985
@cloud-fan : I was on a long vacation for quite some time so couldn't get
to this. Regarding the concern you had, I have replied to that discussion in the
PR : https://github.com/apache/spark
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/16985#discussion_r132620278
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/joins/ReorderJoinPredicates.scala
---
@@ -0,0 +1,93 @@
+/*
+ * Licensed
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18843#discussion_r132543304
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ExternalAppendOnlyUnsafeRowArray.scala
---
@@ -31,16 +31,16 @@ import
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18843#discussion_r132546198
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -844,24 +844,39 @@ object SQLConf {
.stringConf
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18843#discussion_r132570353
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -844,24 +844,39 @@ object SQLConf {
.stringConf
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/18843
@hvanhovell : let me know what you think about this.
---
GitHub user tejasapatil opened a pull request:
https://github.com/apache/spark/pull/18843
[SPARK-21595] Separate thresholds for buffering and spilling in
ExternalAppendOnlyUnsafeRowArray
## What changes were proposed in this pull request?
[SPARK-21595](https
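A minimal sketch of the two-threshold idea (invented names; the real ExternalAppendOnlyUnsafeRowArray deals in UnsafeRows and Spark's external sorter): one threshold bounds a cheap in-memory array, while a second, independent threshold decides when the spill-capable structure actually writes to disk.

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of SPARK-21595's two thresholds (hypothetical names, not the
// real ExternalAppendOnlyUnsafeRowArray): rows first fill a cheap in-memory
// array; past `bufferThreshold` we switch to a spill-capable mode, and in
// that mode every `spillThreshold` rows are flushed to "disk".
class TwoThresholdBuffer {
    private final int bufferThreshold;   // rows kept in the plain array
    private final int spillThreshold;    // rows per simulated spill
    private final List<String> rows = new ArrayList<>();
    private boolean spillMode = false;
    private int spillCount = 0;

    TwoThresholdBuffer(int bufferThreshold, int spillThreshold) {
        this.bufferThreshold = bufferThreshold;
        this.spillThreshold = spillThreshold;
    }

    void append(String row) {
        rows.add(row);
        if (!spillMode && rows.size() > bufferThreshold) {
            spillMode = true;            // switch to the spill-capable structure
        }
        if (spillMode && rows.size() >= spillThreshold) {
            rows.clear();                // stand-in for writing a block to disk
            spillCount++;
        }
    }

    int numSpills() { return spillCount; }
}
```

With `bufferThreshold` below `spillThreshold`, small operator inputs never leave the cheap array, which is the separation the PR title describes.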
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131194926
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLCLIDriver.scala
---
@@ -50,6 +50,7 @@ private[hive
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131188713
--- Diff: docs/configuration.md ---
@@ -2326,7 +2326,7 @@ from this directory.
# Inheriting Hadoop Cluster Configuration
If you plan
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131187701
--- Diff: docs/configuration.md ---
@@ -2335,5 +2335,61 @@ The location of these configuration files varies
across Hadoop versions, but
a common
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131185632
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveUtils.scala
---
@@ -404,6 +404,13 @@ private[spark] object HiveUtils extends Logging
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/18805
re build failure: you can repro that locally by running
"./dev/test-dependencies.sh". It's failing due to introducing a new dep... you
need to add it to `dev/deps/spark-deps-
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18805#discussion_r130769482
--- Diff: core/src/main/scala/org/apache/spark/io/CompressionCodec.scala ---
@@ -50,13 +51,14 @@ private[spark] object CompressionCodec
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/18805
In the `Benchmark` section the values for `Lz4` are all zeros, which is
confusing to read... at first I thought they were absolute values,
but they are supposed to be relative
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18805#discussion_r130769858
--- Diff: core/src/main/scala/org/apache/spark/io/CompressionCodec.scala ---
@@ -216,3 +218,30 @@ private final class SnappyOutputStreamWrapper(os
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18805#discussion_r130769646
--- Diff: core/src/main/scala/org/apache/spark/io/CompressionCodec.scala ---
@@ -216,3 +218,30 @@ private final class SnappyOutputStreamWrapper(os
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18805#discussion_r130769548
--- Diff: core/src/main/scala/org/apache/spark/io/CompressionCodec.scala ---
@@ -216,3 +218,30 @@ private final class SnappyOutputStreamWrapper(os
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18309#discussion_r122217829
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeTableCommand.scala
---
@@ -109,16 +124,16 @@ object
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18309#discussion_r122217717
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/StatisticsSuite.scala ---
@@ -128,6 +128,40 @@ class StatisticsSuite extends
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18309#discussion_r122102333
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeTableCommand.scala
---
@@ -81,6 +83,19 @@ case class
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18309#discussion_r122102110
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeTableCommand.scala
---
@@ -81,6 +83,19 @@ case class
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18309#discussion_r122102041
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeTableCommand.scala
---
@@ -109,16 +124,16 @@ object
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/18209
Given that at Facebook we use our own in-house scheduler, I see why people
would want to see their scheduler impls added right in the Spark codebase as
first-class citizens. Like @srowen said
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/16985
@cloud-fan : ping
---
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/16985
Jenkins test this please
---
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17180#discussion_r121021527
--- Diff:
core/src/main/java/org/apache/spark/unsafe/map/BytesToBytesMap.java ---
@@ -358,10 +358,20 @@ public long spill(long numBytes) throws
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18221#discussion_r120997057
--- Diff: project/SparkBuild.scala ---
@@ -240,7 +240,8 @@ object SparkBuild extends PomBuild {
javacOptions in Compile ++= Seq
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18221#discussion_r120536351
--- Diff: project/SparkBuild.scala ---
@@ -240,7 +240,8 @@ object SparkBuild extends PomBuild {
javacOptions in Compile ++= Seq
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18221#discussion_r120535432
--- Diff:
common/kvstore/src/main/java/org/apache/spark/kvstore/ArrayWrappers.java ---
@@ -0,0 +1,214 @@
+/*
+ * Licensed to the Apache
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18221#discussion_r120539837
--- Diff:
common/kvstore/src/main/java/org/apache/spark/kvstore/ArrayWrappers.java ---
@@ -0,0 +1,214 @@
+/*
+ * Licensed to the Apache
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17993#discussion_r118854380
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/ConstantPropagationSuite.scala
---
@@ -0,0 +1,167
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17993#discussion_r118852332
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/ConstantPropagationSuite.scala
---
@@ -0,0 +1,167
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17993#discussion_r118852031
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -54,6 +54,62 @@ object ConstantFolding extends
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17993#discussion_r118849967
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -54,6 +54,62 @@ object ConstantFolding extends
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17993#discussion_r118765087
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/ConstantPropagationSuite.scala
---
@@ -0,0 +1,154
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17993#discussion_r118765089
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -54,6 +54,62 @@ object ConstantFolding extends
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17993#discussion_r118765075
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -54,6 +54,62 @@ object ConstantFolding extends
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/17993
cc @hvanhovell @gatorsmile @dongjoon-hyun
---
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/16985
@cloud-fan : ping
---
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/16985
@cloud-fan : doesn't look like this PR is related to
[SPARK-12704](https://issues.apache.org/jira/browse/SPARK-12704)... however,
SPARK-12704 does seem related to another PR I was working
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/16985
Jenkins test this please
---
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/17993
Jenkins test this please
---
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/17993
Jenkins test this please
---
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17993#discussion_r117619380
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -54,6 +54,62 @@ object ConstantFolding extends
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17993#discussion_r117618800
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -54,6 +54,59 @@ object ConstantFolding extends
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17993#discussion_r117618804
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -54,6 +54,59 @@ object ConstantFolding extends
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17993#discussion_r117618801
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/ConstantPropagationSuite.scala
---
@@ -0,0 +1,102
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17993#discussion_r117618796
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -54,6 +54,59 @@ object ConstantFolding extends
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17993#discussion_r117618790
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -54,6 +54,59 @@ object ConstantFolding extends
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17993#discussion_r117618788
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -54,6 +54,59 @@ object ConstantFolding extends
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17993#discussion_r117618791
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -54,6 +54,59 @@ object ConstantFolding extends
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/17993
Jenkins test this please
---
GitHub user tejasapatil opened a pull request:
https://github.com/apache/spark/pull/17993
[SPARK-20758][SQL] Add Constant propagation optimization
## What changes were proposed in this pull request?
Added a rule based on this logic:
- look for expression node
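The rule's gist can be shown on a toy representation (plain maps instead of Catalyst expression trees; a hedged sketch, not the PR's code): collect `attribute = literal` equalities from a conjunctive predicate, then substitute those attributes into the remaining conjuncts, so `a = 1 AND b = a` becomes `a = 1 AND b = 1`.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy constant propagation (illustrative, not the Catalyst rule): each
// conjunct is an equality {lhs, rhs}; the first pass collects lhs = literal
// bindings, the second pass rewrites any rhs that is a bound attribute.
class ConstantPropagation {
    static List<String[]> propagate(List<String[]> conjuncts) {
        Map<String, String> constants = new HashMap<>();
        for (String[] eq : conjuncts) {
            if (eq[1].matches("-?\\d+")) {      // rhs is an integer literal
                constants.put(eq[0], eq[1]);
            }
        }
        List<String[]> rewritten = new ArrayList<>();
        for (String[] eq : conjuncts) {
            String rhs = constants.getOrDefault(eq[1], eq[1]);
            rewritten.add(new String[]{eq[0], rhs});
        }
        return rewritten;
    }

    public static void main(String[] args) {
        List<String[]> conjuncts = new ArrayList<>();
        conjuncts.add(new String[]{"a", "1"});   // a = 1
        conjuncts.add(new String[]{"b", "a"});   // b = a
        // After propagation: a = 1 AND b = 1
        for (String[] eq : propagate(conjuncts)) {
            System.out.println(eq[0] + " = " + eq[1]);
        }
    }
}
```

In the real optimizer, only deterministic, null-safe substitutions are sound; this sketch ignores those subtleties.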
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17644#discussion_r116506037
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -408,9 +425,9 @@ private[hive] class HiveClientImpl
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17644#discussion_r116505830
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -870,6 +887,23 @@ private[hive] object HiveClientImpl
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17644#discussion_r116414803
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala
---
@@ -307,6 +307,27 @@ case class
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17644#discussion_r116384127
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala
---
@@ -307,6 +307,27 @@ case class
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17644#discussion_r116384135
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -408,9 +425,7 @@ private[hive] class HiveClientImpl
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17644#discussion_r116383924
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/ExternalCatalog.scala
---
@@ -17,6 +17,7 @@
package
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/16985#discussion_r116345617
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/joins/ReorderJoinPredicates.scala
---
@@ -0,0 +1,93 @@
+/*
+ * Licensed
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17644#discussion_r116342178
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -171,8 +172,7 @@ private[hive] class HiveMetastoreCatalog
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/16985#discussion_r116163888
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/joins/ReorderJoinPredicates.scala
---
@@ -0,0 +1,93 @@
+/*
+ * Licensed
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/16985#discussion_r116163611
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/joins/ReorderJoinPredicates.scala
---
@@ -0,0 +1,93 @@
+/*
+ * Licensed
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/16985#discussion_r116162386
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/sources/BucketedReadSuite.scala ---
@@ -315,8 +317,14 @@ abstract class BucketedReadSuite
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17644#discussion_r116157920
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -871,6 +886,23 @@ private[hive] object HiveClientImpl
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/16985
@cloud-fan : I have made the suggested change(s).
---
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17938#discussion_r116059777
--- Diff: docs/sql-programming-guide.md ---
@@ -581,6 +581,46 @@ Starting from Spark 2.1, persistent datasource tables
have per-partition metadat
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17644#discussion_r116012486
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -871,6 +886,23 @@ private[hive] object HiveClientImpl
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17644#discussion_r116003799
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/ExternalCatalog.scala
---
@@ -335,6 +336,32 @@ abstract class
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17644#discussion_r116003794
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -902,9 +902,14 @@ case class ShowCreateTableCommand(table
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/14702
I don't see myself getting time to work on this. Will close the PR for
now and revisit in the future.
---
Github user tejasapatil closed the pull request at:
https://github.com/apache/spark/pull/14702
---
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17938#discussion_r115890492
--- Diff: docs/sql-programming-guide.md ---
@@ -581,6 +581,46 @@ Starting from Spark 2.1, persistent datasource tables
have per-partition metadat
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17938#discussion_r115888674
--- Diff: docs/sql-programming-guide.md ---
@@ -581,6 +581,46 @@ Starting from Spark 2.1, persistent datasource tables
have per-partition metadat
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17938#discussion_r115888199
--- Diff: docs/sql-programming-guide.md ---
@@ -1766,12 +1806,6 @@ Spark SQL supports the vast majority of Hive
features, such as:
Below is a list
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/16985
Good idea, @cloud-fan! I will give it a try.
---
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/16985
Jenkins test this please
---
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/17644
Jenkins test this please
---
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/16985
Jenkins test this please
---
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/16985#discussion_r115356786
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/joins/SortMergeJoinExec.scala
---
@@ -41,6 +41,42 @@ case class SortMergeJoinExec
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17644#discussion_r115333973
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -632,9 +632,51 @@ private[spark] class HiveExternalCatalog
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/17644
Jenkins test this please
---
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/17644
Jenkins test this please
---