GitHub user dilipbiswal opened a pull request:
https://github.com/apache/spark/pull/15642
[SPARK-18009][SQL] Fix ClassCastException while calling toLocalIterator()
on dataframe produced by RunnableCommand
## What changes were proposed in this pull request?
A short code snippet
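A minimal sketch of the failing pattern (an assumption, not the PR's own snippet), using a Spark 2.x session named `spark`:
```scala
// SHOW TABLES is executed as a RunnableCommand; iterating its result locally
// is what raised the ClassCastException before this fix.
val df = spark.sql("SHOW TABLES")
val it = df.toLocalIterator()   // threw ClassCastException on command-backed DataFrames
while (it.hasNext) println(it.next())
```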
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15423
@cloud-fan @viirya Thank you very much!!
---
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15423
@cloud-fan Hi Wenchen, I have added the test cases for temp view. Could we
please look at this again? Thanks!
---
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15495
@gatorsmile @yhuai Many thanks!!
---
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15423
@viirya @cloud-fan I have incorporated the review comments. Could we please
look at this again?
---
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15190
@gatorsmile @yhuai It's due to a difference between the Scala 2.10 and 2.11
compilers in the way they deal with named parameters. Looks like 2.10 is less
forgiving :-). I have opened https
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/15495#discussion_r83514164
--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala ---
@@ -587,6 +594,30 @@ class SQLQuerySuite extends
GitHub user dilipbiswal opened a pull request:
https://github.com/apache/spark/pull/15495
[SPARK-17620][SQL] Determine Serde by hive.default.fileformat when Creating
Hive Serde Tables
## What changes were proposed in this pull request?
Reopens the closed PR https://github.com
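A hedged sketch of the intended behavior, assuming a Hive-enabled session named `spark` and a hypothetical table name:
```scala
// With hive.default.fileformat=orc, a Hive serde table created without an
// explicit STORED AS clause should pick up the ORC serde.
spark.sql("SET hive.default.fileformat=orc")
spark.sql("CREATE TABLE t_default (id INT)")
spark.sql("DESC FORMATTED t_default").show(50, truncate = false) // serde should be OrcSerde
```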
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15190
@yhuai Very sorry, Yin.. Let me look at what happened here.. Is there a way to
reopen this pull request, or do I need to open a new one?
---
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15190
Thank you @yhuai @gatorsmile @cloud-fan @viirya @dafrista
---
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15190
@gatorsmile @yhuai I have added a new test. Can we please take a look at
this again?
---
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15190
retest this please
---
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/15423#discussion_r82938410
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQueryTestSuite.scala ---
@@ -207,6 +208,7 @@ class SQLQueryTestSuite extends QueryTest
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15190
@yhuai We will use the Parquet format in your example. We look at the
`spark.sql.sources.default` configuration to decide on the format to use.
Here is the output for your perusal
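A small sketch of the lookup described above:
```scala
// The configuration consulted for the default data source format.
spark.conf.get("spark.sql.sources.default")         // "parquet" unless overridden
spark.conf.set("spark.sql.sources.default", "orc")  // new data source tables then default to ORC
```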
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/15423#discussion_r82818691
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -168,17 +168,7 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/15423#discussion_r82817760
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQueryTestSuite.scala ---
@@ -207,6 +208,7 @@ class SQLQueryTestSuite extends QueryTest
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/15423#discussion_r82721435
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala ---
@@ -1713,4 +1713,19 @@ class DDLSuite extends QueryTest
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/15423#discussion_r82720521
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQueryTestSuite.scala ---
@@ -207,6 +208,7 @@ class SQLQueryTestSuite extends QueryTest
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/15423#discussion_r82717035
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQueryTestSuite.scala ---
@@ -207,6 +208,7 @@ class SQLQueryTestSuite extends QueryTest
GitHub user dilipbiswal opened a pull request:
https://github.com/apache/spark/pull/15423
[SPARK-17860][SQL] SHOW COLUMN's database conflict check should respect
case sensitivity configuration
## What changes were proposed in this pull request?
SHOW COLUMNS command validates
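A hedged illustration of the check, with hypothetical database and table names:
```scala
// SHOW COLUMNS can name the database twice: as a table-name prefix and in the
// FROM clause. The conflict check should compare the two according to
// spark.sql.caseSensitive, so this should succeed when case-insensitive.
spark.sql("SET spark.sql.caseSensitive=false")
spark.sql("SHOW COLUMNS IN testdb.tbl FROM TESTDB").show()
```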
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15190
retest this please
---
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15334
@gatorsmile Thanks.. I had absolutely no idea about this PR :-)
---
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/15332#discussion_r82115917
--- Diff: sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/VectorizedColumnReader.java ---
@@ -157,7 +158,8 @@ void readBatch
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/15332#discussion_r82066767
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -212,6 +212,14 @@ object SQLConf {
.booleanConf
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/15332#discussion_r82065077
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetQuerySuite.scala ---
@@ -164,6 +165,63 @@ class
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/15332#discussion_r82065039
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaConverter.scala ---
@@ -356,6 +363,9 @@ private
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/15332#discussion_r82064969
--- Diff: sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/VectorizedColumnReader.java ---
@@ -362,7 +363,15 @@ private void
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/15332#discussion_r81731343
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala ---
@@ -206,6 +207,30 @@ object DateTimeUtils
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15332
retest this please
---
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/15332#discussion_r81692772
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala ---
@@ -206,6 +206,22 @@ object DateTimeUtils
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/15332#discussion_r81649623
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala ---
@@ -206,6 +206,22 @@ object DateTimeUtils
GitHub user dilipbiswal opened a pull request:
https://github.com/apache/spark/pull/15334
[SPARK-10634][SQL][WIP] Support Parquet logical type INTERVAL
## What changes were proposed in this pull request?
Adds support for writing and reading interval data into parquet files
GitHub user dilipbiswal opened a pull request:
https://github.com/apache/spark/pull/15332
[SPARK-10634][SQL] Support Parquet logical type TIMESTAMP_MILLIS
## What changes were proposed in this pull request?
**Description** from JIRA
The TimestampType in Spark SQL
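A hedged sketch of what the logical type changes, with a hypothetical output path:
```scala
import java.sql.Timestamp
import spark.implicits._

// Spark historically wrote TimestampType as Parquet INT96; with TIMESTAMP_MILLIS
// support, values can instead be stored as INT64 annotated with the
// TIMESTAMP_MILLIS logical type (millisecond precision).
Seq(Timestamp.valueOf("2016-10-03 10:15:30")).toDF("ts")
  .write.parquet("/tmp/ts_millis")
```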
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15190
cc @yhuai @cloud-fan
---
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15190
@gatorsmile We have a test in SQLQuerySuite, "CTAS without serde without
location", where we check for the default data source. I had actually added the
following test before I sa
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15190
@gatorsmile Thanks.. I didn't realize we wanted to find out the CTAS
behaviour. Here is the result..
When convertCTAS is set to true, we create a data source table with Parquet
format
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15190
@yhuai Hi Yin,
create table ... as select ... would respect the setting of
hive.default.fileformat.
```SQL
scala> spark.sql("SET hive.default.fileformat
```
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15190
@viirya In my understanding, that's the data source table code path. I am not
sure if we should look at the hive.default.fileformat property to set the default
storage for data source tables? In my
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/15190#discussion_r79988690
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -988,9 +988,7 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15190
@viirya I think we can come here from multiple code paths like
visitCreateTableUsing. I think we can come to DataSinks's CreateTable case
without serde being set.
---
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/15190#discussion_r79979332
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -988,9 +988,7 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/15190#discussion_r79964807
--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveDDLCommandSuite.scala ---
@@ -556,4 +558,32 @@ class HiveDDLCommandSuite extends
GitHub user dilipbiswal opened a pull request:
https://github.com/apache/spark/pull/15190
[SPARK-17620][SQL] hive.default.fileformat=orc does not set OrcSerde
## What changes were proposed in this pull request?
Make sure the hive.default.fileformat is used when creating
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15006
Thank you @clockfly @hvanhovell @gatorsmile
---
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15006
@clockfly Thank you!! One question: by visitor code, do you mean the
visitTableIdentifier code? If so, I didn't make any change there. I just added
a post hook in ParseDriver - FYI
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15006
@clockfly I also spent some time looking into this :-) I initially tried to
handle this at the lexer level and found it difficult to distinguish between the
number literals and table names
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15006
@gatorsmile FYI - Hive seems to allow identifiers to start with a number.
---
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/14452
@viirya Are we driving the execution of the CommonSubquery while compiling
the main query? For example, if we are explaining the query, we should not be
executing the CommonSubquery, right?
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/13897
Thanks a lot @hvanhovell @gatorsmile
---
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/13897
cc @hvanhovell @gatorsmile
---
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/13897#discussion_r68464866
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowSuite.scala ---
@@ -245,6 +245,14 @@ class DataFrameWindowSuite extends QueryTest
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/13897#discussion_r68464756
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowSuite.scala ---
@@ -245,6 +245,14 @@ class DataFrameWindowSuite extends QueryTest
GitHub user dilipbiswal opened a pull request:
https://github.com/apache/spark/pull/13897
[SPARK-16195][SQL] Allow users to specify empty over clause in window
expressions through dataset API
## What changes were proposed in this pull request?
Allow users to specify an empty over clause
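A minimal sketch of the addition, assuming a DataFrame `df` with columns `a` and `b`:
```scala
import org.apache.spark.sql.functions._
import spark.implicits._

// over() with no window spec treats the whole result set as a single
// partition, like SQL's "count(b) OVER ()".
df.select($"a", count($"b").over())
```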
Github user dilipbiswal closed the pull request at:
https://github.com/apache/spark/pull/13483
---
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/13483
@marmbrus Thanks Michael. I am going to close this.
---
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/13483
@viirya You know, as I said above, both ways are not perfect. My inputs are
just based on my previous design experiences. All the design decisions I made
are based on usage scenarios
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/13483
@gatorsmile I am not arguing for it based on actual use cases, but on API
behavior consistency. If we disallow duplicate group by columns, then we should
filter them all. Let
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/13483
@viirya I am just wondering why users need a dataframe with duplicate
column names? Could you give me a usage scenario?
---
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/13483
@gatorsmile Your last example just shows the inconsistency. Given two
different parameters, `$"col1", count("*")` and `count("*")`, you get the same
output
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/13483
@viirya This is a design decision. So far, both ways are not perfect.
In my mind, we have to consider the use cases here. If users want to have
duplicate columns, they should not use
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/13483
@gatorsmile Thanks a lot for your explanation. I agree that if we
internally generate an aggregate expression that is different from the user-
specified one, then we should eliminate
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/13483#discussion_r65657150
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DatasetAggregatorSuite.scala ---
@@ -224,6 +224,26 @@ class DatasetAggregatorSuite extends
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/13483
@gatorsmile added comments and also updated the PR description.
---
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/13483#discussion_r65650867
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/RelationalGroupedDataset.scala ---
@@ -46,7 +46,18 @@ class RelationalGroupedDataset protected
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/13483#discussion_r65650856
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DatasetAggregatorSuite.scala ---
@@ -224,6 +224,21 @@ class DatasetAggregatorSuite extends
GitHub user dilipbiswal opened a pull request:
https://github.com/apache/spark/pull/13483
[SPARK-15688] RelationalGroupedDataset.toDF should not add group by
expressions that are already added in the aggregate expressions
## What changes were proposed in this pull request?
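A hedged illustration of the duplication, with made-up data:
```scala
import org.apache.spark.sql.functions._
import spark.implicits._

val df = Seq((1, "x"), (1, "y")).toDF("key", "value")
// Before the change, the grouping column was appended again even though it
// already appears in agg(), yielding two "key" columns in the result schema.
df.groupBy($"key").agg($"key", count("*")).printSchema()
```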
Github user dilipbiswal commented on the pull request:
https://github.com/apache/spark/pull/13368
@davies Hi Davies, Thank you very much for your review. I have updated the
PR title as per your suggestion.
---
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/13368#discussion_r64982756
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala ---
@@ -290,11 +290,6 @@ object TypeCoercion
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/13368#discussion_r64981737
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala ---
@@ -290,11 +290,6 @@ object TypeCoercion
GitHub user dilipbiswal opened a pull request:
https://github.com/apache/spark/pull/13368
[SPARK-15557] expression ((cast(99 as decimal) + '3') * '2.3') returns NULL
In this case, the result type of the expression becomes DECIMAL(38, 36) as
we promote the individual string literals
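The reported expression as a runnable sketch:
```scala
// The string literals are promoted to wide decimals, pushing the result type
// to DECIMAL(38, 36); that leaves no digits for the integral part, so the
// value overflowed to NULL before the fix.
spark.sql("SELECT (CAST(99 AS DECIMAL) + '3') * '2.3'").show()
```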
Github user dilipbiswal commented on the pull request:
https://github.com/apache/spark/pull/12984#issuecomment-220651091
retest this please
---
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/13045#discussion_r63994025
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Column.scala ---
@@ -37,6 +38,14 @@ private[sql] object Column {
def apply(expr: Expression
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/13200#discussion_r63993278
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala ---
@@ -735,29 +731,130 @@ object SparkSession
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/13200#discussion_r63992730
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala ---
@@ -735,29 +731,130 @@ object SparkSession
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/13045#discussion_r63992015
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Column.scala ---
@@ -37,6 +38,14 @@ private[sql] object Column {
def apply(expr: Expression
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/13045#discussion_r63991554
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DatasetAggregatorSuite.scala ---
@@ -240,4 +240,15 @@ class DatasetAggregatorSuite extends
Github user dilipbiswal commented on the pull request:
https://github.com/apache/spark/pull/13045#issuecomment-220515698
cc @cloud-fan Hi Wenchen, I have made the changes per your comments. Could
you please look through it when you get a chance? Thanks..
---
Github user dilipbiswal commented on the pull request:
https://github.com/apache/spark/pull/13102#issuecomment-220448360
@rxin Thank you, Reynold, for handling this. It was taking me a lot of time
to understand the dependencies. Thanks again..
I will close this now
Github user dilipbiswal closed the pull request at:
https://github.com/apache/spark/pull/13102
---
Github user dilipbiswal commented on the pull request:
https://github.com/apache/spark/pull/13102#issuecomment-219927191
cc @rxin
---
GitHub user dilipbiswal opened a pull request:
https://github.com/apache/spark/pull/13102
[SPARK-13485] Cleanup dependencies between SQLContext and SparkSession
## What changes were proposed in this pull request?
We currently in SparkSession.Builder use
Github user dilipbiswal commented on the pull request:
https://github.com/apache/spark/pull/13045#issuecomment-218931189
@cloud-fan Hi Wenchen, can you please look over the change and let me know
what you think? I had a question for you. I tried to keep the expression
unresolved
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/13045#discussion_r62968394
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -861,11 +861,11 @@ def groupBy(self, *cols):
Each element should be a column name
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/13045#discussion_r62966720
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -861,11 +861,11 @@ def groupBy(self, *cols):
Each element should be a column name
Github user dilipbiswal commented on the pull request:
https://github.com/apache/spark/pull/13045#issuecomment-218662623
@yhuai @cloud-fan Sure. I will change it only for typed aggregation.
---
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/13045#discussion_r62965291
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -861,11 +861,11 @@ def groupBy(self, *cols):
Each element should be a column name
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/13045#discussion_r62964226
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -861,11 +861,11 @@ def groupBy(self, *cols):
Each element should be a column name
Github user dilipbiswal commented on the pull request:
https://github.com/apache/spark/pull/13045#issuecomment-218581069
cc @yhuai @cloud-fan @gatorsmile
---
GitHub user dilipbiswal opened a pull request:
https://github.com/apache/spark/pull/13045
[SPARK-15114][SQL] Column name generated by typed aggregate is super verbose
## What changes were proposed in this pull request?
Generate a shorter default alias
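A hedged sketch of the verbosity in question, with made-up data:
```scala
import org.apache.spark.sql.expressions.scalalang.typed
import spark.implicits._

case class Rec(key: Int, value: Double)
val ds = Seq(Rec(1, 2.0), Rec(1, 3.0)).toDS()
// Before the fix, the generated column name spelled out the whole aggregate
// expression; the change produces a short alias instead.
ds.groupByKey(_.key).agg(typed.sum[Rec](_.value)).show()
```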
Github user dilipbiswal commented on the pull request:
https://github.com/apache/spark/pull/12924#issuecomment-217292489
@andrewor14 Thank you Andrew !!
---
Github user dilipbiswal commented on the pull request:
https://github.com/apache/spark/pull/12924#issuecomment-217228011
@andrewor14 Sure Andrew. I will change it.
---
Github user dilipbiswal commented on the pull request:
https://github.com/apache/spark/pull/12924#issuecomment-217103321
cc @andrewor14
---
Github user dilipbiswal commented on the pull request:
https://github.com/apache/spark/pull/12460#issuecomment-217100315
@liancheng Hi Lian, in this PR, I had implemented "describe table
partition" and "describe column".
Do you want me to put this on top of
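A hedged sketch of the two commands mentioned, with hypothetical table, partition, and column names:
```scala
// Describe a single partition of a partitioned table.
spark.sql("DESC EXTENDED events PARTITION (ds='2016-05-01')").show(truncate = false)
// Describe a single column.
spark.sql("DESC events ds").show(truncate = false)
```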
GitHub user dilipbiswal opened a pull request:
https://github.com/apache/spark/pull/12924
[SPARK-14893][SQL] Re-enable HiveSparkSubmitSuite SPARK-8489 test after
HiveContext is removed
## What changes were proposed in this pull request?
Enable the test that was disabled
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/12871#discussion_r62149522
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/InMemoryCatalog.scala ---
@@ -245,7 +301,21 @@ class InMemoryCatalog
Github user dilipbiswal closed the pull request at:
https://github.com/apache/spark/pull/12460
---
Github user dilipbiswal commented on the pull request:
https://github.com/apache/spark/pull/10428#issuecomment-216352484
@yhuai Sure :-)
---
Github user dilipbiswal closed the pull request at:
https://github.com/apache/spark/pull/10428
---
Github user dilipbiswal commented on the pull request:
https://github.com/apache/spark/pull/12646#issuecomment-215462428
retest this please.
---
Github user dilipbiswal commented on the pull request:
https://github.com/apache/spark/pull/12460#issuecomment-215195961
@liancheng Thank you for your comment. Actually, initially I started with
the idea of serving the describe command solely from `CatalogTable`. I then
realized