Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2291#issuecomment-55301417
The last build failure was caused by streaming suites.
But I do need to update the data type parsing logic in Python.
---
If your project is set up
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2352#issuecomment-55310300
@chenghao-intel Actually this issue has bothered us for some time and
makes the Maven build on Jenkins fail. But we have never been able to reproduce it locally...
Would you mind
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2352#issuecomment-55365014
Hmm... I couldn't reproduce the `HiveQuerySuite` failure, but I can
consistently reproduce a similar failure with `StatisticsSuite`, and your patch does
fix this one
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2352#issuecomment-55366440
Actually the SBT Jenkins build is still fine; it's the Maven build that
is broken. That's even stranger, since you can easily reproduce it with SBT...
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2352#issuecomment-55440621
Got more clues on this, which explain why `HiveQuerySuite` didn't fail
previously. (But @chenghao-intel, why does it fail on your side? Still mysterious.)
Basically, we
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2352#issuecomment-55441813
If you run `StatisticsSuite` separately with either `sbt test-only` or `mvn
-DwildcardSuites`, you can always reproduce the missing default database
exception. Because
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2226#issuecomment-55471489
Discussed with @baishuo offline; I'll submit a PR to his branch to fix some
small styling and performance-related issues, and then this should be OK to
merge
GitHub user liancheng opened a pull request:
https://github.com/apache/spark/pull/2375
[SPARK-3515][SQL] Moves test suite setup code to beforeAll rather than in
constructor
Please refer to the JIRA ticket for details.
**NOTE** We should check all test suites that do
GitHub user liancheng opened a pull request:
https://github.com/apache/spark/pull/2377
[SPARK-3481][SQL] Removes the evil MINOR HACK
This is a follow-up of #2352. Now we can finally remove the evil MINOR
HACK, which covered up the oldest bug in the history of Spark SQL (see details
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2327#discussion_r17514159
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveQuerySuite.scala
---
@@ -295,8 +295,16 @@ class HiveQuerySuite extends
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2390#discussion_r17526475
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SqlParser.scala ---
@@ -181,11 +182,25 @@ class SqlParser extends StandardTokenParsers
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2390#discussion_r17526488
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SqlParser.scala ---
@@ -181,11 +182,25 @@ class SqlParser extends StandardTokenParsers
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2390#discussion_r17526493
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/commands.scala
---
@@ -75,3 +75,8 @@ case class DescribeCommand
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2390#discussion_r17526500
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkStrategies.scala ---
@@ -305,6 +305,8 @@ private[sql] abstract class SparkStrategies
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2390#discussion_r17526580
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/commands.scala ---
@@ -166,3 +166,22 @@ case class DescribeCommand(child: SparkPlan
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2390#discussion_r17526617
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/CachedTableSuite.scala ---
@@ -119,4 +119,16 @@ class CachedTableSuite extends QueryTest
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2390#discussion_r17526625
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/CachedTableSuite.scala ---
@@ -119,4 +119,16 @@ class CachedTableSuite extends QueryTest
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2390#discussion_r17526629
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveQl.scala ---
@@ -214,6 +214,7 @@ private[hive] object HiveQl {
*/
def getAst
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2390#discussion_r17526673
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveQl.scala ---
@@ -229,11 +230,17 @@ private[hive] object HiveQl {
SetCommand
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2390#discussion_r17526686
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveQl.scala ---
@@ -1097,7 +1109,7 @@ private[hive] object HiveQl {
case Token
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2390#discussion_r17526703
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/commands.scala ---
@@ -166,3 +166,22 @@ case class DescribeCommand(child: SparkPlan
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2390#discussion_r17526777
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SqlParser.scala ---
@@ -127,6 +127,7 @@ class SqlParser extends StandardTokenParsers
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2382#discussion_r17578145
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -96,8 +101,17 @@ abstract class LogicalPlan
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2382#discussion_r17578267
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveResolutionSuite.scala
---
@@ -57,13 +57,14 @@ class HiveResolutionSuite extends
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2382#discussion_r17578483
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/package.scala
---
@@ -22,4 +22,9 @@ package org.apache.spark.sql.catalyst
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2382#discussion_r17579912
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveResolutionSuite.scala
---
@@ -57,13 +57,14 @@ class HiveResolutionSuite extends
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2382#issuecomment-55685249
LGTM except some minor issues mentioned in the comments :)
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2397#discussion_r17579967
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SqlParser.scala ---
@@ -183,9 +183,17 @@ class SqlParser extends StandardTokenParsers
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2397#discussion_r17580006
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveQl.scala ---
@@ -229,7 +229,13 @@ private[hive] object HiveQl {
SetCommand
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2397#discussion_r17580096
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveQl.scala ---
@@ -229,7 +229,13 @@ private[hive] object HiveQl {
SetCommand
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2397#issuecomment-55686030
To me the only important issue here is the laziness semantics of `CACHE
TABLE AS SELECT`. I tend to make it lazy because `SQLContext.cacheTable`,
`CACHE TABLE name
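The laziness semantics under discussion can be sketched with a plain Scala `lazy val`: nothing is materialized until first access, and then exactly once. This is an illustrative analogy only, not Spark's caching code; `materializations` is a hypothetical counter.

```scala
// Sketch of lazy caching semantics: the "query" body runs only on first
// access, mirroring a lazy CACHE TABLE AS SELECT.
var materializations = 0
lazy val cached: Seq[Int] = { materializations += 1; Seq(1, 2, 3) }

val before = materializations // still 0: nothing materialized yet
cached                        // first access triggers materialization
cached                        // subsequent accesses reuse the result
val after = materializations  // 1: materialized exactly once
```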
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2355#discussion_r17585678
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUdfs.scala
---
@@ -113,30 +113,32 @@ private[hive] case class
HiveSimpleUdf
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2355#discussion_r17585758
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUdfs.scala
---
@@ -113,30 +113,32 @@ private[hive] case class
HiveSimpleUdf
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2382#discussion_r17622093
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveResolutionSuite.scala
---
@@ -57,13 +57,14 @@ class HiveResolutionSuite extends
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2226#discussion_r17625214
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala
---
@@ -100,63 +102,139 @@ case class InsertIntoHiveTable
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2226#discussion_r17625337
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala
---
@@ -100,63 +102,139 @@ case class InsertIntoHiveTable
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2226#discussion_r17625525
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala
---
@@ -178,6 +256,40 @@ case class InsertIntoHiveTable
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2226#issuecomment-55802115
@baishuo Found some issues while refactoring this PR. Will fix them in my
following PR against yours as we've discussed offline.
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2397#discussion_r17635163
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveQl.scala ---
@@ -229,7 +229,13 @@ private[hive] object HiveQl {
SetCommand
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2226#discussion_r1763
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala
---
@@ -100,63 +102,139 @@ case class InsertIntoHiveTable
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2382#issuecomment-55840530
Oh, one more thing, please help rename this test case:
https://github.com/apache/spark/blob/0a7091e689a4c8b1e7b61e9f0873e6557f40d952/sql/hive/src/test/scala/org/apache
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2421#issuecomment-55850909
Hmm, considering that `SparkSubmit` can be used to start any user
application, which may call `System.exit(Int)` at any time with an arbitrary
integer, it can be a good
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2397#discussion_r17648914
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/commands.scala ---
@@ -166,3 +166,20 @@ case class DescribeCommand(child: SparkPlan
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2397#issuecomment-55857224
LGTM except for the analyzed logical plan issue as mentioned in my last
comment. Thanks for working on this!
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2397#discussion_r17680055
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/commands.scala ---
@@ -166,3 +166,20 @@ case class DescribeCommand(child: SparkPlan
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2226#issuecomment-55931281
Please refer to the PR description of baishuo/spark#1 for details of the
most recent changes.
GitHub user liancheng opened a pull request:
https://github.com/apache/spark/pull/2431
[SQL] Made Command.sideEffectResult protected
Considering `Command.executeCollect()` simply delegates to
`Command.sideEffectResult`, we no longer need to leave the latter
`protected[sql]`.
You
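The delegation described above can be sketched in a few lines. Names mirror the PR discussion (`Command`, `sideEffectResult`, `executeCollect`), but the types are simplified placeholders, not Spark's actual classes:

```scala
// A public entry point delegating to a protected side-effecting method:
// callers use executeCollect(), while sideEffectResult stays an
// implementation detail of each Command subclass.
trait Command {
  protected def sideEffectResult: Seq[String] // hypothetical result type
  def executeCollect(): Seq[String] = sideEffectResult
}

case class SetCommand(key: String, value: String) extends Command {
  override protected def sideEffectResult: Seq[String] = Seq(s"$key=$value")
}

val result = SetCommand("spark.sql.shuffle.partitions", "200").executeCollect()
```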
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2431#issuecomment-55934389
@marmbrus It would be good to merge this after #2226, since I made
`InsertIntoHiveTable` a `Command` there and it would have a minor conflict with this
one.
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2344#issuecomment-55967788
test this please
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2421#issuecomment-55967900
test this please
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2390#issuecomment-55968181
Would you mind closing this PR, since #2397 was opened as a replacement?
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2381#issuecomment-55968221
Mind closing this PR?
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2393#issuecomment-55969364
+1 for the `deleteOnExit`/`deleteRecursively` pattern.
@mattf According to its
[Javadoc](http://docs.oracle.com/javase/7/docs/api/java/io/File.html
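The Javadoc point above is that `java.io.File#delete` refuses to remove a non-empty directory, which is why a recursive helper is needed. This is a minimal sketch; Spark's own `Utils.deleteRecursively` adds more error handling:

```scala
import java.io.File
import java.nio.file.Files

// Delete children first, then the entry itself; listFiles returns null
// for non-directories or on I/O error, hence the Option guard.
def deleteRecursively(f: File): Unit = {
  if (f.isDirectory)
    Option(f.listFiles).getOrElse(Array.empty[File]).foreach(deleteRecursively)
  f.delete()
}

val dir = Files.createTempDirectory("demo").toFile
new File(dir, "child.txt").createNewFile()
val plainDelete = dir.delete() // false: File#delete cannot remove a non-empty dir
deleteRecursively(dir)
val stillThere = dir.exists()  // false: recursive helper removed it
```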
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2186#issuecomment-55970035
test this please
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2421#discussion_r17700240
--- Diff: sbin/start-thriftserver.sh ---
@@ -27,7 +27,7 @@ set -o posix
FWDIR=$(cd `dirname $0`/..; pwd)
CLASS
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2226#discussion_r17703451
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/SparkHadoopWriter.scala ---
@@ -0,0 +1,213 @@
+/*
+ * Licensed to the Apache Software
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2226#discussion_r17703479
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/SparkHadoopWriter.scala ---
@@ -0,0 +1,213 @@
+/*
+ * Licensed to the Apache Software
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2226#discussion_r17703524
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/SparkHadoopWriter.scala ---
@@ -0,0 +1,213 @@
+/*
+ * Licensed to the Apache Software
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2226#issuecomment-55983475
Addressed @yhuai's comments except for adding more tests, will add them
soon.
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2226#discussion_r17704323
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/SparkHadoopWriter.scala ---
@@ -0,0 +1,213 @@
+/*
+ * Licensed to the Apache Software
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2421#discussion_r17704596
--- Diff: sbin/start-thriftserver.sh ---
@@ -27,7 +27,7 @@ set -o posix
FWDIR=$(cd `dirname $0`/..; pwd)
CLASS
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2421#issuecomment-55985869
@WangTaoTheTonic According to the wiki page @vanzin pointed out, values
above 125 are used by bash for special purposes. Since the purpose of this PR
is to reduce
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2397#discussion_r17710806
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/commands.scala ---
@@ -166,3 +166,20 @@ case class DescribeCommand(child: SparkPlan
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2397#issuecomment-56098131
@ravipesala Thanks for working on this! @marmbrus I think this is ready to
go :)
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2421#issuecomment-56098496
@sarutak Actually Jenkins only listens to a limited group of people, and
sometimes it even ignores this group for unknown reasons. Lots of work led by
Josh had been
GitHub user liancheng opened a pull request:
https://github.com/apache/spark/pull/2468
[SQL] Adds sizeInBytes statistics for Limit operator when all output
attributes are of native data types
This helps to replace shuffled hash joins with broadcast hash joins in some
cases.
You
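The estimate described in the PR title can be sketched as follows. This is an illustrative model, not Spark's actual statistics code: when every output column has a fixed-width native type, a `LIMIT n` operator's output is at most `n` rows times the per-row width, a bound the planner can use to choose a broadcast hash join.

```scala
// Hypothetical per-type widths for a few fixed-width native types.
val nativeTypeSizes = Map("IntegerType" -> 4, "LongType" -> 8, "DoubleType" -> 8)

// Upper bound on output size: rows * bytes-per-row; None if any column
// has a non-native (variable-width or unknown-size) type.
def limitSizeInBytes(limit: Int, columnTypes: Seq[String]): Option[Long] =
  if (columnTypes.forall(nativeTypeSizes.contains))
    Some(limit.toLong * columnTypes.map(nativeTypeSizes).sum)
  else None

val bounded = limitSizeInBytes(100, Seq("IntegerType", "LongType")) // Some(1200)
val unknown = limitSizeInBytes(100, Seq("StringType"))              // None
```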
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2241#discussion_r17814180
--- Diff: sql/hive/pom.xml ---
@@ -119,6 +83,74 @@
<profiles>
<profile>
+ <id>hive-default</id>
+ <activation>
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2468#discussion_r17816722
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicOperators.scala
---
@@ -148,6 +148,17 @@ case class Aggregate
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2468#discussion_r17827025
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/dataTypes.scala
---
@@ -122,6 +122,16 @@ object NativeType
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2226#discussion_r1720
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveQuerySuite.scala
---
@@ -522,6 +523,52 @@ class HiveQuerySuite extends
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2226#issuecomment-56472453
LGTM
@marmbrus This is finally good to go :)
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2263#issuecomment-56599320
Sorry, this LGTM.
GitHub user liancheng opened a pull request:
https://github.com/apache/spark/pull/2513
[SPARK-3645][SQL] Makes table caching eager by default and adds syntax for
lazy caching
Although lazy caching for in-memory tables seems consistent with the
`RDD.cache()` API, it's relatively
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2509#discussion_r18018270
--- Diff: sbin/spark-daemon.sh ---
@@ -142,8 +142,12 @@ case $startStop in
spark_rotate_log $log
echo starting $command, logging
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2509#discussion_r18018267
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -320,6 +320,10 @@ object SparkSubmit {
} catch {
case e
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2509#discussion_r18018268
--- Diff: sbin/spark-daemon.sh ---
@@ -142,8 +142,12 @@ case $startStop in
spark_rotate_log $log
echo starting $command, logging
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2509#issuecomment-56784684
Generally this is a good idea. But it would be better to make
`spark-daemon.sh` more general, rather than making `HiveThriftServer2` a
special case.
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2542#discussion_r18080938
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala
---
@@ -57,6 +57,12 @@ class SQLQuerySuite extends QueryTest
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2542#discussion_r18080957
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -110,9 +110,18 @@ abstract class LogicalPlan
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2501#discussion_r18081225
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/CacheManager.scala
---
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2501#discussion_r18081221
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -73,6 +74,52 @@ abstract class LogicalPlan
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2501#discussion_r18081246
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SchemaRDDLike.scala
---
@@ -56,7 +55,7 @@ private[sql] trait SchemaRDDLike {
// happen
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2543#issuecomment-56942718
Would you mind filing a JIRA ticket for this PR?
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2501#discussion_r18090525
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/CachedTableSuite.scala ---
@@ -20,14 +20,32 @@ package org.apache.spark.sql
import
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2542#issuecomment-57039692
@cloud-fan Tried the following snippet with ambiguous references in Hive:
```sql
CREATE TABLE t1(a STRUCT<x: INT>, k INT);
CREATE TABLE t2(x INT
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2509#discussion_r18122666
--- Diff: sbin/stop-thriftserver.sh ---
@@ -0,0 +1,25 @@
+#!/usr/bin/env bash
--- End diff --
This file should be executable, please `chmod
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2509#discussion_r18122679
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -320,6 +320,10 @@ object SparkSubmit {
} catch {
case e
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2509#discussion_r18122728
--- Diff: sbin/spark-daemon.sh ---
@@ -142,8 +142,12 @@ case $startStop in
spark_rotate_log $log
echo starting $command, logging
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2509#issuecomment-57047019
Thanks for working on this! I tested this PR locally and it works fine, but
there are still some minor issues pending resolution; please refer to the
comments
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2542#discussion_r18123371
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -110,9 +110,18 @@ abstract class LogicalPlan
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2542#discussion_r18123559
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -110,9 +110,18 @@ abstract class LogicalPlan
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2542#discussion_r18123585
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -110,9 +110,18 @@ abstract class LogicalPlan
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2542#issuecomment-57056092
@cloud-fan Sorry, made a mistake in the snippet I used, it should be:
```sql
CREATE TABLE t1(x INT);
CREATE TABLE t2(a STRUCT<x: INT>, k INT);
SELECT
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2509#discussion_r18126738
--- Diff: sbin/spark-daemon.sh ---
@@ -142,8 +142,12 @@ case $startStop in
spark_rotate_log $log
echo starting $command, logging
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2509#discussion_r18126747
--- Diff: sbin/spark-daemon.sh ---
@@ -142,8 +142,12 @@ case $startStop in
spark_rotate_log $log
echo starting $command, logging
GitHub user liancheng opened a pull request:
https://github.com/apache/spark/pull/2563
[SPARK-3713][SQL] Uses JSON to serialize DataType objects
This PR uses JSON instead of `toString` to serialize `DataType`s. The
latter is not only hard to parse but also flaky in many cases
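The idea can be sketched with a tiny simplified ADT (not Spark's actual `DataType` hierarchy): each type knows how to render itself as JSON, which round-trips far more reliably than parsing `toString` output.

```scala
// Minimal sketch: a two-member type ADT with hand-rolled JSON rendering.
sealed trait DataType { def json: String }
case object IntegerType extends DataType { def json: String = "\"integer\"" }

case class StructField(name: String, dataType: DataType) {
  def json: String = s"""{"name":"$name","type":${dataType.json}}"""
}

case class StructType(fields: Seq[StructField]) extends DataType {
  def json: String =
    fields.map(_.json).mkString("""{"type":"struct","fields":[""", ",", "]}")
}

val schema = StructType(Seq(StructField("x", IntegerType)))
```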
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2291#issuecomment-57085068
PR #2563 supersedes this one. Closing.
Github user liancheng closed the pull request at:
https://github.com/apache/spark/pull/2291
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2509#issuecomment-57085803
After updating my local repo, I found that `stop-thriftserver.sh` is still
not executable. Make sure to `git add` this file after `chmod +x`. This is the
only pending
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2509#issuecomment-57108375
Ah, sorry, my fault. Then this LGTM, thanks!