[GitHub] spark pull request #13130: [SPARK-15340][SQL]Limit the size of the map used ...

2016-06-27 Thread DoingDone9
Github user DoingDone9 closed the pull request at: https://github.com/apache/spark/pull/13130 --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature

[GitHub] spark pull request #12700: [SPARK-4105][CORE] regenerate the shuffle file wh...

2016-06-27 Thread DoingDone9
Github user DoingDone9 closed the pull request at: https://github.com/apache/spark/pull/12700

[GitHub] spark pull request: [SPARK-15340][SQL]Limit the size of the map us...

2016-05-30 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/13130#issuecomment-222502337 The size of a JobConf is about 124k and the memory of my driver is 10g (> 124k * 1 = 1.18g), so it works. @rxin

[GitHub] spark pull request: [SPARK-15340][SQL]Limit the size of the map us...

2016-05-23 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/13130#issuecomment-221146422 I'm confused about it too

[GitHub] spark pull request: [SPARK-15340][SQL]Limit the size of the map us...

2016-05-19 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/13130#issuecomment-220497354 cc @yhuai @srowen

[GitHub] spark pull request: [SPARK-15340][SQL]Limit the size of the map us...

2016-05-19 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/13130#issuecomment-220273234 I referred to other code that uses the maximumSize method @akohli ``` /** A cache of Spark SQL data source tables that have been accessed. */ protected[hive

[GitHub] spark pull request: [SPARK-15340][SQL]Limit the size of the map us...

2016-05-17 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/13130#issuecomment-219905068 It's OK. If the size > 1000, the new values will replace the old values. @akohli
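The eviction behaviour described here ("if the size > 1000, the new values will replace the old values") can be sketched with a plain JDK LinkedHashMap. This is an illustrative stand-in with invented names, not the PR's actual change, which relies on Guava's CacheBuilder.maximumSize:

```scala
import java.util.{LinkedHashMap => JLinkedHashMap, Map => JMap}

// Illustrative size-limited cache: once more than `maxSize` entries are
// present, the eldest entry is evicted, so the map cannot grow without
// bound (the OOM the PR is guarding against).
class BoundedCache[K, V](maxSize: Int) extends JLinkedHashMap[K, V] {
  override protected def removeEldestEntry(eldest: JMap.Entry[K, V]): Boolean =
    size() > maxSize
}

object BoundedCacheDemo extends App {
  val cache = new BoundedCache[Int, String](maxSize = 3)
  (1 to 5).foreach(i => cache.put(i, s"conf-$i"))
  // Only the 3 most recent entries survive; keys 1 and 2 were evicted.
  println(cache.keySet())
}
```

Guava's CacheBuilder additionally offers time-based expiry and removal listeners, which matter if evicted entries hold resources.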

[GitHub] spark pull request: [SPARK-15340][SQL]Limit the size of the map us...

2016-05-16 Thread DoingDone9
Github user DoingDone9 commented on a diff in the pull request: https://github.com/apache/spark/pull/13130#discussion_r63453073 --- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala --- @@ -363,7 +363,7 @@ private[spark] object HadoopRDD extends Logging

[GitHub] spark pull request: [SPARK-15340][SQL]Limit the size of the map us...

2016-05-16 Thread DoingDone9
GitHub user DoingDone9 opened a pull request: https://github.com/apache/spark/pull/13130 [SPARK-15340][SQL] Limit the size of the map used to cache JobConfs to avoid OOM # What changes were proposed in this pull request? Limit the size of the map used to cache JobConfs

[GitHub] spark pull request: [SPARK-4105][CORE] regenerate the shuffle file...

2016-04-27 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/12700#issuecomment-214983252 Now I just know that a corrupted shuffle file can cause this problem, but I do not know why the shuffle file is corrupted. @jerryshao @viper-kun

[GitHub] spark pull request: [SPARK-4105][CORE] regenerate the shuffle file...

2016-04-26 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/12700#issuecomment-214943061 I found that some tasks recomputed before FAILED_TO_UNCOMPRESS happened, and I think that something like https://github.com/apache/spark/pull/9610 caused this problem

[GitHub] spark pull request: [SPARK-4105][CORE] regenerate the shuffle file...

2016-04-26 Thread DoingDone9
GitHub user DoingDone9 reopened a pull request: https://github.com/apache/spark/pull/12700 [SPARK-4105][CORE] regenerate the shuffle file when it is corrupted I found that some tasks recomputed before FAILED_TO_UNCOMPRESS happened, and I think that retry operation Corrupted shuffle

[GitHub] spark pull request: [SPARK-4105][CORE] regenerate the shuffle file...

2016-04-26 Thread DoingDone9
Github user DoingDone9 closed the pull request at: https://github.com/apache/spark/pull/12700

[GitHub] spark pull request: [SPARK-4105][CORE] regenerate the shuffle file...

2016-04-26 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/12700#issuecomment-214940051 Yeah, I haven't found the root cause yet and have been troubled by this problem for a long time. Any ideas on this problem, @srowen?

[GitHub] spark pull request: [SPARK-4105][Core] regenerate the shuffle file...

2016-04-26 Thread DoingDone9
GitHub user DoingDone9 opened a pull request: https://github.com/apache/spark/pull/12700 [SPARK-4105][Core] regenerate the shuffle file when it is corrupted I found that some tasks recomputed before FAILED_TO_UNCOMPRESS happened, and I think that retry operation Corrupted shuffle file

[GitHub] spark pull request: [SPARK-11100][SQL]HiveThriftServer HA issue,Hi...

2016-03-31 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/9113#issuecomment-203886680 good job!

[GitHub] spark pull request: [SPARK-11974][CORE]Not all the temp dirs had b...

2015-11-25 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/9951#issuecomment-159529657 Is it OK? @rxin

[GitHub] spark pull request: [SPARK-11974][CORE]Not all the temp dirs had b...

2015-11-24 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/9951#issuecomment-159525672
```
shutdownDeletePaths.foreach { dirPath =>
  try {
    logInfo("Deleting directory " + dirPath)
    Utils.deleteRecursi
```

[GitHub] spark pull request: [SPARK-11974][CORE]Not all the temp dirs had b...

2015-11-24 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/9951#issuecomment-159527936 OK, got it @rxin

[GitHub] spark pull request: [SPARK-11974][CORE]Not all the temp dirs had b...

2015-11-24 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/9951#issuecomment-159521038 It cannot delete all elements of shutdownDeletePaths. As in the example above, this method cannot delete all elements of a.

[GitHub] spark pull request: [SPARK-11974][CORE]Not all the temp dirs had b...

2015-11-24 Thread DoingDone9
GitHub user DoingDone9 opened a pull request: https://github.com/apache/spark/pull/9951 [SPARK-11974][CORE] Not all the temp dirs had been deleted when the JVM exits. Deleting the temp dirs like this:
```
val a = mutable.Set(1,2,3,4,7,0,98,9,8)
a.foreach(x
```
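The failure mode this PR describes, mutating a mutable.Set while foreach-ing over it, can be reproduced in isolation. The snapshot-based fix below is one possible remedy shown for illustration, not necessarily the change the PR itself makes:

```scala
import scala.collection.mutable

object TempDirCleanupDemo extends App {
  // Removing elements from a mutable.Set during foreach is undefined
  // behaviour: the hash-table iterator can skip entries, so some "temp
  // dirs" may never be deleted.
  val a = mutable.Set(1, 2, 3, 4, 7, 0, 98, 9, 8)
  a.foreach(x => a.remove(x))
  println(s"possibly left behind: $a")

  // Iterating over an immutable snapshot removes every element reliably.
  val b = mutable.Set(1, 2, 3, 4, 7, 0, 98, 9, 8)
  b.toList.foreach(x => b.remove(x))
  assert(b.isEmpty)
}
```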

[GitHub] spark pull request: [SPARK-8552] [THRIFTSERVER] Using incorrect da...

2015-11-07 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/7118#issuecomment-154654481 @navis when I use spark-sql and run SQL like " add jar /home/udf-0.0.1-SNAPSHOT.jar; create temporary function arr_greater_

[GitHub] spark pull request: [SPARK-8811][SQL] Read array struct data from ...

2015-07-07 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/7209#issuecomment-119083697 LGTM

[GitHub] spark pull request: [SPARK-7824][SQL]Collapsing operator reorderin...

2015-06-11 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/6351#issuecomment-111042808 @marmbrus @yhuai @scwf /cc

[GitHub] spark pull request: [SPARK-7824][SQL]Collapsing operator reorderin...

2015-06-09 Thread DoingDone9
Github user DoingDone9 closed the pull request at: https://github.com/apache/spark/pull/6351

[GitHub] spark pull request: [SPARK-7824][SQL]Collapsing operator reorderin...

2015-06-09 Thread DoingDone9
GitHub user DoingDone9 reopened a pull request: https://github.com/apache/spark/pull/6351 [SPARK-7824][SQL]Collapsing operator reordering and constant folding into a single batch to push down the single side. SQL ``` select * from tableA join tableB on (a 3 and b = d

[GitHub] spark pull request: [SPARK-6976][SQL] drop table if exists src p...

2015-06-09 Thread DoingDone9
Github user DoingDone9 closed the pull request at: https://github.com/apache/spark/pull/5553

[GitHub] spark pull request: [SPARK-7824][SQL]Collapsing operator reorderin...

2015-06-03 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/6351#issuecomment-108669840 @marmbrus

[GitHub] spark pull request: [SPARK-7867][SQL] Support revoke role ...

2015-05-28 Thread DoingDone9
Github user DoingDone9 closed the pull request at: https://github.com/apache/spark/pull/6410

[GitHub] spark pull request: [SPARK-7824][SQL]Collapsing operator reorderin...

2015-05-26 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/6351#issuecomment-105740359 @marmbrus /cc

[GitHub] spark pull request: [SPARK-7867][SQL] Support revoke role ...

2015-05-26 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/6410#issuecomment-105740488 @marmbrus

[GitHub] spark pull request: [SPARK-7867][SQL] Support revoke role ...

2015-05-26 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/6410#issuecomment-105707355 Cancel user role in permission control

[GitHub] spark pull request: [SPARK-7824][SQL] Extracting and/or condition ...

2015-05-26 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/6351#issuecomment-10575 Yes, it can work, but this batch will have two different types of optimizers. @marmbrus

[GitHub] spark pull request: [SPARK-7824][SQL] Extracting and/or condition ...

2015-05-26 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/6351#issuecomment-105458594 @marmbrus @scwf

[GitHub] spark pull request: [SPARK-7867][SQL] Support revoke role ...

2015-05-26 Thread DoingDone9
GitHub user DoingDone9 opened a pull request: https://github.com/apache/spark/pull/6410 [SPARK-7867][SQL] Support revoke role ... SQL like ``` revoke role role_a from user user1; ``` You can merge this pull request into a Git repository by running: $ git pull https

[GitHub] spark pull request: [SPARK-7867][SQL] Support revoke role ...

2015-05-26 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/6410#issuecomment-105510479 @scwf

[GitHub] spark pull request: [SPARK-7824][SQL] Extracting and/or condition ...

2015-05-22 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/6351#issuecomment-104817104 This optimizer can avoid CartesianProduct. Tables:
```
tableA    tableB    tableC
a int     c int     f int
b int     d
```

[GitHub] spark pull request: [SPARK-7824][SQL] Extracting and/or condition ...

2015-05-22 Thread DoingDone9
GitHub user DoingDone9 opened a pull request: https://github.com/apache/spark/pull/6351 [SPARK-7824][SQL] Extracting and/or condition optimizer from BooleanSimplification optimizer and put it before PushPredicateThroughJoin optimizer to push down the single side. SQL

[GitHub] spark pull request: [SPARK-7437][SQL] Fold literal in (item1, ite...

2015-05-11 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/5972#issuecomment-100823894 @marmbrus @yhuai @scwf

[GitHub] spark pull request: [SPARK-6198][SQL] Support select current_data...

2015-05-10 Thread DoingDone9
Github user DoingDone9 closed the pull request at: https://github.com/apache/spark/pull/5538

[GitHub] spark pull request: [SPARK-7437][SQL] Fold literal in (item1, ite...

2015-05-07 Thread DoingDone9
GitHub user DoingDone9 opened a pull request: https://github.com/apache/spark/pull/5972 [SPARK-7437][SQL] Fold literal in (item1, item2, ..., literal, ...) into false directly if not in. Just Fold literal in (item1, item2, ..., literal, ...) into true directly
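The folding described in the title can be sketched on a toy expression tree (simplified placeholders, not Spark's actual Catalyst Expression API): when the tested value and the entire IN-list are literals, the predicate is decidable once at optimization time instead of per row.

```scala
// Toy expression tree standing in for Catalyst expressions.
sealed trait Expr
case class Lit(v: Any) extends Expr
case class In(value: Expr, list: Seq[Expr]) extends Expr

// Fold `literal IN (lit1, lit2, ...)` to Lit(true) if the value is present
// in the list, Lit(false) if it is not; leave anything else untouched.
def foldIn(e: Expr): Expr = e match {
  case In(Lit(v), list) if list.forall(_.isInstanceOf[Lit]) =>
    Lit(list.collect { case Lit(x) => x }.contains(v))
  case other => other
}
```

For example, foldIn(In(Lit(2), Seq(Lit(1), Lit(2)))) yields Lit(true), while foldIn(In(Lit(5), Seq(Lit(1), Lit(2)))) yields Lit(false).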

[GitHub] spark pull request: [SPARK-7437][SQL] Fold literal in (item1, ite...

2015-05-07 Thread DoingDone9
Github user DoingDone9 closed the pull request at: https://github.com/apache/spark/pull/5972

[GitHub] spark pull request: [SPARK-7437][SQL] Fold literal in (item1, ite...

2015-05-07 Thread DoingDone9
GitHub user DoingDone9 reopened a pull request: https://github.com/apache/spark/pull/5972 [SPARK-7437][SQL] Fold literal in (item1, item2, ..., literal, ...) into false directly if not in. Just Fold literal in (item1, item2, ..., literal, ...) into true directly

[GitHub] spark pull request: [SPARK-7437][SQL] Fold literal in (item1, ite...

2015-05-07 Thread DoingDone9
Github user DoingDone9 commented on a diff in the pull request: https://github.com/apache/spark/pull/5972#discussion_r29911099 --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala --- @@ -293,7 +293,15 @@ object ConstantFolding extends

[GitHub] spark pull request: [SPARK-7437][SQL] Fold literal in (item1, ite...

2015-05-07 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/5972#issuecomment-99842905 @scwf

[GitHub] spark pull request: [SPARK-7437][SQL] Fold literal in (item1, ite...

2015-05-07 Thread DoingDone9
Github user DoingDone9 commented on a diff in the pull request: https://github.com/apache/spark/pull/5972#discussion_r29912494 --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala --- @@ -46,13 +46,13 @@ object DefaultOptimizer extends

[GitHub] spark pull request: [SPARK-7437][SQL] Fold literal in (item1, ite...

2015-05-07 Thread DoingDone9
Github user DoingDone9 commented on a diff in the pull request: https://github.com/apache/spark/pull/5972#discussion_r29912351 --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala --- @@ -110,6 +110,7 @@ case class InSet(value

[GitHub] spark pull request: [SPARK-7225][SQL] CombineLimits optimizer does...

2015-04-29 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/5770#issuecomment-97353247 @rxin I have added a test for this.

[GitHub] spark pull request: [SPARK-7225][SQL] CombineLimits optimizer does...

2015-04-29 Thread DoingDone9
GitHub user DoingDone9 opened a pull request: https://github.com/apache/spark/pull/5770 [SPARK-7225][SQL] CombineLimits optimizer does not work SQL ``` select key from (select key from src limit 100) t2 limit 10 ``` Optimized Logical Plan before modifying
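The intended rewrite for the query above can be sketched on a toy plan tree (simplified, not Spark's LogicalPlan API): adjacent limits collapse into a single limit with the smaller row count.

```scala
// Toy logical plan standing in for Catalyst's LogicalPlan.
sealed trait Plan
case class Scan(table: String) extends Plan
case class Limit(n: Int, child: Plan) extends Plan

// Two nested Limits combine into one taking the smaller count; the rule
// is reapplied in case more than two limits are stacked.
def combineLimits(plan: Plan): Plan = plan match {
  case Limit(outer, Limit(inner, child)) =>
    combineLimits(Limit(math.min(outer, inner), child))
  case other => other
}
```

For `select key from (select key from src limit 100) t2 limit 10`, the plan Limit(10, Limit(100, Scan("src"))) becomes Limit(10, Scan("src")).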

[GitHub] spark pull request: [SPARK-7267][SQL]Push down Project when it's c...

2015-04-29 Thread DoingDone9
GitHub user DoingDone9 opened a pull request: https://github.com/apache/spark/pull/5797 [SPARK-7267][SQL]Push down Project when it's child is Limit SQL ``` select key from (select key,value from t1 limit 100) t2 limit 10 ``` Optimized Logical Plan before modifying

[GitHub] spark pull request: [SPARK-7225][SQL] CombineLimits optimizer does...

2015-04-29 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/5770#issuecomment-97645345 Got it @marmbrus @rxin

[GitHub] spark pull request: [SPARK-6976][SQL] drop table if exists src p...

2015-04-28 Thread DoingDone9
GitHub user DoingDone9 reopened a pull request: https://github.com/apache/spark/pull/5553 [SPARK-6976][SQL] drop table if exists src print ERROR info that should not be printed when src not exists. If table src does not exist and you run the SQL drop table if exists src, then some ERROR info

[GitHub] spark pull request: [SPARK-6198][SQL] Support select current_data...

2015-04-22 Thread DoingDone9
Github user DoingDone9 commented on a diff in the pull request: https://github.com/apache/spark/pull/5538#discussion_r28933363 --- Diff: sql/hive/v0.13.1/src/main/scala/org/apache/spark/sql/hive/Shim13.scala --- @@ -218,7 +218,13 @@ private[hive] object HiveShim

[GitHub] spark pull request: [SPARK-6198][SQL] Support select current_data...

2015-04-22 Thread DoingDone9
Github user DoingDone9 commented on a diff in the pull request: https://github.com/apache/spark/pull/5538#discussion_r28933263 --- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/sqlUDFCurrentDB.scala --- @@ -0,0 +1,42 @@ +/* + * Licensed to the Apache Software

[GitHub] spark pull request: [SPARK-6768][SQL] Do not support float/double...

2015-04-21 Thread DoingDone9
Github user DoingDone9 closed the pull request at: https://github.com/apache/spark/pull/5418

[GitHub] spark pull request: [SPARK-6198][SQL] Support select current_data...

2015-04-21 Thread DoingDone9
Github user DoingDone9 commented on a diff in the pull request: https://github.com/apache/spark/pull/5538#discussion_r28751859 --- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/sqlUDFCurrentDB.scala --- @@ -0,0 +1,43 @@ +/* + * Licensed to the Apache Software

[GitHub] spark pull request: [SPARK-6198][SQL] Support select current_data...

2015-04-21 Thread DoingDone9
Github user DoingDone9 commented on a diff in the pull request: https://github.com/apache/spark/pull/5538#discussion_r28838875 --- Diff: sql/hive/v0.12.0/src/main/scala/org/apache/spark/sql/hive/Shim12.scala --- @@ -135,7 +135,13 @@ private[hive] object HiveShim

[GitHub] spark pull request: [SPARK-7026][SQL] make LeftSemiJoin work when ...

2015-04-21 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/5610#issuecomment-95004242 Got it. This is not a good solution; I will close it. Maybe I can do something in LeftSemiJoinHash and BroadcastLeftSemiJoinHash. Thank you @marmbrus

[GitHub] spark pull request: [SPARK-6198][SQL] Support select current_data...

2015-04-21 Thread DoingDone9
Github user DoingDone9 commented on a diff in the pull request: https://github.com/apache/spark/pull/5538#discussion_r28840547 --- Diff: sql/hive/v0.12.0/src/main/scala/org/apache/spark/sql/hive/Shim12.scala --- @@ -135,7 +135,13 @@ private[hive] object HiveShim

[GitHub] spark pull request: [SPARK-7026][SQL] make LeftSemiJoin work when ...

2015-04-21 Thread DoingDone9
Github user DoingDone9 closed the pull request at: https://github.com/apache/spark/pull/5610

[GitHub] spark pull request: [SPARK-6198][SQL] Support select current_data...

2015-04-21 Thread DoingDone9
Github user DoingDone9 commented on a diff in the pull request: https://github.com/apache/spark/pull/5538#discussion_r28839749 --- Diff: sql/hive/v0.12.0/src/main/scala/org/apache/spark/sql/hive/Shim12.scala --- @@ -135,7 +135,13 @@ private[hive] object HiveShim

[GitHub] spark pull request: [SPARK-6198][SQL] Support select current_data...

2015-04-21 Thread DoingDone9
Github user DoingDone9 commented on a diff in the pull request: https://github.com/apache/spark/pull/5538#discussion_r28840911 --- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/sqlUDFCurrentDB.scala --- @@ -0,0 +1,42 @@ +/* + * Licensed to the Apache Software

[GitHub] spark pull request: [SPARK-7026][SQL] make LeftSemiJoin work when ...

2015-04-21 Thread DoingDone9
GitHub user DoingDone9 opened a pull request: https://github.com/apache/spark/pull/5610 [SPARK-7026][SQL] make LeftSemiJoin work when it has both equal condition and not equal condition When a left semi join has both an equality condition and a non-equality condition, it cannot work. SQL like

[GitHub] spark pull request: [SPARK-6198][SQL] Support select current_data...

2015-04-20 Thread DoingDone9
Github user DoingDone9 commented on a diff in the pull request: https://github.com/apache/spark/pull/5538#discussion_r28746146 --- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/sqlUDFCurrentDB.scala --- @@ -0,0 +1,43 @@ +/* + * Licensed to the Apache Software

[GitHub] spark pull request: [SPARK-6198][SQL] Support select current_data...

2015-04-20 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/5538#issuecomment-94635160 @chenghao-intel your idea is good, but "select current_database" is Hive syntax, and I want to implement it. And this UDF does not run within executor(s

[GitHub] spark pull request: [SPARK-6198][SQL] Support select current_data...

2015-04-20 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/5538#issuecomment-94629707 @chenghao-intel I know this method can get the dbName, but it can only be used with the CLI. It is necessary to get the dbName without the CLI. And I have explained

[GitHub] spark pull request: [SPARK-6198][SQL] Support select current_data...

2015-04-20 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/5538#issuecomment-94617955 Sorry, I changed the code and then the comment disappeared; I will add it again.

[GitHub] spark pull request: [SPARK-6198][SQL] Support select current_data...

2015-04-20 Thread DoingDone9
Github user DoingDone9 commented on a diff in the pull request: https://github.com/apache/spark/pull/5538#discussion_r28746101 --- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/sqlUDFCurrentDB.scala --- @@ -0,0 +1,43 @@ +/* + * Licensed to the Apache Software

[GitHub] spark pull request: [SPARK-6976][SQL] drop table if exists src p...

2015-04-17 Thread DoingDone9
GitHub user DoingDone9 opened a pull request: https://github.com/apache/spark/pull/5553 [SPARK-6976][SQL] drop table if exists src print ERROR info that should not be printed when src not exists. If table src does not exist and you run the SQL drop table if exists src, then some ERROR info

[GitHub] spark pull request: [SPARK-6976][SQL] drop table if exists src p...

2015-04-17 Thread DoingDone9
Github user DoingDone9 closed the pull request at: https://github.com/apache/spark/pull/5553

[GitHub] spark pull request: [SPARK-6198][SQL] Support select current_data...

2015-04-16 Thread DoingDone9
Github user DoingDone9 commented on a diff in the pull request: https://github.com/apache/spark/pull/5538#discussion_r28498708 --- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/sqlUDFCurrentDB.scala --- @@ -0,0 +1,43 @@ +/* + * Licensed to the Apache Software

[GitHub] spark pull request: [SPARK-6198][SQL] Support select current_data...

2015-04-15 Thread DoingDone9
GitHub user DoingDone9 opened a pull request: https://github.com/apache/spark/pull/5538 [SPARK-6198][SQL] Support select current_database() to support select current_database() ``` The method (evaluate) has changed in UDFCurrentDB; it just throws an exception. But hiveUdfs

[GitHub] spark pull request: [SPARK-6198][SQL] Support select current_data...

2015-04-15 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/5538#issuecomment-93625012 @marmbrus

[GitHub] spark pull request: [SPARK-6198][SQL] Support select current_data...

2015-04-15 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/4995#issuecomment-93624977 I have opened a new PR: https://github.com/apache/spark/pull/5538

[GitHub] spark pull request: [SPARK-6198][SQL] Support select current_data...

2015-04-13 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/4995#issuecomment-92599430 I do not agree with that. Because this expression is foldable, it will be computed in the Optimizer's ConstantFolding. So I will get the name of currentDB after

[GitHub] spark pull request: [SPARK-6198][SQL] Support select current_data...

2015-04-12 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/4995#issuecomment-92159014 My previous test was successful, and I will test it again. Thank you @marmbrus

[GitHub] spark pull request: [SPARK-6493][SQL]Support numeric(a,b) in the s...

2015-04-12 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/5166#issuecomment-92159074 I agree with you; closing it. Thank you @marmbrus

[GitHub] spark pull request: [SPARK-6198][SQL] Support select current_data...

2015-04-12 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/4995#issuecomment-92189086 Could you tell me how you got this exception? I tested with three nodes, and it works again. Thank you @marmbrus

[GitHub] spark pull request: [SPARK-6768][SQL] Do not support float/double...

2015-04-08 Thread DoingDone9
GitHub user DoingDone9 opened a pull request: https://github.com/apache/spark/pull/5418 [SPARK-6768][SQL] Do not support float/double union decimal or decimal(a ,b) union decimal(c, d) Do not support sql like that ``` select cast(12.2056999 as float) from testData limit 1

[GitHub] spark pull request: [SPARK-5129][SQL] make SqlContext support sql ...

2015-04-03 Thread DoingDone9
Github user DoingDone9 closed the pull request at: https://github.com/apache/spark/pull/3931

[GitHub] spark pull request: [SPARK-6546][Build] Using the wrong code that ...

2015-03-26 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/5198#issuecomment-86416015 OK, I will change it @liancheng

[GitHub] spark pull request: [SPARK-6546][Build] Using the wrong code that ...

2015-03-26 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/5198#issuecomment-86363210 You are right @liancheng

[GitHub] spark pull request: [SPARK-6198][SQL] Support select current_data...

2015-03-26 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/4995#issuecomment-86384179 Yes, it works. I have tested it in distributed mode with two nodes. @rxin

[GitHub] spark pull request: [SPARK-6409][SQL] It is not necessary that avo...

2015-03-25 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/5131#issuecomment-85914215 I have added a test for it @marmbrus

[GitHub] spark pull request: A little spell wrong, but this will make spark...

2015-03-25 Thread DoingDone9
GitHub user DoingDone9 opened a pull request: https://github.com/apache/spark/pull/5198 A small spelling mistake, but it makes Spark fail to compile!! The correct code is `val tmpDir = Files.createTempDir()` — the code wrote `File` where it should be `Files`. You can merge this pull request into a Git repository

[GitHub] spark pull request: [SPARK-6198][SQL] Support select current_data...

2015-03-25 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/4995#issuecomment-86329564 will anyone test it? @marmbrus @srowen

[GitHub] spark pull request: [SPARK-6546][Build] Using the wrong code that ...

2015-03-25 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/5198#issuecomment-86324403 I think this is pressing @marmbrus

[GitHub] spark pull request: [SPARK-6493][SQL]Support numeric(a,b) in the s...

2015-03-24 Thread DoingDone9
GitHub user DoingDone9 opened a pull request: https://github.com/apache/spark/pull/5166 [SPARK-6493][SQL] Support numeric(a,b) in the sqlContext. Support SQL like `select cast(20.12 as numeric(4,2)) from src limit 1;`
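Supporting `numeric(a,b)` comes down to treating it as a synonym for `decimal(a,b)` and enforcing its precision/scale bound. A hedged plain-Scala sketch of that check (the names `fits`, `precision`, `scale` here are illustrative, not Spark's API):

```scala
// Sketch of the precision/scale bound behind a type like numeric(4,2):
// at most 4 significant digits overall, at most 2 after the point.
object NumericBounds {
  def fits(value: BigDecimal, precision: Int, scale: Int): Boolean =
    value.precision <= precision && value.scale <= scale

  def main(args: Array[String]): Unit = {
    println(fits(BigDecimal("20.12"), 4, 2))  // true: 4 digits, 2 fractional
    println(fits(BigDecimal("201.12"), 4, 2)) // false: 5 digits total
  }
}
```

So `cast(20.12 as numeric(4,2))` is exactly representable, while a wider value would need a larger precision.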

[GitHub] spark pull request: [SPARK-6049][SQL] It is not necessary that avo...

2015-03-22 Thread DoingDone9
GitHub user DoingDone9 opened a pull request: https://github.com/apache/spark/pull/5131 [SPARK-6049][SQL] It is not necessary to avoid the old interface of Hive, because this makes some UDAFs unable to work. Spark avoids the old interface of Hive, so some UDAFs cannot work, like

[GitHub] spark pull request: [SPARK-6300][Spark Core] sc.addFile(path) does...

2015-03-13 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/4993#issuecomment-78838440 /cc @sryza @srowen

[GitHub] spark pull request: [SPARK-6198][SQL] Support select current_data...

2015-03-12 Thread DoingDone9
Github user DoingDone9 closed the pull request at: https://github.com/apache/spark/pull/4926

[GitHub] spark pull request: [SPARK-6198][SQL] Support select current_data...

2015-03-12 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/4926#issuecomment-78447224 I have opened a new PR for this. I created a new UDF and register it instead of intercepting code. https://github.com/apache/spark/pull/4995 @chenghao-intel

[GitHub] spark pull request: [SPARK-6198][SQL] Support select current_data...

2015-03-12 Thread DoingDone9
GitHub user DoingDone9 opened a pull request: https://github.com/apache/spark/pull/4995 [SPARK-6198][SQL] Support select current_database(). The method (evaluate) has changed in UDFCurrentDB; it just throws an exception. But hiveUdfs calls this method and fails. @Override

[GitHub] spark pull request: [SPARK-6243][SQL] The Operation of match did n...

2015-03-12 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/4959#issuecomment-78453711 /CC @marmbrus

[GitHub] spark pull request: [SPARK-6271][SQL] Sort these tokens in alphabe...

2015-03-12 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/4973#issuecomment-78453677 /CC @srowen

[GitHub] spark pull request: [SPARK-6300][Spark Core] sc.addFile(path) does...

2015-03-12 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/4993#issuecomment-78783053 I have changed it. Please test it. @srowen

[GitHub] spark pull request: [SPARK-6179][SQL] Add token for SHOW PRINCIPA...

2015-03-12 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/4902#issuecomment-78462968 /cc @marmbrus

[GitHub] spark pull request: [SPARK-6300][Spark Core] sc.addFile(path) does...

2015-03-12 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/4993#issuecomment-78462887 I have added the test, please test it. @sryza

[GitHub] spark pull request: [SPARK-5794] [SQL] [WIP] fix add jar

2015-03-12 Thread DoingDone9
Github user DoingDone9 commented on the pull request: https://github.com/apache/spark/pull/4586#issuecomment-78463251 LGTM

[GitHub] spark pull request: [SPARK-6300][Spark Core] sc.addFile(path) does...

2015-03-12 Thread DoingDone9
GitHub user DoingDone9 opened a pull request: https://github.com/apache/spark/pull/4993 [SPARK-6300][Spark Core] sc.addFile(path) does not support a relative path. When I run a command like sc.addFile("../test.txt"), it does not work and throws an exception
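One way the relative-path problem above can be handled is by normalizing the argument to an absolute path before it is used. This is only a plain-Scala sketch of that idea (the `ResolvePath` object and `toAbsolute` helper are illustrative, not the actual Spark fix):

```scala
import java.io.File

// Sketch: turn a possibly-relative path into an absolute one before
// handing it to an API (such as sc.addFile) that expects a full path.
object ResolvePath {
  def toAbsolute(path: String): String =
    new File(path).getAbsolutePath

  def main(args: Array[String]): Unit = {
    val p = toAbsolute("../test.txt")
    // The resolved path is anchored at the current working directory.
    println(new File(p).isAbsolute) // true
  }
}
```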
