[GitHub] [spark] AmplabJenkins commented on issue #26644: [SPARK-30004][SQL] Allow merge UserDefinedType into a native DataType
AmplabJenkins commented on issue #26644: [SPARK-30004][SQL] Allow merge UserDefinedType into a native DataType URL: https://github.com/apache/spark/pull/26644#issuecomment-559382737 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/19392/ Test PASSed. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org For additional commands, e-mail: reviews-h...@spark.apache.org
[GitHub] [spark] AmplabJenkins removed a comment on issue #26644: [SPARK-30004][SQL] Allow merge UserDefinedType into a native DataType
AmplabJenkins removed a comment on issue #26644: [SPARK-30004][SQL] Allow merge UserDefinedType into a native DataType URL: https://github.com/apache/spark/pull/26644#issuecomment-559382730 Merged build finished. Test PASSed.
[GitHub] [spark] AmplabJenkins commented on issue #26644: [SPARK-30004][SQL] Allow merge UserDefinedType into a native DataType
AmplabJenkins commented on issue #26644: [SPARK-30004][SQL] Allow merge UserDefinedType into a native DataType URL: https://github.com/apache/spark/pull/26644#issuecomment-559382730 Merged build finished. Test PASSed.
[GitHub] [spark] AmplabJenkins removed a comment on issue #26644: [SPARK-30004][SQL] Allow merge UserDefinedType into a native DataType
AmplabJenkins removed a comment on issue #26644: [SPARK-30004][SQL] Allow merge UserDefinedType into a native DataType URL: https://github.com/apache/spark/pull/26644#issuecomment-559382737 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/19392/ Test PASSed.
[GitHub] [spark] SparkQA commented on issue #26644: [SPARK-30004][SQL] Allow merge UserDefinedType into a native DataType
SparkQA commented on issue #26644: [SPARK-30004][SQL] Allow merge UserDefinedType into a native DataType URL: https://github.com/apache/spark/pull/26644#issuecomment-559382312 **[Test build #114564 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114564/testReport)** for PR 26644 at commit [`60a45f3`](https://github.com/apache/spark/commit/60a45f3805e59cb8c4e25020285e76b363506dc6).
[GitHub] [spark] AmplabJenkins removed a comment on issue #26195: [SPARK-29537][SQL] throw exception when user defined a wrong base path
AmplabJenkins removed a comment on issue #26195: [SPARK-29537][SQL] throw exception when user defined a wrong base path URL: https://github.com/apache/spark/pull/26195#issuecomment-559374635 Merged build finished. Test PASSed.
[GitHub] [spark] AmplabJenkins removed a comment on issue #26195: [SPARK-29537][SQL] throw exception when user defined a wrong base path
AmplabJenkins removed a comment on issue #26195: [SPARK-29537][SQL] throw exception when user defined a wrong base path URL: https://github.com/apache/spark/pull/26195#issuecomment-559374643 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/114560/ Test PASSed.
[GitHub] [spark] AmplabJenkins commented on issue #26195: [SPARK-29537][SQL] throw exception when user defined a wrong base path
AmplabJenkins commented on issue #26195: [SPARK-29537][SQL] throw exception when user defined a wrong base path URL: https://github.com/apache/spark/pull/26195#issuecomment-559374643 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/114560/ Test PASSed.
[GitHub] [spark] AmplabJenkins commented on issue #26195: [SPARK-29537][SQL] throw exception when user defined a wrong base path
AmplabJenkins commented on issue #26195: [SPARK-29537][SQL] throw exception when user defined a wrong base path URL: https://github.com/apache/spark/pull/26195#issuecomment-559374635 Merged build finished. Test PASSed.
[GitHub] [spark] SparkQA commented on issue #26195: [SPARK-29537][SQL] throw exception when user defined a wrong base path
SparkQA commented on issue #26195: [SPARK-29537][SQL] throw exception when user defined a wrong base path URL: https://github.com/apache/spark/pull/26195#issuecomment-559374185 **[Test build #114560 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114560/testReport)** for PR 26195 at commit [`66f0bd3`](https://github.com/apache/spark/commit/66f0bd36cdf64cfac11ed7199badfa820e7f3d38). * This patch passes all tests. * This patch merges cleanly. * This patch adds no public classes.
[GitHub] [spark] SparkQA removed a comment on issue #26195: [SPARK-29537][SQL] throw exception when user defined a wrong base path
SparkQA removed a comment on issue #26195: [SPARK-29537][SQL] throw exception when user defined a wrong base path URL: https://github.com/apache/spark/pull/26195#issuecomment-559329895 **[Test build #114560 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114560/testReport)** for PR 26195 at commit [`66f0bd3`](https://github.com/apache/spark/commit/66f0bd36cdf64cfac11ed7199badfa820e7f3d38).
[GitHub] [spark] yaooqinn commented on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types
yaooqinn commented on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types URL: https://github.com/apache/spark/pull/26699#issuecomment-559373420 > Is the title `Support cache for interval data`? "cache" seems API-like to me; here we also use this cache for underlying optimization.
[GitHub] [spark] AmplabJenkins removed a comment on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types
AmplabJenkins removed a comment on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types URL: https://github.com/apache/spark/pull/26699#issuecomment-559372991 Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/114561/ Test FAILed.
[GitHub] [spark] SparkQA removed a comment on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types
SparkQA removed a comment on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types URL: https://github.com/apache/spark/pull/26699#issuecomment-559333736 **[Test build #114561 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114561/testReport)** for PR 26699 at commit [`4728ba4`](https://github.com/apache/spark/commit/4728ba4ab232b2164c203d9a7677b6918e00751f).
[GitHub] [spark] AmplabJenkins commented on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types
AmplabJenkins commented on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types URL: https://github.com/apache/spark/pull/26699#issuecomment-559372991 Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/114561/ Test FAILed.
[GitHub] [spark] AmplabJenkins commented on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types
AmplabJenkins commented on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types URL: https://github.com/apache/spark/pull/26699#issuecomment-559372987 Merged build finished. Test FAILed.
[GitHub] [spark] AmplabJenkins removed a comment on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types
AmplabJenkins removed a comment on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types URL: https://github.com/apache/spark/pull/26699#issuecomment-559372987 Merged build finished. Test FAILed.
[GitHub] [spark] SparkQA commented on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types
SparkQA commented on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types URL: https://github.com/apache/spark/pull/26699#issuecomment-559372786 **[Test build #114561 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114561/testReport)** for PR 26699 at commit [`4728ba4`](https://github.com/apache/spark/commit/4728ba4ab232b2164c203d9a7677b6918e00751f). * This patch **fails Spark unit tests**. * This patch merges cleanly. * This patch adds no public classes.
[GitHub] [spark] HeartSaVioR commented on issue #26416: [SPARK-29779][CORE] Compact old event log files and cleanup
HeartSaVioR commented on issue #26416: [SPARK-29779][CORE] Compact old event log files and cleanup URL: https://github.com/apache/spark/pull/26416#issuecomment-559369890 cc. @vanzin @squito @gaborgsomogyi @Ngone51 I'd really appreciate a new round of reviews. I think I've addressed most of the feedback and it's ready for review. (One thing I'd like to hear opinions on is "when" to do compaction, as it may be a considerably heavy operation depending on the number/size of files to compact.)
[GitHub] [spark] maropu commented on a change in pull request #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression
maropu commented on a change in pull request #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression URL: https://github.com/apache/spark/pull/26656#discussion_r351615643 ## File path: sql/core/src/test/resources/sql-tests/results/group-by-filter.sql.out ## @@ -0,0 +1,332 @@ +-- Automatically generated by SQLQueryTestSuite +-- Number of queries: 27 + + +-- !query 0 +CREATE OR REPLACE TEMPORARY VIEW testData AS SELECT * FROM VALUES +(1, 1), (1, 2), (2, 1), (2, 2), (3, 1), (3, 2), (null, 1), (3, null), (null, null) +AS testData(a, b) +-- !query 0 schema +struct<> +-- !query 0 output + + + +-- !query 1 +SELECT a, COUNT(b) FILTER (WHERE a >= 2) FROM testData +-- !query 1 schema +struct<> +-- !query 1 output +org.apache.spark.sql.AnalysisException +grouping expressions sequence is empty, and 'testdata.`a`' is not an aggregate function. Wrap '(count(testdata.`b`) AS `count(b)`)' in windowing function(s) or wrap 'testdata.`a`' in first() (or first_value) if you don't care which value you get.; + + +-- !query 2 +SELECT COUNT(a) FILTER (WHERE a = 1), COUNT(b) FILTER (WHERE a > 1) FROM testData +-- !query 2 schema +struct +-- !query 2 output +2 4 + + +-- !query 3 +SELECT a, COUNT(b) FILTER (WHERE a >= 2) FROM testData GROUP BY a +-- !query 3 schema +struct +-- !query 3 output +1 0 +2 2 +3 2 +NULL 0 + + +-- !query 4 +SELECT a, COUNT(b) FILTER (WHERE a != 2) FROM testData GROUP BY b +-- !query 4 schema +struct<> +-- !query 4 output +org.apache.spark.sql.AnalysisException +expression 'testdata.`a`' is neither present in the group by, nor is it an aggregate function.
Add to group by or wrap in first() (or first_value) if you don't care which value you get.; + + +-- !query 5 +SELECT COUNT(a) FILTER (WHERE a >= 0), COUNT(b) FILTER (WHERE a >= 3) FROM testData GROUP BY a +-- !query 5 schema +struct +-- !query 5 output +0 0 +2 0 +2 0 +3 2 + + +-- !query 6 +SELECT 'foo', COUNT(a) FILTER (WHERE b <= 2) FROM testData GROUP BY 1 +-- !query 6 schema +struct +-- !query 6 output +foo6 + + +-- !query 7 +SELECT 'foo', APPROX_COUNT_DISTINCT(a) FILTER (WHERE b >= 0) FROM testData WHERE a = 0 GROUP BY 1 +-- !query 7 schema +struct +-- !query 7 output + + + +-- !query 8 +SELECT 'foo', MAX(STRUCT(a)) FILTER (WHERE b >= 1) FROM testData WHERE a = 0 GROUP BY 1 +-- !query 8 schema +struct> +-- !query 8 output + + + +-- !query 9 +SELECT a + b, COUNT(b) FILTER (WHERE b >= 2) FROM testData GROUP BY a + b +-- !query 9 schema +struct<(a + b):int,count(b):bigint> +-- !query 9 output +2 0 +3 1 +4 1 +5 1 +NULL 0 + + +-- !query 10 +SELECT a + 2, COUNT(b) FILTER (WHERE b IN (1, 2)) FROM testData GROUP BY a + 1 +-- !query 10 schema +struct<> +-- !query 10 output +org.apache.spark.sql.AnalysisException +expression 'testdata.`a`' is neither present in the group by, nor is it an aggregate function.
Add to group by or wrap in first() (or first_value) if you don't care which value you get.; + + +-- !query 11 +SELECT a + 1 + 1, COUNT(b) FILTER (WHERE b > 0) FROM testData GROUP BY a + 1 +-- !query 11 schema +struct<((a + 1) + 1):int,count(b):bigint> +-- !query 11 output +3 2 +4 2 +5 2 +NULL 1 + + +-- !query 12 +SELECT COUNT(DISTINCT b) FILTER (WHERE b > 0), COUNT(DISTINCT b, c) FILTER (WHERE b > 0 AND c > 2) +FROM (SELECT 1 AS a, 2 AS b, 3 AS c) GROUP BY a +-- !query 12 schema +struct +-- !query 12 output +1 1 + + +-- !query 13 +SELECT a AS k, COUNT(b) FILTER (WHERE b = 1 OR b = 2) FROM testData GROUP BY k +-- !query 13 schema +struct +-- !query 13 output +1 2 +2 2 +3 2 +NULL 1 + + +-- !query 14 +SELECT a AS k, COUNT(b) FILTER (WHERE NOT b < 0) FROM testData GROUP BY k HAVING k > 1 +-- !query 14 schema +struct +-- !query 14 output +2 2 +3 2 + + +-- !query 15 +SELECT COUNT(b) FILTER (WHERE a > 0) AS k FROM testData GROUP BY k +-- !query 15 schema +struct<> +-- !query 15 output +org.apache.spark.sql.AnalysisException +aggregate functions are not allowed in GROUP BY, but found count(testdata.`b`); + + +-- !query 16 +SELECT a AS k, COUNT(b) FILTER (WHERE b > 0) FROM testData GROUP BY k +-- !query 16 schema +struct +-- !query 16 output +1 2 +2 2 +3 2 +NULL 1 + + +-- !query 17 +SELECT a, COUNT(1) FILTER (WHERE b > 1) FROM testData WHERE false GROUP BY a +-- !query 17 schema +struct +-- !query 17 output + + + +-- !query 18 +SELECT COUNT(1) FILTER (WHERE b = 2) FROM testData WHERE false +-- !query 18 schema +struct +-- !query 18 output +0 + + +-- !query 19 +SELECT 1 FROM (SELECT COUNT(1) FILTER (WHERE a >= 3 OR b <= 1) FROM testData WHERE false) t +-- !query 19 schema +struct<1:int> +-- !query 19 output +1 + + +-- !query 20 +CREATE TEMPORARY VIEW EMP AS SELECT * FROM VALUES + (100, "emp 1", date "2005-01-01", 100.00D, 10), + (100, "emp 1", date "2005-01-01", 100.00D, 10), + (200, "emp 2", date "2003-01-01", 200.00D, 10), + (300, "emp 3", date "2002-01-01", 300.00
[GitHub] [spark] beliefer commented on a change in pull request #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression
beliefer commented on a change in pull request #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression URL: https://github.com/apache/spark/pull/26656#discussion_r351614466 ## File path: sql/core/src/test/resources/sql-tests/results/group-by-filter.sql.out ## @@ -0,0 +1,332 @@ +-- Automatically generated by SQLQueryTestSuite +-- Number of queries: 27 + + +-- !query 0 +CREATE OR REPLACE TEMPORARY VIEW testData AS SELECT * FROM VALUES +(1, 1), (1, 2), (2, 1), (2, 2), (3, 1), (3, 2), (null, 1), (3, null), (null, null) +AS testData(a, b) +-- !query 0 schema +struct<> +-- !query 0 output + + + +-- !query 1 +SELECT a, COUNT(b) FILTER (WHERE a >= 2) FROM testData +-- !query 1 schema +struct<> +-- !query 1 output +org.apache.spark.sql.AnalysisException +grouping expressions sequence is empty, and 'testdata.`a`' is not an aggregate function. Wrap '(count(testdata.`b`) AS `count(b)`)' in windowing function(s) or wrap 'testdata.`a`' in first() (or first_value) if you don't care which value you get.; + + +-- !query 2 +SELECT COUNT(a) FILTER (WHERE a = 1), COUNT(b) FILTER (WHERE a > 1) FROM testData +-- !query 2 schema +struct +-- !query 2 output +2 4 + + +-- !query 3 +SELECT a, COUNT(b) FILTER (WHERE a >= 2) FROM testData GROUP BY a +-- !query 3 schema +struct +-- !query 3 output +1 0 +2 2 +3 2 +NULL 0 + + +-- !query 4 +SELECT a, COUNT(b) FILTER (WHERE a != 2) FROM testData GROUP BY b +-- !query 4 schema +struct<> +-- !query 4 output +org.apache.spark.sql.AnalysisException +expression 'testdata.`a`' is neither present in the group by, nor is it an aggregate function.
Add to group by or wrap in first() (or first_value) if you don't care which value you get.; + + +-- !query 5 +SELECT COUNT(a) FILTER (WHERE a >= 0), COUNT(b) FILTER (WHERE a >= 3) FROM testData GROUP BY a +-- !query 5 schema +struct +-- !query 5 output +0 0 +2 0 +2 0 +3 2 + + +-- !query 6 +SELECT 'foo', COUNT(a) FILTER (WHERE b <= 2) FROM testData GROUP BY 1 +-- !query 6 schema +struct +-- !query 6 output +foo6 + + +-- !query 7 +SELECT 'foo', APPROX_COUNT_DISTINCT(a) FILTER (WHERE b >= 0) FROM testData WHERE a = 0 GROUP BY 1 +-- !query 7 schema +struct +-- !query 7 output + + + +-- !query 8 +SELECT 'foo', MAX(STRUCT(a)) FILTER (WHERE b >= 1) FROM testData WHERE a = 0 GROUP BY 1 +-- !query 8 schema +struct> +-- !query 8 output + + + +-- !query 9 +SELECT a + b, COUNT(b) FILTER (WHERE b >= 2) FROM testData GROUP BY a + b +-- !query 9 schema +struct<(a + b):int,count(b):bigint> +-- !query 9 output +2 0 +3 1 +4 1 +5 1 +NULL 0 + + +-- !query 10 +SELECT a + 2, COUNT(b) FILTER (WHERE b IN (1, 2)) FROM testData GROUP BY a + 1 +-- !query 10 schema +struct<> +-- !query 10 output +org.apache.spark.sql.AnalysisException +expression 'testdata.`a`' is neither present in the group by, nor is it an aggregate function.
Add to group by or wrap in first() (or first_value) if you don't care which value you get.; + + +-- !query 11 +SELECT a + 1 + 1, COUNT(b) FILTER (WHERE b > 0) FROM testData GROUP BY a + 1 +-- !query 11 schema +struct<((a + 1) + 1):int,count(b):bigint> +-- !query 11 output +3 2 +4 2 +5 2 +NULL 1 + + +-- !query 12 +SELECT COUNT(DISTINCT b) FILTER (WHERE b > 0), COUNT(DISTINCT b, c) FILTER (WHERE b > 0 AND c > 2) +FROM (SELECT 1 AS a, 2 AS b, 3 AS c) GROUP BY a +-- !query 12 schema +struct +-- !query 12 output +1 1 + + +-- !query 13 +SELECT a AS k, COUNT(b) FILTER (WHERE b = 1 OR b = 2) FROM testData GROUP BY k +-- !query 13 schema +struct +-- !query 13 output +1 2 +2 2 +3 2 +NULL 1 + + +-- !query 14 +SELECT a AS k, COUNT(b) FILTER (WHERE NOT b < 0) FROM testData GROUP BY k HAVING k > 1 +-- !query 14 schema +struct +-- !query 14 output +2 2 +3 2 + + +-- !query 15 +SELECT COUNT(b) FILTER (WHERE a > 0) AS k FROM testData GROUP BY k +-- !query 15 schema +struct<> +-- !query 15 output +org.apache.spark.sql.AnalysisException +aggregate functions are not allowed in GROUP BY, but found count(testdata.`b`); + + +-- !query 16 +SELECT a AS k, COUNT(b) FILTER (WHERE b > 0) FROM testData GROUP BY k +-- !query 16 schema +struct +-- !query 16 output +1 2 +2 2 +3 2 +NULL 1 + + +-- !query 17 +SELECT a, COUNT(1) FILTER (WHERE b > 1) FROM testData WHERE false GROUP BY a +-- !query 17 schema +struct +-- !query 17 output + + + +-- !query 18 +SELECT COUNT(1) FILTER (WHERE b = 2) FROM testData WHERE false +-- !query 18 schema +struct +-- !query 18 output +0 + + +-- !query 19 +SELECT 1 FROM (SELECT COUNT(1) FILTER (WHERE a >= 3 OR b <= 1) FROM testData WHERE false) t +-- !query 19 schema +struct<1:int> +-- !query 19 output +1 + + +-- !query 20 +CREATE TEMPORARY VIEW EMP AS SELECT * FROM VALUES + (100, "emp 1", date "2005-01-01", 100.00D, 10), + (100, "emp 1", date "2005-01-01", 100.00D, 10), + (200, "emp 2", date "2003-01-01", 200.00D, 10), + (300, "emp 3", date "2002-01-01", 300.
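The FILTER clauses exercised in the test file above have a simple reading: the aggregate only consumes rows satisfying the predicate, and COUNT additionally skips NULL inputs. As a minimal, Spark-free illustration (plain Python, not Spark API; `count_filter` is a hypothetical helper), here is query 2 evaluated over the same testData rows:

```python
# testData(a, b) from the quoted test file; None plays the role of SQL NULL.
data = [(1, 1), (1, 2), (2, 1), (2, 2), (3, 1), (3, 2),
        (None, 1), (3, None), (None, None)]

def count_filter(values_with_pred):
    """COUNT(x) FILTER (WHERE p): count non-NULL x among rows where p holds."""
    return sum(1 for x, p in values_with_pred if p and x is not None)

# SELECT COUNT(a) FILTER (WHERE a = 1), COUNT(b) FILTER (WHERE a > 1) FROM testData
count_a = count_filter((a, a is not None and a == 1) for a, b in data)
count_b = count_filter((b, a is not None and a > 1) for a, b in data)
print(count_a, count_b)  # prints: 2 4, matching query 2's expected output
```

Note that the row (3, None) satisfies a > 1 but contributes nothing to COUNT(b), since COUNT ignores NULL inputs even after the filter has admitted the row.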
[GitHub] [spark] wangshuo128 commented on issue #26674: [SPARK-30059][CORE]Stop AsyncEventQueue when interrupted in dispatch
wangshuo128 commented on issue #26674: [SPARK-30059][CORE] Stop AsyncEventQueue when interrupted in dispatch URL: https://github.com/apache/spark/pull/26674#issuecomment-559363397 > do you know what version of hadoop you are on? My Hadoop version is 2.7.1.
[GitHub] [spark] AmplabJenkins removed a comment on issue #26700: [SPARK-30065][SQL] DataFrameNaFunctions.drop should handle duplicate columns
AmplabJenkins removed a comment on issue #26700: [SPARK-30065][SQL] DataFrameNaFunctions.drop should handle duplicate columns URL: https://github.com/apache/spark/pull/26700#issuecomment-559356089 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/19391/ Test PASSed.
[GitHub] [spark] AmplabJenkins removed a comment on issue #26700: [SPARK-30065][SQL] DataFrameNaFunctions.drop should handle duplicate columns
AmplabJenkins removed a comment on issue #26700: [SPARK-30065][SQL] DataFrameNaFunctions.drop should handle duplicate columns URL: https://github.com/apache/spark/pull/26700#issuecomment-559356079 Merged build finished. Test PASSed.
[GitHub] [spark] SparkQA commented on issue #26700: [SPARK-30065][SQL] DataFrameNaFunctions.drop should handle duplicate columns
SparkQA commented on issue #26700: [SPARK-30065][SQL] DataFrameNaFunctions.drop should handle duplicate columns URL: https://github.com/apache/spark/pull/26700#issuecomment-559357603 **[Test build #114563 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114563/testReport)** for PR 26700 at commit [`0c2dc77`](https://github.com/apache/spark/commit/0c2dc777cf59dcc8e0ea2f2787e9a5a6d650769d).
[GitHub] [spark] AmplabJenkins commented on issue #26700: [SPARK-30065][SQL] DataFrameNaFunctions.drop should handle duplicate columns
AmplabJenkins commented on issue #26700: [SPARK-30065][SQL] DataFrameNaFunctions.drop should handle duplicate columns URL: https://github.com/apache/spark/pull/26700#issuecomment-559356079 Merged build finished. Test PASSed.
[GitHub] [spark] AmplabJenkins commented on issue #26700: [SPARK-30065][SQL] DataFrameNaFunctions.drop should handle duplicate columns
AmplabJenkins commented on issue #26700: [SPARK-30065][SQL] DataFrameNaFunctions.drop should handle duplicate columns URL: https://github.com/apache/spark/pull/26700#issuecomment-559356089 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/19391/ Test PASSed.
[GitHub] [spark] imback82 commented on issue #26700: [SPARK-30065][SQL] DataFrameNaFunctions.drop should handle duplicate columns
imback82 commented on issue #26700: [SPARK-30065][SQL] DataFrameNaFunctions.drop should handle duplicate columns URL: https://github.com/apache/spark/pull/26700#issuecomment-559355795 cc: @cloud-fan
[GitHub] [spark] imback82 opened a new pull request #26700: [SPARK-30065][SQL] DataFrameNaFunctions.drop should handle duplicate columns
imback82 opened a new pull request #26700: [SPARK-30065][SQL] DataFrameNaFunctions.drop should handle duplicate columns URL: https://github.com/apache/spark/pull/26700

### What changes were proposed in this pull request?

`DataFrameNaFunctions.drop` doesn't handle duplicate columns even when column names are not specified.
```Scala
val left = Seq(("1", null), ("3", "4")).toDF("col1", "col2")
val right = Seq(("1", "2"), ("3", null)).toDF("col1", "col2")
val df = left.join(right, Seq("col1"))
df.printSchema
df.na.drop("any").show
```
produces
```
root
 |-- col1: string (nullable = true)
 |-- col2: string (nullable = true)
 |-- col2: string (nullable = true)

org.apache.spark.sql.AnalysisException: Reference 'col2' is ambiguous, could be: col2, col2.;
  at org.apache.spark.sql.catalyst.expressions.package$AttributeSeq.resolve(package.scala:240)
```
The reason for the above failure is that columns are resolved by name, and if there are multiple columns with the same name, resolution fails due to ambiguity. This PR updates `DataFrameNaFunctions.drop` so that, if the columns to drop are not specified, it resolves the ambiguity gracefully by applying `drop` to all the eligible columns. (Note that if the user specifies the columns, it will still fail due to ambiguity.)

### Why are the changes needed?

If column names are not specified, `drop` should not fail due to ambiguity, since it can still apply `drop` to the eligible columns.

### Does this PR introduce any user-facing change?

Yes, now all the rows with nulls are dropped in the above example:
```
scala> df.na.drop("any").show
+----+----+----+
|col1|col2|col2|
+----+----+----+
+----+----+----+
```

### How was this patch tested?

Added new unit tests.
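The semantics the PR describes can be sketched in plain Python (a hypothetical mini-model, not Spark code; `na_drop_any` is an illustrative helper name): when no columns are specified, rows can be filtered by scanning column *positions*, so duplicate column names never need to be resolved by name.

```python
# Minimal model of na.drop("any") over rows whose schema contains
# duplicate column names, e.g. ("col1", "col2", "col2"): because we
# iterate cells positionally, no name lookup (and thus no ambiguity)
# ever occurs.
def na_drop_any(rows):
    # keep a row only if none of its cells is null
    return [row for row in rows if all(cell is not None for cell in row)]

# joined rows from the PR's example: (col1, left.col2, right.col2)
rows = [("1", None, "2"), ("3", "4", None)]
print(na_drop_any(rows))  # every row contains a null, so nothing survives: []
```

This mirrors the user-facing change above: with duplicate `col2` columns, both rows carry a null somewhere, so `drop("any")` leaves an empty result instead of raising an ambiguity error.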
[GitHub] [spark] gengliangwang commented on a change in pull request #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI
gengliangwang commented on a change in pull request #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI URL: https://github.com/apache/spark/pull/26378#discussion_r351603262 ## File path: sql/core/src/main/scala/org/apache/spark/sql/execution/ui/SQLHistoryServerPlugin.scala ## @@ -33,4 +33,12 @@ class SQLHistoryServerPlugin extends AppHistoryServerPlugin { new SQLTab(sqlStatusStore, ui) } } + + override def displayOrder: Int = { Review comment: I think we can just go with: ``` override def displayOrder: Int = 0 ```
[GitHub] [spark] gengliangwang commented on a change in pull request #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI
gengliangwang commented on a change in pull request #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI URL: https://github.com/apache/spark/pull/26378#discussion_r351603169 ## File path: core/src/main/scala/org/apache/spark/status/AppHistoryServerPlugin.scala ## @@ -35,4 +35,11 @@ private[spark] trait AppHistoryServerPlugin { * Sets up UI of this plugin to rebuild the history UI. */ def setupUI(ui: SparkUI): Unit + + /** + * Order of the plugin tab that need to display in the history UI. + */ + def displayOrder: Int = { Review comment: def displayOrder: Int = Integer.MAX_VALUE
[GitHub] [spark] gengliangwang commented on a change in pull request #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI
gengliangwang commented on a change in pull request #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI URL: https://github.com/apache/spark/pull/26378#discussion_r351603126 ## File path: core/src/main/scala/org/apache/spark/status/AppHistoryServerPlugin.scala ## @@ -35,4 +35,11 @@ private[spark] trait AppHistoryServerPlugin { * Sets up UI of this plugin to rebuild the history UI. */ def setupUI(ui: SparkUI): Unit + + /** + * Order of the plugin tab that need to display in the history UI. Review comment: The position of a plugin tab relative to the other plugin tabs in the history UI.
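The ordering scheme discussed in these review comments can be sketched in plain Python (a hypothetical model, not Spark's actual implementation; `Plugin` and `tab_order` are illustrative names): each plugin exposes a display order, defaulting to a large sentinel so plugins that don't care sort last, and the suggested `Integer.MAX_VALUE` default plays exactly that role.

```python
# Sketch of ordering history-server plugin tabs by a displayOrder-like
# field. Plugins without an explicit order get a large sentinel
# (analogous to Integer.MAX_VALUE in the review) and therefore sort
# after plugins that opted in; ties keep registration order because
# Python's sort is stable.
MAX_ORDER = 2**31 - 1  # stand-in for Java's Integer.MAX_VALUE

class Plugin:
    def __init__(self, name, display_order=MAX_ORDER):
        self.name = name
        self.display_order = display_order

def tab_order(plugins):
    return [p.name for p in sorted(plugins, key=lambda p: p.display_order)]

plugins = [Plugin("env"), Plugin("sql", display_order=0), Plugin("jdbc", display_order=1)]
print(tab_order(plugins))  # ['sql', 'jdbc', 'env']
```

This is why the default matters: a plugin that overrides nothing should never displace a tab that explicitly asked to be first.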
[GitHub] [spark] iRakson commented on a change in pull request #26467: [SPARK-29477]Improve tooltip for Streaming tab
iRakson commented on a change in pull request #26467: [SPARK-29477]Improve tooltip for Streaming tab URL: https://github.com/apache/spark/pull/26467#discussion_r351601698 ## File path: streaming/src/main/scala/org/apache/spark/streaming/ui/BatchPage.scala ## @@ -37,10 +37,14 @@ private[ui] class BatchPage(parent: StreamingTab) extends WebUIPage("batch") { private def columns: Seq[Node] = { Output Op Id Description - Output Op Duration + Output Op Duration {SparkUIUtils.tooltip("Time taken for all the jobs of this batch to" + Review comment: Output Op duration is the time taken for all the jobs of that batch to finish processing, measured from the time they were submitted. So it equals the scheduling delay (time to schedule the first job of the batch) plus the processing delay (time from the start of processing of the first job until the end of processing of the last job of the batch). While job duration describes a single job, Output Op duration describes the entire batch.
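The identity described in that review comment can be checked with toy numbers (hypothetical timestamps, not Spark APIs): measuring everything from batch submission, the output op duration decomposes exactly into scheduling delay plus processing delay.

```python
# Toy timestamps for one batch: submitted at t=0, first job starts at
# t=1.5 (scheduling delay), last job finishes at t=10 (end of batch).
submitted, first_job_start, last_job_end = 0.0, 1.5, 10.0

scheduling_delay = first_job_start - submitted     # time to schedule the first job
processing_delay = last_job_end - first_job_start  # first job start -> last job end
output_op_duration = last_job_end - submitted      # whole-batch duration

# The decomposition holds by construction:
assert output_op_duration == scheduling_delay + processing_delay
print(output_op_duration)  # 10.0
```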
[GitHub] [spark] maropu commented on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types
maropu commented on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types URL: https://github.com/apache/spark/pull/26699#issuecomment-559351251 Should the title be `Support cache for interval data`?
[GitHub] [spark] maropu commented on a change in pull request #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression
maropu commented on a change in pull request #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression URL: https://github.com/apache/spark/pull/26656#discussion_r351598109 ## File path: sql/core/src/test/resources/sql-tests/results/group-by-filter.sql.out ## @@ -0,0 +1,332 @@
```
+-- Automatically generated by SQLQueryTestSuite
+-- Number of queries: 27
+
+
+-- !query 0
+CREATE OR REPLACE TEMPORARY VIEW testData AS SELECT * FROM VALUES
+(1, 1), (1, 2), (2, 1), (2, 2), (3, 1), (3, 2), (null, 1), (3, null), (null, null)
+AS testData(a, b)
+-- !query 0 schema
+struct<>
+-- !query 0 output
+
+
+
+-- !query 1
+SELECT a, COUNT(b) FILTER (WHERE a >= 2) FROM testData
+-- !query 1 schema
+struct<>
+-- !query 1 output
+org.apache.spark.sql.AnalysisException
+grouping expressions sequence is empty, and 'testdata.`a`' is not an aggregate function. Wrap '(count(testdata.`b`) AS `count(b)`)' in windowing function(s) or wrap 'testdata.`a`' in first() (or first_value) if you don't care which value you get.;
+
+
+-- !query 2
+SELECT COUNT(a) FILTER (WHERE a = 1), COUNT(b) FILTER (WHERE a > 1) FROM testData
+-- !query 2 schema
+struct
+-- !query 2 output
+2 4
+
+
+-- !query 3
+SELECT a, COUNT(b) FILTER (WHERE a >= 2) FROM testData GROUP BY a
+-- !query 3 schema
+struct
+-- !query 3 output
+1 0
+2 2
+3 2
+NULL 0
+
+
+-- !query 4
+SELECT a, COUNT(b) FILTER (WHERE a != 2) FROM testData GROUP BY b
+-- !query 4 schema
+struct<>
+-- !query 4 output
+org.apache.spark.sql.AnalysisException
+expression 'testdata.`a`' is neither present in the group by, nor is it an aggregate function. Add to group by or wrap in first() (or first_value) if you don't care which value you get.;
+
+
+-- !query 5
+SELECT COUNT(a) FILTER (WHERE a >= 0), COUNT(b) FILTER (WHERE a >= 3) FROM testData GROUP BY a
+-- !query 5 schema
+struct
+-- !query 5 output
+0 0
+2 0
+2 0
+3 2
+
+
+-- !query 6
+SELECT 'foo', COUNT(a) FILTER (WHERE b <= 2) FROM testData GROUP BY 1
+-- !query 6 schema
+struct
+-- !query 6 output
+foo 6
+
+
+-- !query 7
+SELECT 'foo', APPROX_COUNT_DISTINCT(a) FILTER (WHERE b >= 0) FROM testData WHERE a = 0 GROUP BY 1
+-- !query 7 schema
+struct
+-- !query 7 output
+
+
+
+-- !query 8
+SELECT 'foo', MAX(STRUCT(a)) FILTER (WHERE b >= 1) FROM testData WHERE a = 0 GROUP BY 1
+-- !query 8 schema
+struct>
+-- !query 8 output
+
+
+
+-- !query 9
+SELECT a + b, COUNT(b) FILTER (WHERE b >= 2) FROM testData GROUP BY a + b
+-- !query 9 schema
+struct<(a + b):int,count(b):bigint>
+-- !query 9 output
+2 0
+3 1
+4 1
+5 1
+NULL 0
+
+
+-- !query 10
+SELECT a + 2, COUNT(b) FILTER (WHERE b IN (1, 2)) FROM testData GROUP BY a + 1
+-- !query 10 schema
+struct<>
+-- !query 10 output
+org.apache.spark.sql.AnalysisException
+expression 'testdata.`a`' is neither present in the group by, nor is it an aggregate function. Add to group by or wrap in first() (or first_value) if you don't care which value you get.;
+
+
+-- !query 11
+SELECT a + 1 + 1, COUNT(b) FILTER (WHERE b > 0) FROM testData GROUP BY a + 1
+-- !query 11 schema
+struct<((a + 1) + 1):int,count(b):bigint>
+-- !query 11 output
+3 2
+4 2
+5 2
+NULL 1
+
+
+-- !query 12
+SELECT COUNT(DISTINCT b) FILTER (WHERE b > 0), COUNT(DISTINCT b, c) FILTER (WHERE b > 0 AND c > 2)
+FROM (SELECT 1 AS a, 2 AS b, 3 AS c) GROUP BY a
+-- !query 12 schema
+struct
+-- !query 12 output
+1 1
+
+
+-- !query 13
+SELECT a AS k, COUNT(b) FILTER (WHERE b = 1 OR b = 2) FROM testData GROUP BY k
+-- !query 13 schema
+struct
+-- !query 13 output
+1 2
+2 2
+3 2
+NULL 1
+
+
+-- !query 14
+SELECT a AS k, COUNT(b) FILTER (WHERE NOT b < 0) FROM testData GROUP BY k HAVING k > 1
+-- !query 14 schema
+struct
+-- !query 14 output
+2 2
+3 2
+
+
+-- !query 15
+SELECT COUNT(b) FILTER (WHERE a > 0) AS k FROM testData GROUP BY k
+-- !query 15 schema
+struct<>
+-- !query 15 output
+org.apache.spark.sql.AnalysisException
+aggregate functions are not allowed in GROUP BY, but found count(testdata.`b`);
+
+
+-- !query 16
+SELECT a AS k, COUNT(b) FILTER (WHERE b > 0) FROM testData GROUP BY k
+-- !query 16 schema
+struct
+-- !query 16 output
+1 2
+2 2
+3 2
+NULL 1
+
+
+-- !query 17
+SELECT a, COUNT(1) FILTER (WHERE b > 1) FROM testData WHERE false GROUP BY a
+-- !query 17 schema
+struct
+-- !query 17 output
+
+
+
+-- !query 18
+SELECT COUNT(1) FILTER (WHERE b = 2) FROM testData WHERE false
+-- !query 18 schema
+struct
+-- !query 18 output
+0
+
+
+-- !query 19
+SELECT 1 FROM (SELECT COUNT(1) FILTER (WHERE a >= 3 OR b <= 1) FROM testData WHERE false) t
+-- !query 19 schema
+struct<1:int>
+-- !query 19 output
+1
+
+
+-- !query 20
+CREATE TEMPORARY VIEW EMP AS SELECT * FROM VALUES
+ (100, "emp 1", date "2005-01-01", 100.00D, 10),
+ (100, "emp 1", date "2005-01-01", 100.00D, 10),
+ (200, "emp 2", date "2003-01-01", 200.00D, 10),
+ (300, "emp 3", date "2002-01-01", 300.00
```
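The `FILTER (WHERE ...)` semantics exercised in the test file above can be mimicked in plain Python (an illustrative sketch, not Spark code; `count_filter` is a hypothetical helper): the aggregate only folds in rows that satisfy the predicate, while non-matching rows still belong to their group.

```python
# Sketch of SQL's  COUNT(x) FILTER (WHERE pred)  over grouped rows:
# rows failing the predicate don't contribute to the count, but their
# group still appears in the result (possibly with count 0), and NULL
# values are never counted, matching COUNT's usual semantics.
from collections import defaultdict

def count_filter(rows, key, value, pred):
    groups = defaultdict(int)
    for row in rows:
        groups[key(row)]  # touch the group so it exists even with no matches
        if pred(row) and value(row) is not None:
            groups[key(row)] += 1
    return dict(groups)

# testData(a, b) from the test file, as (a, b) tuples
data = [(1, 1), (1, 2), (2, 1), (2, 2), (3, 1), (3, 2),
        (None, 1), (3, None), (None, None)]
# SELECT a, COUNT(b) FILTER (WHERE a >= 2) FROM testData GROUP BY a
result = count_filter(data, key=lambda r: r[0], value=lambda r: r[1],
                      pred=lambda r: r[0] is not None and r[0] >= 2)
print(result)  # {1: 0, 2: 2, 3: 2, None: 0}
```

The printed result reproduces query 3's output above: group `3` counts only two rows because `(3, NULL)` has a null `b`, and groups `1` and `NULL` survive with count 0.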
[GitHub] [spark] maropu commented on issue #26654: [SPARK-30009][CORE][SQL] Support different floating-point Ordering for Scala 2.12 / 2.13
maropu commented on issue #26654: [SPARK-30009][CORE][SQL] Support different floating-point Ordering for Scala 2.12 / 2.13 URL: https://github.com/apache/spark/pull/26654#issuecomment-559347876 The ANSI SQL standard seems to define the order `-Infinity` < `1.0` < `Infinity` < `NaN` = `NaN`. PgSQL follows [that order](https://www.postgresql.org/docs/current/datatype-numeric.html#DATATYPE-FLOAT) now:
```
IEEE754 specifies that NaN should not compare equal to any other floating-point value
(including NaN). In order to allow floating-point values to be sorted and used in
tree-based indexes, PostgreSQL treats NaN values as equal, and greater than all
non-NaN values.
```
```
postgres=# insert into t values ('-NaN'), ('-Infinity'), ('+Infinity'), ('+NaN'), ('1.0');
INSERT 0 5
postgres=# select * from t;
     v
-----------
 NaN
 -Infinity
 Infinity
 NaN
 1
(5 rows)

postgres=# select * from t order by v;
     v
-----------
 -Infinity
 1
 Infinity
 NaN
 NaN
(5 rows)
```
Oracle and Spark currently follow this, too. So, in terms of SQL behaviour, the most important thing, I think, is to keep this order when switching from 2.12 to 2.13.
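The total order described here (NaN equal to itself and greater than every other value) can be reproduced in plain Python, where a bare `sorted()` on floats misbehaves because `nan` compares false against everything; a sort key that ranks NaNs after all other values is a minimal sketch of the SQL ordering:

```python
import math

# Sort floats with the SQL/PostgreSQL total order: non-NaN values in
# numeric order first, then NaN. The key (is_nan, value) pushes every
# NaN into a separate, final bucket, and equal keys keep their relative
# order because Python's sort is stable, so NaNs behave as "equal".
def sql_sort(xs):
    return sorted(xs, key=lambda v: (math.isnan(v), v))

vals = [float("nan"), float("-inf"), float("inf"), 1.0]
print(sql_sort(vals))  # [-inf, 1.0, inf, nan]
```

This matches the `ORDER BY` output quoted from PostgreSQL: `-Infinity`, `1`, `Infinity`, then the NaNs.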
[GitHub] [spark] dongjoon-hyun edited a comment on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column
dongjoon-hyun edited a comment on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column URL: https://github.com/apache/spark/pull/26697#issuecomment-559345974 `-Phadoop-3.2 -Phive-2.3` seems to fail for a different reason first.
```
[info] Building Spark using SBT with these arguments: -Phadoop-3.2 -Phive-2.3 -Pspark-ganglia-lgpl -Pyarn -Pkubernetes -Pmesos -Phadoop-cloud -Phive -Phive-thriftserver -Pkinesis-asl test:package streaming-kinesis-asl-assembly/assembly
```
```
org.mockito.exceptions.base.MockitoException:
ClassCastException occurred while creating the mockito mock :
  class to mock : 'javax.servlet.http.HttpServletRequest', loaded by classloader : 'sun.misc.Launcher$AppClassLoader@490d6c15'
  created class : 'org.mockito.codegen.HttpServletRequest$MockitoMock$254323811', loaded by classloader : 'net.bytebuddy.dynamic.loading.MultipleParentClassLoader@6aed7392'
  proxy instance class : 'org.mockito.codegen.HttpServletRequest$MockitoMock$254323811', loaded by classloader : 'net.bytebuddy.dynamic.loading.MultipleParentClassLoader@6aed7392'
  instance creation by : ObjenesisInstantiator

You might experience classloading issues, please ask the mockito mailing-list.
```
I saw the exact same failure in another PR, too. Let's re-trigger after midnight (PST).
[GitHub] [spark] dongjoon-hyun commented on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column
dongjoon-hyun commented on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column URL: https://github.com/apache/spark/pull/26697#issuecomment-559345974 `-Phadoop-3.2 -Phive-2.3` seems to fail for a different reason first.
```
[info] Building Spark using SBT with these arguments: -Phadoop-3.2 -Phive-2.3 -Pspark-ganglia-lgpl -Pyarn -Pkubernetes -Pmesos -Phadoop-cloud -Phive -Phive-thriftserver -Pkinesis-asl test:package streaming-kinesis-asl-assembly/assembly
```
```
org.mockito.exceptions.base.MockitoException:
ClassCastException occurred while creating the mockito mock :
  class to mock : 'javax.servlet.http.HttpServletRequest', loaded by classloader : 'sun.misc.Launcher$AppClassLoader@490d6c15'
  created class : 'org.mockito.codegen.HttpServletRequest$MockitoMock$254323811', loaded by classloader : 'net.bytebuddy.dynamic.loading.MultipleParentClassLoader@6aed7392'
  proxy instance class : 'org.mockito.codegen.HttpServletRequest$MockitoMock$254323811', loaded by classloader : 'net.bytebuddy.dynamic.loading.MultipleParentClassLoader@6aed7392'
  instance creation by : ObjenesisInstantiator

You might experience classloading issues, please ask the mockito mailing-list.
```
[GitHub] [spark] dongjoon-hyun edited a comment on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column
dongjoon-hyun edited a comment on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column URL: https://github.com/apache/spark/pull/26697#issuecomment-559345974 `-Phadoop-3.2 -Phive-2.3` seems to fail for a different reason first.
```
[info] Building Spark using SBT with these arguments: -Phadoop-3.2 -Phive-2.3 -Pspark-ganglia-lgpl -Pyarn -Pkubernetes -Pmesos -Phadoop-cloud -Phive -Phive-thriftserver -Pkinesis-asl test:package streaming-kinesis-asl-assembly/assembly
```
```
org.mockito.exceptions.base.MockitoException:
ClassCastException occurred while creating the mockito mock :
  class to mock : 'javax.servlet.http.HttpServletRequest', loaded by classloader : 'sun.misc.Launcher$AppClassLoader@490d6c15'
  created class : 'org.mockito.codegen.HttpServletRequest$MockitoMock$254323811', loaded by classloader : 'net.bytebuddy.dynamic.loading.MultipleParentClassLoader@6aed7392'
  proxy instance class : 'org.mockito.codegen.HttpServletRequest$MockitoMock$254323811', loaded by classloader : 'net.bytebuddy.dynamic.loading.MultipleParentClassLoader@6aed7392'
  instance creation by : ObjenesisInstantiator

You might experience classloading issues, please ask the mockito mailing-list.
```
I saw the exact same failure in another PR, too.
[GitHub] [spark] AmplabJenkins removed a comment on issue #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI
AmplabJenkins removed a comment on issue #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI URL: https://github.com/apache/spark/pull/26378#issuecomment-559344895 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/114558/
[GitHub] [spark] AmplabJenkins commented on issue #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI
AmplabJenkins commented on issue #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI URL: https://github.com/apache/spark/pull/26378#issuecomment-559344890 Merged build finished. Test PASSed.
[GitHub] [spark] AmplabJenkins removed a comment on issue #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI
AmplabJenkins removed a comment on issue #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI URL: https://github.com/apache/spark/pull/26378#issuecomment-559344890 Merged build finished. Test PASSed.
[GitHub] [spark] dongjoon-hyun closed pull request #26695: [SPARK-29991][INFRA] Support `test-hive1.2` in PR Builder
dongjoon-hyun closed pull request #26695: [SPARK-29991][INFRA] Support `test-hive1.2` in PR Builder URL: https://github.com/apache/spark/pull/26695
[GitHub] [spark] AmplabJenkins commented on issue #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI
AmplabJenkins commented on issue #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI URL: https://github.com/apache/spark/pull/26378#issuecomment-559344895 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/114558/
[GitHub] [spark] AmplabJenkins removed a comment on issue #26416: [SPARK-29779][CORE] Compact old event log files and cleanup
AmplabJenkins removed a comment on issue #26416: [SPARK-29779][CORE] Compact old event log files and cleanup URL: https://github.com/apache/spark/pull/26416#issuecomment-559344644 Merged build finished. Test PASSed.
[GitHub] [spark] AmplabJenkins removed a comment on issue #26416: [SPARK-29779][CORE] Compact old event log files and cleanup
AmplabJenkins removed a comment on issue #26416: [SPARK-29779][CORE] Compact old event log files and cleanup URL: https://github.com/apache/spark/pull/26416#issuecomment-559344646 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/114559/
[GitHub] [spark] SparkQA removed a comment on issue #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI
SparkQA removed a comment on issue #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI URL: https://github.com/apache/spark/pull/26378#issuecomment-559320853 **[Test build #114558 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114558/testReport)** for PR 26378 at commit [`2bc86c5`](https://github.com/apache/spark/commit/2bc86c503cd93993fa5ef2dee30443472895908b).
[GitHub] [spark] AmplabJenkins commented on issue #26416: [SPARK-29779][CORE] Compact old event log files and cleanup
AmplabJenkins commented on issue #26416: [SPARK-29779][CORE] Compact old event log files and cleanup URL: https://github.com/apache/spark/pull/26416#issuecomment-559344646 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/114559/
[GitHub] [spark] dongjoon-hyun commented on issue #26695: [SPARK-29991][INFRA] Support `test-hive1.2` in PR Builder
dongjoon-hyun commented on issue #26695: [SPARK-29991][INFRA] Support `test-hive1.2` in PR Builder URL: https://github.com/apache/spark/pull/26695#issuecomment-559344810 Thank you for the review and approval, @srowen. Yes, right. After this PR, the remaining thing is SPARK-29988 to protect `hive-1.2` as a side profile.
[GitHub] [spark] AmplabJenkins commented on issue #26416: [SPARK-29779][CORE] Compact old event log files and cleanup
AmplabJenkins commented on issue #26416: [SPARK-29779][CORE] Compact old event log files and cleanup URL: https://github.com/apache/spark/pull/26416#issuecomment-559344644 Merged build finished. Test PASSed.
[GitHub] [spark] dongjoon-hyun commented on issue #26695: [SPARK-29991][INFRA] Support `test-hive1.2` in PR Builder
dongjoon-hyun commented on issue #26695: [SPARK-29991][INFRA] Support `test-hive1.2` in PR Builder URL: https://github.com/apache/spark/pull/26695#issuecomment-559344840 Merged to master.
[GitHub] [spark] SparkQA commented on issue #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI
SparkQA commented on issue #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI URL: https://github.com/apache/spark/pull/26378#issuecomment-559344518 **[Test build #114558 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114558/testReport)** for PR 26378 at commit [`2bc86c5`](https://github.com/apache/spark/commit/2bc86c503cd93993fa5ef2dee30443472895908b).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.
[GitHub] [spark] SparkQA removed a comment on issue #26416: [SPARK-29779][CORE] Compact old event log files and cleanup
SparkQA removed a comment on issue #26416: [SPARK-29779][CORE] Compact old event log files and cleanup URL: https://github.com/apache/spark/pull/26416#issuecomment-559322497 **[Test build #114559 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114559/testReport)** for PR 26416 at commit [`e5d9250`](https://github.com/apache/spark/commit/e5d925025a606cbb5c365303149272900f255e33).
[GitHub] [spark] SparkQA commented on issue #26416: [SPARK-29779][CORE] Compact old event log files and cleanup
SparkQA commented on issue #26416: [SPARK-29779][CORE] Compact old event log files and cleanup URL: https://github.com/apache/spark/pull/26416#issuecomment-559344295 **[Test build #114559 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114559/testReport)** for PR 26416 at commit [`e5d9250`](https://github.com/apache/spark/commit/e5d925025a606cbb5c365303149272900f255e33). * This patch passes all tests. * This patch merges cleanly. * This patch adds no public classes.
[GitHub] [spark] AmplabJenkins removed a comment on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column
AmplabJenkins removed a comment on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column URL: https://github.com/apache/spark/pull/26697#issuecomment-559342085 Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/114557/ Test FAILed.
[GitHub] [spark] AmplabJenkins commented on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column
AmplabJenkins commented on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column URL: https://github.com/apache/spark/pull/26697#issuecomment-559342085 Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/114557/ Test FAILed.
[GitHub] [spark] AmplabJenkins commented on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column
AmplabJenkins commented on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column URL: https://github.com/apache/spark/pull/26697#issuecomment-559342077 Merged build finished. Test FAILed.
[GitHub] [spark] AmplabJenkins removed a comment on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column
AmplabJenkins removed a comment on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column URL: https://github.com/apache/spark/pull/26697#issuecomment-559342077 Merged build finished. Test FAILed.
[GitHub] [spark] SparkQA removed a comment on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column
SparkQA removed a comment on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column URL: https://github.com/apache/spark/pull/26697#issuecomment-559303990 **[Test build #114557 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114557/testReport)** for PR 26697 at commit [`23bef9c`](https://github.com/apache/spark/commit/23bef9cb752cc9a83eccd8e69a49eb57bdbd91fa).
[GitHub] [spark] SparkQA commented on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column
SparkQA commented on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column URL: https://github.com/apache/spark/pull/26697#issuecomment-559341878 **[Test build #114557 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114557/testReport)** for PR 26697 at commit [`23bef9c`](https://github.com/apache/spark/commit/23bef9cb752cc9a83eccd8e69a49eb57bdbd91fa). * This patch **fails Spark unit tests**. * This patch merges cleanly. * This patch adds no public classes.
[GitHub] [spark] beliefer commented on a change in pull request #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression
beliefer commented on a change in pull request #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression URL: https://github.com/apache/spark/pull/26656#discussion_r351590672 ## File path: sql/core/src/test/resources/sql-tests/results/group-by-filter.sql.out ## @@ -0,0 +1,332 @@ +-- Automatically generated by SQLQueryTestSuite +-- Number of queries: 27 + + +-- !query 0 +CREATE OR REPLACE TEMPORARY VIEW testData AS SELECT * FROM VALUES +(1, 1), (1, 2), (2, 1), (2, 2), (3, 1), (3, 2), (null, 1), (3, null), (null, null) +AS testData(a, b) +-- !query 0 schema +struct<> +-- !query 0 output + + + +-- !query 1 +SELECT a, COUNT(b) FILTER (WHERE a >= 2) FROM testData +-- !query 1 schema +struct<> +-- !query 1 output +org.apache.spark.sql.AnalysisException +grouping expressions sequence is empty, and 'testdata.`a`' is not an aggregate function. Wrap '(count(testdata.`b`) AS `count(b)`)' in windowing function(s) or wrap 'testdata.`a`' in first() (or first_value) if you don't care which value you get.; + + +-- !query 2 +SELECT COUNT(a) FILTER (WHERE a = 1), COUNT(b) FILTER (WHERE a > 1) FROM testData +-- !query 2 schema +struct +-- !query 2 output +2 4 + + +-- !query 3 +SELECT a, COUNT(b) FILTER (WHERE a >= 2) FROM testData GROUP BY a +-- !query 3 schema +struct +-- !query 3 output +1 0 +2 2 +3 2 +NULL 0 + + +-- !query 4 +SELECT a, COUNT(b) FILTER (WHERE a != 2) FROM testData GROUP BY b +-- !query 4 schema +struct<> +-- !query 4 output +org.apache.spark.sql.AnalysisException +expression 'testdata.`a`' is neither present in the group by, nor is it an aggregate function. 
Add to group by or wrap in first() (or first_value) if you don't care which value you get.; + + +-- !query 5 +SELECT COUNT(a) FILTER (WHERE a >= 0), COUNT(b) FILTER (WHERE a >= 3) FROM testData GROUP BY a +-- !query 5 schema +struct +-- !query 5 output +0 0 +2 0 +2 0 +3 2 + + +-- !query 6 +SELECT 'foo', COUNT(a) FILTER (WHERE b <= 2) FROM testData GROUP BY 1 +-- !query 6 schema +struct +-- !query 6 output +foo6 + + +-- !query 7 +SELECT 'foo', APPROX_COUNT_DISTINCT(a) FILTER (WHERE b >= 0) FROM testData WHERE a = 0 GROUP BY 1 +-- !query 7 schema +struct +-- !query 7 output + + + +-- !query 8 +SELECT 'foo', MAX(STRUCT(a)) FILTER (WHERE b >= 1) FROM testData WHERE a = 0 GROUP BY 1 +-- !query 8 schema +struct> +-- !query 8 output + + + +-- !query 9 +SELECT a + b, COUNT(b) FILTER (WHERE b >= 2) FROM testData GROUP BY a + b +-- !query 9 schema +struct<(a + b):int,count(b):bigint> +-- !query 9 output +2 0 +3 1 +4 1 +5 1 +NULL 0 + + +-- !query 10 +SELECT a + 2, COUNT(b) FILTER (WHERE b IN (1, 2)) FROM testData GROUP BY a + 1 +-- !query 10 schema +struct<> +-- !query 10 output +org.apache.spark.sql.AnalysisException +expression 'testdata.`a`' is neither present in the group by, nor is it an aggregate function. 
Add to group by or wrap in first() (or first_value) if you don't care which value you get.; + + +-- !query 11 +SELECT a + 1 + 1, COUNT(b) FILTER (WHERE b > 0) FROM testData GROUP BY a + 1 +-- !query 11 schema +struct<((a + 1) + 1):int,count(b):bigint> +-- !query 11 output +3 2 +4 2 +5 2 +NULL 1 + + +-- !query 12 +SELECT COUNT(DISTINCT b) FILTER (WHERE b > 0), COUNT(DISTINCT b, c) FILTER (WHERE b > 0 AND c > 2) +FROM (SELECT 1 AS a, 2 AS b, 3 AS c) GROUP BY a +-- !query 12 schema +struct +-- !query 12 output +1 1 + + +-- !query 13 +SELECT a AS k, COUNT(b) FILTER (WHERE b = 1 OR b = 2) FROM testData GROUP BY k +-- !query 13 schema +struct +-- !query 13 output +1 2 +2 2 +3 2 +NULL 1 + + +-- !query 14 +SELECT a AS k, COUNT(b) FILTER (WHERE NOT b < 0) FROM testData GROUP BY k HAVING k > 1 +-- !query 14 schema +struct +-- !query 14 output +2 2 +3 2 + + +-- !query 15 +SELECT COUNT(b) FILTER (WHERE a > 0) AS k FROM testData GROUP BY k +-- !query 15 schema +struct<> +-- !query 15 output +org.apache.spark.sql.AnalysisException +aggregate functions are not allowed in GROUP BY, but found count(testdata.`b`); + + +-- !query 16 +SELECT a AS k, COUNT(b) FILTER (WHERE b > 0) FROM testData GROUP BY k +-- !query 16 schema +struct +-- !query 16 output +1 2 +2 2 +3 2 +NULL 1 + + +-- !query 17 +SELECT a, COUNT(1) FILTER (WHERE b > 1) FROM testData WHERE false GROUP BY a +-- !query 17 schema +struct +-- !query 17 output + + + +-- !query 18 +SELECT COUNT(1) FILTER (WHERE b = 2) FROM testData WHERE false +-- !query 18 schema +struct +-- !query 18 output +0 + + +-- !query 19 +SELECT 1 FROM (SELECT COUNT(1) FILTER (WHERE a >= 3 OR b <= 1) FROM testData WHERE false) t +-- !query 19 schema +struct<1:int> +-- !query 19 output +1 + + +-- !query 20 +CREATE TEMPORARY VIEW EMP AS SELECT * FROM VALUES + (100, "emp 1", date "2005-01-01", 100.00D, 10), + (100, "emp 1", date "2005-01-01", 100.00D, 10), + (200, "emp 2", date "2003-01-01", 200.00D, 10), + (300, "emp 3", date "2002-01-01", 300.
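The `FILTER (WHERE ...)` semantics exercised by these generated test results are standard ANSI SQL, not Spark-specific. As an illustrative sketch (using Python's bundled SQLite, which also supports aggregate `FILTER` clauses, rather than Spark itself), query 3 above behaves like this:

```python
import sqlite3

# Same testData fixture as in group-by-filter.sql.out
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE testData (a INTEGER, b INTEGER)")
conn.executemany(
    "INSERT INTO testData VALUES (?, ?)",
    [(1, 1), (1, 2), (2, 1), (2, 2), (3, 1), (3, 2),
     (None, 1), (3, None), (None, None)],
)

# COUNT(b) FILTER (WHERE a >= 2): only rows satisfying the predicate
# contribute to the aggregate; other groups still appear with count 0.
rows = conn.execute(
    "SELECT a, COUNT(b) FILTER (WHERE a >= 2) "
    "FROM testData GROUP BY a ORDER BY a"
).fetchall()
print(rows)
```

This reproduces the expected output of query 3 (`1 0`, `2 2`, `3 2`, `NULL 0`), with SQLite ordering the `NULL` group first.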
[GitHub] [spark] prakharjain09 commented on issue #26569: [SPARK-29938] [SQL] Add batching support in Alter table add partition flow
prakharjain09 commented on issue #26569: [SPARK-29938] [SQL] Add batching support in Alter table add partition flow URL: https://github.com/apache/spark/pull/26569#issuecomment-559337010 Gentle reminder for review. cc - @cloud-fan @dongjoon-hyun @srowen
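The PR above is about splitting a large list of partitions into smaller batches before issuing metastore calls, instead of one huge `ALTER TABLE ... ADD PARTITION` request. The PR's actual code is not quoted here, but the underlying chunking pattern is simple; a minimal sketch, with a hypothetical `batch_size` parameter:

```python
from typing import Iterator, List, TypeVar

T = TypeVar("T")

def batched(items: List[T], batch_size: int) -> Iterator[List[T]]:
    """Yield successive chunks of at most batch_size items."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

# e.g. add 10 partitions in batches of 4 rather than in a single call
partitions = [f"p={i}" for i in range(10)]
batches = list(batched(partitions, 4))
```

Each batch would then be submitted as its own metastore request, bounding the size of any single call.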
[GitHub] [spark] maropu commented on a change in pull request #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression
maropu commented on a change in pull request #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression URL: https://github.com/apache/spark/pull/26656#discussion_r351587438 ## File path: sql/core/src/test/resources/sql-tests/results/group-by-filter.sql.out ## @@ -0,0 +1,332 @@ +-- Automatically generated by SQLQueryTestSuite +-- Number of queries: 27 + + +-- !query 0 +CREATE OR REPLACE TEMPORARY VIEW testData AS SELECT * FROM VALUES +(1, 1), (1, 2), (2, 1), (2, 2), (3, 1), (3, 2), (null, 1), (3, null), (null, null) +AS testData(a, b) +-- !query 0 schema +struct<> +-- !query 0 output + + + +-- !query 1 +SELECT a, COUNT(b) FILTER (WHERE a >= 2) FROM testData +-- !query 1 schema +struct<> +-- !query 1 output +org.apache.spark.sql.AnalysisException +grouping expressions sequence is empty, and 'testdata.`a`' is not an aggregate function. Wrap '(count(testdata.`b`) AS `count(b)`)' in windowing function(s) or wrap 'testdata.`a`' in first() (or first_value) if you don't care which value you get.; + + +-- !query 2 +SELECT COUNT(a) FILTER (WHERE a = 1), COUNT(b) FILTER (WHERE a > 1) FROM testData +-- !query 2 schema +struct +-- !query 2 output +2 4 + + +-- !query 3 +SELECT a, COUNT(b) FILTER (WHERE a >= 2) FROM testData GROUP BY a +-- !query 3 schema +struct +-- !query 3 output +1 0 +2 2 +3 2 +NULL 0 + + +-- !query 4 +SELECT a, COUNT(b) FILTER (WHERE a != 2) FROM testData GROUP BY b +-- !query 4 schema +struct<> +-- !query 4 output +org.apache.spark.sql.AnalysisException +expression 'testdata.`a`' is neither present in the group by, nor is it an aggregate function. 
Add to group by or wrap in first() (or first_value) if you don't care which value you get.; + + +-- !query 5 +SELECT COUNT(a) FILTER (WHERE a >= 0), COUNT(b) FILTER (WHERE a >= 3) FROM testData GROUP BY a +-- !query 5 schema +struct +-- !query 5 output +0 0 +2 0 +2 0 +3 2 + + +-- !query 6 +SELECT 'foo', COUNT(a) FILTER (WHERE b <= 2) FROM testData GROUP BY 1 +-- !query 6 schema +struct +-- !query 6 output +foo6 + + +-- !query 7 +SELECT 'foo', APPROX_COUNT_DISTINCT(a) FILTER (WHERE b >= 0) FROM testData WHERE a = 0 GROUP BY 1 +-- !query 7 schema +struct +-- !query 7 output + + + +-- !query 8 +SELECT 'foo', MAX(STRUCT(a)) FILTER (WHERE b >= 1) FROM testData WHERE a = 0 GROUP BY 1 +-- !query 8 schema +struct> +-- !query 8 output + + + +-- !query 9 +SELECT a + b, COUNT(b) FILTER (WHERE b >= 2) FROM testData GROUP BY a + b +-- !query 9 schema +struct<(a + b):int,count(b):bigint> +-- !query 9 output +2 0 +3 1 +4 1 +5 1 +NULL 0 + + +-- !query 10 +SELECT a + 2, COUNT(b) FILTER (WHERE b IN (1, 2)) FROM testData GROUP BY a + 1 +-- !query 10 schema +struct<> +-- !query 10 output +org.apache.spark.sql.AnalysisException +expression 'testdata.`a`' is neither present in the group by, nor is it an aggregate function. 
Add to group by or wrap in first() (or first_value) if you don't care which value you get.; + + +-- !query 11 +SELECT a + 1 + 1, COUNT(b) FILTER (WHERE b > 0) FROM testData GROUP BY a + 1 +-- !query 11 schema +struct<((a + 1) + 1):int,count(b):bigint> +-- !query 11 output +3 2 +4 2 +5 2 +NULL 1 + + +-- !query 12 +SELECT COUNT(DISTINCT b) FILTER (WHERE b > 0), COUNT(DISTINCT b, c) FILTER (WHERE b > 0 AND c > 2) +FROM (SELECT 1 AS a, 2 AS b, 3 AS c) GROUP BY a +-- !query 12 schema +struct +-- !query 12 output +1 1 + + +-- !query 13 +SELECT a AS k, COUNT(b) FILTER (WHERE b = 1 OR b = 2) FROM testData GROUP BY k +-- !query 13 schema +struct +-- !query 13 output +1 2 +2 2 +3 2 +NULL 1 + + +-- !query 14 +SELECT a AS k, COUNT(b) FILTER (WHERE NOT b < 0) FROM testData GROUP BY k HAVING k > 1 +-- !query 14 schema +struct +-- !query 14 output +2 2 +3 2 + + +-- !query 15 +SELECT COUNT(b) FILTER (WHERE a > 0) AS k FROM testData GROUP BY k +-- !query 15 schema +struct<> +-- !query 15 output +org.apache.spark.sql.AnalysisException +aggregate functions are not allowed in GROUP BY, but found count(testdata.`b`); + + +-- !query 16 +SELECT a AS k, COUNT(b) FILTER (WHERE b > 0) FROM testData GROUP BY k +-- !query 16 schema +struct +-- !query 16 output +1 2 +2 2 +3 2 +NULL 1 + + +-- !query 17 +SELECT a, COUNT(1) FILTER (WHERE b > 1) FROM testData WHERE false GROUP BY a +-- !query 17 schema +struct +-- !query 17 output + + + +-- !query 18 +SELECT COUNT(1) FILTER (WHERE b = 2) FROM testData WHERE false +-- !query 18 schema +struct +-- !query 18 output +0 + + +-- !query 19 +SELECT 1 FROM (SELECT COUNT(1) FILTER (WHERE a >= 3 OR b <= 1) FROM testData WHERE false) t +-- !query 19 schema +struct<1:int> +-- !query 19 output +1 + + +-- !query 20 +CREATE TEMPORARY VIEW EMP AS SELECT * FROM VALUES + (100, "emp 1", date "2005-01-01", 100.00D, 10), + (100, "emp 1", date "2005-01-01", 100.00D, 10), + (200, "emp 2", date "2003-01-01", 200.00D, 10), + (300, "emp 3", date "2002-01-01", 300.00
[GitHub] [spark] AmplabJenkins removed a comment on issue #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression
AmplabJenkins removed a comment on issue #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression URL: https://github.com/apache/spark/pull/26656#issuecomment-559335317 Merged build finished. Test PASSed.
[GitHub] [spark] AmplabJenkins removed a comment on issue #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression
AmplabJenkins removed a comment on issue #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression URL: https://github.com/apache/spark/pull/26656#issuecomment-559335320 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/19390/ Test PASSed.
[GitHub] [spark] AmplabJenkins removed a comment on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column
AmplabJenkins removed a comment on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column URL: https://github.com/apache/spark/pull/26697#issuecomment-559335233 Merged build finished. Test PASSed.
[GitHub] [spark] beliefer commented on a change in pull request #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression
beliefer commented on a change in pull request #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression URL: https://github.com/apache/spark/pull/26656#discussion_r351585975 ## File path: sql/core/src/test/resources/sql-tests/results/group-by-filter.sql.out ## @@ -0,0 +1,332 @@ +-- Automatically generated by SQLQueryTestSuite +-- Number of queries: 27 + + +-- !query 0 +CREATE OR REPLACE TEMPORARY VIEW testData AS SELECT * FROM VALUES +(1, 1), (1, 2), (2, 1), (2, 2), (3, 1), (3, 2), (null, 1), (3, null), (null, null) +AS testData(a, b) +-- !query 0 schema +struct<> +-- !query 0 output + + + +-- !query 1 +SELECT a, COUNT(b) FILTER (WHERE a >= 2) FROM testData +-- !query 1 schema +struct<> +-- !query 1 output +org.apache.spark.sql.AnalysisException +grouping expressions sequence is empty, and 'testdata.`a`' is not an aggregate function. Wrap '(count(testdata.`b`) AS `count(b)`)' in windowing function(s) or wrap 'testdata.`a`' in first() (or first_value) if you don't care which value you get.; + + +-- !query 2 +SELECT COUNT(a) FILTER (WHERE a = 1), COUNT(b) FILTER (WHERE a > 1) FROM testData +-- !query 2 schema +struct +-- !query 2 output +2 4 + + +-- !query 3 +SELECT a, COUNT(b) FILTER (WHERE a >= 2) FROM testData GROUP BY a +-- !query 3 schema +struct +-- !query 3 output +1 0 +2 2 +3 2 +NULL 0 + + +-- !query 4 +SELECT a, COUNT(b) FILTER (WHERE a != 2) FROM testData GROUP BY b +-- !query 4 schema +struct<> +-- !query 4 output +org.apache.spark.sql.AnalysisException +expression 'testdata.`a`' is neither present in the group by, nor is it an aggregate function. 
Add to group by or wrap in first() (or first_value) if you don't care which value you get.; + + +-- !query 5 +SELECT COUNT(a) FILTER (WHERE a >= 0), COUNT(b) FILTER (WHERE a >= 3) FROM testData GROUP BY a +-- !query 5 schema +struct +-- !query 5 output +0 0 +2 0 +2 0 +3 2 + + +-- !query 6 +SELECT 'foo', COUNT(a) FILTER (WHERE b <= 2) FROM testData GROUP BY 1 +-- !query 6 schema +struct +-- !query 6 output +foo6 + + +-- !query 7 +SELECT 'foo', APPROX_COUNT_DISTINCT(a) FILTER (WHERE b >= 0) FROM testData WHERE a = 0 GROUP BY 1 +-- !query 7 schema +struct +-- !query 7 output + + + +-- !query 8 +SELECT 'foo', MAX(STRUCT(a)) FILTER (WHERE b >= 1) FROM testData WHERE a = 0 GROUP BY 1 +-- !query 8 schema +struct> +-- !query 8 output + + + +-- !query 9 +SELECT a + b, COUNT(b) FILTER (WHERE b >= 2) FROM testData GROUP BY a + b +-- !query 9 schema +struct<(a + b):int,count(b):bigint> +-- !query 9 output +2 0 +3 1 +4 1 +5 1 +NULL 0 + + +-- !query 10 +SELECT a + 2, COUNT(b) FILTER (WHERE b IN (1, 2)) FROM testData GROUP BY a + 1 +-- !query 10 schema +struct<> +-- !query 10 output +org.apache.spark.sql.AnalysisException +expression 'testdata.`a`' is neither present in the group by, nor is it an aggregate function. 
Add to group by or wrap in first() (or first_value) if you don't care which value you get.; + + +-- !query 11 +SELECT a + 1 + 1, COUNT(b) FILTER (WHERE b > 0) FROM testData GROUP BY a + 1 +-- !query 11 schema +struct<((a + 1) + 1):int,count(b):bigint> +-- !query 11 output +3 2 +4 2 +5 2 +NULL 1 + + +-- !query 12 +SELECT COUNT(DISTINCT b) FILTER (WHERE b > 0), COUNT(DISTINCT b, c) FILTER (WHERE b > 0 AND c > 2) +FROM (SELECT 1 AS a, 2 AS b, 3 AS c) GROUP BY a +-- !query 12 schema +struct +-- !query 12 output +1 1 + + +-- !query 13 +SELECT a AS k, COUNT(b) FILTER (WHERE b = 1 OR b = 2) FROM testData GROUP BY k +-- !query 13 schema +struct +-- !query 13 output +1 2 +2 2 +3 2 +NULL 1 + + +-- !query 14 +SELECT a AS k, COUNT(b) FILTER (WHERE NOT b < 0) FROM testData GROUP BY k HAVING k > 1 +-- !query 14 schema +struct +-- !query 14 output +2 2 +3 2 + + +-- !query 15 +SELECT COUNT(b) FILTER (WHERE a > 0) AS k FROM testData GROUP BY k +-- !query 15 schema +struct<> +-- !query 15 output +org.apache.spark.sql.AnalysisException +aggregate functions are not allowed in GROUP BY, but found count(testdata.`b`); + + +-- !query 16 +SELECT a AS k, COUNT(b) FILTER (WHERE b > 0) FROM testData GROUP BY k +-- !query 16 schema +struct +-- !query 16 output +1 2 +2 2 +3 2 +NULL 1 + + +-- !query 17 +SELECT a, COUNT(1) FILTER (WHERE b > 1) FROM testData WHERE false GROUP BY a +-- !query 17 schema +struct +-- !query 17 output + + + +-- !query 18 +SELECT COUNT(1) FILTER (WHERE b = 2) FROM testData WHERE false +-- !query 18 schema +struct +-- !query 18 output +0 + + +-- !query 19 +SELECT 1 FROM (SELECT COUNT(1) FILTER (WHERE a >= 3 OR b <= 1) FROM testData WHERE false) t +-- !query 19 schema +struct<1:int> +-- !query 19 output +1 + + +-- !query 20 +CREATE TEMPORARY VIEW EMP AS SELECT * FROM VALUES + (100, "emp 1", date "2005-01-01", 100.00D, 10), + (100, "emp 1", date "2005-01-01", 100.00D, 10), + (200, "emp 2", date "2003-01-01", 200.00D, 10), + (300, "emp 3", date "2002-01-01", 300.
[GitHub] [spark] AmplabJenkins commented on issue #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression
AmplabJenkins commented on issue #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression URL: https://github.com/apache/spark/pull/26656#issuecomment-559335320 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/19390/ Test PASSed.
[GitHub] [spark] AmplabJenkins removed a comment on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column
AmplabJenkins removed a comment on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column URL: https://github.com/apache/spark/pull/26697#issuecomment-559335237 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/114555/ Test PASSed.
[GitHub] [spark] AmplabJenkins commented on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column
AmplabJenkins commented on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column URL: https://github.com/apache/spark/pull/26697#issuecomment-559335233 Merged build finished. Test PASSed.
[GitHub] [spark] AmplabJenkins commented on issue #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression
AmplabJenkins commented on issue #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression URL: https://github.com/apache/spark/pull/26656#issuecomment-559335317 Merged build finished. Test PASSed.
[GitHub] [spark] AmplabJenkins commented on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column
AmplabJenkins commented on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column URL: https://github.com/apache/spark/pull/26697#issuecomment-559335237 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/114555/ Test PASSed.
[GitHub] [spark] SparkQA commented on issue #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression
SparkQA commented on issue #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression URL: https://github.com/apache/spark/pull/26656#issuecomment-559335068 **[Test build #114562 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114562/testReport)** for PR 26656 at commit [`14daee6`](https://github.com/apache/spark/commit/14daee696abe01e2886496720a1479e3deff35e7).
[GitHub] [spark] SparkQA removed a comment on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column
SparkQA removed a comment on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column URL: https://github.com/apache/spark/pull/26697#issuecomment-559294655 **[Test build #114555 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114555/testReport)** for PR 26697 at commit [`23bef9c`](https://github.com/apache/spark/commit/23bef9cb752cc9a83eccd8e69a49eb57bdbd91fa).
[GitHub] [spark] SparkQA commented on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column
SparkQA commented on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column URL: https://github.com/apache/spark/pull/26697#issuecomment-559334935 **[Test build #114555 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114555/testReport)** for PR 26697 at commit [`23bef9c`](https://github.com/apache/spark/commit/23bef9cb752cc9a83eccd8e69a49eb57bdbd91fa).
* This patch passes all tests.
* This patch merges cleanly.
* This patch adds no public classes.
[GitHub] [spark] AmplabJenkins commented on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types
AmplabJenkins commented on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types URL: https://github.com/apache/spark/pull/26699#issuecomment-559333993 Merged build finished. Test PASSed.
[GitHub] [spark] AmplabJenkins removed a comment on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types
AmplabJenkins removed a comment on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types URL: https://github.com/apache/spark/pull/26699#issuecomment-559333996 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/19389/ Test PASSed.
[GitHub] [spark] AmplabJenkins removed a comment on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types
AmplabJenkins removed a comment on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types URL: https://github.com/apache/spark/pull/26699#issuecomment-559333993 Merged build finished. Test PASSed.
[GitHub] [spark] AmplabJenkins commented on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types
AmplabJenkins commented on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types URL: https://github.com/apache/spark/pull/26699#issuecomment-559333996 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/19389/ Test PASSed.
[GitHub] [spark] beliefer commented on a change in pull request #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression
beliefer commented on a change in pull request #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression URL: https://github.com/apache/spark/pull/26656#discussion_r351585975

## File path: sql/core/src/test/resources/sql-tests/results/group-by-filter.sql.out

@@ -0,0 +1,332 @@
+-- Automatically generated by SQLQueryTestSuite
+-- Number of queries: 27
+
+
+-- !query 0
+CREATE OR REPLACE TEMPORARY VIEW testData AS SELECT * FROM VALUES
+(1, 1), (1, 2), (2, 1), (2, 2), (3, 1), (3, 2), (null, 1), (3, null), (null, null)
+AS testData(a, b)
+-- !query 0 schema
+struct<>
+-- !query 0 output
+
+
+
+-- !query 1
+SELECT a, COUNT(b) FILTER (WHERE a >= 2) FROM testData
+-- !query 1 schema
+struct<>
+-- !query 1 output
+org.apache.spark.sql.AnalysisException
+grouping expressions sequence is empty, and 'testdata.`a`' is not an aggregate function. Wrap '(count(testdata.`b`) AS `count(b)`)' in windowing function(s) or wrap 'testdata.`a`' in first() (or first_value) if you don't care which value you get.;
+
+
+-- !query 2
+SELECT COUNT(a) FILTER (WHERE a = 1), COUNT(b) FILTER (WHERE a > 1) FROM testData
+-- !query 2 schema
+struct
+-- !query 2 output
+2 4
+
+
+-- !query 3
+SELECT a, COUNT(b) FILTER (WHERE a >= 2) FROM testData GROUP BY a
+-- !query 3 schema
+struct
+-- !query 3 output
+1 0
+2 2
+3 2
+NULL 0
+
+
+-- !query 4
+SELECT a, COUNT(b) FILTER (WHERE a != 2) FROM testData GROUP BY b
+-- !query 4 schema
+struct<>
+-- !query 4 output
+org.apache.spark.sql.AnalysisException
+expression 'testdata.`a`' is neither present in the group by, nor is it an aggregate function. Add to group by or wrap in first() (or first_value) if you don't care which value you get.;
+
+
+-- !query 5
+SELECT COUNT(a) FILTER (WHERE a >= 0), COUNT(b) FILTER (WHERE a >= 3) FROM testData GROUP BY a
+-- !query 5 schema
+struct
+-- !query 5 output
+0 0
+2 0
+2 0
+3 2
+
+
+-- !query 6
+SELECT 'foo', COUNT(a) FILTER (WHERE b <= 2) FROM testData GROUP BY 1
+-- !query 6 schema
+struct
+-- !query 6 output
+foo 6
+
+
+-- !query 7
+SELECT 'foo', APPROX_COUNT_DISTINCT(a) FILTER (WHERE b >= 0) FROM testData WHERE a = 0 GROUP BY 1
+-- !query 7 schema
+struct
+-- !query 7 output
+
+
+
+-- !query 8
+SELECT 'foo', MAX(STRUCT(a)) FILTER (WHERE b >= 1) FROM testData WHERE a = 0 GROUP BY 1
+-- !query 8 schema
+struct>
+-- !query 8 output
+
+
+
+-- !query 9
+SELECT a + b, COUNT(b) FILTER (WHERE b >= 2) FROM testData GROUP BY a + b
+-- !query 9 schema
+struct<(a + b):int,count(b):bigint>
+-- !query 9 output
+2 0
+3 1
+4 1
+5 1
+NULL 0
+
+
+-- !query 10
+SELECT a + 2, COUNT(b) FILTER (WHERE b IN (1, 2)) FROM testData GROUP BY a + 1
+-- !query 10 schema
+struct<>
+-- !query 10 output
+org.apache.spark.sql.AnalysisException
+expression 'testdata.`a`' is neither present in the group by, nor is it an aggregate function. Add to group by or wrap in first() (or first_value) if you don't care which value you get.;
+
+
+-- !query 11
+SELECT a + 1 + 1, COUNT(b) FILTER (WHERE b > 0) FROM testData GROUP BY a + 1
+-- !query 11 schema
+struct<((a + 1) + 1):int,count(b):bigint>
+-- !query 11 output
+3 2
+4 2
+5 2
+NULL 1
+
+
+-- !query 12
+SELECT COUNT(DISTINCT b) FILTER (WHERE b > 0), COUNT(DISTINCT b, c) FILTER (WHERE b > 0 AND c > 2)
+FROM (SELECT 1 AS a, 2 AS b, 3 AS c) GROUP BY a
+-- !query 12 schema
+struct
+-- !query 12 output
+1 1
+
+
+-- !query 13
+SELECT a AS k, COUNT(b) FILTER (WHERE b = 1 OR b = 2) FROM testData GROUP BY k
+-- !query 13 schema
+struct
+-- !query 13 output
+1 2
+2 2
+3 2
+NULL 1
+
+
+-- !query 14
+SELECT a AS k, COUNT(b) FILTER (WHERE NOT b < 0) FROM testData GROUP BY k HAVING k > 1
+-- !query 14 schema
+struct
+-- !query 14 output
+2 2
+3 2
+
+
+-- !query 15
+SELECT COUNT(b) FILTER (WHERE a > 0) AS k FROM testData GROUP BY k
+-- !query 15 schema
+struct<>
+-- !query 15 output
+org.apache.spark.sql.AnalysisException
+aggregate functions are not allowed in GROUP BY, but found count(testdata.`b`);
+
+
+-- !query 16
+SELECT a AS k, COUNT(b) FILTER (WHERE b > 0) FROM testData GROUP BY k
+-- !query 16 schema
+struct
+-- !query 16 output
+1 2
+2 2
+3 2
+NULL 1
+
+
+-- !query 17
+SELECT a, COUNT(1) FILTER (WHERE b > 1) FROM testData WHERE false GROUP BY a
+-- !query 17 schema
+struct
+-- !query 17 output
+
+
+
+-- !query 18
+SELECT COUNT(1) FILTER (WHERE b = 2) FROM testData WHERE false
+-- !query 18 schema
+struct
+-- !query 18 output
+0
+
+
+-- !query 19
+SELECT 1 FROM (SELECT COUNT(1) FILTER (WHERE a >= 3 OR b <= 1) FROM testData WHERE false) t
+-- !query 19 schema
+struct<1:int>
+-- !query 19 output
+1
+
+
+-- !query 20
+CREATE TEMPORARY VIEW EMP AS SELECT * FROM VALUES
+ (100, "emp 1", date "2005-01-01", 100.00D, 10),
+ (100, "emp 1", date "2005-01-01", 100.00D, 10),
+ (200, "emp 2", date "2003-01-01", 200.00D, 10),
+ (300, "emp 3", date "2002-01-01", 300.
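The FILTER clause in the queries above restricts which rows feed each aggregate, and NULLs are never counted. A minimal Python sketch of the semantics of query 2 over the same testData rows (hypothetical helper, not Spark's implementation):

```python
# testData rows from the test file above; None stands in for SQL NULL.
test_data = [(1, 1), (1, 2), (2, 1), (2, 2), (3, 1), (3, 2),
             (None, 1), (3, None), (None, None)]

def count_filter(rows, col, pred):
    # COUNT(col) FILTER (WHERE pred): count non-NULL values of `col`
    # among rows satisfying the predicate (NULL never satisfies it).
    return sum(1 for r in rows if pred(r) and r[col] is not None)

# SELECT COUNT(a) FILTER (WHERE a = 1), COUNT(b) FILTER (WHERE a > 1) FROM testData
print(count_filter(test_data, 0, lambda r: r[0] == 1))                      # 2
print(count_filter(test_data, 1, lambda r: r[0] is not None and r[0] > 1))  # 4
```

This reproduces the "2 4" output of query 2.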
[GitHub] [spark] SparkQA commented on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types
SparkQA commented on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types URL: https://github.com/apache/spark/pull/26699#issuecomment-559333736 **[Test build #114561 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114561/testReport)** for PR 26699 at commit [`4728ba4`](https://github.com/apache/spark/commit/4728ba4ab232b2164c203d9a7677b6918e00751f).
[GitHub] [spark] yaooqinn commented on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types
yaooqinn commented on issue #26699: [SPARK-30066][SQL] Columnar execution support for interval types URL: https://github.com/apache/spark/pull/26699#issuecomment-559333176 cc @cloud-fan @maropu, we may need https://github.com/apache/spark/pull/26680 to be merged first to move this forward; thanks for reviewing.
[GitHub] [spark] yaooqinn opened a new pull request #26699: [SPARK-30066][SQL] Columnar execution support for interval types
yaooqinn opened a new pull request #26699: [SPARK-30066][SQL] Columnar execution support for interval types URL: https://github.com/apache/spark/pull/26699

### What changes were proposed in this pull request?
Columnar execution support for interval types.

### Why are the changes needed?
To support caching tables with interval columns, and to improve performance.

### Does this PR introduce any user-facing change?
Yes, cached tables can now accept interval columns.

### How was this patch tested?
Added unit tests.
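Spark represents a calendar interval as a (months, days, microseconds) triple; columnar execution stores each component in its own flat array so a cached batch can be scanned component-by-component without materializing rows. A minimal Python sketch of that layout (hypothetical names, not the PR's actual code):

```python
# Row-wise intervals as (months, days, microseconds) triples,
# mirroring Spark's CalendarInterval representation.
rows = [(1, 2, 3_000_000), (0, 15, 0), (12, 0, 500_000)]

# Columnar layout: one flat array per interval component.
months = [r[0] for r in rows]
days = [r[1] for r in rows]
micros = [r[2] for r in rows]

def get_interval(i):
    # Reassemble row i from the columnar batch.
    return (months[i], days[i], micros[i])

print(get_interval(1))  # (0, 15, 0)
```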
[GitHub] [spark] SparkQA commented on issue #26195: [SPARK-29537][SQL] throw exception when user defined a wrong base path
SparkQA commented on issue #26195: [SPARK-29537][SQL] throw exception when user defined a wrong base path URL: https://github.com/apache/spark/pull/26195#issuecomment-559329895 **[Test build #114560 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114560/testReport)** for PR 26195 at commit [`66f0bd3`](https://github.com/apache/spark/commit/66f0bd36cdf64cfac11ed7199badfa820e7f3d38).
[GitHub] [spark] AmplabJenkins commented on issue #26195: [SPARK-29537][SQL] throw exception when user defined a wrong base path
AmplabJenkins commented on issue #26195: [SPARK-29537][SQL] throw exception when user defined a wrong base path URL: https://github.com/apache/spark/pull/26195#issuecomment-559328822 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/19388/ Test PASSed.
[GitHub] [spark] AmplabJenkins removed a comment on issue #26195: [SPARK-29537][SQL] throw exception when user defined a wrong base path
AmplabJenkins removed a comment on issue #26195: [SPARK-29537][SQL] throw exception when user defined a wrong base path URL: https://github.com/apache/spark/pull/26195#issuecomment-559328822 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/19388/ Test PASSed.
[GitHub] [spark] AmplabJenkins removed a comment on issue #26195: [SPARK-29537][SQL] throw exception when user defined a wrong base path
AmplabJenkins removed a comment on issue #26195: [SPARK-29537][SQL] throw exception when user defined a wrong base path URL: https://github.com/apache/spark/pull/26195#issuecomment-559328818 Merged build finished. Test PASSed.
[GitHub] [spark] AmplabJenkins commented on issue #26195: [SPARK-29537][SQL] throw exception when user defined a wrong base path
AmplabJenkins commented on issue #26195: [SPARK-29537][SQL] throw exception when user defined a wrong base path URL: https://github.com/apache/spark/pull/26195#issuecomment-559328818 Merged build finished. Test PASSed.
[GitHub] [spark] Ngone51 commented on a change in pull request #26195: [SPARK-29537][SQL] throw exception when user defined a wrong base path
Ngone51 commented on a change in pull request #26195: [SPARK-29537][SQL] throw exception when user defined a wrong base path URL: https://github.com/apache/spark/pull/26195#discussion_r351580701

## File path: sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileIndexSuite.scala

@@ -352,6 +352,25 @@ class FileIndexSuite extends SharedSparkSession {
     "driver side must not be negative"))
 }
+  test ("SPARK-29537: throw exception when user defined a wrong base path") {

Review comment: Added 261b9ad
[GitHub] [spark] shahidki31 removed a comment on issue #26616: [SPARK-25392][Webui]Prevent error page when accessing pools page from history server
shahidki31 removed a comment on issue #26616: [SPARK-25392][Webui]Prevent error page when accessing pools page from history server URL: https://github.com/apache/spark/pull/26616#issuecomment-556906782 Jenkins, test this please
[GitHub] [spark] SparkQA commented on issue #26416: [SPARK-29779][CORE] Compact old event log files and cleanup
SparkQA commented on issue #26416: [SPARK-29779][CORE] Compact old event log files and cleanup URL: https://github.com/apache/spark/pull/26416#issuecomment-559322497 **[Test build #114559 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114559/testReport)** for PR 26416 at commit [`e5d9250`](https://github.com/apache/spark/commit/e5d925025a606cbb5c365303149272900f255e33).
[GitHub] [spark] HeartSaVioR commented on issue #26416: [SPARK-29779][CORE] Compact old event log files and cleanup
HeartSaVioR commented on issue #26416: [SPARK-29779][CORE] Compact old event log files and cleanup URL: https://github.com/apache/spark/pull/26416#issuecomment-559321154 > org.apache.spark.sql.hive.thriftserver.ThriftServerWithSparkContextSuite.SPARK-29911: Uncache cached tables when session closed

https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114556/testReport/org.apache.spark.sql.hive.thriftserver/ThriftServerWithSparkContextSuite/SPARK_29911__Uncache_cached_tables_when_session_closed/history/

That UT seems to have failed only once in the recent 30 runs, so let's file an issue if we see the failure once more. In any case, it's not related to this PR.
[GitHub] [spark] HeartSaVioR commented on issue #26416: [SPARK-29779][CORE] Compact old event log files and cleanup
HeartSaVioR commented on issue #26416: [SPARK-29779][CORE] Compact old event log files and cleanup URL: https://github.com/apache/spark/pull/26416#issuecomment-559321182 retest this, please.
[GitHub] [spark] AmplabJenkins removed a comment on issue #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI
AmplabJenkins removed a comment on issue #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI URL: https://github.com/apache/spark/pull/26378#issuecomment-559321142 Merged build finished. Test PASSed.
[GitHub] [spark] AmplabJenkins commented on issue #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI
AmplabJenkins commented on issue #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI URL: https://github.com/apache/spark/pull/26378#issuecomment-559321142 Merged build finished. Test PASSed.
[GitHub] [spark] AmplabJenkins commented on issue #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI
AmplabJenkins commented on issue #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI URL: https://github.com/apache/spark/pull/26378#issuecomment-559321146 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/19387/ Test PASSed.
[GitHub] [spark] AmplabJenkins removed a comment on issue #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI
AmplabJenkins removed a comment on issue #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI URL: https://github.com/apache/spark/pull/26378#issuecomment-559321146 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/19387/ Test PASSed.
[GitHub] [spark] SparkQA commented on issue #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI
SparkQA commented on issue #26378: [SPARK-29724][SPARK-29726][WEBUI][SQL] Support JDBC/ODBC tab for HistoryServer WebUI URL: https://github.com/apache/spark/pull/26378#issuecomment-559320853 **[Test build #114558 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114558/testReport)** for PR 26378 at commit [`2bc86c5`](https://github.com/apache/spark/commit/2bc86c503cd93993fa5ef2dee30443472895908b).
[GitHub] [spark] huangtianhua commented on issue #26690: [SPARK-30057][DOCS]Add a statement of platforms Spark runs on
huangtianhua commented on issue #26690: [SPARK-30057][DOCS]Add a statement of platforms Spark runs on URL: https://github.com/apache/spark/pull/26690#issuecomment-559320550 @HeartSaVioR @srowen Thanks for your attention to this. Yes, we have integrated the Maven and Python tests into the AMPLab Jenkins CI; the ARM tests run as daily jobs and have been running stably for a few weeks. And as @srowen said, the platform Spark runs on is almost entirely the JVM. I hope to add this to the Spark docs so that users know Spark should run on any platform that runs a supported version of Java, including ARM, not x86_64 only. This will also give users more interest and confidence in running Spark on the ARM platform.
[GitHub] [spark] HyukjinKwon commented on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column
HyukjinKwon commented on issue #26697: [SPARK-28461][SQL][test-hadoop3.2] Pad Decimal numbers with trailing zeros to the scale of the column URL: https://github.com/apache/spark/pull/26697#issuecomment-559320004 Thanks @dongjoon-hyun and @wangyum