[GitHub] spark issue #13704: [SPARK-15985][SQL] Reduce runtime overhead of a program ...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/13704 **[Test build #61677 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61677/consoleFull)** for PR 13704 at commit [`77859cf`](https://github.com/apache/spark/commit/77859cf4397b8a5022b93ffa4996203b36dfef1b). --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. --- - To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org For additional commands, e-mail: reviews-h...@spark.apache.org
[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...
Github user dongjoon-hyun commented on a diff in the pull request: https://github.com/apache/spark/pull/14008#discussion_r69385942

--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala ---

@@ -652,6 +656,163 @@ case class StringRPad(str: Expression, len: Expression, pad: Expression)
   override def prettyName: String = "rpad"
 }

+object ParseUrl {
+  private val HOST = UTF8String.fromString("HOST")
+  private val PATH = UTF8String.fromString("PATH")
+  private val QUERY = UTF8String.fromString("QUERY")
+  private val REF = UTF8String.fromString("REF")
+  private val PROTOCOL = UTF8String.fromString("PROTOCOL")
+  private val FILE = UTF8String.fromString("FILE")
+  private val AUTHORITY = UTF8String.fromString("AUTHORITY")
+  private val USERINFO = UTF8String.fromString("USERINFO")
+  private val REGEXPREFIX = "(&|^)"
+  private val REGEXSUBFIX = "=([^&]*)"
+}
+
+/**
+ * Extracts a part from a URL
+ */
+@ExpressionDescription(
+  usage = "_FUNC_(url, partToExtract[, key]) - extracts a part from a URL",
+  extended = """Parts: HOST, PATH, QUERY, REF, PROTOCOL, AUTHORITY, FILE, USERINFO.
+    Key specifies which query to extract.
+    Examples:
+      > SELECT _FUNC_('http://spark.apache.org/path?query=1', 'HOST')
+      'spark.apache.org'
+      > SELECT _FUNC_('http://spark.apache.org/path?query=1', 'QUERY')
+      'query=1'
+      > SELECT _FUNC_('http://spark.apache.org/path?query=1', 'QUERY', 'query')
+      '1'""")
+case class ParseUrl(children: Seq[Expression])
+  extends Expression with ImplicitCastInputTypes with CodegenFallback {
+
+  override def nullable: Boolean = true
+  override def inputTypes: Seq[DataType] = Seq.fill(children.size)(StringType)
+  override def dataType: DataType = StringType
+  override def prettyName: String = "parse_url"
+
+  // If the url is a constant, cache the URL object so that we don't need to convert url
+  // from UTF8String to String to URL for every row.
+  @transient private lazy val cachedUrl = stringExprs(0) match {
+    case Literal(url: UTF8String, _) => getUrl(url)
+    case _ => null
+  }
+
+  // If the key is a constant, cache the Pattern object so that we don't need to convert key
+  // from UTF8String to String to StringBuilder to String to Pattern for every row.
+  @transient private lazy val cachedPattern = stringExprs(2) match {
+    case Literal(key: UTF8String, _) => getPattern(key)

--- End diff --

Hi, @janplus. When I commented before, this is where I thought you could do the validation.
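The literal-caching trick in the diff above (compile once when the argument is a constant, else per row) can be sketched outside Spark with plain `java.util.regex`. This is an illustrative toy, not the PR's code; `QueryExtractor` and `constKey` are invented names:

```scala
import java.util.regex.Pattern

// Toy version of the literal-caching idea from the diff above: if the key
// is a compile-time constant, build the Pattern once and reuse it for every
// row instead of recompiling per row.
class QueryExtractor(constKey: Option[String]) {

  // Mirrors the REGEXPREFIX / REGEXSUBFIX pair: (&|^)key=([^&]*)
  private def compile(key: String): Pattern =
    Pattern.compile("(&|^)" + key + "=([^&]*)")

  // Built lazily, exactly once, and only when the key is a constant.
  private lazy val cachedPattern: Pattern =
    constKey.map(compile).orNull

  def extract(query: String, key: String): Option[String] = {
    val p = if (cachedPattern != null) cachedPattern else compile(key)
    val m = p.matcher(query)
    if (m.find()) Some(m.group(2)) else None
  }
}
```

For a constant key, `new QueryExtractor(Some("query")).extract("query=1&a=2", "query")` reuses the cached pattern on every call instead of paying the compile cost per row.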
[GitHub] spark issue #13976: [SPARK-16288][SQL] Implement inline table generating fun...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/13976 **[Test build #61676 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61676/consoleFull)** for PR 13976 at commit [`e260359`](https://github.com/apache/spark/commit/e26035968c73210dda38e82654fc335390fc6c1e).
[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...
Github user janplus commented on a diff in the pull request: https://github.com/apache/spark/pull/14008#discussion_r69385849

--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala ---

@@ -725,4 +725,51 @@ class StringExpressionsSuite extends SparkFunSuite with ExpressionEvalHelper {
   checkEvaluation(FindInSet(Literal("abf"), Literal("abc,b,ab,c,def")), 0)
   checkEvaluation(FindInSet(Literal("ab,"), Literal("abc,b,ab,c,def")), 0)
 }

+  test("ParseUrl") {
+    def checkParseUrl(expected: String, urlStr: String, partToExtract: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Seq(Literal.create(urlStr, StringType),
+          Literal.create(partToExtract, StringType))), expected)
+    }
+    def checkParseUrlWithKey(
+        expected: String, urlStr: String,
+        partToExtract: String, key: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Seq(Literal.create(urlStr, StringType), Literal.create(partToExtract, StringType),
+          Literal.create(key, StringType))), expected)
+    }
+
+    checkParseUrl("spark.apache.org", "http://spark.apache.org/path?query=1", "HOST")
+    checkParseUrl("/path", "http://spark.apache.org/path?query=1", "PATH")
+    checkParseUrl("query=1", "http://spark.apache.org/path?query=1", "QUERY")
+    checkParseUrl("Ref", "http://spark.apache.org/path?query=1#Ref", "REF")
+    checkParseUrl("http", "http://spark.apache.org/path?query=1", "PROTOCOL")
+    checkParseUrl("/path?query=1", "http://spark.apache.org/path?query=1", "FILE")
+    checkParseUrl("spark.apache.org:8080", "http://spark.apache.org:8080/path?query=1", "AUTHORITY")
+    checkParseUrl("userinfo", "http://useri...@spark.apache.org/path?query=1", "USERINFO")
+    checkParseUrlWithKey("1", "http://spark.apache.org/path?query=1", "QUERY", "query")
+
+    // Null checking
+    checkParseUrl(null, null, "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", null)
+    checkParseUrl(null, null, null)
+    checkParseUrl(null, "test", "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", "NO")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "HOST", "query")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "quer")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", null)
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "")
+
+    // exceptional cases
+    intercept[java.util.regex.PatternSyntaxException] {

--- End diff --

Yes, definitely I can do that. In fact I have already finished it, but before I commit it, let us think it through first. In the `checkAnalysis` method for `LogicalPlan`, the only method called on an `Expression` is `checkInputDataTypes`: https://github.com/apache/spark/blob/d1e8108854deba3de8e2d87eb4389d11fb17ee57/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala#L64 This means we can only implement this validation in `checkInputDataTypes` of `ParseUrl`. In that case Spark will give an AnalysisException like this:

> org.apache.spark.sql.AnalysisException: cannot resolve 'parse_url("http://spark.apache.org/path?", "QUERY", "???")' due to data type mismatch: wrong key "???"; line 1 pos 0

But obviously this is not a data type mismatch, so the message may confuse users, and the different messages for a **Literal** `key` versus a **non-Literal** `key` may confuse them too. On the other hand, if we do not validate the **Literal** `key`, the `Executor` will get an exception at the first row, which does not seem unacceptable. Weighing both sides, I think we should not do the Literal `key` validation. What do you think about this?
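The exception being intercepted above is easy to reproduce with plain `java.util.regex`, independent of Spark: the key is spliced verbatim into the `(&|^)key=([^&]*)` pattern, so a key containing regex metacharacters (e.g. `???`) makes the whole pattern fail to compile. A minimal sketch (`keyPattern` and `isValidKey` are invented helper names):

```scala
import java.util.regex.{Pattern, PatternSyntaxException}

// The key is concatenated into the pattern verbatim, so regex
// metacharacters in the key invalidate the pattern at compile time.
def keyPattern(key: String): Pattern =
  Pattern.compile("(&|^)" + key + "=([^&]*)")

// A Literal key could be pre-validated by attempting the compile up front.
def isValidKey(key: String): Boolean =
  try { keyPattern(key); true }
  catch { case _: PatternSyntaxException => false }
```

This is exactly the check that could back an analysis-time validation; whether to run it at analysis time or let the executor fail on the first row is the trade-off debated in the thread.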
[GitHub] spark issue #14033: [SPARK-16286][SQL] Implement stack table generating func...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/14033 **[Test build #61675 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61675/consoleFull)** for PR 14033 at commit [`6de93a1`](https://github.com/apache/spark/commit/6de93a1582ac5877a932ea47e86811e228b5c2f6).
[GitHub] spark pull request #14033: [SPARK-16286][SQL] Implement stack table generati...
GitHub user dongjoon-hyun opened a pull request: https://github.com/apache/spark/pull/14033 [SPARK-16286][SQL] Implement stack table generating function

## What changes were proposed in this pull request?

This PR implements the `stack` table generating function.

## How was this patch tested?

Pass the Jenkins tests including new test cases.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/dongjoon-hyun/spark SPARK-16286

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/spark/pull/14033.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #14033

commit 6de93a1582ac5877a932ea47e86811e228b5c2f6
Author: Dongjoon Hyun
Date: 2016-07-03T05:18:16Z

    [SPARK-16286][SQL] Implement stack table generating function
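For readers unfamiliar with the function: `stack(n, v1, ..., vk)` separates the k values into n rows, each with ceil(k/n) columns, padding the tail with NULLs. A toy model of that row-splitting in plain Scala (illustrative only, not this PR's implementation; `Option`/`None` stands in for SQL NULL):

```scala
// Toy model of SQL's stack(n, v1, ..., vk): distribute k values into n rows
// of ceil(k / n) columns each, padding the last row with nulls (None).
def stack[A](n: Int, values: A*): Seq[Seq[Option[A]]] = {
  require(n > 0, "number of rows must be positive")
  require(values.nonEmpty, "at least one value is required")
  val cols = (values.size + n - 1) / n          // ceil(k / n)
  values.map(Option(_)).padTo(n * cols, None).grouped(cols).toSeq
}
```

For example, `stack(2, 1, 2, 3)` yields two rows, `(1, 2)` and `(3, NULL)`, matching Hive's semantics for the same call.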
[GitHub] spark issue #13976: [SPARK-16288][SQL] Implement inline table generating fun...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/13976 Merged build finished. Test PASSed.
[GitHub] spark issue #13976: [SPARK-16288][SQL] Implement inline table generating fun...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/13976 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61670/ Test PASSed.
[GitHub] spark issue #13976: [SPARK-16288][SQL] Implement inline table generating fun...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/13976 **[Test build #61670 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61670/consoleFull)** for PR 13976 at commit [`fed3ba2`](https://github.com/apache/spark/commit/fed3ba2bde5f82946c49fef5c06c85791400cea5).
* This patch passes all tests.
* This patch merges cleanly.
* This patch adds no public classes.
[GitHub] spark pull request #14031: [SPARK-16353][BUILD][DOC] Missing javadoc options...
Github user mallman commented on a diff in the pull request: https://github.com/apache/spark/pull/14031#discussion_r69385673

--- Diff: project/SparkBuild.scala ---

@@ -723,8 +723,8 @@ object Unidoc {
       .map(_.filterNot(_.getCanonicalPath.contains("org/apache/hadoop")))
     },

-    // Javadoc options: create a window title, and group key packages on index page
-    javacOptions in doc := Seq(
+    // Javadoc options: create a window title

--- End diff --

I think we can either change it to just `// Javadoc options` to clarify that the following `javacOptions` are in fact for Javadoc, or we can remove the comment entirely.
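For context, `javacOptions in doc` feeds these flags to the javadoc tool during `sbt doc`/Unidoc rather than to javac, which is why the comment wording is under discussion. A paraphrased sketch of what such a settings block looks like (flag values here are illustrative, not Spark's exact ones):

```scala
// sbt 0.13-style setting: these options go to the javadoc tool, not javac.
javacOptions in doc := Seq(
  "-windowtitle", "Spark JavaDoc",          // the "window title" the comment refers to
  "-group", "Core Java API",                // group key packages on the index page
  "org.apache.spark.api.java:org.apache.spark.api.java.function"
)
```

Renaming the comment to just `// Javadoc options`, as suggested, keeps it accurate without enumerating the flags.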
[GitHub] spark issue #13976: [SPARK-16288][SQL] Implement inline table generating fun...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/13976 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61669/ Test PASSed.
[GitHub] spark issue #13976: [SPARK-16288][SQL] Implement inline table generating fun...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/13976 Merged build finished. Test PASSed.
[GitHub] spark issue #13976: [SPARK-16288][SQL] Implement inline table generating fun...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/13976 **[Test build #61669 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61669/consoleFull)** for PR 13976 at commit [`31ffa75`](https://github.com/apache/spark/commit/31ffa758cfd5aa41851cb77a15b03da6d54e9198).
* This patch passes all tests.
* This patch merges cleanly.
* This patch adds no public classes.
[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...
Github user dongjoon-hyun commented on a diff in the pull request: https://github.com/apache/spark/pull/14008#discussion_r69385614

--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala ---

@@ -725,4 +725,51 @@ class StringExpressionsSuite extends SparkFunSuite with ExpressionEvalHelper {
   checkEvaluation(FindInSet(Literal("abf"), Literal("abc,b,ab,c,def")), 0)
   checkEvaluation(FindInSet(Literal("ab,"), Literal("abc,b,ab,c,def")), 0)
 }

+  test("ParseUrl") {
+    def checkParseUrl(expected: String, urlStr: String, partToExtract: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Seq(Literal.create(urlStr, StringType),
+          Literal.create(partToExtract, StringType))), expected)
+    }
+    def checkParseUrlWithKey(
+        expected: String, urlStr: String,
+        partToExtract: String, key: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Seq(Literal.create(urlStr, StringType), Literal.create(partToExtract, StringType),
+          Literal.create(key, StringType))), expected)
+    }
+
+    checkParseUrl("spark.apache.org", "http://spark.apache.org/path?query=1", "HOST")
+    checkParseUrl("/path", "http://spark.apache.org/path?query=1", "PATH")
+    checkParseUrl("query=1", "http://spark.apache.org/path?query=1", "QUERY")
+    checkParseUrl("Ref", "http://spark.apache.org/path?query=1#Ref", "REF")
+    checkParseUrl("http", "http://spark.apache.org/path?query=1", "PROTOCOL")
+    checkParseUrl("/path?query=1", "http://spark.apache.org/path?query=1", "FILE")
+    checkParseUrl("spark.apache.org:8080", "http://spark.apache.org:8080/path?query=1", "AUTHORITY")
+    checkParseUrl("userinfo", "http://useri...@spark.apache.org/path?query=1", "USERINFO")
+    checkParseUrlWithKey("1", "http://spark.apache.org/path?query=1", "QUERY", "query")
+
+    // Null checking
+    checkParseUrl(null, null, "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", null)
+    checkParseUrl(null, null, null)
+    checkParseUrl(null, "test", "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", "NO")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "HOST", "query")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "quer")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", null)
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "")
+
+    // exceptional cases
+    intercept[java.util.regex.PatternSyntaxException] {

--- End diff --

Thank you for the nice investigation. Yes, Hive's validation seems to be too limited. I think you can do better than Hive if you support **Literal** `key` validation. What do you think about that?
[GitHub] spark issue #14032: [Minor][SQL] Replace Parquet deprecations
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/14032 **[Test build #61674 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61674/consoleFull)** for PR 14032 at commit [`20aa871`](https://github.com/apache/spark/commit/20aa871a02d08d45f716a9974abe479f077ccd30).
[GitHub] spark pull request #14032: [Minor][SQL] Replace Parquet deprecations
GitHub user techaddict opened a pull request: https://github.com/apache/spark/pull/14032 [Minor][SQL] Replace Parquet deprecations

## What changes were proposed in this pull request?

1. Replace `Binary.fromByteArray` with `Binary.fromReusedByteArray`
2. Replace `ConversionPatterns.listType` with `ConversionPatterns.listOfElements`

## How was this patch tested?

Existing tests.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/techaddict/spark depre-1

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/spark/pull/14032.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #14032

commit 20aa871a02d08d45f716a9974abe479f077ccd30
Author: Sandeep Singh
Date: 2016-07-03T04:45:54Z

    [Minor][SQL] Replace Parquet deprecations

    1. Replace `Binary.fromByteArray` with `Binary.fromReusedByteArray`
    2. Replace `ConversionPatterns.listType` with `ConversionPatterns.listOfElements`
[GitHub] spark issue #13680: [SPARK-15962][SQL] Introduce implementation with a dense...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/13680 **[Test build #61673 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61673/consoleFull)** for PR 13680 at commit [`243252a`](https://github.com/apache/spark/commit/243252a460794c2b6e2dff3757e421e2532e87bf).
[GitHub] spark issue #14030: [SPARK-16350][SQL] Fix support for incremental planning ...
Github user lw-lin commented on the issue: https://github.com/apache/spark/pull/14030 @zsxwing could you take a look at this? Thanks!
[GitHub] spark issue #12203: [SPARK-14423][YARN] Avoid same name files added to distr...
Github user RicoGit commented on the issue: https://github.com/apache/spark/pull/12203 Hi guys, is it possible to apply this patch to version 1.6? What can I do to help with this?
[GitHub] spark issue #14030: [SPARK-16350][SQL] Fix support for incremental planning ...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/14030 Merged build finished. Test PASSed.
[GitHub] spark issue #14030: [SPARK-16350][SQL] Fix support for incremental planning ...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/14030 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61668/ Test PASSed.
[GitHub] spark issue #14030: [SPARK-16350][SQL] Fix support for incremental planning ...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/14030 **[Test build #61668 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61668/consoleFull)** for PR 14030 at commit [`2f8ba28`](https://github.com/apache/spark/commit/2f8ba2859c521979deacae87fa03460fec5c8191).
* This patch passes all tests.
* This patch merges cleanly.
* This patch adds no public classes.
[GitHub] spark issue #13517: [SPARK-14839][SQL] Support for other types as option in ...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/13517 **[Test build #61672 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61672/consoleFull)** for PR 13517 at commit [`4b67bab`](https://github.com/apache/spark/commit/4b67bab4b8fc663284ac29b1e2b83ad75eb2ba74).
[GitHub] spark issue #13680: [SPARK-15962][SQL] Introduce implementation with a dense...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/13680 **[Test build #61671 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61671/consoleFull)** for PR 13680 at commit [`2cf96b4`](https://github.com/apache/spark/commit/2cf96b48c1bac00a162fe2c813d587982ad11263).
* This patch **fails Scala style tests**.
* This patch merges cleanly.
* This patch adds no public classes.
[GitHub] spark issue #13680: [SPARK-15962][SQL] Introduce implementation with a dense...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/13680 Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61671/ Test FAILed.
[GitHub] spark issue #13680: [SPARK-15962][SQL] Introduce implementation with a dense...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/13680 Merged build finished. Test FAILed.
[GitHub] spark issue #13680: [SPARK-15962][SQL] Introduce implementation with a dense...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/13680 **[Test build #61671 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61671/consoleFull)** for PR 13680 at commit [`2cf96b4`](https://github.com/apache/spark/commit/2cf96b48c1bac00a162fe2c813d587982ad11263).
[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...
Github user janplus commented on a diff in the pull request: https://github.com/apache/spark/pull/14008#discussion_r69385193

--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala ---
@@ -725,4 +725,51 @@ class StringExpressionsSuite extends SparkFunSuite with ExpressionEvalHelper {
     checkEvaluation(FindInSet(Literal("abf"), Literal("abc,b,ab,c,def")), 0)
     checkEvaluation(FindInSet(Literal("ab,"), Literal("abc,b,ab,c,def")), 0)
   }
+
+  test("ParseUrl") {
+    def checkParseUrl(expected: String, urlStr: String, partToExtract: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Seq(Literal.create(urlStr, StringType),
+          Literal.create(partToExtract, StringType))), expected)
+    }
+    def checkParseUrlWithKey(
+        expected: String, urlStr: String,
+        partToExtract: String, key: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Seq(Literal.create(urlStr, StringType), Literal.create(partToExtract, StringType),
+          Literal.create(key, StringType))), expected)
+    }
+
+    checkParseUrl("spark.apache.org", "http://spark.apache.org/path?query=1", "HOST")
+    checkParseUrl("/path", "http://spark.apache.org/path?query=1", "PATH")
+    checkParseUrl("query=1", "http://spark.apache.org/path?query=1", "QUERY")
+    checkParseUrl("Ref", "http://spark.apache.org/path?query=1#Ref", "REF")
+    checkParseUrl("http", "http://spark.apache.org/path?query=1", "PROTOCOL")
+    checkParseUrl("/path?query=1", "http://spark.apache.org/path?query=1", "FILE")
+    checkParseUrl("spark.apache.org:8080", "http://spark.apache.org:8080/path?query=1", "AUTHORITY")
+    checkParseUrl("userinfo", "http://useri...@spark.apache.org/path?query=1", "USERINFO")
+    checkParseUrlWithKey("1", "http://spark.apache.org/path?query=1", "QUERY", "query")
+
+    // Null checking
+    checkParseUrl(null, null, "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", null)
+    checkParseUrl(null, null, null)
+    checkParseUrl(null, "test", "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", "NO")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "HOST", "query")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "quer")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", null)
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "")
+
+    // exceptional cases
+    intercept[java.util.regex.PatternSyntaxException] {
--- End diff --

Hi, @dongjoon-hyun. It seems that Hive gives a `SemanticException` only when `url`, `partToExtract`, and `key` are all `Literal`s:

> hive> select * from url_parse_data;
> OK
> http://spark/path?    QUERY    ???
> Time taken: 0.054 seconds, Fetched: 1 row(s)

> hive> select parse_url("http://spark/path?", "QUERY", "???") from url_parse_data;
> FAILED: SemanticException [Error 10014]: Line 1:7 Wrong arguments '"???"': org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute method public java.lang.String org.apache.hadoop.hive.ql.udf.UDFParseUrl.evaluate(java.lang.String,java.lang.String,java.lang.String) on object org.apache.hadoop.hive.ql.udf.UDFParseUrl@59e082f8 of class org.apache.hadoop.hive.ql.udf.UDFParseUrl with arguments {http://spark/path?:java.lang.String, QUERY:java.lang.String, ???:java.lang.String} of size 3

> hive> select parse_url(url, "QUERY", "???") from url_parse_data;
> OK
> Failed with exception java.io.IOException:org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute method public java.lang.String org.apache.hadoop.hive.ql.udf.UDFParseUrl.evaluate(java.lang.String,java.lang.String,java.lang.String) on object org.apache.hadoop.hive.ql.udf.UDFParseUrl@7d1f3fe9 of class org.apache.hadoop.hive.ql.udf.UDFParseUrl with arguments {http://spark/path?:java.lang.String, QUERY:java.lang.String, ???:java.lang.String} of size 3

> hive> select parse_url("http://spark/path?", part, "???") from url_parse_data;
> OK
> Failed with exception java.io.IOException:org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute method public java.lang.String org.apache.hadoop.hive.ql.udf.UDFParseUrl.evaluate(java.lang.String,java.lang.String,java.lang.String) on object org.apache.hadoop.hive.ql.udf.UDFParseUrl@37fef327 of class org.apache.hadoop.hive.ql.udf.UDFParseUrl with arguments {http://spark/path?:java.lang.String, QUERY:java.lang.String, ???:java.lang.String} of size 3

> hive> select parse_url("http://spark/path?", "QUERY", key) from url_parse_data;
> OK
> Failed with exception
[GitHub] spark pull request #13517: [SPARK-14839][SQL] Support for other types as opt...
Github user HyukjinKwon commented on a diff in the pull request: https://github.com/apache/spark/pull/13517#discussion_r69385174

--- Diff: sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4 ---
@@ -45,11 +45,11 @@ statement
     | ALTER DATABASE identifier SET DBPROPERTIES tablePropertyList #setDatabaseProperties
     | DROP DATABASE (IF EXISTS)? identifier (RESTRICT | CASCADE)? #dropDatabase
     | createTableHeader ('(' colTypeList ')')? tableProvider
-        (OPTIONS tablePropertyList)?
+        (OPTIONS optionParameterList)?
         (PARTITIONED BY partitionColumnNames=identifierList)?
         bucketSpec? #createTableUsing
     | createTableHeader tableProvider
-        (OPTIONS tablePropertyList)?
--- End diff --

@hvanhovell I see. Thanks, I didn't realize you were on holiday. I will push some commits and wait. Please feel free to review when you have some time!

--- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. --- - To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org For additional commands, e-mail: reviews-h...@spark.apache.org
[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...
Github user dongjoon-hyun commented on a diff in the pull request: https://github.com/apache/spark/pull/14008#discussion_r69384995

--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala (same `ParseUrl` test hunk as quoted above) ---
--- End diff --

Thank you, @janplus.
[GitHub] spark issue #13976: [SPARK-16288][SQL] Implement inline table generating fun...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/13976 **[Test build #61670 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61670/consoleFull)** for PR 13976 at commit [`fed3ba2`](https://github.com/apache/spark/commit/fed3ba2bde5f82946c49fef5c06c85791400cea5).
[GitHub] spark issue #13976: [SPARK-16288][SQL] Implement inline table generating fun...
Github user dongjoon-hyun commented on the issue: https://github.com/apache/spark/pull/13976 Actually, that is not a bug, but I found a small difference between Spark and Hive with the following query.
```
SELECT inline(array(struct(a), struct(b))) FROM (SELECT 1 a, 2 b) T
```
In short, Spark does stricter type-checking: e.g., `[struct<a:int>, struct<b:int>]` is considered heterogeneous due to the field-name difference. I only added more tests to clarify the cases. We cannot touch that because it depends on many things. The following query is a workaround that works in both Spark and Hive.
```
SELECT inline(array(struct(a), named_struct('a', b))) FROM (SELECT 1 a, 2 b) T
```
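The name-sensitive type check described above can be modeled in a few lines of plain Scala. Note these are *not* Spark's real `StructType`/`StructField` classes, just a toy illustration of why `struct(a)` and `struct(b)` do not unify while the `named_struct` workaround does:

```scala
// Toy model of struct-type equality: two struct types with the same field
// types but different field names compare unequal, so an array containing
// both looks "heterogeneous" to the analyzer.
case class FieldModel(name: String, dataType: String)
case class StructModel(fields: Seq[FieldModel])

val structA  = StructModel(Seq(FieldModel("a", "int"))) // type of struct(a)
val structB  = StructModel(Seq(FieldModel("b", "int"))) // type of struct(b)
val renamedB = StructModel(Seq(FieldModel("a", "int"))) // type of named_struct('a', b)

assert(structA != structB)  // name mismatch: array(struct(a), struct(b)) is rejected
assert(structA == renamedB) // renaming aligns the types: the workaround type-checks
```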
[GitHub] spark issue #13976: [SPARK-16288][SQL] Implement inline table generating fun...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/13976 **[Test build #61669 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61669/consoleFull)** for PR 13976 at commit [`31ffa75`](https://github.com/apache/spark/commit/31ffa758cfd5aa41851cb77a15b03da6d54e9198).
[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...
Github user janplus commented on a diff in the pull request: https://github.com/apache/spark/pull/14008#discussion_r69384848

--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala (same `ParseUrl` test hunk as quoted above) ---
--- End diff --

I'll investigate this. The behavior appears to differ depending on whether `key` is a `Literal`:

> hive> select parse_url("http://spark/path?", "QUERY", "???");
> FAILED: SemanticException [Error 10014]: Line 1:7 Wrong arguments '"???"': org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute method public java.lang.String org.apache.hadoop.hive.ql.udf.UDFParseUrl.evaluate(java.lang.String,java.lang.String,java.lang.String) on object org.apache.hadoop.hive.ql.udf.UDFParseUrl@6682e6a5 of class org.apache.hadoop.hive.ql.udf.UDFParseUrl with arguments {http://spark/path?:java.lang.String, QUERY:java.lang.String, ???:java.lang.String} of size 3

> hive> select parse_url("http://spark/path?", "QUERY", name) from test;
> OK
> Failed with exception java.io.IOException:org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute method public java.lang.String org.apache.hadoop.hive.ql.udf.UDFParseUrl.evaluate(java.lang.String,java.lang.String,java.lang.String) on object org.apache.hadoop.hive.ql.udf.UDFParseUrl@2035d65b of class org.apache.hadoop.hive.ql.udf.UDFParseUrl with arguments {http://spark/path?:java.lang.String, QUERY:java.lang.String, ???:java.lang.String} of size 3
> Time taken: 0.039 seconds
[GitHub] spark issue #14030: [SPARK-16350][SQL] Fix support for incremental planning ...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/14030 **[Test build #61668 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61668/consoleFull)** for PR 14030 at commit [`2f8ba28`](https://github.com/apache/spark/commit/2f8ba2859c521979deacae87fa03460fec5c8191).
[GitHub] spark issue #14030: [SPARK-16350][SQL] Fix support for incremental planning ...
Github user lw-lin commented on the issue: https://github.com/apache/spark/pull/14030 Jenkins retest this please
[GitHub] spark issue #13976: [SPARK-16288][SQL] Implement inline table generating fun...
Github user dongjoon-hyun commented on the issue: https://github.com/apache/spark/pull/13976 Oh, I found a bug and am working on this.
[GitHub] spark pull request #13680: [SPARK-15962][SQL] Introduce implementation with ...
Github user kiszk commented on a diff in the pull request: https://github.com/apache/spark/pull/13680#discussion_r69384068

--- Diff: sql/catalyst/src/main/java/org/apache/spark/sql/catalyst/expressions/UnsafeArrayData.java ---
@@ -341,63 +324,113 @@ public UnsafeArrayData copy() {
     return arrayCopy;
   }

-  public static UnsafeArrayData fromPrimitiveArray(int[] arr) {
-    if (arr.length > (Integer.MAX_VALUE - 4) / 8) {
-      throw new UnsupportedOperationException("Cannot convert this array to unsafe format as " +
-        "it's too big.");
-    }
+  @Override
+  public boolean[] toBooleanArray() {
+    int size = numElements();
+    boolean[] values = new boolean[size];
+    Platform.copyMemory(
+      baseObject, baseOffset + headerInBytes, values, Platform.BYTE_ARRAY_OFFSET, size);
+    return values;
+  }
+
+  @Override
+  public byte[] toByteArray() {
+    int size = numElements();
+    byte[] values = new byte[size];
+    Platform.copyMemory(
+      baseObject, baseOffset + headerInBytes, values, Platform.BYTE_ARRAY_OFFSET, size);
+    return values;
+  }
+
+  @Override
+  public short[] toShortArray() {
+    int size = numElements();
+    short[] values = new short[size];
+    Platform.copyMemory(
+      baseObject, baseOffset + headerInBytes, values, Platform.SHORT_ARRAY_OFFSET, size * 2);
+    return values;
+  }

-    final int offsetRegionSize = 4 * arr.length;
-    final int valueRegionSize = 4 * arr.length;
-    final int totalSize = 4 + offsetRegionSize + valueRegionSize;
-    final byte[] data = new byte[totalSize];
+  @Override
+  public int[] toIntArray() {
+    int size = numElements();
+    int[] values = new int[size];
+    Platform.copyMemory(
+      baseObject, baseOffset + headerInBytes, values, Platform.INT_ARRAY_OFFSET, size * 4);
+    return values;
+  }

-    Platform.putInt(data, Platform.BYTE_ARRAY_OFFSET, arr.length);
+  @Override
+  public long[] toLongArray() {
+    int size = numElements();
+    long[] values = new long[size];
+    Platform.copyMemory(
+      baseObject, baseOffset + headerInBytes, values, Platform.LONG_ARRAY_OFFSET, size * 8);
+    return values;
+  }

-    int offsetPosition = Platform.BYTE_ARRAY_OFFSET + 4;
-    int valueOffset = 4 + offsetRegionSize;
-    for (int i = 0; i < arr.length; i++) {
-      Platform.putInt(data, offsetPosition, valueOffset);
-      offsetPosition += 4;
-      valueOffset += 4;
+  @Override
+  public float[] toFloatArray() {
+    int size = numElements();
+    float[] values = new float[size];
+    Platform.copyMemory(
+      baseObject, baseOffset + headerInBytes, values, Platform.FLOAT_ARRAY_OFFSET, size * 4);
+    return values;
+  }
+
+  @Override
+  public double[] toDoubleArray() {
+    int size = numElements();
+    double[] values = new double[size];
+    Platform.copyMemory(
+      baseObject, baseOffset + headerInBytes, values, Platform.DOUBLE_ARRAY_OFFSET, size * 8);
+    return values;
+  }
+
+  private static UnsafeArrayData fromPrimitiveArray(Object arr, int length, final int elementSize) {
+    final int headerSize = calculateHeaderPortionInBytes(length);
+    if (length > (Integer.MAX_VALUE - headerSize) / elementSize) {
+      throw new UnsupportedOperationException("Cannot convert this array to unsafe format as " +
+        "it's too big.");
     }
+    final int valueRegionSize = elementSize * length;
+    final byte[] data = new byte[valueRegionSize + headerSize];
--- End diff --

I decided not to change `numElements` from 4 bytes to 8 bytes. This is because `numElements()` is defined as `int` in `ArrayData`. It would be good to create another PR to change the type of `numElements()`.
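The overflow guard in `fromPrimitiveArray` quoted above can be checked in isolation. A minimal sketch (the function name is illustrative, not Spark's API, and the 8-byte header size is an assumption for the example):

```scala
// The total byte size headerSize + elementSize * length must fit in a signed
// 32-bit Int, because Java array lengths and numElements() are Ints. Dividing
// first avoids computing elementSize * length, which could itself overflow.
def fitsInUnsafeArray(length: Int, headerSize: Int, elementSize: Int): Boolean =
  length <= (Integer.MAX_VALUE - headerSize) / elementSize

// A small long array is fine; a length of Int.MaxValue / 4 with 8-byte
// elements would overflow the Int total size and is rejected.
assert(fitsInUnsafeArray(1024, 8, 8))
assert(!fitsInUnsafeArray(Integer.MAX_VALUE / 4, 8, 8))
```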
[GitHub] spark pull request #14026: [SPARK-13569][STREAMING][KAFKA] pattern based top...
Github user koeninger commented on a diff in the pull request: https://github.com/apache/spark/pull/14026#discussion_r69383724

--- Diff: external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala ---
@@ -79,8 +81,71 @@ private case class Subscribe[K, V](
   def onStart(currentOffsets: ju.Map[TopicPartition, jl.Long]): Consumer[K, V] = {
     val consumer = new KafkaConsumer[K, V](kafkaParams)
     consumer.subscribe(topics)
-    if (currentOffsets.isEmpty) {
-      offsets.asScala.foreach { case (topicPartition, offset) =>
+    val toSeek = if (currentOffsets.isEmpty) {
+      offsets
+    } else {
+      currentOffsets
+    }
+    if (!toSeek.isEmpty) {
+      // work around KAFKA-3370 when reset is none
+      val aor = kafkaParams.get(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG)
+      val shouldSuppress = aor != null && aor.asInstanceOf[String].toUpperCase == "NONE"
+      try {
+        consumer.poll(0)
+      } catch {
+        case x: NoOffsetForPartitionException if shouldSuppress =>
+          // silence exception
+      }
+      toSeek.asScala.foreach { case (topicPartition, offset) =>
+        consumer.seek(topicPartition, offset)
--- End diff --

Foreach is a scope; case is a nested scope.
[GitHub] spark issue #14008: [SPARK-16281][SQL] Implement parse_url SQL function
Github user dongjoon-hyun commented on the issue: https://github.com/apache/spark/pull/14008 Hi, @janplus. Overall, this PR looks solid to me now. It implements most of the logic in the same way as Hive's parse_url. The remaining difference from Hive is the `SemanticException` behavior; I left a comment about that. Thank you, @janplus.
[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...
Github user dongjoon-hyun commented on a diff in the pull request: https://github.com/apache/spark/pull/14008#discussion_r69383569

--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala (same `ParseUrl` test hunk as quoted above) ---
--- End diff --

In other words, with this PR Spark runs the execution for that problematic parameter while Hive does not.
[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...
Github user dongjoon-hyun commented on a diff in the pull request: https://github.com/apache/spark/pull/14008#discussion_r69383544

--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala (same `ParseUrl` test hunk as quoted above) ---
--- End diff --

In the case of Hive, it's also a `SemanticException`, not a raw `PatternSyntaxException`. You may need to investigate Hive's `SemanticException` logic.
[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...
Github user dongjoon-hyun commented on a diff in the pull request: https://github.com/apache/spark/pull/14008#discussion_r69383504

--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala (same `ParseUrl` test hunk as quoted above) ---
--- End diff --

Hi, @janplus. I thought about this a little more. Currently, this exception happens on the `Executor` side, which is not desirable. IMO, we had better make this an `AnalysisException`. Could you add some simple validation logic for `key`?
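The validation being asked for here can be sketched in a few lines. The helper below is hypothetical (not Spark's actual code); it only mirrors how `ParseUrl` builds its query regex from `REGEXPREFIX + key + REGEXSUBFIX` (see the `ParseUrl` companion object earlier in this thread). A key such as `???` yields the invalid pattern `(&|^)???=([^&]*)` and throws `PatternSyntaxException`; checking this up front would let the analyzer raise an `AnalysisException` instead of failing on an executor:

```scala
import java.util.regex.{Pattern, PatternSyntaxException}

// Hypothetical analysis-time check, mirroring ParseUrl's regex construction.
object ParseUrlKeyCheck {
  private val RegexPrefix = "(&|^)"   // ParseUrl.REGEXPREFIX
  private val RegexSuffix = "=([^&]*)" // ParseUrl.REGEXSUBFIX

  // Returns false when the key would produce an invalid pattern, so the
  // caller can reject the expression before any row is processed.
  def isValidKey(key: String): Boolean =
    try { Pattern.compile(RegexPrefix + key + RegexSuffix); true }
    catch { case _: PatternSyntaxException => false }
}

assert(ParseUrlKeyCheck.isValidKey("query"))  // ordinary key compiles
assert(!ParseUrlKeyCheck.isValidKey("???"))   // dangling '?' quantifier fails
```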
[GitHub] spark issue #13532: [SPARK-15204][SQL] improve nullability inference for Agg...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/13532 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61667/ Test PASSed.
[GitHub] spark issue #13532: [SPARK-15204][SQL] improve nullability inference for Agg...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/13532 Merged build finished. Test PASSed.
[GitHub] spark issue #13532: [SPARK-15204][SQL] improve nullability inference for Agg...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/13532 **[Test build #61667 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61667/consoleFull)** for PR 13532 at commit [`46ced5c`](https://github.com/apache/spark/commit/46ced5c5022bc930241724c6cc6e118293321dd3). * This patch passes all tests. * This patch merges cleanly. * This patch adds no public classes.
[GitHub] spark issue #14030: [SPARK-16350][SQL] Fix support for incremental planning ...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/14030 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61665/ Test PASSed.
[GitHub] spark issue #14030: [SPARK-16350][SQL] Fix support for incremental planning ...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/14030 Merged build finished. Test PASSed.
[GitHub] spark issue #14030: [SPARK-16350][SQL] Fix support for incremental planning ...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/14030 **[Test build #61665 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61665/consoleFull)** for PR 14030 at commit [`02cb6b5`](https://github.com/apache/spark/commit/02cb6b5fd8f6877d86c3307654060316ea14f815). * This patch passes all tests. * This patch merges cleanly. * This patch adds no public classes.
[GitHub] spark issue #14004: [SPARK-16285][SQL] Implement sentences SQL functions
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/14004 Merged build finished. Test PASSed.
[GitHub] spark issue #14004: [SPARK-16285][SQL] Implement sentences SQL functions
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/14004 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61664/ Test PASSed.
[GitHub] spark issue #14004: [SPARK-16285][SQL] Implement sentences SQL functions
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/14004 **[Test build #61664 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61664/consoleFull)** for PR 14004 at commit [`c9e235a`](https://github.com/apache/spark/commit/c9e235a3ea35bbd2cdf08503bce7156f8f3a4d1d). * This patch passes all tests. * This patch merges cleanly. * This patch adds the following public classes _(experimental)_: * `case class Sentences(`
[GitHub] spark issue #14030: [SPARK-16350][SQL] Fix support for incremental planning ...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/14030 Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61666/ Test FAILed.
[GitHub] spark issue #14030: [SPARK-16350][SQL] Fix support for incremental planning ...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/14030 Merged build finished. Test FAILed.
[GitHub] spark issue #14030: [SPARK-16350][SQL] Fix support for incremental planning ...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/14030 **[Test build #61666 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61666/consoleFull)** for PR 14030 at commit [`2f8ba28`](https://github.com/apache/spark/commit/2f8ba2859c521979deacae87fa03460fec5c8191). * This patch **fails Spark unit tests**. * This patch merges cleanly. * This patch adds no public classes.
[GitHub] spark issue #13967: [SPARK-16278][SPARK-16279][SQL] Implement map_keys/map_v...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/13967 Merged build finished. Test PASSed.
[GitHub] spark issue #13967: [SPARK-16278][SPARK-16279][SQL] Implement map_keys/map_v...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/13967 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61661/ Test PASSed.
[GitHub] spark issue #13765: [SPARK-16052][SQL] Improve `CollapseRepartition` optimiz...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/13765 Merged build finished. Test PASSed.
[GitHub] spark issue #13765: [SPARK-16052][SQL] Improve `CollapseRepartition` optimiz...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/13765 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61663/ Test PASSed.
[GitHub] spark issue #13967: [SPARK-16278][SPARK-16279][SQL] Implement map_keys/map_v...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/13967 **[Test build #61661 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61661/consoleFull)** for PR 13967 at commit [`8db1e65`](https://github.com/apache/spark/commit/8db1e656f27aa1647fca7c86405959262c3365fd). * This patch passes all tests. * This patch merges cleanly. * This patch adds no public classes.
[GitHub] spark issue #13765: [SPARK-16052][SQL] Improve `CollapseRepartition` optimiz...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/13765 **[Test build #61663 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61663/consoleFull)** for PR 13765 at commit [`e26e956`](https://github.com/apache/spark/commit/e26e956c89593bbae52c2cdc32b788ed7eea29c7). * This patch passes all tests. * This patch merges cleanly. * This patch adds no public classes.
[GitHub] spark issue #13876: [SPARK-16174][SQL] Improve `OptimizeIn` optimizer to rem...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/13876 Merged build finished. Test PASSed.
[GitHub] spark issue #13876: [SPARK-16174][SQL] Improve `OptimizeIn` optimizer to rem...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/13876 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61662/ Test PASSed.
[GitHub] spark issue #13876: [SPARK-16174][SQL] Improve `OptimizeIn` optimizer to rem...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/13876 **[Test build #61662 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61662/consoleFull)** for PR 13876 at commit [`63b3ecd`](https://github.com/apache/spark/commit/63b3ecd98eafa6363d3c07835cb06909ea1a23e8). * This patch passes all tests. * This patch merges cleanly. * This patch adds no public classes.
[GitHub] spark pull request #14020: [SPARK-16349][sql] Fall back to isolated class lo...
Github user jaceklaskowski commented on a diff in the pull request: https://github.com/apache/spark/pull/14020#discussion_r69382827

--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/client/IsolatedClientLoader.scala ---
@@ -264,7 +270,7 @@ private[hive] class IsolatedClientLoader(
         throw new ClassNotFoundException(
           s"$cnf when creating Hive client using classpath: ${execJars.mkString(", ")}\n" +
           "Please make sure that jars for your version of hive and hadoop are included in the " +
--- End diff --

Just a nitpick: should 'hive' be capitalized as Hive, like on the line above, and likewise Hadoop?
[GitHub] spark pull request #14026: [SPARK-13569][STREAMING][KAFKA] pattern based top...
Github user jaceklaskowski commented on a diff in the pull request: https://github.com/apache/spark/pull/14026#discussion_r69382788

--- Diff: external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala ---
@@ -79,8 +81,71 @@ private case class Subscribe[K, V](
   def onStart(currentOffsets: ju.Map[TopicPartition, jl.Long]): Consumer[K, V] = {
     val consumer = new KafkaConsumer[K, V](kafkaParams)
     consumer.subscribe(topics)
-    if (currentOffsets.isEmpty) {
-      offsets.asScala.foreach { case (topicPartition, offset) =>
+    val toSeek = if (currentOffsets.isEmpty) {
+      offsets
+    } else {
+      currentOffsets
+    }
+    if (!toSeek.isEmpty) {
+      // work around KAFKA-3370 when reset is none
+      val aor = kafkaParams.get(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG)
+      val shouldSuppress = aor != null && aor.asInstanceOf[String].toUpperCase == "NONE"
+      try {
+        consumer.poll(0)
+      } catch {
+        case x: NoOffsetForPartitionException if shouldSuppress =>
+          // silence exception
+      }
+      toSeek.asScala.foreach { case (topicPartition, offset) =>
+        consumer.seek(topicPartition, offset)
--- End diff --

4 chars for indent?
[GitHub] spark pull request #14031: [SPARK-16353][BUILD][DOC] Missing javadoc options...
Github user jaceklaskowski commented on a diff in the pull request: https://github.com/apache/spark/pull/14031#discussion_r69382719

--- Diff: project/SparkBuild.scala ---
@@ -723,8 +723,8 @@ object Unidoc {
         .map(_.filterNot(_.getCanonicalPath.contains("org/apache/hadoop")))
     },

-    // Javadoc options: create a window title, and group key packages on index page
-    javacOptions in doc := Seq(
+    // Javadoc options: create a window title
--- End diff --

Do we really need that line? It's in the git history at the very least, and in JIRA.
[GitHub] spark pull request #14030: [SPARK-16350][SQL] Fix support for incremental pl...
Github user jaceklaskowski commented on a diff in the pull request: https://github.com/apache/spark/pull/14030#discussion_r69382676

--- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/ForeachSinkSuite.scala ---
@@ -35,35 +35,109 @@ class ForeachSinkSuite extends StreamTest with SharedSQLContext with BeforeAndAf
     sqlContext.streams.active.foreach(_.stop())
   }

-  test("foreach") {
+  test("foreach() with `append` output mode") {
     withTempDir { checkpointDir =>
       val input = MemoryStream[Int]
       val query = input.toDS().repartition(2).writeStream
         .option("checkpointLocation", checkpointDir.getCanonicalPath)
+        .outputMode("append")
         .foreach(new TestForeachWriter())
         .start()
+
+      // -- batch 0 ---
       input.addData(1, 2, 3, 4)
       query.processAllAvailable()

-      val expectedEventsForPartition0 = Seq(
+      var expectedEventsForPartition0 = Seq(
         ForeachSinkSuite.Open(partition = 0, version = 0),
         ForeachSinkSuite.Process(value = 1),
         ForeachSinkSuite.Process(value = 3),
         ForeachSinkSuite.Close(None)
       )
-      val expectedEventsForPartition1 = Seq(
+      var expectedEventsForPartition1 = Seq(
         ForeachSinkSuite.Open(partition = 1, version = 0),
         ForeachSinkSuite.Process(value = 2),
         ForeachSinkSuite.Process(value = 4),
         ForeachSinkSuite.Close(None)
       )

-      val allEvents = ForeachSinkSuite.allEvents()
+      var allEvents = ForeachSinkSuite.allEvents()
+      assert(allEvents.size === 2)
+      assert {
+        allEvents === Seq(expectedEventsForPartition0, expectedEventsForPartition1) ||
+          allEvents === Seq(expectedEventsForPartition1, expectedEventsForPartition0)
+      }
+
+      ForeachSinkSuite.clear()
+
+      // -- batch 1 ---
+      input.addData(5, 6, 7, 8)
+      query.processAllAvailable()
+
+      expectedEventsForPartition0 = Seq(
+        ForeachSinkSuite.Open(partition = 0, version = 1),
+        ForeachSinkSuite.Process(value = 5),
+        ForeachSinkSuite.Process(value = 7),
+        ForeachSinkSuite.Close(None)
+      )
+      expectedEventsForPartition1 = Seq(
+        ForeachSinkSuite.Open(partition = 1, version = 1),
+        ForeachSinkSuite.Process(value = 6),
+        ForeachSinkSuite.Process(value = 8),
+        ForeachSinkSuite.Close(None)
+      )
+
+      allEvents = ForeachSinkSuite.allEvents()
       assert(allEvents.size === 2)
       assert {
         allEvents === Seq(expectedEventsForPartition0, expectedEventsForPartition1) ||
           allEvents === Seq(expectedEventsForPartition1, expectedEventsForPartition0)
       }
+
+      query.stop()
+    }
+  }
+
+  test("foreach() with `complete` output mode") {
+    withTempDir { checkpointDir =>
+      val input = MemoryStream[Int]
+
+      val query = input.toDS()
+        .groupBy().count().as[Long].map(_.toInt)
+        .writeStream
+        .option("checkpointLocation", checkpointDir.getCanonicalPath)
+        .outputMode("complete")
--- End diff --

Are output modes really strings? No enums or similar, more type-safe values?
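To the reviewer's question about type safety: a more type-safe alternative could model output modes as a sealed ADT and parse the user-facing string once, at plan construction time. This is a hypothetical sketch, not Spark's actual API:

```scala
// Hypothetical sealed hierarchy for output modes; a misspelled mode string
// then fails fast at plan construction instead of deep inside execution.
sealed trait OutputMode
case object Append extends OutputMode
case object Complete extends OutputMode

object OutputMode {
  def fromString(s: String): OutputMode = s.toLowerCase match {
    case "append"   => Append
    case "complete" => Complete
    case other      => throw new IllegalArgumentException(s"Unknown output mode: $other")
  }
}
```

With this shape, `outputMode` could accept either the string form (for SQL and config compatibility) or the `OutputMode` value directly, and the string overload would just delegate to `fromString`.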
[GitHub] spark pull request #14030: [SPARK-16350][SQL] Fix support for incremental pl...
Github user jaceklaskowski commented on a diff in the pull request: https://github.com/apache/spark/pull/14030#discussion_r69382669

--- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/ForeachSinkSuite.scala ---
@@ -35,35 +35,109 @@ class ForeachSinkSuite extends StreamTest with SharedSQLContext with BeforeAndAf
     sqlContext.streams.active.foreach(_.stop())
   }

-  test("foreach") {
+  test("foreach() with `append` output mode") {
     withTempDir { checkpointDir =>
       val input = MemoryStream[Int]
       val query = input.toDS().repartition(2).writeStream
         .option("checkpointLocation", checkpointDir.getCanonicalPath)
+        .outputMode("append")
         .foreach(new TestForeachWriter())
         .start()
+
+      // -- batch 0 ---
       input.addData(1, 2, 3, 4)
       query.processAllAvailable()

-      val expectedEventsForPartition0 = Seq(
+      var expectedEventsForPartition0 = Seq(
         ForeachSinkSuite.Open(partition = 0, version = 0),
         ForeachSinkSuite.Process(value = 1),
         ForeachSinkSuite.Process(value = 3),
         ForeachSinkSuite.Close(None)
       )
-      val expectedEventsForPartition1 = Seq(
+      var expectedEventsForPartition1 = Seq(
         ForeachSinkSuite.Open(partition = 1, version = 0),
         ForeachSinkSuite.Process(value = 2),
         ForeachSinkSuite.Process(value = 4),
         ForeachSinkSuite.Close(None)
       )

-      val allEvents = ForeachSinkSuite.allEvents()
+      var allEvents = ForeachSinkSuite.allEvents()
+      assert(allEvents.size === 2)
+      assert {
+        allEvents === Seq(expectedEventsForPartition0, expectedEventsForPartition1) ||
+          allEvents === Seq(expectedEventsForPartition1, expectedEventsForPartition0)
+      }
+
+      ForeachSinkSuite.clear()
+
+      // -- batch 1 ---
+      input.addData(5, 6, 7, 8)
+      query.processAllAvailable()
+
+      expectedEventsForPartition0 = Seq(
+        ForeachSinkSuite.Open(partition = 0, version = 1),
+        ForeachSinkSuite.Process(value = 5),
+        ForeachSinkSuite.Process(value = 7),
+        ForeachSinkSuite.Close(None)
+      )
+      expectedEventsForPartition1 = Seq(
+        ForeachSinkSuite.Open(partition = 1, version = 1),
+        ForeachSinkSuite.Process(value = 6),
+        ForeachSinkSuite.Process(value = 8),
+        ForeachSinkSuite.Close(None)
+      )
+
+      allEvents = ForeachSinkSuite.allEvents()
       assert(allEvents.size === 2)
       assert {
         allEvents === Seq(expectedEventsForPartition0, expectedEventsForPartition1) ||
           allEvents === Seq(expectedEventsForPartition1, expectedEventsForPartition0)
--- End diff --

Same as above.
[GitHub] spark pull request #14030: [SPARK-16350][SQL] Fix support for incremental pl...
Github user jaceklaskowski commented on a diff in the pull request: https://github.com/apache/spark/pull/14030#discussion_r69382667

--- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/ForeachSinkSuite.scala ---
@@ -35,35 +35,109 @@ class ForeachSinkSuite extends StreamTest with SharedSQLContext with BeforeAndAf
     sqlContext.streams.active.foreach(_.stop())
   }

-  test("foreach") {
+  test("foreach() with `append` output mode") {
     withTempDir { checkpointDir =>
       val input = MemoryStream[Int]
       val query = input.toDS().repartition(2).writeStream
         .option("checkpointLocation", checkpointDir.getCanonicalPath)
+        .outputMode("append")
         .foreach(new TestForeachWriter())
         .start()
+
+      // -- batch 0 ---
       input.addData(1, 2, 3, 4)
       query.processAllAvailable()

-      val expectedEventsForPartition0 = Seq(
+      var expectedEventsForPartition0 = Seq(
         ForeachSinkSuite.Open(partition = 0, version = 0),
         ForeachSinkSuite.Process(value = 1),
         ForeachSinkSuite.Process(value = 3),
         ForeachSinkSuite.Close(None)
       )
-      val expectedEventsForPartition1 = Seq(
+      var expectedEventsForPartition1 = Seq(
         ForeachSinkSuite.Open(partition = 1, version = 0),
         ForeachSinkSuite.Process(value = 2),
         ForeachSinkSuite.Process(value = 4),
         ForeachSinkSuite.Close(None)
       )

-      val allEvents = ForeachSinkSuite.allEvents()
+      var allEvents = ForeachSinkSuite.allEvents()
+      assert(allEvents.size === 2)
+      assert {
+        allEvents === Seq(expectedEventsForPartition0, expectedEventsForPartition1) ||
+          allEvents === Seq(expectedEventsForPartition1, expectedEventsForPartition0)
--- End diff --

`should contain theSameElementsAs`? See http://www.scalatest.org/user_guide/using_matchers#workingWithAggregations
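The matcher suggested here asserts multiset equality (same elements, ignoring order). In plain Scala, without the ScalaTest dependency, the same check can be sketched with an illustrative helper like this:

```scala
// Multiset equality: the semantics of ScalaTest's `theSameElementsAs`.
// Compares element counts, so ordering is ignored but duplicates matter.
def sameElements[A](left: Seq[A], right: Seq[A]): Boolean =
  left.groupBy(identity).map { case (k, vs) => (k, vs.size) } ==
    right.groupBy(identity).map { case (k, vs) => (k, vs.size) }
```

This would replace the hand-written `===` disjunction over both partition orders with a single, order-insensitive assertion.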
[GitHub] spark issue #13680: [SPARK-15962][SQL] Introduce implementation with a dense...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/13680 Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61660/ Test FAILed.
[GitHub] spark issue #13680: [SPARK-15962][SQL] Introduce implementation with a dense...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/13680 Merged build finished. Test FAILed.
[GitHub] spark issue #13680: [SPARK-15962][SQL] Introduce implementation with a dense...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/13680 **[Test build #61660 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61660/consoleFull)** for PR 13680 at commit [`7576c19`](https://github.com/apache/spark/commit/7576c19dfc872221d10abf7851e0782a76822ab0). * This patch **fails Spark unit tests**. * This patch merges cleanly. * This patch adds no public classes.
[GitHub] spark issue #13532: [SPARK-15204][SQL] improve nullability inference for Agg...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/13532 **[Test build #61667 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61667/consoleFull)** for PR 13532 at commit [`46ced5c`](https://github.com/apache/spark/commit/46ced5c5022bc930241724c6cc6e118293321dd3).
[GitHub] spark issue #14030: [SPARK-16350][SQL] Fix `foreach` for streaming Dataset
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/14030 **[Test build #61666 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61666/consoleFull)** for PR 14030 at commit [`2f8ba28`](https://github.com/apache/spark/commit/2f8ba2859c521979deacae87fa03460fec5c8191).
[GitHub] spark issue #14030: [WIP][SPARK-16350][SQL] Fix `foreach` for streaming Data...
Github user lw-lin commented on the issue: https://github.com/apache/spark/pull/14030 Jenkins retest this please
[GitHub] spark issue #14030: [WIP][SPARK-16350][SQL] Fix `foreach` for streaming Data...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/14030 **[Test build #61665 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61665/consoleFull)** for PR 14030 at commit [`02cb6b5`](https://github.com/apache/spark/commit/02cb6b5fd8f6877d86c3307654060316ea14f815).
[GitHub] spark issue #14031: [SPARK-16353][BUILD][DOC] Missing javadoc options for ja...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/14031 Can one of the admins verify this patch?
[GitHub] spark pull request #14031: [SPARK-16353][BUILD][DOC] Missing javadoc options...
Github user mallman commented on a diff in the pull request: https://github.com/apache/spark/pull/14031#discussion_r69382212
--- Diff: project/SparkBuild.scala ---
@@ -723,8 +723,8 @@ object Unidoc {
       .map(_.filterNot(_.getCanonicalPath.contains("org/apache/hadoop")))
     },

-    // Javadoc options: create a window title, and group key packages on index page
--- End diff --
BTW, I removed the mention of package groupings because none are defined.
[GitHub] spark pull request #14031: [SPARK-16353][BUILD][DOC] Missing javadoc options...
GitHub user mallman opened a pull request:
https://github.com/apache/spark/pull/14031
[SPARK-16353][BUILD][DOC] Missing javadoc options for java unidoc

## What changes were proposed in this pull request?

The javadoc options for the java unidoc generation are ignored when generating the java unidoc. For example, the generated `index.html` has the wrong HTML page title. This can be seen at http://spark.apache.org/docs/latest/api/java/index.html. I changed the relevant setting scope from `doc` to `(JavaUnidoc, unidoc)`.

## How was this patch tested?

I ran `docs/jekyll build` and verified that the java unidoc `index.html` has the correct HTML page title.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/VideoAmp/spark-public spark-16353

Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/14031.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #14031

commit 939e8b5d3a3b502f3a7870d437cb38ee9564e6c4
Author: Michael Allman
Date: 2016-07-02T19:55:39Z

    [SPARK-16353][BUILD][DOC] The javadoc options for the java unidoc generation are not honored. The scope of the relevant javacOptions key should be `(JavaUnidoc, unidoc)`, not `doc`
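For context, the fix above boils down to an sbt scoping change: settings scoped to the `doc` task are not picked up by `unidoc`, so the javadoc options must be scoped to `(JavaUnidoc, unidoc)` instead. A rough sketch of the shape of the change follows; the option values shown are illustrative, not the exact ones in SparkBuild.scala:

```scala
// Illustrative sketch only: scoping javacOptions to the unidoc task in the
// JavaUnidoc configuration, so java unidoc generation actually honors them.
javacOptions in (JavaUnidoc, unidoc) := Seq(
  "-windowtitle", "Spark API Documentation"  // controls the HTML page title of index.html
)
```

The same key scoped to `doc` compiles fine but is simply never consulted by the `unidoc` task, which is why the bug went unnoticed.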
[GitHub] spark issue #14004: [SPARK-16285][SQL] Implement sentences SQL functions
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/14004 **[Test build #61664 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61664/consoleFull)** for PR 14004 at commit [`c9e235a`](https://github.com/apache/spark/commit/c9e235a3ea35bbd2cdf08503bce7156f8f3a4d1d).
[GitHub] spark issue #14004: [SPARK-16285][SQL] Implement sentences SQL functions
Github user dongjoon-hyun commented on the issue: https://github.com/apache/spark/pull/14004 Just rebased.
[GitHub] spark issue #13876: [SPARK-16174][SQL] Improve `OptimizeIn` optimizer to rem...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/13876 **[Test build #61662 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61662/consoleFull)** for PR 13876 at commit [`63b3ecd`](https://github.com/apache/spark/commit/63b3ecd98eafa6363d3c07835cb06909ea1a23e8).
[GitHub] spark issue #13967: [SPARK-16278][SPARK-16279][SQL] Implement map_keys/map_v...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/13967 **[Test build #61661 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61661/consoleFull)** for PR 13967 at commit [`8db1e65`](https://github.com/apache/spark/commit/8db1e656f27aa1647fca7c86405959262c3365fd).
[GitHub] spark issue #13765: [SPARK-16052][SQL] Improve `CollapseRepartition` optimiz...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/13765 **[Test build #61663 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61663/consoleFull)** for PR 13765 at commit [`e26e956`](https://github.com/apache/spark/commit/e26e956c89593bbae52c2cdc32b788ed7eea29c7).
[GitHub] spark issue #13967: [SPARK-16278][SPARK-16279][SQL] Implement map_keys/map_v...
Github user dongjoon-hyun commented on the issue: https://github.com/apache/spark/pull/13967 Rebased to the master.
[GitHub] spark issue #13876: [SPARK-16174][SQL] Improve `OptimizeIn` optimizer to rem...
Github user dongjoon-hyun commented on the issue: https://github.com/apache/spark/pull/13876 Rebased to the master.
[GitHub] spark issue #13765: [SPARK-16052][SQL] Improve `CollapseRepartition` optimiz...
Github user dongjoon-hyun commented on the issue: https://github.com/apache/spark/pull/13765 Rebased to the master.
[GitHub] spark issue #14017: [MINOR][BUILD] Fix Java linter errors
Github user dongjoon-hyun commented on the issue: https://github.com/apache/spark/pull/14017 Thank you for merging, @srowen .
[GitHub] spark pull request #13680: [SPARK-15962][SQL] Introduce implementation with ...
Github user kiszk commented on a diff in the pull request: https://github.com/apache/spark/pull/13680#discussion_r69381782
--- Diff: sql/catalyst/src/main/java/org/apache/spark/sql/catalyst/expressions/UnsafeArrayData.java ---
@@ -341,63 +324,113 @@ public UnsafeArrayData copy() {
     return arrayCopy;
   }

-  public static UnsafeArrayData fromPrimitiveArray(int[] arr) {
-    if (arr.length > (Integer.MAX_VALUE - 4) / 8) {
-      throw new UnsupportedOperationException("Cannot convert this array to unsafe format as " +
-        "it's too big.");
-    }
+  @Override
+  public boolean[] toBooleanArray() {
+    int size = numElements();
+    boolean[] values = new boolean[size];
+    Platform.copyMemory(
+      baseObject, baseOffset + headerInBytes, values, Platform.BYTE_ARRAY_OFFSET, size);
+    return values;
+  }
+
+  @Override
+  public byte[] toByteArray() {
+    int size = numElements();
+    byte[] values = new byte[size];
+    Platform.copyMemory(
+      baseObject, baseOffset + headerInBytes, values, Platform.BYTE_ARRAY_OFFSET, size);
+    return values;
+  }
+
+  @Override
+  public short[] toShortArray() {
+    int size = numElements();
+    short[] values = new short[size];
+    Platform.copyMemory(
+      baseObject, baseOffset + headerInBytes, values, Platform.SHORT_ARRAY_OFFSET, size * 2);
+    return values;
+  }

-    final int offsetRegionSize = 4 * arr.length;
-    final int valueRegionSize = 4 * arr.length;
-    final int totalSize = 4 + offsetRegionSize + valueRegionSize;
-    final byte[] data = new byte[totalSize];
+  @Override
+  public int[] toIntArray() {
+    int size = numElements();
+    int[] values = new int[size];
+    Platform.copyMemory(
+      baseObject, baseOffset + headerInBytes, values, Platform.INT_ARRAY_OFFSET, size * 4);
+    return values;
+  }

-    Platform.putInt(data, Platform.BYTE_ARRAY_OFFSET, arr.length);
+  @Override
+  public long[] toLongArray() {
+    int size = numElements();
+    long[] values = new long[size];
+    Platform.copyMemory(
+      baseObject, baseOffset + headerInBytes, values, Platform.LONG_ARRAY_OFFSET, size * 8);
+    return values;
+  }

-    int offsetPosition = Platform.BYTE_ARRAY_OFFSET + 4;
-    int valueOffset = 4 + offsetRegionSize;
-    for (int i = 0; i < arr.length; i++) {
-      Platform.putInt(data, offsetPosition, valueOffset);
-      offsetPosition += 4;
-      valueOffset += 4;
+  @Override
+  public float[] toFloatArray() {
+    int size = numElements();
+    float[] values = new float[size];
+    Platform.copyMemory(
+      baseObject, baseOffset + headerInBytes, values, Platform.FLOAT_ARRAY_OFFSET, size * 4);
+    return values;
+  }
+
+  @Override
+  public double[] toDoubleArray() {
+    int size = numElements();
+    double[] values = new double[size];
+    Platform.copyMemory(
+      baseObject, baseOffset + headerInBytes, values, Platform.DOUBLE_ARRAY_OFFSET, size * 8);
+    return values;
+  }
+
+  private static UnsafeArrayData fromPrimitiveArray(Object arr, int length, final int elementSize) {
+    final int headerSize = calculateHeaderPortionInBytes(length);
+    if (length > (Integer.MAX_VALUE - headerSize) / elementSize) {
+      throw new UnsupportedOperationException("Cannot convert this array to unsafe format as " +
+        "it's too big.");
    }
+    final int valueRegionSize = elementSize * length;
+    final byte[] data = new byte[valueRegionSize + headerSize];
--- End diff --
Yes, when tests on Jenkins pass, I will expand fields for ```long[]``` (e.g. 4 bytes -> 8 bytes for ```numElements```).
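The overflow guard in the new `fromPrimitiveArray` above deserves a closer look. Java array sizes are `int`s, so the total byte size `headerSize + elementSize * length` must be validated before allocation, and the check itself is rearranged by division so it cannot overflow. A self-contained sketch of just that pattern (plain Java with hypothetical names, not Spark's actual class):

```java
// Hypothetical sketch of the overflow-guard pattern from the diff above:
// validate headerSize + elementSize * length fits in an int before allocating.
public class ArraySizeGuard {
    // Returns the total byte size, or throws if it would overflow Integer.MAX_VALUE.
    static int safeTotalSize(int length, int elementSize, int headerSize) {
        // headerSize + elementSize * length <= Integer.MAX_VALUE
        //   <=>  length <= (Integer.MAX_VALUE - headerSize) / elementSize
        // The rearranged form never overflows during the comparison itself.
        if (length > (Integer.MAX_VALUE - headerSize) / elementSize) {
            throw new UnsupportedOperationException(
                "Cannot convert this array to unsafe format as it's too big.");
        }
        return headerSize + elementSize * length;
    }

    public static void main(String[] args) {
        // A 16-byte header plus 10 longs (8 bytes each) -> 96 bytes total.
        System.out.println(safeTotalSize(10, 8, 16)); // prints 96
    }
}
```

The naive form `headerSize + elementSize * length > Integer.MAX_VALUE` would silently wrap around and accept oversized arrays, which is exactly what the division-based form prevents.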
[GitHub] spark issue #13680: [SPARK-15962][SQL] Introduce implementation with a dense...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/13680 **[Test build #61660 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61660/consoleFull)** for PR 13680 at commit [`7576c19`](https://github.com/apache/spark/commit/7576c19dfc872221d10abf7851e0782a76822ab0).
[GitHub] spark pull request #13680: [SPARK-15962][SQL] Introduce implementation with ...
Github user kiszk commented on a diff in the pull request: https://github.com/apache/spark/pull/13680#discussion_r69381568
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/UnsafeArrayDataBenchmark.scala ---
@@ -0,0 +1,298 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution.benchmark
+
+import org.apache.spark.SparkConf
+import org.apache.spark.sql.catalyst.expressions.{UnsafeArrayData, UnsafeRow}
+import org.apache.spark.sql.catalyst.expressions.codegen.{BufferHolder, UnsafeArrayWriter}
+import org.apache.spark.unsafe.Platform
+import org.apache.spark.util.Benchmark
+
+/**
+ * Benchmark [[UnsafeArrayDataBenchmark]] for UnsafeArrayData
+ * To run this:
+ *   build/sbt "sql/test-only *benchmark.UnsafeArrayDataBenchmark"
+ *
+ * Benchmarks in this file are skipped in normal builds.
+ */
+class UnsafeArrayDataBenchmark extends BenchmarkBase {
+
+  new SparkConf()
+    .setMaster("local[1]")
+    .setAppName("microbenchmark")
+    .set("spark.driver.memory", "3g")
+
+  def calculateHeaderPortionInBytes(count: Int) : Int = {
+    // Use this assignment for SPARK-15962
+    // val size = 4 + 4 * count
+    val size = UnsafeArrayData.calculateHeaderPortionInBytes(count)
+    size
+  }
+
+  def readUnsafeArray(iters: Int): Unit = {
--- End diff --
I will update the allocation method of ```UnsafeArrayData``` for "normal read". For "normal write", I think it is not possible to turn it into ```UnsafeArray``` for writes, because ```UnsafeArrayData``` does not have a ```write(int)``` or ```putInt()``` method. This is why we use ```UnsafeArrayWriter```. We have already done this for "from primitive array" and "to primitive array".
[GitHub] spark issue #14030: [WIP][SPARK-16350][SQL] Fix `foreach` for streaming Data...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/14030 Merged build finished. Test PASSed.
[GitHub] spark issue #14030: [WIP][SPARK-16350][SQL] Fix `foreach` for streaming Data...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/14030 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61659/ Test PASSed.
[GitHub] spark issue #14030: [WIP][SPARK-16350][SQL] Fix `foreach` for streaming Data...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/14030 **[Test build #61659 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61659/consoleFull)** for PR 14030 at commit [`f3f60f9`](https://github.com/apache/spark/commit/f3f60f919a2070a6946d0d908b54225d3c2263fc).
* This patch passes all tests.
* This patch merges cleanly.
* This patch adds no public classes.
[GitHub] spark pull request #13517: [SPARK-14839][SQL] Support for other types as opt...
Github user hvanhovell commented on a diff in the pull request: https://github.com/apache/spark/pull/13517#discussion_r69380919
--- Diff: sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4 ---
@@ -45,11 +45,11 @@ statement
     | ALTER DATABASE identifier SET DBPROPERTIES tablePropertyList #setDatabaseProperties
     | DROP DATABASE (IF EXISTS)? identifier (RESTRICT | CASCADE)? #dropDatabase
     | createTableHeader ('(' colTypeList ')')? tableProvider
-        (OPTIONS tablePropertyList)?
+        (OPTIONS optionParameterList)?
         (PARTITIONED BY partitionColumnNames=identifierList)?
         bucketSpec? #createTableUsing
     | createTableHeader tableProvider
-        (OPTIONS tablePropertyList)?
--- End diff --
@HyukjinKwon I am on holiday, so I am a bit slow with my responses. You have understood me correctly. What I am suggesting will affect DBPROPERTIES and TBLPROPERTIES; it will also allow for boolean and numeric options. I don't think this is a bad thing: it is better to have a lenient parser and to constrain behavior in the `AstBuilder` (this allows us to throw much better error messages).
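The leniency described here, accepting boolean and numeric option values in the grammar and validating them later in the `AstBuilder`, could be sketched as an ANTLR rule roughly like the following. The rule and token names below are hypothetical illustrations (only `optionParameterList` is confirmed by the diff above); they are not the exact contents of SqlBase.g4:

```antlr
// Hypothetical sketch: a lenient option value that also admits booleans and
// numbers. The AstBuilder then validates the value and can emit a targeted
// error message instead of a generic parse failure.
optionParameterList
    : '(' optionParameter (',' optionParameter)* ')'
    ;

optionParameter
    : key=identifier '=' value=optionValue
    ;

optionValue
    : STRING
    | booleanValue
    | INTEGER_VALUE
    | DECIMAL_VALUE
    ;
```

The design trade-off is exactly as stated in the comment: a permissive grammar moves error handling from the parser (where messages are generic) into the AST builder (where the context needed for a good message is available).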
[GitHub] spark issue #14029: [MINOR] [DOCS] Remove unused images; crush PNGs that cou...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/14029 Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61658/ Test PASSed.
[GitHub] spark issue #14029: [MINOR] [DOCS] Remove unused images; crush PNGs that cou...
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/14029 Merged build finished. Test PASSed.
[GitHub] spark issue #14029: [MINOR] [DOCS] Remove unused images; crush PNGs that cou...
Github user SparkQA commented on the issue: https://github.com/apache/spark/pull/14029 **[Test build #61658 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61658/consoleFull)** for PR 14029 at commit [`c54e560`](https://github.com/apache/spark/commit/c54e5602c39e107680e681786287ab723586ad80).
* This patch passes all tests.
* This patch merges cleanly.
* This patch adds no public classes.
[GitHub] spark pull request #14030: [WIP][SPARK-16350][SQL] Fix `foreach` for streami...
GitHub user lw-lin opened a pull request:
https://github.com/apache/spark/pull/14030
[WIP][SPARK-16350][SQL] Fix `foreach` for streaming Dataset

## What changes were proposed in this pull request?

- [x] add tests
- [ ] fix `foreach`

## How was this patch tested?

(Please explain how this patch was tested. E.g. unit tests, integration tests, manual tests)
(If this patch involves UI changes, please attach a screenshot; otherwise, remove this)

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/lw-lin/spark fix-foreach-complete

Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/14030.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #14030

commit f3f60f919a2070a6946d0d908b54225d3c2263fc
Author: Liwei Lin
Date: 2016-07-02T14:56:06Z

    Add test(`complete`) & expand test(`append`)