[GitHub] spark pull request #22553: [SPARK-25541][SQL] CaseInsensitiveMap should be s...
Github user cloud-fan commented on a diff in the pull request: https://github.com/apache/spark/pull/22553#discussion_r220770689

--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/CaseInsensitiveMap.scala ---

```
@@ -42,7 +42,11 @@ class CaseInsensitiveMap[T] private (val originalMap: Map[String, T]) extends Ma
   override def iterator: Iterator[(String, T)] = keyLowerCasedMap.iterator

   override def -(key: String): Map[String, T] = {
-    new CaseInsensitiveMap(originalMap.filterKeys(!_.equalsIgnoreCase(key)))
+    new CaseInsensitiveMap(originalMap.filter(!_._1.equalsIgnoreCase(key)))
+  }
+
+  override def filterKeys(p: (String) => Boolean): Map[String, T] = {
```

--- End diff --

It's safe, but it violates the documented semantics. Let's remove it.

---

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
[GitHub] spark pull request #22553: [SPARK-25541][SQL] CaseInsensitiveMap should be s...
Github user gengliangwang commented on a diff in the pull request: https://github.com/apache/spark/pull/22553#discussion_r220770316

--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/CaseInsensitiveMap.scala ---

```
@@ -42,7 +42,11 @@ class CaseInsensitiveMap[T] private (val originalMap: Map[String, T]) extends Ma
   override def iterator: Iterator[(String, T)] = keyLowerCasedMap.iterator

   override def -(key: String): Map[String, T] = {
-    new CaseInsensitiveMap(originalMap.filterKeys(!_.equalsIgnoreCase(key)))
+    new CaseInsensitiveMap(originalMap.filter(!_._1.equalsIgnoreCase(key)))
+  }
+
+  override def filterKeys(p: (String) => Boolean): Map[String, T] = {
```

--- End diff --

It seems safer to override the method, though. Anyway, I will change all the method calls of `filterKeys` and add a comment in `CaseInsensitiveMap`.
[GitHub] spark pull request #22553: [SPARK-25541][SQL] CaseInsensitiveMap should be s...
Github user zsxwing commented on a diff in the pull request: https://github.com/apache/spark/pull/22553#discussion_r220718383

--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/CaseInsensitiveMap.scala ---

```
@@ -42,7 +42,11 @@ class CaseInsensitiveMap[T] private (val originalMap: Map[String, T]) extends Ma
   override def iterator: Iterator[(String, T)] = keyLowerCasedMap.iterator

   override def -(key: String): Map[String, T] = {
-    new CaseInsensitiveMap(originalMap.filterKeys(!_.equalsIgnoreCase(key)))
+    new CaseInsensitiveMap(originalMap.filter(!_._1.equalsIgnoreCase(key)))
+  }
+
+  override def filterKeys(p: (String) => Boolean): Map[String, T] = {
```

--- End diff --

This method right now doesn't follow the Scaladoc:

```
 * @return an immutable map consisting only of those key value pairs of this map where the key satisfies
 *         the predicate `p`. The resulting map wraps the original map without copying any elements.
```

Maybe we should not add it. This is a Scala issue, but they didn't fix it until 2.13 (https://github.com/scala/bug/issues/4776).
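The distinction zsxwing points to is strict vs. lazy filtering. A minimal sketch using plain Scala collections (illustrative only, not Spark code):

```scala
// In Scala 2.12, `filterKeys` is documented to wrap the original map lazily
// without copying any elements; that wrapper is not serializable
// (scala/bug#4776). It is deprecated in 2.13 in favor of `.view.filterKeys`.
val opts = Map("Path" -> "/tmp", "Header" -> "true")
val lazyView = opts.filterKeys(_ != "Header")

// A strict `filter` over the (key, value) pairs copies the entries into a
// real Map, which serializes fine -- the workaround this PR adopts.
val strict: Map[String, String] = opts.filter { case (k, _) => k != "Header" }

// Both contain the same entries; only the strictness differs.
assert(lazyView.toMap == strict)
println(strict) // Map(Path -> /tmp)
```

Overriding `filterKeys` to return a strict copy would therefore be safe for serialization, but it would silently break the "wraps without copying" contract in the Scaladoc, which is why the override was dropped.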
[GitHub] spark pull request #22553: [SPARK-25541][SQL] CaseInsensitiveMap should be s...
Github user asfgit closed the pull request at: https://github.com/apache/spark/pull/22553
[GitHub] spark pull request #22553: [SPARK-25541][SQL] CaseInsensitiveMap should be s...
Github user gengliangwang commented on a diff in the pull request: https://github.com/apache/spark/pull/22553#discussion_r220464759

--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/CaseInsensitiveMap.scala ---

```
@@ -42,7 +42,11 @@ class CaseInsensitiveMap[T] private (val originalMap: Map[String, T]) extends Ma
   override def iterator: Iterator[(String, T)] = keyLowerCasedMap.iterator

   override def -(key: String): Map[String, T] = {
-    new CaseInsensitiveMap(originalMap.filterKeys(!_.equalsIgnoreCase(key)))
```

--- End diff --

Yes, we still need it. This one is for filtering the `originalMap`.
[GitHub] spark pull request #22553: [SPARK-25541][SQL] CaseInsensitiveMap should be s...
Github user cloud-fan commented on a diff in the pull request: https://github.com/apache/spark/pull/22553#discussion_r220463389

--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/CaseInsensitiveMap.scala ---

```
@@ -42,7 +42,11 @@ class CaseInsensitiveMap[T] private (val originalMap: Map[String, T]) extends Ma
   override def iterator: Iterator[(String, T)] = keyLowerCasedMap.iterator

   override def -(key: String): Map[String, T] = {
-    new CaseInsensitiveMap(originalMap.filterKeys(!_.equalsIgnoreCase(key)))
```

--- End diff --

Since we override `filterKeys`, do we still need to change here?
[GitHub] spark pull request #22553: [SPARK-25541][SQL] CaseInsensitiveMap should be s...
GitHub user gengliangwang opened a pull request: https://github.com/apache/spark/pull/22553

[SPARK-25541][SQL] CaseInsensitiveMap should be serializable after '-' or 'filterKeys'

## What changes were proposed in this pull request?

`CaseInsensitiveMap` is declared as Serializable. However, it is not serializable after the `-` operator or the `filterKeys` method. This PR fixes the issue by overriding the operator `-` and the method `filterKeys`, so we can avoid a potential `NotSerializableException` when using `CaseInsensitiveMap`.

## How was this patch tested?

New test suite.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/gengliangwang/spark fixCaseInsensitiveMap

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/22553.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #22553

commit 54ace1541a8db2e49daeda6b5573193527a95f24
Author: Gengliang Wang
Date: 2018-09-26T06:36:53Z

    fix and add test suite
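The bug and the fix can be sketched with a simplified stand-in class (the names `roundTrip` and `MiniCIMap` are illustrative, not Spark's actual test code or class):

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}

// Java-serialization round trip, like the new test suite would exercise.
def roundTrip[A](a: A): A = {
  val buf = new ByteArrayOutputStream()
  val out = new ObjectOutputStream(buf)
  out.writeObject(a)
  out.close()
  new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray)).readObject().asInstanceOf[A]
}

// Minimal, simplified stand-in for CaseInsensitiveMap: `-` rebuilds the
// backing map with a strict `filter` instead of `filterKeys`, so the result
// carries no non-serializable lazy wrapper (the 2.12 filterKeys pitfall).
class MiniCIMap[T](val originalMap: Map[String, T]) extends Serializable {
  private val lower = originalMap.map { case (k, v) => (k.toLowerCase, v) }
  def get(key: String): Option[T] = lower.get(key.toLowerCase)
  def -(key: String): MiniCIMap[T] =
    new MiniCIMap(originalMap.filter { case (k, _) => !k.equalsIgnoreCase(key) })
}

val m = new MiniCIMap(Map("Path" -> "/tmp", "Header" -> "true"))
// With a filterKeys-based `-` this round trip could throw
// NotSerializableException on Scala 2.12; with strict `filter` it succeeds.
val after = roundTrip(m - "PATH")
assert(after.get("header") == Some("true"))
assert(after.get("path").isEmpty)
```

Note that lookup stays case-insensitive in both directions: removal with `"PATH"` drops the `"Path"` entry, and `get("header")` still finds `"Header"`.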