GitHub user mn-mikke opened a pull request:
https://github.com/apache/spark/pull/21236
[SPARK-23935][SQL] Adding map_entries function
## What changes were proposed in this pull request?
This PR adds the `map_entries` function that returns an unordered array of all
entries
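The semantics described above can be sketched in plain Python (this is only an illustration of what `map_entries` computes, not Spark's Catalyst implementation; the function name mirrors the SQL one):

```python
# Sketch: map_entries turns a map into an array of (key, value) structs.
def map_entries(m):
    """Return the entries of a map as a list of (key, value) pairs."""
    return list(m.items())

map_entries({1: "a", 2: "b"})  # [(1, 'a'), (2, 'b')]
```

In Spark the result element type is a struct with fields `key` and `value`; a Python tuple stands in for that struct here.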
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21236
cc @ueshin @gatorsmile
---
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21258#discussion_r186408332
--- Diff: python/pyspark/sql/functions.py ---
@@ -1798,6 +1798,22 @@ def create_map(*cols):
return Column(jc
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21258#discussion_r186410288
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -235,6 +235,69 @@ case class CreateMap
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21258#discussion_r186410190
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -235,6 +235,69 @@ case class CreateMap
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21258#discussion_r186409860
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -235,6 +235,69 @@ case class CreateMap
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21258#discussion_r186408382
--- Diff: python/pyspark/sql/functions.py ---
@@ -1798,6 +1798,22 @@ def create_map(*cols):
return Column(jc
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21258#discussion_r186410991
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -1033,6 +1033,17 @@ object functions {
@scala.annotation.varargs
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21258#discussion_r186410897
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ComplexTypeSuite.scala
---
@@ -186,6 +186,37 @@ class ComplexTypeSuite
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21258#discussion_r186408884
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/FunctionRegistry.scala
---
@@ -405,6 +405,7 @@ object FunctionRegistry
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21258#discussion_r186410527
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -235,6 +235,69 @@ case class CreateMap
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21258#discussion_r186409077
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -235,6 +235,69 @@ case class CreateMap
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21236#discussion_r187996983
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -118,6 +119,162 @@ case class MapValues
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21236#discussion_r187995110
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -118,6 +119,161 @@ case class MapValues
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21045#discussion_r188189553
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -90,6 +90,110 @@ case class MapKeys
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21045#discussion_r188189939
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/DataFrameFunctionsSuite.scala ---
@@ -372,6 +372,24 @@ class DataFrameFunctionsSuite extends
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21045
@DylanGuedes As @kiszk mentioned, I also recommend that you start using
IntelliJ IDEA. I think it will make your life easier. You can build, run tests,
refactor code and search for existing classes
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21045#discussion_r188189736
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/CollectionExpressionsSuite.scala
---
@@ -199,6 +200,20 @@ class
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21045#discussion_r188188916
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -90,6 +90,110 @@ case class MapKeys
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21045#discussion_r188187779
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -90,6 +90,110 @@ case class MapKeys
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21236#discussion_r187001475
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -118,6 +119,161 @@ case class MapValues
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21282
cc @ueshin @gatorsmile
GitHub user mn-mikke opened a pull request:
https://github.com/apache/spark/pull/21282
[SPARK-23934][SQL] Adding map_from_entries function
## What changes were proposed in this pull request?
The PR adds the `map_from_entries` function that returns a map created from
the given
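As a rough pure-Python sketch (illustrative only, not Spark's implementation), `map_from_entries` is the inverse of `map_entries`: it folds an array of (key, value) entries back into a map.

```python
# Sketch: map_from_entries builds a map from an array of (key, value) entries.
# Python dict semantics are used here: a later duplicate key overwrites an
# earlier one (Spark's duplicate-key behavior is not asserted by this sketch).
def map_from_entries(entries):
    """Build a map from a list of (key, value) pairs."""
    return dict(entries)

map_from_entries([(1, "a"), (2, "b")])  # {1: 'a', 2: 'b'}
```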
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21121
@ueshin What about combining `zip_with_index` with
[`map_from_entries`](https://issues.apache.org/jira/browse/SPARK-23934
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21181
It makes sense. Thanks!
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21028#discussion_r184700686
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -288,6 +288,114 @@ case class
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21028#discussion_r184730604
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -19,14 +19,41 @@ package
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21282#discussion_r187282249
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -118,6 +120,229 @@ case class MapValues
GitHub user mn-mikke opened a pull request:
https://github.com/apache/spark/pull/21294
[SPARK-24197][SparkR][SQL] Adding array_sort function to SparkR
## What changes were proposed in this pull request?
The PR adds array_sort function to SparkR.
## How
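For reference, the `array_sort` function being exposed to SparkR sorts in ascending order and places null elements last. A minimal Python sketch of that semantics (illustrative only; Spark implements this natively in Catalyst):

```python
# Sketch: array_sort sorts ascending and moves None (SQL NULL) to the end.
def array_sort(xs):
    """Sort ascending with null elements placed last."""
    non_null = sorted(x for x in xs if x is not None)
    return non_null + [None] * sum(1 for x in xs if x is None)

array_sort([3, None, 1, 2])  # [1, 2, 3, None]
```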
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21294#discussion_r187381390
--- Diff: R/pkg/tests/fulltests/test_sparkSQL.R ---
@@ -1497,12 +1496,18 @@ test_that("column functions", {
result <- collect(select(d
GitHub user mn-mikke opened a pull request:
https://github.com/apache/spark/pull/21298
[SPARK-24198][SparkR][SQL] Adding slice function to SparkR
## What changes were proposed in this pull request?
The PR adds the `slice` function to SparkR. The function returns a subset
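The SQL `slice(x, start, length)` function that this SparkR wrapper exposes uses a 1-based start index, with a negative start counting from the end. A hypothetical Python sketch of that contract (the helper name `slice_array` is invented to avoid shadowing Python's built-in `slice`):

```python
# Sketch: SQL-style slice with a 1-based start; negative start counts
# from the end of the array. Not Spark's implementation.
def slice_array(xs, start, length):
    """Return length elements of xs beginning at 1-based position start."""
    if start == 0:
        raise ValueError("start must not be zero")
    idx = start - 1 if start > 0 else len(xs) + start
    return xs[idx: idx + length]

slice_array([1, 2, 3, 4, 5], 2, 3)  # [2, 3, 4]
```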
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21298
cc @HyukjinKwon @felixcheung
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21236#discussion_r187813418
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -118,6 +119,161 @@ case class MapValues
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21298
Oh, sorry for that. I'll group several functions next time. Thanks guys.
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21258#discussion_r186474451
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -235,6 +235,69 @@ case class CreateMap
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21258#discussion_r186493539
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -235,6 +235,69 @@ case class CreateMap
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21244
@huaxingao Isn't the correct Jira number
[SPARK-24185](https://issues.apache.org/jira/browse/SPARK-24185
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21045
@DylanGuedes What about `eval.value`?
Example:
```
val evals = children.map(_.genCode(ctx))
val args = ctx.freshName("args")
val inputs = evals.zipWithIndex.
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21282
retest this please
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21352
retest this please
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21258#discussion_r188552288
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ComplexTypeSuite.scala
---
@@ -186,6 +186,37 @@ class ComplexTypeSuite
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21045#discussion_r188535575
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -90,6 +90,117 @@ case class MapKeys
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21045
@DylanGuedes What do you mean by a 2D structure? Evaluation of any child
should produce `null` or an instance of `ArrayData`. `CodeGenerator.getValue`
should work. Isn't there any other reason
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21045#discussion_r188539523
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -90,6 +90,117 @@ case class MapKeys
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21045#discussion_r188540566
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -90,6 +90,117 @@ case class MapKeys
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21045#discussion_r188537695
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -90,6 +90,117 @@ case class MapKeys
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21045#discussion_r188536372
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -90,6 +90,117 @@ case class MapKeys
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21045#discussion_r188543281
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -90,6 +90,117 @@ case class MapKeys
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21045#discussion_r188536830
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -90,6 +90,117 @@ case class MapKeys
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21045#discussion_r188541075
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -90,6 +90,117 @@ case class MapKeys
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21362
cc @felixcheung @HyukjinKwon @gatorsmile
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21294#discussion_r189244494
--- Diff: R/pkg/tests/fulltests/test_sparkSQL.R ---
@@ -1497,10 +1496,16 @@ test_that("column functions", {
result <- collect(select(d
GitHub user mn-mikke opened a pull request:
https://github.com/apache/spark/pull/21362
[SPARK-24197][SparkR][FOLLOWUP] Fixing failing tests for array_sort and
sort_array
## What changes were proposed in this pull request?
The PR tries to fix [the
problem](https
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21236
@ueshin, @kiszk Thank you for the valuable comments! Do you have any more?
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21352#discussion_r188958648
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -552,7 +553,8 @@ case class Slice(x
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21352#discussion_r188943212
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -870,6 +870,7 @@ case class ArrayMin
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21352#discussion_r188958139
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -388,7 +388,8 @@ case class Reverse
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21352
cc @ueshin @mgaido91
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21352#discussion_r188959290
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -837,6 +839,7 @@ case class ArrayMin
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21352#discussion_r188961645
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -1324,15 +1329,17 @@ case class Concat
GitHub user mn-mikke opened a pull request:
https://github.com/apache/spark/pull/21352
[SPARK-24305][SQL][FOLLOWUP] Avoid serialization of private fields in
collection expressions.
## What changes were proposed in this pull request?
The PR tries to avoid serialization
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21352#discussion_r188959881
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -1147,6 +1151,7 @@ case class Concat
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21352#discussion_r188982249
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -388,7 +388,8 @@ case class Reverse
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21045
@DylanGuedes What about `CodeGenerator.getValue(s"arrVals[$j]",
jThChildDataType, i)`?
I recommend using a debugger to check and understand what gets
generated out of
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21386#discussion_r189722351
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -555,6 +557,100 @@ case class ArraySort
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21386#discussion_r189722777
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -555,6 +557,100 @@ case class ArraySort
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21386#discussion_r189720688
--- Diff: python/pyspark/sql/functions.py ---
@@ -2268,6 +2268,21 @@ def array_sort(col):
return Column(sc._jvm.functions.array_sort
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21386#discussion_r189725334
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -555,6 +557,100 @@ case class ArraySort
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21386#discussion_r189725931
--- Diff: python/pyspark/sql/functions.py ---
@@ -2268,6 +2268,21 @@ def array_sort(col):
return Column(sc._jvm.functions.array_sort
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21434
cc @HyukjinKwon @felixcheung
GitHub user mn-mikke opened a pull request:
https://github.com/apache/spark/pull/21434
[SPARK-24331][SparkR][SQL] Adding arrays_overlap, array_repeat, map_entries
to SparkR
## What changes were proposed in this pull request?
The PR adds functions `arrays_overlap
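The two array functions named above have concise semantics that can be sketched in plain Python (illustrative only, assuming the usual three-valued SQL NULL behavior for `arrays_overlap`; not the SparkR or Catalyst implementation):

```python
# Sketch: arrays_overlap is True if the arrays share a non-null element;
# None (SQL NULL) if there is no overlap but both arrays are non-empty and
# either contains a null; False otherwise.
def arrays_overlap(a, b):
    common = {x for x in a if x is not None} & {x for x in b if x is not None}
    if common:
        return True
    if a and b and (None in a or None in b):
        return None
    return False

# Sketch: array_repeat produces an array containing x repeated n times.
def array_repeat(x, n):
    return [x] * n
```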
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21434
retest this please
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21434
Thanks!
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21282
retest this please
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21236
retest this please
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21236
retest this please
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21362
@ueshin Have you experienced the same problem with the failing tests?
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21434#discussion_r191071987
--- Diff: R/pkg/R/functions.R ---
@@ -3062,6 +3077,21 @@ setMethod("array_sort",
column(jc)
})
+#
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21282#discussion_r192875181
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -308,6 +309,234 @@ case class
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21282#discussion_r192573167
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -118,6 +120,229 @@ case class MapValues
GitHub user mn-mikke opened a pull request:
https://github.com/apache/spark/pull/21687
[SPARK-24165][SQL] Fixing the output data type of CaseWhen expression
## What changes were proposed in this pull request?
This PR is proposing a fix for the output data type of ```CaseWhen
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21687#discussion_r199425774
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/conditionalExpressions.scala
---
@@ -129,7 +129,7 @@ case class CaseWhen
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21687#discussion_r199426921
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/conditionalExpressions.scala
---
@@ -129,7 +129,7 @@ case class CaseWhen
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21687#discussion_r199427016
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ConditionalExpressionSuite.scala
---
@@ -113,6 +113,35 @@ class
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21687
@viirya Yeah, it looks like the same problem, but it's worked around via a
different implementation of the `IfCoercion` rule. This rule utilizes the `!=`
operator for comparison. So if two types
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21687#discussion_r199451068
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/conditionalExpressions.scala
---
@@ -129,7 +129,7 @@ case class CaseWhen
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21620#discussion_r197768113
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -536,6 +536,14 @@ object TypeCoercion
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21620#discussion_r197716236
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -536,6 +536,11 @@ object TypeCoercion
GitHub user mn-mikke opened a pull request:
https://github.com/apache/spark/pull/21620
[SPARK-24636][SQL] Type coercion of arrays for array_join function
## What changes were proposed in this pull request?
Presto's implementation accepts arbitrary arrays of primitive types
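The `array_join` semantics under discussion can be sketched in plain Python (illustrative only; it mirrors the documented behavior where null elements are dropped unless a replacement string is supplied, and is not Spark's or Presto's implementation):

```python
# Sketch: join array elements with a delimiter; None (SQL NULL) elements are
# skipped unless null_replacement is provided.
def array_join(xs, delimiter, null_replacement=None):
    parts = []
    for x in xs:
        if x is None:
            if null_replacement is not None:
                parts.append(null_replacement)
        else:
            parts.append(str(x))
    return delimiter.join(parts)

array_join(["a", None, "b"], ",")       # 'a,b'
array_join(["a", None, "b"], ",", "?")  # 'a,?,b'
```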
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21620
cc @ueshin @mgaido91
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21620#discussion_r197730826
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -536,6 +536,11 @@ object TypeCoercion
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21704
@ueshin Thanks for bringing up this topic! This problem with different
`nullable`/`containsNull` flags seems to be more generic. In
[21687](https://github.com/apache/spark/pull/21687), we've
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21121
@rxin Oh, I see. In that case, I'm happy to close the PR. @hvanhovell Can
you confirm that the `transform` function will pass the index into lambda
functions
GitHub user mn-mikke opened a pull request:
https://github.com/apache/spark/pull/21215
[SPARK-24148][SQL] Overloading array function to support typed empty arrays
## What changes were proposed in this pull request?
The PR proposes to overload `array` function and allow users
Github user mn-mikke commented on the issue:
https://github.com/apache/spark/pull/21215
@lokm01 @gatorsmile @maropu @ueshin
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21208#discussion_r185534457
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -1229,3 +1229,98 @@ case class Flatten
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21208#discussion_r185532437
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -1229,3 +1229,98 @@ case class Flatten
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21208#discussion_r185544657
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/DataFrameFunctionsSuite.scala ---
@@ -798,6 +798,111 @@ class DataFrameFunctionsSuite extends
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21208#discussion_r185540852
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -1229,3 +1229,98 @@ case class Flatten
Github user mn-mikke commented on a diff in the pull request:
https://github.com/apache/spark/pull/21208#discussion_r185538873
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -1229,3 +1229,98 @@ case class Flatten