[GitHub] spark issue #21045: [WIP][SPARK-23931][SQL] Adds zip function to sparksql

2018-05-16 Thread mn-mikke
Github user mn-mikke commented on the issue: https://github.com/apache/spark/pull/21045 @DylanGuedes What about `CodeGenerator.getValue(s"arrVals[$j]", jThChildDataType, i)`? I recommend using a debugger to check and understand what gets generated out of your code or if
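Independent of Spark's codegen machinery, the element-access pattern under discussion (for output row `i`, read element `i` from each child array `j`, padding shorter arrays with null) can be sketched in plain Scala. All names here are illustrative, not taken from the PR:

```scala
object ZipSketch {
  // Zip a sequence of child arrays into rows, padding shorter arrays
  // with None (the analogue of a null slot), as SPARK-23931 describes.
  def zipArrays[T](arrays: Seq[Array[T]]): Seq[Seq[Option[T]]] = {
    val numRows = if (arrays.isEmpty) 0 else arrays.map(_.length).max
    (0 until numRows).map { i =>
      // For row i, read element i from each child array j --
      // the analogue of CodeGenerator.getValue(s"arrVals[$j]", dt, i).
      arrays.map(arr => if (i < arr.length) Some(arr(i)) else None)
    }
  }
}
```

The generated Java code in the PR does the same walk, only with string-built loop indices instead of Scala collections.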

[GitHub] spark issue #21045: [WIP][SPARK-23931][SQL] Adds zip function to sparksql

2018-05-16 Thread DylanGuedes
Github user DylanGuedes commented on the issue: https://github.com/apache/spark/pull/21045 @mn-mikke I thought that `CodeGenerator.getValue` was used directly to retrieve values from 1D arrays (such as an `ArrayData`), but I don't see how to use it for 2D arrays (such as

[GitHub] spark issue #21045: [WIP][SPARK-23931][SQL] Adds zip function to sparksql

2018-05-16 Thread mn-mikke
Github user mn-mikke commented on the issue: https://github.com/apache/spark/pull/21045 @DylanGuedes What do you mean by a 2D structure? Evaluation of any child should produce `null` or an instance of `ArrayData`, so `CodeGenerator.getValue` should work. Is there some other reason it doesn't?

[GitHub] spark issue #21045: [WIP][SPARK-23931][SQL] Adds zip function to sparksql

2018-05-15 Thread DylanGuedes
Github user DylanGuedes commented on the issue: https://github.com/apache/spark/pull/21045 @mn-mikke thank you! Any idea how to access elements of the individual arrays? In the old version I wrote a 'getValue' that uses `CodeGenerator.getValue`, but now that it is 2D data

[GitHub] spark issue #21045: [WIP][SPARK-23931][SQL] Adds zip function to sparksql

2018-05-15 Thread mn-mikke
Github user mn-mikke commented on the issue: https://github.com/apache/spark/pull/21045 @DylanGuedes What about `eval.value`? Example:
```scala
val evals = children.map(_.genCode(ctx))
val args = ctx.freshName("args")
val inputs = evals.zipWithIndex.map { case
```
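The snippet is truncated, but its shape suggests emitting each child's evaluation code and copying each `eval.value` into a slot of a generated `args` array. A minimal plain-Scala mock of that string-building pattern, with `ExprCode` as a stand-in for Spark's real codegen class (the null-handling and variable names are assumptions, not the PR's actual code):

```scala
// Stand-in for Spark's ExprCode: generated code plus the variable
// names holding the null flag and the evaluated value.
case class ExprCode(code: String, isNull: String, value: String)

object CodegenSketch {
  // Mirror of the suggested pattern: emit each child's evaluation code,
  // then copy its eval.value into slot i of the generated `args` array.
  def buildArgsCode(evals: Seq[ExprCode], args: String): String = {
    evals.zipWithIndex.map { case (eval, i) =>
      s"""${eval.code}
         |$args[$i] = ${eval.isNull} ? null : ${eval.value};""".stripMargin
    }.mkString("\n")
  }
}
```

In real Spark codegen the produced string is Java source that gets compiled by Janino; the point here is only how `eval.value` is referenced after `eval.code` has run.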

[GitHub] spark issue #21045: [WIP][SPARK-23931][SQL] Adds zip function to sparksql

2018-04-16 Thread kiszk
Github user kiszk commented on the issue: https://github.com/apache/spark/pull/21045 `UT` stands for unit test. Developers usually use IntelliJ; it is highly recommended.

[GitHub] spark issue #21045: [WIP][SPARK-23931][SQL] Adds zip function to sparksql

2018-04-16 Thread DylanGuedes
Github user DylanGuedes commented on the issue: https://github.com/apache/spark/pull/21045 @mgaido91 thank you, the suggestions were VERY enlightening! You are correct, I tried to return the expected output in `doGenCode`; based on other implementations I thought that it was

[GitHub] spark issue #21045: [WIP][SPARK-23931][SQL] Adds zip function to sparksql

2018-04-13 Thread mgaido91
Github user mgaido91 commented on the issue: https://github.com/apache/spark/pull/21045 @DylanGuedes the first suggestion I can give you is: do not use spark-shell for testing; write UTs and run them with a debugger. Then you can set a breakpoint to check the generated code (or you can

[GitHub] spark issue #21045: [WIP][SPARK-23931][SQL] Adds zip function to sparksql

2018-04-13 Thread DylanGuedes
Github user DylanGuedes commented on the issue: https://github.com/apache/spark/pull/21045 OK, so it works fine in spark-shell, but in pyspark I got this error:
```shell
File "/home/dguedes/Workspace/spark/python/pyspark/sql/functions.py", line 2155, in pyspark.sql.functions.zip
```

[GitHub] spark issue #21045: [WIP][SPARK-23931][SQL] Adds zip function to sparksql

2018-04-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/21045 Can one of the admins verify this patch?
