GitHub user mike0sv opened a pull request:
https://github.com/apache/spark/pull/18488
Enum support
## What changes were proposed in this pull request?
Fixed an NPE when creating an encoder for an enum.
## How was this patch tested?
Unit test in EnumEncoderSuite which
Github user mike0sv commented on a diff in the pull request:
https://github.com/apache/spark/pull/18488#discussion_r125055741
--- Diff:
sql/catalyst/src/test/java/org/apache/spark/sql/catalyst/EnumEncoderSuite.java
---
@@ -0,0 +1,32 @@
+package org.apache.spark.sql.catalyst
Github user mike0sv commented on the issue:
https://github.com/apache/spark/pull/18488
@kiszk check it out
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so
Github user mike0sv commented on the issue:
https://github.com/apache/spark/pull/18488
I reworked the code to ser/de enums as ints (according to declaring
order). However, I recreate the mapping for each object, which is obviously
very bad. I need to create the mapping once (for each
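The ordinal approach described above can be sketched outside Spark in plain Java; `OrdinalEnumCodec` and its cache are hypothetical names for illustration, not the PR's actual code. Caching the constants array once per class avoids rebuilding the mapping for every object:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class OrdinalEnumCodec {
    // Cache the constants array once per enum class; getEnumConstants()
    // clones the array on every call, so rebuilding it per object is wasteful.
    private static final Map<Class<?>, Object[]> CACHE = new ConcurrentHashMap<>();

    public static int serialize(Enum<?> e) {
        return e.ordinal();  // position in declaring order
    }

    @SuppressWarnings("unchecked")
    public static <T extends Enum<T>> T deserialize(Class<T> cls, int ordinal) {
        Object[] constants = CACHE.computeIfAbsent(cls, c -> c.getEnumConstants());
        return (T) constants[ordinal];
    }

    enum Color { RED, GREEN, BLUE }

    public static void main(String[] args) {
        int i = serialize(Color.GREEN);
        System.out.println(i + " " + deserialize(Color.class, i));  // 1 GREEN
    }
}
```

Note the caveat with ordinals: reordering or inserting constants silently changes the mapping, which is part of why the discussion later moved to names.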
Github user mike0sv commented on the issue:
https://github.com/apache/spark/pull/18488
It won't work if I have an enum field inside a regular Java bean.
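The problematic shape mentioned here, a regular Java bean carrying an enum field, looks like the following; `User` and `Status` are illustrative names, not taken from the PR's test suite:

```java
public class BeanWithEnum {
    enum Status { ACTIVE, DELETED }

    // A plain bean with getter/setter, the shape bean encoders introspect;
    // the enum-typed property is what the early mapping approach missed.
    public static class User {
        private Status status;
        public Status getStatus() { return status; }
        public void setStatus(Status s) { status = s; }
    }

    public static void main(String[] args) {
        User u = new User();
        u.setStatus(Status.ACTIVE);
        System.out.println(u.getStatus());  // ACTIVE
    }
}
```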
Github user mike0sv commented on a diff in the pull request:
https://github.com/apache/spark/pull/18488#discussion_r125358120
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/JavaTypeInference.scala
---
@@ -127,19 +128,24 @@ object JavaTypeInference
Github user mike0sv commented on a diff in the pull request:
https://github.com/apache/spark/pull/18488#discussion_r125360539
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/JavaTypeInference.scala
---
@@ -344,6 +352,28 @@ object JavaTypeInference
Github user mike0sv commented on a diff in the pull request:
https://github.com/apache/spark/pull/18488#discussion_r125360956
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/JavaTypeInference.scala
---
@@ -344,6 +352,28 @@ object JavaTypeInference
Github user mike0sv commented on the issue:
https://github.com/apache/spark/pull/18488
Ran into something strange. Changed ints to strings and it worked fine. But
then I added a test for encoding a bean with an enum inside, and the test
failed. It failed because in my implementation
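The switch from ints to strings amounts to round-tripping enums through `Enum.name()` and `Enum.valueOf`, sketched here in plain Java (the class name is hypothetical). Unlike ordinals, names survive reordering of the constants:

```java
public class NameEnumCodec {
    enum Color { RED, GREEN, BLUE }

    public static String serialize(Enum<?> e) {
        return e.name();  // declared constant name, stable across reordering
    }

    public static <T extends Enum<T>> T deserialize(Class<T> cls, String name) {
        return Enum.valueOf(cls, name);  // throws if the name is unknown
    }

    public static void main(String[] args) {
        String s = serialize(Color.BLUE);
        System.out.println(s + " " + deserialize(Color.class, s));  // BLUE BLUE
    }
}
```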
Github user mike0sv commented on the issue:
https://github.com/apache/spark/pull/18488
@srowen @HyukjinKwon what's your status on this? anything else I can do?
Github user mike0sv commented on a diff in the pull request:
https://github.com/apache/spark/pull/18488#discussion_r134475250
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/JavaTypeInference.scala
---
@@ -118,6 +119,10 @@ object JavaTypeInference
Github user mike0sv commented on a diff in the pull request:
https://github.com/apache/spark/pull/18488#discussion_r134475878
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/JavaTypeInference.scala
---
@@ -303,6 +309,11 @@ object JavaTypeInference
Github user mike0sv commented on a diff in the pull request:
https://github.com/apache/spark/pull/18488#discussion_r134477280
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/JavaTypeInference.scala
---
@@ -345,6 +356,30 @@ object JavaTypeInference
Github user mike0sv commented on a diff in the pull request:
https://github.com/apache/spark/pull/18488#discussion_r134477534
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/JavaTypeInference.scala
---
@@ -345,6 +356,30 @@ object JavaTypeInference
Github user mike0sv commented on a diff in the pull request:
https://github.com/apache/spark/pull/18488#discussion_r134478386
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/JavaTypeInference.scala
---
@@ -345,6 +356,30 @@ object JavaTypeInference
Github user mike0sv commented on a diff in the pull request:
https://github.com/apache/spark/pull/18488#discussion_r134478774
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/JavaTypeInference.scala
---
@@ -429,6 +464,11 @@ object JavaTypeInference
Github user mike0sv commented on a diff in the pull request:
https://github.com/apache/spark/pull/18488#discussion_r134479564
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/encoders/ExpressionEncoder.scala
---
@@ -81,9 +81,19 @@ object ExpressionEncoder
Github user mike0sv commented on a diff in the pull request:
https://github.com/apache/spark/pull/18488#discussion_r134480556
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala
---
@@ -154,13 +154,13 @@ case class StaticInvoke
Github user mike0sv commented on the issue:
https://github.com/apache/spark/pull/18488
@srowen you are right, we store the string values of the constant names (for
the test example, we would get the A and B values, not google/elgoog).
I commented some of the changes for clarity, but I don'
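The A/B-versus-google/elgoog point can be shown with a small enum whose constants carry a payload field: serialization by name keeps the declared constant name, not the field value. The enum below is an illustration reconstructed from the comment, not the actual test code:

```java
public class NameVsField {
    enum Site {
        A("google"), B("elgoog");
        final String url;
        Site(String url) { this.url = url; }
    }

    public static void main(String[] args) {
        // Name-based encoding stores "A", not the payload "google":
        System.out.println(Site.A.name() + " " + Site.A.url);  // A google
    }
}
```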
Github user mike0sv commented on the issue:
https://github.com/apache/spark/pull/18488
Found this in the janino documentation; it explains the need for explicit
casting: "Type arguments: Are parsed, but otherwise ignored. The most
significant restriction that follows is that you
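In other words, because janino parses but ignores type arguments, a generic call like `Enum.valueOf` behaves as if written against raw types, so generated code must cast the result explicitly. A plain-Java illustration (class and method names are hypothetical):

```java
public class RawCast {
    enum Color { RED, GREEN }

    @SuppressWarnings({"rawtypes", "unchecked"})
    public static Color fromName(String name) {
        Class rawClass = Color.class;  // raw Class, as janino effectively sees it
        // The raw call only yields Enum; the explicit (Color) cast is required:
        return (Color) Enum.valueOf(rawClass, name);
    }

    public static void main(String[] args) {
        System.out.println(fromName("RED"));  // RED
    }
}
```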
Github user mike0sv commented on a diff in the pull request:
https://github.com/apache/spark/pull/18488#discussion_r134622727
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala
---
@@ -154,13 +154,13 @@ case class StaticInvoke
Github user mike0sv commented on a diff in the pull request:
https://github.com/apache/spark/pull/19066#discussion_r135511606
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/encoders/ExpressionEncoder.scala
---
@@ -81,19 +81,9 @@ object ExpressionEncoder
Github user mike0sv commented on the issue:
https://github.com/apache/spark/pull/18488
@srowen @HyukjinKwon hey guys, I think I got this, take a look. Some SparkR
tests failed for some reason, but I think it's not my fault =|
Github user mike0sv commented on a diff in the pull request:
https://github.com/apache/spark/pull/18488#discussion_r132489991
--- Diff:
sql/catalyst/src/main/java/org/apache/spark/sql/catalyst/expressions/ExpressionInfo.java
---
@@ -79,7 +79,7 @@ public ExpressionInfo
Github user mike0sv commented on the issue:
https://github.com/apache/spark/pull/18488
@srowen @HyukjinKwon it seems like it's all ok now
Github user mike0sv commented on a diff in the pull request:
https://github.com/apache/spark/pull/18488#discussion_r133194603
--- Diff:
sql/catalyst/src/main/java/org/apache/spark/sql/catalyst/expressions/ExpressionInfo.java
---
@@ -79,7 +79,7 @@ public ExpressionInfo
Github user mike0sv commented on the issue:
https://github.com/apache/spark/pull/18488
@srowen @HyukjinKwon , retest this please :)