[GitHub] spark issue #15274: [SPARK-17699] Support for parsing JSON string columns

2016-10-18 Thread DanielMe
Github user DanielMe commented on the issue: https://github.com/apache/spark/pull/15274 @yhuai thanks! My impression was that `get_json_object` does not convert JSON arrays to `ArrayType`s; maybe I misunderstood the way it is supposed to be used, though.
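For context, a minimal sketch of the behaviour being discussed, assuming Spark 2.1+ where the `from_json` function added by this PR is available: `get_json_object` extracts the matching fragment as a plain string, while `from_json` parses the column against a declared schema, so an array field comes back as a typed `ArrayType` column.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{from_json, get_json_object}
import org.apache.spark.sql.types._

val spark = SparkSession.builder().master("local[*]").appName("from_json-sketch").getOrCreate()
import spark.implicits._

val df = Seq("""{"id": 1, "tags": ["a", "b"]}""").toDF("json")

// Declare the expected shape of the JSON document, including the array field.
val schema = new StructType()
  .add("id", IntegerType)
  .add("tags", ArrayType(StringType))

// from_json returns a typed struct column, so tags comes back as array<string>.
df.select(from_json($"json", schema).as("parsed"))
  .select("parsed.id", "parsed.tags")
  .show(false)

// get_json_object only extracts the matching fragment as a string, not an ArrayType column.
df.select(get_json_object($"json", "$.tags").as("tags_as_string")).show(false)
```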

[GitHub] spark issue #15274: [SPARK-17699] Support for parsing JSON string columns

2016-10-17 Thread DanielMe
Github user DanielMe commented on the issue: https://github.com/apache/spark/pull/15274 Is there any workaround I can use to achieve a similar effect in 1.6?
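One commonly used workaround on 1.6 (an assumption, not a recommendation made in this thread) is to pull the JSON strings out as an `RDD[String]` and parse them with `DataFrameReader.json`, at the cost of separating them from the other columns unless a key is carried along and joined back:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("json-workaround-1.6"))
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._

// Illustrative data: a key plus a JSON payload column (column names are made up here).
val events = Seq(
  (1L, """{"id": 1, "tags": ["a", "b"]}"""),
  (2L, """{"id": 2, "tags": ["c"]}""")
).toDF("key", "payload")

// Extract the JSON strings and let the JSON data source infer a schema
// (DataFrameReader.json over an RDD[String] is available in 1.6).
val parsed = sqlContext.read.json(events.select("payload").rdd.map(_.getString(0)))
parsed.printSchema()   // tags is inferred as array<string>
parsed.show()
```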

[GitHub] spark issue #12855: [SPARK-10216][SQL] Avoid creating empty files during ove...

2016-06-21 Thread DanielMe
Github user DanielMe commented on the issue: https://github.com/apache/spark/pull/12855 I can reproduce the issue that @jurriaan reports on 1.6.0 and on 1.5.2. The issue does not occur on 1.3.1. I have added a comment to the JIRA issue with more detailed instructions on how to reproduce it.
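A rough reconstruction of the kind of reproduction described (my own sketch, not the exact steps from the JIRA comment): filter a DataFrame so that most partitions hold no rows, write it out, and inspect the output directory for zero-row part files.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("empty-files-repro"))
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._

// The filter leaves most of the 200 partitions with no rows; on the affected versions
// each empty partition still produces a part file under the output path.
val df = sc.parallelize(0L until 1000L).map(Tuple1.apply).toDF("id")
  .filter($"id" < 10)
  .repartition(200)

df.write.mode("overwrite").parquet("/tmp/spark-10216-repro")
// Listing /tmp/spark-10216-repro afterwards shows one part file per partition,
// including the empty ones; 1.3.1 reportedly does not exhibit this.
```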

[GitHub] spark pull request: [SPARK-2097][SQL] UDF Support

2015-04-14 Thread DanielMe
Github user DanielMe commented on the pull request: https://github.com/apache/spark/pull/1063#issuecomment-92678825 Okay, thanks for the clarification. Initially, I had naively assumed that the functionality you added was just a layer above the Hive API, hence it was a bit confusing.

[GitHub] spark pull request: [SPARK-2097][SQL] UDF Support

2015-04-13 Thread DanielMe
Github user DanielMe commented on the pull request: https://github.com/apache/spark/pull/1063#issuecomment-92301932 Excuse my naive question; however, it seems that this does not use the regular Hive UDF API, right? (Like when I would run `hiveContext.sql("CREATE TEMPORARY FUNCTION ...")`.)
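For illustration, a sketch of the two registration paths being contrasted, assuming a `HiveContext`; the Hive UDF class name below is a placeholder, not a real class:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("udf-registration"))
val hiveContext = new HiveContext(sc)

// Hive route: the function must be a compiled class implementing Hive's UDF interface;
// 'com.example.hive.MyUpperUDF' is a placeholder, not a class that ships with Spark.
hiveContext.sql("CREATE TEMPORARY FUNCTION my_upper AS 'com.example.hive.MyUpperUDF'")

// Native route discussed in this PR: register a plain Scala function directly.
hiveContext.udf.register("my_upper_native", (s: String) => s.toUpperCase)
hiveContext.sql("SELECT my_upper_native('spark')").show()
```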