[GitHub] incubator-hivemall pull request #61: [HIVEMALL-88][SPARK] Support a function...

2017-03-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/incubator-hivemall/pull/61


[GitHub] incubator-hivemall pull request #61: [HIVEMALL-88][SPARK] Support a function...

2017-03-08 Thread myui
Github user myui commented on a diff in the pull request:

https://github.com/apache/incubator-hivemall/pull/61#discussion_r105100218
  
--- Diff: spark/spark-2.1/src/main/scala/org/apache/spark/sql/hive/HivemallOps.scala ---
@@ -805,6 +805,47 @@ final class HivemallOps(df: DataFrame) extends Logging {
     JoinTopK(kInt, df.logicalPlan, right.logicalPlan, Inner, Option(joinExprs.expr))(score.named)
   }
 
+  private def doFlatten(schema: StructType, prefix: Option[String] = None) : Seq[Column] = {
+    schema.fields.flatMap { f =>
+      val colName = prefix.map(p => s"$p.${f.name}").getOrElse(f.name)
--- End diff --

@maropu 👍 LGTM. Could you merge and close the JIRA ticket as FIXED?


[GitHub] incubator-hivemall pull request #61: [HIVEMALL-88][SPARK] Support a function...

2017-03-08 Thread maropu
Github user maropu commented on a diff in the pull request:

https://github.com/apache/incubator-hivemall/pull/61#discussion_r105099394
  
--- Diff: spark/spark-2.1/src/main/scala/org/apache/spark/sql/hive/HivemallOps.scala ---
@@ -805,6 +805,47 @@ final class HivemallOps(df: DataFrame) extends Logging {
     JoinTopK(kInt, df.logicalPlan, right.logicalPlan, Inner, Option(joinExprs.expr))(score.named)
   }
 
+  private def doFlatten(schema: StructType, prefix: Option[String] = None) : Seq[Column] = {
+    schema.fields.flatMap { f =>
+      val colName = prefix.map(p => s"$p.${f.name}").getOrElse(f.name)
--- End diff --

@myui How about the latest fix? As you suggested, I added an option for the separator.
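
For reference, a minimal sketch of what a flatten helper with a configurable separator could look like (the `separator` parameter name and the `col`/`as` wiring are illustrative assumptions, not the exact code in this PR):
```
// Sketch only; the real doFlatten added by this PR may differ.
import org.apache.spark.sql.Column
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.StructType

def doFlatten(schema: StructType, separator: String = "$", prefix: Option[String] = None): Seq[Column] = {
  schema.fields.flatMap { f =>
    // The dot path is only used to resolve the nested field against the input schema
    val path = prefix.map(p => s"$p.${f.name}").getOrElse(f.name)
    f.dataType match {
      // Recurse into nested structs
      case st: StructType => doFlatten(st, separator, Some(path))
      // The separator only affects the name of the flattened output column
      case _ => Seq(col(path).as(path.replace(".", separator)))
    }
  }
}

// Usage: df.select(doFlatten(df.schema): _*)
```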


[GitHub] incubator-hivemall pull request #61: [HIVEMALL-88][SPARK] Support a function...

2017-03-08 Thread maropu
Github user maropu commented on a diff in the pull request:

https://github.com/apache/incubator-hivemall/pull/61#discussion_r105093086
  
--- Diff: spark/spark-2.1/src/main/scala/org/apache/spark/sql/hive/HivemallOps.scala ---
@@ -805,6 +805,47 @@ final class HivemallOps(df: DataFrame) extends Logging {
     JoinTopK(kInt, df.logicalPlan, right.logicalPlan, Inner, Option(joinExprs.expr))(score.named)
   }
 
+  private def doFlatten(schema: StructType, prefix: Option[String] = None) : Seq[Column] = {
+    schema.fields.flatMap { f =>
+      val colName = prefix.map(p => s"$p.${f.name}").getOrElse(f.name)
--- End diff --

Actually, we can access this column like this:
```
scala> val df = Seq((1, (1.0, "a"))).toDF()
df: org.apache.spark.sql.DataFrame = [_1: int, _2: struct<_1: double, _2: string>]

scala> df.flatten().select("`_2._1`").show
+-----+
|_2._1|
+-----+
|  1.0|
+-----+

```


[GitHub] incubator-hivemall pull request #61: [HIVEMALL-88][SPARK] Support a function...

2017-03-08 Thread myui
Github user myui commented on a diff in the pull request:

https://github.com/apache/incubator-hivemall/pull/61#discussion_r105092196
  
--- Diff: spark/spark-2.1/src/main/scala/org/apache/spark/sql/hive/HivemallOps.scala ---
@@ -805,6 +805,47 @@ final class HivemallOps(df: DataFrame) extends Logging {
     JoinTopK(kInt, df.logicalPlan, right.logicalPlan, Inner, Option(joinExprs.expr))(score.named)
   }
 
+  private def doFlatten(schema: StructType, prefix: Option[String] = None) : Seq[Column] = {
+    schema.fields.flatMap { f =>
+      val colName = prefix.map(p => s"$p.${f.name}").getOrElse(f.name)
--- End diff --

`ds.select($"_2._2")` already works for a nested schema. So, the columns of a flattened `ds.flatten()` should be accessible without a `.` in their names.


[GitHub] incubator-hivemall pull request #61: [HIVEMALL-88][SPARK] Support a function...

2017-03-08 Thread maropu
Github user maropu commented on a diff in the pull request:

https://github.com/apache/incubator-hivemall/pull/61#discussion_r105090944
  
--- Diff: spark/spark-2.1/src/main/scala/org/apache/spark/sql/hive/HivemallOps.scala ---
@@ -805,6 +805,47 @@ final class HivemallOps(df: DataFrame) extends Logging {
     JoinTopK(kInt, df.logicalPlan, right.logicalPlan, Inner, Option(joinExprs.expr))(score.named)
   }
 
+  private def doFlatten(schema: StructType, prefix: Option[String] = None) : Seq[Column] = {
+    schema.fields.flatMap { f =>
+      val colName = prefix.map(p => s"$p.${f.name}").getOrElse(f.name)
--- End diff --

Ah, I found an issue:
```
scala> val df = Seq((1, (1.0, "a"))).toDF()
df: org.apache.spark.sql.DataFrame = [_1: int, _2: struct<_1: double, _2: string>]

scala> val ds1 = df.flatten().select("_2._1")
org.apache.spark.sql.AnalysisException: cannot resolve '`_2._1`' given input columns: [_1, _2._1, _2._2];;
'Project ['_2._1]
+- Project [_1#67 AS _1#73, _2#68._1 AS _2._1#74, _2#68._2 AS _2._2#75]
   +- LocalRelation [_1#67, _2#68]

  at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
  at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:75)
  at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:72)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
```
So, I'll reconsider this; please give me a sec. Thanks.


[GitHub] incubator-hivemall pull request #62: [HIVEMALL-89][SQL] Support to_csv/from...

2017-03-08 Thread maropu
Github user maropu commented on a diff in the pull request:

https://github.com/apache/incubator-hivemall/pull/62#discussion_r105090294
  
--- Diff: spark/spark-2.1/src/main/scala/org/apache/spark/sql/execution/datasources/csv/csvExpressions.scala ---
@@ -0,0 +1,153 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution.datasources.csv
+
+import java.io.CharArrayWriter
+
+import jodd.util.CsvUtil
--- End diff --

Updated


[GitHub] incubator-hivemall pull request #61: [HIVEMALL-88][SPARK] Support a function...

2017-03-08 Thread maropu
Github user maropu commented on a diff in the pull request:

https://github.com/apache/incubator-hivemall/pull/61#discussion_r105090100
  
--- Diff: spark/spark-2.1/src/main/scala/org/apache/spark/sql/hive/HivemallOps.scala ---
@@ -805,6 +805,47 @@ final class HivemallOps(df: DataFrame) extends Logging {
     JoinTopK(kInt, df.logicalPlan, right.logicalPlan, Inner, Option(joinExprs.expr))(score.named)
   }
 
+  private def doFlatten(schema: StructType, prefix: Option[String] = None) : Seq[Column] = {
+    schema.fields.flatMap { f =>
+      val colName = prefix.map(p => s"$p.${f.name}").getOrElse(f.name)
--- End diff --

So, the dot is more natural for Spark users.


[GitHub] incubator-hivemall pull request #61: [HIVEMALL-88][SPARK] Support a function...

2017-03-08 Thread maropu
Github user maropu commented on a diff in the pull request:

https://github.com/apache/incubator-hivemall/pull/61#discussion_r105089894
  
--- Diff: spark/spark-2.1/src/main/scala/org/apache/spark/sql/hive/HivemallOps.scala ---
@@ -805,6 +805,47 @@ final class HivemallOps(df: DataFrame) extends Logging {
     JoinTopK(kInt, df.logicalPlan, right.logicalPlan, Inner, Option(joinExprs.expr))(score.named)
   }
 
+  private def doFlatten(schema: StructType, prefix: Option[String] = None) : Seq[Column] = {
+    schema.fields.flatMap { f =>
+      val colName = prefix.map(p => s"$p.${f.name}").getOrElse(f.name)
--- End diff --

In Spark, the dot is used as the separator for column names in a nested schema.
Currently, Spark users cannot change this separator via configuration.
For example,

```
scala> val ds = Seq((1, (1.0, "a"))).toDS()
ds: org.apache.spark.sql.Dataset[(Int, (Double, String))] = [_1: int, _2: struct<_1: double, _2: string>]

scala> ds.printSchema
root
 |-- _1: integer (nullable = false)
 |-- _2: struct (nullable = true)
 |    |-- _1: double (nullable = false)
 |    |-- _2: string (nullable = true)


scala> ds.select($"_2._2").show
+---+
| _2|
+---+
|  a|
+---+
```


[GitHub] incubator-hivemall pull request #61: [HIVEMALL-88][SPARK] Support a function...

2017-03-08 Thread maropu
Github user maropu commented on a diff in the pull request:

https://github.com/apache/incubator-hivemall/pull/61#discussion_r105088535
  
--- Diff: spark/spark-2.1/src/main/scala/org/apache/spark/sql/hive/HivemallOps.scala ---
@@ -805,6 +805,47 @@ final class HivemallOps(df: DataFrame) extends Logging {
     JoinTopK(kInt, df.logicalPlan, right.logicalPlan, Inner, Option(joinExprs.expr))(score.named)
   }
 
+  private def doFlatten(schema: StructType, prefix: Option[String] = None) : Seq[Column] = {
+    schema.fields.flatMap { f =>
+      val colName = prefix.map(p => s"$p.${f.name}").getOrElse(f.name)
--- End diff --

I know, but this is Spark-specific behavior. So, the change you suggested makes `doFlatten` fail.
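
For illustration only (a spark-shell sketch, not code from this PR): the dot path is what Spark's analyzer understands when resolving nested struct fields, so another separator cannot be used to look fields up on the un-flattened DataFrame.
```
scala> val df = Seq((1, (1.0, "a"))).toDF()
df: org.apache.spark.sql.DataFrame = [_1: int, _2: struct<_1: double, _2: string>]

scala> df.select("_2._1").show   // the dot path is resolved as struct-field access
+---+
| _1|
+---+
|1.0|
+---+

scala> df.select("_2$_1").show   // fails: AnalysisException: cannot resolve '`_2$_1`'
```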


[GitHub] incubator-hivemall issue #61: [HIVEMALL-88][SPARK] Support a function to fla...

2017-03-08 Thread coveralls
Github user coveralls commented on the issue:

https://github.com/apache/incubator-hivemall/pull/61
  

[![Coverage Status](https://coveralls.io/builds/10503982/badge)](https://coveralls.io/builds/10503982)

Coverage remained the same at 36.739% when pulling **880bed97b48889ee8bc5a51807a97c2fdc032bee on maropu:HIVEMALL-88** into **210b7765b9395e372edbdce925edb48cd180ee48 on apache:master**.



[GitHub] incubator-hivemall pull request #61: [HIVEMALL-88][SPARK] Support a function...

2017-03-08 Thread myui
Github user myui commented on a diff in the pull request:

https://github.com/apache/incubator-hivemall/pull/61#discussion_r105071319
  
--- Diff: spark/spark-2.1/src/main/scala/org/apache/spark/sql/hive/HivemallOps.scala ---
@@ -805,6 +805,47 @@ final class HivemallOps(df: DataFrame) extends Logging {
     JoinTopK(kInt, df.logicalPlan, right.logicalPlan, Inner, Option(joinExprs.expr))(score.named)
   }
 
+  private def doFlatten(schema: StructType, prefix: Option[String] = None) : Seq[Column] = {
+    schema.fields.flatMap { f =>
+      val colName = prefix.map(p => s"$p.${f.name}").getOrElse(f.name)
--- End diff --

The dot `.` is a special symbol in SQL, so it would be better to change the default separator (e.g., to `$`) and provide an option to choose it. For instance, `news20.train` means the `train` table in the `news20` database.

Allowed characters in SQL identifiers depend on the RDBMS but generally include `[0-9a-zA-Z$_]`:
https://www.postgresql.org/docs/9.1/static/sql-syntax-lexical.html
https://msdn.microsoft.com/en-us/library/ms175874.aspx
https://dev.mysql.com/doc/refman/5.7/en/identifiers.html
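
For illustration, a spark-shell sketch of the difference (the column names and data below are made up for this example, not taken from the PR):
```
scala> // A flattened column whose name contains a dot must be backtick-quoted
scala> val flatDot = Seq((1.0, "a")).toDF("_2._1", "_2._2")
scala> flatDot.select("`_2._1`").show   // works, but only with backticks

scala> // With '$' as the separator, no quoting is needed
scala> val flatDollar = Seq((1.0, "a")).toDF("_2$_1", "_2$_2")
scala> flatDollar.select("_2$_1").show
```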


[jira] [Created] (HIVEMALL-89) Support to_csv/from_csv in HivemallOps

2017-03-08 Thread Takeshi Yamamuro (JIRA)
Takeshi Yamamuro created HIVEMALL-89:


    Summary: Support to_csv/from_csv in HivemallOps
        Key: HIVEMALL-89
        URL: https://issues.apache.org/jira/browse/HIVEMALL-89
    Project: Hivemall
 Issue Type: Improvement
   Reporter: Takeshi Yamamuro


It is useful to support to_csv/from_csv for Spark (see SPARK-15463 for the related discussion).
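
A rough usage sketch of what this could look like, assuming an API shaped like Spark's built-in to_json/from_json (the function names come from the JIRA summary; the exact signatures and import location are assumptions):
```
// Hypothetical usage only; the actual API added for HIVEMALL-89 may differ.
import org.apache.spark.sql.functions.struct

val df = Seq((0.1, "a"), (0.2, "b")).toDF("value", "label")

// Serialize selected columns into a single CSV-formatted string column
val csv = df.select(to_csv(struct($"value", $"label")).as("line"))

// Parse the CSV strings back into a struct, given the expected schema
val parsed = csv.select(from_csv($"line", df.schema).as("row"))
```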


