GitHub user viirya opened a pull request:
https://github.com/apache/spark/pull/17623
[SPARK-20292][SQL][WIP] Clean up string representation of TreeNode
## What changes were proposed in this pull request?
Currently we have many string representations for `QueryPlan`/`Expression`: `toString`, `simpleString`, `verboseString`, `treeString`, etc. The division of responsibility between them is not very clear. The most obvious problem is that `Expression.treeString` is mostly unreadable, as it contains a lot of duplicated information. We should clean it up.
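To make the relationship between these representations concrete, here is a minimal sketch. This is not Spark's actual `TreeNode` API; the `Node` class and its method bodies are hypothetical, modeled only on the method names above. It shows a one-line `simpleString`, a more detailed `verboseString`, and a recursive `treeString` that draws the tree with the `:- `/`+- ` prefixes seen in the dumps below:

```scala
// Hypothetical sketch, not Spark's TreeNode implementation.
case class Node(name: String, detail: String, children: Seq[Node] = Nil) {
  // One-line description of this node only.
  def simpleString: String = name

  // One-line description with extra per-node detail.
  def verboseString: String = s"$name($detail)"

  // Recursive rendering of the whole subtree, one node per line,
  // using ":- " for non-last children and "+- " for the last child.
  def treeString: String = {
    val sb = new StringBuilder
    def loop(node: Node, prefix: String, childPrefix: String): Unit = {
      sb.append(prefix).append(node.simpleString).append('\n')
      node.children.zipWithIndex.foreach { case (child, i) =>
        val last = i == node.children.size - 1
        val branch = if (last) "+- " else ":- "
        val indent = if (last) "   " else ":  "
        loop(child, childPrefix + branch, childPrefix + indent)
      }
    }
    loop(this, "", "")
    sb.toString
  }
}
```

The key design point is that `treeString` calls each node's *own* one-line string and lets indentation convey the structure, so child information is printed exactly once.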
For the following deserializer:
```scala
val encoder = ExpressionEncoder.tuple(
  ExpressionEncoder[StringLongClass],
  ExpressionEncoder[Long])
val attrs = Seq('a.struct('a.string, 'b.byte), 'b.int)
encoder.resolveAndBind(attrs).fromRow(InternalRow(InternalRow(str, 1.toByte), 2))
```
Before this patch, the output of `treeString` looks like:

```
newInstance(class scala.Tuple2)
:- if (isnull(input[0, struct<a:string,b:tinyint>, true])) null else newInstance(class org.apache.spark.sql.catalyst.encoders.StringLongClass)
:  :- isnull(input[0, struct<a:string,b:tinyint>, true])
:  :  +- input[0, struct<a:string,b:tinyint>, true]
:  :- null
:  +- newInstance(class org.apache.spark.sql.catalyst.encoders.StringLongClass)
:     :- input[0, struct<a:string,b:tinyint>, true].a.toString
:     :  +- input[0, struct<a:string,b:tinyint>, true].a
:     :     +- input[0, struct<a:string,b:tinyint>, true]
:     +- assertnotnull(cast(input[0, struct<a:string,b:tinyint>, true].b as bigint), - field (class: "scala.Long", name: "b"), - root class: "org.apache.spark.sql.catalyst.encoders.StringLongClass")
:        +- cast(input[0, struct<a:string,b:tinyint>, true].b as bigint)
:           +- input[0, struct<a:string,b:tinyint>, true].b
:              +- input[0, struct<a:string,b:tinyint>, true]
+- cast(input[1, int, true] as bigint)
   +- input[1, int, true]
```
After this patch, the output of `treeString` looks like:

```
newInstance(class scala.Tuple2)
:- if
:  :- isnull
:  :  +- input[0, struct<a:string,b:tinyint>, true]
:  :- null
:  +- newInstance(class org.apache.spark.sql.catalyst.encoders.StringLongClass)
:     :- input[0, struct<a:string,b:tinyint>, true].a.toString
:     :  +- getstructfield(0, a)
:     :     +- input[0, struct<a:string,b:tinyint>, true]
:     +- assertnotnull([- field (class: "scala.Long", name: "b"), - root class: "org.apache.spark.sql.catalyst.encoders.StringLongClass"])
:        +- cast to bigint
:           +- getstructfield(1, b)
:              +- input[0, struct<a:string,b:tinyint>, true]
+- cast to bigint
   +- input[1, int, true]
```
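The difference between the two outputs comes down to what each node prints for itself: before, a node's one-line description embedded the full strings of its children (e.g. `cast(input[1, int, true] as bigint)`), so every subtree was repeated once per ancestor level; after, a node describes only its own arguments (e.g. `cast to bigint`) and the child appears only on its own indented line. A toy sketch of that idea, using a hypothetical `Cast` class rather than Spark's actual expression:

```scala
// Hypothetical illustration of the deduplication idea -- not Spark's Cast.
case class Cast(child: String, dataType: String) {
  // Before the patch: the child's text is embedded in the parent's line,
  // so it appears again on the child's own line of treeString.
  def verboseBefore: String = s"cast($child as $dataType)"

  // After the patch: only this node's own information is printed;
  // the child is rendered separately by the tree layout.
  def verboseAfter: String = s"cast to $dataType"
}
```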
## How was this patch tested?
Existing tests.
Please review http://spark.apache.org/contributing.html before opening a
pull request.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/viirya/spark-1 clean-string
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/17623.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #17623
----
commit 8ba6627b1a2dd3a62c60c203fc098c18755ed80a
Author: Liang-Chi Hsieh <[email protected]>
Date: 2017-04-12T13:56:32Z
Clean up string functions.
----