[
https://issues.apache.org/jira/browse/FLINK-15573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17015915#comment-17015915
]
Zili Chen commented on FLINK-15573:
-----------------------------------
cc [~jark] [~tiwalter], what do you think of this?
> Let Flink SQL PlannerExpressionParserImpl#FieldReference use Unicode as its
> default charset
> ---------------------------------------------------------------------------------------------
>
> Key: FLINK-15573
> URL: https://issues.apache.org/jira/browse/FLINK-15573
> Project: Flink
> Issue Type: Improvement
> Components: Table SQL / Planner
> Reporter: Lsw_aka_laplace
> Priority: Minor
>
> I am talking about `PlannerExpressionParserImpl` here.
> At the moment the `fieldReference` rule only accepts Java identifiers; why not
> change it to Unicode identifiers?
> My team actually runs into this problem. For instance, data from Elasticsearch
> always contains an `@timestamp` field, which a Java identifier cannot express.
> So what we did was simply let the `fieldReference` rule accept Unicode identifiers:
>
> {code:scala}
> lazy val extensionIdent: Parser[String] = ( "" ~> // handle whitespace
>   rep1(
>     acceptIf(Character.isUnicodeIdentifierStart)("identifier expected but '" + _ + "' found"),
>     elem("identifier part", Character.isUnicodeIdentifierPart(_: Char))
>   ) ^^ (_.mkString) )
>
> lazy val fieldReference: PackratParser[UnresolvedReferenceExpression] =
>   (STAR | ident | extensionIdent) ^^ { sym => unresolvedRef(sym) }
> {code}
>
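> For reference, below is a minimal standalone sketch (not Flink code; the object name
> and the sample field names are made up for illustration) that mirrors the proposed
> `extensionIdent` rule on top of scala-parser-combinators, so the exact set of accepted
> field names can be checked locally:
>
> {code:scala}
> import scala.util.parsing.combinator.RegexParsers
>
> // Standalone sketch of the proposed rule: one Unicode identifier start character
> // followed by any number of Unicode identifier part characters.
> object ExtensionIdentDemo extends RegexParsers {
>   val extensionIdent: Parser[String] =
>     rep1(
>       acceptIf(Character.isUnicodeIdentifierStart)("identifier expected but '" + _ + "' found"),
>       elem("identifier part", Character.isUnicodeIdentifierPart(_: Char))
>     ) ^^ (_.mkString)
>
>   def main(args: Array[String]): Unit = {
>     // Print, for each candidate field name, whether the rule accepts it.
>     Seq("user_name", "时间戳", "@timestamp").foreach { name =>
>       println(s"$name -> ${parseAll(extensionIdent, name)}")
>     }
>   }
> }
> {code}
>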
> It is a simple change, but it really makes sense.
> Looking forward to any opinions.
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)