Lsw_aka_laplace created FLINK-15573:
---------------------------------------

             Summary: Let Flink SQL PlannerExpressionParserImpl#fieldReference
use Unicode as its default charset
                 Key: FLINK-15573
                 URL: https://issues.apache.org/jira/browse/FLINK-15573
             Project: Flink
          Issue Type: Improvement
          Components: Table SQL / Planner
            Reporter: Lsw_aka_laplace


This is about `PlannerExpressionParserImpl`.

    For now, the `fieldReference` rule only accepts Java identifiers; why not change it
to accept Unicode identifiers?

    Currently in my team, we actually have this problem. For instance, data
from Elasticsearch always contains an `@timestamp` field, which is not a valid Java identifier.
So what we did was let the `fieldReference` rule accept Unicode identifiers:

 
{code:scala}
lazy val extensionIdent: Parser[String] = (
  "" ~> // handle whitespace
  rep1(
    acceptIf(Character.isUnicodeIdentifierStart)("identifier expected but '" + _ + "' found"),
    elem("identifier part", Character.isUnicodeIdentifierPart(_: Char))
  ) ^^ (_.mkString)
)

lazy val fieldReference: PackratParser[UnresolvedReferenceExpression] =
  (STAR | ident | extensionIdent) ^^ { sym => unresolvedRef(sym) }
{code}
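
For reference, here is a minimal standalone sketch (assumed names, not Flink code; it only needs the scala-parser-combinators library) that wires the same `Character.isUnicodeIdentifierStart` / `Character.isUnicodeIdentifierPart` predicates into a small `RegexParsers` grammar, so the behaviour of the proposed rule can be tried on non-ASCII field names in isolation:

{code:scala}
import scala.util.parsing.combinator.RegexParsers

// Hypothetical demo object, not part of Flink.
object ExtensionIdentSketch extends RegexParsers {

  // Same shape as the proposed extensionIdent: the first character must satisfy
  // Character.isUnicodeIdentifierStart, the remaining ones Character.isUnicodeIdentifierPart.
  lazy val extensionIdent: Parser[String] = (
    "" ~> // handle whitespace
    rep1(
      acceptIf(Character.isUnicodeIdentifierStart)("identifier expected but '" + _ + "' found"),
      elem("identifier part", Character.isUnicodeIdentifierPart(_: Char))
    ) ^^ (_.mkString)
  )

  def main(args: Array[String]): Unit = {
    // Non-ASCII field names are accepted once the rule is based on Unicode identifiers.
    Seq("temperature", "温度", "поле1").foreach { name =>
      println(name + " -> " + parseAll(extensionIdent, name))
    }
  }
}
{code}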
 

It is simple but really makes sense.

Looking forward to any opinions.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
