HyukjinKwon commented on pull request #30984:
URL: https://github.com/apache/spark/pull/30984#issuecomment-757409794


   @tedyu, special characters are not allowed in some sources such as Hive, as you tested. However, they are allowed in other sources when you use the DSL:
   
   ```scala
   scala> spark.range(1).toDF("GetJsonObject(phone#37,$.phone)").write.option("header", true).mode("overwrite").csv("/tmp/foo")

   scala> spark.read.option("header", true).csv("/tmp/foo").show()
   +-------------------------------+
   |GetJsonObject(phone#37,$.phone)|
   +-------------------------------+
   |                              0|
   +-------------------------------+
   ```
   
   In this case, the filters will still be pushed down to the data source implementation site, and we need a way to tell whether the pushed `GetJsonObject(phone#37,$.phone)` is a field name or a pushed-down expression.
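   
   To illustrate the ambiguity, here is a minimal sketch of what a data source implementation sees after pushdown; the `looksLikeExpression` heuristic is hypothetical, purely for illustration:
   
   ```scala
   import org.apache.spark.sql.sources.{EqualTo, Filter}
   
   // What the implementation receives: the attribute is only a string, so it
   // cannot tell whether "GetJsonObject(phone#37,$.phone)" names a real column
   // or is the string form of a pushed-down expression.
   val pushed: Filter = EqualTo("GetJsonObject(phone#37,$.phone)", "0")
   
   // Hypothetical heuristic; it misclassifies legal column names containing '('.
   def looksLikeExpression(attribute: String): Boolean = attribute.contains("(")
   
   pushed match {
     case EqualTo(attribute, value) =>
       println(s"$attribute = $value; expression? ${looksLikeExpression(attribute)}")
     case other =>
       println(s"other filter: $other")
   }
   ```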
   
   This makes me believe the current string-based implementation is flaky and incomplete.

