AngersZhuuuu opened a new pull request #35229:
URL: https://github.com/apache/spark/pull/35229


   ### What changes were proposed in this pull request?
   It's reasonable for Spark to forbid special characters in column names when writing, but when reading existing Parquet files there is no point in enforcing that restriction on the Spark side. This PR removes the field name check when reading existing Parquet files.
   
   
   
   ### Why are the changes needed?
   Support reading existing Parquet files whose column names contain special characters.
   
   ### Does this PR introduce _any_ user-facing change?
   Yes. Users can read existing Parquet files with special characters in column names. They can then wrap such a column name in backticks, e.g. `` `max(t)` ``, or alias it (e.g. `` `max(t)` `` AS `max_t`) so it can be referenced in later queries.
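
   The point above rests on the fact that the Parquet format itself places no restriction on characters like parentheses in field names. A minimal sketch of that (using pyarrow rather than Spark, and an illustrative file name `example.parquet`; both are assumptions, not part of this PR):

   ```python
   # Write and read back a Parquet file whose column name contains
   # special characters. pyarrow stands in here for any Parquet writer;
   # the file name "example.parquet" is illustrative.
   import pyarrow as pa
   import pyarrow.parquet as pq

   table = pa.table({"max(t)": [1, 2, 3]})
   pq.write_table(table, "example.parquet")

   # The special-character column name survives the round trip unchanged,
   # so a reader that rejects it is being stricter than the format requires.
   read_back = pq.read_table("example.parquet")
   print(read_back.column_names)
   ```

   After this PR, Spark can read such a file; the user then quotes the column with backticks (or aliases it) on the query side.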
   
   
   ### How was this patch tested?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


