mohaidoss commented on PR #32702:
URL: https://github.com/apache/spark/pull/32702#issuecomment-3319779856

   Sorry to jump on a very old thread.
   @viirya, is this still the recommended way to do this? In our case, we 
renamed the sink S3 bucket from `s3a://old_path` to `s3a://new_path`, but 
now we can't batch read the files because `_spark_metadata` still points to 
`s3a://old_path`.
   
   Setting the config `spark.sql.streaming.fileStreamSink.ignoreMetadata` 
solves this for us, but I'm curious whether there is a more elegant way, for 
example by actually fixing the `_spark_metadata` folder?
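
   For context, the workaround above amounts to telling the batch reader to 
skip the `_spark_metadata` directory and fall back to a plain file listing. A 
minimal configuration sketch, assuming a live PySpark session (`spark`), a 
Parquet sink, and the `s3a://new_path` location from the example (not a 
tested pipeline):

   ```python
   # Ignore the _spark_metadata directory left behind by the streaming
   # file sink, so the stale entries pointing at s3a://old_path are not
   # consulted when reading the output as a batch source.
   spark.conf.set("spark.sql.streaming.fileStreamSink.ignoreMetadata", "true")

   # With the metadata ignored, Spark lists the directory contents directly.
   df = spark.read.parquet("s3a://new_path")
   ```

   Note this trades away the exactly-once file listing the sink metadata 
provides, so any incomplete files from in-flight batches would also become 
visible.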


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

