GitHub user cloud-fan commented on the issue:

    https://github.com/apache/spark/pull/21329
  
    The history is exactly as @JoshRosen said: the conf setting logic has been 
on the write side since day 1, and then #101 applied it to the read side. 
My major concern is that the driver side just sets some dummy values (job id 0, 
task id 0, numPartitions 0), and these confs are set again on the executor side 
with the real values. It seems to me that we set the confs on the driver side 
just to keep the behavior consistent between driver and executor; there is no 
specific reason.
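    The dummy-then-real pattern described above can be sketched roughly as 
follows. This is a minimal illustration, not Spark's actual code: a plain Map 
stands in for a Hadoop Configuration, and the method names are hypothetical, 
though the `mapreduce.*` keys follow the usual Hadoop naming convention.

```java
import java.util.HashMap;
import java.util.Map;

public class ConfPattern {

    // Driver side: set placeholder values (a real job id, but dummy task
    // values) so the conf looks the same on the driver as it later will on
    // executors. Illustrative only; not Spark's actual API.
    static void setConfOnDriver(Map<String, String> conf, String jobId) {
        conf.put("mapreduce.job.id", jobId);
        conf.put("mapreduce.task.id", "0");        // dummy: no task on driver
        conf.put("mapreduce.task.partition", "0"); // dummy: overwritten later
    }

    // Executor side: the same keys are set again, this time with the real
    // values of the running task, shadowing the driver-side dummies.
    static void setConfOnExecutor(Map<String, String> conf, String jobId,
                                  long taskId, int partitionId) {
        conf.put("mapreduce.job.id", jobId);
        conf.put("mapreduce.task.id", Long.toString(taskId));
        conf.put("mapreduce.task.partition", Integer.toString(partitionId));
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        setConfOnDriver(conf, "job_0");
        // Later, on an executor, real task values replace the dummies:
        setConfOnExecutor(conf, "job_0", 42L, 3);
        System.out.println(conf.get("mapreduce.task.id"));        // 42
        System.out.println(conf.get("mapreduce.task.partition")); // 3
    }
}
```

    The point of the sketch is that the driver-side call contributes nothing 
observable once an executor runs: every dummy key is overwritten.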
    
    After migrating the file source to data source v2, its implementation will 
be the best data source v2 example, and hopefully we won't have mysterious code 
to confuse our readers :)

