cxzl25 commented on pull request #30725:
URL: https://github.com/apache/spark/pull/30725#issuecomment-745878535


   To clarify, the partition here refers to a partition of the Hive table,
not an RDD partition.
   For example, when Spark SQL reads a Hive table that has 10,000
partitions,
   `HadoopTableReader#makeRDDForPartitionedTable` will create 10,000 RDDs, which
means there are 10,000 JobConfs.
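
   The one-JobConf-per-Hive-partition relationship can be sketched as follows. This is a simplified stand-in, not Spark's actual code: `JobConf`, `SimpleRDD`, and the paths are hypothetical placeholders that only mirror the shape of `HadoopTableReader#makeRDDForPartitionedTable`.

```scala
// Simplified sketch (assumption: not Spark's real implementation) of how
// makeRDDForPartitionedTable builds one RDD per Hive table partition,
// cloning a job configuration for each one.
case class JobConf(partitionPath: String)           // stand-in for Hadoop's JobConf
case class SimpleRDD(conf: JobConf, rows: Seq[Int]) // stand-in for HadoopRDD

def makeRDDForPartitionedTable(partitionPaths: Seq[String]): Seq[SimpleRDD] =
  partitionPaths.map { path =>
    // one JobConf is created per Hive partition
    SimpleRDD(JobConf(path), rows = Seq.empty)
  }

val rdds = makeRDDForPartitionedTable((1 to 10000).map(i => s"/warehouse/t/ds=$i"))
println(rdds.size) // one RDD, and therefore one JobConf, per Hive partition
```

   With 10,000 Hive partitions, this produces 10,000 distinct config objects, which is the overhead the comment is pointing at.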
   
   

