Github user yucai commented on the issue:

    https://github.com/apache/spark/pull/19788
  
    @jerryshao @cloud-fan @gczsjdy 
    
    Because this feature is only used in adaptive execution, how about the following approach:
    
    - Remove `spark.shuffle.continuousFetch`.
    - When `spark.sql.adaptive.enabled` is `true`, fetch contiguous partition IDs as a single block via `ContinuousShuffleBlockId` (see the sketch below).
    - When `spark.sql.adaptive.enabled` is `false` (the default), Spark uses `ShuffleBlockId` as before.
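
    A minimal, self-contained sketch of that gating (the types here are simplified stand-ins rather than Spark's actual internals, and the fields and name format of `ContinuousShuffleBlockId` are assumptions based on this PR's description):

    ```scala
    // Simplified stand-ins for Spark's block id types, just to illustrate
    // the proposed gating; not Spark's actual internals.
    sealed trait BlockId { def name: String }

    // Existing per-partition shuffle block id (mirrors
    // org.apache.spark.storage.ShuffleBlockId).
    case class ShuffleBlockId(shuffleId: Int, mapId: Int, reduceId: Int) extends BlockId {
      def name: String = s"shuffle_${shuffleId}_${mapId}_${reduceId}"
    }

    // Block id proposed in this PR for a contiguous range of partitions;
    // the exact fields and name format here are assumptions.
    case class ContinuousShuffleBlockId(shuffleId: Int, mapId: Int,
        reduceId: Int, numBlocks: Int) extends BlockId {
      def name: String = s"shuffle_${shuffleId}_${mapId}_${reduceId}_${numBlocks}"
    }

    // Gate the contiguous fetch on spark.sql.adaptive.enabled instead of a
    // dedicated spark.shuffle.continuousFetch flag.
    def blockIdForFetch(adaptiveEnabled: Boolean, shuffleId: Int, mapId: Int,
        startReduceId: Int, endReduceId: Int): BlockId =
      if (adaptiveEnabled && endReduceId - startReduceId > 1)
        // Adaptive execution on: one block id covering [startReduceId, endReduceId).
        ContinuousShuffleBlockId(shuffleId, mapId, startReduceId, endReduceId - startReduceId)
      else
        // Adaptive execution off (default): one block per partition, as before.
        ShuffleBlockId(shuffleId, mapId, startReduceId)
    ```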
    
    With this approach, users do not need to upgrade their external shuffle service 
for the new Spark version if they do not use adaptive execution (the most likely case).
    
    If users want to use adaptive execution, they have to upgrade the external 
shuffle service, because the old block-id format carries no `length` information 
for a range of partitions (see the index-file sketch below).
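
    To make the `length` point concrete: Spark's shuffle index file stores cumulative byte offsets, so serving a contiguous range requires knowing how many partitions the request spans. A rough sketch of the range lookup (`rangeOffsetAndLength` is a hypothetical helper for illustration, not the PR's actual code):

    ```scala
    import java.io.{DataInputStream, File, FileInputStream}

    // The shuffle index file holds (numPartitions + 1) cumulative offsets as
    // Longs, so partitions [start, start + numBlocks) map to the byte range
    // [offsets(start), offsets(start + numBlocks)). A service that only
    // understands shuffle_<shuffle>_<map>_<reduce> never learns numBlocks,
    // so it cannot compute the combined length.
    def rangeOffsetAndLength(indexFile: File, startReduceId: Int,
        numBlocks: Int): (Long, Long) = {
      val in = new DataInputStream(new FileInputStream(indexFile))
      try {
        in.skipBytes(startReduceId * 8)        // seek to offsets(startReduceId); a Long is 8 bytes
        val startOffset = in.readLong()
        in.skipBytes((numBlocks - 1) * 8)      // seek to offsets(startReduceId + numBlocks)
        val endOffset = in.readLong()
        (startOffset, endOffset - startOffset) // (byte offset, total length of the range)
      } finally {
        in.close()
      }
    }
    ```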


