attilapiros commented on a change in pull request #26869: [SPARK-30235][CORE] 
Switching off host local disk reading of shuffle blocks in case of 
useOldFetchProtocol
URL: https://github.com/apache/spark/pull/26869#discussion_r358589705
 
 

 ##########
 File path: docs/sql-migration-guide.md
 ##########
 @@ -97,7 +97,7 @@ license: |
 
  - Since Spark 3.0, when Avro files are written with a user-provided non-nullable schema, even if the catalyst schema is nullable, Spark is still able to write the files. However, Spark will throw a runtime NPE if any of the records contains null.
 
-  - Since Spark 3.0, we use a new protocol for fetching shuffle blocks. External shuffle service users need to upgrade the server correspondingly; otherwise, they will get the error message `UnsupportedOperationException: Unexpected message: FetchShuffleBlocks`. If it is hard to upgrade the shuffle service right now, you can still use the old protocol by setting `spark.shuffle.useOldFetchProtocol` to `true`.
+  - Since Spark 3.0, we use a new protocol for fetching shuffle blocks. External shuffle service users need to upgrade the server correspondingly; otherwise, they will get the error message `IllegalArgumentException: Unexpected message type: <number>`. If it is hard to upgrade the shuffle service right now, you can still use the old protocol by setting `spark.shuffle.useOldFetchProtocol` to `true`.
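
To illustrate the Avro entry quoted above, here is a minimal sketch (not part of the PR): the data, column names, output path and record schema are placeholders, and `format("avro")` assumes the spark-avro module is on the classpath; only the described behaviour comes from the migration guide.

```scala
// Sketch: write Avro with a user-provided non-nullable schema while the
// Catalyst columns are nullable. Since Spark 3.0 the write is allowed,
// but the row containing a null fails with a runtime NPE.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("avro-nullability-sketch").getOrCreate()
import spark.implicits._

// Non-nullable user-provided Avro schema (both fields are required).
val avroSchema =
  """{"type":"record","name":"Rec","fields":[
    |  {"name":"name","type":"string"},
    |  {"name":"id","type":"int"}
    |]}""".stripMargin

Seq(("a", 1), ("b", 2), (null, 3)).toDF("name", "id")  // nullable Catalyst column "name"
  .write
  .format("avro")                                      // requires the spark-avro module
  .option("avroSchema", avroSchema)
  .save("/tmp/avro-nullability-sketch")                // runtime NPE because of the null row
```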
 
 Review comment:
   I have moved it to the core migration guide and removed the "Since Spark 3.0, " prefix, as the guide now has subsections per Spark version.
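
   For reference, a minimal sketch (not part of the PR) of how an application can keep the legacy fetch protocol while its external shuffle service is still on a pre-3.0 version; only the `spark.shuffle.useOldFetchProtocol` key comes from the guide, the app name is a placeholder.

```scala
// Sketch: fall back to the pre-3.0 shuffle fetch protocol so an external
// shuffle service that has not been upgraded yet does not reject the new
// FetchShuffleBlocks message. The app name is illustrative only.
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

val conf = new SparkConf()
  .setAppName("legacy-shuffle-fetch-sketch")
  .set("spark.shuffle.useOldFetchProtocol", "true")

val spark = SparkSession.builder().config(conf).getOrCreate()
// ... run shuffle-heavy jobs as usual against the old shuffle service ...
spark.stop()
```

   The same setting can also be passed at submit time with `--conf spark.shuffle.useOldFetchProtocol=true`.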

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]

