attilapiros commented on issue #23510: [SPARK-26590][CORE] make fetch-block-to-disk backward compatible
URL: https://github.com/apache/spark/pull/23510#issuecomment-454500943
 
 
   Yes, I tried to solve the same issue by adding an extra attribute, remainingFrameSize, to ChunkFetchSuccess to store the portion of the frame that has not yet been read (as it may still be streamed to disk). If the incoming ChunkFetchSuccess body size exceeded spark.maxRemoteBlockSizeFetchToMem, I skipped reading the whole body in TransportFrameDecoder and filled in this size. (My TransportFrameDecoder did not even produce plain ByteBuf instances but a half-parsed message, called ParsedFrame, containing the message type and either the body size or this remaining size; the specific messages were then created from these ParsedFrames.)
   
   Anyway, the source is available here 
https://github.com/attilapiros/spark/pull/1/files#diff-fa724c37d2f4d18795dabb9124a71213
 (but I doubt whether it is useful for you right now).
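
   To make the idea concrete, here is a minimal sketch of that design. The names ParsedFrame, remainingFrameSize, and the threshold parameter mirror the description above but are simplified stand-ins; the real classes live in Spark's network-common module and differ in detail:

```scala
// Hypothetical half-parsed frame as described above: the message type and
// body size are known, but the body itself may not be fully buffered.
case class ParsedFrame(msgType: Byte, bodySize: Long, remainingFrameSize: Long)

object FrameSketch {
  // Decide whether the decoder should stop buffering and leave the rest of
  // the frame to be streamed to disk. `maxRemoteBlockSizeFetchToMem` stands
  // in for the spark.maxRemoteBlockSizeFetchToMem threshold.
  def parse(msgType: Byte,
            bodySize: Long,
            bytesAvailable: Long,
            maxRemoteBlockSizeFetchToMem: Long): ParsedFrame = {
    if (bodySize > maxRemoteBlockSizeFetchToMem) {
      // Large body: skip reading it in the decoder and record how much of
      // the frame is still unread so it can be streamed to disk later.
      val remaining = math.max(0L, bodySize - bytesAvailable)
      ParsedFrame(msgType, bodySize, remaining)
    } else {
      // Small body: the decoder buffers it fully, so nothing remains.
      ParsedFrame(msgType, bodySize, 0L)
    }
  }
}
```

   The point of the intermediate ParsedFrame is that the downstream message handler can build the specific message (e.g. a ChunkFetchSuccess) from it and knows, via remainingFrameSize, whether a tail of the frame still has to be drained to disk.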
