Hi Matt,

Thanks for the great document and proposal. I want to +1 the idea of reliable shuffle data and give some feedback.

I think a reliable shuffle service based on DFS is necessary for Spark, especially when running Spark jobs in an unstable environment. For example, when Spark is co-deployed with online services, Spark executors can be killed at any time. With the current stage-retry strategy, such a job can end up many times slower than a normal run.

We (Baidu Inc.) have actually solved this problem with a stable shuffle service built on Hadoop, and we are now integrating Spark with that shuffle service (a rough sketch of the general idea is at the end of this mail). We expect the POC work to be finished in October, and we'll post benchmarks and more detailed results at that time.

I'm still reading your discussion document and will be happy to give more feedback in the doc.
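To make the idea concrete, here is a minimal sketch of what DFS-backed map output could look like: each map task writes its per-reducer output to a shared filesystem path instead of executor-local disk, so a reducer can still fetch it after the producing executor is gone, instead of triggering a FetchFailedException and a stage retry. This is only my own illustration, not our actual implementation and not anything from the design doc; the object, method names, and path layout are hypothetical, and only the Hadoop FileSystem calls are real APIs.

    import java.io.OutputStream

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}

    object DfsShuffleSketch {
      // Write each reduce partition of one map task's output to a DFS path.
      // A real service would consolidate files and keep an index, much like
      // Spark's local sort-based shuffle does, rather than one file per pair.
      def writeMapOutput(shuffleId: Int, mapId: Int,
                         partitions: Iterator[(Int, Array[Byte])]): Unit = {
        val fs = FileSystem.get(new Configuration())
        partitions.foreach { case (reduceId, bytes) =>
          val path = new Path(s"/spark-shuffle/$shuffleId/$mapId/$reduceId")
          val out: OutputStream = fs.create(path, true)  // overwrite = true
          try out.write(bytes) finally out.close()
        }
      }

      // The reducer reads its input directly from the DFS, independent of
      // whether the producing executor is still alive; that independence is
      // what removes the need to recompute the whole map stage.
      def readReduceInput(shuffleId: Int, mapId: Int, reduceId: Int): Array[Byte] = {
        val fs = FileSystem.get(new Configuration())
        val path = new Path(s"/spark-shuffle/$shuffleId/$mapId/$reduceId")
        val buf = new Array[Byte](fs.getFileStatus(path).getLen.toInt)
        val in = fs.open(path)
        try { in.readFully(buf); buf } finally in.close()
      }
    }

A production service obviously also needs file consolidation, commit semantics, and cleanup, which are exactly the kinds of trade-offs worth covering in the discussion doc.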
Thanks,
Yuanjian Li

Matt Cheah <mch...@palantir.com> wrote on Sat, Sep 1, 2018, at 8:42 AM:

> Hi everyone,
>
> I filed SPARK-25299 <https://issues.apache.org/jira/browse/SPARK-25299>
> to promote discussion on how we can improve the shuffle operation in Spark.
> The basic premise is to discuss the ways we can leverage distributed
> storage to improve the reliability and isolation of Spark’s shuffle
> architecture.
>
> A few designs and a full problem statement are outlined in this architecture
> discussion document
> <https://docs.google.com/document/d/1uCkzGGVG17oGC6BJ75TpzLAZNorvrAU3FRd2X-rVHSM/edit#heading=h.btqugnmt2h40>.
>
> This is a complex problem and it would be great to get feedback from the
> community about the right direction to take this work in. Note that we have
> not yet committed to a specific implementation and architecture – there’s a
> lot that needs to be discussed for this improvement, so we hope to get as
> much input as possible before moving forward with a design.
>
> Please feel free to leave comments and suggestions on the JIRA ticket or
> on the discussion document.
>
> Thank you!
>
> -Matt Cheah