GitHub user mateiz commented on the pull request:

    https://github.com/apache/incubator-spark/pull/468#issuecomment-35179416
  
    Hey @RongGu, I looked at this and I do see a major problem with the design. 
The current design only seems to pass an AppID to the executor in standalone 
mode, so it won't work on Mesos or YARN. Furthermore, the AppID passed to 
SparkEnv on the driver (<driver> + app name) is not guaranteed to be unique, 
because the app name is set by the user. Why not just generate a random name 
for a temp folder in Tachyon inside SparkContext and use that throughout the 
application? That way the driver and worker files can live in the same 
directory (perhaps under driver/ and executor-<execID>/), the scheme works 
with any deploy mode, and the folder name is guaranteed to be unique.
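    
    Concretely, I'm imagining something like the sketch below (the property 
name spark.tachyonStore.folderName and the exact layout are just illustrative, 
not settled):

        // In SparkContext, once per application: pick a unique folder name
        import java.util.UUID

        val tachyonFolderName = "spark-" + UUID.randomUUID.toString
        // conf is the SparkContext's SparkConf; recording the name here lets
        // everything downstream look it up
        conf.set("spark.tachyonStore.folderName", tachyonFolderName)

        // Resulting layout in Tachyon:
        //   <tachyonFolderName>/driver/
        //   <tachyonFolderName>/executor-<execID>/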
    
    Also, you can pass the name of this folder to the workers by just setting a 
property in the SparkConf. There's no need to add a new appID command-line 
argument and then pass it around throughout the code. I'd prefer using the 
SparkConf for this.
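
    For example, on the executor side it could be as simple as this (again, 
the property name is illustrative):

        import org.apache.spark.SparkConf

        // No new command-line argument: the SparkConf shipped to executors
        // already carries the folder name set by the driver
        def tachyonDirFor(conf: SparkConf, executorId: String): String = {
          val base = conf.get("spark.tachyonStore.folderName")
          s"$base/executor-$executorId"
        }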

