Github user pwendell commented on the pull request:

    https://github.com/apache/spark/pull/1140#issuecomment-47048769
  
    Hey There,
    
    This code is actually pretty dense - so great job finding this bug! I've 
created a separate JIRA to clean up this code a bit.
    
    WRT your fix: it's okay to assume that the executorId equals the slave id. 
At least, the existing code already assumes that when it creates a 
`WorkerOffer` and passes `offer.getSlaveId.getValue` as the executorId value.
    
    It would be good to fix the fact that you do an `N^2` operation here. I 
gave a suggestion for how to do it. I'd like to get this into Spark 1.0.1 if 
possible (which we are shipping imminently), so if you don't have time to jump 
on this in the near future I can just finish it off on top of your patch. 
Lemme know.
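
    To illustrate the kind of change I mean, here is a hedged sketch (not the 
actual patch code; `WorkerOffer`'s shape and the variable names are 
illustrative) of replacing a quadratic membership scan with a set lookup:

    ```scala
    // Hypothetical sketch: turning an O(N^2) scan into O(N) with a set.
    // The case class mirrors Spark's WorkerOffer shape; data is made up.
    case class WorkerOffer(executorId: String, host: String, cores: Int)

    val offers = Seq(
      WorkerOffer("slave-1", "host-1", 4),
      WorkerOffer("slave-2", "host-2", 4)
    )
    val activeExecutorIds = Seq("slave-1", "slave-3")

    // O(N^2): for each active id, linearly scan every offer
    val slowMatches = activeExecutorIds.filter(id => offers.exists(_.executorId == id))

    // O(N): build a set of offered ids once, then do constant-time lookups
    val offeredIds = offers.map(_.executorId).toSet
    val fastMatches = activeExecutorIds.filter(offeredIds.contains)

    // Both approaches find the same ids; only slave-1 is in both lists here
    assert(slowMatches == fastMatches)
    ```

    The same idea should apply to whatever per-offer lookup the patch is doing.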

