Github user ilganeli commented on a diff in the pull request:

    https://github.com/apache/spark/pull/5636#discussion_r35719942
  
    --- Diff: core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala ---
    @@ -473,6 +473,322 @@ class DAGSchedulerSuite
         assertDataStructuresEmpty()
       }
     
    +  // Helper function to validate state when creating tests for task failures
    +  def checkStageId(stageId: Int, attempt: Int, stageAttempt: TaskSet) {
    +    assert(stageAttempt.stageId === stageId)
    +    assert(stageAttempt.stageAttemptId == attempt)
    +  }
    +
    +  def makeCompletions(stageAttempt: TaskSet): Seq[(Success.type, MapStatus)] = {
    +    stageAttempt.tasks.zipWithIndex.map { case (task, idx) =>
    +      (Success, makeMapStatus("host" + ('A' + idx).toChar, stageAttempt.tasks.size))
    --- End diff --
    
    Ahh makes sense.
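    For reference, a rough sketch of the fix along the lines you suggested (untested; it assumes makeMapStatus's second argument is the next stage's partition count, per your note):
    
        // reduceParts is the number of partitions in the *next* stage, which
        // need not match the number of tasks in this stage's TaskSet.
        def makeCompletions(stageAttempt: TaskSet, reduceParts: Int): Seq[(Success.type, MapStatus)] = {
          stageAttempt.tasks.zipWithIndex.map { case (task, idx) =>
            (Success, makeMapStatus("host" + ('A' + idx).toChar, reduceParts))
          }
        }
    
    Callers would then pass the downstream partition count explicitly, e.g. something like makeCompletions(taskSets(0), reduceParts = 2).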

    Thank you,
    Ilya Ganelin

    -----Original Message-----
    From: Imran Rashid [[email protected]]
    Sent: Tuesday, July 28, 2015 09:09 PM Eastern Standard Time
    To: apache/spark
    Cc: Ganelin, Ilya
    Subject: Re: [spark] [SPARK-5945] Spark should not retry a stage infinitely on a FetchFailedException (#5636)
    
    
    In core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala<https://github.com/apache/spark/pull/5636#discussion_r35719826>:
    
    
    the last arg to makeMapStatus is actually the number of partitions for the next stage, so you can't just use stageAttempt.tasks.size. You need to add a reduceParts arg to makeCompletions.
    


