This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
     new 6bc6b0d  [MINOR][DOCS] Fix a typo in ContainerPlacementStrategy's class comment
6bc6b0d is described below

commit 6bc6b0d4400f2ba0338770662ebafad8a0de41ac
Author: Cong Du <asclepius1...@gmail.com>
AuthorDate: Wed Apr 22 09:44:43 2020 -0500

    [MINOR][DOCS] Fix a typo in ContainerPlacementStrategy's class comment
    
    ### What changes were proposed in this pull request?
    This PR fixes a typo in the deploy/yarn/LocalityPreferredContainerPlacementStrategy.scala file.
    
    ### Why are the changes needed?
    To deliver correct explanation about how the placement policy works.
    
    ### Does this PR introduce any user-facing change?
    No
    
    ### How was this patch tested?
    UT as specified, although it shouldn't influence any functionality since the change is only in the comment.
    
    Closes #28267 from asclepiusaka/master.
    
    Authored-by: Cong Du <asclepius1...@gmail.com>
    Signed-off-by: Sean Owen <sro...@gmail.com>
    (cherry picked from commit 54b97b2e143774a7238fc5a5f63e0d6eec138c41)
    Signed-off-by: Sean Owen <sro...@gmail.com>
---
 .../yarn/LocalityPreferredContainerPlacementStrategy.scala     | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/LocalityPreferredContainerPlacementStrategy.scala b/resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/LocalityPreferredContainerPlacementStrategy.scala
index 2288bb5..3e33382 100644
--- a/resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/LocalityPreferredContainerPlacementStrategy.scala
+++ b/resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/LocalityPreferredContainerPlacementStrategy.scala
@@ -40,7 +40,7 @@ private[yarn] case class ContainerLocalityPreferences(nodes: Array[String], rack
  * and cpus per task is 1, so the required container number is 15,
  * and host ratio is (host1: 30, host2: 30, host3: 20, host4: 10).
  *
- * 1. If requested container number (18) is more than the required container number (15):
+ * 1. If the requested container number (18) is more than the required container number (15):
  *
  * requests for 5 containers with nodes: (host1, host2, host3, host4)
  * requests for 5 containers with nodes: (host1, host2, host3)
@@ -63,16 +63,16 @@ private[yarn] case class ContainerLocalityPreferences(nodes: Array[String], rack
  * follow the method of 1 and 2.
  *
  * 4. If containers exist and some of them can match the requested localities.
- * For example if we have 1 containers on each node (host1: 1, host2: 1: host3: 1, host4: 1),
+ * For example if we have 1 container on each node (host1: 1, host2: 1: host3: 1, host4: 1),
 * and the expected containers on each node would be (host1: 5, host2: 5, host3: 4, host4: 2),
 * so the newly requested containers on each node would be updated to (host1: 4, host2: 4,
  * host3: 3, host4: 1), 12 containers by total.
  *
 *   4.1 If requested container number (18) is more than newly required containers (12). Follow
- *   method 1 with updated ratio 4 : 4 : 3 : 1.
+ *   method 1 with an updated ratio 4 : 4 : 3 : 1.
  *
- *   4.2 If request container number (10) is more than newly required containers (12). Follow
- *   method 2 with updated ratio 4 : 4 : 3 : 1.
+ *   4.2 If request container number (10) is less than newly required containers (12). Follow
+ *   method 2 with an updated ratio 4 : 4 : 3 : 1.
  *
 * 5. If containers exist and existing localities can fully cover the requested localities.
 * For example if we have 5 containers on each node (host1: 5, host2: 5, host3: 5, host4: 5),

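For context on the class comment touched above: it describes the placement policy by turning per-host expected container counts into request groups that share a locality preference list. The following is a minimal, hypothetical Scala sketch of that grouping idea only; it is not code from LocalityPreferredContainerPlacementStrategy, and the object and method names (LocalityGroupingSketch, requestGroups) are made up for illustration, with the example counts (host1: 5, host2: 5, host3: 4, host4: 2) taken from the comment's example.

object LocalityGroupingSketch {
  // Hypothetical helper, not part of Spark: turn per-host expected container
  // counts into request groups, where each group asks for some number of
  // containers that all prefer the same list of hosts.
  def requestGroups(expected: Map[String, Int]): Seq[(Int, Seq[String])] = {
    // Distinct positive expected counts in increasing order, e.g. 2, 4, 5.
    val thresholds = expected.values.filter(_ > 0).toSeq.distinct.sorted
    // Pair each threshold with the previous one (0 for the first), so the
    // group size is the difference between consecutive thresholds.
    thresholds.zip(0 +: thresholds).map { case (t, prev) =>
      // Every host still expecting at least t containers joins this group.
      val hosts = expected.collect { case (h, n) if n >= t => h }.toSeq.sorted
      (t - prev, hosts)
    }
  }

  def main(args: Array[String]): Unit = {
    // Expected per-host counts taken from the example in the class comment.
    val expected = Map("host1" -> 5, "host2" -> 5, "host3" -> 4, "host4" -> 2)
    requestGroups(expected).foreach { case (n, hosts) =>
      println(s"requests for $n containers with nodes: " + hosts.mkString("(", ", ", ")"))
    }
  }
}

Run as a standalone program, this sketch prints three groups: 2 containers preferring (host1, host2, host3, host4), 2 preferring (host1, host2, host3), and 1 preferring (host1, host2), mirroring the shape of the "requests for N containers with nodes: (...)" examples in the comment.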

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
