[jira] [Commented] (YARN-7275) NM Statestore cleanup for Container updates

2017-09-29 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16186878#comment-16186878
 ] 

Arun Suresh commented on YARN-7275:
---

[~andrew.wang], yup, I would consider this a blocker for 3.0.0 (not so much 
for beta1 though). IIUC, the patch is mostly ready; I plan to get it in next 
week, before the 2.9.0 code freeze.

> NM Statestore cleanup for Container updates
> ---
>
> Key: YARN-7275
> URL: https://issues.apache.org/jira/browse/YARN-7275
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: kartheek muthyala
>Priority: Blocker
>
> Currently, only resource updates are recorded in the NM state store; we need 
> to add ExecutionType updates as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7277) Container Launch expand environment needs to consider bracket matching

2017-09-29 Thread balloons (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

balloons updated YARN-7277:
---
Description: 
The Spark application I submitted always failed, and I finally found that the 
commands I specified to launch the AM container had been changed by the NM.

*The following is an excerpt of the command as submitted to the RM:*
{code:java}
'{\"handler\":\"FILLER\",\"inputTable\":\"engine_arch.adult_train\",\"outputTable\":[\"ether_features_filler_\$experimentId_\$taskId_out0\"],\"params\":{\"age\":{\"param\":[\"0\"]}}}'
{code}

*The following is the corresponding excerpt as observed when the NM launched 
the container:*
{code:java}
'{\"handler\":\"FILLER\",\"inputTable\":\"engine_arch.adult_train\",\"outputTable\":[\"ether_features_filler_\$experimentId_\$taskId_out0\"],\"params\":{\"age\":{\"param\":[\"0\"]}
{code}

Finally, I found that the NM applies the following transformation when 
launching the container, which causes this:
{code:java}
@VisibleForTesting
public static String expandEnvironment(String var,
    Path containerLogDir) {
  var = var.replace(ApplicationConstants.LOG_DIR_EXPANSION_VAR,
      containerLogDir.toString());
  var = var.replace(ApplicationConstants.CLASS_PATH_SEPARATOR,
      File.pathSeparator);

  // replace parameter expansion marker. e.g. {{VAR}} on Windows is replaced
  // as %VAR% and on Linux replaced as "$VAR"
  if (Shell.WINDOWS) {
    var = var.replaceAll("(\\{\\{)|(\\}\\})", "%");
  } else {
    var = var.replace(ApplicationConstants.PARAMETER_EXPANSION_LEFT, "$");
    var = var.replace(ApplicationConstants.PARAMETER_EXPANSION_RIGHT, "");
  }
  return var;
}
{code}

I think this is a bug: the substitution does not consider the pairing of 
"PARAMETER_EXPANSION_LEFT" and "PARAMETER_EXPANSION_RIGHT"; it simply performs 
a blind, brute-force replacement.
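
To illustrate the bracket matching the report asks for, here is a minimal, 
hypothetical sketch (the class name and regex are mine, and this is not the 
actual Hadoop fix): it rewrites a marker only when a complete {{NAME}} pair is 
present, so an unmatched "}}" sequence, e.g. from a JSON argument, survives 
untouched:
{code:java}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class BracketAwareExpansion {
  // Match only a complete {{NAME}} pair whose body looks like a variable name.
  private static final Pattern PARAM =
      Pattern.compile("\\{\\{([A-Za-z_][A-Za-z0-9_]*)\\}\\}");

  public static String expand(String var) {
    Matcher m = PARAM.matcher(var);
    StringBuffer sb = new StringBuffer();
    while (m.find()) {
      // Linux-style expansion: {{NAME}} -> $NAME.
      m.appendReplacement(sb, Matcher.quoteReplacement("$" + m.group(1)));
    }
    m.appendTail(sb);
    return sb.toString();
  }

  public static void main(String[] args) {
    // The trailing "}}}" belongs to the JSON payload and has no matching
    // "{{", so it is left intact.
    System.out.println(expand(
        "{{JAVA_HOME}}/bin/java '{\"params\":{\"age\":{\"param\":[\"0\"]}}}'"));
  }
}
{code}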


  was:
The Spark application I submitted always failed, and I finally found that the 
commands I specified to launch the AM container had been changed by the NM.

*The following is an excerpt of the command as submitted to the RM:*
{code:java}
'{\"handler\":\"FILLER\",\"inputTable\":\"engine_arch.adult_train\",\"outputTable\":[\"ether_features_filler_\$experimentId_\$taskId_out0\"],\"params\":{\"age\":{\"param\":[\"0\"]}}}'
{code}

*The following is the corresponding excerpt as observed when the NM launched 
the container:*
{code:java}
'{\"handler\":\"FILLER\",\"inputTable\":\"engine_arch.adult_train\",\"outputTable\":[\"ether_features_filler_\$experimentId_\$taskId_out0\"],\"params\":{\"age\":{\"param\":[\"0\"]}
{code}

Finally, I found that the NM applies the following transformation when 
launching the container, which causes this:
{code:java}
@VisibleForTesting
public static String expandEnvironment(String var,
    Path containerLogDir) {
  var = var.replace(ApplicationConstants.LOG_DIR_EXPANSION_VAR,
      containerLogDir.toString());
  var = var.replace(ApplicationConstants.CLASS_PATH_SEPARATOR,
      File.pathSeparator);

  // replace parameter expansion marker. e.g. {{VAR}} on Windows is replaced
  // as %VAR% and on Linux replaced as "$VAR"
  if (Shell.WINDOWS) {
    var = var.replaceAll("(\\{\\{)|(\\}\\})", "%");
  } else {
    var = var.replace(ApplicationConstants.PARAMETER_EXPANSION_LEFT, "$");
    var = var.replace(ApplicationConstants.PARAMETER_EXPANSION_RIGHT, "");
  }
  return var;
}
{code}

I think this is a bug: the substitution does not consider the pairing of "{{" 
and "}}"; it simply performs a blind, brute-force replacement.



> Container Launch expand environment needs to consider bracket matching
> --
>
> Key: YARN-7277
> URL: https://issues.apache.org/jira/browse/YARN-7277
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: balloons
>Priority: Critical
>
> The Spark application I submitted always failed, and I finally found that the 
> commands I specified to launch the AM container had been changed by the NM.
> *The following is an excerpt of the command as submitted to the RM:*
> {code:java}
> '{\"handler\":\"FILLER\",\"inputTable\":\"engine_arch.adult_train\",\"outputTable\":[\"ether_features_filler_\$experimentId_\$taskId_out0\"],\"params\":{\"age\":{\"param\":[\"0\"]}}}'
> {code}
> *The following is the corresponding excerpt as observed when the NM launched 
> the container:*
> {code:java}
> '{\"handler\":\"FILLER\",\"inputTable\":\"engine_arch.adult_train\",\"outputTable\":[\"ether_features_filler_\$experimentId_\$taskId_out0\"],\"params\":{\"age\":{\"param\":[\"0\"]}
> {code}
> Finally, I found that the NM applies the following transformation when 
> launching the container, which causes this:
> {code:java}
> @VisibleForTesting
>   

[jira] [Updated] (YARN-7277) Container Launch expand environment needs to consider bracket matching

2017-09-29 Thread balloons (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

balloons updated YARN-7277:
---
Description: 
The Spark application I submitted always failed, and I finally found that the 
commands I specified to launch the AM container had been changed by the NM.

*The following is an excerpt of the command as submitted to the RM:*
{code:java}
'{\"handler\":\"FILLER\",\"inputTable\":\"engine_arch.adult_train\",\"outputTable\":[\"ether_features_filler_\$experimentId_\$taskId_out0\"],\"params\":{\"age\":{\"param\":[\"0\"]}}}'
{code}

*The following is the corresponding excerpt as observed when the NM launched 
the container:*
{code:java}
'{\"handler\":\"FILLER\",\"inputTable\":\"engine_arch.adult_train\",\"outputTable\":[\"ether_features_filler_\$experimentId_\$taskId_out0\"],\"params\":{\"age\":{\"param\":[\"0\"]}
{code}

Finally, I found that the NM applies the following transformation when 
launching the container, which causes this:
{code:java}
@VisibleForTesting
public static String expandEnvironment(String var,
    Path containerLogDir) {
  var = var.replace(ApplicationConstants.LOG_DIR_EXPANSION_VAR,
      containerLogDir.toString());
  var = var.replace(ApplicationConstants.CLASS_PATH_SEPARATOR,
      File.pathSeparator);

  // replace parameter expansion marker. e.g. {{VAR}} on Windows is replaced
  // as %VAR% and on Linux replaced as "$VAR"
  if (Shell.WINDOWS) {
    var = var.replaceAll("(\\{\\{)|(\\}\\})", "%");
  } else {
    var = var.replace(ApplicationConstants.PARAMETER_EXPANSION_LEFT, "$");
    var = var.replace(ApplicationConstants.PARAMETER_EXPANSION_RIGHT, "");
  }
  return var;
}
{code}

I think this is a bug: the substitution does not consider the pairing of "{{" 
and "}}"; it simply performs a blind, brute-force replacement.


> Container Launch expand environment needs to consider bracket matching
> --
>
> Key: YARN-7277
> URL: https://issues.apache.org/jira/browse/YARN-7277
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: balloons
>Priority: Critical
>
> The Spark application I submitted always failed, and I finally found that the 
> commands I specified to launch the AM container had been changed by the NM.
> *The following is an excerpt of the command as submitted to the RM:*
> {code:java}
> '{\"handler\":\"FILLER\",\"inputTable\":\"engine_arch.adult_train\",\"outputTable\":[\"ether_features_filler_\$experimentId_\$taskId_out0\"],\"params\":{\"age\":{\"param\":[\"0\"]}}}'
> {code}
> *The following is the corresponding excerpt as observed when the NM launched 
> the container:*
> {code:java}
> '{\"handler\":\"FILLER\",\"inputTable\":\"engine_arch.adult_train\",\"outputTable\":[\"ether_features_filler_\$experimentId_\$taskId_out0\"],\"params\":{\"age\":{\"param\":[\"0\"]}
> {code}
> Finally, I found that the NM applies the following transformation when 
> launching the container, which causes this:
> {code:java}
> @VisibleForTesting
> public static String expandEnvironment(String var,
>     Path containerLogDir) {
>   var = var.replace(ApplicationConstants.LOG_DIR_EXPANSION_VAR,
>       containerLogDir.toString());
>   var = var.replace(ApplicationConstants.CLASS_PATH_SEPARATOR,
>       File.pathSeparator);
>   // replace parameter expansion marker. e.g. {{VAR}} on Windows is replaced
>   // as %VAR% and on Linux replaced as "$VAR"
>   if (Shell.WINDOWS) {
>     var = var.replaceAll("(\\{\\{)|(\\}\\})", "%");
>   } else {
>     var = var.replace(ApplicationConstants.PARAMETER_EXPANSION_LEFT, "$");
>     var = var.replace(ApplicationConstants.PARAMETER_EXPANSION_RIGHT, "");
>   }
>   return var;
> }
> {code}
> I think this is a bug: the substitution does not consider the pairing of "{{" 
> and "}}"; it simply performs a blind, brute-force replacement.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7277) Container Launch expand environment needs to consider bracket matching

2017-09-29 Thread balloons (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

balloons updated YARN-7277:
---
Component/s: nodemanager

> Container Launch expand environment needs to consider bracket matching
> --
>
> Key: YARN-7277
> URL: https://issues.apache.org/jira/browse/YARN-7277
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: balloons
>Priority: Critical
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7277) Container Launch expand environment needs to consider bracket matching

2017-09-29 Thread balloons (JIRA)
balloons created YARN-7277:
--

 Summary: Container Launch expand environment needs to consider 
bracket matching
 Key: YARN-7277
 URL: https://issues.apache.org/jira/browse/YARN-7277
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: balloons
Priority: Critical






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7222) Merge org.apache.hadoop.yarn.server.resourcemanager.NodeManager with MockNM

2017-09-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16186853#comment-16186853
 ] 

Hadoop QA commented on YARN-7222:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 33 new + 562 unchanged - 38 fixed = 595 total (was 600) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 48m 52s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7222 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12889815/YARN-7222.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 912b81604117 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 66c4171 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17719/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 

[jira] [Commented] (YARN-3661) Basic Federation UI

2017-09-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16186847#comment-16186847
 ] 

Hadoop QA commented on YARN-3661:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m  4s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
3s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
11s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-3661 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12889817/YARN-3661-015.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 

[jira] [Comment Edited] (YARN-7117) Capacity Scheduler: Support Auto Creation of Leaf Queues While Doing Queue Mapping

2017-09-29 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16186821#comment-16186821
 ] 

Suma Shivaprasad edited comment on YARN-7117 at 9/30/17 2:38 AM:
-

Thanks for your feedback [~jlowe] 

the syntax would be more concise and easier to read if the queue could be 
specified as a sub-path that can optionally include the parent queue. For 
example, rather than u:user1:queue1(parent-queue=marketing) the syntax could be 
simplified to: u:user1:marketing.queue1.

{quote} Agree. The current CS code has a bug in that it allows "." in queue 
names. Do you think we should fix this for 3.0? {quote}
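
To make the proposed syntax concrete, here is a hypothetical sketch that reuses 
today's {{yarn.scheduler.capacity.queue-mappings}} property; the dotted 
parent.leaf form is only the proposal discussed above, not something CS accepts 
today:
{code:xml}
<!-- Hypothetical: dotted sub-path syntax layered on the existing
     queue-mappings property (which currently takes only leaf queue names). -->
<property>
  <name>yarn.scheduler.capacity.queue-mappings</name>
  <value>u:user1:marketing.queue1,u:%user:engineering.%user</value>
</property>
{code}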

I'm not really familiar with queue mappings, but I'm assuming the order they 
are specified is significant to deterministically resolve cases where more than 
one specified rule would apply to a user. If so then the example is confusing 
since it looks like the u:user2:%primary_group(parent-queue=finance) rule will 
always be eclipsed by the preceding u:%user:%user(parent=engineering) rule.

{quote} Good catch. The example needs to be fixed {quote}


IMHO if the admin wants to configure guarantees for auto-created queues then we 
should not assume that they're going to be OK with auto-created queues that do 
not meet those specifications. Otherwise I'd assume the admin would forgo 
guaranteed capacities on the auto queues and just have them carve up the parent 
queue proportionally.

{quote} That's a good point. We could have a configuration that allows admins 
to have parent queues which allow creation of auto queues with guaranteed 
resources, and fail submissions when the parent doesn't have room. By default 
this could allow best-effort queues as in the current design, and it could be 
controlled by admins when they configure the parent queue. {quote}

The document implies that there's SLAs with guaranteed capacity auto-queues, 
but that's clearly not the case. In the example, it's true that the 
applications submitted to q4 and q5 eventually ran with guaranteed capacities. 
However they waited an unbounded amount of time to start running which means we 
cannot always hit SLAs. Users in q1/q2/q3 can collectively deny apps in q4/q5 
ever running, for example.

{quote}  This should be addressed by the above configurable policy. SLA 
guarantees are better with the admin able to configure parent queues with 
guaranteed capacities for auto created leaf queues {quote}

For the alternative approach where all of the queues are "best effort" we don't 
have to always have the max-am-resource at 0%. We could specify the max-am as a 
percent of the max cap for those queues or a separate config specific to them, 
or whatever. Or we could have the queues auto-distribute the capacities of the 
parent as new queues are added. In other words the auto-queue capacity is 
1/(num auto queues) of the parent and the max-capacity is always 100%. 
Preemption can be used to keep the queues fair if one user tries to dominate 
over the others, but capacities of underutilized queues can be leveraged by 
others.

{quote} This could be done, though it might be a backward-incompatible change. 
{quote} 

If a user has ACLs to the parent queue then I believe they have those ACLs to 
the entire hierarchy of that queue. That means if the parent queue says they 
can submit then they'll be able to submit to any auto-queue underneath that 
parent. We'll either need a new ACL for auto-queue creation separate from app 
submission or change the semantics of ACL inheritance for auto-queues. Probably 
the former makes more sense and would be more intuitive since admins will be 
used to the inheritance features of today's queue ACLs and allow admins to 
configure parent-queue-privileged users that can get admin-like access to all 
the auto-queues of a parent queue but aren't fully admin users across all 
queues.

{quote}  Good point. Having separate ACLs  for auto-creation would be better 
{quote}

I don't know if it's critical to show auto-queues as a different color, but I 
think it would be important to be able to determine somehow via the UI that the 
queue was auto-created so the admin doesn't wonder why they can't find the 
queue in the static queue configs. This might be as simple as an "Auto-Queue: 
true/false" line in the queue details box in the UI.

{quote} Agree {quote}






was (Author: suma.shivaprasad):
Thanks for your feedback [~jlowe] the syntax would be more concise and easier 
to read if the queue could be specified as a sub-path that can optionally 
include the parent queue. For example, rather than 
u:user1:queue1(parent-queue=marketing) the syntax could be simplified to: 
u:user1:marketing.queue1.

{quote} Agree. The current CS code has a bug in that it allows "." in queue 
names. Do you think we should fix this for 3.0? {quote}

I'm not really familiar with queue mappings, but I'm assuming the order they 
are specified 

[jira] [Commented] (YARN-7117) Capacity Scheduler: Support Auto Creation of Leaf Queues While Doing Queue Mapping

2017-09-29 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16186821#comment-16186821
 ] 

Suma Shivaprasad commented on YARN-7117:


Thanks for your feedback, [~jlowe].

the syntax would be more concise and easier to read if the queue could be 
specified as a sub-path that can optionally include the parent queue. For 
example, rather than u:user1:queue1(parent-queue=marketing) the syntax could be 
simplified to: u:user1:marketing.queue1.

{quote} Agree. The current CS code has a bug in that it allows "." in queue 
names. Do you think we should fix this for 3.0? {quote}

I'm not really familiar with queue mappings, but I'm assuming the order they 
are specified is significant to deterministically resolve cases where more than 
one specified rule would apply to a user. If so then the example is confusing 
since it looks like the u:user2:%primary_group(parent-queue=finance) rule will 
always be eclipsed by the preceding u:%user:%user(parent=engineering) rule.

{quote} Good catch. The example needs to be fixed {quote}


IMHO if the admin wants to configure guarantees for auto-created queues then we 
should not assume that they're going to be OK with auto-created queues that do 
not meet those specifications. Otherwise I'd assume the admin would forgo 
guaranteed capacities on the auto queues and just have them carve up the parent 
queue proportionally.

{quote} That's a good point. We could have a configuration that allows admins 
to have parent queues which allow creation of auto queues with guaranteed 
resources, and fail submissions when the parent doesn't have room. By default 
this could allow best-effort queues as in the current design, and it could be 
controlled by admins when they configure the parent queue. {quote}

The document implies that there's SLAs with guaranteed capacity auto-queues, 
but that's clearly not the case. In the example, it's true that the 
applications submitted to q4 and q5 eventually ran with guaranteed capacities. 
However they waited an unbounded amount of time to start running which means we 
cannot always hit SLAs. Users in q1/q2/q3 can collectively deny apps in q4/q5 
ever running, for example.

{quote}  This should be addressed by the above configurable policy. SLA 
guarantees are better with the admin able to configure parent queues with 
guaranteed capacities for auto created leaf queues {quote}

For the alternative approach where all of the queues are "best effort" we don't 
have to always have the max-am-resource at 0%. We could specify the max-am as a 
percent of the max cap for those queues or a separate config specific to them, 
or whatever. Or we could have the queues auto-distribute the capacities of the 
parent as new queues are added. In other words the auto-queue capacity is 
1/(num auto queues) of the parent and the max-capacity is always 100%. 
Preemption can be used to keep the queues fair if one user tries to dominate 
over the others, but capacities of underutilized queues can be leveraged by 
others.

{quote} This could be done, though it might be a backward-incompatible change. 
{quote} 

If a user has ACLs to the parent queue then I believe they have those ACLs to 
the entire hierarchy of that queue. That means if the parent queue says they 
can submit then they'll be able to submit to any auto-queue underneath that 
parent. We'll either need a new ACL for auto-queue creation separate from app 
submission or change the semantics of ACL inheritance for auto-queues. Probably 
the former makes more sense and would be more intuitive since admins will be 
used to the inheritance features of today's queue ACLs and allow admins to 
configure parent-queue-privileged users that can get admin-like access to all 
the auto-queues of a parent queue but aren't fully admin users across all 
queues.

{quote}  Good point. Having separate ACLs  for auto-creation would be better 
{quote}

I don't know if it's critical to show auto-queues as a different color, but I 
think it would be important to be able to determine somehow via the UI that the 
queue was auto-created so the admin doesn't wonder why they can't find the 
queue in the static queue configs. This might be as simple as an "Auto-Queue: 
true/false" line in the queue details box in the UI.

{quote} Agree {quote}





> Capacity Scheduler: Support Auto Creation of Leaf Queues While Doing Queue 
> Mapping
> --
>
> Key: YARN-7117
> URL: https://issues.apache.org/jira/browse/YARN-7117
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: 
> YARN-7117.Capacity.Scheduler.Support.Auto.Creation.Of.Leaf.Queue.pdf
>
>
> Currently Capacity Scheduler doesn't support auto creation 

[jira] [Updated] (YARN-3661) Basic Federation UI

2017-09-29 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated YARN-3661:
--
Attachment: YARN-3661-015.patch

> Basic Federation UI 
> 
>
> Key: YARN-3661
> URL: https://issues.apache.org/jira/browse/YARN-3661
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Íñigo Goiri
> Attachments: YARN-3661-000.patch, YARN-3661-001.patch, 
> YARN-3661-002.patch, YARN-3661-003.patch, YARN-3661-004.patch, 
> YARN-3661-005.patch, YARN-3661-006.patch, YARN-3661-007.patch, 
> YARN-3661-008.patch, YARN-3661-009.patch, YARN-3661-010.patch, 
> YARN-3661-011.patch, YARN-3661-012.patch, YARN-3661-013.patch, 
> YARN-3661-014.patch, YARN-3661-015.patch
>
>
> The UIs provided by each RM provide a correct "local" view of what is 
> running in a sub-cluster. In the context of federation, we need new 
> UIs that can track load, jobs, and users across sub-clusters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3661) Basic Federation UI

2017-09-29 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated YARN-3661:
--
Attachment: (was: YARN-3661-015.patch)

> Basic Federation UI 
> 
>
> Key: YARN-3661
> URL: https://issues.apache.org/jira/browse/YARN-3661
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Íñigo Goiri
> Attachments: YARN-3661-000.patch, YARN-3661-001.patch, 
> YARN-3661-002.patch, YARN-3661-003.patch, YARN-3661-004.patch, 
> YARN-3661-005.patch, YARN-3661-006.patch, YARN-3661-007.patch, 
> YARN-3661-008.patch, YARN-3661-009.patch, YARN-3661-010.patch, 
> YARN-3661-011.patch, YARN-3661-012.patch, YARN-3661-013.patch, 
> YARN-3661-014.patch
>
>
> The UIs provided by each RM provide a correct "local" view of what is 
> running in a sub-cluster. In the context of federation, we need new 
> UIs that can track load, jobs, and users across sub-clusters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3661) Basic Federation UI

2017-09-29 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated YARN-3661:
--
Attachment: YARN-3661-015.patch

> Basic Federation UI 
> 
>
> Key: YARN-3661
> URL: https://issues.apache.org/jira/browse/YARN-3661
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Íñigo Goiri
> Attachments: YARN-3661-000.patch, YARN-3661-001.patch, 
> YARN-3661-002.patch, YARN-3661-003.patch, YARN-3661-004.patch, 
> YARN-3661-005.patch, YARN-3661-006.patch, YARN-3661-007.patch, 
> YARN-3661-008.patch, YARN-3661-009.patch, YARN-3661-010.patch, 
> YARN-3661-011.patch, YARN-3661-012.patch, YARN-3661-013.patch, 
> YARN-3661-014.patch, YARN-3661-015.patch
>
>
> The UIs provided by each RM provide a correct "local" view of what is 
> running in a sub-cluster. In the context of federation, we need new 
> UIs that can track load, jobs, and users across sub-clusters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7222) Merge org.apache.hadoop.yarn.server.resourcemanager.NodeManager with MockNM

2017-09-29 Thread Sen Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sen Zhao reassigned YARN-7222:
--

Assignee: Sen Zhao

I agree with you. If you have time, please review the patch.

> Merge org.apache.hadoop.yarn.server.resourcemanager.NodeManager with MockNM
> ---
>
> Key: YARN-7222
> URL: https://issues.apache.org/jira/browse/YARN-7222
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Sen Zhao
> Attachments: YARN-7222.001.patch
>
>
> The existence of org.apache.hadoop.yarn.server.resourcemanager.NodeManager is 
> confusing. It is only for RM testing and is basically another MockNM. There is 
> no Javadoc for the class, which easily leads people to think it is a real 
> NodeManager. I suggest merging it with MockNM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7222) Merge org.apache.hadoop.yarn.server.resourcemanager.NodeManager with MockNM

2017-09-29 Thread Sen Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sen Zhao updated YARN-7222:
---
Attachment: YARN-7222.001.patch

> Merge org.apache.hadoop.yarn.server.resourcemanager.NodeManager with MockNM
> ---
>
> Key: YARN-7222
> URL: https://issues.apache.org/jira/browse/YARN-7222
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
> Attachments: YARN-7222.001.patch
>
>
> The existence of org.apache.hadoop.yarn.server.resourcemanager.NodeManager is 
> confusing. It is only for RM testing and is basically another MockNM. There is 
> no Javadoc for the class, which easily leads people to think it is a real 
> NodeManager. I suggest merging it with MockNM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4859) [Bug] Unable to submit a job to a reservation when using FairScheduler

2017-09-29 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16186785#comment-16186785
 ] 

Subru Krishnan commented on YARN-4859:
--

Thanks [~yufeigu]! 

You need to turn on the {{ReservationSystem}} as described 
[here|http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ReservationSystem.html]
 and submit a job. We have the documentation for {{CapacityScheduler}} 
[here|http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html#Configuring_ReservationSystem_with_CapacityScheduler];
 {{FairScheduler}} follows the exact same pattern.
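
For reference, a minimal yarn-site.xml sketch of the first step (please verify 
the property name against your release's yarn-default.xml):
{code:xml}
<!-- Enables the RM ReservationSystem; it binds to whichever scheduler
     (CapacityScheduler or FairScheduler) the RM is configured to use. -->
<property>
  <name>yarn.resourcemanager.reservation-system.enable</name>
  <value>true</value>
</property>
{code}
The job is then submitted against the ReservationId returned by the 
reservation submission API (carried in the job's ApplicationSubmissionContext).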

> [Bug] Unable to submit a job to a reservation when using FairScheduler
> --
>
> Key: YARN-4859
> URL: https://issues.apache.org/jira/browse/YARN-4859
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Subru Krishnan
>Assignee: Yufei Gu
>Priority: Blocker
>
> Jobs submitted to a reservation get stuck at the scheduled stage when using 
> FairScheduler. I came across this while working on YARN-4827 (documentation 
> for configuring the ReservationSystem for FairScheduler).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6333) Improve doc for minSharePreemptionTimeout, fairSharePreemptionTimeout and fairSharePreemptionThreshold

2017-09-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16186784#comment-16186784
 ] 

Hudson commented on YARN-6333:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13000 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13000/])
YARN-6333. Improve doc for minSharePreemptionTimeout, (yufei: rev 
66c417167ae0d24467360ae3b222a10f00fa2700)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md


> Improve doc for minSharePreemptionTimeout, fairSharePreemptionTimeout and 
> fairSharePreemptionThreshold
> --
>
> Key: YARN-6333
> URL: https://issues.apache.org/jira/browse/YARN-6333
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Chetna Chaudhari
>  Labels: newbie++
> Fix For: 2.9.0, 3.1.0
>
> Attachments: YARN-6333-1.patch, YARN-6333-2.patch
>
>
> Their default values are not mentioned in the doc. For example, the default 
> value of minSharePreemptionTimeout is {{Long.MAX_VALUE}}, which means min 
> share preemption won't happen until you set a meaningful value. 
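
For example, a minimal allocation-file sketch (the queue name is made up) 
showing the kind of override the doc should call out:
{code:xml}
<allocations>
  <queue name="analytics">
    <minResources>4096 mb,4 vcores</minResources>
    <!-- In seconds. Left unset, the timeout effectively stays at
         Long.MAX_VALUE and min share preemption never triggers. -->
    <minSharePreemptionTimeout>60</minSharePreemptionTimeout>
  </queue>
</allocations>
{code}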



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3661) Basic Federation UI

2017-09-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16186782#comment-16186782
 ] 

Hadoop QA commented on YARN-3661:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-yarn-server-router in the patch failed. {color} 
|
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  2m 
41s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 41s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 9 new + 0 unchanged - 0 fixed = 9 total (was 0) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-yarn-server-router in the patch failed. {color} 
|
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m  9s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
14s{color} | {color:red} hadoop-yarn-server-router in the patch failed. {color} 
|
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 16s{color} 
| {color:red} hadoop-yarn-server-router in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
8s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-3661 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12889804/YARN-3661-014.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  

[jira] [Commented] (YARN-4859) [Bug] Unable to submit a job to a reservation when using FairScheduler

2017-09-29 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16186780#comment-16186780
 ] 

Yufei Gu commented on YARN-4859:


Yes, I can. [~subru], can you provide more details on how to reproduce this 
issue?
[~andrew.wang], what is the timeline for this?

> [Bug] Unable to submit a job to a reservation when using FairScheduler
> --
>
> Key: YARN-4859
> URL: https://issues.apache.org/jira/browse/YARN-4859
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Subru Krishnan
>Assignee: Yufei Gu
>Priority: Blocker
>
> Jobs submitted to a reservation get stuck at the scheduled stage when using 
> FairScheduler. I came across this while working on YARN-4827 (documentation 
> for configuring the ReservationSystem for FairScheduler).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-4859) [Bug] Unable to submit a job to a reservation when using FairScheduler

2017-09-29 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu reassigned YARN-4859:
--

Assignee: Yufei Gu

> [Bug] Unable to submit a job to a reservation when using FairScheduler
> --
>
> Key: YARN-4859
> URL: https://issues.apache.org/jira/browse/YARN-4859
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Subru Krishnan
>Assignee: Yufei Gu
>Priority: Blocker
>
> Jobs submitted to a reservation get stuck at the scheduled stage when using 
> FairScheduler. I came across this when working on YARN-4827 (documentation 
> for configuring ReservationSystem for FairScheduler).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3661) Basic Federation UI

2017-09-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186771#comment-16186771
 ] 

Hadoop QA commented on YARN-3661:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
14s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  4s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 31 new + 4 unchanged - 0 fixed = 35 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
21s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router 
generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
10s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-3661 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12889795/YARN-3661-013.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  

[jira] [Commented] (YARN-6333) Improve doc for minSharePreemptionTimeout, fairSharePreemptionTimeout and fairSharePreemptionThreshold

2017-09-29 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186769#comment-16186769
 ] 

Yufei Gu commented on YARN-6333:


+1. Committed to trunk and branch-2. Thanks for the patch, [~chetna]!

> Improve doc for minSharePreemptionTimeout, fairSharePreemptionTimeout and 
> fairSharePreemptionThreshold
> --
>
> Key: YARN-6333
> URL: https://issues.apache.org/jira/browse/YARN-6333
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Chetna Chaudhari
>  Labels: newbie++
> Fix For: 2.9.0, 3.1.0
>
> Attachments: YARN-6333-1.patch, YARN-6333-2.patch
>
>
> Their default values are not mentioned in the doc. For example, the default 
> value of minSharePreemptionTimeout is {{Long.MAX_VALUE}}, which means min share 
> preemption won't happen until you set a meaningful value.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7001) If shared cache upload is terminated in the middle, the temp file will never be deleted

2017-09-29 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186756#comment-16186756
 ] 

Miklos Szegedi commented on YARN-7001:
--

[~ctrezzo], would you like to review this jira?

> If shared cache upload is terminated in the middle, the temp file will never 
> be deleted
> ---
>
> Key: YARN-7001
> URL: https://issues.apache.org/jira/browse/YARN-7001
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Sen Zhao
> Attachments: YARN-7001.001.patch, YARN-7001.002.patch, 
> YARN-7001.003.patch, YARN-7001.004.patch
>
>
> There is a missing deleteTempFile(tempPath);
> {code}
>   tempPath = new Path(directoryPath, getTemporaryFileName(actualPath));
>   if (!uploadFile(actualPath, tempPath)) {
> LOG.warn("Could not copy the file to the shared cache at " + 
> tempPath);
> return false;
>   }
> {code}
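For illustration, a minimal sketch of the cleanup pattern the description points 
at, with the temp file also removed on the failure paths (this reuses the helper 
names from the snippet above and is only a sketch, not the actual patch):
{code:java}
tempPath = new Path(directoryPath, getTemporaryFileName(actualPath));
try {
  if (!uploadFile(actualPath, tempPath)) {
    LOG.warn("Could not copy the file to the shared cache at " + tempPath);
    // The cleanup call the description says is missing:
    deleteTempFile(tempPath);
    return false;
  }
} catch (IOException e) {
  // If the upload is terminated partway through, still remove the temp file.
  deleteTempFile(tempPath);
  throw e;
}
{code}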



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3661) Basic Federation UI

2017-09-29 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated YARN-3661:
--
Attachment: YARN-3661-014.patch

> Basic Federation UI 
> 
>
> Key: YARN-3661
> URL: https://issues.apache.org/jira/browse/YARN-3661
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Íñigo Goiri
> Attachments: YARN-3661-000.patch, YARN-3661-001.patch, 
> YARN-3661-002.patch, YARN-3661-003.patch, YARN-3661-004.patch, 
> YARN-3661-005.patch, YARN-3661-006.patch, YARN-3661-007.patch, 
> YARN-3661-008.patch, YARN-3661-009.patch, YARN-3661-010.patch, 
> YARN-3661-011.patch, YARN-3661-012.patch, YARN-3661-013.patch, 
> YARN-3661-014.patch
>
>
> The UIs provided by each RM provide a correct "local" view of what is 
> running in a sub-cluster. In the context of federation, we need new 
> UIs that can track load, jobs, and users across sub-clusters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3661) Basic Federation UI

2017-09-29 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated YARN-3661:
--
Attachment: (was: YARN-3661-014.patch)

> Basic Federation UI 
> 
>
> Key: YARN-3661
> URL: https://issues.apache.org/jira/browse/YARN-3661
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Íñigo Goiri
> Attachments: YARN-3661-000.patch, YARN-3661-001.patch, 
> YARN-3661-002.patch, YARN-3661-003.patch, YARN-3661-004.patch, 
> YARN-3661-005.patch, YARN-3661-006.patch, YARN-3661-007.patch, 
> YARN-3661-008.patch, YARN-3661-009.patch, YARN-3661-010.patch, 
> YARN-3661-011.patch, YARN-3661-012.patch, YARN-3661-013.patch
>
>
> The UIs provided by each RM provide a correct "local" view of what is 
> running in a sub-cluster. In the context of federation, we need new 
> UIs that can track load, jobs, and users across sub-clusters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7276) Federation Router Web Service fixes

2017-09-29 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated YARN-7276:
--
Description: 
While testing YARN-3661, I found a few issues with the REST interface in the 
Router:
* No support for empty content (error 204)
* Media type support
* Attributes in {{FederationInterceptorREST}}
* Support for empty states and labels
* DefaultMetricsSystem initialization is missing

  was:
While testing YARN-3661, I found a few issues with the REST interface in the 
Router:
* No support for empty content (error 204)
* Media type support
* Attributes in {{FederationInterceptorREST}}
* Support for empty states and labels


> Federation Router Web Service fixes
> ---
>
> Key: YARN-7276
> URL: https://issues.apache.org/jira/browse/YARN-7276
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>
> While testing YARN-3661, I found a few issues with the REST interface in the 
> Router:
> * No support for empty content (error 204)
> * Media type support
> * Attributes in {{FederationInterceptorREST}}
> * Support for empty states and labels
> * DefaultMetricsSystem initialization is missing



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3661) Basic Federation UI

2017-09-29 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated YARN-3661:
--
Attachment: YARN-3661-014.patch

> Basic Federation UI 
> 
>
> Key: YARN-3661
> URL: https://issues.apache.org/jira/browse/YARN-3661
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Íñigo Goiri
> Attachments: YARN-3661-000.patch, YARN-3661-001.patch, 
> YARN-3661-002.patch, YARN-3661-003.patch, YARN-3661-004.patch, 
> YARN-3661-005.patch, YARN-3661-006.patch, YARN-3661-007.patch, 
> YARN-3661-008.patch, YARN-3661-009.patch, YARN-3661-010.patch, 
> YARN-3661-011.patch, YARN-3661-012.patch, YARN-3661-013.patch, 
> YARN-3661-014.patch
>
>
> The UIs provided by each RM provide a correct "local" view of what is 
> running in a sub-cluster. In the context of federation, we need new 
> UIs that can track load, jobs, and users across sub-clusters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7276) Federation Router Web Service fixes

2017-09-29 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned YARN-7276:
-

Assignee: Íñigo Goiri

> Federation Router Web Service fixes
> ---
>
> Key: YARN-7276
> URL: https://issues.apache.org/jira/browse/YARN-7276
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>
> While testing YARN-3661, I found a few issues with the REST interface in the 
> Router:
> * No support for empty content (error 204)
> * Media type support
> * Attributes in {{FederationInterceptorREST}}
> * Support for empty states and labels



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7276) Federation Router Web Service fixes

2017-09-29 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned YARN-7276:
-

Assignee: Giovanni Matteo Fumarola  (was: Íñigo Goiri)

> Federation Router Web Service fixes
> ---
>
> Key: YARN-7276
> URL: https://issues.apache.org/jira/browse/YARN-7276
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>
> While testing YARN-3661, I found a few issues with the REST interface in the 
> Router:
> * No support for empty content (error 204)
> * Media type support
> * Attributes in {{FederationInterceptorREST}}
> * Support for empty states and labels



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3661) Basic Federation UI

2017-09-29 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186733#comment-16186733
 ] 

Íñigo Goiri commented on YARN-3661:
---

Thanks [~subru], I created YARN-7276 to track that part.
I'll remove those from this patch.

> Basic Federation UI 
> 
>
> Key: YARN-3661
> URL: https://issues.apache.org/jira/browse/YARN-3661
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Íñigo Goiri
> Attachments: YARN-3661-000.patch, YARN-3661-001.patch, 
> YARN-3661-002.patch, YARN-3661-003.patch, YARN-3661-004.patch, 
> YARN-3661-005.patch, YARN-3661-006.patch, YARN-3661-007.patch, 
> YARN-3661-008.patch, YARN-3661-009.patch, YARN-3661-010.patch, 
> YARN-3661-011.patch, YARN-3661-012.patch, YARN-3661-013.patch
>
>
> The UIs provided by each RM provide a correct "local" view of what is 
> running in a sub-cluster. In the context of federation, we need new 
> UIs that can track load, jobs, and users across sub-clusters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7276) Federation Router Web Service fixes

2017-09-29 Thread JIRA
Íñigo Goiri created YARN-7276:
-

 Summary: Federation Router Web Service fixes
 Key: YARN-7276
 URL: https://issues.apache.org/jira/browse/YARN-7276
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Íñigo Goiri


While testing YARN-3661, I found a few issues with the REST interface in the 
Router:
* No support for empty content (error 204); a short sketch follows this list
* Media type support
* Attributes in {{FederationInterceptorREST}}
* Support for empty states and labels
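For the first item, a minimal JAX-RS sketch of the behavior being asked for, 
returning 204 when there is nothing to serialize (the resource path, class, and 
helper here are hypothetical, not the actual Router code):
{code:java}
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/ws/v1/cluster")
public class RouterResourceSketch {
  @GET
  @Path("/apps")
  // Declaring both media types addresses the "media type support" item.
  @Produces({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
  public Response getApps() {
    Object apps = fetchAppsFromSubClusters(); // hypothetical helper
    if (apps == null) {
      // Empty content: answer 204 (No Content) instead of an error.
      return Response.status(Response.Status.NO_CONTENT).build();
    }
    return Response.ok(apps).build();
  }

  private Object fetchAppsFromSubClusters() {
    return null; // placeholder so the sketch compiles
  }
}
{code}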



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6550) Capture launch_container.sh logs to a separate log file

2017-09-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186728#comment-16186728
 ] 

Hudson commented on YARN-6550:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12999 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12999/])
YARN-6550. Capture launch_container.sh logs to a separate log file. (wangda: 
rev febeead5f95c6fc245ea3735f5b538d4bb4dc8a4)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java


> Capture launch_container.sh logs to a separate log file
> ---
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
> Fix For: 3.0.0
>
> Attachments: YARN-6550.002.patch, YARN-6550.003.patch, 
> YARN-6550.005.patch, YARN-6550.006.patch, YARN-6550.007.patch, 
> YARN-6550.008.patch, YARN-6550.009.patch, YARN-6550.010.patch, 
> YARN-6550.011.patch, YARN-6550.011.patch, YARN-6550.012.patch, 
> YARN-6550.branch-2.001.patch, YARN-6550.patch
>
>
> launch_container.sh, which is generated by the NM, does a bunch of things (like 
> creating links, etc.) while launching a process. No logs are captured until 
> {{exec}} is called. We need to capture all failures of launch_container.sh for 
> easier troubleshooting.
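The general idea, as a minimal sketch: redirect the launch script's stdout and 
stderr to dedicated files so everything printed before {{exec}} is preserved. 
The paths and file names below are assumptions, not necessarily what the patch 
uses:
{code:java}
import java.io.File;
import java.io.IOException;

public class LaunchScriptLoggingSketch {
  public static void main(String[] args)
      throws IOException, InterruptedException {
    // Hypothetical locations; the NM would place these under the
    // container's log directory.
    File out = new File("/tmp/container_42/prelaunch.out");
    File err = new File("/tmp/container_42/prelaunch.err");
    ProcessBuilder pb = new ProcessBuilder("bash", "launch_container.sh");
    // Link creation, env setup, and any failures before exec now land in
    // these files instead of being lost.
    pb.redirectOutput(out);
    pb.redirectError(err);
    pb.start().waitFor();
  }
}
{code}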



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6509) Add a size threshold beyond which yarn logs will require a force option

2017-09-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186729#comment-16186729
 ] 

Hudson commented on YARN-6509:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12999 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12999/])
YARN-6509. Add a size threshold beyond which yarn logs will require a (wangda: 
rev ec2ae3060a807c8754826af2135a68c08b2e4f13)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/LogCLIHelpers.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/tfile/LogAggregationTFileController.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestLogsCLI.java


> Add a size threshold beyond which yarn logs will require a force option
> ---
>
> Key: YARN-6509
> URL: https://issues.apache.org/jira/browse/YARN-6509
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
> Fix For: 2.9.0
>
> Attachments: YARN-6509.1.patch, YARN-6509.2.patch, YARN-6509.3.patch, 
> YARN-6509.4.patch, YARN-6509.5.patch
>
>
> An accidental fetch for a long-running application can lead to a scenario in 
> which the large log size can fill up a disk.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6509) Add a size threshold beyond which yarn logs will require a force option

2017-09-29 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186715#comment-16186715
 ] 

Wangda Tan edited comment on YARN-6509 at 9/29/17 11:55 PM:


Thanks [~xgong], pushed to branch-2/trunk/branch-3.0


was (Author: leftnoteasy):
Thanks [~xgong], pushed to branch-2. 

> Add a size threshold beyond which yarn logs will require a force option
> ---
>
> Key: YARN-6509
> URL: https://issues.apache.org/jira/browse/YARN-6509
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
> Fix For: 2.9.0
>
> Attachments: YARN-6509.1.patch, YARN-6509.2.patch, YARN-6509.3.patch, 
> YARN-6509.4.patch, YARN-6509.5.patch
>
>
> An accidental fetch for a long-running application can lead to a scenario in 
> which the large log size can fill up a disk.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6550) Capture launch_container.sh logs to a separate log file

2017-09-29 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186711#comment-16186711
 ] 

Wangda Tan commented on YARN-6550:
--

Just pushed to trunk/branch-3.0. Thanks [~suma.shivaprasad], and thanks for the 
reviews from [~aw]!

> Capture launch_container.sh logs to a separate log file
> ---
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
> Fix For: 3.0.0
>
> Attachments: YARN-6550.002.patch, YARN-6550.003.patch, 
> YARN-6550.005.patch, YARN-6550.006.patch, YARN-6550.007.patch, 
> YARN-6550.008.patch, YARN-6550.009.patch, YARN-6550.010.patch, 
> YARN-6550.011.patch, YARN-6550.011.patch, YARN-6550.012.patch, 
> YARN-6550.branch-2.001.patch, YARN-6550.patch
>
>
> launch_container.sh, which is generated by the NM, does a bunch of things (like 
> creating links, etc.) while launching a process. No logs are captured until 
> {{exec}} is called. We need to capture all failures of launch_container.sh for 
> easier troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3661) Basic Federation UI

2017-09-29 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186712#comment-16186712
 ] 

Subru Krishnan commented on YARN-3661:
--

Thanks [~elgoiri] for the updated patch. Can you work with [~giovanni.fumarola] 
and open a separate bug for the Federation REST interface, as it needs to be 
tracked separately?

> Basic Federation UI 
> 
>
> Key: YARN-3661
> URL: https://issues.apache.org/jira/browse/YARN-3661
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Íñigo Goiri
> Attachments: YARN-3661-000.patch, YARN-3661-001.patch, 
> YARN-3661-002.patch, YARN-3661-003.patch, YARN-3661-004.patch, 
> YARN-3661-005.patch, YARN-3661-006.patch, YARN-3661-007.patch, 
> YARN-3661-008.patch, YARN-3661-009.patch, YARN-3661-010.patch, 
> YARN-3661-011.patch, YARN-3661-012.patch, YARN-3661-013.patch
>
>
> The UIs provided by each RM provide a correct "local" view of what is 
> running in a sub-cluster. In the context of federation, we need new 
> UIs that can track load, jobs, and users across sub-clusters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6550) Capture launch_container.sh logs to a separate log file

2017-09-29 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6550:
-
Fix Version/s: 3.0.0

> Capture launch_container.sh logs to a separate log file
> ---
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
> Fix For: 3.0.0
>
> Attachments: YARN-6550.002.patch, YARN-6550.003.patch, 
> YARN-6550.005.patch, YARN-6550.006.patch, YARN-6550.007.patch, 
> YARN-6550.008.patch, YARN-6550.009.patch, YARN-6550.010.patch, 
> YARN-6550.011.patch, YARN-6550.011.patch, YARN-6550.012.patch, 
> YARN-6550.branch-2.001.patch, YARN-6550.patch
>
>
> launch_container.sh, which is generated by the NM, does a bunch of things (like 
> creating links, etc.) while launching a process. No logs are captured until 
> {{exec}} is called. We need to capture all failures of launch_container.sh for 
> easier troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7072) Add a new log aggregation file format controller

2017-09-29 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186709#comment-16186709
 ] 

Wangda Tan commented on YARN-7072:
--

Pushed to branch-3.0 as well.

> Add a new log aggregation file format controller
> 
>
> Key: YARN-7072
> URL: https://issues.apache.org/jira/browse/YARN-7072
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.9.0, 3.1.0
>
> Attachments: YARN-7072-branch-2.001.patch, YARN-7072-trunk.001.patch, 
> YARN-7072.trunk.002.patch, YARN-7072-trunk.003.patch, 
> YARN-7072-trunk.004.patch, YARN-7072-trunk.005.patch, 
> YARN-7072-trunk.006.patch, YARN-7072-trunk.007.patch, 
> YARN-7072-trunk.008.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4827) Document configuration of ReservationSystem for FairScheduler

2017-09-29 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186703#comment-16186703
 ] 

Subru Krishnan commented on YARN-4827:
--

[~andrew.wang], unfortunately this is a blocker for 2.9.0 (and consequently 
3.0.0 I guess) as we are planning to move the {{ReservationSystem}} feature to 
GA mode.

[~yufeigu], can you work on this as we need someone with {{FairScheduler}} 
expertise? I can help with reviews.

Thanks!

> Document configuration of ReservationSystem for FairScheduler
> -
>
> Key: YARN-4827
> URL: https://issues.apache.org/jira/browse/YARN-4827
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Subru Krishnan
>Priority: Blocker
>
> This JIRA tracks the effort to add documentation on how to configure 
> ReservationSystem for FairScheduler



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4859) [Bug] Unable to submit a job to a reservation when using FairScheduler

2017-09-29 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186701#comment-16186701
 ] 

Subru Krishnan commented on YARN-4859:
--

[~andrew.wang], unfortunately this is a blocker for 2.9.0 (and consequently 
3.0.0 I guess) as we are planning to move the {{ReservationSystem}} feature to 
GA mode.

[~yufeigu], can you work on this as we need someone with {{FairScheduler}} 
expertise? I can help with reviews.

Thanks!

> [Bug] Unable to submit a job to a reservation when using FairScheduler
> --
>
> Key: YARN-4859
> URL: https://issues.apache.org/jira/browse/YARN-4859
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Subru Krishnan
>Priority: Blocker
>
> Jobs submitted to a reservation get stuck at the scheduled stage when using 
> FairScheduler. I came across this when working on YARN-4827 (documentation 
> for configuring ReservationSystem for FairScheduler).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-4827) Document configuration of ReservationSystem for FairScheduler

2017-09-29 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan reassigned YARN-4827:


Assignee: (was: Subru Krishnan)

> Document configuration of ReservationSystem for FairScheduler
> -
>
> Key: YARN-4827
> URL: https://issues.apache.org/jira/browse/YARN-4827
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Subru Krishnan
>Priority: Blocker
>
> This JIRA tracks the effort to add documentation on how to configure 
> ReservationSystem for FairScheduler



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-4859) [Bug] Unable to submit a job to a reservation when using FairScheduler

2017-09-29 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan reassigned YARN-4859:


Assignee: (was: Sean Po)

> [Bug] Unable to submit a job to a reservation when using FairScheduler
> --
>
> Key: YARN-4859
> URL: https://issues.apache.org/jira/browse/YARN-4859
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Subru Krishnan
>Priority: Blocker
>
> Jobs submitted to a reservation get stuck at the scheduled stage when using 
> FairScheduler. I came across this when working on YARN-4827 (documentation 
> for configuring ReservationSystem for FairScheduler).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3661) Basic Federation UI

2017-09-29 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated YARN-3661:
--
Attachment: YARN-3661-013.patch

Tackled the comments, but I had to fix a couple of things in the REST interface.
Not sure if you guys want to open a new JIRA for that.

> Basic Federation UI 
> 
>
> Key: YARN-3661
> URL: https://issues.apache.org/jira/browse/YARN-3661
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Íñigo Goiri
> Attachments: YARN-3661-000.patch, YARN-3661-001.patch, 
> YARN-3661-002.patch, YARN-3661-003.patch, YARN-3661-004.patch, 
> YARN-3661-005.patch, YARN-3661-006.patch, YARN-3661-007.patch, 
> YARN-3661-008.patch, YARN-3661-009.patch, YARN-3661-010.patch, 
> YARN-3661-011.patch, YARN-3661-012.patch, YARN-3661-013.patch
>
>
> The UIs provided by each RM, provide a correct "local" view of what is 
> running in a sub-cluster. In the context of federation we need new 
> UIs that can track load, jobs, users across sub-clusters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6550) Capture launch_container.sh logs to a separate log file

2017-09-29 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6550:
-
Summary: Capture launch_container.sh logs to a separate log file  (was: 
Capture launch_container.sh logs)

> Capture launch_container.sh logs to a separate log file
> ---
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
> Attachments: YARN-6550.002.patch, YARN-6550.003.patch, 
> YARN-6550.005.patch, YARN-6550.006.patch, YARN-6550.007.patch, 
> YARN-6550.008.patch, YARN-6550.009.patch, YARN-6550.010.patch, 
> YARN-6550.011.patch, YARN-6550.011.patch, YARN-6550.012.patch, 
> YARN-6550.branch-2.001.patch, YARN-6550.patch
>
>
> launch_container.sh, which is generated by the NM, does a bunch of things (like 
> creating links, etc.) while launching a process. No logs are captured until 
> {{exec}} is called. We need to capture all failures of launch_container.sh for 
> easier troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4599) Set OOM control for memory cgroups

2017-09-29 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186681#comment-16186681
 ] 

Miklos Szegedi commented on YARN-4599:
--

[~sandflee], do you plan to continue working on the patch? I have a similar 
one, YARN-6677, and it would be great to coordinate. I think it would be better 
to do the event logic in a separate application detached from the Java process.

> Set OOM control for memory cgroups
> --
>
> Key: YARN-4599
> URL: https://issues.apache.org/jira/browse/YARN-4599
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: sandflee
>  Labels: oct16-medium
> Attachments: yarn-4599-not-so-useful.patch, YARN-4599.sandflee.patch
>
>
> YARN-1856 adds memory cgroups enforcing support. We should also explicitly 
> set OOM control so that containers are not killed as soon as they go over 
> their usage. Today, one could set the swappiness to control this, but 
> clusters with swap turned off exist.
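For context, on cgroups v1 this boils down to writing to the cgroup's 
{{memory.oom_control}} file. A minimal Java sketch, assuming a typical v1 mount 
point and a made-up container hierarchy (both vary by distro and NM 
configuration; this is not what the NM actually does):
{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class OomControlSketch {
  public static void main(String[] args) throws IOException {
    // Assumed cgroup v1 layout; the mount point and hierarchy names here
    // are illustrative only.
    Path oomControl = Paths.get(
        "/sys/fs/cgroup/memory/hadoop-yarn/container_42/memory.oom_control");
    // Writing "1" sets oom_kill_disable for this cgroup: tasks that exceed
    // the memory limit are paused instead of killed, giving a monitor a
    // chance to react before anything dies.
    Files.write(oomControl, "1".getBytes(StandardCharsets.UTF_8));
  }
}
{code}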



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7178) Add documentation for Container Update API

2017-09-29 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186676#comment-16186676
 ] 

Andrew Wang commented on YARN-7178:
---

Is this a blocker for 3.0.0? Saw this pop up on my dashboard.

> Add documentation for Container Update API
> --
>
> Key: YARN-7178
> URL: https://issues.apache.org/jira/browse/YARN-7178
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Blocker
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4859) [Bug] Unable to submit a job to a reservation when using FairScheduler

2017-09-29 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186674#comment-16186674
 ] 

Andrew Wang commented on YARN-4859:
---

Is this a blocker for 3.0.0? Saw this pop up on my dashboard.

> [Bug] Unable to submit a job to a reservation when using FairScheduler
> --
>
> Key: YARN-4859
> URL: https://issues.apache.org/jira/browse/YARN-4859
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Subru Krishnan
>Assignee: Sean Po
>Priority: Blocker
>
> Jobs submitted to a reservation get stuck at scheduled stage when using 
> FairScheduler. I came across this when working on YARN-4827 (documentation 
> for configuring ReservationSystem for FairScheduler)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7275) NM Statestore cleanup for Container updates

2017-09-29 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186677#comment-16186677
 ] 

Andrew Wang commented on YARN-7275:
---

Is this a blocker for 3.0.0? Saw this pop up on my dashboard.

> NM Statestore cleanup for Container updates
> ---
>
> Key: YARN-7275
> URL: https://issues.apache.org/jira/browse/YARN-7275
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: kartheek muthyala
>Priority: Blocker
>
> Currently, only resource updates are recorded in the NM state store; we need 
> to add ExecutionType updates as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7134) AppSchedulingInfo has a dependency on capacity scheduler

2017-09-29 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186671#comment-16186671
 ] 

Andrew Wang commented on YARN-7134:
---

This one popped up on my dashboard as a 3.0.0 GA blocker; could I get a status 
update? Is there an assignee?

> AppSchedulingInfo has a dependency on capacity scheduler
> 
>
> Key: YARN-7134
> URL: https://issues.apache.org/jira/browse/YARN-7134
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Priority: Blocker
>
> The common scheduling code should be independent of all scheduler 
> implementations.  YARN-6040 introduced capacity scheduler's 
> {{SchedulingMode}} into {{AppSchedulingInfo}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4827) Document configuration of ReservationSystem for FairScheduler

2017-09-29 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186672#comment-16186672
 ] 

Andrew Wang commented on YARN-4827:
---

Is this a blocker for 3.0.0? Saw this pop up on my dashboard.

> Document configuration of ReservationSystem for FairScheduler
> -
>
> Key: YARN-4827
> URL: https://issues.apache.org/jira/browse/YARN-4827
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>Priority: Blocker
>
> This JIRA tracks the effort to add documentation on how to configure 
> ReservationSystem for FairScheduler



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6487) FairScheduler: remove continuous scheduling (YARN-1010)

2017-09-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-6487:
--
Target Version/s:   (was: 3.0.0)

> FairScheduler: remove continuous scheduling (YARN-1010)
> ---
>
> Key: YARN-6487
> URL: https://issues.apache.org/jira/browse/YARN-6487
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: fairscheduler
>Affects Versions: 2.7.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>
> Remove deprecated FairScheduler continuous scheduler code



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6241) Remove -jt flag

2017-09-29 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186659#comment-16186659
 ] 

Andrew Wang commented on YARN-6241:
---

Dropping the 3.0.0 target version; since this is an incompatible change and beta1 
is up, this will have to wait for the next major release.

> Remove -jt flag
> ---
>
> Key: YARN-6241
> URL: https://issues.apache.org/jira/browse/YARN-6241
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>
> The -jt flag is used to send a job to a remote resourcemanager.  Given the name 
> of the flag, this is clearly left over from pre-YARN days.  With the addition of 
> the timeline server and other YARN services, the flag doesn't really work that 
> well anymore.  It's probably better to deprecate it in 2.x and remove it from 
> 3.x than attempt to fix it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6487) FairScheduler: remove continuous scheduling (YARN-1010)

2017-09-29 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186658#comment-16186658
 ] 

Andrew Wang commented on YARN-6487:
---

Dropping the 3.0.0 target version; since this is an incompatible change and beta1 
is up, this will have to wait for the next major release.

> FairScheduler: remove continuous scheduling (YARN-1010)
> ---
>
> Key: YARN-6487
> URL: https://issues.apache.org/jira/browse/YARN-6487
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: fairscheduler
>Affects Versions: 2.7.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>
> Remove deprecated FairScheduler continuous scheduler code



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6241) Remove -jt flag

2017-09-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-6241:
--
Target Version/s:   (was: 3.0.0)

> Remove -jt flag
> ---
>
> Key: YARN-6241
> URL: https://issues.apache.org/jira/browse/YARN-6241
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>
> The -jt flag is used to send a job to a remote resourcemanager.  Given the name 
> of the flag, this is clearly left over from pre-YARN days.  With the addition of 
> the timeline server and other YARN services, the flag doesn't really work that 
> well anymore.  It's probably better to deprecate it in 2.x and remove it from 
> 3.x than attempt to fix it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7234) Kill Application button shows "404 Error" even though the application gets killed

2017-09-29 Thread Sumana Sathish (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sumana Sathish updated YARN-7234:
-
Release Note:   (was: Do not see the issue anymore)

I do not see the issue anymore, hence closing the JIRA.

> Kill Application button shows "404 Error" even though the application gets 
> killed
> -
>
> Key: YARN-7234
> URL: https://issues.apache.org/jira/browse/YARN-7234
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Suma Shivaprasad
>Priority: Critical
>
> In Secured UI,
> Run an application as 'hrt_qa'
> Kill the application by clicking the "kill application" button in the UI once 
> you are logged in as the hrt_qa user.
> A 404 error is shown even though the application gets killed.
> Options



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6333) Improve doc for minSharePreemptionTimeout, fairSharePreemptionTimeout and fairSharePreemptionThreshold

2017-09-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186645#comment-16186645
 ] 

Hadoop QA commented on YARN-6333:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
24m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-6333 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12889778/YARN-6333-2.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux cd319abdb310 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 373d0a5 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17716/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve doc for minSharePreemptionTimeout, fairSharePreemptionTimeout and 
> fairSharePreemptionThreshold
> --
>
> Key: YARN-6333
> URL: https://issues.apache.org/jira/browse/YARN-6333
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Chetna Chaudhari
>  Labels: newbie++
> Attachments: YARN-6333-1.patch, YARN-6333-2.patch
>
>
> Their default values are not mentioned in the doc. For example, the default 
> value of minSharePreemptionTimeout is {{Long.MAX_VALUE}}, which means min share 
> preemption won't happen until you set a meaningful value.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-7234) Kill Application button shows "404 Error" even though the application gets killed

2017-09-29 Thread Sumana Sathish (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sumana Sathish resolved YARN-7234.
--
  Resolution: Cannot Reproduce
Release Note: Do not see the issue anymore

> Kill Application button shows "404 Error" even though the application gets 
> killed
> -
>
> Key: YARN-7234
> URL: https://issues.apache.org/jira/browse/YARN-7234
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Suma Shivaprasad
>Priority: Critical
>
> In Secured UI,
> Run an application as 'hrt_qa'
> Kill the application by clicking the "kill application" button in the UI once 
> you are logged in as the hrt_qa user.
> A 404 error is shown even though the application gets killed.
> Options



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-7268) testCompareXmlAgainstConfigurationClass fails due to 1 missing property from yarn-default

2017-09-29 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong resolved YARN-7268.
-
Resolution: Invalid

> testCompareXmlAgainstConfigurationClass fails due to 1 missing property from 
> yarn-default
> -
>
> Key: YARN-7268
> URL: https://issues.apache.org/jira/browse/YARN-7268
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Yesha Vora
>
> {code}
> Error Message
> yarn-default.xml has 1 properties missing in  class 
> org.apache.hadoop.yarn.conf.YarnConfiguration
> Stacktrace
> java.lang.AssertionError: yarn-default.xml has 1 properties missing in  class 
> org.apache.hadoop.yarn.conf.YarnConfiguration
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.conf.TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass(TestConfigurationFieldsBase.java:414)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Standard Output
> File yarn-default.xml (253 properties)
> yarn-default.xml has 1 properties missing in  class 
> org.apache.hadoop.yarn.conf.YarnConfiguration
>   yarn.log-aggregation.file-controller.TFile.class=
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7268) testCompareXmlAgainstConfigurationClass fails due to 1 missing property from yarn-default

2017-09-29 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186633#comment-16186633
 ] 

Xuan Gong commented on YARN-7268:
-

[~vinodkv]
bq.If this configuration yarn.log-aggregation.file-controller.TFile.class is 
not expected to be configurable by admin, it should be hardcoded and not be a 
config at all!

Now I remember why I did it this way at the very beginning; it was deliberate. 
First of all, to use a new log aggregation file format, we need at least two 
configurations:
{code}
yarn.log-aggregation.file-formats
yarn.log-aggregation.file-controller.%s.class
{code}

To support backward compatibility, we always expect the user to append TFile to 
the configuration "yarn.log-aggregation.file-formats", which is already 
documented.
Of course, we could hardcode "TFile" in the code, but that would mean we always 
support TFile, which is acceptable.
Making it configurable, however, lets users disable TFile support if they want.
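
To make this concrete, a minimal yarn-site.xml sketch that registers TFile 
alongside a second, purely hypothetical format named IFile (the IFile name and 
its controller class are invented for illustration; the TFile controller class 
shown is the stock one):

{code:xml}
<!-- Comma-separated list of enabled formats; appending TFile keeps the
     backward-compatible format available. -->
<property>
  <name>yarn.log-aggregation.file-formats</name>
  <value>IFile,TFile</value>
</property>
<!-- Each listed format needs a controller class; the %s in the key pattern
     is replaced by the format name. "IFile" and its class are hypothetical. -->
<property>
  <name>yarn.log-aggregation.file-controller.IFile.class</name>
  <value>org.example.IFileLogAggregationFileController</value>
</property>
<property>
  <name>yarn.log-aggregation.file-controller.TFile.class</name>
  <value>org.apache.hadoop.yarn.logaggregation.filecontroller.tfile.LogAggregationTFileController</value>
</property>
{code}

Under this scheme, dropping TFile from the file-formats list is what "disable 
TFile support" would look like.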

> testCompareXmlAgainstConfigurationClass fails due to 1 missing property from 
> yarn-default
> -
>
> Key: YARN-7268
> URL: https://issues.apache.org/jira/browse/YARN-7268
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Yesha Vora
>
> {code}
> Error Message
> yarn-default.xml has 1 properties missing in  class 
> org.apache.hadoop.yarn.conf.YarnConfiguration
> Stacktrace
> java.lang.AssertionError: yarn-default.xml has 1 properties missing in  class 
> org.apache.hadoop.yarn.conf.YarnConfiguration
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.conf.TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass(TestConfigurationFieldsBase.java:414)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Standard Output
> File yarn-default.xml (253 properties)
> yarn-default.xml has 1 properties missing in  class 
> org.apache.hadoop.yarn.conf.YarnConfiguration
>   yarn.log-aggregation.file-controller.TFile.class
> ={code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7268) testCompareXmlAgainstConfigurationClass fails due to 1 missing property from yarn-default

2017-09-29 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186628#comment-16186628
 ] 

Xuan Gong commented on YARN-7268:
-

Thanks for the verification, Ray.
This test does not fail on either trunk or branch-2,
so I will close this JIRA as invalid.

> testCompareXmlAgainstConfigurationClass fails due to 1 missing property from 
> yarn-default
> -
>
> Key: YARN-7268
> URL: https://issues.apache.org/jira/browse/YARN-7268
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Yesha Vora
>
> {code}
> Error Message
> yarn-default.xml has 1 properties missing in  class 
> org.apache.hadoop.yarn.conf.YarnConfiguration
> Stacktrace
> java.lang.AssertionError: yarn-default.xml has 1 properties missing in  class 
> org.apache.hadoop.yarn.conf.YarnConfiguration
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.conf.TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass(TestConfigurationFieldsBase.java:414)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Standard Output
> File yarn-default.xml (253 properties)
> yarn-default.xml has 1 properties missing in  class 
> org.apache.hadoop.yarn.conf.YarnConfiguration
>   yarn.log-aggregation.file-controller.TFile.class
> ={code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7226) Whitelisted variables do not support delayed variable expansion

2017-09-29 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186627#comment-16186627
 ] 

Sidharta Seethana commented on YARN-7226:
-

I'll take a look at the latest patch today or Monday at the latest.

> Whitelisted variables do not support delayed variable expansion
> ---
>
> Key: YARN-7226
> URL: https://issues.apache.org/jira/browse/YARN-7226
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha4
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: YARN-7226.001.patch, YARN-7226.002.patch, 
> YARN-7226.003.patch, YARN-7226.004.patch, YARN-7226.005.patch
>
>
> The nodemanager supports a configurable list of environment variables, via 
> yarn.nodemanager.env-whitelist, that will be propagated to the container's 
> environment unless those variables were specified in the container launch 
> context.  Unfortunately the handling of these whitelisted variables prevents 
> using delayed variable expansion.  For example, if a user shipped their own 
> version of hadoop with their job via the distributed cache and specified:
> {noformat}
> HADOOP_COMMON_HOME={{PWD}}/my-private-hadoop/
> {noformat}
>  as part of their job, the variable will be set as the *literal* string:
> {noformat}
> $PWD/my-private-hadoop/
> {noformat}
> rather than having $PWD expand to the container's current directory as it 
> does for any other, non-whitelisted variable being set to the same value.
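
For context, the whitelist itself is a plain yarn-site.xml property; a trimmed 
sketch (the full default list is longer) showing the variable from the example 
above:

{code:xml}
<property>
  <name>yarn.nodemanager.env-whitelist</name>
  <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,HADOOP_YARN_HOME</value>
</property>
{code}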



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3514) Active directory usernames like domain\login cause YARN failures

2017-09-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186593#comment-16186593
 ] 

Hadoop QA commented on YARN-3514:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-3514 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-3514 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12726964/YARN-3514.002.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17714/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Active directory usernames like domain\login cause YARN failures
> 
>
> Key: YARN-3514
> URL: https://issues.apache.org/jira/browse/YARN-3514
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.2.0
> Environment: CentOS6
>Reporter: john lilley
>Priority: Minor
>  Labels: oct16-easy
> Attachments: YARN-3514.001.patch, YARN-3514.002.patch
>
>
> We have a 2.2.0 (Cloudera 5.3) cluster running on CentOS6 that is 
> Kerberos-enabled and uses an external AD domain controller for the KDC.  We 
> are able to authenticate, browse HDFS, etc.  However, YARN fails during 
> localization because it seems to get confused by the presence of a \ 
> character in the local user name.
> Our AD authentication on the nodes goes through sssd and is configured to 
> map AD users onto the form domain\username. For example, our test user has a 
> Kerberos principal of hadoopu...@domain.com and that maps onto a CentOS user 
> "domain\hadoopuser".  We have no problem validating that user with PAM, 
> logging in as that user, su-ing to that user, etc.
> However, when we attempt to run a YARN application master, the localization 
> step fails when setting up the local cache directory for the AM.  The error 
> that comes out of the RM logs:
> 2015-04-17 12:47:09 INFO net.redpoint.yarnapp.Client[0]: monitorApplication: 
> ApplicationReport: appId=1, state=FAILED, progress=0.0, finalStatus=FAILED, 
> diagnostics='Application application_1429295486450_0001 failed 1 times due to 
> AM Container for appattempt_1429295486450_0001_01 exited with  exitCode: 
> -1000 due to: Application application_1429295486450_0001 initialization 
> failed (exitCode=255) with output: main : command provided 0
> main : user is DOMAIN\hadoopuser
> main : requested yarn user is domain\hadoopuser
> org.apache.hadoop.util.DiskChecker$DiskErrorException: Cannot create 
> directory: 
> /data/yarn/nm/usercache/domain%5Chadoopuser/appcache/application_1429295486450_0001/filecache/10
> at 
> org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:105)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.download(ContainerLocalizer.java:199)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:241)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.main(ContainerLocalizer.java:347)
> .Failing this attempt.. Failing the application.'
> However, when we look on the node launching the AM, we see this:
> [root@rpb-cdh-kerb-2 ~]# cd /data/yarn/nm/usercache
> [root@rpb-cdh-kerb-2 usercache]# ls -l
> drwxr-s--- 4 DOMAIN\hadoopuser yarn 4096 Apr 17 12:10 domain\hadoopuser
> There appears to be different treatment of the \ character in different 
> places.  Something creates the directory as "domain\hadoopuser" but something 
> else later attempts to use it as "domain%5Chadoopuser". I'm not sure where 
> or why URL escaping converts the \ to %5C, or why this is not consistent.
> I should also mention, for the sake of completeness, our auth_to_local rule 
> is set up to map u...@domain.com to domain\user:
> RULE:[1:$1@$0](^.*@DOMAIN\.COM$)s/^(.*)@DOMAIN\.COM$/domain\\$1/g
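
For reference, such a rule is normally carried in the standard 
hadoop.security.auth_to_local property in core-site.xml; a sketch using the 
rule quoted above:

{code:xml}
<!-- Maps any principal under DOMAIN.COM to the local user domain\<user>. -->
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[1:$1@$0](^.*@DOMAIN\.COM$)s/^(.*)@DOMAIN\.COM$/domain\\$1/g
    DEFAULT
  </value>
</property>
{code}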



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org

[jira] [Commented] (YARN-2919) Potential race between renew and cancel in DelegationTokenRenewer

2017-09-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186594#comment-16186594
 ] 

Hadoop QA commented on YARN-2919:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-2919 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-2919 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875365/YARN-2919.005.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17715/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Potential race between renew and cancel in DelegationTokenRenewer 
> -
>
> Key: YARN-2919
> URL: https://issues.apache.org/jira/browse/YARN-2919
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Naganarasimha G R
>Priority: Critical
> Attachments: YARN-2919.002.patch, YARN-2919.003.patch, 
> YARN-2919.004.patch, YARN-2919.005.patch, YARN-2919.20141209-1.patch
>
>
> YARN-2874 fixes a deadlock in DelegationTokenRenewer, but there is still a 
> race because of which a renewal in flight isn't interrupted by a cancel. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4858) start-yarn and stop-yarn scripts to support timeline and sharedcachemanager

2017-09-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186591#comment-16186591
 ] 

Hadoop QA commented on YARN-4858:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
50s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
7s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m  
1s{color} | {color:red} The patch generated 2 new + 15 unchanged - 0 fixed = 17 
total (was 15) {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
8s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
42s{color} | {color:green} hadoop-yarn in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:eaf5c66 |
| JIRA Issue | YARN-4858 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12795001/YARN-4858-branch-2.001.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux cd3991781a57 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / b0ba31c |
| shellcheck | v0.4.6 |
| shellcheck | 
https://builds.apache.org/job/PreCommit-YARN-Build/17708/artifact/patchprocess/diff-patch-shellcheck.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17708/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17708/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> start-yarn and stop-yarn scripts to support timeline and sharedcachemanager
> ---
>
> Key: YARN-4858
> URL: https://issues.apache.org/jira/browse/YARN-4858
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: oct16-easy
> Attachments: YARN-4858-001.patch, YARN-4858-branch-2.001.patch
>
>
> The start-yarn and stop-yarn scripts don't have any (even commented-out) 
> support for the timeline server and sharedcachemanager.
> Proposed:
> * bash and cmd start-yarn scripts have commented out start actions
> * stop-yarn scripts stop the servers.
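
A sketch of what the proposed commented-out start actions might look like in 
sbin/start-yarn.sh on branch-2 (the daemon names are the standard ones; the 
exact placement and wording are assumptions):

{noformat}
# Uncomment to also start the timeline server on this host:
# "$bin"/yarn-daemon.sh --config $YARN_CONF_DIR start timelineserver
# Uncomment to also start the shared cache manager on this host:
# "$bin"/yarn-daemon.sh --config $YARN_CONF_DIR start sharedcachemanager
{noformat}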



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-574) PrivateLocalizer does not support parallel resource download via ContainerLocalizer

2017-09-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186583#comment-16186583
 ] 

Hadoop QA commented on YARN-574:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  3m 48s{color} 
| {color:red} YARN-574 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-574 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12849264/YARN-574.05.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17713/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> PrivateLocalizer does not support parallel resource download via 
> ContainerLocalizer
> ---
>
> Key: YARN-574
> URL: https://issues.apache.org/jira/browse/YARN-574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.6.0, 2.8.0, 2.7.1
>Reporter: Omkar Vinit Joshi
>Assignee: Ajith S
> Attachments: YARN-574.03.patch, YARN-574.04.patch, YARN-574.05.patch, 
> YARN-574.1.patch, YARN-574.2.patch
>
>
> At present, private resources are downloaded in parallel only if multiple 
> containers request the same resource; otherwise downloads are serial. 
> The protocol between PrivateLocalizer and ContainerLocalizer supports 
> multiple downloads, but this is not used, and only one resource is sent for 
> download at a time.
> I think we can increase / assure parallelism (even for a single container 
> requesting resources) for private/application resources by allowing multiple 
> downloads per ContainerLocalizer.
> Total parallelism before
> = number of threads allotted for PublicLocalizer [public resources] + number 
> of containers [private and application resources]
> Total parallelism after
> = number of threads allotted for PublicLocalizer [public resources] + number 
> of containers * max downloads per container [private and application resources]
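
A quick illustrative calculation with assumed numbers (4 PublicLocalizer 
threads, 10 running containers, and 4 downloads per ContainerLocalizer after 
the change):

{noformat}
before: 4 + 10      = 14 concurrent downloads
after:  4 + 10 * 4  = 44 concurrent downloads
{noformat}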



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6333) Improve doc for minSharePreemptionTimeout, fairSharePreemptionTimeout and fairSharePreemptionThreshold

2017-09-29 Thread Chetna Chaudhari (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186580#comment-16186580
 ] 

Chetna Chaudhari commented on YARN-6333:


Thanks [~yufeigu] for reviewing this patch. I have made the suggested changes 
in the latest patch. Please review. 

> Improve doc for minSharePreemptionTimeout, fairSharePreemptionTimeout and 
> fairSharePreemptionThreshold
> --
>
> Key: YARN-6333
> URL: https://issues.apache.org/jira/browse/YARN-6333
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Chetna Chaudhari
>  Labels: newbie++
> Attachments: YARN-6333-1.patch, YARN-6333-2.patch
>
>
> Their default values are not mentioned in the doc. For example, the default 
> value of minSharePreemptionTimeout is {{Long.MAX_VALUE}}, which means min 
> share preemption won't happen until you set a meaningful value. 
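
For context, a sketch of where these settings live in the Fair Scheduler 
allocation file (fair-scheduler.xml); the queue name and values are invented 
for illustration:

{code:xml}
<allocations>
  <queue name="research">
    <!-- Seconds a queue is under its min share before it may preempt. -->
    <minSharePreemptionTimeout>60</minSharePreemptionTimeout>
    <!-- Seconds a queue is under its fair-share threshold before it may preempt. -->
    <fairSharePreemptionTimeout>120</fairSharePreemptionTimeout>
    <!-- Fraction of fair share below which the queue counts as starved. -->
    <fairSharePreemptionThreshold>0.5</fairSharePreemptionThreshold>
  </queue>
</allocations>
{code}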



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6333) Improve doc for minSharePreemptionTimeout, fairSharePreemptionTimeout and fairSharePreemptionThreshold

2017-09-29 Thread Chetna Chaudhari (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetna Chaudhari updated YARN-6333:
---
Attachment: YARN-6333-2.patch

> Improve doc for minSharePreemptionTimeout, fairSharePreemptionTimeout and 
> fairSharePreemptionThreshold
> --
>
> Key: YARN-6333
> URL: https://issues.apache.org/jira/browse/YARN-6333
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Chetna Chaudhari
>  Labels: newbie++
> Attachments: YARN-6333-1.patch, YARN-6333-2.patch
>
>
> Their default values are not mentioned in the doc. For example, the default 
> value of minSharePreemptionTimeout is {{Long.MAX_VALUE}}, which means min 
> share preemption won't happen until you set a meaningful value. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3625) RollingLevelDBTimelineStore Incorrectly Forbids Related Entity in Same Put

2017-09-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186569#comment-16186569
 ] 

Hadoop QA commented on YARN-3625:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} YARN-3625 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-3625 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12732257/YARN-3625.2.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17709/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RollingLevelDBTimelineStore Incorrectly Forbids Related Entity in Same Put
> --
>
> Key: YARN-3625
> URL: https://issues.apache.org/jira/browse/YARN-3625
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>  Labels: oct16-medium
> Attachments: YARN-3625.1.patch, YARN-3625.2.patch
>
>
> RollingLevelDBTimelineStore batches all entities in the same put to improve 
> performance. However, this causes an error when an entity is related to 
> another entity in the same put.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5762) Summarize ApplicationNotFoundException in the RM log

2017-09-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186572#comment-16186572
 ] 

Hadoop QA commented on YARN-5762:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-5762 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-5762 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12834569/YARN-5762.01.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17711/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Summarize ApplicationNotFoundException in the RM log
> 
>
> Key: YARN-5762
> URL: https://issues.apache.org/jira/browse/YARN-5762
> Project: Hadoop YARN
>  Issue Type: Task
>Affects Versions: 2.7.2
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
>Priority: Minor
> Attachments: YARN-5762.01.patch
>
>
> We found a lot of {{ApplicationNotFoundException}} in the RM logs. These were 
> most likely caused by the {{AggregatedLogDeletionService}} [which 
> checks|https://github.com/apache/hadoop/blob/262827cf75bf9c48cd95335eb04fd8ff1d64c538/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/AggregatedLogDeletionService.java#L156]
>  that the application is not running anymore. e.g.
> {code}2016-10-17 15:25:26,542 INFO org.apache.hadoop.ipc.Server: IPC Server 
> handler 20 on 8032, call 
> org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.getApplicationReport 
> from :12205 Call#35401 Retry#0
> org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException: Application 
> with id 'application_1473396553140_1451' doesn't exist in RM.
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationReport(ClientRMService.java:327)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationReport(ApplicationClientProtocolPBServiceImpl.java:175)
> at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:417)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
> 2016-10-17 15:25:26,633 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 47 on 8032, call 
> org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.getApplicationReport 
> from :12205 Call#35404 Retry#0
> org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException: Application 
> with id 'application_1473396553140_1452' doesn't exist in RM.
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationReport(ClientRMService.java:327)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationReport(ApplicationClientProtocolPBServiceImpl.java:175)
> at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:417)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
> {code}
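
One plausible way to summarize these, sketched here as an assumption rather 
than taken from the attached patch: Hadoop's IPC server can mark selected 
exceptions as "terse" so they are logged as a one-line summary instead of a 
full stack trace.

{code:java}
// Sketch only. Registers ApplicationNotFoundException as a terse exception
// on an existing org.apache.hadoop.ipc.Server instance, so each occurrence
// is logged as a single line rather than a full stack trace.
import org.apache.hadoop.ipc.Server;
import org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException;

public final class TerseExceptionSetup {
  private TerseExceptionSetup() {}

  public static void register(Server rpcServer) {
    rpcServer.addTerseExceptions(ApplicationNotFoundException.class);
  }
}
{code}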



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, 

[jira] [Commented] (YARN-2031) YARN Proxy model doesn't support REST APIs in AMs

2017-09-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186571#comment-16186571
 ] 

Hadoop QA commented on YARN-2031:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} YARN-2031 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-2031 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12857354/YARN-2031-005.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17710/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> YARN Proxy model doesn't support REST APIs in AMs
> -
>
> Key: YARN-2031
> URL: https://issues.apache.org/jira/browse/YARN-2031
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: YARN-2031-002.patch, YARN-2031-003.patch, 
> YARN-2031-004.patch, YARN-2031-005.patch, YARN-2031.patch.001
>
>
> AMs can't support REST APIs because
> # the AM filter redirects all requests to the proxy with a 302 response (not 
> 307)
> # the proxy doesn't forward PUT/POST/DELETE verbs
> Either the AM filter needs to return 307 and the proxy needs to forward the 
> verbs, or the AM filter should not filter the REST part of the web site.
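
To illustrate why the status code matters (standard HTTP semantics, with an 
invented path): clients historically replay a 302 as a GET, dropping the verb 
and body, whereas a 307 obliges them to repeat the same verb.

{noformat}
POST /ws/v1/example HTTP/1.1
-> 302 Found, Location: http://am-host/ws/v1/example
   client follows up with GET (verb and body lost)

POST /ws/v1/example HTTP/1.1
-> 307 Temporary Redirect, Location: http://am-host/ws/v1/example
   client follows up with POST (verb and body preserved)
{noformat}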



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7262) Add a hierarchy into the ZKRMStateStore for delegation token znodes to prevent jute buffer overflow

2017-09-29 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-7262:

Target Version/s: 3.0.0  (was: 3.0.0, 2.10.0)

> Add a hierarchy into the ZKRMStateStore for delegation token znodes to 
> prevent jute buffer overflow
> ---
>
> Key: YARN-7262
> URL: https://issues.apache.org/jira/browse/YARN-7262
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-7262.001.patch
>
>
> We've seen users running into a problem where the RM stores so many 
> delegation tokens in the {{ZKRMStateStore}} that the _listing_ of those 
> znodes is larger than the jute buffer limit. This is fine during normal 
> operation, but becomes a problem on a failover, because the RM will try to 
> read in all of the token znodes (i.e. call {{getChildren}} on the parent 
> znode). This is particularly bad because everything appears to be okay, but 
> then if a failover occurs you end up with no active RMs.
> There was a similar problem with the Yarn application data that was fixed in 
> YARN-2962 by adding a (configurable) hierarchy of znodes so the RM could pull 
> subchildren without overflowing the jute buffer (though it's off by default).
> We should add a hierarchy similar to that of YARN-2962, but for the 
> delegation token znodes.
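
For a concrete picture, a sketch of the layout modeled on the YARN-2962 app-id 
scheme, not taken from this patch; with a split-index of 1, the last digit of 
the token sequence number becomes a leaf under a shorter prefix, so any single 
{{getChildren}} listing stays bounded:

{noformat}
Flat layout (today): one getChildren() returns every token znode
  .../RMDelegationTokensRoot/RMDelegationToken_1
  .../RMDelegationTokensRoot/RMDelegationToken_2
  ...

Hierarchical sketch (split-index = 1):
  .../RMDelegationTokensRoot/1/RMDelegationToken_123/4    (token 1234)
{noformat}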



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7269) Tracking URL in the app state does not get redirected to ApplicationMaster for Running applications

2017-09-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186568#comment-16186568
 ] 

Hadoop QA commented on YARN-7269:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
7m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m  8s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy:
 The patch generated 6 new + 22 unchanged - 0 fixed = 28 total (was 22) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 32s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-yarn-server-web-proxy in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7269 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12889771/YARN-7269.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1bcb596a3273 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 373d0a5 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17705/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-web-proxy.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17705/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy 
U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy 
|
| Console output | 

[jira] [Updated] (YARN-7262) Add a hierarchy into the ZKRMStateStore for delegation token znodes to prevent jute buffer overflow

2017-09-29 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-7262:

Target Version/s: 3.0.0, 2.10.0  (was: 2.9.0, 3.0.0)

I'm fine pushing this out.

> Add a hierarchy into the ZKRMStateStore for delegation token znodes to 
> prevent jute buffer overflow
> ---
>
> Key: YARN-7262
> URL: https://issues.apache.org/jira/browse/YARN-7262
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-7262.001.patch
>
>
> We've seen users running into a problem where the RM stores so many 
> delegation tokens in the {{ZKRMStateStore}} that the _listing_ of those 
> znodes is larger than the jute buffer limit. This is fine during normal 
> operation, but becomes a problem on a failover, because the RM will try to 
> read in all of the token znodes (i.e. call {{getChildren}} on the parent 
> znode). This is particularly bad because everything appears to be okay, but 
> then if a failover occurs you end up with no active RMs.
> There was a similar problem with the Yarn application data that was fixed in 
> YARN-2962 by adding a (configurable) hierarchy of znodes so the RM could pull 
> subchildren without overflowing the jute buffer (though it's off by default).
> We should add a hierarchy similar to that of YARN-2962, but for the 
> delegation token znodes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5329) ReservationAgent enhancements required to support recurring reservations in the YARN ReservationSystem

2017-09-29 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186558#comment-16186558
 ] 

Carlo Curino commented on YARN-5329:


Yes, it is targeting 2.9.0, and it shouldn't take long to fix the tests. (It 
also goes hand-in-hand with a few other JIRAs from the YARN-5326 umbrella that 
are already in.)

> ReservationAgent enhancements required to support recurring reservations in 
> the YARN ReservationSystem
> --
>
> Key: YARN-5329
> URL: https://issues.apache.org/jira/browse/YARN-5329
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Carlo Curino
>Priority: Blocker
> Attachments: YARN-5329.v0.patch, YARN-5329.v1.patch, 
> YARN-5329.v2.patch, YARN-5329.v3.patch
>
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to track the changes required 
> in ReservationAgent to accomplish it. Please refer to the design doc in the 
> parent JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2681) Support bandwidth enforcement for containers while reading from HDFS

2017-09-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186552#comment-16186552
 ] 

Hadoop QA commented on YARN-2681:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} YARN-2681 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-2681 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12745824/YARN-2681.005.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17707/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Support bandwidth enforcement for containers while reading from HDFS
> 
>
> Key: YARN-2681
> URL: https://issues.apache.org/jira/browse/YARN-2681
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Affects Versions: 2.5.1
> Environment: Linux
>Reporter: Nam H. Do
> Attachments: Traffic Control Design.png, YARN-2681.001.patch, 
> YARN-2681.002.patch, YARN-2681.003.patch, YARN-2681.004.patch, 
> YARN-2681.005.patch, YARN-2681.patch
>
>
> To read/write HDFS data, applications establish TCP/IP connections with the 
> datanode. HDFS reads can be controlled by configuring the Linux Traffic 
> Control (TC) subsystem on the data node to place filters on the appropriate 
> connections.
> The current cgroups net_cls concept cannot be applied on the node where the 
> container is launched, nor on the data node, since:
> -   TC handles outgoing bandwidth only, so it cannot be set on the container 
> node (an HDFS read is incoming data for the container)
> -   Since the HDFS data node runs as a single process, it is not possible 
> to use net_cls to separate connections from different containers to the 
> datanode.
> Tasks:
> 1) Extend the Resource model to define a bandwidth enforcement rate
> 2) Monitor TCP/IP connections established by the container-handling process 
> and its child processes
> 3) Set Linux Traffic Control rules on the data node based on address:port 
> pairs in order to enforce the bandwidth of outgoing data
> Concept: http://www.hit.bme.hu/~do/papers/EnforcementDesign.pdf
> Implementation: 
> http://www.hit.bme.hu/~dohoai/documents/HdfsTrafficControl.pdf
> http://www.hit.bme.hu/~dohoai/documents/HdfsTrafficControl_UML_diagram.png
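
For readers unfamiliar with TC, a sketch of the kind of per-connection rule 
the design describes; the device, rate, and destination address are invented, 
and 50010 is the default datanode transfer port:

{noformat}
# Root HTB qdisc with a default class, plus a 50 Mbit class for shaped flows.
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:20 htb rate 50mbit
# Direct traffic from the datanode port to one container host into that class.
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip sport 50010 0xffff \
    match ip dst 192.0.2.15/32 flowid 1:20
{noformat}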



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4721) RM to try to auth with HDFS on startup, retry with max diagnostics on failure

2017-09-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186549#comment-16186549
 ] 

Hadoop QA commented on YARN-4721:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} YARN-4721 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-4721 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12799703/HADOOP-12289-003.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17706/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RM to try to auth with HDFS on startup, retry with max diagnostics on failure
> -
>
> Key: YARN-4721
> URL: https://issues.apache.org/jira/browse/YARN-4721
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager, security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>  Labels: oct16-medium
> Attachments: HADOOP-12289-002.patch, HADOOP-12289-003.patch, 
> HADOOP-12889-001.patch
>
>
> If the RM can't auth with HDFS, this can first surface during job submission, 
> which can cause confusion about what's wrong and whose credentials are 
> playing up.
> Instead, the RM could try to talk to HDFS on launch; {{ls /}} should suffice. 
> If it can't auth, it can then tell UGI to log more and retry.
> I don't know what the policy should be if the RM can't auth to HDFS at this 
> point. Certainly it can't currently accept work. But should it fail fast or 
> keep going in the hope that the problem is in the KDC or NN and will fix 
> itself without an RM restart?
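
A minimal sketch of such a startup probe, assuming the obvious FileSystem API; 
the retry diagnostics shown are just the standard JAAS/Kerberos debug flag, 
not whatever the attached patches actually do:

{code:java}
// Sketch only: probe HDFS at RM startup by listing "/", and retry with
// extra Kerberos diagnostics if the first attempt fails.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class StartupHdfsProbe {
  private StartupHdfsProbe() {}

  public static void probe(Configuration conf) throws IOException {
    try {
      FileSystem.get(conf).listStatus(new Path("/")); // the "ls /" check
    } catch (IOException e) {
      System.setProperty("sun.security.krb5.debug", "true"); // log more
      FileSystem.get(conf).listStatus(new Path("/"));        // retry once
    }
  }
}
{code}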



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4858) start-yarn and stop-yarn scripts to support timeline and sharedcachemanager

2017-09-29 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-4858:
--

Is this still on target for 2.9.0? If not, can we push this out to the next 
major release?

> start-yarn and stop-yarn scripts to support timeline and sharedcachemanager
> ---
>
> Key: YARN-4858
> URL: https://issues.apache.org/jira/browse/YARN-4858
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: oct16-easy
> Attachments: YARN-4858-001.patch, YARN-4858-branch-2.001.patch
>
>
> The start-yarn and stop-yarn scripts don't have any (even commented-out) 
> support for the timeline server and sharedcachemanager.
> Proposed:
> * bash and cmd start-yarn scripts have commented out start actions
> * stop-yarn scripts stop the servers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2037) Add restart support for Unmanaged AMs

2017-09-29 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-2037:
--

Is this still on target for 2.9.0? If not, can we push this out to the next 
major release?

> Add restart support for Unmanaged AMs
> -
>
> Key: YARN-2037
> URL: https://issues.apache.org/jira/browse/YARN-2037
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Botong Huang
> Attachments: YARN-2037.v1.patch
>
>
> It would be nice to allow Unmanaged AMs also to restart in a work-preserving 
> way. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5762) Summarize ApplicationNotFoundException in the RM log

2017-09-29 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5762:
--

Is this still on target for 2.9.0? If not, can we push this out to the next 
major release?

> Summarize ApplicationNotFoundException in the RM log
> 
>
> Key: YARN-5762
> URL: https://issues.apache.org/jira/browse/YARN-5762
> Project: Hadoop YARN
>  Issue Type: Task
>Affects Versions: 2.7.2
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
>Priority: Minor
> Attachments: YARN-5762.01.patch
>
>
> We found a lot of {{ApplicationNotFoundException}} in the RM logs. These were 
> most likely caused by the {{AggregatedLogDeletionService}} [which 
> checks|https://github.com/apache/hadoop/blob/262827cf75bf9c48cd95335eb04fd8ff1d64c538/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/AggregatedLogDeletionService.java#L156]
>  that the application is not running anymore. e.g.
> {code}2016-10-17 15:25:26,542 INFO org.apache.hadoop.ipc.Server: IPC Server 
> handler 20 on 8032, call 
> org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.getApplicationReport 
> from :12205 Call#35401 Retry#0
> org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException: Application 
> with id 'application_1473396553140_1451' doesn't exist in RM.
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationReport(ClientRMService.java:327)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationReport(ApplicationClientProtocolPBServiceImpl.java:175)
> at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:417)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
> 2016-10-17 15:25:26,633 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 47 on 8032, call 
> org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.getApplicationReport 
> from :12205 Call#35404 Retry#0
> org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException: Application 
> with id 'application_1473396553140_1452' doesn't exist in RM.
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationReport(ClientRMService.java:327)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationReport(ApplicationClientProtocolPBServiceImpl.java:175)
> at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:417)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7226) Whitelisted variables do not support delayed variable expansion

2017-09-29 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7226:
--

Is this still on target for 2.9.0? If not, can we push this out to the next 
major release?

> Whitelisted variables do not support delayed variable expansion
> ---
>
> Key: YARN-7226
> URL: https://issues.apache.org/jira/browse/YARN-7226
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha4
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: YARN-7226.001.patch, YARN-7226.002.patch, 
> YARN-7226.003.patch, YARN-7226.004.patch, YARN-7226.005.patch
>
>
> The nodemanager supports a configurable list of environment variables, via 
> yarn.nodemanager.env-whitelist, that will be propagated to the container's 
> environment unless those variables were specified in the container launch 
> context.  Unfortunately the handling of these whitelisted variables prevents 
> using delayed variable expansion.  For example, if a user shipped their own 
> version of hadoop with their job via the distributed cache and specified:
> {noformat}
> HADOOP_COMMON_HOME={{PWD}}/my-private-hadoop/
> {noformat}
>  as part of their job, the variable will be set as the *literal* string:
> {noformat}
> $PWD/my-private-hadoop/
> {noformat}
> rather than having $PWD expand to the container's current directory as it 
> does for any other, non-whitelisted variable being set to the same value.
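
To illustrate the delayed expansion being asked for here: instead of writing the 
literal inherited value, the NM could emit the job-supplied value with its 
{{PWD}} markers rewritten for the shell, so expansion happens inside the 
container. The sketch below is illustrative only; expandMarkers/exportLine are 
hypothetical helpers, not the NM's actual code.
{code:java}
// Minimal sketch (hypothetical helpers, not the NM's actual code): write the
// launch-script export so that {{VAR}} markers become shell variables and are
// expanded in the container, not pre-expanded on the NM.
public class WhitelistEnvSketch {

  // Turn {{VAR}} markers into $VAR for a Linux launch script.
  static String expandMarkers(String value) {
    return value.replace("{{", "$").replace("}}", "");
  }

  // If the job shipped its own value, export that (with markers rewritten);
  // otherwise fall back to the NM's inherited value via ${NAME:-...}.
  static String exportLine(String name, String jobValue, String nmValue) {
    if (jobValue != null) {
      return "export " + name + "=\"" + expandMarkers(jobValue) + "\"";
    }
    return "export " + name + "=\"${" + name + ":-" + nmValue + "}\"";
  }

  public static void main(String[] args) {
    // -> export HADOOP_COMMON_HOME="$PWD/my-private-hadoop/"
    System.out.println(
        exportLine("HADOOP_COMMON_HOME", "{{PWD}}/my-private-hadoop/", null));
  }
}
{code}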



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4721) RM to try to auth with HDFS on startup, retry with max diagnostics on failure

2017-09-29 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-4721:
--

Is this still on target for 2.9.0? If not, can we push this out to the next 
major release?

> RM to try to auth with HDFS on startup, retry with max diagnostics on failure
> -
>
> Key: YARN-4721
> URL: https://issues.apache.org/jira/browse/YARN-4721
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager, security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>  Labels: oct16-medium
> Attachments: HADOOP-12289-002.patch, HADOOP-12289-003.patch, 
> HADOOP-12889-001.patch
>
>
> If the RM can't auth with HDFS, this can first surface during job submission, 
> which can cause confusion about what's wrong and whose credentials are 
> playing up.
> Instead, the RM could try to talk to HDFS on launch; an {{ls /}} should suffice. 
> If it can't auth, it can then tell UGI to log more and retry.
> I don't know what the policy should be if the RM can't auth to HDFS at this 
> point. Certainly it can't currently accept work. But should it fail fast or 
> keep going in the hope that the problem is in the KDC or NN and will fix 
> itself without an RM restart?
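
As a rough sketch of the startup probe suggested above (assuming the standard 
FileSystem API; the retry and diagnostics policy here is only illustrative, not 
the attached patch):
{code:java}
// Minimal sketch: probe the default filesystem at startup so authentication
// problems surface immediately rather than at first job submission.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.io.IOException;

public class StartupHdfsProbe {
  public static void probe(Configuration conf) throws IOException {
    try {
      // Equivalent of "ls /": a cheap call that forces authentication.
      FileSystem.get(conf).listStatus(new Path("/"));
    } catch (IOException e) {
      // In the real fix, UGI debug logging would be raised before retrying.
      System.err.println("Cannot authenticate with HDFS: " + e);
      throw e;
    }
  }
}
{code}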



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5329) ReservationAgent enhancements required to support recurring reservations in the YARN ReservationSystem

2017-09-29 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5329:
--

Is this still on target for 2.9.0? If not, can we push this out to the next 
major release?

> ReservationAgent enhancements required to support recurring reservations in 
> the YARN ReservationSystem
> --
>
> Key: YARN-5329
> URL: https://issues.apache.org/jira/browse/YARN-5329
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Carlo Curino
>Priority: Blocker
> Attachments: YARN-5329.v0.patch, YARN-5329.v1.patch, 
> YARN-5329.v2.patch, YARN-5329.v3.patch
>
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to track the changes required 
> in ReservationAgent to accomplish it. Please refer to the design doc in the 
> parent JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7262) Add a hierarchy into the ZKRMStateStore for delegation token znodes to prevent jute buffer overflow

2017-09-29 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7262:
--

Is this still on target for 2.9.0? If not, can we push this out to the next 
major release?

> Add a hierarchy into the ZKRMStateStore for delegation token znodes to 
> prevent jute buffer overflow
> ---
>
> Key: YARN-7262
> URL: https://issues.apache.org/jira/browse/YARN-7262
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-7262.001.patch
>
>
> We've seen users running into a problem where the RM stores so many 
> delegation tokens in the {{ZKRMStateStore}} that the _listing_ of those 
> znodes is larger than the jute buffer. This is fine during normal operation, 
> but becomes a problem on a failover, because the RM will try to read in all 
> of the token znodes (i.e. call {{getChildren}} on the parent znode). This is 
> particularly bad because everything appears to be okay until a failover 
> occurs, at which point you end up with no active RMs.
> There was a similar problem with the YARN application data that was fixed in 
> YARN-2962 by adding a (configurable) hierarchy of znodes so the RM could pull 
> subchildren without overflowing the jute buffer (though it's off by default).
> We should add a hierarchy similar to that of YARN-2962, but for the 
> delegation token znodes.
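
For illustration, a hierarchy analogous to YARN-2962 could derive an 
intermediate znode from the token sequence number. The method and znode names 
below are hypothetical, not the patch's API:
{code:java}
// Illustrative sketch only: split a delegation token's sequence number so
// tokens fan out across intermediate znodes instead of sitting under one
// parent whose child listing can exceed the jute buffer.
public class TokenZNodePath {
  // e.g. seq 1234567, splitIndex 4
  //   -> .../RMDelegationToken_123/RMDelegationToken_1234567
  static String tokenPath(String root, long seq, int splitIndex) {
    String name = "RMDelegationToken_" + seq;
    int split = name.length() - splitIndex;
    return root + "/" + name.substring(0, split) + "/" + name;
  }

  public static void main(String[] args) {
    System.out.println(tokenPath("/rmstore/RMDTSecretManagerRoot", 1234567, 4));
  }
}
{code}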



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2919) Potential race between renew and cancel in DelegationTokenRenewer

2017-09-29 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-2919:
--

Is this still on target for 2.9.0? If not, can we push this out to the next 
major release?

> Potential race between renew and cancel in DelegationTokenRenewer 
> -
>
> Key: YARN-2919
> URL: https://issues.apache.org/jira/browse/YARN-2919
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Naganarasimha G R
>Priority: Critical
> Attachments: YARN-2919.002.patch, YARN-2919.003.patch, 
> YARN-2919.004.patch, YARN-2919.005.patch, YARN-2919.20141209-1.patch
>
>
> YARN-2874 fixes a deadlock in DelegationTokenRenewer, but there is still a 
> race that prevents an in-flight renewal from being interrupted by a cancel. 
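
One illustrative way to close such a race is to keep a handle to the in-flight 
renewal so a cancel can interrupt it. The sketch below is a generic pattern, 
not the actual DelegationTokenRenewer fix:
{code:java}
// Generic pattern: track each renewal's Future so cancel(tokenId) can
// interrupt a renewal that is still running.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class RenewCancelSketch {
  private final ExecutorService pool = Executors.newCachedThreadPool();
  private final Map<String, Future<?>> inFlight = new ConcurrentHashMap<>();

  void renew(String tokenId, Runnable renewal) {
    inFlight.put(tokenId, pool.submit(renewal));
  }

  void cancel(String tokenId) {
    Future<?> f = inFlight.remove(tokenId);
    if (f != null) {
      f.cancel(true); // interrupt the renewal if it is still running
    }
  }
}
{code}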



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3861) Add fav icon to YARN & MR daemons web UI

2017-09-29 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-3861:
--

Is this still on target for 2.9.0? If not, can we push this out to the next 
major release?

> Add fav icon to YARN & MR daemons web UI
> 
>
> Key: YARN-3861
> URL: https://issues.apache.org/jira/browse/YARN-3861
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp
>Reporter: Devaraj K
>Assignee: Devaraj K
>  Labels: oct16-easy
> Attachments: hadoop-fav.png, hadoop-fav-transparent.png, RM UI in 
> Chrome-Without Patch.png, RM UI in Chrome-With Patch.png, RM UI in IE-Without 
> Patch.png.png, RM UI in IE-With Patch.png, YARN-3861.patch
>
>
> Add fav icon image to all YARN & MR daemons web UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3514) Active directory usernames like domain\login cause YARN failures

2017-09-29 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-3514:
--

Is this still on target for 2.9.0? If not, can we push this out to the next 
major release?

> Active directory usernames like domain\login cause YARN failures
> 
>
> Key: YARN-3514
> URL: https://issues.apache.org/jira/browse/YARN-3514
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.2.0
> Environment: CentOS6
>Reporter: john lilley
>Priority: Minor
>  Labels: oct16-easy
> Attachments: YARN-3514.001.patch, YARN-3514.002.patch
>
>
> We have a 2.2.0 (Cloudera 5.3) cluster running on CentOS6 that is 
> Kerberos-enabled and uses an external AD domain controller for the KDC.  We 
> are able to authenticate, browse HDFS, etc.  However, YARN fails during 
> localization because it seems to get confused by the presence of a \ 
> character in the local user name.
> Our AD authentication on the nodes goes through sssd and is configured to 
> map AD users onto the form domain\username.  For example, our test user has a 
> Kerberos principal of hadoopu...@domain.com and that maps onto a CentOS user 
> "domain\hadoopuser".  We have no problem validating that user with PAM, 
> logging in as that user, su-ing to that user, etc.
> However, when we attempt to run a YARN application master, the localization 
> step fails when setting up the local cache directory for the AM.  The error 
> that comes out of the RM logs:
> 2015-04-17 12:47:09 INFO net.redpoint.yarnapp.Client[0]: monitorApplication: 
> ApplicationReport: appId=1, state=FAILED, progress=0.0, finalStatus=FAILED, 
> diagnostics='Application application_1429295486450_0001 failed 1 times due to 
> AM Container for appattempt_1429295486450_0001_01 exited with  exitCode: 
> -1000 due to: Application application_1429295486450_0001 initialization 
> failed (exitCode=255) with output: main : command provided 0
> main : user is DOMAIN\hadoopuser
> main : requested yarn user is domain\hadoopuser
> org.apache.hadoop.util.DiskChecker$DiskErrorException: Cannot create 
> directory: 
> /data/yarn/nm/usercache/domain%5Chadoopuser/appcache/application_1429295486450_0001/filecache/10
> at 
> org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:105)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.download(ContainerLocalizer.java:199)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:241)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.main(ContainerLocalizer.java:347)
> .Failing this attempt.. Failing the application.'
> However, when we look on the node launching the AM, we see this:
> [root@rpb-cdh-kerb-2 ~]# cd /data/yarn/nm/usercache
> [root@rpb-cdh-kerb-2 usercache]# ls -l
> drwxr-s--- 4 DOMAIN\hadoopuser yarn 4096 Apr 17 12:10 domain\hadoopuser
> There appears to be different treatment of the \ character in different 
> places.  Something creates the directory as "domain\hadoopuser" but something 
> else later attempts to use it as "domain%5Chadoopuser".  I'm not sure where 
> or why the URL escaping converts the \ to %5C or why this is not consistent.
> I should also mention, for the sake of completeness, our auth_to_local rule 
> is set up to map u...@domain.com to domain\user:
> RULE:[1:$1@$0](^.*@DOMAIN\.COM$)s/^(.*)@DOMAIN\.COM$/domain\\$1/g
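
The %5C mismatch can be reproduced in isolation: standard URL encoding turns 
the backslash into %5C, which matches the path seen in the error. A standalone 
demo, not YARN code:
{code:java}
// Small repro of the inconsistency described above: the backslash in
// "domain\hadoopuser" survives in one code path but is percent-encoded to
// %5C in another, so the two paths never refer to the same directory.
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class BackslashEscapeDemo {
  public static void main(String[] args) throws Exception {
    String user = "domain\\hadoopuser";
    String encoded = URLEncoder.encode(user, StandardCharsets.UTF_8.name());
    System.out.println("raw:     " + user);    // domain\hadoopuser
    System.out.println("encoded: " + encoded); // domain%5Chadoopuser
  }
}
{code}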



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3625) RollingLevelDBTimelineStore Incorrectly Forbids Related Entity in Same Put

2017-09-29 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-3625:
--

Is this still on target for 2.9.0? If not, can we push this out to the next 
major release?

> RollingLevelDBTimelineStore Incorrectly Forbids Related Entity in Same Put
> --
>
> Key: YARN-3625
> URL: https://issues.apache.org/jira/browse/YARN-3625
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>  Labels: oct16-medium
> Attachments: YARN-3625.1.patch, YARN-3625.2.patch
>
>
> RollingLevelDBTimelineStore batches all entities in the same put to improve 
> performance. However, this causes an error when relating to an entity in the 
> same put.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-574) PrivateLocalizer does not support parallel resource download via ContainerLocalizer

2017-09-29 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-574:
-

Is this still on target for 2.9.0? If not, can we push this out to the next 
major release?

> PrivateLocalizer does not support parallel resource download via 
> ContainerLocalizer
> ---
>
> Key: YARN-574
> URL: https://issues.apache.org/jira/browse/YARN-574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.6.0, 2.8.0, 2.7.1
>Reporter: Omkar Vinit Joshi
>Assignee: Ajith S
> Attachments: YARN-574.03.patch, YARN-574.04.patch, YARN-574.05.patch, 
> YARN-574.1.patch, YARN-574.2.patch
>
>
> At present, private resources are downloaded in parallel only if multiple 
> containers request the same resource; otherwise downloads are serial. 
> The protocol between PrivateLocalizer and ContainerLocalizer supports 
> multiple downloads, but this is not used and only one resource is sent for 
> download at a time.
> I think we can increase/ensure parallelism (even for a single container 
> requesting resources) for private/application resources by allowing multiple 
> downloads per ContainerLocalizer.
> Total parallelism before
> = number of threads allotted for PublicLocalizer [public resources] + number 
> of containers [private and application resources]
> Total parallelism after
> = number of threads allotted for PublicLocalizer [public resources] + number 
> of containers * max downloads per container [private and application resources]
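
As a sketch of the proposal (the pool size and names are assumptions, not the 
patch's API), each ContainerLocalizer could drive several downloads at once:
{code:java}
// Illustrative only: give each ContainerLocalizer a small pool so several
// private resources are fetched concurrently instead of strictly one at a time.
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelLocalizerSketch {
  private static final int MAX_DOWNLOADS_PER_CONTAINER = 4; // assumed config

  static void localize(List<Callable<Void>> downloads)
      throws InterruptedException {
    ExecutorService pool =
        Executors.newFixedThreadPool(MAX_DOWNLOADS_PER_CONTAINER);
    try {
      pool.invokeAll(downloads); // waits for all resources to finish
    } finally {
      pool.shutdown();
    }
  }
}
{code}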



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2681) Support bandwidth enforcement for containers while reading from HDFS

2017-09-29 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-2681:
--

Is this still on target for 2.9.0? If not, can we push this out to the next 
major release?

> Support bandwidth enforcement for containers while reading from HDFS
> 
>
> Key: YARN-2681
> URL: https://issues.apache.org/jira/browse/YARN-2681
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Affects Versions: 2.5.1
> Environment: Linux
>Reporter: Nam H. Do
> Attachments: Traffic Control Design.png, YARN-2681.001.patch, 
> YARN-2681.002.patch, YARN-2681.003.patch, YARN-2681.004.patch, 
> YARN-2681.005.patch, YARN-2681.patch
>
>
> To read/write data from HDFS on a data node, applications establish TCP/IP 
> connections with the datanode. HDFS reads can be controlled by setting up the 
> Linux Traffic Control (TC) subsystem on the data node to create filters on 
> the appropriate connections.
> The current cgroups net_cls concept cannot be applied on the node where the 
> container is launched, nor on the data node, since:
> -   TC handles outgoing bandwidth only, so it cannot be set on the container 
> node (an HDFS read is incoming data for the container)
> -   Since the HDFS data node is handled by only one process, it is not 
> possible to use net_cls to separate connections from different containers to 
> the datanode.
> Tasks:
> 1) Extend the Resource model to define a bandwidth enforcement rate
> 2) Monitor TCP/IP connections established by the container handling process 
> and its child processes
> 3) Set Linux Traffic Control rules on the data node based on address:port 
> pairs in order to enforce the bandwidth of outgoing data
> Concept: http://www.hit.bme.hu/~do/papers/EnforcementDesign.pdf
> Implementation: 
> http://www.hit.bme.hu/~dohoai/documents/HdfsTrafficControl.pdf
> http://www.hit.bme.hu/~dohoai/documents/HdfsTrafficControl_UML_diagram.png
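
For illustration, such a rule could be installed with tc's u32 filter, matching 
the container's address:port on the datanode's egress. The device, class ids, 
and helper below are assumptions, not the attached implementation:
{code:java}
// Illustrative sketch: direct packets heading back to one container's
// address:port into a rate-limited tc class configured elsewhere.
import java.io.IOException;

public class TcFilterSketch {
  static void limitConnection(String clientIp, int clientPort)
      throws IOException {
    new ProcessBuilder("tc", "filter", "add", "dev", "eth0",
        "protocol", "ip", "parent", "1:0", "prio", "1", "u32",
        "match", "ip", "dst", clientIp,
        "match", "ip", "dport", String.valueOf(clientPort), "0xffff",
        "flowid", "1:10")
        .inheritIO().start();
  }
}
{code}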



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-1564) add workflow YARN services

2017-09-29 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-1564:
--

Is this still on target for 2.9.0? If not, can we push this out to the next 
major release?

> add workflow YARN services
> --
>
> Key: YARN-1564
> URL: https://issues.apache.org/jira/browse/YARN-1564
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api, nodemanager, resourcemanager
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: oct16-hard
> Attachments: YARN-1564-001.patch, YARN-1564-002.patch, 
> YARN-1564-003.patch
>
>   Original Estimate: 24h
>  Time Spent: 48h
>  Remaining Estimate: 0h
>
> I've been using some alternative composite services to help build workflows 
> of process execution in a YARN AM.
> They and their tests could be moved into YARN for use by others - this would 
> make it easier to build aggregate services in an AM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2031) YARN Proxy model doesn't support REST APIs in AMs

2017-09-29 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-2031:
--

Is this still on target for 2.9.0? If not, can we push this out to the next 
major release?

> YARN Proxy model doesn't support REST APIs in AMs
> -
>
> Key: YARN-2031
> URL: https://issues.apache.org/jira/browse/YARN-2031
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: YARN-2031-002.patch, YARN-2031-003.patch, 
> YARN-2031-004.patch, YARN-2031-005.patch, YARN-2031.patch.001
>
>
> AMs can't support REST APIs because
> # the AM filter redirects all requests to the proxy with a 302 response (not 
> 307)
> # the proxy doesn't forward PUT/POST/DELETE verbs
> Either the AM filter needs to return 307 and the proxy needs to forward the 
> verbs, or the AM filter should not filter the REST portion of the web site.
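
A sketch of the first half of that fix (illustrative only, not the actual 
AmIpFilter change): respond with 307 Temporary Redirect so clients replay the 
original verb against the proxy instead of degrading to GET as they may on 302:
{code:java}
// Illustrative: 307 preserves the request method across the redirect.
import javax.servlet.http.HttpServletResponse;

public class RedirectSketch {
  static void redirectToProxy(HttpServletResponse resp, String proxyUrl) {
    resp.setStatus(HttpServletResponse.SC_TEMPORARY_REDIRECT); // 307
    resp.setHeader("Location", proxyUrl);
  }
}
{code}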



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2748) Upload logs in the sub-folders under the local log dir when aggregating logs

2017-09-29 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-2748:
--

Is this still on target for 2.9.0? If not, can we push this out to the next 
major release?

> Upload logs in the sub-folders under the local log dir when aggregating logs
> 
>
> Key: YARN-2748
> URL: https://issues.apache.org/jira/browse/YARN-2748
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: log-aggregation
>Affects Versions: 2.6.0
>Reporter: Zhijie Shen
>Assignee: Varun Saxena
> Attachments: YARN-2748.001.patch, YARN-2748.002.patch, 
> YARN-2748.03.patch, YARN-2748.04.patch
>
>
> YARN-2734 has a temporary fix that skips sub-folders to avoid an exception. 
> Ideally, if the app creates a sub-folder and puts its rolling logs there, we 
> need to upload those logs as well.
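
The aggregation-side change being suggested amounts to walking the container 
log directory recursively instead of listing only its top level. A minimal 
sketch, not the actual log-aggregation code:
{code:java}
// Collect every regular file under the container log dir, descending into
// sub-folders (e.g. rolling-log directories) instead of skipping them.
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class RecursiveLogListing {
  static List<File> collectLogs(File dir) {
    List<File> logs = new ArrayList<>();
    File[] entries = dir.listFiles();
    if (entries == null) {
      return logs;
    }
    for (File entry : entries) {
      if (entry.isDirectory()) {
        logs.addAll(collectLogs(entry)); // descend into sub-folders
      } else {
        logs.add(entry);
      }
    }
    return logs;
  }
}
{code}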



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7237) Cleanup usages of ResourceProfiles

2017-09-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186470#comment-16186470
 ] 

Hadoop QA commented on YARN-7237:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 13 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
11s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  0s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 36 new + 208 unchanged - 1 fixed = 244 total (was 209) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
50s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
50s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-server-web-proxy in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 51s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 55s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | 

[jira] [Updated] (YARN-5926) clean up registry code for java 7/8

2017-09-29 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5926:
--

Is this still on target for 2.9.0? If not, can we push this out to the next 
major release?

> clean up registry code for java 7/8
> ---
>
> Key: YARN-5926
> URL: https://issues.apache.org/jira/browse/YARN-5926
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: YARN-5926-001.patch
>
>
> Clean up the registry code to stop the java 7/8 warnings



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6916) Moving logging APIs over to slf4j in hadoop-yarn-server-common

2017-09-29 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6916:
--

Is this still on target for 2.9.0? If not, can we push this out to the next 
major release?

> Moving logging APIs over to slf4j in hadoop-yarn-server-common
> --
>
> Key: YARN-6916
> URL: https://issues.apache.org/jira/browse/YARN-6916
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: YARN-6712.01.patch, YARN-6916.002.patch, 
> YARN-6916.003.patch, YARN-6916.004.patch, YARN-6916.005.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6623) Add support to turn off launching privileged containers in the container-executor

2017-09-29 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6623:
--

Is this still on target for 2.9.0? If not, can we push this out to the next 
major release?

> Add support to turn off launching privileged containers in the 
> container-executor
> -
>
> Key: YARN-6623
> URL: https://issues.apache.org/jira/browse/YARN-6623
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: YARN-6623.001.patch, YARN-6623.002.patch, 
> YARN-6623.003.patch, YARN-6623.004.patch, YARN-6623.005.patch, 
> YARN-6623.006.patch, YARN-6623.007.patch, YARN-6623.008.patch, 
> YARN-6623.009.patch, YARN-6623.010.patch, YARN-6623.011.patch, 
> YARN-6623.012.patch, YARN-6623.013.patch
>
>
> Currently, launching privileged containers is controlled by the NM. We should 
> add a flag to the container-executor.cfg allowing admins to disable launching 
> privileged containers at the container-executor level.
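
For illustration, the kind of switch being proposed might look like the 
following in container-executor.cfg; the key name below is hypothetical, and 
the actual name and section are whatever the patch defines.
{noformat}
# Hypothetical key name -- the patch defines the real one.
docker.privileged-containers.enabled=false
{noformat}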



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6426) Compress ZK YARN keys to scale up (especially AppStateData)

2017-09-29 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6426:
--

Is this still on target for 2.9.0? If not, can we push this out to the next 
major release?

> Compress ZK YARN keys to scale up (especially AppStateData)
> --
>
> Key: YARN-6426
> URL: https://issues.apache.org/jira/browse/YARN-6426
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.0.0-alpha2
>Reporter: Roni Burd
>Assignee: Roni Burd
>  Labels: patch
> Attachments: zkcompression.patch
>
>
> ZK today stores the protobuf files uncompressed. This is not an issue except 
> that if a customer job has thousands of files, AppStateData will store the 
> user context as a string with multiple URLs and it is easy to get to 1MB or 
> more. 
> This can put unnecessary strain on ZK and make the process slow. 
> The proposal is to simply compress protobufs before sending them to ZK
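
The proposal boils down to wrapping the serialized bytes in a compression codec 
on write and unwrapping on read. A minimal gzip sketch, illustrative rather 
than the attached patch:
{code:java}
// Gzip serialized state before writing to ZooKeeper; gunzip after reading.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class ZkStateCompression {
  static byte[] compress(byte[] proto) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    try (GZIPOutputStream gz = new GZIPOutputStream(out)) {
      gz.write(proto);
    }
    return out.toByteArray();
  }

  static byte[] decompress(byte[] stored) throws IOException {
    try (GZIPInputStream gz =
        new GZIPInputStream(new ByteArrayInputStream(stored))) {
      ByteArrayOutputStream out = new ByteArrayOutputStream();
      byte[] buf = new byte[4096];
      int n;
      while ((n = gz.read(buf)) != -1) {
        out.write(buf, 0, n);
      }
      return out.toByteArray();
    }
  }
}
{code}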



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4988) Limit filter in ApplicationBaseProtocol#getApplications should return latest applications

2017-09-29 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-4988:
--

Is this still on target for 2.9.0? If not, can we push this out to the next 
major release?

> Limit filter in ApplicationBaseProtocol#getApplications should return latest 
> applications
> -
>
> Key: YARN-4988
> URL: https://issues.apache.org/jira/browse/YARN-4988
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: oct16-medium
> Attachments: YARN-4988-wip.patch
>
>
> Whenever the limit filter is used to get application reports via 
> ApplicationBaseProtocol#getApplications, the applications retrieved are not 
> the latest; the set returned is effectively random, driven by hashcode order. 
> The reason is that the RM maintains the apps in a map whose ordering depends 
> on the application id's hashcode. So if there are 10 applications, app-1 to 
> app-10, and the limit is 5, one would expect app-6 to app-10 to be retrieved, 
> but instead the first 5 apps in the map are returned - a random 5.
> I think the limit should retrieve the latest applications only.
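
What is being asked for amounts to ordering by submission time before applying 
the limit. A small sketch with hypothetical types, not the RM's actual code:
{code:java}
// Order candidates by submission time descending, then apply the limit, so
// "limit=5" returns the 5 most recent applications rather than whichever 5
// happen to come first in hashcode order.
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class LatestAppsFirst {
  static class App {
    final String id;
    final long submitTime;
    App(String id, long submitTime) {
      this.id = id;
      this.submitTime = submitTime;
    }
  }

  static List<App> latest(List<App> apps, long limit) {
    return apps.stream()
        .sorted(Comparator.comparingLong((App a) -> a.submitTime).reversed())
        .limit(limit)
        .collect(Collectors.toList());
  }
}
{code}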



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7269) Tracking URL in the app state does not get redirected to ApplicationMaster for Running applications

2017-09-29 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7269:
-
Attachment: YARN-7269.002.patch

Thanks [~yufeigu] for the review! Addressed all comments and attached the ver.2 patch.

> Tracking URL in the app state does not get redirected to ApplicationMaster 
> for Running applications
> ---
>
> Key: YARN-7269
> URL: https://issues.apache.org/jira/browse/YARN-7269
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Tan, Wangda
>Priority: Critical
> Attachments: YARN-7269.001.patch, YARN-7269.002.patch
>
>
> Tracking URL in the app state does not get redirected to ApplicationMaster 
> for Running applications. It gives following exception
> {code}
>  org.mortbay.log: /ws/v1/mapreduce/info
> javax.servlet.ServletException: Could not determine the proxy server for 
> redirection
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.findRedirectUrl(AmIpFilter.java:199)
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:141)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1426)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
>   at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>   at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>   at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>   at org.mortbay.jetty.Server.handle(Server.java:326)
>   at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>   at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
>   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
>   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
>   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>   at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
>   at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


