[jira] [Commented] (YARN-11290) Improve Query Condition of FederationStateStore#getApplicationsHomeSubCluster

2022-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17599812#comment-17599812
 ] 

ASF GitHub Bot commented on YARN-11290:
---

slfan1989 commented on code in PR #4846:
URL: https://github.com/apache/hadoop/pull/4846#discussion_r962108067


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java:
##
@@ -255,14 +261,33 @@ public GetApplicationHomeSubClusterResponse getApplicationHomeSubCluster(
   @Override
   public GetApplicationsHomeSubClusterResponse getApplicationsHomeSubCluster(
       GetApplicationsHomeSubClusterRequest request) throws YarnException {
-    List<ApplicationHomeSubCluster> result =
-        new ArrayList<ApplicationHomeSubCluster>();
-    for (Entry<ApplicationId, SubClusterId> e : applications.entrySet()) {
-      result
-          .add(ApplicationHomeSubCluster.newInstance(e.getKey(), e.getValue()));
+
+    if (request == null) {
+      throw new YarnException("Missing getApplicationsHomeSubCluster request");
+    }
+
+    List<ApplicationHomeSubCluster> result = new ArrayList<>();
+    List<ApplicationId> applicationIdList =
+        applications.keySet().stream().collect(Collectors.toList());
+
+    SubClusterId requestSubClusterId = request.getSubClusterId();
+    int appCount = 0;
+    for (int i = 0; i < applicationIdList.size(); i++) {
+      if (appCount >= maxAppsInStateStore) {

Review Comment:
   Thanks for your suggestion, I will modify the code!





> Improve Query Condition of FederationStateStore#getApplicationsHomeSubCluster
> -
>
> Key: YARN-11290
> URL: https://issues.apache.org/jira/browse/YARN-11290
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: federation
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>
> 1. Currently this interface returns the apps of all sub-clusters; add a limit 
> on the number of apps returned per query, capped at 1000 apps.
> 2. Allow querying apps by a specified HomeSubCluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-11290) Improve Query Condition of FederationStateStore#getApplicationsHomeSubCluster

2022-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17599811#comment-17599811
 ] 

ASF GitHub Bot commented on YARN-11290:
---

slfan1989 commented on code in PR #4846:
URL: https://github.com/apache/hadoop/pull/4846#discussion_r962108019


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java:
##
@@ -255,14 +261,33 @@ public GetApplicationHomeSubClusterResponse getApplicationHomeSubCluster(
   @Override
   public GetApplicationsHomeSubClusterResponse getApplicationsHomeSubCluster(
       GetApplicationsHomeSubClusterRequest request) throws YarnException {
-    List<ApplicationHomeSubCluster> result =
-        new ArrayList<ApplicationHomeSubCluster>();
-    for (Entry<ApplicationId, SubClusterId> e : applications.entrySet()) {
-      result
-          .add(ApplicationHomeSubCluster.newInstance(e.getKey(), e.getValue()));
+
+    if (request == null) {
+      throw new YarnException("Missing getApplicationsHomeSubCluster request");
+    }
+
+    List<ApplicationHomeSubCluster> result = new ArrayList<>();
+    List<ApplicationId> applicationIdList =
+        applications.keySet().stream().collect(Collectors.toList());
+
+    SubClusterId requestSubClusterId = request.getSubClusterId();
+    int appCount = 0;
+    for (int i = 0; i < applicationIdList.size(); i++) {
+      if (appCount >= maxAppsInStateStore) {
+        break;
+      }
+      ApplicationId applicationId = applicationIdList.get(i);
+      SubClusterId subClusterId = applications.get(applicationId);
+      // If the requestSubClusterId that needs to be filtered in the request
+      // is inconsistent with the SubClusterId in the data, continue to the next round
+      if (requestSubClusterId != null && !requestSubClusterId.equals(subClusterId)) {
+        continue;
+      }
+      result.add(ApplicationHomeSubCluster.newInstance(applicationId, subClusterId));
+      appCount++;
     }

-    GetApplicationsHomeSubClusterResponse.newInstance(result);

Review Comment:
   This line appears to be leftover code with no practical effect: the response it 
creates is simply discarded. The response below is constructed and returned directly.
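
   For reference, a minimal sketch of how the method can end once that stray 
statement is removed, using only the names already visible in the diff:
{code:java}
// Sketch: construct the response from the filtered result exactly once
// and return it, instead of leaving a discarded newInstance() call behind.
return GetApplicationsHomeSubClusterResponse.newInstance(result);
{code}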





> Improve Query Condition of FederationStateStore#getApplicationsHomeSubCluster
> -
>
> Key: YARN-11290
> URL: https://issues.apache.org/jira/browse/YARN-11290
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: federation
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>
> 1. Currently this interface returns the apps of all sub-clusters; add a limit 
> on the number of apps returned per query, capped at 1000 apps.
> 2. Allow querying apps by a specified HomeSubCluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-11286) Make AsyncDispatcher#printEventDetailsExecutor thread pool parameter configurable

2022-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17599810#comment-17599810
 ] 

ASF GitHub Bot commented on YARN-11286:
---

slfan1989 commented on PR #4824:
URL: https://github.com/apache/hadoop/pull/4824#issuecomment-1236054148

   @ayushtkn Can you help review this pr? Thank you very much!




> Make AsyncDispatcher#printEventDetailsExecutor thread pool parameter 
> configurable
> -
>
> Key: YARN-11286
> URL: https://issues.apache.org/jira/browse/YARN-11286
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>
> AsyncDispatcher#printEventDetailsExecutor thread pool parameters are 
> hard-coded; extract these hard-coded parameters into the configuration file so 
> they can be tuned.
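
A rough illustration of the idea, as a sketch only: the property names and default
values below are hypothetical, invented for this example; the real keys added by
YARN-11286 would live in YarnConfiguration and may differ.
{code:java}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;

public final class PrintEventDetailsPoolFactory {

  // Hypothetical property names, used only for illustration.
  private static final String CORE_SIZE_KEY =
      "yarn.dispatcher.print-events-info.threadpool.core-size";
  private static final String MAX_SIZE_KEY =
      "yarn.dispatcher.print-events-info.threadpool.max-size";
  private static final String KEEPALIVE_KEY =
      "yarn.dispatcher.print-events-info.threadpool.keep-alive-seconds";

  private PrintEventDetailsPoolFactory() {
  }

  /** Build the executor from configuration instead of hard-coded constants. */
  public static ThreadPoolExecutor create(Configuration conf) {
    int coreSize = conf.getInt(CORE_SIZE_KEY, 5);      // placeholder default
    int maxSize = conf.getInt(MAX_SIZE_KEY, 5);        // placeholder default
    long keepAlive = conf.getLong(KEEPALIVE_KEY, 10L); // placeholder default
    return new ThreadPoolExecutor(coreSize, maxSize, keepAlive, TimeUnit.SECONDS,
        new LinkedBlockingQueue<>());
  }
}
{code}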



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-11290) Improve Query Condition of FederationStateStore#getApplicationsHomeSubCluster

2022-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17599736#comment-17599736
 ] 

ASF GitHub Bot commented on YARN-11290:
---

slfan1989 commented on code in PR #4846:
URL: https://github.com/apache/hadoop/pull/4846#discussion_r962057944


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/ZookeeperFederationStateStore.java:
##
@@ -255,23 +260,41 @@ public GetApplicationHomeSubClusterResponse getApplicationHomeSubCluster(
   @Override
   public GetApplicationsHomeSubClusterResponse getApplicationsHomeSubCluster(
       GetApplicationsHomeSubClusterRequest request) throws YarnException {
+
+    if (request == null) {
+      throw new YarnException("Missing getApplicationsHomeSubCluster request");
+    }
+
     long start = clock.getTime();
     List<ApplicationHomeSubCluster> result = new ArrayList<>();
+    SubClusterId requestSubClusterId = request.getSubClusterId();
+    int appCount = 0;

     try {
-      for (String child : zkManager.getChildren(appsZNode)) {
+      List<String> childrens = zkManager.getChildren(appsZNode);
+      for (String child : childrens) {
+        if (appCount >= maxAppsInStateStore) {
+          break;
+        }
         ApplicationId appId = ApplicationId.fromString(child);
         SubClusterId homeSubCluster = getApp(appId);
-        ApplicationHomeSubCluster app =
-            ApplicationHomeSubCluster.newInstance(appId, homeSubCluster);
+        // If the requestSubClusterId that needs to be filtered in the request
+        // is inconsistent with the SubClusterId in the data, continue to the next round
+        if (requestSubClusterId != null && !requestSubClusterId.equals(homeSubCluster)) {
+          continue;

Review Comment:
   I will fix it.





> Improve Query Condition of FederationStateStore#getApplicationsHomeSubCluster
> -
>
> Key: YARN-11290
> URL: https://issues.apache.org/jira/browse/YARN-11290
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: federation
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>
> 1. Currently this interface returns the apps of all sub-clusters; add a limit 
> on the number of apps returned per query, capped at 1000 apps.
> 2. Allow querying apps by a specified HomeSubCluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-11290) Improve Query Condition of FederationStateStore#getApplicationsHomeSubCluster

2022-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17599735#comment-17599735
 ] 

ASF GitHub Bot commented on YARN-11290:
---

slfan1989 commented on code in PR #4846:
URL: https://github.com/apache/hadoop/pull/4846#discussion_r962057774


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/SQLFederationStateStore.java:
##
@@ -726,13 +731,23 @@ public GetApplicationHomeSubClusterResponse getApplicationHomeSubCluster(
   @Override
   public GetApplicationsHomeSubClusterResponse getApplicationsHomeSubCluster(
       GetApplicationsHomeSubClusterRequest request) throws YarnException {
+
+    if (request == null) {
+      throw new YarnException("Missing getApplicationsHomeSubCluster request");
+    }
+
     CallableStatement cstmt = null;
     ResultSet rs = null;
-    List<ApplicationHomeSubCluster> appsHomeSubClusters =
-        new ArrayList<ApplicationHomeSubCluster>();
+    List<ApplicationHomeSubCluster> appsHomeSubClusters = new ArrayList<>();

     try {
       cstmt = getCallableStatement(CALL_SP_GET_APPLICATIONS_HOME_SUBCLUSTER);
+      cstmt.setInt("limit_IN", maxAppsInStateStore);
+      String homeSubClusterIN = null;;

Review Comment:
   I will fix it.





> Improve Query Condition of FederationStateStore#getApplicationsHomeSubCluster
> -
>
> Key: YARN-11290
> URL: https://issues.apache.org/jira/browse/YARN-11290
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: federation
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>
> 1. Currently this interface returns the apps of all sub-clusters; add a limit 
> on the number of apps returned per query, capped at 1000 apps.
> 2. Allow querying apps by a specified HomeSubCluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-11273) [RESERVATION] Federation StateStore: Support storage/retrieval of Reservations With SQL

2022-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17599734#comment-17599734
 ] 

ASF GitHub Bot commented on YARN-11273:
---

slfan1989 commented on PR #4817:
URL: https://github.com/apache/hadoop/pull/4817#issuecomment-1235964512

   @goiri Thanks for your help reviewing the code!




> [RESERVATION] Federation StateStore: Support storage/retrieval of 
> Reservations With SQL
> ---
>
> Key: YARN-11273
> URL: https://issues.apache.org/jira/browse/YARN-11273
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-11289) [Federation] Improve NM FederationInterceptor removeAppFromRegistry

2022-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17599733#comment-17599733
 ] 

ASF GitHub Bot commented on YARN-11289:
---

slfan1989 commented on PR #4836:
URL: https://github.com/apache/hadoop/pull/4836#issuecomment-1235963969

   @goiri Thank you very much for helping to review the code!




> [Federation] Improve NM FederationInterceptor removeAppFromRegistry
> ---
>
> Key: YARN-11289
> URL: https://issues.apache.org/jira/browse/YARN-11289
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: federation, nodemanager
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
>
> [Federation] Improve NM FederationInterceptor removeAppFromRegistry
> 1. FederationInterceptor#finishApplicationMaster needs to check 
> getFinalApplicationStatus and clean up when the status is SUCCESS (see the 
> sketch below).
> 2. When FederationInterceptor#shutdown is called, perform cleanup on the 
> Application.
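
A minimal sketch of the rule in item 1; the request/status accessors are the
standard YARN API, while the registry-removal helper named in the JIRA title is
left out because its exact signature is not shown in this mail.
{code:java}
import org.apache.hadoop.yarn.api.protocolrecords.FinishApplicationMasterRequest;
import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;

/** Sketch only: decide whether the app's registry entries should be cleaned up. */
final class FinishCleanupSketch {
  static boolean shouldCleanUpRegistry(FinishApplicationMasterRequest request) {
    // Only remove the app's entries from the registry once the AM reports a
    // successful final status (FinalApplicationStatus.SUCCEEDED).
    return request.getFinalApplicationStatus() == FinalApplicationStatus.SUCCEEDED;
  }
}
{code}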



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-11290) Improve Query Condition of FederationStateStore#getApplicationsHomeSubCluster

2022-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17599706#comment-17599706
 ] 

ASF GitHub Bot commented on YARN-11290:
---

goiri commented on code in PR #4846:
URL: https://github.com/apache/hadoop/pull/4846#discussion_r961990524


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java:
##
@@ -255,14 +261,33 @@ public GetApplicationHomeSubClusterResponse getApplicationHomeSubCluster(
   @Override
   public GetApplicationsHomeSubClusterResponse getApplicationsHomeSubCluster(
       GetApplicationsHomeSubClusterRequest request) throws YarnException {
-    List<ApplicationHomeSubCluster> result =
-        new ArrayList<ApplicationHomeSubCluster>();
-    for (Entry<ApplicationId, SubClusterId> e : applications.entrySet()) {
-      result
-          .add(ApplicationHomeSubCluster.newInstance(e.getKey(), e.getValue()));
+
+    if (request == null) {
+      throw new YarnException("Missing getApplicationsHomeSubCluster request");
+    }
+
+    List<ApplicationHomeSubCluster> result = new ArrayList<>();
+    List<ApplicationId> applicationIdList =
+        applications.keySet().stream().collect(Collectors.toList());
+
+    SubClusterId requestSubClusterId = request.getSubClusterId();
+    int appCount = 0;
+    for (int i = 0; i < applicationIdList.size(); i++) {
+      if (appCount >= maxAppsInStateStore) {

Review Comment:
   Move this to the for condition.
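
   Roughly what the suggestion amounts to, using the variables already shown in 
the diff (a sketch, not the final patch):
{code:java}
// Sketch: fold the cap into the loop condition so the loop stops as soon as
// either the id list is exhausted or maxAppsInStateStore entries were added.
for (int i = 0; i < applicationIdList.size() && appCount < maxAppsInStateStore; i++) {
  ApplicationId applicationId = applicationIdList.get(i);
  SubClusterId subClusterId = applications.get(applicationId);
  if (requestSubClusterId != null && !requestSubClusterId.equals(subClusterId)) {
    continue;
  }
  result.add(ApplicationHomeSubCluster.newInstance(applicationId, subClusterId));
  appCount++;
}
{code}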



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/ZookeeperFederationStateStore.java:
##
@@ -255,23 +260,41 @@ public GetApplicationHomeSubClusterResponse getApplicationHomeSubCluster(
   @Override
   public GetApplicationsHomeSubClusterResponse getApplicationsHomeSubCluster(
       GetApplicationsHomeSubClusterRequest request) throws YarnException {
+
+    if (request == null) {
+      throw new YarnException("Missing getApplicationsHomeSubCluster request");
+    }
+
     long start = clock.getTime();
     List<ApplicationHomeSubCluster> result = new ArrayList<>();
+    SubClusterId requestSubClusterId = request.getSubClusterId();
+    int appCount = 0;

     try {
-      for (String child : zkManager.getChildren(appsZNode)) {
+      List<String> childrens = zkManager.getChildren(appsZNode);
+      for (String child : childrens) {
+        if (appCount >= maxAppsInStateStore) {
+          break;
+        }
         ApplicationId appId = ApplicationId.fromString(child);
         SubClusterId homeSubCluster = getApp(appId);
-        ApplicationHomeSubCluster app =
-            ApplicationHomeSubCluster.newInstance(appId, homeSubCluster);
+        // If the requestSubClusterId that needs to be filtered in the request
+        // is inconsistent with the SubClusterId in the data, continue to the next round
+        if (requestSubClusterId != null && !requestSubClusterId.equals(homeSubCluster)) {
+          continue;

Review Comment:
   reverse the if
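
   A sketch of the reversed condition, assuming the loop goes on to add the entry 
and bump appCount as in the in-memory store:
{code:java}
// Sketch: add the entry only when no sub-cluster filter was given or the
// entry matches it, rather than skipping mismatches with continue.
if (requestSubClusterId == null || requestSubClusterId.equals(homeSubCluster)) {
  result.add(ApplicationHomeSubCluster.newInstance(appId, homeSubCluster));
  appCount++;
}
{code}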



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/SQLFederationStateStore.java:
##
@@ -726,13 +731,23 @@ public GetApplicationHomeSubClusterResponse getApplicationHomeSubCluster(
   @Override
   public GetApplicationsHomeSubClusterResponse getApplicationsHomeSubCluster(
       GetApplicationsHomeSubClusterRequest request) throws YarnException {
+
+    if (request == null) {
+      throw new YarnException("Missing getApplicationsHomeSubCluster request");
+    }
+
     CallableStatement cstmt = null;
     ResultSet rs = null;
-    List<ApplicationHomeSubCluster> appsHomeSubClusters =
-        new ArrayList<ApplicationHomeSubCluster>();
+    List<ApplicationHomeSubCluster> appsHomeSubClusters = new ArrayList<>();

     try {
       cstmt = getCallableStatement(CALL_SP_GET_APPLICATIONS_HOME_SUBCLUSTER);
+      cstmt.setInt("limit_IN", maxAppsInStateStore);
+      String homeSubClusterIN = null;;

Review Comment:
   Extra ;



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java:
##
@@ -255,14 +261,33 @@ public GetApplicationHomeSubClusterResponse getApplicationHomeSubCluster(
   @Override
   public GetApplicationsHomeSubClusterResponse getApplicationsHomeSubCluster(
       GetApplicationsHomeSubClusterRequest request) throws YarnException {
-    List<ApplicationHomeSubCluster> result =
-        new ArrayList<ApplicationHomeSubCluster>();
-    for (Entry<ApplicationId, SubClusterId> e : applications.entrySet()) {
-      result
-          .add(ApplicationHomeSubCluster.newInstance(e.getKey(), e.getValue()));
+
+    if (request == null) {
+      throw new YarnException("Missing getApplicationsHomeSubCluster request");
+    }
+
+    List<ApplicationHomeSubCluster> result = new ArrayList<>();
+    List<ApplicationId> applicationIdList =
+        applications.keySet().stream().collect(Collectors.toList());
+
+    SubClusterId requestSubClusterId =

[jira] [Commented] (YARN-11290) Improve Query Condition of FederationStateStore#getApplicationsHomeSubCluster

2022-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17599675#comment-17599675
 ] 

ASF GitHub Bot commented on YARN-11290:
---

hadoop-yetus commented on PR #4846:
URL: https://github.com/apache/hadoop/pull/4846#issuecomment-1235812702

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 29s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 35s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   8m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   2m  6s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   7m  4s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   4m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |  15m 38s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 41s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   6m  6s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m  9s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  cc  |   9m  9s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   9m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  cc  |   8m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   8m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 46s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4846/1/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn-project/hadoop-yarn: The patch generated 6 new + 165 unchanged 
- 0 fixed = 171 total (was 165)  |
   | +1 :green_heart: |  mvnsite  |   6m 42s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   4m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   3m 45s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |  16m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 16s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 238m 45s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4846/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn in the patch passed.  |
   | -1 :x: |  unit  |   1m 33s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4846/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt)
 |  hadoop-yarn-api in the patch passed.  |
   | +1 :green_heart: |  unit  |   3m 28s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 15s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 442m 18s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
   |   | hadoop.yarn.conf.TestYarnConfigurationFields |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 

[jira] [Commented] (YARN-11289) [Federation] Improve NM FederationInterceptor removeAppFromRegistry

2022-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17599652#comment-17599652
 ] 

ASF GitHub Bot commented on YARN-11289:
---

goiri merged PR #4836:
URL: https://github.com/apache/hadoop/pull/4836




> [Federation] Improve NM FederationInterceptor removeAppFromRegistry
> ---
>
> Key: YARN-11289
> URL: https://issues.apache.org/jira/browse/YARN-11289
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: federation, nodemanager
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
>
> [Federation] Improve NM FederationInterceptor removeAppFromRegistry
> 1. FederationInterceptor#finishApplicationMaster needs to check 
> getFinalApplicationStatus and clean up when the status is SUCCESS.
> 2. When FederationInterceptor#shutdown is called, perform cleanup on the 
> Application.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-11273) [RESERVATION] Federation StateStore: Support storage/retrieval of Reservations With SQL

2022-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17599651#comment-17599651
 ] 

ASF GitHub Bot commented on YARN-11273:
---

goiri merged PR #4817:
URL: https://github.com/apache/hadoop/pull/4817




> [RESERVATION] Federation StateStore: Support storage/retrieval of 
> Reservations With SQL
> ---
>
> Key: YARN-11273
> URL: https://issues.apache.org/jira/browse/YARN-11273
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-11273) [RESERVATION] Federation StateStore: Support storage/retrieval of Reservations With SQL

2022-09-02 Thread Jira


 [ 
https://issues.apache.org/jira/browse/YARN-11273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved YARN-11273.

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> [RESERVATION] Federation StateStore: Support storage/retrieval of 
> Reservations With SQL
> ---
>
> Key: YARN-11273
> URL: https://issues.apache.org/jira/browse/YARN-11273
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-11273) [RESERVATION] Federation StateStore: Support storage/retrieval of Reservations With SQL

2022-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17599649#comment-17599649
 ] 

ASF GitHub Bot commented on YARN-11273:
---

goiri commented on PR #4817:
URL: https://github.com/apache/hadoop/pull/4817#issuecomment-1235756082

   The tests seem to run properly:
   
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4817/15/testReport/org.apache.hadoop.yarn.server.federation.store.impl/TestSQLFederationStateStore/




> [RESERVATION] Federation StateStore: Support storage/retrieval of 
> Reservations With SQL
> ---
>
> Key: YARN-11273
> URL: https://issues.apache.org/jira/browse/YARN-11273
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7614) [RESERVATION] Support ListReservation APIs in Federation Router

2022-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-7614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17599648#comment-17599648
 ] 

ASF GitHub Bot commented on YARN-7614:
--

goiri commented on code in PR #4843:
URL: https://github.com/apache/hadoop/pull/4843#discussion_r961885232


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/MockDefaultRequestInterceptorREST.java:
##
@@ -788,4 +819,67 @@ public AppActivitiesInfo getAppActivities(
 
 return appActivitiesInfo;
   }
+
+  @Override
+  public Response listReservation(String queue, String reservationId, long 
startTime, long endTime,
+  boolean includeResourceAllocations, HttpServletRequest hsr) throws 
Exception {
+
+if (!isRunning) {
+  throw new RuntimeException("RM is stopped");
+}
+
+if (!StringUtils.equals(queue, QUEUE_DEDICATED_FULL)) {
+  throw new RuntimeException("The specified queue: " + queue +
+  " is not managed by reservation system." +
+  " Please try again with a valid reservable queue.");
+}
+
+ReservationId reservationID = 
ReservationId.parseReservationId(reservationId);
+ReservationSystem reservationSystem = mockRM.getReservationSystem();
+reservationSystem.synchronizePlan(QUEUE_DEDICATED_FULL, true);
+
+// Generate reserved resources
+ClientRMService clientService = mockRM.getClientRMService();
+long arrival = Time.now();
+long duration = 6;
+long deadline = (long) (arrival + 1.05 * duration);
+ReservationSubmissionRequest submissionRequest =
+
ReservationSystemTestUtil.createSimpleReservationRequest(reservationID, 4,
+arrival, deadline, duration);

Review Comment:
   The indentation is not correct.



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/FederationInterceptorREST.java:
##
@@ -1808,6 +1829,32 @@ private SubClusterInfo getHomeSubClusterInfoByAppId(String appId)
     throw new YarnException("Unable to get subCluster by applicationId = " + appId);
   }

+  /**
+   * get the HomeSubCluster according to ReservationId.
+   *
+   * @param resId reservationId
+   * @return HomeSubCluster
+   * @throws YarnException on failure
+   */
+  private SubClusterInfo getHomeSubClusterInfoByReservationId(String resId)
+      throws YarnException {
+    SubClusterInfo subClusterInfo = null;

Review Comment:
   Declare it where we use it or even just return.
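
   A sketch of the "just return" variant; the lookup calls below are hypothetical 
placeholders, since the real body of the helper is not shown in this diff:
{code:java}
// Sketch only: fetch the value where it is needed and return it directly,
// instead of pre-declaring subClusterInfo as null and assigning it later.
private SubClusterInfo getHomeSubClusterInfoByReservationId(String resId)
    throws YarnException {
  SubClusterId homeSubClusterId = lookupReservationHomeSubCluster(resId); // hypothetical helper
  if (homeSubClusterId == null) {
    throw new YarnException("Unable to get subCluster by reservationId = " + resId);
  }
  return federationFacade.getSubCluster(homeSubClusterId); // assumed facade accessor
}
{code}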



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/MockDefaultRequestInterceptorREST.java:
##
@@ -788,4 +819,67 @@ public AppActivitiesInfo getAppActivities(

     return appActivitiesInfo;
   }
+
+  @Override
+  public Response listReservation(String queue, String reservationId, long startTime, long endTime,
+      boolean includeResourceAllocations, HttpServletRequest hsr) throws Exception {
+
+    if (!isRunning) {
+      throw new RuntimeException("RM is stopped");
+    }
+
+    if (!StringUtils.equals(queue, QUEUE_DEDICATED_FULL)) {
+      throw new RuntimeException("The specified queue: " + queue +
+          " is not managed by reservation system." +
+          " Please try again with a valid reservable queue.");
+    }
+
+    ReservationId reservationID = ReservationId.parseReservationId(reservationId);
+    ReservationSystem reservationSystem = mockRM.getReservationSystem();
+    reservationSystem.synchronizePlan(QUEUE_DEDICATED_FULL, true);
+
+    // Generate reserved resources
+    ClientRMService clientService = mockRM.getClientRMService();
+    long arrival = Time.now();
+    long duration = 6;

Review Comment:
   We probably want to have reasons for these constants even though it is just a 
mock.





> [RESERVATION] Support ListReservation APIs in Federation Router
> ---
>
> Key: YARN-7614
> URL: https://issues.apache.org/jira/browse/YARN-7614
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation, reservation system
>Reporter: Carlo Curino
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-11284) [Federation] Improve UnmanagedAMPoolManager WithoutBlock ServiceStop

2022-09-02 Thread Jira


 [ 
https://issues.apache.org/jira/browse/YARN-11284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved YARN-11284.

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> [Federation] Improve UnmanagedAMPoolManager WithoutBlock ServiceStop
> 
>
> Key: YARN-11284
> URL: https://issues.apache.org/jira/browse/YARN-11284
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: federation
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> There is a TODO in UnmanagedAMPoolManager#serviceStop:
> {code:java}
> TODO: move waiting for the kill to finish into a separate thread, without 
> blocking the serviceStop. {code}
> I use a separate thread for this work, so it no longer blocks serviceStop.
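
A sketch of the idea under stated assumptions: the executor name and the kill
callback below are illustrative only, not the actual UnmanagedAMPoolManager code.
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Sketch of the non-blocking stop idea; names here are illustrative only. */
public class NonBlockingStopSketch {

  private final ExecutorService finisher = Executors.newSingleThreadExecutor();

  /** Called from serviceStop(): schedule the UAM kill instead of waiting for it. */
  public void stopWithoutBlocking(Runnable killAllUnmanagedAMs) {
    finisher.submit(killAllUnmanagedAMs); // returns immediately
    finisher.shutdown();                  // accept no more work; the kill still runs
  }
}
{code}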



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-11284) [Federation] Improve UnmanagedAMPoolManager WithoutBlock ServiceStop

2022-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17599642#comment-17599642
 ] 

ASF GitHub Bot commented on YARN-11284:
---

goiri merged PR #4814:
URL: https://github.com/apache/hadoop/pull/4814




> [Federation] Improve UnmanagedAMPoolManager WithoutBlock ServiceStop
> 
>
> Key: YARN-11284
> URL: https://issues.apache.org/jira/browse/YARN-11284
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: federation
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
>
> There is a TODO in UnmanagedAMPoolManager#serviceStop:
> {code:java}
> TODO: move waiting for the kill to finish into a separate thread, without 
> blocking the serviceStop. {code}
> I use a separate thread for this work, so it no longer blocks serviceStop.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-6667) Handle containerId duplicate without failing the heartbeat in Federation Interceptor

2022-09-02 Thread Jira


 [ 
https://issues.apache.org/jira/browse/YARN-6667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved YARN-6667.
---
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Handle containerId duplicate without failing the heartbeat in Federation 
> Interceptor
> 
>
> Key: YARN-6667
> URL: https://issues.apache.org/jira/browse/YARN-6667
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> In practice, the probability of this happening is very low. It can only be 
> caused by a master-slave failover of YARN combined with a wrong Epoch 
> parameter configuration.
> We will try to tolerate this situation and let the Application keep running 
> as much as possible, using the following measures:
> 1. Select a node whose heartbeat has not timed out for allocation, and at the 
> same time require the node to be in the RUNNING state.
> 2. If the heartbeats of both RMs have not timed out, and both are in the 
> RUNNING state, select the previously allocated RM for Container processing.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6667) Handle containerId duplicate without failing the heartbeat in Federation Interceptor

2022-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-6667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17599640#comment-17599640
 ] 

ASF GitHub Bot commented on YARN-6667:
--

goiri merged PR #4810:
URL: https://github.com/apache/hadoop/pull/4810




> Handle containerId duplicate without failing the heartbeat in Federation 
> Interceptor
> 
>
> Key: YARN-6667
> URL: https://issues.apache.org/jira/browse/YARN-6667
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>
> In practice, the probability of this happening is very low. It can only be 
> caused by a master-slave failover of YARN combined with a wrong Epoch 
> parameter configuration.
> We will try to tolerate this situation and let the Application keep running 
> as much as possible, using the following measures:
> 1. Select a node whose heartbeat has not timed out for allocation, and at the 
> same time require the node to be in the RUNNING state.
> 2. If the heartbeats of both RMs have not timed out, and both are in the 
> RUNNING state, select the previously allocated RM for Container processing.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-11273) [RESERVATION] Federation StateStore: Support storage/retrieval of Reservations With SQL

2022-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17599562#comment-17599562
 ] 

ASF GitHub Bot commented on YARN-11273:
---

hadoop-yetus commented on PR #4817:
URL: https://github.com/apache/hadoop/pull/4817#issuecomment-1235640214

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   2m 27s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 22s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  11m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   9m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   2m  3s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   6m 37s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   3m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |  15m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 59s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  21m 22s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 45s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  10m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  10m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   9m 22s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 52s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   6m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   3m 59s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   3m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |  15m  7s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 15s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 240m 19s |  |  hadoop-yarn in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 31s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 16s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 443m 38s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4817/15/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4817 |
   | Optional Tests | dupname asflicense codespell detsecrets compile javac 
javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle |
   | uname | Linux 9df9fb1b2c70 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6cfb79be148605a2437bdef9346810bbc4dddff0 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4817/15/testReport/ |
   | Max. process+thread count 

[jira] [Commented] (YARN-11273) [RESERVATION] Federation StateStore: Support storage/retrieval of Reservations With SQL

2022-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17599560#comment-17599560
 ] 

ASF GitHub Bot commented on YARN-11273:
---

hadoop-yetus commented on PR #4817:
URL: https://github.com/apache/hadoop/pull/4817#issuecomment-1235638863

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  13m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 33s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 23s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  11m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   9m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   2m  5s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   6m 36s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   3m 49s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |  15m 23s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 58s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  21m 22s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  10m  9s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  10m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   9m 15s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 51s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   6m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   3m 57s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   3m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |  15m 17s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 46s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 239m 45s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4817/14/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn in the patch passed.  |
   | +1 :green_heart: |  unit  |   3m 42s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 18s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 453m 23s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4817/14/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4817 |
   | Optional Tests | dupname asflicense codespell detsecrets compile javac 
javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle |
   | uname | Linux a2f07b2a892c 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6cfb79be148605a2437bdef9346810bbc4dddff0 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | 

[jira] [Commented] (YARN-11273) [RESERVATION] Federation StateStore: Support storage/retrieval of Reservations With SQL

2022-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17599559#comment-17599559
 ] 

ASF GitHub Bot commented on YARN-11273:
---

slfan1989 commented on PR #4817:
URL: https://github.com/apache/hadoop/pull/4817#issuecomment-1235635028

   @goiri Please help to review the code, thank you very much!




> [RESERVATION] Federation StateStore: Support storage/retrieval of 
> Reservations With SQL
> ---
>
> Key: YARN-11273
> URL: https://issues.apache.org/jira/browse/YARN-11273
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-11290) Improve Query Condition of FederationStateStore#getApplicationsHomeSubCluster

2022-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17599450#comment-17599450
 ] 

ASF GitHub Bot commented on YARN-11290:
---

slfan1989 opened a new pull request, #4846:
URL: https://github.com/apache/hadoop/pull/4846

   JIRA: YARN-11290. Improve Query Condition of 
FederationStateStore#getApplicationsHomeSubCluster.
   
   1. Currently this interface returns the apps of all sub-clusters; add a limit on 
the number of apps returned per query, capped at 1000 apps.
   2. Allow querying apps by a specified HomeSubCluster.
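
   As a sketch of how a caller might use the improved query: the SubClusterId-taking 
newInstance overload is assumed from the request.getSubClusterId() accessor in the 
patch; only the no-arg factory is certain.
{code:java}
import java.util.List;

import org.apache.hadoop.yarn.server.federation.store.FederationStateStore;
import org.apache.hadoop.yarn.server.federation.store.records.ApplicationHomeSubCluster;
import org.apache.hadoop.yarn.server.federation.store.records.GetApplicationsHomeSubClusterRequest;
import org.apache.hadoop.yarn.server.federation.store.records.GetApplicationsHomeSubClusterResponse;
import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;

public class QueryByHomeSubClusterSketch {
  /** Sketch: query the apps of one home sub-cluster through the improved API. */
  static List<ApplicationHomeSubCluster> appsOf(FederationStateStore stateStore,
      String subClusterName) throws Exception {
    SubClusterId home = SubClusterId.newInstance(subClusterName);
    // Assumed overload; the patch only shows request.getSubClusterId().
    GetApplicationsHomeSubClusterRequest request =
        GetApplicationsHomeSubClusterRequest.newInstance(home);
    GetApplicationsHomeSubClusterResponse response =
        stateStore.getApplicationsHomeSubCluster(request);
    // At most the configured limit (1000 by default per this JIRA) is returned.
    return response.getAppsHomeSubClusters();
  }
}
{code}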




> Improve Query Condition of FederationStateStore#getApplicationsHomeSubCluster
> -
>
> Key: YARN-11290
> URL: https://issues.apache.org/jira/browse/YARN-11290
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: federation
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>
> 1. Currently this interface returns the apps of all sub-clusters; add a limit 
> on the number of apps returned per query, capped at 1000 apps.
> 2. Allow querying apps by a specified HomeSubCluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-11290) Improve Query Condition of FederationStateStore#getApplicationsHomeSubCluster

2022-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-11290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated YARN-11290:
--
Labels: pull-request-available  (was: )

> Improve Query Condition of FederationStateStore#getApplicationsHomeSubCluster
> -
>
> Key: YARN-11290
> URL: https://issues.apache.org/jira/browse/YARN-11290
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: federation
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>
> 1. Currently this interface returns the apps of all sub-clusters; add a limit 
> on the number of apps returned per query, capped at 1000 apps.
> 2. Allow querying apps by a specified HomeSubCluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-11291) Can hadoop3.3.4 support spark job containing jars compiled on java17

2022-09-02 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17599442#comment-17599442
 ] 

Steve Loughran commented on YARN-11291:
---

bq. I have installed a hadoop cluster versioned 3.3.4 on java 1.8.

You might want to try upgrading the cluster; 3.3.4 has only been tested on Java 
11, though.
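
For reference, the class-file major version can be read straight from the first 
bytes of a .class file (52 = Java 8, 55 = Java 11, 61 = Java 17); a small 
self-contained sketch:
{code:java}
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

/** Print the class-file version of the .class file passed as the first argument. */
public class ClassVersionCheck {
  public static void main(String[] args) throws IOException {
    try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
      int magic = in.readInt();           // 0xCAFEBABE
      int minor = in.readUnsignedShort();
      int major = in.readUnsignedShort(); // 52 = Java 8, 61 = Java 17
      System.out.printf("magic=%08x major=%d minor=%d%n", magic, major, minor);
    }
  }
}
{code}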

> Can hadoop3.3.4 support spark job containing jars compiled on java17
> 
>
> Key: YARN-11291
> URL: https://issues.apache.org/jira/browse/YARN-11291
> Project: Hadoop YARN
>  Issue Type: Bug
> Environment: hadoop: 3.3.4
> spark: 3.3.0
> java: 17
>Reporter: jiangjiguang0719
>Priority: Major
>
> I have installed a Hadoop cluster versioned 3.3.4 on Java 1.8.
> Now when I submit a Spark job to YARN, it does not work, and an error occurs.
> The Spark job contains jars compiled on Java 17.
> What can I do to support Spark jobs containing jars compiled on Java 17?
> The error log is:
> 22/09/01 13:18:09 WARN TaskSetManager: Lost task 0.0 in stage 2.0 (TID 1824) 
> (worker03 executor 34): java.lang.UnsupportedClassVersionError: 
> org/apache/parquet/column/values/bitpacking/Packer has been compiled by a 
> more recent version of the Java Runtime (class file version 61.0), this 
> version of the Java Runtime only recognizes class file versions up to 52.0
>         at java.lang.ClassLoader.defineClass1(Native Method)
>         at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>         at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>         at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
>         at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>         at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>         at 
> org.apache.spark.sql.execution.datasources.parquet.VectorizedRleValuesReader.init(VectorizedRleValuesReader.java:132)
>         at 
> org.apache.spark.sql.execution.datasources.parquet.VectorizedRleValuesReader.(VectorizedRleValuesReader.java:97)
>         at 
> org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readPageV2(VectorizedColumnReader.java:381)
>         at 
> org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.access$100(VectorizedColumnReader.java:49)
>         at 
> org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader$1.visit(VectorizedColumnReader.java:281)
>         at 
> org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader$1.visit(VectorizedColumnReader.java:268)
>         at 
> org.apache.parquet.column.page.DataPageV2.accept(DataPageV2.java:192)
>         at 
> org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readPage(VectorizedColumnReader.java:268)
>         at 
> org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readBatch(VectorizedColumnReader.java:186)
>         at 
> org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:316)
>         at 
> org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextKeyValue(VectorizedParquetRecordReader.java:212)
>         at 
> org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
>         at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:116)
>         at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:274)
>         at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:116)
>         at 
> org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:553)
>         at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.columnartorow_nextBatch_0$(Unknown
>  Source)
>         at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.processNext(Unknown
>  Source)
>         at 
> org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>         at 
> org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:760)
>         at 
> org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:364)
>         at 
> 

[jira] [Commented] (YARN-9708) Yarn Router Support DelegationToken

2022-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17599403#comment-17599403
 ] 

ASF GitHub Bot commented on YARN-9708:
--

hadoop-yetus commented on PR #4746:
URL: https://github.com/apache/hadoop/pull/4746#issuecomment-1235358305

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 39s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  30m 35s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  10m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   8m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   2m  3s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m 10s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   4m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   8m  9s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 58s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  22m 27s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 45s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  cc  |   9m 39s |  |  the patch passed  |
   | -1 :x: |  javac  |   9m 39s | 
[/results-compile-javac-hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4746/16/artifact/out/results-compile-javac-hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt)
 |  
hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 
with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 generated 1 new + 740 
unchanged - 0 fixed = 741 total (was 740)  |
   | +1 :green_heart: |  compile  |   8m 47s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  cc  |   8m 47s |  |  the patch passed  |
   | -1 :x: |  javac  |   8m 47s | 
[/results-compile-javac-hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4746/16/artifact/out/results-compile-javac-hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt)
 |  
hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07
 with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 generated 3 new 
+ 650 unchanged - 2 fixed = 653 total (was 652)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 56s |  |  
hadoop-yarn-project/hadoop-yarn: The patch generated 0 new + 26 unchanged - 2 
fixed = 26 total (was 28)  |
   | +1 :green_heart: |  mvnsite  |   4m 42s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   3m 59s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   4m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   7m 53s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 20s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   5m 18s 

[jira] [Updated] (YARN-5936) when cpu strict mode is closed, yarn couldn't assure scheduling fairness between containers

2022-09-02 Thread zhengchenyu (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-5936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengchenyu updated YARN-5936:
--
Target Version/s:   (was: 2.7.1)

> when cpu strict mode is closed, yarn couldn't assure scheduling fairness 
> between containers
> ---
>
> Key: YARN-5936
> URL: https://issues.apache.org/jira/browse/YARN-5936
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.1
> Environment: CentOS7.1
>Reporter: zhengchenyu
>Priority: Critical
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> When using the LinuxContainerExecutor, setting 
> "yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage" to 
> true ensures scheduling fairness via the cgroup CPU bandwidth controller, but 
> in our experience the CPU bandwidth limit leads to bad performance. Without 
> it, the cgroup cpu.shares value is our only way to ensure scheduling 
> fairness, and it is not completely effective. For example, given two 
> containers with the same vcores (and therefore the same cpu.shares), one 
> single-threaded and the other multi-threaded, the multi-threaded container 
> gets more CPU time, which is unreasonable.
> Here is my test case: I submit two distributedshell applications with the two 
> commands below:
> {code}
> hadoop jar 
> share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.1.jar 
> org.apache.hadoop.yarn.applications.distributedshell.Client -jar 
> share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.1.jar 
> -shell_script ./run.sh  -shell_args 10 -num_containers 1 -container_memory 
> 1024 -container_vcores 1 -master_memory 1024 -master_vcores 1 -priority 10
> hadoop jar 
> share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.1.jar 
> org.apache.hadoop.yarn.applications.distributedshell.Client -jar 
> share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.1.jar 
> -shell_script ./run.sh  -shell_args 1  -num_containers 1 -container_memory 
> 1024 -container_vcores 1 -master_memory 1024 -master_vcores 1 -priority 10
> {code}
>  Here is the CPU time of the two containers:
> {code}
>   PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND
> 15448 yarn  20   0 9059592  28336   9180 S 998.7  0.1  24:09.30 java
> 15026 yarn  20   0 9050340  27480   9188 S 100.0  0.1   3:33.97 java
> 13767 yarn  20   0 1799816 381208  18528 S   4.6  1.2   0:30.55 java
>77 root  rt   0   0  0  0 S   0.3  0.0   0:00.74 
> migration/1   
> {code}
> We find that the CPU time of the multi-threaded container is ten times that 
> of the single-threaded one, even though the two containers have the same 
> cpu.shares.
> notes:
> run.sh
> {code} 
>   java -cp /home/yarn/loop.jar:$CLASSPATH loop.loop $1
> {code} 
> loop.java
> {code} 
> package loop;
> public class loop {
>   public static void main(String[] args) {
>   // TODO Auto-generated method stub
>   int loop = 1;
>   if(args.length>=1) {
>   System.out.println(args[0]);
>   loop = Integer.parseInt(args[0]);
>   }
>   for(int i=0;i<loop;i++) {
>   System.out.println("start thread " + i);
>   new Thread(new Runnable() {
>   @Override
>   public void run() {
>   // TODO Auto-generated method stub
>   int j=0;
>   while(true){j++;}
>   }
>   }).start();
>   }
>   }
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-5936) when cpu strict mode is closed, yarn couldn't assure scheduling fairness between containers

2022-09-02 Thread zhengchenyu (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-5936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengchenyu resolved YARN-5936.
---
Resolution: Not A Problem

> when cpu strict mode is closed, yarn couldn't assure scheduling fairness 
> between containers
> ---
>
> Key: YARN-5936
> URL: https://issues.apache.org/jira/browse/YARN-5936
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.1
> Environment: CentOS7.1
>Reporter: zhengchenyu
>Priority: Critical
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> When using the LinuxContainerExecutor, setting 
> "yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage" to 
> true ensures scheduling fairness via the cgroup CPU bandwidth controller, but 
> in our experience the CPU bandwidth limit leads to bad performance. Without 
> it, the cgroup cpu.shares value is our only way to ensure scheduling 
> fairness, and it is not completely effective. For example, given two 
> containers with the same vcores (and therefore the same cpu.shares), one 
> single-threaded and the other multi-threaded, the multi-threaded container 
> gets more CPU time, which is unreasonable.
> Here is my test case: I submit two distributedshell applications with the two 
> commands below:
> {code}
> hadoop jar 
> share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.1.jar 
> org.apache.hadoop.yarn.applications.distributedshell.Client -jar 
> share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.1.jar 
> -shell_script ./run.sh  -shell_args 10 -num_containers 1 -container_memory 
> 1024 -container_vcores 1 -master_memory 1024 -master_vcores 1 -priority 10
> hadoop jar 
> share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.1.jar 
> org.apache.hadoop.yarn.applications.distributedshell.Client -jar 
> share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.1.jar 
> -shell_script ./run.sh  -shell_args 1  -num_containers 1 -container_memory 
> 1024 -container_vcores 1 -master_memory 1024 -master_vcores 1 -priority 10
> {code}
>  Here is the CPU time of the two containers:
> {code}
>   PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND
> 15448 yarn  20   0 9059592  28336   9180 S 998.7  0.1  24:09.30 java
> 15026 yarn  20   0 9050340  27480   9188 S 100.0  0.1   3:33.97 java
> 13767 yarn  20   0 1799816 381208  18528 S   4.6  1.2   0:30.55 java
>77 root  rt   0   0  0  0 S   0.3  0.0   0:00.74 
> migration/1   
> {code}
> We find that the CPU time of the multi-threaded container is ten times that 
> of the single-threaded one, even though the two containers have the same 
> cpu.shares.
> notes:
> run.sh
> {code} 
>   java -cp /home/yarn/loop.jar:$CLASSPATH loop.loop $1
> {code} 
> loop.java
> {code} 
> package loop;
> public class loop {
>   public static void main(String[] args) {
>   // TODO Auto-generated method stub
>   int loop = 1;
>   if(args.length>=1) {
>   System.out.println(args[0]);
>   loop = Integer.parseInt(args[0]);
>   }
>   for(int i=0;i<loop;i++) {
>   System.out.println("start thread " + i);
>   new Thread(new Runnable() {
>   @Override
>   public void run() {
>   // TODO Auto-generated method stub
>   int j=0;
>   while(true){j++;}
>   }
>   }).start();
>   }
>   }
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5936) when cpu strict mode is closed, yarn couldn't assure scheduling fairness between containers

2022-09-02 Thread zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-5936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17599379#comment-17599379
 ] 

zhengchenyu commented on YARN-5936:
---

Because of a job change I missed this issue for a long time.

In fact, we need to make sure all tasks of a container are in the same task 
group; using the task group as the Linux sched entity will ensure fairness. 
(Note: "task" here means a Linux task.)

I redid the experiment on hadoop-3.2.1 with Linux kernel 3.10.0-862.el7.x86_64. 
The containers are scheduled nearly fairly.

Note: the original experiment was not rigorous, so I redid it.
{code:java}
## The first container runs 100 threads, the second runs 150 threads.
## The host has 64 logical cores.
hadoop jar 
/home/ke/bin/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar
 org.apache.hadoop.yarn.applications.distributedshell.Client -jar 
/home/ke/bin/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar
 -shell_script ./run.sh  -shell_args 100 -num_containers 1 -container_memory 
1024 -container_vcores 1 -master_memory 1024 -master_vcores 1 -priority 10

hadoop jar 
/home/ke/bin/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar
 org.apache.hadoop.yarn.applications.distributedshell.Client -jar 
/home/ke/bin/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar
 -shell_script ./run.sh  -shell_args 150 -num_containers 1 -container_memory 
1024 -container_vcores 1 -master_memory 1024 -master_vcores 1 -priority 10{code}
The CPU usage of the containers is nearly fair, so I am closing this issue.
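
As a quick sanity check on a node, a minimal stand-alone sketch along the 
following lines (not part of YARN; it assumes a cgroup-v1 layout and takes the 
container JVM's pid as a placeholder argument) lists the cpu cgroups used by 
every thread of a process, which should be a single cgroup per container:
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.stream.Stream;

public class CgroupCheck {
  public static void main(String[] args) throws IOException {
    long pid = Long.parseLong(args[0]);                 // container JVM pid (placeholder)
    Path taskDir = Paths.get("/proc/" + pid + "/task"); // one entry per thread
    Set<String> cpuCgroups = new HashSet<>();
    try (Stream<Path> tasks = Files.list(taskDir)) {
      for (Path t : (Iterable<Path>) tasks::iterator) {
        for (String line : Files.readAllLines(t.resolve("cgroup"))) {
          // cgroup v1 line format: <hierarchy-id>:<controllers>:<path>
          String[] parts = line.split(":", 3);
          if (parts.length == 3
              && Arrays.asList(parts[1].split(",")).contains("cpu")) {
            cpuCgroups.add(parts[2]);
          }
        }
      }
    }
    System.out.println(cpuCgroups.size() == 1
        ? "all threads share cpu cgroup " + cpuCgroups.iterator().next()
        : "threads are spread across cgroups: " + cpuCgroups);
  }
}
{code}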

> when cpu strict mode is closed, yarn couldn't assure scheduling fairness 
> between containers
> ---
>
> Key: YARN-5936
> URL: https://issues.apache.org/jira/browse/YARN-5936
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.1
> Environment: CentOS7.1
>Reporter: zhengchenyu
>Priority: Critical
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> When using the LinuxContainerExecutor, setting 
> "yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage" to 
> true ensures scheduling fairness via the cgroup CPU bandwidth controller, but 
> in our experience the CPU bandwidth limit leads to bad performance. Without 
> it, the cgroup cpu.shares value is our only way to ensure scheduling 
> fairness, and it is not completely effective. For example, given two 
> containers with the same vcores (and therefore the same cpu.shares), one 
> single-threaded and the other multi-threaded, the multi-threaded container 
> gets more CPU time, which is unreasonable.
> Here is my test case: I submit two distributedshell applications with the two 
> commands below:
> {code}
> hadoop jar 
> share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.1.jar 
> org.apache.hadoop.yarn.applications.distributedshell.Client -jar 
> share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.1.jar 
> -shell_script ./run.sh  -shell_args 10 -num_containers 1 -container_memory 
> 1024 -container_vcores 1 -master_memory 1024 -master_vcores 1 -priority 10
> hadoop jar 
> share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.1.jar 
> org.apache.hadoop.yarn.applications.distributedshell.Client -jar 
> share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.1.jar 
> -shell_script ./run.sh  -shell_args 1  -num_containers 1 -container_memory 
> 1024 -container_vcores 1 -master_memory 1024 -master_vcores 1 -priority 10
> {code}
>  Here is the CPU time of the two containers:
> {code}
>   PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND
> 15448 yarn  20   0 9059592  28336   9180 S 998.7  0.1  24:09.30 java
> 15026 yarn  20   0 9050340  27480   9188 S 100.0  0.1   3:33.97 java
> 13767 yarn  20   0 1799816 381208  18528 S   4.6  1.2   0:30.55 java
>77 root  rt   0   0  0  0 S   0.3  0.0   0:00.74 
> migration/1   
> {code}
> We find that the CPU time of the multi-threaded container is ten times that 
> of the single-threaded one, even though the two containers have the same 
> cpu.shares.
> notes:
> run.sh
> {code} 
>   java -cp /home/yarn/loop.jar:$CLASSPATH loop.loop $1
> {code} 
> loop.java
> {code} 
> package loop;
> public class loop {
>   public static void main(String[] args) {
>   // TODO Auto-generated method stub
>   int loop = 1;
>   if(args.length>=1) {
>   System.out.println(args[0]);
>   loop = Integer.parseInt(args[0]);
>   }
>   for(int i=0;i<loop;i++) {
>   System.out.println("start thread " + i);
>   new Thread(new Runnable() {
>   @Override
>  

[jira] [Created] (YARN-11292) resourcemanager no longer reconnects to zk

2022-09-02 Thread chenwencan (Jira)
chenwencan created YARN-11292:
-

 Summary: resourcemanager no longer reconnects to zk
 Key: YARN-11292
 URL: https://issues.apache.org/jira/browse/YARN-11292
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 3.3.3
Reporter: chenwencan


This problem has occurred in our environment. The sequence of events is as 
follows:
 # a network exception occurs between the resourcemanager and zookeeper
 # the resourcemanager reconnects to zookeeper successfully
 # a zookeeper session expiry then occurs
 # the resourcemanager creates a new zookeeper client and reconnects
 # if reconnecting to zk fails, an RMFatalEvent is triggered
 # a new thread is then started to reconnect and rejoin the election, but the 
variable hasAlreadyRun ensures it runs only once, so if the reconnect fails 
again there is no further chance to reconnect

{code:java}
    private class StandByTransitionRunnable implements Runnable {
      // The atomic variable to make sure multiple threads with the same
      // runnable run only once.
      private final AtomicBoolean hasAlreadyRun = new AtomicBoolean(false);

      @Override
      public void run() {
        // Run this only once, even if multiple threads end up triggering
        // this simultaneously.
        if (hasAlreadyRun.getAndSet(true)) {
          return;
        }

        if (rmContext.isHAEnabled()) {
          try {
            // Transition to standby and reinit active services
            LOG.info("Transitioning RM to Standby mode");
            transitionToStandby(true);
            EmbeddedElector elector = rmContext.getLeaderElectorService();
            if (elector != null) {
              elector.rejoinElection();
            }
          } catch (Exception e) {
            LOG.error(FATAL, "Failed to transition RM to Standby mode.", e);
            ExitUtil.terminate(1, e);
          }
        }
      }
    }
{code}
So I think using a lock here would be better.
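
A rough illustration of that idea (only a sketch, not the actual 
ResourceManager code; it reuses rmContext, transitionToStandby(), 
EmbeddedElector and LOG from the snippet above, assumes 
java.util.concurrent.locks.ReentrantLock is imported, and uses an arbitrary 
one-second back-off) could look like:
{code:java}
// Sketch only: replace the one-shot AtomicBoolean with a lock plus a retry
// loop, so a failed rejoinElection() is attempted again instead of never.
private final ReentrantLock standbyLock = new ReentrantLock();

private void transitionToStandbyWithRetry() {
  if (!standbyLock.tryLock()) {
    return; // another thread is already handling the transition
  }
  try {
    if (!rmContext.isHAEnabled()) {
      return;
    }
    while (true) {
      try {
        LOG.info("Transitioning RM to Standby mode");
        transitionToStandby(true);
        EmbeddedElector elector = rmContext.getLeaderElectorService();
        if (elector != null) {
          elector.rejoinElection();
        }
        return; // success
      } catch (Exception e) {
        LOG.error("Failed to transition RM to Standby mode, will retry", e);
        try {
          Thread.sleep(1000); // arbitrary back-off between attempts
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          return;
        }
      }
    }
  } finally {
    standbyLock.unlock();
  }
}
{code}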



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-11289) [Federation] Improve NM FederationInterceptor removeAppFromRegistry

2022-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17599295#comment-17599295
 ] 

ASF GitHub Bot commented on YARN-11289:
---

hadoop-yetus commented on PR #4836:
URL: https://github.com/apache/hadoop/pull/4836#issuecomment-1235169827

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   1m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 51s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   1m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 34s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 43s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  25m 18s |  |  hadoop-yarn-server-nodemanager 
in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 135m 24s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4836/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4836 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux d098cf732689 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / af340243fabdf00a153c15d42eb8583b20ab979b |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4836/6/testReport/ |
   | Max. process+thread count | 554 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4836/6/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 

[jira] [Created] (YARN-11291) Can hadoop3.3.4 support spark job containing jars compiled on java17

2022-09-02 Thread jiangjiguang0719 (Jira)
jiangjiguang0719 created YARN-11291:
---

 Summary: Can hadoop3.3.4 support spark job containing jars 
compiled on java17
 Key: YARN-11291
 URL: https://issues.apache.org/jira/browse/YARN-11291
 Project: Hadoop YARN
  Issue Type: Bug
 Environment: hadoop: 3.3.4

spark: 3.3.0

java: 17
Reporter: jiangjiguang0719


I have installed a Hadoop 3.3.4 cluster running on Java 1.8.
When I submit a Spark job to YARN it does not work and an error occurs.
The Spark job contains jars compiled for Java 17.

What can I do to support Spark jobs containing jars compiled for Java 17?
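
One way to see which jars on the job classpath actually target Java 17 is to 
check the class-file major version (61 = Java 17, 52 = Java 8). A minimal 
stand-alone sketch for that (the jar path argument is just a placeholder) 
could be:
{code:java}
import java.io.DataInputStream;
import java.io.IOException;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class ClassVersionScan {
  public static void main(String[] args) throws IOException {
    try (JarFile jar = new JarFile(args[0])) {          // e.g. a parquet jar
      Enumeration<JarEntry> entries = jar.entries();
      while (entries.hasMoreElements()) {
        JarEntry e = entries.nextElement();
        if (!e.getName().endsWith(".class")) {
          continue;
        }
        try (DataInputStream in = new DataInputStream(jar.getInputStream(e))) {
          int magic = in.readInt();                     // should be 0xCAFEBABE
          int minor = in.readUnsignedShort();
          int major = in.readUnsignedShort();           // 52 = Java 8, 61 = Java 17
          if (magic == 0xCAFEBABE) {
            System.out.println(major + "." + minor + "  " + e.getName());
            return; // one class entry is enough to identify the jar's target
          }
        }
      }
    }
  }
}
{code}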

The error log is:

22/09/01 13:18:09 WARN TaskSetManager: Lost task 0.0 in stage 2.0 (TID 1824) 
(worker03 executor 34): java.lang.UnsupportedClassVersionError: 
org/apache/parquet/column/values/bitpacking/Packer has been compiled by a more 
recent version of the Java Runtime (class file version 61.0), this version of 
the Java Runtime only recognizes class file versions up to 52.0
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
        at 
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
        at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at 
org.apache.spark.sql.execution.datasources.parquet.VectorizedRleValuesReader.init(VectorizedRleValuesReader.java:132)
        at 
org.apache.spark.sql.execution.datasources.parquet.VectorizedRleValuesReader.<init>(VectorizedRleValuesReader.java:97)
        at 
org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readPageV2(VectorizedColumnReader.java:381)
        at 
org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.access$100(VectorizedColumnReader.java:49)
        at 
org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader$1.visit(VectorizedColumnReader.java:281)
        at 
org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader$1.visit(VectorizedColumnReader.java:268)
        at org.apache.parquet.column.page.DataPageV2.accept(DataPageV2.java:192)
        at 
org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readPage(VectorizedColumnReader.java:268)
        at 
org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readBatch(VectorizedColumnReader.java:186)
        at 
org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:316)
        at 
org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextKeyValue(VectorizedParquetRecordReader.java:212)
        at 
org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
        at 
org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:116)
        at 
org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:274)
        at 
org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:116)
        at 
org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:553)
        at 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.columnartorow_nextBatch_0$(Unknown
 Source)
        at 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.processNext(Unknown
 Source)
        at 
org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at 
org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:760)
        at 
org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:364)
        at 
org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:890)
        at 
org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:890)
        at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:136)
        at 

[jira] [Updated] (YARN-11291) Can hadoop3.3.4 support spark job containing jars compiled on java17

2022-09-02 Thread jiangjiguang0719 (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jiangjiguang0719 updated YARN-11291:

Description: 
I have installed a Hadoop 3.3.4 cluster running on Java 1.8.
When I submit a Spark job to YARN it does not work and an error occurs.
The Spark job contains jars compiled for Java 17.

What can I do to support Spark jobs containing jars compiled for Java 17?

The error log is:

22/09/01 13:18:09 WARN TaskSetManager: Lost task 0.0 in stage 2.0 (TID 1824) 
(worker03 executor 34): java.lang.UnsupportedClassVersionError: 
org/apache/parquet/column/values/bitpacking/Packer has been compiled by a more 
recent version of the Java Runtime (class file version 61.0), this version of 
the Java Runtime only recognizes class file versions up to 52.0
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
        at 
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
        at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at 
org.apache.spark.sql.execution.datasources.parquet.VectorizedRleValuesReader.init(VectorizedRleValuesReader.java:132)
        at 
org.apache.spark.sql.execution.datasources.parquet.VectorizedRleValuesReader.<init>(VectorizedRleValuesReader.java:97)
        at 
org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readPageV2(VectorizedColumnReader.java:381)
        at 
org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.access$100(VectorizedColumnReader.java:49)
        at 
org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader$1.visit(VectorizedColumnReader.java:281)
        at 
org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader$1.visit(VectorizedColumnReader.java:268)
        at org.apache.parquet.column.page.DataPageV2.accept(DataPageV2.java:192)
        at 
org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readPage(VectorizedColumnReader.java:268)
        at 
org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readBatch(VectorizedColumnReader.java:186)
        at 
org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:316)
        at 
org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextKeyValue(VectorizedParquetRecordReader.java:212)
        at 
org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
        at 
org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:116)
        at 
org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:274)
        at 
org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:116)
        at 
org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:553)
        at 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.columnartorow_nextBatch_0$(Unknown
 Source)
        at 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.processNext(Unknown
 Source)
        at 
org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at 
org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:760)
        at 
org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:364)
        at 
org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:890)
        at 
org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:890)
        at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:136)
        at 
org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
        at 

[jira] [Commented] (YARN-11273) [RESERVATION] Federation StateStore: Support storage/retrieval of Reservations With SQL

2022-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-11273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17599271#comment-17599271
 ] 

ASF GitHub Bot commented on YARN-11273:
---

hadoop-yetus commented on PR #4817:
URL: https://github.com/apache/hadoop/pull/4817#issuecomment-1235099460

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 51s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 32s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m 52s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   8m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   2m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   6m 50s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m 55s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 12s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |  14m 32s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  20m 41s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  8s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m 11s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   9m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   8m 38s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 54s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   6m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   4m 10s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |  14m 24s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 23s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 237m 13s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4817/13/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn in the patch passed.  |
   | -1 :x: |  unit  |   3m 26s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4817/13/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt)
 |  hadoop-yarn-server-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   4m  8s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 24s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 438m 45s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.federation.store.impl.TestSQLFederationStateStore |
   |   | hadoop.yarn.server.federation.store.impl.TestSQLFederationStateStore |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4817/13/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4817 |
   | Optional Tests | dupname asflicense