Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64

2020-10-28 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/41/

[Oct 27, 2020 7:56:30 AM] (noreply) HDFS-15461. TestDFSClientRetries 
testGetFileChecksum fails (#2404)
[Oct 27, 2020 10:18:08 AM] (noreply) HDFS-15580. [JDK 12] 
DFSTestUtil#addDataNodeLayoutVersion fails (#2309)
[Oct 27, 2020 11:45:00 AM] (noreply) HDFS-9776. 
testMultipleAppendsDuringCatchupTailing is flaky (#2410)
[Oct 28, 2020 1:13:25 AM] (noreply) HDFS-15652. Make block size from 
NNThroughputBenchmark configurable (#2416)
[Oct 28, 2020 1:52:56 AM] (noreply) HDFS-15457. TestFsDatasetImpl fails 
intermittently (#2407)
[Oct 28, 2020 1:56:40 AM] (noreply) HDFS-15460. 
TestFileCreation#testServerDefaultsWithMinimalCaching fails intermittently. 
(#2406)




-1 overall


The following subsystems voted -1:
blanks findbugs mvnsite pathlen shadedclient unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

findbugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Redundant nullcheck of oldLock, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory). Redundant null check at DataStorage.java:[line 694] 

findbugs :

   module:hadoop-hdfs-project 
   Redundant nullcheck of oldLock, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory). Redundant null check at DataStorage.java:[line 694] 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Redundant nullcheck of it, which is known to be non-null in org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker, NMStateStoreService$LocalResourceTrackerState). Redundant null check at ResourceLocalizationService.java:[line 343] 
   Redundant nullcheck of it, which is known to be non-null in org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker, NMStateStoreService$LocalResourceTrackerState). Redundant null check at ResourceLocalizationService.java:[line 356] 
   Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean). At ColumnRWHelper.java:[line 333] 
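The two warning classes above recur throughout this report: a null check on a value findbugs has proven non-null, and a boxed value that is unboxed and immediately reboxed. A minimal illustrative sketch (this is not the actual Hadoop code; class and method names here are invented for demonstration) shows both patterns and their straightforward fixes:

```java
import java.util.HashMap;
import java.util.Map;

public class FindbugsPatterns {

    // Pattern 1: redundant null check. getOrDefault never returns null
    // here, so the null branch is dead code and findbugs flags it.
    static String redundantNullCheck(Map<String, String> m) {
        String v = m.getOrDefault("k", "default");
        if (v == null) {            // flagged: redundant null check
            return "unreachable";
        }
        return v;
    }

    // Fix: drop the dead branch.
    static String fixed(Map<String, String> m) {
        return m.getOrDefault("k", "default");
    }

    // Pattern 2: boxed value unboxed and immediately reboxed.
    // Assigning the Long to a primitive long unboxes it; returning a
    // Long reboxes it, which findbugs flags as wasteful.
    static Long reboxed(Map<String, Long> results, String key) {
        long ts = results.get(key); // unbox
        return ts;                  // rebox
    }

    // Fix: keep the boxed value as-is.
    static Long keepBoxed(Map<String, Long> results, String key) {
        return results.get(key);
    }

    public static void main(String[] args) {
        Map<String, Long> r = new HashMap<>();
        r.put("t", 333L);
        System.out.println(fixed(new HashMap<>()));
        System.out.println(keepBoxed(r, "t"));
    }
}
```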

findbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
   Redundant nullcheck of it, which is known to be non-null in 

[jira] [Resolved] (YARN-10477) runc launch failure should not cause nodemanager to go unhealthy

2020-10-28 Thread Jim Brennan (Jira)


 [ https://issues.apache.org/jira/browse/YARN-10477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jim Brennan resolved YARN-10477.

Resolution: Invalid

Closing this as invalid.  The problem was only there in our internal version of 
container-executor.  I should have checked the code in trunk before filing.


> runc launch failure should not cause nodemanager to go unhealthy
> 
>
> Key: YARN-10477
> URL: https://issues.apache.org/jira/browse/YARN-10477
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.3.1, 3.4.1
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
>
> We have observed some failures when launching containers with runc.  We have 
> not yet identified the root cause of those failures, but a side-effect of 
> these failures was that the NodeManager marked itself unhealthy.  Since these 
> are rare failures that affect only a single launch, they should not cause the 
> NodeManager to be marked unhealthy.
> Here is an example RM log:
> {noformat}
> resourcemanager.log.2020-10-02-03.bz2:2020-10-02 03:20:10,255 [RM Event 
> dispatcher] INFO rmnode.RMNodeImpl: Node node:8041 reported UNHEALTHY with 
> details: Linux Container Executor reached unrecoverable exception
> {noformat}
> And here is an example of the NM log:
> {noformat}
> 2020-10-02 03:20:02,033 [ContainersLauncher #434] INFO 
> runtime.RuncContainerRuntime: Launch container failed for 
> container_e25_1601602719874_10691_01_001723
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException:
>  ExitCodeException exitCode=24: OCI command has bad/missing local directories
> {noformat}
> The problem is that the runc code in container-executor is re-using exit code 
> 24 (INVALID_CONFIG_FILE) which is intended for problems with the 
> container-executor.cfg file, and those failures are fatal for the NM.  We 
> should use a different exit code for these.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Created] (YARN-10477) runc launch failure should not cause nodemanager to go unhealthy

2020-10-28 Thread Jim Brennan (Jira)
Jim Brennan created YARN-10477:
--

 Summary: runc launch failure should not cause nodemanager to go 
unhealthy
 Key: YARN-10477
 URL: https://issues.apache.org/jira/browse/YARN-10477
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Affects Versions: 3.3.1, 3.4.1
Reporter: Jim Brennan
Assignee: Jim Brennan


We have observed some failures when launching containers with runc.  We have 
not yet identified the root cause of those failures, but a side-effect of these 
failures was that the NodeManager marked itself unhealthy.  Since these are rare 
failures that affect only a single launch, they should not cause the 
NodeManager to be marked unhealthy.

Here is an example RM log:
{noformat}
resourcemanager.log.2020-10-02-03.bz2:2020-10-02 03:20:10,255 [RM Event 
dispatcher] INFO rmnode.RMNodeImpl: Node node:8041 reported UNHEALTHY with 
details: Linux Container Executor reached unrecoverable exception
{noformat}
And here is an example of the NM log:
{noformat}
2020-10-02 03:20:02,033 [ContainersLauncher #434] INFO 
runtime.RuncContainerRuntime: Launch container failed for 
container_e25_1601602719874_10691_01_001723
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException:
 ExitCodeException exitCode=24: OCI command has bad/missing local directories
{noformat}

The problem is that the runc code in container-executor is re-using exit code 
24 (INVALID_CONFIG_FILE) which is intended for problems with the 
container-executor.cfg file, and those failures are fatal for the NM.  We 
should use a different exit code for these.
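The fix the report asks for is a distinct exit code so the NodeManager can tell a per-container launch failure from a genuinely fatal configuration error. A hedged sketch of the NM-side policy (class, constant names, and the value 57 are hypothetical, not the actual Hadoop code or the code number eventually chosen):

```java
import java.util.Set;

public class ExitCodePolicy {

    // Fatal: container-executor.cfg itself is bad; the whole NM is suspect.
    static final int INVALID_CONFIG_FILE = 24;

    // Hypothetical distinct code for a runc launch failure, which only
    // affects the one container being launched.
    static final int RUNC_LAUNCH_FAILURE = 57;

    // Only configuration-level exit codes should mark the node unhealthy.
    static final Set<Integer> FATAL_FOR_NM = Set.of(INVALID_CONFIG_FILE);

    static boolean shouldMarkNodeUnhealthy(int exitCode) {
        return FATAL_FOR_NM.contains(exitCode);
    }

    public static void main(String[] args) {
        // A bad config file is fatal; a single runc launch failure is not.
        System.out.println(shouldMarkNodeUnhealthy(INVALID_CONFIG_FILE));
        System.out.println(shouldMarkNodeUnhealthy(RUNC_LAUNCH_FAILURE));
    }
}
```

With the codes separated this way, the rare runc failures described above fail only their container instead of taking the whole node out of service.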







Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-10-28 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/308/

[Oct 27, 2020 2:41:10 AM] (Yiqun Lin) HDFS-15640. Add diff threshold to 
FedBalance. Contributed by Jinglun.
[Oct 27, 2020 7:56:30 AM] (noreply) HDFS-15461. TestDFSClientRetries 
testGetFileChecksum fails (#2404)
[Oct 27, 2020 10:18:08 AM] (noreply) HDFS-15580. [JDK 12] 
DFSTestUtil#addDataNodeLayoutVersion fails (#2309)
[Oct 27, 2020 11:45:00 AM] (noreply) HDFS-9776. 
testMultipleAppendsDuringCatchupTailing is flaky (#2410)
[Oct 28, 2020 1:13:25 AM] (noreply) HDFS-15652. Make block size from 
NNThroughputBenchmark configurable (#2416)




-1 overall


The following subsystems voted -1:
pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

Failed junit tests :

   hadoop.hdfs.TestFileChecksum 
   hadoop.hdfs.TestFileChecksumCompositeCrc 
   hadoop.yarn.server.nodemanager.TestNodeManagerResync 
   hadoop.yarn.server.nodemanager.containermanager.monitor.TestContainersMonitor 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager 
   hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch 
   hadoop.yarn.server.nodemanager.TestDeletionService 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked 
   hadoop.fs.azure.TestNativeAzureFileSystemMocked 
   hadoop.fs.azure.TestBlobMetadata 
   hadoop.fs.azure.TestNativeAzureFileSystemConcurrency 
   hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck 
   hadoop.fs.azure.TestNativeAzureFileSystemContractMocked 
   hadoop.fs.azure.TestWasbFsck 
   hadoop.fs.azure.TestOutOfBandAzureBlobOperations 
   hadoop.yarn.sls.TestSLSRunner 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/308/artifact/out/diff-compile-cc-root.txt
  [48K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/308/artifact/out/diff-compile-javac-root.txt
  [568K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/308/artifact/out/diff-checkstyle-root.txt
  [16M]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/308/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/308/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/308/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/308/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/308/artifact/out/whitespace-eol.txt
  [13M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/308/artifact/out/whitespace-tabs.txt
  [2.0M]

   xml:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/308/artifact/out/xml.txt
  [24K]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/308/artifact/out/diff-javadoc-javadoc-root.txt
  [2.0M]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/308/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [396K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/308/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [176K]
   

Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2020-10-28 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/99/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint jshint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

Failed junit tests :

   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.namenode.ha.TestBootstrapStandby 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
  

   jshint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/99/artifact/out/diff-patch-jshint.txt
  [208K]

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/99/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/99/artifact/out/diff-compile-javac-root.txt
  [460K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/99/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/99/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/99/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/99/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/99/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/99/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/99/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/99/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/99/artifact/out/xml.txt
  [4.0K]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/99/artifact/out/diff-javadoc-javadoc-root.txt
  [20K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/99/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [216K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/99/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [280K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/99/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/99/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [36K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/99/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [116K]
   

Re: Wire compatibility between Hadoop 3.x client and 2.x server

2020-10-28 Thread Vinayakumar B
Thanks Jianliang,

I saw the jira is assigned to you.  Are you planning to provide a fix for this
as well?

If not, would you mind assigning to me?

-Vinay

On Wed, 28 Oct 2020 at 3:46 PM, Wu,Jianliang(vip.com) <
jianliang...@vipshop.com> wrote:

> Hi  VinayaKumar and Wei-Chiu
>
> I filed jira https://issues.apache.org/jira/browse/HDFS-15660 for details
>
>
> On Oct 28, 2020, at 17:08, Vinayakumar B wrote:
>
> Hi Wu,Jianliang,
> Have you created the Jira for the issue you mentioned due to
> getContentSummary?
>
> I might have a fix for this. Of course, it needs to be applied on both the
> client and server side.
>
> Let me know.
>
> -Vinay
>
>
> On Wed, Oct 14, 2020 at 12:26 PM Wu,Jianliang(vip.com) <
> jianliang...@vipshop.com> wrote:
>
> Ok, I will file a HDFS jira to report this issue.
>
>
> On Oct 13, 2020, at 20:43, Wei-Chiu Chuang wrote:
>
> Thanks Jianliang for reporting the issue.
> That sounds bad and should not have happened. Could you file an HDFS jira
>
> and
>
> fill in more details?
>
> On Mon, Oct 12, 2020 at 8:59 PM Wu,Jianliang(vip.com) <
> jianliang...@vipshop.com> wrote:
>
> In our case, after the NN was upgraded to 3.1.3 while the DNs were still on
> 2.6, we found that when Hive called the getContentSummary method, the client
> and server were not compatible because Hadoop 3 added the new PROVIDED
> storage type.
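The failure mode described above, where one side of an RPC chokes on an enum value added in a newer release, can be sketched with a defensive decode. This is an illustrative sketch only: the enum names echo HDFS's StorageType, but the ordinal mapping and the fallback choice of DISK are assumptions, not the actual Hadoop converter code.

```java
public class StorageTypeCompat {

    // PROVIDED was added in Hadoop 3; a 2.x peer knows only the others.
    // The ordering here is illustrative, not the real wire encoding.
    enum StorageType { RAM_DISK, SSD, DISK, ARCHIVE, PROVIDED }

    // Defensive decode: an unknown wire ordinal falls back to a default
    // instead of throwing, so an older peer can still interpret the
    // response rather than failing the whole getContentSummary call.
    static StorageType fromWire(int ordinal) {
        StorageType[] known = StorageType.values();
        if (ordinal < 0 || ordinal >= known.length) {
            return StorageType.DISK; // assumption: DISK is a safe default
        }
        return known[ordinal];
    }

    public static void main(String[] args) {
        System.out.println(fromWire(1));   // a known value decodes normally
        System.out.println(fromWire(99));  // an unknown value is tolerated
    }
}
```

Whether to tolerate or reject unknown values is a design choice; the point is that a fix for this class of incompatibility has to be applied on whichever side may receive values it does not know about, which is why the thread notes it needs both client- and server-side changes.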
>
> On Oct 13, 2020, at 06:41, Chao Sun <sunc...@apache.org> wrote:
>
>
>
>
>
> This e-mail may be confidential. If you are not the designated recipient, please notify the sender immediately. Please do not use, save, copy, print, or distribute this e-mail or its contents, or use it for any other purpose or disclose it to any person. Thank you for your cooperation!
>
> This communication is intended only for the addressee(s) and may contain
> information that is privileged and confidential. You are hereby notified
> that, if you are not an intended recipient listed above, or an
>
> authorized
>
> employee or agent of an addressee of this communication responsible for
> delivering e-mail messages to an intended recipient, any dissemination,
> distribution or reproduction of this communication (including any
> attachments hereto) is strictly prohibited. If you have received this
> communication in error, please notify us immediately by a reply e-mail
> addressed to the sender and permanently delete the original e-mail
> communication and any attachments from all storage devices without
>
> making
>
> or otherwise retaining a copy.
>
>
>
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>
>
>
-- 
-Vinay


[jira] [Created] (YARN-10476) Queue metrics of Unmanaged applications

2020-10-28 Thread Cyrus Jackson (Jira)
Cyrus Jackson created YARN-10476:


 Summary:  Queue metrics of Unmanaged applications
 Key: YARN-10476
 URL: https://issues.apache.org/jira/browse/YARN-10476
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Cyrus Jackson
Assignee: Cyrus Jackson










Re: Wire compatibility between Hadoop 3.x client and 2.x server

2020-10-28 Thread Wu,Jianliang(vip.com)
Hi  VinayaKumar and Wei-Chiu

I filed jira https://issues.apache.org/jira/browse/HDFS-15660 for details

On Oct 28, 2020, at 17:08, Vinayakumar B <vinayakum...@apache.org> wrote:

Hi Wu,Jianliang,
Have you created the Jira for the issue you mentioned due to
getContentSummary?

I might have a fix for this. Of course, it needs to be applied on both the
client and server side.

Let me know.

-Vinay


On Wed, Oct 14, 2020 at 12:26 PM Wu,Jianliang(vip.com) <
jianliang...@vipshop.com> wrote:

Ok, I will file a HDFS jira to report this issue.


On Oct 13, 2020, at 20:43, Wei-Chiu Chuang <weic...@cloudera.com.INVALID> wrote:

Thanks Jianliang for reporting the issue.
That sounds bad and should not have happened. Could you file an HDFS jira
and
fill in more details?

On Mon, Oct 12, 2020 at 8:59 PM Wu,Jianliang(vip.com) <
jianliang...@vipshop.com> wrote:

In our case, after the NN was upgraded to 3.1.3 while the DNs were still on
2.6, we found that when Hive called the getContentSummary method, the client
and server were not compatible because Hadoop 3 added the new PROVIDED
storage type.

On Oct 13, 2020, at 06:41, Chao Sun <sunc...@apache.org> wrote:











Re: Wire compatibility between Hadoop 3.x client and 2.x server

2020-10-28 Thread Vinayakumar B
Hi Wu,Jianliang,
Have you created the Jira for the issue you mentioned due to
getContentSummary?

I might have a fix for this. Of course, it needs to be applied on both the
client and server side.

Let me know.

-Vinay


On Wed, Oct 14, 2020 at 12:26 PM Wu,Jianliang(vip.com) <
jianliang...@vipshop.com> wrote:

> Ok, I will file a HDFS jira to report this issue.
>
>
> > On Oct 13, 2020, at 20:43, Wei-Chiu Chuang wrote:
> >
> > Thanks Jianliang for reporting the issue.
> > That sounds bad and should not have happened. Could you file an HDFS jira
> and
> > fill in more details?
> >
> > On Mon, Oct 12, 2020 at 8:59 PM Wu,Jianliang(vip.com) <
> > jianliang...@vipshop.com> wrote:
> >
> >> In our case, after the NN was upgraded to 3.1.3 while the DNs were still on
> >> 2.6, we found that when Hive called the getContentSummary method, the client
> >> and server were not compatible because Hadoop 3 added the new PROVIDED
> >> storage type.
> >>
> >> On Oct 13, 2020, at 06:41, Chao Sun <sunc...@apache.org> wrote:
> >>
> >>
> >>
> >>
> >>
>
>
>