[jira] [Created] (HDFS-13829) Needn't get min value of scan index and length for blockpoolReport

2018-08-15 Thread liaoyuxiangqin (JIRA)
liaoyuxiangqin created HDFS-13829:
-

 Summary: Needn't get min value of scan index and length for 
blockpoolReport
 Key: HDFS-13829
 URL: https://issues.apache.org/jira/browse/HDFS-13829
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 3.2.0
 Environment: 


Reporter: liaoyuxiangqin
Assignee: liaoyuxiangqin


While reading the scan() method of the DirectoryScanner class in the datanode, 
I found that the following conditional code could be simplified and made easier 
to understand.

DirectoryScanner.java
{code:java}
if (d < blockpoolReport.length) {
  // There may be multiple on-disk records for the same block, don't increment
  // the memory record pointer if so.
  ScanInfo nextInfo = blockpoolReport[Math.min(d, blockpoolReport.length - 1)];
  if (nextInfo.getBlockId() != info.getBlockId()) {
    ++m;
  }
} else {
  ++m;
}
{code}
As the code segment above shows, the branch is guarded by 
d < blockpoolReport.length, so the maximum value of d inside the branch is 
blockpoolReport.length - 1. Math.min(d, blockpoolReport.length - 1) therefore 
always evaluates to d, and there is no need to take the min of the scan index 
and the blockpoolReport length.
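The claim can be checked mechanically: whenever the guard d < length holds, 
Math.min(d, length - 1) is just d. A minimal standalone sketch (illustrative 
only, not the actual patch; the class and variable names are mine):

```java
// Standalone check of the simplification claim: inside a branch guarded by
// d < length, the index d never exceeds length - 1, so the Math.min call is
// redundant and blockpoolReport[d] would be an equivalent direct access.
public class MinIsRedundant {
    public static void main(String[] args) {
        int length = 5; // stands in for blockpoolReport.length
        for (int d = 0; d < length; d++) {
            if (Math.min(d, length - 1) != d) {
                throw new AssertionError("Math.min changed the index: " + d);
            }
        }
        System.out.println("Math.min(d, length - 1) == d for all d < length");
    }
}
```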

thanks!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-351) Add chill mode state to SCM

2018-08-15 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-351:
---

 Summary: Add chill mode state to SCM
 Key: HDDS-351
 URL: https://issues.apache.org/jira/browse/HDDS-351
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Ajay Kumar


Add chill mode state to SCM






[jira] [Created] (HDDS-350) ContainerMapping#flushContainerInfo doesn't set containerId

2018-08-15 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-350:
---

 Summary: ContainerMapping#flushContainerInfo doesn't set 
containerId
 Key: HDDS-350
 URL: https://issues.apache.org/jira/browse/HDDS-350
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Ajay Kumar


ContainerMapping#flushContainerInfo doesn't set containerId, which results in 
containerId being null in flushed containers.






Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-08-15 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/559/

[Aug 14, 2018 8:47:22 PM] (aw) YETUS-643. default custom maven repo should use 
workspace when in
[Aug 13, 2018 5:08:58 PM] (aengineer) HDDS-346. ozoneShell show the new volume 
info after updateVolume command
[Aug 13, 2018 5:40:31 PM] (xiao) HADOOP-15638. KMS Accept Queue Size default 
changed from 500 to 128 in
[Aug 13, 2018 6:35:19 PM] (arp) HDFS-13823. NameNode UI : "Utilities -> Browse 
the file system -> open a
[Aug 13, 2018 7:47:49 PM] (xyao) HDDS-324. Use pipeline name as Ratis groupID 
to allow datanode to report
[Aug 13, 2018 8:50:00 PM] (eyang) YARN-7417. Remove duplicated code from 
IndexedFileAggregatedLogsBlock   
[Aug 13, 2018 11:12:37 PM] (weichiu) HDFS-13813. Exit NameNode if dangling 
child inode is detected when
[Aug 14, 2018 12:36:13 AM] (weichiu) HDFS-13738. fsck -list-corruptfileblocks 
has infinite loop if user is
[Aug 14, 2018 6:33:01 AM] (elek) HDDS-345. Upgrade RocksDB version from 5.8.0 
to 5.14.2. Contributed by
[Aug 14, 2018 3:21:03 PM] (jlowe) YARN-8640. Restore previous state in 
container-executor after failure.
[Aug 14, 2018 3:36:26 PM] (eyang) YARN-8160.  Support upgrade of service that 
use docker containers.  
[Aug 14, 2018 6:51:27 PM] (weichiu) HDFS-13758. DatanodeManager should throw 
exception if it has
[Aug 14, 2018 6:54:33 PM] (elek) HDDS-324. Addendum: remove the q letter which 
is accidentally added to
[Aug 14, 2018 6:57:22 PM] (xiao) HDFS-13788. Update EC documentation about rack 
fault tolerance.
[Aug 14, 2018 9:57:46 PM] (xyao) HDDS-298. Implement 
SCMClientProtocolServer.getContainerWithPipeline for
[Aug 15, 2018 12:19:00 AM] (weichiu) HADOOP-14212. Expose SecurityEnabled 
boolean field in JMX for other
[Aug 15, 2018 12:22:15 AM] (templedf) HDFS-13819. 
TestDirectoryScanner#testDirectoryScannerInFederatedCluster
[Aug 15, 2018 1:25:38 AM] (weichiu) HADOOP-14212. Addendum patch: Expose 
SecurityEnabled boolean field in
[Aug 15, 2018 4:15:54 AM] (wwei) YARN-8614. Fix few annotation typos in 
YarnConfiguration. Contributed by




-1 overall


The following subsystems voted -1:
compile mvninstall pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.key.kms.server.TestKMS 
   hadoop.cli.TestAclCLI 
   hadoop.cli.TestAclCLIWithPosixAclInheritance 
   hadoop.cli.TestCacheAdminCLI 
   hadoop.cli.TestCryptoAdminCLI 
   hadoop.cli.TestDeleteCLI 
   hadoop.cli.TestErasureCodingCLI 
   hadoop.cli.TestHDFSCLI 
   hadoop.cli.TestXAttrCLI 
   hadoop.fs.contract.hdfs.TestHDFSContractAppend 
   hadoop.fs.contract.hdfs.TestHDFSContractConcat 
   hadoop.fs.contract.hdfs.TestHDFSContractCreate 
   hadoop.fs.contract.hdfs.TestHDFSContractDelete 
   hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus 
   hadoop.fs.contract.hdfs.TestHDFSContractMkdir 
   hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader 
   hadoop.fs.contract.hdfs.TestHDFSContractOpen 
   hadoop.fs.contract.hdfs.TestHDFSContractPathHandle 
   hadoop.fs.contract.hdfs.TestHDFSContractRename 
   hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory 
   hadoop.fs.contract.hdfs.TestHDFSContractSeek 
   hadoop.fs.contract.hdfs.TestHDFSContractSetTimes 
   hadoop.fs.loadGenerator.TestLoadGenerator 
   hadoop.fs.permission.TestStickyBit 
   hadoop.fs.shell.TestHdfsTextCommand 
   hadoop.fs.TestEnhancedByteBufferAccess 
   hadoop.fs.TestFcHdfsCreateMkdir 
   hadoop.fs.TestFcHdfsPermission 
   hadoop.fs.TestFcHdfsSetUMask 
   hadoop.fs.TestGlobPaths 
   hadoop.fs.TestHDFSFileContextMainOperations 
   hadoop.fs.TestHdfsNativeCodeLoader 
   hadoop.fs.TestResolveHdfsSymlink 
   hadoop.fs.TestSWebHdfsFileContextMainOperations 
   hadoop.fs.TestSymlinkHdfsDisable 
   hadoop.fs.TestSymlinkHdfsFileContext 
   hadoop.fs.TestSymlinkHdfsFileSystem 
   hadoop.fs.TestUnbuffer 
   hadoop.fs.TestUrlStreamHandler 
   hadoop.fs.TestWebHdfsFileContextMainOperations 
   hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot 
   hadoop.fs.viewfs.TestViewFileSystemHdfs 
   hadoop.fs.viewfs.TestViewFileSystemLinkFallback 
   hadoop.fs.viewfs.TestViewFileSystemLinkMergeSlash 
   hadoop.fs.viewfs.TestViewFileSystemWithAcls 
   hadoop.fs.viewfs.TestViewFileSystemWithTruncate 
   hadoop.fs.viewfs.TestViewFileSystemWithXAttrs 
   hadoop.fs.viewfs.TestViewFsAtHdfsRoot 
   hadoop.fs.viewfs.TestViewFsDefaultValue 
   hadoop.fs.viewfs.TestViewFsFileStatusHdfs 
   hadoop.fs.viewfs.TestViewFsHdfs 
   hadoop.fs.viewfs.TestViewFsWithAcls 
   hadoop.fs.viewfs.TestViewFsWithXAttrs 
   hadoop.hdfs.client.impl.TestBlockReaderLocal 
   

[jira] [Reopened] (HDDS-119) Skip Apache license header check for some ozone doc scripts

2018-08-15 Thread Anu Engineer (JIRA)


 [ https://issues.apache.org/jira/browse/HDDS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anu Engineer reopened HDDS-119:
---

> Skip Apache license header check for some ozone doc scripts
> ---
>
> Key: HDDS-119
> URL: https://issues.apache.org/jira/browse/HDDS-119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: document
> Environment: {code}
> Lines that start with ? in the ASF License report indicate files that do
> not have an Apache license header:
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html
>  !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-119.00.patch, HDDS-119.01.patch, HDDS-119.02.patch
>
>
> {code}
> Lines that start with ? in the ASF License report indicate files that do
> not have an Apache license header:
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html
>  !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}
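A note for context (my illustration, not part of the issue): excluding files 
like these from the ASF license check is normally done in the apache-rat-plugin 
configuration rather than in the checkstyle suppressions file. A hypothetical 
pom.xml fragment (the exclude patterns are my guesses, not the actual HDDS-119 
patch) might look like:

```xml
<!-- Hypothetical sketch, not the actual HDDS-119 patch: exclude bundled
     third-party Ozone doc assets from the apache-rat license check. -->
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <excludes>
      <exclude>**/docs/themes/ozonedoc/**</exclude>
      <exclude>**/docs/static/OzoneOverview.svg</exclude>
    </excludes>
  </configuration>
</plugin>
```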






Re: Checkstyle shows false positive report

2018-08-15 Thread Anu Engineer
Just reverted. Thanks for root-causing this.

Thanks
Anu


On 8/15/18, 9:37 AM, "Allen Wittenauer"  
wrote:


> On Aug 15, 2018, at 4:49 AM, Kitti Nánási  
wrote:
> 
> Hi All,
> 
> We noticed that the checkstyle run by the pre commit job started to show
> false positive reports, so I created HADOOP-15665
> .
> 
> Until that is fixed, keep in mind to run the checkstyle by your IDE
> manually for the patches you upload or review.


I’ve tracked it down to HDDS-119.  I have no idea why that JIRA is 
changing the checkstyle suppressions file, since the ASF license check is its 
own thing and checkstyle wouldn’t be looking at those files anyway.

That said, there is a bug in Yetus: it should have reported that 
checkstyle failed to run. I’ve filed YETUS-660 for that.





Re: Checkstyle shows false positive report

2018-08-15 Thread Allen Wittenauer


> On Aug 15, 2018, at 4:49 AM, Kitti Nánási  
> wrote:
> 
> Hi All,
> 
> We noticed that the checkstyle run by the pre commit job started to show
> false positive reports, so I created HADOOP-15665
> .
> 
> Until that is fixed, keep in mind to run the checkstyle by your IDE
> manually for the patches you upload or review.


I’ve tracked it down to HDDS-119.  I have no idea why that JIRA is 
changing the checkstyle suppressions file, since the ASF license check is its 
own thing and checkstyle wouldn’t be looking at those files anyway.

That said, there is a bug in Yetus: it should have reported that 
checkstyle failed to run. I’ve filed YETUS-660 for that.



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-08-15 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/869/

[Aug 14, 2018 6:33:01 AM] (elek) HDDS-345. Upgrade RocksDB version from 5.8.0 
to 5.14.2. Contributed by
[Aug 14, 2018 3:21:03 PM] (jlowe) YARN-8640. Restore previous state in 
container-executor after failure.
[Aug 14, 2018 3:36:26 PM] (eyang) YARN-8160.  Support upgrade of service that 
use docker containers.  
[Aug 14, 2018 6:51:27 PM] (weichiu) HDFS-13758. DatanodeManager should throw 
exception if it has
[Aug 14, 2018 6:54:33 PM] (elek) HDDS-324. Addendum: remove the q letter which 
is accidentally added to
[Aug 14, 2018 6:57:22 PM] (xiao) HDFS-13788. Update EC documentation about rack 
fault tolerance.
[Aug 14, 2018 9:57:46 PM] (xyao) HDDS-298. Implement 
SCMClientProtocolServer.getContainerWithPipeline for
[Aug 15, 2018 12:19:00 AM] (weichiu) HADOOP-14212. Expose SecurityEnabled 
boolean field in JMX for other
[Aug 15, 2018 12:22:15 AM] (templedf) HDFS-13819. 
TestDirectoryScanner#testDirectoryScannerInFederatedCluster
[Aug 15, 2018 1:25:38 AM] (weichiu) HADOOP-14212. Addendum patch: Expose 
SecurityEnabled boolean field in




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine

   Unread field: FSBasedSubmarineStorageImpl.java:[line 39]

   Found reliance on default encoding in org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component): new java.io.FileWriter(File) at YarnServiceJobSubmitter.java:[line 192]

   org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component) may fail to clean up java.io.Writer on checked exception; the obligation to clean up the resource created at YarnServiceJobSubmitter.java:[line 192] is not discharged

   org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String, int, String) concatenates strings using + in a loop at YarnServiceUtils.java:[line 72]

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.security.TestRaceWhenRelogin 
   hadoop.util.TestBasicDiskValidator 
   hadoop.hdfs.client.impl.TestBlockReaderLocal 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestMRTimelineEventHandling 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/869/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/869/artifact/out/diff-compile-javac-root.txt
  [328K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/869/artifact/out/diff-checkstyle-root.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/869/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/869/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/869/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/869/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/869/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/869/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/869/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/869/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine-warnings.html
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/869/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [68K]
   

[jira] [Created] (HDFS-13828) DataNode breaching Xceiver Count

2018-08-15 Thread Amithsha (JIRA)
Amithsha created HDFS-13828:
---

 Summary: DataNode breaching Xceiver Count
 Key: HDFS-13828
 URL: https://issues.apache.org/jira/browse/HDFS-13828
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.7.1
Reporter: Amithsha


We observed breaches of the xceiver count limit of 4096 on a particular set of 
5 to 8 nodes in a 900-node cluster.
We stopped the datanode services on those nodes so that their blocks would be 
re-replicated across the cluster. After that, we observed the same issue on a 
new set of nodes.

Q1: Why does this happen on a particular set of nodes, and why, after 
decommissioning those nodes and re-replicating their data across the cluster, 
does it recur on a different set of nodes?

Assumptions:
Reading a particular block on those nodes might be the cause, but in that case 
the problem should have been mitigated by the decommission, and it was not. 
Since the MR jobs are triggered from Hive, we suspect a query might reference 
the same block multiple times in different stages, creating this issue.

From the thread dump:

The datanode thread dump shows that, of the 4090+ xceiver threads created on 
that node, nearly 4000 belonged to the same AppId (multiple mappers), all in 
state "no operation".

 

Any suggestions on this?
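One general mitigation worth noting (an assumption about the setup, not 
something stated in the report): the 4096 ceiling corresponds to the default 
of the datanode's dfs.datanode.max.transfer.threads property, which caps 
concurrent xceiver threads and can be raised in hdfs-site.xml while the root 
cause is investigated:

```xml
<!-- hdfs-site.xml (sketch): raise the datanode xceiver thread ceiling.
     The default of 4096 matches the limit being breached above; 8192 is
     an illustrative value, not a recommendation for this cluster. -->
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>8192</value>
</property>
```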







Checkstyle shows false positive report

2018-08-15 Thread Kitti Nánási
Hi All,

We noticed that the checkstyle run by the pre-commit job has started to show
false positive reports, so I created HADOOP-15665.

Until that is fixed, keep in mind to run checkstyle manually in your IDE
for the patches you upload or review.

Thanks,
Kitti