Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-07-25 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/538/

[Jul 24, 2018 7:01:04 PM] (aw) YETUS-242. hadoop: add -Drequire.valgrind
[Jul 24, 2018 3:53:20 PM] (msingh) HDDS-272. TestBlockDeletingService is 
failing with
[Jul 24, 2018 4:50:17 PM] (sunilg) YARN-7748.
[Jul 24, 2018 5:17:03 PM] (xyao) HDDS-282. Consolidate logging in 
scm/container-service. Contributed by
[Jul 24, 2018 5:56:59 PM] (bibinchundatt) YARN-8541. RM startup failure on 
recovery after user deletion.
[Jul 24, 2018 7:46:59 PM] (haibochen) YARN-7133. Clean up lock-try order in 
fair scheduler. (Szilard Nemeth
[Jul 24, 2018 9:32:30 PM] (gera) HADOOP-15612. Improve exception when tfile 
fails to load LzoCodec.
[Jul 24, 2018 11:05:27 PM] (templedf) HDFS-13448. HDFS Block Placement - Ignore 
Locality for First Block
[Jul 25, 2018 4:42:47 AM] (xiao) HDFS-13761. Add toString Method to AclFeature 
Class. Contributed by
[Jul 25, 2018 4:45:43 AM] (xiao) HADOOP-15609. Retry KMS calls when 
SSLHandshakeException occurs.
[Jul 25, 2018 8:45:54 AM] (msingh) HDDS-203. Add getCommittedBlockLength API in 
datanode. Contributed by
[Jul 25, 2018 9:35:27 AM] (wwei) YARN-8546. Resource leak caused by a reserved 
container being released




-1 overall


The following subsystems voted -1:
compile mvninstall pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
   hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
   hadoop.fs.TestFileUtil 
   hadoop.fs.TestFsShellCopy 
   hadoop.fs.TestFsShellList 
   hadoop.http.TestHttpServer 
   hadoop.http.TestHttpServerLogs 
   hadoop.io.nativeio.TestNativeIO 
   hadoop.ipc.TestSocketFactory 
   hadoop.metrics2.impl.TestStatsDMetrics 
   hadoop.security.TestSecurityUtil 
   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.security.token.TestDtUtilShell 
   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.util.TestNativeCodeLoader 
   hadoop.hdfs.client.impl.TestBlockReaderLocal 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl 
   hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage 
   hadoop.hdfs.server.datanode.TestBlockScanner 
   hadoop.hdfs.server.datanode.TestDataNodeFaultInjector 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.server.mover.TestMover 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.ha.TestHAAppend 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshot 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion 
   hadoop.hdfs.server.namenode.TestAuditLoggerWithCommands 
   hadoop.hdfs.server.namenode.TestCheckpoint 
   hadoop.hdfs.server.namenode.TestDefaultBlockPlacementPolicy 
   hadoop.hdfs.TestBlocksScheduledCounter 
   hadoop.hdfs.TestBlockStoragePolicy 
   hadoop.hdfs.TestDatanodeReport 
   hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs 
   hadoop.hdfs.TestDFSShell 
   hadoop.hdfs.TestDFSStripedInputStream 
   hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy 
   hadoop.hdfs.TestDFSUpgradeFromImage 
   hadoop.hdfs.TestDistributedFileSystem 
   hadoop.hdfs.TestDistributedFileSystemWithECFileWithRandomECPolicy 
   hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy 
   hadoop.hdfs.TestFetchImage 
   hadoop.hdfs.TestFileAppend 
   hadoop.hdfs.TestFileChecksum 
   hadoop.hdfs.TestFileConcurrentReader 
   hadoop.hdfs.TestFileCorruption 
   hadoop.hdfs.TestHDFSFileSystemContract 
   

[jira] [Created] (HDDS-295) TestCloseContainerByPipeline is failing because of timeout

2018-07-25 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-295:
--

 Summary: TestCloseContainerByPipeline is failing because of timeout
 Key: HDDS-295
 URL: https://issues.apache.org/jira/browse/HDDS-295
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Reporter: Mukul Kumar Singh


The test is failing because it times out while waiting for the container to 
be closed.

The details are logged at 
https://builds.apache.org/job/PreCommit-HDDS-Build/627/testReport/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13768) Adding replicas to volume map makes DataNode start slowly

2018-07-25 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-13768:


 Summary:  Adding replicas to volume map makes DataNode start 
slowly 
 Key: HDFS-13768
 URL: https://issues.apache.org/jira/browse/HDFS-13768
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.1.0
Reporter: Yiqun Lin


We found DNs starting very slowly during a rolling upgrade of our cluster. When 
we restart DNs, they start so slowly that they do not register with the NN 
immediately, and this causes a lot of errors like the following:
{noformat}
DataXceiver error processing WRITE_BLOCK operation  src: /xx.xx.xx.xx:64360 
dst: /xx.xx.xx.xx:50010
java.io.IOException: Not ready to serve the block pool, 
BP-1508644862-xx.xx.xx.xx-1493781183457.
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAndWaitForBP(DataXceiver.java:1290)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAccess(DataXceiver.java:1298)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:630)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
at java.lang.Thread.run(Thread.java:745)
{noformat}

Looking into the DN startup logic, the DN does the initial block pool 
operation before registration. During block pool initialization, we found 
that adding replicas to the volume map is the most expensive operation. 
Related log:
{noformat}
2018-07-26 10:46:23,771 INFO [Thread-105] 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
volume /home/hard_disk/1/dfs/dn/current: 242722ms
2018-07-26 10:46:26,231 INFO [Thread-109] 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
volume /home/hard_disk/5/dfs/dn/current: 245182ms
2018-07-26 10:46:32,146 INFO [Thread-112] 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
volume /home/hard_disk/8/dfs/dn/current: 251097ms
2018-07-26 10:47:08,283 INFO [Thread-106] 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
volume /home/hard_disk/2/dfs/dn/current: 287235ms
{noformat}

Currently the DN uses an independent thread per volume to scan and add 
replicas, but startup still has to wait for the slowest thread to finish its 
work. So the main opportunity here is to make each volume scan run faster.

The jstack we captured while the DN was blocked adding replicas:
{noformat}
"Thread-113" #419 daemon prio=5 os_prio=0 tid=0x7f40879ff000 nid=0x145da 
runnable [0x7f4043a38000]
   java.lang.Thread.State: RUNNABLE
at java.io.UnixFileSystem.list(Native Method)
at java.io.File.list(File.java:1122)
at java.io.File.listFiles(File.java:1207)
at org.apache.hadoop.fs.FileUtil.listFiles(FileUtil.java:1165)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:445)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:448)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:448)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.getVolumeMap(BlockPoolSlice.java:342)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(FsVolumeImpl.java:864)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$1.run(FsVolumeList.java:191)
{noformat}

One possible improvement is to use a ForkJoinPool for this recursive task, 
rather than scanning each volume synchronously; a rough sketch follows.
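A minimal sketch of that idea (class and helper names are illustrative, not 
the actual BlockPoolSlice code): each directory level forks subtasks into a 
shared work-stealing pool, so idle threads can help scan the volume with the 
largest tree instead of waiting on it.
{code:java}
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

class AddToReplicasMapTask extends RecursiveAction {
  private final File dir;

  AddToReplicasMapTask(File dir) {
    this.dir = dir;
  }

  @Override
  protected void compute() {
    File[] children = dir.listFiles();
    if (children == null) {
      return; // not a directory, or listing failed
    }
    List<AddToReplicasMapTask> subTasks = new ArrayList<>();
    for (File child : children) {
      if (child.isDirectory()) {
        subTasks.add(new AddToReplicasMapTask(child)); // fork per subdirectory
      } else {
        addReplica(child);
      }
    }
    invokeAll(subTasks); // run subdirectory scans in parallel, then join
  }

  // Hypothetical helper: the real code would parse the block/meta file name
  // and put a ReplicaInfo into the volume's ReplicaMap (with synchronization).
  private void addReplica(File blockFile) {
  }
}

// Usage: one shared pool across all volumes instead of one thread per volume.
// new ForkJoinPool().invoke(
//     new AddToReplicasMapTask(new File("/home/hard_disk/1/dfs/dn/current")));
{code}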
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-294) Destroy ratis pipeline on datanode on pipeline close event.

2018-07-25 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-294:
--

 Summary: Destroy ratis pipeline on datanode on pipeline close 
event.
 Key: HDDS-294
 URL: https://issues.apache.org/jira/browse/HDDS-294
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Reporter: Mukul Kumar Singh


Once a ratis pipeline is closed, the corresponding metadata on the datanode 
should be destroyed as well. This JIRA proposes to remove the ratis metadata 
and destroy the ratis ring on the datanode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-07-25 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/848/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.util.TestBasicDiskValidator 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisher 
   hadoop.mapred.TestMRTimelineEventHandling 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/diff-compile-javac-root.txt
  [332K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/diff-checkstyle-root.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [28K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/branch-findbugs-hadoop-ozone_tools.txt
  [4.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/diff-javadoc-javadoc-root.txt
  [760K]

   CTEST:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt
  [116K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [192K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [336K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
  [112K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [80K]
   

[jira] [Created] (HDDS-293) Reduce memory usage in KeyData

2018-07-25 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDDS-293:


 Summary: Reduce memory usage in KeyData
 Key: HDDS-293
 URL: https://issues.apache.org/jira/browse/HDDS-293
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


Currently, the field chunks is declared as a List<ContainerProtos.ChunkInfo> 
in KeyData as shown below.
{code}
//KeyData.java
  private List<ContainerProtos.ChunkInfo> chunks;
{code}
It is expected that many KeyData objects only have a single chunk, so 
allocating a full List (and its backing array) per object wastes memory; we 
could reduce that overhead, as sketched below.
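A hedged sketch of one way to do that (field and method names are 
illustrative, not the committed change): keep a dedicated field for the 
common single-chunk case and only allocate a List when a second chunk arrives.
{code:java}
//KeyData.java (sketch; assumes java.util.* imports and the hdds ContainerProtos)
private ContainerProtos.ChunkInfo singleChunk; // set iff exactly one chunk
private List<ContainerProtos.ChunkInfo> chunkList; // lazily allocated for multi-chunk keys

public void addChunk(ContainerProtos.ChunkInfo chunk) {
  if (singleChunk == null && chunkList == null) {
    singleChunk = chunk; // common case: no List or backing array is allocated
  } else {
    if (chunkList == null) {
      chunkList = new ArrayList<>(2); // promote to a list on the second chunk
      chunkList.add(singleChunk);
      singleChunk = null;
    }
    chunkList.add(chunk);
  }
}

public List<ContainerProtos.ChunkInfo> getChunks() {
  if (singleChunk != null) {
    return Collections.singletonList(singleChunk);
  }
  return chunkList != null ? chunkList : Collections.emptyList();
}
{code}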



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13767) Add msync server implementation.

2018-07-25 Thread Chen Liang (JIRA)
Chen Liang created HDFS-13767:
-

 Summary: Add msync server implementation.
 Key: HDFS-13767
 URL: https://issues.apache.org/jira/browse/HDFS-13767
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Chen Liang
Assignee: Chen Liang


This is a follow-up to HDFS-13688, where the msync API was introduced to 
{{ClientProtocol}} but the server-side implementation is missing. This JIRA 
is to implement the server-side logic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13766) HDFS Classes used for implementation of Multipart uploads to move to hadoop-common JAR

2018-07-25 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-13766:
-

 Summary: HDFS Classes used for implementation of Multipart uploads 
to move to hadoop-common JAR
 Key: HDFS-13766
 URL: https://issues.apache.org/jira/browse/HDFS-13766
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: 3.2.0
Reporter: Steve Loughran


The multipart upload API uses classes which are only in {{hadoop-hdfs-client}}.

These need to be moved to hadoop-common so that cloud deployments which don't 
have the hdfs-client JAR on their classpath (HD/I, possibly Google Dataproc) 
can implement and use the API.

Sorry.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13765) Fix javadoc for FSDirMkdirOp#createParentDirectories

2018-07-25 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDFS-13765:
--

 Summary: Fix javadoc for FSDirMkdirOp#createParentDirectories
 Key: HDFS-13765
 URL: https://issues.apache.org/jira/browse/HDFS-13765
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Lokesh Jain
Assignee: Lokesh Jain


Javadoc needs to be fixed for FSDirMkdirOp#createParentDirectories.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-292) Fix ContainerMapping#getMatchingContainerWithPipeline

2018-07-25 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-292:
---

 Summary: Fix ContainerMapping#getMatchingContainerWithPipeline
 Key: HDDS-292
 URL: https://issues.apache.org/jira/browse/HDDS-292
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
 Fix For: 0.2.1


The current code does not assign the newly allocated pipeline back to the 
{{pipeline}} variable, so the new pipeline is never used:

{code}
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
@@ -446,7 +446,7 @@ public ContainerWithPipeline 
getMatchingContainerWithPipeline(final long size,
 .getPipeline(containerInfo.getPipelineName(),
 containerInfo.getReplicationType());
 if (pipeline == null) {
-  pipelineSelector
+  pipeline = pipelineSelector
   .getReplicationPipeline(containerInfo.getReplicationType(),
   containerInfo.getReplicationFactor());
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-292) Fix ContainerMapping#getMatchingContainerWithPipeline

2018-07-25 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDDS-292.
-
Resolution: Fixed

Based on offline discussion with [~msingh], this will be fixed as part of 
HDDS-277. 

> Fix ContainerMapping#getMatchingContainerWithPipeline
> -
>
> Key: HDDS-292
> URL: https://issues.apache.org/jira/browse/HDDS-292
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.2.1
>
>
> The current code does not update the pipeline that is newly allocated 
> {code}
> --- 
> a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
> +++ 
> b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
> @@ -446,7 +446,7 @@ public ContainerWithPipeline 
> getMatchingContainerWithPipeline(final long size,
>  .getPipeline(containerInfo.getPipelineName(),
>  containerInfo.getReplicationType());
>  if (pipeline == null) {
> -  pipelineSelector
> +  pipeline = pipelineSelector
>.getReplicationPipeline(containerInfo.getReplicationType(),
>containerInfo.getReplicationFactor());
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-13553) RBF: Support global quota

2018-07-25 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin resolved HDFS-13553.
--
  Resolution: Fixed
Hadoop Flags: Reviewed

Closing this. The release note has been updated.
Thanks to everyone who contributed to this feature.

> RBF: Support global quota
> -
>
> Key: HDFS-13553
> URL: https://issues.apache.org/jira/browse/HDFS-13553
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 2.10.0, 3.1.0
>
> Attachments: RBF support  global quota.pdf
>
>
> Add quota management to Router-based federation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-291) Initialize hadoop metrics system in standalone hdds datanodes

2018-07-25 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-291:
-

 Summary: Initialize hadoop metrics system in standalone hdds 
datanodes
 Key: HDDS-291
 URL: https://issues.apache.org/jira/browse/HDDS-291
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Reporter: Elek, Marton
Assignee: Elek, Marton
 Fix For: 0.2.1


Since HDDS-94 we can start a standalone HDDS datanode process without the 
HDFS datanode parts.

But to see the Hadoop metrics over the JMX interface we need to initialize 
the Hadoop metrics system (we already have metrics emitted by the storage IO 
layer); a rough sketch follows.
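A minimal sketch of the missing call, assuming the standalone datanode's 
startup path can invoke the standard metrics2 bootstrap (the class name and 
the "HddsDatanode" prefix here are assumptions, not the actual entry point):
{code:java}
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;

public class HddsDatanodeStarter {
  public static void main(String[] args) {
    // Register the metrics system so that existing metrics sources are
    // published over JMX under the given prefix before services start.
    DefaultMetricsSystem.initialize("HddsDatanode");
    // ... start the standalone datanode services as before ...
  }
}
{code}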



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13764) [DOC] update flag is not necessary to avoid verifying checksums

2018-07-25 Thread Yuexin Zhang (JIRA)
Yuexin Zhang created HDFS-13764:
---

 Summary: [DOC] update flag is not necessary to avoid verifying 
checksums
 Key: HDFS-13764
 URL: https://issues.apache.org/jira/browse/HDFS-13764
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.3
Reporter: Yuexin Zhang


The following doc tells users to specify the "-update" option to avoid checksum verification:

[https://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html#Copying_between_encrypted_and_unencrypted_locations]
{code:java}
// Copying between encrypted and unencrypted locations
By default, distcp compares checksums provided by the filesystem to verify that 
the data was successfully copied to the destination. When copying between an 
unencrypted and encrypted location, the filesystem checksums will not match 
since the underlying block data is different. In this case, specify the 
-skipcrccheck and -update distcp flags to avoid verifying checksums.
{code}
 

But actually, the "-update" option is not necessary; only "-skipcrccheck" is needed to skip checksum verification.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org