Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-06-27 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/511/

[Jun 26, 2018 6:25:15 PM] (eyang) YARN-8214.  Change default RegistryDNS port.  
   Contributed by
[Jun 26, 2018 9:34:57 PM] (eyang) YARN-8108.  Added option to disable loading 
existing filters to prevent 
[Jun 26, 2018 10:21:35 PM] (miklos.szegedi) YARN-8461. Support strict memory 
control on individual container with
[Jun 27, 2018 2:25:57 AM] (wangda) YARN-8423. GPU does not get released even 
though the application gets
[Jun 27, 2018 2:27:17 AM] (wangda) YARN-8464. Async scheduling thread could be 
interrupted when there are




-1 overall


The following subsystems voted -1:
compile mvninstall pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
   hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
   hadoop.fs.TestFileUtil 
   hadoop.fs.TestFsShellCopy 
   hadoop.fs.TestFsShellList 
   hadoop.http.TestHttpServer 
   hadoop.http.TestHttpServerLogs 
   hadoop.io.nativeio.TestNativeIO 
   hadoop.ipc.TestIPC 
   hadoop.ipc.TestSocketFactory 
   hadoop.metrics2.impl.TestStatsDMetrics 
   hadoop.security.TestGroupsCaching 
   hadoop.security.TestSecurityUtil 
   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.security.token.TestDtUtilShell 
   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.util.TestNativeCodeLoader 
   hadoop.hdfs.client.impl.TestBlockReaderLocal 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl 
   hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage 
   hadoop.hdfs.server.datanode.TestBlockScanner 
   hadoop.hdfs.server.datanode.TestDataNodeFaultInjector 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport 
   hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs 
   hadoop.hdfs.TestDFSShell 
   hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy 
   hadoop.hdfs.TestDFSStripedOutputStream 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.TestDFSUpgradeFromImage 
   hadoop.hdfs.TestFetchImage 
   hadoop.hdfs.TestHDFSFileSystemContract 
   hadoop.hdfs.TestLocalDFS 
   hadoop.hdfs.TestPread 
   hadoop.hdfs.TestSecureEncryptionZoneWithKMS 
   hadoop.hdfs.TestTrashWithSecureEncryptionZones 
   hadoop.hdfs.tools.TestDFSAdmin 
   hadoop.hdfs.web.TestWebHDFS 
   hadoop.hdfs.web.TestWebHdfsUrl 
   hadoop.fs.http.server.TestHttpFSServerWebServer 
   hadoop.yarn.logaggregation.filecontroller.ifile.TestLogAggregationIndexFileController 
   hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch 
   hadoop.yarn.server.nodemanager.containermanager.TestAuxServices 
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestContainerExecutor 
   hadoop.yarn.server.nodemanager.TestNodeManagerResync 
   hadoop.yarn.server.webproxy.amfilter.TestAmFilter 
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.timeline.security.TestTimelineAuthenticationFilterForV1 
   hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.TestFSSchedulerConfigurationStore 
   hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.TestLeveldbConfigurationStore 
   hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler 
   hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueManagementDynamicEditPolicy 
   hadoop.yarn.server.resourcemanager.scheduler.constraint.TestPlacementProcessor 
   hadoop.yarn.server.resourcemanager.scheduler.fair.TestAllocationFileLoaderService 

RE: [DISCUSS] Merge Storage Policy Satisfier (SPS) [HDFS-10285] feature branch to trunk

2018-06-27 Thread Lin,Yiqun(vip.com)
Hi Uma,

From the discussion under JIRA HDFS-10285, external SPS will be the 
recommended way for users to run it, right? As you also mentioned, there is 
some additional work to test and integrate. One more question from me, which 
other developers may also care about: what are the major differences between 
internal SPS and external SPS? Is it just to avoid affecting the running 
NameNode? Could you describe this a little more?

Thanks
Yiqun

-----Original Message-----
From: Uma Maheswara Rao G [mailto:hadoop@gmail.com]
Sent: June 28, 2018 6:22
To: hdfs-dev@hadoop.apache.org
Subject: Re: [DISCUSS] Merge Storage Policy Satisfier (SPS) [HDFS-10285] feature 
branch to trunk

Hi All,

  After long discussions (offline and on JIRA) on SPS, we came to a conclusion 
on JIRA (HDFS-10285) that we will go ahead with the External SPS merge in the 
first phase. In this phase, the process will not run inside the NameNode.
  We will continue the discussion on Internal SPS. The current code base supports 
both the internal and external options. We have review comments for Internal SPS 
which need some additional work for analysis, testing, etc. We will move the 
Internal SPS work under HDFS-12226 (follow-on work for SPS in NN). We are 
working on the cleanup task HDFS-13076 for the merge.
For more clarity on the Internal and External SPS proposals, please refer 
to JIRA HDFS-10285.

If there are no objections, I will start the vote soon.

Regards,
Uma


RE: [VOTE] Release Apache Hadoop 3.0.3 (RC0)

2018-06-27 Thread Chen, Sammi
Hi Yongjun,


The artifacts will be pushed to 
https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-project after step 
6 of the publishing steps.
For 2.9.1, I am sure I did that step before. I redid step 6 today, and 2.9.1 is 
now pushed to the Maven repo; you can double-check it. I suspect Nexus may 
sometimes fail to notify the user when there are unexpected failures.


Bests,
Sammi
From: Yongjun Zhang [mailto:yzh...@cloudera.com]
Sent: Sunday, June 17, 2018 12:17 PM
To: Jonathan Eagles ; Chen, Sammi 
Cc: Eric Payne ; Hadoop Common 
; Hdfs-dev ; 
mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: Re: [VOTE] Release Apache Hadoop 3.0.3 (RC0)

+ Junping, Sammi

Hi Jonathan,

Many thanks for reporting the issues and sorry for the inconvenience.

1. Shouldn't the build be looking for artifacts in

https://repository.apache.org/content/repositories/releases
rather than

https://repository.apache.org/content/repositories/snapshots
?

2.
Not seeing the artifact published here as well.
https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-project

Indeed, I did not see 2.9.1 there either, so I have included Sammi Chen.

Hi Junping, would you please share which step in
https://wiki.apache.org/hadoop/HowToRelease
is supposed to do this?

Thanks a lot.

--Yongjun

On Fri, Jun 15, 2018 at 10:52 PM, Jonathan Eagles 
mailto:jeag...@gmail.com>> wrote:
Upgraded the Tez dependency to Hadoop 3.0.3 and found this issue. Is anyone 
else seeing it?

[ERROR] Failed to execute goal on project hadoop-shim: Could not resolve 
dependencies for project org.apache.tez:hadoop-shim:jar:0.10.0-SNAPSHOT: Failed 
to collect dependencies at org.apache.hadoop:hadoop-yarn-api:jar:3.0.3: Failed 
to read artifact descriptor for org.apache.hadoop:hadoop-yarn-api:jar:3.0.3: 
Could not find artifact org.apache.hadoop:hadoop-project:pom:3.0.3 in 
apache.snapshots.https 
(https://repository.apache.org/content/repositories/snapshots) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-shim

Not seeing the artifact published here as well.
https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-project
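One quick way to check where a release artifact actually landed is to build the conventional repository path for its coordinates and fetch it. The sketch below (Python, illustrative only; the helper name is my own) constructs the URL where the `hadoop-project:3.0.3` POM should live under the releases repository quoted in the error:

```python
# Sketch: build the URL where a Maven artifact's POM should live, so you can
# check whether a release (e.g. hadoop-project 3.0.3) was published to the
# releases repository rather than only to snapshots. The base URLs are the
# ones quoted in this thread; pom_url is a hypothetical helper.

def pom_url(base, group_id, artifact_id, version):
    """Return the conventional Maven repository path for an artifact's POM."""
    group_path = group_id.replace(".", "/")
    return (f"{base}/{group_path}/{artifact_id}/{version}/"
            f"{artifact_id}-{version}.pom")

releases = "https://repository.apache.org/content/repositories/releases"
url = pom_url(releases, "org.apache.hadoop", "hadoop-project", "3.0.3")
print(url)
```

An HTTP 404 on that URL (versus a 200 on an older release like 3.0.2) would confirm the artifact was never promoted to the releases repository.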

On Tue, Jun 12, 2018 at 6:44 PM, Yongjun Zhang 
mailto:yzh...@cloudera.com>> wrote:
Thanks Eric!

--Yongjun

On Mon, Jun 11, 2018 at 8:05 AM, Eric Payne 
mailto:erichadoo...@yahoo.com>> wrote:

> Sorry, Yongjun. My +1 is also binding
> +1 (binding)
> -Eric Payne
>
> On Friday, June 1, 2018, 12:25:36 PM CDT, Eric Payne <
> eric.payne1...@yahoo.com> wrote:
>
>
>
>
> Thanks a lot, Yongjun, for your hard work on this release.
>
> +1
> - Built from source
> - Installed on 6 node pseudo cluster
>
>
> Tested the following in the Capacity Scheduler:
> - Verified that running apps in labelled queues restricts tasks to the
> labelled nodes.
> - Verified that various queue config properties for CS are refreshable
> - Verified streaming jobs work as expected
> - Verified that user weights work as expected
> - Verified that FairOrderingPolicy in a CS queue will evenly assign
> resources
> - Verified running yarn shell application runs as expected
>
>
>
>
>
>
>
> On Friday, June 1, 2018, 12:48:26 AM CDT, Yongjun Zhang <
> yjzhan...@apache.org> wrote:
>
>
>
>
>
> Greetings all,
>
> I've created the first release candidate (RC0) for Apache Hadoop
> 3.0.3. This is our next maintenance release, following up on 3.0.2. It includes
> about 249
> important fixes and improvements, among which there are 8 blockers. See
> https://issues.apache.org/jira/issues/?filter=12343997
>
> The RC artifacts are available at:
> https://dist.apache.org/repos/dist/dev/hadoop/3.0.3-RC0/
>
> The maven artifacts are available via
> https://repository.apache.org/content/repositories/orgapachehadoop-1126
>
> Please try the release and vote; the vote will run for the usual 5 working
> days, ending on 06/07/2018 PST time. Would really appreciate your
> participation here.
>
> I bumped into quite a few issues along the way; many thanks to everyone
> who helped, especially Sammi Chen, Andrew Wang, Junping Du, Eddy Xu.
>
> Thanks,
>
> --Yongjun
>




[jira] [Created] (HDDS-201) Add name for LeaseManager

2018-06-27 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-201:
-

 Summary: Add name for LeaseManager
 Key: HDDS-201
 URL: https://issues.apache.org/jira/browse/HDDS-201
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM
Reporter: Elek, Marton


During the review of HDDS-195 we realised that one server could have multiple 
LeaseManagers (for example, one for the watchers and one for container creation).

To make monitoring easier, it would be good to use specific names for each 
lease manager.

This jira is about adding a new field (name) to the lease manager, which 
should be defined by a constructor parameter and should be required.

It should be used in the names of the threads and in all log messages 
(something like "Starting CommandWatcher LeaseManager")
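The idea can be sketched as follows (illustrative Python; the real HDDS LeaseManager is Java, and the class shape here is an assumption, not the actual API):

```python
import logging
import threading

logging.basicConfig(level=logging.INFO)

class LeaseManager:
    """Sketch of a lease manager with a required name, so that multiple
    instances in one server are distinguishable in thread names and logs."""

    def __init__(self, name: str, timeout_ms: int = 1000):
        if not name:
            raise ValueError("LeaseManager requires a non-empty name")
        self.name = name
        self.timeout_ms = timeout_ms
        self.log = logging.getLogger(f"LeaseManager-{name}")
        self._monitor = None

    def start(self):
        # The name shows up both in the log line and in the monitor thread name.
        self.log.info("Starting %s LeaseManager", self.name)
        self._monitor = threading.Thread(
            target=lambda: None, name=f"LeaseManager-{self.name}-monitor")
        self._monitor.start()

watcher_mgr = LeaseManager("CommandWatcher")
watcher_mgr.start()
print(watcher_mgr._monitor.name)  # LeaseManager-CommandWatcher-monitor
```

With two instances named "CommandWatcher" and "ContainerCreation", a thread dump or log grep immediately shows which lease manager produced what.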



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13704) Support expiry time in AdlFileSystem

2018-06-27 Thread JIRA
Íñigo Goiri created HDFS-13704:
--

 Summary: Support expiry time in AdlFileSystem
 Key: HDFS-13704
 URL: https://issues.apache.org/jira/browse/HDFS-13704
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Íñigo Goiri
Assignee: Anbang Hu


ADLS supports setting an expiration time for a file.
We can leverage Xattr in FileSystem to set the expiration time.
This could use the same xattr as HDFS-6382.
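The proposal can be modelled with a tiny in-memory stand-in for the FileSystem xattr API (illustrative Python; the xattr name below is hypothetical — HDFS-6382 discusses the actual attribute to reuse):

```python
import time

EXPIRY_XATTR = "user.expiry.time.millis"  # hypothetical xattr name

class FakeFs:
    """Tiny in-memory stand-in for a FileSystem's xattr API."""
    def __init__(self):
        self.xattrs = {}
    def set_xattr(self, path, name, value):
        self.xattrs.setdefault(path, {})[name] = value
    def get_xattr(self, path, name):
        return self.xattrs.get(path, {}).get(name)

def set_expiry(fs, path, expiry_millis):
    # Store the absolute expiry time as an xattr on the file.
    fs.set_xattr(path, EXPIRY_XATTR, str(expiry_millis))

def is_expired(fs, path, now_millis=None):
    raw = fs.get_xattr(path, EXPIRY_XATTR)
    if raw is None:
        return False  # no expiry set: the file never expires
    now = now_millis if now_millis is not None else int(time.time() * 1000)
    return now >= int(raw)

fs = FakeFs()
set_expiry(fs, "/data/tmp.txt", 1_000)
print(is_expired(fs, "/data/tmp.txt", now_millis=2_000))  # True
```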



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [DISCUSS] Merge Storage Policy Satisfier (SPS) [HDFS-10285] feature branch to trunk

2018-06-27 Thread Uma Maheswara Rao G
Hi All,

  After long discussions (offline and on JIRA) on SPS, we came to a
conclusion on JIRA (HDFS-10285) that we will go ahead with the External SPS
merge in the first phase. In this phase, the process will not run inside the
NameNode.
  We will continue the discussion on Internal SPS. The current code base
supports both the internal and external options. We have review comments for
Internal SPS which need some additional work for analysis, testing, etc. We
will move the Internal SPS work under HDFS-12226 (follow-on work for SPS in NN).
We are working on the cleanup task HDFS-13076 for the merge.
For more clarity on the Internal and External SPS proposals, please
refer to JIRA HDFS-10285.

If there are no objections, I will start the vote soon.

Regards,
Uma

On Fri, Nov 17, 2017 at 3:16 PM, Uma Maheswara Rao G 
wrote:

> Update: We worked on the review comments and additional JIRAs above
> mentioned.
>
> >1. After the feedbacks from Andrew, Eddy, Xiao in JIRA reviews, we
> planned to take up the support for recursive API support. HDFS-12291<
> https://issues.apache.org/jira/browse/HDFS-12291>
>
> We provided the recursive API support now.
>
> >2. Xattr optimizations HDFS-12225 <https://issues.apache.org/jira/browse/HDFS-12225>
> Improved this portion as well
>
> >3. Few other review comments already fixed and committed HDFS-12214<
> https://issues.apache.org/jira/browse/HDFS-12214>
> Fixed the comments.
>
> We are continuing to test the feature, and it is working well so far. We also
> uploaded a combined patch and got a good QA report.
>
> If there are no further objections, we would like to go for the merge vote
> tomorrow. Please note: by default this feature will be disabled.
>
> Regards,
> Uma
>
> On Fri, Aug 18, 2017 at 11:27 PM, Gangumalla, Uma <
> uma.ganguma...@intel.com> wrote:
>
>> Hi Andrew,
>>
>> >Great to hear. It'd be nice to define which use cases are met by the
>> current version of SPS, and which will be handled after the merge.
>> After the discussions in JIRA, we planned to support the recursive API as
>> well. The primary use case we planned for was HBase. Please check the next
>> point for use case details.
>>
>> >A bit more detail in the design doc on how HBase would use this feature
>> would also be helpful. Is there an HBase JIRA already?
>> Please find the use case details at this comment in JIRA:
>> https://issues.apache.org/jira/browse/HDFS-10285?focusedCommentId=16120227&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16120227
>>
>> >I also spent some more time with the design doc and posted a few
>> questions on the JIRA.
>> Thank you for the reviews.
>>
>> To summarize the discussions in JIRA:
>> 1. After the feedbacks from Andrew, Eddy, Xiao in JIRA reviews, we
>> planned to take up the support for recursive API support. HDFS-12291<
>> https://issues.apache.org/jira/browse/HDFS-12291> (Rakesh started the
>> work on it)
>> 2. Xattr optimizations HDFS-12225 <https://issues.apache.org/jira/browse/HDFS-12225> (patch available)
>> 3. Few other review comments already fixed and committed HDFS-12214<
>> https://issues.apache.org/jira/browse/HDFS-12214>
>>
>> For tracking the follow-up tasks we filed JIRA HDFS-12226, they should
>> not be critical for merge.
>>
>> Regards,
>> Uma
>>
>> From: Andrew Wang > andrew.w...@cloudera.com>>
>> Date: Friday, July 28, 2017 at 11:33 AM
>> To: Uma Gangumalla > uma.ganguma...@intel.com>>
>> Cc: "hdfs-dev@hadoop.apache.org" <
>> hdfs-dev@hadoop.apache.org>
>> Subject: Re: [DISCUSS] Merge Storage Policy Satisfier (SPS) [HDFS-10285]
>> feature branch to trunk
>>
>> Hi Uma,
>>
>> > If there are still plans to make changes that affect compatibility (the
>> hybrid RPC and bulk DN work mentioned sound like they would), then we can
>> cut branch-3 first, or wait to merge until after these tasks are finished.
>> [Uma] We don’t see those 2 items as high priority for the feature. Users
>> would be able to use the feature with the current code base and API. So, we
>> would consider them only after branch-3. That should be perfectly fine IMO.
>> The current API is very useful for the HBase scenario. In the HBase case, they
>> will rename files into directories with a different policy; they will not
>> always set the policies. So, when they rename files into a different policy
>> directory, they can simply call satisfyStoragePolicy; they don’t need any
>> hybrid API.
>>
>> Great to hear. It'd be nice to define which usecases are met by the
>> current version of SPS, and which will be handled after the merge.
>>
>> A bit more detail in the design doc on how HBase would use this feature
>> would also be helpful. Is there an HBase JIRA already?
>>
>> I also spent some more time with the design doc and posted a few
>> questions on the JIRA.
>>
>> Best,
>> Andrew
>>
>
>
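The rename-then-satisfy flow discussed in this thread can be sketched with a small model (illustrative Python; the real API is a Java call on DistributedFileSystem, and the namespace/policy model below is a simplification):

```python
# Minimal model of the SPS idea: after a rename into a directory with a
# different storage policy, a file's blocks may sit on the wrong tier;
# satisfy_storage_policy queues the file so a satisfier daemon can move
# its blocks to match the effective policy.

class MiniNamespace:
    def __init__(self):
        self.dir_policy = {}      # directory -> storage policy name
        self.file_tier = {}       # file path -> current block tier
        self.pending_moves = []   # files queued for the satisfier

    def set_policy(self, directory, policy):
        self.dir_policy[directory] = policy

    def create(self, path, tier):
        self.file_tier[path] = tier

    def rename(self, src, dst):
        # Rename changes the effective policy but NOT the block placement.
        self.file_tier[dst] = self.file_tier.pop(src)

    def effective_policy(self, path):
        directory = path.rsplit("/", 1)[0]
        return self.dir_policy.get(directory, "HOT")

    def satisfy_storage_policy(self, path):
        # Queue only files whose placement no longer matches the policy;
        # a satisfier daemon would then move blocks asynchronously.
        if self.file_tier[path] != self.effective_policy(path):
            self.pending_moves.append(path)

ns = MiniNamespace()
ns.set_policy("/archive", "COLD")
ns.create("/active/part-0", "HOT")
ns.rename("/active/part-0", "/archive/part-0")
ns.satisfy_storage_policy("/archive/part-0")
print(ns.pending_moves)  # ['/archive/part-0']
```

This is exactly the HBase pattern described above: rename into the differently-policied directory, then one explicit satisfy call, with no hybrid API needed.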


[jira] [Created] (HDDS-200) Create CloseContainerWatcher

2018-06-27 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-200:
---

 Summary: Create CloseContainerWatcher
 Key: HDDS-200
 URL: https://issues.apache.org/jira/browse/HDDS-200
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This will be based on HDDS-195.






[jira] [Created] (HDDS-199) Implement ReplicationManager to replicate ClosedContainer

2018-06-27 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-199:
-

 Summary: Implement ReplicationManager to replicate ClosedContainer
 Key: HDDS-199
 URL: https://issues.apache.org/jira/browse/HDDS-199
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM
Reporter: Elek, Marton
 Fix For: 0.2.1


HDDS/Ozone supports Open and Closed containers. Under specific conditions (the 
container is full, or a node has failed) a container will be closed and will be 
replicated in a different way. Replication of Open containers is handled with 
Ratis and the PipelineManager.

The ReplicationManager should handle the replication of the ClosedContainers. 
The replication information will be sent as an event 
(UnderReplicated/OverReplicated). 

The ReplicationManager will collect all of the events in a priority queue (to 
replicate first the containers where more replicas are missing), calculate the 
destination datanode (first with a very simple algorithm, later by 
calculating scatter-width), and send the Copy/Delete container command to the 
datanode (CommandQueue).

A CopyCommandWatcher/DeleteCommandWatcher is also included to retry the 
copy/delete in case of failure. This is an in-memory structure (based on 
HDDS-195) which can requeue the under-replicated/over-replicated events into the 
priority queue until the confirmation of the copy/delete command arrives.
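The prioritisation described above can be sketched as follows (illustrative Python; class and event names are my own, not the actual HDDS code):

```python
import heapq

# Sketch: replication events are kept in a priority queue ordered by how many
# replicas are missing, so the most under-replicated containers are handled
# first.

class ReplicationQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0  # insertion counter: FIFO tie-break, keeps comparisons on ints

    def add(self, container_id, missing_replicas):
        # heapq is a min-heap, so negate the count: more missing = higher priority.
        heapq.heappush(self._heap, (-missing_replicas, self._seq, container_id))
        self._seq += 1

    def take(self):
        _, _, container_id = heapq.heappop(self._heap)
        return container_id

q = ReplicationQueue()
q.add("container-7", missing_replicas=1)
q.add("container-3", missing_replicas=2)
q.add("container-9", missing_replicas=1)
print(q.take())  # container-3 (two replicas missing, handled first)
```

A command watcher that times out would simply call `add` again with the container's current missing-replica count, which is the requeue behaviour the description calls for.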






[jira] [Created] (HDDS-198) Create AuditLogger mechanism to be used by KSM, SCM and Datanode

2018-06-27 Thread Dinesh Chitlangia (JIRA)
Dinesh Chitlangia created HDDS-198:
--

 Summary: Create AuditLogger mechanism to be used by KSM, SCM and 
Datanode
 Key: HDDS-198
 URL: https://issues.apache.org/jira/browse/HDDS-198
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia


This Jira tracks the work to create a custom AuditLogger which can be used by 
KSM, SCM, and Datanode for auditing read/write events.

The AuditLogger will be designed using log4j2, leveraging the MarkerFilter 
approach so that auditing of read/write events can be turned on/off by simply 
changing the log config.
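The marker-filter idea can be illustrated with Python's logging module (the real AuditLogger uses log4j2 markers; this sketch only shows how a filter keyed on a record's marker lets configuration, not code, decide which audit events are emitted):

```python
import logging

class MarkerFilter(logging.Filter):
    """Pass only records whose 'marker' attribute is in the enabled set."""
    def __init__(self, enabled_markers):
        super().__init__()
        self.enabled = set(enabled_markers)
    def filter(self, record):
        return getattr(record, "marker", None) in self.enabled

audit = logging.getLogger("audit")
audit.setLevel(logging.INFO)
handler = logging.StreamHandler()
# Config decision: audit WRITE events only; flip the set to change behaviour.
handler.addFilter(MarkerFilter({"WRITE"}))
audit.addHandler(handler)

audit.info("key created: /vol/bucket/key1", extra={"marker": "WRITE"})  # emitted
audit.info("key read: /vol/bucket/key1", extra={"marker": "READ"})      # filtered out
```

In log4j2 the same switch is a `MarkerFilter` element in the XML config, so operators can enable or disable read/write auditing without redeploying.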






[jira] [Created] (HDDS-197) DataNode should return ContainerClosingException/ContainerClosedException (CCE) to the client if the container is in Closing/Closed state

2018-06-27 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-197:


 Summary: DataNode should return 
ContainerClosingException/ContainerClosedException (CCE) to the client if the 
container is in Closing/Closed state
 Key: HDDS-197
 URL: https://issues.apache.org/jira/browse/HDDS-197
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client, Ozone Datanode
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.2.1


SCM queues the CloseContainer command to the DataNode over the heartbeat 
response, which is handled by the Ratis server inside the Datanode. In case the 
container transitions to the CLOSING/CLOSED state while the ozone client is 
writing data, it should throw 
ContainerClosingException/ContainerClosedException accordingly. These 
exceptions will be handled by the client, which will retry to get the last 
committed BlockInfo from the Datanode and update the OzoneMaster.
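The client-side handling described above can be sketched like this (illustrative Python; the exception and helper names are my own, not the real Ozone client API):

```python
# Sketch: on a "container closing/closed" error, run recovery (fetch the last
# committed BlockInfo, pick a new container) and retry the write.

class ContainerClosedError(Exception):
    pass

def write_with_retry(write_fn, recover_fn, max_retries=3):
    """Call write_fn; on ContainerClosedError, run recovery and retry."""
    for attempt in range(max_retries):
        try:
            return write_fn()
        except ContainerClosedError:
            recover_fn()
    raise RuntimeError("write failed after retries")

state = {"closed": True, "recoveries": 0}

def write_fn():
    if state["closed"]:
        raise ContainerClosedError("container is CLOSED")
    return "ok"

def recover_fn():
    state["recoveries"] += 1
    state["closed"] = False   # pretend we switched to an open container

print(write_with_retry(write_fn, recover_fn))  # ok
print(state["recoveries"])  # 1
```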






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-06-27 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/824/

[Jun 26, 2018 4:12:05 AM] (Bharat) HDDS-192:Create new SCMCommand to request a 
replication of a container.
[Jun 26, 2018 6:25:15 PM] (eyang) YARN-8214.  Change default RegistryDNS port.  
   Contributed by
[Jun 26, 2018 9:34:57 PM] (eyang) YARN-8108.  Added option to disable loading 
existing filters to prevent 
[Jun 26, 2018 10:21:35 PM] (miklos.szegedi) YARN-8461. Support strict memory 
control on individual container with




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.util.TestBasicDiskValidator 
   hadoop.hdfs.TestFileAppend 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.hdfs.TestDFSClientRetries 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageSchema 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageDomain 
   hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestMRTimelineEventHandling 

   cc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/824/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/824/artifact/out/diff-compile-javac-root.txt [352K]

   checkstyle:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/824/artifact/out/diff-checkstyle-root.txt [4.0K]

   pathlen:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/824/artifact/out/pathlen.txt [12K]

   pylint:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/824/artifact/out/diff-patch-pylint.txt [24K]

   shellcheck:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/824/artifact/out/diff-patch-shellcheck.txt [20K]

   shelldocs:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/824/artifact/out/diff-patch-shelldocs.txt [16K]

   whitespace:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/824/artifact/out/whitespace-eol.txt [9.4M]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/824/artifact/out/whitespace-tabs.txt [1.1M]

   xml:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/824/artifact/out/xml.txt [4.0K]

   findbugs:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/824/artifact/out/branch-findbugs-hadoop-hdds_client.txt [56K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/824/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt [48K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/824/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt [56K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/824/artifact/out/branch-findbugs-hadoop-hdds_tools.txt [12K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/824/artifact/out/branch-findbugs-hadoop-ozone_client.txt [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/824/artifact/out/branch-findbugs-hadoop-ozone_common.txt [24K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/824/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/824/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/824/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt [8.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/824/artifact/out/branch-findbugs-hadoop-ozone_tools.txt [4.0K]

   javadoc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/824/artifact/out/diff-javadoc-javadoc-root.txt [760K]

   unit:

   

Re: HADOOP-15124 review

2018-06-27 Thread Arpit Agarwal
Hi Igor, it is perfectly fine to request a code review on the dev mailing list.


From: Igor Dvorzhak 
Date: Tuesday, June 26, 2018 at 9:27 PM
To: 
Cc: , 
Subject: Re: HADOOP-15124 review

Hi Yiqun,

Thank you for the explanation. I didn't know that this is not appropriate and 
will not do so in future.

Thanks,
Igor


On Tue, Jun 26, 2018 at 7:18 PM Lin,Yiqun(vip.com) 
mailto:yiqun01@vipshop.com>> wrote:
Hi Igor,

It’s not appropriate to ask for a code review request on the dev mailing list. 
The dev mailing list is mainly used for discussion and for answering users’ 
questions. You can ask for the review under the specific JIRA, where it will be 
seen by committers and others. If they have time, they will help with the review.

Thanks,
Yiqun

From: Igor Dvorzhak 
[mailto:i...@google.com.INVALID]
Sent: June 26, 2018 23:52
To: hdfs-dev@hadoop.apache.org; 
common-...@hadoop.apache.org
Subject: Re: HADOOP-15124 review

+common-...@hadoop.apache.org

On Tue, Jun 26, 2018 at 8:49 AM Igor Dvorzhak 
mailto:i...@google.com>>>
 wrote:
Hello,

I have a patch that improves the FileSystem.Statistics implementation, and I 
would like to commit it.

May somebody review it?

Best regards,
Igor Dvorzhak


Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-06-27 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/510/

[Jun 25, 2018 1:38:57 PM] (stevel) HADOOP-14396. Add builder interface to 
FileContext. Contributed by  Lei
[Jun 25, 2018 4:50:27 PM] (inigoiri) HADOOP-15458. 
TestLocalFileSystem#testFSOutputStreamBuilder fails on
[Jun 25, 2018 5:38:03 PM] (rohithsharmaks) YARN-8457. Compilation is broken 
with -Pyarn-ui.
[Jun 25, 2018 8:05:22 PM] (aengineer) HDDS-191. Queue SCMCommands via 
EventQueue in SCM. Contributed by Elek,
[Jun 25, 2018 8:59:41 PM] (mackrorysd) HADOOP-15423. Merge fileCache and 
dirCache into one single cache in
[Jun 25, 2018 10:36:45 PM] (todd) HADOOP-15550. Avoid static initialization of 
ObjectMappers
[Jun 25, 2018 10:47:54 PM] (miklos.szegedi) YARN-8438. 
TestContainer.testKillOnNew flaky on trunk. Contributed by
[Jun 26, 2018 4:12:05 AM] (Bharat) HDDS-192:Create new SCMCommand to request a 
replication of a container.




-1 overall


The following subsystems voted -1:
compile mvninstall pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
   hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
   hadoop.fs.TestFileUtil 
   hadoop.fs.TestFsShellCopy 
   hadoop.fs.TestFsShellList 
   hadoop.http.TestHttpServer 
   hadoop.http.TestHttpServerLogs 
   hadoop.io.nativeio.TestNativeIO 
   hadoop.ipc.TestIPC 
   hadoop.ipc.TestSocketFactory 
   hadoop.metrics2.impl.TestStatsDMetrics 
   hadoop.security.TestSecurityUtil 
   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.security.token.TestDtUtilShell 
   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.util.TestNativeCodeLoader 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.server.balancer.TestBalancerRPCDelay 
   hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl 
   hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage 
   hadoop.hdfs.server.datanode.TestBlockScanner 
   hadoop.hdfs.server.datanode.TestDataNodeFaultInjector 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport 
   hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   hadoop.hdfs.server.namenode.TestCacheDirectives 
   hadoop.hdfs.server.namenode.TestReencryptionWithKMS 
   hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs 
   hadoop.hdfs.TestDFSShell 
   hadoop.hdfs.TestDFSStripedInputStream 
   hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.TestDFSUpgradeFromImage 
   hadoop.hdfs.TestFetchImage 
   hadoop.hdfs.TestHDFSFileSystemContract 
   hadoop.hdfs.TestLeaseRecovery2 
   hadoop.hdfs.TestLocalDFS 
   hadoop.hdfs.TestMaintenanceState 
   hadoop.hdfs.TestPread 
   hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy 
   hadoop.hdfs.TestSecureEncryptionZoneWithKMS 
   hadoop.hdfs.TestTrashWithSecureEncryptionZones 
   hadoop.hdfs.tools.TestDFSAdmin 
   hadoop.hdfs.web.TestWebHDFS 
   hadoop.hdfs.web.TestWebHdfsUrl 
   hadoop.fs.http.server.TestHttpFSServerWebServer 
   hadoop.yarn.logaggregation.filecontroller.ifile.TestLogAggregationIndexFileController 
   hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch 
   hadoop.yarn.server.nodemanager.containermanager.TestAuxServices 
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestContainerExecutor 
   hadoop.yarn.server.nodemanager.TestNodeManagerResync 
   hadoop.yarn.server.webproxy.amfilter.TestAmFilter 
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.timeline.security.TestTimelineAuthenticationFilterForV1 
   hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.TestFSSchedulerConfigurationStore 