[jira] [Created] (HDFS-13699) Add DFSClient sending handshake token to DataNode, and allow DataNode overwrite downstream QOP

2018-06-25 Thread Chen Liang (JIRA)
Chen Liang created HDFS-13699:
-

 Summary: Add DFSClient sending handshake token to DataNode, and 
allow DataNode overwrite downstream QOP
 Key: HDFS-13699
 URL: https://issues.apache.org/jira/browse/HDFS-13699
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chen Liang
Assignee: Chen Liang


Building on the other Jiras under HDFS-13541, this Jira allows the DFSClient to 
relay the encrypted secret to the DataNode. The encrypted message carries the QOP 
that the client and NameNode have negotiated; the DataNode decrypts it and 
enforces that QOP for the client connection. This Jira also covers overwriting 
the downstream QOP, as described in the HDFS-13541 design doc, i.e. allowing an 
inter-DN QOP that differs from the client-DN QOP.
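
For illustration, a minimal conceptual sketch of the intended flow (not the 
actual patch: decryptQopSecret and the downstream override are assumptions; only 
the "javax.security.sasl.qop" property name is standard SASL):

{code:java}
// Conceptual sketch only, not the real patch. decryptQopSecret() and the
// downstream override are assumptions; real code would decrypt the secret
// with a key shared with the NameNode.
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class QopHandshakeSketch {

  // Hypothetical: the DataNode decrypts the secret the client relayed from
  // the NameNode and recovers the QOP string negotiated there. Decoding the
  // bytes directly keeps the sketch self-contained.
  static String decryptQopSecret(byte[] encryptedSecret) {
    return new String(encryptedSecret, StandardCharsets.UTF_8);
  }

  // Build the SASL properties the DataNode enforces for a connection.
  static Map<String, String> saslPropsFor(String qop) {
    Map<String, String> props = new HashMap<>();
    props.put("javax.security.sasl.qop", qop); // "auth", "auth-int" or "auth-conf"
    return props;
  }

  public static void main(String[] args) {
    byte[] fromClient = "auth-conf".getBytes(StandardCharsets.UTF_8);

    // The client-DN connection enforces the QOP the client and NameNode used...
    System.out.println("client-DN: " + saslPropsFor(decryptQopSecret(fromClient)));

    // ...while a DataNode-side setting may overwrite the downstream QOP, so
    // inter-DN traffic can run at a different level (hypothetical config).
    String downstreamQop = "auth";
    System.out.println("inter-DN:  " + saslPropsFor(downstreamQop));
  }
}
{code}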



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-194) Remove NodePoolManager and node pool handling from SCM

2018-06-25 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-194:
-

 Summary: Remove NodePoolManager and node pool handling from SCM
 Key: HDDS-194
 URL: https://issues.apache.org/jira/browse/HDDS-194
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM
Reporter: Elek, Marton
Assignee: Elek, Marton
 Fix For: 0.2.1


The current code uses NodePoolManager and ContainerSupervisor to group the nodes 
into smaller groups (pools) and to handle the pull-based node reports group by 
group.

This code is no longer used, as we switched back to a push-based model. On the 
datanode side the reports can be handled by the specific report handlers, and on 
the SCM side the reports are processed by the SCMHeartbeatDispatcher, which sends 
the events to the EventQueue.

The NodePool abstraction can therefore be removed from the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-193) Make Datanode heartbeat dispatcher in SCM event based

2018-06-25 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-193:
-

 Summary: Make Datanode heartbeat dispatcher in SCM event based
 Key: HDDS-193
 URL: https://issues.apache.org/jira/browse/HDDS-193
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM
Reporter: Elek, Marton
Assignee: Elek, Marton
 Fix For: 0.2.1


HDDS-163 introduced a new dispatcher on the SCM side to send the heartbeat 
report parts to the appropriate listeners. I propose to make it EventQueue 
based, so these async calls are handled and monitored the same way as the other 
events.

Report handlers would subscribe to the specific events to process the 
information, as in the sketch below.
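
A minimal sketch of the proposed pattern (the real HDDS EventQueue API may 
differ; event names and the Consumer-based handler type are placeholders):

{code:java}
// Sketch of the pattern only; the real HDDS EventQueue API may differ.
// Event names and the Consumer-based handler type are placeholders.
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class EventQueueSketch {

  static class EventQueue {
    private final Map<String, List<Consumer<Object>>> handlers = new HashMap<>();

    // A report handler subscribes to one specific event type.
    void addHandler(String eventType, Consumer<Object> handler) {
      handlers.computeIfAbsent(eventType, k -> new ArrayList<>()).add(handler);
    }

    // The dispatcher fires events instead of calling listeners directly, so
    // the async calls can be handled/monitored like any other event.
    void fireEvent(String eventType, Object payload) {
      handlers.getOrDefault(eventType, Collections.emptyList())
              .forEach(h -> h.accept(payload));
    }
  }

  public static void main(String[] args) {
    EventQueue queue = new EventQueue();
    queue.addHandler("NODE_REPORT", r -> System.out.println("node handler: " + r));
    queue.addHandler("CONTAINER_REPORT", r -> System.out.println("container handler: " + r));

    // The heartbeat dispatcher splits a heartbeat into its report parts and
    // fires one event per part.
    queue.fireEvent("NODE_REPORT", "disks=ok");
    queue.fireEvent("CONTAINER_REPORT", "containers=42");
  }
}
{code}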

  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-06-25 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/822/

[Jun 25, 2018 1:15:31 AM] (wwei) YARN-8443. Total #VCores in cluster metrics is 
wrong when




-1 overall


The following subsystems voted -1:
asflicense compile findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
   hadoop.hdfs.web.TestWebHdfsTimeouts
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageSchema
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageDomain
   hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
   hadoop.yarn.applications.distributedshell.TestDistributedShell
   hadoop.mapred.TestMRTimelineEventHandling

   compile:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/822/artifact/out/patch-compile-root.txt  [560K]

   cc:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/822/artifact/out/patch-compile-root.txt  [560K]

   javac:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/822/artifact/out/patch-compile-root.txt  [560K]

   checkstyle:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/822/artifact/out/diff-checkstyle-root.txt  [4.0K]

   pathlen:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/822/artifact/out/pathlen.txt  [12K]

   pylint:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/822/artifact/out/diff-patch-pylint.txt  [24K]

   shellcheck:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/822/artifact/out/diff-patch-shellcheck.txt  [20K]

   shelldocs:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/822/artifact/out/diff-patch-shelldocs.txt  [16K]

   whitespace:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/822/artifact/out/whitespace-eol.txt  [9.4M]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/822/artifact/out/whitespace-tabs.txt  [1.1M]

   xml:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/822/artifact/out/xml.txt  [4.0K]

   findbugs:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/822/artifact/out/branch-findbugs-hadoop-hdds_client.txt  [56K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/822/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt  [48K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/822/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt  [60K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/822/artifact/out/branch-findbugs-hadoop-hdds_tools.txt  [12K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/822/artifact/out/branch-findbugs-hadoop-ozone_client.txt  [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/822/artifact/out/branch-findbugs-hadoop-ozone_common.txt  [24K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/822/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt  [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/822/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt  [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/822/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt  [8.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/822/artifact/out/branch-findbugs-hadoop-ozone_tools.txt  [4.0K]

   javadoc:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/822/artifact/out/diff-javadoc-javadoc-root.txt  [760K]

   unit:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/822/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt  [340K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/822/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-tests.txt  [32K]

Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-06-25 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/508/

[Jun 23, 2018 10:49:44 PM] (aengineer) HDDS-184. Upgrade common-langs version 
to 3.7 in
[Jun 24, 2018 8:05:04 AM] (aengineer) HDDS-177. Create a releasable ozonefs 
artifact Contributed by Marton,




-1 overall


The following subsystems voted -1:
compile mvninstall pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
   hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
   hadoop.fs.TestFileUtil 
   hadoop.fs.TestFsShellCopy 
   hadoop.fs.TestFsShellList 
   hadoop.fs.TestLocalFileSystem 
   hadoop.http.TestHttpServer 
   hadoop.http.TestHttpServerLogs 
   hadoop.io.nativeio.TestNativeIO 
   hadoop.ipc.TestIPC 
   hadoop.ipc.TestSocketFactory 
   hadoop.metrics2.impl.TestStatsDMetrics 
   hadoop.security.TestSecurityUtil 
   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.security.token.TestDtUtilShell 
   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.util.TestNativeCodeLoader 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped 
   hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl 
   hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage 
   hadoop.hdfs.server.datanode.TestBlockScanner 
   hadoop.hdfs.server.datanode.TestDataNodeFaultInjector 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   hadoop.hdfs.server.namenode.TestCacheDirectives 
   hadoop.hdfs.server.namenode.TestReencryptionWithKMS 
   hadoop.hdfs.TestDatanodeRegistration 
   hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs 
   hadoop.hdfs.TestDFSShell 
   hadoop.hdfs.TestDFSStripedOutputStream 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.TestDFSUpgradeFromImage 
   hadoop.hdfs.TestFetchImage 
   hadoop.hdfs.TestFileCorruption 
   hadoop.hdfs.TestHDFSFileSystemContract 
   hadoop.hdfs.TestLeaseRecovery2 
   hadoop.hdfs.TestPread 
   hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy 
   hadoop.hdfs.TestSecureEncryptionZoneWithKMS 
   hadoop.hdfs.TestTrashWithSecureEncryptionZones 
   hadoop.hdfs.tools.TestDFSAdmin 
   hadoop.hdfs.web.TestWebHDFS 
   hadoop.hdfs.web.TestWebHdfsUrl 
   hadoop.fs.http.server.TestHttpFSServerWebServer
   hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch
   hadoop.yarn.server.nodemanager.containermanager.TestAuxServices
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService
   hadoop.yarn.server.nodemanager.TestContainerExecutor
   hadoop.yarn.server.nodemanager.TestNodeManagerResync
   hadoop.yarn.server.webproxy.amfilter.TestAmFilter
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
   hadoop.yarn.server.timeline.security.TestTimelineAuthenticationFilterForV1
   hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy
   hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.TestFSSchedulerConfigurationStore
   hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.TestLeveldbConfigurationStore
   hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler
   hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing
   hadoop.yarn.server.resourcemanager.scheduler.constraint.TestPlacementProcessor
   hadoop.yarn.server.resourcemanager.scheduler.fair.TestAllocationFileLoaderService
   hadoop.yarn.server.resourcemanager.TestResourceTrackerService
   hadoop.yarn.client.api.impl.TestAMRMProxy
   hadoop.yarn.client.api.impl.TestNMClient
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore
   hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage

[jira] [Created] (HDFS-13698) [PROVIDED Phase 2] Provided ReplicaMap should be LRU with separate lookup from normal Replicas

2018-06-25 Thread Ewan Higgs (JIRA)
Ewan Higgs created HDFS-13698:
-

 Summary: [PROVIDED Phase 2] Provided ReplicaMap should be LRU with 
separate lookup from normal Replicas
 Key: HDFS-13698
 URL: https://issues.apache.org/jira/browse/HDFS-13698
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ewan Higgs
Assignee: Virajith Jalaparti


The existing ReplicaMap uses {{ExtendedBlock}} to look up the replica 
information. However, Provided replicas should not be in the ReplicaMap; instead 
they should be looked up in the AliasMap. To handle this case, the ReplicaMap 
lookups should be split into two phases: a lookup in the normal ReplicaMap (as 
is done now), followed by a lookup in the AliasMap to see whether there is also 
a Provided replica. This second, provided lookup should be sped up with an LRU 
cache, as in the sketch below.
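
As an illustration of the caching idea only (this is not the proposed ReplicaMap 
code; class and key names are made up), a LinkedHashMap in access order gives a 
simple LRU:

{code:java}
// Illustration of the LRU idea only, not the proposed ReplicaMap code;
// class and key names are made up. A LinkedHashMap in access order that
// evicts its eldest entry caps the cost of repeated provided lookups.
import java.util.LinkedHashMap;
import java.util.Map;

public class ProvidedLookupCache<K, V> extends LinkedHashMap<K, V> {
  private final int maxEntries;

  public ProvidedLookupCache(int maxEntries) {
    super(16, 0.75f, true); // accessOrder=true yields LRU iteration order
    this.maxEntries = maxEntries;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
    return size() > maxEntries; // drop the least-recently-used entry
  }

  public static void main(String[] args) {
    ProvidedLookupCache<String, String> cache = new ProvidedLookupCache<>(2);
    cache.put("blk_1", "providedReplica1");
    cache.put("blk_2", "providedReplica2");
    cache.get("blk_1");                     // touch blk_1, blk_2 becomes eldest
    cache.put("blk_3", "providedReplica3"); // evicts blk_2
    System.out.println(cache.keySet());     // [blk_1, blk_3]
  }
}
{code}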



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13697) EDEK decrypt fails due to proxy user being lost because of empty AccessControllerContext

2018-06-25 Thread Zsolt Venczel (JIRA)
Zsolt Venczel created HDFS-13697:


 Summary: EDEK decrypt fails due to proxy user being lost because 
of empty AccessControllerContext
 Key: HDFS-13697
 URL: https://issues.apache.org/jira/browse/HDFS-13697
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Zsolt Venczel
Assignee: Zsolt Venczel


When KeyProviderCryptoExtension.decryptEncryptedKey is called, the call stack 
might not contain a doAs privileged execution call (in the DFSClient, for 
example). This results in losing the proxy user from the UGI, as 
UGI.getCurrentUser finds no AccessControllerContext and re-logs in as the login 
user only.
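
The usual remedy is an explicit doAs wrapper, illustrated by this hedged sketch 
(user names are placeholders; the body stands in for the real call path):

{code:java}
// Hedged sketch of the doAs wrapper only; user names are placeholders and
// the body stands in for the real decryptEncryptedKey call path.
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

public class ProxyUserDoAsSketch {
  public static void main(String[] args) throws Exception {
    UserGroupInformation realUser = UserGroupInformation.getLoginUser(); // e.g. oozie
    UserGroupInformation proxyUgi =
        UserGroupInformation.createProxyUser("example_user", realUser);

    // Without this doAs, UGI.getCurrentUser() deep in the KMS client finds
    // no AccessControllerContext and falls back to the login user, so the
    // DECRYPT_EEK authorization check runs as "oozie".
    proxyUgi.doAs((PrivilegedExceptionAction<Void>) () -> {
      // ... DFSClient / KeyProviderCryptoExtension.decryptEncryptedKey(...) here
      return null;
    });
  }
}
{code}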

This can cause, for example, the following: suppose the oozie user is set up to 
perform actions on behalf of example_user, but oozie itself is forbidden to 
decrypt any EDEK (for security reasons). Due to the above issue, the 
example_user entitlements are lost from the UGI and the following error is 
reported:

{code}
[0] 
SERVER[xxx] USER[example_user] GROUP[-] TOKEN[] APP[Test_EAR] 
JOB[0020905-180313191552532-oozie-oozi-W] 
ACTION[0020905-180313191552532-oozie-oozi-W@polling_dir_path] Error starting 
action [polling_dir_path]. ErrorType [ERROR], ErrorCode [FS014], Message 
[FS014: User [oozie] is not authorized to perform [DECRYPT_EEK] on key with ACL 
name [encrypted_key]!!]
org.apache.oozie.action.ActionExecutorException: FS014: User [oozie] is not 
authorized to perform [DECRYPT_EEK] on key with ACL name [encrypted_key]!!
 at 
org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:463)
 at 
org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:441)
 at 
org.apache.oozie.action.hadoop.FsActionExecutor.touchz(FsActionExecutor.java:523)
 at 
org.apache.oozie.action.hadoop.FsActionExecutor.doOperations(FsActionExecutor.java:199)
 at 
org.apache.oozie.action.hadoop.FsActionExecutor.start(FsActionExecutor.java:563)
 at 
org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:232)
 at 
org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:63)
 at org.apache.oozie.command.XCommand.call(XCommand.java:286)
 at 
org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:332)
 at 
org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:261)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
Caused by: org.apache.hadoop.security.authorize.AuthorizationException: User 
[oozie] is not authorized to perform [DECRYPT_EEK] on key with ACL name 
[encrypted_key]!!
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
 at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
 at 
org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:157)
 at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:607)
 at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:565)
 at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:832)
 at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:209)
 at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:205)
 at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:94)
 at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:205)
 at 
org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
 at 
org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1440)
 at 
org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1542)
 at 
org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1527)
 at 
org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:408)
 at 
org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:401)
 at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
 at 
org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:401)
 at