Re: Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2017-10-21 Thread Allen Wittenauer

To whoever set this up:

There was a job config problem where the Jenkins branch parameter wasn’t passed 
to Yetus, so both of these reports were actually run against trunk.  I’ve fixed 
this job (as well as the other jobs) to honor that parameter and kicked off 
a new run with these changes.





Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2017-10-21 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/2/

[Oct 20, 2017 9:27:59 PM] (stevel) HADOOP-14942. DistCp#cleanup() should check 
whether jobFS is null.
[Oct 21, 2017 12:19:29 AM] (subru) YARN-6871. Add additional deSelects params in




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 
   hadoop.hdfs.TestReadStripedFileWithMissingBlocks 
   hadoop.hdfs.server.namenode.ha.TestPipelinesFailover 
   hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency 
   hadoop.yarn.server.nodemanager.scheduler.TestDistributedScheduler 
   hadoop.yarn.server.resourcemanager.TestApplicationMasterService 
   hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA 
   hadoop.yarn.server.resourcemanager.TestRMHAForNodeLabels 
   hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSLeafQueue 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 
   hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler 
   hadoop.yarn.server.resourcemanager.TestApplicationMasterLauncher 
   hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler 
   hadoop.yarn.server.resourcemanager.TestRMHA 
   
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification 
   hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched 
   hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesApps 
   hadoop.yarn.server.resourcemanager.rmcontainer.TestRMContainerImpl 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation 
   hadoop.yarn.server.resourcemanager.TestRMHATimelineCollectors 
   hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
   hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA 
   
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerLazyPreemption
 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerDynamicBehavior
 
   hadoop.yarn.server.TestDiskFailures 

Timed out junit tests :

   
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.TestZKConfigurationStore
 
   
org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore 
   org.apache.hadoop.yarn.server.resourcemanager.TestLeaderElectorService 
   org.apache.hadoop.mapred.pipes.TestPipeApplication 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/2/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/2/artifact/out/diff-compile-javac-root.txt
  [284K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/2/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/2/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/2/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/2/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/2/artifact/out/whitespace-eol.txt
  [8.5M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/2/artifact/out/whitespace-tabs.txt
  [292K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/2/artifact/out/diff-javadoc-javadoc-root.txt
  [760K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [308K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/2/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [40K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/2/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [728K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/2/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [12K]
   
https://builds.apache.org/job/hadoop-q

Re: [DISCUSS] Feature Branch Merge and Security Audits

2017-10-21 Thread larry mccay
New Revision...

This revision acknowledges the reality that we often have multiple phases
in a feature's lifecycle and that we need to account for each phase.
It has also been made more generic.
I have created a Tech Preview Security Audit list and a GA Readiness
Security Audit list, and included suggested items in the GA Readiness list.

It has also been suggested that we publish the information as part of docs
so that the state of such features can be easily determined from these
pages. We can discuss this aspect as well.

Thoughts?

*Tech Preview Security Audit*
For features that are being merged without full security model coverage,
there needs to be a baseline of assurances that they do not introduce new
attack vectors in deployments, whether from actual releases or just built
from trunk.

*1. UIs*

1.1. Are there new UIs added with this merge?
1.2. Are they enabled/accessible by default?
1.3. Are they hosted in existing processes or as part of a new
process/server?
1.4. If new process/server, is it launched by default?

*2. APIs*

2.1. Are there new REST APIs added with this merge?
2.2. Are they enabled by default?
2.3. Are there RPC based APIs added with this merge?
2.4. Are they enabled by default?

*3. Secure Clusters*

3.1. Is this feature disabled completely in secure deployments?
3.2. If not, is there some justification as to why it should be available?

*4. CVEs*

4.1. Have all dependencies introduced by this merge been checked for known
issues?


--


*GA Readiness Security Audit*
At this point, we are merging full or partial security model
implementations.
Let's inventory what is covered by the model at this point and whether
future merges are required for full coverage.

*1. UIs*

1.1. What sort of validation is being done on any accepted user input?
(pointers to code would be appreciated)
1.2. What explicit protections have been built in for (pointers to code
would be appreciated):
  1.2.1. cross site scripting
  1.2.2. cross site request forgery
  1.2.3. click jacking (X-Frame-Options)
1.3. What sort of authentication is required for access to the UIs?
  1.3.1. Kerberos
1.3.1.1. Has TGT renewal been accounted for?
1.3.1.2. SPNEGO support?
1.3.1.3. Delegation token?
  1.3.2. Proxy User ACL?
1.4. What authorization is available for determining who can access which
capabilities of the UIs, whether for viewing or for modifying data and/or
related processes?
1.5. Is there any input that will ultimately be persisted in configuration
for executing shell commands or processes?
1.6. Do the UIs support the trusted proxy pattern with doas impersonation?
1.7. Is there TLS/SSL support?
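
To make items 1.2.2 and 1.2.3 concrete, here is a minimal illustrative
sketch (Python, not Hadoop code) of the kind of header-based protections an
auditor would look for: a custom-header check on state-changing requests
(the same idea behind Hadoop's RestCsrfPreventionFilter) and clickjacking
response headers. The header name and function names here are assumptions
for illustration, not an actual API.

```python
# Sketch of the header checks items 1.2.2 (CSRF) and 1.2.3 (click jacking)
# ask about. Browsers will not let cross-site JavaScript attach arbitrary
# custom headers, so requiring one on state-changing requests is a common
# CSRF defense. Names below are illustrative assumptions.
CSRF_HEADER = "X-XSRF-HEADER"          # header name is an assumption
SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}

def csrf_check_passes(method: str, headers: dict) -> bool:
    """Allow read-only methods; require the custom header otherwise."""
    if method.upper() in SAFE_METHODS:
        return True
    return CSRF_HEADER in headers

def security_response_headers() -> dict:
    """Headers a UI response should carry to mitigate clickjacking."""
    return {
        "X-Frame-Options": "SAMEORIGIN",     # item 1.2.3
        "X-Content-Type-Options": "nosniff",
    }
```

A review would then verify that every state-changing endpoint passes
through such a filter and that every UI response carries these headers.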

*2. REST APIs*

2.1. Do the REST APIs support the trusted proxy pattern with doas
impersonation capabilities?
2.2. What explicit protections have been built in for:
  2.2.1. cross site scripting (XSS)
  2.2.2. cross site request forgery (CSRF)
  2.2.3. XML External Entity (XXE)
2.3. What is being used for authentication - Hadoop Auth Module?
2.4. Are there separate processes for the HTTP resources (UIs and REST
endpoints) or are they part of existing processes?
2.5. Is there TLS/SSL support?
2.6. Are there new CLI commands and/or clients for accessing the REST APIs?
2.7. What authorization enforcement points are there within the REST APIs?
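
The trusted proxy pattern in items 1.6 and 2.1 can be sketched as follows:
an authenticated service user asks to act on behalf of an end user via a
doas parameter, and the request is honored only if a proxy ACL allows it.
This is a simplified Python illustration; the ACL shape loosely mirrors the
spirit of Hadoop's hadoop.proxyuser.* settings but is an assumption, not
the real configuration model.

```python
# Illustrative sketch of the trusted proxy ("doas") pattern from item 2.1.
# The ACL structure below is a simplification for illustration only.
from typing import Optional

PROXY_ACL = {
    # authenticated proxy user -> users it may impersonate ("*" = any)
    "knox": {"*"},
    "oozie": {"alice", "bob"},
}

def effective_user(auth_user: str, doas: Optional[str]) -> str:
    """Resolve which user a request should run as.

    Without doas, the request runs as the authenticated user. With doas,
    the authenticated user must be an allowed trusted proxy.
    """
    if doas is None or doas == auth_user:
        return auth_user
    allowed = PROXY_ACL.get(auth_user, set())
    if "*" in allowed or doas in allowed:
        return doas
    raise PermissionError(f"{auth_user} may not impersonate {doas}")
```

An audit would check where this decision is enforced and that the ACL is
deny-by-default, as above.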

*3. Encryption*

3.1. Is there any support for encryption of persisted data?
3.2. If so, is KMS and the hadoop key command used for key management?
3.3. KMS interaction with Proxy Users?

*4. Configuration*

4.1. Are there any passwords or secrets being added to configuration?
4.2. If so, are they accessed via Configuration.getPassword() to allow for
provisioning to credential providers?
4.3. Are there any settings that are used to launch Docker containers,
shell out to commands, etc.?

*5. HA*

5.1. Are there provisions for HA?
5.2. Are there any single points of failure?

*6. CVEs*

Dependencies need to have been checked for known issues before we merge.
We don't, however, want to list any CVEs that have been fixed but not
yet released.

6.1. All dependencies checked for CVEs?





Re: [DISCUSS] Feature Branch Merge and Security Audits

2017-10-21 Thread larry mccay
Hi Marton -

I don't think there is any denying that it would be great to have such
documentation for all of those reasons.
If it is a natural extension of getting the checklist information as an
assertion of security state when merging then we can certainly include it.

I think that backfilling all such information across the project is a
different topic altogether and wouldn't want to expand the scope of this
discussion in that direction.

Thanks for the great thoughts on this!

thanks,

--larry





On Sat, Oct 21, 2017 at 3:00 AM, Elek, Marton  wrote:

>
>
> On 10/21/2017 02:41 AM, larry mccay wrote:
>
>>
>> "We might want to start a security section for Hadoop wiki for each of the
>>> services and components.
>>> This helps to track what has been completed."
>>>
>>
>> Do you mean to keep the audit checklist for each service and component
>> there?
>> Interesting idea, I wonder what sort of maintenance that implies and
>> whether we want to take on that burden even though it would be great
>> information to have for future reviewers.
>>
>
> I think we should care about the maintenance of the documentation anyway.
> We also need to maintain all the other documentation. I think it could
> even be part of the generated docs rather than the wiki.
>
> I also suggest filling in this list for the current trunk/3.0 as a first
> step.
>
> 1. It would be very useful documentation for the end-users (some
> answers could link to existing documentation; it exists, but I am not sure
> if all the answers are in the current documentation.)
>
> 2. It would be a good example of how the questions could be answered.
>
> 3. It would help to check if something is missing from the list.
>
> 4. There are feature branches where some of the components are not touched.
> For example, no web UI or no REST service. A prefilled list could help to
> check that the branch doesn't break any existing security functionality on trunk.
>
> 5. It helps to document the security features in one place. If we have a
> list for the existing functionality in the same format, it would be easy to
> merge in the new documentation for new features, since they will be reported
> in the same form. (So it won't be so hard to maintain the list...)
>
> Marton
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-10-21 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/563/

[Oct 19, 2017 6:51:24 AM] (zhz) HDFS-12502. nntop should support a category 
based on
[Oct 19, 2017 1:02:13 PM] (weichiu) HADOOP-14880. [KMS] Document&test missing 
KMS client side configs.
[Oct 19, 2017 1:17:59 PM] (weichiu) HDFS-12619. Do not catch and throw 
unchecked exceptions if IBRs fail to
[Oct 19, 2017 8:25:08 PM] (haibochen) HADOOP-14771. hadoop-client does not 
include hadoop-yarn-client. (Ajay
[Oct 19, 2017 9:44:42 PM] (wangda) YARN-7338. Support same origin policy for 
cross site scripting
[Oct 19, 2017 9:45:44 PM] (wangda) YARN-7345. GPU Isolation: Incorrect minor 
device numbers written to
[Oct 19, 2017 11:39:25 PM] (yufei) YARN-7294. 
TestSignalContainer#testSignalRequestDeliveryToNM fails
[Oct 19, 2017 11:45:18 PM] (cdouglas) HADOOP-14816. Update Dockerfile to use 
Xenial. Contributed by Allen
[Oct 19, 2017 11:51:47 PM] (yufei) YARN-7359. 
TestAppManager.testQueueSubmitWithNoPermission() should be
[Oct 20, 2017 1:08:45 AM] (inigoiri) HDFS-12620. Backporting HDFS-10467 to 
branch-2. Contributed by Inigo
[Oct 20, 2017 1:42:04 AM] (kai.zheng) HDFS-12448. Make sure user defined 
erasure coding policy ID will not
[Oct 20, 2017 4:58:40 AM] (wangda) YARN-7170. Improve bower dependencies for 
YARN UI v2. (Sunil G via
[Oct 20, 2017 8:32:20 AM] (yufei) YARN-4090. Make Collections.sort() more 
efficient by caching resource
[Oct 20, 2017 4:02:06 PM] (eyang) YARN-7353. Improved volume mount check for 
directories and unit test
[Oct 20, 2017 5:00:13 PM] (yufei) YARN-7261. Add debug message for better 
download latency monitoring.
[Oct 20, 2017 6:15:20 PM] (yufei) YARN-7355. TestDistributedShell should be 
scheduler agnostic.
[Oct 20, 2017 8:27:21 PM] (wang) HDFS-12497. Re-enable 
TestDFSStripedOutputStreamWithFailure tests.
[Oct 20, 2017 9:24:17 PM] (haibochen) YARN-7372.
[Oct 20, 2017 9:27:04 PM] (stevel) HADOOP-14942. DistCp#cleanup() should check 
whether jobFS is null.
[Oct 20, 2017 10:54:15 PM] (wangda) YARN-7318. Fix shell check warnings of SLS. 
(Gergely Novák via wangda)
[Oct 20, 2017 11:13:41 PM] (jzhuge) HADOOP-14954. MetricsSystemImpl#init should 
increment refCount when
[Oct 20, 2017 11:25:04 PM] (xiao) HDFS-12518. Re-encryption should handle task 
cancellation and progress




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.yarn.server.nodemanager.scheduler.TestDistributedScheduler 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerDynamicBehavior
 
   
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification 
   hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler 
   hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA 
   hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA 
   hadoop.yarn.server.resourcemanager.TestRMHAForNodeLabels 
   hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler 
   hadoop.yarn.server.resourcemanager.TestApplicationMasterLauncher 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerLazyPreemption
 
   
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation 
   hadoop.yarn.server.resourcemanager.TestRMHATimelineCollectors 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 
   hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler 
   hadoop.yarn.server.resourcemanager.TestRMHA 
   hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSLeafQueue 
   hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
   hadoop.yarn.server.resourcemanager.TestApplicationMasterService 
   hadoop.yarn.server.resourcemanager.rmcontainer.TestRMContainerImpl 
   hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesApps 
   hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation 

Timed out junit tests :

   
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.TestZKConfigurationStore
 
   
org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore 
   org.apache.hadoop.yarn.server.resourcemanager.TestLeaderElectorService 
   org.apache.hadoop.mapred.pipes.TestPipeApplication 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/5

Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2017-10-21 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/1/

No changes




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.net.TestClusterTopology 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   hadoop.yarn.server.nodemanager.scheduler.TestDistributedScheduler 
   hadoop.yarn.server.resourcemanager.TestApplicationMasterService 
   hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA 
   hadoop.yarn.server.resourcemanager.TestRMHAForNodeLabels 
   hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSLeafQueue 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 
   hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler 
   hadoop.yarn.server.resourcemanager.TestApplicationMasterLauncher 
   hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler 
   hadoop.yarn.server.resourcemanager.TestRMHA 
   
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification 
   hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched 
   hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesApps 
   hadoop.yarn.server.resourcemanager.rmcontainer.TestRMContainerImpl 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation 
   hadoop.yarn.server.resourcemanager.TestRMHATimelineCollectors 
   hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
   hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA 
   
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerLazyPreemption
 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerDynamicBehavior
 
   hadoop.mapred.TestTaskProgressReporter 

Timed out junit tests :

   
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.TestZKConfigurationStore
 
   
org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore 
   org.apache.hadoop.yarn.server.resourcemanager.TestLeaderElectorService 
   org.apache.hadoop.mapred.pipes.TestPipeApplication 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/1/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/1/artifact/out/diff-compile-javac-root.txt
  [284K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/1/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/1/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/1/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/1/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/1/artifact/out/whitespace-eol.txt
  [8.5M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/1/artifact/out/whitespace-tabs.txt
  [292K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/1/artifact/out/diff-javadoc-javadoc-root.txt
  [760K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [148K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [340K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [68K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [736K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/1/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
  [60K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/1/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [84K]

   asflice

Re: [DISCUSS] Feature Branch Merge and Security Audits

2017-10-21 Thread Elek, Marton



On 10/21/2017 02:41 AM, larry mccay wrote:



"We might want to start a security section for Hadoop wiki for each of the
services and components.
This helps to track what has been completed."


Do you mean to keep the audit checklist for each service and component
there?
Interesting idea, I wonder what sort of maintenance that implies and
whether we want to take on that burden even though it would be great
information to have for future reviewers.


I think we should care about the maintenance of the documentation 
anyway. We also need to maintain all the other documentation. I think 
it could even be part of the generated docs rather than the wiki.


I also suggest filling in this list for the current trunk/3.0 as a first 
step.


1. It would be very useful documentation for the end-users (some 
answers could link to existing documentation; it exists, but I am not 
sure if all the answers are in the current documentation.)


2. It would be a good example of how the questions could be answered.

3. It would help to check if something is missing from the list.

4. There are feature branches where some of the components are not 
touched. For example, no web UI or no REST service. A prefilled list 
could help to check that the branch doesn't break any existing security 
functionality on trunk.


5. It helps to document the security features in one place. If we have a 
list for the existing functionality in the same format, it would be easy 
to merge in the new documentation for new features, since they will be 
reported in the same form. (So it won't be so hard to maintain the list...)


Marton

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org