Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-01-23 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/

[Jan 22, 2018 6:30:01 PM] (yufei) YARN-7755. Clean up deprecation messages for 
allocation increments in FS
[Jan 22, 2018 9:33:38 PM] (eyang) YARN-7729.  Add support for setting Docker 
PID namespace mode. 
[Jan 22, 2018 11:54:44 PM] (hanishakoneru) HADOOP-15121. Encounter 
NullPointerException when using
[Jan 23, 2018 12:02:32 AM] (hanishakoneru) HDFS-13023. Journal Sync does not 
work on a secure cluster. Contributed




-1 overall


The following subsystems voted -1:
asflicense findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
   org.apache.hadoop.yarn.api.records.Resource.getResources() may expose 
internal representation by returning Resource.resources At Resource.java:[line 234] 
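This FindBugs warning is the EI_EXPOSE_REP pattern: a getter returns a mutable 
internal field directly, so callers can mutate the object's state. The usual 
fix is a defensive copy. A minimal sketch with a hypothetical Holder class 
(not the actual YARN Resource code, which holds richer resource objects):

```java
// Sketch of the EI_EXPOSE_REP fix: return a copy, not the field itself.
// The Holder class and its field are hypothetical, not YARN's Resource.
import java.util.Arrays;

public class Holder {
    private final long[] resources = {1024L, 2L}; // e.g. memory, vcores

    // Exposes internal state: callers could mutate the array in place.
    public long[] getResourcesUnsafe() {
        return resources;
    }

    // Defensive copy: callers get a snapshot they can mutate freely.
    public long[] getResources() {
        return Arrays.copyOf(resources, resources.length);
    }

    public static void main(String[] args) {
        Holder h = new Holder();
        long[] copy = h.getResources();
        copy[0] = 0L;                            // mutating the copy...
        System.out.println(h.getResources()[0]); // ...prints 1024: state intact
    }
}
```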

Failed junit tests :

   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.TestReadStripedFileWithMissingBlocks 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager 
   hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage 
   hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing 
   hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesReservation 
   hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesConfigurationMutation 
   hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy 
   hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesDelegationTokens 
   hadoop.yarn.server.resourcemanager.webapp.TestRMWebappAuthentication 
   hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification 
   hadoop.yarn.server.TestDiskFailures 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapreduce.lib.output.TestJobOutputCommitter 
   hadoop.mapreduce.v2.TestMROldApiJobs 
   hadoop.mapreduce.v2.TestUberAM 
   hadoop.mapred.TestMRTimelineEventHandling 
   hadoop.mapred.TestJobCleanup 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/diff-compile-javac-root.txt
  [280K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/whitespace-eol.txt
  [9.2M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/whitespace-tabs.txt
  [292K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/diff-javadoc-javadoc-root.txt
  [760K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [452K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [1.1M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [380K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapre

Re: When are incompatible changes acceptable (HDFS-12990)

2018-01-23 Thread Tsz Wo (Nicholas), Sze
> For a), since we have *just* released 3.0.0, it's safe to say we have
> tremendously more users on 2.x than 3.0.0 now. If we make the release notes
> clear, this will benefit tremendously more users than harming.


Unfortunately, 3.0.0 is a GA release (neither alpha nor beta).  Once we have 
signed a contract, it does not matter whether it was signed just now or a long 
time ago.

A big downside is that Hadoop becomes known for being incompatible across its 
major releases, and for not keeping its own promises.  Is the Hadoop brand 
name important?

It seems that the best solution is to have the NN RPC listen on both 8020 and 
9820 by default.  Why not do that instead of being incompatible?

Regards,
Tsz-Wo


On Tuesday, January 23, 2018, 7:09:01 AM GMT+8, Eric Yang 
 wrote:  
 
 Hi Xiao Chen,

I am unaffected by this change either way.  If this change saves people time, 
then we should include it.  The voting outcome for the 3.0.1 release determines 
whether this should be addressed by the community.  I am merely bringing up the 
potential risk of the change.  With proper communication, this should not be an 
issue.

Regards,
Eric

On 1/22/18, 2:37 PM, "Xiao Chen"  wrote:

    Thanks all for the comments, and ATM for initiating the discussion thread.
    (I have just returned from a 2-week PTO).
    
    Reading up all the comments here and from HDFS-12990, I think we all agree
    having different default NN ports will be inconvenient for all, and
    problematic for several cases - ranging from rolling upgrade to various
    downstream use cases. In CDH, this was initially reported from downstream
    (Impala) testing when the scripts there tried to do RPC on 8020 but the NN
    was running on 9820. The intuitive fix was 'change CM to match it'. Later
    cases popped up, including the table location in the Hive metastore and
    custom scripts (including Oozie WFs). The only other real-world example we
    have heard so far is Anu's comment on HDFS-12990, where he did not enjoy
    keeping separate scripts for Hadoop 2 / 3.
    
    Note that this is limited to the NN RPC port (8020 <-> 9820), because the
    other port changes in HDFS-9427 do indeed switch the defaults away from
    ephemeral ports.
    
    The disagreement so far is how to proceed from here.
    1. Not fix it at all.
    
    This means everyone on 2.x will run into this issue when they upgrade.
    
    2. Make NN RPC listen to both 8020 and 9820
    
    Nicholas came up with this idea, which by itself smartly solves the
    compatibility problems.
    
    The downside is that, even though things work during/after an upgrade,
    people will still have to whack-a-mole their existing 8020's. I agree that
    adding this will have the side effect of giving the NN more flexibility in
    the future. We can do this with or without the port change.
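    A minimal sketch of what option 2 could look like as configuration. This
    is purely illustrative: the auxiliary-ports property name below is
    hypothetical and not an existing Hadoop key confirmed in this thread.

```xml
<!-- hdfs-site.xml sketch: keep the new default (9820) as the primary RPC
     address while also listening on the legacy 8020. The auxiliary-ports
     property name is hypothetical; the hostname is a placeholder. -->
<property>
  <name>dfs.namenode.rpc-address</name>
  <value>nn1.example.com:9820</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.auxiliary-ports</name>
  <value>8020</value>
</property>
```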
    
    3. Change back to 8020
    
    This will make all upgrades from 2.x -> 3.0.1 (if this goes in) free of
    this problem, because the original 8020 -> 9820 switch does not appear to
    have been a well-considered move.
    
    The downsides, as I summarize them, are: a) what about 3.0.0 users, and
    b) compatibility.
    
    For a), since we have *just* released 3.0.0, it's safe to say we have
    tremendously more users on 2.x than on 3.0.0 now. If we make the release
    notes clear, this will benefit tremendously more users than it harms.
    For b), as various others commented, this can be a special case where a
    by-definition incompatible change actually fixes a previously problematic
    incompatible change. If we can reach consensus, and also notify users via
    the mailing list + release notes, it neither weakens our compatibility
    guidelines nor surprises the community.
    
    
    Eric and Nicholas, does this address your concerns?
    
    
    -Xiao
    
    On Sun, Jan 21, 2018 at 8:27 PM, Akira Ajisaka  wrote:
    
    > Thanks Chris and Daryn for the replies.
    >
    > First of all, I had missed why the NN RPC port was moved to 9820.
    > HDFS-9427 is meant to avoid ephemeral ports; however, the NN RPC port
    > (8020) is already out of that range. The change only moves all the
    > ports into the same range, so it is not really necessary.
    >
    > I agree the change is disastrous for many users; however, reverting
    > the change is also disastrous for 3.0.0 users. Therefore, if we are to
    > revert the change, we must notify users of the incompatibility. Adding
    > the notification to the release announcement seems to be a good choice.
    > Users probably do not carefully read the change logs and can easily
    > miss it, as we did in the release process.
    >
    > Cancelling my -1.
    >
    > -Akira
    >
    > On 2018/01/20 7:17, Daryn Sharp wrote:
    >
    >>  > I'm -1 for reverting HDFS-9427 in 3.x.
    >>
    >> I'm -1 on not reverting.  If yahoo/oath had the cycles to begin testing
    >> 3.0 prior to release, I would have -1'ed this change immediately.  It's
    >> already broken our Q

[jira] [Created] (HADOOP-15188) azure datalake AzureADAuthenticator failing, no error info provided

2018-01-23 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15188:
---

 Summary: azure datalake AzureADAuthenticator failing, no error 
info provided
 Key: HADOOP-15188
 URL: https://issues.apache.org/jira/browse/HADOOP-15188
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/adl
Affects Versions: 3.1.0
Reporter: Steve Loughran


We get a failure in the ADLS client, but nothing useful in terms of a failure description.

{code}
DEBUG oauth2.AzureADAuthenticator: AADToken: starting to fetch token using 
client creds for client ID 
DEBUG store.HttpTransport: 
HTTPRequest,Failed,cReqId:,lat:127370,err:HTTP0(null),Reqlen:0,Resplen:0,token_ns:,sReqId:null,path:,qp:op=GETFILESTATUS&tooid=true&api-version=2016-11-01
{code}
So: we had a failure, but the response code is 0 and the error is null; 
"something happened but we don't know what".

It looks like this log message comes from the ADLS SDK, and can be decoded as follows.
{code}
String logline =
  "HTTPRequest," + outcome +
  ",cReqId:" + opts.requestid +
  ",lat:" + Long.toString(resp.lastCallLatency) +
  ",err:" + error +
  ",Reqlen:" + length +
  ",Resplen:" + respLength +
  ",token_ns:" + Long.toString(resp.tokenAcquisitionLatency) +
  ",sReqId:" + resp.requestId +
  ",path:" + path +
  ",qp:" + queryParams.serialize();

{code}

It looks like whatever code tries to parse the JSON response from the OAuth 
service couldn't make sense of the response, and we end up with nothing back. 

I am not sure what can be done in Hadoop to handle this, except maybe to 
provide more diagnostics on request failures.
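For reference, the comma-separated logline above can be split back into its 
fields. A small sketch (the field names come from the SDK snippet above, the 
sample line from the DEBUG output; the class name is mine):

```java
// Sketch: split an ADLS SDK logline of the form
// "HTTPRequest,<outcome>,key:value,key:value,..." into a field map.
import java.util.LinkedHashMap;
import java.util.Map;

public class AdlsLogLineParser {
    public static Map<String, String> parse(String logline) {
        String[] parts = logline.split(",");
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("outcome", parts[1]); // parts[0] is the "HTTPRequest" tag
        for (int i = 2; i < parts.length; i++) {
            int colon = parts[i].indexOf(':');
            if (colon >= 0) {
                fields.put(parts[i].substring(0, colon),
                           parts[i].substring(colon + 1));
            }
        }
        return fields;
    }

    public static void main(String[] args) {
        String sample = "HTTPRequest,Failed,cReqId:,lat:127370,err:HTTP0(null),"
            + "Reqlen:0,Resplen:0,token_ns:,sReqId:null,path:,"
            + "qp:op=GETFILESTATUS&tooid=true&api-version=2016-11-01";
        Map<String, String> f = parse(sample);
        System.out.println(f.get("outcome")); // prints Failed
        System.out.println(f.get("err"));     // prints HTTP0(null)
    }
}
```

Applied to the line above, this makes the empty cReqId/token_ns/path fields and 
the HTTP0(null) error easy to see at a glance.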




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2018-01-23 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/113/

[Jan 22, 2018 8:15:31 AM] (aajisaka) HADOOP-15181. Typo in SecureMode.md




-1 overall


The following subsystems voted -1:
asflicense unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Unreaped Processes :

   hadoop-hdfs:41 
   bkjournal:8 
   hadoop-yarn-client:6 
   hadoop-yarn-applications-distributedshell:1 
   hadoop-mapreduce-client-jobclient:2 
   hadoop-distcp:4 
   hadoop-archives:1 
   hadoop-extras:1 

Failed junit tests :

   hadoop.hdfs.server.namenode.snapshot.TestSnapshottableDirListing 
   hadoop.hdfs.server.namenode.ha.TestPipelinesFailover 
   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.server.namenode.ha.TestHAMetrics 
   hadoop.hdfs.server.namenode.TestSecurityTokenEditLog 
   hadoop.hdfs.server.namenode.TestFileLimit 
   hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits 
   hadoop.hdfs.server.namenode.snapshot.TestCheckpointsWithSnapshots 
   hadoop.hdfs.server.namenode.TestFSImageWithAcl 
   hadoop.hdfs.server.federation.router.TestRouterRpc 
   hadoop.hdfs.server.namenode.TestFavoredNodesEndToEnd 
   hadoop.hdfs.server.namenode.ha.TestEditLogsDuringFailover 
   hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup 
   hadoop.hdfs.server.namenode.TestEditLogAutoroll 
   hadoop.hdfs.server.namenode.TestStreamFile 
   hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages 
   hadoop.hdfs.server.namenode.TestFsck 
   hadoop.hdfs.server.namenode.snapshot.TestDisallowModifyROSnapshot 
   hadoop.hdfs.server.namenode.TestDecommissioningStatus 
   hadoop.hdfs.server.namenode.TestAuditLogger 
   hadoop.hdfs.server.namenode.TestTransferFsImage 
   hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM 
   hadoop.hdfs.server.mover.TestMover 
   hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA 
   hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap 
   hadoop.hdfs.server.namenode.ha.TestXAttrsWithHA 
   hadoop.hdfs.server.namenode.snapshot.TestXAttrWithSnapshot 
   hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.federation.router.TestNamenodeHeartbeat 
   hadoop.hdfs.server.namenode.TestCacheDirectives 
   hadoop.hdfs.server.namenode.TestProtectedDirectories 
   hadoop.hdfs.server.namenode.TestLargeDirectoryDelete 
   hadoop.hdfs.server.namenode.TestXAttrConfigFlag 
   hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot 
   hadoop.hdfs.server.namenode.TestSecondaryNameNodeUpgrade 
   hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication 
   hadoop.hdfs.server.namenode.TestBackupNode 
   hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer 
   hadoop.hdfs.server.namenode.ha.TestHarFileSystemWithHA 
   hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination 
   hadoop.hdfs.server.federation.router.TestRouterMountTable 
   hadoop.hdfs.server.namenode.TestStartup 
   hadoop.hdfs.server.namenode.snapshot.TestFileContextSnapshot 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotNameWithInvalidCharacters 
   hadoop.hdfs.server.blockmanagement.TestNodeCount 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.ha.TestHAStateTransitions 
   hadoop.hdfs.server.namenode.TestSaveNamespace 
   hadoop.hdfs.server.namenode.TestNameNodeRpcServerMethods 
   hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot 
   hadoop.hdfs.server.federation.store.driver.TestStateStoreFileSystem 
   hadoop.hdfs.server.namenode.TestDeadDatanode 
   hadoop.hdfs.server.namenode.ha.TestEditLogTailer 
   hadoop.hdfs.server.balancer.TestBalancerRPCDelay 
   hadoop.hdfs.server.namenode.ha.TestStateTransitionFailure 
   hadoop.hdfs.server.namenode.TestEditLogRace 
   hadoop.hdfs.server.namenode.TestNameNodeResourceChecker 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotMetrics 
   hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots 
   hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots 
   hadoop.hdfs.server.namenode.TestEditLogJournalFailures 
   hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes 
   hadoop.hdfs.server.namenode.TestNNThroughputBenchmark 
   hadoop.hdfs.server.namenode.TestDefaultBlockPlacementPolicy 
   hado

[jira] [Created] (HADOOP-15187) Remove mock test dependency on REST call invoked from Java SDK

2018-01-23 Thread Vishwajeet Dusane (JIRA)
Vishwajeet Dusane created HADOOP-15187:
--

 Summary: Remove mock test dependency on REST call invoked from 
Java SDK 
 Key: HADOOP-15187
 URL: https://issues.apache.org/jira/browse/HADOOP-15187
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/adl
Affects Versions: 3.0.0
Reporter: Vishwajeet Dusane
Assignee: Vishwajeet Dusane


Cleanup unit test which mocks REST calls invoked within dependency SDK.






[jira] [Created] (HADOOP-15186) Allow Azure Data Lake SDK dependency version to override from the command line

2018-01-23 Thread Vishwajeet Dusane (JIRA)
Vishwajeet Dusane created HADOOP-15186:
--

 Summary: Allow Azure Data Lake SDK dependency version to override 
from the command line
 Key: HADOOP-15186
 URL: https://issues.apache.org/jira/browse/HADOOP-15186
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/adl
Affects Versions: 3.0.0
Reporter: Vishwajeet Dusane
Assignee: Vishwajeet Dusane


To allow backward/forward Java SDK release-compatibility testing against the 
Hadoop driver, allow the Azure Data Lake Java SDK dependency version to be 
overridden from the command line.


