[jira] [Created] (HADOOP-16351) Change ":" to ApplicationConstants.CLASS_PATH_SEPARATOR

2019-06-06 Thread kevin su (JIRA)
kevin su created HADOOP-16351:
-

 Summary: Change ":" to ApplicationConstants.CLASS_PATH_SEPARATOR
 Key: HADOOP-16351
 URL: https://issues.apache.org/jira/browse/HADOOP-16351
 Project: Hadoop Common
  Issue Type: Task
  Components: common
Affects Versions: 3.1.2
Reporter: kevin su
Assignee: kevin su
 Fix For: 3.1.2


In distributedshell/Client.java we should change ":" to 
ApplicationConstants.CLASS_PATH_SEPARATOR so that the client also works on 
Windows. Current code:

{code}
// add the runtime classpath needed for tests to work
if (conf.getBoolean(YarnConfiguration.IS_MINI_YARN_CLUSTER, false)) {
  classPathEnv.append(':')
      .append(System.getProperty("java.class.path"));
}
{code}

Proposed change:

{code}
// add the runtime classpath needed for tests to work
if (conf.getBoolean(YarnConfiguration.IS_MINI_YARN_CLUSTER, false)) {
  classPathEnv.append(ApplicationConstants.CLASS_PATH_SEPARATOR)
      .append(System.getProperty("java.class.path"));
}
{code}
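
For context, ApplicationConstants.CLASS_PATH_SEPARATOR is the "<CPS>" token, 
which the NodeManager expands to the platform classpath separator (':' or ';') 
at container launch; that is what makes it Windows-safe. The rest of the 
classpath in the distributed shell client is already assembled that way; a 
sketch of that assembly (not verbatim from Client.java, and assuming conf is 
the YarnConfiguration in scope):

{code}
// Cross-platform classpath assembly: CLASS_PATH_SEPARATOR ("<CPS>") is expanded
// by the NodeManager to ':' or ';' when the container is launched.
StringBuilder classPathEnv = new StringBuilder(
    ApplicationConstants.Environment.CLASSPATH.$$())
    .append(ApplicationConstants.CLASS_PATH_SEPARATOR).append("./*");
for (String c : conf.getStrings(
    YarnConfiguration.YARN_APPLICATION_CLASSPATH,
    YarnConfiguration.DEFAULT_YARN_CROSS_PLATFORM_APPLICATION_CLASSPATH)) {
  classPathEnv.append(ApplicationConstants.CLASS_PATH_SEPARATOR).append(c.trim());
}
{code}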







[jira] [Resolved] (HADOOP-15544) ABFS: validate packing, transient classpath, hadoop fs CLI

2019-06-06 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15544.
-
   Resolution: Done
Fix Version/s: 3.3.0

> ABFS: validate packing, transient classpath, hadoop fs CLI
> --
>
> Key: HADOOP-15544
> URL: https://issues.apache.org/jira/browse/HADOOP-15544
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: dependencies.txt
>
>
> Validate the packaging and dependencies of ABFS
> * hadoop-cloud-storage artifact to export everything needed
> * {{hadoop fs -ls abfs://path}} to work in ASF distributions
> * check the transient classpath (e.g. Spark)
> Spark master's hadoop-cloud module depends on hadoop-cloud-storage when you 
> build with the hadoop-3.1 profile, so it should automatically get in there; 
> we just need to check that it picks it up too.
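
The transient-classpath check boils down to loading the connector purely from 
whatever lands on an application's classpath; a minimal probe of that kind 
might look like the following (the account and container names are 
placeholders, not from this issue):

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Fails fast (e.g. ClassNotFoundException or "No FileSystem for scheme: abfs")
// if the ABFS connector did not make it onto the classpath.
public class AbfsClasspathProbe {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(
        URI.create("abfs://container@account.dfs.core.windows.net/"), conf);
    for (FileStatus status : fs.listStatus(new Path("/"))) {
      System.out.println(status.getPath());
    }
  }
}
{code}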






[jira] [Resolved] (HADOOP-16187) ITestS3GuardToolDynamoDB test failures

2019-06-06 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16187.
-
Resolution: Duplicate

> ITestS3GuardToolDynamoDB test failures
> --
>
> Key: HADOOP-16187
> URL: https://issues.apache.org/jira/browse/HADOOP-16187
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Two tests failing in ITestS3GuardToolDynamoDB:
> * ITestS3GuardToolDynamoDB.testDynamoDBInitDestroyCycle
> * ITestS3GuardToolDynamoDB.testBucketInfoUnguarded






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-06-06 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1159/

[Jun 5, 2019 2:18:24 AM] (sunilg) SUBMARINE-88. rat.sh regex pattern not 
working issue while using lower
[Jun 5, 2019 2:44:44 AM] (ztang) SUBMARINE-89. Add submarine-src profile to 
generate source package.
[Jun 5, 2019 5:51:23 AM] (elek) HDDS-1640. Reduce the size of recon jar file
[Jun 5, 2019 5:55:30 AM] (wwei) YARN-9600. Support self-adaption width for 
columns of containers table
[Jun 5, 2019 12:04:17 PM] (elek) HDDS-1628. Fix the execution and return code 
of smoketest executor shell
[Jun 5, 2019 12:54:42 PM] (stevel) Revert "HADOOP-16321: 
ITestS3ASSL+TestOpenSSLSocketFactory failing with
[Jun 5, 2019 12:54:55 PM] (stevel) Revert "HADOOP-16050: s3a SSL connections 
should use OpenSSL"
[Jun 5, 2019 1:33:00 PM] (sammichen) HDFS-14356. Implement HDFS cache on SCM 
with native PMDK libs.
[Jun 5, 2019 4:09:36 PM] (xyao) HDDS-1637. Fix random test failure 
TestSCMContainerPlacementRackAware.
[Jun 5, 2019 9:42:10 PM] (xyao) HDDS-1541. Implement 
addAcl,removeAcl,setAcl,getAcl for Key. Contributed
[Jun 5, 2019 10:55:13 PM] (eyang) HADOOP-16314.  Make sure all web end points 
are covered by the same




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore
 
   Unread field:TimelineEventSubDoc.java:[line 56] 
   Unread field:TimelineMetricSubDoc.java:[line 44] 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 
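
For readers unfamiliar with the two FindBugs patterns above, the fix they 
point at is the standard null-safe, type-checked equals. An illustrative 
version (not the actual WorkerId code; "id" is a hypothetical identifying 
field):

{code}
import java.util.Objects;

// Illustrative only: stands in for the real class.
final class WorkerIdExample {
  private final String id;

  WorkerIdExample(String id) {
    this.id = id;
  }

  @Override
  public boolean equals(Object obj) {
    if (this == obj) {
      return true;
    }
    // instanceof is false for null, so this also covers the missing null check
    if (!(obj instanceof WorkerIdExample)) {
      return false;
    }
    WorkerIdExample other = (WorkerIdExample) obj;
    return Objects.equals(id, other.id);
  }

  @Override
  public int hashCode() {
    return Objects.hashCode(id);
  }
}
{code}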

Failed junit tests :

   hadoop.hdfs.TestMultipleNNPortQOP 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler 
   hadoop.mapreduce.v2.app.TestRuntimeEstimators 
   hadoop.mapred.TestMRTimelineEventHandling 
   hadoop.hdds.scm.block.TestBlockManager 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1159/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1159/artifact/out/diff-compile-javac-root.txt
  [332K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1159/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1159/artifact/out/diff-patch-hadolint.txt
  [8.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1159/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1159/artifact/out/diff-patch-pylint.txt
  [108K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1159/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1159/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1159/artifact/out/whitespace-eol.txt
  [9.6M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1159/artifact/out/whitespace-tabs.txt
  [1.1M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1159/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-documentstore-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1159/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1159/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [12K]
   

[jira] [Reopened] (HADOOP-16344) Make DurationInfo "public unstable"

2019-06-06 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang reopened HADOOP-16344:
--

> Make DurationInfo  "public unstable"
> 
>
> Key: HADOOP-16344
> URL: https://issues.apache.org/jira/browse/HADOOP-16344
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Kevin Risden
>Assignee: kevin su
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-16344.01.patch
>
>
> HADOOP-16093 moved DurationInfo to hadoop-common org.apache.hadoop.util. It 
> would be useful if DurationInfo was annotated as "public unstable".
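
"Public unstable" here refers to Hadoop's interface classification 
annotations; a minimal sketch of what the annotated class would look like (an 
assumption about the eventual patch, not taken from it):

{code}
package org.apache.hadoop.util;

import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Public: downstream code may use the class; Unstable: its API may still change.
@InterfaceAudience.Public
@InterfaceStability.Unstable
public class DurationInfo implements AutoCloseable {
  // ... existing implementation unchanged ...
  @Override
  public void close() {
    // ...
  }
}
{code}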






Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-06-06 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/344/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   Class org.apache.hadoop.fs.GlobalStorageStatistics defines non-transient 
non-serializable instance field map In GlobalStorageStatistics.java:instance 
field map In GlobalStorageStatistics.java 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 
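
For anyone unfamiliar with this FindBugs pattern, a stripped-down illustration 
of what it flags (not the actual ColumnRWHelper code):

{code}
import java.util.Map;

final class ReboxExample {
  static Long readTimestamp(Map<String, Long> results, String key) {
    Long raw = results.get(key);
    // Flagged form: Long.valueOf(raw.longValue()) unboxes and immediately
    // reboxes (and would NPE if raw were null).
    // Preferred: return the boxed reference unchanged, or switch to a
    // primitive long once null has been handled.
    return raw;
  }
}
{code}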

Failed junit tests :

   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.cli.TestHDFSCLI 
   hadoop.hdfs.server.namenode.ha.TestBootstrapStandby 
   hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength 
   hadoop.fs.viewfs.TestViewFsHdfs 
   hadoop.cli.TestCryptoAdminCLI 
   hadoop.cli.TestAclCLI 
   hadoop.fs.viewfs.TestViewFsWithAcls 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.qjournal.client.TestQJMWithFaults 
   hadoop.hdfs.TestEncryptionZonesWithKMS 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.fs.contract.router.web.TestRouterWebHDFSContractOpen 
   hadoop.hdfs.server.federation.store.driver.TestStateStoreFileSystem 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/344/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/344/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/344/artifact/out/diff-compile-cc-root-jdk1.8.0_212.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/344/artifact/out/diff-compile-javac-root-jdk1.8.0_212.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/344/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/344/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/344/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/344/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/344/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/344/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/344/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/344/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/344/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/344/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
  [8.0K]
   

[jira] [Resolved] (HADOOP-16344) Make DurationInfo "public unstable"

2019-06-06 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16344.
-
   Resolution: Fixed
Fix Version/s: 3.3.0

> Make DurationInfo  "public unstable"
> 
>
> Key: HADOOP-16344
> URL: https://issues.apache.org/jira/browse/HADOOP-16344
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Kevin Risden
>Assignee: kevin su
>Priority: Minor
> Fix For: 3.3.0
>
>
> HADOOP-16093 moved DurationInfo to hadoop-common org.apache.hadoop.util. It 
> would be useful if DurationInfo was annotated as "public unstable".






[jira] [Reopened] (HADOOP-15888) ITestDynamoDBMetadataStore can leak (large) DDB tables in test failures/timeout

2019-06-06 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reopened HADOOP-15888:
-

Reopened: this depends on HADOOP-15563; we will close it when that is in.

> ITestDynamoDBMetadataStore can leak (large) DDB tables in test 
> failures/timeout
> ---
>
> Key: HADOOP-15888
> URL: https://issues.apache.org/jira/browse/HADOOP-15888
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.2
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Attachments: Screen Shot 2018-10-30 at 17.32.43.png
>
>
> This is me doing some backporting of patches from branch-3.2, so it may be an 
> intermediate condition, but:
> # I'd noticed I wasn't actually running ITestDynamoDBMetadataStore
> # so I set it up to work with the right config opts (table and region)
> # but the tests were timing out
> # looking at DDB tables in the AWS console showed a number of DDB tables 
> "testProvisionTable", "testProvisionTable" created, each with "500 read, 100 
> write" capacity (i.e. ~$50/month)
> I haven't replicated this in trunk/branch-3.2 itself, but it's clearly 
> dangerous. At the very least, we should have a size of 1 R/W in all 
> creations, so the cost of a test failure is negligible, and then we should 
> document the risk and best practice.
> Also: use "s3guard" as the table prefix to make clear its origin.





[jira] [Resolved] (HADOOP-16117) Update AWS SDK to 1.11.563

2019-06-06 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16117.
-
   Resolution: Fixed
Fix Version/s: 3.3.0

> Update AWS SDK to 1.11.563
> --
>
> Key: HADOOP-16117
> URL: https://issues.apache.org/jira/browse/HADOOP-16117
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Upgrade to the most recent AWS SDK. That's the 1.11 line; even though there's 
> a 2.0 out, moving to it would be a more significant upgrade, with impact 
> downstream.
> The new [AWS SDK update 
> process|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md#-qualifying-an-aws-sdk-update]
>  *must* be followed, and we should plan for 1-2 surprises afterwards anyway.


