[jira] [Resolved] (HDDS-1483) Fix getMultipartKey javadoc

2019-05-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-1483.
--
   Resolution: Fixed
Fix Version/s: 0.5.0

> Fix getMultipartKey javadoc
> ---
>
> Key: HDDS-1483
> URL: https://issues.apache.org/jira/browse/HDDS-1483
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code:java}
> /**
> <<< HEAD
>  * Returns the DB key name of a multipart upload key in OM metadata store.
>  *
>  * @param volume - volume name
>  * @param bucket - bucket name
>  * @param key - key name
>  * @param uploadId - the upload id for this key
>  * @return bytes of DB key.
>  */
>  String getMultipartKey(String volume, String bucket, String key, String
>  uploadId);{code}
>  
> Remove the unwanted *<<< HEAD* merge-conflict marker.
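>
> For reference, the corrected javadoc is simply the snippet above minus the 
> stray marker:
> {code:java}
> /**
>  * Returns the DB key name of a multipart upload key in OM metadata store.
>  *
>  * @param volume - volume name
>  * @param bucket - bucket name
>  * @param key - key name
>  * @param uploadId - the upload id for this key
>  * @return bytes of DB key.
>  */
> String getMultipartKey(String volume, String bucket, String key, String
>     uploadId);{code}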



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1132) Ozone serialization codec for Ozone S3 secret table

2019-05-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-1132.
--
Resolution: Duplicate

> Ozone serialization codec for Ozone S3 secret table
> ---
>
> Key: HDDS-1132
> URL: https://issues.apache.org/jira/browse/HDDS-1132
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, S3
>Reporter: Elek, Marton
>Assignee: Zsolt Venczel
>Priority: Major
>  Labels: newbie
>
> HDDS-748/HDDS-864 introduced an option to use strongly typed metadata tables 
> and moved the serialization/deserialization logic into separate codec 
> implementations.
> HDDS-937 introduced a new S3 secret table which is not codec based.
> I propose to use codecs for this table.
> In OzoneMetadataManager the return value of getS3SecretTable() should be 
> changed from the raw byte[]-based Table to a strongly typed Table. 
> The encoding/decoding logic of S3SecretValue should be registered in 
> ~OzoneMetadataManagerImpl:L204
> As the codecs are type based we may need a wrapper class to encode the String 
> kerberos id with md5: class S3SecretKey(String name = kerberosId). Long term 
> we can modify the S3SecretKey to support multiple keys for the same kerberos 
> id.
>  
>  
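> A minimal sketch of what the proposed codec could look like (assumptions: 
> the Codec interface from HDDS-748/HDDS-864 with toPersistedFormat / 
> fromPersistedFormat, and an illustrative string layout for S3SecretValue; 
> this is not the actual patch):
> {code:java}
> import java.nio.charset.StandardCharsets;
>
> // Hypothetical codec for the S3 secret table. The serialization layout
> // (kerberosId ":" secret) is illustrative only.
> public class S3SecretValueCodec implements Codec<S3SecretValue> {
>
>   @Override
>   public byte[] toPersistedFormat(S3SecretValue value) {
>     // Assumes S3SecretValue exposes the kerberos id and the aws secret.
>     return (value.getKerberosID() + ":" + value.getAwsSecret())
>         .getBytes(StandardCharsets.UTF_8);
>   }
>
>   @Override
>   public S3SecretValue fromPersistedFormat(byte[] rawData) {
>     String[] parts = new String(rawData, StandardCharsets.UTF_8).split(":", 2);
>     return new S3SecretValue(parts[0], parts[1]);
>   }
> }
>
> // Registered once while building the DB store, roughly:
> // dbStoreBuilder.addCodec(S3SecretValue.class, new S3SecretValueCodec());
> {code}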



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-1483) Fix getMultipartKey javadoc

2019-05-01 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1483:


 Summary: Fix getMultipartKey javadoc
 Key: HDDS-1483
 URL: https://issues.apache.org/jira/browse/HDDS-1483
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Bharat Viswanadham


{code:java}
/**
<<< HEAD
 * Returns the DB key name of a multipart upload key in OM metadata store.
 *
 * @param volume - volume name
 * @param bucket - bucket name
 * @param key - key name
 * @param uploadId - the upload id for this key
 * @return bytes of DB key.
 */
 String getMultipartKey(String volume, String bucket, String key, String
 uploadId);{code}
 

Remove the unwanted *<<< HEAD* merge-conflict marker.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-1482) Use strongly typed codec implementations for the S3Table

2019-05-01 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1482:


 Summary: Use strongly typed codec implementations for the S3Table
 Key: HDDS-1482
 URL: https://issues.apache.org/jira/browse/HDDS-1482
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


HDDS-864 added strongly typed codec implementations for the tables of 
OmMetadataManager.

 

The tables added as part of the S3 implementation are not using this. This 
Jira addresses that.
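For context, a strongly typed table lets call sites skip manual byte[] 
conversion. A short sketch under assumed names (the table name and the 
String/String schema here are illustrative, not the actual S3 table):

{code:java}
// Raw table: callers serialize and deserialize by hand.
Table<byte[], byte[]> rawS3Table = store.getTable("s3Table");

// Typed table: codecs registered for the key/value classes do the
// conversion, so call sites stay type safe.
Table<String, String> s3Table =
    store.getTable("s3Table", String.class, String.class);
String volumeForBucket = s3Table.get("my-s3-bucket");
{code}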



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-13933) [JDK 11] SWebhdfsFileSystem related tests fail with hostname verification problems for "localhost"

2019-05-01 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng resolved HDFS-13933.
---
Resolution: Won't Fix

> [JDK 11] SWebhdfsFileSystem related tests fail with hostname verification 
> problems for "localhost"
> --
>
> Key: HDFS-13933
> URL: https://issues.apache.org/jira/browse/HDFS-13933
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Andrew Purtell
>Priority: Minor
>
> Tests with issues:
> * TestHttpFSFWithSWebhdfsFileSystem
> * TestWebHdfsTokens
> * TestSWebHdfsFileContextMainOperations
> Possibly others. Failure looks like 
> {noformat}
> java.io.IOException: localhost:50260: HTTPS hostname wrong:  should be 
> <localhost>
> {noformat}
> These tests set up a trust store and use HTTPS connections, and with Java 11 
> the client-side validation of the server name in the generated self-signed 
> certificate fails. Exceptions originate in the JRE's HTTP client library. 
> The way everything hooks together uses static initializers, static methods, 
> JUnit MethodRules... There's a lot to unpack, and it's not clear how to fix 
> it. This is Java 11+28.
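>
> A test-only workaround sketch (not the project's fix; this issue was 
> resolved Won't Fix): relax hostname verification for localhost so the 
> generated self-signed certificate passes on Java 11.
> {code:java}
> import javax.net.ssl.HttpsURLConnection;
>
> // Accept only "localhost", regardless of what the certificate says.
> // Test scope only; never do this in production code.
> HttpsURLConnection.setDefaultHostnameVerifier(
>     (hostname, session) -> "localhost".equals(hostname));
> {code}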



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-13189) Standby NameNode should roll active edit log when checkpointing

2019-05-01 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun resolved HDFS-13189.
-
Resolution: Duplicate

> Standby NameNode should roll active edit log when checkpointing
> ---
>
> Key: HDFS-13189
> URL: https://issues.apache.org/jira/browse/HDFS-13189
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chao Sun
>Priority: Minor
>
> When the SBN is doing checkpointing, it will hold the {{cpLock}}. In the 
> current implementation of the edit log tailer thread, it will first check 
> and roll the active edit log, and then tail and apply edits. In the case of 
> checkpointing, it will be blocked on the {{cpLock}} and will not roll the 
> edit log.
> It seems there is no dependency between the edit log roll and tailing edits, 
> so a better approach may be to do these in separate threads. This will be 
> helpful for people who use the observer feature without in-progress edit 
> log tailing. 
> An alternative is to configure 
> {{dfs.namenode.edit.log.autoroll.multiplier.threshold}} and 
> {{dfs.namenode.edit.log.autoroll.check.interval.ms}} to let the ANN roll its 
> own log more frequently in case the SBN is stuck on the lock; a sketch of 
> that configuration follows.
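>
> In hdfs-site.xml the alternative would look roughly like this (values are 
> illustrative only, not recommendations):
> {code:xml}
> <!-- Roll the ANN edit log once it reaches 0.5x the checkpoint txn count
>      instead of the default 2.0x multiplier. -->
> <property>
>   <name>dfs.namenode.edit.log.autoroll.multiplier.threshold</name>
>   <value>0.5</value>
> </property>
> <!-- Check whether a roll is needed every 60 seconds. -->
> <property>
>   <name>dfs.namenode.edit.log.autoroll.check.interval.ms</name>
>   <value>60000</value>
> </property>
> {code}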



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-05-01 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1123/

[Apr 30, 2019 2:46:20 AM] (ajay) Revert "HDDS-973. HDDS/Ozone fail to build on 
Windows."
[Apr 30, 2019 2:54:25 AM] (ztang) SUBMARINE-64. Improve TonY runtime's 
document. Contributed by Keqiu Hu.
[Apr 30, 2019 3:06:44 AM] (ztang) YARN-9476. [YARN-9473] Create unit tests for 
VE plugin. Contributed by
[Apr 30, 2019 10:53:26 AM] (stevel) HADOOP-16221. S3Guard: add option to fail 
operation on metadata write
[Apr 30, 2019 12:27:39 PM] (elek) HDDS-1384. TestBlockOutputStreamWithFailures 
is failing
[Apr 30, 2019 9:04:59 PM] (eyang) YARN-6929.  Improved partition algorithm for 
yarn remote-app-log-dir.   
[Apr 30, 2019 9:52:16 PM] (todd) HDFS-3246: pRead equivalent for direct read 
path (#597)




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore
 
   Unread field:TimelineEventSubDoc.java:[line 56] 
   Unread field:TimelineMetricSubDoc.java:[line 44] 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 

Failed junit tests :

   hadoop.hdfs.server.datanode.TestBPOfferService 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.cli.TestLogsCLI 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapreduce.v2.app.TestRuntimeEstimators 
   hadoop.hdds.scm.container.TestContainerStateManagerIntegration 
   hadoop.ozone.scm.pipeline.TestPipelineManagerMXBean 
   hadoop.hdds.scm.safemode.TestSCMSafeModeWithPipelineRules 
   hadoop.ozone.om.TestMultipleContainerReadWrite 
   hadoop.ozone.client.rpc.TestContainerStateMachineFailures 
   hadoop.ozone.web.client.TestBuckets 
   hadoop.ozone.scm.TestContainerSmallFile 
   hadoop.ozone.TestStorageContainerManager 
   hadoop.ozone.client.rpc.TestBCSID 
   hadoop.ozone.ozShell.TestOzoneDatanodeShell 
   hadoop.ozone.client.rpc.TestBlockOutputStream 
   hadoop.ozone.scm.TestXceiverClientMetrics 
   hadoop.ozone.om.TestOmAcls 
   hadoop.ozone.om.TestOzoneManager 
   hadoop.ozone.client.rpc.TestCommitWatcher 
   hadoop.ozone.web.client.TestKeys 
   
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerHandler
 
   hadoop.ozone.client.rpc.TestOzoneRpcClient 
   hadoop.ozone.client.rpc.TestContainerStateMachine 
   hadoop.ozone.container.TestContainerReplication 
   hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException 
   hadoop.ozone.om.TestScmSafeMode 
   hadoop.ozone.om.TestOMDbCheckpointServlet 
   hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient 
   hadoop.ozone.om.TestOzoneManagerHA 
   hadoop.ozone.om.TestOmInit 
   hadoop.ozone.om.TestOmBlockVersioning 
   hadoop.ozone.om.TestOzoneManagerRestInterface 
   hadoop.ozone.scm.TestAllocateContainer 
   hadoop.hdds.scm.pipeline.TestPipelineClose 
   hadoop.ozone.ozShell.TestS3Shell 
   hadoop.hdds.scm.pipeline.TestNodeFailure 
   hadoop.fs.ozone.contract.ITestOzoneContractRename 
   hadoop.fs.ozone.contract.ITestOzoneContractRootDir 
   hadoop.fs.ozone.contract.ITestOzoneContractMkdir 
   hadoop.fs.ozone.contract.ITestOzoneContractSeek 
   hadoop.fs.ozone.contract.ITestOzoneContractOpen 
   hadoop.fs.ozone.contract.ITestOzoneContractDelete 
   hadoop.fs.ozone.contract.ITestOzoneContractDistCp 
   hadoop.fs.ozone.contract.ITestOzoneContractCreate 
   hadoop.ozone.freon.TestFreonWithDatanodeFastRestart 
   hadoop.ozone.freon.TestRandomKeyGenerator 
   hadoop.ozone.freon.TestFreonWithPipelineDestroy 
   hadoop.ozone.freon.TestDataValidateWithUnsafeByteOperations 
   

Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-05-01 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/308/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   Class org.apache.hadoop.fs.GlobalStorageStatistics defines non-transient 
non-serializable instance field map In GlobalStorageStatistics.java:instance 
field map In GlobalStorageStatistics.java 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.TestHFlush 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.TestParallelShortCircuitRead 
   hadoop.hdfs.TestFileCreation 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot 
   hadoop.hdfs.TestDFSStartupVersions 
   hadoop.hdfs.TestRollingUpgradeDowngrade 
   hadoop.hdfs.TestBlockStoragePolicy 
   hadoop.hdfs.TestDFSClientSocketSize 
   hadoop.cli.TestHDFSCLI 
   hadoop.hdfs.TestIsMethodSupported 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/308/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/308/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/308/artifact/out/diff-compile-cc-root-jdk1.8.0_191.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/308/artifact/out/diff-compile-javac-root-jdk1.8.0_191.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/308/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/308/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/308/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/308/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/308/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/308/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/308/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/308/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/308/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/308/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/308/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   

[jira] [Created] (HDDS-1481) Cleanup BasicOzoneFileSystem#mkdir

2019-05-01 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-1481:
-

 Summary: Cleanup BasicOzoneFileSystem#mkdir
 Key: HDDS-1481
 URL: https://issues.apache.org/jira/browse/HDDS-1481
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Filesystem
Reporter: Lokesh Jain
Assignee: Lokesh Jain


Currently BasicOzoneFileSystem#mkdir does not have the optimizations made in 
HDDS-1300. The changes for this function were missed in HDDS-1460.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org