[jira] [Resolved] (HDFS-14556) Spelling Mistake "gloablly"

2019-06-16 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDFS-14556.
---
   Resolution: Fixed
Fix Version/s: 3.3.0

> Spelling Mistake "gloablly"
> ---
>
> Key: HDFS-14556
> URL: https://issues.apache.org/jira/browse/HDFS-14556
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.10.0, 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Trivial
>  Labels: newbie, noob
> Fix For: 3.3.0
>
>
> https://github.com/apache/hadoop/blob/a55d6bba71c81c1c4e9d8cd11f55c78f10a548b0/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto#L41
> {code:java}
> message ExtendedBlockProto {
>   required string poolId = 1;   // Block pool id - gloablly unique across clusters
>   required uint64 blockId = 2;  // the local id within a pool
>   required uint64 generationStamp = 3;
>   optional uint64 numBytes = 4 [default = 0];  // len does not belong in ebid
>                                                // here for historical reasons
> }
> {code}
> _gloablly_ = _globally_
> Saw this typo in my Eclipse editor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[RESULT][VOTE] Merge HDFS-13891(RBF) to trunk

2019-06-16 Thread Brahma Reddy Battula
The vote has passed with 3 binding +1s, 5 non-binding +1s, and no -1s.

Thanks to all for voting.

Will try to merge this week.


On Tue, Jun 11, 2019 at 1:13 PM Takanobu Asanuma 
wrote:

> +1(non-binding).
>
> Regards,
> - Takanobu
>
> 
> From: Xiaoqiao He 
> Sent: Monday, June 10, 2019 15:47
> To: Ranith Sardar
> Cc: Brahma Reddy Battula; Hadoop Common; Hdfs-dev
> Subject: Re: [VOTE] Merge HDFS-13891(RBF) to trunk
>
> +1 (non-binding)
>
> - Tried merging branch HDFS-13891 (RBF) to trunk locally; no conflicts or
> failures.
> - Built from the merged sources.
> - Ran the 85 RBF test classes locally; results: Tests run: 639,
> Failures: 2, Errors: 2, Skipped: 2. The failed tests are
> #TestRouterWithSecureStartup and #TestRouterHttpDelegationToken. I don't
> think they are blocking issues.
>
> Thanks Brahma for organizing the merge.
>
>
> On Mon, Jun 10, 2019 at 12:27 PM Ranith Sardar 
> wrote:
>
> > +1 (Non-binding)
> >
> > -Original Message-
> > From: Brahma Reddy Battula [mailto:bra...@apache.org]
> > Sent: 09 June 2019 19:31
> > To: Hadoop Common ; Hdfs-dev <
> > hdfs-dev@hadoop.apache.org>
> > Subject: [VOTE] Merge HDFS-13891(RBF) to trunk
> >
> > Updated mail...
> >
> > -- Forwarded message -
> > From: Brahma Reddy Battula 
> > Date: Sun, Jun 9, 2019 at 7:26 PM
> > Subject: Re: [VOTE] Merge HDFS-13891(RBF) to trunk
> > To: Xiaoqiao He 
> > Cc: Akira Ajisaka , Chittaranjan Hota <
> > chitts.h...@gmail.com>, Giovanni Matteo Fumarola <
> > giovanni.fumar...@gmail.com>, Hadoop Common <
> common-...@hadoop.apache.org>,
> > Hdfs-dev , Iñigo Goiri 
> >
> >
> > Hi All,
> >
> > Given the positive response to the discussion thread [1], here is the
> > formal vote thread to merge HDFS-13891 into trunk.
> >
> > Summary of code changes:
> > 1. Code changes for this branch are done in the hadoop-hdfs-rbf
> > subproject; there is no impact on hadoop-hdfs or hadoop-common.
> > 2. Added security support for RBF.
> > 3. Added missing ClientProtocol APIs.
> > 4. Bug fixes and improvements.
> >
> >
> > The vote will run for 7 days, ending Sat June 15th. I will start this
> > vote with my +1.
> >
> > Regards,
> > Brahma Reddy Battula
> >
> > 1).
> >
> >
> https://lists.apache.org/thread.html/cdc2e084874b30bf6af2dd827bcbdba4ab8d3d983a8b9796e61608e6@%3Ccommon-dev.hadoop.apache.org%3E
> >
> >
> > On Wed, Jun 5, 2019 at 3:44 PM Xiaoqiao He  wrote:
> >
> > > Thanks Brahma for starting the thread.
> > > +1 for merging.
> > >
> > > He Xiaoqiao
> > >
> > > On Tue, Jun 4, 2019 at 1:53 PM Chittaranjan Hota
> > > 
> > > wrote:
> > >
> > >> Thanks Brahma for initiating this.
> > >> +1 (non-binding) for the merge.
> > >>
> > >> @Uber we have had almost all of these changes, especially RBF security,
> > >> in production for a while now without issues.
> > >>
> > >> On Mon, Jun 3, 2019 at 12:56 PM Giovanni Matteo Fumarola <
> > >> giovanni.fumar...@gmail.com> wrote:
> > >>
> > >> > +1 on merging.
> > >> >
> > >> > Thanks Brahma for starting the thread.
> > >> >
> > >> > On Mon, Jun 3, 2019 at 10:00 AM Iñigo Goiri 
> > wrote:
> > >> >
> > >> > > Thank you Brahma for pushing this.
> > >> > >
> > >> > > As you mentioned, we have already taken most of the changes into
> > >> > > production.
> > >> > > I want to highlight that the main contribution is the addition of
> > >> > security.
> > >> > > We have been able to test this at a smaller scale (~500 servers
> > >> > > and 4
> > >> > > subclusters) and the performance is great with our current
> > >> > > ZooKeeper deployment.
> > >> > > I would also like to highlight that all the changes are
> > >> > > constrained to hadoop-hdfs-rbf and there are no differences in
> > commons or HDFS.
> > >> > >
> > >> > > +1 on merging
> > >> > >
> > >> > > Inigo
> > >> > >
> > >> > > On Sun, Jun 2, 2019 at 10:19 PM Akira Ajisaka
> > >> > > 
> > >> > wrote:
> > >> > >
> > >> > > > Thanks Brahma for starting the discussion.
> > >> > > > I'm +1 for merging this.
> > >> > > >
> > >> > > > FYI: At Yahoo! JAPAN, we deployed all these changes in a 20-node
> > >> > > > cluster with 2 routers (not in production) and are running several
> > tests.
> > >> > > >
> > >> > > > Regards,
> > >> > > > Akira
> > >> > > >
> > >> > > > On Sun, Jun 2, 2019 at 12:40 PM Brahma Reddy Battula <
> > >> > bra...@apache.org>
> > >> > > > wrote:
> > >> > > > >
> > >> > > > > Dear Hadoop Developers
> > >> > > > >
> > >> > > > > I would like to propose merging the RBF branch (HDFS-13891) into
> > >> > > > > trunk. We have been working on this feature for the last several
> > >> > > > > months. This feature work received contributions from different
> > >> > > > > companies. All of the feature development happened smoothly and
> > >> > > > > collaboratively in JIRAs.
> > >> > > > >
> > >> > > > > Kindly take a look at the branch and raise any issues/concerns
> > >> > > > > that need to be 

[jira] [Created] (HDDS-1693) Enable Partitioned-Index-Filters for OM Metadata Manager

2019-06-16 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-1693:
---

 Summary: Enable Partitioned-Index-Filters for OM Metadata Manager
 Key: HDDS-1693
 URL: https://issues.apache.org/jira/browse/HDDS-1693
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Affects Versions: 0.4.0
Reporter: Mukul Kumar Singh


Enable Partitioned-Index-Filters for OM Metadata Manager; this will help in 
caching metadata blocks effectively as the size of the objects increases.

https://github.com/facebook/rocksdb/wiki/Partitioned-Index-Filters#how-to-use-it
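
For illustration, a minimal sketch of what enabling partitioned index/filter blocks could look like through the RocksDB Java API, following the wiki page above (the wrapper class below is hypothetical, not OM code, and exact setter names can vary slightly between RocksDB releases):

{code:java}
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.BloomFilter;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.IndexType;
import org.rocksdb.LRUCache;

public class PartitionedIndexFilterSketch {

  // Builds column family options with partitioned index and filter blocks,
  // following the settings described in the RocksDB wiki page above.
  public static ColumnFamilyOptions buildOptions() {
    BlockBasedTableConfig tableConfig = new BlockBasedTableConfig()
        // Two-level index search turns on index partitioning.
        .setIndexType(IndexType.kTwoLevelIndexSearch)
        // Partition the (full, not block-based) bloom filters like the index.
        .setFilterPolicy(new BloomFilter(10, false))
        .setPartitionFilters(true)
        // Target size of each index/filter partition.
        .setMetadataBlockSize(4096)
        // Keep index/filter partitions in the block cache so only the hot
        // partitions stay resident as the key space grows.
        .setCacheIndexAndFilterBlocks(true)
        .setPinTopLevelIndexAndFilter(true)
        .setBlockCache(new LRUCache(256L * 1024 * 1024));
    return new ColumnFamilyOptions().setTableFormatConfig(tableConfig);
  }
}
{code}

With cacheIndexAndFilterBlocks enabled, only the top-level index plus the hot partitions need to stay in memory, which is what keeps metadata caching effective as the DB grows.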






[jira] [Created] (HDDS-1692) RDBTable#iterator should disable caching of the keys during iteration

2019-06-16 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-1692:
---

 Summary: RDBTable#iterator should disable caching of the keys during iteration
 Key: HDDS-1692
 URL: https://issues.apache.org/jira/browse/HDDS-1692
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Reporter: Mukul Kumar Singh


An iterator normally loads every block it scans into the block cache; this 
causes thrashing of the actively used keys in the DB.

This option is documented here:
https://github.com/facebook/rocksdb/wiki/Basic-Operations#cache
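
A minimal sketch of how the iterator could be opened with cache fill disabled via the RocksDB Java API (the helper method below is illustrative only, not the actual RDBTable code):

{code:java}
import org.rocksdb.ReadOptions;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksIterator;

public class NoCacheFillIteratorSketch {

  // Scans the table without inserting the scanned blocks into the block
  // cache, so a full iteration does not evict the hot working set.
  static void scanWithoutCachePollution(RocksDB db) {
    try (ReadOptions readOptions = new ReadOptions().setFillCache(false);
         RocksIterator it = db.newIterator(readOptions)) {
      for (it.seekToFirst(); it.isValid(); it.next()) {
        byte[] key = it.key();
        byte[] value = it.value();
        // process key/value here ...
      }
    }
  }
}
{code}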






[jira] [Created] (HDDS-1691) RDBTable#isExist should use RocksDB#keyMayExist

2019-06-16 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-1691:
---

 Summary: RDBTable#isExist should use RocksDB#keyMayExist
 Key: HDDS-1691
 URL: https://issues.apache.org/jira/browse/HDDS-1691
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Mukul Kumar Singh


RDBTable#isExist can use RocksDB#keyMayExist; this avoids the cost of reading 
the value for the key.

Please refer to 
https://github.com/facebook/rocksdb/blob/7a8d7358bb40b13a06c2c6adc62e80295d89ed05/java/src/main/java/org/rocksdb/RocksDB.java#L2184
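
A rough sketch of the pattern (illustrative only, not the actual RDBTable code; it uses the StringBuilder-based keyMayExist overload found in older RocksDB Java releases, while newer releases expose a Holder-based variant instead). Since keyMayExist can return false positives but never false negatives, a positive answer may still need a confirming get():

{code:java}
import org.rocksdb.ColumnFamilyHandle;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class KeyMayExistSketch {

  // Cheap existence check: keyMayExist consults memtables and bloom filters
  // and never gives a false negative.  A "true" answer can be a false
  // positive, so it is confirmed with a real lookup only in that case.
  static boolean isExist(RocksDB db, ColumnFamilyHandle family, byte[] key)
      throws RocksDBException {
    StringBuilder value = new StringBuilder();
    if (!db.keyMayExist(family, key, value)) {
      return false;  // definitely not present, no value read at all
    }
    // If the value was found in memory it is already in 'value'; otherwise
    // do a real get() to rule out a bloom-filter false positive.
    return value.length() > 0 || db.get(family, key) != null;
  }
}
{code}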






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-06-16 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1169/

[Jun 15, 2019 3:05:20 AM] (weichiu) HADOOP-16336. finish variable is unused in 
ZStandardCompressor.
[Jun 15, 2019 1:47:10 PM] (weichiu) HDFS-14203. Refactor OIV Delimited output 
entry building mechanism.
[Jun 15, 2019 8:47:07 PM] (github) HDDS-1601. Implement updating 
lastAppliedIndex after buffer flush to OM




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore
 
   Unread field:TimelineEventSubDoc.java:[line 56] 
   Unread field:TimelineMetricSubDoc.java:[line 44] 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 

Failed junit tests :

   hadoop.hdfs.TestFileCorruption 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.mapreduce.v2.app.TestRuntimeEstimators 
   hadoop.ozone.container.common.impl.TestHddsDispatcher 
   hadoop.hdds.scm.node.TestNodeReportHandler 
   hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis 
   hadoop.ozone.client.rpc.TestOzoneAtRestEncryption 
   hadoop.ozone.client.rpc.TestOzoneRpcClient 
   hadoop.ozone.client.rpc.TestWatchForCommit 
   hadoop.ozone.client.rpc.TestSecureOzoneRpcClient 
   hadoop.hdds.scm.pipeline.TestRatisPipelineProvider 
   hadoop.fs.ozone.contract.ITestOzoneContractRootDir 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1169/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1169/artifact/out/diff-compile-javac-root.txt
  [332K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1169/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1169/artifact/out/diff-patch-hadolint.txt
  [8.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1169/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1169/artifact/out/diff-patch-pylint.txt
  [120K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1169/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1169/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1169/artifact/out/whitespace-eol.txt
  [9.6M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1169/artifact/out/whitespace-tabs.txt
  [1.1M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1169/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-documentstore-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1169/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1169/artifact/out/branch-findbugs-hadoop-submarine_hadoop-submarine-tony-runtime.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1169/artifact/out/branch-findbugs-hadoop-submarine_hadoop-submarine-yarnservice-runtime.txt
  [4.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1169/artifact/out/diff-javadoc-javadoc-root.txt
  [752K]

   unit:

   

Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-06-16 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/354/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   Class org.apache.hadoop.fs.GlobalStorageStatistics defines non-transient 
non-serializable instance field map In GlobalStorageStatistics.java:instance 
field map In GlobalStorageStatistics.java 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.sls.TestSLSRunner 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/354/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/354/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/354/artifact/out/diff-compile-cc-root-jdk1.8.0_212.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/354/artifact/out/diff-compile-javac-root-jdk1.8.0_212.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/354/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/354/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/354/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/354/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/354/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/354/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/354/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/354/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/354/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/354/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/354/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/354/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/354/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_212.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/354/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [224K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/354/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt
  [12K]