Hadoop storage online sync (Mandarin) happening in 2 hours

2019-10-09 Thread Wei-Chiu Chuang
I would like to talk about Erasure Coding development this time around, but
feel free to join and chime in.

Join via Zoom:
https://docs.google.com/document/d/1XkrcyVil_ORV1UP-JhosGzK8qWGXXX3wuplo4RtC7u0/edit

Past meeting minutes:
https://docs.google.com/document/d/1jXM5Ujvf-zhcyw_5kiQVx6g-HeKe-YGnFS_1-qFXomI/edit#heading=h.21derujgi03p


[jira] [Created] (HDDS-2279) S3 commands not working on HA cluster

2019-10-09 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2279:


 Summary: S3 commands not working on HA cluster
 Key: HDDS-2279
 URL: https://issues.apache.org/jira/browse/HDDS-2279
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


ozone s3 getSecret and ozone s3 path are not working on an OM HA cluster.

This is because these commands do not take a URI as a parameter, and for the
shell in HA, passing the URI is mandatory.






[jira] [Created] (HDDS-2278) Run S3 test suite on OM HA cluster

2019-10-09 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2278:


 Summary: Run S3 test suite on OM HA cluster
 Key: HDDS-2278
 URL: https://issues.apache.org/jira/browse/HDDS-2278
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


This will add a new compose setup with 3 OMs and start SCM, S3G, and a Datanode.

Run the existing test suite against this new docker-compose cluster.






[jira] [Resolved] (HDDS-2191) Handle bucket create request in OzoneNativeAuthorizer

2019-10-09 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2191.
--
Resolution: Fixed

> Handle bucket create request in OzoneNativeAuthorizer
> -
>
> Key: HDDS-2191
> URL: https://issues.apache.org/jira/browse/HDDS-2191
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Security
>Affects Versions: 0.5.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> OzoneNativeAuthorizer should handle bucket create request when the bucket 
> object is not yet created.






Re: [DISCUSS] About creation of Hadoop Thirdparty repository for shaded artifacts

2019-10-09 Thread Wei-Chiu Chuang
Hi, I am late to this but I am keen to understand more.

To be exact, how can we better use the thirdparty repo? Looking at HBase as
an example, it looks like everything that is known to break a lot after an
update gets shaded into the hbase-thirdparty artifact: guava, netty, etc.
Is the purpose to isolate these naughty dependencies?

On Wed, Oct 9, 2019 at 12:38 PM Vinayakumar B 
wrote:

> Hi All,
>
> I have updated the PR as per @Owen O'Malley's suggestions.
>
> i. Renamed the module to 'hadoop-shaded-protobuf37'
> ii. Kept the shaded package to 'o.a.h.thirdparty.protobuf37'
>
> Please review!!
>
> Thanks,
> -Vinay
>
>
> On Sat, Sep 28, 2019 at 10:29 AM 张铎(Duo Zhang) 
> wrote:
>
> > For HBase we have a separated repo for hbase-thirdparty
> >
> > https://github.com/apache/hbase-thirdparty
> >
> > We will publish the artifacts to nexus so we do not need to include
> > binaries in our git repo, just add a dependency in the pom.
> >
> >
> >
> https://mvnrepository.com/artifact/org.apache.hbase.thirdparty/hbase-shaded-protobuf
> >
> >
> > And it has its own release cycles, only when there are special
> requirements
> > or we want to upgrade some of the dependencies. This is the vote thread
> for
> > the newest release, where we want to provide a shaded gson for jdk7.
> >
> >
> >
> https://lists.apache.org/thread.html/f12c589baabbc79c7fb2843422d4590bea982cd102e2bd9d21e9884b@%3Cdev.hbase.apache.org%3E
> >
> >
> > Thanks.
> >
> > > Vinayakumar B wrote on Sat, Sep 28, 2019 at 1:28 AM:
> >
> > > Please find replies inline.
> > >
> > > -Vinay
> > >
> > > On Fri, Sep 27, 2019 at 10:21 PM Owen O'Malley  >
> > > wrote:
> > >
> > > > I'm very unhappy with this direction. In particular, I don't think
> git
> > is
> > > > a good place for distribution of binary artifacts. Furthermore, the
> PMC
> > > > shouldn't be releasing anything without a release vote.
> > > >
> > > >
> > > The proposed solution doesn't release any binaries in git. It's actually a
> > > complete sub-project which follows the entire release process, including a
> > > public VOTE. I have already mentioned that the release process is similar to
> > > Hadoop's. To be specific, it uses (almost) the same script used in Hadoop to
> > > generate artifacts, sign them, and deploy them to the staging repository.
> > > Please let me know if I am conveying anything wrong.
> > >
> > >
> > > > I'd propose that we make a third party module that contains the
> > *source*
> > > > of the pom files to build the relocated jars. This should absolutely
> be
> > > > treated as a last resort for the mostly Google projects that
> regularly
> > > > break binary compatibility (eg. Protobuf & Guava).
> > > >
> > > >
> > > The same has been implemented in the PR
> > > https://github.com/apache/hadoop-thirdparty/pull/1. Please check and let me
> > > know if I misunderstood. Yes, this is the last option we have AFAIK.
> > >
> > >
> > > > In terms of naming, I'd propose something like:
> > > >
> > > > org.apache.hadoop.thirdparty.protobuf2_5
> > > > org.apache.hadoop.thirdparty.guava28
> > > >
> > > > In particular, I think we absolutely need to include the version of
> the
> > > > underlying project. On the other hand, since we should not be shading
> > > > *everything* we can drop the leading com.google.
> > > >
> > > >
> > > IMO, this naming convention makes it easy to identify the underlying project,
> > > but it will be difficult to maintain going forward if the underlying project's
> > > version changes. Since the thirdparty module has its own releases, each of
> > > those releases can be mapped to a specific version of the underlying project.
> > > The binary artifact can even include a MANIFEST with the underlying project's
> > > details, as per Steve's suggestion on HADOOP-13363.
> > > That said, if you still prefer to have the project version in the artifact id,
> > > it can be done.
> > >
> > > > The Hadoop project can make releases of the thirdparty module:
> > > >
> > > > <dependency>
> > > >   <groupId>org.apache.hadoop</groupId>
> > > >   <artifactId>hadoop-thirdparty-protobuf25</artifactId>
> > > >   <version>1.0</version>
> > > > </dependency>
> > > >
> > > >
> > > > Note that the version has to be the hadoop thirdparty release number, which
> > > > is part of why you need to have the underlying version in the artifact
> > > > name. These we can push to maven central as new releases from Hadoop.
> > > >
> > > >
> > > Exactly, the same has been implemented in the PR. The hadoop-thirdparty
> > > module has its own releases, but in the HADOOP Jira, thirdparty versions can
> > > be differentiated using the prefix "thirdparty-".
> > >
> > > The same solution is being followed in HBase. Maybe people involved in HBase
> > > can add some points here.
> > >
> > > Thoughts?
> > > >
> > > > .. Owen
> > > >
> > > > On Fri, Sep 27, 2019 at 8:38 AM Vinayakumar B <
> vinayakum...@apache.org
> > >
> > > > wrote:
> > > >
> > > >> Hi All,
> > > >>
> > > >>I wanted to discuss the separate repo for thirdparty dependencies
> > > >> which we need to shade and include in Hadoop component's jar

Re: [DISCUSS] About creation of Hadoop Thirdparty repository for shaded artifacts

2019-10-09 Thread Vinayakumar B
Hi All,

I have updated the PR as per @Owen O'Malley's suggestions.

i. Renamed the module to 'hadoop-shaded-protobuf37'
ii. Kept the shaded package to 'o.a.h.thirdparty.protobuf37'
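
For reviewers new to shading, the effect on downstream code is just a package
rename; a rough sketch, assuming the relocated package name above (the actual
released package name may differ):

// Sketch only: with relocation, code imports the shaded classes instead of
// com.google.protobuf, so another protobuf version can coexist on the classpath.
// The package below follows the naming proposed in this thread and is an
// assumption, not necessarily the final released name.
import org.apache.hadoop.thirdparty.protobuf37.ByteString;

public class ShadedProtobufUsage {
  public static void main(String[] args) {
    // Behaves exactly like com.google.protobuf.ByteString; only the package
    // prefix is rewritten by the shade plugin's relocation rules.
    ByteString bs = ByteString.copyFromUtf8("hadoop-thirdparty");
    System.out.println(bs.size());
  }
}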

Please review!!

Thanks,
-Vinay


On Sat, Sep 28, 2019 at 10:29 AM 张铎(Duo Zhang) 
wrote:

> For HBase we have a separated repo for hbase-thirdparty
>
> https://github.com/apache/hbase-thirdparty
>
> We will publish the artifacts to nexus so we do not need to include
> binaries in our git repo, just add a dependency in the pom.
>
>
> https://mvnrepository.com/artifact/org.apache.hbase.thirdparty/hbase-shaded-protobuf
>
>
> And it has its own release cycles, only when there are special requirements
> or we want to upgrade some of the dependencies. This is the vote thread for
> the newest release, where we want to provide a shaded gson for jdk7.
>
>
> https://lists.apache.org/thread.html/f12c589baabbc79c7fb2843422d4590bea982cd102e2bd9d21e9884b@%3Cdev.hbase.apache.org%3E
>
>
> Thanks.
>
> Vinayakumar B wrote on Sat, Sep 28, 2019 at 1:28 AM:
>
> > Please find replies inline.
> >
> > -Vinay
> >
> > On Fri, Sep 27, 2019 at 10:21 PM Owen O'Malley 
> > wrote:
> >
> > > I'm very unhappy with this direction. In particular, I don't think git
> is
> > > a good place for distribution of binary artifacts. Furthermore, the PMC
> > > shouldn't be releasing anything without a release vote.
> > >
> > >
> > The proposed solution doesn't release any binaries in git. It's actually a
> > complete sub-project which follows the entire release process, including a
> > public VOTE. I have already mentioned that the release process is similar to
> > Hadoop's. To be specific, it uses (almost) the same script used in Hadoop to
> > generate artifacts, sign them, and deploy them to the staging repository.
> > Please let me know if I am conveying anything wrong.
> >
> >
> > > I'd propose that we make a third party module that contains the
> *source*
> > > of the pom files to build the relocated jars. This should absolutely be
> > > treated as a last resort for the mostly Google projects that regularly
> > > break binary compatibility (eg. Protobuf & Guava).
> > >
> > >
> > The same has been implemented in the PR
> > https://github.com/apache/hadoop-thirdparty/pull/1. Please check and let me
> > know if I misunderstood. Yes, this is the last option we have AFAIK.
> >
> >
> > > In terms of naming, I'd propose something like:
> > >
> > > org.apache.hadoop.thirdparty.protobuf2_5
> > > org.apache.hadoop.thirdparty.guava28
> > >
> > > In particular, I think we absolutely need to include the version of the
> > > underlying project. On the other hand, since we should not be shading
> > > *everything* we can drop the leading com.google.
> > >
> > >
> > IMO, this naming convention makes it easy to identify the underlying project,
> > but it will be difficult to maintain going forward if the underlying project's
> > version changes. Since the thirdparty module has its own releases, each of
> > those releases can be mapped to a specific version of the underlying project.
> > The binary artifact can even include a MANIFEST with the underlying project's
> > details, as per Steve's suggestion on HADOOP-13363.
> > That said, if you still prefer to have the project version in the artifact id,
> > it can be done.
> >
> > > The Hadoop project can make releases of the thirdparty module:
> > >
> > > <dependency>
> > >   <groupId>org.apache.hadoop</groupId>
> > >   <artifactId>hadoop-thirdparty-protobuf25</artifactId>
> > >   <version>1.0</version>
> > > </dependency>
> > >
> > >
> > > Note that the version has to be the hadoop thirdparty release number, which
> > > is part of why you need to have the underlying version in the artifact
> > > name. These we can push to maven central as new releases from Hadoop.
> > >
> > >
> > Exactly, the same has been implemented in the PR. The hadoop-thirdparty
> > module has its own releases, but in the HADOOP Jira, thirdparty versions can
> > be differentiated using the prefix "thirdparty-".
> >
> > The same solution is being followed in HBase. Maybe people involved in HBase
> > can add some points here.
> >
> > Thoughts?
> > >
> > > .. Owen
> > >
> > > On Fri, Sep 27, 2019 at 8:38 AM Vinayakumar B  >
> > > wrote:
> > >
> > >> Hi All,
> > >>
> > >>I wanted to discuss the separate repo for thirdparty dependencies
> > >> which we need to shade and include in Hadoop component's jars.
> > >>
> > >>Apologies for the big text ahead, but this needs a clear explanation!!
> > >>
> > >>Right now the most needed such dependency is protobuf. The protobuf
> > >> dependency was not upgraded beyond 2.5.0 for fear that downstream builds,
> > >> which depend on the transitive protobuf dependency coming from hadoop's
> > >> jars, may fail with the upgrade. Apparently protobuf does not guarantee
> > >> source compatibility, though it guarantees wire compatibility between
> > >> versions. Because of this behavior, a version upgrade may cause breakage
> > >> in known and unknown (private?) downstreams.
> > >>
> > >>So to tackle this, we came up with the following proposal in HADOOP-

Re: Please cherry pick commits to lower branches

2019-10-09 Thread Steve Loughran
created https://issues.apache.org/jira/browse/HADOOP-16646 for the s3a
stuff.

branch-3.2 doesn't build right now, someone needs to fix that first; I'll
start running the full ITest suites off a commit which does build...

On Wed, Oct 9, 2019 at 1:51 PM Steve Loughran  wrote:

>
> abfs stuff has gone into 3.2; credit to Thomas Marquardt and Da Zhou there.
>
> I'll do the same for S3A, including SDK updates.
> And I'd like to pull back two changes to the FS APIs:
>
>
> HADOOP-15691 PathCapabilities. Lets apps probe for FS instances having
> specific features/semantics.
> HADOOP-15229 Add FS/FC builder open() API.
>
> Making the APIs broadly available, even if any tuning in the object stores
> is left out, makes it possible to adopt them.
>
> One thing to highlight which has caused problems with some backporting is the
> mockito update https://issues.apache.org/jira/browse/HADOOP-16275
>
> Without that, mock tests can still compile, but they go on to fail. I think
> we will need to put that into 3.2 at the very least. Same for SLF4J.
>
> Backporting big patches to 3.1.x is harder because of the moves to slf4j
> spanning things: you may not have merge conflicts, but things don't compile.
>
>
> On Tue, Oct 8, 2019 at 10:15 AM Wei-Chiu Chuang 
> wrote:
>
>> I spent the whole last week cherry picking commits from trunk/branch-3.2
>> to
>> branch-3.1 (should've done this prior to 3.1.4 code freeze). There were
>> about 50-60 of them, many of them are conflict-free, and several of them
>> are critical bug fixes.
>>
>> If your commit stays only in trunk, it'll be useless for the community until
>> the next minor release, and for many months after that until people start
>> using the new release.
>>
>> Here are a few tips:
>> (1) A dependency update to address a known security vulnerability should be
>> cherry picked into all lower branches, especially when it updates the
>> maintenance release number. Example: update commons-compress from 1.18 to
>> 1.19.
>>
>> (2) blocker/critical bug fixes should be backported to all applicable
>> branches.
>>
>> (3) Because of the removal of commons-logging and a few code refactors,
>> commits may apply cleanly but not compile in branch-3.2, branch-3.1, and
>> lower branches. Please spend the time to verify a commit is good.
>>
>> Best
>> Weichiu
>>
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-10-09 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1284/

[Oct 8, 2019 8:31:44 AM] (ayushsaxena) HDFS-14814. RBF: 
RouterQuotaUpdateService supports inherited rule.
[Oct 8, 2019 8:44:14 AM] (ayushsaxena) HDFS-14859. Prevent unnecessary 
evaluation of costly operation
[Oct 8, 2019 6:13:53 PM] (bharat) HDDS-2260. Avoid evaluation of LOG.trace and 
LOG.debug statement in the
[Oct 8, 2019 6:20:13 PM] (jhung) YARN-9760. Support configuring application 
priorities on a workflow
[Oct 8, 2019 6:56:52 PM] (cliang) HDFS-14509. DN throws InvalidToken due to 
inequality of password when
[Oct 8, 2019 8:03:14 PM] (github) HDDS-2244. Use new ReadWrite lock in 
OzoneManager. (#1589)




-1 overall


The following subsystems voted -1:
asflicense compile findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   Redundant nullcheck of dir, which is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:[line 66] 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
CosNInputStream.java:[line 87] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, 
File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long):in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long): new String(byte[]) At 
CosNativeFileSystemStore.java:[line 178] 
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, 
String, String, int) may fail to clean up java.io.InputStream Obligation to 
clean up resource created at CosNativeFileSystemStore.java:fail to clean up 
java.io.InputStream Obligation to clean up resource created at 
CosNativeFileSystemStore.java:[line 252] is not discharged 

FindBugs :

   module:hadoop-ozone/csi 
   Useless control flow in 
csi.v1.Csi$CapacityRange$Builder.maybeForceBuilderInitialization() At Csi.java: 
At Csi.java:[line 15977] 
   Class csi.v1.Csi$ControllerExpandVolumeRequest defines non-transient 
non-serializable instance field secrets_ In Csi.java:instance field secrets_ In 
Csi.java 
   Useless control flow in 
csi.v1.Csi$ControllerExpandVolumeRequest$Builder.maybeForceBuilderInitialization()
 At Csi.java: At Csi.java:[line 50408] 
   Useless control flow in 
csi.v1.Csi$ControllerExpandVolumeResponse$Builder.maybeForceBuilderInitialization()
 At Csi.java: At Csi.java:[line 51319] 
   Useless control flow in 
csi.v1.Csi$ControllerGetCapabilitiesRequest$Builder.maybeForceBuilderInitialization()
 At Csi.java: At Csi.java:[line 39596] 
   Class csi.v1.Csi$ControllerPublishVolumeRequest defines non-transient 
non-serializable instanc

Re: Please cherry pick commits to lower branches

2019-10-09 Thread Steve Loughran
abfs stuff has gone into 3.2; credit to Thomas Marquardt and Da Zhou there.

I'll do the same for S3A, including SDK updates.
And I'd like to pull back two changes to the FS APIs:


HADOOP-15691 PathCapabilities. Lets apps probe for FS instances having
specific features/semantics.
HADOOP-15229 Add FS/FC builder open() API.

Making the APIs broadly available, even if any tuning in the object stores
is left out, makes it possible to adopt them.

One thing to highlight which has caused problems with some backporting is the
mockito update https://issues.apache.org/jira/browse/HADOOP-16275

Without that, mock tests can still compile, but they go on to fail. I think
we will need to put that into 3.2 at the very least. Same for SLF4J.

Backporting big patches to 3.1.x is harder because of the moves to slf4j
spanning things: you may not have merge conflicts, but things don't compile.


On Tue, Oct 8, 2019 at 10:15 AM Wei-Chiu Chuang  wrote:

> I spent the whole last week cherry picking commits from trunk/branch-3.2 to
> branch-3.1 (should've done this prior to 3.1.4 code freeze). There were
> about 50-60 of them, many of them are conflict-free, and several of them
> are critical bug fixes.
>
> If your commit stays only in trunk, it'll be useless for the community until
> the next minor release, and for many months after that until people start
> using the new release.
>
> Here are a few tips:
> (1) A dependency update to address a known security vulnerability should be
> cherry picked into all lower branches, especially when it updates the
> maintenance release number. Example: update commons-compress from 1.18 to
> 1.19.
>
> (2) blocker/critical bug fixes should be backported to all applicable
> branches.
>
> (3) Because of the removal of commons-logging and a few code refactors,
> commits may apply cleanly but not compile in branch-3.2, branch-3.1, and
> lower branches. Please spend the time to verify a commit is good.
>
> Best
> Weichiu
>


Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-10-09 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/469/

[Oct 8, 2019 8:00:30 AM] (ayushsaxena) HDFS-14655. [SBN Read] Namenode crashes 
if one of The JN is down.
[Oct 8, 2019 6:19:39 PM] (jhung) YARN-9760. Support configuring application 
priorities on a workflow




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.yarn.webapp.TestWebApp 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/469/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/469/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/469/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/469/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/469/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/469/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/469/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/469/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/469/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/469/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/469/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/469/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/469/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/469/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/469/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/469/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_222.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/469/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [164K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/469/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [232K]
   
https://builds.apache.o

[jira] [Created] (HDDS-2277) Consider allowing maintenance end time to be specified in human readable format

2019-10-09 Thread Stephen O'Donnell (Jira)
Stephen O'Donnell created HDDS-2277:
---

 Summary: Consider allowing maintenance end time to be specified in 
human readable format
 Key: HDDS-2277
 URL: https://issues.apache.org/jira/browse/HDDS-2277
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Stephen O'Donnell


The initial command for maintenance mode allows a user to specify the number of
hours after which maintenance will end.

It may be a better user experience to allow them to specify the time like:

1.5 days

1 day

10 hours

etc

We should consider whether it makes sense to add this feature.
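
As a rough illustration of the idea (not the actual SCM CLI code; the class
name, supported units, and parsing rules below are assumptions), the conversion
could look like:

{code:java}
import java.util.concurrent.TimeUnit;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper for illustration only: converts strings such as
// "1.5 days", "1 day" or "10 hours" into a number of hours.
public class MaintenanceDurationParser {

  private static final Pattern DURATION =
      Pattern.compile("\\s*([0-9]+(?:\\.[0-9]+)?)\\s*(hours?|days?)\\s*");

  public static double parseToHours(String value) {
    Matcher m = DURATION.matcher(value.toLowerCase());
    if (!m.matches()) {
      throw new IllegalArgumentException("Cannot parse duration: " + value);
    }
    double amount = Double.parseDouble(m.group(1));
    // Days are converted to hours; hours pass through unchanged.
    return m.group(2).startsWith("day")
        ? amount * TimeUnit.DAYS.toHours(1) : amount;
  }

  public static void main(String[] args) {
    System.out.println(parseToHours("1.5 days")); // 36.0
    System.out.println(parseToHours("10 hours")); // 10.0
  }
}
{code}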






[jira] [Created] (HDDS-2276) Allow users to pass hostnames or IP when decommissioning nodes

2019-10-09 Thread Stephen O'Donnell (Jira)
Stephen O'Donnell created HDDS-2276:
---

 Summary: Allow users to pass hostnames or IP when decommissioning 
nodes
 Key: HDDS-2276
 URL: https://issues.apache.org/jira/browse/HDDS-2276
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: SCM
Affects Versions: 0.5.0
Reporter: Stephen O'Donnell


In the initial implementation, the user must pass a hostname or the IP when 
decommissioning a host, depending on the setting:

dfs.datanode.use.datanode.hostname

It would be better if the user could pass either the hostname or the IP.
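
A minimal sketch of how either form could be normalized before matching a
datanode (illustrative only; the actual SCM decommission code path is not shown):

{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;

// Illustrative only: resolve whatever the operator passed (hostname or IP)
// into both forms, so the node can be matched regardless of
// dfs.datanode.use.datanode.hostname.
public class NodeAddressNormalizer {
  public static void main(String[] args) throws UnknownHostException {
    String userInput = args.length > 0 ? args[0] : "localhost";
    // getByName accepts either a hostname or a textual IP address.
    InetAddress addr = InetAddress.getByName(userInput);
    System.out.println("ip=" + addr.getHostAddress()
        + ", hostname=" + addr.getCanonicalHostName());
  }
}
{code}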






[jira] [Created] (HDDS-2275) In BatchOperation.SingleOperation, do not clone byte[]

2019-10-09 Thread Tsz-wo Sze (Jira)
Tsz-wo Sze created HDDS-2275:


 Summary: In BatchOperation.SingleOperation, do not clone byte[]
 Key: HDDS-2275
 URL: https://issues.apache.org/jira/browse/HDDS-2275
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Tsz-wo Sze
Assignee: Tsz-wo Sze


byte[] is cloned once in the constructor and then it is cloned again in the 
getter methods.
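
A minimal sketch of the pattern being flagged (the class and field names below
are illustrative, not the actual BatchOperation.SingleOperation code):

{code:java}
// Illustrative sketch: the value is defensively copied twice, once on the way
// in and once on the way out, which is what this JIRA proposes to avoid.
public class SingleOperationSketch {
  private final byte[] value;

  public SingleOperationSketch(byte[] value) {
    this.value = value.clone();   // first copy, in the constructor
  }

  public byte[] getValue() {
    return value.clone();         // second copy, in the getter
  }

  public static void main(String[] args) {
    SingleOperationSketch op = new SingleOperationSketch(new byte[] {1, 2, 3});
    System.out.println(op.getValue().length); // 3, at the cost of two array copies
  }
}
{code}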






[jira] [Resolved] (HDDS-2261) Change readChunk methods to return ByteBuffer

2019-10-09 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee resolved HDDS-2261.
---
Resolution: Fixed

> Change readChunk methods to return ByteBuffer
> -
>
> Key: HDDS-2261
> URL: https://issues.apache.org/jira/browse/HDDS-2261
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Istvan Fajth
>Assignee: Istvan Fajth
>Priority: Major
>  Labels: pull-request-available
>
> During the refactoring for HDDS-2233 I realized the following:
> The KeyValueHandler.handleReadChunk and handleGetSmallFile methods use
> ChunkManager.readChunk, which returns a byte[], but then both of them (the
> only usage points) convert the returned byte[] to a ByteBuffer, and then to
> a ByteString.
> ChunkManagerImpl, on the other hand, utilizes ChunkUtils.readChunk in its
> readChunk, which converts a ByteBuffer back to a byte[] in order to conform
> to the return type.
> I opened this JIRA to change the internal logic to fully rely on ByteBuffers
> instead of converting from ByteBuffer to byte[] and then to ByteBuffer again.
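
To illustrate the conversion chain described above, a simplified sketch (not
the actual KeyValueHandler/ChunkManager code; it assumes only the standard
protobuf ByteString API):

{code:java}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import com.google.protobuf.ByteString;

// Simplified sketch of the round trip this JIRA removes:
// ByteBuffer -> byte[] (readChunk's return type) -> ByteBuffer -> ByteString.
public class ChunkConversionSketch {

  // Stand-in for ChunkUtils.readChunk, which works with ByteBuffers internally.
  static ByteBuffer readFromDisk() {
    return ByteBuffer.wrap("chunk-data".getBytes(StandardCharsets.UTF_8));
  }

  public static void main(String[] args) {
    ByteBuffer fromDisk = readFromDisk();
    byte[] asArray = new byte[fromDisk.remaining()];
    fromDisk.get(asArray);                             // copy #1: ByteBuffer -> byte[]
    ByteString forResponse =
        ByteString.copyFrom(ByteBuffer.wrap(asArray)); // copy #2: back to ByteString
    System.out.println(forResponse.size());            // 10
  }
}
{code}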






[jira] [Resolved] (HDFS-14488) libhdfs SIGSEGV during shutdown of Java

2019-10-09 Thread Kotomi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kotomi resolved HDFS-14488.
---
Resolution: Cannot Reproduce

> libhdfs SIGSEGV during shutdown of Java
> ---
>
> Key: HDFS-14488
> URL: https://issues.apache.org/jira/browse/HDFS-14488
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.7.3, 3.2.0
> Environment: Centos 7
> gcc (GCC) 8.3.0
>Reporter: Kotomi
>Priority: Critical
>
> In short, it's quite similar to
> https://issues.apache.org/jira/browse/HDFS-13585, but the backtrace is different.
> We are using libhdfs for hdfs support in our native library, as a plugin:
> _dlopen_ when an hdfs path is detected and _dlclose_ when the application closes.
> The crash happens randomly, during shutdown of Java. GDB shows that in the
> function hdfsThreadDestructor, the JNIEnv v is not NULL but looks like a wild
> pointer. Therefore, it crashes when trying to get the JavaVM from its own JNIEnv.
> {code:java}
> 43: ret = (*env)->GetJavaVM(env, &vm);
> {code}
> Any workarounds? Thx.
> Backtrace from core:
> {code:java}
> #1920 0x7fac0386da47 in abort () from /usr/lib64/libc.so.6
> #1921 0x7fac0315b769 in os::abort(bool) () from 
> /usr/java/jdk1.8.0_201-amd64/jre/lib/amd64/server/libjvm.so
> #1922 0x7fac03320803 in VMError::report_and_die() () from 
> /usr/java/jdk1.8.0_201-amd64/jre/lib/amd64/server/libjvm.so
> #1923 0x7fac031659f5 in JVM_handle_linux_signal () from 
> /usr/java/jdk1.8.0_201-amd64/jre/lib/amd64/server/libjvm.so
> #1924 0x7fac031588b8 in signalHandler(int, siginfo*, void*) () from 
> /usr/java/jdk1.8.0_201-amd64/jre/lib/amd64/server/libjvm.so
> #1925 
> #1926 0x7fab028f7ef0 in hdfsThreadDestructor (v=0x7fab840919f8) at 
> /home/ambari-qa/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/thread_local_storage.c:43
> #1927 0x7fac04026bd2 in __nptl_deallocate_tsd () from 
> /usr/lib64/libpthread.so.0
> #1928 0x7fac04026de3 in start_thread () from /usr/lib64/libpthread.so.0
> #1929 0x7fac03933ead in clone () from /usr/lib64/libc.so.6
> {code}






[jira] [Created] (HDDS-2274) Avoid buffer copying in Codec

2019-10-09 Thread Tsz-wo Sze (Jira)
Tsz-wo Sze created HDDS-2274:


 Summary: Avoid buffer copying in Codec
 Key: HDDS-2274
 URL: https://issues.apache.org/jira/browse/HDDS-2274
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Tsz-wo Sze
Assignee: Tsz-wo Sze


Codec declares byte[] as a parameter in fromPersistedFormat(..) and as the
return type of toPersistedFormat(..). This leads to buffer copying when using
it with ByteString.
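
A sketch of the copying being described (the Codec interface below is
paraphrased for illustration, not the exact Ozone definition):

{code:java}
import java.nio.charset.StandardCharsets;
import com.google.protobuf.ByteString;

// Paraphrased interface: byte[] in the signatures forces a copy whenever the
// caller holds a ByteString, since ByteString.copyFrom(byte[]) and
// ByteString.toByteArray() both copy the backing data.
interface StringCodecSketch {
  byte[] toPersistedFormat(String object);
  String fromPersistedFormat(byte[] rawData);
}

public class CodecCopySketch {
  public static void main(String[] args) {
    StringCodecSketch codec = new StringCodecSketch() {
      @Override public byte[] toPersistedFormat(String object) {
        return object.getBytes(StandardCharsets.UTF_8);
      }
      @Override public String fromPersistedFormat(byte[] rawData) {
        return new String(rawData, StandardCharsets.UTF_8);
      }
    };

    ByteString wire = ByteString.copyFrom(codec.toPersistedFormat("key1")); // copy
    String back = codec.fromPersistedFormat(wire.toByteArray());            // copy again
    System.out.println(back);
  }
}
{code}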


