[jira] [Resolved] (HADOOP-16894) Announce user-zh mailing list

2020-02-28 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HADOOP-16894.
--
Resolution: Done

Resolving this one. I see the website 
(https://hadoop.apache.org/mailing_lists.html) has been updated.

> Announce user-zh mailing list
> -
>
> Key: HADOOP-16894
> URL: https://issues.apache.org/jira/browse/HADOOP-16894
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> A user-zh mailing list for Mandarin-speaking Hadoop users is set up now. 
> Let's make it public:
> (1) Add the mailing list to https://hadoop.apache.org/mailing_lists.html
> (2) Announce the ML at the user@, general@
> (3) Send notification to the ASF slack channel.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16896) TestIPC#testProxyUserBinding failure

2020-02-28 Thread Paul (Jira)
Paul created HADOOP-16896:
-

 Summary: TestIPC#testProxyUserBinding failure
 Key: HADOOP-16896
 URL: https://issues.apache.org/jira/browse/HADOOP-16896
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 3.1.3
 Environment: I'm running Ubuntu 18.04 in a VirtualBox VM hosted on 
Windows.
Reporter: Paul
 Attachments: TEST-org.apache.hadoop.ipc.TestIPC.xml, 
org.apache.hadoop.ipc.TestIPC.txt

mvn clean install -Pdist -Pnative -Dtar -DskipTests in hadoop-common fails due 
to a TestIPC failure, specifically testProxyUserBinding. I've tested the 
different components separately and they seem to work just fine, so I'm not 
sure why it's failing.
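
For triaging a failure like this, the failing method can be run on its own via Maven Surefire's test selection instead of the full `clean install`. A sketch only: the `-pl` module path and the `-Pnative` profile are assumptions based on a standard trunk checkout, and the command is echoed rather than executed here since it needs a full Hadoop source tree.

```shell
# Hypothetical reproduction: run only the failing method instead of the
# whole build. The -pl module path assumes a trunk checkout layout.
# Echoed rather than executed, since it requires a Hadoop source tree:
echo mvn -pl hadoop-common-project/hadoop-common test \
  -Dtest='TestIPC#testProxyUserBinding' -Pnative
```

Running the one method in isolation helps tell an environment problem (ports or hostname resolution inside the VM) apart from a genuine code regression.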






Re: [VOTE] Apache Hadoop Ozone 0.5.0-beta RC0

2020-02-28 Thread Arpit Agarwal
Hi Dinesh,

Thanks for spinning up this RC! It looks like we still had ~15 issues 
tagged as Blockers for 0.5.0 in Jira.

I’ve moved most of them out; however, the remaining 4 look like must-fixes.

https://issues.apache.org/jira/issues/?jql=project%20%3D%20%22HDDS%22%20and%20%22Target%20Version%2Fs%22%20in%20(0.5.0)%20and%20resolution%20%3D%20Unresolved%20and%20priority%20%3D%20Blocker
 


Thanks,
Arpit


> On Feb 27, 2020, at 8:23 PM, Dinesh Chitlangia  wrote:
> 
> Hi Folks,
> 
> We have put together RC0 for Apache Hadoop Ozone 0.5.0-beta.
> 
> The RC artifacts are at:
> https://home.apache.org/~dineshc/ozone-0.5.0-rc0/
> 
> The public key used for signing the artifacts can be found at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> 
> The maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1259
> 
> The RC tag in git is at:
> https://github.com/apache/hadoop-ozone/tree/ozone-0.5.0-beta-RC0
> 
> This release contains 800+ fixes/improvements [1].
> Thanks to everyone who put in the effort to make this happen.
> 
> *The vote will run for 7 days, ending on March 4th 2020 at 11:59 pm PST.*
> 
> Note: This release is beta quality; it’s not recommended for use in
> production, but we believe it’s stable enough to try out the feature
> set and collect feedback.
> 
> 
> [1] https://s.apache.org/ozone-0.5.0-fixed-issues
> 
> Thanks,
> Dinesh Chitlangia



Re: [VOTE] Release Apache Hadoop Thirdparty 1.0.0

2020-02-28 Thread Vinayakumar B
https://issues.apache.org/jira/browse/HADOOP-16895 has been created for
handling the LICENSE and NOTICE files.
A PR has also been raised with a proposal; please review:
https://github.com/apache/hadoop-thirdparty/pull/6

-Vinay


On Fri, Feb 28, 2020 at 11:48 PM Vinayakumar B 
wrote:

> Thanks Elek for detailed verification.
>
> Please find inline replies.
>
> -Vinay
>
>
> On Fri, Feb 28, 2020 at 7:49 PM Elek, Marton  wrote:
>
>>
>> Thank you very much to work on this release Vinay, 1.0.0 is always a
>> hard work...
>>
>>
>> 1. I downloaded it and I can build it from the source
>>
>> 2. Checked the signature and the sha512 of the src package and they are
>> fine
>>
>> 3. Yetus seems to be included in the source package. I am not sure if
>> it's intentional but I would remove the patchprocess directory from the
>> tar file.
>>
>> Since the dev-support/create-release script and assembly file are copied from
> the hadoop repo, I can see this issue exists in hadoop source release packages
> as well (e.g. I checked the 3.1.2 and 2.10 src packages).
> I will raise a Jira and fix this for both hadoop and thirdparty.
>
> 4. NOTICE.txt seems to be outdated (I am not sure, but I think the
>> Export Notice is unnecessary, especially for the source release, also
>> the note about the bouncycastle and Yarn server is unnecessary).
>>
>> Again, NOTICE.txt was copied from Hadoop and kept as is. I will create a
> jira to decide about the NOTICE and LICENSE files.
>
> 5. NOTICE-binary and LICENSE-binary seems to be unused (and they contain
>> unrelated entries, especially the NOTICE). IMHO
>>
>> We can decide in the Jira whether NOTICE-binary and LICENSE-binary to be
> used or not.
>
> 6. As far as I understand the binary release in this case is the maven
>> artifact. IANAL but the original protobuf license seems to be missing
>> from "unzip -p hadoop-shaded-protobuf_3_7-1.0.0.jar META-INF/LICENSE.txt"
>>
>
> I observed that there is one more file, "META-INF/DEPENDENCIES", generated
> by the shade plugin, which references the shaded artifacts and points to the
> original artifact's LICENSE. I think this should be sufficient for
> protobuf's original license.
> IMO, "META-INF/LICENSE.txt" should point to the current project's LICENSE,
> which in turn can hold the contents of (or pointers to) the dependencies'
> licenses. A similar approach is followed in the hadoop-shaded-client jars.
>
> Hadoop's artifacts will also be uploaded to the maven repo during a
> release, and they do not carry all LICENSE files either. The artifact just
> says "See licenses/ for text of these licenses", but that directory does
> not exist in the artifact. Maybe we need to fix this too.
>
> 7. Minor nit: I would suggest to use only the filename in the sha512
>> files (instead of having the /build/source/target prefix). It would help
>> to use `sha512 -c` command to validate the checksum.
>>
>>
> Again, this is from the create-release script. I will update the script.
>
> Thanks again to work on this,
>> Marton
>>
>> ps: I am not experienced with licensing enough to judge which one of
>> these are blocking and I might be wrong.
>>
>>
> IMO, none of these should be blocking; they can be handled before the next
> release. Still, if someone feels these should be fixed and the RC cut
> again, I am open to it.
>
> Thanks.
>
>>
>>
>> On 2/25/20 8:17 PM, Vinayakumar B wrote:
>> > Hi folks,
>> >
>> > Thanks to everyone's help on this release.
>> >
>> > I have created a release candidate (RC0) for Apache Hadoop Thirdparty
>> 1.0.0.
>> >
>> > RC Release artifacts are available at :
>> >
>> http://home.apache.org/~vinayakumarb/release/hadoop-thirdparty-1.0.0-RC0/
>> >
>> > Maven artifacts are available in staging repo:
>> >
>> https://repository.apache.org/content/repositories/orgapachehadoop-1258/
>> >
>> > The RC tag in git is here:
>> > https://github.com/apache/hadoop-thirdparty/tree/release-1.0.0-RC0
>> >
>> > And my public key is at:
>> > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>> >
>> > *This vote will run for 5 days, ending on March 1st 2020 at 11:59 pm
>> IST.*
>> >
>> > For the testing, I have verified Hadoop trunk compilation with
>> > "-DdistMgmtSnapshotsUrl=
>> >
>> https://repository.apache.org/content/repositories/orgapachehadoop-1258/
>> > -Dhadoop-thirdparty-protobuf.version=1.0.0"
>> >
>> > My +1 to start.
>> >
>> > -Vinay
>> >
>>
>


[GitHub] [hadoop-thirdparty] vinayakumarb opened a new pull request #6: HADOOP-16895. [thirdparty] Revisit LICENSEs and NOTICEs

2020-02-28 Thread GitBox
vinayakumarb opened a new pull request #6: HADOOP-16895. [thirdparty] Revisit 
LICENSEs and NOTICEs
URL: https://github.com/apache/hadoop-thirdparty/pull/6

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Created] (HADOOP-16895) [thirdparty] Revisit LICENSEs and NOTICEs

2020-02-28 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-16895:
--

 Summary: [thirdparty] Revisit LICENSEs and NOTICEs
 Key: HADOOP-16895
 URL: https://issues.apache.org/jira/browse/HADOOP-16895
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Vinayakumar B


LICENSE.txt and NOTICE.txt have many entries that are unrelated to thirdparty.
Revisit and clean up such entries.






Re: [VOTE] Release Apache Hadoop Thirdparty 1.0.0

2020-02-28 Thread Vinayakumar B
Thanks Elek for detailed verification.

Please find inline replies.

-Vinay


On Fri, Feb 28, 2020 at 7:49 PM Elek, Marton  wrote:

>
> Thank you very much to work on this release Vinay, 1.0.0 is always a
> hard work...
>
>
> 1. I downloaded it and I can build it from the source
>
> 2. Checked the signature and the sha512 of the src package and they are
> fine
>
> 3. Yetus seems to be included in the source package. I am not sure if
> it's intentional but I would remove the patchprocess directory from the
> tar file.
>
> Since the dev-support/create-release script and assembly file are copied from
the hadoop repo, I can see this issue exists in hadoop source release packages
as well (e.g. I checked the 3.1.2 and 2.10 src packages).
I will raise a Jira and fix this for both hadoop and thirdparty.

4. NOTICE.txt seems to be outdated (I am not sure, but I think the
> Export Notice is unnecessary, especially for the source release, also
> the note about the bouncycastle and Yarn server is unnecessary).
>
> Again, NOTICE.txt was copied from Hadoop and kept as is. I will create a
jira to decide about the NOTICE and LICENSE files.

5. NOTICE-binary and LICENSE-binary seems to be unused (and they contain
> unrelated entries, especially the NOTICE). IMHO
>
> We can decide in the Jira whether NOTICE-binary and LICENSE-binary to be
used or not.

6. As far as I understand the binary release in this case is the maven
> artifact. IANAL but the original protobuf license seems to be missing
> from "unzip -p hadoop-shaded-protobuf_3_7-1.0.0.jar META-INF/LICENSE.txt"
>

I observed that there is one more file, "META-INF/DEPENDENCIES", generated by
the shade plugin, which references the shaded artifacts and points to the
original artifact's LICENSE. I think this should be sufficient for protobuf's
original license.
IMO, "META-INF/LICENSE.txt" should point to the current project's LICENSE,
which in turn can hold the contents of (or pointers to) the dependencies'
licenses. A similar approach is followed in the hadoop-shaded-client jars.

Hadoop's artifacts will also be uploaded to the maven repo during a release,
and they do not carry all LICENSE files either. The artifact just says "See
licenses/ for text of these licenses", but that directory does not exist in
the artifact. Maybe we need to fix this too.

7. Minor nit: I would suggest to use only the filename in the sha512
> files (instead of having the /build/source/target prefix). It would help
> to use `sha512 -c` command to validate the checksum.
>
>
Again, this is from the create-release script. I will update the script.

Thanks again to work on this,
> Marton
>
> ps: I am not experienced with licensing enough to judge which one of
> these are blocking and I might be wrong.
>
>
IMO, none of these should be blocking; they can be handled before the next
release. Still, if someone feels these should be fixed and the RC cut again,
I am open to it.

Thanks.

>
>
> On 2/25/20 8:17 PM, Vinayakumar B wrote:
> > Hi folks,
> >
> > Thanks to everyone's help on this release.
> >
> > I have created a release candidate (RC0) for Apache Hadoop Thirdparty
> 1.0.0.
> >
> > RC Release artifacts are available at :
> >
> http://home.apache.org/~vinayakumarb/release/hadoop-thirdparty-1.0.0-RC0/
> >
> > Maven artifacts are available in staging repo:
> >
> https://repository.apache.org/content/repositories/orgapachehadoop-1258/
> >
> > The RC tag in git is here:
> > https://github.com/apache/hadoop-thirdparty/tree/release-1.0.0-RC0
> >
> > And my public key is at:
> > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >
> > *This vote will run for 5 days, ending on March 1st 2020 at 11:59 pm
> IST.*
> >
> > For the testing, I have verified Hadoop trunk compilation with
> > "-DdistMgmtSnapshotsUrl=
> > https://repository.apache.org/content/repositories/orgapachehadoop-1258/
> > -Dhadoop-thirdparty-protobuf.version=1.0.0"
> >
> > My +1 to start.
> >
> > -Vinay
> >
>


[Notice] Creation of the user-zh mailing list

2020-02-28 Thread Wei-Chiu Chuang
Hello everyone,

Apache Hadoop welcomes participants from around the world. On behalf of the
Hadoop PMC, I am pleased to announce that we have created the
user...@hadoop.apache.org mailing list.

The purpose of this mailing list is to give Chinese-speaking
(Simplified/Traditional) users a place to ask questions about Apache Hadoop.
Participants who are more comfortable in Chinese are welcome to ask their
questions on this list.

Over the past few years we have seen the local user community in China
flourish, including the first Hadoop community meetup hosted in Beijing by
the Apache Hadoop PMC last year. We hope that creating a Chinese-friendly
mailing list will make Apache Hadoop a more diverse community. We also hope
it will help Chinese-speaking users engage in the Apache Way, and serve as a
bridge between the Chinese-speaking user community and the global user
community.

Please note that although the user-zh mailing list is set up for discussions
in Chinese, development discussions, including design and code changes,
should still take place in English on *-dev@, JIRAs and GitHub.

The mailing list is available now, and our website will include it after the
next update. Anyone can subscribe by sending an email to
user-zh-subscr...@hadoop.apache.org. Messages from non-subscribers can be
posted after moderation.

- Wei-Chiu Chuang (on behalf of the Apache Hadoop PMC)



On Fri, Feb 28, 2020 at 9:30 AM Wei-Chiu Chuang  wrote:

> Hi!
>
> Apache Hadoop welcomes contributors from around the world. On behalf of
> the Hadoop PMC, I am pleased to announce the creation of a new mailing list
> user...@hadoop.apache.org.
>
> The intent of this mailing list is to act as a place for users to ask
> questions about Apache Hadoop in Chinese (Traditional/Simplified).
> Individuals who feel more comfortable communicating in Chinese should feel
> welcome to ask questions in Chinese on this list.
>
> Over the past few years we have observed a healthy, growing local user
> community in China. Evidence includes the first ever Hadoop
> Community Meetup
>  in
> Beijing hosted by the Apache Hadoop PMC last year. We hope that creating
> a Mandarin Chinese-friendly mailing list will make the Apache Hadoop
> project a more diverse community. We also hope that doing so will help
> Chinese users operate in the Apache Way, and serve as a bridge
> between the local user community and the global community.
>
> Please note that while the user-zh mailing list is set up for user
> discussions in Mandarin, development discussions such as design and code
> changes should still go to *-dev@, JIRAs and GitHub in English as is.
>
> The mailing list is live as of now and the website
>  will be updated shortly to
> include the mailing list. Anyone can subscribe to this list by sending an
> email to user-zh-subscr...@hadoop.apache.org. Non-subscribers may also
> post messages after moderator approval.
>
> - Wei-Chiu Chuang (on behalf of the Apache Hadoop PMC)
>


[ANNOUNCE] Creation of user-zh mailing list

2020-02-28 Thread Wei-Chiu Chuang
Hi!

Apache Hadoop welcomes contributors from around the world. On behalf of the
Hadoop PMC, I am pleased to announce the creation of a new mailing list
user...@hadoop.apache.org.

The intent of this mailing list is to act as a place for users to ask
questions about Apache Hadoop in Chinese (Traditional/Simplified).
Individuals who feel more comfortable communicating in Chinese should feel
welcome to ask questions in Chinese on this list.

Over the past few years we have observed a healthy, growing local user
community in China. Evidence includes the first ever Hadoop Community Meetup
 in
Beijing hosted by the Apache Hadoop PMC last year. We hope that creating
a Mandarin Chinese-friendly mailing list will make the Apache Hadoop
project a more diverse community. We also hope that doing so will help
Chinese users operate in the Apache Way, and serve as a bridge
between the local user community and the global community.

Please note that while the user-zh mailing list is set up for user
discussions in Mandarin, development discussions such as design and code
changes should still go to *-dev@, JIRAs and GitHub in English as is.

The mailing list is live as of now and the website
 will be updated shortly to
include the mailing list. Anyone can subscribe to this list by sending an
email to user-zh-subscr...@hadoop.apache.org. Non-subscribers may also post
messages after moderator approval.

- Wei-Chiu Chuang (on behalf of the Apache Hadoop PMC)


[jira] [Created] (HADOOP-16894) Announce user-zh mailing list

2020-02-28 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HADOOP-16894:


 Summary: Announce user-zh mailing list
 Key: HADOOP-16894
 URL: https://issues.apache.org/jira/browse/HADOOP-16894
 Project: Hadoop Common
  Issue Type: Task
Reporter: Wei-Chiu Chuang


A user-zh mailing list for Mandarin-speaking Hadoop users is set up now. Let's 
make it public:

(1) Add the mailing list to https://hadoop.apache.org/mailing_lists.html
(2) Announce the ML at the user@, general@







Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-02-28 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/

[Feb 27, 2020 8:38:42 AM] (surendralilhore) HDFS-15167. Block Report Interval 
shouldn't be reset apart from first
[Feb 27, 2020 3:48:14 PM] (github) HDFS-14668 Support Fuse with Users from 
multiple Security Realms (#1739)
[Feb 27, 2020 4:49:35 PM] (ayushsaxena) HDFS-15124. Crashing bugs in NameNode 
when using a valid configuration
[Feb 27, 2020 6:27:22 PM] (tmarq) HADOOP-16730: ABFS: Support for Shared Access 
Signatures (SAS).
[Feb 27, 2020 7:01:55 PM] (ayushsaxena) HDFS-15186. Erasure Coding: 
Decommission may generate the parity block's
[Feb 27, 2020 7:10:32 PM] (snemeth) YARN-10148. Add Unit test for queue ACL for 
both FS and CS. Contributed
[Feb 27, 2020 8:53:20 PM] (inigoiri) YARN-10155. 
TestDelegationTokenRenewer.testTokenThreadTimeout fails in
[Feb 27, 2020 9:18:30 PM] (inigoiri) YARN-10161. TestRouterWebServicesREST is 
corrupting STDOUT. Contributed




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   Redundant nullcheck of dir, which is known to be non-null, in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) At 
BufferPool.java:[line 66] 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:[line 87] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long): new String(byte[]) At 
CosNativeFileSystemStore.java:[line 178] 
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, 
String, String, int) may fail to clean up java.io.InputStream; the obligation 
to clean up the resource created at CosNativeFileSystemStore.java:[line 252] 
is not discharged 

Failed junit tests :

   hadoop.metrics2.source.TestJvmMetrics 
   hadoop.security.token.delegation.TestZKDelegationTokenSecretManager 
   hadoop.hdfs.TestAclsEndToEnd 
   hadoop.hdfs.server.mover.TestStorageMover 
   hadoop.hdfs.TestDecommissionWithStriped 
   hadoop.hdfs.TestFileChecksumCompositeCrc 
   hadoop.hdfs.TestDFSStorageStateRecovery 
   hadoop.hdfs.server.namenode.TestFSEditLogLoader 
   hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy 
   hadoop.hdfs.tools.TestECAdmin 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.tools.TestViewFSStoragePolicyCommands 
   hadoop.hdfs.TestReconstructStripedFile 
   hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor 
   hadoop.hdfs.TestDecommissionWithBackoffMonitor 
   hadoop.hdfs.TestMultiThreadedHflush 
   hadoop.hdfs.TestFileAppend4 
   hadoop.hdfs.TestErasureCodingExerciseAPIs 
   hadoop.hdfs.server.namenode.TestQuotaByStorageType 
   hadoop.hdfs.server.namenode.ha.TestStandbyInProgressTail 
   hadoop.hdfs.TestReadStripedFileWithDNFailure 
   

Hadoop File API v.s. Commons VFS

2020-02-28 Thread David Mollitor
Hello,

I'm curious about the history of the Hadoop File API in relation to
Commons VFS. Hadoop supports several file schemes, and so does VFS. Why are
there two projects working on the same effort, and what are the pros/cons
of each?

Thanks.


Re: [VOTE] Release Apache Hadoop Thirdparty 1.0.0

2020-02-28 Thread Elek, Marton



Thank you very much for working on this release, Vinay; a 1.0.0 is always 
hard work...



1. I downloaded it and I can build it from the source

2. Checked the signature and the sha512 of the src package and they are fine
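
The mechanics of check (2) can be sketched as below. This is a self-contained illustration, not the real verification: the artifact is a placeholder file created on the spot rather than the actual RC tarball, and the GPG step is only shown in a comment because it requires the published KEYS file.

```shell
# Stand-in for a downloaded artifact (name is illustrative only).
printf 'release payload' > hadoop-thirdparty-1.0.0-src.tar.gz

# A release script publishes a .sha512 sidecar like this...
sha512sum hadoop-thirdparty-1.0.0-src.tar.gz \
  > hadoop-thirdparty-1.0.0-src.tar.gz.sha512

# ...and a voter validates it like this (prints "<name>: OK" on success):
sha512sum -c hadoop-thirdparty-1.0.0-src.tar.gz.sha512

# Signature check, not run here (requires importing the KEYS file first):
#   gpg --import KEYS
#   gpg --verify hadoop-thirdparty-1.0.0-src.tar.gz.asc \
#       hadoop-thirdparty-1.0.0-src.tar.gz
```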

3. Yetus seems to be included in the source package. I am not sure if 
it's intentional but I would remove the patchprocess directory from the 
tar file.


4. NOTICE.txt seems to be outdated (I am not sure, but I think the 
Export Notice is unnecessary, especially for the source release, also 
the note about the bouncycastle and Yarn server is unnecessary).


5. NOTICE-binary and LICENSE-binary seem to be unused (and they contain 
unrelated entries, especially the NOTICE). IMHO


6. As far as I understand the binary release in this case is the maven 
artifact. IANAL but the original protobuf license seems to be missing 
from "unzip -p hadoop-shaded-protobuf_3_7-1.0.0.jar META-INF/LICENSE.txt"


7. Minor nit: I would suggest to use only the filename in the sha512 
files (instead of having the /build/source/target prefix). It would help 
to use `sha512 -c` command to validate the checksum.
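
To make point (7) concrete, here is a minimal demonstration of why the path prefix breaks checksum validation; the file names and the /build/source/target prefix are stand-ins, not the real release artifacts.

```shell
printf 'artifact' > example-src.tar.gz
hash=$(sha512sum example-src.tar.gz | awk '{print $1}')

# Entry written with the build-tree path, as in the generated sidecar:
echo "$hash  /build/source/target/example-src.tar.gz" > prefixed.sha512
# Fails in the download directory, since that path does not exist here:
sha512sum -c prefixed.sha512 || echo "prefixed entry: check failed"

# Entry with just the file name validates wherever the file sits:
echo "$hash  example-src.tar.gz" > plain.sha512
sha512sum -c plain.sha512
```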


Thanks again for working on this,
Marton

ps: I am not experienced with licensing enough to judge which one of 
these are blocking and I might be wrong.




On 2/25/20 8:17 PM, Vinayakumar B wrote:

Hi folks,

Thanks to everyone's help on this release.

I have created a release candidate (RC0) for Apache Hadoop Thirdparty 1.0.0.

RC Release artifacts are available at :
   http://home.apache.org/~vinayakumarb/release/hadoop-thirdparty-1.0.0-RC0/

Maven artifacts are available in staging repo:
 https://repository.apache.org/content/repositories/orgapachehadoop-1258/

The RC tag in git is here:
https://github.com/apache/hadoop-thirdparty/tree/release-1.0.0-RC0

And my public key is at:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

*This vote will run for 5 days, ending on March 1st 2020 at 11:59 pm IST.*

For the testing, I have verified Hadoop trunk compilation with
"-DdistMgmtSnapshotsUrl=
https://repository.apache.org/content/repositories/orgapachehadoop-1258/
-Dhadoop-thirdparty-protobuf.version=1.0.0"

My +1 to start.

-Vinay






Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-02-28 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/609/

[Feb 27, 2020 3:58:44 PM] (kihwal) HDFS-15147. LazyPersistTestCase wait logic 
is error-prone. Contributed
[Feb 27, 2020 9:22:16 PM] (inigoiri) YARN-10161. TestRouterWebServicesREST is 
corrupting STDOUT. Contributed




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.yarn.client.api.impl.TestAMRMClient 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/609/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/609/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/609/artifact/out/diff-compile-cc-root-jdk1.8.0_242.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/609/artifact/out/diff-compile-javac-root-jdk1.8.0_242.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/609/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/609/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/609/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/609/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/609/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/609/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/609/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/609/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/609/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/609/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/609/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/609/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_242.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/609/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [232K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/609/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]