Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2020-09-23 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/

[Sep 22, 2020 2:48:18 AM] (Masatake Iwasaki) Publishing the bits for release 
2.10.1
[Sep 22, 2020 2:51:53 AM] (Masatake Iwasaki) Publishing the bits for release 
2.10.1 (addendum)
[Sep 22, 2020 6:57:36 PM] (noreply) MAPREDUCE-7294. Only application master 
should upload resource to Yarn Shared Cache. (#2319)




-1 overall


The following subsystems voted -1:
asflicense hadolint jshint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

Failed junit tests :

   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.yarn.client.api.impl.TestTimelineClientV2Impl 
   hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.yarn.sls.appmaster.TestAMSimulator 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
  

   jshint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/diff-patch-jshint.txt
  [208K]

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/diff-compile-javac-root.txt
  [456K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/xml.txt
  [4.0K]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/diff-javadoc-javadoc-root.txt
  [20K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [216K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [280K]
   

[GitHub] [hadoop-thirdparty] ayushtkn commented on a change in pull request #8: HADOOP-17278. Shade guava 29.0-jre in hadoop thirdparty.

2020-09-23 Thread GitBox


ayushtkn commented on a change in pull request #8:
URL: https://github.com/apache/hadoop-thirdparty/pull/8#discussion_r493489153



##
File path: hadoop-shaded-guava/pom.xml
##
@@ -0,0 +1,110 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <parent>
+    <artifactId>hadoop-thirdparty</artifactId>
+    <groupId>org.apache.hadoop.thirdparty</groupId>
+    <version>1.1.0-SNAPSHOT</version>
+    <relativePath>..</relativePath>
+  </parent>
+  <modelVersion>4.0.0</modelVersion>
+  <artifactId>hadoop-shaded-guava</artifactId>
+  <name>Apache Hadoop shaded Guava</name>
+  <packaging>jar</packaging>
+
+  <dependencies>
+    <dependency>
+      <groupId>com.google.guava</groupId>
+      <artifactId>guava</artifactId>
+      <version>${guava.version}</version>
+      <exclusions>
+        <exclusion>
+          <groupId>com.google.errorprone</groupId>
+          <artifactId>error_prone_annotations</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+  </dependencies>
+
+  <build>
+    <resources>
+      <resource>
+        <directory>${project.basedir}/..</directory>
+        <targetPath>META-INF</targetPath>
+        <includes>
+          <include>licenses-binary/*</include>
+          <include>NOTICE.txt</include>
+          <include>NOTICE-binary</include>
+        </includes>
+      </resource>
+      <resource>
+        <directory>${project.basedir}/src/main/resources</directory>
+      </resource>
+    </resources>
+    <plugins>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-shade-plugin</artifactId>
+        <configuration>
+          <createSourcesJar>true</createSourcesJar>
+        </configuration>
+        <executions>
+          <execution>
+            <id>shade-guava</id>
+            <phase>package</phase>
+            <goals>
+              <goal>shade</goal>
+            </goals>
+            <configuration>
+              <artifactSet>
+                <includes>
+                  <include>com.google.guava:*</include>
+                </includes>
+              </artifactSet>

Review comment:
   Thanx @vinayakumarb for the review!!!
   I have included all of the dependencies as suggested.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17281) Implement FileSystem.listStatusIterator() in S3aFileSystem

2020-09-23 Thread Mukund Thakur (Jira)
Mukund Thakur created HADOOP-17281:
--

 Summary: Implement FileSystem.listStatusIterator() in S3aFileSystem
 Key: HADOOP-17281
 URL: https://issues.apache.org/jira/browse/HADOOP-17281
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Mukund Thakur
Assignee: Mukund Thakur


Currently S3AFileSystem only implements the listStatus() API, which returns an 
array. Once listStatusIterator() is implemented, clients can benefit from the 
asynchronous listing added recently in 
https://issues.apache.org/jira/browse/HADOOP-17074 by performing work on 
files while still iterating over the listing.
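The payoff of an iterator-based listing is that callers can start working on early entries while later pages are still being fetched. Below is a self-contained toy sketch of the pattern, with a plain java.util.Iterator standing in for Hadoop's RemoteIterator and hard-coded "pages" standing in for paged S3 LIST results; this is illustration only, not the S3A code itself:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class ListingSketch {

    // Array style: every entry must exist before the caller sees the first one.
    static String[] listStatus() {
        return new String[] {"a.txt", "b.txt", "c.txt"};
    }

    // Iterator style: entries are handed out one at a time, so a real
    // implementation can fetch the next page (asynchronously, in S3A's case)
    // while the caller is still processing the current one.
    static Iterator<String> listStatusIterator() {
        List<String> page1 = Arrays.asList("a.txt", "b.txt");
        List<String> page2 = Arrays.asList("c.txt");
        final Iterator<List<String>> pages = Arrays.asList(page1, page2).iterator();
        return new Iterator<String>() {
            private Iterator<String> current = pages.next().iterator();
            @Override public boolean hasNext() {
                // Advance to the next page only when the current one is drained.
                while (!current.hasNext() && pages.hasNext()) {
                    current = pages.next().iterator();
                }
                return current.hasNext();
            }
            @Override public String next() { return current.next(); }
        };
    }

    public static void main(String[] args) {
        for (Iterator<String> it = listStatusIterator(); it.hasNext(); ) {
            System.out.println(it.next());
        }
    }
}
```

In the real API, FileSystem.listStatus(Path) returns a fully materialized FileStatus[], while FileSystem.listStatusIterator(Path) returns a RemoteIterator&lt;FileStatus&gt;, which is the hook an async S3A implementation can feed page by page.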



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[ANNOUNCE] Hui Fei is a new Apache Hadoop Committer

2020-09-23 Thread Wei-Chiu Chuang
I am pleased to announce that Hui Fei has accepted the invitation to become
a Hadoop committer.

He started contributing to the project in October 2016. Over the past 4
years he has contributed a lot in HDFS, especially in Erasure Coding,
Hadoop 3 upgrade, RBF and Standby Serving reads.

One of his biggest contributions is Hadoop 2->3 rolling upgrade support.
This was a major blocker for existing Hadoop users adopting Hadoop 3, and
adoption has gone up since. The community had long discussed rolling
upgrade to Hadoop 3 as a must-have, but no one took the initiative to
make it happen. I am personally very grateful for this.

His work on EC is impressive as well: he onboarded EC in production at
scale, fixing tricky problems along the way. Again, I am impressed by and
grateful for his contributions to EC.

In addition to code contributions, he invested a lot in the community:

>
> - Apache Hadoop Community 2019 Beijing Meetup
>   https://blogs.apache.org/hadoop/entry/hadoop-community-meetup-beijing-aug
>   where he discussed the operational experience of RBF in production
>
> - Apache Hadoop Storage Community Sync Online
>   https://docs.google.com/document/d/1jXM5Ujvf-zhcyw_5kiQVx6g-HeKe-YGnFS_1-qFXomI/edit#heading=h.irqxw1iy16zo
>   where he discussed the Hadoop 3 rolling upgrade support
>
>
Let's congratulate Hui for this new role!

Cheers,
Wei-Chiu Chuang (on behalf of the Apache Hadoop PMC)


Re: Question About Feasibility of Hadoop Over P2P Architecture

2020-09-23 Thread Steve Loughran
HDFS isn't going to work here, but the filesystem APIs could be suitable
for implementation.

Look also at what Apache Cassandra do; they use a DHT to scatter data.
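Concretely, targeting the filesystem APIs means shipping a subclass of org.apache.hadoop.fs.FileSystem and binding it to a URI scheme via the standard fs.&lt;scheme&gt;.impl property. A hypothetical sketch (the p2pfs scheme and the class name are invented for illustration):

```xml
<!-- core-site.xml: map a hypothetical p2pfs:// URI scheme to a custom
     implementation of org.apache.hadoop.fs.FileSystem -->
<configuration>
  <property>
    <name>fs.p2pfs.impl</name>
    <value>org.example.p2p.P2PFileSystem</value>
  </property>
</configuration>
```

With that in place, clients would resolve paths such as p2pfs://node/data through the normal FileSystem.get() machinery, and software layered on the FileSystem API (MapReduce, Hive, Spark) could use the store without code changes.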

On Tue, 22 Sep 2020 at 06:17, Lauren Taylor 
wrote:

> Hello!
>
> I am currently provisioning a P2P network (there are already a couple
> of live networks that have been created, but we do not want to test this in
> production, of course).
>
> In this p2p network, I was looking at the best ways in which one could
> distribute file storage (and access to it) in an efficient manner.
>
> The difference between this solution and BitTorrent (DHT / mainline DHT)
> is *that all of the files uploaded to the network are meant to be stored
> and distributed*.
>
> Putting the complexities of that to the side (the sustainability of that
> proposal has been accounted for), I am wondering whether Apache Hadoop
> would be a good structure to run on top of that system.
>
> *Why I Ask*
> The p2p structure of this protocol is absolutely essential to its
> functioning. Thus, if I am going to leverage it for the purposes of storage
> / distribution, it is imperative that I ensure I'm not injecting something
> into the ecosystem that could ultimately harm it (i.e., DoS vulnerability).
>
> *Hadoop-LAFS?*
> I was on the 'Tahoe-LAFS' website and I saw that there was a proposal for
> 'Hadoop-LAFS' - which is a deployment of Apache Hadoop over top of the
> Tahoe-LAFS layer.
>
> According to the project description given by Google's Code Archive, this
> allows for:
>
> "Provides an integration layer between Tahoe LAFS and Hadoop so Map Reduce
> > jobs can be run over encrypted data stored in Tahoe."
> >
>
> Any and all answers would help a ton, thank you!
>
> Sincerely,
> Buck Wiston
>


Hadoop Storage Online Meetup in the Wiki

2020-09-23 Thread Wei-Chiu Chuang
Hi!

We've been running this call for over a year but I just realized we never
managed to publish the information in a searchable location. So here it is
in our wiki:
https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Storage+Online+Meetup

By the way, does anyone know of a good solution for storing community
material? I'm looking for a service to host the recordings of these calls.
I don't think Apache offers that kind of service beyond the Apache web
host, nor an official Google Drive integration. Perhaps I can create a
Google Drive account that is shared among the Hadoop PMC members.

Thoughts?

Thanks,
Wei-Chiu


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-09-23 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/

[Sep 22, 2020 3:53:04 PM] (Kihwal Lee) HDFS-15581. Access Controlled HttpFS 
Server. Contributed by Richard Ross.
[Sep 22, 2020 4:10:33 PM] (noreply) HADOOP-17277. Correct spelling errors for 
separator (#2322)
[Sep 22, 2020 4:22:04 PM] (noreply) HADOOP-17261. s3a rename() needs 
s3:deleteObjectVersion permission (#2303)
[Sep 22, 2020 8:23:20 PM] (noreply) HDFS-15557. Log the reason why a storage 
log file can't be deleted (#2274)




-1 overall


The following subsystems voted -1:
asflicense pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

Failed junit tests :

   hadoop.crypto.key.kms.server.TestKMS 
   hadoop.hdfs.TestFileChecksum 
   hadoop.hdfs.TestFileChecksumCompositeCrc 
   hadoop.hdfs.server.datanode.TestBPOfferService 
   hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped 
   hadoop.hdfs.TestSnapshotCommands 
   hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/diff-compile-cc-root.txt
  [48K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/diff-compile-javac-root.txt
  [568K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/diff-checkstyle-root.txt
  [16M]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/whitespace-eol.txt
  [13M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/whitespace-tabs.txt
  [1.9M]

   xml:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/xml.txt
  [24K]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/diff-javadoc-javadoc-root.txt
  [1.3M]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt
  [12K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [416K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
  [16K]

   asflicense:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.12.0   https://yetus.apache.org


Skipping this week’s APAC Hadoop storage online meetup

2020-09-23 Thread Wei-Chiu Chuang
The Chinese Hadoop Meetup will take place this Saturday, so a call is not
planned this week.

If you are interested, feel free to sign up at
https://www.slidestalk.com/m/290 (the event is in Mandarin).

Thanks Xiaoqiao for organizing the next few calls & agenda.


[ANNOUNCE] Lisheng Sun is a new Apache Hadoop Committer

2020-09-23 Thread Wei-Chiu Chuang
I am pleased to announce that Lisheng Sun has accepted the invitation to
become a Hadoop committer.

Lisheng has actively contributed to the project since July 2019. He
contributed two new features: the dead datanode detector (HDFS-13571)
and a new du implementation (HDFS-14313), along with many improvements,
including a number of short-circuit read optimizations (HDFS-15161) and
speedups to NameNode fsimage loading time (HDFS-13694 and HDFS-13693).
Code-wise, he has resolved 57 Hadoop JIRAs.

Let's congratulate Lisheng for this new role!

Cheers,
Wei-Chiu Chuang (on behalf of the Apache Hadoop PMC)


Re: [ANNOUNCE] Lisheng Sun is a new Apache Hadoop Committer

2020-09-23 Thread Guanghao Zhang
Congratulations, Lisheng!


Re: [ANNOUNCE] Lisheng Sun is a new Apache Hadoop Committer

2020-09-23 Thread Xiaoqiao He
Congrats!

Best Regards,
He Xiaoqiao



Re: [ANNOUNCE] Hui Fei is a new Apache Hadoop Committer

2020-09-23 Thread Sammi Chen
Congratulations to Hui !



Re: Question About Feasibility of Hadoop Over P2P Architecture

2020-09-23 Thread Hariharan
As long as you have a FileSystem implementation [1] for your p2p
filesystem, Hadoop (and other software like Hive and Spark that uses the
Hadoop FS API) should work just fine. Performance may be a concern, so you
may have to tune your implementation accordingly.

1.
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html

Thanks,
Hariharan



Re: [ANNOUNCE] Hui Fei is a new Apache Hadoop Committer(Internet mail)

2020-09-23 Thread 毛宝龙
Congratulations to Hui !




Re: [DISCUSS] Ozone TLP proposal

2020-09-23 Thread Hui Fei
Hi Elek,

> 2. Following the path of Submarine, any existing Hadoop committers --
who are willing to contribute -- can ask to be included in the initial
committer list without any additional constraints. (Edit the wiki, or
send an email to this thread or to me). Thanks to Vinod for suggesting
this approach (for Submarine at that time).

Since I'm doing some work on Ozone in the near future and am willing to
contribute, please add my name to the wiki.

Thanks
Fei Hui

Elek, Marton wrote on Mon, Sep 7, 2020 at 8:04 PM:

>
> Hi,
>
> The Hadoop community earlier decided to move out Ozone sub-project to a
> separated Apache Top Level Project (TLP). [1]
>
> For detailed history and motivation, please check the previous thread ([1])
>
> Ozone community discussed and agreed on the initial version of the
> project proposal, and now it's time to discuss it with the full Hadoop
> community.
>
> The current version is available at the Hadoop wiki:
>
>
> https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Hadoop+subproject+to+Apache+TLP+proposal
>
>
>   1. Please read it. You can suggest any modifications or topics to
> cover (here or in the comments)
>
>   2. Following the path of Submarine, any existing Hadoop committers --
> who are willing to contribute -- can ask to be included in the initial
> committer list without any additional constraints. (Edit the wiki, or
> send an email to this thread or to me). Thanks to Vinod for suggesting
> this approach (for Submarine at that time).
>
>
> Next steps:
>
>   * After this discussion thread (in case of consensus) a new VOTE
> thread will be started about the proposal (*-dev@hadoop.a.o)
>
>   * In case VOTE is passed, the proposal will be sent to the Apache
> Board to be discussed.
>
>
> Please help to make the proposal better,
>
> Thanks a lot,
> Marton
>
>
> [1].
>
> https://lists.apache.org/thread.html/r298eba8abecc210abd952f040b0c4f07eccc62dcdc49429c1b8f4ba9%40%3Chdfs-dev.hadoop.apache.org%3E
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>
>


Re: [ANNOUNCE] Hui Fei is a new Apache Hadoop Committer

2020-09-23 Thread Xun Liu
Hui Fei, Congratulations!



[jira] [Created] (HADOOP-17282) libzstd-dev should be used instead of libzstd1-dev on Ubuntu 18.04 or higher

2020-09-23 Thread Takeru Kuramoto (Jira)
Takeru Kuramoto created HADOOP-17282:


 Summary: libzstd-dev should be used instead of libzstd1-dev on 
Ubuntu 18.04 or higher
 Key: HADOOP-17282
 URL: https://issues.apache.org/jira/browse/HADOOP-17282
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Takeru Kuramoto
Assignee: Takeru Kuramoto


libzstd1-dev is a transitional package on Ubuntu 18.04.
It is better to use libzstd-dev instead of libzstd1-dev in the Dockerfile 
(dev-support/docker/Dockerfile).
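The fix amounts to a one-line package swap in the build image. A minimal sketch of the kind of change involved (the exact RUN line in dev-support/docker/Dockerfile may differ):

```dockerfile
# Before: libzstd1-dev is only a transitional package on Ubuntu 18.04+
RUN apt-get -q update && apt-get -q install -y libzstd1-dev

# After: install the real development package directly
RUN apt-get -q update && apt-get -q install -y libzstd-dev
```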


