Re: [VOTE] Force "squash and merge" option for PR merge on github UI

2019-07-16 Thread Vinod Kumar Vavilapalli
Makes sense, +1.

Thanks
+Vinod

> On Jul 17, 2019, at 11:37 AM, Elek, Marton  wrote:
> 
> Hi,
> 
> The GitHub UI (UI!) helps to merge Pull Requests into the target branch.
> There are three different ways to do it [1]:
> 
> 1. Keep all the commits from the PR branch and create one additional
> merge commit ("Create a merge commit")
> 
> 2. Squash all the commits and commit the change as one patch ("Squash
> and merge")
> 
> 3. Keep all the commits from the PR branch but rebase them onto the
> target branch, without a merge commit ("Rebase and merge")
> 
> 
> 
> As only option 2 is compatible with the existing development practices
> of Hadoop (1 issue = 1 patch = 1 commit), I call for a lazy consensus
> vote: if there are no objections within 3 days, I will ask INFRA to
> disable options 1 and 3 to make the process less error-prone.
> 
> Please let me know what you think.
> 
> Thanks a lot
> Marton
> 
> ps: Personally I prefer to merge from local, as it makes it possible to
> sign the commits and do a final build before pushing. But this is a
> different story; this proposal is only about removing the options that
> are obviously risky...
> 
> ps2: You can always do any kind of merge / commit from the CLI, for
> example to merge a feature branch while keeping its history.
> 
> [1]:
> https://help.github.com/en/articles/merging-a-pull-request#merging-a-pull-request-on-github
> 





Re: Any thoughts making Submarine a separate Apache project?

2019-07-16 Thread Szilard Nemeth
+1, this is a great idea.
As the Hadoop repository has already grown huge and contains many projects, I
think it is generally a good idea to split projects out at an early phase.


On Wed, Jul 17, 2019, 08:50 runlin zhang  wrote:

> +1, that will be great!
>
> > On Jul 10, 2019, at 3:34 PM, Xun Liu wrote:
> >
> > Hi all,
> >
> > This is Xun Liu, a contributor to the Submarine project, which runs deep
> > learning workloads together with big data workloads on Hadoop clusters.
> >
> > A number of integrations of Submarine with other projects, such as Apache
> > Zeppelin, TonY, and Azkaban, are finished or in progress. The next step
> > for Submarine is to integrate with more projects like Apache Arrow,
> > Redis, and MLflow, to handle end-to-end machine learning use cases like
> > model serving, notebook management, and advanced training optimizations
> > (such as auto parameter tuning and memory cache optimizations for large
> > training datasets), and to run on other platforms like Kubernetes or
> > natively in the cloud. LinkedIn also wants to donate the TonY project to
> > Apache so that we can put Submarine and TonY together in the same
> > codebase (slide #30:
> > https://www.slideshare.net/xkrogen/hadoop-meetup-jan-2019-tony-tensorflow-on-yarn-and-beyond#30
> > ).
> >
> > This expands the scope of the original Submarine project in exciting new
> > ways. Toward that end, would it make sense to create a separate Submarine
> > project at Apache? This could speed up adoption of Submarine and allow it
> > to grow into a full-blown machine learning platform.
> >
> > There will be lots of technical details to work out, but any initial
> > thoughts on this?
> >
> > Best Regards,
> > Xun Liu
>
>


Re: [ANNOUNCE] New Apache Hadoop Committer - Tao Yang

2019-07-16 Thread runlin zhang


Congrats Tao! 

> On Jul 15, 2019, at 5:53 PM, Weiwei Yang wrote:
> 
> Hi Dear Apache Hadoop Community
> 
> It's my pleasure to announce that Tao Yang has been elected as an Apache
> Hadoop committer. This is to recognize his contributions to the Apache
> Hadoop YARN project.
> 
> Congratulations and welcome on board!
> 
> Weiwei
> (On behalf of the Apache Hadoop PMC)





Re: Any thoughts making Submarine a separate Apache project?

2019-07-16 Thread runlin zhang
+1, that will be great!

> On Jul 10, 2019, at 3:34 PM, Xun Liu wrote:
> 
> Hi all,
> 
> This is Xun Liu, a contributor to the Submarine project, which runs deep
> learning workloads together with big data workloads on Hadoop clusters.
> 
> A number of integrations of Submarine with other projects, such as Apache
> Zeppelin, TonY, and Azkaban, are finished or in progress. The next step for
> Submarine is to integrate with more projects like Apache Arrow, Redis, and
> MLflow, to handle end-to-end machine learning use cases like model serving,
> notebook management, and advanced training optimizations (such as auto
> parameter tuning and memory cache optimizations for large training
> datasets), and to run on other platforms like Kubernetes or natively in the
> cloud. LinkedIn also wants to donate the TonY project to Apache so that we
> can put Submarine and TonY together in the same codebase (slide #30:
> https://www.slideshare.net/xkrogen/hadoop-meetup-jan-2019-tony-tensorflow-on-yarn-and-beyond#30
> ).
> 
> This expands the scope of the original Submarine project in exciting new
> ways. Toward that end, would it make sense to create a separate Submarine
> project at Apache? This could speed up adoption of Submarine and allow it
> to grow into a full-blown machine learning platform.
> 
> There will be lots of technical details to work out, but any initial
> thoughts on this?
> 
> Best Regards,
> Xun Liu





Re: [VOTE] Force "squash and merge" option for PR merge on github UI

2019-07-16 Thread Mukul Kumar Singh

+1, let's have "Squash and merge" as the default and only option in the GitHub UI.

Thanks,
Mukul


On 7/17/19 11:37 AM, Elek, Marton wrote:

Hi,

The GitHub UI (UI!) helps to merge Pull Requests into the target branch.
There are three different ways to do it [1]:

1. Keep all the commits from the PR branch and create one additional
merge commit ("Create a merge commit")

2. Squash all the commits and commit the change as one patch ("Squash
and merge")

3. Keep all the commits from the PR branch but rebase them onto the
target branch, without a merge commit ("Rebase and merge")



As only option 2 is compatible with the existing development practices
of Hadoop (1 issue = 1 patch = 1 commit), I call for a lazy consensus
vote: if there are no objections within 3 days, I will ask INFRA to
disable options 1 and 3 to make the process less error-prone.

Please let me know what you think.

Thanks a lot
Marton

ps: Personally I prefer to merge from local, as it makes it possible to
sign the commits and do a final build before pushing. But this is a
different story; this proposal is only about removing the options that
are obviously risky...

ps2: You can always do any kind of merge / commit from the CLI, for
example to merge a feature branch while keeping its history.

[1]:
https://help.github.com/en/articles/merging-a-pull-request#merging-a-pull-request-on-github







Re: [VOTE] Force "squash and merge" option for PR merge on github UI

2019-07-16 Thread Weiwei Yang
Thanks Marton, +1 on this.

Weiwei

On Jul 17, 2019, 2:07 PM +0800, Elek, Marton , wrote:
> Hi,
>
> The GitHub UI (UI!) helps to merge Pull Requests into the target branch.
> There are three different ways to do it [1]:
>
> 1. Keep all the commits from the PR branch and create one additional
> merge commit ("Create a merge commit")
>
> 2. Squash all the commits and commit the change as one patch ("Squash
> and merge")
>
> 3. Keep all the commits from the PR branch but rebase them onto the
> target branch, without a merge commit ("Rebase and merge")
>
>
>
> As only option 2 is compatible with the existing development practices
> of Hadoop (1 issue = 1 patch = 1 commit), I call for a lazy consensus
> vote: if there are no objections within 3 days, I will ask INFRA to
> disable options 1 and 3 to make the process less error-prone.
>
> Please let me know what you think.
>
> Thanks a lot
> Marton
>
> ps: Personally I prefer to merge from local, as it makes it possible to
> sign the commits and do a final build before pushing. But this is a
> different story; this proposal is only about removing the options that
> are obviously risky...
>
> ps2: You can always do any kind of merge / commit from the CLI, for
> example to merge a feature branch while keeping its history.
>
> [1]:
> https://help.github.com/en/articles/merging-a-pull-request#merging-a-pull-request-on-github
>


[VOTE] Force "squash and merge" option for PR merge on github UI

2019-07-16 Thread Elek, Marton
Hi,

The GitHub UI (UI!) helps to merge Pull Requests into the target branch.
There are three different ways to do it [1]:

1. Keep all the commits from the PR branch and create one additional
merge commit ("Create a merge commit")

2. Squash all the commits and commit the change as one patch ("Squash
and merge")

3. Keep all the commits from the PR branch but rebase them onto the
target branch, without a merge commit ("Rebase and merge")



As only option 2 is compatible with the existing development practices
of Hadoop (1 issue = 1 patch = 1 commit), I call for a lazy consensus
vote: if there are no objections within 3 days, I will ask INFRA to
disable options 1 and 3 to make the process less error-prone.

Please let me know what you think.

Thanks a lot
Marton

ps: Personally I prefer to merge from local, as it makes it possible to
sign the commits and do a final build before pushing. But this is a
different story; this proposal is only about removing the options that
are obviously risky...

ps2: You can always do any kind of merge / commit from the CLI, for
example to merge a feature branch while keeping its history.

[1]:
https://help.github.com/en/articles/merging-a-pull-request#merging-a-pull-request-on-github




Re: [ANNOUNCE] New Apache Hadoop Committer - Tao Yang

2019-07-16 Thread 郑锴(铁杰)
Congrats Tao, great work!

-kai


--
From: 杨弢(搏远)
Sent: Tuesday, July 16, 2019, 10:37
To: Naganarasimha Garla; Weiwei Yang
Cc: yarn-dev; Hadoop Common; mapreduce-dev; Hdfs-dev
Subject: Re: [ANNOUNCE] New Apache Hadoop Committer - Tao Yang

Thanks everyone.
I'm so honored to be an Apache Hadoop Committer. I will keep working on this
great project and contribute more. Thanks.

Best Regards,
Tao Yang


--
From: Naganarasimha Garla
Sent: Monday, July 15, 2019, 17:55
To: Weiwei Yang
Cc: yarn-dev; Hadoop Common; mapreduce-dev; Hdfs-dev
Subject: Re: [ANNOUNCE] New Apache Hadoop Committer - Tao Yang

Congrats and welcome Tao Yang!

Regards
+ Naga

On Mon, 15 Jul 2019, 17:54 Weiwei Yang,  wrote:

> Hi Dear Apache Hadoop Community
>
> It's my pleasure to announce that Tao Yang has been elected as an Apache
> Hadoop committer. This is to recognize his contributions to the Apache
> Hadoop YARN project.
>
> Congratulations and welcome on board!
>
> Weiwei
> (On behalf of the Apache Hadoop PMC)
>



[jira] [Created] (HADOOP-16433) Filter expired entries and tombstones when listing with MetadataStore#listChildren

2019-07-16 Thread Gabor Bota (JIRA)
Gabor Bota created HADOOP-16433:
---

 Summary: Filter expired entries and tombstones when listing with 
MetadataStore#listChildren
 Key: HADOOP-16433
 URL: https://issues.apache.org/jira/browse/HADOOP-16433
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Gabor Bota
Assignee: Gabor Bota


Currently, we don't filter out expired entries and tombstones in {{listChildren}} implementations.

This can cause bugs and inconsistencies, so it should be fixed.
It can lead to a state we can't recover from, e.g. when guarded and raw
(OOB op) clients are doing ops against S3:
Guarded: touch /
Guarded: touch /
Guarded: rm / {{-> tombstone in MS}}
RAW: touch //file.ext {{-> file is hidden with a tombstone}}
Guarded: ls / {{-> the directory is empty}}

After we change the following code
{code:java}
  final List<DDBPathMetadata> metas = new ArrayList<>();
  for (Item item : items) {
    DDBPathMetadata meta = itemToPathMetadata(item, username);
    metas.add(meta);
  }
{code}
to
{code:java}
    // handle expiry - only add not expired entries to listing.
    if (meta.getLastUpdated() == 0 ||
        !meta.isExpired(ttlTimeProvider.getMetadataTtl(),
            ttlTimeProvider.getNow())) {
      metas.add(meta);
    }
{code}
we will filter out expired entries from the listing, so we can recover from
these kinds of OOB ops.
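
For illustration, a minimal sketch of how the two snippets above would fit
together inside the listing loop. Everything around the quoted lines (the
surrounding method and the {{items}}, {{username}} and {{ttlTimeProvider}}
references) is assumed from the snippets themselves, not taken from the
actual DynamoDB metadata store source:
{code:java}
// Sketch only: combines the two quoted snippets. The items/username/
// ttlTimeProvider references are assumptions carried over from the quoted
// code, not the verified implementation.
final List<DDBPathMetadata> metas = new ArrayList<>();
for (Item item : items) {
  DDBPathMetadata meta = itemToPathMetadata(item, username);
  // handle expiry - only add not expired entries to listing.
  if (meta.getLastUpdated() == 0 ||
      !meta.isExpired(ttlTimeProvider.getMetadataTtl(),
          ttlTimeProvider.getNow())) {
    metas.add(meta);
  }
}
{code}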




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)




Re: [ANNOUNCE] New Apache Hadoop Committer - Tao Yang

2019-07-16 Thread Wanqiang Ji
Congratulations Tao!

On Tue, Jul 16, 2019 at 10:14 PM Eric Payne
 wrote:

>  Congratulations Tao! Well deserved!
>
> On Monday, July 15, 2019, 4:54:10 AM CDT, Weiwei Yang 
> wrote:
>
>  Hi Dear Apache Hadoop Community
>
> It's my pleasure to announce that Tao Yang has been elected as an Apache
> Hadoop committer. This is to recognize his contributions to the Apache
> Hadoop YARN project.
>
> Congratulations and welcome on board!
>
> Weiwei
> (On behalf of the Apache Hadoop PMC)
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-07-16 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1199/

[Jul 2, 2019 1:12:28 AM] (elek) HDDS-1668. Add liveness probe to the example 
k8s resources files
[Jul 4, 2019 3:14:19 PM] (elek) HDDS-1763. Use vendor neutral s3 logo in ozone 
doc. Contributed by Elek,
[Jul 15, 2019 7:32:37 AM] (rakeshr) HDFS-14458. Report pmem stats to namenode. 
Contributed by Feilong He.
[Jul 15, 2019 7:48:23 AM] (rakeshr) HDFS-14357. Update documentation for HDFS 
cache on SCM support.
[Jul 15, 2019 8:47:20 AM] (snemeth) YARN-9360. Do not expose innards of 
QueueMetrics object into
[Jul 15, 2019 9:17:16 AM] (snemeth) SUBMARINE-62. PS_LAUNCH_CMD CLI description 
is wrong in RunJobCli.
[Jul 15, 2019 9:59:11 AM] (snemeth) YARN-9127. Create more tests to verify 
GpuDeviceInformationParser.
[Jul 15, 2019 11:28:01 AM] (snemeth) YARN-9326. Fair Scheduler configuration 
defaults are not documented in
[Jul 15, 2019 4:00:10 PM] (elek) HDDS-1800. Result of author check is inverted
[Jul 15, 2019 5:08:00 PM] (ayushsaxena) HDFS-14593. RBF: Implement deletion 
feature for expired records in State
[Jul 16, 2019 12:53:19 AM] (github) HDDS-1761. Fix class hierarchy for 
KeyRequest and FileRequest classes.
[Jul 16, 2019 12:54:41 AM] (arp7) HDDS-1666. Issue in openKey when allocating 
block. (#943)




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 
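
   As a side note, the two WorkerId findings describe the usual equals() shape
   problem; below is a purely illustrative sketch of the null-safe,
   type-checked pattern FindBugs expects. The class and field here are invented
   for the example and are not the real WorkerId internals.
{code:java}
// Sketch only: hypothetical stand-in class illustrating the FindBugs pattern.
public final class WorkerIdExample {
  private final String hostname;

  public WorkerIdExample(String hostname) {
    this.hostname = hostname;
  }

  @Override
  public boolean equals(Object obj) {
    if (this == obj) {
      return true;
    }
    // instanceof is false for a null argument, so this also covers the null check
    if (!(obj instanceof WorkerIdExample)) {
      return false;
    }
    WorkerIdExample other = (WorkerIdExample) obj;
    return hostname.equals(other.hostname);
  }

  @Override
  public int hashCode() {
    return hostname.hashCode();
  }
}
{code}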

Failed junit tests :

   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.fs.http.server.TestHttpFSServerNoXAttrs 
   hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup 
   hadoop.hdfs.server.federation.router.TestRouterRpc 
   hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis 
   hadoop.ozone.client.rpc.TestOzoneAtRestEncryption 
   hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient 
   hadoop.ozone.client.rpc.TestFailureHandlingByClient 
   hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException 
   hadoop.ozone.client.rpc.TestOzoneRpcClient 
   hadoop.ozone.client.rpc.TestWatchForCommit 
   hadoop.ozone.client.rpc.TestSecureOzoneRpcClient 
   hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1199/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1199/artifact/out/diff-compile-javac-root.txt
  [332K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1199/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1199/artifact/out/diff-patch-hadolint.txt
  [8.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1199/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1199/artifact/out/diff-patch-pylint.txt
  [212K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-t

Re: [ANNOUNCE] New Apache Hadoop Committer - Tao Yang

2019-07-16 Thread Eric Payne
 Congratulations Tao! Well deserved!

On Monday, July 15, 2019, 4:54:10 AM CDT, Weiwei Yang  
wrote:  
 
 Hi Dear Apache Hadoop Community

It's my pleasure to announce that Tao Yang has been elected as an Apache
Hadoop committer. This is to recognize his contributions to the Apache
Hadoop YARN project.

Congratulations and welcome on board!

Weiwei
(On behalf of the Apache Hadoop PMC)
  

Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-07-16 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/384/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   Class org.apache.hadoop.fs.GlobalStorageStatistics defines non-transient 
non-serializable instance field map In GlobalStorageStatistics.java:instance 
field map In GlobalStorageStatistics.java 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 
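
   For readers unfamiliar with this FindBugs pattern, a small, purely
   illustrative example of what "boxed value is unboxed and then immediately
   reboxed" means; names and types here are invented and not taken from
   ColumnRWHelper.
{code:java}
import java.util.NavigableMap;
import java.util.TreeMap;

// Sketch only: shows the flagged unbox/rebox round trip and the simpler form.
public final class ReboxExample {
  public static void main(String[] args) {
    NavigableMap<Long, String> results = new TreeMap<>();
    Long timestamp = 1563235200000L;  // a value that is already boxed

    // Flagged pattern: longValue() unboxes, Long.valueOf() immediately reboxes.
    results.put(Long.valueOf(timestamp.longValue()), "older value");

    // Preferred: reuse the boxed value directly, no unbox/rebox round trip.
    results.put(timestamp, "newer value");

    System.out.println(results);
  }
}
{code}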

Failed junit tests :

   hadoop.hdfs.server.balancer.TestBalancerRPCDelay 
   hadoop.hdfs.server.namenode.ha.TestBootstrapStandby 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.mapreduce.lib.output.TestJobOutputCommitter 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/384/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/384/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/384/artifact/out/diff-compile-cc-root-jdk1.8.0_212.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/384/artifact/out/diff-compile-javac-root-jdk1.8.0_212.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/384/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/384/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/384/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/384/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/384/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/384/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/384/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/384/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/384/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/384/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/384/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/384/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/384/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_212.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/384/artifact/out/patch-unit-hadoop-hdfs-project_hadoo

Re: [ANNOUNCE] New Apache Hadoop Committer - Tao Yang

2019-07-16 Thread zhankun tang
Hi Tao,

Congratulations!!

BR,
Zhankun

On Tue, 16 Jul 2019 at 10:49, Wangda Tan  wrote:

> Congrats!
>
> Best,
> Wangda
>
> On Tue, Jul 16, 2019 at 10:37 AM 杨弢(杨弢) 
> wrote:
>
> > Thanks everyone.
> > I'm so honored to be an Apache Hadoop Committer. I will keep working on
> > this great project and contribute more. Thanks.
> >
> > Best Regards,
> > Tao Yang
> >
> >
> > --
> > From: Naganarasimha Garla
> > Sent: Monday, July 15, 2019, 17:55
> > To: Weiwei Yang
> > Cc: yarn-dev; Hadoop Common <common-dev@hadoop.apache.org>; mapreduce-dev
> > <mapreduce-...@hadoop.apache.org>; Hdfs-dev
> > Subject: Re: [ANNOUNCE] New Apache Hadoop Committer - Tao Yang
> >
> > Congrats and welcome Tao Yang!
> >
> > Regards
> > + Naga
> >
> > On Mon, 15 Jul 2019, 17:54 Weiwei Yang,  wrote:
> >
> > > Hi Dear Apache Hadoop Community
> > >
> > > It's my pleasure to announce that Tao Yang has been elected as an Apache
> > > Hadoop committer. This is to recognize his contributions to the Apache
> > > Hadoop YARN project.
> > >
> > > Congratulations and welcome on board!
> > >
> > > Weiwei
> > > (On behalf of the Apache Hadoop PMC)
> > >
> >
> >
>