Re: New committer: Botong Huang

2018-11-22 Thread Botong Huang
Thanks everyone for the kind words and happy Thanksgiving!

Best,
Botong

On Thu, Nov 22, 2018 at 6:07 AM Wanqiang Ji  wrote:

> Congrats !!!
>
> On Thu, Nov 22, 2018 at 9:48 AM Sree V 
> wrote:
>
> > Congratulations, Botong. We look forward to seeing many more contributions
> > from you.
> > Thank you.
> > /Sree
> >
> >
> > On Wednesday, November 21, 2018, 5:37:47 PM PST, Xun Liu <
> > neliu...@163.com> wrote:
> >
> >  Congrats !!!
> >
> > > 在 2018年11月22日,上午9:29,Wangda Tan  写道:
> > >
> > > Congrats!
> > >
> > > Best,
> > > Wangda
> > >
> > > On Wed, Nov 21, 2018 at 4:23 PM Srinivas Reddy <
> > srinivas96all...@gmail.com>
> > > wrote:
> > >
> > >> Congratulations Botong !!!
> > >>
> > >> -
> > >> Srinivas
> > >>
> > >> - Typed on tiny keys. pls ignore typos.{mobile app}
> > >>
> > >> On Thu 22 Nov, 2018, 03:27 Chang Qiang Cao wrote:
> > >>
> > >>> Congrats Botong!
> > >>>
> > >>> On Wed, Nov 21, 2018 at 2:15 PM Subru Krishnan 
> > wrote:
> > >>>
> >  The Project Management Committee (PMC) for Apache Hadoop has invited
> >  Botong Huang to become a committer and we are pleased to announce
> >  that he has accepted.
> >  Being a committer enables easier contribution to the project since
> >  there is no need to go via the patch submission process. This should
> >  enable better productivity. Being a PMC member enables assistance
> >  with the management and to guide the direction of the project.
> > 
> >  Congrats and welcome aboard.
> > 
> >  -Subru
> > 
> > >>>
> > >>
> >
> >
> >
> > -
> > To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> >
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-11-22 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/965/

[Nov 21, 2018 4:31:07 AM] (xyao) HDDS-855. Move OMMetadataManager from 
hadoop-ozone/ozone-manager to
[Nov 21, 2018 5:21:50 AM] (shashikant) HDDS-835. Use storageSize instead of 
Long for buffer size configs in
[Nov 21, 2018 5:58:20 AM] (shashikant) HDDS-860. Fix TestDataValidate unit 
tests. Contributed by Shashikant
[Nov 21, 2018 9:32:22 AM] (rohithsharmaks) YARN-8936. Bump up Atsv2 hbase 
versions. Contributed by Vrushali C.
[Nov 21, 2018 4:04:53 PM] (elek) HDDS-791. Support Range header for ozone s3 
object download. Contributed
[Nov 21, 2018 4:59:36 PM] (elek) HDDS-732. Add read method which takes offset 
and length in
[Nov 21, 2018 6:13:01 PM] (shashikant) HDDS-865. GrpcXceiverService is added 
twice to GRPC netty server.
[Nov 21, 2018 6:35:39 PM] (bharat) HDDS-816. Create OM metrics for bucket, 
volume, keys. Contributed by
[Nov 21, 2018 6:43:56 PM] (brahma) HDFS-14064. WEBHDFS: Support Enable/Disable 
EC Policy. Contributed by
[Nov 21, 2018 7:20:27 PM] (nanda) HDDS-853. Option to force close a container 
in Datanode. Contributed by
[Nov 21, 2018 7:46:53 PM] (xyao) HDDS-861. SCMNodeManager unit tests are 
broken. Contributed by Xiaoyu
[Nov 21, 2018 8:25:41 PM] (ajay) HDDS-795. RocksDb specific classes leak from 
DBStore/Table interfaces.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.util.TestDiskCheckerWithDiskIo
   hadoop.util.TestReadWriteDiskValidator
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
   hadoop.hdfs.web.TestWebHdfsTimeouts
   hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueManagementDynamicEditPolicy
   hadoop.mapreduce.jobhistory.TestEvents
   hadoop.yarn.sls.TestSLSStreamAMSynth

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/965/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/965/artifact/out/diff-compile-javac-root.txt
  [336K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/965/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/965/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/965/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/965/artifact/out/diff-patch-pylint.txt
  [40K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/965/artifact/out/diff-patch-shellcheck.txt
  [68K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/965/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/965/artifact/out/whitespace-eol.txt
  [9.3M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/965/artifact/out/whitespace-tabs.txt
  [1.1M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/965/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/965/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/965/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/965/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/965/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/965/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/965/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/965/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/965/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/965/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/965/artifact/out/branch-findbugs-ha



[ANNOUNCE] Apache Hadoop Ozone 0.3.0-alpha release

2018-11-22 Thread Elek, Marton


It gives me great pleasure to announce that the Apache Hadoop community
has voted to release Apache Hadoop Ozone 0.3.0-alpha (Arches).

Apache Hadoop Ozone is an object store for Hadoop built using Hadoop
Distributed Data Store.

This release contains a new S3 compatible interface and additional
stability improvements.

For more information and to download, please check

https://hadoop.apache.org/ozone

Many thanks to everyone who contributed to the release, and everyone in
the Apache Hadoop community! The release is the result of work from many
contributors; thank you to all of them.

Cheers,
Marton Elek

ps: This release is still alpha quality; it is not recommended for use in
production.


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15946) the Connection thread should notify all calls in finally clause before quit.

2018-11-22 Thread Jinglun (JIRA)
Jinglun created HADOOP-15946:


 Summary: the Connection thread should notify all calls in finally 
clause before quit.
 Key: HADOOP-15946
 URL: https://issues.apache.org/jira/browse/HADOOP-15946
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jinglun
 Attachments: issue-replay.patch

Threads that call Client.call() would wait forever unless the connection
thread notifies them, so the connection thread should try its best to notify
them when it is going to quit.

In Connection.close(), if any Throwable occurs before cleanupCalls(), the
connection thread will quit directly and leave all the waiting threads waiting
forever. So I think doing cleanupCalls() in a finally clause might be a good
idea.

I met this problem when I started a Hadoop 2.6 DataNode with 8 block pools.
The DN successfully reported to 7 namespaces and failed at the last namespace
because the connection thread of the heartbeat RPC got an "OOME: Direct buffer
memory" and quit without calling cleanupCalls().

I think we can move cleanupCalls() to a finally clause as a protection. I
notice that in HADOOP-10940 the close of the stream was changed to
IOUtils.closeStream(ipcStreams), which catches all Throwables, so the problem I
met was fixed there.

issue-replay.patch simulates the case I described above.
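The protection described above can be sketched with a minimal, self-contained
model. Note this is NOT the actual Hadoop IPC code: Call, close() and
cleanupCalls() below are hypothetical stand-ins for the real classes, meant
only to show why running the cleanup in a finally clause releases the waiting
threads even when the connection thread dies with an Error.

```java
import java.util.ArrayList;
import java.util.List;

public class CleanupInFinally {
    // Caller threads block on a Call until the connection thread completes it.
    static class Call {
        private boolean done = false;
        synchronized void waitForCompletion() throws InterruptedException {
            while (!done) {
                wait(); // a caller would wait forever if nobody notifies
            }
        }
        synchronized void complete() {
            done = true;
            notifyAll(); // wake every thread blocked in waitForCompletion()
        }
    }

    static final List<Call> calls = new ArrayList<>();

    // Stands in for Connection.close(): a Throwable is raised before the
    // normal cleanup, but the finally clause still notifies all waiters.
    static void close() {
        try {
            // simulate the failure seen in the report before cleanup runs
            throw new OutOfMemoryError("Direct buffer memory (simulated)");
        } finally {
            cleanupCalls(); // the protection: always executed
        }
    }

    static void cleanupCalls() {
        synchronized (calls) {
            for (Call c : calls) {
                c.complete();
            }
            calls.clear();
        }
    }

    // Returns true if the waiting thread was released despite the Error.
    static boolean demo() throws InterruptedException {
        Call call = new Call();
        synchronized (calls) { calls.add(call); }

        Thread waiter = new Thread(() -> {
            try {
                call.waitForCompletion();
            } catch (InterruptedException ignored) { }
        });
        waiter.start();

        try {
            close(); // connection thread dies with an Error...
        } catch (Error expected) {
            // ...but cleanupCalls() already ran in the finally clause.
        }

        waiter.join(5000);
        return !waiter.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("waiter released: " + demo());
    }
}
```

Without the finally clause (i.e. cleanupCalls() only on the normal path),
demo() would time out in join() and return false, which is exactly the
forever-waiting hang described above.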



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
