Re: new committer: Gabor Bota

2019-07-01 Thread Zsolt Venczel
This is awesome Gábor! Congratulations!

On Tue, Jul 2, 2019 at 7:20 AM Sree Vaddi wrote:

> Congratulations, Gabor.
>
>
>
> Thank you.
> /Sree
>
>
> On Monday, July 1, 2019, 4:35:59 PM PDT, Sean Mackrory <
> mackror...@gmail.com> wrote:
>
>  The Project Management Committee (PMC) for Apache Hadoop
> has invited Gabor Bota to become a committer and we are pleased
> to announce that he has accepted.
>
> Gabor has been working on the S3A file-system, especially on
> the robustness and completeness of S3Guard to help deal with
> inconsistency in object storage. I'm excited to see his work
> with the community continue!
>
> Being a committer enables easier contribution to the
> project since there is no need to go via the patch
> submission process. This should enable better productivity.
>


Re: new committer: Gabor Bota

2019-07-01 Thread Sree Vaddi
Congratulations, Gabor.



Thank you.
/Sree

On Monday, July 1, 2019, 4:35:59 PM PDT, Sean Mackrory wrote:

 The Project Management Committee (PMC) for Apache Hadoop
has invited Gabor Bota to become a committer and we are pleased
to announce that he has accepted.

Gabor has been working on the S3A file-system, especially on
the robustness and completeness of S3Guard to help deal with
inconsistency in object storage. I'm excited to see his work
with the community continue!

Being a committer enables easier contribution to the
project since there is no need to go via the patch
submission process. This should enable better productivity.
  

Re: new committer: Gabor Bota

2019-07-01 Thread Dinesh Chitlangia
Congrats Gabor!


-Dinesh




On Mon, Jul 1, 2019 at 7:36 PM Sean Mackrory  wrote:

> The Project Management Committee (PMC) for Apache Hadoop
> has invited Gabor Bota to become a committer and we are pleased
> to announce that he has accepted.
>
> Gabor has been working on the S3A file-system, especially on
> the robustness and completeness of S3Guard to help deal with
> inconsistency in object storage. I'm excited to see his work
> with the community continue!
>
> Being a committer enables easier contribution to the
> project since there is no need to go via the patch
> submission process. This should enable better productivity.
>


Re: new committer: Gabor Bota

2019-07-01 Thread Wanqiang Ji
Congratulations, Gabor.

On Tue, Jul 2, 2019 at 7:36 AM Sean Mackrory  wrote:

> The Project Management Committee (PMC) for Apache Hadoop
> has invited Gabor Bota to become a committer and we are pleased
> to announce that he has accepted.
>
> Gabor has been working on the S3A file-system, especially on
> the robustness and completeness of S3Guard to help deal with
> inconsistency in object storage. I'm excited to see his work
> with the community continue!
>
> Being a committer enables easier contribution to the
> project since there is no need to go via the patch
> submission process. This should enable better productivity.
>


new committer: Gabor Bota

2019-07-01 Thread Sean Mackrory
The Project Management Committee (PMC) for Apache Hadoop
has invited Gabor Bota to become a committer and we are pleased
to announce that he has accepted.

Gabor has been working on the S3A file-system, especially on
the robustness and completeness of S3Guard to help deal with
inconsistency in object storage. I'm excited to see his work
with the community continue!

Being a committer enables easier contribution to the
project since there is no need to go via the patch
submission process. This should enable better productivity.


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-07-01 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1184/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore
 
   Unread field:TimelineEventSubDoc.java:[line 56] 
   Unread field:TimelineMetricSubDoc.java:[line 44] 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus
implements Cloneable but does not define or use the clone method At
TaskStatus.java:[lines 39-346]
   Equals method for
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the
argument is of type WorkerId At WorkerId.java:[line 114]
   org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object)
does not check for null argument At WorkerId.java:[lines 114-115]
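
(For reference: the two WorkerId items above describe a standard equals()
contract problem. Below is a minimal, hypothetical sketch of the type-checked,
null-safe shape FindBugs expects; the "hostname" field is illustrative, not
taken from the actual mawo source.)

public class WorkerId {
  private final String hostname;  // illustrative field, not the real one

  public WorkerId(String hostname) {
    this.hostname = hostname;
  }

  @Override
  public boolean equals(Object other) {
    if (this == other) {
      return true;                    // reflexive fast path
    }
    if (!(other instanceof WorkerId)) {
      return false;                   // also rejects a null argument safely
    }
    WorkerId that = (WorkerId) other;
    return hostname.equals(that.hostname);
  }

  @Override
  public int hashCode() {
    return hostname.hashCode();       // keep equals/hashCode consistent
  }
}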

FindBugs :

   module:hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra 
   org.apache.hadoop.tools.dynamometer.Client.addFileToZipRecursively(File,
File, ZipOutputStream) may fail to clean up java.io.InputStream on checked
exception; obligation to clean up resource created at Client.java:[line 859]
is not discharged
   Exceptional return value of java.io.File.mkdirs() ignored in
org.apache.hadoop.tools.dynamometer.DynoInfraUtils.fetchHadoopTarball(File,
String, Configuration, Logger) At DynoInfraUtils.java:[line 138]
   Found reliance on default encoding in
org.apache.hadoop.tools.dynamometer.SimulatedDataNodes.run(String[]): new
java.io.InputStreamReader(InputStream) At SimulatedDataNodes.java:[line 149]
   org.apache.hadoop.tools.dynamometer.SimulatedDataNodes.run(String[])
invokes System.exit(...), which shuts down the entire virtual machine At
SimulatedDataNodes.java:[line 123]
   org.apache.hadoop.tools.dynamometer.SimulatedDataNodes.run(String[]) may
fail to close stream At SimulatedDataNodes.java:[line 149]
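
(For reference: the unclosed-stream and default-encoding items above are
conventionally fixed together with try-with-resources and an explicit
charset. A generic sketch under that assumption, not the actual Dynamometer
code:)

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class StreamReadSketch {
  // try-with-resources closes the reader (and the wrapped stream) on every
  // path, including checked exceptions; the explicit charset avoids relying
  // on the platform default encoding.
  static String readFirstLine(InputStream in) throws IOException {
    try (BufferedReader reader = new BufferedReader(
        new InputStreamReader(in, StandardCharsets.UTF_8))) {
      return reader.readLine();
    }
  }
}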

FindBugs :

   module:hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-blockgen 
   Self assignment of field BlockInfo.replication in new
org.apache.hadoop.tools.dynamometer.blockgenerator.BlockInfo(BlockInfo) At
BlockInfo.java:[line 78]
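
(For reference: self-assignment in a copy constructor usually means the copy
source was silently dropped. A hypothetical sketch of the flagged bug and its
fix, with illustrative names rather than the real blockgen source:)

public class BlockInfo {
  private short replication;

  public BlockInfo(BlockInfo other) {
    // buggy form FindBugs flags: this.replication = this.replication;
    this.replication = other.replication;  // fix: copy from the source object
  }
}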

Failed junit tests :

   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup 
   hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken 
   hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination 
   
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption 
   hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue 
   hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis 
   hadoop.ozone.client.rpc.TestOzoneAtRestEncryption 
   hadoop.ozone.client.rpc.TestFailureHandlingByClient 
   hadoop.ozone.client.rpc.TestOzoneRpcClient 
   hadoop.ozone.client.rpc.TestSecureOzoneRpcClient 
   hadoop.hdds.scm.pipeline.TestSCMPipelineManager 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1184/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1184/artifact/out/diff-compile-javac-root.txt
  [336K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1184/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1184/artifact/out/diff-patch-hadolint.txt
  [8.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1184/artifact/out/pathlen.txt
  [12K]

   pylint:

   

Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-07-01 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/369/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   Class org.apache.hadoop.fs.GlobalStorageStatistics defines non-transient
non-serializable instance field map In GlobalStorageStatistics.java

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At
ColumnRWHelper.java:[line 335]
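
(For reference: the unbox/rebox pattern typically comes from calling
longValue() on a boxed key that immediately goes back into a boxed context.
A hypothetical sketch, not the actual ColumnRWHelper code:)

import java.util.NavigableMap;
import java.util.TreeMap;

public class ReboxSketch {
  static NavigableMap<Long, Object> copy(NavigableMap<Long, Object> in) {
    NavigableMap<Long, Object> out = new TreeMap<>();
    for (Long ts : in.keySet()) {
      // out.put(ts.longValue(), in.get(ts));  // unbox + immediate auto-rebox
      out.put(ts, in.get(ts));                 // use the boxed value directly
    }
    return out;
  }
}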

Failed junit tests :

   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.sls.appmaster.TestAMSimulator 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/369/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/369/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/369/artifact/out/diff-compile-cc-root-jdk1.8.0_212.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/369/artifact/out/diff-compile-javac-root-jdk1.8.0_212.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/369/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/369/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/369/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/369/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/369/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/369/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/369/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/369/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/369/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/369/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/369/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/369/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/369/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_212.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/369/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [288K]
   

[jira] [Created] (HADOOP-16403) Start a new statistical rpc queue and make the Reader's pendingConnection queue runtime-replaceable

2019-07-01 Thread Jinglun (JIRA)
Jinglun created HADOOP-16403:


 Summary: Start a new statistical rpc queue and make the Reader's 
pendingConnection queue runtime-replaceable
 Key: HADOOP-16403
 URL: https://issues.apache.org/jira/browse/HADOOP-16403
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jinglun


I have an HA cluster with 2 NameNodes. The NameNode's metadata is quite big,
so after the active NameNode dies it takes the standby more than 40s to become
active. Many requests (TCP connect requests and RPC requests) from DataNodes,
clients, and ZKFC time out and start retrying. The sudden request flood lasts
for the next 2 minutes, until all requests are either handled or have
exhausted their retries.
Tuning the RPC-related settings might strengthen the NameNode and solve this
problem; the key point is finding the bottleneck. The RPC server can be
described as below:
{noformat}
Listener -> Readers' queues -> Readers -> callQueue -> Handlers{noformat}
By sampling some failed clients, I found that many of them got
ConnectException, caused by a TCP connect request that went unanswered for
20s. I think the reader queue may be full, blocking the listener from handling
new connections. Both slow handlers and slow readers can block the whole
processing pipeline, and I need to know which one it is. I think *a queue that
computes the qps, writes a log when the queue is full, and can be replaced
easily* would help.
HADOOP-10302 is nice work implementing a runtime-swappable queue. Using it as
the Reader's queue makes the reader queue runtime-swappable automatically. The
qps computation could be done by implementing a subclass of
LinkedBlockingQueue that does the counting as put/take/... happen. The qps
data would be shown via JMX.
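
A minimal sketch of what such a qps-computing queue could look like, assuming
a simple counter-based LinkedBlockingQueue subclass (class and method names
here are hypothetical, not an existing Hadoop class):
{noformat}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.LongAdder;

public class MeteredBlockingQueue<E> extends LinkedBlockingQueue<E> {
  private final LongAdder puts = new LongAdder();
  private final LongAdder takes = new LongAdder();
  private final LongAdder rejected = new LongAdder();

  public MeteredBlockingQueue(int capacity) {
    super(capacity);
  }

  @Override
  public void put(E e) throws InterruptedException {
    super.put(e);
    puts.increment();
  }

  @Override
  public boolean offer(E e) {
    boolean accepted = super.offer(e);
    if (accepted) {
      puts.increment();
    } else {
      rejected.increment();  // queue full: the signal worth logging
    }
    return accepted;
  }

  @Override
  public E take() throws InterruptedException {
    E e = super.take();
    takes.increment();
    return e;
  }

  // A periodic reporter (e.g. a JMX bean) could read these counters and
  // divide the deltas by the sampling interval to get enqueue/dequeue qps.
  public long putCount() { return puts.sum(); }
  public long takeCount() { return takes.sum(); }
  public long rejectedCount() { return rejected.sum(); }
}{noformat}
Swapping such a queue in at the Reader via the HADOOP-10302 mechanism would
then show whether the reader queue or the callQueue is the one backing up.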

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org