[jira] [Created] (HADOOP-16624) Upgrade hugo to the latest version in Dockerfile

2019-10-01 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-16624:
--

 Summary: Upgrade hugo to the latest version in Dockerfile
 Key: HADOOP-16624
 URL: https://issues.apache.org/jira/browse/HADOOP-16624
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Akira Ajisaka


In the Dockerfile, the hugo version is 0.30.2; the latest hugo version is now 
0.58.3.
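
A hedged sketch of the kind of change involved, assuming the version is pinned 
via a variable and installed from the upstream release .deb (the actual 
variable name and install steps in Hadoop's Dockerfile may differ):

{code}
# Hypothetical Dockerfile excerpt: bump the pinned hugo release.
# Before: ENV HUGO_VERSION=0.30.2
ENV HUGO_VERSION=0.58.3
RUN curl -L -o hugo.deb \
      "https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/hugo_${HUGO_VERSION}_Linux-64bit.deb" \
 && dpkg -i hugo.deb \
 && rm hugo.deb
{code}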






Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-10-01 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/462/

[Oct 1, 2019 10:43:53 AM] (bibinchundatt) YARN-9858. Optimize RMContext 
getExclusiveEnforcedPartitions.


[jira] [Resolved] (HADOOP-16578) ABFS: fileSystemExists() should not call container level apis

2019-10-01 Thread Da Zhou (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou resolved HADOOP-16578.
--
Resolution: Fixed

> ABFS: fileSystemExists() should not call container level apis
> -
>
> Key: HADOOP-16578
> URL: https://issues.apache.org/jira/browse/HADOOP-16578
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Da Zhou
>Assignee: Sneha Vijayarajan
>Priority: Major
> Fix For: 3.3.0
>
>
> The ABFS driver should not use the container-level API "Get Container 
> Properties", as there is no concept of a container in HDFS, and this caused 
> some RBAC check issues.
> Fix: use getFileStatus() to check whether the container exists.
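
A minimal sketch of the fix described above, assuming a generic Hadoop 
FileSystem handle; this is illustrative, not the actual HADOOP-16578 patch:

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical sketch: probe the container (filesystem) with a path-level
// getFileStatus() call instead of the container-level
// "Get Container Properties" API, which can trip RBAC checks.
public class ContainerProbe {
  static boolean fileSystemExists(FileSystem fs) throws IOException {
    try {
      fs.getFileStatus(new Path("/")); // path-level call only
      return true;
    } catch (FileNotFoundException e) {
      return false; // root missing, so the container does not exist
    }
  }
}
{code}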






[jira] [Resolved] (HADOOP-14930) Upgrade Jetty to 9.4 version

2019-10-01 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng resolved HADOOP-14930.
-
Resolution: Duplicate

Closing this one since the latest work is being done in HADOOP-16152.

> Upgrade Jetty to 9.4 version
> 
>
> Key: HADOOP-14930
> URL: https://issues.apache.org/jira/browse/HADOOP-14930
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-14930.00.patch
>
>
> Currently 9.3.19.v20170502 is used.
> In HBase 2.0+, 9.4.6.v20170531 is used.
> When starting a mini DFS cluster in HBase unit tests, we get the following:
> {code}
> java.lang.NoSuchMethodError: 
> org.eclipse.jetty.server.session.SessionHandler.getSessionManager()Lorg/eclipse/jetty/server/SessionManager;
>   at 
> org.apache.hadoop.http.HttpServer2.initializeWebServer(HttpServer2.java:548)
>   at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:529)
>   at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:119)
>   at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:415)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:157)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:887)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:949)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:928)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1637)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1277)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1046)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:921)
> {code}
> This issue is to upgrade Jetty to the 9.4 line.
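
A hedged illustration of the kind of bump involved, assuming Jetty's version 
is managed through a Maven property (the property name is illustrative, not 
necessarily Hadoop's actual pom layout); HttpServer2's use of the removed 
SessionHandler.getSessionManager() would also need updating:

{code:xml}
<!-- Hypothetical pom.xml excerpt: align Hadoop with the Jetty 9.4 line so
     downstream users such as HBase (on 9.4.6.v20170531) no longer hit the
     9.3-era SessionHandler.getSessionManager() NoSuchMethodError. -->
<properties>
  <jetty.version>9.4.6.v20170531</jetty.version>
</properties>
{code}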






[jira] [Resolved] (HADOOP-16619) Upgrade jackson and jackson-databind to 2.9.10

2019-10-01 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng resolved HADOOP-16619.
-
Fix Version/s: 3.3.0
   Resolution: Done

> Upgrade jackson and jackson-databind to 2.9.10
> --
>
> Key: HADOOP-16619
> URL: https://issues.apache.org/jira/browse/HADOOP-16619
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
>
> Two more CVEs (CVE-2019-16335 and CVE-2019-14540) are addressed in 
> jackson-databind 2.9.10.
> For details, see the Jackson 2.9.10 [release 
> notes|https://github.com/FasterXML/jackson/wiki/Jackson-Release-2.9.10].
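
A hedged sketch of the corresponding dependency bump, assuming a Maven 
property controls the Jackson 2.x version (the property name is illustrative):

{code:xml}
<!-- Hypothetical pom.xml excerpt: move the Jackson 2.x line to 2.9.10 to
     pick up the fixes for CVE-2019-16335 and CVE-2019-14540. -->
<properties>
  <jackson2.version>2.9.10</jackson2.version>
</properties>
{code}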






[jira] [Created] (HADOOP-16623) Expose total number of DT in JMX for KMS

2019-10-01 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HADOOP-16623:


 Summary: Expose total number of DT in JMX for KMS
 Key: HADOOP-16623
 URL: https://issues.apache.org/jira/browse/HADOOP-16623
 Project: Hadoop Common
  Issue Type: Improvement
  Components: kms
Reporter: Wei-Chiu Chuang


Similar to HDFS-14449, we should expose the total number of KMS delegation 
tokens in JMX.
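
A minimal sketch of one way to do this with Hadoop's metrics2 framework, which 
is published over JMX; the class and metric names below are assumptions, not 
the eventual patch:

{code:java}
import java.util.function.IntSupplier;

import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;

// Hypothetical sketch: a metrics2 source whose gauge reports the current
// delegation-token count; metrics2 sources are surfaced via JMX.
@Metrics(about = "KMS delegation token metrics", context = "kms")
public class KMSDelegationTokenMetrics {
  private final IntSupplier tokenCount; // e.g. size of the current-tokens map

  private KMSDelegationTokenMetrics(IntSupplier tokenCount) {
    this.tokenCount = tokenCount;
  }

  @Metric("Current number of KMS delegation tokens")
  public int getNumDelegationTokens() {
    return tokenCount.getAsInt();
  }

  public static KMSDelegationTokenMetrics register(IntSupplier tokenCount) {
    return DefaultMetricsSystem.instance().register(
        "KMSDelegationTokenMetrics", "KMS delegation token metrics",
        new KMSDelegationTokenMetrics(tokenCount));
  }
}
{code}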






[jira] [Resolved] (HADOOP-16458) LocatedFileStatusFetcher scans failing intermittently against S3 store

2019-10-01 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16458.
-
Fix Version/s: 3.3.0
   Resolution: Fixed

Resolved in trunk. As noted in the commit, this includes:

- S3A glob scans don't bother trying to resolve symlinks.
- Stack traces don't get lost in getFileStatuses() when exceptions are wrapped.
- Debug-level logging of what is up in Globber.
- A test of LocatedFileStatus in S3A, though I've got some better ideas there 
(i.e. make it a scale test).
- HADOOP-13373: Add an S3A implementation of FSMainOperationsBaseTest.
- ITestRestrictedReadAccess, which tests incomplete read access to files.

This adds a builder API for constructing globbers which other stores can use
so that they too can skip symlink resolution when not needed.
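
A hedged illustration of what such a builder-style call might look like; the 
method names below are assumptions for illustration, not the committed 
HADOOP-16458 API:

{code:java}
// Hypothetical usage sketch of a globber builder that lets a store opt out
// of symlink resolution (S3A has no symlinks, so the check is wasted work).
FileStatus[] statuses = Globber.createGlobber(fs)
    .withPathPattern(new Path("s3a://bucket/data/*/part-*"))
    .withResolveSymlinks(false)
    .build()
    .glob();
{code}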

> LocatedFileStatusFetcher scans failing intermittently against S3 store
> --
>
> Key: HADOOP-16458
> URL: https://issues.apache.org/jira/browse/HADOOP-16458
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
> Environment: S3 + S3Guard
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
>
> Intermittent failure of LocatedFileStatusFetcher.getFileStatuses(), which is 
> using globStatus to find files.
> I'd say "turn S3Guard on", except that already appears to be the case, and 
> the dataset being read is over 1h old, which makes it harder than I'd like 
> to blame S3 for what sounds like an inconsistency.
> We're hampered by the number of debug-level statements in the globber code 
> being approximately none; there's no debugging to turn on. All we know is 
> that globFiles returns null without any explanation.






ApacheCon North America 2020, project participation

2019-10-01 Thread Rich Bowen
Hi, folks,

(Note: You're receiving this email because you're on the dev@ list for
one or more Apache Software Foundation projects.)

For ApacheCon North America 2019, we asked projects to participate in
the creation of project/topic specific tracks. This was very successful,
with about 15 projects stepping up to curate the content for their
track/summit/event.

We need to know if you're going to do the same for 2020. This informs
how large a venue we book for the event, how long the event runs, and
many other considerations.

If you intend to participate again in 2020, we need to hear from you on
the plann...@apachecon.com mailing list. This is not a firm commitment,
but we need to know if you're, say, 75% confident that you'll be
participating.

And, no, we do not have any details at all, but assume that it will be
in roughly the same calendar space as this year's event, i.e., somewhere
in the August-October timeframe.

Thanks.

-- 
Rich Bowen
VP Conferences
The Apache Software Foundation
@apachecon




[jira] [Resolved] (HADOOP-16310) Log of a slow RPC request should contain the parameter of the request

2019-10-01 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula resolved HADOOP-16310.
---
Resolution: Duplicate

> Log of a slow RPC request should contain the parameter of the request
> -
>
> Key: HADOOP-16310
> URL: https://issues.apache.org/jira/browse/HADOOP-16310
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Affects Versions: 3.1.1, 2.7.7, 3.1.2
>Reporter: lindongdong
>Priority: Minor
>
> Now, the log of a slow RPC request just contains the *methodName*, 
> *processingTime*, and *client*. The code is here:
> {code:java}
> if ((rpcMetrics.getProcessingSampleCount() > minSampleSize) &&
> (processingTime > threeSigma)) {
>   if(LOG.isWarnEnabled()) {
> String client = CurCall.get().toString();
> LOG.warn(
> "Slow RPC : " + methodName + " took " + processingTime +
> " milliseconds to process from client " + client);
>   }
>   rpcMetrics.incrSlowRpc();
> }{code}
>  
> It is not enough to analyze why the RPC request is slow.
> The parameters of the request are very important and need to be logged.
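
A hedged sketch of the kind of change being asked for; the parameter accessor 
here is an assumption for illustration, not existing Hadoop API:

{code:java}
// Hypothetical extension of the warning above: include a rendering of the
// request parameters so slow calls can actually be analyzed.
if (LOG.isWarnEnabled()) {
  String client = CurCall.get().toString();
  LOG.warn("Slow RPC : " + methodName + " took " + processingTime
      + " milliseconds to process from client " + client
      + ", parameters: " + paramsDescription); // e.g. the request's toString()
}
{code}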






[jira] [Created] (HADOOP-16622) intermittent failure of ITestCommitOperations: too many s3guard writes

2019-10-01 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16622:
---

 Summary: intermittent failure of ITestCommitOperations: too many 
s3guard writes
 Key: HADOOP-16622
 URL: https://issues.apache.org/jira/browse/HADOOP-16622
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Steve Loughran


Intermittent failure of ITestCommitOperations: expected 2 S3Guard writes, saw 7.

The logged commit state shows that only two entries were added, so I'm not sure 
what is up. Proposed: in HADOOP-16207 I will set the s3guard.operations log to 
debug so we get a trace of all DDB put/delete calls; this will let us debug it 
when it surfaces again.
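
For reference, a hedged sketch of what that might look like in a test 
log4j.properties; the logger name is an assumption based on the description 
above, not a committed setting:

{code}
# Hypothetical log4j setting: trace every S3Guard DynamoDB put/delete.
log4j.logger.org.apache.hadoop.fs.s3a.s3guard.Operations=DEBUG
{code}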






[jira] [Created] (HADOOP-16621) spark-hive doesn't compile against hadoop trunk because Token uses protobuf 3

2019-10-01 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16621:
---

 Summary: spark-hive doesn't compile against hadoop trunk because 
Token uses protobuf 3
 Key: HADOOP-16621
 URL: https://issues.apache.org/jira/browse/HADOOP-16621
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: common
Affects Versions: 3.3.0
Reporter: Steve Loughran


The move to protobuf 3.x stops Spark building because Token has a method which 
returns a protobuf, and that method is now returning some v3 types.

If we want to isolate downstream code from protobuf changes, we need to move 
that marshalling method out of Token and put it in a helper class.
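
A minimal sketch of the isolation described above; the helper name is an 
assumption, and the setters assume the shape of Hadoop's TokenProto message:

{code:java}
import org.apache.hadoop.security.proto.SecurityProtos.TokenProto;
import org.apache.hadoop.security.token.Token;

import com.google.protobuf.ByteString;

// Hypothetical helper: keep the protobuf-returning conversion out of
// Token's public API so downstream code never sees protobuf types.
public final class TokenProtoUtils {
  private TokenProtoUtils() {}

  public static TokenProto toProto(Token<?> token) {
    return TokenProto.newBuilder()
        .setIdentifier(ByteString.copyFrom(token.getIdentifier()))
        .setPassword(ByteString.copyFrom(token.getPassword()))
        .setKind(token.getKind().toString())
        .setService(token.getService().toString())
        .build();
  }
}
{code}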






[jira] [Created] (HADOOP-16620) Remove protocol buffers 3.7.1 from requirements in BUILDING.txt

2019-10-01 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-16620:
--

 Summary: Remove protocol buffers 3.7.1 from requirements in 
BUILDING.txt
 Key: HADOOP-16620
 URL: https://issues.apache.org/jira/browse/HADOOP-16620
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: documentation
Affects Versions: 3.3.0
Reporter: Akira Ajisaka


After HADOOP-16558, protocol buffers 3.7.1 is no longer required for building 
Apache Hadoop.






Re: Daily Builds Getting Aborted Due To Timeout

2019-10-01 Thread Vinayakumar B
Thanks Ayush,

I think other possible ways are to improve the test runtime:

1. Try the "parallel-tests" profile, if applicable, on all modules that are
okay with tests running in parallel. This may reduce the total unit-test
runtime drastically. For example, hadoop-hdfs-rbf takes around ~22 min, and
parallel-tests could be enabled there (we would also need to run and verify
that all tests use local filesystem paths that do not collide with other
tests running in parallel). See the sketch after this list for the kind of
invocation involved.

2. Right now all modules of HDDS and Ozone also run as part of
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/, but I can
also see there was a QBT job created for Ozone alone. Can we make that work
and exclude the HDDS- and Ozone-related executions from
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1271/ ?
In the aborted builds above, the run was still executing unit tests in the
HDDS/Ozone modules, which means all other modules had finished.

3. If the above options are not possible or do not result in any
improvements, then we can definitely go for a timeout increase.
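
A hedged example of the invocation item 1 refers to; several Hadoop modules
(e.g. hadoop-common, hadoop-hdfs) already define a parallel-tests profile with
a testsThreadCount property, though the exact flags per module may vary:

  mvn test -Pparallel-tests -DtestsThreadCount=8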

-Vinay


On Fri, Sep 27, 2019 at 8:54 PM Ayush Saxena  wrote:

> Hi All,
> Just to bring to notice that the Hadoop daily builds are getting aborted due
> to timeout (configured to be 900 minutes).
>
> > Build timed out (after 900 minutes). Marking the build as aborted.
> > Build was aborted
> > [CHECKSTYLE] Skipping publisher since build result is ABORTED
> > [FINDBUGS] Skipping publisher since build result is ABORTED
> >
> > Recording test results
> > No emails were triggered.
> > Finished: ABORTED
> >
> >
> Reference :
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1271/
>
> I checked with the infra team; the only resolution offered was to increase
> the configured time of 900 mins or make the build take less time.
>
> Someone with access to change the config can probably increase the time.
> (Probably people in https://whimsy.apache.org/roster/group/hudson-jobadmin
> have access)
> *Link To Change Configured Time* :
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/configure
>
>
> -Ayush
>