Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-09-23 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/454/

[Sep 23, 2019 3:06:27 PM] (ekrogen) HADOOP-16581. Addendum: Remove use of Java 
8 functionality. Contributed
[Sep 23, 2019 8:13:24 PM] (jhung) YARN-9762. Add submission context label to 
audit logs. Contributed by

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org

Re: [NOTICE] Building trunk needs protoc 3.7.1

2019-09-23 Thread Duo Zhang
The new protobuf-plugin-related issues have all been pushed to trunk (though
I think we'd better port them to all active branches).

So what's the next step? Shade and relocate protobuf? HBase has already
done this before, so I do not think it will take too much time. If we all
agree on the solution, I think we can finish it in one week.

One remaining question is whether it is OK to upgrade protobuf in a minor
release. Of course, if we shade and relocate protobuf it will hurt users
less, since they can still depend on protobuf 2.5 explicitly if they want,
but it is still a bit uncomfortable.

Thanks.

Wangda Tan wrote on Tue, Sep 24, 2019 at 2:29 AM:

> Hi Vinay,
>
> Thanks for the clarification.
>
> Do you have a timeline for when all the compatibility work you described
> will be completed? I'm asking because we need to release 3.3.0 as early
> as possible: there are already 1k+ patches in 3.3.0, so we should get it
> out soon.
>
> If the PB work will take more time, do you think we should create a
> branch for 3.3, revert the PB changes from branch-3.3, and keep working
> on PB for the next minor release (or a major release, if we do see
> compatibility issues in the future)?
>
> Just my $0.02
>
> Thanks,
> Wangda
>
> On Mon, Sep 23, 2019 at 5:43 AM Steve Loughran wrote:
>
> > aah, that makes sense
> >
> > On Sun, Sep 22, 2019 at 6:11 PM Vinayakumar B wrote:
> >
> > > Thanks Steve.
> > >
> > > The idea is not to shade all artifacts. Instead, maintain one
> > > artifact (hadoop-thirdparty) that contains all such dependencies
> > > (com.google.*, for example), and add this artifact as a dependency
> > > in the Hadoop modules. Use the shaded classes directly in the code
> > > of the Hadoop modules instead of shading at the package phase.
> > >
> > > HBase, Ozone and Ratis already follow this approach. The artifact
> > > (hadoop-thirdparty) with the shaded dependencies can be maintained
> > > in a separate repo, as suggested by Stack on HADOOP-13363, or as a
> > > separate module in the Hadoop repo. If it is maintained in a
> > > separate repo, it only needs to be rebuilt when there are changes
> > > to the shaded dependencies.
> > >
> > > -Vinay
> > >
> > > On Sun, 22 Sep 2019, 10:11 pm Steve Loughran wrote:
> > >
> > > > On Sun, Sep 22, 2019 at 3:22 PM Vinayakumar B wrote:
> > > >
> > > >> Protobuf provides wire compatibility between releases, but does
> > > >> not guarantee source compatibility of the generated sources.
> > > >> There will be a compatibility problem if anyone uses a generated
> > > >> protobuf message outside of the Hadoop modules, which ideally
> > > >> they shouldn't, as generated sources are not public APIs.
> > > >>
> > > >> There should not be any compatibility problems between releases
> > > >> in terms of communication, provided both sides use the same
> > > >> syntax (proto2) for the proto messages. I have verified this by
> > > >> communication between a protobuf 2.5.0 client and a protobuf
> > > >> 3.7.1 server.
> > > >>
> > > >> To avoid the transitive-dependency classpath problem for
> > > >> downstream projects that might be using protobuf 2.5.0 classes,
> > > >> the plan is to shade the 3.7.1 classes and their usages in all
> > > >> Hadoop modules, and keep the 2.5.0 jar on the Hadoop classpath.
> > > >>
> > > >> Hope I have answered your question.
> > > >>
> > > >> -Vinay
> > > >
> > > > While I support the move and CP isolation, this is going to
> > > > (finally) force us to make shaded versions of all the artifacts
> > > > we publish with the intent of them being loaded on the classpath
> > > > of other applications.
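The shade-and-relocate step discussed in this thread can be illustrated with a minimal sketch: a shade plugin rewrites class names from the original package into a relocated one, so Hadoop code can use protobuf 3.7.1 under a private package name while the unshaded 2.5.0 jar stays on the classpath. The target package name below is a hypothetical example, not necessarily the one Hadoop will actually use.

```java
// Sketch of the class-name relocation a shade plugin performs, reduced to a
// string rewrite. The SHADED package name is an assumption for illustration.
class RelocationDemo {
    static final String ORIGINAL = "com.google.protobuf";
    static final String SHADED = "org.apache.hadoop.thirdparty.protobuf"; // assumed name

    // Rewrite a class name into its relocated (shaded) form; classes outside
    // the relocated package are left untouched.
    static String relocate(String className) {
        return className.startsWith(ORIGINAL + ".")
                ? SHADED + className.substring(ORIGINAL.length())
                : className;
    }
}
```

Because only the relocated copies are referenced by Hadoop code, downstream applications remain free to depend on protobuf 2.5.0 directly without a classpath clash.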


[jira] [Resolved] (HDDS-2159) Fix Race condition in ProfileServlet#pid

2019-09-23 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2159.

Fix Version/s: 0.5.0
   Resolution: Fixed

Committed to the trunk

> Fix Race condition in ProfileServlet#pid
> 
>
> Key: HDDS-2159
> URL: https://issues.apache.org/jira/browse/HDDS-2159
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> There is a race condition in ProfileServlet: the servlet member field pid
> should not be used for per-request (local) assignment, as concurrent
> requests can overwrite each other's values.
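The bug pattern described above can be sketched as follows. This is hypothetical, simplified code, not the actual Ozone ProfileServlet: a member field shared by all request threads is used to hold per-request state, so one request can overwrite another's value between the write and the read; the fix is to confine the value to a local variable.

```java
// Hypothetical sketch of the race; names are illustrative and this is not
// the real ProfileServlet code.
class ProfileHandler {
    private volatile Integer pid; // shared member field, visible to all request threads

    // Buggy pattern: per-request state stored in the shared field, so
    // between the write and the read another request may overwrite it.
    Integer handleBuggy(Integer requestPid) {
        pid = requestPid;
        return pid;
    }

    // Fixed pattern: per-request state confined to a local variable on this
    // thread's stack, so concurrent requests cannot interfere.
    Integer handleFixed(Integer requestPid) {
        Integer localPid = requestPid;
        return localPid;
    }
}
```

Single-threaded both variants behave identically; only under concurrent requests does the shared-field version return another request's pid.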



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2170) Add Object IDs and Update ID to Volume Object

2019-09-23 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2170:
--

 Summary: Add Object IDs and Update ID to Volume Object
 Key: HDDS-2170
 URL: https://issues.apache.org/jira/browse/HDDS-2170
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Anu Engineer
Assignee: Anu Engineer


This patch proposes to add object ID and update ID when a volume is created. 







[jira] [Created] (HDDS-2168) TestOzoneManagerDoubleBufferWithOMResponse sometimes fails with out of memory error

2019-09-23 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2168:


 Summary: TestOzoneManagerDoubleBufferWithOMResponse sometimes 
fails with out of memory error
 Key: HDDS-2168
 URL: https://issues.apache.org/jira/browse/HDDS-2168
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: Ozone Manager
Affects Versions: 0.4.1
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


testDoubleBuffer() in TestOzoneManagerDoubleBufferWithOMResponse sometimes 
fails with OutOfMemoryError on dev machines.

 







Re: [NOTICE] Building trunk needs protoc 3.7.1

2019-09-23 Thread Wangda Tan
Hi Vinay,

Thanks for the clarification.

Do you have a timeline for when all the compatibility work you described
will be completed? I'm asking because we need to release 3.3.0 as early as
possible: there are already 1k+ patches in 3.3.0, so we should get it out
soon.

If the PB work will take more time, do you think we should create a branch
for 3.3, revert the PB changes from branch-3.3, and keep working on PB for
the next minor release (or a major release, if we do see compatibility
issues in the future)?

Just my $0.02

Thanks,
Wangda

On Mon, Sep 23, 2019 at 5:43 AM Steve Loughran wrote:

> aah, that makes sense
>
> On Sun, Sep 22, 2019 at 6:11 PM Vinayakumar B wrote:
>
> > Thanks Steve.
> >
> > The idea is not to shade all artifacts. Instead, maintain one artifact
> > (hadoop-thirdparty) that contains all such dependencies (com.google.*,
> > for example), and add this artifact as a dependency in the Hadoop
> > modules. Use the shaded classes directly in the code of the Hadoop
> > modules instead of shading at the package phase.
> >
> > HBase, Ozone and Ratis already follow this approach. The artifact
> > (hadoop-thirdparty) with the shaded dependencies can be maintained in
> > a separate repo, as suggested by Stack on HADOOP-13363, or as a
> > separate module in the Hadoop repo. If it is maintained in a separate
> > repo, it only needs to be rebuilt when there are changes to the shaded
> > dependencies.
> >
> > -Vinay
> >
> > On Sun, 22 Sep 2019, 10:11 pm Steve Loughran wrote:
> >
> > > On Sun, Sep 22, 2019 at 3:22 PM Vinayakumar B wrote:
> > >
> > >> Protobuf provides wire compatibility between releases, but does not
> > >> guarantee source compatibility of the generated sources. There will
> > >> be a compatibility problem if anyone uses a generated protobuf
> > >> message outside of the Hadoop modules, which ideally they
> > >> shouldn't, as generated sources are not public APIs.
> > >>
> > >> There should not be any compatibility problems between releases in
> > >> terms of communication, provided both sides use the same syntax
> > >> (proto2) for the proto messages. I have verified this by
> > >> communication between a protobuf 2.5.0 client and a protobuf 3.7.1
> > >> server.
> > >>
> > >> To avoid the transitive-dependency classpath problem for downstream
> > >> projects that might be using protobuf 2.5.0 classes, the plan is to
> > >> shade the 3.7.1 classes and their usages in all Hadoop modules, and
> > >> keep the 2.5.0 jar on the Hadoop classpath.
> > >>
> > >> Hope I have answered your question.
> > >>
> > >> -Vinay
> > >
> > > While I support the move and CP isolation, this is going to
> > > (finally) force us to make shaded versions of all the artifacts we
> > > publish with the intent of them being loaded on the classpath of
> > > other applications.


Meeting notes from the Ozone/Ratis Community meeting

2019-09-23 Thread Anu Engineer
During ApacheCon Las Vegas, I was encouraged to share these meeting notes
on the Apache mailing lists, so please forgive me for the weekly spam. I
had presumed that people knew of these weekly sync-ups and hence had not
been posting the notes to the mailing list. Please take a look at the older
meeting notes in the wiki if you are interested.

https://cwiki.apache.org/confluence/display/HADOOP/2019-09-23+Meeting+notes

Thanks
Anu


[jira] [Created] (HDDS-2167) Hadoop31-mr acceptance test is failing due to the shading

2019-09-23 Thread Elek, Marton (Jira)
Elek, Marton created HDDS-2167:
--

 Summary: Hadoop31-mr acceptance test is failing due to the shading
 Key: HDDS-2167
 URL: https://issues.apache.org/jira/browse/HDDS-2167
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Elek, Marton
Assignee: Elek, Marton


From the daily build:

{code}
Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/hadoop/ozone/shaded/org/apache/http/client/utils/URIBuilder
at 
org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.initialize(BasicOzoneFileSystem.java:138)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.getRemoteDestination(CommandWithDestination.java:195)
at 
org.apache.hadoop.fs.shell.CopyCommands$Put.processOptions(CopyCommands.java:259)
at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
Caused by: java.lang.ClassNotFoundException: 
org.apache.hadoop.ozone.shaded.org.apache.http.client.utils.URIBuilder
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 15 more
{code}

It can be reproduced locally by executing the tests:

{code}
cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone-mr/hadoop31
./test.sh
{code}







[jira] [Created] (HDDS-2166) Some RPC metrics are missing from SCM prometheus endpoint

2019-09-23 Thread Elek, Marton (Jira)
Elek, Marton created HDDS-2166:
--

 Summary: Some RPC metrics are missing from SCM prometheus endpoint
 Key: HDDS-2166
 URL: https://issues.apache.org/jira/browse/HDDS-2166
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Elek, Marton
Assignee: Elek, Marton


In Hadoop metrics it's possible to register multiple metrics with the same 
name but with different tags. For example, each RpcServer has its own 
metrics instance in SCM.

{code}
"name" : 
"Hadoop:service=StorageContainerManager,name=RpcActivityForPort9860",
"name" : 
"Hadoop:service=StorageContainerManager,name=RpcActivityForPort9863",
{code}

They are converted by the PrometheusSink to Prometheus metric lines with a 
proper name and tags. For example:

{code}
rpc_rpc_queue_time60s_num_ops{port="9860",servername="StorageContainerLocationProtocolService",context="rpc",hostname="72736061cbc5"}
 0
{code}

The PrometheusSink uses a Map to cache all the recent values, but 
unfortunately the key contains only the name (rpc_rpc_queue_time60s_num_ops 
in our example) and not the tags (port=...).

For this reason, if there are multiple metrics with the same name, only the 
first one is displayed.

As a result, in SCM only the metrics of the first RPC server can be exported 
to the prometheus endpoint. 
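The effect can be sketched as follows. This is hypothetical, simplified code rather than the actual PrometheusSink: caching by metric name alone keeps only the first sample, while including the tag set in the key preserves the metrics of both RPC servers.

```java
// Hypothetical sketch of the caching bug; not the real PrometheusSink code.
import java.util.LinkedHashMap;
import java.util.Map;

class MetricsCacheDemo {
    // Buggy pattern: the cache key is the metric name only, so the sample
    // for the second port never makes it into the cache.
    static Map<String, Double> cacheByNameOnly(String[][] samples) {
        Map<String, Double> cache = new LinkedHashMap<>();
        for (String[] s : samples) {              // s = {name, tags, value}
            cache.putIfAbsent(s[0], Double.valueOf(s[2]));
        }
        return cache;
    }

    // Fixed pattern: the tag set is part of the key, so metrics that share a
    // name but differ in tags (e.g. port) are kept as separate entries.
    static Map<String, Double> cacheByNameAndTags(String[][] samples) {
        Map<String, Double> cache = new LinkedHashMap<>();
        for (String[] s : samples) {
            cache.put(s[0] + "{" + s[1] + "}", Double.valueOf(s[2]));
        }
        return cache;
    }
}
```

With two samples of rpc_rpc_queue_time60s_num_ops tagged port="9860" and port="9863", the name-only cache ends up with one entry, while the name-plus-tags cache keeps both.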








[jira] [Created] (HDDS-2165) Freon fails if bucket does not exist

2019-09-23 Thread Doroszlai, Attila (Jira)
Doroszlai, Attila created HDDS-2165:
---

 Summary: Freon fails if bucket does not exist
 Key: HDDS-2165
 URL: https://issues.apache.org/jira/browse/HDDS-2165
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Tools
Affects Versions: 0.5.0
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


{code:title=ozone freon ockg}
Bucket not found
...
Failures: 0
Successful executions: 0
{code}







Re: [NOTICE] Building trunk needs protoc 3.7.1

2019-09-23 Thread Steve Loughran
aah, that makes sense

On Sun, Sep 22, 2019 at 6:11 PM Vinayakumar B wrote:

> Thanks Steve.
>
> The idea is not to shade all artifacts. Instead, maintain one artifact
> (hadoop-thirdparty) that contains all such dependencies (com.google.*,
> for example), and add this artifact as a dependency in the Hadoop
> modules. Use the shaded classes directly in the code of the Hadoop
> modules instead of shading at the package phase.
>
> HBase, Ozone and Ratis already follow this approach. The artifact
> (hadoop-thirdparty) with the shaded dependencies can be maintained in a
> separate repo, as suggested by Stack on HADOOP-13363, or as a separate
> module in the Hadoop repo. If it is maintained in a separate repo, it
> only needs to be rebuilt when there are changes to the shaded
> dependencies.
>
> -Vinay
>
> On Sun, 22 Sep 2019, 10:11 pm Steve Loughran wrote:
>
> > On Sun, Sep 22, 2019 at 3:22 PM Vinayakumar B wrote:
> >
> >> Protobuf provides wire compatibility between releases, but does not
> >> guarantee source compatibility of the generated sources. There will
> >> be a compatibility problem if anyone uses a generated protobuf
> >> message outside of the Hadoop modules, which ideally they shouldn't,
> >> as generated sources are not public APIs.
> >>
> >> There should not be any compatibility problems between releases in
> >> terms of communication, provided both sides use the same syntax
> >> (proto2) for the proto messages. I have verified this by
> >> communication between a protobuf 2.5.0 client and a protobuf 3.7.1
> >> server.
> >>
> >> To avoid the transitive-dependency classpath problem for downstream
> >> projects that might be using protobuf 2.5.0 classes, the plan is to
> >> shade the 3.7.1 classes and their usages in all Hadoop modules, and
> >> keep the 2.5.0 jar on the Hadoop classpath.
> >>
> >> Hope I have answered your question.
> >>
> >> -Vinay
> >
> > While I support the move and CP isolation, this is going to (finally)
> > force us to make shaded versions of all the artifacts we publish with
> > the intent of them being loaded on the classpath of other applications.


Re: [DISCUSS] Separate Hadoop Core trunk and Hadoop Ozone trunk source tree [discussion -> lazy vote]

2019-09-23 Thread Elek, Marton
> Do you see a Submarine like split-also-into-a-TLP for Ozone? If not 
now, sometime further down the line?


Good question, and I don't know what the best answer is right now. It's 
definitely an option, but the Submarine move hasn't been finished, so it's 
not yet possible to learn from that experience (which could be a useful 
input for the decision).


I think it's a bigger/more important question and I would prefer to 
start a new thread about it.


>  If so, why not do both at the same time?

That's an easier question: I think the repo separation is the easier step, 
with immediate benefits, so I would prefer to do it as soon as possible.


Moving to a separate TLP may take months (discussion, vote, proposal, 
board approval, etc.), while this code-organization step can be done 
easily after the 0.4.1 Ozone release (which is very close, I hope).


As it should be done anyway (with or without a separate TLP), I propose to 
do it after the next Ozone release (in the next 1-2 weeks).




As the overall feedback was positive (in fact, many of the answers were 
simple +1 votes), I don't think the thread needs to be repeated under a 
[VOTE] subject. Therefore I am calling for lazy consensus: if you have 
any objections (against doing the repo separation now, or doing it at 
all), please express them in the next 3 days...


Thanks a lot,
Marton

On 9/22/19 4:02 PM, Vinod Kumar Vavilapalli wrote:

It looks to me like the advantages of this additional step are only 
incremental, given that you've already decoupled the releases and dependencies.

Do you see a Submarine like split-also-into-a-TLP for Ozone? If not now, 
sometime further down the line? If so, why not do both at the same time? I felt 
the same way with Submarine, but couldn't follow up in time.

Thanks
+Vinod


On Sep 18, 2019, at 4:04 AM, Wangda Tan  wrote:

+1 (binding).

 From my experience with the Submarine project, I think moving to a separate
repo helps.

- Wangda

On Tue, Sep 17, 2019 at 11:41 AM Subru Krishnan  wrote:


+1 (binding).

IIUC, there will not be an Ozone module in trunk anymore as that was my
only concern from the original discussion thread? IMHO, this should be the
default approach for new modules.

On Tue, Sep 17, 2019 at 9:58 AM Salvatore LaMendola (BLOOMBERG/ 731 LEX) <
slamendo...@bloomberg.net> wrote:


+1

From: e...@apache.org  At: 09/17/19 05:48:32
To: hdfs-dev@hadoop.apache.org, mapreduce-...@hadoop.apache.org, 
common-...@hadoop.apache.org, yarn-...@hadoop.apache.org
Subject: [DISCUSS] Separate Hadoop Core trunk and Hadoop Ozone trunk
source tree


TLDR; I propose to move the Ozone-related code out of Hadoop trunk and
store it in a separate *Hadoop* git repository, apache/hadoop-ozone.git.


When Ozone was adopted as a new Hadoop subproject it was proposed[1] to
be part of the source tree but with a separate release cadence, mainly
because it had hadoop-trunk/SNAPSHOT as a compile-time dependency.

During the last Ozone releases this dependency was removed to provide
more stable releases. Instead of using the latest trunk/SNAPSHOT build
from Hadoop, Ozone uses the latest stable Hadoop (3.2.0 as of now).

As we no longer have a strict dependency between Hadoop trunk SNAPSHOT and
Ozone trunk, I propose to separate the two code bases from each other by
creating a new Hadoop git repository (apache/hadoop-ozone.git).

With Ozone moved to a separate git repository:

  * It would be easier to contribute and to understand the build (as of now
we always need `-f pom.ozone.xml` as a Maven parameter).
  * It would be possible to adjust the build process without breaking
Hadoop/Ozone builds.
  * It would be possible to use different Readme/.asf.yaml/github
templates for Hadoop Ozone and core Hadoop. (For example, the current
github template [2] has a link to the contribution guideline [3]. Ozone
has an extended version [4] of this guideline with additional
information.)
  * Testing would be safer, as it would no longer be possible to change core
Hadoop and Hadoop Ozone in the same patch.
  * It would be easier to cut branches for Hadoop releases (based on the
original consensus, Ozone should be removed from all the release
branches after creating release branches from trunk).


What do you think?

Thanks,
Marton

[1]:
https://lists.apache.org/thread.html/c85e5263dcc0ca1d13cbbe3bcfb53236784a39111b8c353f60582eb4@%3Chdfs-dev.hadoop.apache.org%3E
[2]:
https://github.com/apache/hadoop/blob/trunk/.github/pull_request_template.md
[3]:
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
[4]:
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org









[jira] [Created] (HDFS-14868) Fix typo in TestRouterQuota

2019-09-23 Thread Jinglun (Jira)
Jinglun created HDFS-14868:
--

 Summary: Fix typo in TestRouterQuota
 Key: HDFS-14868
 URL: https://issues.apache.org/jira/browse/HDFS-14868
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jinglun


There is a typo in TestRouterQuota; see the patch for details.



