[jira] [Comment Edited] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-07-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876671#comment-16876671
 ] 

Anu Engineer edited comment on HDDS-1735 at 7/2/19 5:57 AM:


{quote}Look for the section, Testing your patch
{quote}
It tells how to run Hadoop on my Mac. My question was:
{quote}Can you point me to an instruction – or documentation that explain how 
to even setup Yetus on my Mac
{quote}
Completely different question.

 
{quote}Checkstyle, findbugs, rat plugins, shellcheck, Dockerfile checks are all 
using default or relaxed rules sets
{quote}
You are making assertions without data to back you up. I will personally assert 
that what you are saying is completely wrong – to the point that I think you have 
no idea what you are talking about.
 # Let us do some tests – take checkstyle – run it over the Hadoop HDFS and Hadoop 
Common code base and post the result here.
 # Now run the same over the Ozone and HDDS directories; post the result here.

I am sure you will not, since it will just expose your false claim.

 

Now let us go to findbugs. Do the same – run findbugs – then be a good 
citizen in the Hadoop world and fix the issues. I will throw down the 
gauntlet: if *you* can fix 10% of the findbugs issues in the HDFS/Hadoop Common code 
base, I will *personally* fix all the findbugs issues in the Ozone/HDDS code base.
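
For concreteness, the comparison can be kicked off with something like the 
following (a rough sketch; the plugin goals and module paths are assumptions 
based on the stock Hadoop Maven setup, not an exact recipe):
{code:bash}
# Checkstyle over Hadoop Common and HDFS, then over the HDDS/Ozone trees.
mvn -B checkstyle:checkstyle -pl hadoop-common-project/hadoop-common,hadoop-hdfs-project/hadoop-hdfs
(cd hadoop-hdds && mvn -B checkstyle:checkstyle)
(cd hadoop-ozone && mvn -B checkstyle:checkstyle)

# Same comparison with findbugs (needs compiled classes first).
mvn -B compile findbugs:findbugs -pl hadoop-common-project/hadoop-common,hadoop-hdfs-project/hadoop-hdfs
(cd hadoop-hdds && mvn -B compile findbugs:findbugs)
(cd hadoop-ozone && mvn -B compile findbugs:findbugs)
{code}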

 

Please do not make assertions without knowing what you are talking about.

 

Before we argue your *opinion* vs. my *opinion*, let us measure. I really dare 
you to follow up on this; if you cannot, please hold your peace forever and stop 
trolling us.

 

 

 

 


was (Author: anu):
{quote}Look for the section, Testing your patch
{quote}
It tells how to run Hadoop on my Mac. My question was:
{quote}Can you point me to an instruction – or documentation that explain how 
to even setup Yetus on my Mac
{quote}
Completely different question.

 
{quote}Checkstyle, findbugs, rat plugins, shellcheck, Dockerfile checks are all 
using default or relaxed rules sets
{quote}
You are making assertions without data to back you up. I will personally assert 
what you saying is completely wrong. To the point that I think you have no idea 
what you are talking about.
 # Let us do some tests – Take checkstyle – run it over Hadoop HDFS and Hadoop 
Common code base and post the result here.
 # Now run the same over Ozone and HDDS directory; post the result here.

I am sure you will not; since It will just expose your false claim.

 

Now let us go to findbugs, Do the same – run the findbugs – Then be a good 
citizen in the Hadoop world and fix the issues. I will throw my name in the 
gauntlet, if you can fix 10% of find bugs issues in HDFS/Hadoop Common code 
base, I will personally fix all find bugs issues in Ozone / HDDS code base.

 

Please do not make assertions without knowing what you are talking about.

 

Before we argue your *opinion* vs my *opinion*; let us measure, I really dare 
you to follow up on this; if you cannot, please hold you peace forever and stop 
trolling us.

 

 

 

 

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper scripts to 
> execute different types of testing (findbugs, rat, unit, build).
> They simply define how the tests should be executed, with the following contract:
>  * The problems should be printed out to the console
>  * In case of a test failure, a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo, where all the shell scripts are executed in 
> parallel), but we need some updates:
>  1. Most important: the unit tests and integration tests should be separated. 
> Integration tests are more flaky, and it's better to have a way to run only 
> the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "am pl hadoop-ozone-dist" trick.
>  3. To make it possible to run the blockade tests in containers, we should use the 
> -T flag with docker-compose.
>  4. Checkstyle violations should be printed out to the console.
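
As an illustration of the contract described above, a check script could look 
roughly like the following (a minimal sketch only; the mvn flags, the 
pom.ozone.xml usage and the log grepping are assumptions, not the actual content 
of the checks directory):
{code:bash}
#!/usr/bin/env bash
# Sketch of the contract: print the problems to the console and
# return a non-zero exit code when the check fails.
set -uo pipefail

# Run only the fast unit tests; integration tests would live in a
# separate script so they can be executed (or skipped) independently.
mvn -B -fn test -f pom.ozone.xml -DskipShade | tee unit-test.log

# Surface the failures on the console and signal them via the exit code.
if grep -q 'FAIL' unit-test.log; then
  grep 'FAIL' unit-test.log
  exit 1
fi
exit 0
{code}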



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14620) RBF: Fix 'not a super user' error when disabling a namespace in kerberos with superuser principal

2019-07-01 Thread luhuachao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876683#comment-16876683
 ] 

luhuachao commented on HDFS-14620:
--

[~elgoiri] [~hexiaoqiao] Thanks for the review; all nits are fixed in patch 04.

> RBF: Fix 'not a super user' error when disabling a namespace in kerberos with 
> superuser principal
> -
>
> Key: HDFS-14620
> URL: https://issues.apache.org/jira/browse/HDFS-14620
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.30
>Reporter: luhuachao
>Assignee: luhuachao
>Priority: Major
> Attachments: HDFS-14620-HDFS-13891-01.patch, 
> HDFS-14620-HDFS-13891-02.patch, HDFS-14620-HDFS-13891-03.patch, 
> HDFS-14620-HDFS-13891-04.patch
>
>
> Using the superuser hdfs's principal hdfs-test@EXAMPLE, we cannot disable a 
> namespace; it fails with the error info below, as the code judges that the 
> principal is not equal to hdfs, and hdfs also does not belong to the supergroup.
> {code:java}
> [hdfs@host1 ~]$ hdfs dfsrouteradmin -nameservice disable ns2 nameservice: 
> hdfs-test@EXAMPLE is not a super user at 
> org.apache.hadoop.hdfs.server.federation.router.RouterPermissionChecker.checkSuperuserPrivilege(RouterPermissionChecker.java:136)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14609) RBF: Security should use common AuthenticationFilter

2019-07-01 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876682#comment-16876682
 ] 

Takanobu Asanuma commented on HDFS-14609:
-

Thanks for the discussion, [~crh] and [~eyang].

Although I'm not familiar with the security implementations and it may take 
time, I will investigate it.

BTW, it seems that the tests have been failing ever since they were first created.
{noformat}
$ git clone -b HDFS-13891 --single-branch https://github.com/apache/hadoop.git 
&& cd hadoop
$ git checkout 506d0734825f01daa7bc4ef93664d450b03f0890 # HDFS-13972. RBF: 
Support for Delegation Token (WebHDFS)
$ mvn clean install -DskipTests -DskipShade

$ mvn test -Dtest=TestRouterWithSecureStartup
...
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running 
org.apache.hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup
[ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 25.671 
s <<< FAILURE! - in 
org.apache.hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup
...
[ERROR] Failures: 
[ERROR]   TestRouterWithSecureStartup.testStartupWithoutSpnegoPrincipal 
Expected test to throw (an instance of java.io.IOException and exception with 
message a string containing "Unable to initialize WebAppContext")
[ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0

$ mvn test -Dtest=TestRouterHttpDelegationToken
...
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running 
org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken
[ERROR] Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 3.787 s 
<<< FAILURE! - in 
org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken
...
[ERROR] Errors: 
[ERROR]   TestRouterHttpDelegationToken.testCancelDelegationToken:148 » IO 
Security enab...
[ERROR]   TestRouterHttpDelegationToken.setup:99 » ServiceState 
org.apache.hadoop.securi...
[ERROR]   TestRouterHttpDelegationToken.testRenewDelegationToken:137 » IO 
Security enabl...
[ERROR] Tests run: 3, Failures: 0, Errors: 3, Skipped: 0
{noformat}

> RBF: Security should use common AuthenticationFilter
> 
>
> Key: HDFS-14609
> URL: https://issues.apache.org/jira/browse/HDFS-14609
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
>
> We worked on router based federation security as part of HDFS-13532. We kept 
> it compatible with the way namenode works. However with HADOOP-16314 and 
> HDFS-16354 in trunk, auth filters seems to have been changed causing tests to 
> fail.
> Changes are needed appropriately in RBF, mainly fixing broken tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-07-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876671#comment-16876671
 ] 

Anu Engineer edited comment on HDDS-1735 at 7/2/19 5:30 AM:


{quote}Look for the section, Testing your patch
{quote}
It tells how to run Hadoop on my Mac. My question was:
{quote}Can you point me to an instruction – or documentation that explain how 
to even setup Yetus on my Mac
{quote}
Completely different question.

 
{quote}Checkstyle, findbugs, rat plugins, shellcheck, Dockerfile checks are all 
using default or relaxed rules sets
{quote}
You are making assertions without data to back you up. I will personally assert 
that what you are saying is completely wrong – to the point that I think you have 
no idea what you are talking about.
 # Let us do some tests – take checkstyle – run it over the Hadoop HDFS and Hadoop 
Common code base and post the result here.
 # Now run the same over the Ozone and HDDS directories; post the result here.

I am sure you will not, since it will just expose your false claim.

 

Now let us go to findbugs. Do the same – run findbugs – then be a good 
citizen in the Hadoop world and fix the issues. I will throw down the 
gauntlet: if you can fix 10% of the findbugs issues in the HDFS/Hadoop Common code 
base, I will personally fix all the findbugs issues in the Ozone/HDDS code base.

 

Please do not make assertions without knowing what you are talking about.

 

Before we argue your *opinion* vs. my *opinion*, let us measure. I really dare 
you to follow up on this; if you cannot, please hold your peace forever and stop 
trolling us.

 

 

 

 


was (Author: anu):
{quote}Look for the section, Testing your patch
{quote}
It tells how to run Hadoop on my Mac. My question was:
{quote}Can you point me to an instruction – or documentation that explain how 
to even setup Yetus on my Mac
{quote}
Completely different question.

 
{quote}Checkstyle, findbugs, rat plugins, shellcheck, Dockerfile checks are all 
using default or relaxed rules sets
{quote}
You are making assertions without data to back you up. I will personally assert 
what you saying is completely wrong. To the point that I think you have no idea 
what you are talking about.
 # Let us do some tests – Take checkstyle – run it over Hadoop HDFS and Hadoop 
Common code base and post the result here.
 # Now run the same over Ozone and HDDS directory; post the result here.

I am sure you will not; since It will just expose your false claim.

 

Now let us go to findbugs, Do the same – run the findbugs – Then be a good 
citizen in the Hadoop world and fix the issues. I will throw my name in the 
gauntlet, if you can fix 10% of find bugs issues in HDFS/Hadoop Common code 
base. I will personally fix all find bugs issues in Ozone / HDDS code base.

 

Please do not make assertions without knowing what you are talking about.

 

Before we argue your *opinion* vs my *opinion*; let us measure, I really dare 
you to follow up on this; if you cannot please hold you peace forever and stop 
trolling us.

 

 

 

 

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper scripts to 
> execute different types of testing (findbugs, rat, unit, build).
> They simply define how the tests should be executed, with the following contract:
>  * The problems should be printed out to the console
>  * In case of a test failure, a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo, where all the shell scripts are executed in 
> parallel), but we need some updates:
>  1. Most important: the unit tests and integration tests should be separated. 
> Integration tests are more flaky, and it's better to have a way to run only 
> the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "am pl hadoop-ozone-dist" trick.
>  3. To make it possible to run the blockade tests in containers, we should use the 
> -T flag with docker-compose.
>  4. Checkstyle violations should be printed out to the console.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-07-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876671#comment-16876671
 ] 

Anu Engineer edited comment on HDDS-1735 at 7/2/19 5:30 AM:


{quote}Look for the section, Testing your patch
{quote}
It tells how to run Hadoop on my Mac. My question was:
{quote}Can you point me to an instruction – or documentation that explain how 
to even setup Yetus on my Mac
{quote}
Completely different question.

 
{quote}Checkstyle, findbugs, rat plugins, shellcheck, Dockerfile checks are all 
using default or relaxed rules sets
{quote}
You are making assertions without data to back you up. I will personally assert 
that what you are saying is completely wrong – to the point that I think you have 
no idea what you are talking about.
 # Let us do some tests – take checkstyle – run it over the Hadoop HDFS and Hadoop 
Common code base and post the result here.
 # Now run the same over the Ozone and HDDS directories; post the result here.

I am sure you will not, since it will just expose your false claim.

 

Now let us go to findbugs. Do the same – run findbugs – then be a good 
citizen in the Hadoop world and fix the issues. I will throw down the 
gauntlet: if you can fix 10% of the findbugs issues in the HDFS/Hadoop Common code 
base, I will personally fix all the findbugs issues in the Ozone/HDDS code base.

 

Please do not make assertions without knowing what you are talking about.

 

Before we argue your *opinion* vs. my *opinion*, let us measure. I really dare 
you to follow up on this; if you cannot, please hold your peace forever and stop 
trolling us.

 

 

 

 


was (Author: anu):
{quote}Look for the section, Testing your patch
{quote}
It tells how to run Hadoop on my Mac. My question was:
{quote}Can you point me to an instruction – or documentation that explain how 
to even setup Yetus on my Mac
{quote}
Completely different question.

 
{quote}Checkstyle, findbugs, rat plugins, shellcheck, Dockerfile checks are all 
using default or relaxed rules sets
{quote}
You are making assertions without data to back you up. I will personally assert 
what you saying is completely wrong. To the point that I think you have no idea 
what you are talking about.
 # Let us do some tests – Take checkstyle – run it over Hadoop HDFS and Hadoop 
Common code base and post the result here.
 # Now run the same over Ozone and HDDS directory; post the result here.

I am sure you will not; since It will just expose your false claim.

 

Now let us go to findbugs, Do the same – run the findbugs – Then be a good 
citizen in the Hadoop world and fix the issues. I will throw my name in the 
gauntlet, if you can fix 10% of find bugs issues in HDFS/Hadoop Common code 
base, I will personally fix all find bugs issues in Ozone / HDDS code base.

 

Please do not make assertions without knowing what you are talking about.

 

Before we argue your *opinion* vs my *opinion*; let us measure, I really dare 
you to follow up on this; if you cannot please hold you peace forever and stop 
trolling us.

 

 

 

 

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper scripts to 
> execute different types of testing (findbugs, rat, unit, build).
> They simply define how the tests should be executed, with the following contract:
>  * The problems should be printed out to the console
>  * In case of a test failure, a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo, where all the shell scripts are executed in 
> parallel), but we need some updates:
>  1. Most important: the unit tests and integration tests should be separated. 
> Integration tests are more flaky, and it's better to have a way to run only 
> the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "am pl hadoop-ozone-dist" trick.
>  3. To make it possible to run the blockade tests in containers, we should use the 
> -T flag with docker-compose.
>  4. Checkstyle violations should be printed out to the console.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-07-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876671#comment-16876671
 ] 

Anu Engineer edited comment on HDDS-1735 at 7/2/19 5:30 AM:


{quote}Look for the section, Testing your patch
{quote}
It tells how to run Hadoop on my Mac. My question was:
{quote}Can you point me to an instruction – or documentation that explain how 
to even setup Yetus on my Mac
{quote}
Completely different question.

 
{quote}Checkstyle, findbugs, rat plugins, shellcheck, Dockerfile checks are all 
using default or relaxed rules sets
{quote}
You are making assertions without data to back you up. I will personally assert 
that what you are saying is completely wrong – to the point that I think you have 
no idea what you are talking about.
 # Let us do some tests – take checkstyle – run it over the Hadoop HDFS and Hadoop 
Common code base and post the result here.
 # Now run the same over the Ozone and HDDS directories; post the result here.

I am sure you will not, since it will just expose your false claim.

 

Now let us go to findbugs. Do the same – run findbugs – then be a good 
citizen in the Hadoop world and fix the issues. I will throw down the 
gauntlet: if you can fix 10% of the findbugs issues in the HDFS/Hadoop Common code 
base, I will personally fix all the findbugs issues in the Ozone/HDDS code base.

 

Please do not make assertions without knowing what you are talking about.

 

Before we argue your *opinion* vs. my *opinion*, let us measure. I really dare 
you to follow up on this; if you cannot, please hold your peace forever and stop 
trolling us.

 

 

 

 


was (Author: anu):
{quote}Look for the section, Testing your patch
{quote}
It tells how to run Hadoop on my Mac. My question was:
{quote}Can you point me to an instruction – or documentation that explain how 
to even setup Yetus on my Mac
{quote}
Completely different question.

 
{quote}Checkstyle, findbugs, rat plugins, shellcheck, Dockerfile checks are all 
using default or relaxed rules sets
{quote}
You are making assertions without data to back you up. I will personally assert 
what you saying is completely wrong. To the point that I think you have no idea 
what you are talking about.
 # Let us do some tests – Take checkstyle – run it over Hadoop HDFS and Hadoop 
Common code base and post the result here.
 # Now run the same over Ozone and HDDS directory; post the result here.

I am sure you will not; since It will just expose your false claim.

 

Now let us go to findbugs, Do the same – run the findbugs – Then be a good 
citizen in the Hadoop world and fix the issues. I will throw my name in the 
gaunlet, if you can fix 10% of find bugs issues in HDFS/Hadoop Common code 
base. I will personally fix all find bugs issues in Ozone / HDDS code base.

 

Please do not make assertions without knowing what you are talking about.

 

Before we argue your *opinion* vs my *opinion*; let us measure, I really dare 
you to follow up on this; if you cannot please hold you peace forever and stop 
trolling us.

 

 

 

 

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper scripts to 
> execute different types of testing (findbugs, rat, unit, build).
> They simply define how the tests should be executed, with the following contract:
>  * The problems should be printed out to the console
>  * In case of a test failure, a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo, where all the shell scripts are executed in 
> parallel), but we need some updates:
>  1. Most important: the unit tests and integration tests should be separated. 
> Integration tests are more flaky, and it's better to have a way to run only 
> the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "am pl hadoop-ozone-dist" trick.
>  3. To make it possible to run the blockade tests in containers, we should use the 
> -T flag with docker-compose.
>  4. Checkstyle violations should be printed out to the console.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14483) Backport HDFS-14111,HDFS-3246 ByteBuffer pread interface to branch-2.9

2019-07-01 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876673#comment-16876673
 ] 

stack commented on HDFS-14483:
--

Retry. All but one of the failures look like they could be related. Let's see.

> Backport HDFS-14111,HDFS-3246 ByteBuffer pread interface to branch-2.9
> --
>
> Key: HDFS-14483
> URL: https://issues.apache.org/jira/browse/HDFS-14483
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Zheng Hu
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14483.branch-2.8.v1.patch, 
> HDFS-14483.branch-2.9.v1.patch, HDFS-14483.branch-2.9.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14483) Backport HDFS-14111,HDFS-3246 ByteBuffer pread interface to branch-2.9

2019-07-01 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HDFS-14483:
-
Attachment: HDFS-14483.branch-2.9.v1.patch

> Backport HDFS-14111,HDFS-3246 ByteBuffer pread interface to branch-2.9
> --
>
> Key: HDFS-14483
> URL: https://issues.apache.org/jira/browse/HDFS-14483
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Zheng Hu
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14483.branch-2.8.v1.patch, 
> HDFS-14483.branch-2.9.v1.patch, HDFS-14483.branch-2.9.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-07-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876671#comment-16876671
 ] 

Anu Engineer edited comment on HDDS-1735 at 7/2/19 5:23 AM:


{quote}Look for the section, Testing your patch
{quote}
It tells how to run Hadoop on my Mac. My question was:
{quote}Can you point me to an instruction – or documentation that explain how 
to even setup Yetus on my Mac
{quote}
Completely different question.

 
{quote}Checkstyle, findbugs, rat plugins, shellcheck, Dockerfile checks are all 
using default or relaxed rules sets
{quote}
You are making assertions without data to back you up. I will personally assert 
that what you are saying is completely wrong – to the point that I think you have 
no idea what you are talking about.
 # Let us do some tests – take checkstyle – run it over the Hadoop HDFS and Hadoop 
Common code base and post the result here.
 # Now run the same over the Ozone and HDDS directories; post the result here.

I am sure you will not, since it will just expose your false claim.

 

Now let us go to findbugs. Do the same – run findbugs – then be a good 
citizen in the Hadoop world and fix the issues. I will throw down the 
gauntlet: if you can fix 10% of the findbugs issues in the HDFS/Hadoop Common code 
base, I will personally fix all the findbugs issues in the Ozone/HDDS code base.

 

Please do not make assertions without knowing what you are talking about.

 

Before we argue your *opinion* vs. my *opinion*, let us measure. I really dare 
you to follow up on this; if you cannot, please hold your peace forever and stop 
trolling us.

 

 

 

 


was (Author: anu):
{quote}Look for the section, Testing your patch
{quote}
It tells how to run Hadoop on my Mac. My question was:
{quote}Can you point me to an instruction – or documentation that explain how 
to even setup Yetus on my Mac
{quote}
Completely different question.

 
{quote}Checkstyle, findbugs, rat plugins, shellcheck, Dockerfile checks are all 
using default or relaxed rules sets
{quote}
You are making assertions without data to back you up. I will personally assert 
what you saying is completely wrong. To the point that I think you have no idea 
what you are talking about.
 # Let us do some tests – Take checkstyle – run it our Hadoop HDFS and Hadoop 
Common code base and post the result here.
 # Now run the same over Ozone and HDDS directory; post the result here.

I am sure you will not; since It will just expose your false claim.

 

Now let us go to findbugs, Do the same – run the findbugs – Then be a good 
citizen in the Hadoop world and fix the issues. I will throw my name in the 
gaunlet, if you can fix 10% of find bugs issues in HDFS/Hadoop Common code 
base. I will personally fix all find bugs issues in Ozone / HDDS code base.

 

Please do not make assertions without knowing what you are talking about.

 

Before we argue your *opinion* vs my *opinion*; let us measure, I really dare 
you to follow up on this; if you cannot please hold you peace forever and stop 
trolling us.

 

 

 

 

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper scripts to 
> execute different types of testing (findbugs, rat, unit, build).
> They simply define how the tests should be executed, with the following contract:
>  * The problems should be printed out to the console
>  * In case of a test failure, a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo, where all the shell scripts are executed in 
> parallel), but we need some updates:
>  1. Most important: the unit tests and integration tests should be separated. 
> Integration tests are more flaky, and it's better to have a way to run only 
> the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "am pl hadoop-ozone-dist" trick.
>  3. To make it possible to run the blockade tests in containers, we should use the 
> -T flag with docker-compose.
>  4. Checkstyle violations should be printed out to the console.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-07-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876671#comment-16876671
 ] 

Anu Engineer edited comment on HDDS-1735 at 7/2/19 5:21 AM:


{quote}Look for the section, Testing your patch
{quote}
It tells how to run Hadoop on my Mac. My question was:
{quote}Can you point me to an instruction – or documentation that explain how 
to even setup Yetus on my Mac
{quote}
Completely different question.

 
{quote}Checkstyle, findbugs, rat plugins, shellcheck, Dockerfile checks are all 
using default or relaxed rules sets
{quote}
You are making assertions without data to back you up. I will personally assert 
that what you are saying is completely wrong – to the point that I think you have 
no idea what you are talking about.
 # Let us do some tests – take checkstyle – run it over the Hadoop HDFS and Hadoop 
Common code base and post the result here.
 # Now run the same over the Ozone and HDDS directories; post the result here.

I am sure you will not, since it will just expose your false claim.

 

Now let us go to findbugs. Do the same – run findbugs – then be a good 
citizen in the Hadoop world and fix the issues. I will throw down the 
gauntlet: if you can fix 10% of the findbugs issues in the HDFS/Hadoop Common code 
base, I will personally fix all the findbugs issues in the Ozone/HDDS code base.

 

Please do not make assertions without knowing what you are talking about.

 

Before we argue your *opinion* vs. my *opinion*, let us measure. I really dare 
you to follow up on this; if you cannot, please hold your peace forever and stop 
trolling us.

 

 

 

 


was (Author: anu):
{quote}Look for the section, Testing your patch
{quote}
It tells how to run Hadoop on my Mac. My question was:
{quote}Can you point me to an instruction – or documentation that explain how 
to even setup Yetus on my Mac
{quote}
Completely different question.

 
{quote}Checkstyle, findbugs, rat plugins, shellcheck, Dockerfile checks are all 
using default or relaxed rules sets
{quote}
You are making assertions without data to back you up. I will personally assert 
what you saying is completely wrong. To the point that I think you have no idea 
what you are talking about.
 # Let us do some tests – Take checkstyle – run it our Hadoop HDFS and Hadoop 
Common code base and post the result here.
 # Now run the same over Ozone and HDDS directory; post the result here.

I am sure you will not; since It will just expose your false claim.

 

Now let us go to findbugs, Do the same – run the findbugs – Then be a good 
citizen in the Hadoop world and fix the issues. I will throw my name in the 
gaunlet, if you can fix 10% of find bugs issues in HDFS/Hadoop Common code 
base. I will personally fix all find bugs issues in Ozone / HDDS code base.

 

Please do not make assertions without knowing what you are talking about.

 

Before we argue your *opinion* vs my *opinion*; let us measure, I really dare 
you to follow up on this; if you cannot please hold you peace for ever and stop 
trolling us.

 

 

 

 

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper scripts to 
> execute different types of testing (findbugs, rat, unit, build).
> They simply define how the tests should be executed, with the following contract:
>  * The problems should be printed out to the console
>  * In case of a test failure, a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo, where all the shell scripts are executed in 
> parallel), but we need some updates:
>  1. Most important: the unit tests and integration tests should be separated. 
> Integration tests are more flaky, and it's better to have a way to run only 
> the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "am pl hadoop-ozone-dist" trick.
>  3. To make it possible to run the blockade tests in containers, we should use the 
> -T flag with docker-compose.
>  4. Checkstyle violations should be printed out to the console.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14621) Distcp can not preserve timestamp with -delete option

2019-07-01 Thread hemanthboyina (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina reassigned HDFS-14621:


Assignee: (was: hemanthboyina)

> Distcp can not preserve timestamp with -delete  option
> --
>
> Key: HDFS-14621
> URL: https://issues.apache.org/jira/browse/HDFS-14621
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.7.7, 3.1.2
>Reporter: ludun
>Priority: Major
>
> Use distcp with -prbugpcaxt and -delete to copy data between clusters.
> hadoop distcp -Dmapreduce.job.queuename="QueueA" -prbugpcaxt -update -delete  
> hdfs://sourcecluster/user/hive/warehouse/sum.db 
> hdfs://destcluster/user/hive/warehouse/sum.db
> After distcp, we found that the timestamps on the destination differ from the source, 
> and the timestamps of some directories were set to the time distcp was running.
> Checking the distcp code: in the committer it preserves the timestamps first and then 
> processes the -delete option, which changes the timestamps of the destination 
> directories. So we should process the -delete option first. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-07-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876671#comment-16876671
 ] 

Anu Engineer edited comment on HDDS-1735 at 7/2/19 5:16 AM:


{quote}Look for the section, Testing your patch
{quote}
It tells how to run Hadoop on my Mac. My question was:
{quote}Can you point me to an instruction – or documentation that explain how 
to even setup Yetus on my Mac
{quote}
Completely different question.

 
{quote}Checkstyle, findbugs, rat plugins, shellcheck, Dockerfile checks are all 
using default or relaxed rules sets
{quote}
You are making assertions without data to back you up. I will personally assert 
that what you are saying is completely wrong – to the point that I think you have 
no idea what you are talking about.
 # Let us do some tests – take checkstyle – run it over the Hadoop HDFS and Hadoop 
Common code base and post the result here.
 # Now run the same over the Ozone and HDDS directories; post the result here.

I am sure you will not, since it will just expose your false claim.

 

Now let us go to findbugs. Do the same – run findbugs – then be a good 
citizen in the Hadoop world and fix the issues. I will throw down the 
gauntlet: if you can fix 10% of the findbugs issues in the HDFS/Hadoop Common code 
base, I will personally fix all the findbugs issues in the Ozone/HDDS code base.

 

Please do not make assertions without knowing what you are talking about.

 

Before we argue your *opinion* vs. my *opinion*, let us measure. I really dare 
you to follow up on this; if you cannot, please hold your peace forever and stop 
trolling us.

 

 

 

 


was (Author: anu):
{quote}Look for the section, Testing your patch
{quote}
It tells how to run Hadoop on my Mac. My question was:
{quote}Can you point me to an instruction – or documentation that explain how 
to even setup Yetus on my Mac
{quote}
Completely different question.

 
{quote}Checkstyle, findbugs, rat plugins, shellcheck, Dockerfile checks are all 
using default or relaxed rules sets
{quote}
You are making assertions with data to back you up. I will personally assert 
what you saying is completely wrong. To the point that I think you have no idea 
what you are talking about.
 # Let us do some tests – Take checkstyle – run it our Hadoop HDFS and Hadoop 
Common code base and post the result here.
 # Now run the same over Ozone and HDDS directory; post the result here.

I am sure you will not; since It will just expose your false claim.

 

Now let us go to findbugs, Do the same – run the findbugs – Then be a good 
citizen in the Hadoop world and fix the issues. I will throw my name in the 
gaunlet, if you can fix 10% of find bugs issues in HDFS/Hadoop Common code 
base. I will personally fix all find bugs issues in Ozone / HDDS code base.

 

Please do not make assertions without knowing what you are talking about.

 

Before we argue your *opinion* vs my *opinion*; let us measure, I really dare 
you to follow up on this; if you cannot please hold you peace for ever and stop 
trolling us.

 

 

 

 

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper scripts to 
> execute different types of testing (findbugs, rat, unit, build).
> They simply define how the tests should be executed, with the following contract:
>  * The problems should be printed out to the console
>  * In case of a test failure, a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo, where all the shell scripts are executed in 
> parallel), but we need some updates:
>  1. Most important: the unit tests and integration tests should be separated. 
> Integration tests are more flaky, and it's better to have a way to run only 
> the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "am pl hadoop-ozone-dist" trick.
>  3. To make it possible to run the blockade tests in containers, we should use the 
> -T flag with docker-compose.
>  4. Checkstyle violations should be printed out to the console.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-07-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876671#comment-16876671
 ] 

Anu Engineer commented on HDDS-1735:


{quote}Look for the section, Testing your patch
{quote}
It tells how to run Hadoop on my Mac. My question was:
{quote}Can you point me to an instruction – or documentation that explain how 
to even setup Yetus on my Mac
{quote}
Completely different question.

 
{quote}Checkstyle, findbugs, rat plugins, shellcheck, Dockerfile checks are all 
using default or relaxed rules sets
{quote}
You are making assertions without data to back you up. I will personally assert 
that what you are saying is completely wrong – to the point that I think you have 
no idea what you are talking about.
 # Let us do some tests – take checkstyle – run it over the Hadoop HDFS and Hadoop 
Common code base and post the result here.
 # Now run the same over the Ozone and HDDS directories; post the result here.

I am sure you will not, since it will just expose your false claim.

 

Now let us go to findbugs. Do the same – run findbugs – then be a good 
citizen in the Hadoop world and fix the issues. I will throw down the 
gauntlet: if you can fix 10% of the findbugs issues in the HDFS/Hadoop Common code 
base, I will personally fix all the findbugs issues in the Ozone/HDDS code base.

 

Please do not make assertions without knowing what you are talking about.

 

Before we argue your *opinion* vs. my *opinion*, let us measure. I really dare 
you to follow up on this; if you cannot, please hold your peace forever and stop 
trolling us.

 

 

 

 

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper scripts to 
> execute different types of testing (findbugs, rat, unit, build).
> They simply define how the tests should be executed, with the following contract:
>  * The problems should be printed out to the console
>  * In case of a test failure, a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo, where all the shell scripts are executed in 
> parallel), but we need some updates:
>  1. Most important: the unit tests and integration tests should be separated. 
> Integration tests are more flaky, and it's better to have a way to run only 
> the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "am pl hadoop-ozone-dist" trick.
>  3. To make it possible to run the blockade tests in containers, we should use the 
> -T flag with docker-compose.
>  4. Checkstyle violations should be printed out to the console.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14483) Backport HDFS-14111,HDFS-3246 ByteBuffer pread interface to branch-2.9

2019-07-01 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14483:
---
Summary: Backport HDFS-14111,HDFS-3246 ByteBuffer pread interface to 
branch-2.9  (was: Backport HDFS-3246,HDFS-14111 ByteBuffer pread interface to 
branch-2.9)

> Backport HDFS-14111,HDFS-3246 ByteBuffer pread interface to branch-2.9
> --
>
> Key: HDFS-14483
> URL: https://issues.apache.org/jira/browse/HDFS-14483
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Zheng Hu
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14483.branch-2.8.v1.patch, 
> HDFS-14483.branch-2.9.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1661) Consolidate hadoop-hdds and hadoop-ozone into hadoop-ozone-project

2019-07-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876667#comment-16876667
 ] 

Anu Engineer commented on HDDS-1661:


I think that would be too early now. Either Ozone should be as good as HDFS, or 
HDFS should start using HDDS. These are the two possible outcomes we are 
looking for.

HDFS cannot start using HDDS, since HDFS did not want to take a dependency on 
something that is not proven in the field.

IMHO, the best time to have a discussion about whether we should move HDFS onto 
HDDS, or whether Ozone is good enough, is after we have proven that HDDS is stable 
and performant. So perhaps after the Ozone GA?

> Consolidate hadoop-hdds and hadoop-ozone into hadoop-ozone-project
> --
>
> Key: HDDS-1661
> URL: https://issues.apache.org/jira/browse/HDDS-1661
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Priority: Major
>
> The Ozone source code is somewhat fragmented within the Hadoop source tree.  The 
> current code layout looks like:
> {code}
> hadoop/pom.ozone.xml
> ├── hadoop-hdds
> └── hadoop-ozone
> {code}
> It would be helpful to consolidate the project into a high-level grouping such as:
> {code}
> hadoop
> └── hadoop-ozone-project/pom.xml
> └── hadoop-ozone-project/hadoop-hdds
> └── hadoop-ozone-project/hadoop-ozone
> {code}
> This allows users to build Ozone from the hadoop-ozone-project directory.
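
As an illustration of the proposed layout (hypothetical, since the consolidation 
has not happened yet), building Ozone would then simply be:
{code:bash}
cd hadoop-ozone-project
mvn clean install -DskipTests -DskipShade
{code}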



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14483) Backport HDFS-3246,HDFS-14111 ByteBuffer pread interface to branch-2.9

2019-07-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876661#comment-16876661
 ] 

Hadoop QA commented on HDFS-14483:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.9 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
35s{color} | {color:green} branch-2.9 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
11s{color} | {color:green} branch-2.9 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
17s{color} | {color:green} branch-2.9 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
37s{color} | {color:green} branch-2.9 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
24s{color} | {color:green} branch-2.9 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
31s{color} | {color:red} hadoop-common-project/hadoop-common in branch-2.9 has 
1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
49s{color} | {color:green} branch-2.9 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
20s{color} | {color:green} branch-2.9 passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
39s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 11m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
43s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
4s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit 

[jira] [Commented] (HDFS-14620) RBF: Fix 'not a super user' error when disabling a namespace in kerberos with superuser principal

2019-07-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876659#comment-16876659
 ] 

Hadoop QA commented on HDFS-14620:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 12s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 30s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 21s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 35s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m  3s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 52s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 31s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 42s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m  5s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 31s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
|   | hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14620 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12973365/HDFS-14620-HDFS-13891-04.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b967febab52a 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 02597b6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/27124/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt |
|  Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/27124/testReport/ |
| Max. process+thread count | 1522 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 

[jira] [Commented] (HDDS-1532) Ozone: Freon: Improve the concurrent testing framework.

2019-07-01 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876651#comment-16876651
 ] 

Jitendra Nath Pandey commented on HDDS-1532:


Awesome! Great work [~xudongcao].

> Ozone: Freon: Improve the concurrent testing framework.
> ---
>
> Key: HDDS-1532
> URL: https://issues.apache.org/jira/browse/HDDS-1532
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently, Freon's concurrency framework works only at the volume level, but in 
> actual testing, users are likely to provide a small volume number (typically 
> 1) and larger bucket and key numbers, in which case the existing concurrency 
> framework cannot make good use of the thread pool.
> We need to improve the concurrency policy so that the volume creation task, 
> bucket creation task, and key creation task can all be submitted equally to 
> the thread pool as general tasks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1661) Consolidate hadoop-hdds and hadoop-ozone into hadoop-ozone-project

2019-07-01 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876649#comment-16876649
 ] 

Eric Yang commented on HDDS-1661:
-

[~elek] [~anu] how about we start another discussion thread to see if HDFS still 
wants HDDS integration?  At this point, I don't see any HDFS developer working on 
the HDFS-HDDS integration.  Maybe the argument for this arrangement has faded away?

> Consolidate hadoop-hdds and hadoop-ozone into hadoop-ozone-project
> --
>
> Key: HDDS-1661
> URL: https://issues.apache.org/jira/browse/HDDS-1661
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Priority: Major
>
> Ozone source code is somewhat fragmented in Hadoop source code.  The current 
> code looks like:
> {code}
> hadoop/pom.ozone.xml
> ├── hadoop-hdds
> └── hadoop-ozone
> {code}
> It is helpful to consolidate the project into high level grouping such as:
> {code}
> hadoop
> └── hadoop-ozone-project/pom.xml
> └── hadoop-ozone-project/hadoop-hdds
> └── hadoop-ozone-project/hadoop-ozone
> {code}
> This allows users to build Ozone from the hadoop-ozone-project directory.
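
A sketch of the workflow this proposal would enable (the hadoop-ozone-project 
directory is the proposed layout, not an existing path):
{code}
# Hypothetical: only works once the consolidation described above is done.
cd hadoop-ozone-project
mvn clean install -DskipTests
{code}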



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-07-01 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876644#comment-16876644
 ] 

Eric Yang commented on HDDS-1735:
-

[~anu] {quote}Can you point me to an instruction – or documentation that 
explain how to even setup Yetus on my Mac ? Ozone avoids all these costly 
mistakes that make Hadoop very hard to use. So please don't hamper our efforts 
by insisting that we need to go back to Hadoop tool chain if we have a better 
experience in place.{quote}

This is documented in 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute

Look for the section, Testing your patch.  This provides the same testing 
procedure as running patch testing with Yetus in Jenkins.

[~elek] {quote}If you see any risk in making safer the existing scripts, please 
let me know (with the the definition how can I reproduce the problems, 
please).{quote}

Ozone maven javadoc has been failing since March.  The acceptance test triggered 
by the precommit build does not use the same javadoc rules as Hadoop.  More 
javadoc errors are being generated without any incentive to clean them up.  
The checkstyle, findbugs, rat, shellcheck, and Dockerfile checks are all using 
default or relaxed rule sets.  This creates more hidden problems in the code 
base.  Technical debt is building up.  This is my reasoning for advising against 
using one-off shell scripts to run maven plugin goals without configuring the 
plugins.
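
As a hedged illustration of that concern (the commands below are examples, not 
the project's actual check scripts): invoking a plugin goal directly can fall 
back to the plugin's built-in defaults when the executed pom carries no 
configuration for it, while running the same goal through a pom that configures 
the plugin enforces the agreed rule set.
{code}
# Ad-hoc invocation: if the pom in use does not configure
# maven-checkstyle-plugin, the plugin's built-in default rule set applies
# instead of the project's agreed rules.
mvn checkstyle:check

# Invocation through a pom that carries the checkstyle configuration
# (rule set, severity, excludes) enforces the project's own rules.
mvn -f pom.ozone.xml checkstyle:check
{code}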

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper scripts to 
> execute different types of testing (findbugs, rat, unit, build).
> They easily define how tests should be executed, with the following contract:
>  * The problems should be printed out to the console
>  * in case of test failure a non-zero exit code should be used (a minimal 
> sketch of such a script is shown after this description)
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo where all the shell scripts are executed in 
> parallel) but we need some updates:
>  1. Most important: the unit tests and integration tests can be separated. 
> Integration tests are more flaky and it's better to have a way to run only 
> the normal unit tests
>  2. As HDDS-1115 introduced a pom.ozone.xml it's better to use it instead 
> of the magical "-am -pl hadoop-ozone-dist" trick
>  3. To make it possible to run blockade tests in containers we should use the -T 
> flag with docker-compose
>  4. checkstyle violations should be printed out to the console
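
A minimal sketch of a check script honouring this contract (illustrative only; 
the module selection and log location are assumptions, not the actual 
dev-support script):
{code}
#!/usr/bin/env bash
# Hedged sketch of a dev-support/checks style helper.
set -uo pipefail
mkdir -p target
# Print every problem to the console (and keep a copy for later inspection)...
mvn -B -fae -f pom.ozone.xml test -pl :hadoop-ozone-common -am | tee target/unit-check.log
rc=$?
# ...and signal failure with a non-zero exit code so the CI caller can react.
exit $rc
{code}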



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1115) Provide ozone specific top-level pom.xml

2019-07-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876613#comment-16876613
 ] 

Anu Engineer edited comment on HDDS-1115 at 7/2/19 4:21 AM:


{quote} I think pom.ozone.xml is a mistake
{quote}
It is nice to hear your *opinion*, and I respect and value you sharing your 
point of view.
{quote}Hadoop submarine project has done this more elegantly,
{quote}
*Elegance* is also in the eye of the beholder.

 

For me personally, this is very useful. These days I work exclusively against 
this pom.ozone.xml. When I import projects into my editor like IntelliJ, I use 
this. My scripts and some of the internal scripts of Ozone have started using it.

So while I understand that Ozone is not measuring up to your high sense of 
Elegance, there is nothing really we can do about it. We have released this, 
trained our users and developers on this, and a change now will cause 
disruption to many people. Elegance is something that we should certainly strive 
for; it is an ambitious and laudable goal; but we have to consider the pain for 
plebeian folks like me.


was (Author: anu):
{quote} I think pom.ozone.xml is a mistake
{quote}
It is nice to hear your *opinion*, and I respect and value you sharing your 
point of view.
{quote}Hadoop submarine project has done this more elegantly,
{quote}
*Elegance* is also in the eye of the beholder.

 

For me personally, this is very useful. These days I work exclusively against 
this pom.ozone.xml. When I import projects into my editor like IntelliJ, I use 
this. My scripts and some of the internal scripts of Ozone has started using it.

So while I understand that Ozone is not measuring up to your high sense of 
Elegance, there is nothing really we can do about it. We have released this, 
trained our users and developers on this, and this change will cause disruption 
to many people. Elegance something that we should certainly strive for; it is 
an ambitions and laudable goal; but we have to consider the pain for plebeian 
folks like me.

> Provide ozone specific top-level pom.xml
> 
>
> Key: HDDS-1115
> URL: https://issues.apache.org/jira/browse/HDDS-1115
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Ozone build process doesn't require the pom.xml in the top level hadoop 
> directory as we use hadoop 3.2 artifacts as parents of hadoop-ozone and 
> hadoop-hdds. The ./pom.xml is used only to include the 
> hadoop-ozone/hadoop-hdds projects in the maven reactor.
> From command line, it's easy to build only the ozone artifacts:
> {code}
> mvn clean install -Phdds  -am -pl :hadoop-ozone-dist  
> -Danimal.sniffer.skip=true  -Denforcer.skip=true
> {code}
> Where: '-pl' defines the build of the hadoop-ozone-dist project
> and '-am' defines to build all of the dependencies from the source tree 
> (hadoop-ozone-common, hadoop-hdds-common, etc.)
> But this filtering is available only from the command line.
> With providing a lightweight pom.ozone.xml we can achieve the same:
>  * We can open only hdds/ozone projects in the IDE/intellij. It makes the 
> development faster as IDE doesn't need to reindex all the sources all the 
> time + it's easy to execute checkstyle/findbugs plugins of the intellij to 
> the whole project.
>  * Longer term we should create an ozone specific source artifact (currently 
> the source artifact for hadoop and ozone releases are the same) which also 
> requires a simplified pom.
> In this patch I also added the .mvn directory to the .gitignore file.
> With 
> {code}
> mkdir -p .mvn && echo "-f pom.ozone.xml" > .mvn/maven.config
> {code}
> you can persist the usage of the pom.ozone.xml for all the subsequent builds 
> (in the same directory).
> How to test?
> Just do a 'mvn -f pom.ozone.xml clean install -DskipTests'



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1661) Consolidate hadoop-hdds and hadoop-ozone into hadoop-ozone-project

2019-07-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876643#comment-16876643
 ] 

Anu Engineer commented on HDDS-1661:


Yes, when the Ozone merge vote happened, the community wanted a clear path on how 
HDFS would be able to use HDDS. That is the primary reason that HDDS is not 
part of Ozone. So we have to keep that long-term goal in mind.

So Ozone depends on HDDS and we need to build both projects, but moving HDDS 
under Ozone would imply HDDS is part of Ozone, instead of a dependency of 
Ozone. Just sharing the history. I vote to keep it as is, since HDDS can be 
consumed by other storage layers.

 

> Consolidate hadoop-hdds and hadoop-ozone into hadoop-ozone-project
> --
>
> Key: HDDS-1661
> URL: https://issues.apache.org/jira/browse/HDDS-1661
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Priority: Major
>
> Ozone source code is somewhat fragmented in Hadoop source code.  The current 
> code looks like:
> {code}
> hadoop/pom.ozone.xml
> ├── hadoop-hdds
> └── hadoop-ozone
> {code}
> It is helpful to consolidate the project into high level grouping such as:
> {code}
> hadoop
> └── hadoop-ozone-project/pom.xml
> └── hadoop-ozone-project/hadoop-hdds
> └── hadoop-ozone-project/hadoop-ozone
> {code}
> This allows users to build Ozone from the hadoop-ozone-project directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1115) Provide ozone specific top-level pom.xml

2019-07-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876613#comment-16876613
 ] 

Anu Engineer edited comment on HDDS-1115 at 7/2/19 4:16 AM:


{quote} I think pom.ozone.xml is a mistake
{quote}
It is nice to hear your *opinion*, and I respect and value you sharing your 
point of view.
{quote}Hadoop submarine project has done this more elegantly,
{quote}
*Elegance* is also in the eye of the beholder.

 

For me personally, this is very useful. These days I work exclusively against 
this pom.ozone.xml. When I import projects into my editor like IntelliJ, I use 
this. My scripts and some of the internal scripts of Ozone have started using it.

So while I understand that Ozone is not measuring up to your high sense of 
Elegance, there is nothing really we can do about it. We have released this, 
trained our users and developers on this, and this change will cause disruption 
to many people. Elegance is something that we should certainly strive for; it is 
an ambitious and laudable goal; but we have to consider the pain for plebeian 
folks like me.


was (Author: anu):
{quote} I think pom.ozone.xml is a mistake
{quote}
It is nice to hear your *opinion*, and I respect and value you sharing your 
point of view.
{quote}Hadoop submarine project has done this more elegantly,
{quote}
*Elegance* is also in the eye of the beholder.

 

For me personally, this is very useful. These days I work exclusively against 
this pom.ozone.xml. When I import projects into my editor like IntelliJ, I use 
this. My scripts and some of the internal scripts of Ozone has started using it.

So while I understand that Ozone is not measuring up to your high sense of 
Elegance, there is nothing really we can do about it. We have released this, 
trained our users and developers on this, and this change will cause disruption 
to many people. Elegance something that we should certainly strive for; it is a 
ambitions and laudable goal; but we have to consider the pain for plebeian 
folks like me.

> Provide ozone specific top-level pom.xml
> 
>
> Key: HDDS-1115
> URL: https://issues.apache.org/jira/browse/HDDS-1115
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Ozone build process doesn't require the pom.xml in the top level hadoop 
> directory as we use hadoop 3.2 artifacts as parents of hadoop-ozone and 
> hadoop-hdds. The ./pom.xml is used only to include the 
> hadoop-ozone/hadoop-hdds projects in the maven reactor.
> From command line, it's easy to build only the ozone artifacts:
> {code}
> mvn clean install -Phdds  -am -pl :hadoop-ozone-dist  
> -Danimal.sniffer.skip=true  -Denforcer.skip=true
> {code}
> Where: '-pl' defines the build of the hadoop-ozone-dist project
> and '-am' defines to build all of the dependencies from the source tree 
> (hadoop-ozone-common, hadoop-hdds-common, etc.)
> But this filtering is available only from the command line.
> With providing a lightweight pom.ozone.xml we can achieve the same:
>  * We can open only hdds/ozone projects in the IDE/intellij. It makes the 
> development faster as IDE doesn't need to reindex all the sources all the 
> time + it's easy to execute checkstyle/findbugs plugins of the intellij to 
> the whole project.
>  * Longer term we should create an ozone specific source artifact (currently 
> the source artifact for hadoop and ozone releases are the same) which also 
> requires a simplified pom.
> In this patch I also added the .mvn directory to the .gitignore file.
> With 
> {code}
> mkdir -p .mvn && echo "-f pom.ozone.xml" > .mvn/maven.config
> {code}
> you can persist the usage of the pom.ozone.xml for all the subsequent builds 
> (in the same directory).
> How to test?
> Just do a 'mvn -f pom.ozone.xml clean install -DskipTests'



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1115) Provide ozone specific top-level pom.xml

2019-07-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876613#comment-16876613
 ] 

Anu Engineer edited comment on HDDS-1115 at 7/2/19 4:16 AM:


{quote} I think pom.ozone.xml is a mistake
{quote}
It is nice to hear your *opinion*, and I respect and value you sharing your 
point of view.
{quote}Hadoop submarine project has done this more elegantly,
{quote}
*Elegance* is also in the eye of the beholder.

 

For me personally, this is very useful. These days I work exclusively against 
this pom.ozone.xml. When I import projects into my editor like IntelliJ, I use 
this. My scripts and some of the internal scripts of Ozone have started using it.

So while I understand that Ozone is not measuring up to your high sense of 
Elegance, there is nothing really we can do about it. We have released this, 
trained our users and developers on this, and this change will cause disruption 
to many people. Elegance is something that we should certainly strive for; it is an 
ambitious and laudable goal; but we have to consider the pain for plebeian 
folks like me.


was (Author: anu):
{quote} I think pom.ozone.xml is a mistake
{quote}
It is nice to hear your *opinion*, and I respect and value your sharing your 
point of view.
{quote}Hadoop submarine project has done this more elegantly,
{quote}
*Elegance* is also in the eye of the beholder.

 

For me personally, this is very useful. These days I work exclusively against 
this pom.ozone.xml. When I import projects into my editor like IntelliJ, I use 
this. My scripts and some of the internal scripts of Ozone has started using it.

So while I understand that Ozone is not measuring up to your high sense of 
Elegance, there is nothing really we can do about it. We have released this, 
trained our users and developers on this, and this change will cause disruption 
to many people. Elegance something that we should certainly strive for; it is a 
ambitions and laudable goal; but we have to consider the pain for plebeian 
folks like me.

> Provide ozone specific top-level pom.xml
> 
>
> Key: HDDS-1115
> URL: https://issues.apache.org/jira/browse/HDDS-1115
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Ozone build process doesn't require the pom.xml in the top level hadoop 
> directory as we use hadoop 3.2 artifacts as parents of hadoop-ozone and 
> hadoop-hdds. The ./pom.xml is used only to include the 
> hadoop-ozone/hadoop-hdds projects in the maven reactor.
> From command line, it's easy to build only the ozone artifacts:
> {code}
> mvn clean install -Phdds  -am -pl :hadoop-ozone-dist  
> -Danimal.sniffer.skip=true  -Denforcer.skip=true
> {code}
> Where: '-pl' defines the build of the hadoop-ozone-dist project
> and '-am' defines to build all of the dependencies from the source tree 
> (hadoop-ozone-common, hadoop-hdds-common, etc.)
> But this filtering is available only from the command line.
> With providing a lightweight pom.ozone.xml we can achieve the same:
>  * We can open only hdds/ozone projects in the IDE/intellij. It makes the 
> development faster as IDE doesn't need to reindex all the sources all the 
> time + it's easy to execute checkstyle/findbugs plugins of the intellij to 
> the whole project.
>  * Longer term we should create an ozone specific source artifact (currently 
> the source artifact for hadoop and ozone releases are the same) which also 
> requires a simplified pom.
> In this patch I also added the .mvn directory to the .gitignore file.
> With 
> {code}
> mkdir -p .mvn && echo "-f pom.ozone.xml" > .mvn/maven.config
> {code}
> you can persist the usage of the pom.ozone.xml for all the subsequent builds 
> (in the same directory).
> How to test?
> Just do a 'mvn -f pom.ozone.xml clean install -DskipTests'



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1458) Create a maven profile to run fault injection tests

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1458?focusedWorklogId=270579=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270579
 ]

ASF GitHub Bot logged work on HDDS-1458:


Author: ASF GitHub Bot
Created on: 02/Jul/19 03:41
Start Date: 02/Jul/19 03:41
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #800: HDDS-1458. Create 
a maven profile to run fault injection tests
URL: https://github.com/apache/hadoop/pull/800
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270579)
Time Spent: 1h  (was: 50m)

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch, HDDS-1458.004.patch, HDDS-1458.005.patch, 
> HDDS-1458.006.patch, HDDS-1458.007.patch, HDDS-1458.008.patch, 
> HDDS-1458.009.patch, HDDS-1458.010.patch, HDDS-1458.011.patch, 
> HDDS-1458.012.patch, HDDS-1458.013.patch, HDDS-1458.014.patch, 
> HDDS-1458.015.patch, HDDS-1458.016.patch, HDDS-1458.017.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have ability to start docker compose and exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  This is 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeout.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12748) NameNode memory leak when accessing webhdfs GETHOMEDIRECTORY

2019-07-01 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876637#comment-16876637
 ] 

Weiwei Yang commented on HDFS-12748:


Ping [~xkrogen], could you please help to review this? Thank you.

> NameNode memory leak when accessing webhdfs GETHOMEDIRECTORY
> 
>
> Key: HDFS-12748
> URL: https://issues.apache.org/jira/browse/HDFS-12748
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Jiandan Yang 
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: HDFS-12748.001.patch, HDFS-12748.002.patch, 
> HDFS-12748.003.patch, HDFS-12748.004.patch, HDFS-12748.005.patch
>
>
> In our production environment, the standby NN often does full GC; through MAT we 
> found that the largest object is FileSystem$Cache, which contains 7,844,890 
> DistributedFileSystem instances.
> By viewing the call hierarchy of FileSystem.get(), I found that only 
> NamenodeWebHdfsMethods#get calls FileSystem.get(). I don't know why it creates a 
> different DistributedFileSystem every time instead of getting a FileSystem from 
> the cache.
> {code:java}
> case GETHOMEDIRECTORY: {
>   final String js = JsonUtil.toJsonString("Path",
>   FileSystem.get(conf != null ? conf : new Configuration())
>   .getHomeDirectory().toUri().getPath());
>   return Response.ok(js).type(MediaType.APPLICATION_JSON).build();
> }
> {code}
> When we close the FileSystem in GETHOMEDIRECTORY, the NN doesn't do full GC.
> {code:java}
> case GETHOMEDIRECTORY: {
>   FileSystem fs = null;
>   try {
> fs = FileSystem.get(conf != null ? conf : new Configuration());
> final String js = JsonUtil.toJsonString("Path",
> fs.getHomeDirectory().toUri().getPath());
> return Response.ok(js).type(MediaType.APPLICATION_JSON).build();
>   } finally {
> if (fs != null) {
>   fs.close();
> }
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1668) Add liveness probe to the example k8s resources files

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1668?focusedWorklogId=270576=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270576
 ]

ASF GitHub Bot logged work on HDDS-1668:


Author: ASF GitHub Bot
Created on: 02/Jul/19 03:27
Start Date: 02/Jul/19 03:27
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #944: HDDS-1668. Add 
liveness probe to the example k8s resources files
URL: https://github.com/apache/hadoop/pull/944#issuecomment-507503330
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 79 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | yamllint | 0 | yamllint was not available. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 605 | trunk passed |
   | +1 | compile | 351 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1065 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 229 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 564 | the patch passed |
   | +1 | compile | 339 | the patch passed |
   | +1 | javac | 339 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 872 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 213 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 370 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2930 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 65 | The patch does not generate ASF License warnings. |
   | | | 7949 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.hdds.scm.pipeline.TestNodeFailure |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-944/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/944 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient yamllint shellcheck shelldocs |
   | uname | Linux b7588a8abeaa 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2f4b37b |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-944/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-944/2/testReport/ |
   | Max. process+thread count | 3740 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-944/2/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270576)
Time Spent: 0.5h  (was: 20m)

> Add liveness probe to the example k8s resources files
> -
>
> Key: HDDS-1668
> URL: https://issues.apache.org/jira/browse/HDDS-1668
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>   

[jira] [Updated] (HDFS-14620) RBF: Fix 'not a super user' error when disabling a namespace in kerberos with superuser principal

2019-07-01 Thread luhuachao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDFS-14620:
-
Attachment: HDFS-14620-HDFS-13891-04.patch

> RBF: Fix 'not a super user' error when disabling a namespace in kerberos with 
> superuser principal
> -
>
> Key: HDFS-14620
> URL: https://issues.apache.org/jira/browse/HDFS-14620
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.30
>Reporter: luhuachao
>Assignee: luhuachao
>Priority: Major
> Attachments: HDFS-14620-HDFS-13891-01.patch, 
> HDFS-14620-HDFS-13891-02.patch, HDFS-14620-HDFS-13891-03.patch, 
> HDFS-14620-HDFS-13891-04.patch
>
>
> Using the superuser hdfs's principal hdfs-test@EXAMPLE, we cannot disable a 
> namespace; it fails with the error info below, as the code judges that the 
> principal does not equal hdfs, and hdfs does not belong to the supergroup.
> {code:java}
> [hdfs@host1 ~]$ hdfs dfsrouteradmin -nameservice disable ns2 nameservice: 
> hdfs-test@EXAMPLE is not a super user at 
> org.apache.hadoop.hdfs.server.federation.router.RouterPermissionChecker.checkSuperuserPrivilege(RouterPermissionChecker.java:136)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1532) Ozone: Freon: Improve the concurrent testing framework.

2019-07-01 Thread Junping Du (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876624#comment-16876624
 ] 

Junping Du commented on HDDS-1532:
--

bq. In this environment, writing the same amount of data, by improving the 
concurrent framework, test performance is 8 times faster.
Amazing patch! Great work, [~xudongcao]!

> Ozone: Freon: Improve the concurrent testing framework.
> ---
>
> Key: HDDS-1532
> URL: https://issues.apache.org/jira/browse/HDDS-1532
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently, Freon's concurrency framework works only at the volume level, but in 
> actual testing, users are likely to provide a small volume number (typically 
> 1) and larger bucket and key numbers, in which case the existing concurrency 
> framework cannot make good use of the thread pool.
> We need to improve the concurrency policy so that the volume creation task, 
> bucket creation task, and key creation task can all be submitted equally to 
> the thread pool as general tasks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1710) Publish JVM metrics via Hadoop metrics

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1710?focusedWorklogId=270571=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270571
 ]

ASF GitHub Bot logged work on HDDS-1710:


Author: ASF GitHub Bot
Created on: 02/Jul/19 03:00
Start Date: 02/Jul/19 03:00
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #994: HDDS-1710. 
Publish JVM metrics via Hadoop metrics
URL: https://github.com/apache/hadoop/pull/994#issuecomment-507498733
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 513 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 74 | Maven dependency ordering for branch |
   | +1 | mvninstall | 510 | trunk passed |
   | +1 | compile | 251 | trunk passed |
   | +1 | checkstyle | 73 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 863 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   | 0 | spotbugs | 313 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 503 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 35 | Maven dependency ordering for patch |
   | +1 | mvninstall | 436 | the patch passed |
   | +1 | compile | 265 | the patch passed |
   | +1 | javac | 265 | the patch passed |
   | -0 | checkstyle | 38 | hadoop-hdds: The patch generated 4 new + 0 
unchanged - 0 fixed = 4 total (was 0) |
   | -0 | checkstyle | 42 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 665 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 162 | the patch passed |
   | +1 | findbugs | 519 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 258 | hadoop-hdds in the patch passed. |
   | -1 | unit | 984 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
   | | | 6570 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-994/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/994 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux cde61dd86463 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2f4b37b |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-994/2/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-994/2/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-994/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-994/2/testReport/ |
   | Max. process+thread count | 4655 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service 
hadoop-hdds/server-scm hadoop-ozone/ozone-manager hadoop-ozone/tools U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-994/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

 

[jira] [Work logged] (HDDS-1698) Switch to use apache/ozone-runner in the compose/Dockerfile

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1698?focusedWorklogId=270566=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270566
 ]

ASF GitHub Bot logged work on HDDS-1698:


Author: ASF GitHub Bot
Created on: 02/Jul/19 02:40
Start Date: 02/Jul/19 02:40
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #979: HDDS-1698. Switch 
to use apache/ozone-runner in the compose/Dockerfile
URL: https://github.com/apache/hadoop/pull/979#issuecomment-507495262
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 33 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 475 | trunk passed |
   | +1 | compile | 276 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 785 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 175 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 430 | the patch passed |
   | +1 | compile | 260 | the patch passed |
   | +1 | javac | 260 | the patch passed |
   | -1 | hadolint | 3 | The patch generated 3 new + 14 unchanged - 3 fixed = 
17 total (was 17) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 622 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 152 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 253 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1330 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 5053 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-979/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/979 |
   | Optional Tests | dupname asflicense shellcheck shelldocs compile javac 
javadoc mvninstall mvnsite unit shadedclient xml hadolint yamllint |
   | uname | Linux 7fa924bb46a4 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2f4b37b |
   | Default Java | 1.8.0_212 |
   | hadolint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-979/2/artifact/out/diff-patch-hadolint.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-979/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-979/2/testReport/ |
   | Max. process+thread count | 4606 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-979/2/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 
hadolint=1.11.1-0-g0e692dd |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270566)
Time Spent: 40m  (was: 0.5h)

> Switch to use apache/ozone-runner in the compose/Dockerfile
> ---
>
> Key: HDDS-1698
> URL: https://issues.apache.org/jira/browse/HDDS-1698
> Project: Hadoop Distributed Data 

[jira] [Commented] (HDDS-1115) Provide ozone specific top-level pom.xml

2019-07-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876613#comment-16876613
 ] 

Anu Engineer commented on HDDS-1115:


{quote} I think pom.ozone.xml is a mistake
{quote}
It is nice to hear your *opinion*, and I respect and value your sharing your 
point of view.
{quote}Hadoop submarine project has done this more elegantly,
{quote}
*Elegance* is also in the eye of the beholder.

 

For me personally, this is very useful. These days I work exclusively against 
this pom.ozone.xml. When I import projects into my editor like IntelliJ, I use 
this. My scripts and some of the internal scripts of Ozone have started using it.

So while I understand that Ozone is not measuring up to your high sense of 
Elegance, there is nothing really we can do about it. We have released this, 
trained our users and developers on this, and this change will cause disruption 
to many people. Elegance is something that we should certainly strive for; it is an 
ambitious and laudable goal; but we have to consider the pain for plebeian 
folks like me.

> Provide ozone specific top-level pom.xml
> 
>
> Key: HDDS-1115
> URL: https://issues.apache.org/jira/browse/HDDS-1115
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Ozone build process doesn't require the pom.xml in the top level hadoop 
> directory as we use hadoop 3.2 artifacts as parents of hadoop-ozone and 
> hadoop-hdds. The ./pom.xml is used only to include the 
> hadoop-ozone/hadoop-hdds projects in the maven reactor.
> From command line, it's easy to build only the ozone artifacts:
> {code}
> mvn clean install -Phdds  -am -pl :hadoop-ozone-dist  
> -Danimal.sniffer.skip=true  -Denforcer.skip=true
> {code}
> Where: '-pl' defines the build of the hadoop-ozone-dist project
> and '-am' defines to build all of the dependencies from the source tree 
> (hadoop-ozone-common, hadoop-hdds-common, etc.)
> But this filtering is available only from the command line.
> With providing a lightweight pom.ozone.xml we can achieve the same:
>  * We can open only hdds/ozone projects in the IDE/intellij. It makes the 
> development faster as IDE doesn't need to reindex all the sources all the 
> time + it's easy to execute checkstyle/findbugs plugins of the intellij to 
> the whole project.
>  * Longer term we should create an ozone specific source artifact (currently 
> the source artifact for hadoop and ozone releases are the same) which also 
> requires a simplified pom.
> In this patch I also added the .mvn directory to the .gitignore file.
> With 
> {code}
> mkdir -p .mvn && echo "-f pom.ozone.xml" > .mvn/maven.config
> {code}
> you can persist the usage of the pom.ozone.xml for all the subsequent builds 
> (in the same directory).
> How to test?
> Just do a 'mvn -f pom.ozone.xml clean install -DskipTests'



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1661) Consolidate hadoop-hdds and hadoop-ozone into hadoop-ozone-project

2019-07-01 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876610#comment-16876610
 ] 

Elek, Marton commented on HDDS-1661:


I would do it only after a clear consensus on how the HDDS and Ozone projects will be 
used by HDFS. The real reason to use two separated projects (HDDS and Ozone) is 
the possibility to release them and use them in two different ways. The HDDS 
project may or may not be used by the HDFS project and may or may not be released 
in a separated way.

The future is unknown, but what we know is that there is no blocking problem with 
the current approach. This Jira didn't define any problem, just that Ozone is 
"somewhat fragmented", which seems to be an opinion. While I can see some value in 
using a different approach (such as the suggested one), I would wait with such a 
big change until we have more data about future plans.

> Consolidate hadoop-hdds and hadoop-ozone into hadoop-ozone-project
> --
>
> Key: HDDS-1661
> URL: https://issues.apache.org/jira/browse/HDDS-1661
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Priority: Major
>
> Ozone source code is somewhat fragmented in Hadoop source code.  The current 
> code looks like:
> {code}
> hadoop/pom.ozone.xml
> ├── hadoop-hdds
> └── hadoop-ozone
> {code}
> It is helpful to consolidate the project into high level grouping such as:
> {code}
> hadoop
> └── hadoop-ozone-project/pom.xml
> └── hadoop-ozone-project/hadoop-hdds
> └── hadoop-ozone-project/hadoop-ozone
> {code}
> This allows users to build Ozone from the hadoop-ozone-project directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1731) Implement File CreateFile Request to use Cache and DoubleBuffer

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1731?focusedWorklogId=270561=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270561
 ]

ASF GitHub Bot logged work on HDDS-1731:


Author: ASF GitHub Bot
Created on: 02/Jul/19 02:18
Start Date: 02/Jul/19 02:18
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1044: HDDS-1731. 
Implement File CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#issuecomment-507491077
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 539 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 54 | Maven dependency ordering for branch |
   | +1 | mvninstall | 551 | trunk passed |
   | +1 | compile | 266 | trunk passed |
   | +1 | checkstyle | 67 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 824 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 158 | trunk passed |
   | 0 | spotbugs | 305 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 499 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | +1 | mvninstall | 440 | the patch passed |
   | +1 | compile | 267 | the patch passed |
   | +1 | cc | 267 | the patch passed |
   | +1 | javac | 267 | the patch passed |
   | -0 | checkstyle | 42 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 689 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 93 | hadoop-ozone generated 1 new + 9 unchanged - 0 fixed = 
10 total (was 9) |
   | +1 | findbugs | 521 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 239 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1089 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 6687 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1044 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 1b464a00e411 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2f4b37b |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/3/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/3/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/3/testReport/ |
   | Max. process+thread count | 4350 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 

[jira] [Commented] (HDDS-1115) Provide ozone specific top-level pom.xml

2019-07-01 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876608#comment-16876608
 ] 

Elek, Marton commented on HDDS-1115:


IMHO the Hadoop submarine project is as elegant as the Hadoop HDDS project. And the 
Hadoop HDDS project is as elegant as the Hadoop Ozone project. pom.ozone.xml is a 
helper to compile both the HDDS and Ozone projects together.

> Provide ozone specific top-level pom.xml
> 
>
> Key: HDDS-1115
> URL: https://issues.apache.org/jira/browse/HDDS-1115
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Ozone build process doesn't require the pom.xml in the top level hadoop 
> directory as we use hadoop 3.2 artifacts as parents of hadoop-ozone and 
> hadoop-hdds. The ./pom.xml is used only to include the 
> hadoop-ozone/hadoop-hdds projects in the maven reactor.
> From command line, it's easy to build only the ozone artifacts:
> {code}
> mvn clean install -Phdds  -am -pl :hadoop-ozone-dist  
> -Danimal.sniffer.skip=true  -Denforcer.skip=true
> {code}
> Where: '-pl' defines the build of the hadoop-ozone-dist project
> and '-am' defines to build all of the dependencies from the source tree 
> (hadoop-ozone-common, hadoop-hdds-common, etc.)
> But this filtering is available only from the command line.
> With providing a lightweight pom.ozone.xml we can achieve the same:
>  * We can open only hdds/ozone projects in the IDE/intellij. It makes the 
> development faster as IDE doesn't need to reindex all the sources all the 
> time + it's easy to execute checkstyle/findbugs plugins of the intellij to 
> the whole project.
>  * Longer term we should create an ozone specific source artifact (currently 
> the source artifact for hadoop and ozone releases are the same) which also 
> requires a simplified pom.
> In this patch I also added the .mvn directory to the .gitignore file.
> With 
> {code}
> mkdir -p .mvn && echo "-f pom.ozone.xml" > .mvn/maven.config
> {code}
> you can persist the usage of the pom.ozone.xml for all the subsequent builds 
> (in the same directory).
> How to test?
> Just do a 'mvn -f pom.ozone.xml clean install -DskipTests'



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-07-01 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876605#comment-16876605
 ] 

Elek, Marton commented on HDDS-1735:


{quote}Hadoop already provide it's own pre-commit build validation suites 
(yetus). The optimization of building speed should not introduce a complete 
fork on how CI is done for Hadoop sub-project.
{quote}
If you see any fork in the patch, please let me know.
 # Did this patch modify anything related to maven lifecycle? Nope.
 # Did this patch modify anything related to Yetus? Nope.

We provided shortcuts to make it easier to execute some maven commands. Nothing 
more. And this is already done.

In this patch, I suggested _improving_ the shortcuts (and splitting one shortcut 
into separate unit + integration test runs).

If you see any risk in making the *existing* scripts safer, please let me know 
(with a definition of how I can reproduce the problems, please).

 

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper scripts to 
> execute different types of testing (findbugs, rat, unit, build).
> They easily define how tests should be executed, with the following contract:
>  * The problems should be printed out to the console
>  * in case of test failure a non zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo where all the shell scripts are executed in 
> parallel) but we need some updates:
>  1. Most important: the unit tests and integration tests should be separated. 
> Integration tests are more flaky and it's better to have a way to run only 
> the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "-am -pl hadoop-ozone-dist" trick.
>  3. To make it possible to run blockade tests in containers, we should use the 
> -T flag with docker-compose.
>  4. checkstyle violations should be printed out to the console.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1741) Fix prometheus configuration in ozoneperf example cluster

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1741?focusedWorklogId=270558=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270558
 ]

ASF GitHub Bot logged work on HDDS-1741:


Author: ASF GitHub Bot
Created on: 02/Jul/19 02:12
Start Date: 02/Jul/19 02:12
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #1045: HDDS-1741 Fix prometheus 
configuration in ozoneperf example cluster
URL: https://github.com/apache/hadoop/pull/1045#issuecomment-507490165
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270558)
Time Spent: 0.5h  (was: 20m)

> Fix prometheus configuration in ozoneperf example cluster
> -
>
> Key: HDDS-1741
> URL: https://issues.apache.org/jira/browse/HDDS-1741
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Affects Versions: 0.4.0
>Reporter: Elek, Marton
>Assignee: Istvan Fajth
>Priority: Trivial
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> HDDS-1216 renamed the ozoneManager components to om in the docker-compose 
> file. But the prometheus configuration of the compose/ozoneperf environment 
> is not updated.
> We need to update it to get meaningful metrics from om.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14483) Backport HDFS-3246,HDFS-14111 ByteBuffer pread interface to branch-2.9

2019-07-01 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14483:
---
Attachment: HDFS-14483.branch-2.9.v1.patch

> Backport HDFS-3246,HDFS-14111 ByteBuffer pread interface to branch-2.9
> --
>
> Key: HDFS-14483
> URL: https://issues.apache.org/jira/browse/HDFS-14483
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Zheng Hu
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14483.branch-2.8.v1.patch, 
> HDFS-14483.branch-2.9.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1731) Implement File CreateFile Request to use Cache and DoubleBuffer

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1731?focusedWorklogId=270555=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270555
 ]

ASF GitHub Bot logged work on HDDS-1731:


Author: ASF GitHub Bot
Created on: 02/Jul/19 01:41
Start Date: 02/Jul/19 01:41
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1044: HDDS-1731. 
Implement File CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#issuecomment-507484618
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 525 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 43 | Maven dependency ordering for branch |
   | +1 | mvninstall | 483 | trunk passed |
   | +1 | compile | 276 | trunk passed |
   | +1 | checkstyle | 79 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 870 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 144 | trunk passed |
   | 0 | spotbugs | 320 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 509 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 17 | Maven dependency ordering for patch |
   | +1 | mvninstall | 420 | the patch passed |
   | +1 | compile | 246 | the patch passed |
   | +1 | cc | 246 | the patch passed |
   | +1 | javac | 246 | the patch passed |
   | +1 | checkstyle | 78 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 651 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 93 | hadoop-ozone generated 1 new + 9 unchanged - 0 fixed = 
10 total (was 9) |
   | +1 | findbugs | 520 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 238 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1732 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
   | | | 7205 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestSCMRestart |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1044 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 71770f4d9f2f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d8bac50 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/2/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/2/testReport/ |
   | Max. process+thread count | 5235 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this 

[jira] [Work logged] (HDDS-1710) Publish JVM metrics via Hadoop metrics

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1710?focusedWorklogId=270546=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270546
 ]

ASF GitHub Bot logged work on HDDS-1710:


Author: ASF GitHub Bot
Created on: 02/Jul/19 01:11
Start Date: 02/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #994: HDDS-1710. Publish JVM 
metrics via Hadoop metrics
URL: https://github.com/apache/hadoop/pull/994#issuecomment-507479341
 
 
   > @elek Can this be done for HDDS client as well?
   
   Sure. I added it to the freon.
   
   For a simple `ozone sh` it's more tricky, as metrics generate additional 
logging which may break the existing tests (and can be confusing), but for freon 
it's no problem as it already logs a lot. And metrics are more useful for 
freon than for the shell anyway.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270546)
Time Spent: 40m  (was: 0.5h)

> Publish JVM metrics via Hadoop metrics
> --
>
> Key: HDDS-1710
> URL: https://issues.apache.org/jira/browse/HDDS-1710
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: om, Ozone Datanode, SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In Ozone, metrics can be published with the help of Hadoop metrics (for 
> example via PrometheusMetricsSink).
> The basic JVM metrics are not published by the metrics system (only via JMX).
> I am very interested in the basic JVM metrics (gc count, heap memory 
> usage) to identify possible problems in the test environment.
> Fortunately it's very easy to turn them on with the help of 
> org.apache.hadoop.metrics2.source.JvmMetrics.
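To make the intent concrete, here is a minimal sketch of the kind of wiring the 
description refers to (illustrative only, not the actual patch; the helper class 
and method names below are made up, while JvmMetrics.create and 
DefaultMetricsSystem are the existing Hadoop metrics2 APIs mentioned above):
{code:java}
import org.apache.hadoop.metrics2.MetricsSystem;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.source.JvmMetrics;

/** Illustrative helper, not part of the actual change. */
public final class JvmMetricsBootstrapSketch {

  private JvmMetricsBootstrapSketch() {
  }

  /**
   * Registers the JVM metrics source (gc count, heap memory usage, thread
   * counts, ...) with the already initialized default metrics system, so
   * that any configured sink (e.g. PrometheusMetricsSink) can publish them.
   */
  public static void registerJvmMetrics(String processName) {
    MetricsSystem ms = DefaultMetricsSystem.instance();
    JvmMetrics.create(processName, null, ms);
  }
}
{code}
Calling something like registerJvmMetrics("OzoneManager") during service startup 
would then expose the gc/heap values through whatever sinks are configured.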



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1741) Fix prometheus configuration in ozoneperf example cluster

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1741?focusedWorklogId=270544=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270544
 ]

ASF GitHub Bot logged work on HDDS-1741:


Author: ASF GitHub Bot
Created on: 02/Jul/19 01:01
Start Date: 02/Jul/19 01:01
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1045: HDDS-1741 Fix 
prometheus configuration in ozoneperf example cluster
URL: https://github.com/apache/hadoop/pull/1045#issuecomment-507477576
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 125 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 527 | trunk passed |
   | +1 | compile | 272 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1699 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 172 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 455 | the patch passed |
   | +1 | compile | 280 | the patch passed |
   | +1 | javac | 280 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 735 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 168 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 311 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1825 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 6015 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1045/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1045 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient yamllint |
   | uname | Linux 883ff013bf73 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d8bac50 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1045/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1045/1/testReport/ |
   | Max. process+thread count | 5314 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1045/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270544)
Time Spent: 20m  (was: 10m)

> Fix prometheus configuration in ozoneperf example cluster
> -
>
> Key: HDDS-1741
> URL: https://issues.apache.org/jira/browse/HDDS-1741
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Affects Versions: 0.4.0
>Reporter: Elek, Marton
>Assignee: Istvan Fajth
>Priority: Trivial
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> 

[jira] [Work logged] (HDDS-1731) Implement File CreateFile Request to use Cache and DoubleBuffer

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1731?focusedWorklogId=270543=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270543
 ]

ASF GitHub Bot logged work on HDDS-1731:


Author: ASF GitHub Bot
Created on: 02/Jul/19 00:56
Start Date: 02/Jul/19 00:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1044: HDDS-1731. 
Implement File CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#issuecomment-507476729
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 36 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for branch |
   | +1 | mvninstall | 508 | trunk passed |
   | +1 | compile | 259 | trunk passed |
   | +1 | checkstyle | 66 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 839 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 146 | trunk passed |
   | 0 | spotbugs | 352 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 555 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | +1 | mvninstall | 448 | the patch passed |
   | +1 | compile | 276 | the patch passed |
   | +1 | cc | 276 | the patch passed |
   | +1 | javac | 276 | the patch passed |
   | +1 | checkstyle | 78 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 666 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 87 | hadoop-ozone generated 1 new + 9 unchanged - 0 fixed = 
10 total (was 9) |
   | -1 | findbugs | 343 | hadoop-ozone generated 2 new + 0 unchanged - 0 fixed 
= 2 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 258 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1230 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 58 | The patch does not generate ASF License warnings. |
   | | | 6343 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-ozone |
   |  |  Possible null pointer dereference in 
org.apache.hadoop.ozone.om.request.file.OMFileCreateRequest.checkKeysUnderPath(OMMetadataManager,
 String, String, String) due to return value of called method  Dereferenced at 
OMFileCreateRequest.java:org.apache.hadoop.ozone.om.request.file.OMFileCreateRequest.checkKeysUnderPath(OMMetadataManager,
 String, String, String) due to return value of called method  Dereferenced at 
OMFileCreateRequest.java:[line 277] |
   |  |  Load of known null value in 
org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest.prepareCreateKeyResponse(OzoneManagerProtocolProtos$KeyArgs,
 OmKeyInfo, List, FileEncryptionInfo, IOException, long, long, String, String, 
String, OzoneManager, OMAction)  At OMKeyCreateRequest.java:in 
org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest.prepareCreateKeyResponse(OzoneManagerProtocolProtos$KeyArgs,
 OmKeyInfo, List, FileEncryptionInfo, IOException, long, long, String, String, 
String, OzoneManager, OMAction)  At OMKeyCreateRequest.java:[line 294] |
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1044 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 8b78332ea88f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d8bac50 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/1/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/1/artifact/out/new-findbugs-hadoop-ozone.html
 |
   | unit | 

[jira] [Created] (HDDS-1747) Support override of configuration annotations

2019-07-01 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1747:
--

 Summary: Support override of configuration annotations
 Key: HDDS-1747
 URL: https://issues.apache.org/jira/browse/HDDS-1747
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Elek, Marton
Assignee: Stephen O'Donnell


To support HDDS-1744 we need a way to override existing configuration defaults. 
For example, given a main OzoneHttpServerConfig class:
{code:java}

public class OzoneHttpServerConfig {

  private int httpBindPort;

  @Config(key = "http-bind-port",
      defaultValue = "9874",
      description =
          "The actual port the web server will listen on for HTTP "
              + "communication. If the "
              + "port is 0 then the server will start on a free port.",
      tags = {ConfigTag.OM, ConfigTag.MANAGEMENT})
  public void setHttpBindPort(int httpBindPort) {
    this.httpBindPort = httpBindPort;
  }
}
{code}
We need an option to extend  this class and override the default value:
{code:java}
  @ConfigGroup(prefix = "hdds.datanode")
  public static class HttpConfig extends OzoneHttpServerConfig {

    @Override
    @ConfigOverride(defaultValue = "9882")
    public void setHttpBindPort(int httpBindPort) {
      super.setHttpBindPort(httpBindPort);
    }
  }
{code}
The expected behavior is a generated hdds.datanode.http-bind-port where the 
default is 9882.
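A hypothetical usage sketch of how the override would be consumed (assuming the 
config framework's OzoneConfiguration#getObject entry point; the wrapper class, 
the main method and the log line are illustrative only, and HttpConfig refers to 
the proposed subclass above):
{code:java}
import org.apache.hadoop.hdds.conf.OzoneConfiguration;

/** Illustrative only; HttpConfig is the proposed subclass sketched above. */
public class HttpConfigUsageSketch {

  public static void main(String[] args) {
    OzoneConfiguration conf = new OzoneConfiguration();

    // With @ConfigOverride in place, the injected default would come from the
    // generated hdds.datanode.http-bind-port key and fall back to 9882.
    HttpConfig httpConfig = conf.getObject(HttpConfig.class);

    System.out.println("Loaded datanode http server config: " + httpConfig);
  }
}
{code}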



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1667) Docker compose file may referring to incorrect docker image name

2019-07-01 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876578#comment-16876578
 ] 

Hudson commented on HDDS-1667:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16848 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16848/])
HDDS-1667. Docker compose file may referring to incorrect docker image (elek: 
rev 2f4b37b53c9c0c5db22f797be3f0c13318cf3ed2)
* (edit) hadoop-ozone/dist/pom.xml
* (edit) hadoop-ozone/pom.xml
* (edit) 
hadoop-ozone/fault-injection-test/network-tests/src/test/compose/docker-compose.yaml


> Docker compose file may referring to incorrect docker image name
> 
>
> Key: HDDS-1667
> URL: https://issues.apache.org/jira/browse/HDDS-1667
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Trivial
> Fix For: 0.4.1
>
> Attachments: HDDS-1667.001.patch, HDDS-1667.002.patch, 
> HDDS-1667.003.patch, HDDS-1667.004.patch, HDDS-1667.005.patch, 
> HDDS-1667.006.patch, HDDS-1667.007.patch, HDDS-1667.008.patch, 
> HDDS-1667.009.patch, HDDS-1667.010.patch
>
>
> In the fault injection test, the docker compose file is templated using:
> ${user.name}/ozone:${project.version}
> If the user passes in the -Ddocker.image parameter, the docker build generates a 
> different image name. This can cause the fault injection test to fail or get 
> stuck because it cannot find the required docker image. The fix is simply to use 
> the docker.image token to filter the docker compose file.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1734) Use maven assembly to create ozone tarball image

2019-07-01 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876575#comment-16876575
 ] 

Elek, Marton commented on HDDS-1734:


{quote} This prevents down stream project to use Ozone tarball as a dependency. 
 It would be nice to create Ozone tarball with maven assembly plugin to have 
ability to cache ozone tarball in maven repository.
{quote}
There is no downstream project that would like to use the tarball. In fact, even 
Hadoop has no downstream project which consumes tar files. This use case 
doesn't seem realistic to me (at least for now).
{quote}This change can help docker development to be more agile without making 
a full project build.
{quote}
You can do it even now:
{code:java}
cd hadoop-ozone/dist
mvn docker:build -Pdocker-build{code}
 

I agree with [~anu]: It makes the build (and release) slower and consumes more 
space. And I can't see any immediate technical benefit.

> Use maven assembly to create ozone tarball image
> 
>
> Key: HDDS-1734
> URL: https://issues.apache.org/jira/browse/HDDS-1734
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1734.001.patch, HDDS-1734.002.patch, 
> HDDS-1734.003.patch
>
>
> Ozone is using tar stitching to create the ozone tarball.  This prevents 
> downstream projects from using the Ozone tarball as a dependency.  It would be 
> nice to create the Ozone tarball with the maven assembly plugin, to be able to 
> cache the ozone tarball in the maven repository.  This ability would allow the 
> docker build to be a separate sub-module referencing the Ozone tarball.  This 
> change can help docker development be more agile without making a full 
> project build.
> Test procedure:
> {code:java}
> mvn -f pom.ozone.xml clean install -DskipTests -DskipShade 
> -Dmaven.javadoc.skip -Pdist{code}
> Expected result:
> This will install tarball into:
> {code:java}
> ~/.m2/repository/org/apache/hadoop/hadoop-ozone-dist/0.5.0-SNAPSHOT/hadoop-ozone-dist-0.5.0-SNAPSHOT.tar.gz{code}
> Test procedure 2:
> {code:java}
> mvn -f pom.ozone.xml clean package -DskipTests -DskipShade 
> -Dmaven.javadoc.skip -Pdist{code}
>  
> Expected result:
> hadoop/hadoop-ozone/dist/target directory contains 
> ozone-0.5.0-SNAPSHOT.tar.gz file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1667) Docker compose file may referring to incorrect docker image name

2019-07-01 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876574#comment-16876574
 ] 

Elek, Marton commented on HDDS-1667:


+1

Just committed to the trunk.

Thanks [~eyang] the contribution.

> Docker compose file may referring to incorrect docker image name
> 
>
> Key: HDDS-1667
> URL: https://issues.apache.org/jira/browse/HDDS-1667
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Trivial
> Fix For: 0.4.1
>
> Attachments: HDDS-1667.001.patch, HDDS-1667.002.patch, 
> HDDS-1667.003.patch, HDDS-1667.004.patch, HDDS-1667.005.patch, 
> HDDS-1667.006.patch, HDDS-1667.007.patch, HDDS-1667.008.patch, 
> HDDS-1667.009.patch, HDDS-1667.010.patch
>
>
> In the fault injection test, the docker compose file is templated using:
> ${user.name}/ozone:${project.version}
> If the user passes in the -Ddocker.image parameter, the docker build generates a 
> different image name. This can cause the fault injection test to fail or get 
> stuck because it cannot find the required docker image. The fix is simply to use 
> the docker.image token to filter the docker compose file.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1667) Docker compose file may referring to incorrect docker image name

2019-07-01 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1667:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Docker compose file may referring to incorrect docker image name
> 
>
> Key: HDDS-1667
> URL: https://issues.apache.org/jira/browse/HDDS-1667
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Trivial
> Fix For: 0.4.1
>
> Attachments: HDDS-1667.001.patch, HDDS-1667.002.patch, 
> HDDS-1667.003.patch, HDDS-1667.004.patch, HDDS-1667.005.patch, 
> HDDS-1667.006.patch, HDDS-1667.007.patch, HDDS-1667.008.patch, 
> HDDS-1667.009.patch, HDDS-1667.010.patch
>
>
> In the fault injection test, the docker compose file is templated using:
> ${user.name}/ozone:${project.version}
> If the user passes in the -Ddocker.image parameter, the docker build generates a 
> different image name. This can cause the fault injection test to fail or get 
> stuck because it cannot find the required docker image. The fix is simply to use 
> the docker.image token to filter the docker compose file.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14621) Distcp can not preserve timestamp with -delete option

2019-07-01 Thread ludun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876573#comment-16876573
 ] 

ludun commented on HDFS-14621:
--

Ok, I will upload a patch and a UT.

> Distcp can not preserve timestamp with -delete  option
> --
>
> Key: HDFS-14621
> URL: https://issues.apache.org/jira/browse/HDFS-14621
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.7.7, 3.1.2
>Reporter: ludun
>Assignee: hemanthboyina
>Priority: Major
>
> Use distcp with -prbugpcaxt and -delete to copy data between clusters:
> hadoop distcp -Dmapreduce.job.queuename="QueueA" -prbugpcaxt -update -delete  
> hdfs://sourcecluster/user/hive/warehouse/sum.db 
> hdfs://destcluster/user/hive/warehouse/sum.db
> After distcp, we found the timestamp of the destination differs from the source, 
> and the timestamp of some directories was the time distcp was running.
> Checking the distcp code: in the committer, it preserves times first and then 
> processes the -delete option, which changes the timestamp of the destination 
> directory. So we should process the -delete option first. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1746) Allocating Blocks is not happening for Multipart upload part key in createMultipartKey call

2019-07-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-1746:


Assignee: (was: Bharat Viswanadham)

> Allocating Blocks is not happening for Multipart upload part key in 
> createMultipartKey call
> ---
>
> Key: HDDS-1746
> URL: https://issues.apache.org/jira/browse/HDDS-1746
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Blocker
>
> In the code, in preExecute, we don't read from the DB/cache, as isLeader() can 
> have stale information; right now, this is updated only every second.
> When isLeader() is fixed to be always true, we can allocateBlocks for the 
> multipart key when openKey is called in OM.
>  
> To know why isLeader can have stale info. Read this comment from [~anu]
> https://issues.apache.org/jira/browse/HDDS-1175?focusedCommentId=16831257=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16831257



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1746) Allocating Blocks is not happening for Multipart upload part key in createMultipartKey call

2019-07-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1746:
-
Description: 
In code, in preExecute, we don't read from DB/cache. As isLeader() can have 
stale information. As right now, this updates for every 1 second.

When isLeader() is fixed to be always true. We can allocateBlocks for multipart 
Key when openKey is called in OM.

 

To know why isLeader can have stale info. Read this comment from [~anu]

https://issues.apache.org/jira/browse/HDDS-1175?focusedCommentId=16831257=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16831257

  was:
In code, in preExecute, we don't read from DB/cache. As isLeader() can have 
stale information. As right now, this updates for every 1 second.

When isLeader() is fixed to be always true. We can allocateBlocks for multipart 
Key when openKey is called in OM.


> Allocating Blocks is not happening for Multipart upload part key in 
> createMultipartKey call
> ---
>
> Key: HDDS-1746
> URL: https://issues.apache.org/jira/browse/HDDS-1746
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>
> In the code, in preExecute, we don't read from the DB/cache, as isLeader() can 
> have stale information; right now, this is updated only every second.
> When isLeader() is fixed to be always true, we can allocateBlocks for the 
> multipart key when openKey is called in OM.
>  
> To know why isLeader can have stale info. Read this comment from [~anu]
> https://issues.apache.org/jira/browse/HDDS-1175?focusedCommentId=16831257=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16831257



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1746) Allocating Blocks is not happening for Multipart upload part key in createMultipartKey call

2019-07-01 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1746:
---
Target Version/s: 0.5.0
Priority: Blocker  (was: Major)

> Allocating Blocks is not happening for Multipart upload part key in 
> createMultipartKey call
> ---
>
> Key: HDDS-1746
> URL: https://issues.apache.org/jira/browse/HDDS-1746
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>
> In the code, in preExecute, we don't read from the DB/cache, as isLeader() can 
> have stale information; right now, this is updated only every second.
> When isLeader() is fixed to be always true, we can allocateBlocks for the 
> multipart key when openKey is called in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1746) Allocating Blocks is not happening for Multipart upload part key in createMultipartKey call

2019-07-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1746:
-
Issue Type: Sub-task  (was: Bug)
Parent: HDDS-505

> Allocating Blocks is not happening for Multipart upload part key in 
> createMultipartKey call
> ---
>
> Key: HDDS-1746
> URL: https://issues.apache.org/jira/browse/HDDS-1746
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> In the code, in preExecute, we don't read from the DB/cache, as isLeader() can 
> have stale information; right now, this is updated only every second.
> When isLeader() is fixed to be always true, we can allocateBlocks for the 
> multipart key when openKey is called in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1746) Allocating Blocks is not happening for Multipart upload part key in createMultipartKey call

2019-07-01 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1746:


 Summary: Allocating Blocks is not happening for Multipart upload 
part key in createMultipartKey call
 Key: HDDS-1746
 URL: https://issues.apache.org/jira/browse/HDDS-1746
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


In the code, in preExecute, we don't read from the DB/cache, as isLeader() can 
have stale information; right now, this is updated only every second.

When isLeader() is fixed to be always true, we can allocateBlocks for the 
multipart key when openKey is called in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1745) Add integration test for createDirectory for OM HA

2019-07-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1745:
-
Issue Type: Sub-task  (was: Bug)
Parent: HDDS-505

> Add integration test for createDirectory for OM HA
> --
>
> Key: HDDS-1745
> URL: https://issues.apache.org/jira/browse/HDDS-1745
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Add an integration test for createDirectory which is implemented as part of 
> HDDS-1730 for OM HA.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1745) Add integration test for createDirectory for OM HA

2019-07-01 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1745:


 Summary: Add integration test for createDirectory for OM HA
 Key: HDDS-1745
 URL: https://issues.apache.org/jira/browse/HDDS-1745
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


Add an integration test for createDirectory which is implemented as part of 
HDDS-1730 for OM HA.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14358) Provide LiveNode and DeadNode filter in DataNode UI

2019-07-01 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876559#comment-16876559
 ] 

Ayush Saxena commented on HDFS-14358:
-

{quote}Even adding filter for overview page not good, just redirect to DataNode 
page and then user can filter based on his requirement.
{quote}
Well, frankly speaking, I don't agree with this. The overview page has links that 
are followed when clicking on {{Live Datanodes}} and {{Dead Datanodes}}. If a 
user clicks on the {{Dead Datanodes}} link, his intention is to see the 
{{Dead Datanodes}} only. Honestly, showing the full list then, when you have 
a provision to show only the intended ones, looks kind of weird to me.
 As for the assertion that the user can use the filter afterwards: then you don't 
even need the links on the names at all. The user can also click on the Datanode 
tab directly from the top; both clicking the names and clicking the tab are the 
same as of now.
{quote}Having two dropdown boxes looks odd
{quote}
The checkbox idea was just to counter this, because it looked kind of weird to 
me too. Anyway, I don't have any objections to not having this. Whatever looks 
good to everyone's eyes, I am completely OK with it.

> Provide LiveNode and DeadNode filter in DataNode UI
> ---
>
> Key: HDFS-14358
> URL: https://issues.apache.org/jira/browse/HDFS-14358
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.2
>Reporter: Ravuri Sushma sree
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14358(2).patch, hdfs-14358.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1201) Reporting Corruptions in Containers to SCM

2019-07-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876556#comment-16876556
 ] 

Hadoop QA commented on HDDS-1201:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  5m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
49s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  6m 
57s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 11m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
46s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-ozone: The patch generated 4 new + 0 
unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m 
29s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 36m 38s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 1s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}149m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
|   | hadoop.ozone.client.rpc.TestBCSID |
|   | hadoop.ozone.TestSecureOzoneCluster |
|   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
|   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
|   | 

[jira] [Work logged] (HDDS-1201) Reporting Corruptions in Containers to SCM

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1201?focusedWorklogId=270521=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270521
 ]

ASF GitHub Bot logged work on HDDS-1201:


Author: ASF GitHub Bot
Created on: 01/Jul/19 23:22
Start Date: 01/Jul/19 23:22
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1032: [HDDS-1201] 
Reporting corrupted containers info to SCM
URL: https://github.com/apache/hadoop/pull/1032#issuecomment-507459866
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 85 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 311 | Maven dependency ordering for branch |
   | +1 | mvninstall | 638 | trunk passed |
   | +1 | compile | 277 | trunk passed |
   | +1 | checkstyle | 76 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1001 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 229 | trunk passed |
   | 0 | spotbugs | 417 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 695 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 46 | Maven dependency ordering for patch |
   | +1 | mvninstall | 593 | the patch passed |
   | +1 | compile | 338 | the patch passed |
   | +1 | javac | 338 | the patch passed |
   | -0 | checkstyle | 50 | hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | -0 | checkstyle | 51 | hadoop-ozone: The patch generated 4 new + 0 
unchanged - 0 fixed = 4 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 867 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 204 | the patch passed |
   | +1 | findbugs | 652 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 389 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2198 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 61 | The patch does not generate ASF License warnings. |
   | | | 8980 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1032/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1032 |
   | JIRA Issue | HDDS-1201 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 977bead8f735 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d8bac50 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1032/2/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1032/2/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1032/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1032/2/testReport/ |
   | Max. process+thread count | 3677 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-hdds/server-scm 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1032/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:

[jira] [Commented] (HDDS-1741) Fix prometheus configuration in ozoneperf example cluster

2019-07-01 Thread Istvan Fajth (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876550#comment-16876550
 ] 

Istvan Fajth commented on HDDS-1741:


Hi [~elek],

I have added the PR to fix this config problem; based on a quick check, it is 
working properly in my env after a clean build.

> Fix prometheus configuration in ozoneperf example cluster
> -
>
> Key: HDDS-1741
> URL: https://issues.apache.org/jira/browse/HDDS-1741
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Affects Versions: 0.4.0
>Reporter: Elek, Marton
>Assignee: Istvan Fajth
>Priority: Trivial
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDDS-1216 renamed the ozoneManager components to om in the docker-compose 
> file. But the prometheus configuration of the compose/ozoneperf environment 
> is not updated.
> We need to update it to get meaningful metrics from om.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1741) Fix prometheus configuration in ozoneperf example cluster

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1741:
-
Labels: pull-request-available  (was: )

> Fix prometheus configuration in ozoneperf example cluster
> -
>
> Key: HDDS-1741
> URL: https://issues.apache.org/jira/browse/HDDS-1741
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Affects Versions: 0.4.0
>Reporter: Elek, Marton
>Assignee: Istvan Fajth
>Priority: Trivial
>  Labels: pull-request-available
>
> HDDS-1216 renamed the ozoneManager components to om in the docker-compose 
> file. But the prometheus configuration of the compose/ozoneperf environment 
> is not updated.
> We need to update it to get meaningful metrics from om.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1741) Fix prometheus configuration in ozoneperf example cluster

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1741?focusedWorklogId=270514=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270514
 ]

ASF GitHub Bot logged work on HDDS-1741:


Author: ASF GitHub Bot
Created on: 01/Jul/19 23:06
Start Date: 01/Jul/19 23:06
Worklog Time Spent: 10m 
  Work Description: fapifta commented on pull request #1045: HDDS-1741 Fix 
prometheus configuration in ozoneperf example cluster
URL: https://github.com/apache/hadoop/pull/1045
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270514)
Time Spent: 10m
Remaining Estimate: 0h

> Fix prometheus configuration in ozoneperf example cluster
> -
>
> Key: HDDS-1741
> URL: https://issues.apache.org/jira/browse/HDDS-1741
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Affects Versions: 0.4.0
>Reporter: Elek, Marton
>Assignee: Istvan Fajth
>Priority: Trivial
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDDS-1216 renamed the ozoneManager components to om in the docker-compose 
> file. But the prometheus configuration of the compose/ozoneperf environment 
> is not updated.
> We need to update it to get meaningful metrics from om.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1705) Recon: Add estimatedTotalCount to the response of containers and containers/{id} endpoints

2019-07-01 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1705 started by Vivek Ratnavel Subramanian.

> Recon: Add estimatedTotalCount to the response of containers and 
> containers/{id} endpoints
> --
>
> Key: HDDS-1705
> URL: https://issues.apache.org/jira/browse/HDDS-1705
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1384) TestBlockOutputStreamWithFailures is failing

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1384?focusedWorklogId=270512=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270512
 ]

ASF GitHub Bot logged work on HDDS-1384:


Author: ASF GitHub Bot
Created on: 01/Jul/19 22:57
Start Date: 01/Jul/19 22:57
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #1029: HDDS-1384. 
TestBlockOutputStreamWithFailures is failing
URL: https://github.com/apache/hadoop/pull/1029#issuecomment-507454475
 
 
   +1 the patch lgtm.
   
   The unit test failures may be related. Thanks for taking this up Marton!
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270512)
Time Spent: 2h 10m  (was: 2h)

> TestBlockOutputStreamWithFailures is failing
> 
>
> Key: HDDS-1384
> URL: https://issues.apache.org/jira/browse/HDDS-1384
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> TestBlockOutputStreamWithFailures is failing with the following error
> {noformat}
> 2019-04-04 18:52:43,240 INFO  volume.ThrottledAsyncChecker 
> (ThrottledAsyncChecker.java:schedule(140)) - Scheduling a check for 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume@1f6c0e8a
> 2019-04-04 18:52:43,240 INFO  volume.HddsVolumeChecker 
> (HddsVolumeChecker.java:checkAllVolumes(203)) - Scheduled health check for 
> volume org.apache.hadoop.ozone.container.common.volume.HddsVolume@1f6c0e8a
> 2019-04-04 18:52:43,241 ERROR server.GrpcService 
> (ExitUtils.java:terminate(133)) - Terminating with exit status 1: Failed to 
> start Grpc server
> java.io.IOException: Failed to bind
>   at 
> org.apache.ratis.thirdparty.io.grpc.netty.NettyServer.start(NettyServer.java:253)
>   at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl.start(ServerImpl.java:166)
>   at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl.start(ServerImpl.java:81)
>   at org.apache.ratis.grpc.server.GrpcService.startImpl(GrpcService.java:144)
>   at org.apache.ratis.util.LifeCycle.startAndTransition(LifeCycle.java:202)
>   at 
> org.apache.ratis.server.impl.RaftServerRpcWithProxy.start(RaftServerRpcWithProxy.java:69)
>   at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$start$3(RaftServerProxy.java:300)
>   at org.apache.ratis.util.LifeCycle.startAndTransition(LifeCycle.java:202)
>   at 
> org.apache.ratis.server.impl.RaftServerProxy.start(RaftServerProxy.java:298)
>   at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.start(XceiverServerRatis.java:419)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.start(OzoneContainer.java:186)
>   at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:169)
>   at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:338)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.net.BindException: Address already in use
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at 
> org.apache.ratis.thirdparty.io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:130)
>   at 
> org.apache.ratis.thirdparty.io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:558)
>   at 
> org.apache.ratis.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1358)
>   at 
> org.apache.ratis.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:501)
>   at 
> org.apache.ratis.thirdparty.io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:486)
>   at 
> org.apache.ratis.thirdparty.io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:1019)
>   at 
> org.apache.ratis.thirdparty.io.netty.channel.AbstractChannel.bind(AbstractChannel.java:254)
>   at 
> org.apache.ratis.thirdparty.io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:366)
>   at 
> 

[jira] [Updated] (HDDS-1731) Implement File CreateFile Request to use Cache and DoubleBuffer

2019-07-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1731:
-
Status: Patch Available  (was: Open)

> Implement File CreateFile Request to use Cache and DoubleBuffer
> ---
>
> Key: HDDS-1731
> URL: https://issues.apache.org/jira/browse/HDDS-1731
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement createFile request according to the HA 
> model, and use cache and double buffer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1731) Implement File CreateFile Request to use Cache and DoubleBuffer

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1731:
-
Labels: pull-request-available  (was: )

> Implement File CreateFile Request to use Cache and DoubleBuffer
> ---
>
> Key: HDDS-1731
> URL: https://issues.apache.org/jira/browse/HDDS-1731
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> In this Jira, we shall implement createFile request according to the HA 
> model, and use cache and double buffer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1731) Implement File CreateFile Request to use Cache and DoubleBuffer

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1731?focusedWorklogId=270490=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270490
 ]

ASF GitHub Bot logged work on HDDS-1731:


Author: ASF GitHub Bot
Created on: 01/Jul/19 22:07
Start Date: 01/Jul/19 22:07
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1044: 
HDDS-1731. Implement File CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270490)
Time Spent: 10m
Remaining Estimate: 0h

> Implement File CreateFile Request to use Cache and DoubleBuffer
> ---
>
> Key: HDDS-1731
> URL: https://issues.apache.org/jira/browse/HDDS-1731
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement createFile request according to the HA 
> model, and use cache and double buffer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14610) HashMap is not thread safe. Field storageMap is typically synchronized by storageMap. However, in one place, field storageMap is not protected with synchronized.

2019-07-01 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876518#comment-16876518
 ] 

Hudson commented on HDFS-14610:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16847 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16847/])
HDFS-14610. HashMap is not thread safe. Field storageMap is typically 
(aengineer: rev d8bac50e12d243ef8fd2c7e0ce5c9997131dee74)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java


> HashMap is not thread safe. Field storageMap is typically synchronized by 
> storageMap. However, in one place, field storageMap is not protected with 
> synchronized.
> -
>
> Key: HDFS-14610
> URL: https://issues.apache.org/jira/browse/HDFS-14610
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Paul Ward
>Assignee: Paul Ward
>Priority: Critical
>  Labels: fix-provided, patch-available
> Fix For: 3.3.0
>
> Attachments: addingSynchronization.patch
>
>
> I submitted a CR for this issue at:
> [https://github.com/apache/hadoop/pull/1015]
> The field *storageMap* (a *HashMap*)
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L155]
> is typically protected by synchronization on *storageMap*, e.g.,
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L294]
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L443]
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L484]
> For a total of 9 locations.
> The reason is because *HashMap* is not thread safe.
> However, here:
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L455]
> {{DatanodeStorageInfo storage =}}
> {{   storageMap.get(report.getStorage().getStorageID());}}
> It is not synchronized.
> Note that in the same method:
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L484]
> *storageMap* is again protected by synchronization:
> {{synchronized (storageMap) {}}
> {{   storageMapSize = storageMap.size();}}
> {{}}}
>  
> The CR I inlined above protected the above instance (line 455 ) with 
> synchronization
>  like in line 484 and in all other occurrences.
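To make the described fix concrete, here is a minimal, self-contained sketch of the
access pattern the description asks for: every read and write of the non-thread-safe
HashMap, including the previously unguarded get(), goes through synchronized
(storageMap). The class, field, and method names below are simplified stand-ins,
not the actual DatanodeDescriptor code or the actual patch:

{code:java}
import java.util.HashMap;
import java.util.Map;

// Simplified illustration of the synchronization pattern described above.
public class StorageMapAccessSketch {

  private final Map<String, String> storageMap = new HashMap<>();

  public void updateStorage(String storageId, String info) {
    synchronized (storageMap) {
      storageMap.put(storageId, info);
    }
  }

  public String getStorage(String storageId) {
    // The equivalent of the previously unguarded lookup: after the fix,
    // the get() is wrapped in the same lock as all other accesses.
    synchronized (storageMap) {
      return storageMap.get(storageId);
    }
  }

  public int size() {
    synchronized (storageMap) {
      return storageMap.size();
    }
  }
}{code}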



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1734) Use maven assembly to create ozone tarball image

2019-07-01 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1734:
---
Target Version/s: 0.5.0

> Use maven assembly to create ozone tarball image
> 
>
> Key: HDDS-1734
> URL: https://issues.apache.org/jira/browse/HDDS-1734
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1734.001.patch, HDDS-1734.002.patch, 
> HDDS-1734.003.patch
>
>
> Ozone is using tar stitching to create the ozone tarball.  This prevents 
> downstream projects from using the Ozone tarball as a dependency.  It would 
> be nice to create the Ozone tarball with the Maven assembly plugin, to have 
> the ability to cache the ozone tarball in a Maven repository.  This would 
> allow the docker build to be a separate sub-module referencing the Ozone 
> tarball.  This change can help docker development be more agile without 
> requiring a full project build.
> Test procedure:
> {code:java}
> mvn -f pom.ozone.xml clean install -DskipTests -DskipShade 
> -Dmaven.javadoc.skip -Pdist{code}
> Expected result:
> This will install tarball into:
> {code:java}
> ~/.m2/repository/org/apache/hadoop/hadoop-ozone-dist/0.5.0-SNAPSHOT/hadoop-ozone-dist-0.5.0-SNAPSHOT.tar.gz{code}
> Test procedure 2:
> {code:java}
> mvn -f pom.ozone.xml clean package -DskipTests -DskipShade 
> -Dmaven.javadoc.skip -Pdist{code}
>  
> Expected result:
> hadoop/hadoop-ozone/dist/target directory contains 
> ozone-0.5.0-SNAPSHOT.tar.gz file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14609) RBF: Security should use common AuthenticationFilter

2019-07-01 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876513#comment-16876513
 ] 

CR Hota commented on HDFS-14609:


[Eric Yang|http://jira/secure/ViewProfile.jspa?name=eyang] Thanks for the 
detailed explanation. Apologies for a delayed response.

For TestRouterWithSecureStartup#testStartupWithoutSpnegoPrincipal

Since the test was fine earlier, we will just remove the test, as it won't make 
much sense now considering the generic 
hadoop.http.authentication.kerberos.principal is to be used to grab the SPNEGO 
principal. In any case, it's still unclear to me why this was working just fine 
earlier with the same version of AbstractService. This would need some more digging.

For TestRouterHttpDelegationToken

We wanted to make sure that for webhdfs, some tests were done to see if tokens 
could be generated by the router's security manager. This was NOT intended to be 
an E2E security test. Again, the router works just fine as it inherits the 
namenode implementation, but we may need to modify the test to inject an 
appropriate no-auth filter and bypass auth to maintain the rationale behind the test.

[~tasanuma] Do you have any cycles to help with this? I will be out of office 
soon, but I will be happy to help review and guide you. Feel free to assign 
this to yourself if you want to work on it.

 

> RBF: Security should use common AuthenticationFilter
> 
>
> Key: HDFS-14609
> URL: https://issues.apache.org/jira/browse/HDFS-14609
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
>
> We worked on router based federation security as part of HDFS-13532. We kept 
> it compatible with the way namenode works. However with HADOOP-16314 and 
> HDFS-16354 in trunk, auth filters seems to have been changed causing tests to 
> fail.
> Changes are needed appropriately in RBF, mainly fixing broken tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1734) Use maven assembly to create ozone tarball image

2019-07-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876511#comment-16876511
 ] 

Anu Engineer edited comment on HDDS-1734 at 7/1/19 9:19 PM:


{quote}Ozone is using tar stitching to create ozone tarball.  This prevents 
down stream project to use Ozone tarball as a dependency.  It would be nice to 
create Ozone tarball with maven assembly plugin to have ability to cache ozone 
tarball in maven repository.  This ability allows docker build to be a separate 
sub-module and referencing to Ozone tarball.  This change can help docker 
development to be more agile without making a full project buil
{quote}
Docker builds can be done by copying the tarball from dist; we do that today, 
so I really don't understand why we need to do this at all. As I have 
mentioned in previous JIRAs, clean full builds are good; we should encourage 
more of them instead of bypassing them. We need to make our builds faster and 
try to avoid doing complicated things in the build so that developers are 
encouraged to build cleanly all the time.


was (Author: anu):
{quote}Ozone is using tar stitching to create ozone tarball.  This prevents 
down stream project to use Ozone tarball as a dependency.  It would be nice to 
create Ozone tarball with maven assembly plugin to have ability to cache ozone 
tarball in maven repository.  This ability allows docker build to be a separate 
sub-module and referencing to Ozone tarball.  This change can help docker 
development to be more agile without making a full project buil
{quote}
Docker builds can be done by copying the tarball from dist; we do that today, 
so I really don't understand why we need to do this at all.

> Use maven assembly to create ozone tarball image
> 
>
> Key: HDDS-1734
> URL: https://issues.apache.org/jira/browse/HDDS-1734
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1734.001.patch, HDDS-1734.002.patch, 
> HDDS-1734.003.patch
>
>
> Ozone is using tar stitching to create the ozone tarball.  This prevents 
> downstream projects from using the Ozone tarball as a dependency.  It would 
> be nice to create the Ozone tarball with the Maven assembly plugin, to have 
> the ability to cache the ozone tarball in a Maven repository.  This would 
> allow the docker build to be a separate sub-module referencing the Ozone 
> tarball.  This change can help docker development be more agile without 
> requiring a full project build.
> Test procedure:
> {code:java}
> mvn -f pom.ozone.xml clean install -DskipTests -DskipShade 
> -Dmaven.javadoc.skip -Pdist{code}
> Expected result:
> This will install tarball into:
> {code:java}
> ~/.m2/repository/org/apache/hadoop/hadoop-ozone-dist/0.5.0-SNAPSHOT/hadoop-ozone-dist-0.5.0-SNAPSHOT.tar.gz{code}
> Test procedure 2:
> {code:java}
> mvn -f pom.ozone.xml clean package -DskipTests -DskipShade 
> -Dmaven.javadoc.skip -Pdist{code}
>  
> Expected result:
> hadoop/hadoop-ozone/dist/target directory contains 
> ozone-0.5.0-SNAPSHOT.tar.gz file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1734) Use maven assembly to create ozone tarball image

2019-07-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876511#comment-16876511
 ] 

Anu Engineer commented on HDDS-1734:


{quote}Ozone is using tar stitching to create ozone tarball.  This prevents 
down stream project to use Ozone tarball as a dependency.  It would be nice to 
create Ozone tarball with maven assembly plugin to have ability to cache ozone 
tarball in maven repository.  This ability allows docker build to be a separate 
sub-module and referencing to Ozone tarball.  This change can help docker 
development to be more agile without making a full project buil
{quote}
Docker builds can be done by copying the tarball from dist; we do that today, 
so I really don't understand why we need to do this at all.

> Use maven assembly to create ozone tarball image
> 
>
> Key: HDDS-1734
> URL: https://issues.apache.org/jira/browse/HDDS-1734
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1734.001.patch, HDDS-1734.002.patch, 
> HDDS-1734.003.patch
>
>
> Ozone is using tar stitching to create the ozone tarball.  This prevents 
> downstream projects from using the Ozone tarball as a dependency.  It would 
> be nice to create the Ozone tarball with the Maven assembly plugin, to have 
> the ability to cache the ozone tarball in a Maven repository.  This would 
> allow the docker build to be a separate sub-module referencing the Ozone 
> tarball.  This change can help docker development be more agile without 
> requiring a full project build.
> Test procedure:
> {code:java}
> mvn -f pom.ozone.xml clean install -DskipTests -DskipShade 
> -Dmaven.javadoc.skip -Pdist{code}
> Expected result:
> This will install tarball into:
> {code:java}
> ~/.m2/repository/org/apache/hadoop/hadoop-ozone-dist/0.5.0-SNAPSHOT/hadoop-ozone-dist-0.5.0-SNAPSHOT.tar.gz{code}
> Test procedure 2:
> {code:java}
> mvn -f pom.ozone.xml clean package -DskipTests -DskipShade 
> -Dmaven.javadoc.skip -Pdist{code}
>  
> Expected result:
> hadoop/hadoop-ozone/dist/target directory contains 
> ozone-0.5.0-SNAPSHOT.tar.gz file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1734) Use maven assembly to create ozone tarball image

2019-07-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876510#comment-16876510
 ] 

Anu Engineer commented on HDDS-1734:


{quote}Expected result:

This will install tarball into:
{code:java}
~/.m2/repository/org/apache/hadoop/hadoop-ozone-dist/0.5.0-SNAPSHOT/hadoop-ozone-dist-0.5.0-SNAPSHOT.tar.gz


{code}
{quote}
 

Sorry, this makes no sense to me. Hadoop and Ozone write to the dist directory; 
mostly Maven does too. Why would I want to write anything to the .m2 directory? 
All release builds of Ozone are pushed to Maven Central; so if you are a 
developer and want to get client libraries, you should use Maven Central. No 
local builds should push these into .m2.

> Use maven assembly to create ozone tarball image
> 
>
> Key: HDDS-1734
> URL: https://issues.apache.org/jira/browse/HDDS-1734
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1734.001.patch, HDDS-1734.002.patch, 
> HDDS-1734.003.patch
>
>
> Ozone is using tar stitching to create the ozone tarball.  This prevents 
> downstream projects from using the Ozone tarball as a dependency.  It would 
> be nice to create the Ozone tarball with the Maven assembly plugin, to have 
> the ability to cache the ozone tarball in a Maven repository.  This would 
> allow the docker build to be a separate sub-module referencing the Ozone 
> tarball.  This change can help docker development be more agile without 
> requiring a full project build.
> Test procedure:
> {code:java}
> mvn -f pom.ozone.xml clean install -DskipTests -DskipShade 
> -Dmaven.javadoc.skip -Pdist{code}
> Expected result:
> This will install tarball into:
> {code:java}
> ~/.m2/repository/org/apache/hadoop/hadoop-ozone-dist/0.5.0-SNAPSHOT/hadoop-ozone-dist-0.5.0-SNAPSHOT.tar.gz{code}
> Test procedure 2:
> {code:java}
> mvn -f pom.ozone.xml clean package -DskipTests -DskipShade 
> -Dmaven.javadoc.skip -Pdist{code}
>  
> Expected result:
> hadoop/hadoop-ozone/dist/target directory contains 
> ozone-0.5.0-SNAPSHOT.tar.gz file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1744) Improve BaseHttpServer to use typesafe configuration.

2019-07-01 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1744:
--

 Summary: Improve BaseHttpServer to use typesafe configuration.
 Key: HDDS-1744
 URL: https://issues.apache.org/jira/browse/HDDS-1744
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Elek, Marton
Assignee: Stephen O'Donnell


As it's defined in the parent task, we have a new typesafe way to define 
configuration based on annotations instead of constants.

The next step is to replace the existing code so that it uses the new approach.

In this Jira I propose to improve 
org.apache.hadoop.hdds.server.BaseHttpServer to use a configuration object 
instead of constants.

We need to create a generic configuration object with the right annotation:
{code:java}
public class OzoneHttpServerConfig {

   private String httpBindHost;

   @Config(key = "http-bind-host",
       defaultValue = "0.0.0.0",
       description = "The actual address the web server will bind to. If "
           + "this optional address is set, it overrides only the hostname"
           + " portion of http-address configuration value.",
       tags = {ConfigTag.OM, ConfigTag.MANAGEMENT})
   public void setHttpBindHost(String httpBindHost) {
      this.httpBindHost = httpBindHost;
   }
}{code}
And we need to extend this basic configuration in all the HttpServer 
implementations:
{code:java}

public class OzoneManagerHttpServer extends BaseHttpServer {

   @ConfigGroup(prefix = "ozone.om")
   public static class HttpConfig extends OzoneHttpServerConfig {

      @Override
      @ConfigOverride(defaultValue = "9874")
      public void setHttpBindPort(int httpBindPort) {
         super.setHttpBindPort(httpBindPort);
      }
   }
}{code}
Note: configuration keys used by HttpServer2 can't be replaced easily.
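To illustrate the intended mechanism, here is a minimal, self-contained sketch of
annotation-driven configuration defaults. The @Config annotation and loadDefaults
helper below are simplified stand-ins (not the actual Ozone Config/ConfigGroup
framework), shown only to demonstrate how setter annotations can drive a typesafe
configuration object:

{code:java}
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class TypesafeConfigSketch {

  // Hypothetical annotation standing in for the real @Config.
  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.METHOD)
  @interface Config {
    String key();
    String defaultValue();
  }

  public static class HttpServerConfig {
    private String httpBindHost;

    @Config(key = "http-bind-host", defaultValue = "0.0.0.0")
    public void setHttpBindHost(String httpBindHost) {
      this.httpBindHost = httpBindHost;
    }

    public String getHttpBindHost() {
      return httpBindHost;
    }
  }

  // Apply the annotated defaults to a fresh config instance via reflection.
  static <T> T loadDefaults(Class<T> type) throws Exception {
    T instance = type.getDeclaredConstructor().newInstance();
    for (Method m : type.getMethods()) {
      Config c = m.getAnnotation(Config.class);
      if (c != null && m.getParameterCount() == 1
          && m.getParameterTypes()[0] == String.class) {
        m.invoke(instance, c.defaultValue());
      }
    }
    return instance;
  }

  public static void main(String[] args) throws Exception {
    HttpServerConfig conf = loadDefaults(HttpServerConfig.class);
    System.out.println(conf.getHttpBindHost()); // prints 0.0.0.0
  }
}{code}

The subclass/override pattern proposed above (HttpConfig with @ConfigOverride) would
layer on top of exactly this kind of loader, with the default declared closest to the
concrete class taking precedence.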



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-07-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876508#comment-16876508
 ] 

Anu Engineer commented on HDDS-1735:


Also, large parts of our tests run under K8s. Maven and Java-based infra are 
not very easy to support – but simple scripts work very well. So while Yetus is 
great for the CI of Hadoop, we do need complementary bash scripts. That is an uber 
statement; not specific to this JIRA, just an explanation of why we need to do 
this.

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper 
> scripts to execute different types of testing (findbugs, rat, unit, build).
> They easily define how tests should be executed, with the following contract:
>  * The problems should be printed out to the console
>  * in case of test failure a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo where all the shell scripts are executed in 
> parallel) but we need some updates:
>  1. Most important: the unit tests and integration tests can be separated. 
> Integration tests are more flaky and it's better to have a way to run only 
> the normal unit tests
>  2. As HDDS-1115 introduced a pom.ozone.xml it's better to use it instead 
> of the magical "am pl hadoop-ozone-dist" trick
>  3. To make it possible to run the blockade tests in containers we should use 
> the -T flag with docker-compose
>  4. checkstyle violations are printed out to the console



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-07-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876505#comment-16876505
 ] 

Anu Engineer commented on HDDS-1735:


{quote}Hadoop already provide it's own pre-commit build validation suites 
(yetus)
{quote}
Can you point me to an instruction – or documentation – that explains how to even 
set up Yetus on my Mac? Ozone avoids all these costly mistakes that make Hadoop 
very hard to use. So please don't hamper our efforts by insisting that we need 
to go back to the Hadoop tool chain if we have a better experience in place.

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper 
> scripts to execute different types of testing (findbugs, rat, unit, build).
> They easily define how tests should be executed, with the following contract:
>  * The problems should be printed out to the console
>  * in case of test failure a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo where all the shell scripts are executed in 
> parallel) but we need some updates:
>  1. Most important: the unit tests and integration tests can be separated. 
> Integration tests are more flaky and it's better to have a way to run only 
> the normal unit tests
>  2. As HDDS-1115 introduced a pom.ozone.xml it's better to use it instead 
> of the magical "am pl hadoop-ozone-dist" trick
>  3. To make it possible to run the blockade tests in containers we should use 
> the -T flag with docker-compose
>  4. checkstyle violations are printed out to the console



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-07-01 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1735:
---
Target Version/s: 0.4.1

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper 
> scripts to execute different types of testing (findbugs, rat, unit, build).
> They easily define how tests should be executed, with the following contract:
>  * The problems should be printed out to the console
>  * in case of test failure a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo where all the shell scripts are executed in 
> parallel) but we need some updates:
>  1. Most important: the unit tests and integration tests can be separated. 
> Integration tests are more flaky and it's better to have a way to run only 
> the normal unit tests
>  2. As HDDS-1115 introduced a pom.ozone.xml it's better to use it instead 
> of the magical "am pl hadoop-ozone-dist" trick
>  3. To make it possible to run the blockade tests in containers we should use 
> the -T flag with docker-compose
>  4. checkstyle violations are printed out to the console



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-07-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876503#comment-16876503
 ] 

Anu Engineer commented on HDDS-1735:


{quote}bq. Incorrect assumption, you can use maven profile to trigger different 
type of tests. For more comprehensive test suites, it can be written as 
sub-modules.
This avoid elementary mistakes like:

  1. adding test artifacts into distribution binaries. e.g. include smoke test 
in distribution tarball.
{quote}
We *absolutely* want these tests in the distribution tarball. We want every 
person to validate that Ozone works before voting, as well as have the ability 
to verify the system they are deploying using the tests. Please understand that 
Ozone is a product under development. Many of the Hadoop practices, like not 
shipping tests, only hamper Ozone. 

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper 
> scripts to execute different types of testing (findbugs, rat, unit, build).
> They easily define how tests should be executed, with the following contract:
>  * The problems should be printed out to the console
>  * in case of test failure a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo where all the shell scripts are executed in 
> parallel) but we need some updates:
>  1. Most important: the unit tests and integration tests can be separated. 
> Integration tests are more flaky and it's better to have a way to run only 
> the normal unit tests
>  2. As HDDS-1115 introduced a pom.ozone.xml it's better to use it instead 
> of the magical "am pl hadoop-ozone-dist" trick
>  3. To make it possible to run the blockade tests in containers we should use 
> the -T flag with docker-compose
>  4. checkstyle violations are printed out to the console



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1743) Create service catalog endpoint in the SCM

2019-07-01 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1743:
--

 Summary: Create service catalog endpoint in the SCM
 Key: HDDS-1743
 URL: https://issues.apache.org/jira/browse/HDDS-1743
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: SCM
Reporter: Elek, Marton
Assignee: Stephen O'Donnell


Based on the design doc in the parent task, we need a Service Catalog 
endpoint in the SCM.

 
{code:java}
public interface ServiceRegistry {

   void register(ServiceEndpoint endpoint) throws IOException;

   ServiceEndpoint findEndpoint(String serviceName, int instanceId);

   Collection<ServiceEndpoint> getAllServices();
}{code}
Where the ServiceEndpoint is something like this:
{code:java}
public class ServiceEndpoint {

  private String host;

  private String ip;

  private ServicePort port;

  private String serviceName;

  private int instanceId;

...

}


public class ServicePort {
   
   private ServiceProtocol protocol;

   private String name;

   private int port;

...

}

public enum ServiceProtocol {
   RPC, HTTP, GRPC
}{code}
The ServiceRegistry may have multiple implementations, but as a first step we 
need a simple implementation which calls a new endpoint on the SCM via REST.

The endpoint should persist the data to a local RocksDB with the help of 
DBStore.

This task is about creating the server and client implementation. In a 
follow-up Jira we can start to use the client on the om/datanode/client side to 
mix the service discovery data with the existing configuration.
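As a rough illustration of the contract only (not the actual SCM/REST/RocksDB
implementation), a trivial in-memory registry with the same three operations could
look like the sketch below; the field set of ServiceEndpoint is trimmed to what the
sketch needs:

{code:java}
import java.util.Collection;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// In-memory stand-in for the proposed ServiceRegistry contract.
public class InMemoryServiceRegistry {

  public static class ServiceEndpoint {
    final String serviceName;
    final int instanceId;
    final String host;
    final int port;

    ServiceEndpoint(String serviceName, int instanceId, String host, int port) {
      this.serviceName = serviceName;
      this.instanceId = instanceId;
      this.host = host;
      this.port = port;
    }
  }

  private final Map<String, ServiceEndpoint> endpoints = new ConcurrentHashMap<>();

  public void register(ServiceEndpoint endpoint) {
    endpoints.put(endpoint.serviceName + "/" + endpoint.instanceId, endpoint);
  }

  public ServiceEndpoint findEndpoint(String serviceName, int instanceId) {
    return endpoints.get(serviceName + "/" + instanceId);
  }

  public Collection<ServiceEndpoint> getAllServices() {
    return endpoints.values();
  }

  public static void main(String[] args) {
    InMemoryServiceRegistry registry = new InMemoryServiceRegistry();
    registry.register(new ServiceEndpoint("om", 0, "om-host", 9874));
    System.out.println(registry.findEndpoint("om", 0).host); // prints om-host
  }
}{code}

The real implementation would replace the ConcurrentHashMap with the RocksDB/DBStore
persistence and expose register/find over the REST endpoint described above.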



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14610) HashMap is not thread safe. Field storageMap is typically synchronized by storageMap. However, in one place, field storageMap is not protected with synchronized.

2019-07-01 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-14610:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> HashMap is not thread safe. Field storageMap is typically synchronized by 
> storageMap. However, in one place, field storageMap is not protected with 
> synchronized.
> -
>
> Key: HDFS-14610
> URL: https://issues.apache.org/jira/browse/HDFS-14610
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Paul Ward
>Assignee: Paul Ward
>Priority: Critical
>  Labels: fix-provided, patch-available
> Fix For: 3.3.0
>
> Attachments: addingSynchronization.patch
>
>
> I submitted a CR for this issue at:
> [https://github.com/apache/hadoop/pull/1015]
> The field *storageMap* (a *HashMap*)
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L155]
> is typically protected by synchronization on *storageMap*, e.g.,
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L294]
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L443]
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L484]
> For a total of 9 locations.
> The reason is because *HashMap* is not thread safe.
> However, here:
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L455]
> {{DatanodeStorageInfo storage =}}
> {{   storageMap.get(report.getStorage().getStorageID());}}
> It is not synchronized.
> Note that in the same method:
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L484]
> *storageMap* is again protected by synchronization:
> {{synchronized (storageMap) {}}
> {{   storageMapSize = storageMap.size();}}
> {{}}}
>  
> The CR I inlined above protected the above instance (line 455 ) with 
> synchronization
>  like in line 484 and in all other occurrences.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14610) HashMap is not thread safe. Field storageMap is typically synchronized by storageMap. However, in one place, field storageMap is not protected with synchronized.

2019-07-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876498#comment-16876498
 ] 

Anu Engineer commented on HDFS-14610:
-

[~paulward24] Thank you for the contribution. And welcome to Hadoop. Since you 
like to post patches via Github, can I please show you a process which might 
make your life easier?

The following links contain the information for Github and Hadoop. 

[https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Contributor+Guide]

[https://cwiki.apache.org/confluence/display/HADOOP/Using+Github+for+Ozone+development]

 

If you push a patch with the right Jira number, i.e. if your commits have HDFS-14610 
etc. in the subject tag, then the link will be added automatically to this JIRA.

For example, if you look at how I have committed your patch, it starts with 
"HDFS-14610. HashMap is not thread safe".

 

If you like, I can set up a video conference and help you understand the 
contribution process. I am presuming that all these issues are being discovered 
by some tool, and you will be posting more of these race conditions and 
solutions for them. Just wanted to make sure you are comfortable with your 
contributions.

 

As always, thank you for your contribution to Hadoop, and I have committed this 
patch to trunk. Feel free to tag me or send me an email if you need my 
attention on any of these JIRAs.

 

 

> HashMap is not thread safe. Field storageMap is typically synchronized by 
> storageMap. However, in one place, field storageMap is not protected with 
> synchronized.
> -
>
> Key: HDFS-14610
> URL: https://issues.apache.org/jira/browse/HDFS-14610
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Paul Ward
>Assignee: Paul Ward
>Priority: Critical
>  Labels: fix-provided, patch-available
> Attachments: addingSynchronization.patch
>
>
> I submitted a CR for this issue at:
> [https://github.com/apache/hadoop/pull/1015]
> The field *storageMap* (a *HashMap*)
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L155]
> is typically protected by synchronization on *storageMap*, e.g.,
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L294]
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L443]
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L484]
> For a total of 9 locations.
> The reason is because *HashMap* is not thread safe.
> However, here:
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L455]
> {{DatanodeStorageInfo storage =}}
> {{   storageMap.get(report.getStorage().getStorageID());}}
> It is not synchronized.
> Note that in the same method:
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L484]
> *storageMap* is again protected by synchronization:
> {{synchronized (storageMap) {}}
> {{   storageMapSize = storageMap.size();}}
> {{}}}
>  
> The CR I inlined above protected the above instance (line 455 ) with 
> synchronization
>  like in line 484 and in all other occurrences.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1734) Use maven assembly to create ozone tarball image

2019-07-01 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876485#comment-16876485
 ] 

Eric Yang commented on HDDS-1734:
-

[~elek] Can you help with review of this patch?  Thanks

> Use maven assembly to create ozone tarball image
> 
>
> Key: HDDS-1734
> URL: https://issues.apache.org/jira/browse/HDDS-1734
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1734.001.patch, HDDS-1734.002.patch, 
> HDDS-1734.003.patch
>
>
> Ozone is using tar stitching to create the ozone tarball.  This prevents 
> downstream projects from using the Ozone tarball as a dependency.  It would 
> be nice to create the Ozone tarball with the Maven assembly plugin, to have 
> the ability to cache the ozone tarball in a Maven repository.  This would 
> allow the docker build to be a separate sub-module referencing the Ozone 
> tarball.  This change can help docker development be more agile without 
> requiring a full project build.
> Test procedure:
> {code:java}
> mvn -f pom.ozone.xml clean install -DskipTests -DskipShade 
> -Dmaven.javadoc.skip -Pdist{code}
> Expected result:
> This will install tarball into:
> {code:java}
> ~/.m2/repository/org/apache/hadoop/hadoop-ozone-dist/0.5.0-SNAPSHOT/hadoop-ozone-dist-0.5.0-SNAPSHOT.tar.gz{code}
> Test procedure 2:
> {code:java}
> mvn -f pom.ozone.xml clean package -DskipTests -DskipShade 
> -Dmaven.javadoc.skip -Pdist{code}
>  
> Expected result:
> hadoop/hadoop-ozone/dist/target directory contains 
> ozone-0.5.0-SNAPSHOT.tar.gz file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1742) Merge ozone-perf and ozonetrace example clusters

2019-07-01 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1742:
--

 Summary: Merge ozone-perf and ozonetrace example clusters
 Key: HDDS-1742
 URL: https://issues.apache.org/jira/browse/HDDS-1742
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: docker
Reporter: Elek, Marton
Assignee: Istvan Fajth


We have multiple example clusters in hadoop-ozone/dist/src/main/compose to 
demonstrate how different types of configuration can be set with Ozone.

But some of them can be consolidated. I propose to merge ozonetrace into 
ozoneperf, so that the single ozoneperf cluster includes all the required 
components for local performance testing:
 # opentracing (jaeger component in docker-compose + environment variables)
 # monitoring (grafana + prometheus)
 # perf profile (as of now it's enabled only in the ozone cluster[1])

 

[1]
{code:java}
cat compose/ozone/docker-config | grep prof

OZONE-SITE.XML_hdds.profiler.endpoint.enabled=true
ASYNC_PROFILER_HOME=/opt/profiler
{code}
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14619) chmod changes the mask when ACL is enabled

2019-07-01 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876473#comment-16876473
 ] 

Siyao Meng commented on HDFS-14619:
---

Thanks [~sodonnell] [~pifta]. Closing this jira as Not A Problem.

> chmod changes the mask when ACL is enabled
> --
>
> Key: HDFS-14619
> URL: https://issues.apache.org/jira/browse/HDFS-14619
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Siyao Meng
>Priority: Major
>
> When setting a directory's permission with HDFS shell chmod, it changes the 
> ACL mask instead of the permission bits:
> {code:bash}
> $ sudo -u impala hdfs dfs -getfacl /user/hive/warehouse/exttablename/key=1/
> # file: /user/hive/warehouse/exttablename/key=1
> # owner: hive
> # group: hive
> user::rwx
> user:impala:rwx   #effective:r-x
> group::rwx#effective:r-x
> mask::r-x
> other::r-x
> default:user::rwx
> default:user:impala:rwx
> default:group::rwx
> default:mask::rwx
> default:other::rwx
> $ sudo -u hdfs hdfs dfs -chmod 777 /user/hive/warehouse/exttablename/key=1/
> $ sudo -u impala hdfs dfs -getfacl /user/hive/warehouse/exttablename/key=1/
> # file: /user/hive/warehouse/exttablename/key=1
> # owner: hive
> # group: hive
> user::rwx
> user:impala:rwx
> group::rwx
> mask::rwx
> other::rwx
> default:user::rwx
> default:user:impala:rwx
> default:group::rwx
> default:mask::rwx
> default:other::rwx
> $ sudo -u hdfs hdfs dfs -chmod 755 /user/hive/warehouse/exttablename/key=1/
> $ sudo -u impala hdfs dfs -getfacl /user/hive/warehouse/exttablename/key=1/
> # file: /user/hive/warehouse/exttablename/key=1
> # owner: hive
> # group: hive
> user::rwx
> user:impala:rwx   #effective:r-x
> group::rwx#effective:r-x
> mask::r-x
> other::r-x
> default:user::rwx
> default:user:impala:rwx
> default:group::rwx
> default:mask::rwx
> default:other::rwx
> $ sudo -u impala hdfs dfs -touch /user/hive/warehouse/exttablename/key=1/file
> touch: Permission denied: user=impala, access=WRITE, 
> inode="/user/hive/warehouse/exttablename/key=1/file":hive:hive:drwxr-xr-x
> {code}
> The cluster has dfs.namenode.acls.enabled=true and 
> dfs.namenode.posix.acl.inheritance.enabled=true.
> As far as I understand, the chmod should change the permission bits instead 
> of the ACL mask. CMIIW
> Might be related to HDFS-14517. [~pifta]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1741) Fix prometheus configuration in ozoneperf example cluster

2019-07-01 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-1741:
--

Assignee: Istvan Fajth  (was: Elek, Marton)

> Fix prometheus configuration in ozoneperf example cluster
> -
>
> Key: HDDS-1741
> URL: https://issues.apache.org/jira/browse/HDDS-1741
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Affects Versions: 0.4.0
>Reporter: Elek, Marton
>Assignee: Istvan Fajth
>Priority: Trivial
>
> HDDS-1216 renamed the ozoneManager components to om in the docker-compose 
> file. But the prometheus configuration of the compose/ozoneperf environment 
> is not updated.
> We need to update it to get meaningful metrics from om.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-14619) chmod changes the mask when ACL is enabled

2019-07-01 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-14619:
--
Comment: was deleted

(was: Already fixed by HDFS-14359.)

> chmod changes the mask when ACL is enabled
> --
>
> Key: HDFS-14619
> URL: https://issues.apache.org/jira/browse/HDFS-14619
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Siyao Meng
>Priority: Major
>
> When setting a directory's permission with HDFS shell chmod, it changes the 
> ACL mask instead of the permission bits:
> {code:bash}
> $ sudo -u impala hdfs dfs -getfacl /user/hive/warehouse/exttablename/key=1/
> # file: /user/hive/warehouse/exttablename/key=1
> # owner: hive
> # group: hive
> user::rwx
> user:impala:rwx   #effective:r-x
> group::rwx#effective:r-x
> mask::r-x
> other::r-x
> default:user::rwx
> default:user:impala:rwx
> default:group::rwx
> default:mask::rwx
> default:other::rwx
> $ sudo -u hdfs hdfs dfs -chmod 777 /user/hive/warehouse/exttablename/key=1/
> $ sudo -u impala hdfs dfs -getfacl /user/hive/warehouse/exttablename/key=1/
> # file: /user/hive/warehouse/exttablename/key=1
> # owner: hive
> # group: hive
> user::rwx
> user:impala:rwx
> group::rwx
> mask::rwx
> other::rwx
> default:user::rwx
> default:user:impala:rwx
> default:group::rwx
> default:mask::rwx
> default:other::rwx
> $ sudo -u hdfs hdfs dfs -chmod 755 /user/hive/warehouse/exttablename/key=1/
> $ sudo -u impala hdfs dfs -getfacl /user/hive/warehouse/exttablename/key=1/
> # file: /user/hive/warehouse/exttablename/key=1
> # owner: hive
> # group: hive
> user::rwx
> user:impala:rwx   #effective:r-x
> group::rwx#effective:r-x
> mask::r-x
> other::r-x
> default:user::rwx
> default:user:impala:rwx
> default:group::rwx
> default:mask::rwx
> default:other::rwx
> $ sudo -u impala hdfs dfs -touch /user/hive/warehouse/exttablename/key=1/file
> touch: Permission denied: user=impala, access=WRITE, 
> inode="/user/hive/warehouse/exttablename/key=1/file":hive:hive:drwxr-xr-x
> {code}
> The cluster has dfs.namenode.acls.enabled=true and 
> dfs.namenode.posix.acl.inheritance.enabled=true.
> As far as I understand, the chmod should change the permission bits instead 
> of the ACL mask. CMIIW
> Might be related to HDFS-14517. [~pifta]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1741) Fix prometheus configuration in ozoneperf example cluster

2019-07-01 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1741:
--

 Summary: Fix prometheus configuration in ozoneperf example cluster
 Key: HDDS-1741
 URL: https://issues.apache.org/jira/browse/HDDS-1741
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: docker
Affects Versions: 0.4.0
Reporter: Elek, Marton
Assignee: Elek, Marton


HDDS-1216 renamed the ozoneManager components to om in the docker-compose file. 
But the prometheus configuration of the compose/ozoneperf environment is not 
updated.

We need to update it to get meaningful metrics from om.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-14619) chmod changes the mask when ACL is enabled

2019-07-01 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng resolved HDFS-14619.
---
Resolution: Not A Problem

Already fixed by HDFS-14359.

> chmod changes the mask when ACL is enabled
> --
>
> Key: HDFS-14619
> URL: https://issues.apache.org/jira/browse/HDFS-14619
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Siyao Meng
>Priority: Major
>
> When setting a directory's permission with HDFS shell chmod, it changes the 
> ACL mask instead of the permission bits:
> {code:bash}
> $ sudo -u impala hdfs dfs -getfacl /user/hive/warehouse/exttablename/key=1/
> # file: /user/hive/warehouse/exttablename/key=1
> # owner: hive
> # group: hive
> user::rwx
> user:impala:rwx   #effective:r-x
> group::rwx#effective:r-x
> mask::r-x
> other::r-x
> default:user::rwx
> default:user:impala:rwx
> default:group::rwx
> default:mask::rwx
> default:other::rwx
> $ sudo -u hdfs hdfs dfs -chmod 777 /user/hive/warehouse/exttablename/key=1/
> $ sudo -u impala hdfs dfs -getfacl /user/hive/warehouse/exttablename/key=1/
> # file: /user/hive/warehouse/exttablename/key=1
> # owner: hive
> # group: hive
> user::rwx
> user:impala:rwx
> group::rwx
> mask::rwx
> other::rwx
> default:user::rwx
> default:user:impala:rwx
> default:group::rwx
> default:mask::rwx
> default:other::rwx
> $ sudo -u hdfs hdfs dfs -chmod 755 /user/hive/warehouse/exttablename/key=1/
> $ sudo -u impala hdfs dfs -getfacl /user/hive/warehouse/exttablename/key=1/
> # file: /user/hive/warehouse/exttablename/key=1
> # owner: hive
> # group: hive
> user::rwx
> user:impala:rwx   #effective:r-x
> group::rwx#effective:r-x
> mask::r-x
> other::r-x
> default:user::rwx
> default:user:impala:rwx
> default:group::rwx
> default:mask::rwx
> default:other::rwx
> $ sudo -u impala hdfs dfs -touch /user/hive/warehouse/exttablename/key=1/file
> touch: Permission denied: user=impala, access=WRITE, 
> inode="/user/hive/warehouse/exttablename/key=1/file":hive:hive:drwxr-xr-x
> {code}
> The cluster has dfs.namenode.acls.enabled=true and 
> dfs.namenode.posix.acl.inheritance.enabled=true.
> As far as I understand, chmod should change the permission bits instead 
> of the ACL mask. Correct me if I'm wrong.
> Might be related to HDFS-14517. [~pifta]
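
For anyone who hits this before the HDFS-14359 fix is available, a possible 
workaround sketch (reusing the paths from the example above; not re-verified on 
a cluster) is to re-open the ACL mask explicitly after the chmod:

{code:bash}
# Workaround sketch: chmod narrowed the mask to r-x, which caps the named-user entry.
# Re-opening the mask makes user:impala:rwx effective again.
sudo -u hdfs hdfs dfs -setfacl -m mask::rwx /user/hive/warehouse/exttablename/key=1/

# Confirm that the "#effective" annotations are gone:
sudo -u impala hdfs dfs -getfacl /user/hive/warehouse/exttablename/key=1/
{code}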



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1685) Recon: Add support for "start" query param to containers and containers/{id} endpoints

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1685?focusedWorklogId=270439&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270439
 ]

ASF GitHub Bot logged work on HDDS-1685:


Author: ASF GitHub Bot
Created on: 01/Jul/19 19:45
Start Date: 01/Jul/19 19:45
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #987: HDDS-1685. 
Recon: Add support for 'start' query param to containers…
URL: https://github.com/apache/hadoop/pull/987#issuecomment-507398655
 
 
   +1 LGTM.
   
   Thank You @avijayanhwx for review and @vivekratnavel for the contribution.
   Test failures are not related to this patch.
   I will commit this shortly.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270439)
Time Spent: 3h 40m  (was: 3.5h)

> Recon: Add support for "start" query param to containers and containers/{id} 
> endpoints
> --
>
> Key: HDDS-1685
> URL: https://issues.apache.org/jira/browse/HDDS-1685
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> * Support "start" query param to seek to the given key in RocksDB.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1685) Recon: Add support for "start" query param to containers and containers/{id} endpoints

2019-07-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-1685.
--
   Resolution: Fixed
Fix Version/s: 0.4.1

> Recon: Add support for "start" query param to containers and containers/{id} 
> endpoints
> --
>
> Key: HDDS-1685
> URL: https://issues.apache.org/jira/browse/HDDS-1685
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> * Support "start" query param to seek to the given key in RocksDB.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1685) Recon: Add support for "start" query param to containers and containers/{id} endpoints

2019-07-01 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876449#comment-16876449
 ] 

Hudson commented on HDDS-1685:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16846 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16846/])
HDDS-1685. Recon: Add support for "start" query param to containers and 
(bharat: rev db674a0b143b495a7d0c63d3a3c269c29763f6bc)
* (edit) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/ContainerDBServiceProvider.java
* (edit) 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestContainerKeyService.java
* (edit) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/ReconConstants.java
* (edit) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ContainerDBServiceProviderImpl.java
* (edit) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/api/ContainerKeyService.java
* (edit) 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/spi/impl/TestContainerDBServiceProviderImpl.java


> Recon: Add support for "start" query param to containers and containers/{id} 
> endpoints
> --
>
> Key: HDDS-1685
> URL: https://issues.apache.org/jira/browse/HDDS-1685
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> * Support "start" query param to seek to the given key in RocksDB.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1685) Recon: Add support for "start" query param to containers and containers/{id} endpoints

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1685?focusedWorklogId=270438&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270438
 ]

ASF GitHub Bot logged work on HDDS-1685:


Author: ASF GitHub Bot
Created on: 01/Jul/19 19:44
Start Date: 01/Jul/19 19:44
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #987: 
HDDS-1685. Recon: Add support for 'start' query param to containers…
URL: https://github.com/apache/hadoop/pull/987
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270438)
Time Spent: 3.5h  (was: 3h 20m)

> Recon: Add support for "start" query param to containers and containers/{id} 
> endpoints
> --
>
> Key: HDDS-1685
> URL: https://issues.apache.org/jira/browse/HDDS-1685
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> * Support "start" query param to seek to the given key in RocksDB.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1685) Recon: Add support for "start" query param to containers and containers/{id} endpoints

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1685?focusedWorklogId=270437&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270437
 ]

ASF GitHub Bot logged work on HDDS-1685:


Author: ASF GitHub Bot
Created on: 01/Jul/19 19:43
Start Date: 01/Jul/19 19:43
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #987: HDDS-1685. 
Recon: Add support for 'start' query param to containers…
URL: https://github.com/apache/hadoop/pull/987#issuecomment-507398655
 
 
   +1 LGTM.
   Thank You @avijayanhwx for review and @vivekratnavel for the contribution.
   Test failures are not related to this patch.
   I will commit this shortly.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270437)
Time Spent: 3h 20m  (was: 3h 10m)

> Recon: Add support for "start" query param to containers and containers/{id} 
> endpoints
> --
>
> Key: HDDS-1685
> URL: https://issues.apache.org/jira/browse/HDDS-1685
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> * Support "start" query param to seek to the given key in RocksDB.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1115) Provide ozone specific top-level pom.xml

2019-07-01 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876417#comment-16876417
 ] 

Eric Yang commented on HDDS-1115:
-

Sorry, I am late to this discussion. I think pom.ozone.xml is a mistake. 
Everything that [~elek] described can be done by making hadoop-ozone-project a 
maven submodule that refers to hadoop-3.2.0 as its parent. The Hadoop submarine 
project has done this more elegantly, and I suggest following their pattern in 
HDDS-1661.

> Provide ozone specific top-level pom.xml
> 
>
> Key: HDDS-1115
> URL: https://issues.apache.org/jira/browse/HDDS-1115
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The Ozone build process doesn't require the pom.xml in the top-level hadoop 
> directory, as we use hadoop 3.2 artifacts as parents of hadoop-ozone and 
> hadoop-hdds. The ./pom.xml is used only to include the 
> hadoop-ozone/hadoop-hdds projects in the maven reactor.
> From the command line, it's easy to build only the ozone artifacts:
> {code}
> mvn clean install -Phdds -am -pl :hadoop-ozone-dist \
>   -Danimal.sniffer.skip=true -Denforcer.skip=true
> {code}
> Here '-pl' selects the hadoop-ozone-dist project to build,
> and '-am' also builds all of its dependencies from the source tree 
> (hadoop-ozone-common, hadoop-hdds-common, etc.).
> But this filtering is available only from the command line.
> By providing a lightweight pom.ozone.xml we can achieve the same:
>  * We can open only the hdds/ozone projects in the IDE/IntelliJ. It makes 
> development faster, as the IDE doesn't need to reindex all the sources all the 
> time, and it's easy to run the IntelliJ checkstyle/findbugs plugins against 
> the whole project.
>  * Longer term we should create an ozone-specific source artifact (currently 
> the source artifact for the hadoop and ozone releases is the same), which also 
> requires a simplified pom.
> In this patch I also added the .mvn directory to the .gitignore file.
> With 
> {code}
> mkdir -p .mvn && echo "-f pom.ozone.xml" > .mvn/maven.config
> {code}
> you can persist the usage of pom.ozone.xml for all subsequent builds (in the 
> same directory).
> How to test?
> Just do a 'mvn -f pom.ozone.xml clean install -DskipTests'
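
As a usage sketch of the persisted flag (assuming Maven 3.3.1+ reads 
.mvn/maven.config from the project directory; not re-verified here):

{code:bash}
# Persist the alternative pom for this working copy:
mkdir -p .mvn && echo "-f pom.ozone.xml" > .mvn/maven.config

# Subsequent invocations from the same directory pick the flag up automatically,
# so a plain build is enough:
mvn clean install -DskipTests
{code}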



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14358) Provide LiveNode and DeadNode filter in DataNode UI

2019-07-01 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876411#comment-16876411
 ] 

Surendra Singh Lilhore commented on HDFS-14358:
---

Changes look good to me. Some minor alignment needs to be taken care of.
{quote}check box makes the UI odd , as the DN's states may increase in future 
which makes check box row/column unordered
{quote}
I agree with this. Adding a checkbox is not good.

{quote}The new filter guess need to be linked with the DFSHealth.html Summary 
elements too, That it directly leads to the required part.Presently clicking on 
dead nodes too, lead to Datanode page with all, I guess on clicking DeadNodes 
we should show only dead.  Here :: {quote}

Even adding a filter on the overview page is not good; just redirect to the 
DataNode page, and then the user can filter based on their requirement.

> Provide LiveNode and DeadNode filter in DataNode UI
> ---
>
> Key: HDFS-14358
> URL: https://issues.apache.org/jira/browse/HDFS-14358
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.2
>Reporter: Ravuri Sushma sree
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14358(2).patch, hdfs-14358.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1258) Fix error propagation for SCM protocol

2019-07-01 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876407#comment-16876407
 ] 

Hudson commented on HDDS-1258:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16845 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16845/])
HDDS-1258. Fix error propagation for SCM protocol (elek: rev 
f8d62a9c4c03a637896bf4f1795176901c4e7235)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/protocolPB/ScmBlockLocationProtocolServerSideTranslatorPB.java
* (edit) hadoop-hdds/common/src/main/proto/ScmBlockLocationProtocol.proto
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/exceptions/SCMException.java
* (add) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/scm/exceptions/TestSCMExceptionResultCodes.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocolPB/ScmBlockLocationProtocolClientSideTranslatorPB.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/exceptions/package-info.java


> Fix error propagation for SCM protocol
> --
>
> Key: HDDS-1258
> URL: https://issues.apache.org/jira/browse/HDDS-1258
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Stephen O'Donnell
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> HDDS-1068 fixed the error propagation between the OM client and OM server.
> By default, Server.java transforms all IOExceptions to one string 
> (message + stack trace) and this is returned to the client.
> But for business exceptions (e.g. volume not found, chill mode is active, etc.) 
> this is not what we need.
> On the OM side we fixed this behaviour: in the ServerSideTranslator classes 
> we catch the business (OMException) exceptions on the server and serialize them to 
> the response object.
> The exception (and the status code) is stored in the message/status fields of the 
> OMResponse (hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto).
> Here I propose to do the same for ScmBlockLocationProtocol.proto.
> Unfortunately there is no common parent object (like OMRequest) in this 
> protocol, but we can easily add one, as only the server-side/client-side 
> translators need to be changed for that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1258) Fix error propagation for SCM protocol

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1258?focusedWorklogId=270400&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270400
 ]

ASF GitHub Bot logged work on HDDS-1258:


Author: ASF GitHub Bot
Created on: 01/Jul/19 18:20
Start Date: 01/Jul/19 18:20
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1001: HDDS-1258 - Fix 
error propagation for SCM protocol
URL: https://github.com/apache/hadoop/pull/1001
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270400)
Time Spent: 2h 20m  (was: 2h 10m)

> Fix error propagation for SCM protocol
> --
>
> Key: HDDS-1258
> URL: https://issues.apache.org/jira/browse/HDDS-1258
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Stephen O'Donnell
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> HDDS-1068 fixed the error propagation between the OM client and OM server.
> By default, Server.java transforms all IOExceptions to one string 
> (message + stack trace) and this is returned to the client.
> But for business exceptions (e.g. volume not found, chill mode is active, etc.) 
> this is not what we need.
> On the OM side we fixed this behaviour: in the ServerSideTranslator classes 
> we catch the business (OMException) exceptions on the server and serialize them to 
> the response object.
> The exception (and the status code) is stored in the message/status fields of the 
> OMResponse (hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto).
> Here I propose to do the same for ScmBlockLocationProtocol.proto.
> Unfortunately there is no common parent object (like OMRequest) in this 
> protocol, but we can easily add one, as only the server-side/client-side 
> translators need to be changed for that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1258) Fix error propagation for SCM protocol

2019-07-01 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1258:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Fix error propagation for SCM protocol
> --
>
> Key: HDDS-1258
> URL: https://issues.apache.org/jira/browse/HDDS-1258
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Stephen O'Donnell
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> HDDS-1068 fixed the error propagation between the OM client and OM server.
> By default, Server.java transforms all IOExceptions to one string 
> (message + stack trace) and this is returned to the client.
> But for business exceptions (e.g. volume not found, chill mode is active, etc.) 
> this is not what we need.
> On the OM side we fixed this behaviour: in the ServerSideTranslator classes 
> we catch the business (OMException) exceptions on the server and serialize them to 
> the response object.
> The exception (and the status code) is stored in the message/status fields of the 
> OMResponse (hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto).
> Here I propose to do the same for ScmBlockLocationProtocol.proto.
> Unfortunately there is no common parent object (like OMRequest) in this 
> protocol, but we can easily add one, as only the server-side/client-side 
> translators need to be changed for that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-14585) Backport HDFS-8901 Use ByteBuffer in DFSInputStream#read to branch2.9

2019-07-01 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HDFS-14585.
--
Resolution: Fixed

Reapplied w/ proper commit message. Re-resolving.

> Backport HDFS-8901 Use ByteBuffer in DFSInputStream#read to branch2.9
> -
>
> Key: HDFS-14585
> URL: https://issues.apache.org/jira/browse/HDFS-14585
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 2.10.0, 2.9.3
>
> Attachments: HDFS-14585.branch-2.9.v1.patch, 
> HDFS-14585.branch-2.9.v2.patch, HDFS-14585.branch-2.9.v2.patch, 
> HDFS-14585.branch-2.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-14585) Backport HDFS-8901 Use ByteBuffer in DFSInputStream#read to branch2.9

2019-07-01 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack reopened HDFS-14585:
--

Reopening. The commit message was missing the JIRA number, so reverting and 
reapplying with a fixed commit message.

> Backport HDFS-8901 Use ByteBuffer in DFSInputStream#read to branch2.9
> -
>
> Key: HDFS-14585
> URL: https://issues.apache.org/jira/browse/HDFS-14585
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 2.10.0, 2.9.3
>
> Attachments: HDFS-14585.branch-2.9.v1.patch, 
> HDFS-14585.branch-2.9.v2.patch, HDFS-14585.branch-2.9.v2.patch, 
> HDFS-14585.branch-2.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1258) Fix error propagation for SCM protocol

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1258?focusedWorklogId=270387&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270387
 ]

ASF GitHub Bot logged work on HDDS-1258:


Author: ASF GitHub Bot
Created on: 01/Jul/19 18:13
Start Date: 01/Jul/19 18:13
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #1001: HDDS-1258 - Fix error 
propagation for SCM protocol
URL: https://github.com/apache/hadoop/pull/1001#issuecomment-507369570
 
 
   Wow, full green build check with no intermittent failures. How did you do 
that? ;-)
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270387)
Time Spent: 2h 10m  (was: 2h)

> Fix error propagation for SCM protocol
> --
>
> Key: HDDS-1258
> URL: https://issues.apache.org/jira/browse/HDDS-1258
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Stephen O'Donnell
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> HDDS-1068 fixed the error propagation between the OM client and OM server.
> By default, Server.java transforms all IOExceptions to one string 
> (message + stack trace) and this is returned to the client.
> But for business exceptions (e.g. volume not found, chill mode is active, etc.) 
> this is not what we need.
> On the OM side we fixed this behaviour: in the ServerSideTranslator classes 
> we catch the business (OMException) exceptions on the server and serialize them to 
> the response object.
> The exception (and the status code) is stored in the message/status fields of the 
> OMResponse (hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto).
> Here I propose to do the same for ScmBlockLocationProtocol.proto.
> Unfortunately there is no common parent object (like OMRequest) in this 
> protocol, but we can easily add one, as only the server-side/client-side 
> translators need to be changed for that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


