[GitHub] avijayanhwx opened a new pull request #494: HDDS-1085 : Create an OM API to serve snapshots to Recon server.

2019-02-15 Thread GitBox
avijayanhwx opened a new pull request #494: HDDS-1085 : Create an OM API to 
serve snapshots to Recon server.
URL: https://github.com/apache/hadoop/pull/494
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15967) KMS Benchmark Tool

2019-02-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16770016#comment-16770016
 ] 

Hadoop QA commented on HADOOP-15967:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
54s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
11s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15967 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958956/HADOOP-15967.003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4469e16186a8 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / dde0ab5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15927/testReport/ |
| Max. process+thread count | 339 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15927/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> KMS Benchmark Tool
> --
>
> Key: HADOOP-15967
> URL: https://issues.apache.org/jira/browse/HADOOP-15967
>

[jira] [Updated] (HADOOP-15967) KMS Benchmark Tool

2019-02-15 Thread George Huang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

George Huang updated HADOOP-15967:
--
Attachment: HADOOP-15967.003.patch

> KMS Benchmark Tool
> --
>
> Key: HADOOP-15967
> URL: https://issues.apache.org/jira/browse/HADOOP-15967
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: George Huang
>Priority: Major
> Attachments: HADOOP-15967.001.patch, HADOOP-15967.002.patch, 
> HADOOP-15967.003.patch
>
>
> We've been working on several pieces of KMS improvement work. One thing 
> that's missing so far is a good benchmark tool for KMS, similar to 
> NNThroughputBenchmark.
> Some requirements I have in mind:
> # it should be a standalone benchmark tool, requiring only KMS and a 
> benchmark client. No NameNode or DataNode should be involved.
> # specify the type of KMS request sent by client. E.g., generate_eek, 
> decrypt_eek, reencrypt_eek
> # optionally specify number of threads sending KMS requests.
> File this jira to gather more requirements. Thoughts? [~knanasi] [~xyao] 
> [~daryn]
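The threading requirement (#3) above could be sketched roughly as follows. Everything here is a hypothetical stand-in, since the benchmark client does not exist yet: the KmsOp interface models a single KMS request (e.g. decrypt_eek), and a real tool would issue actual KMS calls instead of no-ops.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

/** Skeleton of a standalone KMS benchmark driver; all names are illustrative. */
public class KmsBenchmark {

  /** Stand-in for one KMS request, e.g. generate_eek / decrypt_eek / reencrypt_eek. */
  interface KmsOp { void run(); }

  /** Fires requestsPerThread ops from each of nThreads threads; returns ops completed. */
  static long run(int nThreads, int requestsPerThread, KmsOp op) {
    ExecutorService pool = Executors.newFixedThreadPool(nThreads);
    AtomicLong done = new AtomicLong();
    for (int t = 0; t < nThreads; t++) {
      pool.execute(() -> {
        for (int i = 0; i < requestsPerThread; i++) {
          op.run();                 // a real client would hit the KMS here
          done.incrementAndGet();
        }
      });
    }
    pool.shutdown();
    try {
      pool.awaitTermination(1, TimeUnit.MINUTES);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    return done.get();
  }

  public static void main(String[] args) {
    // With a real KmsOp, elapsed time around this call gives requests/second.
    System.out.println("completed=" + run(4, 100, () -> { /* no-op in this sketch */ }));
  }
}
```

A real tool would also time the run and report throughput per operation type, similar to NNThroughputBenchmark's per-op reporting.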



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (HADOOP-15967) KMS Benchmark Tool

2019-02-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769995#comment-16769995
 ] 

Hadoop QA commented on HADOOP-15967:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
55s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} hadoop-common-project/hadoop-kms: The patch 
generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
9s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15967 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958952/HADOOP-15967.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c5fa0da822de 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / dde0ab5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15926/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-kms.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15926/testReport/ |
| Max. process+thread count | 294 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15926/console |
| Powered by | Apache Yetus 0.8.0 

[jira] [Commented] (HADOOP-15967) KMS Benchmark Tool

2019-02-15 Thread George Huang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769973#comment-16769973
 ] 

George Huang commented on HADOOP-15967:
---

Thanks [~jojochuang]! A newer patch has been uploaded that addresses the review comments and checkstyle issues.

> KMS Benchmark Tool
> --
>
> Key: HADOOP-15967
> URL: https://issues.apache.org/jira/browse/HADOOP-15967
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: George Huang
>Priority: Major
> Attachments: HADOOP-15967.001.patch, HADOOP-15967.002.patch
>
>
> We've been working on several pieces of KMS improvement work. One thing 
> that's missing so far is a good benchmark tool for KMS, similar to 
> NNThroughputBenchmark.
> Some requirements I have in mind:
> # it should be a standalone benchmark tool, requiring only KMS and a 
> benchmark client. No NameNode or DataNode should be involved.
> # specify the type of KMS request sent by client. E.g., generate_eek, 
> decrypt_eek, reencrypt_eek
> # optionally specify number of threads sending KMS requests.
> File this jira to gather more requirements. Thoughts? [~knanasi] [~xyao] 
> [~daryn]






[jira] [Updated] (HADOOP-15967) KMS Benchmark Tool

2019-02-15 Thread George Huang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

George Huang updated HADOOP-15967:
--
Attachment: HADOOP-15967.002.patch

> KMS Benchmark Tool
> --
>
> Key: HADOOP-15967
> URL: https://issues.apache.org/jira/browse/HADOOP-15967
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: George Huang
>Priority: Major
> Attachments: HADOOP-15967.001.patch, HADOOP-15967.002.patch
>
>
> We've been working on several pieces of KMS improvement work. One thing 
> that's missing so far is a good benchmark tool for KMS, similar to 
> NNThroughputBenchmark.
> Some requirements I have in mind:
> # it should be a standalone benchmark tool, requiring only KMS and a 
> benchmark client. No NameNode or DataNode should be involved.
> # specify the type of KMS request sent by client. E.g., generate_eek, 
> decrypt_eek, reencrypt_eek
> # optionally specify number of threads sending KMS requests.
> File this jira to gather more requirements. Thoughts? [~knanasi] [~xyao] 
> [~daryn]






[jira] [Commented] (HADOOP-11223) Offer a read-only conf alternative to new Configuration()

2019-02-15 Thread Michael Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769789#comment-16769789
 ] 

Michael Miller commented on HADOOP-11223:
-

{quote}the "Unmodifiable" one is not immutable (see earlier discussion about 
addDefaultResource).
{quote}
Correct. I understand a truly immutable object is not possible because of the 
static methods, which is why we would settle for an "Unmodifiable" object. This 
would at least prevent changes on the object itself, for example, in our 
testing framework:
{code:java}
public void test(ServerContext context) {
   Configuration conf  = context.getHadoopConf();
   conf.set("prop", "value");
}
{code}
Here we would prefer that an error be thrown, since the framework expects 
configuration to be set a certain way. It is a lot easier to make this mistake 
than to think "I want to change a property, so I am going to create another 
configuration, call this static method and reload it".
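A rough sketch of the guard being asked for here. A toy Config class stands in for Hadoop's Configuration (the actual patch wraps the real class, so everything below is purely illustrative): reads pass through, writes fail fast instead of silently mutating shared state.

```java
import java.util.HashMap;
import java.util.Map;

/** Toy stand-in for org.apache.hadoop.conf.Configuration; names are illustrative. */
class Config {
  private final Map<String, String> props = new HashMap<>();

  public void set(String k, String v) { props.put(k, v); }
  public String get(String k) { return props.get(k); }

  /** Read-only view: get() still works, set() throws instead of mutating. */
  public Config unmodifiable() {
    Config self = this;
    return new Config() {
      @Override public void set(String k, String v) {
        throw new UnsupportedOperationException("read-only configuration: " + k);
      }
      @Override public String get(String k) { return self.get(k); }
    };
  }
}
```

With this, the test in the comment above would throw on conf.set("prop", "value") rather than silently corrupting the framework's expected configuration.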

> Offer a read-only conf alternative to new Configuration()
> -
>
> Key: HADOOP-11223
> URL: https://issues.apache.org/jira/browse/HADOOP-11223
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Gopal V
>Assignee: Michael Miller
>Priority: Major
>  Labels: Performance
> Attachments: HADOOP-11223.001.patch, HADOOP-11223.002.patch, 
> HADOOP-11223.003.patch
>
>
> new Configuration() is called from several static blocks across Hadoop.
> This is incredibly inefficient, since each one of those involves primarily 
> XML parsing at a point where the JIT won't be triggered & interpreter mode is 
> essentially forced on the JVM.
> The alternate solution would be to offer a {{Configuration::getDefault()}} 
> alternative which disallows any modifications.
> At the very least, such a method would need to be called from 
> # org.apache.hadoop.io.nativeio.NativeIO::()
> # org.apache.hadoop.security.SecurityUtil::()
> # org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider::
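The {{Configuration::getDefault()}} idea above amounts to parse-once-then-share. A minimal sketch using the lazy-holder idiom, with a plain map standing in for the expensively parsed defaults (the real method would return a read-only Hadoop Configuration; all names here are hypothetical):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

/** Sketch: process-wide defaults parsed once, instead of once per static block. */
final class DefaultConf {
  private DefaultConf() {}

  // Initialized on first use; the JVM guarantees thread-safe class initialization.
  private static final class Holder {
    static final Map<String, String> DEFAULTS = load();
  }

  private static Map<String, String> load() {
    // Stands in for the one-time XML parse of core-default.xml etc.
    Map<String, String> m = new HashMap<>();
    m.put("fs.defaultFS", "file:///");
    return Collections.unmodifiableMap(m);
  }

  /** Shared read-only view; any attempt to modify it throws. */
  static Map<String, String> getDefault() { return Holder.DEFAULTS; }
}
```

Callers such as the static initializers listed above would then share one parsed instance rather than each paying the XML-parsing cost in interpreter mode.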






[jira] [Commented] (HADOOP-15525) s3a: clarify / improve support for mixed ACL buckets

2019-02-15 Thread t oo (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769765#comment-16769765
 ] 

t oo commented on HADOOP-15525:
---

gentle ping

> s3a: clarify / improve support for mixed ACL buckets
> 
>
> Key: HADOOP-15525
> URL: https://issues.apache.org/jira/browse/HADOOP-15525
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Major
>
> Scenario: customer wants to only give a Hadoop cluster access to a subtree of 
> an S3 bucket.
> For example, assume Hadoop uses some IAM identity "hadoop", which they wish 
> to grant full permission to everything under the following path:
> s3a://bucket/a/b/c/hadoop-dir
> they don't want the hadoop user to be able to read/list/delete anything outside 
> of the hadoop-dir "subdir".
> Problems: 
> To implement the "directory structure on flat key space" emulation logic we 
> use to present a Hadoop FS on top of a blob store, we need to create / delete 
> / list ancestors of {{hadoop-dir}}. (to maintain the invariants (1) zero-byte 
> object with key ending in '/' exists iff empty directory is there and (2) 
> files cannot live beneath files, only directories.)
> I'd like us to (1) document an example with IAM ACL policies that achieves 
> this basic functionality, and (2) consider making improvements to make this 
> easier.
> We've discussed some of these issues before but I didn't see a dedicated JIRA.
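For reference, a minimal sketch of the kind of IAM policy being discussed, assuming the bucket layout in the description. This is precisely the example that point (1) asks to have documented and verified; as written it does not yet grant the ancestor list/HEAD operations the directory-emulation logic needs, which is the gap point (2) would address:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SubtreeObjectAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::bucket/a/b/c/hadoop-dir/*"
    },
    {
      "Sid": "SubtreeListing",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::bucket",
      "Condition": { "StringLike": { "s3:prefix": ["a/b/c/hadoop-dir/*"] } }
    }
  ]
}
```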






[jira] [Commented] (HADOOP-11223) Offer a read-only conf alternative to new Configuration()

2019-02-15 Thread Gopal V (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769741#comment-16769741
 ] 

Gopal V commented on HADOOP-11223:
--

bq. Mainly for performance and concurrency reasons we don't want the Hadoop 
Configuration to change or be re-read.

This is not guaranteed by this patch, unfortunately - the "Unmodifiable" one is 
not immutable (see earlier discussion about addDefaultResource).

> Offer a read-only conf alternative to new Configuration()
> -
>
> Key: HADOOP-11223
> URL: https://issues.apache.org/jira/browse/HADOOP-11223
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Gopal V
>Assignee: Michael Miller
>Priority: Major
>  Labels: Performance
> Attachments: HADOOP-11223.001.patch, HADOOP-11223.002.patch, 
> HADOOP-11223.003.patch
>
>
> new Configuration() is called from several static blocks across Hadoop.
> This is incredibly inefficient, since each one of those involves primarily 
> XML parsing at a point where the JIT won't be triggered & interpreter mode is 
> essentially forced on the JVM.
> The alternate solution would be to offer a {{Configuration::getDefault()}} 
> alternative which disallows any modifications.
> At the very least, such a method would need to be called from 
> # org.apache.hadoop.io.nativeio.NativeIO::()
> # org.apache.hadoop.security.SecurityUtil::()
> # org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider::






[GitHub] prasanthj commented on a change in pull request #491: HDDS-1116. Add java profiler servlet to the Ozone web servers

2019-02-15 Thread GitBox
prasanthj commented on a change in pull request #491: HDDS-1116. Add java 
profiler servlet to the Ozone web servers
URL: https://github.com/apache/hadoop/pull/491#discussion_r257394705
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozone/docker-compose.yaml
 ##
 @@ -18,6 +18,7 @@ version: "3"
 services:
datanode:
   image: apache/hadoop-runner
+  privileged: true #required by the profiler
 
 Review comment:
   Sorry, I missed the part that it is running using docker-compose and not 
k8s. 





[GitHub] prasanthj commented on a change in pull request #491: HDDS-1116. Add java profiler servlet to the Ozone web servers

2019-02-15 Thread GitBox
prasanthj commented on a change in pull request #491: HDDS-1116. Add java 
profiler servlet to the Ozone web servers
URL: https://github.com/apache/hadoop/pull/491#discussion_r257394453
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozone/docker-compose.yaml
 ##
 @@ -18,6 +18,7 @@ version: "3"
 services:
datanode:
   image: apache/hadoop-runner
+  privileged: true #required by the profiler
 
 Review comment:
   This can be avoided with an initContainer running in privileged mode that 
applies the following:
   ```
   sudo bash -c 'echo 1 > /proc/sys/kernel/perf_event_paranoid'
   sudo bash -c 'echo 0 > /proc/sys/kernel/kptr_restrict'
   ```
   With this, the initContainer will apply the required changes and complete; 
the main container can still run in non-privileged mode. 
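   Sketched as a Kubernetes pod fragment, that suggestion might look like the following (container names and image are illustrative; note both sysctls are kernel-wide, so the privileged initContainer changes them for the whole node):

   ```yaml
   # Hypothetical pod-spec fragment; only the short-lived initContainer needs privilege.
   initContainers:
     - name: perf-sysctls            # illustrative name
       image: busybox                # any image with a shell works
       securityContext:
         privileged: true
       command:
         - sh
         - -c
         - |
           echo 1 > /proc/sys/kernel/perf_event_paranoid
           echo 0 > /proc/sys/kernel/kptr_restrict
   containers:
     - name: datanode                # main container stays unprivileged
       image: apache/hadoop-runner
   ```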





[jira] [Commented] (HADOOP-11223) Offer a read-only conf alternative to new Configuration()

2019-02-15 Thread Michael Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769739#comment-16769739
 ] 

Michael Miller commented on HADOOP-11223:
-

{quote}
Michael Miller Sorry to join the party so late, just want to understand how 
Accumulo would use this feature: it is still not a read-only config, but it 
surely benefits you in some way. I would like to understand the use case a 
little better. Thanks
{quote}

Accumulo currently does not use this feature but it is something that would be 
beneficial.  We previously had a static class that would save the Configuration 
object the first time it was created and then return that object whenever it 
was needed.  There were a lot of changes internally to eliminate static state 
like this and only construct some objects once at startup of the server.  
Mainly for performance and concurrency reasons we don't want the Hadoop 
Configuration to change or be re-read.

> Offer a read-only conf alternative to new Configuration()
> -
>
> Key: HADOOP-11223
> URL: https://issues.apache.org/jira/browse/HADOOP-11223
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Gopal V
>Assignee: Michael Miller
>Priority: Major
>  Labels: Performance
> Attachments: HADOOP-11223.001.patch, HADOOP-11223.002.patch, 
> HADOOP-11223.003.patch
>
>
> new Configuration() is called from several static blocks across Hadoop.
> This is incredibly inefficient, since each one of those involves primarily 
> XML parsing at a point where the JIT won't be triggered & interpreter mode is 
> essentially forced on the JVM.
> The alternate solution would be to offer a {{Configuration::getDefault()}} 
> alternative which disallows any modifications.
> At the very least, such a method would need to be called from 
> # org.apache.hadoop.io.nativeio.NativeIO::()
> # org.apache.hadoop.security.SecurityUtil::()
> # org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider::






[jira] [Commented] (HADOOP-11223) Offer a read-only conf alternative to new Configuration()

2019-02-15 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769711#comment-16769711
 ] 

Anu Engineer commented on HADOOP-11223:
---

[~milleruntime] Sorry to join the party so late, just want to understand how 
Accumulo would use this feature: it is still not a read-only config, but it 
surely benefits you in some way. I would like to understand the use case a 
little better. Thanks


> Offer a read-only conf alternative to new Configuration()
> -
>
> Key: HADOOP-11223
> URL: https://issues.apache.org/jira/browse/HADOOP-11223
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Gopal V
>Assignee: Michael Miller
>Priority: Major
>  Labels: Performance
> Attachments: HADOOP-11223.001.patch, HADOOP-11223.002.patch, 
> HADOOP-11223.003.patch
>
>
> new Configuration() is called from several static blocks across Hadoop.
> This is incredibly inefficient, since each one of those involves primarily 
> XML parsing at a point where the JIT won't be triggered & interpreter mode is 
> essentially forced on the JVM.
> The alternate solution would be to offer a {{Configuration::getDefault()}} 
> alternative which disallows any modifications.
> At the very least, such a method would need to be called from 
> # org.apache.hadoop.io.nativeio.NativeIO::()
> # org.apache.hadoop.security.SecurityUtil::()
> # org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider::






[jira] [Commented] (HADOOP-15563) S3guard init and set-capacity to support DDB autoscaling

2019-02-15 Thread Prasanth Jayachandran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769693#comment-16769693
 ] 

Prasanth Jayachandran commented on HADOOP-15563:


Does s3guard support "on-demand" mode (introduced recently) for DDB tables?

> S3guard init and set-capacity to support DDB autoscaling
> 
>
> Key: HADOOP-15563
> URL: https://issues.apache.org/jira/browse/HADOOP-15563
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
>
> To keep costs down on DDB, autoscaling is a key feature: you set the max 
> values and when idle, you don't get billed, *at the cost of delayed scale 
> time and risk of not getting the max value when AWS is busy*
> It can be done from the AWS web UI, but not in the s3guard init and 
> set-capacity calls
> It can be done [through the 
> API|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.HowTo.SDK.html]
> Usual issues then: wiring up, CLI params, testing. It'll be hard to test.






[jira] [Commented] (HADOOP-15979) Add Netty support to the RPC client

2019-02-15 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769667#comment-16769667
 ] 

Wei-Chiu Chuang commented on HADOOP-15979:
--

Regarding NettyIpcStreams:
 * It may be easier to debug if {{getInputStream()}} returns a named class, 
rather than an anonymous class. Similarly for {{getOutputStream()}}.

 
{code:java}
IOException timeout(String op) {
  return new SocketTimeoutException(
  soTimeout + " millis timeout while " +
  "waiting for channel to be ready for " +
  op + ". ch : " + channel);
}
{code}
 * For troubleshooting, it would be helpful, when the client hits a socket 
timeout, to throw a SocketTimeoutException that also records the peer address. 
Not sure if NioSocketChannel.toString() does that. Maybe we can add the peer 
address in toException() too.
 * Is channelInactive() still needed?
 * Should there be an interrupt test for NettyIpcStreams, similar to 
{{TestIPC#testInterrupted}}?
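The peer-address point could look something like the sketch below. The channel and soTimeout fields here are stand-ins for the patch's actual state (the real code holds a Netty NioSocketChannel), so this is illustrative only:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.SocketTimeoutException;

/** Sketch of a timeout helper that names the remote peer for troubleshooting. */
class TimeoutHelper {
  private final int soTimeout;
  private final InetSocketAddress remote;  // stands in for channel.remoteAddress()

  TimeoutHelper(int soTimeout, InetSocketAddress remote) {
    this.soTimeout = soTimeout;
    this.remote = remote;
  }

  /** Builds the exception with both the timeout and the peer in the message. */
  IOException timeout(String op) {
    return new SocketTimeoutException(soTimeout
        + " millis timeout while waiting for channel to be ready for "
        + op + "; remote=" + remote);
  }
}
```

An operator reading logs then sees which NameNode or peer the client was stuck on, not just the local channel state.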

> Add Netty support to the RPC client
> ---
>
> Key: HADOOP-15979
> URL: https://issues.apache.org/jira/browse/HADOOP-15979
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, security
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Major
> Attachments: HADOOP-15979.patch
>
>
> Adding Netty will allow later using a native TLS transport layer with much 
> better performance than that offered by Java's SSLEngine.






[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-15 Thread Ben Roling (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769624#comment-16769624
 ] 

Ben Roling commented on HADOOP-15625:
-

Hey [~ste...@apache.org] - just wanted to check to see if you've had a chance 
to review my latest comments.  I'd like to keep moving this forward but will 
need a little more guidance.  I very much appreciate your help so far.

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP-15625-001.patch, HADOOP-15625-002.patch, 
> HADOOP-15625-003.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice the file has 
> changed, it caches the length from startup, and whenever a seek triggers a new 
> GET you may get old data, new data, or even perhaps go from new data back to 
> old data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verify the etag of the response
> # raise an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.
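The three steps above amount to remembering the ETag from the first request and comparing it on every later GET. A minimal standalone sketch of that idea (hypothetical class, not the actual S3A patch, which must also integrate with S3Guard):

```java
import java.io.IOException;

// Minimal change-detection sketch: capture the ETag seen on the first
// request and raise an IOException if any later response disagrees.
final class EtagChangeDetector {
    private String expectedEtag;

    void onResponse(String responseEtag) throws IOException {
        if (expectedEtag == null) {
            expectedEtag = responseEtag;   // step 1: cache the first ETag
        } else if (!expectedEtag.equals(responseEtag)) {
            // step 3: fail loudly instead of silently mixing old and new data
            throw new IOException("Remote file changed during read: ETag was "
                + expectedEtag + " but is now " + responseEtag);
        }
        // step 2: a matching ETag means the object is unchanged; carry on
    }
}
```

Each ranged GET issued by a seek would feed its response ETag through `onResponse`, turning a silent data mix-up into an explicit read failure.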






[GitHub] bharatviswa504 merged pull request #488: HDDS-1114. Fix findbugs/checkstyle/acceptance errors in Ozone

2019-02-15 Thread GitBox
bharatviswa504 merged pull request #488: HDDS-1114. Fix 
findbugs/checkstyle/acceptance errors in Ozone
URL: https://github.com/apache/hadoop/pull/488
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15979) Add Netty support to the RPC client

2019-02-15 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769565#comment-16769565
 ] 

Wei-Chiu Chuang commented on HADOOP-15979:
--

Thanks a lot for the patch, [~daryn]!

 

The shading stuff will need an update, like what I mentioned in HADOOP-15978.

 

At first pass, it looks as if the variable {{sendParamsExecutor}} is no longer 
used.

{{NioIpcStreams#submit()}} calls {{Client.getClientExecutor()}}, which returns 

{{Client.clientExcecutorFactory.clientExecutor}}. But that is not initialized 
unless you call {{clientExcecutorFactory.refAndGetInstance()}}.

Simply put, shouldn't {{NioIpcStreams#submit()}} call 
{{sendParamsExecutor.submit()}} instead?

It doesn't fail tests because each Client object initializes {{sendParamsExecutor}}, 
which initializes {{Client.clientExcecutorFactory.clientExecutor}} properly. But if 
I remove {{sendParamsExecutor}} from the code (because it appears unused), tests 
break.
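The initialization hazard described above can be shown with a toy model (hypothetical names, not the actual Hadoop Client internals): a static executor that is only created by a ref-counting method stays null for any caller that reads the field directly.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Toy model: the executor is created only inside refAndGetInstance(),
// so a direct getter returns null until someone has taken a reference.
final class ClientExecutorFactory {
    private static ExecutorService clientExecutor;

    // The only place the executor is created.
    static synchronized ExecutorService refAndGetInstance() {
        if (clientExecutor == null) {
            clientExecutor = Executors.newSingleThreadExecutor();
        }
        return clientExecutor;
    }

    // Returns whatever is there, even if refAndGetInstance() never ran.
    static synchronized ExecutorService getClientExecutor() {
        return clientExecutor;
    }
}
```

This is why removing the field that happens to trigger initialization elsewhere can break callers that fetch the executor through the direct getter.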

 

> Add Netty support to the RPC client
> ---
>
> Key: HADOOP-15979
> URL: https://issues.apache.org/jira/browse/HADOOP-15979
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, security
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Major
> Attachments: HADOOP-15979.patch
>
>
> Adding Netty will allow later using a native TLS transport layer with much 
> better performance than that offered by Java's SSLEngine.






[GitHub] Praveen2112 opened a new pull request #493: HADOOP-16114 Ensure NetUtils#canonicalizeHost returns same canonicalized host name for a given host

2019-02-15 Thread GitBox
Praveen2112 opened a new pull request #493: HADOOP-16114 Ensure 
NetUtils#canonicalizeHost returns same canonicalized host name for a given host
URL: https://github.com/apache/hadoop/pull/493
 
 
   Currently `NetUtils#canonicalizeHost` returns different canonicalized host 
names for the same host; this patch resolves that.





[jira] [Commented] (HADOOP-16114) NetUtils#canonicalizeHost gives different value for same host

2019-02-15 Thread Praveen Krishna (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769532#comment-16769532
 ] 

Praveen Krishna commented on HADOOP-16114:
--

If this solution looks good, I'm happy to submit a patch for it.

> NetUtils#canonicalizeHost gives different value for same host
> -
>
> Key: HADOOP-16114
> URL: https://issues.apache.org/jira/browse/HADOOP-16114
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 2.7.6, 3.1.2
>Reporter: Praveen Krishna
>Priority: Minor
>
> NetUtils#canonicalizeHost uses ConcurrentHashMap#putIfAbsent to add an 
> entry to the cache:
> {code:java}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.putIfAbsent(host, fqHost);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
> }
> {code}
>  
> If two different threads invoke this method for the first time (so the 
> cache is empty) and SecurityUtil#getByName()#getHostName gives two 
> different values for the same host, only one fqHost would be added to the 
> cache and an invalid fqHost would be handed to one of the threads, which might 
> cause some APIs, such as `FileSystem#checkPath`, to fail the first time even if 
> the path is in the given file system. It might be better to modify the above 
> method to this:
>  
> {code:java}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.putIfAbsent(host, fqHost);
> fqHost = canonicalizedHostCache.get(host);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
> }
> {code}
>  
> So even if the other thread got a different host name, it will be replaced by 
> the cached value.
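The proposed fix can also be written without the second cache lookup, since ConcurrentHashMap#putIfAbsent returns the value that was already mapped. A minimal standalone sketch (the class and the injected resolver are hypothetical stand-ins, not the actual NetUtils/SecurityUtil code):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of the race fix: the loser of the putIfAbsent race adopts the
// winner's result, so every caller converges on one canonical value.
final class HostCanonicalizer {
    private final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
    // Stands in for SecurityUtil.getByName(host).getHostName().
    private final Function<String, String> resolver;

    HostCanonicalizer(Function<String, String> resolver) {
        this.resolver = resolver;
    }

    String canonicalize(String host) {
        String fqHost = cache.get(host);
        if (fqHost == null) {
            fqHost = resolver.apply(host);
            // putIfAbsent returns the previously mapped value (or null),
            // so no second get() is needed to pick up the cached winner.
            String previous = cache.putIfAbsent(host, fqHost);
            if (previous != null) {
                fqHost = previous;
            }
        }
        return fqHost;
    }
}
```

Even if the resolver returns a different name on every call, all threads that race on the same host end up returning the single value that made it into the cache.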






[jira] [Commented] (HADOOP-16113) Your project apache/hadoop is using buggy third-party libraries [WARNING]

2019-02-15 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769496#comment-16769496
 ] 

Anu Engineer commented on HADOOP-16113:
---

bq. Ozone team to upgrade log4j 2 and then tell us how we went,

Will do;

> Your project apache/hadoop is using buggy third-party libraries [WARNING]
> -
>
> Key: HADOOP-16113
> URL: https://issues.apache.org/jira/browse/HADOOP-16113
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kaifeng Huang
>Priority: Major
>
> Hi, there!
> We are a research team working on third-party library analysis. We have 
> found that some widely-used third-party libraries in your project have 
> major/critical bugs, which will degrade the quality of your project. We 
> highly recommend that you update those libraries to new versions.
> We have attached the buggy third-party libraries and corresponding jira 
> issue links below for you to have more detailed information.
>   1. org.apache.logging.log4j log4j-core(hadoop-hdds/common/pom.xml)
>   version: 2.11.0
>   Jira issues:
>   Log4j2 throws NoClassDefFoundError in Java 9
>   affectsVersions:2.10.0,2.11.0
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2129?filter=allopenissues
>   Empty Automatic-Module-Name Header
>   affectsVersions:2.10.0,2.11.0,3.0.0
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2254?filter=allopenissues
>   gc-free mixed async logging loses parameter values after the first 
> appender
>   affectsVersions:2.11.0
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2301?filter=allopenissues
>   Log4j 2.10+ not working with SLF4J 1.8 in OSGI environment
>   affectsVersions:2.10.0,2.11.0
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2305?filter=allopenissues
>   AsyncQueueFullMessageUtil causes unparsable message output
>   affectsVersions:2.11.0
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2318?filter=allopenissues
>   AbstractLogger NPE hides actual cause when getFormat returns null
>   affectsVersions:2.11.0
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2320?filter=allopenissues
>   AsyncLogger without specifying a level always uses ERROR
>   affectsVersions:2.11.0
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2321?filter=allopenissues
>   Errors thrown in formatting may stop background threads
>   affectsVersions:2.11.0
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2333?filter=allopenissues
>   JsonLayout not working with AsyncLoggerContextSelector in 2.11.0
>   affectsVersions:2.11.0
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2341?filter=allopenissues
>   Typo in log4j-api Activator
>   affectsVersions:2.11.0
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2343?filter=allopenissues
>   PropertiesUtil.reload() might throw NullPointerException
>   affectsVersions:2.11.0
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2355?filter=allopenissues
>   NameAbbreviator skips first fragments
>   affectsVersions:2.11.0,2.11.1
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2365?filter=allopenissues
>   Outputs wrong message when used within overridden Throwable method
>   affectsVersions:2.8.1,2.11.0
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2368?filter=allopenissues
>   StringBuilder escapeJson performs unnecessary Memory Allocations
>   affectsVersions:2.11.0
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2373?filter=allopenissues
>   fix the CacheEntry map in ThrowableProxy#toExtendedStackTrace to be put 
> and gotten with same key
>   affectsVersions:2.6.2,2.7,2.8,2.8.1,2.8.2,2.9.0,2.9.1,2.10.0,2.11.0
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2389?filter=allopenissues
>   Fix incorrect links in Log4j web documentation.
>   affectsVersions:2.11.0
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2390?filter=allopenissues
>   2. org.apache.httpcomponents httpclient(hadoop-project/pom.xml)
>   version: 4.5.2
>   Jira issues:
>   
> org.apache.http.impl.client.AbstractHttpClient#createClientConnectionManager 
> Does not account for context class loader
>   affectsVersions:4.4.1;4.5;4.5.1;4.5.2
>   
> https://issues.apache.org/jira/projects/HTTPCLIENT/issues/HTTPCLIENT-1727?filter=allopenissues
>   Memory Leak in OSGi support
>   affectsVersions:4.4.1;4.5.2
>   
> 

[GitHub] elek merged pull request #486: HDDS-1092. Use Java 11 JRE to run Ozone in containers

2019-02-15 Thread GitBox
elek merged pull request #486: HDDS-1092. Use Java 11 JRE to run Ozone in 
containers
URL: https://github.com/apache/hadoop/pull/486
 
 
   





[GitHub] elek commented on issue #486: HDDS-1092. Use Java 11 JRE to run Ozone in containers

2019-02-15 Thread GitBox
elek commented on issue #486: HDDS-1092. Use Java 11 JRE to run Ozone in 
containers
URL: https://github.com/apache/hadoop/pull/486#issuecomment-464110176
 
 
   Only the ozonefs check failed, and that will be fixed in #488.





[GitHub] elek commented on issue #488: HDDS-1114. Fix findbugs/checkstyle/acceptance errors in Ozone

2019-02-15 Thread GitBox
elek commented on issue #488: HDDS-1114. Fix findbugs/checkstyle/acceptance 
errors in Ozone
URL: https://github.com/apache/hadoop/pull/488#issuecomment-464109140
 
 
   It turned out that the error message should always be filled in for 
OMException. The status code itself is not enough, as external clients (e.g. 
ozonefs) check only the getMessage() of the exception. 
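A toy illustration of that point (hypothetical class, not the actual Ozone OMException): a status code alone is invisible to clients that only inspect getMessage().

```java
// If only the status code is set, getMessage() returns null, and callers
// that match on the message text see nothing useful.
class StatusCodedException extends Exception {
    final int statusCode;

    StatusCodedException(int statusCode, String message) {
        super(message);
        this.statusCode = statusCode;
    }
}
```

Constructing the exception with a null message leaves getMessage() null, which is exactly the situation message-matching clients cannot handle.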





[GitHub] elek opened a new pull request #492: HDDS-1117. Add async profiler to the hadoop-runner base container image.

2019-02-15 Thread GitBox
elek opened a new pull request #492: HDDS-1117. Add async profiler to the 
hadoop-runner base container image.
URL: https://github.com/apache/hadoop/pull/492
 
 
   HDDS-1116 provides a simple servlet to execute the async profiler 
(https://github.com/jvm-profiling-tools/async-profiler), thanks to the Hive 
developers.
   
   To run it in the docker-compose based example environments we should add it 
to the apache/hadoop-runner base image. 
   
   Note: the size is not significant; the downloadable package is 102k.
   
   





[GitHub] elek opened a new pull request #491: HDDS-1116. Add java profiler servlet to the Ozone web servers

2019-02-15 Thread GitBox
elek opened a new pull request #491: HDDS-1116. Add java profiler servlet to 
the Ozone web servers
URL: https://github.com/apache/hadoop/pull/491
 
 
   Thanks to [~gopalv] we learned that [~prasanth_j] implemented a helper 
servlet in Hive to initialize new [async 
profiler|https://github.com/jvm-profiling-tools/async-profiler] sessions and 
provide the svg-based flame graph over HTTP (see HIVE-20202).
   
   It seems very useful, as this approach makes profiling very easy.
   
   This patch imports the servlet from the Hive code base into the Ozone code 
base with minor modifications (to make it work with our servlet containers):
   
* The two servlets are unified to one
* Streaming the svg to the browser based on IOUtils.copy 
* Output message is improved
   
   By default the profile servlet is turned off, but you can enable it with the 
'hdds.profiler.endpoint.enabled=true' setting in ozone-site.xml. In that case you 
can access the /prof endpoint from scm, om and s3g. 
   
   You should install the async profiler first 
(https://github.com/jvm-profiling-tools/async-profiler) and set the 
ASYNC_PROFILER_HOME environment variable so it can be found. 
   
   See: https://issues.apache.org/jira/browse/HDDS-1116





[jira] [Commented] (HADOOP-15967) KMS Benchmark Tool

2019-02-15 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769329#comment-16769329
 ] 

Wei-Chiu Chuang commented on HADOOP-15967:
--

Thanks for making the patch, [~ghuangups]!

Overall it looks good. I've also run the benchmark and it works well.

Some nits I found in the code:

 
{code:java}
* For usage, please see <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Benchmarking.html#KMSBenchmark">the documentation</a>.
{code}
There is no doc yet. We should file another Jira to add the doc.

 

 
{code:java}
LOG.warn("encryption key already exists: ",
encryptionKeyName);
{code}
Missing a {}:
{code:java}
LOG.warn("encryption key already exists: {}",
encryptionKeyName);
{code}
{code:java}
final String HADOOP_SECURITY_KEY_PROVIDER_PATH =
"hadoop.security.key.provider.path";
{code}
Could you use  
{{CommonConfigurationKeysPublic.HADOOP_SECURITY_KEY_PROVIDER_PATH}} instead?
  
{code:java}
if (keyProvider == null) {
return null;
}
{code}
I think we want to throw an exception saying key provider is not configured. 
Otherwise a null keyProvider will result in an NPE when accessed later, and 
that would be hard to troubleshoot.
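The last nit can be sketched as a fail-fast helper; the class and method names here are hypothetical and are not part of the actual patch:

```java
// Throwing immediately with the configuration key in the message is far
// easier to troubleshoot than a NullPointerException later on.
final class KeyProviderCheck {
    static <T> T requireKeyProvider(T keyProvider, String configKey) {
        if (keyProvider == null) {
            throw new IllegalStateException(
                "No KeyProvider configured; please set " + configKey);
        }
        return keyProvider;
    }
}
```

The benchmark would call this right after loading the provider, so a missing `hadoop.security.key.provider.path` surfaces as a clear configuration error at startup.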

> KMS Benchmark Tool
> --
>
> Key: HADOOP-15967
> URL: https://issues.apache.org/jira/browse/HADOOP-15967
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: George Huang
>Priority: Major
> Attachments: HADOOP-15967.001.patch
>
>
> We've been working on several pieces of KMS improvement work. One thing 
> that's missing so far is a good benchmark tool for KMS, similar to 
> NNThroughputBenchmark.
> Some requirements I have in mind:
> # it should be a standalone benchmark tool, requiring only KMS and a 
> benchmark client. No NameNode or DataNode should be involved.
> # specify the type of KMS request sent by client. E.g., generate_eek, 
> decrypt_eek, reencrypt_eek
> # optionally specify number of threads sending KMS requests.
> File this jira to gather more requirements. Thoughts? [~knanasi] [~xyao] 
> [~daryn]






[GitHub] elek opened a new pull request #490: HDDS-1113. Remove default dependencies from hadoop-ozone project

2019-02-15 Thread GitBox
elek opened a new pull request #490: HDDS-1113. Remove default dependencies 
from hadoop-ozone project
URL: https://github.com/apache/hadoop/pull/490
 
 
   There are two ways to define common dependencies with maven:
   
 1.) put all the dependencies in the parent project and inherit them
 2.) get all the dependencies via transitive dependencies
   
   TL;DR: I would like to switch from 1 to 2 in hadoop-ozone.
   
   My main problem with the first approach is that all the child projects get a lot 
of dependencies regardless of whether they need them. Let's imagine that I 
would like to create a new project (for example a Java CSI implementation): it 
doesn't need ozone-client, ozone-common etc., and in fact it conflicts with 
ozone-client. But these jars are always added as of now.
   
   Using transitive dependencies is safer: we can add the dependencies 
where we need them, and all of the other dependent projects will use them. 
   
   See: https://issues.apache.org/jira/browse/HDDS-1113
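As a minimal illustration of approach 2, with hypothetical module names (not the actual hadoop-ozone pom files), the dependency is declared only in the module that uses it, and downstream modules pick it up transitively:

```xml
<!-- hadoop-ozone/client/pom.xml: declared once, where it is needed -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdds-common</artifactId>
</dependency>
<!-- Modules that depend on the client inherit hadoop-hdds-common
     transitively; a new module (e.g. a CSI implementation) that does
     not depend on the client simply never sees it. -->
```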





[GitHub] elek closed pull request #487: HDDS-1113. Remove default dependencies from hadoop-ozone project

2019-02-15 Thread GitBox
elek closed pull request #487: HDDS-1113. Remove default dependencies from 
hadoop-ozone project
URL: https://github.com/apache/hadoop/pull/487
 
 
   





[GitHub] elek opened a new pull request #489: HDDS-1115. Provide ozone specific top-level pom.xml

2019-02-15 Thread GitBox
elek opened a new pull request #489: HDDS-1115. Provide ozone specific 
top-level pom.xml
URL: https://github.com/apache/hadoop/pull/489
 
 
   Ozone's build process doesn't require the pom.xml in the top-level hadoop 
directory, as we use hadoop 3.2 artifacts as parents of hadoop-ozone and 
hadoop-hdds. The ./pom.xml is used only to include the hadoop-ozone/hadoop-hdds 
projects in the maven reactor.
   
   From command line, it's easy to build only the ozone artifacts:
   
   {code}
   mvn clean install -Phdds  -am -pl :hadoop-ozone-dist  
-Danimal.sniffer.skip=true  -Denforcer.skip=true
   {code}
   
   Where '-pl' selects the hadoop-ozone-dist project to build
   and '-am' also builds all of its dependencies from the source tree 
(hadoop-ozone-common, hadoop-hdds-common, etc.).
   
   But this filtering is available only from the command line.
   
   With providing a lightweight pom.ozone.xml we can achieve the same:
   
* We can open only the hdds/ozone projects in the IDE/IntelliJ. It makes 
development faster, as the IDE doesn't need to reindex all the sources all the 
time, and it's easy to run IntelliJ's checkstyle/findbugs plugins against the 
whole project.
* Longer term we should create an ozone-specific source artifact (currently 
the source artifacts for the hadoop and ozone releases are the same), which also 
requires a simplified pom.
   
   In this patch I also added the .mvn directory to the .gitignore file.
   
   With 
   {code}
   mkdir -p .mvn && echo "-f pom.ozone.xml" > .mvn/maven.config
   {code}
   you can persist the usage of pom.ozone.xml for all the subsequent builds (in the 
same dir).
   
   How to test?
   
   Just do a 'mvn -f pom.ozone.xml clean install -DskipTests'
   
   See: https://issues.apache.org/jira/browse/HDDS-1115





[GitHub] elek opened a new pull request #488: HDDS-1114. Fix findbugs/checkstyle/acceptance errors in Ozone

2019-02-15 Thread GitBox
elek opened a new pull request #488: HDDS-1114. Fix 
findbugs/checkstyle/acceptance errors in Ozone
URL: https://github.com/apache/hadoop/pull/488
 
 
   Unfortunately, as the previous two big commits (error handling HDDS-1068, 
checkstyle HDDS-1103) were committed at the same time, a few new errors were 
introduced during the rebase.
   
   This patch will fix the remaining 5 issues (plus a typo in the acceptance test 
executor).
   
   See: https://issues.apache.org/jira/browse/HDDS-1114





[jira] [Commented] (HADOOP-15979) Add Netty support to the RPC client

2019-02-15 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769244#comment-16769244
 ] 

Wei-Chiu Chuang commented on HADOOP-15979:
--

I'm sorry I completely missed the patch here. Will review as soon as possible.

> Add Netty support to the RPC client
> ---
>
> Key: HADOOP-15979
> URL: https://issues.apache.org/jira/browse/HADOOP-15979
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, security
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Major
> Attachments: HADOOP-15979.patch
>
>
> Adding Netty will allow later using a native TLS transport layer with much 
> better performance than that offered by Java's SSLEngine.






[jira] [Commented] (HADOOP-16114) NetUtils#canonicalizeHost gives different value for same host

2019-02-15 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769207#comment-16769207
 ] 

Steve Loughran commented on HADOOP-16114:
-

I see: it guarantees that whichever hostname went into the cache is the one 
used by both threads. Makes sense, and it's done elsewhere.

Fancy submitting a patch? I'm not sure we can write an easy test for this, so 
we'll have to rely on review.

> NetUtils#canonicalizeHost gives different value for same host
> -
>
> Key: HADOOP-16114
> URL: https://issues.apache.org/jira/browse/HADOOP-16114
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 2.7.6, 3.1.2
>Reporter: Praveen Krishna
>Priority: Minor
>
> NetUtils#canonicalizeHost uses ConcurrentHashMap#putIfAbsent to add an 
> entry to the cache:
> {code:java}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.putIfAbsent(host, fqHost);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
> }
> {code}
>  
> If two different threads invoke this method for the first time (so the 
> cache is empty) and SecurityUtil#getByName()#getHostName gives two 
> different values for the same host, only one fqHost would be added to the 
> cache and an invalid fqHost would be handed to one of the threads, which might 
> cause some APIs, such as `FileSystem#checkPath`, to fail the first time even if 
> the path is in the given file system. It might be better to modify the above 
> method to this:
>  
> {code:java}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.putIfAbsent(host, fqHost);
> fqHost = canonicalizedHostCache.get(host);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
> }
> {code}
>  
> So even if the other thread got a different host name, it will be replaced by 
> the cached value.






[jira] [Updated] (HADOOP-16115) [JDK 11] TestJersey fails

2019-02-15 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16115:
---
Summary: [JDK 11] TestJersey fails  (was: [JDK11] TestJersey fails)

> [JDK 11] TestJersey fails
> -
>
> Key: HADOOP-16115
> URL: https://issues.apache.org/jira/browse/HADOOP-16115
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Priority: Major
>
> {noformat}
> [INFO] Running org.apache.hadoop.http.TestHttpServer
> [ERROR] Tests run: 26, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 5.954 s <<< FAILURE! - in org.apache.hadoop.http.TestHttpServer
> [ERROR] testJersey(org.apache.hadoop.http.TestHttpServer)  Time elapsed: 
> 0.128 s  <<< ERROR!
> java.io.IOException: Server returned HTTP response code: 500 for URL: 
> http://localhost:40339/jersey/foo?op=bar
>   at 
> java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1913)
>   at 
> java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1509)
>   at 
> org.apache.hadoop.http.HttpServerFunctionalTest.readOutput(HttpServerFunctionalTest.java:260)
>   at 
> org.apache.hadoop.http.TestHttpServer.testJersey(TestHttpServer.java:526)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16115) [JDK11] TestJersey fails

2019-02-15 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769194#comment-16769194
 ] 

Akira Ajisaka commented on HADOOP-16115:


Jersey does not currently support Java 11: 
https://github.com/eclipse-ee4j/jersey/issues/3965

> [JDK11] TestJersey fails
> 
>
> Key: HADOOP-16115
> URL: https://issues.apache.org/jira/browse/HADOOP-16115
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Priority: Major
>
> {noformat}
> [INFO] Running org.apache.hadoop.http.TestHttpServer
> [ERROR] Tests run: 26, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 5.954 s <<< FAILURE! - in org.apache.hadoop.http.TestHttpServer
> [ERROR] testJersey(org.apache.hadoop.http.TestHttpServer)  Time elapsed: 
> 0.128 s  <<< ERROR!
> java.io.IOException: Server returned HTTP response code: 500 for URL: 
> http://localhost:40339/jersey/foo?op=bar
>   at 
> java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1913)
>   at 
> java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1509)
>   at 
> org.apache.hadoop.http.HttpServerFunctionalTest.readOutput(HttpServerFunctionalTest.java:260)
>   at 
> org.apache.hadoop.http.TestHttpServer.testJersey(TestHttpServer.java:526)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}






[jira] [Commented] (HADOOP-16115) [JDK11] TestJersey fails

2019-02-15 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769187#comment-16769187
 ] 

Akira Ajisaka commented on HADOOP-16115:


Surefire report:
{noformat}
2019-02-15 19:47:06,932 WARN  test (ContextHandler.java:log(2177)) - unavailable
java.lang.reflect.InvocationTargetException
at 
java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at 
java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at 
com.sun.jersey.spi.container.servlet.WebComponent.createResourceConfig(WebComponent.java:724)
at 
com.sun.jersey.spi.container.servlet.WebComponent.createResourceConfig(WebComponent.java:674)
at 
com.sun.jersey.spi.container.servlet.WebComponent.init(WebComponent.java:205)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:394)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:577)
at javax.servlet.GenericServlet.init(GenericServlet.java:244)
at 
org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:643)
at 
org.eclipse.jetty.servlet.ServletHolder.getServlet(ServletHolder.java:499)
at 
org.eclipse.jetty.servlet.ServletHolder.ensureInstance(ServletHolder.java:791)
at 
org.eclipse.jetty.servlet.ServletHolder.prepare(ServletHolder.java:776)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:579)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:539)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.IllegalArgumentException
at 
jersey.repackaged.org.objectweb.asm.ClassReader.&lt;init&gt;(ClassReader.java:170)
at 
jersey.repackaged.org.objectweb.asm.ClassReader.&lt;init&gt;(ClassReader.java:153)
at 
jersey.repackaged.org.objectweb.asm.ClassReader.&lt;init&gt;(ClassReader.java:424)
at 
com.sun.jersey.spi.scanning.AnnotationScannerListener.onProcess(AnnotationScannerListener.java:138)
at 
com.sun.jersey.core.spi.scanning.uri.FileSchemeScanner$1.f(FileSchemeScanner.java:86)
at com.sun.jersey.core.util.Closing.f(Closing.java:71)
at 
com.sun.jersey.core.spi.scanning.uri.FileSchemeScanner.scanDirectory(FileSchemeScanner.java:83)
at 
com.sun.jersey.core.spi.scanning.uri.FileSchemeScanner.scan(FileSchemeScanner.java:71)
at 
com.sun.jersey.core.spi.scanning.PackageNamesScanner.scan(PackageNamesScanner.java:226)
at 
com.sun.jersey.core.spi.scanning.PackageNamesScanner.scan(PackageNamesScanner.java:142)
at 
com.sun.jersey.api.core.ScanningResourceConfig.init(ScanningResourceConfig.java:80)
at 
com.sun.jersey.api.core.PackagesResourceConfig.init(PackagesResourceConfig.java:104)
at 
com.sun.jersey.api.core.PackagesResourceConfig.&lt;init&gt;(PackagesResourceConfig.java:78)
at 
com.sun.jersey.api.core.PackagesResourceConfig.&lt;init&gt;(PackagesResourceConfig.java:89)
... 34 more
{noformat}
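The IllegalArgumentException from the repackaged ASM ClassReader above is the usual symptom of an old ASM build rejecting class files newer than it understands; JDK 11 emits class-file major version 55, which the ASM bundled in Jersey 1.x predates. As a hedged illustration (not Hadoop or Jersey code), this sketch reads the major version straight out of a class-file header, which is exactly the field ASM checks:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative only: shows where the class-file major version lives.
// ASM's ClassReader constructor validates this field and throws
// IllegalArgumentException when the version is newer than it supports.
public class ClassFileVersion {

    // Reads the class-file header: magic (4 bytes), minor (2), major (2).
    public static int majorVersionOf(Class<?> clazz) throws IOException {
        String resource = clazz.getSimpleName() + ".class";
        try (InputStream in = clazz.getResourceAsStream(resource)) {
            if (in == null) {
                throw new IOException("class file not found: " + resource);
            }
            DataInputStream data = new DataInputStream(in);
            if (data.readInt() != 0xCAFEBABE) {
                throw new IOException("not a class file");
            }
            data.readUnsignedShort();        // minor version
            return data.readUnsignedShort(); // major: 52 = Java 8, 55 = Java 11
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("java.lang.Object major version: "
            + majorVersionOf(Object.class));
    }
}
```

Running this on a JDK 11 runtime reports major version 55 for the platform classes, the value the bundled ASM cannot parse.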

> [JDK11] TestJersey fails
> 
>
> Key: HADOOP-16115
>  

[jira] [Updated] (HADOOP-16115) [JDK11] TestJersey fails

2019-02-15 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16115:
---
Summary: [JDK11] TestJersey fails  (was: TestJersey fails)

> [JDK11] TestJersey fails
> 
>
> Key: HADOOP-16115
> URL: https://issues.apache.org/jira/browse/HADOOP-16115
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Priority: Major
>







[jira] [Updated] (HADOOP-16115) [JDK11] TestJersey fails

2019-02-15 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16115:
---
Description: 
{noformat}
[INFO] Running org.apache.hadoop.http.TestHttpServer
[ERROR] Tests run: 26, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 5.954 
s <<< FAILURE! - in org.apache.hadoop.http.TestHttpServer
[ERROR] testJersey(org.apache.hadoop.http.TestHttpServer)  Time elapsed: 0.128 
s  <<< ERROR!
java.io.IOException: Server returned HTTP response code: 500 for URL: 
http://localhost:40339/jersey/foo?op=bar
at 
java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1913)
at 
java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1509)
at 
org.apache.hadoop.http.HttpServerFunctionalTest.readOutput(HttpServerFunctionalTest.java:260)
at 
org.apache.hadoop.http.TestHttpServer.testJersey(TestHttpServer.java:526)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{noformat}

> [JDK11] TestJersey fails
> 
>
> Key: HADOOP-16115
> URL: https://issues.apache.org/jira/browse/HADOOP-16115
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Priority: Major
>
> {noformat}
> [INFO] Running org.apache.hadoop.http.TestHttpServer
> [ERROR] Tests run: 26, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 5.954 s <<< FAILURE! - in org.apache.hadoop.http.TestHttpServer
> [ERROR] testJersey(org.apache.hadoop.http.TestHttpServer)  Time elapsed: 
> 0.128 s  <<< ERROR!
> java.io.IOException: Server returned HTTP response code: 500 for URL: 
> http://localhost:40339/jersey/foo?op=bar
>   at 
> java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1913)
>   at 
> java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1509)
>   at 
> org.apache.hadoop.http.HttpServerFunctionalTest.readOutput(HttpServerFunctionalTest.java:260)
>   at 
> org.apache.hadoop.http.TestHttpServer.testJersey(TestHttpServer.java:526)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> 

[jira] [Created] (HADOOP-16115) TestJersey fails

2019-02-15 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-16115:
--

 Summary: TestJersey fails
 Key: HADOOP-16115
 URL: https://issues.apache.org/jira/browse/HADOOP-16115
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Akira Ajisaka









[jira] [Commented] (HADOOP-16113) Your project apache/hadoop is using buggy third-party libraries [WARNING]

2019-02-15 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769183#comment-16769183
 ] 

Steve Loughran commented on HADOOP-16113:
-

Thank you for this; it's always good to get a review of what issues other 
people know about.

Upgrading dependencies is potentially a traumatic process. See [fear of 
dependencies|http://steveloughran.blogspot.com/2016/05/fear-of-dependencies.html]
 for a summary of my feelings there, and HADOOP-9991 "upgrade to the latest 
version" for the eternal problem.

h3. Every update of every library breaks something, somewhere. Possibly 
transitively downstream.

That's a key problem we have. Java 9 modules promise better isolation for the 
transitive issue, but there's still our own code to worry about.


Looking at that list, Apache HttpClient is one we should be worrying about, 
because it retrieves content from remote sites; if anything malicious can 
cause problems there, we don't want that. Commons IO probably too. The others? 
I'm not sure.

FWIW, I didn't know we were using Log4j 2 at all; we'd stayed on 1.x for 
consistent configuration, through the commons-logging and SLF4J APIs. We'll 
have to see about getting the Ozone team to upgrade Log4j 2 and then tell us 
how it went.


Anyway, regarding the other issues, it's the classic triage "is a bug worth 
fixing" problem as applied to upgrades. We tend to lag, just out of fear of 
change.

BTW, within JIRA, a short link such as "LANG-1397" is all we need. Thanks.

> Your project apache/hadoop is using buggy third-party libraries [WARNING]
> -
>
> Key: HADOOP-16113
> URL: https://issues.apache.org/jira/browse/HADOOP-16113
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kaifeng Huang
>Priority: Major
>
> Hi, there!
> We are a research team working on third-party library analysis. We have 
> found that some widely-used third-party libraries in your project have 
> major/critical bugs, which will degrade the quality of your project. We 
> highly recommend you to update those libraries to new versions.
> We have attached the buggy third-party libraries and corresponding jira 
> issue links below for you to have more detailed information.
>   1. org.apache.logging.log4j log4j-core(hadoop-hdds/common/pom.xml)
>   version: 2.11.0
>   Jira issues:
>   Log4j2 throws NoClassDefFoundError in Java 9
>   affectsVersions:2.10.0,2.11.0
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2129?filter=allopenissues
>   Empty Automatic-Module-Name Header
>   affectsVersions:2.10.0,2.11.0,3.0.0
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2254?filter=allopenissues
> gc-free mixed async logging loses parameter values after the first 
> appender
>   affectsVersions:2.11.0
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2301?filter=allopenissues
>   Log4j 2.10+not working with SLF4J 1.8 in OSGI environment
>   affectsVersions:2.10.0,2.11.0
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2305?filter=allopenissues
>   AsyncQueueFullMessageUtil causes unparsable message output
>   affectsVersions:2.11.0
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2318?filter=allopenissues
>   AbstractLogger NPE hides actual cause when getFormat returns null
>   affectsVersions:2.11.0
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2320?filter=allopenissues
>   AsyncLogger without specifying a level always uses ERROR
>   affectsVersions:2.11.0
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2321?filter=allopenissues
>   Errors thrown in formatting may stop background threads
>   affectsVersions:2.11.0
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2333?filter=allopenissues
>   JsonLayout not working with AsyncLoggerContextSelector in 2.11.0
>   affectsVersions:2.11.0
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2341?filter=allopenissues
>   Typo in log4j-api Activator
>   affectsVersions:2.11.0
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2343?filter=allopenissues
>   PropertiesUtil.reload() might throw NullPointerException
>   affectsVersions:2.11.0
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2355?filter=allopenissues
>   NameAbbreviator skips first fragments
>   affectsVersions:2.11.0,2.11.1
>   
> https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2365?filter=allopenissues
>   Outputs wrong message when used within overridden Throwable method
>   affectsVersions:2.8.1,2.11.0
>   
> 

[GitHub] elek opened a new pull request #487: Remove default dependencies from hadoop-ozone project

2019-02-15 Thread GitBox
elek opened a new pull request #487: Remove default dependencies from 
hadoop-ozone project
URL: https://github.com/apache/hadoop/pull/487
 
 
   There are two ways to define common dependencies with Maven:
   
 1.) Put all the dependencies in the parent project and inherit them.
 2.) Get all the dependencies via transitive dependencies.
   
   TL;DR: I would like to switch from 1 to 2 in hadoop-ozone.
   
   My main problem with the first approach is that all the child projects get a lot 
of dependencies whether they need them or not. Let's imagine that I would like 
to create a new project (for example, a Java CSI implementation). It doesn't 
need ozone-client, ozone-common, etc.; in fact, it conflicts with 
ozone-client. But these jars are always added as of now.
   
   Using transitive dependencies is safer: we can add the dependencies 
where we need them, and all of the other dependent projects will use them. 
   
   see: https://issues.apache.org/jira/browse/HDDS-1113
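A minimal sketch of option 2 (the module names here are illustrative, not necessarily the actual hadoop-ozone artifacts): each module's pom.xml declares only what it uses directly, and everything that dependency itself needs arrives transitively through Maven's dependency resolution rather than being inherited from the parent POM.

```xml
<!-- Child module pom.xml: declare only the direct dependency.       -->
<!-- Whatever the client module itself depends on (ozone-common,     -->
<!-- hdds-client, ...) is pulled in transitively, so modules that    -->
<!-- never reference the client are not polluted with its jars.      -->
<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-ozone-client</artifactId>
  </dependency>
</dependencies>
```

With this layout, a new module such as a CSI implementation simply omits the dependency and never sees the conflicting jars.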


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[GitHub] elek closed pull request #484: HDDS-1103. Fix rat/findbug/checkstyle errors in ozone/hdds projects

2019-02-15 Thread GitBox
elek closed pull request #484: HDDS-1103. Fix rat/findbug/checkstyle errors in 
ozone/hdds projects
URL: https://github.com/apache/hadoop/pull/484
 
 
   





[GitHub] elek commented on issue #484: HDDS-1103. Fix rat/findbug/checkstyle errors in ozone/hdds projects

2019-02-15 Thread GitBox
elek commented on issue #484: HDDS-1103. Fix rat/findbug/checkstyle errors in 
ozone/hdds projects
URL: https://github.com/apache/hadoop/pull/484#issuecomment-463957063
 
 
   Thanks for the review and the merge.





[GitHub] elek closed pull request #463: HDDS-905. Create informative landing page for Ozone S3 gateway

2019-02-15 Thread GitBox
elek closed pull request #463: HDDS-905. Create informative landing page for 
Ozone S3 gateway
URL: https://github.com/apache/hadoop/pull/463
 
 
   





[GitHub] elek closed pull request #477: HDDS-1025: Handle replication of closed containers in DeadNodeHanlder.

2019-02-15 Thread GitBox
elek closed pull request #477: HDDS-1025: Handle replication of closed 
containers in DeadNodeHanlder.
URL: https://github.com/apache/hadoop/pull/477
 
 
   





[GitHub] elek commented on issue #486: HDDS-1092. Use Java 11 JRE to run Ozone in containers

2019-02-15 Thread GitBox
elek commented on issue #486: HDDS-1092. Use Java 11 JRE to run Ozone in 
containers
URL: https://github.com/apache/hadoop/pull/486#issuecomment-463956353
 
 
   Note: I will wait for a normal Jenkins execution first. If it works well, I can 
commit the trunk patch first. 
   Once the trunk patch is in, I can bump the Java version to Java 11 in the base 
image (hadoop-docker-runner branch).





[GitHub] elek closed pull request #476: HDDS-1029: Allow option for force in DeleteContainerCommand.

2019-02-15 Thread GitBox
elek closed pull request #476: HDDS-1029: Allow option for force in 
DeleteContainerCommand.
URL: https://github.com/apache/hadoop/pull/476
 
 
   





[GitHub] elek closed pull request #481: HDDS-1068. Improve the error propagation for ozone sh.

2019-02-15 Thread GitBox
elek closed pull request #481: HDDS-1068. Improve the error propagation for 
ozone sh.
URL: https://github.com/apache/hadoop/pull/481
 
 
   





[GitHub] elek opened a new pull request #486: HDDS-1092. Use Java 11 JRE to run Ozone in containers

2019-02-15 Thread GitBox
elek opened a new pull request #486: HDDS-1092. Use Java 11 JRE to run Ozone in 
containers
URL: https://github.com/apache/hadoop/pull/486
 
 
   see: https://issues.apache.org/jira/browse/HDDS-1092





[jira] [Updated] (HADOOP-16114) NetUtils#canonicalizeHost gives different value for same host

2019-02-15 Thread Praveen Krishna (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Praveen Krishna updated HADOOP-16114:
-
Description: 
NetUtils#canonicalizeHost uses ConcurrentHashMap#putIfAbsent to add an entry 
to the cache:
{code:java}
  private static String canonicalizeHost(String host) {
    // check if the host has already been canonicalized
    String fqHost = canonicalizedHostCache.get(host);
    if (fqHost == null) {
      try {
        fqHost = SecurityUtil.getByName(host).getHostName();
        // slight race condition, but won't hurt
        canonicalizedHostCache.putIfAbsent(host, fqHost);
      } catch (UnknownHostException e) {
        fqHost = host;
      }
    }
    return fqHost;
  }
{code}
 

If two different threads invoke this method for the first time (so the 
cache is empty) and SecurityUtil#getByName()#getHostName gives two different 
values for the same host, only one fqHost is added to the cache, and a stale 
fqHost is returned to one of the threads. This can cause some APIs, such as 
`FileSystem#checkPath`, to fail on the first call even if the path is in the 
given file system. It might be better to modify the above method to this:

 
{code:java}
  private static String canonicalizeHost(String host) {
    // check if the host has already been canonicalized
    String fqHost = canonicalizedHostCache.get(host);
    if (fqHost == null) {
      try {
        fqHost = SecurityUtil.getByName(host).getHostName();
        // slight race condition, but won't hurt
        canonicalizedHostCache.putIfAbsent(host, fqHost);
        fqHost = canonicalizedHostCache.get(host);
      } catch (UnknownHostException e) {
        fqHost = host;
      }
    }
    return fqHost;
  }
{code}
 

So even if the other thread resolves a different host name, the value 
returned is always the one stored in the cache.
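A minimal runnable sketch of the proposed pattern (standalone, not the actual Hadoop code; the `resolved` parameter stands in for SecurityUtil#getByName(host)#getHostName, which may return different strings on concurrent first calls):

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the proposed fix: the extra get() after putIfAbsent()
// guarantees every caller returns the single value that won the race,
// even when concurrent resolutions of the same host disagree.
public class CanonicalizeHostSketch {
    private static final ConcurrentHashMap<String, String> cache =
        new ConcurrentHashMap<>();

    // 'resolved' stands in for SecurityUtil.getByName(host).getHostName().
    static String canonicalize(String host, String resolved) {
        String fqHost = cache.get(host);
        if (fqHost == null) {
            cache.putIfAbsent(host, resolved); // no-op if another thread won
            fqHost = cache.get(host);          // re-read the cached winner
        }
        return fqHost;
    }

    public static void main(String[] args) {
        // The first caller wins the race; a later caller that resolved a
        // different name still returns the cached winner.
        System.out.println(canonicalize("host1", "a.example.com"));
        System.out.println(canonicalize("host1", "b.example.com"));
    }
}
```

Both calls in `main` print the same canonical name, which is exactly the invariant the original code can violate.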



> NetUtils#canonicalizeHost gives different value for same host
> -
>
> Key: HADOOP-16114
> URL: https://issues.apache.org/jira/browse/HADOOP-16114
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 2.7.6, 3.1.2
>Reporter: Praveen Krishna
>Priority: Minor
>
> In NetUtils#canonicalizeHost uses ConcurrentHashMap#putIfAbsent to add an 
> entry to the cache
> {code:java}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.putIfAbsent(host, fqHost);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
> }
> {code}
>  
> If two different threads were invoking this method for the first time (so the 
> cache is empty) and if SecurityUtil#getByName()#getHostName gives two 
> different value for the same host , only one fqHost would be added in the 
> cache and an invalid fqHost would be given to one of the thread which might 

[jira] [Created] (HADOOP-16113) Your project apache/hadoop is using buggy third-party libraries [WARNING]

2019-02-15 Thread Kaifeng Huang (JIRA)
Kaifeng Huang created HADOOP-16113:
--

 Summary: Your project apache/hadoop is using buggy third-party 
libraries [WARNING]
 Key: HADOOP-16113
 URL: https://issues.apache.org/jira/browse/HADOOP-16113
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kaifeng Huang



Hi, there!

We are a research team working on third-party library analysis. We have 
found that some widely used third-party libraries in your project have 
major/critical bugs, which will degrade the quality of your project. We highly 
recommend that you update those libraries to newer versions.

We have listed the buggy third-party libraries and the corresponding JIRA 
issue links below so you can find more detailed information.

1. org.apache.logging.log4j log4j-core(hadoop-hdds/common/pom.xml)
version: 2.11.0

Jira issues:
Log4j2 throws NoClassDefFoundError in Java 9
affectsVersions:2.10.0,2.11.0

https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2129?filter=allopenissues
Empty Automatic-Module-Name Header
affectsVersions:2.10.0,2.11.0,3.0.0

https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2254?filter=allopenissues
gc-free mixed async logging loses parameter values after the first 
appender
affectsVersions:2.11.0

https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2301?filter=allopenissues
Log4j 2.10+ not working with SLF4J 1.8 in OSGi environment
affectsVersions:2.10.0,2.11.0

https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2305?filter=allopenissues
AsyncQueueFullMessageUtil causes unparsable message output
affectsVersions:2.11.0

https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2318?filter=allopenissues
AbstractLogger NPE hides actual cause when getFormat returns null
affectsVersions:2.11.0

https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2320?filter=allopenissues
AsyncLogger without specifying a level always uses ERROR
affectsVersions:2.11.0

https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2321?filter=allopenissues
Errors thrown in formatting may stop background threads
affectsVersions:2.11.0

https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2333?filter=allopenissues
JsonLayout not working with AsyncLoggerContextSelector in 2.11.0
affectsVersions:2.11.0

https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2341?filter=allopenissues
Typo in log4j-api Activator
affectsVersions:2.11.0

https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2343?filter=allopenissues
PropertiesUtil.reload() might throw NullPointerException
affectsVersions:2.11.0

https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2355?filter=allopenissues
NameAbbreviator skips first fragments
affectsVersions:2.11.0,2.11.1

https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2365?filter=allopenissues
Outputs wrong message when used within overridden Throwable method
affectsVersions:2.8.1,2.11.0

https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2368?filter=allopenissues
StringBuilder escapeJson performs unnecessary Memory Allocations
affectsVersions:2.11.0

https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2373?filter=allopenissues
fix the CacheEntry map in ThrowableProxy#toExtendedStackTrace to be put 
and gotten with same key
affectsVersions:2.6.2,2.7,2.8,2.8.1,2.8.2,2.9.0,2.9.1,2.10.0,2.11.0

https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2389?filter=allopenissues
Fix incorrect links in Log4j web documentation.
affectsVersions:2.11.0

https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2390?filter=allopenissues


2. org.apache.httpcomponents httpclient(hadoop-project/pom.xml)
version: 4.5.2

Jira issues:

org.apache.http.impl.client.AbstractHttpClient#createClientConnectionManager 
Does not account for context class loader
affectsVersions:4.4.1;4.5;4.5.1;4.5.2

https://issues.apache.org/jira/projects/HTTPCLIENT/issues/HTTPCLIENT-1727?filter=allopenissues
Memory Leak in OSGi support
affectsVersions:4.4.1;4.5.2

https://issues.apache.org/jira/projects/HTTPCLIENT/issues/HTTPCLIENT-1749?filter=allopenissues
SystemDefaultRoutePlanner: Possible null pointer dereference
affectsVersions:4.5.2

https://issues.apache.org/jira/projects/HTTPCLIENT/issues/HTTPCLIENT-1766?filter=allopenissues
Null pointer dereference in EofSensorInputStream and ResponseEntityProxy
affectsVersions:4.5.2
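The remediation the report asks for is a version bump in the affected Maven POMs. A minimal sketch for the first entry, assuming the project moves `log4j-core` in `hadoop-hdds/common/pom.xml` off the affected 2.11.0 release (the exact target version is the maintainers' choice, not something the report specifies):

```xml
<!-- hadoop-hdds/common/pom.xml: bump log4j-core off the affected 2.11.0 -->
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-core</artifactId>
  <!-- hypothetical target; pick the latest 2.x release that passes the build -->
  <version>2.11.2</version>
</dependency>
```

The same pattern applies to the `httpclient` entry in `hadoop-project/pom.xml`; after a bump, `mvn dependency:tree` can confirm no module still pulls in the old version transitively.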
   

[jira] [Created] (HADOOP-16114) NetUtils#canonicalizeHost gives different value for same host

2019-02-15 Thread Praveen Krishna (JIRA)
Praveen Krishna created HADOOP-16114:


 Summary: NetUtils#canonicalizeHost gives different value for same 
host
 Key: HADOOP-16114
 URL: https://issues.apache.org/jira/browse/HADOOP-16114
 Project: Hadoop Common
  Issue Type: Bug
  Components: net
Affects Versions: 3.1.2, 2.7.6
Reporter: Praveen Krishna


NetUtils#canonicalizeHost uses ConcurrentHashMap#putIfAbsent to add an entry 
to the cache
{code:java}
private static String canonicalizeHost(String host) {
  // check if the host has already been canonicalized
  String fqHost = canonicalizedHostCache.get(host);
  if (fqHost == null) {
    try {
      fqHost = SecurityUtil.getByName(host).getHostName();
      // slight race condition, but won't hurt
      canonicalizedHostCache.putIfAbsent(host, fqHost);
    } catch (UnknownHostException e) {
      fqHost = host;
    }
  }
  return fqHost;
}
{code}
 

If two different threads were invoking this method for the first time (so the 
cache is empty) and if SecurityUtil#getByName()#getHostName gives two different 
values for the same host, only one fqHost would be added to the cache and an 
invalid fqHost would be given to one of the threads, which might cause some APIs 
(e.g. `FileSystem#checkPath`) to fail the first time even if the path is in the 
given file system. It might be better to modify the method as follows:

 
{code:java}
private static String canonicalizeHost(String host) {
  // check if the host has already been canonicalized
  String fqHost = canonicalizedHostCache.get(host);
  if (fqHost == null) {
    try {
      fqHost = SecurityUtil.getByName(host).getHostName();
      // slight race condition, but won't hurt
      canonicalizedHostCache.putIfAbsent(host, fqHost);
      fqHost = canonicalizedHostCache.get(host);
    } catch (UnknownHostException e) {
      fqHost = host;
    }
  }
  return fqHost;
}
{code}
 

So even if another thread gets a different host name, it will be updated to the 
cached value.
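To illustrate why the re-read after `putIfAbsent` closes the race, here is a minimal, runnable sketch. It is not the Hadoop code itself: `resolved` is a hypothetical stand-in for the result of `SecurityUtil.getByName(host).getHostName()`, which, per the report, may differ between concurrent callers. `ConcurrentHashMap#putIfAbsent` lets only the first value in; the re-read guarantees every caller returns that single winner rather than its own resolved value.

```java
import java.util.concurrent.ConcurrentHashMap;

public class CanonicalizeDemo {
  private static final ConcurrentHashMap<String, String> cache =
      new ConcurrentHashMap<>();

  // 'resolved' is a hypothetical stand-in for
  // SecurityUtil.getByName(host).getHostName(), which per the report
  // may return different answers to concurrent callers.
  static String canonicalize(String host, String resolved) {
    String fqHost = cache.get(host);
    if (fqHost == null) {
      // Only the first putIfAbsent stores a value; later calls are no-ops.
      cache.putIfAbsent(host, resolved);
      // Re-read so every caller returns the single cached winner,
      // not its own (possibly different) resolved value.
      fqHost = cache.get(host);
    }
    return fqHost;
  }

  public static void main(String[] args) {
    // Two callers racing with different resolver answers for the same host.
    String first = canonicalize("node1", "node1.example.com");
    String second = canonicalize("node1", "node1.internal");
    // Both observe the same canonical value: whichever putIfAbsent won.
    System.out.println(first.equals(second)); // prints "true"
  }
}
```

Without the re-read, the second caller would have returned its own `resolved` string after losing the `putIfAbsent`, which is exactly the inconsistency described above.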



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org