[jira] [Assigned] (HADOOP-18691) Add a CallerContext getter on the Schedulable interface

2023-04-20 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng reassigned HADOOP-18691:
---

Assignee: Christos Bisias

> Add a CallerContext getter on the Schedulable interface
> ---
>
> Key: HADOOP-18691
> URL: https://issues.apache.org/jira/browse/HADOOP-18691
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Christos Bisias
>Assignee: Christos Bisias
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> We would like to add a default *{color:#00875a}CallerContext{color}* getter 
> to the *{color:#00875a}Schedulable{color}* interface
> {code:java}
> default CallerContext getCallerContext() {
>   return null;
> } {code}
> and then override it in the *{color:#00875a}ipc/Server.Call{color}* class
> {code:java}
> @Override
> public CallerContext getCallerContext() {  
>   return this.callerContext;
> } {code}
> to expose the already existing *{color:#00875a}callerContext{color}* field.
>  
> This change will help us access the *{color:#00875a}CallerContext{color}* in 
> an Apache Ozone *{color:#00875a}IdentityProvider{color}* implementation.
> On the Ozone side, the *{color:#00875a}FairCallQueue{color}* doesn't work with 
> the Ozone S3G, because all users are masked under a special S3G user and there 
> is no impersonation. Therefore, the FCQ sees only one user and becomes 
> ineffective. We can use the *{color:#00875a}CallerContext{color}* field to 
> store the actual user and access it in the Ozone 
> *{color:#00875a}IdentityProvider{color}*.
>  
> This is a presentation describing the proposed approach:
> [https://docs.google.com/presentation/d/1iChpCz_qf-LXiPyvotpOGiZ31yEUyxAdU4RhWMKo0c0/edit#slide=id.p]
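The mechanics of the proposal can be sketched in a self-contained form. The classes below are simplified stand-ins for the real Hadoop types (`org.apache.hadoop.ipc.Schedulable`, `CallerContext`, and `Server.Call`), not the actual implementation:

```java
// Simplified stand-ins for the Hadoop types, for illustration only.
final class CallerContext {
    private final String context;
    CallerContext(String context) { this.context = context; }
    String getContext() { return context; }
}

interface Schedulable {
    // The proposed default getter: existing Schedulable implementations stay
    // source- and binary-compatible and simply return null.
    default CallerContext getCallerContext() {
        return null;
    }
}

// Stand-in for ipc/Server.Call, which already holds a callerContext field.
class Call implements Schedulable {
    private final CallerContext callerContext;
    Call(CallerContext callerContext) { this.callerContext = callerContext; }

    @Override
    public CallerContext getCallerContext() {
        return this.callerContext;
    }
}
```

An Ozone IdentityProvider could then call `getCallerContext()` on the Schedulable it receives and fall back to the masked S3G user when the getter returns null.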



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18691) Add a CallerContext getter on the Schedulable interface

2023-04-20 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-18691:

   Fix Version/s: 3.3.9
Target Version/s:   (was: 3.4.0, 3.3.9)

> Add a CallerContext getter on the Schedulable interface
> ---
>
> Key: HADOOP-18691
> URL: https://issues.apache.org/jira/browse/HADOOP-18691
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Christos Bisias
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>






[jira] [Resolved] (HADOOP-18691) Add a CallerContext getter on the Schedulable interface

2023-04-20 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng resolved HADOOP-18691.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

> Add a CallerContext getter on the Schedulable interface
> ---
>
> Key: HADOOP-18691
> URL: https://issues.apache.org/jira/browse/HADOOP-18691
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Christos Bisias
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>






[jira] [Updated] (HADOOP-18691) Add a CallerContext getter on the Schedulable interface

2023-04-16 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-18691:

Target Version/s: 3.4.0, 3.3.9  (was: 3.4.0, 3.3.6)

> Add a CallerContext getter on the Schedulable interface
> ---
>
> Key: HADOOP-18691
> URL: https://issues.apache.org/jira/browse/HADOOP-18691
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Christos Bisias
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Updated] (HADOOP-18693) Upgrade Apache Derby from 10.10.2.0 to 10.14.2.0 due to CVEs

2023-04-16 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-18693:

Fix Version/s: 3.3.9
   (was: 3.3.6)

> Upgrade Apache Derby from 10.10.2.0 to 10.14.2.0 due to CVEs
> 
>
> Key: HADOOP-18693
> URL: https://issues.apache.org/jira/browse/HADOOP-18693
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build, test
>Affects Versions: 3.4.0
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> [https://github.com/advisories/GHSA-wr69-g62g-2r9h]
> [https://github.com/advisories/GHSA-42xw-p62x-hwcf]
> [https://github.com/apache/hadoop/pull/5427]
> Only seems to be used in test scope, but it would be nice to silence the 
> Dependabot warnings by merging the PR.
>  
>  
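The fix itself is a one-line version bump. A minimal sketch of what the change might look like, assuming Derby's version is pinned in a Maven dependencyManagement block (the exact POM location and structure are an assumption, not confirmed by this thread):

```xml
<!-- Hypothetical snippet showing where the Derby pin might be bumped. -->
<dependency>
  <groupId>org.apache.derby</groupId>
  <artifactId>derby</artifactId>
  <!-- was 10.10.2.0; 10.14.2.0 addresses GHSA-wr69-g62g-2r9h and
       GHSA-42xw-p62x-hwcf -->
  <version>10.14.2.0</version>
  <scope>test</scope>
</dependency>
```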






[jira] [Updated] (HADOOP-18693) Upgrade Apache Derby from 10.10.2.0 to 10.14.2.0 due to CVEs

2023-04-16 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-18693:

Fix Version/s: 3.3.6

> Upgrade Apache Derby from 10.10.2.0 to 10.14.2.0 due to CVEs
> 
>
> Key: HADOOP-18693
> URL: https://issues.apache.org/jira/browse/HADOOP-18693
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build, test
>Affects Versions: 3.4.0
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.6
>
>






[jira] [Commented] (HADOOP-17834) Bump aliyun-sdk-oss to 3.13.0

2023-04-13 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17712131#comment-17712131
 ] 

Siyao Meng commented on HADOOP-17834:
-

The transitive dependency licenses were an oversight. Thanks [~ste...@apache.org] 
for taking care of that in HADOOP-18641.

Ideally we would have a CI check for the licenses, like Ozone currently 
[does|https://github.com/apache/ozone/blob/master/.github/workflows/ci.yml#L230-L233]
 in its 
[CI|https://github.com/apache/ozone/blob/master/hadoop-ozone/dev-support/checks/dependency.sh#L37].
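A minimal sketch of the kind of check meant here. The report format, class name, and allowlist are invented for illustration; Ozone's actual check is the shell script linked above:

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical CI guard: scan a generated dependency/license report and
// collect entries whose license is not on an approved allowlist. A CI step
// would fail the build when the returned list is non-empty.
final class LicenseCheck {
    private static final Set<String> ALLOWED =
            Set.of("Apache-2.0", "BSD-3-Clause", "MIT");

    // Assumed report line format: "<groupId>:<artifactId>:<version> <license>"
    static List<String> violations(List<String> reportLines) {
        return reportLines.stream()
                .filter(line -> {
                    String license = line.substring(line.lastIndexOf(' ') + 1);
                    return !ALLOWED.contains(license);
                })
                .collect(Collectors.toList());
    }
}
```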

> Bump aliyun-sdk-oss to 3.13.0
> -
>
> Key: HADOOP-17834
> URL: https://issues.apache.org/jira/browse/HADOOP-17834
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.3, 3.3.2
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Bump aliyun-sdk-oss to 3.13.0 in order to remove the transitive dependency on 
> jdom 1.1.
> Ref: 
> https://issues.apache.org/jira/browse/HADOOP-17820?focusedCommentId=17390206=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17390206.






[jira] [Updated] (HADOOP-18693) Upgrade Apache Derby from 10.10.2.0 to 10.14.2.0 due to CVEs

2023-04-13 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-18693:

Summary: Upgrade Apache Derby from 10.10.2.0 to 10.14.2.0 due to CVEs  
(was: Upgrade Apache Derby due to CVEs)

> Upgrade Apache Derby from 10.10.2.0 to 10.14.2.0 due to CVEs
> 
>
> Key: HADOOP-18693
> URL: https://issues.apache.org/jira/browse/HADOOP-18693
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>






[jira] [Resolved] (HADOOP-18693) Upgrade Apache Derby due to CVEs

2023-04-13 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng resolved HADOOP-18693.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

> Upgrade Apache Derby due to CVEs
> 
>
> Key: HADOOP-18693
> URL: https://issues.apache.org/jira/browse/HADOOP-18693
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>






[jira] [Updated] (HADOOP-18693) Upgrade Apache Derby due to CVEs

2023-04-13 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-18693:

Summary: Upgrade Apache Derby due to CVEs  (was: upgrade Apache Derby due 
to CVEs)

> Upgrade Apache Derby due to CVEs
> 
>
> Key: HADOOP-18693
> URL: https://issues.apache.org/jira/browse/HADOOP-18693
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: PJ Fanning
>Priority: Major
>






[jira] [Updated] (HADOOP-18699) InvalidProtocolBufferException caused by JDK 11 < 11.0.18 AES-CTR cipher state corruption with AVX-512 bug

2023-04-12 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-18699:

Description: 
This serves as a PSA for a JDK bug; it is not really a bug in Hadoop/HDFS. The 
symptom, workaround, and solution are detailed below.

[~relek] identified that [JDK-8292158|https://bugs.openjdk.org/browse/JDK-8292158] 
(backported to JDK 11 in 
[JDK-8295297|https://bugs.openjdk.org/browse/JDK-8295297]) causes HDFS clients 
to fail with InvalidProtocolBufferException due to a corrupted protobuf message 
in the Hadoop RPC request when all of the conditions below are met:

1. The host supports the AVX-512 instruction set.
2. AVX-512 is enabled in the JVM. This is the default on AVX-512-capable 
hosts, equivalent to specifying the JVM argument {{-XX:UseAVX=3}}.
3. The Hadoop native library (e.g. libhadoop.so) is not available, so the HDFS 
client falls back to the HotSpot JVM's {{aesctr_encrypt}} implementation for 
AES/CTR/NoPadding.
4. The client runs OpenJDK 11 at a version earlier than 11.0.18.

As a result, the client could print messages like these:

{code:title=Symptoms on the HDFS client}
2023-02-21 15:21:44,380 WARN org.apache.hadoop.hdfs.DFSClient: Connection 
failure: Failed to connect to  for file 
/tmp/.cloudera_health_monitoring_canary_files/.canary_file_2023_02_21-15_21_25.b6788e89894a61b5
 for block 
BP-1836197545-10.125.248.11-1672668423261:blk_1073935111_194857:com.google.protobuf.InvalidProtocolBufferException:
 Protocol message tag had invalid wire type.
com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had 
invalid wire type.

2023-02-21 15:21:44,378 WARN org.apache.hadoop.hdfs.DFSClient: Connection 
failure: Failed to connect to  for file 
/tmp/.cloudera_health_monitoring_canary_files/.canary_file_2023_02_21-15_21_25.b6788e89894a61b5
 for block 
BP-1836197545--1672668423261:blk_1073935111_194857:com.google.protobuf.InvalidProtocolBufferException:
 Protocol message end-group tag did not match expected tag.
com.google.protobuf.InvalidProtocolBufferException: Protocol message end-group 
tag did not match expected tag.

2023-02-21 15:06:55,530 WARN org.apache.hadoop.hdfs.DFSClient: Connection 
failure: Failed to connect to  for file 
/tmp/.cloudera_health_monitoring_canary_files/.canary_file_2023_02_21-15_06_55.b4a633a8bde014aa
 for block 
BP-1836197545--1672668423261:blk_1073935025_194771:com.google.protobuf.InvalidProtocolBufferException:
 While parsing a protocol message, the input ended unexpectedly in the middle 
of a field. This could mean either that the input has been truncated or that an 
embedded message misreported its own length.
com.google.protobuf.InvalidProtocolBufferException: While parsing a protocol 
message, the input ended unexpectedly in the middle of a field. This could mean 
either that the input has been truncated or that an embedded message 
misreported its own length.
{code}

The error message might mislead devs/users into thinking this is a Hadoop 
Common or HDFS bug, when it is in fact a JDK bug.


{color:red}Solutions:{color}
1. As a workaround, append {{-XX:UseAVX=2}} to client JVM args; or
2. Upgrade to OpenJDK >= 11.0.18.


I might post a repro test case for this, or find a way in the code to warn the 
user that this JDK bug could be the cause (and that JDK 11 needs upgrading) 
when it occurs.
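The version gate such a warning would need can be sketched as follows. The class and method names are made up for illustration; this is not actual Hadoop code:

```java
// Illustrative only: detect whether a JDK version is an OpenJDK 11 release
// older than 11.0.18, i.e. one that may carry the JDK-8292158 AES-CTR bug.
final class JdkBugCheck {
    // First JDK 11 release containing the backported fix (JDK-8295297).
    private static final Runtime.Version FIRST_FIXED_11 =
            Runtime.Version.parse("11.0.18");

    // True for any 11.x version below 11.0.18.
    static boolean isAffected(Runtime.Version v) {
        return v.feature() == 11 && v.compareTo(FIRST_FIXED_11) < 0;
    }
}
```

A client could call `JdkBugCheck.isAffected(Runtime.version())` once at startup and log a warning pointing at this issue and the `-XX:UseAVX=2` workaround.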

  was:
This serves as a PSA for a JDK bug. Not really a bug in Hadoop / HDFS. 
Symptom/Workaround/Solution detailed below.

[~relek] identified [JDK-8292158|https://bugs.openjdk.org/browse/JDK-8292158] 
(backported to JDK 11 in 
[JDK-8295297|https://bugs.openjdk.org/browse/JDK-8295297]) causes HDFS clients 
to fail with InvalidProtocolBufferException due to corrupted protobuf message 
in Hadoop RPC request when all of the below conditions are met:

1. The host is capable of AVX-512 instruction sets
2. AVX-512 is enabled in JVM. This should be enabled by default on AVX-512 
capable hosts, equivalent to specifying JVM arg {{-XX:UseAVX=3}}
3. Hadoop native library (e.g. libhadoop.so) is not available. So the HDFS 
client falls back to AES/CTR/NoPadding and thus uses Hotspot JVM's 
{{aesctr_encrypt}} implementation.
4. Client uses JDK 11. And OpenJDK version < 11.0.18

As a result, the client could print messages like these:

{code:title=Symptoms on the HDFS client}
2023-02-21 15:21:44,380 WARN org.apache.hadoop.hdfs.DFSClient: Connection 
failure: Failed to connect to  for file 
/tmp/.cloudera_health_monitoring_canary_files/.canary_file_2023_02_21-15_21_25.b6788e89894a61b5
 for block 
BP-1836197545-10.125.248.11-1672668423261:blk_1073935111_194857:com.google.protobuf.InvalidProtocolBufferException:
 Protocol message tag had invalid wire type.
com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had 
invalid wire type.

2023-02-21 15:21:44,378 WARN org.apache.hadoop.hdfs.DFSClient: Connection 
failure: Failed to connect to  for file 

[jira] [Updated] (HADOOP-18699) InvalidProtocolBufferException caused by JDK 11 < 11.0.18 AES-CTR cipher state corruption with AVX-512 bug

2023-04-12 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-18699:

Description: 
(Description text unchanged from the update above.)

  was:
This serves as a PSA for a JDK bug. Not really a bug in Hadoop / HDFS. 
Symptom/Workaround/Solution detailed below.

[~relek] identified [JDK-8292158|https://bugs.openjdk.org/browse/JDK-8292158] 
(backported to JDK 11 in 
[JDK-8295297|https://bugs.openjdk.org/browse/JDK-8295297]) causes HDFS clients 
to fail with InvalidProtocolBufferException due to corrupted protobuf message 
in Hadoop RPC request when all of the below conditions are met:

1. The host is capable of AVX-512 instruction sets
2. AVX-512 is enabled in JVM. This should be enabled by default on AVX-512 
capable hosts, equivalent to specifying JVM arg {{-XX:UseAVX=3}}
3. Hadoop native library (e.g. libhadoop.so) is not available
4. Client uses JDK 11. And OpenJDK version < 11.0.18

As a result, the client could print messages like these:

{code:title=Symptoms on the HDFS client}
2023-02-21 15:21:44,380 WARN org.apache.hadoop.hdfs.DFSClient: Connection 
failure: Failed to connect to  for file 
/tmp/.cloudera_health_monitoring_canary_files/.canary_file_2023_02_21-15_21_25.b6788e89894a61b5
 for block 
BP-1836197545-10.125.248.11-1672668423261:blk_1073935111_194857:com.google.protobuf.InvalidProtocolBufferException:
 Protocol message tag had invalid wire type.
com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had 
invalid wire type.

2023-02-21 15:21:44,378 WARN org.apache.hadoop.hdfs.DFSClient: Connection 
failure: Failed to connect to  for file 
/tmp/.cloudera_health_monitoring_canary_files/.canary_file_2023_02_21-15_21_25.b6788e89894a61b5
 for block 

[jira] [Updated] (HADOOP-18699) InvalidProtocolBufferException caused by JDK 11 < 11.0.18 AES-CTR cipher state corruption with AVX-512 bug

2023-04-12 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-18699:

Description: 
(Description text unchanged from the update above.)

  was:
This serves as a PSA for a JDK bug. Not really a bug in Hadoop / HDFS itself.

[~relek] identified [JDK-8292158|https://bugs.openjdk.org/browse/JDK-8292158] 
(backported to JDK 11 in 
[JDK-8295297|https://bugs.openjdk.org/browse/JDK-8295297]) causes HDFS clients 
to fail with InvalidProtocolBufferException due to corrupted protobuf message 
in Hadoop RPC request when all of the below conditions are met:

1. The host is capable of AVX-512 instruction sets
2. AVX-512 is enabled in JVM. This should be enabled by default on AVX-512 
capable hosts, equivalent to specifying JVM arg {{-XX:UseAVX=3}}

As a result, the client could print messages like these:

{code:title=Symptoms on the HDFS client}
2023-02-21 15:21:44,380 WARN org.apache.hadoop.hdfs.DFSClient: Connection 
failure: Failed to connect to  for file 
/tmp/.cloudera_health_monitoring_canary_files/.canary_file_2023_02_21-15_21_25.b6788e89894a61b5
 for block 
BP-1836197545-10.125.248.11-1672668423261:blk_1073935111_194857:com.google.protobuf.InvalidProtocolBufferException:
 Protocol message tag had invalid wire type.
com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had 
invalid wire type.

2023-02-21 15:21:44,378 WARN org.apache.hadoop.hdfs.DFSClient: Connection 
failure: Failed to connect to  for file 
/tmp/.cloudera_health_monitoring_canary_files/.canary_file_2023_02_21-15_21_25.b6788e89894a61b5
 for block 
BP-1836197545--1672668423261:blk_1073935111_194857:com.google.protobuf.InvalidProtocolBufferException:
 Protocol message end-group tag did not match expected tag.
com.google.protobuf.InvalidProtocolBufferException: Protocol message end-group 
tag did not match expected tag.

2023-02-21 15:06:55,530 

[jira] [Created] (HADOOP-18699) InvalidProtocolBufferException caused by JDK 11 < 11.0.18 AES-CTR cipher state corruption with AVX-512 bug

2023-04-12 Thread Siyao Meng (Jira)
Siyao Meng created HADOOP-18699:
---

 Summary: InvalidProtocolBufferException caused by JDK 11 < 11.0.18 
AES-CTR cipher state corruption with AVX-512 bug
 Key: HADOOP-18699
 URL: https://issues.apache.org/jira/browse/HADOOP-18699
 Project: Hadoop Common
  Issue Type: Bug
  Components: hdfs-client
Reporter: Siyao Meng


This serves as a PSA for a JDK bug. Not really a bug in Hadoop / HDFS itself.

[~relek] identified [JDK-8292158|https://bugs.openjdk.org/browse/JDK-8292158] 
(backported to JDK 11 in 
[JDK-8295297|https://bugs.openjdk.org/browse/JDK-8295297]) causes HDFS clients 
to fail with InvalidProtocolBufferException due to corrupted protobuf message 
in Hadoop RPC request when all of the below conditions are met:

1. The host is capable of AVX-512 instruction sets
2. AVX-512 is enabled in JVM. This should be enabled by default on AVX-512 
capable hosts, equivalent to specifying JVM arg {{-XX:UseAVX=3}}

As a result, the client could print messages like these:

{code:title=Symptoms on the HDFS client}
2023-02-21 15:21:44,380 WARN org.apache.hadoop.hdfs.DFSClient: Connection 
failure: Failed to connect to  for file 
/tmp/.cloudera_health_monitoring_canary_files/.canary_file_2023_02_21-15_21_25.b6788e89894a61b5
 for block 
BP-1836197545-10.125.248.11-1672668423261:blk_1073935111_194857:com.google.protobuf.InvalidProtocolBufferException:
 Protocol message tag had invalid wire type.
com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had 
invalid wire type.

2023-02-21 15:21:44,378 WARN org.apache.hadoop.hdfs.DFSClient: Connection 
failure: Failed to connect to  for file 
/tmp/.cloudera_health_monitoring_canary_files/.canary_file_2023_02_21-15_21_25.b6788e89894a61b5
 for block 
BP-1836197545--1672668423261:blk_1073935111_194857:com.google.protobuf.InvalidProtocolBufferException:
 Protocol message end-group tag did not match expected tag.
com.google.protobuf.InvalidProtocolBufferException: Protocol message end-group 
tag did not match expected tag.

2023-02-21 15:06:55,530 WARN org.apache.hadoop.hdfs.DFSClient: Connection 
failure: Failed to connect to  for file 
/tmp/.cloudera_health_monitoring_canary_files/.canary_file_2023_02_21-15_06_55.b4a633a8bde014aa
 for block 
BP-1836197545--1672668423261:blk_1073935025_194771:com.google.protobuf.InvalidProtocolBufferException:
 While parsing a protocol message, the input ended unexpectedly in the middle 
of a field. This could mean either than the input has been truncated or that an 
embedded message misreported its own length.
com.google.protobuf.InvalidProtocolBufferException: While parsing a protocol 
message, the input ended unexpectedly in the middle of a field. This could mean 
either than the input has been truncated or that an embedded message 
misreported its own length.
{code}

Solutions:
1. As a workaround, append {{-XX:UseAVX=2}} to client JVM args; or
2. Upgrade to OpenJDK >= 11.0.18.
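
For clients that cannot upgrade right away, workaround (1) can be applied 
through the client JVM options. This is a minimal sketch only, assuming the 
standard {{HADOOP_CLIENT_OPTS}} hook is used for the HDFS client JVM; the exact 
placement (e.g. hadoop-env.sh) depends on the deployment:

{code:bash|title=Workaround sketch}
# Cap the JVM at AVX2 code paths so the buggy AVX-512 AES-CTR intrinsic
# is never taken. HADOOP_CLIENT_OPTS is picked up by client-side JVMs
# such as the hadoop/hdfs CLI.
HADOOP_CLIENT_OPTS="${HADOOP_CLIENT_OPTS} -XX:UseAVX=2"
export HADOOP_CLIENT_OPTS
echo "${HADOOP_CLIENT_OPTS}"
{code}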


I might post a repro test case for this, or find a way in the code to warn the 
user that this could be the cause (and that JDK 11 needs to be upgraded to 
11.0.18 or later) when it occurs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17424) Replace HTrace with No-Op tracer

2023-02-21 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691638#comment-17691638
 ] 

Siyao Meng commented on HADOOP-17424:
-

Hi [~bpatel], it should be easy to cherry-pick this onto the 3.3.0 branch, as it 
is already in 3.3.2. However, this won't be done upstream since branch-3.3.0 is 
a released branch and is 
[frozen|https://github.com/apache/hadoop/tree/branch-3.3.0].

If you would like to, you can cherry-pick commit 
1a205cc3adffa568c814a5241e041b08e2fcd3eb onto your fork's branch-3.3.0 branch 
and compile it.
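
The steps above can be sketched as follows. The commands are printed rather 
than executed so they can be reviewed first; the fork URL and local branch name 
are placeholders, not part of the original comment:

{code:bash|title=Cherry-pick sketch}
# Print the cherry-pick workflow for applying the HADOOP-17424 commit to a
# fork's branch-3.3.0. The commit hash is the one referenced above.
COMMIT=1a205cc3adffa568c814a5241e041b08e2fcd3eb
CMDS="git clone https://github.com/YOUR_FORK/hadoop.git
cd hadoop
git checkout -b branch-3.3.0-notracer origin/branch-3.3.0
git cherry-pick ${COMMIT}
mvn clean package -DskipTests"
printf '%s\n' "${CMDS}"
{code}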

> Replace HTrace with No-Op tracer
> 
>
> Key: HADOOP-17424
> URL: https://issues.apache.org/jira/browse/HADOOP-17424
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 10.5h
>  Remaining Estimate: 0h
>
> Remove HTrace dependency as it is depending on old jackson jars. Use a no-op 
> tracer for now to eliminate potential security issues.
> The plan is to move part of the code in 
> [PR#1846|https://github.com/apache/hadoop/pull/1846] out here for faster 
> review.






[jira] [Updated] (HADOOP-18564) Use file-level checksum by default when copying between two different file systems

2022-12-08 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-18564:

Description: 
h2. Goal

Reduce user friction

h2. Background

When distcp'ing between two different file systems, distcp still uses a 
block-level checksum by default, even though the two file systems can differ 
significantly in how they manage blocks, so a block-level checksum comparison 
no longer makes sense between them.

e.g. distcp between HDFS and Ozone without overriding 
{{dfs.checksum.combine.mode}} throws an IOException because the blocks of the 
same file on the two FSes are different (as expected):

{code}
$ hadoop distcp -i -pp /test o3fs://buck-test1.vol1.ozone1/
java.lang.Exception: java.io.IOException: File copy failed: 
hdfs://duong-1.duong.root.hwx.site:8020/test/test.bin --> 
o3fs://buck-test1.vol1.ozone1/test/test.bin
at 
org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492)
at 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:552)
Caused by: java.io.IOException: File copy failed: 
hdfs://duong-1.duong.root.hwx.site:8020/test/test.bin --> 
o3fs://buck-test1.vol1.ozone1/test/test.bin
at 
org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:219)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:48)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
at 
org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Couldn't run retriable-command: Copying 
hdfs://duong-1.duong.root.hwx.site:8020/test/test.bin to 
o3fs://buck-test1.vol1.ozone1/test/test.bin
at 
org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
at 
org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:258)
... 11 more
Caused by: java.io.IOException: Checksum mismatch between 
hdfs://duong-1.duong.root.hwx.site:8020/test/test.bin and 
o3fs://buck-test1.vol1.ozone1/.distcp.tmp.attempt_local1346550241_0001_m_00_0.Source
 and destination filesystems are of different types
Their checksum algorithms may be incompatible You can choose file-level 
checksum validation via -Ddfs.checksum.combine.mode=COMPOSITE_CRC when 
block-sizes or filesystems are different. Or you can skip checksum-checks 
altogether  with -skipcrccheck.
{code}

And it works when we use a file-level checksum like {{COMPOSITE_CRC}}:

{code:title=With -Ddfs.checksum.combine.mode=COMPOSITE_CRC}
$ hadoop distcp -i -pp /test o3fs://buck-test2.vol1.ozone1/ 
-Ddfs.checksum.combine.mode=COMPOSITE_CRC
22/10/18 19:07:42 INFO mapreduce.Job: Job job_local386071499_0001 completed 
successfully
22/10/18 19:07:42 INFO mapreduce.Job: Counters: 30
File System Counters
FILE: Number of bytes read=219900
FILE: Number of bytes written=794129
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=0
HDFS: Number of bytes written=0
HDFS: Number of read operations=13
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
HDFS: Number of bytes read erasure-coded=0
O3FS: Number of bytes read=0
O3FS: Number of bytes written=0
O3FS: Number of read operations=5
O3FS: Number of large read operations=0
O3FS: Number of write operations=0
..
{code}

h2. Alternative

(Changing the global default could potentially break distcp'ing between 
HDFS/S3/etc. Also, [~weichiu] mentioned COMPOSITE_CRC was only added in Hadoop 
3.1.1, so this might be the only way.)

Don't touch the global default, and make it a client-side config.

e.g. add a config to automatically use COMPOSITE_CRC 
({{dfs.checksum.combine.mode}}) when distcp'ing between HDFS and Ozone, which 
would be the equivalent of specifying 
{{-Ddfs.checksum.combine.mode=COMPOSITE_CRC}} on the distcp command line, but 
the end user won't have to specify it every single time.
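
Until such a config exists, the proposed behavior can be approximated on the 
client side with a small wrapper. This is an illustrative sketch only; the 
wrapper name and the idea of keying on the Ozone URI scheme are assumptions, 
not an existing Hadoop feature:

{code:bash|title=Hypothetical client-side wrapper}
# Inject COMPOSITE_CRC automatically when the destination is an Ozone URI.
distcp_auto_crc() {
  dest=""
  for arg in "$@"; do dest="$arg"; done   # last argument is the destination
  extra=""
  case "$dest" in
    o3fs://*|ofs://*) extra="-Ddfs.checksum.combine.mode=COMPOSITE_CRC" ;;
  esac
  # echoed for illustration; drop the echo to run the real command
  echo hadoop distcp $extra "$@"
}
distcp_auto_crc -i -pp /test o3fs://buck-test1.vol1.ozone1/
{code}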


cc [~duongnguyen] [~weichiu]


[jira] [Created] (HADOOP-18564) Use file-level checksum by default when copying between two different file systems

2022-12-08 Thread Siyao Meng (Jira)
Siyao Meng created HADOOP-18564:
---

 Summary: Use file-level checksum by default when copying between 
two different file systems
 Key: HADOOP-18564
 URL: https://issues.apache.org/jira/browse/HADOOP-18564
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Siyao Meng


h2. Goal

Reduce user friction

h2. Background

When distcp'ing between two different file systems, distcp still uses a 
block-level checksum by default, even though the two file systems can differ 
significantly in how they manage blocks, so a block-level checksum comparison 
no longer makes sense between them.

e.g. distcp between HDFS and Ozone without overriding 
{{dfs.checksum.combine.mode}} throws an IOException because the blocks of the 
same file on the two FSes are different (as expected):

{code}
$ hadoop distcp -i -pp /test o3fs://buck-test1.vol1.ozone1/
java.lang.Exception: java.io.IOException: File copy failed: 
hdfs://duong-1.duong.root.hwx.site:8020/test/test.bin --> 
o3fs://buck-test1.vol1.ozone1/test/test.bin
at 
org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492)
at 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:552)
Caused by: java.io.IOException: File copy failed: 
hdfs://duong-1.duong.root.hwx.site:8020/test/test.bin --> 
o3fs://buck-test1.vol1.ozone1/test/test.bin
at 
org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:219)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:48)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
at 
org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Couldn't run retriable-command: Copying 
hdfs://duong-1.duong.root.hwx.site:8020/test/test.bin to 
o3fs://buck-test1.vol1.ozone1/test/test.bin
at 
org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
at 
org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:258)
... 11 more
Caused by: java.io.IOException: Checksum mismatch between 
hdfs://duong-1.duong.root.hwx.site:8020/test/test.bin and 
o3fs://buck-test1.vol1.ozone1/.distcp.tmp.attempt_local1346550241_0001_m_00_0.Source
 and destination filesystems are of different types
Their checksum algorithms may be incompatible You can choose file-level 
checksum validation via -Ddfs.checksum.combine.mode=COMPOSITE_CRC when 
block-sizes or filesystems are different. Or you can skip checksum-checks 
altogether  with -skipcrccheck.
{code}

And it works when we use a file-level checksum like {{COMPOSITE_CRC}}:

{code:title=With -Ddfs.checksum.combine.mode=COMPOSITE_CRC}
$ hadoop distcp -i -pp /test o3fs://buck-test2.vol1.ozone1/ 
-Ddfs.checksum.combine.mode=COMPOSITE_CRC
22/10/18 19:07:42 INFO mapreduce.Job: Job job_local386071499_0001 completed 
successfully
22/10/18 19:07:42 INFO mapreduce.Job: Counters: 30
File System Counters
FILE: Number of bytes read=219900
FILE: Number of bytes written=794129
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=0
HDFS: Number of bytes written=0
HDFS: Number of read operations=13
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
HDFS: Number of bytes read erasure-coded=0
O3FS: Number of bytes read=0
O3FS: Number of bytes written=0
O3FS: Number of read operations=5
O3FS: Number of large read operations=0
O3FS: Number of write operations=0
..
{code}

h2. Alternative

(Changing the global default could potentially break distcp'ing between 
HDFS/S3/etc.)

Don't touch the global default, and make it a client-side config.

e.g. add a config to automatically use COMPOSITE_CRC 
({{dfs.checksum.combine.mode}}) when distcp'ing between HDFS and Ozone, which 
would be the equivalent of specifying 
{{-Ddfs.checksum.combine.mode=COMPOSITE_CRC}} on the distcp command line, but 
the end user won't have to specify it every single time.

[jira] [Resolved] (HADOOP-18101) Bump aliyun-sdk-oss to 3.13.2 and jdom2 to 2.0.6.1

2022-02-03 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng resolved HADOOP-18101.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

> Bump aliyun-sdk-oss to 3.13.2 and jdom2 to 2.0.6.1
> --
>
> Key: HADOOP-18101
> URL: https://issues.apache.org/jira/browse/HADOOP-18101
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Aswin Shakil Balasubramanian
>Assignee: Aswin Shakil Balasubramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The current aliyun-sdk-oss 3.13.0 is affected by 
> [CVE-2021-33813|https://github.com/advisories/GHSA-2363-cqg2-863c] due to 
> jdom 2.0.6. maven-shade-plugin is also affected by the CVE. 
> Bumping aliyun-sdk-oss to 3.13.2 and jdom2 to 2.0.6.1 will resolve this issue.
> {code:java}
> [INFO] +- org.apache.maven.plugins:maven-shade-plugin:jar:3.2.1:provided
> [INFO] |  +- 
> org.apache.maven.shared:maven-artifact-transfer:jar:0.10.0:provided
> [INFO] |  +- org.jdom:jdom2:jar:2.0.6:provided
> ..
> [INFO] +- com.aliyun.oss:aliyun-sdk-oss:jar:3.13.1:compile
> [INFO] |  +- org.jdom:jdom2:jar:2.0.6:compile
> {code}
>  






[jira] [Commented] (HADOOP-17723) [build] fix the Dockerfile for ARM

2021-12-14 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17459557#comment-17459557
 ] 

Siyao Meng commented on HADOOP-17723:
-

[~weichiu] Posted HADOOP-18048. The plan is to merge it to branch-3.3 first, 
then backport it to branch-3.3.2.

> [build] fix the Dockerfile for ARM
> --
>
> Key: HADOOP-17723
> URL: https://issues.apache.org/jira/browse/HADOOP-17723
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Running the create-release script for Hadoop 3.3.1 on an ARM machine, docker 
> image fails to build:
> {noformat}
> aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g 
> -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time 
> -D_FORTIFY_SOURCE=2 -fPIC -Iast27/Include -I/usr/include/python3.6m -c 
> ast27/Parser/acceler.c -o build/temp.linux-aarch64-3.6/ast27/Parser/acceler.o 
>   
>  In file included from 
> ast27/Parser/acceler.c:13:0:  
>   ast27/Parser/../Include/pgenheaders.h:8:10: 
> fatal error: Python.h: No such file or directory  
>  #include "Python.h"  
>   
> ^~
>   compilation terminated. 
>   
> error: command 'aarch64-linux-gnu-gcc' failed with exit 
> status 1
> {noformat}
> The missing Python3.h requires python3-dev package: 
> https://stackoverflow.com/questions/21530577/fatal-error-python-h-no-such-file-or-directory
> The PhantomJS binary was built for Xenial, doesn't run after the Dockerfile 
> migrated to Bionic/Focal. Fortunately Bionic/Focal has official PhantomJS 
> packages.






[jira] [Updated] (HADOOP-18048) [branch-3.3] Dockerfile_aarch64 build fails with fatal error: Python.h: No such file or directory

2021-12-14 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-18048:

Status: Patch Available  (was: Open)

> [branch-3.3] Dockerfile_aarch64 build fails with fatal error: Python.h: No 
> such file or directory
> -
>
> Key: HADOOP-18048
> URL: https://issues.apache.org/jira/browse/HADOOP-18048
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> See previous discussion: 
> https://issues.apache.org/jira/browse/HADOOP-17723?focusedCommentId=17452329&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17452329






[jira] [Updated] (HADOOP-18048) [branch-3.3] Dockerfile_aarch64 build fails with fatal error: Python.h: No such file or directory

2021-12-14 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-18048:

Target Version/s: 3.3.2, 3.3.3  (was: 3.3.2)

> [branch-3.3] Dockerfile_aarch64 build fails with fatal error: Python.h: No 
> such file or directory
> -
>
> Key: HADOOP-18048
> URL: https://issues.apache.org/jira/browse/HADOOP-18048
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> See previous discussion: 
> https://issues.apache.org/jira/browse/HADOOP-17723?focusedCommentId=17452329&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17452329






[jira] [Created] (HADOOP-18048) [branch-3.3] Dockerfile_aarch64 build fails with fatal error: Python.h: No such file or directory

2021-12-14 Thread Siyao Meng (Jira)
Siyao Meng created HADOOP-18048:
---

 Summary: [branch-3.3] Dockerfile_aarch64 build fails with fatal 
error: Python.h: No such file or directory
 Key: HADOOP-18048
 URL: https://issues.apache.org/jira/browse/HADOOP-18048
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Siyao Meng
Assignee: Siyao Meng


See previous discussion: 
https://issues.apache.org/jira/browse/HADOOP-17723?focusedCommentId=17452329&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17452329






[jira] [Commented] (HADOOP-17723) [build] fix the Dockerfile for ARM

2021-12-06 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17454198#comment-17454198
 ] 

Siyao Meng commented on HADOOP-17723:
-

[~weichiu] Yup, will post the one-line change for branch-3.3 later.

> [build] fix the Dockerfile for ARM
> --
>
> Key: HADOOP-17723
> URL: https://issues.apache.org/jira/browse/HADOOP-17723
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Running the create-release script for Hadoop 3.3.1 on an ARM machine, docker 
> image fails to build:
> {noformat}
> aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g 
> -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time 
> -D_FORTIFY_SOURCE=2 -fPIC -Iast27/Include -I/usr/include/python3.6m -c 
> ast27/Parser/acceler.c -o build/temp.linux-aarch64-3.6/ast27/Parser/acceler.o 
>   
>  In file included from 
> ast27/Parser/acceler.c:13:0:  
>   ast27/Parser/../Include/pgenheaders.h:8:10: 
> fatal error: Python.h: No such file or directory  
>  #include "Python.h"  
>   
> ^~
>   compilation terminated. 
>   
> error: command 'aarch64-linux-gnu-gcc' failed with exit 
> status 1
> {noformat}
> The missing Python3.h requires python3-dev package: 
> https://stackoverflow.com/questions/21530577/fatal-error-python-h-no-such-file-or-directory
> The PhantomJS binary was built for Xenial, doesn't run after the Dockerfile 
> migrated to Bionic/Focal. Fortunately Bionic/Focal has official PhantomJS 
> packages.






[jira] [Commented] (HADOOP-17509) Parallelize building of dependencies

2021-12-06 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17454197#comment-17454197
 ] 

Siyao Meng commented on HADOOP-17509:
-

[~gaurava] Got it. Thanks!

> Parallelize building of dependencies
> 
>
> Key: HADOOP-17509
> URL: https://issues.apache.org/jira/browse/HADOOP-17509
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Need to use make -j$(nproc) to parallelize building of Protocol buffers and 
> Intel ISA - L dependency.






[jira] [Comment Edited] (HADOOP-17723) [build] fix the Dockerfile for ARM

2021-12-02 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17452329#comment-17452329
 ] 

Siyao Meng edited comment on HADOOP-17723 at 12/2/21, 11:34 AM:


Hmm. This patch doesn't fix the "header not found" ({{Python.h: No such file}}) 
issue for me. It is possible that a previous version of the {{python2.7}} 
package for {{ubuntu:bionic}} contained the header but it is now gone.

{code:title=docker build -t apache/hadoop:3 -f Dockerfile_aarch64 .}
#13 9.705 In file included from ast27/Parser/acceler.c:13:0:
#13 9.705 ast27/Parser/../Include/pgenheaders.h:8:10: fatal error: 
Python.h: No such file or directory
#13 9.705  #include "Python.h"
#13 9.705   ^~
#13 9.705 compilation terminated.
#13 9.705 error: command 'aarch64-linux-gnu-gcc' failed with exit status 1
{code}

I am able to work around the issue by installing the {{python3-dev}} package 
(which should have the Python.h header file):

{code:bash|title=Fix}
diff --git a/dev-support/docker/Dockerfile_aarch64 
b/dev-support/docker/Dockerfile_aarch64
index 46818a6e234..63de24146a4 100644
--- a/dev-support/docker/Dockerfile_aarch64
+++ b/dev-support/docker/Dockerfile_aarch64
@@ -74,6 +74,7 @@ RUN apt-get -q update \
 pkg-config \
 python2.7 \
 python3 \
+python3-dev \
 python3-pip \
 python3-pkg-resources \
 python3-setuptools \
{code}

Not sure why {{python3-dev}} is mentioned in the jira description but somehow 
omitted in the PR?

With the diff above, {{cd dev-support/docker/ && docker build -t 
apache/hadoop:3 -f Dockerfile_aarch64 .}} works for me. Same deal for 
branch-3.3.2 (need to add python3-dev). trunk (3.4.0) works without any changes.


was (Author: smeng):
Hmm. This patch doesn't fix the "header not found" ({{Python.h: No such file}}) 
issue for me. It is possible that a previous version of {{python2.7}} package 
for {{ubuntu:bionic}} contains the header but now it is gone.

{code:title=docker build -t apache/hadoop:3 -f Dockerfile_aarch64 .}
#13 9.705 In file included from ast27/Parser/acceler.c:13:0:
#13 9.705 ast27/Parser/../Include/pgenheaders.h:8:10: fatal error: 
Python.h: No such file or directory
#13 9.705  #include "Python.h"
#13 9.705   ^~
#13 9.705 compilation terminated.
#13 9.705 error: command 'aarch64-linux-gnu-gcc' failed with exit status 1
{code}

I am able to work around the issue by installing the {{python3-dev}} package 
(which should have the Python.h header file):

{code:bash|title=Fix}
diff --git a/dev-support/docker/Dockerfile_aarch64 
b/dev-support/docker/Dockerfile_aarch64
index 46818a6e234..63de24146a4 100644
--- a/dev-support/docker/Dockerfile_aarch64
+++ b/dev-support/docker/Dockerfile_aarch64
@@ -74,6 +74,7 @@ RUN apt-get -q update \
 pkg-config \
 python2.7 \
 python3 \
+python3-dev \
 python3-pip \
 python3-pkg-resources \
 python3-setuptools \
{code}

Not sure why {{python3-dev}} is mentioned in the jira description but somehow 
omitted in the PR?

With the diff above, {{cd dev-support/docker/ && docker build -t 
apache/hadoop:3 -f Dockerfile_aarch64 .}} works for me. Same deal on 
branch-3.3.2. trunk (3.4.0) has some other issues (invalid repo signature).

> [build] fix the Dockerfile for ARM
> --
>
> Key: HADOOP-17723
> URL: https://issues.apache.org/jira/browse/HADOOP-17723
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Running the create-release script for Hadoop 3.3.1 on an ARM machine, docker 
> image fails to build:
> {noformat}
> aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g 
> -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time 
> -D_FORTIFY_SOURCE=2 -fPIC -Iast27/Include -I/usr/include/python3.6m -c 
> ast27/Parser/acceler.c -o build/temp.linux-aarch64-3.6/ast27/Parser/acceler.o 
>   
>  In file included from 
> ast27/Parser/acceler.c:13:0:  
>   ast27/Parser/../Include/pgenheaders.h:8:10: 
> fatal error: Python.h: No such file or directory  
>  #include "Python.h"  
>   
> ^~  

[jira] [Comment Edited] (HADOOP-17723) [build] fix the Dockerfile for ARM

2021-12-02 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17452329#comment-17452329
 ] 

Siyao Meng edited comment on HADOOP-17723 at 12/2/21, 11:10 AM:


Hmm. This patch doesn't fix the "header not found" ({{Python.h: No such file}}) 
issue for me. It is possible that a previous version of the {{python2.7}} 
package for {{ubuntu:bionic}} contained the header but it is now gone.

{code:title=docker build -t apache/hadoop:3 -f Dockerfile_aarch64 .}
#13 9.705 In file included from ast27/Parser/acceler.c:13:0:
#13 9.705 ast27/Parser/../Include/pgenheaders.h:8:10: fatal error: 
Python.h: No such file or directory
#13 9.705  #include "Python.h"
#13 9.705   ^~
#13 9.705 compilation terminated.
#13 9.705 error: command 'aarch64-linux-gnu-gcc' failed with exit status 1
{code}

I am able to work around the issue by installing the {{python3-dev}} package 
(which should have the Python.h header file):

{code:bash|title=Fix}
diff --git a/dev-support/docker/Dockerfile_aarch64 
b/dev-support/docker/Dockerfile_aarch64
index 46818a6e234..63de24146a4 100644
--- a/dev-support/docker/Dockerfile_aarch64
+++ b/dev-support/docker/Dockerfile_aarch64
@@ -74,6 +74,7 @@ RUN apt-get -q update \
 pkg-config \
 python2.7 \
 python3 \
+python3-dev \
 python3-pip \
 python3-pkg-resources \
 python3-setuptools \
{code}

Not sure why {{python3-dev}} is mentioned in the jira description but somehow 
omitted in the PR?

With the diff above, {{cd dev-support/docker/ && docker build -t 
apache/hadoop:3 -f Dockerfile_aarch64 .}} works for me. Same deal on 
branch-3.3.2. trunk (3.4.0) has some other issues (invalid repo signature).


was (Author: smeng):
Hmm. This patch doesn't fix the "header not found" ({{Python.h: No such file}}) 
issue for me. It is possible that a previous version of {{python2.7}} package 
for {{ubuntu:bionic}} contains the header but now it is gone.

{code:title=docker build -t apache/hadoop:3 -f Dockerfile_aarch64 .}
#13 9.705 In file included from ast27/Parser/acceler.c:13:0:
#13 9.705 ast27/Parser/../Include/pgenheaders.h:8:10: fatal error: 
Python.h: No such file or directory
#13 9.705  #include "Python.h"
#13 9.705   ^~
#13 9.705 compilation terminated.
#13 9.705 error: command 'aarch64-linux-gnu-gcc' failed with exit status 1
{code}

I am able to work around the issue by installing the {{python3-dev}} package 
(which should have the Python.h header file):

{code:bash|title=Fix}
diff --git a/dev-support/docker/Dockerfile_aarch64 
b/dev-support/docker/Dockerfile_aarch64
index 46818a6e234..63de24146a4 100644
--- a/dev-support/docker/Dockerfile_aarch64
+++ b/dev-support/docker/Dockerfile_aarch64
@@ -74,6 +74,7 @@ RUN apt-get -q update \
 pkg-config \
 python2.7 \
 python3 \
+python3-dev \
 python3-pip \
 python3-pkg-resources \
 python3-setuptools \
{code}

Not sure why {{python3-dev}} is mentioned in the jira description but somehow 
omitted in the PR?

With the diff above, {{cd dev-support/docker/ && docker build -t 
apache/hadoop:3 -f Dockerfile_aarch64 .}} works for me. I might test this on 
branch-3.3.2/trunk and post a PR if it also fails on that branch.

> [build] fix the Dockerfile for ARM
> --
>
> Key: HADOOP-17723
> URL: https://issues.apache.org/jira/browse/HADOOP-17723
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Running the create-release script for Hadoop 3.3.1 on an ARM machine, docker 
> image fails to build:
> {noformat}
> aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g 
> -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time 
> -D_FORTIFY_SOURCE=2 -fPIC -Iast27/Include -I/usr/include/python3.6m -c 
> ast27/Parser/acceler.c -o build/temp.linux-aarch64-3.6/ast27/Parser/acceler.o 
>   
>  In file included from 
> ast27/Parser/acceler.c:13:0:  
>   ast27/Parser/../Include/pgenheaders.h:8:10: 
> fatal error: Python.h: No such file or directory  
>  #include "Python.h"  
>   
> ^~  

[jira] [Comment Edited] (HADOOP-17723) [build] fix the Dockerfile for ARM

2021-12-02 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17452329#comment-17452329
 ] 

Siyao Meng edited comment on HADOOP-17723 at 12/2/21, 10:58 AM:


Hmm. This patch doesn't fix the "header not found" ({{Python.h: No such file}}) 
issue for me. It is possible that a previous version of the {{python2.7}} 
package for {{ubuntu:bionic}} contained the header but it is now gone.

{code:title=docker build -t apache/hadoop:3 -f Dockerfile_aarch64 .}
#13 9.705 In file included from ast27/Parser/acceler.c:13:0:
#13 9.705 ast27/Parser/../Include/pgenheaders.h:8:10: fatal error: 
Python.h: No such file or directory
#13 9.705  #include "Python.h"
#13 9.705   ^~
#13 9.705 compilation terminated.
#13 9.705 error: command 'aarch64-linux-gnu-gcc' failed with exit status 1
{code}

I am able to work around the issue by installing the {{python3-dev}} package 
(which should have the Python.h header file):

{code:bash|title=Fix}
diff --git a/dev-support/docker/Dockerfile_aarch64 
b/dev-support/docker/Dockerfile_aarch64
index 46818a6e234..63de24146a4 100644
--- a/dev-support/docker/Dockerfile_aarch64
+++ b/dev-support/docker/Dockerfile_aarch64
@@ -74,6 +74,7 @@ RUN apt-get -q update \
 pkg-config \
 python2.7 \
 python3 \
+python3-dev \
 python3-pip \
 python3-pkg-resources \
 python3-setuptools \
{code}

I'm not sure why {{python3-dev}} is mentioned in the jira description but 
omitted from the PR.

With the diff above, {{cd dev-support/docker/ && docker build -t 
apache/hadoop:3 -f Dockerfile_aarch64 .}} works for me. I might test this on 
branch-3.3.2/trunk and post a PR if it also fails on that branch.
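As a quick sanity check of where {{Python.h}} is expected to live (a generic sketch, not part of the Hadoop build; the path comes from Python's own {{sysconfig}} module):

```shell
# Ask the interpreter which include directory C extensions compile against;
# Python.h appears there only after the dev headers (python3-dev) are installed.
inc_dir="$(python3 -c 'import sysconfig; print(sysconfig.get_paths()["include"])')"
echo "include dir: ${inc_dir}"
# Hedged check: report rather than fail if the header is absent.
ls "${inc_dir}/Python.h" 2>/dev/null || echo "Python.h not found (is python3-dev installed?)"
```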


was (Author: smeng):
Hmm. This patch doesn't fix the "header not found" ({{Python.h: No such file}}) 
issue for me. It is possible that a previous version of the {{python2.7}} package 
for {{ubuntu:bionic}} contained the header but it is now gone.

{code:title=docker build -t apache/hadoop:3 -f Dockerfile_aarch64 .}
#13 9.705 In file included from ast27/Parser/acceler.c:13:0:
#13 9.705 ast27/Parser/../Include/pgenheaders.h:8:10: fatal error: 
Python.h: No such file or directory
#13 9.705  #include "Python.h"
#13 9.705   ^~
#13 9.705 compilation terminated.
#13 9.705 error: command 'aarch64-linux-gnu-gcc' failed with exit status 1
{code}

I am able to work around the issue by installing the {{python3-dev}} package 
(which should have the Python.h header file):

{code:bash|title=Fix}
diff --git a/dev-support/docker/Dockerfile_aarch64 
b/dev-support/docker/Dockerfile_aarch64
index 46818a6e234..737539b3c25 100644
--- a/dev-support/docker/Dockerfile_aarch64
+++ b/dev-support/docker/Dockerfile_aarch64
@@ -72,8 +72,8 @@ RUN apt-get -q update \
 phantomjs \
 pinentry-curses \
 pkg-config \
-python2.7 \
 python3 \
+python3-dev \
 python3-pip \
 python3-pkg-resources \
 python3-setuptools \
{code}

With the diff above, {{cd dev-support/docker/ && docker build -t 
apache/hadoop:3 -f Dockerfile_aarch64 .}} works for me. I might test this on 
branch-3.3.2/trunk and post a PR if it also fails on that branch.


[jira] [Comment Edited] (HADOOP-17723) [build] fix the Dockerfile for ARM

2021-12-02 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17452329#comment-17452329
 ] 

Siyao Meng edited comment on HADOOP-17723 at 12/2/21, 10:55 AM:


Hmm. This patch doesn't fix the "header not found" ({{Python.h: No such file}}) 
issue for me. It is possible that a previous version of the {{python2.7}} package 
for {{ubuntu:bionic}} contained the header but it is now gone.

{code:title=docker build -t apache/hadoop:3 -f Dockerfile_aarch64 .}
#13 9.705 In file included from ast27/Parser/acceler.c:13:0:
#13 9.705 ast27/Parser/../Include/pgenheaders.h:8:10: fatal error: 
Python.h: No such file or directory
#13 9.705  #include "Python.h"
#13 9.705   ^~
#13 9.705 compilation terminated.
#13 9.705 error: command 'aarch64-linux-gnu-gcc' failed with exit status 1
{code}

I am able to work around the issue by installing the {{python3-dev}} package 
(which should have the Python.h header file):

{code:patch|title=Fix}
diff --git a/dev-support/docker/Dockerfile_aarch64 
b/dev-support/docker/Dockerfile_aarch64
index 46818a6e234..737539b3c25 100644
--- a/dev-support/docker/Dockerfile_aarch64
+++ b/dev-support/docker/Dockerfile_aarch64
@@ -72,8 +72,8 @@ RUN apt-get -q update \
 phantomjs \
 pinentry-curses \
 pkg-config \
-python2.7 \
 python3 \
+python3-dev \
 python3-pip \
 python3-pkg-resources \
 python3-setuptools \
{code}

With the diff above, {{cd dev-support/docker/ && docker build -t 
apache/hadoop:3 -f Dockerfile_aarch64 .}} works for me. I might test this on 
branch-3.3.2/trunk and post a PR if it also fails on that branch.


was (Author: smeng):
Hmm. This patch doesn't fix the "header not found" ({{Python.h: No such file}}) 
issue for me. It is possible that a previous version of the {{python2.7}} package 
for {{ubuntu:bionic}} contained the header but it is now gone.

{code:title=docker build -t apache/hadoop:3 -f Dockerfile_aarch64 .}
#13 9.705 In file included from ast27/Parser/acceler.c:13:0:
#13 9.705 ast27/Parser/../Include/pgenheaders.h:8:10: fatal error: 
Python.h: No such file or directory
#13 9.705  #include "Python.h"
#13 9.705   ^~
#13 9.705 compilation terminated.
#13 9.705 error: command 'aarch64-linux-gnu-gcc' failed with exit status 1
{code}

I am able to work around the issue by installing the {{python3-dev}} package 
(which should have the Python.h header file):

{code:diff|title=Fix}
diff --git a/dev-support/docker/Dockerfile_aarch64 
b/dev-support/docker/Dockerfile_aarch64
index 46818a6e234..737539b3c25 100644
--- a/dev-support/docker/Dockerfile_aarch64
+++ b/dev-support/docker/Dockerfile_aarch64
@@ -72,8 +72,8 @@ RUN apt-get -q update \
 phantomjs \
 pinentry-curses \
 pkg-config \
-python2.7 \
 python3 \
+python3-dev \
 python3-pip \
 python3-pkg-resources \
 python3-setuptools \
{code}

With the diff above, {{cd dev-support/docker/ && docker build -t 
apache/hadoop:3 -f Dockerfile_aarch64 .}} works for me. I might test this on 
branch-3.3.2/trunk and post a PR if it also fails on that branch.


[jira] [Commented] (HADOOP-17723) [build] fix the Dockerfile for ARM

2021-12-02 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17452329#comment-17452329
 ] 

Siyao Meng commented on HADOOP-17723:
-

Hmm. This patch doesn't fix the "header not found" ({{Python.h: No such file}}) 
issue for me. It is possible that a previous version of the {{python2.7}} package 
for {{ubuntu:bionic}} contained the header but it is now gone.

{code:title=docker build -t apache/hadoop:3 -f Dockerfile_aarch64 .}
#13 9.705 In file included from ast27/Parser/acceler.c:13:0:
#13 9.705 ast27/Parser/../Include/pgenheaders.h:8:10: fatal error: 
Python.h: No such file or directory
#13 9.705  #include "Python.h"
#13 9.705   ^~
#13 9.705 compilation terminated.
#13 9.705 error: command 'aarch64-linux-gnu-gcc' failed with exit status 1
{code}

I am able to work around the issue by installing the {{python3-dev}} package 
(which should have the Python.h header file):

{code:diff|title=Fix}
diff --git a/dev-support/docker/Dockerfile_aarch64 
b/dev-support/docker/Dockerfile_aarch64
index 46818a6e234..737539b3c25 100644
--- a/dev-support/docker/Dockerfile_aarch64
+++ b/dev-support/docker/Dockerfile_aarch64
@@ -72,8 +72,8 @@ RUN apt-get -q update \
 phantomjs \
 pinentry-curses \
 pkg-config \
-python2.7 \
 python3 \
+python3-dev \
 python3-pip \
 python3-pkg-resources \
 python3-setuptools \
{code}

With the diff above, {{cd dev-support/docker/ && docker build -t 
apache/hadoop:3 -f Dockerfile_aarch64 .}} works for me. I might test this on 
branch-3.3.2/trunk and post a PR if it also fails on that branch.

> [build] fix the Dockerfile for ARM
> --
>
> Key: HADOOP-17723
> URL: https://issues.apache.org/jira/browse/HADOOP-17723
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Running the create-release script for Hadoop 3.3.1 on an ARM machine, docker 
> image fails to build:
> {noformat}
> aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g 
> -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time 
> -D_FORTIFY_SOURCE=2 -fPIC -Iast27/Include -I/usr/include/python3.6m -c 
> ast27/Parser/acceler.c -o build/temp.linux-aarch64-3.6/ast27/Parser/acceler.o 
> In file included from ast27/Parser/acceler.c:13:0:
> ast27/Parser/../Include/pgenheaders.h:8:10: fatal error: Python.h: No such file or directory
>  #include "Python.h"
>            ^~
> compilation terminated.
> error: command 'aarch64-linux-gnu-gcc' failed with exit status 1
> {noformat}
> The missing Python3.h requires python3-dev package: 
> https://stackoverflow.com/questions/21530577/fatal-error-python-h-no-such-file-or-directory
> The PhantomJS binary was built for Xenial, doesn't run after the Dockerfile 
> migrated to Bionic/Focal. Fortunately Bionic/Focal has official PhantomJS 
> packages.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-17723) [build] fix the Dockerfile for ARM

2021-12-02 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17452329#comment-17452329
 ] 

Siyao Meng edited comment on HADOOP-17723 at 12/2/21, 10:55 AM:


Hmm. This patch doesn't fix the "header not found" ({{Python.h: No such file}}) 
issue for me. It is possible that a previous version of the {{python2.7}} package 
for {{ubuntu:bionic}} contained the header but it is now gone.

{code:title=docker build -t apache/hadoop:3 -f Dockerfile_aarch64 .}
#13 9.705 In file included from ast27/Parser/acceler.c:13:0:
#13 9.705 ast27/Parser/../Include/pgenheaders.h:8:10: fatal error: 
Python.h: No such file or directory
#13 9.705  #include "Python.h"
#13 9.705   ^~
#13 9.705 compilation terminated.
#13 9.705 error: command 'aarch64-linux-gnu-gcc' failed with exit status 1
{code}

I am able to work around the issue by installing the {{python3-dev}} package 
(which should have the Python.h header file):

{code:bash|title=Fix}
diff --git a/dev-support/docker/Dockerfile_aarch64 
b/dev-support/docker/Dockerfile_aarch64
index 46818a6e234..737539b3c25 100644
--- a/dev-support/docker/Dockerfile_aarch64
+++ b/dev-support/docker/Dockerfile_aarch64
@@ -72,8 +72,8 @@ RUN apt-get -q update \
 phantomjs \
 pinentry-curses \
 pkg-config \
-python2.7 \
 python3 \
+python3-dev \
 python3-pip \
 python3-pkg-resources \
 python3-setuptools \
{code}

With the diff above, {{cd dev-support/docker/ && docker build -t 
apache/hadoop:3 -f Dockerfile_aarch64 .}} works for me. I might test this on 
branch-3.3.2/trunk and post a PR if it also fails on that branch.


was (Author: smeng):
Hmm. This patch doesn't fix the "header not found" ({{Python.h: No such file}}) 
issue for me. It is possible that a previous version of the {{python2.7}} package 
for {{ubuntu:bionic}} contained the header but it is now gone.

{code:title=docker build -t apache/hadoop:3 -f Dockerfile_aarch64 .}
#13 9.705 In file included from ast27/Parser/acceler.c:13:0:
#13 9.705 ast27/Parser/../Include/pgenheaders.h:8:10: fatal error: 
Python.h: No such file or directory
#13 9.705  #include "Python.h"
#13 9.705   ^~
#13 9.705 compilation terminated.
#13 9.705 error: command 'aarch64-linux-gnu-gcc' failed with exit status 1
{code}

I am able to work around the issue by installing the {{python3-dev}} package 
(which should have the Python.h header file):

{code:patch|title=Fix}
diff --git a/dev-support/docker/Dockerfile_aarch64 
b/dev-support/docker/Dockerfile_aarch64
index 46818a6e234..737539b3c25 100644
--- a/dev-support/docker/Dockerfile_aarch64
+++ b/dev-support/docker/Dockerfile_aarch64
@@ -72,8 +72,8 @@ RUN apt-get -q update \
 phantomjs \
 pinentry-curses \
 pkg-config \
-python2.7 \
 python3 \
+python3-dev \
 python3-pip \
 python3-pkg-resources \
 python3-setuptools \
{code}

With the diff above, {{cd dev-support/docker/ && docker build -t 
apache/hadoop:3 -f Dockerfile_aarch64 .}} works for me. I might test this on 
branch-3.3.2/trunk and post a PR if it also fails on that branch.


[jira] [Updated] (HADOOP-18031) Build arm64 (aarch64) and x86_64 image with the same Dockerfile

2021-12-02 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-18031:

Summary: Build arm64 (aarch64) and x86_64 image with the same Dockerfile  
(was: Dockerfile: Support arm64 (aarch64))

> Build arm64 (aarch64) and x86_64 image with the same Dockerfile
> ---
>
> Key: HADOOP-18031
> URL: https://issues.apache.org/jira/browse/HADOOP-18031
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-18031.branch-3.3.1.001.testing.patch
>
>
> -Support building Linux arm64 (aarch64) Docker images. And bump up some 
> dependency versions.-
> -Note: This only provides an arm64 *runtime* environment for Hadoop 3. But not 
> a full environment for compiling Hadoop 3 in arm64 yet. For the latter, gRPC 
> may well need to be compiled from source because it hasn't started 
> distributing Linux arm64 binaries yet.-
> -The patch for branch-3.3 is ready. I developed this patch on branch-3.3.1 
> when I was trying to build arm64 Linux Hadoop Docker image. For trunk 
> (3.4.0), due to HADOOP-17727, I need to post a different PR.-
> Just realized we already had {{Dockerfile_aarch64}}. Will try it out.
> My approach builds the Docker images for both architectures (x86_64 and 
> aarch64) with the same {{Dockerfile}}.
> We should push the built arm64 image to Docker Hub. I only see amd64 
> [there|https://hub.docker.com/r/apache/hadoop/tags] so I assumed we didn't 
> have an arm64 Docker image, hmm.






[jira] [Updated] (HADOOP-18031) Dockerfile: Support arm64 (aarch64)

2021-12-02 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-18031:

Description: 
-Support building Linux arm64 (aarch64) Docker images. And bump up some 
dependency versions.-

-Note: This only provides an arm64 *runtime* environment for Hadoop 3. But not a 
full environment for compiling Hadoop 3 in arm64 yet. For the latter, gRPC may 
well need to be compiled from source because it hasn't started distributing 
Linux arm64 binaries yet.-

-The patch for branch-3.3 is ready. I developed this patch on branch-3.3.1 when 
I was trying to build arm64 Linux Hadoop Docker image. For trunk (3.4.0), due 
to HADOOP-17727, I need to post a different PR.-

Just realized we already had {{Dockerfile_aarch64}}. Will try it out.

My approach builds the Docker images for both architectures (x86_64 and 
aarch64) with the same {{Dockerfile}}.

We should push the built arm64 image to Docker Hub. I only see amd64 
[there|https://hub.docker.com/r/apache/hadoop/tags] so I assumed we didn't have 
an arm64 Docker image, hmm.
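One common way to realize a single-Dockerfile multi-arch build is {{docker buildx}}; the following is a hypothetical sketch (the image tag and platform list are assumptions, and the command is echoed so it can be shown without a Docker daemon):

```shell
# Hypothetical buildx invocation producing one image for both architectures
# from the same Dockerfile; echoed rather than executed.
cmd='docker buildx build --platform linux/amd64,linux/arm64 -t apache/hadoop:3 .'
echo "${cmd}"
```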

  was:
-Support building Linux arm64 (aarch64) Docker images.

Note: This only provides an arm64 *runtime* environment for Hadoop 3. But not a 
full environment for compiling Hadoop 3 in arm64 yet. For the latter, gRPC may 
well need to be compiled from source because it hasn't started distributing 
Linux arm64 binaries yet.

And bump up some dependency versions.

The patch for branch-3.3 is ready. I developed this patch on branch-3.3.1 when 
I was trying to build arm64 Linux Hadoop Docker image.

For trunk (3.4.0), due to HADOOP-17727, I need to post a different PR.-

Just realized we already had {{Dockerfile_aarch64}}. Will try it out.

My approach builds the Docker images for both architectures (x86_64 and 
aarch64) with the same {{Dockerfile}}.


> Dockerfile: Support arm64 (aarch64)
> ---
>
> Key: HADOOP-18031
> URL: https://issues.apache.org/jira/browse/HADOOP-18031
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-18031.branch-3.3.1.001.testing.patch
>
>
> -Support building Linux arm64 (aarch64) Docker images. And bump up some 
> dependency versions.-
> -Note: This only provides an arm64 *runtime* environment for Hadoop 3. But not 
> a full environment for compiling Hadoop 3 in arm64 yet. For the latter, gRPC 
> may well need to be compiled from source because it hasn't started 
> distributing Linux arm64 binaries yet.-
> -The patch for branch-3.3 is ready. I developed this patch on branch-3.3.1 
> when I was trying to build arm64 Linux Hadoop Docker image. For trunk 
> (3.4.0), due to HADOOP-17727, I need to post a different PR.-
> Just realized we already had {{Dockerfile_aarch64}}. Will try it out.
> My approach builds the Docker images for both architectures (x86_64 and 
> aarch64) with the same {{Dockerfile}}.
> We should push the built arm64 image to Docker Hub. I only see amd64 
> [there|https://hub.docker.com/r/apache/hadoop/tags] so I assumed we didn't 
> have an arm64 Docker image, hmm.






[jira] [Updated] (HADOOP-18031) Dockerfile: Support arm64 (aarch64)

2021-12-02 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-18031:

Description: 
-Support building Linux arm64 (aarch64) Docker images.

Note: This only provides an arm64 *runtime* environment for Hadoop 3. But not a 
full environment for compiling Hadoop 3 in arm64 yet. For the latter, gRPC may 
well need to be compiled from source because it hasn't started distributing 
Linux arm64 binaries yet.

And bump up some dependency versions.

The patch for branch-3.3 is ready. I developed this patch on branch-3.3.1 when 
I was trying to build arm64 Linux Hadoop Docker image.

For trunk (3.4.0), due to HADOOP-17727, I need to post a different PR.-

Just realized we already had {{Dockerfile_aarch64}}. Will try it out.

My approach builds the Docker images for both architectures (x86_64 and 
aarch64) with the same {{Dockerfile}}.

  was:
-Support building Linux arm64 (aarch64) Docker images.

Note: This only provides an arm64 *runtime* environment for Hadoop 3. But not a 
full environment for compiling Hadoop 3 in arm64 yet. For the latter, gRPC may 
well need to be compiled from source because it hasn't started distributing 
Linux arm64 binaries yet.

And bump up some dependency versions.

The patch for branch-3.3 is ready. I developed this patch on branch-3.3.1 when 
I was trying to build arm64 Linux Hadoop Docker image.

For trunk (3.4.0), due to HADOOP-17727, I need to post a different PR.-

Just realized we already had {{Dockerfile_aarch64}}. Will try it out.

My approach builds the image with the same {{Dockerfile}}.


> Dockerfile: Support arm64 (aarch64)
> ---
>
> Key: HADOOP-18031
> URL: https://issues.apache.org/jira/browse/HADOOP-18031
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-18031.branch-3.3.1.001.testing.patch
>
>
> -Support building Linux arm64 (aarch64) Docker images.
> Note: This only provides an arm64 *runtime* environment for Hadoop 3. But not 
> a full environment for compiling Hadoop 3 in arm64 yet. For the latter, gRPC 
> may well need to be compiled from source because it hasn't started 
> distributing Linux arm64 binaries yet.
> And bump up some dependency versions.
> The patch for branch-3.3 is ready. I developed this patch on branch-3.3.1 
> when I was trying to build arm64 Linux Hadoop Docker image.
> For trunk (3.4.0), due to HADOOP-17727, I need to post a different PR.-
> Just realized we already had {{Dockerfile_aarch64}}. Will try it out.
> My approach builds the Docker images for both architectures (x86_64 and 
> aarch64) with the same {{Dockerfile}}.






[jira] [Updated] (HADOOP-18031) Dockerfile: Support arm64 (aarch64)

2021-12-02 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-18031:

Attachment: HADOOP-18031.branch-3.3.1.001.testing.patch

> Dockerfile: Support arm64 (aarch64)
> ---
>
> Key: HADOOP-18031
> URL: https://issues.apache.org/jira/browse/HADOOP-18031
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-18031.branch-3.3.1.001.testing.patch
>
>
> -Support building Linux arm64 (aarch64) Docker images.
> Note: This only provides an arm64 *runtime* environment for Hadoop 3. But not 
> a full environment for compiling Hadoop 3 in arm64 yet. For the latter, gRPC 
> may well need to be compiled from source because it hasn't started 
> distributing Linux arm64 binaries yet.
> And bump up some dependency versions.
> The patch for branch-3.3 is ready. I developed this patch on branch-3.3.1 
> when I was trying to build arm64 Linux Hadoop Docker image.
> For trunk (3.4.0), due to HADOOP-17727, I need to post a different PR.-
> Just realized we already had {{Dockerfile_aarch64}}. Will try it out.
> My approach builds the image with the same {{Dockerfile}}.






[jira] [Updated] (HADOOP-18031) Dockerfile: Support arm64 (aarch64)

2021-12-02 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-18031:

Description: 
-Support building Linux arm64 (aarch64) Docker images.

Note: This only provides an arm64 *runtime* environment for Hadoop 3. But not a 
full environment for compiling Hadoop 3 in arm64 yet. For the latter, gRPC may 
well need to be compiled from source because it hasn't started distributing 
Linux arm64 binaries yet.

And bump up some dependency versions.

The patch for branch-3.3 is ready. I developed this patch on branch-3.3.1 when 
I was trying to build arm64 Linux Hadoop Docker image.

For trunk (3.4.0), due to HADOOP-17727, I need to post a different PR.-

Just realized we already had {{Dockerfile_aarch64}}. Will try it out.

My approach builds the image with the same {{Dockerfile}}.

  was:
Support building Linux arm64 (aarch64) Docker images.

Note: This only provides an arm64 *runtime* environment for Hadoop 3. But not a 
full environment for compiling Hadoop 3 in arm64 yet. For the latter, gRPC may 
well need to be compiled from source because it hasn't started distributing 
Linux arm64 binaries yet.

And bump up some dependency versions.

The patch for branch-3.3 is ready. I developed this patch on branch-3.3.1 when 
I was trying to build arm64 Linux Hadoop Docker image.

For trunk (3.4.0), due to HADOOP-17727, I need to post a different PR.


> Dockerfile: Support arm64 (aarch64)
> ---
>
> Key: HADOOP-18031
> URL: https://issues.apache.org/jira/browse/HADOOP-18031
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> -Support building Linux arm64 (aarch64) Docker images.
> Note: This only provides an arm64 *runtime* environment for Hadoop 3. But not 
> a full environment for compiling Hadoop 3 in arm64 yet. For the latter, gRPC 
> may well need to be compiled from source because it hasn't started 
> distributing Linux arm64 binaries yet.
> And bump up some dependency versions.
> The patch for branch-3.3 is ready. I developed this patch on branch-3.3.1 
> when I was trying to build arm64 Linux Hadoop Docker image.
> For trunk (3.4.0), due to HADOOP-17727, I need to post a different PR.-
> Just realized we already had {{Dockerfile_aarch64}}. Will try it out.
> My approach builds the image with the same {{Dockerfile}}.






[jira] [Updated] (HADOOP-18031) Dockerfile: Support arm64 (aarch64)

2021-12-02 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-18031:

Description: 
Support building Linux arm64 (aarch64) Docker images.

Note: This only provides an arm64 *runtime* environment for Hadoop 3. But not a 
full environment for compiling Hadoop 3 in arm64 yet. For the latter, gRPC may 
well need to be compiled from source because it hasn't started distributing 
Linux arm64 binaries yet.

And bump up some dependency versions.

The patch for branch-3.3 is ready. I developed this patch on branch-3.3.1 when 
I was trying to build arm64 Linux Hadoop Docker image.

For trunk (3.4.0), due to HADOOP-17727, I need to post a different PR.

  was:
Support building Linux arm64 (aarch64) Docker images.

And bump up some dependency versions.

The patch for branch-3.3 is ready. I developed this patch on branch-3.3.1 when 
I was trying to build arm64 Linux Hadoop Docker image.

For trunk (3.4.0), due to HADOOP-17509, I need to post a different PR.


> Dockerfile: Support arm64 (aarch64)
> ---
>
> Key: HADOOP-18031
> URL: https://issues.apache.org/jira/browse/HADOOP-18031
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Support building Linux arm64 (aarch64) Docker images.
> Note: This only provides an arm64 *runtime* environment for Hadoop 3. But not 
> a full environment for compiling Hadoop 3 in arm64 yet. For the latter, gRPC 
> may well need to be compiled from source because it hasn't started 
> distributing Linux arm64 binaries yet.
> And bump up some dependency versions.
> The patch for branch-3.3 is ready. I developed this patch on branch-3.3.1 
> when I was trying to build arm64 Linux Hadoop Docker image.
> For trunk (3.4.0), due to HADOOP-17727, I need to post a different PR.






[jira] [Created] (HADOOP-18031) Dockerfile: Support arm64 (aarch64)

2021-12-02 Thread Siyao Meng (Jira)
Siyao Meng created HADOOP-18031:
---

 Summary: Dockerfile: Support arm64 (aarch64)
 Key: HADOOP-18031
 URL: https://issues.apache.org/jira/browse/HADOOP-18031
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Siyao Meng
Assignee: Siyao Meng


Support building Linux arm64 (aarch64) Docker images.

And bump up some dependency versions.

The patch for branch-3.3 is ready. I developed this patch on branch-3.3.1 when 
I was trying to build arm64 Linux Hadoop Docker image.

For trunk (3.4.0), due to HADOOP-17509, I need to post a different PR.






[jira] [Commented] (HADOOP-17509) Parallelize building of dependencies

2021-12-02 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17452264#comment-17452264
 ] 

Siyao Meng commented on HADOOP-17509:
-

Looks like this commit was only committed to branch-3.3.1:

https://github.com/apache/hadoop/commits/branch-3.3.1/dev-support/docker/Dockerfile

but not to trunk (3.4.0):

https://github.com/apache/hadoop/commits/trunk/dev-support/docker/Dockerfile

?
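A generic way to answer this kind of question is {{git branch --contains}}; the following is a sketch using a throwaway repo (against a real Hadoop checkout one would pass the commit hash in question):

```shell
# Build a throwaway repo with one commit and a second branch, then list the
# branches that contain that commit; the same check applies to any repo.
tmp="$(mktemp -d)"
cd "${tmp}"
git init -q .
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "fix"
sha="$(git rev-parse HEAD)"
git branch -q backport-3.3
git branch --contains "${sha}"
```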

> Parallelize building of dependencies
> 
>
> Key: HADOOP-17509
> URL: https://issues.apache.org/jira/browse/HADOOP-17509
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Need to use make -j$(nproc) to parallelize the building of the Protocol Buffers
> and Intel ISA-L dependencies.
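The parallel build flag mentioned above can be sketched as follows; `nproc` reports the number of online CPUs on Linux:

```shell
#!/bin/sh
# Hedged sketch: run make with one job per available CPU core.
JOBS=$(nproc)
echo "building with $JOBS parallel jobs"
# Inside the Protocol Buffers / ISA-L build steps one would run, e.g.:
# make -j"$JOBS"
```

The `make` call itself is commented out since it only makes sense inside the dependency build directories.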






[jira] [Commented] (HADOOP-17014) Upgrade jackson-databind to 2.9.10.4

2021-10-25 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17433904#comment-17433904
 ] 

Siyao Meng commented on HADOOP-17014:
-

[~brahmareddy] Yes, we should have this in branch-3.2. I see 3.2.2 in the Jira
fix version. Did I miss the cut-off at the time? If so, we should cherry-pick it
to branch-3.2.

> Upgrade jackson-databind to 2.9.10.4
> 
>
> Key: HADOOP-17014
> URL: https://issues.apache.org/jira/browse/HADOOP-17014
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.4, 3.2.2
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Blocker
> Fix For: 3.1.4, 3.2.2
>
>
> trunk (3.3.0) now has HADOOP-16905. But branch 3.2/3.1/.. still uses 
> jackson-databind 2.9.
> I'm opening this Jira since I'm unsure whether we are backporting
> HADOOP-16905 to lower-version branches due to compatibility concerns or
> other reasons.
> GH PR: https://github.com/apache/hadoop/pull/1981
> CC [~weichiu]






[jira] [Comment Edited] (HADOOP-14693) Upgrade JUnit from 4 to 5

2021-08-14 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17399121#comment-17399121
 ] 

Siyao Meng edited comment on HADOOP-14693 at 8/14/21, 7:03 AM:
---

FYI, I just stumbled upon this tool called OpenRewrite. Apparently it claims to
automate code refactoring, including migration from JUnit 4 to 5, among others.
Might be helpful?

https://docs.openrewrite.org/

https://docs.openrewrite.org/tutorials/migrate-from-junit-4-to-junit-5


was (Author: smeng):
fyi I just stumbled upon this tool called OpenRewrite. apparently it can 
seemingly automate some code refactoring including migration from junit 4 -> 5 
and others. might be helpful?

https://docs.openrewrite.org/

https://docs.openrewrite.org/tutorials/migrate-from-junit-4-to-junit-5

> Upgrade JUnit from 4 to 5
> -
>
> Key: HADOOP-14693
> URL: https://issues.apache.org/jira/browse/HADOOP-14693
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Major
>
> JUnit 4 does not support Java 9. We need to upgrade this.






[jira] [Commented] (HADOOP-14693) Upgrade JUnit from 4 to 5

2021-08-14 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17399121#comment-17399121
 ] 

Siyao Meng commented on HADOOP-14693:
-

FYI, I just stumbled upon this tool called OpenRewrite. Apparently it can
automate some code refactoring, including migration from JUnit 4 to 5, among
others. Might be helpful?

https://docs.openrewrite.org/tutorials/migrate-from-junit-4-to-junit-5

> Upgrade JUnit from 4 to 5
> -
>
> Key: HADOOP-14693
> URL: https://issues.apache.org/jira/browse/HADOOP-14693
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Major
>
> JUnit 4 does not support Java 9. We need to upgrade this.






[jira] [Comment Edited] (HADOOP-14693) Upgrade JUnit from 4 to 5

2021-08-14 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17399121#comment-17399121
 ] 

Siyao Meng edited comment on HADOOP-14693 at 8/14/21, 7:02 AM:
---

FYI, I just stumbled upon this tool called OpenRewrite. Apparently it can
automate some code refactoring, including migration from JUnit 4 to 5, among
others. Might be helpful?

https://docs.openrewrite.org/

https://docs.openrewrite.org/tutorials/migrate-from-junit-4-to-junit-5


was (Author: smeng):
fyi I just stumbled into this tool called OpenRewrite. apparently it can 
seemingly automate some code refactoring including migration from junit 4 -> 5 
and others. might be helpful?

https://docs.openrewrite.org/tutorials/migrate-from-junit-4-to-junit-5

> Upgrade JUnit from 4 to 5
> -
>
> Key: HADOOP-14693
> URL: https://issues.apache.org/jira/browse/HADOOP-14693
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Major
>
> JUnit 4 does not support Java 9. We need to upgrade this.






[jira] [Updated] (HADOOP-17834) Bump aliyun-sdk-oss to 3.13.0

2021-08-03 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17834:

Status: Patch Available  (was: Open)

> Bump aliyun-sdk-oss to 3.13.0
> -
>
> Key: HADOOP-17834
> URL: https://issues.apache.org/jira/browse/HADOOP-17834
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Bump aliyun-sdk-oss to 3.13.0 in order to remove transitive dependency on 
> jdom 1.1.
> Ref: 
> https://issues.apache.org/jira/browse/HADOOP-17820?focusedCommentId=17390206&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17390206.






[jira] [Updated] (HADOOP-17834) Bump aliyun-sdk-oss to 3.13.0

2021-08-03 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17834:

Description: 
Bump aliyun-sdk-oss to 3.13.0 in order to remove transitive dependency on jdom 
1.1.

Ref: 
https://issues.apache.org/jira/browse/HADOOP-17820?focusedCommentId=17390206&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17390206.

  was:
Bump aliyun-sdk-oss to 3.13.0 in order to remove transient dependency on jdom 
1.1.

Ref: 
https://issues.apache.org/jira/browse/HADOOP-17820?focusedCommentId=17390206&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17390206.


> Bump aliyun-sdk-oss to 3.13.0
> -
>
> Key: HADOOP-17834
> URL: https://issues.apache.org/jira/browse/HADOOP-17834
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Bump aliyun-sdk-oss to 3.13.0 in order to remove transitive dependency on 
> jdom 1.1.
> Ref: 
> https://issues.apache.org/jira/browse/HADOOP-17820?focusedCommentId=17390206&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17390206.






[jira] [Updated] (HADOOP-17834) Bump aliyun-sdk-oss to 3.13.0

2021-08-03 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17834:

Summary: Bump aliyun-sdk-oss to 3.13.0  (was: Bump aliyun-sdk-oss to 2.0.6)

> Bump aliyun-sdk-oss to 3.13.0
> -
>
> Key: HADOOP-17834
> URL: https://issues.apache.org/jira/browse/HADOOP-17834
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Bump aliyun-sdk-oss to 2.0.6 in order to remove jdom 1.1 dependency.
> Ref: 
> https://issues.apache.org/jira/browse/HADOOP-17820?focusedCommentId=17390206&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17390206.






[jira] [Updated] (HADOOP-17834) Bump aliyun-sdk-oss to 3.13.0

2021-08-03 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17834:

Description: 
Bump aliyun-sdk-oss to 3.13.0 in order to remove transient dependency on jdom 
1.1.

Ref: 
https://issues.apache.org/jira/browse/HADOOP-17820?focusedCommentId=17390206&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17390206.

  was:
Bump aliyun-sdk-oss to 2.0.6 in order to remove jdom 1.1 dependency.

Ref: 
https://issues.apache.org/jira/browse/HADOOP-17820?focusedCommentId=17390206&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17390206.


> Bump aliyun-sdk-oss to 3.13.0
> -
>
> Key: HADOOP-17834
> URL: https://issues.apache.org/jira/browse/HADOOP-17834
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Bump aliyun-sdk-oss to 3.13.0 in order to remove transient dependency on jdom 
> 1.1.
> Ref: 
> https://issues.apache.org/jira/browse/HADOOP-17820?focusedCommentId=17390206&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17390206.






[jira] [Commented] (HADOOP-17820) Remove dependency on jdom

2021-08-03 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17392460#comment-17392460
 ] 

Siyao Meng commented on HADOOP-17820:
-

Thanks [~aajisaka]. I have filed HADOOP-17834.

> Remove dependency on jdom
> -
>
> Key: HADOOP-17820
> URL: https://issues.apache.org/jira/browse/HADOOP-17820
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> It doesn't seem that jdom is referenced anywhere in the code base now, yet it 
> exists in the distribution.
> {code}
> $ find . -name "*jdom*.jar"
> ./hadoop-3.4.0-SNAPSHOT/share/hadoop/tools/lib/jdom-1.1.jar
> {code}
> [CVE-2021-33813|https://github.com/advisories/GHSA-2363-cqg2-863c] was recently
> issued for jdom. Let's remove the binary from the dist if it's not needed.






[jira] [Created] (HADOOP-17834) Bump aliyun-sdk-oss to 2.0.6

2021-08-03 Thread Siyao Meng (Jira)
Siyao Meng created HADOOP-17834:
---

 Summary: Bump aliyun-sdk-oss to 2.0.6
 Key: HADOOP-17834
 URL: https://issues.apache.org/jira/browse/HADOOP-17834
 Project: Hadoop Common
  Issue Type: Task
Reporter: Siyao Meng
Assignee: Siyao Meng


Bump aliyun-sdk-oss to 2.0.6 in order to remove jdom 1.1 dependency.

Ref: 
https://issues.apache.org/jira/browse/HADOOP-17820?focusedCommentId=17390206&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17390206.






[jira] [Commented] (HADOOP-17820) Remove dependency on jdom

2021-07-29 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17390206#comment-17390206
 ] 

Siyao Meng commented on HADOOP-17820:
-

After some digging, jdom 1 and 2 are still required as transitive dependencies:

{code:title=mvn dependency}
[INFO] +- com.aliyun.oss:aliyun-sdk-oss:jar:3.4.1:compile
[INFO] |  +- org.jdom:jdom:jar:1.1:compile
{code}

{code:title=mvn dependency}
[INFO] +- org.apache.maven.plugins:maven-shade-plugin:jar:3.2.1:provided
...
[INFO] |  +- org.jdom:jdom2:jar:2.0.6:provided
{code}
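The transitive paths above can be reproduced with Maven's dependency plugin; a sketch, assuming it is run from the Hadoop source root:

```shell
#!/bin/sh
# Hedged sketch: list which modules pull in org.jdom transitively.
# Falls back to a message when Maven is unavailable or nothing matches.
OUT=$(mvn -q dependency:tree -Dincludes=org.jdom 2>/dev/null | grep jdom || true)
if [ -n "$OUT" ]; then
  echo "$OUT"
else
  echo "no jdom dependency found (or mvn not available)"
fi
```

`-Dincludes=org.jdom` filters the tree down to the jdom artifacts, which is how the `aliyun-sdk-oss` and `maven-shade-plugin` paths above surface.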

> Remove dependency on jdom
> -
>
> Key: HADOOP-17820
> URL: https://issues.apache.org/jira/browse/HADOOP-17820
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> It doesn't seem that jdom is referenced anywhere in the code base now, yet it 
> exists in the distribution.
> {code}
> $ find . -name "*jdom*.jar"
> ./hadoop-3.4.0-SNAPSHOT/share/hadoop/tools/lib/jdom-1.1.jar
> {code}
> [CVE-2021-33813|https://github.com/advisories/GHSA-2363-cqg2-863c] was recently
> issued for jdom. Let's remove the binary from the dist if it's not needed.






[jira] [Resolved] (HADOOP-17820) Remove dependency on jdom

2021-07-29 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng resolved HADOOP-17820.
-
Resolution: Won't Do

> Remove dependency on jdom
> -
>
> Key: HADOOP-17820
> URL: https://issues.apache.org/jira/browse/HADOOP-17820
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> It doesn't seem that jdom is referenced anywhere in the code base now, yet it 
> exists in the distribution.
> {code}
> $ find . -name "*jdom*.jar"
> ./hadoop-3.4.0-SNAPSHOT/share/hadoop/tools/lib/jdom-1.1.jar
> {code}
> [CVE-2021-33813|https://github.com/advisories/GHSA-2363-cqg2-863c] was recently
> issued for jdom. Let's remove the binary from the dist if it's not needed.






[jira] [Created] (HADOOP-17820) Remove dependency on jdom

2021-07-29 Thread Siyao Meng (Jira)
Siyao Meng created HADOOP-17820:
---

 Summary: Remove dependency on jdom
 Key: HADOOP-17820
 URL: https://issues.apache.org/jira/browse/HADOOP-17820
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Siyao Meng
Assignee: Siyao Meng


It doesn't seem that jdom is referenced anywhere in the code base now, yet it 
exists in the distribution.

{code}
$ find . -name "*jdom*.jar"
./hadoop-3.4.0-SNAPSHOT/share/hadoop/tools/lib/jdom-1.1.jar
{code}

[CVE-2021-33813|https://github.com/advisories/GHSA-2363-cqg2-863c] was recently
issued for jdom. Let's remove the binary from the dist if it's not needed.






[jira] [Updated] (HADOOP-16083) DistCp shouldn't always overwrite the target file when checksums match

2021-04-28 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-16083:

Target Version/s:   (was: 3.3.1)

> DistCp shouldn't always overwrite the target file when checksums match
> --
>
> Key: HADOOP-16083
> URL: https://issues.apache.org/jira/browse/HADOOP-16083
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.2.0, 3.1.1, 3.3.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16083.001.patch
>
>
> {code:java|title=CopyMapper#setup}
> ...
> try {
>   overWrite = overWrite || 
> targetFS.getFileStatus(targetFinalPath).isFile();
> } catch (FileNotFoundException ignored) {
> }
> ...
> {code}
> The above code forces the config key "overWrite" to "true" whenever the target
> path is a file. Therefore, an unnecessary transfer happens even when the source
> and target files have the same checksum.
> My suggestion is to remove the code above. If the user insists on overwriting,
> they can just add -overwrite to the options:
> {code:bash|title=DistCp command with -overwrite option}
> hadoop distcp -overwrite hdfs://localhost:64464/source/5/6.txt 
> hdfs://localhost:64464/target/5/6.txt
> {code}






[jira] [Commented] (HADOOP-17650) Fails to build using Maven 3.8.1

2021-04-21 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17327081#comment-17327081
 ] 

Siyao Meng commented on HADOOP-17650:
-

Thanks for filing this, [~weichiu]. Thanks [~vjasani] for the Solr bump patch.

> Fails to build using Maven 3.8.1
> 
>
> Key: HADOOP-17650
> URL: https://issues.apache.org/jira/browse/HADOOP-17650
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The latest Maven (3.8.1) errors out when building Hadoop (tried trunk)
> {noformat}
> [ERROR] Failed to execute goal on project 
> hadoop-yarn-applications-catalog-webapp: Could not resolve dependencies for 
> project 
> org.apache.hadoop:hadoop-yarn-applications-catalog-webapp:war:3.4.0-SNAPSHOT: 
> Failed to collect dependencies at org.apache.solr:solr-core:jar:7.7.0 -> 
> org.restlet.jee:org.restlet:jar:2.3.0: Failed to read artifact descriptor for 
> org.restlet.jee:org.restlet:jar:2.3.0: Could not transfer artifact 
> org.restlet.jee:org.restlet:pom:2.3.0 from/to maven-default-http-blocker 
> (http://0.0.0.0/): Blocked mirror for repositories: [maven-restlet 
> (http://maven.restlet.org, default, releases+snapshots), apache.snapshots 
> (http://repository.apache.org/snapshots, default, disabled)] -> [Help 1]
> {noformat}
> According to 
> [https://maven.apache.org/docs/3.8.1/release-notes.html#how-to-fix-when-i-get-a-http-repository-blocked]
>  we need to update our Maven repo.
>  
> Maven 3.6.3 is good.
>  
> (For what it's worth, I used my company's mirror to bypass this error. Not sure
> what a good fix for Hadoop itself would be.)
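One way to locate the plain-http repository declarations that Maven 3.8.1's default `maven-default-http-blocker` mirror refuses is a quick grep over the POMs; a sketch, assuming it is run from the Hadoop source root:

```shell
#!/bin/sh
# Hedged sketch: find http:// URLs in pom.xml files, which Maven 3.8.1
# blocks by default via its maven-default-http-blocker mirror.
MATCHES=$(grep -rn --include=pom.xml "http://" . 2>/dev/null || true)
if [ -n "$MATCHES" ]; then
  echo "$MATCHES"
else
  echo "no http:// URLs found in pom.xml files"
fi
```

Any hits in `<repository>`/`<pluginRepository>` elements would need to move to https (or an internal mirror), per the Maven 3.8.1 release notes linked above.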






[jira] [Updated] (HADOOP-17650) Fails to build using Maven 3.8.1

2021-04-21 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17650:

Status: Patch Available  (was: Open)

> Fails to build using Maven 3.8.1
> 
>
> Key: HADOOP-17650
> URL: https://issues.apache.org/jira/browse/HADOOP-17650
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The latest Maven (3.8.1) errors out when building Hadoop (tried trunk)
> {noformat}
> [ERROR] Failed to execute goal on project 
> hadoop-yarn-applications-catalog-webapp: Could not resolve dependencies for 
> project 
> org.apache.hadoop:hadoop-yarn-applications-catalog-webapp:war:3.4.0-SNAPSHOT: 
> Failed to collect dependencies at org.apache.solr:solr-core:jar:7.7.0 -> 
> org.restlet.jee:org.restlet:jar:2.3.0: Failed to read artifact descriptor for 
> org.restlet.jee:org.restlet:jar:2.3.0: Could not transfer artifact 
> org.restlet.jee:org.restlet:pom:2.3.0 from/to maven-default-http-blocker 
> (http://0.0.0.0/): Blocked mirror for repositories: [maven-restlet 
> (http://maven.restlet.org, default, releases+snapshots), apache.snapshots 
> (http://repository.apache.org/snapshots, default, disabled)] -> [Help 1]
> {noformat}
> According to 
> [https://maven.apache.org/docs/3.8.1/release-notes.html#how-to-fix-when-i-get-a-http-repository-blocked]
>  we need to update our Maven repo.
>  
> Maven 3.6.3 is good.
>  
> (For what it's worth, I used my company's mirror to bypass this error. Not sure
> what a good fix for Hadoop itself would be.)






[jira] [Updated] (HADOOP-15457) Add Security-Related HTTP Response Header in WEBUIs.

2021-04-13 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-15457:

Description: 
As of today, the YARN web UI lacks certain security-related HTTP response headers.
We are planning to add a few default ones and also add support for headers to be
added via XML config. We plan to make the two below the defaults.
 * X-XSS-Protection: 1; mode=block
 * X-Content-Type-Options: nosniff

 

Support for headers via config properties in core-site.xml will be along the 
below lines
{code:xml}
<property>
  <name>hadoop.http.header.Strict-Transport-Security</name>
  <value>valHSTSFromXML</value>
</property>
{code}
In the above example, valHSTSFromXML is an example value, this should be 
[configured|https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security]
 according to the security requirements.

With this Jira, users can set required headers by prefixing HTTP header with 
hadoop.http.header. and configure with the required value in their 
core-site.xml.

Example:

{code:xml}
<property>
  <name>hadoop.http.header.http-header</name>
  <value>http-header-value</value>
</property>
{code}
 
A regex matcher will pick up these properties and add them to the response
headers when Jetty prepares the response.

  was:
As of today, YARN web-ui lacks certain security related http response headers. 
We are planning to add few default ones and also add support for headers to be 
able to get added via xml config. Planning to make the below two as default.
 * X-XSS-Protection: 1; mode=block
 * X-Content-Type-Options: nosniff

 

Support for headers via config properties in core-site.xml will be along the 
below lines
{code:xml}
<property>
  <name>hadoop.http.header.Strict_Transport_Security</name>
  <value>valHSTSFromXML</value>
</property>
{code}
 In the above example, valHSTSFromXML is an example value, this should be 
configured according to the security requirements.

With this Jira, users can set required headers by prefixing HTTP header with 
hadoop.http.header. and configure with the required value in their 
core-site.xml.

Example:

 
{code:xml}
<property>
  <name>hadoop.http.header.http-header</name>
  <value>http-header-value</value>
</property>
{code}
 

A regex matcher will lift these properties and add into the response header 
when Jetty prepares the response.


> Add Security-Related HTTP Response Header in WEBUIs.
> 
>
> Key: HADOOP-15457
> URL: https://issues.apache.org/jira/browse/HADOOP-15457
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kanwaljeet Sachdev
>Assignee: Kanwaljeet Sachdev
>Priority: Major
>  Labels: security
> Fix For: 3.2.0
>
> Attachments: HADOOP-15457.001.patch, HADOOP-15457.002.patch, 
> HADOOP-15457.003.patch, HADOOP-15457.004.patch, HADOOP-15457.005.patch, 
> YARN-8198.001.patch, YARN-8198.002.patch, YARN-8198.003.patch, 
> YARN-8198.004.patch, YARN-8198.005.patch
>
>
> As of today, the YARN web UI lacks certain security-related HTTP response
> headers. We are planning to add a few default ones and also add support for
> headers to be added via XML config. We plan to make the two below the
> defaults.
>  * X-XSS-Protection: 1; mode=block
>  * X-Content-Type-Options: nosniff
>  
> Support for headers via config properties in core-site.xml will be along the 
> below lines
> {code:xml}
> <property>
>   <name>hadoop.http.header.Strict-Transport-Security</name>
>   <value>valHSTSFromXML</value>
> </property>
> {code}
> In the above example, valHSTSFromXML is an example value, this should be 
> [configured|https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security]
>  according to the security requirements.
> With this Jira, users can set required headers by prefixing HTTP header with 
> hadoop.http.header. and configure with the required value in their 
> core-site.xml.
> Example:
> {code:xml}
> <property>
>   <name>hadoop.http.header.http-header</name>
>   <value>http-header-value</value>
> </property>
> {code}
>  
> A regex matcher will lift these properties and add into the response header 
> when Jetty prepares the response.






[jira] [Updated] (HADOOP-17603) Upgrade tomcat-embed-core to 7.0.108

2021-04-12 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17603:

Fix Version/s: 2.10.2
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Upgrade tomcat-embed-core to 7.0.108
> 
>
> Key: HADOOP-17603
> URL: https://issues.apache.org/jira/browse/HADOOP-17603
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, security
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.10.2
>
> Attachments: HADOOP-17603.branch-2.10.001.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> [CVE-2021-25329|https://nvd.nist.gov/vuln/detail/CVE-2021-25329] critical 
> severity.
> Impact: [CVE-2020-9494|https://nvd.nist.gov/vuln/detail/CVE-2020-9494]
> 7.0.0-7.0.107 are all affected by the vulnerability.






[jira] [Commented] (HADOOP-15566) Support OpenTelemetry

2021-03-25 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17308426#comment-17308426
 ] 

Siyao Meng commented on HADOOP-15566:
-

Thanks [~liuml07]. I have posted the v2 scope doc to reflect the change to
using OpenTelemetry: [^OpenTelemetry Support Scope Doc v2.pdf]

[~liuml07] [~kiran.maturi] Please feel free to pick up any subtasks that haven't
been assigned or updated in a while, e.g. HADOOP-16286.

> Support OpenTelemetry
> -
>
> Key: HADOOP-15566
> URL: https://issues.apache.org/jira/browse/HADOOP-15566
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics, tracing
>Affects Versions: 3.1.0
>Reporter: Todd Lipcon
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available, security
> Attachments: HADOOP-15566.000.WIP.patch, OpenTelemetry Support Scope 
> Doc v2.pdf, OpenTracing Support Scope Doc.pdf, Screen Shot 2018-06-29 at 
> 11.59.16 AM.png, ss-trace-s3a.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The HTrace incubator project has voted to retire itself and won't be making 
> further releases. The Hadoop project currently has various hooks with HTrace. 
> It seems in some cases (e.g. HDFS-13702) these hooks have had measurable 
> performance overhead. Given these two factors, I think we should consider 
> removing the HTrace integration. If there is someone willing to do the work, 
> replacing it with OpenTracing might be a better choice since there is an 
> active community.






[jira] [Updated] (HADOOP-15566) Support OpenTelemetry

2021-03-25 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-15566:

Attachment: OpenTelemetry Support Scope Doc v2.pdf

> Support OpenTelemetry
> -
>
> Key: HADOOP-15566
> URL: https://issues.apache.org/jira/browse/HADOOP-15566
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics, tracing
>Affects Versions: 3.1.0
>Reporter: Todd Lipcon
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available, security
> Attachments: HADOOP-15566.000.WIP.patch, OpenTelemetry Support Scope 
> Doc v2.pdf, OpenTracing Support Scope Doc.pdf, Screen Shot 2018-06-29 at 
> 11.59.16 AM.png, ss-trace-s3a.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The HTrace incubator project has voted to retire itself and won't be making 
> further releases. The Hadoop project currently has various hooks with HTrace. 
> It seems in some cases (e.g. HDFS-13702) these hooks have had measurable 
> performance overhead. Given these two factors, I think we should consider 
> removing the HTrace integration. If there is someone willing to do the work, 
> replacing it with OpenTracing might be a better choice since there is an 
> active community.






[jira] [Commented] (HADOOP-17424) Replace HTrace with No-Op tracer

2021-02-01 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17276195#comment-17276195
 ] 

Siyao Meng commented on HADOOP-17424:
-

Thanks [~iwasakims]. Indeed, the removal of {{TraceAdminProtocol}} (among other
interfaces used by HTrace) might cause compatibility issues, so this should
only go into 3.4.0.

> Replace HTrace with No-Op tracer
> 
>
> Key: HADOOP-17424
> URL: https://issues.apache.org/jira/browse/HADOOP-17424
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 9h
>  Remaining Estimate: 0h
>
> Remove HTrace dependency as it is depending on old jackson jars. Use a no-op 
> tracer for now to eliminate potential security issues.
> The plan is to move part of the code in 
> [PR#1846|https://github.com/apache/hadoop/pull/1846] out here for faster 
> review.






[jira] [Updated] (HADOOP-17424) Replace HTrace with No-Op tracer

2021-01-21 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17424:

Status: Patch Available  (was: In Progress)

> Replace HTrace with No-Op tracer
> 
>
> Key: HADOOP-17424
> URL: https://issues.apache.org/jira/browse/HADOOP-17424
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Remove the HTrace dependency, as it depends on old jackson jars. Use a no-op
> tracer for now to eliminate potential security issues.
> The plan is to move part of the code in 
> [PR#1846|https://github.com/apache/hadoop/pull/1846] out here for faster 
> review.






[jira] [Work started] (HADOOP-17424) Replace HTrace with No-Op tracer

2021-01-20 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17424 started by Siyao Meng.
---
> Replace HTrace with No-Op tracer
> 
>
> Key: HADOOP-17424
> URL: https://issues.apache.org/jira/browse/HADOOP-17424
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Remove the HTrace dependency, as it depends on old jackson jars. Use a no-op
> tracer for now to eliminate potential security issues.
> The plan is to move part of the code in 
> [PR#1846|https://github.com/apache/hadoop/pull/1846] out here for faster 
> review.






[jira] [Updated] (HADOOP-17424) Replace HTrace with No-Op tracer

2020-12-10 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17424:

Target Version/s: 3.4.0

> Replace HTrace with No-Op tracer
> 
>
> Key: HADOOP-17424
> URL: https://issues.apache.org/jira/browse/HADOOP-17424
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Remove the HTrace dependency, as it depends on old jackson jars. Use a no-op
> tracer for now to eliminate potential security issues.
> The plan is to move part of the code in 
> [PR#1846|https://github.com/apache/hadoop/pull/1846] out here for faster 
> review.






[jira] [Resolved] (HADOOP-17387) Replace HTrace with NoOp tracer

2020-12-10 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng resolved HADOOP-17387.
-
Resolution: Duplicate

> Replace HTrace with NoOp tracer
> ---
>
> Key: HADOOP-17387
> URL: https://issues.apache.org/jira/browse/HADOOP-17387
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> HADOOP-17171 raised the concern that the deprecated HTrace binaries have a few
> CVEs in their dependency jackson-databind. Note that HADOOP-15566 may not be
> merged any time soon. We can replace the existing HTrace implementation with a
> no-op (dummy) tracer.
> This could be realized by reusing some code in HADOOP-15566's PR.






[jira] [Commented] (HADOOP-17424) Replace HTrace with No-Op tracer

2020-12-10 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247437#comment-17247437
 ] 

Siyao Meng commented on HADOOP-17424:
-

[~iwasakims] Ah my bad. Forgot to put that one under the umbrella jira. Thought 
I forgot to create it. Will do.

> Replace HTrace with No-Op tracer
> 
>
> Key: HADOOP-17424
> URL: https://issues.apache.org/jira/browse/HADOOP-17424
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Remove the HTrace dependency, as it depends on old jackson jars. Use a no-op
> tracer for now to eliminate potential security issues.
> The plan is to move part of the code in 
> [PR#1846|https://github.com/apache/hadoop/pull/1846] out here for faster 
> review.






[jira] [Updated] (HADOOP-17424) Replace HTrace with No-Op tracer

2020-12-09 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17424:

Description: 
Remove the HTrace dependency, as it depends on old jackson jars. Use a no-op
tracer for now to eliminate potential security issues.

The plan is to move part of the code in 
[PR#1846|https://github.com/apache/hadoop/pull/1846] out here for faster review.

  was:
Remove the HTrace dependency, as it depends on old jackson jars. Use a no-op
tracer for now to eliminate potential security issues.

The plan is to move part of the code in PR #1846 out for faster review.


> Replace HTrace with No-Op tracer
> 
>
> Key: HADOOP-17424
> URL: https://issues.apache.org/jira/browse/HADOOP-17424
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Remove the HTrace dependency, as it depends on old jackson jars. Use a no-op
> tracer for now to eliminate potential security issues.
> The plan is to move part of the code in 
> [PR#1846|https://github.com/apache/hadoop/pull/1846] out here for faster 
> review.






[jira] [Updated] (HADOOP-17424) Replace HTrace with No-Op tracer

2020-12-09 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17424:

Description: 
Remove the HTrace dependency, as it depends on old jackson jars. Use a no-op
tracer for now to eliminate potential security issues.

The plan is to move part of the code in PR #1846 out for faster review.

  was:Remove the HTrace dependency, as it depends on old jackson jars. Use a
no-op tracer for now to eliminate potential security issues.


> Replace HTrace with No-Op tracer
> 
>
> Key: HADOOP-17424
> URL: https://issues.apache.org/jira/browse/HADOOP-17424
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Remove the HTrace dependency, as it depends on old jackson jars. Use a no-op
> tracer for now to eliminate potential security issues.
> The plan is to move part of the code in PR #1846 out for faster review.






[jira] [Assigned] (HADOOP-17424) Replace HTrace with No-Op tracer

2020-12-09 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng reassigned HADOOP-17424:
---

Assignee: Siyao Meng

> Replace HTrace with No-Op tracer
> 
>
> Key: HADOOP-17424
> URL: https://issues.apache.org/jira/browse/HADOOP-17424
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Remove the HTrace dependency, as it depends on old jackson jars. Use a no-op
> tracer for now to eliminate potential security issues.






[jira] [Created] (HADOOP-17424) Replace HTrace with No-Op tracer

2020-12-09 Thread Siyao Meng (Jira)
Siyao Meng created HADOOP-17424:
---

 Summary: Replace HTrace with No-Op tracer
 Key: HADOOP-17424
 URL: https://issues.apache.org/jira/browse/HADOOP-17424
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Siyao Meng


Remove the HTrace dependency, as it depends on old jackson jars. Use a no-op
tracer for now to eliminate potential security issues.






[jira] [Updated] (HADOOP-17387) Replace HTrace with NoOp tracer

2020-11-18 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17387:

Target Version/s: 3.4.0

> Replace HTrace with NoOp tracer
> ---
>
> Key: HADOOP-17387
> URL: https://issues.apache.org/jira/browse/HADOOP-17387
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> HADOOP-17171 raised the concern that the deprecated HTrace binaries have a few
> CVEs in their dependency jackson-databind. Note that HADOOP-15566 may not be
> merged any time soon. We can replace the existing HTrace implementation with a
> no-op (dummy) tracer.
> This could be realized by reusing some code in HADOOP-15566's PR.






[jira] [Updated] (HADOOP-17387) Replace HTrace with NoOp tracer

2020-11-18 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17387:

Summary: Replace HTrace with NoOp tracer  (was: Replace HTrace with dummy 
implementation)

> Replace HTrace with NoOp tracer
> ---
>
> Key: HADOOP-17387
> URL: https://issues.apache.org/jira/browse/HADOOP-17387
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> HADOOP-17171 raised the concern that the deprecated HTrace binaries have a few
> CVEs in their dependency jackson-databind. Note that HADOOP-15566 may not be
> merged any time soon. We can replace the existing HTrace implementation with a
> no-op (dummy) tracer.
> This could be realized by reusing some code in HADOOP-15566's PR.






[jira] [Created] (HADOOP-17387) Replace HTrace with dummy implementation

2020-11-18 Thread Siyao Meng (Jira)
Siyao Meng created HADOOP-17387:
---

 Summary: Replace HTrace with dummy implementation
 Key: HADOOP-17387
 URL: https://issues.apache.org/jira/browse/HADOOP-17387
 Project: Hadoop Common
  Issue Type: Task
Reporter: Siyao Meng
Assignee: Siyao Meng


HADOOP-17171 raised the concern that the deprecated HTrace binaries have a few
CVEs in their dependency jackson-databind. Note that HADOOP-15566 may not be
merged any time soon. We can replace the existing HTrace implementation with a
no-op (dummy) tracer.

This could be realized by reusing some code in HADOOP-15566's PR.






[jira] [Resolved] (HADOOP-16082) FsShell ls: Add option -i to print inode id

2020-09-22 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng resolved HADOOP-16082.
-
Resolution: Abandoned

> FsShell ls: Add option -i to print inode id
> ---
>
> Key: HADOOP-16082
> URL: https://issues.apache.org/jira/browse/HADOOP-16082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16082.001.patch
>
>
> When debugging FSImage corruption issues, I often need to know a file's or
> directory's inode id. At the moment, the only way to do that is to use the OIV
> tool to dump the FSImage and look up the filename, which is very inefficient.
> Here I propose adding an option "-i" to FsShell that prints files' or
> directories' inode ids.
> h2. Implementation
> h3. For hdfs:// (HDFS)
> fileId exists in HdfsLocatedFileStatus, which is already returned to 
> hdfs-client. We just need to print it in Ls#processPath().
> h3. For file:// (Local FS)
> h4. Linux
> Use java.nio.
> h4. Windows
> Windows has the concept of "File ID" which is similar to inode id. It is 
> unique in NTFS and ReFS.
> h3. For other FS
> The fileId entry will be "0" in FileStatus if it is not set. We could either
> ignore it or throw an exception.
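The java.nio approach for the local (file://) case above can be sketched as follows. This is an illustrative helper, not the patch itself; the class and method names are hypothetical. `BasicFileAttributes.fileKey()` returns a (device, inode) key on POSIX file systems and may return null where no such key exists:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.BasicFileAttributes;

// Hypothetical helper, not Hadoop code: reads the file key (the inode id on
// POSIX file systems) via java.nio, as the proposal suggests for file://.
public class InodeIdExample {
    static Object fileKey(Path p) throws IOException {
        // fileKey() encodes (device, inode) on Linux; it may be null on file
        // systems without such a key, where Windows' separate "File ID"
        // lookup would be needed instead.
        return Files.readAttributes(p, BasicFileAttributes.class).fileKey();
    }

    public static void main(String[] args) throws IOException {
        Path p = Paths.get(args.length > 0 ? args[0] : ".");
        System.out.println(p + " -> " + fileKey(p));
    }
}
```

On Linux the printed key looks like `(dev=...,ino=...)`, so the inode number is directly recoverable without dumping the FSImage.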






[jira] [Updated] (HADOOP-17245) Add RootedOzFS AbstractFileSystem to core-default.xml

2020-09-04 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17245:

Status: Patch Available  (was: Open)

> Add RootedOzFS AbstractFileSystem to core-default.xml
> -
>
> Key: HADOOP-17245
> URL: https://issues.apache.org/jira/browse/HADOOP-17245
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> When "ofs" is default, when running mapreduce job, YarnClient fails with 
> below exception.
> {code:java}
> Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: 
> fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for 
> scheme: ofs
>  at 
> org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176)
>  at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338)
>  at java.security.AccessController.doPrivileged(Native Method){code}
> Observed that o3fs is also not defined; this jira will add that entry as well.
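The missing bindings the exception points at are ordinary core-default.xml AbstractFileSystem entries. A sketch of what this jira adds follows; the implementation class names (`RootedOzFs` for ofs, `OzFs` for o3fs) are my reading of the Ozone code and should be verified against the Ozone release in use:

```xml
<!-- Sketch of core-default.xml entries; verify class names against Ozone. -->
<property>
  <name>fs.AbstractFileSystem.ofs.impl</name>
  <value>org.apache.hadoop.fs.ozone.RootedOzFs</value>
  <description>AbstractFileSystem for the rooted Ozone (ofs://) URI scheme.</description>
</property>
<property>
  <name>fs.AbstractFileSystem.o3fs.impl</name>
  <value>org.apache.hadoop.fs.ozone.OzFs</value>
  <description>AbstractFileSystem for the bucket-scoped Ozone (o3fs://) URI scheme.</description>
</property>
```

With these in core-default.xml, FileContext (used by YarnClient) can resolve the schemes without each deployment having to add them to core-site.xml.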






[jira] [Updated] (HADOOP-17245) Add RootedOzFS AbstractFileSystem to core-default.xml

2020-09-04 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17245:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Add RootedOzFS AbstractFileSystem to core-default.xml
> -
>
> Key: HADOOP-17245
> URL: https://issues.apache.org/jira/browse/HADOOP-17245
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When "ofs" is default, when running mapreduce job, YarnClient fails with 
> below exception.
> {code:java}
> Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: 
> fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for 
> scheme: ofs
>  at 
> org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176)
>  at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338)
>  at java.security.AccessController.doPrivileged(Native Method){code}
> Observed that o3fs is also not defined; this jira will add that entry as well.






[jira] [Updated] (HADOOP-17245) Add RootedOzFS AbstractFileSystem to core-default.xml

2020-09-04 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17245:

Fix Version/s: 1.1.0

> Add RootedOzFS AbstractFileSystem to core-default.xml
> -
>
> Key: HADOOP-17245
> URL: https://issues.apache.org/jira/browse/HADOOP-17245
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When "ofs" is default, when running mapreduce job, YarnClient fails with 
> below exception.
> {code:java}
> Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: 
> fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for 
> scheme: ofs
>  at 
> org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176)
>  at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338)
>  at java.security.AccessController.doPrivileged(Native Method){code}
> Observed that o3fs is also not defined; this jira will add that entry as well.






[jira] [Updated] (HADOOP-17245) Add RootedOzFS AbstractFileSystem to core-default.xml

2020-09-04 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17245:

Fix Version/s: (was: 1.1.0)
   3.4.0

> Add RootedOzFS AbstractFileSystem to core-default.xml
> -
>
> Key: HADOOP-17245
> URL: https://issues.apache.org/jira/browse/HADOOP-17245
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When "ofs" is default, when running mapreduce job, YarnClient fails with 
> below exception.
> {code:java}
> Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: 
> fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for 
> scheme: ofs
>  at 
> org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176)
>  at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338)
>  at java.security.AccessController.doPrivileged(Native Method){code}
> Observed that o3fs is also not defined; this jira will add that entry as well.






[jira] [Updated] (HADOOP-17014) Upgrade jackson-databind to 2.9.10.4

2020-04-27 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17014:

Fix Version/s: 3.2.2
   3.1.4
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Upgrade jackson-databind to 2.9.10.4
> 
>
> Key: HADOOP-17014
> URL: https://issues.apache.org/jira/browse/HADOOP-17014
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.4, 3.2.2
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Blocker
> Fix For: 3.1.4, 3.2.2
>
>
> trunk (3.3.0) now has HADOOP-16905. But branch 3.2/3.1/.. still uses 
> jackson-databind 2.9.
> I'm opening this jira since I'm unsure whether we are backporting 
> HADOOP-16905 to lower version branches due to compatibility concern or 
> whatever.
> GH PR: https://github.com/apache/hadoop/pull/1981
> CC [~weichiu]






[jira] [Updated] (HADOOP-17014) Upgrade jackson-databind to 2.9.10.4

2020-04-27 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17014:

Status: Patch Available  (was: Open)

> Upgrade jackson-databind to 2.9.10.4
> 
>
> Key: HADOOP-17014
> URL: https://issues.apache.org/jira/browse/HADOOP-17014
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.4, 3.2.2
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Blocker
>
> trunk (3.3.0) now has HADOOP-16905. But branch 3.2/3.1/.. still uses 
> jackson-databind 2.9.
> I'm opening this jira since I'm unsure whether we are backporting 
> HADOOP-16905 to lower version branches due to compatibility concern or 
> whatever.
> GH PR: https://github.com/apache/hadoop/pull/1981
> CC [~weichiu]






[jira] [Commented] (HADOOP-17014) Upgrade jackson-databind to 2.9.10.4

2020-04-27 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17093891#comment-17093891
 ] 

Siyao Meng commented on HADOOP-17014:
-

[~weichiu] Agree. Patch posted: https://github.com/apache/hadoop/pull/1981

> Upgrade jackson-databind to 2.9.10.4
> 
>
> Key: HADOOP-17014
> URL: https://issues.apache.org/jira/browse/HADOOP-17014
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.4, 3.2.2
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Blocker
>
> trunk (3.3.0) now has HADOOP-16905. But branch 3.2/3.1/.. still uses 
> jackson-databind 2.9.
> I'm opening this jira since I'm unsure whether we are backporting 
> HADOOP-16905 to lower version branches due to compatibility concern or 
> whatever.
> CC [~weichiu]






[jira] [Updated] (HADOOP-17014) Upgrade jackson-databind to 2.9.10.4

2020-04-27 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17014:

Description: 
trunk (3.3.0) now has HADOOP-16905. But branch 3.2/3.1/.. still uses 
jackson-databind 2.9.

I'm opening this jira since I'm unsure whether we are backporting HADOOP-16905 
to lower version branches due to compatibility concern or whatever.

GH PR: https://github.com/apache/hadoop/pull/1981

CC [~weichiu]

  was:
trunk (3.3.0) now has HADOOP-16905. But branch 3.2/3.1/.. still uses 
jackson-databind 2.9.

I'm opening this jira since I'm unsure whether we are backporting HADOOP-16905 
to lower version branches due to compatibility concern or whatever.

CC [~weichiu]


> Upgrade jackson-databind to 2.9.10.4
> 
>
> Key: HADOOP-17014
> URL: https://issues.apache.org/jira/browse/HADOOP-17014
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.4, 3.2.2
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Blocker
>
> trunk (3.3.0) now has HADOOP-16905. But branch 3.2/3.1/.. still uses 
> jackson-databind 2.9.
> I'm opening this jira since I'm unsure whether we are backporting 
> HADOOP-16905 to lower version branches due to compatibility concern or 
> whatever.
> GH PR: https://github.com/apache/hadoop/pull/1981
> CC [~weichiu]






[jira] [Updated] (HADOOP-17014) Upgrade jackson-databind to 2.9.10.4

2020-04-26 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17014:

Description: 
trunk (3.3.0) now has HADOOP-16905. But branch 3.2/3.1/.. still uses 
jackson-databind 2.9.

I'm opening this jira since I'm unsure whether we are backporting HADOOP-16905 
to lower version branches due to compatibility concern or whatever.

CC [~weichiu]

  was:
trunk (3.3.0) now has HADOOP-16905. But branch 3.2/3.1/.. still uses 
jackson-databind 2.9.

I'm opening this jira since I'm unsure whether we are backporting HADOOP-16905 
to lower version branches due to compatibility concern or whatever.


> Upgrade jackson-databind to 2.9.10.4
> 
>
> Key: HADOOP-17014
> URL: https://issues.apache.org/jira/browse/HADOOP-17014
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.4, 3.2.2
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Blocker
>
> trunk (3.3.0) now has HADOOP-16905. But branch 3.2/3.1/.. still uses 
> jackson-databind 2.9.
> I'm opening this jira since I'm unsure whether we are backporting 
> HADOOP-16905 to lower version branches due to compatibility concern or 
> whatever.
> CC [~weichiu]






[jira] [Created] (HADOOP-17014) Upgrade jackson-databind to 2.9.10.4

2020-04-26 Thread Siyao Meng (Jira)
Siyao Meng created HADOOP-17014:
---

 Summary: Upgrade jackson-databind to 2.9.10.4
 Key: HADOOP-17014
 URL: https://issues.apache.org/jira/browse/HADOOP-17014
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.1.4, 3.2.2
Reporter: Siyao Meng
Assignee: Siyao Meng


trunk (3.3.0) now has HADOOP-16905. But branch 3.2/3.1/.. still uses 
jackson-databind 2.9.

I'm opening this jira since I'm unsure whether we are backporting HADOOP-16905 
to lower version branches due to compatibility concern or whatever.






[jira] [Commented] (HADOOP-16905) Update jackson-databind to 2.10.3 to relieve us from the endless CVE patches

2020-04-26 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17092562#comment-17092562
 ] 

Siyao Meng commented on HADOOP-16905:
-

Good stuff. Awesome work [~weichiu].

Do we plan to backport this to 3.2/3.1 as well?

> Update jackson-databind to 2.10.3 to relieve us from the endless CVE patches
> 
>
> Key: HADOOP-16905
> URL: https://issues.apache.org/jira/browse/HADOOP-16905
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: release-blocker
> Fix For: 3.3.0
>
>
> Jackson-databind 2.10 should relieve us from the endless CVE patches 
> according to 
> https://medium.com/@cowtowncoder/jackson-2-10-features-cd880674d8a2
> Not sure if this is an easy update, but I think we should do this in Hadoop
> 3.3.0, before removing jackson-databind entirely.
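In the Hadoop build this update largely comes down to bumping the shared Jackson version property in the parent POM. A sketch, assuming the property is named `jackson2.version` in `hadoop-project/pom.xml` (verify the exact property name and location in the tree being patched):

```xml
<!-- hadoop-project/pom.xml (sketch; property name assumed) -->
<properties>
  <jackson2.version>2.10.3</jackson2.version>
</properties>
```

Modules that inherit the property then pick up jackson-databind 2.10.3 without per-module dependency edits.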






[jira] [Commented] (HADOOP-15338) Java 11 runtime support

2020-04-22 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17089992#comment-17089992
 ] 

Siyao Meng commented on HADOOP-15338:
-

[~ayushtkn] Just curious, which GCs are you using in JDK 8 vs JDK 11? This can 
be a big factor I believe.

> Java 11 runtime support
> ---
>
> Key: HADOOP-15338
> URL: https://issues.apache.org/jira/browse/HADOOP-15338
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0
>
>
> Oracle JDK 8 will be EoL during January 2019, and RedHat will end support for 
> OpenJDK 8 in June 2023 ([https://access.redhat.com/articles/1299013]), so we 
> need to support Java 11 LTS at least before June 2023.






[jira] [Updated] (HADOOP-16935) Backport HADOOP-10848. Cleanup calling of sun.security.krb5.Config to branch-3.2

2020-03-24 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-16935:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Backport HADOOP-10848. Cleanup calling of sun.security.krb5.Config to 
> branch-3.2
> 
>
> Key: HADOOP-16935
> URL: https://issues.apache.org/jira/browse/HADOOP-16935
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Minor
>
> Backport HADOOP-10848 to lower branches like branch-3.2 so applications using 
> older hadoop jars (e.g. Ozone) can get rid of the annoying message
> {code}
> WARNING: Illegal reflective access by 
> org.apache.hadoop.security.authentication.util.KerberosUtil 
> (file:/opt/hadoop/share/ozone/lib/hadoop-auth-3.2.0.jar) to method 
> sun.security.krb5.Config.getInstance()
> {code}
> when running with JDK11+
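Until a hadoop-auth jar with the backported fix is in place, a common JVM-level workaround (an assumption on my part, not part of HADOOP-10848 itself) is to open the internal krb5 package to classpath code on JDK 9+:

```shell
# Workaround, not the fix: suppress the reflective-access warning by opening
# sun.security.krb5 (which lives in the java.security.jgss module).
export HADOOP_OPTS="$HADOOP_OPTS --add-opens java.security.jgss/sun.security.krb5=ALL-UNNAMED"
```

The backport is still preferable, since `--add-opens` must be threaded into every launcher and only papers over the encapsulation violation.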






[jira] [Created] (HADOOP-16935) Backport HADOOP-10848 Cleanup calling of sun.security.krb5.Config to branch-3.2

2020-03-24 Thread Siyao Meng (Jira)
Siyao Meng created HADOOP-16935:
---

 Summary: Backport HADOOP-10848 Cleanup calling of 
sun.security.krb5.Config to branch-3.2
 Key: HADOOP-16935
 URL: https://issues.apache.org/jira/browse/HADOOP-16935
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Siyao Meng
Assignee: Siyao Meng


Backport HADOOP-10848 to lower branches so applications using hadoop 3.2.x can 
get rid of the annoying message:
{code}
WARNING: Illegal reflective access by 
org.apache.hadoop.security.authentication.util.KerberosUtil 
(file:/path/to/lib/hadoop-auth-3.2.0.jar) to method 
sun.security.krb5.Config.getInstance()
{code}






[jira] [Updated] (HADOOP-16935) Backport HADOOP-10848. Cleanup calling of sun.security.krb5.Config to branch-3.2

2020-03-24 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-16935:

Summary: Backport HADOOP-10848. Cleanup calling of sun.security.krb5.Config 
to branch-3.2  (was: Backport HADOOP-10848 Cleanup calling of 
sun.security.krb5.Config to branch-3.2)

> Backport HADOOP-10848. Cleanup calling of sun.security.krb5.Config to 
> branch-3.2
> 
>
> Key: HADOOP-16935
> URL: https://issues.apache.org/jira/browse/HADOOP-16935
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Backport HADOOP-10848 to lower branches so applications using hadoop 3.2.x 
> can get rid of the annoying message:
> {code}
> WARNING: Illegal reflective access by 
> org.apache.hadoop.security.authentication.util.KerberosUtil 
> (file:/path/to/lib/hadoop-auth-3.2.0.jar) to method 
> sun.security.krb5.Config.getInstance()
> {code}






[jira] [Updated] (HADOOP-16935) Backport HADOOP-10848. Cleanup calling of sun.security.krb5.Config to branch-3.2

2020-03-24 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-16935:

Status: Patch Available  (was: Open)

> Backport HADOOP-10848. Cleanup calling of sun.security.krb5.Config to 
> branch-3.2
> 
>
> Key: HADOOP-16935
> URL: https://issues.apache.org/jira/browse/HADOOP-16935
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Backport HADOOP-10848 to lower branches so applications using hadoop 3.2.x 
> can get rid of the annoying message:
> {code}
> WARNING: Illegal reflective access by 
> org.apache.hadoop.security.authentication.util.KerberosUtil 
> (file:/path/to/lib/hadoop-auth-3.2.0.jar) to method 
> sun.security.krb5.Config.getInstance()
> {code}






[jira] [Updated] (HADOOP-16935) Backport HADOOP-10848. Cleanup calling of sun.security.krb5.Config to branch-3.2

2020-03-24 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-16935:

Description: 
Backport HADOOP-10848 to lower branches like branch-3.2 so applications using 
older hadoop jars (e.g. Ozone) can get rid of the annoying message
{code}
WARNING: Illegal reflective access by 
org.apache.hadoop.security.authentication.util.KerberosUtil 
(file:/opt/hadoop/share/ozone/lib/hadoop-auth-3.2.0.jar) to method 
sun.security.krb5.Config.getInstance()
{code}
when running with JDK11+

  was:
Backport HADOOP-10848 to lower branches so applications using hadoop 3.2.x can 
get rid of the annoying message:
{code}
WARNING: Illegal reflective access by 
org.apache.hadoop.security.authentication.util.KerberosUtil 
(file:/path/to/lib/hadoop-auth-3.2.0.jar) to method 
sun.security.krb5.Config.getInstance()
{code}


> Backport HADOOP-10848. Cleanup calling of sun.security.krb5.Config to 
> branch-3.2
> 
>
> Key: HADOOP-16935
> URL: https://issues.apache.org/jira/browse/HADOOP-16935
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Backport HADOOP-10848 to lower branches like branch-3.2 so applications using 
> older hadoop jars (e.g. Ozone) can get rid of the annoying message
> {code}
> WARNING: Illegal reflective access by 
> org.apache.hadoop.security.authentication.util.KerberosUtil 
> (file:/opt/hadoop/share/ozone/lib/hadoop-auth-3.2.0.jar) to method 
> sun.security.krb5.Config.getInstance()
> {code}
> when running with JDK11+






[jira] [Updated] (HADOOP-16935) Backport HADOOP-10848. Cleanup calling of sun.security.krb5.Config to branch-3.2

2020-03-24 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-16935:

Priority: Minor  (was: Major)

> Backport HADOOP-10848. Cleanup calling of sun.security.krb5.Config to 
> branch-3.2
> 
>
> Key: HADOOP-16935
> URL: https://issues.apache.org/jira/browse/HADOOP-16935
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Minor
>
> Backport HADOOP-10848 to lower branches like branch-3.2 so applications using 
> older hadoop jars (e.g. Ozone) can get rid of the annoying message
> {code}
> WARNING: Illegal reflective access by 
> org.apache.hadoop.security.authentication.util.KerberosUtil 
> (file:/opt/hadoop/share/ozone/lib/hadoop-auth-3.2.0.jar) to method 
> sun.security.krb5.Config.getInstance()
> {code}
> when running with JDK11+






[jira] [Created] (HADOOP-16902) Add OpenTracing in S3 Cloud Connector

2020-03-03 Thread Siyao Meng (Jira)
Siyao Meng created HADOOP-16902:
---

 Summary: Add OpenTracing in S3 Cloud Connector
 Key: HADOOP-16902
 URL: https://issues.apache.org/jira/browse/HADOOP-16902
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Siyao Meng









[jira] [Updated] (HADOOP-16891) Upgrade jackson-databind to 2.9.10.3

2020-02-27 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-16891:

Description: 
New [RCE|https://nvd.nist.gov/vuln/detail/CVE-2020-8840] found in 
jackson-databind 2.0.0 through 2.9.10.2.

Patched in 2.9.10.3. [Looks 
critical|https://github.com/jas502n/CVE-2020-8840/blob/master/Poc.java#L13].

After HADOOP-16882 gets in, we should backport this to the lower-version 
branches ASAP.

  was:
New [RCE|https://nvd.nist.gov/vuln/detail/CVE-2020-8840] found in 
jackson-databind 2.0.0 through 2.9.10.2.

Patched in 2.9.10.3. [Looks 
critical|https://github.com/jas502n/CVE-2020-8840/blob/master/Poc.java#L13].

After HADOOP-16882 gets in, we should backport this to the lower-version 
branches as well.


> Upgrade jackson-databind to 2.9.10.3
> 
>
> Key: HADOOP-16891
> URL: https://issues.apache.org/jira/browse/HADOOP-16891
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Blocker
> Fix For: 3.3.0
>
>
> New [RCE|https://nvd.nist.gov/vuln/detail/CVE-2020-8840] found in 
> jackson-databind 2.0.0 through 2.9.10.2.
> Patched in 2.9.10.3. [Looks 
> critical|https://github.com/jas502n/CVE-2020-8840/blob/master/Poc.java#L13].
> After HADOOP-16882 gets in, we should backport this to the lower-version 
> branches ASAP.
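Until the upgraded jars ship, downstream builds can pin the patched release themselves. A sketch of the usual Maven approach (standard jackson-databind coordinates, version taken from this issue):

```xml
<!-- Sketch: pin the patched jackson-databind in a downstream pom.xml.
     dependencyManagement wins over versions pulled in transitively. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.9.10.3</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```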






[jira] [Updated] (HADOOP-16891) Upgrade jackson-databind to 2.9.10.3

2020-02-27 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-16891:

Description: 
New [RCE|https://nvd.nist.gov/vuln/detail/CVE-2020-8840] found in 
jackson-databind 2.0.0 through 2.9.10.2.

Patched in 2.9.10.3. [Looks 
critical|https://github.com/jas502n/CVE-2020-8840/blob/master/Poc.java#L13].

After HADOOP-16882 gets in, we should backport this to the lower-version 
branches as well.

  was:
New RCE found in jackson-databind 2.0.0 through 2.9.10.2.

Patched in 2.9.10.3. Looks critical.


> Upgrade jackson-databind to 2.9.10.3
> 
>
> Key: HADOOP-16891
> URL: https://issues.apache.org/jira/browse/HADOOP-16891
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Blocker
> Fix For: 3.3.0
>
>
> New [RCE|https://nvd.nist.gov/vuln/detail/CVE-2020-8840] found in 
> jackson-databind 2.0.0 through 2.9.10.2.
> Patched in 2.9.10.3. [Looks 
> critical|https://github.com/jas502n/CVE-2020-8840/blob/master/Poc.java#L13].
> After HADOOP-16882 gets in, we should backport this to the lower-version 
> branches as well.






[jira] [Updated] (HADOOP-16891) Upgrade jackson-databind to 2.9.10.3

2020-02-27 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-16891:

Fix Version/s: 3.3.0

> Upgrade jackson-databind to 2.9.10.3
> 
>
> Key: HADOOP-16891
> URL: https://issues.apache.org/jira/browse/HADOOP-16891
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Blocker
> Fix For: 3.3.0
>
>
> New RCE found in jackson-databind 2.0.0 through 2.9.10.2.
> Patched in 2.9.10.3. Looks critical.






[jira] [Updated] (HADOOP-16891) Upgrade jackson-databind to 2.9.10.3

2020-02-27 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-16891:

Description: 
New RCE found in jackson-databind 2.0.0 through 2.9.10.2.

Patched in 2.9.10.3. Looks critical.

  was:New RCE found in jackson-databind 2.0.0 through 2.9.10.2. Patched in 
2.9.10.3. Looks critical.


> Upgrade jackson-databind to 2.9.10.3
> 
>
> Key: HADOOP-16891
> URL: https://issues.apache.org/jira/browse/HADOOP-16891
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Blocker
>
> New RCE found in jackson-databind 2.0.0 through 2.9.10.2.
> Patched in 2.9.10.3. Looks critical.






[jira] [Updated] (HADOOP-16891) Upgrade jackson-databind to 2.9.10.3

2020-02-27 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-16891:

Status: Patch Available  (was: Open)

> Upgrade jackson-databind to 2.9.10.3
> 
>
> Key: HADOOP-16891
> URL: https://issues.apache.org/jira/browse/HADOOP-16891
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Blocker
>
> New RCE found in jackson-databind 2.0.0 through 2.9.10.2. Patched in 
> 2.9.10.3. Looks critical.






[jira] [Created] (HADOOP-16891) Upgrade jackson-databind to 2.9.10.3

2020-02-27 Thread Siyao Meng (Jira)
Siyao Meng created HADOOP-16891:
---

 Summary: Upgrade jackson-databind to 2.9.10.3
 Key: HADOOP-16891
 URL: https://issues.apache.org/jira/browse/HADOOP-16891
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Siyao Meng
Assignee: Siyao Meng


New RCE found in jackson-databind 2.0.0 through 2.9.10.2. Patched in 2.9.10.3. 
Looks critical.






[jira] [Updated] (HADOOP-16071) Fix typo in DistCp Counters - Bandwidth in Bytes

2020-02-21 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-16071:

Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

> Fix typo in DistCp Counters - Bandwidth in Bytes
> 
>
> Key: HADOOP-16071
> URL: https://issues.apache.org/jira/browse/HADOOP-16071
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16071.001.patch
>
>
> {code:bash|title=DistCp MR Job Counters}
> ...
>   DistCp Counters
>   Bandwidth in Btyes=20971520
>   Bytes Copied=20971520
>   Bytes Expected=20971520
>   Files Copied=1
> {code}
> {noformat}
> Bandwidth in Btyes -> Bandwidth in Bytes
> {noformat}






[jira] [Updated] (HADOOP-16867) [thirdparty] Add shaded JaegerTracer

2020-02-21 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-16867:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> [thirdparty] Add shaded JaegerTracer
> 
>
> Key: HADOOP-16867
> URL: https://issues.apache.org/jira/browse/HADOOP-16867
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Add artifact {{hadoop-shaded-jaeger}} to {{hadoop-thirdparty}} for 
> OpenTracing work in HADOOP-15566.
> CC [~weichiu]






[jira] [Updated] (HADOOP-16867) [thirdparty] Add shaded JaegerTracer

2020-02-18 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-16867:

Status: Patch Available  (was: Open)

> [thirdparty] Add shaded JaegerTracer
> 
>
> Key: HADOOP-16867
> URL: https://issues.apache.org/jira/browse/HADOOP-16867
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Add artifact {{hadoop-shaded-jaeger}} to {{hadoop-thirdparty}} for 
> OpenTracing work in HADOOP-15566.
> CC [~weichiu]





