[jira] [Updated] (HADOOP-15927) Add @threadSafe annotation to hadoop-maven-plugins to enable Maven parallel build

2018-11-13 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15927:
---
Description: 
Maven 3.x can build modules in parallel: 
https://cwiki.apache.org/confluence/display/MAVEN/Parallel+builds+in+Maven+3
When trying this feature, I got the following warning:
{noformat}
[WARNING] *****************************************************************
[WARNING] * Your build is requesting parallel execution, but project      *
[WARNING] * contains the following plugin(s) that have goals not marked   *
[WARNING] * as @threadSafe to support parallel building.                  *
[WARNING] * While this /may/ work fine, please look for plugin updates    *
[WARNING] * and/or request plugins be made thread-safe.                   *
[WARNING] * If reporting an issue, report it against the plugin in        *
[WARNING] * question, not against maven-core                              *
[WARNING] *****************************************************************
[WARNING] The following plugins are not marked @threadSafe in Apache Hadoop Common:
[WARNING] org.apache.hadoop:hadoop-maven-plugins:3.3.0-SNAPSHOT
[WARNING] Enable debug to see more precisely which goals are not marked @threadSafe.
[WARNING] *****************************************************************
{noformat}
Let's mark hadoop-maven-plugins as @threadSafe to remove the warning.

  was:
Maven 3.x can build modules in parallel: 
https://cwiki.apache.org/confluence/display/MAVEN/Parallel+builds+in+Maven+3
When trying this feature, I got the following warning:
{noformat}
[WARNING] *****************************************************************
[WARNING] * Your build is requesting parallel execution, but project      *
[WARNING] * contains the following plugin(s) that have goals not marked   *
[INFO] 
[WARNING] * as @threadSafe to support parallel building.                  *
[WARNING] * While this /may/ work fine, please look for plugin updates    *
[WARNING] * and/or request plugins be made thread-safe.                   *
[WARNING] * If reporting an issue, report it against the plugin in        *
[WARNING] * question, not against maven-core                              *
[WARNING] *****************************************************************
[WARNING] The following plugins are not marked @threadSafe in Apache Hadoop Common:
[WARNING] org.apache.hadoop:hadoop-maven-plugins:3.3.0-SNAPSHOT
[WARNING] Enable debug to see more precisely which goals are not marked @threadSafe.
[WARNING] *****************************************************************
{noformat}
Let's mark hadoop-maven-plugins as @threadSafe to remove the warning.


> Add @threadSafe annotation to hadoop-maven-plugins to enable Maven parallel 
> build
> -
>
> Key: HADOOP-15927
> URL: https://issues.apache.org/jira/browse/HADOOP-15927
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Major
>
> Maven 3.x can build modules in parallel: 
> https://cwiki.apache.org/confluence/display/MAVEN/Parallel+builds+in+Maven+3
> When trying this feature, I got the following warning:
> {noformat}
> [WARNING] *****************************************************************
> [WARNING] * Your build is requesting parallel execution, but project      *
> [WARNING] * contains the following plugin(s) that have goals not marked   *
> [WARNING] * as @threadSafe to support parallel building.                  *
> [WARNING] * While this /may/ work fine, please look for plugin updates    *
> [WARNING] * and/or request plugins be made thread-safe.                   *
> [WARNING] * If reporting an issue, report it against the plugin in        *
> [WARNING] * question, not against maven-core                              *
> [WARNING] *****************************************************************
> [WARNING] The following plugins are not marked @threadSafe in Apache Hadoop Common:
> [WARNING] org.apache.hadoop:hadoop-maven-plugins:3.3.0-SNAPSHOT
> [WARNING] Enable debug to see more precisely which goals are not marked @threadSafe.
> [WARNING] *****************************************************************
> {noformat}
> Let's mark hadoop-maven-plugins as @threadSafe to remove the warning.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15924) Hadoop aws does not use shaded jars

2018-11-13 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15924:

Summary: Hadoop aws does not use shaded jars  (was: Hadoop aws does not use 
with shaded jars)

> Hadoop aws does not use shaded jars
> ---
>
> Key: HADOOP-15924
> URL: https://issues.apache.org/jira/browse/HADOOP-15924
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-15924.00.patch
>
>
> The issue is that hadoop-aws cannot be used with the shaded jars.
> The recommended client-side jars for Hadoop 3 are the shaded client-api/runtime 
> jars. They shade Guava etc., so a class such as SemaphoredDelegatingExecutor 
> refers to the shaded Guava classes.
> hadoop-aws ships the S3AFileSystem implementation, which uses 
> SemaphoredDelegatingExecutor with the unshaded Guava ListeningExecutorService 
> in the constructor. When S3AFileSystem is created against the shaded 
> client-api jar, it finds SemaphoredDelegatingExecutor but not the expected 
> constructor, because in that jar the constructor takes the shaded Guava 
> ListeningExecutorService.
> So essentially none of the aws/azure/adl Hadoop FS implementations will work 
> with the shaded Hadoop client runtime jars.
>  
> This JIRA tracks the work required to make hadoop-aws work with the shaded 
> Hadoop client jars.
>  
> One possible solution is to make hadoop-aws depend on the shaded Hadoop jars; 
> that way the mismatch does not arise. Currently hadoop-aws depends on 
> aws-sdk-bundle, and all other jars are provided dependencies.
>  
> cc [~steve_l]
>  
>  
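A quick way to see the mismatch described above is to list the constructors of the class from whichever jar is on the classpath; a hedged sketch (the shaded package prefix mentioned in the comments is an assumption):

{code}
import java.lang.reflect.Constructor;

public class ProbeConstructors {
  public static void main(String[] args) throws Exception {
    Class<?> c = Class.forName(
        "org.apache.hadoop.util.SemaphoredDelegatingExecutor");
    // Against hadoop-common this prints an unshaded Guava parameter type;
    // against hadoop-client-api the same parameter appears under a shaded
    // prefix (assumed: org.apache.hadoop.shaded.com.google...), so code
    // compiled against the unshaded signature fails with NoSuchMethodError.
    for (Constructor<?> ctor : c.getConstructors()) {
      System.out.println(ctor);
    }
  }
}
{code}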



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15924) Hadoop aws does not use shaded jars

2018-11-13 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15924:

Component/s: fs/s3

> Hadoop aws does not use shaded jars
> ---
>
> Key: HADOOP-15924
> URL: https://issues.apache.org/jira/browse/HADOOP-15924
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-15924.00.patch
>
>
> The issue is that hadoop-aws cannot be used with the shaded jars.
> The recommended client-side jars for Hadoop 3 are the shaded client-api/runtime 
> jars. They shade Guava etc., so a class such as SemaphoredDelegatingExecutor 
> refers to the shaded Guava classes.
> hadoop-aws ships the S3AFileSystem implementation, which uses 
> SemaphoredDelegatingExecutor with the unshaded Guava ListeningExecutorService 
> in the constructor. When S3AFileSystem is created against the shaded 
> client-api jar, it finds SemaphoredDelegatingExecutor but not the expected 
> constructor, because in that jar the constructor takes the shaded Guava 
> ListeningExecutorService.
> So essentially none of the aws/azure/adl Hadoop FS implementations will work 
> with the shaded Hadoop client runtime jars.
>  
> This JIRA tracks the work required to make hadoop-aws work with the shaded 
> Hadoop client jars.
>  
> One possible solution is to make hadoop-aws depend on the shaded Hadoop jars; 
> that way the mismatch does not arise. Currently hadoop-aws depends on 
> aws-sdk-bundle, and all other jars are provided dependencies.
>  
> cc [~steve_l]
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15917) AliyunOSS: fix incorrect ReadOps and WriteOps in statistics

2018-11-13 Thread wujinhu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684985#comment-16684985
 ] 

wujinhu edited comment on HADOOP-15917 at 11/13/18 10:23 AM:
-

Thanks [~Sammi] for your comments.
 # I have added some comments to make them clearer.
 # Yes, we only care about successful operations for now. I plan to refactor the 
statistics like hadoop-aws in the future.
 # It's OK: AliyunOSSFileSystemStore is initialized with statistics.


was (Author: wujinhu):
Thanks [~Sammi] for your comments.
 # I have added some comments to make them clearer.
 # Yes, we only care about successful operations for now. I plan to refactor the 
statistics like hadoop-aws.
 # It's OK: AliyunOSSFileSystemStore is initialized with statistics.

> AliyunOSS: fix incorrect ReadOps and WriteOps in statistics
> ---
>
> Key: HADOOP-15917
> URL: https://issues.apache.org/jira/browse/HADOOP-15917
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15917.001.patch, HADOOP-15917.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15387) Produce a shaded hadoop-cloud-storage JAR for applications to use

2018-11-13 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-15387:
---

Assignee: (was: Steve Loughran)

> Produce a shaded hadoop-cloud-storage JAR for applications to use
> -
>
> Key: HADOOP-15387
> URL: https://issues.apache.org/jira/browse/HADOOP-15387
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/adl, fs/azure, fs/oss, fs/s3, fs/swift
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Priority: Major
>
> Produce a maven-shaded hadoop-cloudstorage JAR for downstream use so that
>  * Hadoop dependency choices don't control their decisions
>  * There is little/no risk of their JAR changes breaking the Hadoop bits they 
> depend on
> This JAR would pull in the shaded hadoop-client JAR and the aws-sdk-bundle 
> JAR, neither of which would be shaded again (so yes, upgrading aws-sdks would 
> be a bit risky, but double-shading a pre-shaded 30MB JAR is excessive on 
> multiple levels).
> Metrics of success: Spark, Tez, Flink etc. can pick it up and use it, and all 
> are happy.
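The downstream experience this JAR aims for is roughly the following, with only the shaded client plus the cloud-storage artifact on the classpath; a sketch using the standard FileSystem API (bucket name illustrative):

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class CloudStorageSmokeTest {
  public static void main(String[] args) throws Exception {
    // Should resolve S3AFileSystem without any unshaded hadoop-common jar.
    FileSystem fs =
        FileSystem.get(URI.create("s3a://example-bucket/"), new Configuration());
    System.out.println("filesystem: " + fs.getClass().getName());
  }
}
{code}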



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-9911) hadoop 2.1.0-beta tarball only contains 32bit native libraries

2018-11-13 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-9911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-9911.

Resolution: Won't Fix

I think this isn't going to be fixed.

> hadoop 2.1.0-beta tarball only contains 32bit native libraries
> --
>
> Key: HADOOP-9911
> URL: https://issues.apache.org/jira/browse/HADOOP-9911
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta, 2.2.0
>Reporter: André Kelpe
>Priority: Major
>
> I am setting up a cluster on 64-bit Linux and noticed that the tarball only 
> ships with 32-bit libraries:
> $ pwd
> /opt/hadoop-2.1.0-beta/lib/native
> $ ls -al
> total 2376
> drwxr-xr-x 2 67974 users   4096 Aug 15 20:59 .
> drwxr-xr-x 3 67974 users   4096 Aug 15 20:59 ..
> -rw-r--r-- 1 67974 users 598578 Aug 15 20:59 libhadoop.a
> -rw-r--r-- 1 67974 users 764772 Aug 15 20:59 libhadooppipes.a
> lrwxrwxrwx 1 67974 users 18 Aug 15 20:59 libhadoop.so -> 
> libhadoop.so.1.0.0
> -rwxr-xr-x 1 67974 users 407568 Aug 15 20:59 libhadoop.so.1.0.0
> -rw-r--r-- 1 67974 users 304632 Aug 15 20:59 libhadooputils.a
> -rw-r--r-- 1 67974 users 184414 Aug 15 20:59 libhdfs.a
> lrwxrwxrwx 1 67974 users 16 Aug 15 20:59 libhdfs.so -> libhdfs.so.0.0.0
> -rwxr-xr-x 1 67974 users 149556 Aug 15 20:59 libhdfs.so.0.0.0
> $ file *
> libhadoop.a:current ar archive
> libhadooppipes.a:   current ar archive
> libhadoop.so:   symbolic link to `libhadoop.so.1.0.0'
> libhadoop.so.1.0.0: ELF 32-bit LSB shared object, Intel 80386, version 1 
> (SYSV), dynamically linked, 
> BuildID[sha1]=0x527e88ec3e92a95389839bd3fc9d7dbdebf654d6, not stripped
> libhadooputils.a:   current ar archive
> libhdfs.a:  current ar archive
> libhdfs.so: symbolic link to `libhdfs.so.0.0.0'
> libhdfs.so.0.0.0:   ELF 32-bit LSB shared object, Intel 80386, version 1 
> (SYSV), dynamically linked, 
> BuildID[sha1]=0xddb2abae9272f584edbe22c76b43fcda9436f685, not stripped
> $ hadoop checknative
> 13/08/28 18:11:17 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Native library checking:
> hadoop: false 
> zlib:   false 
> snappy: false 
> lz4:false 
> bzip2:  false 
> 13/08/28 18:11:17 INFO util.ExitUtil: Exiting with status 1
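For what it's worth, the same check can be done programmatically via the public NativeCodeLoader flag; a minimal sketch:

{code}
import org.apache.hadoop.util.NativeCodeLoader;

public class CheckNative {
  public static void main(String[] args) {
    // Equivalent in spirit to `hadoop checknative`: false means the bundled
    // libhadoop.so could not be loaded (e.g. a 32-bit .so on a 64-bit JVM).
    System.out.println("native hadoop loaded: "
        + NativeCodeLoader.isNativeCodeLoaded());
  }
}
{code}

Rebuilding the native libraries on the target platform (e.g. {{mvn package -Pdist,native -DskipTests -Dtar}}, per BUILDING.txt) is the usual workaround.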



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15917) AliyunOSS: fix incorrect ReadOps and WriteOps in statistics

2018-11-13 Thread Sammi Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684911#comment-16684911
 ] 

Sammi Chen commented on HADOOP-15917:
-

[~wujinhu], some comments:

1. TestAliyunOSSBlockOutputStream.java

Would you add some comments about why the read ops count is 7 and the write ops 
count is 3 here?

assertEquals(7, statistics.getReadOps());

assertEquals(3, statistics.getWriteOps());

2. The statistics count the successful read and write operations. Do we care 
about counting failed operations?

3. FSDataOutputStream is created with a null statistics object.
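On points 2 and 3, the usual pattern is a guarded increment that only runs after a successful operation; a minimal sketch (helper placement is illustrative, not the actual patch):

{code}
import org.apache.hadoop.fs.FileSystem;

class StatisticsHelper {
  // Count an operation only after it succeeds, and tolerate the null
  // Statistics reference noted in point 3.
  static void incrementReadOps(FileSystem.Statistics statistics, int count) {
    if (statistics != null) {
      statistics.incrementReadOps(count);
    }
  }
}
{code}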

 

> AliyunOSS: fix incorrect ReadOps and WriteOps in statistics
> ---
>
> Key: HADOOP-15917
> URL: https://issues.apache.org/jira/browse/HADOOP-15917
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15917.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15924) Hadoop aws does not use shaded jars

2018-11-13 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15924:

Affects Version/s: 3.2.0

> Hadoop aws does not use shaded jars
> ---
>
> Key: HADOOP-15924
> URL: https://issues.apache.org/jira/browse/HADOOP-15924
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-15924.00.patch
>
>
> The issue is that hadoop-aws cannot be used with the shaded jars.
> The recommended client-side jars for Hadoop 3 are the shaded client-api/runtime 
> jars. They shade Guava etc., so a class such as SemaphoredDelegatingExecutor 
> refers to the shaded Guava classes.
> hadoop-aws ships the S3AFileSystem implementation, which uses 
> SemaphoredDelegatingExecutor with the unshaded Guava ListeningExecutorService 
> in the constructor. When S3AFileSystem is created against the shaded 
> client-api jar, it finds SemaphoredDelegatingExecutor but not the expected 
> constructor, because in that jar the constructor takes the shaded Guava 
> ListeningExecutorService.
> So essentially none of the aws/azure/adl Hadoop FS implementations will work 
> with the shaded Hadoop client runtime jars.
>  
> This JIRA tracks the work required to make hadoop-aws work with the shaded 
> Hadoop client jars.
>  
> One possible solution is to make hadoop-aws depend on the shaded Hadoop jars; 
> that way the mismatch does not arise. Currently hadoop-aws depends on 
> aws-sdk-bundle, and all other jars are provided dependencies.
>  
> cc [~steve_l]
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15924) Hadoop aws does not use with shaded jars

2018-11-13 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15924:

Summary: Hadoop aws does not use with shaded jars  (was: Hadoop aws cannot 
be used with shaded jars)

> Hadoop aws does not use with shaded jars
> 
>
> Key: HADOOP-15924
> URL: https://issues.apache.org/jira/browse/HADOOP-15924
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-15924.00.patch
>
>
> The issue is that hadoop-aws cannot be used with the shaded jars.
> The recommended client-side jars for Hadoop 3 are the shaded client-api/runtime 
> jars. They shade Guava etc., so a class such as SemaphoredDelegatingExecutor 
> refers to the shaded Guava classes.
> hadoop-aws ships the S3AFileSystem implementation, which uses 
> SemaphoredDelegatingExecutor with the unshaded Guava ListeningExecutorService 
> in the constructor. When S3AFileSystem is created against the shaded 
> client-api jar, it finds SemaphoredDelegatingExecutor but not the expected 
> constructor, because in that jar the constructor takes the shaded Guava 
> ListeningExecutorService.
> So essentially none of the aws/azure/adl Hadoop FS implementations will work 
> with the shaded Hadoop client runtime jars.
>  
> This JIRA tracks the work required to make hadoop-aws work with the shaded 
> Hadoop client jars.
>  
> One possible solution is to make hadoop-aws depend on the shaded Hadoop jars; 
> that way the mismatch does not arise. Currently hadoop-aws depends on 
> aws-sdk-bundle, and all other jars are provided dependencies.
>  
> cc [~steve_l]
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15924) Hadoop aws does not use with shaded jars

2018-11-13 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15924:

Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

> Hadoop aws does not use with shaded jars
> 
>
> Key: HADOOP-15924
> URL: https://issues.apache.org/jira/browse/HADOOP-15924
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-15924.00.patch
>
>
> The issue is that hadoop-aws cannot be used with the shaded jars.
> The recommended client-side jars for Hadoop 3 are the shaded client-api/runtime 
> jars. They shade Guava etc., so a class such as SemaphoredDelegatingExecutor 
> refers to the shaded Guava classes.
> hadoop-aws ships the S3AFileSystem implementation, which uses 
> SemaphoredDelegatingExecutor with the unshaded Guava ListeningExecutorService 
> in the constructor. When S3AFileSystem is created against the shaded 
> client-api jar, it finds SemaphoredDelegatingExecutor but not the expected 
> constructor, because in that jar the constructor takes the shaded Guava 
> ListeningExecutorService.
> So essentially none of the aws/azure/adl Hadoop FS implementations will work 
> with the shaded Hadoop client runtime jars.
>  
> This JIRA tracks the work required to make hadoop-aws work with the shaded 
> Hadoop client jars.
>  
> One possible solution is to make hadoop-aws depend on the shaded Hadoop jars; 
> that way the mismatch does not arise. Currently hadoop-aws depends on 
> aws-sdk-bundle, and all other jars are provided dependencies.
>  
> cc [~steve_l]
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2018-11-13 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-15870:
---

Assignee: lqjacklee

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
>
> Otherwise `remainingInFile` will not change after `seek`.
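A patch-style sketch of the proposed change (field names assumed from S3AInputStream, not confirmed against the patch):

{code}
// Sketch only: compute the remaining bytes from the pending read position,
// so the value tracks seek() even before the next read reopens the stream.
@Override
public synchronized long remainingInFile() {
  return this.contentLength - this.nextReadPos;
}
{code}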



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15927) Add @threadSafe annotation to hadoop-maven-plugins to enable Maven parallel build

2018-11-13 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-15927:
--

 Summary: Add @threadSafe annotation to hadoop-maven-plugins to 
enable Maven parallel build
 Key: HADOOP-15927
 URL: https://issues.apache.org/jira/browse/HADOOP-15927
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Akira Ajisaka


Maven 3.x can build modules in parallel: 
https://cwiki.apache.org/confluence/display/MAVEN/Parallel+builds+in+Maven+3
When trying this feature, I got the following warning:
{noformat}
[WARNING] *****************************************************************
[WARNING] * Your build is requesting parallel execution, but project      *
[WARNING] * contains the following plugin(s) that have goals not marked   *
[INFO] 
[WARNING] * as @threadSafe to support parallel building.                  *
[WARNING] * While this /may/ work fine, please look for plugin updates    *
[WARNING] * and/or request plugins be made thread-safe.                   *
[WARNING] * If reporting an issue, report it against the plugin in        *
[WARNING] * question, not against maven-core                              *
[WARNING] *****************************************************************
[WARNING] The following plugins are not marked @threadSafe in Apache Hadoop Common:
[WARNING] org.apache.hadoop:hadoop-maven-plugins:3.3.0-SNAPSHOT
[WARNING] Enable debug to see more precisely which goals are not marked @threadSafe.
[WARNING] *****************************************************************
{noformat}
Let's mark hadoop-maven-plugins as @threadSafe to remove the warning.
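For context, when a plugin is built with the maven-plugin-annotations API, marking a goal thread-safe is a per-mojo flag; a minimal sketch (the mojo name and class here are illustrative, not the actual hadoop-maven-plugins source):

{code}
import org.apache.maven.plugin.AbstractMojo;
import org.apache.maven.plugins.annotations.Mojo;

// threadSafe = true is the flag that clears the parallel-build warning above.
@Mojo(name = "example-goal", threadSafe = true)
public class ExampleMojo extends AbstractMojo {
  @Override
  public void execute() {
    // goal implementation goes here
  }
}
{code}

A parallel build can then be requested with, e.g., {{mvn -T 1C clean install}} (one thread per core).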



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15924) Hadoop aws cannot be used with shaded jars

2018-11-13 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684969#comment-16684969
 ] 

Steve Loughran commented on HADOOP-15924:
-

-1

We have consistent use of the unshaded artifacts across the hadoop-* artifacts, 
and that is not going to change.

The shaded JARs are for downstream use only and, as Java 11 support goes in, 
something to eventually replace with Java modules.

The solution you need is HADOOP-15387. If you contribute patches there, they'll 
be reviewed.

bq. Just ran tests against s3 gateway endpoint 

The full integration test suite? Thanks for this. For testing the shaded stuff, 
life gets a bit more complex, as those tests will have to live downstream.

> Hadoop aws cannot be used with shaded jars
> --
>
> Key: HADOOP-15924
> URL: https://issues.apache.org/jira/browse/HADOOP-15924
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-15924.00.patch
>
>
> The issue is that hadoop-aws cannot be used with the shaded jars.
> The recommended client-side jars for Hadoop 3 are the shaded client-api/runtime 
> jars. They shade Guava etc., so a class such as SemaphoredDelegatingExecutor 
> refers to the shaded Guava classes.
> hadoop-aws ships the S3AFileSystem implementation, which uses 
> SemaphoredDelegatingExecutor with the unshaded Guava ListeningExecutorService 
> in the constructor. When S3AFileSystem is created against the shaded 
> client-api jar, it finds SemaphoredDelegatingExecutor but not the expected 
> constructor, because in that jar the constructor takes the shaded Guava 
> ListeningExecutorService.
> So essentially none of the aws/azure/adl Hadoop FS implementations will work 
> with the shaded Hadoop client runtime jars.
>  
> This JIRA tracks the work required to make hadoop-aws work with the shaded 
> Hadoop client jars.
>  
> One possible solution is to make hadoop-aws depend on the shaded Hadoop jars; 
> that way the mismatch does not arise. Currently hadoop-aws depends on 
> aws-sdk-bundle, and all other jars are provided dependencies.
>  
> cc [~steve_l]
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15681) AuthenticationFilter should generate valid date format for Set-Cookie header regardless of default Locale

2018-11-13 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15681:

Component/s: security

> AuthenticationFilter should generate valid date format for Set-Cookie header 
> regardless of default Locale
> -
>
> Key: HADOOP-15681
> URL: https://issues.apache.org/jira/browse/HADOOP-15681
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.2.0
>Reporter: Cao Manh Dat
>Priority: Major
> Attachments: HADOOP-15681.patch
>
>
> Hi guys,
> When I tried to set up Hadoop Kerberos authentication for Solr (HTTP/2), I ran 
> into this exception:
> {code}
> java.lang.IllegalArgumentException: null
>   at org.eclipse.jetty.http2.hpack.Huffman.octetsNeeded(Huffman.java:435) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.hpack.Huffman.octetsNeeded(Huffman.java:409) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.hpack.HpackEncoder.encodeValue(HpackEncoder.java:368) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.hpack.HpackEncoder.encode(HpackEncoder.java:302) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.hpack.HpackEncoder.encode(HpackEncoder.java:179) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.generator.HeadersGenerator.generateHeaders(HeadersGenerator.java:72)
>  ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.generator.HeadersGenerator.generate(HeadersGenerator.java:56)
>  ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.generator.Generator.control(Generator.java:80) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.HTTP2Session$ControlEntry.generate(HTTP2Session.java:1163)
>  ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Flusher.process(HTTP2Flusher.java:184) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.util.IteratingCallback.processing(IteratingCallback.java:241)
>  ~[jetty-util-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.util.IteratingCallback.iterate(IteratingCallback.java:224) 
> ~[jetty-util-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Session.frame(HTTP2Session.java:685) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Session.frames(HTTP2Session.java:657) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Stream.headers(HTTP2Stream.java:107) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.server.HttpTransportOverHTTP2.sendHeadersFrame(HttpTransportOverHTTP2.java:235)
>  ~[http2-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.server.HttpTransportOverHTTP2.send(HttpTransportOverHTTP2.java:134)
>  ~[http2-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.HttpChannel.sendResponse(HttpChannel.java:790) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpChannel.write(HttpChannel.java:846) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:240) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:216) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpOutput.close(HttpOutput.java:298) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpWriter.close(HttpWriter.java:49) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.ResponseWriter.close(ResponseWriter.java:163) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.Response.closeOutput(Response.java:1038) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.handler.ErrorHandler.generateAcceptableResponse(ErrorHandler.java:178)
>  ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.handler.ErrorHandler.doError(ErrorHandler.java:142) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.handler.ErrorHandler.handle(ErrorHandler.java:78) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
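The fix the summary calls for amounts to pinning the cookie-expiry formatter to an ASCII-safe locale, so the Set-Cookie value never contains characters Jetty's HPACK Huffman table rejects; a hedged sketch (names illustrative, not the actual AuthenticationFilter code):

{code}
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

class CookieDates {
  static String expiresAttribute(long expiryMillis) {
    // Locale.US keeps day/month names ASCII regardless of the default Locale.
    SimpleDateFormat df =
        new SimpleDateFormat("EEE, dd-MMM-yyyy HH:mm:ss zzz", Locale.US);
    df.setTimeZone(TimeZone.getTimeZone("GMT"));
    return "Expires=" + df.format(new Date(expiryMillis));
  }
}
{code}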

[jira] [Updated] (HADOOP-15681) AuthenticationFilter should generate valid date format for Set-Cookie header regardless of default Locale

2018-11-13 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15681:

Issue Type: Bug  (was: Improvement)

> AuthenticationFilter should generate valid date format for Set-Cookie header 
> regardless of default Locale
> -
>
> Key: HADOOP-15681
> URL: https://issues.apache.org/jira/browse/HADOOP-15681
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.2.0
>Reporter: Cao Manh Dat
>Priority: Major
> Attachments: HADOOP-15681.patch
>
>
> Hi guys,
> When I tried to set up Hadoop Kerberos authentication for Solr (HTTP/2), I ran 
> into this exception:
> {code}
> java.lang.IllegalArgumentException: null
>   at org.eclipse.jetty.http2.hpack.Huffman.octetsNeeded(Huffman.java:435) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.hpack.Huffman.octetsNeeded(Huffman.java:409) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.hpack.HpackEncoder.encodeValue(HpackEncoder.java:368) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.hpack.HpackEncoder.encode(HpackEncoder.java:302) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.hpack.HpackEncoder.encode(HpackEncoder.java:179) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.generator.HeadersGenerator.generateHeaders(HeadersGenerator.java:72)
>  ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.generator.HeadersGenerator.generate(HeadersGenerator.java:56)
>  ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.generator.Generator.control(Generator.java:80) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.HTTP2Session$ControlEntry.generate(HTTP2Session.java:1163)
>  ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Flusher.process(HTTP2Flusher.java:184) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.util.IteratingCallback.processing(IteratingCallback.java:241)
>  ~[jetty-util-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.util.IteratingCallback.iterate(IteratingCallback.java:224) 
> ~[jetty-util-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Session.frame(HTTP2Session.java:685) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Session.frames(HTTP2Session.java:657) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Stream.headers(HTTP2Stream.java:107) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.server.HttpTransportOverHTTP2.sendHeadersFrame(HttpTransportOverHTTP2.java:235)
>  ~[http2-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.server.HttpTransportOverHTTP2.send(HttpTransportOverHTTP2.java:134)
>  ~[http2-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.HttpChannel.sendResponse(HttpChannel.java:790) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpChannel.write(HttpChannel.java:846) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:240) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:216) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpOutput.close(HttpOutput.java:298) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpWriter.close(HttpWriter.java:49) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.ResponseWriter.close(ResponseWriter.java:163) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.Response.closeOutput(Response.java:1038) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.handler.ErrorHandler.generateAcceptableResponse(ErrorHandler.java:178)
>  ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.handler.ErrorHandler.doError(ErrorHandler.java:142) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.handler.ErrorHandler.handle(ErrorHandler.java:78) 
> 

[jira] [Updated] (HADOOP-15681) AuthenticationFilter should generate valid date format for Set-Cookie header regardless of default Locale

2018-11-13 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15681:

Affects Version/s: 3.2.0
   Status: Patch Available  (was: Open)

Just hit the submit button to see what Jenkins says.

> AuthenticationFilter should generate valid date format for Set-Cookie header 
> regardless of default Locale
> -
>
> Key: HADOOP-15681
> URL: https://issues.apache.org/jira/browse/HADOOP-15681
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Cao Manh Dat
>Priority: Major
> Attachments: HADOOP-15681.patch
>
>
> Hi guys,
> When I tried to set up Hadoop Kerberos authentication for Solr (HTTP/2), I ran 
> into this exception:
> {code}
> java.lang.IllegalArgumentException: null
>   at org.eclipse.jetty.http2.hpack.Huffman.octetsNeeded(Huffman.java:435) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.hpack.Huffman.octetsNeeded(Huffman.java:409) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.hpack.HpackEncoder.encodeValue(HpackEncoder.java:368) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.hpack.HpackEncoder.encode(HpackEncoder.java:302) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.hpack.HpackEncoder.encode(HpackEncoder.java:179) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.generator.HeadersGenerator.generateHeaders(HeadersGenerator.java:72)
>  ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.generator.HeadersGenerator.generate(HeadersGenerator.java:56)
>  ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.generator.Generator.control(Generator.java:80) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.HTTP2Session$ControlEntry.generate(HTTP2Session.java:1163)
>  ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Flusher.process(HTTP2Flusher.java:184) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.util.IteratingCallback.processing(IteratingCallback.java:241)
>  ~[jetty-util-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.util.IteratingCallback.iterate(IteratingCallback.java:224) 
> ~[jetty-util-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Session.frame(HTTP2Session.java:685) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Session.frames(HTTP2Session.java:657) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Stream.headers(HTTP2Stream.java:107) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.server.HttpTransportOverHTTP2.sendHeadersFrame(HttpTransportOverHTTP2.java:235)
>  ~[http2-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.server.HttpTransportOverHTTP2.send(HttpTransportOverHTTP2.java:134)
>  ~[http2-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.HttpChannel.sendResponse(HttpChannel.java:790) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpChannel.write(HttpChannel.java:846) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:240) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:216) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpOutput.close(HttpOutput.java:298) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpWriter.close(HttpWriter.java:49) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.ResponseWriter.close(ResponseWriter.java:163) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.Response.closeOutput(Response.java:1038) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.handler.ErrorHandler.generateAcceptableResponse(ErrorHandler.java:178)
>  ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.handler.ErrorHandler.doError(ErrorHandler.java:142) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> 

[jira] [Updated] (HADOOP-15681) AuthenticationFilter should generate valid date format for Set-Cookie header regardless of default Locale

2018-11-13 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15681:

Priority: Minor  (was: Major)

> AuthenticationFilter should generate valid date format for Set-Cookie header 
> regardless of default Locale
> -
>
> Key: HADOOP-15681
> URL: https://issues.apache.org/jira/browse/HADOOP-15681
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.2.0
>Reporter: Cao Manh Dat
>Priority: Minor
> Attachments: HADOOP-15681.patch
>
>
> Hi guys,
> When I tried to set up Hadoop Kerberos authentication for Solr (HTTP/2), I ran 
> into this exception:
> {code}
> java.lang.IllegalArgumentException: null
>   at org.eclipse.jetty.http2.hpack.Huffman.octetsNeeded(Huffman.java:435) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.hpack.Huffman.octetsNeeded(Huffman.java:409) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.hpack.HpackEncoder.encodeValue(HpackEncoder.java:368) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.hpack.HpackEncoder.encode(HpackEncoder.java:302) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.hpack.HpackEncoder.encode(HpackEncoder.java:179) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.generator.HeadersGenerator.generateHeaders(HeadersGenerator.java:72)
>  ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.generator.HeadersGenerator.generate(HeadersGenerator.java:56)
>  ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.generator.Generator.control(Generator.java:80) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.HTTP2Session$ControlEntry.generate(HTTP2Session.java:1163)
>  ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Flusher.process(HTTP2Flusher.java:184) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.util.IteratingCallback.processing(IteratingCallback.java:241)
>  ~[jetty-util-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.util.IteratingCallback.iterate(IteratingCallback.java:224) 
> ~[jetty-util-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Session.frame(HTTP2Session.java:685) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Session.frames(HTTP2Session.java:657) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Stream.headers(HTTP2Stream.java:107) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.server.HttpTransportOverHTTP2.sendHeadersFrame(HttpTransportOverHTTP2.java:235)
>  ~[http2-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.server.HttpTransportOverHTTP2.send(HttpTransportOverHTTP2.java:134)
>  ~[http2-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.HttpChannel.sendResponse(HttpChannel.java:790) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpChannel.write(HttpChannel.java:846) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:240) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:216) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpOutput.close(HttpOutput.java:298) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpWriter.close(HttpWriter.java:49) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.ResponseWriter.close(ResponseWriter.java:163) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.Response.closeOutput(Response.java:1038) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.handler.ErrorHandler.generateAcceptableResponse(ErrorHandler.java:178)
>  ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.handler.ErrorHandler.doError(ErrorHandler.java:142) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.handler.ErrorHandler.handle(ErrorHandler.java:78) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]

[jira] [Updated] (HADOOP-15917) AliyunOSS: fix incorrect ReadOps and WriteOps in statistics

2018-11-13 Thread wujinhu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15917:
-
Attachment: HADOOP-15917.002.patch

> AliyunOSS: fix incorrect ReadOps and WriteOps in statistics
> ---
>
> Key: HADOOP-15917
> URL: https://issues.apache.org/jira/browse/HADOOP-15917
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15917.001.patch, HADOOP-15917.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15917) AliyunOSS: fix incorrect ReadOps and WriteOps in statistics

2018-11-13 Thread wujinhu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684985#comment-16684985
 ] 

wujinhu commented on HADOOP-15917:
--

Thanks [~Sammi] for your comments.
 # I have added some comments to make them clearer.
 # Yes, we only care about successful operations for now. I plan to refactor the 
statistics like hadoop-aws.
 # It's OK: AliyunOSSFileSystemStore is initialized with statistics.

> AliyunOSS: fix incorrect ReadOps and WriteOps in statistics
> ---
>
> Key: HADOOP-15917
> URL: https://issues.apache.org/jira/browse/HADOOP-15917
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15917.001.patch, HADOOP-15917.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-9973) wrong dependencies

2018-11-13 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-9973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-9973.

Resolution: Won't Fix

I think I'm going to resolve this as a won't-fix, I'm afraid, on account of the 
age of the JIRA. That said, "wrong dependencies" is probably an eternal JIRA, 
the dark twin of HADOOP-9991.

> wrong dependencies
> --
>
> Key: HADOOP-9973
> URL: https://issues.apache.org/jira/browse/HADOOP-9973
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta, 2.1.1-beta
>Reporter: Nicolas Liochon
>Priority: Minor
>
> See HBASE-9557 for the impact: for some of them, it seems it's pushing these 
> dependencies to the client applications even if they are not used.
> mvn dependency:analyze -pl hadoop-common
> [WARNING] Used undeclared dependencies found:
> [WARNING]com.google.code.findbugs:jsr305:jar:1.3.9:compile
> [WARNING]commons-collections:commons-collections:jar:3.2.1:compile
> [WARNING] Unused declared dependencies found:
> [WARNING]com.sun.jersey:jersey-json:jar:1.9:compile
> [WARNING]tomcat:jasper-compiler:jar:5.5.23:runtime
> [WARNING]tomcat:jasper-runtime:jar:5.5.23:runtime
> [WARNING]javax.servlet.jsp:jsp-api:jar:2.1:runtime
> [WARNING]commons-el:commons-el:jar:1.0:runtime
> [WARNING]org.slf4j:slf4j-log4j12:jar:1.7.5:runtime
> mvn dependency:analyze -pl hadoop-yarn-client
> [WARNING] Used undeclared dependencies found:
> [WARNING]org.mortbay.jetty:jetty-util:jar:6.1.26:provided
> [WARNING]log4j:log4j:jar:1.2.17:compile
> [WARNING]com.google.guava:guava:jar:11.0.2:provided
> [WARNING]commons-lang:commons-lang:jar:2.5:provided
> [WARNING]commons-logging:commons-logging:jar:1.1.1:provided
> [WARNING]commons-cli:commons-cli:jar:1.2:provided
> [WARNING]
> org.apache.hadoop:hadoop-yarn-server-common:jar:2.1.2-SNAPSHOT:test
> [WARNING] Unused declared dependencies found:
> [WARNING]org.slf4j:slf4j-api:jar:1.7.5:compile
> [WARNING]org.slf4j:slf4j-log4j12:jar:1.7.5:compile
> [WARNING]com.google.inject.extensions:guice-servlet:jar:3.0:compile
> [WARNING]io.netty:netty:jar:3.6.2.Final:compile
> [WARNING]com.google.protobuf:protobuf-java:jar:2.5.0:compile
> [WARNING]commons-io:commons-io:jar:2.1:compile
> [WARNING]org.apache.hadoop:hadoop-hdfs:jar:2.1.2-SNAPSHOT:test
> [WARNING]com.google.inject:guice:jar:3.0:compile
> [WARNING]
> com.sun.jersey.jersey-test-framework:jersey-test-framework-core:jar:1.9:test
> [WARNING]
> com.sun.jersey.jersey-test-framework:jersey-test-framework-grizzly2:jar:1.9:compile
> [WARNING]com.sun.jersey:jersey-server:jar:1.9:compile
> [WARNING]com.sun.jersey:jersey-json:jar:1.9:compile
> [WARNING]com.sun.jersey.contribs:jersey-guice:jar:1.9:compile



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15446) WASB: PageBlobInputStream.skip breaks HBASE replication

2018-11-13 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15446:
---
Fix Version/s: 2.9.2

> WASB: PageBlobInputStream.skip breaks HBASE replication
> ---
>
> Key: HADOOP-15446
> URL: https://issues.apache.org/jira/browse/HADOOP-15446
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.2
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2
>
> Attachments: HADOOP-15446-001.patch, HADOOP-15446-002.patch, 
> HADOOP-15446-003.patch, HADOOP-15446-branch-2.001.patch
>
>
> Page Blobs are primarily used by HBASE.  HBASE replication, which apparently 
> has not been used with WASB until recently, performs non-sequential reads on 
> log files using PageBlobInputStream.  There are bugs in this stream 
> implementation which prevent skip and seek from working properly, and 
> eventually the stream state becomes corrupt and unusable.
> I believe this bug affects all releases of WASB/HADOOP.  It appears to be a 
> day-0 bug in PageBlobInputStream.  There were similar bugs opened in the past 
> (HADOOP-15042) but the issue was not properly fixed, and no test coverage was 
> added.
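A regression test for the broken contract could look roughly like this (illustrative only, with {{fs}} and {{path}} assumed from a test fixture; not the actual HADOOP-15446 test):

{code}
// After skip(), getPos() and subsequent reads must stay consistent.
try (FSDataInputStream in = fs.open(path)) {
  long skipped = in.skip(1024);
  assertEquals(1024, skipped);
  assertEquals(1024, in.getPos());
  in.seek(4096);                 // non-sequential access, as in replication
  assertEquals(4096, in.getPos());
}
{code}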



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14556) S3A to support Delegation Tokens

2018-11-13 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14556:

Attachment: HADOOP-14556-019.patch

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556-014.patch, 
> HADOOP-14556-015.patch, HADOOP-14556-016.patch, HADOOP-14556-017.patch, 
> HADOOP-14556-018a.patch, HADOOP-14556-019.patch, HADOOP-14556.oath-002.patch, 
> HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.
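The client-side flow in the first bullet is the standard FileSystem one; a minimal sketch (bucket and renewer names illustrative):

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.token.Token;

public class FetchS3ADelegationToken {
  public static void main(String[] args) throws Exception {
    FileSystem fs =
        FileSystem.get(URI.create("s3a://example-bucket/"), new Configuration());
    // With this feature, S3A would return a token wrapping the short-lived
    // STS session secret/id; the token is then marshalled with the job.
    Token<?> token = fs.getDelegationToken("yarn");
    System.out.println(token);
  }
}
{code}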



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2018-11-13 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16685186#comment-16685186
 ] 

Steve Loughran commented on HADOOP-15870:
-

Reviewing patch 001 on HADOOP-15920: it looks like the new test fails on the 
local tests.

* The assert should list the current (failing) value.
* We're into the corners of the semantics of "what happens at EOF". This is a 
trouble spot, as it shows up the places where HDFS and POSIX diverge. It also 
means we'll have to submit this patch to HDFS too, to get its test coverage. 
(Ideally, run those tests locally first, so that it's just due diligence.)
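On the first point, that is just a matter of using the message-bearing assert form (names assumed from the test under review):

{code}
// The message plus expected/actual means a failure reports the current value.
assertEquals("remainingInFile after seek()",
    expectedRemaining, in.remainingInFile());
{code}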

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
>
> Otherwise `remainingInFile` will not change after `seek`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15731) TestDistributedShell fails on Windows

2018-11-13 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15731:
---
Fix Version/s: (was: 2.9.0)
   2.9.2

> TestDistributedShell fails on Windows
> -
>
> Key: HADOOP-15731
> URL: https://issues.apache.org/jira/browse/HADOOP-15731
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 3.1.2
>
> Attachments: HADOOP-15731.v1.patch, HADOOP-15731.v2.patch, 
> image-2018-09-07-13-39-04-523.png
>
>
> [ERROR] 
> testDSShellWithMultipleArgs(org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell)
>  Time elapsed: 25.68 s <<< FAILURE!
> java.lang.AssertionError
>  at org.junit.Assert.fail(Assert.java:86)
>  at org.junit.Assert.assertTrue(Assert.java:41)
>  at org.junit.Assert.assertTrue(Assert.java:52)
>  at 
> org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell.verifyContainerLog(TestDistributedShell.java:1296)
> [ERROR] 
> testDSShellWithoutDomainV2CustomizedFlow(org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell)
>  Time elapsed: 90.021 s <<< ERROR!
> java.lang.Exception: test timed out after 9 milliseconds
>  at java.lang.Thread.sleep(Native Method)
>  at 
> org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell.testDSShell(TestDistributedShell.java:398)
>  at 
> org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell.testDSShellWithoutDomainV2CustomizedFlow(TestDistributedShell.java:313)
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14556) S3A to support Delegation Tokens

2018-11-13 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14556:

Status: Patch Available  (was: Open)

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556-014.patch, 
> HADOOP-14556-015.patch, HADOOP-14556-016.patch, HADOOP-14556-017.patch, 
> HADOOP-14556-018a.patch, HADOOP-14556-019.patch, HADOOP-14556.oath-002.patch, 
> HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.
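
As a usage illustration of the first bullet (a hedged sketch: the bucket name 
and renewer are placeholders, and the token kind depends on the S3A binding in 
use):

{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.token.Token;

// Sketch: request an S3A delegation token through the generic FileSystem API.
public class S3ATokenFetch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf);
    Token<?> token = fs.getDelegationToken("yarn");  // renewer is a placeholder
    System.out.println("issued token: " + token);
  }
}
{code}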






[jira] [Updated] (HADOOP-14651) Update okhttp version to 2.7.5

2018-11-13 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14651:
---
Fix Version/s: 2.10.0

> Update okhttp version to 2.7.5
> --
>
> Key: HADOOP-14651
> URL: https://issues.apache.org/jira/browse/HADOOP-14651
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HADOOP-14651-branch-2.0.004.patch, 
> HADOOP-14651-branch-2.0.004.patch, HADOOP-14651-branch-3.0.004.patch, 
> HADOOP-14651-branch-3.0.004.patch, HADOOP-14651.001.patch, 
> HADOOP-14651.002.patch, HADOOP-14651.003.patch, HADOOP-14651.004.patch
>
>
> The current artifact is:
> com.squareup.okhttp:okhttp:2.4.0
> That version could either be bumped to 2.7.5 (the latest of that line), or 
> use the latest artifact:
> com.squareup.okhttp3:okhttp:3.8.1






[jira] [Commented] (HADOOP-15924) Hadoop aws does not use shaded jars

2018-11-13 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16685144#comment-16685144
 ] 

Steve Loughran commented on HADOOP-15924:
-

PS: it's moot, but the tests failed due to the repackaging, as 
{{MiniDFSCluster}} won't start.

{code}
org/apache/hadoop/hdfs/server/namenode/FsImageProto$ErasureCodingSection$Builder.addPoliciesBuilder(I)Lorg/apache/hadoop/hdfs/protocol/proto/HdfsProtos$ErasureCodingPolicyProto$Builder;
 @8: invokevirtual
  Reason:
Type 
'org/apache/hadoop/hdfs/protocol/proto/HdfsProtos$ErasureCodingPolicyProto' 
(current frame, stack[2]) is not assignable to 
'com/google/protobuf/GeneratedMessage'
  Current Frame:
bci: @8
flags: { }
locals: { 
'org/apache/hadoop/hdfs/server/namenode/FsImageProto$ErasureCodingSection$Builder',
 integer }
stack: { 'com/google/protobuf/RepeatedFieldBuilder', integer, 
'org/apache/hadoop/hdfs/protocol/proto/HdfsProtos$ErasureCodingPolicyProto' }
  Bytecode:
0x000: 2ab7 000e 1bb8 004a b600 4cc0 0046 b0  
{code}

> Hadoop aws does not use shaded jars
> ---
>
> Key: HADOOP-15924
> URL: https://issues.apache.org/jira/browse/HADOOP-15924
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-15924.00.patch
>
>
> Issue is hadoop-aws cannot be used with shaded jars.
> The recommended client side jars for hadoop 3 are client-api/runtime shaded 
> jars.
> They shade guava etc. So something like SemaphoredDelegatingExecutor refers 
> to shaded guava classes.
> hadoop-aws has S3AFileSystem implementation which refers to 
> SemaphoredDelegatingExecutor with unshaded guava ListeningService in the 
> constructor. When S3AFileSystem is created then it uses the hadoop-api jar 
> and finds SemaphoredDelegatingExecutor but not the right constructor because 
> in client-api jar SemaphoredDelegatingExecutor constructor has the shaded 
> guava ListenerService.
> So essentially none of the aws/azure/adl hadoop FS implementations will work 
> with the shaded Hadoop client runtime jars.
>  
> This Jira is created to track the work required to make hadoop-aws work with 
> hadoop shaded client jars.
>  
> The solution for this can be, hadoop-aws depends on hadoop shaded jars. In 
> this way, we shall not see the issue. Currently, hadoop-aws depends on 
> aws-sdk-bundle and all other remaining jars are provided dependencies.
>  
> cc [~steve_l]
>  
>  






[jira] [Commented] (HADOOP-15917) AliyunOSS: fix incorrect ReadOps and WriteOps in statistics

2018-11-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16685052#comment-16685052
 ] 

Hadoop QA commented on HADOOP-15917:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15917 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947960/HADOOP-15917.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 97dd63e0c4cc 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e7b63ba |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15511/testReport/ |
| Max. process+thread count | 305 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15511/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> AliyunOSS: fix incorrect ReadOps and WriteOps in statistics
> ---
>
> Key: HADOOP-15917
>

[jira] [Commented] (HADOOP-15919) AliyunOSS: Enable Yarn to use OSS

2018-11-13 Thread wujinhu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16685136#comment-16685136
 ] 

wujinhu commented on HADOOP-15919:
--

Uploaded patch 001; I will fix these issues in the next patch.

> AliyunOSS: Enable Yarn to use OSS
> -
>
> Key: HADOOP-15919
> URL: https://issues.apache.org/jira/browse/HADOOP-15919
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15919.001.patch
>
>
> Uses DelegateToFileSystem to expose AliyunOSSFileSystem as an 
> AbstractFileSystem






[jira] [Updated] (HADOOP-15919) AliyunOSS: Enable Yarn to use OSS

2018-11-13 Thread wujinhu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15919:
-
Attachment: HADOOP-15919.001.patch
Status: Patch Available  (was: Open)

> AliyunOSS: Enable Yarn to use OSS
> -
>
> Key: HADOOP-15919
> URL: https://issues.apache.org/jira/browse/HADOOP-15919
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Affects Versions: 3.0.3, 3.1.1, 2.9.1, 2.10.0, 3.2.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15919.001.patch
>
>
> Uses DelegateToFileSystem to expose AliyunOSSFileSystem as an 
> AbstractFileSystem
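
For reference, the usual shape of such a binding; this is a sketch only, the 
class name is invented, and the constructor arguments follow the 
DelegateToFileSystem pattern used by other stores:

{code:java}
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.DelegateToFileSystem;
import org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem;

// Sketch: expose the FileSystem client as an AbstractFileSystem by delegation,
// so code written against the FileContext/AbstractFileSystem API can use OSS.
public class AliyunOSSDelegate extends DelegateToFileSystem {
  protected AliyunOSSDelegate(URI theUri, Configuration conf)
      throws IOException, URISyntaxException {
    // "oss" is the URI scheme; the final 'false' assumes the authority is
    // optional, mirroring the S3A binding (an assumption, not the patch).
    super(theUri, new AliyunOSSFileSystem(), conf, "oss", false);
  }
}
{code}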






[jira] [Updated] (HADOOP-15497) TestTrash should use proper test path to avoid failing on Windows

2018-11-13 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15497:
---
Fix Version/s: 2.9.2
   2.10.0

> TestTrash should use proper test path to avoid failing on Windows
> -
>
> Key: HADOOP-15497
> URL: https://issues.apache.org/jira/browse/HADOOP-15497
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Fix For: 3.1.0, 2.10.0, 3.2.0, 2.9.2, 3.0.3
>
> Attachments: HADOOP-15497.000.patch, HDFS-13625.000.patch
>
>
> The following fail on Windows due to improper path:
> * 
> [TestHDFSTrash#testNonDefaultFS|https://builds.apache.org/job/hadoop-trunk-win/478/testReport/org.apache.hadoop.hdfs/TestHDFSTrash/testNonDefaultFS/]
> * 
> [TestTrash|https://builds.apache.org/job/hadoop-trunk-win/478/testReport/org.apache.hadoop.fs/TestTrash/]
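
A hedged illustration of the fix pattern (not the committed patch): derive the 
trash test root from the build's test directory instead of a hard-coded POSIX 
path, so it stays valid on Windows:

{code:java}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.test.GenericTestUtils;

// Sketch: a platform-safe test root; "testTrash" is an illustrative subdir.
class TrashTestPaths {
  static Path testRoot() {
    // GenericTestUtils resolves under the build's test data directory.
    return new Path(GenericTestUtils.getTestDir("testTrash").toURI());
  }
}
{code}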






[jira] [Commented] (HADOOP-15681) AuthenticationFilter should generate valid date format for Set-Cookie header regardless of default Locale

2018-11-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16685080#comment-16685080
 ] 

Hadoop QA commented on HADOOP-15681:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
13s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15681 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936239/HADOOP-15681.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5e7cf1396bb7 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e7b63ba |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15510/testReport/ |
| Max. process+thread count | 339 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-auth U: 
hadoop-common-project/hadoop-auth |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15510/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> AuthenticationFilter should generate 

[jira] [Commented] (HADOOP-15919) AliyunOSS: Enable Yarn to use OSS

2018-11-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16685132#comment-16685132
 ] 

Hadoop QA commented on HADOOP-15919:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-tools/hadoop-aliyun: The patch generated 
17 new + 0 unchanged - 0 fixed = 17 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15919 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947968/HADOOP-15919.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 34b3134e67df 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e7b63ba |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15512/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aliyun.txt

[jira] [Commented] (HADOOP-14556) S3A to support Delegation Tokens

2018-11-13 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16685177#comment-16685177
 ] 

Steve Loughran commented on HADOOP-14556:
-

Patch 019: logging, resilience and debugging, mostly.

* All S3A tokens have a (string) UUID; this is the sole field used for 
equality, and it is printed, which makes it easy to verify propagation (a 
minimal model is sketched below).
* Reverted the constructor's use of Optional; instead it is tagged as 
@Nullable, and this is made clear everywhere, with tests added where 
appropriate to catch regressions.
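
A minimal model of that equality scheme (illustrative names, not the patch's 
classes):

{code:java}
import java.util.UUID;

// Sketch: a token identifier whose string UUID is the sole basis of equality
// and always appears in toString(), so propagation can be traced in logs.
final class TokenIdModel {
  private final String uuid = UUID.randomUUID().toString();

  @Override
  public boolean equals(Object o) {
    return o instanceof TokenIdModel && uuid.equals(((TokenIdModel) o).uuid);
  }

  @Override
  public int hashCode() {
    return uuid.hashCode();
  }

  @Override
  public String toString() {
    return "TokenIdModel{" + uuid + "}";
  }
}
{code}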

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556-014.patch, 
> HADOOP-14556-015.patch, HADOOP-14556-016.patch, HADOOP-14556-017.patch, 
> HADOOP-14556-018a.patch, HADOOP-14556-019.patch, HADOOP-14556.oath-002.patch, 
> HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.






[jira] [Assigned] (HADOOP-15917) AliyunOSS: fix incorrect ReadOps and WriteOps in statistics

2018-11-13 Thread Sammi Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen reassigned HADOOP-15917:
---

Assignee: wujinhu  (was: Sammi Chen)

> AliyunOSS: fix incorrect ReadOps and WriteOps in statistics
> ---
>
> Key: HADOOP-15917
> URL: https://issues.apache.org/jira/browse/HADOOP-15917
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15917.001.patch
>
>







[jira] [Assigned] (HADOOP-15917) AliyunOSS: fix incorrect ReadOps and WriteOps in statistics

2018-11-13 Thread Sammi Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen reassigned HADOOP-15917:
---

Assignee: Sammi Chen  (was: wujinhu)

> AliyunOSS: fix incorrect ReadOps and WriteOps in statistics
> ---
>
> Key: HADOOP-15917
> URL: https://issues.apache.org/jira/browse/HADOOP-15917
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3
>Reporter: wujinhu
>Assignee: Sammi Chen
>Priority: Major
> Attachments: HADOOP-15917.001.patch
>
>








[jira] [Assigned] (HADOOP-15917) AliyunOSS: fix incorrect ReadOps and WriteOps in statistics

2018-11-13 Thread Sammi Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen reassigned HADOOP-15917:
---

Assignee: wujinhu  (was: Sammi Chen)

> AliyunOSS: fix incorrect ReadOps and WriteOps in statistics
> ---
>
> Key: HADOOP-15917
> URL: https://issues.apache.org/jira/browse/HADOOP-15917
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15917.001.patch
>
>







[jira] [Updated] (HADOOP-15924) Hadoop aws does not use shaded jars

2018-11-13 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15924:

Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-15620

> Hadoop aws does not use shaded jars
> ---
>
> Key: HADOOP-15924
> URL: https://issues.apache.org/jira/browse/HADOOP-15924
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-15924.00.patch
>
>
> Issue is hadoop-aws cannot be used with shaded jars.
> The recommended client side jars for hadoop 3 are client-api/runtime shaded 
> jars.
> They shade guava etc. So something like SemaphoredDelegatingExecutor refers 
> to shaded guava classes.
> hadoop-aws has S3AFileSystem implementation which refers to 
> SemaphoredDelegatingExecutor with unshaded guava ListeningService in the 
> constructor. When S3AFileSystem is created then it uses the hadoop-api jar 
> and finds SemaphoredDelegatingExecutor but not the right constructor because 
> in client-api jar SemaphoredDelegatingExecutor constructor has the shaded 
> guava ListenerService.
> So essentially none of the aws/azure/adl hadoop FS implementations will work 
> with the shaded Hadoop client runtime jars.
>  
> This Jira is created to track the work required to make hadoop-aws work with 
> hadoop shaded client jars.
>  
> The solution for this can be, hadoop-aws depends on hadoop shaded jars. In 
> this way, we shall not see the issue. Currently, hadoop-aws depends on 
> aws-sdk-bundle and all other remaining jars are provided dependencies.
>  
> cc [~steve_l]
>  
>  






[jira] [Commented] (HADOOP-15869) BlockDecompressorStream#decompress should not return -1 in case of IOException.

2018-11-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16685429#comment-16685429
 ] 

Hudson commented on HADOOP-15869:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15418 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15418/])
HADOOP-15869. BlockDecompressorStream#decompress should not return -1 in 
(surendralilhore: rev 75291e6d53c13debf45493a870a898b63779914b)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/BlockDecompressorStream.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestBlockDecompressorStream.java


> BlockDecompressorStream#decompress should not return -1 in case of 
> IOException.
> ---
>
> Key: HADOOP-15869
> URL: https://issues.apache.org/jira/browse/HADOOP-15869
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HADOOP-15869.01.patch
>
>
> BlockDecompressorStream#decompress() returns -1 in the case of a 
> BlockMissingException. An application using BlockDecompressorStream may 
> think the file is empty and proceed further, but the read operation should 
> actually fail.
> {code:java}
> // Get original data size
> try {
>    originalBlockSize = rawReadInt();
> } catch (IOException ioe) {
>    return -1;
> }{code}
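
A sketch of the direction implied here, assuming only a genuine end-of-stream 
should map to -1 while other IOExceptions (such as BlockMissingException) 
propagate; rawReadInt() is modeled here by DataInputStream.readInt():

{code:java}
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

// Sketch: map only a clean EOF to -1; let every other IOException reach the
// caller instead of being silently converted into "empty file".
final class BlockSizeReader {
  static int readOriginalBlockSize(DataInputStream in) throws IOException {
    try {
      return in.readInt();
    } catch (EOFException eof) {
      return -1;  // genuine end of stream: nothing left to decompress
    }
  }
}
{code}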






[jira] [Updated] (HADOOP-15869) BlockDecompressorStream#decompress should not return -1 in case of IOException.

2018-11-13 Thread Surendra Singh Lilhore (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HADOOP-15869:

   Resolution: Fixed
Fix Version/s: 3.2.1
   3.1.2
   3.3.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-3.2, branch-3.1

> BlockDecompressorStream#decompress should not return -1 in case of 
> IOException.
> ---
>
> Key: HADOOP-15869
> URL: https://issues.apache.org/jira/browse/HADOOP-15869
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Fix For: 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15869.01.patch
>
>
> BlockDecompressorStream#decompress() returns -1 in the case of a 
> BlockMissingException. An application using BlockDecompressorStream may 
> think the file is empty and proceed further, but the read operation should 
> actually fail.
> {code:java}
> // Get original data size
> try {
>    originalBlockSize = rawReadInt();
> } catch (IOException ioe) {
>    return -1;
> }{code}






[jira] [Comment Edited] (HADOOP-15928) Excessive error logging when using HDFS in S3 environment

2018-11-13 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16685777#comment-16685777
 ] 

Wei-Chiu Chuang edited comment on HADOOP-15928 at 11/13/18 9:50 PM:


bq. I don't understand what's going on here. Is it that impala is trying to use 
the hdfs native binding to talk to s3? And that's logging things which aren't 
relevant?

The summary was a little confusing. Yes, Impala uses libhdfs as the native 
wrapper for hdfs client code to access s3. That UnsupportedOperationException 
message is printed to stderr every time it tries to read into a byte buffer, 
and the stderr log can grow to millions of UnsupportedOperationException 
messages.


was (Author: jojochuang):
bq. I don't understand what's going on here. Is it that impala is trying to use 
the hdfs native binding to talk to s3? And that's logging things which aren't 
relevant?

The summary was a little confusing. Yes, Impala uses libhdfs as the native 
wrapper for hdfs client code. That UnsupportedOperationException message is 
printed to stderr every time it tries to read into a byte buffer, and the 
stderr log can grow to millions of UnsupportedOperationException messages.

> Excessive error logging when using HDFS in S3 environment
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HADOOP-15928.001.patch
>
>
> Problem:
> 
> There is excessive error logging when Impala uses HDFS in an S3 environment. 
> This issue is caused by defect HADOOP-14603 "S3A input stream to support 
> ByteBufferReadable".
> Excessive error logging results in defect IMPALA-5256: "ERROR log files can 
> get very large". This causes the error log files to be huge.
> The following message is printed repeatedly in the error log:
> UnsupportedOperationException: Byte-buffer read unsupported by input stream
> java.lang.UnsupportedOperationException: Byte-buffer read unsupported by 
> input stream
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause
> 
> After investigating the issue, it appears that the above exception is printed 
> because opening a file via hdfsOpenFileImpl() calls readDirect(), which hits 
> this exception.
> Fix:
> 
> Since the hdfs client is not initiating the byte-buffered read, which happens 
> in an implicit manner, we should not generate the error log when a file is 
> opened.






[jira] [Updated] (HADOOP-15876) Use keySet().removeAll() to remove multiple keys from Map in AzureBlobFileSystemStore

2018-11-13 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15876:

   Resolution: Fixed
Fix Version/s: 3.2.1
   Status: Resolved  (was: Patch Available)

+1, committed to branch-3.2+

> Use keySet().removeAll() to remove multiple keys from Map in 
> AzureBlobFileSystemStore
> -
>
> Key: HADOOP-15876
> URL: https://issues.apache.org/jira/browse/HADOOP-15876
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Ted Yu
>Assignee: Da Zhou
>Priority: Minor
> Fix For: 3.2.1
>
> Attachments: HADOOP-15876-001.patch
>
>
> Looking at 
> hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
>  , {{removeDefaultAcl}} in particular:
> {code}
> for (Map.Entry defaultAclEntry : 
> defaultAclEntries.entrySet()) {
>   aclEntries.remove(defaultAclEntry.getKey());
> }
> {code}
> The above operation can be written this way:
> {code}
> aclEntries.keySet().removeAll(defaultAclEntries.keySet());
> {code}
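
A quick standalone demonstration of the equivalence (not 
AzureBlobFileSystemStore code; the map contents are invented):

{code:java}
import java.util.HashMap;
import java.util.Map;

// Demonstrates bulk key removal through the key-set view: every key present
// in defaultAclEntries is dropped from aclEntries in a single call.
public class RemoveAllDemo {
  public static void main(String[] args) {
    Map<String, String> aclEntries = new HashMap<>();
    aclEntries.put("default:user:alice", "rwx");
    aclEntries.put("user:bob", "r-x");

    Map<String, String> defaultAclEntries = new HashMap<>();
    defaultAclEntries.put("default:user:alice", "rwx");

    aclEntries.keySet().removeAll(defaultAclEntries.keySet());
    System.out.println(aclEntries);  // prints {user:bob=r-x}
  }
}
{code}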






[jira] [Commented] (HADOOP-15928) Excessive error logging when using HDFS in S3 environment

2018-11-13 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16685777#comment-16685777
 ] 

Wei-Chiu Chuang commented on HADOOP-15928:
--

bq. I don't understand what's going on here. Is it that impala is trying to use 
the hdfs native binding to talk to s3? And that's logging things which aren't 
relevant?

The summary was a little confusing. Yes, Impala uses libhdfs as the native 
wrapper for hdfs client code. That UnsupportedOperationException message is 
printed to stderr every time it tries to read into a byte buffer, and the 
stderr log can grow to millions of UnsupportedOperationException messages.

> Excessive error logging when using HDFS in S3 environment
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HADOOP-15928.001.patch
>
>
> Problem:
> 
> There is excessive error logging when Impala uses HDFS in an S3 environment. 
> This issue is caused by defect HADOOP-14603 "S3A input stream to support 
> ByteBufferReadable".
> Excessive error logging results in defect IMPALA-5256: "ERROR log files can 
> get very large". This causes the error log files to be huge.
> The following message is printed repeatedly in the error log:
> UnsupportedOperationException: Byte-buffer read unsupported by input stream
> java.lang.UnsupportedOperationException: Byte-buffer read unsupported by 
> input stream
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause
> 
> After investigating the issue, it appears that the above exception is printed 
> because opening a file via hdfsOpenFileImpl() calls readDirect(), which hits 
> this exception.
> Fix:
> 
> Since the hdfs client is not initiating the byte-buffered read, which happens 
> in an implicit manner, we should not generate the error log when a file is 
> opened.






[jira] [Commented] (HADOOP-15872) ABFS: Update to target 2018-11-09 REST version for ADLS Gen 2

2018-11-13 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16685834#comment-16685834
 ] 

Steve Loughran commented on HADOOP-15872:
-

+0.
If you run it now, everything fails with an error about headers: "The value 
for one of the HTTP headers is not in the correct format."

{code}

[ERROR] 
testEnsureAclOperationWorksForRoot(org.apache.hadoop.fs.azurebfs.ITestAzureBlobFilesystemAcl)
  Time elapsed: 0.032 s  <<< ERROR!
Operation failed: "The value for one of the HTTP headers is not in the correct 
format.", 400, HEAD, 
https://abfsamtest2.dfs.core.windows.net/abfs-testcontainer-dd8b2de2-478b-4f74-8d9b-206a5c888fb4?resource=filesystem=90
at 
org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:134)
at 
org.apache.hadoop.fs.azurebfs.services.AbfsClient.getFilesystemProperties(AbfsClient.java:198)
at 
org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.getFilesystemProperties(AzureBlobFileSystemStore.java:231)
at 
org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.fileSystemExists(AzureBlobFileSystem.java:806)
at 
org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:114)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3302)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3351)
at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3325)
at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:532)
at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:544)
at 
org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest.createFileSystem(AbstractAbfsIntegrationTest.java:215)
at 
org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest.setup(AbstractAbfsIntegrationTest.java:134)
at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:745)

[INFO] 

{code}

> ABFS: Update to target 2018-11-09 REST version for ADLS Gen 2
> -
>
> Key: HADOOP-15872
> URL: https://issues.apache.org/jira/browse/HADOOP-15872
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Thomas Marquardt
>Assignee: junhua gu
>Priority: Major
> Attachments: HADOOP-15872-001.patch, HADOOP-15872-002.patch, 
> HADOOP-15872-003.patch
>
>
> This update to the latest REST version (2018-11-09) will make the following 
> changes to the ABFS driver:
> 1) The ABFS implementation of getFileStatus currently requires read 
> permission.  According to HDFS permissions guide, it should only require 
> execute on the parent folders (traversal access).  A new REST API has been 
> introduced in REST version "2018-11-09" of ADLS Gen 2 to fix this problem.
> 2) The new "2018-11-09" REST version introduces support to i) automatically 
> translate UPNs to OIDs when setting the owner, owning group, or ACL and ii) 
> optionally translate OIDs to UPNs in the responses when getting the owner, 
> owning group, or ACL.  Configuration will be introduced to optionally 
> translate OIDs to UPNs in the responses.  Since translation has a performance 
> impact, the default will be to perform no translation and return the OIDs.






[jira] [Commented] (HADOOP-15876) Use keySet().removeAll() to remove multiple keys from Map in AzureBlobFileSystemStore

2018-11-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16685808#comment-16685808
 ] 

Hudson commented on HADOOP-15876:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15423 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15423/])
HADOOP-15876. Use keySet().removeAll() to remove multiple keys from Map 
(stevel: rev a13be203b7877ba56ef63aac4a2e65d4e1a4adbc)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java


> Use keySet().removeAll() to remove multiple keys from Map in 
> AzureBlobFileSystemStore
> -
>
> Key: HADOOP-15876
> URL: https://issues.apache.org/jira/browse/HADOOP-15876
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Ted Yu
>Assignee: Da Zhou
>Priority: Minor
> Fix For: 3.2.1
>
> Attachments: HADOOP-15876-001.patch
>
>
> Looking at 
> hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
>  , {{removeDefaultAcl}} in particular:
> {code}
> for (Map.Entry defaultAclEntry : 
> defaultAclEntries.entrySet()) {
>   aclEntries.remove(defaultAclEntry.getKey());
> }
> {code}
> The above operation can be written this way:
> {code}
> aclEntries.keySet().removeAll(defaultAclEntries.keySet());
> {code}






[jira] [Commented] (HADOOP-15928) Excessive error logging when using HDFS in S3 environment

2018-11-13 Thread Pranay Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16685787#comment-16685787
 ] 

Pranay Singh commented on HADOOP-15928:
---

[~ste...@apache.org] this problem is not particular to Impala; every time a 
file is opened (via hdfsOpenFileImpl()) in an S3 environment, an 
error/exception (below) is logged to STDERR, which is unwarranted. This error 
is generated because hdfsOpenFileImpl() calls readDirect() to do a buffered 
read, which results in this exception.

Message dumped to STDERR
- 
UnsupportedOperationException: Byte-buffer read unsupported by input stream
java.lang.UnsupportedOperationException: Byte-buffer read unsupported by input 
stream
at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)

Writing a test case will require access to S3, which will require AWS 
credentials; I have done a manual test to verify the fix with my AWS keys 
(which cannot be shared).
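
To make the proposed fix concrete, a hedged Java sketch of the probing idea 
(illustrative only; the actual change is on the libhdfs/native side):

{code:java}
import java.io.InputStream;

import org.apache.hadoop.fs.ByteBufferReadable;
import org.apache.hadoop.fs.FSDataInputStream;

// Sketch: decide up front whether byte-buffer reads are supported, instead of
// triggering (and logging) UnsupportedOperationException on every file open.
final class ByteBufferReadProbe {
  static boolean supportsByteBufferRead(FSDataInputStream in) {
    InputStream wrapped = in.getWrappedStream();
    return wrapped instanceof ByteBufferReadable;
  }
}
{code}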

> Excessive error logging when using HDFS in S3 environment
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HADOOP-15928.001.patch
>
>
> Problem:
> 
> There is excessive error logging when Impala uses HDFS in an S3 environment. 
> This issue is caused by defect HADOOP-14603 "S3A input stream to support 
> ByteBufferReadable".
> Excessive error logging results in defect IMPALA-5256: "ERROR log files can 
> get very large". This causes the error log files to be huge.
> The following message is printed repeatedly in the error log:
> UnsupportedOperationException: Byte-buffer read unsupported by input stream
> java.lang.UnsupportedOperationException: Byte-buffer read unsupported by 
> input stream
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause
> 
> After investigating the issue, it appears that the above exception is printed 
> because opening a file via hdfsOpenFileImpl() calls readDirect(), which hits 
> this exception.
> Fix:
> 
> Since the hdfs client is not initiating the byte-buffered read, which happens 
> in an implicit manner, we should not generate the error log when a file is 
> opened.






[jira] [Commented] (HADOOP-15928) Excessive error logging when using HDFS in S3 environment

2018-11-13 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16685751#comment-16685751
 ] 

Steve Loughran commented on HADOOP-15928:
-

bq. because of defect HADOOP-14603 "S3A input stream to support 
ByteBufferReadable"

I don't view that as a defect, just a feature which hasn't been implemented. 
As usual, patches and tests are welcome.

bq. when Impala uses HDFS in S3 environment,

I don't understand what's going on here. Is it that impala is trying to use the 
hdfs native binding to talk to s3? And that's logging things which aren't 
relevant?

This will be a fun one for you to come up with a test for. And yes, it 
probably will need a test; it's complex enough.

> Excessive error logging when using HDFS in S3 environment
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HADOOP-15928.001.patch
>
>
> Problem:
> 
> There is excessive error logging when Impala uses HDFS in an S3 environment. 
> This issue is caused by defect HADOOP-14603 "S3A input stream to support 
> ByteBufferReadable".
> Excessive error logging results in defect IMPALA-5256: "ERROR log files can 
> get very large". This causes the error log files to be huge.
> The following message is printed repeatedly in the error log:
> UnsupportedOperationException: Byte-buffer read unsupported by input stream
> java.lang.UnsupportedOperationException: Byte-buffer read unsupported by 
> input stream
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause
> 
> After investigating the issue, it appears that the above exception is printed 
> because opening a file via hdfsOpenFileImpl() calls readDirect(), which hits 
> this exception.
> Fix:
> 
> Since the hdfs client is not initiating the byte-buffered read, which happens 
> in an implicit manner, we should not generate the error log when a file is 
> opened.






[jira] [Updated] (HADOOP-15928) Excessive error logging when using HDFS in S3 environment

2018-11-13 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HADOOP-15928:
--
Description: 
Problem:

There is excessive error logging when a file is opened by libhdfs 
(DFSClient/HDFS) in an S3 environment. This issue arises because a buffered 
read is not supported in the S3 environment; see HADOOP-14603 "S3A input 
stream to support ByteBufferReadable".

Excessive error logging results in defect IMPALA-5256: "ERROR log files can get 
very large". This causes the error log files to be huge.

The following message is printed repeatedly in the error log:

UnsupportedOperationException: Byte-buffer read unsupported by input stream
java.lang.UnsupportedOperationException: Byte-buffer read unsupported by input 
stream
at 
org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)

Root cause

After investigating the issue, it appears that the above exception is printed 
because opening a file via hdfsOpenFileImpl() calls readDirect(), which hits 
this exception.

Fix:

Since the hdfs client is not initiating the byte-buffered read, which happens 
in an implicit manner, we should not generate the error log when a file is 
opened.




  was:
Problem:

There is excessive error logging when Impala uses HDFS in an S3 environment. 
This issue is caused by defect HADOOP-14603 "S3A input stream to support 
ByteBufferReadable".

Excessive error logging results in defect IMPALA-5256: "ERROR log files can get 
very large". This causes the error log files to be huge.

The following message is printed repeatedly in the error log:

UnsupportedOperationException: Byte-buffer read unsupported by input stream
java.lang.UnsupportedOperationException: Byte-buffer read unsupported by input 
stream
at 
org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)

Root cause

After investigating the issue, it appears that the above exception is printed 
because opening a file via hdfsOpenFileImpl() calls readDirect(), which hits 
this exception.

Fix:

Since the hdfs client is not initiating the byte-buffered read, which happens 
in an implicit manner, we should not generate the error log when a file is 
opened.





> Excessive error logging when using HDFS in S3 environment
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HADOOP-15928.001.patch
>
>
> Problem:
> 
> There is excessive error logging when a file is opened by libhdfs 
> (DFSClient/HDFS) in an S3 environment. This issue arises because a buffered 
> read is not supported in the S3 environment; see HADOOP-14603 "S3A input 
> stream to support ByteBufferReadable".
> Excessive error logging results in defect IMPALA-5256: "ERROR log files can 
> get very large". This causes the error log files to be huge.
> The following message is printed repeatedly in the error log:
> UnsupportedOperationException: Byte-buffer read unsupported by input stream
> java.lang.UnsupportedOperationException: Byte-buffer read unsupported by 
> input stream
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause
> 
> After investigating the issue, it appears that the above exception is printed 
> because opening a file via hdfsOpenFileImpl() calls readDirect(), which hits 
> this exception.
> Fix:
> 
> Since the hdfs client is not initiating the byte-buffered read, which happens 
> in an implicit manner, we should not generate the error log when a file is 
> opened.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15928) Excessive error logging when using HDFS in S3 environment

2018-11-13 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HADOOP-15928:
--
Description: 
Problem:

There is excessive error logging when a file is opened by libhdfs 
(DFSClient/HDFS) in an S3 environment. The issue arises because buffered reads 
are not supported there; see HADOOP-14603 "S3A input stream to support 
ByteBufferReadable".

The following message is printed repeatedly in the error log / to STDERR:
--
UnsupportedOperationException: Byte-buffer read unsupported by input 
streamjava.lang.UnsupportedOperationException: Byte-buffer read unsupported by 
input stream
at 
org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)

Root cause:

Opening a file via hdfsOpenFileImpl() calls readDirect(), which hits this 
exception and logs it.

Fix:

Since the HDFS client does not initiate the byte-buffer read explicitly (it 
happens implicitly), we should not generate the error log when opening a file.




  was:
Problem:

There is excessive error logging when a file is opened by libhdfs 
(DFSClient/HDFS) in an S3 environment. The issue arises because buffered reads 
are not supported there; see HADOOP-14603 "S3A input stream to support 
ByteBufferReadable".

The excessive error logging results in defect IMPALA-5256: "ERROR log files can 
get very large".

The following message is printed repeatedly in the error log:

UnsupportedOperationException: Byte-buffer read unsupported by input 
streamjava.lang.UnsupportedOperationException: Byte-buffer read unsupported by 
input stream
at 
org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)

Root cause:

Opening a file via hdfsOpenFileImpl() calls readDirect(), which hits this 
exception and logs it.

Fix:

Since the HDFS client does not initiate the byte-buffer read explicitly (it 
happens implicitly), we should not generate the error log when opening a file.





> Excessive error logging when using HDFS in S3 environment
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HADOOP-15928.001.patch
>
>
> Problem:
> 
> There is excessive error logging when a file is opened by libhdfs 
> (DFSClient/HDFS) in an S3 environment. The issue arises because buffered 
> reads are not supported there; see HADOOP-14603 "S3A input stream to support 
> ByteBufferReadable".
> The following message is printed repeatedly in the error log / to STDERR:
> --
> UnsupportedOperationException: Byte-buffer read unsupported by input 
> streamjava.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause:
> 
> Opening a file via hdfsOpenFileImpl() calls readDirect(), which hits this 
> exception and logs it.
> Fix:
> 
> Since the HDFS client does not initiate the byte-buffer read explicitly (it 
> happens implicitly), we should not generate the error log when opening a 
> file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15872) ABFS: Update to target 2018-11-09 REST version for ADLS Gen 2

2018-11-13 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16685838#comment-16685838
 ] 

Steve Loughran commented on HADOOP-15872:
-

And, this is where it gets really interesting: hadoop 3.2 & trunk without this 
patch get through all the tests, but getFileStatus() fails on the CLI (hadoop 
fs -ls; cloudstore storediag).

As far as I can tell, this is a change in the behaviour of the ADLS endpoint.

{code:java}
2018-11-13 22:16:15,835 [main] INFO  diag.StoreDiag 
(DurationInfo.java:(53)) - Starting: Creating filesystem
2018-11-13 22:16:15,936 [main] DEBUG azurebfs.AzureBlobFileSystem 
(AzureBlobFileSystem.java:initialize(103)) - Initializing AzureBlobFileSystem 
for abfs://stevel-test...@account.blob.core.windows.net/
2018-11-13 22:16:16,161 [main] DEBUG services.AbfsClientThrottlingIntercept 
(AbfsClientThrottlingIntercept.java:initializeSingleton(62)) - Client-side 
throttling is enabled for the ABFS file system.
2018-11-13 22:16:16,174 [main] INFO  diag.StoreDiag 
(DurationInfo.java:close(100)) - Creating filesystem: duration 0:00:341
AzureBlobFileSystem{uri=abfs://stevel-test...@account.blob.core.windows.net, 
user='stevel', primaryUserGroup='staff'}
2018-11-13 22:16:16,174 [main] INFO  diag.StoreDiag 
(DurationInfo.java:(53)) - Starting: GetFileStatus 
abfs://stevel-test...@account.blob.core.windows.net/
2018-11-13 22:16:16,174 [main] DEBUG azurebfs.AzureBlobFileSystem 
(AzureBlobFileSystem.java:getFileStatus(389)) - 
AzureBlobFileSystem.getFileStatus path: 
abfs://stevel-test...@account.blob.core.windows.net/
2018-11-13 22:16:16,175 [main] DEBUG azurebfs.AzureBlobFileSystem 
(AzureBlobFileSystem.java:performAbfsAuthCheck(1025)) - ABFS authorizer is not 
initialized. No authorization check will be performed.
2018-11-13 22:16:16,175 [main] DEBUG azurebfs.AzureBlobFileSystemStore 
(AzureBlobFileSystemStore.java:getIsNamespaceEnabled(178)) - 
getFilesystemProperties for filesystem: stevel-testing
2018-11-13 22:16:16,964 [main] DEBUG services.AbfsClient 
(AbfsRestOperation.java:executeHttpOperation(192)) - HttpRequest: 
400,,cid=5b89e9e3-b87e-4329-9451-8326331e26b0,rid=3a5caef2-401e-005c-529e-7bb87e00,sent=0,recv=0,HEAD,https://ACCOUNT.blob.core.windows.net/stevel-testing?resource=filesystem=90
2018-11-13 22:16:16,969 [main] INFO  diag.StoreDiag 
(DurationInfo.java:close(100)) - GetFileStatus 
abfs://stevel-test...@account.blob.core.windows.net/: duration 0:00:794
Operation failed: "The value for one of the HTTP headers is not in the correct 
format.", 400, HEAD, 
https://ACCOUNT.blob.core.windows.net/stevel-testing?resource=filesystem=90
at 
org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:134)
at 
org.apache.hadoop.fs.azurebfs.services.AbfsClient.getFilesystemProperties(AbfsClient.java:197)
at 
org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.getIsNamespaceEnabled(AzureBlobFileSystemStore.java:181)
at 
org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.getFileStatus(AzureBlobFileSystemStore.java:454)
at 
org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.getFileStatus(AzureBlobFileSystem.java:395)
at 
org.apache.hadoop.fs.store.diag.StoreDiag.executeFileSystemOperations(StoreDiag.java:692)
at org.apache.hadoop.fs.store.diag.StoreDiag.run(StoreDiag.java:386)
at org.apache.hadoop.fs.store.diag.StoreDiag.run(StoreDiag.java:331)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.store.diag.StoreDiag.exec(StoreDiag.java:989)
at org.apache.hadoop.fs.store.diag.StoreDiag.main(StoreDiag.java:998)
at storediag.main(storediag.java:24)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
2018-11-13 22:16:16,971 [main] INFO  util.ExitUtil 
(ExitUtil.java:terminate(210)) - Exiting with status -1: Operation failed: "The 
value for one of the HTTP headers is not in the correct format.", 400, HEAD, 
https://ACCOUNT.blob.core.windows.net/stevel-testing?resource=filesystem=90
2018-11-13 22:16:16,973 [shutdown-hook-0] DEBUG azurebfs.AzureBlobFileSystem 
(AzureBlobFileSystem.java:close(383)) - AzureBlobFileSystem.close
{code}
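
For reference, the service REST version is selected per request via the 
x-ms-version header. Below is a minimal, hypothetical probe (account and 
container names are placeholders, and the request is unauthenticated, so only 
the header placement is meaningful here, not the response).

{code:java}
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical sketch: re-issue the failing HEAD request with an explicit
// REST version header to see how the endpoint reacts to it.
public class AbfsVersionProbe {
  public static void main(String[] args) throws Exception {
    URL url = new URL(
        "https://ACCOUNT.blob.core.windows.net/stevel-testing?resource=filesystem");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("HEAD");
    conn.setRequestProperty("x-ms-version", "2018-11-09"); // version under test
    System.out.println("HTTP " + conn.getResponseCode());
  }
}
{code}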

> ABFS: Update to target 2018-11-09 REST version for ADLS Gen 2
> -
>
> Key: HADOOP-15872
> URL: https://issues.apache.org/jira/browse/HADOOP-15872
> 

[jira] [Commented] (HADOOP-15928) libhdfs logs errors when opened FS doesn't support ByteBufferReadable

2018-11-13 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16685849#comment-16685849
 ] 

Steve Loughran commented on HADOOP-15928:
-

OK, I understand now; I've changed the title.

This is an HDFS issue; it's their code. Can you move it to the HDFS project, 
then set the version & component fields?


bq. Writing test case will require access to S3 which will require AWS 
credentials, I 

All our hadoop-aws integration tests require user-supplied AWS credentials; 
all patches which go near S3 need them.

In this case, I don't think you need to go to that effort: all we need is an 
FS which doesn't implement ByteBufferReadable, just some dummy FS like 
{{org.apache.hadoop.fs.TestFileSystemCanonicalization.DummyFileSystem}}. A 
sketch of such a test follows.
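
A minimal sketch under that assumption (the class names below are 
hypothetical, not the actual DummyFileSystem): wrap a seekable stream that 
does not implement ByteBufferReadable and confirm a byte-buffer read throws.

{code:java}
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSInputStream;

// Hypothetical seekable stream that deliberately does NOT implement
// ByteBufferReadable, so FSDataInputStream.read(ByteBuffer) must throw.
class NonByteBufferStream extends FSInputStream {
  private final ByteArrayInputStream data =
      new ByteArrayInputStream(new byte[16]);
  public void seek(long pos) {}
  public long getPos() { return 0; }
  public boolean seekToNewSource(long targetPos) { return false; }
  public int read() { return data.read(); }
}

public class TestNoByteBufferReadable {
  public static void main(String[] args) throws IOException {
    FSDataInputStream in = new FSDataInputStream(new NonByteBufferStream());
    try {
      in.read(ByteBuffer.allocate(8)); // expected to fail
    } catch (UnsupportedOperationException expected) {
      System.out.println("byte-buffer read unsupported, as expected");
    }
  }
}
{code}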




> libhdfs logs errors when opened FS doesn't support ByteBufferReadable
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HADOOP-15928.001.patch
>
>
> Problem:
> 
> There is excessive error logging when a file is opened by libhdfs 
> (DFSClient/HDFS) in an S3 environment. The issue arises because buffered 
> reads are not supported there; see HADOOP-14603 "S3A input stream to support 
> ByteBufferReadable".
> The following message is printed repeatedly in the error log / to STDERR:
> --
> UnsupportedOperationException: Byte-buffer read unsupported by input 
> streamjava.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause:
> 
> Opening a file via hdfsOpenFileImpl() calls readDirect(), which hits this 
> exception and logs it.
> Fix:
> 
> Since the HDFS client does not initiate the byte-buffer read explicitly (it 
> happens implicitly), we should not generate the error log when opening a 
> file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15876) Use keySet().removeAll() to remove multiple keys from Map in AzureBlobFileSystemStore

2018-11-13 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15876:

Component/s: fs/azure

> Use keySet().removeAll() to remove multiple keys from Map in 
> AzureBlobFileSystemStore
> -
>
> Key: HADOOP-15876
> URL: https://issues.apache.org/jira/browse/HADOOP-15876
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Ted Yu
>Assignee: Da Zhou
>Priority: Minor
> Attachments: HADOOP-15876-001.patch
>
>
> Looking at 
> hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
>  , {{removeDefaultAcl}} in particular:
> {code}
> for (Map.Entry defaultAclEntry : 
> defaultAclEntries.entrySet()) {
>   aclEntries.remove(defaultAclEntry.getKey());
> }
> {code}
> The above operation can be written this way:
> {code}
> aclEntries.keySet().removeAll(defaultAclEntries.keySet());
> {code}
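
As an aside, here is a self-contained sketch (hypothetical string keys and 
values, not the actual ABFS ACL types, whose generic parameters appear to have 
been stripped in the snippet above) showing that the one-liner behaves like 
the explicit loop:

{code:java}
import java.util.HashMap;
import java.util.Map;

public class RemoveAllDemo {
  public static void main(String[] args) {
    Map<String, String> aclEntries = new HashMap<>();
    aclEntries.put("default:user", "rwx");
    aclEntries.put("user", "rwx");
    aclEntries.put("group", "r-x");

    Map<String, String> defaultAclEntries = new HashMap<>();
    defaultAclEntries.put("default:user", "rwx");

    // The keySet() view writes through to the map, so removeAll()
    // drops every entry whose key appears in defaultAclEntries.
    aclEntries.keySet().removeAll(defaultAclEntries.keySet());

    System.out.println(aclEntries); // {user=rwx, group=r-x} (order may vary)
  }
}
{code}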



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15876) Use keySet().removeAll() to remove multiple keys from Map in AzureBlobFileSystemStore

2018-11-13 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15876:

Affects Version/s: 3.2.0

> Use keySet().removeAll() to remove multiple keys from Map in 
> AzureBlobFileSystemStore
> -
>
> Key: HADOOP-15876
> URL: https://issues.apache.org/jira/browse/HADOOP-15876
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Ted Yu
>Assignee: Da Zhou
>Priority: Minor
> Attachments: HADOOP-15876-001.patch
>
>
> Looking at 
> hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
>  , {{removeDefaultAcl}} in particular:
> {code}
> for (Map.Entry defaultAclEntry : 
> defaultAclEntries.entrySet()) {
>   aclEntries.remove(defaultAclEntry.getKey());
> }
> {code}
> The above operation can be written this way:
> {code}
> aclEntries.keySet().removeAll(defaultAclEntries.keySet());
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2018-11-13 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16685947#comment-16685947
 ] 

lqjacklee commented on HADOOP-15870:


[~ste...@apache.org] Thanks, I will update it. 

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
>
> Otherwise `remainingInFile` will not change after `seek`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15928) libhdfs logs errors when opened FS doesn't support logging when using HDFS in S3 environment

2018-11-13 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15928:

Summary: libhdfs logs errors when opened FS doesn't support  logging when 
using HDFS in S3 environment  (was: Excessive error logging when using HDFS in 
S3 environment)

> libhdfs logs errors when opened FS doesn't support  logging when using HDFS 
> in S3 environment
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HADOOP-15928.001.patch
>
>
> Problem:
> 
> There is excessive error logging when a file is opened by libhdfs 
> (DFSClient/HDFS) in an S3 environment. The issue arises because buffered 
> reads are not supported there; see HADOOP-14603 "S3A input stream to support 
> ByteBufferReadable".
> The following message is printed repeatedly in the error log / to STDERR:
> --
> UnsupportedOperationException: Byte-buffer read unsupported by input 
> streamjava.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause:
> 
> Opening a file via hdfsOpenFileImpl() calls readDirect(), which hits this 
> exception and logs it.
> Fix:
> 
> Since the HDFS client does not initiate the byte-buffer read explicitly (it 
> happens implicitly), we should not generate the error log when opening a 
> file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15928) libhdfs logs errors when opened FS doesn't support ByteBufferReadable

2018-11-13 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15928:

Summary: libhdfs logs errors when opened FS doesn't support 
ByteBufferReadable  (was: libhdfs logs errors when opened FS doesn't support  
logging when using HDFS in S3 environment)

> libhdfs logs errors when opened FS doesn't support ByteBufferReadable
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HADOOP-15928.001.patch
>
>
> Problem:
> 
> There is excessive error logging when a file is opened by libhdfs 
> (DFSClient/HDFS) in an S3 environment. The issue arises because buffered 
> reads are not supported there; see HADOOP-14603 "S3A input stream to support 
> ByteBufferReadable".
> The following message is printed repeatedly in the error log / to STDERR:
> --
> UnsupportedOperationException: Byte-buffer read unsupported by input 
> streamjava.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause:
> 
> Opening a file via hdfsOpenFileImpl() calls readDirect(), which hits this 
> exception and logs it.
> Fix:
> 
> Since the HDFS client does not initiate the byte-buffer read explicitly (it 
> happens implicitly), we should not generate the error log when opening a 
> file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15928) libhdfs logs errors when opened FS doesn't support ByteBufferReadable

2018-11-13 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HADOOP-15928:
--
   Labels: libhdfs  (was: )
Fix Version/s: 3.0.3
  Component/s: hdfs-client

> libhdfs logs errors when opened FS doesn't support ByteBufferReadable
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
>  Labels: libhdfs
> Fix For: 3.0.3
>
> Attachments: HADOOP-15928.001.patch
>
>
> Problem:
> 
> There is excessive error logging when a file is opened by libhdfs 
> (DFSClient/HDFS) in an S3 environment. The issue arises because buffered 
> reads are not supported there; see HADOOP-14603 "S3A input stream to support 
> ByteBufferReadable".
> The following message is printed repeatedly in the error log / to STDERR:
> --
> UnsupportedOperationException: Byte-buffer read unsupported by input 
> streamjava.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause:
> 
> Opening a file via hdfsOpenFileImpl() calls readDirect(), which hits this 
> exception and logs it.
> Fix:
> 
> Since the HDFS client does not initiate the byte-buffer read explicitly (it 
> happens implicitly), we should not generate the error log when opening a 
> file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15917) AliyunOSS: fix incorrect ReadOps and WriteOps in statistics

2018-11-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686100#comment-16686100
 ] 

Hudson commented on HADOOP-15917:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15424 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15424/])
HADOOP-15917. AliyunOSS: fix incorrect ReadOps and WriteOps in (sammi.chen: rev 
3fade865ce84dcf68bcd7de5a5ed1c7d904796e9)
* (edit) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSBlockOutputStream.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java


> AliyunOSS: fix incorrect ReadOps and WriteOps in statistics
> ---
>
> Key: HADOOP-15917
> URL: https://issues.apache.org/jira/browse/HADOOP-15917
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15917.001.patch, HADOOP-15917.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15931) support 'hadoop key create' with user specified key material

2018-11-13 Thread Vinayakumar B (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B reassigned HADOOP-15931:
--

Assignee: Vinayakumar B

> support 'hadoop key create' with user specified key material
> 
>
> Key: HADOOP-15931
> URL: https://issues.apache.org/jira/browse/HADOOP-15931
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
>
> {{hadoop key create}} command should support creation of keys with user 
> specified key material.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15917) AliyunOSS: fix incorrect ReadOps and WriteOps in statistics

2018-11-13 Thread Sammi Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen updated HADOOP-15917:

Fix Version/s: 2.9.3
   3.2.1
   3.1.2
   3.3.0
   3.0.4
   2.10.0

> AliyunOSS: fix incorrect ReadOps and WriteOps in statistics
> ---
>
> Key: HADOOP-15917
> URL: https://issues.apache.org/jira/browse/HADOOP-15917
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.1.2, 3.2.1, 2.9.3
>
> Attachments: HADOOP-15917.001.patch, HADOOP-15917.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15931) support 'hadoop key create' with user specified key material

2018-11-13 Thread Vinayakumar B (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-15931:
---
Status: Patch Available  (was: Open)

> support 'hadoop key create' with user specified key material
> 
>
> Key: HADOOP-15931
> URL: https://issues.apache.org/jira/browse/HADOOP-15931
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Attachments: HADOOP-15931-01.patch
>
>
> {{hadoop key create}} command should support creation of keys with user 
> specified key material.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15931) support 'hadoop key create' with user specified key material

2018-11-13 Thread Vinayakumar B (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-15931:
---
Attachment: HADOOP-15931-01.patch

> support 'hadoop key create' with user specified key material
> 
>
> Key: HADOOP-15931
> URL: https://issues.apache.org/jira/browse/HADOOP-15931
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Attachments: HADOOP-15931-01.patch
>
>
> {{hadoop key create}} command should support creation of keys with user 
> specified key material.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15917) AliyunOSS: fix incorrect ReadOps and WriteOps in statistics

2018-11-13 Thread Sammi Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen updated HADOOP-15917:

Affects Version/s: (was: 2.10.0)

> AliyunOSS: fix incorrect ReadOps and WriteOps in statistics
> ---
>
> Key: HADOOP-15917
> URL: https://issues.apache.org/jira/browse/HADOOP-15917
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 2.9.1, 3.2.0, 3.1.1, 3.0.3
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.1.2, 3.2.1, 2.9.3
>
> Attachments: HADOOP-15917.001.patch, HADOOP-15917.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15930) Exclude MD5 checksum files from release artifact

2018-11-13 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-15930:
--

 Summary: Exclude MD5 checksum files from release artifact
 Key: HADOOP-15930
 URL: https://issues.apache.org/jira/browse/HADOOP-15930
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Akira Ajisaka


The create-release script creates an MD5 checksum file, but MD5 checksums are now useless.

https://www.apache.org/dev/release-distribution.html#sigs-and-sums
bq. For new releases, PMCs MUST supply SHA-256 and/or SHA-512; and SHOULD NOT 
supply MD5 or SHA-1. Existing releases do not need to be changed.
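
For illustration, a minimal Java sketch (artifact file name hypothetical) that 
produces a SHA-512 checksum line of the kind the policy calls for:

{code:java}
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class Sha512File {
  public static void main(String[] args) throws Exception {
    String artifact = "hadoop-3.3.0-SNAPSHOT.tar.gz"; // hypothetical name
    byte[] digest = MessageDigest.getInstance("SHA-512")
        .digest(Files.readAllBytes(Paths.get(artifact)));
    StringBuilder hex = new StringBuilder();
    for (byte b : digest) {
      hex.append(String.format("%02x", b));
    }
    // Same "<checksum>  <file>" layout that sha512sum emits.
    System.out.println(hex + "  " + artifact);
  }
}
{code}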



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15930) Exclude MD5 checksum files from release artifact

2018-11-13 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15930:
---
Description: 
The create-release script creates MD5 checksum files, but MD5 checksums are now 
useless.

https://www.apache.org/dev/release-distribution.html#sigs-and-sums
bq. For new releases, PMCs MUST supply SHA-256 and/or SHA-512; and SHOULD NOT 
supply MD5 or SHA-1. Existing releases do not need to be changed.

  was:
The create-release script creates an MD5 checksum file, but MD5 checksums are now useless.

https://www.apache.org/dev/release-distribution.html#sigs-and-sums
bq. For new releases, PMCs MUST supply SHA-256 and/or SHA-512; and SHOULD NOT 
supply MD5 or SHA-1. Existing releases do not need to be changed.


> Exclude MD5 checksum files from release artifact
> 
>
> Key: HADOOP-15930
> URL: https://issues.apache.org/jira/browse/HADOOP-15930
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Critical
>
> The create-release script creates MD5 checksum files, but MD5 checksums are 
> now useless.
> https://www.apache.org/dev/release-distribution.html#sigs-and-sums
> bq. For new releases, PMCs MUST supply SHA-256 and/or SHA-512; and SHOULD NOT 
> supply MD5 or SHA-1. Existing releases do not need to be changed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15877) Upgrade ZooKeeper version to 3.5.4-beta and Curator version to 4.0.1

2018-11-13 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15877:
---
Summary: Upgrade ZooKeeper version to 3.5.4-beta and Curator version to 
4.0.1  (was: Upgrade Curator version to 4.0.1)

> Upgrade ZooKeeper version to 3.5.4-beta and Curator version to 4.0.1
> 
>
> Key: HADOOP-15877
> URL: https://issues.apache.org/jira/browse/HADOOP-15877
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ha
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> A long-term option to fix YARN-8937.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15930) Exclude MD5 checksum files from release artifact

2018-11-13 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15930:
---
Attachment: HADOOP-15930.01.patch

> Exclude MD5 checksum files from release artifact
> 
>
> Key: HADOOP-15930
> URL: https://issues.apache.org/jira/browse/HADOOP-15930
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Critical
> Attachments: HADOOP-15930.01.patch
>
>
> The create-release script creates MD5 checksum files, but MD5 checksums are 
> now useless.
> https://www.apache.org/dev/release-distribution.html#sigs-and-sums
> bq. For new releases, PMCs MUST supply SHA-256 and/or SHA-512; and SHOULD NOT 
> supply MD5 or SHA-1. Existing releases do not need to be changed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15930) Exclude MD5 checksum files from release artifact

2018-11-13 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15930:
---
Assignee: Akira Ajisaka
Target Version/s: 2.10.0, 3.0.4, 3.3.0, 3.1.2, 2.8.6, 3.2.1, 2.9.3
  Status: Patch Available  (was: Open)

> Exclude MD5 checksum files from release artifact
> 
>
> Key: HADOOP-15930
> URL: https://issues.apache.org/jira/browse/HADOOP-15930
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
> Attachments: HADOOP-15930.01.patch
>
>
> The create-release script creates MD5 checksum files, but MD5 checksums are 
> now useless.
> https://www.apache.org/dev/release-distribution.html#sigs-and-sums
> bq. For new releases, PMCs MUST supply SHA-256 and/or SHA-512; and SHOULD NOT 
> supply MD5 or SHA-1. Existing releases do not need to be changed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15930) Exclude MD5 checksum files from release artifact

2018-11-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686129#comment-16686129
 ] 

Hadoop QA commented on HADOOP-15930:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 2s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
18s{color} | {color:green} The patch generated 0 new + 104 unchanged - 2 fixed 
= 104 total (was 106) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15930 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948083/HADOOP-15930.01.patch 
|
| Optional Tests |  dupname  asflicense  shellcheck  shelldocs  |
| uname | Linux 25448c79d6d2 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3fade86 |
| maven | version: Apache Maven 3.3.9 |
| shellcheck | v0.4.6 |
| Max. process+thread count | 339 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15515/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Exclude MD5 checksum files from release artifact
> 
>
> Key: HADOOP-15930
> URL: https://issues.apache.org/jira/browse/HADOOP-15930
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
> Attachments: HADOOP-15930.01.patch
>
>
> The create-release script creates MD5 checksum files, but MD5 checksums are 
> now useless.
> https://www.apache.org/dev/release-distribution.html#sigs-and-sums
> bq. For new releases, PMCs MUST supply SHA-256 and/or SHA-512; and SHOULD NOT 
> supply MD5 or SHA-1. Existing releases do not need to be changed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15931) support 'hadoop key create' with user specified key material

2018-11-13 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-15931:
--

 Summary: support 'hadoop key create' with user specified key 
material
 Key: HADOOP-15931
 URL: https://issues.apache.org/jira/browse/HADOOP-15931
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Vinayakumar B


The {{hadoop key create}} command should support creating keys with 
user-specified key material.
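
The underlying KeyProvider API already accepts caller-supplied material, so 
the command mainly needs to plumb it through. A rough sketch of what it could 
drive (the provider selection and key name below are hypothetical):

{code:java}
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.crypto.key.KeyProviderFactory;

public class CreateKeyWithMaterial {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Uses whatever provider hadoop.security.key.provider.path points at.
    List<KeyProvider> providers = KeyProviderFactory.getProviders(conf);
    KeyProvider provider = providers.get(0);

    byte[] material = new byte[16]; // caller-supplied key bytes (demo only)
    KeyProvider.Options options =
        new KeyProvider.Options(conf).setBitLength(128);

    // createKey(name, material, options) stores the supplied material
    // instead of generating random bytes.
    provider.createKey("demo-key", material, options);
    provider.flush();
  }
}
{code}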



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15929) org.apache.hadoop.ipc.TestIPC fail

2018-11-13 Thread Elaine Ang (JIRA)
Elaine Ang created HADOOP-15929:
---

 Summary: org.apache.hadoop.ipc.TestIPC fail
 Key: HADOOP-15929
 URL: https://issues.apache.org/jira/browse/HADOOP-15929
 Project: Hadoop Common
  Issue Type: Test
  Components: common
Affects Versions: 2.8.5
Reporter: Elaine Ang
 Attachments: org.apache.hadoop.ipc.TestIPC-output.txt

The unit tests for module hadoop-common-project/hadoop-common (version 2.8.5, 
checked out from GitHub) fail.

Reproduce:
 # Clone the [Hadoop GitHub repo|https://github.com/apache/hadoop] and check 
out tag release-2.8.5-RC0
 # Compile
{noformat}
mvn clean compile{noformat}
 # Test
{noformat}
cd hadoop-common-project/hadoop-common/
mvn test{noformat}

Below is the failed test log when running as non-root user.

 
{noformat}
Failed tests:
 
TestSymlinkLocalFSFileSystem>TestSymlinkLocalFS.testSetTimesSymlinkToDir:233->SymlinkBaseTest.testSetTimesSymlinkToDir:1395
 expected:<3000> but was:<1542140218000>
 TestIPC.testUserBinding:1495->checkUserBinding:1516
Wanted but not invoked:
socket.bind(OptiPlex/127.0.1.1:0);
-> at org.apache.hadoop.ipc.TestIPC.checkUserBinding(TestIPC.java:1516)

However, there were other interactions with this mock:
-> at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:645)
-> at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:646)
-> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:515)
-> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
-> at org.apache.hadoop.ipc.Client$Connection.closeConnection(Client.java:872)

 TestIPC.testProxyUserBinding:1500->checkUserBinding:1516
Wanted but not invoked:
socket.bind(OptiPlex/127.0.1.1:0);
-> at org.apache.hadoop.ipc.TestIPC.checkUserBinding(TestIPC.java:1516)

However, there were other interactions with this mock:
-> at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:645)
-> at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:646)
-> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:515)
-> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
-> at 
org.apache.hadoop.ipc.Client$Connection.closeConnection(Client.java:872){noformat}
 

Attached is more verbose test output: 
[^org.apache.hadoop.ipc.TestIPC-output.txt]

Suggestions on how to resolve this would be helpful.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15929) org.apache.hadoop.ipc.TestIPC fail

2018-11-13 Thread Elaine Ang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elaine Ang updated HADOOP-15929:

Description: 
The unit tests for module hadoop-common-project/hadoop-common (version 2.8.5, 
checked out from GitHub) fail.

Reproduce:
 # Clone the [Hadoop GitHub repo|https://github.com/apache/hadoop] and check 
out tag release-2.8.5-RC0
 # Compile & test
{noformat}
mvn clean compile 
cd hadoop-common-project/hadoop-common/
mvn test{noformat}

 

Below is the failed test log when running as non-root user.

 
{noformat}
Failed tests:
 
TestSymlinkLocalFSFileSystem>TestSymlinkLocalFS.testSetTimesSymlinkToDir:233->SymlinkBaseTest.testSetTimesSymlinkToDir:1395
 expected:<3000> but was:<1542140218000>
 TestIPC.testUserBinding:1495->checkUserBinding:1516
Wanted but not invoked:
socket.bind(OptiPlex/127.0.1.1:0);
-> at org.apache.hadoop.ipc.TestIPC.checkUserBinding(TestIPC.java:1516)

However, there were other interactions with this mock:
-> at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:645)
-> at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:646)
-> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:515)
-> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
-> at org.apache.hadoop.ipc.Client$Connection.closeConnection(Client.java:872)

 TestIPC.testProxyUserBinding:1500->checkUserBinding:1516
Wanted but not invoked:
socket.bind(OptiPlex/127.0.1.1:0);
-> at org.apache.hadoop.ipc.TestIPC.checkUserBinding(TestIPC.java:1516)

However, there were other interactions with this mock:
-> at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:645)
-> at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:646)
-> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:515)
-> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
-> at 
org.apache.hadoop.ipc.Client$Connection.closeConnection(Client.java:872){noformat}
 

Attached is more verbose test output: 
[^org.apache.hadoop.ipc.TestIPC-output.txt]

Suggestions on how to resolve this would be helpful.

  was:
The unit tests for module hadoop-common-project/hadoop-common (version 2.8.5, 
checked out from GitHub) fail.

Reproduce:
 # Clone the [Hadoop GitHub repo|https://github.com/apache/hadoop] and check 
out tag release-2.8.5-RC0
 # Compile
{noformat}
mvn clean compile{noformat}
 # Test
{noformat}
cd hadoop-common-project/hadoop-common/
mvn test{noformat}

Below is the failed test log when running as non-root user.

 
{noformat}
Failed tests:
 
TestSymlinkLocalFSFileSystem>TestSymlinkLocalFS.testSetTimesSymlinkToDir:233->SymlinkBaseTest.testSetTimesSymlinkToDir:1395
 expected:<3000> but was:<1542140218000>
 TestIPC.testUserBinding:1495->checkUserBinding:1516
Wanted but not invoked:
socket.bind(OptiPlex/127.0.1.1:0);
-> at org.apache.hadoop.ipc.TestIPC.checkUserBinding(TestIPC.java:1516)

However, there were other interactions with this mock:
-> at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:645)
-> at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:646)
-> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:515)
-> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
-> at org.apache.hadoop.ipc.Client$Connection.closeConnection(Client.java:872)

 TestIPC.testProxyUserBinding:1500->checkUserBinding:1516
Wanted but not invoked:
socket.bind(OptiPlex/127.0.1.1:0);
-> at org.apache.hadoop.ipc.TestIPC.checkUserBinding(TestIPC.java:1516)

However, there were other interactions with this mock:
-> at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:645)
-> at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:646)
-> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:515)
-> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
-> at 
org.apache.hadoop.ipc.Client$Connection.closeConnection(Client.java:872){noformat}
 

Attached is more verbose test output: 
[^org.apache.hadoop.ipc.TestIPC-output.txt]

Suggestions on how to resolve this would be helpful.


> org.apache.hadoop.ipc.TestIPC fail
> --
>
> Key: HADOOP-15929
> URL: https://issues.apache.org/jira/browse/HADOOP-15929
> Project: Hadoop Common
>  Issue Type: Test
>  Components: common
>Affects Versions: 2.8.5
>Reporter: Elaine Ang
>Priority: Major
> Attachments: org.apache.hadoop.ipc.TestIPC-output.txt
>
>
> The unit tests for module hadoop-common-project/hadoop-common (version 2.8.5, 
> checked out from GitHub) fail.
> Reproduce:
>  # Clone the [Hadoop GitHub repo|https://github.com/apache/hadoop] and check 
> out tag release-2.8.5-RC0
>  # Compile & test
> {noformat}
> mvn clean compile 
> cd hadoop-common-project/hadoop-common/
> mvn test{noformat}
>  
> Below is the failed test 

[jira] [Commented] (HADOOP-15928) Excessive error logging when using HDFS in S3 environment

2018-11-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16685742#comment-16685742
 ] 

Hadoop QA commented on HADOOP-15928:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
40m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m 
41s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15928 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948019/HADOOP-15928.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 9a1735f577c9 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 762a56c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15514/testReport/ |
| Max. process+thread count | 340 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15514/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Excessive error logging when using HDFS in S3 environment
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HADOOP-15928.001.patch
>
>
> Problem:
> 
> There is excessive error logging when Impala uses HDFS in an S3 environment. 
> The issue is caused by defect HADOOP-14603 "S3A input stream to support 
> ByteBufferReadable".
> The excessive error logging results in defect IMPALA-5256: "ERROR 

[jira] [Updated] (HADOOP-15928) Excessive error logging when using HDFS in S3 environment

2018-11-13 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HADOOP-15928:
--
Attachment: HADOOP-15928.001.patch

> Excessive error logging when using HDFS in S3 environment
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HADOOP-15928.001.patch
>
>
> Problem:
> 
> There is excessive error logging when Impala uses HDFS in an S3 environment. 
> The issue is caused by defect HADOOP-14603 "S3A input stream to support 
> ByteBufferReadable".
> The excessive error logging results in defect IMPALA-5256: "ERROR log files 
> can get very large".
> The following message is printed repeatedly in the error log:
> UnsupportedOperationException: Byte-buffer read unsupported by input 
> streamjava.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause:
> 
> Opening a file via hdfsOpenFileImpl() calls readDirect(), which hits this 
> exception and logs it.
> Fix:
> 
> Since the HDFS client does not initiate the byte-buffer read explicitly (it 
> happens implicitly), we should not generate the error log when opening a 
> file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15928) Excessive error logging when using HDFS in S3 environment

2018-11-13 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HADOOP-15928:
--
Status: Patch Available  (was: In Progress)

> Excessive error logging when using HDFS in S3 environment
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HADOOP-15928.001.patch
>
>
> Problem:
> 
> There is excessive error logging when Impala uses HDFS in an S3 environment. 
> The issue is caused by defect HADOOP-14603 "S3A input stream to support 
> ByteBufferReadable".
> The excessive error logging results in defect IMPALA-5256: "ERROR log files 
> can get very large".
> The following message is printed repeatedly in the error log:
> UnsupportedOperationException: Byte-buffer read unsupported by input 
> streamjava.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause:
> 
> Opening a file via hdfsOpenFileImpl() calls readDirect(), which hits this 
> exception and logs it.
> Fix:
> 
> Since the HDFS client does not initiate the byte-buffer read explicitly (it 
> happens implicitly), we should not generate the error log when opening a 
> file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-15928) Excessive error logging when using HDFS in S3 environment

2018-11-13 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-15928 started by Pranay Singh.
-
> Excessive error logging when using HDFS in S3 environment
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HADOOP-15928.001.patch
>
>
> Problem:
> 
> There is excessive error logging when Impala uses HDFS in an S3 environment. 
> The issue is caused by defect HADOOP-14603 "S3A input stream to support 
> ByteBufferReadable".
> The excessive error logging results in defect IMPALA-5256: "ERROR log files 
> can get very large".
> The following message is printed repeatedly in the error log:
> UnsupportedOperationException: Byte-buffer read unsupported by input 
> streamjava.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause:
> 
> Opening a file via hdfsOpenFileImpl() calls readDirect(), which hits this 
> exception and logs it.
> Fix:
> 
> Since the HDFS client does not initiate the byte-buffer read explicitly (it 
> happens implicitly), we should not generate the error log when opening a 
> file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15928) Excessive error logging when using HDFS in S3 environment

2018-11-13 Thread Pranay Singh (JIRA)
Pranay Singh created HADOOP-15928:
-

 Summary: Excessive error logging when using HDFS in S3 environment
 Key: HADOOP-15928
 URL: https://issues.apache.org/jira/browse/HADOOP-15928
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Pranay Singh


Problem:

There is excessive error logging when Impala uses HDFS in an S3 environment. 
The issue is caused by defect HADOOP-14603 "S3A input stream to support 
ByteBufferReadable".

The excessive error logging results in defect IMPALA-5256: "ERROR log files can 
get very large".

The following message is printed repeatedly in the error log:

UnsupportedOperationException: Byte-buffer read unsupported by input 
streamjava.lang.UnsupportedOperationException: Byte-buffer read unsupported by 
input stream
at 
org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)

Root cause:

Opening a file via hdfsOpenFileImpl() calls readDirect(), which hits this 
exception and logs it.

Fix:

Since the HDFS client does not initiate the byte-buffer read explicitly (it 
happens implicitly), we should not generate the error log when opening a file.






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15928) Excessive error logging when using HDFS in S3 environment

2018-11-13 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh reassigned HADOOP-15928:
-

Assignee: Pranay Singh

> Excessive error logging when using HDFS in S3 environment
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
>
> Problem:
> 
> There is excessive error logging when Impala uses HDFS in an S3 environment. 
> The issue is caused by defect HADOOP-14603 "S3A input stream to support 
> ByteBufferReadable".
> The excessive error logging results in defect IMPALA-5256: "ERROR log files 
> can get very large".
> The following message is printed repeatedly in the error log:
> UnsupportedOperationException: Byte-buffer read unsupported by input 
> streamjava.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause:
> 
> Opening a file via hdfsOpenFileImpl() calls readDirect(), which hits this 
> exception and logs it.
> Fix:
> 
> Since the HDFS client does not initiate the byte-buffer read explicitly (it 
> happens implicitly), we should not generate the error log when opening a 
> file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org