[GitHub] [hadoop] hadoop-yetus commented on issue #1969: HADOOP-17002. ABFS: Adding config to determine if the account is HNS enabled or not

2020-04-20 Thread GitBox


hadoop-yetus commented on issue #1969:
URL: https://github.com/apache/hadoop/pull/1969#issuecomment-616683729


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m  2s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m  4s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 51s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 48s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m  6s |  patch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   0m 53s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 18s |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate ASF License warnings.  |
   |  |   |  57m 47s |   |


   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1969 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 4dcee401d269 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e069a06 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/2/testReport/ |
   | Max. process+thread count | 423 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-17004) ABFS: Improve the ABFS driver documentation

2020-04-20 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17004 started by Bilahari T H.
-
> ABFS: Improve the ABFS driver documentation
> ---
>
> Key: HADOOP-17004
> URL: https://issues.apache.org/jira/browse/HADOOP-17004
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 3.4.0
>
>
> * Add the missing configuration/settings details
> * Mention the default values



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17004) ABFS: Improve the ABFS driver documentation

2020-04-20 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-17004:
--
Status: Patch Available  (was: In Progress)

> ABFS: Improve the ABFS driver documentation
> ---
>
> Key: HADOOP-17004
> URL: https://issues.apache.org/jira/browse/HADOOP-17004
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 3.4.0
>
>
> * Add the missing configuration/settings details
> * Mention the default values



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17004) ABFS: Improve the ABFS driver documentation

2020-04-20 Thread Bilahari T H (Jira)
Bilahari T H created HADOOP-17004:
-

 Summary: ABFS: Improve the ABFS driver documentation
 Key: HADOOP-17004
 URL: https://issues.apache.org/jira/browse/HADOOP-17004
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.4.0
Reporter: Bilahari T H
 Fix For: 3.4.0


* Add the missing configuration/settings details
* Mention the default values



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16922) ABFS: Change in User-Agent header

2020-04-20 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-16922:
--
Target Version/s: 3.4.0  (was: 3.3.1)

> ABFS: Change in User-Agent header
> -
>
> Key: HADOOP-16922
> URL: https://issues.apache.org/jira/browse/HADOOP-16922
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 3.3.1
>
>
> * Add more information to the User-Agent header, like cluster name, cluster
> type, Java vendor, etc.
> * Add APN/1.0 at the beginning



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16922) ABFS: Change in User-Agent header

2020-04-20 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-16922:
--
Affects Version/s: (was: 3.3.1)
   3.4.0

> ABFS: Change in User-Agent header
> -
>
> Key: HADOOP-16922
> URL: https://issues.apache.org/jira/browse/HADOOP-16922
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 3.3.1
>
>
> * Add more information to the User-Agent header, like cluster name, cluster
> type, Java vendor, etc.
> * Add APN/1.0 at the beginning



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17001) The suffix name of the unified compression class

2020-04-20 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088030#comment-17088030
 ] 

Mingliang Liu commented on HADOOP-17001:


A final class with a private constructor may be better for holding constants. An
interface can be implemented, and usually serves as the contract for a group of
related methods.
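
For illustration, a minimal sketch of that pattern (the class and constant names
here are hypothetical, not the names in the patch):
{code}
// A constants holder: `final` prevents subclassing and the private
// constructor prevents instantiation, so the class can only be used
// through its static fields -- unlike an interface, which callers could
// implement just to inherit the constants.
public final class CodecConstants {

  private CodecConstants() {
    // no instances
  }

  /** Default extension for the passthrough codec. */
  public static final String DEFAULT_PASSTHROUGH_EXTENSION = ".passthrough";
}
{code}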

nit: we can replace
{code}
  /**
   * Default extension for {@link
   * org.apache.hadoop.io.compress.PassthroughCodec}.
   */
{code}
with
{code}
  /**
   * Default extension for
   * {@link org.apache.hadoop.io.compress.PassthroughCodec}.
   */
{code}

nit: and also replace
{code}
  /**
   * Default extension for {@link
   * org.apache.hadoop.io.compress.ZStandardCodec}.
   */
{code}
with
{code}
  /**
   * Default extension for {@link org.apache.hadoop.io.compress.ZStandardCodec}.
   */
{code}
since it's not over 80 chars.

The patch file naming convention is described at
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute#HowToContribute-Namingyourpatch
After uploading a patch, you can click "Submit Patch" to trigger the QA run.

Also, if you prefer GitHub, you can file a PR directly there with the JIRA
number in the PR subject.

Thanks,

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HADOOP-17001-001.patch, HADOOP-17001-002.patch
>
>
> Unify the suffix names of the compression classes. I think the suffix names
> in the compression classes should be extracted into a constants class, which
> would help developers understand the structure of the compression classes as
> a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1963: HADOOP-16798. S3A Committer thread pool shutdown problems.

2020-04-20 Thread GitBox


steveloughran commented on issue #1963:
URL: https://github.com/apache/hadoop/pull/1963#issuecomment-616771790


   ran the full suite 2x more; no recreation of the failure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17001) The suffix name of the unified compression class

2020-04-20 Thread bianqi (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087892#comment-17087892
 ] 

bianqi commented on HADOOP-17001:
-

[~liuml07] Updated the patch, please review. Thank you very much!

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Fix For: 3.2.2
>
> Attachments: HADOOP-17001-001.patch, HADOOP-17001-002.patch
>
>
> Unify the suffix names of the compression classes. I think the suffix names
> in the compression classes should be extracted into a constants class, which
> would help developers understand the structure of the compression classes as
> a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17001) The suffix name of the unified compression class

2020-04-20 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-17001:
-
Target Version/s: 3.2.2  (was: 3.2)

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HADOOP-17001-001.patch, HADOOP-17001-002.patch
>
>
> Unify the suffix names of the compression classes. I think the suffix names
> in the compression classes should be extracted into a constants class, which
> would help developers understand the structure of the compression classes as
> a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17001) The suffix name of the unified compression class

2020-04-20 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-17001:
-
Fix Version/s: (was: 3.2.2)

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HADOOP-17001-001.patch, HADOOP-17001-002.patch
>
>
> Unify the suffix names of the compression classes. I think the suffix names
> in the compression classes should be extracted into a constants class, which
> would help developers understand the structure of the compression classes as
> a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-17004) ABFS: Improve the ABFS driver documentation

2020-04-20 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H reassigned HADOOP-17004:
-

Assignee: Bilahari T H

> ABFS: Improve the ABFS driver documentation
> ---
>
> Key: HADOOP-17004
> URL: https://issues.apache.org/jira/browse/HADOOP-17004
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 3.4.0
>
>
> * Add the missing configuration/settings details
> * Mention the default values



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bilaharith opened a new pull request #1970: HADOOP-17004. ABFS: Improve the ABFS driver documentation

2020-04-20 Thread GitBox


bilaharith opened a new pull request #1970:
URL: https://github.com/apache/hadoop/pull/1970


   ABFS: Improve the ABFS driver documentation.
   There is no code change, so the tests have not been run for this.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17004) ABFS: Improve the ABFS driver documentation

2020-04-20 Thread Bilahari T H (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088026#comment-17088026
 ] 

Bilahari T H commented on HADOOP-17004:
---

There is no code change, so the tests have not been run for this.

> ABFS: Improve the ABFS driver documentation
> ---
>
> Key: HADOOP-17004
> URL: https://issues.apache.org/jira/browse/HADOOP-17004
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Priority: Minor
> Fix For: 3.4.0
>
>
> * Add the missing configuration/settings details
> * Mention the default values



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1952: HDFS-1820. FTPFileSystem attempts to close the outputstream even when it is not initialised.

2020-04-20 Thread GitBox


hadoop-yetus commented on issue #1952:
URL: https://github.com/apache/hadoop/pull/1952#issuecomment-616654493


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 41s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m 57s |  trunk passed  |
   | -1 :x: |  compile  |  17m 14s |  root in trunk failed.  |
   | +1 :green_heart: |  checkstyle  |   0m 52s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 53s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   2m 11s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m  9s |  trunk passed  |
   | -0 :warning: |  patch  |   2m 31s |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 56s |  the patch passed  |
   | -1 :x: |  compile  |  16m 27s |  root in the patch failed.  |
   | -1 :x: |  javac  |  16m 27s |  root in the patch failed.  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  3s |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  shadedclient  |  14m 17s |  patch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   2m 17s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 22s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  The patch does not generate ASF License warnings.  |
   |  |   | 109m  9s |   |


   | Reason | Tests |
   |---------:|:------|
   | Failed junit tests | hadoop.metrics2.source.TestJvmMetrics |
   |   | hadoop.security.token.delegation.TestZKDelegationTokenSecretManager |


   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1952/5/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1952 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux b621d2dfa6ab 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1fdfaeb |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1952/5/artifact/out/branch-compile-root.txt |
   | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1952/5/artifact/out/patch-compile-root.txt |
   | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1952/5/artifact/out/patch-compile-root.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1952/5/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1952/5/testReport/ |
   | Max. process+thread count | 1978 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1952/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Assigned] (HADOOP-16933) Backport HADOOP-16890- "ABFS: Change in expiry calculation for MSI token provider" & HADOOP-16825 "ITestAzureBlobFileSystemCheckAccess failing" to branch-2

2020-04-20 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H reassigned HADOOP-16933:
-

Assignee: Bilahari T H

> Backport HADOOP-16890- "ABFS: Change in expiry calculation for MSI token 
> provider" & HADOOP-16825 "ITestAzureBlobFileSystemCheckAccess failing" to 
> branch-2
> ---
>
> Key: HADOOP-16933
> URL: https://issues.apache.org/jira/browse/HADOOP-16933
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 2.10.1
>
>
> Backport "ABFS: Change in expiry calculation for MSI token provider" to 
> branch-2



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bilaharith commented on a change in pull request #1898: HADOOP-16852: Report read-ahead error back

2020-04-20 Thread GitBox


bilaharith commented on a change in pull request #1898:
URL: https://github.com/apache/hadoop/pull/1898#discussion_r411515373



##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java
##
@@ -0,0 +1,433 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest;
+import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AzureBlobFileSystemException;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.TimeoutException;
+
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.Mockito.doReturn;
+import static org.mockito.Mockito.doThrow;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.FORWARD_SLASH;
+
+/**
+ * Unit test AbfsInputStream.
+ */
+public class TestAbfsInputStream extends
+AbstractAbfsIntegrationTest {
+
+  private static final int KILOBYTE = 1024;
+
+  private AbfsRestOperation getMockRestOp() {
+AbfsRestOperation op = mock(AbfsRestOperation.class);
+AbfsHttpOperation httpOp = mock(AbfsHttpOperation.class);
+when(httpOp.getBytesReceived()).thenReturn(1024L);
+when(op.getResult()).thenReturn(httpOp);
+return op;
+  }
+
+  private AbfsClient getMockAbfsClient() {
+// Mock failure for client.read()
+AbfsClient client = mock(AbfsClient.class);
+AbfsPerfTracker tracker = new AbfsPerfTracker(
+"test",
+this.getAccountName(),
+this.getConfiguration());
+when(client.getAbfsPerfTracker()).thenReturn(tracker);
+
+return client;
+  }
+
+  private AbfsInputStream getAbfsInputStream(AbfsClient mockAbfsClient, String 
fileName) {
+// Create AbfsInputStream with the client instance
+AbfsInputStream inputStream = new AbfsInputStream(
+mockAbfsClient,
+null,
+FORWARD_SLASH + fileName,
+3 * KILOBYTE,
+1 * KILOBYTE, // Setting read ahead buffer size of 1 KB
+this.getConfiguration().getReadAheadQueueDepth(),
+this.getConfiguration().getTolerateOobAppends(),
+"eTag");
+
+return inputStream;
+  }
+
+  private void queueReadAheads(AbfsInputStream inputStream) {
+// Mimic AbfsInputStream readAhead queue requests
+ReadBufferManager.getBufferManager()
+.queueReadAhead(inputStream, 0, 1 * KILOBYTE);
+ReadBufferManager.getBufferManager()
+.queueReadAhead(inputStream, 1 * KILOBYTE, 1 * KILOBYTE);
+ReadBufferManager.getBufferManager()
+.queueReadAhead(inputStream, 2 * KILOBYTE, 1 * KILOBYTE);
+  }
+
+  private void verifyReadCallCount(AbfsClient client, int count) throws
+  AzureBlobFileSystemException, InterruptedException {
+// ReadAhead threads are triggered asynchronously.
+// Wait a second before verifying the number of total calls.
+Thread.sleep(1000);
+verify(client, times(count)).read(any(String.class), any(Long.class),
+any(byte[].class), any(Integer.class), any(Integer.class),
+any(String.class));
+  }
+
+  private void checkEvictedStatus(AbfsInputStream inputStream, int position, 
boolean expectedToThrowException)
+  throws Exception {
+// Sleep for the eviction threshold time
+
Thread.sleep(ReadBufferManager.getBufferManager().getThresholdAgeMilliseconds() 
+ 1000);
+
+// Eviction is done only when AbfsInputStream tries to queue new items.
+// 1 tryEvict will remove 1 eligible item. To ensure that the current test 
buffer
+// will get evicted (considering there could be other tests running in 
parallel),
+// call tryEvict for the number of items that are there in 
completedReadList.
+int numOfCompletedReadListItems = 
ReadBufferManager.getBufferManager().getCompletedReadListSize();
+while (numOfCompletedReadListItems > 0) {
+  

[jira] [Updated] (HADOOP-17001) The suffix name of the unified compression class

2020-04-20 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HADOOP-17001:

Attachment: HADOOP-17001-002.patch

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Fix For: 3.2.2
>
> Attachments: HADOOP-17001-001.patch, HADOOP-17001-002.patch
>
>
> The suffix name of the unified compression class,I think the suffix name in 
> the compression class should be extracted into a constant class, which is 
> helpful for developers to understand the structure of the compression class 
> as a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> The above code, the use of constants is a bit messy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ThomasMarquardt commented on issue #1965: HADOOP-16916: ABFS: Delegation SAS generator for integration with Ranger

2020-04-20 Thread GitBox


ThomasMarquardt commented on issue #1965:
URL: https://github.com/apache/hadoop/pull/1965#issuecomment-616670838


   Thanks for the heads-up; I'll update this after PR 1956 is merged. Yes, this
is a big patch, and all of it is related to enabling Delegation SAS support for
Apache Ranger. I considered breaking it up into multiple JIRAs, but some changes
have dependencies on each other. Most of it is testing.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16998) WASB : NativeAzureFsOutputStream#close() throwing java.lang.IllegalArgumentException instead of IOE which causes HBase RS to get aborted

2020-04-20 Thread Anoop Sam John (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087871#comment-17087871
 ] 

Anoop Sam John edited comment on HADOOP-16998 at 4/20/20, 5:30 PM:
---

Thanks Steve.
The version on which this was observed was 2.7.3, but I believe it should be
there in all versions, and even in master.
HADOOP-16785 handles cases where writes are called after close(); here it is
different. When close() is called, there is still data pending flush. That
write fails with an IOE from the Azure Storage SDK, and then the finally block
of close() tries to close the Azure Storage SDK level output stream, which
throws back the same IOE.
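Because the suppressed exception handed to Throwable.addSuppressed() is the very
same object as the primary exception, addSuppressed() throws
IllegalArgumentException ("Self-suppression not permitted") instead. A minimal
sketch of the failure mode (the anonymous stream below is purely illustrative,
not the WASB code):
{code}
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class SelfSuppressionDemo {
  public static void main(String[] args) throws IOException {
    final IOException pending = new IOException("upload of pending data failed");
    OutputStream inner = new OutputStream() {
      @Override public void write(int b) throws IOException { throw pending; }
      @Override public void flush() throws IOException { throw pending; } // pending data fails to flush
      @Override public void close() throws IOException { throw pending; } // the same instance again
    };
    // FilterOutputStream.close() (JDK 8) is effectively
    //   try (OutputStream ostream = out) { flush(); }
    // flush() throws `pending`; then ostream.close() throws the SAME instance,
    // so pending.addSuppressed(pending) fails with IllegalArgumentException
    // rather than propagating the original IOException.
    new FilterOutputStream(inner).close();
  }
}
{code}
This is the stack trace of the exception we see at the HBase level: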
{code}
Caused by: java.lang.IllegalArgumentException: ...
  at java.lang.Throwable.addSuppressed(Throwable.java:1072)
  at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
  at org.apache.hadoop.fs.azure.NativeAzureFileSystem$NativeAzureFsOutputStream.close(NativeAzureFileSystem.java:1055)
  at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
  at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
  at org.apache.hadoop.hbase.io.hfile.AbstractHFileWriter.finishClose(AbstractHFileWriter.java:248)
  at org.apache.hadoop.hbase.io.hfile.HFileWriterV3.finishClose(HFileWriterV3.java:133)
  at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.close(HFileWriterV2.java:368)
  at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.close(StoreFile.java:1080)
  at org.apache.hadoop.hbase.regionserver.StoreFlusher.finalizeWriter(StoreFlusher.java:67)
  at org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:80)
  at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:960)
  at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2411)
  at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2511)
  at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2256)
  at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2218)
  at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2110)
  at org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2036)
  at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:501)
  at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:471)
  at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:75)
  at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:259)
  at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: ...
  at com.microsoft.azure.storage.core.Utility.initIOException(Utility.java:778)
  at com.microsoft.azure.storage.blob.BlobOutputStreamInternal.writeBlock(BlobOutputStreamInternal.java:462)
  at com.microsoft.azure.storage.blob.BlobOutputStreamInternal.access$000(BlobOutputStreamInternal.java:47)
  at com.microsoft.azure.storage.blob.BlobOutputStreamInternal$1.call(BlobOutputStreamInternal.java:406)
  at com.microsoft.azure.storage.blob.BlobOutputStreamInternal$1.call(BlobOutputStreamInternal.java:403)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
Caused by: com.microsoft.azure.storage.StorageException: ..
  at com.microsoft.azure.storage.StorageException.translateException(StorageException.java:87)
  at com.microsoft.azure.storage.core.StorageRequest.materializeException(StorageRequest.java:315)
  at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:185)
  at com.microsoft.azure.storage.blob.CloudBlockBlob.uploadBlockInternal(CloudBlockBlob.java:1097)
  at

[GitHub] [hadoop] jojochuang commented on a change in pull request #1967: YARN-9898. Workaround of Netty-all dependency aarch64 support

2020-04-20 Thread GitBox


jojochuang commented on a change in pull request #1967:
URL: https://github.com/apache/hadoop/pull/1967#discussion_r411560697



##
File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml
##
@@ -32,6 +32,25 @@
 1.5.0.Final
 
 
+

Review comment:
   I think the patch makes sense to me. Thanks for working on this.
   Question: will openlab be responsible for making netty4 releases? How
frequently will it release (netty typically makes one release every month)? If
the netty developers end up hosting aarch64 artifacts, should we switch to the
official aarch64 artifacts?
   
   Code: suggest adding a comment in the pom.xml explaining why we use
unofficial artifacts here, as sketched below.
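   
   For example, something along these lines (wording illustrative only, not the
actual patch):
   ```xml
   <!-- NOTE: netty does not yet publish aarch64 artifacts to Maven Central.
        On aarch64, netty-all is resolved from an unofficial (openlab) build;
        drop this override once official aarch64 artifacts are available. -->
   ```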





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16922) ABFS: Change in User-Agent header

2020-04-20 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-16922:
--
Fix Version/s: (was: 3.3.1)
   3.4.0

> ABFS: Change in User-Agent header
> -
>
> Key: HADOOP-16922
> URL: https://issues.apache.org/jira/browse/HADOOP-16922
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 3.4.0
>
>
> * Add more information to the User-Agent header, like cluster name, cluster
> type, Java vendor, etc.
> * Add APN/1.0 at the beginning



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17003) No Log compression and retention at KMS

2020-04-20 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17003:
--

 Summary: No Log compression and retention at KMS
 Key: HADOOP-17003
 URL: https://issues.apache.org/jira/browse/HADOOP-17003
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein



{code:bash}
-rw-r--r-- 1 hkms users 704M Mar 20 23:59 kms.log.2020-03-20
-rw-r--r-- 1 hkms users 731M Mar 21 23:59 kms.log.2020-03-21
-rw-r--r-- 1 hkms users 750M Mar 22 23:59 kms.log.2020-03-22
-rw-r--r-- 1 hkms users 757M Mar 23 23:59 kms.log.2020-03-23
-rw-r--r-- 1 hkms users 805M Mar 24 23:59 kms.log.2020-03-24
-rw-r--r-- 1 hkms users 858M Mar 25 23:59 kms.log.2020-03-25
-rw-r--r-- 1 hkms users 875M Mar 26 23:59 kms.log.2020-03-26
-rw-r--r-- 1 hkms users 754M Mar 27 23:59 kms.log.2020-03-27
{code}

KMS logs have no retention or compression.
They are eating up space and generating disk-space alerts.
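
A minimal sketch of one possible remedy (the RollingFileAppender property names
are standard log4j 1.x; the appender name and ${kms.log.dir} layout are
assumptions, not necessarily the shipped kms-log4j.properties). This bounds the
logs by size and backup count; compression would still need log4j2 or an
external logrotate:
{code}
# Hypothetical kms-log4j.properties fragment: size-based rolling with a
# bounded backlog, so old logs are deleted instead of accumulating forever.
log4j.appender.kms=org.apache.log4j.RollingFileAppender
log4j.appender.kms.File=${kms.log.dir}/kms.log
log4j.appender.kms.MaxFileSize=256MB
log4j.appender.kms.MaxBackupIndex=20
log4j.appender.kms.layout=org.apache.log4j.PatternLayout
log4j.appender.kms.layout.ConversionPattern=%d{ISO8601} %-5p %c{1} - %m%n
{code}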



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1969: HADOOP-17002. ABFS: Adding config to determine if the account is HNS enabled or not

2020-04-20 Thread GitBox


hadoop-yetus commented on issue #1969:
URL: https://github.com/apache/hadoop/pull/1969#issuecomment-616681135


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m 21s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 15s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 51s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 48s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 56s |  patch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   0m 54s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 22s |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  The patch does not generate ASF License warnings.  |
   |  |   |  58m 12s |   |


   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1969 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 551ced82d417 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e069a06 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/1/testReport/ |
   | Max. process+thread count | 482 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1937: HADOOP-16937. ABFS: revert combined append+flush calls., with default config to disable append+flush calls.

2020-04-20 Thread GitBox


hadoop-yetus commented on issue #1937:
URL: https://github.com/apache/hadoop/pull/1937#issuecomment-616697666


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |  23m 22s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  21m 45s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 15s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 54s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 51s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 14s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  shadedclient  |  15m  9s |  patch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   0m 54s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 10s |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 28s |  The patch does not generate ASF License warnings.  |
   |  |   |  84m 52s |   |


   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1937/5/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1937 |
   | Optional Tests | dupname asflicense xml compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 59fc6cea1ce5 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e069a06 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1937/5/testReport/ |
   | Max. process+thread count | 307 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1937/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1969: HADOOP-17002. ABFS: Adding config to determine if the account is HNS enabled or not

2020-04-20 Thread GitBox


hadoop-yetus commented on issue #1969:
URL: https://github.com/apache/hadoop/pull/1969#issuecomment-616706151


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m 16s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  23m 33s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 53s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 57s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 55s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  16m  2s |  patch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m  3s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 17s |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate ASF License warnings.  |
   |  |   |  66m 52s |   |


   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1969 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 83d94f5c6cf1 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e069a06 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/3/testReport/ |
   | Max. process+thread count | 299 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1938: HADOOP-16922. ABFS: Changing User-Agent header

2020-04-20 Thread GitBox


steveloughran commented on issue #1938:
URL: https://github.com/apache/hadoop/pull/1938#issuecomment-616757178


   @DadanielZ I'm happy if you are.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mpryahin edited a comment on issue #1952: HDFS-1820. FTPFileSystem attempts to close the outputstream even when it is not initialised.

2020-04-20 Thread GitBox


mpryahin edited a comment on issue #1952:
URL: https://github.com/apache/hadoop/pull/1952#issuecomment-616799192


   I've looked through the console output, which reports that both the trunk
and patch builds failed due to the following module compilation failure:
   
   `[INFO] Apache Hadoop Tencent COS Support .. FAILURE [  0.096 s]`
   
   The failure seems to be caused by [this commit](https://github.com/apache/hadoop/commit/82ff7bc9abc8f3ad549db898953d98ef142ab02d).
   
   This PR introduces no changes in the failing module; I've just incorporated
all the latest changes from trunk into my PR branch before committing the PR
code improvements.
   
   @ChenSammi  could you please check whether trunk compilation passes?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mpryahin edited a comment on issue #1952: HDFS-1820. FTPFileSystem attempts to close the outputstream even when it is not initialised.

2020-04-20 Thread GitBox


mpryahin edited a comment on issue #1952:
URL: https://github.com/apache/hadoop/pull/1952#issuecomment-616799192


   I've looked through the console output, which reports that both the trunk
and patch builds failed due to the following module compilation failure:
   
   `[INFO] Apache Hadoop Tencent COS Support .. FAILURE [  0.096 s]`
   
   The failure seems to be caused by [this commit](https://github.com/apache/hadoop/commit/82ff7bc9abc8f3ad549db898953d98ef142ab02d).
   
   This PR introduces no changes in the failing module; I've just incorporated
all the latest changes from trunk into my PR branch before committing the PR
code improvements.
   
   @ChenSammi  could you please check whether trunk compilation succeeds?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mpryahin commented on issue #1952: HDFS-1820. FTPFileSystem attempts to close the outputstream even when it is not initialised.

2020-04-20 Thread GitBox


mpryahin commented on issue #1952:
URL: https://github.com/apache/hadoop/pull/1952#issuecomment-616799192


   I've looked through the console output, which reports that both the trunk
and patch builds failed due to the following module compilation failure:
   
   `[INFO] Apache Hadoop Tencent COS Support .. FAILURE [  0.096 s]`
   
   The failure seems to be caused by [this commit](https://github.com/apache/hadoop/commit/82ff7bc9abc8f3ad549db898953d98ef142ab02d).
   
   This PR introduces no changes in the failing module; I've just incorporated
all the latest changes from trunk into my PR branch before committing the PR
code improvements.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1970: HADOOP-17004. ABFS: Improve the ABFS driver documentation

2020-04-20 Thread GitBox


hadoop-yetus commented on issue #1970:
URL: https://github.com/apache/hadoop/pull/1970#issuecomment-616788015


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 35s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  20m 27s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m  3s |  branch has no errors when 
building and testing our client artifacts.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m  7s |  patch has no errors when 
building and testing our client artifacts.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 32s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  52m 55s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1970/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1970 |
   | Optional Tests | dupname asflicense mvnsite markdownlint |
   | uname | Linux 141c52c5c3a6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e069a06 |
   | Max. process+thread count | 459 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1970/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] liusheng commented on a change in pull request #1967: YARN-9898. Workaround of Netty-all dependency aarch64 support

2020-04-20 Thread GitBox


liusheng commented on a change in pull request #1967:
URL: https://github.com/apache/hadoop/pull/1967#discussion_r411838315



##
File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml
##
@@ -32,6 +32,25 @@
 1.5.0.Final
 
 
+

Review comment:
   @jojochuang thanks for your review.
   Actually, we have made some efforts to promote aarch64 support in the 
Netty community itself; please see 
https://github.com/netty/netty-tcnative/pull/517. However, the Netty 
maintainers are busy, so that PR is still under review. We hope Hadoop can 
fully support the aarch64 platform in its upcoming releases, so this is a 
workaround; once Netty officially supports aarch64, we will switch to the 
official aarch64 artifacts. Netty does not appear to have a fixed release 
schedule, but it publishes a release roughly every month: 
https://github.com/netty/netty/releases
   
   Thanks, I will add a comment in the pom.xml.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] liusheng commented on a change in pull request #1967: YARN-9898. Workaround of Netty-all dependency aarch64 support

2020-04-20 Thread GitBox


liusheng commented on a change in pull request #1967:
URL: https://github.com/apache/hadoop/pull/1967#discussion_r411838315



##
File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml
##
@@ -32,6 +32,25 @@
 1.5.0.Final
 
 
+

Review comment:
   @jojochuang thanks for your review.
   Actually, we have made some efforts to promote aarch64 support in the 
Netty community itself; please see 
https://github.com/netty/netty/pull/9804. However, the Netty 
maintainers are busy, so that PR is still under review. We hope Hadoop can 
fully support the aarch64 platform in its upcoming releases, so this is a 
workaround; once Netty officially supports aarch64, we will switch to the 
official aarch64 artifacts. Netty does not appear to have a fixed release 
schedule, but it publishes a release roughly every month: 
https://github.com/netty/netty/releases
   
   Thanks, I will add a comment in the pom.xml.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1967: YARN-9898. Workaround of Netty-all dependency aarch64 support

2020-04-20 Thread GitBox


hadoop-yetus commented on issue #1967:
URL: https://github.com/apache/hadoop/pull/1967#issuecomment-616945945


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  9s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  23m  8s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  40m  4s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 40s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 35s |  hadoop-yarn-csi in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  62m 13s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1967/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1967 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 696b6fd3a703 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e069a06 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1967/2/testReport/ |
   | Max. process+thread count | 311 (vs. ulimit of 5500) |
   | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1967/2/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16998) WASB : NativeAzureFsOutputStream#close() throwing java.lang.IllegalArgumentException instead of IOE which causes HBase RS to get aborted

2020-04-20 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087834#comment-17087834
 ] 

Steve Loughran commented on HADOOP-16998:
-

Have you got a full stack trace?

> WASB : NativeAzureFsOutputStream#close() throwing 
> java.lang.IllegalArgumentException instead of IOE which causes HBase RS to 
> get aborted
> 
>
> Key: HADOOP-16998
> URL: https://issues.apache.org/jira/browse/HADOOP-16998
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Major
> Attachments: HADOOP-16998.patch
>
>
> During HFile creation, when close() is called on the OutputStream at the 
> end, there is still some pending data to be flushed. When this flush 
> happens, an exception is thrown back from Storage. The Azure-storage SDK 
> layer will throw back an IOE. (Even if a StorageException is thrown from 
> Storage, the SDK converts it to an IOE.) But at the HBase level, we end up 
> getting an IllegalArgumentException, which causes the RS to get aborted. If 
> we got back an IOE, the flush would be retried instead of aborting the RS.
> The reason is this:
> NativeAzureFsOutputStream uses the Azure-storage SDK's 
> BlobOutputStreamInternal, but BlobOutputStreamInternal is wrapped within a 
> SyncableDataOutputStream, which is a FilterOutputStream. During the close 
> op, NativeAzureFsOutputStream calls close on the SyncableDataOutputStream, 
> which uses the method below from FilterOutputStream:
> {code}
> public void close() throws IOException {
>   try (OutputStream ostream = out) {
>   flush();
>   }
> }
> {code}
> Here the flush call causes an IOE to be thrown. The implicit finally then 
> issues a close call on ostream (which is an instance of 
> BlobOutputStreamInternal). When BlobOutputStreamInternal#close() is called, 
> if an exception has already occurred on that stream, it throws back the 
> same exception:
> {code}
> public synchronized void close() throws IOException {
>   try {
>   // if the user has already closed the stream, this will throw a 
> STREAM_CLOSED exception
>   // if an exception was thrown by any thread in the 
> threadExecutor, realize it now
>   this.checkStreamState();
>   ...
> }
> private void checkStreamState() throws IOException {
>   if (this.lastError != null) {
>   throw this.lastError;
>   }
> }
> {code}
> So here both the try and finally blocks raise exceptions, and Java uses 
> Throwable#addSuppressed(). Within that method, if both exceptions are the 
> same object, it throws an IllegalArgumentException:
> {code}
> public final synchronized void addSuppressed(Throwable exception) {
>   if (exception == this)
>   throw new IllegalArgumentException(SELF_SUPPRESSION_MESSAGE, exception);
>   
> }
> {code}
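
To make that concrete, here is a minimal, self-contained reproduction of the 
self-suppression path described above (plain JDK code, not Hadoop's):

{code}
// Reproduction sketch: the body and the close() of a try-with-resources
// throw the SAME exception object, so the generated addSuppressed(this)
// call raises IllegalArgumentException instead of surfacing the IOException.
import java.io.IOException;
import java.io.OutputStream;

public class SelfSuppressionDemo {

  // one shared exception instance, standing in for the stream's recorded
  // lastError that checkStreamState() rethrows
  private static final IOException LAST_ERROR = new IOException("flush failed");

  static class FailingStream extends OutputStream {
    @Override public void write(int b) throws IOException { throw LAST_ERROR; }
    @Override public void flush() throws IOException { throw LAST_ERROR; }
    @Override public void close() throws IOException { throw LAST_ERROR; }
  }

  public static void main(String[] args) {
    try (OutputStream out = new FailingStream()) {
      out.flush(); // throws LAST_ERROR; close() then rethrows the same object
    } catch (Exception e) {
      // prints: java.lang.IllegalArgumentException: Self-suppression not permitted
      System.out.println(e);
    }
  }
}
{code}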



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16998) WASB : NativeAzureFsOutputStream#close() throwing java.lang.IllegalArgumentException instead of IOE which causes HBase RS to get aborted

2020-04-20 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087838#comment-17087838
 ] 

Steve Loughran commented on HADOOP-16998:
-

Can you tag this with the specific version of Hadoop you are having problems 
with?

Try hadoop branch-3/trunk if you haven't already; that is, something with 
HADOOP-16785 in, which tried to harden that close logic. If that's not 
enough, it at least gives a starting point for testing this.

Please put patches up on GitHub as PRs for review, thanks.

> WASB : NativeAzureFsOutputStream#close() throwing 
> java.lang.IllegalArgumentException instead of IOE which causes HBase RS to 
> get aborted
> 
>
> Key: HADOOP-16998
> URL: https://issues.apache.org/jira/browse/HADOOP-16998
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Major
> Attachments: HADOOP-16998.patch
>
>
> During HFile creation, when close() is called on the OutputStream at the 
> end, there is still some pending data to be flushed. When this flush 
> happens, an exception is thrown back from Storage. The Azure-storage SDK 
> layer will throw back an IOE. (Even if a StorageException is thrown from 
> Storage, the SDK converts it to an IOE.) But at the HBase level, we end up 
> getting an IllegalArgumentException, which causes the RS to get aborted. If 
> we got back an IOE, the flush would be retried instead of aborting the RS.
> The reason is this:
> NativeAzureFsOutputStream uses the Azure-storage SDK's 
> BlobOutputStreamInternal, but BlobOutputStreamInternal is wrapped within a 
> SyncableDataOutputStream, which is a FilterOutputStream. During the close 
> op, NativeAzureFsOutputStream calls close on the SyncableDataOutputStream, 
> which uses the method below from FilterOutputStream:
> {code}
> public void close() throws IOException {
>   try (OutputStream ostream = out) {
>   flush();
>   }
> }
> {code}
> Here the flush call causes an IOE to be thrown. The implicit finally then 
> issues a close call on ostream (which is an instance of 
> BlobOutputStreamInternal). When BlobOutputStreamInternal#close() is called, 
> if an exception has already occurred on that stream, it throws back the 
> same exception:
> {code}
> public synchronized void close() throws IOException {
>   try {
>   // if the user has already closed the stream, this will throw a 
> STREAM_CLOSED exception
>   // if an exception was thrown by any thread in the 
> threadExecutor, realize it now
>   this.checkStreamState();
>   ...
> }
> private void checkStreamState() throws IOException {
>   if (this.lastError != null) {
>   throw this.lastError;
>   }
> }
> {code}
> So here both the try and finally blocks raise exceptions, and Java uses 
> Throwable#addSuppressed(). Within that method, if both exceptions are the 
> same object, it throws an IllegalArgumentException:
> {code}
> public final synchronized void addSuppressed(Throwable exception) {
>   if (exception == this)
>   throw new IllegalArgumentException(SELF_SUPPRESSION_MESSAGE, exception);
>   
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17001) The suffix name of the unified compression class

2020-04-20 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HADOOP-17001:

Attachment: (was: HADOOP-17001-002.patch)

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Fix For: 3.2.2
>
> Attachments: HADOOP-17001-001.patch
>
>
> Unify the suffix names of the compression classes. I think the suffix names 
> used by the compression classes should be extracted into a constants class, 
> which would help developers understand the structure of the compression 
> classes as a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.
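
To make the proposal concrete, a rough sketch of the kind of constants class 
being suggested (class and constant names here are illustrative, not taken 
from the attached patch):

{code}
// Illustrative only: gather the codec suffix strings scattered across the
// compression classes into a single constants holder.
public final class CodecConstants {

  private CodecConstants() {
    // no instances
  }

  public static final String DEFAULT_EXTENSION = ".deflate";
  public static final String GZIP_EXTENSION = ".gz";
  public static final String BZIP2_EXTENSION = ".bz2";
  public static final String PASSTHROUGH_EXTENSION = ".passthrough";
}
{code}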



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-17001) The suffix name of the unified compression class

2020-04-20 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HADOOP-17001:

Comment: was deleted

(was: [~liuml07] update patch, please review, thank you very much ~)

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Fix For: 3.2.2
>
> Attachments: HADOOP-17001-001.patch
>
>
> Unify the suffix names of the compression classes. I think the suffix names 
> used by the compression classes should be extracted into a constants class, 
> which would help developers understand the structure of the compression 
> classes as a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16977) in javaApi, UGI params should be overidden through FileSystem conf

2020-04-20 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087842#comment-17087842
 ] 

Steve Loughran commented on HADOOP-16977:
-

If you have Kerberos disabled, then the username of the submitter gets passed 
through YARN to the deployed containers as the env var HADOOP_USER_NAME. 
distcp should pick that up, as UGI falls back to it at login time if security 
is disabled.

> in javaApi, UGI params should be overidden through FileSystem conf
> --
>
> Key: HADOOP-16977
> URL: https://issues.apache.org/jira/browse/HADOOP-16977
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 3.2.0
>Reporter: Hongbing Wang
>Priority: Major
> Attachments: HADOOP-16977.001.patch, HADOOP-16977.002.patch
>
>
> org.apache.hadoop.security.UserGroupInformation#ensureInitialized will 
> always build its configuration from the configuration files, like below:
> {code:java}
> private static void ensureInitialized() {
>   if (conf == null) {
> synchronized(UserGroupInformation.class) {
>   if (conf == null) { // someone might have beat us
> initialize(new Configuration(), false);
>   }
> }
>   }
> }{code}
> As a result, if a FileSystem is created through FileSystem#get or 
> FileSystem#newInstance with a conf, any conf values that differ from the 
> configuration files will not take effect in UserGroupInformation. E.g.:
> {code:java}
> Configuration conf = new Configuration();
> conf.set("k1","v1");
> conf.set("k2","v2");
> FileSystem fs = FileSystem.get(uri, conf);{code}
> "k1" or "k2" will not work in UserGroupInformation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-04-20 Thread GitBox


hadoop-yetus removed a comment on issue #1899:
URL: https://github.com/apache/hadoop/pull/1899#issuecomment-609805651


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  24m 47s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m  7s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 19s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 51s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 50s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 57s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  the patch passed  |
   | -1 :x: |  findbugs  |   0m 55s |  hadoop-tools/hadoop-azure generated 2 
new + 0 unchanged - 0 fixed = 2 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 23s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 33s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  82m  2s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-azure |
   |  |  Increment of volatile field 
org.apache.hadoop.fs.azurebfs.services.AbfsOutputStreamStatisticsImpl.queueShrink
 in 
org.apache.hadoop.fs.azurebfs.services.AbfsOutputStreamStatisticsImpl.queueShrinked()
  At AbfsOutputStreamStatisticsImpl.java:in 
org.apache.hadoop.fs.azurebfs.services.AbfsOutputStreamStatisticsImpl.queueShrinked()
  At AbfsOutputStreamStatisticsImpl.java:[line 112] |
   |  |  Increment of volatile field 
org.apache.hadoop.fs.azurebfs.services.AbfsOutputStreamStatisticsImpl.writeCurrentBufferOperations
 in 
org.apache.hadoop.fs.azurebfs.services.AbfsOutputStreamStatisticsImpl.writeCurrentBuffer()
  At AbfsOutputStreamStatisticsImpl.java:in 
org.apache.hadoop.fs.azurebfs.services.AbfsOutputStreamStatisticsImpl.writeCurrentBuffer()
  At AbfsOutputStreamStatisticsImpl.java:[line 123] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.8 Server=19.03.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1899/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1899 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 6588c0c15f15 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ab7495d |
   | Default Java | 1.8.0_242 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1899/4/artifact/out/new-findbugs-hadoop-tools_hadoop-azure.html
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1899/4/testReport/ |
   | Max. process+thread count | 413 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1899/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: 

[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-04-20 Thread GitBox


hadoop-yetus removed a comment on issue #1899:
URL: https://github.com/apache/hadoop/pull/1899#issuecomment-600547500







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-04-20 Thread GitBox


hadoop-yetus removed a comment on issue #1899:
URL: https://github.com/apache/hadoop/pull/1899#issuecomment-600068083


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  21m 24s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m 13s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m  1s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 53s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 52s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 22s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  compile  |   0m 23s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  javac  |   0m 23s |  hadoop-azure in the patch failed.  |
   | -0 :warning: |  checkstyle  |   0m 17s |  hadoop-tools/hadoop-azure: The 
patch generated 21 new + 0 unchanged - 0 fixed = 21 total (was 0)  |
   | -1 :x: |  mvnsite  |   0m 24s |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 54s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 24s |  hadoop-tools_hadoop-azure generated 2 new 
+ 0 unchanged - 0 fixed = 2 total (was 0)  |
   | -1 :x: |  findbugs  |   0m 27s |  hadoop-azure in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 26s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  asflicense  |   0m 31s |  The patch generated 3 ASF License 
warnings.  |
   |  |   |  76m 53s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.8 Server=19.03.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1899/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1899 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3847b4d4a6fb 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1975479 |
   | Default Java | 1.8.0_242 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1899/1/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1899/1/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1899/1/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1899/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1899/1/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1899/1/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-azure.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1899/1/artifact/out/patch-findbugs-hadoop-tools_hadoop-azure.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1899/1/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1899/1/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1899/1/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 414 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1899/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the 

[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-04-20 Thread GitBox


hadoop-yetus removed a comment on issue #1899:
URL: https://github.com/apache/hadoop/pull/1899#issuecomment-610641314







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-04-20 Thread GitBox


hadoop-yetus removed a comment on issue #1899:
URL: https://github.com/apache/hadoop/pull/1899#issuecomment-614531091







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-04-20 Thread GitBox


steveloughran commented on a change in pull request #1899:
URL: https://github.com/apache/hadoop/pull/1899#discussion_r411472091



##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsOutputStreamStatistics.java
##
@@ -0,0 +1,233 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import java.io.IOException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream;
+import org.apache.hadoop.fs.azurebfs.services.AbfsOutputStreamStatisticsImpl;
+
+/**
+ * Test AbfsOutputStream statistics.
+ */
+public class ITestAbfsOutputStreamStatistics
+extends AbstractAbfsIntegrationTest {
+  private static final int LARGE_OPERATIONS = 10;
+
+  public ITestAbfsOutputStreamStatistics() throws Exception {

Review comment:
   test suites can throw exceptions, but the constructor shouldn't have to. 
But if you want to keep it, then go ahead and keep it. It's not important.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-04-20 Thread GitBox


steveloughran commented on a change in pull request #1899:
URL: https://github.com/apache/hadoop/pull/1899#discussion_r411474545



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
##
@@ -436,4 +453,28 @@ private void waitForTaskToComplete() throws IOException {
   public synchronized void waitForPendingUploads() throws IOException {
 waitForTaskToComplete();
   }
+
+  /**
+   * Getter method for AbfsOutputStream Statistics.
+   *
+   * @return statistics for AbfsOutputStream.
+   */
+  @VisibleForTesting
+  public AbfsOutputStreamStatisticsImpl getOutputStreamStatistics() {

Review comment:
   this would be coding into the implementation/semi-public API something 
only relevant for testing, and it would make it very hard to ever change to a 
different implementation.
   
   Just add a static method in the test suite
   ```
   AbfsOutputStreamStatisticsImpl getStreamStatistics(AbfsOutputStream)
   ```
   and you could do the casting in just one place.
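   
   Something like this in the test suite would keep that cast in one helper 
(a sketch of the suggestion above, assuming the production getter is retyped 
to the statistics interface):
   
   ```
   // sketch: the only place the tests cast down to the implementation type
   private static AbfsOutputStreamStatisticsImpl getStreamStatistics(
       AbfsOutputStream out) {
     return (AbfsOutputStreamStatisticsImpl) out.getOutputStreamStatistics();
   }
   ```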
   
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17002) ABFS: Avoid storage calls to check if the account is HNS enabled or not

2020-04-20 Thread Bilahari T H (Jira)
Bilahari T H created HADOOP-17002:
-

 Summary: ABFS: Avoid storage calls to check if the account is HNS 
enabled or not
 Key: HADOOP-17002
 URL: https://issues.apache.org/jira/browse/HADOOP-17002
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.4.0
Reporter: Bilahari T H
Assignee: Bilahari T H
 Fix For: 3.4.0


Each time an FS instance is created, a getAcl call is made. If the call fails 
with 400 Bad Request, the account is determined to be a non-HNS account.

The recommendation is to introduce a config so that store calls to determine 
the account's HNS status can be avoided.

If the config is set, use it to determine the account's HNS status. If the 
config is not present in core-site, the default behaviour remains calling 
getAcl.
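
A sketch of how the short-circuit could look (the config key name below is a 
placeholder for illustration; the real name lands with the patch):

{code}
// Illustrative only: consult a config flag before issuing the getAcl probe.
String hnsFlag = conf.get("fs.azure.account.hns.enabled"); // placeholder key
boolean hnsEnabled;
if (hnsFlag != null) {
  hnsEnabled = Boolean.parseBoolean(hnsFlag); // trust config, skip store call
} else {
  hnsEnabled = probeViaGetAcl(); // hypothetical helper: current behaviour
}
{code}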



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1956: HADOOP-16965 Refactor abfs stream configuration.

2020-04-20 Thread GitBox


steveloughran commented on a change in pull request #1956:
URL: https://github.com/apache/hadoop/pull/1956#discussion_r411477050



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
##
@@ -61,21 +61,19 @@
   private boolean closed = false;
 
   public AbfsInputStream(
-  final AbfsClient client,
-  final Statistics statistics,
-  final String path,
-  final long contentLength,
-  final int bufferSize,
-  final int readAheadQueueDepth,
-  final boolean tolerateOobAppends,
-  final String eTag) {
+  final AbfsClient client,
+  final Statistics statistics,
+  final String path,
+  final long contentLength,
+  AbfsInputStreamContext abfsInputStreamContext,

Review comment:
   for consistency, mark as `final`. I know, it's not that significant in a 
constructor, but it just keeps things the same.

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStreamContext.java
##
@@ -0,0 +1,68 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+/**
+ * Class to hold extra output stream configs.
+ */
+public class AbfsOutputStreamContext extends AbfsStreamContext {
+
+  private int writeBufferSize;
+
+  private boolean enableFlush;
+
+  private boolean disableOutputStreamFlush;
+
+  public AbfsOutputStreamContext() {
+  }
+
+  public AbfsOutputStreamContext withWriteBufferSize(
+  final int writeBufferSize) {
+this.writeBufferSize = writeBufferSize;
+return this;
+  }
+
+  public AbfsOutputStreamContext enableFlush(final boolean enableFlush) {
+this.enableFlush = enableFlush;
+return this;
+  }
+
+  public AbfsOutputStreamContext disableOutputStreamFlush(
+  final boolean disableOutputStreamFlush) {
+this.disableOutputStreamFlush = disableOutputStreamFlush;
+return this;
+  }
+
+  public AbfsOutputStreamContext build() {
+// Validation of parameters to be done here.
+return this;

Review comment:
   do we need any validators here yet, or is there enough in the code as it 
is?
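   
   For reference, a usage sketch assembled only from the methods visible in 
this diff (the values are illustrative):
   
   ```
   // illustrative values; every call below appears in the class above
   AbfsOutputStreamContext outputContext = new AbfsOutputStreamContext()
       .withWriteBufferSize(8 * 1024 * 1024)
       .enableFlush(true)
       .disableOutputStreamFlush(false)
       .build();
   ```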

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStreamContext.java
##
@@ -0,0 +1,70 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+/**
+ * Class to hold extra input stream configs.
+ */
+public class AbfsInputStreamContext extends AbfsStreamContext {
+
+  private int readBufferSize;
+
+  private int readAheadQueueDepth;
+
+  private boolean tolerateOobAppends;
+
+  public AbfsInputStreamContext() {
+  }
+
+  public AbfsInputStreamContext withReadBufferSize(final int readBufferSize) {
+this.readBufferSize = readBufferSize;
+return this;
+  }
+
+  public AbfsInputStreamContext withReadAheadQueueDepth(
+  final int readAheadQueueDepth) {
+this.readAheadQueueDepth = (readAheadQueueDepth >= 0)
+? readAheadQueueDepth
+: Runtime.getRuntime().availableProcessors();;

Review comment:
   trailing ;





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries 

[GitHub] [hadoop] steveloughran commented on issue #1946: HADOOP-16961. ABFS: Adding metrics to AbfsInputStream

2020-04-20 Thread GitBox


steveloughran commented on issue #1946:
URL: https://github.com/apache/hadoop/pull/1946#issuecomment-616634674


   @bgaborg, can you wait for #1956 to go in? It's trying to get the args to 
the input stream under control. I hope to merge it on Tuesday Apr 20.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ishaniahuja commented on a change in pull request #1937: HADOOP-16937. ABFS: revert combined append+flush calls., with default config to disable append+flush calls.

2020-04-20 Thread GitBox


ishaniahuja commented on a change in pull request #1937:
URL: https://github.com/apache/hadoop/pull/1937#discussion_r411481780



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
##
@@ -119,7 +119,6 @@ public AbfsClient(final URL baseUrl, final 
SharedKeyCredentials sharedKeyCredent
 this.sasTokenProvider = sasTokenProvider;
   }
 
-  @Override
   public void close() throws IOException {

Review comment:
   added back.

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
##
@@ -56,6 +56,8 @@
   private boolean closed;
   private boolean supportFlush;
   private boolean disableOutputStreamFlush;
+  private boolean supportAppendWithFlush;
+  private boolean appendBlob;

Review comment:
   done





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1965: HADOOP-16916: ABFS: Delegation SAS generator for integration with Ranger

2020-04-20 Thread GitBox


steveloughran commented on issue #1965:
URL: https://github.com/apache/hadoop/pull/1965#issuecomment-616635540


   Well, this is a big patch :)
   
   I hope to merge #1956 in on Tuesday Apr 20; it'll break your arg passing 
to the streams, but it's designed to reduce future merge conflicts (e.g. 
#1946).



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ishaniahuja commented on a change in pull request #1937: HADOOP-16937. ABFS: revert combined append+flush calls., with default config to disable append+flush calls.

2020-04-20 Thread GitBox


ishaniahuja commented on a change in pull request #1937:
URL: https://github.com/apache/hadoop/pull/1937#discussion_r411482133



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
##
@@ -51,6 +51,7 @@
   public static final String FS_AZURE_ENABLE_AUTOTHROTTLING = 
"fs.azure.enable.autothrottling";
   public static final String FS_AZURE_ALWAYS_USE_HTTPS = 
"fs.azure.always.use.https";
   public static final String FS_AZURE_ATOMIC_RENAME_KEY = 
"fs.azure.atomic.rename.key";
+  public static final String FS_AZURE_APPEND_BLOB_KEY = 
"fs.azure.appendblob.key";

Review comment:
   Added a comment. Documentation will be added when the new REST version is 
released.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ishaniahuja commented on a change in pull request #1937: HADOOP-16937. ABFS: revert combined append+flush calls., with default config to disable append+flush calls.

2020-04-20 Thread GitBox


ishaniahuja commented on a change in pull request #1937:
URL: https://github.com/apache/hadoop/pull/1937#discussion_r411482408



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
##
@@ -405,10 +426,15 @@ public OutputStream createFile(final Path path,
   umask.toString(),
   isNamespaceEnabled);
 
-  final AbfsRestOperation op = 
client.createPath(AbfsHttpConstants.FORWARD_SLASH + getRelativePath(path), 
true, overwrite,
-  isNamespaceEnabled ? getOctalNotation(permission) : null,
-  isNamespaceEnabled ? getOctalNotation(umask) : null);
-  perfInfo.registerResult(op.getResult()).registerSuccess(true);
+boolean appendBlob = false;
+if (isAppendBlobKey(path.toString())) {
+  appendBlob = true;
+}
+
+  client.createPath(AbfsHttpConstants.FORWARD_SLASH + 
getRelativePath(path), true, overwrite,
+  isNamespaceEnabled ? getOctalNotation(permission) : null,
+  isNamespaceEnabled ? getOctalNotation(umask) : null,
+  appendBlob);

Review comment:
   done





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ishaniahuja commented on issue #1937: HADOOP-16937. ABFS: revert combined append+flush calls., with default config to disable append+flush calls.

2020-04-20 Thread GitBox


ishaniahuja commented on issue #1937:
URL: https://github.com/apache/hadoop/pull/1937#issuecomment-616636078


   Result Summary
   
   PUBLIC ENDPOINT, SOUTH CENTRAL US
   
   Namespace Account
   
   Default(fs.azure.enable.appendwithflush=false)
   
   Tests run: 59, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 416, Failures: 0, Errors: 0, Skipped: 33
   Tests run: 206, Failures: 0, Errors: 0, Skipped: 24
   
   Non Namespace Account
   
   Default(fs.azure.enable.appendwithflush=false)
   
   Tests run: 59, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 416, Failures: 0, Errors: 0, Skipped: 236
   Tests run: 206, Failures: 0, Errors: 0, Skipped: 24
   
   ==
   PRIVATE ENDPOINT, LATEST BUILD, LATEST REST VERSION
   
   ===
   
   Namespace Account
   
   Default(fs.azure.enable.appendwithflush=false)
   
   Tests run: 59, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 416, Failures: 0, Errors: 0, Skipped: 33
   Tests run: 206, Failures: 0, Errors: 0, Skipped: 24
   
   ===
   
   fs.azure.enable.appendwithflush=true
   
   Tests run: 59, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 416, Failures: 0, Errors: 0, Skipped: 33
   Tests run: 206, Failures: 0, Errors: 0, Skipped: 24
   
   ===
   
   Non Namespace Account
   
   Default(fs.azure.enable.appendwithflush=false)
   
   Tests run: 59, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 416, Failures: 0, Errors: 0, Skipped: 236
   Tests run: 206, Failures: 0, Errors: 0, Skipped: 24
   
   ===
   
   fs.azure.enable.appendwithflush=true
   
   Tests run: 59, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 416, Failures: 0, Errors: 0, Skipped: 236
   Tests run: 206, Failures: 0, Errors: 0, Skipped: 24



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17002) ABFS: Avoid storage calls to check if the account is HNS enabled or not

2020-04-20 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-17002:
--
Status: Patch Available  (was: In Progress)

> ABFS: Avoid storage calls to check if the account is HNS enabled or not
> ---
>
> Key: HADOOP-17002
> URL: https://issues.apache.org/jira/browse/HADOOP-17002
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 3.4.0
>
>
> Each time an FS instance is created, a getAcl call is made. If the call 
> fails with 400 Bad Request, the account is determined to be a non-HNS 
> account.
> The recommendation is to introduce a config so that store calls to 
> determine the account's HNS status can be avoided.
> If the config is set, use it to determine the account's HNS status. If the 
> config is not present in core-site, the default behaviour remains calling 
> getAcl.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bilaharith opened a new pull request #1969: ABFS: Adding config to determine if the account is HNS enabled or not

2020-04-20 Thread GitBox


bilaharith opened a new pull request #1969:
URL: https://github.com/apache/hadoop/pull/1969


   Each time an FS instance is created, a getAcl call is made. If the call 
fails with 400 Bad Request, the account is determined to be a non-HNS account.
   
   The recommendation is to introduce a config so that store calls to 
determine the account's HNS status can be avoided.
   
   If the config is set, use it to determine the account's HNS status. If the 
config is not present in core-site, the default behaviour remains calling 
getAcl.
   
   **Driver test results using accounts in Central India**
   mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
   
   **Account with HNS Support**
   [INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0
   [WARNING] Tests run: 424, Failures: 0, Errors: 0, Skipped: 66
   [WARNING] Tests run: 206, Failures: 0, Errors: 0, Skipped: 24
   
   **Account without HNS support**
   [INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0
   [WARNING] Tests run: 424, Failures: 0, Errors: 0, Skipped: 240
   [WARNING] Tests run: 206, Failures: 0, Errors: 0, Skipped: 24



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17002) ABFS: Avoid storage calls to check if the account is HNS enabled or not

2020-04-20 Thread Bilahari T H (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087863#comment-17087863
 ] 

Bilahari T H commented on HADOOP-17002:
---

*Driver test results using accounts in Central India*
mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify

*Account with HNS Support*
[INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0
[WARNING] Tests run: 424, Failures: 0, Errors: 0, Skipped: 66
[WARNING] Tests run: 206, Failures: 0, Errors: 0, Skipped: 24

*Account without HNS support*
[INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0
[WARNING] Tests run: 424, Failures: 0, Errors: 0, Skipped: 240
[WARNING] Tests run: 206, Failures: 0, Errors: 0, Skipped: 24

> ABFS: Avoid storage calls to check if the account is HNS enabled or not
> ---
>
> Key: HADOOP-17002
> URL: https://issues.apache.org/jira/browse/HADOOP-17002
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 3.4.0
>
>
> Each time an FS instance is created, a getAcl call is made. If the call 
> fails with 400 Bad Request, the account is determined to be a non-HNS 
> account.
> The recommendation is to introduce a config so that store calls to 
> determine the account's HNS status can be avoided.
> If the config is set, use it to determine the account's HNS status. If the 
> config is not present in core-site, the default behaviour remains calling 
> getAcl.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16998) WASB : NativeAzureFsOutputStream#close() throwing java.lang.IllegalArgumentException instead of IOE which causes HBase RS to get aborted

2020-04-20 Thread Anoop Sam John (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087871#comment-17087871
 ] 

Anoop Sam John commented on HADOOP-16998:
-

Thanks Steve.
The version on which this was observed was 2.7.3, but I believe this should be 
present in all versions, even in master.
HADOOP-16785 handles cases where writes are called after close(). Here it is 
different: when close() is called there is still data pending flush. That write 
fails with an IOE from the Azure Storage SDK, and then the finally block of 
close() tries to close the Azure Storage SDK level OutputStream, which throws 
back the same IOE. This is the stack trace of the exception we see at the 
HBase level:
{code}
Caused by: java.lang.IllegalArgumentException: ...
  at java.lang.Throwable.addSuppressed(Throwable.java:1072)
  at 
java.io.FilterOutputStream.close(FilterOutputStream.java:159)
  at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem$NativeAzureFsOutputStream.close(NativeAzureFileSystem.java:1055)
  at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
  at 
org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
  at 
org.apache.hadoop.hbase.io.hfile.AbstractHFileWriter.finishClose(AbstractHFileWriter.java:248)
  at 
org.apache.hadoop.hbase.io.hfile.HFileWriterV3.finishClose(HFileWriterV3.java:133)
  at 
org.apache.hadoop.hbase.io.hfile.HFileWriterV2.close(HFileWriterV2.java:368)
  at 
org.apache.hadoop.hbase.regionserver.StoreFile$Writer.close(StoreFile.java:1080)
  at 
org.apache.hadoop.hbase.regionserver.StoreFlusher.finalizeWriter(StoreFlusher.java:67)
  at 
org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:80)
  at 
org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:960)
  at 
org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2411)
  at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2511)
  at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2256)
  at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2218)
  at 
org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2110)
  at 
org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2036)
  at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:501)
  at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:471)
  at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:75)
  at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:259)
  at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: ...
  at 
com.microsoft.azure.storage.core.Utility.initIOException(Utility.java:778)
  at 
com.microsoft.azure.storage.blob.BlobOutputStreamInternal.writeBlock(BlobOutputStreamInternal.java:462)
  at 
com.microsoft.azure.storage.blob.BlobOutputStreamInternal.access$000(BlobOutputStreamInternal.java:47)
  at 
com.microsoft.azure.storage.blob.BlobOutputStreamInternal$1.call(BlobOutputStreamInternal.java:406)
  at 
com.microsoft.azure.storage.blob.BlobOutputStreamInternal$1.call(BlobOutputStreamInternal.java:403)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
Caused by: com.microsoft.azure.storage.StorageException: ..
  at 
com.microsoft.azure.storage.StorageException.translateException(StorageException.java:87)
  at 
com.microsoft.azure.storage.core.StorageRequest.materializeException(StorageRequest.java:315)
  at 
com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:185)
  at 
com.microsoft.azure.storage.blob.CloudBlockBlob.uploadBlockInternal(CloudBlockBlob.java:1097)
  at 

[jira] [Commented] (HADOOP-16998) WASB : NativeAzureFsOutputStream#close() throwing java.lang.IllegalArgumentException instead of IOE which causes HBase RS to get aborted

2020-04-20 Thread Anoop Sam John (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087873#comment-17087873
 ] 

Anoop Sam John commented on HADOOP-16998:
-

bq.Patches up on github as PRs for review
Bit busy with some other stuff; will surely do it after that. Thanks.

> WASB : NativeAzureFsOutputStream#close() throwing 
> java.lang.IllegalArgumentException instead of IOE which causes HBase RS to 
> get aborted
> 
>
> Key: HADOOP-16998
> URL: https://issues.apache.org/jira/browse/HADOOP-16998
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Major
> Attachments: HADOOP-16998.patch
>
>
> During HFile creation, when close() is called on the OutputStream at the end, 
> there is some pending data to be flushed. When this flush happens, an 
> exception is thrown back from Storage. The Azure Storage SDK layer will throw 
> back an IOE. (Even if it is a StorageException thrown from Storage, the SDK 
> converts it to an IOE.) But at HBase, we end up getting an 
> IllegalArgumentException, which causes the RS to get aborted. If we got back 
> an IOE, the flush would be retried instead of aborting the RS.
> The reason is this:
> NativeAzureFsOutputStream uses the Azure Storage SDK's BlobOutputStreamInternal, 
> but BlobOutputStreamInternal is wrapped within a SyncableDataOutputStream, 
> which is a FilterOutputStream. During the close op, NativeAzureFsOutputStream 
> calls close on SyncableDataOutputStream, and that uses the method below from 
> FilterOutputStream:
> {code}
> public void close() throws IOException {
>   try (OutputStream ostream = out) {
>   flush();
>   }
> }
> {code}
> Here the flush call causes an IOE to be thrown. The try-with-resources then 
> issues a close call on ostream (which is an instance of BlobOutputStreamInternal).
> When BlobOutputStreamInternal#close() is called, if an exception has already 
> occurred on that stream, it throws back the same 
> exception:
> {code}
> public synchronized void close() throws IOException {
>   try {
>   // if the user has already closed the stream, this will throw a 
> STREAM_CLOSED exception
>   // if an exception was thrown by any thread in the 
> threadExecutor, realize it now
>   this.checkStreamState();
>   ...
> }
> private void checkStreamState() throws IOException {
>   if (this.lastError != null) {
>   throw this.lastError;
>   }
> }
> {code}
> So here both the try block and the implicit close raise exceptions, and Java 
> uses Throwable#addSuppressed().
> Within this method, if both exceptions are the same object, it throws an 
> IllegalArgumentException:
> {code}
> public final synchronized void addSuppressed(Throwable exception) {
>   if (exception == this)
>  throw new 
> IllegalArgumentException(SELF_SUPPRESSION_MESSAGE, exception);
>   
> }
> {code}
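
For readers unfamiliar with the self-suppression rule, here is a minimal, 
self-contained repro of the failure mode described above (no Azure dependencies; 
the anonymous stream simulates BlobOutputStreamInternal re-throwing its recorded 
lastError, and it assumes a JDK whose FilterOutputStream#close() is the 
try-with-resources form quoted above):
{code}
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class SelfSuppressionDemo {
  public static void main(String[] args) {
    OutputStream failing = new OutputStream() {
      // One exception object, thrown by both flush() and close(), just like
      // a stream that records its lastError and re-throws it on close().
      private final IOException lastError =
          new IOException("simulated flush failure");
      @Override public void write(int b) { /* buffered, never fails */ }
      @Override public void flush() throws IOException { throw lastError; }
      @Override public void close() throws IOException { throw lastError; }
    };
    try {
      // FilterOutputStream.close() runs: try (OutputStream o = out) { flush(); }
      // flush() throws lastError; the implicit close() throws the SAME object,
      // so try-with-resources calls lastError.addSuppressed(lastError).
      new FilterOutputStream(failing).close();
    } catch (Exception e) {
      // Prints: java.lang.IllegalArgumentException: Self-suppression not permitted
      System.out.println(e);
    }
  }
}
{code}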



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-17002) ABFS: Avoid storage calls to check if the account is HNS enabled or not

2020-04-20 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17002 started by Bilahari T H.
-
> ABFS: Avoid storage calls to check if the account is HNS enabled or not
> ---
>
> Key: HADOOP-17002
> URL: https://issues.apache.org/jira/browse/HADOOP-17002
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 3.4.0
>
>
> Each time an FS instance is created, a getAcl call is made. If the call fails 
> with 400 Bad Request, the account is determined to be a non-HNS account. 
> The recommendation is to add a config so that the store calls used to 
> determine the account's HNS status can be avoided.
> If the config is set, use it to determine the account HNS status. If the 
> config is not present in core-site, the default behaviour will be to call getAcl. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14597) Native compilation broken with OpenSSL-1.1.0 because EVP_CIPHER_CTX has been made opaque

2020-04-20 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087403#comment-17087403
 ] 

Masatake Iwasaki commented on HADOOP-14597:
---

cherry-picked this to branch-2.10.

> Native compilation broken with OpenSSL-1.1.0 because EVP_CIPHER_CTX has been 
> made opaque
> 
>
> Key: HADOOP-14597
> URL: https://issues.apache.org/jira/browse/HADOOP-14597
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
> Environment: openssl-1.1.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
>Priority: Major
> Fix For: 3.0.0-beta1, 2.10.1
>
> Attachments: HADOOP-14597.00.patch, HADOOP-14597.01.patch, 
> HADOOP-14597.02.patch, HADOOP-14597.03.patch, HADOOP-14597.04.patch
>
>
> Trying to build Hadoop trunk on Fedora 26, which has openssl-devel-1.1.0, fails 
> with this error:
> {code}[WARNING] 
> /home/raviprak/Code/hadoop/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c:
>  In function ‘check_update_max_output_len’:
> [WARNING] 
> /home/raviprak/Code/hadoop/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c:256:14:
>  error: dereferencing pointer to incomplete type ‘EVP_CIPHER_CTX {aka struct 
> evp_cipher_ctx_st}’
> [WARNING]if (context->flags & EVP_CIPH_NO_PADDING) {
> [WARNING]   ^~
> {code}
> In https://github.com/openssl/openssl/issues/962, mattcaswell says:
> {quote}
> One of the primary differences between master (OpenSSL 1.1.0) and the 1.0.2 
> version is that many types have been made opaque, i.e. applications are no 
> longer allowed to look inside the internals of the structures
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14597) Native compilation broken with OpenSSL-1.1.0 because EVP_CIPHER_CTX has been made opaque

2020-04-20 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-14597:
--
Fix Version/s: 2.10.1

> Native compilation broken with OpenSSL-1.1.0 because EVP_CIPHER_CTX has been 
> made opaque
> 
>
> Key: HADOOP-14597
> URL: https://issues.apache.org/jira/browse/HADOOP-14597
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
> Environment: openssl-1.1.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
>Priority: Major
> Fix For: 3.0.0-beta1, 2.10.1
>
> Attachments: HADOOP-14597.00.patch, HADOOP-14597.01.patch, 
> HADOOP-14597.02.patch, HADOOP-14597.03.patch, HADOOP-14597.04.patch
>
>
> Trying to build Hadoop trunk on Fedora 26, which has openssl-devel-1.1.0, fails 
> with this error:
> {code}[WARNING] 
> /home/raviprak/Code/hadoop/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c:
>  In function ‘check_update_max_output_len’:
> [WARNING] 
> /home/raviprak/Code/hadoop/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c:256:14:
>  error: dereferencing pointer to incomplete type ‘EVP_CIPHER_CTX {aka struct 
> evp_cipher_ctx_st}’
> [WARNING]if (context->flags & EVP_CIPH_NO_PADDING) {
> [WARNING]   ^~
> {code}
> In https://github.com/openssl/openssl/issues/962, mattcaswell says:
> {quote}
> One of the primary differences between master (OpenSSL 1.1.0) and the 1.0.2 
> version is that many types have been made opaque, i.e. applications are no 
> longer allowed to look inside the internals of the structures
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16739) Fix native build failure of hadoop-pipes on CentOS 8

2020-04-20 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-16739:
--
Fix Version/s: 2.10.1

> Fix native build failure of hadoop-pipes on CentOS 8
> 
>
> Key: HADOOP-16739
> URL: https://issues.apache.org/jira/browse/HADOOP-16739
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/pipes
>Affects Versions: 2.10.0, 3.2.1
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
> Fix For: 3.3.0, 2.10.1
>
> Attachments: HADOOP-16739-branch-2.10.001.patch, 
> HADOOP-16739.001.patch
>
>
> Native build fails in hadoop-tools/hadoop-pipes on CentOS 8 due to the lack of 
> rpc.h, which was removed from glibc.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16647) Support OpenSSL 1.1.1 LTS

2020-04-20 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-16647:
--
Fix Version/s: 2.10.1

> Support OpenSSL 1.1.1 LTS
> -
>
> Key: HADOOP-16647
> URL: https://issues.apache.org/jira/browse/HADOOP-16647
> Project: Hadoop Common
>  Issue Type: Task
>  Components: security
>Reporter: Wei-Chiu Chuang
>Assignee: Rakesh Radhakrishnan
>Priority: Critical
> Fix For: 3.3.0, 2.10.1
>
> Attachments: HADOOP-16647-00.patch, HADOOP-16647-01.patch, 
> HADOOP-16647-02.patch
>
>
> See Hadoop user mailing list 
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201910.mbox/%3CCADiq6%3DweDFxHTL_7eGwDNnxVCza39y2QYQTSggfLn7mXhMLOdg%40mail.gmail.com%3E
> Hadoop 2 supports OpenSSL 1.0.2.
> Hadoop 3 supports OpenSSL 1.1.0 (HADOOP-14597) and I believe 1.0.2 too.
> Per OpenSSL blog https://www.openssl.org/policies/releasestrat.html
> * 1.1.0 is EOL 2019/09/11
> * 1.0.2 is EOL 2019/12/31
> * 1.1.1 is EOL 2023/09/11 (LTS)
> Many Hadoop installations rely on the OpenSSL package provided by Linux 
> distros, but it's not clear to me whether Linux distros are going to support 
> 1.1.0/1.0.2 beyond those dates.
> We should make sure Hadoop works with OpenSSL 1.1.1, as well as document the 
> OpenSSL versions supported. Filing this jira to test/document/fix bugs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16647) Support OpenSSL 1.1.1 LTS

2020-04-20 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087405#comment-17087405
 ] 

Masatake Iwasaki commented on HADOOP-16647:
---

cherry-picked this following HADOOP-14597, HADOOP-15062, HADOOP-16739.

> Support OpenSSL 1.1.1 LTS
> -
>
> Key: HADOOP-16647
> URL: https://issues.apache.org/jira/browse/HADOOP-16647
> Project: Hadoop Common
>  Issue Type: Task
>  Components: security
>Reporter: Wei-Chiu Chuang
>Assignee: Rakesh Radhakrishnan
>Priority: Critical
> Fix For: 3.3.0, 2.10.1
>
> Attachments: HADOOP-16647-00.patch, HADOOP-16647-01.patch, 
> HADOOP-16647-02.patch
>
>
> See Hadoop user mailing list 
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201910.mbox/%3CCADiq6%3DweDFxHTL_7eGwDNnxVCza39y2QYQTSggfLn7mXhMLOdg%40mail.gmail.com%3E
> Hadoop 2 supports OpenSSL 1.0.2.
> Hadoop 3 supports OpenSSL 1.1.0 (HADOOP-14597) and I believe 1.0.2 too.
> Per OpenSSL blog https://www.openssl.org/policies/releasestrat.html
> * 1.1.0 is EOL 2019/09/11
> * 1.0.2 is EOL 2019/12/31
> * 1.1.1 is EOL 2023/09/11 (LTS)
> Many Hadoop installations rely on the OpenSSL package provided by Linux 
> distros, but it's not clear to me whether Linux distros are going to support 
> 1.1.0/1.0.2 beyond those dates.
> We should make sure Hadoop works with OpenSSL 1.1.1, as well as document the 
> OpenSSL versions supported. Filing this jira to test/document/fix bugs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15062) TestCryptoStreamsWithOpensslAesCtrCryptoCodec fails on Debian 9

2020-04-20 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087404#comment-17087404
 ] 

Masatake Iwasaki commented on HADOOP-15062:
---

cherry-picked this to branch-2.10.

> TestCryptoStreamsWithOpensslAesCtrCryptoCodec fails on Debian 9
> ---
>
> Key: HADOOP-15062
> URL: https://issues.apache.org/jira/browse/HADOOP-15062
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 2.10.1
>
> Attachments: HADOOP-15062.000.patch
>
>
> {code}
> [ERROR] 
> org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec  Time 
> elapsed: 0.478 s  <<< FAILURE!
> java.lang.AssertionError: Unable to instantiate codec 
> org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec, is the required version of 
> OpenSSL installed?
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertNotNull(Assert.java:621)
>   at 
> org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec.init(TestCryptoStreamsWithOpensslAesCtrCryptoCodec.java:43)
> {code}
> This happened due to the following openssl change:
> https://github.com/openssl/openssl/commit/ff4b7fafb315df5f8374e9b50c302460e068f188



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15062) TestCryptoStreamsWithOpensslAesCtrCryptoCodec fails on Debian 9

2020-04-20 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-15062:
--
Fix Version/s: 2.10.1

> TestCryptoStreamsWithOpensslAesCtrCryptoCodec fails on Debian 9
> ---
>
> Key: HADOOP-15062
> URL: https://issues.apache.org/jira/browse/HADOOP-15062
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 2.10.1
>
> Attachments: HADOOP-15062.000.patch
>
>
> {code}
> [ERROR] 
> org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec  Time 
> elapsed: 0.478 s  <<< FAILURE!
> java.lang.AssertionError: Unable to instantiate codec 
> org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec, is the required version of 
> OpenSSL installed?
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertNotNull(Assert.java:621)
>   at 
> org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec.init(TestCryptoStreamsWithOpensslAesCtrCryptoCodec.java:43)
> {code}
> This happened due to the following openssl change:
> https://github.com/openssl/openssl/commit/ff4b7fafb315df5f8374e9b50c302460e068f188



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] liuml07 commented on a change in pull request #1964: HDFS-15281: Make sure ZKFC uses dfs.namenode.rpc-address to bind to host address

2020-04-20 Thread GitBox


liuml07 commented on a change in pull request #1964:
URL: https://github.com/apache/hadoop/pull/1964#discussion_r411109845



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSZKFCRespectsBindHostKeys.java
##
@@ -0,0 +1,98 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.tools;
+
+import static org.hamcrest.core.IsNot.not;
+import static org.hamcrest.core.Is.is;
+import static org.junit.Assert.assertThat;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.MiniDFSNNTopology;
+import org.apache.hadoop.net.ServerSocketUtil;
+import org.junit.Test;
+import java.io.IOException;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.*;
+
+

Review comment:
   nit: this can be one blank line.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSZKFCRespectsBindHostKeys.java
##
@@ -0,0 +1,98 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.tools;
+
+import static org.hamcrest.core.IsNot.not;
+import static org.hamcrest.core.Is.is;
+import static org.junit.Assert.assertThat;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.MiniDFSNNTopology;
+import org.apache.hadoop.net.ServerSocketUtil;
+import org.junit.Test;
+import java.io.IOException;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.*;
+
+
+import org.apache.hadoop.hdfs.HdfsConfiguration;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+
+public class TestDFSZKFCRespectsBindHostKeys {

Review comment:
   Alternatively, could this test go to `TestDFSZKFailoverController`? We could 
hopefully reuse the existing setup and shutdown methods there.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSZKFCRespectsBindHostKeys.java
##
@@ -0,0 +1,98 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.tools;
+
+import static org.hamcrest.core.IsNot.not;
+import static org.hamcrest.core.Is.is;
+import static org.junit.Assert.assertThat;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.MiniDFSNNTopology;
+import org.apache.hadoop.net.ServerSocketUtil;
+import org.junit.Test;
+import java.io.IOException;
+import org.apache.commons.logging.Log;
+import 

[GitHub] [hadoop] aajisaka opened a new pull request #1968: HDFS-14742. RBF: TestRouterFaultTolerant tests are flaky

2020-04-20 Thread GitBox


aajisaka opened a new pull request #1968:
URL: https://github.com/apache/hadoop/pull/1968


   JIRA: https://issues.apache.org/jira/browse/HDFS-14742



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16959) Resolve hadoop-cos dependency conflict

2020-04-20 Thread Sammi Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen updated HADOOP-16959:

Fix Version/s: 3.4.0

> Resolve hadoop-cos dependency conflict
> --
>
> Key: HADOOP-16959
> URL: https://issues.apache.org/jira/browse/HADOOP-16959
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/cos
>Reporter: YangY
>Assignee: YangY
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HADOOP-16959-branch-3.3.001.patch, 
> HADOOP-16959-branch-3.3.002.patch, HADOOP-16959-branch-3.3.003.patch, 
> HADOOP-16959-branch-3.3.004.patch, HADOOP-16959-branch-3.3.005.patch
>
>
> There are some dependency conflicts between hadoop-common and hadoop-cos, for 
> example the Joda-Time and HTTP client libraries.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16959) Resolve hadoop-cos dependency conflict

2020-04-20 Thread Sammi Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen updated HADOOP-16959:

Fix Version/s: 3.3.1

> Resolve hadoop-cos dependency conflict
> --
>
> Key: HADOOP-16959
> URL: https://issues.apache.org/jira/browse/HADOOP-16959
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/cos
>Reporter: YangY
>Assignee: YangY
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HADOOP-16959-branch-3.3.001.patch, 
> HADOOP-16959-branch-3.3.002.patch, HADOOP-16959-branch-3.3.003.patch, 
> HADOOP-16959-branch-3.3.004.patch, HADOOP-16959-branch-3.3.005.patch
>
>
> There are some dependency conflicts between hadoop-common and hadoop-cos, for 
> example the Joda-Time and HTTP client libraries.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16959) Resolve hadoop-cos dependency conflict

2020-04-20 Thread Sammi Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen updated HADOOP-16959:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Resolve hadoop-cos dependency conflict
> --
>
> Key: HADOOP-16959
> URL: https://issues.apache.org/jira/browse/HADOOP-16959
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/cos
>Reporter: YangY
>Assignee: YangY
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HADOOP-16959-branch-3.3.001.patch, 
> HADOOP-16959-branch-3.3.002.patch, HADOOP-16959-branch-3.3.003.patch, 
> HADOOP-16959-branch-3.3.004.patch, HADOOP-16959-branch-3.3.005.patch
>
>
> There are some dependency conflicts between hadoop-common and hadoop-cos, for 
> example the Joda-Time and HTTP client libraries.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mpryahin commented on a change in pull request #1952: HDFS-1820. FTPFileSystem attempts to close the outputstream even when it is not initialised.

2020-04-20 Thread GitBox


mpryahin commented on a change in pull request #1952:
URL: https://github.com/apache/hadoop/pull/1952#discussion_r411300453



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java
##
@@ -340,8 +343,19 @@ public FSDataOutputStream create(Path file, FsPermission 
permission,
 // file. The FTP client connection is closed when close() is called on the
 // FSDataOutputStream.
 client.changeWorkingDirectory(parent.toUri().getPath());
-FSDataOutputStream fos = new FSDataOutputStream(client.storeFileStream(file
-.getName()), statistics) {
+OutputStream outputStream = client.storeFileStream(file.getName());
+
+if (!FTPReply.isPositivePreliminary(client.getReplyCode())) {
+  // The ftpClient is in an inconsistent state. Must close the stream
+  // which in turn will logout and disconnect from FTP server
+  if (outputStream != null) {
+outputStream.close();

Review comment:
   thank you, fixed.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mpryahin commented on a change in pull request #1952: HDFS-1820. FTPFileSystem attempts to close the outputstream even when it is not initialised.

2020-04-20 Thread GitBox


mpryahin commented on a change in pull request #1952:
URL: https://github.com/apache/hadoop/pull/1952#discussion_r411300562



##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/ftp/TestFTPFileSystem.java
##
@@ -37,9 +54,70 @@
  */
 public class TestFTPFileSystem {
 
+  private FtpTestServer server;
+
   @Rule
   public Timeout testTimeout = new Timeout(18);
 
+  @Before
+  public void setUp() throws Exception {
+server = new FtpTestServer(GenericTestUtils.getTestDir().toPath()).start();
+  }
+
+  @After
+  @SuppressWarnings("ResultOfMethodCallIgnored")
+  public void tearDown() throws Exception {
+server.stop();

Review comment:
   thank you, fixed.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1968: HDFS-14742. RBF: TestRouterFaultTolerant tests are flaky

2020-04-20 Thread GitBox


hadoop-yetus commented on issue #1968:
URL: https://github.com/apache/hadoop/pull/1968#issuecomment-616469659


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m  8s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 15s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  9s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  8s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 25s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m 11s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 49s |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  67m 31s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1968/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1968 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b9fcd172266d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 79e03fb |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1968/1/testReport/ |
   | Max. process+thread count | 3449 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1968/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mpryahin commented on a change in pull request #1952: HDFS-1820. FTPFileSystem attempts to close the outputstream even when it is not initialised.

2020-04-20 Thread GitBox


mpryahin commented on a change in pull request #1952:
URL: https://github.com/apache/hadoop/pull/1952#discussion_r411299384



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java
##
@@ -110,7 +111,9 @@ public void initialize(URI uri, Configuration conf) throws 
IOException { // get
 
 // get port information from uri, (overrides info in conf)
 int port = uri.getPort();
-port = (port == -1) ? FTP.DEFAULT_PORT : port;
+if(port == -1){
+  port = conf.getInt(FS_FTP_HOST_PORT, FTP.DEFAULT_PORT);

Review comment:
   indeed, but unfortunately there is no documentation for this connector 
at the moment. 
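   
   In lieu of documentation, a usage sketch for reviewers (this assumes 
`FS_FTP_HOST_PORT` resolves to the key `fs.ftp.host.port`, and the host and 
port values are made up):
   
   ```java
   import java.net.URI;
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FileSystem;
   
   public class FtpPortSketch {
     public static void main(String[] args) throws Exception {
       // With the patch, a URI without an explicit port falls back to this
       // config value before falling back to FTP.DEFAULT_PORT (21).
       Configuration conf = new Configuration();
       conf.setInt("fs.ftp.host.port", 2121);
       FileSystem fs =
           FileSystem.get(URI.create("ftp://user:pass@ftp.example.com/"), conf);
       System.out.println(fs.getUri());
     }
   }
   ```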





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16959) Resolve hadoop-cos dependency conflict

2020-04-20 Thread Sammi Chen (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087571#comment-17087571
 ] 

Sammi Chen commented on HADOOP-16959:
-

Thanks [~yuyang733] for continuously improving the patch based on our offline 
discussion. 

The last patch LGTM. 

+1.

Will commit to trunk soon.  

> Resolve hadoop-cos dependency conflict
> --
>
> Key: HADOOP-16959
> URL: https://issues.apache.org/jira/browse/HADOOP-16959
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/cos
>Reporter: YangY
>Assignee: YangY
>Priority: Major
> Attachments: HADOOP-16959-branch-3.3.001.patch, 
> HADOOP-16959-branch-3.3.002.patch, HADOOP-16959-branch-3.3.003.patch, 
> HADOOP-16959-branch-3.3.004.patch, HADOOP-16959-branch-3.3.005.patch
>
>
> There are some dependency conflicts between hadoop-common and hadoop-cos, for 
> example the Joda-Time and HTTP client libraries.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16959) Resolve hadoop-cos dependency conflict

2020-04-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087588#comment-17087588
 ] 

Hudson commented on HADOOP-16959:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18165 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18165/])
HADOOP-16959. Resolve hadoop-cos dependency conflict. Contributed by 
(sammichen: rev 82ff7bc9abc8f3ad549db898953d98ef142ab02d)
* (edit) 
hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/CosNFileReadTask.java
* (delete) 
hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/auth/COSCredentialProviderList.java
* (delete) 
hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/auth/SimpleCredentialProvider.java
* (add) 
hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/auth/EnvironmentVariableCredentialsProvider.java
* (edit) 
hadoop-cloud-storage-project/hadoop-cos/dev-support/findbugs-exclude.xml
* (add) 
hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/auth/AbstractCOSCredentialsProvider.java
* (edit) hadoop-project/pom.xml
* (edit) 
hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/CosNUtils.java
* (edit) 
hadoop-cloud-storage-project/hadoop-cos/src/site/markdown/cloud-storage/index.md
* (delete) 
hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/auth/EnvironmentVariableCredentialProvider.java
* (edit) hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
* (add) 
hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/auth/SimpleCredentialsProvider.java
* (edit) 
hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/CosNativeFileSystemStore.java
* (edit) hadoop-cloud-storage-project/hadoop-cos/pom.xml
* (add) 
hadoop-cloud-storage-project/hadoop-cos/src/test/java/org/apache/hadoop/fs/cosn/TestCosCredentials.java
* (edit) 
hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/BufferPool.java
* (add) 
hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/auth/COSCredentialsProviderList.java


> Resolve hadoop-cos dependency conflict
> --
>
> Key: HADOOP-16959
> URL: https://issues.apache.org/jira/browse/HADOOP-16959
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/cos
>Reporter: YangY
>Assignee: YangY
>Priority: Major
> Attachments: HADOOP-16959-branch-3.3.001.patch, 
> HADOOP-16959-branch-3.3.002.patch, HADOOP-16959-branch-3.3.003.patch, 
> HADOOP-16959-branch-3.3.004.patch, HADOOP-16959-branch-3.3.005.patch
>
>
> There are some dependency conflicts between hadoop-common and hadoop-cos, for 
> example the Joda-Time and HTTP client libraries.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1952: HDFS-1820. FTPFileSystem attempts to close the outputstream even when it is not initialised.

2020-04-20 Thread GitBox


steveloughran commented on a change in pull request #1952:
URL: https://github.com/apache/hadoop/pull/1952#discussion_r411272114



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java
##
@@ -340,8 +343,19 @@ public FSDataOutputStream create(Path file, FsPermission 
permission,
 // file. The FTP client connection is closed when close() is called on the
 // FSDataOutputStream.
 client.changeWorkingDirectory(parent.toUri().getPath());
-FSDataOutputStream fos = new FSDataOutputStream(client.storeFileStream(file
-.getName()), statistics) {
+OutputStream outputStream = client.storeFileStream(file.getName());
+
+if (!FTPReply.isPositivePreliminary(client.getReplyCode())) {
+  // The ftpClient is in an inconsistent state. Must close the stream
+  // which in turn will logout and disconnect from FTP server
+  if (outputStream != null) {
+outputStream.close();

Review comment:
   could this raise an IOE? If so, that disconnect() afterwards still needs 
to be called, so make close() a catch/log operation. IOUtils.closeStream could 
do this (and it includes the null check)
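   
   i.e. the hunk above could become something like this sketch (the exception 
message text is made up; IOUtils here is org.apache.hadoop.io.IOUtils):
   
   ```java
   if (!FTPReply.isPositivePreliminary(client.getReplyCode())) {
     // closeStream() null-checks and logs/swallows any IOE from close(),
     // so the logout/disconnect below always run.
     IOUtils.closeStream(outputStream);
     client.logout();
     client.disconnect();
     throw new IOException("Unable to open output stream: "
         + client.getReplyString());
   }
   ```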

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java
##
@@ -110,7 +111,9 @@ public void initialize(URI uri, Configuration conf) throws 
IOException { // get
 
 // get port information from uri, (overrides info in conf)
 int port = uri.getPort();
-port = (port == -1) ? FTP.DEFAULT_PORT : port;
+if(port == -1){
+  port = conf.getInt(FS_FTP_HOST_PORT, FTP.DEFAULT_PORT);

Review comment:
   assuming we have documentation for the FTP connector, you are going to 
have to document this new option. 

##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/ftp/TestFTPFileSystem.java
##
@@ -37,9 +54,70 @@
  */
 public class TestFTPFileSystem {
 
+  private FtpTestServer server;
+
   @Rule
   public Timeout testTimeout = new Timeout(18);
 
+  @Before
+  public void setUp() throws Exception {
+server = new FtpTestServer(GenericTestUtils.getTestDir().toPath()).start();
+  }
+
+  @After
+  @SuppressWarnings("ResultOfMethodCallIgnored")
+  public void tearDown() throws Exception {
+server.stop();

Review comment:
   handle case where server==null, i.e. setup failed
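   
   i.e. something like (sketch):
   
   ```java
   @After
   public void tearDown() throws Exception {
     if (server != null) {  // setUp() may have failed before assigning the field
       server.stop();
     }
   }
   ```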





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1950: HADOOP-16586. ITestS3GuardFsck, others fails when run using a local m…

2020-04-20 Thread GitBox


steveloughran commented on a change in pull request #1950:
URL: https://github.com/apache/hadoop/pull/1950#discussion_r411349770



##
File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/ITestPartialRenamesDeletes.java
##
@@ -54,7 +54,12 @@
 import static org.apache.hadoop.fs.contract.ContractTestUtils.*;
 import static org.apache.hadoop.fs.s3a.Constants.*;
 import static org.apache.hadoop.fs.s3a.S3ATestUtils.MetricDiff;
-import static org.apache.hadoop.fs.s3a.S3ATestUtils.*;
+import static org.apache.hadoop.fs.s3a.S3ATestUtils.assume;

Review comment:
   not sure the expansion here is needed; you know how imports are such a 
backporting trouble spot.
   
   in fact, given there's no other diff to this class, I'm not sure this file 
needs changing at all

##
File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardToolLocal.java
##
@@ -165,18 +165,19 @@ public void testImportNoFilesystem() throws Throwable {
 
   @Test
   public void testInfoBucketAndRegionNoFS() throws Throwable {
-intercept(FileNotFoundException.class,
+intercept(UnknownStoreException.class,

Review comment:
   if this is triggering failures, I'm not seeing them - but then I do have 
fs.s3a.bucket.probe == 0, so it's probably not triggering any bucket probe at 
all. For consistent failures here, the config we run with should have that 
option set to 2, "use v2 probe on instantiation". 
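   
   For reference, forcing that in a test configuration would look something 
like this (assuming the `fs.s3a.bucket.probe` option named above, where 0 skips 
the probe and 2 forces the v2 check at instantiation):
   
   ```java
   Configuration conf = new Configuration();
   conf.setInt("fs.s3a.bucket.probe", 2);  // fail fast on a missing bucket
   ```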

##
File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardToolLocal.java
##
@@ -165,18 +165,19 @@ public void testImportNoFilesystem() throws Throwable {
 
   @Test
   public void testInfoBucketAndRegionNoFS() throws Throwable {
-intercept(FileNotFoundException.class,
+intercept(UnknownStoreException.class,
 () -> run(BucketInfo.NAME, "-meta",
 LOCAL_METADATA, "-region",
 "any-region", S3A_THIS_BUCKET_DOES_NOT_EXIST));
   }
 
   @Test
   public void testInitNegativeRead() throws Throwable {
-runToFailure(INVALID_ARGUMENT,
-Init.NAME, "-meta", LOCAL_METADATA, "-region",
-"eu-west-1",
-READ_FLAG, "-10");
+intercept(CommandFormat.UnknownOptionException.class,

Review comment:
   if this is triggering bucket probe failures, it's not doing the -ve arg 
check. So here we should either switch to a probe option of 0, or just cut the 
test entirely. I'm almost going for the latter, given that the way people should 
be creating tables is with read = write = 0 for on-demand use.

##
File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardToolLocal.java
##
@@ -97,7 +98,6 @@ public void testImportCommand() throws Exception {
 .getListing().size());
 assertEquals("Expected 2 items: empty directory and a parent directory", 2,
 ms.listChildren(parent).getListing().size());
-assertTrue(children.isAuthoritative());

Review comment:
   why did you cut this assertion?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1948: HADOOP-16986. s3a to not need wildfly on the classpath

2020-04-20 Thread GitBox


steveloughran commented on issue #1948:
URL: https://github.com/apache/hadoop/pull/1948#issuecomment-616535042


   Thanks!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1952: HDFS-1820. FTPFileSystem attempts to close the outputstream even when it is not initialised.

2020-04-20 Thread GitBox


hadoop-yetus commented on issue #1952:
URL: https://github.com/apache/hadoop/pull/1952#issuecomment-616555963


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m  2s |  trunk passed  |
   | -1 :x: |  compile  |  16m 59s |  root in trunk failed.  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 43s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   2m  8s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m  7s |  trunk passed  |
   | -0 :warning: |  patch  |   2m 29s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 52s |  the patch passed  |
   | -1 :x: |  compile  |  16m 29s |  root in the patch failed.  |
   | -1 :x: |  javac  |  16m 29s |  root in the patch failed.  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 27s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m  3s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   2m 18s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 14s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 51s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 107m  3s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1952/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1952 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux b5e2405d3aa0 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 82ff7bc |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1952/4/artifact/out/branch-compile-root.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1952/4/artifact/out/patch-compile-root.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1952/4/artifact/out/patch-compile-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1952/4/testReport/ |
   | Max. process+thread count | 1370 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1952/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka commented on issue #1939: YARN-10223. Duplicate jersey-test-framework-core dependency in yarn-server-common

2020-04-20 Thread GitBox


aajisaka commented on issue #1939:
URL: https://github.com/apache/hadoop/pull/1939#issuecomment-616563294


   It's intentional. I think these imports are not required.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16986) s3a to not need wildfly on the classpath

2020-04-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087749#comment-17087749
 ] 

Hudson commented on HADOOP-16986:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18166 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18166/])
HADOOP-16986. S3A to not need wildfly on the classpath. (#1948) (github: rev 
42711081e3cba5835493b5cbedc23d16dfea7667)
* (edit) 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/performance.md
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/DelegatingSSLSocketFactory.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestWildflyAndOpenSSLBinding.java
* (edit) 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/NetworkBinding.java


> s3a to not need wildfly on the classpath
> 
>
> Key: HADOOP-16986
> URL: https://issues.apache.org/jira/browse/HADOOP-16986
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
>
> see : https://github.com/apache/hadoop/pull/1948 and HADOOP-16855
> * remove a hard dependency on wildfly.jar being on the classpath for S3; it's 
> used if present, but handled if not
> * even if openssl is requested
> * and NPEs are caught and swallowed in case wildfly 1.0.4.Final ever gets on 
> the classpath again
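
A minimal sketch of the optional-binding pattern the bullets above describe, 
assuming a hypothetical helper class (the names below are illustrative, not 
the actual NetworkBinding/DelegatingSSLSocketFactory code): wildfly's 
OpenSSLProvider is looked up reflectively, and any failure to load or 
register it falls back to the JVM's default JSSE stack instead of failing.

{code:java}
import java.lang.reflect.InvocationTargetException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Illustrative only: bind to wildfly-openssl if present, else fall back. */
public final class OptionalOpenSSLBinding {
  private static final Logger LOG =
      LoggerFactory.getLogger(OptionalOpenSSLBinding.class);

  private OptionalOpenSSLBinding() {
  }

  /** @return true iff wildfly-openssl was found and registered. */
  public static boolean tryEnableOpenSSL() {
    try {
      // Reflective lookup: no hard classpath dependency on wildfly.jar.
      Class<?> provider = Class.forName("org.wildfly.openssl.OpenSSLProvider");
      provider.getMethod("register").invoke(null);
      return true;
    } catch (ClassNotFoundException | NoClassDefFoundError e) {
      LOG.debug("wildfly-openssl not on the classpath; using JSSE", e);
      return false;
    } catch (InvocationTargetException e) {
      // Some wildfly-openssl releases (e.g. 1.0.4.Final) can NPE while
      // registering; swallow the failure and fall back rather than crash.
      LOG.debug("wildfly-openssl failed to register; using JSSE", e.getCause());
      return false;
    } catch (ReflectiveOperationException | RuntimeException e) {
      LOG.debug("could not bind to wildfly-openssl; using JSSE", e);
      return false;
    }
  }
}
{code}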



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on issue #1962: HADOOP-16953. tuning s3guard disabled warnings

2020-04-20 Thread GitBox


bgaborg commented on issue #1962:
URL: https://github.com/apache/hadoop/pull/1962#issuecomment-616570277


   LGTM. It's better to have this turned off by default than to shout 
whenever it's disabled.
   Thanks for fixing this @steveloughran 
   +1



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1962: HADOOP-16953. tuning s3guard disabled warnings

2020-04-20 Thread GitBox


steveloughran commented on issue #1962:
URL: https://github.com/apache/hadoop/pull/1962#issuecomment-616572882


   thanks!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17001) The suffix name of the unified compression class

2020-04-20 Thread bianqi (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17087775#comment-17087775 ]

bianqi commented on HADOOP-17001:
-

[~liuml07] updated the patch, please review, thank you very much.

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Fix For: 3.2.2
>
> Attachments: HADOOP-17001-001.patch, HADOOP-17001-002.patch
>
>
> Unify the suffix names of the compression classes: I think the suffix names 
> in the compression classes should be extracted into a constants class, 
> which would help developers understand the structure of the compression 
> classes as a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}.
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.
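
A minimal sketch of the constants class the description argues for (the 
class name CodecConstants and the field names are illustrative assumptions, 
not necessarily what the attached patch does):

{code:java}
package org.apache.hadoop.io.compress;

/**
 * Illustrative only: gather the codec file-name suffixes scattered across
 * the compression classes into a single constants class.
 */
public final class CodecConstants {

  private CodecConstants() {
  }

  /** Default extension of {@code DefaultCodec}. */
  public static final String DEFAULT_CODEC_EXTENSION = ".deflate";

  /** Default extension of {@code GzipCodec}. */
  public static final String GZIP_CODEC_EXTENSION = ".gz";

  /** Default extension of {@code BZip2Codec}. */
  public static final String BZIP2_CODEC_EXTENSION = ".bz2";

  /** Default extension of {@code PassthroughCodec}. */
  public static final String PASSTHROUGH_CODEC_EXTENSION = ".passthrough";
}
{code}

PassthroughCodec's DEFAULT_EXTENSION would then reference 
CodecConstants.PASSTHROUGH_CODEC_EXTENSION rather than carrying its own 
string literal.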



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16953) HADOOP-16953. tune s3guard disabled warnings

2020-04-20 Thread Hudson (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-16953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17087781#comment-17087781 ]

Hudson commented on HADOOP-16953:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18167 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18167/])
HADOOP-16953. tuning s3guard disabled warnings (#1962) (github: rev 
93b662db47aa4e9bd0e2cecabddf949c0fea19f2)
* (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3guard.md
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestS3Guard.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3Guard.java


> HADOOP-16953. tune s3guard disabled warnings
> 
>
> Key: HADOOP-16953
> URL: https://issues.apache.org/jira/browse/HADOOP-16953
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.3.1
>
>
> The config option org.apache.hadoop.fs.s3a.s3guard.disabled.warn.level 
> should be fs.s3a.s3guard.disabled.warn.level. We need to fix that and add 
> the existing one as deprecated.
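
A minimal sketch of the deprecation step, using Hadoop's existing 
Configuration.addDeprecation API (the wrapper class and the "SILENT" default 
are illustrative assumptions; the real change lives in the S3A constants and 
S3Guard classes):

{code:java}
import org.apache.hadoop.conf.Configuration;

/** Illustrative only: map the old, wrongly-prefixed key to the new one. */
public final class S3GuardWarnLevelKeys {

  public static final String OLD_KEY =
      "org.apache.hadoop.fs.s3a.s3guard.disabled.warn.level";

  public static final String NEW_KEY =
      "fs.s3a.s3guard.disabled.warn.level";

  static {
    // Configuration resolves deprecated keys to their replacements, so
    // existing deployments that set the old name keep working.
    Configuration.addDeprecation(OLD_KEY, NEW_KEY);
  }

  private S3GuardWarnLevelKeys() {
  }

  public static String getWarnLevel(Configuration conf) {
    // Read via the new key; a value set under the old key is picked up
    // through the deprecation mapping above.
    return conf.get(NEW_KEY, "SILENT");
  }
}
{code}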



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1939: YARN-10223. Duplicate jersey-test-framework-core dependency in yarn-server-common

2020-04-20 Thread GitBox


steveloughran commented on issue #1939:
URL: https://github.com/apache/hadoop/pull/1939#issuecomment-616557738


   This cuts out both imports. Intentional?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16986) s3a to not need wildfly on the classpath

2020-04-20 Thread Steve Loughran (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-16986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-16986.
-
Fix Version/s: 3.3.0
 Release Note: 
hadoop-aws can use native openssl libraries for better HTTPS performance - 
consult the S3A performance document for details.

To enable this, wildfly.jar is declared as a compile-time dependency of the 
hadoop-aws module, ensuring it ends up on the classpath of the hadoop 
command line, distribution packages and downstream modules.

It is, however, still optional unless fs.s3a.ssl.channel.mode is set to 
openssl.
   Resolution: Fixed
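
A minimal sketch of opting in to the openssl channel mode named in the 
release note (the key and value come from the note itself; the surrounding 
class is illustrative):

{code:java}
import org.apache.hadoop.conf.Configuration;

/** Illustrative only: request openssl-backed TLS for S3A. */
public class S3AOpenSSLExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // With this set, wildfly-openssl must actually be on the classpath;
    // leaving the key unset keeps wildfly.jar optional.
    conf.set("fs.s3a.ssl.channel.mode", "openssl");
    // ... then create the S3A FileSystem with this conf as usual.
  }
}
{code}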

> s3a to not need wildfly on the classpath
> 
>
> Key: HADOOP-16986
> URL: https://issues.apache.org/jira/browse/HADOOP-16986
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
>
> see : https://github.com/apache/hadoop/pull/1948 and HADOOP-16855
> * remove a hard dependency on wildfly.jar being on the classpath for S3; it's 
> used if present, but handled if not
> * even if openssl is requested
> * and NPEs are caught and swallowed in case wildfly 1.0.4.Final ever gets on 
> the classpath again



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16953) HADOOP-16953. tune s3guard disabled warnings

2020-04-20 Thread Steve Loughran (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-16953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-16953:

Summary: HADOOP-16953. tune s3guard disabled warnings  (was: remove 
org.apache.hadoop off s3guard off warning config name)

> HADOOP-16953. tune s3guard disabled warnings
> 
>
> Key: HADOOP-16953
> URL: https://issues.apache.org/jira/browse/HADOOP-16953
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> The config option org.apache.hadoop.fs.s3a.s3guard.disabled.warn.level 
> should be fs.s3a.s3guard.disabled.warn.level. We need to fix that and add 
> the existing one as deprecated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16953) HADOOP-16953. tune s3guard disabled warnings

2020-04-20 Thread Steve Loughran (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-16953:

Fix Version/s: 3.3.1

> HADOOP-16953. tune s3guard disabled warnings
> 
>
> Key: HADOOP-16953
> URL: https://issues.apache.org/jira/browse/HADOOP-16953
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.3.1
>
>
> The config option org.apache.hadoop.fs.s3a.s3guard.disabled.warn.level 
> should be fs.s3a.s3guard.disabled.warn.level. We need to fix that and add 
> the existing one as deprecated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17001) The suffix name of the unified compression class

2020-04-20 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HADOOP-17001:

Attachment: HADOOP-17001-002.patch

> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Fix For: 3.2.2
>
> Attachments: HADOOP-17001-001.patch, HADOOP-17001-002.patch
>
>
> Unify the suffix names of the compression classes: I think the suffix names 
> in the compression classes should be extracted into a constants class, 
> which would help developers understand the structure of the compression 
> classes as a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}.
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org