[jira] [Commented] (HADOOP-16009) Replace the url of the repository in Apache Hadoop source code

2018-12-17 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723777#comment-16723777
 ] 

Akira Ajisaka commented on HADOOP-16009:


Do not commit this until the migration finishes.

> Replace the url of the repository in Apache Hadoop source code
> --
>
> Key: HADOOP-16009
> URL: https://issues.apache.org/jira/browse/HADOOP-16009
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16009.01.patch
>
>
> This issue is for the source code change in Apache Hadoop repository.
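The change described here amounts to a mechanical URL rewrite across the source tree. A hedged sketch of that kind of substitution (the scratch file, URLs, and sed invocation below are illustrative assumptions, not the actual patch; GNU sed is assumed for -i):

```shell
# Illustrative only: rewrite the old git-wip-us repository URL to the
# new gitbox URL in a scratch file standing in for a source file.
tmp=$(mktemp -d)
printf 'scm:https://git-wip-us.apache.org/repos/asf/hadoop.git\n' > "$tmp/snippet.txt"
# Same substitution a real patch would apply to each file referencing the repo.
sed -i 's|git-wip-us\.apache\.org/repos/asf|gitbox.apache.org/repos/asf|g' "$tmp/snippet.txt"
cat "$tmp/snippet.txt"
```

A real patch would carry this substitution through every file in the repository that references the old URL.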



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16009) Replace the url of the repository in Apache Hadoop source code

2018-12-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16009:
---
Attachment: HADOOP-16009.01.patch

> Replace the url of the repository in Apache Hadoop source code
> --
>
> Key: HADOOP-16009
> URL: https://issues.apache.org/jira/browse/HADOOP-16009
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16009.01.patch
>
>
> This issue is for the source code change in Apache Hadoop repository.






[jira] [Updated] (HADOOP-16009) Replace the url of the repository in Apache Hadoop source code

2018-12-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16009:
---
Assignee: Akira Ajisaka
  Status: Patch Available  (was: Open)

> Replace the url of the repository in Apache Hadoop source code
> --
>
> Key: HADOOP-16009
> URL: https://issues.apache.org/jira/browse/HADOOP-16009
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16009.01.patch
>
>
> This issue is for the source code change in Apache Hadoop repository.






[jira] [Updated] (HADOOP-16010) Replace the url of the repository in Apache Hadoop site

2018-12-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16010:
---
Status: Patch Available  (was: Open)

> Replace the url of the repository in Apache Hadoop site
> ---
>
> Key: HADOOP-16010
> URL: https://issues.apache.org/jira/browse/HADOOP-16010
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> This issue is for the source code change in Apache Hadoop site.
> https://github.com/apache/hadoop-site






[jira] [Commented] (HADOOP-16003) Migrate the Hadoop jenkins jobs to use new gitbox urls

2018-12-17 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723768#comment-16723768
 ] 

Akira Ajisaka commented on HADOOP-16003:


Hi [~elek], what time are you available? I'd like you to specify when the 
migration will happen, so that you can update all the Jenkins jobs as soon as 
the migration finishes.

> Migrate the Hadoop jenkins jobs to use new gitbox urls
> --
>
> Key: HADOOP-16003
> URL: https://issues.apache.org/jira/browse/HADOOP-16003
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Elek, Marton
>Priority: Major
>
> As announced by the INFRA team, all the Apache git repositories will be 
> migrated to use gitbox. I created this jira to sync on the required steps to 
> update the jenkins jobs, and to record the changes.
> By default it could be as simple as changing the git url for all the jenkins 
> jobs under the Hadoop view:
> https://builds.apache.org/view/H-L/view/Hadoop/
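For a local clone (or a job's git configuration), the switch is a one-line remote update. A minimal sketch, assuming the repository name; the temporary directory only makes the example self-contained:

```shell
# Illustrative only: point an existing remote at the new gitbox URL.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/hadoop" && cd "$tmp/hadoop"
git remote add origin https://git-wip-us.apache.org/repos/asf/hadoop.git
# The actual migration step: swap the remote URL to gitbox.
git remote set-url origin https://gitbox.apache.org/repos/asf/hadoop.git
git remote get-url origin
```

In Jenkins itself the same URL change would be made in each job's SCM configuration rather than via `git remote`.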






[jira] [Commented] (HADOOP-16008) Fix typo in CommandsManual.md

2018-12-17 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723764#comment-16723764
 ] 

Shweta commented on HADOOP-16008:
-

[~ajisakaa], thanks for the necessary change and for the commit. 

> Fix typo in CommandsManual.md
> -
>
> Key: HADOOP-16008
> URL: https://issues.apache.org/jira/browse/HADOOP-16008
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 2.9.1, 2.9.2
>Reporter: Akira Ajisaka
>Assignee: Shweta
>Priority: Minor
>  Labels: newbie
> Fix For: 2.10.0, 2.9.3
>
> Attachments: HADOOP-16008-branch-2-001.patch, HADOOP-16008.001.patch
>
>
> http://hadoop.apache.org/docs/r2.9.2/hadoop-project-dist/hadoop-common/CommandsManual.html
> {noformat}
> hdoop daemonlog -setlevel <host:port> <classname> <level> [-protocol 
> (http|https)]
> {noformat}
> hdoop should be hadoop.
> This issue was reported on the user mailing list:  
> https://lists.apache.org/thread.html/0d57c60d3242e4bd8f0401669957c251e687077bb7b7fb2725837ba4@%3Cuser.hadoop.apache.org%3E






[jira] [Commented] (HADOOP-15941) [JDK 11] Compilation failure: package com.sun.jndi.ldap is not visible

2018-12-17 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723713#comment-16723713
 ] 

Takanobu Asanuma commented on HADOOP-15941:
---

I've seen the following javadoc error since HADOOP-15950. I confirmed that the 
patch fixes it.
{noformat}
[ERROR] 
/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java:47:
 error: package com.sun.jndi.ldap is not visible
[ERROR] import com.sun.jndi.ldap.LdapCtxFactory;
[ERROR]^
[ERROR]   (package com.sun.jndi.ldap is declared in module java.naming, which 
does not export it)
[ERROR] 
[ERROR] Command line was: /usr/java/jdk-11/bin/javadoc -J-Xmx768m @options 
@packages
{noformat}
Hi [~umamaheswararao], does the 1st patch fix your problem? I also want to know 
your environment.

> [JDK 11] Compilation failure: package com.sun.jndi.ldap is not visible
> --
>
> Key: HADOOP-15941
> URL: https://issues.apache.org/jira/browse/HADOOP-15941
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Uma Maheswara Rao G
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-15941.1.patch
>
>
> With JDK 11: Compilation failed because package com.sun.jndi.ldap is not 
> visible.
>  
> {noformat}
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.apache.maven.plugins:maven-compiler-plugin:3.8.0:compile 
> (default-compile) on project hadoop-common: Compilation failure
> /C:/Users/umgangum/Work/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java:[545,23]
>  package com.sun.jndi.ldap is not visible
>  (package com.sun.jndi.ldap is declared in module java.naming, which does not 
> export it){noformat}
>  
>  






[jira] [Comment Edited] (HADOOP-16008) Fix typo in CommandsManual.md

2018-12-17 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723680#comment-16723680
 ] 

Akira Ajisaka edited comment on HADOOP-16008 at 12/18/18 4:43 AM:
--

Committed this to branch-2 and branch-2.9. Thanks [~shwetayakkali] for the 
contribution.


was (Author: ajisakaa):
Committed this to trunk. Thanks [~shwetayakkali] for the contribution.

> Fix typo in CommandsManual.md
> -
>
> Key: HADOOP-16008
> URL: https://issues.apache.org/jira/browse/HADOOP-16008
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 2.9.1, 2.9.2
>Reporter: Akira Ajisaka
>Assignee: Shweta
>Priority: Minor
>  Labels: newbie
> Fix For: 2.10.0, 2.9.3
>
> Attachments: HADOOP-16008-branch-2-001.patch, HADOOP-16008.001.patch
>
>
> http://hadoop.apache.org/docs/r2.9.2/hadoop-project-dist/hadoop-common/CommandsManual.html
> {noformat}
> hdoop daemonlog -setlevel <host:port> <classname> <level> [-protocol 
> (http|https)]
> {noformat}
> hdoop should be hadoop.
> This issue was reported on the user mailing list:  
> https://lists.apache.org/thread.html/0d57c60d3242e4bd8f0401669957c251e687077bb7b7fb2725837ba4@%3Cuser.hadoop.apache.org%3E






[jira] [Commented] (HADOOP-16008) Fix typo in CommandsManual.md

2018-12-17 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723676#comment-16723676
 ] 

Akira Ajisaka commented on HADOOP-16008:


+1

> Fix typo in CommandsManual.md
> -
>
> Key: HADOOP-16008
> URL: https://issues.apache.org/jira/browse/HADOOP-16008
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 2.9.1, 2.9.2
>Reporter: Akira Ajisaka
>Assignee: Shweta
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-16008-branch-2-001.patch, HADOOP-16008.001.patch
>
>
> http://hadoop.apache.org/docs/r2.9.2/hadoop-project-dist/hadoop-common/CommandsManual.html
> {noformat}
> hdoop daemonlog -setlevel <host:port> <classname> <level> [-protocol 
> (http|https)]
> {noformat}
> hdoop should be hadoop.
> This issue was reported on the user mailing list:  
> https://lists.apache.org/thread.html/0d57c60d3242e4bd8f0401669957c251e687077bb7b7fb2725837ba4@%3Cuser.hadoop.apache.org%3E






[jira] [Updated] (HADOOP-16008) Fix typo in CommandsManual.md

2018-12-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16008:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.3
   2.10.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~shwetayakkali] for the contribution.

> Fix typo in CommandsManual.md
> -
>
> Key: HADOOP-16008
> URL: https://issues.apache.org/jira/browse/HADOOP-16008
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 2.9.1, 2.9.2
>Reporter: Akira Ajisaka
>Assignee: Shweta
>Priority: Minor
>  Labels: newbie
> Fix For: 2.10.0, 2.9.3
>
> Attachments: HADOOP-16008-branch-2-001.patch, HADOOP-16008.001.patch
>
>
> http://hadoop.apache.org/docs/r2.9.2/hadoop-project-dist/hadoop-common/CommandsManual.html
> {noformat}
> hdoop daemonlog -setlevel <host:port> <classname> <level> [-protocol 
> (http|https)]
> {noformat}
> hdoop should be hadoop.
> This issue was reported on the user mailing list:  
> https://lists.apache.org/thread.html/0d57c60d3242e4bd8f0401669957c251e687077bb7b7fb2725837ba4@%3Cuser.hadoop.apache.org%3E






[jira] [Commented] (HADOOP-16008) Fix typo in CommandsManual.md

2018-12-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723641#comment-16723641
 ] 

Hadoop QA commented on HADOOP-16008:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
49s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:a716388 |
| JIRA Issue | HADOOP-16008 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952115/HADOOP-16008-branch-2-001.patch
 |
| Optional Tests |  dupname  asflicense  mvnsite  |
| uname | Linux 939866eda634 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / eb8b1ea |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Max. process+thread count | 66 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15664/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Fix typo in CommandsManual.md
> -
>
> Key: HADOOP-16008
> URL: https://issues.apache.org/jira/browse/HADOOP-16008
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 2.9.1, 2.9.2
>Reporter: Akira Ajisaka
>Assignee: Shweta
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-16008-branch-2-001.patch, HADOOP-16008.001.patch
>
>
> http://hadoop.apache.org/docs/r2.9.2/hadoop-project-dist/hadoop-common/CommandsManual.html
> {noformat}
> hdoop daemonlog -setlevel <host:port> <classname> <level> [-protocol 
> (http|https)]
> {noformat}
> hdoop should be hadoop.
> This issue was reported on the user mailing list:  
> https://lists.apache.org/thread.html/0d57c60d3242e4bd8f0401669957c251e687077bb7b7fb2725837ba4@%3Cuser.hadoop.apache.org%3E






[jira] [Commented] (HADOOP-16008) Fix typo in CommandsManual.md

2018-12-17 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723628#comment-16723628
 ] 

Akira Ajisaka commented on HADOOP-16008:


Thanks [~shwetayakkali] for providing the patch. Renamed the patch to run the 
precommit Jenkins job against branch-2.

> Fix typo in CommandsManual.md
> -
>
> Key: HADOOP-16008
> URL: https://issues.apache.org/jira/browse/HADOOP-16008
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 2.9.1, 2.9.2
>Reporter: Akira Ajisaka
>Assignee: Shweta
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-16008-branch-2-001.patch, HADOOP-16008.001.patch
>
>
> http://hadoop.apache.org/docs/r2.9.2/hadoop-project-dist/hadoop-common/CommandsManual.html
> {noformat}
> hdoop daemonlog -setlevel <host:port> <classname> <level> [-protocol 
> (http|https)]
> {noformat}
> hdoop should be hadoop.
> This issue was reported on the user mailing list:  
> https://lists.apache.org/thread.html/0d57c60d3242e4bd8f0401669957c251e687077bb7b7fb2725837ba4@%3Cuser.hadoop.apache.org%3E






[jira] [Updated] (HADOOP-16008) Fix typo in CommandsManual.md

2018-12-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16008:
---
Attachment: HADOOP-16008-branch-2-001.patch

> Fix typo in CommandsManual.md
> -
>
> Key: HADOOP-16008
> URL: https://issues.apache.org/jira/browse/HADOOP-16008
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 2.9.1, 2.9.2
>Reporter: Akira Ajisaka
>Assignee: Shweta
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-16008-branch-2-001.patch, HADOOP-16008.001.patch
>
>
> http://hadoop.apache.org/docs/r2.9.2/hadoop-project-dist/hadoop-common/CommandsManual.html
> {noformat}
> hdoop daemonlog -setlevel <host:port> <classname> <level> [-protocol 
> (http|https)]
> {noformat}
> hdoop should be hadoop.
> This issue was reported on the user mailing list:  
> https://lists.apache.org/thread.html/0d57c60d3242e4bd8f0401669957c251e687077bb7b7fb2725837ba4@%3Cuser.hadoop.apache.org%3E






[jira] [Updated] (HADOOP-16008) Fix typo in CommandsManual.md

2018-12-17 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HADOOP-16008:

Attachment: HADOOP-16008.001.patch
Status: Patch Available  (was: Open)

> Fix typo in CommandsManual.md
> -
>
> Key: HADOOP-16008
> URL: https://issues.apache.org/jira/browse/HADOOP-16008
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.2, 2.9.1, 2.9.0
>Reporter: Akira Ajisaka
>Assignee: Shweta
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-16008.001.patch
>
>
> http://hadoop.apache.org/docs/r2.9.2/hadoop-project-dist/hadoop-common/CommandsManual.html
> {noformat}
> hdoop daemonlog -setlevel <host:port> <classname> <level> [-protocol 
> (http|https)]
> {noformat}
> hdoop should be hadoop.
> This issue was reported on the user mailing list:  
> https://lists.apache.org/thread.html/0d57c60d3242e4bd8f0401669957c251e687077bb7b7fb2725837ba4@%3Cuser.hadoop.apache.org%3E






[jira] [Commented] (HADOOP-16008) Fix typo in CommandsManual.md

2018-12-17 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723601#comment-16723601
 ] 

Shweta commented on HADOOP-16008:
-

Posted a patch for the typo. Please review.

> Fix typo in CommandsManual.md
> -
>
> Key: HADOOP-16008
> URL: https://issues.apache.org/jira/browse/HADOOP-16008
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 2.9.1, 2.9.2
>Reporter: Akira Ajisaka
>Assignee: Shweta
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-16008.001.patch
>
>
> http://hadoop.apache.org/docs/r2.9.2/hadoop-project-dist/hadoop-common/CommandsManual.html
> {noformat}
> hdoop daemonlog -setlevel <host:port> <classname> <level> [-protocol 
> (http|https)]
> {noformat}
> hdoop should be hadoop.
> This issue was reported on the user mailing list:  
> https://lists.apache.org/thread.html/0d57c60d3242e4bd8f0401669957c251e687077bb7b7fb2725837ba4@%3Cuser.hadoop.apache.org%3E






[jira] [Commented] (HADOOP-16008) Fix typo in CommandsManual.md

2018-12-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723600#comment-16723600
 ] 

Hadoop QA commented on HADOOP-16008:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HADOOP-16008 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-16008 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952112/HADOOP-16008.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15663/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Fix typo in CommandsManual.md
> -
>
> Key: HADOOP-16008
> URL: https://issues.apache.org/jira/browse/HADOOP-16008
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 2.9.1, 2.9.2
>Reporter: Akira Ajisaka
>Assignee: Shweta
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-16008.001.patch
>
>
> http://hadoop.apache.org/docs/r2.9.2/hadoop-project-dist/hadoop-common/CommandsManual.html
> {noformat}
> hdoop daemonlog -setlevel <host:port> <classname> <level> [-protocol 
> (http|https)]
> {noformat}
> hdoop should be hadoop.
> This issue was reported on the user mailing list:  
> https://lists.apache.org/thread.html/0d57c60d3242e4bd8f0401669957c251e687077bb7b7fb2725837ba4@%3Cuser.hadoop.apache.org%3E






[jira] [Assigned] (HADOOP-16008) Fix typo in CommandsManual.md

2018-12-17 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta reassigned HADOOP-16008:
---

Assignee: Shweta

> Fix typo in CommandsManual.md
> -
>
> Key: HADOOP-16008
> URL: https://issues.apache.org/jira/browse/HADOOP-16008
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 2.9.1, 2.9.2
>Reporter: Akira Ajisaka
>Assignee: Shweta
>Priority: Minor
>  Labels: newbie
>
> http://hadoop.apache.org/docs/r2.9.2/hadoop-project-dist/hadoop-common/CommandsManual.html
> {noformat}
> hdoop daemonlog -setlevel <host:port> <classname> <level> [-protocol 
> (http|https)]
> {noformat}
> hdoop should be hadoop.
> This issue was reported on the user mailing list:  
> https://lists.apache.org/thread.html/0d57c60d3242e4bd8f0401669957c251e687077bb7b7fb2725837ba4@%3Cuser.hadoop.apache.org%3E






[jira] [Commented] (HADOOP-16003) Migrate the Hadoop jenkins jobs to use new gitbox urls

2018-12-17 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723537#comment-16723537
 ] 

Akira Ajisaka commented on HADOOP-16003:


https://issues.apache.org/jira/browse/INFRA-17448?focusedCommentId=16722158&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16722158

bq. Yes, you can either specify when you want the migration to happen, or we 
can do it ASAP. The migration only takes a few minutes.

Now we need to decide when to start the migration.

> Migrate the Hadoop jenkins jobs to use new gitbox urls
> --
>
> Key: HADOOP-16003
> URL: https://issues.apache.org/jira/browse/HADOOP-16003
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Elek, Marton
>Priority: Major
>
> As announced by the INFRA team, all the Apache git repositories will be 
> migrated to use gitbox. I created this jira to sync on the required steps to 
> update the jenkins jobs, and to record the changes.
> By default it could be as simple as changing the git url for all the jenkins 
> jobs under the Hadoop view:
> https://builds.apache.org/view/H-L/view/Hadoop/






[jira] [Commented] (HADOOP-15973) Configuration: Included properties are not cached if resource is a stream

2018-12-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723530#comment-16723530
 ] 

Hadoop QA commented on HADOOP-15973:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 56s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 5 new + 250 unchanged - 1 fixed = 255 total (was 251) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
19s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15973 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952092/HADOOP-15973.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9fb54bff21a6 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5426653 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15660/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15660/testReport/ |
| Max. process+thread count | 1375 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15660/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |

[jira] [Commented] (HADOOP-15860) ABFS: Throw IllegalArgumentException when Directory/File name ends with a period(.)

2018-12-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723499#comment-16723499
 ] 

Hadoop QA commented on HADOOP-15860:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 19m  
2s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 1 
new + 1 unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
14s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15860 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952091/HADOOP-15860.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 346c7a8faabd 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5426653 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15661/artifact/out/branch-mvninstall-root.txt
 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15661/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15661/testReport/ |
| Max. process+thread count | 442 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 

[jira] [Commented] (HADOOP-15991) testMultipartUpload timing out

2018-12-17 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16723479#comment-16723479
 ] 

lqjacklee commented on HADOOP-15991:


# what happens if you don't run with s3guard on?
 ## The configuration is off
 # what are the s3guard settings for that bucket (e.g. IO allocation)?
 ## the default one, just change the region and credentials
 # How far is that AWS region from you?
 ## PING s3.ap-south-1.amazonaws.com (52.219.66.29): 56 data bytes
64 bytes from 52.219.66.29: icmp_seq=0 ttl=32 time=513.278 ms
 # and what is your bandwidth, especially uploading?
 ## DOWNLOAD 48.58 Mbps / UPLOAD 3.46 Mbps
 

> testMultipartUpload timing out
> --
>
> Key: HADOOP-15991
> URL: https://issues.apache.org/jira/browse/HADOOP-15991
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: lqjacklee
>Assignee: lqjacklee
>Priority: Minor
>
> timeout of S3 mpu tests



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15847) S3Guard testConcurrentTableCreations to set r & w capacity == 1

2018-12-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16723477#comment-16723477
 ] 

Hadoop QA commented on HADOOP-15847:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HADOOP-15847 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15847 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952098/HADOOP-15847-002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15662/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> S3Guard testConcurrentTableCreations to set r & w capacity == 1
> ---
>
> Key: HADOOP-15847
> URL: https://issues.apache.org/jira/browse/HADOOP-15847
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15847-001.patch, HADOOP-15847-002.patch
>
>
> I just found a {{testConcurrentTableCreations}} DDB table lurking in a 
> region, presumably from an interrupted test. Luckily 
> test/resources/core-site.xml forces the r/w capacity to be 10, but it could 
> still run up bills.
> Recommend
> * explicitly set capacity = 1 for the test
> * and add comments in the testing docs about keeping cost down.
> I think we may also want to make this a scale-only test, so it's run less 
> often
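
The capacity pinning recommended above could be expressed in the test resources file the description mentions; a minimal sketch, assuming S3Guard's DynamoDB capacity properties (`fs.s3a.s3guard.ddb.table.capacity.read`/`write`) are honoured by the test's table creation:

```xml
<!-- test/resources/core-site.xml: keep any table created by
     testConcurrentTableCreations as cheap as possible -->
<configuration>
  <property>
    <name>fs.s3a.s3guard.ddb.table.capacity.read</name>
    <value>1</value>
  </property>
  <property>
    <name>fs.s3a.s3guard.ddb.table.capacity.write</name>
    <value>1</value>
  </property>
</configuration>
```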






[jira] [Commented] (HADOOP-15847) S3Guard testConcurrentTableCreations to set r & w capacity == 1

2018-12-17 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16723468#comment-16723468
 ] 

lqjacklee commented on HADOOP-15847:


[~gabor.bota] Thanks for the comment. To reduce the cost of the test case, we 
provide an option to limit the capacity.

I have changed the logic only in ITestS3GuardConcurrentOps. Please help 
review. [^HADOOP-15847-002.patch]

 

> S3Guard testConcurrentTableCreations to set r & w capacity == 1
> ---
>
> Key: HADOOP-15847
> URL: https://issues.apache.org/jira/browse/HADOOP-15847
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15847-001.patch, HADOOP-15847-002.patch
>
>
> I just found a {{testConcurrentTableCreations}} DDB table lurking in a 
> region, presumably from an interrupted test. Luckily 
> test/resources/core-site.xml forces the r/w capacity to be 10, but it could 
> still run up bills.
> Recommend
> * explicitly set capacity = 1 for the test
> * and add comments in the testing docs about keeping cost down.
> I think we may also want to make this a scale-only test, so it's run less 
> often






[jira] [Updated] (HADOOP-15847) S3Guard testConcurrentTableCreations to set r & w capacity == 1

2018-12-17 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15847:
---
Attachment: HADOOP-15847-002.patch

> S3Guard testConcurrentTableCreations to set r & w capacity == 1
> ---
>
> Key: HADOOP-15847
> URL: https://issues.apache.org/jira/browse/HADOOP-15847
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15847-001.patch, HADOOP-15847-002.patch
>
>
> I just found a {{testConcurrentTableCreations}} DDB table lurking in a 
> region, presumably from an interrupted test. Luckily 
> test/resources/core-site.xml forces the r/w capacity to be 10, but it could 
> still run up bills.
> Recommend
> * explicitly set capacity = 1 for the test
> * and add comments in the testing docs about keeping cost down.
> I think we may also want to make this a scale-only test, so it's run less 
> often






[jira] [Assigned] (HADOOP-15991) testMultipartUpload timing out

2018-12-17 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee reassigned HADOOP-15991:
--

Assignee: lqjacklee

> testMultipartUpload timing out
> --
>
> Key: HADOOP-15991
> URL: https://issues.apache.org/jira/browse/HADOOP-15991
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: lqjacklee
>Assignee: lqjacklee
>Priority: Minor
>
> timeout of S3 mpu tests






[jira] [Updated] (HADOOP-15860) ABFS: Throw IllegalArgumentException when Directory/File name ends with a period(.)

2018-12-17 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HADOOP-15860:

Attachment: HADOOP-15860.002.patch
Status: Patch Available  (was: Open)

Thanks for the review and the suggestions [~mackrorysd] . As suggested above, I 
have added the assertTrue(flag) in the patch for the times when the test 
doesn't perform as expected. I ran the ABFS tests locally and they pass for 
this patch. 

Please review and suggest changes. Thanks.

> ABFS: Throw IllegalArgumentException when Directory/File name ends with a 
> period(.)
> ---
>
> Key: HADOOP-15860
> URL: https://issues.apache.org/jira/browse/HADOOP-15860
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Shweta
>Priority: Major
> Attachments: HADOOP-15860.001.patch, HADOOP-15860.002.patch, 
> trailing-periods.patch
>
>
> If you create a directory with a trailing period (e.g. '/test.') the period 
> is silently dropped, and will be listed as simply '/test'. '/test.test' 
> appears to work just fine.






[jira] [Commented] (HADOOP-15973) Configuration: Included properties are not cached if resource is a stream

2018-12-17 Thread Eric Payne (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16723447#comment-16723447
 ] 

Eric Payne commented on HADOOP-15973:
-

Attaching patch 002. This patch invokes a new parser when processing includes 
rather than loading a resource.

This should also fix HADOOP-16007.

> Configuration: Included properties are not cached if resource is a stream
> -
>
> Key: HADOOP-15973
> URL: https://issues.apache.org/jira/browse/HADOOP-15973
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Critical
> Attachments: HADOOP-15973.001.patch, HADOOP-15973.002.patch
>
>
> If a configuration resource is a bufferedinputstream and the resource has an 
> included xml file, the properties from the included file are read and stored 
> in the properties of the configuration, but they are not stored in the 
> resource cache. So, if a later resource is added to the config and the 
> properties are recalculated from the first resource, the included properties 
> are lost.
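
The kind of resource that triggers this is an XInclude reference, which Hadoop's Configuration parser supports; a minimal sketch of the scenario (file names hypothetical):

```xml
<!-- main.xml: loaded as a stream, e.g. conf.addResource(inputStream) -->
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
  <property>
    <name>a.b.c</name>
    <value>from-main</value>
  </property>
  <!-- properties pulled in here are parsed into the live Configuration,
       but (before the fix) not written into the cached copy of this
       resource -->
  <xi:include href="included.xml"/>
</configuration>
```

When a second resource is added later and the first resource is re-read from the cache, the properties that came from included.xml are gone, which is the bug this patch addresses.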






[jira] [Updated] (HADOOP-15973) Configuration: Included properties are not cached if resource is a stream

2018-12-17 Thread Eric Payne (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated HADOOP-15973:

Attachment: HADOOP-15973.002.patch

> Configuration: Included properties are not cached if resource is a stream
> -
>
> Key: HADOOP-15973
> URL: https://issues.apache.org/jira/browse/HADOOP-15973
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Critical
> Attachments: HADOOP-15973.001.patch, HADOOP-15973.002.patch
>
>
> If a configuration resource is a bufferedinputstream and the resource has an 
> included xml file, the properties from the included file are read and stored 
> in the properties of the configuration, but they are not stored in the 
> resource cache. So, if a later resource is added to the config and the 
> properties are recalculated from the first resource, the included properties 
> are lost.






[jira] [Comment Edited] (HADOOP-15860) ABFS: Throw IllegalArgumentException when Directory/File name ends with a period(.)

2018-12-17 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16723443#comment-16723443
 ] 

Shweta edited comment on HADOOP-15860 at 12/17/18 10:54 PM:


Thanks for the review and the suggestions [~mackrorysd] . As suggested above, I 
have added the assertTrue with flag in the patch for the times when the test 
doesn't perform as expected. I ran the ABFS tests locally and they pass for 
this patch. 

Please review and suggest changes. Thanks.


was (Author: shwetayakkali):
Thanks for the review and the suggestions [~mackrorysd] . As suggested above, I 
have added the assertTrue(flag) in the patch for the times when the test 
doesn't perform as expected. I ran the ABFS tests locally and they pass for 
this patch. 

Please review and suggest changes. Thanks.

> ABFS: Throw IllegalArgumentException when Directory/File name ends with a 
> period(.)
> ---
>
> Key: HADOOP-15860
> URL: https://issues.apache.org/jira/browse/HADOOP-15860
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Shweta
>Priority: Major
> Attachments: HADOOP-15860.001.patch, HADOOP-15860.002.patch, 
> trailing-periods.patch
>
>
> If you create a directory with a trailing period (e.g. '/test.') the period 
> is silently dropped, and will be listed as simply '/test'. '/test.test' 
> appears to work just fine.






[jira] [Commented] (HADOOP-15229) Add FileSystem builder-based openFile() API to match createFile()

2018-12-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16723400#comment-16723400
 ] 

Hadoop QA commented on HADOOP-15229:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 22 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
55s{color} | {color:green} root generated 0 new + 1488 unchanged - 2 fixed = 
1488 total (was 1490) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 39s{color} | {color:orange} root: The patch generated 22 new + 1141 
unchanged - 2 fixed = 1163 total (was 1143) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 96 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
57s{color} | {color:red} hadoop-common-project_hadoop-common generated 2 new + 
0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-tools_hadoop-aws generated 1 new + 1 unchanged 
- 0 fixed = 2 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
25s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
50s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
27s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
39s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m 

[jira] [Commented] (HADOOP-15364) Add support for S3 Select to S3A

2018-12-17 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16723292#comment-16723292
 ] 

Steve Loughran commented on HADOOP-15364:
-

There is a patch for this in HADOOP-15229. Reviews and testing are encouraged. 

Put differently: this is your chance to provide constructive feedback on the 
design. 

> Add support for S3 Select to S3A
> 
>
> Key: HADOOP-15364
> URL: https://issues.apache.org/jira/browse/HADOOP-15364
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15364-001.patch, HADOOP-15364-002.patch, 
> HADOOP-15364-004.patch
>
>
> Expect a PoC patch for this in a couple of days; 
> * it'll depend on an SDK update to work, plus a couple of of other minor 
> changes
> * Adds command line option too 
> {code}
> hadoop s3guard select -header use -compression gzip -limit 100 
> s3a://landsat-pds/scene_list.gz" \
> "SELECT s.entityId FROM S3OBJECT s WHERE s.cloudCover = '0.0' "
> {code}
> For wider use we'll need to implement the HADOOP-15229 so that callers can 
> pass down the expression along with any other parameters






[jira] [Assigned] (HADOOP-15364) Add support for S3 Select to S3A

2018-12-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-15364:
---

Assignee: Steve Loughran

> Add support for S3 Select to S3A
> 
>
> Key: HADOOP-15364
> URL: https://issues.apache.org/jira/browse/HADOOP-15364
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15364-001.patch, HADOOP-15364-002.patch, 
> HADOOP-15364-004.patch
>
>
> Expect a PoC patch for this in a couple of days; 
> * it'll depend on an SDK update to work, plus a couple of of other minor 
> changes
> * Adds command line option too 
> {code}
> hadoop s3guard select -header use -compression gzip -limit 100 
> s3a://landsat-pds/scene_list.gz" \
> "SELECT s.entityId FROM S3OBJECT s WHERE s.cloudCover = '0.0' "
> {code}
> For wider use we'll need to implement the HADOOP-15229 so that callers can 
> pass down the expression along with any other parameters






[jira] [Commented] (HADOOP-15229) Add FileSystem builder-based openFile() API to match createFile()

2018-12-17 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16723291#comment-16723291
 ] 

Steve Loughran commented on HADOOP-15229:
-

also: the per-file encryption settings complicate merging in the HADOOP-14556 
patch. I'm going to pull them from this patch until that's in:

h3. If you want this patch in, review the HADOOP-14556 patch too

> Add FileSystem builder-based openFile() API to match createFile()
> -
>
> Key: HADOOP-15229
> URL: https://issues.apache.org/jira/browse/HADOOP-15229
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15229-001.patch, HADOOP-15229-002.patch, 
> HADOOP-15229-003.patch, HADOOP-15229-004.patch, HADOOP-15229-004.patch, 
> HADOOP-15229-005.patch, HADOOP-15229-006.patch, HADOOP-15229-007.patch, 
> HADOOP-15229-009.patch, HADOOP-15229-010.patch, HADOOP-15229-011.patch
>
>
> Replicate HDFS-1170 and HADOOP-14365 with an API to open files.
> A key requirement of this is not HDFS, it's to put in the fadvise policy for 
> working with object stores, where getting the decision to do a full GET and 
> TCP abort on seek vs smaller GETs is fundamentally different: the wrong 
> option can cost you minutes. S3A and Azure both have adaptive policies now 
> (first backward seek), but they still don't do it that well.
> Columnar formats (ORC, Parquet) should be able to say "fs.input.fadvise" 
> "random" as an option when they open files; I can imagine other options too.
> The Builder model of [~eddyxu] is the one to mimic, method for method. 
> Ideally with as much code reuse as possible
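
The builder model described above can be illustrated with a small stand-alone sketch: options are collected with opt() and only interpreted when build() runs. All class and method names here are illustrative only, not the actual Hadoop API.

```java
// Stand-alone illustration of a builder-based openFile(): hints such as
// "fs.input.fadvise" are accumulated and only examined at build() time.
import java.util.HashMap;
import java.util.Map;

public class OpenFileBuilderSketch {

    public static class FutureOpenFileBuilder {
        private final String path;
        private final Map<String, String> options = new HashMap<>();

        public FutureOpenFileBuilder(String path) {
            this.path = path;
        }

        // opt(): a hint the store may ignore (a hypothetical must() variant
        // would instead fail on keys the store does not recognise)
        public FutureOpenFileBuilder opt(String key, String value) {
            options.put(key, value);
            return this;
        }

        // build(): the single point where the store inspects the collected
        // options; here it just reports the chosen fadvise policy
        public String build() {
            return path + " fadvise="
                    + options.getOrDefault("fs.input.fadvise", "normal");
        }
    }

    public static void main(String[] args) {
        // a columnar reader would ask for random IO before opening
        String opened = new FutureOpenFileBuilder("s3a://bucket/data.parquet")
                .opt("fs.input.fadvise", "random")
                .build();
        System.out.println(opened);
    }
}
```

The point of deferring interpretation to build() is that stores with different capabilities (HDFS, S3A, ABFS) can each honour or ignore the same hints without changing caller code.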






[jira] [Updated] (HADOOP-15229) Add FileSystem builder-based openFile() API to match createFile()

2018-12-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15229:

Status: Patch Available  (was: Open)

HADOOP-15229 patch 011

* switch to the simpler fs.s3a. options everywhere; docs updated to recommend 
this (and mandate either SCHEMA. or fs.SCHEMA.)
* slight cleanup of SelectConstants.
* s3a input streams also support fs.s3a.encryption.key for both select and open 
options (untested, yet)
* input format (fs.s3a.select.input.format) and output format 
(fs.s3a.select.output.format) can be specified; currently only CSV is allowed 
(validated in a test).
The default is still CSV though...it may make sense to mandate the spec
* setting tests up to test the S3Guard CLI tool, but nothing implemented there 
yet.
* new test of the line record reader against the Landsat .gz file

Tested: S3 Ireland. 

Fun feature: the LineRecordReader doesn't work, because the codecs 
automatically map the .gz filename to GZipDecompressor, which breaks on the 
CSV-formatted text coming in.

Next actions

# people need to review this. I am trying to define a new API for filesystem 
interaction, the first async one: early feedback matters. 
# plan to add JSON and stop there; that will force a rework of the current s3 
binding code.
# and a couple of tests for the S3Guard tool, which will need to work with this 
too (add a --inputformat option, etc.)

I don't know what to do about the Landsat gz failure. The logic for binding 
decompressors is more than just a simple "edit the config", as the service 
loader mechanism is the main way compression codecs are found. You'd need to 
implement a new dummy decompressor which registered support for .gz files but 
really just passed the text through. That is a fairly major piece of work which 
I don't intend to do. I think I'll give up at that point and say "you'll need a 
better record reader for this world".
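
For reference, if someone did write such a passthrough codec, the configuration side would look roughly like this; the class name is hypothetical (no such codec exists at the time of this patch), and as noted above the config alone is not enough, since codecs are primarily discovered via the service loader:

```xml
<!-- Hypothetical: register an identity codec that claims the .gz
     extension but passes S3 Select's already-decompressed text through.
     org.example.io.PassthroughGzipCodec would still need to be written. -->
<property>
  <name>io.compression.codecs</name>
  <value>org.example.io.PassthroughGzipCodec</value>
</property>
```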

{code}
[ERROR] Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 10.419 
s <<< FAILURE! - in org.apache.hadoop.fs.s3a.select.ITestS3SelectLandsat
[ERROR] 
testReadLandsatRecords(org.apache.hadoop.fs.s3a.select.ITestS3SelectLandsat)  
Time elapsed: 1.436 s  <<< ERROR!
java.io.IOException: not a gzip file
at 
org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.processBasicHeader(BuiltInGzipDecompressor.java:496)
at 
org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.executeHeaderState(BuiltInGzipDecompressor.java:257)
at 
org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.decompress(BuiltInGzipDecompressor.java:186)
at 
org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:111)
at 
org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:105)
at java.io.InputStream.read(InputStream.java:101)
at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:179)
at org.apache.hadoop.util.LineReader.readCustomLine(LineReader.java:303)
at org.apache.hadoop.util.LineReader.readLine(LineReader.java:171)
at 
org.apache.hadoop.mapreduce.lib.input.LineRecordReader.skipUtfByteOrderMark(LineRecordReader.java:158)
at 
org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:198)
at 
org.apache.hadoop.fs.s3a.select.AbstractS3SelectTest.readRecords(AbstractS3SelectTest.java:391)
at 
org.apache.hadoop.fs.s3a.select.ITestS3SelectLandsat.testReadLandsatRecords(ITestS3SelectLandsat.java:193)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:745)

[INFO] Running org.apache.hadoop.fs.s3a.select.ITestS3Select
{code}
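The "not a gzip file" error above is {{BuiltInGzipDecompressor.processBasicHeader}} rejecting a stream whose first two bytes are not the RFC 1952 gzip magic (0x1f 0x8b). A minimal, self-contained sketch of that pre-check (the class name {{GzipMagicCheck}} is illustrative, not part of Hadoop):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class GzipMagicCheck {
    // RFC 1952: a gzip member starts with the two magic bytes 0x1f 0x8b.
    static boolean looksLikeGzip(InputStream in) throws IOException {
        int b1 = in.read();
        int b2 = in.read();
        return b1 == 0x1f && b2 == 0x8b;
    }

    public static void main(String[] args) throws IOException {
        byte[] gzipHeader = {(byte) 0x1f, (byte) 0x8b, 0x08};
        byte[] plainText = "not compressed".getBytes("US-ASCII");
        System.out.println(looksLikeGzip(new ByteArrayInputStream(gzipHeader)));
        System.out.println(looksLikeGzip(new ByteArrayInputStream(plainText)));
    }
}
```

Running such a check before handing data to a gzip codec turns this mid-read IOException into an up-front diagnostic that the object fetched (here, via S3 Select) was not gzip-compressed.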


[jira] [Updated] (HADOOP-15229) Add FileSystem builder-based openFile() API to match createFile()

2018-12-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15229:

Attachment: HADOOP-15229-011.patch

> Add FileSystem builder-based openFile() API to match createFile()
> -
>
> Key: HADOOP-15229
> URL: https://issues.apache.org/jira/browse/HADOOP-15229
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15229-001.patch, HADOOP-15229-002.patch, 
> HADOOP-15229-003.patch, HADOOP-15229-004.patch, HADOOP-15229-004.patch, 
> HADOOP-15229-005.patch, HADOOP-15229-006.patch, HADOOP-15229-007.patch, 
> HADOOP-15229-009.patch, HADOOP-15229-010.patch, HADOOP-15229-011.patch
>
>
> Replicate HDFS-1170 and HADOOP-14365 with an API to open files.
> The key requirement here is not HDFS: it's passing in the fadvise policy for 
> working with object stores, where the choice between a full GET with a TCP 
> abort on seek vs smaller GETs is fundamentally different: the wrong option 
> can cost you minutes. S3A and Azure both have adaptive policies now (switch 
> on the first backward seek), but they still don't do it that well.
> Columnar formats (ORC, Parquet) should be able to say "fs.input.fadvise" 
> "random" as an option when they open files; I can imagine other options too.
> The Builder model of [~eddyxu] is the one to mimic, method for method. 
> Ideally with as much code reuse as possible
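The createFile()-style builder model mentioned above can be sketched with a small self-contained mimic: callers chain optional hints (such as an fadvise policy) and only {{build()}} commits the open. The names here ({{OpenFileBuilder}}, {{opt}}) follow the general shape of the builder pattern being proposed but are illustrative, not the actual Hadoop API.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal mimic of a builder-based open-file API.
public class OpenFileBuilder {
    private final String path;
    private final Map<String, String> options = new HashMap<>();

    public OpenFileBuilder(String path) {
        this.path = path;
    }

    // opt(): a hint the filesystem may ignore, mirroring createFile()'s model.
    public OpenFileBuilder opt(String key, String value) {
        options.put(key, value);
        return this;
    }

    // A real implementation would return a stream (or a future of one);
    // here we just describe what would be opened and with which hints.
    public String build() {
        return "open " + path + " with " + options;
    }

    public Map<String, String> getOptions() {
        return options;
    }

    public static void main(String[] args) {
        OpenFileBuilder b = new OpenFileBuilder("s3a://bucket/data.orc")
            .opt("fs.input.fadvise", "random");
        System.out.println(b.build());
    }
}
```

With this shape, a columnar reader like ORC or Parquet could request "random" read behaviour at open time without the filesystem-specific knowledge living in the format code.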



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15229) Add FileSystem builder-based openFile() API to match createFile()

2018-12-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15229:

Status: Open  (was: Patch Available)

> Add FileSystem builder-based openFile() API to match createFile()
> -
>
> Key: HADOOP-15229
> URL: https://issues.apache.org/jira/browse/HADOOP-15229
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15229-001.patch, HADOOP-15229-002.patch, 
> HADOOP-15229-003.patch, HADOOP-15229-004.patch, HADOOP-15229-004.patch, 
> HADOOP-15229-005.patch, HADOOP-15229-006.patch, HADOOP-15229-007.patch, 
> HADOOP-15229-009.patch, HADOOP-15229-010.patch
>
>
> Replicate HDFS-1170 and HADOOP-14365 with an API to open files.
> The key requirement here is not HDFS: it's passing in the fadvise policy for 
> working with object stores, where the choice between a full GET with a TCP 
> abort on seek vs smaller GETs is fundamentally different: the wrong option 
> can cost you minutes. S3A and Azure both have adaptive policies now (switch 
> on the first backward seek), but they still don't do it that well.
> Columnar formats (ORC, Parquet) should be able to say "fs.input.fadvise" 
> "random" as an option when they open files; I can imagine other options too.
> The Builder model of [~eddyxu] is the one to mimic, method for method. 
> Ideally with as much code reuse as possible



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16001) ZKDelegationTokenSecretManager should use KerberosName#getShortName to get the user name for ZK ACL

2018-12-17 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HADOOP-16001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723204#comment-16723204
 ] 

Íñigo Goiri edited comment on HADOOP-16001 at 12/17/18 5:45 PM:


Is the unit test already covering this?
Not very familiar with {{KerberosName}} but it looks like it provides more 
functionality than just taking the part before the @.

We should try to test this with HDFS and YARN too.


was (Author: elgoiri):
Is the unit test already covering this?
Not very familiar with {{KerberosName}} but it looks like it provides more 
functionality than just taking the part before the @.

> ZKDelegationTokenSecretManager should use KerberosName#getShortName to get 
> the user name for ZK ACL
> ---
>
> Key: HADOOP-16001
> URL: https://issues.apache.org/jira/browse/HADOOP-16001
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Shubham Dewan
>Assignee: Shubham Dewan
>Priority: Blocker
> Attachments: HDFS-14136.001.patch, image-2018-12-10-17-54-33-361.png
>
>
> !image-2018-12-10-17-54-33-361.png!
> ZKDelegationTokenSecretManager uses only the first part of the principal to 
> set the znode ACLs.
> We can use the *{{KerberosName#getShortName()}}* method to get the short name 
> based upon the rules configured in *{{hadoop.security.auth_to_local}}* and 
> use that when setting the ACL.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16001) ZKDelegationTokenSecretManager should use KerberosName#getShortName to get the user name for ZK ACL

2018-12-17 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HADOOP-16001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723204#comment-16723204
 ] 

Íñigo Goiri commented on HADOOP-16001:
--

Is the unit test already covering this?
Not very familiar with {{KerberosName}} but it looks like it provides more 
functionality than just taking the part before the @.
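The distinction raised here — naive "part before the @" versus rule-based mapping — can be illustrated with a simplified sketch. The rule below is a toy stand-in for a {{hadoop.security.auth_to_local}} rule, not the real {{KerberosName}} parser; names and the FOO.COM realm are hypothetical.

```java
public class ShortNameDemo {
    // Naive approach: just strip the realm.
    static String naiveShortName(String principal) {
        int at = principal.indexOf('@');
        return at < 0 ? principal : principal.substring(0, at);
    }

    // Toy rule: map host-style NameNode principals in FOO.COM to a fixed
    // service user, which a simple split on '@' cannot do.
    // (Illustrative only, not KerberosName.)
    static String ruleBasedShortName(String principal) {
        if (principal.matches("nn/.*@FOO\\.COM")) {
            return "hdfs";
        }
        return naiveShortName(principal);
    }

    public static void main(String[] args) {
        System.out.println(naiveShortName("nn/host1@FOO.COM"));
        System.out.println(ruleBasedShortName("nn/host1@FOO.COM"));
    }
}
```

The point of using {{KerberosName#getShortName()}} in ZKDelegationTokenSecretManager is exactly this gap: the configured rules can collapse per-host service principals to one user, so the znode ACL matches what the rest of Hadoop resolves.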

> ZKDelegationTokenSecretManager should use KerberosName#getShortName to get 
> the user name for ZK ACL
> ---
>
> Key: HADOOP-16001
> URL: https://issues.apache.org/jira/browse/HADOOP-16001
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Shubham Dewan
>Assignee: Shubham Dewan
>Priority: Blocker
> Attachments: HDFS-14136.001.patch, image-2018-12-10-17-54-33-361.png
>
>
> !image-2018-12-10-17-54-33-361.png!
> ZKDelegationTokenSecretManager uses only the first part of the principal to 
> set the znode ACLs.
> We can use the *{{KerberosName#getShortName()}}* method to get the short name 
> based upon the rules configured in *{{hadoop.security.auth_to_local}}* and 
> use that when setting the ACL.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16007) Order of property settings is incorrect when includes are processed

2018-12-17 Thread Eric Payne (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne reassigned HADOOP-16007:
---

Assignee: Eric Payne

> Order of property settings is incorrect when includes are processed
> ---
>
> Key: HADOOP-16007
> URL: https://issues.apache.org/jira/browse/HADOOP-16007
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.2.0, 3.1.1, 3.0.4
>Reporter: Jason Lowe
>Assignee: Eric Payne
>Priority: Blocker
>
> If a configuration file contains a setting for a property then later includes 
> another file that also sets that property to a different value then the 
> property will be parsed incorrectly. For example, consider the following 
> configuration file:
> {noformat}
> <configuration xmlns:xi="http://www.w3.org/2001/XInclude">
>   <property>
>     <name>myprop</name>
>     <value>val1</value>
>   </property>
>   <xi:include href="/some/other/file.xml"/>
> </configuration>
> {noformat}
> with the contents of /some/other/file.xml as:
> {noformat}
> <property>
>   <name>myprop</name>
>   <value>val2</value>
> </property>
> {noformat}
> Parsing this configuration should result in myprop=val2, but it actually 
> results in myprop=val1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15847) S3Guard testConcurrentTableCreations to set r & w capacity == 1

2018-12-17 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723108#comment-16723108
 ] 

Gabor Bota commented on HADOOP-15847:
-

Hey [~Jack-Lee], thanks for the patch!

The exception you got means you were throttled by AWS; please re-run the test 
and you should be fine after that.

Looking at your patch, you've made the modification in {{S3AScaleTestBase}}, 
but I'm afraid this issue is about changing the read capacity only in 
{{ITestS3GuardConcurrentOps}}. Could you clarify how a change in 
{{S3AScaleTestBase}} will affect {{ITestS3GuardConcurrentOps}}?

> S3Guard testConcurrentTableCreations to set r & w capacity == 1
> ---
>
> Key: HADOOP-15847
> URL: https://issues.apache.org/jira/browse/HADOOP-15847
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15847-001.patch
>
>
> I just found a {{testConcurrentTableCreations}} DDB table lurking in a 
> region, presumably from an interrupted test. Luckily 
> test/resources/core-site.xml forces the r/w capacity to be 10, but it could 
> still run up bills.
> Recommend
> * explicitly set capacity = 1 for the test
> * and add comments in the testing docs about keeping cost down.
> I think we may also want to make this a scale-only test, so it's run less 
> often



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-12-17 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723105#comment-16723105
 ] 

Adam Antal commented on HADOOP-15819:
-

Thanks, [~gabor.bota]. I uploaded patch [^HADOOP-15819.002.patch] (removed the 
unused method).

> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Adam Antal
>Priority: Critical
> Attachments: HADOOP-15819.000.patch, HADOOP-15819.001.patch, 
> HADOOP-15819.002.patch, S3ACloseEnforcedFileSystem.java, 
> S3ACloseEnforcedFileSystem.java, closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest)
>   Time elapsed: 0.023 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob
> [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob)  
> Time elapsed: 0.039 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory
> [ERROR] 
> testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory)  
> Time elapsed: 0.014 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> {noformat}
> The big issue is that the tests are running in a serial manner - no test is 
> running on top of the other - so we should not see that the tests are failing 
> like this. The issue could be in how we handle 
> org.apache.hadoop.fs.FileSystem#CACHE - the tests should use the same 
> S3AFileSystem so if A test uses a FileSystem and closes it in teardown then B 
> test will get the same FileSystem object from the cache and try to use it, 
> but it is closed.
> We see this a lot in our downstream testing too. It's not possible to tell 
> whether a failed regression test is an implementation issue in the runtime 
> code or a test implementation problem. 
> I've checked when and what closes the S3AFileSystem with a slightly modified 
> version of S3AFileSystem which logs the closers of the fs if an error 
> occurs. I'll attach this modified java file for reference. See the next 
> example of the result when it's running:
> {noformat}
> 2018-10-04 00:52:25,596 [Thread-4201] ERROR s3a.S3ACloseEnforcedFileSystem 
> (S3ACloseEnforcedFileSystem.java:checkIfClosed(74)) - Use after close(): 
> java.lang.RuntimeException: Using closed FS!.
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.checkIfClosed(S3ACloseEnforcedFileSystem.java:73)
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.mkdirs(S3ACloseEnforcedFileSystem.java:474)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:193)

[jira] [Updated] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-12-17 Thread Adam Antal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated HADOOP-15819:

Attachment: HADOOP-15819.002.patch

> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Adam Antal
>Priority: Critical
> Attachments: HADOOP-15819.000.patch, HADOOP-15819.001.patch, 
> HADOOP-15819.002.patch, S3ACloseEnforcedFileSystem.java, 
> S3ACloseEnforcedFileSystem.java, closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest)
>   Time elapsed: 0.023 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob
> [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob)  
> Time elapsed: 0.039 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory
> [ERROR] 
> testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory)  
> Time elapsed: 0.014 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> {noformat}
> The big issue is that the tests are running in a serial manner - no test is 
> running on top of the other - so we should not see that the tests are failing 
> like this. The issue could be in how we handle 
> org.apache.hadoop.fs.FileSystem#CACHE - the tests should use the same 
> S3AFileSystem so if A test uses a FileSystem and closes it in teardown then B 
> test will get the same FileSystem object from the cache and try to use it, 
> but it is closed.
> We see this a lot in our downstream testing too. It's not possible to tell 
> whether a failed regression test is an implementation issue in the runtime 
> code or a test implementation problem. 
> I've checked when and what closes the S3AFileSystem with a slightly modified 
> version of S3AFileSystem which logs the closers of the fs if an error 
> occurs. I'll attach this modified java file for reference. See the next 
> example of the result when it's running:
> {noformat}
> 2018-10-04 00:52:25,596 [Thread-4201] ERROR s3a.S3ACloseEnforcedFileSystem 
> (S3ACloseEnforcedFileSystem.java:checkIfClosed(74)) - Use after close(): 
> java.lang.RuntimeException: Using closed FS!.
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.checkIfClosed(S3ACloseEnforcedFileSystem.java:73)
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.mkdirs(S3ACloseEnforcedFileSystem.java:474)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:193)
>   at 
> org.apache.hadoop.fs.s3a.ITestS3AClosedFS.setup(ITestS3AClosedFS.java:40)
>   
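The FileSystem#CACHE interaction described in this report can be reproduced with a self-contained mimic: two "tests" share one cached instance, so the first teardown's close() breaks the second user. {{CachedFs}} is illustrative, not the Hadoop class.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class FsCacheDemo {
    static class CachedFs {
        private boolean closed;

        void mkdirs(String path) throws IOException {
            if (closed) {
                throw new IOException("FileSystem is closed!");
            }
        }

        void close() {
            closed = true;
        }
    }

    // Mimics FileSystem#CACHE: one instance per URI, shared by all callers.
    private static final Map<String, CachedFs> CACHE = new HashMap<>();

    static CachedFs get(String uri) {
        return CACHE.computeIfAbsent(uri, u -> new CachedFs());
    }

    public static void main(String[] args) {
        CachedFs a = get("s3a://bucket");  // test A acquires the fs
        a.close();                          // test A teardown closes it
        CachedFs b = get("s3a://bucket");  // test B gets the SAME object
        try {
            b.mkdirs("/work");
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

This is why serial runs still fail: teardown in one test poisons the cached instance every later test receives. FileSystem.newInstance() (which bypasses the cache) or disabling caching for the test FS scheme are the usual workarounds.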

[jira] [Commented] (HADOOP-16007) Order of property settings is incorrect when includes are processed

2018-12-17 Thread Jason Lowe (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723015#comment-16723015
 ] 

Jason Lowe commented on HADOOP-16007:
-

The behavior will only be noticed if an included resource overrides a 
previously set property from the same resource doing the include.  If the 
include was overriding a value from a previously parsed resource (like 
core-default.xml) then the problem does not manifest.

The parser directly sets the included properties on the conf as a side-effect 
of parsing but the non-included properties are returned as a parse result and 
those results are iterated to set them.  The sideband processing of includes 
effectively reverses the order in which properties are processed if the 
xinclude appears after the property setting in the original resource.

Here's the simple code I used to test it:
{code:title=testconf.java}
import org.apache.hadoop.conf.Configuration;

class testconf {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        System.out.println("myconf = " + conf.get("myconf"));
    }
}
{code}

Using this sample code with a core-site.xml and included file set up as 
described in the JIRA description, the following shows what I get at two 
adjacent commits on the trunk line:
{noformat}
$ git log -1
commit f51da9c4d1423c2ac92eb4f40e973264e7e968cc
Author: Andrew Wang 
Date:   Mon Jul 2 18:31:21 2018 +0200

HADOOP-15554. Improve JIT performance for Configuration parsing. 
Contributed by Todd Lipcon.
$ mvn clean && mvn install -Pdist -DskipTests -DskipShade -Dmaven.javadoc.skip 
-am -pl :hadoop-common
[...]
$ java -cp 
"hadoop/testconf:hadoop/apache/hadoop/hadoop-common-project/hadoop-common/target/hadoop-common-3.2.0-SNAPSHOT/share/hadoop/common/*:hadoop/apache/hadoop/hadoop-common-project/hadoop-common/target/hadoop-common-3.2.0-SNAPSHOT/share/hadoop/common/lib/*:."
 testconf
myconf = val1
{noformat}
So the above shows the broken behavior.  core-site.xml set myconf to val1 then 
xincluded another file which set it to val2, yet the property acts as if the 
xinclude occurred at the top of core-site.xml.  Moving one commit earlier in 
time shows the expected behavior:
{noformat}
$ git checkout HEAD~1
Previous HEAD position was f51da9c... HADOOP-15554. Improve JIT performance for 
Configuration parsing. Contributed by Todd Lipcon.
HEAD is now at 5d748bd... HDFS-13702. Remove HTrace hooks from DFSClient to 
reduce CPU usage. Contributed by Todd Lipcon.
$ mvn clean && mvn install -Pdist -DskipTests -DskipShade -Dmaven.javadoc.skip 
-am -pl :hadoop-common
[...]
$ java -cp 
"hadoop/testconf:hadoop/apache/hadoop/hadoop-common-project/hadoop-common/target/hadoop-common-3.2.0-SNAPSHOT/share/hadoop/common/*:hadoop/apache/hadoop/hadoop-common-project/hadoop-common/target/hadoop-common-3.2.0-SNAPSHOT/share/hadoop/common/lib/*:."
 testconf
myconf = val2
{noformat}
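The document-order behaviour Jason describes — the included value should win because the xinclude appears after the property — can be reproduced with the JDK's XInclude-aware parser. This is a hedged standalone sketch, not Hadoop's Configuration parser; it builds the two files from the JIRA description in a temp directory and applies a simple "last writer wins" walk.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class XIncludeOrderDemo {
    // Parses the main file with XInclude expansion and walks <property>
    // elements in document order, so the last occurrence wins.
    static String parseMyprop() throws Exception {
        Path dir = Files.createTempDirectory("xinclude-demo");
        Path included = dir.resolve("other.xml");
        Files.write(included,
            "<property><name>myprop</name><value>val2</value></property>"
                .getBytes("UTF-8"));
        Path main = dir.resolve("core-site.xml");
        Files.write(main,
            ("<configuration xmlns:xi=\"http://www.w3.org/2001/XInclude\">"
             + "<property><name>myprop</name><value>val1</value></property>"
             + "<xi:include href=\"" + included.toUri() + "\"/>"
             + "</configuration>").getBytes("UTF-8"));

        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        dbf.setXIncludeAware(true);  // expand xi:include while parsing
        Document doc = dbf.newDocumentBuilder().parse(main.toFile());

        String value = null;
        NodeList props = doc.getElementsByTagName("property");
        for (int i = 0; i < props.getLength(); i++) {
            Element p = (Element) props.item(i);
            value = p.getElementsByTagName("value").item(0).getTextContent();
        }
        return value;  // in document order, the included val2 comes last
    }

    public static void main(String[] args) throws Exception {
        System.out.println("myprop = " + parseMyprop());
    }
}
```

Because the include is expanded inline before any property is applied, the included {{val2}} occupies a later document position than {{val1}}; the bug in HADOOP-15554's parser was applying included properties out of that order.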


> Order of property settings is incorrect when includes are processed
> ---
>
> Key: HADOOP-16007
> URL: https://issues.apache.org/jira/browse/HADOOP-16007
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.2.0, 3.1.1, 3.0.4
>Reporter: Jason Lowe
>Priority: Blocker
>
> If a configuration file contains a setting for a property then later includes 
> another file that also sets that property to a different value then the 
> property will be parsed incorrectly. For example, consider the following 
> configuration file:
> {noformat}
> <configuration xmlns:xi="http://www.w3.org/2001/XInclude">
>   <property>
>     <name>myprop</name>
>     <value>val1</value>
>   </property>
>   <xi:include href="/some/other/file.xml"/>
> </configuration>
> {noformat}
> with the contents of /some/other/file.xml as:
> {noformat}
> <property>
>   <name>myprop</name>
>   <value>val2</value>
> </property>
> {noformat}
> Parsing this configuration should result in myprop=val2, but it actually 
> results in myprop=val1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14556) S3A to support Delegation Tokens

2018-12-17 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16722956#comment-16722956
 ] 

Steve Loughran commented on HADOOP-14556:
-

Tested on S3 Ireland BTW

There are 10 people watching this. I need 1 or 2 people to actually look at the 
code and comment. Yes, it's a big piece of work and yes, it's complex, but 
that's because, unlike the DT plugin points of the other object stores (wasb, 
abfs), I'm actually implementing the token support, with simple options 
(session) and advanced ones (generating restricted roles after determining the 
exact requirements of the user).

If anyone watching this JIRA has any intention of using this feature, then they 
should really review it. Thanks.

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556-014.patch, 
> HADOOP-14556-015.patch, HADOOP-14556-016.patch, HADOOP-14556-017.patch, 
> HADOOP-14556-018a.patch, HADOOP-14556-019.patch, HADOOP-14556-020.patch, 
> HADOOP-14556-021.patch, HADOOP-14556-022.patch, HADOOP-14556-023.patch, 
> HADOOP-14556-024.patch, HADOOP-14556.oath-002.patch, HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16007) Order of property settings is incorrect when includes are processed

2018-12-17 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16722926#comment-16722926
 ] 

Steve Loughran commented on HADOOP-16007:
-

Really? I thought I'd been seeing the "correct" behaviour, but maybe not. I do 
chained/nested XIncludes, though.

> Order of property settings is incorrect when includes are processed
> ---
>
> Key: HADOOP-16007
> URL: https://issues.apache.org/jira/browse/HADOOP-16007
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.2.0, 3.1.1, 3.0.4
>Reporter: Jason Lowe
>Priority: Blocker
>
> If a configuration file contains a setting for a property then later includes 
> another file that also sets that property to a different value then the 
> property will be parsed incorrectly. For example, consider the following 
> configuration file:
> {noformat}
> <configuration xmlns:xi="http://www.w3.org/2001/XInclude">
>   <property>
>     <name>myprop</name>
>     <value>val1</value>
>   </property>
>   <xi:include href="/some/other/file.xml"/>
> </configuration>
> {noformat}
> with the contents of /some/other/file.xml as:
> {noformat}
> <property>
>   <name>myprop</name>
>   <value>val2</value>
> </property>
> {noformat}
> Parsing this configuration should result in myprop=val2, but it actually 
> results in myprop=val1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16004) ABFS: Convert 404 error response in AbfsInputStream and AbfsOutPutStream to FileNotFoundException

2018-12-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16722906#comment-16722906
 ] 

Hudson commented on HADOOP-16004:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15620 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15620/])
HADOOP-16004. ABFS: Convert 404 error response in AbfsInputStream and (stevel: 
rev 346c0c8aff0b206d45f34dbce4fcc81364115d95)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemE2E.java


> ABFS: Convert 404 error response in AbfsInputStream and AbfsOutPutStream to 
> FileNotFoundException
> -
>
> Key: HADOOP-16004
> URL: https://issues.apache.org/jira/browse/HADOOP-16004
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HADOOP-16004-001.patch
>
>
> In AbfsInputStream and AbfsOutPutStream, client error response is used to 
> create an IOException.
> We should convert 404 error response to FileNotFoundException



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15954) ABFS: Enable owner and group conversion for MSI and login user using OAuth

2018-12-17 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16722899#comment-16722899
 ] 

Steve Loughran commented on HADOOP-15954:
-

h3. {{DefaultSPIdentityTransformer.transformAclEntries}}


I don't see this feature being needed at all when !isSecurityEnabled, but on my 
reading of the code it's going to log at error every time initialize() is 
called. This isn't appropriate there.

# only worry about name extraction when running secure
# Log @ warn

I'd worry about the logs filling with these error messages in any long-lived 
service (Spark, Hive LLAP) where FS instances are not just created but also 
destroyed after work is done (especially LLAP). Is there a way to minimise the 
logging?


h3. {{getShortName}}

Is the case conversion going to work in all locales, or should the locale for 
the toLowerCase() call be fixed (e.g. Locale.ENGLISH)? I ask because I don't 
know how AD/Kerberos realms with an "I" in their name get converted under a 
Turkish locale, but I suspect it's not what you want across a global system.
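The Turkish-I concern is real and easy to demonstrate: the JDK's locale-sensitive toLowerCase() maps 'I' to dotless ı (U+0131) under a Turkish locale, while Locale.ROOT always yields ASCII 'i'. A minimal, hedged demonstration:

```java
import java.util.Locale;

public class LocaleLowerDemo {
    public static void main(String[] args) {
        // Locale-sensitive: under a Turkish locale, 'I' lowercases to
        // dotless U+0131, so "I".toLowerCase() != "i" there.
        String turkish = "I".toLowerCase(new Locale("tr", "TR"));
        // Locale-insensitive: Locale.ROOT always yields ASCII 'i'.
        String root = "I".toLowerCase(Locale.ROOT);
        System.out.println((int) turkish.charAt(0)); // 305 (U+0131)
        System.out.println(root);                    // i
    }
}
```

For protocol-level identifiers like realm or principal names, pinning the conversion to Locale.ROOT (or Locale.ENGLISH) avoids a comparison that silently fails only on machines with a Turkish default locale.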

Elsewhere (/HADOOP-15996) we're looking at how to handle more complex names,
e.g. cross realm problems and users who have an @ in their short name.
Is this code going to handle that? As a plug-in mechanism is underway, 
getting involved in that/designing the code for it (how?) is wise.


+ general, minor: Use the size of the incoming list to set the size of the 
output ArrayList; saves reallocation & GC


> ABFS: Enable owner and group conversion for MSI and login user using OAuth
> --
>
> Key: HADOOP-15954
> URL: https://issues.apache.org/jira/browse/HADOOP-15954
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: junhua gu
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15954-001.patch, HADOOP-15954-002.patch, 
> HADOOP-15954-003.patch, HADOOP-15954-004.patch, HADOOP-15954-005.patch, 
> HADOOP-15954-006.patch
>
>
> Add support for overwriting owner and group in set/get operations to be the 
> service principal id when OAuth is used. Add support for upn short name 
> format.
>  
> Add Standard Transformer for SharedKey / Service 
> Add interface provides an extensible model for customizing the acquisition of 
> Identity Transformer.






[jira] [Commented] (HADOOP-15969) ABFS: getNamespaceEnabled can fail blocking user access thru ACLs

2018-12-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722895#comment-16722895
 ] 

Hudson commented on HADOOP-15969:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15619 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15619/])
HADOOP-15969. ABFS: getNamespaceEnabled can fail blocking user access (stevel: 
rev b2523d8100844338e073531c47666d744a101caf)
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/AbstractAbfsIntegrationTest.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/constants/TestConfigurationKeys.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestGetNameSpaceEnabled.java


> ABFS: getNamespaceEnabled can fail blocking user access thru ACLs
> -
>
> Key: HADOOP-15969
> URL: https://issues.apache.org/jira/browse/HADOOP-15969
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HADOOP-15969-001.patch, HADOOP-15969-002.patch, 
> HADOOP-15969-003.patch
>
>
> The Get Filesystem Properties operation requires Read permission to the 
> Filesystem.  Read permission to the Filesystem can only be granted thru RBAC, 
> Shared Key, or SAS.  This prevents giving low privilege users access to 
> specific files or directories within the filesystem.  An administrator should 
> be able to set an ACL on a file granting read permission to a user, without 
> giving them read permission to the entire Filesystem.
> Fortunately there is another way to determine if HNS is enabled.  The Get 
> Path Access Control (getAclStatus) operation only requires traversal access, 
> and for the root folder / all authenticated users have traversal access.
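
The probe described above can be sketched, very roughly, as a cached capability check (all names here are hypothetical; the real implementation lives in AzureBlobFileSystemStore):

```java
class HnsProbe {
    // null = not yet probed; the result is cached after the first call
    private Boolean namespaceEnabled;

    // Stand-in for the store client; the real call is getAclStatus("/")
    interface AclCall {
        void getAclStatusOfRoot() throws Exception;
    }

    boolean isNamespaceEnabled(AclCall client) {
        if (namespaceEnabled == null) {
            try {
                // getAclStatus on "/" needs only traversal access,
                // which every authenticated user holds
                client.getAclStatusOfRoot();
                namespaceEnabled = true;
            } catch (Exception e) {
                // the ACL call is rejected outright when HNS is disabled
                namespaceEnabled = false;
            }
        }
        return namespaceEnabled;
    }
}
```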






[jira] [Updated] (HADOOP-16004) ABFS: Convert 404 error response in AbfsInputStream and AbfsOutPutStream to FileNotFoundException

2018-12-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16004:

   Resolution: Fixed
Fix Version/s: 3.2.1
   3.3.0
   Status: Resolved  (was: Patch Available)

+1, committed

Always good to map service errors to the classic OS ones: not just for ease of 
debugging, but because retry handlers generally assume that FNFEs are 
unrecoverable and so won't repeat the call.
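
The mapping described here can be sketched with a hypothetical helper (the real conversion happens inside the ABFS streams; these names are made up):

```java
import java.io.FileNotFoundException;
import java.io.IOException;

class ErrorMapper {
    // Hypothetical helper: map an HTTP status from the service into the
    // classic java.io hierarchy so retry handlers can tell recoverable
    // failures from permanent ones
    static IOException mapStatus(int status, String path, String message) {
        if (status == 404) {
            // FNFE is treated as unrecoverable: no pointless retries
            return new FileNotFoundException(path + ": " + message);
        }
        return new IOException(path + ": HTTP " + status + ": " + message);
    }
}
```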

> ABFS: Convert 404 error response in AbfsInputStream and AbfsOutPutStream to 
> FileNotFoundException
> -
>
> Key: HADOOP-16004
> URL: https://issues.apache.org/jira/browse/HADOOP-16004
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HADOOP-16004-001.patch
>
>
> In AbfsInputStream and AbfsOutPutStream, the client error response is used 
> to create an IOException.
> We should convert a 404 error response to a FileNotFoundException.






[jira] [Commented] (HADOOP-15975) ABFS: remove timeout check for DELETE and RENAME

2018-12-17 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722887#comment-16722887
 ] 

Steve Loughran commented on HADOOP-15975:
-

Afraid this patch doesn't apply after HADOOP-15972; can you update and 
resubmit?

+1 pending that

> ABFS: remove timeout check for DELETE and RENAME
> 
>
> Key: HADOOP-15975
> URL: https://issues.apache.org/jira/browse/HADOOP-15975
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15975-001.patch
>
>
> Currently, ABFS rename and delete is doing a timeout check, which will fail 
> the request for rename/delete when the target contains tons of file/dirs.
> Because timeout check is already there for each HTTP call, we should remove 
> the timeout check in RENAME and DELETE.






[jira] [Updated] (HADOOP-15972) ABFS: reduce list page size to 500

2018-12-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15972:

   Resolution: Fixed
Fix Version/s: 3.2.1
   Status: Resolved  (was: Patch Available)

+1, committed

# changed title to cover outcome, rather than action
# when changing this to track future API changes, consider moving from a 
duplicate constant in the tests to a shared constant
# and if there is a problem with page size, is there a test which can 
demonstrate the issue? Is it related to the actual # of responses, or to the 
size of the payload if the listing is of objects with very long names?

> ABFS: reduce list page size to 500 
> --
>
> Key: HADOOP-15972
> URL: https://issues.apache.org/jira/browse/HADOOP-15972
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Fix For: 3.2.1
>
> Attachments: HADOOP-15972-001.patch
>
>
> This will be a temporary fix, as the service-side fix will take much longer 
> to roll out.






[jira] [Updated] (HADOOP-15972) ABFS: reduce list page size to 500

2018-12-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15972:

Fix Version/s: 3.3.0

> ABFS: reduce list page size to 500 
> --
>
> Key: HADOOP-15972
> URL: https://issues.apache.org/jira/browse/HADOOP-15972
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HADOOP-15972-001.patch
>
>
> This will be a temporary fix, as the service-side fix will take much longer 
> to roll out.






[jira] [Updated] (HADOOP-15972) ABFS: reduce list page size to 500

2018-12-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15972:

Summary: ABFS: reduce list page size to 500  (was: ABFS: update 
LIST_MAX_RESULTS)

> ABFS: reduce list page size to 500 
> --
>
> Key: HADOOP-15972
> URL: https://issues.apache.org/jira/browse/HADOOP-15972
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15972-001.patch
>
>
> This will be a temporary fix, as the service-side fix will take much longer 
> to roll out.






[jira] [Updated] (HADOOP-15969) ABFS: getNamespaceEnabled can fail blocking user access thru ACLs

2018-12-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15969:

   Resolution: Fixed
Fix Version/s: 3.2.1
   3.3.0
   Status: Resolved  (was: Patch Available)

+1: Committed to Hadoop 3.2.1+

thanks!

> ABFS: getNamespaceEnabled can fail blocking user access thru ACLs
> -
>
> Key: HADOOP-15969
> URL: https://issues.apache.org/jira/browse/HADOOP-15969
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HADOOP-15969-001.patch, HADOOP-15969-002.patch, 
> HADOOP-15969-003.patch
>
>
> The Get Filesystem Properties operation requires Read permission to the 
> Filesystem.  Read permission to the Filesystem can only be granted thru RBAC, 
> Shared Key, or SAS.  This prevents giving low privilege users access to 
> specific files or directories within the filesystem.  An administrator should 
> be able to set an ACL on a file granting read permission to a user, without 
> giving them read permission to the entire Filesystem.
> Fortunately there is another way to determine if HNS is enabled.  The Get 
> Path Access Control (getAclStatus) operation only requires traversal access, 
> and for the root folder / all authenticated users have traversal access.






[jira] [Commented] (HADOOP-16005) NativeAzureFileSystem does not support setXAttr

2018-12-17 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722876#comment-16722876
 ] 

Steve Loughran commented on HADOOP-16005:
-

I should add: serving up the etag as the file checksum would be nice; it lets 
you do backups which use a change in the etag as the sign of a file being out 
of date.

Look at:

* the class that describes the etag: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/store/EtagChecksum.java
* HADOOP-13282 is the change to S3A to add this; HADOOP-15287 is the discovery 
that we'd better make it optional, to stop distcp backups from HDFS failing, 
as too many jobs weren't using {{-skipCrc}} on the command line.

> NativeAzureFileSystem does not support setXAttr
> ---
>
> Key: HADOOP-16005
> URL: https://issues.apache.org/jira/browse/HADOOP-16005
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Clemens Wolff
>Priority: Major
>
> When interacting with Azure Blob Storage via the Hadoop FileSystem client, 
> it's currently (as of 
> [a8bbd81|https://github.com/apache/hadoop/commit/a8bbd818d5bc4762324bcdb7cf1fdd5c2f93891b])
>  not possible to set custom metadata attributes.
> Here is a snippet that demonstrates the missing behavior (throws an 
> UnsupportedOperationException):
> {code:java}
> val blobAccount = "SET ME"
> val blobKey = "SET ME"
> val blobContainer = "SET ME"
> val blobFile = "SET ME"
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.fs.{FileSystem, Path}
> val conf = new Configuration()
> conf.set("fs.wasbs.impl", "org.apache.hadoop.fs.azure.NativeAzureFileSystem")
> conf.set(s"fs.azure.account.key.$blobAccount.blob.core.windows.net", blobKey)
> val path = new 
> Path(s"wasbs://$blobContainer@$blobAccount.blob.core.windows.net/$blobFile")
> val fs = FileSystem.get(path, conf)
> fs.setXAttr(path, "somekey", "somevalue".getBytes)
> {code}
> Looking at the code in hadoop-tools/hadoop-azure, NativeAzureFileSystem 
> inherits the default setXAttr from FileSystem which throws the 
> UnsupportedOperationException.
> The underlying Azure Blob Storage service does support custom metadata 
> ([service 
> docs|https://docs.microsoft.com/en-us/azure/storage/blobs/storage-properties-metadata])
>  as does the azure-storage SDK that's being used by NativeAzureFileSystem 
> ([SDK 
> docs|http://javadox.com/com.microsoft.azure/azure-storage/2.0.0/com/microsoft/azure/storage/blob/CloudBlob.html#setMetadata(java.util.HashMap)]).
> Is there another way that I should be setting custom metadata on Azure Blob 
> Storage files? Is there a specific reason why setXAttr hasn't been 
> implemented on NativeAzureFileSystem? If not, I can take a shot at 
> implementing it.
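
For illustration only, an xattr-style API layered over a metadata map, standing in for what an implementation backed by the SDK's CloudBlob.setMetadata(HashMap) might look like (this class is not part of Hadoop; a plain map stands in for the real blob):

```java
import java.util.HashMap;
import java.util.Map;

class XAttrMetadata {
    // A plain in-memory map stands in for the blob's metadata store
    private final Map<String, byte[]> metadata = new HashMap<>();

    void setXAttr(String name, byte[] value) {
        metadata.put(name, value.clone()); // defensive copy
    }

    byte[] getXAttr(String name) {
        byte[] v = metadata.get(name);
        return v == null ? null : v.clone();
    }
}
```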






[jira] [Comment Edited] (HADOOP-16005) NativeAzureFileSystem does not support setXAttr

2018-12-17 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722876#comment-16722876
 ] 

Steve Loughran edited comment on HADOOP-16005 at 12/17/18 11:01 AM:


I should add: serving up the etag as the file checksum would be nice; it lets 
you do backups which use a change in the etag as the sign of a file being out 
of date.

Look at:

* the class that describes the etag: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/store/EtagChecksum.java
* HADOOP-13282 is the change to S3A to add this; HADOOP-15287 is the discovery 
that we'd better make it optional, to stop distcp backups from HDFS failing, 
as too many jobs weren't using {{-skipCrc}} on the command line.


was (Author: ste...@apache.org):
I should add: serving up the etag as the file checksum would be nice —lets you 
do backups which use a change in the etag as the sign of a file being out of 
date

 Look at

* class to describe the etag 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/store/EtagChecksum.java
 *HADOOP-13282 is the change to S3A to add this; HADOOP-15287 the discovery 
we'd better make it optional to stop distcp backups from HDFS failing, as too 
many jobs weren't using {{-skipCrc}} on the command line, it 

> NativeAzureFileSystem does not support setXAttr
> ---
>
> Key: HADOOP-16005
> URL: https://issues.apache.org/jira/browse/HADOOP-16005
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Clemens Wolff
>Priority: Major
>
> When interacting with Azure Blob Storage via the Hadoop FileSystem client, 
> it's currently (as of 
> [a8bbd81|https://github.com/apache/hadoop/commit/a8bbd818d5bc4762324bcdb7cf1fdd5c2f93891b])
>  not possible to set custom metadata attributes.
> Here is a snippet that demonstrates the missing behavior (throws an 
> UnsupportedOperationException):
> {code:java}
> val blobAccount = "SET ME"
> val blobKey = "SET ME"
> val blobContainer = "SET ME"
> val blobFile = "SET ME"
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.fs.{FileSystem, Path}
> val conf = new Configuration()
> conf.set("fs.wasbs.impl", "org.apache.hadoop.fs.azure.NativeAzureFileSystem")
> conf.set(s"fs.azure.account.key.$blobAccount.blob.core.windows.net", blobKey)
> val path = new 
> Path(s"wasbs://$blobContainer@$blobAccount.blob.core.windows.net/$blobFile")
> val fs = FileSystem.get(path, conf)
> fs.setXAttr(path, "somekey", "somevalue".getBytes)
> {code}
> Looking at the code in hadoop-tools/hadoop-azure, NativeAzureFileSystem 
> inherits the default setXAttr from FileSystem which throws the 
> UnsupportedOperationException.
> The underlying Azure Blob Storage service does support custom metadata 
> ([service 
> docs|https://docs.microsoft.com/en-us/azure/storage/blobs/storage-properties-metadata])
>  as does the azure-storage SDK that's being used by NativeAzureFileSystem 
> ([SDK 
> docs|http://javadox.com/com.microsoft.azure/azure-storage/2.0.0/com/microsoft/azure/storage/blob/CloudBlob.html#setMetadata(java.util.HashMap)]).
> Is there another way that I should be setting custom metadata on Azure Blob 
> Storage files? Is there a specific reason why setXAttr hasn't been 
> implemented on NativeAzureFileSystem? If not, I can take a shot at 
> implementing it.






[jira] [Updated] (HADOOP-16001) ZKDelegationTokenSecretManager should use KerberosName#getShortName to get the user name for ZK ACL

2018-12-17 Thread Surendra Singh Lilhore (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HADOOP-16001:

Description: 
!image-2018-12-10-17-54-33-361.png!

ZKDelegationTokenSecretManager uses only the first part of the principal to 
set the znode ACLs.

We can use the *{{KerberosName#getShortName()}}* method to get the principal 
name based upon the rules configured in *{{hadoop.security.auth_to_local}}* 
and set the ACL.

  was:
!image-2018-12-10-17-54-33-361.png!

ZKDelegationTokenSecretManager use only first part of principal to set the 
znode ACL's.

We can use *{{KerberosName#getShortName()}}* method for getting the principal 
based upon rules configured in *{{hadoop.security.auth_to_local}}*and setting 
the ACL.


> ZKDelegationTokenSecretManager should use KerberosName#getShortName to get 
> the user name for ZK ACL
> ---
>
> Key: HADOOP-16001
> URL: https://issues.apache.org/jira/browse/HADOOP-16001
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Shubham Dewan
>Assignee: Shubham Dewan
>Priority: Blocker
> Attachments: HDFS-14136.001.patch, image-2018-12-10-17-54-33-361.png
>
>
> !image-2018-12-10-17-54-33-361.png!
> ZKDelegationTokenSecretManager uses only the first part of the principal to 
> set the znode ACLs.
> We can use the *{{KerberosName#getShortName()}}* method to get the principal 
> name based upon the rules configured in *{{hadoop.security.auth_to_local}}* 
> and set the ACL.
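
As a rough illustration of what the short-name extraction does for a default-rule principal (this sketch reimplements the behaviour rather than calling the real KerberosName class, and ignores the configured auth_to_local rules):

```java
class ShortNameSketch {
    // Default-rule behaviour only: "user/instance@REALM" -> "user".
    // The real KerberosName#getShortName additionally applies the
    // rules configured in hadoop.security.auth_to_local.
    static String shortName(String principal) {
        int at = principal.indexOf('@');
        String local = at >= 0 ? principal.substring(0, at) : principal;
        int slash = local.indexOf('/');
        return slash >= 0 ? local.substring(0, slash) : local;
    }
}
```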


