[jira] [Updated] (HADOOP-14001) Improve delegation token validity checking

2017-01-19 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14001:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.7.4
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-2, branch-2.8, branch-2.8.0, and branch-2.7. 
Thanks [~tlipcon] for the review.

> Improve delegation token validity checking
> --
>
> Key: HADOOP-14001
> URL: https://issues.apache.org/jira/browse/HADOOP-14001
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HADOOP-14001.01.patch
>
>
> In AbstractDelegationTokenSecretManager#verifyToken, MessageDigest.isEqual should 
> be used instead of Arrays.equals to compare byte arrays.
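For illustration, a minimal sketch of the difference (not the actual Hadoop method; the real check lives in AbstractDelegationTokenSecretManager#verifyToken):

{code}
import java.security.MessageDigest;

public class TokenCheck {
  // Arrays.equals(stored, presented) returns at the first differing byte,
  // which leaks timing information to an attacker probing forged tokens.
  // MessageDigest.isEqual compares without short-circuiting on a mismatch:
  static boolean passwordsMatch(byte[] stored, byte[] presented) {
    return MessageDigest.isEqual(stored, presented);
  }
}
{code}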






[jira] [Updated] (HADOOP-13496) Include file lengths in Mismatch in length error for distcp

2017-01-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13496:

   Resolution: Fixed
Fix Version/s: 2.8.1
   Status: Resolved  (was: Patch Available)

> Include file lengths in Mismatch in length error for distcp
> ---
>
> Key: HADOOP-13496
> URL: https://issues.apache.org/jira/browse/HADOOP-13496
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: distcp
> Fix For: 2.8.1
>
> Attachments: HADOOP-13496.v1.patch, HADOOP-13496.v1.patch
>
>
> Currently RetriableFileCopyCommand doesn't show the observed lengths in the 
> "Mismatch in length" error:
> {code}
> 2016-08-12 10:23:14,231 ERROR [LocalJobRunner Map Task Executor #0] util.RetriableCommand(89): Failure in Retriable command: Copying hdfs://localhost:53941/user/tyu/test-data/dc7c674a-c463-4798-8260-c5d1e3440a4b/WALs/10.22.9.171,53952,1471022508087/10.22.9.171%2C53952%2C1471022508087.regiongroup-1.1471022510182 to hdfs://localhost:53941/backupUT/backup_1471022580616/WALs/10.22.9.171%2C53952%2C1471022508087.regiongroup-1.1471022510182
> java.io.IOException: Mismatch in length of source:hdfs://localhost:53941/user/tyu/test-data/dc7c674a-c463-4798-8260-c5d1e3440a4b/WALs/10.22.9.171,53952,1471022508087/10.22.9.171%2C53952%2C1471022508087.regiongroup-1.1471022510182 and target:hdfs://localhost:53941/backupUT/backup_1471022580616/WALs/.distcp.tmp.attempt_local344329843_0006_m_00_0
>   at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.compareFileLengths(RetriableFileCopyCommand.java:193)
>   at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:126)
>   at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99)
>   at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>   at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:281)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:253)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
> {code}
> It would be helpful to include both the expected length and the actual length.
> Thanks to [~yzhangal] for offline discussion.
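The shape of the fix, as a hypothetical sketch (the real method is {{RetriableFileCopyCommand.compareFileLengths}}; its actual signature may differ):

{code}
import java.io.IOException;
import org.apache.hadoop.fs.Path;

// Surface both lengths in the exception message instead of only the paths.
private static void compareFileLengths(Path source, Path target,
    long sourceLen, long targetLen) throws IOException {
  if (sourceLen != targetLen) {
    throw new IOException("Mismatch in length of source:" + source
        + " (" + sourceLen + ") and target:" + target
        + " (" + targetLen + ")");
  }
}
{code}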






[jira] [Commented] (HADOOP-13496) Include file lengths in Mismatch in length error for distcp

2017-01-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15829748#comment-15829748
 ] 

Steve Loughran commented on HADOOP-13496:
-

Sorry - I must have applied it to branch-2 with some other patch already 
applied, then looked at the diff.

> Include file lengths in Mismatch in length error for distcp
> ---
>
> Key: HADOOP-13496
> URL: https://issues.apache.org/jira/browse/HADOOP-13496
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: distcp
> Attachments: HADOOP-13496.v1.patch, HADOOP-13496.v1.patch
>
>
> Currently RetriableFileCopyCommand doesn't show the observed lengths in the 
> "Mismatch in length" error:
> {code}
> 2016-08-12 10:23:14,231 ERROR [LocalJobRunner Map Task Executor #0] util.RetriableCommand(89): Failure in Retriable command: Copying hdfs://localhost:53941/user/tyu/test-data/dc7c674a-c463-4798-8260-c5d1e3440a4b/WALs/10.22.9.171,53952,1471022508087/10.22.9.171%2C53952%2C1471022508087.regiongroup-1.1471022510182 to hdfs://localhost:53941/backupUT/backup_1471022580616/WALs/10.22.9.171%2C53952%2C1471022508087.regiongroup-1.1471022510182
> java.io.IOException: Mismatch in length of source:hdfs://localhost:53941/user/tyu/test-data/dc7c674a-c463-4798-8260-c5d1e3440a4b/WALs/10.22.9.171,53952,1471022508087/10.22.9.171%2C53952%2C1471022508087.regiongroup-1.1471022510182 and target:hdfs://localhost:53941/backupUT/backup_1471022580616/WALs/.distcp.tmp.attempt_local344329843_0006_m_00_0
>   at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.compareFileLengths(RetriableFileCopyCommand.java:193)
>   at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:126)
>   at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99)
>   at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>   at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:281)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:253)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
> {code}
> It would be helpful to include both the expected length and the actual length.
> Thanks to [~yzhangal] for offline discussion.






[jira] [Commented] (HADOOP-14001) Improve delegation token validity checking

2017-01-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15829624#comment-15829624
 ] 

Hudson commented on HADOOP-14001:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11143 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11143/])
HADOOP-14001. Improve delegation token validity checking. (aajisaka: rev 
176346721006a03f41d028560e9e29b5931d5be2)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java


> Improve delegation token validity checking
> --
>
> Key: HADOOP-14001
> URL: https://issues.apache.org/jira/browse/HADOOP-14001
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HADOOP-14001.01.patch
>
>
> In AbstractDelegationTokenSecretManager#verifyToken, MessageDigest.isEqual should 
> be used instead of Arrays.equals to compare byte arrays.






[jira] [Commented] (HADOOP-13589) S3Guard: Allow execution of all S3A integration tests with S3Guard enabled.

2017-01-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15829743#comment-15829743
 ] 

Steve Loughran commented on HADOOP-13589:
-

Thanks for getting it in; I'll look at the others now

> S3Guard: Allow execution of all S3A integration tests with S3Guard enabled.
> ---
>
> Key: HADOOP-13589
> URL: https://issues.apache.org/jira/browse/HADOOP-13589
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Chris Nauroth
>Assignee: Steve Loughran
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-13589-HADOOP-13345-001.patch, 
> HADOOP-13589-HADOOP-13345-002.patch, HADOOP-13589-HADOOP-13345-002.patch, 
> HADOOP-13589-HADOOP-13345-004.patch, HADOOP-13589-HADOOP-13345-005.patch
>
>
> With S3Guard enabled, S3A must continue to be functionally correct.  The best 
> way to verify this is to execute our existing S3A integration tests in a mode 
> with S3Guard enabled.
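As an illustration of the intended workflow (the actual switch is whatever this patch defines; the property shown follows the later hadoop-aws testing docs, so treat it as an assumption):

{noformat}
mvn verify -Ds3guard
{noformat}

would run the full hadoop-aws integration suite against an S3Guard-enabled S3A.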






[jira] [Commented] (HADOOP-13496) Include file lengths in Mismatch in length error for distcp

2017-01-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15829753#comment-15829753
 ] 

Steve Loughran commented on HADOOP-13496:
-

+1, committed to branch 2.8+

> Include file lengths in Mismatch in length error for distcp
> ---
>
> Key: HADOOP-13496
> URL: https://issues.apache.org/jira/browse/HADOOP-13496
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: distcp
> Fix For: 2.8.1
>
> Attachments: HADOOP-13496.v1.patch, HADOOP-13496.v1.patch
>
>
> Currently RetriableFileCopyCommand doesn't show the observed lengths in the 
> "Mismatch in length" error:
> {code}
> 2016-08-12 10:23:14,231 ERROR [LocalJobRunner Map Task Executor #0] util.RetriableCommand(89): Failure in Retriable command: Copying hdfs://localhost:53941/user/tyu/test-data/dc7c674a-c463-4798-8260-c5d1e3440a4b/WALs/10.22.9.171,53952,1471022508087/10.22.9.171%2C53952%2C1471022508087.regiongroup-1.1471022510182 to hdfs://localhost:53941/backupUT/backup_1471022580616/WALs/10.22.9.171%2C53952%2C1471022508087.regiongroup-1.1471022510182
> java.io.IOException: Mismatch in length of source:hdfs://localhost:53941/user/tyu/test-data/dc7c674a-c463-4798-8260-c5d1e3440a4b/WALs/10.22.9.171,53952,1471022508087/10.22.9.171%2C53952%2C1471022508087.regiongroup-1.1471022510182 and target:hdfs://localhost:53941/backupUT/backup_1471022580616/WALs/.distcp.tmp.attempt_local344329843_0006_m_00_0
>   at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.compareFileLengths(RetriableFileCopyCommand.java:193)
>   at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:126)
>   at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99)
>   at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>   at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:281)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:253)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
> {code}
> It would be helpful to include both the expected length and the actual length.
> Thanks to [~yzhangal] for offline discussion.






[jira] [Commented] (HADOOP-13496) Include file lengths in Mismatch in length error for distcp

2017-01-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15829776#comment-15829776
 ] 

Hudson commented on HADOOP-13496:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11144 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11144/])
HADOOP-13496. Include file lengths in Mismatch in length error for (stevel: rev 
ed33ce11dd8de36fb79e103d8491d077cd4aaf77)
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java


> Include file lengths in Mismatch in length error for distcp
> ---
>
> Key: HADOOP-13496
> URL: https://issues.apache.org/jira/browse/HADOOP-13496
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: distcp
> Fix For: 2.8.1
>
> Attachments: HADOOP-13496.v1.patch, HADOOP-13496.v1.patch
>
>
> Currently RetriableFileCopyCommand doesn't show the observed lengths in the 
> "Mismatch in length" error:
> {code}
> 2016-08-12 10:23:14,231 ERROR [LocalJobRunner Map Task Executor #0] util.RetriableCommand(89): Failure in Retriable command: Copying hdfs://localhost:53941/user/tyu/test-data/dc7c674a-c463-4798-8260-c5d1e3440a4b/WALs/10.22.9.171,53952,1471022508087/10.22.9.171%2C53952%2C1471022508087.regiongroup-1.1471022510182 to hdfs://localhost:53941/backupUT/backup_1471022580616/WALs/10.22.9.171%2C53952%2C1471022508087.regiongroup-1.1471022510182
> java.io.IOException: Mismatch in length of source:hdfs://localhost:53941/user/tyu/test-data/dc7c674a-c463-4798-8260-c5d1e3440a4b/WALs/10.22.9.171,53952,1471022508087/10.22.9.171%2C53952%2C1471022508087.regiongroup-1.1471022510182 and target:hdfs://localhost:53941/backupUT/backup_1471022580616/WALs/.distcp.tmp.attempt_local344329843_0006_m_00_0
>   at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.compareFileLengths(RetriableFileCopyCommand.java:193)
>   at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:126)
>   at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99)
>   at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>   at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:281)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:253)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
> {code}
> It would be helpful to include both the expected length and the actual length.
> Thanks to [~yzhangal] for offline discussion.






[jira] [Updated] (HADOOP-13999) Add -DskipShade maven profile to disable jar shading to reduce compile time

2017-01-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13999:

   Resolution: Fixed
Fix Version/s: HADOOP-13345
   3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

+1

Tested locally, works great!

Committed to the s3guard branch alongside trunk; there may be some merge issues 
later on, but I'll deal with that by reverting this one prior to the next 
merge-from-trunk.
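For reference, the typical invocation with the new profile (activated by the {{-DskipShade}} property this patch adds):

{noformat}
mvn clean install -DskipTests -DskipShade
{noformat}

which skips the shaded-client assembly, where most of the build time goes.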

> Add -DskipShade maven profile to disable jar shading to reduce compile time
> ---
>
> Key: HADOOP-13999
> URL: https://issues.apache.org/jira/browse/HADOOP-13999
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Minor
> Fix For: 3.0.0-alpha2, HADOOP-13345
>
> Attachments: HADOOP-13999.001.patch, HADOOP-13999.002.patch
>
>
> Adding a maven profile to disable client jar shading






[jira] [Commented] (HADOOP-13877) S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set

2017-01-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15829780#comment-15829780
 ] 

Steve Loughran commented on HADOOP-13877:
-

Yes, but not until I'm awake, have been through my emails, and have drunk 
enough coffee to be able to code; that's normally ~11:00 GMT

> S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set
> ---
>
> Key: HADOOP-13877
> URL: https://issues.apache.org/jira/browse/HADOOP-13877
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13877-HADOOP-13345.001.patch, 
> HADOOP-13877-HADOOP-13345.002.patch, HADOOP-13877-HADOOP-13345.003.patch
>
>
> I see a couple of failures in the DynamoDB MetadataStore unit test when I set 
> {{fs.s3a.s3guard.ddb.table}} in my test/resources/core-site.xml.
> I have a fix already, so I'll take this JIRA.






[jira] [Commented] (HADOOP-13999) Add -DskipShade maven profile to disable jar shading to reduce compile time

2017-01-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15829832#comment-15829832
 ] 

Hudson commented on HADOOP-13999:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11145 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11145/])
HADOOP-13999 Add -DskipShade maven profile to disable jar shading to (stevel: 
rev 85e4961f60b7f8cd1343b6f2b9f4c8bb1a5de6ac)
* (edit) hadoop-client-modules/hadoop-client-runtime/pom.xml
* (edit) hadoop-client-modules/hadoop-client-minicluster/pom.xml
* (edit) hadoop-client-modules/hadoop-client-api/pom.xml
* (edit) hadoop-client-modules/hadoop-client-integration-tests/pom.xml


> Add -DskipShade maven profile to disable jar shading to reduce compile time
> ---
>
> Key: HADOOP-13999
> URL: https://issues.apache.org/jira/browse/HADOOP-13999
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Minor
> Fix For: 3.0.0-alpha2, HADOOP-13345
>
> Attachments: HADOOP-13999.001.patch, HADOOP-13999.002.patch
>
>
> Adding a maven profile to disable client jar shading






[jira] [Commented] (HADOOP-13956) Read ADLS credentials from Credential Provider

2017-01-19 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15829885#comment-15829885
 ] 

Lei (Eddy) Xu commented on HADOOP-13956:


Hey, [~jzhuge]

The patch looks good overall.

I ran {{mvn test}} under {{hadoop-tools/hadoop-azure-datalake}}, and the 
following test failed:

{code}
Failed tests:
  TestAdlContractRootDirLive>AbstractContractRootDirectoryTest.testRmNonEmptyRootDirNonRecursive:132->Assert.fail:88
  non recursive delete should have raised an exception, but completed with exit code false
{code}

Is this failure related to the patch?

> Read ADLS credentials from Credential Provider
> --
>
> Key: HADOOP-13956
> URL: https://issues.apache.org/jira/browse/HADOOP-13956
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
> Attachments: HADOOP-13956.001.patch, HADOOP-13956.002.patch, 
> HADOOP-13956.003.patch, HADOOP-13956.004.patch, HADOOP-13956.005.patch, 
> HADOOP-13956.006.patch
>
>
> Read ADLS credentials using Hadoop CredentialProvider API. See 
> https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CredentialProviderAPI.html.
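For background, the usual CredentialProvider workflow stores the secrets in a keystore and points Hadoop at it. The aliases below are the standard ADL OAuth2 property names; which keys the patch actually resolves is defined by the patch itself:

{noformat}
hadoop credential create fs.adl.oauth2.client.id -provider jceks://file/tmp/adls.jceks
hadoop credential create fs.adl.oauth2.credential -provider jceks://file/tmp/adls.jceks
{noformat}

with {{hadoop.security.credential.provider.path}} set to {{jceks://file/tmp/adls.jceks}} in core-site.xml.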






[jira] [Commented] (HADOOP-13345) S3Guard: Improved Consistency for S3A

2017-01-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15829778#comment-15829778
 ] 

Steve Loughran commented on HADOOP-13345:
-

FYI, I've just cherry-picked in HADOOP-13999, "Add -DskipShade maven profile to 
disable jar shading to reduce compile time".

I/we will probably need to roll this back before the next trunk merge; until 
then it lets us build this branch without waiting 6+ minutes for the shade step.

> S3Guard: Improved Consistency for S3A
> -
>
> Key: HADOOP-13345
> URL: https://issues.apache.org/jira/browse/HADOOP-13345
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13345.prototype1.patch, s3c.001.patch, 
> S3C-ConsistentListingonS3-Design.pdf, S3GuardImprovedConsistencyforS3A.pdf, 
> S3GuardImprovedConsistencyforS3AV2.pdf
>
>
> This issue proposes S3Guard, a new feature of S3A, to provide an option for a 
> stronger consistency model than what is currently offered.  The solution 
> coordinates with a strongly consistent external store to resolve 
> inconsistencies caused by the S3 eventual consistency model.






[jira] [Updated] (HADOOP-13999) Add -DskipShade maven profile to disable jar shading to reduce compile time

2017-01-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13999:

Summary: Add -DskipShade maven profile to disable jar shading to reduce 
compile time  (was: Add maven profile to dissable jar shading to reduce compile 
time)

> Add -DskipShade maven profile to disable jar shading to reduce compile time
> ---
>
> Key: HADOOP-13999
> URL: https://issues.apache.org/jira/browse/HADOOP-13999
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Minor
> Attachments: HADOOP-13999.001.patch, HADOOP-13999.002.patch
>
>
> Adding a maven profile to disable client jar shading






[jira] [Updated] (HADOOP-13877) S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set

2017-01-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13877:

Status: Open  (was: Patch Available)

> S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set
> ---
>
> Key: HADOOP-13877
> URL: https://issues.apache.org/jira/browse/HADOOP-13877
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13877-HADOOP-13345.001.patch, 
> HADOOP-13877-HADOOP-13345.002.patch, HADOOP-13877-HADOOP-13345.003.patch
>
>
> I see a couple of failures in the DynamoDB MetadataStore unit test when I set 
> {{fs.s3a.s3guard.ddb.table}} in my test/resources/core-site.xml.
> I have a fix already, so I'll take this JIRA.






[jira] [Commented] (HADOOP-13959) S3guard: replace dynamo.describe() call in init with more efficient query

2017-01-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15829924#comment-15829924
 ] 

Steve Loughran commented on HADOOP-13959:
-

This will be fixed as part of HADOOP-13985; when that is in, we can close this 
at the same time.
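For reference, a sketch of the kind of cheap probe that could replace {{describe()}}: read one well-known item instead of calling describeTable. The key schema and the version-marker key below are assumptions for illustration, not the actual S3Guard table layout:

{code}
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.GetItemRequest;
import com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException;
import java.util.HashMap;
import java.util.Map;

// Probe for table existence with a single GetItem instead of describeTable(),
// which AWS throttles. Key names here are illustrative.
static boolean tableExists(AmazonDynamoDB ddb, String tableName) {
  Map<String, AttributeValue> key = new HashMap<>();
  key.put("parent", new AttributeValue("../VERSION"));
  key.put("child", new AttributeValue("../VERSION"));
  try {
    // Succeeds (whether or not the item is present) whenever the table exists.
    ddb.getItem(new GetItemRequest().withTableName(tableName).withKey(key));
    return true;
  } catch (ResourceNotFoundException e) {
    return false;
  }
}
{code}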

> S3guard: replace dynamo.describe() call in init with more efficient query
> -
>
> Key: HADOOP-13959
> URL: https://issues.apache.org/jira/browse/HADOOP-13959
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Priority: Minor
>
> HADOOP-13908 adds initialization when a table isn't created, using the 
> {{describe()}} call.
> AWS documents this as inefficient, and throttles it. We should be able to get 
> away with a simple table lookup as the probe.






[jira] [Commented] (HADOOP-14000) s3guard metadata stores to support millons of children

2017-01-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15829961#comment-15829961
 ] 

Steve Loughran commented on HADOOP-14000:
-

DDB docs say:

bq. The result set from a Query is limited to 1 MB per call. You can use the 
LastEvaluatedKey from the query response to retrieve more results.

The max # of files you get per call will therefore be bounded by the parent 
path length: the longer the paths in the directory tree, the fewer children 
fit into each 1 MB page.

As well as this limit marker, there's a paging mechanism, so large responses 
can be iterated over page by page.
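As a concrete sketch of that paging loop (table and attribute names are assumptions, not the real S3Guard schema):

{code}
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.QueryRequest;
import com.amazonaws.services.dynamodbv2.model.QueryResult;
import java.util.Map;

// Walk every child of a parent path by chasing LastEvaluatedKey across
// the 1 MB query pages.
static void listChildren(AmazonDynamoDB ddb, String table, String parent) {
  Map<String, AttributeValue> lastKey = null;
  do {
    QueryRequest request = new QueryRequest()
        .withTableName(table)
        .withKeyConditionExpression("parent = :p")
        .addExpressionAttributeValuesEntry(":p", new AttributeValue(parent))
        .withExclusiveStartKey(lastKey);          // null on the first page
    QueryResult result = ddb.query(request);
    for (Map<String, AttributeValue> item : result.getItems()) {
      System.out.println(item.get("child"));      // e.g. turn into a FileStatus
    }
    lastKey = result.getLastEvaluatedKey();       // null once fully consumed
  } while (lastKey != null);
}
{code}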

To scale, then:

# {{DirListingMetadata()}} needs to move from a simple collection of children, 
to an abstract class offering an iterator over the children
# the DDB store must return a special iterator here, with the same flow as 
{{org.apache.hadoop.fs.s3a.Listing}}. Ideally, it should return 
{{RemoteIterator}}, so that it can be directly wired up to 
the listing mechanism of {{LocatedFileStatusIterator}}
# the local store could still cache the values in its own subclass of 
{{DirListingMetadata()}}
# testing!


> s3guard metadata stores to support millons of children
> --
>
> Key: HADOOP-14000
> URL: https://issues.apache.org/jira/browse/HADOOP-14000
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>
> S3 repos can have millions of child entries
> Currently {{DirListingMetaData}} can't and {{MetadataStore.listChildren(Path 
> path)}} won't be able to handle directories that big, for listing, deleting 
> or naming.
> We will need a paged response from the listing operation, something which can 
> be iterated over.






[jira] [Commented] (HADOOP-13926) S3Guard: Improve listLocatedStatus and listLocatedStatus

2017-01-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15829968#comment-15829968
 ] 

Steve Loughran commented on HADOOP-13926:
-

I've added {{listLocatedStatus}}, as they are related.

Note to implementors: all new iterators *must* go into 
{{org.apache.hadoop.fs.s3a.Listing}}. Why? That's where all the listing 
iterators go, and keeps the S3aFS class vaguely manageable.
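As a minimal sketch of what such an iterator looks like (the class name is made up; only {{RemoteIterator}} and {{FileStatus}} are real Hadoop types):

{code}
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.RemoteIterator;

// Adapts a plain Iterator (e.g. a metastore listChildren result) to the
// RemoteIterator API that the FileSystem listing calls hand back.
class MetastoreStatusIterator implements RemoteIterator<FileStatus> {
  private final Iterator<FileStatus> source;

  MetastoreStatusIterator(Iterator<FileStatus> source) {
    this.source = source;
  }

  @Override
  public boolean hasNext() throws IOException {
    return source.hasNext();
  }

  @Override
  public FileStatus next() throws IOException {
    return source.next();
  }
}
{code}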

> S3Guard: Improve listLocatedStatus and listLocatedStatus
> 
>
> Key: HADOOP-13926
> URL: https://issues.apache.org/jira/browse/HADOOP-13926
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13926.wip.proto.branch-13345.1.patch
>
>
> Need to check if {{listLocatedStatus}} can make use of metastore's 
> listChildren feature.






[jira] [Commented] (HADOOP-13345) S3Guard: Improved Consistency for S3A

2017-01-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15829973#comment-15829973
 ] 

Steve Loughran commented on HADOOP-13345:
-

One thing everyone needs to keep an eye on is HADOOP-14000: support for 
millions of files. This'll require big changes in the current 
{{DirListingMetadata}} class, as well as hooking up all the S3aFS list* 
operations.

> S3Guard: Improved Consistency for S3A
> -
>
> Key: HADOOP-13345
> URL: https://issues.apache.org/jira/browse/HADOOP-13345
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13345.prototype1.patch, s3c.001.patch, 
> S3C-ConsistentListingonS3-Design.pdf, S3GuardImprovedConsistencyforS3A.pdf, 
> S3GuardImprovedConsistencyforS3AV2.pdf
>
>
> This issue proposes S3Guard, a new feature of S3A, to provide an option for a 
> stronger consistency model than what is currently offered.  The solution 
> coordinates with a strongly consistent external store to resolve 
> inconsistencies caused by the S3 eventual consistency model.






[jira] [Updated] (HADOOP-13877) S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set

2017-01-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13877:

Status: Patch Available  (was: Open)

> S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set
> ---
>
> Key: HADOOP-13877
> URL: https://issues.apache.org/jira/browse/HADOOP-13877
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13877-HADOOP-13345.001.patch, 
> HADOOP-13877-HADOOP-13345.002.patch, HADOOP-13877-HADOOP-13345.003.patch, 
> HADOOP-13877-HADOOP-13345.004.patch
>
>
> I see a couple of failures in the DynamoDB MetadataStore unit test when I set 
> {{fs.s3a.s3guard.ddb.table}} in my test/resources/core-site.xml.
> I have a fix already, so I'll take this JIRA.






[jira] [Updated] (HADOOP-13877) S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set

2017-01-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13877:

Attachment: HADOOP-13877-HADOOP-13345.004.patch

LGTM, with some minor cleanup.

Patch 004: this is what I'll vote on if Yetus is happy.

This is patch 003 plus coalescing some of the repeated calls to 
getContract().getMetastore() into a method, along with the same for the FS.
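Roughly what that cleanup looks like, as a sketch (method names follow the comment above; the actual contract accessors in the patch may differ):

{code}
// Hoist the repeated chained lookups into private accessors so each
// test case reads getMetastore()/getFs() directly.
private MetadataStore getMetastore() {
  return getContract().getMetastore();
}

private S3AFileSystem getFs() {
  return getContract().getFs();
}
{code}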

> S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set
> ---
>
> Key: HADOOP-13877
> URL: https://issues.apache.org/jira/browse/HADOOP-13877
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13877-HADOOP-13345.001.patch, 
> HADOOP-13877-HADOOP-13345.002.patch, HADOOP-13877-HADOOP-13345.003.patch, 
> HADOOP-13877-HADOOP-13345.004.patch
>
>
> I see a couple of failures in the DynamoDB MetadataStore unit test when I set 
> {{fs.s3a.s3guard.ddb.table}} in my test/resources/core-site.xml.
> I have a fix already, so I'll take this JIRA.






[jira] [Commented] (HADOOP-13877) S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set

2017-01-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15829941#comment-15829941
 ] 

Hadoop QA commented on HADOOP-13877:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  3m 
54s{color} | {color:red} root in HADOOP-13345 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13877 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12848309/HADOOP-13877-HADOOP-13345.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 85ca1f53b712 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / b3cd1a2 |
| Default Java | 1.8.0_111 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11467/artifact/patchprocess/branch-mvninstall-root.txt
 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11467/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11467/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set
> ---
>
> Key: HADOOP-13877
> URL: https://issues.apache.org/jira/browse/HADOOP-13877
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>

[jira] [Updated] (HADOOP-13926) S3Guard: Improve listLocatedStatus and listLocatedStatus

2017-01-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13926:

Summary: S3Guard: Improve listLocatedStatus and listLocatedStatus  (was: 
S3Guard: Improve listLocatedStatus and listFiles(recursive))

> S3Guard: Improve listLocatedStatus and listLocatedStatus
> 
>
> Key: HADOOP-13926
> URL: https://issues.apache.org/jira/browse/HADOOP-13926
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13926.wip.proto.branch-13345.1.patch
>
>
> Need to check if {{listLocatedStatus}} can make use of metastore's 
> listChildren feature.






[jira] [Updated] (HADOOP-13926) S3Guard: Improve listLocatedStatus and listFiles(recursive)

2017-01-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13926:

Summary: S3Guard: Improve listLocatedStatus and listFiles(recursive)  (was: 
S3Guard: Improve listLocatedStatus)

> S3Guard: Improve listLocatedStatus and listFiles(recursive)
> ---
>
> Key: HADOOP-13926
> URL: https://issues.apache.org/jira/browse/HADOOP-13926
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13926.wip.proto.branch-13345.1.patch
>
>
> Need to check if {{listLocatedStatus}} can make use of metastore's 
> listChildren feature.






[jira] [Updated] (HADOOP-13877) S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set

2017-01-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13877:

   Resolution: Fixed
Fix Version/s: HADOOP-13345
   Status: Resolved  (was: Patch Available)

+1

> S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set
> ---
>
> Key: HADOOP-13877
> URL: https://issues.apache.org/jira/browse/HADOOP-13877
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-13877-HADOOP-13345.001.patch, 
> HADOOP-13877-HADOOP-13345.002.patch, HADOOP-13877-HADOOP-13345.003.patch, 
> HADOOP-13877-HADOOP-13345.004.patch
>
>
> I see a couple of failures in the DynamoDB MetadataStore unit test when I set 
> {{fs.s3a.s3guard.ddb.table}} in my test/resources/core-site.xml.
> I have a fix already, so I'll take this JIRA.






[jira] [Commented] (HADOOP-13956) Read ADLS credentials from Credential Provider

2017-01-19 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830085#comment-15830085
 ] 

John Zhuge commented on HADOOP-13956:
-

The unit test failure is tracked by HADOOP-13927 "ADLS 
TestAdlContractRootDirLive.testRmNonEmptyRootDirNonRecursive failed". It is 
confirmed to be an ADLS backend bug.

> Read ADLS credentials from Credential Provider
> --
>
> Key: HADOOP-13956
> URL: https://issues.apache.org/jira/browse/HADOOP-13956
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
> Attachments: HADOOP-13956.001.patch, HADOOP-13956.002.patch, 
> HADOOP-13956.003.patch, HADOOP-13956.004.patch, HADOOP-13956.005.patch, 
> HADOOP-13956.006.patch
>
>
> Read ADLS credentials using Hadoop CredentialProvider API. See 
> https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CredentialProviderAPI.html.






[jira] [Commented] (HADOOP-13977) IntelliJ Compilation error in ITUseMiniCluster.java

2017-01-19 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830666#comment-15830666
 ] 

Miklos Szegedi commented on HADOOP-13977:
-

Thank you, [~asuresh]. That helped.

> IntelliJ Compilation error in ITUseMiniCluster.java
> ---
>
> Key: HADOOP-13977
> URL: https://issues.apache.org/jira/browse/HADOOP-13977
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Miklos Szegedi
>Assignee: Sean Busbey
> Attachments: build.log
>
>
> The repro steps:
> mvn clean install -DskipTests and then "Build/Build Project" in IntelliJ IDEA 
> to update indexes, etc.
> ...hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java
> Error:(34, 28) java: package org.apache.hadoop.fs does not exist
> ...






[jira] [Commented] (HADOOP-13977) IntelliJ Compilation error in ITUseMiniCluster.java

2017-01-19 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830688#comment-15830688
 ] 

Sean Busbey commented on HADOOP-13977:
--

The profile is an okay workaround, but having all IntelliJ users skip the 
tests for shading isn't a good long-term solution.

> IntelliJ Compilation error in ITUseMiniCluster.java
> ---
>
> Key: HADOOP-13977
> URL: https://issues.apache.org/jira/browse/HADOOP-13977
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Miklos Szegedi
>Assignee: Sean Busbey
> Attachments: build.log
>
>
> The repro steps:
> mvn clean install -DskipTests and then "Build/Build Project" in IntelliJ IDEA 
> to update indexes, etc.
> ...hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java
> Error:(34, 28) java: package org.apache.hadoop.fs does not exist
> ...






[jira] [Commented] (HADOOP-13989) Remove erroneous source jar option from hadoop-client shade configuration

2017-01-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830677#comment-15830677
 ] 

Hadoop QA commented on HADOOP-13989:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
8s{color} | {color:green} hadoop-client-runtime in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
10s{color} | {color:green} hadoop-client-minicluster in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13989 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12848387/HADOOP-13989.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 71274503bfc6 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f5839fd |
| Default Java | 1.8.0_111 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11472/testReport/ |
| modules | C: hadoop-client-modules/hadoop-client-runtime 
hadoop-client-modules/hadoop-client-minicluster U: hadoop-client-modules |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11472/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove erroneous source jar option from hadoop-client shade configuration
> -
>
> Key: HADOOP-13989
> URL: https://issues.apache.org/jira/browse/HADOOP-13989
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>

[jira] [Comment Edited] (HADOOP-13995) s3guard cli: make tests easier to run and address failure

2017-01-19 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830621#comment-15830621
 ] 

Sean Mackrory edited comment on HADOOP-13995 at 1/19/17 10:41 PM:
--

Restarted the precommit job - I can't reproduce this locally. I had that happen 
once before, and it disappeared with no changes. I wonder if that repository is 
occasionally and transiently unavailable?

edit: To be clear, the error in mvninstall was a failure to find a bunch of HDFS 
artifacts in the DynamoDB local repository... Not sure why it had to look for 
them there in the first place...


was (Author: mackrorysd):
Restarted the precommit job - I can't reproduce this locally. I had that happen 
once before, and it disappeared with no changes. I wonder if that repository is 
occasionally and transiently unavailable?

> s3guard cli: make tests easier to run and address failure
> -
>
> Key: HADOOP-13995
> URL: https://issues.apache.org/jira/browse/HADOOP-13995
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Sean Mackrory
> Attachments: HADOOP-13995.001.patch, HADOOP-13995.002.patch, 
> HADOOP-13995-HADOOP-13345.003.patch
>
>
> Following up on HADOOP-13650, we should:
> - Make it clearer which config parameters need to be set for test to succeed, 
> and provide good defaults.
> - Address any remaining test failures.
> - Change TestS3GuardTool to an ITest






[jira] [Commented] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-19 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830654#comment-15830654
 ] 

Xiaoyu Yao commented on HADOOP-13988:
-

[~gss2002], the change looks good to me overall. I just have a few comments 
about the additional logging. 
Can you also describe the manual testing that has been done with the patch?

1. Some of the if (LOG.isDebugEnabled()) guards are not needed, as we are using 
slf4j (lines 1065, 1072, 1083); see the note after this list.

2. Line 1075 can be moved into UGI#logAllUserInfo.

3. Lines 1089-109: I think we want to log UGI#loginUser instead of 
UGI#loginUser#loginUser, which has already been covered in line 1075.
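On point 1, a quick illustration of why the guard is redundant for cheap arguments with slf4j ({{expensiveDump()}} is a hypothetical method, not from the patch):

{code}
// slf4j defers message formatting until debug is confirmed enabled,
// so the guard adds nothing when the arguments are cheap:
LOG.debug("currentUGI realUser shortName: {}", shortName);

// The guard is still worth keeping when building an argument is expensive:
if (LOG.isDebugEnabled()) {
  LOG.debug("user info: {}", expensiveDump());
}
{code}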

> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.8.0, 2.7.3
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
> Attachments: HADOOP-13988.patch
>
>
> After upgrading to HDP 2.5.3.0, we noticed that the KMSClientProvider issues 
> have not all been resolved. We put a test build together and applied 
> HADOOP-13558 and HADOOP-13749; these two fixes still did not solve the issue 
> with requests coming from WebHDFS through Knox to a TDE zone.
> So we added some debug logging to our build and determined that what is 
> effectively happening here is a double-proxy situation, which does not seem 
> to work. So we propose the following fix in the getActualUgi method:
> {noformat}
>   }
>   // Use current user by default
>   UserGroupInformation actualUgi = currentUgi;
>   if (currentUgi.getRealUser() != null) {
>     // Use real user for proxy user
>     if (LOG.isDebugEnabled()) {
>       LOG.debug("using RealUser for proxyUser");
>     }
>     actualUgi = currentUgi.getRealUser();
>     if (getDoAsUser() != null) {
>       if (LOG.isDebugEnabled()) {
>         LOG.debug("doAsUser exists");
>         LOG.debug("currentUGI realUser shortName: {}",
>             currentUgi.getRealUser().getShortUserName());
>         LOG.debug("processUGI loginUser shortName: {}",
>             UserGroupInformation.getLoginUser().getShortUserName());
>       }
>       // String comparison must use equals(), not reference equality
>       if (!currentUgi.getRealUser().getShortUserName().equals(
>           UserGroupInformation.getLoginUser().getShortUserName())) {
>         if (LOG.isDebugEnabled()) {
>           LOG.debug("currentUGI.realUser does not match UGI.processUser");
>         }
>         actualUgi = UserGroupInformation.getLoginUser();
>         if (LOG.isDebugEnabled()) {
>           LOG.debug("LoginUser for Proxy: {}", actualUgi);
>         }
>       }
>     }
>   } else if (!currentUgiContainsKmsDt() &&
>       !currentUgi.hasKerberosCredentials()) {
>     // Use login user for user that does not have either
>     // Kerberos credential or KMS delegation token for KMS operations
>     if (LOG.isDebugEnabled()) {
>       LOG.debug("using loginUser; no KMS Delegation Token, no Kerberos Credentials");
>     }
>     actualUgi = UserGroupInformation.getLoginUser();
>   }
>   return actualUgi;
> }
> {noformat}






[jira] [Commented] (HADOOP-13995) s3guard cli: make tests easier to run and address failure

2017-01-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830651#comment-15830651
 ] 

Hadoop QA commented on HADOOP-13995:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
43s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 5 
new + 4 unchanged - 0 fixed = 9 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13995 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12848374/HADOOP-13995-HADOOP-13345.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d77dd13ef348 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / 31cee35 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11471/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11471/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11471/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> s3guard cli: make tests easier to run and address failure
> -
>
> Key: HADOOP-13995
> URL: https://issues.apache.org/jira/browse/HADOOP-13995
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: 

[jira] [Commented] (HADOOP-13985) s3guard: add a version marker to every table

2017-01-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830427#comment-15830427
 ] 

Hadoop QA commented on HADOOP-13985:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
0s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  3m 
26s{color} | {color:red} root in HADOOP-13345 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
8s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
35s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
42s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
37s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
54s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13985 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12848346/HADOOP-13985-HADOOP-13345-002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1033049456a4 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / 31cee35 |
| Default Java | 1.8.0_111 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11468/artifact/patchprocess/branch-mvninstall-root.txt
 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11468/testReport/ |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . 
|
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11468/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> 

[jira] [Updated] (HADOOP-13995) s3guard cli: make tests easier to run and address failure

2017-01-19 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-13995:
---
Attachment: HADOOP-13995-HADOOP-13345.003.patch

Definitely - I ran into that myself. I also removed my fix for the issue in 
HADOOP-13877 now that it has already been fixed as a separate issue.

I now pass in S3 test paths so the endpoint is the same as the configured S3 
bucket if nothing else is specified. I had to tweak some of the logic in 
parseDynamoDBEndPoint to allow a metastore URL and s3 bucket to be passed in. 
I'm not sure why that wouldn't be allowed - I think the logic now matches how 
we actually end up deciding what to use. I'd appreciate it if [~eddyxu] could 
take a quick look to make sure I'm not overlooking something important in the 
original intentions.

> s3guard cli: make tests easier to run and address failure
> -
>
> Key: HADOOP-13995
> URL: https://issues.apache.org/jira/browse/HADOOP-13995
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Sean Mackrory
> Attachments: HADOOP-13995.001.patch, HADOOP-13995.002.patch, 
> HADOOP-13995-HADOOP-13345.003.patch
>
>
> Following up on HADOOP-13650, we should:
> - Make it clearer which config parameters need to be set for test to succeed, 
> and provide good defaults.
> - Address any remaining test failures.
> - Change TestS3GuardTool to an ITest



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13989) Remove erroneous source jar option from hadoop-client shade configuration

2017-01-19 Thread Joe Pallas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Pallas updated HADOOP-13989:

Attachment: HADOOP-13989.002.patch

Repeal now, replace later

> Remove erroneous source jar option from hadoop-client shade configuration
> -
>
> Key: HADOOP-13989
> URL: https://issues.apache.org/jira/browse/HADOOP-13989
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Joe Pallas
>Priority: Minor
> Attachments: HADOOP-13989.001.patch, HADOOP-13989.002.patch
>
>
> The pom files for hadoop-client-minicluster and hadoop-client-runtime have a 
> typo in the configuration of the shade module.  They say 
> {{createSourceJar}} instead of {{createSourcesJar}}.  (This was noticed 
> by IntelliJ, but not by maven.)
> Shade plugin doc is at 
> [http://maven.apache.org/plugins/maven-shade-plugin/shade-mojo.html#createSourcesJar].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13995) s3guard cli: make tests easier to run and address failure

2017-01-19 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-13995:
---
Status: Patch Available  (was: Open)

> s3guard cli: make tests easier to run and address failure
> -
>
> Key: HADOOP-13995
> URL: https://issues.apache.org/jira/browse/HADOOP-13995
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Sean Mackrory
> Attachments: HADOOP-13995.001.patch, HADOOP-13995.002.patch, 
> HADOOP-13995-HADOOP-13345.003.patch
>
>
> Following up on HADOOP-13650, we should:
> - Make it clearer which config parameters need to be set for test to succeed, 
> and provide good defaults.
> - Address any remaining test failures.
> - Change TestS3GuardTool to an ITest



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13989) Remove erroneous source jar option from hadoop-client shade configuration

2017-01-19 Thread Joe Pallas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830532#comment-15830532
 ] 

Joe Pallas edited comment on HADOOP-13989 at 1/19/17 8:16 PM:
--

Repeal now, replace later (new patch)


was (Author: jpallas):
Repeal now, replace later

> Remove erroneous source jar option from hadoop-client shade configuration
> -
>
> Key: HADOOP-13989
> URL: https://issues.apache.org/jira/browse/HADOOP-13989
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Joe Pallas
>Priority: Minor
> Attachments: HADOOP-13989.001.patch, HADOOP-13989.002.patch
>
>
> The pom files for hadoop-client-minicluster and hadoop-client-runtime have a 
> typo in the configuration of the shade module.  They say 
> {{createSourceJar}} instead of {{createSourcesJar}}.  (This was noticed 
> by IntelliJ, but not by maven.)
> Shade plugin doc is at 
> [http://maven.apache.org/plugins/maven-shade-plugin/shade-mojo.html#createSourcesJar].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13989) Remove erroneous source jar option from hadoop-client shade configuration

2017-01-19 Thread Joe Pallas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Pallas updated HADOOP-13989:

Assignee: Joe Pallas
  Status: Patch Available  (was: Reopened)

> Remove erroneous source jar option from hadoop-client shade configuration
> -
>
> Key: HADOOP-13989
> URL: https://issues.apache.org/jira/browse/HADOOP-13989
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Joe Pallas
>Assignee: Joe Pallas
>Priority: Minor
> Attachments: HADOOP-13989.001.patch, HADOOP-13989.002.patch
>
>
> The pom files for hadoop-client-minicluster and hadoop-client-runtime have a 
> typo in the configuration of the shade module.  They say 
> {{createSourceJar}} instead of {{createSourcesJar}}.  (This was noticed 
> by IntelliJ, but not by maven.)
> Shade plugin doc is at 
> [http://maven.apache.org/plugins/maven-shade-plugin/shade-mojo.html#createSourcesJar].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12990) lz4 incompatibility between OS and Hadoop

2017-01-19 Thread Ilya Ganelin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830536#comment-15830536
 ] 

Ilya Ganelin commented on HADOOP-12990:
---

Hi, all - we've just run into this. Is there any chance that this has been 
resolved along the way? 

Any further advice on resolving this if I were to take it on?

> lz4 incompatibility between OS and Hadoop
> -
>
> Key: HADOOP-12990
> URL: https://issues.apache.org/jira/browse/HADOOP-12990
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, native
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Priority: Minor
>
> {{hdfs dfs -text}} hit exception when trying to view the compression file 
> created by Linux lz4 tool.
> The Hadoop version has HADOOP-11184 "update lz4 to r123", thus it is using 
> LZ4 library in release r123.
> Linux lz4 version:
> {code}
> $ /tmp/lz4 -h 2>&1 | head -1
> *** LZ4 Compression CLI 64-bits r123, by Yann Collet (Apr  1 2016) ***
> {code}
> Test steps:
> {code}
> $ cat 10rows.txt
> 001|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 002|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 003|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 004|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 005|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 006|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 007|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 008|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 009|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 010|c1|c2|c3|c4|c5|c6|c7|c8|c9
> $ /tmp/lz4 10rows.txt 10rows.txt.r123.lz4
> Compressed 310 bytes into 105 bytes ==> 33.87%
> $ hdfs dfs -put 10rows.txt.r123.lz4 /tmp
> $ hdfs dfs -text /tmp/10rows.txt.r123.lz4
> 16/04/01 08:19:07 INFO compress.CodecPool: Got brand-new decompressor [.lz4]
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.getCompressedData(BlockDecompressorStream.java:123)
> at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:98)
> at 
> org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
> at java.io.InputStream.read(InputStream.java:101)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:106)
> at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:101)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:118)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12990) lz4 incompatibility between OS and Hadoop

2017-01-19 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830555#comment-15830555
 ] 

John Zhuge commented on HADOOP-12990:
-

Not that I am aware of.

Go for it!  Past comments are a good source of info.

> lz4 incompatibility between OS and Hadoop
> -
>
> Key: HADOOP-12990
> URL: https://issues.apache.org/jira/browse/HADOOP-12990
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, native
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Priority: Minor
>
> {{hdfs dfs -text}} hit exception when trying to view the compression file 
> created by Linux lz4 tool.
> The Hadoop version has HADOOP-11184 "update lz4 to r123", thus it is using 
> LZ4 library in release r123.
> Linux lz4 version:
> {code}
> $ /tmp/lz4 -h 2>&1 | head -1
> *** LZ4 Compression CLI 64-bits r123, by Yann Collet (Apr  1 2016) ***
> {code}
> Test steps:
> {code}
> $ cat 10rows.txt
> 001|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 002|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 003|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 004|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 005|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 006|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 007|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 008|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 009|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 010|c1|c2|c3|c4|c5|c6|c7|c8|c9
> $ /tmp/lz4 10rows.txt 10rows.txt.r123.lz4
> Compressed 310 bytes into 105 bytes ==> 33.87%
> $ hdfs dfs -put 10rows.txt.r123.lz4 /tmp
> $ hdfs dfs -text /tmp/10rows.txt.r123.lz4
> 16/04/01 08:19:07 INFO compress.CodecPool: Got brand-new decompressor [.lz4]
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.getCompressedData(BlockDecompressorStream.java:123)
> at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:98)
> at 
> org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
> at java.io.InputStream.read(InputStream.java:101)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:106)
> at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:101)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:118)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13989) Remove erroneous source jar option from hadoop-client shade configuration

2017-01-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830565#comment-15830565
 ] 

Hadoop QA commented on HADOOP-13989:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 13s{color} 
| {color:red} HADOOP-13989 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-13989 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12848375/HADOOP-13989.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11469/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove erroneous source jar option from hadoop-client shade configuration
> -
>
> Key: HADOOP-13989
> URL: https://issues.apache.org/jira/browse/HADOOP-13989
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Joe Pallas
>Assignee: Joe Pallas
>Priority: Minor
> Attachments: HADOOP-13989.001.patch, HADOOP-13989.002.patch
>
>
> The pom files for hadoop-client-minicluster and hadoop-client-runtime have a 
> typo in the configuration of the shade module.  They say 
> {{createSourceJar}} instead of {{createSourcesJar}}.  (This was noticed 
> by IntelliJ, but not by maven.)
> Shade plugin doc is at 
> [http://maven.apache.org/plugins/maven-shade-plugin/shade-mojo.html#createSourcesJar].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12990) lz4 incompatibility between OS and Hadoop

2017-01-19 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830570#comment-15830570
 ] 

Jason Lowe commented on HADOOP-12990:
-

After having worked on the ZStandard codec in HADOOP-13578, I have a fresher 
perspective on this.  One main problem with creating a separate codec for LZ4 
compatibility is that the existing Hadoop LZ4 codec has claimed the standard 
'.lz4' extension.  That means when users upload files into HDFS that have been 
compressed with the standard LZ4 CLI tool, it will try to use the existing, 
broken LZ4 codec rather than any new one.  They'd have to rename the files to 
use some non-standard LZ4 extension to select the new codec.  That's not ideal.

In hindsight, the Hadoop LZ4 codec really should have used the streaming APIs 
for LZ4 rather than the one-shot or single-step APIs.  Then it wouldn't need 
the extra framing bytes that broke compatibility with the existing LZ4 CLI, and 
it wouldn't lead to weird failures where the decoder can't decode anything that 
was encoded with a larger buffer size.  The streaming API solves all those 
problems, being able to decode with an arbitrary user-supplied buffer size and 
without the extra block header hints that Hadoop added.

The cleanest solution from an end-user standpoint would be to have the existing 
LZ4 codec automatically detect the format when decoding so that we just have 
one codec and it works both with the old (IMHO broken) format and the standard 
LZ4 format.  I'm hoping there are some key signature bytes that LZ4 always 
places at the beginning of the compressed data stream so that we can 
automatically detect which one it is.  If that is possible then that would be 
my preference on how to tackle the issue.  If we can't then the end-user story 
is much less compelling -- two codecs with significant confusion on which one 
to use.

However there is one gotcha even if we can pull off this approach.  Files 
generated on clusters with the updated LZ4 codec would not be able to be 
decoded on clusters that only have the old codec.  If that case has to be 
supported then we have no choice but to develop a new codec and make users live 
with the non-standard LZ4 file extensions used by the new codec.  .lz4 files 
uploaded to Hadoop would continue to fail as they do today until renamed to the 
non-standard extension.


> lz4 incompatibility between OS and Hadoop
> -
>
> Key: HADOOP-12990
> URL: https://issues.apache.org/jira/browse/HADOOP-12990
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, native
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Priority: Minor
>
> {{hdfs dfs -text}} hit exception when trying to view the compression file 
> created by Linux lz4 tool.
> The Hadoop version has HADOOP-11184 "update lz4 to r123", thus it is using 
> LZ4 library in release r123.
> Linux lz4 version:
> {code}
> $ /tmp/lz4 -h 2>&1 | head -1
> *** LZ4 Compression CLI 64-bits r123, by Yann Collet (Apr  1 2016) ***
> {code}
> Test steps:
> {code}
> $ cat 10rows.txt
> 001|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 002|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 003|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 004|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 005|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 006|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 007|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 008|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 009|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 010|c1|c2|c3|c4|c5|c6|c7|c8|c9
> $ /tmp/lz4 10rows.txt 10rows.txt.r123.lz4
> Compressed 310 bytes into 105 bytes ==> 33.87%
> $ hdfs dfs -put 10rows.txt.r123.lz4 /tmp
> $ hdfs dfs -text /tmp/10rows.txt.r123.lz4
> 16/04/01 08:19:07 INFO compress.CodecPool: Got brand-new decompressor [.lz4]
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.getCompressedData(BlockDecompressorStream.java:123)
> at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:98)
> at 
> org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
> at java.io.InputStream.read(InputStream.java:101)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:106)
> at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:101)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> 

[jira] [Commented] (HADOOP-12990) lz4 incompatibility between OS and Hadoop

2017-01-19 Thread Ilya Ganelin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830582#comment-15830582
 ] 

Ilya Ganelin commented on HADOOP-12990:
---

[~jzhuge] My proposed approach is to create a new file based loosely on 
hadoop/io/compress/Lz4Codec.java, reproducing a byte structure analogous to 
your 4/3 hack. Does that seem reasonable?  

If my goal is ultimately to use this in something like Spark, if the version of 
Hadoop we're using is patched with the appropriate class, where would I add 
additional logic to switch between the two codecs (Lz4Codec vs. Lz4FrameCodec)? 
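
On the second question: when reading Hadoop input formats (which is what Spark's file APIs ultimately do), codec selection goes through {{CompressionCodecFactory}}, driven by configuration and file extension. A hypothetical wiring sketch follows; {{org.example.Lz4FrameCodec}} is an invented class name standing in for the proposed new codec, not anything that exists today:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;

public final class CodecLookupSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Register codecs as a comma-separated class-name list; the factory
    // matches a codec by the file extension from getDefaultExtension().
    conf.set("io.compression.codecs",
        "org.example.Lz4FrameCodec,"                 // hypothetical new codec
        + "org.apache.hadoop.io.compress.Lz4Codec"); // existing codec
    CompressionCodecFactory factory = new CompressionCodecFactory(conf);
    CompressionCodec codec =
        factory.getCodec(new Path("/tmp/10rows.txt.r123.lz4"));
    System.out.println(codec == null ? "none" : codec.getClass().getName());
  }
}
{code}

Note that both codecs would claim the same {{.lz4}} suffix, which is exactly the extension clash Jason describes above.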
 



> lz4 incompatibility between OS and Hadoop
> -
>
> Key: HADOOP-12990
> URL: https://issues.apache.org/jira/browse/HADOOP-12990
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, native
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Priority: Minor
>
> {{hdfs dfs -text}} hit exception when trying to view the compression file 
> created by Linux lz4 tool.
> The Hadoop version has HADOOP-11184 "update lz4 to r123", thus it is using 
> LZ4 library in release r123.
> Linux lz4 version:
> {code}
> $ /tmp/lz4 -h 2>&1 | head -1
> *** LZ4 Compression CLI 64-bits r123, by Yann Collet (Apr  1 2016) ***
> {code}
> Test steps:
> {code}
> $ cat 10rows.txt
> 001|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 002|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 003|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 004|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 005|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 006|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 007|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 008|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 009|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 010|c1|c2|c3|c4|c5|c6|c7|c8|c9
> $ /tmp/lz4 10rows.txt 10rows.txt.r123.lz4
> Compressed 310 bytes into 105 bytes ==> 33.87%
> $ hdfs dfs -put 10rows.txt.r123.lz4 /tmp
> $ hdfs dfs -text /tmp/10rows.txt.r123.lz4
> 16/04/01 08:19:07 INFO compress.CodecPool: Got brand-new decompressor [.lz4]
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.getCompressedData(BlockDecompressorStream.java:123)
> at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:98)
> at 
> org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
> at java.io.InputStream.read(InputStream.java:101)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:106)
> at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:101)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:118)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13995) s3guard cli: make tests easier to run and address failure

2017-01-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830592#comment-15830592
 ] 

Hadoop QA commented on HADOOP-13995:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  6m 
47s{color} | {color:red} root in HADOOP-13345 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
47s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 5 
new + 4 unchanged - 0 fixed = 9 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13995 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12848374/HADOOP-13995-HADOOP-13345.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ea1551cfd412 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / 31cee35 |
| Default Java | 1.8.0_111 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11470/artifact/patchprocess/branch-mvninstall-root.txt
 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11470/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11470/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11470/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> s3guard cli: make tests easier to run and address failure
> -
>
> Key: HADOOP-13995
> URL: https://issues.apache.org/jira/browse/HADOOP-13995

[jira] [Commented] (HADOOP-12990) lz4 incompatibility between OS and Hadoop

2017-01-19 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830596#comment-15830596
 ] 

Jason Lowe commented on HADOOP-12990:
-

bq. My proposed approach is to create a new file based loosely on 
hadoop/io/compress/Lz4Codec.java, reproducing a byte structure analogous to 
your 4/3 hack. Does that seem reasonable? 

The problem with that approach is the existing codec is using one-shot decode.  
If the amount of data to decode exceeds the buffer size being used by the 
decoder it just blows up.  So it's not as simple as tacking on a header and 
passing it through to the existing code.  That will work with small amounts of 
data but not once it exceeds the decoder buffer size.

To handle arbitrary files generated by the LZ4 CLI tool it really needs to use 
the streaming API so the buffer sizes used by the encoder and decoder are 
decoupled, and the decoder can handle arbitrarily large amounts of input 
without needing to chunk it into one-shot blocks as the current one does.
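
To illustrate the decoupling, here is a hedged sketch using lz4-java's {{LZ4FrameInputStream}} (available in newer lz4-java releases; it decodes the standard frame format incrementally, so the reader picks its own buffer size regardless of the block size the encoder used):

{code}
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import net.jpountz.lz4.LZ4FrameInputStream;

public final class Lz4FrameCat {
  public static void main(String[] args) throws IOException {
    // Decode a file produced by the standard lz4 CLI and copy it to stdout.
    try (InputStream in = new LZ4FrameInputStream(new FileInputStream(args[0]))) {
      byte[] buf = new byte[8192]; // reader-side buffer, chosen independently
      int n;
      while ((n = in.read(buf)) != -1) {
        System.out.write(buf, 0, n);
      }
      System.out.flush();
    }
  }
}
{code}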


> lz4 incompatibility between OS and Hadoop
> -
>
> Key: HADOOP-12990
> URL: https://issues.apache.org/jira/browse/HADOOP-12990
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, native
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Priority: Minor
>
> {{hdfs dfs -text}} hit exception when trying to view the compression file 
> created by Linux lz4 tool.
> The Hadoop version has HADOOP-11184 "update lz4 to r123", thus it is using 
> LZ4 library in release r123.
> Linux lz4 version:
> {code}
> $ /tmp/lz4 -h 2>&1 | head -1
> *** LZ4 Compression CLI 64-bits r123, by Yann Collet (Apr  1 2016) ***
> {code}
> Test steps:
> {code}
> $ cat 10rows.txt
> 001|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 002|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 003|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 004|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 005|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 006|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 007|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 008|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 009|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 010|c1|c2|c3|c4|c5|c6|c7|c8|c9
> $ /tmp/lz4 10rows.txt 10rows.txt.r123.lz4
> Compressed 310 bytes into 105 bytes ==> 33.87%
> $ hdfs dfs -put 10rows.txt.r123.lz4 /tmp
> $ hdfs dfs -text /tmp/10rows.txt.r123.lz4
> 16/04/01 08:19:07 INFO compress.CodecPool: Got brand-new decompressor [.lz4]
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.getCompressedData(BlockDecompressorStream.java:123)
> at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:98)
> at 
> org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
> at java.io.InputStream.read(InputStream.java:101)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:106)
> at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:101)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:118)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13995) s3guard cli: make tests easier to run and address failure

2017-01-19 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830621#comment-15830621
 ] 

Sean Mackrory commented on HADOOP-13995:


Restarted the precommit job - I can't reproduce locally. I had that happen once 
before and it disappeared with no changes. I wonder if that repository is 
occasionally transiently unavailable?

> s3guard cli: make tests easier to run and address failure
> -
>
> Key: HADOOP-13995
> URL: https://issues.apache.org/jira/browse/HADOOP-13995
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Sean Mackrory
> Attachments: HADOOP-13995.001.patch, HADOOP-13995.002.patch, 
> HADOOP-13995-HADOOP-13345.003.patch
>
>
> Following up on HADOOP-13650, we should:
> - Make it clearer which config parameters need to be set for test to succeed, 
> and provide good defaults.
> - Address any remaining test failures.
> - Change TestS3GuardTool to an ITest



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13989) Remove erroneous source jar option from hadoop-client shade configuration

2017-01-19 Thread Joe Pallas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Pallas updated HADOOP-13989:

Attachment: HADOOP-13989.003.patch

Needed rebase after HADOOP-13999

> Remove erroneous source jar option from hadoop-client shade configuration
> -
>
> Key: HADOOP-13989
> URL: https://issues.apache.org/jira/browse/HADOOP-13989
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Joe Pallas
>Assignee: Joe Pallas
>Priority: Minor
> Attachments: HADOOP-13989.001.patch, HADOOP-13989.002.patch, 
> HADOOP-13989.003.patch
>
>
> The pom files for hadoop-client-minicluster and hadoop-client-runtime have a 
> typo in the configuration of the shade module.  They say 
> {{createSourceJar}} instead of {{createSourcesJar}}.  (This was noticed 
> by IntelliJ, but not by maven.)
> Shade plugin doc is at 
> [http://maven.apache.org/plugins/maven-shade-plugin/shade-mojo.html#createSourcesJar].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13956) Read ADLS credentials from Credential Provider

2017-01-19 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830632#comment-15830632
 ] 

John Zhuge commented on HADOOP-13956:
-

Deployed patch 006 to a 4-node cluster, and set up a JCE keystore on HDFS. 
Everything works well.

core-site.xml must have these configured:
{code}
<property>
  <name>hadoop.security.credential.provider.path</name>
  <value>jceks://hdfs/cdep/keystores/creds.jceks</value>
</property>
<property>
  <name>dfs.adls.oauth2.access.token.provider.type</name>
  <value>ClientCredential</value>
</property>
{code}

Run these commands to populate the keystore:
{code}
hadoop credential create dfs.adls.oauth2.client.id -value '123'
hadoop credential create dfs.adls.oauth2.credential -value '456'
hadoop credential create dfs.adls.oauth2.refresh.url -value '789'
{code}
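
To sanity-check what landed in the keystore, the standard list subcommand can be used:

{code}
hadoop credential list -provider jceks://hdfs/cdep/keystores/creds.jceks
{code}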

Unfortunately {{dfs.adls.oauth2.access.token.provider.type}} cannot easily be 
put into the keystore because {{Configuration#getEnum}} is used to get this 
property. Anyway it is not really a secret.

> Read ADLS credentials from Credential Provider
> --
>
> Key: HADOOP-13956
> URL: https://issues.apache.org/jira/browse/HADOOP-13956
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
> Attachments: HADOOP-13956.001.patch, HADOOP-13956.002.patch, 
> HADOOP-13956.003.patch, HADOOP-13956.004.patch, HADOOP-13956.005.patch, 
> HADOOP-13956.006.patch
>
>
> Read ADLS credentials using Hadoop CredentialProvider API. See 
> https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CredentialProviderAPI.html.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14001) Improve delegation token validity checking

2017-01-19 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830642#comment-15830642
 ] 

Yongjun Zhang commented on HADOOP-14001:


HI [~ajisakaa] and [~tlipcon],

Per https://docs.oracle.com/javase/7/docs/api/java/security/MessageDigest.html

{code}
public static boolean isEqual(byte[] digesta,
  byte[] digestb)
Compares two digests for equality. Does a simple byte compare.
Parameters:
digesta - one of the digests to compare.
digestb - the other digest to compare.
Returns:
true if the digests are equal, false otherwise.
{code}

It seems the original code has the same behavior as the changed code. The newer 
code looks more symbolic though. Would you please comment if there is any other diff?

Thanks.
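
For context on why the change still matters despite that javadoc text: in OpenJDK builds since 6u17, {{MessageDigest.isEqual}} is implemented as a constant-time comparison (the Java 7 javadoc was simply never updated), whereas {{Arrays.equals}} returns at the first mismatching byte and can therefore leak timing information about how much of a secret matched. A minimal sketch of the distinction; the helper below mirrors the constant-time shape and is illustrative, not the JDK source:

{code}
import java.security.MessageDigest;
import java.util.Arrays;

public final class DigestCompareDemo {
  // Constant-time shape: every byte is always examined, so the running
  // time does not depend on the position of the first mismatch.
  static boolean constantTimeEquals(byte[] a, byte[] b) {
    if (a == null || b == null || a.length != b.length) {
      return false;
    }
    int diff = 0;
    for (int i = 0; i < a.length; i++) {
      diff |= a[i] ^ b[i];
    }
    return diff == 0;
  }

  public static void main(String[] args) {
    byte[] x = {1, 2, 3};
    byte[] y = {1, 2, 4};
    System.out.println(Arrays.equals(x, y));         // false, stops at index 2
    System.out.println(constantTimeEquals(x, y));    // false, scans all bytes
    System.out.println(MessageDigest.isEqual(x, y)); // false, constant-time in modern JDKs
  }
}
{code}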


> Improve delegation token validity checking
> --
>
> Key: HADOOP-14001
> URL: https://issues.apache.org/jira/browse/HADOOP-14001
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HADOOP-14001.01.patch
>
>
> In AbstractDelegationSecretManager#verifyToken, MessageDigest.isEqual should 
> be used instead of Arrays.equals to compare byte arrays.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13496) Include file lengths in Mismatch in length error for distcp

2017-01-19 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13496:
-
Fix Version/s: 3.0.0-alpha2

> Include file lengths in Mismatch in length error for distcp
> ---
>
> Key: HADOOP-13496
> URL: https://issues.apache.org/jira/browse/HADOOP-13496
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: distcp
> Fix For: 3.0.0-alpha2, 2.8.1
>
> Attachments: HADOOP-13496.v1.patch, HADOOP-13496.v1.patch
>
>
> Currently RetriableFileCopyCommand doesn't show the perceived lengths in 
> Mismatch in length error:
> {code}
> 2016-08-12 10:23:14,231 ERROR [LocalJobRunner Map Task Executor #0] 
> util.RetriableCommand(89): Failure in Retriable command: Copying 
> hdfs://localhost:53941/user/tyu/test-data/dc7c674a-c463-4798-8260-   
> c5d1e3440a4b/WALs/10.22.9.171,53952,1471022508087/10.22.9.171%2C53952%2C1471022508087.regiongroup-1.1471022510182
>  to hdfs://localhost:53941/backupUT/backup_1471022580616/WALs/10.22.9.
>171%2C53952%2C1471022508087.regiongroup-1.1471022510182
> java.io.IOException: Mismatch in length of 
> source:hdfs://localhost:53941/user/tyu/test-data/dc7c674a-c463-4798-8260-c5d1e3440a4b/WALs/10.22.9.171,53952,1471022508087/10.22.9.171%2C53952%2C1471022508087.regiongroup-1.1471022510182
>  and 
> target:hdfs://localhost:53941/backupUT/backup_1471022580616/WALs/.distcp.tmp.attempt_local344329843_0006_m_00_0
>   at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.compareFileLengths(RetriableFileCopyCommand.java:193)
>   at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:126)
>   at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99)
>   at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>   at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:281)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:253)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
> {code}
> It would be helpful to include what's the expected length and what's the real 
> length.
> Thanks to [~yzhangal] for offline discussion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14002) Document -DskipShade property in BUILDING.txt

2017-01-19 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HADOOP-14002:

Attachment: HADOOP-14002.000.patch

> Document -DskipShade property in BUILDING.txt
> -
>
> Key: HADOOP-14002
> URL: https://issues.apache.org/jira/browse/HADOOP-14002
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-14002.000.patch
>
>
> HADOOP-13999 added a maven profile to disable client jar shading. This 
> property should be documented in BUILDING.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13876) S3Guard: better support for multi-bucket access including read-only

2017-01-19 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830928#comment-15830928
 ] 

Aaron Fabbri commented on HADOOP-13876:
---

Thanks [~steve_l].

I agree that most of this is addressed by per-bucket config.  On the "one 
DynamoDB table per cluster" part, however, there are still assumptions in the 
DynamoDB (DDB) code that a DynamoDBMetadataStore is 1:1 with an S3AFileSystem:

- Paths stored in DDB do not include the bucket name.
- DDB code uses the {{S3AFileSystem#getUri()}} value for the call to 
{{Path#makeQualified()}}.  See callers of {{itemToPathMetadata()}}. (This part 
actually breaks when the new 
{{DynamoDBMetadataStore#initialize(Configuration)}} method added for the CLI 
work is used).

I want to fix this part, as the single DDB table per cluster is the main use 
case my users want.  I already went through this exercise in LocalMetadataStore 
(which stores bucket name with path), so it should be straightforward.

I could see us merging to trunk without this fixed, if we could enforce that 
users can't access the same fs.s3a.s3guard.ddb.table with multiple buckets.  If 
they did that, it appears they'd risk collisions (e.g. s3a://bucket-a/path1 == 
s3a://bucket-b/path1)
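
A tiny illustration of that collision and of the bucket-qualified alternative; the class and method names below are invented for the example:

{code}
import java.net.URI;
import org.apache.hadoop.fs.Path;

public final class DdbKeySketch {
  // Current DDB behavior in spirit: the bucket is dropped from the key.
  static String keyWithoutBucket(Path p) {
    return p.toUri().getPath();              // "/path1" for both buckets
  }

  // LocalMetadataStore-style alternative: keep the bucket in the key.
  static String keyWithBucket(Path p) {
    URI u = p.toUri();
    return "/" + u.getHost() + u.getPath();  // "/bucket-a/path1" vs "/bucket-b/path1"
  }

  public static void main(String[] args) {
    Path a = new Path("s3a://bucket-a/path1");
    Path b = new Path("s3a://bucket-b/path1");
    System.out.println(keyWithoutBucket(a).equals(keyWithoutBucket(b))); // true: collision
    System.out.println(keyWithBucket(a).equals(keyWithBucket(b)));       // false
  }
}
{code}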






> S3Guard: better support for multi-bucket access including read-only
> ---
>
> Key: HADOOP-13876
> URL: https://issues.apache.org/jira/browse/HADOOP-13876
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Mingliang Liu
> Attachments: HADOOP-13876-HADOOP-13345.000.patch
>
>
> HADOOP-13449 adds support for DynamoDBMetadataStore.
> The code currently supports two options for choosing DynamoDB table names:
> 1. Use name of each s3 bucket and auto-create a DynamoDB table for each.
> 2. Configure a table name in the {{fs.s3a.s3guard.ddb.table}} parameter.
> One of the issues is with accessing read-only buckets.  If a user accesses a 
> read-only bucket with credentials that do not have DynamoDB write 
> permissions, they will get errors when trying to access the read-only bucket. 
>  This manifests causes test failures for {{ITestS3AAWSCredentialsProvider}}.
> Goals for this JIRA:
> - Fix {{ITestS3AAWSCredentialsProvider}} in a way that makes sense for the 
> real use-case.
> - Allow for a "one DynamoDB table per cluster" configuration with a way to 
> chose which credentials are used for DynamoDB.
> - Document limitations etc. in the s3guard.md site doc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14002) Document -DskipShade property in BUILDING.txt

2017-01-19 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830929#comment-15830929
 ] 

Arpit Agarwal commented on HADOOP-14002:


Thanks for documenting this [~hanishakoneru]. Can you please add a comment that 
the option may be used to speed up build times in a development environment and 
should not be used to build release artifacts.

Also it can be added it as a bullet point to the Build options section earlier 
to make it more visible.
{code}
 Build options:

  * Use -Pnative to compile/bundle native code
  * Use -Pdocs to generate & bundle the documentation in the distribution 
(using -Pdist)
  * Use -Psrc to create a project source TAR.GZ
  * Use -Dtar to create a TAR with the distribution (using -Pdist)
  * Use -Preleasedocs to include the changelog and release docs (requires 
Internet connectivity)
  * Use -Pyarn-ui to build YARN UI v2. (Requires Internet connectivity)
{code}
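
One possible wording for that bullet (only a suggestion, not committed text):

{code}
  * Use -DskipShade to disable client jar shading to speed up build times in a
    development environment (do NOT use this when building release artifacts)
{code}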

> Document -DskipShade property in BUILDING.txt
> -
>
> Key: HADOOP-14002
> URL: https://issues.apache.org/jira/browse/HADOOP-14002
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-14002.000.patch
>
>
> HADOOP-13999 added a maven profile to disable client jar shading. This 
> property should be documented in BUILDING.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14002) Document -DskipShade property in BUILDING.txt

2017-01-19 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830932#comment-15830932
 ] 

Arun Suresh commented on HADOOP-14002:
--

Thanks for the patch Hanisha. You should maybe also mention that this should be 
used only for dev environments to quickly compile the codebase, and should 
ideally NOT be used when creating a distribution using {{mvn package -Pdist}}.

> Document -DskipShade property in BUILDING.txt
> -
>
> Key: HADOOP-14002
> URL: https://issues.apache.org/jira/browse/HADOOP-14002
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-14002.000.patch
>
>
> HADOOP-13999 added a maven profile to disable client jar shading. This 
> property should be documented in BUILDING.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14002) Document -DskipShade property in BUILDING.txt

2017-01-19 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830984#comment-15830984
 ] 

Arun Suresh commented on HADOOP-14002:
--

+1 from me too. Thanks!

> Document -DskipShade property in BUILDING.txt
> -
>
> Key: HADOOP-14002
> URL: https://issues.apache.org/jira/browse/HADOOP-14002
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-14002.000.patch, HADOOP-14002.001.patch
>
>
> HADOOP-13999 added a maven profile to disable client jar shading. This 
> property should be documented in BUILDING.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-14002) Document -DskipShade property in BUILDING.txt

2017-01-19 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-14002.

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: (was: 3.0.0-alpha2)
   3.0.0-alpha3

Thanks for the review [~asuresh]. Committed this to trunk.

> Document -DskipShade property in BUILDING.txt
> -
>
> Key: HADOOP-14002
> URL: https://issues.apache.org/jira/browse/HADOOP-14002
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14002.000.patch, HADOOP-14002.001.patch
>
>
> HADOOP-13999 added a maven profile to disable client jar shading. This 
> property should be documented in BUILDING.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14002) Document -DskipShade property in BUILDING.txt

2017-01-19 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-14002:
---
Component/s: documentation

> Document -DskipShade property in BUILDING.txt
> -
>
> Key: HADOOP-14002
> URL: https://issues.apache.org/jira/browse/HADOOP-14002
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14002.000.patch, HADOOP-14002.001.patch
>
>
> HADOOP-13999 added a maven profile to disable client jar shading. This 
> property should be documented in BUILDING.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13336) S3A to support per-bucket configuration

2017-01-19 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13336:
-
Fix Version/s: 3.0.0-alpha2

> S3A to support per-bucket configuration
> ---
>
> Key: HADOOP-13336
> URL: https://issues.apache.org/jira/browse/HADOOP-13336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13336-006.patch, HADOOP-13336-007.patch, 
> HADOOP-13336-010.patch, HADOOP-13336-011.patch, 
> HADOOP-13336-HADOOP-13345-001.patch, HADOOP-13336-HADOOP-13345-002.patch, 
> HADOOP-13336-HADOOP-13345-003.patch, HADOOP-13336-HADOOP-13345-004.patch, 
> HADOOP-13336-HADOOP-13345-005.patch, HADOOP-13336-HADOOP-13345-006.patch, 
> HADOOP-13336-HADOOP-13345-008.patch, HADOOP-13336-HADOOP-13345-009.patch, 
> HADOOP-13336-HADOOP-13345-010.patch
>
>
> S3a now supports different regions by way of declaring the endpoint, but you 
> can't do things like read in one region and write back in another (e.g. a 
> distcp backup), because only one region can be specified in a configuration.
> If s3a supported region declaration in the URL, e.g. s3a://b1.frankfurt, 
> s3a://b2.seoul, then this would be possible.
> Swift does this with a full filesystem binding/config: endpoints, username, 
> etc., in the XML file. Would we need to do that much? It'd be simpler 
> initially to use a domain suffix of a URL to set the region of a bucket from 
> the domain, and have the aws library sort the details out itself, maybe with 
> some config options for working with non-AWS infra.
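
For illustration, a per-bucket override could instead be expressed with 
bucket-qualified configuration keys along these lines (the 
{{fs.s3a.bucket.<bucket>.<option>}} pattern and the endpoint values are 
assumptions sketched here, not something settled in the discussion above):

{code}
import org.apache.hadoop.conf.Configuration;

public class PerBucketEndpointSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Default endpoint, plus a per-bucket override for bucket "b1";
    // the key pattern and endpoint values are illustrative.
    conf.set("fs.s3a.endpoint", "s3.amazonaws.com");
    conf.set("fs.s3a.bucket.b1.endpoint", "s3.eu-central-1.amazonaws.com");
    System.out.println(conf.get("fs.s3a.bucket.b1.endpoint"));
  }
}
{code}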



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-19 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830888#comment-15830888
 ] 

Larry McCay commented on HADOOP-13988:
--

[~gss2002] - I just want to be clear that your latest patch is what is running 
in your cluster, not your original one. The fix that was required affected the 
code path taken.

> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.8.0, 2.7.3
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
> Attachments: HADOOP-13988.patch
>
>
> After upgrading to HDP 2.5.3.0, we noticed that not all of the 
> KMSClientProvider issues have been resolved. We put a test build together and 
> applied HADOOP-13558 and HADOOP-13749, but these two fixes still did not 
> solve the issue with requests coming from WebHDFS through Knox to a TDE zone.
> So we added some debug to our build and determined that what is effectively 
> happening here is a double-proxy situation, which does not seem to work. So 
> we propose the following fix in the getActualUgi method:
> {noformat}
>  }
>  // Use current user by default
>  UserGroupInformation actualUgi = currentUgi;
>  if (currentUgi.getRealUser() != null) {
>// Use real user for proxy user
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using RealUser for proxyUser);
>   }
>actualUgi = currentUgi.getRealUser();
>if (getDoAsUser() != null) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("doAsUser exists");
>   LOG.debug("currentUGI realUser shortName: {}", 
> currentUgi.getRealUser().getShortUserName());
>   LOG.debug("processUGI loginUser shortName: {}", 
> UserGroupInformation.getLoginUser().getShortUserName());
>   }
> if (!currentUgi.getRealUser().getShortUserName().equals(
> UserGroupInformation.getLoginUser().getShortUserName())) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("currentUGI.realUser does not match 
> UGI.processUser");
> }
> actualUgi = UserGroupInformation.getLoginUser();
> if (LOG.isDebugEnabled()) {
>   LOG.debug("LoginUser for Proxy: {}", 
> actualUgi.getLoginUser());
> }
> }
>}
>   
>  } else if (!currentUgiContainsKmsDt() &&
>  !currentUgi.hasKerberosCredentials()) {
>// Use login user for user that does not have either
>// Kerberos credential or KMS delegation token for KMS operations
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using loginUser no KMS Delegation Token no Kerberos 
> Credentials");
>   }
>actualUgi = currentUgi.getLoginUser();
>  }
>  return actualUgi;
>}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11794) distcp can copy blocks in parallel

2017-01-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830894#comment-15830894
 ] 

Hadoop QA commented on HADOOP-11794:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-tools/hadoop-distcp: The patch generated 
33 new + 230 unchanged - 11 fixed = 263 total (was 241) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m  
9s{color} | {color:red} hadoop-tools_hadoop-distcp generated 3 new + 49 
unchanged - 0 fixed = 52 total (was 49) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 47s{color} 
| {color:red} hadoop-distcp in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tools.mapred.TestCopyCommitter |
|   | hadoop.tools.TestOptionsParser |
|   | hadoop.tools.TestDistCpSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-11794 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12848419/HADOOP-11794.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bc131b55cd5f 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5d8b80e |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11473/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-distcp.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11473/artifact/patchprocess/diff-javadoc-javadoc-hadoop-tools_hadoop-distcp.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11473/artifact/patchprocess/patch-unit-hadoop-tools_hadoop-distcp.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11473/testReport/ |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11473/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Commented] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-19 Thread Greg Senia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830995#comment-15830995
 ] 

Greg Senia commented on HADOOP-13988:
-

yes its running in our cluster. Just put the newest patch out there here is log 
output from DN getting the request from Knox:

2017-01-19 20:33:12,835 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:logPrivilegedAction(1767)) - PrivilegedAction 
as:gss2002 (auth:PROXY) via knox (auth:TOKEN) 
from:org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.channelRead0(WebHdfsHandler.java:114)
2017-01-19 20:33:12,835 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:logPrivilegedAction(1767)) - PrivilegedAction 
as:gss2002 (auth:PROXY) via knox (auth:TOKEN) 
from:org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.channelRead0(WebHdfsHandler.java:114)
2017-01-19 20:33:12,873 DEBUG security.SecurityUtil 
(SecurityUtil.java:setTokenService(421)) - Acquired token Kind: 
HDFS_DELEGATION_TOKEN, Service: 10.70.33.6:8020, Ident: (HDFS_DELEGATION_TOKEN 
token 14666 for gss2002)
2017-01-19 20:33:12,873 DEBUG security.SecurityUtil 
(SecurityUtil.java:setTokenService(421)) - Acquired token Kind: 
HDFS_DELEGATION_TOKEN, Service: 10.70.33.6:8020, Ident: (HDFS_DELEGATION_TOKEN 
token 14666 for gss2002)
2017-01-19 20:33:12,874 DEBUG security.SecurityUtil 
(SecurityUtil.java:setTokenService(421)) - Acquired token Kind: 
HDFS_DELEGATION_TOKEN, Service: 10.70.33.7:8020, Ident: (HDFS_DELEGATION_TOKEN 
token 14666 for gss2002)
2017-01-19 20:33:12,874 DEBUG security.SecurityUtil 
(SecurityUtil.java:setTokenService(421)) - Acquired token Kind: 
HDFS_DELEGATION_TOKEN, Service: 10.70.33.7:8020, Ident: (HDFS_DELEGATION_TOKEN 
token 14666 for gss2002)
2017-01-19 20:33:13,061 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:logPrivilegedAction(1767)) - PrivilegedAction 
as:knox (auth:TOKEN) 
from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:758)
2017-01-19 20:33:13,061 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:logPrivilegedAction(1767)) - PrivilegedAction 
as:knox (auth:TOKEN) 
from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:758)
2017-01-19 20:33:13,099 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:logAllUserInfo(1774)) - UGI: gss2002 (auth:PROXY) 
via knox (auth:TOKEN)
2017-01-19 20:33:13,099 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:logAllUserInfo(1774)) - UGI: gss2002 (auth:PROXY) 
via knox (auth:TOKEN)
2017-01-19 20:33:13,100 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:logAllUserInfo(1776)) - +RealUGI: knox (auth:TOKEN)
2017-01-19 20:33:13,100 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:logAllUserInfo(1776)) - +RealUGI: knox (auth:TOKEN)
2017-01-19 20:33:13,100 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:logAllUserInfo(1777)) - +RealUGI: shortName: knox
2017-01-19 20:33:13,100 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:logAllUserInfo(1777)) - +RealUGI: shortName: knox
2017-01-19 20:33:13,100 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:logAllUserInfo(1780)) - +LoginUGI: 
dn/ha20t5002dn.tech.hdp.example@tech.hdp.example.com (auth:KERBEROS)
2017-01-19 20:33:13,100 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:logAllUserInfo(1780)) - +LoginUGI: 
dn/ha20t5002dn.tech.hdp.example@tech.hdp.example.com (auth:KERBEROS)
2017-01-19 20:33:13,100 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:logAllUserInfo(1781)) - +LoginUGI shortName: hdfs
2017-01-19 20:33:13,100 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:logAllUserInfo(1781)) - +LoginUGI shortName: hdfs
2017-01-19 20:33:13,100 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:logAllUserInfo(1784)) - +UGI token:Kind: 
HDFS_DELEGATION_TOKEN, Service: ha-hdfs:tech, Ident: (HDFS_DELEGATION_TOKEN 
token 14666 for gss2002)
2017-01-19 20:33:13,100 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:logAllUserInfo(1784)) - +UGI token:Kind: 
HDFS_DELEGATION_TOKEN, Service: ha-hdfs:tech, Ident: (HDFS_DELEGATION_TOKEN 
token 14666 for gss2002)
2017-01-19 20:33:13,100 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:logAllUserInfo(1784)) - +UGI token:Kind: 
HDFS_DELEGATION_TOKEN, Service: 10.70.33.7:8020, Ident: (HDFS_DELEGATION_TOKEN 
token 14666 for gss2002)
2017-01-19 20:33:13,100 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:logAllUserInfo(1784)) - +UGI token:Kind: 
HDFS_DELEGATION_TOKEN, Service: 10.70.33.7:8020, Ident: (HDFS_DELEGATION_TOKEN 
token 14666 for gss2002)
2017-01-19 20:33:13,101 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:logAllUserInfo(1784)) - +UGI token:Kind: 
HDFS_DELEGATION_TOKEN, Service: 10.70.33.6:8020, Ident: (HDFS_DELEGATION_TOKEN 

[jira] [Updated] (HADOOP-11794) distcp can copy blocks in parallel

2017-01-19 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-11794:
---
Attachment: HADOOP-11794.001.patch

Sorry for the long delay; attaching patch rev 001.

With this patch, we can pass -chunksize to distcp to tell it to split large 
files into chunks, each containing the number of blocks specified by this new 
parameter; the last chunk of a file may be smaller. CopyMapper will treat each 
chunk as a single file so the chunks can be copied in parallel, and the 
CopyCommitter concats the parts into one target file.

With this switch, we enable preserving block size, disable the randomization 
of entries in the sequence file, and disable the append feature. We could do 
further optimizations as follow-ups.
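
To make the chunking arithmetic concrete, here is a minimal standalone sketch 
(method and class names are illustrative; the real patch operates on distcp's 
copy listing, not raw offsets like this):

{code}
import java.util.ArrayList;
import java.util.List;

public class ChunkSplitSketch {
  /** Start offsets of chunks of chunkSizeBlocks blocks each;
   *  the last chunk of a file may be smaller. */
  static List<Long> chunkOffsets(long fileLen, long blockSize,
                                 int chunkSizeBlocks) {
    long chunkBytes = blockSize * chunkSizeBlocks;
    List<Long> offsets = new ArrayList<>();
    for (long off = 0; off < fileLen; off += chunkBytes) {
      offsets.add(off); // each chunk is copied by its own map task
    }
    return offsets;
  }

  public static void main(String[] args) {
    // A 10 GB file with 1 GB blocks and -chunksize 2 splits at 0, 2G, 4G, 6G, 8G.
    System.out.println(chunkOffsets(10L << 30, 1L << 30, 2));
  }
}
{code}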

Any review is very welcome!

Thanks a lot.



> distcp can copy blocks in parallel
> --
>
> Key: HADOOP-11794
> URL: https://issues.apache.org/jira/browse/HADOOP-11794
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 0.21.0
>Reporter: dhruba borthakur
>Assignee: Yongjun Zhang
> Attachments: HADOOP-11794.001.patch, MAPREDUCE-2257.patch
>
>
> The minimum unit of work for a distcp task is a file. We have files that are 
> greater than 1 TB with a block size of 1 GB. If we use distcp to copy these 
> files, the tasks either take a very long time or eventually fail. A better 
> way for distcp would be to copy all the source blocks in parallel, and then 
> stitch the blocks back into files at the destination via the HDFS Concat API 
> (HDFS-222).
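
The final stitching step might look roughly like the following sketch, which 
assumes an HDFS destination and placeholder chunk paths (concat is 
HDFS-specific, hence the cast):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class ConcatSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
    Path target = new Path("/backup/file.chunk0");   // placeholder paths
    Path[] rest = { new Path("/backup/file.chunk1"),
                    new Path("/backup/file.chunk2") };
    dfs.concat(target, rest); // moves the remaining chunks onto the target
  }
}
{code}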



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11794) distcp can copy blocks in parallel

2017-01-19 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-11794:
---
Status: Patch Available  (was: Open)

> distcp can copy blocks in parallel
> --
>
> Key: HADOOP-11794
> URL: https://issues.apache.org/jira/browse/HADOOP-11794
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 0.21.0
>Reporter: dhruba borthakur
>Assignee: Yongjun Zhang
> Attachments: HADOOP-11794.001.patch, MAPREDUCE-2257.patch
>
>
> The minimum unit of work for a distcp task is a file. We have files that are 
> greater than 1 TB with a block size of 1 GB. If we use distcp to copy these 
> files, the tasks either take a very long time or eventually fail. A better 
> way for distcp would be to copy all the source blocks in parallel, and then 
> stitch the blocks back into files at the destination via the HDFS Concat API 
> (HDFS-222).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13431) Fix error propagation when hot swap drives disallow HSM types.

2017-01-19 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13431:
-
Target Version/s: 2.8.0, 3.0.0-alpha3  (was: 2.8.0, 3.0.0-alpha2)

> Fix error propagation when hot swap drives disallow HSM types.
> --
>
> Key: HADOOP-13431
> URL: https://issues.apache.org/jira/browse/HADOOP-13431
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HDFS-13431.00.patch
>
>
> When the DataNode detects an HSM type change during the hot-swap drive 
> process, it should raise an IOE and pass it through to the {{DFSAdmin}} 
> shell command.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13858) TestGridmixMemoryEmulation and TestResourceUsageEmulators fail on the environment other than Linux or Windows

2017-01-19 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13858:
-
Target Version/s: 2.8.0, 3.0.0-alpha3  (was: 2.8.0, 3.0.0-alpha2)

> TestGridmixMemoryEmulation and TestResourceUsageEmulators fail on the 
> environment other than Linux or Windows
> -
>
> Key: HADOOP-13858
> URL: https://issues.apache.org/jira/browse/HADOOP-13858
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
> Environment: other than Linux or Windows, such as Mac
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13858.01.patch
>
>
> TestGridmixMemoryEmulation.testTotalHeapUsageEmulatorPlugin fails on Mac.
> {noformat}
> Running org.apache.hadoop.mapred.gridmix.TestGridmixMemoryEmulation
> Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.896 sec <<< 
> FAILURE! - in org.apache.hadoop.mapred.gridmix.TestGridmixMemoryEmulation
> testTotalHeapUsageEmulatorPlugin(org.apache.hadoop.mapred.gridmix.TestGridmixMemoryEmulation)
>   Time elapsed: 0.009 sec  <<< ERROR!
> java.lang.UnsupportedOperationException: Could not determine OS
>   at org.apache.hadoop.util.SysInfo.newInstance(SysInfo.java:43)
>   at 
> org.apache.hadoop.yarn.util.ResourceCalculatorPlugin.(ResourceCalculatorPlugin.java:42)
>   at 
> org.apache.hadoop.mapred.gridmix.DummyResourceCalculatorPlugin.(DummyResourceCalculatorPlugin.java:32)
>   at 
> org.apache.hadoop.mapred.gridmix.TestGridmixMemoryEmulation.testTotalHeapUsageEmulatorPlugin(TestGridmixMemoryEmulation.java:131)
> {noformat}
> The following tests fail on Mac as well:
> * TestResourceUsageEmulators.testCumulativeCpuUsageEmulatorPlugin
> * TestResourceUsageEmulators.testCpuUsageEmulator
> * TestResourceUsageEmulators.testResourceUsageMatcherRunner



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13805) UGI.getCurrentUser() fails if user does not have a keytab associated

2017-01-19 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13805:
-
Target Version/s: 2.8.0, 2.7.4, 3.0.0-alpha3  (was: 2.8.0, 2.7.4, 
3.0.0-alpha2)

> UGI.getCurrentUser() fails if user does not have a keytab associated
> 
>
> Key: HADOOP-13805
> URL: https://issues.apache.org/jira/browse/HADOOP-13805
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 2.9.0, 3.0.0-alpha2
>Reporter: Alejandro Abdelnur
>Assignee: Xiao Chen
> Attachments: HADOOP-13805.006.patch, HADOOP-13805.01.patch, 
> HADOOP-13805.02.patch, HADOOP-13805.03.patch, HADOOP-13805.04.patch, 
> HADOOP-13805.05.patch
>
>
> The intention of HADOOP-13558 was to stop UGI from trying to renew the TGT 
> when the UGI is created from an existing Subject, as in that case the keytab 
> is not 'owned' by UGI but by the creator of the Subject.
> In HADOOP-13558 we introduced a new private UGI constructor 
> {{UserGroupInformation(Subject subject, final boolean externalKeyTab)}} and 
> we use it with TRUE only when doing a {{UGI.loginUserFromSubject()}}.
> The problem is, when we call {{UGI.getCurrentUser()}} and the UGI was created 
> via a Subject (via the {{UGI.loginUserFromSubject()}} method), we call {{new 
> UserGroupInformation(subject)}}, which delegates to 
> {{UserGroupInformation(Subject subject, final boolean externalKeyTab)}} with 
> externalKeyTab == *FALSE*.
> Then the UGI returned by {{UGI.getCurrentUser()}} will attempt to log in 
> using a non-existent keytab if the TGT expired.
> This problem is experienced in {{KMSClientProvider}} when used by the HDFS 
> filesystem client accessing an encryption zone.
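
A minimal model of the constructor chain described above (this is not the real 
{{UserGroupInformation}}; it only mirrors the delegation from the description):

{code}
public class UgiDelegationModel {
  final boolean externalKeyTab;

  // Mirrors UserGroupInformation(Subject): always delegates with FALSE,
  // which is the path taken by UGI.getCurrentUser().
  UgiDelegationModel(Object subject) {
    this(subject, false);
  }

  // Mirrors UserGroupInformation(Subject, boolean externalKeyTab);
  // only loginUserFromSubject() passes TRUE.
  UgiDelegationModel(Object subject, boolean externalKeyTab) {
    this.externalKeyTab = externalKeyTab;
  }

  public static void main(String[] args) {
    // The one-argument path loses the externalKeyTab flag: prints "false".
    System.out.println(new UgiDelegationModel(new Object()).externalKeyTab);
  }
}
{code}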



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13858) TestGridmixMemoryEmulation and TestResourceUsageEmulators fail on the environment other than Linux or Windows

2017-01-19 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13858:
-
Target Version/s: 2.9.0, 3.0.0-alpha3  (was: 2.8.0, 3.0.0-alpha3)

> TestGridmixMemoryEmulation and TestResourceUsageEmulators fail on the 
> environment other than Linux or Windows
> -
>
> Key: HADOOP-13858
> URL: https://issues.apache.org/jira/browse/HADOOP-13858
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
> Environment: other than Linux or Windows, such as Mac
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13858.01.patch
>
>
> TestGridmixMemoryEmulation.testTotalHeapUsageEmulatorPlugin fails on Mac.
> {noformat}
> Running org.apache.hadoop.mapred.gridmix.TestGridmixMemoryEmulation
> Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.896 sec <<< 
> FAILURE! - in org.apache.hadoop.mapred.gridmix.TestGridmixMemoryEmulation
> testTotalHeapUsageEmulatorPlugin(org.apache.hadoop.mapred.gridmix.TestGridmixMemoryEmulation)
>   Time elapsed: 0.009 sec  <<< ERROR!
> java.lang.UnsupportedOperationException: Could not determine OS
>   at org.apache.hadoop.util.SysInfo.newInstance(SysInfo.java:43)
>   at 
> org.apache.hadoop.yarn.util.ResourceCalculatorPlugin.(ResourceCalculatorPlugin.java:42)
>   at 
> org.apache.hadoop.mapred.gridmix.DummyResourceCalculatorPlugin.(DummyResourceCalculatorPlugin.java:32)
>   at 
> org.apache.hadoop.mapred.gridmix.TestGridmixMemoryEmulation.testTotalHeapUsageEmulatorPlugin(TestGridmixMemoryEmulation.java:131)
> {noformat}
> The following tests fail on Mac as well:
> * TestResourceUsageEmulators.testCumulativeCpuUsageEmulatorPlugin
> * TestResourceUsageEmulators.testCpuUsageEmulator
> * TestResourceUsageEmulators.testResourceUsageMatcherRunner



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14002) Document -DskipShade property in BUILDING.txt

2017-01-19 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HADOOP-14002:
---

 Summary: Document -DskipShade property in BUILDING.txt
 Key: HADOOP-14002
 URL: https://issues.apache.org/jira/browse/HADOOP-14002
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru
Priority: Minor
 Fix For: 3.0.0-alpha2


HADOOP-13999 added a maven profile to disable client jar shading. This property 
should be documented in BUILDING.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14003) Make additional KMS tomcat settings configurable

2017-01-19 Thread Andrew Wang (JIRA)
Andrew Wang created HADOOP-14003:


 Summary: Make additional KMS tomcat settings configurable
 Key: HADOOP-14003
 URL: https://issues.apache.org/jira/browse/HADOOP-14003
 Project: Hadoop Common
  Issue Type: Improvement
  Components: kms
Affects Versions: 2.8.0
Reporter: Andrew Wang
Assignee: Andrew Wang


Doing some Tomcat performance tuning on a loaded cluster, we found that 
{{acceptCount}}, {{acceptorThreadCount}}, and {{protocol}} can be useful. Let's 
make these configurable in the kms startup script.

Since the KMS is Jetty in 3.x, this is targeted at just branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-19 Thread Greg Senia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Senia updated HADOOP-13988:

Attachment: HADOOP-13988.patch

> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.8.0, 2.7.3
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
> Attachments: HADOOP-13988.patch, HADOOP-13988.patch
>
>
> After upgrading to HDP 2.5.3.0, we noticed that not all of the 
> KMSClientProvider issues have been resolved. We put a test build together and 
> applied HADOOP-13558 and HADOOP-13749, but these two fixes still did not 
> solve the issue with requests coming from WebHDFS through Knox to a TDE zone.
> So we added some debug to our build and determined that what is effectively 
> happening here is a double-proxy situation, which does not seem to work. So 
> we propose the following fix in the getActualUgi method:
> {noformat}
>  }
>  // Use current user by default
>  UserGroupInformation actualUgi = currentUgi;
>  if (currentUgi.getRealUser() != null) {
>// Use real user for proxy user
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using RealUser for proxyUser);
>   }
>actualUgi = currentUgi.getRealUser();
>if (getDoAsUser() != null) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("doAsUser exists");
>   LOG.debug("currentUGI realUser shortName: {}", 
> currentUgi.getRealUser().getShortUserName());
>   LOG.debug("processUGI loginUser shortName: {}", 
> UserGroupInformation.getLoginUser().getShortUserName());
>   }
> if (!currentUgi.getRealUser().getShortUserName().equals(
> UserGroupInformation.getLoginUser().getShortUserName())) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("currentUGI.realUser does not match 
> UGI.processUser");
> }
> actualUgi = UserGroupInformation.getLoginUser();
> if (LOG.isDebugEnabled()) {
>   LOG.debug("LoginUser for Proxy: {}", 
> actualUgi.getLoginUser());
> }
> }
>}
>   
>  } else if (!currentUgiContainsKmsDt() &&
>  !currentUgi.hasKerberosCredentials()) {
>// Use login user for user that does not have either
>// Kerberos credential or KMS delegation token for KMS operations
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using loginUser no KMS Delegation Token no Kerberos 
> Credentials");
>   }
>actualUgi = currentUgi.getLoginUser();
>  }
>  return actualUgi;
>}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-19 Thread Greg Senia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Senia updated HADOOP-13988:

Status: Patch Available  (was: Open)

> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.7.3, 2.8.0
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
> Attachments: HADOOP-13988.patch, HADOOP-13988.patch
>
>
> After upgrading to HDP 2.5.3.0, we noticed that not all of the 
> KMSClientProvider issues have been resolved. We put a test build together and 
> applied HADOOP-13558 and HADOOP-13749, but these two fixes still did not 
> solve the issue with requests coming from WebHDFS through Knox to a TDE zone.
> So we added some debug to our build and determined that what is effectively 
> happening here is a double-proxy situation, which does not seem to work. So 
> we propose the following fix in the getActualUgi method:
> {noformat}
>  }
>  // Use current user by default
>  UserGroupInformation actualUgi = currentUgi;
>  if (currentUgi.getRealUser() != null) {
>// Use real user for proxy user
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using RealUser for proxyUser);
>   }
>actualUgi = currentUgi.getRealUser();
>if (getDoAsUser() != null) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("doAsUser exists");
>   LOG.debug("currentUGI realUser shortName: {}", 
> currentUgi.getRealUser().getShortUserName());
>   LOG.debug("processUGI loginUser shortName: {}", 
> UserGroupInformation.getLoginUser().getShortUserName());
>   }
> if (!currentUgi.getRealUser().getShortUserName().equals(
> UserGroupInformation.getLoginUser().getShortUserName())) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("currentUGI.realUser does not match 
> UGI.processUser");
> }
> actualUgi = UserGroupInformation.getLoginUser();
> if (LOG.isDebugEnabled()) {
>   LOG.debug("LoginUser for Proxy: {}", 
> actualUgi.getLoginUser());
> }
> }
>}
>   
>  } else if (!currentUgiContainsKmsDt() &&
>  !currentUgi.hasKerberosCredentials()) {
>// Use login user for user that does not have either
>// Kerberos credential or KMS delegation token for KMS operations
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using loginUser no KMS Delegation Token no Kerberos 
> Credentials");
>   }
>actualUgi = currentUgi.getLoginUser();
>  }
>  return actualUgi;
>}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-19 Thread Greg Senia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Senia updated HADOOP-13988:

Status: Open  (was: Patch Available)

> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.7.3, 2.8.0
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
> Attachments: HADOOP-13988.patch, HADOOP-13988.patch
>
>
> After upgrading to HDP 2.5.3.0, we noticed that not all of the 
> KMSClientProvider issues have been resolved. We put a test build together and 
> applied HADOOP-13558 and HADOOP-13749, but these two fixes still did not 
> solve the issue with requests coming from WebHDFS through Knox to a TDE zone.
> So we added some debug to our build and determined that what is effectively 
> happening here is a double-proxy situation, which does not seem to work. So 
> we propose the following fix in the getActualUgi method:
> {noformat}
>  }
>  // Use current user by default
>  UserGroupInformation actualUgi = currentUgi;
>  if (currentUgi.getRealUser() != null) {
>// Use real user for proxy user
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using RealUser for proxyUser);
>   }
>actualUgi = currentUgi.getRealUser();
>if (getDoAsUser() != null) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("doAsUser exists");
>   LOG.debug("currentUGI realUser shortName: {}", 
> currentUgi.getRealUser().getShortUserName());
>   LOG.debug("processUGI loginUser shortName: {}", 
> UserGroupInformation.getLoginUser().getShortUserName());
>   }
> if (!currentUgi.getRealUser().getShortUserName().equals(
> UserGroupInformation.getLoginUser().getShortUserName())) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("currentUGI.realUser does not match 
> UGI.processUser");
> }
> actualUgi = UserGroupInformation.getLoginUser();
> if (LOG.isDebugEnabled()) {
>   LOG.debug("LoginUser for Proxy: {}", 
> actualUgi.getLoginUser());
> }
> }
>}
>   
>  } else if (!currentUgiContainsKmsDt() &&
>  !currentUgi.hasKerberosCredentials()) {
>// Use login user for user that does not have either
>// Kerberos credential or KMS delegation token for KMS operations
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using loginUser no KMS Delegation Token no Kerberos 
> Credentials");
>   }
>actualUgi = currentUgi.getLoginUser();
>  }
>  return actualUgi;
>}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13876) S3Guard: better support for multi-bucket access including read-only

2017-01-19 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri reassigned HADOOP-13876:
-

Assignee: Aaron Fabbri  (was: Mingliang Liu)

> S3Guard: better support for multi-bucket access including read-only
> ---
>
> Key: HADOOP-13876
> URL: https://issues.apache.org/jira/browse/HADOOP-13876
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13876-HADOOP-13345.000.patch
>
>
> HADOOP-13449 adds support for DynamoDBMetadataStore.
> The code currently supports two options for choosing DynamoDB table names:
> 1. Use name of each s3 bucket and auto-create a DynamoDB table for each.
> 2. Configure a table name in the {{fs.s3a.s3guard.ddb.table}} parameter.
> One of the issues is with accessing read-only buckets.  If a user accesses a 
> read-only bucket with credentials that do not have DynamoDB write 
> permissions, they will get errors when trying to access the read-only bucket. 
>  This manifests as test failures for {{ITestS3AAWSCredentialsProvider}}.
> Goals for this JIRA:
> - Fix {{ITestS3AAWSCredentialsProvider}} in a way that makes sense for the 
> real use-case.
> - Allow for a "one DynamoDB table per cluster" configuration with a way to 
> choose which credentials are used for DynamoDB.
> - Document limitations etc. in the s3guard.md site doc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-19 Thread Greg Senia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15831010#comment-15831010
 ] 

Greg Senia commented on HADOOP-13988:
-

[~lmccay] the logs above are from the patch uploaded an hour ago. Let me know 
if it looks like the code path is wrong; from what I can see, the code path is 
working correctly, and the !equals check is definitely working correctly. If 
it wasn't, it would have failed.


Also, here is the patch output from my last build about an hour ago, with the 
updated patch from today:

ETG-GSeni-MBP:hadoop-release gss2002$ patch -p1 < 
../../kmsfixes/HADOOP-13558.02.patch 
patching file 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
Hunk #1 succeeded at 618 with fuzz 1 (offset -14 lines).
Hunk #2 succeeded at 825 (offset -40 lines).
patching file 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
Hunk #1 succeeded at 31 (offset -1 lines).
Hunk #2 succeeded at 902 with fuzz 2 (offset -111 lines).




ETG-GSeni-MBP:hadoop-release gss2002$ patch -p1 < 
../../kmsfixes/HADOOP-13749.00.patch 
patching file 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
Hunk #4 succeeded at 901 (offset 2 lines).
Hunk #5 succeeded at 924 (offset 2 lines).
Hunk #6 succeeded at 996 (offset 2 lines).
Hunk #7 succeeded at 1042 (offset 2 lines).
patching file 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
Hunk #1 succeeded at 1768 (offset -55 lines).
patching file 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
Hunk #1 succeeded at 1825 (offset -8 lines).
Hunk #2 succeeded at 2149 (offset -5 lines).


ETG-GSeni-MBP:hadoop-release gss2002$ patch -p1 < ../../HADOOP-13988.patch 
patching file 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
Hunk #1 succeeded at 1052 (offset -10 lines).
patching file 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
Hunk #1 succeeded at 1774 (offset -67 lines).

> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.8.0, 2.7.3
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
> Attachments: HADOOP-13988.patch, HADOOP-13988.patch
>
>
> After upgrading to HDP 2.5.3.0, we noticed that not all of the 
> KMSClientProvider issues have been resolved. We put a test build together and 
> applied HADOOP-13558 and HADOOP-13749, but these two fixes still did not 
> solve the issue with requests coming from WebHDFS through Knox to a TDE zone.
> So we added some debug to our build and determined that what is effectively 
> happening here is a double-proxy situation, which does not seem to work. So 
> we propose the following fix in the getActualUgi method:
> {noformat}
>  }
>  // Use current user by default
>  UserGroupInformation actualUgi = currentUgi;
>  if (currentUgi.getRealUser() != null) {
>// Use real user for proxy user
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using RealUser for proxyUser);
>   }
>actualUgi = currentUgi.getRealUser();
>if (getDoAsUser() != null) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("doAsUser exists");
>   LOG.debug("currentUGI realUser shortName: {}", 
> currentUgi.getRealUser().getShortUserName());
>   LOG.debug("processUGI loginUser shortName: {}", 
> UserGroupInformation.getLoginUser().getShortUserName());
>   }
> if (!currentUgi.getRealUser().getShortUserName().equals(
> UserGroupInformation.getLoginUser().getShortUserName())) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("currentUGI.realUser does not match 
> UGI.processUser");
> }
> actualUgi = UserGroupInformation.getLoginUser();
> if (LOG.isDebugEnabled()) {
>   LOG.debug("LoginUser for Proxy: {}", 
> actualUgi.getLoginUser());
> }
> }
>}
>   
>  } else if (!currentUgiContainsKmsDt() &&
>  !currentUgi.hasKerberosCredentials()) {
>// Use login user for user that does not have either
>// Kerberos credential or KMS delegation token for KMS operations
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using loginUser no KMS Delegation Token no 

[jira] [Commented] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-19 Thread Greg Senia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830882#comment-15830882
 ] 

Greg Senia commented on HADOOP-13988:
-

[~xyao] We are currently running the fix patched into our HDP 2.5.3.0 build. We 
grabbed the HDP-2.5.3.0 tag from HWX GitHub and recompiled with this fix and 
the two fixes it depends on. We have been running this fix for over a week now 
in our test environment: 2 NNs w/HA and their associated components (3 JNs and 
2 ZKFCs), 2 RMs, 4 DNs/RSs/NMs, 2 HiveServer2/Metastores, 2 HBaseMasters, a 
node running Knox for WebHDFS, Oozie, and HiveServer2 HTTP access, and 1 node 
as an Oozie server. We have a data ingest framework that runs continuously in 
this environment; it has run with no issues for the last week since applying 
the fixes, and a TDE file requested via Knox to WebHDFS is returned correctly. 
I will look at adjusting the above code with regard to logging.

> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.8.0, 2.7.3
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
> Attachments: HADOOP-13988.patch
>
>
> After upgrading to HDP 2.5.3.0, we noticed that not all of the 
> KMSClientProvider issues have been resolved. We put a test build together and 
> applied HADOOP-13558 and HADOOP-13749, but these two fixes still did not 
> solve the issue with requests coming from WebHDFS through Knox to a TDE zone.
> So we added some debug to our build and determined that what is effectively 
> happening here is a double-proxy situation, which does not seem to work. So 
> we propose the following fix in the getActualUgi method:
> {noformat}
>  }
>  // Use current user by default
>  UserGroupInformation actualUgi = currentUgi;
>  if (currentUgi.getRealUser() != null) {
>// Use real user for proxy user
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using RealUser for proxyUser);
>   }
>actualUgi = currentUgi.getRealUser();
>if (getDoAsUser() != null) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("doAsUser exists");
>   LOG.debug("currentUGI realUser shortName: {}", 
> currentUgi.getRealUser().getShortUserName());
>   LOG.debug("processUGI loginUser shortName: {}", 
> UserGroupInformation.getLoginUser().getShortUserName());
>   }
> if (!currentUgi.getRealUser().getShortUserName().equals(
> UserGroupInformation.getLoginUser().getShortUserName())) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("currentUGI.realUser does not match 
> UGI.processUser");
> }
> actualUgi = UserGroupInformation.getLoginUser();
> if (LOG.isDebugEnabled()) {
>   LOG.debug("LoginUser for Proxy: {}", 
> actualUgi.getLoginUser());
> }
> }
>}
>   
>  } else if (!currentUgiContainsKmsDt() &&
>  !currentUgi.hasKerberosCredentials()) {
>// Use login user for user that does not have either
>// Kerberos credential or KMS delegation token for KMS operations
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using loginUser no KMS Delegation Token no Kerberos 
> Credentials");
>   }
>actualUgi = currentUgi.getLoginUser();
>  }
>  return actualUgi;
>}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14003) Make additional KMS tomcat settings configurable

2017-01-19 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14003:
-
Attachment: hadoop-14003.001.patch

Patch attached; I did some manual testing. You can check the {{/kms/jmx}} 
endpoint to validate that Tomcat is picking up the configuration settings.

> Make additional KMS tomcat settings configurable
> 
>
> Key: HADOOP-14003
> URL: https://issues.apache.org/jira/browse/HADOOP-14003
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hadoop-14003.001.patch
>
>
> Doing some Tomcat performance tuning on a loaded cluster, we found that 
> {{acceptCount}}, {{acceptorThreadCount}}, and {{protocol}} can be useful. 
> Let's make these configurable in the kms startup script.
> Since the KMS is Jetty in 3.x, this is targeted at just branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14003) Make additional KMS tomcat settings configurable

2017-01-19 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14003:
-
Status: Patch Available  (was: Open)

> Make additional KMS tomcat settings configurable
> 
>
> Key: HADOOP-14003
> URL: https://issues.apache.org/jira/browse/HADOOP-14003
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hadoop-14003.001.patch
>
>
> Doing some Tomcat performance tuning on a loaded cluster, we found that 
> {{acceptCount}}, {{acceptorThreadCount}}, and {{protocol}} can be useful. 
> Let's make these configurable in the kms startup script.
> Since the KMS is Jetty in 3.x, this is targeted at just branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13592) Outputs errors and warnings by checkstyle at compile time

2017-01-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830921#comment-15830921
 ] 

Hadoop QA commented on HADOOP-13592:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HADOOP-13592 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-13592 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12827976/HADOOP-13592.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11475/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Outputs errors and warnings by checkstyle at compile time
> -
>
> Key: HADOOP-13592
> URL: https://issues.apache.org/jira/browse/HADOOP-13592
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
> Attachments: HADOOP-13592.001.patch
>
>
> Currently, Apache Hadoop has lots of checkstyle errors and warnings, but they 
> are not output at compile time. This prevents us from fixing the errors and 
> warnings.
> We should output errors and warnings at compile time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14002) Document -DskipShade property in BUILDING.txt

2017-01-19 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HADOOP-14002:

Attachment: HADOOP-14002.001.patch

Thanks [~arpitagarwal] and [~asuresh] for reviewing the patch.
I have addressed your comments in patch v01.

> Document -DskipShade property in BUILDING.txt
> -
>
> Key: HADOOP-14002
> URL: https://issues.apache.org/jira/browse/HADOOP-14002
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-14002.000.patch, HADOOP-14002.001.patch
>
>
> HADOOP-13999 added a maven profile to disable client jar shading. This 
> property should be documented in BUILDING.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14002) Document -DskipShade property in BUILDING.txt

2017-01-19 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830981#comment-15830981
 ] 

Arpit Agarwal commented on HADOOP-14002:


+1 thanks [~hanishakoneru]. [~asuresh], does this look accurate to you?

> Document -DskipShade property in BUILDING.txt
> -
>
> Key: HADOOP-14002
> URL: https://issues.apache.org/jira/browse/HADOOP-14002
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-14002.000.patch, HADOOP-14002.001.patch
>
>
> HADOOP-13999 added a maven profile to disable client jar shading. This 
> property should be documented in BUILDING.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14003) Make additional KMS tomcat settings configurable

2017-01-19 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14003:
-
Attachment: hadoop-14003-branch-2.001.patch

Forgot to name it with "branch-2"; same patch, different name.

> Make additional KMS tomcat settings configurable
> 
>
> Key: HADOOP-14003
> URL: https://issues.apache.org/jira/browse/HADOOP-14003
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hadoop-14003.001.patch, hadoop-14003-branch-2.001.patch
>
>
> Doing some Tomcat performance tuning on a loaded cluster, we found that 
> {{acceptCount}}, {{acceptorThreadCount}}, and {{protocol}} can be useful. 
> Let's make these configurable in the kms startup script.
> Since the KMS is Jetty in 3.x, this is targeted at just branch-2.
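
For context, these map onto standard attributes of the Tomcat HTTP connector. 
A hedged illustration of the resulting server.xml entry (the values and the 
NIO protocol choice are examples only, not recommendations from this issue):

{code}
<!-- illustrative KMS connector tuning; all attribute values are examples -->
<Connector port="16000"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           acceptCount="500"
           acceptorThreadCount="2"
           maxThreads="1000"/>
{code}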



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14002) Document -DskipShade property in BUILDING.txt

2017-01-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15831122#comment-15831122
 ] 

Hudson commented on HADOOP-14002:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11150 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11150/])
HADOOP-14002. Document -DskipShade property in BUILDING.txt. Contributed (arp: 
rev 60865c8ea08053f3d6ac23f81c3376a3de3ca996)
* (edit) BUILDING.txt


> Document -DskipShade property in BUILDING.txt
> -
>
> Key: HADOOP-14002
> URL: https://issues.apache.org/jira/browse/HADOOP-14002
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14002.000.patch, HADOOP-14002.001.patch
>
>
> HADOOP-13999 added a maven profile to disable client jar shading. This 
> property should be documented in BUILDING.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14003) Make additional KMS tomcat settings configurable

2017-01-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15831059#comment-15831059
 ] 

Hadoop QA commented on HADOOP-14003:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HADOOP-14003 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-14003 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12848431/hadoop-14003.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11476/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Make additional KMS tomcat settings configurable
> 
>
> Key: HADOOP-14003
> URL: https://issues.apache.org/jira/browse/HADOOP-14003
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hadoop-14003.001.patch
>
>
> Doing some Tomcat performance tuning on a loaded cluster, we found that 
> {{acceptCount}}, {{acceptorThreadCount}}, and {{protocol}} can be useful. 
> Let's make these configurable in the kms startup script.
> Since the KMS is Jetty in 3.x, this is targeted at just branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13431) Fix error propagation when hot swap drives disallow HSM types.

2017-01-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15831108#comment-15831108
 ] 

Hadoop QA commented on HADOOP-13431:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}123m 43s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}152m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13431 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820292/HDFS-13431.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c320be78f983 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5d8b80e |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11474/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11474/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11474/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix error propagation when hot swap drives disallow HSM types.
> 

[jira] [Commented] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15831124#comment-15831124
 ] 

Hadoop QA commented on HADOOP-13988:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
53s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13988 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12848433/HADOOP-13988.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1892cdc91763 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 60865c8 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11477/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11477/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.8.0, 2.7.3
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>  

[jira] [Commented] (HADOOP-14003) Make additional KMS tomcat settings configurable

2017-01-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15831125#comment-15831125
 ] 

Hadoop QA commented on HADOOP-14003:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HADOOP-14003 does not apply to branch-2. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-14003 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12848452/hadoop-14003-branch-2.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11478/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Make additional KMS tomcat settings configurable
> 
>
> Key: HADOOP-14003
> URL: https://issues.apache.org/jira/browse/HADOOP-14003
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hadoop-14003.001.patch, hadoop-14003-branch-2.001.patch
>
>
> Doing some Tomcat performance tuning on a loaded cluster, we found that 
> {{acceptCount}}, {{acceptorThreadCount}}, and {{protocol}} can be useful. 
> Let's make these configurable in the kms startup script.
> Since the KMS is Jetty in 3.x, this is targeted at just branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13957) prevent bad PATHs

2017-01-19 Thread Andres Perez (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830376#comment-15830376
 ] 

Andres Perez commented on HADOOP-13957:
---

I guess you are right; I was thinking more from a dev-environment POV, but even 
then, having the directories world-writable doesn't make sense.

> prevent bad PATHs
> -
>
> Key: HADOOP-13957
> URL: https://issues.apache.org/jira/browse/HADOOP-13957
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>
> Apache Hadoop daemons should fail to start if the shell PATH contains world 
> writable directories or '.' (cwd).  Doing so would close an attack vector on 
> misconfigured systems.
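
A minimal sketch of such a startup check in shell (the function name is 
hypothetical; this is not the actual Hadoop shell code):

{code}
# Fail fast if PATH contains '.' (or an empty entry, which also means the
# cwd) or a world-writable directory.
hadoop_verify_path() {
  local dir
  local IFS=':'
  for dir in ${PATH}; do
    if [[ "${dir}" == "." || -z "${dir}" ]]; then
      echo "ERROR: PATH contains the current working directory" 1>&2
      exit 1
    fi
    if [[ -d "${dir}" ]] \
        && [[ -n "$(find "${dir}" -maxdepth 0 -perm -o+w 2>/dev/null)" ]]; then
      echo "ERROR: PATH entry ${dir} is world writable" 1>&2
      exit 1
    fi
  done
}
{code}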



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user

2017-01-19 Thread Andres Perez (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830410#comment-15830410
 ] 

Andres Perez commented on HADOOP-12953:
---

Retesting this patch

> New API for libhdfs to get FileSystem object as a proxy user
> 
>
> Key: HADOOP-12953
> URL: https://issues.apache.org/jira/browse/HADOOP-12953
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Uday Kale
>Assignee: Uday Kale
> Attachments: HADOOP-12953.001.patch, HADOOP-12953.002.patch
>
>
> Secure impersonation in HDFS needs users to create proxy users and work with 
> those. In libhdfs, the hdfsBuilder accepts a userName but calls 
> FileSystem.get() or FileSystem.newInstance() with the user name to connect as. 
> Both of these interfaces use getBestUGI() to get the UGI for the given user. 
> That does not suit services whose end users do not access HDFS directly, but 
> instead go via the service: the end user first authenticates with LDAP, and 
> the service owner then impersonates the end user to eventually provide the 
> underlying data.
> For such services that authenticate end users via LDAP, the end users are not 
> authenticated by Kerberos, so their authentication details won't be in the 
> Kerberos ticket cache. HADOOP_PROXY_USER is not a thread-safe way to get this 
> either.
> Hence the need for a new libhdfs API to get the FileSystem object as a proxy 
> user, following the 'secure impersonation' recommendations. This approach is 
> secure since HDFS authenticates the service owner and then validates the 
> service owner's right to impersonate the given user, as allowed by the 
> hadoop.proxyuser.* parameters of the HDFS config.
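
On the Java side, the 'secure impersonation' pattern the description refers to 
looks roughly like this (a sketch of the standard UGI API, with an assumed 
end-user name; it is not the proposed libhdfs change itself):

{code}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

public class ProxyUserSketch {
  public static FileSystem fsAsProxyUser(String endUser) throws Exception {
    Configuration conf = new Configuration();
    // The service owner is the Kerberos-authenticated login user.
    UserGroupInformation serviceUgi = UserGroupInformation.getLoginUser();
    // HDFS validates the hadoop.proxyuser.* rules when this UGI is used.
    UserGroupInformation proxyUgi =
        UserGroupInformation.createProxyUser(endUser, serviceUgi);
    return proxyUgi.doAs(
        (PrivilegedExceptionAction<FileSystem>) () -> FileSystem.get(conf));
  }
}
{code}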



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13904) DynamoDBMetadataStore to handle DDB throttling failures through retry policy

2017-01-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830348#comment-15830348
 ] 

Steve Loughran commented on HADOOP-13904:
-

Good to see this, and nice to see test-first dev at work.

# do these run iff `-Dscale` is set? I'd prefer that, as I've tried to split 
the slow stuff from the fast stuff in the s3a tests to date
# If you want a standard `NanoTimer.operationsPerSecond()` value for printing, 
feel free to add it

> DynamoDBMetadataStore to handle DDB throttling failures through retry policy
> 
>
> Key: HADOOP-13904
> URL: https://issues.apache.org/jira/browse/HADOOP-13904
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13904-HADOOP-13345.001.patch
>
>
> When you overload DDB, you get error messages warning of throttling, [as 
> documented by 
> AWS|http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.MessagesAndCodes]
> Reduce load on DDB by doing a table lookup before the create, then, in table 
> create/delete operations and in get/put actions, recognise the error codes 
> and retry using an appropriate retry policy (exponential backoff + ultimate 
> failure) 
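
A hedged sketch of the retry shape, built on Hadoop's stock {{RetryPolicies}} 
(the retry count, sleep time, and helper method are assumptions, not what the 
attached patch does):

{code}
import java.io.IOException;
import java.util.concurrent.Callable;
import java.util.concurrent.TimeUnit;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException;
import org.apache.hadoop.io.retry.RetryPolicies;
import org.apache.hadoop.io.retry.RetryPolicy;

public class DdbRetrySketch {
  // Exponential backoff: up to 9 retries, starting around 100ms.
  private static final RetryPolicy THROTTLE_POLICY =
      RetryPolicies.exponentialBackoffRetry(9, 100, TimeUnit.MILLISECONDS);

  static <T> T withThrottlingRetries(Callable<T> op) throws Exception {
    for (int attempt = 0; ; attempt++) {
      try {
        return op.call();
      } catch (ProvisionedThroughputExceededException e) {
        RetryPolicy.RetryAction action =
            THROTTLE_POLICY.shouldRetry(e, attempt, 0, true);
        if (action.action != RetryPolicy.RetryAction.RetryDecision.RETRY) {
          // ultimate failure once the policy gives up
          throw new IOException("DynamoDB throttling persisted", e);
        }
        Thread.sleep(action.delayMillis);
      }
    }
  }
}
{code}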



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13977) IntelliJ Compilation error in ITUseMiniCluster.java

2017-01-19 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830365#comment-15830365
 ] 

Arun Suresh commented on HADOOP-13977:
--

This is because 'hadoop-client-integration-tests' depends on the post-shade 
artifacts.
HADOOP-13999 introduces a 'noshade' profile that adds the dependencies. You 
just need to enable the profile in IntelliJ's Maven Projects window 
(View > Tool Windows > Maven Projects).

> IntelliJ Compilation error in ITUseMiniCluster.java
> ---
>
> Key: HADOOP-13977
> URL: https://issues.apache.org/jira/browse/HADOOP-13977
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Miklos Szegedi
>Assignee: Sean Busbey
> Attachments: build.log
>
>
> The repro steps:
> mvn clean install -DskipTests and then "Build/Build Project" in IntelliJ IDEA 
> to update indexes, etc.
> ...hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java
> Error:(34, 28) java: package org.apache.hadoop.fs does not exist
> ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14004) Missing hadoop-cloud-storage-project module in pom.xml

2017-01-19 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-14004:
--

 Summary: Missing hadoop-cloud-storage-project module in pom.xml
 Key: HADOOP-14004
 URL: https://issues.apache.org/jira/browse/HADOOP-14004
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0-alpha2
Reporter: Akira Ajisaka
Assignee: Akira Ajisaka
Priority: Critical


{code}
  hadoop-cloud-storage-project
{code}
is missing in pom.xml. As a result, {{mvn versions:set}} does not work for 
the project.
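
The shape of the fix is simply to restore the entry in the root pom's 
{{<modules>}} list (a sketch; the surrounding entries are elided):

{code}
<modules>
  <!-- other modules elided -->
  <module>hadoop-cloud-storage-project</module>
</modules>
{code}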



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14004) Missing hadoop-cloud-storage-project module in pom.xml

2017-01-19 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14004:
---
Attachment: HADOOP-14004.01.patch

Attaching a patch. I verified that {{mvn versions:set 
-DnewVersion=3.0.0-alpha3-SNAPSHOT}} updates the versions of the 
hadoop-cloud-storage-project and hadoop-cloud-storage modules after the fix.

> Missing hadoop-cloud-storage-project module in pom.xml
> --
>
> Key: HADOOP-14004
> URL: https://issues.apache.org/jira/browse/HADOOP-14004
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
> Attachments: HADOOP-14004.01.patch
>
>
> {code}
>   hadoop-cloud-storage-project
> {code}
> is missing in pom.xml. As a result, {{mvn versions:set}} does not work for 
> the project.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14004) Missing hadoop-cloud-storage-project module in pom.xml

2017-01-19 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14004:
---
Target Version/s: 3.0.0-alpha2
  Status: Patch Available  (was: Open)

> Missing hadoop-cloud-storage-project module in pom.xml
> --
>
> Key: HADOOP-14004
> URL: https://issues.apache.org/jira/browse/HADOOP-14004
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
> Attachments: HADOOP-14004.01.patch
>
>
> {code}
>   hadoop-cloud-storage-project
> {code}
> is missing in pom.xml. As a result, {{mvn versions:set}} does not work for 
> the project.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13956) Read ADLS credentials from Credential Provider

2017-01-19 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-13956:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha3
   Status: Resolved  (was: Patch Available)

That makes sense, [~jzhuge]. Thanks.

+1. Committed to trunk. 

> Read ADLS credentials from Credential Provider
> --
>
> Key: HADOOP-13956
> URL: https://issues.apache.org/jira/browse/HADOOP-13956
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13956.001.patch, HADOOP-13956.002.patch, 
> HADOOP-13956.003.patch, HADOOP-13956.004.patch, HADOOP-13956.005.patch, 
> HADOOP-13956.006.patch
>
>
> Read ADLS credentials using Hadoop CredentialProvider API. See 
> https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CredentialProviderAPI.html.
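
For illustration, the CredentialProvider workflow looks like this (the ADLS 
credential alias and keystore path below are assumptions for the example; 
check the committed docs for the exact names):

{code}
# Store an ADLS secret in a JCEKS keystore instead of core-site.xml;
# the command prompts for the secret value.
hadoop credential create dfs.adls.oauth2.credential \
  -provider jceks://hdfs@nn.example.com:9001/user/admin/adls.jceks
{code}

The cluster is then pointed at the keystore via 
{{hadoop.security.credential.provider.path}}, and the connector reads the 
secret through the provider API rather than from cleartext configuration.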



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13956) Read ADLS credentials from Credential Provider

2017-01-19 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15831305#comment-15831305
 ] 

John Zhuge commented on HADOOP-13956:
-

Thanks [~eddyxu] for the review and commit! Thanks [~jojochuang] for the review!

> Read ADLS credentials from Credential Provider
> --
>
> Key: HADOOP-13956
> URL: https://issues.apache.org/jira/browse/HADOOP-13956
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13956.001.patch, HADOOP-13956.002.patch, 
> HADOOP-13956.003.patch, HADOOP-13956.004.patch, HADOOP-13956.005.patch, 
> HADOOP-13956.006.patch
>
>
> Read ADLS credentials using Hadoop CredentialProvider API. See 
> https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CredentialProviderAPI.html.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13985) s3guard: add a version marker to every table

2017-01-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830156#comment-15830156
 ] 

Steve Loughran commented on HADOOP-13985:
-

This patch is going to force everyone running the tests to delete their tables 
via the AWS CLI or simply the AWS console; otherwise, they get stack traces on 
FS init. That is how I've verified that when you run with {{-Ds3guard 
-Ddynamo}}, DDB is used and the version checking kicks in.
{code}
test_100_renameHugeFile(org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesDiskBlocks)
  Time elapsed: 0.386 sec  <<< ERROR!
java.io.IOException: S3Guard table lacks version marker. Table: 
hwdev-steve-frankfurt-new
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.verifyVersionCompatibility(DynamoDBMetadataStore.java:618)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:583)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:246)
at 
org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStore(S3Guard.java:92)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:258)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3246)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3295)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3263)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476)
at 
org.apache.hadoop.fs.contract.AbstractBondedFSContract.init(AbstractBondedFSContract.java:72)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:177)
at 
org.apache.hadoop.fs.s3a.scale.S3AScaleTestBase.setup(S3AScaleTestBase.java:90)
at 
org.apache.hadoop.fs.s3a.scale.AbstractSTestS3AHugeFiles.setup(AbstractSTestS3AHugeFiles.java:75)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}


> s3guard: add a version marker to every table
> 
>
> Key: HADOOP-13985
> URL: https://issues.apache.org/jira/browse/HADOOP-13985
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13985-HADOOP-13345-001.patch
>
>
> This is something else we need before any preview: a way to identify a table 
> version, so that if future versions change the table structure:
> * older clients can recognise that it's a newer format, and fail
> * the future version can identify that it's an older format, and fail until 
> some fsck-upgrade operation has taken place
> I think something like a row on a path which is impossible in a real 
> filesystem, such as "../VERSION", would allow a version marker to go in; the 
> length field could be abused for the version number.
> This field would be something that'd be checked in init(), so it would also 
> serve as the simple test for table existence that we need for faster init.
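
A hedged sketch of what that init() check could look like against the DynamoDB 
document API (the key names, version attribute, and expected-version constant 
are assumptions drawn from this description, not the patch; the method name 
matches the one in the stack trace above):

{code}
import java.io.IOException;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.PrimaryKey;
import com.amazonaws.services.dynamodbv2.document.Table;

public class VersionMarkerSketch {
  static final long EXPECTED_VERSION = 100;

  static void verifyVersionCompatibility(Table table) throws IOException {
    // "../VERSION" cannot occur as a real filesystem path, so it is safe
    // to reserve it for metadata; the key names here are assumed.
    Item marker = table.getItem(
        new PrimaryKey("parent", "../VERSION", "child", "VERSION"));
    if (marker == null) {
      throw new IOException(
          "S3Guard table lacks version marker. Table: " + table.getTableName());
    }
    // the length field is reused to carry the version number
    long version = marker.getLong("file_length");
    if (version != EXPECTED_VERSION) {
      throw new IOException("Incompatible S3Guard table version: " + version);
    }
  }
}
{code}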



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-7821) Hadoop event notification system

2017-01-19 Thread Abhijit C Patil (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830128#comment-15830128
 ] 

Abhijit C Patil commented on HADOOP-7821:
-

I would like to understand whether this ticket is resolved or still open. 
If it is still open, I would like to contribute, unless someone is already 
working on it.

> Hadoop event notification system
> 
>
> Key: HADOOP-7821
> URL: https://issues.apache.org/jira/browse/HADOOP-7821
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 0.24.0
>Reporter: John George
>
> It will be good if Hadoop supports some sort of a messaging service that 
> allows users to subscribe.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13876) S3Guard: better support for multi-bucket access including read-only

2017-01-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13876:

Attachment: (was: HADOOP-13985-HADOOP-13345-002.patch)

> S3Guard: better support for multi-bucket access including read-only
> ---
>
> Key: HADOOP-13876
> URL: https://issues.apache.org/jira/browse/HADOOP-13876
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Mingliang Liu
> Attachments: HADOOP-13876-HADOOP-13345.000.patch
>
>
> HADOOP-13449 adds support for DynamoDBMetadataStore.
> The code currently supports two options for choosing DynamoDB table names:
> 1. Use name of each s3 bucket and auto-create a DynamoDB table for each.
> 2. Configure a table name in the {{fs.s3a.s3guard.ddb.table}} parameter.
> One of the issues is with accessing read-only buckets. If a user accesses a 
> read-only bucket with credentials that do not have DynamoDB write 
> permissions, they will get errors when trying to access the read-only bucket. 
> This manifests as test failures in {{ITestS3AAWSCredentialsProvider}}.
> Goals for this JIRA:
> - Fix {{ITestS3AAWSCredentialsProvider}} in a way that makes sense for the 
> real use-case.
> - Allow for a "one DynamoDB table per cluster" configuration with a way to 
> choose which credentials are used for DynamoDB.
> - Document limitations etc. in the s3guard.md site doc.
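
For the "one DynamoDB table per cluster" goal, the shared-table configuration 
would center on the key this issue names (the value is an example):

{code}
<property>
  <name>fs.s3a.s3guard.ddb.table</name>
  <value>cluster-s3guard</value>
</property>
{code}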



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13876) S3Guard: better support for multi-bucket access including read-only

2017-01-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13876:

Attachment: HADOOP-13985-HADOOP-13345-002.patch

HADOOP-13985 versioning patch 002
* adds a created timestamp to the version field as well
* fixes AbstractFileSystem to better unwrap IOEs raised in FS construction, 
preserving the error text
* reinstates table.describe() on initTable, rather than just the getItem. Why? 
getItem() seemed to trigger table creation, which would then fail as there was 
no version field. This is something odd that we really need to understand. 
(Repeatable BTW: delete the table in the AWS console, run a test, see the stack 
trace.)

Tested against S3 Ireland with {{-Ds3guard -Dparallel-tests -DtestsThreadCount=6 
-Ddynamo -Dauth -Dtest=moo -Dscale -Dfs.s3a.scale.test.huge.filesize=6M}}.

> S3Guard: better support for multi-bucket access including read-only
> ---
>
> Key: HADOOP-13876
> URL: https://issues.apache.org/jira/browse/HADOOP-13876
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Mingliang Liu
> Attachments: HADOOP-13876-HADOOP-13345.000.patch
>
>
> HADOOP-13449 adds support for DynamoDBMetadataStore.
> The code currently supports two options for choosing DynamoDB table names:
> 1. Use name of each s3 bucket and auto-create a DynamoDB table for each.
> 2. Configure a table name in the {{fs.s3a.s3guard.ddb.table}} parameter.
> One of the issues is with accessing read-only buckets. If a user accesses a 
> read-only bucket with credentials that do not have DynamoDB write 
> permissions, they will get errors when trying to access the read-only bucket. 
> This manifests as test failures in {{ITestS3AAWSCredentialsProvider}}.
> Goals for this JIRA:
> - Fix {{ITestS3AAWSCredentialsProvider}} in a way that makes sense for the 
> real use-case.
> - Allow for a "one DynamoDB table per cluster" configuration with a way to 
> choose which credentials are used for DynamoDB.
> - Document limitations etc. in the s3guard.md site doc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-13876) S3Guard: better support for multi-bucket access including read-only

2017-01-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13876:

Comment: was deleted

(was: HADOOP-13985 versioning patch 002
* adds a created timestamp to the version field as well
* fixes AbstractFileSystem to better unwrap IOEs raised in FS construction, 
preserving the error text
* reinstates table.describe() on initTable, rather than just the getItem. Why? 
getItem() seemed to trigger table creation, which would then fail as there was 
no version field. This is something odd that we really need to understand. 
(Repeatable BTW: delete the table in the AWS console, run a test, see the stack 
trace.)

Tested against S3 Ireland with {{-Ds3guard -Dparallel-tests -DtestsThreadCount=6 
-Ddynamo -Dauth -Dtest=moo -Dscale -Dfs.s3a.scale.test.huge.filesize=6M}}.)

> S3Guard: better support for multi-bucket access including read-only
> ---
>
> Key: HADOOP-13876
> URL: https://issues.apache.org/jira/browse/HADOOP-13876
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Mingliang Liu
> Attachments: HADOOP-13876-HADOOP-13345.000.patch
>
>
> HADOOP-13449 adds support for DynamoDBMetadataStore.
> The code currently supports two options for choosing DynamoDB table names:
> 1. Use name of each s3 bucket and auto-create a DynamoDB table for each.
> 2. Configure a table name in the {{fs.s3a.s3guard.ddb.table}} parameter.
> One of the issues is with accessing read-only buckets. If a user accesses a 
> read-only bucket with credentials that do not have DynamoDB write 
> permissions, they will get errors when trying to access the read-only bucket. 
> This manifests as test failures in {{ITestS3AAWSCredentialsProvider}}.
> Goals for this JIRA:
> - Fix {{ITestS3AAWSCredentialsProvider}} in a way that makes sense for the 
> real use-case.
> - Allow for a "one DynamoDB table per cluster" configuration with a way to 
> choose which credentials are used for DynamoDB.
> - Document limitations etc. in the s3guard.md site doc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13985) s3guard: add a version marker to every table

2017-01-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13985:

Status: Open  (was: Patch Available)

> s3guard: add a version marker to every table
> 
>
> Key: HADOOP-13985
> URL: https://issues.apache.org/jira/browse/HADOOP-13985
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13985-HADOOP-13345-001.patch
>
>
> This is something else we need before any preview: a way to identify a table 
> version, so that if future versions change the table structure:
> * older clients can recognise that it's a newer format, and fail
> * the future version can identify that it's an older format, and fail until 
> some fsck-upgrade operation has taken place
> I think something like a row on a path which is impossible in a real 
> filesystem, such as "../VERSION", would allow a version marker to go in; the 
> length field could be abused for the version number.
> This field would be something that'd be checked in init(), so it would also 
> serve as the simple test for table existence that we need for faster init.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


