[jira] [Comment Edited] (HADOOP-14430) the accessTime of FileStatus returned by SFTPFileSystem's getFileStatus method is always 0

2017-05-25 Thread Hongyuan Li (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-14430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16025675#comment-16025675 ]

Hongyuan Li edited comment on HADOOP-14430 at 5/26/17 1:55 AM:
---

[~ste...@apache.org] thanks. I will try HADOOP-1 whenever I have time.


was (Author: hongyuan li):
[~ste...@apache.org] I will try HADOOP-1 whenever I have time.

> the accessTime of FileStatus returned by SFTPFileSystem's getFileStatus 
> method is always 0
> --
>
> Key: HADOOP-14430
> URL: https://issues.apache.org/jira/browse/HADOOP-14430
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>Priority: Trivial
> Fix For: 2.9.0
>
> Attachments: HADOOP-14430-001.patch, HADOOP-14430-002.patch
>
>
> The accessTime of the FileStatus returned by SFTPFileSystem's getFileStatus
> method is always 0: note {{long accessTime = 0}} in the code below.
> {code}
> private FileStatus getFileStatus(ChannelSftp channel, LsEntry sftpFile,
>     Path parentPath) throws IOException {
>   SftpATTRS attr = sftpFile.getAttrs();
>   ……
>   long modTime = attr.getMTime() * 1000; // convert to milliseconds (this is wrong too, according to HADOOP-14431)
>   long accessTime = 0;
>   ……
> }
> {code}
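A minimal sketch of the shape of the fix, using a stub in place of JSch's {{SftpATTRS}} (an assumption: its {{getMTime()}}/{{getATime()}} return whole seconds since the epoch as int): populate accessTime from the attributes instead of hardcoding 0, and widen to long before converting to milliseconds.

```java
public class SftpTimes {
    // Widen the int seconds value to long BEFORE multiplying, so the
    // conversion to milliseconds cannot overflow 32-bit arithmetic.
    static long toMillis(int epochSeconds) {
        return epochSeconds * 1000L;
    }

    public static void main(String[] args) {
        // Hypothetical values standing in for SftpATTRS.getMTime()/getATime().
        int mtime = 1495678800;
        int atime = 1495678900;
        long modTime = toMillis(mtime);
        long accessTime = toMillis(atime); // instead of hardcoding 0
        System.out.println(modTime + " " + accessTime);
    }
}
```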



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14430) the accessTime of FileStatus returned by SFTPFileSystem's getFileStatus method is always 0

2017-05-25 Thread Hongyuan Li (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-14430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16025675#comment-16025675 ]

Hongyuan Li commented on HADOOP-14430:
--

[~ste...@apache.org] I will try HADOOP-1 whenever I have time.

> the accessTime of FileStatus returned by SFTPFileSystem's getFileStatus 
> method is always 0
> --
>
> Key: HADOOP-14430
> URL: https://issues.apache.org/jira/browse/HADOOP-14430
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>Priority: Trivial
> Fix For: 2.9.0
>
> Attachments: HADOOP-14430-001.patch, HADOOP-14430-002.patch
>
>
> The accessTime of the FileStatus returned by SFTPFileSystem's getFileStatus
> method is always 0: note {{long accessTime = 0}} in the code below.
> {code}
> private FileStatus getFileStatus(ChannelSftp channel, LsEntry sftpFile,
>     Path parentPath) throws IOException {
>   SftpATTRS attr = sftpFile.getAttrs();
>   ……
>   long modTime = attr.getMTime() * 1000; // convert to milliseconds (this is wrong too, according to HADOOP-14431)
>   long accessTime = 0;
>   ……
> }
> {code}






[jira] [Commented] (HADOOP-14431) the modifyTime of FileStatus returned by SFTPFileSystem's getFileStatus method is wrong

2017-05-25 Thread Hongyuan Li (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16025672#comment-16025672 ]

Hongyuan Li commented on HADOOP-14431:
--

[~ste...@apache.org] Could you give me a code review?

> the modifyTime of FileStatus returned by SFTPFileSystem's getFileStatus 
> method is wrong
> ---
>
> Key: HADOOP-14431
> URL: https://issues.apache.org/jira/browse/HADOOP-14431
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14431-001.patch
>
>
> {{getFileStatus(ChannelSftp channel, LsEntry sftpFile, Path parentPath)}}
> builds the FileStatus as in the code below:
> {code}
>   private FileStatus getFileStatus(ChannelSftp channel, LsEntry sftpFile,
>       Path parentPath) throws IOException {
>     SftpATTRS attr = sftpFile.getAttrs();
>     ……
>     long modTime = attr.getMTime() * 1000; // convert to milliseconds
>     ……
>   }
> {code}
> {{attr.getMTime}} returns an int, so the multiplication by 1000 overflows and the modTime is wrong.
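The overflow is easy to demonstrate in isolation (plain Java, no JSch needed): since both operands are int, the multiplication is done in 32-bit arithmetic and wraps for any epoch value beyond roughly 2147483 seconds, i.e. for every modern timestamp.

```java
public class MTimeOverflow {
    public static void main(String[] args) {
        int mtimeSeconds = 1495678800; // 2017-05-25 as seconds since the epoch
        // Buggy: int * int overflows before being widened to long.
        long wrong = mtimeSeconds * 1000;
        // Fixed: the long literal forces 64-bit multiplication.
        long right = mtimeSeconds * 1000L;
        System.out.println(wrong + " vs " + right);
    }
}
```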






[jira] [Updated] (HADOOP-14431) the modifyTime of FileStatus returned by SFTPFileSystem's getFileStatus method is wrong

2017-05-25 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14431:
-
Summary: the modifyTime of FileStatus returned by SFTPFileSystem's 
getFileStatus method is wrong  (was: the modifyTime of FileStatus got by 
SFTPFileSystem's getFileStatus method is wrong)

> the modifyTime of FileStatus returned by SFTPFileSystem's getFileStatus 
> method is wrong
> ---
>
> Key: HADOOP-14431
> URL: https://issues.apache.org/jira/browse/HADOOP-14431
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14431-001.patch
>
>
> {{getFileStatus(ChannelSftp channel, LsEntry sftpFile, Path parentPath)}}
> builds the FileStatus as in the code below:
> {code}
>   private FileStatus getFileStatus(ChannelSftp channel, LsEntry sftpFile,
>       Path parentPath) throws IOException {
>     SftpATTRS attr = sftpFile.getAttrs();
>     ……
>     long modTime = attr.getMTime() * 1000; // convert to milliseconds
>     ……
>   }
> {code}
> {{attr.getMTime}} returns an int, so the multiplication by 1000 overflows and the modTime is wrong.






[jira] [Commented] (HADOOP-11829) Improve the vector size of Bloom Filter from int to long, and storage from memory to disk

2017-05-25 Thread Hongbo Xu (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-11829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16025647#comment-16025647 ]

Hongbo Xu commented on HADOOP-11829:


YES

> Improve the vector size of Bloom Filter from int to long, and storage from 
> memory to disk
> -
>
> Key: HADOOP-11829
> URL: https://issues.apache.org/jira/browse/HADOOP-11829
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Hongbo Xu
>Assignee: Hongbo Xu
>Priority: Minor
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> org.apache.hadoop.util.bloom.BloomFilter(int vectorSize, int nbHash, int hashType)
> This filter can insert at most about 900 million objects when the false-positive
> probability is 0.0001, and it needs 2.1G of RAM.
> In my project I needed to build a filter whose capacity is 2 billion; it needs
> 4.7G of RAM and a vector size of 38340233509, outside the range of int. I did not
> have that much RAM, so I rebuilt a big Bloom filter whose vector size type is
> long, split the bit data into several files on disk, and distributed the files to
> the worker nodes; the performance is very good.
> I think I can contribute this code to Hadoop Common, along with a 128-bit hash
> function (MurmurHash).
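The 38340233509 figure follows from the standard Bloom-filter sizing formulas; a quick check in plain Java (independent of the Hadoop classes) shows why an int vectorSize cannot hold it:

```java
public class BloomSizing {
    public static void main(String[] args) {
        long n = 2_000_000_000L; // expected insertions
        double p = 0.0001;       // target false-positive probability
        double ln2 = Math.log(2);
        // Optimal bit-vector size: m = -n * ln(p) / (ln 2)^2
        long m = (long) Math.ceil(-n * Math.log(p) / (ln2 * ln2));
        // Optimal hash-function count: k = (m / n) * ln 2
        long k = Math.round((double) m / n * ln2);
        System.out.println("bits=" + m + " hashes=" + k);
        System.out.println("fits in int? " + (m <= Integer.MAX_VALUE));
    }
}
```

This yields roughly 38.3 billion bits, more than ten times Integer.MAX_VALUE, hence the proposal to widen the vector size to long.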






[jira] [Commented] (HADOOP-14457) LocalMetadataStore falsely reports empty parent directory authoritatively

2017-05-25 Thread Sean Mackrory (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16025633#comment-16025633 ]

Sean Mackrory commented on HADOOP-14457:


FWIW I tested this with the Local, DynamoDB, and Null implementations, with and 
without -Dauth. All tests passing except for 2 in ITestS3AContractGetFileStatus 
(which were failing previously, as reported in HADOOP-13345). That JIRA also 
reported ITestS3AContractRename.testRenamePopulatesFileAncestors failing with 
-Dlocal and -Dauth, and that is fixed by this patch (as are a couple of tests I 
added that would fail without the accompanying fix).

> LocalMetadataStore falsely reports empty parent directory authoritatively
> -
>
> Key: HADOOP-14457
> URL: https://issues.apache.org/jira/browse/HADOOP-14457
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
> Attachments: HADOOP-14457-HADOOP-13345.001.patch
>
>
> Not a great test yet, but it at least reliably demonstrates the issue. 
> LocalMetadataStore will sometimes erroneously report that a directory is 
> empty with isAuthoritative = true when it *definitely* has children the 
> metadatastore should know about. It doesn't appear to happen if the children 
> are just directories. The fact that it's returning an empty listing is 
> concerning, but the fact that it says it's authoritative *might* be a second 
> bug.
> {code}
> diff --git 
> a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
>  
> b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> index 78b3970..1821d19 100644
> --- 
> a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> +++ 
> b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> @@ -965,7 +965,7 @@ public boolean hasMetadataStore() {
>}
>  
>@VisibleForTesting
> -  MetadataStore getMetadataStore() {
> +  public MetadataStore getMetadataStore() {
>  return metadataStore;
>}
>  
> diff --git 
> a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
>  
> b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> index 4339649..881bdc9 100644
> --- 
> a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> +++ 
> b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> @@ -23,6 +23,11 @@
>  import org.apache.hadoop.fs.contract.AbstractFSContract;
>  import org.apache.hadoop.fs.FileSystem;
>  import org.apache.hadoop.fs.Path;
> +import org.apache.hadoop.fs.s3a.S3AFileSystem;
> +import org.apache.hadoop.fs.s3a.Tristate;
> +import org.apache.hadoop.fs.s3a.s3guard.DirListingMetadata;
> +import org.apache.hadoop.fs.s3a.s3guard.MetadataStore;
> +import org.junit.Test;
>  
>  import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
>  import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset;
> @@ -72,4 +77,24 @@ public void testRenameDirIntoExistingDir() throws 
> Throwable {
>  boolean rename = fs.rename(srcDir, destDir);
>  assertFalse("s3a doesn't support rename to non-empty directory", rename);
>}
> +
> +  @Test
> +  public void testMkdirPopulatesFileAncestors() throws Exception {
> +final FileSystem fs = getFileSystem();
> +final MetadataStore ms = ((S3AFileSystem) fs).getMetadataStore();
> +final Path parent = path("testMkdirPopulatesFileAncestors/source");
> +try {
> +  fs.mkdirs(parent);
> +  final Path nestedFile = new Path(parent, "dir1/dir2/dir3/file4");
> +  byte[] srcDataset = dataset(256, 'a', 'z');
> +  writeDataset(fs, nestedFile, srcDataset, srcDataset.length,
> +  1024, false);
> +
> +  DirListingMetadata list = ms.listChildren(parent);
> +  assertTrue("MetadataStore falsely reports authoritative empty list",
> +  list.isEmpty() == Tristate.FALSE || !list.isAuthoritative());
> +} finally {
> +  fs.delete(parent, true);
> +}
> +  }
>  }
> {code}






[jira] [Updated] (HADOOP-14457) LocalMetadataStore falsely reports empty parent directory authoritatively

2017-05-25 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14457:
---
Attachment: HADOOP-14457-HADOOP-13345.001.patch

Attaching a patch with a better test and a fix. Was originally looking at this 
as a bug in the Local implementation for not creating parents, but after a 
quick discussion about it with @fabbri, I was convinced that that's merely an 
implementation detail of the DynamoDB implementation, and that it's actually 
the FS' responsibility to create things it needs to exist (which makes sense, 
since innerMkdirs does that). I'm more or less doing something similar to 
innerMkdirs. Maybe they should share a common function here, but there are a 
few key differences we would need to preserve if we went that route:

* I think it'd be wasteful to create the empty directory placeholder. Removing 
it later makes it 2 S3 round trips that aren't needed.
* I think directories created in this manner should NOT be considered 
authoritative. ITestS3GuardEmptyDirs agrees with me :) I think there are a few 
special cases where we could set it to true here, but in my head that logic 
gets pretty complex and IMO it's best to just leave it as false if someone is 
creating the directories implicitly with create().
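The ancestor-recording idea described above can be sketched roughly as follows (hypothetical names, not the actual S3Guard MetadataStore API): on file creation, record every missing ancestor directory, but never mark those implicit listings authoritative.

```java
import java.util.HashMap;
import java.util.Map;

public class AncestorSketch {
    // path -> isAuthoritative; a toy stand-in for a metadata store
    static Map<String, Boolean> store = new HashMap<>();

    // Walk up the parent chain of a newly created file and record each
    // ancestor directory, marking it non-authoritative.
    static void recordAncestors(String filePath) {
        String p = filePath;
        int slash;
        while ((slash = p.lastIndexOf('/')) > 0) {
            p = p.substring(0, slash);
            store.putIfAbsent(p, false); // implicit dirs: not authoritative
        }
    }

    public static void main(String[] args) {
        recordAncestors("/a/b/c/file.txt");
        System.out.println(store); // /a, /a/b, /a/b/c recorded, all false
    }
}
```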

There's a second issue addressed here also (since the fix for that issue is 
inside the loop I would have had to do anyway): one could create /a/b.txt and 
subsequently /a/b.txt/c/d.txt and nothing would stop you. Obviously not common 
in practice, but very incorrect IMO. I added a test to the S3A contract test. 
If this is considered part of the general FS contract we could move it up a 
level, but I haven't tested anything but S3. It also did occur to me that there 
*could* be applications depending on this behavior. So I fixed it while I was 
in the code, but I'm not entirely convinced about it myself.

> LocalMetadataStore falsely reports empty parent directory authoritatively
> -
>
> Key: HADOOP-14457
> URL: https://issues.apache.org/jira/browse/HADOOP-14457
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
> Attachments: HADOOP-14457-HADOOP-13345.001.patch
>
>
> Not a great test yet, but it at least reliably demonstrates the issue. 
> LocalMetadataStore will sometimes erroneously report that a directory is 
> empty with isAuthoritative = true when it *definitely* has children the 
> metadatastore should know about. It doesn't appear to happen if the children 
> are just directories. The fact that it's returning an empty listing is 
> concerning, but the fact that it says it's authoritative *might* be a second 
> bug.
> {code}
> diff --git 
> a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
>  
> b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> index 78b3970..1821d19 100644
> --- 
> a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> +++ 
> b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> @@ -965,7 +965,7 @@ public boolean hasMetadataStore() {
>}
>  
>@VisibleForTesting
> -  MetadataStore getMetadataStore() {
> +  public MetadataStore getMetadataStore() {
>  return metadataStore;
>}
>  
> diff --git 
> a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
>  
> b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> index 4339649..881bdc9 100644
> --- 
> a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> +++ 
> b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> @@ -23,6 +23,11 @@
>  import org.apache.hadoop.fs.contract.AbstractFSContract;
>  import org.apache.hadoop.fs.FileSystem;
>  import org.apache.hadoop.fs.Path;
> +import org.apache.hadoop.fs.s3a.S3AFileSystem;
> +import org.apache.hadoop.fs.s3a.Tristate;
> +import org.apache.hadoop.fs.s3a.s3guard.DirListingMetadata;
> +import org.apache.hadoop.fs.s3a.s3guard.MetadataStore;
> +import org.junit.Test;
>  
>  import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
>  import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset;
> @@ -72,4 +77,24 @@ public void testRenameDirIntoExistingDir() throws 
> Throwable {
>  boolean rename = fs.rename(srcDir, destDir);
>  assertFalse("s3a doesn't support rename to non-empty directory", rename);
>}
> +
> +  @Test
> +  public void testMkdirPopulatesFileAncestors() throws Exception {
> +final FileSystem fs = getFileSystem();
> +final MetadataStore ms = ((S3AFileSystem) 

[jira] [Updated] (HADOOP-14456) Modifier 'static' is redundant for inner enums less

2017-05-25 Thread ZhangBing Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangBing Lin updated HADOOP-14456:
---
Description: A Java nested enum is implicitly static (its constants are 
implicitly static final), so the 'static' modifier is redundant for inner 
enums. I suggest deleting the 'static' modifier.

> Modifier 'static' is redundant for inner enums less
> ---
>
> Key: HADOOP-14456
> URL: https://issues.apache.org/jira/browse/HADOOP-14456
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha3
>Reporter: ZhangBing Lin
>Assignee: ZhangBing Lin
>Priority: Minor
> Attachments: HADOOP-14456.001.patch
>
>
> A Java nested enum is implicitly static (its constants are implicitly static 
> final), so the 'static' modifier is redundant for inner enums. I suggest 
> deleting the 'static' modifier.
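As a standalone illustration (not code from the patch): per JLS §8.9 a nested enum type is implicitly static, so the two declarations below are equivalent, which is exactly why the explicit modifier can be deleted.

```java
public class Outer {
    static enum Verbose { ON, OFF } // 'static' here is redundant
    enum Concise { ON, OFF }        // implicitly static anyway

    public static void main(String[] args) {
        // Reflection confirms both nested enums carry the static modifier.
        System.out.println(java.lang.reflect.Modifier.isStatic(Verbose.class.getModifiers()));
        System.out.println(java.lang.reflect.Modifier.isStatic(Concise.class.getModifiers()));
    }
}
```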






[jira] [Commented] (HADOOP-14456) Modifier 'static' is redundant for inner enums less

2017-05-25 Thread ZhangBing Lin (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-14456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16025597#comment-16025597 ]

ZhangBing Lin commented on HADOOP-14456:


I just deleted the redundant code, and it will not cause this problem.

> Modifier 'static' is redundant for inner enums less
> ---
>
> Key: HADOOP-14456
> URL: https://issues.apache.org/jira/browse/HADOOP-14456
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha3
>Reporter: ZhangBing Lin
>Assignee: ZhangBing Lin
>Priority: Minor
> Attachments: HADOOP-14456.001.patch
>
>







[jira] [Commented] (HADOOP-14456) Modifier 'static' is redundant for inner enums less

2017-05-25 Thread ZhangBing Lin (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-14456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16025595#comment-16025595 ]

ZhangBing Lin commented on HADOOP-14456:


Hi [~hanishakoneru], I have resubmitted it.

> Modifier 'static' is redundant for inner enums less
> ---
>
> Key: HADOOP-14456
> URL: https://issues.apache.org/jira/browse/HADOOP-14456
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha3
>Reporter: ZhangBing Lin
>Assignee: ZhangBing Lin
>Priority: Minor
> Attachments: HADOOP-14456.001.patch
>
>







[jira] [Updated] (HADOOP-14456) Modifier 'static' is redundant for inner enums less

2017-05-25 Thread ZhangBing Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangBing Lin updated HADOOP-14456:
---
Status: Open  (was: Patch Available)

> Modifier 'static' is redundant for inner enums less
> ---
>
> Key: HADOOP-14456
> URL: https://issues.apache.org/jira/browse/HADOOP-14456
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha3
>Reporter: ZhangBing Lin
>Assignee: ZhangBing Lin
>Priority: Minor
> Attachments: HADOOP-14456.001.patch
>
>







[jira] [Updated] (HADOOP-14456) Modifier 'static' is redundant for inner enums less

2017-05-25 Thread ZhangBing Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangBing Lin updated HADOOP-14456:
---
Status: Patch Available  (was: Open)

> Modifier 'static' is redundant for inner enums less
> ---
>
> Key: HADOOP-14456
> URL: https://issues.apache.org/jira/browse/HADOOP-14456
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha3
>Reporter: ZhangBing Lin
>Assignee: ZhangBing Lin
>Priority: Minor
> Attachments: HADOOP-14456.001.patch
>
>







[jira] [Commented] (HADOOP-14456) Modifier 'static' is redundant for inner enums less

2017-05-25 Thread ZhangBing Lin (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-14456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16025592#comment-16025592 ]

ZhangBing Lin commented on HADOOP-14456:


OK

> Modifier 'static' is redundant for inner enums less
> ---
>
> Key: HADOOP-14456
> URL: https://issues.apache.org/jira/browse/HADOOP-14456
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha3
>Reporter: ZhangBing Lin
>Assignee: ZhangBing Lin
>Priority: Minor
> Attachments: HADOOP-14456.001.patch
>
>







[jira] [Commented] (HADOOP-14456) Modifier 'static' is redundant for inner enums less

2017-05-25 Thread Hanisha Koneru (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-14456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16025451#comment-16025451 ]

Hanisha Koneru commented on HADOOP-14456:
-

[~linzhangbing], could you submit the patch again? The findbugs warnings look 
unrelated to the patch. 
The patch otherwise LGTM.

> Modifier 'static' is redundant for inner enums less
> ---
>
> Key: HADOOP-14456
> URL: https://issues.apache.org/jira/browse/HADOOP-14456
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha3
>Reporter: ZhangBing Lin
>Assignee: ZhangBing Lin
>Priority: Minor
> Attachments: HADOOP-14456.001.patch
>
>







[jira] [Commented] (HADOOP-14428) s3a: mkdir appears to be broken

2017-05-25 Thread Mingliang Liu (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-14428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16025418#comment-16025418 ]

Mingliang Liu commented on HADOOP-14428:


I tested the following ones: {{ITestS3AContractMkdir}} for S3, 
{{TestHDFSContractMkdir}} for HDFS, {{TestRawlocalContractMkdir}} and 
{{TestLocalFSContractMkdir}} for local FS, {{TestAzureNativeContractMkdir}} for 
WASB, and {{TestAdlContractMkdirLive}} for ADLS.

I also have run all S3 integration tests successfully against us-west-1 region, 
as the change is in {{S3AFileSystem}}.

I built the jar file and ran the fs shell commands [~fabbri] reported in 
the description, and it shows the directory instead of complaining with an 
FNFE. Can you verify this from your side?

> s3a: mkdir appears to be broken
> ---
>
> Key: HADOOP-14428
> URL: https://issues.apache.org/jira/browse/HADOOP-14428
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha2, HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Mingliang Liu
>Priority: Blocker
> Attachments: HADOOP-14428.000.patch, HADOOP-14428.001.patch
>
>
> Reproduction is:
> hadoop fs -mkdir s3a://my-bucket/dir/
> hadoop fs -ls s3a://my-bucket/dir/
> ls: `s3a://my-bucket/dir/': No such file or directory
> I believe this is a regression from HADOOP-14255.






[jira] [Commented] (HADOOP-10128) Please delete old releases from mirroring system

2017-05-25 Thread Sebb (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-10128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16025350#comment-16025350 ]

Sebb commented on HADOOP-10128:
---

PING

> Please delete old releases from mirroring system
> 
>
> Key: HADOOP-10128
> URL: https://issues.apache.org/jira/browse/HADOOP-10128
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: http://www.apache.org/dist/hadoop/common/
> http://www.apache.org/dist/hadoop/core/
>Reporter: Sebb
>
> To reduce the load on the ASF mirrors, projects are required to delete old 
> releases.
> Please can you remove all non-current releases?
> i.e. anything except
> 0.23.9
> 1.2.1
> 2.2.0
> Thanks.






[jira] [Commented] (HADOOP-14442) Owner support for ranger-wasb integration

2017-05-25 Thread Mingliang Liu (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-14442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16025331#comment-16025331 ]

Mingliang Liu commented on HADOOP-14442:


Can you briefly describe the owner mode here in the "Description" section?

# As we're using slf4j, we don't need String.format for parameters; we can use placeholders instead. E.g.
{code}
-LOG.debug(String.format("Cannot find file/folder - '%s'. Returning owner as empty string", absolutePath));
+LOG.debug("Cannot find file/folder - '{}'. Returning owner as empty string", absolutePath);
{code}
# The {{ex}} to be thrown should include the error message for easier debugging. The error message can be the same as the {{LOG.error()}} before it.
{code:title=getOwnerForPath()}
} catch(IOException ex) {

  Throwable innerException = NativeAzureFileSystemHelper.checkForAzureStorageException(ex);
  boolean isfileNotFoundException = innerException instanceof StorageException
      && NativeAzureFileSystemHelper.isFileNotFoundException((StorageException) innerException);

  // should not throw when the exception is related to blob/container/file/folder not found
  if (!isfileNotFoundException) {
    LOG.error(String.format("Could not retrieve owner information for path - '%s'", absolutePath));
    throw ex;
  }
}
return owner;
{code}
# It's better to use a {{UserGroupInformation.createUserForTesting.doAs()}} clause in {{testOwnerPermissionNegative}} instead of simply initializing the current user name. Is it possible to do that?
# Nit: the test path can use the test method (test case) name as the directory name, to avoid future conflicts. E.g. {{Path parentDir = new Path("/testOwnerPositive");}} => {{final Path parentDir = new Path("testOwnerPermissionPositive");}}
# Nit: the checkstyle warnings seem related.

> Owner support for ranger-wasb integration
> -
>
> Key: HADOOP-14442
> URL: https://issues.apache.org/jira/browse/HADOOP-14442
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure
>Reporter: Varada Hemeswari
>Assignee: Varada Hemeswari
>  Labels: filesystem, secure, wasb
> Attachments: HADOOP-14442.patch
>
>







[jira] [Commented] (HADOOP-14458) TestAliyunOSSFileSystemContract missing imports?

2017-05-25 Thread Mingliang Liu (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-14458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16025311#comment-16025311 ]

Mingliang Liu commented on HADOOP-14458:


Ping [~uncleGen]. Can you kindly help test this? We recently refactored the 
base contract test class. Thanks.

> TestAliyunOSSFileSystemContract missing imports?
> 
>
> Key: HADOOP-14458
> URL: https://issues.apache.org/jira/browse/HADOOP-14458
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss, test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-14458.000.patch
>
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-aliyun: Compilation failure: 
> Compilation failure:
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[71,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 

[jira] [Updated] (HADOOP-14458) TestAliyunOSSFileSystemContract missing imports?

2017-05-25 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14458:
---
Priority: Trivial  (was: Major)

> TestAliyunOSSFileSystemContract missing imports?
> 
>
> Key: HADOOP-14458
> URL: https://issues.apache.org/jira/browse/HADOOP-14458
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss, test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Trivial
> Attachments: HADOOP-14458.000.patch
>
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-aliyun: Compilation failure: 
> Compilation failure:
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[71,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[90,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[91,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[92,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[93,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[95,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[96,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[98,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[99,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[115,7]
>  cannot find symbol
> [ERROR]   symbol:   method fail(java.lang.String)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[129,7]
>  cannot find symbol
> [ERROR]   symbol:   method fail(java.lang.String)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[143,7]
>  cannot find symbol
> [ERROR]   symbol:   method fail(java.lang.String)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> 

[jira] [Commented] (HADOOP-14458) TestAliyunOSSFileSystemContract missing imports?

2017-05-25 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025307#comment-16025307
 ] 

Mingliang Liu commented on HADOOP-14458:


Ping [~ajisakaa]. I have no idea why Jenkins did not complain about this, but
I cannot compile on my local machine.
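The "cannot find symbol: method assertTrue(java.lang.String,boolean)" and "cannot find symbol: method fail(java.lang.String)" errors quoted in this thread are the typical signature of a JUnit 4 test class that no longer inherits those helpers (e.g. after dropping a junit.framework.TestCase base class) and is missing static imports such as {{import static org.junit.Assert.assertTrue;}} and {{import static org.junit.Assert.fail;}}. The stdlib-only sketch below is not the actual HADOOP-14458 patch; it only reproduces the shape of the two missing symbols with a stand-in Assert class:

```java
// Hedged sketch, not the HADOOP-14458 patch: demonstrates the two symbols the
// compiler reports as missing, using a local stand-in for org.junit.Assert.
public class StaticImportDemo {

    // Stand-in for org.junit.Assert, exposing the two methods from the error log.
    static final class Assert {
        static void assertTrue(String message, boolean condition) {
            if (!condition) {
                throw new AssertionError(message);
            }
        }

        static void fail(String message) {
            throw new AssertionError(message);
        }
    }

    public static void main(String[] args) {
        // Without a static import, the call must be qualified; an unqualified
        // assertTrue(...) here would fail with exactly "cannot find symbol".
        Assert.assertTrue("directory status should be a directory", true);
        System.out.println("ok");
    }
}
```

In the real test, restoring the static imports (or extending a base class that provides the assertions) makes the unqualified calls resolve again.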

> TestAliyunOSSFileSystemContract missing imports?
> 
>
> Key: HADOOP-14458
> URL: https://issues.apache.org/jira/browse/HADOOP-14458
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss, test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-14458.000.patch
>

[jira] [Updated] (HADOOP-14458) TestAliyunOSSFileSystemContract missing imports?

2017-05-25 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14458:
---
Status: Patch Available  (was: Open)

> TestAliyunOSSFileSystemContract missing imports?
> 
>
> Key: HADOOP-14458
> URL: https://issues.apache.org/jira/browse/HADOOP-14458
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss, test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-14458.000.patch
>

[jira] [Updated] (HADOOP-14458) TestAliyunOSSFileSystemContract missing imports?

2017-05-25 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14458:
---
Attachment: HADOOP-14458.000.patch

> TestAliyunOSSFileSystemContract missing imports?
> 
>
> Key: HADOOP-14458
> URL: https://issues.apache.org/jira/browse/HADOOP-14458
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss, test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-14458.000.patch
>

[jira] [Updated] (HADOOP-14458) TestAliyunOSSFileSystemContract missing imports?

2017-05-25 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14458:
---
Summary: TestAliyunOSSFileSystemContract missing imports?  (was: 
TestAliyunOSSFileSystemContract missing imports/)

> TestAliyunOSSFileSystemContract missing imports?
> 
>
> Key: HADOOP-14458
> URL: https://issues.apache.org/jira/browse/HADOOP-14458
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss, test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>

[jira] [Created] (HADOOP-14458) TestAliyunOSSFileSystemContract missing imports/

2017-05-25 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-14458:
--

 Summary: TestAliyunOSSFileSystemContract missing imports/
 Key: HADOOP-14458
 URL: https://issues.apache.org/jira/browse/HADOOP-14458
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/oss, test
Reporter: Mingliang Liu
Assignee: Mingliang Liu


{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project hadoop-aliyun: Compilation failure: 
Compilation failure:
[ERROR] 
/Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[71,5]
 cannot find symbol
[ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
[ERROR]   location: class 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
[ERROR] 
/Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[90,5]
 cannot find symbol
[ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
[ERROR]   location: class 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
[ERROR] 
/Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[91,5]
 cannot find symbol
[ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
[ERROR]   location: class 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
[ERROR] 
/Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[92,5]
 cannot find symbol
[ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
[ERROR]   location: class 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
[ERROR] 
/Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[93,5]
 cannot find symbol
[ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
[ERROR]   location: class 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
[ERROR] 
/Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[95,5]
 cannot find symbol
[ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
[ERROR]   location: class 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
[ERROR] 
/Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[96,5]
 cannot find symbol
[ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
[ERROR]   location: class 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
[ERROR] 
/Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[98,5]
 cannot find symbol
[ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
[ERROR]   location: class 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
[ERROR] 
/Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[99,5]
 cannot find symbol
[ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
[ERROR]   location: class 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
[ERROR] 
/Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[115,7]
 cannot find symbol
[ERROR]   symbol:   method fail(java.lang.String)
[ERROR]   location: class 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
[ERROR] 
/Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[129,7]
 cannot find symbol
[ERROR]   symbol:   method fail(java.lang.String)
[ERROR]   location: class 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
[ERROR] 
/Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[143,7]
 cannot find symbol
[ERROR]   symbol:   method fail(java.lang.String)
[ERROR]   location: class 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
[ERROR] 
/Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[163,7]
 cannot find symbol
[ERROR]   symbol:   method fail(java.lang.String)
[ERROR]   location: class 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
[ERROR] 

[jira] [Commented] (HADOOP-14441) LoadBalancingKMSClientProvider#addDelegationTokens should add delegation tokens from all KMS instances

2017-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025254#comment-16025254
 ] 

Hadoop QA commented on HADOOP-14441:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
18s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} hadoop-common-project: The patch generated 4 new 
+ 100 unchanged - 3 fixed = 104 total (was 103) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
35s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
19 unchanged - 0 fixed = 20 total (was 19) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m  1s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
47s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  instanceof will always return true for all non-null values in 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.addDelegationTokens(String,
 Credentials), since all RuntimeException are instances of RuntimeException  At 
LoadBalancingKMSClientProvider.java:for all non-null values in 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.addDelegationTokens(String,
 Credentials), since all RuntimeException are instances of RuntimeException  At 
LoadBalancingKMSClientProvider.java:[line 154] |
| Failed junit tests | hadoop.fs.sftp.TestSFTPFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14441 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12869902/HADOOP-14441.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 

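The new FindBugs warning above ("instanceof will always return true ... since all RuntimeException are instances of RuntimeException") flags a tautological type check. A minimal sketch of the pattern it complains about, with hypothetical names (not the actual patch code):

```java
// Hypothetical illustration of the FindBugs complaint: inside code that already
// holds a RuntimeException, every non-null value *is* a RuntimeException,
// so testing for it again is a tautology.
public class InstanceofDemo {
    static String classify(RuntimeException e) {
        if (e instanceof RuntimeException) { // FindBugs: always true for non-null e
            return "runtime";
        }
        return "other"; // reachable only when e is null
    }

    // A meaningful check tests a *more specific* subtype instead.
    static String classifyFixed(RuntimeException e) {
        if (e instanceof IllegalStateException) {
            return "illegal-state";
        }
        return "other-runtime";
    }

    public static void main(String[] args) {
        System.out.println(classify(new IllegalArgumentException("x")));      // runtime
        System.out.println(classifyFixed(new IllegalStateException("y")));    // illegal-state
        System.out.println(classifyFixed(new IllegalArgumentException("z"))); // other-runtime
    }
}
```
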
[jira] [Commented] (HADOOP-14441) LoadBalancingKMSClientProvider#addDelegationTokens should add delegation tokens from all KMS instances

2017-05-25 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025088#comment-16025088
 ] 

Yongjun Zhang commented on HADOOP-14441:


HI [~shahrs87],

Would you mind posting your patch to HADOOP-14445 so that we can iterate? 

Thanks a lot.


> LoadBalancingKMSClientProvider#addDelegationTokens should add delegation 
> tokens from all KMS instances
> --
>
> Key: HADOOP-14441
> URL: https://issues.apache.org/jira/browse/HADOOP-14441
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.0
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-14441.001.patch, HADOOP-14441.002.patch, 
> HADOOP-14441.003.patch, HADOOP-14441.004.patch
>
>
> LoadBalancingKMSClientProvider only gets a delegation token from one KMS 
> instance, in a round-robin fashion. This is arguably a bug, as the JavaDoc for 
> {{KeyProviderDelegationTokenExtension#addDelegationTokens}} states:
> {quote}
> /**
>  * The implementer of this class will take a renewer and add all
>  * delegation tokens associated with the renewer to the 
>  * Credentials object if it is not already present, 
> ...
> **/
> {quote}
> This bug doesn't pop up very often, because HDFS clients such as MapReduce 
> unintentionally call {{FileSystem#addDelegationTokens}} multiple times.
> We have a custom client that accesses HDFS/KMS-HA using delegation tokens, and 
> we were puzzled why it always threw "Failed to find any Kerberos tgt" 
> exceptions talking to one KMS but not the other. It turns out that the client 
> couldn't talk to the KMS because {{FileSystem#addDelegationTokens}} only gets 
> one KMS delegation token at a time.
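
The fix described here boils down to collecting a token from every provider instead of only the one the round-robin selects. A simplified sketch with hypothetical types (not the real KMSClientProvider API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Simplified sketch (hypothetical types, not the Hadoop API): a load-balancing
// provider that asks only the round-robin-selected instance for a delegation
// token vs. one that collects a token from every KMS instance.
public class AllProvidersDemo {
    interface Provider {
        String getDelegationToken(String renewer);
    }

    // Buggy shape: only one token, from whichever instance is picked.
    static List<String> tokensRoundRobin(List<Provider> providers, int next, String renewer) {
        List<String> tokens = new ArrayList<>();
        tokens.add(providers.get(next % providers.size()).getDelegationToken(renewer));
        return tokens;
    }

    // Fixed shape: one token per KMS instance, so a client holding the
    // resulting credentials can talk to any of them.
    static List<String> tokensFromAll(List<Provider> providers, String renewer) {
        List<String> tokens = new ArrayList<>();
        for (Provider p : providers) {
            tokens.add(p.getDelegationToken(renewer));
        }
        return tokens;
    }

    public static void main(String[] args) {
        List<Provider> kms = Arrays.asList(r -> "kms1-dt/" + r, r -> "kms2-dt/" + r);
        System.out.println(tokensRoundRobin(kms, 0, "yarn")); // one token only
        System.out.println(tokensFromAll(kms, "yarn"));       // one per instance
    }
}
```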



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14441) LoadBalancingKMSClientProvider#addDelegationTokens should add delegation tokens from all KMS instances

2017-05-25 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-14441:
-
Attachment: HADOOP-14441.004.patch

Posting my rev 004 patch. This patch updates the log message and prints the KMS 
delegation token obtained. I believe this will make troubleshooting KMS bugs 
easier in the future. With this patch, it prints a message similar to the following:

2017-05-25 10:36:57,468 INFO  LoadBalancingKMSClientProvider - Added delegation 
token Kind: kms-dt, Service: 127.0.0.1:51233, Ident: (kms-dt 
owner=SET_KEY_MATERIAL, renewer=foo, realUser=, issueDate=1495733816938, 
maxDate=1496338616938, sequenceNumber=1, masterKeyId=2) from 
http://localhost:51233/kms/v1/







[jira] [Commented] (HADOOP-14442) Owner support for ranger-wasb integration

2017-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025033#comment-16025033
 ] 

Hadoop QA commented on HADOOP-14442:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 9 
new + 34 unchanged - 0 fixed = 43 total (was 34) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 14 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
20s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14442 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12869875/HADOOP-14442.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f65a685322f0 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2e41f88 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12396/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12396/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12396/testReport/ |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12396/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Owner support for ranger-wasb integration
> -
>
> Key: HADOOP-14442
> URL: https://issues.apache.org/jira/browse/HADOOP-14442
> 

[jira] [Updated] (HADOOP-14426) Upgrade Kerby version from 1.0.0-RC2 to 1.0.0

2017-05-25 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-14426:
-
Hadoop Flags: Incompatible change,Reviewed  (was: Reviewed)

Updating dependency version is an incompatible change.

> Upgrade Kerby version from 1.0.0-RC2 to 1.0.0
> -
>
> Key: HADOOP-14426
> URL: https://issues.apache.org/jira/browse/HADOOP-14426
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Jiajia Li
>Assignee: Jiajia Li
>Priority: Blocker
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14426-001.patch
>
>
> Apache Kerby 1.0.0 has been released with some bug fixes.






[jira] [Commented] (HADOOP-14428) s3a: mkdir appears to be broken

2017-05-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024967#comment-16024967
 ] 

Steve Loughran commented on HADOOP-14428:
-

which filesystems have you tested against?

> s3a: mkdir appears to be broken
> ---
>
> Key: HADOOP-14428
> URL: https://issues.apache.org/jira/browse/HADOOP-14428
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha2, HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Mingliang Liu
>Priority: Blocker
> Attachments: HADOOP-14428.000.patch, HADOOP-14428.001.patch
>
>
> Reproduction is:
> hadoop fs -mkdir s3a://my-bucket/dir/
> hadoop fs -ls s3a://my-bucket/dir/
> ls: `s3a://my-bucket/dir/': No such file or directory
> I believe this is a regression from HADOOP-14255.






[jira] [Assigned] (HADOOP-14451) Deadlock in NativeIO

2017-05-25 Thread Ajith S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajith S reassigned HADOOP-14451:


Assignee: (was: Ajith S)

> Deadlock in NativeIO
> 
>
> Key: HADOOP-14451
> URL: https://issues.apache.org/jira/browse/HADOOP-14451
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Ajith S
>Priority: Blocker
> Attachments: HADOOP-14451-01.patch, Nodemanager.jstack
>
>







[jira] [Updated] (HADOOP-14442) Owner support for ranger-wasb integration

2017-05-25 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14442:
---
Target Version/s: 3.0.0-alpha2, 3.0.0-alpha1, 2.7.3  (was: 2.7.3, 
3.0.0-alpha1, 3.0.0-alpha2)
  Status: Patch Available  (was: Open)

> Owner support for ranger-wasb integration
> -
>
> Key: HADOOP-14442
> URL: https://issues.apache.org/jira/browse/HADOOP-14442
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure
>Reporter: Varada Hemeswari
>Assignee: Varada Hemeswari
>  Labels: filesystem, secure, wasb
> Attachments: HADOOP-14442.patch
>
>







[jira] [Created] (HADOOP-14457) LocalMetadataStore falsely reports empty parent directory authoritatively

2017-05-25 Thread Sean Mackrory (JIRA)
Sean Mackrory created HADOOP-14457:
--

 Summary: LocalMetadataStore falsely reports empty parent directory 
authoritatively
 Key: HADOOP-14457
 URL: https://issues.apache.org/jira/browse/HADOOP-14457
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sean Mackrory


Not a great test yet, but it at least reliably demonstrates the issue. 
LocalMetadataStore will sometimes erroneously report that a directory is empty 
with isAuthoritative = true when it *definitely* has children the metadata store 
should know about. It doesn't appear to happen if the children are just 
directories. The fact that it's returning an empty listing is concerning, but the 
fact that it says it's authoritative *might* be a second bug.

{code}
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
index 78b3970..1821d19 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
@@ -965,7 +965,7 @@ public boolean hasMetadataStore() {
   }
 
   @VisibleForTesting
-  MetadataStore getMetadataStore() {
+  public MetadataStore getMetadataStore() {
 return metadataStore;
   }
 
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
index 4339649..881bdc9 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
@@ -23,6 +23,11 @@
 import org.apache.hadoop.fs.contract.AbstractFSContract;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.fs.s3a.Tristate;
+import org.apache.hadoop.fs.s3a.s3guard.DirListingMetadata;
+import org.apache.hadoop.fs.s3a.s3guard.MetadataStore;
+import org.junit.Test;
 
 import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
 import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset;
@@ -72,4 +77,24 @@ public void testRenameDirIntoExistingDir() throws Throwable {
 boolean rename = fs.rename(srcDir, destDir);
 assertFalse("s3a doesn't support rename to non-empty directory", rename);
   }
+
+  @Test
+  public void testMkdirPopulatesFileAncestors() throws Exception {
+final FileSystem fs = getFileSystem();
+final MetadataStore ms = ((S3AFileSystem) fs).getMetadataStore();
+final Path parent = path("testMkdirPopulatesFileAncestors/source");
+try {
+  fs.mkdirs(parent);
+  final Path nestedFile = new Path(parent, "dir1/dir2/dir3/file4");
+  byte[] srcDataset = dataset(256, 'a', 'z');
+  writeDataset(fs, nestedFile, srcDataset, srcDataset.length,
+  1024, false);
+
+  DirListingMetadata list = ms.listChildren(parent);
+  assertTrue("MetadataStore falsely reports authoritative empty list",
+  list.isEmpty() == Tristate.FALSE || !list.isAuthoritative());
+} finally {
+  fs.delete(parent, true);
+}
+  }
 }
{code}






[jira] [Commented] (HADOOP-14456) Modifier 'static' is redundant for inner enums less

2017-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024890#comment-16024890
 ] 

Hadoop QA commented on HADOOP-14456:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  3m 
46s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
26s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
36s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 22s{color} | {color:orange} root: The patch generated 19 new + 1783 
unchanged - 84 fixed = 1802 total (was 1867) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  4m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
41s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-nfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
6s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 21s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
37s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
0s{color} | {color:green} hadoop-hdfs-nfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
10s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-rumen in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
56s{color} | {color:green} hadoop-extras in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
36s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | 

[jira] [Commented] (HADOOP-14407) DistCp - Introduce a configurable copy buffer size

2017-05-25 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024882#comment-16024882
 ] 

Yongjun Zhang commented on HADOOP-14407:


Welcome [~omkarksa]. 

Thanks [~ste...@apache.org], I already committed to branch-2 yesterday.


> DistCp - Introduce a configurable copy buffer size
> --
>
> Key: HADOOP-14407
> URL: https://issues.apache.org/jira/browse/HADOOP-14407
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Omkar Aradhya K S
>Assignee: Omkar Aradhya K S
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14407.001.patch, HADOOP-14407.002.patch, 
> HADOOP-14407.002.patch, HADOOP-14407.003.patch, 
> HADOOP-14407.004.branch2.patch, HADOOP-14407.004.patch, 
> HADOOP-14407.004.patch, HADOOP-14407.branch2.002.patch, 
> TotalTime-vs-CopyBufferSize.jpg
>
>
> Currently, the RetriableFileCopyCommand has a fixed copy buffer size of just 
> 8KB. We have noticed in our performance tests that with bigger buffer sizes 
> we saw up to a ~3x performance boost. Hence, we are making the copy buffer 
> size a configurable setting via the new parameter .
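
The idea behind the patch can be sketched as a plain buffered copy loop where the buffer size is a parameter rather than a hard-coded 8KB (hypothetical names; the actual DistCp option name is elided above):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch of the idea (hypothetical names, not the DistCp code): a copy loop
// whose buffer size is a parameter instead of a hard-coded 8KB. Larger buffers
// mean fewer read/write calls per copied byte.
public class CopyBufferDemo {
    static long copy(InputStream in, OutputStream out, int bufferSize) throws IOException {
        byte[] buf = new byte[bufferSize];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[64 * 1024]; // 64KB of zeroes to copy
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(data), sink, 256 * 1024);
        System.out.println(copied); // 65536
    }
}
```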






[jira] [Resolved] (HADOOP-14407) DistCp - Introduce a configurable copy buffer size

2017-05-25 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang resolved HADOOP-14407.

Resolution: Fixed







[jira] [Commented] (HADOOP-14430) the accessTime of FileStatus returned by SFTPFileSystem's getFileStatus method is always 0

2017-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024836#comment-16024836
 ] 

Hudson commented on HADOOP-14430:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11782 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11782/])
HADOOP-14430 the accessTime of FileStatus returned by SFTPFileSystem's (stevel: 
rev 8bf0e2d6b38a2cbd3c3d45557ede7575c1f18312)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/sftp/TestSFTPFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/sftp/SFTPFileSystem.java


> the accessTime of FileStatus returned by SFTPFileSystem's getFileStatus 
> method is always 0
> --
>
> Key: HADOOP-14430
> URL: https://issues.apache.org/jira/browse/HADOOP-14430
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>Priority: Trivial
> Fix For: 2.9.0
>
> Attachments: HADOOP-14430-001.patch, HADOOP-14430-002.patch
>
>
> The accessTime of the FileStatus returned by SFTPFileSystem's getFileStatus 
> method is always 0; see {{long accessTime = 0}} in the code below:
>   {code} private FileStatus getFileStatus(ChannelSftp channel, LsEntry 
> sftpFile,
>   Path parentPath) throws IOException {
> SftpATTRS attr = sftpFile.getAttrs();
>……
> long modTime = attr.getMTime() * 1000; // convert to milliseconds (this is 
> wrong too, according to HADOOP-14431)
> long accessTime = 0;
>   ……
>   }  {code}
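
As the inline comment notes, {{attr.getMTime() * 1000}} is an int multiplication that can overflow before widening to long; the companion fix (per HADOOP-14431) is to force long arithmetic, and the accessTime fix is to derive it from the SFTP attributes the same way. A minimal sketch of the conversion, assuming the SFTP attribute times are seconds held in an int:

```java
public class SftpTimeDemo {
    // Widen to long *before* multiplying: the 1000L literal forces long
    // arithmetic, avoiding the int overflow noted in HADOOP-14431.
    static long toMillis(int seconds) {
        return seconds * 1000L;
    }

    public static void main(String[] args) {
        int mTimeSeconds = 1495733816; // sample timestamp (seconds since epoch)
        System.out.println(toMillis(mTimeSeconds)); // correct milliseconds
        System.out.println(1495733816 * 1000);      // int math: overflows, wrong value
    }
}
```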






[jira] [Issue Comment Deleted] (HADOOP-13760) S3Guard: add delete tracking

2017-05-25 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-13760:
---
Comment: was deleted

(was: Committed upstream and pushed for cdh5-2.6.0.)

> S3Guard: add delete tracking
> 
>
> Key: HADOOP-13760
> URL: https://issues.apache.org/jira/browse/HADOOP-13760
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Sean Mackrory
> Attachments: HADOOP-13760-HADOOP-13345.001.patch, 
> HADOOP-13760-HADOOP-13345.002.patch, HADOOP-13760-HADOOP-13345.003.patch, 
> HADOOP-13760-HADOOP-13345.004.patch, HADOOP-13760-HADOOP-13345.005.patch, 
> HADOOP-13760-HADOOP-13345.006.patch, HADOOP-13760-HADOOP-13345.007.patch, 
> HADOOP-13760-HADOOP-13345.008.patch, HADOOP-13760-HADOOP-13345.009.patch, 
> HADOOP-13760-HADOOP-13345.010.patch, HADOOP-13760-HADOOP-13345.011.patch, 
> HADOOP-13760-HADOOP-13345.012.patch, HADOOP-13760-HADOOP-13345.013.patch
>
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> delete tracking.
> Current behavior on delete is to remove the metadata from the MetadataStore.  
> To make deletes consistent, we need to add a {{isDeleted}} flag to 
> {{PathMetadata}} and check it when returning results from functions like 
> {{getFileStatus()}} and {{listStatus()}}.  In HADOOP-13651, I added TODO 
> comments in most of the places these new conditions are needed.  The work 
> does not look too bad.
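The flag-and-check scheme described above can be sketched in a few lines; {{Entry}} and {{MetaStore}} below are toy stand-ins for {{PathMetadata}} and a real MetadataStore implementation, not S3Guard code:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of tombstone-based delete tracking in a toy metadata store.
public class DeleteTrackingDemo {
    static final class Entry {
        final String path;
        boolean isDeleted;          // tombstone flag, kept after delete
        Entry(String path) { this.path = path; }
    }

    static final class MetaStore {
        private final Map<String, Entry> entries = new HashMap<>();

        void put(String path) { entries.put(path, new Entry(path)); }

        // Instead of removing the entry, mark it deleted so a later
        // (possibly stale) S3 listing cannot resurrect the path.
        void delete(String path) {
            Entry e = entries.get(path);
            if (e != null) { e.isDeleted = true; }
        }

        // Callers such as getFileStatus()/listStatus() must treat a
        // tombstoned entry as "not found".
        boolean exists(String path) {
            Entry e = entries.get(path);
            return e != null && !e.isDeleted;
        }
    }

    public static void main(String[] args) {
        MetaStore store = new MetaStore();
        store.put("/a/file");
        store.delete("/a/file");
        System.out.println(store.exists("/a/file")); // false: tombstoned
    }
}
```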






[jira] [Commented] (HADOOP-13760) S3Guard: add delete tracking

2017-05-25 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024831#comment-16024831
 ] 

Sean Mackrory commented on HADOOP-13760:


Pushed and committed. Apologies for the misplaced comment :)

> S3Guard: add delete tracking
> 
>
> Key: HADOOP-13760
> URL: https://issues.apache.org/jira/browse/HADOOP-13760
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Sean Mackrory
> Attachments: HADOOP-13760-HADOOP-13345.001.patch, 
> HADOOP-13760-HADOOP-13345.002.patch, HADOOP-13760-HADOOP-13345.003.patch, 
> HADOOP-13760-HADOOP-13345.004.patch, HADOOP-13760-HADOOP-13345.005.patch, 
> HADOOP-13760-HADOOP-13345.006.patch, HADOOP-13760-HADOOP-13345.007.patch, 
> HADOOP-13760-HADOOP-13345.008.patch, HADOOP-13760-HADOOP-13345.009.patch, 
> HADOOP-13760-HADOOP-13345.010.patch, HADOOP-13760-HADOOP-13345.011.patch, 
> HADOOP-13760-HADOOP-13345.012.patch, HADOOP-13760-HADOOP-13345.013.patch
>
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> delete tracking.
> Current behavior on delete is to remove the metadata from the MetadataStore.  
> To make deletes consistent, we need to add a {{isDeleted}} flag to 
> {{PathMetadata}} and check it when returning results from functions like 
> {{getFileStatus()}} and {{listStatus()}}.  In HADOOP-13651, I added TODO 
> comments in most of the places these new conditions are needed.  The work 
> does not look too bad.






[jira] [Updated] (HADOOP-13760) S3Guard: add delete tracking

2017-05-25 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-13760:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed upstream and pushed for cdh5-2.6.0.

> S3Guard: add delete tracking
> 
>
> Key: HADOOP-13760
> URL: https://issues.apache.org/jira/browse/HADOOP-13760
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Sean Mackrory
> Attachments: HADOOP-13760-HADOOP-13345.001.patch, 
> HADOOP-13760-HADOOP-13345.002.patch, HADOOP-13760-HADOOP-13345.003.patch, 
> HADOOP-13760-HADOOP-13345.004.patch, HADOOP-13760-HADOOP-13345.005.patch, 
> HADOOP-13760-HADOOP-13345.006.patch, HADOOP-13760-HADOOP-13345.007.patch, 
> HADOOP-13760-HADOOP-13345.008.patch, HADOOP-13760-HADOOP-13345.009.patch, 
> HADOOP-13760-HADOOP-13345.010.patch, HADOOP-13760-HADOOP-13345.011.patch, 
> HADOOP-13760-HADOOP-13345.012.patch, HADOOP-13760-HADOOP-13345.013.patch
>
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> delete tracking.
> Current behavior on delete is to remove the metadata from the MetadataStore.  
> To make deletes consistent, we need to add a {{isDeleted}} flag to 
> {{PathMetadata}} and check it when returning results from functions like 
> {{getFileStatus()}} and {{listStatus()}}.  In HADOOP-13651, I added TODO 
> comments in most of the places these new conditions are needed.  The work 
> does not look too bad.






[jira] [Commented] (HADOOP-14451) Deadlock in NativeIO

2017-05-25 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024810#comment-16024810
 ] 

Vinayakumar B commented on HADOOP-14451:


Looks like the 'simple' approach is not working after all for this Native IO.
[~ajithshetty], please continue your analysis.

> Deadlock in NativeIO
> 
>
> Key: HADOOP-14451
> URL: https://issues.apache.org/jira/browse/HADOOP-14451
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Blocker
> Attachments: HADOOP-14451-01.patch, Nodemanager.jstack
>
>







[jira] [Commented] (HADOOP-14399) Configuration does not correctly XInclude absolute file URIs

2017-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024798#comment-16024798
 ] 

Hudson commented on HADOOP-14399:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11781 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11781/])
HADOOP-14399. Configuration does not correctly XInclude absolute file (stevel: 
rev 1ba9704eec22c75f8aec653ee15eb6767b5a7f4b)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java


> Configuration does not correctly XInclude absolute file URIs
> 
>
> Key: HADOOP-14399
> URL: https://issues.apache.org/jira/browse/HADOOP-14399
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Jonathan Eagles
>Priority: Blocker
> Fix For: 2.9.0
>
> Attachments: HADOOP-14399.1.patch, HADOOP-14399.2.patch, 
> HADOOP-14399.3.patch
>
>
> [Reported 
> by|https://issues.apache.org/jira/browse/HADOOP-14216?focusedCommentId=15967816=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15967816]
>  [~ste...@apache.org] on HADOOP-14216, filing this JIRA on his behalf:
> {quote}
> Just tracked this down as the likely cause of my S3A test failures. This is 
> pulling in core-site.xml, which then xincludes auth-keys.xml, which finally 
> references an absolute path, file://home/stevel/(secret)/aws-keys.xml. This 
> is failing for me even with the latest patch in. Either transient XIncludes 
> aren't being picked up or
> Note also I think the error could be improved. 1. It's in the included file 
> where the problem appears to lie and 2. we should really know the missing 
> entry. Perhaps a wiki link too: I had to read the XInclude spec to work out 
> what was going on here before I could go back to finding the cause
> {quote}
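For readers meeting this for the first time, the include chain described in the quote looks roughly like the fragment below (all file names here are made up for illustration):

```xml
<!-- core-site.xml: pulls in a relative include... -->
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
  <xi:include href="auth-keys.xml"/>
</configuration>

<!-- auth-keys.xml: ...which in turn includes an absolute file: URI;
     resolving this second-level absolute href is what HADOOP-14399 fixes. -->
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
  <xi:include href="file:///home/user/aws-keys.xml"/>
</configuration>
```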






[jira] [Commented] (HADOOP-14407) DistCp - Introduce a configurable copy buffer size

2017-05-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024791#comment-16024791
 ] 

Steve Loughran commented on HADOOP-14407:
-

Re-opened the issue while a branch-2 version is done. Alternatively, create a 
new JIRA, "backport HADOOP-14407 to branch-2", and work on things there.

We now have distcp tests for S3 and Azure; it'd be good to test those, which 
isn't done automatically by Yetus.

> DistCp - Introduce a configurable copy buffer size
> --
>
> Key: HADOOP-14407
> URL: https://issues.apache.org/jira/browse/HADOOP-14407
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Omkar Aradhya K S
>Assignee: Omkar Aradhya K S
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14407.001.patch, HADOOP-14407.002.patch, 
> HADOOP-14407.002.patch, HADOOP-14407.003.patch, 
> HADOOP-14407.004.branch2.patch, HADOOP-14407.004.patch, 
> HADOOP-14407.004.patch, HADOOP-14407.branch2.002.patch, 
> TotalTime-vs-CopyBufferSize.jpg
>
>
> Currently, the RetriableFileCopyCommand has a fixed copy buffer size of just 
> 8KB. We have noticed in our performance tests that with bigger buffer sizes 
> we saw upto ~3x performance boost. Hence, making the copy buffer size a 
> configurable setting via the new parameter .
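The change described above amounts to replacing a hard-coded 8KB buffer in the copy loop with a configured value. A self-contained sketch of such a loop; {{CopyBufferDemo}} and its names are illustrative, not the DistCp code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch of a copy loop with a configurable buffer size, in the spirit of
// RetriableFileCopyCommand's fix.
public class CopyBufferDemo {
    static final int DEFAULT_COPY_BUFFER_SIZE = 8 * 1024; // the old fixed 8KB

    static long copy(InputStream in, OutputStream out, int bufferSize)
            throws IOException {
        byte[] buf = new byte[bufferSize];
        long copied = 0;
        int n;
        // A larger buffer means fewer read/write round trips per byte copied,
        // which is where the observed speedup comes from.
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            copied += n;
        }
        return copied;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[1 << 20]; // 1 MiB payload
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(data), sink,
                           256 * 1024); // e.g. 256KB instead of 8KB
        System.out.println(copied);      // 1048576
    }
}
```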






[jira] [Reopened] (HADOOP-14407) DistCp - Introduce a configurable copy buffer size

2017-05-25 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-14407:
-

> DistCp - Introduce a configurable copy buffer size
> --
>
> Key: HADOOP-14407
> URL: https://issues.apache.org/jira/browse/HADOOP-14407
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Omkar Aradhya K S
>Assignee: Omkar Aradhya K S
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14407.001.patch, HADOOP-14407.002.patch, 
> HADOOP-14407.002.patch, HADOOP-14407.003.patch, 
> HADOOP-14407.004.branch2.patch, HADOOP-14407.004.patch, 
> HADOOP-14407.004.patch, HADOOP-14407.branch2.002.patch, 
> TotalTime-vs-CopyBufferSize.jpg
>
>
> Currently, the RetriableFileCopyCommand has a fixed copy buffer size of just 
> 8KB. We have noticed in our performance tests that with bigger buffer sizes 
> we saw upto ~3x performance boost. Hence, making the copy buffer size a 
> configurable setting via the new parameter .






[jira] [Updated] (HADOOP-14430) the accessTime of FileStatus returned by SFTPFileSystem's getFileStatus method is always 0

2017-05-25 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14430:

   Resolution: Fixed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

LGTM; tested locally.

+1

Committed to branch-2 & trunk.

On the topic of SFTP, have you seen HADOOP-1, which proposes a new FS 
client? That sounds good, but it will take effort to get in. If you are able to 
get involved, that'd help

> the accessTime of FileStatus returned by SFTPFileSystem's getFileStatus 
> method is always 0
> --
>
> Key: HADOOP-14430
> URL: https://issues.apache.org/jira/browse/HADOOP-14430
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>Priority: Trivial
> Fix For: 2.9.0
>
> Attachments: HADOOP-14430-001.patch, HADOOP-14430-002.patch
>
>
> the accessTime of FileStatus got by SFTPFileSystem's getFileStatus method is 
> always 0
> {{long accessTime = 0}} in code below; 
>   {code} private FileStatus getFileStatus(ChannelSftp channel, LsEntry 
> sftpFile,
>   Path parentPath) throws IOException {
> SftpATTRS attr = sftpFile.getAttrs();
>……
> long modTime = attr.getMTime() * 1000; // convert to milliseconds (this is 
> wrong too, according to HADOOP-14431)
> long accessTime = 0;
>   ……
>   }  {code}






[jira] [Updated] (HADOOP-14430) the accessTime of FileStatus returned by SFTPFileSystem's getFileStatus method is always 0

2017-05-25 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14430:

Affects Version/s: 2.9.0

> the accessTime of FileStatus returned by SFTPFileSystem's getFileStatus 
> method is always 0
> --
>
> Key: HADOOP-14430
> URL: https://issues.apache.org/jira/browse/HADOOP-14430
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>Priority: Trivial
> Attachments: HADOOP-14430-001.patch, HADOOP-14430-002.patch
>
>
> the accessTime of FileStatus got by SFTPFileSystem's getFileStatus method is 
> always 0
> {{long accessTime = 0}} in code below; 
>   {code} private FileStatus getFileStatus(ChannelSftp channel, LsEntry 
> sftpFile,
>   Path parentPath) throws IOException {
> SftpATTRS attr = sftpFile.getAttrs();
>……
> long modTime = attr.getMTime() * 1000; // convert to milliseconds (this is 
> wrong too, according to HADOOP-14431)
> long accessTime = 0;
>   ……
>   }  {code}






[jira] [Updated] (HADOOP-14430) the accessTime of FileStatus returned by SFTPFileSystem's getFileStatus method is always 0

2017-05-25 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14430:

Summary: the accessTime of FileStatus returned by SFTPFileSystem's 
getFileStatus method is always 0  (was: the accessTime of FileStatus got by 
SFTPFileSystem's getFileStatus method is always 0)

> the accessTime of FileStatus returned by SFTPFileSystem's getFileStatus 
> method is always 0
> --
>
> Key: HADOOP-14430
> URL: https://issues.apache.org/jira/browse/HADOOP-14430
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>Priority: Trivial
> Attachments: HADOOP-14430-001.patch, HADOOP-14430-002.patch
>
>
> the accessTime of FileStatus got by SFTPFileSystem's getFileStatus method is 
> always 0
> {{long accessTime = 0}} in code below; 
>   {code} private FileStatus getFileStatus(ChannelSftp channel, LsEntry 
> sftpFile,
>   Path parentPath) throws IOException {
> SftpATTRS attr = sftpFile.getAttrs();
>……
> long modTime = attr.getMTime() * 1000; // convert to milliseconds (this is 
> wrong too, according to HADOOP-14431)
> long accessTime = 0;
>   ……
>   }  {code}






[jira] [Commented] (HADOOP-14313) Replace/improve Hadoop's byte[] comparator

2017-05-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024771#comment-16024771
 ] 

Steve Loughran commented on HADOOP-14313:
-

I've not been reviewing anything for the last fortnight; now I've got too much of 
a backlog. [~busbey]: have you had a chance?

> Replace/improve Hadoop's byte[] comparator
> --
>
> Key: HADOOP-14313
> URL: https://issues.apache.org/jira/browse/HADOOP-14313
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 2.7.4
>Reporter: Vikas Vishwakarma
> Fix For: 2.7.4
>
> Attachments: HADOOP-14313.001.patch, 
> HADOOP-14313.branch-2.7.001.patch, HADOOP-14313.branch-2.7.002.patch
>
>
> Hi,
> Recently we were looking at the Lexicographic byte array comparison in HBase. 
> We did microbenchmark for the byte array comparator of HADOOP ( 
> https://github.com/hanborq/hadoop/blob/master/src/core/org/apache/hadoop/io/FastByteComparisons.java#L161
>  ) , HBase Vs the latest byte array comparator from guava  ( 
> https://github.com/google/guava/blob/master/guava/src/com/google/common/primitives/UnsignedBytes.java#L362
>  ) and observed that the guava main branch version is much faster. 
> Specifically we see very good improvement when the byteArraySize%8 != 0 and 
> also for large byte arrays. I will update the benchmark results using JMH for 
> Hadoop vs Guava. For the jira on HBase, please refer HBASE-17877. 
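For reference, every candidate comparator must agree with the plain byte-at-a-time definition of lexicographic unsigned comparison below; the optimized Hadoop and guava versions merely compare 8 bytes at a time as longs via {{Unsafe}}. This sketch is illustrative, not the Hadoop or guava source:

```java
// Reference (scalar) form of lexicographic unsigned-byte comparison.
public class ByteCompareDemo {
    static int compare(byte[] a, byte[] b) {
        int len = Math.min(a.length, b.length);
        for (int i = 0; i < len; i++) {
            // Bytes are signed in Java; mask to compare as unsigned 0..255.
            int cmp = (a[i] & 0xff) - (b[i] & 0xff);
            if (cmp != 0) {
                return cmp;
            }
        }
        return a.length - b.length; // on a common prefix, shorter sorts first
    }

    public static void main(String[] args) {
        // 0x80 is 128 unsigned, even though it is -128 as a signed byte.
        System.out.println(compare(new byte[]{(byte) 0x80},
                                   new byte[]{0x01}) > 0); // true
    }
}
```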






[jira] [Assigned] (HADOOP-14423) s3guard will set file length to -1 on a putObjectDirect(stream, -1) call

2017-05-25 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-14423:
---

Assignee: Steve Loughran

> s3guard will set file length to -1 on a putObjectDirect(stream, -1) call
> 
>
> Key: HADOOP-14423
> URL: https://issues.apache.org/jira/browse/HADOOP-14423
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> You can pass a negative number into {{S3AFileSystem.putObjectDirect}}, which 
> means "put until the end of the stream". S3guard has been using this {{len}} 
> argument: it needs to be using the actual number of bytes uploaded. This is 
> also relevant with client-side encryption, when the amount of data put is 
> greater than the amount of data in the file or stream.
> I noted this in the committer branch after adding some more assertions; I've 
> changed it there, making S3AFS.putObjectDirect pull the content length to 
> pass to finishedWrite() from the {{PutObjectResult}} instead. This can be 
> picked into the s3guard branch.






[jira] [Updated] (HADOOP-14399) Configuration does not correctly XInclude absolute file URIs

2017-05-25 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14399:

   Resolution: Fixed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

> Configuration does not correctly XInclude absolute file URIs
> 
>
> Key: HADOOP-14399
> URL: https://issues.apache.org/jira/browse/HADOOP-14399
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Jonathan Eagles
>Priority: Blocker
> Fix For: 2.9.0
>
> Attachments: HADOOP-14399.1.patch, HADOOP-14399.2.patch, 
> HADOOP-14399.3.patch
>
>
> [Reported 
> by|https://issues.apache.org/jira/browse/HADOOP-14216?focusedCommentId=15967816=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15967816]
>  [~ste...@apache.org] on HADOOP-14216, filing this JIRA on his behalf:
> {quote}
> Just tracked this down as the likely cause of my S3A test failures. This is 
> pulling in core-site.xml, which then xincludes auth-keys.xml, which finally 
> references an absolute path, file://home/stevel/(secret)/aws-keys.xml. This 
> is failing for me even with the latest patch in. Either transient XIncludes 
> aren't being picked up or
> Note also I think the error could be improved. 1. It's in the included file 
> where the problem appears to lie and 2. we should really know the missing 
> entry. Perhaps a wiki link too: I had to read the XInclude spec to work out 
> what was going on here before I could go back to finding the cause
> {quote}






[jira] [Comment Edited] (HADOOP-14399) Configuration does not correctly XInclude absolute file URIs

2017-05-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024764#comment-16024764
 ] 

Steve Loughran edited comment on HADOOP-14399 at 5/25/17 2:03 PM:
--

S3A tests all pass with a file:/// path; 
+1
committed to trunk  & branch 2.

Thanks!


was (Author: ste...@apache.org):
S3A tests all pass with a file:/// path; 
+1
committed to trunk.

Thanks!

> Configuration does not correctly XInclude absolute file URIs
> 
>
> Key: HADOOP-14399
> URL: https://issues.apache.org/jira/browse/HADOOP-14399
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Jonathan Eagles
>Priority: Blocker
> Fix For: 2.9.0
>
> Attachments: HADOOP-14399.1.patch, HADOOP-14399.2.patch, 
> HADOOP-14399.3.patch
>
>
> [Reported 
> by|https://issues.apache.org/jira/browse/HADOOP-14216?focusedCommentId=15967816=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15967816]
>  [~ste...@apache.org] on HADOOP-14216, filing this JIRA on his behalf:
> {quote}
> Just tracked this down as the likely cause of my S3A test failures. This is 
> pulling in core-site.xml, which then xincludes auth-keys.xml, which finally 
> references an absolute path, file://home/stevel/(secret)/aws-keys.xml. This 
> is failing for me even with the latest patch in. Either transient XIncludes 
> aren't being picked up or
> Note also I think the error could be improved. 1. It's in the included file 
> where the problem appears to lie and 2. we should really know the missing 
> entry. Perhaps a wiki link too: I had to read the XInclude spec to work out 
> what was going on here before I could go back to finding the cause
> {quote}






[jira] [Commented] (HADOOP-14399) Configuration does not correctly XInclude absolute file URIs

2017-05-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024764#comment-16024764
 ] 

Steve Loughran commented on HADOOP-14399:
-

S3A tests all pass with a file:/// path; 
+1
committed to trunk.

Thanks!

> Configuration does not correctly XInclude absolute file URIs
> 
>
> Key: HADOOP-14399
> URL: https://issues.apache.org/jira/browse/HADOOP-14399
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Jonathan Eagles
>Priority: Blocker
> Attachments: HADOOP-14399.1.patch, HADOOP-14399.2.patch, 
> HADOOP-14399.3.patch
>
>
> [Reported 
> by|https://issues.apache.org/jira/browse/HADOOP-14216?focusedCommentId=15967816=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15967816]
>  [~ste...@apache.org] on HADOOP-14216, filing this JIRA on his behalf:
> {quote}
> Just tracked this down as the likely cause of my S3A test failures. This is 
> pulling in core-site.xml, which then xincludes auth-keys.xml, which finally 
> references an absolute path, file://home/stevel/(secret)/aws-keys.xml. This 
> is failing for me even with the latest patch in. Either transient XIncludes 
> aren't being picked up or
> Note also I think the error could be improved. 1. It's in the included file 
> where the problem appears to lie and 2. we should really know the missing 
> entry. Perhaps a wiki link too: I had to read the XInclude spec to work out 
> what was going on here before I could go back to finding the cause
> {quote}






[jira] [Commented] (HADOOP-14442) Owner support for ranger-wasb integration

2017-05-25 Thread Varada Hemeswari (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024766#comment-16024766
 ] 

Varada Hemeswari commented on HADOOP-14442:
---

[~liuml07], Can you please review the patch attached.

> Owner support for ranger-wasb integration
> -
>
> Key: HADOOP-14442
> URL: https://issues.apache.org/jira/browse/HADOOP-14442
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure
>Reporter: Varada Hemeswari
>Assignee: Varada Hemeswari
>  Labels: filesystem, secure, wasb
> Attachments: HADOOP-14442.patch
>
>







[jira] [Updated] (HADOOP-14442) Owner support for ranger-wasb integration

2017-05-25 Thread Varada Hemeswari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varada Hemeswari updated HADOOP-14442:
--
Attachment: HADOOP-14442.patch

> Owner support for ranger-wasb integration
> -
>
> Key: HADOOP-14442
> URL: https://issues.apache.org/jira/browse/HADOOP-14442
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure
>Reporter: Varada Hemeswari
>Assignee: Varada Hemeswari
>  Labels: filesystem, secure, wasb
> Attachments: HADOOP-14442.patch
>
>







[jira] [Commented] (HADOOP-14399) Configuration does not correctly XInclude absolute file URIs

2017-05-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024746#comment-16024746
 ] 

Steve Loughran commented on HADOOP-14399:
-

Seems good. How about I do a quick test and see?

> Configuration does not correctly XInclude absolute file URIs
> 
>
> Key: HADOOP-14399
> URL: https://issues.apache.org/jira/browse/HADOOP-14399
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Jonathan Eagles
>Priority: Blocker
> Attachments: HADOOP-14399.1.patch, HADOOP-14399.2.patch, 
> HADOOP-14399.3.patch
>
>
> [Reported 
> by|https://issues.apache.org/jira/browse/HADOOP-14216?focusedCommentId=15967816=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15967816]
>  [~ste...@apache.org] on HADOOP-14216, filing this JIRA on his behalf:
> {quote}
> Just tracked this down as the likely cause of my S3A test failures. This is 
> pulling in core-site.xml, which then xincludes auth-keys.xml, which finally 
> references an absolute path, file://home/stevel/(secret)/aws-keys.xml. This 
> is failing for me even with the latest patch in. Either transient XIncludes 
> aren't being picked up or
> Note also I think the error could be improved. 1. It's in the included file 
> where the problem appears to lie and 2. we should really know the missing 
> entry. Perhaps a wiki link too: I had to read the XInclude spec to work out 
> what was going on here before I could go back to finding the cause
> {quote}






[jira] [Commented] (HADOOP-14432) S3A copyFromLocalFile to be robust, tested

2017-05-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024742#comment-16024742
 ] 

Steve Loughran commented on HADOOP-14432:
-

thanks, let's leave it on trunk for now. There are no bug fixes in it, just tests 
and assertions. (OK, one bug: you can copy onto a directory. But nobody has 
noticed. :)

> S3A copyFromLocalFile to be robust, tested
> --
>
> Key: HADOOP-14432
> URL: https://issues.apache.org/jira/browse/HADOOP-14432
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14432-001.patch
>
>
> {{S3AFileSystem.copyFromLocalFile()}}
> Doesn't
> * check for local file existing. Fix: check and raise FNFE (today: 
> AmazonClientException is raised)
> * check for dest being a directory. Fix: Better checks before upload
> * have any tests. Fix: write the tests
> this is related to the committer work, but doesn't depend on it
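The two missing checks can be sketched with plain {{java.io.File}}; {{checkCopy}} is a hypothetical helper written for illustration, not an {{S3AFileSystem}} method:

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;

// Sketch of the precondition checks the issue asks for.
public class CopyPreconditionDemo {
    static void checkCopy(File src, File dst) throws IOException {
        if (!src.exists()) {
            // Raise FNFE up front rather than letting the upload fail
            // later with an AmazonClientException.
            throw new FileNotFoundException("source not found: " + src);
        }
        if (dst.isDirectory()) {
            throw new IOException("destination is a directory: " + dst);
        }
    }
}
```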






[jira] [Commented] (HADOOP-14451) Deadlock in NativeIO

2017-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024739#comment-16024739
 ] 

Hadoop QA commented on HADOOP-14451:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
19s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 3 new + 125 unchanged - 6 fixed = 128 total (was 131) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 48s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.io.nativeio.TestNativeIO |
|   | hadoop.security.TestRaceWhenRelogin |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14451 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12869855/HADOOP-14451-01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux dadb630df3c8 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1a56a3d |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12395/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12395/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12395/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12395/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 

[jira] [Commented] (HADOOP-14453) Split the maven modules into several profiles

2017-05-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024715#comment-16024715
 ] 

Steve Loughran commented on HADOOP-14453:
-

# If you are finding the builds slow, turn off shading with {{-DskipShade}}.
# You don't need more than one profile to selectively build things; that's what 
the {{-pl}} option does.
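For illustration, the two points combine on the command line roughly as follows; a sketch assuming a trunk checkout (the module path and goals are illustrative):

```shell
# Skip shading of the client jars, which dominates full-build time.
mvn install -DskipTests -DskipShade

# Build only hadoop-common plus the modules it depends on (-am = also-make),
# without compiling the hdfs/yarn/mapreduce trees.
mvn install -DskipTests -DskipShade -pl hadoop-common-project/hadoop-common -am
```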


> Split the maven modules into several profiles
> -
>
> Key: HADOOP-14453
> URL: https://issues.apache.org/jira/browse/HADOOP-14453
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: c14453_20170524.patch
>
>
> Currently, all the modules are defined directly under the root pom.  As a 
> result, we cannot select to build only some of the modules.  We have to build 
> all the modules in all cases and, unfortunately, it takes a long time.
> We propose splitting all the modules into multiple profiles so that we could 
> build some of the modules by disabling some of the profiles.  All the 
> profiles are enabled by default so that all the modules will be built by 
> default. 
> For example, when we are making a change in common, we could build and run 
> tests under common by disabling the hdfs, yarn, mapreduce, etc. modules.  This 
> will reduce the development time spent on compiling unrelated modules.
> Note that this is for local maven builds.  We are not proposing to change 
> Jenkins builds, which always build all the modules.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14451) Deadlock in NativeIO

2017-05-25 Thread Ajith S (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024697#comment-16024697
 ] 

Ajith S commented on HADOOP-14451:
--

I think people here are more interested in uploading a patch than in discussing 
with the submitter whether the analysis and approach are right. If I have added 
a good amount of analysis, I would probably have a solution too. Anyway, feel 
free to assign it to yourself if that's the case.

> Deadlock in NativeIO
> 
>
> Key: HADOOP-14451
> URL: https://issues.apache.org/jira/browse/HADOOP-14451
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Blocker
> Attachments: HADOOP-14451-01.patch, Nodemanager.jstack
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14425) Add more s3guard metrics

2017-05-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024676#comment-16024676
 ] 

Steve Loughran commented on HADOOP-14425:
-

Latest dump of stats from a committer test run. Note how the latency values are 
in a different case from the others, including the lower-case s3guard_ props; 
they should be downcased.
{code}
2017-05-25 12:24:08,466 [ScalaTest-main-running-S3ACommitDataframeSuite] INFO  
s3.S3AOperations (CloudLogging.scala:logInfo(56)) -   
S3guard_metadatastore_put_path_latency50thPercentileLatency =  0
  S3guard_metadatastore_put_path_latency75thPercentileLatency =  0
  S3guard_metadatastore_put_path_latency90thPercentileLatency =  0
  S3guard_metadatastore_put_path_latency95thPercentileLatency =  0
  S3guard_metadatastore_put_path_latency99thPercentileLatency =  0
  S3guard_metadatastore_put_path_latencyNumOps =  0
  committer_bytes_committed =  314
  committer_commits_aborted =  0
  committer_commits_completed =  1
  committer_commits_created =  1
  committer_commits_failed =  0
  committer_commits_reverted =  0
  committer_jobs_completed =  1
  committer_jobs_failed =  0
  committer_tasks_completed =  1
  committer_tasks_failed =  0
  directories_created =  0
  directories_deleted =  0
  fake_directories_deleted =  4
  files_copied =  0
  files_copied_bytes =  0
  files_created =  0
  files_deleted =  2
  ignored_errors =  1
  object_continue_list_requests =  0
  object_copy_requests =  0
  object_delete_requests =  2
  object_list_requests =  5
  object_metadata_requests =  8
  object_multipart_aborted =  0
  object_put_bytes =  314
  object_put_bytes_pending =  0
  object_put_requests =  1
  object_put_requests_active =  0
  object_put_requests_completed =  1
  op_copy_from_local_file =  0
  op_exists =  1
  op_get_file_status =  3
  op_glob_status =  0
  op_is_directory =  0
  op_is_file =  0
  op_list_files =  0
  op_list_located_status =  0
  op_list_status =  0
  op_mkdirs =  0
  op_rename =  0
  s3guard_metadatastore_initialization =  0
  s3guard_metadatastore_put_path_request =  1
  stream_aborted =  0
  stream_backward_seek_operations =  0
  stream_bytes_backwards_on_seek =  0
  stream_bytes_discarded_in_abort =  0
  stream_bytes_read =  0
  stream_bytes_read_in_close =  0
  stream_bytes_skipped_on_seek =  0
  stream_close_operations =  0
  stream_closed =  0
  stream_forward_seek_operations =  0
  stream_opened =  0
  stream_read_exceptions =  0
  stream_read_fully_operations =  0
  stream_read_operations =  0
  stream_read_operations_incomplete =  0
  stream_seek_operations =  0
  stream_write_block_uploads =  0
  stream_write_block_uploads_aborted =  0
  stream_write_block_uploads_active =  0
  stream_write_block_uploads_committed =  0
  stream_write_block_uploads_data_pending =  0
  stream_write_block_uploads_pending =  0
  stream_write_failures =  0
  stream_write_total_data =  0
  stream_write_total_time =  0
{code}
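A minimal sketch of the suggested fix, assuming the stat names are plain strings at the point they are emitted (the helper below is illustrative, not the actual S3A instrumentation API):

```java
import java.util.Locale;
import java.util.Map;
import java.util.TreeMap;

public class MetricNameCase {
  // Downcase metric keys so "S3guard_..." entries sort and match together
  // with the existing "s3guard_..." counters.
  static Map<String, Long> normalize(Map<String, Long> stats) {
    Map<String, Long> out = new TreeMap<>();
    for (Map.Entry<String, Long> e : stats.entrySet()) {
      out.put(e.getKey().toLowerCase(Locale.ROOT), e.getValue());
    }
    return out;
  }

  public static void main(String[] args) {
    Map<String, Long> stats = new TreeMap<>();
    stats.put("S3guard_metadatastore_put_path_latencyNumOps", 0L);
    stats.put("s3guard_metadatastore_put_path_request", 1L);
    // Both keys now share the s3guard_ prefix and sort adjacently.
    System.out.println(normalize(stats).keySet());
  }
}
```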


> Add more s3guard metrics
> 
>
> Key: HADOOP-14425
> URL: https://issues.apache.org/jira/browse/HADOOP-14425
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Ai Deng
>
> The metrics suggested to add:
> Status:
> S3GUARD_METADATASTORE_ENABLED
> S3GUARD_METADATASTORE_IS_AUTHORITATIVE
> Operations:
> S3GUARD_METADATASTORE_INITIALIZATION
> S3GUARD_METADATASTORE_DELETE_PATH
> S3GUARD_METADATASTORE_DELETE_PATH_LATENCY
> S3GUARD_METADATASTORE_DELETE_SUBTREE_PATCH
> S3GUARD_METADATASTORE_GET_PATH
> S3GUARD_METADATASTORE_GET_PATH_LATENCY
> S3GUARD_METADATASTORE_GET_CHILDREN_PATH
> S3GUARD_METADATASTORE_GET_CHILDREN_PATH_LATENCY
> S3GUARD_METADATASTORE_MOVE_PATH
> S3GUARD_METADATASTORE_PUT_PATH
> S3GUARD_METADATASTORE_PUT_PATH_LATENCY
> S3GUARD_METADATASTORE_CLOSE
> S3GUARD_METADATASTORE_DESTORY
> From S3Guard:
> S3GUARD_METADATASTORE_MERGE_DIRECTORY
> For the failures:
> S3GUARD_METADATASTORE_DELETE_FAILURE
> S3GUARD_METADATASTORE_GET_FAILURE
> S3GUARD_METADATASTORE_PUT_FAILURE
> Etc:
> S3GUARD_METADATASTORE_PUT_RETRY_TIMES



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14443) Azure: Add retry and client side failover for authorization, SASKey generation and delegation token generation requests to remote service

2017-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024655#comment-16024655
 ] 

Hadoop QA commented on HADOOP-14443:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 19s{color} 
| {color:red} hadoop-tools_hadoop-azure generated 1 new + 1 unchanged - 0 fixed 
= 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} hadoop-tools/hadoop-azure: The patch generated 0 new 
+ 28 unchanged - 23 fixed = 28 total (was 51) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-azure in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
29s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14443 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12869844/HADOOP-14443.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a73cfd81fda6 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1a56a3d |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12394/artifact/patchprocess/diff-compile-javac-hadoop-tools_hadoop-azure.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12394/artifact/patchprocess/patch-javadoc-hadoop-tools_hadoop-azure.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12394/testReport/ |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12394/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Azure: Add retry and client side failover for authorization, SASKey 
> generation and delegation token generation requests to remote service
> 

[jira] [Updated] (HADOOP-14451) Deadlock in NativeIO

2017-05-25 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-14451:
---
Status: Patch Available  (was: Open)

> Deadlock in NativeIO
> 
>
> Key: HADOOP-14451
> URL: https://issues.apache.org/jira/browse/HADOOP-14451
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1, 2.8.0
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Blocker
> Attachments: HADOOP-14451-01.patch, Nodemanager.jstack
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14451) Deadlock in NativeIO

2017-05-25 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-14451:
---
Attachment: HADOOP-14451-01.patch

Attached a simple approach to avoiding the deadlock.

> Deadlock in NativeIO
> 
>
> Key: HADOOP-14451
> URL: https://issues.apache.org/jira/browse/HADOOP-14451
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Blocker
> Attachments: HADOOP-14451-01.patch, Nodemanager.jstack
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14451) Deadlock in NativeIO

2017-05-25 Thread Martin Walsh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024620#comment-16024620
 ] 

Martin Walsh commented on HADOOP-14451:
---

The only other suggestion I can think of is to break {{initNative()}} down into 
smaller functions, e.g. {{initNativePosix()}}, {{initNativeStat()}}, etc., each 
of which is initialised by a static block within its own inner class.
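A minimal sketch of that idea, using the initialization-on-demand holder idiom; the class and field names are illustrative stand-ins, not the actual NativeIO members:

```java
public class NativeIOSketch {

  // Each feature group gets its own holder class. The JVM runs a class's
  // static initializer exactly once, on first use, and locks only that
  // class, so initializing Posix cannot block on or deadlock with Stat.
  static final class Posix {
    static final boolean AVAILABLE;
    static {
      // Real code would run the native init for this group only,
      // e.g. initNativePosix().
      AVAILABLE = true;
    }
  }

  static final class Stat {
    static final boolean AVAILABLE;
    static {
      // Real code would run initNativeStat() here.
      AVAILABLE = true;
    }
  }

  public static void main(String[] args) {
    // Touching a holder initializes it lazily and independently.
    System.out.println(Posix.AVAILABLE && Stat.AVAILABLE);
  }
}
```

Splitting a single class-wide {{initNative()}} this way narrows the initialization lock from one class to one per feature group.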

> Deadlock in NativeIO
> 
>
> Key: HADOOP-14451
> URL: https://issues.apache.org/jira/browse/HADOOP-14451
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Blocker
> Attachments: Nodemanager.jstack
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13760) S3Guard: add delete tracking

2017-05-25 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024618#comment-16024618
 ] 

Sean Mackrory commented on HADOOP-13760:


Turns out you just have to keep retrying over and over and over until Jenkins 
stops rejecting form submissions and Docker finds what it's looking for :)

Will commit shortly. Thanks for all the review, here!

> S3Guard: add delete tracking
> 
>
> Key: HADOOP-13760
> URL: https://issues.apache.org/jira/browse/HADOOP-13760
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Sean Mackrory
> Attachments: HADOOP-13760-HADOOP-13345.001.patch, 
> HADOOP-13760-HADOOP-13345.002.patch, HADOOP-13760-HADOOP-13345.003.patch, 
> HADOOP-13760-HADOOP-13345.004.patch, HADOOP-13760-HADOOP-13345.005.patch, 
> HADOOP-13760-HADOOP-13345.006.patch, HADOOP-13760-HADOOP-13345.007.patch, 
> HADOOP-13760-HADOOP-13345.008.patch, HADOOP-13760-HADOOP-13345.009.patch, 
> HADOOP-13760-HADOOP-13345.010.patch, HADOOP-13760-HADOOP-13345.011.patch, 
> HADOOP-13760-HADOOP-13345.012.patch, HADOOP-13760-HADOOP-13345.013.patch
>
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> delete tracking.
> Current behavior on delete is to remove the metadata from the MetadataStore.  
> To make deletes consistent, we need to add a {{isDeleted}} flag to 
> {{PathMetadata}} and check it when returning results from functions like 
> {{getFileStatus()}} and {{listStatus()}}.  In HADOOP-13651, I added TODO 
> comments in most of the places these new conditions are needed.  The work 
> does not look too bad.
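The mechanics described above can be sketched as follows; {{PathMeta}} and the in-memory map are illustrative stand-ins, not the actual {{PathMetadata}}/{{MetadataStore}} API:

```java
import java.util.HashMap;
import java.util.Map;

public class TombstoneSketch {
  static final class PathMeta {
    final boolean isDeleted;
    PathMeta(boolean isDeleted) { this.isDeleted = isDeleted; }
  }

  static final Map<String, PathMeta> store = new HashMap<>();

  // delete() records a tombstone instead of removing the entry outright,
  // so a stale listing from S3 cannot resurrect the path.
  static void delete(String path) { store.put(path, new PathMeta(true)); }

  // Lookups must treat a tombstoned entry as "not found".
  static PathMeta getFileStatus(String path) {
    PathMeta m = store.get(path);
    return (m == null || m.isDeleted) ? null : m;
  }

  public static void main(String[] args) {
    store.put("/a", new PathMeta(false));
    delete("/a");
    System.out.println(getFileStatus("/a")); // null: the tombstone hides it
  }
}
```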



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14456) Modifier 'static' is redundant for inner enums less

2017-05-25 Thread ZhangBing Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024616#comment-16024616
 ] 

ZhangBing Lin commented on HADOOP-14456:


Submitted a patch!

> Modifier 'static' is redundant for inner enums less
> ---
>
> Key: HADOOP-14456
> URL: https://issues.apache.org/jira/browse/HADOOP-14456
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha3
>Reporter: ZhangBing Lin
>Assignee: ZhangBing Lin
>Priority: Minor
> Attachments: HADOOP-14456.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14443) Azure: Add retry and client side failover for authorization, SASKey generation and delegation token generation requests to remote service

2017-05-25 Thread Santhosh G Nayak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Santhosh G Nayak updated HADOOP-14443:
--
Status: Patch Available  (was: Open)

> Azure: Add retry and client side failover for authorization, SASKey 
> generation and delegation token generation requests to remote service
> -
>
> Key: HADOOP-14443
> URL: https://issues.apache.org/jira/browse/HADOOP-14443
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14443.1.patch, HADOOP-14443.2.patch
>
>
> Currently, {{WasRemoteCallHelper}} can be configured to talk to only one URL 
> for authorization, SASKey generation and delegation token generation. If for 
> some reason the service is down, all the requests will fail.
> So the proposal is to:
> - Add support for configuring multiple URLs, so that if communication to one URL 
> fails, the client can retry on another instance of the service running on a 
> different node for authorization, SASKey generation and delegation token 
> generation. 
> - Rename the configurations {{fs.azure.authorization.remote.service.url}} to 
> {{fs.azure.authorization.remote.service.urls}} and 
> {{fs.azure.cred.service.url}} to {{fs.azure.cred.service.urls}} to support 
> comma-separated lists of URLs.
> - Introduce a new configuration, {{fs.azure.delegation.token.service.urls}}, to 
> configure the comma-separated list of service URLs to get the delegation 
> token.
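A hedged sketch of how the renamed, comma-separated properties might look in {{core-site.xml}} (the property names come from the proposal above; the hosts are placeholders):

```xml
<property>
  <name>fs.azure.authorization.remote.service.urls</name>
  <value>https://authz-host1:8080/,https://authz-host2:8080/</value>
</property>
<property>
  <name>fs.azure.cred.service.urls</name>
  <value>https://cred-host1:8080/,https://cred-host2:8080/</value>
</property>
<property>
  <name>fs.azure.delegation.token.service.urls</name>
  <value>https://dt-host1:8080/,https://dt-host2:8080/</value>
</property>
```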



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14443) Azure: Add retry and client side failover for authorization, SASKey generation and delegation token generation requests to remote service

2017-05-25 Thread Santhosh G Nayak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Santhosh G Nayak updated HADOOP-14443:
--
Attachment: HADOOP-14443.2.patch

Thanks [~ste...@apache.org] for reviewing this. I have addressed the code review 
comments.
This patch was tested against the {{Azure West US}} endpoint. 

> Azure: Add retry and client side failover for authorization, SASKey 
> generation and delegation token generation requests to remote service
> -
>
> Key: HADOOP-14443
> URL: https://issues.apache.org/jira/browse/HADOOP-14443
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14443.1.patch, HADOOP-14443.2.patch
>
>
> Currently, {{WasRemoteCallHelper}} can be configured to talk to only one URL 
> for authorization, SASKey generation and delegation token generation. If for 
> some reason the service is down, all the requests will fail.
> So the proposal is to:
> - Add support for configuring multiple URLs, so that if communication to one URL 
> fails, the client can retry on another instance of the service running on a 
> different node for authorization, SASKey generation and delegation token 
> generation. 
> - Rename the configurations {{fs.azure.authorization.remote.service.url}} to 
> {{fs.azure.authorization.remote.service.urls}} and 
> {{fs.azure.cred.service.url}} to {{fs.azure.cred.service.urls}} to support 
> comma-separated lists of URLs.
> - Introduce a new configuration, {{fs.azure.delegation.token.service.urls}}, to 
> configure the comma-separated list of service URLs to get the delegation 
> token.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14451) Deadlock in NativeIO

2017-05-25 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024512#comment-16024512
 ] 

Bibin A Chundatt commented on HADOOP-14451:
---

[~martinw]
We tried the same; that approach could solve the problem, but we would have to 
add the check at every {{NativeIO.POSIX}} usage location.
[~ajithshetty] Any other solution in mind?

> Deadlock in NativeIO
> 
>
> Key: HADOOP-14451
> URL: https://issues.apache.org/jira/browse/HADOOP-14451
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Blocker
> Attachments: Nodemanager.jstack
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14451) Deadlock in NativeIO

2017-05-25 Thread Martin Walsh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024487#comment-16024487
 ] 

Martin Walsh commented on HADOOP-14451:
---

I am not actively working on this, so my thoughts may be wide of the mark.  
Does this issue occur with

{code}
if (NativeIO.isAvailable()) {
  NativeIO.POSIX.getCacheManipulator().posixFadviseIfPossible(
      identifier, fd, getPosition(), getCount(), POSIX_FADV_DONTNEED);
}
{code}






> Deadlock in NativeIO
> 
>
> Key: HADOOP-14451
> URL: https://issues.apache.org/jira/browse/HADOOP-14451
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Blocker
> Attachments: Nodemanager.jstack
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14456) Modifier 'static' is redundant for inner enums less

2017-05-25 Thread ZhangBing Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangBing Lin updated HADOOP-14456:
---
Status: Patch Available  (was: Open)

> Modifier 'static' is redundant for inner enums less
> ---
>
> Key: HADOOP-14456
> URL: https://issues.apache.org/jira/browse/HADOOP-14456
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha3
>Reporter: ZhangBing Lin
>Assignee: ZhangBing Lin
>Priority: Minor
> Attachments: HADOOP-14456.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14456) Modifier 'static' is redundant for inner enums less

2017-05-25 Thread ZhangBing Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangBing Lin updated HADOOP-14456:
---
Attachment: HADOOP-14456.001.patch

> Modifier 'static' is redundant for inner enums less
> ---
>
> Key: HADOOP-14456
> URL: https://issues.apache.org/jira/browse/HADOOP-14456
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha3
>Reporter: ZhangBing Lin
>Assignee: ZhangBing Lin
>Priority: Minor
> Attachments: HADOOP-14456.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14456) Modifier 'static' is redundant for inner enums less

2017-05-25 Thread ZhangBing Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangBing Lin updated HADOOP-14456:
---
Affects Version/s: 3.0.0-alpha3

> Modifier 'static' is redundant for inner enums less
> ---
>
> Key: HADOOP-14456
> URL: https://issues.apache.org/jira/browse/HADOOP-14456
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha3
>Reporter: ZhangBing Lin
>Assignee: ZhangBing Lin
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14456) Modifier 'static' is redundant for inner enums less

2017-05-25 Thread ZhangBing Lin (JIRA)
ZhangBing Lin created HADOOP-14456:
--

 Summary: Modifier 'static' is redundant for inner enums less
 Key: HADOOP-14456
 URL: https://issues.apache.org/jira/browse/HADOOP-14456
 Project: Hadoop Common
  Issue Type: Bug
Reporter: ZhangBing Lin
Assignee: ZhangBing Lin
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14451) Deadlock in NativeIO

2017-05-25 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024449#comment-16024449
 ] 

Bibin A Chundatt commented on HADOOP-14451:
---

Hi [~vinayrpet], [~rakesh_r], [~martinw], any thoughts?

> Deadlock in NativeIO
> 
>
> Key: HADOOP-14451
> URL: https://issues.apache.org/jira/browse/HADOOP-14451
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Blocker
> Attachments: Nodemanager.jstack
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11829) Improve the vector size of Bloom Filter from int to long, and storage from memory to disk

2017-05-25 Thread Yonatan Gottesman (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024406#comment-16024406
 ] 

Yonatan Gottesman commented on HADOOP-11829:


So, on each query, you load the parts of the bitset that you need and check 
there?


> Improve the vector size of Bloom Filter from int to long, and storage from 
> memory to disk
> -
>
> Key: HADOOP-11829
> URL: https://issues.apache.org/jira/browse/HADOOP-11829
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Hongbo Xu
>Assignee: Hongbo Xu
>Priority: Minor
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> org.apache.hadoop.util.bloom.BloomFilter(int vectorSize, int nbHash, int 
> hashType) 
> This filter can insert at most about 900 million objects when the false 
> positive probability is 0.0001, and it needs 2.1 GB of RAM.
> In my project, I needed to build a filter with a capacity of 2 billion, which 
> needs 4.7 GB of RAM; the vector size is 38340233509, out of the range of int, 
> and I do not have that much RAM. So I rebuilt a big Bloom filter whose vector 
> size type is long, split the bit data into several files on disk, and 
> distributed the files to worker nodes; the performance is very good.
> I think I can contribute this code to Hadoop Common, along with a 128-bit 
> hash function (MurmurHash).






[jira] [Resolved] (HADOOP-10873) Fix dead links in the API doc

2017-05-25 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-10873.
---
Resolution: Fixed

> Fix dead links in the API doc
> -
>
> Key: HADOOP-10873
> URL: https://issues.apache.org/jira/browse/HADOOP-10873
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>
> There are a lot of dead links in [Hadoop API 
> doc|http://hadoop.apache.org/docs/r2.4.1/api/]. We should fix them.






[jira] [Commented] (HADOOP-11829) Improve the vector size of Bloom Filter from int to long, and storage from memory to disk

2017-05-25 Thread Hongbo Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024357#comment-16024357
 ] 

Hongbo Xu commented on HADOOP-11829:


Each query may need any part of the data. The bit data is sequential: if you 
store all the data in one file, then whenever you query a new entry you must 
open that very big file and seek to the position. If you split it into several 
small files with numbered file names, you can find your data quickly.
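That file-per-chunk layout can be sketched roughly as follows; this is an illustration only (the class name, file naming scheme, and chunk handling are invented here, not taken from any patch):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

/** Sketch: a bit vector split across numbered chunk files on disk. */
class ChunkedBitVector {
    private final File dir;
    private final long bitsPerChunk; // bits stored in each numbered file

    ChunkedBitVector(File dir, long bitsPerChunk) {
        this.dir = dir;
        this.bitsPerChunk = bitsPerChunk;
    }

    // Numbered file names make locating the right chunk a simple division.
    private File chunkFile(long bitIndex) {
        return new File(dir, "chunk-" + (bitIndex / bitsPerChunk));
    }

    void set(long bitIndex) throws IOException {
        long local = bitIndex % bitsPerChunk;
        try (RandomAccessFile f = new RandomAccessFile(chunkFile(bitIndex), "rw")) {
            long byteOff = local / 8;
            if (f.length() <= byteOff) {
                f.setLength(byteOff + 1); // grow on demand; new bytes are zero
            }
            f.seek(byteOff);
            int b = f.read();
            f.seek(byteOff);
            f.write(b | (1 << (int) (local % 8)));
        }
    }

    boolean get(long bitIndex) throws IOException {
        File chunk = chunkFile(bitIndex);
        long local = bitIndex % bitsPerChunk;
        if (!chunk.exists()) {
            return false; // chunk never written, so the bit is unset
        }
        try (RandomAccessFile f = new RandomAccessFile(chunk, "r")) {
            long byteOff = local / 8;
            if (f.length() <= byteOff) {
                return false;
            }
            f.seek(byteOff);
            return (f.read() & (1 << (int) (local % 8))) != 0;
        }
    }
}
```

Each query then opens only the one small chunk file covering the requested bit index, instead of seeking inside a single multi-gigabyte file.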

> Improve the vector size of Bloom Filter from int to long, and storage from 
> memory to disk
> -
>
> Key: HADOOP-11829
> URL: https://issues.apache.org/jira/browse/HADOOP-11829
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Hongbo Xu
>Assignee: Hongbo Xu
>Priority: Minor
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> org.apache.hadoop.util.bloom.BloomFilter(int vectorSize, int nbHash, int 
> hashType) 
> This filter can insert at most about 900 million objects when the false 
> positive probability is 0.0001, and it needs 2.1 GB of RAM.
> In my project, I needed to build a filter with a capacity of 2 billion, which 
> needs 4.7 GB of RAM; the vector size is 38340233509, out of the range of int, 
> and I do not have that much RAM. So I rebuilt a big Bloom filter whose vector 
> size type is long, split the bit data into several files on disk, and 
> distributed the files to worker nodes; the performance is very good.
> I think I can contribute this code to Hadoop Common, along with a 128-bit 
> hash function (MurmurHash).






[jira] [Commented] (HADOOP-11829) Improve the vector size of Bloom Filter from int to long, and storage from memory to disk

2017-05-25 Thread Yonatan Gottesman (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024339#comment-16024339
 ] 

Yonatan Gottesman commented on HADOOP-11829:


Hi, thanks.
What exactly do you mean by "split the bit data"? Do you load the relevant 
part for each query?

> Improve the vector size of Bloom Filter from int to long, and storage from 
> memory to disk
> -
>
> Key: HADOOP-11829
> URL: https://issues.apache.org/jira/browse/HADOOP-11829
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Hongbo Xu
>Assignee: Hongbo Xu
>Priority: Minor
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> org.apache.hadoop.util.bloom.BloomFilter(int vectorSize, int nbHash, int 
> hashType) 
> This filter can insert at most about 900 million objects when the false 
> positive probability is 0.0001, and it needs 2.1 GB of RAM.
> In my project, I needed to build a filter with a capacity of 2 billion, which 
> needs 4.7 GB of RAM; the vector size is 38340233509, out of the range of int, 
> and I do not have that much RAM. So I rebuilt a big Bloom filter whose vector 
> size type is long, split the bit data into several files on disk, and 
> distributed the files to worker nodes; the performance is very good.
> I think I can contribute this code to Hadoop Common, along with a 128-bit 
> hash function (MurmurHash).






[jira] [Commented] (HADOOP-13921) Remove Log4j classes from JobConf

2017-05-25 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024331#comment-16024331
 ] 

Akira Ajisaka commented on HADOOP-13921:


+1, thanks Sean.

> Remove Log4j classes from JobConf
> -
>
> Key: HADOOP-13921
> URL: https://issues.apache.org/jira/browse/HADOOP-13921
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Attachments: HADOOP-13921.0.patch, HADOOP-13921.1.patch
>
>
> Replace the use of log4j classes from JobConf so that the dependency is not 
> needed unless folks are making use of our custom log4j appenders or loading a 
> logging bridge to use that system.






[jira] [Commented] (HADOOP-11829) Improve the vector size of Bloom Filter from int to long, and storage from memory to disk

2017-05-25 Thread Hongbo Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024323#comment-16024323
 ] 

Hongbo Xu commented on HADOOP-11829:


I'm sorry, I cannot put the implementation code online.
But it is very easy: just rebuild a big Bloom filter whose vector size type is 
long, and split the bit data into several files on disk.
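The long-indexed half of that idea can be sketched in memory as below: the bit vector is chunked so array indexes stay within int range even when the vector size exceeds Integer.MAX_VALUE. All names are invented for illustration, and the hash is a simple stand-in, not the 128-bit MurmurHash mentioned:

```java
/** Sketch: a Bloom filter whose vector size is a long. */
class LongBloomFilter {
    private static final int CHUNK_BITS = 1 << 30; // bits per long[] chunk
    private final long vectorSize;                 // may exceed Integer.MAX_VALUE
    private final int nbHash;
    private final long[][] bits;                   // chunked so indexes stay int-sized

    LongBloomFilter(long vectorSize, int nbHash) {
        this.vectorSize = vectorSize;
        this.nbHash = nbHash;
        int chunks = (int) ((vectorSize + CHUNK_BITS - 1) / CHUNK_BITS);
        bits = new long[chunks][];
        for (int i = 0; i < chunks; i++) {
            int chunkBits = (int) Math.min(CHUNK_BITS, vectorSize - (long) i * CHUNK_BITS);
            bits[i] = new long[(chunkBits + 63) / 64];
        }
    }

    // Double hashing derives nbHash probe positions from two base hashes.
    private long index(long h1, long h2, int i) {
        return Long.remainderUnsigned(h1 + (long) i * h2, vectorSize);
    }

    void add(byte[] key) {
        long h1 = hash(key, 0x9E3779B97F4A7C15L);
        long h2 = hash(key, 0xC2B2AE3D27D4EB4FL) | 1L;
        for (int i = 0; i < nbHash; i++) {
            long bit = index(h1, h2, i);
            bits[(int) (bit / CHUNK_BITS)][(int) ((bit % CHUNK_BITS) / 64)]
                |= 1L << (bit % 64);
        }
    }

    boolean mightContain(byte[] key) {
        long h1 = hash(key, 0x9E3779B97F4A7C15L);
        long h2 = hash(key, 0xC2B2AE3D27D4EB4FL) | 1L;
        for (int i = 0; i < nbHash; i++) {
            long bit = index(h1, h2, i);
            if ((bits[(int) (bit / CHUNK_BITS)][(int) ((bit % CHUNK_BITS) / 64)]
                 & (1L << (bit % 64))) == 0) {
                return false;
            }
        }
        return true;
    }

    // FNV-style mixing as a placeholder for a real 128-bit hash.
    private static long hash(byte[] key, long seed) {
        long h = seed;
        for (byte b : key) {
            h = (h ^ b) * 0x100000001B3L;
        }
        return h;
    }
}
```

Persisting each `long[]` chunk to its own numbered file would then give the disk-backed variant described in the issue.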

> Improve the vector size of Bloom Filter from int to long, and storage from 
> memory to disk
> -
>
> Key: HADOOP-11829
> URL: https://issues.apache.org/jira/browse/HADOOP-11829
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Hongbo Xu
>Assignee: Hongbo Xu
>Priority: Minor
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> org.apache.hadoop.util.bloom.BloomFilter(int vectorSize, int nbHash, int 
> hashType) 
> This filter can insert at most about 900 million objects when the false 
> positive probability is 0.0001, and it needs 2.1 GB of RAM.
> In my project, I needed to build a filter with a capacity of 2 billion, which 
> needs 4.7 GB of RAM; the vector size is 38340233509, out of the range of int, 
> and I do not have that much RAM. So I rebuilt a big Bloom filter whose vector 
> size type is long, split the bit data into several files on disk, and 
> distributed the files to worker nodes; the performance is very good.
> I think I can contribute this code to Hadoop Common, along with a 128-bit 
> hash function (MurmurHash).






[jira] [Commented] (HADOOP-13437) KMS should reload whitelist and default key ACLs when hot-reloading

2017-05-25 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024322#comment-16024322
 ] 

Xiao Chen commented on HADOOP-13437:


Thanks [~shahrs87] for the ping and [~kihwal] for the backport. Sorry I was on 
leave until today.

Mixed feelings about backporting to anything earlier than branch-2 unless it's 
critical/blocker, but I guess it depends on each fix. On the flip side, below 
are the jiras fixed recently regarding KMS: 
HADOOP-12559.
HADOOP-11722.
HADOOP-13251.
HADOOP-13155.
YARN-5048. 
YARN-3055.
HADOOP-13487.
HADOOP-13255.
HADOOP-13132.
HADOOP-12659.
HADOOP-13381.
HADOOP-13437.
HADOOP-12682.
HADOOP-12901.
HADOOP-13638.
HADOOP-12453.
HADOOP-13838.
HADOOP-8751. 

There were also some encryption-related fixes, but I think checking the 
history of the TestEncryptionZones (and similar) classes should show them.
I didn't manage to go through each jira regarding 2.8/2.7/2.6 inclusion, but 
feel free to ping me on a jira if you think it should be backported.

Hope this helps.

> KMS should reload whitelist and default key ACLs when hot-reloading
> ---
>
> Key: HADOOP-13437
> URL: https://issues.apache.org/jira/browse/HADOOP-13437
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 2.9.0, 3.0.0-alpha1, 2.8.2
>
> Attachments: HADOOP-13437.01.patch, HADOOP-13437.02.patch, 
> HADOOP-13437.03.patch, HADOOP-13437.04.patch, HADOOP-13437.05.patch
>
>
> When hot-reloading, {{KMSACLs#setKeyACLs}} ignores whitelist and default key 
> entries if they're present in memory.
> We should reload them, hot-reload and cold-start should not have any 
> difference in behavior.
> Credit to [~dilaver] for finding this.






[jira] [Commented] (HADOOP-14180) FileSystem contract tests to replace JUnit 3 with 4

2017-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024311#comment-16024311
 ] 

Hudson commented on HADOOP-14180:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11778/])
HADOOP-14180. FileSystem contract tests to replace JUnit 3 with 4. (aajisaka: 
rev 6a52b5e14495c5b2e0257aec65e61acd43aef309)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsFileSystemContract.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
* (edit) 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlFileSystemContractLive.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/NativeS3FileSystemContractBaseTest.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemContractLive.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemContractEmulator.java
* (edit) 
hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemContract.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemContractPageBlobLive.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFileSystemContract.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSFileSystemContract.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemContractMocked.java


> FileSystem contract tests to replace JUnit 3 with 4
> ---
>
> Key: HADOOP-14180
> URL: https://issues.apache.org/jira/browse/HADOOP-14180
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
>  Labels: test
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14180.000.patch, HADOOP-14180.001.patch, 
> HADOOP-14180.002.patch, HADOOP-14180.003.patch
>
>
> This is from discussion in [HADOOP-14170], as Steve commented:
> {quote}
> ...it's time to move this to JUnit 4, annotate all tests with @test, and make 
> the test cases skip if they don't have the test FS defined. JUnit 3 doesn't 
> support Assume, so when I do test runs without the s3n or s3 fs specced, I 
> get lots of errors I just ignore.
> ...Move to Junit 4, and, in our own code, find everywhere we've subclassed a 
> method to make the test a no-op, and insert an Assume.assumeTrue(false) in 
> there so they skip properly.
> {quote}






[jira] [Updated] (HADOOP-14180) FileSystem contract tests to replace JUnit 3 with 4

2017-05-25 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14180:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha3
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~xiaobingo] for the contribution and thanks 
[~liuml07] and [~ste...@apache.org] for the reviews.

> FileSystem contract tests to replace JUnit 3 with 4
> ---
>
> Key: HADOOP-14180
> URL: https://issues.apache.org/jira/browse/HADOOP-14180
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
>  Labels: test
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14180.000.patch, HADOOP-14180.001.patch, 
> HADOOP-14180.002.patch, HADOOP-14180.003.patch
>
>
> This is from discussion in [HADOOP-14170], as Steve commented:
> {quote}
> ...it's time to move this to JUnit 4, annotate all tests with @test, and make 
> the test cases skip if they don't have the test FS defined. JUnit 3 doesn't 
> support Assume, so when I do test runs without the s3n or s3 fs specced, I 
> get lots of errors I just ignore.
> ...Move to Junit 4, and, in our own code, find everywhere we've subclassed a 
> method to make the test a no-op, and insert an Assume.assumeTrue(false) in 
> there so they skip properly.
> {quote}






[jira] [Commented] (HADOOP-14180) FileSystem contract tests to replace JUnit 3 with 4

2017-05-25 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024285#comment-16024285
 ] 

Akira Ajisaka commented on HADOOP-14180:


+1, the test failures are not related to the patch.

> FileSystem contract tests to replace JUnit 3 with 4
> ---
>
> Key: HADOOP-14180
> URL: https://issues.apache.org/jira/browse/HADOOP-14180
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
>  Labels: test
> Attachments: HADOOP-14180.000.patch, HADOOP-14180.001.patch, 
> HADOOP-14180.002.patch, HADOOP-14180.003.patch
>
>
> This is from discussion in [HADOOP-14170], as Steve commented:
> {quote}
> ...it's time to move this to JUnit 4, annotate all tests with @test, and make 
> the test cases skip if they don't have the test FS defined. JUnit 3 doesn't 
> support Assume, so when I do test runs without the s3n or s3 fs specced, I 
> get lots of errors I just ignore.
> ...Move to Junit 4, and, in our own code, find everywhere we've subclassed a 
> method to make the test a no-op, and insert an Assume.assumeTrue(false) in 
> there so they skip properly.
> {quote}






[jira] [Commented] (HADOOP-11829) Improve the vector size of Bloom Filter from int to long, and storage from memory to disk

2017-05-25 Thread Yonatan Gottesman (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024258#comment-16024258
 ] 

Yonatan Gottesman commented on HADOOP-11829:


Hi, this is something I might also need.
Can I find your code somewhere online?

Thanks

> Improve the vector size of Bloom Filter from int to long, and storage from 
> memory to disk
> -
>
> Key: HADOOP-11829
> URL: https://issues.apache.org/jira/browse/HADOOP-11829
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Hongbo Xu
>Assignee: Hongbo Xu
>Priority: Minor
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> org.apache.hadoop.util.bloom.BloomFilter(int vectorSize, int nbHash, int 
> hashType) 
> This filter can insert at most about 900 million objects when the false 
> positive probability is 0.0001, and it needs 2.1 GB of RAM.
> In my project, I needed to build a filter with a capacity of 2 billion, which 
> needs 4.7 GB of RAM; the vector size is 38340233509, out of the range of int, 
> and I do not have that much RAM. So I rebuilt a big Bloom filter whose vector 
> size type is long, split the bit data into several files on disk, and 
> distributed the files to worker nodes; the performance is very good.
> I think I can contribute this code to Hadoop Common, along with a 128-bit 
> hash function (MurmurHash).


