[jira] [Assigned] (HADOOP-12760) sun.misc.Cleaner has moved to a new location in OpenJDK 9

2016-09-14 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-12760:
--

Assignee: Akira Ajisaka

> sun.misc.Cleaner has moved to a new location in OpenJDK 9
> -
>
> Key: HADOOP-12760
> URL: https://issues.apache.org/jira/browse/HADOOP-12760
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Hegarty
>Assignee: Akira Ajisaka
>Priority: Minor
>
> This is a heads-up: there are upcoming changes in JDK 9 that will require, at 
> least, a small update to org.apache.hadoop.crypto.CryptoStreamUtils & 
> org.apache.hadoop.io.nativeio.NativeIO.
> OpenJDK issue no. 8148117: "Move sun.misc.Cleaner to jdk.internal.ref" [1], 
> will move the Cleaner class from sun.misc to jdk.internal.ref. There is 
> ongoing discussion about the possibility of providing a public supported API, 
> maybe in the JDK 9 timeframe, for releasing NIO direct buffer native memory, 
> see the core-libs-dev mail thread [2]. At the very least CryptoStreamUtils & 
> NativeIO [3] should be updated to have knowledge of the new location of the 
> JDK Cleaner.
> [1] https://bugs.openjdk.java.net/browse/JDK-8148117
> [2] 
> http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/038243.html
> [3] https://github.com/apache/hadoop/search?utf8=✓&q=sun.misc.Cleaner
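For illustration only (this is not the eventual Hadoop patch): one common way to stay compatible across the move is to locate the cleaner reflectively, preferring the JDK 9+ sun.misc.Unsafe#invokeCleaner entry point and falling back to the JDK 8 DirectBuffer#cleaner() path. The JDK class, field, and method names used below are standard; the CleanerCompat wrapper itself is hypothetical.

```java
// Hypothetical helper (CleanerCompat is not a Hadoop class): free a
// direct ByteBuffer's native memory with no compile-time dependency on
// sun.misc.Cleaner, so the same code loads on JDK 8 and JDK 9+.
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.nio.ByteBuffer;

public class CleanerCompat {
  public static boolean freeDirectBuffer(ByteBuffer buf) {
    if (!buf.isDirect()) {
      return false;  // heap buffers have no native memory to release
    }
    try {
      // JDK 9+: sun.misc.Unsafe.invokeCleaner(ByteBuffer) wraps the
      // relocated jdk.internal.ref.Cleaner for us.
      Class<?> unsafeClass = Class.forName("sun.misc.Unsafe");
      Field theUnsafe = unsafeClass.getDeclaredField("theUnsafe");
      theUnsafe.setAccessible(true);
      Method invokeCleaner =
          unsafeClass.getMethod("invokeCleaner", ByteBuffer.class);
      invokeCleaner.invoke(theUnsafe.get(null), buf);
      return true;
    } catch (NoSuchMethodException e) {
      // JDK 8: no invokeCleaner; fall through to the old path.
    } catch (ReflectiveOperationException e) {
      return false;
    }
    try {
      // JDK 8: ((sun.nio.ch.DirectBuffer) buf).cleaner().clean(),
      // done reflectively so this still compiles on newer JDKs.
      Method cleanerMethod = buf.getClass().getMethod("cleaner");
      cleanerMethod.setAccessible(true);
      Object cleaner = cleanerMethod.invoke(buf);
      cleaner.getClass().getMethod("clean").invoke(cleaner);
      return true;
    } catch (ReflectiveOperationException e) {
      return false;
    }
  }
}
```

The same reflective-lookup pattern is what lets CryptoStreamUtils and NativeIO probe for the new location without breaking on older JDKs.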



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13434) Add quoting to Shell class

2016-09-14 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-13434:
-
Fix Version/s: 2.6.5

Cherry-picked it to 2.6.5 (trivial).

> Add quoting to Shell class
> --
>
> Key: HADOOP-13434
> URL: https://issues.apache.org/jira/browse/HADOOP-13434
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Fix For: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>
> Attachments: HADOOP-13434-branch-2.7.01.patch, HADOOP-13434.patch, 
> HADOOP-13434.patch, HADOOP-13434.patch
>
>
> The Shell class makes assumptions that the parameters won't have spaces or 
> other special characters, even when it invokes bash.
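A minimal sketch of the standard defense, assuming nothing about the actual patch: when handing arbitrary arguments to {{bash -c}}, wrap each one in single quotes and escape any embedded single quote as {{'\''}}. The {{ShellQuote}} class and method name below are hypothetical.

```java
// Hypothetical illustration (not the actual HADOOP-13434 patch): POSIX
// single-quote escaping makes an arbitrary string safe inside a shell
// command line.
public class ShellQuote {
  public static String quote(String arg) {
    // For an embedded single quote: close the quote, emit an escaped
    // quote (\'), then reopen the quote.
    return "'" + arg.replace("'", "'\\''") + "'";
  }
}
```

With this, {{quote("has space")}} yields {{'has space'}}, which bash treats as a single word regardless of spaces or metacharacters.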






[jira] [Updated] (HADOOP-13269) Define resource type and tokenidentifier

2016-09-14 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-13269:

Summary: Define resource type and tokenidentifier  (was: Define RPC 
resource token for HDFS)

> Define resource type and tokenidentifier
> 
>
> Key: HADOOP-13269
> URL: https://issues.apache.org/jira/browse/HADOOP-13269
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>
> Namenode: RPC handler usage percentage
> Datanode: Aggregate DN bandwidth
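A minimal sketch of what the two resource types named above might look like as a Java enum; the constant names are made up, and the TokenIdentifier wiring belongs to the actual sub-task patch.

```java
// Hypothetical names; the real definitions belong to the sub-task patch.
public enum ResourceType {
  NAMENODE_RPC_HANDLER_USAGE_PERCENT,  // Namenode: RPC handler usage percentage
  DATANODE_AGGREGATE_BANDWIDTH         // Datanode: aggregate DN bandwidth
}
```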






[jira] [Commented] (HADOOP-13448) S3Guard: Define MetadataStore interface.

2016-09-14 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491760#comment-15491760
 ] 

Mingliang Liu commented on HADOOP-13448:


{quote}
I can make the change as part of the v2 patch for HADOOP-13452
{quote}
That sounds good. I'm not blocked anyway, and will bring you all into the discussion for sure. Basically I like what Chris designed in his prototype, with minor changes along the way. The major effort now is starting up a DynamoDBLocal in the unit test. Sadly, DynamoDBLocal is not well maintained (ever since I left Amazon :P).

> S3Guard: Define MetadataStore interface.
> 
>
> Key: HADOOP-13448
> URL: https://issues.apache.org/jira/browse/HADOOP-13448
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-13448-HADOOP-13345.001.patch, 
> HADOOP-13448-HADOOP-13345.002.patch, HADOOP-13448-HADOOP-13345.003.patch, 
> HADOOP-13448-HADOOP-13345.004.patch, HADOOP-13448-HADOOP-13345.005.patch
>
>
> Define the common interface for metadata store operations.  This is the 
> interface that any metadata back-end must implement in order to integrate 
> with S3Guard.
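The real interface lives in the attached patches; purely to illustrate the shape of the contract being discussed, here is a stripped-down sketch with a trivial in-memory implementation. All names and signatures below are illustrative stand-ins (String paths instead of Hadoop's Path/FileStatus types).

```java
// Hypothetical sketch of the contract under discussion, not the code in
// the attached patches.
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

interface MetadataStore {
  PathMetadata get(String path) throws IOException;
  void put(PathMetadata meta) throws IOException;
  void delete(String path) throws IOException;
}

// Stand-in for the file metadata a back-end would record.
class PathMetadata {
  final String path;
  final long length;
  PathMetadata(String path, long length) {
    this.path = path;
    this.length = length;
  }
}

// Trivial in-memory implementation, in the spirit of LocalMetadataStore.
class InMemoryMetadataStore implements MetadataStore {
  private final Map<String, PathMetadata> entries = new HashMap<>();
  public PathMetadata get(String path) { return entries.get(path); }
  public void put(PathMetadata meta) { entries.put(meta.path, meta); }
  public void delete(String path) { entries.remove(path); }
}
```

The point of keeping the interface this narrow is that a DynamoDB-backed implementation and a local cache can both satisfy it without the interface depending on either back-end.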






[jira] [Commented] (HADOOP-13448) S3Guard: Define MetadataStore interface.

2016-09-14 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491745#comment-15491745
 ] 

Aaron Fabbri commented on HADOOP-13448:
---

I can make the change as part of the v2 patch for HADOOP-13452.  I'll try to 
get that out soon so we don't disrupt your ongoing DynamoDB work too much.  
Does that work for you?

Excited to see your DynamoDB implementation, by the way.  Shout if [~eddyxu] or I can help.

Yes, for the LocalMetadataStore unit test I will probably use a stubbed 
FileSystem or RawLocalFilesystem.  I only need it to get current working dir so 
far (for making paths qualified).  It will be nice to have some basic test 
coverage that does not require S3A integration test setup.







[jira] [Commented] (HADOOP-13448) S3Guard: Define MetadataStore interface.

2016-09-14 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491728#comment-15491728
 ] 

Mingliang Liu commented on HADOOP-13448:


Sure, it's more flexible, though I guess we're not exercising RawLocalFileSystem in the current S3A tests yet?

Should we file a JIRA for this? Or I can change this in [HADOOP-13449], or you 
change this in [HADOOP-13452] (or both). Thanks.







[jira] [Commented] (HADOOP-13448) S3Guard: Define MetadataStore interface.

2016-09-14 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491708#comment-15491708
 ] 

Mingliang Liu commented on HADOOP-13448:


Precisely. See my comment above. Pinging [~cnauroth] for a third opinion.







[jira] [Commented] (HADOOP-13448) S3Guard: Define MetadataStore interface.

2016-09-14 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491710#comment-15491710
 ] 

Aaron Fabbri commented on HADOOP-13448:
---

Ok, thanks.  Another thing to consider: I shouldn't have to create or mock an S3AFileSystem to test LocalMetadataStore.  I should be able to just use something like RawLocalFileSystem, right?  Then it can be a unit test instead of an integration test.







[jira] [Commented] (HADOOP-13448) S3Guard: Define MetadataStore interface.

2016-09-14 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491701#comment-15491701
 ] 

Aaron Fabbri commented on HADOOP-13448:
---

i.e. 

{code}
public class DynamoDBMetadataStore implements MetadataStore {
  ...
  public void initialize(FileSystem fs) throws IOException {
    if (!(fs instanceof S3AFileSystem)) {
      throw new IOException(
          "DynamoDBMetadataStore only supports the S3A filesystem.");
    }
  ...
{code}







[jira] [Commented] (HADOOP-13448) S3Guard: Define MetadataStore interface.

2016-09-14 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491699#comment-15491699
 ] 

Mingliang Liu commented on HADOOP-13448:


Sorry, I never thought outside of the S3A box. It looks a bit over-designed to 
consolidate the efforts of improving consistency for Swift/WASB here, but I 
think the proposal is acceptable. One approach is that, in DynamoDBMetadataStore, 
we always assume the FileSystem is actually an S3AFileSystem; at the very least 
we can cast to that type.







[jira] [Commented] (HADOOP-13448) S3Guard: Define MetadataStore interface.

2016-09-14 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491679#comment-15491679
 ] 

Mingliang Liu commented on HADOOP-13448:


Are we moving S3Guard out of the S3A module someday? If not, the class-level 
circular dependency seems OK to me.







[jira] [Commented] (HADOOP-13448) S3Guard: Define MetadataStore interface.

2016-09-14 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491681#comment-15491681
 ] 

Aaron Fabbri commented on HADOOP-13448:
---

Just saw your followup comment.

How about this idea:  We keep MetadataStore independent of particular 
implementations' dependencies, but add extra parameters as needed for 
implementations.

e.g. MetadataStore could eventually be a separate submodule in Hadoop Common, 
with DynamoDBMetadataStore living in hadoop-aws and taking additional init 
parameters for the S3 client, etc.

Then, for example, Swift/WASB/ADLS could also have implementations that require 
additional dependencies (e.g. AzureDBMetadataStore). 

Another way of thinking about this: the MetadataStore interface only includes 
things that all implementations have in common, instead of providing a union of 
everything every implementation uses.








[jira] [Commented] (HADOOP-13448) S3Guard: Define MetadataStore interface.

2016-09-14 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491663#comment-15491663
 ] 

Aaron Fabbri commented on HADOOP-13448:
---

[~liuml07] can you elaborate on why you prefer S3AFileSystem?

Yes, it is feasible to extract this work because there are no dependencies on 
S3A as the code stands today, besides the fact that it happens to live in that 
project.

Also, from a layering perspective, I'd prefer S3A --depends-on--> MetadataStore 
rather than a circular dependency. 









[jira] [Commented] (HADOOP-13448) S3Guard: Define MetadataStore interface.

2016-09-14 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491653#comment-15491653
 ] 

Chris Nauroth commented on HADOOP-13448:


Yes, I forgot about this point, which I encountered myself in the prototype 
patch.  Thank you, Mingliang.







[jira] [Commented] (HADOOP-13448) S3Guard: Define MetadataStore interface.

2016-09-14 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491649#comment-15491649
 ] 

Mingliang Liu commented on HADOOP-13448:


Another point is that DynamoDBMetadataStore's life would be much easier if an 
S3AFileSystem were provided, giving it e.g. the S3 client configuration, region, 
etc. Actually, in my in-progress patch for HADOOP-13449, I already assume an 
S3AFileSystem object.







[jira] [Commented] (HADOOP-13448) S3Guard: Define MetadataStore interface.

2016-09-14 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491636#comment-15491636
 ] 

Mingliang Liu commented on HADOOP-13448:


I'd still prefer S3AFileSystem.

{quote}
envisioning the directory entry caching features potentially useful to other 
clients
{quote}
Is it feasible to extract the common code out of S3Guard? Thanks.







[jira] [Commented] (HADOOP-13448) S3Guard: Define MetadataStore interface.

2016-09-14 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491609#comment-15491609
 ] 

Chris Nauroth commented on HADOOP-13448:


[~fabbri], yes, that sounds like a good idea.  If you pass a {{FileSystem}} 
parameter, then I think you could drop the {{Configuration}} parameter, because 
every {{FileSystem}} is a {{Configured}}, and therefore it carries the conf 
with it.
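To illustrate the point with stand-in types (the real classes are org.apache.hadoop.conf.Configured/Configuration and org.apache.hadoop.fs.FileSystem; everything below is a simplified mock): because every FileSystem carries its Configuration, an {{initialize(FileSystem)}} signature subsumes a separate Configuration parameter.

```java
// Simplified mocks of the Hadoop types, for illustration only.
class Configuration {
  String get(String key, String dflt) { return dflt; }
}

// In Hadoop, Configured is the base class that holds a Configuration.
class Configured {
  private final Configuration conf;
  Configured(Configuration conf) { this.conf = conf; }
  Configuration getConf() { return conf; }
}

class FileSystem extends Configured {
  FileSystem(Configuration conf) { super(conf); }
}

// A metadata store needs no separate Configuration parameter: the
// FileSystem it is initialized with already carries one.
class LocalMetadataStore {
  Configuration conf;
  void initialize(FileSystem fs) {
    this.conf = fs.getConf();
  }
}
```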







[jira] [Work started] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-09-14 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-13449 started by Mingliang Liu.
--
> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
>
> Provide an implementation of the metadata store backed by DynamoDB.






[jira] [Commented] (HADOOP-13448) S3Guard: Define MetadataStore interface.

2016-09-14 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491603#comment-15491603
 ] 

Aaron Fabbri commented on HADOOP-13448:
---

(Posting here since it is related, but I'd probably add the change as part of 
HADOOP-13452 or a subsequent patch.)

[~cnauroth] how do you feel about adding a FileSystem parameter to 
MetadataStore#initialize()?  The motivation is that I'm adding path 
qualification / normalization to the LocalMetadataStore, and finding I could 
use the associated FileSystem's working directory to make sure it is an 
absolute path.

I'm still trying not to add S3A dependencies unless strictly needed 
(envisioning the directory entry caching features potentially useful to other 
clients).








[jira] [Assigned] (HADOOP-13454) S3Guard: Provide custom FileSystem Statistics.

2016-09-14 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HADOOP-13454:
--

Assignee: Mingliang Liu

> S3Guard: Provide custom FileSystem Statistics.
> --
>
> Key: HADOOP-13454
> URL: https://issues.apache.org/jira/browse/HADOOP-13454
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
>
> Provide custom {{FileSystem}} {{Statistics}} with information about the 
> internal operational details of S3Guard.






[jira] [Created] (HADOOP-13615) Convert uses of AtomicLong for counter metrics to LongAdder

2016-09-14 Thread Andrew Wang (JIRA)
Andrew Wang created HADOOP-13615:


 Summary: Convert uses of AtomicLong for counter metrics to 
LongAdder
 Key: HADOOP-13615
 URL: https://issues.apache.org/jira/browse/HADOOP-13615
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Affects Versions: 3.0.0-alpha1
Reporter: Andrew Wang


LongAdder (available in JDK 8) can provide much better performance than 
AtomicLong under contention, since it stripes updates across per-thread cells 
under the hood.

We should consider switching our uses of AtomicLong and friends over to 
LongAdder.

If we want to target JDK 7, we can also pull in the implementation, since it's 
pure Java (public domain):

http://gee.cs.oswego.edu/cgi-bin/viewcvs.cgi/jsr166/src/jsr166e/LongAdder.java?view=co
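A small sketch of the proposed substitution; the counter names are made up, but both APIs are standard java.util.concurrent.atomic. LongAdder wins when increments are frequent and reads are rare (the typical metrics pattern), because writers hit per-thread cells instead of all CASing one memory location; {{sum()}} aggregates the cells at read time.

```java
// Illustration of the proposed change (hypothetical counter names).
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;

public class CounterStyles {
  // Before: every increment CASes the same memory location.
  static final AtomicLong bytesReadOld = new AtomicLong();
  // After: increments hit per-thread cells; sum() aggregates on read.
  static final LongAdder bytesReadNew = new LongAdder();

  public static void record(long n) {
    bytesReadOld.addAndGet(n);
    bytesReadNew.add(n);
  }
}
```

Note that LongAdder's {{sum()}} is not an atomic snapshot under concurrent updates, which is acceptable for monitoring counters but worth checking for any metric that feeds exact accounting.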






[jira] [Comment Edited] (HADOOP-13344) Add option to exclude Hadoop's SLF4J binding

2016-09-14 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491220#comment-15491220
 ] 

Allen Wittenauer edited comment on HADOOP-13344 at 9/14/16 7:12 PM:


Two problems:

#1:

{code}
if [[ -n "${HADOOP_USE_BUILTIN_SLF4J_BINDING}:-true" ]];
{code}

This will always be true because there will always be a value for -n to 
evaluate. You can test this on the command line:

{code}
$ echo "${HADOOP_USE_BUILTIN_SLF4J_BINDING}:-true"
:-true
$ HADOOP_USE_BUILTIN_SLF4J_BINDING=false
$ echo "${HADOOP_USE_BUILTIN_SLF4J_BINDING}:-true"
false:-true
{code}

-n is probably the wrong thing to use here.  I'd recommend reconstructing so 
that you compare HADOOP_USE_BUILTIN_SLF4J_BINDING against an explicit true or 
false setting.

#2:

There are unit tests for the shell scripts. In this case, 
hadoop-common-project/hadoop-common/src/test/scripts/hadoop_add_common_to_classpath.bats
 failed because the value generated by hadoop_add_common_to_classpath is now 
generating a different value than what was previously expected due to the extra 
classpath being present.  The test case should be updated to test both settings 
of HADOOP_USE_BUILTIN_SLF4J_BINDING.


was (Author: aw):
Two problems:

#1:

{code}
if [[ -n "${HADOOP_USE_BUILTIN_SLF4J_BINDING}:-true" ]];
{code}

This will always be true because there will always be a value for -n to 
evaluate. You can test this on the command line:

{code}
$ echo "${HADOOP_USE_BUILTIN_SLF4J_BINDING}:-true"
:-true
{code}

-n is probably the wrong thing to use here.  I'd recommend reconstructing so 
that you compare HADOOP_USE_BUILTIN_SLF4J_BINDING against an explicit true or 
false setting.

#2:

There are unit tests for the shell scripts. In this case, 
hadoop-common-project/hadoop-common/src/test/scripts/hadoop_add_common_to_classpath.bats
 failed because the value generated by hadoop_add_common_to_classpath is now 
generating a different value than what was previously expected due to the extra 
classpath being present.  The test case should be updated to test both settings 
of HADOOP_USE_BUILTIN_SLF4J_BINDING.

> Add option to exclude Hadoop's SLF4J binding
> 
>
> Key: HADOOP-13344
> URL: https://issues.apache.org/jira/browse/HADOOP-13344
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: bin, scripts
>Affects Versions: 2.8.0, 2.7.2
>Reporter: Thomas Poepping
>Assignee: Thomas Poepping
>  Labels: patch
> Attachments: HADOOP-13344.01.patch, HADOOP-13344.patch
>
>
> If another application that uses the Hadoop classpath brings in its own SLF4J 
> binding for logging, and that jar is not the exact same as the one brought in 
> by Hadoop, then there will be a conflict between logging jars between the two 
> classpaths. This patch introduces an optional setting to remove Hadoop's 
> SLF4J binding from the classpath, to get rid of this problem.
> This patch should be applied to 2.8.0, as bin/ and hadoop-config.sh structure 
> has been changed in 3.0.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13344) Add option to exclude Hadoop's SLF4J binding

2016-09-14 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491220#comment-15491220
 ] 

Allen Wittenauer commented on HADOOP-13344:
---

Two problems:

#1:

{code}
if [[ -n "${HADOOP_USE_BUILTIN_SLF4J_BINDING}:-true" ]];
{code}

This will always be true because there will always be a value for -n to 
evaluate. You can test this on the command line:

{code}
$ echo "${HADOOP_USE_BUILTIN_SLF4J_BINDING}:-true"
:-true
{code}

-n is probably the wrong thing to use here.  I'd recommend reconstructing so 
that you compare HADOOP_USE_BUILTIN_SLF4J_BINDING against an explicit true or 
false setting.
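The expansion bug and the suggested fix can be seen directly in bash; a minimal sketch (the variable name is from the patch, everything else is illustrative):

```shell
#!/usr/bin/env bash
# Broken form from the patch: the ":-true" default sits *outside* the
# braces, so it is literal text appended to the expansion. The resulting
# string is never empty, so [[ -n ... ]] always succeeds.
unset HADOOP_USE_BUILTIN_SLF4J_BINDING
echo "${HADOOP_USE_BUILTIN_SLF4J_BINDING}:-true"      # prints ":-true"

# Fixed form: the default belongs inside the braces, and the value is
# compared against an explicit "true" rather than tested with -n.
if [[ "${HADOOP_USE_BUILTIN_SLF4J_BINDING:-true}" == "true" ]]; then
  echo "keeping Hadoop's SLF4J binding on the classpath"
fi
```

With the variable unset (or set to "true") the binding stays on the classpath; setting it to "false" becomes the single, explicit opt-out.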

#2:

There are unit tests for the shell scripts. In this case, 
hadoop-common-project/hadoop-common/src/test/scripts/hadoop_add_common_to_classpath.bats
 failed because the value generated by hadoop_add_common_to_classpath is now 
generating a different value than what was previously expected due to the extra 
classpath being present.  The test case should be updated to test both settings 
of HADOOP_USE_BUILTIN_SLF4J_BINDING.

> Add option to exclude Hadoop's SLF4J binding
> 
>
> Key: HADOOP-13344
> URL: https://issues.apache.org/jira/browse/HADOOP-13344
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: bin, scripts
>Affects Versions: 2.8.0, 2.7.2
>Reporter: Thomas Poepping
>Assignee: Thomas Poepping
>  Labels: patch
> Attachments: HADOOP-13344.01.patch, HADOOP-13344.patch
>
>
> If another application that uses the Hadoop classpath brings in its own SLF4J 
> binding for logging, and that jar is not the exact same as the one brought in 
> by Hadoop, then there will be a conflict between logging jars between the two 
> classpaths. This patch introduces an optional setting to remove Hadoop's 
> SLF4J binding from the classpath, to get rid of this problem.
> This patch should be applied to 2.8.0, as bin/ and hadoop-config.sh structure 
> has been changed in 3.0.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13614) Purge some superfluous/obsolete S3 FS tests that are slowing test runs down

2016-09-14 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13614:
---

 Summary: Purge some superfluous/obsolete S3 FS tests that are 
slowing test runs down
 Key: HADOOP-13614
 URL: https://issues.apache.org/jira/browse/HADOOP-13614
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 2.9.0
Reporter: Steve Loughran
Priority: Minor


Some of the slow test cases contain tests that are now obsoleted by newer ones. 
For example, {{ITestS3ADeleteManyFiles}} has the test case {{testOpenCreate()}}, 
which writes and then reads back files of up to 25 MB.

Have a look at which of the s3a tests are taking the most time, review them to 
see whether newer tests have superseded the slow ones, and cut the redundant 
ones where appropriate.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11552) Allow handoff on the server side for RPC requests

2016-09-14 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491196#comment-15491196
 ] 

Daryn Sharp commented on HADOOP-11552:
--

The patch is interesting (I planted the idea for this type of functionality).  
HADOOP-10300 is a bit different: it allows for a return value, but not a 
dynamic response.  The IPC processing occurs exactly as normal, down through 
the engine, back up, response encoded.  The difference is the ability to tell 
the IPC layer later, "hold on, don't send the response, there's another 
precondition that must be satisfied".  Edit logging for HDFS is the motivating 
example.

This lets the processing return nothing, allowing something else to later 
encode an arbitrary response.

The patch needs extensive rebasing, but there are some things we'll need to be 
careful about:
* ensure encrypted SASL works correctly; there are subtle ordering issues.
* consider how to deal with the static Server methods that depend on a 
thread-local for the call.
* work out how to make the RPC metrics accurate; otherwise processing time 
becomes meaningless.

> Allow handoff on the server side for RPC requests
> -
>
> Key: HADOOP-11552
> URL: https://issues.apache.org/jira/browse/HADOOP-11552
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11552.1.wip.txt, HADOOP-11552.2.txt, 
> HADOOP-11552.3.txt, HADOOP-11552.3.txt, HADOOP-11552.4.txt
>
>
> An RPC server handler thread is tied up for each incoming RPC request. This 
> isn't ideal, since this essentially implies that RPC operations should be 
> short lived, and most operations which could take time end up falling back to 
> a polling mechanism.
> Some use cases where this is useful:
> - YARN submitApplication, which currently submits, then polls to check 
> whether the application is accepted while the submit operation is written 
> out to storage. This can be collapsed into a single call.
> - YARN allocate: requests and allocations use the same protocol, and new 
> allocations are received via polling.
> The allocate protocol could be split into a request/heartbeat along with an 
> 'awaitResponse'. The request/heartbeat is sent only when there's a request or 
> on a much longer heartbeat interval. awaitResponse is always left active with 
> the RM, and returns the moment something is available.
> MapReduce/Tez task-to-AM communication is another example of this pattern.
> The same pattern of splitting calls can be used for other protocols as well. 
> This should serve to improve latency, as well as reduce network traffic, since 
> the keep-alive heartbeat can be sent less frequently.
> I believe there are some cases in HDFS as well, where the DN is told to 
> perform some operations when it heartbeats into the NN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13344) Add option to exclude Hadoop's SLF4J binding

2016-09-14 Thread Thomas Poepping (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491179#comment-15491179
 ] 

Thomas Poepping commented on HADOOP-13344:
--

Can someone point to what failed in the previous build? I'm not sure what went 
wrong.

> Add option to exclude Hadoop's SLF4J binding
> 
>
> Key: HADOOP-13344
> URL: https://issues.apache.org/jira/browse/HADOOP-13344
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: bin, scripts
>Affects Versions: 2.8.0, 2.7.2
>Reporter: Thomas Poepping
>Assignee: Thomas Poepping
>  Labels: patch
> Attachments: HADOOP-13344.01.patch, HADOOP-13344.patch
>
>
> If another application that uses the Hadoop classpath brings in its own SLF4J 
> binding for logging, and that jar is not the exact same as the one brought in 
> by Hadoop, then there will be a conflict between logging jars between the two 
> classpaths. This patch introduces an optional setting to remove Hadoop's 
> SLF4J binding from the classpath, to get rid of this problem.
> This patch should be applied to 2.8.0, as bin/ and hadoop-config.sh structure 
> has been changed in 3.0.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13344) Add option to exclude Hadoop's SLF4J binding

2016-09-14 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491126#comment-15491126
 ] 

Allen Wittenauer commented on HADOOP-13344:
---

bq. Hm, didn't get a test run. I'll resubmit the patch?

It's becoming more and more common for the precommit-admin job, which fires off 
the job that actually does the work, to fail because builds.apache.org times 
out.

> Add option to exclude Hadoop's SLF4J binding
> 
>
> Key: HADOOP-13344
> URL: https://issues.apache.org/jira/browse/HADOOP-13344
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: bin, scripts
>Affects Versions: 2.8.0, 2.7.2
>Reporter: Thomas Poepping
>Assignee: Thomas Poepping
>  Labels: patch
> Attachments: HADOOP-13344.01.patch, HADOOP-13344.patch
>
>
> If another application that uses the Hadoop classpath brings in its own SLF4J 
> binding for logging, and that jar is not the exact same as the one brought in 
> by Hadoop, then there will be a conflict between logging jars between the two 
> classpaths. This patch introduces an optional setting to remove Hadoop's 
> SLF4J binding from the classpath, to get rid of this problem.
> This patch should be applied to 2.8.0, as bin/ and hadoop-config.sh structure 
> has been changed in 3.0.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13344) Add option to exclude Hadoop's SLF4J binding

2016-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491106#comment-15491106
 ] 

Hadoop QA commented on HADOOP-13344:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 1 new + 75 unchanged - 0 fixed = 
76 total (was 75) {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
9s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
12s{color} | {color:green} hadoop-assemblies in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m  4s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed TAP tests | hadoop_add_common_to_classpath.bats.tap |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13344 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825353/HADOOP-13344.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  shellcheck  shelldocs  |
| uname | Linux b0d429e82bd8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2a8f55a |
| Default Java | 1.8.0_101 |
| shellcheck | v0.4.4 |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10515/artifact/patchprocess/diff-patch-shellcheck.txt
 |
| TAP logs |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/10515/artifact/patchprocess/patch-hadoop-common-project_hadoop-common.tap
 |
| unit | 

[jira] [Updated] (HADOOP-13612) FileSystem static mkdirs(FS, path, permissions) to invoke FS.mkdirs(path, permissions)

2016-09-14 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13612:
---
Assignee: Steve Loughran

> FileSystem static mkdirs(FS, path, permissions) to invoke FS.mkdirs(path, 
> permissions)
> --
>
> Key: HADOOP-13612
> URL: https://issues.apache.org/jira/browse/HADOOP-13612
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13612-branch-2-001.patch
>
>
> Currently {{FileSystem}}'s static {{mkdirs(FileSystem fs, Path dir, 
> FsPermission permission)}} creates the directory in a two step operation
> {code}
> // create the directory using the default permission
> boolean result = fs.mkdirs(dir);
> // set its permission to be the supplied one
> fs.setPermission(dir, permission);
> {code}
> this isn't atomic and creates a risk of race/security conditions. *This code 
> is used in production*
> Better to simply forward to mkdirs(path, permissions).
> Is there any reason not to do that?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13344) Add option to exclude Hadoop's SLF4J binding

2016-09-14 Thread Thomas Poepping (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Poepping updated HADOOP-13344:
-
Status: Patch Available  (was: Open)

> Add option to exclude Hadoop's SLF4J binding
> 
>
> Key: HADOOP-13344
> URL: https://issues.apache.org/jira/browse/HADOOP-13344
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: bin, scripts
>Affects Versions: 2.7.2, 2.8.0
>Reporter: Thomas Poepping
>Assignee: Thomas Poepping
>  Labels: patch
> Attachments: HADOOP-13344.01.patch, HADOOP-13344.patch
>
>
> If another application that uses the Hadoop classpath brings in its own SLF4J 
> binding for logging, and that jar is not the exact same as the one brought in 
> by Hadoop, then there will be a conflict between logging jars between the two 
> classpaths. This patch introduces an optional setting to remove Hadoop's 
> SLF4J binding from the classpath, to get rid of this problem.
> This patch should be applied to 2.8.0, as bin/ and hadoop-config.sh structure 
> has been changed in 3.0.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13344) Add option to exclude Hadoop's SLF4J binding

2016-09-14 Thread Thomas Poepping (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490991#comment-15490991
 ] 

Thomas Poepping commented on HADOOP-13344:
--

Hm, didn't get a test run. I'll resubmit the patch?

> Add option to exclude Hadoop's SLF4J binding
> 
>
> Key: HADOOP-13344
> URL: https://issues.apache.org/jira/browse/HADOOP-13344
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: bin, scripts
>Affects Versions: 2.8.0, 2.7.2
>Reporter: Thomas Poepping
>Assignee: Thomas Poepping
>  Labels: patch
> Attachments: HADOOP-13344.01.patch, HADOOP-13344.patch
>
>
> If another application that uses the Hadoop classpath brings in its own SLF4J 
> binding for logging, and that jar is not the exact same as the one brought in 
> by Hadoop, then there will be a conflict between logging jars between the two 
> classpaths. This patch introduces an optional setting to remove Hadoop's 
> SLF4J binding from the classpath, to get rid of this problem.
> This patch should be applied to 2.8.0, as bin/ and hadoop-config.sh structure 
> has been changed in 3.0.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11552) Allow handoff on the server side for RPC requests

2016-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490962#comment-15490962
 ] 

Hadoop QA commented on HADOOP-11552:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
4s{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HADOOP-11552 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-11552 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12708476/HADOOP-11552.4.txt |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10514/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Allow handoff on the server side for RPC requests
> -
>
> Key: HADOOP-11552
> URL: https://issues.apache.org/jira/browse/HADOOP-11552
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11552.1.wip.txt, HADOOP-11552.2.txt, 
> HADOOP-11552.3.txt, HADOOP-11552.3.txt, HADOOP-11552.4.txt
>
>
> An RPC server handler thread is tied up for each incoming RPC request. This 
> isn't ideal, since this essentially implies that RPC operations should be 
> short lived, and most operations which could take time end up falling back to 
> a polling mechanism.
> Some use cases where this is useful:
> - YARN submitApplication, which currently submits, then polls to check 
> whether the application is accepted while the submit operation is written 
> out to storage. This can be collapsed into a single call.
> - YARN allocate: requests and allocations use the same protocol, and new 
> allocations are received via polling.
> The allocate protocol could be split into a request/heartbeat along with an 
> 'awaitResponse'. The request/heartbeat is sent only when there's a request or 
> on a much longer heartbeat interval. awaitResponse is always left active with 
> the RM, and returns the moment something is available.
> MapReduce/Tez task-to-AM communication is another example of this pattern.
> The same pattern of splitting calls can be used for other protocols as well. 
> This should serve to improve latency, as well as reduce network traffic, since 
> the keep-alive heartbeat can be sent less frequently.
> I believe there are some cases in HDFS as well, where the DN is told to 
> perform some operations when it heartbeats into the NN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11552) Allow handoff on the server side for RPC requests

2016-09-14 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490950#comment-15490950
 ] 

Jian He commented on HADOOP-11552:
--

Hi [~sseth], I looked at the patch; the approach looks good to me. Could you 
rebase it?
I'm going to use this for the relocalize API in YARN.

> Allow handoff on the server side for RPC requests
> -
>
> Key: HADOOP-11552
> URL: https://issues.apache.org/jira/browse/HADOOP-11552
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11552.1.wip.txt, HADOOP-11552.2.txt, 
> HADOOP-11552.3.txt, HADOOP-11552.3.txt, HADOOP-11552.4.txt
>
>
> An RPC server handler thread is tied up for each incoming RPC request. This 
> isn't ideal, since this essentially implies that RPC operations should be 
> short lived, and most operations which could take time end up falling back to 
> a polling mechanism.
> Some use cases where this is useful:
> - YARN submitApplication, which currently submits, then polls to check 
> whether the application is accepted while the submit operation is written 
> out to storage. This can be collapsed into a single call.
> - YARN allocate: requests and allocations use the same protocol, and new 
> allocations are received via polling.
> The allocate protocol could be split into a request/heartbeat along with an 
> 'awaitResponse'. The request/heartbeat is sent only when there's a request or 
> on a much longer heartbeat interval. awaitResponse is always left active with 
> the RM, and returns the moment something is available.
> MapReduce/Tez task-to-AM communication is another example of this pattern.
> The same pattern of splitting calls can be used for other protocols as well. 
> This should serve to improve latency, as well as reduce network traffic, since 
> the keep-alive heartbeat can be sent less frequently.
> I believe there are some cases in HDFS as well, where the DN is told to 
> perform some operations when it heartbeats into the NN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13612) FileSystem static mkdirs(FS, path, permissions) to invoke FS.mkdirs(path, permissions)

2016-09-14 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490877#comment-15490877
 ] 

Chris Nauroth commented on HADOOP-13612:


Steve, thank you for the patch.  I hear where you're coming from, but 
unfortunately, -1 on grounds of backward compatibility.

The key point is this statement from the JavaDocs of the static {{mkdirs}}:

{code}
   * The permission of the directory is set to be the provided permission as in
   * setPermission, not permission&~umask
{code}

This is what sets apart static mkdirs from member mkdirs.  If you look at, for 
example, {{DistributedFileSystem}}/{{DFSClient}}, the member mkdirs always 
applies umask on top of the passed {{FsPermission}} argument.  The semantics of 
static mkdirs is that it ignores umask.

The weaknesses of the static mkdirs (multiple RPCs/lack of atomicity) are 
unfortunate, but we can't solve them by forwarding to the member mkdirs.  We'd 
probably need to make a more complicated change, like providing a new specific 
"mkdirs-ignore-umask" member method with this default inefficient 
implementation, and optionally changing subclasses to override it for better 
performance and atomicity, or perhaps reviewing application call sites to see 
if they can somehow implement their requirements with the existing member 
mkdirs.  (Maybe they could set umask to 000 and completely take over their 
permission logic?)

BTW, the same argument applies to the static {{FileSystem#create}}.
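The umask distinction is easy to reproduce with plain POSIX tools; a sketch of the two semantics (illustrative only, not the Hadoop API; assumes GNU stat as found on Linux):

```shell
#!/usr/bin/env bash
set -e
tmp="$(mktemp -d)"
umask 022

# Member-mkdirs semantics: the requested mode is filtered through the
# umask, i.e. effective mode = permission & ~umask (0777 & ~0022 = 0755).
mkdir "${tmp}/member"

# Static-mkdirs semantics: create, then a separate setPermission-style
# chmod that applies the supplied mode verbatim, bypassing the umask.
mkdir "${tmp}/static"
chmod 777 "${tmp}/static"

stat -c '%a' "${tmp}/member"    # 755: umask applied at creation
stat -c '%a' "${tmp}/static"    # 777: umask bypassed by the second call
rm -rf "${tmp}"
```

The two-call form reaches a state the single call cannot (permissions broader than the umask allows), which is exactly why forwarding one to the other is an incompatible change.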

> FileSystem static mkdirs(FS, path, permissions) to invoke FS.mkdirs(path, 
> permissions)
> --
>
> Key: HADOOP-13612
> URL: https://issues.apache.org/jira/browse/HADOOP-13612
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
> Attachments: HADOOP-13612-branch-2-001.patch
>
>
> Currently {{FileSystem}}'s static {{mkdirs(FileSystem fs, Path dir, 
> FsPermission permission)}} creates the directory in a two step operation
> {code}
> // create the directory using the default permission
> boolean result = fs.mkdirs(dir);
> // set its permission to be the supplied one
> fs.setPermission(dir, permission);
> {code}
> this isn't atomic and creates a risk of race/security conditions. *This code 
> is used in production*
> Better to simply forward to mkdirs(path, permissions).
> is there any reason to not do that?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13611) FileSystem/s3a processDeleteOnExit to skip the exists() check

2016-09-14 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490834#comment-15490834
 ] 

Chris Nauroth commented on HADOOP-13611:


bq. That exists() check is superfluous; on S3 it adds an extra 1-4 HTTP GETs

These checks were added in HADOOP-8634 to prevent some logging noise during 
{{FileSystem#close}} if delete-on-exit tries to delete a path that doesn't 
exist.  I never saw that happen myself, so I don't know exactly which log 
messages were getting triggered.  If we revert these checks, then we need to 
check for those log messages, and if still present, come up with a different 
solution for suppressing them.

bq. This is easy to do, but low priority, as it is generally used in testing 
rather than production.

I know at least Hive and HBase use it in production code, though I believe it's 
a less critical path than the other bottlenecks we've been reviewing.
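A shell analogue of the redundancy (illustrative only; in FileSystem the saving is an HTTP round trip per path on object stores):

```shell
#!/usr/bin/env bash
# The guarded form probes for existence first, like processDeleteOnExit's
# exists() + delete(); the unguarded form relies on delete being a
# harmless no-op for a missing path. Both end in the same state; the
# guard only adds a round trip.
f="$(mktemp -u)"                  # a path that does not exist

if [ -e "$f" ]; then rm -f "$f"; fi   # guarded: extra existence check
rm -f "$f"                            # unguarded: same result, one call

[ ! -e "$f" ] && echo "gone either way"
```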

> FileSystem/s3a processDeleteOnExit to skip the exists() check
> -
>
> Key: HADOOP-13611
> URL: https://issues.apache.org/jira/browse/HADOOP-13611
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Priority: Minor
>
> If you look at {{FileSystem.processDeleteOnExit()}}, it does an exists() 
> check for each entry, before calling delete().
> That exists() check is superfluous; on S3 it adds an extra 1-4 HTTP GETs
> This could be fixed with a subclass in s3a to avoid it, but as the call is 
> superfluous in *all* filesystems, it could be removed in {{FileSystem}} and 
> so picked up by all object stores.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13169) Randomize file list in SimpleCopyListing

2016-09-14 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-13169:
--
Status: Patch Available  (was: Open)

> Randomize file list in SimpleCopyListing
> 
>
> Key: HADOOP-13169
> URL: https://issues.apache.org/jira/browse/HADOOP-13169
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13169-branch-2-001.patch, 
> HADOOP-13169-branch-2-002.patch, HADOOP-13169-branch-2-003.patch, 
> HADOOP-13169-branch-2-004.patch, HADOOP-13169-branch-2-005.patch, 
> HADOOP-13169-branch-2-006.patch, HADOOP-13169-branch-2-007.patch, 
> HADOOP-13169-branch-2-008.patch
>
>
> When copying files to S3, some mappers can hit S3 partition hotspots, 
> depending on the order of the file listing. This is most visible when data 
> is copied from a Hive warehouse with lots of partitions (e.g. date 
> partitions). In such cases, some of the tasks tend to be a lot slower than 
> others. It would be good to randomize the file paths which are written out 
> in SimpleCopyListing to avoid this issue.
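The idea can be sketched in shell (illustrative only; the real change randomizes the paths inside SimpleCopyListing, it does not shell out; `shuf` is GNU coreutils): a lexically sorted, date-partitioned listing keeps neighbouring S3 key prefixes together, while shuffling spreads them across mappers.

```shell
#!/usr/bin/env bash
# Hypothetical warehouse layout with date partitions. In sorted order,
# adjacent key prefixes sit next to each other, so consecutive mappers
# hammer the same S3 partition; a shuffle breaks that adjacency.
listing="$(printf 'warehouse/dt=2016-01-%02d/part-0\n' 1 2 3 4 5)"

echo "${listing}" | sort    # hotspot-prone: neighbouring prefixes grouped
echo "${listing}" | shuf    # randomized order, as the patch proposes
```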



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13169) Randomize file list in SimpleCopyListing

2016-09-14 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-13169:
--
Attachment: HADOOP-13169-branch-2-008.patch

> Randomize file list in SimpleCopyListing
> 
>
> Key: HADOOP-13169
> URL: https://issues.apache.org/jira/browse/HADOOP-13169
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13169-branch-2-001.patch, 
> HADOOP-13169-branch-2-002.patch, HADOOP-13169-branch-2-003.patch, 
> HADOOP-13169-branch-2-004.patch, HADOOP-13169-branch-2-005.patch, 
> HADOOP-13169-branch-2-006.patch, HADOOP-13169-branch-2-007.patch, 
> HADOOP-13169-branch-2-008.patch
>
>
> When copying files to S3, some mappers can hit S3 partition hotspots, 
> depending on the order of the file listing. This is most visible when data 
> is copied from a Hive warehouse with lots of partitions (e.g. date 
> partitions). In such cases, some of the tasks tend to be a lot slower than 
> others. It would be good to randomize the file paths which are written out 
> in SimpleCopyListing to avoid this issue.






[jira] [Updated] (HADOOP-13169) Randomize file list in SimpleCopyListing

2016-09-14 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-13169:
--
Attachment: (was: HADOOP-13169-branch-2-008.patch)

> Randomize file list in SimpleCopyListing
> 
>
> Key: HADOOP-13169
> URL: https://issues.apache.org/jira/browse/HADOOP-13169
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13169-branch-2-001.patch, 
> HADOOP-13169-branch-2-002.patch, HADOOP-13169-branch-2-003.patch, 
> HADOOP-13169-branch-2-004.patch, HADOOP-13169-branch-2-005.patch, 
> HADOOP-13169-branch-2-006.patch, HADOOP-13169-branch-2-007.patch, 
> HADOOP-13169-branch-2-008.patch
>
>
> When copying files to S3, based on the file listing, some mappers can get 
> into S3 partition hotspots. This is more visible when data is copied from a 
> Hive warehouse with lots of partitions (e.g. date partitions). In such cases, 
> some of the tasks tend to be a lot slower than others. It would be good to 
> randomize the file paths which are written out in SimpleCopyListing to 
> avoid this issue.






[jira] [Updated] (HADOOP-13169) Randomize file list in SimpleCopyListing

2016-09-14 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-13169:
--
Status: Open  (was: Patch Available)

> Randomize file list in SimpleCopyListing
> 
>
> Key: HADOOP-13169
> URL: https://issues.apache.org/jira/browse/HADOOP-13169
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13169-branch-2-001.patch, 
> HADOOP-13169-branch-2-002.patch, HADOOP-13169-branch-2-003.patch, 
> HADOOP-13169-branch-2-004.patch, HADOOP-13169-branch-2-005.patch, 
> HADOOP-13169-branch-2-006.patch, HADOOP-13169-branch-2-007.patch, 
> HADOOP-13169-branch-2-008.patch
>
>
> When copying files to S3, based on the file listing, some mappers can get 
> into S3 partition hotspots. This is more visible when data is copied from a 
> Hive warehouse with lots of partitions (e.g. date partitions). In such cases, 
> some of the tasks tend to be a lot slower than others. It would be good to 
> randomize the file paths which are written out in SimpleCopyListing to 
> avoid this issue.






[jira] [Updated] (HADOOP-13169) Randomize file list in SimpleCopyListing

2016-09-14 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-13169:
--
Attachment: HADOOP-13169-branch-2-008.patch

Thanks [~ste...@apache.org]. Added isDebugEnabled() to be consistent with the 
rest of the code in the latest patch.

> Randomize file list in SimpleCopyListing
> 
>
> Key: HADOOP-13169
> URL: https://issues.apache.org/jira/browse/HADOOP-13169
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13169-branch-2-001.patch, 
> HADOOP-13169-branch-2-002.patch, HADOOP-13169-branch-2-003.patch, 
> HADOOP-13169-branch-2-004.patch, HADOOP-13169-branch-2-005.patch, 
> HADOOP-13169-branch-2-006.patch, HADOOP-13169-branch-2-007.patch, 
> HADOOP-13169-branch-2-008.patch
>
>
> When copying files to S3, based on the file listing, some mappers can get 
> into S3 partition hotspots. This is more visible when data is copied from a 
> Hive warehouse with lots of partitions (e.g. date partitions). In such cases, 
> some of the tasks tend to be a lot slower than others. It would be good to 
> randomize the file paths which are written out in SimpleCopyListing to 
> avoid this issue.






[jira] [Commented] (HADOOP-13311) S3A shell entry point to support commands specific to S3A.

2016-09-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490752#comment-15490752
 ] 

Steve Loughran commented on HADOOP-13311:
-

I have just managed to delete a bucket by mistake.

It is, as you note, fairly dramatic.

> S3A shell entry point to support commands specific to S3A.
> --
>
> Key: HADOOP-13311
> URL: https://issues.apache.org/jira/browse/HADOOP-13311
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Priority: Minor
>
> Create a new {{s3a}} shell entry point.  This can support diagnostic and 
> administrative commands that are specific to S3A and wouldn't make sense to 
> group under existing scripts like {{hadoop}} or {{hdfs}}.






[jira] [Commented] (HADOOP-12977) s3a ignores delete("/", true)

2016-09-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490743#comment-15490743
 ] 

Steve Loughran commented on HADOOP-12977:
-

While looking at this again, I managed to delete the bucket entirely; worth 
knowing that it is possible. For the curious, here is the stack trace:
{code}
testRmNonEmptyRootDirNonRecursive(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir)
  Time elapsed: 0.295 sec  <<< ERROR! java.io.FileNotFoundException: 
innerMkdirs on /test: com.amazonaws.services.s3.model.AmazonS3Exception: The 
specified bucket does not exist (Service: Amazon S3; Status Code: 404; Error 
Code: NoSuchBucket; Request ID: 090FF7B0739884CD), S3 Extended Request ID: 
D7uOVeMMQqJ/Xtmz9CHHJGvSj27MSXMLU7sRc+KqAq0uXWr06U5WBKLo2tzUiFvadg1iCeaAV6E=
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:130)
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:85)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:1180)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1916)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:193)
at 
org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.setup(AbstractContractRootDirectoryTest.java:49)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: The specified 
bucket does not exist (Service: Amazon S3; Status Code: 404; Error Code: 
NoSuchBucket; Request ID: 090FF7B0739884CD)
at 
com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
at 
com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
at 
com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
at 
com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1472)
at 
com.amazonaws.services.s3.transfer.internal.UploadCallable.uploadInOneChunk(UploadCallable.java:131)
at 
com.amazonaws.services.s3.transfer.internal.UploadCallable.call(UploadCallable.java:123)
at 
com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:139)
at 
com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:47)
at 
org.apache.hadoop.fs.s3a.BlockingThreadPoolExecutorService$CallableWithPermitRelease.call(BlockingThreadPoolExecutorService.java:239)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}
An S3AFileSystem instance will not start up if the bucket is missing. This is 
the stack you see if the bucket is deleted during the lifespan of the FS 
instance.

> s3a ignores delete("/", true)
> -
>
> Key: HADOOP-12977
> URL: https://issues.apache.org/jira/browse/HADOOP-12977
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12977-001.patch
>
>
> If you try to delete the root directory on s3a, you get politely but firmly 
> told you can't:
> {code}
> 2016-03-30 12:01:44,924 INFO  s3a.S3AFileSystem 
> (S3AFileSystem.java:delete(638)) - s3a cannot delete the root directory
> {code}
> The semantics of {{rm -rf "/"}} are defined, they are "delete everything 

[jira] [Commented] (HADOOP-13588) ConfServlet should respect Accept request header

2016-09-14 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490719#comment-15490719
 ] 

Weiwei Yang commented on HADOOP-13588:
--

Thanks a lot [~liuml07]

> ConfServlet should respect Accept request header
> 
>
> Key: HADOOP-13588
> URL: https://issues.apache.org/jira/browse/HADOOP-13588
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13588.001.patch, HADOOP-13588.002.patch
>
>
> ConfServlet provides a general service to retrieve daemon configurations. 
> However, it doesn't set the response content-type according to the *Accept* 
> header. For example, when issuing the following command, 
> {code}
> curl --header "Accept:application/json" 
> http://${resourcemanager_host}:8088/conf
> {code}
> I am expecting the response to be in JSON format; however, it is still in 
> XML. I can only get JSON if I issue
> {code}
> curl "http://${resourcemanager_host}:8088/conf?format=json"
> {code}
> This is not the common way for clients to request a content type.
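The header-driven dispatch the description asks for amounts to a small check on the request header; a plain-Java sketch of the idea (the method and names are illustrative, not the actual ConfServlet code):

```java
public class AcceptFormatDemo {
    // Pick the response format from the Accept header, keeping XML as the
    // default that ConfServlet already uses when no preference is stated.
    static String pickFormat(String acceptHeader) {
        if (acceptHeader != null && acceptHeader.contains("application/json")) {
            return "json";
        }
        return "xml";
    }

    public static void main(String[] args) {
        System.out.println(pickFormat("application/json")); // json
        System.out.println(pickFormat(null));               // xml
    }
}
```

A fuller implementation would parse q-values per the HTTP content-negotiation rules, but a substring check covers the curl case in the description.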






[jira] [Commented] (HADOOP-13612) FileSystem static mkdirs(FS, path, permissions) to invoke FS.mkdirs(path, permissions)

2016-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490694#comment-15490694
 ] 

Hadoop QA commented on HADOOP-13612:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
46s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
6s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
10s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
8s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
28s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 47 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
16s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-13612 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828470/HADOOP-13612-branch-2-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 428442af 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | 

[jira] [Commented] (HADOOP-13605) Clean up FileSystem javadocs, logging; improve diagnostics on FS load

2016-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490618#comment-15490618
 ] 

Hadoop QA commented on HADOOP-13605:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
28s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
29s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
29s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
24s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
47s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 7 new + 67 unchanged - 77 fixed = 74 total (was 144) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 47 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
58s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-13605 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828464/HADOOP-13605-branch-2-002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 061761f947c2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 3f36ac9 |
| 

[jira] [Commented] (HADOOP-13606) swift FS to add a service load metadata file

2016-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490640#comment-15490640
 ] 

Hudson commented on HADOOP-13606:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10441 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10441/])
HADOOP-13606 swift FS to add a service load metadata file. Contributed (stevel: 
rev 53a12fa721bb431f7d481aac7d245c93efb56153)
* (add) 
hadoop-tools/hadoop-openstack/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


> swift FS to add a service load metadata file
> 
>
> Key: HADOOP-13606
> URL: https://issues.apache.org/jira/browse/HADOOP-13606
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/swift
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0
>
> Attachments: HADOOP-13606-branch-2-001.patch
>
>
> add a metadata file giving the FS impl of swift; remove the entry from 
> core-default.xml






[jira] [Commented] (HADOOP-7363) TestRawLocalFileSystemContract is needed

2016-09-14 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490620#comment-15490620
 ] 

Andras Bokor commented on HADOOP-7363:
--

Thanks a lot for all of your help [~anu].

> TestRawLocalFileSystemContract is needed
> 
>
> Key: HADOOP-7363
> URL: https://issues.apache.org/jira/browse/HADOOP-7363
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Matt Foley
>Assignee: Andras Bokor
> Fix For: 2.9.0
>
> Attachments: HADOOP-7363.01.patch, HADOOP-7363.02.patch, 
> HADOOP-7363.03.patch, HADOOP-7363.04.patch, HADOOP-7363.05.patch, 
> HADOOP-7363.06.patch
>
>
> FileSystemContractBaseTest is supposed to be run with each concrete 
> FileSystem implementation to insure adherence to the "contract" for 
> FileSystem behavior.  However, currently only HDFS and S3 do so.  
> RawLocalFileSystem, at least, needs to be added. 






[jira] [Commented] (HADOOP-13169) Randomize file list in SimpleCopyListing

2016-09-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490631#comment-15490631
 ] 

Steve Loughran commented on HADOOP-13169:
-

I'll let Chris do the final review.

Now, some bad news about logging at debug level. For commons-logging APIs, 
debug calls need to be wrapped in {{if (LOG.isDebugEnabled())}} clauses; this 
skips the expense of building strings which are then never used.

For classes which use the SLF4J logging APIs, you can get away with using 
{{LOG.debug()}}, provided the style is
{code}
LOG.debug("Adding {}", fileStatusInfo.fileStatus);
{code}
Here the string concat only happens if the log is at debug level, so it is 
less expensive and no longer needs to be wrapped. That's why we get away with 
this in the s3a classes, which have all been upgraded.

This leaves you with a choice: wrap the debug statements, or move the LOG up 
to SLF4J. The latter is done simply by changing the class of the log and its 
factory, adding the new imports, and deleting the old ones
{code}
Logger LOG = LoggerFactory.getLogger(...)
{code}
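The cost difference described above can be seen with a tiny stand-in logger (plain Java, illustrative only; the real commons-logging and SLF4J APIs follow the same shapes):

```java
public class LogCostDemo {
    static int toStringCalls = 0;

    // Stands in for fileStatusInfo.fileStatus: an object whose toString()
    // is expensive enough that we count the calls.
    static class Expensive {
        @Override public String toString() { toStringCalls++; return "status"; }
    }

    static final boolean DEBUG_ENABLED = false; // log level is INFO

    // commons-logging style: the message string is built by the caller,
    // so without an isDebugEnabled() guard the concat always runs.
    static void clDebug(String msg) {
        if (DEBUG_ENABLED) System.out.println(msg);
    }

    // SLF4J parameterized style: the argument object is passed through and
    // only formatted after the level check passes.
    static void slf4jDebug(String fmt, Object arg) {
        if (DEBUG_ENABLED) System.out.println(fmt.replace("{}", String.valueOf(arg)));
    }

    public static void main(String[] args) {
        Expensive status = new Expensive();
        clDebug("Adding " + status);     // concat forces toString() even at INFO
        slf4jDebug("Adding {}", status); // deferred: toString() never runs
        System.out.println(toStringCalls); // only the unguarded call paid
    }
}
```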

> Randomize file list in SimpleCopyListing
> 
>
> Key: HADOOP-13169
> URL: https://issues.apache.org/jira/browse/HADOOP-13169
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13169-branch-2-001.patch, 
> HADOOP-13169-branch-2-002.patch, HADOOP-13169-branch-2-003.patch, 
> HADOOP-13169-branch-2-004.patch, HADOOP-13169-branch-2-005.patch, 
> HADOOP-13169-branch-2-006.patch, HADOOP-13169-branch-2-007.patch
>
>
> When copying files to S3, based on the file listing, some mappers can get 
> into S3 partition hotspots. This is more visible when data is copied from a 
> Hive warehouse with lots of partitions (e.g. date partitions). In such cases, 
> some of the tasks tend to be a lot slower than others. It would be good to 
> randomize the file paths which are written out in SimpleCopyListing to 
> avoid this issue.






[jira] [Updated] (HADOOP-13606) swift FS to add a service load metadata file

2016-09-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13606:

  Resolution: Fixed
   Fix Version/s: 2.8.0
Target Version/s:   (was: 2.9.0)
  Status: Resolved  (was: Patch Available)

> swift FS to add a service load metadata file
> 
>
> Key: HADOOP-13606
> URL: https://issues.apache.org/jira/browse/HADOOP-13606
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/swift
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0
>
> Attachments: HADOOP-13606-branch-2-001.patch
>
>
> add a metadata file giving the FS impl of swift; remove the entry from 
> core-default.xml






[jira] [Updated] (HADOOP-13612) FileSystem static mkdirs(FS, path, permissions) to invoke FS.mkdirs(path, permissions)

2016-09-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13612:

Status: Patch Available  (was: Open)

> FileSystem static mkdirs(FS, path, permissions) to invoke FS.mkdirs(path, 
> permissions)
> --
>
> Key: HADOOP-13612
> URL: https://issues.apache.org/jira/browse/HADOOP-13612
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
> Attachments: HADOOP-13612-branch-2-001.patch
>
>
> Currently {{FileSystem}}'s static {{mkdirs(FileSystem fs, Path dir, 
> FsPermission permission)}} creates the directory in a two step operation
> {code}
> // create the directory using the default permission
> boolean result = fs.mkdirs(dir);
> // set its permission to be the supplied one
> fs.setPermission(dir, permission);
> {code}
> This isn't atomic and creates a risk of race/security conditions. *This code 
> is used in production.*
> Better to simply forward to mkdirs(path, permissions).
> Is there any reason not to do that?






[jira] [Updated] (HADOOP-13612) FileSystem static mkdirs(FS, path, permissions) to invoke FS.mkdirs(path, permissions)

2016-09-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13612:

Attachment: HADOOP-13612-branch-2-001.patch

Patch 001 calls the relevant mkdirs operation.

No tests; people who understand HDFS permissions need to review this to see if 
there is something fundamentally wrong with this approach.

> FileSystem static mkdirs(FS, path, permissions) to invoke FS.mkdirs(path, 
> permissions)
> --
>
> Key: HADOOP-13612
> URL: https://issues.apache.org/jira/browse/HADOOP-13612
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
> Attachments: HADOOP-13612-branch-2-001.patch
>
>
> Currently {{FileSystem}}'s static {{mkdirs(FileSystem fs, Path dir, 
> FsPermission permission)}} creates the directory in a two step operation
> {code}
> // create the directory using the default permission
> boolean result = fs.mkdirs(dir);
> // set its permission to be the supplied one
> fs.setPermission(dir, permission);
> {code}
> This isn't atomic and creates a risk of race/security conditions. *This code 
> is used in production.*
> Better to simply forward to mkdirs(path, permissions).
> Is there any reason not to do that?






[jira] [Commented] (HADOOP-13612) FileSystem static mkdirs(FS, path, permissions) to invoke FS.mkdirs(path, permissions)

2016-09-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490526#comment-15490526
 ] 

Steve Loughran commented on HADOOP-13612:
-

HADOOP-1873 added permissions to the static mkdirs operation in Hadoop 0.16; 
it came shortly after the actual addition of the mkdirs(path, permission) call 
(HADOOP-2288). There's no obvious reason recorded in the discussion as to why 
the mkdirs/set-permissions sequence is used. Maybe they just hadn't noticed 
the method.

> FileSystem static mkdirs(FS, path, permissions) to invoke FS.mkdirs(path, 
> permissions)
> --
>
> Key: HADOOP-13612
> URL: https://issues.apache.org/jira/browse/HADOOP-13612
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>
> Currently {{FileSystem}}'s static {{mkdirs(FileSystem fs, Path dir, 
> FsPermission permission)}} creates the directory in a two step operation
> {code}
> // create the directory using the default permission
> boolean result = fs.mkdirs(dir);
> // set its permission to be the supplied one
> fs.setPermission(dir, permission);
> {code}
> This isn't atomic and creates a risk of race/security conditions. *This code 
> is used in production.*
> Better to simply forward to mkdirs(path, permissions).
> Is there any reason not to do that?






[jira] [Created] (HADOOP-13613) Credentials writeTokenStorageFile should create files with permission 0600

2016-09-14 Thread Doug Balog (JIRA)
Doug Balog created HADOOP-13613:
---

 Summary: Credentials writeTokenStorageFile should create files 
with permission 0600 
 Key: HADOOP-13613
 URL: https://issues.apache.org/jira/browse/HADOOP-13613
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.2
Reporter: Doug Balog


A recent audit discovered that writeTokenStorageFile creates keytab files that 
are readable by others. IMHO, this code should create the file with perm 0600 
explicitly. 
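On POSIX file systems the intent can be sketched with the JDK's java.nio.file API. This is only an illustration of creating a file with mode 0600 at creation time rather than tightening permissions after the fact; the class and method names below are invented for the sketch, not the actual Credentials code:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class TokenFileSketch {
    // Create the file with mode 0600 (rw-------) up front, instead of writing
    // it with default permissions and restricting them afterwards.
    static Path createOwnerOnlyFile(Path file, byte[] contents) throws IOException {
        Set<PosixFilePermission> rw = PosixFilePermissions.fromString("rw-------");
        Files.createFile(file, PosixFilePermissions.asFileAttribute(rw));
        try (OutputStream out = Files.newOutputStream(file)) {
            out.write(contents);
        }
        return file;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("creds");
        Path f = createOwnerOnlyFile(dir.resolve("tokens"), "secret".getBytes());
        // Verify no group/other bits survived creation.
        System.out.println(Files.getPosixFilePermissions(f)
                .equals(PosixFilePermissions.fromString("rw-------")));
    }
}
```

Since the umask can only clear permission bits and 0600 has none outside the owner, the file never exists in a state readable by others.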







[jira] [Updated] (HADOOP-13605) Clean up FileSystem javadocs, logging; improve diagnostics on FS load

2016-09-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13605:

Status: Patch Available  (was: Open)

> Clean up FileSystem javadocs, logging; improve diagnostics on FS load
> -
>
> Key: HADOOP-13605
> URL: https://issues.apache.org/jira/browse/HADOOP-13605
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13605-branch-2-001.patch, 
> HADOOP-13605-branch-2-002.patch
>
>
> We can't easily debug FS instantiation problems as there isn't much detail in 
> what was going on.
> We can add more logging, but cannot simply switch {{FileSystem.LOG}} to SLF4J 
> —the class is used in too many places, including tests which cast it. 
> Instead, add a new private SLF4J Logger, {{LOGGER}} and switch logging to it. 
> While working in the base FileSystem class, take the opportunity to clean up 
> javadocs and comments
> # add the list of exceptions, including indicating which base classes throw 
> UnsupportedOperationExceptions
> # cut bits in the comments which are not true
> The outcome of this patch is that IDEs shouldn't highlight most of the file 
> as flawed in some way or another






[jira] [Created] (HADOOP-13612) FileSystem static mkdirs(FS, path, permissions) to invoke FS.mkdirs(path, permissions)

2016-09-14 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13612:
---

 Summary: FileSystem static mkdirs(FS, path, permissions) to invoke 
FS.mkdirs(path, permissions)
 Key: HADOOP-13612
 URL: https://issues.apache.org/jira/browse/HADOOP-13612
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.7.3
Reporter: Steve Loughran


Currently {{FileSystem}}'s static {{mkdirs(FileSystem fs, Path dir, 
FsPermission permission)}} creates the directory in a two step operation

{code}
// create the directory using the default permission
boolean result = fs.mkdirs(dir);
// set its permission to be the supplied one
fs.setPermission(dir, permission);
{code}

This isn't atomic and creates a risk of race/security conditions. *This code is 
used in production.*

Better to simply forward to mkdirs(path, permissions).

Is there any reason not to do that?
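The difference between the two approaches can be illustrated with the JDK's own file API. Here java.nio.file stands in for the Hadoop FileSystem API, so the class and method names are invented for the sketch:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class MkdirsSketch {
    // Two-step variant: between the two calls the directory exists with
    // default (umask-derived) permissions -- the race/security window.
    static Path mkdirsTwoStep(Path dir, Set<PosixFilePermission> perms) throws IOException {
        Files.createDirectories(dir);              // created with default permissions
        Files.setPosixFilePermissions(dir, perms); // tightened only afterwards
        return dir;
    }

    // One-step variant: permissions are supplied at creation time,
    // closing the window -- the behaviour the issue asks for.
    static Path mkdirsOneStep(Path dir, Set<PosixFilePermission> perms) throws IOException {
        return Files.createDirectories(dir, PosixFilePermissions.asFileAttribute(perms));
    }

    public static void main(String[] args) throws IOException {
        Path base = Files.createTempDirectory("mkdirs-sketch");
        Set<PosixFilePermission> perms = PosixFilePermissions.fromString("rwx------");
        Path a = mkdirsTwoStep(base.resolve("two-step"), perms);
        Path b = mkdirsOneStep(base.resolve("one-step"), perms);
        // Both end at rwx------, but only the one-step variant was never looser.
        System.out.println(Files.getPosixFilePermissions(a).equals(perms)
                && Files.getPosixFilePermissions(b).equals(perms));
    }
}
```

Both variants converge on the same final permissions; the point is that only the one-step variant avoids ever exposing the directory with looser-than-intended permissions.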






[jira] [Commented] (HADOOP-13611) FileSystem/s3a processDeleteOnExit to skip the exists() check

2016-09-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490493#comment-15490493
 ] 

Steve Loughran commented on HADOOP-13611:
-

This is easy to do, but low priority, as it is generally used in testing rather 
than production.

If adding to FileSystem, adding some logging at the same time could be useful, 
as it would help show why things were taking time.

Test-wise, a FilterFileSystem subclass could make the method public; 
experiments could be made to verify it works even after the file to delete has 
been removed. Ideally, this could be done in a new FS contract test, so that 
the correct operation of all filesystems is verified.

> FileSystem/s3a processDeleteOnExit to skip the exists() check
> -
>
> Key: HADOOP-13611
> URL: https://issues.apache.org/jira/browse/HADOOP-13611
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Priority: Minor
>
> If you look at {{FileSystem.processDeleteOnExit()}}, it does an exists() 
> check for each entry, before calling delete().
> That exists() check is superfluous; on S3 it adds an extra 1-4 HTTP GETs.
> This could be fixed with a subclass in s3a to avoid it, but as the call is 
> superfluous in *all* filesystems, it could be removed in {{FileSystem}} and 
> so picked up by all object stores.






[jira] [Updated] (HADOOP-13605) Clean up FileSystem javadocs, logging; improve diagnostics on FS load

2016-09-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13605:

Attachment: HADOOP-13605-branch-2-002.patch

HADOOP-13605 patch 002: 
* read the javadocs and make them consistent with what actually happens
* consistently use "FileSystem" to refer to an instance of the class, 
"filesystem" to refer to the place where files live
* explain that and other things in the introduction
* trim all trailing whitespace
* fix up the references and a few other javadoc issues
* fix the failing test

My IDE is mostly happy now; the javadocs are in sync with the FS spec.
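The dual-logger approach from the issue description (keep the old public field for compatibility, add a private logger for new messages) has roughly this shape. To keep the sketch self-contained, java.util.logging stands in for Commons Logging/SLF4J, and all names are illustrative rather than taken from the patch:

```java
import java.util.logging.Logger;

public class FileSystemLoggingSketch {
    // Old public field: left in place because external code and tests
    // reference it (and, in Hadoop's case, even cast it), so its type
    // cannot be switched without breaking callers.
    public static final Logger LOG =
            Logger.getLogger(FileSystemLoggingSketch.class.getName());

    // New private logger: all new diagnostics go here; its backing
    // framework can change freely without affecting any caller.
    private static final Logger LOGGER =
            Logger.getLogger(FileSystemLoggingSketch.class.getName() + ".internal");

    static String describeLoad(String scheme, String impl) {
        String msg = "Loading filesystem for scheme " + scheme + " via " + impl;
        LOGGER.fine(msg);  // extra detail for debugging FS instantiation
        return msg;
    }

    public static void main(String[] args) {
        System.out.println(describeLoad("s3a", "org.apache.hadoop.fs.s3a.S3AFileSystem"));
    }
}
```

The design choice here is compatibility over tidiness: the public field stays as a frozen piece of API surface while new logging accumulates behind the private one.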

> Clean up FileSystem javadocs, logging; improve diagnostics on FS load
> -
>
> Key: HADOOP-13605
> URL: https://issues.apache.org/jira/browse/HADOOP-13605
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13605-branch-2-001.patch, 
> HADOOP-13605-branch-2-002.patch
>
>
> We can't easily debug FS instantiation problems as there isn't much detail in 
> what was going on.
> We can add more logging, but cannot simply switch {{FileSystem.LOG}} to SLF4J 
> —the class is used in too many places, including tests which cast it. 
> Instead, add a new private SLF4J Logger, {{LOGGER}} and switch logging to it. 
> While working in the base FileSystem class, take the opportunity to clean up 
> javadocs and comments
> # add the list of exceptions, including indicating which base classes throw 
> UnsupportedOperationExceptions
> # cut bits in the comments which are not true
> The outcome of this patch is that IDEs shouldn't highlight most of the file 
> as flawed in some way or another






[jira] [Created] (HADOOP-13611) FileSystem/s3a processDeleteOnExit to skip the exists() check

2016-09-14 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13611:
---

 Summary: FileSystem/s3a processDeleteOnExit to skip the exists() 
check
 Key: HADOOP-13611
 URL: https://issues.apache.org/jira/browse/HADOOP-13611
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs, fs/s3
Affects Versions: 2.7.3
Reporter: Steve Loughran
Priority: Minor


If you look at {{FileSystem.processDeleteOnExit()}}, it does an exists() check 
for each entry, before calling delete().

That exists() check is superfluous; on S3 it adds an extra 1-4 HTTP GETs.

This could be fixed with a subclass in s3a to avoid it, but as the call is 
superfluous in *all* filesystems, it could be removed in {{FileSystem}} and so 
picked up by all object stores.
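The check-then-delete pattern versus a direct delete can be sketched with java.nio.file standing in for the FileSystem API (names here are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class DeleteOnExitSketch {
    // Check-then-delete: the exists() probe costs an extra round trip per
    // path (on an object store, one or more HTTP requests), and its answer
    // can be stale by the time delete() runs anyway.
    static boolean checkThenDelete(Path p) throws IOException {
        if (Files.exists(p)) {
            Files.delete(p);
            return true;
        }
        return false;
    }

    // Direct delete: let the delete call itself report whether anything
    // was removed, saving the probe entirely.
    static boolean deleteDirectly(Path p) throws IOException {
        return Files.deleteIfExists(p);
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("doe", ".tmp");
        System.out.println(deleteDirectly(f));  // true: the file was removed
        System.out.println(deleteDirectly(f));  // false: already gone, no error
    }
}
```

The direct form is also the safer one under concurrency: there is no window between the probe and the delete in which another process can change the answer.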






[jira] [Commented] (HADOOP-13222) s3a.mkdirs() to delete empty fake parent directories

2016-09-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490473#comment-15490473
 ] 

Steve Loughran commented on HADOOP-13222:
-

If you create the object /aa/bb/ with the AWS SDK, that's taken as a directory. 
Create /aa/bb and that's viewed as a file: listings will stop at that point, 
and if you try to create /aa/bb/cc you'll get an error, because you can't have 
a file or directory under a file. If the hadoop fs shell let you do that, it's 
a bug in the shell.
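The key semantics described above can be modeled with a toy object store (a plain Map standing in for the bucket; everything here is an illustration, not the S3A code):

```java
import java.util.HashMap;
import java.util.Map;

public class FakeDirSketch {
    // Keys ending in "/" are directory markers; any other key is a file.
    private final Map<String, byte[]> store = new HashMap<>();

    boolean isDirectoryMarker(String key) {
        return key.endsWith("/");
    }

    // Refuse to create a key under an existing file key, mirroring the
    // "no file or directory under a file" rule from the comment above.
    void put(String key, byte[] data) {
        String parent = key;
        while (parent.contains("/")) {
            parent = parent.substring(0, parent.lastIndexOf('/'));
            if (store.containsKey(parent)) {  // bare key (no trailing slash) == file
                throw new IllegalArgumentException("Parent is a file: " + parent);
            }
        }
        store.put(key, data);
    }

    public static void main(String[] args) {
        FakeDirSketch s = new FakeDirSketch();
        s.put("aa/bb/", new byte[0]);          // directory marker: fine
        s.put("aa/cc", new byte[]{1});         // file under the aa/ prefix: fine
        try {
            s.put("aa/cc/dd", new byte[]{2});  // under a file key: rejected
            System.out.println("no error");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected");    // prints "rejected"
        }
    }
}
```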

> s3a.mkdirs() to delete empty fake parent directories
> 
>
> Key: HADOOP-13222
> URL: https://issues.apache.org/jira/browse/HADOOP-13222
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Priority: Minor
>
> {{S3AFileSystem.mkdirs()}} has a TODO comment: what to do about fake parent 
> directories.
> The answer is: as with files, they should be deleted. This can be done 
> asynchronously






[jira] [Updated] (HADOOP-13605) Clean up FileSystem javadocs, logging; improve diagnostics on FS load

2016-09-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13605:

Status: Open  (was: Patch Available)

> Clean up FileSystem javadocs, logging; improve diagnostics on FS load
> -
>
> Key: HADOOP-13605
> URL: https://issues.apache.org/jira/browse/HADOOP-13605
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13605-branch-2-001.patch
>
>
> We can't easily debug FS instantiation problems as there isn't much detail in 
> what was going on.
> We can add more logging, but cannot simply switch {{FileSystem.LOG}} to SLF4J 
> —the class is used in too many places, including tests which cast it. 
> Instead, add a new private SLF4J Logger, {{LOGGER}} and switch logging to it. 
> While working in the base FileSystem class, take the opportunity to clean up 
> javadocs and comments
> # add the list of exceptions, including indicating which base classes throw 
> UnsupportedOperationExceptions
> # cut bits in the comments which are not true
> The outcome of this patch is that IDEs shouldn't highlight most of the file 
> as flawed in some way or another






[jira] [Commented] (HADOOP-13609) Refine credential provider related codes for AliyunOss integration

2016-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490362#comment-15490362
 ] 

Hadoop QA commented on HADOOP-13609:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
43s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
22s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
12s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13609 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828449/HADOOP-13609-HADOOP-12756.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 31e910e14fde 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-12756 / 60f66a9 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10510/testReport/ |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10510/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Refine credential provider related codes for AliyunOss integration
> --
>
> Key: HADOOP-13609
> URL: https://issues.apache.org/jira/browse/HADOOP-13609
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Kai Zheng
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13609-HADOOP-12756.001.patch
>
>
> Looking at the AliyunOss integration code, some findings:
> 1. {{TemporaryAliyunCredentialsProvider}} could be better named;
> 2. TemporaryAliyunCredentialsProvider shares a lot of code with 
> {{AliyunOSSUtils#getCredentialsProvider}}, and the duplication can be resolved;
> 3. {{AliyunOSSUtils#getPassword}} is rather confusing, as it is used to get 
> things other than passwords.

[jira] [Comment Edited] (HADOOP-13610) Clean up AliyunOss integration tests

2016-09-14 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15489981#comment-15489981
 ] 

Genmao Yu edited comment on HADOOP-13610 at 9/14/16 12:34 PM:
--

[~drankye]

update 1:

1. rename "OSS" => "AliyunOSS" or "Aliyun OSS"
2. rename "oss" => "aliyun-oss"
3. rewrite "generateUniqueTestPath" and "getTestPath" function.  

update 2:

1. code style issue



was (Author: unclegen):
[~drankye]

update 1:

1. rename "OSS" => "AliyunOSS" or "Aliyun OSS"
2. rename "oss" => "aliyun-oss"
3. rewrite "generateUniqueTestPath" and "getTestPath" function.  

update 2:

1. replace "fs.oss.accessKeyId" with "Constants.ACCESS_KEY", and so on
2. code style issue


> Clean up AliyunOss integration tests
> 
>
> Key: HADOOP-13610
> URL: https://issues.apache.org/jira/browse/HADOOP-13610
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Kai Zheng
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13610-HADOOP-12756.001.patch
>
>
> Noticed some cleanup can be done to the tests, mainly following the 
> conventions used for others (Azure). For example:
> 1. OSSContract => AliyunOSSFileSystemContract
> 2. OSSTestUtils => AliyunOSSTestUtils
> 3. All the tests like TestOSSContractCreate => TestAliyunOSSContractCreate






[jira] [Updated] (HADOOP-13609) Refine credential provider related codes for AliyunOss integration

2016-09-14 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13609:
---
Attachment: HADOOP-13609-HADOOP-12756.001.patch

> Refine credential provider related codes for AliyunOss integration
> --
>
> Key: HADOOP-13609
> URL: https://issues.apache.org/jira/browse/HADOOP-13609
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Kai Zheng
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13609-HADOOP-12756.001.patch
>
>
> Looking at the AliyunOss integration code, some findings:
> 1. {{TemporaryAliyunCredentialsProvider}} could be better named;
> 2. TemporaryAliyunCredentialsProvider shares a lot of code with 
> {{AliyunOSSUtils#getCredentialsProvider}}, and the duplication can be resolved;
> 3. {{AliyunOSSUtils#getPassword}} is rather confusing, as it is used to get 
> things other than passwords.






[jira] [Updated] (HADOOP-13609) Refine credential provider related codes for AliyunOss integration

2016-09-14 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13609:
---
Status: Patch Available  (was: In Progress)

> Refine credential provider related codes for AliyunOss integration
> --
>
> Key: HADOOP-13609
> URL: https://issues.apache.org/jira/browse/HADOOP-13609
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Kai Zheng
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13609-HADOOP-12756.001.patch
>
>
> Looking at the AliyunOss integration code, some findings:
> 1. {{TemporaryAliyunCredentialsProvider}} could be better named;
> 2. TemporaryAliyunCredentialsProvider shares a lot of code with 
> {{AliyunOSSUtils#getCredentialsProvider}}, and the duplication can be resolved;
> 3. {{AliyunOSSUtils#getPassword}} is rather confusing, as it is used to get 
> things other than passwords.






[jira] [Comment Edited] (HADOOP-13609) Refine credential provider related codes for AliyunOss integration

2016-09-14 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490265#comment-15490265
 ] 

Genmao Yu edited comment on HADOOP-13609 at 9/14/16 12:27 PM:
--

[~drankye]

1: "TemporaryAliyunCredentialsProvider" keeps the same naming style as S3A.
2: Also keeps the same implementation as S3A, providing support for session 
credentials.
3: +1

updates:

1. AliyunOSSUtils#getPassword => AliyunOSSUtils#getAliyunAccessKeys
2. some renaming: 
"accessKey" => "accessKeyId"
"secretKey" => "accessKeySecret"
"ACCESS_KEY" => "ACCESS_KEY_ID"
"SECRET_KEY" => "ACCESS_KEY_SECRET"


was (Author: unclegen):
[~drankye]

1: "TemporaryAliyunCredentialsProvider" keeps the same naming style as S3A.
2: Also keeps the same implementation as S3A, providing support for session 
credentials.
3: +1

> Refine credential provider related codes for AliyunOss integration
> --
>
> Key: HADOOP-13609
> URL: https://issues.apache.org/jira/browse/HADOOP-13609
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Kai Zheng
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
>
> Looking at the AliyunOss integration code, some findings:
> 1. {{TemporaryAliyunCredentialsProvider}} could be better named;
> 2. TemporaryAliyunCredentialsProvider shares a lot of code with 
> {{AliyunOSSUtils#getCredentialsProvider}}, and the duplication can be resolved;
> 3. {{AliyunOSSUtils#getPassword}} is rather confusing, as it is used to get 
> things other than passwords.






[jira] [Commented] (HADOOP-13609) Refine credential provider related codes for AliyunOss integration

2016-09-14 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490265#comment-15490265
 ] 

Genmao Yu commented on HADOOP-13609:


[~drankye]

1: "TemporaryAliyunCredentialsProvider" keeps the same naming style as S3A.
2: Also keeps the same implementation as S3A, providing support for session 
credentials.
3: +1

> Refine credential provider related codes for AliyunOss integration
> --
>
> Key: HADOOP-13609
> URL: https://issues.apache.org/jira/browse/HADOOP-13609
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Kai Zheng
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
>
> Looking at the AliyunOss integration code, some findings:
> 1. {{TemporaryAliyunCredentialsProvider}} could be better named;
> 2. TemporaryAliyunCredentialsProvider shares a lot of code with 
> {{AliyunOSSUtils#getCredentialsProvider}}, and the duplication can be resolved;
> 3. {{AliyunOSSUtils#getPassword}} is rather confusing, as it is used to get 
> things other than passwords.






[jira] [Commented] (HADOOP-13610) Clean up AliyunOss integration tests

2016-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490158#comment-15490158
 ] 

Hadoop QA commented on HADOOP-13610:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 34 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
41s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
23s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
12s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13610 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828430/HADOOP-13610-HADOOP-12756.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux c04ce5521b89 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-12756 / 60f66a9 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10509/testReport/ |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10509/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Clean up AliyunOss integration tests
> 
>
> Key: HADOOP-13610
> URL: https://issues.apache.org/jira/browse/HADOOP-13610
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Kai Zheng
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> 

[jira] [Comment Edited] (HADOOP-13610) Clean up AliyunOss integration tests

2016-09-14 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15489981#comment-15489981
 ] 

Genmao Yu edited comment on HADOOP-13610 at 9/14/16 10:52 AM:
--

[~drankye]

update 1:

1. rename "OSS" => "AliyunOSS" or "Aliyun OSS"
2. rename "oss" => "aliyun-oss"
3. rewrite "generateUniqueTestPath" and "getTestPath" function.  

update 2:

1. replace "fs.oss.accessKeyId" with "Constants.ACCESS_KEY", and so on
2. code style issue



was (Author: unclegen):
[~drankye]

updates:

1. rename "OSS" => "AliyunOSS" or "Aliyun OSS"
2. rename "oss" => "aliyun-oss"
3. rewrite "generateUniqueTestPath" and "getTestPath" function.  

> Clean up AliyunOss integration tests
> 
>
> Key: HADOOP-13610
> URL: https://issues.apache.org/jira/browse/HADOOP-13610
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Kai Zheng
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13610-HADOOP-12756.001.patch
>
>
> Noticed some cleanup can be done to the tests, mainly following the 
> conventions used for others (Azure). For example:
> 1. OSSContract => AliyunOSSFileSystemContract
> 2. OSSTestUtils => AliyunOSSTestUtils
> 3. All the tests like TestOSSContractCreate => TestAliyunOSSContractCreate






[jira] [Updated] (HADOOP-13610) Clean up AliyunOss integration tests

2016-09-14 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13610:
---
Attachment: (was: HADOOP-13610-HADOOP-12756.001.patch)

> Clean up AliyunOss integration tests
> 
>
> Key: HADOOP-13610
> URL: https://issues.apache.org/jira/browse/HADOOP-13610
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Kai Zheng
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13610-HADOOP-12756.001.patch
>
>
> Noticed some cleanup can be done to the tests, mainly following the 
> conventions used for others (Azure). For example:
> 1. OSSContract => AliyunOSSFileSystemContract
> 2. OSSTestUtils => AliyunOSSTestUtils
> 3. All the tests like TestOSSContractCreate => TestAliyunOSSContractCreate



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13610) Clean up AliyunOss integration tests

2016-09-14 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13610:
---
Attachment: HADOOP-13610-HADOOP-12756.001.patch

> Clean up AliyunOss integration tests
> 
>
> Key: HADOOP-13610
> URL: https://issues.apache.org/jira/browse/HADOOP-13610
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Kai Zheng
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13610-HADOOP-12756.001.patch
>
>
> Noticed some cleanup can be done to the tests, mainly following the 
> conventions used for others (Azure). For example:
> 1. OSSContract => AliyunOSSFileSystemContract
> 2. OSSTestUtils => AliyunOSSTestUtils
> 3. All the tests like TestOSSContractCreate => TestAliyunOSSContractCreate



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13169) Randomize file list in SimpleCopyListing

2016-09-14 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-13169:
--
Attachment: HADOOP-13169-branch-2-007.patch

> Randomize file list in SimpleCopyListing
> 
>
> Key: HADOOP-13169
> URL: https://issues.apache.org/jira/browse/HADOOP-13169
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13169-branch-2-001.patch, 
> HADOOP-13169-branch-2-002.patch, HADOOP-13169-branch-2-003.patch, 
> HADOOP-13169-branch-2-004.patch, HADOOP-13169-branch-2-005.patch, 
> HADOOP-13169-branch-2-006.patch, HADOOP-13169-branch-2-007.patch
>
>
> When copying files to S3, some mappers can hit S3 partition hotspots, 
> depending on the file listing. This is more visible when data is copied from a hive 
> warehouse with lots of partitions (e.g. date partitions). In such cases, some 
> of the tasks tend to be a lot slower than others. It would be good 
> to randomize the file paths which are written out in SimpleCopyListing to 
> avoid this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13169) Randomize file list in SimpleCopyListing

2016-09-14 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-13169:
--
Status: Patch Available  (was: Open)

> Randomize file list in SimpleCopyListing
> 
>
> Key: HADOOP-13169
> URL: https://issues.apache.org/jira/browse/HADOOP-13169
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13169-branch-2-001.patch, 
> HADOOP-13169-branch-2-002.patch, HADOOP-13169-branch-2-003.patch, 
> HADOOP-13169-branch-2-004.patch, HADOOP-13169-branch-2-005.patch, 
> HADOOP-13169-branch-2-006.patch, HADOOP-13169-branch-2-007.patch
>
>
> When copying files to S3, some mappers can hit S3 partition hotspots, 
> depending on the file listing. This is more visible when data is copied from a hive 
> warehouse with lots of partitions (e.g. date partitions). In such cases, some 
> of the tasks tend to be a lot slower than others. It would be good 
> to randomize the file paths which are written out in SimpleCopyListing to 
> avoid this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13169) Randomize file list in SimpleCopyListing

2016-09-14 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-13169:
--
Status: Open  (was: Patch Available)

> Randomize file list in SimpleCopyListing
> 
>
> Key: HADOOP-13169
> URL: https://issues.apache.org/jira/browse/HADOOP-13169
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13169-branch-2-001.patch, 
> HADOOP-13169-branch-2-002.patch, HADOOP-13169-branch-2-003.patch, 
> HADOOP-13169-branch-2-004.patch, HADOOP-13169-branch-2-005.patch, 
> HADOOP-13169-branch-2-006.patch
>
>
> When copying files to S3, some mappers can hit S3 partition hotspots, 
> depending on the file listing. This is more visible when data is copied from a hive 
> warehouse with lots of partitions (e.g. date partitions). In such cases, some 
> of the tasks tend to be a lot slower than others. It would be good 
> to randomize the file paths which are written out in SimpleCopyListing to 
> avoid this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13605) Clean up FileSystem javadocs, logging; improve diagnostics on FS load

2016-09-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490051#comment-15490051
 ] 

Steve Loughran commented on HADOOP-13605:
-

Test failed because the error message on a load failure now quotes the scheme, 
while the test was looking for the exact string.
{code}
org.junit.ComparisonFailure: expected:<...ileSystem for scheme[: null]> but 
was:<...ileSystem for scheme[ "null"]>
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.hadoop.fs.TestFileSystemCaching.testDefaultFsUris(TestFileSystemCaching.java:99)
{code}
As the FS code now throws a specific subclass of IOE, 
{{UnsupportedFileSystemException}}, the message check can be replaced by 
catching that explicit exception type instead.
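
A minimal sketch of the idea, in plain Java with illustrative stand-in names (the real test uses Hadoop's {{UnsupportedFileSystemException}} and {{FileSystem.get()}}, not the classes below): catching the specific exception subclass survives message rewording, while asserting on the exact string does not.

```java
// Hypothetical sketch: assert on the exception *type*, not its message text.
// UnsupportedSchemeException and loadFileSystem() are stand-ins, not Hadoop code.
public class ExceptionTypeCheck {

    // Stand-in for UnsupportedFileSystemException (a subclass of IOException).
    static class UnsupportedSchemeException extends java.io.IOException {
        UnsupportedSchemeException(String msg) { super(msg); }
    }

    // Stand-in for FileSystem.get(): always fails for an unknown scheme,
    // using the new quoted-scheme wording.
    static void loadFileSystem(String scheme) throws java.io.IOException {
        throw new UnsupportedSchemeException(
            "No FileSystem for scheme \"" + scheme + "\"");
    }

    // Brittle check: breaks whenever the message wording changes.
    static boolean checksMessage() {
        try {
            loadFileSystem(null);
            return false;
        } catch (java.io.IOException e) {
            return e.getMessage().equals("No FileSystem for scheme: null");
        }
    }

    // Robust check: only the exception type matters, not its wording.
    static boolean checksType() {
        try {
            loadFileSystem(null);
            return false;
        } catch (UnsupportedSchemeException e) {
            return true;
        } catch (java.io.IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(checksMessage()); // false: wording changed under it
        System.out.println(checksType());    // true: type still matches
    }
}
```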

> Clean up FileSystem javadocs, logging; improve diagnostics on FS load
> -
>
> Key: HADOOP-13605
> URL: https://issues.apache.org/jira/browse/HADOOP-13605
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13605-branch-2-001.patch
>
>
> We can't easily debug FS instantiation problems as there isn't much detail in 
> what was going on.
> We can add more logging, but cannot simply switch {{FileSystem.LOG}} to SLF4J 
> —the class is used in too many places, including tests which cast it. 
> Instead, add a new private SLF4J Logger, {{LOGGER}} and switch logging to it. 
> While working in the base FileSystem class, take the opportunity to clean up 
> javadocs and comments
> # add the list of exceptions, including indicating which base classes throw 
> UnsupportedOperationExceptions
> # cut bits in the comments which are not true
> The outcome of this patch is that IDEs shouldn't highlight most of the file 
> as flawed in some way or another



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13610) Clean up AliyunOss integration tests

2016-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490043#comment-15490043
 ] 

Hadoop QA commented on HADOOP-13610:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 34 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
19s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
23s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m  9s{color} | {color:orange} hadoop-tools/hadoop-aliyun: The patch generated 
6 new + 0 unchanged - 0 fixed = 6 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
11s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13610 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828423/HADOOP-13610-HADOOP-12756.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 6d18e6a1c5b8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-12756 / 60f66a9 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10507/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aliyun.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10507/testReport/ |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10507/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Clean up AliyunOss integration tests
> 
>
> Key: HADOOP-13610
> URL: 

[jira] [Commented] (HADOOP-13169) Randomize file list in SimpleCopyListing

2016-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490042#comment-15490042
 ] 

Hadoop QA commented on HADOOP-13169:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
29s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} hadoop-tools/hadoop-distcp: The patch generated 0 
new + 51 unchanged - 1 fixed = 51 total (was 52) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m  6s{color} 
| {color:red} hadoop-distcp in the patch failed with JDK v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_101 Failed junit tests | hadoop.tools.TestCopyListing |
| JDK v1.7.0_111 Failed junit tests | hadoop.tools.TestCopyListing |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-13169 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828422/HADOOP-13169-branch-2-006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 296911ae3b7b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-13600) S3a rename() to copy files in a directory in parallel

2016-09-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490020#comment-15490020
 ] 

Steve Loughran commented on HADOOP-13600:
-

Rather than naively issuing the copy calls in the order the list came back, we 
should sort them by file size.

Why? Assuming there is thread capacity, it means the largest files would all be 
copied simultaneously; as the smaller ones complete, the next copies could 
start while the biggest copy was still ongoing.


This would be faster than a list-ordered approach if the list contained a mix 
of long and short blobs.
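
The scheduling idea can be sketched as follows. This is an illustrative outline, not the S3A code; the {{Entry}} class and {{copyOrder}} method are hypothetical names standing in for the object listing and the order in which copy tasks would be submitted to the thread pool.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch: submit the largest copies first so the longest-running
// operation starts as early as possible, and smaller files fill in
// the remaining thread-pool capacity as workers free up.
public class LargestFirstScheduling {

    // Hypothetical (path, size) pair standing in for one listed object.
    static final class Entry {
        final String path;
        final long size;
        Entry(String path, long size) { this.path = path; this.size = size; }
    }

    // Sort descending by size to get the submission order.
    static List<String> copyOrder(List<Entry> listing) {
        List<Entry> sorted = new ArrayList<>(listing);
        sorted.sort((a, b) -> Long.compare(b.size, a.size));
        List<String> order = new ArrayList<>();
        for (Entry e : sorted) {
            order.add(e.path);
        }
        return order;
    }

    public static void main(String[] args) {
        List<Entry> listing = Arrays.asList(
            new Entry("dir/small", 4L),
            new Entry("dir/huge", 5_000_000L),
            new Entry("dir/medium", 1_000L));
        // Prints [dir/huge, dir/medium, dir/small]
        System.out.println(copyOrder(listing));
    }
}
```

With this order, the elapsed time approaches the duration of the single largest copy, rather than depending on where that file happened to fall in the listing.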

> S3a rename() to copy files in a directory in parallel
> -
>
> Key: HADOOP-13600
> URL: https://issues.apache.org/jira/browse/HADOOP-13600
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>
> Currently a directory rename does a one-by-one copy, making the request 
> O(files * data). If the copy operations were launched in parallel, the 
> duration of the copy may be reducible to the duration of the longest copy. 
> For a directory with many files, this will be significant



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13609) Refine credential provider related codes for AliyunOss integration

2016-09-14 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu reassigned HADOOP-13609:
--

Assignee: Genmao Yu

> Refine credential provider related codes for AliyunOss integration
> --
>
> Key: HADOOP-13609
> URL: https://issues.apache.org/jira/browse/HADOOP-13609
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Kai Zheng
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
>
> Looking at the AliyunOSS integration code, some findings:
> 1. {{TemporaryAliyunCredentialsProvider}} could be better named;
> 2. TemporaryAliyunCredentialsProvider shares much code with 
> {{AliyunOSSUtils#getCredentialsProvider}}, and the duplication can be resolved;
> 3. {{AliyunOSSUtils#getPassword}} is rather confusing, as it is used to get 
> things other than passwords.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13610) Clean up AliyunOss integration tests

2016-09-14 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15489981#comment-15489981
 ] 

Genmao Yu commented on HADOOP-13610:


[~drankye]

updates:

1. rename "OSS" => "AliyunOSS" or "Aliyun OSS"
2. rename "oss" => "aliyun-oss"
3. rewrite "generateUniqueTestPath" and "getTestPath" function.  

> Clean up AliyunOss integration tests
> 
>
> Key: HADOOP-13610
> URL: https://issues.apache.org/jira/browse/HADOOP-13610
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Kai Zheng
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13610-HADOOP-12756.001.patch
>
>
> Noticed some cleanup can be done to the tests, mainly following the 
> conventions used for others (Azure). For example:
> 1. OSSContract => AliyunOSSFileSystemContract
> 2. OSSTestUtils => AliyunOSSTestUtils
> 3. All the tests like TestOSSContractCreate => TestAliyunOSSContractCreate



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13610) Clean up AliyunOss integration tests

2016-09-14 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15489983#comment-15489983
 ] 

Genmao Yu commented on HADOOP-13610:


[~drankye]

updates:

1. rename "OSS" => "AliyunOSS" or "Aliyun OSS"
2. rename "oss" => "aliyun-oss"
3. rewrite "generateUniqueTestPath" and "getTestPath" function.  

> Clean up AliyunOss integration tests
> 
>
> Key: HADOOP-13610
> URL: https://issues.apache.org/jira/browse/HADOOP-13610
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Kai Zheng
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13610-HADOOP-12756.001.patch
>
>
> Noticed some cleanup can be done to the tests, mainly following the 
> conventions used for others (Azure). For example:
> 1. OSSContract => AliyunOSSFileSystemContract
> 2. OSSTestUtils => AliyunOSSTestUtils
> 3. All the tests like TestOSSContractCreate => TestAliyunOSSContractCreate



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-13609) Refine credential provider related codes for AliyunOss integration

2016-09-14 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-13609 started by Genmao Yu.
--
> Refine credential provider related codes for AliyunOss integration
> --
>
> Key: HADOOP-13609
> URL: https://issues.apache.org/jira/browse/HADOOP-13609
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Kai Zheng
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
>
> Looking at the AliyunOSS integration code, some findings:
> 1. {{TemporaryAliyunCredentialsProvider}} could be better named;
> 2. TemporaryAliyunCredentialsProvider shares much code with 
> {{AliyunOSSUtils#getCredentialsProvider}}, and the duplication can be resolved;
> 3. {{AliyunOSSUtils#getPassword}} is rather confusing, as it is used to get 
> things other than passwords.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-13610) Clean up AliyunOss integration tests

2016-09-14 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13610:
---
Comment: was deleted

(was: [~drankye]

updates:

1. rename "OSS" => "AliyunOSS" or "Aliyun OSS"
2. rename "oss" => "aliyun-oss"
3. rewrite "generateUniqueTestPath" and "getTestPath" function.  )

> Clean up AliyunOss integration tests
> 
>
> Key: HADOOP-13610
> URL: https://issues.apache.org/jira/browse/HADOOP-13610
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Kai Zheng
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13610-HADOOP-12756.001.patch
>
>
> Noticed some cleanup can be done to the tests, mainly following the 
> conventions used for others (Azure). For example:
> 1. OSSContract => AliyunOSSFileSystemContract
> 2. OSSTestUtils => AliyunOSSTestUtils
> 3. All the tests like TestOSSContractCreate => TestAliyunOSSContractCreate



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13610) Clean up AliyunOss integration tests

2016-09-14 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13610:
---
Status: Patch Available  (was: In Progress)

> Clean up AliyunOss integration tests
> 
>
> Key: HADOOP-13610
> URL: https://issues.apache.org/jira/browse/HADOOP-13610
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Kai Zheng
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13610-HADOOP-12756.001.patch
>
>
> Noticed some cleanup can be done to the tests, mainly following the 
> conventions used for others (Azure). For example:
> 1. OSSContract => AliyunOSSFileSystemContract
> 2. OSSTestUtils => AliyunOSSTestUtils
> 3. All the tests like TestOSSContractCreate => TestAliyunOSSContractCreate



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13610) Clean up AliyunOss integration tests

2016-09-14 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13610:
---
Attachment: HADOOP-13610-HADOOP-12756.001.patch

> Clean up AliyunOss integration tests
> 
>
> Key: HADOOP-13610
> URL: https://issues.apache.org/jira/browse/HADOOP-13610
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Kai Zheng
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13610-HADOOP-12756.001.patch
>
>
> Noticed some cleanup can be done to the tests, mainly following the 
> conventions used for others (Azure). For example:
> 1. OSSContract => AliyunOSSFileSystemContract
> 2. OSSTestUtils => AliyunOSSTestUtils
> 3. All the tests like TestOSSContractCreate => TestAliyunOSSContractCreate



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13169) Randomize file list in SimpleCopyListing

2016-09-14 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-13169:
--
Attachment: HADOOP-13169-branch-2-006.patch

Thank you very much for the review [~cnauroth].

Changes:
1. Made {{fileStatusLimit}} and {{randomizeFileListing}} final fields.
2. Changed logging to debug level in the {{SimpleCopyListing}} related change.
3. You are correct about the {{synchronizedList}} related change. It is not 
accessed in multi-threaded mode, so marked it as a LinkedList.
4. Used the diamond operator instead of {{new ArrayList}}.
5. Used "try-with-resources" in the test case.
6. Removed the IOException handling in the test case and let it throw the exception.
7. Fixed "/tmp/" in Path.
8. Added a better error message by including {{idx}} in the test case.
9. For {{Collections.shuffle()}}, it was shuffling only 10 items (1, 2, ..., 10). With 
a smaller list, there is a higher chance of getting the same order back. With 
more items (increased to 100 now), that should not be the case. Please correct me 
if I am wrong.
10. Fixed the checkstyle issues.
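
The randomization being reviewed amounts to the sketch below. Names are illustrative (the actual patch lives in {{SimpleCopyListing}}); the seed is used here only to keep the sketch deterministic, whereas the real change would use an unseeded shuffle.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Random;

public class ShuffleListing {
    // Sketch of the idea: shuffle the collected paths before writing them
    // out, so consecutive listing entries no longer share the same
    // partition prefix and mappers spread across S3 partitions.
    static List<String> randomize(List<String> paths, long seed) {
        List<String> shuffled = new ArrayList<>(paths);
        // Seeded only for a deterministic sketch; production code would
        // use Collections.shuffle(shuffled) with an unseeded Random.
        Collections.shuffle(shuffled, new Random(seed));
        return shuffled;
    }

    public static void main(String[] args) {
        // 100 items, per the review note: a very small list has a higher
        // chance of shuffling back into a near-identical order.
        List<String> paths = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            paths.add("warehouse/date=" + i);
        }
        List<String> shuffled = randomize(paths, 42L);
        // Order changes, but the set of paths is preserved.
        System.out.println(shuffled.equals(paths));                              // false
        System.out.println(new HashSet<>(shuffled).equals(new HashSet<>(paths))); // true
    }
}
```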



> Randomize file list in SimpleCopyListing
> 
>
> Key: HADOOP-13169
> URL: https://issues.apache.org/jira/browse/HADOOP-13169
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13169-branch-2-001.patch, 
> HADOOP-13169-branch-2-002.patch, HADOOP-13169-branch-2-003.patch, 
> HADOOP-13169-branch-2-004.patch, HADOOP-13169-branch-2-005.patch, 
> HADOOP-13169-branch-2-006.patch
>
>
> When copying files to S3, some mappers can hit S3 partition hotspots, 
> depending on the file listing. This is more visible when data is copied from a hive 
> warehouse with lots of partitions (e.g. date partitions). In such cases, some 
> of the tasks tend to be a lot slower than others. It would be good 
> to randomize the file paths which are written out in SimpleCopyListing to 
> avoid this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13591) Unit test failed in 'TestOSSContractGetFileStatus' and 'TestOSSContractRootDir'

2016-09-14 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15489869#comment-15489869
 ] 

Genmao Yu commented on HADOOP-13591:


[~ste...@apache.org] and [~shimingfei]

updates:

1. revert "making "/" a constant"
2. factor out some repeated operations
3. some code style issue

result of unit test:

{code}
[INFO] Scanning for projects...
[INFO] 
[INFO] 
[INFO] Building Apache Hadoop Aliyun OSS support 3.0.0-alpha2-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-aliyun ---
[INFO] Deleting /develop/github/hadoop/hadoop-tools/hadoop-aliyun/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-aliyun ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/develop/github/hadoop/hadoop-tools/hadoop-aliyun/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hadoop-aliyun 
---
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hadoop-aliyun ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/develop/github/hadoop/hadoop-tools/hadoop-aliyun/src/main/resources
[INFO] Copying 2 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hadoop-aliyun 
---
[INFO] Compiling 8 source files to 
/develop/github/hadoop/hadoop-tools/hadoop-aliyun/target/classes
[INFO] 
[INFO] --- maven-dependency-plugin:2.2:list (deplist) @ hadoop-aliyun ---
[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hadoop-aliyun ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 5 resources
[INFO] Copying 2 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hadoop-aliyun ---
[INFO] Compiling 16 source files to 
/develop/github/hadoop/hadoop-tools/hadoop-aliyun/target/test-classes
[INFO] 
[INFO] --- maven-surefire-plugin:2.17:test (default-test) @ hadoop-aliyun ---
[INFO] Surefire report directory: 
/develop/github/hadoop/hadoop-tools/hadoop-aliyun/target/surefire-reports

---
 T E S T S
---

Running org.apache.hadoop.fs.aliyun.oss.TestOSSOutputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.144 sec - in 
org.apache.hadoop.fs.aliyun.oss.TestOSSOutputStream
Running org.apache.hadoop.fs.aliyun.oss.TestOSSInputStream
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.999 sec - in 
org.apache.hadoop.fs.aliyun.oss.TestOSSInputStream
Running org.apache.hadoop.fs.aliyun.oss.TestOSSFileSystemContract
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.172 sec - 
in org.apache.hadoop.fs.aliyun.oss.TestOSSFileSystemContract
Running org.apache.hadoop.fs.aliyun.oss.TestOSSFileSystemStore
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.504 sec - in 
org.apache.hadoop.fs.aliyun.oss.TestOSSFileSystemStore
Running org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractSeek
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.525 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractSeek
Running org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.868 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractRename
Running org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.96 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractDelete
Running org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractRootDir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.589 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractRootDir
Running org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractDispCp
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.563 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractDispCp
Running org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 4.039 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractCreate
Running org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.464 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractOpen
Running 

[jira] [Commented] (HADOOP-13164) Optimize S3AFileSystem::deleteUnnecessaryFakeDirectories

2016-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15489637#comment-15489637
 ] 

Hadoop QA commented on HADOOP-13164:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_111. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-13164 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828400/HADOOP-13164-branch-2-004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b27e11eb2c6a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 3f36ac9 |
| Default Java | 1.7.0_111 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_101 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_111 |
| findbugs | v3.0.0 |
| JDK v1.7.0_111  Test Results 

[jira] [Updated] (HADOOP-13164) Optimize S3AFileSystem::deleteUnnecessaryFakeDirectories

2016-09-14 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-13164:
--
Attachment: HADOOP-13164-branch-2-004.patch

> Optimize S3AFileSystem::deleteUnnecessaryFakeDirectories
> 
>
> Key: HADOOP-13164
> URL: https://issues.apache.org/jira/browse/HADOOP-13164
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13164-branch-2-003.patch, 
> HADOOP-13164-branch-2-004.patch, HADOOP-13164.branch-2-002.patch, 
> HADOOP-13164.branch-2.WIP.002.patch, HADOOP-13164.branch-2.WIP.patch
>
>
> https://github.com/apache/hadoop/blob/27c4e90efce04e1b1302f668b5eb22412e00d033/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L1224
> deleteUnnecessaryFakeDirectories is invoked in S3AFileSystem during rename 
> and on outputstream close() to purge any fake directories. Depending on the 
> nesting in the folder structure, it might take a lot longer time as it 
> invokes getFileStatus multiple times.  Instead, it should be able to break 
> out of the loop once a non-empty directory is encountered. 
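
[Editor's note] The "break out of the loop" optimization described above can be sketched as walking up the parent chain of a key and stopping at the first non-empty directory, instead of probing every ancestor unconditionally. This is a minimal illustration, not the S3AFileSystem implementation: the names (`FakeDirCleanup`, `deleteFakeDirsUpTo`, `isEmptyDirectory`) are hypothetical, and a `Map` of child counts stands in for the object store.

```java
import java.util.HashMap;
import java.util.Map;

public class FakeDirCleanup {

    // key -> number of child objects; a "fake directory" marker has 0 children.
    private final Map<String, Integer> childCount = new HashMap<>();

    void put(String dir, int children) {
        childCount.put(dir, children);
    }

    boolean isEmptyDirectory(String dir) {
        return childCount.getOrDefault(dir, 0) == 0;
    }

    // Deletes empty ancestor markers of the given key, breaking at the first
    // non-empty ancestor: once one directory is non-empty, every directory
    // above it is non-empty too, so further getFileStatus-style probes are wasted.
    int deleteFakeDirsUpTo(String key) {
        int deleted = 0;
        String parent = parentOf(key);
        while (parent != null) {
            if (!isEmptyDirectory(parent)) {
                break; // everything above is also non-empty; stop probing
            }
            childCount.remove(parent);
            deleted++;
            parent = parentOf(parent);
        }
        return deleted;
    }

    static String parentOf(String key) {
        int i = key.lastIndexOf('/');
        return i <= 0 ? null : key.substring(0, i);
    }
}
```

For a deeply nested path, the early break turns a probe per ancestor into a probe per *empty* ancestor plus one, which is the saving the issue is after.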






[jira] [Commented] (HADOOP-13591) Unit test failed in 'TestOSSContractGetFileStatus' and 'TestOSSContractRootDir'

2016-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15489581#comment-15489581
 ] 

Hadoop QA commented on HADOOP-13591:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
38s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
23s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
11s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13591 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828395/HADOOP-13591-HADOOP-12756.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 86a531c32f59 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-12756 / 60f66a9 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10504/testReport/ |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10504/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Unit test failed in 'TestOSSContractGetFileStatus' and 
> 'TestOSSContractRootDir'
> ---
>
> Key: HADOOP-13591
> URL: https://issues.apache.org/jira/browse/HADOOP-13591
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756

[jira] [Updated] (HADOOP-13591) Unit test failed in 'TestOSSContractGetFileStatus' and 'TestOSSContractRootDir'

2016-09-14 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13591:
---
Attachment: HADOOP-13591-HADOOP-12756.003.patch

> Unit test failed in 'TestOSSContractGetFileStatus' and 
> 'TestOSSContractRootDir'
> ---
>
> Key: HADOOP-13591
> URL: https://issues.apache.org/jira/browse/HADOOP-13591
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13591-HADOOP-12756.001.patch, 
> HADOOP-13591-HADOOP-12756.002.patch, HADOOP-13591-HADOOP-12756.003.patch
>
>







[jira] [Updated] (HADOOP-13591) Unit test failed in 'TestOSSContractGetFileStatus' and 'TestOSSContractRootDir'

2016-09-14 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13591:
---
Attachment: (was: HADOOP-13591-HADOOP-12756.003.patch)

> Unit test failed in 'TestOSSContractGetFileStatus' and 
> 'TestOSSContractRootDir'
> ---
>
> Key: HADOOP-13591
> URL: https://issues.apache.org/jira/browse/HADOOP-13591
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13591-HADOOP-12756.001.patch, 
> HADOOP-13591-HADOOP-12756.002.patch
>
>



