[jira] [Assigned] (HADOOP-11626) Comment ReadStatistics to indicate that it tracks the actual read occurred

2022-08-30 Thread Lei (Eddy) Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu reassigned HADOOP-11626:
--

Assignee: (was: Lei (Eddy) Xu)

> Comment ReadStatistics to indicate that it tracks the actual read occurred
> --
>
> Key: HADOOP-11626
> URL: https://issues.apache.org/jira/browse/HADOOP-11626
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Priority: Trivial
> Attachments: HADOOP-11626.000.patch, HADOOP-11626.001.patch, 
> HADOOP-11626.002.patch
>
>
> In {{DFSInputStream#actualGetFromOneDataNode()}}, it updates the 
> {{ReadStatistics}} even if the read fails:
> {code}
> int nread = reader.readAll(buf, offset, len);
> updateReadStatistics(readStatistics, nread, reader);
> if (nread != len) {
>   throw new IOException("truncated return from reader.read(): " +
> "excpected " + len + ", got " + nread);
> }
> {code}
> It indicates that {{ReadStatistics}} tracks the reads that actually occurred. We 
> need to add a comment to {{ReadStatistics}} to make this clear.
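
A minimal sketch of the kind of Javadoc clarification being requested here; the 
wording is illustrative only, not the committed patch:

{code:java}
/**
 * Statistics about the reads issued on this stream.
 *
 * Note: the counters are updated with the number of bytes each underlying
 * read actually returned, even when that read is subsequently treated as
 * failed (for example a truncated read), so they track reads that actually
 * occurred rather than bytes requested by the caller.
 */
public class ReadStatistics {
  // existing counters and accessors unchanged
}
{code}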



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15205) maven release: missing source attachments for hadoop-mapreduce-client-core

2018-04-13 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16438126#comment-16438126
 ] 

Lei (Eddy) Xu edited comment on HADOOP-15205 at 4/14/18 3:39 AM:
-

Hi, [~shv]

If we run "mvn deploy -Psign -DskipTests" as suggested on 
https://wiki.apache.org/hadoop/HowToRelease, there is no source jars for all.

However, if run "mvn deploy -Psign -DskipTests -Dgpg.executable=gpg2 
-Pdist,src,yarn-ui -Dtar" seems to work, as the repository located below:

https://repository.apache.org/content/repositories/orgapachehadoop-1102/

"dev-support/bin/create-release --asfrelease --docker --dockercache"  seems 
work too.

Update:

Some packages have jars without sources:

https://repository.apache.org/content/repositories/orgapachehadoop-1102/org/apache/hadoop/hadoop-client-runtime/3.0.2/

But others have sources:

https://repository.apache.org/content/repositories/orgapachehadoop-1102/org/apache/hadoop/hadoop-hdfs-client/3.0.2/


was (Author: eddyxu):
Hi, [~shv]

If we run "mvn deploy -Psign -DskipTests" as suggested on 
https://wiki.apache.org/hadoop/HowToRelease, there is no source jars for all.

However, if run "mvn deploy -Psign -DskipTests -Dgpg.executable=gpg2 
-Pdist,src,yarn-ui -Dtar" seems to work, as the repository located below:

https://repository.apache.org/content/repositories/orgapachehadoop-1102/

"dev-support/bin/create-release --asfrelease --docker --dockercache"  seems 
work too.

> maven release: missing source attachments for hadoop-mapreduce-client-core
> --
>
> Key: HADOOP-15205
> URL: https://issues.apache.org/jira/browse/HADOOP-15205
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.5, 3.0.0
>Reporter: Zoltan Haindrich
>Priority: Major
>
> I wanted to use the source attachment; however, it looks like since 2.7.5 that 
> artifact is not present at Maven Central; the last release 
> which had source attachments / javadocs appears to be 2.7.4
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.5/
> this does not seem to be limited to mapreduce, as the same change is present for 
> yarn-common as well
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.5/
> and also hadoop-common
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.5/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/3.0.0/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15205) maven release: missing source attachments for hadoop-mapreduce-client-core

2018-04-13 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16438126#comment-16438126
 ] 

Lei (Eddy) Xu commented on HADOOP-15205:


Hi, [~shv]

If we run "mvn deploy -Psign -DskipTests" as suggested on 
https://wiki.apache.org/hadoop/HowToRelease, there is no source jars for all.

However, if run "mvn deploy -Psign -DskipTests -Dgpg.executable=gpg2 
-Pdist,src,yarn-ui -Dtar" seems to work, as the repository located below:

https://repository.apache.org/content/repositories/orgapachehadoop-1102/

"dev-support/bin/create-release --asfrelease --docker --dockercache"  seems 
work too.

> maven release: missing source attachments for hadoop-mapreduce-client-core
> --
>
> Key: HADOOP-15205
> URL: https://issues.apache.org/jira/browse/HADOOP-15205
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.5, 3.0.0
>Reporter: Zoltan Haindrich
>Priority: Major
>
> I wanted to use the source attachment; however, it looks like since 2.7.5 that 
> artifact is not present at Maven Central; the last release 
> which had source attachments / javadocs appears to be 2.7.4
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.5/
> this does not seem to be limited to mapreduce, as the same change is present for 
> yarn-common as well
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.5/
> and also hadoop-common
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.5/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/3.0.0/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-15368) Apache Hadoop release 3.0.2 to fix deploying shaded jars in artifacts.

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu resolved HADOOP-15368.

Resolution: Fixed

> Apache Hadoop release 3.0.2 to fix deploying shaded jars in artifacts. 
> ---
>
> Key: HADOOP-15368
> URL: https://issues.apache.org/jira/browse/HADOOP-15368
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Major
> Fix For: 3.0.2
>
>
> Apache Hadoop 3.0.1 was released with dummy shaded jars, like
> {code}
> Repository Path:  
> /org/apache/hadoop/hadoop-client-runtime/3.0.1/hadoop-client-runtime-3.0.1.jar
> Uploaded by:  lei
> Size: 44.47 KB
> Uploaded Date:Fri Mar 16 2018 15:50:42 GMT-0700 (PDT)
> Last Modified:Fri Mar 16 2018 15:50:42 GMT-0700 (PDT)
> {code}
> The community has agreed to release 3.0.2 on the same code base as 3.0.1, but 
> with shaded jars, to fix the artifacts.  During this process, we moved the 
> bug fixes with target version 3.0.2 to 3.0.3.  
> This JIRA also serves as the metadata for 3.0.2 release to generate 
> CHANGES.md and RELEASENOTE.md.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15368) Apache Hadoop release 3.0.2 to fix deploying shaded jars in artifacts.

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427709#comment-16427709
 ] 

Lei (Eddy) Xu commented on HADOOP-15368:


We should also fix https://wiki.apache.org/hadoop/HowToRelease.


> Apache Hadoop release 3.0.2 to fix deploying shaded jars in artifacts. 
> ---
>
> Key: HADOOP-15368
> URL: https://issues.apache.org/jira/browse/HADOOP-15368
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Major
> Fix For: 3.0.2
>
>
> Apache Hadoop 3.0.1 was released with dummy shaded jars, like
> {code}
> Repository Path:  
> /org/apache/hadoop/hadoop-client-runtime/3.0.1/hadoop-client-runtime-3.0.1.jar
> Uploaded by:  lei
> Size: 44.47 KB
> Uploaded Date:Fri Mar 16 2018 15:50:42 GMT-0700 (PDT)
> Last Modified:Fri Mar 16 2018 15:50:42 GMT-0700 (PDT)
> {code}
> The community has agreed to release 3.0.2 on the same code base as 3.0.1, but 
> with shaded jars, to fix the artifacts.  During this process, we moved the 
> bug fixes with target version 3.0.2 to 3.0.3.  
> This JIRA also serves as the metadata for 3.0.2 release to generate 
> CHANGES.md and RELEASENOTE.md.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15368) Apache Hadoop release 3.0.2 to fix deploying shaded jars in artifacts.

2018-04-05 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HADOOP-15368:
--

 Summary: Apache Hadoop release 3.0.2 to fix deploying shaded jars 
in artifacts. 
 Key: HADOOP-15368
 URL: https://issues.apache.org/jira/browse/HADOOP-15368
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.1
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
 Fix For: 3.0.2


Apache Hadoop 3.0.1 was released with dummy shaded jars, like

{code}
Repository Path:  
/org/apache/hadoop/hadoop-client-runtime/3.0.1/hadoop-client-runtime-3.0.1.jar
Uploaded by:  lei
Size: 44.47 KB
Uploaded Date:Fri Mar 16 2018 15:50:42 GMT-0700 (PDT)
Last Modified:Fri Mar 16 2018 15:50:42 GMT-0700 (PDT)
{code}

The community has agreed to release 3.0.2 on the same code base as 3.0.1, but 
with shaded jars, to fix the artifacts.  During this process, we moved the bug 
fixes with target version 3.0.2 to 3.0.3.  

This JIRA also serves as the metadata for 3.0.2 release to generate CHANGES.md 
and RELEASENOTE.md.
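
As a quick sanity check on the republished artifacts, one could verify that a 
downloaded hadoop-client-runtime jar really contains shaded classes instead of 
being a near-empty placeholder. This is only a sketch; the 
{{org/apache/hadoop/shaded/}} prefix is an assumption about the shading layout, 
not something stated in this JIRA.

{code:java}
import java.util.jar.JarFile;

public class ShadedJarCheck {
  public static void main(String[] args) throws Exception {
    // args[0]: path to e.g. hadoop-client-runtime-3.0.2.jar
    try (JarFile jar = new JarFile(args[0])) {
      long shadedClasses = jar.stream()
          .filter(e -> e.getName().startsWith("org/apache/hadoop/shaded/"))
          .filter(e -> e.getName().endsWith(".class"))
          .count();
      // A dummy jar (like the ~44 KB 3.0.1 artifact above) contains none.
      System.out.println("shaded classes found: " + shadedClasses);
    }
  }
}
{code}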




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15277) remove .FluentPropertyBeanIntrospector from CLI operation log output

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15277:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> remove .FluentPropertyBeanIntrospector from CLI operation log output
> 
>
> Key: HADOOP-15277
> URL: https://issues.apache.org/jira/browse/HADOOP-15277
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HADOOP-15277-001.patch
>
>
> When hadoop metrics is started, a message about bean introspection appears.
> {code}
> 18/03/01 18:43:54 INFO beanutils.FluentPropertyBeanIntrospector: Error when 
> creating PropertyDescriptor for public final void 
> org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)!
>  Ignoring this property.
> {code}
> When using wasb or s3a, this message appears in the client logs, because 
> they both start metrics.
> I propose to raise the log level to ERROR for that class in log4j.properties.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15263) hadoop cloud-storage module to mark hadoop-common as provided; add azure-datalake

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15263:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> hadoop cloud-storage module to mark hadoop-common as provided; add 
> azure-datalake
> -
>
> Key: HADOOP-15263
> URL: https://issues.apache.org/jira/browse/HADOOP-15263
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HADOOP-15263-001.patch
>
>
> Reviewing hadoop-cloud-storage module for use
> * we should cut out hadoop-common so that if something downstream is already 
> doing the heavy lifting of excluding it to get jackson & guava in sync, it's 
> not sneaking back in.
> * and add azure-datalake



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15279) increase maven heap size recommendations

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15279:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> increase maven heap size recommendations
> 
>
> Key: HADOOP-15279
> URL: https://issues.apache.org/jira/browse/HADOOP-15279
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 2.7.6, 3.0.3
>
> Attachments: HADOOP-15279.00.patch
>
>
> 1G is just a bit too low for JDK8+surefire 2.20+hdfs unit tests running in 
> parallel.  Bump it up a bit more.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15275) Incorrect javadoc for return type of RetryPolicy#shouldRetry

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15275:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> Incorrect javadoc for return type of RetryPolicy#shouldRetry
> 
>
> Key: HADOOP-15275
> URL: https://issues.apache.org/jira/browse/HADOOP-15275
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HADOOP-15275.000.patch
>
>
> The return type of {{RetryPolicy#shouldRetry}} has been changed from 
> {{boolean}} to {{RetryAction}}, but the javadoc is not updated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15296) Fix a wrong link for RBF in the top page

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15296:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> Fix a wrong link for RBF in the top page
> 
>
> Key: HADOOP-15296
> URL: https://issues.apache.org/jira/browse/HADOOP-15296
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HADOOP-15296.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14651) Update okhttp version to 2.7.5

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14651:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> Update okhttp version to 2.7.5
> --
>
> Key: HADOOP-14651
> URL: https://issues.apache.org/jira/browse/HADOOP-14651
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Major
> Fix For: 3.1.0, 2.9.1, 3.0.3
>
> Attachments: HADOOP-14651-branch-2.0.004.patch, 
> HADOOP-14651-branch-2.0.004.patch, HADOOP-14651-branch-3.0.004.patch, 
> HADOOP-14651-branch-3.0.004.patch, HADOOP-14651.001.patch, 
> HADOOP-14651.002.patch, HADOOP-14651.003.patch, HADOOP-14651.004.patch
>
>
> The current artifact is:
> com.squareup.okhttp:okhttp:2.4.0
> That version could either be bumped to 2.7.5 (the latest of that line), or we 
> could use the latest artifact:
> com.squareup.okhttp3:okhttp:3.8.1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15355) TestCommonConfigurationFields is broken by HADOOP-15312

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15355:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> TestCommonConfigurationFields is broken by HADOOP-15312
> ---
>
> Key: HADOOP-15355
> URL: https://issues.apache.org/jira/browse/HADOOP-15355
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.10.0
>Reporter: Konstantin Shvachko
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 3.0.3
>
> Attachments: HADOOP-15355.001.patch, HADOOP-15355.002.patch
>
>
> TestCommonConfigurationFields is failing after HADOOP-15312.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-12862:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> LDAP Group Mapping over SSL can not specify trust store
> ---
>
> Key: HADOOP-12862
> URL: https://issues.apache.org/jira/browse/HADOOP-12862
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 2.10.0, 2.9.1, 2.8.4, 2.7.6, 3.2.0, 3.1.1, 3.0.3
>
> Attachments: HADOOP-12862.001.patch, HADOOP-12862.002.patch, 
> HADOOP-12862.003.patch, HADOOP-12862.004.patch, HADOOP-12862.005.patch, 
> HADOOP-12862.006.patch, HADOOP-12862.007.patch, HADOOP-12862.008.patch, 
> HADOOP-12862.009.patch
>
>
> In a secure environment, SSL is used to encrypt LDAP request for group 
> mapping resolution.
> We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange.
> For background, the Hadoop NameNode, as an LDAP client, talks to an LDAP server 
> to resolve the group mapping of a user. In the case of LDAP over SSL, a 
> typical scenario is to establish one-way authentication (the client verifies 
> the server's certificate is real) by storing the server's certificate in the 
> client's truststore.
> A rarer scenario is to establish two-way authentication: in addition to storing 
> a truststore for the client to verify the server, the server also verifies the 
> client's certificate is real, and the client stores its own certificate in 
> its keystore.
> However, the current implementation for LDAP over SSL does not seem to be 
> correct in that it only configures a keystore but no truststore (so the LDAP 
> server can verify Hadoop's certificate, but Hadoop may not be able to verify 
> the LDAP server's certificate).
> I think there should be an extra pair of properties to specify the 
> truststore/password for the LDAP server, and use them to configure the system 
> properties {{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}}.
> I am a security layman so my words can be imprecise. But I hope this makes 
> sense.
> Oracle's SSL LDAP documentation: 
> http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html
> JSSE reference guide: 
> http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html
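
A minimal sketch of the proposed direction, assuming the new Hadoop properties 
would simply feed the standard JSSE system properties; the Hadoop key names below 
are hypothetical, and {{conf}} is an {{org.apache.hadoop.conf.Configuration}} 
assumed to be in scope:

{code:java}
// Hypothetical property names for illustration only.
String trustStore = conf.get("hadoop.security.group.mapping.ldap.ssl.truststore");
String trustStorePassword =
    conf.get("hadoop.security.group.mapping.ldap.ssl.truststore.password");
if (trustStore != null) {
  System.setProperty("javax.net.ssl.trustStore", trustStore);
}
if (trustStorePassword != null) {
  System.setProperty("javax.net.ssl.trustStorePassword", trustStorePassword);
}
// The LDAP context created afterwards will use this truststore to verify the
// LDAP server's certificate (one-way authentication).
{code}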



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15342) Update ADLS connector to use the current SDK version (2.2.7)

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15342:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> Update ADLS connector to use the current SDK version (2.2.7)
> 
>
> Key: HADOOP-15342
> URL: https://issues.apache.org/jira/browse/HADOOP-15342
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.3
>
> Attachments: HADOOP-15342.001.patch
>
>
> Updating the ADLS SDK connector to use the current version of the ADLS SDK 
> (2.2.7).
>  
> Changelist is here: 
> [https://github.com/Azure/azure-data-lake-store-java/blob/sdk2.2/CHANGES.md]
>  
> Short summary of what matters:
> A change to the MSI token acquisition interface, required by the change in the REST 
> interface to Azure Active Directory's VM MSI interface, and improved 
> diagnostics in the SDK for token acquisition failures (better exception 
> message and log message). The diagnostics were requested in HADOOP-15188.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14742) Document multi-URI replication Inode for ViewFS

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14742:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> Document multi-URI replication Inode for ViewFS
> ---
>
> Key: HADOOP-14742
> URL: https://issues.apache.org/jira/browse/HADOOP-14742
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation, viewfs
>Affects Versions: 3.0.0-beta1
>Reporter: Chris Douglas
>Assignee: Gera Shegalov
>Priority: Major
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HADOOP-14742.001.patch, HADOOP-14742.002.patch
>
>
> HADOOP-12077 added client-side "replication" capabilities to ViewFS. Its 
> semantics and configuration should be documented.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15308) TestConfiguration fails on Windows because of paths

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15308:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> TestConfiguration fails on Windows because of paths
> ---
>
> Key: HADOOP-15308
> URL: https://issues.apache.org/jira/browse/HADOOP-15308
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HADOOP-15308.000.patch, HADOOP-15308.001.patch, 
> HADOOP-15308.002.patch
>
>
> We are seeing multiple failures with:
> {code}
> Illegal character in authority at index 7: 
> file://C:\_work\10\s\hadoop-common-project\hadoop-common\.\test-config-uri-TestConfiguration.xml
> {code}
> We do not seem to be handling the colon in the drive path properly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15262) AliyunOSS: move files under a directory in parallel when rename a directory

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15262:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> AliyunOSS: move files under a directory in parallel when rename a directory
> ---
>
> Key: HADOOP-15262
> URL: https://issues.apache.org/jira/browse/HADOOP-15262
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Fix For: 2.10.0, 2.9.1, 3.2.0, 3.0.3
>
> Attachments: HADOOP-15262-branch-2.001.patch, HADOOP-15262.001.patch, 
> HADOOP-15262.002.patch, HADOOP-15262.003.patch, HADOOP-15262.004.patch, 
> HADOOP-15262.005.patch, HADOOP-15262.006.patch, HADOOP-15262.007.patch
>
>
> Currently, the rename() operation renames files serially. This will be slow if a 
> directory contains many files, so we can improve it by renaming files in 
> parallel.
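
A generic sketch of the parallel-rename idea, not the actual AliyunOSS 
implementation; {{listKeysUnder}}, {{copyThenDelete}} and {{renameParallelism}} 
are hypothetical placeholders:

{code:java}
ExecutorService pool = Executors.newFixedThreadPool(renameParallelism);
List<Future<?>> futures = new ArrayList<>();
for (String srcKey : listKeysUnder(srcDir)) {
  String dstKey = dstDir + srcKey.substring(srcDir.length());
  // Object stores implement rename as a copy followed by a delete per object.
  futures.add(pool.submit(() -> copyThenDelete(srcKey, dstKey)));
}
for (Future<?> f : futures) {
  f.get();  // surfaces the first per-object failure as an exception
}
pool.shutdown();
{code}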



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15312) Undocumented KeyProvider configuration keys

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15312:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> Undocumented KeyProvider configuration keys
> ---
>
> Key: HADOOP-15312
> URL: https://issues.apache.org/jira/browse/HADOOP-15312
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.0.3
>
> Attachments: HADOOP-15312.001.patch, HADOOP-15312.002.patch, 
> HADOOP-15312.003.patch
>
>
> Via HADOOP-14445, I found two undocumented configuration keys: 
> hadoop.security.key.default.bitlength and hadoop.security.key.default.cipher
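
For reference, a short sketch of how these keys are consumed by client code; the 
default values shown are assumptions for illustration, not a statement of the 
documented defaults:

{code:java}
Configuration conf = new Configuration();
int bitLength = conf.getInt("hadoop.security.key.default.bitlength", 128);
String cipher = conf.get("hadoop.security.key.default.cipher", "AES/CTR/NoPadding");
// These values end up in the default KeyProvider.Options used when a key is
// created without an explicit cipher or length.
{code}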



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15317) Improve NetworkTopology chooseRandom's loop

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15317:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> Improve NetworkTopology chooseRandom's loop
> ---
>
> Key: HADOOP-15317
> URL: https://issues.apache.org/jira/browse/HADOOP-15317
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 2.10.0, 2.8.4, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HADOOP-15317.01.patch, HADOOP-15317.02.patch, 
> HADOOP-15317.03.patch, HADOOP-15317.04.patch, HADOOP-15317.05.patch, 
> HADOOP-15317.06.patch, Screen Shot 2018-03-28 at 7.23.32 PM.png
>
>
> Recently we found a postmortem case where the ANN seems to be in an infinite 
> loop. From the logs it seems it just went through a rolling restart, and DNs 
> are getting registered.
> Later the NN became unresponsive, and from the stacktrace it's inside a 
> do-while loop inside {{NetworkTopology#chooseRandom}} - part of what's done 
> in HDFS-10320.
> Going through the code and logs I'm not able to come up with any theory for 
> why this is happening (I thought about incorrect locking, or the Node object 
> being modified outside of NetworkTopology; both seem impossible), but we 
> should eliminate this loop.
> stacktrace:
> {noformat}
>  Stack:
> java.util.HashMap.hash(HashMap.java:338)
> java.util.HashMap.containsKey(HashMap.java:595)
> java.util.HashSet.contains(HashSet.java:203)
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:786)
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:732)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseDataNode(BlockPlacementPolicyDefault.java:757)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:692)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:666)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:573)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:461)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:368)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:243)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:115)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4AdditionalDatanode(BlockManager.java:1596)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:3599)
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:717)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15334) Upgrade Maven surefire plugin

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15334:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> Upgrade Maven surefire plugin
> -
>
> Key: HADOOP-15334
> URL: https://issues.apache.org/jira/browse/HADOOP-15334
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HADOOP-15334.01.patch
>
>
> Recent versions of the surefire plugin suppress summary test execution output 
> in quiet mode. This is now fixed in plugin version 2.21.0 (via SUREFIRE-1436).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13972) ADLS to support per-store configuration

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-13972:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> ADLS to support per-store configuration
> ---
>
> Key: HADOOP-13972
> URL: https://issues.apache.org/jira/browse/HADOOP-13972
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: Sharad Sonker
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 3.0.3
>
>
> Useful when distcp needs to access 2 Data Lake stores with different SPIs.
> Of course, a workaround is to grant the same SPI access permission to both 
> stores, but sometimes it might not be feasible.
> One idea is to embed the store name in the configuration property names, 
> e.g., {{dfs.adls.oauth2.<store>.client.id}}. Per-store keys will be consulted 
> first, then fall back to the global keys.
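
A minimal sketch of the fallback lookup described above; the property names and 
the {{store}} value are illustrative, and {{conf}} is an 
{{org.apache.hadoop.conf.Configuration}}:

{code:java}
String store = "mystore";  // hypothetical ADLS store name
String perStoreKey = "dfs.adls.oauth2." + store + ".client.id";
String globalKey = "dfs.adls.oauth2.client.id";
// Consult the per-store key first, then fall back to the global key.
String clientId = conf.get(perStoreKey, conf.get(globalKey));
{code}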



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15311) HttpServer2 needs a way to configure the acceptor/selector count

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15311:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> HttpServer2 needs a way to configure the acceptor/selector count
> 
>
> Key: HADOOP-15311
> URL: https://issues.apache.org/jira/browse/HADOOP-15311
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HADOOP-15311.000.patch, HADOOP-15311.001.patch, 
> HADOOP-15311.002.patch
>
>
> HttpServer2 starts up with some number of acceptors and selectors, but only 
> allows for the automatic configuration of these based off of the number of 
> available cores:
> {code:title=org.eclipse.jetty.server.ServerConnector}
> selectors > 0 ? selectors : Math.max(1, Math.min(4, 
> Runtime.getRuntime().availableProcessors() / 2)))
> {code}
> {code:title=org.eclipse.jetty.server.AbstractConnector}
> if (acceptors < 0) {
>   acceptors = Math.max(1, Math.min(4, cores / 8));
> }
> {code}
> A thread pool is started of size, at minimum, {{acceptors + selectors + 1}}, 
> so in addition to allowing for a higher tuning value under heavily loaded 
> environments, adding configurability for this enables tuning these values 
> down in resource constrained environments such as a MiniDFSCluster.
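
A sketch of what the configurability could look like; the configuration key names 
are assumptions, while the Jetty constructor shown is the standard way to override 
the defaults quoted above ({{server}}, {{conf}} and {{port}} are assumed to be in 
scope):

{code:java}
// -1 lets Jetty fall back to its own core-count heuristics quoted above.
int acceptors = conf.getInt("hadoop.http.acceptor.count", -1);   // illustrative key
int selectors = conf.getInt("hadoop.http.selector.count", -1);   // illustrative key
ServerConnector connector = new ServerConnector(server, acceptors, selectors);
connector.setPort(port);
server.addConnector(connector);
{code}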



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15206) BZip2 drops and duplicates records when input split size is small

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15206:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> BZip2 drops and duplicates records when input split size is small
> -
>
> Key: HADOOP-15206
> URL: https://issues.apache.org/jira/browse/HADOOP-15206
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.3, 3.0.0
>Reporter: Aki Tanaka
>Assignee: Aki Tanaka
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 2.7.6, 3.0.3
>
> Attachments: HADOOP-15206-test.patch, HADOOP-15206.001.patch, 
> HADOOP-15206.002.patch, HADOOP-15206.003.patch, HADOOP-15206.004.patch, 
> HADOOP-15206.005.patch, HADOOP-15206.006.patch, HADOOP-15206.007.patch, 
> HADOOP-15206.008.patch
>
>
> BZip2 can drop and duplicate records when the input split size is small. I 
> confirmed that this issue happens when the input split size is between 1 byte 
> and 4 bytes.
> I am seeing the following 2 problem behaviors.
>  
> 1. Drop record:
> BZip2 skips the first record in the input file when the input split size is 
> small
>  
> Set the split size to 3 and tested to load 100 records (0, 1, 2..99)
> {code:java}
> 2018-02-01 10:52:33,502 INFO  [Thread-17] mapred.TestTextInputFormat 
> (TestTextInputFormat.java:verifyPartitions(317)) - 
> splits[1]=file:/work/count-mismatch2/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/test-dir/TestTextInputFormat/test.bz2:3+3
>  count=99{code}
> > The input format read only 99 records but not 100 records
>  
> 2. Duplicate Record:
> 2 input splits have the same BZip2 records when the input split size is small
>  
> Set the split size to 1 and tested to load 100 records (0, 1, 2..99)
>  
> {code:java}
> 2018-02-01 11:18:49,309 INFO [Thread-17] mapred.TestTextInputFormat 
> (TestTextInputFormat.java:verifyPartitions(318)) - splits[3]=file 
> /work/count-mismatch2/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/test-dir/TestTextInputFormat/test.bz2:3+1
>  count=99
> 2018-02-01 11:18:49,310 WARN [Thread-17] mapred.TestTextInputFormat 
> (TestTextInputFormat.java:verifyPartitions(308)) - conflict with 1 in split 4 
> at position 8
> {code}
>  
> I experienced this error when I executed a Spark (SparkSQL) job under the 
> following conditions:
> * The file size of the input files are small (around 1KB)
> * Hadoop cluster has many slave nodes (able to launch many executor tasks)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15267) S3A multipart upload fails when SSE-C encryption is enabled

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15267:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> S3A multipart upload fails when SSE-C encryption is enabled
> ---
>
> Key: HADOOP-15267
> URL: https://issues.apache.org/jira/browse/HADOOP-15267
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0, 3.1.0
> Environment: Hadoop 3.1 Snapshot
>Reporter: Anis Elleuch
>Assignee: Anis Elleuch
>Priority: Critical
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HADOOP-15267-001.patch, HADOOP-15267-002.patch, 
> HADOOP-15267-003.patch
>
>
> When I enable SSE-C encryption in Hadoop 3.1 and set  fs.s3a.multipart.size 
> to 5 Mb, storing data in AWS doesn't work anymore. For example, running the 
> following code:
> {code}
> >>> df1 = spark.read.json('/home/user/people.json')
> >>> df1.write.mode("overwrite").json("s3a://testbucket/people.json")
> {code}
> shows the following exception:
> {code:java}
> com.amazonaws.services.s3.model.AmazonS3Exception: The multipart upload 
> initiate requested encryption. Subsequent part requests must include the 
> appropriate encryption parameters.
> {code}
> After some investigation, I discovered that hadoop-aws doesn't send SSE-C 
> headers in Put Object Part as stated in AWS specification: 
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html]
> {code:java}
> If you requested server-side encryption using a customer-provided encryption 
> key in your initiate multipart upload request, you must provide identical 
> encryption information in each part upload using the following headers.
> {code}
>  
> You can find a patch attached to this issue for a better clarification of the 
> problem.
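
A sketch of the fix direction, not the attached patch: the same SSE-C key supplied 
when the multipart upload is initiated must also be attached to every part upload 
({{s3}}, {{bucket}}, {{key}}, {{partFile}} and {{base64EncodedCustomerKey}} are 
assumed to be in scope):

{code:java}
SSECustomerKey sseKey = new SSECustomerKey(base64EncodedCustomerKey);

InitiateMultipartUploadRequest init =
    new InitiateMultipartUploadRequest(bucket, key)
        .withSSECustomerKey(sseKey);
String uploadId = s3.initiateMultipartUpload(init).getUploadId();

UploadPartRequest part = new UploadPartRequest()
    .withBucketName(bucket)
    .withKey(key)
    .withUploadId(uploadId)
    .withPartNumber(1)
    .withFile(partFile)
    .withSSECustomerKey(sseKey);   // without this, AWS rejects the part upload
s3.uploadPart(part);
{code}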



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9747) Reduce unnecessary UGI synchronization

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-9747:
--
Fix Version/s: (was: 3.0.2)
   3.0.3

> Reduce unnecessary UGI synchronization
> --
>
> Key: HADOOP-9747
> URL: https://issues.apache.org/jira/browse/HADOOP-9747
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HADOOP-9747-trunk-03.patch, HADOOP-9747-trunk-04.patch, 
> HADOOP-9747-trunk.01.patch, HADOOP-9747-trunk.02.patch, 
> HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch, 
> HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch
>
>
> Jstacks of heavily loaded NNs show up to dozens of threads blocking in the 
> UGI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15273) distcp can't handle remote stores with different checksum algorithms

2018-04-04 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15273:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> distcp can't handle remote stores with different checksum algorithms
> 
>
> Key: HADOOP-15273
> URL: https://issues.apache.org/jira/browse/HADOOP-15273
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HADOOP-15273-001.patch, HADOOP-15273-002.patch, 
> HADOOP-15273-003.patch
>
>
> When using distcp without {{-skipcrcchecks}}, if there's a checksum mismatch 
> between src and dest store types (e.g. hdfs to s3), then the error message 
> will talk about blocksize, even when it's the underlying checksum protocol 
> itself which is the cause of the failure:
> bq. Source and target differ in block-size. Use -pb to preserve block-sizes 
> during copy. Alternatively, skip checksum-checks altogether, using -skipCrc. 
> (NOTE: By skipping checksums, one runs the risk of masking data-corruption 
> during file-transfer.)
> update: the CRC check always takes place on a distcp upload before the file 
> is renamed into place, *and you can't disable it then*.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15289) FileStatus.readFields() assertion incorrect

2018-04-04 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15289:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> FileStatus.readFields() assertion incorrect
> ---
>
> Key: HADOOP-15289
> URL: https://issues.apache.org/jira/browse/HADOOP-15289
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HADOOP-15289-001.patch
>
>
> As covered in HBASE-20123 ("Backup test fails against hadoop 3"), I think 
> the assert at the end of {{FileStatus.readFields()}} is wrong; if you run the 
> code with asserts enabled against a directory, an IOE will get raised.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15040) Upgrade AWS SDK to 1.11.271: NPE bug spams logs w/ Yarn Log Aggregation

2018-04-04 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15040:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> Upgrade AWS SDK to 1.11.271: NPE bug spams logs w/ Yarn Log Aggregation
> ---
>
> Key: HADOOP-15040
> URL: https://issues.apache.org/jira/browse/HADOOP-15040
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Blocker
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HADOOP-15040.001.patch
>
>
> My colleagues working with Yarn log aggregation found that they were getting 
> this message spammed in their logs when they used an s3a:// URI for logs 
> (yarn.nodemanager.remote-app-log-dir):
> {noformat}
> getting attribute Region of com.amazonaws.management:type=AwsSdkMetrics threw 
> an exception
> javax.management.RuntimeMBeanException: java.lang.NullPointerException
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
>   at 
> 
> Caused by: java.lang.NullPointerException
>   at com.amazonaws.metrics.AwsSdkMetrics.getRegion(AwsSdkMetrics.java:729)
>   at com.amazonaws.metrics.MetricAdmin.getRegion(MetricAdmin.java:67)
>   at sun.reflect.GeneratedMethodAccessor132.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
> {noformat}
> This happens even though the aws sdk cloudwatch metrics reporting was 
> disabled (default), which is a bug. 
> I filed a [github issue|https://github.com/aws/aws-sdk-java/issues/1375] and 
> it looks like a fix should be coming around SDK release 1.11.229 or so.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15317) Improve NetworkTopology chooseRandom's loop

2018-04-02 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423267#comment-16423267
 ] 

Lei (Eddy) Xu commented on HADOOP-15317:


+1. Thanks [~xiaochen]!

> Improve NetworkTopology chooseRandom's loop
> ---
>
> Key: HADOOP-15317
> URL: https://issues.apache.org/jira/browse/HADOOP-15317
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-15317.01.patch, HADOOP-15317.02.patch, 
> HADOOP-15317.03.patch, HADOOP-15317.04.patch, HADOOP-15317.05.patch, 
> HADOOP-15317.06.patch, Screen Shot 2018-03-28 at 7.23.32 PM.png
>
>
> Recently we found a postmortem case where the ANN seems to be in an infinite 
> loop. From the logs it seems it just went through a rolling restart, and DNs 
> are getting registered.
> Later the NN became unresponsive, and from the stacktrace it's inside a 
> do-while loop inside {{NetworkTopology#chooseRandom}} - part of what's done 
> in HDFS-10320.
> Going through the code and logs I'm not able to come up with any theory for 
> why this is happening (I thought about incorrect locking, or the Node object 
> being modified outside of NetworkTopology; both seem impossible), but we 
> should eliminate this loop.
> stacktrace:
> {noformat}
>  Stack:
> java.util.HashMap.hash(HashMap.java:338)
> java.util.HashMap.containsKey(HashMap.java:595)
> java.util.HashSet.contains(HashSet.java:203)
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:786)
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:732)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseDataNode(BlockPlacementPolicyDefault.java:757)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:692)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:666)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:573)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:461)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:368)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:243)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:115)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4AdditionalDatanode(BlockManager.java:1596)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:3599)
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:717)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15317) Improve NetworkTopology chooseRandom's loop

2018-03-27 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416436#comment-16416436
 ] 

Lei (Eddy) Xu commented on HADOOP-15317:


Thanks for working on this, [~xiaochen]

A few comments:

{code}
int nthValidToReturn = r.nextInt(availableNodes);
LOG.debug("nthValidToReturn is {}", nthValidToReturn);
if (nthValidToReturn < 0) {
  return null;
}
{code}

{{nthValidToReturn}} won't be negative, so we don't need to check it here.

{code}
assert numInScopeNodes >= availableNodes && availableNodes > 0;
{code}

* Can you use Preconditions with error messages (see the sketch after these 
comments)? An assert statement might be compiled out in production.

* Also, I'd suggest not putting {{Very likely a bug}} in the LOG message. 

* Can you add some proof of when {{if (ret == null && lastValidNode != null)}} 
can happen?

{code}
for (int i = 0; i < numInScopeNodes; ++i) {
   ret = parentNode.getLeaf(i, excludedScopeNode);
{code}

Does the above mean that it always starts from {{i=0}}?
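
For the Preconditions suggestion above, a minimal sketch of what the check could 
look like (the message text is illustrative):

{code:java}
Preconditions.checkArgument(
    numInScopeNodes >= availableNodes && availableNodes > 0,
    "numInScopeNodes=%s must be >= availableNodes=%s, and availableNodes must be > 0",
    numInScopeNodes, availableNodes);
{code}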

> Improve NetworkTopology chooseRandom's loop
> ---
>
> Key: HADOOP-15317
> URL: https://issues.apache.org/jira/browse/HADOOP-15317
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-15317.01.patch, HADOOP-15317.02.patch, 
> HADOOP-15317.03.patch, HADOOP-15317.04.patch
>
>
> Recently we found a postmortem case where the ANN seems to be in an infinite 
> loop. From the logs it seems it just went through a rolling restart, and DNs 
> are getting registered.
> Later the NN became unresponsive, and from the stacktrace it's inside a 
> do-while loop inside {{NetworkTopology#chooseRandom}} - part of what's done 
> in HDFS-10320.
> Going through the code and logs I'm not able to come up with any theory for 
> why this is happening (I thought about incorrect locking, or the Node object 
> being modified outside of NetworkTopology; both seem impossible), but we 
> should eliminate this loop.
> stacktrace:
> {noformat}
>  Stack:
> java.util.HashMap.hash(HashMap.java:338)
> java.util.HashMap.containsKey(HashMap.java:595)
> java.util.HashSet.contains(HashSet.java:203)
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:786)
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:732)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseDataNode(BlockPlacementPolicyDefault.java:757)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:692)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:666)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:573)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:461)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:368)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:243)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:115)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4AdditionalDatanode(BlockManager.java:1596)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:3599)
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:717)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14396) Add builder interface to FileContext

2018-02-15 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366477#comment-16366477
 ] 

Lei (Eddy) Xu commented on HADOOP-14396:


{{TestCopyPreserveFlag}} is not relevant, as it does not use the builder API, 
and it passed locally on my laptop.

> Add builder interface to FileContext
> 
>
> Key: HADOOP-14396
> URL: https://issues.apache.org/jira/browse/HADOOP-14396
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Major
> Attachments: HADOOP-14396.00.patch, HADOOP-14396.01.patch, 
> HADOOP-14396.02.patch
>
>
> Add builder interface for {{FileContext#create}} and {{FileContext#append}}.
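
A hypothetical usage sketch of the builder-style API being added; the exact 
builder method names are assumptions here, not the final API:

{code:java}
FileContext fc = FileContext.getFileContext();
Path path = new Path("/tmp/builder-demo.txt");
try (FSDataOutputStream out = fc.create(path)
        .overwrite(true)
        .recursive()        // create missing parent directories
        .build()) {
  out.writeUTF("hello");
}
{code}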



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14396) Add builder interface to FileContext

2018-02-15 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366309#comment-16366309
 ] 

Lei (Eddy) Xu commented on HADOOP-14396:


Updated the patch to address checkstyle and javadoc warnings.

> Add builder interface to FileContext
> 
>
> Key: HADOOP-14396
> URL: https://issues.apache.org/jira/browse/HADOOP-14396
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Major
> Attachments: HADOOP-14396.00.patch, HADOOP-14396.01.patch, 
> HADOOP-14396.02.patch
>
>
> Add builder interface for {{FileContext#create}} and {{FileContext#append}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14396) Add builder interface to FileContext

2018-02-15 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14396:
---
Attachment: HADOOP-14396.02.patch

> Add builder interface to FileContext
> 
>
> Key: HADOOP-14396
> URL: https://issues.apache.org/jira/browse/HADOOP-14396
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Major
> Attachments: HADOOP-14396.00.patch, HADOOP-14396.01.patch, 
> HADOOP-14396.02.patch
>
>
> Add builder interface for {{FileContext#create}} and {{FileContext#append}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14396) Add builder interface to FileContext

2018-02-15 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366189#comment-16366189
 ] 

Lei (Eddy) Xu edited comment on HADOOP-14396 at 2/15/18 7:58 PM:
-

Updated the patch to address reviews:

bq. FSDataOutputStreamBuilder line 79: fs = null when built from file context. 
Later in getFS() there is Preconditions.checkNotNull(fs); So, the expectation 
is no one should be calling getFS() when it is constructed from FileContext ?

If using {{FileContext}}, the fs field is not used.  Yes, no one should call 
{{getFS()}} if it is constructed from FileContext. 

bq.  Why do we need an additional "donotCreateParent" option? 

It is not necessary indeed. Removed in the new patch.

bq. Can you please extend the test with few build() options like recursive / 
progress added ?

Done

bq. There is an annotation @Nonnull FileContext fc and also later 
Preconditions.checkNotNull(fc); Is the later needed in the constructor?

My understanding is that @Nonnull is a compile-time check, while the 
Preconditions call is a runtime check. Given that this is a public API, the 
runtime check still protects callers that use the HDFS client as a library.

The checkstyle and warning reports have already been deleted from Yetus; I will 
fix any remaining warnings in the new build. A minimal illustration of the 
compile-time vs. runtime check discussed above follows.
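
To make the compile-time vs. runtime distinction concrete, here is a minimal 
sketch (not the actual patch; the class and field names are hypothetical):

{code:java}
import javax.annotation.Nonnull;

import com.google.common.base.Preconditions;

class NullCheckSketch {
  private final Object fc;

  // @Nonnull is advisory: IDEs and static-analysis tools can flag obvious
  // null arguments at compile time, but nothing is enforced at runtime.
  NullCheckSketch(@Nonnull Object fc) {
    // Preconditions.checkNotNull fails fast with an NPE at runtime, which
    // still matters when callers use the client as a library and bypass
    // any annotation-based tooling.
    this.fc = Preconditions.checkNotNull(fc, "FileContext must not be null");
  }
}
{code}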


was (Author: eddyxu):
Updated the patch to address reviews:

bq. FSDataOutputStreamBuilder line 79: fs = null when built from file context. 
Later in getFS() there is Preconditions.checkNotNull(fs); So, the expectation 
is no one should be calling getFS() when it is constructed from FileContext ?

If using {{FileContext}}, the fs field is not used.  Yes, no one should call 
{{getFS()}} if it is constructed from FileContext. 

bq.  Why do we need an additional "donotCreateParent" option? 

It is not necessary indeed. Removed in the new patch.

bq. Can you please extend the test with few build() options like recursive / 
progress added ?

Done

bq. There is an annotation @Nonnull FileContext fc and also later 
Preconditions.checkNotNull(fc); Is the later needed in the constructor?

My understanding is that @Nonnull is a compile-time check, while the 
Preconditions call is a runtime check. Given that this is a public API, the 
runtime check still protects callers that use the HDFS client as a library.

> Add builder interface to FileContext
> 
>
> Key: HADOOP-14396
> URL: https://issues.apache.org/jira/browse/HADOOP-14396
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Major
> Attachments: HADOOP-14396.00.patch, HADOOP-14396.01.patch
>
>
> Add builder interface for {{FileContext#create}} and {{FileContext#append}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14396) Add builder interface to FileContext

2018-02-15 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14396:
---
Status: Patch Available  (was: Open)

Updated the patch to address reviews:

bq. FSDataOutputStreamBuilder line 79: fs = null when built from file context. 
Later in getFS() there is Preconditions.checkNotNull(fs); So, the expectation 
is no one should be calling getFS() when it is constructed from FileContext ?

If using {{FileContext}}, the fs field is not used.  Yes, no one should call 
{{getFS()}} if it is constructed from FileContext. 

bq.  Why do we need an additional "donotCreateParent" option? 

It is not necessary indeed. Removed in the new patch.

bq. Can you please extend the test with few build() options like recursive / 
progress added ?

Done

bq. There is an annotation @Nonnull FileContext fc and also later 
Preconditions.checkNotNull(fc); Is the later needed in the constructor?

My understanding is that @Nonnull is a compile-time check, while the 
Preconditions call is a runtime check. Given that this is a public API, the 
runtime check still protects callers that use the HDFS client as a library.

> Add builder interface to FileContext
> 
>
> Key: HADOOP-14396
> URL: https://issues.apache.org/jira/browse/HADOOP-14396
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3, 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Major
> Attachments: HADOOP-14396.00.patch, HADOOP-14396.01.patch
>
>
> Add builder interface for {{FileContext#create}} and {{FileContext#append}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14396) Add builder interface to FileContext

2018-02-15 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14396:
---
Attachment: HADOOP-14396.01.patch

> Add builder interface to FileContext
> 
>
> Key: HADOOP-14396
> URL: https://issues.apache.org/jira/browse/HADOOP-14396
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Major
> Attachments: HADOOP-14396.00.patch, HADOOP-14396.01.patch
>
>
> Add builder interface for {{FileContext#create}} and {{FileContext#append}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14060) HTTP servlet /logs should require authentication and authorization

2018-02-15 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14060:
---
Priority: Critical  (was: Blocker)

> HTTP servlet /logs should require authentication and authorization
> --
>
> Key: HADOOP-14060
> URL: https://issues.apache.org/jira/browse/HADOOP-14060
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.0.0-alpha4
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
> Attachments: HADOOP-14060-tmp.001.patch
>
>
> HADOOP-14047 makes KMS call {{HttpServer2#setACL}}. Access control works fine 
> for /conf, /jmx, /logLevel, and /stacks, but not for /logs.
> The code in {{AdminAuthorizedServlet#doGet}} for /logs and 
> {{ConfServlet#doGet}} for /conf is quite similar. This makes me believe that 
> /logs should be subject to the same access control as intended by the original 
> developer.
> IMHO this could either be my misconfiguration or there is a bug somewhere in 
> {{HttpServer2}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14396) Add builder interface to FileContext

2018-02-15 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366046#comment-16366046
 ] 

Lei (Eddy) Xu commented on HADOOP-14396:


[~leftnoteasy] Sure, I can do it.

[~ste...@apache.org] Have you started working on it, or will you be reviewing 
it? I'd like to make sure our efforts don't overlap :)


> Add builder interface to FileContext
> 
>
> Key: HADOOP-14396
> URL: https://issues.apache.org/jira/browse/HADOOP-14396
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Blocker
> Attachments: HADOOP-14396.00.patch
>
>
> Add builder interface for {{FileContext#create}} and {{FileContext#append}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15112) create-release didn't sign artifacts

2018-02-12 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15112:
---
Status: Patch Available  (was: Open)

> create-release didn't sign artifacts
> 
>
> Key: HADOOP-15112
> URL: https://issues.apache.org/jira/browse/HADOOP-15112
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Lei (Eddy) Xu
>Priority: Major
> Attachments: HADOOP-15112.01.patch
>
>
> While building the 3.0.0 RC1, I had to re-invoke Maven because the 
> create-release script didn't deploy signatures to Nexus. Looking at the repo 
> (and my artifacts), it seems like "sign" didn't run properly.
> I lost my create-release output, but I noticed that it will log and continue 
> rather than abort in some error conditions. This might have caused my lack of 
> signatures. IMO it'd be better to explicitly fail in these situations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15112) create-release didn't sign artifacts

2018-02-12 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15112:
---
Attachment: HADOOP-15112.01.patch

> create-release didn't sign artifacts
> 
>
> Key: HADOOP-15112
> URL: https://issues.apache.org/jira/browse/HADOOP-15112
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Lei (Eddy) Xu
>Priority: Major
> Attachments: HADOOP-15112.01.patch
>
>
> While building the 3.0.0 RC1, I had to re-invoke Maven because the 
> create-release script didn't deploy signatures to Nexus. Looking at the repo 
> (and my artifacts), it seems like "sign" didn't run properly.
> I lost my create-release output, but I noticed that it will log and continue 
> rather than abort in some error conditions. This might have caused my lack of 
> signatures. IMO it'd be better to explicitly fail in these situations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14060) HTTP servlet /logs should require authentication and authorization

2018-02-09 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16359128#comment-16359128
 ] 

Lei (Eddy) Xu commented on HADOOP-14060:


Hi, [~daryn], [~kihwal], do you have an estimate of when this will be done?

Thanks!

> HTTP servlet /logs should require authentication and authorization
> --
>
> Key: HADOOP-14060
> URL: https://issues.apache.org/jira/browse/HADOOP-14060
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.0.0-alpha4
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Blocker
> Attachments: HADOOP-14060-tmp.001.patch
>
>
> HADOOP-14047 makes KMS call {{HttpServer2#setACL}}. Access control works fine 
> for /conf, /jmx, /logLevel, and /stacks, but not for /logs.
> The code in {{AdminAuthorizedServlet#doGet}} for /logs and 
> {{ConfServlet#doGet}} for /conf is quite similar. This makes me believe that 
> /logs should be subject to the same access control as intended by the original 
> developer.
> IMHO this could either be my misconfiguration or there is a bug somewhere in 
> {{HttpServer2}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15069) support git-secrets commit hook to keep AWS secrets out of git

2018-02-08 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15069:
---
Target Version/s: 2.8.3, 3.1.0, 2.9.1, 3.0.2  (was: 2.8.3, 3.1.0, 2.9.1, 
3.0.1)

> support git-secrets commit hook to keep AWS secrets out of git
> --
>
> Key: HADOOP-15069
> URL: https://issues.apache.org/jira/browse/HADOOP-15069
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15069-001.patch, HADOOP-15069-002.patch
>
>
> The latest Uber breach looks like it involved AWS keys in git repos.
> Nobody wants that, which is why amazon provide 
> [git-secrets|https://github.com/awslabs/git-secrets]; a script you can use to 
> scan a repo and its history, *and* add as an automated check.
> Anyone can set this up, but there are a few false positives in the scan, 
> mostly from longs and a few all-upper-case constants. These can all be added 
> to a .gitignore file.
> Also: mention git-secrets in the aws testing docs; say "use it"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15112) create-release didn't sign artifacts

2018-02-08 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16357694#comment-16357694
 ] 

Lei (Eddy) Xu commented on HADOOP-15112:


I ran this on an Ubuntu 16.04 machine with {{gnupg-agent 2.1.11-6ubuntu2}}.

{{GPG_AGENT_INFO}} is not set after running the following code:

{code:sh|title=dev-support/bin/create-release}
eval $("${GPGAGENT}" --daemon \
--options "${LOGDIR}/gpgagent.conf" \
--log-file="${LOGDIR}/create-release-gpgagent.log")
{code}

because {{gnupg-agent}} > 2.1 no longer sets this variable: 
https://www.gnupg.org/faq/whats-new-in-2.1.html#autostart.

{{create-release}} checks for the existence of {{GPG_AGENT_INFO}} before 
signing artifacts, so it silently skips the signing process:

{code:sh|title=dev-support/bin/create-release}
 if [[ -n "${GPG_AGENT_INFO}" ]]; then
  echo "Warming the gpg-agent cache prior to calling maven"
  # warm the agent's cache:
  touch "${LOGDIR}/warm"
  ${GPG} --use-agent --armor --output "${LOGDIR}/warm.asc" --detach-sig 
"${LOGDIR}/warm"
  rm "${LOGDIR}/warm.asc" "${LOGDIR}/warm"
else
  SIGN=false
  hadoop_error "ERROR: Unable to launch or acquire gpg-agent. Disable 
signing."
fi
{code}

[~mackrorysd] [~andrew.wang] [~aw] I would like to hear your input here. Should 
we check the gpg-agent version before this step, or just change how we invoke 
{{gpg > 2.1}}? gpg 2.1 was released in Nov 2014.



> create-release didn't sign artifacts
> 
>
> Key: HADOOP-15112
> URL: https://issues.apache.org/jira/browse/HADOOP-15112
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Lei (Eddy) Xu
>Priority: Major
>
> While building the 3.0.0 RC1, I had to re-invoke Maven because the 
> create-release script didn't deploy signatures to Nexus. Looking at the repo 
> (and my artifacts), it seems like "sign" didn't run properly.
> I lost my create-release output, but I noticed that it will log and continue 
> rather than abort in some error conditions. This might have caused my lack of 
> signatures. IMO it'd be better to explicitly fail in these situations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15112) create-release didn't sign artifacts

2018-02-08 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15112:
---
Target Version/s: 3.1.0, 3.0.2  (was: 3.1.0, 3.0.1)

> create-release didn't sign artifacts
> 
>
> Key: HADOOP-15112
> URL: https://issues.apache.org/jira/browse/HADOOP-15112
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Lei (Eddy) Xu
>Priority: Major
>
> While building the 3.0.0 RC1, I had to re-invoke Maven because the 
> create-release script didn't deploy signatures to Nexus. Looking at the repo 
> (and my artifacts), it seems like "sign" didn't run properly.
> I lost my create-release output, but I noticed that it will log and continue 
> rather than abort in some error conditions. This might have caused my lack of 
> signatures. IMO it'd be better to explicitly fail in these situations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15142) Register FTP and SFTP as FS services

2018-02-06 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15142:
---
Target Version/s: 3.0.2  (was: 3.0.1)
   Fix Version/s: (was: 3.0.1)

> Register FTP and SFTP as FS services
> 
>
> Key: HADOOP-15142
> URL: https://issues.apache.org/jira/browse/HADOOP-15142
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.0.0
>Reporter: Mario Molina
>Priority: Minor
> Attachments: HADOOP-15142.001.patch, HADOOP-15142.002.patch, 
> HADOOP-15142.003.patch, HADOOP-15142.004.patch, HADOOP-15142.005.patch
>
>
> SFTPFileSystem and FTPFileSystem are not registered as FS services.
> When calling the 'get' or 'newInstance' methods of the FileSystem class, the 
> FS instance cannot be created because the scheme is not registered as an FS 
> service.
> Also, the SFTPFileSystem class doesn't have the getScheme method implemented.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15040) Upgrade AWS SDK: NPE bug spams logs w/ Yarn Log Aggregation

2018-02-06 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15040:
---
Target Version/s: 3.1.0, 3.0.2  (was: 3.1.0, 3.0.1)

> Upgrade AWS SDK: NPE bug spams logs w/ Yarn Log Aggregation
> ---
>
> Key: HADOOP-15040
> URL: https://issues.apache.org/jira/browse/HADOOP-15040
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Major
> Attachments: HADOOP-15040.001.patch
>
>
> My colleagues working with Yarn log aggregation found that they were getting 
> this message spammed in their logs when they used an s3a:// URI for logs 
> (yarn.nodemanager.remote-app-log-dir):
> {noformat}
> getting attribute Region of com.amazonaws.management:type=AwsSdkMetrics threw 
> an exception
> javax.management.RuntimeMBeanException: java.lang.NullPointerException
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
>   at 
> 
> Caused by: java.lang.NullPointerException
>   at com.amazonaws.metrics.AwsSdkMetrics.getRegion(AwsSdkMetrics.java:729)
>   at com.amazonaws.metrics.MetricAdmin.getRegion(MetricAdmin.java:67)
>   at sun.reflect.GeneratedMethodAccessor132.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
> {noformat}
> This happens even though the aws sdk cloudwatch metrics reporting was 
> disabled (default), which is a bug. 
> I filed a [github issue|https://github.com/aws/aws-sdk-java/issues/1375|] and 
> it looks like a fix should be coming around SDK release 1.11.229 or so.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15076) Enhance s3a troubleshooting docs, add perf section

2018-02-06 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15076:
---
Target Version/s: 3.1.0, 3.0.2  (was: 3.1.0, 3.0.1)

> Enhance s3a troubleshooting docs, add perf section
> --
>
> Key: HADOOP-15076
> URL: https://issues.apache.org/jira/browse/HADOOP-15076
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: 2.8.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15076-001.patch, HADOOP-15076-002.patch, 
> HADOOP-15076-003.patch, HADOOP-15076-004.patch, HADOOP-15076-005.patch
>
>
> A recurrent theme in s3a-related JIRAs, support calls etc is "tried upgrading 
> the AWS SDK JAR and then I got the error ...". We know here "don't do that", 
> but its not something immediately obvious to lots of downstream users who 
> want to be able to drop in the new JAR to fix things/add new features
> We need to spell this out quite clearly: "you cannot safely expect to do 
> this. If you want to upgrade the SDK, you will need to rebuild the whole of 
> hadoop-aws with the maven POM updated to the latest version, ideally 
> rerunning all the tests to make sure something hasn't broken. 
> Maybe near the top of the index.md file, along with "never share your AWS 
> credentials with anyone"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15129) Datanode caches namenode DNS lookup failure and cannot startup

2018-02-06 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15129:
---
Target Version/s: 2.8.4, 2.7.6, 3.0.2  (was: 3.0.1, 2.8.4, 2.7.6)

> Datanode caches namenode DNS lookup failure and cannot startup
> --
>
> Key: HADOOP-15129
> URL: https://issues.apache.org/jira/browse/HADOOP-15129
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.8.2
> Environment: Google Compute Engine.
> I'm using Java 8, Debian 8, Hadoop 2.8.2.
>Reporter: Karthik Palaniappan
>Assignee: Karthik Palaniappan
>Priority: Minor
> Attachments: HADOOP-15129.001.patch, HADOOP-15129.002.patch
>
>
> On startup, the Datanode creates an InetSocketAddress to register with each 
> namenode. Though there are retries on connection failure throughout the 
> stack, the same InetSocketAddress is reused.
> InetSocketAddress is an interesting class, because it resolves DNS names to 
> IP addresses on construction, and it is never refreshed. Hadoop re-creates an 
> InetSocketAddress in some cases just in case the remote IP has changed for a 
> particular DNS name: https://issues.apache.org/jira/browse/HADOOP-7472.
> Anyway, on startup, you can see the Datanode log: "Namenode...remains 
> unresolved" -- referring to the fact that DNS lookup failed.
> {code:java}
> 2017-11-02 16:01:55,115 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Refresh request received for nameservices: null
> 2017-11-02 16:01:55,153 WARN org.apache.hadoop.hdfs.DFSUtilClient: Namenode 
> for null remains unresolved for ID null. Check your hdfs-site.xml file to 
> ensure namenodes are configured properly.
> 2017-11-02 16:01:55,156 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Starting BPOfferServices for nameservices: 
> 2017-11-02 16:01:55,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Block pool  (Datanode Uuid unassigned) service to 
> cluster-32f5-m:8020 starting to offer service
> {code}
> The Datanode then proceeds to use this unresolved address, as it may work if 
> the DN is configured to use a proxy. Since I'm not using a proxy, it forever 
> prints out this message:
> {code:java}
> 2017-12-15 00:13:40,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:13:45,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:13:50,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:13:55,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:14:00,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> {code}
> Unfortunately, the log doesn't contain the exception that triggered it, but 
> the culprit is actually in IPC Client: 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L444.
> This line was introduced in https://issues.apache.org/jira/browse/HADOOP-487 
> to give a clear error message when somebody mispells an address.
> However, the fix in HADOOP-7472 doesn't apply here, because that code happens 
> in Client#getConnection after the Connection is constructed.
> My proposed fix (will attach a patch) is to move this exception out of the 
> constructor and into a place that will trigger HADOOP-7472's logic to 
> re-resolve addresses. If the DNS failure was temporary, this will allow the 
> connection to succeed. If not, the connection will fail after ipc client 
> retries (default 10 seconds worth of retries).
> I want to fix this in ipc client rather than just in Datanode startup, as 
> this fixes temporary DNS issues for all of Hadoop.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2018-02-06 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354651#comment-16354651
 ] 

Lei (Eddy) Xu commented on HADOOP-15124:


Hi, [~medb] I moved this JIRA to 3.0.2, as 3.0.1 is about to be released.

In the meantime, can you post a patch here? We usually do reviews on JIRA, so 
that Jenkins will automatically pick up the patch and run it.

Btw, 
{{hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Statistic.java}}
 needs a copyright header.


> Slow FileSystem.Statistics counters implementation
> --
>
> Key: HADOOP-15124
> URL: https://issues.apache.org/jira/browse/HADOOP-15124
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 2.9.0, 2.8.3, 2.7.5, 3.0.0
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Major
>  Labels: common, filesystem, statistics
>
> While profiling 1TB TeraGen job on Hadoop 2.8.2 cluster (Google Dataproc, 2 
> workers, GCS connector) I saw that FileSystem.Statistics code paths Wall time 
> is 5.58% and CPU time is 26.5% of total execution time.
> After switching FileSystem.Statistics implementation to LongAdder, consumed 
> Wall time decreased to 0.006% and CPU time to 0.104% of total execution time.
> Total job runtime decreased from 66 mins to 61 mins.
> These results are not conclusive, because I didn't benchmark multiple times 
> to average results, but regardless of performance gains switching to 
> LongAdder simplifies code and reduces its complexity.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15200) Missing DistCpOptions constructor breaks downstream DistCp projects in 3.0

2018-02-06 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15200:
---
Target Version/s: 3.1.0, 3.0.2  (was: 3.1.0, 3.0.1)

> Missing DistCpOptions constructor breaks downstream DistCp projects in 3.0
> --
>
> Key: HADOOP-15200
> URL: https://issues.apache.org/jira/browse/HADOOP-15200
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.0.0
>Reporter: Kuhu Shukla
>Priority: Critical
>
> Post HADOOP-14267, the constructor for DistCpOptions was removed and will 
> break any project using it for java based implementation/usage of DistCp. 
> This JIRA would track next steps required to reconcile/fix this 
> incompatibility. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2018-02-06 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15124:
---
Target Version/s: 3.0.2  (was: 3.0.1)

> Slow FileSystem.Statistics counters implementation
> --
>
> Key: HADOOP-15124
> URL: https://issues.apache.org/jira/browse/HADOOP-15124
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 2.9.0, 2.8.3, 2.7.5, 3.0.0
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Major
>  Labels: common, filesystem, statistics
>
> While profiling 1TB TeraGen job on Hadoop 2.8.2 cluster (Google Dataproc, 2 
> workers, GCS connector) I saw that FileSystem.Statistics code paths Wall time 
> is 5.58% and CPU time is 26.5% of total execution time.
> After switching FileSystem.Statistics implementation to LongAdder, consumed 
> Wall time decreased to 0.006% and CPU time to 0.104% of total execution time.
> Total job runtime decreased from 66 mins to 61 mins.
> These results are not conclusive, because I didn't benchmark multiple times 
> to average results, but regardless of performance gains switching to 
> LongAdder simplifies code and reduces its complexity.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15112) create-release didn't sign artifacts

2018-02-06 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu reassigned HADOOP-15112:
--

Assignee: Lei (Eddy) Xu

> create-release didn't sign artifacts
> 
>
> Key: HADOOP-15112
> URL: https://issues.apache.org/jira/browse/HADOOP-15112
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Lei (Eddy) Xu
>Priority: Major
>
> While building the 3.0.0 RC1, I had to re-invoke Maven because the 
> create-release script didn't deploy signatures to Nexus. Looking at the repo 
> (and my artifacts), it seems like "sign" didn't run properly.
> I lost my create-release output, but I noticed that it will log and continue 
> rather than abort in some error conditions. This might have caused my lack of 
> signatures. IMO it'd be better to explicitly fail in these situations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-12928) Update netty to 3.10.5.Final to sync with zookeeper

2018-01-10 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu resolved HADOOP-12928.

Resolution: Duplicate

> Update netty to 3.10.5.Final to sync with zookeeper
> ---
>
> Key: HADOOP-12928
> URL: https://issues.apache.org/jira/browse/HADOOP-12928
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.7.2
>Reporter: Hendy Irawan
>Assignee: Lei (Eddy) Xu
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12928-branch-2.00.patch, 
> HADOOP-12928-branch-2.01.patch, HADOOP-12928-branch-2.02.patch, 
> HADOOP-12928.01.patch, HADOOP-12928.02.patch, HADOOP-12928.03.patch, 
> HDFS-12928.00.patch
>
>
> Update netty to 3.7.1.Final because hadoop-client 2.7.2 depends on zookeeper 
> 3.4.6 which depends on netty 3.7.x. Related to HADOOP-12927
> Pull request: https://github.com/apache/hadoop/pull/85



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15061) Regenerate editsStored and editsStored.xml in HDFS tests

2017-11-21 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HADOOP-15061:
--

 Summary: Regenerate editsStored and editsStored.xml in HDFS tests
 Key: HADOOP-15061
 URL: https://issues.apache.org/jira/browse/HADOOP-15061
 Project: Hadoop Common
  Issue Type: Task
  Components: test
Affects Versions: 3.0.0-beta1
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu


From HDFS-12840, we found that the {{editsStored}} file in the HDFS tests is 
missing a few operations, i.e., the following operations from 
{{DFSTestUtils#runOperations()}}:
{code}
 // OP_UPDATE_BLOCKS 25
final String updateBlockFile = "/update_blocks";
FSDataOutputStream fout = filesystem.create(new Path(updateBlockFile), 
true, 4096, (short)1, 4096L);
fout.write(1);
fout.hflush();
long fileId = ((DFSOutputStream)fout.getWrappedStream()).getFileId();
DFSClient dfsclient = DFSClientAdapter.getDFSClient(filesystem);
LocatedBlocks blocks = 
dfsclient.getNamenode().getBlockLocations(updateBlockFile, 0, 
Integer.MAX_VALUE);
dfsclient.getNamenode().abandonBlock(blocks.get(0).getBlock(), fileId, 
updateBlockFile, dfsclient.clientName);
fout.close();
{code}

We should re-generate the edits file and the related XML to sync them with the code.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15023) ValueQueue should also validate (lowWatermark * numValues) > 0 on construction

2017-11-15 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16254461#comment-16254461
 ] 

Lei (Eddy) Xu commented on HADOOP-15023:


+1. LGTM.

Thanks [~xiaochen]

> ValueQueue should also validate (lowWatermark * numValues) > 0 on construction
> --
>
> Key: HADOOP-15023
> URL: https://issues.apache.org/jira/browse/HADOOP-15023
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HADOOP-15023.01.patch
>
>
> ValueQueue has precondition checks for each item independently, but does not 
> check {{(int)(lowWatermark * numValues) > 0}}. If the product is low enough, 
> casting to int will wrap that to 0, causing problems later when filling / 
> getting from the queue.
> [code|https://github.com/apache/hadoop/blob/branch-3.0.0-beta1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/ValueQueue.java#L224]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15018) Update JAVA_HOME in create-release for Xenial Dockerfile

2017-11-06 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16240974#comment-16240974
 ] 

Lei (Eddy) Xu commented on HADOOP-15018:


Hi, [~andrew.wang]

LGTM. One nit: 
{code}
# we always force build with the Oracle JDK
{code}
We should also update the comment. +1 pending this change.

> Update JAVA_HOME in create-release for Xenial Dockerfile
> 
>
> Key: HADOOP-15018
> URL: https://issues.apache.org/jira/browse/HADOOP-15018
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Blocker
> Attachments: HADOOP-15018.001.patch
>
>
> create-release expects the Oracle JDK when setting JAVA_HOME. HADOOP-14816 no 
> longer includes the Oracle JDK, so we need to update this to point to OpenJDK 
> instead.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14957) ReconfigurationTaskStatus is exposing guava Optional in its public api

2017-10-20 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16213504#comment-16213504
 ] 

Lei (Eddy) Xu commented on HADOOP-14957:


Sorry, this was my mistake back then. Ideally this class should not be a public 
API. As Andrew mentioned, it is only useful for reconfiguration / hot-swapping 
drives, and it should be used only between {{DFSAdmin}} and {{DataNode}}. Is 
there a protocol for downgrading / deprecating this from a public to a private 
API across multiple versions? A rough illustration of what that could look like 
follows.
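
Purely as a sketch of the usual mechanism (not an agreed proposal; the method 
shown is hypothetical), the downgrade would typically combine the Hadoop 
audience/stability annotations with deprecation over one or more releases:

{code:java}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Narrow the advertised audience and mark the part that leaks a third-party
// type as deprecated, keeping it around for a release or two before removal.
@InterfaceAudience.Private
@InterfaceStability.Unstable
public class ReconfigurationStatusSketch {

  /** @deprecated exposes a third-party type in its signature; to be replaced. */
  @Deprecated
  public Object getStatus() {
    return null;  // placeholder body for the sketch
  }
}
{code}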

> ReconfigurationTaskStatus is exposing guava Optional in its public api
> --
>
> Key: HADOOP-14957
> URL: https://issues.apache.org/jira/browse/HADOOP-14957
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.0.0-beta1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: HADOOP-14957.prelim.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13055) Implement linkMergeSlash and linkFallback for ViewFileSystem

2017-10-13 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204353#comment-16204353
 ] 

Lei (Eddy) Xu commented on HADOOP-13055:


+1. The last patch LGTM.

Thanks!

> Implement linkMergeSlash and linkFallback for ViewFileSystem
> 
>
> Key: HADOOP-13055
> URL: https://issues.apache.org/jira/browse/HADOOP-13055
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, viewfs
>Affects Versions: 2.7.5
>Reporter: Zhe Zhang
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-13055.00.patch, HADOOP-13055.01.patch, 
> HADOOP-13055.02.patch, HADOOP-13055.03.patch, HADOOP-13055.04.patch, 
> HADOOP-13055.05.patch, HADOOP-13055.06.patch, HADOOP-13055.07.patch, 
> HADOOP-13055.08.patch, HADOOP-13055.09.patch
>
>
> In a multi-cluster environment it is sometimes useful to operate on the root 
> / slash directory of an HDFS cluster. E.g., list all top level directories. 
> Quoting the comment in {{ViewFs}}:
> {code}
>  *   A special case of the merge mount is where mount table's root is merged
>  *   with the root (slash) of another file system:
>  *   
>  *   fs.viewfs.mounttable.default.linkMergeSlash=hdfs://nn99/
>  *   
>  *   In this cases the root of the mount table is merged with the root of
>  *hdfs://nn99/  
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14396) Add builder interface to FileContext

2017-10-11 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201146#comment-16201146
 ] 

Lei (Eddy) Xu commented on HADOOP-14396:


[~asuresh], [~subru] sorry for the late reply. I am afraid that I don't have 
the bandwidth to get this into 2.9. I will move this JIRA to the next release.

Thanks!

> Add builder interface to FileContext
> 
>
> Key: HADOOP-14396
> URL: https://issues.apache.org/jira/browse/HADOOP-14396
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14396.00.patch
>
>
> Add builder interface for {{FileContext#create}} and {{FileContext#append}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14396) Add builder interface to FileContext

2017-10-11 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14396:
---
Target Version/s: 3.0.0, 2.10.0  (was: 2.9.0, 3.0.0)

> Add builder interface to FileContext
> 
>
> Key: HADOOP-14396
> URL: https://issues.apache.org/jira/browse/HADOOP-14396
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14396.00.patch
>
>
> Add builder interface for {{FileContext#create}} and {{FileContext#append}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13055) Implement linkMergeSlash for ViewFileSystem

2017-10-02 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189080#comment-16189080
 ] 

Lei (Eddy) Xu commented on HADOOP-13055:


Thanks for the updates, [~manojg]. Great work.

* Should we consider moving the "fallback link" from {{INodeDir}} to 
{{InodeTree}}, as conceptually there is at most one per tree / root? It would 
simplify the code a bit by removing many precondition checks; for example, it 
would eliminate the {{isRoot}} AND {{hasFallbackLink}} checks from INodeDir.
* Could you rephrase the doc about the concept of {{internalDir}}?
* Is {{INodeDir#resolve()}} called? Can we remove it?
* Please add comments for {{static class LinkEntry}}.
* For {{INodeDir#getChildren}}, you might want to return an unmodifiable map 
(see the sketch after this list).
* It'd be nice to raise a user-readable error if multiple MERGE_SLASH or 
SINGLE_FALLBACK links were configured.
* Maybe check {{mergeSlashTarget == null}}?
{code}
if (linkType != LinkType.MERGE_SLASH) {
..
} else {
  ...
  mergeSlashTarget = target;
}
{code}
* The test cases are awesome!
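
To illustrate the {{getChildren}} point above, a minimal sketch (the class and 
field names are illustrative, not the real {{INodeDir}} code):

{code:java}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

class DirNodeSketch<T> {
  private final Map<String, T> children = new HashMap<>();

  // Expose a read-only view: callers can iterate the children, but any
  // put()/remove() on the returned map throws UnsupportedOperationException,
  // so the internal state cannot be mutated from outside.
  Map<String, T> getChildren() {
    return Collections.unmodifiableMap(children);
  }
}
{code}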

> Implement linkMergeSlash for ViewFileSystem
> ---
>
> Key: HADOOP-13055
> URL: https://issues.apache.org/jira/browse/HADOOP-13055
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, viewfs
>Affects Versions: 2.7.5
>Reporter: Zhe Zhang
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-13055.00.patch, HADOOP-13055.01.patch, 
> HADOOP-13055.02.patch, HADOOP-13055.03.patch, HADOOP-13055.04.patch, 
> HADOOP-13055.05.patch, HADOOP-13055.06.patch, HADOOP-13055.07.patch, 
> HADOOP-13055.08.patch
>
>
> In a multi-cluster environment it is sometimes useful to operate on the root 
> / slash directory of an HDFS cluster. E.g., list all top level directories. 
> Quoting the comment in {{ViewFs}}:
> {code}
>  *   A special case of the merge mount is where mount table's root is merged
>  *   with the root (slash) of another file system:
>  *   
>  *   fs.viewfs.mounttable.default.linkMergeSlash=hdfs://nn99/
>  *   
>  *   In this cases the root of the mount table is merged with the root of
>  *hdfs://nn99/  
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13055) Implement linkMergeSlash for ViewFileSystem

2017-09-21 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175574#comment-16175574
 ] 

Lei (Eddy) Xu commented on HADOOP-13055:


Hi, [~manojg]

I am new to this, so I might have some misunderstanding.

{code}
// Is an internal directory
abstract boolean isInternalDir();

// Is a Merge Link or a Merge Slash Link
boolean isLink() {
   return !isInternalDir();
}
{code}
Can you add more comments here to clarify the concept of an internal dir, and 
the relationship between "internal dir" and {{Merge Link / Merge Slash Link}}? 
Or maybe rephrase the names?

{code}
 SINGLE_FALLBACK,
MERGE_SLASH,
{code}

Can you add more comments on both of them? What is the difference?

{code}
for (LinkEntry le : linkEntries) {
if (le.isLinkType(LinkType.SINGLE_FALLBACK)) {
  INodeLink fallbackLink = new INodeLink(mountTableName, ugi,
  getTargetFileSystem(new URI(le.getTarget())),
   new URI(le.getTarget()));
   getRootDir().setRootFallbackLink(fallbackLink);
}
{code}

How do we guarantee that there is at most one {{LinkType.SINGLE_FALLBACK}} 
instance?

Thanks.

> Implement linkMergeSlash for ViewFileSystem
> ---
>
> Key: HADOOP-13055
> URL: https://issues.apache.org/jira/browse/HADOOP-13055
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, viewfs
>Affects Versions: 2.7.5
>Reporter: Zhe Zhang
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-13055.00.patch, HADOOP-13055.01.patch, 
> HADOOP-13055.02.patch, HADOOP-13055.03.patch, HADOOP-13055.04.patch, 
> HADOOP-13055.05.patch, HADOOP-13055.06.patch, HADOOP-13055.07.patch
>
>
> In a multi-cluster environment it is sometimes useful to operate on the root 
> / slash directory of an HDFS cluster. E.g., list all top level directories. 
> Quoting the comment in {{ViewFs}}:
> {code}
>  *   A special case of the merge mount is where mount table's root is merged
>  *   with the root (slash) of another file system:
>  *   
>  *   fs.viewfs.mounttable.default.linkMergeSlash=hdfs://nn99/
>  *   
>  *   In this cases the root of the mount table is merged with the root of
>  *hdfs://nn99/  
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14398) Modify documents for the FileSystem Builder API

2017-08-18 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14398:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

> Modify documents for the FileSystem Builder API
> ---
>
> Key: HADOOP-14398
> URL: https://issues.apache.org/jira/browse/HADOOP-14398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: docuentation
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14398.00.patch, HADOOP-14398.01.patch, 
> HADOOP-14398.02.patch, HADOOP-14398.03.patch
>
>
> After finishes the API, we should update the document to describe the 
> interface, capability and contract which the APIs hold. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14398) Modify documents for the FileSystem Builder API

2017-08-17 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131560#comment-16131560
 ] 

Lei (Eddy) Xu commented on HADOOP-14398:


There is no actual code change; the test failures are unrelated.

Thanks for the reviews, [~fabbri] and [~andrew.wang]. Committed to trunk.

> Modify documents for the FileSystem Builder API
> ---
>
> Key: HADOOP-14398
> URL: https://issues.apache.org/jira/browse/HADOOP-14398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: docuentation
> Attachments: HADOOP-14398.00.patch, HADOOP-14398.01.patch, 
> HADOOP-14398.02.patch, HADOOP-14398.03.patch
>
>
> After finishes the API, we should update the document to describe the 
> interface, capability and contract which the APIs hold. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14785) Specify the behavior of handling conflicts between must and opt parameters

2017-08-17 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HADOOP-14785:
--

 Summary: Specify the behavior of handling conflicts between must 
and opt parameters 
 Key: HADOOP-14785
 URL: https://issues.apache.org/jira/browse/HADOOP-14785
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 3.0.0-alpha3
Reporter: Lei (Eddy) Xu


Allowing users to pass strings as keys/values to specify the behavior of 
{{FSOutputStream}} is flexible, but this flexibility opens the door to 
conflicts between parameters; a sketch of such a conflict is shown below.

We should specify a general rule for how such conflicts are handled across the 
different file system implementations.
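
A minimal sketch of the kind of conflict in question (the key and path below 
are hypothetical, and the outcome is exactly what this issue asks to be 
specified):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OptMustConflictSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // The same (hypothetical) key is set both as an optional hint and as a
    // mandatory requirement, with different values. Should build() fail, or
    // should one value silently win? That is the rule to be specified per
    // file system implementation.
    fs.createFile(new Path("/tmp/conflict-example"))
        .opt("example.buffer.size", 4096)
        .must("example.buffer.size", 8192)
        .build()
        .close();
  }
}
{code}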



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14398) Modify documents for the FileSystem Builder API

2017-08-17 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14398:
---
Attachment: HADOOP-14398.03.patch

Thanks for the reviews, [~andrew.wang]

I will file follow-on JIRAs for further discussion.

> Modify documents for the FileSystem Builder API
> ---
>
> Key: HADOOP-14398
> URL: https://issues.apache.org/jira/browse/HADOOP-14398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: docuentation
> Attachments: HADOOP-14398.00.patch, HADOOP-14398.01.patch, 
> HADOOP-14398.02.patch, HADOOP-14398.03.patch
>
>
> After finishes the API, we should update the document to describe the 
> interface, capability and contract which the APIs hold. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14398) Modify documents for the FileSystem Builder API

2017-08-07 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14398:
---
Attachment: HADOOP-14398.02.patch

Thanks for the reviews, [~andrew.wang]

bq. be invoked . 
bq. there's a typo in the builder doc: "recurisve"
bq. Would be better to use an fake "FooFileSystem"

Fixed. 

bq. a behavior change compared to the current create APIs,
bq. Should we also call out the change in default behavior compared to the 
existing create call?

{{overwrite}} behaves the same as in {{FS#create}}, while the default behaviour 
of {{recursive()}} has changed. Modified in the doc. I think those are the only 
changes (see the sketch at the end of this comment).

bq. Are there provisions for probing FS capabilities without must 

It does not have this capability now. We can discuss it in a follow-on JIRA.

bq. move the HDFS-specific builder parameters to an HDFS-specific page

Nice suggestion. There are a few places in {{filesystem.md}} that mention HDFS 
special cases, and I did not find a good existing place to move this into an 
HDFS-specific section of the docs. Besides, keeping it here offers a single 
place for users to look up what the Builder is capable of. Shall we keep it 
here?

Thanks!
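
For readers following along, a small sketch of the default-behaviour difference 
mentioned above (the paths are hypothetical; this assumes the semantics 
described in this discussion, i.e. the builder only creates missing parent 
directories when {{recursive()}} is requested):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RecursiveDefaultSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path viaCreate = new Path("/tmp/a/b/via-create");    // hypothetical paths
    Path viaBuilder = new Path("/tmp/a/b/via-builder");

    // Classic API: missing parent directories are created implicitly.
    fs.create(viaCreate, false).close();

    // Builder API: overwrite(false) keeps the FS#create semantics, but parent
    // creation has to be requested explicitly via recursive(); without it the
    // build is expected to fail if /tmp/a/b does not already exist.
    fs.createFile(viaBuilder)
        .overwrite(false)
        .recursive()
        .build()
        .close();
  }
}
{code}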



> Modify documents for the FileSystem Builder API
> ---
>
> Key: HADOOP-14398
> URL: https://issues.apache.org/jira/browse/HADOOP-14398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: docuentation
> Attachments: HADOOP-14398.00.patch, HADOOP-14398.01.patch, 
> HADOOP-14398.02.patch
>
>
> After finishes the API, we should update the document to describe the 
> interface, capability and contract which the APIs hold. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14126) remove jackson, joda and other transient aws SDK dependencies from hadoop-aws

2017-08-03 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113271#comment-16113271
 ] 

Lei (Eddy) Xu commented on HADOOP-14126:


+1. Thanks for taking care of this, [~ste...@apache.org]!

> remove jackson, joda and other transient aws SDK dependencies from hadoop-aws
> -
>
> Key: HADOOP-14126
> URL: https://issues.apache.org/jira/browse/HADOOP-14126
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14126-001.patch
>
>
> With HADOOP-14040 in, we can cut out all declarations of dependencies on 
> jackson, joda-time  from the hadoop-aws module, so avoiding it confusing 
> downstream projects.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14495) Add set options interface to FSDataOutputStreamBuilder

2017-08-01 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14495:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

Thanks for the reviews, [~manojg]

Committed to trunk.

> Add set options interface to FSDataOutputStreamBuilder 
> ---
>
> Key: HADOOP-14495
> URL: https://issues.apache.org/jira/browse/HADOOP-14495
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14495.00.patch, HADOOP-14495.01.patch, 
> HADOOP-14495.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14398) Modify documents for the FileSystem Builder API

2017-08-01 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14398:
---
Attachment: HADOOP-14398.01.patch

Thanks for the detailed review, [~fabbri]!

bq. I think you mean the Filesystem instance; some filesystems don't modify the 
underlying FS until close(). 

Good catch. Fixed.

bq. What is the difference in behavior / semantics between opt() and must()?
bq. What is the conflict resolution behavior if there is a builder method (e.g. 
.someOption()) that conflicts with a Configuration option set via opt() or 
must()? ("undefined" is a possibility, but should be specified at least?).

Done. Specified as "undefined" for now. If a specific filesystem builder 
implementation needs to revise this later on, we can file a follow-on JIRA.

bq. Seems like the invalid settings are:

This is aligned with {{CreateFlag.java}}. 

Thanks!
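
To make the {{opt()}}/{{must()}} split concrete, a small self-contained sketch 
(the keys are invented for illustration and are not real Hadoop option names; 
the exact rejection behaviour depends on the filesystem):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OptMustSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.getLocal(new Configuration());
    Path path = new Path("/tmp/opt-must-sketch.txt");
    try (FSDataOutputStream out = fs.createFile(path)
        .overwrite(true)
        // optional: a filesystem that does not understand the key may ignore it
        .opt("fs.example.hint", "best-effort")
        // mandatory: a filesystem that cannot honour the key should reject build()
        .must("fs.example.required", "strict")
        .build()) {
      out.writeBytes("opt() vs must()");
    } catch (IllegalArgumentException rejected) {
      // Expected on filesystems that enforce mandatory keys and do not recognise this one.
      System.out.println("must() key rejected: " + rejected.getMessage());
    }
  }
}
{code}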


> Modify documents for the FileSystem Builder API
> ---
>
> Key: HADOOP-14398
> URL: https://issues.apache.org/jira/browse/HADOOP-14398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: docuentation
> Attachments: HADOOP-14398.00.patch, HADOOP-14398.01.patch
>
>
> After finishes the API, we should update the document to describe the 
> interface, capability and contract which the APIs hold. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14495) Add set options interface to FSDataOutputStreamBuilder

2017-08-01 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14495:
---
Attachment: HADOOP-14495.02.patch

Thanks, [~manojg]. Addressed all comments in the 02 patch.

> Add set options interface to FSDataOutputStreamBuilder 
> ---
>
> Key: HADOOP-14495
> URL: https://issues.apache.org/jira/browse/HADOOP-14495
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14495.00.patch, HADOOP-14495.01.patch, 
> HADOOP-14495.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14397) Pull up the builder pattern to FileSystem and add AbstractContractCreateTest for it

2017-07-31 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14397:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
   2.9.0
   Status: Resolved  (was: Patch Available)

Thanks, [~manojg], for the reviews.

Committed to trunk and branch-2.

> Pull up the builder pattern to FileSystem and add AbstractContractCreateTest 
> for it
> ---
>
> Key: HADOOP-14397
> URL: https://issues.apache.org/jira/browse/HADOOP-14397
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common, fs, hdfs-client
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14397.000.patch, HADOOP-14397.001.patch, 
> HADOOP-14397.002.patch, HADOOP-14397.003.patch, HADOOP-14397.004.patch
>
>
> After reach the stability of the Builder APIs, we should promote the API from 
> {{DistributedFileSystem}} to {{FileSystem}}, and add necessary contract tests 
> to cover the API for all file systems.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14397) Pull up the builder pattern to FileSystem and add AbstractContractCreateTest for it

2017-07-31 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14397:
---
Attachment: HADOOP-14397.004.patch

Fixed {{TestLocalFileSystem}} in the 04 patch. {{TestZKFailoverController}} and 
{{TestKDiag}} passed on my laptop.

[~manojg], could you take another look?

> Pull up the builder pattern to FileSystem and add AbstractContractCreateTest 
> for it
> ---
>
> Key: HADOOP-14397
> URL: https://issues.apache.org/jira/browse/HADOOP-14397
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common, fs, hdfs-client
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14397.000.patch, HADOOP-14397.001.patch, 
> HADOOP-14397.002.patch, HADOOP-14397.003.patch, HADOOP-14397.004.patch
>
>
> After reach the stability of the Builder APIs, we should promote the API from 
> {{DistributedFileSystem}} to {{FileSystem}}, and add necessary contract tests 
> to cover the API for all file systems.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14397) Pull up the builder pattern to FileSystem and add AbstractContractCreateTest for it

2017-07-31 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107665#comment-16107665
 ] 

Lei (Eddy) Xu commented on HADOOP-14397:


{{TestLocalFileSystem}} is related. Working on it.

> Pull up the builder pattern to FileSystem and add AbstractContractCreateTest 
> for it
> ---
>
> Key: HADOOP-14397
> URL: https://issues.apache.org/jira/browse/HADOOP-14397
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common, fs, hdfs-client
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14397.000.patch, HADOOP-14397.001.patch, 
> HADOOP-14397.002.patch, HADOOP-14397.003.patch
>
>
> After reach the stability of the Builder APIs, we should promote the API from 
> {{DistributedFileSystem}} to {{FileSystem}}, and add necessary contract tests 
> to cover the API for all file systems.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14672) Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, dom, etc.

2017-07-28 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105628#comment-16105628
 ] 

Lei (Eddy) Xu commented on HADOOP-14672:


Hi, [~busbey]. Filed HDFS-12221 to track the OEV change.

> Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, 
> dom, etc.
> --
>
> Key: HADOOP-14672
> URL: https://issues.apache.org/jira/browse/HADOOP-14672
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Junping Du
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HADOOP-14672.02.patch, HADOOP-14672.03.patch, 
> HADOOP-14672.04.patch, HADOOP-14672.patch
>
>
> The shaded hadoop-client-minicluster shouldn't include any unshaded 
> dependencies, but we can see: javax, dom, sax, etc. are all unshaded.
> CC [~busbey]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14672) Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, dom, etc.

2017-07-28 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105590#comment-16105590
 ] 

Lei (Eddy) Xu commented on HADOOP-14672:


[~andrew.wang], it should be easy to replace xerces with another XML serializer.

I will file a JIRA for it.
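
For what it's worth, a JDK-only sketch of serializing a DOM document via 
javax.xml.transform, just to show the swap away from xerces is straightforward 
(the element name is made up; this is not the actual OEV code):

{code}
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class XmlSerializerSketch {
  public static void main(String[] args) throws Exception {
    Document doc = DocumentBuilderFactory.newInstance()
        .newDocumentBuilder().newDocument();
    Element root = doc.createElement("EDITS");   // illustrative element name only
    doc.appendChild(root);

    // The JDK transformer can replace the xerces XMLSerializer for plain DOM output.
    Transformer t = TransformerFactory.newInstance().newTransformer();
    t.setOutputProperty(OutputKeys.INDENT, "yes");
    StringWriter out = new StringWriter();
    t.transform(new DOMSource(doc), new StreamResult(out));
    System.out.println(out);
  }
}
{code}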



> Shaded Hadoop-client-minicluster include unshaded classes, like: javax, sax, 
> dom, etc.
> --
>
> Key: HADOOP-14672
> URL: https://issues.apache.org/jira/browse/HADOOP-14672
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Junping Du
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HADOOP-14672.02.patch, HADOOP-14672.03.patch, 
> HADOOP-14672.04.patch, HADOOP-14672.patch
>
>
> The shaded hadoop-client-minicluster shouldn't include any unshaded 
> dependencies, but we can see: javax, dom, sax, etc. are all unshaded.
> CC [~busbey]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14397) Pull up the builder pattern to FileSystem and add AbstractContractCreateTest for it

2017-07-28 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14397:
---
Attachment: HADOOP-14397.003.patch

Thanks, [~manojg].

Fixed the checkstyle issues and the error message in the IOE.

> Pull up the builder pattern to FileSystem and add AbstractContractCreateTest 
> for it
> ---
>
> Key: HADOOP-14397
> URL: https://issues.apache.org/jira/browse/HADOOP-14397
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common, fs, hdfs-client
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14397.000.patch, HADOOP-14397.001.patch, 
> HADOOP-14397.002.patch, HADOOP-14397.003.patch
>
>
> After reach the stability of the Builder APIs, we should promote the API from 
> {{DistributedFileSystem}} to {{FileSystem}}, and add necessary contract tests 
> to cover the API for all file systems.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14397) Pull up the builder pattern to FileSystem and add AbstractContractCreateTest for it

2017-07-27 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14397:
---
Attachment: HADOOP-14397.002.patch

Thanks for the catch, [~manojg].  Fixed in the 02 patch.

> Pull up the builder pattern to FileSystem and add AbstractContractCreateTest 
> for it
> ---
>
> Key: HADOOP-14397
> URL: https://issues.apache.org/jira/browse/HADOOP-14397
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common, fs, hdfs-client
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14397.000.patch, HADOOP-14397.001.patch, 
> HADOOP-14397.002.patch
>
>
> After reach the stability of the Builder APIs, we should promote the API from 
> {{DistributedFileSystem}} to {{FileSystem}}, and add necessary contract tests 
> to cover the API for all file systems.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14397) Pull up the builder pattern to FileSystem and add AbstractContractCreateTest for it

2017-07-24 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14397:
---
Attachment: HADOOP-14397.001.patch

Thanks for the reviews, [~fabbri]!

Updated the patch to fix {{TestRawlocalContractAppend}}, which failed because 
append was not handled in 
{{FileSystem#FileSystemDataOutputStreamBuilder#builder()}}.
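
For context, a usage-level sketch (not code from the patch) of the append path 
this covers, going through the same builder entry points; the raw local 
filesystem is used because the checksummed local filesystem does not support append:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendBuilderSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.getLocal(new Configuration()).getRawFileSystem();
    Path path = new Path("/tmp/append-builder-sketch.txt");
    try (FSDataOutputStream out = fs.createFile(path).overwrite(true).build()) {
      out.writeBytes("first line\n");
    }
    // The case the contract append tests exercise: append through the builder.
    try (FSDataOutputStream out = fs.appendFile(path).build()) {
      out.writeBytes("appended line\n");
    }
  }
}
{code}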



> Pull up the builder pattern to FileSystem and add AbstractContractCreateTest 
> for it
> ---
>
> Key: HADOOP-14397
> URL: https://issues.apache.org/jira/browse/HADOOP-14397
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common, fs, hdfs-client
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14397.000.patch, HADOOP-14397.001.patch
>
>
> After reach the stability of the Builder APIs, we should promote the API from 
> {{DistributedFileSystem}} to {{FileSystem}}, and add necessary contract tests 
> to cover the API for all file systems.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14398) Modify documents for the FileSystem Builder API

2017-06-22 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16059691#comment-16059691
 ] 

Lei (Eddy) Xu edited comment on HADOOP-14398 at 6/22/17 5:21 PM:
-

[~ste...@apache.org] Thanks a lot for the review. I just realized that the 
uploaded patch was mistakenly generated against a local branch. Sorry for the 
confusion.

Re-upload a good version of document change.


was (Author: eddyxu):
Re-upload a good version of document.

> Modify documents for the FileSystem Builder API
> ---
>
> Key: HADOOP-14398
> URL: https://issues.apache.org/jira/browse/HADOOP-14398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: docuentation
> Attachments: HADOOP-14398.00.patch
>
>
> After finishes the API, we should update the document to describe the 
> interface, capability and contract which the APIs hold. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14398) Modify documents for the FileSystem Builder API

2017-06-22 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14398:
---
Attachment: HADOOP-14398.00.patch

Re-upload a good version of document.

> Modify documents for the FileSystem Builder API
> ---
>
> Key: HADOOP-14398
> URL: https://issues.apache.org/jira/browse/HADOOP-14398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: docuentation
> Attachments: HADOOP-14398.00.patch
>
>
> After finishes the API, we should update the document to describe the 
> interface, capability and contract which the APIs hold. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14398) Modify documents for the FileSystem Builder API

2017-06-22 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14398:
---
Attachment: (was: HADOOP-14398.00.patch)

> Modify documents for the FileSystem Builder API
> ---
>
> Key: HADOOP-14398
> URL: https://issues.apache.org/jira/browse/HADOOP-14398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: docuentation
> Attachments: HADOOP-14398.00.patch
>
>
> After finishes the API, we should update the document to describe the 
> interface, capability and contract which the APIs hold. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14495) Add set options interface to FSDataOutputStreamBuilder

2017-06-22 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16059686#comment-16059686
 ] 

Lei (Eddy) Xu commented on HADOOP-14495:


The failed tests are not related to this change; they passed on my laptop. 
The findbugs warnings are also unrelated to the change.

> Add set options interface to FSDataOutputStreamBuilder 
> ---
>
> Key: HADOOP-14495
> URL: https://issues.apache.org/jira/browse/HADOOP-14495
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14495.00.patch, HADOOP-14495.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14397) Pull up the builder pattern to FileSystem and add AbstractContractCreateTest for it

2017-06-22 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14397:
---
Attachment: HADOOP-14397.000.patch

Upgraded {{FSDataOutputStreamBuilder}} to a public API, and added tests 
against the builder API in {{AbstractContractCreateTest}} and 
{{AbstractContractAppendTest}}.
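
As a rough standalone analogue of what those contract tests assert (heavily 
simplified; the real checks live in the contract test classes named above):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BuilderCreateCheck {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.getLocal(new Configuration());
    Path file = new Path("/tmp/builder-create-check.txt");
    try (FSDataOutputStream out = fs.createFile(file).overwrite(true).build()) {
      out.writeBytes("created through the public builder API");
    }
    if (!fs.exists(file)) {
      throw new IllegalStateException("file should exist after build() + close()");
    }
  }
}
{code}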


> Pull up the builder pattern to FileSystem and add AbstractContractCreateTest 
> for it
> ---
>
> Key: HADOOP-14397
> URL: https://issues.apache.org/jira/browse/HADOOP-14397
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common, fs, hdfs-client
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14397.000.patch
>
>
> After reach the stability of the Builder APIs, we should promote the API from 
> {{DistributedFileSystem}} to {{FileSystem}}, and add necessary contract tests 
> to cover the API for all file systems.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14397) Pull up the builder pattern to FileSystem and add AbstractContractCreateTest for it

2017-06-22 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14397:
---
Affects Version/s: 3.0.0-alpha3
   Status: Patch Available  (was: Open)

> Pull up the builder pattern to FileSystem and add AbstractContractCreateTest 
> for it
> ---
>
> Key: HADOOP-14397
> URL: https://issues.apache.org/jira/browse/HADOOP-14397
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common, fs, hdfs-client
>Affects Versions: 3.0.0-alpha3, 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14397.000.patch
>
>
> After reach the stability of the Builder APIs, we should promote the API from 
> {{DistributedFileSystem}} to {{FileSystem}}, and add necessary contract tests 
> to cover the API for all file systems.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14396) Add builder interface to FileContext

2017-06-21 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14396:
---
Affects Version/s: 2.9.0
   3.0.0-alpha3
 Target Version/s: 3.0.0-alpha3, 2.9.0
   Status: Patch Available  (was: Open)

> Add builder interface to FileContext
> 
>
> Key: HADOOP-14396
> URL: https://issues.apache.org/jira/browse/HADOOP-14396
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3, 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14396.00.patch
>
>
> Add builder interface for {{FileContext#create}} and {{FileContext#append}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14396) Add builder interface to FileContext

2017-06-21 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14396:
---
Attachment: HADOOP-14396.00.patch

Uploaded a patch to add {{FileContext#create(Path)}} and related tests.
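
A usage sketch of what the new entry point looks like, assuming (as this patch 
describes) that {{FileContext#create(Path)}} returns the same kind of builder as 
{{FileSystem#createFile}}; this is not code from the patch itself:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;

public class FileContextBuilderSketch {
  public static void main(String[] args) throws Exception {
    FileContext fc = FileContext.getLocalFSFileContext(new Configuration());
    // Assumption: create(Path) returns a builder supporting the usual overwrite/build chain.
    try (FSDataOutputStream out = fc.create(new Path("/tmp/fc-builder-sketch.txt"))
        .overwrite(true)
        .build()) {
      out.writeBytes("written through the FileContext builder");
    }
  }
}
{code}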

> Add builder interface to FileContext
> 
>
> Key: HADOOP-14396
> URL: https://issues.apache.org/jira/browse/HADOOP-14396
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14396.00.patch
>
>
> Add builder interface for {{FileContext#create}} and {{FileContext#append}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14398) Modify documents for the FileSystem Builder API

2017-06-21 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14398:
---
   Labels: docuentation  (was: )
Affects Version/s: 3.0.0-alpha3
 Target Version/s: 3.0.0-alpha4
 Tags: doc
   Status: Patch Available  (was: Open)

> Modify documents for the FileSystem Builder API
> ---
>
> Key: HADOOP-14398
> URL: https://issues.apache.org/jira/browse/HADOOP-14398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: docuentation
> Attachments: HADOOP-14398.00.patch
>
>
> After finishes the API, we should update the document to describe the 
> interface, capability and contract which the APIs hold. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14398) Modify documents for the FileSystem Builder API

2017-06-21 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14398:
---
Attachment: HADOOP-14398.00.patch

> Modify documents for the FileSystem Builder API
> ---
>
> Key: HADOOP-14398
> URL: https://issues.apache.org/jira/browse/HADOOP-14398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14398.00.patch
>
>
> After finishes the API, we should update the document to describe the 
> interface, capability and contract which the APIs hold. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14495) Add set options interface to FSDataOutputStreamBuilder

2017-06-21 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14495:
---
Attachment: HADOOP-14495.01.patch

Thanks a lot for the suggestions, [~steve_l]

bq. we can/should replace HadoopIllegalArgumentException with the base 
{{IllegalArgumentException}}, for better Preconditions checks.

Are you suggesting to throw {{IllegalArgumentException}} in 
{{FSDataOutputStreamBuilder#builder()}}? OK, I changed it.

Also addressed your other comments in the latest patch. 
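
To illustrate the point about Preconditions: Guava's checks throw the plain 
{{IllegalArgumentException}}, so the builder's argument validation does not need 
the Hadoop-specific wrapper. A minimal sketch, not the actual builder code, with 
a made-up method name:

{code}
import com.google.common.base.Preconditions;

public class BuilderValidationSketch {
  // The kind of check the builder performs; the method name here is hypothetical.
  static void checkBufferSize(int bufferSize) {
    Preconditions.checkArgument(bufferSize > 0,
        "Buffer size must be positive: %s", bufferSize);
  }

  public static void main(String[] args) {
    checkBufferSize(4096);   // passes
    checkBufferSize(-1);     // throws the base IllegalArgumentException
  }
}
{code}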

> Add set options interface to FSDataOutputStreamBuilder 
> ---
>
> Key: HADOOP-14495
> URL: https://issues.apache.org/jira/browse/HADOOP-14495
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14495.00.patch, HADOOP-14495.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14495) Add set options interface to FSDataOutputStreamBuilder

2017-06-20 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14495:
---
Attachment: HADOOP-14495.00.patch

Hey, [~ste...@apache.org] [~andrew.wang]

I uploaded a draft that only provides the API for the {{opt(...)}} and 
{{must(...)}} calls, plus very basic tests that check the existence of keys, 
so that we get the interface right first.

Could you take a look? Thanks!
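
For readers following along, a generic, self-contained sketch of the self-typed 
builder shape that an {{opt(...)}}/{{must(...)}} interface usually takes; it is 
an illustration of the pattern, not the Hadoop class itself:

{code}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

abstract class OptionBuilderSketch<B extends OptionBuilderSketch<B>> {
  private final Map<String, String> options = new HashMap<>();
  private final Set<String> mandatoryKeys = new HashSet<>();

  public B opt(String key, String value) {
    options.put(key, value);         // optional hint: implementations may ignore it
    return getThisBuilder();
  }

  public B must(String key, String value) {
    options.put(key, value);
    mandatoryKeys.add(key);          // implementations must honour or reject these
    return getThisBuilder();
  }

  protected Map<String, String> getOptions() {
    return options;
  }

  protected Set<String> getMandatoryKeys() {
    return mandatoryKeys;
  }

  // Returning the concrete subclass type keeps the fluent chain type-safe.
  protected abstract B getThisBuilder();
}
{code}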

> Add set options interface to FSDataOutputStreamBuilder 
> ---
>
> Key: HADOOP-14495
> URL: https://issues.apache.org/jira/browse/HADOOP-14495
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14495.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14495) Add set options interface to FSDataOutputStreamBuilder

2017-06-20 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14495:
---
Status: Patch Available  (was: Open)

> Add set options interface to FSDataOutputStreamBuilder 
> ---
>
> Key: HADOOP-14495
> URL: https://issues.apache.org/jira/browse/HADOOP-14495
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14495.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14398) Modify documents for the FileSystem Builder API

2017-06-19 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu reassigned HADOOP-14398:
--

Assignee: Lei (Eddy) Xu

> Modify documents for the FileSystem Builder API
> ---
>
> Key: HADOOP-14398
> URL: https://issues.apache.org/jira/browse/HADOOP-14398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>
> After finishes the API, we should update the document to describe the 
> interface, capability and contract which the APIs hold. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14397) Pull up the builder pattern to FileSystem and add AbstractContractCreateTest for it

2017-06-19 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu reassigned HADOOP-14397:
--

Assignee: Lei (Eddy) Xu

> Pull up the builder pattern to FileSystem and add AbstractContractCreateTest 
> for it
> ---
>
> Key: HADOOP-14397
> URL: https://issues.apache.org/jira/browse/HADOOP-14397
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common, fs, hdfs-client
>Affects Versions: 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>
> After reach the stability of the Builder APIs, we should promote the API from 
> {{DistributedFileSystem}} to {{FileSystem}}, and add necessary contract tests 
> to cover the API for all file systems.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14538) Fix TestFilterFileSystem and TestHarFileSystem failures after DistributedFileSystem.append API

2017-06-19 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054346#comment-16054346
 ] 

Lei (Eddy) Xu commented on HADOOP-14538:


Thanks a lot for the review and commit, [~brahmareddy], [~ajisakaa]!

> Fix TestFilterFileSystem and TestHarFileSystem failures after 
> DistributedFileSystem.append API
> --
>
> Key: HADOOP-14538
> URL: https://issues.apache.org/jira/browse/HADOOP-14538
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14538.00.patch
>
>
> Two tests are failed after HADOOP-14395.
> {code}
> TestFilterFileSystem.testFilterFileSystem
> TestHarFileSystem.testInheritedMethodsImplemented
> {code} 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14538) Fix TestFilterFileSystem and TestHarFileSystem failures after DistributedFileSystem.append API

2017-06-18 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14538:
---
Attachment: HADOOP-14538.00.patch

Added the missing overrides to {{FilterFileSystem}} and {{HarFileSystem}} to 
fix the failures in {{TestHarFileSystem}} and {{TestFilterFileSystem}}.
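
The shape of the fix is a plain delegating override; a sketch only, assuming the 
new builder entry point is {{appendFile(Path)}}, and using the raw builder type 
because the exact generic signature differs across versions:

{code}
import org.apache.hadoop.fs.FSDataOutputStreamBuilder;
import org.apache.hadoop.fs.FilterFileSystem;
import org.apache.hadoop.fs.Path;

@SuppressWarnings("rawtypes")
class DelegatingAppendFileSketch extends FilterFileSystem {
  @Override
  public FSDataOutputStreamBuilder appendFile(Path path) {
    // Delegate to the wrapped filesystem so the "all methods overridden" tests pass.
    return fs.appendFile(path);
  }
}
{code}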


> Fix TestFilterFileSystem and TestHarFileSystem failures after 
> DistributedFileSystem.append API
> --
>
> Key: HADOOP-14538
> URL: https://issues.apache.org/jira/browse/HADOOP-14538
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14538.00.patch
>
>
> Two tests are failed after HADOOP-14395.
> {code}
> TestFilterFileSystem.testFilterFileSystem
> TestHarFileSystem.testInheritedMethodsImplemented
> {code} 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14538) Fix TestFilterFileSystem and TestHarFileSystem failures after DistributedFileSystem.append API

2017-06-18 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14538:
---
Status: Patch Available  (was: Open)

> Fix TestFilterFileSystem and TestHarFileSystem failures after 
> DistributedFileSystem.append API
> --
>
> Key: HADOOP-14538
> URL: https://issues.apache.org/jira/browse/HADOOP-14538
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3, 2.8.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14538.00.patch
>
>
> Two tests are failed after HADOOP-14395.
> {code}
> TestFilterFileSystem.testFilterFileSystem
> TestHarFileSystem.testInheritedMethodsImplemented
> {code} 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14395) Provide Builder pattern for DistributedFileSystem.append

2017-06-18 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16053325#comment-16053325
 ] 

Lei (Eddy) Xu commented on HADOOP-14395:


Hi, [~brahmareddy]. Thanks a lot for spotting this.

I filed HADOOP-14538 to fix the failures.

> Provide Builder pattern for DistributedFileSystem.append
> 
>
> Key: HADOOP-14395
> URL: https://issues.apache.org/jira/browse/HADOOP-14395
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14395.00.patch, HADOOP-14395.00-trunk.patch, 
> HADOOP-14395.01.patch, HADOOP-14395.01-trunk.patch, HADOOP-14395.02.patch, 
> HADOOP-14395.02-trunk.patch
>
>
> Follow HADOOP-14394, it should also provide a {{Builder}} API for 
> {{DistributedFileSystem#append}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14538) Fix TestFilterFileSystem and TestHarFileSystem failures after DistributedFileSystem.append API

2017-06-18 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HADOOP-14538:
--

 Summary: Fix TestFilterFileSystem and TestHarFileSystem failures 
after DistributedFileSystem.append API
 Key: HADOOP-14538
 URL: https://issues.apache.org/jira/browse/HADOOP-14538
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 3.0.0-alpha3, 2.8.1
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu


Two tests are failed after HADOOP-14395.

{code}
TestFilterFileSystem.testFilterFileSystem
TestHarFileSystem.testInheritedMethodsImplemented
{code} 





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


