[jira] [Commented] (HADOOP-11461) Namenode stdout log contains IllegalAccessException

2021-10-19 Thread Jonathan Turner Eagles (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-11461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17430733#comment-17430733
 ] 

Jonathan Turner Eagles commented on HADOOP-11461:
-

I've also heard that log4j can take over the default java.util.logging (JUL) 
framework: https://logging.apache.org/log4j/2.x/log4j-jul/index.html. So this 
_might_ be a way to configure everything with log4j, but I have never tried it.
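
If someone wants to try that route, my understanding (untested here) is that the 
bridge is enabled by putting the log4j-jul jar on the classpath and pointing JUL 
at log4j's LogManager, something like:
{noformat}
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
{noformat}
after which the jersey/WADL JUL loggers could be tuned from the log4j 2 
configuration like any other logger.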

> Namenode stdout log contains IllegalAccessException
> ---
>
> Key: HADOOP-11461
> URL: https://issues.apache.org/jira/browse/HADOOP-11461
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.7.0
>Reporter: Mohammad Islam
>Assignee: Mohammad Islam
>Priority: Major
>
> We frequently see the following exception in the namenode out log file.
> {noformat}
> Nov 19, 2014 8:11:19 PM 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator 
> attachTypes
> INFO: Couldn't find JAX-B element for class javax.ws.rs.core.Response
> Nov 19, 2014 8:11:19 PM 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8 
> resolve
> SEVERE: null
> java.lang.IllegalAccessException: Class 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8 can 
> not access a member of class javax.ws.rs.core.Response with modifiers 
> "protected"
> at sun.reflect.Reflection.ensureMemberAccess(Reflection.java:109)
> at java.lang.Class.newInstance(Class.java:368)
> at 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8.resolve(WadlGeneratorJAXBGrammarGenerator.java:467)
> at 
> com.sun.jersey.server.wadl.WadlGenerator$ExternalGrammarDefinition.resolve(WadlGenerator.java:181)
> at 
> com.sun.jersey.server.wadl.ApplicationDescription.resolve(ApplicationDescription.java:81)
> at 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.attachTypes(WadlGeneratorJAXBGrammarGenerator.java:518)
> at com.sun.jersey.server.wadl.WadlBuilder.generate(WadlBuilder.java:124)
> at 
> com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:104)
> at 
> com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:120)
> at 
> com.sun.jersey.server.impl.wadl.WadlMethodFactory$WadlOptionsMethodDispatcher.dispatch(WadlMethodFactory.java:98)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
> at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:384)
> at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:85)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1183)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
> at 

[jira] [Commented] (HADOOP-11461) Namenode stdout log contains IllegalAccessException

2021-10-19 Thread Jonathan Turner Eagles (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-11461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17430720#comment-17430720
 ] 

Jonathan Turner Eagles commented on HADOOP-11461:
-

[~Vichoko],
WadlGeneratorJAXBGrammarGenerator does not use log4j; it uses 
java.util.logging.Logger. We do not configure a java.util.logging properties 
file for Hadoop by default. Perhaps your installation could do this, or perhaps 
Hadoop should add one alongside the log4j.properties file.

(See 
https://www.logicbig.com/tutorials/core-java-tutorial/logging/getting-started.html#configuration,-setting-up-logging-properties
 for pointing the JVM at a properties file via a -D system property, e.g. 
-Djava.util.logging.config.file=hadoop.logging.properties.)

The configuration syntax can be seen in the JRE's default logging properties 
file at jre/lib/logging.properties.

Adding the following line to the global logging.properties file, or to a new 
Hadoop-specific logging properties file, configures java.util.logging.Logger to 
suppress this warning:

{noformat}
com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.level = OFF
{noformat}
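
For example, a minimal hadoop.logging.properties (the file name and handler 
setup here are just an illustration) passed via the -D flag above might look 
like:
{noformat}
# Route JUL output to the console with the default formatter.
handlers = java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level = INFO
# Silence the noisy jersey WADL generator messages.
com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.level = OFF
{noformat}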

> Namenode stdout log contains IllegalAccessException
> ---
>
> Key: HADOOP-11461
> URL: https://issues.apache.org/jira/browse/HADOOP-11461
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.7.0
>Reporter: Mohammad Islam
>Assignee: Mohammad Islam
>Priority: Major
>
> We frequently see the following exception in the namenode out log file.
> {noformat}
> Nov 19, 2014 8:11:19 PM 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator 
> attachTypes
> INFO: Couldn't find JAX-B element for class javax.ws.rs.core.Response
> Nov 19, 2014 8:11:19 PM 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8 
> resolve
> SEVERE: null
> java.lang.IllegalAccessException: Class 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8 can 
> not access a member of class javax.ws.rs.core.Response with modifiers 
> "protected"
> at sun.reflect.Reflection.ensureMemberAccess(Reflection.java:109)
> at java.lang.Class.newInstance(Class.java:368)
> at 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8.resolve(WadlGeneratorJAXBGrammarGenerator.java:467)
> at 
> com.sun.jersey.server.wadl.WadlGenerator$ExternalGrammarDefinition.resolve(WadlGenerator.java:181)
> at 
> com.sun.jersey.server.wadl.ApplicationDescription.resolve(ApplicationDescription.java:81)
> at 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.attachTypes(WadlGeneratorJAXBGrammarGenerator.java:518)
> at com.sun.jersey.server.wadl.WadlBuilder.generate(WadlBuilder.java:124)
> at 
> com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:104)
> at 
> com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:120)
> at 
> com.sun.jersey.server.impl.wadl.WadlMethodFactory$WadlOptionsMethodDispatcher.dispatch(WadlMethodFactory.java:98)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
> at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:384)
> at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:85)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> 

[jira] [Resolved] (HADOOP-17885) Upgrade JSON smart to 1.3.3 on branch-2.10

2021-09-02 Thread Jonathan Turner Eagles (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Turner Eagles resolved HADOOP-17885.
-
   Fix Version/s: 2.10.2
Target Version/s: 2.10.1, 2.10.0  (was: 2.10.0, 2.10.1)
  Resolution: Fixed

+1. Merged to branch-2.10.

> Upgrade JSON smart to 1.3.3 on branch-2.10
> --
>
> Key: HADOOP-17885
> URL: https://issues.apache.org/jira/browse/HADOOP-17885
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.10.0, 2.10.1
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.10.2
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently branch-2.10 is using JSON Smart version 1.3.1, which is vulnerable 
> to [link CVE-2021-27568|https://nvd.nist.gov/vuln/detail/CVE-2021-27568].
> We can upgrade the version to 1.3.3.
> +Description of the vulnerability:+
> {quote}An issue was discovered in netplex json-smart-v1 through 2015-10-23 
> and json-smart-v2 through 2.4. An exception is thrown from a function, but it 
> is not caught, as demonstrated by NumberFormatException. When it is not 
> caught, it may cause programs using the library to crash or expose sensitive 
> information.{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17886) Upgrade ant to 1.10.11

2021-09-02 Thread Jonathan Turner Eagles (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409126#comment-17409126
 ] 

Jonathan Turner Eagles commented on HADOOP-17886:
-

+1 Committed this to trunk, branch-3.3, branch-3.2, branch-2.10.

> Upgrade ant to 1.10.11
> --
>
> Key: HADOOP-17886
> URL: https://issues.apache.org/jira/browse/HADOOP-17886
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0, 3.2.2, 3.4.0, 2.10.2
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Vulnerabilities reported in org.apache.ant:ant:1.10.9
>  * [CVE-2021-36374|https://nvd.nist.gov/vuln/detail/CVE-2021-36374] moderate 
> severity
>  * [CVE-2021-36373|https://nvd.nist.gov/vuln/detail/CVE-2021-36373] moderate 
> severity
> suggested: org.apache.ant:ant ~> 1.10.11



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17886) Upgrade ant to 1.10.11

2021-09-02 Thread Jonathan Turner Eagles (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Turner Eagles resolved HADOOP-17886.
-
Fix Version/s: 3.2.4
   3.3.2
   2.10.2
   3.4.0
   Resolution: Fixed

> Upgrade ant to 1.10.11
> --
>
> Key: HADOOP-17886
> URL: https://issues.apache.org/jira/browse/HADOOP-17886
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0, 3.2.2, 3.4.0, 2.10.2
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 2.10.2, 3.3.2, 3.2.4
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Vulnerabilities reported in org.apache.ant:ant:1.10.9
>  * [CVE-2021-36374|https://nvd.nist.gov/vuln/detail/CVE-2021-36374] moderate 
> severity
>  * [CVE-2021-36373|https://nvd.nist.gov/vuln/detail/CVE-2021-36373] moderate 
> severity
> suggested: org.apache.ant:ant ~> 1.10.11



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11452) Make FileSystem.rename(path, path, options) public, specified, tested

2021-07-22 Thread Jonathan Turner Eagles (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-11452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17385552#comment-17385552
 ] 

Jonathan Turner Eagles commented on HADOOP-11452:
-

[~ste...@apache.org], can we get this PR rebased? I'd like to help review this 
patch to get it checked in.

> Make FileSystem.rename(path, path, options) public, specified, tested
> -
>
> Key: HADOOP-11452
> URL: https://issues.apache.org/jira/browse/HADOOP-11452
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs
>Affects Versions: 2.7.3
>Reporter: Yi Liu
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-11452-001.patch, HADOOP-11452-002.patch, 
> HADOOP-14452-004.patch, HADOOP-14452-branch-2-003.patch
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Currently in {{FileSystem}}, {{rename}} with _Rename options_ is protected 
> and carries a _deprecated_ annotation, and the default implementation is not 
> atomic.
> So this method cannot be used from outside. On the other hand, HDFS has a 
> good and atomic implementation. (Also an interesting thing in {{DFSClient}}: 
> the _deprecated_ annotations for these two methods are opposite.)
> It makes sense to make {{rename}} with _Rename options_ public, since it's 
> atomic for rename+overwrite, and it also saves RPC calls if the user desires 
> rename+overwrite.
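
For context, a rough sketch of the kind of call site this enables (the paths 
here are made up; FileContext already exposes the options-based rename 
publicly, and the ask is equivalent public, specified, tested support on 
FileSystem):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Options.Rename;
import org.apache.hadoop.fs.Path;

public class RenameOverwriteSketch {
  public static void main(String[] args) throws Exception {
    // Rename with OVERWRITE in a single call; on HDFS the underlying
    // implementation is atomic, per the description above.
    FileContext fc = FileContext.getFileContext(new Configuration());
    fc.rename(new Path("/user/a/data"), new Path("/user/a/data.bak"),
        Rename.OVERWRITE);
  }
}
{code}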



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17249) Upgrade jackson-databind to 2.10 on branch-2.10

2020-09-08 Thread Jonathan Turner Eagles (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17192372#comment-17192372
 ] 

Jonathan Turner Eagles commented on HADOOP-17249:
-

Usually with libraries such as jackson, API-breaking changes are NOT taken 
within the same hadoop minor version (say 2.10.0 -> 2.10.1). Instead they are 
taken in a new hadoop minor version (say 2.11.0). Without shading, this will 
impact customers and will make it difficult for downstream products to 
maintain compatibility.

Many of the vulnerabilities that have been found in jackson and others aren't 
a problem in hadoop, as its usage can't be exploited. In this case I would 
suggest a minor version upgrade that contains the fixes.

As for jackson-databind, this version needs to align with the jackson library, 
as there are compatibility issues as well.

Lastly, when upgrading to an incompatible library, please mark the jira as an 
incompatible change to make sure it gains the proper attention.

> Upgrade jackson-databind to 2.10 on branch-2.10
> ---
>
> Key: HADOOP-17249
> URL: https://issues.apache.org/jira/browse/HADOOP-17249
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.10.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This is filed to test backporting HADOOP-16905 to branch-2.10.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17099) Replace Guava Predicate with Java8+ Predicate

2020-07-15 Thread Jonathan Turner Eagles (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Turner Eagles updated HADOOP-17099:

Fix Version/s: 3.1.5
   3.4.0
   3.3.1
   3.2.2
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

+1. Committed this to trunk, branch-3.3, branch-3.2, branch-3.1. Thanks, 
[~ahussein]!

> Replace Guava Predicate with Java8+ Predicate
> -
>
> Key: HADOOP-17099
> URL: https://issues.apache.org/jira/browse/HADOOP-17099
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Minor
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5
>
> Attachments: HADOOP-17099.004.patch, HADOOP-17099.005.patch, 
> HADOOP-17099.006.patch, HADOOP-17099.007.patch
>
>
> {{com.google.common.base.Predicate}} can be replaced with 
> {{java.util.function.Predicate}}. 
> The change involving 9 occurrences is straightforward:
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Predicate' in project with mask 
> '*.java'
> Found Occurrences  (9 usages found)
> org.apache.hadoop.hdfs.server.blockmanagement  (1 usage found)
> CombinedHostFileManager.java  (1 usage found)
> 43 import com.google.common.base.Predicate;
> org.apache.hadoop.hdfs.server.namenode  (1 usage found)
> NameNodeResourceChecker.java  (1 usage found)
> 38 import com.google.common.base.Predicate;
> org.apache.hadoop.hdfs.server.namenode.snapshot  (1 usage found)
> Snapshot.java  (1 usage found)
> 41 import com.google.common.base.Predicate;
> org.apache.hadoop.metrics2.impl  (2 usages found)
> MetricsRecords.java  (1 usage found)
> 21 import com.google.common.base.Predicate;
> TestMetricsSystemImpl.java  (1 usage found)
> 41 import com.google.common.base.Predicate;
> org.apache.hadoop.yarn.logaggregation  (1 usage found)
> AggregatedLogFormat.java  (1 usage found)
> 77 import com.google.common.base.Predicate;
> org.apache.hadoop.yarn.logaggregation.filecontroller  (1 usage found)
> LogAggregationFileController.java  (1 usage found)
> 22 import com.google.common.base.Predicate;
> org.apache.hadoop.yarn.logaggregation.filecontroller.ifile  (1 usage 
> found)
> LogAggregationIndexedFileController.java  (1 usage found)
> 22 import com.google.common.base.Predicate;
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation 
>  (1 usage found)
> AppLogAggregatorImpl.java  (1 usage found)
> 75 import com.google.common.base.Predicate;
> {code}
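
For reference, the mechanical shape of the replacement (a minimal sketch; note 
that Guava's Predicate uses apply() while java.util.function.Predicate uses 
test(), so call sites need that one rename):
{code:java}
import java.util.function.Predicate;

public class PredicateSketch {
  public static void main(String[] args) {
    // Guava: new Predicate<String>() { public boolean apply(String s) { ... } }
    // Java 8 equivalent as a lambda; callers invoke test(...) instead of apply(...).
    Predicate<String> nonEmpty = s -> !s.isEmpty();
    System.out.println(nonEmpty.test("hadoop"));
  }
}
{code}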



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17101) Replace Guava Function with Java8+ Function

2020-07-15 Thread Jonathan Turner Eagles (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Turner Eagles updated HADOOP-17101:

Fix Version/s: 3.1.5
   3.4.0
   3.3.1
   3.2.2
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

+1. Committed to trunk, branch-3.3, branch-3.2, branch-3.1. Thanks, [~ahussein] 
for this patch.

> Replace Guava Function with Java8+ Function
> ---
>
> Key: HADOOP-17101
> URL: https://issues.apache.org/jira/browse/HADOOP-17101
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5
>
> Attachments: HADOOP-17101.005.patch, HADOOP-17101.006.patch, 
> HADOOP-17101.008.patch
>
>
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Function'
> Found Occurrences  (7 usages found)
> hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff  (1 usage found)
> Apache_Hadoop_HDFS_2.6.0.xml  (1 usage found)
> 13603  type="com.google.common.base.Function"
> org.apache.hadoop.hdfs.server.blockmanagement  (1 usage found)
> HostSet.java  (1 usage found)
> 20 import com.google.common.base.Function;
> org.apache.hadoop.hdfs.server.datanode.checker  (1 usage found)
> AbstractFuture.java  (1 usage found)
> 58 * (ListenableFuture, com.google.common.base.Function) 
> Futures.transform}
> org.apache.hadoop.hdfs.server.namenode.ha  (1 usage found)
> HATestUtil.java  (1 usage found)
> 40 import com.google.common.base.Function;
> org.apache.hadoop.hdfs.server.protocol  (1 usage found)
> RemoteEditLog.java  (1 usage found)
> 20 import com.google.common.base.Function;
> org.apache.hadoop.mapreduce.lib.input  (1 usage found)
> TestFileInputFormat.java  (1 usage found)
> 58 import com.google.common.base.Function;
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb  (1 usage found)
> GetApplicationsRequestPBImpl.java  (1 usage found)
> 38 import com.google.common.base.Function;
> {code}
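
For comparison, the Function swap is even more mechanical, since both Guava's 
and java.util's Function expose apply() (a minimal sketch):
{code:java}
import java.util.function.Function;

public class FunctionSketch {
  public static void main(String[] args) {
    // In most call sites only the import changes; apply() keeps its name.
    Function<String, Integer> length = String::length;
    System.out.println(length.apply("hadoop"));
  }
}
{code}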



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-17099) Replace Guava Predicate with Java8+ Predicate

2020-07-14 Thread Jonathan Turner Eagles (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17157562#comment-17157562
 ] 

Jonathan Turner Eagles edited comment on HADOOP-17099 at 7/14/20, 6:26 PM:
---

{code:title=MetricsRecords.java}
I'm not that familiar with the full stream feature set, but could the new 
helper function getFirstFromIterableOrDefault be eliminated by using the 
Stream.findFirst API? It seems that this short-circuit method could prevent 
full evaluation of the list.
{code}


was (Author: jeagles):
{noformat:title= MetricsRecords.java}
I'm not that familiar with all stream feature set, but could new helper 
function getFirstFromIterableOrDefault be eliminated but using the 
Stream.findFirst api. It seems that it could prevent full evaluation of the 
list with this short circuit method.
{noformat}

> Replace Guava Predicate with Java8+ Predicate
> -
>
> Key: HADOOP-17099
> URL: https://issues.apache.org/jira/browse/HADOOP-17099
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Minor
> Attachments: HADOOP-17099.004.patch
>
>
> {{com.google.common.base.Predicate}} can be replaced with 
> {{java.util.function.Predicate}}. 
> The change involving 9 occurrences is straightforward:
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Predicate' in project with mask 
> '*.java'
> Found Occurrences  (9 usages found)
> org.apache.hadoop.hdfs.server.blockmanagement  (1 usage found)
> CombinedHostFileManager.java  (1 usage found)
> 43 import com.google.common.base.Predicate;
> org.apache.hadoop.hdfs.server.namenode  (1 usage found)
> NameNodeResourceChecker.java  (1 usage found)
> 38 import com.google.common.base.Predicate;
> org.apache.hadoop.hdfs.server.namenode.snapshot  (1 usage found)
> Snapshot.java  (1 usage found)
> 41 import com.google.common.base.Predicate;
> org.apache.hadoop.metrics2.impl  (2 usages found)
> MetricsRecords.java  (1 usage found)
> 21 import com.google.common.base.Predicate;
> TestMetricsSystemImpl.java  (1 usage found)
> 41 import com.google.common.base.Predicate;
> org.apache.hadoop.yarn.logaggregation  (1 usage found)
> AggregatedLogFormat.java  (1 usage found)
> 77 import com.google.common.base.Predicate;
> org.apache.hadoop.yarn.logaggregation.filecontroller  (1 usage found)
> LogAggregationFileController.java  (1 usage found)
> 22 import com.google.common.base.Predicate;
> org.apache.hadoop.yarn.logaggregation.filecontroller.ifile  (1 usage 
> found)
> LogAggregationIndexedFileController.java  (1 usage found)
> 22 import com.google.common.base.Predicate;
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation 
>  (1 usage found)
> AppLogAggregatorImpl.java  (1 usage found)
> 75 import com.google.common.base.Predicate;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17099) Replace Guava Predicate with Java8+ Predicate

2020-07-14 Thread Jonathan Turner Eagles (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17157562#comment-17157562
 ] 

Jonathan Turner Eagles commented on HADOOP-17099:
-

{noformat:title=MetricsRecords.java}
I'm not that familiar with the full stream feature set, but could the new 
helper function getFirstFromIterableOrDefault be eliminated by using the 
Stream.findFirst API? It seems that this short-circuit method could prevent 
full evaluation of the list.
{noformat}
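
Roughly what I have in mind (a sketch only; the method name and element type 
below are stand-ins for whatever the patch actually uses, not the real 
MetricsRecords code):
{code:java}
import java.util.Arrays;
import java.util.List;

public class FindFirstSketch {
  // Replaces a hand-written getFirstFromIterableOrDefault-style helper.
  static String firstMatchingOrDefault(List<String> records, String name,
      String dflt) {
    // findFirst() short-circuits: evaluation stops at the first match.
    return records.stream()
        .filter(r -> r.equals(name))
        .findFirst()
        .orElse(dflt);
  }

  public static void main(String[] args) {
    List<String> records = Arrays.asList("a", "b", "c");
    System.out.println(firstMatchingOrDefault(records, "b", "none"));
  }
}
{code}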

> Replace Guava Predicate with Java8+ Predicate
> -
>
> Key: HADOOP-17099
> URL: https://issues.apache.org/jira/browse/HADOOP-17099
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Minor
> Attachments: HADOOP-17099.004.patch
>
>
> {{com.google.common.base.Predicate}} can be replaced with 
> {{java.util.function.Predicate}}. 
> The change involving 9 occurrences is straightforward:
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Predicate' in project with mask 
> '*.java'
> Found Occurrences  (9 usages found)
> org.apache.hadoop.hdfs.server.blockmanagement  (1 usage found)
> CombinedHostFileManager.java  (1 usage found)
> 43 import com.google.common.base.Predicate;
> org.apache.hadoop.hdfs.server.namenode  (1 usage found)
> NameNodeResourceChecker.java  (1 usage found)
> 38 import com.google.common.base.Predicate;
> org.apache.hadoop.hdfs.server.namenode.snapshot  (1 usage found)
> Snapshot.java  (1 usage found)
> 41 import com.google.common.base.Predicate;
> org.apache.hadoop.metrics2.impl  (2 usages found)
> MetricsRecords.java  (1 usage found)
> 21 import com.google.common.base.Predicate;
> TestMetricsSystemImpl.java  (1 usage found)
> 41 import com.google.common.base.Predicate;
> org.apache.hadoop.yarn.logaggregation  (1 usage found)
> AggregatedLogFormat.java  (1 usage found)
> 77 import com.google.common.base.Predicate;
> org.apache.hadoop.yarn.logaggregation.filecontroller  (1 usage found)
> LogAggregationFileController.java  (1 usage found)
> 22 import com.google.common.base.Predicate;
> org.apache.hadoop.yarn.logaggregation.filecontroller.ifile  (1 usage 
> found)
> LogAggregationIndexedFileController.java  (1 usage found)
> 22 import com.google.common.base.Predicate;
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation 
>  (1 usage found)
> AppLogAggregatorImpl.java  (1 usage found)
> 75 import com.google.common.base.Predicate;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17101) Replace Guava Function with Java8+ Function

2020-07-14 Thread Jonathan Turner Eagles (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17157465#comment-17157465
 ] 

Jonathan Turner Eagles commented on HADOOP-17101:
-

Thanks for the updated patch, [~ahussein].

{code:title=HostSet.java}
String sep = "";
while (iter.hasNext()) {
  InetSocketAddress addr = iter.next();
  sb.append(sep + addr.getAddress().getHostAddress()
  + ":" + addr.getPort());
  sep = ",";
}
{code}

I wasn't clear in my earlier comment about unrolling this loop, but this is 
very close. By unrolling it, we can append each substring individually and 
avoid creating intermediate strings. Also, making the separator and ":" chars 
(not strings) is slightly more efficient in general, though it probably won't 
make a huge difference in this code.

{code:title=suggestion}
 sb.append(sep);
 sb.append(addr.getAddress().getHostAddress());
 // Notice single quote below for slightly more efficient char add
 sb.append(':');
 sb.append(addr.getPort());
{code}

Everything else looks great.
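
Putting it together, the whole loop could look something like this (just a 
sketch, using the same iter and sb as the patch; one way to also make the 
separator a char is a boolean "first" flag):
{code:java}
StringBuilder sb = new StringBuilder();
boolean first = true;
while (iter.hasNext()) {
  InetSocketAddress addr = iter.next();
  if (!first) {
    sb.append(',');  // char separator, no empty-string trick needed
  }
  first = false;
  // Append each piece individually; no intermediate concatenated strings.
  sb.append(addr.getAddress().getHostAddress());
  sb.append(':');
  sb.append(addr.getPort());
}
{code}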


> Replace Guava Function with Java8+ Function
> ---
>
> Key: HADOOP-17101
> URL: https://issues.apache.org/jira/browse/HADOOP-17101
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17101.005.patch
>
>
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Function'
> Found Occurrences  (7 usages found)
> hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff  (1 usage found)
> Apache_Hadoop_HDFS_2.6.0.xml  (1 usage found)
> 13603  type="com.google.common.base.Function"
> org.apache.hadoop.hdfs.server.blockmanagement  (1 usage found)
> HostSet.java  (1 usage found)
> 20 import com.google.common.base.Function;
> org.apache.hadoop.hdfs.server.datanode.checker  (1 usage found)
> AbstractFuture.java  (1 usage found)
> 58 * (ListenableFuture, com.google.common.base.Function) 
> Futures.transform}
> org.apache.hadoop.hdfs.server.namenode.ha  (1 usage found)
> HATestUtil.java  (1 usage found)
> 40 import com.google.common.base.Function;
> org.apache.hadoop.hdfs.server.protocol  (1 usage found)
> RemoteEditLog.java  (1 usage found)
> 20 import com.google.common.base.Function;
> org.apache.hadoop.mapreduce.lib.input  (1 usage found)
> TestFileInputFormat.java  (1 usage found)
> 58 import com.google.common.base.Function;
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb  (1 usage found)
> GetApplicationsRequestPBImpl.java  (1 usage found)
> 38 import com.google.common.base.Function;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17101) Replace Guava Function with Java8+ Function

2020-07-13 Thread Jonathan Turner Eagles (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17156839#comment-17156839
 ] 

Jonathan Turner Eagles commented on HADOOP-17101:
-

Let's make the MultiMap replacement its own jira, as it will need special 
treatment.

HostSet.java: Building the string this way isn't very efficient compared to 
what was done before. Instead of creating an intermediate joined string, can we 
just append to the StringBuilder? In fact, we can append the comma (when it is 
not the first element), host, colon, and port individually and make the string 
building even more efficient than before.

checkstyle.xml: I expected to see Guava's Function added to the exclusion list.

RemoteEditLog.java: comments still reference Guava and are now invalid.

GetApplicationsRequestPBImpl.java: one small thing here, and it is fine the way 
it is. But is there a way to leverage the fact that we know the number of 
application states to create a list of the correct size up front (ArrayList vs 
LinkedList)? See the sketch below.
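
On that last point, what I mean is roughly this (a sketch; the method and type 
names are placeholders, not the actual PB impl code):
{code:java}
import java.util.ArrayList;
import java.util.EnumSet;
import java.util.List;
import org.apache.hadoop.yarn.api.records.YarnApplicationState;

public class PresizedListSketch {
  static List<String> toNames(EnumSet<YarnApplicationState> states) {
    // The element count is known up front, so size the ArrayList once
    // and avoid incremental resizing (or LinkedList node overhead).
    List<String> names = new ArrayList<>(states.size());
    for (YarnApplicationState s : states) {
      names.add(s.name());
    }
    return names;
  }
}
{code}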

> Replace Guava Function with Java8+ Function
> ---
>
> Key: HADOOP-17101
> URL: https://issues.apache.org/jira/browse/HADOOP-17101
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17101.001.patch, HADOOP-17101.002.patch, 
> HADOOP-17101.003.patch, HADOOP-17101.004.patch
>
>
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Function'
> Found Occurrences  (7 usages found)
> hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff  (1 usage found)
> Apache_Hadoop_HDFS_2.6.0.xml  (1 usage found)
> 13603  type="com.google.common.base.Function"
> org.apache.hadoop.hdfs.server.blockmanagement  (1 usage found)
> HostSet.java  (1 usage found)
> 20 import com.google.common.base.Function;
> org.apache.hadoop.hdfs.server.datanode.checker  (1 usage found)
> AbstractFuture.java  (1 usage found)
> 58 * (ListenableFuture, com.google.common.base.Function) 
> Futures.transform}
> org.apache.hadoop.hdfs.server.namenode.ha  (1 usage found)
> HATestUtil.java  (1 usage found)
> 40 import com.google.common.base.Function;
> org.apache.hadoop.hdfs.server.protocol  (1 usage found)
> RemoteEditLog.java  (1 usage found)
> 20 import com.google.common.base.Function;
> org.apache.hadoop.mapreduce.lib.input  (1 usage found)
> TestFileInputFormat.java  (1 usage found)
> 58 import com.google.common.base.Function;
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb  (1 usage found)
> GetApplicationsRequestPBImpl.java  (1 usage found)
> 38 import com.google.common.base.Function;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17098) Reduce Guava dependency in Hadoop source code

2020-06-29 Thread Jonathan Turner Eagles (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17147935#comment-17147935
 ] 

Jonathan Turner Eagles commented on HADOOP-17098:
-

I agree these kinds of changes make sense. Why add a dependency for something 
that's built in? One note to be careful about: base64 has several variants 
(alphabet, padding, line wrapping), so the two implementations could produce 
different results depending on which encoder is chosen. This might matter in 
the case of serialization, persistence, or mixed client/server versions.
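
To make the caveat concrete (a sketch; which variants Hadoop actually relies on 
would need checking case by case):
{code:java}
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import com.google.common.io.BaseEncoding;

public class Base64Compare {
  public static void main(String[] args) {
    byte[] data = "hadoop".getBytes(StandardCharsets.UTF_8);
    // The plain encoders agree, but URL-safe alphabets, padding, and line
    // wrapping are where the Guava and JDK flavors can diverge.
    System.out.println(BaseEncoding.base64().encode(data));
    System.out.println(Base64.getEncoder().encodeToString(data));
    System.out.println(BaseEncoding.base64Url().omitPadding().encode(data));
    System.out.println(Base64.getUrlEncoder().withoutPadding()
        .encodeToString(data));
  }
}
{code}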

> Reduce Guava dependency in Hadoop source code
> -
>
> Key: HADOOP-17098
> URL: https://issues.apache.org/jira/browse/HADOOP-17098
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> Relying on Guava implementations in Hadoop has been painful due to 
> compatibility and vulnerability issues.
>  Guava updates tend to break/deprecate APIs. This made it hard to maintain 
> backward compatibility across hadoop versions and clients/downstreams.
> Since 3.x uses Java 8+, the Java 8 features should be preferred to Guava, 
> reducing the footprint and giving stability to the source code.
> This jira should serve as an umbrella toward an incremental effort to reduce 
> the usage of Guava in the source code and to create subtasks to replace Guava 
> classes with Java features.
> Furthermore, it would be good to add a rule to the pre-commit build to warn 
> against introducing new Guava usage in certain modules.
> Anyone willing to take part in this code refactoring has to:
>  # Focus on one module at a time in order to reduce the conflicts and the 
> size of the patch. This will significantly help the reviewers.
>  # Run all the unit tests related to the module being affected by the change. 
> It is critical to verify that any change will not break the unit tests, or 
> cause a stable test case to become flaky.
>  
> A list of sub tasks replacing Guava APIs with java8 features:
> {code:java}
> com.google.common.io.BaseEncoding#base64()      java.util.Base64
> com.google.common.io.BaseEncoding#base64Url()   java.util.Base64
> com.google.common.base.Joiner.on()              java.lang.String#join() or
>                                                 java.util.stream.Collectors#joining()
> com.google.common.base.Optional#of()            java.util.Optional#of()
> com.google.common.base.Optional#absent()        java.util.Optional#empty()
> com.google.common.base.Optional#fromNullable()  java.util.Optional#ofNullable()
> com.google.common.base.Optional                 java.util.Optional
> com.google.common.base.Predicate                java.util.function.Predicate
> com.google.common.base.Function                 java.util.function.Function
> com.google.common.base.Supplier                 java.util.function.Supplier
> {code}
>  
> I also vote for the replacement of {{Preconditions}} with either a wrapper or 
> Apache commons-lang.
> I believe you guys have dealt with Guava compatibilities in the past and 
> probably have better insights. Any thoughts? [~weichiu], [~gabor.bota], 
> [~ste...@apache.org], [~ayushtkn], [~busbey], [~jeagles], [~kihwal]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17088) Failed to load Xinclude files with relative path in case of loading conf via URI

2020-06-24 Thread Jonathan Turner Eagles (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143908#comment-17143908
 ] 

Jonathan Turner Eagles commented on HADOOP-17088:
-

One important security feature is to disallow XML resources from outside of 
the classpath. Does this change enforce that constraint?

> Failed to load Xinclude files with relative path in case of loading conf via 
> URI
> 
>
> Key: HADOOP-17088
> URL: https://issues.apache.org/jira/browse/HADOOP-17088
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yushi Hayasaka
>Priority: Major
>
> When we create a configuration file that loads an external XML file with a 
> relative path, and try to load it by calling `Configuration.addResource` with 
> `Path(URI)`, we get an error (the external XML fails to load) after 
> https://issues.apache.org/jira/browse/HADOOP-14216 was merged.
> {noformat}
> Exception in thread "main" java.lang.RuntimeException: java.io.IOException: 
> Fetch fail on include for 'mountTable.xml' with no fallback while loading 
> 'file:/opt/hadoop/etc/hadoop/core-site.xml'
>   at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3021)
>   at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2973)
>   at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2848)
>   at 
> org.apache.hadoop.conf.Configuration.iterator(Configuration.java:2896)
>   at com.company.test.Main.main(Main.java:29)
> Caused by: java.io.IOException: Fetch fail on include for 'mountTable.xml' 
> with no fallback while loading 'file:/opt/hadoop/etc/hadoop/core-site.xml'
>   at 
> org.apache.hadoop.conf.Configuration$Parser.handleEndElement(Configuration.java:3271)
>   at 
> org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3331)
>   at 
> org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3114)
>   at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3007)
>   ... 4 more
> {noformat}
> The cause is that the URI is passed as a string to the java.io.File 
> constructor, and File does not support a file URI given as a string, so my 
> suggestion is to convert the string to a URI first.
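
To illustrate the reporter's point (a standalone sketch, not the actual 
Configuration code; the path is made up):
{code:java}
import java.io.File;
import java.net.URI;

public class FileUriSketch {
  public static void main(String[] args) {
    String uriString = "file:/opt/hadoop/etc/hadoop/core-site.xml";
    // Passing the URI string straight to File treats it as a plain path name.
    System.out.println(new File(uriString).getAbsolutePath());
    // Converting to a URI first resolves the real local path.
    System.out.println(new File(URI.create(uriString)).getAbsolutePath());
  }
}
{code}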



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15337) RawLocalFileSystem file status permissions can avoid shelling out in some cases

2020-05-13 Thread Jonathan Turner Eagles (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Turner Eagles updated HADOOP-15337:

Attachment: HADOOP-15337.002.patch

> RawLocalFileSystem file status permissions can avoid shelling out in some 
> cases
> ---
>
> Key: HADOOP-15337
> URL: https://issues.apache.org/jira/browse/HADOOP-15337
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Turner Eagles
>Assignee: Jonathan Turner Eagles
>Priority: Major
> Attachments: HADOOP-15337.001.patch, HADOOP-15337.002.patch
>
>
> While investigating YARN-8054, it was noticed that getting file permissions 
> for RawLocalFileSystem can fail due to having too many files open. Upon 
> inspection, this happens when permissions are obtained by launching a shell 
> program (ls -ld on Linux) and parsing the results. With the introduction of 
> Java 7, POSIX file systems can accurately report file permissions without 
> launching a shell program.
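
For reference, the kind of java.nio call this enables (a sketch; note that the 
PosixFilePermission set does not cover the sticky/setuid/setgid bits, which is 
the gap discussed in the follow-up comment):
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class PosixPermsSketch {
  public static void main(String[] args) throws IOException {
    Path p = Paths.get("/tmp");
    // Read the mode bits via the POSIX attribute view instead of forking "ls -ld".
    Set<PosixFilePermission> perms = Files.getPosixFilePermissions(p);
    // Prints something like "rwxrwxrwx"; sticky/setuid/setgid are not represented.
    System.out.println(PosixFilePermissions.toString(perms));
  }
}
{code}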



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15337) RawLocalFileSystem file status permissions can avoid shelling out in some cases

2020-05-13 Thread Jonathan Turner Eagles (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106708#comment-17106708
 ] 

Jonathan Turner Eagles commented on HADOOP-15337:
-

There is still a gap in the patch, as the sticky bit needs to be figured out.

> RawLocalFileSystem file status permissions can avoid shelling out in some 
> cases
> ---
>
> Key: HADOOP-15337
> URL: https://issues.apache.org/jira/browse/HADOOP-15337
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Turner Eagles
>Assignee: Jonathan Turner Eagles
>Priority: Major
> Attachments: HADOOP-15337.001.patch, HADOOP-15337.002.patch
>
>
> While investigating YARN-8054, it was noticed that getting file permissions 
> for RawLocalFileSystem can fail due to having too many files open. Upon 
> inspection, this happens when permissions are obtained by launching a shell 
> program (ls -ld on Linux) and parsing the results. With the introduction of 
> Java 7, POSIX file systems can accurately report file permissions without 
> launching a shell program.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16776) backport HADOOP-16775: distcp copies to s3 are randomly corrupted

2020-03-13 Thread Jonathan Turner Eagles (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17059048#comment-17059048
 ] 

Jonathan Turner Eagles commented on HADOOP-16776:
-

[~ste...@apache.org], do you think we may want to use MonotonicClock to avoid 
other collision issues?

> backport HADOOP-16775: distcp copies to s3 are randomly corrupted
> -
>
> Key: HADOOP-16776
> URL: https://issues.apache.org/jira/browse/HADOOP-16776
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.8.0, 3.0.0, 2.10.0
>Reporter: Amir Shenavandeh
>Assignee: Amir Shenavandeh
>Priority: Blocker
>  Labels: DistCp
> Fix For: 3.1.4, 2.10.1
>
> Attachments: HADOOP-16776-branch-2.8-001.patch, 
> HADOOP-16776-branch-2.8-002.patch
>
>
> This is to back port HADOOP-16775 to hadoop 2.8 branch.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16749) Configuration parsing of CDATA values are blank

2020-01-10 Thread Jonathan Turner Eagles (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Turner Eagles updated HADOOP-16749:

Fix Version/s: 2.10.1
   3.2.2
   3.1.4
   2.9.3
   3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks, [~daryn], for this patch and test. Thanks, [~ste...@apache.org], for 
the review. Committed this jira's patch HADOOP-16749.001.patch to trunk and 
cherry-picked it to branch-3.2 and branch-3.1, and used HADOOP-16749.patch to 
commit to branch-2.10 and cherry-pick to branch-2.9.

> Configuration parsing of CDATA values are blank
> ---
>
> Key: HADOOP-16749
> URL: https://issues.apache.org/jira/browse/HADOOP-16749
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Jonathan Turner Eagles
>Assignee: Daryn Sharp
>Priority: Major
> Fix For: 3.3.0, 2.9.3, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HADOOP-16749.002.patch, HADOOP-16749.patch
>
>
> When using CDATA, the CDATA elements are skipped by the parser.
> In fact someone on Stack Overflow was asking this same question a few months 
> ago.
> https://stackoverflow.com/questions/57829034/why-is-apache-hadoop-configuration-module-ignores-cdata
> {code}
>   <property>
>     <name>test.cdata</name>
>     <value>hello <![CDATA[world]]></value>
>   </property>
> {code}
> conf.get("test.cdata") parses as 'hello ' instead of 'hello world'



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16749) Configuration parsing of CDATA values are blank

2020-01-09 Thread Jonathan Turner Eagles (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012276#comment-17012276
 ] 

Jonathan Turner Eagles commented on HADOOP-16749:
-

[~ste...@apache.org], if you're still +1 on this patch, I will commit and 
cherry-pick to all relevant lines.

> Configuration parsing of CDATA values are blank
> ---
>
> Key: HADOOP-16749
> URL: https://issues.apache.org/jira/browse/HADOOP-16749
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Jonathan Turner Eagles
>Assignee: Daryn Sharp
>Priority: Major
> Attachments: HADOOP-16749.002.patch, HADOOP-16749.patch
>
>
> When using CDATA, the CDATA elements are skipped by the parser.
> In fact someone on Stack Overflow was asking this same question a few months 
> ago.
> https://stackoverflow.com/questions/57829034/why-is-apache-hadoop-configuration-module-ignores-cdata
> {code}
>   <property>
>     <name>test.cdata</name>
>     <value>hello <![CDATA[world]]></value>
>   </property>
> {code}
> conf.get("test.cdata") parses as 'hello ' instead of 'hello world'



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16749) Configuration parsing of CDATA values are blank

2020-01-09 Thread Jonathan Turner Eagles (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Turner Eagles updated HADOOP-16749:

Attachment: HADOOP-16749.002.patch

> Configuration parsing of CDATA values are blank
> ---
>
> Key: HADOOP-16749
> URL: https://issues.apache.org/jira/browse/HADOOP-16749
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Jonathan Turner Eagles
>Assignee: Daryn Sharp
>Priority: Major
> Attachments: HADOOP-16749.002.patch, HADOOP-16749.patch
>
>
> When using CDATA, the CDATA elements are skipped by the parser.
> In fact someone on Stack Overflow was asking this same question a few months 
> ago.
> https://stackoverflow.com/questions/57829034/why-is-apache-hadoop-configuration-module-ignores-cdata
> {code}
>   <property>
>     <name>test.cdata</name>
>     <value>hello <![CDATA[world]]></value>
>   </property>
> {code}
> conf.get("test.cdata") parses as 'hello ' instead of 'hello world'



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16749) Configuration parsing of CDATA values are blank

2020-01-09 Thread Jonathan Turner Eagles (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012206#comment-17012206
 ] 

Jonathan Turner Eagles commented on HADOOP-16749:
-

Posted a trivially rebased patch for [~daryn] to trigger the Hadoop QA build.

> Configuration parsing of CDATA values are blank
> ---
>
> Key: HADOOP-16749
> URL: https://issues.apache.org/jira/browse/HADOOP-16749
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Jonathan Turner Eagles
>Assignee: Daryn Sharp
>Priority: Major
> Attachments: HADOOP-16749.002.patch, HADOOP-16749.patch
>
>
> When using CDATA, the CDATA elements are skipped by the parser.
> In fact someone on Stack Overflow was asking this same question a few months 
> ago.
> https://stackoverflow.com/questions/57829034/why-is-apache-hadoop-configuration-module-ignores-cdata
> {code}
>   <property>
>     <name>test.cdata</name>
>     <value>hello <![CDATA[world]]></value>
>   </property>
> {code}
> conf.get("test.cdata") parses as 'hello ' instead of 'hello world'



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16749) Configuration parsing of CDATA values are blank

2019-12-09 Thread Jonathan Turner Eagles (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16991857#comment-16991857
 ] 

Jonathan Turner Eagles commented on HADOOP-16749:
-

[~daryn], can you update this patch to apply on trunk? This will have my +1 
once it applies correctly.

> Configuration parsing of CDATA values are blank
> ---
>
> Key: HADOOP-16749
> URL: https://issues.apache.org/jira/browse/HADOOP-16749
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Jonathan Turner Eagles
>Assignee: Daryn Sharp
>Priority: Major
> Attachments: HADOOP-16749.patch
>
>
> When using CDATA, the CDATA elements are skipped by the parser.
> In fact someone on Stack Overflow was asking this same question a few months 
> ago.
> https://stackoverflow.com/questions/57829034/why-is-apache-hadoop-configuration-module-ignores-cdata
> {code}
>   <property>
>     <name>test.cdata</name>
>     <value>hello <![CDATA[world]]></value>
>   </property>
> {code}
> conf.get("test.cdata") parses as 'hello ' instead of 'hello world'



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16749) Configuration parsing of CDATA values are blank

2019-12-05 Thread Jonathan Turner Eagles (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Turner Eagles reassigned HADOOP-16749:
---

Assignee: Daryn Sharp

> Configuration parsing of CDATA values are blank
> ---
>
> Key: HADOOP-16749
> URL: https://issues.apache.org/jira/browse/HADOOP-16749
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Jonathan Turner Eagles
>Assignee: Daryn Sharp
>Priority: Major
>
> When using CDATA, the CDATA elements are skipped by the parser.
> In fact someone on Stack Overflow was asking this same question a few months 
> ago.
> https://stackoverflow.com/questions/57829034/why-is-apache-hadoop-configuration-module-ignores-cdata
> {code}
>   <property>
>     <name>test.cdata</name>
>     <value>hello <![CDATA[world]]></value>
>   </property>
> {code}
> conf.get("test.cdata") parses as 'hello ' instead of 'hello world'



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16749) Configuration parsing of CDATA values are blank

2019-12-05 Thread Jonathan Turner Eagles (Jira)
Jonathan Turner Eagles created HADOOP-16749:
---

 Summary: Configuration parsing of CDATA values are blank
 Key: HADOOP-16749
 URL: https://issues.apache.org/jira/browse/HADOOP-16749
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Reporter: Jonathan Turner Eagles


When using CDATA, the CDATA elements are skipped by the parser.

In fact someone on Stack Overflow was asking this same question a few months 
ago.
https://stackoverflow.com/questions/57829034/why-is-apache-hadoop-configuration-module-ignores-cdata

{code}
  <property>
    <name>test.cdata</name>
    <value>hello <![CDATA[world]]></value>
  </property>
{code}

conf.get("test.cdata") parses as 'hello ' instead of 'hello world'



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org