[GitHub] [hadoop] viirya commented on pull request #2297: HADOOP-17125. Using snappy-java in SnappyCodec

2020-09-13 Thread GitBox


viirya commented on pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#issuecomment-691820247


   Looks like CI failed to fetch and install Yetus? @sunchao, do you know how we 
can re-trigger the CI build and testing?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17125) Using snappy-java in SnappyCodec

2020-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17125?focusedWorklogId=483787&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-483787
 ]

ASF GitHub Bot logged work on HADOOP-17125:
---

Author: ASF GitHub Bot
Created on: 14/Sep/20 05:29
Start Date: 14/Sep/20 05:29
Worklog Time Spent: 10m 
  Work Description: viirya commented on pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#issuecomment-691820247


   Looks like CI failed to fetch and install Yetus? @sunchao, do you know how we 
can re-trigger the CI build and testing?





Issue Time Tracking
---

Worklog Id: (was: 483787)
Time Spent: 8h  (was: 7h 50m)

> Using snappy-java in SnappyCodec
> 
>
> Key: HADOOP-17125
> URL: https://issues.apache.org/jira/browse/HADOOP-17125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.3.0
>Reporter: DB Tsai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8h
>  Remaining Estimate: 0h
>
> In Hadoop, we use native libs for the snappy codec, which has several 
> disadvantages:
>  * It requires native *libhadoop* and *libsnappy* to be installed in the system 
> *LD_LIBRARY_PATH*, and they have to be installed separately on each node of 
> the clusters, container images, or local test environments, which adds huge 
> complexity from a deployment point of view. In some environments, it requires 
> compiling the natives from source, which is non-trivial. Also, this approach 
> is platform dependent; the binary may not work on a different platform, so it 
> requires recompilation.
>  * It requires extra configuration of *java.library.path* to load the 
> natives, and it results in higher application deployment and maintenance cost 
> for users.
> Projects such as *Spark* and *Parquet* use 
> [snappy-java|https://github.com/xerial/snappy-java], which is a JNI-based 
> implementation. It contains native binaries for Linux, Mac, and IBM in the 
> jar file, and it can automatically load the native binaries into the JVM from 
> the jar without any setup. If a native implementation cannot be found for a 
> platform, it can fall back to a pure-Java implementation of snappy based on 
> [aircompressor|https://github.com/airlift/aircompressor/tree/master/src/main/java/io/airlift/compress/snappy].
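
For reference, a minimal sketch of the load-and-fallback pattern described 
above (not the PR's actual code; it assumes the snappy-java and aircompressor 
artifacts linked above are on the classpath):

{code:java}
import java.io.IOException;
import java.util.Arrays;

import io.airlift.compress.snappy.SnappyCompressor;

// Sketch: prefer snappy-java's self-extracted native implementation and
// fall back to aircompressor's pure-Java Snappy when native loading fails.
public final class SnappyCompress {
  private static final boolean NATIVE_AVAILABLE = probeNative();

  private static boolean probeNative() {
    try {
      // Any snappy-java call forces extraction/loading of the bundled native lib.
      org.xerial.snappy.Snappy.maxCompressedLength(1);
      return true;
    } catch (Throwable t) { // UnsatisfiedLinkError, SnappyError, ...
      return false;
    }
  }

  public static byte[] compress(byte[] input) throws IOException {
    if (NATIVE_AVAILABLE) {
      return org.xerial.snappy.Snappy.compress(input);
    }
    SnappyCompressor compressor = new SnappyCompressor(); // pure Java
    byte[] out = new byte[compressor.maxCompressedLength(input.length)];
    int n = compressor.compress(input, 0, input.length, out, 0, out.length);
    return Arrays.copyOf(out, n);
  }
}
{code}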



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Created] (HADOOP-17260) ABFS: Test testAbfsStreamOps timing out

2020-09-13 Thread Sneha Vijayarajan (Jira)
Sneha Vijayarajan created HADOOP-17260:
--

 Summary: ABFS: Test testAbfsStreamOps timing out 
 Key: HADOOP-17260
 URL: https://issues.apache.org/jira/browse/HADOOP-17260
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.3.0
Reporter: Sneha Vijayarajan
Assignee: Sneha Vijayarajan


Test testAbfsStreamOps is timing out when log4j settings are at DEBUG/TRACE 
level for AbfsInputStream.

log4j.logger.org.apache.hadoop.fs.azurebfs.services.AbfsInputStream=TRACE

 

org.junit.runners.model.TestTimedOutException: test timed out after 90 milliseconds
 at java.lang.Throwable.getStackTraceElement(Native Method)
 at java.lang.Throwable.getOurStackTrace(Throwable.java:828)
 at java.lang.Throwable.getStackTrace(Throwable.java:817)
 at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.log4j.spi.LocationInfo.<init>(LocationInfo.java:139)
 at org.apache.log4j.spi.LoggingEvent.getLocationInformation(LoggingEvent.java:253)
 at org.apache.log4j.helpers.PatternParser$LocationPatternConverter.convert(PatternParser.java:500)
 at org.apache.log4j.helpers.PatternConverter.format(PatternConverter.java:65)
 at org.apache.log4j.PatternLayout.format(PatternLayout.java:506)
 at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:310)
 at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
 at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
 at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
 at org.apache.log4j.Category.callAppenders(Category.java:206)
 at org.apache.log4j.Category.forcedLog(Category.java:391)
 at org.apache.log4j.Category.log(Category.java:856)
 at org.slf4j.impl.Log4jLoggerAdapter.debug(Log4jLoggerAdapter.java:273)
 at org.apache.hadoop.fs.azurebfs.services.AbfsInputStream.readOneBlock(AbfsInputStream.java:150)
 at org.apache.hadoop.fs.azurebfs.services.AbfsInputStream.read(AbfsInputStream.java:131)
 at org.apache.hadoop.fs.azurebfs.services.AbfsInputStream.read(AbfsInputStream.java:104)
 at java.io.FilterInputStream.read(FilterInputStream.java:83)
 at org.apache.hadoop.fs.azurebfs.AbstractAbfsTestWithTimeout.validateContent(AbstractAbfsTestWithTimeout.java:117)
 at org.apache.hadoop.fs.azurebfs.ITestAbfsStreamStatistics.testAbfsStreamOps(ITestAbfsStreamStatistics.java:155)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
 at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
 at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
 at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at java.lang.Thread.run(Thread.java:748)
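
The frames above show log4j computing caller location for every event: the 
slf4j debug call walks all the way down to Throwable.getStackTraceElement via 
LocationInfo.<init>, because the conversion pattern in use includes a location 
specifier. On a per-block read path that stack walk alone can consume the 
test's time budget. As a rough illustration of what that location lookup does 
(names hypothetical; this is not log4j's actual code):

{code:java}
// Sketch of the per-event work a location-aware log4j pattern (%l, %C, %L)
// triggers: capture the full stack, find the logging facade's frame, and
// report the frame below it as the call site.
public final class LocationCost {
  static StackTraceElement callerBelow(String loggerFqcn) {
    StackTraceElement[] stack = new Throwable().getStackTrace(); // full stack walk
    for (int i = 0; i < stack.length - 1; i++) {
      if (stack[i].getClassName().equals(loggerFqcn)) {
        return stack[i + 1]; // first frame below the logger class
      }
    }
    return null;
  }

  public static void main(String[] args) {
    // One such walk happens for every DEBUG/TRACE line once the level is
    // enabled, e.g. per block read in AbfsInputStream.readOneBlock.
    System.out.println(callerBelow(LocationCost.class.getName()));
  }
}
{code}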







[jira] [Commented] (HADOOP-17255) JavaKeyStoreProvider fails to create a new key if the keystore is HDFS

2020-09-13 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17195181#comment-17195181
 ] 

Akira Ajisaka commented on HADOOP-17255:


Hi [~hexiaoqiao], I think this is not a blocker. Updated the target version 
from 3.2.2 to 3.2.3.

> JavaKeyStoreProvider fails to create a new key if the keystore is HDFS
> --
>
> Key: HADOOP-17255
> URL: https://issues.apache.org/jira/browse/HADOOP-17255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The caller of JavaKeyStoreProvider#renameOrFail assumes that it throws 
> FileNotFoundException if the src does not exist. However, 
> JavaKeyStoreProvider#renameOrFail calls the old rename API. In 
> DistributedFileSystem, the old API returns false if the src does not exist.
> As a result, JavaKeyStoreProvider fails to create a new key if the keystore 
> is HDFS.
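
For illustration, a minimal caller-side sketch of the translation the 
description implies (the method name matches the one discussed, but this is a 
sketch, not the actual patch):

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class RenameUtil {
  // The old FileSystem#rename returns false (rather than throwing) when the
  // source is missing on HDFS, so translate that into the exception the
  // caller expects.
  static void renameOrFail(FileSystem fs, Path src, Path dst)
      throws IOException {
    if (!fs.rename(src, dst)) {
      if (!fs.exists(src)) {
        throw new FileNotFoundException("Rename source " + src + " not found");
      }
      throw new IOException("Rename of " + src + " to " + dst + " failed");
    }
  }
}
{code}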







[jira] [Updated] (HADOOP-17255) JavaKeyStoreProvider fails to create a new key if the keystore is HDFS

2020-09-13 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17255:
---
Target Version/s: 3.3.1, 3.4.0, 3.1.5, 2.10.2, 3.2.3  (was: 3.2.2, 3.3.1, 
3.4.0, 3.1.5, 2.10.2)

> JavaKeyStoreProvider fails to create a new key if the keystore is HDFS
> --
>
> Key: HADOOP-17255
> URL: https://issues.apache.org/jira/browse/HADOOP-17255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The caller of JavaKeyStoreProvider#renameOrFail assumes that it throws 
> FileNotFoundException if the src does not exist. However, 
> JavaKeyStoreProvider#renameOrFail calls the old rename API. In 
> DistributedFileSystem, the old API returns false if the src does not exist.
> As a result, JavaKeyStoreProvider fails to create a new key if the keystore 
> is HDFS.







[GitHub] [hadoop] aajisaka commented on pull request #2267: HDFS-15555. RBF: Refresh cacheNS when SocketException occurs.

2020-09-13 Thread GitBox


aajisaka commented on pull request #2267:
URL: https://github.com/apache/hadoop/pull/2267#issuecomment-691786545


   Filed https://issues.apache.org/jira/browse/HDFS-15575 for adding the test 
cases.









[jira] [Commented] (HADOOP-16211) Update guava to 27.0-jre in hadoop-project branch-3.2

2020-09-13 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17195167#comment-17195167
 ] 

Xiaoqiao He commented on HADOOP-16211:
--

Thanks [~ayushtkn] for involving me here. It seems the fix version is tagged as 
3.2.1, which has already been released. I am not sure if this blocks 3.2.2; any 
suggestions? Thanks.

> Update guava to 27.0-jre in hadoop-project branch-3.2
> -
>
> Key: HADOOP-16211
> URL: https://issues.apache.org/jira/browse/HADOOP-16211
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Fix For: 3.2.1
>
> Attachments: HADOOP-16211-branch-3.2.001.patch, 
> HADOOP-16211-branch-3.2.002.patch, HADOOP-16211-branch-3.2.003.patch, 
> HADOOP-16211-branch-3.2.004.patch, HADOOP-16211-branch-3.2.005.patch, 
> HADOOP-16211-branch-3.2.006.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to a newly found 
> CVE, CVE-2018-10237.
> This is a sub-task for branch-3.2 from HADOOP-15960 to track issues on that 
> particular branch. 







[GitHub] [hadoop] aajisaka commented on pull request #2267: HDFS-15555. RBF: Refresh cacheNS when SocketException occurs.

2020-09-13 Thread GitBox


aajisaka commented on pull request #2267:
URL: https://github.com/apache/hadoop/pull/2267#issuecomment-691776918


   Merged. Thank you @goiri for your review.









[GitHub] [hadoop] aajisaka merged pull request #2267: HDFS-15555. RBF: Refresh cacheNS when SocketException occurs.

2020-09-13 Thread GitBox


aajisaka merged pull request #2267:
URL: https://github.com/apache/hadoop/pull/2267


   









[GitHub] [hadoop] aajisaka edited a comment on pull request #2277: HADOOP-17246. Fix build the hadoop-build Docker image failed

2020-09-13 Thread GitBox


aajisaka edited a comment on pull request #2277:
URL: https://github.com/apache/hadoop/pull/2277#issuecomment-691775602


   > I get it; should we specify the version of astroid?
   
   Yes. Could you try "isort==4.3.21" and "astroid==1.6.6"? They are the latest 
versions that support Python 2.7.









[GitHub] [hadoop] aajisaka commented on pull request #2277: HADOOP-17246. Fix build the hadoop-build Docker image failed

2020-09-13 Thread GitBox


aajisaka commented on pull request #2277:
URL: https://github.com/apache/hadoop/pull/2277#issuecomment-691775602


   Could you try "isort==4.3.21" and "astroid==1.6.6"? They are the latest 
versions that support Python 2.7.









[jira] [Work logged] (HADOOP-17246) Fix build the hadoop-build Docker image failed

2020-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17246?focusedWorklogId=483743&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-483743
 ]

ASF GitHub Bot logged work on HADOOP-17246:
---

Author: ASF GitHub Bot
Created on: 14/Sep/20 02:30
Start Date: 14/Sep/20 02:30
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on pull request #2277:
URL: https://github.com/apache/hadoop/pull/2277#issuecomment-691775602


   Could you try "isort==4.3.21" and "astroid==1.6.6"? They are the latest 
versions that support Python 2.7.





Issue Time Tracking
---

Worklog Id: (was: 483743)
Time Spent: 2h 20m  (was: 2h 10m)

> Fix build the hadoop-build Docker image failed
> --
>
> Key: HADOOP-17246
> URL: https://issues.apache.org/jira/browse/HADOOP-17246
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
>  Labels: dockerfile, pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> When I build the docker-build image under macOS, it fails with:
> {code:java}
> 
> Command "/usr/bin/python -u -c "import setuptools, 
> tokenize;__file__='/tmp/pip-build-vKHcWu/isort/setup.py';exec(compile(getattr(tokenize,
>  'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" 
> install --record /tmp/pip-odL0bY-record/install-record.txt 
> --single-version-externally-managed --compile" failed with error code 1 in 
> /tmp/pip-build-vKHcWu/isort/
> You are using pip version 8.1.1, however version 20.2.2 is available.
> You should consider upgrading via the 'pip install --upgrade pip' command.
> The command '/bin/bash -o pipefail -c pip2 install configparser==4.0.2
>  pylint==1.9.2' returned a non-zero code: 1
> {code}







[jira] [Work logged] (HADOOP-17246) Fix build the hadoop-build Docker image failed

2020-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17246?focusedWorklogId=483744&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-483744
 ]

ASF GitHub Bot logged work on HADOOP-17246:
---

Author: ASF GitHub Bot
Created on: 14/Sep/20 02:30
Start Date: 14/Sep/20 02:30
Worklog Time Spent: 10m 
  Work Description: aajisaka edited a comment on pull request #2277:
URL: https://github.com/apache/hadoop/pull/2277#issuecomment-691775602


   > I get it; should we specify the version of astroid?
   
   Yes. Could you try "isort==4.3.21" and "astroid==1.6.6"? They are the latest 
versions that support Python 2.7.





Issue Time Tracking
---

Worklog Id: (was: 483744)
Time Spent: 2.5h  (was: 2h 20m)

> Fix build the hadoop-build Docker image failed
> --
>
> Key: HADOOP-17246
> URL: https://issues.apache.org/jira/browse/HADOOP-17246
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
>  Labels: dockerfile, pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> When I build the docker-build image under macOS, it fails with:
> {code:java}
> 
> Command "/usr/bin/python -u -c "import setuptools, 
> tokenize;__file__='/tmp/pip-build-vKHcWu/isort/setup.py';exec(compile(getattr(tokenize,
>  'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" 
> install --record /tmp/pip-odL0bY-record/install-record.txt 
> --single-version-externally-managed --compile" failed with error code 1 in 
> /tmp/pip-build-vKHcWu/isort/
> You are using pip version 8.1.1, however version 20.2.2 is available.
> You should consider upgrading via the 'pip install --upgrade pip' command.
> The command '/bin/bash -o pipefail -c pip2 install configparser==4.0.2
>  pylint==1.9.2' returned a non-zero code: 1
> {code}







[GitHub] [hadoop] huangtianhua commented on pull request #2189: HDFS-15025. Applying NVDIMM storage media to HDFS

2020-09-13 Thread GitBox


huangtianhua commented on pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#issuecomment-691761278


   @brahmareddybattula Hi Brahma, would you review it again? Thanks.









[GitHub] [hadoop] huangtianhua commented on pull request #2189: HDFS-15025. Applying NVDIMM storage media to HDFS

2020-09-13 Thread GitBox


huangtianhua commented on pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#issuecomment-691760924


   @liuml07 That sounds good, thank you very much:)









[jira] [Work logged] (HADOOP-17125) Using snappy-java in SnappyCodec

2020-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17125?focusedWorklogId=483707&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-483707
 ]

ASF GitHub Bot logged work on HADOOP-17125:
---

Author: ASF GitHub Bot
Created on: 13/Sep/20 21:45
Start Date: 13/Sep/20 21:45
Worklog Time Spent: 10m 
  Work Description: sunchao commented on pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#issuecomment-691729909


   > So I am not sure which is better: fixing these compilation warnings, or 
ignoring them?
   
   Yeah. Looks to me like we can just ignore these for now and proceed to other 
things in this PR.





Issue Time Tracking
---

Worklog Id: (was: 483707)
Time Spent: 7h 50m  (was: 7h 40m)

> Using snappy-java in SnappyCodec
> 
>
> Key: HADOOP-17125
> URL: https://issues.apache.org/jira/browse/HADOOP-17125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.3.0
>Reporter: DB Tsai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h 50m
>  Remaining Estimate: 0h
>
> In Hadoop, we use native libs for the snappy codec, which has several 
> disadvantages:
>  * It requires native *libhadoop* and *libsnappy* to be installed in the system 
> *LD_LIBRARY_PATH*, and they have to be installed separately on each node of 
> the clusters, container images, or local test environments, which adds huge 
> complexity from a deployment point of view. In some environments, it requires 
> compiling the natives from source, which is non-trivial. Also, this approach 
> is platform dependent; the binary may not work on a different platform, so it 
> requires recompilation.
>  * It requires extra configuration of *java.library.path* to load the 
> natives, and it results in higher application deployment and maintenance cost 
> for users.
> Projects such as *Spark* and *Parquet* use 
> [snappy-java|https://github.com/xerial/snappy-java], which is a JNI-based 
> implementation. It contains native binaries for Linux, Mac, and IBM in the 
> jar file, and it can automatically load the native binaries into the JVM from 
> the jar without any setup. If a native implementation cannot be found for a 
> platform, it can fall back to a pure-Java implementation of snappy based on 
> [aircompressor|https://github.com/airlift/aircompressor/tree/master/src/main/java/io/airlift/compress/snappy].







[GitHub] [hadoop] sunchao commented on pull request #2297: HADOOP-17125. Using snappy-java in SnappyCodec

2020-09-13 Thread GitBox


sunchao commented on pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#issuecomment-691729909


   > So I am not sure which is better: fixing these compilation warnings, or 
ignoring them?
   
   Yeah. Looks to me like we can just ignore these for now and proceed to other 
things in this PR.









[jira] [Commented] (HADOOP-15538) Possible RPC deadlock in Client

2020-09-13 Thread Anthony Baldocchi (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17195121#comment-17195121
 ] 

Anthony Baldocchi commented on HADOOP-15538:


Given the JRE version in the description, could this be another instance of 
[https://bugs.openjdk.java.net/browse/JDK-8215355]?

> Possible RPC deadlock in Client
> ---
>
> Key: HADOOP-15538
> URL: https://issues.apache.org/jira/browse/HADOOP-15538
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Major
> Attachments: t1+13min.jstack, t1.jstack
>
>
> We have a jstack collection that spans 13 minutes. One frame per ~1.5 
> minutes. And for each of the frames, I observed the following:
> {code:java}
> Found one Java-level deadlock:
> =
> "IPC Parameter Sending Thread #294":
>   waiting to lock monitor 0x7f68f21f3188 (object 0x000621745390, a 
> java.lang.Object),
>   which is held by UNKNOWN_owner_addr=0x7f68332e2800
> Java stack information for the threads listed above:
> ===
> "IPC Parameter Sending Thread #294":
> at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:268)
> - waiting to lock <0x000621745390> (a java.lang.Object)
> at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:461)
> - locked <0x000621745380> (a java.lang.Object)
> at 
> org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
> at 
> org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
> at 
> org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
> at 
> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
> at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
> - locked <0x000621749850> (a java.io.BufferedOutputStream)
> at java.io.DataOutputStream.flush(DataOutputStream.java:123)
> at org.apache.hadoop.ipc.Client$Connection$3.run(Client.java:1072)
> - locked <0x00062174b878> (a java.io.DataOutputStream)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Found one Java-level deadlock:
> =
> "IPC Client (297602875) connection to x.y.z.p:8020 from impala":
>   waiting to lock monitor 0x7f68f21f3188 (object 0x000621745390, a 
> java.lang.Object),
>   which is held by UNKNOWN_owner_addr=0x7f68332e2800
> Java stack information for the threads listed above:
> ===
> "IPC Client (297602875) connection to x.y.z.p:8020 from impala":
> at 
> sun.nio.ch.SocketChannelImpl.readerCleanup(SocketChannelImpl.java:279)
> - waiting to lock <0x000621745390> (a java.lang.Object)
> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:390)
> - locked <0x000621745370> (a java.lang.Object)
> at 
> org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
> at 
> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
> at 
> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
> at java.io.FilterInputStream.read(FilterInputStream.java:133)
> at java.io.FilterInputStream.read(FilterInputStream.java:133)
> at 
> org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:553)
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
> - locked <0x0006217476f0> (a java.io.BufferedInputStream)
> at java.io.DataInputStream.readInt(DataInputStream.java:387)
> at 
> org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1113)
> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1006)
> Found 2 deadlocks.
> {code}
> This happens with jdk1.8.0_162 on 2.6.32-696.18.7.el6.x86_64.
> The code appears to match 
> [https://github.com/tuxjdk/jdk8u/blob/master/jdk/src/share/classes/sun/nio/ch/SocketChannelImpl.java].
> The first thread is blocked at:

[jira] [Work logged] (HADOOP-17125) Using snappy-java in SnappyCodec

2020-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17125?focusedWorklogId=483677&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-483677
 ]

ASF GitHub Bot logged work on HADOOP-17125:
---

Author: ASF GitHub Bot
Created on: 13/Sep/20 19:20
Start Date: 13/Sep/20 19:20
Worklog Time Spent: 10m 
  Work Description: viirya edited a comment on pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#issuecomment-691712861


   Checked the cc warnings and the related code. They were committed a long 
time ago (e.g., in 2014) and are not touched here. Many of the cc warnings are 
`warning: dynamic exception specifications are deprecated in C++11 
[-Wdeprecated]`. I guess it is either because we didn't check such warnings 
when the code was committed, or because the compilation tools were upgraded; 
either way, it is not caused by this change. Because we removed some .c and .h 
files, the CI build recompiled the related native code.
   
   So I am not sure which is better: fixing these compilation warnings, or 
ignoring them?
   
   





Issue Time Tracking
---

Worklog Id: (was: 483677)
Time Spent: 7h 40m  (was: 7.5h)

> Using snappy-java in SnappyCodec
> 
>
> Key: HADOOP-17125
> URL: https://issues.apache.org/jira/browse/HADOOP-17125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.3.0
>Reporter: DB Tsai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h 40m
>  Remaining Estimate: 0h
>
> In Hadoop, we use native libs for the snappy codec, which has several 
> disadvantages:
>  * It requires native *libhadoop* and *libsnappy* to be installed in the system 
> *LD_LIBRARY_PATH*, and they have to be installed separately on each node of 
> the clusters, container images, or local test environments, which adds huge 
> complexity from a deployment point of view. In some environments, it requires 
> compiling the natives from source, which is non-trivial. Also, this approach 
> is platform dependent; the binary may not work on a different platform, so it 
> requires recompilation.
>  * It requires extra configuration of *java.library.path* to load the 
> natives, and it results in higher application deployment and maintenance cost 
> for users.
> Projects such as *Spark* and *Parquet* use 
> [snappy-java|https://github.com/xerial/snappy-java], which is a JNI-based 
> implementation. It contains native binaries for Linux, Mac, and IBM in the 
> jar file, and it can automatically load the native binaries into the JVM from 
> the jar without any setup. If a native implementation cannot be found for a 
> platform, it can fall back to a pure-Java implementation of snappy based on 
> [aircompressor|https://github.com/airlift/aircompressor/tree/master/src/main/java/io/airlift/compress/snappy].







[GitHub] [hadoop] viirya commented on pull request #2297: HADOOP-17125. Using snappy-java in SnappyCodec

2020-09-13 Thread GitBox


viirya commented on pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#issuecomment-691712861


   Checked the cc warnings and the related code. They were committed a long 
time ago (e.g., in 2014) and are not touched here. Many of the cc warnings are 
`warning: dynamic exception specifications are deprecated in C++11 
[-Wdeprecated]`. I guess it is either because we didn't check such warnings 
when the code was committed, or because the compilation tools were upgraded; 
either way, it is not caused by this change. Because we removed some .c and .h 
files, the CI build recompiled the related native code.
   
   So I am not sure which is better: fixing these compilation warnings, or 
ignoring them?
   
   









[GitHub] [hadoop] viirya edited a comment on pull request #2297: HADOOP-17125. Using snappy-java in SnappyCodec

2020-09-13 Thread GitBox


viirya edited a comment on pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#issuecomment-691712861


   Checked the cc warnings and the related code. They were committed a long 
time ago (e.g., in 2014) and are not touched here. Many of the cc warnings are 
`warning: dynamic exception specifications are deprecated in C++11 
[-Wdeprecated]`. I guess it is either because we didn't check such warnings 
when the code was committed, or because the compilation tools were upgraded; 
either way, it is not caused by this change. Because we removed some .c and .h 
files, the CI build recompiled the related native code.
   
   So I am not sure which is better: fixing these compilation warnings, or 
ignoring them?
   
   









[jira] [Work logged] (HADOOP-17125) Using snappy-java in SnappyCodec

2020-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17125?focusedWorklogId=483676&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-483676
 ]

ASF GitHub Bot logged work on HADOOP-17125:
---

Author: ASF GitHub Bot
Created on: 13/Sep/20 19:19
Start Date: 13/Sep/20 19:19
Worklog Time Spent: 10m 
  Work Description: viirya commented on pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#issuecomment-691712861


   Checked the cc warnings and the related code. They were committed a long 
time ago (e.g., in 2014) and are not touched here. Many of the cc warnings are 
`warning: dynamic exception specifications are deprecated in C++11 
[-Wdeprecated]`. I guess it is either because we didn't check such warnings 
when the code was committed, or because the compilation tools were upgraded; 
either way, it is not caused by this change. Because we removed some .c and .h 
files, the CI build recompiled the related native code.
   
   So I am not sure which is better: fixing these compilation warnings, or 
ignoring them?
   
   





Issue Time Tracking
---

Worklog Id: (was: 483676)
Time Spent: 7.5h  (was: 7h 20m)

> Using snappy-java in SnappyCodec
> 
>
> Key: HADOOP-17125
> URL: https://issues.apache.org/jira/browse/HADOOP-17125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.3.0
>Reporter: DB Tsai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7.5h
>  Remaining Estimate: 0h
>
> In Hadoop, we use native libs for the snappy codec, which has several 
> disadvantages:
>  * It requires native *libhadoop* and *libsnappy* to be installed in the system 
> *LD_LIBRARY_PATH*, and they have to be installed separately on each node of 
> the clusters, container images, or local test environments, which adds huge 
> complexity from a deployment point of view. In some environments, it requires 
> compiling the natives from source, which is non-trivial. Also, this approach 
> is platform dependent; the binary may not work on a different platform, so it 
> requires recompilation.
>  * It requires extra configuration of *java.library.path* to load the 
> natives, and it results in higher application deployment and maintenance cost 
> for users.
> Projects such as *Spark* and *Parquet* use 
> [snappy-java|https://github.com/xerial/snappy-java], which is a JNI-based 
> implementation. It contains native binaries for Linux, Mac, and IBM in the 
> jar file, and it can automatically load the native binaries into the JVM from 
> the jar without any setup. If a native implementation cannot be found for a 
> platform, it can fall back to a pure-Java implementation of snappy based on 
> [aircompressor|https://github.com/airlift/aircompressor/tree/master/src/main/java/io/airlift/compress/snappy].







[jira] [Commented] (HADOOP-17180) S3Guard: Include 500 DynamoDB system errors in exponential backoff retries

2020-09-13 Thread David Kats (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17195055#comment-17195055
 ] 

David Kats commented on HADOOP-17180:
-

Hi Steve,

thank you for your answer.

We can't easily upgrade to 3.2.0 just to try; we are running a few large 
systems over tens of thousands of cores. Also, HADOOP-15426 doesn't seem to 
address 500 system errors.

If treating 500 as a throttle event gets addressed in 3.3, we'll move to 3.3 
(i.e. this fix doesn't have to be back-ported to 3.1).

We are constantly running into this issue with jobs dying, and all the work 
with AWS so far has yielded nothing; it looks like this should be addressed on 
the S3Guard side.

Other than that, S3Guard works great for us, thanks a lot for a solid product :)

Appreciate your help,

David
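
For illustration, a minimal generic sketch of handling a 500 the way throttling 
is handled, with capped full-jitter exponential backoff. The class and the 
error predicate are hypothetical; S3A's real retry handling lives in its own 
retry-policy classes:

{code:java}
import java.io.IOException;
import java.util.Random;
import java.util.concurrent.Callable;

public final class Backoff {
  private static final Random RANDOM = new Random();

  /** Retry a call on 500-style server errors with exponential backoff. */
  static <T> T withBackoff(Callable<T> call, int maxAttempts) throws Exception {
    long delayMs = 100;                              // initial backoff window
    for (int attempt = 1; ; attempt++) {
      try {
        return call.call();
      } catch (IOException e) {
        if (attempt == maxAttempts || !isInternalServerError(e)) {
          throw e;                                   // give up, or not retryable
        }
        // Full-jitter sleep; the window doubles per attempt, capped at 10s.
        Thread.sleep(1 + RANDOM.nextInt((int) Math.min(delayMs, 10_000L)));
        delayMs *= 2;
      }
    }
  }

  // Hypothetical predicate: in S3Guard this would inspect the
  // AmazonDynamoDBv2 error (Status Code: 500 / InternalServerError).
  private static boolean isInternalServerError(IOException e) {
    return e.getMessage() != null && e.getMessage().contains("Status Code: 500");
  }
}
{code}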

 

> S3Guard: Include 500 DynamoDB system errors in exponential backoff retries
> --
>
> Key: HADOOP-17180
> URL: https://issues.apache.org/jira/browse/HADOOP-17180
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.3
>Reporter: David Kats
>Priority: Major
> Attachments: image-2020-08-03-09-58-54-102.png
>
>
> We get fatal failures from S3guard (that in turn fail our spark jobs) because 
> of the inernal DynamoDB system errors.
> {color:#00}com.amazonaws.services.dynamodbv2.model.InternalServerErrorException:
>  Internal server error (Service: AmazonDynamoDBv2; Status Code: 500; Error 
> Code: InternalServerError; Request ID: 
> 00EBRE6J6V8UGD7040C9DUP2MNVV4KQNSO5AEMVJF66Q9ASUAAJG): Internal server error 
> (Service: AmazonDynamoDBv2; Status Code: 500; Error Code: 
> InternalServerError; Request ID: 
> 00EBRE6J6V8UGD7040C9DUP2MNVV4KQNSO5AEMVJF66Q9ASUAAJG){color}
> {color:#00}The DynamoDB has separate statistic for system errors:{color}
> {color:#00}!image-2020-08-03-09-58-54-102.png!{color}
> {color:#00}I contacted the AWS Support and got an explanation that those 
> 500 errors are returned to the client once DynamoDB gets overwhelmed with 
> client requests.{color}
> {color:#00}So essentially the traffic should have been throttled, but it 
> wasn't, and we got 500 system errors instead.{color}
> {color:#00}My point is that the client should handle those errors just 
> like throttling exceptions - {color}
> {color:#00}with exponential backoff retries.{color}
>  
> {color:#00}Here is more complete exception stack trace:{color}
>  
> *{color:#00}org.apache.hadoop.fs.s3a.AWSServiceIOException: get on 
> s3a://rem-spark/persisted_step_data/15/0afb1ccb73854f1fa55517a77ec7cc5e__b67e2221-f0e3-4c89-90ab-f49618ea4557__SDTopology/parquet.all_ranges/topo_id=321:
>  com.amazonaws.services.dynamodbv2.model.InternalServerErrorException: 
> Internal server error (Service: AmazonDynamoDBv2; Status Code: 500; Error 
> Code: InternalServerError; Request ID: 
> 00EBRE6J6V8UGD7040C9DUP2MNVV4KQNSO5AEMVJF66Q9ASUAAJG): Internal server error 
> (Service: AmazonDynamoDBv2; Status Code: 500; Error Code: 
> InternalServerError; Request ID: 
> 00EBRE6J6V8UGD7040C9DUP2MNVV4KQNSO5AEMVJF66Q9ASUAAJG) 
> at{color}*{color:#00} 
> org.apache.hadoop.fs.s3a.S3AUtils.translateDynamoDBException(S3AUtils.java:389)
>  at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:181) 
> at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111) at 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.get(DynamoDBMetadataStore.java:438)
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2110)
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2088) 
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:1889)
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$9(S3AFileSystem.java:1868)
>  at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109) at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:1868) at 
> org.apache.spark.sql.execution.datasources.InMemoryFileIndex$.org$apache$spark$sql$execution$datasources$InMemoryFileIndex$$listLeafFiles(InMemoryFileIndex.scala:277)
>  at 
> org.apache.spark.sql.execution.datasources.InMemoryFileIndex$$anonfun$3$$anonfun$apply$2.apply(InMemoryFileIndex.scala:207)
>  at 
> org.apache.spark.sql.execution.datasources.InMemoryFileIndex$$anonfun$3$$anonfun$apply$2.apply(InMemoryFileIndex.scala:206)
>  at scala.collection.immutable.Stream.map(Stream.scala:418) at 
> org.apache.spark.sql.execution.datasources.InMemoryFileIndex$$anonfun$3.apply(InMemoryFileIndex.scala:206)
>  at 
> org.apache.spark.sql.execution.datasources.InMemoryFileIndex$$anonfun$3.apply(InMemoryFileIndex.scala:204)
>  at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
>  at 
> 

[jira] [Commented] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2

2020-09-13 Thread Hemanth Boyina (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17195051#comment-17195051
 ] 

Hemanth Boyina commented on HADOOP-17144:
-

thanks for the comment [~iwasakims], sorry for the late response.
{quote}Adding a test case similar to 
TestLz4CompressorDecompressor#testSetInputWithBytesSizeMoreThenDefaultLz4CompressorByfferSize
 for decompressor would make the point clear
{quote}
We do have a test case similar to this scenario in 
TestCompressorDecompressor#testCompressorDecompressorWithExeedBufferLimit. I 
modified the lz4 constructors to use the default buffer size; the compressor 
worked the same way as you mentioned, but the decompressor didn't, as the lz4 
decompressor API returned a negative value for this scenario, which is 
incorrect.

Please correct me if I am missing something here.

> Update Hadoop's lz4 to v1.9.2
> -
>
> Key: HADOOP-17144
> URL: https://issues.apache.org/jira/browse/HADOOP-17144
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Hemanth Boyina
>Assignee: Hemanth Boyina
>Priority: Major
> Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch, 
> HADOOP-17144.003.patch, HADOOP-17144.004.patch
>
>
> Update hadoop's native lz4 to v1.9.2 







[jira] [Commented] (HADOOP-17246) Fix build the hadoop-build Docker image failed

2020-09-13 Thread Wanqiang Ji (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17195035#comment-17195035
 ] 

Wanqiang Ji commented on HADOOP-17246:
--

Hi [~hexiaoqiao], thanks for tracking this. I have discussed it with 
[~aajisaka]; feel free to join the discussion under the PR.

> Fix build the hadoop-build Docker image failed
> --
>
> Key: HADOOP-17246
> URL: https://issues.apache.org/jira/browse/HADOOP-17246
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
>  Labels: dockerfile, pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> When I build the docker-build image under macOS, it fails with:
> {code:java}
> 
> Command "/usr/bin/python -u -c "import setuptools, 
> tokenize;__file__='/tmp/pip-build-vKHcWu/isort/setup.py';exec(compile(getattr(tokenize,
>  'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" 
> install --record /tmp/pip-odL0bY-record/install-record.txt 
> --single-version-externally-managed --compile" failed with error code 1 in 
> /tmp/pip-build-vKHcWu/isort/
> You are using pip version 8.1.1, however version 20.2.2 is available.
> You should consider upgrading via the 'pip install --upgrade pip' command.
> The command '/bin/bash -o pipefail -c pip2 install configparser==4.0.2
>  pylint==1.9.2' returned a non-zero code: 1
> {code}







[GitHub] [hadoop] hadoop-yetus commented on pull request #2225: HDFS-15329. Provide FileContext based ViewFSOverloadScheme implementation

2020-09-13 Thread GitBox


hadoop-yetus commented on pull request #2225:
URL: https://github.com/apache/hadoop/pull/2225#issuecomment-691668958


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 41s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 41s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  30m 17s |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 48s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m 38s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m  6s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m  7s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 59s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 34s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   4m  3s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m 49s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 37s |  the patch passed  |
   | +1 :green_heart: |  compile  |  25m 24s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javac  |  25m 24s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 2 new + 2055 unchanged - 
2 fixed = 2057 total (was 2057)  |
   | +1 :green_heart: |  compile  |  23m 40s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  javac  |  23m 40s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 2 new + 1949 unchanged - 
2 fixed = 1951 total (was 1951)  |
   | +1 :green_heart: |  checkstyle  |   3m 36s |  root: The patch generated 0 
new + 52 unchanged - 1 fixed = 52 total (was 53)  |
   | +1 :green_heart: |  mvnsite  |   3m 44s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  1s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  19m 31s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 11s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  findbugs  |   2m 22s |  hadoop-common-project/hadoop-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 30s |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  | 116m 37s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 56s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 336m  6s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Write to static field 
org.apache.hadoop.fs.viewfs.ViewFs.showMountLinksAsSymlinks from instance 
method new org.apache.hadoop.fs.viewfs.ViewFs(URI, Configuration)  At 
ViewFs.java:from instance method new org.apache.hadoop.fs.viewfs.ViewFs(URI, 
Configuration)  At ViewFs.java:[line 230] |
   | Failed junit tests | hadoop.security.TestRaceWhenRelogin |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
   |   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
   |   | hadoop.hdfs.server.namenode.ha.TestObserverNode |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2225/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 5f5553b9d6e8 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2225: HDFS-15329. Provide FileContext based ViewFSOverloadScheme implementation

2020-09-13 Thread GitBox


hadoop-yetus commented on pull request #2225:
URL: https://github.com/apache/hadoop/pull/2225#issuecomment-691664562


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 58s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 39s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  31m 21s |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 14s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m  6s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 47s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 57s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 31s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m  9s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 23s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 35s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 18s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  20m 18s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 21s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 21s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 46s |  root: The patch generated 0 
new + 52 unchanged - 1 fixed = 52 total (was 53)  |
   | +1 :green_heart: |  mvnsite  |   2m 57s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 36s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 16s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  findbugs  |   2m 22s |  hadoop-common-project/hadoop-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 37s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  |  96m 33s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  3s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 291m 10s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Write to static field 
org.apache.hadoop.fs.viewfs.ViewFs.showMountLinksAsSymlinks from instance 
method new org.apache.hadoop.fs.viewfs.ViewFs(URI, Configuration)  At 
ViewFs.java:from instance method new org.apache.hadoop.fs.viewfs.ViewFs(URI, 
Configuration)  At ViewFs.java:[line 230] |
   | Failed junit tests | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2225/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ddad9c477363 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d2779de3f52 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | findbugs | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2225: HDFS-15329. Provide FileContext based ViewFSOverloadScheme implementation

2020-09-13 Thread GitBox


hadoop-yetus commented on pull request #2225:
URL: https://github.com/apache/hadoop/pull/2225#issuecomment-691664117


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 12s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 29s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 53s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 35s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m  4s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 46s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m  0s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 34s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 42s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 13s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 12s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 24s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 58s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 50s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  18m 50s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 50s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  16m 50s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 40s |  root: The patch generated 0 
new + 52 unchanged - 1 fixed = 52 total (was 53)  |
   | +1 :green_heart: |  mvnsite  |   2m 57s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m  9s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 17s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  findbugs  |   2m 24s |  hadoop-common-project/hadoop-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 39s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  | 106m 56s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  6s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 290m 55s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Write to static field 
org.apache.hadoop.fs.viewfs.ViewFs.showMountLinksAsSymlinks from instance 
method new org.apache.hadoop.fs.viewfs.ViewFs(URI, Configuration)  At 
ViewFs.java:from instance method new org.apache.hadoop.fs.viewfs.ViewFs(URI, 
Configuration)  At ViewFs.java:[line 230] |
   | Failed junit tests | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR |
   |   | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
   |   | hadoop.hdfs.TestGetFileChecksum |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshotWithRandomECPolicy |
   |   | hadoop.hdfs.TestDecommissionWithStriped |
   |   | hadoop.hdfs.TestSetrepDecreasing |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2225/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 5878ee092b11 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d2779de3f52 |
   | Default Java | Private 

[jira] [Commented] (HADOOP-16211) Update guava to 27.0-jre in hadoop-project branch-3.2

2020-09-13 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17194969#comment-17194969
 ] 

Ayush Saxena commented on HADOOP-16211:
---

This ended up upgrading guava from {{11}} to {{27}}, which are two 
incompatible versions. Can we do that in a non-major release? 
 Quoting [~ste...@apache.org] from HADOOP-16210
{quote} what we should always [ensure is] that a release 3.X will work with 
code built on releases 3.(X-1), etc
{quote}
3.2.0 is on {{11}} and later 3.2.x releases would be on {{27}}.

cc [~hexiaoqiao]
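
To make the incompatibility concrete: code compiled against guava 11 can fail 
at runtime on 27 because intermediate releases removed APIs. For example, 
com.google.common.base.Objects.toStringHelper existed in 11 but was removed 
before 27; its replacement, MoreObjects.toStringHelper, only appeared in 18. 
A minimal sketch of the failure mode (the demo class itself is hypothetical; 
the guava API change is real):

{code:java}
import com.google.common.base.MoreObjects;

// Hypothetical demo class, shown only to illustrate the breakage.
public class GuavaCompatDemo {
  public static void main(String[] args) {
    // A jar compiled against guava 11 that calls
    // com.google.common.base.Objects.toStringHelper(...) fails with
    // NoSuchMethodError once guava 27 is on the classpath, because that
    // method was removed in between. The 27-era spelling is:
    String s = MoreObjects.toStringHelper("GuavaCompatDemo")
        .add("guavaMajor", 27)
        .toString();
    System.out.println(s);  // prints GuavaCompatDemo{guavaMajor=27}
  }
}
{code}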


> Update guava to 27.0-jre in hadoop-project branch-3.2
> -
>
> Key: HADOOP-16211
> URL: https://issues.apache.org/jira/browse/HADOOP-16211
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Fix For: 3.2.1
>
> Attachments: HADOOP-16211-branch-3.2.001.patch, 
> HADOOP-16211-branch-3.2.002.patch, HADOOP-16211-branch-3.2.003.patch, 
> HADOOP-16211-branch-3.2.004.patch, HADOOP-16211-branch-3.2.005.patch, 
> HADOOP-16211-branch-3.2.006.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
> CVE-2018-10237.
> This is a sub-task for branch-3.2 from HADOOP-15960 to track issues on that 
> particular branch. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] abhishekdas99 commented on a change in pull request #2225: HDFS-15329. Provide FileContext based ViewFSOverloadScheme implementation

2020-09-13 Thread GitBox


abhishekdas99 commented on a change in pull request #2225:
URL: https://github.com/apache/hadoop/pull/2225#discussion_r487493380



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##
@@ -0,0 +1,218 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+
+import java.net.URISyntaxException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.AbstractFileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME;
+
+/**
+ * This class is the AbstractFileSystem implementation corresponding to
+ * ViewFileSystemOverloadScheme. It extends ViewFs to serve the overloaded
+ * scheme file system. Mount link configurations and in-memory mount table
+ * building behaviors are inherited from ViewFs.
+ * Unlike the ViewFs scheme (viewfs://), users can use any scheme.
+ *
+ * To use this class, the following configurations need to be added in
+ * core-site.xml file.
+ * 1) fs.AbstractFileSystem.<scheme>.impl
+ *    = org.apache.hadoop.fs.viewfs.ViewFsOverloadScheme
+ * 2) fs.viewfs.overload.scheme.target.abstract.<scheme>.impl
+ *    = "<hadoop compatible file system implementation class name for the
+ *    scheme>"
+ *
+ * Here <scheme> can be any scheme, but a hadoop compatible file system must
+ * be available for that scheme. The second configuration value should be the
+ * respective scheme's file system implementation class.
+ * Example: if the scheme is configured with "hdfs", then the 2nd
+ * configuration class name will be org.apache.hadoop.fs.Hdfs.
+ *
+ * Use Case 1:
+ * ===
+ * If users want some of their existing cluster (hdfs://Cluster)
+ * data to mount with other hdfs and object store clusters(hdfs://NN1,
+ * o3fs://bucket1.volume1/, s3a://bucket1/)
+ *
+ * fs.viewfs.mounttable.Cluster.link./user = hdfs://NN1/user
+ * fs.viewfs.mounttable.Cluster.link./data = o3fs://bucket1.volume1/data
+ * fs.viewfs.mounttable.Cluster.link./backup = s3a://bucket1/backup/
+ *
+ * Op1: Create file hdfs://Cluster/user/fileA will go to hdfs://NN1/user/fileA
+ * Op2: Create file hdfs://Cluster/data/datafile will go to
+ *  o3fs://bucket1.volume1/data/datafile
+ * Op3: Create file hdfs://Cluster/backup/data.zip will go to
+ *  s3a://bucket1/backup/data.zip
+ *
+ * Use Case 2:
+ * ===
+ * If users want some of their existing cluster (s3a://bucketA/)
+ * data to mount with other hdfs and object store clusters
+ * (hdfs://NN1, o3fs://bucket1.volume1/)
+ *
+ * fs.viewfs.mounttable.bucketA.link./user = hdfs://NN1/user
+ * fs.viewfs.mounttable.bucketA.link./data = o3fs://bucket1.volume1/data
+ * fs.viewfs.mounttable.bucketA.link./salesDB = s3a://bucketA/salesDB/
+ *
+ * Op1: Create file s3a://bucketA/user/fileA will go to hdfs://NN1/user/fileA
+ * Op2: Create file s3a://bucketA/data/datafile will go to
+ *  o3fs://bucket1.volume1/data/datafile
+ * Op3: Create file s3a://bucketA/salesDB/dbfile will go to
+ *  s3a://bucketA/salesDB/dbfile
+ *
+ * Note:
+ * (1) In ViewFileSystemOverloadScheme, by default the mount links will be
+ * represented as non-symlinks. If you want to change this behavior, please see
+ * {@link ViewFileSystem#listStatus(Path)}
+ * (2) In ViewFileSystemOverloadScheme, only the initialized uri's hostname
+ * will be considered as the mount table name. When the passed uri has
+ * hostname:port, it will simply ignore the port number and only the hostname
+ * will be considered as the mount table name.
+ * (3) If there are no mount links configured with the initializing uri's
+ * hostname as the mount table name, then it will automatically consider the
+ * current uri as the fallback (ex: fs.viewfs.mounttable.<mycluster>.linkFallBack)
+ * target fs uri.
+ */
+
+public class 
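
As a concrete rendering of the javadoc's configuration recipe, here is a 
minimal sketch wiring up Use Case 1 (the mount table name "Cluster" and the 
NN1/bucket targets are the javadoc's illustrative values, not real endpoints):

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;

// Sketch only, following the javadoc above.
public class ViewFsOverloadSchemeExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // 1) Overload the hdfs scheme with ViewFsOverloadScheme.
    conf.set("fs.AbstractFileSystem.hdfs.impl",
        "org.apache.hadoop.fs.viewfs.ViewFsOverloadScheme");
    // 2) Point the overloaded scheme at the real AbstractFileSystem impl.
    conf.set("fs.viewfs.overload.scheme.target.abstract.hdfs.impl",
        "org.apache.hadoop.fs.Hdfs");

    // Mount links for the "Cluster" mount table (Use Case 1).
    conf.set("fs.viewfs.mounttable.Cluster.link./user", "hdfs://NN1/user");
    conf.set("fs.viewfs.mounttable.Cluster.link./data",
        "o3fs://bucket1.volume1/data");
    conf.set("fs.viewfs.mounttable.Cluster.link./backup",
        "s3a://bucket1/backup/");

    // Paths under hdfs://Cluster now resolve through the mount table, e.g.
    // hdfs://Cluster/user/fileA lands on hdfs://NN1/user/fileA.
    FileContext fc =
        FileContext.getFileContext(URI.create("hdfs://Cluster"), conf);
    System.out.println(fc.getDefaultFileSystem().getUri());
  }
}
```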

[GitHub] [hadoop] abhishekdas99 commented on a change in pull request #2225: HDFS-15329. Provide FileContext based ViewFSOverloadScheme implementation

2020-09-13 Thread GitBox


abhishekdas99 commented on a change in pull request #2225:
URL: https://github.com/apache/hadoop/pull/2225#discussion_r487493294



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeWithHdfsScheme.java
##
@@ -0,0 +1,309 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.fs.FileContext;
+import org.apache.hadoop.fs.FileContextTestHelper;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.Hdfs;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RawLocalFileSystem;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.security.AccessControlException;
+import org.apache.hadoop.test.PathUtils;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+
+/**
+ * Tests ViewFsOverloadScheme with configured mount links.
+ */
+public class TestViewFsOverloadSchemeWithHdfsScheme {
+  private static final String FS_IMPL_PATTERN_KEY =
+  "fs.AbstractFileSystem.%s.impl";
+  private static final String HDFS_SCHEME = "hdfs";
+  private Configuration conf = null;
+  private MiniDFSCluster cluster = null;
+  private URI defaultFSURI;
+  private File localTargetDir;
+  private static final String TEST_ROOT_DIR = PathUtils
+  .getTestDirName(TestViewFsOverloadSchemeWithHdfsScheme.class);
+  private static final String HDFS_USER_FOLDER = "/HDFSUser";
+  private static final String LOCAL_FOLDER = "/local";
+
+  /**
+   * Sets up the configurations and starts the MiniDFSCluster.
+   */
+  @Before

Review comment:
   Fixed.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeWithHdfsScheme.java
##
@@ -0,0 +1,309 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.fs.FileContext;
+import org.apache.hadoop.fs.FileContextTestHelper;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.Hdfs;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RawLocalFileSystem;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.security.AccessControlException;
+import org.apache.hadoop.test.PathUtils;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+
+/**

Review comment:
   Fixed.