[jira] [Comment Edited] (HADOOP-15038) Abstract MetadataStore in S3Guard into a common module.

2019-02-26 Thread wujinhu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779007#comment-16779007
 ] 

wujinhu edited comment on HADOOP-15038 at 2/27/19 7:48 AM:
---

Hi [~ste...@apache.org]

[~uncleGen] assigned this task to me, and I will do the abstraction work. It 
seems this has been pending for a long time; do you have any suggestions for 
me? Thanks. 


was (Author: wujinhu):
Hi [~ste...@apache.org]

[~uncleGen] assigned this task to me, and I will do the abstraction work. Do 
you have any suggestions for me? Thanks. 

> Abstract MetadataStore in S3Guard into a common module.
> ---
>
> Key: HADOOP-15038
> URL: https://issues.apache.org/jira/browse/HADOOP-15038
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 3.0.0-beta1
>Reporter: Genmao Yu
>Assignee: wujinhu
>Priority: Major
>
> Open this JIRA to discuss whether we should move {{MetadataStore}} in 
> {{S3Guard}} into a common module. 
> Based on this work, other filesystems and object stores could implement their 
> own metastores for optimization (addressing known issues such as consistency 
> and metadata operation performance). [~ste...@apache.org] and others have done 
> a great deal of foundational work in {{S3Guard}}, which makes this a very 
> helpful starting point. I did some performance testing in HADOOP-14098 and 
> started related work for Aliyun OSS. There is still work to do in {{S3Guard}}, 
> such as the metadata cache becoming inconsistent with S3, and the same problem 
> will affect other object stores; however, we can do that work in parallel.
> [~ste...@apache.org] [~fabbri] [~drankye] Any suggestions are appreciated.
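
For context, here is a minimal sketch of the kind of store-agnostic metadata 
layer this issue proposes. All names and signatures below are illustrative 
assumptions for discussion, not the actual S3Guard {{MetadataStore}} API:

{code:java}
// Hypothetical, simplified shape of a metastore abstraction that could live
// in a common module; each object store (S3, Aliyun OSS, ...) would provide
// its own implementation (e.g. a DynamoDB-backed store for S3).
import java.io.IOException;
import java.util.List;

interface CommonMetadataStore {
  /** Record metadata for a path, e.g. after a successful write. */
  void put(FileMetadata metadata) throws IOException;

  /** Return the recorded metadata, or null if the path is unknown. */
  FileMetadata get(String path) throws IOException;

  /** List the direct children of a directory path. */
  List<FileMetadata> listChildren(String path) throws IOException;

  /** Forget a path, e.g. after a delete against the backing store. */
  void delete(String path) throws IOException;
}

/** Minimal metadata record; a real store would track much more. */
class FileMetadata {
  final String path;
  final long length;
  final long modificationTime;
  final boolean isDirectory;

  FileMetadata(String path, long length, long modificationTime,
               boolean isDirectory) {
    this.path = path;
    this.length = length;
    this.modificationTime = modificationTime;
    this.isDirectory = isDirectory;
  }
}
{code}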






[jira] [Commented] (HADOOP-15038) Abstract MetadataStore in S3Guard into a common module.

2019-02-26 Thread wujinhu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779007#comment-16779007
 ] 

wujinhu commented on HADOOP-15038:
--

Hi [~ste...@apache.org]

[~uncleGen] assigned this task to me, and I will do the abstraction work. Do 
you have any suggestions for me? Thanks. 

> Abstract MetadataStore in S3Guard into a common module.
> ---
>
> Key: HADOOP-15038
> URL: https://issues.apache.org/jira/browse/HADOOP-15038
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 3.0.0-beta1
>Reporter: Genmao Yu
>Assignee: wujinhu
>Priority: Major
>
> Open this JIRA to discuss whether we should move {{MetadataStore}} in 
> {{S3Guard}} into a common module. 
> Based on this work, other filesystems and object stores could implement their 
> own metastores for optimization (addressing known issues such as consistency 
> and metadata operation performance). [~ste...@apache.org] and others have done 
> a great deal of foundational work in {{S3Guard}}, which makes this a very 
> helpful starting point. I did some performance testing in HADOOP-14098 and 
> started related work for Aliyun OSS. There is still work to do in {{S3Guard}}, 
> such as the metadata cache becoming inconsistent with S3, and the same problem 
> will affect other object stores; however, we can do that work in parallel.
> [~ste...@apache.org] [~fabbri] [~drankye] Any suggestions are appreciated.






[jira] [Assigned] (HADOOP-15038) Abstract MetadataStore in S3Guard into a common module.

2019-02-26 Thread Genmao Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu reassigned HADOOP-15038:
--

Assignee: wujinhu

> Abstract MetadataStore in S3Guard into a common module.
> ---
>
> Key: HADOOP-15038
> URL: https://issues.apache.org/jira/browse/HADOOP-15038
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 3.0.0-beta1
>Reporter: Genmao Yu
>Assignee: wujinhu
>Priority: Major
>
> Open this JIRA to discuss whether we should move {{MetadataStore}} in 
> {{S3Guard}} into a common module. 
> Based on this work, other filesystems and object stores could implement their 
> own metastores for optimization (addressing known issues such as consistency 
> and metadata operation performance). [~ste...@apache.org] and others have done 
> a great deal of foundational work in {{S3Guard}}, which makes this a very 
> helpful starting point. I did some performance testing in HADOOP-14098 and 
> started related work for Aliyun OSS. There is still work to do in {{S3Guard}}, 
> such as the metadata cache becoming inconsistent with S3, and the same problem 
> will affect other object stores; however, we can do that work in parallel.
> [~ste...@apache.org] [~fabbri] [~drankye] Any suggestions are appreciated.






[jira] [Updated] (HADOOP-16151) pip install pylint fails in branch-2.8 and branch-2.9

2019-02-26 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16151:
---
Description: 
The precommit build fails in branch-2.8:
https://builds.apache.org/job/PreCommit-HADOOP-Build/15982/consoleFull
{noformat}
Step 24/31 : RUN pip install pylint==1.9.2
 ---> Running in c0bed03b7115
Downloading/unpacking pylint==1.9.2
Downloading/unpacking configparser (from pylint==1.9.2)
  Downloading configparser-3.7.3-py2.py3-none-any.whl
Downloading/unpacking backports.functools-lru-cache (from pylint==1.9.2)
  Downloading backports.functools_lru_cache-1.5-py2.py3-none-any.whl
Requirement already satisfied (use --upgrade to upgrade): six in 
/usr/lib/python2.7/dist-packages (from pylint==1.9.2)
Downloading/unpacking isort>=4.2.5 (from pylint==1.9.2)
  Running setup.py (path:/tmp/pip_build_root/isort/setup.py) egg_info for 
package isort
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution 
option: 'python_requires'
  warnings.warn(msg)
error in isort setup command: 'install_requires' must be a string or list 
of strings containing valid project/version requirement specifiers
Complete output from command python setup.py egg_info:
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution 
option: 'python_requires'

  warnings.warn(msg)

error in isort setup command: 'install_requires' must be a string or list of 
strings containing valid project/version requirement specifiers


Cleaning up...
Command python setup.py egg_info failed with error code 1 in 
/tmp/pip_build_root/isort
Storing debug log for failure in /root/.pip/pip.log
The command '/bin/sh -c pip install pylint==1.9.2' returned a non-zero code: 1
{noformat}

  was:
https://builds.apache.org/job/PreCommit-HADOOP-Build/15982/consoleFull
{noformat}
Step 24/31 : RUN pip install pylint==1.9.2
 ---> Running in c0bed03b7115
Downloading/unpacking pylint==1.9.2
Downloading/unpacking configparser (from pylint==1.9.2)
  Downloading configparser-3.7.3-py2.py3-none-any.whl
Downloading/unpacking backports.functools-lru-cache (from pylint==1.9.2)
  Downloading backports.functools_lru_cache-1.5-py2.py3-none-any.whl
Requirement already satisfied (use --upgrade to upgrade): six in 
/usr/lib/python2.7/dist-packages (from pylint==1.9.2)
Downloading/unpacking isort>=4.2.5 (from pylint==1.9.2)
  Running setup.py (path:/tmp/pip_build_root/isort/setup.py) egg_info for 
package isort
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution 
option: 'python_requires'
  warnings.warn(msg)
error in isort setup command: 'install_requires' must be a string or list 
of strings containing valid project/version requirement specifiers
Complete output from command python setup.py egg_info:
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution 
option: 'python_requires'

  warnings.warn(msg)

error in isort setup command: 'install_requires' must be a string or list of 
strings containing valid project/version requirement specifiers


Cleaning up...
Command python setup.py egg_info failed with error code 1 in 
/tmp/pip_build_root/isort
Storing debug log for failure in /root/.pip/pip.log
The command '/bin/sh -c pip install pylint==1.9.2' returned a non-zero code: 1
{noformat}


> pip install pylint fails in branch-2.8 and branch-2.9
> -
>
> Key: HADOOP-16151
> URL: https://issues.apache.org/jira/browse/HADOOP-16151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-16151-branch-2.8-01.patch, 
> HADOOP-16151-branch-2.8-02.patch, HADOOP-16151-branch-2.9-02.patch
>
>
> The precommit build fails in branch-2.8:
> https://builds.apache.org/job/PreCommit-HADOOP-Build/15982/consoleFull
> {noformat}
> Step 24/31 : RUN pip install pylint==1.9.2
>  ---> Running in c0bed03b7115
> Downloading/unpacking pylint==1.9.2
> Downloading/unpacking configparser (from pylint==1.9.2)
>   Downloading configparser-3.7.3-py2.py3-none-any.whl
> Downloading/unpacking backports.functools-lru-cache (from pylint==1.9.2)
>   Downloading backports.functools_lru_cache-1.5-py2.py3-none-any.whl
> Requirement already satisfied (use --upgrade to upgrade): six in 
> /usr/lib/python2.7/dist-packages (from pylint==1.9.2)
> Downloading/unpacking isort>=4.2.5 (from pylint==1.9.2)
>   Running setup.py (path:/tmp/pip_build_root/isort/setup.py) egg_info for 
> package isort
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' 

[jira] [Commented] (HADOOP-16151) pip install pylint fails in branch-2.8 and branch-2.9

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778984#comment-16778984
 ] 

Hadoop QA commented on HADOOP-16151:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 28m  
7s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} hadolint {color} | {color:blue}  0m  
0s{color} | {color:blue} hadolint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} branch-2.9 Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
8s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:78f1974 |
| JIRA Issue | HADOOP-16151 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960306/HADOOP-16151-branch-2.9-02.patch
 |
| Optional Tests |  dupname  asflicense  hadolint  shellcheck  shelldocs  |
| uname | Linux 95dd85f26e85 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2.9 / 59a65c3 |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| shellcheck | v0.4.7 |
| Max. process+thread count | 34 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15985/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> pip install pylint fails in branch-2.8 and branch-2.9
> -
>
> Key: HADOOP-16151
> URL: https://issues.apache.org/jira/browse/HADOOP-16151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-16151-branch-2.8-01.patch, 
> HADOOP-16151-branch-2.8-02.patch, HADOOP-16151-branch-2.9-02.patch
>
>
> https://builds.apache.org/job/PreCommit-HADOOP-Build/15982/consoleFull
> {noformat}
> Step 24/31 : RUN pip install pylint==1.9.2
>  ---> Running in c0bed03b7115
> Downloading/unpacking pylint==1.9.2
> Downloading/unpacking configparser (from pylint==1.9.2)
>   Downloading configparser-3.7.3-py2.py3-none-any.whl
> Downloading/unpacking backports.functools-lru-cache (from pylint==1.9.2)
>   Downloading backports.functools_lru_cache-1.5-py2.py3-none-any.whl
> Requirement already satisfied (use --upgrade to upgrade): six in 
> /usr/lib/python2.7/dist-packages (from pylint==1.9.2)
> Downloading/unpacking isort>=4.2.5 (from pylint==1.9.2)
>   Running setup.py (path:/tmp/pip_build_root/isort/setup.py) egg_info for 
> package isort
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list 
> of strings containing valid project/version requirement specifiers
> Complete output from command python setup.py egg_info:
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list of 
> strings containing valid project/version requirement specifiers
> 
> Cleaning up...
> Command python setup.py egg_info failed with error code 1 in 
> /tmp/pip_build_root/isort
> Storing debug log for failure in /root/.pip/pip.log
> The command '/bin/sh -c pip install pylint==1.9.2' returned a non-zero code: 1
> {noformat}




[jira] [Commented] (HADOOP-16151) pip install pylint fails in branch-2.8 and branch-2.9

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778963#comment-16778963
 ] 

Hadoop QA commented on HADOOP-16151:


(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15985/console in case of 
problems.


> pip install pylint fails in branch-2.8 and branch-2.9
> -
>
> Key: HADOOP-16151
> URL: https://issues.apache.org/jira/browse/HADOOP-16151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-16151-branch-2.8-01.patch, 
> HADOOP-16151-branch-2.8-02.patch, HADOOP-16151-branch-2.9-02.patch
>
>
> https://builds.apache.org/job/PreCommit-HADOOP-Build/15982/consoleFull
> {noformat}
> Step 24/31 : RUN pip install pylint==1.9.2
>  ---> Running in c0bed03b7115
> Downloading/unpacking pylint==1.9.2
> Downloading/unpacking configparser (from pylint==1.9.2)
>   Downloading configparser-3.7.3-py2.py3-none-any.whl
> Downloading/unpacking backports.functools-lru-cache (from pylint==1.9.2)
>   Downloading backports.functools_lru_cache-1.5-py2.py3-none-any.whl
> Requirement already satisfied (use --upgrade to upgrade): six in 
> /usr/lib/python2.7/dist-packages (from pylint==1.9.2)
> Downloading/unpacking isort>=4.2.5 (from pylint==1.9.2)
>   Running setup.py (path:/tmp/pip_build_root/isort/setup.py) egg_info for 
> package isort
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list 
> of strings containing valid project/version requirement specifiers
> Complete output from command python setup.py egg_info:
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list of 
> strings containing valid project/version requirement specifiers
> 
> Cleaning up...
> Command python setup.py egg_info failed with error code 1 in 
> /tmp/pip_build_root/isort
> Storing debug log for failure in /root/.pip/pip.log
> The command '/bin/sh -c pip install pylint==1.9.2' returned a non-zero code: 1
> {noformat}






[jira] [Updated] (HADOOP-16151) pip install pylint fails in branch-2.8 and branch-2.9

2019-02-26 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16151:
---
Attachment: HADOOP-16151-branch-2.9-02.patch

> pip install pylint fails in branch-2.8 and branch-2.9
> -
>
> Key: HADOOP-16151
> URL: https://issues.apache.org/jira/browse/HADOOP-16151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-16151-branch-2.8-01.patch, 
> HADOOP-16151-branch-2.8-02.patch, HADOOP-16151-branch-2.9-02.patch
>
>
> https://builds.apache.org/job/PreCommit-HADOOP-Build/15982/consoleFull
> {noformat}
> Step 24/31 : RUN pip install pylint==1.9.2
>  ---> Running in c0bed03b7115
> Downloading/unpacking pylint==1.9.2
> Downloading/unpacking configparser (from pylint==1.9.2)
>   Downloading configparser-3.7.3-py2.py3-none-any.whl
> Downloading/unpacking backports.functools-lru-cache (from pylint==1.9.2)
>   Downloading backports.functools_lru_cache-1.5-py2.py3-none-any.whl
> Requirement already satisfied (use --upgrade to upgrade): six in 
> /usr/lib/python2.7/dist-packages (from pylint==1.9.2)
> Downloading/unpacking isort>=4.2.5 (from pylint==1.9.2)
>   Running setup.py (path:/tmp/pip_build_root/isort/setup.py) egg_info for 
> package isort
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list 
> of strings containing valid project/version requirement specifiers
> Complete output from command python setup.py egg_info:
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list of 
> strings containing valid project/version requirement specifiers
> 
> Cleaning up...
> Command python setup.py egg_info failed with error code 1 in 
> /tmp/pip_build_root/isort
> Storing debug log for failure in /root/.pip/pip.log
> The command '/bin/sh -c pip install pylint==1.9.2' returned a non-zero code: 1
> {noformat}






[jira] [Updated] (HADOOP-16151) pip install pylint fails in branch-2.8 and branch-2.9

2019-02-26 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16151:
---
Attachment: HADOOP-16151-branch-2.8-02.patch

> pip install pylint fails in branch-2.8 and branch-2.9
> -
>
> Key: HADOOP-16151
> URL: https://issues.apache.org/jira/browse/HADOOP-16151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-16151-branch-2.8-01.patch, 
> HADOOP-16151-branch-2.8-02.patch
>
>
> https://builds.apache.org/job/PreCommit-HADOOP-Build/15982/consoleFull
> {noformat}
> Step 24/31 : RUN pip install pylint==1.9.2
>  ---> Running in c0bed03b7115
> Downloading/unpacking pylint==1.9.2
> Downloading/unpacking configparser (from pylint==1.9.2)
>   Downloading configparser-3.7.3-py2.py3-none-any.whl
> Downloading/unpacking backports.functools-lru-cache (from pylint==1.9.2)
>   Downloading backports.functools_lru_cache-1.5-py2.py3-none-any.whl
> Requirement already satisfied (use --upgrade to upgrade): six in 
> /usr/lib/python2.7/dist-packages (from pylint==1.9.2)
> Downloading/unpacking isort>=4.2.5 (from pylint==1.9.2)
>   Running setup.py (path:/tmp/pip_build_root/isort/setup.py) egg_info for 
> package isort
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list 
> of strings containing valid project/version requirement specifiers
> Complete output from command python setup.py egg_info:
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list of 
> strings containing valid project/version requirement specifiers
> 
> Cleaning up...
> Command python setup.py egg_info failed with error code 1 in 
> /tmp/pip_build_root/isort
> Storing debug log for failure in /root/.pip/pip.log
> The command '/bin/sh -c pip install pylint==1.9.2' returned a non-zero code: 1
> {noformat}






[jira] [Commented] (HADOOP-16151) pip install pylint fails in branch-2.8 and branch-2.9

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778958#comment-16778958
 ] 

Hadoop QA commented on HADOOP-16151:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} hadolint {color} | {color:blue}  0m  
0s{color} | {color:blue} hadolint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} branch-2.8 Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
8s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  1m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ae3769f |
| JIRA Issue | HADOOP-16151 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960303/HADOOP-16151-branch-2.8-02.patch
 |
| Optional Tests |  dupname  asflicense  hadolint  shellcheck  shelldocs  |
| uname | Linux 96d5cc88641e 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2.8 / 4354157 |
| maven | version: Apache Maven 3.0.5 |
| shellcheck | v0.4.7 |
| Max. process+thread count | 34 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15984/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> pip install pylint fails in branch-2.8 and branch-2.9
> -
>
> Key: HADOOP-16151
> URL: https://issues.apache.org/jira/browse/HADOOP-16151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-16151-branch-2.8-01.patch, 
> HADOOP-16151-branch-2.8-02.patch
>
>
> https://builds.apache.org/job/PreCommit-HADOOP-Build/15982/consoleFull
> {noformat}
> Step 24/31 : RUN pip install pylint==1.9.2
>  ---> Running in c0bed03b7115
> Downloading/unpacking pylint==1.9.2
> Downloading/unpacking configparser (from pylint==1.9.2)
>   Downloading configparser-3.7.3-py2.py3-none-any.whl
> Downloading/unpacking backports.functools-lru-cache (from pylint==1.9.2)
>   Downloading backports.functools_lru_cache-1.5-py2.py3-none-any.whl
> Requirement already satisfied (use --upgrade to upgrade): six in 
> /usr/lib/python2.7/dist-packages (from pylint==1.9.2)
> Downloading/unpacking isort>=4.2.5 (from pylint==1.9.2)
>   Running setup.py (path:/tmp/pip_build_root/isort/setup.py) egg_info for 
> package isort
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list 
> of strings containing valid project/version requirement specifiers
> Complete output from command python setup.py egg_info:
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list of 
> strings containing valid project/version requirement specifiers
> 
> Cleaning up...
> Command python setup.py egg_info failed with error code 1 in 
> /tmp/pip_build_root/isort
> Storing debug log for failure in /root/.pip/pip.log
> The command '/bin/sh -c pip install pylint==1.9.2' returned a non-zero code: 1
> {noformat}




[jira] [Commented] (HADOOP-16151) pip install pylint fails in branch-2.8 and branch-2.9

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778956#comment-16778956
 ] 

Hadoop QA commented on HADOOP-16151:


(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15984/console in case of 
problems.


> pip install pylint fails in branch-2.8 and branch-2.9
> -
>
> Key: HADOOP-16151
> URL: https://issues.apache.org/jira/browse/HADOOP-16151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-16151-branch-2.8-01.patch, 
> HADOOP-16151-branch-2.8-02.patch
>
>
> https://builds.apache.org/job/PreCommit-HADOOP-Build/15982/consoleFull
> {noformat}
> Step 24/31 : RUN pip install pylint==1.9.2
>  ---> Running in c0bed03b7115
> Downloading/unpacking pylint==1.9.2
> Downloading/unpacking configparser (from pylint==1.9.2)
>   Downloading configparser-3.7.3-py2.py3-none-any.whl
> Downloading/unpacking backports.functools-lru-cache (from pylint==1.9.2)
>   Downloading backports.functools_lru_cache-1.5-py2.py3-none-any.whl
> Requirement already satisfied (use --upgrade to upgrade): six in 
> /usr/lib/python2.7/dist-packages (from pylint==1.9.2)
> Downloading/unpacking isort>=4.2.5 (from pylint==1.9.2)
>   Running setup.py (path:/tmp/pip_build_root/isort/setup.py) egg_info for 
> package isort
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list 
> of strings containing valid project/version requirement specifiers
> Complete output from command python setup.py egg_info:
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list of 
> strings containing valid project/version requirement specifiers
> 
> Cleaning up...
> Command python setup.py egg_info failed with error code 1 in 
> /tmp/pip_build_root/isort
> Storing debug log for failure in /root/.pip/pip.log
> The command '/bin/sh -c pip install pylint==1.9.2' returned a non-zero code: 1
> {noformat}






[jira] [Comment Edited] (HADOOP-16151) pip install pylint fails in branch-2.8 and branch-2.9

2019-02-26 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778950#comment-16778950
 ] 

Akira Ajisaka edited comment on HADOOP-16151 at 2/27/19 6:25 AM:
-

02 patch: Use isort 4.3.8 instead of the latest version (4.3.9).


was (Author: ajisakaa):
Use isort 4.3.8 instead of the latest version (4.3.9).

> pip install pylint fails in branch-2.8 and branch-2.9
> -
>
> Key: HADOOP-16151
> URL: https://issues.apache.org/jira/browse/HADOOP-16151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-16151-branch-2.8-01.patch, 
> HADOOP-16151-branch-2.8-02.patch
>
>
> https://builds.apache.org/job/PreCommit-HADOOP-Build/15982/consoleFull
> {noformat}
> Step 24/31 : RUN pip install pylint==1.9.2
>  ---> Running in c0bed03b7115
> Downloading/unpacking pylint==1.9.2
> Downloading/unpacking configparser (from pylint==1.9.2)
>   Downloading configparser-3.7.3-py2.py3-none-any.whl
> Downloading/unpacking backports.functools-lru-cache (from pylint==1.9.2)
>   Downloading backports.functools_lru_cache-1.5-py2.py3-none-any.whl
> Requirement already satisfied (use --upgrade to upgrade): six in 
> /usr/lib/python2.7/dist-packages (from pylint==1.9.2)
> Downloading/unpacking isort>=4.2.5 (from pylint==1.9.2)
>   Running setup.py (path:/tmp/pip_build_root/isort/setup.py) egg_info for 
> package isort
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list 
> of strings containing valid project/version requirement specifiers
> Complete output from command python setup.py egg_info:
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list of 
> strings containing valid project/version requirement specifiers
> 
> Cleaning up...
> Command python setup.py egg_info failed with error code 1 in 
> /tmp/pip_build_root/isort
> Storing debug log for failure in /root/.pip/pip.log
> The command '/bin/sh -c pip install pylint==1.9.2' returned a non-zero code: 1
> {noformat}






[jira] [Commented] (HADOOP-16151) pip install pylint fails in branch-2.8 and branch-2.9

2019-02-26 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778950#comment-16778950
 ] 

Akira Ajisaka commented on HADOOP-16151:


Use isort 4.3.8 instead of the latest version (4.3.9).

> pip install pylint fails in branch-2.8 and branch-2.9
> -
>
> Key: HADOOP-16151
> URL: https://issues.apache.org/jira/browse/HADOOP-16151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-16151-branch-2.8-01.patch, 
> HADOOP-16151-branch-2.8-02.patch
>
>
> https://builds.apache.org/job/PreCommit-HADOOP-Build/15982/consoleFull
> {noformat}
> Step 24/31 : RUN pip install pylint==1.9.2
>  ---> Running in c0bed03b7115
> Downloading/unpacking pylint==1.9.2
> Downloading/unpacking configparser (from pylint==1.9.2)
>   Downloading configparser-3.7.3-py2.py3-none-any.whl
> Downloading/unpacking backports.functools-lru-cache (from pylint==1.9.2)
>   Downloading backports.functools_lru_cache-1.5-py2.py3-none-any.whl
> Requirement already satisfied (use --upgrade to upgrade): six in 
> /usr/lib/python2.7/dist-packages (from pylint==1.9.2)
> Downloading/unpacking isort>=4.2.5 (from pylint==1.9.2)
>   Running setup.py (path:/tmp/pip_build_root/isort/setup.py) egg_info for 
> package isort
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list 
> of strings containing valid project/version requirement specifiers
> Complete output from command python setup.py egg_info:
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list of 
> strings containing valid project/version requirement specifiers
> 
> Cleaning up...
> Command python setup.py egg_info failed with error code 1 in 
> /tmp/pip_build_root/isort
> Storing debug log for failure in /root/.pip/pip.log
> The command '/bin/sh -c pip install pylint==1.9.2' returned a non-zero code: 1
> {noformat}






[jira] [Commented] (HADOOP-16151) pip install pylint fails in branch-2.8 and branch-2.9

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778933#comment-16778933
 ] 

Hadoop QA commented on HADOOP-16151:


(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15983/console in case of 
problems.


> pip install pylint fails in branch-2.8 and branch-2.9
> -
>
> Key: HADOOP-16151
> URL: https://issues.apache.org/jira/browse/HADOOP-16151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-16151-branch-2.8-01.patch
>
>
> https://builds.apache.org/job/PreCommit-HADOOP-Build/15982/consoleFull
> {noformat}
> Step 24/31 : RUN pip install pylint==1.9.2
>  ---> Running in c0bed03b7115
> Downloading/unpacking pylint==1.9.2
> Downloading/unpacking configparser (from pylint==1.9.2)
>   Downloading configparser-3.7.3-py2.py3-none-any.whl
> Downloading/unpacking backports.functools-lru-cache (from pylint==1.9.2)
>   Downloading backports.functools_lru_cache-1.5-py2.py3-none-any.whl
> Requirement already satisfied (use --upgrade to upgrade): six in 
> /usr/lib/python2.7/dist-packages (from pylint==1.9.2)
> Downloading/unpacking isort>=4.2.5 (from pylint==1.9.2)
>   Running setup.py (path:/tmp/pip_build_root/isort/setup.py) egg_info for 
> package isort
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list 
> of strings containing valid project/version requirement specifiers
> Complete output from command python setup.py egg_info:
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list of 
> strings containing valid project/version requirement specifiers
> 
> Cleaning up...
> Command python setup.py egg_info failed with error code 1 in 
> /tmp/pip_build_root/isort
> Storing debug log for failure in /root/.pip/pip.log
> The command '/bin/sh -c pip install pylint==1.9.2' returned a non-zero code: 1
> {noformat}






[jira] [Commented] (HADOOP-16151) pip install pylint fails in branch-2.8 and branch-2.9

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778934#comment-16778934
 ] 

Hadoop QA commented on HADOOP-16151:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m  
4s{color} | {color:red} Docker failed to build yetus/hadoop:ae3769f. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-16151 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960302/HADOOP-16151-branch-2.8-01.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15983/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> pip install pylint fails in branch-2.8 and branch-2.9
> -
>
> Key: HADOOP-16151
> URL: https://issues.apache.org/jira/browse/HADOOP-16151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-16151-branch-2.8-01.patch
>
>
> https://builds.apache.org/job/PreCommit-HADOOP-Build/15982/consoleFull
> {noformat}
> Step 24/31 : RUN pip install pylint==1.9.2
>  ---> Running in c0bed03b7115
> Downloading/unpacking pylint==1.9.2
> Downloading/unpacking configparser (from pylint==1.9.2)
>   Downloading configparser-3.7.3-py2.py3-none-any.whl
> Downloading/unpacking backports.functools-lru-cache (from pylint==1.9.2)
>   Downloading backports.functools_lru_cache-1.5-py2.py3-none-any.whl
> Requirement already satisfied (use --upgrade to upgrade): six in 
> /usr/lib/python2.7/dist-packages (from pylint==1.9.2)
> Downloading/unpacking isort>=4.2.5 (from pylint==1.9.2)
>   Running setup.py (path:/tmp/pip_build_root/isort/setup.py) egg_info for 
> package isort
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list 
> of strings containing valid project/version requirement specifiers
> Complete output from command python setup.py egg_info:
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list of 
> strings containing valid project/version requirement specifiers
> 
> Cleaning up...
> Command python setup.py egg_info failed with error code 1 in 
> /tmp/pip_build_root/isort
> Storing debug log for failure in /root/.pip/pip.log
> The command '/bin/sh -c pip install pylint==1.9.2' returned a non-zero code: 1
> {noformat}






[jira] [Commented] (HADOOP-16055) Upgrade AWS SDK to 1.11.271 in branch-2

2019-02-26 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778931#comment-16778931
 ] 

Akira Ajisaka commented on HADOOP-16055:


The precommit build failure is tracked by HADOOP-16151.

> Upgrade AWS SDK to 1.11.271 in branch-2
> ---
>
> Key: HADOOP-16055
> URL: https://issues.apache.org/jira/browse/HADOOP-16055
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Fix For: 2.10.0, 2.9.3
>
> Attachments: HADOOP-16055-branch-2-01.patch, 
> HADOOP-16055-branch-2.8-01.patch, HADOOP-16055-branch-2.8-02.patch, 
> HADOOP-16055-branch-2.8-03.patch, HADOOP-16055-branch-2.8-03.patch, 
> HADOOP-16055-branch-2.9-01.patch
>
>
> Per HADOOP-13794, we must exclude the JSON license.
> The upgrade will contain incompatible changes; however, the license issue is 
> much more important.






[jira] [Updated] (HADOOP-16151) pip install pylint fails in branch-2.8 and branch-2.9

2019-02-26 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16151:
---
Status: Patch Available  (was: Open)

> pip install pylint fails in branch-2.8 and branch-2.9
> -
>
> Key: HADOOP-16151
> URL: https://issues.apache.org/jira/browse/HADOOP-16151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-16151-branch-2.8-01.patch
>
>
> https://builds.apache.org/job/PreCommit-HADOOP-Build/15982/consoleFull
> {noformat}
> Step 24/31 : RUN pip install pylint==1.9.2
>  ---> Running in c0bed03b7115
> Downloading/unpacking pylint==1.9.2
> Downloading/unpacking configparser (from pylint==1.9.2)
>   Downloading configparser-3.7.3-py2.py3-none-any.whl
> Downloading/unpacking backports.functools-lru-cache (from pylint==1.9.2)
>   Downloading backports.functools_lru_cache-1.5-py2.py3-none-any.whl
> Requirement already satisfied (use --upgrade to upgrade): six in 
> /usr/lib/python2.7/dist-packages (from pylint==1.9.2)
> Downloading/unpacking isort>=4.2.5 (from pylint==1.9.2)
>   Running setup.py (path:/tmp/pip_build_root/isort/setup.py) egg_info for 
> package isort
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list 
> of strings containing valid project/version requirement specifiers
> Complete output from command python setup.py egg_info:
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list of 
> strings containing valid project/version requirement specifiers
> 
> Cleaning up...
> Command python setup.py egg_info failed with error code 1 in 
> /tmp/pip_build_root/isort
> Storing debug log for failure in /root/.pip/pip.log
> The command '/bin/sh -c pip install pylint==1.9.2' returned a non-zero code: 1
> {noformat}






[jira] [Updated] (HADOOP-16151) pip install pylint fails in branch-2.8 and branch-2.9

2019-02-26 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16151:
---
Attachment: HADOOP-16151-branch-2.8-01.patch

> pip install pylint fails in branch-2.8 and branch-2.9
> -
>
> Key: HADOOP-16151
> URL: https://issues.apache.org/jira/browse/HADOOP-16151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-16151-branch-2.8-01.patch
>
>
> https://builds.apache.org/job/PreCommit-HADOOP-Build/15982/consoleFull
> {noformat}
> Step 24/31 : RUN pip install pylint==1.9.2
>  ---> Running in c0bed03b7115
> Downloading/unpacking pylint==1.9.2
> Downloading/unpacking configparser (from pylint==1.9.2)
>   Downloading configparser-3.7.3-py2.py3-none-any.whl
> Downloading/unpacking backports.functools-lru-cache (from pylint==1.9.2)
>   Downloading backports.functools_lru_cache-1.5-py2.py3-none-any.whl
> Requirement already satisfied (use --upgrade to upgrade): six in 
> /usr/lib/python2.7/dist-packages (from pylint==1.9.2)
> Downloading/unpacking isort>=4.2.5 (from pylint==1.9.2)
>   Running setup.py (path:/tmp/pip_build_root/isort/setup.py) egg_info for 
> package isort
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list 
> of strings containing valid project/version requirement specifiers
> Complete output from command python setup.py egg_info:
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list of 
> strings containing valid project/version requirement specifiers
> 
> Cleaning up...
> Command python setup.py egg_info failed with error code 1 in 
> /tmp/pip_build_root/isort
> Storing debug log for failure in /root/.pip/pip.log
> The command '/bin/sh -c pip install pylint==1.9.2' returned a non-zero code: 1
> {noformat}






[jira] [Assigned] (HADOOP-16151) pip install pylint fails in branch-2.8 and branch-2.9

2019-02-26 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-16151:
--

Assignee: Akira Ajisaka

> pip install pylint fails in branch-2.8 and branch-2.9
> -
>
> Key: HADOOP-16151
> URL: https://issues.apache.org/jira/browse/HADOOP-16151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
>
> https://builds.apache.org/job/PreCommit-HADOOP-Build/15982/consoleFull
> {noformat}
> Step 24/31 : RUN pip install pylint==1.9.2
>  ---> Running in c0bed03b7115
> Downloading/unpacking pylint==1.9.2
> Downloading/unpacking configparser (from pylint==1.9.2)
>   Downloading configparser-3.7.3-py2.py3-none-any.whl
> Downloading/unpacking backports.functools-lru-cache (from pylint==1.9.2)
>   Downloading backports.functools_lru_cache-1.5-py2.py3-none-any.whl
> Requirement already satisfied (use --upgrade to upgrade): six in 
> /usr/lib/python2.7/dist-packages (from pylint==1.9.2)
> Downloading/unpacking isort>=4.2.5 (from pylint==1.9.2)
>   Running setup.py (path:/tmp/pip_build_root/isort/setup.py) egg_info for 
> package isort
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list 
> of strings containing valid project/version requirement specifiers
> Complete output from command python setup.py egg_info:
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list of 
> strings containing valid project/version requirement specifiers
> 
> Cleaning up...
> Command python setup.py egg_info failed with error code 1 in 
> /tmp/pip_build_root/isort
> Storing debug log for failure in /root/.pip/pip.log
> The command '/bin/sh -c pip install pylint==1.9.2' returned a non-zero code: 1
> {noformat}






[jira] [Commented] (HADOOP-16151) pip install pylint fails in branch-2.8 and branch-2.9

2019-02-26 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778923#comment-16778923
 ] 

Akira Ajisaka commented on HADOOP-16151:


Installing isort >= 4.3.1 will probably fix this issue.
https://github.com/timothycrosley/isort/issues/652
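
As a sketch, the kind of Dockerfile change this implies (illustrative only; the 
attached patches are authoritative): pin a known-good isort before installing 
pylint, so that pip never builds the isort 4.3.9 sdist, whose newer packaging 
metadata (the {{python_requires}} option visible in the log above) the old 
pip/setuptools on these branch-2 build images cannot parse. The later patches 
settle on isort 4.3.8.

{noformat}
# Pin isort first; pylint's "isort>=4.2.5" requirement is then already
# satisfied, and pip skips the broken isort 4.3.9 sdist.
RUN pip install isort==4.3.8
RUN pip install pylint==1.9.2
{noformat}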

> pip install pylint fails in branch-2.8 and branch-2.9
> -
>
> Key: HADOOP-16151
> URL: https://issues.apache.org/jira/browse/HADOOP-16151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Blocker
>
> https://builds.apache.org/job/PreCommit-HADOOP-Build/15982/consoleFull
> {noformat}
> Step 24/31 : RUN pip install pylint==1.9.2
>  ---> Running in c0bed03b7115
> Downloading/unpacking pylint==1.9.2
> Downloading/unpacking configparser (from pylint==1.9.2)
>   Downloading configparser-3.7.3-py2.py3-none-any.whl
> Downloading/unpacking backports.functools-lru-cache (from pylint==1.9.2)
>   Downloading backports.functools_lru_cache-1.5-py2.py3-none-any.whl
> Requirement already satisfied (use --upgrade to upgrade): six in 
> /usr/lib/python2.7/dist-packages (from pylint==1.9.2)
> Downloading/unpacking isort>=4.2.5 (from pylint==1.9.2)
>   Running setup.py (path:/tmp/pip_build_root/isort/setup.py) egg_info for 
> package isort
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list 
> of strings containing valid project/version requirement specifiers
> Complete output from command python setup.py egg_info:
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
> distribution option: 'python_requires'
>   warnings.warn(msg)
> error in isort setup command: 'install_requires' must be a string or list of 
> strings containing valid project/version requirement specifiers
> 
> Cleaning up...
> Command python setup.py egg_info failed with error code 1 in 
> /tmp/pip_build_root/isort
> Storing debug log for failure in /root/.pip/pip.log
> The command '/bin/sh -c pip install pylint==1.9.2' returned a non-zero code: 1
> {noformat}






[jira] [Created] (HADOOP-16151) pip install pylint fails in branch-2.8 and branch-2.9

2019-02-26 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-16151:
--

 Summary: pip install pylint fails in branch-2.8 and branch-2.9
 Key: HADOOP-16151
 URL: https://issues.apache.org/jira/browse/HADOOP-16151
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Akira Ajisaka


https://builds.apache.org/job/PreCommit-HADOOP-Build/15982/consoleFull
{noformat}
Step 24/31 : RUN pip install pylint==1.9.2
 ---> Running in c0bed03b7115
Downloading/unpacking pylint==1.9.2
Downloading/unpacking configparser (from pylint==1.9.2)
  Downloading configparser-3.7.3-py2.py3-none-any.whl
Downloading/unpacking backports.functools-lru-cache (from pylint==1.9.2)
  Downloading backports.functools_lru_cache-1.5-py2.py3-none-any.whl
Requirement already satisfied (use --upgrade to upgrade): six in 
/usr/lib/python2.7/dist-packages (from pylint==1.9.2)
Downloading/unpacking isort>=4.2.5 (from pylint==1.9.2)
  Running setup.py (path:/tmp/pip_build_root/isort/setup.py) egg_info for 
package isort
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution 
option: 'python_requires'
  warnings.warn(msg)
error in isort setup command: 'install_requires' must be a string or list 
of strings containing valid project/version requirement specifiers
Complete output from command python setup.py egg_info:
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution 
option: 'python_requires'

  warnings.warn(msg)

error in isort setup command: 'install_requires' must be a string or list of 
strings containing valid project/version requirement specifiers


Cleaning up...
Command python setup.py egg_info failed with error code 1 in 
/tmp/pip_build_root/isort
Storing debug log for failure in /root/.pip/pip.log
The command '/bin/sh -c pip install pylint==1.9.2' returned a non-zero code: 1
{noformat}






[jira] [Commented] (HADOOP-16055) Upgrade AWS SDK to 1.11.271 in branch-2

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778913#comment-16778913
 ] 

Hadoop QA commented on HADOOP-16055:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 17m 
54s{color} | {color:red} Docker failed to build yetus/hadoop:ae3769f. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-16055 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960291/HADOOP-16055-branch-2.8-03.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15982/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Upgrade AWS SDK to 1.11.271 in branch-2
> ---
>
> Key: HADOOP-16055
> URL: https://issues.apache.org/jira/browse/HADOOP-16055
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Fix For: 2.10.0, 2.9.3
>
> Attachments: HADOOP-16055-branch-2-01.patch, 
> HADOOP-16055-branch-2.8-01.patch, HADOOP-16055-branch-2.8-02.patch, 
> HADOOP-16055-branch-2.8-03.patch, HADOOP-16055-branch-2.8-03.patch, 
> HADOOP-16055-branch-2.9-01.patch
>
>
> Per HADOOP-13794, we must exclude the JSON license.
> The upgrade will contain incompatible changes; however, the license issue is 
> much more important.






[jira] [Commented] (HADOOP-16055) Upgrade AWS SDK to 1.11.271 in branch-2

2019-02-26 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778912#comment-16778912
 ] 

Akira Ajisaka commented on HADOOP-16055:


Ran the unit tests and did manual testing against the Tokyo region.

> Upgrade AWS SDK to 1.11.271 in branch-2
> ---
>
> Key: HADOOP-16055
> URL: https://issues.apache.org/jira/browse/HADOOP-16055
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Fix For: 2.10.0, 2.9.3
>
> Attachments: HADOOP-16055-branch-2-01.patch, 
> HADOOP-16055-branch-2.8-01.patch, HADOOP-16055-branch-2.8-02.patch, 
> HADOOP-16055-branch-2.8-03.patch, HADOOP-16055-branch-2.8-03.patch, 
> HADOOP-16055-branch-2.9-01.patch
>
>
> Per HADOOP-13794, we must exclude the JSON license.
> The upgrade will contain incompatible changes; however, the license issue is 
> much more important.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16055) Upgrade AWS SDK to 1.11.271 in branch-2

2019-02-26 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16055:
---
Attachment: HADOOP-16055-branch-2.8-03.patch

> Upgrade AWS SDK to 1.11.271 in branch-2
> ---
>
> Key: HADOOP-16055
> URL: https://issues.apache.org/jira/browse/HADOOP-16055
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Fix For: 2.10.0, 2.9.3
>
> Attachments: HADOOP-16055-branch-2-01.patch, 
> HADOOP-16055-branch-2.8-01.patch, HADOOP-16055-branch-2.8-02.patch, 
> HADOOP-16055-branch-2.8-03.patch, HADOOP-16055-branch-2.8-03.patch, 
> HADOOP-16055-branch-2.9-01.patch
>
>
> Per HADOOP-13794, we must exclude the JSON license.
> The upgrade will contain incompatible changes; however, the license issue is 
> much more important.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on issue #518: HDDS-1178. Healthy pipeline Chill Mode Rule.

2019-02-26 Thread GitBox
hadoop-yetus commented on issue #518: HDDS-1178. Healthy pipeline Chill Mode 
Rule.
URL: https://github.com/apache/hadoop/pull/518#issuecomment-467684481
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 23 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for branch |
   | +1 | mvninstall | 968 | trunk passed |
   | +1 | compile | 949 | trunk passed |
   | +1 | checkstyle | 233 | trunk passed |
   | +1 | mvnsite | 157 | trunk passed |
   | +1 | shadedclient | 1062 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 113 | trunk passed |
   | +1 | javadoc | 90 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 101 | the patch passed |
   | +1 | compile | 884 | the patch passed |
   | +1 | javac | 884 | the patch passed |
   | +1 | checkstyle | 188 | the patch passed |
   | +1 | mvnsite | 123 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 704 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 122 | the patch passed |
   | +1 | javadoc | 83 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 69 | common in the patch failed. |
   | +1 | unit | 97 | server-scm in the patch passed. |
   | -1 | unit | 537 | integration-test in the patch failed. |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | 6442 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-518/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/518 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
   | uname | Linux ae75be440e0a 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9192f71 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-518/4/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-518/4/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-518/4/testReport/ |
   | Max. process+thread count | 3556 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-518/4/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15999) S3Guard: Better support for out-of-band operations

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778758#comment-16778758
 ] 

Hadoop QA commented on HADOOP-15999:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 1 
new + 6 unchanged - 0 fixed = 7 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
31s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15999 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960256/HADOOP-15999-007.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 411e690ab4b1 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9192f71 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15981/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15981/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15981/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HADOOP-16140) Add emptyTrash option to purge trash immediately

2019-02-26 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778725#comment-16778725
 ] 

Steve Loughran commented on HADOOP-16140:
-

bq. > TestTrash:L509. Don't downgrade an exception to a log, just rethrow

bq. In this case I was copying the pattern that already exists in this (rather 
large) test method, where the above is used quite a few times. I wonder if it's 
best to stick with what is there rather than doing something different for this 
one additional test?

Well, the other tests are probably broken. If an operation fails, the test 
should throw an exception so it is reported as a failure.
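
A minimal sketch of the difference (illustrative JUnit 4 code with hypothetical 
names, not the actual TestTrash method):

{code:java}
import java.io.IOException;

import org.junit.Test;

public class RethrowExampleTest {

  // Anti-pattern: catching and printing/logging means JUnit reports success
  // even though the operation under test failed.
  @Test
  public void testFailureSwallowed() {
    try {
      operationThatMayFail();
    } catch (IOException e) {
      System.err.println("ignored: " + e); // stand-in for LOG.warn(); the
                                           // failure never fails the test
    }
  }

  // Preferred: declare the exception and let it propagate, so any failure
  // is reported as a test failure.
  @Test
  public void testFailureRethrown() throws IOException {
    operationThatMayFail();
  }

  // Stand-in for the real filesystem call under test.
  private void operationThatMayFail() throws IOException {
  }
}
{code}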

> Add emptyTrash option to purge trash immediately
> 
>
> Key: HADOOP-16140
> URL: https://issues.apache.org/jira/browse/HADOOP-16140
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HADOOP-14200.002.patch, HDFS-14200.001.patch
>
>
> I have always felt the HDFS trash is missing a simple way to empty the 
> current user's trash immediately. We have "expunge", but in my experience 
> supporting clusters, end users find this confusing. When most end users run 
> expunge, they really want to empty their trash immediately and get confused 
> when expunge does not do this.
> This can result in users performing somewhat dangerous "skipTrash" operations 
> on the trash to free up space. The alternative, which most users will not 
> figure out on their own, is:
> # Run the expunge command once - this will move the current folder to a 
> checkpoint and remove any checkpoints older than the retention interval
> # Wait over 1 minute and then run expunge again, overriding fs.trash.interval 
> to 1 minute with the following command: hadoop fs -Dfs.trash.interval=1 
> -expunge.
> With this Jira I am proposing to add an extra command, "hdfs dfs -emptyTrash", 
> that purges everything in the logged-in user's Trash directories immediately.
> How would the community feel about adding this new option? I will upload a 
> patch for comments.
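
A rough sketch of that two-step workaround driven through the Java API (hedged: 
{{org.apache.hadoop.fs.Trash}} does expose {{checkpoint()}} and {{expunge()}}, 
but the class below and the shortened-interval trick are illustrative, simply 
mirroring the CLI commands above):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Trash;

public class EmptyTrashNow {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Mirror the CLI trick: pretend checkpoints expire after one minute.
    conf.set("fs.trash.interval", "1");
    Trash trash = new Trash(FileSystem.get(conf), conf);

    trash.checkpoint();        // roll Current into a timestamped checkpoint
    Thread.sleep(61 * 1000L);  // wait out the shortened interval
    trash.expunge();           // delete checkpoints older than the interval
  }
}
{code}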



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260542971
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMake/FindPackageExtension.cmake
 ##
 @@ -0,0 +1,136 @@
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Input:
+#   h_file: the name of a header file to find
+#   lib_names: a list of library names to find
+#   allow_any: Allow either static or shared libraries
+
+# Environment:
+#   CMAKE_FIND_PACKAGE_NAME: the name of the package to build
+#   _HOME: variable is used to check for headers and library
+#   BUILD_SHARED_LIBRARIES: whether to find shared instead of static libraries
+
+# Outputs:
+#   _INCLUDE_DIR: directory containing headers
+#   _LIBRARIES: libraries to link with
+#   _FOUND: whether uriparser has been found
+
+function (findPackageExtension h_file lib_names allow_any)
+  set (_name ${CMAKE_FIND_PACKAGE_NAME})
+  string (TOUPPER ${_name} _upper_name)
+
+  # protect against running it a second time
+  if (NOT DEFINED ${_upper_name}_FOUND)
+
+# find the name of the home variable and get it from the environment
+set (_home_name "${_upper_name}_HOME")
+if (DEFINED ENV{${_home_name}})
+  set(_home "$ENV{${_home_name}}")
+elseif (DEFINED ${_home_name})
+  set(_home ${${_home_name}})
+endif ()
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260542975
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMake/FindPackageExtension.cmake
 ##
 @@ -0,0 +1,136 @@
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Input:
+#   h_file: the name of a header file to find
+#   lib_names: a list of library names to find
+#   allow_any: Allow either static or shared libraries
+
+# Environment:
+#   CMAKE_FIND_PACKAGE_NAME: the name of the package to build
+#   _HOME: variable is used to check for headers and library
+#   BUILD_SHARED_LIBRARIES: whether to find shared instead of static libraries
+
+# Outputs:
+#   _INCLUDE_DIR: directory containing headers
+#   _LIBRARIES: libraries to link with
+#   _FOUND: whether uriparser has been found
+
+function (findPackageExtension h_file lib_names allow_any)
+  set (_name ${CMAKE_FIND_PACKAGE_NAME})
+  string (TOUPPER ${_name} _upper_name)
+
+  # protect against running it a second time
+  if (NOT DEFINED ${_upper_name}_FOUND)
+
+# find the name of the home variable and get it from the environment
+set (_home_name "${_upper_name}_HOME")
+if (DEFINED ENV{${_home_name}})
+  set(_home "$ENV{${_home_name}}")
+elseif (DEFINED ${_home_name})
+  set(_home ${${_home_name}})
+endif ()
+  
+# If _HOME is set, use that alone as the path, otherwise use
+# PACKAGE_SEARCH_PATH ahead of the default_path.
+if(DEFINED _home AND NOT ${_home} STREQUAL "")
+  set(_no_default TRUE)
+else()
+  set(_no_default FALSE)
+endif()
+
+set (_include_dir "${h_file}-NOTFOUND")
+if (_no_default)
+  find_path (_include_dir ${h_file}
+ PATHS ${_home} NO_DEFAULT_PATH
+ PATH_SUFFIXES "include")
+else ()
+  find_path (_include_dir ${h_file}
+ PATH_SUFFIXES "include")
+endif (_no_default)
+
+set(_libraries)
+foreach (lib ${lib_names})
+  expandLibName(${lib} ${allow_any} _full)
+  set (_match "${_full}-NOTFOUND")
+  if (_no_default)
+find_library (_match NAMES ${_full}
+  PATHS ${_home}
+  NO_DEFAULT_PATH
+  PATH_SUFFIXES "lib" "lib64")
+  else ()
+find_library (_match NAMES ${_full}
+  HINTS ${_include_dir}/..
+  PATH_SUFFIXES "lib" "lib64")
+  endif (_no_default)
+  if (_match)
+list (APPEND _libraries ${_match})
+  endif ()
+  unset(_full)
+endforeach ()
+
+list (LENGTH _libraries _libraries_len)
+list (LENGTH lib_names _name_len)
+
+if (_include_dir AND _libraries_len EQUAL _name_len)
+  message (STATUS "Found the ${_name} header: ${_include_dir}")
+  if (NOT _libraries_len EQUAL 0)
+message (STATUS "Found the ${_name} libraries: ${_libraries}")
+  endif ()
+  set(${_upper_name}_FOUND TRUE PARENT_SCOPE)
+  set(${_upper_name}_INCLUDE_DIR ${_include_dir} PARENT_SCOPE)
+  set(${_upper_name}_LIBRARIES "${_libraries}" PARENT_SCOPE)
+
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778719#comment-16778719
 ] 

Hadoop QA commented on HADOOP-15625:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 38 
new + 27 unchanged - 0 fixed = 65 total (was 27) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 13 line(s) that end in whitespace. Use 
git apply --whitespace=fix <<patch_file>>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
40s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15625 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960254/HADOOP-15625-007.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 48f2819db2ad 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a5a751b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15980/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15980/artifact/out/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15980/testReport/ |
| Max. process+thread count | 412 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15980/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.


[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260542985
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/CMakeLists.txt
 ##
 @@ -16,10 +16,129 @@
 # limitations under the License.
 #
 
+cmake_minimum_required(VERSION 2.8.12)
+if (POLICY CMP0042)
+  cmake_policy(SET CMP0042 NEW) # suppress warning about mac rpath
+endif ()
+
+project(libhdfspp)
+
+enable_testing()
+include (CTest)
+
+string(REPLACE "|" ";" CMAKE_PREFIX_PATH "${CMAKE_PREFIX_PATH}")
+message(STATUS "OOM Prefix path = ${CMAKE_PREFIX_PATH}")
+
+find_package(ASIO REQUIRED)
+find_package(Doxygen)
+find_package(OpenSSL REQUIRED)
+find_package(Protobuf REQUIRED)
+find_package(RapidXML REQUIRED)
+find_package(Threads REQUIRED)
+find_package(URIparser REQUIRED)
+
+include(DecideSasl)
+include(CheckCXXSourceCompiles)
+
+include(HdfsppCompilerOptions)
+
+# Check if thread_local is supported
+unset (THREAD_LOCAL_SUPPORTED CACHE)
+set (CMAKE_REQUIRED_LIBRARIES ${CMAKE_THREAD_LIBS_INIT})
+check_cxx_source_compiles(
+"#include 
+int main(void) {
+  thread_local int s;
+  return 0;
+}"
+THREAD_LOCAL_SUPPORTED)
+if (NOT THREAD_LOCAL_SUPPORTED)
+  message(FATAL_ERROR "FATAL ERROR: The required feature thread_local storage 
is not supported by your compiler. Known compilers that support this feature: 
GCC 4.8+, Visual Studio 2015+, Clang (community version 3.3+), Clang (version 
for Xcode 8+ and iOS 9+).")
+endif (NOT THREAD_LOCAL_SUPPORTED)
+
+# Check if PROTOC library was compiled with the compatible compiler by trying
+# to compile some dummy code
+unset (PROTOC_IS_COMPATIBLE CACHE)
+set (CMAKE_REQUIRED_LIBRARIES protobuf protoc)
+check_cxx_source_compiles(
+"#include 
+#include 
+int main(void) {
+  ::google::protobuf::io::ZeroCopyOutputStream *out = NULL;
+  ::google::protobuf::io::Printer printer(out, '$');
+  printer.PrintRaw(std::string(\"test\"));
+  return 0;
+}"
+PROTOC_IS_COMPATIBLE)
+if (NOT PROTOC_IS_COMPATIBLE)
+  message(WARNING "WARNING: the Protocol Buffers Library and the hdfs++ 
Library must both be compiled with the same (or compatible) compiler. Normally 
only the same major versions of the same compiler are compatible with each 
other.")
+endif (NOT PROTOC_IS_COMPATIBLE)
+
+if(DOXYGEN_FOUND)
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/doc/Doxyfile.in 
${CMAKE_CURRENT_BINARY_DIR}/doc/Doxyfile @ONLY)
+add_custom_target(doc ${DOXYGEN_EXECUTABLE} 
${CMAKE_CURRENT_BINARY_DIR}/doc/Doxyfile
+  WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
+  COMMENT "Generating API documentation with Doxygen" VERBATIM)
+endif(DOXYGEN_FOUND)
+
+include_directories(
+  ${CMAKE_CURRENT_SOURCE_DIR}/../include
+  ${CMAKE_CURRENT_SOURCE_DIR}
+  ${CMAKE_CURRENT_BINARY_DIR}/proto
+  )
+
+# Put the protobuf stuff first, since the version has to match between
+# the library, generated code, and the include files.
+include_directories(BEFORE ${PROTOBUF_INCLUDE_DIR})
+
+include_directories(SYSTEM
+  ${ASIO_INCLUDE_DIR}
+  ${RAPIDXML_INCLUDE_DIR}
+  ${OPENSSL_INCLUDE_DIR}
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] anuengineer commented on issue #518: HDDS-1178. Healthy pipeline Chill Mode Rule.

2019-02-26 Thread GitBox
anuengineer commented on issue #518: HDDS-1178. Healthy pipeline Chill Mode 
Rule.
URL: https://github.com/apache/hadoop/pull/518#issuecomment-467665680
 
 
   :+1: , feel free to commit once we get a Jenkins run. Thanks for taking care 
of this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260541798
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/CMakeLists.txt
 ##
 @@ -16,10 +16,129 @@
 # limitations under the License.
 #
 
+cmake_minimum_required(VERSION 2.8.12)
+if (POLICY CMP0042)
+  cmake_policy(SET CMP0042 NEW) # suppress warning about mac rpath
+endif ()
+
+project(libhdfspp)
+
+enable_testing()
+include (CTest)
+
+string(REPLACE "|" ";" CMAKE_PREFIX_PATH "${CMAKE_PREFIX_PATH}")
+message(STATUS "OOM Prefix path = ${CMAKE_PREFIX_PATH}")
+
+find_package(ASIO REQUIRED)
+find_package(Doxygen)
+find_package(OpenSSL REQUIRED)
+find_package(Protobuf REQUIRED)
+find_package(RapidXML REQUIRED)
+find_package(Threads REQUIRED)
+find_package(URIparser REQUIRED)
+
+include(DecideSasl)
+include(CheckCXXSourceCompiles)
+
+include(HdfsppCompilerOptions)
+
+# Check if thread_local is supported
+unset (THREAD_LOCAL_SUPPORTED CACHE)
+set (CMAKE_REQUIRED_LIBRARIES ${CMAKE_THREAD_LIBS_INIT})
+check_cxx_source_compiles(
+"#include 
+int main(void) {
+  thread_local int s;
+  return 0;
+}"
+THREAD_LOCAL_SUPPORTED)
+if (NOT THREAD_LOCAL_SUPPORTED)
+  message(FATAL_ERROR "FATAL ERROR: The required feature thread_local storage 
is not supported by your compiler. Known compilers that support this feature: 
GCC 4.8+, Visual Studio 2015+, Clang (community version 3.3+), Clang (version 
for Xcode 8+ and iOS 9+).")
+endif (NOT THREAD_LOCAL_SUPPORTED)
+
+# Check if PROTOC library was compiled with the compatible compiler by trying
+# to compile some dummy code
+unset (PROTOC_IS_COMPATIBLE CACHE)
+set (CMAKE_REQUIRED_LIBRARIES protobuf protoc)
+check_cxx_source_compiles(
+"#include 
+#include 
+int main(void) {
+  ::google::protobuf::io::ZeroCopyOutputStream *out = NULL;
+  ::google::protobuf::io::Printer printer(out, '$');
+  printer.PrintRaw(std::string(\"test\"));
+  return 0;
+}"
+PROTOC_IS_COMPATIBLE)
+if (NOT PROTOC_IS_COMPATIBLE)
+  message(WARNING "WARNING: the Protocol Buffers Library and the hdfs++ 
Library must both be compiled with the same (or compatible) compiler. Normally 
only the same major versions of the same compiler are compatible with each 
other.")
+endif (NOT PROTOC_IS_COMPATIBLE)
+
+if(DOXYGEN_FOUND)
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/doc/Doxyfile.in 
${CMAKE_CURRENT_BINARY_DIR}/doc/Doxyfile @ONLY)
+add_custom_target(doc ${DOXYGEN_EXECUTABLE} 
${CMAKE_CURRENT_BINARY_DIR}/doc/Doxyfile
+  WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
+  COMMENT "Generating API documentation with Doxygen" VERBATIM)
+endif(DOXYGEN_FOUND)
+
+include_directories(
+  ${CMAKE_CURRENT_SOURCE_DIR}/../include
+  ${CMAKE_CURRENT_SOURCE_DIR}
+  ${CMAKE_CURRENT_BINARY_DIR}/proto
+  )
+
+# Put the protobuf stuff first, since the version has to match between
+# the library, generated code, and the include files.
+include_directories(BEFORE ${PROTOBUF_INCLUDE_DIR})
+
+include_directories(SYSTEM
+  ${ASIO_INCLUDE_DIR}
+  ${RAPIDXML_INCLUDE_DIR}
+  ${OPENSSL_INCLUDE_DIR}
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260541788
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMake/FindPackageExtension.cmake
 ##
 @@ -0,0 +1,136 @@
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Input:
+#   h_file: the name of a header file to find
+#   lib_names: a list of library names to find
+#   allow_any: Allow either static or shared libraries
+
+# Environment:
+#   CMAKE_FIND_PACKAGE_NAME: the name of the package to build
+#   _HOME: variable is used to check for headers and library
+#   BUILD_SHARED_LIBRARIES: whether to find shared instead of static libraries
+
+# Outputs:
+#   _INCLUDE_DIR: directory containing headers
+#   _LIBRARIES: libraries to link with
+#   _FOUND: whether uriparser has been found
+
+function (findPackageExtension h_file lib_names allow_any)
+  set (_name ${CMAKE_FIND_PACKAGE_NAME})
+  string (TOUPPER ${_name} _upper_name)
+
+  # protect against running it a second time
+  if (NOT DEFINED ${_upper_name}_FOUND)
+
+# find the name of the home variable and get it from the environment
+set (_home_name "${_upper_name}_HOME")
+if (DEFINED ENV{${_home_name}})
+  set(_home "$ENV{${_home_name}}")
+elseif (DEFINED ${_home_name})
+  set(_home ${${_home_name}})
+endif ()
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260541792
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMake/FindPackageExtension.cmake
 ##
 @@ -0,0 +1,136 @@
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Input:
+#   h_file: the name of a header file to find
+#   lib_names: a list of library names to find
+#   allow_any: Allow either static or shared libraries
+
+# Environment:
+#   CMAKE_FIND_PACKAGE_NAME: the name of the package to build
+#   _HOME: variable is used to check for headers and library
+#   BUILD_SHARED_LIBRARIES: whether to find shared instead of static libraries
+
+# Outputs:
+#   _INCLUDE_DIR: directory containing headers
+#   _LIBRARIES: libraries to link with
+#   _FOUND: whether uriparser has been found
+
+function (findPackageExtension h_file lib_names allow_any)
+  set (_name ${CMAKE_FIND_PACKAGE_NAME})
+  string (TOUPPER ${_name} _upper_name)
+
+  # protect against running it a second time
+  if (NOT DEFINED ${_upper_name}_FOUND)
+
+# find the name of the home variable and get it from the environment
+set (_home_name "${_upper_name}_HOME")
+if (DEFINED ENV{${_home_name}})
+  set(_home "$ENV{${_home_name}}")
+elseif (DEFINED ${_home_name})
+  set(_home ${${_home_name}})
+endif ()
+  
+# If _HOME is set, use that alone as the path, otherwise use
+# PACKAGE_SEARCH_PATH ahead of the default_path.
+if(DEFINED _home AND NOT ${_home} STREQUAL "")
+  set(_no_default TRUE)
+else()
+  set(_no_default FALSE)
+endif()
+
+set (_include_dir "${h_file}-NOTFOUND")
+if (_no_default)
+  find_path (_include_dir ${h_file}
+ PATHS ${_home} NO_DEFAULT_PATH
+ PATH_SUFFIXES "include")
+else ()
+  find_path (_include_dir ${h_file}
+ PATH_SUFFIXES "include")
+endif (_no_default)
+
+set(_libraries)
+foreach (lib ${lib_names})
+  expandLibName(${lib} ${allow_any} _full)
+  set (_match "${_full}-NOTFOUND")
+  if (_no_default)
+find_library (_match NAMES ${_full}
+  PATHS ${_home}
+  NO_DEFAULT_PATH
+  PATH_SUFFIXES "lib" "lib64")
+  else ()
+find_library (_match NAMES ${_full}
+  HINTS ${_include_dir}/..
+  PATH_SUFFIXES "lib" "lib64")
+  endif (_no_default)
+  if (_match)
+list (APPEND _libraries ${_match})
+  endif ()
+  unset(_full)
+endforeach ()
+
+list (LENGTH _libraries _libraries_len)
+list (LENGTH lib_names _name_len)
+
+if (_include_dir AND _libraries_len EQUAL _name_len)
+  message (STATUS "Found the ${_name} header: ${_include_dir}")
+  if (NOT _libraries_len EQUAL 0)
+message (STATUS "Found the ${_name} libraries: ${_libraries}")
+  endif ()
+  set(${_upper_name}_FOUND TRUE PARENT_SCOPE)
+  set(${_upper_name}_INCLUDE_DIR ${_include_dir} PARENT_SCOPE)
+  set(${_upper_name}_LIBRARIES "${_libraries}" PARENT_SCOPE)
+
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15999) S3Guard: Better support for out-of-band operations

2019-02-26 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15999:

Status: Patch Available  (was: Open)

Patch 007

I never found what the problem with my test setup was (IDE runs would work), 
but there's enough going on with trying to retrofit auth/non-auth onto existing 
FS instances that I concluded it was just a caching problem playing up.

Fix: 
* explicitly create new guarded/unguarded filesystems in test setup
* with the auth/non-auth mode chosen by the parameter used to parameterize all 
the tests
* (which also allows the tests to be cut in half, etc.)

With this, the tests work in the IDE, as a single test, and in parallel runs. 
That bit I'm happy with.
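
A minimal sketch of that setup pattern (hypothetical names and plain JUnit 4 
parameterization, not the actual patch):

{code:java}
import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

@RunWith(Parameterized.class)
public class OutOfBandExampleTest {

  @Parameterized.Parameters(name = "auth={0}")
  public static Collection<Object[]> params() {
    return Arrays.asList(new Object[][] {{true}, {false}});
  }

  private final boolean authoritative;

  public OutOfBandExampleTest(boolean authoritative) {
    this.authoritative = authoritative;
  }

  @Test
  public void testOutOfBandVisibility() {
    // Create fresh guarded/unguarded filesystem instances here, keyed off
    // 'authoritative', instead of picking up cached FS instances.
    System.out.println("running with authoritative=" + authoritative);
  }
}
{code}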

Now, looking at the code patch, one thing I'm worried about is that it doesn't 
handle expiry of tombstones. Ever. Which is still part of the OOB problem.

If a guarded FS deletes a file, and an unguarded client creates it, 
getFileStatus on the guarded file will always return FNFE. I think we'll need 
an expiry of tombstones, in both auth and non-auth mode, to stop this.

Thoughts? It'll probably make sense to do that in a followup patch: this one is 
for these bits of the problem.
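
A minimal sketch of the expiry idea (hypothetical names, not the S3Guard API): 
only trust a tombstone while it is younger than a TTL; once it expires, go back 
to S3 and let whatever is found there replace the stale entry.

{code:java}
public final class TombstoneTtl {

  private TombstoneTtl() {
  }

  /**
   * @param tombstoneTimeMillis when the delete was recorded in the store
   * @param ttlMillis how long a tombstone stays authoritative
   * @return true if the tombstone should still mask the S3 object
   */
  public static boolean trustTombstone(long tombstoneTimeMillis,
      long ttlMillis) {
    return System.currentTimeMillis() - tombstoneTimeMillis < ttlMillis;
  }
}
{code}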

> S3Guard: Better support for out-of-band operations
> --
>
> Key: HADOOP-15999
> URL: https://issues.apache.org/jira/browse/HADOOP-15999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15999-007.patch, HADOOP-15999.001.patch, 
> HADOOP-15999.002.patch, HADOOP-15999.003.patch, HADOOP-15999.004.patch, 
> HADOOP-15999.005.patch, HADOOP-15999.006.patch, out-of-band-operations.patch
>
>
> S3Guard was initially done on the premise that a new MetadataStore would be 
> the source of truth, and that it wouldn't provide guarantees if updates were 
> done without using S3Guard.
> I've been seeing increased demand for better support for scenarios where 
> operations are done on the data that can't reasonably be done with S3Guard 
> involved. For example:
> * A file is deleted using S3Guard, and replaced by some other tool. S3Guard 
> can't tell the difference between the new file and delete / list 
> inconsistency and continues to treat the file as deleted.
> * An S3Guard-ed file is overwritten by a longer file by some other tool. When 
> reading the file, only the length of the original file is read.
> We could possibly have smarter behavior here by querying both S3 and the 
> MetadataStore (even in cases where we may currently only query the 
> MetadataStore in getFileStatus) and use whichever one has the higher modified 
> time.
> This kills the performance boost we currently get in some workloads with the 
> short-circuited getFileStatus, but we could keep it with authoritative mode 
> which should give a larger performance boost. At least we'd get more 
> correctness without authoritative mode and a clear declaration of when we can 
> make the assumptions required to short-circuit the process. If we can't 
> consider S3Guard the source of truth, we need to defer to S3 more.
> We'd need to be extra sure of any locality / time zone issues if we start 
> relying on mod_time more directly, but currently we're tracking the 
> modification time as returned by S3 anyway.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15999) S3Guard: Better support for out-of-band operations

2019-02-26 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15999:

Status: Open  (was: Patch Available)

> S3Guard: Better support for out-of-band operations
> --
>
> Key: HADOOP-15999
> URL: https://issues.apache.org/jira/browse/HADOOP-15999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15999-007.patch, HADOOP-15999.001.patch, 
> HADOOP-15999.002.patch, HADOOP-15999.003.patch, HADOOP-15999.004.patch, 
> HADOOP-15999.005.patch, HADOOP-15999.006.patch, out-of-band-operations.patch
>
>
> S3Guard was initially done on the premise that a new MetadataStore would be 
> the source of truth, and that it wouldn't provide guarantees if updates were 
> done without using S3Guard.
> I've been seeing increased demand for better support for scenarios where 
> operations are done on the data that can't reasonably be done with S3Guard 
> involved. For example:
> * A file is deleted using S3Guard, and replaced by some other tool. S3Guard 
> can't tell the difference between the new file and delete / list 
> inconsistency and continues to treat the file as deleted.
> * An S3Guard-ed file is overwritten by a longer file by some other tool. When 
> reading the file, only the length of the original file is read.
> We could possibly have smarter behavior here by querying both S3 and the 
> MetadataStore (even in cases where we may currently only query the 
> MetadataStore in getFileStatus) and use whichever one has the higher modified 
> time.
> This kills the performance boost we currently get in some workloads with the 
> short-circuited getFileStatus, but we could keep it with authoritative mode 
> which should give a larger performance boost. At least we'd get more 
> correctness without authoritative mode and a clear declaration of when we can 
> make the assumptions required to short-circuit the process. If we can't 
> consider S3Guard the source of truth, we need to defer to S3 more.
> We'd need to be extra sure of any locality / time zone issues if we start 
> relying on mod_time more directly, but currently we're tracking the 
> modification time as returned by S3 anyway.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15999) S3Guard: Better support for out-of-band operations

2019-02-26 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15999:

Attachment: HADOOP-15999-007.patch

> S3Guard: Better support for out-of-band operations
> --
>
> Key: HADOOP-15999
> URL: https://issues.apache.org/jira/browse/HADOOP-15999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15999-007.patch, HADOOP-15999.001.patch, 
> HADOOP-15999.002.patch, HADOOP-15999.003.patch, HADOOP-15999.004.patch, 
> HADOOP-15999.005.patch, HADOOP-15999.006.patch, out-of-band-operations.patch
>
>
> S3Guard was initially done on the premise that a new MetadataStore would be 
> the source of truth, and that it wouldn't provide guarantees if updates were 
> done without using S3Guard.
> I've been seeing increased demand for better support for scenarios where 
> operations are done on the data that can't reasonably be done with S3Guard 
> involved. For example:
> * A file is deleted using S3Guard, and replaced by some other tool. S3Guard 
> can't tell the difference between the new file and delete / list 
> inconsistency and continues to treat the file as deleted.
> * An S3Guard-ed file is overwritten by a longer file by some other tool. When 
> reading the file, only the length of the original file is read.
> We could possibly have smarter behavior here by querying both S3 and the 
> MetadataStore (even in cases where we may currently only query the 
> MetadataStore in getFileStatus) and use whichever one has the higher modified 
> time.
> This kills the performance boost we currently get in some workloads with the 
> short-circuited getFileStatus, but we could keep it with authoritative mode 
> which should give a larger performance boost. At least we'd get more 
> correctness without authoritative mode and a clear declaration of when we can 
> make the assumptions required to short-circuit the process. If we can't 
> consider S3Guard the source of truth, we need to defer to S3 more.
> We'd need to be extra sure of any locality / time zone issues if we start 
> relying on mod_time more directly, but currently we're tracking the 
> modification time as returned by S3 anyway.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16127) In ipc.Client, put a new connection could happen after stop

2019-02-26 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778698#comment-16778698
 ] 

Hudson commented on HADOOP-16127:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16074 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16074/])
HADOOP-16127. In ipc.Client, put a new connection could happen after (szetszwo: 
rev 9192f71e21847ad86bc9ff23847d8957dfe8ae58)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ClientCache.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java


> In ipc.Client, put a new connection could happen after stop
> ---
>
> Key: HADOOP-16127
> URL: https://issues.apache.org/jira/browse/HADOOP-16127
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: c16127_20190219.patch, c16127_20190220.patch, 
> c16127_20190225.patch
>
>
> In getConnection(..), running can be initially true but becomes false before 
> putIfAbsent.
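
A simplified sketch of that check-then-act race and one way to close it 
(hypothetical names; see the commit above for the real fix):

{code:java}
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicBoolean;

final class ClientSketch {
  private final AtomicBoolean running = new AtomicBoolean(true);
  private final ConcurrentMap<String, Object> connections =
      new ConcurrentHashMap<>();

  Object getConnection(String remoteId) throws IOException {
    if (!running.get()) {                 // check: client still running?
      throw new IOException("client is stopped");
    }
    // stop() may run right here, after the check but before the insert.
    Object conn = new Object();
    Object prev = connections.putIfAbsent(remoteId, conn);  // act
    Object result = (prev != null) ? prev : conn;
    // Re-check after the insert: if stop() raced with us, undo it.
    if (!running.get()) {
      connections.remove(remoteId, result);
      throw new IOException("client stopped during connection setup");
    }
    return result;
  }

  void stop() {
    running.set(false);
    connections.clear();                  // drop every known connection
  }
}
{code}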



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15999) S3Guard: Better support for out-of-band operations

2019-02-26 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15999:

Summary: S3Guard: Better support for out-of-band operations  (was: [s3a] 
Better support for out-of-band operations)

> S3Guard: Better support for out-of-band operations
> --
>
> Key: HADOOP-15999
> URL: https://issues.apache.org/jira/browse/HADOOP-15999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15999.001.patch, HADOOP-15999.002.patch, 
> HADOOP-15999.003.patch, HADOOP-15999.004.patch, HADOOP-15999.005.patch, 
> HADOOP-15999.006.patch, out-of-band-operations.patch
>
>
> S3Guard was initially done on the premise that a new MetadataStore would be 
> the source of truth, and that it wouldn't provide guarantees if updates were 
> done without using S3Guard.
> I've been seeing increased demand for better support for scenarios where 
> operations are done on the data that can't reasonably be done with S3Guard 
> involved. For example:
> * A file is deleted using S3Guard, and replaced by some other tool. S3Guard 
> can't tell the difference between the new file and delete / list 
> inconsistency and continues to treat the file as deleted.
> * An S3Guard-ed file is overwritten by a longer file by some other tool. When 
> reading the file, only the length of the original file is read.
> We could possibly have smarter behavior here by querying both S3 and the 
> MetadataStore (even in cases where we may currently only query the 
> MetadataStore in getFileStatus) and use whichever one has the higher modified 
> time.
> This kills the performance boost we currently get in some workloads with the 
> short-circuited getFileStatus, but we could keep it with authoritative mode 
> which should give a larger performance boost. At least we'd get more 
> correctness without authoritative mode and a clear declaration of when we can 
> make the assumptions required to short-circuit the process. If we can't 
> consider S3Guard the source of truth, we need to defer to S3 more.
> We'd need to be extra sure of any locality / time zone issues if we start 
> relying on mod_time more directly, but currently we're tracking the 
> modification time as returned by S3 anyway.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16150) checksumFS doesn't wrap concat(): concatenated files don't have checksums

2019-02-26 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778696#comment-16778696
 ] 

Steve Loughran commented on HADOOP-16150:
-

Sounds good.

People working on the multipart upload stuff ([~ehiggs]) need to be aware that, 
for testing, they'll need to go through the raw local FS or turn checksums off.
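
A quick sketch of both routes ({{getRawFileSystem()}} and 
{{setWriteChecksum()}} are existing FileSystem calls; the class itself is 
illustrative):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocalFileSystem;

public class RawLocalExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    LocalFileSystem local = FileSystem.getLocal(conf);

    // Route 1: bypass the checksumming wrapper entirely.
    FileSystem raw = local.getRawFileSystem();
    System.out.println("raw FS: " + raw.getUri());

    // Route 2: keep the wrapper but stop it writing .crc files.
    local.setWriteChecksum(false);
  }
}
{code}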


> checksumFS doesn't wrap concat(): concatenated files don't have checksums
> -
>
> Key: HADOOP-16150
> URL: https://issues.apache.org/jira/browse/HADOOP-16150
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Major
>
> Follow-on from HADOOP-16107. FilterFS passes through the concat operation, and 
> checksum FS doesn't override that call, so files created through concat *do 
> not have checksums*.
> If people are using a checksummed fs directly with the expectation that 
> concatenated files will have checksums, that expectation is not being met. 
> What to do?
> * fail always?
> * fail if checksums are enabled?
> * try to implement the concat operation from raw local up at the checksum 
> level
> append() just gives up always; doing the same for concat would be the 
> simplest. Again, this brings us back to "need a way to see if an FS supports a 
> feature before invocation"; here checksum fs would reject append and concat.
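
The "fail always" option is a one-liner; a sketch of what it could look like 
(illustrative only, hung off LocalFileSystem to stay self-contained; the real 
change would go into ChecksumFileSystem itself):

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.LocalFileSystem;
import org.apache.hadoop.fs.Path;

public class FailingConcatFs extends LocalFileSystem {
  // Stop silently delegating to the raw FS, which is what currently
  // produces checksum-less concatenated files.
  @Override
  public void concat(final Path trg, final Path[] psrcs) throws IOException {
    throw new UnsupportedOperationException(
        "checksummed filesystems do not support concat()");
  }
}
{code}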



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] bharatviswa504 commented on issue #518: HDDS-1178. Healthy pipeline Chill Mode Rule.

2019-02-26 Thread GitBox
bharatviswa504 commented on issue #518: HDDS-1178. Healthy pipeline Chill Mode 
Rule.
URL: https://github.com/apache/hadoop/pull/518#issuecomment-467659727
 
 
   Thank You @anuengineer  and @elek  for the review.
   I have fixed the test failure.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on issue #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on issue #485: HDFS-14244. Refactor the libhdfspp cmake 
build files.
URL: https://github.com/apache/hadoop/pull/485#issuecomment-467659614
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 20 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 1052 | root in trunk failed. |
   | +1 | compile | 123 | trunk passed |
   | +1 | mvnsite | 26 | trunk passed |
   | +1 | shadedclient | 621 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 20 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 18 | the patch passed |
   | -1 | compile | 32 | hadoop-hdfs-native-client in the patch failed. |
   | -1 | cc | 32 | hadoop-hdfs-native-client in the patch failed. |
   | -1 | javac | 32 | hadoop-hdfs-native-client in the patch failed. |
   | +1 | mvnsite | 21 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | shelldocs | 16 | There were no new shelldocs issues. |
   | -1 | whitespace | 0 | The patch has 125 line(s) that end in whitespace. 
Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | -1 | whitespace | 2 | The patch has 2024 line(s) with tabs. |
   | +1 | shadedclient | 753 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 19 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 36 | hadoop-hdfs-native-client in the patch failed. |
   | -1 | asflicense | 34 | The patch generated 5 ASF License warnings. |
   | | | 3015 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-485/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/485 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  cc  shellcheck  shelldocs  |
   | uname | Linux abbf28bafa50 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a5a751b |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-485/11/artifact/out/branch-mvninstall-root.txt
 |
   | shellcheck | v0.4.6 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-485/11/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-485/11/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-485/11/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-485/11/artifact/out/whitespace-eol.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-485/11/artifact/out/whitespace-tabs.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-485/11/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-485/11/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-485/11/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 410 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-485/11/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16140) Add emptyTrash option to purge trash immediately

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778691#comment-16778691
 ] 

Hadoop QA commented on HADOOP-16140:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 42s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 11 new + 95 unchanged - 0 fixed = 106 total (was 95) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 6 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
42s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16140 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960242/HADOOP-14200.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 01f236053a4a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a106d2d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15979/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15979/artifact/out/whitespace-tabs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15979/testReport/ |
| Max. process+thread count | 1382 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console 

[jira] [Resolved] (HADOOP-16110) ~/hadoop-env doesn't support HADOOP_OPTIONAL_TOOLS

2019-02-26 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16110.
-
Resolution: Won't Fix

> ~/hadoop-env doesn't support HADOOP_OPTIONAL_TOOLS
> --
>
> Key: HADOOP-16110
> URL: https://issues.apache.org/jira/browse/HADOOP-16110
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Minor
>
> if you set {{HADOOP_OPTIONAL_TOOLS}} in ~/.hadoop-env, it doesn't get picked 
> up because the HADOOP_OPTIONAL_TOOLS expansion takes place in the parse 
> process way before {{hadoop_exec_user_hadoopenv}} is invoked.
> Unless I've really misunderstood what ~/.hadoop-env is meant to do ("let me 
> set hadoop env vars"), I'd have expected the tools env var examination (and 
> so the loading of optional tools) to take place afterwards.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260535551
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMake/FindPackageExtension.cmake
 ##
 @@ -0,0 +1,136 @@
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Input:
+#   h_file: the name of a header file to find
+#   lib_names: a list of library names to find
+#   allow_any: Allow either static or shared libraries
+
+# Environment:
+#   CMAKE_FIND_PACKAGE_NAME: the name of the package to build
+#   _HOME: variable is used to check for headers and library
+#   BUILD_SHARED_LIBRARIES: whether to find shared instead of static libraries
+
+# Outputs:
+#   _INCLUDE_DIR: directory containing headers
+#   _LIBRARIES: libraries to link with
+#   _FOUND: whether uriparser has been found
+
+function (findPackageExtension h_file lib_names allow_any)
+  set (_name ${CMAKE_FIND_PACKAGE_NAME})
+  string (TOUPPER ${_name} _upper_name)
+
+  # protect against running it a second time
+  if (NOT DEFINED ${_upper_name}_FOUND)
+
+# find the name of the home variable and get it from the environment
+set (_home_name "${_upper_name}_HOME")
+if (DEFINED ENV{${_home_name}})
+  set(_home "$ENV{${_home_name}}")
+elseif (DEFINED ${_home_name})
+  set(_home ${${_home_name}})
+endif ()
+  
+# If _HOME is set, use that alone as the path, otherwise use
+# PACKAGE_SEARCH_PATH ahead of the default_path.
+if(DEFINED _home AND NOT ${_home} STREQUAL "")
+  set(_no_default TRUE)
+else()
+  set(_no_default FALSE)
+endif()
+
+set (_include_dir "${h_file}-NOTFOUND")
+if (_no_default)
+  find_path (_include_dir ${h_file}
+ PATHS ${_home} NO_DEFAULT_PATH
+ PATH_SUFFIXES "include")
+else ()
+  find_path (_include_dir ${h_file}
+ PATH_SUFFIXES "include")
+endif (_no_default)
+
+set(_libraries)
+foreach (lib ${lib_names})
+  expandLibName(${lib} ${allow_any} _full)
+  set (_match "${_full}-NOTFOUND")
+  if (_no_default)
+find_library (_match NAMES ${_full}
+  PATHS ${_home}
+  NO_DEFAULT_PATH
+  PATH_SUFFIXES "lib" "lib64")
+  else ()
+find_library (_match NAMES ${_full}
+  HINTS ${_include_dir}/..
+  PATH_SUFFIXES "lib" "lib64")
+  endif (_no_default)
+  if (_match)
+list (APPEND _libraries ${_match})
+  endif ()
+  unset(_full)
+endforeach ()
+
+list (LENGTH _libraries _libraries_len)
+list (LENGTH lib_names _name_len)
+
+if (_include_dir AND _libraries_len EQUAL _name_len)
+  message (STATUS "Found the ${_name} header: ${_include_dir}")
+  if (NOT _libraries_len EQUAL 0)
+message (STATUS "Found the ${_name} libraries: ${_libraries}")
+  endif ()
+  set(${_upper_name}_FOUND TRUE PARENT_SCOPE)
+  set(${_upper_name}_INCLUDE_DIR ${_include_dir} PARENT_SCOPE)
+  set(${_upper_name}_LIBRARIES "${_libraries}" PARENT_SCOPE)
+
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260535538
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMake/FindPackageExtension.cmake
 ##
 @@ -0,0 +1,136 @@
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Input:
+#   h_file: the name of a header file to find
+#   lib_names: a list of library names to find
+#   allow_any: Allow either static or shared libraries
+
+# Environment:
+#   CMAKE_FIND_PACKAGE_NAME: the name of the package to build
+#   _HOME: variable is used to check for headers and library
+#   BUILD_SHARED_LIBRARIES: whether to find shared instead of static libraries
+
+# Outputs:
+#   _INCLUDE_DIR: directory containing headers
+#   _LIBRARIES: libraries to link with
+#   _FOUND: whether uriparser has been found
+
+function (findPackageExtension h_file lib_names allow_any)
+  set (_name ${CMAKE_FIND_PACKAGE_NAME})
+  string (TOUPPER ${_name} _upper_name)
+
+  # protect against running it a second time
+  if (NOT DEFINED ${_upper_name}_FOUND)
+
+# find the name of the home variable and get it from the environment
+set (_home_name "${_upper_name}_HOME")
+if (DEFINED ENV{${_home_name}})
+  set(_home "$ENV{${_home_name}}")
+elseif (DEFINED ${_home_name})
+  set(_home ${${_home_name}})
+endif ()
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260535561
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/CMakeLists.txt
 ##
 @@ -16,10 +16,129 @@
 # limitations under the License.
 #
 
+cmake_minimum_required(VERSION 2.8.12)
+if (POLICY CMP0042)
+  cmake_policy(SET CMP0042 NEW) # suppress warning about mac rpath
+endif ()
+
+project(libhdfspp)
+
+enable_testing()
+include (CTest)
+
+string(REPLACE "|" ";" CMAKE_PREFIX_PATH "${CMAKE_PREFIX_PATH}")
+message(STATUS "OOM Prefix path = ${CMAKE_PREFIX_PATH}")
+
+find_package(ASIO REQUIRED)
+find_package(Doxygen)
+find_package(OpenSSL REQUIRED)
+find_package(Protobuf REQUIRED)
+find_package(RapidXML REQUIRED)
+find_package(Threads REQUIRED)
+find_package(URIparser REQUIRED)
+
+include(DecideSasl)
+include(CheckCXXSourceCompiles)
+
+include(HdfsppCompilerOptions)
+
+# Check if thread_local is supported
+unset (THREAD_LOCAL_SUPPORTED CACHE)
+set (CMAKE_REQUIRED_LIBRARIES ${CMAKE_THREAD_LIBS_INIT})
+check_cxx_source_compiles(
+"#include 
+int main(void) {
+  thread_local int s;
+  return 0;
+}"
+THREAD_LOCAL_SUPPORTED)
+if (NOT THREAD_LOCAL_SUPPORTED)
+  message(FATAL_ERROR "FATAL ERROR: The required feature thread_local storage 
is not supported by your compiler. Known compilers that support this feature: 
GCC 4.8+, Visual Studio 2015+, Clang (community version 3.3+), Clang (version 
for Xcode 8+ and iOS 9+).")
+endif (NOT THREAD_LOCAL_SUPPORTED)
+
+# Check if PROTOC library was compiled with the compatible compiler by trying
+# to compile some dummy code
+unset (PROTOC_IS_COMPATIBLE CACHE)
+set (CMAKE_REQUIRED_LIBRARIES protobuf protoc)
+check_cxx_source_compiles(
+"#include 
+#include 
+int main(void) {
+  ::google::protobuf::io::ZeroCopyOutputStream *out = NULL;
+  ::google::protobuf::io::Printer printer(out, '$');
+  printer.PrintRaw(std::string(\"test\"));
+  return 0;
+}"
+PROTOC_IS_COMPATIBLE)
+if (NOT PROTOC_IS_COMPATIBLE)
+  message(WARNING "WARNING: the Protocol Buffers Library and the hdfs++ 
Library must both be compiled with the same (or compatible) compiler. Normally 
only the same major versions of the same compiler are compatible with each 
other.")
+endif (NOT PROTOC_IS_COMPATIBLE)
+
+if(DOXYGEN_FOUND)
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/doc/Doxyfile.in 
${CMAKE_CURRENT_BINARY_DIR}/doc/Doxyfile @ONLY)
+add_custom_target(doc ${DOXYGEN_EXECUTABLE} 
${CMAKE_CURRENT_BINARY_DIR}/doc/Doxyfile
+  WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
+  COMMENT "Generating API documentation with Doxygen" VERBATIM)
+endif(DOXYGEN_FOUND)
+
+include_directories(
+  ${CMAKE_CURRENT_SOURCE_DIR}/../include
+  ${CMAKE_CURRENT_SOURCE_DIR}
+  ${CMAKE_CURRENT_BINARY_DIR}/proto
+  )
+
+# Put the protobuf stuff first, since the version has to match between
+# the library, generated code, and the include files.
+include_directories(BEFORE ${PROTOBUF_INCLUDE_DIR})
+
+include_directories(SYSTEM
+  ${ASIO_INCLUDE_DIR}
+  ${RAPIDXML_INCLUDE_DIR}
+  ${OPENSSL_INCLUDE_DIR}
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-26 Thread Ben Roling (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778676#comment-16778676
 ] 

Ben Roling edited comment on HADOOP-15625 at 2/26/19 11:13 PM:
---

I got wrapped up in some other things so didn't make quite as much progress on 
the TODO list as I would have liked, but I did upload a new patch with some 
progress:

* added fs.s3.change.detection.versionrequired config
* fixed failure to throw RemoteFileChangedException on multiple reads
* added config property documentation to index.md

The fix to throw RemoteFileChangedException on multiple reads currently means 
the 'warn' setting would result in potentially lots of warnings.  An 
S3AInputStream that detects a change would log a warning on every subsequent 
read() call, which would be noisy, at least within the job or task reading that 
file.  It probably does need to be revisited.

The documentation needs more work and I didn't get all the line length and 
javadoc style issues sorted.

I also didn't address core-site.xml.  To be clear there, you're talking about 
the src/test/resources/core-site.xml, right?

I'll probably have to get back to more of this tomorrow.

My branch is here if you're interested:
https://github.com/ben-roling/hadoop/tree/HADOOP-15625-stevel


was (Author: ben.roling):
I got wrapped up in some other things so didn't make quite as much progress on 
the TODO list as I would have liked, but I did upload a new patch with some 
progress:

* added fs.s3.change.detection.versionrequired config
* fixed failure to throw RemoteFileChangedException on multiple reads
* added config property documentation to index.md

The fix to throw RemoteFileChangedException on multiple reads currently means 
the 'warn' setting would result in potentially lots of warnings.  An 
S3AInputStream that detects a change would log a warning on every subsequent 
read() call, which would be noisy, at least within the job or task reading that 
file.  It probably does need to be revisited.

The documentation needs more work and I didn't get all the line length and 
javadoc style issues sorted.

I also didn't address core-site.xml.  To be clear there, you're talking about 
the src/test/resources/core-site.xml, right?

I'll probably have to get back to more of this tomorrow.

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP--15625-006.patch, HADOOP-15625-001.patch, 
> HADOOP-15625-002.patch, HADOOP-15625-003.patch, HADOOP-15625-004.patch, 
> HADOOP-15625-005.patch, HADOOP-15625-006.patch, HADOOP-15625-007.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice that the file 
> has changed, it caches the length from startup, and whenever a seek triggers 
> a new GET you may get old data, new data, or perhaps even a switch from new 
> data back to old data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verify the etag of the response
> # raise an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.
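
To make the detection scheme concrete, here is a minimal Java sketch, assuming 
only that the change-detection exception is an IOException (the class and 
method names are illustrative, not the patch itself):

{code:java}
// Hypothetical sketch: remember the etag of the first HEAD/GET and verify it
// whenever a seek forces a new GET, failing loudly if the object has changed.
import java.io.IOException;

class EtagChangeDetector {
  private String etag;   // revision seen by the first request; null until then

  /** Call with the ETag header of every (re)opened GET response. */
  void check(String responseEtag, String uri) throws IOException {
    if (etag == null) {
      etag = responseEtag;                   // first request: just remember it
    } else if (!etag.equals(responseEtag)) {
      // the patch raises RemoteFileChangedException (an IOException) here
      throw new IOException("Remote file changed during read: " + uri);
    }
  }
}
{code}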



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-26 Thread Ben Roling (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778676#comment-16778676
 ] 

Ben Roling commented on HADOOP-15625:
-

I got wrapped up in some other things so didn't make quite as much progress on 
the TODO list as I would have liked, but I did upload a new patch with some 
progress:

* added fs.s3.change.detection.versionrequired config
* fixed failure to throw RemoteFileChangedException on multiple reads
* added config property documentation to index.md

The fix to throw RemoteFileChangedException on multiple reads currently means 
the 'warn' setting would result in potentially lots of warnings.  An 
S3AInputStream that detects a change would log a warning on every subsequent 
read() call, which would be noisy, at least within the job or task reading that 
file.  It probably does need to be revisited.

The documentation needs more work and I didn't get all the line length and 
javadoc style issues sorted.

I also didn't address core-site.xml.  To be clear there, you're talking about 
the src/test/resources/core-site.xml, right?

I'll probably have to get back to more of this tomorrow.

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP--15625-006.patch, HADOOP-15625-001.patch, 
> HADOOP-15625-002.patch, HADOOP-15625-003.patch, HADOOP-15625-004.patch, 
> HADOOP-15625-005.patch, HADOOP-15625-006.patch, HADOOP-15625-007.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice that the file 
> has changed, it caches the length from startup, and whenever a seek triggers 
> a new GET you may get old data, new data, or perhaps even a switch from new 
> data back to old data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verify the etag of the response
> # raise an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16127) In ipc.Client, put a new connection could happen after stop

2019-02-26 Thread Tsz Wo Nicholas Sze (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-16127:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thanks [~ste...@apache.org] for reviewing the patches.

I have committed this.

> In ipc.Client, put a new connection could happen after stop
> ---
>
> Key: HADOOP-16127
> URL: https://issues.apache.org/jira/browse/HADOOP-16127
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: c16127_20190219.patch, c16127_20190220.patch, 
> c16127_20190225.patch
>
>
> In getConnection(..), running can be initially true but becomes false before 
> putIfAbsent.
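
For readers unfamiliar with the race, a minimal Java sketch (illustrative 
only; the names and structure are simplified, not the committed code):

{code:java}
// The "running" check and the putIfAbsent-style insertion are not atomic:
// stop() can run between them, so a connection can be registered after the
// client has already been stopped. Re-checking after the insertion is one
// way to keep such a connection from leaking past stop().
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

class ConnectionRaceSketch {
  private final AtomicBoolean running = new AtomicBoolean(true);
  private final ConcurrentHashMap<String, Object> connections =
      new ConcurrentHashMap<>();

  Object getConnection(String remoteId) {
    if (!running.get()) {                       // (1) running is still true...
      throw new IllegalStateException("client is stopped");
    }
    Object conn = connections.computeIfAbsent(remoteId, k -> new Object());
    if (!running.get()) {                       // (2) ...but may be false now
      connections.remove(remoteId, conn);       // undo the late insertion
      throw new IllegalStateException("client is stopped");
    }
    return conn;
  }

  void stop() {
    running.set(false);
    connections.clear();                        // drop/close all connections
  }
}
{code}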



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16127) In ipc.Client, put a new connection could happen after stop

2019-02-26 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778681#comment-16778681
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-16127:
--

Since the 25.patch only differs from the 20.patch by whitespace changes (to fix 
checkstyle), I will commit it shortly.

> In ipc.Client, put a new connection could happen after stop
> ---
>
> Key: HADOOP-16127
> URL: https://issues.apache.org/jira/browse/HADOOP-16127
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: c16127_20190219.patch, c16127_20190220.patch, 
> c16127_20190225.patch
>
>
> In getConnection(..), running can be initially true but becomes false before 
> putIfAbsent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2019-02-26 Thread Yongjun Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778680#comment-16778680
 ] 

Yongjun Zhang edited comment on HADOOP-12909 at 2/26/19 11:14 PM:
--

Hi [~xiaobingo] and [~szetszwo],

Thanks for your work here. One question: when the asynchronous RPC mode was 
introduced, it seems rpcTimeOut was removed from the synchronous mode (see 
[here|https://issues.apache.org/jira/browse/HADOOP-15720?focusedCommentId=16778657=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16778657]).
 Did I understand that correctly, and if so, is that intended?

Thanks.


was (Author: yzhangal):
Hi [~xiaobingo] and [~szetszwo],

Thanks for your work here. One question: when the asynchronous RPC mode was 
introduced, it seems rpcTimeOut was removed from the synchronous mode (see 
here). Did I understand that correctly, and if so, is that intended?

Thanks.

> Change ipc.Client to support asynchronous calls
> ---
>
> Key: HADOOP-12909
> URL: https://issues.apache.org/jira/browse/HADOOP-12909
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>Priority: Major
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-12909-HDFS-9924.000.patch, 
> HADOOP-12909-HDFS-9924.001.patch, HADOOP-12909-HDFS-9924.002.patch, 
> HADOOP-12909-HDFS-9924.003.patch, HADOOP-12909-HDFS-9924.004.patch, 
> HADOOP-12909-HDFS-9924.005.patch, HADOOP-12909-HDFS-9924.006.patch, 
> HADOOP-12909-HDFS-9924.007.patch, HADOOP-12909-HDFS-9924.008.patch, 
> HADOOP-12909-HDFS-9924.009.patch
>
>
> In ipc.Client, the underlying mechanism already supports asynchronous 
> calls -- the calls share a connection, the call requests are sent using a 
> thread pool, and the responses can be out of order.  Indeed, a synchronous 
> call is implemented by invoking wait() in the caller thread in order to wait 
> for the server response.
> In this JIRA, we change ipc.Client to support an asynchronous mode.  In 
> asynchronous mode, a call returns once the request has been sent out but does 
> not wait for the response from the server.
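
For illustration, a toy Java model of the two modes (this is a sketch, not 
ipc.Client itself):

{code:java}
// Synchronous mode blocks the caller until the response arrives; asynchronous
// mode returns a future as soon as the request is handed to the sender pool,
// and the receiving side completes the future when the reply comes back.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class AsyncRpcSketch {
  private final ExecutorService sender = Executors.newSingleThreadExecutor();

  CompletableFuture<String> callAsync(String request) {
    CompletableFuture<String> response = new CompletableFuture<>();
    sender.submit(() -> {
      // stand-in for sending over the shared connection; a real client would
      // complete the future from its receiver thread, possibly out of order
      response.complete("reply-to-" + request);
    });
    return response;                    // returns once the send is queued
  }

  String callSync(String request) throws Exception {
    return callAsync(request).get();    // the wait()-for-response equivalent
  }
}
{code}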



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260531343
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMake/FindPackageExtension.cmake
 ##
 @@ -0,0 +1,136 @@
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Input:
+#   h_file: the name of a header file to find
+#   lib_names: a list of library names to find
+#   allow_any: Allow either static or shared libraries
+
+# Environment:
+#   CMAKE_FIND_PACKAGE_NAME: the name of the package to build
+#   _HOME: variable is used to check for headers and library
+#   BUILD_SHARED_LIBRARIES: whether to find shared instead of static libraries
+
+# Outputs:
+#   _INCLUDE_DIR: directory containing headers
+#   _LIBRARIES: libraries to link with
+#   _FOUND: whether uriparser has been found
+
+function (findPackageExtension h_file lib_names allow_any)
+  set (_name ${CMAKE_FIND_PACKAGE_NAME})
+  string (TOUPPER ${_name} _upper_name)
+
+  # protect against running it a second time
+  if (NOT DEFINED ${_upper_name}_FOUND)
+
+# find the name of the home variable and get it from the environment
+set (_home_name "${_upper_name}_HOME")
+if (DEFINED ENV{${_home_name}})
+  set(_home "$ENV{${_home_name}}")
+elseif (DEFINED ${_home_name})
+  set(_home ${${_home_name}})
+endif ()
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260531354
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/CMakeLists.txt
 ##
 @@ -16,10 +16,127 @@
 # limitations under the License.
 #
 
+cmake_minimum_required(VERSION 2.8.12)
+if (POLICY CMP0042)
+  cmake_policy(SET CMP0042 NEW) # suppress warning about mac rpath
+endif ()
+
+project(libhdfspp)
+
+enable_testing()
+include (CTest)
+
+message(STATUS "OOM Prefix path = ${CMAKE_PREFIX_PATH}")
+find_package(ASIO REQUIRED)
+find_package(Doxygen)
+find_package(OpenSSL REQUIRED)
+find_package(Protobuf REQUIRED)
+find_package(RapidXML REQUIRED)
+find_package(Threads REQUIRED)
+find_package(URIparser REQUIRED)
+
+include(DecideSasl)
+include(CheckCXXSourceCompiles)
+
+include(HdfsppCompilerOptions)
+
+# Check if thread_local is supported
+unset (THREAD_LOCAL_SUPPORTED CACHE)
+set (CMAKE_REQUIRED_LIBRARIES ${CMAKE_THREAD_LIBS_INIT})
+check_cxx_source_compiles(
+"#include 
+int main(void) {
+  thread_local int s;
+  return 0;
+}"
+THREAD_LOCAL_SUPPORTED)
+if (NOT THREAD_LOCAL_SUPPORTED)
+  message(FATAL_ERROR "FATAL ERROR: The required feature thread_local storage 
is not supported by your compiler. Known compilers that support this feature: 
GCC 4.8+, Visual Studio 2015+, Clang (community version 3.3+), Clang (version 
for Xcode 8+ and iOS 9+).")
+endif (NOT THREAD_LOCAL_SUPPORTED)
+
+# Check if PROTOC library was compiled with the compatible compiler by trying
+# to compile some dummy code
+unset (PROTOC_IS_COMPATIBLE CACHE)
+set (CMAKE_REQUIRED_LIBRARIES protobuf protoc)
+check_cxx_source_compiles(
+"#include 
+#include 
+int main(void) {
+  ::google::protobuf::io::ZeroCopyOutputStream *out = NULL;
+  ::google::protobuf::io::Printer printer(out, '$');
+  printer.PrintRaw(std::string(\"test\"));
+  return 0;
+}"
+PROTOC_IS_COMPATIBLE)
+if (NOT PROTOC_IS_COMPATIBLE)
+  message(WARNING "WARNING: the Protocol Buffers Library and the hdfs++ 
Library must both be compiled with the same (or compatible) compiler. Normally 
only the same major versions of the same compiler are compatible with each 
other.")
+endif (NOT PROTOC_IS_COMPATIBLE)
+
+if(DOXYGEN_FOUND)
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/doc/Doxyfile.in 
${CMAKE_CURRENT_BINARY_DIR}/doc/Doxyfile @ONLY)
+add_custom_target(doc ${DOXYGEN_EXECUTABLE} 
${CMAKE_CURRENT_BINARY_DIR}/doc/Doxyfile
+  WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
+  COMMENT "Generating API documentation with Doxygen" VERBATIM)
+endif(DOXYGEN_FOUND)
+
+include_directories(
+  ${CMAKE_CURRENT_SOURCE_DIR}/../include
+  ${CMAKE_CURRENT_SOURCE_DIR}
+  ${CMAKE_CURRENT_BINARY_DIR}/proto
+  )
+
+# Put the protobuf stuff first, since the version has to match between
+# the library, generated code, and the include files.
+include_directories(BEFORE ${PROTOBUF_INCLUDE_DIR})
+
+include_directories(SYSTEM
+  ${ASIO_INCLUDE_DIR}
+  ${RAPIDXML_INCLUDE_DIR}
+  ${OPENSSL_INCLUDE_DIR}
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260531349
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMake/FindPackageExtension.cmake
 ##
 @@ -0,0 +1,136 @@
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Input:
+#   h_file: the name of a header file to find
+#   lib_names: a list of library names to find
+#   allow_any: Allow either static or shared libraries
+
+# Environment:
+#   CMAKE_FIND_PACKAGE_NAME: the name of the package to build
+#   _HOME: variable is used to check for headers and library
+#   BUILD_SHARED_LIBRARIES: whether to find shared instead of static libraries
+
+# Outputs:
+#   _INCLUDE_DIR: directory containing headers
+#   _LIBRARIES: libraries to link with
+#   _FOUND: whether uriparser has been found
+
+function (findPackageExtension h_file lib_names allow_any)
+  set (_name ${CMAKE_FIND_PACKAGE_NAME})
+  string (TOUPPER ${_name} _upper_name)
+
+  # protect against running it a second time
+  if (NOT DEFINED ${_upper_name}_FOUND)
+
+# find the name of the home variable and get it from the environment
+set (_home_name "${_upper_name}_HOME")
+if (DEFINED ENV{${_home_name}})
+  set(_home "$ENV{${_home_name}}")
+elseif (DEFINED ${_home_name})
+  set(_home ${${_home_name}})
+endif ()
+  
+# If _HOME is set, use that alone as the path, otherwise use
+# PACKAGE_SEARCH_PATH ahead of the default_path.
+if(DEFINED _home AND NOT ${_home} STREQUAL "")
+  set(_no_default TRUE)
+else()
+  set(_no_default FALSE)
+endif()
+
+set (_include_dir "${h_file}-NOTFOUND")
+if (_no_default)
+  find_path (_include_dir ${h_file}
+ PATHS ${_home} NO_DEFAULT_PATH
+ PATH_SUFFIXES "include")
+else ()
+  find_path (_include_dir ${h_file}
+ PATH_SUFFIXES "include")
+endif (_no_default)
+
+set(_libraries)
+foreach (lib ${lib_names})
+  expandLibName(${lib} ${allow_any} _full)
+  set (_match "${_full}-NOTFOUND")
+  if (_no_default)
+find_library (_match NAMES ${_full}
+  PATHS ${_home}
+  NO_DEFAULT_PATH
+  PATH_SUFFIXES "lib" "lib64")
+  else ()
+find_library (_match NAMES ${_full}
+  HINTS ${_include_dir}/..
+  PATH_SUFFIXES "lib" "lib64")
+  endif (_no_default)
+  if (_match)
+list (APPEND _libraries ${_match})
+  endif ()
+  unset(_full)
+endforeach ()
+
+list (LENGTH _libraries _libraries_len)
+list (LENGTH lib_names _name_len)
+
+if (_include_dir AND _libraries_len EQUAL _name_len)
+  message (STATUS "Found the ${_name} header: ${_include_dir}")
+  if (NOT _libraries_len EQUAL 0)
+message (STATUS "Found the ${_name} libraries: ${_libraries}")
+  endif ()
+  set(${_upper_name}_FOUND TRUE PARENT_SCOPE)
+  set(${_upper_name}_INCLUDE_DIR ${_include_dir} PARENT_SCOPE)
+  set(${_upper_name}_LIBRARIES "${_libraries}" PARENT_SCOPE)
+
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2019-02-26 Thread Yongjun Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778680#comment-16778680
 ] 

Yongjun Zhang edited comment on HADOOP-12909 at 2/26/19 11:13 PM:
--

Hi [~xiaobingo] and [~szetszwo],

Thanks for your work here. One question: when the asynchronous RPC mode was 
introduced, it seems rpcTimeOut was removed from the synchronous mode (see 
here). Did I understand that correctly, and if so, is that intended?

Thanks.


was (Author: yzhangal):
Hi [~xiaobingo],

Thanks for your work here. One question: when the asynchronous RPC mode was 
introduced, it seems rpcTimeOut was removed from the synchronous mode (see 
here). Did I understand that correctly, and if so, is that intended?

Thanks.

> Change ipc.Client to support asynchronous calls
> ---
>
> Key: HADOOP-12909
> URL: https://issues.apache.org/jira/browse/HADOOP-12909
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>Priority: Major
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-12909-HDFS-9924.000.patch, 
> HADOOP-12909-HDFS-9924.001.patch, HADOOP-12909-HDFS-9924.002.patch, 
> HADOOP-12909-HDFS-9924.003.patch, HADOOP-12909-HDFS-9924.004.patch, 
> HADOOP-12909-HDFS-9924.005.patch, HADOOP-12909-HDFS-9924.006.patch, 
> HADOOP-12909-HDFS-9924.007.patch, HADOOP-12909-HDFS-9924.008.patch, 
> HADOOP-12909-HDFS-9924.009.patch
>
>
> In ipc.Client, the underlying mechanism already supports asynchronous 
> calls -- the calls share a connection, the call requests are sent using a 
> thread pool, and the responses can be out of order.  Indeed, a synchronous 
> call is implemented by invoking wait() in the caller thread in order to wait 
> for the server response.
> In this JIRA, we change ipc.Client to support an asynchronous mode.  In 
> asynchronous mode, a call returns once the request has been sent out but does 
> not wait for the response from the server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2019-02-26 Thread Yongjun Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778680#comment-16778680
 ] 

Yongjun Zhang commented on HADOOP-12909:


Hi [~xiaobingo],

Thanks for your work here. One question: when the asynchronous RPC mode was 
introduced, it seems rpcTimeOut was removed from the synchronous mode (see 
here). Did I understand that correctly, and if so, is that intended?

Thanks.

> Change ipc.Client to support asynchronous calls
> ---
>
> Key: HADOOP-12909
> URL: https://issues.apache.org/jira/browse/HADOOP-12909
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>Priority: Major
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-12909-HDFS-9924.000.patch, 
> HADOOP-12909-HDFS-9924.001.patch, HADOOP-12909-HDFS-9924.002.patch, 
> HADOOP-12909-HDFS-9924.003.patch, HADOOP-12909-HDFS-9924.004.patch, 
> HADOOP-12909-HDFS-9924.005.patch, HADOOP-12909-HDFS-9924.006.patch, 
> HADOOP-12909-HDFS-9924.007.patch, HADOOP-12909-HDFS-9924.008.patch, 
> HADOOP-12909-HDFS-9924.009.patch
>
>
> In ipc.Client, the underlying mechanism already supports asynchronous 
> calls -- the calls share a connection, the call requests are sent using a 
> thread pool, and the responses can be out of order.  Indeed, a synchronous 
> call is implemented by invoking wait() in the caller thread in order to wait 
> for the server response.
> In this JIRA, we change ipc.Client to support an asynchronous mode.  In 
> asynchronous mode, a call returns once the request has been sent out but does 
> not wait for the response from the server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-26 Thread Ben Roling (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Roling updated HADOOP-15625:

Attachment: HADOOP-15625-007.patch

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP--15625-006.patch, HADOOP-15625-001.patch, 
> HADOOP-15625-002.patch, HADOOP-15625-003.patch, HADOOP-15625-004.patch, 
> HADOOP-15625-005.patch, HADOOP-15625-006.patch, HADOOP-15625-007.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice that the file 
> has changed, it caches the length from startup, and whenever a seek triggers 
> a new GET you may get old data, new data, or perhaps even a switch from new 
> data back to old data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verify the etag of the response
> # raise an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260529148
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/CMakeLists.txt
 ##
 @@ -16,10 +16,128 @@
 # limitations under the License.
 #
 
+cmake_minimum_required(VERSION 2.8.12)
+if (POLICY CMP0042)
+  cmake_policy(SET CMP0042 NEW) # suppress warning about mac rpath
+endif ()
+
+project(libhdfspp)
+
+enable_testing()
+include (CTest)
+
+message(STATUS "OOM Prefix path = ${CMAKE_PREFIX_PATH}")
+find_package(ASIO REQUIRED)
+find_package(Doxygen)
+find_package(OpenSSL REQUIRED)
+find_package(Protobuf REQUIRED)
+find_package(RapidXML REQUIRED)
+find_package(Threads REQUIRED)
+find_package(URIparser REQUIRED)
+
+include(DecideSasl)
+include(CheckCXXSourceCompiles)
+
+include(HdfsppCompilerOptions)
+
+# Check if thread_local is supported
+unset (THREAD_LOCAL_SUPPORTED CACHE)
+set (CMAKE_REQUIRED_LIBRARIES ${CMAKE_THREAD_LIBS_INIT})
+check_cxx_source_compiles(
+"#include 
+int main(void) {
+  thread_local int s;
+  return 0;
+}"
+THREAD_LOCAL_SUPPORTED)
+if (NOT THREAD_LOCAL_SUPPORTED)
+  message(FATAL_ERROR "FATAL ERROR: The required feature thread_local storage 
is not supported by your compiler. Known compilers that support this feature: 
GCC 4.8+, Visual Studio 2015+, Clang (community version 3.3+), Clang (version 
for Xcode 8+ and iOS 9+).")
+endif (NOT THREAD_LOCAL_SUPPORTED)
+
+# Check if PROTOC library was compiled with the compatible compiler by trying
+# to compile some dummy code
+unset (PROTOC_IS_COMPATIBLE CACHE)
+set (CMAKE_REQUIRED_LIBRARIES protobuf protoc)
+check_cxx_source_compiles(
+"#include 
+#include 
+int main(void) {
+  ::google::protobuf::io::ZeroCopyOutputStream *out = NULL;
+  ::google::protobuf::io::Printer printer(out, '$');
+  printer.PrintRaw(std::string(\"test\"));
+  return 0;
+}"
+PROTOC_IS_COMPATIBLE)
+if (NOT PROTOC_IS_COMPATIBLE)
+  message(WARNING "WARNING: the Protocol Buffers Library and the hdfs++ 
Library must both be compiled with the same (or compatible) compiler. Normally 
only the same major versions of the same compiler are compatible with each 
other.")
+endif (NOT PROTOC_IS_COMPATIBLE)
+
+if(DOXYGEN_FOUND)
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/doc/Doxyfile.in 
${CMAKE_CURRENT_BINARY_DIR}/doc/Doxyfile @ONLY)
+add_custom_target(doc ${DOXYGEN_EXECUTABLE} 
${CMAKE_CURRENT_BINARY_DIR}/doc/Doxyfile
+  WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
+  COMMENT "Generating API documentation with Doxygen" VERBATIM)
+endif(DOXYGEN_FOUND)
+
+include_directories(
+  ${CMAKE_CURRENT_SOURCE_DIR}/../include
+  ${CMAKE_CURRENT_SOURCE_DIR}
+  ${CMAKE_CURRENT_BINARY_DIR}/proto
+  )
+
+# Put the protobuf stuff first, since the version has to match between
+# the library, generated code, and the include files.
+include_directories(BEFORE ${PROTOBUF_INCLUDE_DIR})
+
+include_directories(SYSTEM
+  ${ASIO_INCLUDE_DIR}
+  ${RAPIDXML_INCLUDE_DIR}
+  ${OPENSSL_INCLUDE_DIR}
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260529132
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMake/FindPackageExtension.cmake
 ##
 @@ -0,0 +1,136 @@
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Input:
+#   h_file: the name of a header file to find
+#   lib_names: a list of library names to find
+#   allow_any: Allow either static or shared libraries
+
+# Environment:
+#   CMAKE_FIND_PACKAGE_NAME: the name of the package to build
+#   _HOME: variable is used to check for headers and library
+#   BUILD_SHARED_LIBRARIES: whether to find shared instead of static libraries
+
+# Outputs:
+#   _INCLUDE_DIR: directory containing headers
+#   _LIBRARIES: libraries to link with
+#   _FOUND: whether uriparser has been found
+
+function (findPackageExtension h_file lib_names allow_any)
+  set (_name ${CMAKE_FIND_PACKAGE_NAME})
+  string (TOUPPER ${_name} _upper_name)
+
+  # protect against running it a second time
+  if (NOT DEFINED ${_upper_name}_FOUND)
+
+# find the name of the home variable and get it from the environment
+set (_home_name "${_upper_name}_HOME")
+if (DEFINED ENV{${_home_name}})
+  set(_home "$ENV{${_home_name}}")
+elseif (DEFINED ${_home_name})
+  set(_home ${${_home_name}})
+endif ()
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260529139
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMake/FindPackageExtension.cmake
 ##
 @@ -0,0 +1,136 @@
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Input:
+#   h_file: the name of a header file to find
+#   lib_names: a list of library names to find
+#   allow_any: Allow either static or shared libraries
+
+# Environment:
+#   CMAKE_FIND_PACKAGE_NAME: the name of the package to build
+#   _HOME: variable is used to check for headers and library
+#   BUILD_SHARED_LIBRARIES: whether to find shared instead of static libraries
+
+# Outputs:
+#   _INCLUDE_DIR: directory containing headers
+#   _LIBRARIES: libraries to link with
+#   _FOUND: whether uriparser has been found
+
+function (findPackageExtension h_file lib_names allow_any)
+  set (_name ${CMAKE_FIND_PACKAGE_NAME})
+  string (TOUPPER ${_name} _upper_name)
+
+  # protect against running it a second time
+  if (NOT DEFINED ${_upper_name}_FOUND)
+
+# find the name of the home variable and get it from the environment
+set (_home_name "${_upper_name}_HOME")
+if (DEFINED ENV{${_home_name}})
+  set(_home "$ENV{${_home_name}}")
+elseif (DEFINED ${_home_name})
+  set(_home ${${_home_name}})
+endif ()
+  
+# If _HOME is set, use that alone as the path, otherwise use
+# PACKAGE_SEARCH_PATH ahead of the default_path.
+if(DEFINED _home AND NOT ${_home} STREQUAL "")
+  set(_no_default TRUE)
+else()
+  set(_no_default FALSE)
+endif()
+
+set (_include_dir "${h_file}-NOTFOUND")
+if (_no_default)
+  find_path (_include_dir ${h_file}
+ PATHS ${_home} NO_DEFAULT_PATH
+ PATH_SUFFIXES "include")
+else ()
+  find_path (_include_dir ${h_file}
+ PATH_SUFFIXES "include")
+endif (_no_default)
+
+set(_libraries)
+foreach (lib ${lib_names})
+  expandLibName(${lib} ${allow_any} _full)
+  set (_match "${_full}-NOTFOUND")
+  if (_no_default)
+find_library (_match NAMES ${_full}
+  PATHS ${_home}
+  NO_DEFAULT_PATH
+  PATH_SUFFIXES "lib" "lib64")
+  else ()
+find_library (_match NAMES ${_full}
+  HINTS ${_include_dir}/..
+  PATH_SUFFIXES "lib" "lib64")
+  endif (_no_default)
+  if (_match)
+list (APPEND _libraries ${_match})
+  endif ()
+  unset(_full)
+endforeach ()
+
+list (LENGTH _libraries _libraries_len)
+list (LENGTH lib_names _name_len)
+
+if (_include_dir AND _libraries_len EQUAL _name_len)
+  message (STATUS "Found the ${_name} header: ${_include_dir}")
+  if (NOT _libraries_len EQUAL 0)
+message (STATUS "Found the ${_name} libraries: ${_libraries}")
+  endif ()
+  set(${_upper_name}_FOUND TRUE PARENT_SCOPE)
+  set(${_upper_name}_INCLUDE_DIR ${_include_dir} PARENT_SCOPE)
+  set(${_upper_name}_LIBRARIES "${_libraries}" PARENT_SCOPE)
+
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15720) rpcTimeout may not have been applied correctly

2019-02-26 Thread Yongjun Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778657#comment-16778657
 ] 

Yongjun Zhang commented on HADOOP-15720:


The latest RPC code was changed (by HADOOP-12909 and related JIRAs) to support 
an asynchronous mode in addition to the original synchronous mode.

The synchronous mode RPC behavior seems to have changed: the new code 
appears not to apply rpcTimeout in synchronous mode (why?), see:

[https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L1490]

while it does apply rpcTimeout in asynchronous mode, see:

[https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L1465]

The JIRA I reported was against the old synchronous mode implementation (in 
which the sending of RPCs is serialized, but the responses are received 
asynchronously).

Thanks.
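
For context, here is a minimal sketch of the ping-on-timeout read loop that the 
quoted javadoc below describes, showing where rpcTimeout would take effect. 
{{PingingInputStream}}, {{pingIntervalMs}} and {{sendPing()}} are illustrative 
names for this sketch, not the actual {{Client.PingInputStream}} internals:

{code:java}
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.SocketTimeoutException;

// Illustrative sketch only, not the actual Hadoop client code:
// each socket timeout either sends a ping (keeping the connection alive)
// or, once the accumulated wait exceeds rpcTimeout, rethrows the timeout.
class PingingInputStream extends FilterInputStream {
  private final int pingIntervalMs; // socket read timeout between pings
  private final int rpcTimeoutMs;   // <= 0 means "ping forever"

  PingingInputStream(InputStream in, int pingIntervalMs, int rpcTimeoutMs) {
    super(in);
    this.pingIntervalMs = pingIntervalMs;
    this.rpcTimeoutMs = rpcTimeoutMs;
  }

  private void sendPing() throws IOException {
    // write a ping RPC on the connection (elided in this sketch)
  }

  @Override
  public int read() throws IOException {
    int waited = 0;
    while (true) {
      try {
        return super.read();
      } catch (SocketTimeoutException e) {
        waited += pingIntervalMs;
        if (rpcTimeoutMs > 0 && waited >= rpcTimeoutMs) {
          throw e; // this is where rpcTimeout takes effect
        }
        sendPing(); // no rpcTimeout configured: ping and retry the read
      }
    }
  }
}
{code}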

> rpcTimeout may not have been applied correctly
> --
>
> Key: HADOOP-15720
> URL: https://issues.apache.org/jira/browse/HADOOP-15720
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Yongjun Zhang
>Priority: Major
>
> org.apache.hadoop.ipc.Client send multiple RPC calls to server synchronously 
> via the same connection as in the following synchronized code block:
> {code:java}
>   synchronized (sendRpcRequestLock) {
> Future senderFuture = sendParamsExecutor.submit(new Runnable() {
>   @Override
>   public void run() {
> try {
>   synchronized (Connection.this.out) {
> if (shouldCloseConnection.get()) {
>   return;
> }
> 
> if (LOG.isDebugEnabled()) {
>   LOG.debug(getName() + " sending #" + call.id
>   + " " + call.rpcRequest);
> }
>  
> byte[] data = d.getData();
> int totalLength = d.getLength();
> out.writeInt(totalLength); // Total Length
> out.write(data, 0, totalLength);// RpcRequestHeader + 
> RpcRequest
> out.flush();
>   }
> } catch (IOException e) {
>   // exception at this point would leave the connection in an
>   // unrecoverable state (eg half a call left on the wire).
>   // So, close the connection, killing any outstanding calls
>   markClosed(e);
> } finally {
>   //the buffer is just an in-memory buffer, but it is still 
> polite to
>   // close early
>   IOUtils.closeStream(d);
> }
>   }
> });
>   
> try {
>   senderFuture.get();
> } catch (ExecutionException e) {
>   Throwable cause = e.getCause();
>   
>   // cause should only be a RuntimeException as the Runnable above
>   // catches IOException
>   if (cause instanceof RuntimeException) {
> throw (RuntimeException) cause;
>   } else {
> throw new RuntimeException("unexpected checked exception", cause);
>   }
> }
>   }
> {code}
> And it then waits for result asynchronously via
> {code:java}
> /* Receive a response.
>  * Because only one receiver, so no synchronization on in.
>  */
> private void receiveRpcResponse() {
>   if (shouldCloseConnection.get()) {
> return;
>   }
>   touch();
>   
>   try {
> int totalLen = in.readInt();
> RpcResponseHeaderProto header = 
> RpcResponseHeaderProto.parseDelimitedFrom(in);
> checkResponse(header);
> int headerLen = header.getSerializedSize();
> headerLen += CodedOutputStream.computeRawVarint32Size(headerLen);
> int callId = header.getCallId();
> if (LOG.isDebugEnabled())
>   LOG.debug(getName() + " got value #" + callId);
> Call call = calls.get(callId);
> RpcStatusProto status = header.getStatus();
> ..
> {code}
> However, we can see that the {{call}} returned by {{receiveRpcResponse()}} 
> above may arrive in any order.
> The following code
> {code:java}
> int totalLen = in.readInt();
> {code}
> eventually calls one of the following two methods, where rpcTimeOut is 
> checked against:
> {code:java}
>   /** Read a byte from the stream.
>* Send a ping if timeout on read. Retries if no failure is detected
>* until a byte is read.
>* @throws IOException for any IO problem other than socket timeout
>*/
>   @Override
>   public int read() throws IOException {
> int waiting = 0;
> do {
>   try 

[GitHub] hadoop-yetus commented on issue #495: [HDFS-14292] Added ExecutorService to DataXceiverServer

2019-02-26 Thread GitBox
hadoop-yetus commented on issue #495: [HDFS-14292] Added ExecutorService to 
DataXceiverServer
URL: https://github.com/apache/hadoop/pull/495#issuecomment-467640410
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1077 | trunk passed |
   | +1 | compile | 191 | trunk passed |
   | +1 | checkstyle | 77 | trunk passed |
   | +1 | mvnsite | 114 | trunk passed |
   | +1 | shadedclient | 910 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 232 | trunk passed |
   | +1 | javadoc | 75 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 10 | Maven dependency ordering for patch |
   | +1 | mvninstall | 98 | the patch passed |
   | +1 | compile | 183 | the patch passed |
   | +1 | javac | 183 | hadoop-hdfs-project generated 0 new + 537 unchanged - 3 
fixed = 537 total (was 540) |
   | -0 | checkstyle | 69 | hadoop-hdfs-project: The patch generated 4 new + 
644 unchanged - 8 fixed = 648 total (was 652) |
   | +1 | mvnsite | 100 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 764 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 251 | the patch passed |
   | +1 | javadoc | 72 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 107 | hadoop-hdfs-client in the patch passed. |
   | -1 | unit | 5068 | hadoop-hdfs in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 9476 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
   |   | hadoop.hdfs.web.TestWebHdfsTimeouts |
   |   | hadoop.hdfs.server.datanode.TestBPOfferService |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-495/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/495 |
   | JIRA Issue | HDFS-14292 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
   | uname | Linux a474f1a5bca3 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a106d2d |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-495/2/artifact/out/diff-checkstyle-hadoop-hdfs-project.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-495/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-495/2/testReport/ |
   | Max. process+thread count | 3373 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-495/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] anuengineer commented on issue #518: HDDS-1178. Healthy pipeline Chill Mode Rule.

2019-02-26 Thread GitBox
anuengineer commented on issue #518: HDDS-1178. Healthy pipeline Chill Mode 
Rule.
URL: https://github.com/apache/hadoop/pull/518#issuecomment-467639233
 
 
   Sorry, is this failure related to this patch?
   `hdds.scm.chillmode.healthy.pipelie.pct expected:<0> but was:<1>`
   If so, can we please add this field to ozone-default.xml before we 
commit this? Thanks.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] anuengineer commented on issue #518: HDDS-1178. Healthy pipeline Chill Mode Rule.

2019-02-26 Thread GitBox
anuengineer commented on issue #518: HDDS-1178. Healthy pipeline Chill Mode 
Rule.
URL: https://github.com/apache/hadoop/pull/518#issuecomment-467638802
 
 
   +1, Thanks for the update. Looks good to me.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] bharatviswa504 commented on issue #502: HDDS-919. Enable prometheus endpoints for Ozone datanodes

2019-02-26 Thread GitBox
bharatviswa504 commented on issue #502: HDDS-919. Enable prometheus endpoints 
for Ozone datanodes
URL: https://github.com/apache/hadoop/pull/502#issuecomment-467638054
 
 
   Thank you @elek for addressing the comments.
   There are many additional changes in ozone-default.xml that are not related 
to this; can we do them as part of a separate JIRA, since those changes do not 
belong to this one?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16140) Add emptyTrash option to purge trash immediately

2019-02-26 Thread Stephen O'Donnell (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778628#comment-16778628
 ] 

Stephen O'Donnell commented on HADOOP-16140:


I have uploaded a new patch that:

# Hopefully fixes the style issues
# Fixes the failing test - it previously passed on macOS because the 
filesystem there is case insensitive
# Adds the expunge -immediate option and removes the emptyTrash command

Regarding the comment from [~ste...@apache.org]

> TestTrash:L509. Dont downgrade an exception to a log, just rethrow

In this case I was copying the pattern that already exists in this (rather 
large) test method, where the above is used quite a few times. I wonder if it's 
best to stick with what is there rather than doing something different for this 
one additional test?

> Add emptyTrash option to purge trash immediately
> 
>
> Key: HADOOP-16140
> URL: https://issues.apache.org/jira/browse/HADOOP-16140
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HADOOP-14200.002.patch, HDFS-14200.001.patch
>
>
> I have always felt the HDFS trash is missing a simple way to empty the 
> current user's trash immediately. We have "expunge", but in my experience 
> supporting clusters, end users find this confusing. When most end users run 
> expunge, they really want to empty their trash immediately and get confused 
> when expunge does not do this.
> This can result in users performing somewhat dangerous "skipTrash" operations 
> on the trash to free up space. The alternative, which most users will not 
> figure out on their own, is:
> # Run the expunge command once - this will move the current folder to a 
> checkpoint and remove any checkpoints older than the retention interval
> # Wait over 1 minute and then run expunge again, overriding fs.trash.interval 
> to 1 minute using the following command: hadoop fs -Dfs.trash.interval=1 
> -expunge.
> With this JIRA I am proposing to add an extra command, "hdfs dfs -emptyTrash", 
> that purges everything in the logged-in user's Trash directories immediately.
> How would the community feel about adding this new option? I will upload a 
> patch for comments.
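
For illustration, the two-step workaround above can also be driven through the 
{{Trash}} API. A minimal sketch, assuming a one-minute fs.trash.interval makes 
every existing checkpoint old enough to delete ({{EmptyTrashNow}} is an 
illustrative name, not part of any patch here):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Trash;

// Sketch of the two-step workaround described above, not the patch itself.
public class EmptyTrashNow {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Step 1: drop checkpoints older than the retention interval, then
    // roll the Current directory into a fresh checkpoint.
    Trash trash = new Trash(fs, conf);
    trash.expunge();
    trash.checkpoint();

    // Step 2: once the new checkpoint is over a minute old, expunge again
    // with the retention interval overridden to one minute.
    Thread.sleep(61_000L);
    conf.set("fs.trash.interval", "1");
    trash = new Trash(fs, conf);
    trash.expunge();
  }
}
{code}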



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on issue #497: [HDFS-14295] Add CachedThreadPool for DataNode Transfers

2019-02-26 Thread GitBox
hadoop-yetus commented on issue #497: [HDFS-14295] Add CachedThreadPool for 
DataNode Transfers
URL: https://github.com/apache/hadoop/pull/497#issuecomment-467635918
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1099 | trunk passed |
   | +1 | compile | 61 | trunk passed |
   | +1 | checkstyle | 54 | trunk passed |
   | +1 | mvnsite | 66 | trunk passed |
   | +1 | shadedclient | 804 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 126 | trunk passed |
   | +1 | javadoc | 49 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 59 | the patch passed |
   | +1 | compile | 57 | the patch passed |
   | +1 | javac | 57 | the patch passed |
   | +1 | checkstyle | 53 | the patch passed |
   | +1 | mvnsite | 64 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 775 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 134 | the patch passed |
   | +1 | javadoc | 47 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 5191 | hadoop-hdfs in the patch failed. |
   | +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
   | | | 8746 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
   |   | hadoop.hdfs.TestFileCreation |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-497/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/497 |
   | JIRA Issue | HDFS-14295 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux ca58174fc52b 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a106d2d |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-497/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-497/1/testReport/ |
   | Max. process+thread count | 3154 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-497/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16140) Add emptyTrash option to purge trash immediately

2019-02-26 Thread Stephen O'Donnell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HADOOP-16140:
---
Attachment: HADOOP-14200.002.patch

> Add emptyTrash option to purge trash immediately
> 
>
> Key: HADOOP-16140
> URL: https://issues.apache.org/jira/browse/HADOOP-16140
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HADOOP-14200.002.patch, HDFS-14200.001.patch
>
>
> I have always felt the HDFS trash is missing a simple way to empty the 
> current user's trash immediately. We have "expunge", but in my experience 
> supporting clusters, end users find this confusing. When most end users run 
> expunge, they really want to empty their trash immediately and get confused 
> when expunge does not do this.
> This can result in users performing somewhat dangerous "skipTrash" operations 
> on the trash to free up space. The alternative, which most users will not 
> figure out on their own, is:
> # Run the expunge command once - this will move the current folder to a 
> checkpoint and remove any checkpoints older than the retention interval
> # Wait over 1 minute and then run expunge again, overriding fs.trash.interval 
> to 1 minute using the following command: hadoop fs -Dfs.trash.interval=1 
> -expunge.
> With this JIRA I am proposing to add an extra command, "hdfs dfs -emptyTrash", 
> that purges everything in the logged-in user's Trash directories immediately.
> How would the community feel about adding this new option? I will upload a 
> patch for comments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16150) checksumFS doesn't wrap concat(): concatenated files don't have checksums

2019-02-26 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778569#comment-16778569
 ] 

Eric Yang commented on HADOOP-16150:


It would be OK for ChecksumFileSystem to fail on concat.  Other implementations 
that extend ChecksumFileSystem must implement their own concat and append 
logic.  At minimum, check that the writeChecksum flag is false and otherwise 
throw UnsupportedOperationException, to stay backward compatible.  This ensures 
that the public API doesn't allow silent corruption unless the writeChecksum 
flag is explicitly unset, which also helps to track down callers that may be 
affected by this change.
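
A minimal sketch of that guard, under the stated assumptions 
({{GuardedChecksumFileSystem}} is a hypothetical subclass used only for 
illustration; the {{writeChecksum}} field mirrors {{setWriteChecksum()}}, 
which exposes no public getter):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.ChecksumFileSystem;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch of the suggested guard, not the committed fix.
class GuardedChecksumFileSystem extends ChecksumFileSystem {
  private boolean writeChecksum = true;

  GuardedChecksumFileSystem(FileSystem fs) {
    super(fs);
  }

  @Override
  public void setWriteChecksum(boolean writeChecksum) {
    this.writeChecksum = writeChecksum;
    super.setWriteChecksum(writeChecksum);
  }

  @Override
  public void concat(Path trg, Path[] srcs) throws IOException {
    if (writeChecksum) {
      // Refuse rather than produce a concatenated file with no checksum.
      throw new UnsupportedOperationException(
          "concat is not supported while checksums are being written: " + trg);
    }
    // Caller explicitly opted out of checksums, so delegate to the raw FS.
    getRawFileSystem().concat(trg, srcs);
  }
}
{code}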

> checksumFS doesn't wrap concat(): concatenated files don't have checksums
> -
>
> Key: HADOOP-16150
> URL: https://issues.apache.org/jira/browse/HADOOP-16150
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Major
>
> Follow-on from HADOOP-16107. FilterFS passes through the concat operation, and 
> checksum FS doesn't override that call - so files created through concat *do 
> not have checksums*.
> If people are using a checksummed FS directly with the expectation that they 
> will, that expectation is not being met. 
> What to do?
> * fail always?
> * fail if checksums are enabled?
> * try and implement the concat operation from raw local up at the checksum 
> level
> append() always gives up; doing the same for concat would be the 
> simplest. Again, this brings us back to "need a way to see if an FS supports a 
> feature before invocation"; here checksum FS would reject append and concat



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-26 Thread Ben Roling (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778565#comment-16778565
 ] 

Ben Roling commented on HADOOP-15625:
-

[~ste...@apache.org] - something I noticed with your updates is that the first 
read() on an S3AInputStream that detects a change will throw 
RemoteFileChangedException, but subsequent read() calls on the same stream will 
not.

This seems unexpected to me, but it appears as though you _may_ have done that 
intentionally?

My thinking is that you should need to explicitly open a new stream 
(FileSystem.open()) to get past the RemoteFileChangedException condition.

This behavior change is only occurring when mode=client and is evidenced by 
failing ITestS3ARemoteFileChanged tests.  It occurs because ChangeTracker moves 
to the new revision 
[here|https://github.com/steveloughran/hadoop/commit/5cf5d79fc9c5a6e256fa231b21731bd3219079bf#diff-c97e625906bcf378a192c522739e67baR167].

I'm inferring that this may have been intentional due to 
TestStreamChangeTracker.testEtagCheckingWarn(), which asserts a second mismatch 
is not counted 
[here|https://github.com/steveloughran/hadoop/commit/5cf5d79fc9c5a6e256fa231b21731bd3219079bf#diff-ad8ffc56d9d28ed3972d9a8d5efa1814R90].
  Maybe I shouldn't read too much into that.  You may have had a desire to only 
log once (when mode=warn), not realizing the change also causes an exception to 
only be thrown once.  I'm inclined to think it would be acceptable to warn 
multiple times though (as many times as you'd see RemoteFileChangedException in 
the client and server modes).  What do you think?
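
For reference, a minimal sketch of the "stay failed" semantics argued for 
above; {{EtagTracker}} and the exception text are illustrative stand-ins, not 
the patch's actual {{ChangeTracker}} / {{RemoteFileChangedException}} classes:

{code:java}
import java.io.IOException;

// Illustrative sketch: once a change is detected, the tracker stays failed,
// so every later read on the same stream throws rather than silently
// adopting the new object revision.
class EtagTracker {
  private final String expectedEtag; // etag captured on the first HEAD/GET
  private boolean failed;

  EtagTracker(String expectedEtag) {
    this.expectedEtag = expectedEtag;
  }

  // Called with the etag of every GET response on the stream.
  void check(String responseEtag) throws IOException {
    if (failed || !expectedEtag.equals(responseEtag)) {
      failed = true; // never move to the new revision on this stream
      throw new IOException("Remote file changed: expected etag "
          + expectedEtag + " but server returned " + responseEtag);
    }
  }
}
{code}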

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP--15625-006.patch, HADOOP-15625-001.patch, 
> HADOOP-15625-002.patch, HADOOP-15625-003.patch, HADOOP-15625-004.patch, 
> HADOOP-15625-005.patch, HADOOP-15625-006.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice it has 
> changed, caches the length from startup, and whenever a seek triggers a new 
> GET, you may get old data, new data, or even perhaps go from new 
> data to old data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verify the etag of the response
> # raise an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260474330
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMake/FindPackageExtension.cmake
 ##
 @@ -0,0 +1,136 @@
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Input:
+#   h_file: the name of a header file to find
+#   lib_names: a list of library names to find
+#   allow_any: Allow either static or shared libraries
+
+# Environment:
+#   CMAKE_FIND_PACKAGE_NAME: the name of the package to build
+#   _HOME: variable is used to check for headers and library
+#   BUILD_SHARED_LIBRARIES: whether to find shared instead of static libraries
+
+# Outputs:
+#   _INCLUDE_DIR: directory containing headers
+#   _LIBRARIES: libraries to link with
+#   _FOUND: whether uriparser has been found
+
+function (findPackageExtension h_file lib_names allow_any)
+  set (_name ${CMAKE_FIND_PACKAGE_NAME})
+  string (TOUPPER ${_name} _upper_name)
+
+  # protect against running it a second time
+  if (NOT DEFINED ${_upper_name}_FOUND)
+
+# find the name of the home variable and get it from the environment
+set (_home_name "${_upper_name}_HOME")
+if (DEFINED ENV{${_home_name}})
+  set(_home "$ENV{${_home_name}}")
+elseif (DEFINED ${_home_name})
+  set(_home ${${_home_name}})
+endif ()
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260474342
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMake/FindPackageExtension.cmake
 ##
 @@ -0,0 +1,136 @@
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Input:
+#   h_file: the name of a header file to find
+#   lib_names: a list of library names to find
+#   allow_any: Allow either static or shared libraries
+
+# Environment:
+#   CMAKE_FIND_PACKAGE_NAME: the name of the package to build
+#   _HOME: variable is used to check for headers and library
+#   BUILD_SHARED_LIBRARIES: whether to find shared instead of static libraries
+
+# Outputs:
+#   _INCLUDE_DIR: directory containing headers
+#   _LIBRARIES: libraries to link with
+#   _FOUND: whether uriparser has been found
+
+function (findPackageExtension h_file lib_names allow_any)
+  set (_name ${CMAKE_FIND_PACKAGE_NAME})
+  string (TOUPPER ${_name} _upper_name)
+
+  # protect against running it a second time
+  if (NOT DEFINED ${_upper_name}_FOUND)
+
+# find the name of the home variable and get it from the environment
+set (_home_name "${_upper_name}_HOME")
+if (DEFINED ENV{${_home_name}})
+  set(_home "$ENV{${_home_name}}")
+elseif (DEFINED ${_home_name})
+  set(_home ${${_home_name}})
+endif ()
+  
+# If _HOME is set, use that alone as the path, otherwise use
+# PACKAGE_SEARCH_PATH ahead of the default_path.
+if(DEFINED _home AND NOT ${_home} STREQUAL "")
+  set(_no_default TRUE)
+else()
+  set(_no_default FALSE)
+endif()
+
+set (_include_dir "${h_file}-NOTFOUND")
+if (_no_default)
+  find_path (_include_dir ${h_file}
+ PATHS ${_home} NO_DEFAULT_PATH
+ PATH_SUFFIXES "include")
+else ()
+  find_path (_include_dir ${h_file}
+ PATH_SUFFIXES "include")
+endif (_no_default)
+
+set(_libraries)
+foreach (lib ${lib_names})
+  expandLibName(${lib} ${allow_any} _full)
+  set (_match "${_full}-NOTFOUND")
+  if (_no_default)
+find_library (_match NAMES ${_full}
+  PATHS ${_home}
+  NO_DEFAULT_PATH
+  PATH_SUFFIXES "lib" "lib64")
+  else ()
+find_library (_match NAMES ${_full}
+  HINTS ${_include_dir}/..
+  PATH_SUFFIXES "lib" "lib64")
+  endif (_no_default)
+  if (_match)
+list (APPEND _libraries ${_match})
+  endif ()
+  unset(_full)
+endforeach ()
+
+list (LENGTH _libraries _libraries_len)
+list (LENGTH lib_names _name_len)
+
+if (_include_dir AND _libraries_len EQUAL _name_len)
+  message (STATUS "Found the ${_name} header: ${_include_dir}")
+  if (NOT _libraries_len EQUAL 0)
+message (STATUS "Found the ${_name} libraries: ${_libraries}")
+  endif ()
+  set(${_upper_name}_FOUND TRUE PARENT_SCOPE)
+  set(${_upper_name}_INCLUDE_DIR ${_include_dir} PARENT_SCOPE)
+  set(${_upper_name}_LIBRARIES "${_libraries}" PARENT_SCOPE)
+
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260474350
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/CMakeLists.txt
 ##
 @@ -16,10 +16,127 @@
 # limitations under the License.
 #
 
+cmake_minimum_required(VERSION 2.8.12)
+if (POLICY CMP0042)
+  cmake_policy(SET CMP0042 NEW) # suppress warning about mac rpath
+endif ()
+
+project(libhdfspp)
+
+enable_testing()
+include (CTest)
+
+find_package(ASIO REQUIRED)
+find_package(Doxygen)
+find_package(OpenSSL REQUIRED)
+find_package(Protobuf REQUIRED)
+find_package(RapidXML REQUIRED)
+find_package(Threads REQUIRED)
+find_package(URIparser REQUIRED)
+
+include(DecideSasl)
+include(CheckCXXSourceCompiles)
+
+include(HdfsppCompilerOptions)
+
+# Check if thread_local is supported
+unset (THREAD_LOCAL_SUPPORTED CACHE)
+set (CMAKE_REQUIRED_LIBRARIES ${CMAKE_THREAD_LIBS_INIT})
+check_cxx_source_compiles(
+"#include 
+int main(void) {
+  thread_local int s;
+  return 0;
+}"
+THREAD_LOCAL_SUPPORTED)
+if (NOT THREAD_LOCAL_SUPPORTED)
+  message(FATAL_ERROR "FATAL ERROR: The required feature thread_local storage 
is not supported by your compiler. Known compilers that support this feature: 
GCC 4.8+, Visual Studio 2015+, Clang (community version 3.3+), Clang (version 
for Xcode 8+ and iOS 9+).")
+endif (NOT THREAD_LOCAL_SUPPORTED)
+
+# Check if PROTOC library was compiled with the compatible compiler by trying
+# to compile some dummy code
+unset (PROTOC_IS_COMPATIBLE CACHE)
+set (CMAKE_REQUIRED_LIBRARIES protobuf protoc)
+check_cxx_source_compiles(
+"#include 
+#include 
+int main(void) {
+  ::google::protobuf::io::ZeroCopyOutputStream *out = NULL;
+  ::google::protobuf::io::Printer printer(out, '$');
+  printer.PrintRaw(std::string(\"test\"));
+  return 0;
+}"
+PROTOC_IS_COMPATIBLE)
+if (NOT PROTOC_IS_COMPATIBLE)
+  message(WARNING "WARNING: the Protocol Buffers Library and the hdfs++ 
Library must both be compiled with the same (or compatible) compiler. Normally 
only the same major versions of the same compiler are compatible with each 
other.")
+endif (NOT PROTOC_IS_COMPATIBLE)
+
+if(DOXYGEN_FOUND)
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/doc/Doxyfile.in 
${CMAKE_CURRENT_BINARY_DIR}/doc/Doxyfile @ONLY)
+add_custom_target(doc ${DOXYGEN_EXECUTABLE} 
${CMAKE_CURRENT_BINARY_DIR}/doc/Doxyfile
+  WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
+  COMMENT "Generating API documentation with Doxygen" VERBATIM)
+endif(DOXYGEN_FOUND)
+
+include_directories(
+  ${CMAKE_CURRENT_SOURCE_DIR}/../include
+  ${CMAKE_CURRENT_SOURCE_DIR}
+  ${CMAKE_CURRENT_BINARY_DIR}/proto
+  )
+
+# Put the protobuf stuff first, since the version has to match between
+# the library, generated code, and the include files.
+include_directories(BEFORE ${PROTOBUF_INCLUDE_DIR})
+
+include_directories(SYSTEM
+  ${ASIO_INCLUDE_DIR}
+  ${RAPIDXML_INCLUDE_DIR}
+  ${OPENSSL_INCLUDE_DIR}
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16058) S3A tests to include Terasort

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778552#comment-16778552
 ] 

Hadoop QA commented on HADOOP-16058:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 19 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 15m 18s{color} 
| {color:red} root generated 1 new + 1491 unchanged - 0 fixed = 1492 total (was 
1491) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 28s{color} | {color:orange} root: The patch generated 10 new + 77 unchanged 
- 0 fixed = 87 total (was 77) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 33 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}126m  9s{color} 
| {color:red} hadoop-mapreduce-client-jobclient in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
8s{color} | {color:green} hadoop-mapreduce-examples in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
15s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  1m  
5s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}229m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.mapred.TestJobCounters |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16058 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960211/HADOOP-16058-002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  

[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260471942
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMake/FindPackageExtension.cmake
 ##
 @@ -0,0 +1,136 @@
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Input:
+#   h_file: the name of a header file to find
+#   lib_names: a list of library names to find
+#   allow_any: Allow either static or shared libraries
+
+# Environment:
+#   CMAKE_FIND_PACKAGE_NAME: the name of the package to build
+#   _HOME: variable is used to check for headers and library
+#   BUILD_SHARED_LIBRARIES: whether to find shared instead of static libraries
+
+# Outputs:
+#   _INCLUDE_DIR: directory containing headers
+#   _LIBRARIES: libraries to link with
+#   _FOUND: whether uriparser has been found
+
+function (findPackageExtension h_file lib_names allow_any)
+  set (_name ${CMAKE_FIND_PACKAGE_NAME})
+  string (TOUPPER ${_name} _upper_name)
+
+  # protect against running it a second time
+  if (NOT DEFINED ${_upper_name}_FOUND)
+
+# find the name of the home variable and get it from the environment
+set (_home_name "${_upper_name}_HOME")
+if (DEFINED ENV{${_home_name}})
+  set(_home "$ENV{${_home_name}}")
+elseif (DEFINED ${_home_name})
+  set(_home ${${_home_name}})
+endif ()
+  
+# If _HOME is set, use that alone as the path, otherwise use
+# PACKAGE_SEARCH_PATH ahead of the default_path.
+if(DEFINED _home AND NOT ${_home} STREQUAL "")
+  set(_no_default TRUE)
+else()
+  set(_no_default FALSE)
+endif()
+
+set (_include_dir "${h_file}-NOTFOUND")
+if (_no_default)
+  find_path (_include_dir ${h_file}
+ PATHS ${_home} NO_DEFAULT_PATH
+ PATH_SUFFIXES "include")
+else ()
+  find_path (_include_dir ${h_file}
+ PATH_SUFFIXES "include")
+endif (_no_default)
+
+set(_libraries)
+foreach (lib ${lib_names})
+  expandLibName(${lib} ${allow_any} _full)
+  set (_match "${_full}-NOTFOUND")
+  if (_no_default)
+find_library (_match NAMES ${_full}
+  PATHS ${_home}
+  NO_DEFAULT_PATH
+  PATH_SUFFIXES "lib" "lib64")
+  else ()
+find_library (_match NAMES ${_full}
+  HINTS ${_include_dir}/..
+  PATH_SUFFIXES "lib" "lib64")
+  endif (_no_default)
+  if (_match)
+list (APPEND _libraries ${_match})
+  endif ()
+  unset(_full)
+endforeach ()
+
+list (LENGTH _libraries _libraries_len)
+list (LENGTH lib_names _name_len)
+
+if (_include_dir AND _libraries_len EQUAL _name_len)
+  message (STATUS "Found the ${_name} header: ${_include_dir}")
+  if (NOT _libraries_len EQUAL 0)
+message (STATUS "Found the ${_name} libraries: ${_libraries}")
+  endif ()
+  set(${_upper_name}_FOUND TRUE PARENT_SCOPE)
+  set(${_upper_name}_INCLUDE_DIR ${_include_dir} PARENT_SCOPE)
+  set(${_upper_name}_LIBRARIES "${_libraries}" PARENT_SCOPE)
+
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260471933
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMake/FindPackageExtension.cmake
 ##
 @@ -0,0 +1,136 @@
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Input:
+#   h_file: the name of a header file to find
+#   lib_names: a list of library names to find
+#   allow_any: Allow either static or shared libraries
+
+# Environment:
+#   CMAKE_FIND_PACKAGE_NAME: the name of the package to build
+#   _HOME: variable is used to check for headers and library
+#   BUILD_SHARED_LIBRARIES: whether to find shared instead of static libraries
+
+# Outputs:
+#   _INCLUDE_DIR: directory containing headers
+#   _LIBRARIES: libraries to link with
+#   _FOUND: whether uriparser has been found
+
+function (findPackageExtension h_file lib_names allow_any)
+  set (_name ${CMAKE_FIND_PACKAGE_NAME})
+  string (TOUPPER ${_name} _upper_name)
+
+  # protect against running it a second time
+  if (NOT DEFINED ${_upper_name}_FOUND)
+
+# find the name of the home variable and get it from the environment
+set (_home_name "${_upper_name}_HOME")
+if (DEFINED ENV{${_home_name}})
+  set(_home "$ENV{${_home_name}}")
+elseif (DEFINED ${_home_name})
+  set(_home ${${_home_name}})
+endif ()
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260471950
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/CMakeLists.txt
 ##
 @@ -16,10 +16,127 @@
 # limitations under the License.
 #
 
+cmake_minimum_required(VERSION 2.8.12)
+if (POLICY CMP0042)
+  cmake_policy(SET CMP0042 NEW) # suppress warning about mac rpath
+endif ()
+
+project(libhdfspp)
+
+enable_testing()
+include (CTest)
+
+find_package(ASIO REQUIRED)
+find_package(Doxygen)
+find_package(OpenSSL REQUIRED)
+find_package(Protobuf REQUIRED)
+find_package(RapidXML REQUIRED)
+find_package(Threads REQUIRED)
+find_package(URIparser REQUIRED)
+
+include(DecideSasl)
+include(CheckCXXSourceCompiles)
+
+include(HdfsppCompilerOptions)
+
+# Check if thread_local is supported
+unset (THREAD_LOCAL_SUPPORTED CACHE)
+set (CMAKE_REQUIRED_LIBRARIES ${CMAKE_THREAD_LIBS_INIT})
+check_cxx_source_compiles(
+"#include 
+int main(void) {
+  thread_local int s;
+  return 0;
+}"
+THREAD_LOCAL_SUPPORTED)
+if (NOT THREAD_LOCAL_SUPPORTED)
+  message(FATAL_ERROR "FATAL ERROR: The required feature thread_local storage 
is not supported by your compiler. Known compilers that support this feature: 
GCC 4.8+, Visual Studio 2015+, Clang (community version 3.3+), Clang (version 
for Xcode 8+ and iOS 9+).")
+endif (NOT THREAD_LOCAL_SUPPORTED)
+
+# Check if PROTOC library was compiled with the compatible compiler by trying
+# to compile some dummy code
+unset (PROTOC_IS_COMPATIBLE CACHE)
+set (CMAKE_REQUIRED_LIBRARIES protobuf protoc)
+check_cxx_source_compiles(
+"#include 
+#include 
+int main(void) {
+  ::google::protobuf::io::ZeroCopyOutputStream *out = NULL;
+  ::google::protobuf::io::Printer printer(out, '$');
+  printer.PrintRaw(std::string(\"test\"));
+  return 0;
+}"
+PROTOC_IS_COMPATIBLE)
+if (NOT PROTOC_IS_COMPATIBLE)
+  message(WARNING "WARNING: the Protocol Buffers Library and the hdfs++ 
Library must both be compiled with the same (or compatible) compiler. Normally 
only the same major versions of the same compiler are compatible with each 
other.")
+endif (NOT PROTOC_IS_COMPATIBLE)
+
+if(DOXYGEN_FOUND)
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/doc/Doxyfile.in 
${CMAKE_CURRENT_BINARY_DIR}/doc/Doxyfile @ONLY)
+add_custom_target(doc ${DOXYGEN_EXECUTABLE} 
${CMAKE_CURRENT_BINARY_DIR}/doc/Doxyfile
+  WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
+  COMMENT "Generating API documentation with Doxygen" VERBATIM)
+endif(DOXYGEN_FOUND)
+
+include_directories(
+  ${CMAKE_CURRENT_SOURCE_DIR}/../include
+  ${CMAKE_CURRENT_SOURCE_DIR}
+  ${CMAKE_CURRENT_BINARY_DIR}/proto
+  )
+
+# Put the protobuf stuff first, since the version has to match between
+# the library, generated code, and the include files.
+include_directories(BEFORE ${PROTOBUF_INCLUDE_DIR})
+
+include_directories(SYSTEM
+  ${ASIO_INCLUDE_DIR}
+  ${RAPIDXML_INCLUDE_DIR}
+  ${OPENSSL_INCLUDE_DIR}
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260470481
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/CMakeLists.txt
 ##
 @@ -16,10 +16,127 @@
 # limitations under the License.
 #
 
+cmake_minimum_required(VERSION 2.8.12)
+if (POLICY CMP0042)
+  cmake_policy(SET CMP0042 NEW) # suppress warning about mac rpath
+endif ()
+
+project(libhdfspp)
+
+enable_testing()
+include (CTest)
+
+find_package(ASIO REQUIRED)
+find_package(Doxygen)
+find_package(OpenSSL REQUIRED)
+find_package(Protobuf REQUIRED)
+find_package(RapidXML REQUIRED)
+find_package(Threads REQUIRED)
+find_package(URIparser REQUIRED)
+
+include(DecideSasl)
+include(CheckCXXSourceCompiles)
+
+include(HdfsppCompilerOptions)
+
+# Check if thread_local is supported
+unset (THREAD_LOCAL_SUPPORTED CACHE)
+set (CMAKE_REQUIRED_LIBRARIES ${CMAKE_THREAD_LIBS_INIT})
+check_cxx_source_compiles(
+"#include 
+int main(void) {
+  thread_local int s;
+  return 0;
+}"
+THREAD_LOCAL_SUPPORTED)
+if (NOT THREAD_LOCAL_SUPPORTED)
+  message(FATAL_ERROR "FATAL ERROR: The required feature thread_local storage 
is not supported by your compiler. Known compilers that support this feature: 
GCC 4.8+, Visual Studio 2015+, Clang (community version 3.3+), Clang (version 
for Xcode 8+ and iOS 9+).")
+endif (NOT THREAD_LOCAL_SUPPORTED)
+
+# Check if PROTOC library was compiled with the compatible compiler by trying
+# to compile some dummy code
+unset (PROTOC_IS_COMPATIBLE CACHE)
+set (CMAKE_REQUIRED_LIBRARIES protobuf protoc)
+check_cxx_source_compiles(
+"#include 
+#include 
+int main(void) {
+  ::google::protobuf::io::ZeroCopyOutputStream *out = NULL;
+  ::google::protobuf::io::Printer printer(out, '$');
+  printer.PrintRaw(std::string(\"test\"));
+  return 0;
+}"
+PROTOC_IS_COMPATIBLE)
+if (NOT PROTOC_IS_COMPATIBLE)
+  message(WARNING "WARNING: the Protocol Buffers Library and the hdfs++ 
Library must both be compiled with the same (or compatible) compiler. Normally 
only the same major versions of the same compiler are compatible with each 
other.")
+endif (NOT PROTOC_IS_COMPATIBLE)
+
+if(DOXYGEN_FOUND)
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/doc/Doxyfile.in 
${CMAKE_CURRENT_BINARY_DIR}/doc/Doxyfile @ONLY)
+add_custom_target(doc ${DOXYGEN_EXECUTABLE} 
${CMAKE_CURRENT_BINARY_DIR}/doc/Doxyfile
+  WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
+  COMMENT "Generating API documentation with Doxygen" VERBATIM)
+endif(DOXYGEN_FOUND)
+
+include_directories(
+  ${CMAKE_CURRENT_SOURCE_DIR}/../include
+  ${CMAKE_CURRENT_SOURCE_DIR}
+  ${CMAKE_CURRENT_BINARY_DIR}/proto
+  )
+
+# Put the protobuf stuff first, since the version has to match between
+# the library, generated code, and the include files.
+include_directories(BEFORE ${PROTOBUF_INCLUDE_DIR})
+
+include_directories(SYSTEM
+  ${ASIO_INCLUDE_DIR}
+  ${RAPIDXML_INCLUDE_DIR}
+  ${OPENSSL_INCLUDE_DIR}
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260470459
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMake/FindPackageExtension.cmake
 ##
 @@ -0,0 +1,136 @@
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Input:
+#   h_file: the name of a header file to find
+#   lib_names: a list of library names to find
+#   allow_any: Allow either static or shared libraries
+
+# Environment:
+#   CMAKE_FIND_PACKAGE_NAME: the name of the package to build
+#   _HOME: variable is used to check for headers and library
+#   BUILD_SHARED_LIBRARIES: whether to find shared instead of static libraries
+
+# Outputs:
+#   _INCLUDE_DIR: directory containing headers
+#   _LIBRARIES: libraries to link with
+#   _FOUND: whether uriparser has been found
+
+function (findPackageExtension h_file lib_names allow_any)
+  set (_name ${CMAKE_FIND_PACKAGE_NAME})
+  string (TOUPPER ${_name} _upper_name)
+
+  # protect against running it a second time
+  if (NOT DEFINED ${_upper_name}_FOUND)
+
+# find the name of the home variable and get it from the environment
+set (_home_name "${_upper_name}_HOME")
+if (DEFINED ENV{${_home_name}})
+  set(_home "$ENV{${_home_name}}")
+elseif (DEFINED ${_home_name})
+  set(_home ${${_home_name}})
+endif ()
+  
+# If _HOME is set, use that alone as the path, otherwise use
+# PACKAGE_SEARCH_PATH ahead of the default_path.
+if(DEFINED _home AND NOT ${_home} STREQUAL "")
+  set(_no_default TRUE)
+else()
+  set(_no_default FALSE)
+endif()
+
+set (_include_dir "${h_file}-NOTFOUND")
+if (_no_default)
+  find_path (_include_dir ${h_file}
+ PATHS ${_home} NO_DEFAULT_PATH
+ PATH_SUFFIXES "include")
+else ()
+  find_path (_include_dir ${h_file}
+ PATH_SUFFIXES "include")
+endif (_no_default)
+
+set(_libraries)
+foreach (lib ${lib_names})
+  expandLibName(${lib} ${allow_any} _full)
+  set (_match "${_full}-NOTFOUND")
+  if (_no_default)
+find_library (_match NAMES ${_full}
+  PATHS ${_home}
+  NO_DEFAULT_PATH
+  PATH_SUFFIXES "lib" "lib64")
+  else ()
+find_library (_match NAMES ${_full}
+  HINTS ${_include_dir}/..
+  PATH_SUFFIXES "lib" "lib64")
+  endif (_no_default)
+  if (_match)
+list (APPEND _libraries ${_match})
+  endif ()
+  unset(_full)
+endforeach ()
+
+list (LENGTH _libraries _libraries_len)
+list (LENGTH lib_names _name_len)
+
+if (_include_dir AND _libraries_len EQUAL _name_len)
+  message (STATUS "Found the ${_name} header: ${_include_dir}")
+  if (NOT _libraries_len EQUAL 0)
+message (STATUS "Found the ${_name} libraries: ${_libraries}")
+  endif ()
+  set(${_upper_name}_FOUND TRUE PARENT_SCOPE)
+  set(${_upper_name}_INCLUDE_DIR ${_include_dir} PARENT_SCOPE)
+  set(${_upper_name}_LIBRARIES "${_libraries}" PARENT_SCOPE)
+
 
 Review comment:
   whitespace:end of line
   





[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260470442
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMake/FindPackageExtension.cmake
 ##
 @@ -0,0 +1,136 @@
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Input:
+#   h_file: the name of a header file to find
+#   lib_names: a list of library names to find
+#   allow_any: Allow either static or shared libraries
+
+# Environment:
+#   CMAKE_FIND_PACKAGE_NAME: the name of the package to build
+#   _HOME: variable is used to check for headers and library
+#   BUILD_SHARED_LIBRARIES: whether to find shared instead of static libraries
+
+# Outputs:
+#   _INCLUDE_DIR: directory containing headers
+#   _LIBRARIES: libraries to link with
+#   _FOUND: whether uriparser has been found
+
+function (findPackageExtension h_file lib_names allow_any)
+  set (_name ${CMAKE_FIND_PACKAGE_NAME})
+  string (TOUPPER ${_name} _upper_name)
+
+  # protect against running it a second time
+  if (NOT DEFINED ${_upper_name}_FOUND)
+
+# find the name of the home variable and get it from the environment
+set (_home_name "${_upper_name}_HOME")
+if (DEFINED ENV{${_home_name}})
+  set(_home "$ENV{${_home_name}}")
+elseif (DEFINED ${_home_name})
+  set(_home ${${_home_name}})
+endif ()
 
 Review comment:
   whitespace:end of line
   





[jira] [Commented] (HADOOP-16107) FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-26 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778545#comment-16778545
 ] 

Eric Yang commented on HADOOP-16107:


Append seems to be an unsupported operation for ChecksumFileSystem, and the 
same applies to concat.  It is not clear whether anyone actually uses those 
APIs.  ChecksumFileSystem has had an option to skip checksums since 
HADOOP-8042, so it can be used without any CRC checking actually taking 
place, which can defeat HDFS's purpose of being a checksummed file system.  
Let's hope the use cases that skip CRCs know what they are doing and aren't 
corrupting data.  I can commit the patch once the whitespace is fixed.
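
For context, this is what "skipping checksums" looks like from the API side: 
a minimal sketch using the HADOOP-8042 switches (the class name here is 
invented for illustration).

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocalFileSystem;

public class ChecksumSwitchDemo {
  public static void main(String[] args) throws Exception {
    // LocalFileSystem is the stock ChecksumFileSystem subclass.
    LocalFileSystem fs = FileSystem.getLocal(new Configuration());
    fs.setVerifyChecksum(false); // reads stop validating against .crc files
    fs.setWriteChecksum(false);  // writes stop producing .crc sidecar files
  }
}
{code}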

> FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC 
> logic
> ---
>
> Key: HADOOP-16107
> URL: https://issues.apache.org/jira/browse/HADOOP-16107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.3, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-16107-001.patch, HADOOP-16107-003.patch
>
>
> LocalFS is a subclass of filterFS, but overrides create and open so that 
> checksums are created and read. 
> MAPREDUCE-7184 has thrown up that the new builder openFile() call is being 
> forwarded to the innerFS without CRC checking. Reviewing/fixing that has 
> shown that some of the create methods aren't being correctly wrapped, so not 
> generating CRCs
> * createFile() builder
> The following create calls:
> {code}
>   public FSDataOutputStream createNonRecursive(final Path f,
>   final FsPermission permission,
>   final EnumSet<CreateFlag> flags,
>   final int bufferSize,
>   final short replication,
>   final long blockSize,
>   final Progressable progress) throws IOException;
>   public FSDataOutputStream create(final Path f,
>   final FsPermission permission,
>   final EnumSet<CreateFlag> flags,
>   final int bufferSize,
>   final short replication,
>   final long blockSize,
>   final Progressable progress,
>   final Options.ChecksumOpt checksumOpt) throws IOException {
> return super.create(f, permission, flags, bufferSize, replication,
> blockSize, progress, checksumOpt);
>   }
> {code}
> This means that applications using these methods, directly or indirectly, 
> to create files aren't actually generating checksums.
> Fix: implement these methods & relay to local create calls, not to the inner 
> FS.
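
A quick way to see the symptom on a local filesystem is to create a file 
through one of the unwrapped entry points and check whether the .crc sidecar 
appears. This probe is illustrative only (class name invented, assumes a 
writable /tmp); it is not part of the HADOOP-16107 patch.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocalFileSystem;
import org.apache.hadoop.fs.Path;

public class CrcProbe {
  public static void main(String[] args) throws Exception {
    LocalFileSystem fs = FileSystem.getLocal(new Configuration());
    Path f = new Path("/tmp/crcprobe/file.txt");
    fs.mkdirs(f.getParent());
    // createNonRecursive is one of the calls the JIRA says isn't wrapped.
    try (FSDataOutputStream out = fs.createNonRecursive(
        f, true, 4096, (short) 1, 512L, null)) {
      out.writeUTF("hello");
    }
    // If the call were correctly wrapped, the checksum sidecar would exist.
    Path crc = fs.getChecksumFile(f);
    System.out.println(crc + " exists? " + fs.exists(crc));
  }
}
{code}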






[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-26 Thread Ben Roling (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778532#comment-16778532
 ] 

Ben Roling commented on HADOOP-15625:
-

bq. which means that you may open an object and it has a null version -but the 
bucket itself is still versioned. We may not want to give up completely in this 
world

Yeah, that should be noted in the documentation for the "require version" 
configuration.  You shouldn't enable it unless you can be confident that all 
your buckets have had versioning enabled from the beginning.

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP--15625-006.patch, HADOOP-15625-001.patch, 
> HADOOP-15625-002.patch, HADOOP-15625-003.patch, HADOOP-15625-004.patch, 
> HADOOP-15625-005.patch, HADOOP-15625-006.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice the file has 
> changed, caches the length from startup, and whenever a seek triggers a new 
> GET, you may get one of: old data, new data, and even perhaps go from new 
> data to old data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verify the etag of the response
> # raise an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.
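
The proposed check can be sketched in a few lines; this is an illustration of 
the idea only (class and method names invented), not the patch itself.

{code:java}
import java.io.IOException;

public class EtagChangeDetector {
  private String knownEtag;

  /** Call with the etag of every HEAD/GET response for the open object. */
  public void onResponse(String etag, String key) throws IOException {
    if (knownEtag == null) {
      knownEtag = etag;                    // cache etag of the first HEAD/GET
    } else if (!knownEtag.equals(etag)) {  // verify etag of later GETs
      throw new IOException("Remote file " + key
          + " changed during read: etag " + knownEtag + " -> " + etag);
    }
  }
}
{code}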






[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260460459
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMake/FindPackageExtension.cmake
 ##
 @@ -0,0 +1,136 @@
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Input:
+#   h_file: the name of a header file to find
+#   lib_names: a list of library names to find
+#   allow_any: Allow either static or shared libraries
+
+# Environment:
+#   CMAKE_FIND_PACKAGE_NAME: the name of the package to build
+#   _HOME: variable is used to check for headers and library
+#   BUILD_SHARED_LIBRARIES: whether to find shared instead of static libraries
+
+# Outputs:
+#   _INCLUDE_DIR: directory containing headers
+#   _LIBRARIES: libraries to link with
+#   _FOUND: whether uriparser has been found
+
+function (findPackageExtension h_file lib_names allow_any)
+  set (_name ${CMAKE_FIND_PACKAGE_NAME})
+  string (TOUPPER ${_name} _upper_name)
+
+  # protect against running it a second time
+  if (NOT DEFINED ${_upper_name}_FOUND)
+
+# find the name of the home variable and get it from the environment
+set (_home_name "${_upper_name}_HOME")
+if (DEFINED ENV{${_home_name}})
+  set(_home "$ENV{${_home_name}}")
+elseif (DEFINED ${_home_name})
+  set(_home ${${_home_name}})
+endif ()
 
 Review comment:
   whitespace:end of line
   





[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260459384
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMake/FindPackageExtension.cmake
 ##
 @@ -0,0 +1,135 @@
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Input:
+#   h_file: the name of a header file to find
+#   lib_names: a list of library names to find
+#   allow_any: Allow either static or shared libraries
+
+# Environment:
+#   CMAKE_FIND_PACKAGE_NAME: the name of the package to build
+#   _HOME: variable is used to check for headers and library
+#   BUILD_SHARED_LIBRARIES: whether to find shared instead of static libraries
+
+# Outputs:
+#   _INCLUDE_DIR: directory containing headers
+#   _LIBRARIES: libraries to link with
+#   _FOUND: whether uriparser has been found
+
+function (findPackageExtension h_file lib_names allow_any)
+  set (_name ${CMAKE_FIND_PACKAGE_NAME})
+  string (TOUPPER ${_name} _upper_name)
+
+  # protect against running it a second time
+  if (NOT DEFINED ${_upper_name}_FOUND)
+
+# find the name of the home variable and get it from the environment
+set (_home_name "${_upper_name}_HOME")
+if (DEFINED ENV{${_home_name}})
+  set(_home "$ENV{${_home_name}}")
+elseif (DEFINED ${_home_name})
+  set(_home ${${_home_name}})
+endif ()
 
 Review comment:
   whitespace:end of line
   





[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260459410
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/CMakeLists.txt
 ##
 @@ -16,10 +16,127 @@
 # limitations under the License.
 #
 
+cmake_minimum_required(VERSION 2.8.12)
+if (POLICY CMP0042)
+  cmake_policy(SET CMP0042 NEW) # suppress warning about mac rpath
+endif ()
+
+project(libhdfspp)
+
+enable_testing()
+include (CTest)
+
+find_package(ASIO REQUIRED)
+find_package(Doxygen)
+find_package(OpenSSL REQUIRED)
+find_package(Protobuf REQUIRED)
+find_package(RapidXML REQUIRED)
+find_package(Threads REQUIRED)
+find_package(URIparser REQUIRED)
+
+include(DecideSasl)
+include(CheckCXXSourceCompiles)
+
+include(HdfsppCompilerOptions)
+
+# Check if thread_local is supported
+unset (THREAD_LOCAL_SUPPORTED CACHE)
+set (CMAKE_REQUIRED_LIBRARIES ${CMAKE_THREAD_LIBS_INIT})
+check_cxx_source_compiles(
+"#include 
+int main(void) {
+  thread_local int s;
+  return 0;
+}"
+THREAD_LOCAL_SUPPORTED)
+if (NOT THREAD_LOCAL_SUPPORTED)
+  message(FATAL_ERROR "FATAL ERROR: The required feature thread_local storage 
is not supported by your compiler. Known compilers that support this feature: 
GCC 4.8+, Visual Studio 2015+, Clang (community version 3.3+), Clang (version 
for Xcode 8+ and iOS 9+).")
+endif (NOT THREAD_LOCAL_SUPPORTED)
+
+# Check if PROTOC library was compiled with the compatible compiler by trying
+# to compile some dummy code
+unset (PROTOC_IS_COMPATIBLE CACHE)
+set (CMAKE_REQUIRED_LIBRARIES protobuf protoc)
+check_cxx_source_compiles(
+"#include 
+#include 
+int main(void) {
+  ::google::protobuf::io::ZeroCopyOutputStream *out = NULL;
+  ::google::protobuf::io::Printer printer(out, '$');
+  printer.PrintRaw(std::string(\"test\"));
+  return 0;
+}"
+PROTOC_IS_COMPATIBLE)
+if (NOT PROTOC_IS_COMPATIBLE)
+  message(WARNING "WARNING: the Protocol Buffers Library and the hdfs++ 
Library must both be compiled with the same (or compatible) compiler. Normally 
only the same major versions of the same compiler are compatible with each 
other.")
+endif (NOT PROTOC_IS_COMPATIBLE)
+
+if(DOXYGEN_FOUND)
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/doc/Doxyfile.in 
${CMAKE_CURRENT_BINARY_DIR}/doc/Doxyfile @ONLY)
+add_custom_target(doc ${DOXYGEN_EXECUTABLE} 
${CMAKE_CURRENT_BINARY_DIR}/doc/Doxyfile
+  WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
+  COMMENT "Generating API documentation with Doxygen" VERBATIM)
+endif(DOXYGEN_FOUND)
+
+include_directories(
+  ${CMAKE_CURRENT_SOURCE_DIR}/../include
+  ${CMAKE_CURRENT_SOURCE_DIR}
+  ${CMAKE_CURRENT_BINARY_DIR}/proto
+  )
+
+# Put the protobuf stuff first, since the version has to match between
+# the library, generated code, and the include files.
+include_directories(BEFORE ${PROTOBUF_INCLUDE_DIR})
+
+include_directories(SYSTEM
+  ${ASIO_INCLUDE_DIR}
+  ${RAPIDXML_INCLUDE_DIR}
+  ${OPENSSL_INCLUDE_DIR}
 
 Review comment:
   whitespace:end of line
   





[GitHub] hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-02-26 Thread GitBox
hadoop-yetus commented on a change in pull request #485: HDFS-14244. Refactor 
the libhdfspp cmake build files.
URL: https://github.com/apache/hadoop/pull/485#discussion_r260459395
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMake/FindPackageExtension.cmake
 ##
 @@ -0,0 +1,135 @@
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Input:
+#   h_file: the name of a header file to find
+#   lib_names: a list of library names to find
+#   allow_any: Allow either static or shared libraries
+
+# Environment:
+#   CMAKE_FIND_PACKAGE_NAME: the name of the package to build
+#   _HOME: variable is used to check for headers and library
+#   BUILD_SHARED_LIBRARIES: whether to find shared instead of static libraries
+
+# Outputs:
+#   _INCLUDE_DIR: directory containing headers
+#   _LIBRARIES: libraries to link with
+#   _FOUND: whether uriparser has been found
+
+function (findPackageExtension h_file lib_names allow_any)
+  set (_name ${CMAKE_FIND_PACKAGE_NAME})
+  string (TOUPPER ${_name} _upper_name)
+
+  # protect against running it a second time
+  if (NOT DEFINED ${_upper_name}_FOUND)
+
+# find the name of the home variable and get it from the environment
+set (_home_name "${_upper_name}_HOME")
+if (DEFINED ENV{${_home_name}})
+  set(_home "$ENV{${_home_name}}")
+elseif (DEFINED ${_home_name})
+  set(_home ${${_home_name}})
+endif ()
+  
+# If _HOME is set, use that alone as the path, otherwise use
+# PACKAGE_SEARCH_PATH ahead of the default_path.
+if(DEFINED _home AND NOT ${_home} STREQUAL "")
+  set(_no_default TRUE)
+else()
+  set(_no_default FALSE)
+endif()
+
+set (_include_dir "${h_file}-NOTFOUND")
+if (_no_default)
+  find_path (_include_dir ${h_file}
+ PATHS ${_home} NO_DEFAULT_PATH
+ PATH_SUFFIXES "include")
+else ()
+  find_path (_include_dir ${h_file}
+ PATH_SUFFIXES "include")
+endif (_no_default)
+
+set(_libraries)
+foreach (lib ${lib_names})
+  expandLibName(${lib} ${allow_any} _full)
+  set (_match "${_full}-NOTFOUND")
+  if (_no_default)
+find_library (_match NAMES ${_full}
+  PATHS ${_home}
+  NO_DEFAULT_PATH
+  PATH_SUFFIXES "lib" "lib64")
+  else ()
+find_library (_match NAMES ${_full}
+  HINTS ${_include_dir}/..
+  PATH_SUFFIXES "lib" "lib64")
+  endif (_no_default)
+  if (_match)
+list (APPEND _libraries ${_match})
+  endif ()
+  unset(_full)
+endforeach ()
+
+list (LENGTH _libraries _libraries_len)
+list (LENGTH lib_names _name_len)
+
+if (_include_dir AND _libraries_len EQUAL _name_len)
+  message (STATUS "Found the ${_name} header: ${_include_dir}")
+  if (NOT _libraries_len EQUAL 0)
+message (STATUS "Found the ${_name} libraries: ${_libraries}")
+  endif ()
+  set(${_upper_name}_FOUND TRUE PARENT_SCOPE)
+  set(${_upper_name}_INCLUDE_DIR ${_include_dir} PARENT_SCOPE)
+  set(${_upper_name}_LIBRARIES "${_libraries}" PARENT_SCOPE)
+
 
 Review comment:
   whitespace:end of line
   





[jira] [Commented] (HADOOP-16107) FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-26 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778479#comment-16778479
 ] 

Steve Loughran commented on HADOOP-16107:
-

Created HADOOP-16150 for the concat problem; leaving it to HDFS-13186 to deal 
with. I know that operation is used for testing; if that's all it is needed 
for, fine. But if someone does go through the checksum FS and expects 
checksums, they currently don't even get a warning that checksums aren't 
involved.

> FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC 
> logic
> ---
>
> Key: HADOOP-16107
> URL: https://issues.apache.org/jira/browse/HADOOP-16107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.3, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-16107-001.patch, HADOOP-16107-003.patch
>
>
> LocalFS is a subclass of filterFS, but overrides create and open so that 
> checksums are created and read. 
> MAPREDUCE-7184 has thrown up that the new builder openFile() call is being 
> forwarded to the innerFS without CRC checking. Reviewing/fixing that has 
> shown that some of the create methods aren't being correctly wrapped, so not 
> generating CRCs
> * createFile() builder
> The following create calls:
> {code}
>   public FSDataOutputStream createNonRecursive(final Path f,
>   final FsPermission permission,
>   final EnumSet<CreateFlag> flags,
>   final int bufferSize,
>   final short replication,
>   final long blockSize,
>   final Progressable progress) throws IOException;
>   public FSDataOutputStream create(final Path f,
>   final FsPermission permission,
>   final EnumSet<CreateFlag> flags,
>   final int bufferSize,
>   final short replication,
>   final long blockSize,
>   final Progressable progress,
>   final Options.ChecksumOpt checksumOpt) throws IOException {
> return super.create(f, permission, flags, bufferSize, replication,
> blockSize, progress, checksumOpt);
>   }
> {code}
> This means that applications using these methods, directly or indirectly, 
> to create files aren't actually generating checksums.
> Fix: implement these methods & relay to local create calls, not to the inner 
> FS.






[jira] [Commented] (HADOOP-16107) FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-26 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778471#comment-16778471
 ] 

Steve Loughran commented on HADOOP-16107:
-

Whitespace fixup always happens on my commits (I use "apply -3 --verbose 
--whitespace=fix", aliased to something).

Now, concat?

That's an interesting thought. Raw Local does do concat, but it does it 
entirely locally:
{code}
  public void concat(final Path trg, final Path [] psrcs) throws IOException {
final int bufferSize = 4096;
try(FSDataOutputStream out = create(trg)) {
  for (Path src : psrcs) {
try(FSDataInputStream in = open(src)) {
  IOUtils.copyBytes(in, out, bufferSize, false);
}
  }
}
  }
{code}

I don't see how it'd generate checksums, as for that it'd have to go back 
through the filter FS.

How about we get this patch in and then worry about that one, which has clearly 
existed for a few months. And I'll file that one under the multipart upload 
HDFS JIRA to let someone else deal with it :)

Oh, and then there's append(). Same issue: it's coming up through ChecksumFS 
without the checksum logic.




> FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC 
> logic
> ---
>
> Key: HADOOP-16107
> URL: https://issues.apache.org/jira/browse/HADOOP-16107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.3, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-16107-001.patch, HADOOP-16107-003.patch
>
>
> LocalFS is a subclass of filterFS, but overrides create and open so that 
> checksums are created and read. 
> MAPREDUCE-7184 has thrown up that the new builder openFile() call is being 
> forwarded to the innerFS without CRC checking. Reviewing/fixing that has 
> shown that some of the create methods aren't being correctly wrapped, so not 
> generating CRCs
> * createFile() builder
> The following create calls:
> {code}
>   public FSDataOutputStream createNonRecursive(final Path f,
>   final FsPermission permission,
>   final EnumSet<CreateFlag> flags,
>   final int bufferSize,
>   final short replication,
>   final long blockSize,
>   final Progressable progress) throws IOException;
>   public FSDataOutputStream create(final Path f,
>   final FsPermission permission,
>   final EnumSet<CreateFlag> flags,
>   final int bufferSize,
>   final short replication,
>   final long blockSize,
>   final Progressable progress,
>   final Options.ChecksumOpt checksumOpt) throws IOException {
> return super.create(f, permission, flags, bufferSize, replication,
> blockSize, progress, checksumOpt);
>   }
> {code}
> This means that applications using these methods, directly or indirectly, 
> to create files aren't actually generating checksums.
> Fix: implement these methods & relay to local create calls, not to the inner 
> FS.






[jira] [Moved] (HADOOP-16150) checksumFS doesn't wrap concat(): concatenated files don't have checksums

2019-02-26 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran moved HDFS-14319 to HADOOP-16150:


Affects Version/s: (was: 3.2.0)
   3.2.0
  Component/s: (was: fs)
   fs
  Key: HADOOP-16150  (was: HDFS-14319)
  Project: Hadoop Common  (was: Hadoop HDFS)

> checksumFS doesn't wrap concat(): concatenated files don't have checksums
> -
>
> Key: HADOOP-16150
> URL: https://issues.apache.org/jira/browse/HADOOP-16150
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Major
>
> Follow-on from HADOOP-16107. FilterFS passes through the concat operation, 
> and checksum FS doesn't override that call, so files created through concat 
> *do not have checksums*.
> If people are using a checksummed FS directly with the expectation that they 
> will get checksums, that expectation is not being met. 
> What to do?
> * fail always? (see the sketch below)
> * fail if checksums are enabled?
> * try to implement the concat operation from raw local up at the checksum 
> level
> append() just always gives up; doing the same for concat would be the 
> simplest. Again, this brings us back to "need a way to see if an FS supports 
> a feature before invocation"; here checksum FS would reject append and concat.
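
The "fail always" option is a one-method change. A minimal sketch of a 
hypothetical ChecksumFileSystem override, mirroring how append() already 
refuses; this is not an actual patch:

{code:java}
// Fragment of a ChecksumFileSystem subclass: refuse concat outright rather
// than silently producing files without checksums.
@Override
public void concat(final Path trg, final Path[] psrcs) throws IOException {
  throw new UnsupportedOperationException(
      "ChecksumFileSystem does not support concat()");
}
{code}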






[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-26 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778417#comment-16778417
 ] 

Steve Loughran commented on HADOOP-15625:
-

I've been staring at things a bit in the AWS docs.

"Objects stored in your bucket before you set the versioning state have a 
version ID of null. When you enable versioning, existing objects in your bucket 
do not change"

which means that you may open an object that has a null version ID while the 
bucket itself is still versioned. We may not want to give up completely in 
this situation.
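
A minimal sketch of the fallback this implies (names invented for 
illustration, not from the patch): prefer the version ID for change 
detection, but fall back to the etag when the object predates versioning.

{code:java}
public final class RevisionId {
  private RevisionId() {
  }

  /** Pick the attribute used to detect a changed object. */
  public static String choose(String versionId, String etag) {
    // Objects stored before versioning was enabled carry a null version ID.
    return versionId != null ? versionId : etag;
  }
}
{code}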


> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP--15625-006.patch, HADOOP-15625-001.patch, 
> HADOOP-15625-002.patch, HADOOP-15625-003.patch, HADOOP-15625-004.patch, 
> HADOOP-15625-005.patch, HADOOP-15625-006.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice the file has 
> changed, caches the length from startup, and whenever a seek triggers a new 
> GET, you may get one of: old data, new data, and even perhaps go from new 
> data to old data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verify the etag of the response
> # raise an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.






[jira] [Commented] (HADOOP-16093) Move DurationInfo from hadoop-aws to hadoop-common org.apache.hadoop.util

2019-02-26 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778233#comment-16778233
 ] 

Hudson commented on HADOOP-16093:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16071 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16071/])
HADOOP-16093. Move DurationInfo from hadoop-aws to hadoop-common (stevel: rev 
52b2eab575d0b4d8ce7fa57661aaca6b8a123cc2)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/S3ADelegationTokens.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/AbstractITCommitProtocol.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/select/ITestS3Select.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/WriteOperationHelper.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DurationInfo.java
* (delete) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/DurationInfo.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicS3GuardCommitter.java
* (delete) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/Duration.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/select/ITestS3SelectLandsat.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/select/ITestS3SelectCLI.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/OperationDuration.java
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestDurationInfo.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitter.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/AbstractDelegationTokenBinding.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/select/AbstractS3SelectTest.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/AbstractITCommitMRJob.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/select/ITestS3SelectMRJob.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/staging/StagingCommitter.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ATemporaryCredentials.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/select/SelectTool.java


> Move DurationInfo from hadoop-aws to hadoop-common org.apache.hadoop.util
> -
>
> Key: HADOOP-16093
> URL: https://issues.apache.org/jira/browse/HADOOP-16093
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, util
>Reporter: Steve Loughran
>Assignee: Abhishek Modi
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-16093.001.patch, HADOOP-16093.002.patch, 
> HADOOP-16093.003.patch, HADOOP-16093.004.patch, HADOOP-16093.005.patch, 
> HADOOP-16093.006.patch
>
>
> It'd be useful to have DurationInfo usable in other places (e.g. distcp, 
> abfs, ...). But as it is in hadoop-aws under 
> {{org.apache.hadoop.fs.s3a.commit.DurationInfo}} we can't do that.
> Move it.
> We'll have to rename the Duration class in the process, as java 8 time has a 
> class of that name too. Maybe "OperationDuration", with DurationInfo a 
> subclass of that
> Probably need a test too, won't it?
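
For context, usage after the move would look like this; a sketch against the 
new org.apache.hadoop.util location, with an invented log message.

{code:java}
import org.apache.hadoop.util.DurationInfo;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DurationDemo {
  private static final Logger LOG =
      LoggerFactory.getLogger(DurationDemo.class);

  public void timedOperation() {
    // DurationInfo is AutoCloseable: it logs when the block is entered and
    // logs the elapsed time when it completes.
    try (DurationInfo d = new DurationInfo(LOG, "uploading %s", "example")) {
      // ... the operation being timed ...
    }
  }
}
{code}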






[jira] [Commented] (HADOOP-16093) Move DurationInfo from hadoop-aws to hadoop-common org.apache.hadoop.util

2019-02-26 Thread Abhishek Modi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778232#comment-16778232
 ] 

Abhishek Modi commented on HADOOP-16093:


Thanks [~ste...@apache.org] for review and committing this.

> Move DurationInfo from hadoop-aws to hadoop-common org.apache.hadoop.util
> -
>
> Key: HADOOP-16093
> URL: https://issues.apache.org/jira/browse/HADOOP-16093
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, util
>Reporter: Steve Loughran
>Assignee: Abhishek Modi
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-16093.001.patch, HADOOP-16093.002.patch, 
> HADOOP-16093.003.patch, HADOOP-16093.004.patch, HADOOP-16093.005.patch, 
> HADOOP-16093.006.patch
>
>
> It'd be useful to have DurationInfo usable in other places (e.g. distcp, 
> abfs, ...). But as it is in hadoop-aws under 
> {{org.apache.hadoop.fs.s3a.commit.DurationInfo}} we can't do that.
> Move it.
> We'll have to rename the Duration class in the process, as java 8 time has a 
> class of that name too. Maybe "OperationDuration", with DurationInfo a 
> subclass of that
> Probably need a test too, won't it?






[jira] [Updated] (HADOOP-16132) Support multipart download in S3AFileSystem

2019-02-26 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16132:

Component/s: fs/s3

> Support multipart download in S3AFileSystem
> ---
>
> Key: HADOOP-16132
> URL: https://issues.apache.org/jira/browse/HADOOP-16132
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Justin Uang
>Priority: Major
>
> I noticed that I get 150MB/s when I use the AWS CLI
> {code:java}
> aws s3 cp s3://<bucket>/<key> - > /dev/null{code}
> vs 50MB/s when I use the S3AFileSystem
> {code:java}
> hadoop fs -cat s3://<bucket>/<key> > /dev/null{code}
> Looking into the AWS CLI code, it looks like the 
> [download|https://github.com/boto/s3transfer/blob/ca0b708ea8a6a1213c6e21ca5a856e184f824334/s3transfer/download.py]
>  logic is quite clever. It downloads the next couple parts in parallel using 
> range requests, and then buffers them in memory in order to reorder them and 
> expose a single contiguous stream. I translated the logic to Java and 
> modified the S3AFileSystem to do similar things, and am able to achieve 
> 150MB/s download speeds as well. It is mostly done but I have some things to 
> clean up first. The PR is here: 
> https://github.com/palantir/hadoop/pull/47/files
> It would be great to get some other eyes on it to see what we need to do to 
> get it merged.
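
The download pattern described above can be sketched independently of the PR. 
This illustration (invented names, whole-object buffering rather than the 
bounded windows real code would need) just shows the fan-out of ranged reads 
and their in-order reassembly into one contiguous stream.

{code:java}
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.SequenceInputStream;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.BiFunction;

public final class ParallelRangeReader {

  /**
   * Fetch [0, size) in partSize chunks via parallel range requests, then
   * stitch the parts back together as one contiguous stream.
   * rangeGet stands in for an S3 GET with a Range header.
   */
  public static InputStream read(long size, int partSize, int parallelism,
      BiFunction<Long, Integer, byte[]> rangeGet) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(parallelism);
    try {
      List<Future<byte[]>> parts = new ArrayList<>();
      for (long off = 0; off < size; off += partSize) {
        final long o = off;
        final int len = (int) Math.min(partSize, size - off);
        parts.add(pool.submit(() -> rangeGet.apply(o, len))); // in parallel
      }
      List<InputStream> streams = new ArrayList<>();
      for (Future<byte[]> part : parts) {
        streams.add(new ByteArrayInputStream(part.get())); // back in order
      }
      return new SequenceInputStream(Collections.enumeration(streams));
    } finally {
      pool.shutdown();
    }
  }
}
{code}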






[jira] [Commented] (HADOOP-16107) FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-26 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778214#comment-16778214
 ] 

Eric Yang commented on HADOOP-16107:


[~ste...@apache.org] Thank you for the patch.  Do we need to worry about the 
concat operation for ChecksumFileSystem?  Can we also fix the whitespace?  
Thanks.

> FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC 
> logic
> ---
>
> Key: HADOOP-16107
> URL: https://issues.apache.org/jira/browse/HADOOP-16107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.3, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-16107-001.patch, HADOOP-16107-003.patch
>
>
> LocalFS is a subclass of filterFS, but overrides create and open so that 
> checksums are created and read. 
> MAPREDUCE-7184 has thrown up that the new builder openFile() call is being 
> forwarded to the innerFS without CRC checking. Reviewing/fixing that has 
> shown that some of the create methods aren't being correctly wrapped, so not 
> generating CRCs
> * createFile() builder
> The following create calls:
> {code}
>   public FSDataOutputStream createNonRecursive(final Path f,
>   final FsPermission permission,
>   final EnumSet<CreateFlag> flags,
>   final int bufferSize,
>   final short replication,
>   final long blockSize,
>   final Progressable progress) throws IOException;
>   public FSDataOutputStream create(final Path f,
>   final FsPermission permission,
>   final EnumSet<CreateFlag> flags,
>   final int bufferSize,
>   final short replication,
>   final long blockSize,
>   final Progressable progress,
>   final Options.ChecksumOpt checksumOpt) throws IOException {
> return super.create(f, permission, flags, bufferSize, replication,
> blockSize, progress, checksumOpt);
>   }
> {code}
> This means that applications using these methods, directly or indirectly, 
> to create files aren't actually generating checksums.
> Fix: implement these methods & relay to local create calls, not to the inner 
> FS.






[jira] [Updated] (HADOOP-16132) Support multipart download in S3AFileSystem

2019-02-26 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16132:

Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-15620

> Support multipart download in S3AFileSystem
> ---
>
> Key: HADOOP-16132
> URL: https://issues.apache.org/jira/browse/HADOOP-16132
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Justin Uang
>Priority: Major
>
> I noticed that I get 150MB/s when I use the AWS CLI
> {code:java}
> aws s3 cp s3://<bucket>/<key> - > /dev/null{code}
> vs 50MB/s when I use the S3AFileSystem
> {code:java}
> hadoop fs -cat s3://<bucket>/<key> > /dev/null{code}
> Looking into the AWS CLI code, it looks like the 
> [download|https://github.com/boto/s3transfer/blob/ca0b708ea8a6a1213c6e21ca5a856e184f824334/s3transfer/download.py]
>  logic is quite clever. It downloads the next couple parts in parallel using 
> range requests, and then buffers them in memory in order to reorder them and 
> expose a single contiguous stream. I translated the logic to Java and 
> modified the S3AFileSystem to do similar things, and am able to achieve 
> 150MB/s download speeds as well. It is mostly done but I have some things to 
> clean up first. The PR is here: 
> https://github.com/palantir/hadoop/pull/47/files
> It would be great to get some other eyes on it to see what we need to do to 
> get it merged.






[jira] [Commented] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2019-02-26 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778205#comment-16778205
 ] 

Steve Loughran commented on HADOOP-15920:
-

We are down to the last few checkstyle issues:
{code}
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractSeekTest.java:207:
assertTrue("The available should be zero",instream.available() >= 0);:46: 
',' is not followed by whitespace. [WhitespaceAfter]
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractSeekTest.java:614:
assertTrue("Data available in " + instream, inputStream.available() >0 
);:76: ')' is preceded with whitespace. [ParenPad]
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java:597:
int availableSize = this.wrappedStream == null ? 0 : 
this.wrappedStream.available();: Line is longer than 80 characters (found 88). 
[LineLength]
{code}
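
For reference, the whitespace- and line-length-corrected forms of those three 
statements would be along these lines:

{code:java}
assertTrue("The available should be zero", instream.available() >= 0);

assertTrue("Data available in " + instream, inputStream.available() > 0);

int availableSize =
    this.wrappedStream == null ? 0 : this.wrappedStream.available();
{code}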

> get patch for S3a nextReadPos(), through Yetus
> --
>
> Key: HADOOP-15920
> URL: https://issues.apache.org/jira/browse/HADOOP-15920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch, 
> HADOOP-15870-003.patch, HADOOP-15870-004.patch, HADOOP-15870-005.patch, 
> HADOOP-15920-06.patch, HADOOP-15920-07.patch
>
>







[jira] [Updated] (HADOOP-16093) Move DurationInfo from hadoop-aws to hadoop-common org.apache.hadoop.util

2019-02-26 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16093:

   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

+1

Reran all the hadoop-aws tests to make sure they were happy, and they were.

Thanks for this!

> Move DurationInfo from hadoop-aws to hadoop-common org.apache.hadoop.util
> -
>
> Key: HADOOP-16093
> URL: https://issues.apache.org/jira/browse/HADOOP-16093
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, util
>Reporter: Steve Loughran
>Assignee: Abhishek Modi
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-16093.001.patch, HADOOP-16093.002.patch, 
> HADOOP-16093.003.patch, HADOOP-16093.004.patch, HADOOP-16093.005.patch, 
> HADOOP-16093.006.patch
>
>
> It'd be useful to have DurationInfo usable in other places (e.g. distcp, 
> abfs, ...). But as it is in hadoop-aws under 
> {{org.apache.hadoop.fs.s3a.commit.DurationInfo}} we can't do that.
> Move it.
> We'll have to rename the Duration class in the process, as java 8 time has a 
> class of that name too. Maybe "OperationDuration", with DurationInfo a 
> subclass of that
> Probably need a test too, won't it?






[jira] [Updated] (HADOOP-16093) Move DurationInfo from hadoop-aws to hadoop-common org.apache.hadoop.util

2019-02-26 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16093:

Summary: Move DurationInfo from hadoop-aws to hadoop-common 
org.apache.hadoop.util  (was: Move DurationInfo from hadoop-aws to 
hadoop-common org.apache.fs.impl)

> Move DurationInfo from hadoop-aws to hadoop-common org.apache.hadoop.util
> -
>
> Key: HADOOP-16093
> URL: https://issues.apache.org/jira/browse/HADOOP-16093
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, util
>Reporter: Steve Loughran
>Assignee: Abhishek Modi
>Priority: Minor
> Attachments: HADOOP-16093.001.patch, HADOOP-16093.002.patch, 
> HADOOP-16093.003.patch, HADOOP-16093.004.patch, HADOOP-16093.005.patch, 
> HADOOP-16093.006.patch
>
>
> It'd be useful to have DurationInfo usable in other places (e.g. distcp, 
> abfs, ...). But as it is in hadoop-aws under 
> {{org.apache.hadoop.fs.s3a.commit.DurationInfo}} we can't do that.
> Move it.
> We'll have to rename the Duration class in the process, as java 8 time has a 
> class of that name too. Maybe "OperationDuration", with DurationInfo a 
> subclass of that
> Probably need a test too, won't it?






[jira] [Commented] (HADOOP-16093) Move DurationInfo from hadoop-aws to hadoop-common org.apache.fs.impl

2019-02-26 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778164#comment-16778164
 ] 

Steve Loughran commented on HADOOP-16093:
-

LGTM, doing a local retest of all the s3 stuff to make sure all is well

> Move DurationInfo from hadoop-aws to hadoop-common org.apache.fs.impl
> -
>
> Key: HADOOP-16093
> URL: https://issues.apache.org/jira/browse/HADOOP-16093
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, util
>Reporter: Steve Loughran
>Assignee: Abhishek Modi
>Priority: Minor
> Attachments: HADOOP-16093.001.patch, HADOOP-16093.002.patch, 
> HADOOP-16093.003.patch, HADOOP-16093.004.patch, HADOOP-16093.005.patch, 
> HADOOP-16093.006.patch
>
>
> It'd be useful to have DurationInfo usable in other places (e.g. distcp, 
> abfs, ...). But as it is in hadoop-aws under 
> {{org.apache.hadoop.fs.s3a.commit.DurationInfo}} we can't do that.
> Move it.
> We'll have to rename the Duration class in the process, as java 8 time has a 
> class of that name too. Maybe "OperationDuration", with DurationInfo a 
> subclass of that
> Probably need a test too, won't it?





