[jira] [Updated] (HADOOP-14103) Sort out hadoop-aws contract-test-options.xml

2017-07-08 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14103:

Attachment: HADOOP-14103.002.patch

Patch 002
* Fix checkstyle
* Update "ftp" section in filesystem/testing.md

[~steve_l] Can we not check in contract-test-options.xml? It does not seem to 
add a lot of value, and the file is simple enough to create on the fly from the 
template in the docs.

Also, if an XInclude of auth-keys.xml is added to "s3a.xml", 
"contract-test-options.xml" is no longer needed. The property 
{{fs.contract.test.fs.s3a}} can be added to "auth-keys.xml", just like 
{{test.fs.s3a.name}}, although test properties in "auth-keys.xml" do feel a 
little awkward.
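
For illustration, a minimal sketch of the layout under discussion; the file contents 
below are assumptions based on this thread, not the attached patches, and the bucket 
URL is a placeholder:

{code}
<?xml version="1.0"?>
<!-- Sketch: src/test/resources/contract-test-options.xml, SCM-managed,
     containing nothing but an XInclude of the developer's auth-keys.xml. -->
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
  <xi:include href="auth-keys.xml"/>
</configuration>
{code}

Under the alternative suggested above, the git-ignored auth-keys.xml would carry the 
contract test binding directly:

{code}
<?xml version="1.0"?>
<!-- Sketch: auth-keys.xml, kept out of SCM; the bucket name is a placeholder. -->
<configuration>
  <property>
    <name>test.fs.s3a.name</name>
    <value>s3a://your-test-bucket/</value>
  </property>
  <property>
    <name>fs.contract.test.fs.s3a</name>
    <value>s3a://your-test-bucket/</value>
  </property>
  <!-- plus the fs.s3a credential properties -->
</configuration>
{code}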

> Sort out hadoop-aws contract-test-options.xml
> -
>
> Key: HADOOP-14103
> URL: https://issues.apache.org/jira/browse/HADOOP-14103
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14103.001.patch, HADOOP-14103.002.patch
>
>
> The doc update of HADOOP-14099 has shown that there's confusion about whether 
> we need a src/test/resources/contract-test-options.xml file.
> It's documented as needed, branch-2 has it in .gitignore; trunk doesn't.
> I think it's needed for the contract tests, which the S3A test base extends 
> (And therefore needs). However, we can just put in an SCM managed one and 
> have it just XInclude auth-keys.xml
> I propose: do that, fix up the testing docs to match






[jira] [Comment Edited] (HADOOP-14103) Sort out hadoop-aws contract-test-options.xml

2017-07-08 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079388#comment-16079388
 ] 

John Zhuge edited comment on HADOOP-14103 at 7/9/17 5:18 AM:
-

Patch 001:
* Add hadoop-tools/hadoop-aws/src/test/resources/contract-test-options.xml 
which XIncludes auth-keys.xml.
* Update filesystem/testing.md and hadoop-aws/testing.md

Notes:
* Did not update the "ftp" section in filesystem/testing.md
* Did not update the "swift" section in filesystem/testing.md. I'd suggest 
migrate this part to hadoop-openstack's index.md.
* Since we are checking in contract-test-options.xml and we have to update 
"fs.contract.test.fs.s3a" in the file in order to run contract tests, the 
change may get accidentally committed. Not a credential leak though.


was (Author: jzhuge):
Patch 001:
* Add hadoop-tools/hadoop-aws/src/test/resources/contract-test-options.xml 
which XIncludes auth-keys.xml.
* Update filesystem/testing.md and hadoop-aws/testing.md

Notes:
* Did not update the "ftp" section in filesystem/testing.md
* Did not update the "swift" section in filesystem/testing.md. I'd suggest 
migrate this part to hadoop-openstack's index.md.
* Since we are checking in contract-test-options.xml and we have to update 
"fs.contract.test.fs.s3a" in the file in order to run contract tests, he change 
may get accidentally committed. Not a credential leak though.

> Sort out hadoop-aws contract-test-options.xml
> -
>
> Key: HADOOP-14103
> URL: https://issues.apache.org/jira/browse/HADOOP-14103
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14103.001.patch
>
>
> The doc update of HADOOP-14099 has shown that there's confusion about whether 
> we need a src/test/resources/contract-test-options.xml file.
> It's documented as needed, branch-2 has it in .gitignore; trunk doesn't.
> I think it's needed for the contract tests, which the S3A test base extends 
> (And therefore needs). However, we can just put in an SCM managed one and 
> have it just XInclude auth-keys.xml
> I propose: do that, fix up the testing docs to match






[jira] [Commented] (HADOOP-14103) Sort out hadoop-aws contract-test-options.xml

2017-07-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079398#comment-16079398
 ] 

Hadoop QA commented on HADOOP-14103:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
39s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
16s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
17s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14103 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876279/HADOOP-14103.001.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  xml  |
| uname | Linux 95c407346d95 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f484a6f |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12743/artifact/patchprocess/patch-mvnsite-hadoop-common-project_hadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12743/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12743/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12743/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . 
|
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12743/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Sort out hadoop-aws contract-test-options.xml
> -
>
> Key: HADOOP-14103
> URL: https://issues.apache.org/jira/browse/HADOOP-14103
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14103.001.patch
>
>
> The doc update of HADOOP-14099 has shown that there's confusion about whether 
> we need a src/test/resources/contract-test-options.xml file.
> It's documented as needed, branch-2 has it in .gitignore; trunk doesn't.
> I think it's needed for the contract tests, which the S3A test base extends 
> (And therefore needs). However, we can just put in an SCM managed one and 
> have it just XInclude auth-keys.xml
> I propose: do that, fix up the testing docs to match





[jira] [Updated] (HADOOP-14103) Sort out hadoop-aws contract-test-options.xml

2017-07-08 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14103:

Status: Patch Available  (was: In Progress)

> Sort out hadoop-aws contract-test-options.xml
> -
>
> Key: HADOOP-14103
> URL: https://issues.apache.org/jira/browse/HADOOP-14103
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14103.001.patch
>
>
> The doc update of HADOOP-14099 has shown that there's confusion about whether 
> we need a src/test/resources/contract-test-options.xml file.
> It's documented as needed, branch-2 has it in .gitignore; trunk doesn't.
> I think it's needed for the contract tests, which the S3A test base extends 
> (And therefore needs). However, we can just put in an SCM managed one and 
> have it just XInclude auth-keys.xml
> I propose: do that, fix up the testing docs to match






[jira] [Updated] (HADOOP-14103) Sort out hadoop-aws contract-test-options.xml

2017-07-08 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14103:

Attachment: HADOOP-14103.001.patch

Patch 001:
* Add hadoop-tools/hadoop-aws/src/test/resources/contract-test-options.xml 
which XIncludes auth-keys.xml.
* Update filesystem/testing.md and hadoop-aws/testing.md

Notes:
* Did not update the "ftp" section in filesystem/testing.md
* Did not update the "swift" section in filesystem/testing.md. I'd suggest 
migrate this part to hadoop-openstack's index.md.
* Since we are checking in contract-test-options.xml and we have to update 
"fs.contract.test.fs.s3a" in the file in order to run contract tests, he change 
may get accidentally committed. Not a credential leak though.

> Sort out hadoop-aws contract-test-options.xml
> -
>
> Key: HADOOP-14103
> URL: https://issues.apache.org/jira/browse/HADOOP-14103
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14103.001.patch
>
>
> The doc update of HADOOP-14099 has shown that there's confusion about whether 
> we need a src/test/resources/contract-test-options.xml file.
> It's documented as needed, branch-2 has it in .gitignore; trunk doesn't.
> I think it's needed for the contract tests, which the S3A test base extends 
> (And therefore needs). However, we can just put in an SCM managed one and 
> have it just XInclude auth-keys.xml
> I propose: do that, fix up the testing docs to match






[jira] [Comment Edited] (HADOOP-14103) Sort out hadoop-aws contract-test-options.xml

2017-07-08 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079301#comment-16079301
 ] 

John Zhuge edited comment on HADOOP-14103 at 7/9/17 12:34 AM:
--

[~steve_l] Sorry, missed your ping while I was on vacation.

HADOOP-13929 was ported to branch-2 and branch-2.8 in March 2017, so 
{{.gitignore}} in those branches no longer contains {{contract-test-options.xml}}.

I will post a patch shortly, following your ideas in the Description:
* Add hadoop-tools/hadoop-aws/src/test/resources/contract-test-options.xml 
which XIncludes auth-keys.xml.
* Update filesystem/testing.md and hadoop-aws/testing.md

Please note this is different from what I suggested on Feb. 23.


was (Author: jzhuge):
[~steve_l] Sorry, missed your ping while I was on vacation.

HADOOP_13929 was ported to branch-2 and branch-2.8 in Mar. 2017, so 
{{.gitignore}} in these branches no long contain {{contract-test-options.xml}}.

I will post a patch shortly according to your ideas in Description:
* Add hadoop-tools/hadoop-aws/src/test/resources/contract-test-options.xml 
which XIncludes auth-keys.xml.
* Update filesystem/testing.md and hadoop-aws/testing.md

> Sort out hadoop-aws contract-test-options.xml
> -
>
> Key: HADOOP-14103
> URL: https://issues.apache.org/jira/browse/HADOOP-14103
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: John Zhuge
>Priority: Minor
>
> The doc update of HADOOP-14099 has shown that there's confusion about whether 
> we need a src/test/resources/contract-test-options.xml file.
> It's documented as needed, branch-2 has it in .gitignore; trunk doesn't.
> I think it's needed for the contract tests, which the S3A test base extends 
> (And therefore needs). However, we can just put in an SCM managed one and 
> have it just XInclude auth-keys.xml
> I propose: do that, fix up the testing docs to match






[jira] [Updated] (HADOOP-14103) Sort out hadoop-aws contract-test-options.xml

2017-07-08 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14103:

Summary: Sort out hadoop-aws contract-test-options.xml  (was: sort out 
hadoop-aws contract-test-options.xml)

> Sort out hadoop-aws contract-test-options.xml
> -
>
> Key: HADOOP-14103
> URL: https://issues.apache.org/jira/browse/HADOOP-14103
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: John Zhuge
>Priority: Minor
>
> The doc update of HADOOP-14099 has shown that there's confusion about whether 
> we need a src/test/resources/contract-test-options.xml file.
> It's documented as needed, branch-2 has it in .gitignore; trunk doesn't.
> I think it's needed for the contract tests, which the S3A test base extends 
> (And therefore needs). However, we can just put in an SCM managed one and 
> have it just XInclude auth-keys.xml
> I propose: do that, fix up the testing docs to match






[jira] [Commented] (HADOOP-14103) sort out hadoop-aws contract-test-options.xml

2017-07-08 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079301#comment-16079301
 ] 

John Zhuge commented on HADOOP-14103:
-

[~steve_l] Sorry, missed your ping while I was on vacation.

HADOOP-13929 was ported to branch-2 and branch-2.8 in March 2017, so 
{{.gitignore}} in those branches no longer contains {{contract-test-options.xml}}.

I will post a patch shortly, following your ideas in the Description:
* Add hadoop-tools/hadoop-aws/src/test/resources/contract-test-options.xml 
which XIncludes auth-keys.xml.
* Update filesystem/testing.md and hadoop-aws/testing.md

> sort out hadoop-aws contract-test-options.xml
> -
>
> Key: HADOOP-14103
> URL: https://issues.apache.org/jira/browse/HADOOP-14103
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: John Zhuge
>Priority: Minor
>
> The doc update of HADOOP-14099 has shown that there's confusion about whether 
> we need a src/test/resources/contract-test-options.xml file.
> It's documented as needed, branch-2 has it in .gitignore; trunk doesn't.
> I think it's needed for the contract tests, which the S3A test base extends 
> (And therefore needs). However, we can just put in an SCM managed one and 
> have it just XInclude auth-keys.xml
> I propose: do that, fix up the testing docs to match






[jira] [Commented] (HADOOP-14636) TestKDiag failing intermittently on Jenkins/Yetus at login from keytab

2017-07-08 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079254#comment-16079254
 ] 

Allen Wittenauer commented on HADOOP-14636:
---

{code}
${env.LD_LIBRARY_PATH}
{code}

... is Configuration-style variable syntax, which means either it is coming 
from the test code/config itself, or Configuration can't deal with empty env 
vars (or both).


Three other points:

* This might be a nice, fat security hole. If that token gets treated as empty, 
the element translates to the current directory under normal shell semantics, 
which makes it generally trivial to exploit by dropping a bad shared library 
there (see the sketch after this list).

* We apparently have LD_LIBRARY_PATH all over the place but not 
DYLD_LIBRARY_PATH.  This means anything using dyld (such as OS X) will have a 
very different experience.

* I've been wondering for a while if we even need to set java.library.path if 
we are already setting DY/LD_LIBRARY_PATH.  Isn't one initialized from the 
other anyway?
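
A minimal sketch of the kind of check that could catch this class of problem; the 
class name is made up and this is not existing KDiag code, and it assumes the ":" 
path separator seen in the quoted output:

{code}
// Sketch only: flag unexpanded ${env.*} tokens and empty elements in
// java.library.path; an empty element is effectively the current directory.
public class LibPathCheck {
  public static void main(String[] args) {
    String libPath = System.getProperty("java.library.path", "");
    for (String element : libPath.split(":", -1)) {
      if (element.isEmpty()) {
        System.err.println("WARNING: empty path element (== current directory)");
      } else if (element.contains("${env.")) {
        System.err.println("WARNING: unexpanded variable in element: " + element);
      }
    }
  }
}
{code}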

> TestKDiag failing intermittently on Jenkins/Yetus at login from keytab
> --
>
> Key: HADOOP-14636
> URL: https://issues.apache.org/jira/browse/HADOOP-14636
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 3.0.0-beta1
> Environment: {code}
> user.name = "jenkins"
> java.version = "1.8.0_131"
> java.security.krb5.conf = 
> "/testptch/hadoop/hadoop-common-project/hadoop-common/target/1499472499650/krb5.conf"
> kdc.resource.dir = "src/test/resources/kdc"
> hadoop.kerberos.kinit.command = "kinit"
> hadoop.security.authentication = "KERBEROS"
> hadoop.security.authorization = "false"
> hadoop.kerberos.min.seconds.before.relogin = "60"
> hadoop.security.dns.interface = "(unset)"
> hadoop.security.dns.nameserver = "(unset)"
> hadoop.rpc.protection = "authentication"
> hadoop.security.saslproperties.resolver.class = "(unset)"
> hadoop.security.crypto.codec.classes = "(unset)"
> hadoop.security.group.mapping = 
> "org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback"
> hadoop.security.impersonation.provider.class = "(unset)"
> dfs.data.transfer.protection = "(unset)"
> dfs.data.transfer.saslproperties.resolver.class = "(unset)"
> 2017-07-08 00:08:20,381 WARN  security.KDiag (KDiag.java:execute(365)) - The 
> default cluster security is insecure
> {code}
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: output.txt
>
>
> The test {{TestKDiag}} is failing intermittently on Yetus builds, 
> {code}
> org.apache.hadoop.security.KerberosAuthException: Login failure for user: 
> f...@example.com from keytab 
> /testptch/hadoop/hadoop-common-project/hadoop-common/target/keytab 
> javax.security.auth.login.LoginException: Unable to obtain password from user
> {code}
> The tests that fail are all trying to log in using a keytab just created, the 
> JVM isn't having any of it.
> Possible causes? I can think of a few to start with
> # keytab generation
> # keytab path parameter wrong
> # JVM isn't doing the login
> # some race condition
> # Host OS
> # Other environment issues (clock, network...)
> There's no recent changes in the kdiag or UGI code.
> The failure is intermittent, not surfacing for me (others?) locally, which 
> which could point at: JVM, host OS, race condition, other env  issues.






[jira] [Commented] (HADOOP-14581) Restrict setOwner to list of user when security is enabled in wasb

2017-07-08 Thread Varada Hemeswari (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079214#comment-16079214
 ] 

Varada Hemeswari commented on HADOOP-14581:
---

Hi Steve/Ming Liang,

Can you please review the latest patch at your earliest convenience?
Pinging again since we are running tight on deadlines.

Thanks and regards,
Hema







> Restrict setOwner to list of user when security is enabled in wasb
> --
>
> Key: HADOOP-14581
> URL: https://issues.apache.org/jira/browse/HADOOP-14581
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Varada Hemeswari
>Assignee: Varada Hemeswari
>  Labels: azure, fs, secure, wasb
> Attachments: HADOOP-14581.1.patch, HADOOP-14581.2.patch
>
>
> Currently in azure FS, setOwner api is exposed to all the users accessing the 
> file system.
> When Authorization is enabled, access to some files/folders is given to 
> particular users based on whether the user is the owner of the file.
> So setOwner has to be restricted to limited set of users to prevent users 
> from exploiting owner based authorization of files and folders.
> Introducing a new config called fs.azure.chown.allowed.userlist which is a 
> comma seperated list of users who are allowed to perform chown operation when 
> authorization is enabled.
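
To illustrate the mechanism described above, a minimal sketch of the allow-list 
check; only the config key name comes from this issue, the class and method names 
are invented, and treating "*" as allow-all is an assumption:

{code}
// Sketch only: parse a comma-separated allow-list and test membership before chown.
import java.util.HashSet;
import java.util.Set;

public class ChownAllowList {
  public static final String CHOWN_ALLOWED_USERS_KEY = "fs.azure.chown.allowed.userlist";

  private final Set<String> allowedUsers = new HashSet<>();

  public ChownAllowList(String configuredValue) {
    if (configuredValue != null) {
      for (String user : configuredValue.split(",")) {
        String trimmed = user.trim();
        if (!trimmed.isEmpty()) {
          allowedUsers.add(trimmed);
        }
      }
    }
  }

  /** True if the given user may call setOwner (assumed: "*" means everyone). */
  public boolean isChownAllowed(String currentUser) {
    return allowedUsers.contains("*") || allowedUsers.contains(currentUser);
  }
}
{code}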






[jira] [Comment Edited] (HADOOP-14623) fixed some bugs in KafkaSink

2017-07-08 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076628#comment-16076628
 ] 

Hongyuan Li edited comment on HADOOP-14623 at 7/8/17 11:55 AM:
---

Furthermore, the flush method is there to confirm that the data has been written.

*Update/Correction*
Sorry, it is the {{putMetrics}} method.
The code in {{KafkaSink}}#{{putMetrics}}, listed below, is what gives me a 
different opinion:
{code}
……
Future future = producer.send(data);
jsonLines.setLength(0);
try {
  future.get(); // which means synchronously
} catch (InterruptedException e) {
  throw new MetricsException("Error sending data", e);
} catch (ExecutionException e) {
  throw new MetricsException("Error sending data", e);
}

……
{code}


was (Author: hongyuan li):
futuremore, flush method is to confirm that data has been written.

*Update/Crorrection*
sorry, it is the {{putMetrics}} method.
in {{KafkaSink}}#{{putMetrics}} , code lists below, which makes me have a 
different opinion:
{code}
……
Future future = producer.send(data);
jsonLines.setLength(0);
try {
  future.get();
} catch (InterruptedException e) {
  throw new MetricsException("Error sending data", e);
} catch (ExecutionException e) {
  throw new MetricsException("Error sending data", e);
}

……
{code}

> fixed some bugs in KafkaSink 
> -
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch, HADOOP-14623-002.patch
>
>
> {{KafkaSink}}#{{init}}  should set ack to *1* to make sure the message has 
> been written to the broker at least.
> current code list below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}
> *Update*
> find another bug about this class, {{key.serializer}} used 
> {{org.apache.kafka.common.serialization.ByteArraySerializer}}, however, the 
> key properties of Producer is Integer, codes list below:
> {code}
> props.put("key.serializer",
> "org.apache.kafka.common.serialization.ByteArraySerializer");
> …
>  producer = new KafkaProducer(props);
> {code}






[jira] [Updated] (HADOOP-14636) TestKDiag failing intermittently on Jenkins/Yetus at login from keytab

2017-07-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14636:

Environment: 
{code}

user.name = "jenkins"
java.version = "1.8.0_131"
java.security.krb5.conf = 
"/testptch/hadoop/hadoop-common-project/hadoop-common/target/1499472499650/krb5.conf"
kdc.resource.dir = "src/test/resources/kdc"

hadoop.kerberos.kinit.command = "kinit"
hadoop.security.authentication = "KERBEROS"
hadoop.security.authorization = "false"
hadoop.kerberos.min.seconds.before.relogin = "60"
hadoop.security.dns.interface = "(unset)"
hadoop.security.dns.nameserver = "(unset)"
hadoop.rpc.protection = "authentication"
hadoop.security.saslproperties.resolver.class = "(unset)"
hadoop.security.crypto.codec.classes = "(unset)"
hadoop.security.group.mapping = 
"org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback"
hadoop.security.impersonation.provider.class = "(unset)"
dfs.data.transfer.protection = "(unset)"
dfs.data.transfer.saslproperties.resolver.class = "(unset)"
2017-07-08 00:08:20,381 WARN  security.KDiag (KDiag.java:execute(365)) - The 
default cluster security is insecure
{code}

> TestKDiag failing intermittently on Jenkins/Yetus at login from keytab
> --
>
> Key: HADOOP-14636
> URL: https://issues.apache.org/jira/browse/HADOOP-14636
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 3.0.0-beta1
> Environment: {code}
> user.name = "jenkins"
> java.version = "1.8.0_131"
> java.security.krb5.conf = 
> "/testptch/hadoop/hadoop-common-project/hadoop-common/target/1499472499650/krb5.conf"
> kdc.resource.dir = "src/test/resources/kdc"
> hadoop.kerberos.kinit.command = "kinit"
> hadoop.security.authentication = "KERBEROS"
> hadoop.security.authorization = "false"
> hadoop.kerberos.min.seconds.before.relogin = "60"
> hadoop.security.dns.interface = "(unset)"
> hadoop.security.dns.nameserver = "(unset)"
> hadoop.rpc.protection = "authentication"
> hadoop.security.saslproperties.resolver.class = "(unset)"
> hadoop.security.crypto.codec.classes = "(unset)"
> hadoop.security.group.mapping = 
> "org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback"
> hadoop.security.impersonation.provider.class = "(unset)"
> dfs.data.transfer.protection = "(unset)"
> dfs.data.transfer.saslproperties.resolver.class = "(unset)"
> 2017-07-08 00:08:20,381 WARN  security.KDiag (KDiag.java:execute(365)) - The 
> default cluster security is insecure
> {code}
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: output.txt
>
>
> The test {{TestKDiag}} is failing intermittently on Yetus builds, 
> {code}
> org.apache.hadoop.security.KerberosAuthException: Login failure for user: 
> f...@example.com from keytab 
> /testptch/hadoop/hadoop-common-project/hadoop-common/target/keytab 
> javax.security.auth.login.LoginException: Unable to obtain password from user
> {code}
> The tests that fail are all trying to log in using a keytab just created, the 
> JVM isn't having any of it.
> Possible causes? I can think of a few to start with
> # keytab generation
> # keytab path parameter wrong
> # JVM isn't doing the login
> # some race condition
> # Host OS
> # Other environment issues (clock, network...)
> There's no recent changes in the kdiag or UGI code.
> The failure is intermittent, not surfacing for me (others?) locally, which 
> which could point at: JVM, host OS, race condition, other env  issues.






[jira] [Commented] (HADOOP-10949) metrics2 sink plugin for Apache Kafka

2017-07-08 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079110#comment-16079110
 ] 

Hongyuan Li commented on HADOOP-10949:
--

I filed HADOOP-14623 to update this module.

> metrics2 sink plugin for Apache Kafka
> -
>
> Key: HADOOP-10949
> URL: https://issues.apache.org/jira/browse/HADOOP-10949
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Reporter: Babak Behzad
>Assignee: Babak Behzad
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-10949-1.patch, HADOOP-10949-2.patch, 
> HADOOP-10949-4.patch, HADOOP-10949-5.patch, HADOOP-10949-6-1.patch, 
> HADOOP-10949-6.patch, HADOOP-10949.patch, HADOOP-10949.patch, 
> HADOOP-10949.patch, HADOOP-10949.patch, HADOOP-10949.patch, 
> HADOOP-10949.patch, HADOOP-10949.patch, HADOOP-10949.patch, 
> HADOOP-10949.patch, HADOOP-10949.patch, HADOOP-10949.patch
>
>
> Write a metrics2 sink plugin for Hadoop to send metrics directly to Apache 
> Kafka in addition to the current, Graphite 
> ([Hadoop-9704|https://issues.apache.org/jira/browse/HADOOP-9704]), Ganglia 
> and File sinks.






[jira] [Commented] (HADOOP-14623) fixed some bugs in KafkaSink

2017-07-08 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079108#comment-16079108
 ] 

Hongyuan Li commented on HADOOP-14623:
--

None of the test failures are related to this patch. No checkstyle or findbugs 
warnings. 

> fixed some bugs in KafkaSink 
> -
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch, HADOOP-14623-002.patch
>
>
> {{KafkaSink}}#{{init}}  should set ack to *1* to make sure the message has 
> been written to the broker at least.
> current code list below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}
> *Update*
> find another bug about this class, {{key.serializer}} used 
> {{org.apache.kafka.common.serialization.ByteArraySerializer}}, however, the 
> key properties of Producer is Integer, codes list below:
> {code}
> props.put("key.serializer",
> "org.apache.kafka.common.serialization.ByteArraySerializer");
> …
>  producer = new KafkaProducer(props);
> {code}






[jira] [Commented] (HADOOP-14636) TestKDiag failing intermittently on Jenkins/Yetus at login from keytab

2017-07-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079107#comment-16079107
 ] 

Steve Loughran commented on HADOOP-14636:
-

HADOOP-14426 updated kerby from 1.0.0-RC2 to 1.0.0; it went in on 25-05-2017, with 
no other obvious changes in the process of generating users in keytabs. Maybe kdiag 
could add a test that reads the keytab back in and verifies that it was generated OK. 
Otherwise, there are no recent changes which appear to go near the KDC code, and, with 
it only failing in some cases, this is clearly some kind of execution environment 
problem.

> TestKDiag failing intermittently on Jenkins/Yetus at login from keytab
> --
>
> Key: HADOOP-14636
> URL: https://issues.apache.org/jira/browse/HADOOP-14636
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: output.txt
>
>
> The test {{TestKDiag}} is failing intermittently on Yetus builds, 
> {code}
> org.apache.hadoop.security.KerberosAuthException: Login failure for user: 
> f...@example.com from keytab 
> /testptch/hadoop/hadoop-common-project/hadoop-common/target/keytab 
> javax.security.auth.login.LoginException: Unable to obtain password from user
> {code}
> The tests that fail are all trying to log in using a keytab just created, the 
> JVM isn't having any of it.
> Possible causes? I can think of a few to start with
> # keytab generation
> # keytab path parameter wrong
> # JVM isn't doing the login
> # some race condition
> # Host OS
> # Other environment issues (clock, network...)
> There's no recent changes in the kdiag or UGI code.
> The failure is intermittent, not surfacing for me (others?) locally, which 
> which could point at: JVM, host OS, race condition, other env  issues.






[jira] [Updated] (HADOOP-14636) TestKDiag failing intermittently on Jenkins/Yetus at login from keytab

2017-07-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14636:

Attachment: output.txt

Attached: full test output.

Looks like there isn't an entry for f...@example.com in the keytab, which would of 
course explain why the keytab login failed.

(Side issue: {{java.library.path}} has an unexpanded env var in it; assumed 
unrelated, but possibly of interest to [~aw].)

{code}
java.library.path = 
"${env.LD_LIBRARY_PATH}:/testptch/hadoop/hadoop-common-project/hadoop-common/target/native/target/usr/local/lib:/testptch/hadoop/hadoop-common-project/hadoop-common/../../hadoop-common-project/hadoop-common/target/native/target/usr/local/lib:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib"
{code}

> TestKDiag failing intermittently on Jenkins/Yetus at login from keytab
> --
>
> Key: HADOOP-14636
> URL: https://issues.apache.org/jira/browse/HADOOP-14636
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: output.txt
>
>
> The test {{TestKDiag}} is failing intermittently on Yetus builds, 
> {code}
> org.apache.hadoop.security.KerberosAuthException: Login failure for user: 
> f...@example.com from keytab 
> /testptch/hadoop/hadoop-common-project/hadoop-common/target/keytab 
> javax.security.auth.login.LoginException: Unable to obtain password from user
> {code}
> The tests that fail are all trying to log in using a keytab just created, the 
> JVM isn't having any of it.
> Possible causes? I can think of a few to start with
> # keytab generation
> # keytab path parameter wrong
> # JVM isn't doing the login
> # some race condition
> # Host OS
> # Other environment issues (clock, network...)
> There's no recent changes in the kdiag or UGI code.
> The failure is intermittent, not surfacing for me (others?) locally, which 
> which could point at: JVM, host OS, race condition, other env  issues.






[jira] [Commented] (HADOOP-14636) TestKDiag failing intermittently on Jenkins/Yetus at login from keytab

2017-07-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079103#comment-16079103
 ] 

Steve Loughran commented on HADOOP-14636:
-

Stack trace below. Jenkins says it has been failing since HADOOP-14563, which 
doesn't appear to go near this code.

{code}
org.apache.hadoop.security.KerberosAuthException: Login failure for user: 
f...@example.com from keytab 
/testptch/hadoop/hadoop-common-project/hadoop-common/target/keytab 
javax.security.auth.login.LoginException: Unable to obtain password from user

at 
com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:897)
at 
com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:760)
at 
com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
at 
javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at 
javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
at 
org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytabAndReturnUGI(UserGroupInformation.java:1427)
at org.apache.hadoop.security.KDiag.loginFromKeytab(KDiag.java:630)
at org.apache.hadoop.security.KDiag.execute(KDiag.java:396)
at org.apache.hadoop.security.KDiag.run(KDiag.java:236)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.security.KDiag.exec(KDiag.java:1047)
at org.apache.hadoop.security.TestKDiag.kdiag(TestKDiag.java:119)
at 
org.apache.hadoop.security.TestKDiag.testLoadResource(TestKDiag.java:196)

{code}



> TestKDiag failing intermittently on Jenkins/Yetus at login from keytab
> --
>
> Key: HADOOP-14636
> URL: https://issues.apache.org/jira/browse/HADOOP-14636
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Priority: Minor
>
> The test {{TestKDiag}} is failing intermittently on Yetus builds, 
> {code}
> org.apache.hadoop.security.KerberosAuthException: Login failure for user: 
> f...@example.com from keytab 
> /testptch/hadoop/hadoop-common-project/hadoop-common/target/keytab 
> javax.security.auth.login.LoginException: Unable to obtain password from user
> {code}
> The tests that fail are all trying to log in using a keytab just created, the 
> JVM isn't having any of it.
> Possible causes? I can think of a few to start with
> # keytab generation
> # keytab path parameter wrong
> # JVM isn't doing the login
> # some race condition
> # Host OS
> # Other environment issues (clock, network...)
> There's no recent changes in the kdiag or UGI code.
> The failure is intermittent, not surfacing for me (others?) locally, which 
> which could point at: JVM, host OS, race condition, other env  issues.






[jira] [Commented] (HADOOP-14623) fixed some bugs in KafkaSink

2017-07-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079100#comment-16079100
 ] 

Hadoop QA commented on HADOOP-14623:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
55s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 23s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14623 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876212/HADOOP-14623-002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux d940f544df5e 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f484a6f |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Created] (HADOOP-14636) TestKDiag failing intermittently on Jenkins/Yetus at login from keytab

2017-07-08 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14636:
---

 Summary: TestKDiag failing intermittently on Jenkins/Yetus at 
login from keytab
 Key: HADOOP-14636
 URL: https://issues.apache.org/jira/browse/HADOOP-14636
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, test
Affects Versions: 3.0.0-beta1
Reporter: Steve Loughran
Priority: Minor


The test {{TestKDiag}} is failing intermittently on Yetus builds, 

{code}
org.apache.hadoop.security.KerberosAuthException: Login failure for user: 
f...@example.com from keytab 
/testptch/hadoop/hadoop-common-project/hadoop-common/target/keytab 
javax.security.auth.login.LoginException: Unable to obtain password from user
{code}

The tests that fail are all trying to log in using a keytab that was just created; 
the JVM isn't having any of it.

Possible causes? I can think of a few to start with

# keytab generation
# keytab path parameter wrong
# JVM isn't doing the login
# some race condition
# Host OS
# Other environment issues (clock, network...)

There's no recent changes in the kdiag or UGI code.

The failure is intermittent and does not surface for me (others?) locally, which 
could point at: JVM, host OS, a race condition, or other environment issues.






[jira] [Updated] (HADOOP-14634) Remove jline from main Hadoop pom.xml

2017-07-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14634:

   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

+1, committed. Thanks, Ray. We all hate spurious JARs sneaking in; the ones we 
deliberately add are troublesome enough.

> Remove jline from main Hadoop pom.xml
> -
>
> Key: HADOOP-14634
> URL: https://issues.apache.org/jira/browse/HADOOP-14634
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14634.001.patch
>
>
> A long time ago, HADOOP-9342 removed jline from being included in the Hadoop 
> distribution.  Since then, more modules have added Zookeeper, and are pulling 
> in jline again.
> Recommend excluding jline from the main Hadoop pom in order to prevent 
> subsequent additions of Zookeeper dependencies from doing this again.
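
For reference, the usual shape of such an exclusion where the zookeeper dependency 
is declared; a sketch only, the coordinates are assumed and the attached patch may 
differ:

{code}
<!-- Sketch: keep transitive jline out by excluding it from the zookeeper dependency. -->
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
  <exclusions>
    <exclusion>
      <groupId>jline</groupId>
      <artifactId>jline</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}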






[jira] [Commented] (HADOOP-14632) add buffer to SFTPFileSystem#create and SFTPFileSystem#open method, which can improve the transfer speed.

2017-07-08 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079075#comment-16079075
 ] 

Hongyuan Li commented on HADOOP-14632:
--

None of the test failures are related to this patch. No checkstyle or findbugs 
warnings.
Pinging [~ste...@apache.org] and [~brahmareddy] for code review. A performance 
comparison will be submitted soon.


> add buffer to SFTPFileSystem#create and SFTPFileSystem#open method, which can 
> improve the  transfer speed.
> --
>
> Key: HADOOP-14632
> URL: https://issues.apache.org/jira/browse/HADOOP-14632
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14632-001.patch, HADOOP-14632-002.patch, 
> HADOOP-14632-003.patch, HADOOP-14632-004.patch, HADOOP-14632-005.patch
>
>
> add buffersize to SFTPFileSystem#create and SFTPFileSystem#open method, which 
> can improve the  transfer speed.
> Test example shows transfer  performance has improved a lot.
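
To illustrate the general idea only (this is not the attached patch; the class below 
is invented for the sketch): wrapping the raw channel streams in buffered streams 
with a configurable size is the standard way to cut per-read and per-write overhead.

{code}
// Sketch only: buffer an underlying stream (e.g. one returned by an SFTP channel).
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.InputStream;
import java.io.OutputStream;

public final class StreamBuffering {
  public static final int DEFAULT_BUFFER_SIZE = 32 * 1024;

  private StreamBuffering() {
  }

  public static InputStream buffered(InputStream raw, int bufferSize) {
    return new BufferedInputStream(raw, bufferSize > 0 ? bufferSize : DEFAULT_BUFFER_SIZE);
  }

  public static OutputStream buffered(OutputStream raw, int bufferSize) {
    return new BufferedOutputStream(raw, bufferSize > 0 ? bufferSize : DEFAULT_BUFFER_SIZE);
  }
}
{code}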






[jira] [Commented] (HADOOP-14632) add buffer to SFTPFileSystem#create and SFTPFileSystem#open method, which can improve the transfer speed.

2017-07-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079073#comment-16079073
 ] 

Hadoop QA commented on HADOOP-14632:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 11s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14632 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876206/HADOOP-14632-005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d5635c644ddb 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f484a6f |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12741/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12741/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12741/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> add buffer to SFTPFileSystem#create and SFTPFileSystem#open method, which can 
> improve the  transfer speed.
> --
>
> Key: HADOOP-14632
> URL: https://issues.apache.org/jira/browse/HADOOP-14632
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>

[jira] [Comment Edited] (HADOOP-14623) fixed some bugs in KafkaSink

2017-07-08 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079033#comment-16079033
 ] 

Hongyuan Li edited comment on HADOOP-14623 at 7/8/17 9:20 AM:
--

I highly recommend four steps:
1. Use {{acks}} = {{1}}.
2. Add the {{https://repository.apache.org/content/repositories/releases}} repo; 
the {{apache snapshot repo}} does not have a recent enough kafka module (the 
version there is older than {{0.8.2}}).
3. Update the kafka client version to at least {{0.10.1.0}}, which has an 
IntegerSerializer class, if KafkaSink wants to create a kafka producer whose key 
type is Integer.
4. Use ProducerConfig.XXX instead of string values directly. For example, use 
{{ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG}} instead of {{key.serializer}}.

Thanks for any advice. The latest patch implements all of the above.
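
To make points 1, 3 and 4 concrete, a minimal sketch of building such a producer; 
the broker address, topic and payload are placeholders, and this is illustrative, 
not the attached patch:

{code}
// Sketch only: acks=1, ProducerConfig constants, Integer key serializer.
import java.nio.charset.StandardCharsets;
import java.util.Properties;
import java.util.concurrent.Future;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.common.serialization.IntegerSerializer;

public class ProducerSketch {
  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
    props.put(ProducerConfig.ACKS_CONFIG, "1"); // wait for the leader, not "0"
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class.getName());
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());

    try (KafkaProducer<Integer, byte[]> producer = new KafkaProducer<>(props)) {
      Future<RecordMetadata> future = producer.send(
          new ProducerRecord<>("metrics", 1, "payload".getBytes(StandardCharsets.UTF_8)));
      future.get(); // block until the broker has acknowledged the record
    }
  }
}
{code}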


was (Author: hongyuan li):
I highly recommend Four steps:
1、should use {{acks}} = {{1}}.
2、add  {{https://repository.apache.org/content/repositories/releases}} repo,  
the {{apache snapshot rep}} doesnot have a higher version kafka module, the 
version of which is less than {{0.8.2}}
3、update kafka client version to at least {{0.10.1.0}},  which has a 
IntegerSerializer class If kafka sink want to generate a kafka producer with 
the the type of key being Integer.
4、 Use ProducerConfig.XXX instead of using string value  directly. For example, 
use 
{{ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG}} instead of {{key.serializer}}


 Thanks for any advice.

> fixed some bugs in KafkaSink 
> -
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch, HADOOP-14623-002.patch
>
>
> {{KafkaSink}}#{{init}} should set acks to *1* so that each message is
> acknowledged as written by at least the leader broker.
> The current code is listed below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}
> *Update*
> Found another bug in this class: {{key.serializer}} is set to
> {{org.apache.kafka.common.serialization.ByteArraySerializer}}, yet the
> producer's key type is Integer. The code is listed below:
> {code}
> props.put("key.serializer",
> "org.apache.kafka.common.serialization.ByteArraySerializer");
> …
>  producer = new KafkaProducer(props);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14623) fixed some bugs in KafkaSink

2017-07-08 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14623:
-
Attachment: HADOOP-14623-002.patch

> fixed some bugs in KafkaSink 
> -
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch, HADOOP-14623-002.patch
>
>
> {{KafkaSink}}#{{init}} should set acks to *1* so that each message is
> acknowledged as written by at least the leader broker.
> The current code is listed below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}
> *Update*
> Found another bug in this class: {{key.serializer}} is set to
> {{org.apache.kafka.common.serialization.ByteArraySerializer}}, yet the
> producer's key type is Integer. The code is listed below:
> {code}
> props.put("key.serializer",
> "org.apache.kafka.common.serialization.ByteArraySerializer");
> …
>  producer = new KafkaProducer(props);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14623) fixed some bugs in KafkaSink

2017-07-08 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079033#comment-16079033
 ] 

Hongyuan Li edited comment on HADOOP-14623 at 7/8/17 9:15 AM:
--

I highly recommend four steps:
1. Use {{acks}} = {{1}}.
2. Add the {{https://repository.apache.org/content/repositories/releases}} repo;
the Apache snapshot repo does not carry a Kafka client module newer than
{{0.8.2}}.
3. Update the Kafka client version to at least {{0.10.1.0}}, which provides an
IntegerSerializer class, needed if KafkaSink is to build a producer whose key
type is Integer.
4. Use the ProducerConfig constants instead of raw string values, for example
{{ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG}} instead of {{key.serializer}}.

Thanks for any advice.


was (Author: hongyuan li):
I highly recommend five steps:
1. Use {{acks}} = {{1}}.
2. Add the {{https://repository.apache.org/content/repositories/releases}} repo;
the Apache snapshot repo does not carry a Kafka client module newer than
{{0.8.2}}.
3. Update the Kafka client version to at least {{0.10.1.0}}, which provides an
IntegerSerializer class, needed if KafkaSink is to build a producer whose key
type is Integer.
4. Add a {{callback}} when calling the new {{KafkaProducer}}#{{send}}.
5. Use the ProducerConfig constants instead of raw string values, for example
{{ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG}} instead of {{key.serializer}}.

Thanks for any advice.

> fixed some bugs in KafkaSink 
> -
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}} should set acks to *1* so that each message is
> acknowledged as written by at least the leader broker.
> The current code is listed below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}
> *Update*
> Found another bug in this class: {{key.serializer}} is set to
> {{org.apache.kafka.common.serialization.ByteArraySerializer}}, yet the
> producer's key type is Integer. The code is listed below:
> {code}
> props.put("key.serializer",
> "org.apache.kafka.common.serialization.ByteArraySerializer");
> …
>  producer = new KafkaProducer(props);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14623) fixed some bugs in KafkaSink

2017-07-08 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079033#comment-16079033
 ] 

Hongyuan Li edited comment on HADOOP-14623 at 7/8/17 8:51 AM:
--

I highly recommend five steps:
1. Use {{acks}} = {{1}}.
2. Add the {{https://repository.apache.org/content/repositories/releases}} repo;
the Apache snapshot repo does not carry a Kafka client module newer than
{{0.8.2}}.
3. Update the Kafka client version to at least {{0.10.1.0}}, which provides an
IntegerSerializer class, needed if KafkaSink is to build a producer whose key
type is Integer.
4. Add a {{callback}} when calling the new {{KafkaProducer}}#{{send}} (see the
sketch below).
5. Use the ProducerConfig constants instead of raw string values, for example
{{ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG}} instead of {{key.serializer}}.

Thanks for any advice.
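
As a rough illustration of step 4 only; the topic name, record types and error
handling are assumptions, not KafkaSink's actual code:

{code}
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

// Hypothetical helper; "metrics" is an assumed topic name.
public final class CallbackSendSketch {
  public static void send(KafkaProducer<Integer, byte[]> producer,
                          Integer key, byte[] payload) {
    producer.send(new ProducerRecord<>("metrics", key, payload), new Callback() {
      @Override
      public void onCompletion(RecordMetadata metadata, Exception exception) {
        // Surface asynchronous send failures instead of silently dropping them.
        if (exception != null) {
          exception.printStackTrace();
        }
      }
    });
  }
}
{code}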


was (Author: hongyuan li):
I highly recommend four points:
1. Use {{acks}} = {{1}}.
2. Add the {{https://repository.apache.org/content/repositories/releases}} repo;
the Apache snapshot repo does not carry a Kafka client module newer than
{{0.8.2}}.
3. Update the Kafka client version to at least {{0.10.1.0}}, which provides an
IntegerSerializer class, needed if KafkaSink is to build a producer whose key
type is Integer.
4. Add a {{callback}} when calling the new {{KafkaProducer}}#{{send}}.

Thanks for any advice.

> fixed some bugs in KafkaSink 
> -
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}} should set acks to *1* so that each message is
> acknowledged as written by at least the leader broker.
> The current code is listed below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}
> *Update*
> Found another bug in this class: {{key.serializer}} is set to
> {{org.apache.kafka.common.serialization.ByteArraySerializer}}, yet the
> producer's key type is Integer. The code is listed below:
> {code}
> props.put("key.serializer",
> "org.apache.kafka.common.serialization.ByteArraySerializer");
> …
>  producer = new KafkaProducer(props);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14623) fixed some bugs in KafkaSink

2017-07-08 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079033#comment-16079033
 ] 

Hongyuan Li edited comment on HADOOP-14623 at 7/8/17 8:46 AM:
--

I highly recommend four points:
1. Use {{acks}} = {{1}}.
2. Add the {{https://repository.apache.org/content/repositories/releases}} repo;
the Apache snapshot repo does not carry a Kafka client module newer than
{{0.8.2}}.
3. Update the Kafka client version to at least {{0.10.1.0}}, which provides an
IntegerSerializer class, needed if KafkaSink is to build a producer whose key
type is Integer.
4. Add a {{callback}} when calling the new {{KafkaProducer}}#{{send}}.

Thanks for any advice.


was (Author: hongyuan li):
I highly recommend four points:
1. Use {{acks}} = {{1}}.
2. Add the {{https://repository.apache.org/content/repositories/releases}} repo;
the Apache snapshot repo does not carry a Kafka client module newer than
{{0.8.2}}.
3. Update the Kafka client version to at least {{0.10.1.0}}, which provides an
IntegerSerializer class, needed if KafkaSink is to build a producer whose key
type is Integer.
4. Add a {{callback}} when calling the new {{KafkaProducer}}#{{send}}.

Thanks for any advice.

> fixed some bugs in KafkaSink 
> -
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}} should set acks to *1* so that each message is
> acknowledged as written by at least the leader broker.
> The current code is listed below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}
> *Update*
> Found another bug in this class: {{key.serializer}} is set to
> {{org.apache.kafka.common.serialization.ByteArraySerializer}}, yet the
> producer's key type is Integer. The code is listed below:
> {code}
> props.put("key.serializer",
> "org.apache.kafka.common.serialization.ByteArraySerializer");
> …
>  producer = new KafkaProducer(props);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14632) add buffer to SFTPFileSystem#create and SFTPFileSystem#open method, which can improve the transfer speed.

2017-07-08 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14632:
-
Attachment: HADOOP-14632-005.patch

Fix the JUnit test failure.

> add buffer to SFTPFileSystem#create and SFTPFileSystem#open method, which can 
> improve the  transfer speed.
> --
>
> Key: HADOOP-14632
> URL: https://issues.apache.org/jira/browse/HADOOP-14632
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14632-001.patch, HADOOP-14632-002.patch, 
> HADOOP-14632-003.patch, HADOOP-14632-004.patch, HADOOP-14632-005.patch
>
>
> Add a buffer size to the SFTPFileSystem#create and SFTPFileSystem#open
> methods, which can improve transfer speed.
> A test example shows that transfer performance improved considerably.
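
For context, the buffering amounts to wrapping the SFTP channel streams. A
minimal sketch along those lines follows; the JSch channel handling and the
buffer-size constant are assumptions for illustration, not the patch itself:

{code}
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.InputStream;
import java.io.OutputStream;

import com.jcraft.jsch.ChannelSftp;
import com.jcraft.jsch.SftpException;

// Hypothetical wrapper; the real SFTPFileSystem also manages a connection pool.
public final class BufferedSftpStreams {
  private static final int BUFFER_SIZE = 32 * 1024; // assumed buffer size

  static InputStream openForRead(ChannelSftp channel, String path)
      throws SftpException {
    // Buffer reads so callers do not issue many tiny reads against the channel.
    return new BufferedInputStream(channel.get(path), BUFFER_SIZE);
  }

  static OutputStream openForWrite(ChannelSftp channel, String path)
      throws SftpException {
    // Buffer writes so data reaches the channel in larger chunks.
    return new BufferedOutputStream(channel.put(path), BUFFER_SIZE);
  }
}
{code}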



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common

2017-07-08 Thread Wenxin He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenxin He updated HADOOP-14587:
---
Attachment: HADOOP-14587-branch-2.001.patch

The patch [^HADOOP-24587-branch-2.001.patch] has the wrong JIRA issue number in
its file name. Uploading a correctly named one: [^HADOOP-14587-branch-2.001.patch]

> Use GenericTestUtils.setLogLevel when available in hadoop-common
> 
>
> Key: HADOOP-14587
> URL: https://issues.apache.org/jira/browse/HADOOP-14587
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14587.001.patch, HADOOP-14587.002.patch, 
> HADOOP-14587.003.patch, HADOOP-14587.004.patch, HADOOP-14587.005.patch, 
> HADOOP-14587.006.patch, HADOOP-14587.007.patch, HADOOP-14587.008.patch, 
> HADOOP-14587-branch-2.001.patch, HADOOP-24587-branch-2.001.patch
>
>
> Based on Brahma's comment in HADOOP-14296, it's better to use
> GenericTestUtils.setLogLevel wherever possible to make the migration easier.
> Based on Akira Ajisaka's comment in HADOOP-14549, create a separate JIRA for
> the hadoop-common change.
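
As an illustration of the kind of call site this covers, a minimal sketch
follows; it assumes the log4j-based {{GenericTestUtils.setLogLevel(Logger,
Level)}} overload in the hadoop-common test utilities, and the logger name is
arbitrary:

{code}
import org.apache.hadoop.test.GenericTestUtils;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

// Hypothetical test snippet: route log-level changes through the shared helper
// instead of calling Logger#setLevel directly, so the call site stays
// insulated from the commons-logging/slf4j migration.
public final class SetLogLevelSketch {
  public static void main(String[] args) {
    GenericTestUtils.setLogLevel(
        Logger.getLogger("org.apache.hadoop.fs.FileSystem"), Level.DEBUG);
  }
}
{code}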



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14632) add buffer to SFTPFileSystem#create and SFTPFileSystem#open method, which can improve the transfer speed.

2017-07-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079047#comment-16079047
 ] 

Hadoop QA commented on HADOOP-14632:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 54s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
|   | hadoop.fs.sftp.TestSFTPFileSystem |
|   | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14632 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876189/HADOOP-14632-004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f877397ad09a 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f484a6f |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12740/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12740/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12740/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> add buffer to SFTPFileSystem#create and SFTPFileSystem#open method, which can 
> improve the  transfer speed.
> --
>
> Key: HADOOP-14632
> URL: https://issues.apache.org/jira/browse/HADOOP-14632
> Project: Hadoop Common
> 

[jira] [Comment Edited] (HADOOP-14623) fixed some bugs in KafkaSink

2017-07-08 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079033#comment-16079033
 ] 

Hongyuan Li edited comment on HADOOP-14623 at 7/8/17 8:24 AM:
--

I highly recommend four points:
1. Use {{acks}} = {{1}}.
2. Add the {{https://repository.apache.org/content/repositories/releases}} repo;
the Apache snapshot repo does not carry a Kafka client module newer than
{{0.8.2}}.
3. Update the Kafka client version to at least {{0.10.1.0}}, which provides an
IntegerSerializer class, needed if KafkaSink is to build a producer whose key
type is Integer.
4. Add a {{callback}} when calling the new {{KafkaProducer}}#{{send}}.

Thanks for any advice.


was (Author: hongyuan li):
I highly recommend three points:
1. Use {{acks}} = {{1}}.
2. Update the Kafka client version to at least {{0.10.1.0}}, which provides an
IntegerSerializer class, needed if KafkaSink is to build a producer whose key
type is Integer; what blocks this is that the Apache snapshot repo does not
carry a Kafka client newer than {{0.8.2}}.
3. Add a {{callback}} when calling the new {{KafkaProducer}}#{{send}}.

Thanks for any advice.

> fixed some bugs in KafkaSink 
> -
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}} should set acks to *1* so that each message is
> acknowledged as written by at least the leader broker.
> The current code is listed below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}
> *Update*
> Found another bug in this class: {{key.serializer}} is set to
> {{org.apache.kafka.common.serialization.ByteArraySerializer}}, yet the
> producer's key type is Integer. The code is listed below:
> {code}
> props.put("key.serializer",
> "org.apache.kafka.common.serialization.ByteArraySerializer");
> …
>  producer = new KafkaProducer(props);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14623) fixed some bugs in KafkaSink

2017-07-08 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079033#comment-16079033
 ] 

Hongyuan Li edited comment on HADOOP-14623 at 7/8/17 8:20 AM:
--

I highly recommend three points:
1. Use {{acks}} = {{1}}.
2. Update the Kafka client version to at least {{0.10.1.0}}, which provides an
IntegerSerializer class, needed if KafkaSink is to build a producer whose key
type is Integer; what blocks this is that the Apache snapshot repo does not
carry a Kafka client newer than {{0.8.2}}.
3. Add a {{callback}} when calling the new {{KafkaProducer}}#{{send}}.

Thanks for any advice.


was (Author: hongyuan li):
I highly recommend three points:
1. Use {{acks}} = {{1}}.
2. Update the Kafka client version to at least {{0.10.1.0}}, which provides an
IntegerSerializer class, needed if KafkaSink is to build a producer whose key
type is Integer.
The last patch fixes the first two; if you don't think so, feel free to close
the JIRA. Thanks for any advice.
3. Add a {{callback}} when calling the new {{KafkaProducer}}#{{send}}.

> fixed some bugs in KafkaSink 
> -
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}} should set acks to *1* so that each message is
> acknowledged as written by at least the leader broker.
> The current code is listed below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}
> *Update*
> Found another bug in this class: {{key.serializer}} is set to
> {{org.apache.kafka.common.serialization.ByteArraySerializer}}, yet the
> producer's key type is Integer. The code is listed below:
> {code}
> props.put("key.serializer",
> "org.apache.kafka.common.serialization.ByteArraySerializer");
> …
>  producer = new KafkaProducer(props);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14623) fixed some bugs in KafkaSink

2017-07-08 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079033#comment-16079033
 ] 

Hongyuan Li edited comment on HADOOP-14623 at 7/8/17 8:11 AM:
--

I highly recommend three points:
1. Use {{acks}} = {{1}}.
2. Update the Kafka client version to at least {{0.10.1.0}}, which provides an
IntegerSerializer class, needed if KafkaSink is to build a producer whose key
type is Integer.
The last patch fixes the first two; if you don't think so, feel free to close
the JIRA. Thanks for any advice.
3. Add a {{callback}} when calling the new {{KafkaProducer}}#{{send}}.


was (Author: hongyuan li):
I highly recommend two points:
1. Use {{acks}} = {{1}}.
2. Update the Kafka client version to {{0.10.1.0}}, which provides an
IntegerSerializer class, needed if KafkaSink is to build a producer whose key
type is Integer.
The last patch fixes both; if you don't think so, feel free to close the JIRA.
Thanks for any advice.

> fixed some bugs in KafkaSink 
> -
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}} should set acks to *1* so that each message is
> acknowledged as written by at least the leader broker.
> The current code is listed below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}
> *Update*
> Found another bug in this class: {{key.serializer}} is set to
> {{org.apache.kafka.common.serialization.ByteArraySerializer}}, yet the
> producer's key type is Integer. The code is listed below:
> {code}
> props.put("key.serializer",
> "org.apache.kafka.common.serialization.ByteArraySerializer");
> …
>  producer = new KafkaProducer(props);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14635) Javadoc correction for AccessControlList#buildACL

2017-07-08 Thread Bibin A Chundatt (JIRA)
Bibin A Chundatt created HADOOP-14635:
-

 Summary: Javadoc correction for AccessControlList#buildACL
 Key: HADOOP-14635
 URL: https://issues.apache.org/jira/browse/HADOOP-14635
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bibin A Chundatt
Priority: Minor


The Javadoc of {{AccessControlList#buildACL}} does not match the method: it
mentions "two Strings" and a parameter named {{aclString}}, while the method
actually takes a single {{String[] userGroupStrings}} parameter:
{code}
  /**
   * Build ACL from the given two Strings.
   * The Strings contain comma separated values.
   *
   * @param aclString build ACL from array of Strings
   */
  private void buildACL(String[] userGroupStrings) {

{code}
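
A corrected Javadoc might read along the following lines; this is only a
sketch, and the exact wording is an assumption rather than the eventual patch:

{code}
  /**
   * Build ACL from the given array of Strings.
   * Each String contains comma-separated user or group names.
   *
   * @param userGroupStrings array of Strings to build the ACL from
   */
  private void buildACL(String[] userGroupStrings) {
{code}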



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14623) fixed some bugs in KafkaSink

2017-07-08 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079033#comment-16079033
 ] 

Hongyuan Li edited comment on HADOOP-14623 at 7/8/17 8:06 AM:
--

I highly recommend two points:
1. Use {{acks}} = {{1}}.
2. Update the Kafka client version to {{0.10.1.0}}, which provides an
IntegerSerializer class, needed if KafkaSink is to build a producer whose key
type is Integer.
The last patch fixes both; if you don't think so, feel free to close the JIRA.
Thanks for any advice.


was (Author: hongyuan li):
I highly recommend two points:
1. Use acks = 1.
2. Update the Kafka client version to 0.10.1, which provides an
IntegerSerializer class, needed if KafkaSink is to build a producer whose key
type is Integer.
The last patch fixes both; if you don't think so, feel free to close the JIRA.
Thanks.

> fixed some bugs in KafkaSink 
> -
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}} should set acks to *1* so that each message is
> acknowledged as written by at least the leader broker.
> The current code is listed below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}
> *Update*
> Found another bug in this class: {{key.serializer}} is set to
> {{org.apache.kafka.common.serialization.ByteArraySerializer}}, yet the
> producer's key type is Integer. The code is listed below:
> {code}
> props.put("key.serializer",
> "org.apache.kafka.common.serialization.ByteArraySerializer");
> …
>  producer = new KafkaProducer(props);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14623) fixed some bugs in KafkaSink

2017-07-08 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079033#comment-16079033
 ] 

Hongyuan Li commented on HADOOP-14623:
--

I highly recommend two points:
1. Use acks = 1.
2. Update the Kafka client version to 0.10.1, which provides an
IntegerSerializer class, needed if KafkaSink is to build a producer whose key
type is Integer.
The last patch fixes both; if you don't think so, feel free to close the JIRA.
Thanks.

> fixed some bugs in KafkaSink 
> -
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}} should set acks to *1* so that each message is
> acknowledged as written by at least the leader broker.
> The current code is listed below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}
> *Update*
> Found another bug in this class: {{key.serializer}} is set to
> {{org.apache.kafka.common.serialization.ByteArraySerializer}}, yet the
> producer's key type is Integer. The code is listed below:
> {code}
> props.put("key.serializer",
> "org.apache.kafka.common.serialization.ByteArraySerializer");
> …
>  producer = new KafkaProducer(props);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14623) fixed some bugs in KafkaSink

2017-07-08 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14623:
-
Summary: fixed some bugs in KafkaSink   (was: KafkaSink#init should set 
acks to 1,not 0 and key.serializer is wrong however key not used)

> fixed some bugs in KafkaSink 
> -
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}} should set acks to *1* so that each message is
> acknowledged as written by at least the leader broker.
> The current code is listed below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}
> *Update*
> Found another bug in this class: {{key.serializer}} is set to
> {{org.apache.kafka.common.serialization.ByteArraySerializer}}, yet the
> producer's key type is Integer. The code is listed below:
> {code}
> props.put("key.serializer",
> "org.apache.kafka.common.serialization.ByteArraySerializer");
> …
>  producer = new KafkaProducer(props);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14623) KafkaSink#init should set acks to 1,not 0 and key.serializer is wrong however key not used

2017-07-08 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14623:
-
Description: 
{{KafkaSink}}#{{init}} should set acks to *1* so that each message is
acknowledged as written by at least the leader broker.

The current code is listed below:

{code}
  
props.put("request.required.acks", "0");

{code}

*Update*

Found another bug in this class: {{key.serializer}} is set to
{{org.apache.kafka.common.serialization.ByteArraySerializer}}, yet the
producer's key type is Integer. The code is listed below:
{code}
props.put("key.serializer",
"org.apache.kafka.common.serialization.ByteArraySerializer");
…
 producer = new KafkaProducer(props);
{code}

  was:
{{KafkaSink}}#{{init}} should set acks to *1* so that each message is
acknowledged as written by at least the leader broker.

The current code is listed below:

{code}
  
props.put("request.required.acks", "0");

{code}

*Update*

Found another bug in this class: key.serializer is set to
{{org.apache.kafka.common.serialization.ByteArraySerializer}}.


> KafkaSink#init should set acks to 1,not 0 and key.serializer is wrong however 
> key not used
> --
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}} should set acks to *1* so that each message is
> acknowledged as written by at least the leader broker.
> The current code is listed below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}
> *Update*
> Found another bug in this class: {{key.serializer}} is set to
> {{org.apache.kafka.common.serialization.ByteArraySerializer}}, yet the
> producer's key type is Integer. The code is listed below:
> {code}
> props.put("key.serializer",
> "org.apache.kafka.common.serialization.ByteArraySerializer");
> …
>  producer = new KafkaProducer(props);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14623) KafkaSink#init should set acks to 1,not 0 and key.serializer is wrong however key not used

2017-07-08 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14623:
-
Description: 
{{KafkaSink}}#{{init}} should set acks to *1* so that each message is
acknowledged as written by at least the leader broker.

The current code is listed below:

{code}
  
props.put("request.required.acks", "0");

{code}

*Update*

Found another bug in this class: key.serializer is set to
{{org.apache.kafka.common.serialization.ByteArraySerializer}}.

  was:
{{KafkaSink}}#{{init}} should set acks to *1* so that each message is
acknowledged as written by at least the leader broker.

The current code is listed below:

{code}
  
props.put("request.required.acks", "0");

{code}


> KafkaSink#init should set acks to 1,not 0 and key.serializer is wrong however 
> key not used
> --
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}} should set acks to *1* so that each message is
> acknowledged as written by at least the leader broker.
> The current code is listed below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}
> *Update*
> Found another bug in this class: key.serializer is set to
> {{org.apache.kafka.common.serialization.ByteArraySerializer}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14623) KafkaSink#init should set acks to 1,not 0 and key.serializer is wrong however key not used

2017-07-08 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14623:
-
Summary: KafkaSink#init should set acks to 1,not 0 and key.serializer is 
wrong however key not used  (was: KafkaSink#init should set acks to 1,not 0)

> KafkaSink#init should set acks to 1,not 0 and key.serializer is wrong however 
> key not used
> --
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}} should set acks to *1* so that each message is
> acknowledged as written by at least the leader broker.
> The current code is listed below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14567) DistCP NullPointerException when -atomic is set but -tmp is not

2017-07-08 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16071185#comment-16071185
 ] 

Hongyuan Li edited comment on HADOOP-14567 at 7/8/17 7:35 AM:
--

I think we should add a default temporary work path for DistCp. [~yzhangal]

I filed a new JIRA, HADOOP-14631, to state this explicitly.


was (Author: hongyuan li):
I think we should add a default temporary work path for DistCp. [~yzhangal]

> DistCP NullPointerException when -atomic is set but -tmp is not
> ---
>
> Key: HADOOP-14567
> URL: https://issues.apache.org/jira/browse/HADOOP-14567
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.7.3
> Environment: HDP 2.5.0 kerberized cluster -> HDP 2.6.0 kerberized 
> cluster
>Reporter: Hari Sekhon
>Assignee: Hongyuan Li
>Priority: Minor
>
> When running distcp with -atomic but without specifying -tmp, the following
> NullPointerException is encountered; removing -atomic avoids the bug:
> {code}
> 17/06/21 16:50:59 ERROR tools.DistCp: Exception encountered
> java.lang.NullPointerException
> at org.apache.hadoop.fs.Path.<init>(Path.java:104)
> at org.apache.hadoop.fs.Path.<init>(Path.java:93)
> at 
> org.apache.hadoop.tools.DistCp.configureOutputFormat(DistCp.java:363)
> at org.apache.hadoop.tools.DistCp.createJob(DistCp.java:247)
> at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:176)
> at org.apache.hadoop.tools.DistCp.execute(DistCp.java:155)
> at org.apache.hadoop.tools.DistCp.run(DistCp.java:128)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:462)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14631) Distcp should add a default atomicWorkPath properties when using atomic

2017-07-08 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14631:
-
Fix Version/s: 2.7.3

> Distcp should add a default  atomicWorkPath properties when using atomic
> 
>
> Key: HADOOP-14631
> URL: https://issues.apache.org/jira/browse/HADOOP-14631
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Fix For: 2.7.3
>
>
> DistCp should add a default atomicWorkPath property when -atomic is used.
> {{Distcp}}#{{configureOutputFormat}} uses the code below to generate the
> atomic work path:
> {code}
> if (context.shouldAtomicCommit()) {
>   Path workDir = context.getAtomicWorkPath();
>   if (workDir == null) {
> workDir = targetPath.getParent();
>   }
>   workDir = new Path(workDir, WIP_PREFIX + targetPath.getName()
> + rand.nextInt());
> {code}
> When -atomic is set and {{AtomicWorkPath}} == null, DistCp falls back to the
> parent of the target path. If the target path is {{"/"}}, that parent is
> {{null}}, which means
> {{workDir = new Path(workDir, WIP_PREFIX + targetPath.getName() + 
> rand.nextInt());}} will throw a NullPointerException. A possible guard is
> sketched below.
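
A minimal sketch of such a guard, reusing the names from the snippet above
({{context}}, {{targetPath}}, {{WIP_PREFIX}}, {{rand}}); the fallback location
is an assumption for illustration, not the actual fix:

{code}
if (context.shouldAtomicCommit()) {
  Path workDir = context.getAtomicWorkPath();
  if (workDir == null) {
    workDir = targetPath.getParent();
    if (workDir == null) {
      // targetPath is "/", which has no parent; fall back to a scratch dir
      // instead of letting new Path(null, ...) throw a NullPointerException.
      workDir = new Path("/tmp");
    }
  }
  workDir = new Path(workDir, WIP_PREFIX + targetPath.getName()
      + rand.nextInt());
}
{code}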



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14567) DistCP NullPointerException when -atomic is set but -tmp is not

2017-07-08 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li reassigned HADOOP-14567:


Assignee: Hongyuan Li

> DistCP NullPointerException when -atomic is set but -tmp is not
> ---
>
> Key: HADOOP-14567
> URL: https://issues.apache.org/jira/browse/HADOOP-14567
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.7.3
> Environment: HDP 2.5.0 kerberized cluster -> HDP 2.6.0 kerberized 
> cluster
>Reporter: Hari Sekhon
>Assignee: Hongyuan Li
>Priority: Minor
>
> When running distcp with -atomic but without specifying -tmp, the following
> NullPointerException is encountered; removing -atomic avoids the bug:
> {code}
> 17/06/21 16:50:59 ERROR tools.DistCp: Exception encountered
> java.lang.NullPointerException
> at org.apache.hadoop.fs.Path.<init>(Path.java:104)
> at org.apache.hadoop.fs.Path.<init>(Path.java:93)
> at 
> org.apache.hadoop.tools.DistCp.configureOutputFormat(DistCp.java:363)
> at org.apache.hadoop.tools.DistCp.createJob(DistCp.java:247)
> at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:176)
> at org.apache.hadoop.tools.DistCp.execute(DistCp.java:155)
> at org.apache.hadoop.tools.DistCp.run(DistCp.java:128)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:462)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14632) add buffer to SFTPFileSystem#create and SFTPFileSystem#open method, which can improve the transfer speed.

2017-07-08 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079022#comment-16079022
 ] 

Hongyuan Li commented on HADOOP-14632:
--

Attached a new patch.

> add buffer to SFTPFileSystem#create and SFTPFileSystem#open method, which can 
> improve the  transfer speed.
> --
>
> Key: HADOOP-14632
> URL: https://issues.apache.org/jira/browse/HADOOP-14632
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14632-001.patch, HADOOP-14632-002.patch, 
> HADOOP-14632-003.patch, HADOOP-14632-004.patch
>
>
> Add a buffer size to the SFTPFileSystem#create and SFTPFileSystem#open
> methods, which can improve transfer speed.
> A test example shows that transfer performance improved considerably.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14632) add buffer to SFTPFileSystem#create and SFTPFileSystem#open method, which can improve the transfer speed.

2017-07-08 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14632:
-
Attachment: HADOOP-14632-004.patch

> add buffer to SFTPFileSystem#create and SFTPFileSystem#open method, which can 
> improve the  transfer speed.
> --
>
> Key: HADOOP-14632
> URL: https://issues.apache.org/jira/browse/HADOOP-14632
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14632-001.patch, HADOOP-14632-002.patch, 
> HADOOP-14632-003.patch, HADOOP-14632-004.patch
>
>
> Add a buffer size to the SFTPFileSystem#create and SFTPFileSystem#open
> methods, which can improve transfer speed.
> A test example shows that transfer performance improved considerably.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14632) add buffer to SFTPFileSystem#create and SFTPFileSystem#open method, which can improve the transfer speed.

2017-07-08 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079020#comment-16079020
 ] 

Hongyuan Li commented on HADOOP-14632:
--

One of the JUnit test failures is related to the patch; I will work on it.

> add buffer to SFTPFileSystem#create and SFTPFileSystem#open method, which can 
> improve the  transfer speed.
> --
>
> Key: HADOOP-14632
> URL: https://issues.apache.org/jira/browse/HADOOP-14632
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14632-001.patch, HADOOP-14632-002.patch, 
> HADOOP-14632-003.patch
>
>
> Add a buffer size to the SFTPFileSystem#create and SFTPFileSystem#open
> methods, which can improve transfer speed.
> A test example shows that transfer performance improved considerably.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14632) add buffer to SFTPFileSystem#create and SFTPFileSystem#open method, which can improve the transfer speed.

2017-07-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079016#comment-16079016
 ] 

Hadoop QA commented on HADOOP-14632:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m  
4s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  2s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
|   | hadoop.fs.sftp.TestSFTPFileSystem |
|   | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14632 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876183/HADOOP-14632-003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 99d876cf5307 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f484a6f |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12739/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12739/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12739/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12739/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> add buffer to SFTPFileSystem#create and SFTPFileSystem#open method, which can 
> improve the  transfer speed.
> 

[jira] [Updated] (HADOOP-14631) Distcp should add a default atomicWorkPath properties when using atomic

2017-07-08 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14631:
-
Summary: Distcp should add a default  atomicWorkPath properties when using 
atomic  (was: Distcp should add a default  atomicWorkPath properties when using 
atomic or throw obvious Exception)

> Distcp should add a default  atomicWorkPath properties when using atomic
> 
>
> Key: HADOOP-14631
> URL: https://issues.apache.org/jira/browse/HADOOP-14631
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>
> DistCp should add a default atomicWorkPath property when -atomic is used.
> {{Distcp}}#{{configureOutputFormat}} uses the code below to generate the
> atomic work path:
> {code}
> if (context.shouldAtomicCommit()) {
>   Path workDir = context.getAtomicWorkPath();
>   if (workDir == null) {
> workDir = targetPath.getParent();
>   }
>   workDir = new Path(workDir, WIP_PREFIX + targetPath.getName()
> + rand.nextInt());
> {code}
> When -atomic is set and {{AtomicWorkPath}} == null, DistCp falls back to the
> parent of the target path. If the target path is {{"/"}}, that parent is
> {{null}}, which means
> {{workDir = new Path(workDir, WIP_PREFIX + targetPath.getName() + 
> rand.nextInt());}} will throw a NullPointerException.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14632) add buffer to SFTPFileSystem#create and SFTPFileSystem#open method, which can improve the transfer speed.

2017-07-08 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14632:
-
Attachment: HADOOP-14632-003.patch

> add buffer to SFTPFileSystem#create and SFTPFileSystem#open method, which can 
> improve the  transfer speed.
> --
>
> Key: HADOOP-14632
> URL: https://issues.apache.org/jira/browse/HADOOP-14632
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14632-001.patch, HADOOP-14632-002.patch, 
> HADOOP-14632-003.patch
>
>
> Add a buffer size to the SFTPFileSystem#create and SFTPFileSystem#open
> methods, which can improve transfer speed.
> A test example shows that transfer performance improved considerably.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org