[jira] [Commented] (HADOOP-15205) maven release: missing source attachments for hadoop-mapreduce-client-core

2019-01-09 Thread David Phillips (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16739173#comment-16739173
 ] 

David Phillips commented on HADOOP-15205:
-

I updated the list of affected versions. Unfortunately, almost all of the 
recent releases are missing source jars. It does seem that the problem is in how 
the release process is being run, since they are missing for 2.9.0 and 2.9.2 
but not 2.9.1.

Can we ensure that the source jars are uploaded for all future releases?
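For anyone who wants to verify a release, a minimal sketch of such a check in 
Java (the {{repo1.maven.org}} host and the version list are illustrative 
assumptions):

{code:java}
import java.net.HttpURLConnection;
import java.net.URL;

public class SourcesJarCheck {
  public static void main(String[] args) throws Exception {
    // Maven Central; an HTTP HEAD per sources jar is enough to check presence.
    String base = "https://repo1.maven.org/maven2/org/apache/hadoop/";
    String artifact = "hadoop-mapreduce-client-core";
    for (String version : new String[] {"2.7.4", "2.7.5", "2.9.1", "2.9.2"}) {
      URL url = new URL(base + artifact + "/" + version + "/"
          + artifact + "-" + version + "-sources.jar");
      HttpURLConnection conn = (HttpURLConnection) url.openConnection();
      conn.setRequestMethod("HEAD");
      // 200 means the sources jar was uploaded; 404 means it is missing.
      System.out.println(version + " -> HTTP " + conn.getResponseCode());
      conn.disconnect();
    }
  }
}
{code}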

> maven release: missing source attachments for hadoop-mapreduce-client-core
> --
>
> Key: HADOOP-15205
> URL: https://issues.apache.org/jira/browse/HADOOP-15205
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 2.8.2, 2.8.3, 2.7.5, 3.0.0, 3.1.0, 3.0.1, 2.8.4, 
> 2.9.2, 2.8.5
>Reporter: Zoltan Haindrich
>Priority: Major
> Attachments: chk.bash
>
>
> I wanted to use the source attachment; however, it looks like since 2.7.5 that 
> artifact is not present at Maven Central; it looks like the last release 
> which had source attachments / javadocs was 2.7.4:
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.5/
> this seems to be not limited to mapreduce; as the same change is present for 
> yarn-common as well
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.5/
> and also hadoop-common
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.5/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/3.0.0/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/3.1.0/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15205) maven release: missing source attachments for hadoop-mapreduce-client-core

2019-01-09 Thread David Phillips (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Phillips updated HADOOP-15205:

Affects Version/s: 2.9.0
   2.8.2
   2.8.3
   2.8.4
   2.9.2
   2.8.5

> maven release: missing source attachments for hadoop-mapreduce-client-core
> --
>
> Key: HADOOP-15205
> URL: https://issues.apache.org/jira/browse/HADOOP-15205
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 2.8.2, 2.8.3, 2.7.5, 3.0.0, 3.1.0, 3.0.1, 2.8.4, 
> 2.9.2, 2.8.5
>Reporter: Zoltan Haindrich
>Priority: Major
> Attachments: chk.bash
>
>
> I wanted to use the source attachment; however, it looks like since 2.7.5 that 
> artifact is not present at Maven Central; it looks like the last release 
> which had source attachments / javadocs was 2.7.4:
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.5/
> this seems to be not limited to mapreduce; as the same change is present for 
> yarn-common as well
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.5/
> and also hadoop-common
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.5/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/3.0.0/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/3.1.0/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] 20100507 opened a new pull request #461: Update ReconfigurationServlet.java

2019-01-09 Thread GitBox
20100507 opened a new pull request #461: Update ReconfigurationServlet.java
URL: https://github.com/apache/hadoop/pull/461
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16016) TestSSLFactory#testServerWeakCiphers sporadically fails in precommit builds

2019-01-09 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16738879#comment-16738879
 ] 

Wei-Chiu Chuang edited comment on HADOOP-16016 at 1/10/19 6:16 AM:
---

+1 [~ajisakaa] thanks for figuring it out!
I'm wondering if we should document it or attach a release note somewhere, 
since the error message is not obvious, and a user who upgrades the JDK would 
suddenly not be able to start the DataNode because of this issue.


was (Author: jojochuang):
+1 [~ajisakaa] thanks for figuring it out!
I'm wondering if we should document it or attach a release note somewhere, 
since the error message is not obvious, and a user who upgrades JDK would 
suddenly be able to start DataNode because of this issue.
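For anyone hitting this after a JDK upgrade, a quick way to see which 
algorithms the JDK has disabled (a sketch for illustration, not part of the 
patch; {{jdk.tls.disabledAlgorithms}} is the standard JSSE security property):

{code:java}
import java.security.Security;

public class ShowDisabledTlsAlgorithms {
  public static void main(String[] args) {
    // Newer JDK 8 updates ship a longer disabled list, which is likely why
    // the weak-cipher handshake now fails before cipher negotiation.
    System.out.println("jdk.tls.disabledAlgorithms = "
        + Security.getProperty("jdk.tls.disabledAlgorithms"));
  }
}
{code}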

> TestSSLFactory#testServerWeakCiphers sporadically fails in precommit builds
> ---
>
> Key: HADOOP-16016
> URL: https://issues.apache.org/jira/browse/HADOOP-16016
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
> Environment: Java 1.8.0_191 or later
>Reporter: Jason Lowe
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16016-002.patch, HADOOP-16016.01.patch, 
> HADOOP-16016.03.patch
>
>
> I have seen a couple of precommit builds across JIRAs fail in 
> TestSSLFactory#testServerWeakCiphers with the error:
> {noformat}
> [ERROR]   TestSSLFactory.testServerWeakCiphers:240 Expected to find 'no 
> cipher suites in common' but got unexpected 
> exception:javax.net.ssl.SSLHandshakeException: No appropriate protocol 
> (protocol is disabled or cipher suites are inappropriate)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15922) DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL

2019-01-09 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16739049#comment-16739049
 ] 

He Xiaoqiao commented on HADOOP-15922:
--

[~daryn],[~eyang] Thanks for pushing this issue forward, and sorry for the late 
response. Please let me know if there is anything I missed.

> DelegationTokenAuthenticationFilter get wrong doAsUser since it does not 
> decode URL
> ---
>
> Key: HADOOP-15922
> URL: https://issues.apache.org/jira/browse/HADOOP-15922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15922.001.patch, HADOOP-15922.002.patch, 
> HADOOP-15922.003.patch, HADOOP-15922.004.patch, HADOOP-15922.005.patch, 
> HADOOP-15922.006.patch, HADOOP-15922.007.patch
>
>
> DelegationTokenAuthenticationFilter gets the wrong doAsUser when the proxy user 
> from the client is a complete Kerberos name (e.g., user/hostn...@realm.com, 
> which is actually acceptable), because DelegationTokenAuthenticationFilter does 
> not decode the DOAS parameter in the URL, which is encoded by {{URLEncoder}} at 
> the client.
> Taking KMS as an example:
> a. KMSClientProvider creates a connection to the KMS server using 
> DelegationTokenAuthenticatedURL#openConnection.
> b. If KMSClientProvider is a doAsUser, KMSClientProvider will put {{doas}} 
> with the URL-encoded user as one parameter of the HTTP request. 
> {code:java}
> // proxyuser
> if (doAs != null) {
>   extraParams.put(DO_AS, URLEncoder.encode(doAs, "UTF-8"));
> }
> {code}
> c. When the KMS server receives the request, it does not decode the proxy user.
> As a result, the KMS server will get the wrong proxy user if the proxy user is 
> a complete Kerberos name or includes special characters. Authentication and 
> authorization exceptions will then be thrown.
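A minimal sketch of the corresponding server-side decode (the helper name and 
placement are illustrative, not the actual patch):

{code:java}
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

final class DoAsParam {
  private DoAsParam() {}

  // Mirror of the client-side URLEncoder.encode(doAs, "UTF-8") shown above:
  // restores "/" and "@" in names like user/host@REALM that arrive
  // percent-encoded in the query string.
  static String decode(String rawDoAs) throws UnsupportedEncodingException {
    return rawDoAs == null ? null : URLDecoder.decode(rawDoAs, "UTF-8");
  }
}
{code}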



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16018) DistCp won't reassemble chunks when blocks per chunk > 0

2019-01-09 Thread Kai Xie (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16738348#comment-16738348
 ] 

Kai Xie edited comment on HADOOP-16018 at 1/10/19 2:27 AM:
---

After fixing the compilation issue in the branch-2-003 patch, Jenkins started to 
hang during the distcp unit tests.

From Jenkins' unit test 
[log|https://builds.apache.org/job/PreCommit-HADOOP-Build/15755/artifact/out/patch-unit-hadoop-tools_hadoop-distcp.txt]
 it said
{code:java}
[WARNING] Corrupted STDOUT by directly writing to native stream in forked JVM 
1. See FAQ web page and the dump file 
/testptch/hadoop/hadoop-tools/hadoop-distcp/target/surefire-reports/2019-01-09T03-06-03_867-jvmRun1.dumpstream
...
[INFO] Running org.apache.hadoop.tools.TestDistCpSync
(hanging){code}
which may be the cause of the hang and seems unrelated to the patch.

I ran it locally and all tests pass.

 


was (Author: kai33):
after fixing the compilation issue in branch-2-003 patch, jenkins starts to 
hang during the unit test of distcp.

from jenkins' unit test 
[log|https://builds.apache.org/job/PreCommit-HADOOP-Build/15755/artifact/out/patch-unit-hadoop-tools_hadoop-distcp.txt]
 it said
{code:java}
[WARNING] Corrupted STDOUT by directly writing to native stream in forked JVM 
1. See FAQ web page and the dump file 
/testptch/hadoop/hadoop-tools/hadoop-distcp/target/surefire-reports/2019-01-09T03-06-03_867-jvmRun1.dumpstream
{code}
which may be the hanging cause

 

I ran it locally and all test can be passed.

 

> DistCp won't reassemble chunks when blocks per chunk > 0
> 
>
> Key: HADOOP-16018
> URL: https://issues.apache.org/jira/browse/HADOOP-16018
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.2.0, 2.9.2
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Fix For: 3.0.4, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16018-002.patch, HADOOP-16018-branch-2-002.patch, 
> HADOOP-16018-branch-2-002.patch, HADOOP-16018-branch-2-003.patch, 
> HADOOP-16018-branch-2-003.patch, HADOOP-16018.01.patch
>
>
> I was investigating why hadoop-distcp-2.9.2 won't reassemble chunks of the 
> same file when blocks per chunk has been set > 0.
> In CopyCommitter::commitJob, this logic skips chunk reassembly when blocks 
> per chunk is equal to 0:
> {code:java}
> if (blocksPerChunk > 0) {
>   concatFileChunks(conf);
> }
> {code}
> Then in CopyCommitter's ctor, blocksPerChunk is initialised from the config:
> {code:java}
> blocksPerChunk = context.getConfiguration().getInt(
> DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel(), 0);
> {code}
>  
> But here the config key DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel() 
> will always return an empty string, because the switch is constructed without 
> a config label:
> {code:java}
> BLOCKS_PER_CHUNK("",
> new Option("blocksperchunk", true, "If set to a positive value, files"
> + "with more blocks than this value will be split into chunks of "
> + "<blocksperchunk> blocks to be transferred in parallel, and "
> + "reassembled on the destination. By default, <blocksperchunk> is "
> + "0 and the files will be transmitted in their entirety without "
> + "splitting. This switch is only applicable when the source file "
> + "system implements getBlockLocations method and the target file "
> + "system implements concat method"))
> {code}
> As a result it will fall back to the default value 0 for blocksPerChunk, 
> preventing the chunks from being reassembled.
>  
>  
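A sketch of one possible fix: read the value under an explicit, non-empty key 
instead of the empty label returned by the switch (the constant name and key 
below are assumptions for illustration, not necessarily the committed patch):

{code:java}
// Hypothetical constant naming the conf key the option value is stored under.
public static final String CONF_LABEL_BLOCKS_PER_CHUNK =
    "distcp.blocks.per.chunk";

// In the CopyCommitter ctor, read the value via that explicit key:
blocksPerChunk = context.getConfiguration().getInt(
    CONF_LABEL_BLOCKS_PER_CHUNK, 0);
{code}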



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15662) ABFS: Better exception handling of DNS errors

2019-01-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16738909#comment-16738909
 ] 

Hadoop QA commented on HADOOP-15662:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
9s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15662 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12954377/HADOOP-15662-001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 033d6d38ac13 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c634589 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15763/testReport/ |
| Max. process+thread count | 342 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15763/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> ABFS: Better exception handling of DNS errors
> -
>
> Key: HADOOP-15662
> URL: 

[jira] [Commented] (HADOOP-16016) TestSSLFactory#testServerWeakCiphers sporadically fails in precommit builds

2019-01-09 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16738879#comment-16738879
 ] 

Wei-Chiu Chuang commented on HADOOP-16016:
--

+1 [~ajisakaa] thanks for figuring it out!
I'm wondering if we should document it or attach a release note somewhere, 
since the error message is not obvious, and a user who upgrades the JDK would 
suddenly not be able to start the DataNode because of this issue.

> TestSSLFactory#testServerWeakCiphers sporadically fails in precommit builds
> ---
>
> Key: HADOOP-16016
> URL: https://issues.apache.org/jira/browse/HADOOP-16016
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
> Environment: Java 1.8.0_191 or later
>Reporter: Jason Lowe
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16016-002.patch, HADOOP-16016.01.patch, 
> HADOOP-16016.03.patch
>
>
> I have seen a couple of precommit builds across JIRAs fail in 
> TestSSLFactory#testServerWeakCiphers with the error:
> {noformat}
> [ERROR]   TestSSLFactory.testServerWeakCiphers:240 Expected to find 'no 
> cipher suites in common' but got unexpected 
> exception:javax.net.ssl.SSLHandshakeException: No appropriate protocol 
> (protocol is disabled or cipher suites are inappropriate)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15662) ABFS: Better exception handling of DNS errors

2019-01-09 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15662:
-
Fix Version/s: 3.2.0
   Status: Patch Available  (was: Open)

All tests passed against my US west account:
XNS account, OAuth:
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
Tests run: 324, Failures: 0, Errors: 0, Skipped: 22
Tests run: 168, Failures: 0, Errors: 0, Skipped: 21

XNS account, SharedKey:
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
Tests run: 324, Failures: 0, Errors: 0, Skipped: 20
Tests run: 168, Failures: 0, Errors: 0, Skipped: 15

non-XNS account, SharedKey:
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
Tests run: 324, Failures: 0, Errors: 0, Skipped: 206
Tests run: 168, Failures: 0, Errors: 0, Skipped: 15

> ABFS: Better exception handling of DNS errors
> -
>
> Key: HADOOP-15662
> URL: https://issues.apache.org/jira/browse/HADOOP-15662
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Da Zhou
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15662-001.patch
>
>
> DNS errors are common during testing due to typos or misconfiguration.  They 
> can also occur in production, as some transient DNS issues occur from time to 
> time. 
> 1) Let's investigate if we can distinguish between the two and fail fast for 
> the test issues, but continue to have retry logic for the transient DNS 
> issues in production.
> 2) Let's improve the error handling of DNS failures, so the user has an 
> actionable error message.
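As a sketch of the distinction being proposed (illustrative only, not the 
attached patch): DNS resolution failures surface as 
{{java.net.UnknownHostException}}, which retry logic can treat separately from 
other IOExceptions:

{code:java}
import java.io.IOException;
import java.net.UnknownHostException;

// Illustrative policy: fail fast on repeated DNS errors (likely a typo or
// misconfiguration), while deferring other errors to the normal retry logic.
static boolean shouldRetry(IOException e, int consecutiveDnsFailures) {
  if (e instanceof UnknownHostException) {
    // Tolerating a couple of retries covers transient DNS blips; more than
    // that suggests a bad account/host name, so give up with a clear message.
    return consecutiveDnsFailures < 2;
  }
  return true;
}
{code}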



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15662) ABFS: Better exception handling of DNS errors

2019-01-09 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15662:
-
Attachment: HADOOP-15662-001.patch

> ABFS: Better exception handling of DNS errors
> -
>
> Key: HADOOP-15662
> URL: https://issues.apache.org/jira/browse/HADOOP-15662
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15662-001.patch
>
>
> DNS errors are common during testing due to typos or misconfiguration.  They 
> can also occur in production, as some transient DNS issues occur from time to 
> time. 
> 1) Let's investigate if we can distinguish between the two and fail fast for 
> the test issues, but continue to have retry logic for the transient DNS 
> issues in production.
> 2) Let's improve the error handling of DNS failures, so the user has an 
> actionable error message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16029) Consecutive Append Should Reuse

2019-01-09 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16738853#comment-16738853
 ] 

Giovanni Matteo Fumarola commented on HADOOP-16029:
---

Thanks [~ayushtkn].
I think we should start fixing the PMD performance and checkstyle warnings in the 
entire codebase.
Happy to review any patch.

In [^HADOOP-16029-04.patch], GraphiteSink has wrong indentation compared to 
the rest of the class.
We should fix the PMD warning now and the indentation for the entire class later.

> Consecutive Append Should Reuse
> ---
>
> Key: HADOOP-16029
> URL: https://issues.apache.org/jira/browse/HADOOP-16029
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HADOOP-16029-01.patch, HADOOP-16029-02.patch, 
> HADOOP-16029-03.patch, HADOOP-16029-04.patch
>
>
> Consecutive calls to StringBuffer/StringBuilder.append() should be chained, 
> reusing the target object. This can improve performance by producing smaller 
> bytecode, reducing overhead and improving inlining.
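For illustration, a minimal before/after sketch of the pattern being changed:

{code:java}
String host = "localhost";
int port = 8020;

// Before: each statement re-reads the local and discards the returned builder.
StringBuilder sb = new StringBuilder();
sb.append("host=");
sb.append(host);
sb.append(" port=");
sb.append(port);

// After: chained calls reuse the StringBuilder returned by append().
StringBuilder chained = new StringBuilder()
    .append("host=").append(host)
    .append(" port=").append(port);
{code}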



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16029) Consecutive Append Should Reuse

2019-01-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16738846#comment-16738846
 ] 

Hadoop QA commented on HADOOP-16029:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
 3s{color} | {color:green} root: The patch generated 0 new + 2510 unchanged - 
18 fixed = 2510 total (was 2528) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 43s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
50s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
14s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m  
4s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}268m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.ssl.TestSSLFactory |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce 

[jira] [Created] (HADOOP-16041) UserAgent string for ABFS

2019-01-09 Thread Shweta (JIRA)
Shweta created HADOOP-16041:
---

 Summary: UserAgent string for ABFS
 Key: HADOOP-16041
 URL: https://issues.apache.org/jira/browse/HADOOP-16041
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.2.0
Reporter: Shweta
Assignee: Shweta
 Fix For: 3.3.0






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15954) ABFS: Enable owner and group conversion for MSI and login user using OAuth

2019-01-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16738832#comment-16738832
 ] 

Hadoop QA commented on HADOOP-15954:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-tools/hadoop-azure: The patch generated 0 new 
+ 2 unchanged - 1 fixed = 2 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
12s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15954 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12954363/HADOOP-15954-008.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux eb465ce5f79f 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f4617c6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15762/testReport/ |
| Max. process+thread count | 295 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15762/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This 

[jira] [Updated] (HADOOP-15934) ABFS: make retry policy configurable

2019-01-09 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15934:
-
Labels: won't-fix  (was: )

> ABFS: make retry policy configurable
> 
>
> Key: HADOOP-15934
> URL: https://issues.apache.org/jira/browse/HADOOP-15934
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
>  Labels: won't-fix
>
> Currently the retry policy parameter is hard-coded; it should be made 
> configurable for the user.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-15934) ABFS: make retry policy configurable

2019-01-09 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou resolved HADOOP-15934.
--
Resolution: Won't Fix

> ABFS: make retry policy configurable
> 
>
> Key: HADOOP-15934
> URL: https://issues.apache.org/jira/browse/HADOOP-15934
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
>  Labels: won't-fix
>
> Currently the retry policy parameter is hard-coded; it should be made 
> configurable for the user.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15954) ABFS: Enable owner and group conversion for MSI and login user using OAuth

2019-01-09 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15954:
-
Attachment: HADOOP-15954-008.patch

> ABFS: Enable owner and group conversion for MSI and login user using OAuth
> --
>
> Key: HADOOP-15954
> URL: https://issues.apache.org/jira/browse/HADOOP-15954
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: junhua gu
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15954-001.patch, HADOOP-15954-002.patch, 
> HADOOP-15954-003.patch, HADOOP-15954-004.patch, HADOOP-15954-005.patch, 
> HADOOP-15954-006.patch, HADOOP-15954-007.patch, HADOOP-15954-008.patch
>
>
> Add support for overwriting owner and group in set/get operations to be the 
> service principal id when OAuth is used. Add support for upn short name 
> format.
>  
> Add a Standard Transformer for SharedKey / Service.
> Add an interface that provides an extensible model for customizing the 
> acquisition of the Identity Transformer.
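A sketch of what such an extension point could look like (hypothetical names; 
the patch's actual interface may differ):

{code:java}
/**
 * Hypothetical extension point: maps identities exchanged with the store
 * (e.g. an OAuth service principal id or a UPN) to and from locally
 * meaningful owner and group names.
 */
public interface IdentityTransformer {
  /** Transform an identity read from the service before exposing it as owner/group. */
  String transformIdentityForGetRequest(String identityFromService);

  /** Transform a locally supplied owner/group before sending it to the service. */
  String transformIdentityForSetRequest(String localIdentity);
}
{code}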



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15954) ABFS: Enable owner and group conversion for MSI and login user using OAuth

2019-01-09 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16738764#comment-16738764
 ] 

Da Zhou commented on HADOOP-15954:
--

Submitted 008 patch:
- Reverted the change for the check on isSecurityEnabled(); the reason is that 
even in a non-secure env, it is possible that name conversion is needed (a 
daemon service with MSI). The user can still disable it through configuration 
files.

All tests passed:
non-XNS account, SharedKey:
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
Tests run: 316, Failures: 0, Errors: 0, Skipped: 205
Tests run: 165, Failures: 0, Errors: 0, Skipped: 15

XNS account, sharedKey:
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
Tests run: 316, Failures: 0, Errors: 0, Skipped: 19
Tests run: 165, Failures: 0, Errors: 0, Skipped: 15

XNS account, OAuth:
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
Tests run: 316, Failures: 0, Errors: 0, Skipped: 20
Tests run: 165, Failures: 0, Errors: 0, Skipped: 21

> ABFS: Enable owner and group conversion for MSI and login user using OAuth
> --
>
> Key: HADOOP-15954
> URL: https://issues.apache.org/jira/browse/HADOOP-15954
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: junhua gu
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15954-001.patch, HADOOP-15954-002.patch, 
> HADOOP-15954-003.patch, HADOOP-15954-004.patch, HADOOP-15954-005.patch, 
> HADOOP-15954-006.patch, HADOOP-15954-007.patch, HADOOP-15954-008.patch
>
>
> Add support for overwriting owner and group in set/get operations to be the 
> service principal id when OAuth is used. Add support for upn short name 
> format.
>  
> Add a Standard Transformer for SharedKey / Service.
> Add an interface that provides an extensible model for customizing the 
> acquisition of the Identity Transformer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] elek closed pull request #457: HDDS-965. Ozone: checkstyle improvements and code quality scripts.

2019-01-09 Thread GitBox
elek closed pull request #457: HDDS-965. Ozone: checkstyle improvements and 
code quality scripts.
URL: https://github.com/apache/hadoop/pull/457
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientRatis.java
 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientRatis.java
index b1a70c01140a..f8d02deb262d 100644
--- 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientRatis.java
+++ 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientRatis.java
@@ -190,12 +190,6 @@ public long getReplicatedMinCommitIndex() {
 return minIndex.isPresent() ? minIndex.getAsLong() : 0;
   }
 
-  private void getFailedServer(
-  Collection<RaftProtos.CommitInfoProto> commitInfos) {
-for (RaftProtos.CommitInfoProto proto : commitInfos) {
-
-}
-  }
 
   @Override
   public long watchForCommit(long index, long timeout)
@@ -217,7 +211,7 @@ public long watchForCommit(long index, long timeout)
 .sendWatchAsync(index, RaftProtos.ReplicationLevel.ALL_COMMITTED);
 RaftClientReply reply;
 try {
-  reply = replyFuture.get(timeout, TimeUnit.MILLISECONDS);
+  replyFuture.get(timeout, TimeUnit.MILLISECONDS);
 } catch (TimeoutException toe) {
   LOG.warn("3 way commit failed ", toe);
 
diff --git 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java
 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java
index 32c6b6ae9864..b62f7b6a6674 100644
--- 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java
+++ 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java
@@ -122,6 +122,7 @@
* @param watchTimeout  watch timeout
* @param checksum  checksum
*/
+  @SuppressWarnings("parameternumber")
   public BlockOutputStream(BlockID blockID, String key,
   XceiverClientManager xceiverClientManager, XceiverClientSpi 
xceiverClient,
   String traceID, int chunkSize, long streamBufferFlushSize,
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
index 18637af7a881..08be429231f4 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
@@ -18,22 +18,6 @@
 
 package org.apache.hadoop.hdds;
 
-import com.google.common.base.Strings;
-import com.google.common.net.HostAndPort;
-import org.apache.hadoop.classification.InterfaceAudience;
-import org.apache.hadoop.classification.InterfaceStability;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.CommonConfigurationKeys;
-import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
-import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos;
-import org.apache.hadoop.hdds.scm.ScmConfigKeys;
-import org.apache.hadoop.metrics2.util.MBeans;
-import org.apache.hadoop.net.DNS;
-import org.apache.hadoop.net.NetUtils;
-import org.apache.hadoop.ozone.OzoneConfigKeys;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
 import javax.management.ObjectName;
 import java.lang.reflect.InvocationTargetException;
 import java.lang.reflect.Method;
@@ -47,13 +31,26 @@
 import java.util.Optional;
 import java.util.TimeZone;
 
-import static org.apache.hadoop.hdfs.DFSConfigKeys
-.DFS_DATANODE_DNS_INTERFACE_KEY;
-import static org.apache.hadoop.hdfs.DFSConfigKeys
-.DFS_DATANODE_DNS_NAMESERVER_KEY;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
+import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.metrics2.util.MBeans;
+import org.apache.hadoop.net.DNS;
+import org.apache.hadoop.net.NetUtils;
+
+import com.google.common.base.Strings;
+import com.google.common.net.HostAndPort;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_DNS_INTERFACE_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_DNS_NAMESERVER_KEY;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_HOST_NAME_KEY;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ENABLED;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ENABLED_DEFAULT;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * HDDS specific stateless utility 

[jira] [Commented] (HADOOP-15954) ABFS: Enable owner and group conversion for MSI and login user using OAuth

2019-01-09 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16738735#comment-16738735
 ] 

Da Zhou commented on HADOOP-15954:
--

[~ste...@apache.org], good point, I would like to keep this in mind and see how 
things go. Something like that would benefit us a lot.


> ABFS: Enable owner and group conversion for MSI and login user using OAuth
> --
>
> Key: HADOOP-15954
> URL: https://issues.apache.org/jira/browse/HADOOP-15954
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: junhua gu
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15954-001.patch, HADOOP-15954-002.patch, 
> HADOOP-15954-003.patch, HADOOP-15954-004.patch, HADOOP-15954-005.patch, 
> HADOOP-15954-006.patch, HADOOP-15954-007.patch
>
>
> Add support for overwriting owner and group in set/get operations to be the 
> service principal id when OAuth is used. Add support for upn short name 
> format.
>  
> Add a Standard Transformer for SharedKey / Service.
> Add an interface that provides an extensible model for customizing the 
> acquisition of the Identity Transformer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16040) ABFS: Bug fix for tolerateOobAppends configuration

2019-01-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16738668#comment-16738668
 ] 

Hadoop QA commented on HADOOP-16040:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
14s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16040 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12954346/HADOOP-16040-001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 47b98bdca683 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f4617c6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15761/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15761/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> ABFS: Bug fix for tolerateOobAppends configuration
> --
>
> Key: HADOOP-16040
> URL: 

[jira] [Commented] (HADOOP-14556) S3A to support Delegation Tokens

2019-01-09 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16738645#comment-16738645
 ] 

Steve Loughran commented on HADOOP-14556:
-

javadoc is happy, junit is happy other than the (independent) SSL failure, and 
the checkstyle complaints are either existing issues flagged as new, some line 
lengths == 81, or some URL-in-javadoc-is-too-long errors.

Is everyone happy with this updated patch?

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556-014.patch, 
> HADOOP-14556-015.patch, HADOOP-14556-016.patch, HADOOP-14556-017.patch, 
> HADOOP-14556-018a.patch, HADOOP-14556-019.patch, HADOOP-14556-020.patch, 
> HADOOP-14556-021.patch, HADOOP-14556-022.patch, HADOOP-14556-023.patch, 
> HADOOP-14556-024.patch, HADOOP-14556-025.patch, HADOOP-14556-026.patch, 
> HADOOP-14556-027.patch, HADOOP-14556-028.patch, HADOOP-14556-029.patch, 
> HADOOP-14556.oath-002.patch, HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.
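For context, a minimal sketch of the client-side call this enables (the bucket 
URI and renewer are illustrative values):

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.token.Token;

public class S3ADelegationTokenDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf);
    // With this patch, the S3A connector marshals short-lived session
    // credentials inside the returned token.
    Token<?> token = fs.getDelegationToken("yarn");
    System.out.println(token == null
        ? "no token issued" : "token kind: " + token.getKind());
  }
}
{code}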



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16040) ABFS: Bug fix for tolerateOobAppends configuration

2019-01-09 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-16040:
-
Status: Patch Available  (was: Open)

Tests against my US west account passed:
XNS account using OAuth:
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
Tests run: 324, Failures: 0, Errors: 0, Skipped: 22
Tests run: 168, Failures: 0, Errors: 0, Skipped: 21

XNS account using SharedKey:
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
Tests run: 324, Failures: 0, Errors: 0, Skipped: 20
Tests run: 168, Failures: 0, Errors: 0, Skipped: 15

non-XNS account using SharedKey:
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
Tests run: 324, Failures: 0, Errors: 0, Skipped: 206
Tests run: 168, Failures: 0, Errors: 0, Skipped: 15

> ABFS: Bug fix for tolerateOobAppends configuration
> --
>
> Key: HADOOP-16040
> URL: https://issues.apache.org/jira/browse/HADOOP-16040
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-16040-001.patch
>
>
> Cause: configuration for tolerateOobAppends is never used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16040) ABFS: Bug fix for tolerateOobAppends configuration

2019-01-09 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-16040:
-
Attachment: HADOOP-16040-001.patch

> ABFS: Bug fix for tolerateOobAppends configuration
> --
>
> Key: HADOOP-16040
> URL: https://issues.apache.org/jira/browse/HADOOP-16040
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-16040-001.patch
>
>
> Cause: configuration for tolerateOobAppends is never used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16029) Consecutive Append Should Reuse

2019-01-09 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HADOOP-16029:
--
Attachment: HADOOP-16029-04.patch

> Consecutive Append Should Reuse
> ---
>
> Key: HADOOP-16029
> URL: https://issues.apache.org/jira/browse/HADOOP-16029
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HADOOP-16029-01.patch, HADOOP-16029-02.patch, 
> HADOOP-16029-03.patch, HADOOP-16029-04.patch
>
>
> Consecutive calls to StringBuffer/StringBuilder .append should be chained, 
> reusing the target object. This can improve performance by producing smaller 
> bytecode, reducing overhead and improving inlining.
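
A small standalone before/after illustration of the pattern (not taken from 
the patch):

{code:java}
public class AppendChaining {
  public static void main(String[] args) {
    StringBuilder sb = new StringBuilder();

    // Before: each statement re-loads the builder reference and discards
    // the StringBuilder that append() returns.
    sb.append("key");
    sb.append('=');
    sb.append(42);

    // After: chained calls reuse the returned reference, producing
    // slightly smaller bytecode that is easier to inline.
    sb.setLength(0);
    sb.append("key").append('=').append(42);

    System.out.println(sb);
  }
}
{code}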



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16029) Consecutive Append Should Reuse

2019-01-09 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738598#comment-16738598
 ] 

Ayush Saxena commented on HADOOP-16029:
---

Thanks [~giovanni.fumarola] for reviewing.

I have made the indentation changes as suggested.

For GraphiteSink I corrected the whole for block, since if I had kept it the 
same, an existing checkstyle warning would still pop up there once I touched 
the line.

You can check the [trunk checkstyle 
results|https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1011/artifact/out/diff-checkstyle-root.txt];
the whole file has checkstyle problems.

> Consecutive Append Should Reuse
> ---
>
> Key: HADOOP-16029
> URL: https://issues.apache.org/jira/browse/HADOOP-16029
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HADOOP-16029-01.patch, HADOOP-16029-02.patch, 
> HADOOP-16029-03.patch, HADOOP-16029-04.patch
>
>
> Consecutive calls to StringBuffer/StringBuilder .append should be chained, 
> reusing the target object. This can improve performance by producing smaller 
> bytecode, reducing overhead and improving inlining.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] aw-was-here commented on issue #459: HADOOP-16035. Jenkinsfile for Hadoop

2019-01-09 Thread GitBox
aw-was-here commented on issue #459: HADOOP-16035. Jenkinsfile for Hadoop
URL: https://github.com/apache/hadoop/pull/459#issuecomment-452824782
 
 
   OK, now the build on the Jenkins side is green regardless of whether JUnit 
found test results or not (since not every patch touches Java... case in point).


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] apache-yetus commented on issue #459: HADOOP-16035. Jenkinsfile for Hadoop

2019-01-09 Thread GitBox
apache-yetus commented on issue #459: HADOOP-16035. Jenkinsfile for Hadoop
URL: https://github.com/apache/hadoop/pull/459#issuecomment-452822327
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 22 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ trunk Compile Tests _ |
   | +1 | shadedclient | 892 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | shelldocs | 14 | There were no new shelldocs issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 843 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 27 | The patch does not generate ASF License warnings. |
   | | | 1832 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-459/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/459 |
   | Optional Tests |  dupname  asflicense  shellcheck  shelldocs  |
   | uname | Linux f25545f5984f 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3420e26 |
   | maven | version: Apache Maven 3.3.9 |
   | shellcheck | v0.4.6 |
   | Max. process+thread count | 340 (vs. ulimit of 5500) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-459/5/console |
   | Powered by | Apache Yetus 0.9.0-SNAPSHOT http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15975) ABFS: remove timeout check for DELETE and RENAME

2019-01-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738563#comment-16738563
 ] 

Hadoop QA commented on HADOOP-15975:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 2 
new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
23s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15975 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12954339/HADOOP-15975-002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c91e6c13b03b 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3420e26 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15759/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15759/testReport/ |
| Max. process+thread count | 295 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 

[jira] [Commented] (HADOOP-16029) Consecutive Append Should Reuse

2019-01-09 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738523#comment-16738523
 ] 

Giovanni Matteo Fumarola commented on HADOOP-16029:
---

Thanks [~ayushtkn] for the hard work.

Overall the code is OK and it will improve performance.
Nit: GraphiteSink and FSEditLogOp have incorrect indentation.

I will commit after the indentation issue is fixed.

> Consecutive Append Should Reuse
> ---
>
> Key: HADOOP-16029
> URL: https://issues.apache.org/jira/browse/HADOOP-16029
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HADOOP-16029-01.patch, HADOOP-16029-02.patch, 
> HADOOP-16029-03.patch
>
>
> Consecutive calls to StringBuffer/StringBuilder .append should be chained, 
> reusing the target object. This can improve performance by producing smaller 
> bytecode, reducing overhead and improving inlining.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16027) [DOC] Effective use of FS instances during S3A integration tests

2019-01-09 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738507#comment-16738507
 ] 

Hudson commented on HADOOP-16027:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15751 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15751/])
HADOOP-16027. [DOC] Effective use of FS instances during S3A integration (sean: 
rev 3420e26ae57f5946a913278a8a62ae82e930df88)
* (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md


> [DOC] Effective use of FS instances during S3A integration tests
> 
>
> Key: HADOOP-16027
> URL: https://issues.apache.org/jira/browse/HADOOP-16027
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-16027.001.patch, HADOOP-16027.002.patch
>
>
> While fixing HADOOP-15819 we found that a closed fs got into the static fs 
> cache during testing, which caused other tests to fail when the tests were 
> running sequentially.
> We should document some best practices in the testing section on the s3 docs 
> with the following:
> {panel}
> Tests using FileSystems are fastest if they can recycle the existing FS 
> instance from the same JVM. If you do that, you MUST NOT close them or apply 
> unique configuration to them. If you want a guarantee of 100% isolation or an 
> instance with unique config, create a new instance, 
> which you MUST close in the teardown to avoid leakage of resources.
> Do not add FileSystem instances (e.g. with 
> org.apache.hadoop.fs.FileSystem#addFileSystemForTesting) to the cache if they 
> will be modified or closed during the test runs. This can cause other tests 
> to fail when using the same modified or closed FS instance. For more details 
> see HADOOP-15819.
> {panel}
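
A minimal sketch of the safe pattern the panel describes; the bucket URI and 
the particular config option are illustrative only.

{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class IsolatedFsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Unique, test-specific configuration goes on the private instance only.
    conf.setInt("fs.s3a.connection.maximum", 4);

    // newInstance() bypasses the shared FileSystem cache, so closing this
    // instance cannot poison other tests that rely on FileSystem.get().
    FileSystem fs =
        FileSystem.newInstance(new URI("s3a://example-bucket/"), conf);
    try {
      fs.exists(new Path("/"));
    } finally {
      fs.close(); // the teardown close the panel insists on
    }
  }
}
{code}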



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15989) Synchronized at CompositeService#removeService is not required

2019-01-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738520#comment-16738520
 ] 

Hadoop QA commented on HADOOP-15989:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 43s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.ssl.TestSSLFactory |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15989 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12954326/0001-HADOOP-15989.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d419ff3117f4 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 709ddb1 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15758/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15758/testReport/ |
| Max. process+thread count | 1346 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| 

[GitHub] apache-yetus commented on issue #459: HADOOP-16035. Jenkinsfile for Hadoop

2019-01-09 Thread GitBox
apache-yetus commented on issue #459: HADOOP-16035. Jenkinsfile for Hadoop
URL: https://github.com/apache/hadoop/pull/459#issuecomment-452801905
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ trunk Compile Tests _ |
   | +1 | shadedclient | 934 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | shelldocs | 18 | There were no new shelldocs issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 717 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 1819 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-459/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/459 |
   | Optional Tests |  dupname  asflicense  shellcheck  shelldocs  |
   | uname | Linux 035d7a1cbf25 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 709ddb1 |
   | maven | version: Apache Maven 3.3.9 |
   | shellcheck | v0.4.6 |
   | Max. process+thread count | 443 (vs. ulimit of 5500) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-459/4/console |
   | Powered by | Apache Yetus 0.9.0-SNAPSHOT http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16027) [DOC] Effective use of FS instances during S3A integration tests

2019-01-09 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738490#comment-16738490
 ] 

Sean Mackrory commented on HADOOP-16027:


+1, committed. Nit: I added empty lines between what appeared to be 
paragraphs.

> [DOC] Effective use of FS instances during S3A integration tests
> 
>
> Key: HADOOP-16027
> URL: https://issues.apache.org/jira/browse/HADOOP-16027
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-16027.001.patch, HADOOP-16027.002.patch
>
>
> While fixing HADOOP-15819 we found that a closed fs got into the static fs 
> cache during testing, which caused other tests to fail when the tests were 
> running sequentially.
> We should document some best practices in the testing section on the s3 docs 
> with the following:
> {panel}
> Tests using FileSystems are fastest if they can recycle the existing FS 
> instance from the same JVM. If you do that, you MUST NOT close them or apply 
> unique configuration to them. If you want a guarantee of 100% isolation or an 
> instance with unique config, create a new instance, 
> which you MUST close in the teardown to avoid leakage of resources.
> Do not add FileSystem instances (e.g. with 
> org.apache.hadoop.fs.FileSystem#addFileSystemForTesting) to the cache if they 
> will be modified or closed during the test runs. This can cause other tests 
> to fail when using the same modified or closed FS instance. For more details 
> see HADOOP-15819.
> {panel}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15975) ABFS: remove timeout check for DELETE and RENAME

2019-01-09 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738492#comment-16738492
 ] 

Da Zhou commented on HADOOP-15975:
--

Resubmitting the patch after updating my local branch.

All ABFS tests passed:
non-XNS account, using SharedKey: 
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
Tests run: 316, Failures: 0, Errors: 0, Skipped: 205
Tests run: 165, Failures: 0, Errors: 0, Skipped: 15

XNS account, using SharedKey:
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
Tests run: 316, Failures: 0, Errors: 0, Skipped: 19
Tests run: 165, Failures: 0, Errors: 0, Skipped: 15

XNS account, using OAuth:
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
Tests run: 316, Failures: 0, Errors: 0, Skipped: 20
Tests run: 165, Failures: 0, Errors: 0, Skipped: 21

> ABFS: remove timeout check for DELETE and RENAME
> 
>
> Key: HADOOP-15975
> URL: https://issues.apache.org/jira/browse/HADOOP-15975
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15975-001.patch, HADOOP-15975-002.patch
>
>
> Currently, ABFS rename and delete perform a timeout check, which will fail 
> the rename/delete request when the target contains a huge number of files/dirs.
> Because a timeout check is already performed for each HTTP call, we should 
> remove the timeout check in RENAME and DELETE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15975) ABFS: remove timeout check for DELETE and RENAME

2019-01-09 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15975:
-
Attachment: HADOOP-15975-002.patch

> ABFS: remove timeout check for DELETE and RENAME
> 
>
> Key: HADOOP-15975
> URL: https://issues.apache.org/jira/browse/HADOOP-15975
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15975-001.patch, HADOOP-15975-002.patch
>
>
> Currently, ABFS rename and delete perform a timeout check, which will fail 
> the rename/delete request when the target contains a huge number of files/dirs.
> Because a timeout check is already performed for each HTTP call, we should 
> remove the timeout check in RENAME and DELETE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16036) WASB: Disable jetty logging configuration announcement

2019-01-09 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738476#comment-16738476
 ] 

Da Zhou commented on HADOOP-16036:
--

Ah, sorry, I forgot to paste the test results.
All tests against my US West account passed:
Tests run: 245, Failures: 0, Errors: 0, Skipped: 11
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
Tests run: 620, Failures: 0, Errors: 0, Skipped: 66
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0

> WASB: Disable jetty logging configuration announcement
> --
>
> Key: HADOOP-16036
> URL: https://issues.apache.org/jira/browse/HADOOP-16036
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-16036-001.patch
>
>
> Console output of the WASB cmd contains the following jetty logging 
> configuration announcement:
> 18/12/18 12:39:19 INFO util.log: Logging initialized @1016ms.
> We need to disable this announcement.
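
One common way to suppress that banner in Jetty 9 is its announce system 
property; a minimal sketch, assuming the patch takes this or a similar route:

{code:java}
public class QuietJettySketch {
  public static void main(String[] args) {
    // Jetty 9 consults this property before printing the
    // "Logging initialized @...ms" banner, so setting it to "false"
    // before any Jetty class initialises keeps startup silent.
    System.setProperty("org.eclipse.jetty.util.log.announce", "false");
  }
}
{code}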



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16040) ABFS: Bug fix for tolerateOobAppends configuration

2019-01-09 Thread Da Zhou (JIRA)
Da Zhou created HADOOP-16040:


 Summary: ABFS: Bug fix for tolerateOobAppends configuration
 Key: HADOOP-16040
 URL: https://issues.apache.org/jira/browse/HADOOP-16040
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.2.0
Reporter: Da Zhou
Assignee: Da Zhou


Cause: configuration for tolerateOobAppends is never used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] apache-yetus commented on issue #459: HADOOP-16035. Jenkinsfile for Hadoop

2019-01-09 Thread GitBox
apache-yetus commented on issue #459: HADOOP-16035. Jenkinsfile for Hadoop
URL: https://github.com/apache/hadoop/pull/459#issuecomment-452782330
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 528 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ trunk Compile Tests _ |
   | +1 | shadedclient | 966 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | shelldocs | 13 | There were no new shelldocs issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 909 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 2553 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-459/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/459 |
   | Optional Tests |  dupname  asflicense  shellcheck  shelldocs  |
   | uname | Linux 826d1a07357a 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 709ddb1 |
   | maven | version: Apache Maven 3.3.9 |
   | shellcheck | v0.4.6 |
   | Max. process+thread count | 339 (vs. ulimit of 5500) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-459/3/console |
   | Powered by | Apache Yetus 0.9.0-SNAPSHOT http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15989) Synchronized at CompositeService#removeService is not required

2019-01-09 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-15989:
---
Status: Patch Available  (was: Open)

> Synchronized at CompositeService#removeService is not required
> --
>
> Key: HADOOP-15989
> URL: https://issues.apache.org/jira/browse/HADOOP-15989
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.3
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: 0001-HADOOP-15989.patch
>
>
> Synchronization at CompositeService#removeService method level is not 
> required.
> {code}
> protected synchronized boolean removeService(Service service) {
>   synchronized (serviceList) {
>     return serviceList.remove(service);
>   }
> }
> {code}
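
A sketch of the resulting method once the redundant outer lock is dropped (my 
reading of the proposal, not the attached patch verbatim):

{code:java}
// The serviceList monitor already serialises the mutation, so the
// method-level synchronized only added a second, redundant lock.
protected boolean removeService(Service service) {
  synchronized (serviceList) {
    return serviceList.remove(service);
  }
}
{code}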



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15989) Synchronized at CompositeService#removeService is not required

2019-01-09 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-15989:
---
Attachment: 0001-HADOOP-15989.patch

> Synchronized at CompositeService#removeService is not required
> --
>
> Key: HADOOP-15989
> URL: https://issues.apache.org/jira/browse/HADOOP-15989
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.3
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: 0001-HADOOP-15989.patch
>
>
> Synchronization at CompositeService#removeService method level is not 
> required.
> {code}
> protected synchronized boolean removeService(Service service) {
>   synchronized (serviceList) {
>     return serviceList.remove(service);
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16018) DistCp won't reassemble chunks when blocks per chunk > 0

2019-01-09 Thread Kai Xie (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738348#comment-16738348
 ] 

Kai Xie commented on HADOOP-16018:
--

After fixing the compilation issue in the branch-2-003 patch, Jenkins started 
to hang during the distcp unit tests.

From Jenkins' unit test 
[log|https://builds.apache.org/job/PreCommit-HADOOP-Build/15755/artifact/out/patch-unit-hadoop-tools_hadoop-distcp.txt],
it said:
{code:java}
[WARNING] Corrupted STDOUT by directly writing to native stream in forked JVM 
1. See FAQ web page and the dump file 
/testptch/hadoop/hadoop-tools/hadoop-distcp/target/surefire-reports/2019-01-09T03-06-03_867-jvmRun1.dumpstream
{code}
which may be the cause of the hang.

I ran the tests locally and all of them passed.

> DistCp won't reassemble chunks when blocks per chunk > 0
> 
>
> Key: HADOOP-16018
> URL: https://issues.apache.org/jira/browse/HADOOP-16018
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.2.0, 2.9.2
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Fix For: 3.0.4, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16018-002.patch, HADOOP-16018-branch-2-002.patch, 
> HADOOP-16018-branch-2-002.patch, HADOOP-16018-branch-2-003.patch, 
> HADOOP-16018-branch-2-003.patch, HADOOP-16018.01.patch
>
>
> I was investigating why hadoop-distcp-2.9.2 won't reassemble chunks of the 
> same file when blocks per chunk has been set > 0.
> In the CopyCommitter::commitJob, this logic can prevent chunks from 
> reassembling if blocks per chunk is equal to 0:
> {code:java}
> if (blocksPerChunk > 0) {
>   concatFileChunks(conf);
> }
> {code}
> Then in CopyCommitter's ctor, blocksPerChunk is initialised from the config:
> {code:java}
> blocksPerChunk = context.getConfiguration().getInt(
> DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel(), 0);
> {code}
>  
> But here the config key DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel() 
> will always return an empty string, because the switch is constructed without 
> a config label:
> {code:java}
> BLOCKS_PER_CHUNK("",
> new Option("blocksperchunk", true, "If set to a positive value, files"
> + "with more blocks than this value will be split into chunks of "
> + " blocks to be transferred in parallel, and "
> + "reassembled on the destination. By default,  is "
> + "0 and the files will be transmitted in their entirety without "
> + "splitting. This switch is only applicable when the source file "
> + "system implements getBlockLocations method and the target file "
> + "system implements concat method"))
> {code}
> As a result it will fall back to the default value 0 for blocksPerChunk, and 
> prevent the chunks from reassembling.
>  
>  
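
A self-contained toy reproduction of the failure mode. The enum here and the 
"distcp.blocks.per.chunk" label are assumptions for illustration, not quotes 
from DistCp or the patch.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ConfigLabelSketch {
  // Mirrors the bug: an option switch whose config label is empty.
  enum Switch {
    BLOCKS_PER_CHUNK("");
    // A fix would supply a real label, e.g. "distcp.blocks.per.chunk".

    private final String confLabel;
    Switch(String confLabel) { this.confLabel = confLabel; }
    String getConfigLabel() { return confLabel; }
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setInt("distcp.blocks.per.chunk", 10); // what the client sets

    // getInt("") never matches the key set above, so the lookup falls back
    // to the default 0 and commitJob() would skip concatFileChunks().
    int blocksPerChunk =
        conf.getInt(Switch.BLOCKS_PER_CHUNK.getConfigLabel(), 0);
    System.out.println("blocksPerChunk = " + blocksPerChunk); // prints 0
  }
}
{code}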



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16039) backport HADOOP-15965, "Upgrade ADLS SDK to 2.3.3" to branch-2

2019-01-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738274#comment-16738274
 ] 

Hadoop QA commented on HADOOP-16039:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HADOOP-16039 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-16039 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12954316/HADOOP-16039-001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15757/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> backport HADOOP-15965, "Upgrade ADLS SDK to 2.3.3" to branch-2
> --
>
> Key: HADOOP-16039
> URL: https://issues.apache.org/jira/browse/HADOOP-16039
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.9.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16039-001.patch
>
>
> Backport the ADLS SDK 2.3.3 update to branch-2, retest. etc



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16039) backport HADOOP-15965, "Upgrade ADLS SDK to 2.3.3" to branch-2

2019-01-09 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16039:

Status: Patch Available  (was: Open)

> backport HADOOP-15965, "Upgrade ADLS SDK to 2.3.3" to branch-2
> --
>
> Key: HADOOP-16039
> URL: https://issues.apache.org/jira/browse/HADOOP-16039
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.9.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16039-001.patch
>
>
> Backport the ADLS SDK 2.3.3 update to branch-2, retest. etc



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16039) backport HADOOP-15965, "Upgrade ADLS SDK to 2.3.3" to branch-2

2019-01-09 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738258#comment-16738258
 ] 

Steve Loughran commented on HADOOP-16039:
-

Stack, for the curious 

{code}
[ERROR] testCreateNonRecursiveFunctionality[CNR - When file exist. Override 
true](org.apache.hadoop.fs.adl.live.TestAdlInternalCreateNonRecursive)  Time 
elapsed: 102.596 s  <<< ERROR!
com.microsoft.azure.datalake.store.ADLException: 
Error creating file 
/test/createNonRecursive/d38b1472-c154-4053-8126-fed057e52586/6dd5d959-3d98-45d5-9cf3-c83d0f36c20d
Error fetching access token
Operation null failed with exception java.io.IOException : Server returned HTTP 
response code: 401 for URL: https://login.microsoftonline.com/$UUID/oauth2/token
Last encountered exception thrown after 5 tries. 
[java.io.IOException,java.io.IOException,java.io.IOException,java.io.IOException,java.io.IOException]
 [ServerRequestId:null]
at 
com.microsoft.azure.datalake.store.ADLStoreClient.getExceptionFromResponse(ADLStoreClient.java:1194)
at 
com.microsoft.azure.datalake.store.ADLStoreClient.createFile(ADLStoreClient.java:284)
at org.apache.hadoop.fs.adl.AdlFileSystem.create(AdlFileSystem.java:375)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1067)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1048)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:937)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:925)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:696)
at 
org.apache.hadoop.fs.adl.live.TestAdlInternalCreateNonRecursive.testCreateNonRecursiveFunctionality(TestAdlInternalCreateNonRecursive.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.junit.runners.Suite.runChild(Suite.java:127)
at org.junit.runners.Suite.runChild(Suite.java:26)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Server returned HTTP response code: 401 for 
URL: https://login.microsoftonline.com/$UUID/oauth2/token
at sun.reflect.GeneratedConstructorAccessor16.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
sun.net.www.protocol.http.HttpURLConnection$10.run(HttpURLConnection.java:1926)
at 
sun.net.www.protocol.http.HttpURLConnection$10.run(HttpURLConnection.java:1921)
at java.security.AccessController.doPrivileged(Native Method)
at 
sun.net.www.protocol.http.HttpURLConnection.getChainedException(HttpURLConnection.java:1920)
at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1490)
at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)
at 
sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:254)
at 

[jira] [Commented] (HADOOP-16039) backport HADOOP-15965, "Upgrade ADLS SDK to 2.3.3" to branch-2

2019-01-09 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738256#comment-16738256
 ] 

Steve Loughran commented on HADOOP-16039:
-

Patch 001: adds the SDK variable and then updates it.

Not tested against any infra; my attempts to log in to ADLS failed; looks like 
an OAuth problem. Uploading for others to test, and for Yetus, while I look at 
this.

> backport HADOOP-15965, "Upgrade ADLS SDK to 2.3.3" to branch-2
> --
>
> Key: HADOOP-16039
> URL: https://issues.apache.org/jira/browse/HADOOP-16039
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.9.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16039-001.patch
>
>
> Backport the ADLS SDK 2.3.3 update to branch-2, retest. etc



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16039) backport HADOOP-15965, "Upgrade ADLS SDK to 2.3.3" to branch-2

2019-01-09 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16039:

Attachment: HADOOP-16039-001.patch

> backport HADOOP-15965, "Upgrade ADLS SDK to 2.3.3" to branch-2
> --
>
> Key: HADOOP-16039
> URL: https://issues.apache.org/jira/browse/HADOOP-16039
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.9.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16039-001.patch
>
>
> Backport the ADLS SDK 2.3.3 update to branch-2, retest. etc



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16029) Consecutive Append Should Reuse

2019-01-09 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738239#comment-16738239
 ] 

Ayush Saxena commented on HADOOP-16029:
---

Thanks [~giovanni.fumarola] for the response :)

I have fixed all the checkstyle warnings.

Please review!

> Consecutive Append Should Reuse
> ---
>
> Key: HADOOP-16029
> URL: https://issues.apache.org/jira/browse/HADOOP-16029
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HADOOP-16029-01.patch, HADOOP-16029-02.patch, 
> HADOOP-16029-03.patch
>
>
> Consecutive calls to StringBuffer/StringBuilder .append should be chained, 
> reusing the target object. This can improve performance by producing smaller 
> bytecode, reducing overhead and improving inlining.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16039) backport HADOOP-15965, "Upgrade ADLS SDK to 2.3.3" to branch-2

2019-01-09 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16039:
---

 Summary: backport HADOOP-15965, "Upgrade ADLS SDK to 2.3.3" to 
branch-2
 Key: HADOOP-16039
 URL: https://issues.apache.org/jira/browse/HADOOP-16039
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/azure
Affects Versions: 2.9.2
Reporter: Steve Loughran
Assignee: Steve Loughran


Backport the ADLS SDK 2.3.3 update to branch-2, retest. etc



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16018) DistCp won't reassemble chunks when blocks per chunk > 0

2019-01-09 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16018:

Status: Patch Available  (was: Open)

> DistCp won't reassemble chunks when blocks per chunk > 0
> 
>
> Key: HADOOP-16018
> URL: https://issues.apache.org/jira/browse/HADOOP-16018
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.2, 3.2.0
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Fix For: 3.0.4, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16018-002.patch, HADOOP-16018-branch-2-002.patch, 
> HADOOP-16018-branch-2-002.patch, HADOOP-16018-branch-2-003.patch, 
> HADOOP-16018-branch-2-003.patch, HADOOP-16018.01.patch
>
>
> I was investigating why hadoop-distcp-2.9.2 won't reassemble chunks of the 
> same file when blocks per chunk has been set > 0.
> In the CopyCommitter::commitJob, this logic can prevent chunks from 
> reassembling if blocks per chunk is equal to 0:
> {code:java}
> if (blocksPerChunk > 0) {
>   concatFileChunks(conf);
> }
> {code}
> Then in CopyCommitter's ctor, blocksPerChunk is initialised from the config:
> {code:java}
> blocksPerChunk = context.getConfiguration().getInt(
> DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel(), 0);
> {code}
>  
> But here the config key DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel() 
> will always return an empty string, because the switch is constructed without 
> a config label:
> {code:java}
> BLOCKS_PER_CHUNK("",
> new Option("blocksperchunk", true, "If set to a positive value, files"
> + "with more blocks than this value will be split into chunks of "
> + " blocks to be transferred in parallel, and "
> + "reassembled on the destination. By default,  is "
> + "0 and the files will be transmitted in their entirety without "
> + "splitting. This switch is only applicable when the source file "
> + "system implements getBlockLocations method and the target file "
> + "system implements concat method"))
> {code}
> As a result it will fall back to the default value 0 for blocksPerChunk, and 
> prevent the chunks from reassembling.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16018) DistCp won't reassemble chunks when blocks per chunk > 0

2019-01-09 Thread Kai Xie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Xie updated HADOOP-16018:
-
Status: Open  (was: Patch Available)

> DistCp won't reassemble chunks when blocks per chunk > 0
> 
>
> Key: HADOOP-16018
> URL: https://issues.apache.org/jira/browse/HADOOP-16018
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.2, 3.2.0
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Fix For: 3.0.4, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16018-002.patch, HADOOP-16018-branch-2-002.patch, 
> HADOOP-16018-branch-2-002.patch, HADOOP-16018-branch-2-003.patch, 
> HADOOP-16018.01.patch
>
>
> I was investigating why hadoop-distcp-2.9.2 won't reassemble chunks of the 
> same file when blocks per chunk has been set > 0.
> In the CopyCommitter::commitJob, this logic can prevent chunks from 
> reassembling if blocks per chunk is equal to 0:
> {code:java}
> if (blocksPerChunk > 0) {
>   concatFileChunks(conf);
> }
> {code}
> Then in CopyCommitter's ctor, blocksPerChunk is initialised from the config:
> {code:java}
> blocksPerChunk = context.getConfiguration().getInt(
> DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel(), 0);
> {code}
>  
> But here the config key DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel() 
> will always return an empty string, because the switch is constructed without 
> a config label:
> {code:java}
> BLOCKS_PER_CHUNK("",
> new Option("blocksperchunk", true, "If set to a positive value, files"
> + "with more blocks than this value will be split into chunks of "
> + " blocks to be transferred in parallel, and "
> + "reassembled on the destination. By default,  is "
> + "0 and the files will be transmitted in their entirety without "
> + "splitting. This switch is only applicable when the source file "
> + "system implements getBlockLocations method and the target file "
> + "system implements concat method"))
> {code}
> As a result it will fall back to the default value 0 for blocksPerChunk, and 
> prevent the chunks from reassembling.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15954) ABFS: Enable owner and group conversion for MSI and login user using OAuth

2019-01-09 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738207#comment-16738207
 ] 

Steve Loughran commented on HADOOP-15954:
-

I've been working on Delegation Tokens for S3A and thinking about this.

Should whatever plugin ABFS (& other stores) use for DT binding hold the 
right to declare the username & group? That way, if someone is logged in with 
AD, it can get the user & group from that, and if a DT was issued from the 
logged-in user, the user & group could be added to the DT and unmarshalled at 
the far end.

I don't think that'll directly impact this patch, but it is ~related... if 
something like that went in, it'd be an evolution.
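
Purely as a sketch of the idea, with every name invented for illustration, 
such a binding might look like:

{code:java}
import org.apache.hadoop.security.token.Token;

/**
 * Speculative sketch: a store-neutral delegation-token binding that is
 * also authoritative for identity, so whatever issued the credentials
 * gets to declare who the user is and which groups they belong to.
 */
public interface DtBindingWithIdentity {

  /** Issue a token; implementations may embed owner/group inside it. */
  Token<?> createDelegationToken(String renewer) throws Exception;

  /** Username as declared by the auth source (e.g. AD), not the OS login. */
  String getOwner();

  /** Groups as declared by the auth source, unmarshalled at the far end. */
  String[] getGroups();
}
{code}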

> ABFS: Enable owner and group conversion for MSI and login user using OAuth
> --
>
> Key: HADOOP-15954
> URL: https://issues.apache.org/jira/browse/HADOOP-15954
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: junhua gu
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15954-001.patch, HADOOP-15954-002.patch, 
> HADOOP-15954-003.patch, HADOOP-15954-004.patch, HADOOP-15954-005.patch, 
> HADOOP-15954-006.patch, HADOOP-15954-007.patch
>
>
> Add support for overwriting owner and group in set/get operations with the 
> service principal id when OAuth is used. Add support for the UPN short name 
> format.
>  
> Add a standard transformer for SharedKey / Service. 
> Add an interface that provides an extensible model for customizing the 
> acquisition of the Identity Transformer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16018) DistCp won't reassemble chunks when blocks per chunk > 0

2019-01-09 Thread Kai Xie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Xie updated HADOOP-16018:
-
Status: Patch Available  (was: Open)

> DistCp won't reassemble chunks when blocks per chunk > 0
> 
>
> Key: HADOOP-16018
> URL: https://issues.apache.org/jira/browse/HADOOP-16018
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.2, 3.2.0
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Fix For: 3.0.4, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16018-002.patch, HADOOP-16018-branch-2-002.patch, 
> HADOOP-16018-branch-2-002.patch, HADOOP-16018-branch-2-003.patch, 
> HADOOP-16018-branch-2-003.patch, HADOOP-16018.01.patch
>
>
> I was investigating why hadoop-distcp-2.9.2 won't reassemble chunks of the 
> same file when blocks per chunk has been set > 0.
> In the CopyCommitter::commitJob, this logic can prevent chunks from 
> reassembling if blocks per chunk is equal to 0:
> {code:java}
> if (blocksPerChunk > 0) {
>   concatFileChunks(conf);
> }
> {code}
> Then in CopyCommitter's ctor, blocksPerChunk is initialised from the config:
> {code:java}
> blocksPerChunk = context.getConfiguration().getInt(
> DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel(), 0);
> {code}
>  
> But here the config key DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel() 
> will always return an empty string, because the switch is constructed without 
> a config label:
> {code:java}
> BLOCKS_PER_CHUNK("",
> new Option("blocksperchunk", true, "If set to a positive value, files"
> + "with more blocks than this value will be split into chunks of "
> + " blocks to be transferred in parallel, and "
> + "reassembled on the destination. By default,  is "
> + "0 and the files will be transmitted in their entirety without "
> + "splitting. This switch is only applicable when the source file "
> + "system implements getBlockLocations method and the target file "
> + "system implements concat method"))
> {code}
> As a result, blocksPerChunk falls back to its default value of 0, which 
> prevents the chunks from being reassembled, as the sketch below demonstrates.
>  
>  
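> A minimal standalone sketch of the failure mode (hypothetical demo class; 
> assumes hadoop-common on the classpath, and "distcp.blocks.per.chunk" is the 
> assumed real label): a lookup under the empty-string label always yields the 
> default, which is what silently disables reassembly.
> {code:java}
> import org.apache.hadoop.conf.Configuration;
>
> public class EmptyLabelDemo {
>   public static void main(String[] args) {
>     Configuration conf = new Configuration(false);
>     // The value is stored under the real label...
>     conf.setInt("distcp.blocks.per.chunk", 10);
>     // ...but CopyCommitter reads it via getConfigLabel(), which returns "".
>     int blocksPerChunk = conf.getInt("", 0);
>     System.out.println(blocksPerChunk); // prints 0: concatFileChunks() is skipped
>   }
> }
> {code}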



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16036) WASB: Disable jetty logging configuration announcement

2019-01-09 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738199#comment-16738199
 ] 

Steve Loughran commented on HADOOP-16036:
-

LGTM. Which endpoint did you do a test run against?

> WASB: Disable jetty logging configuration announcement
> --
>
> Key: HADOOP-16036
> URL: https://issues.apache.org/jira/browse/HADOOP-16036
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-16036-001.patch
>
>
> Console output of the WASB command contains the following Jetty logging 
> configuration announcement:
> 18/12/18 12:39:19 INFO util.log: Logging initialized @1016ms.
> This announcement needs to be disabled.
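> A hedged sketch of one way to do that (assumption: the line comes from Jetty 
> 9's org.eclipse.jetty.util.log.Log, which reads this property when the class 
> first loads, so it must be set before any Jetty class is touched):
> {code:java}
> public class SilenceJettyAnnouncement {
>   public static void main(String[] args) {
>     // Must be set before the first Jetty class is loaded.
>     System.setProperty("org.eclipse.jetty.util.log.announce", "false");
>     // Alternative via log4j.properties instead of code:
>     //   log4j.logger.org.eclipse.jetty.util.log=WARN
>   }
> }
> {code}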



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16018) DistCp won't reassemble chunks when blocks per chunk > 0

2019-01-09 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16018:

Status: Open  (was: Patch Available)

> DistCp won't reassemble chunks when blocks per chunk > 0
> 
>
> Key: HADOOP-16018
> URL: https://issues.apache.org/jira/browse/HADOOP-16018
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.2, 3.2.0
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Fix For: 3.0.4, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16018-002.patch, HADOOP-16018-branch-2-002.patch, 
> HADOOP-16018-branch-2-002.patch, HADOOP-16018-branch-2-003.patch, 
> HADOOP-16018-branch-2-003.patch, HADOOP-16018.01.patch
>
>
> I was investigating why hadoop-distcp-2.9.2 won't reassemble chunks of the 
> same file even when blocks per chunk has been set > 0.
> In CopyCommitter::commitJob, this logic skips chunk reassembly whenever 
> blocks per chunk is 0:
> {code:java}
> if (blocksPerChunk > 0) {
>   concatFileChunks(conf);
> }
> {code}
> Then in CopyCommitter's ctor, blocksPerChunk is initialised from the config:
> {code:java}
> blocksPerChunk = context.getConfiguration().getInt(
> DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel(), 0);
> {code}
>  
> But here the config key DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel() 
> always returns an empty string, because the switch is constructed without a 
> config label:
> {code:java}
> BLOCKS_PER_CHUNK("",
> new Option("blocksperchunk", true, "If set to a positive value, files "
> + "with more blocks than this value will be split into chunks of "
> + "<blocksperchunk> blocks to be transferred in parallel, and "
> + "reassembled on the destination. By default, <blocksperchunk> is "
> + "0 and the files will be transmitted in their entirety without "
> + "splitting. This switch is only applicable when the source file "
> + "system implements getBlockLocations method and the target file "
> + "system implements concat method"))
> {code}
> As a result, blocksPerChunk falls back to its default value of 0, which 
> prevents the chunks from being reassembled.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16018) DistCp won't reassemble chunks when blocks per chunk > 0

2019-01-09 Thread Kai Xie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Xie updated HADOOP-16018:
-
Attachment: HADOOP-16018-branch-2-003.patch

> DistCp won't reassemble chunks when blocks per chunk > 0
> 
>
> Key: HADOOP-16018
> URL: https://issues.apache.org/jira/browse/HADOOP-16018
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.2.0, 2.9.2
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Fix For: 3.0.4, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16018-002.patch, HADOOP-16018-branch-2-002.patch, 
> HADOOP-16018-branch-2-002.patch, HADOOP-16018-branch-2-003.patch, 
> HADOOP-16018-branch-2-003.patch, HADOOP-16018.01.patch
>
>
> I was investigating why hadoop-distcp-2.9.2 won't reassemble chunks of the 
> same file even when blocks per chunk has been set > 0.
> In CopyCommitter::commitJob, this logic skips chunk reassembly whenever 
> blocks per chunk is 0:
> {code:java}
> if (blocksPerChunk > 0) {
>   concatFileChunks(conf);
> }
> {code}
> Then in CopyCommitter's ctor, blocksPerChunk is initialised from the config:
> {code:java}
> blocksPerChunk = context.getConfiguration().getInt(
> DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel(), 0);
> {code}
>  
> But here the config key DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel() 
> always returns an empty string, because the switch is constructed without a 
> config label:
> {code:java}
> BLOCKS_PER_CHUNK("",
> new Option("blocksperchunk", true, "If set to a positive value, files "
> + "with more blocks than this value will be split into chunks of "
> + "<blocksperchunk> blocks to be transferred in parallel, and "
> + "reassembled on the destination. By default, <blocksperchunk> is "
> + "0 and the files will be transmitted in their entirety without "
> + "splitting. This switch is only applicable when the source file "
> + "system implements getBlockLocations method and the target file "
> + "system implements concat method"))
> {code}
> As a result, blocksPerChunk falls back to its default value of 0, which 
> prevents the chunks from being reassembled.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16034) Hadoop/Yarn registry to create Curator binding on demand

2019-01-09 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738130#comment-16738130
 ] 

Steve Loughran commented on HADOOP-16034:
-

Been thinking about a test for this. 

Obvious one: configure the client with an invalid IPv4 address 
(192.255.255.256) and expect it to fail once curator is set up; with this 
change the failure will move from service init to the actual first operation.

> Hadoop/Yarn registry to create Curator binding on demand
> 
>
> Key: HADOOP-16034
> URL: https://issues.apache.org/jira/browse/HADOOP-16034
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: registry
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16034-001.patch
>
>
> If enough slider worker nodes are created, and each then creates a Registry 
> client, ZK can overload even when they aren't making calls to the registry. 
> Why so? Because curator gets started up in serviceStart(), even when not used.
> Proposed fix: make the binding on-demand, on first use of a ZK read/write 
> operation, as sketched below.
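> A hedged sketch of the on-demand binding (class and field names hypothetical, 
> not the actual registry code; assumes Curator on the classpath):
> {code:java}
> import org.apache.curator.framework.CuratorFramework;
> import org.apache.curator.framework.CuratorFrameworkFactory;
> import org.apache.curator.retry.ExponentialBackoffRetry;
>
> class LazyCuratorBinding {
>   private final String zkQuorum;
>   private volatile CuratorFramework curator;
>
>   LazyCuratorBinding(String zkQuorum) {
>     this.zkQuorum = zkQuorum;
>   }
>
>   /** Bind on first use rather than in serviceStart(). */
>   private CuratorFramework curator() {
>     if (curator == null) {
>       synchronized (this) {
>         if (curator == null) {
>           CuratorFramework c = CuratorFrameworkFactory.newClient(
>               zkQuorum, new ExponentialBackoffRetry(1000, 3));
>           c.start(); // connection attempts begin here, not at service init
>           curator = c;
>         }
>       }
>     }
>     return curator;
>   }
>
>   byte[] read(String path) throws Exception {
>     return curator().getData().forPath(path); // first op triggers the bind
>   }
> }
> {code}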



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] steveloughran commented on issue #459: HADOOP-16035. Jenkinsfile for Hadoop

2019-01-09 Thread GitBox
steveloughran commented on issue #459: HADOOP-16035. Jenkinsfile for Hadoop
URL: https://github.com/apache/hadoop/pull/459#issuecomment-452660260
 
 
   OK, starting this work. Account will be @hadoop-yetus


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16037) DistCp: Document usage of -diff option in detail

2019-01-09 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738101#comment-16738101
 ] 

Steve Loughran commented on HADOOP-16037:
-

Sounds good. Can you mention that most cloud stores don't work well for 
incremental updates?

> DistCp: Document usage of -diff option in detail
> 
>
> Key: HADOOP-16037
> URL: https://issues.apache.org/jira/browse/HADOOP-16037
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation, tools/distcp
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Create a new doc section similar to "Update and Overwrite" for the -diff 
> option. Provide step-by-step guidance; an example workflow is sketched below.
> Current doc link: 
> https://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html
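> A hedged sketch of the snapshot-diff workflow such a section could document 
> (paths and snapshot names are illustrative; -diff requires -update, and the 
> target must already hold a snapshot s1 identical to the source's s1):
> {code}
> hdfs dfs -createSnapshot /src s1              # snapshot source before changes
> # ... files under /src are created/modified/deleted ...
> hdfs dfs -createSnapshot /src s2              # snapshot source after changes
> hadoop distcp -update -diff s1 s2 /src /dst   # copy only the s1->s2 delta
> {code}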



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15994) Upgrade Jackson2 to the latest version

2019-01-09 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738097#comment-16738097
 ] 

lqjacklee commented on HADOOP-15994:


[https://github.com/apache/hadoop/pull/460]

Updated the Jackson version to 2.9.8.
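
A hedged sketch of the usual shape of such a bump in hadoop-project/pom.xml 
(the property name is assumed from Hadoop's convention; the PR above is the 
authoritative change):

{code:xml}
<properties>
  <!-- assumed property name, not quoted from the patch -->
  <jackson2.version>2.9.8</jackson2.version>
</properties>
{code}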

> Upgrade Jackson2 to the latest version
> --
>
> Key: HADOOP-15994
> URL: https://issues.apache.org/jira/browse/HADOOP-15994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15994-001.patch, HADOOP-15994-002.patch, 
> HADOOP-15994-003.patch
>
>
> Now Jackson 2.9.5 is used and it is vulnerable (CVE-2018-11307). Let's 
> upgrade to the latest version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] lqjack opened a new pull request #460: Hadoop-15994

2019-01-09 Thread GitBox
lqjack opened a new pull request #460: Hadoop-15994
URL: https://github.com/apache/hadoop/pull/460
 
 
   upgrade jackson2 version to 2.9.8


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15994) Upgrade Jackson2 to the latest version

2019-01-09 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16737946#comment-16737946
 ] 

Akira Ajisaka commented on HADOOP-15994:


Yes. We need to update the versions whenever possible to fix potential 
vulnerabilities.

> Upgrade Jackson2 to the latest version
> --
>
> Key: HADOOP-15994
> URL: https://issues.apache.org/jira/browse/HADOOP-15994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15994-001.patch, HADOOP-15994-002.patch, 
> HADOOP-15994-003.patch
>
>
> Now Jackson 2.9.5 is used and it is vulnerable (CVE-2018-11307). Let's 
> upgrade to 2.9.6 or higher.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org