[jira] [Commented] (HADOOP-17048) Increase number of IPC calls and lock creating contention for low latency queries

2020-06-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136197#comment-17136197
 ] 

Hadoop QA commented on HADOOP-17048:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
38s{color} | {color:blue} Used deprecated FindBugs config; consider 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
36s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 3 new + 102 unchanged - 0 fixed = 105 total (was 102) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 55s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
52s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverControllerStress |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16979/artifact/out/Dockerfile
 |
| JIRA Issue | HADOOP-17048 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13005740/HADOOP-17048.1.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux f0b7f7f39e68 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |

[jira] [Commented] (HADOOP-9851) dfs -chown does not like "+" plus sign in user name

2020-06-15 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-9851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136174#comment-17136174
 ] 

Ayush Saxena commented on HADOOP-9851:
--

Can you take a look at the checkstyle complaints?
Test failures seem unrelated.
+1 once the checkstyle issue is fixed.


> dfs -chown does not like "+" plus sign in user name
> ---
>
> Key: HADOOP-9851
> URL: https://issues.apache.org/jira/browse/HADOOP-9851
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.5-alpha
>Reporter: Marc Villacorta
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HADOOP-9851.01.patch, HADOOP-9851.02.patch
>
>
> I intend to set user and group:
> *User:* _MYCOMPANY+marc.villacorta_
> *Group:* hadoop
> where _'+'_ is what we use as a winbind separator.
> And this is what I get:
> {code:none}
> sudo -u hdfs hadoop fs -touchz /tmp/test.txt
> sudo -u hdfs hadoop fs -chown MYCOMPANY+marc.villacorta:hadoop /tmp/test.txt
> -chown: 'MYCOMPANY+marc.villacorta:hadoop' does not match expected pattern 
> for [owner][:group].
> Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
> {code}
> I am using version: 2.0.0-cdh4.3.0
> Quote 
> [source|http://h30097.www3.hp.com/docs/iass/OSIS_62/MAN/MAN8/0044.HTM]:
> {quote}
> winbind separator
>The winbind separator option allows you to specify how NT domain names
>and user names are combined into unix user names when presented to
>users. By default, winbindd will use the traditional '\' separator so
>that the unix user names look like DOMAIN\username. In some cases this
>separator character may cause problems as the '\' character has
>special meaning in unix shells. In that case you can use the winbind
>separator option to specify an alternative separator character. Good
>alternatives may be '/' (although that conflicts with the unix
>directory separator) or a '+' character. The '+' character appears to
>be the best choice for 100% compatibility with existing unix
>utilities, but may be an aesthetically bad choice depending on your
>taste.
>Default: winbind separator = \
>Example: winbind separator = +
> {quote}
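> A minimal sketch of the pattern change under discussion (hypothetical regex; 
> the exact pattern in FsShellPermissions may differ) adds '+' to the set of 
> characters accepted in the owner and group names:
> {code:java}
> import java.util.regex.Matcher;
> import java.util.regex.Pattern;
> 
> // Hypothetical chown pattern with '+' added to the allowed character class.
> Pattern chownPattern =
>     Pattern.compile("^\\s*([-_./@a-zA-Z0-9+]+)?([:]([-_./@a-zA-Z0-9+]*))?\\s*$");
> Matcher m = chownPattern.matcher("MYCOMPANY+marc.villacorta:hadoop");
> System.out.println(m.matches());  // true once '+' is permitted
> {code}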



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17048) Increase number of IPC calls and lock creating contention for low latency queries

2020-06-15 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HADOOP-17048:
--
Attachment: HADOOP-17048.1.patch
Status: Patch Available  (was: Open)

> Increase number of IPC calls and lock creating contention for low latency 
> queries
> -
>
> Key: HADOOP-17048
> URL: https://issues.apache.org/jira/browse/HADOOP-17048
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HADOOP-17048.1.patch
>
>
> After HADOOP-16126 and HADOOP-16127, we noticed lock issues even on the local 
> FS, caused by the IPC client shutdown path.
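> A minimal mitigation sketch (an assumption for illustration, not the attached 
> patch): bypass the shared FileSystem cache for the affected schemes, so that 
> FileSystem.get() does not block on the Cache lock held during closeAllForUGI():
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> 
> Configuration conf = new Configuration();
> // Uncached instances avoid FileSystem$Cache locking, at the cost of
> // creating (and having to close) a FileSystem per caller.
> conf.setBoolean("fs.hdfs.impl.disable.cache", true);
> conf.setBoolean("fs.file.impl.disable.cache", true);
> FileSystem fs = FileSystem.get(conf);
> {code}
> The contention shows up in thread dumps like the following: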
> "Task-Executor-11" #273 daemon prio=5 os_prio=0 tid=0x7fe204664800 
> nid=0x2343d2 waiting on condition [0x7fe1fcfda000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at org.apache.hadoop.ipc.Client.stop(Client.java:1329)
>   at org.apache.hadoop.ipc.ClientCache.stopClient(ClientCache.java:113)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.close(ProtobufRpcEngine.java:302)
>   at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:677)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.close(ClientNamenodeProtocolTranslatorPB.java:304)
>   at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:672)
>   at 
> org.apache.hadoop.io.retry.DefaultFailoverProxyProvider.close(DefaultFailoverProxyProvider.java:57)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$ProxyDescriptor.close(RetryInvocationHandler.java:234)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.close(RetryInvocationHandler.java:444)
>   at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:677)
>   at 
> org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:592)
>   at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:633)
>   - locked <0x7fed071063a0> (a org.apache.hadoop.hdfs.DFSClient)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1358)
>   at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:3463)
>   - locked <0x7fe63d00> (a org.apache.hadoop.fs.FileSystem$Cache)
>   at org.apache.hadoop.fs.FileSystem.closeAllForUGI(FileSystem.java:576)
>   at 
> org.apache.hadoop.hive.llap.daemon.impl.TaskRunnerCallable.callInternal(TaskRunnerCallable.java:299)
>   at 
> org.apache.hadoop.hive.llap.daemon.impl.TaskRunnerCallable.callInternal(TaskRunnerCallable.java:93)
>   at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
>   at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> "New I/O worker #44" #80 prio=5 os_prio=0 tid=0x7fede2a03000 nid=0x233f2b 
> waiting for monitor entry [0x7fe20cd3d000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3345)
>   - waiting to lock <0x7fe63d00> (a 
> org.apache.hadoop.fs.FileSystem$Cache)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:435)
>   at 
> org.apache.tez.runtime.library.common.sort.impl.TezSpillRecord.<init>(TezSpillRecord.java:65)
>   at 
> org.apache.tez.runtime.library.common.sort.impl.TezSpillRecord.<init>(TezSpillRecord.java:58)
>   at 
> org.apache.hadoop.hive.llap.shufflehandler.IndexCache.readIndexFileToCache(IndexCache.java:121)
>   at 
> org.apache.hadoop.hive.llap.shufflehandler.IndexCache.getIndexInformation(IndexCache.java:70)
>   at 
> org.apache.hadoop.hive.llap.shufflehandler.ShuffleHandler$Shuffle.getMapOutputInfo(ShuffleHandler.java:887)
>   at 
> org.apache.hadoop.hive.llap.shufflehandler.ShuffleHandler$Shuffle.populateHeaders(ShuffleHandler.java:908)
>   at 
> org.apache.hadoop.hive.llap.shufflehandler.ShuffleHandler$Shuffle.messageReceived(ShuffleHandler.java:805)
>   at 
> 

[jira] [Commented] (HADOOP-9851) dfs -chown does not like "+" plus sign in user name

2020-06-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-9851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136004#comment-17136004
 ] 

Hadoop QA commented on HADOOP-9851:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 36m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
file. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
10s{color} | {color:blue} Used deprecated FindBugs config; consider 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 55s{color} | {color:orange} root: The patch generated 1 new + 192 unchanged 
- 1 fixed = 193 total (was 193) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 24s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}117m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
54s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}287m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.TestStripedFileAppend |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |

[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2069: HADOOP-16830. IOStatistics API.

2020-06-15 Thread GitBox


hadoop-yetus removed a comment on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-642775273


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  2s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
24 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 11s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 21s |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 40s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   3m 37s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 24s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 30s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m 16s |  Used deprecated FindBugs config; 
consider switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 44s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 33s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 36s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 25s |  the patch passed  |
   | -1 :x: |  javac  |  20m 25s |  root generated 1 new + 1857 unchanged - 1 
fixed = 1858 total (was 1858)  |
   | -0 :warning: |  checkstyle  |   3m 18s |  root: The patch generated 26 new 
+ 160 unchanged - 22 fixed = 186 total (was 182)  |
   | +1 :green_heart: |  mvnsite  |   2m 28s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 9 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  3s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  17m 19s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  the patch passed  |
   | -1 :x: |  findbugs  |   2m 44s |  hadoop-common-project/hadoop-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  10m 39s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  unit  |   1m 40s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 51s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 147m 34s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  org.apache.hadoop.fs.statistics.IOStatisticEntry defines equals and 
uses Object.hashCode()  At IOStatisticEntry.java:Object.hashCode()  At 
IOStatisticEntry.java:[lines 299-302] |
   | Failed junit tests | hadoop.fs.statistics.TestDynamicIOStatistics |
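   
   The FindBugs hit above is the standard equals/hashCode contract violation: a 
class that overrides equals() must override hashCode() too. A minimal 
illustrative fix (hypothetical field name, not the actual IOStatisticEntry 
code) derives both from the same state:
   
   ```java
   // Sketch only; assumes a hypothetical long[] field named "values".
   @Override
   public boolean equals(Object o) {
     if (this == o) {
       return true;
     }
     if (!(o instanceof IOStatisticEntry)) {
       return false;
     }
     return java.util.Arrays.equals(values, ((IOStatisticEntry) o).values);
   }
   
   @Override
   public int hashCode() {
     // Keeping Object.hashCode() would let equal entries hash differently,
     // breaking HashMap/HashSet lookups.
     return java.util.Arrays.hashCode(values);
   }
   ```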
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2069/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2069 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint xml |
   | uname | Linux d1ea090666d3 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 93b121a9717 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2069/1/artifact/out/diff-compile-javac-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2069/1/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2069/1/artifact/out/whitespace-eol.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2069/1/artifact/out/new-findbugs-hadoop-common-project_hadoop-common.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2069/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2069/1/testReport/ |
   | Max. process+thread 

[jira] [Commented] (HADOOP-9851) dfs -chown does not like "+" plus sign in user name

2020-06-15 Thread Andras Bokor (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-9851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17135783#comment-17135783
 ] 

Andras Bokor commented on HADOOP-9851:
--

[~ayushtkn],
Windows remains unchanged; only Linux will allow the '+' sign.

> dfs -chown does not like "+" plus sign in user name
> ---
>
> Key: HADOOP-9851
> URL: https://issues.apache.org/jira/browse/HADOOP-9851
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.5-alpha
>Reporter: Marc Villacorta
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HADOOP-9851.01.patch, HADOOP-9851.02.patch
>
>
> I intend to set user and group:
> *User:* _MYCOMPANY+marc.villacorta_
> *Group:* hadoop
> where _'+'_ is what we use as a winbind separator.
> And this is what I get:
> {code:none}
> sudo -u hdfs hadoop fs -touchz /tmp/test.txt
> sudo -u hdfs hadoop fs -chown MYCOMPANY+marc.villacorta:hadoop /tmp/test.txt
> -chown: 'MYCOMPANY+marc.villacorta:hadoop' does not match expected pattern 
> for [owner][:group].
> Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
> {code}
> I am using version: 2.0.0-cdh4.3.0
> Quote 
> [source|http://h30097.www3.hp.com/docs/iass/OSIS_62/MAN/MAN8/0044.HTM]:
> {quote}
> winbind separator
>The winbind separator option allows you to specify how NT domain names
>and user names are combined into unix user names when presented to
>users. By default, winbindd will use the traditional '\' separator so
>that the unix user names look like DOMAIN\username. In some cases this
>separator character may cause problems as the '\' character has
>special meaning in unix shells. In that case you can use the winbind
>separator option to specify an alternative separator character. Good
>alternatives may be '/' (although that conflicts with the unix
>directory separator) or a '+' character. The '+' character appears to
>be the best choice for 100% compatibility with existing unix
>utilities, but may be an aesthetically bad choice depending on your
>taste.
>Default: winbind separator = \
>Example: winbind separator = +
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9851) dfs -chown does not like "+" plus sign in user name

2020-06-15 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-9851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-9851:
-
Attachment: HADOOP-9851.02.patch

> dfs -chown does not like "+" plus sign in user name
> ---
>
> Key: HADOOP-9851
> URL: https://issues.apache.org/jira/browse/HADOOP-9851
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.5-alpha
>Reporter: Marc Villacorta
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HADOOP-9851.01.patch, HADOOP-9851.02.patch
>
>
> I intend to set user and group:
> *User:* _MYCOMPANY+marc.villacorta_
> *Group:* hadoop
> where _'+'_ is what we use as a winbind separator.
> And this is what I get:
> {code:none}
> sudo -u hdfs hadoop fs -touchz /tmp/test.txt
> sudo -u hdfs hadoop fs -chown MYCOMPANY+marc.villacorta:hadoop /tmp/test.txt
> -chown: 'MYCOMPANY+marc.villacorta:hadoop' does not match expected pattern 
> for [owner][:group].
> Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
> {code}
> I am using version: 2.0.0-cdh4.3.0
> Quote 
> [source|http://h30097.www3.hp.com/docs/iass/OSIS_62/MAN/MAN8/0044.HTM]:
> {quote}
> winbind separator
>The winbind separator option allows you to specify how NT domain names
>and user names are combined into unix user names when presented to
>users. By default, winbindd will use the traditional '\' separator so
>that the unix user names look like DOMAIN\username. In some cases this
>separator character may cause problems as the '\' character has
>special meaning in unix shells. In that case you can use the winbind
>separator option to specify an alternative separator character. Good
>alternatives may be '/' (although that conflicts with the unix
>directory separator) or a '+' character. The '+' character appears to
>be the best choice for 100% compatibility with existing unix
>utilities, but may be an aesthetically bad choice depending on your
>taste.
>Default: winbind separator = \
>Example: winbind separator = +
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ishaniahuja commented on a change in pull request #2072: HADOOP-17058. ABFS: Support for AppendBlob in Hadoop ABFS Driver

2020-06-15 Thread GitBox


ishaniahuja commented on a change in pull request #2072:
URL: https://github.com/apache/hadoop/pull/2072#discussion_r440095900



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
##
@@ -1314,7 +1352,30 @@ private String 
convertXmsPropertiesToCommaSeparatedString(final Hashtable dirSet) {
+
+for (String dir : dirSet) {
+  if (dir.isEmpty() || key.startsWith(dir)) {
+return true;
+  }
+
+  try {
+URI uri = new URI(dir);
+if (null == uri.getAuthority()) {
+  if (key.startsWith(dir + "/")){
+return true;
+  }
+}
+  } catch (URISyntaxException e) {
+LOG.info("URI syntax error creating URI for {}", dir);

Review comment:
   But what if only one comma-separated value is incorrect? We would be 
throwing an exception and crashing the app/JVM.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ishaniahuja commented on a change in pull request #2072: HADOOP-17058. ABFS Support for AppendBlob in Hadoop ABFS Driver

2020-06-15 Thread GitBox


ishaniahuja commented on a change in pull request #2072:
URL: https://github.com/apache/hadoop/pull/2072#discussion_r440081448



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
##
@@ -389,6 +423,12 @@ private synchronized void 
flushWrittenBytesToServiceAsync() throws IOException {
 
   private synchronized void flushWrittenBytesToServiceInternal(final long 
offset,
   final boolean retainUncommitedData, final boolean isClose) throws 
IOException {
+
+// flush is not called for append blob, as it is not needed
+if (this.isAppendBlob) {
+  return;

Review comment:
   This can lead to frequent log lines, one every time flush(), hflush(), or 
hsync() is called. Will that be OK?
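   
   A hedged alternative (an assumption, not something already in this PR): 
guard the message so each stream logs the no-op at most once:
   
   ```java
   // Hypothetical one-shot guard field on AbfsOutputStream.
   private boolean appendBlobFlushLogged = false;
   
   // Inside flushWrittenBytesToServiceInternal():
   if (this.isAppendBlob) {
     if (!appendBlobFlushLogged) {
       LOG.debug("flush is a no-op for append blob output streams");
       appendBlobFlushLogged = true;
     }
     return;
   }
   ```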





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] vinayakumarb commented on pull request #2075: YARN-10314. YarnClient throws NoClassDefFoundError for WebSocketException with only shaded client jars

2020-06-15 Thread GitBox


vinayakumarb commented on pull request #2075:
URL: https://github.com/apache/hadoop/pull/2075#issuecomment-644011702


   @aajisaka @steveloughran @jojochuang @ayushtkn 
   Please review, thanks



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on pull request #2056: HADOOP-17065. Adding Network Counters in ABFS

2020-06-15 Thread GitBox


mukund-thakur commented on pull request #2056:
URL: https://github.com/apache/hadoop/pull/2056#issuecomment-643994075


   LGTM. Great patch @mehakmeet 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on a change in pull request #2056: HADOOP-17065. Adding Network Counters in ABFS

2020-06-15 Thread GitBox


mukund-thakur commented on a change in pull request #2056:
URL: https://github.com/apache/hadoop/pull/2056#discussion_r440022161



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
##
@@ -1214,11 +1217,11 @@ private void initializeClient(URI uri, String 
fileSystemName, String accountName
 if (tokenProvider != null) {
   this.client = new AbfsClient(baseUrl, creds, abfsConfiguration,
   new ExponentialRetryPolicy(abfsConfiguration.getMaxIoRetries()),
-  tokenProvider, abfsPerfTracker);
+  tokenProvider, abfsPerfTracker, instrumentation);
 } else {
   this.client = new AbfsClient(baseUrl, creds, abfsConfiguration,
   new ExponentialRetryPolicy(abfsConfiguration.getMaxIoRetries()),
-  sasTokenProvider, abfsPerfTracker);
+  sasTokenProvider, abfsPerfTracker, instrumentation);

Review comment:
   At this point the variable name is instrumentation, and then in 
AbfsClient() it is statistics. I know these classes were created earlier, 
but I feel the names are a bit confusing.
   A few suggestions:
   - AbfsCounters could become AbfsInstrumentation, and AbfsInstrumentation 
could become AbfsInstrumentationImpl.
   - Or keep AbfsCounters the same and change AbfsInstrumentation to 
AbfsCountersImpl.
   It is not critical to change, but if everybody feels the same, the sooner 
the better.
   CC @steveloughran 
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on a change in pull request #2056: HADOOP-17065. Adding Network Counters in ABFS

2020-06-15 Thread GitBox


mukund-thakur commented on a change in pull request #2056:
URL: https://github.com/apache/hadoop/pull/2056#discussion_r440013198



##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsNetworkStatistics.java
##
@@ -0,0 +1,253 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import java.io.IOException;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream;
+import org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation;
+
+public class ITestAbfsNetworkStatistics extends AbstractAbfsIntegrationTest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ITestAbfsNetworkStatistics.class);
+  private static final int LARGE_OPERATIONS = 10;
+
+  public ITestAbfsNetworkStatistics() throws Exception {
+  }
+
+  /**
+   * Testing connections_made, send_request and bytes_send statistics in
+   * {@link AbfsRestOperation}.
+   */
+  @Test
+  public void testAbfsHttpSendStatistics() throws IOException {
+describe("Test to check correct values of statistics after Abfs http send "
++ "request is done.");
+
+AzureBlobFileSystem fs = getFileSystem();
+Map<String, Long> metricMap;
+Path sendRequestPath = path(getMethodName());
+String testNetworkStatsString = "http_send";
+long connectionsMade, requestsSent, bytesSent;
+
+/*
+ * Creating AbfsOutputStream will result in 1 connection made and 1 send
+ * request.
+ */
+try (AbfsOutputStream out = createAbfsOutputStreamWithFlushEnabled(fs,
+sendRequestPath)) {
+  out.write(testNetworkStatsString.getBytes());
+
+  /*
+   * Flushes all outstanding data (i.e. the current unfinished packet)
+   * from the client into the service on all DataNode replicas.
+   */
+  out.hflush();
+
+  metricMap = fs.getInstrumentationMap();
+
+  /*
+   * Testing the network stats with 1 write operation.
+   *
+   * connections_made : 3(getFileSystem()) + 1(AbfsOutputStream) + 
2(flush).
+   *
+   * send_requests : 1(getFileSystem()) + 1(AbfsOutputStream) + 2(flush).
+   *
+   * bytes_sent : bytes wrote in AbfsOutputStream.
+   */
+  connectionsMade = assertAbfsStatistics(AbfsStatistic.CONNECTIONS_MADE,
+  6, metricMap);
+  requestsSent = assertAbfsStatistics(AbfsStatistic.SEND_REQUESTS, 4,
+  metricMap);
+  bytesSent = assertAbfsStatistics(AbfsStatistic.BYTES_SENT,
+  testNetworkStatsString.getBytes().length, metricMap);
+
+}
+
+// To close the AbfsOutputStream 1 connection is made and 1 request is 
sent.
+connectionsMade++;
+requestsSent++;
+
+try (AbfsOutputStream out = createAbfsOutputStreamWithFlushEnabled(fs,
+sendRequestPath)) {
+
+  for (int i = 0; i < LARGE_OPERATIONS; i++) {
+out.write(testNetworkStatsString.getBytes());
+
+/*
+ * 1 flush call would create 2 connections and 2 send requests.
+ * when hflush() is called it will essentially trigger append() and
+ * flush() inside AbfsRestOperation. Both of which calls
+ * executeHttpOperation() method which creates a connection and sends
+ * requests.
+ */
+out.hflush();
+  }
+
+  metricMap = fs.getInstrumentationMap();
+
+  /*
+   * Testing the network stats with Large amount of bytes sent.
+   *
+   * connections made : connections_made(Last assertion) + 1
+   * (AbfsOutputStream) + LARGE_OPERATIONS * 2(flush).
+   *
+   * send requests : requests_sent(Last assertion) + 1(AbfsOutputStream) +
+   * LARGE_OPERATIONS * 2(flush).
+   *
+   * bytes sent : bytes_sent(Last assertion) + LARGE_OPERATIONS * (bytes
+   * wrote each time).
+   *
+   */
+  assertAbfsStatistics(AbfsStatistic.CONNECTIONS_MADE,
+  connectionsMade + 1 + LARGE_OPERATIONS * 2, metricMap);
+

[GitHub] [hadoop] ishaniahuja commented on a change in pull request #2072: HADOOP-17058. ABFS Support for AppendBlob in Hadoop ABFS Driver

2020-06-15 Thread GitBox


ishaniahuja commented on a change in pull request #2072:
URL: https://github.com/apache/hadoop/pull/2072#discussion_r440002446



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
##
@@ -59,6 +59,9 @@
   public static final String FS_AZURE_ENABLE_AUTOTHROTTLING = 
"fs.azure.enable.autothrottling";
   public static final String FS_AZURE_ALWAYS_USE_HTTPS = 
"fs.azure.always.use.https";
   public static final String FS_AZURE_ATOMIC_RENAME_KEY = 
"fs.azure.atomic.rename.key";
+  /** Config providing comma-separated path prefixes on which append blob 
+   *  based files are created. Default is empty. **/
+  public static final String FS_AZURE_APPEND_BLOB_KEY = 
"fs.azure.appendblob.key";

Review comment:
   I have made it similar to FS_AZURE_ATOMIC_RENAME_KEY, which also provides 
a set of directories. Further, please note that for append blob this is 
actually a prefix of the path (and not necessarily a directory). This is done 
so that the test suite (which runs on a container with a random GUID name) 
can run against append blob based files. Let me know your comments/thoughts 
here.
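   
   A hypothetical usage sketch (the key name comes from this PR; the exact 
matching semantics may differ): entries are treated as prefixes of the blob 
key, which is what lets a test container with a random GUID name opt in:
   
   ```java
   import org.apache.hadoop.conf.Configuration;
   
   Configuration conf = new Configuration();
   // Both entries are key prefixes, not directory names.
   conf.set("fs.azure.appendblob.key", "/streaming/logs,abfs-testcontainer-");
   ```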





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ishaniahuja commented on a change in pull request #2072: HADOOP-17058. ABFS Support for AppendBlob in Hadoop ABFS Driver

2020-06-15 Thread GitBox


ishaniahuja commented on a change in pull request #2072:
URL: https://github.com/apache/hadoop/pull/2072#discussion_r440001201



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
##
@@ -1314,7 +1352,30 @@ private String 
convertXmsPropertiesToCommaSeparatedString(final Hashtable dirSet) {
+
+for (String dir : dirSet) {
+  if (dir.isEmpty() || key.startsWith(dir)) {
+return true;
+  }
+
+  try {
+URI uri = new URI(dir);
+if (null == uri.getAuthority()) {
+  if (key.startsWith(dir + "/")){
+return true;
+  }
+}
+  } catch (URISyntaxException e) {
+LOG.info("URI syntax error creating URI for {}", dir);

Review comment:
   This is used for every file being created, returning true or false. 
Raising an exception here could be a problem.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17068) client fails forever when namenode ipaddr changed

2020-06-15 Thread Sean Chow (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17135567#comment-17135567
 ] 

Sean Chow commented on HADOOP-17068:


Hi [~hexiaoqiao], are there any thoughts about the exception? I hope this 
patch solves the issue.

As [~ayushtkn] mentioned, a new unit test may need to be added, but I think 
it's not easy to mock the third namenode ipaddr change (it needs to be 
connected).

> client fails forever when namenode ipaddr changed
> -
>
> Key: HADOOP-17068
> URL: https://issues.apache.org/jira/browse/HADOOP-17068
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.10.0, 2.9.2, 3.2.1
>Reporter: Sean Chow
>Priority: Major
> Attachments: HDFS-15390.01.patch
>
>
> For machine replacement, I replaced my standby namenode with a new ipaddr and 
> kept the same hostname, and also updated the client's hosts file so the name 
> resolves correctly.
> When I try to fail over to the new namenode (let's say nn2), the client fails 
> to read or write forever until it is restarted.
> That leaves the YARN nodemanagers in a sick state; even new tasks encounter 
> this exception too, until all nodemanagers restart.
>  
> {code:java}
> 20/06/02 15:12:25 WARN ipc.Client: Address change detected. Old: 
> nn2-192-168-1-100/192.168.1.100:9000 New: nn2-192-168-1-100/192.168.1.200:9000
> 20/06/02 15:12:25 DEBUG ipc.Client: closing ipc connection to 
> nn2-192-168-1-100/192.168.1.200:9000: Connection refused
> java.net.ConnectException: Connection refused
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:608)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:707)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1517)
> at org.apache.hadoop.ipc.Client.call(Client.java:1440)
> at org.apache.hadoop.ipc.Client.call(Client.java:1401)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
> at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
> at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:193)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> {code}
>  
> We can see the client has {{Address change detected}}, but it still fails. I 
> found out that's because when {{updateAddress()}} returns true, 
> {{handleConnectionFailure()}} throws an exception that breaks the next retry 
> with the right ipaddr.
> Client.java: setupConnection()
> {code:java}
> } catch (ConnectTimeoutException toe) {
>   /* Check for an address change and update the local reference.
>* Reset the failure counter if the address was changed
>*/
>   if (updateAddress()) {
> timeoutFailures = ioFailures = 0;
>   }
>   handleConnectionTimeout(timeoutFailures++,
>   maxRetriesOnSocketTimeouts, toe);
> } catch (IOException ie) {
>   if (updateAddress()) {
> timeoutFailures = ioFailures = 0;
>   }
> // because the namenode ip changed in updateAddress(), the old namenode 
> // ipaddress cannot be accessed now.
> // handleConnectionFailure will throw an exception, so the next retry never 
> // gets a chance to use the right server updated in updateAddress()
>   handleConnectionFailure(ioFailures++, ie);
> }
> {code}
>  
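> One sketch of a fix consistent with this analysis (an assumption for 
> illustration, not necessarily what HDFS-15390.01.patch does): when 
> updateAddress() reports a change, reset the counters and retry immediately 
> instead of entering the failure handler with the stale address.
> {code:java}
> // Hypothetical fragment of the setupConnection() retry loop.
> while (true) {
>   try {
>     connectTo(currentAddress);   // hypothetical connect helper
>     break;
>   } catch (IOException ie) {
>     if (updateAddress()) {
>       timeoutFailures = ioFailures = 0;
>       continue;                  // next attempt uses the re-resolved address
>     }
>     handleConnectionFailure(ioFailures++, ie);  // may throw and end retries
>   }
> }
> {code}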



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2075: YARN-10314. YarnClient throws NoClassDefFoundError for WebSocketException with only shaded client jars

2020-06-15 Thread GitBox


hadoop-yetus commented on pull request #2075:
URL: https://github.com/apache/hadoop/pull/2075#issuecomment-643931336


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 30s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  1s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  18m 44s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m  3s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   6m 16s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 15s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 15s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  13m 34s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 18s |  hadoop-client-runtime in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 18s |  hadoop-client-minicluster in the 
patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  61m 23s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2075/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2075 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 591f59e2a4f7 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 81d8a887b04 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2075/1/testReport/ |
   | Max. process+thread count | 460 (vs. ulimit of 5500) |
   | modules | C: hadoop-client-modules/hadoop-client-runtime 
hadoop-client-modules/hadoop-client-minicluster U: hadoop-client-modules |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2075/1/console |
   | versions | git=2.17.1 maven=3.6.0 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org