[jira] [Updated] (HADOOP-13340) Compress Hadoop Archive output

2018-07-17 Thread Koji Noguchi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Noguchi updated HADOOP-13340:
--
Attachment: HADOOP-13340-example-v02.patch

bq. Hmm, the updated unit test is failing for me. Please ignore it; I'll upload 
another one.

Seems like the recent addition of commons-lang3 broke the unit test; just 
taking out that jar fixed the ClassNotFoundException.
From the last patch example, I updated getFileBlockLocations to fake the block 
size so that the application still sees the full file size 
({{HADOOP-13340-example-v02.patch}}).
This breaks another transparency guarantee (or contract), though.
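For illustration, a minimal sketch of that kind of override, assuming a 
HarFileSystem-style wrapper; the index-lookup helper is hypothetical and this 
is not the attached patch:

{code:java}
// Hypothetical sketch only: report block locations sized to the original,
// uncompressed length so applications that sum block lengths still see the
// full file size.
import java.io.IOException;

import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;

public class CompressedHarBlockLocations {
  public BlockLocation[] getFileBlockLocations(FileStatus file, long start,
      long len) throws IOException {
    // Hypothetical helper that would read the logical (pre-compression)
    // length from the har index.
    long logicalLen = lookupUncompressedLength(file);
    // A single fake location spanning the whole logical range; host lists are
    // empty because the bytes actually live inside the compressed part file.
    return new BlockLocation[] {
        new BlockLocation(new String[0], new String[0], 0, logicalLen)
    };
  }

  private long lookupUncompressedLength(FileStatus file) throws IOException {
    throw new UnsupportedOperationException("index lookup omitted in sketch");
  }
}
{code}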


> Compress Hadoop Archive output
> --
>
> Key: HADOOP-13340
> URL: https://issues.apache.org/jira/browse/HADOOP-13340
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: tools
>Affects Versions: 2.5.0
>Reporter: Duc Le Tu
>Priority: Major
>  Labels: features, performance
> Attachments: HADOOP-13340-example-v01.patch, 
> HADOOP-13340-example-v02.patch
>
>
> Why can't the Hadoop Archive tool compress its output like other map-reduce 
> jobs? 
> I used options like -D mapreduce.output.fileoutputformat.compress=true 
> -D 
> mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.GzipCodec
>  but it does not work. Did I do something wrong?
> If not, please support an option to compress the output of the Hadoop 
> Archive tool; it's very necessary for data retention for everyone (the 
> small-files problem and compressing data).
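For reference, the two properties the reporter tried correspond to these 
MapReduce API calls (a minimal sketch of a regular MR job; the archive tool 
does not currently honour them, which is what this issue asks for):

{code:java}
// Sketch of what those -D options do in an ordinary MapReduce job; the har
// job would need equivalent wiring for compression to take effect.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class CompressedOutputExample {
  public static Job configure(Configuration conf) throws Exception {
    Job job = Job.getInstance(conf, "compressed-output");
    // Equivalent to -D mapreduce.output.fileoutputformat.compress=true
    FileOutputFormat.setCompressOutput(job, true);
    // Equivalent to -D mapreduce.output.fileoutputformat.compress.codec=
    //   org.apache.hadoop.io.compress.GzipCodec
    FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
    return job;
  }
}
{code}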



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15610) Hadoop Docker Image Pip Install Fails

2018-07-17 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547356#comment-16547356
 ] 

genericqa commented on HADOOP-15610:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  3m 
26s{color} | {color:red} Docker failed to build yetus/hadoop:abb62dd. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15610 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931992/HADOOP-15610.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14900/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Hadoop Docker Image Pip Install Fails
> -
>
> Key: HADOOP-15610
> URL: https://issues.apache.org/jira/browse/HADOOP-15610
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jack Bearden
>Assignee: Jack Bearden
>Priority: Minor
>  Labels: docker, trunk
> Attachments: HADOOP-15610.001.patch, HADOOP-15610.002.patch
>
>
> The Hadoop Docker image on trunk does not build. The pip package on the 
> Ubuntu Xenial repo is out of date and fails by throwing the following error 
> when attempting to install pylint:
> "You are using pip version 8.1.1, however version 10.0.1 is available"
> The following patch fixes this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15546) ABFS: tune imports & javadocs; stabilise tests

2018-07-17 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547355#comment-16547355
 ] 

genericqa commented on HADOOP-15546:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m  
8s{color} | {color:red} Docker failed to build yetus/hadoop:abb62dd. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15546 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931670/HADOOP-15546-HADOOP-15407-006.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14899/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ABFS: tune imports & javadocs; stabilise tests
> --
>
> Key: HADOOP-15546
> URL: https://issues.apache.org/jira/browse/HADOOP-15546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: HADOOP-15407
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15546-001.patch, 
> HADOOP-15546-HADOOP-15407-001.patch, HADOOP-15546-HADOOP-15407-002.patch, 
> HADOOP-15546-HADOOP-15407-003.patch, HADOOP-15546-HADOOP-15407-004.patch, 
> HADOOP-15546-HADOOP-15407-005.patch, HADOOP-15546-HADOOP-15407-006.patch, 
> HADOOP-15546-HADOOP-15407-006.patch
>
>
> Followup on HADOOP-15540 with some initial review tuning
> h2. Tuning
> * ordering of imports
> * rely on azure-auth-keys.xml to store credentials (change imports, 
> docs,.gitignore)
> * log4j -> info
> * add a "." to the first sentence of all the javadocs I noticed.
> * remove @Public annotations except for some constants (which includes some 
> commitment to maintain them).
> * move the AbstractFS declarations out of the src/test/resources XML file 
> into core-default.xml for all to use
> * other IDE-suggested tweaks
> h2. Testing
> Review the tests, move to ContractTestUtil assertions, make more consistent 
> to contract test setup, and general work to make the tests work well over 
> slower links, document, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15610) Hadoop Docker Image Pip Install Fails

2018-07-17 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547354#comment-16547354
 ] 

genericqa commented on HADOOP-15610:


(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14900/console in case of 
problems.


> Hadoop Docker Image Pip Install Fails
> -
>
> Key: HADOOP-15610
> URL: https://issues.apache.org/jira/browse/HADOOP-15610
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jack Bearden
>Assignee: Jack Bearden
>Priority: Minor
>  Labels: docker, trunk
> Attachments: HADOOP-15610.001.patch, HADOOP-15610.002.patch
>
>
> The Hadoop Docker image on trunk does not build. The pip package on the 
> Ubuntu Xenial repo is out of date and fails by throwing the following error 
> when attempting to install pylint:
> "You are using pip version 8.1.1, however version 10.0.1 is available"
> The following patch fixes this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15612) Improve exception when tfile fails to load LzoCodec

2018-07-17 Thread Gera Shegalov (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547343#comment-16547343
 ] 

Gera Shegalov commented on HADOOP-15612:


[~ste...@apache.org], thank you for the suggestion. {{FileNotFoundException}} 
does not resonate with me, to be honest, since the previously considered 
reasons for the exception are either 'conf key not found' or 'class not 
found'. I presume you allude to the missing hadoop-lzo jar, but that is easy 
to confuse with tfile itself being missing, which is not the case. That said, 
it's more about the error message for me than about the exception class.
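To make that concrete, a hedged sketch of keeping both a clearer message and 
the root cause (class, method, and message text are illustrative, not the 
attached patch):

{code:java}
// Illustrative sketch: attach the underlying ClassNotFoundException as the
// cause instead of discarding it, and point at the jar that is likely missing.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;

public class LzoCodecLoading {
  static Class<?> loadLzoCodec(Configuration conf, String codecName)
      throws IOException {
    try {
      return conf.getClassByName(codecName);
    } catch (ClassNotFoundException e) {
      // The message names the usual culprit (hadoop-lzo not on the
      // classpath); the cause preserves the real stack trace.
      throw new IOException("LZO codec " + codecName
          + " could not be loaded; is the hadoop-lzo jar on the classpath?", e);
    }
  }
}
{code}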

> Improve exception when tfile fails to load LzoCodec 
> 
>
> Key: HADOOP-15612
> URL: https://issues.apache.org/jira/browse/HADOOP-15612
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Major
> Attachments: HADOOP-15612.001.patch
>
>
> When hadoop-lzo is not on the classpath, you get
> {code:java}
> java.io.IOException: LZO codec class not specified. Did you forget to set 
> property io.compression.codec.lzo.class?{code}
> which is probably rarely the real cause given the default class name. The 
> real root cause is not attached to the exception thrown.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-15615) MapReduce example in v3.1.0 is not working

2018-07-17 Thread Tom Lo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Lo resolved HADOOP-15615.
-
  Resolution: Invalid
Release Note: 
Do not put directories into /input (the grep example reads files only); 
remove any copied directories:

```
./bin/hdfs dfs -put etc/hadoop/* /input
./bin/hdfs dfs -rm -r -f /input/shellprofile.d
```

> MapReduce example in v3.1.0 is not working
> --
>
> Key: HADOOP-15615
> URL: https://issues.apache.org/jira/browse/HADOOP-15615
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tom Lo
>Priority: Minor
>
> I ran
> ```
> cd ~/hadoop-3.1.0
> ./bin/hdfs dfs -mkdir /input
> ./bin/hdfs dfs -put etc/hadoop/* /input
> ./bin/hdfs dfs -ls /input
> bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.0.jar 
> grep /input /output 'dfs[a-z.]+'
> ```
>  
> But I got:
> ```
> 2018-07-18 11:08:16,295 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2018-07-18 11:08:17,659 INFO client.RMProxy: Connecting to ResourceManager at 
> /0.0.0.0:8032
> 2018-07-18 11:08:18,676 INFO mapreduce.JobResourceUploader: Disabling Erasure 
> Coding for path: 
> /tmp/hadoop-yarn/staging/101-medialab-tomlo/.staging/job_1531881751698_0008
> 2018-07-18 11:08:19,055 INFO input.FileInputFormat: Total input files to 
> process : 32
> 2018-07-18 11:08:19,132 INFO mapreduce.JobSubmitter: number of splits:32
> 2018-07-18 11:08:19,178 INFO Configuration.deprecation: 
> yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, 
> use yarn.system-metrics-publisher.enabled
> 2018-07-18 11:08:19,315 INFO mapreduce.JobSubmitter: Submitting tokens for 
> job: job_1531881751698_0008
> 2018-07-18 11:08:19,318 INFO mapreduce.JobSubmitter: Executing with tokens: []
> 2018-07-18 11:08:19,588 INFO conf.Configuration: resource-types.xml not found
> 2018-07-18 11:08:19,589 INFO resource.ResourceUtils: Unable to find 
> 'resource-types.xml'.
> 2018-07-18 11:08:19,711 INFO impl.YarnClientImpl: Submitted application 
> application_1531881751698_0008
> 2018-07-18 11:08:19,777 INFO mapreduce.Job: The url to track the job: 
> http://localhost:8088/proxy/application_1531881751698_0008/
> 2018-07-18 11:08:19,778 INFO mapreduce.Job: Running job: 
> job_1531881751698_0008
> 2018-07-18 11:08:29,966 INFO mapreduce.Job: Job job_1531881751698_0008 
> running in uber mode : false
> 2018-07-18 11:08:29,969 INFO mapreduce.Job:  map 0% reduce 0%
> 2018-07-18 11:08:49,203 INFO mapreduce.Job:  map 13% reduce 0%
> 2018-07-18 11:08:50,214 INFO mapreduce.Job:  map 19% reduce 0%
> 2018-07-18 11:09:20,501 INFO mapreduce.Job:  map 28% reduce 0%
> 2018-07-18 11:09:21,511 INFO mapreduce.Job:  map 34% reduce 0%
> 2018-07-18 11:09:29,592 INFO mapreduce.Job:  map 34% reduce 11%
> 2018-07-18 11:09:45,714 INFO mapreduce.Job:  map 38% reduce 11%
> 2018-07-18 11:09:47,729 INFO mapreduce.Job:  map 50% reduce 13%
> 2018-07-18 11:09:53,788 INFO mapreduce.Job:  map 50% reduce 17%
> 2018-07-18 11:10:14,950 INFO mapreduce.Job:  map 53% reduce 17%
> 2018-07-18 11:10:15,957 INFO mapreduce.Job:  map 59% reduce 17%
> 2018-07-18 11:10:16,965 INFO mapreduce.Job:  map 66% reduce 17%
> 2018-07-18 11:10:17,972 INFO mapreduce.Job:  map 66% reduce 22%
> 2018-07-18 11:10:41,164 INFO mapreduce.Job:  map 69% reduce 22%
> 2018-07-18 11:10:42,169 INFO mapreduce.Job:  map 81% reduce 22%
> 2018-07-18 11:10:47,210 INFO mapreduce.Job:  map 81% reduce 27%
> 2018-07-18 11:11:08,350 INFO mapreduce.Job:  map 84% reduce 27%
> 2018-07-18 11:11:09,360 INFO mapreduce.Job:  map 97% reduce 27%
> 2018-07-18 11:11:11,380 INFO mapreduce.Job:  map 97% reduce 32%
> 2018-07-18 11:11:23,487 INFO mapreduce.Job: Task Id : 
> attempt_1531881751698_0008_m_31_0, Status : FAILED
> Error: java.io.FileNotFoundException: Path is not a file: 
> /input/shellprofile.d
>  at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:90)
>  at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:76)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:153)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1927)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:738)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:424)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>  at 

[jira] [Created] (HADOOP-15615) MapReduce example in v3.1.0 is not working

2018-07-17 Thread Tom Lo (JIRA)
Tom Lo created HADOOP-15615:
---

 Summary: MapReduce example in v3.1.0 is not working
 Key: HADOOP-15615
 URL: https://issues.apache.org/jira/browse/HADOOP-15615
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tom Lo


I ran

```
cd ~/hadoop-3.1.0
./bin/hdfs dfs -mkdir /input
./bin/hdfs dfs -put etc/hadoop/* /input
./bin/hdfs dfs -ls /input
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.0.jar grep 
/input /output 'dfs[a-z.]+'
```

But I got:

```
2018-07-18 11:08:16,295 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
2018-07-18 11:08:17,659 INFO client.RMProxy: Connecting to ResourceManager at 
/0.0.0.0:8032
2018-07-18 11:08:18,676 INFO mapreduce.JobResourceUploader: Disabling Erasure 
Coding for path: 
/tmp/hadoop-yarn/staging/101-medialab-tomlo/.staging/job_1531881751698_0008
2018-07-18 11:08:19,055 INFO input.FileInputFormat: Total input files to 
process : 32
2018-07-18 11:08:19,132 INFO mapreduce.JobSubmitter: number of splits:32
2018-07-18 11:08:19,178 INFO Configuration.deprecation: 
yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, 
use yarn.system-metrics-publisher.enabled
2018-07-18 11:08:19,315 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
job_1531881751698_0008
2018-07-18 11:08:19,318 INFO mapreduce.JobSubmitter: Executing with tokens: []
2018-07-18 11:08:19,588 INFO conf.Configuration: resource-types.xml not found
2018-07-18 11:08:19,589 INFO resource.ResourceUtils: Unable to find 
'resource-types.xml'.
2018-07-18 11:08:19,711 INFO impl.YarnClientImpl: Submitted application 
application_1531881751698_0008
2018-07-18 11:08:19,777 INFO mapreduce.Job: The url to track the job: 
http://localhost:8088/proxy/application_1531881751698_0008/
2018-07-18 11:08:19,778 INFO mapreduce.Job: Running job: job_1531881751698_0008
2018-07-18 11:08:29,966 INFO mapreduce.Job: Job job_1531881751698_0008 running 
in uber mode : false
2018-07-18 11:08:29,969 INFO mapreduce.Job:  map 0% reduce 0%
2018-07-18 11:08:49,203 INFO mapreduce.Job:  map 13% reduce 0%
2018-07-18 11:08:50,214 INFO mapreduce.Job:  map 19% reduce 0%
2018-07-18 11:09:20,501 INFO mapreduce.Job:  map 28% reduce 0%
2018-07-18 11:09:21,511 INFO mapreduce.Job:  map 34% reduce 0%
2018-07-18 11:09:29,592 INFO mapreduce.Job:  map 34% reduce 11%
2018-07-18 11:09:45,714 INFO mapreduce.Job:  map 38% reduce 11%
2018-07-18 11:09:47,729 INFO mapreduce.Job:  map 50% reduce 13%
2018-07-18 11:09:53,788 INFO mapreduce.Job:  map 50% reduce 17%
2018-07-18 11:10:14,950 INFO mapreduce.Job:  map 53% reduce 17%
2018-07-18 11:10:15,957 INFO mapreduce.Job:  map 59% reduce 17%
2018-07-18 11:10:16,965 INFO mapreduce.Job:  map 66% reduce 17%
2018-07-18 11:10:17,972 INFO mapreduce.Job:  map 66% reduce 22%
2018-07-18 11:10:41,164 INFO mapreduce.Job:  map 69% reduce 22%
2018-07-18 11:10:42,169 INFO mapreduce.Job:  map 81% reduce 22%
2018-07-18 11:10:47,210 INFO mapreduce.Job:  map 81% reduce 27%
2018-07-18 11:11:08,350 INFO mapreduce.Job:  map 84% reduce 27%
2018-07-18 11:11:09,360 INFO mapreduce.Job:  map 97% reduce 27%
2018-07-18 11:11:11,380 INFO mapreduce.Job:  map 97% reduce 32%
2018-07-18 11:11:23,487 INFO mapreduce.Job: Task Id : 
attempt_1531881751698_0008_m_31_0, Status : FAILED
Error: java.io.FileNotFoundException: Path is not a file: /input/shellprofile.d
 at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:90)
 at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:76)
 at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:153)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1927)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:738)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:424)
 at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)

 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at 

[jira] [Commented] (HADOOP-15612) Improve exception when tfile fails to load LzoCodec

2018-07-17 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547304#comment-16547304
 ] 

Steve Loughran commented on HADOOP-15612:
-

What about making it a FileNotFoundException? Though it's not quite right.

> Improve exception when tfile fails to load LzoCodec 
> 
>
> Key: HADOOP-15612
> URL: https://issues.apache.org/jira/browse/HADOOP-15612
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Major
> Attachments: HADOOP-15612.001.patch
>
>
> When hadoop-lzo is not on the classpath, you get
> {code:java}
> java.io.IOException: LZO codec class not specified. Did you forget to set 
> property io.compression.codec.lzo.class?{code}
> which is probably rarely the real cause given the default class name. The 
> real root cause is not attached to the exception thrown.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15407) Support Windows Azure Storage - Blob file system in Hadoop

2018-07-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15407:

Priority: Blocker  (was: Major)

> Support Windows Azure Storage - Blob file system in Hadoop
> --
>
> Key: HADOOP-15407
> URL: https://issues.apache.org/jira/browse/HADOOP-15407
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Esfandiar Manii
>Assignee: Da Zhou
>Priority: Blocker
> Attachments: HADOOP-15407-001.patch, HADOOP-15407-002.patch, 
> HADOOP-15407-003.patch, HADOOP-15407-004.patch, HADOOP-15407-008.patch, 
> HADOOP-15407-HADOOP-15407-008.patch, HADOOP-15407-HADOOP-15407.006.patch, 
> HADOOP-15407-HADOOP-15407.007.patch, HADOOP-15407-HADOOP-15407.008.patch
>
>
> *{color:#212121}Description{color}*
>  This JIRA adds a new file system implementation, ABFS, for running Big Data 
> and Analytics workloads against Azure Storage. This is a complete rewrite of 
> the previous WASB driver with a heavy focus on optimizing both performance 
> and cost.
>  {color:#212121} {color}
>  *{color:#212121}High level design{color}*
>  At a high level, the code here extends the FileSystem class to provide an 
> implementation for accessing blobs in Azure Storage. The scheme abfs is used 
> for accessing it over HTTP, and abfss for accessing over HTTPS. The following 
> URI scheme is used to address individual paths:
>  {color:#212121} {color}
>  
> {color:#212121}abfs[s]://<file_system>@<account_name>.dfs.core.windows.net/<path>{color}
>  {color:#212121} {color}
>  {color:#212121}ABFS is intended as a replacement to WASB. WASB is not 
> deprecated but is in pure maintenance mode and customers should upgrade to 
> ABFS once it hits General Availability later in CY18.{color}
>  {color:#212121}Benefits of ABFS include:{color}
>  {color:#212121}· Higher scale (capacity, throughput, and IOPS) for Big 
> Data and Analytics workloads by allowing higher limits on storage 
> accounts{color}
>  {color:#212121}· Removing any ramp-up time with Storage backend 
> partitioning; blocks are now automatically sharded across partitions in the 
> Storage backend. This avoids the need for temporary/intermediate files, 
> which increase cost (and framework complexity around committing 
> jobs/tasks){color}
>  {color:#212121}· Enabling much higher read and write throughput on 
> single files (tens of Gbps by default){color}
>  {color:#212121}· Still retaining all of the Azure Blob features 
> customers are familiar with and expect, and gaining the benefits of future 
> Blob features as well{color}
>  {color:#212121}ABFS incorporates Hadoop Filesystem metrics to monitor the 
> file system throughput and operations. Ambari metrics are not currently 
> implemented for ABFS, but will be available soon.{color}
>  {color:#212121} {color}
>  *{color:#212121}Credits and history{color}*
>  Credit for this work goes to (hope I don't forget anyone): Shane Mainali, 
> {color:#212121}Thomas Marquardt, Zichen Sun, Georgi Chalakov, Esfandiar 
> Manii, Amit Singh, Dana Kaban, Da Zhou, Junhua Gu, Saher Ahwal, Saurabh Pant, 
> and James Baker. {color}
>  {color:#212121} {color}
>  *Test*
>  ABFS has gone through many test procedures including Hadoop file system 
> contract tests, unit testing, functional testing, and manual testing. All the 
> Junit tests provided with the driver are capable of running in both 
> sequential/parallel fashion in order to reduce the testing time.
>  {color:#212121}Besides unit tests, we have used ABFS as the default file 
> system in Azure HDInsight. Azure HDInsight will very soon offer ABFS as a 
> storage option. (HDFS is also used but not as default file system.) Various 
> different customer and test workloads have been run against clusters with 
> such configurations for quite some time. Benchmarks such as Tera*, TPC-DS, 
> Spark Streaming and Spark SQL, and others have been run to do scenario, 
> performance, and functional testing. Third parties and customers have also 
> done various testing of ABFS.{color}
>  {color:#212121}The current version reflects the version of the code 
> tested and used in our production environment.{color}
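As a usage illustration of the URI scheme described above, a minimal sketch 
(the account and filesystem names are placeholders, not real endpoints):

{code:java}
// Minimal sketch of addressing an ABFS path via the new scheme; "myfs" and
// "myaccount" are placeholder names.
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AbfsUriExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(
        URI.create("abfs://myfs@myaccount.dfs.core.windows.net/"), conf);
    // List the root of the filesystem to verify connectivity.
    for (FileStatus st : fs.listStatus(new Path("/"))) {
      System.out.println(st.getPath());
    }
  }
}
{code}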



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15546) ABFS: tune imports & javadocs; stabilise tests

2018-07-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15546:

Status: Patch Available  (was: Open)

> ABFS: tune imports & javadocs; stabilise tests
> --
>
> Key: HADOOP-15546
> URL: https://issues.apache.org/jira/browse/HADOOP-15546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: HADOOP-15407
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15546-001.patch, 
> HADOOP-15546-HADOOP-15407-001.patch, HADOOP-15546-HADOOP-15407-002.patch, 
> HADOOP-15546-HADOOP-15407-003.patch, HADOOP-15546-HADOOP-15407-004.patch, 
> HADOOP-15546-HADOOP-15407-005.patch, HADOOP-15546-HADOOP-15407-006.patch, 
> HADOOP-15546-HADOOP-15407-006.patch
>
>
> Followup on HADOOP-15540 with some initial review tuning
> h2. Tuning
> * ordering of imports
> * rely on azure-auth-keys.xml to store credentials (change imports, 
> docs,.gitignore)
> * log4j -> info
> * add a "." to the first sentence of all the javadocs I noticed.
> * remove @Public annotations except for some constants (which includes some 
> commitment to maintain them).
> * move the AbstractFS declarations out of the src/test/resources XML file 
> into core-default.xml for all to use
> * other IDE-suggested tweaks
> h2. Testing
> Review the tests, move to ContractTestUtil assertions, make more consistent 
> to contract test setup, and general work to make the tests work well over 
> slower links, document, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15610) Hadoop Docker Image Pip Install Fails

2018-07-17 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547236#comment-16547236
 ] 

genericqa commented on HADOOP-15610:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
13s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15610 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931992/HADOOP-15610.002.patch
 |
| Optional Tests |  asflicense  shellcheck  shelldocs  |
| uname | Linux e0e1c017fb37 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 
08:53:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1af87df |
| maven | version: Apache Maven 3.3.9 |
| shellcheck | v0.4.6 |
| Max. process+thread count | 460 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14898/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Hadoop Docker Image Pip Install Fails
> -
>
> Key: HADOOP-15610
> URL: https://issues.apache.org/jira/browse/HADOOP-15610
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jack Bearden
>Assignee: Jack Bearden
>Priority: Minor
>  Labels: docker, trunk
> Attachments: HADOOP-15610.001.patch, HADOOP-15610.002.patch
>
>
> The Hadoop Docker image on trunk does not build. The pip package on the 
> Ubuntu Xenial repo is out of date and fails by throwing the following error 
> when attempting to install pylint:
> "You are using pip version 8.1.1, however version 10.0.1 is available"
> The following patch fixes this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15610) Hadoop Docker Image Pip Install Fails

2018-07-17 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547207#comment-16547207
 ] 

genericqa commented on HADOOP-15610:


(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14898/console in case of 
problems.


> Hadoop Docker Image Pip Install Fails
> -
>
> Key: HADOOP-15610
> URL: https://issues.apache.org/jira/browse/HADOOP-15610
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jack Bearden
>Assignee: Jack Bearden
>Priority: Minor
>  Labels: docker, trunk
> Attachments: HADOOP-15610.001.patch, HADOOP-15610.002.patch
>
>
> The Hadoop Docker image on trunk does not build. The pip package on the 
> Ubuntu Xenial repo is out of date and fails by throwing the following error 
> when attempting to install pylint:
> "You are using pip version 8.1.1, however version 10.0.1 is available"
> The following patch fixes this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15609) Retry KMS calls when SSLHandshakeException occurs

2018-07-17 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547059#comment-16547059
 ] 

Xiao Chen commented on HADOOP-15609:


Thanks for the patch, Kitti.

I actually prefer we confine this retry to KMSClientProvider. The retry 
policy in hadoop-common is widely used, and SSLHandshakeException can happen 
for invalid setups too (e.g. handshake failure due to certificates, cipher 
suites, etc.). It feels to me that we should be specific to KMS here to 
reduce the impact.

Also, could you add a unit test for this? There are some similar tests in 
TestLoadBalancingKMSClientProvider.
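To make the suggestion concrete, a hedged sketch of confining the 
special-casing to the KMS client path (class and method names are 
illustrative, not the actual patch):

{code:java}
// Illustrative only: treat SSLHandshakeException as a failover-worthy network
// error inside the KMS client, rather than widening the shared hadoop-common
// retry policy that other services depend on.
import java.io.IOException;

import javax.net.ssl.SSLHandshakeException;

public class KmsRetryHint {
  // Hypothetical helper a KMSClientProvider-side retry loop could consult
  // before failing over to the next provider in the group.
  static boolean shouldFailoverAndRetry(IOException ioe) {
    return ioe instanceof SSLHandshakeException;
  }
}
{code}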

> Retry KMS calls when SSLHandshakeException occurs
> -
>
> Key: HADOOP-15609
> URL: https://issues.apache.org/jira/browse/HADOOP-15609
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HADOOP-15609.001.patch
>
>
> KMS calls should be retried when javax.net.ssl.SSLHandshakeException occurs 
> and the FailoverOnNetworkExceptionRetry policy is used.
> For example in the following stack trace, we can see that the KMS Provider's 
> connection is lost, an SSLHandshakeException is thrown and the operation is 
> not retried:
> {code}
> W0711 18:19:50.213472  1508 LoadBalancingKMSClientProvider.java:132] KMS 
> provider at [https://example.com:16000/kms/v1/] threw an IOException:
> Java exception follows:
> javax.net.ssl.SSLHandshakeException: Remote host closed connection during 
> handshake
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002)
> at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
> at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
> at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
> at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:512)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:502)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:791)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.read(InputRecord.java:505)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
> ... 22 more
> W0711 18:19:50.239328  1508 LoadBalancingKMSClientProvider.java:149] Aborting 
> since the Request has failed with all KMS providers(depending on 
> hadoop.security.kms.client.failover.max.retries=1 setting and numProviders=1) 
> in the group OR the exception is not recoverable
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: 

[jira] [Commented] (HADOOP-13340) Compress Hadoop Archive output

2018-07-17 Thread Koji Noguchi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547046#comment-16547046
 ] 

Koji Noguchi commented on HADOOP-13340:
---

Hmm, the updated unit test is failing for me. Please ignore it; I'll upload 
another one.

> Compress Hadoop Archive output
> --
>
> Key: HADOOP-13340
> URL: https://issues.apache.org/jira/browse/HADOOP-13340
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: tools
>Affects Versions: 2.5.0
>Reporter: Duc Le Tu
>Priority: Major
>  Labels: features, performance
> Attachments: HADOOP-13340-example-v01.patch
>
>
> Why can't the Hadoop Archive tool compress its output like other map-reduce 
> jobs? 
> I used options like -D mapreduce.output.fileoutputformat.compress=true 
> -D 
> mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.GzipCodec
>  but it does not work. Did I do something wrong?
> If not, please support an option to compress the output of the Hadoop 
> Archive tool; it's very necessary for data retention for everyone (the 
> small-files problem and compressing data).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13340) Compress Hadoop Archive output

2018-07-17 Thread Koji Noguchi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547033#comment-16547033
 ] 

Koji Noguchi commented on HADOOP-13340:
---

Just to clarify my previous comment: I tried writing an example. It is not 
intended for commit.

This provides a compressed har, but it's not transparent like a regular har 
in that it doesn't allow random reads.

> Compress Hadoop Archive output
> --
>
> Key: HADOOP-13340
> URL: https://issues.apache.org/jira/browse/HADOOP-13340
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: tools
>Affects Versions: 2.5.0
>Reporter: Duc Le Tu
>Priority: Major
>  Labels: features, performance
> Attachments: HADOOP-13340-example-v01.patch
>
>
> Why can't the Hadoop Archive tool compress its output like other map-reduce 
> jobs? 
> I used options like -D mapreduce.output.fileoutputformat.compress=true 
> -D 
> mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.GzipCodec
>  but it does not work. Did I do something wrong?
> If not, please support an option to compress the output of the Hadoop 
> Archive tool; it's very necessary for data retention for everyone (the 
> small-files problem and compressing data).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13340) Compress Hadoop Archive output

2018-07-17 Thread Koji Noguchi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Noguchi updated HADOOP-13340:
--
Attachment: HADOOP-13340-example-v01.patch

> Compress Hadoop Archive output
> --
>
> Key: HADOOP-13340
> URL: https://issues.apache.org/jira/browse/HADOOP-13340
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: tools
>Affects Versions: 2.5.0
>Reporter: Duc Le Tu
>Priority: Major
>  Labels: features, performance
> Attachments: HADOOP-13340-example-v01.patch
>
>
> Why can't the Hadoop Archive tool compress its output like other map-reduce 
> jobs? 
> I used options like -D mapreduce.output.fileoutputformat.compress=true 
> -D 
> mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.GzipCodec
>  but it does not work. Did I do something wrong?
> If not, please support an option to compress the output of the Hadoop 
> Archive tool; it's very necessary for data retention for everyone (the 
> small-files problem and compressing data).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15586) Fix wrong log statements in AbstractService

2018-07-17 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547011#comment-16547011
 ] 

Szilard Nemeth commented on HADOOP-15586:
-

Hi [~ste...@apache.org]!
Could you please review/commit? 
Thanks!

> Fix wrong log statements in AbstractService
> ---
>
> Key: HADOOP-15586
> URL: https://issues.apache.org/jira/browse/HADOOP-15586
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 2.9.0, 3.1.0
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: HADOOP-15586-001.patch, HADOOP-15586-002.patch
>
>
> There are some wrong logging statements in AbstractService, here is one 
> example: 
> {code:java}
> LOG.debug("noteFailure {}" + exception);
> {code}
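The problem with that statement is that the "+" concatenates the exception 
into the format string, so the placeholder is never substituted and the 
stack trace is lost. A sketch of the SLF4J-idiomatic corrections (the 
attached patches are the authoritative fix):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class NoteFailureLogging {
  private static final Logger LOG =
      LoggerFactory.getLogger(NoteFailureLogging.class);

  static void noteFailure(Exception exception) {
    // The buggy form built "noteFailure {}java.io.IOException: ..." at
    // string-concatenation time. Either of these is correct instead:
    LOG.debug("noteFailure {}", exception.toString()); // message only
    LOG.debug("noteFailure", exception);               // message + stack trace
  }
}
{code}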



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15610) Hadoop Docker Image Pip Install Fails

2018-07-17 Thread Jack Bearden (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jack Bearden updated HADOOP-15610:
--
Attachment: HADOOP-15610.002.patch

> Hadoop Docker Image Pip Install Fails
> -
>
> Key: HADOOP-15610
> URL: https://issues.apache.org/jira/browse/HADOOP-15610
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jack Bearden
>Assignee: Jack Bearden
>Priority: Minor
>  Labels: docker, trunk
> Attachments: HADOOP-15610.001.patch, HADOOP-15610.002.patch
>
>
> The Hadoop Docker image on trunk does not build. The pip package on the 
> Ubuntu Xenial repo is out of date and fails by throwing the following error 
> when attempting to install pylint:
> "You are using pip version 8.1.1, however version 10.0.1 is available"
> The following patch fixes this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15612) Improve exception when tfile fails to load LzoCodec

2018-07-17 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546999#comment-16546999
 ] 

genericqa commented on HADOOP-15612:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m  
7s{color} | {color:red} Docker failed to build yetus/hadoop:abb62dd. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15612 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931981/HADOOP-15612.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14897/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve exception when tfile fails to load LzoCodec 
> 
>
> Key: HADOOP-15612
> URL: https://issues.apache.org/jira/browse/HADOOP-15612
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Major
> Attachments: HADOOP-15612.001.patch
>
>
> When hadoop-lzo is not on the classpath, you get
> {code:java}
> java.io.IOException: LZO codec class not specified. Did you forget to set 
> property io.compression.codec.lzo.class?{code}
> which is probably rarely the real cause given the default class name. The 
> real root cause is not attached to the exception thrown.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15610) Hadoop Docker Image Pip Install Fails

2018-07-17 Thread Jack Bearden (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546987#comment-16546987
 ] 

Jack Bearden commented on HADOOP-15610:
---

When I posted patch 001, I was unaware that this Dockerfile impacted the 
release pipeline. I will post a much more secure patch shortly.

> Hadoop Docker Image Pip Install Fails
> -
>
> Key: HADOOP-15610
> URL: https://issues.apache.org/jira/browse/HADOOP-15610
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jack Bearden
>Assignee: Jack Bearden
>Priority: Minor
>  Labels: docker, trunk
> Attachments: HADOOP-15610.001.patch
>
>
> The Hadoop Docker image on trunk does not build. The pip package on the 
> Ubuntu Xenial repo is out of date and fails by throwing the following error 
> when attempting to install pylint:
> "You are using pip version 8.1.1, however version 10.0.1 is available"
> The following patch fixes this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15614) TestGroupsCaching.testExceptionOnBackgroundRefreshHandled reliably fails

2018-07-17 Thread Jason Lowe (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546917#comment-16546917
 ] 

Jason Lowe commented on HADOOP-15614:
-

It fails reliably when run in isolation on this line:
{noformat}
assertEquals(startingRequestCount, FakeGroupMapping.getRequestCount());
{noformat}

but it also sporadically fails on this last code line below when run with the 
other tests:
{noformat}
// Now sleep for a short time and re-check the request count. It should have
// increased, but the exception means the cache will not have updated
Thread.sleep(50);
FakeGroupMapping.setThrowException(false);
assertEquals(startingRequestCount + 1, FakeGroupMapping.getRequestCount());
assertEquals(groups.getGroups("me").size(), 2);
{noformat}

The 50msec sleep screams racy test to me.
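One way to de-race that check, a sketch assuming Hadoop's usual test helper 
(the actual fix is still up for discussion):

{code:java}
// Sketch: poll for the expected request count instead of sleeping a fixed
// 50ms. GenericTestUtils.waitFor retries the check until it passes or the
// timeout expires. FakeGroupMapping comes from TestGroupsCaching.
import java.util.concurrent.TimeoutException;

import org.apache.hadoop.test.GenericTestUtils;

public class DeracedRequestCountCheck {
  static void awaitRequestCount(final int expected)
      throws TimeoutException, InterruptedException {
    GenericTestUtils.waitFor(
        () -> FakeGroupMapping.getRequestCount() == expected,
        10 /* check interval ms */, 1000 /* timeout ms */);
  }
}
{code}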


> TestGroupsCaching.testExceptionOnBackgroundRefreshHandled reliably fails
> 
>
> Key: HADOOP-15614
> URL: https://issues.apache.org/jira/browse/HADOOP-15614
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Priority: Major
>
> When {{testExceptionOnBackgroundRefreshHandled}} is run individually, it 
> reliably fails. It seems like a fundamental bug in the test or groups caching.
> A similar issue was dealt with in HADOOP-13375. [~cheersyang], do you have 
> any insight into this?
> This test case was added in HADOOP-13263.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15612) Improve exception when tfile fails to load LzoCodec

2018-07-17 Thread Gera Shegalov (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HADOOP-15612:
---
Attachment: HADOOP-15612.001.patch

> Improve exception when tfile fails to load LzoCodec 
> 
>
> Key: HADOOP-15612
> URL: https://issues.apache.org/jira/browse/HADOOP-15612
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Major
> Attachments: HADOOP-15612.001.patch
>
>
> When hadoop-lzo is not on the classpath, you get
> {code:java}
> java.io.IOException: LZO codec class not specified. Did you forget to set 
> property io.compression.codec.lzo.class?{code}
> which is probably rarely the real cause given the default class name. The 
> real root cause is not attached to the exception thrown.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15612) Improve exception when tfile fails to load LzoCodec

2018-07-17 Thread Gera Shegalov (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HADOOP-15612:
---
Attachment: (was: HADOOP-15612.001.patch)

> Improve exception when tfile fails to load LzoCodec 
> 
>
> Key: HADOOP-15612
> URL: https://issues.apache.org/jira/browse/HADOOP-15612
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Major
> Attachments: HADOOP-15612.001.patch
>
>
> When hadoop-lzo is not on the classpath, you get
> {code:java}
> java.io.IOException: LZO codec class not specified. Did you forget to set 
> property io.compression.codec.lzo.class?{code}
> which is probably rarely the real cause given the default class name. The 
> real root cause is not attached to the exception thrown.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15614) TestGroupsCaching.testExceptionOnBackgroundRefreshHandled reliably fails

2018-07-17 Thread Kihwal Lee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-15614:

Description: 
When {{testExceptionOnBackgroundRefreshHandled}} is run individually, it 
reliably fails. It seems like a fundamental bug in the test or groups caching.

A similar issue was dealt with in HADOOP-13375. [~cheersyang], do you have any 
insight into this?

This test case was added in HADOOP-13263.

  was:
When {{testExceptionOnBackgroundRefreshHandled}} is run individually, it 
reliably fails. It seems like a fundamental bug in the test or groups caching.

A similar issue was dealt with in HADOOP-13375. [~cheersyang], do you have any 
insight into this?


> TestGroupsCaching.testExceptionOnBackgroundRefreshHandled reliably fails
> 
>
> Key: HADOOP-15614
> URL: https://issues.apache.org/jira/browse/HADOOP-15614
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Priority: Major
>
> When {{testExceptionOnBackgroundRefreshHandled}} is run individually, it 
> reliably fails. It seems like a fundamental bug in the test or groups caching.
> A similar issue was dealt with in HADOOP-13375. [~cheersyang], do you have 
> any insight into this?
> This test case was added in HADOOP-13263.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15614) TestGroupsCaching.testExceptionOnBackgroundRefreshHandled reliably fails

2018-07-17 Thread Kihwal Lee (JIRA)
Kihwal Lee created HADOOP-15614:
---

 Summary: TestGroupsCaching.testExceptionOnBackgroundRefreshHandled 
reliably fails
 Key: HADOOP-15614
 URL: https://issues.apache.org/jira/browse/HADOOP-15614
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee


When {{testExceptionOnBackgroundRefreshHandled}} is run individually, it 
reliably fails. It seems like a fundamental bug in the test or groups caching.

A similar issue was dealt with in HADOOP-13375. [~cheersyang], do you have any 
insight into this?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15612) Improve exception when tfile fails to load LzoCodec

2018-07-17 Thread Gera Shegalov (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HADOOP-15612:
---
Status: Open  (was: Patch Available)

Resubmitting to check the Docker build.

> Improve exception when tfile fails to load LzoCodec 
> 
>
> Key: HADOOP-15612
> URL: https://issues.apache.org/jira/browse/HADOOP-15612
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Major
> Attachments: HADOOP-15612.001.patch
>
>
> When hadoop-lzo is not on the classpath, you get
> {code:java}
> java.io.IOException: LZO codec class not specified. Did you forget to set 
> property io.compression.codec.lzo.class?{code}
> which is probably rarely the real cause given the default class name. The 
> real root cause is not attached to the exception thrown.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15612) Improve exception when tfile fails to load LzoCodec

2018-07-17 Thread Gera Shegalov (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HADOOP-15612:
---
Status: Patch Available  (was: Open)

> Improve exception when tfile fails to load LzoCodec 
> 
>
> Key: HADOOP-15612
> URL: https://issues.apache.org/jira/browse/HADOOP-15612
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Major
> Attachments: HADOOP-15612.001.patch
>
>
> When hadoop-lzo is not on the classpath, you get
> {code:java}
> java.io.IOException: LZO codec class not specified. Did you forget to set 
> property io.compression.codec.lzo.class?{code}
> which is probably rarely the real cause given the default class name. The 
> real root cause is not attached to the exception thrown.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15609) Retry KMS calls when SSLHandshakeException occurs

2018-07-17 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546608#comment-16546608
 ] 

genericqa commented on HADOOP-15609:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m  
7s{color} | {color:red} Docker failed to build yetus/hadoop:abb62dd. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15609 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931928/HADOOP-15609.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14896/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Retry KMS calls when SSLHandshakeException occurs
> -
>
> Key: HADOOP-15609
> URL: https://issues.apache.org/jira/browse/HADOOP-15609
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HADOOP-15609.001.patch
>
>
> KMS calls should be retried when javax.net.ssl.SSLHandshakeException occurs 
> and the FailoverOnNetworkExceptionRetry policy is used.
> For example in the following stack trace, we can see that the KMS Provider's 
> connection is lost, an SSLHandshakeException is thrown and the operation is 
> not retried:
> {code}
> W0711 18:19:50.213472  1508 LoadBalancingKMSClientProvider.java:132] KMS 
> provider at [https://example.com:16000/kms/v1/] threw an IOException:
> Java exception follows:
> javax.net.ssl.SSLHandshakeException: Remote host closed connection during 
> handshake
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002)
> at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
> at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
> at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
> at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:512)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:502)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:791)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.read(InputRecord.java:505)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
> ... 22 more
> W0711 18:19:50.239328  1508 LoadBalancingKMSClientProvider.java:149] Aborting 
> since the Request has failed with all KMS providers(depending on 
> hadoop.security.kms.client.failover.max.retries=1 setting and numProviders=1) 
> in the group OR the exception is not recoverable
> {code}

[jira] [Updated] (HADOOP-15609) Retry KMS calls when SSLHandshakeException occurs

2018-07-17 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15609:
--
Status: Patch Available  (was: Open)

> Retry KMS calls when SSLHandshakeException occurs
> -
>
> Key: HADOOP-15609
> URL: https://issues.apache.org/jira/browse/HADOOP-15609
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HADOOP-15609.001.patch
>
>
> KMS calls should retry when javax.net.ssl.SSLHandshakeException occurs and 
> the FailoverOnNetworkExceptionRetry policy is used.
> For example in the following stack trace, we can see that the KMS Provider's 
> connection is lost, an SSLHandshakeException is thrown and the operation is 
> not retried:
> {code}
> W0711 18:19:50.213472  1508 LoadBalancingKMSClientProvider.java:132] KMS 
> provider at [https://example.com:16000/kms/v1/] threw an IOException:
> Java exception follows:
> javax.net.ssl.SSLHandshakeException: Remote host closed connection during 
> handshake
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002)
> at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
> at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
> at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
> at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:512)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:502)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:791)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.read(InputRecord.java:505)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
> ... 22 more
> W0711 18:19:50.239328  1508 LoadBalancingKMSClientProvider.java:149] Aborting 
> since the Request has failed with all KMS providers(depending on 
> hadoop.security.kms.client.failover.max.retries=1 setting and numProviders=1) 
> in the group OR the exception is not recoverable
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15609) Retry KMS calls when SSLHandshakeException occurs

2018-07-17 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546581#comment-16546581
 ] 

Kitti Nanasi commented on HADOOP-15609:
---

I uploaded a patch in which I modified FailoverOnNetworkExceptionRetry to retry 
on SSLHandshakeException, because I think this could be a more general solution.
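
As a rough sketch of the direction (illustrative only, not the patch itself), 
the policy's network-exception test would simply include the handshake failure:
{code:java}
import java.net.ConnectException;
import java.net.NoRouteToHostException;
import java.net.UnknownHostException;
import javax.net.ssl.SSLHandshakeException;

// Illustrative sketch: widen the "retriable network fault" test used by a
// FailoverOnNetworkExceptionRetry-style policy to cover SSL handshake errors.
final class RetriableNetworkExceptions {
  static boolean isNetworkException(Throwable t) {
    return t instanceof ConnectException
        || t instanceof NoRouteToHostException
        || t instanceof UnknownHostException
        || t instanceof SSLHandshakeException; // the case added here
  }
}
{code}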

> Retry KMS calls when SSLHandshakeException occurs
> -
>
> Key: HADOOP-15609
> URL: https://issues.apache.org/jira/browse/HADOOP-15609
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HADOOP-15609.001.patch
>
>
> KMS calls should retry when javax.net.ssl.SSLHandshakeException occurs and 
> the FailoverOnNetworkExceptionRetry policy is used.
> For example in the following stack trace, we can see that the KMS Provider's 
> connection is lost, an SSLHandshakeException is thrown and the operation is 
> not retried:
> {code}
> W0711 18:19:50.213472  1508 LoadBalancingKMSClientProvider.java:132] KMS 
> provider at [https://example.com:16000/kms/v1/] threw an IOException:
> Java exception follows:
> javax.net.ssl.SSLHandshakeException: Remote host closed connection during 
> handshake
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002)
> at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
> at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
> at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
> at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:512)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:502)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:791)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.read(InputRecord.java:505)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
> ... 22 more
> W0711 18:19:50.239328  1508 LoadBalancingKMSClientProvider.java:149] Aborting 
> since the Request has failed with all KMS providers(depending on 
> hadoop.security.kms.client.failover.max.retries=1 setting and numProviders=1) 
> in the group OR the exception is not recoverable
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15609) Retry KMS calls when SSLHandshakeException occurs

2018-07-17 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15609:
--
Attachment: HADOOP-15609.001.patch

> Retry KMS calls when SSLHandshakeException occurs
> -
>
> Key: HADOOP-15609
> URL: https://issues.apache.org/jira/browse/HADOOP-15609
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HADOOP-15609.001.patch
>
>
> KMS calls should retry when javax.net.ssl.SSLHandshakeException occurs and 
> the FailoverOnNetworkExceptionRetry policy is used.
> For example in the following stack trace, we can see that the KMS Provider's 
> connection is lost, an SSLHandshakeException is thrown and the operation is 
> not retried:
> {code}
> W0711 18:19:50.213472  1508 LoadBalancingKMSClientProvider.java:132] KMS 
> provider at [https://example.com:16000/kms/v1/] threw an IOException:
> Java exception follows:
> javax.net.ssl.SSLHandshakeException: Remote host closed connection during 
> handshake
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002)
> at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
> at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
> at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
> at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:512)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:502)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:791)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.read(InputRecord.java:505)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
> ... 22 more
> W0711 18:19:50.239328  1508 LoadBalancingKMSClientProvider.java:149] Aborting 
> since the Request has failed with all KMS providers(depending on 
> hadoop.security.kms.client.failover.max.retries=1 setting and numProviders=1) 
> in the group OR the exception is not recoverable
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15596) Stack trace should not be printed out when running hadoop key commands

2018-07-17 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546403#comment-16546403
 ] 

genericqa commented on HADOOP-15596:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 12m 
57s{color} | {color:red} Docker failed to build yetus/hadoop:abb62dd. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15596 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931921/HADOOP-15596.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14895/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Stack trace should not be printed out when running hadoop key commands
> --
>
> Key: HADOOP-15596
> URL: https://issues.apache.org/jira/browse/HADOOP-15596
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Minor
> Attachments: HADOOP-15596.001.patch, HADOOP-15596.002.patch
>
>
> The stack trace is printed out if any exception occurs while executing hadoop key 
> commands. The whole stack trace should not be printed out.
> For example, when the KMS is down, we get this error message for the hadoop 
> key list command:
> {code:java}
>  -bash-4.1$ hadoop key list
>  Cannot list keys for KeyProvider: 
> KMSClientProvider[http://example.com:16000/kms/v1/]: Connection 
> refusedjava.net.ConnectException: Connection refused
>  at java.net.PlainSocketImpl.socketConnect(Native Method)
>  at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
>  at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
>  at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
>  at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
>  at java.net.Socket.connect(Socket.java:579)
>  at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
>  at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
>  at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
>  at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
>  at sun.net.www.http.HttpClient.New(HttpClient.java:308)
>  at sun.net.www.http.HttpClient.New(HttpClient.java:326)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
>  at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:125)
>  at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:312)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:397)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:392)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:415)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:392)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.getKeys(KMSClientProvider.java:479)
>  at 
> org.apache.hadoop.crypto.key.KeyShell$ListCommand.execute(KeyShell.java:286)
>  at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:79)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>  at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:513)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15596) Stack trace should not be printed out when running hadoop key commands

2018-07-17 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546335#comment-16546335
 ] 

Kitti Nanasi commented on HADOOP-15596:
---

Thanks for the comments [~xiaochen]! I fixed them in patch v002.

One more thing: the exception trace is still printed in a warn log in the 
LoadBalancingKMSClientProvider class. Does it make sense to print the full 
stack trace only in a debug log there, and only the exception message in 
the warn log?
{code:java}
LOG.warn("KMS provider at [{}] threw an IOException: ",
provider.getKMSUrl(), ioe);
{code}
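
Something along these lines, reusing the names from the snippet above (a 
sketch, not the final change):
{code:java}
// Sketch only: message at WARN, full stack trace only at DEBUG.
LOG.warn("KMS provider at [{}] threw an IOException: {}",
    provider.getKMSUrl(), ioe.getMessage());
LOG.debug("Full exception from KMS provider at [{}]:",
    provider.getKMSUrl(), ioe);
{code}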

> Stack trace should not be printed out when running hadoop key commands
> --
>
> Key: HADOOP-15596
> URL: https://issues.apache.org/jira/browse/HADOOP-15596
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Minor
> Attachments: HADOOP-15596.001.patch, HADOOP-15596.002.patch
>
>
> The stack trace is printed out if any exception occurs while executing hadoop key 
> commands. The whole stack trace should not be printed out.
> For example, when the KMS is down, we get this error message for the hadoop 
> key list command:
> {code:java}
>  -bash-4.1$ hadoop key list
>  Cannot list keys for KeyProvider: 
> KMSClientProvider[http://example.com:16000/kms/v1/]: Connection 
> refusedjava.net.ConnectException: Connection refused
>  at java.net.PlainSocketImpl.socketConnect(Native Method)
>  at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
>  at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
>  at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
>  at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
>  at java.net.Socket.connect(Socket.java:579)
>  at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
>  at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
>  at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
>  at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
>  at sun.net.www.http.HttpClient.New(HttpClient.java:308)
>  at sun.net.www.http.HttpClient.New(HttpClient.java:326)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
>  at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:125)
>  at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:312)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:397)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:392)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:415)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:392)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.getKeys(KMSClientProvider.java:479)
>  at 
> org.apache.hadoop.crypto.key.KeyShell$ListCommand.execute(KeyShell.java:286)
>  at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:79)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>  at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:513)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15596) Stack trace should not be printed out when running hadoop key commands

2018-07-17 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15596:
--
Attachment: HADOOP-15596.002.patch

> Stack trace should not be printed out when running hadoop key commands
> --
>
> Key: HADOOP-15596
> URL: https://issues.apache.org/jira/browse/HADOOP-15596
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Minor
> Attachments: HADOOP-15596.001.patch, HADOOP-15596.002.patch
>
>
> The stack trace is printed out if any exception occurs while executing hadoop key 
> commands. The whole stack trace should not be printed out.
> For example, when the KMS is down, we get this error message for the hadoop 
> key list command:
> {code:java}
>  -bash-4.1$ hadoop key list
>  Cannot list keys for KeyProvider: 
> KMSClientProvider[http://example.com:16000/kms/v1/]: Connection 
> refusedjava.net.ConnectException: Connection refused
>  at java.net.PlainSocketImpl.socketConnect(Native Method)
>  at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
>  at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
>  at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
>  at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
>  at java.net.Socket.connect(Socket.java:579)
>  at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
>  at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
>  at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
>  at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
>  at sun.net.www.http.HttpClient.New(HttpClient.java:308)
>  at sun.net.www.http.HttpClient.New(HttpClient.java:326)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
>  at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:125)
>  at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:312)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:397)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:392)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:415)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:392)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.getKeys(KMSClientProvider.java:479)
>  at 
> org.apache.hadoop.crypto.key.KeyShell$ListCommand.execute(KeyShell.java:286)
>  at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:79)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>  at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:513)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15613) KerberosAuthenticator should resolve the hostname to get the service principal

2018-07-17 Thread Surendra Singh Lilhore (JIRA)
Surendra Singh Lilhore created HADOOP-15613:
---

 Summary: KerberosAuthenticator should resolve the hostname to get 
the service principal
 Key: HADOOP-15613
 URL: https://issues.apache.org/jira/browse/HADOOP-15613
 Project: Hadoop Common
  Issue Type: Bug
  Components: auth
Affects Versions: 2.7.2
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore


When an IP address is used in the REST URL, 
"KerberosAuthenticator.this.url.getHost()" does not resolve it to a hostname. 
That host value is then used to construct the server's HTTP service principal.
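
A sketch of the idea (helper name hypothetical): reverse-resolve the host 
portion, which may be a raw IP, before building the HTTP service principal.
{code:java}
import java.net.InetAddress;
import java.net.URL;
import java.net.UnknownHostException;

// Hypothetical helper, for illustration only: resolve an IP in the URL to a
// hostname so the constructed principal matches the server's HTTP/<fqdn> entry.
final class PrincipalHelper {
  static String servicePrincipal(URL url) throws UnknownHostException {
    String fqdn = InetAddress.getByName(url.getHost()).getCanonicalHostName();
    return "HTTP/" + fqdn;
  }
}
{code}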



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15612) Improve exception when tfile fails to load LzoCodec

2018-07-17 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546126#comment-16546126
 ] 

genericqa commented on HADOOP-15612:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  7m 
57s{color} | {color:red} Docker failed to build yetus/hadoop:abb62dd. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15612 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931893/HADOOP-15612.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14893/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve exception when tfile fails to load LzoCodec 
> 
>
> Key: HADOOP-15612
> URL: https://issues.apache.org/jira/browse/HADOOP-15612
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Major
> Attachments: HADOOP-15612.001.patch
>
>
> When hadoop-lzo is not on the classpath you get
> {code:java}
> java.io.IOException: LZO codec class not specified. Did you forget to set 
> property io.compression.codec.lzo.class?{code}
> which is probably rarely the real cause given the default class name. The 
> real root cause is not attached to the exception thrown.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15607) AliyunOSS: fix duplicated partNumber issue in AliyunOSSBlockOutputStream

2018-07-17 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546125#comment-16546125
 ] 

genericqa commented on HADOOP-15607:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  7m 
47s{color} | {color:red} Docker failed to build yetus/hadoop:abb62dd. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15607 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931891/HADOOP-15607.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14894/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> AliyunOSS: fix duplicated partNumber issue in AliyunOSSBlockOutputStream 
> -
>
> Key: HADOOP-15607
> URL: https://issues.apache.org/jira/browse/HADOOP-15607
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15607.001.patch, HADOOP-15607.002.patch
>
>
> When I generated data with the hive-tpcds tool, I got the exception below:
> 2018-07-16 14:50:43,680 INFO mapreduce.Job: Task Id : 
> attempt_1531723399698_0001_m_52_0, Status : FAILED
> Error: com.aliyun.oss.OSSException: The list of parts was not in ascending 
> order. Parts list must specified in order by part number.
> [ErrorCode]: InvalidPartOrder
> [RequestId]: 5B4C40425FCC208D79D1EAF5
> [HostId]: 100.103.0.137
> [ResponseError]:
> 
> 
>  InvalidPartOrder
>  The list of parts was not in ascending order. Parts list must 
> specified in order by part number.
>  5B4C40425FCC208D79D1EAF5
>  100.103.0.137
>  current PartNumber 3, you given part number 3is not in 
> ascending order
> 
> at 
> com.aliyun.oss.common.utils.ExceptionFactory.createOSSException(ExceptionFactory.java:99)
>  at 
> com.aliyun.oss.internal.OSSErrorResponseHandler.handle(OSSErrorResponseHandler.java:69)
>  at 
> com.aliyun.oss.common.comm.ServiceClient.handleResponse(ServiceClient.java:248)
>  at 
> com.aliyun.oss.common.comm.ServiceClient.sendRequestImpl(ServiceClient.java:130)
>  at 
> com.aliyun.oss.common.comm.ServiceClient.sendRequest(ServiceClient.java:68)
>  at com.aliyun.oss.internal.OSSOperation.send(OSSOperation.java:94)
>  at com.aliyun.oss.internal.OSSOperation.doOperation(OSSOperation.java:149)
>  at com.aliyun.oss.internal.OSSOperation.doOperation(OSSOperation.java:113)
>  at 
> com.aliyun.oss.internal.OSSMultipartOperation.completeMultipartUpload(OSSMultipartOperation.java:185)
>  at com.aliyun.oss.OSSClient.completeMultipartUpload(OSSClient.java:790)
>  at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystemStore.completeMultipartUpload(AliyunOSSFileSystemStore.java:643)
>  at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSBlockOutputStream.close(AliyunOSSBlockOutputStream.java:120)
>  at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>  at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>  at 
> org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:106)
>  at 
> org.apache.hadoop.mapreduce.lib.output.MultipleOutputs.close(MultipleOutputs.java:574)
>  at org.notmysock.tpcds.GenTable$DSDGen.cleanup(GenTable.java:169)
>  at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:149)
>  at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
>  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
>  at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1686)
>  
> I reviewed the code below; 
> {code:java}
> blockId {code}
> has a thread synchronization problem:
> {code:java}
> // code placeholder
> private void uploadCurrentPart() throws IOException {
>   blockFiles.add(blockFile);
>   blockStream.flush();
>   blockStream.close();
>   if (blockId == 0) {
> uploadId = store.getUploadId(key);
>   }
> ListenableFuture<PartETag> partETagFuture =
>   executorService.submit(() -> {
> PartETag partETag = store.uploadPart(blockFile, key, uploadId,
> blockId + 1);
> return partETag;
>   });
>   partETagsFutures.add(partETagFuture);
>   blockFile = newBlockFile();
>   blockId++;
> blockStream = new BufferedOutputStream(new FileOutputStream(blockFile));
> }
> {code}

[jira] [Updated] (HADOOP-15612) Improve exception when tfile fails to load LzoCodec

2018-07-17 Thread Gera Shegalov (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HADOOP-15612:
---
Status: Patch Available  (was: Open)

001 patch for review

> Improve exception when tfile fails to load LzoCodec 
> 
>
> Key: HADOOP-15612
> URL: https://issues.apache.org/jira/browse/HADOOP-15612
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Major
> Attachments: HADOOP-15612.001.patch
>
>
> When hadoop-lzo is not on the classpath you get
> {code:java}
> java.io.IOException: LZO codec class not specified. Did you forget to set 
> property io.compression.codec.lzo.class?{code}
> which is probably rarely the real cause given the default class name. The 
> real root cause is not attached to the exception thrown.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15612) Improve exception when tfile fails to load LzoCodec

2018-07-17 Thread Gera Shegalov (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HADOOP-15612:
---
Attachment: HADOOP-15612.001.patch

> Improve exception when tfile fails to load LzoCodec 
> 
>
> Key: HADOOP-15612
> URL: https://issues.apache.org/jira/browse/HADOOP-15612
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Major
> Attachments: HADOOP-15612.001.patch
>
>
> When hadoop-lzo is not on classpath you get
> {code:java}
> java.io.IOException: LZO codec class not specified. Did you forget to set 
> property io.compression.codec.lzo.class?{code}
> which is probably rarely the real cause given the default class name. The 
> real root cause is not attached to the exception thrown.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15612) Improve exception when tfile fails to load LzoCodec

2018-07-17 Thread Gera Shegalov (JIRA)
Gera Shegalov created HADOOP-15612:
--

 Summary: Improve exception when tfile fails to load LzoCodec 
 Key: HADOOP-15612
 URL: https://issues.apache.org/jira/browse/HADOOP-15612
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.1.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov


When hadoop-lzo is not on the classpath you get
{code:java}
java.io.IOException: LZO codec class not specified. Did you forget to set 
property io.compression.codec.lzo.class?{code}
which is probably rarely the real cause given the default class name. The real 
root cause is not attached to the exception thrown.
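
A minimal sketch of the kind of improvement this asks for (illustrative only, 
not the attached patch): chain the original exception as the cause when the 
codec class fails to load.
{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.util.ReflectionUtils;

// Illustrative sketch only: keep the ClassNotFoundException attached as the
// cause instead of swallowing it, so the real failure stays visible.
final class LzoCodecLoader {
  static CompressionCodec load(Configuration conf) throws IOException {
    String clazz = conf.get("io.compression.codec.lzo.class",
        "org.apache.hadoop.io.compress.LzoCodec");
    try {
      return (CompressionCodec) ReflectionUtils.newInstance(
          conf.getClassByName(clazz), conf);
    } catch (ClassNotFoundException e) {
      throw new IOException("LZO codec class " + clazz
          + " could not be loaded. Is hadoop-lzo on the classpath?", e);
    }
  }
}
{code}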



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15607) AliyunOSS: fix duplicated partNumber issue in AliyunOSSBlockOutputStream

2018-07-17 Thread wujinhu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15607:
-
Attachment: HADOOP-15607.002.patch

> AliyunOSS: fix duplicated partNumber issue in AliyunOSSBlockOutputStream 
> -
>
> Key: HADOOP-15607
> URL: https://issues.apache.org/jira/browse/HADOOP-15607
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15607.001.patch, HADOOP-15607.002.patch
>
>
> When I generated data with the hive-tpcds tool, I got the exception below:
> 2018-07-16 14:50:43,680 INFO mapreduce.Job: Task Id : 
> attempt_1531723399698_0001_m_52_0, Status : FAILED
> Error: com.aliyun.oss.OSSException: The list of parts was not in ascending 
> order. Parts list must specified in order by part number.
> [ErrorCode]: InvalidPartOrder
> [RequestId]: 5B4C40425FCC208D79D1EAF5
> [HostId]: 100.103.0.137
> [ResponseError]:
> 
> 
>  InvalidPartOrder
>  The list of parts was not in ascending order. Parts list must 
> specified in order by part number.
>  5B4C40425FCC208D79D1EAF5
>  100.103.0.137
>  current PartNumber 3, you given part number 3is not in 
> ascending order
> 
> at 
> com.aliyun.oss.common.utils.ExceptionFactory.createOSSException(ExceptionFactory.java:99)
>  at 
> com.aliyun.oss.internal.OSSErrorResponseHandler.handle(OSSErrorResponseHandler.java:69)
>  at 
> com.aliyun.oss.common.comm.ServiceClient.handleResponse(ServiceClient.java:248)
>  at 
> com.aliyun.oss.common.comm.ServiceClient.sendRequestImpl(ServiceClient.java:130)
>  at 
> com.aliyun.oss.common.comm.ServiceClient.sendRequest(ServiceClient.java:68)
>  at com.aliyun.oss.internal.OSSOperation.send(OSSOperation.java:94)
>  at com.aliyun.oss.internal.OSSOperation.doOperation(OSSOperation.java:149)
>  at com.aliyun.oss.internal.OSSOperation.doOperation(OSSOperation.java:113)
>  at 
> com.aliyun.oss.internal.OSSMultipartOperation.completeMultipartUpload(OSSMultipartOperation.java:185)
>  at com.aliyun.oss.OSSClient.completeMultipartUpload(OSSClient.java:790)
>  at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystemStore.completeMultipartUpload(AliyunOSSFileSystemStore.java:643)
>  at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSBlockOutputStream.close(AliyunOSSBlockOutputStream.java:120)
>  at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>  at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>  at 
> org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:106)
>  at 
> org.apache.hadoop.mapreduce.lib.output.MultipleOutputs.close(MultipleOutputs.java:574)
>  at org.notmysock.tpcds.GenTable$DSDGen.cleanup(GenTable.java:169)
>  at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:149)
>  at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
>  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
>  at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1686)
>  
> I reviewed the code below; 
> {code:java}
> blockId {code}
> has a thread synchronization problem:
> {code:java}
> // code placeholder
> private void uploadCurrentPart() throws IOException {
>   blockFiles.add(blockFile);
>   blockStream.flush();
>   blockStream.close();
>   if (blockId == 0) {
> uploadId = store.getUploadId(key);
>   }
> ListenableFuture<PartETag> partETagFuture =
>   executorService.submit(() -> {
> PartETag partETag = store.uploadPart(blockFile, key, uploadId,
> blockId + 1);
> return partETag;
>   });
>   partETagsFutures.add(partETagFuture);
>   blockFile = newBlockFile();
>   blockId++;
>   blockStream = new BufferedOutputStream(new FileOutputStream(blockFile));
> }
> {code}
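
One possible shape of a fix (a sketch, assuming {{blockId}} and {{blockFile}} 
are instance fields that the lambda would otherwise read at execution time): 
snapshot both into effectively final locals before submitting the upload.
{code:java}
// Sketch only, reusing the names from the snippet above: capture the part
// number and file up front so the later blockId++ / blockFile reassignment
// cannot race with the asynchronous upload.
final File fileToUpload = blockFile;
final int partNumber = blockId + 1;
ListenableFuture<PartETag> partETagFuture =
    executorService.submit(() ->
        store.uploadPart(fileToUpload, key, uploadId, partNumber));
partETagsFutures.add(partETagFuture);
blockFile = newBlockFile();
blockId++;
{code}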



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15611) Improve log in FairCallQueue

2018-07-17 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546080#comment-16546080
 ] 

Yiqun Lin commented on HADOOP-15611:


Could anyone assign this JIRA to [~jianliang.wu]? He is willing to make a 
contribution to this. :)

> Improve log in FairCallQueue
> 
>
> Key: HADOOP-15611
> URL: https://issues.apache.org/jira/browse/HADOOP-15611
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Ryan Wu
>Priority: Minor
>
> While using the FairCallQueue, we found that some key logging is missing. Only a 
> few logs are printed, which makes this feature hard to understand and debug.
> At least the following places could print more logs:
> * DecayRpcScheduler#decayCurrentCounts
> * WeightedRoundRobinMultiplexer#moveToNextQueue



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15611) Improve log in FairCallQueue

2018-07-17 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HADOOP-15611:
---
Description: 
While using the FairCallQueue, we found that some key logging is missing. Only a 
few logs are printed, which makes this feature hard to understand and debug.

At least the following places could print more logs:
* DecayRpcScheduler#decayCurrentCounts
* WeightedRoundRobinMultiplexer#moveToNextQueue
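
For instance, a debug line at each queue switch could look roughly like this 
(a sketch with illustrative names, not actual Hadoop code):
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative sketch of the kind of logging proposed above.
final class MultiplexerLogDemo {
  private static final Logger LOG =
      LoggerFactory.getLogger(MultiplexerLogDemo.class);

  static void logQueueSwitch(int fromIdx, int toIdx) {
    LOG.debug("WeightedRoundRobinMultiplexer: moving from queue {} to queue {}",
        fromIdx, toIdx);
  }
}
{code}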

> Improve log in FairCallQueue
> 
>
> Key: HADOOP-15611
> URL: https://issues.apache.org/jira/browse/HADOOP-15611
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Ryan Wu
>Priority: Minor
>
> While using the FairCallQueue, we found that some key logging is missing. Only a 
> few logs are printed, which makes this feature hard to understand and debug.
> At least the following places could print more logs:
> * DecayRpcScheduler#decayCurrentCounts
> * WeightedRoundRobinMultiplexer#moveToNextQueue



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15596) Stack trace should not be printed out when running hadoop key commands

2018-07-17 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546078#comment-16546078
 ] 

Xiao Chen commented on HADOOP-15596:


Thanks for working on this, Kitti. Looks pretty good. I added a link to 
HDFS-13690 since that's where the discussion happened.

The only comments I have for this one are:
- The change in {{CommandShell}} feels a little widespread. We know for 
KeyShell the stacktrace is unnecessary and we don't worry about backwards 
compatibility. I'm not sure that's the case for other implementations like 
{{CredentialShell}} - it could be the case, but IMO we should limit our change 
to {{KeyShell}} here. One way to do this is to abstract a method in 
{{CommandShell}} with the default behavior, and override that in {{KeyShell}}

- The message now reads: {quote}The following exception occurred while 
executing command:{quote}
It feels to me this would be better phrased as 'Executing command failed with the 
following exception: '
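
For illustration, the override pattern could look roughly like this (the 
printException hook is hypothetical, just to show the shape):
{code:java}
import java.io.PrintStream;

// Sketch of the suggested refactoring: the base class keeps today's behavior,
// KeyShell overrides it to print only the message.
abstract class CommandShell {
  protected PrintStream err = System.err;

  /** Default behavior stays as-is for other shells. */
  protected void printException(Exception e) {
    e.printStackTrace(err);
  }
}

class KeyShell extends CommandShell {
  @Override
  protected void printException(Exception e) {
    err.println("Executing command failed with the following exception: "
        + e.getMessage());
  }
}
{code}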
 

> Stack trace should not be printed out when running hadoop key commands
> --
>
> Key: HADOOP-15596
> URL: https://issues.apache.org/jira/browse/HADOOP-15596
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Minor
> Attachments: HADOOP-15596.001.patch
>
>
> The stack trace is printed out if any exception occurs while executing hadoop key 
> commands. The whole stack trace should not be printed out.
> For example, when the KMS is down, we get this error message for the hadoop 
> key list command:
> {code:java}
>  -bash-4.1$ hadoop key list
>  Cannot list keys for KeyProvider: 
> KMSClientProvider[http://example.com:16000/kms/v1/]: Connection 
> refusedjava.net.ConnectException: Connection refused
>  at java.net.PlainSocketImpl.socketConnect(Native Method)
>  at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
>  at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
>  at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
>  at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
>  at java.net.Socket.connect(Socket.java:579)
>  at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
>  at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
>  at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
>  at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
>  at sun.net.www.http.HttpClient.New(HttpClient.java:308)
>  at sun.net.www.http.HttpClient.New(HttpClient.java:326)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
>  at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:125)
>  at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:312)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:397)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:392)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:415)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:392)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.getKeys(KMSClientProvider.java:479)
>  at 
> org.apache.hadoop.crypto.key.KeyShell$ListCommand.execute(KeyShell.java:286)
>  at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:79)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>  at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:513)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15611) Improve log in FairCallQueue

2018-07-17 Thread Ryan Wu (JIRA)
Ryan Wu created HADOOP-15611:


 Summary: Improve log in FairCallQueue
 Key: HADOOP-15611
 URL: https://issues.apache.org/jira/browse/HADOOP-15611
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.1.0
Reporter: Ryan Wu






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org