[jira] [Resolved] (HADOOP-17533) Server IPC version 9 cannot communicate with client version 4

2021-02-17 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HADOOP-17533.
-
Resolution: Invalid

> Server IPC version 9 cannot communicate with client version 4
> -
>
> Key: HADOOP-17533
> URL: https://issues.apache.org/jira/browse/HADOOP-17533
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mariia 
>Priority: Major
>  Labels: hadoop, hdfs, java, maven
>
> I want to connect to HDFS using Java, just like this:
> String url = "hdfs://c7301.ambari.apache.org:8020/file.txt";
> FileSystem fs = null;
> InputStream in = null;
> try {
>     Configuration conf = new Configuration();
>     fs = FileSystem.get(URI.create(url), conf, "admin");
>     in = fs.open(new Path(url));
>     IOUtils.copyBytes(in, System.out, 4096, false);
> } catch (Exception e) {
>     e.printStackTrace();
> } finally {
>     IOUtils.closeStream(fs);
> }
> *Error that I got*
> [2021-02-17 20:02:06,115] ERROR PriviledgedActionException as:admin 
> cause:org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot 
> communicate with client version 4 
> (org.apache.hadoop.security.UserGroupInformation:1124)
> org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot 
> communicate with client version 4
>  at org.apache.hadoop.ipc.Client.call(Client.java:1070)
>  at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>  at com.sun.proxy.$Proxy4.getProtocolVersion(Unknown Source)
>  at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>  at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
>  at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
>  at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
>  at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
>  at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
>  at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>  at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
>  at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
>  at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:117)
>  at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:115)
>  at java.base/java.security.AccessController.doPrivileged(Native Method)
>  at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>  at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:115)
>  at Main.main(Main.java:38) 
> I tried different solutions to the problem, but nothing helped.
> *It's my pom.xml file*
> <project xmlns="http://maven.apache.org/POM/4.0.0"
>          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
>                              http://maven.apache.org/xsd/maven-4.0.0.xsd">
>   <modelVersion>4.0.0</modelVersion>
>   <groupId>org.example</groupId>
>   <artifactId>producer</artifactId>
>   <version>1.0-SNAPSHOT</version>
>   <properties>
>     <maven.compiler.source>11</maven.compiler.source>
>     <maven.compiler.target>11</maven.compiler.target>
>   </properties>
>   <dependencies>
>     <dependency>
>       <groupId>org.apache.kafka</groupId>
>       <artifactId>kafka-clients</artifactId>
>       <version>0.10.0.0</version>
>     </dependency>
>     <dependency>
>       <groupId>org.apache.hadoop</groupId>
>       <artifactId>hadoop-common</artifactId>
>       <version>3.2.0</version>
>     </dependency>
>     <dependency>
>       <groupId>org.apache.hadoop</groupId>
>       <artifactId>hadoop-hdfs</artifactId>
>       <version>3.2.0</version>
>     </dependency>
>     <dependency>
>       <groupId>org.apache.hadoop</groupId>
>       <artifactId>hadoop-yarn-common</artifactId>
>       <version>3.2.0</version>
>     </dependency>
>     <dependency>
>       <groupId>org.apache.hadoop</groupId>
>       <artifactId>hadoop-mapreduce-client-common</artifactId>
>       <version>3.2.0</version>
>     </dependency>
>     <dependency>
>       <groupId>org.apache.hadoop</groupId>
>       <artifactId>hadoop-mapreduce-client-core</artifactId>
>       <version>3.2.0</version>
>     </dependency>
>   </dependencies>
>   <build>
>     <plugins>
>       <plugin>
>         <groupId>org.apache.maven.plugins</groupId>
>         <artifactId>maven-shade-plugin</artifactId>
>         <version>3.2.4</version>
>         <executions>
>           <execution>
>             <phase>package</phase>
>             <goals>
>               <goal>shade</goal>
>             </goals>
>             <configuration>
>               <transformers>
>                 <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
>                   <mainClass>Main</mainClass>
>                 </transformer>
>               </transformers>
>             </configuration>
>           </execution>
>         </executions>
>       </plugin>
>     </plugins>
>   </build>
> </project>
> *And this is the version of HDFS*
> Hadoop 3.1.1.3.1.4.0-315
> Source code repository g...@github.com:hortonworks/hadoop.git -r 58d0fd3d8ce58b10149da3c717c45e5e57a60d14
> Compiled by jenkins on 2019-08-23T05:15Z
> Compiled with protoc 2.5.0
> From source with checksum fcbd146ffa6d48fef0ed81332f9d6f0
> This command was run using /usr/ddp/3.1.4.0-315/hadoop/hadoop-common-3.1.1.3.1.4.0-315.jar
>  
> If someone knows of a similar problem, please help.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17533) Server IPC version 9 cannot communicate with client version 4

2021-02-17 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17286088#comment-17286088
 ] 

Kihwal Lee commented on HADOOP-17533:
-

rpc v.4 is an ancient tongue only spoken by very old hadoop versions. One of the 
artifacts (kafka?) must include an older hadoop. 
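
A minimal, hypothetical sketch of how to confirm that (the class name {{FindHadoopJar}} is 
made up and not part of any patch): print which jar the old Hadoop IPC client class is 
actually loaded from at runtime.
{code:java}
// Hypothetical diagnostic only: prints the jar that provides the Hadoop IPC
// client class. If it is not the hadoop-common 3.x jar you expect, some other
// dependency is bundling an old Hadoop.
public class FindHadoopJar {
  public static void main(String[] args) throws Exception {
    Class<?> c = Class.forName("org.apache.hadoop.ipc.Client");
    System.out.println(c.getProtectionDomain().getCodeSource().getLocation());
  }
}
{code}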

BTW, jira is for reporting bugs. Please use relevant mailing lists for usage 
questions and issues.

> Server IPC version 9 cannot communicate with client version 4
> -
>
> Key: HADOOP-17533
> URL: https://issues.apache.org/jira/browse/HADOOP-17533
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mariia 
>Priority: Major
>  Labels: hadoop, hdfs, java, maven
>
> I want to connect to HDFS using Java, just like this:
> String url = "hdfs://c7301.ambari.apache.org:8020/file.txt";
> FileSystem fs = null;
> InputStream in = null;
> try {
>     Configuration conf = new Configuration();
>     fs = FileSystem.get(URI.create(url), conf, "admin");
>     in = fs.open(new Path(url));
>     IOUtils.copyBytes(in, System.out, 4096, false);
> } catch (Exception e) {
>     e.printStackTrace();
> } finally {
>     IOUtils.closeStream(fs);
> }
> *Error that I got*
> [2021-02-17 20:02:06,115] ERROR PriviledgedActionException as:admin 
> cause:org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot 
> communicate with client version 4 
> (org.apache.hadoop.security.UserGroupInformation:1124)
> org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot 
> communicate with client version 4
>  at org.apache.hadoop.ipc.Client.call(Client.java:1070)
>  at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>  at com.sun.proxy.$Proxy4.getProtocolVersion(Unknown Source)
>  at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>  at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
>  at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
>  at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
>  at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
>  at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
>  at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>  at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
>  at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
>  at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:117)
>  at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:115)
>  at java.base/java.security.AccessController.doPrivileged(Native Method)
>  at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>  at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:115)
>  at Main.main(Main.java:38) 
> I tried different solutions to the problem, but nothing helped.
> *It's my pom.xml file*
> <project xmlns="http://maven.apache.org/POM/4.0.0"
>          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
>                              http://maven.apache.org/xsd/maven-4.0.0.xsd">
>   <modelVersion>4.0.0</modelVersion>
>   <groupId>org.example</groupId>
>   <artifactId>producer</artifactId>
>   <version>1.0-SNAPSHOT</version>
>   <properties>
>     <maven.compiler.source>11</maven.compiler.source>
>     <maven.compiler.target>11</maven.compiler.target>
>   </properties>
>   <dependencies>
>     <dependency>
>       <groupId>org.apache.kafka</groupId>
>       <artifactId>kafka-clients</artifactId>
>       <version>0.10.0.0</version>
>     </dependency>
>     <dependency>
>       <groupId>org.apache.hadoop</groupId>
>       <artifactId>hadoop-common</artifactId>
>       <version>3.2.0</version>
>     </dependency>
>     <dependency>
>       <groupId>org.apache.hadoop</groupId>
>       <artifactId>hadoop-hdfs</artifactId>
>       <version>3.2.0</version>
>     </dependency>
>     <dependency>
>       <groupId>org.apache.hadoop</groupId>
>       <artifactId>hadoop-yarn-common</artifactId>
>       <version>3.2.0</version>
>     </dependency>
>     <dependency>
>       <groupId>org.apache.hadoop</groupId>
>       <artifactId>hadoop-mapreduce-client-common</artifactId>
>       <version>3.2.0</version>
>     </dependency>
>     <dependency>
>       <groupId>org.apache.hadoop</groupId>
>       <artifactId>hadoop-mapreduce-client-core</artifactId>
>       <version>3.2.0</version>
>     </dependency>
>   </dependencies>
>   <build>
>     <plugins>
>       <plugin>
>         <groupId>org.apache.maven.plugins</groupId>
>         <artifactId>maven-shade-plugin</artifactId>
>         <version>3.2.4</version>
>         <executions>
>           <execution>
>             <phase>package</phase>
>             <goals>
>               <goal>shade</goal>
>             </goals>
>             <configuration>
>               <transformers>
>                 <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
>                   <mainClass>Main</mainClass>
>                 </transformer>
>               </transformers>
>             </configuration>
>           </execution>
>         </executions>
>       </plugin>
>     </plugins>
>   </build>
> </project>
> *And this is the version of HDFS*
> Hadoop 3.1.1.3.1.4.0-315
> Source code repository g...@github.com:hortonworks/hadoop.git -r 58d0fd3d8ce58b10149da3c717c45e5e57a60d14
> Compiled by jenkins on 2019-08-23T05:15Z
> Compiled with protoc 2.5.0
> From source with checksum fcbd146ffa6d48fef0ed81332f9d6f0
> This command was run using /usr/ddp/3.1.4.0-315/hadoop/hadoop-common-3.1.1.3.1.4.0-315.jar
>  
> If someone knows of a similar problem, please help.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17517) Non-randomized password used

2021-02-08 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HADOOP-17517.
-
Resolution: Invalid

> Non-randomized password used
> 
>
> Key: HADOOP-17517
> URL: https://issues.apache.org/jira/browse/HADOOP-17517
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Md Mahir Asef Kabir
>Priority: Major
>
> In file 
> [https://github.com/apache/hadoop/blob/a89ca56a1b0eb949f56e7c6c5c25fdf87914a02f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java]
>  (at Line 322) non-randomized password is used.
> *Security Impact*:
> Hackers can get access to the non-randomized passwords and compromise the 
> system.
> *Solution we suggest*:
> The password should be generated randomly.
> *Please share your opinions/comments with us, if there are any*:
> Is the bug report helpful?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17518) Usage of incorrect regex range A-z

2021-02-08 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17281189#comment-17281189
 ] 

Kihwal Lee commented on HADOOP-17518:
-

It is only referenced by a HttpFS test. We could simply remove it.
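
For reference, the over-broad range described in the report below is easy to confirm with a 
couple of lines (hypothetical class name, not part of any patch):
{code:java}
// Quick check that the character class [A-z] also accepts the ASCII
// punctuation characters sitting between 'Z' and 'a'. Every line prints true.
public class RangeCheck {
  public static void main(String[] args) {
    for (String s : new String[] {"A", "Z", "[", "\\", "]", "^", "_", "`", "a", "z"}) {
      System.out.println(s + " matches [A-z]: " + s.matches("[A-z]"));
    }
  }
}
{code}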

> Usage of incorrect regex range A-z
> --
>
> Key: HADOOP-17518
> URL: https://issues.apache.org/jira/browse/HADOOP-17518
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Marcono1234
>Priority: Minor
>
> There are two cases where the regex {{A-z}} is used. I assume that is a typo 
> (and should be {{A-Z}}) because {{A-z}} matches:
> - {{A-Z}}
> - {{\[}}, {{\}}, {{\]}}, {{^}}, {{_}}, {{`}}
> - {{a-z}}
> Affected:
> - 
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/util/Check.java#L109
> (and 
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/util/Check.java#L115)
> - 
> https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/resourcetypes/ResourceTypesTestHelper.java#L38



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13626) Remove distcp dependency on FileStatus serialization

2020-11-20 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17236467#comment-17236467
 ] 

Kihwal Lee edited comment on HADOOP-13626 at 11/20/20, 9:52 PM:


I believe this creates a compatibility issue.  When the job submission env and 
the runtime env are different (e.g. 2.10 Vs. 2.8), distcp jobs cannot run due 
to this incompatibility.   Users who move in lock steps are fine, but we have a 
lot of diverse users and interoperability is very critical.  We had to revert 
it and a major change that came after this to resolve the issue.


was (Author: kihwal):
I believe this creates a compatibility issue.  When the job submission env and 
runtime env are different, distcp jobs cannot run due to this incompatibility.  
 Users who move in lock steps are fine, but we have a lot of diverse users and 
interoperability is very critical.  We had to revert it and a major change that 
came after this to resolve the issue.

> Remove distcp dependency on FileStatus serialization
> 
>
> Key: HADOOP-13626
> URL: https://issues.apache.org/jira/browse/HADOOP-13626
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Reporter: Christopher Douglas
>Assignee: Christopher Douglas
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13626.001.patch, HADOOP-13626.002.patch, 
> HADOOP-13626.003.patch, HADOOP-13626.004.patch
>
>
> DistCp uses an internal struct {{CopyListingFileStatus}} to record metadata. 
> Because this record extends {{FileStatus}}, it also relies on the 
> {{Writable}} contract from that type. Because DistCp performs its checks on a 
> subset of the fields (i.e., does not actually rely on {{FileStatus}} as a 
> supertype), these types should be independent.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13626) Remove distcp dependency on FileStatus serialization

2020-11-20 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17236467#comment-17236467
 ] 

Kihwal Lee edited comment on HADOOP-13626 at 11/20/20, 9:52 PM:


I believe this creates a compatibility issue.  When the job submission env and 
the runtime env are different (e.g. 2.10 Vs. 2.8), distcp jobs cannot run due 
to this incompatibility.   Users who move in lock steps are fine, but we have a 
lot of diverse users and interoperability between versions is very critical.  
We had to revert it and a major change that came after this to resolve the 
issue.


was (Author: kihwal):
I believe this creates a compatibility issue.  When the job submission env and 
the runtime env are different (e.g. 2.10 Vs. 2.8), distcp jobs cannot run due 
to this incompatibility.   Users who move in lock steps are fine, but we have a 
lot of diverse users and interoperability is very critical.  We had to revert 
it and a major change that came after this to resolve the issue.

> Remove distcp dependency on FileStatus serialization
> 
>
> Key: HADOOP-13626
> URL: https://issues.apache.org/jira/browse/HADOOP-13626
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Reporter: Christopher Douglas
>Assignee: Christopher Douglas
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13626.001.patch, HADOOP-13626.002.patch, 
> HADOOP-13626.003.patch, HADOOP-13626.004.patch
>
>
> DistCp uses an internal struct {{CopyListingFileStatus}} to record metadata. 
> Because this record extends {{FileStatus}}, it also relies on the 
> {{Writable}} contract from that type. Because DistCp performs its checks on a 
> subset of the fields (i.e., does not actually rely on {{FileStatus}} as a 
> supertype), these types should be independent.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13626) Remove distcp dependency on FileStatus serialization

2020-11-20 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17236467#comment-17236467
 ] 

Kihwal Lee commented on HADOOP-13626:
-

I believe this creates a compatibility issue.  When the job submission env and 
runtime env are different, distcp jobs cannot run due to this incompatibility.  
 Users who move in lock steps are fine, but we have a lot of diverse users and 
interoperability is very critical.  We had to revert it and a major change that 
came after this to resolve the issue.

> Remove distcp dependency on FileStatus serialization
> 
>
> Key: HADOOP-13626
> URL: https://issues.apache.org/jira/browse/HADOOP-13626
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Reporter: Christopher Douglas
>Assignee: Christopher Douglas
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13626.001.patch, HADOOP-13626.002.patch, 
> HADOOP-13626.003.patch, HADOOP-13626.004.patch
>
>
> DistCp uses an internal struct {{CopyListingFileStatus}} to record metadata. 
> Because this record extends {{FileStatus}}, it also relies on the 
> {{Writable}} contract from that type. Because DistCp performs its checks on a 
> subset of the fields (i.e., does not actually rely on {{FileStatus}} as a 
> supertype), these types should be independent.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17378) java.lang.NoClassDefFoundError: org/apache/hadoop/tracing/SpanReceiverHost

2020-11-12 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17231174#comment-17231174
 ] 

Kihwal Lee commented on HADOOP-17378:
-

{{SpanReceiverHost}} has been removed since hadoop 2.8. That means you have 
mixed versions of hadoop jars in the classpath. It is possible that one of the 
artifacts you are picking up is bundling an older version of hadoop.
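
A minimal, hypothetical sketch of one way to check for that (the class name 
{{FindDuplicates}} is made up and not part of any patch): list every classpath location 
that provides a given Hadoop class.
{code:java}
import java.net.URL;
import java.util.Enumeration;

// Hypothetical diagnostic only: more than one line printed means multiple
// (likely mismatched) Hadoop jars are on the classpath.
public class FindDuplicates {
  public static void main(String[] args) throws Exception {
    Enumeration<URL> urls = FindDuplicates.class.getClassLoader()
        .getResources("org/apache/hadoop/hdfs/DFSClient.class");
    while (urls.hasMoreElements()) {
      System.out.println(urls.nextElement());
    }
  }
}
{code}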

Please note that Jira is used for bug reporting. For general help in using 
hadoop, please use the official mailing lists.  See 
https://hadoop.apache.org/mailing_lists.html.

> java.lang.NoClassDefFoundError: org/apache/hadoop/tracing/SpanReceiverHost
> --
>
> Key: HADOOP-17378
> URL: https://issues.apache.org/jira/browse/HADOOP-17378
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tracing
>Affects Versions: 3.0.0
> Environment: these are the libraries that I use
> compile 'org.apache.maven.plugins:maven-shade-plugin:2.4.3'
>  compile 'org.apache.hadoop:hadoop-common:3.0.0'
>  compile 'org.apache.flume.flume-ng-sinks:flume-hdfs-sink:1.9.0'
>  compile 'org.apache.flume.flume-ng-sources:flume-kafka-source:1.9.0'
>  compile 'org.apache.hbase:hbase-client:2.1.0'
>  compile 'org.apache.flume.flume-ng-sinks:flume-ng-hbase-sink:1.9.0'
>  compile 'redis.clients:jedis:2.9.0'
>  compile 'org.apache.kafka:kafka-clients:0.10.2.1'
>  compile 'org.apache.hadoop:hadoop-client:3.0.0'
>  compile 'org.apache.hive:hive-exec:2.1.1'
>  compile 'org.mariadb.jdbc:mariadb-java-client:1.6.1'
>  compileOnly 'org.apache.flume:flume-ng-core:1.9.0'
> compile group: 'org.apache.kafka', name: 'kafka_2.10', version:'0.10.2.1'
>  compile group: 'org.apache.kudu', name: 'kudu-client', version:'1.10.0'
>  compile group: 'org.apache.flume', name: 'flume-ng-configuration', 
> version:'1.9.0'
>  compile group: 'org.apache.yetus', name: 'audience-annotations', 
> version:'0.4.0'
>  compile group: 'org.apache.avro', name: 'avro', version:'1.8.2'
>  compile group: 'org.slf4j', name: 'slf4j-api', version:'1.7.25'
>  compile group: 'org.postgresql', name: 'postgresql', version:'42.1.4.jre7'
>  compile group: 'org.apache.maven.plugins', name: 'maven-resources-plugin', 
> version:'2.6'
>  testCompile group: 'junit', name: 'junit', version: '4.12'
>  compile group: 'org.apache.parquet', name: 'parquet-hadoop-bundle', version: 
> '1.9.0'
>  compile group: 'org.apache.hive', name: 'hive-jdbc', version: '2.1.1'
>Reporter: 정진영
>Priority: Major
>
> Hi. I need help.
> I am now trying to migrate from Cloudera Flume to Apache Flume. 
> (which means no longer using XXX _chd5.16)
> During the test, when I stored data in HDFS, I faced the problem below.
> java.lang.NoClassDefFoundError: 
> org/apache/hadoop/tracing/SpanReceiverHostjava.lang.NoClassDefFoundError: 
> org/apache/hadoop/tracing/SpanReceiverHost at 
> org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:634) at 
> org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:619) at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
>  at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2816) at 
> org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:98) at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2853) at 
> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2835) at 
> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:387) at 
> org.apache.hadoop.fs.Path.getFileSystem(Path.java:296) at 
> com.poscoict.posframe.bdp.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:238)
>  at 
> com.poscoict.posframe.bdp.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:230)
>  at 
> com.poscoict.posframe.bdp.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:675)
>  at java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
>  at org.apache.flume.auth.UGIExecutor.execute(UGIExecutor.java:46) at 
> com.poscoict.posframe.bdp.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:672)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)Caused by: 
> java.lang.ClassNotFoundException: org.apache.hadoop.tracing.SpanReceiverHost 
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:424) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:357) 
>  
> And I don't know how to fix it.
> Please help me. Thank you!

[jira] [Resolved] (HADOOP-17378) java.lang.NoClassDefFoundError: org/apache/hadoop/tracing/SpanReceiverHost

2020-11-12 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HADOOP-17378.
-
Resolution: Invalid

> java.lang.NoClassDefFoundError: org/apache/hadoop/tracing/SpanReceiverHost
> --
>
> Key: HADOOP-17378
> URL: https://issues.apache.org/jira/browse/HADOOP-17378
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tracing
>Affects Versions: 3.0.0
> Environment: these are the libraries that I use
> compile 'org.apache.maven.plugins:maven-shade-plugin:2.4.3'
>  compile 'org.apache.hadoop:hadoop-common:3.0.0'
>  compile 'org.apache.flume.flume-ng-sinks:flume-hdfs-sink:1.9.0'
>  compile 'org.apache.flume.flume-ng-sources:flume-kafka-source:1.9.0'
>  compile 'org.apache.hbase:hbase-client:2.1.0'
>  compile 'org.apache.flume.flume-ng-sinks:flume-ng-hbase-sink:1.9.0'
>  compile 'redis.clients:jedis:2.9.0'
>  compile 'org.apache.kafka:kafka-clients:0.10.2.1'
>  compile 'org.apache.hadoop:hadoop-client:3.0.0'
>  compile 'org.apache.hive:hive-exec:2.1.1'
>  compile 'org.mariadb.jdbc:mariadb-java-client:1.6.1'
>  compileOnly 'org.apache.flume:flume-ng-core:1.9.0'
> compile group: 'org.apache.kafka', name: 'kafka_2.10', version:'0.10.2.1'
>  compile group: 'org.apache.kudu', name: 'kudu-client', version:'1.10.0'
>  compile group: 'org.apache.flume', name: 'flume-ng-configuration', 
> version:'1.9.0'
>  compile group: 'org.apache.yetus', name: 'audience-annotations', 
> version:'0.4.0'
>  compile group: 'org.apache.avro', name: 'avro', version:'1.8.2'
>  compile group: 'org.slf4j', name: 'slf4j-api', version:'1.7.25'
>  compile group: 'org.postgresql', name: 'postgresql', version:'42.1.4.jre7'
>  compile group: 'org.apache.maven.plugins', name: 'maven-resources-plugin', 
> version:'2.6'
>  testCompile group: 'junit', name: 'junit', version: '4.12'
>  compile group: 'org.apache.parquet', name: 'parquet-hadoop-bundle', version: 
> '1.9.0'
>  compile group: 'org.apache.hive', name: 'hive-jdbc', version: '2.1.1'
>Reporter: 정진영
>Priority: Major
>
> Hi. I need help.
> I am now trying to migrate from Cloudera Flume to Apache Flume. 
> (which means no longer using XXX _chd5.16)
> During the test, when I stored data in HDFS, I faced the problem below.
> java.lang.NoClassDefFoundError: 
> org/apache/hadoop/tracing/SpanReceiverHostjava.lang.NoClassDefFoundError: 
> org/apache/hadoop/tracing/SpanReceiverHost at 
> org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:634) at 
> org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:619) at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
>  at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2816) at 
> org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:98) at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2853) at 
> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2835) at 
> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:387) at 
> org.apache.hadoop.fs.Path.getFileSystem(Path.java:296) at 
> com.poscoict.posframe.bdp.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:238)
>  at 
> com.poscoict.posframe.bdp.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:230)
>  at 
> com.poscoict.posframe.bdp.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:675)
>  at java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
>  at org.apache.flume.auth.UGIExecutor.execute(UGIExecutor.java:46) at 
> com.poscoict.posframe.bdp.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:672)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)Caused by: 
> java.lang.ClassNotFoundException: org.apache.hadoop.tracing.SpanReceiverHost 
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:424) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:357) 
>  
> And I don't know how to fix it.
> Please help me.
> Thank you!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17224) Install Intel ISA-L library in Dockerfile

2020-11-05 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226917#comment-17226917
 ] 

Kihwal Lee commented on HADOOP-17224:
-

[~tasanuma], If there are many test failures caused after the commit of this 
feature, it might be better to revert and rework, unless there is a simple 
change that will fix most failures quickly.  It does not matter which part is 
to blame. It could be faulty assumptions or designs in existing tests or 
unforeseen negative side-effect of the feature.  The point is, trunk build is 
in a broken state and more commits continue to come in with the justification 
like "I didn't break it. It was broken before".  This is a situation we really 
want to avoid.

If the test failures cannot be fixed quickly, we need to bring it back to the 
sane state first. Again, this has nothing to do with who is technically right 
or wrong.  Please review the test failures and determine whether they can be 
fixed quickly.  Feel free to ask if you need additional eyes.

> Install Intel ISA-L library in Dockerfile
> -
>
> Key: HADOOP-17224
> URL: https://issues.apache.org/jira/browse/HADOOP-17224
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Blocker
> Fix For: 3.4.0
>
>
> Currently, there is no ISA-L library in the Docker container, and Jenkins 
> skips the native tests, TestNativeRSRawCoder and TestNativeXORRawCoder.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17221) update log4j-1.2.17 to atlassian version( To Address: CVE-2019-17571)

2020-08-24 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17183582#comment-17183582
 ] 

Kihwal Lee commented on HADOOP-17221:
-

I know some security scanners recommend the Atlassian version, but last time I 
checked their repo, I did not find any fix that would address the CVE.  We 
should double check this.

> update log4j-1.2.17 to atlassian version( To Address: CVE-2019-17571)
> -
>
> Key: HADOOP-17221
> URL: https://issues.apache.org/jira/browse/HADOOP-17221
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP-17221-001.patch
>
>
> Currently there is no active release under 1.x of log4j, and log4j2 is 
> incompatible to upgrade to (see HADOOP-16206 for more details).
> But the following CVE is reported against log4j 1.2.17. I think we should consider 
> updating to the 
> Atlassian ([https://mvnrepository.com/artifact/log4j/log4j/1.2.17-atlassian-0.4])
>  or Red Hat versions.
> [https://nvd.nist.gov/vuln/detail/CVE-2019-17571]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15338) Java 11 runtime support

2020-06-09 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129854#comment-17129854
 ] 

Kihwal Lee edited comment on HADOOP-15338 at 6/9/20, 10:36 PM:
---

Maybe I missed it being discussed before. 
 I see illegal access warnings when I run FsShell commands.
{noformat}
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.xbill.DNS.ResolverConfig  to method 
sun.net.dns.ResolverConfiguration.open()
WARNING: Please consider reporting this to the maintainers of 
org.xbill.DNS.ResolverConfig
WARNING: Use --illegal-access=warn to enable warnings of further illegal 
reflective access operations
WARNING: All illegal access operations will be denied in a future release
{noformat}
Of course, it is just a first warning. If you set {{--illegal-access=debug}}, 
you see a whole lot more. The default is still {{permit}} in jdk11 and the code 
is working despite the warning message. Are we doing anything to address these?


was (Author: kihwal):
Maybe I missed it being discussed before. 
I see illegal access warnings when I run FsShell commands.
{noformat}
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.xbill.DNS.ResolverConfig  to method 
sun.net.dns.ResolverConfiguration.open()
WARNING: Please consider reporting this to the maintainers of 
org.xbill.DNS.ResolverConfig
WARNING: Use --illegal-access=warn to enable warnings of further illegal 
reflective access operations
WARNING: All illegal access operations will be denied in a future release
{noformat}

Of course, it is just a first warning. If you set {{--illegal-access=debug}}, 
you see a whole lot more. The default is still {{permit}} and the code is working 
despite the warning message.  Are we doing anything to address these?

> Java 11 runtime support
> ---
>
> Key: HADOOP-15338
> URL: https://issues.apache.org/jira/browse/HADOOP-15338
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0
>
>
> Oracle JDK 8 will be EoL during January 2019, and RedHat will end support for 
> OpenJDK 8 in June 2023 ([https://access.redhat.com/articles/1299013]), so we 
> need to support Java 11 LTS at least before June 2023.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15338) Java 11 runtime support

2020-06-09 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129854#comment-17129854
 ] 

Kihwal Lee commented on HADOOP-15338:
-

Maybe I missed it being discussed before. 
I see illegal access warnings when I run FsShell commands.
{noformat}
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.xbill.DNS.ResolverConfig  to method 
sun.net.dns.ResolverConfiguration.open()
WARNING: Please consider reporting this to the maintainers of 
org.xbill.DNS.ResolverConfig
WARNING: Use --illegal-access=warn to enable warnings of further illegal 
reflective access operations
WARNING: All illegal access operations will be denied in a future release
{noformat}

Of course, it is just a first warning. If you set {{--illegal-access=debug}}, 
you see a whole lot more. The default is still {{permit}} and the code is working 
despite the warning message.  Are we doing anything to address these?

> Java 11 runtime support
> ---
>
> Key: HADOOP-15338
> URL: https://issues.apache.org/jira/browse/HADOOP-15338
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0
>
>
> Oracle JDK 8 will be EoL during January 2019, and RedHat will end support for 
> OpenJDK 8 in June 2023 ([https://access.redhat.com/articles/1299013]), so we 
> need to support Java 11 LTS at least before June 2023.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16255) ChecksumFS.Make FileSystem.rename(path, path, options) doesn't rename checksum

2020-06-08 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17128690#comment-17128690
 ] 

Kihwal Lee commented on HADOOP-16255:
-

Cherry-picked the change to branch-3.1. Committed the branch-2 patch to 
branch-2.10 and also cherry-picked it to branch-2.9 and branch-2.8.  Branch 2.9 
and 2.8 are EOL except for security fixes, but some of us are still tracking 
the branches.

> ChecksumFS.Make FileSystem.rename(path, path, options) doesn't rename checksum
> --
>
> Key: HADOOP-16255
> URL: https://issues.apache.org/jira/browse/HADOOP-16255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.5, 3.1.2
>Reporter: Steve Loughran
>Assignee: Jungtaek Lim
>Priority: Major
> Fix For: 2.8.6, 3.2.1, 2.10.1, 3.1.5
>
> Attachments: HADOOP-16255-branch-2-001.patch
>
>
> ChecksumFS doesn't override FilterFS rename/3, so doesn't rename the checksum 
> with the file.
> As a result, if a file is renamed over an existing file using rename(src, 
> dest, OVERWRITE) the renamed file will be considered to have an invalid 
> checksum -the old one is picked up instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16255) ChecksumFS.Make FileSystem.rename(path, path, options) doesn't rename checksum

2020-06-08 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-16255:

Fix Version/s: 2.9.3

> ChecksumFS.Make FileSystem.rename(path, path, options) doesn't rename checksum
> --
>
> Key: HADOOP-16255
> URL: https://issues.apache.org/jira/browse/HADOOP-16255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.5, 3.1.2
>Reporter: Steve Loughran
>Assignee: Jungtaek Lim
>Priority: Major
> Fix For: 2.8.6, 3.2.1, 2.9.3, 2.10.1, 3.1.5
>
> Attachments: HADOOP-16255-branch-2-001.patch
>
>
> ChecksumFS doesn't override FilterFS rename/3, so doesn't rename the checksum 
> with the file.
> As a result, if a file is renamed over an existing file using rename(src, 
> dest, OVERWRITE) the renamed file will be considered to have an invalid 
> checksum -the old one is picked up instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16255) ChecksumFS.Make FileSystem.rename(path, path, options) doesn't rename checksum

2020-06-08 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-16255:

Fix Version/s: 3.1.5
   2.10.1
   2.8.6

> ChecksumFS.Make FileSystem.rename(path, path, options) doesn't rename checksum
> --
>
> Key: HADOOP-16255
> URL: https://issues.apache.org/jira/browse/HADOOP-16255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.5, 3.1.2
>Reporter: Steve Loughran
>Assignee: Jungtaek Lim
>Priority: Major
> Fix For: 2.8.6, 3.2.1, 2.10.1, 3.1.5
>
> Attachments: HADOOP-16255-branch-2-001.patch
>
>
> ChecksumFS doesn't override FilterFS rename/3, so doesn't rename the checksum 
> with the file.
> As a result, if a file is renamed over an existing file using rename(src, 
> dest, OVERWRITE) the renamed file will be considered to have an invalid 
> checksum -the old one is picked up instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16255) ChecksumFS.Make FileSystem.rename(path, path, options) doesn't rename checksum

2020-06-08 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17128685#comment-17128685
 ] 

Kihwal Lee commented on HADOOP-16255:
-

This needs to be in all active branches.

> ChecksumFS.Make FileSystem.rename(path, path, options) doesn't rename checksum
> --
>
> Key: HADOOP-16255
> URL: https://issues.apache.org/jira/browse/HADOOP-16255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.5, 3.1.2
>Reporter: Steve Loughran
>Assignee: Jungtaek Lim
>Priority: Major
> Fix For: 3.2.1
>
> Attachments: HADOOP-16255-branch-2-001.patch
>
>
> ChecksumFS doesn't override FilterFS rename/3, so doesn't rename the checksum 
> with the file.
> As a result, if a file is renamed over an existing file using rename(src, 
> dest, OVERWRITE) the renamed file will be considered to have an invalid 
> checksum -the old one is picked up instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17070) LocalFs rename is broken.

2020-06-08 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HADOOP-17070.
-
Resolution: Duplicate

> LocalFs rename is broken.
> -
>
> Key: HADOOP-17070
> URL: https://issues.apache.org/jira/browse/HADOOP-17070
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
>
> In LocalFs and any other FileContext based on ChecksumFs, the 
> {{renameInternal(src, dest, overwrite)}} method is broken since it is not 
> implemented. The method in FilterFs will be invoked, which is checksum 
> unaware. This can result in file leaks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-17070) LocalFs rename is broken.

2020-06-08 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee reassigned HADOOP-17070:
---

Assignee: Kihwal Lee

> LocalFs rename is broken.
> -
>
> Key: HADOOP-17070
> URL: https://issues.apache.org/jira/browse/HADOOP-17070
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
>
> In LocalFs and any other FileContext based on ChecksumFs, the 
> {{renameInternal(src, dest, overwrite)}} method is broken since it is not 
> implemented. The method in FilterFs will be invoked, which is checksum 
> unaware. This can result in file leaks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17070) LocalFs rename is broken.

2020-06-08 Thread Kihwal Lee (Jira)
Kihwal Lee created HADOOP-17070:
---

 Summary: LocalFs rename is broken.
 Key: HADOOP-17070
 URL: https://issues.apache.org/jira/browse/HADOOP-17070
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee


In LocalFs and any other FileContext based on ChecksumFs, the 
{{renameInternal(src, dest, overwrite)}} method is broken since it is not 
implemented. The method in FilterFs will be invoked, which is checksum unaware. 
This can result in file leaks.




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-13867) FilterFileSystem should override rename(.., options) to take effect of Rename options called via FilterFileSystem implementations

2020-06-05 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-13867:

Comment: was deleted

(was: I just found out this breaks {{LocalFileSystem}}. If the three argument 
version of rename() is called against {{LocalFileSystem}}, it ends up calling 
{{FilterFileSystem}}'s method, which doesn't do the right thing (i.e. the crc 
files are not renamed). This is because {{ChecksumFileSystem}} does not 
override this method.  When this Jira was done, all subclasses of 
{{FilterFileSystem}} should have been checked for this kind of side-effect.

 )

> FilterFileSystem should override rename(.., options) to take effect of Rename 
> options called via FilterFileSystem implementations
> -
>
> Key: HADOOP-13867
> URL: https://issues.apache.org/jira/browse/HADOOP-13867
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha2, 2.8.2
>
> Attachments: HADOOP-13867-01.patch
>
>
> HDFS-8312 Added Rename.TO_TRASH option to add a security check before moving 
> to trash.
> But for FilterFileSystem implementations since this rename(..options) is not 
> overridden, it uses default FileSystem implementation where Rename.TO_TRASH 
> option is not delegated to NameNode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13867) FilterFileSystem should override rename(.., options) to take effect of Rename options called via FilterFileSystem implementations

2020-06-03 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17125509#comment-17125509
 ] 

Kihwal Lee commented on HADOOP-13867:
-

I just found out this breaks {{LocalFileSystem}}. If the three argument version 
of rename() is called against {{LocalFileSystem}}, it ends up calling 
{{FilterFileSystem}}'s method, which doesn't do the right thing (i.e. the crc 
files are not renamed). This is because {{ChecksumFileSystem}} does not 
override this method.  When this Jira was done, all subclasses of 
{{FilterFileSystem}} should have been checked for this kind of side-effect.
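
A rough, untested sketch of the kind of override that is missing, only to illustrate the 
gap (it assumes {{ChecksumFileSystem}}'s existing {{getChecksumFile()}} helper and the 
protected three-argument {{rename}} in {{FileSystem}}; it is not the committed fix):
{code:java}
// Illustrative only: make the options-aware rename move the .crc file along
// with the data file instead of falling through to FilterFileSystem's version.
@Override
protected void rename(Path src, Path dst, Options.Rename... options)
    throws IOException {
  fs.rename(src, dst, options);                       // rename the data file
  Path srcCrc = getChecksumFile(src);
  if (fs.exists(srcCrc)) {                            // keep the checksum in sync
    fs.rename(srcCrc, getChecksumFile(dst), options);
  }
}
{code}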

 

> FilterFileSystem should override rename(.., options) to take effect of Rename 
> options called via FilterFileSystem implementations
> -
>
> Key: HADOOP-13867
> URL: https://issues.apache.org/jira/browse/HADOOP-13867
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha2, 2.8.2
>
> Attachments: HADOOP-13867-01.patch
>
>
> HDFS-8312 Added Rename.TO_TRASH option to add a security check before moving 
> to trash.
> But for FilterFileSystem implementations since this rename(..options) is not 
> overridden, it uses default FileSystem implementation where Rename.TO_TRASH 
> option is not delegated to NameNode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17035) Trivial typo(s) which are 'timout', 'interruped' in comment, LOG and documents

2020-05-12 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HADOOP-17035.
-
Hadoop Flags: Reviewed
  Resolution: Fixed

Thanks for the patch. I've added you to the Hadoop Common contributors role.

> Trivial typo(s) which are 'timout', 'interruped' in comment, LOG and documents
> --
>
> Key: HADOOP-17035
> URL: https://issues.apache.org/jira/browse/HADOOP-17035
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sungpeo Kook
>Assignee: Sungpeo Kook
>Priority: Trivial
> Fix For: 3.4.0
>
>
> There are typos 'Interruped' and 'timout'.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17035) Trivial typo(s) which are 'timout', 'interruped' in comment, LOG and documents

2020-05-12 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-17035:

Fix Version/s: 3.4.0

> Trivial typo(s) which are 'timout', 'interruped' in comment, LOG and documents
> --
>
> Key: HADOOP-17035
> URL: https://issues.apache.org/jira/browse/HADOOP-17035
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sungpeo Kook
>Assignee: Sungpeo Kook
>Priority: Trivial
> Fix For: 3.4.0
>
>
> There are typos 'Interruped' and 'timout'.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-17035) Trivial typo(s) which are 'timout', 'interruped' in comment, LOG and documents

2020-05-12 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee reassigned HADOOP-17035:
---

Assignee: Sungpeo Kook

> Trivial typo(s) which are 'timout', 'interruped' in comment, LOG and documents
> --
>
> Key: HADOOP-17035
> URL: https://issues.apache.org/jira/browse/HADOOP-17035
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sungpeo Kook
>Assignee: Sungpeo Kook
>Priority: Trivial
>
> There are typos 'Interruped' and 'timout'.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-8143) Change distcp to have -pb on by default

2020-05-07 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-8143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101755#comment-17101755
 ] 

Kihwal Lee commented on HADOOP-8143:


For those who want this to be the default behavior, you can achieve it using 
config. 

In {{distcp-site.xml}}, set the following to preserve checksum and block size.
{code:xml}
<property>
  <name>distcp.always.preserve.checksum</name>
  <value>true</value>
</property>
{code}

If you build your own hadoop releases, you can put it in {{distcp-default.xml}}.

> Change distcp to have -pb on by default
> ---
>
> Key: HADOOP-8143
> URL: https://issues.apache.org/jira/browse/HADOOP-8143
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Dave Thompson
>Assignee: Mithun Radhakrishnan
>Priority: Minor
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-8143.1.patch, HADOOP-8143.2.patch, 
> HADOOP-8143.3.patch
>
>
> We should have the preserve-blocksize option (-pb) on in distcp by default.
> The checksum check, which is on by default, will always fail if the blocksize is not the same.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-8143) Change distcp to have -pb on by default

2020-05-07 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-8143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101711#comment-17101711
 ] 

Kihwal Lee commented on HADOOP-8143:


This was an hdfs-centric change, while distcp is used across many different file 
system implementations, more so now than before. What made sense back then 
doesn't make sense now. 

 

+1 for revert.

> Change distcp to have -pb on by default
> ---
>
> Key: HADOOP-8143
> URL: https://issues.apache.org/jira/browse/HADOOP-8143
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Dave Thompson
>Assignee: Mithun Radhakrishnan
>Priority: Minor
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-8143.1.patch, HADOOP-8143.2.patch, 
> HADOOP-8143.3.patch
>
>
> We should have the preserve-blocksize option (-pb) on in distcp by default.
> The checksum check, which is on by default, will always fail if the blocksize is not the same.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15743) Jetty and SSL tunings to stabilize KMS performance

2020-03-26 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17067988#comment-17067988
 ] 

Kihwal Lee commented on HADOOP-15743:
-

Loosely related: https://bugs.java.com/bugdatabase/view_bug.do?bug_id=8203190
Someone reported that this bug caused an HttpServer2-based service to almost hang 
sometimes. I imagine it can affect the SSL session cache as well.

> Jetty and SSL tunings to stabilize KMS performance 
> ---
>
> Key: HADOOP-15743
> URL: https://issues.apache.org/jira/browse/HADOOP-15743
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Priority: Major
>
> The KMS has very low throughput with high client failure rates.  The 
> following config options will "stabilize" the KMS under load:
>  # Disable ECDH algos because java's SSL engine is inexplicably HORRIBLE.
>  # Reduce SSL session cache size (unlimited) and ttl (24h).  The memory cache 
> has very poor performance and causes extreme GC collection pressure. Load 
> balancing diminishes the effectiveness of the cache to 1/N-hosts anyway.
>  ** -Djavax.net.ssl.sessionCacheSize=1000
>  ** -Djavax.net.ssl.sessionCacheTimeout=6
>  # Completely disable thread LowResourceMonitor to stop jetty from 
> immediately closing incoming connections during connection bursts.  Client 
> retries cause jetty to remain in a low resource state until many clients fail 
> and cause thousands of sockets to linger in various close related states.
>  # Set min/max threads to 4x processors.   Jetty recommends only 50 to 500 
> threads.  Java's SSL engine has excessive synchronization that limits 
> performance anyway.
>  # Set https idle timeout to 6s.
>  # Significantly increase max fds to at least 128k.  Recommend using a VIP 
> load balancer with a lower limit.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15787) [JDK11] TestIPC.testRTEDuringConnectionSetup fails

2020-02-05 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17030971#comment-17030971
 ] 

Kihwal Lee commented on HADOOP-15787:
-

I have cherry-picked it to other branches.

> [JDK11] TestIPC.testRTEDuringConnectionSetup fails
> --
>
> Key: HADOOP-15787
> URL: https://issues.apache.org/jira/browse/HADOOP-15787
> Project: Hadoop Common
>  Issue Type: Sub-task
> Environment: Java 11+28, CentOS 7.5
>Reporter: Akira Ajisaka
>Assignee: Zsolt Venczel
>Priority: Major
> Fix For: 3.3.0, 2.8.6, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HADOOP-15787.01.patch, HADOOP-15787.02.patch
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.ipc.TestIPC
> [ERROR] Tests run: 40, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 82.577 s <<< FAILURE! - in org.apache.hadoop.ipc.TestIPC
> [ERROR] testRTEDuringConnectionSetup(org.apache.hadoop.ipc.TestIPC)  Time 
> elapsed: 0.462 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.ipc.TestIPC.testRTEDuringConnectionSetup(TestIPC.java:625)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15787) [JDK11] TestIPC.testRTEDuringConnectionSetup fails

2020-02-05 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-15787:

Fix Version/s: 2.8.6

> [JDK11] TestIPC.testRTEDuringConnectionSetup fails
> --
>
> Key: HADOOP-15787
> URL: https://issues.apache.org/jira/browse/HADOOP-15787
> Project: Hadoop Common
>  Issue Type: Sub-task
> Environment: Java 11+28, CentOS 7.5
>Reporter: Akira Ajisaka
>Assignee: Zsolt Venczel
>Priority: Major
> Fix For: 3.3.0, 2.8.6, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HADOOP-15787.01.patch, HADOOP-15787.02.patch
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.ipc.TestIPC
> [ERROR] Tests run: 40, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 82.577 s <<< FAILURE! - in org.apache.hadoop.ipc.TestIPC
> [ERROR] testRTEDuringConnectionSetup(org.apache.hadoop.ipc.TestIPC)  Time 
> elapsed: 0.462 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.ipc.TestIPC.testRTEDuringConnectionSetup(TestIPC.java:625)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15787) [JDK11] TestIPC.testRTEDuringConnectionSetup fails

2020-02-05 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-15787:

Fix Version/s: 2.10.1
   3.2.2
   3.1.4

> [JDK11] TestIPC.testRTEDuringConnectionSetup fails
> --
>
> Key: HADOOP-15787
> URL: https://issues.apache.org/jira/browse/HADOOP-15787
> Project: Hadoop Common
>  Issue Type: Sub-task
> Environment: Java 11+28, CentOS 7.5
>Reporter: Akira Ajisaka
>Assignee: Zsolt Venczel
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HADOOP-15787.01.patch, HADOOP-15787.02.patch
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.ipc.TestIPC
> [ERROR] Tests run: 40, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 82.577 s <<< FAILURE! - in org.apache.hadoop.ipc.TestIPC
> [ERROR] testRTEDuringConnectionSetup(org.apache.hadoop.ipc.TestIPC)  Time 
> elapsed: 0.462 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.ipc.TestIPC.testRTEDuringConnectionSetup(TestIPC.java:625)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15787) [JDK11] TestIPC.testRTEDuringConnectionSetup fails

2020-02-05 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17030917#comment-17030917
 ] 

Kihwal Lee commented on HADOOP-15787:
-

The same failure happens in other branches when using the new jdk 8u242. The 
visibility of {{SocketAdapter}} was changed.

> [JDK11] TestIPC.testRTEDuringConnectionSetup fails
> --
>
> Key: HADOOP-15787
> URL: https://issues.apache.org/jira/browse/HADOOP-15787
> Project: Hadoop Common
>  Issue Type: Sub-task
> Environment: Java 11+28, CentOS 7.5
>Reporter: Akira Ajisaka
>Assignee: Zsolt Venczel
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15787.01.patch, HADOOP-15787.02.patch
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.ipc.TestIPC
> [ERROR] Tests run: 40, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 82.577 s <<< FAILURE! - in org.apache.hadoop.ipc.TestIPC
> [ERROR] testRTEDuringConnectionSetup(org.apache.hadoop.ipc.TestIPC)  Time 
> elapsed: 0.462 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.ipc.TestIPC.testRTEDuringConnectionSetup(TestIPC.java:625)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-11610) hadoop-common build fails on JDK9

2020-02-04 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HADOOP-11610.
-
Resolution: Won't Fix

> hadoop-common build fails on JDK9
> -
>
> Key: HADOOP-11610
> URL: https://issues.apache.org/jira/browse/HADOOP-11610
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.6.0
> Environment: JDK9
>Reporter: Tiago Stürmer Daitx
>Priority: Minor
>  Labels: build-failure, common, jdk9
> Attachments: hadoop-common_jdk9-support.patch
>
>
> The new JDK9 directory structure lacks a jre directory under jdk. Due to that 
> hadoop-common fails to build:
> Error output:
>  [exec] -- The C compiler identification is GNU 4.8.2
>  [exec] -- The CXX compiler identification is GNU 4.8.2
>  [exec] -- Check for working C compiler: /usr/bin/cc
>  [exec] -- Check for working C compiler: /usr/bin/cc -- works
>  [exec] -- Detecting C compiler ABI info
>  [exec] -- Detecting C compiler ABI info - done
>  [exec] -- Check for working CXX compiler: /usr/bin/c++
>  [exec] -- Check for working CXX compiler: /usr/bin/c++ -- works
>  [exec] -- Detecting CXX compiler ABI info
>  [exec] JAVA_HOME=, JAVA_JVM_LIBRARY=JAVA_JVM_LIBRARY-NOTFOUND
>  [exec] 
> JAVA_INCLUDE_PATH=/home/tdaitx/jdk9-dev/build/linux-ppc64-normal-server-release/images/jdk/include,
>  JAVA_INCLUDE_PATH2=/home/tdai$
> x/jdk9-dev/build/linux-ppc64-normal-server-release/images/jdk/include/linux
>  [exec] CMake Error at JNIFlags.cmake:120 (MESSAGE):
>  [exec]   Failed to find a viable JVM installation under JAVA_HOME.
>  [exec] Call Stack (most recent call first):
>  [exec]   CMakeLists.txt:24 (include)
>  [exec] 
>  [exec] 
>  [exec] -- Detecting CXX compiler ABI info - done
>  [exec] -- Configuring incomplete, errors occurred!
>  [exec] See also 
> "/home/tdaitx/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/target/native/CMakeFiles/CMakeOutput.log".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11610) hadoop-common build fails on JDK9

2020-02-04 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-11610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17030140#comment-17030140
 ] 

Kihwal Lee commented on HADOOP-11610:
-

The JDK11 build is being fixed in HADOOP-16795.

> hadoop-common build fails on JDK9
> -
>
> Key: HADOOP-11610
> URL: https://issues.apache.org/jira/browse/HADOOP-11610
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.6.0
> Environment: JDK9
>Reporter: Tiago Stürmer Daitx
>Priority: Minor
>  Labels: build-failure, common, jdk9
> Attachments: hadoop-common_jdk9-support.patch
>
>
> The new JDK9 directory structure lacks a jre directory under jdk. Due to that 
> hadoop-common fails to build:
> Error output:
>  [exec] -- The C compiler identification is GNU 4.8.2
>  [exec] -- The CXX compiler identification is GNU 4.8.2
>  [exec] -- Check for working C compiler: /usr/bin/cc
>  [exec] -- Check for working C compiler: /usr/bin/cc -- works
>  [exec] -- Detecting C compiler ABI info
>  [exec] -- Detecting C compiler ABI info - done
>  [exec] -- Check for working CXX compiler: /usr/bin/c++
>  [exec] -- Check for working CXX compiler: /usr/bin/c++ -- works
>  [exec] -- Detecting CXX compiler ABI info
>  [exec] JAVA_HOME=, JAVA_JVM_LIBRARY=JAVA_JVM_LIBRARY-NOTFOUND
>  [exec] 
> JAVA_INCLUDE_PATH=/home/tdaitx/jdk9-dev/build/linux-ppc64-normal-server-release/images/jdk/include,
>  JAVA_INCLUDE_PATH2=/home/tdai$
> x/jdk9-dev/build/linux-ppc64-normal-server-release/images/jdk/include/linux
>  [exec] CMake Error at JNIFlags.cmake:120 (MESSAGE):
>  [exec]   Failed to find a viable JVM installation under JAVA_HOME.
>  [exec] Call Stack (most recent call first):
>  [exec]   CMakeLists.txt:24 (include)
>  [exec] 
>  [exec] 
>  [exec] -- Detecting CXX compiler ABI info - done
>  [exec] -- Configuring incomplete, errors occurred!
>  [exec] See also 
> "/home/tdaitx/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/target/native/CMakeFiles/CMakeOutput.log".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16683) Disable retry of FailoverOnNetworkExceptionRetry in case of wrapped AccessControlException

2020-01-24 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17023120#comment-17023120
 ] 

Kihwal Lee commented on HADOOP-16683:
-

Picked to branch-2.10.

> Disable retry of FailoverOnNetworkExceptionRetry in case of wrapped 
> AccessControlException
> --
>
> Key: HADOOP-16683
> URL: https://issues.apache.org/jira/browse/HADOOP-16683
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HADOOP-16683.001.patch, HADOOP-16683.002.patch, 
> HADOOP-16683.003.patch, HADOOP-16683.branch-3.1.001.patch, 
> HADOOP-16683.branch-3.2.001.patch, HADOOP-16683.branch-3.2.001.patch
>
>
> Follow up patch on HADOOP-16580.
> We successfully disabled the retry in case of an AccessControlException which 
> has resolved some of the cases, but in other cases AccessControlException is 
> wrapped inside another IOException and you can only get the original 
> exception by calling getCause().
> Let's add this extra case as well.
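To illustrate the idea, a minimal, hypothetical sketch of the check being proposed, covering both the direct case from HADOOP-16580 (below) and the wrapped case described here. This is a standalone helper for illustration, not the actual FailoverOnNetworkExceptionRetry code:

{code:java}
import java.io.IOException;
import org.apache.hadoop.security.AccessControlException;

// Illustrative helper: treat an AccessControlException as non-retriable even
// when it is wrapped inside another IOException, by inspecting getCause().
public final class AccessControlExceptionCheck {

  private AccessControlExceptionCheck() {}

  static boolean isAccessControlFailure(Exception e) {
    // Direct case handled by HADOOP-16580.
    if (e instanceof AccessControlException) {
      return true;
    }
    // Wrapped case proposed here: unwrap one level via getCause().
    return e instanceof IOException
        && e.getCause() instanceof AccessControlException;
  }

  public static void main(String[] args) {
    Exception wrapped = new IOException("rpc failed",
        new AccessControlException("no kerberos credentials"));
    System.out.println(isAccessControlFailure(wrapped)); // true -> fail, do not retry
  }
}
{code}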



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16580) Disable retry of FailoverOnNetworkExceptionRetry in case of AccessControlException

2020-01-24 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17023119#comment-17023119
 ] 

Kihwal Lee commented on HADOOP-16580:
-

Picked to branch-2.10.

> Disable retry of FailoverOnNetworkExceptionRetry in case of 
> AccessControlException
> --
>
> Key: HADOOP-16580
> URL: https://issues.apache.org/jira/browse/HADOOP-16580
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HADOOP-16580.001.patch, HADOOP-16580.002.patch, 
> HADOOP-16580.003.patch, HADOOP-16580.branch-3.2.001.patch
>
>
> HADOOP-14982 handled the case where a SaslException is thrown. The issue 
> still persists, since the exception that is thrown is an 
> *AccessControlException* because the user has no Kerberos credentials. 
> My suggestion is that we should add this case as well to 
> {{FailoverOnNetworkExceptionRetry}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16580) Disable retry of FailoverOnNetworkExceptionRetry in case of AccessControlException

2020-01-24 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-16580:

Fix Version/s: 2.10.1

> Disable retry of FailoverOnNetworkExceptionRetry in case of 
> AccessControlException
> --
>
> Key: HADOOP-16580
> URL: https://issues.apache.org/jira/browse/HADOOP-16580
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HADOOP-16580.001.patch, HADOOP-16580.002.patch, 
> HADOOP-16580.003.patch, HADOOP-16580.branch-3.2.001.patch
>
>
> HADOOP-14982 handled the case where a SaslException is thrown. The issue 
> still persists, since the exception that is thrown is an 
> *AccessControlException* because the user has no Kerberos credentials. 
> My suggestion is that we should add this case as well to 
> {{FailoverOnNetworkExceptionRetry}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16683) Disable retry of FailoverOnNetworkExceptionRetry in case of wrapped AccessControlException

2020-01-24 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-16683:

Fix Version/s: 2.10.1

> Disable retry of FailoverOnNetworkExceptionRetry in case of wrapped 
> AccessControlException
> --
>
> Key: HADOOP-16683
> URL: https://issues.apache.org/jira/browse/HADOOP-16683
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HADOOP-16683.001.patch, HADOOP-16683.002.patch, 
> HADOOP-16683.003.patch, HADOOP-16683.branch-3.1.001.patch, 
> HADOOP-16683.branch-3.2.001.patch, HADOOP-16683.branch-3.2.001.patch
>
>
> Follow up patch on HADOOP-16580.
> We successfully disabled the retry in case of an AccessControlException which 
> has resolved some of the cases, but in other cases AccessControlException is 
> wrapped inside another IOException and you can only get the original 
> exception by calling getCause().
> Let's add this extra case as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16770) Compare two directories in HDFS filesystem for every 5 mins interval for same cluster. (similar to the diff command in linux)

2019-12-18 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HADOOP-16770.
-
Resolution: Invalid

You will probably get more suggestions by asking at the user mailing list.

> Compare two directories in HDFS filesystem for every 5 mins interval for same 
> cluster. (similar to the diff command in linux)
> ---
>
> Key: HADOOP-16770
> URL: https://issues.apache.org/jira/browse/HADOOP-16770
> Project: Hadoop Common
>  Issue Type: Task
>  Components: hdfs-client
>Affects Versions: 2.10.0
>Reporter: GanGSTR
>Priority: Major
>
> Hi team,
> We have created two Hadoop clusters. Cluster 1 stores files in new time-based 
> directories created in its file system, e.g. /a/b/time/a.txt, b.txt.
> Every 5 minutes we want to compare two directories on cluster 1 to see whether 
> any new directories or files have been added; if dir 1 has been updated, only 
> those files should be moved to dir 2. Later, those new files are copied to the 
> file system of HDFS cluster 2.
> HDFS currently does not support an hdfs dfs -diff command. Is there any 
> solution for this?
> We have tried the -copyFromLocal and -copyToLocal commands, but they use a lot 
> of disk space while copying from local to HDFS and from HDFS to local.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16770) Compare two directories in HDFS filesystem for every 5 mins interval for same cluster. (similar to the diff command in linux)

2019-12-18 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16999551#comment-16999551
 ] 

Kihwal Lee commented on HADOOP-16770:
-

That's not something file systems support. I am sure many users have built 
simple rsync-like tools.
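For reference, a minimal sketch of such an rsync-like comparison using the public FileSystem API. The class name, paths, the 5-minute scheduling, and the copy step are illustrative assumptions; this is not an existing Hadoop tool:

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical sketch: list files under a source directory that were modified
// after a given timestamp, i.e. the candidates to copy since the last run.
public class NewFileLister {
  public static List<Path> newerThan(FileSystem fs, Path dir, long sinceMillis)
      throws IOException {
    List<Path> result = new ArrayList<>();
    for (FileStatus status : fs.listStatus(dir)) {
      if (status.isDirectory()) {
        result.addAll(newerThan(fs, status.getPath(), sinceMillis));
      } else if (status.getModificationTime() > sinceMillis) {
        result.add(status.getPath());
      }
    }
    return result;
  }

  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    long lastRun = System.currentTimeMillis() - 5 * 60 * 1000L; // previous 5-minute window
    for (Path p : newerThan(fs, new Path("/a/b"), lastRun)) {
      System.out.println(p); // copy these, e.g. with FileUtil.copy or distcp
    }
  }
}
{code}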

> Compare two directories in HDFS filesystem for every 5 mins interval for same 
> cluster. (similar to the diff command in linux)
> ---
>
> Key: HADOOP-16770
> URL: https://issues.apache.org/jira/browse/HADOOP-16770
> Project: Hadoop Common
>  Issue Type: Task
>  Components: hdfs-client
>Affects Versions: 2.10.0
>Reporter: GanGSTR
>Priority: Major
>
> Hi team,
> We have created two Hadoop clusters. Cluster 1 stores files in new time-based 
> directories created in its file system, e.g. /a/b/time/a.txt, b.txt.
> Every 5 minutes we want to compare two directories on cluster 1 to see whether 
> any new directories or files have been added; if dir 1 has been updated, only 
> those files should be moved to dir 2. Later, those new files are copied to the 
> file system of HDFS cluster 2.
> HDFS currently does not support an hdfs dfs -diff command. Is there any 
> solution for this?
> We have tried the -copyFromLocal and -copyToLocal commands, but they use a lot 
> of disk space while copying from local to HDFS and from HDFS to local.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Deleted] (HADOOP-16738) Best Big data hadoop training in pune

2019-12-02 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee deleted HADOOP-16738:



> Best Big data hadoop training in pune
> -
>
> Key: HADOOP-16738
> URL: https://issues.apache.org/jira/browse/HADOOP-16738
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: surbhi nahta
>Priority: Major
>
> h1. What are some software and skills that every data scientist should know 
> (including R, Matlab, and Hadoop)? Also, what are some resources for learning 
> Hadoop?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Deleted] (HADOOP-16737) Best Big data hadoop training in pune

2019-12-02 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee deleted HADOOP-16737:



> Best Big data hadoop training in pune
> -
>
> Key: HADOOP-16737
> URL: https://issues.apache.org/jira/browse/HADOOP-16737
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: surbhi nahta
>Priority: Minor
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> h1. What are some software and skills that every data scientist should know 
> (including R, Matlab, and Hadoop)? Also, what are some resources for learning 
> Hadoop?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16738) Best Big data hadoop training in pune

2019-12-02 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HADOOP-16738.
-
Resolution: Invalid

> Best Big data hadoop training in pune
> -
>
> Key: HADOOP-16738
> URL: https://issues.apache.org/jira/browse/HADOOP-16738
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: surbhi nahta
>Priority: Major
>
> h1. What are some software and skills that every data scientist should know 
> (including R, Matlab, and Hadoop)? Also, what are some resources for learning 
> Hadoop?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16698) Certificate-based Authentication Support

2019-11-11 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HADOOP-16698.
-
Resolution: Duplicate

> Certificate-based Authentication Support
> 
>
> Key: HADOOP-16698
> URL: https://issues.apache.org/jira/browse/HADOOP-16698
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: auth
>Reporter: Xiaofei Gu
>Priority: Minor
>
> Currently, Hadoop is using the Kerberos ticket based authentication protocol. 
> This proposal is about developing passwordless/certificate-based 
> authentication using SSL.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16698) Certificate-based Authentication Support

2019-11-11 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16971865#comment-16971865
 ] 

Kihwal Lee commented on HADOOP-16698:
-

This is being worked on in HADOOP-15981.

> Certificate-based Authentication Support
> 
>
> Key: HADOOP-16698
> URL: https://issues.apache.org/jira/browse/HADOOP-16698
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: auth
>Reporter: Xiaofei Gu
>Priority: Minor
>
> Currently, Hadoop is using the Kerberos ticket based authentication protocol. 
> This proposal is about developing passwordless/certificate-based 
> authentication using SSL.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15977) RPC support for TLS

2019-11-11 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16971864#comment-16971864
 ] 

Kihwal Lee commented on HADOOP-15977:
-

We have the TLS-encrypted rpc feature in production and are currently in the 
process of adding certificate-based auth. Once that is done, I hope [~daryn] 
will be able to carve out some time to post patches. 

> RPC support for TLS
> ---
>
> Key: HADOOP-15977
> URL: https://issues.apache.org/jira/browse/HADOOP-15977
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, security
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Major
>
> Umbrella ticket to track adding TLS and mutual TLS support to RPC.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16641) RPC: Heavy contention on Configuration.getClassByNameOrNull

2019-10-08 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16946997#comment-16946997
 ] 

Kihwal Lee commented on HADOOP-16641:
-

I believe some unit tests still use the writable engine.  E.g. 
{{RPCCallBenchmark}} allows use of {{WritableRpcEngine}}. 

> RPC: Heavy contention on Configuration.getClassByNameOrNull 
> 
>
> Key: HADOOP-16641
> URL: https://issues.apache.org/jira/browse/HADOOP-16641
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Gopal Vijayaraghavan
>Priority: Major
>  Labels: performance
> Attachments: config-get-class-by-name.png, llap-rpc-locks.svg
>
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L2589
> {code}
> map = Collections.synchronizedMap(
>   new WeakHashMap<String, WeakReference<Class<?>>>());
> {code}
> This serializes all lookups for the same class loader across all threads 
> and forces rpc threads to yield.
>  !config-get-class-by-name.png! 
> When reading from HDFS with good locality, this fills up the contended lock 
> profile with almost no other contributors to the locking - see  
> [^llap-rpc-locks.svg] 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9747) Reduce unnecessary UGI synchronization

2019-09-25 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16937983#comment-16937983
 ] 

Kihwal Lee commented on HADOOP-9747:


Trunk at that time had diverged from branch-2, so back-porting was not 
straightforward. It can be done, but will take some effort.

> Reduce unnecessary UGI synchronization
> --
>
> Key: HADOOP-9747
> URL: https://issues.apache.org/jira/browse/HADOOP-9747
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HADOOP-9747-trunk-03.patch, HADOOP-9747-trunk-04.patch, 
> HADOOP-9747-trunk.01.patch, HADOOP-9747-trunk.02.patch, 
> HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch, 
> HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch
>
>
> Jstacks of heavily loaded NNs show up to dozens of threads blocking in the 
> UGI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16582) LocalFileSystem's mkdirs() does not work as expected under viewfs.

2019-09-19 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-16582:

Hadoop Flags: Reviewed
  Resolution: Fixed
  Status: Resolved  (was: Patch Available)

Thanks, Steve.  The checkstyle issue was fixed and the patch committed to all 
active branches.

> LocalFileSystem's mkdirs() does not work as expected under viewfs.
> --
>
> Key: HADOOP-16582
> URL: https://issues.apache.org/jira/browse/HADOOP-16582
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Fix For: 2.10.0, 3.3.0, 2.8.6, 2.9.3, 3.1.3, 3.2.2
>
> Attachments: HADOOP-16582.1.patch, HADOOP-16582.patch
>
>
> When {{mkdirs(Path)}} is called against {{LocalFileSystem}}, the 
> implementation in {{RawLocalFileSystem}} is called and the directory 
> permission is determined by the umask.  However, if it is under 
> {{ViewFileSystem}}, the default implementation in {{FileSystem}} is called 
> and this causes explicit {{chmod()}} to 0777.
> The {{mkdirs(Path)}} method needs to be overridden in
> - ViewFileSystem to avoid calling the default implementation
> - ChRootedFileSystem for proper resolution of viewfs mount table
> - FilterFileSystem to avoid calling the default implementation
> Only then the same method in the target ({{LocalFileSystem}} in this case) 
> will be called.  Hdfs does not suffer from the same flaw since it applies 
> umask in all cases, regardless of what version of {{mkdirs()}} was called.
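As an illustration of the delegation being described, a minimal sketch of the single-argument override in a FilterFileSystem-style wrapper. This is only the shape of the fix, not the committed patch; as the description says, the ViewFileSystem and ChRootedFileSystem overrides additionally need mount-table resolution:

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FilterFileSystem;
import org.apache.hadoop.fs.Path;

// Sketch only: forward the single-argument mkdirs() to the wrapped file system
// so the target (RawLocalFileSystem here) applies its umask, instead of letting
// the default FileSystem implementation run and chmod the new directory to 0777
// as described above.
public class UmaskPreservingFilterFileSystem extends FilterFileSystem {

  public UmaskPreservingFilterFileSystem(FileSystem wrapped) {
    super(wrapped);
  }

  @Override
  public boolean mkdirs(Path f) throws IOException {
    return fs.mkdirs(f); // 'fs' is FilterFileSystem's wrapped FileSystem field
  }
}
{code}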



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16582) LocalFileSystem's mkdirs() does not work as expected under viewfs.

2019-09-19 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-16582:

Fix Version/s: 3.2.2
   3.1.3
   2.9.3
   2.8.6
   3.3.0
   2.10.0

> LocalFileSystem's mkdirs() does not work as expected under viewfs.
> --
>
> Key: HADOOP-16582
> URL: https://issues.apache.org/jira/browse/HADOOP-16582
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Fix For: 2.10.0, 3.3.0, 2.8.6, 2.9.3, 3.1.3, 3.2.2
>
> Attachments: HADOOP-16582.1.patch, HADOOP-16582.patch
>
>
> When {{mkdirs(Path)}} is called against {{LocalFileSystem}}, the 
> implementation in {{RawLocalFileSystem}} is called and the directory 
> permission is determined by the umask.  However, if it is under 
> {{ViewFileSystem}}, the default implementation in {{FileSystem}} is called 
> and this causes explicit {{chmod()}} to 0777.
> The {{mkdirs(Path)}} method needs to be overridden in
> - ViewFileSystem to avoid calling the default implementation
> - ChRootedFileSystem for proper resolution of viewfs mount table
> - FilterFileSystem to avoid calling the default implementation
> Only then the same method in the target ({{LocalFileSystem}} in this case) 
> will be called.  Hdfs does not suffer from the same flaw since it applies 
> umask in all cases, regardless of what version of {{mkdirs()}} was called.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16582) LocalFileSystem's mkdirs() does not work as expected under viewfs.

2019-09-18 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16932514#comment-16932514
 ] 

Kihwal Lee commented on HADOOP-16582:
-

Also fixed the existing mkdirs() method, which had the same checkstyle issue.

> LocalFileSystem's mkdirs() does not work as expected under viewfs.
> --
>
> Key: HADOOP-16582
> URL: https://issues.apache.org/jira/browse/HADOOP-16582
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16582.1.patch, HADOOP-16582.patch
>
>
> When {{mkdirs(Path)}} is called against {{LocalFileSystem}}, the 
> implementation in {{RawLocalFileSystem}} is called and the directory 
> permission is determined by the umask.  However, if it is under 
> {{ViewFileSystem}}, the default implementation in {{FileSystem}} is called 
> and this causes explicit {{chmod()}} to 0777.
> The {{mkdirs(Path)}} method needs to be overridden in
> - ViewFileSystem to avoid calling the default implementation
> - ChRootedFileSystem for proper resolution of viewfs mount table
> - FilterFileSystem to avoid calling the default implementation
> Only then the same method in the target ({{LocalFileSystem}} in this case) 
> will be called.  Hdfs does not suffer from the same flaw since it applies 
> umask in all cases, regardless of what version of {{mkdirs()}} was called.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16582) LocalFileSystem's mkdirs() does not work as expected under viewfs.

2019-09-18 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-16582:

Attachment: HADOOP-16582.1.patch

> LocalFileSystem's mkdirs() does not work as expected under viewfs.
> --
>
> Key: HADOOP-16582
> URL: https://issues.apache.org/jira/browse/HADOOP-16582
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16582.1.patch, HADOOP-16582.patch
>
>
> When {{mkdirs(Path)}} is called against {{LocalFileSystem}}, the 
> implementation in {{RawLocalFileSystem}} is called and the directory 
> permission is determined by the umask.  However, if it is under 
> {{ViewFileSystem}}, the default implementation in {{FileSystem}} is called 
> and this causes explicit {{chmod()}} to 0777.
> The {{mkdirs(Path)}} method needs to be overridden in
> - ViewFileSystem to avoid calling the default implementation
> - ChRootedFileSystem for proper resolution of viewfs mount table
> - FilterFileSystem to avoid calling the default implementation
> Only then the same method in the target ({{LocalFileSystem}} in this case) 
> will be called.  Hdfs does not suffer from the same flaw since it applies 
> umask in all cases, regardless of what version of {{mkdirs()}} was called.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16582) LocalFileSystem's mkdirs() does not work as expected under viewfs.

2019-09-17 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-16582:

Status: Patch Available  (was: Open)

> LocalFileSystem's mkdirs() does not work as expected under viewfs.
> --
>
> Key: HADOOP-16582
> URL: https://issues.apache.org/jira/browse/HADOOP-16582
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16582.patch
>
>
> When {{mkdirs(Path)}} is called against {{LocalFileSystem}}, the 
> implementation in {{RawLocalFileSystem}} is called and the directory 
> permission is determined by the umask.  However, if it is under 
> {{ViewFileSystem}}, the default implementation in {{FileSystem}} is called 
> and this causes explicit {{chmod()}} to 0777.
> The {{mkdirs(Path)}} method needs to be overridden in
> - ViewFileSystem to avoid calling the default implementation
> - ChRootedFileSystem for proper resolution of viewfs mount table
> - FilterFileSystem to avoid calling the default implementation
> Only then the same method in the target ({{LocalFileSystem}} in this case) 
> will be called.  Hdfs does not suffer from the same flaw since it applies 
> umask in all cases, regardless of what version of {{mkdirs()}} was called.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16582) LocalFileSystem's mkdirs() does not work as expected under viewfs.

2019-09-17 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-16582:

Attachment: HADOOP-16582.patch

> LocalFileSystem's mkdirs() does not work as expected under viewfs.
> --
>
> Key: HADOOP-16582
> URL: https://issues.apache.org/jira/browse/HADOOP-16582
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16582.patch
>
>
> When {{mkdirs(Path)}} is called against {{LocalFileSystem}}, the 
> implementation in {{RawLocalFileSystem}} is called and the directory 
> permission is determined by the umask.  However, if it is under 
> {{ViewFileSystem}}, the default implementation in {{FileSystem}} is called 
> and this causes explicit {{chmod()}} to 0777.
> The {{mkdirs(Path)}} method needs to be overridden in
> - ViewFileSystem to avoid calling the default implementation
> - ChRootedFileSystem for proper resolution of viewfs mount table
> - FilterFileSystem to avoid calling the default implementation
> Only then the same method in the target ({{LocalFileSystem}} in this case) 
> will be called.  Hdfs does not suffer from the same flaw since it applies 
> umask in all cases, regardless of what version of {{mkdirs()}} was called.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16582) LocalFileSystem's mkdirs() does not work as expected under viewfs.

2019-09-17 Thread Kihwal Lee (Jira)
Kihwal Lee created HADOOP-16582:
---

 Summary: LocalFileSystem's mkdirs() does not work as expected 
under viewfs.
 Key: HADOOP-16582
 URL: https://issues.apache.org/jira/browse/HADOOP-16582
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee


When {{mkdirs(Path)}} is called against {{LocalFileSystem}}, the implementation 
in {{RawLocalFileSystem}} is called and the directory permission is determined 
by the umask.  However, if it is under {{ViewFileSystem}}, the default 
implementation in {{FileSystem}} is called and this causes explicit {{chmod()}} 
to 0777.

The {{mkdirs(Path)}} method needs to be overridden in
- ViewFileSystem to avoid calling the default implementation
- ChRootedFileSystem for proper resolution of viewfs mount table
- FilterFileSystem to avoid calling the default implementation

Only then the same method in the target ({{LocalFileSystem}} in this case) will 
be called.  Hdfs does not suffer from the same flaw since it applies umask in 
all cases, regardless of what version of {{mkdirs()}} was called.




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16582) LocalFileSystem's mkdirs() does not work as expected under viewfs.

2019-09-17 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee reassigned HADOOP-16582:
---

Assignee: Kihwal Lee

> LocalFileSystem's mkdirs() does not work as expected under viewfs.
> --
>
> Key: HADOOP-16582
> URL: https://issues.apache.org/jira/browse/HADOOP-16582
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
>
> When {{mkdirs(Path)}} is called against {{LocalFileSystem}}, the 
> implementation in {{RawLocalFileSystem}} is called and the directory 
> permission is determined by the umask.  However, if it is under 
> {{ViewFileSystem}}, the default implementation in {{FileSystem}} is called 
> and this causes explicit {{chmod()}} to 0777.
> The {{mkdirs(Path)}} method needs to be overridden in
> - ViewFileSystem to avoid calling the default implementation
> - ChRootedFileSystem for proper resolution of viewfs mount table
> - FilterFileSystem to avoid calling the default implementation
> Only then the same method in the target ({{LocalFileSystem}} in this case) 
> will be called.  Hdfs does not suffer from the same flaw since it applies 
> umask in all cases, regardless of what version of {{mkdirs()}} was called.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2019-08-22 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16913315#comment-16913315
 ] 

Kihwal Lee commented on HADOOP-16524:
-

If {{reload()}} throws an exception, {{lastLoaded}} is not modified, triggering 
immediate retry.
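A rough sketch of that pattern, assuming Jetty 9.4's SslContextFactory#reload(Consumer) API. The class, field, and method names here are illustrative and not the attached patch:

{code:java}
import java.io.File;
import org.eclipse.jetty.util.ssl.SslContextFactory;

// Illustrative keystore-reload check: when the keystore file is newer than the
// last successful load, ask Jetty to reload it in place. lastLoaded is only
// advanced on success, so a failed reload is retried on the next invocation.
public class KeystoreReloader {
  private final File keystore;
  private final SslContextFactory sslContextFactory;
  private volatile long lastLoaded;

  public KeystoreReloader(File keystore, SslContextFactory factory) {
    this.keystore = keystore;
    this.sslContextFactory = factory;
    this.lastLoaded = keystore.lastModified();
  }

  public void checkAndReload() {
    long modified = keystore.lastModified();
    if (modified <= lastLoaded) {
      return; // keystore unchanged since the last successful load
    }
    try {
      sslContextFactory.reload(scf -> { }); // re-reads the keystore in place
      lastLoaded = modified;
    } catch (Exception e) {
      // leave lastLoaded untouched so the reload is retried on the next check
    }
  }
}
{code}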

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16524.patch
>
>
> Jetty 9 simplified reloading of keystore.   This allows hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2019-08-21 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16912590#comment-16912590
 ] 

Kihwal Lee edited comment on HADOOP-16524 at 8/21/19 6:37 PM:
--

This does not cover DataNode, since its front-end is netty-based. The 
HttpServer2/jetty based server is internal. Unlike HttpServer2, the netty-based 
DatanodeHttpServer still uses SSLFactory. We have internally modified 
SSLFactory to enable automatic reloading of the cert.  This will also make the 
secure mapreduce shuffle server reload its cert.  I can add it to this patch if people 
are interested. We have used it for several years in production.


was (Author: kihwal):
This does not cover DataNode, since its front-end is netty-based. The 
HttpServer2/jetty based server is internal. Unlike HttpServer2, the netty-based 
DatanodeHttpServer still uses SSLFactory. We have internally modified 
SSLFactory to enable automatic reloading of the cert.  This will also make the 
secure mapreduce shuffle server reload its cert.  I can add it to this patch if people 
are interested.

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16524.patch
>
>
> Jetty 9 simplified reloading of keystore.   This allows hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2019-08-21 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16912590#comment-16912590
 ] 

Kihwal Lee commented on HADOOP-16524:
-

This does not cover DataNode, since its front-end is netty-based. The 
HttpServer2/jetty based server is internal. Unlike HttpServer2, the netty-based 
DatanodeHttpServer still uses SSLFactory. We have internally modified 
SSLFactory to enable automatic reloading of the cert.  This will also make the 
secure mapreduce shuffle server reload its cert.  I can add it to this patch if people 
are interested.

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16524.patch
>
>
> Jetty 9 simplified reloading of keystore.   This allows hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2019-08-21 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-16524:

Assignee: Kihwal Lee
  Status: Patch Available  (was: Open)

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16524.patch
>
>
> Jetty 9 simplified reloading of keystore.   This allows hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2019-08-21 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-16524:

Description: Jetty 9 simplified reloading of keystore.   This allows hadoop 
daemon's SSL cert to be updated in place without having to restart the service. 
 (was: Jetty 9 simplified reloading of keystore. )

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16524.patch
>
>
> Jetty 9 simplified reloading of keystore.   This allows hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2019-08-21 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-16524:

Attachment: HADOOP-16524.patch

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16524.patch
>
>
> Jetty 9 simplified reloading of keystore.   This allows hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2019-08-21 Thread Kihwal Lee (Jira)
Kihwal Lee created HADOOP-16524:
---

 Summary: Automatic keystore reloading for HttpServer2
 Key: HADOOP-16524
 URL: https://issues.apache.org/jira/browse/HADOOP-16524
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Kihwal Lee


Jetty 9 simplified reloading of keystore. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16517) Allow optional mutual TLS in HttpServer2

2019-08-21 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16912585#comment-16912585
 ] 

Kihwal Lee commented on HADOOP-16517:
-

Added support for YARN. Tested on a small cluster.

> Allow optional mutual TLS in HttpServer2
> 
>
> Key: HADOOP-16517
> URL: https://issues.apache.org/jira/browse/HADOOP-16517
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16517.1.patch, HADOOP-16517.patch
>
>
> Currently the webservice can enforce mTLS by setting 
> "dfs.client.https.need-auth" on the server side. (The config name is 
> misleading, as it is actually server-side config. It has been deprecated from 
> the client config)  A hadoop client can talk to mTLS enforced web service by 
> setting "hadoop.ssl.require.client.cert" with proper ssl config.
> We have seen a use case where mTLS needs to be enabled optionally for only 
> those clients who supply their cert. In a mixed environment like this, 
> individual services may still enforce mTLS for a subset of endpoints by 
> checking the existence of x509 cert in the request.
>  
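For the last point, a minimal sketch of how an individual endpoint can require a client certificate when the listener itself only makes it optional, using the standard servlet request attribute. The filter name and the response text are illustrative, not part of the attached patch:

{code:java}
import java.io.IOException;
import java.security.cert.X509Certificate;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Illustrative filter: reject requests to a protected endpoint unless the TLS
// handshake presented a client certificate. The certificate chain, if any, is
// exposed through the standard servlet attribute used below.
public class RequireClientCertFilter implements Filter {

  @Override
  public void init(FilterConfig conf) {}

  @Override
  public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
      throws IOException, ServletException {
    X509Certificate[] certs =
        (X509Certificate[]) req.getAttribute("javax.servlet.request.X509Certificate");
    if (certs == null || certs.length == 0) {
      ((HttpServletResponse) resp).sendError(
          HttpServletResponse.SC_FORBIDDEN, "client certificate required");
      return;
    }
    chain.doFilter(req, resp);
  }

  @Override
  public void destroy() {}
}
{code}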



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16517) Allow optional mutual TLS in HttpServer2

2019-08-21 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-16517:

Attachment: HADOOP-16517.1.patch

> Allow optional mutual TLS in HttpServer2
> 
>
> Key: HADOOP-16517
> URL: https://issues.apache.org/jira/browse/HADOOP-16517
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16517.1.patch, HADOOP-16517.patch
>
>
> Currently the webservice can enforce mTLS by setting 
> "dfs.client.https.need-auth" on the server side. (The config name is 
> misleading, as it is actually server-side config. It has been deprecated from 
> the client config)  A hadoop client can talk to mTLS enforced web service by 
> setting "hadoop.ssl.require.client.cert" with proper ssl config.
> We have seen a use case where mTLS needs to be enabled optionally for only 
> those clients who supply their cert. In a mixed environment like this, 
> individual services may still enforce mTLS for a subset of endpoints by 
> checking the existence of x509 cert in the request.
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16517) Allow optional mutual TLS in HttpServer2

2019-08-15 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908445#comment-16908445
 ] 

Kihwal Lee commented on HADOOP-16517:
-

YARN's WebAppUtils#loadSslConfiguration() does not support this, so it will need 
to be modified as well.

> Allow optional mutual TLS in HttpServer2
> 
>
> Key: HADOOP-16517
> URL: https://issues.apache.org/jira/browse/HADOOP-16517
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16517.patch
>
>
> Currently the webservice can enforce mTLS by setting 
> "dfs.client.https.need-auth" on the server side. (The config name is 
> misleading, as it is actually server-side config. It has been deprecated from 
> the client config)  A hadoop client can talk to mTLS enforced web service by 
> setting "hadoop.ssl.require.client.cert" with proper ssl config.
> We have seen a use case where mTLS needs to be enabled optionally for only 
> those clients who supply their cert. In a mixed environment like this, 
> individual services may still enforce mTLS for a subset of endpoints by 
> checking the existence of x509 cert in the request.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16517) Allow optional mutual TLS in HttpServer2

2019-08-15 Thread Kihwal Lee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-16517:

Status: Patch Available  (was: Open)

The patch has no unit test.

> Allow optional mutual TLS in HttpServer2
> 
>
> Key: HADOOP-16517
> URL: https://issues.apache.org/jira/browse/HADOOP-16517
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16517.patch
>
>
> Currently the webservice can enforce mTLS by setting 
> "dfs.client.https.need-auth" on the server side. (The config name is 
> misleading, as it is actually server-side config. It has been deprecated from 
> the client config)  A hadoop client can talk to mTLS enforced web service by 
> setting "hadoop.ssl.require.client.cert" with proper ssl config.
> We have seen a use case where mTLS needs to be enabled optionally for only 
> those clients who supply their cert. In a mixed environment like this, 
> individual services may still enforce mTLS for a subset of endpoints by 
> checking the existence of x509 cert in the request.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16517) Allow optional mutual TLS in HttpServer2

2019-08-15 Thread Kihwal Lee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-16517:

Attachment: HADOOP-16517.patch

> Allow optional mutual TLS in HttpServer2
> 
>
> Key: HADOOP-16517
> URL: https://issues.apache.org/jira/browse/HADOOP-16517
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16517.patch
>
>
> Currently the web service can enforce mTLS by setting 
> "dfs.client.https.need-auth" on the server side. (The config name is 
> misleading, as it is actually a server-side config; it has been deprecated 
> from the client config.)  A Hadoop client can talk to an mTLS-enforced web 
> service by setting "hadoop.ssl.require.client.cert" with the proper SSL config.
> We have seen a use case where mTLS needs to be enabled optionally, only for 
> those clients who supply their cert. In a mixed environment like this, 
> individual services may still enforce mTLS for a subset of endpoints by 
> checking for the existence of an x509 cert in the request.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16517) Allow optional mutual TLS in HttpServer2

2019-08-15 Thread Kihwal Lee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee reassigned HADOOP-16517:
---

Assignee: Kihwal Lee

> Allow optional mutual TLS in HttpServer2
> 
>
> Key: HADOOP-16517
> URL: https://issues.apache.org/jira/browse/HADOOP-16517
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
>
> Currently the web service can enforce mTLS by setting 
> "dfs.client.https.need-auth" on the server side. (The config name is 
> misleading, as it is actually a server-side config; it has been deprecated 
> from the client config.)  A Hadoop client can talk to an mTLS-enforced web 
> service by setting "hadoop.ssl.require.client.cert" with the proper SSL config.
> We have seen a use case where mTLS needs to be enabled optionally, only for 
> those clients who supply their cert. In a mixed environment like this, 
> individual services may still enforce mTLS for a subset of endpoints by 
> checking for the existence of an x509 cert in the request.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16517) Allow optional mutual TLS in HttpServer2

2019-08-15 Thread Kihwal Lee (JIRA)
Kihwal Lee created HADOOP-16517:
---

 Summary: Allow optional mutual TLS in HttpServer2
 Key: HADOOP-16517
 URL: https://issues.apache.org/jira/browse/HADOOP-16517
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Kihwal Lee


Currently the web service can enforce mTLS by setting 
"dfs.client.https.need-auth" on the server side. (The config name is 
misleading, as it is actually a server-side config; it has been deprecated from 
the client config.)  A Hadoop client can talk to an mTLS-enforced web service by 
setting "hadoop.ssl.require.client.cert" with the proper SSL config.

We have seen a use case where mTLS needs to be enabled optionally, only for those 
clients who supply their cert. In a mixed environment like this, individual 
services may still enforce mTLS for a subset of endpoints by checking for the 
existence of an x509 cert in the request.
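
For illustration, a minimal sketch of the per-endpoint check described above. The 
filter class and error handling are hypothetical; the only thing taken from the 
Servlet spec is the "javax.servlet.request.X509Certificate" request attribute, 
which carries the client certificate chain when one was presented:

{code:java}
import java.io.IOException;
import java.security.cert.X509Certificate;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class RequireClientCertFilter implements Filter {
  @Override
  public void init(FilterConfig cfg) {
  }

  @Override
  public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
      throws IOException, ServletException {
    X509Certificate[] certs =
        (X509Certificate[]) req.getAttribute("javax.servlet.request.X509Certificate");
    if (certs == null || certs.length == 0) {
      // No client cert was presented; only endpoints mapped to this filter enforce mTLS.
      ((HttpServletResponse) resp).sendError(HttpServletResponse.SC_FORBIDDEN,
          "client certificate required");
      return;
    }
    chain.doFilter(req, resp);
  }

  @Override
  public void destroy() {
  }
}
{code}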

 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16323) https everywhere in Maven settings

2019-05-22 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16845912#comment-16845912
 ] 

Kihwal Lee commented on HADOOP-16323:
-

We should do this to all active branches up to 2.8 and perhaps 2.7.

> https everywhere in Maven settings
> --
>
> Key: HADOOP-16323
> URL: https://issues.apache.org/jira/browse/HADOOP-16323
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Minor
> Attachments: HADOOP-16323.001.patch
>
>
> We should use https everywhere.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14524) Make CryptoCodec Closeable so it can be cleaned up proactively

2019-05-10 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16837429#comment-16837429
 ] 

Kihwal Lee commented on HADOOP-14524:
-

Cherry-picked to 2.8. The new unit test passed.

> Make CryptoCodec Closeable so it can be cleaned up proactively
> --
>
> Key: HADOOP-14524
> URL: https://issues.apache.org/jira/browse/HADOOP-14524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.6
>
> Attachments: HADOOP-14524.01.patch, HADOOP-14524.branch-2.01.patch
>
>
> See HADOOP-14523 for motivation. Credit to [~mi...@cloudera.com] for 
> reporting initially there.
> Basically, the {{CryptoCodec}} class is not a closeable, but the 
> {{OpensslAesCtrCryptoCodec}} implementation of it contains a closeable member 
> (the Random object). Currently it is left for {{finalize()}} to clean up; 
> this depends on when a full GC is run, and it creates problems if 
> {{OpensslAesCtrCryptoCodec}} is used with {{OsSecureRandom}}, which could let 
> the OS run out of FDs on {{/dev/urandom}} if too many codecs are created.
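
For illustration, a minimal sketch of the intended usage once {{CryptoCodec}} 
implements Closeable; the method and configuration here are made up, but the point 
is that try-with-resources releases the codec's SecureRandom resources 
deterministically instead of waiting for finalize():

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.CipherSuite;
import org.apache.hadoop.crypto.CryptoCodec;

public class CodecCleanupSketch {
  static void useCodec(Configuration conf) throws Exception {
    // Assumes CryptoCodec implements Closeable, as proposed in this issue.
    try (CryptoCodec codec = CryptoCodec.getInstance(conf, CipherSuite.AES_CTR_NOPADDING)) {
      // ... create encryptors/decryptors and do the crypto work here ...
    } // close() runs here, so /dev/urandom FDs are not held until a full GC
  }
}
{code}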



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14524) Make CryptoCodec Closeable so it can be cleaned up proactively

2019-05-10 Thread Kihwal Lee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-14524:

Fix Version/s: 2.8.6

> Make CryptoCodec Closeable so it can be cleaned up proactively
> --
>
> Key: HADOOP-14524
> URL: https://issues.apache.org/jira/browse/HADOOP-14524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.6
>
> Attachments: HADOOP-14524.01.patch, HADOOP-14524.branch-2.01.patch
>
>
> See HADOOP-14523 for motivation. Credit to [~mi...@cloudera.com] for 
> reporting initially there.
> Basically, the {{CryptoCodec}} class is not a closeable, but the 
> {{OpensslAesCtrCryptoCodec}} implementation of it contains a closeable member 
> (the Random object). Currently it is left for {{finalize()}} to clean up; 
> this depends on when a full GC is run, and it creates problems if 
> {{OpensslAesCtrCryptoCodec}} is used with {{OsSecureRandom}}, which could let 
> the OS run out of FDs on {{/dev/urandom}} if too many codecs are created.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16283) Error in reading Kerberos principals from the Keytab file

2019-05-01 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831001#comment-16831001
 ] 

Kihwal Lee commented on HADOOP-16283:
-

Thanks for the analysis.  It looks like branch-3.x and trunk are at kerby 1.0.1 
and we will need to move to 1.1.2 when it is released.

> Error in reading Kerberos principals from the Keytab file
> -
>
> Key: HADOOP-16283
> URL: https://issues.apache.org/jira/browse/HADOOP-16283
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Farhan Khan
>Priority: Major
>
> The error occurs while launching the NameNode daemon when Kerberos is used 
> for authentication. While reading the SPNEGO principals (HTTP/.*) from the keytab 
> file to start the Jetty server, KerberosUtil throws an error:
> {code:java}
> javax.servlet.ServletException: java.io.IOException: Unexpected octets len: 
> 16716
>     at 
> org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.init(KerberosAuthenticationHandler.java:188)
>     at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeAuthHandler(AuthenticationFilter.java:194)
>     at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:180)
>     at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:139)
>     at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:873)
>     at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:349)
>     at 
> org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1406)
>     at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1368)
>     at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:778)
>     at 
> org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:262)
>     at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:522)
>     at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>     at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
>     at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:113)
>     at 
> org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
>     at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>     at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
>     at org.eclipse.jetty.server.Server.start(Server.java:427)
>     at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
>     at 
> org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
>     at org.eclipse.jetty.server.Server.doStart(Server.java:394)
>     at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>     at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1140)
>     at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:177)
>     at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:872)
>     at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>     at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:940)
>     at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:913)
>     at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1646)
>     at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1713)
> Caused by: java.io.IOException: Unexpected octets len: 16716
>     at 
> org.apache.kerby.kerberos.kerb.KrbInputStream.readCountedOctets(KrbInputStream.java:72)
>     at 
> org.apache.kerby.kerberos.kerb.KrbInputStream.readKey(KrbInputStream.java:48)
>     at 
> org.apache.kerby.kerberos.kerb.keytab.KeytabEntry.load(KeytabEntry.java:55)
>     at org.apache.kerby.kerberos.kerb.keytab.Keytab.readEntry(Keytab.java:203)
>     at 
> org.apache.kerby.kerberos.kerb.keytab.Keytab.readEntries(Keytab.java:189)
>     at org.apache.kerby.kerberos.kerb.keytab.Keytab.doLoad(Keytab.java:161)
>     at org.apache.kerby.kerberos.kerb.keytab.Keytab.load(Keytab.java:155)
>     at org.apache.kerby.kerberos.kerb.keytab.Keytab.load(Keytab.java:143)
>     at org.apache.kerby.kerberos.kerb.keytab.Keytab.loadKeytab(Keytab.java:55)
>     at 
> org.apache.hadoop.security.authentication.util.KerberosUtil.getPrincipalNames(KerberosUtil.java:225)
>     at 
> org.apache.hadoop.security.authentication.util.KerberosUtil.getPrincipalNames(KerberosUtil.java:244)
>     at 
> org.apache.hadoop.security.authentica

[jira] [Commented] (HADOOP-11572) s3a delete() operation fails during a concurrent delete of child entries

2019-04-10 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814576#comment-16814576
 ] 

Kihwal Lee commented on HADOOP-11572:
-

Cherry-picked to 2.8 to get more information on  occasional delete failures.

> s3a delete() operation fails during a concurrent delete of child entries
> 
>
> Key: HADOOP-11572
> URL: https://issues.apache.org/jira/browse/HADOOP-11572
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.6
>
> Attachments: HADOOP-11572-001.patch, HADOOP-11572-branch-2-002.patch, 
> HADOOP-11572-branch-2-003.patch
>
>
> Reviewing the code, s3a has the problem raised in HADOOP-6688: deletion of a 
> child entry during a recursive directory delete is propagated as an 
> exception, rather than ignored as a detail which idempotent operations should 
> just ignore.
> The exception should be caught and, if it is a file-not-found problem, logged 
> rather than propagated.
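
For illustration, a simplified sketch of the tolerant behaviour described above; 
this is not the actual S3AFileSystem code, and the class and method names are made 
up:

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TolerantDeleteSketch {
  private static final Logger LOG = LoggerFactory.getLogger(TolerantDeleteSketch.class);

  static void deleteChildren(FileSystem fs, Path dir) throws IOException {
    for (FileStatus child : fs.listStatus(dir)) {
      try {
        fs.delete(child.getPath(), true);
      } catch (FileNotFoundException e) {
        // Another client already deleted it; for an idempotent delete this counts as success.
        LOG.debug("Child {} already deleted, ignoring", child.getPath(), e);
      }
    }
  }
}
{code}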



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11572) s3a delete() operation fails during a concurrent delete of child entries

2019-04-10 Thread Kihwal Lee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-11572:

Fix Version/s: 2.8.6

> s3a delete() operation fails during a concurrent delete of child entries
> 
>
> Key: HADOOP-11572
> URL: https://issues.apache.org/jira/browse/HADOOP-11572
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.6
>
> Attachments: HADOOP-11572-001.patch, HADOOP-11572-branch-2-002.patch, 
> HADOOP-11572-branch-2-003.patch
>
>
> Reviewing the code, s3a has the problem raised in HADOOP-6688: deletion of a 
> child entry during a recursive directory delete is propagated as an 
> exception, rather than ignored as a detail which idempotent operations should 
> just ignore.
> The exception should be caught and, if it is a file-not-found problem, logged 
> rather than propagated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14930) Upgrade Jetty to 9.4 version

2019-04-02 Thread Kihwal Lee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-14930:

Target Version/s: 3.3.0

Setting 3.3.0 as target.

> Upgrade Jetty to 9.4 version
> 
>
> Key: HADOOP-14930
> URL: https://issues.apache.org/jira/browse/HADOOP-14930
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-14930.00.patch
>
>
> Currently 9.3.19.v20170502 is used.
> In hbase 2.0+, 9.4.6.v20170531 is used.
> When starting mini dfs cluster in hbase unit tests, we get the following:
> {code}
> java.lang.NoSuchMethodError: 
> org.eclipse.jetty.server.session.SessionHandler.getSessionManager()Lorg/eclipse/jetty/server/SessionManager;
>   at 
> org.apache.hadoop.http.HttpServer2.initializeWebServer(HttpServer2.java:548)
>   at org.apache.hadoop.http.HttpServer2.(HttpServer2.java:529)
>   at org.apache.hadoop.http.HttpServer2.(HttpServer2.java:119)
>   at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:415)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:157)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:887)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:949)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:928)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1637)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1277)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1046)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:921)
> {code}
> This issue is to upgrade Jetty to the 9.4 version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15909) KeyProvider class should implement Closeable

2019-01-10 Thread Kihwal Lee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-15909:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> KeyProvider class should implement Closeable
> 
>
> Key: HADOOP-15909
> URL: https://issues.apache.org/jira/browse/HADOOP-15909
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 3.0.3, 2.8.5
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15909.001.patch
>
>
> KeyProviders and the extensions have close() methods. The classes should 
> implement Closeable, which will allow try-with-resources to work and help 
> preserve the original exception, instead of relying on finally blocks and the 
> exception masking they cause.
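
For illustration, a minimal sketch of what Closeable buys; the class and method 
names are made up, and it assumes KeyProvider implements Closeable as proposed 
here:

{code:java}
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.crypto.key.KeyProviderFactory;

public class KeyProviderCloseSketch {
  static void listKeys(Configuration conf) throws Exception {
    List<KeyProvider> providers = KeyProviderFactory.getProviders(conf);
    for (KeyProvider p : providers) {
      // try-with-resources closes the provider; if getKeys() throws, a failure in
      // close() is attached as a suppressed exception instead of masking the original.
      try (KeyProvider provider = p) {
        for (String name : provider.getKeys()) {
          System.out.println(name);
        }
      }
    }
  }
}
{code}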



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15909) KeyProvider class should implement Closeable

2019-01-10 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16739699#comment-16739699
 ] 

Kihwal Lee commented on HADOOP-15909:
-

+1 for trunk. 

> KeyProvider class should implement Closeable
> 
>
> Key: HADOOP-15909
> URL: https://issues.apache.org/jira/browse/HADOOP-15909
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 3.0.3, 2.8.5
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
>Priority: Major
> Attachments: HADOOP-15909.001.patch
>
>
> KeyProviders and the extensions have close() methods. The classes should 
> implement Closeable, which will allow try-with-resources to work and help 
> preserve the original exception, instead of relying on finally blocks and the 
> exception masking they cause.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14930) Upgrade Jetty to 9.4 version

2018-12-10 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16714801#comment-16714801
 ] 

Kihwal Lee commented on HADOOP-14930:
-

To clarify, jetty-9.3.20.v20170531 addressed CVE-2017-9735, so the current 
jetty version in Hadoop 3.x is okay.

> Upgrade Jetty to 9.4 version
> 
>
> Key: HADOOP-14930
> URL: https://issues.apache.org/jira/browse/HADOOP-14930
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-14930.00.patch
>
>
> Currently 9.3.19.v20170502 is used.
> In hbase 2.0+, 9.4.6.v20170531 is used.
> When starting mini dfs cluster in hbase unit tests, we get the following:
> {code}
> java.lang.NoSuchMethodError: 
> org.eclipse.jetty.server.session.SessionHandler.getSessionManager()Lorg/eclipse/jetty/server/SessionManager;
>   at 
> org.apache.hadoop.http.HttpServer2.initializeWebServer(HttpServer2.java:548)
>   at org.apache.hadoop.http.HttpServer2.(HttpServer2.java:529)
>   at org.apache.hadoop.http.HttpServer2.(HttpServer2.java:119)
>   at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:415)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:157)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:887)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:949)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:928)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1637)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1277)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1046)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:921)
> {code}
> This issue is to upgrade Jetty to the 9.4 version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14930) Upgrade Jetty to 9.4 version

2018-12-10 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16714801#comment-16714801
 ] 

Kihwal Lee edited comment on HADOOP-14930 at 12/10/18 2:42 PM:
---

To clarify, jetty-9.3.20.v20170531 addressed CVE-2017-9735, so the current 
jetty version (9.3.24.v20180605) in Hadoop 3.x is okay.


was (Author: kihwal):
To clarify, jetty-9.3.20.v20170531 addressed CVE-2017-9735, so the current 
jetty version in Hadoop 3.x is okay.

> Upgrade Jetty to 9.4 version
> 
>
> Key: HADOOP-14930
> URL: https://issues.apache.org/jira/browse/HADOOP-14930
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-14930.00.patch
>
>
> Currently 9.3.19.v20170502 is used.
> In hbase 2.0+, 9.4.6.v20170531 is used.
> When starting mini dfs cluster in hbase unit tests, we get the following:
> {code}
> java.lang.NoSuchMethodError: 
> org.eclipse.jetty.server.session.SessionHandler.getSessionManager()Lorg/eclipse/jetty/server/SessionManager;
>   at 
> org.apache.hadoop.http.HttpServer2.initializeWebServer(HttpServer2.java:548)
>   at org.apache.hadoop.http.HttpServer2.(HttpServer2.java:529)
>   at org.apache.hadoop.http.HttpServer2.(HttpServer2.java:119)
>   at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:415)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:157)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:887)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:949)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:928)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1637)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1277)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1046)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:921)
> {code}
> This issue is to upgrade Jetty to the 9.4 version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15985) LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops

2018-12-06 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16711661#comment-16711661
 ] 

Kihwal Lee commented on HADOOP-15985:
-

[~Jack-Lee], I am asking about the content of the attached patch.

> LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops
> --
>
> Key: HADOOP-15985
> URL: https://issues.apache.org/jira/browse/HADOOP-15985
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Roman Leventov
>Priority: Minor
> Attachments: HADOOP-15985-1.patch
>
>
> In this line: 
> [https://github.com/apache/hadoop/blob/a55d6bba71c81c1c4e9d8cd11f55c78f10a548b0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LightWeightGSet.java#L391],
>  instead of checking if the platform is 32- or 64-bit, it should check if 
> Unsafe.ARRAY_OBJECT_INDEX_SCALE is 4 or 8.
> The result is that on 64-bit platforms, when Compressed Oops are on, 
> LightWeightGSet is two times denser than it is configured to be.
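
For illustration, a tiny sketch of the suggested check, assuming sun.misc.Unsafe is 
accessible at compile time; the class and method names are made up:

{code:java}
import sun.misc.Unsafe;

public class RefSizeSketch {
  // 4 => compressed oops (or a 32-bit JVM); 8 => uncompressed 64-bit references.
  // This reflects the actual per-reference footprint, unlike an os.arch / data-model check.
  static int referenceSizeBytes() {
    return Unsafe.ARRAY_OBJECT_INDEX_SCALE;
  }

  public static void main(String[] args) {
    System.out.println("bytes per object reference: " + referenceSizeBytes());
  }
}
{code}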



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15985) LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops

2018-12-06 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16711509#comment-16711509
 ] 

Kihwal Lee commented on HADOOP-15985:
-

[~Jack-Lee], what is this patch for?

> LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops
> --
>
> Key: HADOOP-15985
> URL: https://issues.apache.org/jira/browse/HADOOP-15985
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Roman Leventov
>Priority: Minor
> Attachments: HADOOP-15985-1.patch
>
>
> In this line: 
> [https://github.com/apache/hadoop/blob/a55d6bba71c81c1c4e9d8cd11f55c78f10a548b0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LightWeightGSet.java#L391],
>  instead of checking if the platform is 32- or 64-bit, it should check if 
> Unsafe.ARRAY_OBJECT_INDEX_SCALE is 4 or 8.
> The result is that on 64-bit platforms, when Compressed Oops are on, 
> LightWeightGSet is two times denser than it is configured to be.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15882) Upgrade maven-shade-plugin from 2.4.3 to 3.2.0

2018-10-25 Thread Kihwal Lee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-15882:

Target Version/s: 3.2.0, 3.0.4, 3.1.2

> Upgrade maven-shade-plugin from 2.4.3 to 3.2.0
> --
>
> Key: HADOOP-15882
> URL: https://issues.apache.org/jira/browse/HADOOP-15882
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-15882.1.patch
>
>
> While working on HADOOP-15815, we faced a shaded-client error. Please 
> see [~bharatviswa]'s comment 
> [here|https://issues.apache.org/jira/browse/HADOOP-15815?focusedCommentId=16662718&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16662718].
> MSHADE-242 and MSHADE-258 are needed to fix it. Let's upgrade 
> maven-shade-plugin to 3.1.0 or later.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15859) ZStandardDecompressor.c mistakes a class for an instance

2018-10-17 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16654074#comment-16654074
 ] 

Kihwal Lee commented on HADOOP-15859:
-

+1 We've applied the same patch internally.

> ZStandardDecompressor.c mistakes a class for an instance
> 
>
> Key: HADOOP-15859
> URL: https://issues.apache.org/jira/browse/HADOOP-15859
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Ben Lau
>Assignee: Jason Lowe
>Priority: Blocker
> Attachments: HADOOP-15859.001.patch
>
>
> As a follow up to HADOOP-15820, I was doing more testing on ZSTD compression 
> and still encountered segfaults in the JVM in HBase after that fix. 
> I took a deeper look and realized there is still another bug, which looks 
> like it's that we are actually [calling 
> setInt()|https://github.com/apache/hadoop/blob/f13e231025333ebf80b30bbdce1296cef554943b/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.c#L148]
>  on the "remaining" variable on the ZStandardDecompressor class itself 
> (instead of an instance of that class) because the Java stub for the native C 
> init() function [is marked 
> static|https://github.com/apache/hadoop/blob/a0a276162147e843a5a4e028abdca5b66f5118da/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.java#L253],
>  leading to memory corruption and a crash during GC later.
> Initially I thought we would fix this by changing the Java init() method to 
> be non-static, but it looks like the "remaining" setInt() call is actually 
> unnecessary anyway, because in ZStandardDecompressor.java's reset() we [set 
> "remaining" to 0 right after calling the JNI init() 
> call|https://github.com/apache/hadoop/blob/a0a276162147e843a5a4e028abdca5b66f5118da/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.java#L216].
>  So ZStandardDecompressor.java init() doesn't have to be changed to an 
> instance method, we can leave it as static, but remove the JNI init() call's 
> "remaining" setInt() call altogether.
> Furthermore we should probably clean up the class/instance distinction in the 
> C file because that's what led to this confusion. There are some other 
> methods where the distinction is incorrect or ambiguous, we should fix them 
> to prevent this from happening again.
> I talked to [~jlowe] who further pointed out the ZStandardCompressor also has 
> similar problems and needs to be fixed too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns

2018-10-11 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646401#comment-16646401
 ] 

Kihwal Lee commented on HADOOP-15815:
-

Many projects are moving or have already moved to 9.3.24. Unless 9.3.24 has a 
fatal flaw, it will be a better version to be on.

> Upgrade Eclipse Jetty version due to security concerns
> --
>
> Key: HADOOP-15815
> URL: https://issues.apache.org/jira/browse/HADOOP-15815
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Boris Vulikh
>Assignee: Boris Vulikh
>Priority: Major
> Attachments: HADOOP-15815.01.patch
>
>
> * 
> [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657]
>  * 
> [CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658]
>  * 
> [CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656]
>  * 
> [CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536]
> We should upgrade the dependency to version 9.3.24 or the latest, if possible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15820) ZStandardDecompressor native code sets an integer field as a long

2018-10-04 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16638863#comment-16638863
 ] 

Kihwal Lee commented on HADOOP-15820:
-

+1 looks good.

> ZStandardDecompressor native code sets an integer field as a long
> -
>
> Key: HADOOP-15820
> URL: https://issues.apache.org/jira/browse/HADOOP-15820
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Major
> Attachments: HADOOP-15820.001.patch
>
>
> Java_org_apache_hadoop_io_compress_zstd_ZStandardDecompressor_init in 
> ZStandardDecompressor.c sets the {{remaining}} field as a long when it 
> actually is an integer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns

2018-10-03 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16637009#comment-16637009
 ] 

Kihwal Lee commented on HADOOP-15815:
-

We've been internally using 9.3.24.v20180605 and not seen any issues. I think 
we can safely update it in all 3.x lines.

> Upgrade Eclipse Jetty version due to security concerns
> --
>
> Key: HADOOP-15815
> URL: https://issues.apache.org/jira/browse/HADOOP-15815
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.1.1
>Reporter: Boris Vulikh
>Priority: Major
>
> * 
> [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657]
>  * 
> [CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658]
>  * 
> [CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656]
>  * 
> [CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536]
> We should upgrade the dependency to version 9.3.24 or the latest, if possible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15614) TestGroupsCaching.testExceptionOnBackgroundRefreshHandled reliably fails

2018-07-19 Thread Kihwal Lee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-15614:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.4
   2.8.5
   2.9.2
   3.1.1
   3.2.0
   2.10.0
   Status: Resolved  (was: Patch Available)

I've committed this to trunk, branch-3.1, branch-3.0, branch-2, branch-2.9 and 
branch-2.8.

> TestGroupsCaching.testExceptionOnBackgroundRefreshHandled reliably fails
> 
>
> Key: HADOOP-15614
> URL: https://issues.apache.org/jira/browse/HADOOP-15614
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Weiwei Yang
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 2.8.5, 3.0.4
>
> Attachments: HADOOP-15614.001.patch, HADOOP-15614.002.patch
>
>
> When {{testExceptionOnBackgroundRefreshHandled}} is run individually, it 
> reliably fails. It seems like a fundamental bug in the test or groups caching.
> A similar issue was dealt with in HADOOP-13375. [~cheersyang], do you have 
> any insight into this?
> This test case was added in HADOOP-13263.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15614) TestGroupsCaching.testExceptionOnBackgroundRefreshHandled reliably fails

2018-07-19 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16549473#comment-16549473
 ] 

Kihwal Lee commented on HADOOP-15614:
-

+1 The patch looks good and the test cases are passing now. Thanks, Weiwei.

> TestGroupsCaching.testExceptionOnBackgroundRefreshHandled reliably fails
> 
>
> Key: HADOOP-15614
> URL: https://issues.apache.org/jira/browse/HADOOP-15614
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: HADOOP-15614.001.patch, HADOOP-15614.002.patch
>
>
> When {{testExceptionOnBackgroundRefreshHandled}} is run individually, it 
> reliably fails. It seems like a fundamental bug in the test or groups caching.
> A similar issue was dealt with in HADOOP-13375. [~cheersyang], do you have 
> any insight into this?
> This test case was added in HADOOP-13263.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15614) TestGroupsCaching.testExceptionOnBackgroundRefreshHandled reliably fails

2018-07-18 Thread Kihwal Lee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee reassigned HADOOP-15614:
---

Assignee: Weiwei Yang

> TestGroupsCaching.testExceptionOnBackgroundRefreshHandled reliably fails
> 
>
> Key: HADOOP-15614
> URL: https://issues.apache.org/jira/browse/HADOOP-15614
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: HADOOP-15614.001.patch
>
>
> When {{testExceptionOnBackgroundRefreshHandled}} is run individually, it 
> reliably fails. It seems like a fundamental bug in the test or groups caching.
> A similar issue was dealt with in HADOOP-13375. [~cheersyang], do you have 
> any insight into this?
> This test case was added in HADOOP-13263.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15614) TestGroupsCaching.testExceptionOnBackgroundRefreshHandled reliably fails

2018-07-17 Thread Kihwal Lee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-15614:

Description: 
When {{testExceptionOnBackgroundRefreshHandled}} is run individually, it 
reliably fails. It seems like a fundamental bug in the test or groups caching.

A similar issue was dealt with in HADOOP-13375. [~cheersyang], do you have any 
insight into this?

This test case was added in HADOOP-13263.

  was:
When {{testExceptionOnBackgroundRefreshHandled}} is run individually, it 
reliably fails. It seems like a fundamental bug in the test or groups caching.

A similar issue was dealt with in HADOOP-13375. [~cheersyang], do you have any 
insight into this?


> TestGroupsCaching.testExceptionOnBackgroundRefreshHandled reliably fails
> 
>
> Key: HADOOP-15614
> URL: https://issues.apache.org/jira/browse/HADOOP-15614
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Priority: Major
>
> When {{testExceptionOnBackgroundRefreshHandled}} is run individually, it 
> reliably fails. It seems like a fundamental bug in the test or groups caching.
> A similar issue was dealt with in HADOOP-13375. [~cheersyang], do you have 
> any insight into this?
> This test case was added in HADOOP-13263.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15614) TestGroupsCaching.testExceptionOnBackgroundRefreshHandled reliably fails

2018-07-17 Thread Kihwal Lee (JIRA)
Kihwal Lee created HADOOP-15614:
---

 Summary: TestGroupsCaching.testExceptionOnBackgroundRefreshHandled 
reliably fails
 Key: HADOOP-15614
 URL: https://issues.apache.org/jira/browse/HADOOP-15614
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee


When {{testExceptionOnBackgroundRefreshHandled}} is run individually, it 
reliably fails. It seems like a fundamental bug in the test or groups caching.

A similar issue was dealt with in HADOOP-13375. [~cheersyang], do you have any 
insight into this?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15359) IPC client hang in kerberized cluster due to JDK deadlock

2018-07-03 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16531458#comment-16531458
 ] 

Kihwal Lee commented on HADOOP-15359:
-

Just curious. Where was the main thread at? Was it tearing down by any chance?

> IPC client hang in kerberized cluster due to JDK deadlock
> -
>
> Key: HADOOP-15359
> URL: https://issues.apache.org/jira/browse/HADOOP-15359
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.6.0, 2.8.0, 3.0.0
>Reporter: Xiao Chen
>Priority: Major
> Attachments: 1.jstack, 2.jstack
>
>
> In recent internal testing, we found a DFS client hang. Further inspection of 
> the jstack shows the following:
> {noformat}
> "IPC Client (552936351) connection toHOSTNAME:8020 from PRINCIPAL" #7468 
> daemon prio=5 os_prio=0 tid=0x7f6bb306c000 nid=0x1c76e waiting for 
> monitor entry [0x7f6bc2bd6000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at java.security.Provider.getService(Provider.java:1035)
> - waiting to lock <0x80277040> (a sun.security.provider.Sun)
> at 
> sun.security.jca.ProviderList$ServiceList.tryGet(ProviderList.java:444)
> at 
> sun.security.jca.ProviderList$ServiceList.access$200(ProviderList.java:376)
> at 
> sun.security.jca.ProviderList$ServiceList$1.hasNext(ProviderList.java:486)
> at javax.crypto.Cipher.getInstance(Cipher.java:513)
> at 
> sun.security.krb5.internal.crypto.dk.Des3DkCrypto.getCipher(Des3DkCrypto.java:202)
> at sun.security.krb5.internal.crypto.dk.DkCrypto.dr(DkCrypto.java:484)
> at sun.security.krb5.internal.crypto.dk.DkCrypto.dk(DkCrypto.java:447)
> at 
> sun.security.krb5.internal.crypto.dk.DkCrypto.calculateChecksum(DkCrypto.java:413)
> at 
> sun.security.krb5.internal.crypto.Des3.calculateChecksum(Des3.java:59)
> at 
> sun.security.jgss.krb5.CipherHelper.calculateChecksum(CipherHelper.java:231)
> at 
> sun.security.jgss.krb5.MessageToken.getChecksum(MessageToken.java:466)
> at 
> sun.security.jgss.krb5.MessageToken.verifySignAndSeqNumber(MessageToken.java:374)
> at 
> sun.security.jgss.krb5.WrapToken.getDataFromBuffer(WrapToken.java:284)
> at sun.security.jgss.krb5.WrapToken.getData(WrapToken.java:209)
> at sun.security.jgss.krb5.WrapToken.getData(WrapToken.java:182)
> at sun.security.jgss.krb5.Krb5Context.unwrap(Krb5Context.java:1053)
> at sun.security.jgss.GSSContextImpl.unwrap(GSSContextImpl.java:403)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Base.unwrap(GssKrb5Base.java:77)
> at 
> org.apache.hadoop.security.SaslRpcClient$WrappedInputStream.readNextRpcPacket(SaslRpcClient.java:617)
> at 
> org.apache.hadoop.security.SaslRpcClient$WrappedInputStream.read(SaslRpcClient.java:583)
> - locked <0x83444878> (a java.nio.HeapByteBuffer)
> at java.io.FilterInputStream.read(FilterInputStream.java:133)
> at 
> org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:553)
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
> - locked <0x834448c0> (a java.io.BufferedInputStream)
> at java.io.DataInputStream.readInt(DataInputStream.java:387)
> at 
> org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1113)
> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1006)
> {noformat}
> and at the end of jstack:
> {noformat}
> Found one Java-level deadlock:
> =
> "IPC Parameter Sending Thread #29":
>   waiting to lock monitor 0x17ff49f8 (object 0x80277040, a 
> sun.security.provider.Sun),
>   which is held by UNKNOWN_owner_addr=0x50607000
> Java stack information for the threads listed above:
> ===
> "IPC Parameter Sending Thread #29":
> at java.security.Provider.getService(Provider.java:1035)
> - waiting to lock <0x80277040> (a sun.security.provider.Sun)
> at 
> sun.security.jca.ProviderList$ServiceList.tryGet(ProviderList.java:437)
> at 
> sun.security.jca.ProviderList$ServiceList.access$200(ProviderList.java:376)
> at 
> sun.security.jca.ProviderList$ServiceList$1.hasNext(ProviderList.java:486)
> at javax.crypto.SecretKeyFactory.nextSpi(SecretKeyFactory.java:293)
> - locked <0x834386b8> (a java.lang.Object)
> at javax.crypto.SecretKeyFactory.(SecretKeyFactory.java:121)
> at 
> javax.crypto.SecretKeyFactory.getInstance(SecretKeyFactory.java:160)
> at 
> sun.security.krb5.

[jira] [Updated] (HADOOP-15450) Avoid fsync storm triggered by DiskChecker and handle disk full situation

2018-05-22 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-15450:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.5
   2.9.2
   3.1.1
   3.2.0
   2.10.0
   3.1.0
   Status: Resolved  (was: Patch Available)

Thanks for addressing the issue, Arpit. I've committed this to trunk, 
branch-3.1, branch-3.0, branch-2, branch-2.9 and branch-2.8.

> Avoid fsync storm triggered by DiskChecker and handle disk full situation
> -
>
> Key: HADOOP-15450
> URL: https://issues.apache.org/jira/browse/HADOOP-15450
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Arpit Agarwal
>Priority: Blocker
> Fix For: 3.1.0, 2.10.0, 3.2.0, 3.1.1, 2.9.2, 2.8.5
>
> Attachments: HADOOP-15450.01.patch, HADOOP-15450.02.patch
>
>
> Fix disk checker issues reported by [~kihwal] in HADOOP-13738
> There are non-HDFS users of DiskChecker, who use it proactively, not just on 
> failures. This was fine before, but now it incurs heavy I/O due to the 
> introduction of fsync() in the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15450) Avoid fsync storm triggered by DiskChecker and handle disk full situation

2018-05-22 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16484223#comment-16484223
 ] 

Kihwal Lee commented on HADOOP-15450:
-

+1 The patch looks good.

> Avoid fsync storm triggered by DiskChecker and handle disk full situation
> -
>
> Key: HADOOP-15450
> URL: https://issues.apache.org/jira/browse/HADOOP-15450
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Arpit Agarwal
>Priority: Blocker
> Attachments: HADOOP-15450.01.patch, HADOOP-15450.02.patch
>
>
> Fix disk checker issues reported by [~kihwal] in HADOOP-13738
> There are non-HDFS users of DiskChecker, who use it proactively, not just on 
> failures. This was fine before, but now it incurs heavy I/O due to the 
> introduction of fsync() in the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15450) Avoid fsync storm triggered by DiskChecker and handle disk full situation

2018-05-22 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16484048#comment-16484048
 ] 

Kihwal Lee commented on HADOOP-15450:
-

Yes it is a blocker. The performance regression was introduced in HADOOP-13738. 
For the 2.8.4 release, it was reverted in the release branch.  If the 3.0.x 
release schedule does not allow waiting for this to complete, reverting it in 
the release branch (i.e. branch-3.0.3) is an option. 

> Avoid fsync storm triggered by DiskChecker and handle disk full situation
> -
>
> Key: HADOOP-15450
> URL: https://issues.apache.org/jira/browse/HADOOP-15450
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Arpit Agarwal
>Priority: Blocker
> Attachments: HADOOP-15450.01.patch, HADOOP-15450.02.patch
>
>
> Fix disk checker issues reported by [~kihwal] in HADOOP-13738
> There are non-HDFS users of DiskChecker, who use it proactively, not just on 
> failures. This was fine before, but now it incurs heavy I/O due to the 
> introduction of fsync() in the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15450) Avoid fsync storm triggered by DiskChecker and handle disk full situation

2018-05-07 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-15450:

Fix Version/s: 2.8.4

> Avoid fsync storm triggered by DiskChecker and handle disk full situation
> -
>
> Key: HADOOP-15450
> URL: https://issues.apache.org/jira/browse/HADOOP-15450
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Arpit Agarwal
>Priority: Blocker
> Fix For: 2.8.4
>
>
> Fix disk checker issues reported by [~kihwal] in HADOOP-13738:
> # When space is low, the OS returns ENOSPC. Instead of simply stopping writes, 
> the drive is marked bad and replication happens. This makes the cluster-wide 
> space problem worse. If the number of "failed" drives exceeds the DFIP limit, 
> the datanode shuts down.
> # There are non-HDFS users of DiskChecker, who use it proactively, not just 
> on failures. This was fine before, but now it incurs heavy I/O due to the 
> introduction of fsync() in the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15450) Avoid fsync storm triggered by DiskChecker and handle disk full situation

2018-05-07 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-15450:

Priority: Blocker  (was: Major)

> Avoid fsync storm triggered by DiskChecker and handle disk full situation
> -
>
> Key: HADOOP-15450
> URL: https://issues.apache.org/jira/browse/HADOOP-15450
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Arpit Agarwal
>Priority: Blocker
>
> Fix disk checker issues reported by [~kihwal] in HADOOP-13738:
> # When space is low, the OS returns ENOSPC. Instead of simply stopping writes, 
> the drive is marked bad and replication happens. This makes the cluster-wide 
> space problem worse. If the number of "failed" drives exceeds the DFIP limit, 
> the datanode shuts down.
> # There are non-HDFS users of DiskChecker, who use it proactively, not just 
> on failures. This was fine before, but now it incurs heavy I/O due to the 
> introduction of fsync() in the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13738) DiskChecker should perform some disk IO

2018-05-04 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464212#comment-16464212
 ] 

Kihwal Lee commented on HADOOP-13738:
-

We are seeing issues in 2.8 with this change.
- When space is low, the OS returns ENOSPC. Instead of simply stopping writes, the 
drive is marked bad and replication happens. This makes the cluster-wide space 
problem worse. If the number of "failed" drives exceeds the DFIP limit, the 
datanode shuts down.
- There are non-HDFS users of DiskChecker, who use it proactively, not just on 
failures. This was fine before, but now it incurs heavy I/O due to the 
introduction of fsync() in the code.

> DiskChecker should perform some disk IO
> ---
>
> Key: HADOOP-13738
> URL: https://issues.apache.org/jira/browse/HADOOP-13738
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha2, 2.8.4
>
> Attachments: HADOOP-13738-branch-2.8-06.patch, HADOOP-13738.01.patch, 
> HADOOP-13738.02.patch, HADOOP-13738.03.patch, HADOOP-13738.04.patch, 
> HADOOP-13738.05.patch
>
>
> DiskChecker can fail to detect total disk/controller failures indefinitely. 
> We have seen this in real clusters. DiskChecker performs simple 
> permissions-based checks on directories which do not guarantee that any disk 
> IO will be attempted.
> A simple improvement is to write some data and flush it to the disk.
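
For illustration, a minimal sketch of the "write some data and flush it" idea; the 
probe file name and size are made up, and this is not the actual DiskChecker code:

{code:java}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class DiskIoCheckSketch {
  static void checkDirWithDiskIo(File dir) throws IOException {
    File probe = new File(dir, ".disk-check-" + System.nanoTime());
    try (FileOutputStream out = new FileOutputStream(probe)) {
      out.write(new byte[512]);   // force real data through the file system
      out.getFD().sync();         // fsync so a dead disk/controller fails here, not later
    } finally {
      if (probe.exists() && !probe.delete()) {
        throw new IOException("Could not delete disk check probe " + probe);
      }
    }
  }
}
{code}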



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


