[jira] [Commented] (HADOOP-11656) Classpath isolation for downstream clients

2015-03-04 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14347298#comment-14347298
 ] 

Arun C Murthy commented on HADOOP-11656:


Agree 1000% with [~jlowe].

Starting from the thesis that we should break compatibility is less than ideal. We 
should certainly strive to add features in a compatible manner; this lets all 
existing users consume the feature without having to make a *should I use 
this or not* choice.

 Classpath isolation for downstream clients
 --

 Key: HADOOP-11656
 URL: https://issues.apache.org/jira/browse/HADOOP-11656
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Sean Busbey
Assignee: Sean Busbey
  Labels: classloading, classpath, dependencies

 Currently, Hadoop exposes downstream clients to a variety of third party 
 libraries. As our code base grows and matures we increase the set of 
 libraries we rely on. At the same time, as our user base grows we increase 
 the likelihood that some downstream project will run into a conflict while 
 attempting to use a different version of some library we depend on. This has 
 already happened several times with, e.g., Guava for HBase, Accumulo, and Spark 
 (and I'm sure others).
 While YARN-286 and MAPREDUCE-1700 provided an initial effort, they default to 
 off and they don't do anything to help dependency conflicts on the driver 
 side or for folks talking to HDFS directly. This should serve as an umbrella 
 for changes needed to do things thoroughly on the next major version.
 We should ensure that downstream clients
 1) can depend on a client artifact for each of HDFS, YARN, and MapReduce that 
 doesn't pull in any third party dependencies
 2) only see our public API classes (or as close to this as feasible) when 
 executing user provided code, whether client side in a launcher/driver or on 
 the cluster in a container or within MR.
 This provides us with a double benefit: users get less grief when they want 
 to run substantially ahead or behind the versions we need and the project is 
 freer to change our own dependency versions because they'll no longer be in 
 our compatibility promises.
 Project specific task jiras to follow after I get some justifying use cases 
 written in the comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10530) Make hadoop trunk build on Java7+ only

2014-11-30 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14229293#comment-14229293
 ] 

Arun C Murthy commented on HADOOP-10530:


[~ste...@apache.org] - Let's get this in for 2.7 asap? Tx!

 Make hadoop trunk build on Java7+ only
 --

 Key: HADOOP-10530
 URL: https://issues.apache.org/jira/browse/HADOOP-10530
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0, 2.6.0
 Environment: Java 1.7+
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Blocker
 Attachments: HADOOP-10530-001.patch, HADOOP-10530-002.patch, 
 HADOOP-10530-003.patch, Screen Shot 2014-09-20 at 18.09.05.png


 As discussed on hadoop-common, hadoop 3 is envisaged to be Java7+ *only* 
 -this JIRA covers switching the build for this
 # maven enforcer plugin to set Java version = {{[1.7)}}
 # compiler to set language to java 1.7



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11090) [Umbrella] Support Java 8 in Hadoop

2014-11-30 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-11090:
---
Summary: [Umbrella] Support Java 8 in Hadoop  (was: [Umbrella] Issues with 
Java 8 in Hadoop)

 [Umbrella] Support Java 8 in Hadoop
 ---

 Key: HADOOP-11090
 URL: https://issues.apache.org/jira/browse/HADOOP-11090
 Project: Hadoop Common
  Issue Type: Task
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam

 Java 8 is coming quickly to various clusters. Making sure Hadoop works 
 seamlessly with Java 8 is important for the Apache community.
 This JIRA is to track the issues/experiences encountered during the Java 8 
 migration. If you find a potential bug, please create a separate JIRA, either 
 as a sub-task or linked to this JIRA.
 If you find a useful Hadoop or JVM configuration tuning, you can also create a 
 JIRA, or add a comment here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11090) [Umbrella] Support Java 8 in Hadoop

2014-11-30 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-11090:
---
Target Version/s: 2.7.0

 [Umbrella] Support Java 8 in Hadoop
 ---

 Key: HADOOP-11090
 URL: https://issues.apache.org/jira/browse/HADOOP-11090
 Project: Hadoop Common
  Issue Type: Task
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam

 Java 8 is coming quickly to various clusters. Making sure Hadoop works 
 seamlessly with Java 8 is important for the Apache community.
 This JIRA is to track the issues/experiences encountered during the Java 8 
 migration. If you find a potential bug, please create a separate JIRA, either 
 as a sub-task or linked to this JIRA.
 If you find a useful Hadoop or JVM configuration tuning, you can also create a 
 JIRA, or add a comment here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11090) [Umbrella] Support Java 8 in Hadoop

2014-11-30 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-11090:
---
Issue Type: New Feature  (was: Task)

 [Umbrella] Support Java 8 in Hadoop
 ---

 Key: HADOOP-11090
 URL: https://issues.apache.org/jira/browse/HADOOP-11090
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam

 Java 8 is coming quickly to various clusters. Making sure Hadoop works 
 seamlessly with Java 8 is important for the Apache community.
 This JIRA is to track the issues/experiences encountered during the Java 8 
 migration. If you find a potential bug, please create a separate JIRA, either 
 as a sub-task or linked to this JIRA.
 If you find a useful Hadoop or JVM configuration tuning, you can also create a 
 JIRA, or add a comment here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10726) Shell.ExitCodeException to implement getExitCode()

2014-11-30 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-10726:
---
Fix Version/s: (was: 2.6.0)
   2.7.0

 Shell.ExitCodeException to implement getExitCode()
 --

 Key: HADOOP-10726
 URL: https://issues.apache.org/jira/browse/HADOOP-10726
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor
 Fix For: 2.7.0


 Once HADOOP-9626 adds an interface to get the exit code of an exception, this 
 should be implemented by {{Shell.ExitCodeException}} to serve up its exit 
 code.
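 A minimal sketch of the idea, not the actual Hadoop patch: an exception that serves up its exit code through a small interface. The interface name ExitCodeProvider below is a hypothetical stand-in for whatever HADOOP-9626 introduces.
 {code}
import java.io.IOException;

// Hypothetical exit-code interface along the lines of HADOOP-9626; the real
// interface name and package may differ.
interface ExitCodeProvider {
  int getExitCode();
}

// Sketch of an exit-code exception that exposes its code through the interface.
class SketchExitCodeException extends IOException implements ExitCodeProvider {
  private final int exitCode;

  SketchExitCodeException(int exitCode, String message) {
    super(message);
    this.exitCode = exitCode;
  }

  @Override
  public int getExitCode() {
    return exitCode;
  }
}

public class ExitCodeDemo {
  public static void main(String[] args) {
    try {
      throw new SketchExitCodeException(127, "command not found");
    } catch (SketchExitCodeException e) {
      // Callers recover the process exit code without parsing the message.
      System.out.println("exit code = " + e.getExitCode());
    }
  }
}
 {code}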



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11244) The HCFS contract test testRenameFileBeingAppended doesn't do a rename

2014-11-30 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-11244:
---
Fix Version/s: (was: 2.6.0)
   2.7.0

 The HCFS contract test testRenameFileBeingAppended doesn't do a rename
 --

 Key: HADOOP-11244
 URL: https://issues.apache.org/jira/browse/HADOOP-11244
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Noah Watkins
Assignee: jay vyas
 Fix For: 2.7.0

 Attachments: HADOOP-11244.patch, HADOOP-11244.patch


 The test AbstractContractAppendTest::testRenameFileBeingAppended appears to 
 assert the behavior of renaming a file opened for writing. However, the 
 assertion assertPathExists("renamed destination file does not exist", 
 renamed) fails because it appears that the file "renamed" is never created 
 (ostensibly it should be the target file that has been renamed).
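 For illustration, a sketch of what the test apparently intends, written against the plain FileSystem API rather than the contract-test helpers; the paths and the use of the local filesystem are assumptions made for the example.
 {code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Write to a file, rename it while the output stream is still open, then check
// that the *destination* path exists - the rename has to actually be issued,
// otherwise asserting on the destination is meaningless.
public class RenameWhileWriting {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.getLocal(new Configuration());
    Path src = new Path("/tmp/rename-demo-src");
    Path dst = new Path("/tmp/rename-demo-renamed");

    try (FSDataOutputStream out = fs.create(src, true)) {
      out.writeBytes("some data");
      boolean renamed = fs.rename(src, dst);
      System.out.println("rename returned " + renamed + ", dst exists: " + fs.exists(dst));
    }
  }
}
 {code}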



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9849) License information is missing

2014-11-30 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-9849:
--
Fix Version/s: (was: 2.6.0)
   2.7.0

 License information is missing
 --

 Key: HADOOP-9849
 URL: https://issues.apache.org/jira/browse/HADOOP-9849
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Timothy St. Clair
  Labels: newbie
 Fix For: 2.7.0


 The following files are licensed under the BSD license, but the BSD
 license is not part of the distribution:
 hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4.c
 hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.c
 I believe this file is BSD as well:
 hadoop-hdfs-project/hadoop-hdfs/src/main/native/util/tree.h



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10041) UserGroupInformation#spawnAutoRenewalThreadForUserCreds tries to renew even if the kerberos ticket cache is non-renewable

2014-11-30 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-10041:
---
Fix Version/s: (was: 2.6.0)
   2.7.0

 UserGroupInformation#spawnAutoRenewalThreadForUserCreds tries to renew even 
 if the kerberos ticket cache is non-renewable
 -

 Key: HADOOP-10041
 URL: https://issues.apache.org/jira/browse/HADOOP-10041
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.1-alpha
Reporter: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.7.0


 UserGroupInformation#spawnAutoRenewalThreadForUserCreds tries to renew user 
 credentials.  However, it does this even if the kerberos ticket cache in 
 question is non-renewable.
 This leads to an annoying error message being printed out all the time.
 {code}
 cmccabe@keter:/h klist
 Ticket cache: FILE:/tmp/krb5cc_1014
 Default principal: hdfs/ke...@cloudera.com
 Valid starting     Expires            Service principal
 07/18/12 15:24:15  07/19/12 15:24:13  krbtgt/cloudera@cloudera.com
 {code}
 {code}
 cmccabe@keter:/h ./bin/hadoop fs -ls /
 15:21:39,882  WARN UserGroupInformation:739 - Exception encountered while 
 running the renewal command. Aborting renew thread. 
 org.apache.hadoop.util.Shell$ExitCodeException: kinit: KDC can't fulfill 
 requested option while renewing credentials
 Found 3 items
 -rw-r--r--   3 cmccabe users   0 2012-07-09 17:15 /b
 -rw-r--r--   3 hdfs supergroup  0 2012-07-09 17:17 /c
 drwxrwxrwx   - cmccabe audio   0 2012-07-19 11:25 /tmp
 {code}
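 A sketch of the fix direction (not the actual UGI code): skip spawning the renewal thread when the TGT in the Subject is not renewable, which is exactly the situation that produces the repeated warning above.
 {code}
import java.util.Set;
import javax.security.auth.Subject;
import javax.security.auth.kerberos.KerberosTicket;

public class RenewalGuard {

  // Only worth spawning an auto-renewal thread if some ticket is renewable.
  static boolean shouldSpawnRenewalThread(Subject subject) {
    Set<KerberosTicket> tickets = subject.getPrivateCredentials(KerberosTicket.class);
    for (KerberosTicket ticket : tickets) {
      // KerberosTicket exposes the RENEWABLE flag from the ticket cache.
      if (ticket.isRenewable()) {
        return true;
      }
    }
    // No renewable ticket: attempting kinit -R would only produce noisy failures.
    return false;
  }

  public static void main(String[] args) {
    // With an empty Subject there is nothing to renew.
    System.out.println(shouldSpawnRenewalThread(new Subject()));
  }
}
 {code}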



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11251) Confirm that all contract tests are run by LocalFS

2014-11-30 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-11251:
---
Fix Version/s: (was: 2.6.0)
   2.7.0

 Confirm that all contract tests are run by LocalFS
 --

 Key: HADOOP-11251
 URL: https://issues.apache.org/jira/browse/HADOOP-11251
 Project: Hadoop Common
  Issue Type: Task
  Components: fs
Affects Versions: 2.5.1
Reporter: jay vyas
 Fix For: 2.7.0


 We need to make sure each of the contract FS tests is run.
 For example, HADOOP-11244 shows that some might not be run by RawLocalFS, 
 meaning those tests aren't actually verified.
 This is sort of an inverse test-coverage task - making sure the tests are 
 actually run is just as important as making sure the code is being tested.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8887) Use a Maven plugin to build the native code using CMake

2014-11-30 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8887:
--
Fix Version/s: (was: 2.6.0)
   2.7.0

 Use a Maven plugin to build the native code using CMake
 ---

 Key: HADOOP-8887
 URL: https://issues.apache.org/jira/browse/HADOOP-8887
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.7.0

 Attachments: HADOOP-8887.001.patch, HADOOP-8887.002.patch, 
 HADOOP-8887.003.patch, HADOOP-8887.004.patch, HADOOP-8887.005.patch, 
 HADOOP-8887.006.patch, HADOOP-8887.008.patch, HADOOP-8887.011.patch


 Currently, we build the native code using ant-build invocations.  Although 
 this works, it has some limitations:
 * compiler warning messages are hidden, which can cause people to check in 
 code with warnings unintentionally
 * there is no framework for running native unit tests; instead, we use ad-hoc 
 constructs involving shell scripts
 * the antrun code is very platform specific
 * there is no way to run a specific native unit test
 * it's more or less impossible for scripts like test-patch.sh to separate a 
 native test failing from the build itself failing (no files are created) or 
 to enumerate which native tests failed.
 Using a native Maven plugin would overcome these limitations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8500) Javadoc jars contain entire target directory

2014-11-30 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8500:
--
Fix Version/s: (was: 2.6.0)
   2.7.0

 Javadoc jars contain entire target directory
 

 Key: HADOOP-8500
 URL: https://issues.apache.org/jira/browse/HADOOP-8500
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0-alpha
 Environment: N/A
Reporter: EJ Ciramella
Priority: Minor
 Fix For: 2.7.0

 Attachments: HADOOP-8500.patch, site-redo.tar

   Original Estimate: 24h
  Remaining Estimate: 24h

 The javadoc jars contain the contents of the target directory - which 
 includes classes and all sorts of binary files that it shouldn't.
 Sometimes the resulting javadoc jar is 10X bigger than it should be.
 The fix is to reconfigure maven to use api as its destDir for javadoc 
 generation.
 I have a patch/diff incoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8805) Move protocol buffer implementation of GetUserMappingProtocol from HDFS to Common

2014-11-30 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8805:
--
Fix Version/s: (was: 2.6.0)
   2.7.0

 Move protocol buffer implementation of GetUserMappingProtocol from HDFS to 
 Common
 -

 Key: HADOOP-8805
 URL: https://issues.apache.org/jira/browse/HADOOP-8805
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bo Wang
Assignee: Bo Wang
 Fix For: 2.7.0

 Attachments: HADOOP-8805-v2.patch, HADOOP-8805-v3.patch, 
 HADOOP-8805.patch


 org.apache.hadoop.tools.GetUserMappingProtocol is used in both HDFS and YARN. 
 We should move the protocol buffer implementation from HDFS to Common so that 
 it can also be used by YARN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8937) ClientProtocol should support a way to get DataNodeInfo for a particular data node.

2014-11-30 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8937:
--
Fix Version/s: (was: 2.6.0)
   2.7.0

 ClientProtocol should support a way to get DataNodeInfo for a particular data 
 node.
 ---

 Key: HADOOP-8937
 URL: https://issues.apache.org/jira/browse/HADOOP-8937
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 1.0.3
Reporter: Sameer Vaishampayan
Priority: Minor
 Fix For: 2.7.0


 The HBase project needs a way to find out whether a DataNode is running locally on a 
 given host. The current approach is too expensive: getDatanodeReport has to be 
 called, which returns information for all data nodes in the cluster.
 https://issues.apache.org/jira/browse/HBASE-6398
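 For context, a sketch of the expensive status quo described above: the client pulls the report for every DataNode in the cluster and filters it for the local host. The NameNode URI and the hostname comparison are illustrative assumptions.
 {code}
import java.net.InetAddress;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class LocalDatanodeCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020/"), conf);
    String localHost = InetAddress.getLocalHost().getHostName();

    if (fs instanceof DistributedFileSystem) {
      // Full cluster report, even though we only care about one host.
      DatanodeInfo[] all = ((DistributedFileSystem) fs).getDataNodeStats();
      boolean local = false;
      for (DatanodeInfo dn : all) {
        if (localHost.equals(dn.getHostName())) {
          local = true;
          break;
        }
      }
      System.out.println("DataNode running locally: " + local);
    }
  }
}
 {code}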



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8738) junit JAR is showing up in the distro

2014-11-30 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8738:
--
Fix Version/s: (was: 2.6.0)
   2.7.0

 junit JAR is showing up in the distro
 -

 Key: HADOOP-8738
 URL: https://issues.apache.org/jira/browse/HADOOP-8738
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.7.0

 Attachments: HADOOP-8738.patch


 It seems that with the move of the YARN module to the trunk/ level, the test scope on 
 junit got lost. This makes the junit JAR show up in the TAR.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9085) start namenode failure, because pid of namenode pid file is other process pid or thread id before start namenode

2014-11-30 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-9085:
--
Fix Version/s: (was: 2.6.0)
   2.7.0

 start namenode failure, because pid of namenode pid file is other process pid 
 or thread id before start namenode
 ---

 Key: HADOOP-9085
 URL: https://issues.apache.org/jira/browse/HADOOP-9085
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 2.0.1-alpha, 2.0.3-alpha
 Environment: NA
Reporter: liaowenrui
 Fix For: 2.0.1-alpha, 2.0.2-alpha, 2.7.0


 If the pid recorded in the namenode pid file belongs to some other process or 
 thread before the namenode is started, starting the namenode will fail. The pid 
 from the pid file is checked with the kill -0 command in the hadoop-daemon.sh 
 script before the namenode is started; when that pid belongs to another process 
 or thread, kill -0 succeeds, so the script concludes the namenode is already 
 running even though it is not.
 2338 is dead namenode pid 
 2305 is datanode pid
 cqn2:/tmp # kill -0 2338
 cqn2:/tmp # ps -wweLo pid,ppid,tid | grep 2338
  2305 1  2338



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8643) hadoop-client should exclude hadoop-annotations from hadoop-common dependency

2014-11-30 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8643:
--
Fix Version/s: (was: 2.6.0)
   2.7.0

 hadoop-client should exclude hadoop-annotations from hadoop-common dependency
 -

 Key: HADOOP-8643
 URL: https://issues.apache.org/jira/browse/HADOOP-8643
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Priority: Minor
 Fix For: 2.7.0

 Attachments: hadoop-8643.txt


 When reviewing HADOOP-8370 I missed that changing the scope of 
 hadoop-annotations in hadoop-common to compile would make hadoop-annotations 
 bubble up into hadoop-client. Because of this we need to exclude it explicitly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10523) Hadoop services (such as RM, NN and JHS) throw confusing exception during token auto-cancelation

2014-11-30 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-10523:
---
Fix Version/s: (was: 2.6.0)
   2.7.0

 Hadoop services (such as RM, NN and JHS) throw confusing exception during 
 token auto-cancelation 
 -

 Key: HADOOP-10523
 URL: https://issues.apache.org/jira/browse/HADOOP-10523
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.3.0
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam
 Fix For: 2.7.0

 Attachments: HADOOP-10523.1.patch


 When a user explicitly cancels the token, the system (such as RM, NN and JHS) 
 also periodically tries to cancel the same token. During the second cancel 
 (originated by RM/NN/JHS), Hadoop processes throw the following 
 error/exception in the log file. Although the exception is harmless, it 
 creates a lot of confusion and causes developers to spend a lot of time 
 investigating.
 This JIRA is to check whether the token is still available (not already 
 cancelled) before attempting to cancel it, and to replace this exception with 
 a proper warning message (a minimal sketch follows the log excerpt below).
 {noformat}
 2014-04-15 01:41:14,686 INFO 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
  Token cancelation requested for identifier:: 
 owner=FULL_PRINCIPAL.linkedin.com@REALM, renewer=yarn, realUser=, 
 issueDate=1397525405921, maxDate=1398130205921, sequenceNumber=1, 
 masterKeyId=2
 2014-04-15 01:41:14,688 WARN org.apache.hadoop.security.UserGroupInformation: 
 PriviledgedActionException as:yarn/HOST@REALM (auth:KERBEROS) 
 cause:org.apache.hadoop.security.token.SecretManager$InvalidToken: Token not 
 found
 2014-04-15 01:41:14,689 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
 7 on 10020, call 
 org.apache.hadoop.mapreduce.v2.api.HSClientProtocolPB.cancelDelegationToken 
 from 172.20.128.42:2783 Call#37759 Retry#0: error: 
 org.apache.hadoop.security.token.SecretManager$InvalidToken: Token not found
 org.apache.hadoop.security.token.SecretManager$InvalidToken: Token not found
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.cancelToken(AbstractDelegationTokenSecretManager.java:436)
 at 
 org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.cancelDelegationToken(HistoryClientService.java:400)
 at 
 org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.cancelDelegationToken(MRClientProtocolPBServiceImpl.java:286)
 at 
 org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:301)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)
 {noformat}
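 A minimal sketch of the requested behaviour: treat a cancel of an already-cancelled token as benign and log a warning instead of surfacing InvalidToken. The TokenStore interface and cancelIfPresent helper are hypothetical stand-ins for the secret-manager internals.
 {code}
import java.util.logging.Logger;
import org.apache.hadoop.security.token.SecretManager.InvalidToken;

public class QuietCancel {
  private static final Logger LOG = Logger.getLogger(QuietCancel.class.getName());

  // Hypothetical abstraction over the real secret-manager cancel path.
  interface TokenStore {
    void cancel(String tokenId) throws InvalidToken;
  }

  static void cancelIfPresent(TokenStore store, String tokenId) {
    try {
      store.cancel(tokenId);
    } catch (InvalidToken e) {
      // Token already cancelled (e.g. explicitly by the user); nothing to do.
      LOG.warning("Token " + tokenId + " already cancelled, skipping auto-cancel");
    }
  }

  public static void main(String[] args) {
    TokenStore alwaysGone = new TokenStore() {
      @Override
      public void cancel(String tokenId) throws InvalidToken {
        throw new InvalidToken("Token not found");
      }
    };
    cancelIfPresent(alwaysGone, "token-1");
  }
}
 {code}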



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9464) In test*Conf.xml regexp replication factor set to 1

2014-11-30 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-9464:
--
Fix Version/s: (was: 2.6.0)
   2.7.0

 In test*Conf.xml regexp replication factor set to 1
 ---

 Key: HADOOP-9464
 URL: https://issues.apache.org/jira/browse/HADOOP-9464
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.4-alpha
Reporter: Anatoli Fomenko
Priority: Critical
 Fix For: 2.7.0

 Attachments: HADOOP-9464.patch


 In some Hadoop smoke tests (testHDFSConf.xml), in the expected output for 
 RegexpComparator, the replication factor is hard-coded to 1,
  
 {noformat}
 <test> <!-- TESTED -->
   <description>ls: file using absolute path</description>
   <test-commands>
     <command>-fs NAMENODE -touchz /file1</command>
     <command>-fs NAMENODE -ls /file1</command>
   </test-commands>
   <cleanup-commands>
     <command>-fs NAMENODE -rm /file1</command>
   </cleanup-commands>
   <comparators>
     <comparator>
       <type>TokenComparator</type>
       <expected-output>Found 1 items</expected-output>
     </comparator>
     <comparator>
       <type>RegexpComparator</type>
       <expected-output>^-rw-r--r--( )*1( )*[a-z]*( )*supergroup( )*0( )*[0-9]{4,}-[0-9]{2,}-[0-9]{2,} [0-9]{2,}:[0-9]{2,}( )*/file1</expected-output>
     </comparator>
   </comparators>
 </test>
 {noformat}
 such as the first 1 in 
 {noformat}
 <expected-output>^-rw-r--r--( )*1( )*[a-z]*( )*supergroup( )*0( )*[0-9]{4,}-[0-9]{2,}-[0-9]{2,} [0-9]{2,}:[0-9]{2,}( )*/file1</expected-output>
 {noformat}.
 We found in Bigtop testing on a standalone cluster that such tests fail with 
 a default replication factor. 
 Please update the regexps in the test*Conf.xml files to allow a flexible 
 replication factor, so that these tests can be executed against a variety of 
 clusters, or inside Bigtop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8884) DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError

2014-11-30 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8884:
--
Fix Version/s: (was: 2.6.0)
   2.7.0

 DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load 
 native-hadoop with error: java.lang.UnsatisfiedLinkError
 -

 Key: HADOOP-8884
 URL: https://issues.apache.org/jira/browse/HADOOP-8884
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.1-alpha
Reporter: Anthony Rojas
Assignee: Anthony Rojas
 Fix For: 2.7.0

 Attachments: HADOOP-8884-v2.patch, HADOOP-8884.patch, 
 HADOOP-8884.patch


 Recommending changing the following debug message and promoting it to a 
 warning instead:
 12/07/02 18:41:44 DEBUG util.NativeCodeLoader: Failed to load native-hadoop 
 with error: java.lang.UnsatisfiedLinkError: 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0: /lib64/libc.so.6: version 
 `GLIBC_2.6' not found (required by 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9679) KerberosName.rules are not initialized during adding kerberos support to a web servlet using hadoop authentications

2014-11-30 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-9679:
--
Fix Version/s: (was: 2.6.0)
   2.7.0

 KerberosName.rules are not initialized during adding kerberos support to a 
 web servlet using hadoop authentications
 ---

 Key: HADOOP-9679
 URL: https://issues.apache.org/jira/browse/HADOOP-9679
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.1.1, 2.0.4-alpha
Reporter: fang fang chen
 Fix For: 2.7.0

 Attachments: HADOOP-9679.patch


 I am using hadoop-1.1.1 to add Kerberos authentication to a web service, but 
 found that the rules are not initialized, which causes the following error:
 java.lang.NullPointerException
 at 
 org.apache.hadoop.security.KerberosName.getShortName(KerberosName.java:384)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:328)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:302)
 at 
 java.security.AccessController.doPrivileged(AccessController.java:310)
 at javax.security.auth.Subject.doAs(Subject.java:573)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:302)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:340)
 This issue still seems to be present in the hadoop-2.0.4-alpha branch. 
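 A small sketch of the fix/workaround direction, assuming the KerberosName class from the hadoop-auth module: initialize the auth_to_local rules explicitly before short names are resolved, so getShortName() never runs against an uninitialized rules list. The rules string and sample principal are illustrative.
 {code}
import org.apache.hadoop.security.authentication.util.KerberosName;

public class InitKerberosRules {
  public static void main(String[] args) throws Exception {
    // Without this call (or an equivalent configuration step) the rules can be
    // null and getShortName() may fail with a NullPointerException.
    KerberosName.setRules("RULE:[1:$1]\nRULE:[2:$1]\nDEFAULT");

    KerberosName name = new KerberosName("hdfs/namenode.example.com@EXAMPLE.COM");
    // Expected to print the first principal component, e.g. "hdfs".
    System.out.println(name.getShortName());
  }
}
 {code}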



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8631) The description of net.topology.table.file.name in core-default.xml is misleading

2014-11-30 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8631:
--
Fix Version/s: (was: 2.6.0)
   2.7.0

 The description of net.topology.table.file.name in core-default.xml is 
 misleading
 -

 Key: HADOOP-8631
 URL: https://issues.apache.org/jira/browse/HADOOP-8631
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha, 2.0.1-alpha, 2.0.2-alpha
Reporter: Han Xiao
Priority: Minor
 Fix For: 2.7.0

 Attachments: core-default.xml.patch


 The net.topology.table.file.name is used when 
 net.topology.node.switch.mapping.impl property is set to 
 org.apache.hadoop.net.TableMapping.
 However, the description in core-default.xml asks the user to set the 
 net.topology.script.file.name property to 
 org.apache.hadoop.net.TableMapping.
 This could mislead users into a wrong configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9646) Inconsistent exception specifications in FileUtils#chmod

2014-11-30 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-9646:
--
Fix Version/s: (was: 2.6.0)
   2.7.0

 Inconsistent exception specifications in FileUtils#chmod
 

 Key: HADOOP-9646
 URL: https://issues.apache.org/jira/browse/HADOOP-9646
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.7.0

 Attachments: HADOOP-9646.001.patch, HADOOP-9646.002.patch


 There are two FileUtils#chmod methods:
 {code}
 public static int chmod(String filename, String perm
   ) throws IOException, InterruptedException;
 public static int chmod(String filename, String perm, boolean recursive)
 throws IOException;
 {code}
 The first one just calls the second one with {{recursive = false}}, but 
 despite that it is declared as throwing {{InterruptedException}}, something 
 the second one doesn't declare.
 The new Java 7 chmod API, which we will transition to once JDK 6 support is 
 dropped, does *not* throw {{InterruptedException}}.
 See 
 [http://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html#setOwner(java.nio.file.Path,
  java.nio.file.attribute.UserPrincipal)]
 So we should make these consistent by removing the {{InterruptedException}}.
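 For reference, a short sketch of the Java 7 NIO call mentioned above: permission changes there declare only IOException, which is the consistency this JIRA asks for. The helper below is illustrative and not the Hadoop FileUtil code.
 {code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class NioChmodExample {
  // Declares only IOException - no InterruptedException in the signature.
  static void chmod(String filename, String perm) throws IOException {
    Path path = Paths.get(filename);
    Set<PosixFilePermission> perms = PosixFilePermissions.fromString(perm); // e.g. "rwxr-xr-x"
    Files.setPosixFilePermissions(path, perms);
  }

  public static void main(String[] args) throws IOException {
    Path tmp = Files.createTempFile("chmod-demo", ".txt");
    chmod(tmp.toString(), "rw-r-----");
    System.out.println(Files.getPosixFilePermissions(tmp));
  }
}
 {code}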



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8369) Failing tests in branch-2

2014-11-30 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8369:
--
Fix Version/s: (was: 2.6.0)
   2.7.0

 Failing tests in branch-2
 -

 Key: HADOOP-8369
 URL: https://issues.apache.org/jira/browse/HADOOP-8369
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Arun C Murthy
 Fix For: 2.7.0


 Running org.apache.hadoop.io.compress.TestCodec
 Tests run: 20, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 27.789 sec <<< FAILURE!
 --
 Running org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
 Tests run: 98, Failures: 0, Errors: 98, Skipped: 0, Time elapsed: 1.633 sec <<< FAILURE!
 --
 Running org.apache.hadoop.fs.viewfs.TestViewFsTrash
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.658 sec <<< FAILURE!
 
 TestCodec failed since I didn't pass -Pnative; the test could be improved to 
 ensure the snappy tests are skipped if native hadoop isn't present.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8345) HttpServer adds SPNEGO filter mapping but does not register the SPNEGO filter

2014-11-30 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8345:
--
Fix Version/s: (was: 2.6.0)
   2.7.0

 HttpServer adds SPNEGO filter mapping but does not register the SPNEGO filter
 -

 Key: HADOOP-8345
 URL: https://issues.apache.org/jira/browse/HADOOP-8345
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.7.0


 It seems the mapping was added to fulfill HDFS requirements, where the SPNEGO 
 filter is registered.
 The registration of the SPNEGO filter should instead be done at the common 
 level so it is available to all components using HttpServer when security is ON.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11316) mvn package -Pdocs -DskipTests -Dtar fails because of non-ascii characters

2014-11-18 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-11316:
---
Priority: Blocker  (was: Major)
Target Version/s: 2.6.1

 mvn package -Pdocs -DskipTests -Dtar fails because of non-ascii characters
 

 Key: HADOOP-11316
 URL: https://issues.apache.org/jira/browse/HADOOP-11316
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi OZAWA
Priority: Blocker

 The command fails because the following files include non-ASCII characters.
 * ComparableVersion.java
 * CommonConfigurationKeysPublic.java
 * ComparableVersion.java
 {code}
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
  error: unmappable character for encoding ASCII
   [javadoc] //author a href=mailto:hbout...@apache.org;Herv?? 
 Boutemy/a
   [javadoc]   ^
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
  error: unmappable character for encoding ASCII
   [javadoc] //author a href=mailto:hbout...@apache.org;Herv?? 
 Boutemy/a
 {code}
 {code}
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java:318:
  error: unmappable character for encoding ASCII
   [javadoc]   //  !--- KMSClientProvider configurations ???
   [javadoc]  ^
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java:318:
  error: unmappable character for encoding ASCII
   [javadoc]   //  !--- KMSClientProvider configurations ???
   [javadoc]   ^
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java:318:
  error: unmappable character for encoding ASCII
   [javadoc] Loading source files for package org.apache.hadoop.fs.crypto...
   [javadoc]   //  !--- KMSClientProvider configurations ???
 {code}
 {code}
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
  error: unmappable character for encoding ASCII
   [javadoc] //author a href=mailto:hbout...@apache.org;Herv?? 
 Boutemy/a
   [javadoc]   ^
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
  error: unmappable character for encoding ASCII
   [javadoc] //author a href=mailto:hbout...@apache.org;Herv?? 
 Boutemy/a
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11316) mvn package -Pdocs -DskipTests -Dtar fails because of non-ascii characters

2014-11-18 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14216376#comment-14216376
 ] 

Arun C Murthy commented on HADOOP-11316:


 I think you have to pass along -Pdist too… afaik that is how the Jenkins we 
use for building release artifacts operates.


 mvn package -Pdocs -DskipTests -Dtar fails because of non-ascii characters
 

 Key: HADOOP-11316
 URL: https://issues.apache.org/jira/browse/HADOOP-11316
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi OZAWA
Priority: Blocker

 The command fails because the following files include non-ASCII characters.
 * ComparableVersion.java
 * CommonConfigurationKeysPublic.java
 * ComparableVersion.java
 {code}
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
  error: unmappable character for encoding ASCII
   [javadoc] //author a href=mailto:hbout...@apache.org;Herv?? 
 Boutemy/a
   [javadoc]   ^
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
  error: unmappable character for encoding ASCII
   [javadoc] //author a href=mailto:hbout...@apache.org;Herv?? 
 Boutemy/a
 {code}
 {code}
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java:318:
  error: unmappable character for encoding ASCII
   [javadoc]   //  !--- KMSClientProvider configurations ???
   [javadoc]  ^
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java:318:
  error: unmappable character for encoding ASCII
   [javadoc]   //  !--- KMSClientProvider configurations ???
   [javadoc]   ^
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java:318:
  error: unmappable character for encoding ASCII
   [javadoc] Loading source files for package org.apache.hadoop.fs.crypto...
   [javadoc]   //  !--- KMSClientProvider configurations ???
 {code}
 {code}
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
  error: unmappable character for encoding ASCII
   [javadoc] //author a href=mailto:hbout...@apache.org;Herv?? 
 Boutemy/a
   [javadoc]   ^
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
  error: unmappable character for encoding ASCII
   [javadoc] //author a href=mailto:hbout...@apache.org;Herv?? 
 Boutemy/a
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11260) Patch up Jetty to disable SSLv3

2014-11-13 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-11260:
---
Fix Version/s: 2.6.0

 Patch up Jetty to disable SSLv3
 ---

 Key: HADOOP-11260
 URL: https://issues.apache.org/jira/browse/HADOOP-11260
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.1
Reporter: Karthik Kambatla
Assignee: Mike Yoder
Priority: Blocker
 Fix For: 2.6.0, 2.5.2

 Attachments: HADOOP-11260.001.patch, HADOOP-11260.002.patch


 Hadoop uses an older version of Jetty that allows SSLv3. We should fix it up. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11243) SSLFactory shouldn't allow SSLv3

2014-11-13 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-11243:
---
Fix Version/s: 2.6.0

 SSLFactory shouldn't allow SSLv3
 

 Key: HADOOP-11243
 URL: https://issues.apache.org/jira/browse/HADOOP-11243
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei Yan
Assignee: Wei Yan
Priority: Blocker
 Fix For: 2.6.0, 2.5.2

 Attachments: YARN-2722-1.patch, YARN-2722-2.patch, YARN-2722-3.patch


 We should disable SSLv3 in SSLFactory. This affects MR shuffle among others. 
 See [CVE-2014-3566 
 |http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566]
 We have {{context = SSLContext.getInstance("TLS");}} in SSLFactory, but when 
 I checked, I could still connect with SSLv3.
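 A sketch of the fix direction: requesting a TLS context is not enough on its own, so the enabled protocol list has to be filtered explicitly. This is illustrative JSSE code, not the attached patch.
 {code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

public class DisableSslv3 {
  public static void main(String[] args) throws Exception {
    SSLContext context = SSLContext.getInstance("TLS");
    context.init(null, null, null);

    SSLEngine engine = context.createSSLEngine();
    List<String> enabled = new ArrayList<String>();
    for (String protocol : engine.getEnabledProtocols()) {
      // Drop SSLv3 (and SSLv2Hello); keep only TLS protocols.
      if (!protocol.toUpperCase().startsWith("SSL")) {
        enabled.add(protocol);
      }
    }
    engine.setEnabledProtocols(enabled.toArray(new String[0]));
    System.out.println(Arrays.toString(engine.getEnabledProtocols()));
  }
}
 {code}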



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9576) Make NetUtils.wrapException throw EOFException instead of wrapping it as IOException

2014-11-12 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-9576:
--
Target Version/s:   (was: 2.7.0)

 Make NetUtils.wrapException throw EOFException instead of wrapping it as 
 IOException
 

 Key: HADOOP-9576
 URL: https://issues.apache.org/jira/browse/HADOOP-9576
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.6.0
Reporter: Jian He
Assignee: Steve Loughran
 Fix For: 2.6.0

 Attachments: HADOOP-9576-003.patch


 In the case of EOFException, NetUtils currently wraps it as IOException. We may 
 want to throw the EOFException as-is, since an EOFException can happen when 
 the connection is lost mid-stream and the client may want to handle such an 
 exception explicitly.
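 To illustrate why callers care, a small self-contained example: when the EOFException is rethrown as-is, a client can distinguish an early end-of-stream from other I/O failures with a dedicated catch block.
 {code}
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class EofHandlingExample {
  public static void main(String[] args) {
    try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(new byte[2]))) {
      in.readInt();  // only 2 bytes available, so this hits end-of-stream early
    } catch (EOFException e) {
      System.out.println("stream ended early; reconnecting or retrying might help");
    } catch (IOException e) {
      System.out.println("some other I/O failure: " + e);
    }
  }
}
 {code}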



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9576) Make NetUtils.wrapException throw EOFException instead of wrapping it as IOException

2014-11-12 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-9576:
--
Fix Version/s: (was: 2.7.0)
   2.6.0

 Make NetUtils.wrapException throw EOFException instead of wrapping it as 
 IOException
 

 Key: HADOOP-9576
 URL: https://issues.apache.org/jira/browse/HADOOP-9576
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.6.0
Reporter: Jian He
Assignee: Steve Loughran
 Fix For: 2.6.0

 Attachments: HADOOP-9576-003.patch


 In the case of EOFException, NetUtils currently wraps it as IOException. We may 
 want to throw the EOFException as-is, since an EOFException can happen when 
 the connection is lost mid-stream and the client may want to handle such an 
 exception explicitly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9576) Make NetUtils.wrapException throw EOFException instead of wrapping it as IOException

2014-11-12 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208443#comment-14208443
 ] 

Arun C Murthy commented on HADOOP-9576:
---

Merged this into branch-2.6 for hadoop-2.6.0-rc1.

 Make NetUtils.wrapException throw EOFException instead of wrapping it as 
 IOException
 

 Key: HADOOP-9576
 URL: https://issues.apache.org/jira/browse/HADOOP-9576
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.6.0
Reporter: Jian He
Assignee: Steve Loughran
 Fix For: 2.6.0

 Attachments: HADOOP-9576-003.patch


 In the case of EOFException, NetUtils currently wraps it as IOException. We may 
 want to throw the EOFException as-is, since an EOFException can happen when 
 the connection is lost mid-stream and the client may want to handle such an 
 exception explicitly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11286) Map/Reduce dangerously adds Guava @Beta class to CryptoUtils

2014-11-09 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-11286:
---
   Resolution: Fixed
Fix Version/s: 2.6.0
   Status: Resolved  (was: Patch Available)

I just committed this. Thanks [~ctubbsii]!

 Map/Reduce dangerously adds Guava @Beta class to CryptoUtils
 

 Key: HADOOP-11286
 URL: https://issues.apache.org/jira/browse/HADOOP-11286
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Christopher Tubbs
Priority: Blocker
  Labels: beta, deprecated, guava
 Fix For: 2.6.0

 Attachments: 
 0001-MAPREDUCE-6083-Avoid-client-use-of-deprecated-LimitI.patch, 
 0001-MAPREDUCE-6083-and-HDFS-7040-Avoid-Guava-s-LimitInpu.patch


 See HDFS-7040 for more background/details.
 In recent 2.6.0-SNAPSHOTs, a use of LimitInputStream was added to CryptoUtils, 
 which is part of the API components of Hadoop. This severely impacts users who 
 are utilizing newer versions of Guava, where the @Beta and @Deprecated class 
 LimitInputStream has been removed (in version 15 and later), beyond the impact 
 already experienced in 2.4.0 as identified in HDFS-7040.
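 One way to avoid the removed class, assuming a reasonably recent Guava on the classpath, is ByteStreams.limit(), which provides the same bounded-read behaviour; this is a sketch of the idea, not the patch attached here.
 {code}
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import com.google.common.io.ByteStreams;

public class LimitStreamExample {
  public static void main(String[] args) throws IOException {
    InputStream raw = new ByteArrayInputStream("0123456789".getBytes("UTF-8"));
    InputStream limited = ByteStreams.limit(raw, 4);  // cap the readable bytes at 4

    byte[] buf = new byte[16];
    int n = limited.read(buf);
    System.out.println(new String(buf, 0, n, "UTF-8"));  // prints "0123"
  }
}
 {code}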



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10895) HTTP KerberosAuthenticator fallback should have a flag to disable it

2014-11-09 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203918#comment-14203918
 ] 

Arun C Murthy commented on HADOOP-10895:


I'm worried about pulling this into 2.6 at the last minute... seems like this 
could come right after in 2.7?

 HTTP KerberosAuthenticator fallback should have a flag to disable it
 

 Key: HADOOP-10895
 URL: https://issues.apache.org/jira/browse/HADOOP-10895
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.4.1
Reporter: Alejandro Abdelnur
Assignee: Yongjun Zhang
Priority: Blocker
 Attachments: HADOOP-10895.001.patch, HADOOP-10895.002.patch, 
 HADOOP-10895.003.patch, HADOOP-10895.003v1.patch, HADOOP-10895.003v2.patch, 
 HADOOP-10895.003v2improved.patch, HADOOP-10895.004.patch, 
 HADOOP-10895.005.patch, HADOOP-10895.006.patch, HADOOP-10895.007.patch, 
 HADOOP-10895.008.patch


 Per review feedback in HADOOP-10771, {{KerberosAuthenticator}} and the 
 delegation token version coming in with HADOOP-10771 should have a flag to 
 disable fallback to pseudo, similarly to the one that was introduced in 
 Hadoop RPC client with HADOOP-9698.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10895) HTTP KerberosAuthenticator fallback should have a flag to disable it

2014-11-09 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14204001#comment-14204001
 ] 

Arun C Murthy commented on HADOOP-10895:


[~yzhangal] - there is no need to apologize at all... these things are very 
normal.

I just feel worried about pulling this in at the last minute. Let's commit it to 
trunk/branch-2 ASAP; this way we get bake time early for 2.7. I should be able to 
create 2.7 very soon, the primary reason being to drop JDK 1.6. Makes sense? 
Thanks.

 HTTP KerberosAuthenticator fallback should have a flag to disable it
 

 Key: HADOOP-10895
 URL: https://issues.apache.org/jira/browse/HADOOP-10895
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.4.1
Reporter: Alejandro Abdelnur
Assignee: Yongjun Zhang
Priority: Blocker
 Attachments: HADOOP-10895.001.patch, HADOOP-10895.002.patch, 
 HADOOP-10895.003.patch, HADOOP-10895.003v1.patch, HADOOP-10895.003v2.patch, 
 HADOOP-10895.003v2improved.patch, HADOOP-10895.004.patch, 
 HADOOP-10895.005.patch, HADOOP-10895.006.patch, HADOOP-10895.007.patch, 
 HADOOP-10895.008.patch


 Per review feedback in HADOOP-10771, {{KerberosAuthenticator}} and the 
 delegation token version coming in with HADOOP-10771 should have a flag to 
 disable fallback to pseudo, similarly to the one that was introduced in 
 Hadoop RPC client with HADOOP-9698.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10895) HTTP KerberosAuthenticator fallback should have a flag to disable it

2014-11-09 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-10895:
---
Target Version/s: 2.7.0  (was: 2.6.0)

Thanks for your patience and understanding [~yzhangal]... not to mention your 
contributions, please keep them coming.

 HTTP KerberosAuthenticator fallback should have a flag to disable it
 

 Key: HADOOP-10895
 URL: https://issues.apache.org/jira/browse/HADOOP-10895
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.4.1
Reporter: Alejandro Abdelnur
Assignee: Yongjun Zhang
Priority: Blocker
 Attachments: HADOOP-10895.001.patch, HADOOP-10895.002.patch, 
 HADOOP-10895.003.patch, HADOOP-10895.003v1.patch, HADOOP-10895.003v2.patch, 
 HADOOP-10895.003v2improved.patch, HADOOP-10895.004.patch, 
 HADOOP-10895.005.patch, HADOOP-10895.006.patch, HADOOP-10895.007.patch, 
 HADOOP-10895.008.patch


 Per review feedback in HADOOP-10771, {{KerberosAuthenticator}} and the 
 delegation token version coming in with HADOOP-10771 should have a flag to 
 disable fallback to pseudo, similarly to the one that was introduced in 
 Hadoop RPC client with HADOOP-9698.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10786) Fix UGI#reloginFromKeytab on Java 8

2014-11-09 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14204236#comment-14204236
 ] 

Arun C Murthy commented on HADOOP-10786:


I think it's too late/risky to put this into 2.6; let's get this into 2.7. 
Thanks.

 Fix UGI#reloginFromKeytab on Java 8
 ---

 Key: HADOOP-10786
 URL: https://issues.apache.org/jira/browse/HADOOP-10786
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: Tobi Vollebregt
Assignee: Stephen Chu
 Fix For: 2.6.0

 Attachments: HADOOP-10786.2.patch, HADOOP-10786.3.patch, 
 HADOOP-10786.3.patch, HADOOP-10786.4.patch, HADOOP-10786.5.patch, 
 HADOOP-10786.patch


 Krb5LoginModule changed subtly in Java 8: in particular, if useKeyTab and 
 storeKey are specified, then only a KeyTab object is added to the Subject's 
 private credentials, whereas in Java <= 7 both a KeyTab and some number of 
 KerberosKey objects were added.
 The UGI constructor checks whether or not a keytab was used to log in by 
 checking whether there are any KerberosKey objects in the Subject's private 
 credentials. If there are, then isKeyTab is set to true; otherwise it is 
 set to false.
 Thus, on Java 8 isKeyTab is always false given the current UGI 
 implementation, which makes UGI#reloginFromKeytab fail silently.
 Attached patch will check for a KeyTab object on the Subject, instead of a 
 KerberosKey object. This fixes relogins from kerberos keytabs on Oracle java 
 8, and works on Oracle java 7 as well.
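 A minimal sketch of the check described above (not the actual UGI code): look for a KeyTab credential on the Subject, which is present on both Java 7 and Java 8 keytab logins, instead of relying on KerberosKey objects alone.
 {code}
import javax.security.auth.Subject;
import javax.security.auth.kerberos.KerberosKey;
import javax.security.auth.kerberos.KeyTab;

public class KeytabDetection {

  static boolean loggedInFromKeytab(Subject subject) {
    // Old check: KerberosKey only, which is empty on Java 8 keytab logins.
    boolean hasKerberosKey = !subject.getPrivateCredentials(KerberosKey.class).isEmpty();
    // Fixed check: a KeyTab credential is present on both Java 7 and Java 8.
    boolean hasKeyTab = !subject.getPrivateCredentials(KeyTab.class).isEmpty();
    return hasKeyTab || hasKerberosKey;
  }

  public static void main(String[] args) {
    System.out.println(loggedInFromKeytab(new Subject()));  // false: no credentials yet
  }
}
 {code}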



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10530) Make hadoop trunk build on Java7+ only

2014-11-09 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-10530:
---
Priority: Blocker  (was: Major)

 Make hadoop trunk build on Java7+ only
 --

 Key: HADOOP-10530
 URL: https://issues.apache.org/jira/browse/HADOOP-10530
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0, 2.4.0
 Environment: Java 1.7+
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Blocker
 Attachments: HADOOP-10530-001.patch, HADOOP-10530-002.patch, Screen 
 Shot 2014-09-20 at 18.09.05.png


 As discussed on hadoop-common, hadoop 3 is envisaged to be Java7+ *only* 
 -this JIRA covers switching the build for this
 # maven enforcer plugin to set Java version = {{[1.7)}}
 # compiler to set language to java 1.7



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10530) Make hadoop trunk build on Java7+ only

2014-11-09 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14204238#comment-14204238
 ] 

Arun C Murthy commented on HADOOP-10530:


Marking this as a blocker for 2.7; this is the core feature for the release.

 Make hadoop trunk build on Java7+ only
 --

 Key: HADOOP-10530
 URL: https://issues.apache.org/jira/browse/HADOOP-10530
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0, 2.4.0
 Environment: Java 1.7+
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Blocker
 Attachments: HADOOP-10530-001.patch, HADOOP-10530-002.patch, Screen 
 Shot 2014-09-20 at 18.09.05.png


 As discussed on hadoop-common, hadoop 3 is envisaged to be Java7+ *only* 
 -this JIRA covers switching the build for this
 # maven enforcer plugin to set Java version = {{[1.7)}}
 # compiler to set language to java 1.7



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10530) Make hadoop trunk build on Java7+ only

2014-11-09 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-10530:
---
Target Version/s: 2.7.0  (was: 3.0.0)

 Make hadoop trunk build on Java7+ only
 --

 Key: HADOOP-10530
 URL: https://issues.apache.org/jira/browse/HADOOP-10530
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0, 2.4.0
 Environment: Java 1.7+
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-10530-001.patch, HADOOP-10530-002.patch, Screen 
 Shot 2014-09-20 at 18.09.05.png


 As discussed on hadoop-common, hadoop 3 is envisaged to be Java7+ *only* 
 -this JIRA covers switching the build for this
 # maven enforcer plugin to set Java version = {{[1.7)}}
 # compiler to set language to java 1.7



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10786) Fix UGI#reloginFromKeytab on Java 8

2014-11-09 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-10786:
---
Target Version/s:   (was: 2.6.0)
   Fix Version/s: (was: 2.6.0)
  2.7.0

Moved to hadoop-2.7.0.

 Fix UGI#reloginFromKeytab on Java 8
 ---

 Key: HADOOP-10786
 URL: https://issues.apache.org/jira/browse/HADOOP-10786
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: Tobi Vollebregt
Assignee: Stephen Chu
 Fix For: 2.7.0

 Attachments: HADOOP-10786.2.patch, HADOOP-10786.3.patch, 
 HADOOP-10786.3.patch, HADOOP-10786.4.patch, HADOOP-10786.5.patch, 
 HADOOP-10786.patch


 Krb5LoginModule changed subtly in Java 8: in particular, if useKeyTab and 
 storeKey are specified, then only a KeyTab object is added to the Subject's 
 private credentials, whereas in Java <= 7 both a KeyTab and some number of 
 KerberosKey objects were added.
 The UGI constructor checks whether or not a keytab was used to log in by 
 checking whether there are any KerberosKey objects in the Subject's private 
 credentials. If there are, then isKeyTab is set to true; otherwise it is 
 set to false.
 Thus, on Java 8 isKeyTab is always false given the current UGI 
 implementation, which makes UGI#reloginFromKeytab fail silently.
 Attached patch will check for a KeyTab object on the Subject, instead of a 
 KerberosKey object. This fixes relogins from kerberos keytabs on Oracle java 
 8, and works on Oracle java 7 as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11274) ConcurrentModificationException in Configuration Copy Constructor

2014-11-08 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-11274:
---
Target Version/s: 2.7.0  (was: 2.6.0)

 ConcurrentModificationException in Configuration Copy Constructor
 -

 Key: HADOOP-11274
 URL: https://issues.apache.org/jira/browse/HADOOP-11274
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Reporter: Junping Du
Assignee: Vinod Kumar Vavilapalli
Priority: Blocker
 Attachments: HADOOP-11274-v2.patch, HADOOP-11274-v3.patch, 
 HADOOP-11274-v4.patch, HADOOP-11274.003.patch, HADOOP-11274.patch


 The exception below happens when doing some configuration updates in parallel:
 {noformat}
 java.util.ConcurrentModificationException
   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:922)
   at java.util.HashMap$EntryIterator.next(HashMap.java:962)
   at java.util.HashMap$EntryIterator.next(HashMap.java:960)
   at java.util.HashMap.putAllForCreate(HashMap.java:554)
   at java.util.HashMap.<init>(HashMap.java:298)
   at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:703)
 {noformat}
 In the copy constructor of Configuration, public Configuration(Configuration other), the copy of the updatingResource data structure is not synchronized properly.
 Configuration.get() eventually calls loadProperty(), where updatingResource gets updated. So what happens here is that one thread is copying the Configuration, as shown in the stack trace, while another thread is calling Configuration.get(key); the ConcurrentModificationException occurs because the copy of updatingResource in the constructor is not synchronized.
 We should synchronize the update to updatingResource, and also fix the other small synchronization issues there.
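For readers following along, a small self-contained sketch of the race and the synchronized copy the description asks for (class and field names only mirror the report; this is not the Hadoop source):

{code}
import java.util.HashMap;
import java.util.Map;

class ConfLike {
  private final Map<String, String[]> updatingResource = new HashMap<>();

  ConfLike() { }

  // Copy constructor: without holding the source's lock, a concurrent
  // loadProperty()-style update to other.updatingResource can make this
  // HashMap copy throw ConcurrentModificationException.
  ConfLike(ConfLike other) {
    synchronized (other) {
      updatingResource.putAll(other.updatingResource);
    }
  }

  // get()/loadProperty() eventually update updatingResource, so they take the same lock.
  synchronized void recordUpdate(String key, String[] sources) {
    updatingResource.put(key, sources);
  }
}
{code}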



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11274) ConcurrentModificationException in Configuration Copy Constructor

2014-11-08 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14203682#comment-14203682
 ] 

Arun C Murthy commented on HADOOP-11274:


I'm concerned this is too late to make this change to 2.6; particularly given 
that this has been around for a long time. Moving it to 2.7.

 ConcurrentModificationException in Configuration Copy Constructor
 -

 Key: HADOOP-11274
 URL: https://issues.apache.org/jira/browse/HADOOP-11274
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Reporter: Junping Du
Assignee: Vinod Kumar Vavilapalli
Priority: Blocker
 Attachments: HADOOP-11274-v2.patch, HADOOP-11274-v3.patch, 
 HADOOP-11274-v4.patch, HADOOP-11274.003.patch, HADOOP-11274.patch


 The exception below happens when doing some configuration updates in parallel:
 {noformat}
 java.util.ConcurrentModificationException
   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:922)
   at java.util.HashMap$EntryIterator.next(HashMap.java:962)
   at java.util.HashMap$EntryIterator.next(HashMap.java:960)
   at java.util.HashMap.putAllForCreate(HashMap.java:554)
   at java.util.HashMap.<init>(HashMap.java:298)
   at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:703)
 {noformat}
 In the copy constructor of Configuration, public Configuration(Configuration other), the copy of the updatingResource data structure is not synchronized properly.
 Configuration.get() eventually calls loadProperty(), where updatingResource gets updated. So what happens here is that one thread is copying the Configuration, as shown in the stack trace, while another thread is calling Configuration.get(key); the ConcurrentModificationException occurs because the copy of updatingResource in the constructor is not synchronized.
 We should synchronize the update to updatingResource, and also fix the other small synchronization issues there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (HADOOP-11286) Map/Reduce dangerously adds Guava @Beta class to CryptoUtils

2014-11-08 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy moved MAPREDUCE-6083 to HADOOP-11286:
---

 Target Version/s: 2.6.0  (was: 3.0.0, 2.6.0, 2.7.0)
Affects Version/s: (was: 2.6.0)
   2.6.0
  Key: HADOOP-11286  (was: MAPREDUCE-6083)
  Project: Hadoop Common  (was: Hadoop Map/Reduce)

 Map/Reduce dangerously adds Guava @Beta class to CryptoUtils
 

 Key: HADOOP-11286
 URL: https://issues.apache.org/jira/browse/HADOOP-11286
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Christopher Tubbs
Priority: Blocker
  Labels: beta, deprecated, guava
 Attachments: 
 0001-MAPREDUCE-6083-Avoid-client-use-of-deprecated-LimitI.patch, 
 0001-MAPREDUCE-6083-and-HDFS-7040-Avoid-Guava-s-LimitInpu.patch


 See HDFS-7040 for more background/details.
 In recent 2.6.0-SNAPSHOTs, the use of LimitInputStream was added to CryptoUtils. This is part of the API components of Hadoop, and it severely impacts users who rely on newer versions of Guava, where the @Beta and @Deprecated class LimitInputStream has been removed (in version 15 and later), beyond the impact already experienced in 2.4.0 as identified in HDFS-7040.
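One way to avoid leaking the Guava class through a public code path is to carry a tiny stream wrapper of our own; a hedged sketch follows (an illustration, not the class Hadoop ultimately shipped):

{code}
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Reads at most 'limit' bytes from the wrapped stream, then reports EOF.
class LimitedInputStream extends FilterInputStream {
  private long remaining;

  LimitedInputStream(InputStream in, long limit) {
    super(in);
    this.remaining = limit;
  }

  @Override
  public int read() throws IOException {
    if (remaining <= 0) {
      return -1;                      // limit reached: report end of stream
    }
    int b = super.read();
    if (b != -1) {
      remaining--;
    }
    return b;
  }

  @Override
  public int read(byte[] buf, int off, int len) throws IOException {
    if (remaining <= 0) {
      return -1;
    }
    int n = super.read(buf, off, (int) Math.min(len, remaining));
    if (n != -1) {
      remaining -= n;
    }
    return n;
  }
}
{code}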



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11274) ConcurrentModificationException in Configuration Copy Constructor

2014-11-05 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-11274:
---
Priority: Blocker  (was: Critical)

 ConcurrentModificationException in Configuration Copy Constructor
 -

 Key: HADOOP-11274
 URL: https://issues.apache.org/jira/browse/HADOOP-11274
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Reporter: Junping Du
Assignee: Junping Du
Priority: Blocker
 Attachments: HADOOP-11274.patch


 Exception as below happens in doing some configuration update in parallel:
 {noformat}
 java.util.ConcurrentModificationException
   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:922)
   at java.util.HashMap$EntryIterator.next(HashMap.java:962)
   at java.util.HashMap$EntryIterator.next(HashMap.java:960)
   at java.util.HashMap.putAllForCreate(HashMap.java:554)
   at java.util.HashMap.init(HashMap.java:298)
   at org.apache.hadoop.conf.Configuration.init(Configuration.java:703)
 {noformat}
 In a constructor of Configuration - public Configuration(Configuration 
 other), the copy of updatingResource data structure in copy constructor is 
 not synchronized properly. 
 Configuration.get() eventually calls loadProperty() where updatingResource 
 gets updated. So, whats happening here is one thread is trying to do copy of 
 Configuration as demonstrated in stack trace and other thread is doing 
 Configuration.get(key) and than ConcurrentModificationException occurs 
 because copying of updatingResource is not synchronized in constructor. 
 We should make the update to updatingResource get synchronized, and also fix 
 other tiny synchronized issues there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10681) Remove synchronized blocks from SnappyCodec and ZlibCodec buffering inner loop

2014-10-05 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-10681:
---
   Resolution: Fixed
Fix Version/s: 2.6.0
   Status: Resolved  (was: Patch Available)

The test failures are unrelated; verified.

I just committed this, sorry it took me so long. Thanks [~gopalv]!

 Remove synchronized blocks from SnappyCodec and ZlibCodec buffering inner loop
 --

 Key: HADOOP-10681
 URL: https://issues.apache.org/jira/browse/HADOOP-10681
 Project: Hadoop Common
  Issue Type: Bug
  Components: performance
Affects Versions: 2.2.0, 2.4.0, 2.5.0
Reporter: Gopal V
Assignee: Gopal V
  Labels: perfomance
 Fix For: 2.6.0

 Attachments: HADOOP-10681.1.patch, HADOOP-10681.2.patch, 
 HADOOP-10681.3.patch, HADOOP-10681.4.patch, compress-cmpxchg-small.png, 
 perf-top-spill-merge.png, snappy-perf-unsync.png


 The current implementation of SnappyCompressor spends more time within the 
 java loop of copying from the user buffer into the direct buffer allocated to 
 the compressor impl, than the time it takes to compress the buffers.
 !perf-top-spill-merge.png!
 The bottleneck was found to be java monitor code inside SnappyCompressor.
 The methods are neatly inlined by the JIT into the parent caller 
 (BlockCompressorStream::write), which unfortunately does not flatten out the 
 synchronized blocks.
 !compress-cmpxchg-small.png!
 The loop does a write of small byte[] buffers (each IFile key+value). 
 I counted approximately 6 monitor enter/exit blocks per k-v pair written.
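A hedged sketch of the pattern being described, assuming the buffering path is only ever driven by a single writer thread per stream (names are illustrative, not the SnappyCompressor code):

{code}
class BufferingCompressorSketch {
  private final byte[] buffer = new byte[64 * 1024];
  private int position;

  // Before: every small key/value write pays a monitor enter/exit, even
  // though the caller (the stream's write path) is single-threaded.
  synchronized void setInputSynchronized(byte[] b, int off, int len) {
    System.arraycopy(b, off, buffer, position, len);
    position += len;
  }

  // After: the per-record copy loop is monitor-free; only the infrequent
  // hand-off to the native compressor would need any coordination.
  void setInput(byte[] b, int off, int len) {
    System.arraycopy(b, off, buffer, position, len);
    position += len;
  }
}
{code}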



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11139) Allow user to choose JVM for container execution

2014-09-26 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14150054#comment-14150054
 ] 

Arun C Murthy commented on HADOOP-11139:


I commented on YARN-2481, repeated here:

YARN already allows the {{JAVA_HOME}} to be overridable... take a look at 
{{ApplicationConstants.Environment.JAVA_HOME}} and 
{{YarnConfiguration.DEFAULT_NM_ENV_WHITELIST}} for the code-path.
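A hedged sketch of that override path, assuming the standard YARN client API; the JVM path below is a made-up example:

{code}
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.util.Records;

public class JavaHomeOverrideExample {
  // Point this application's containers at a different JVM by setting
  // JAVA_HOME in the container launch environment.
  public static ContainerLaunchContext withCustomJavaHome() {
    ContainerLaunchContext ctx = Records.newRecord(ContainerLaunchContext.class);
    Map<String, String> env = new HashMap<>();
    env.put("JAVA_HOME", "/usr/lib/jvm/java-8-openjdk");  // hypothetical path
    ctx.setEnvironment(env);
    return ctx;
  }
}
{code}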

 Allow user to choose JVM for container execution
 

 Key: HADOOP-11139
 URL: https://issues.apache.org/jira/browse/HADOOP-11139
 Project: Hadoop Common
  Issue Type: Task
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam

 Hadoop currently supports one JVM, defined through JAVA_HOME.
 Since multiple JVMs (Java 6, 7, 8, 9) are in active use, it would be helpful to have a user configuration for choosing a custom but supported JVM for a job.
 In other words, a user would be able to choose the expected JVM only for her container execution, while the Hadoop services may be running on a different JVM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11144) Update website to reflect that we use git, not svn

2014-09-26 Thread Arun C Murthy (JIRA)
Arun C Murthy created HADOOP-11144:
--

 Summary: Update website to reflect that we use git, not svn
 Key: HADOOP-11144
 URL: https://issues.apache.org/jira/browse/HADOOP-11144
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Arun C Murthy
Assignee: Arun C Murthy


We need to update http://hadoop.apache.org/version_control.html to reflect that 
we use git, not svn.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11090) [Umbrella] Issues with Java 8 in Hadoop

2014-09-25 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14147716#comment-14147716
 ] 

Arun C Murthy commented on HADOOP-11090:


I'd like to try to get this into 2.7 if possible; thoughts?

Thanks for driving this [~kamrul]!

 [Umbrella] Issues with Java 8 in Hadoop
 ---

 Key: HADOOP-11090
 URL: https://issues.apache.org/jira/browse/HADOOP-11090
 Project: Hadoop Common
  Issue Type: Task
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam

 Java 8 is coming quickly to various clusters. Making sure Hadoop seamlessly works with Java 8 is important for the Apache community.
 This JIRA is to track the issues and experiences encountered during Java 8 migration. If you find a potential bug, please create a separate JIRA, either as a sub-task or linked to this JIRA.
 If you find a useful Hadoop or JVM configuration tuning, you can create a JIRA as well, or add a comment here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-10963) Move compile-time dependency to JDK7

2014-08-12 Thread Arun C Murthy (JIRA)
Arun C Murthy created HADOOP-10963:
--

 Summary: Move compile-time dependency to JDK7
 Key: HADOOP-10963
 URL: https://issues.apache.org/jira/browse/HADOOP-10963
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Arun C Murthy
 Fix For: 2.7.0


As discussed on the *-d...@hadoop.apache.org mailing list, this jira tracks 
moving to JDK7 and dropping support for JDK6.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10128) Please delete old releases from mirroring system

2014-06-19 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14038447#comment-14038447
 ] 

Arun C Murthy commented on HADOOP-10128:


Btw, [~s...@apache.org], I can't remove hadoop-2.0.6-alpha; permission issues. 
Can you please help? Tx.

 Please delete old releases from mirroring system
 

 Key: HADOOP-10128
 URL: https://issues.apache.org/jira/browse/HADOOP-10128
 Project: Hadoop Common
  Issue Type: Bug
 Environment: http://www.apache.org/dist/hadoop/common/
 http://www.apache.org/dist/hadoop/core/
Reporter: Sebb

 To reduce the load on the ASF mirrors, projects are required to delete old 
 releases.
 Please can you remove all non-current releases?
 i.e. anything except
 0.23.9
 1.2.1
 2.2.0
 Thanks.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10128) Please delete old releases from mirroring system

2014-06-19 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14038445#comment-14038445
 ] 

Arun C Murthy commented on HADOOP-10128:


[~s...@apache.org] - Thanks for the reminder. I've removed stale releases. 
Please take a look. Thanks.

 Please delete old releases from mirroring system
 

 Key: HADOOP-10128
 URL: https://issues.apache.org/jira/browse/HADOOP-10128
 Project: Hadoop Common
  Issue Type: Bug
 Environment: http://www.apache.org/dist/hadoop/common/
 http://www.apache.org/dist/hadoop/core/
Reporter: Sebb

 To reduce the load on the ASF mirrors, projects are required to delete old 
 releases.
 Please can you remove all non-current releases?
 i.e. anything except
 0.23.9
 1.2.1
 2.2.0
 Thanks.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10373) move s3 logic to own replaceable jar, hadoop-aws

2014-04-10 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-10373:
---

Fix Version/s: (was: 2.4.0)
   2.5.0

 move s3 logic to own replaceable jar, hadoop-aws
 

 Key: HADOOP-10373
 URL: https://issues.apache.org/jira/browse/HADOOP-10373
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 2.3.0
Reporter: Steve Loughran
 Fix For: 2.5.0


 After HADOOP-9565 adds a marker interface for blobstores, move s3 & s3n into 
 their own hadoop-amazon library
 # keeps the S3 dependencies out of the standard hadoop client dependency 
 graph.
 # lets people switch this for alternative implementations.
 feature #2 would let you swap over to another s3n impl (e.g. amazon's) 
 without rebuilding everything



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10041) UserGroupInformation#spawnAutoRenewalThreadForUserCreds tries to renew even if the kerberos ticket cache is non-renewable

2014-04-10 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-10041:
---

Fix Version/s: (was: 2.4.0)
   2.5.0

 UserGroupInformation#spawnAutoRenewalThreadForUserCreds tries to renew even 
 if the kerberos ticket cache is non-renewable
 -

 Key: HADOOP-10041
 URL: https://issues.apache.org/jira/browse/HADOOP-10041
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.1-alpha
Reporter: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.5.0


 UserGroupInformation#spawnAutoRenewalThreadForUserCreds tries to renew user 
 credentials.  However, it does this even if the kerberos ticket cache in 
 question is non-renewable.
 This leads to an annoying error message being printed out all the time.
 {code}
 cmccabe@keter:/h klist
 Ticket cache: FILE:/tmp/krb5cc_1014
 Default principal: hdfs/ke...@cloudera.com
 Valid starting     Expires            Service principal
 07/18/12 15:24:15  07/19/12 15:24:13  krbtgt/cloudera@cloudera.com
 {code}
 {code}
 cmccabe@keter:/h ./bin/hadoop fs -ls /
 15:21:39,882  WARN UserGroupInformation:739 - Exception encountered while 
 running the renewal command. Aborting renew thread. 
 org.apache.hadoop.util.Shell$ExitCodeException: kinit: KDC can't fulfill 
 requested option while renewing credentials
 Found 3 items
 -rw-r--r--   3 cmccabe users   0 2012-07-09 17:15 /b
 -rw-r--r--   3 hdfssupergroup  0 2012-07-09 17:17 /c
 drwxrwxrwx   - cmccabe audio   0 2012-07-19 11:25 /tmp
 {code}
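A minimal sketch of the guard the description implies (illustrative only, not the UGI patch itself):

{code}
import javax.security.auth.kerberos.KerberosTicket;

class RenewalGuard {
  // Only spawn the auto-renewal thread when the TGT can actually be renewed;
  // for a non-renewable ticket cache, repeatedly running "kinit -R" just
  // produces the warning shown above.
  static boolean shouldSpawnRenewalThread(KerberosTicket tgt) {
    return tgt != null && tgt.isRenewable();
  }
}
{code}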



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9849) License information is missing

2014-04-10 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-9849:
--

Fix Version/s: (was: 2.4.0)
   2.5.0

 License information is missing
 --

 Key: HADOOP-9849
 URL: https://issues.apache.org/jira/browse/HADOOP-9849
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Timothy St. Clair
  Labels: newbie
 Fix For: 2.5.0


 The following files are licensed under the BSD license but the BSD
 license is not part of the distribution:
 hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4.c
 hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.c
 I believe this file is BSD as well:
 hadoop-hdfs-project/hadoop-hdfs/src/main/native/util/tree.h



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9613) Updated jersey pom dependencies

2014-04-10 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-9613:
--

Fix Version/s: (was: 2.4.0)
   2.5.0

 Updated jersey pom dependencies
 ---

 Key: HADOOP-9613
 URL: https://issues.apache.org/jira/browse/HADOOP-9613
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Timothy St. Clair
  Labels: maven
 Fix For: 3.0.0, 2.5.0

 Attachments: HADOOP-2.2.0-9613.patch, HADOOP-9613.patch


 Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
 system dependencies on Fedora 18.  
 The existing version is 1.8 which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9646) Inconsistent exception specifications in FileUtils#chmod

2014-04-10 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-9646:
--

Fix Version/s: (was: 2.4.0)
   2.5.0

 Inconsistent exception specifications in FileUtils#chmod
 

 Key: HADOOP-9646
 URL: https://issues.apache.org/jira/browse/HADOOP-9646
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.5.0

 Attachments: HADOOP-9646.001.patch, HADOOP-9646.002.patch


 There are two FileUtils#chmod methods:
 {code}
 public static int chmod(String filename, String perm)
     throws IOException, InterruptedException;
 public static int chmod(String filename, String perm, boolean recursive)
     throws IOException;
 {code}
 The first one just calls the second one with {{recursive = false}}, but 
 despite that it is declared as throwing {{InterruptedException}}, something 
 the second one doesn't declare.
 The new Java7 chmod API, which we will transition to once JDK6 support is 
 dropped, does *not* throw {{InterruptedException}}
 See 
 [http://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html#setOwner(java.nio.file.Path,
  java.nio.file.attribute.UserPrincipal)]
 So we should make these consistent by removing the {{InterruptedException}}
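For reference, a hedged sketch of the Java 7 NIO call mentioned above; note that it declares only IOException, which is why dropping InterruptedException keeps the two overloads consistent:

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

class Java7ChmodSketch {
  // perm is a POSIX permission string such as "rw-r--r--".
  static void chmod(String filename, String perm) throws IOException {
    Set<PosixFilePermission> perms = PosixFilePermissions.fromString(perm);
    Files.setPosixFilePermissions(Paths.get(filename), perms);
  }
}
{code}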



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9464) In test*Conf.xml regexp replication factor set to 1

2014-04-10 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-9464:
--

Fix Version/s: (was: 2.4.0)
   2.5.0

 In test*Conf.xml regexp replication factor set to 1
 ---

 Key: HADOOP-9464
 URL: https://issues.apache.org/jira/browse/HADOOP-9464
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.4-alpha
Reporter: Anatoli Fomenko
Priority: Critical
 Fix For: 2.5.0

 Attachments: HADOOP-9464.patch


 In some Hadoop smoke tests (testHDFSConf.xml), in expected output for 
 RegexpComparator, a replication factor is hard coded to 1,
  
 {noformat}
 <test> <!-- TESTED -->
   <description>ls: file using absolute path</description>
   <test-commands>
     <command>-fs NAMENODE -touchz /file1</command>
     <command>-fs NAMENODE -ls /file1</command>
   </test-commands>
   <cleanup-commands>
     <command>-fs NAMENODE -rm /file1</command>
   </cleanup-commands>
   <comparators>
     <comparator>
       <type>TokenComparator</type>
       <expected-output>Found 1 items</expected-output>
     </comparator>
     <comparator>
       <type>RegexpComparator</type>
       <expected-output>^-rw-r--r--( )*1( )*[a-z]*( )*supergroup( )*0( )*[0-9]{4,}-[0-9]{2,}-[0-9]{2,} [0-9]{2,}:[0-9]{2,}( )*/file1</expected-output>
     </comparator>
   </comparators>
 </test>
 {noformat}
 such as the first 1 in 
 {noformat}
 <expected-output>^-rw-r--r--( )*1( )*[a-z]*( )*supergroup( )*0( )*[0-9]{4,}-[0-9]{2,}-[0-9]{2,} [0-9]{2,}:[0-9]{2,}( )*/file1</expected-output>
 {noformat}.
 We found in Bigtop testing on a standalone cluster that such tests fail with a default replication factor.
 Please update the regexps in the test*Conf.xml files to allow flexibility in the replication factor, so that these tests can be executed against a variety of clusters, or inside Bigtop (see the example below).
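A small illustration of the kind of relaxation being requested: accept any replication factor instead of the hard-coded 1 (the pattern below is an example, not the exact string to commit to test*Conf.xml):

{code}
import java.util.regex.Pattern;

public class ReplicationRegexExample {
  public static void main(String[] args) {
    Pattern flexible = Pattern.compile(
        "^-rw-r--r--( )*[0-9]+( )*[a-z]*( )*supergroup( )*0( )*"
        + "[0-9]{4,}-[0-9]{2,}-[0-9]{2,} [0-9]{2,}:[0-9]{2,}( )*/file1");
    // Matches a listing regardless of the cluster's default replication factor.
    System.out.println(flexible.matcher(
        "-rw-r--r--   3 cmccabe supergroup 0 2014-04-10 12:00 /file1").find());
  }
}
{code}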



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9679) KerberosName.rules are not initialized during adding kerberos support to a web servlet using hadoop authentications

2014-04-10 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-9679:
--

Fix Version/s: (was: 2.4.0)
   2.5.0

 KerberosName.rules are not initialized during adding kerberos support to a 
 web servlet using hadoop authentications
 ---

 Key: HADOOP-9679
 URL: https://issues.apache.org/jira/browse/HADOOP-9679
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.1.1, 2.0.4-alpha
Reporter: fang fang chen
 Fix For: 2.5.0

 Attachments: HADOOP-9679.patch


 I am using hadoop-1.1.1 to add kerberos authentication to a web service, but found that the rules are not initialized, which makes the following error happen:
 java.lang.NullPointerException
 at 
 org.apache.hadoop.security.KerberosName.getShortName(KerberosName.java:384)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:328)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:302)
 at 
 java.security.AccessController.doPrivileged(AccessController.java:310)
 at javax.security.auth.Subject.doAs(Subject.java:573)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:302)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:340)
 It seems this issue is still present in the hadoop-2.0.4-alpha branch. 
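A hedged sketch of one workaround, assuming the static KerberosName.setRules(String) hook exposed by the hadoop-auth module (the class location varies between releases, so treat this as an assumption):

{code}
import org.apache.hadoop.security.authentication.util.KerberosName;

public class KerberosRulesInit {
  // Initialize the auth_to_local rules before any getShortName() call so the
  // servlet's authentication handler does not hit the NPE shown above.
  public static void initNameRules(String rulesFromConfig) {
    KerberosName.setRules(rulesFromConfig != null ? rulesFromConfig : "DEFAULT");
  }
}
{code}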



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9477) posixGroups support for LDAP groups mapping service

2014-04-10 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-9477:
--

Fix Version/s: (was: 2.4.0)
   2.5.0

 posixGroups support for LDAP groups mapping service
 ---

 Key: HADOOP-9477
 URL: https://issues.apache.org/jira/browse/HADOOP-9477
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.4-alpha
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: 2.5.0

 Attachments: HADOOP-9477.patch, HADOOP-9477.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 It would be nice to support posixGroups for the LdapGroupsMapping service. Below is from the current description of the provider:
 hadoop.security.group.mapping.ldap.search.filter.group:
 An additional filter to use when searching for LDAP groups. This should be
 changed when resolving groups against a non-Active Directory installation.
 posixGroups are currently not a supported group class.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-8500) Javadoc jars contain entire target directory

2014-04-10 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8500:
--

Fix Version/s: (was: 2.4.0)
   2.5.0

 Javadoc jars contain entire target directory
 

 Key: HADOOP-8500
 URL: https://issues.apache.org/jira/browse/HADOOP-8500
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0-alpha
 Environment: N/A
Reporter: EJ Ciramella
Priority: Minor
 Fix For: 2.5.0

 Attachments: HADOOP-8500.patch, site-redo.tar

   Original Estimate: 24h
  Remaining Estimate: 24h

 The javadoc jars contain the contents of the target directory, which includes classes and all sorts of binary files that it shouldn't.
 Sometimes the resulting javadoc jar is 10X bigger than it should be.
 The fix is to reconfigure maven to use api as its destDir for javadoc generation.
 I have a patch/diff incoming.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9367) Consider combining the implementations of DiskChecker.checkDir for all supported platforms

2014-04-10 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-9367:
--

Fix Version/s: (was: 2.4.0)
   2.5.0

 Consider combining the implementations of DiskChecker.checkDir for all 
 supported platforms
 --

 Key: HADOOP-9367
 URL: https://issues.apache.org/jira/browse/HADOOP-9367
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.3-alpha
Reporter: Arpit Agarwal
 Fix For: 3.0.0, 2.5.0


 DiskChecker.checkDir uses different mechanisms to verify disk read/write/execute access. Consider combining these into a single implementation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-8943) Support multiple group mapping providers

2014-04-10 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8943:
--

Fix Version/s: (was: 2.4.0)
   2.5.0

 Support multiple group mapping providers
 

 Key: HADOOP-8943
 URL: https://issues.apache.org/jira/browse/HADOOP-8943
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: 2.5.0

 Attachments: HADOOP-8943.patch, HADOOP-8943.patch, HADOOP-8943.patch

   Original Estimate: 504h
  Remaining Estimate: 504h

   Discussed with Natty about LdapGroupMapping; we need to improve it so that:
 1. It's possible to do different group mapping for different users/principals. For example, AD users should go to the LdapGroupMapping service for groups, while service principals such as hdfs and mapred can still use the default ShellBasedUnixGroupsMapping;
 2. Multiple ADs can be supported for LdapGroupMapping;
 3. It's possible to configure which kinds of users/principals (optionally by domain/realm) should use which group mapping service/mechanism;
 4. It's possible to configure and combine multiple existing mapping providers without writing code to implement a new one (see the sketch below).
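A hedged sketch of point 4, using illustrative interfaces rather than Hadoop's actual GroupMappingServiceProvider API:

{code}
import java.io.IOException;
import java.util.List;
import java.util.Map;

interface GroupResolver {
  List<String> getGroups(String user) throws IOException;
}

// Routes selected principals (e.g. hdfs, mapred) to one resolver and
// everyone else to a fallback such as an LDAP-backed resolver.
class CompositeGroupResolver implements GroupResolver {
  private final Map<String, GroupResolver> byUser;
  private final GroupResolver fallback;

  CompositeGroupResolver(Map<String, GroupResolver> byUser, GroupResolver fallback) {
    this.byUser = byUser;
    this.fallback = fallback;
  }

  @Override
  public List<String> getGroups(String user) throws IOException {
    return byUser.getOrDefault(user, fallback).getGroups(user);
  }
}
{code}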



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-8887) Use a Maven plugin to build the native code using CMake

2014-04-10 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8887:
--

Fix Version/s: (was: 2.4.0)
   2.5.0

 Use a Maven plugin to build the native code using CMake
 ---

 Key: HADOOP-8887
 URL: https://issues.apache.org/jira/browse/HADOOP-8887
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.5.0

 Attachments: HADOOP-8887.001.patch, HADOOP-8887.002.patch, 
 HADOOP-8887.003.patch, HADOOP-8887.004.patch, HADOOP-8887.005.patch, 
 HADOOP-8887.006.patch, HADOOP-8887.008.patch, HADOOP-8887.011.patch


 Currently, we build the native code using ant-build invocations.  Although 
 this works, it has some limitations:
 * compiler warning messages are hidden, which can cause people to check in 
 code with warnings unintentionally
 * there is no framework for running native unit tests; instead, we use ad-hoc 
 constructs involving shell scripts
 * the antrun code is very platform specific
 * there is no way to run a specific native unit test
 * it's more or less impossible for scripts like test-patch.sh to separate a 
 native test failing from the build itself failing (no files are created) or 
 to enumerate which native tests failed.
 Using a native Maven plugin would overcome these limitations.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-8345) HttpServer adds SPNEGO filter mapping but does not register the SPNEGO filter

2014-04-10 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8345:
--

Fix Version/s: (was: 2.4.0)
   2.5.0

 HttpServer adds SPNEGO filter mapping but does not register the SPNEGO filter
 -

 Key: HADOOP-8345
 URL: https://issues.apache.org/jira/browse/HADOOP-8345
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.5.0


 It seems the mapping was added to fulfill HDFS requirements, where the SPNEGO filter is registered.
 The registration of the SPNEGO filter should be done at the common level instead, so it is available for all components using HttpServer when security is ON.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-8738) junit JAR is showing up in the distro

2014-04-10 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8738:
--

Fix Version/s: (was: 2.4.0)
   2.5.0

 junit JAR is showing up in the distro
 -

 Key: HADOOP-8738
 URL: https://issues.apache.org/jira/browse/HADOOP-8738
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.5.0

 Attachments: HADOOP-8738.patch


 It seems that with the move of the YARN module to the trunk/ level, the test scope on junit got lost. This makes the junit JAR show up in the TAR.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-8643) hadoop-client should exclude hadoop-annotations from hadoop-common dependency

2014-04-10 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8643:
--

Fix Version/s: (was: 2.4.0)
   2.5.0

 hadoop-client should exclude hadoop-annotations from hadoop-common dependency
 -

 Key: HADOOP-8643
 URL: https://issues.apache.org/jira/browse/HADOOP-8643
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Priority: Minor
 Fix For: 2.5.0

 Attachments: hadoop-8643.txt


 When reviewing HADOOP-8370 I missed that changing the scope of hadoop-annotations in hadoop-common to compile would make hadoop-annotations bubble up into hadoop-client. Because of this we need to exclude it explicitly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-8631) The description of net.topology.table.file.name in core-default.xml is misleading

2014-04-10 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8631:
--

Fix Version/s: (was: 2.4.0)
   2.5.0

 The description of net.topology.table.file.name in core-default.xml is 
 misleading
 -

 Key: HADOOP-8631
 URL: https://issues.apache.org/jira/browse/HADOOP-8631
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha, 2.0.1-alpha, 2.0.2-alpha
Reporter: Han Xiao
Priority: Minor
 Fix For: 2.5.0

 Attachments: core-default.xml.patch


 The net.topology.table.file.name property is used when the net.topology.node.switch.mapping.impl property is set to org.apache.hadoop.net.TableMapping.
 However, the description in core-default.xml asks for the net.topology.script.file.name property to be set for org.apache.hadoop.net.TableMapping.
 This could mislead users into a wrong configuration.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-8884) DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError

2014-04-10 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8884:
--

Fix Version/s: (was: 2.4.0)
   2.5.0

 DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load 
 native-hadoop with error: java.lang.UnsatisfiedLinkError
 -

 Key: HADOOP-8884
 URL: https://issues.apache.org/jira/browse/HADOOP-8884
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.1-alpha
Reporter: Anthony Rojas
Assignee: Anthony Rojas
 Fix For: 2.5.0

 Attachments: HADOOP-8884-v2.patch, HADOOP-8884.patch, 
 HADOOP-8884.patch


 Recommending that the following debug message be changed and promoted to a warning instead:
 12/07/02 18:41:44 DEBUG util.NativeCodeLoader: Failed to load native-hadoop 
 with error: java.lang.UnsatisfiedLinkError: 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0: /lib64/libc.so.6: version 
 `GLIBC_2.6' not found (required by 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-8805) Move protocol buffer implementation of GetUserMappingProtocol from HDFS to Common

2014-04-10 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8805:
--

Fix Version/s: (was: 2.4.0)
   2.5.0

 Move protocol buffer implementation of GetUserMappingProtocol from HDFS to 
 Common
 -

 Key: HADOOP-8805
 URL: https://issues.apache.org/jira/browse/HADOOP-8805
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bo Wang
Assignee: Bo Wang
 Fix For: 2.5.0

 Attachments: HADOOP-8805-v2.patch, HADOOP-8805-v3.patch, 
 HADOOP-8805.patch


 org.apache.hadoop.tools.GetUserMappingProtocol is used in both HDFS and YARN. 
 We should move the protocol buffer implementation from HDFS to Common so that 
 it can also be used by YARN.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9085) start namenode failure,bacause pid of namenode pid file is other process pid or thread id before start namenode

2014-04-10 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-9085:
--

Fix Version/s: (was: 2.4.0)
   2.5.0

 start namenode failure,bacause pid of namenode pid file is other process pid 
 or thread id before start namenode
 ---

 Key: HADOOP-9085
 URL: https://issues.apache.org/jira/browse/HADOOP-9085
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 2.0.1-alpha, 2.0.3-alpha
 Environment: NA
Reporter: liaowenrui
 Fix For: 2.0.1-alpha, 2.0.2-alpha, 2.5.0


 If the pid recorded in the namenode pid file belongs to another process or thread before the namenode is started, starting the namenode will fail. The pid in the namenode pid file is checked with the kill -0 command in the hadoop-daemon.sh script before the namenode is started; when that pid belongs to another process or thread, the kill -0 check returns success, which makes the script assume the namenode is running when, in reality, it is not.
 2338 is dead namenode pid 
 2305 is datanode pid
 cqn2:/tmp # kill -0 2338
 cqn2:/tmp # ps -wweLo pid,ppid,tid | grep 2338
  2305 1  2338



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-8937) ClientProtocol should support a way to get DataNodeInfo for a particular data node.

2014-04-10 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8937:
--

Fix Version/s: (was: 2.4.0)
   2.5.0

 ClientProtocol should support a way to get DataNodeInfo for a particular data 
 node.
 ---

 Key: HADOOP-8937
 URL: https://issues.apache.org/jira/browse/HADOOP-8937
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 1.0.3
Reporter: Sameer Vaishampayan
Priority: Minor
 Fix For: 2.5.0


 The HBase project needs a way to find out whether a DataNode is running locally on a given host. The current way is too expensive: getDatanodeReport needs to be called, which returns information for all data nodes in the cluster.
 https://issues.apache.org/jira/browse/HBASE-6398



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-8369) Failing tests in branch-2

2014-04-10 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8369:
--

Fix Version/s: (was: 2.4.0)
   2.5.0

 Failing tests in branch-2
 -

 Key: HADOOP-8369
 URL: https://issues.apache.org/jira/browse/HADOOP-8369
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Arun C Murthy
 Fix For: 2.5.0


 Running org.apache.hadoop.io.compress.TestCodec
 Tests run: 20, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 27.789 sec <<< FAILURE!
 --
 Running org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
 Tests run: 98, Failures: 0, Errors: 98, Skipped: 0, Time elapsed: 1.633 sec <<< FAILURE!
 --
 Running org.apache.hadoop.fs.viewfs.TestViewFsTrash
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.658 sec <<< FAILURE!
 
 TestCodec failed since I didn't pass -Pnative; the test could be improved to ensure the snappy tests are skipped if native hadoop isn't present.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10382) Add Apache Tez to the Hadoop homepage as a related project

2014-04-10 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-10382:
---

Attachment: HADOOP-10382.patch

 Add Apache Tez to the Hadoop homepage as a related project
 --

 Key: HADOOP-10382
 URL: https://issues.apache.org/jira/browse/HADOOP-10382
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Arun C Murthy
Assignee: Arun C Murthy
 Attachments: HADOOP-10382.patch


 Add Apache Tez to the Hadoop homepage as a related project



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10382) Add Apache Tez to the Hadoop homepage as a related project

2014-04-10 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-10382:
---

Attachment: HADOOP-10382.patch

Minor edits. Thanks for the review [~sanjay.radia], I'll commit it presently.

 Add Apache Tez to the Hadoop homepage as a related project
 --

 Key: HADOOP-10382
 URL: https://issues.apache.org/jira/browse/HADOOP-10382
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Arun C Murthy
Assignee: Arun C Murthy
 Attachments: HADOOP-10382.patch, HADOOP-10382.patch


 Add Apache Tez to the Hadoop homepage as a related project



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10382) Add Apache Tez to the Hadoop homepage as a related project

2014-04-10 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy resolved HADOOP-10382.


Resolution: Fixed

I just committed this.

 Add Apache Tez to the Hadoop homepage as a related project
 --

 Key: HADOOP-10382
 URL: https://issues.apache.org/jira/browse/HADOOP-10382
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Arun C Murthy
Assignee: Arun C Murthy
 Attachments: HADOOP-10382.patch, HADOOP-10382.patch


 Add Apache Tez to the Hadoop homepage as a related project



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10382) Add Apache Tez to the Hadoop homepage as a related project

2014-04-10 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-10382:
---

Component/s: documentation

 Add Apache Tez to the Hadoop homepage as a related project
 --

 Key: HADOOP-10382
 URL: https://issues.apache.org/jira/browse/HADOOP-10382
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Arun C Murthy
Assignee: Arun C Murthy
 Attachments: HADOOP-10382.patch, HADOOP-10382.patch


 Add Apache Tez to the Hadoop homepage as a related project



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10388) Pure native hadoop client

2014-03-13 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13934634#comment-13934634
 ] 

Arun C Murthy commented on HADOOP-10388:


+1 for the effort, long overdue.

Agree that starting with RPC client is the right first step.

I also have a barebones client for golang: 
https://github.com/hortonworks/gohadoop which I'm happy to throw in if there is 
sufficient interest.

 Pure native hadoop client
 -

 Key: HADOOP-10388
 URL: https://issues.apache.org/jira/browse/HADOOP-10388
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Binglin Chang
Assignee: Colin Patrick McCabe

 A pure native hadoop client has the following use cases/advantages:
 1.  writing YARN applications in C++
 2.  direct access to HDFS, without extra proxy overhead, compared to the web/nfs interfaces.
 3.  wrapping the native library to support more languages, e.g. python
 4.  lightweight, with a small footprint compared to the several hundred MB of JDK and hadoop libraries with their various dependencies.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10389) Native RPCv9 client

2014-03-13 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13934636#comment-13934636
 ] 

Arun C Murthy commented on HADOOP-10389:


[~cmccabe] I have a barebones C client already as an offshoot of 
https://github.com/hortonworks/gohadoop; would you be interested in using that 
as a starting point?

 Native RPCv9 client
 ---

 Key: HADOOP-10389
 URL: https://issues.apache.org/jira/browse/HADOOP-10389
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Binglin Chang
Assignee: Colin Patrick McCabe





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10301) AuthenticationFilter should return Forbidden for failed authentication

2014-03-12 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13932004#comment-13932004
 ] 

Arun C Murthy commented on HADOOP-10301:


Is this ready to go? Or, should it be blocking 2.4? Thanks.

 AuthenticationFilter should return Forbidden for failed authentication
 --

 Key: HADOOP-10301
 URL: https://issues.apache.org/jira/browse/HADOOP-10301
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-10301.branch-23.patch, 
 HADOOP-10301.branch-23.patch, HADOOP-10301.patch, HADOOP-10301.patch


 The hadoop-auth AuthenticationFilter returns a 401 Unauthorized without a WWW-Authenticate header. This is illegal per the HTTP RFC and causes an NPE in HttpURLConnection.
 This is half of a fix that affects webhdfs. See HDFS-4564.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10382) Add Apache Tez to the Hadoop homepage as a related project

2014-03-04 Thread Arun C Murthy (JIRA)
Arun C Murthy created HADOOP-10382:
--

 Summary: Add Apache Tez to the Hadoop homepage as a related project
 Key: HADOOP-10382
 URL: https://issues.apache.org/jira/browse/HADOOP-10382
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Arun C Murthy
Assignee: Arun C Murthy


Add Apache Tez to the Hadoop homepage as a related project



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-8943) Support multiple group mapping providers

2014-02-24 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8943:
--

Fix Version/s: (was: 2.3.0)
   2.4.0

 Support multiple group mapping providers
 

 Key: HADOOP-8943
 URL: https://issues.apache.org/jira/browse/HADOOP-8943
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: 2.4.0

 Attachments: HADOOP-8943.patch, HADOOP-8943.patch, HADOOP-8943.patch

   Original Estimate: 504h
  Remaining Estimate: 504h

   Discussed with Natty about LdapGroupMapping; we need to improve it so that:
 1. It's possible to do different group mapping for different users/principals. For example, AD users should go to the LdapGroupMapping service for groups, while service principals such as hdfs and mapred can still use the default ShellBasedUnixGroupsMapping;
 2. Multiple ADs can be supported for LdapGroupMapping;
 3. It's possible to configure which kinds of users/principals (optionally by domain/realm) should use which group mapping service/mechanism;
 4. It's possible to configure and combine multiple existing mapping providers without writing code to implement a new one.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-8631) The description of net.topology.table.file.name in core-default.xml is misleading

2014-02-24 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8631:
--

Fix Version/s: (was: 2.3.0)
   2.4.0

 The description of net.topology.table.file.name in core-default.xml is 
 misleading
 -

 Key: HADOOP-8631
 URL: https://issues.apache.org/jira/browse/HADOOP-8631
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha, 2.0.1-alpha, 2.0.2-alpha
Reporter: Han Xiao
Priority: Minor
 Fix For: 2.4.0

 Attachments: core-default.xml.patch


 The net.topology.table.file.name property is used when the net.topology.node.switch.mapping.impl property is set to org.apache.hadoop.net.TableMapping.
 However, the description in core-default.xml asks for the net.topology.script.file.name property to be set for org.apache.hadoop.net.TableMapping.
 This could mislead users into a wrong configuration.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-8500) Javadoc jars contain entire target directory

2014-02-24 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8500:
--

Fix Version/s: (was: 2.3.0)
   2.4.0

 Javadoc jars contain entire target directory
 

 Key: HADOOP-8500
 URL: https://issues.apache.org/jira/browse/HADOOP-8500
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0-alpha
 Environment: N/A
Reporter: EJ Ciramella
Priority: Minor
 Fix For: 2.4.0

 Attachments: HADOOP-8500.patch, site-redo.tar

   Original Estimate: 24h
  Remaining Estimate: 24h

 The javadoc jars contain the contents of the target directory, which includes classes and all sorts of binary files that it shouldn't.
 Sometimes the resulting javadoc jar is 10X bigger than it should be.
 The fix is to reconfigure maven to use api as its destDir for javadoc generation.
 I have a patch/diff incoming.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-8369) Failing tests in branch-2

2014-02-24 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8369:
--

Fix Version/s: (was: 2.3.0)
   2.4.0

 Failing tests in branch-2
 -

 Key: HADOOP-8369
 URL: https://issues.apache.org/jira/browse/HADOOP-8369
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Arun C Murthy
 Fix For: 2.4.0


 Running org.apache.hadoop.io.compress.TestCodec
 Tests run: 20, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 27.789 sec <<< FAILURE!
 --
 Running org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
 Tests run: 98, Failures: 0, Errors: 98, Skipped: 0, Time elapsed: 1.633 sec <<< FAILURE!
 --
 Running org.apache.hadoop.fs.viewfs.TestViewFsTrash
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.658 sec <<< FAILURE!
 
 TestCodec failed since I didn't pass -Pnative; the test could be improved to ensure the snappy tests are skipped if native hadoop isn't present.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-9613) Updated jersey pom dependencies

2014-02-24 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-9613:
--

Fix Version/s: (was: 2.3.0)
   2.4.0

 Updated jersey pom dependencies
 ---

 Key: HADOOP-9613
 URL: https://issues.apache.org/jira/browse/HADOOP-9613
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Timothy St. Clair
  Labels: maven
 Fix For: 3.0.0, 2.4.0

 Attachments: HADOOP-2.2.0-9613.patch, HADOOP-9613.patch


 Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
 system dependencies on Fedora 18.  
 The existing version is 1.8 which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-9849) License information is missing

2014-02-24 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-9849:
--

Fix Version/s: (was: 2.3.0)
   2.4.0

 License information is missing
 --

 Key: HADOOP-9849
 URL: https://issues.apache.org/jira/browse/HADOOP-9849
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Timothy St. Clair
  Labels: newbie
 Fix For: 2.4.0


 The following files are licensed under the BSD license but the BSD
 license is not part of the distribution:
 hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4.c
 hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.c
 I believe this file is BSD as well:
 hadoop-hdfs-project/hadoop-hdfs/src/main/native/util/tree.h



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-9478) Fix race conditions during the initialization of Configuration related to deprecatedKeyMap

2014-02-24 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-9478:
--

Fix Version/s: (was: 2.3.0)
   2.4.0

 Fix race conditions during the initialization of Configuration related to 
 deprecatedKeyMap
 --

 Key: HADOOP-9478
 URL: https://issues.apache.org/jira/browse/HADOOP-9478
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha
 Environment: OS:
 CentOS release 6.3 (Final)
 JDK:
 java version 1.6.0_27
 Java(TM) SE Runtime Environment (build 1.6.0_27-b07)
 Java HotSpot(TM) 64-Bit Server VM (build 20.2-b06, mixed mode)
 Hadoop:
 hadoop-2.0.0-cdh4.1.3/hadoop-2.0.0-cdh4.2.0
 Security:
 Kerberos
Reporter: Dongyong Wang
Assignee: Colin Patrick McCabe
 Fix For: 2.4.0

 Attachments: HADOOP-9478.001.patch, HADOOP-9478.002.patch, 
 HADOOP-9478.003.patch, HADOOP-9478.004.patch, HADOOP-9478.005.patch, 
 hadoop-9478-1.patch, hadoop-9478-2.patch


 When we launch a client application that uses kerberos security, the FileSystem can't be created because of the exception 'java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.security.SecurityUtil'.
 I checked the exception stack trace; it may be caused by an unsafe get operation on the deprecatedKeyMap used by org.apache.hadoop.conf.Configuration.
 So I wrote a simple test case:
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.hdfs.HdfsConfiguration;
 public class HTest {
   public static void main(String[] args) throws Exception {
     Configuration conf = new Configuration();
     conf.addResource("core-site.xml");
     conf.addResource("hdfs-site.xml");
     FileSystem fileSystem = FileSystem.get(conf);
     System.out.println(fileSystem);
     System.exit(0);
   }
 }
 Then I launched this test case many times; the following exception was thrown:
 Exception in thread "TGT Renewer for XXX" 
 java.lang.ExceptionInInitializerError
  at 
 org.apache.hadoop.security.UserGroupInformation.getTGT(UserGroupInformation.java:719)
  at 
 org.apache.hadoop.security.UserGroupInformation.access$1100(UserGroupInformation.java:77)
  at 
 org.apache.hadoop.security.UserGroupInformation$1.run(UserGroupInformation.java:746)
  at java.lang.Thread.run(Thread.java:662)
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 16
  at java.util.HashMap.getEntry(HashMap.java:345)
  at java.util.HashMap.containsKey(HashMap.java:335)
  at 
 org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1989)
  at 
 org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1867)
  at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1785)
  at org.apache.hadoop.conf.Configuration.get(Configuration.java:712)
  at 
 org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:731)
  at 
 org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1047)
  at org.apache.hadoop.security.SecurityUtil.<clinit>(SecurityUtil.java:76)
  ... 4 more
 Exception in thread main java.io.IOException: Couldn't create proxy 
 provider class 
 org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
  at 
 org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:453)
  at 
 org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:133)
  at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:436)
  at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:403)
  at 
 org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:125)
  at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2262)
  at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:86)
  at 
 org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2296)
  at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2278)
  at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:316)
  at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:162)
  at HTest.main(HTest.java:11)
 Caused by: java.lang.reflect.InvocationTargetException
  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
  at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
  at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
  at 
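 As a minimal illustration of the race described above, the sketch below routes 
 deprecated-key lookups through a thread-safe map. This is only an assumption 
 about the shape of a fix, not the committed patch; the class name and the 
 sample entry are hypothetical.
 {code}
 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;

 // Illustration only, not the committed patch: publish the deprecation table
 // through a ConcurrentHashMap so that a containsKey() call from the
 // "TGT Renewer" thread can never observe a plain HashMap mid-resize.
 public class DeprecatedKeys {
   private static final Map<String, String> DEPRECATED =
       new ConcurrentHashMap<String, String>();

   static {
     // Hypothetical entry, for illustration only.
     DEPRECATED.put("fs.default.name", "fs.defaultFS");
   }

   public static boolean isDeprecated(String key) {
     // Safe under concurrent reads and writes, unlike java.util.HashMap,
     // which can throw ArrayIndexOutOfBoundsException as seen above.
     return DEPRECATED.containsKey(key);
   }

   public static String replacementFor(String key) {
     return DEPRECATED.get(key);
   }
 }
 {code}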
 

[jira] [Updated] (HADOOP-9367) Consider combining the implementations of DiskChecker.checkDir for all supported platforms

2014-02-24 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-9367:
--

Fix Version/s: (was: 2.3.0)
   2.4.0

 Consider combining the implementations of DiskChecker.checkDir for all 
 supported platforms
 --

 Key: HADOOP-9367
 URL: https://issues.apache.org/jira/browse/HADOOP-9367
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.3-alpha
Reporter: Arpit Agarwal
 Fix For: 3.0.0, 2.4.0


 DiskChecker.checkDir uses different mechanisms to verify disk 
 read/write/execute access. Consider combining these into a single 
 implementation.
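 As a sketch of what a combined implementation could look like, the snippet 
 below relies only on java.io.File probes that behave the same on every 
 supported platform. The class and method shown are illustrative assumptions, 
 not the existing DiskChecker code.
 {code}
 import java.io.File;
 import java.io.IOException;

 // Illustrative only: one checkDir path for all supported platforms,
 // replacing the per-platform mechanisms described above.
 public final class UnifiedDiskCheck {
   public static void checkDir(File dir) throws IOException {
     if (!dir.exists() && !dir.mkdirs()) {
       throw new IOException("Cannot create directory: " + dir);
     }
     if (!dir.isDirectory()) {
       throw new IOException("Not a directory: " + dir);
     }
     // canRead/canWrite/canExecute are implemented by the JVM on every
     // platform, so no OS-specific branch is needed here.
     if (!dir.canRead() || !dir.canWrite() || !dir.canExecute()) {
       throw new IOException("Insufficient permissions on directory: " + dir);
     }
   }
 }
 {code}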



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-9646) Inconsistent exception specifications in FileUtils#chmod

2014-02-24 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-9646:
--

Fix Version/s: (was: 2.3.0)
   2.4.0

 Inconsistent exception specifications in FileUtils#chmod
 

 Key: HADOOP-9646
 URL: https://issues.apache.org/jira/browse/HADOOP-9646
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.4.0

 Attachments: HADOOP-9646.001.patch, HADOOP-9646.002.patch


 There are two FileUtils#chmod methods:
 {code}
 public static int chmod(String filename, String perm
   ) throws IOException, InterruptedException;
 public static int chmod(String filename, String perm, boolean recursive)
 throws IOException;
 {code}
 The first one just calls the second one with {{recursive = false}}, but 
 despite that it is declared as throwing {{InterruptedException}}, something 
 the second one doesn't declare.
 The new Java 7 chmod API, which we will transition to once JDK6 support is 
 dropped, does *not* throw {{InterruptedException}}.
 See 
 [http://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html#setOwner(java.nio.file.Path,
  java.nio.file.attribute.UserPrincipal)]
 So we should make these consistent by removing the {{InterruptedException}} declaration.
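 To illustrate why {{InterruptedException}} can go away, here is a sketch 
 against the Java 7 API referenced above. The helper class is hypothetical 
 and, unlike the real chmod, takes permissions in {{rwxr-xr-x}} form rather 
 than octal; it is not a drop-in replacement.
 {code}
 import java.io.IOException;
 import java.nio.file.Files;
 import java.nio.file.Paths;
 import java.nio.file.attribute.PosixFilePermission;
 import java.nio.file.attribute.PosixFilePermissions;
 import java.util.Set;

 // Hypothetical helper, not Hadoop's FileUtil: with java.nio.file the only
 // checked exception is IOException, so both overloads can share one throws
 // clause and stay consistent.
 public final class ChmodSketch {
   public static void chmod(String filename, String perm) throws IOException {
     chmod(filename, perm, false);
   }

   public static void chmod(String filename, String perm, boolean recursive)
       throws IOException {
     // perm is in "rwxr-xr-x" form for this sketch.
     Set<PosixFilePermission> perms = PosixFilePermissions.fromString(perm);
     Files.setPosixFilePermissions(Paths.get(filename), perms);
     // Recursive handling omitted for brevity; the point is the signature.
   }
 }
 {code}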



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-9085) start namenode failure, because pid of namenode pid file is other process pid or thread id before start namenode

2014-02-24 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-9085:
--

Fix Version/s: (was: 2.3.0)
   2.4.0

 start namenode failure, because pid of namenode pid file is other process pid 
 or thread id before start namenode
 ---

 Key: HADOOP-9085
 URL: https://issues.apache.org/jira/browse/HADOOP-9085
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 2.0.1-alpha, 2.0.3-alpha
 Environment: NA
Reporter: liaowenrui
 Fix For: 2.0.1-alpha, 2.0.2-alpha, 2.4.0


 If the pid stored in the namenode pid file belongs to another process or 
 thread before the namenode is started, starting the namenode will fail. The 
 pid from the namenode pid file is checked with the kill -0 command in the 
 hadoop-daemon.sh script before the namenode is started; when that pid 
 actually belongs to another process or thread, kill -0 still returns 
 success, so the script concludes the namenode is running when in reality it 
 is not.
 2338 is the dead namenode pid, 2305 is the datanode pid:
 cqn2:/tmp # kill -0 2338
 cqn2:/tmp # ps -wweLo pid,ppid,tid | grep 2338
  2305 1  2338



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-8345) HttpServer adds SPNEGO filter mapping but does not register the SPNEGO filter

2014-02-24 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8345:
--

Fix Version/s: (was: 2.3.0)
   2.4.0

 HttpServer adds SPNEGO filter mapping but does not register the SPNEGO filter
 -

 Key: HADOOP-8345
 URL: https://issues.apache.org/jira/browse/HADOOP-8345
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.4.0


 It seems the mapping was added to fulfill HDFS requirements, where the SPNEGO 
 filter is registered.
 The registration of the SPNEGO filter should instead be done at the common 
 level, so it is available to all components using HttpServer when security is ON.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-9679) KerberosName.rules are not initialized during adding kerberos support to a web servlet using hadoop authentications

2014-02-24 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-9679:
--

Fix Version/s: (was: 2.3.0)
   2.4.0

 KerberosName.rules are not initialized during adding kerberos support to a 
 web servlet using hadoop authentications
 ---

 Key: HADOOP-9679
 URL: https://issues.apache.org/jira/browse/HADOOP-9679
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.1.1, 2.0.4-alpha
Reporter: fang fang chen
 Fix For: 2.4.0

 Attachments: HADOOP-9679.patch


 I am using hadoop-1.1.1 to add Kerberos authentication to a web service, but 
 found that the rules are not initialized, which causes the following error:
 java.lang.NullPointerException
 at 
 org.apache.hadoop.security.KerberosName.getShortName(KerberosName.java:384)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:328)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:302)
 at 
 java.security.AccessController.doPrivileged(AccessController.java:310)
 at javax.security.auth.Subject.doAs(Subject.java:573)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:302)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:340)
 It seems this issue is still present in the hadoop-2.0.4-alpha branch.
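 A minimal sketch of the workaround implied here: install the auth_to_local 
 rules before the handler resolves a principal. It assumes a static 
 {{KerberosName.setRules(String)}} initializer; the exact KerberosName class 
 differs between hadoop-common and hadoop-auth, so the import below is an 
 assumption to adjust for your version.
 {code}
 import java.io.IOException;
 import org.apache.hadoop.security.authentication.util.KerberosName;

 // Sketch only: initialize the rules once at servlet startup so
 // getShortName() does not hit the NullPointerException shown above.
 public class InitKerberosRules {
   public static void main(String[] args) throws IOException {
     KerberosName.setRules("DEFAULT");
     // Hypothetical principal, for illustration only.
     KerberosName name = new KerberosName("HTTP/host.example.com@EXAMPLE.COM");
     System.out.println(name.getShortName());
   }
 }
 {code}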



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-8738) junit JAR is showing up in the distro

2014-02-24 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8738:
--

Fix Version/s: (was: 2.3.0)
   2.4.0

 junit JAR is showing up in the distro
 -

 Key: HADOOP-8738
 URL: https://issues.apache.org/jira/browse/HADOOP-8738
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.4.0

 Attachments: HADOOP-8738.patch


 It seems that with the move of the YARN module to the trunk/ level, the test 
 scope on junit got lost. This causes the junit JAR to show up in the TAR.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-8884) DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError

2014-02-24 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8884:
--

Fix Version/s: (was: 2.3.0)
   2.4.0

 DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load 
 native-hadoop with error: java.lang.UnsatisfiedLinkError
 -

 Key: HADOOP-8884
 URL: https://issues.apache.org/jira/browse/HADOOP-8884
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.1-alpha
Reporter: Anthony Rojas
Assignee: Anthony Rojas
 Fix For: 2.4.0

 Attachments: HADOOP-8884-v2.patch, HADOOP-8884.patch, 
 HADOOP-8884.patch


 Recommending that the following debug message be promoted to a warning 
 instead:
 12/07/02 18:41:44 DEBUG util.NativeCodeLoader: Failed to load native-hadoop 
 with error: java.lang.UnsatisfiedLinkError: 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0: /lib64/libc.so.6: version 
 `GLIBC_2.6' not found (required by 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0)
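 For illustration, a sketch of the requested severity change with simplified 
 names; the real NativeCodeLoader does this work in its static initializer, so 
 treat this as an assumption about the shape of the patch rather than the 
 patch itself.
 {code}
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;

 // Illustration of the requested change: the fallback to the pure-Java code
 // paths is logged at WARN instead of DEBUG so operators actually see it.
 public class NativeLoadSketch {
   private static final Log LOG = LogFactory.getLog(NativeLoadSketch.class);

   public static boolean tryLoadNativeHadoop() {
     try {
       System.loadLibrary("hadoop");
       return true;
     } catch (UnsatisfiedLinkError e) {
       // Previously LOG.debug(...); promoted so the UnsatisfiedLinkError and
       // its GLIBC detail are visible at default log levels.
       LOG.warn("Failed to load native-hadoop with error: " + e);
       return false;
     }
   }
 }
 {code}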



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-8643) hadoop-client should exclude hadoop-annotations from hadoop-common dependency

2014-02-24 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8643:
--

Fix Version/s: (was: 2.3.0)
   2.4.0

 hadoop-client should exclude hadoop-annotations from hadoop-common dependency
 -

 Key: HADOOP-8643
 URL: https://issues.apache.org/jira/browse/HADOOP-8643
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Priority: Minor
 Fix For: 2.4.0

 Attachments: hadoop-8643.txt


 When reviewing HADOOP-8370 I missed that changing the scope of 
 hadoop-annotations to compile in hadoop-common would make hadoop-annotations 
 bubble up in hadoop-client. Because of this we need to exclude it explicitly.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-9464) In test*Conf.xml regexp replication factor set to 1

2014-02-24 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-9464:
--

Fix Version/s: (was: 2.3.0)
   2.4.0

 In test*Conf.xml regexp replication factor set to 1
 ---

 Key: HADOOP-9464
 URL: https://issues.apache.org/jira/browse/HADOOP-9464
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.4-alpha
Reporter: Anatoli Fomenko
Priority: Critical
 Fix For: 2.4.0

 Attachments: HADOOP-9464.patch


 In some Hadoop smoke tests (testHDFSConf.xml), the expected output for 
 RegexpComparator hard-codes the replication factor to 1,
 {noformat}
 <test> <!-- TESTED -->
   <description>ls: file using absolute path</description>
   <test-commands>
     <command>-fs NAMENODE -touchz /file1</command>
     <command>-fs NAMENODE -ls /file1</command>
   </test-commands>
   <cleanup-commands>
     <command>-fs NAMENODE -rm /file1</command>
   </cleanup-commands>
   <comparators>
     <comparator>
       <type>TokenComparator</type>
       <expected-output>Found 1 items</expected-output>
     </comparator>
     <comparator>
       <type>RegexpComparator</type>
       <expected-output>^-rw-r--r--( )*1( )*[a-z]*( )*supergroup( )*0( )*[0-9]{4,}-[0-9]{2,}-[0-9]{2,} [0-9]{2,}:[0-9]{2,}( )*/file1</expected-output>
     </comparator>
   </comparators>
 </test>
 {noformat}
 such as the first 1 in 
 {noformat}
 <expected-output>^-rw-r--r--( )*1( )*[a-z]*( )*supergroup( )*0( )*[0-9]{4,}-[0-9]{2,}-[0-9]{2,} [0-9]{2,}:[0-9]{2,}( )*/file1</expected-output>
 {noformat}.
 In Bigtop testing on a standalone cluster we found that such tests fail with 
 the default replication factor. 
 Please update the regexps in the test*Conf.xml files to allow a flexible 
 replication factor, so that these tests can be executed against a variety of 
 clusters, or inside Bigtop.
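 One replication-agnostic option is to match any digit sequence where the 
 factor appears, e.g. replacing the hard-coded {{1}} with {{[0-9]+}}. The 
 snippet below is only a self-contained sanity check of that idea; the actual 
 replacement chosen for the test*Conf.xml files may differ.
 {code}
 import java.util.regex.Pattern;

 // Quick check that a replication-agnostic pattern accepts ls output from
 // clusters with any default replication factor. Sample lines are made up.
 public class RegexCheck {
   public static void main(String[] args) {
     String relaxed = "^-rw-r--r--( )*[0-9]+( )*[a-z]*( )*supergroup( )*0( )*"
         + "[0-9]{4,}-[0-9]{2,}-[0-9]{2,} [0-9]{2,}:[0-9]{2,}( )*/file1";
     String repl1 = "-rw-r--r--   1 hdfs supergroup          0 2014-02-24 10:00 /file1";
     String repl3 = "-rw-r--r--   3 hdfs supergroup          0 2014-02-24 10:00 /file1";
     Pattern p = Pattern.compile(relaxed);
     System.out.println(p.matcher(repl1).find());  // true
     System.out.println(p.matcher(repl3).find());  // true
   }
 }
 {code}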



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-8887) Use a Maven plugin to build the native code using CMake

2014-02-24 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8887:
--

Fix Version/s: (was: 2.3.0)
   2.4.0

 Use a Maven plugin to build the native code using CMake
 ---

 Key: HADOOP-8887
 URL: https://issues.apache.org/jira/browse/HADOOP-8887
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.4.0

 Attachments: HADOOP-8887.001.patch, HADOOP-8887.002.patch, 
 HADOOP-8887.003.patch, HADOOP-8887.004.patch, HADOOP-8887.005.patch, 
 HADOOP-8887.006.patch, HADOOP-8887.008.patch, HADOOP-8887.011.patch


 Currently, we build the native code using ant-build invocations.  Although 
 this works, it has some limitations:
 * compiler warning messages are hidden, which can cause people to check in 
 code with warnings unintentionally
 * there is no framework for running native unit tests; instead, we use ad-hoc 
 constructs involving shell scripts
 * the antrun code is very platform specific
 * there is no way to run a specific native unit test
 * it's more or less impossible for scripts like test-patch.sh to separate a 
 native test failing from the build itself failing (no files are created) or 
 to enumerate which native tests failed.
 Using a native Maven plugin would overcome these limitations.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10041) UserGroupInformation#spawnAutoRenewalThreadForUserCreds tries to renew even if the kerberos ticket cache is non-renewable

2014-02-24 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-10041:
---

Fix Version/s: (was: 2.3.0)
   2.4.0

 UserGroupInformation#spawnAutoRenewalThreadForUserCreds tries to renew even 
 if the kerberos ticket cache is non-renewable
 -

 Key: HADOOP-10041
 URL: https://issues.apache.org/jira/browse/HADOOP-10041
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.1-alpha
Reporter: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.4.0


 UserGroupInformation#spawnAutoRenewalThreadForUserCreds tries to renew user 
 credentials.  However, it does this even if the kerberos ticket cache in 
 question is non-renewable.
 This leads to an annoying error message being printed out all the time.
 {code}
 cmccabe@keter:/h klist
 Ticket cache: FILE:/tmp/krb5cc_1014
 Default principal: hdfs/ke...@cloudera.com
 Valid starting     Expires            Service principal
 07/18/12 15:24:15  07/19/12 15:24:13  krbtgt/cloudera@cloudera.com
 {code}
 {code}
 cmccabe@keter:/h ./bin/hadoop fs -ls /
 15:21:39,882  WARN UserGroupInformation:739 - Exception encountered while 
 running the renewal command. Aborting renew thread. 
 org.apache.hadoop.util.Shell$ExitCodeException: kinit: KDC can't fulfill 
 requested option while renewing credentials
 Found 3 items
 -rw-r--r--   3 cmccabe users   0 2012-07-09 17:15 /b
 -rw-r--r--   3 hdfs supergroup  0 2012-07-09 17:17 /c
 drwxrwxrwx   - cmccabe audio   0 2012-07-19 11:25 /tmp
 {code}
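 A hedged sketch of the guard this suggests, not the actual 
 UserGroupInformation patch: decide whether the TGT is renewable before 
 spawning the renewal thread, so {{kinit -R}} is never attempted against a 
 non-renewable ticket cache.
 {code}
 import javax.security.auth.kerberos.KerberosTicket;

 // Illustration only: decide up front whether auto-renewal makes sense,
 // instead of letting the renewer thread fail and log the warning above.
 public final class RenewGuard {
   public static boolean shouldSpawnRenewer(KerberosTicket tgt) {
     // A ticket without the RENEWABLE flag (or with no renew-till time)
     // cannot be extended with "kinit -R", so skip the thread entirely.
     return tgt != null && tgt.isRenewable() && tgt.getRenewTill() != null;
   }
 }
 {code}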



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10360) Use 2 network adapter In hdfs read and write

2014-02-24 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-10360:
---

Fix Version/s: (was: 2.3.0)
   2.4.0

 Use 2 network adapter In hdfs read and write
 

 Key: HADOOP-10360
 URL: https://issues.apache.org/jira/browse/HADOOP-10360
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: guodongdong
Priority: Minor
 Fix For: 2.4.0






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

