[jira] [Commented] (HADOOP-9984) FileSystem#globStatus and FileSystem#listStatus should resolve symlinks by default

2013-10-01 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13782684#comment-13782684
 ] 

Colin Patrick McCabe commented on HADOOP-9984:
--

* skip Windows in testCreateDanglingLink

* fix TOCTOU in RawLocalFileSystem where we could sometimes return an invalid 
result if a directory was removed at the wrong time.

* fix Stat on BSD (thanks, Chris)

* DCRException#serialVersionUID should be private.

* capitalize error when throwing DCRException

* rename listStatusImpl to listStatusInternal

* add DCRException to some throw specs that already throw IOE (does nothing, 
but it serves as extra documentation)
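The TOCTOU fix can be sketched as follows. This is a minimal illustration of the pattern, not the actual RawLocalFileSystem code, and the SafeList/listStatus names are hypothetical. java.io.File#list returns null when the directory is missing (or is not a directory), so listing first and interpreting the null afterwards avoids the check-then-act race of a separate exists() test:

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;

public class SafeList {
    // List a directory without a separate exists() pre-check: File#list
    // returns null if the directory was removed (or was never a directory),
    // so there is no window between "check" and "use" for the race.
    static String[] listStatus(File dir) throws IOException {
        String[] names = dir.list();
        if (names == null) {
            throw new FileNotFoundException("File " + dir + " does not exist");
        }
        return names;
    }

    public static void main(String[] args) throws IOException {
        File tmp = new File(System.getProperty("java.io.tmpdir"));
        System.out.println(listStatus(tmp).length + " entries");
    }
}
```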

bq. Rather than copy+pasting javadoc for similar methods, I like using See 
@link with a small note about the differences (if any). Will help shrink this 
patch.

OK.  I am referencing all the {{listStatus}} implementations to the 
{{listLinkStatus}} ones, with a note about the added exception and behavior.

bq. In the createSymlink javadoc, listLinkStatus should also be in the list of 
functions that fully resolve

added

bq. Are we planning to add globLinkStatus type methods to 
FileSystem/FileContext? Right now we have this resolveLinks which is always 
true (and in TestGlobStatus too). It's a little confusing right now; I'd like 
to either see the new APIs included here, or all of it broken out into a 
separate JIRA

It's under discussion in HADOOP-9972.  I think it will end up being a lot like 
CreateOptions, but let's hold off on discussing that for now, since this JIRA 
is already big enough.

bq. Rather than uriToSchemeAndAuthority, can we instead use Path#makeQualified 
or FileSystem#makeQualified? If not, I also preferred the old style, since 
using parameters as return values kinda bites.

We can't qualify a path pattern because it may involve things like 
{a,/b}/foo where the different branches of the pattern have to be qualified 
in different ways.  The old style of two separate functions didn't work because 
the decision about whether to use the default for scheme affects the decision 
to use the default for authority.  It could be inlined into the main body of 
the glob function, but I'd prefer not to make it bigger.
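To illustrate (with made-up default values, not Hadoop's actual glob code): expanding the alternation first makes it clear the two branches must be qualified differently, a relative branch against the working directory and an absolute one against only the default scheme and authority.

```java
public class GlobBranches {
    // Hypothetical defaults for the sake of the example.
    static final String WORKING_DIR = "hdfs://nn:8020/user/colin";
    static final String DEFAULT_FS = "hdfs://nn:8020";

    // Qualify one already-expanded branch of a pattern like {a,/b}/foo.
    static String qualify(String branch) {
        return branch.startsWith("/")
            ? DEFAULT_FS + branch          // absolute: add scheme/authority only
            : WORKING_DIR + "/" + branch;  // relative: resolve against working dir
    }

    public static void main(String[] args) {
        System.out.println(qualify("a/foo"));   // hdfs://nn:8020/user/colin/a/foo
        System.out.println(qualify("/b/foo"));  // hdfs://nn:8020/b/foo
    }
}
```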

bq. In WebHdfs and HttpFS, the Op / Operation is still called LISTSTATUS.

We can't change the name of that because it would break wire compatibility in 
the HTTP request.  Those enums get stringified.  I don't think this will lead 
to any confusion since the link-resolving version will be implemented on the 
client side (i.e., it will not require another type of LIST RPC).

bq. Please use GenericTestUtils#assertExceptionContains in the new symlink 
test, you can check for the right path in the exception message.

OK.

bq. [path comments]

Well, as you mentioned, the Path issues are clearly out of scope for this JIRA.

This is a tangent, but I am not convinced by the proposal that we return 
built-up path everywhere.  It would lead to a lot of unnecessary symlink 
resolutions since we'd have to re-do all the work of resolution whenever we 
used the paths.  Plus, in the case of cross-filesystem links, it just doesn't 
even make sense.  What can you add to the end of an hdfs:// path that makes it 
a file:// path?  Nothing.  Finally, from an implementation perspective, this 
requires revisiting pretty much every FC or FS operation, since they all return 
resolved path now.

The built-up path is information that programs can deduce themselves, in the 
same way globStatus does when resolveLinks = false.  (The resolveLinks = false 
case is not exposed by an API yet, but it will be in HADOOP-9972)

 FileSystem#globStatus and FileSystem#listStatus should resolve symlinks by 
 default
 --

 Key: HADOOP-9984
 URL: https://issues.apache.org/jira/browse/HADOOP-9984
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.1.0-beta
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Blocker
 Attachments: HADOOP-9984.001.patch, HADOOP-9984.003.patch, 
 HADOOP-9984.005.patch, HADOOP-9984.007.patch, HADOOP-9984.009.patch, 
 HADOOP-9984.010.patch, HADOOP-9984.011.patch


 During the process of adding symlink support to FileSystem, we realized that 
 many existing HDFS clients would be broken by listStatus and globStatus 
 returning symlinks.  One example is applications that assume that 
 !FileStatus#isFile implies that the inode is a directory.  As we discussed in 
 HADOOP-9972 and HADOOP-9912, we should default these APIs to returning 
 resolved paths.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-9984) FileSystem#globStatus and FileSystem#listStatus should resolve symlinks by default

2013-10-01 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-9984:
-

Attachment: HADOOP-9984.011.patch



[jira] [Updated] (HADOOP-9984) FileSystem#globStatus and FileSystem#listStatus should resolve symlinks by default

2013-10-01 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-9984:
-

Attachment: (was: HADOOP-9984.011.patch)



[jira] [Updated] (HADOOP-9984) FileSystem#globStatus and FileSystem#listStatus should resolve symlinks by default

2013-10-01 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-9984:
-

Attachment: HADOOP-9984.012.patch



[jira] [Commented] (HADOOP-1) initial import of code from Nutch

2013-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13782810#comment-13782810
 ] 

Hudson commented on HADOOP-1:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #349 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/349/])
MAPREDUCE-5551. Fix compat with hadoop-1 in 
SequenceFileAsBinaryOutputFormat.WritableValueBytes by re-introducing missing 
constructors. Contributed by Zhijie Shen. (acmurthy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1527848)
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/SequenceFileAsBinaryOutputFormat.java


 initial import of code from Nutch
 -

 Key: HADOOP-1
 URL: https://issues.apache.org/jira/browse/HADOOP-1
 Project: Hadoop Common
  Issue Type: Task
Reporter: Doug Cutting
Assignee: Doug Cutting
 Fix For: 0.1.0


 The initial code for Hadoop will be copied from Nutch.





[jira] [Commented] (HADOOP-9964) O.A.H.U.ReflectionUtils.printThreadInfo() is not thread-safe which cause TestHttpServer pending 10 minutes or longer.

2013-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13782806#comment-13782806
 ] 

Hudson commented on HADOOP-9964:


FAILURE: Integrated in Hadoop-Yarn-trunk #349 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/349/])
HADOOP-9964. Fix deadlocks in TestHttpServer by synchronize 
ReflectionUtils.printThreadInfo. (Junping Du via llu) (llu: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1527650)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ReflectionUtils.java


 O.A.H.U.ReflectionUtils.printThreadInfo() is not thread-safe which cause 
 TestHttpServer pending 10 minutes or longer.
 -

 Key: HADOOP-9964
 URL: https://issues.apache.org/jira/browse/HADOOP-9964
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Reporter: Junping Du
Assignee: Junping Du
 Fix For: 2.3.0

 Attachments: HADOOP-9964.patch, jstack-runTestHttpServer.log


 The printThreadInfo() in ReflectionUtils is not thread-safe, which causes two 
 or more threads calling this method from StackServlet to deadlock. 





[jira] [Commented] (HADOOP-9964) O.A.H.U.ReflectionUtils.printThreadInfo() is not thread-safe which cause TestHttpServer pending 10 minutes or longer.

2013-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13782912#comment-13782912
 ] 

Hudson commented on HADOOP-9964:


FAILURE: Integrated in Hadoop-Hdfs-trunk #1539 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1539/])
HADOOP-9964. Fix deadlocks in TestHttpServer by synchronize 
ReflectionUtils.printThreadInfo. (Junping Du via llu) (llu: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1527650)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ReflectionUtils.java




[jira] [Commented] (HADOOP-1) initial import of code from Nutch

2013-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13782916#comment-13782916
 ] 

Hudson commented on HADOOP-1:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1539 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1539/])
MAPREDUCE-5551. Fix compat with hadoop-1 in 
SequenceFileAsBinaryOutputFormat.WritableValueBytes by re-introducing missing 
constructors. Contributed by Zhijie Shen. (acmurthy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1527848)
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/SequenceFileAsBinaryOutputFormat.java




[jira] [Commented] (HADOOP-9964) O.A.H.U.ReflectionUtils.printThreadInfo() is not thread-safe which cause TestHttpServer pending 10 minutes or longer.

2013-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13782964#comment-13782964
 ] 

Hudson commented on HADOOP-9964:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1565 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1565/])
HADOOP-9964. Fix deadlocks in TestHttpServer by synchronize 
ReflectionUtils.printThreadInfo. (Junping Du via llu) (llu: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1527650)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ReflectionUtils.java




[jira] [Commented] (HADOOP-1) initial import of code from Nutch

2013-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13782968#comment-13782968
 ] 

Hudson commented on HADOOP-1:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1565 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1565/])
MAPREDUCE-5551. Fix compat with hadoop-1 in 
SequenceFileAsBinaryOutputFormat.WritableValueBytes by re-introducing missing 
constructors. Contributed by Zhijie Shen. (acmurthy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1527848)
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/SequenceFileAsBinaryOutputFormat.java




[jira] [Commented] (HADOOP-9063) enhance unit-test coverage of class org.apache.hadoop.fs.FileUtil

2013-10-01 Thread Robert Parker (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783008#comment-13783008
 ] 

Robert Parker commented on HADOOP-9063:
---

+1 (non-binding) lgtm. I was able to apply the trunk patch to branch-2 and 
successfully test it (now). Recommend applying the trunk patch to branch-2 to 
avoid unnecessary divergence.

 enhance unit-test coverage of class org.apache.hadoop.fs.FileUtil
 -

 Key: HADOOP-9063
 URL: https://issues.apache.org/jira/browse/HADOOP-9063
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-9063--b.patch, HADOOP-9063-branch-0.23--b.patch, 
 HADOOP-9063-branch-0.23--c.patch, HADOOP-9063-branch-2--N1.patch, 
 HADOOP-9063-branch-2--N2.patch, HADOOP-9063.patch, 
 HADOOP-9063-trunk--c.patch, HADOOP-9063-trunk--c.patch, 
 HADOOP-9063-trunk--N2.patch, HADOOP-9063-trunk--N3.patch, 
 HADOOP-9063-trunk--N6.patch


 Some methods of class org.apache.hadoop.fs.FileUtil are covered by unit-tests 
 poorly or not covered at all. Enhance the coverage.





[jira] [Resolved] (HADOOP-7838) sbin/start-balancer doesnt

2013-10-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-7838.


Resolution: Duplicate

resolving as duplicate of HDFS-3165 (technically that duplicates this one, but 
the patch is in the later JIRA...)

 sbin/start-balancer doesnt
 --

 Key: HADOOP-7838
 URL: https://issues.apache.org/jira/browse/HADOOP-7838
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.23.0
 Environment: OS/X, no JAVA_HOME set, tarball installation
Reporter: Steve Loughran

 you can't start the balancer as it tries to call bin/hadoop-daemon.sh, which 
 isn't there:
 {code}
 hadoop-0.23.0 slo$ sbin/start-balancer.sh 
 sbin/start-balancer.sh: line 25: 
 /Users/slo/Java/Hadoop/versions/hadoop-0.23.0/libexec/../bin/hadoop-daemon.sh:
  No such file or directory
 hadoop-0.23.0 slo$ 
 {code}





[jira] [Commented] (HADOOP-9902) Shell script rewrite

2013-10-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783018#comment-13783018
 ] 

Steve Loughran commented on HADOOP-9902:


Playing with this. Sometimes the generated classpath entry is, say, 
share/hadoop/yarn/*; the capacity scheduler entry is /*.jar. Should everything 
be consistent?

My tarball built with {{mvn clean package -Pdist -Dtar -DskipTests 
-Dmaven.javadoc.skip=true}} doesn't seem to contain any conf/ directories, 
which is presumably unrelated to this build setup. What I do wonder is whether 
the scripts should care about this fact, and how to react.

 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 2.1.1-beta
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: hadoop-9902-1.patch, more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.





[jira] [Commented] (HADOOP-9902) Shell script rewrite

2013-10-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783048#comment-13783048
 ] 

Steve Loughran commented on HADOOP-9902:


(ignore that last comment about the conf dirs, they are there. But we do need 
to think when and how to react to their absence)



[jira] [Commented] (HADOOP-9902) Shell script rewrite

2013-10-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783086#comment-13783086
 ] 

Steve Loughran commented on HADOOP-9902:


One thing I will note is that {{yarn classpath}} fails saying no conf dir set

{code}
$ yarn classpath
No HADOOP_CONF_DIR set.
Please specify it either in yarn-env.sh or in the environment.
{code}

the normal {{hadoop classpath}} doesn't fail in the same situation.



[jira] [Commented] (HADOOP-9902) Shell script rewrite

2013-10-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783103#comment-13783103
 ] 

Steve Loughran commented on HADOOP-9902:


Actually, a rebuild fixes that. What I did have to do was drop 
hadoop-functions.sh into libexec.

I don't see hadoop tools getting on the CP: is there a plan for that? It would 
suit me to have a directory into which I could put things to get them on a 
classpath without playing with HADOOP_CLASSPATH.



[jira] [Commented] (HADOOP-10010) Add expectedFalsePositiveProbability to BloomFilter

2013-10-01 Thread Xiangrui Meng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783117#comment-13783117
 ] 

Xiangrui Meng commented on HADOOP-10010:


Assume that a hash function selects each array position with equal probability. 
Then for an element not in the collection, the probability that the bloom 
filter returns true is

(numTrueBits/numBits)^numHashes

See http://en.wikipedia.org/wiki/Bloom_filter

I don't know what tests would be appropriate here.
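The formula is easy to sanity-check. The method below is a sketch of the behavior described; the name expectedFalsePositiveProbability follows the JIRA title, but the signature is assumed, not taken from the patch:

```java
public class BloomFpp {
    // p = (numTrueBits / numBits) ^ numHashes: the chance that all
    // numHashes probes for an absent element land on already-set bits.
    static double expectedFalsePositiveProbability(int numTrueBits,
                                                   int numBits,
                                                   int numHashes) {
        return Math.pow((double) numTrueBits / numBits, numHashes);
    }

    public static void main(String[] args) {
        // half the bits set, 3 hash functions: 0.5^3
        System.out.println(expectedFalsePositiveProbability(512, 1024, 3)); // prints 0.125
    }
}
```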

 Add expectedFalsePositiveProbability to BloomFilter
 ---

 Key: HADOOP-10010
 URL: https://issues.apache.org/jira/browse/HADOOP-10010
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Xiangrui Meng
 Attachments: fpp.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 It would be nice to see the expected false positive probability of a bloom 
 filter instance to check its quality. This is a simple function but needs 
 access to BloomFilter#bits.





[jira] [Commented] (HADOOP-10009) Backport HADOOP-7808 to branch-1

2013-10-01 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783138#comment-13783138
 ] 

Jing Zhao commented on HADOOP-10009:


The patch looks good to me. Could you post the test-patch and ant test 
results?

 Backport HADOOP-7808 to branch-1
 

 Key: HADOOP-10009
 URL: https://issues.apache.org/jira/browse/HADOOP-10009
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.2.1
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HADOOP-10009.000.patch, HADOOP-10009.001.patch


 In branch-1, SecurityUtil::setTokenService() might throw a 
 NullPointerException, which is fixed in HADOOP-7808.
 The patch should be backported into branch-1





[jira] [Commented] (HADOOP-9991) Fix up Hadoop Poms for enforced dependencies, roll up JARs to latest versions

2013-10-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783152#comment-13783152
 ] 

Steve Loughran commented on HADOOP-9991:


Enforcement is on; the issue is more the excessive export to things downstream, 
especially with hbase in the mix, as that is where version number problems 
start to surface.

This is the main set of exclusions, since these artifacts don't appear to be 
used, though HDFS's JSP pages may well need jasper. The 
jersey-test-framework-grizzly2 dependency (branch-2.1.1) is clearly spurious:

{code}
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-minicluster</artifactId>
  <version>${hadoop.version}</version>
  <scope>test</scope>
  <exclusions>
    <exclusion>
      <groupId>com.sun.jersey.jersey-test-framework</groupId>
      <artifactId>jersey-test-framework-grizzly2</artifactId>
    </exclusion>
  </exclusions>
</dependency>

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <version>${hadoop.version}</version>
  <exclusions>
    <exclusion>
      <groupId>tomcat</groupId>
      <artifactId>jasper-runtime</artifactId>
    </exclusion>
  </exclusions>
</dependency>

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-yarn-server-common</artifactId>
  <version>${hadoop.version}</version>
  <exclusions>
    <exclusion>
      <groupId>com.sun.jersey.jersey-test-framework</groupId>
      <artifactId>jersey-test-framework-grizzly2</artifactId>
    </exclusion>
  </exclusions>
</dependency>

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-yarn-client</artifactId>
  <version>${hadoop.version}</version>
  <exclusions>
    <exclusion>
      <groupId>com.sun.jersey.jersey-test-framework</groupId>
      <artifactId>jersey-test-framework-grizzly2</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}


 Fix up Hadoop Poms for enforced dependencies, roll up JARs to latest versions
 -

 Key: HADOOP-9991
 URL: https://issues.apache.org/jira/browse/HADOOP-9991
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.3.0, 2.1.1-beta
Reporter: Steve Loughran

 If you try using Hadoop downstream with a classpath shared with HBase and 
 Accumulo, you soon discover how messy the dependencies are.
 Hadoop's side of this problem is
 # not being up to date with some of the external releases of common JARs
 # not locking down/excluding inconsistent versions of artifacts provided down 
 the dependency graph





[jira] [Commented] (HADOOP-9470) eliminate duplicate FQN tests in different Hadoop modules

2013-10-01 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783242#comment-13783242
 ] 

Daryn Sharp commented on HADOOP-9470:
-

I don't think adding 2 to the class name is a good precedent to set.  Please 
rename the dups to TestYarnWhatever, TestMRWhatever, etc.

 eliminate duplicate FQN tests in different Hadoop modules
 -

 Key: HADOOP-9470
 URL: https://issues.apache.org/jira/browse/HADOOP-9470
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: find-duplicate-fqns.sh, HADOOP-9470-branch-0.23.patch, 
 HADOOP-9470-trunk.patch


 In different modules of Hadoop project there are tests with identical FQNs 
 (fully qualified name).
 For example, test with FQN org.apache.hadoop.util.TestRunJar is contained in 
 2 modules:
  
 ./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
  
 ./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/util/TestRunJar.java
  
 Such situation causes certain problems with test result reporting and other 
 code analysis tools (such as Clover, e.g.) because almost all the tools 
 identify the tests by their Java FQN.
 So, I suggest to rename all such test classes to avoid duplicate FQNs in 
 different modules. I'm attaching simple shell script that can find all such 
 problematic test classes. Currently Hadoop trunk has 9 such test classes, 
 they are:
 $ ~/bin/find-duplicate-fqns.sh
 # Module 
 [./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/test-classes]
  has 7 duplicate FQN tests:
 org.apache.hadoop.ipc.TestSocketFactory
 org.apache.hadoop.mapred.TestFileOutputCommitter
 org.apache.hadoop.mapred.TestJobClient
 org.apache.hadoop.mapred.TestJobConf
 org.apache.hadoop.mapreduce.lib.output.TestFileOutputCommitter
 org.apache.hadoop.util.TestReflectionUtils
 org.apache.hadoop.util.TestRunJar
 # Module 
 [./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/target/test-classes]
  has 2 duplicate FQN tests:
 org.apache.hadoop.yarn.TestRecordFactory
 org.apache.hadoop.yarn.TestRPCFactories





[jira] [Moved] (HADOOP-10011) NPE if the system can't determine its own name and you go DNS.getDefaultHost(null)

2013-10-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran moved HDFS-116 to HADOOP-10011:
--

Key: HADOOP-10011  (was: HDFS-116)
Project: Hadoop Common  (was: Hadoop HDFS)

 NPE if the system can't determine its own name and you go 
 DNS.getDefaultHost(null)
 --

 Key: HADOOP-10011
 URL: https://issues.apache.org/jira/browse/HADOOP-10011
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor

 In a test case that I am newly writing, on my infamous home machine with 
 broken DNS, I can't call getByName(null) without seeing a stack trace:
 Testcase: testNullInterface took 0.014 sec
   Caused an ERROR
 null
 java.lang.NullPointerException
   at java.net.NetworkInterface.getByName(NetworkInterface.java:226)
   at org.apache.hadoop.net.DNS.getIPs(DNS.java:94)
   at org.apache.hadoop.net.DNS.getHosts(DNS.java:141)
   at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:218)
   at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:235)
   at org.apache.hadoop.net.TestDNS.testNullInterface(TestDNS.java:62)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10011) NPE if the system can't determine its own name and you go DNS.getDefaultHost(null)

2013-10-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10011:


Affects Version/s: 3.0.0
   2.1.1-beta

doing a review of the code, the issue is still there: DNS.getHosts assumes that 
{{netAddress.getByName()}} never returns null, even though returning null is the 
official failure state of the method according to the JDK:
{code}
hosts.add(reverseDns(InetAddress.getByName(ips[ctr]),
 nameserver));
{code}
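For illustration, here is a hedged sketch of the kind of null guard being discussed; the class and method names below are illustrative stand-ins, not Hadoop's actual DNS code:

```java
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch of the missing guard: NetworkInterface.getByName
// returns null when no interface matches (its documented failure mode), so
// the null must be translated into an exception before it is dereferenced.
public class DnsNullGuard {

    public static NetworkInterface requireInterface(NetworkInterface netIf, String name)
            throws UnknownHostException {
        if (netIf == null) {
            throw new UnknownHostException("No such interface: " + name);
        }
        return netIf;
    }

    public static List<String> getIps(String strInterface)
            throws UnknownHostException, SocketException {
        NetworkInterface netIf =
            requireInterface(NetworkInterface.getByName(strInterface), strInterface);
        List<String> ips = new ArrayList<>();
        for (InetAddress addr : Collections.list(netIf.getInetAddresses())) {
            ips.add(addr.getHostAddress());
        }
        return ips;
    }
}
```

The guard turns the documented null return into an UnknownHostException instead of a later NPE deeper in the call chain.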


 NPE if the system can't determine its own name and you go 
 DNS.getDefaultHost(null)
 --

 Key: HADOOP-10011
 URL: https://issues.apache.org/jira/browse/HADOOP-10011
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.1-beta
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor

 In a test case that I am newly writing, on my infamous home machine with 
 broken DNS, I can't call getByName(null) without seeing a stack trace:
 Testcase: testNullInterface took 0.014 sec
   Caused an ERROR
 null
 java.lang.NullPointerException
   at java.net.NetworkInterface.getByName(NetworkInterface.java:226)
   at org.apache.hadoop.net.DNS.getIPs(DNS.java:94)
   at org.apache.hadoop.net.DNS.getHosts(DNS.java:141)
   at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:218)
   at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:235)
   at org.apache.hadoop.net.TestDNS.testNullInterface(TestDNS.java:62)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10003) HarFileSystem.listLocatedStatus() fails

2013-10-01 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HADOOP-10003:


Attachment: test.har.tar

Attaching test.har as a tar archive, as Suresh reported having issues applying 
the patch with binary files.

 HarFileSystem.listLocatedStatus() fails
 ---

 Key: HADOOP-10003
 URL: https://issues.apache.org/jira/browse/HADOOP-10003
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.1.1-beta
Reporter: Jason Dere
 Attachments: HADOOP-10003.1.patch, HADOOP-10003.2.patch, 
 HADOOP-10003.3.patch, test.har.tar


 It looks like HarFileSystem.listLocatedStatus() doesn't work properly because 
 it is inheriting FilterFileSystem's implementation.  This is causing archive 
 unit tests to fail in Hive when using hadoop 2.1.1.
 If HarFileSystem overrides listLocatedStatus() to use FileSystem's 
 implementation, the Hive unit tests pass.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-9984) FileSystem#globStatus and FileSystem#listStatus should resolve symlinks by default

2013-10-01 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783264#comment-13783264
 ] 

Chris Nauroth commented on HADOOP-9984:
---

The code in the latest patch is looking good.  I'm planning to give it a full 
test run on Windows overnight in case there are any sneaky OS-specific issues.  
A few comments/questions:

Failure to auto-resolve any symlink causes an exception for the whole 
operation.  There had been prior discussion of supporting an option to ignore 
symlink resolution failures.  Is that out of scope right now and coming later 
in HADOOP-9972?

Nice job updating JavaDocs to describe the effects of symlinks on existing 
methods.  I'm going to take one more pass over this part, just to make sure we 
covered everything.

Regarding the backwards-incompatible change of abstract listStatus to abstract 
listLinkStatus, I also do not see a way to avoid this.  At least this way, it's 
only incompatible for subclass implementers and not callers.

Methods that perform auto-resolution will return multiple occurrences of the 
same path if there are multiple symlinks with the same target.  I haven't seen 
this mentioned explicitly in the prior threads discussing compatibility 
concerns, so I thought I'd bring it up.  This decision can be significant for 
apps.  Taking the example of MapReduce running against HDFS, 
{{FileInputFormat#getSplits}} runs {{FileSystem#globStatus}} and skips symlinks 
(based on a length != 0 check).  If {{FileSystem#globStatus}} returns symlinks, 
they don't go into the job input.  If the symlinks are auto-resolved (as in 
this patch), then the same HDFS blocks get used multiple times to create 
multiple input splits.  According to comments in HADOOP-9912, {{globStatus}} 
has been inconsistent over time, and I think auto-resolving yields the correct 
expected behavior anyway.  I have no objection to the change, but I wanted to 
describe it clearly in case anyone else has concerns.
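To make the duplicate-path concern concrete, here is a minimal sketch (an assumption for illustration: resolved paths are modeled as plain strings rather than FileStatus objects) of how a caller could de-duplicate auto-resolved listing results:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

// Hypothetical sketch: after auto-resolution, two symlinks pointing at the
// same target yield the same resolved path twice; a caller that wants each
// target only once must de-duplicate, preserving first-seen order here.
public class DedupeResolved {

    public static List<String> dedupeByPath(List<String> resolvedPaths) {
        // LinkedHashSet drops duplicates while keeping insertion order.
        return new ArrayList<>(new LinkedHashSet<>(resolvedPaths));
    }
}
```

An app like the FileInputFormat example above would need a step like this if it wants each HDFS block to contribute only one input split.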


 FileSystem#globStatus and FileSystem#listStatus should resolve symlinks by 
 default
 --

 Key: HADOOP-9984
 URL: https://issues.apache.org/jira/browse/HADOOP-9984
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.1.0-beta
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Blocker
 Attachments: HADOOP-9984.001.patch, HADOOP-9984.003.patch, 
 HADOOP-9984.005.patch, HADOOP-9984.007.patch, HADOOP-9984.009.patch, 
 HADOOP-9984.010.patch, HADOOP-9984.011.patch, HADOOP-9984.012.patch


 During the process of adding symlink support to FileSystem, we realized that 
 many existing HDFS clients would be broken by listStatus and globStatus 
 returning symlinks.  One example is applications that assume that 
 !FileStatus#isFile implies that the inode is a directory.  As we discussed in 
 HADOOP-9972 and HADOOP-9912, we should default these APIs to returning 
 resolved paths.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10012) Secure Oozie jobs with delegation token renewal exception in HA setup

2013-10-01 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783277#comment-13783277
 ] 

Arpit Gupta commented on HADOOP-10012:
--

Here is the stack trace:

{code}
2013-08-29 20:07:05,773 INFO  resourcemanager.ClientRMService 
(ClientRMService.java:getNewApplicationId(206)) - Allocated new applicationId: 8
2013-08-29 20:07:06,713 WARN  token.Token (Token.java:getRenewer(352)) - No 
TokenRenewer defined for token kind Localizer
2013-08-29 20:07:06,731 ERROR security.UserGroupInformation 
(UserGroupInformation.java:doAs(1480)) - PriviledgedActionException 
as:rm/hostname:8020;
2013-08-29 20:07:06,731 WARN  resourcemanager.RMAppManager 
(RMAppManager.java:submitApplication(297)) - Unable to add the application to 
the delegation token renewer.
java.io.IOException: Failed on local exception: java.io.EOFException; Host 
Details : local host is: hostname:8020;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
at org.apache.hadoop.ipc.Client.call(Client.java:1351)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at $Proxy9.renewDelegationToken(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:188)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at $Proxy9.renewDelegationToken(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewDelegationToken(ClientNamenodeProtocolTranslatorPB.java:820)
at org.apache.hadoop.hdfs.DFSClient$Renewer.renew(DFSClient.java:932)
at org.apache.hadoop.security.token.Token.renew(Token.java:372)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:385)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:382)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.renewToken(DelegationTokenRenewer.java:381)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.addApplication(DelegationTokenRenewer.java:301)
at 
org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:291)
at 
org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:315)
at 
org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:163)
at 
org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:243)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:375)
at 
org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:995)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:891)
2013-08-29 20:07:06,733 INFO  rmapp.RMAppImpl (RMAppImpl.java:handle(565)) - 
application_1377802472892_0008 State change from NEW to FAILED
2013-08-29 20:07:06,734 WARN  resourcemanager.RMAuditLogger 
(RMAuditLogger.java:logFailure(255)) - USER=hrt_qa  OPERATION=Application 
Finished - Failed TARGET=RMAppManager RESULT=FAILURE  DESCRIPTION=App 
failed with state: FAILED   PERMISSIONS=Failed on local exception: 
java.io.EOFException; Host Details : local host is: hostname:8020; 
APPID=application_1377802472892_0008
{code}

 Secure Oozie jobs with delegation token 

[jira] [Created] (HADOOP-10012) Secure Oozie jobs with delegation token renewal exception in HA setup

2013-10-01 Thread Arpit Gupta (JIRA)
Arpit Gupta created HADOOP-10012:


 Summary: Secure Oozie jobs with delegation token renewal exception 
in HA setup
 Key: HADOOP-10012
 URL: https://issues.apache.org/jira/browse/HADOOP-10012
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.1.1-beta
Reporter: Arpit Gupta
Assignee: Suresh Srinivas






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10003) HarFileSystem.listLocatedStatus() fails

2013-10-01 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10003:
-

Attachment: HADOOP-10003.4.patch

Updated patch. It changes HarFileSystem to subclass FileSystem instead of 
FilterFileSystem. This patch must also include the test.har posted by [~jdere].

 HarFileSystem.listLocatedStatus() fails
 ---

 Key: HADOOP-10003
 URL: https://issues.apache.org/jira/browse/HADOOP-10003
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.1.1-beta
Reporter: Jason Dere
 Attachments: HADOOP-10003.1.patch, HADOOP-10003.2.patch, 
 HADOOP-10003.3.patch, HADOOP-10003.4.patch, test.har.tar


 It looks like HarFileSystem.listLocatedStatus() doesn't work properly because 
 it is inheriting FilterFileSystem's implementation.  This is causing archive 
 unit tests to fail in Hive when using hadoop 2.1.1.
 If HarFileSystem overrides listLocatedStatus() to use FileSystem's 
 implementation, the Hive unit tests pass.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-9984) FileSystem#globStatus and FileSystem#listStatus should resolve symlinks by default

2013-10-01 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783282#comment-13783282
 ] 

Colin Patrick McCabe commented on HADOOP-9984:
--

bq. Failure to auto-resolve any symlink causes an exception for the whole 
operation. There had been prior discussion of supporting an option to ignore 
symlink resolution failures. Is that out of scope right now and coming later in 
HADOOP-9972?

Yeah, we've been discussing that in HADOOP-9972.  My feeling right now is that 
we should definitely allow users to provide an error handler for 
{{globStatus}}, as well as the ability to skip resolving symlinks.  We need a 
{{globStatus}} error handler for other reasons as well, such as to improve 
FSShell error handling.  I don't think {{listStatus}} needs an error handler 
since users can always turn to {{listLinkStatus}} and do the resolution 
themselves, which seems simpler.  But it's best to discuss that on HADOOP-9972 
:)

Thanks for testing this on Windows.

 FileSystem#globStatus and FileSystem#listStatus should resolve symlinks by 
 default
 --

 Key: HADOOP-9984
 URL: https://issues.apache.org/jira/browse/HADOOP-9984
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.1.0-beta
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Blocker
 Attachments: HADOOP-9984.001.patch, HADOOP-9984.003.patch, 
 HADOOP-9984.005.patch, HADOOP-9984.007.patch, HADOOP-9984.009.patch, 
 HADOOP-9984.010.patch, HADOOP-9984.011.patch, HADOOP-9984.012.patch


 During the process of adding symlink support to FileSystem, we realized that 
 many existing HDFS clients would be broken by listStatus and globStatus 
 returning symlinks.  One example is applications that assume that 
 !FileStatus#isFile implies that the inode is a directory.  As we discussed in 
 HADOOP-9972 and HADOOP-9912, we should default these APIs to returning 
 resolved paths.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10003) HarFileSystem.listLocatedStatus() fails

2013-10-01 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10003:
-

Attachment: HADOOP-10003.5.patch

Updated patch to remove the rat checks for the har data files required for 
testing.

With this patch, TestHarFileSystemBasics is still expected to fail due to the 
absence of test.har in the patch. Reviewer, please run the test 
TestHarFileSystemBasics after untarring the test.har.tar in 
hadoop-common-project/hadoop-common/src/test/resources.

 HarFileSystem.listLocatedStatus() fails
 ---

 Key: HADOOP-10003
 URL: https://issues.apache.org/jira/browse/HADOOP-10003
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.1.1-beta
Reporter: Jason Dere
 Attachments: HADOOP-10003.1.patch, HADOOP-10003.2.patch, 
 HADOOP-10003.3.patch, HADOOP-10003.4.patch, HADOOP-10003.5.patch, test.har.tar


 It looks like HarFileSystem.listLocatedStatus() doesn't work properly because 
 it is inheriting FilterFileSystem's implementation.  This is causing archive 
 unit tests to fail in Hive when using hadoop 2.1.1.
 If HarFileSystem overrides listLocatedStatus() to use FileSystem's 
 implementation, the Hive unit tests pass.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10012) Secure Oozie jobs with delegation token renewal exception in HA setup

2013-10-01 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783294#comment-13783294
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-10012:
--

Tx for filing this Arpit. I'd like to also credit [~venkatnrangan] for his 
extensive debugging to figure out the underlying issue.

What's happening here is that
 - In Oozie's launcher job, before we create a job-client, Cluster.java 
creates a file-system object which eventually invokes DFS HAUtils code that 
clones the single delegation token with logical URI as service-name into 
multiple tokens with the ip-addresses
 - Once the UGI is 'polluted' with these duplicate tokens, JobClient uses the 
tokens from UGI to submit to RM which eventually fails to renew these 'fake' 
tokens as it cannot reach the stand-by NN for renewal
 - The failure to renew tokens fails the job.

 Secure Oozie jobs with delegation token renewal exception in HA setup
 -

 Key: HADOOP-10012
 URL: https://issues.apache.org/jira/browse/HADOOP-10012
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.1.1-beta
Reporter: Arpit Gupta
Assignee: Suresh Srinivas





--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10012) Secure Oozie jobs with delegation token renewal exception in HA setup

2013-10-01 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783293#comment-13783293
 ] 

Suresh Srinivas commented on HADOOP-10012:
--

In DFSClient, we clone the delegation token associated with the logical service 
name to two physical addresses corresponding to the active and standby namenode 
addresses. In an Oozie job, when the Oozie job launcher submits a job to the YARN 
RM, the RM tries to renew the delegation tokens, including the cloned tokens. 
The standby namenode does not allow token renewal. This results in token 
renewal failure at the RM and subsequent failure of the job submitted by Oozie.

 Secure Oozie jobs with delegation token renewal exception in HA setup
 -

 Key: HADOOP-10012
 URL: https://issues.apache.org/jira/browse/HADOOP-10012
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.1.1-beta
Reporter: Arpit Gupta
Assignee: Suresh Srinivas





--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10012) Secure Oozie jobs with delegation token renewal exception in HA setup

2013-10-01 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-10012:
-

Target Version/s: 2.1.2-beta

Hehe, race conditions for comments.

I think we should get this fixed for 2.1.2.

 Secure Oozie jobs with delegation token renewal exception in HA setup
 -

 Key: HADOOP-10012
 URL: https://issues.apache.org/jira/browse/HADOOP-10012
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.1.1-beta
Reporter: Arpit Gupta
Assignee: Suresh Srinivas





--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-9984) FileSystem#globStatus and FileSystem#listStatus should resolve symlinks by default

2013-10-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783299#comment-13783299
 ] 

Hadoop QA commented on HADOOP-9984:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12606060/HADOOP-9984.012.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 15 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-httpfs hadoop-tools/hadoop-gridmix 
hadoop-tools/hadoop-openstack 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3153//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3153//console

This message is automatically generated.

 FileSystem#globStatus and FileSystem#listStatus should resolve symlinks by 
 default
 --

 Key: HADOOP-9984
 URL: https://issues.apache.org/jira/browse/HADOOP-9984
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.1.0-beta
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Blocker
 Attachments: HADOOP-9984.001.patch, HADOOP-9984.003.patch, 
 HADOOP-9984.005.patch, HADOOP-9984.007.patch, HADOOP-9984.009.patch, 
 HADOOP-9984.010.patch, HADOOP-9984.011.patch, HADOOP-9984.012.patch


 During the process of adding symlink support to FileSystem, we realized that 
 many existing HDFS clients would be broken by listStatus and globStatus 
 returning symlinks.  One example is applications that assume that 
 !FileStatus#isFile implies that the inode is a directory.  As we discussed in 
 HADOOP-9972 and HADOOP-9912, we should default these APIs to returning 
 resolved paths.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10012) Secure Oozie jobs fail with delegation token renewal exception in HA setup

2013-10-01 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-10012:
-

Summary: Secure Oozie jobs fail with delegation token renewal exception in 
HA setup  (was: Secure Oozie jobs with delegation token renewal exception in HA 
setup)

 Secure Oozie jobs fail with delegation token renewal exception in HA setup
 --

 Key: HADOOP-10012
 URL: https://issues.apache.org/jira/browse/HADOOP-10012
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.1.1-beta
Reporter: Arpit Gupta
Assignee: Suresh Srinivas





--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10012) Secure Oozie jobs fail with delegation token renewal exception in HA setup

2013-10-01 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10012:
-

Attachment: HADOOP-10012.patch

Here is a patch that @Daryn had given me. I have added unit tests to his code. 
The patch adds a new subclass type for cloned tokens called PrivateToken. Such 
tokens are not returned by the UserGroupInformation#getCredentials() method.

[~venkatnrangan] and [~vinodkv] spent long hours debugging this issue. Also 
[~venkatnrangan] helped in verifying that this patch worked. Thanks guys for 
the help.
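For readers following along, here is a toy sketch of the marker-subclass pattern described above; the classes below are illustrative stand-ins, not Hadoop's actual Token or UserGroupInformation types:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Hypothetical sketch of the PrivateToken idea: internally cloned tokens get
// a distinct marker type, and the credentials view filters that type out so
// only real tokens are handed to the RM for renewal.
public class PrivateTokenSketch {

    public static class Token {
        public final String service;
        public Token(String service) { this.service = service; }
    }

    // Marker subclass for cloned, internal-only tokens.
    public static class PrivateToken extends Token {
        public PrivateToken(String service) { super(service); }
    }

    // Analogous to getCredentials(): expose everything except PrivateTokens.
    public static List<Token> visibleTokens(Collection<Token> all) {
        List<Token> out = new ArrayList<>();
        for (Token t : all) {
            if (!(t instanceof PrivateToken)) {
                out.add(t);
            }
        }
        return out;
    }
}
```

With this shape, the cloned per-namenode tokens never reach the job submission path, so the RM only ever sees the renewable logical-service token.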

 Secure Oozie jobs fail with delegation token renewal exception in HA setup
 --

 Key: HADOOP-10012
 URL: https://issues.apache.org/jira/browse/HADOOP-10012
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.1.1-beta
Reporter: Arpit Gupta
Assignee: Suresh Srinivas
 Attachments: HADOOP-10012.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10003) HarFileSystem.listLocatedStatus() fails

2013-10-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783343#comment-13783343
 ] 

Hadoop QA commented on HADOOP-10003:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12606177/HADOOP-10003.5.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.fs.TestHarFileSystemBasics

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3154//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3154//console

This message is automatically generated.

 HarFileSystem.listLocatedStatus() fails
 ---

 Key: HADOOP-10003
 URL: https://issues.apache.org/jira/browse/HADOOP-10003
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.1.1-beta
Reporter: Jason Dere
 Attachments: HADOOP-10003.1.patch, HADOOP-10003.2.patch, 
 HADOOP-10003.3.patch, HADOOP-10003.4.patch, HADOOP-10003.5.patch, test.har.tar


 It looks like HarFileSystem.listLocatedStatus() doesn't work properly because 
 it is inheriting FilterFileSystem's implementation.  This is causing archive 
 unit tests to fail in Hive when using hadoop 2.1.1.
 If HarFileSystem overrides listLocatedStatus() to use FileSystem's 
 implementation, the Hive unit tests pass.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10004) [Documentation] hadoop.ssl.enabled knob will no longer be used for MR AM and JobHistoryServer

2013-10-01 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated HADOOP-10004:
---

Summary: [Documentation] hadoop.ssl.enabled knob will no longer be used for 
MR AM and JobHistoryServer  (was: hadoop.ssl.enabled knob will no longer be 
used for MR AM and JobHistoryServer)

 [Documentation] hadoop.ssl.enabled knob will no longer be used for MR AM and 
 JobHistoryServer
 -

 Key: HADOOP-10004
 URL: https://issues.apache.org/jira/browse/HADOOP-10004
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
Priority: Blocker
 Attachments: HADOOP-10004.20131027.1.patch


 it is related to MAPREDUCE-5536



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10003) HarFileSystem.listLocatedStatus() fails

2013-10-01 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783379#comment-13783379
 ] 

Sanjay Radia commented on HADOOP-10003:
---

+1
Create a new jira to update the har test to ensure that HarFileSystem implements 
every declared method of FileSystem (see 
TestFilterFileSystem#testFilterFileSystem() - it does a similar check). This 
ensures that when a new method is added to FileSystem, the test will catch 
whether HarFileSystem has been updated accordingly.
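A toy version of the suggested reflection check, with stand-in Base/Sub classes instead of FileSystem/HarFileSystem; unlike the real TestFilterFileSystem check, this simplified sketch compares method names only, not full signatures:

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch: report every method declared on a base class that a
// subclass does not declare (i.e. does not override) itself.
public class OverrideCheck {

    public static class Base {
        public void a() {}
        public void b() {}
    }

    // Sub overrides a() but forgets b(), so the check should flag "b".
    public static class Sub extends Base {
        @Override
        public void a() {}
    }

    public static List<String> missingOverrides(Class<?> base, Class<?> sub) {
        Set<String> declared = new HashSet<>();
        for (Method m : sub.getDeclaredMethods()) {
            declared.add(m.getName());
        }
        List<String> missing = new ArrayList<>();
        for (Method m : base.getDeclaredMethods()) {
            if (!declared.contains(m.getName())) {
                missing.add(m.getName());
            }
        }
        return missing;
    }
}
```

A test built on this idea fails as soon as a new FileSystem method lands without a matching HarFileSystem override.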

 HarFileSystem.listLocatedStatus() fails
 ---

 Key: HADOOP-10003
 URL: https://issues.apache.org/jira/browse/HADOOP-10003
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.1.1-beta
Reporter: Jason Dere
 Attachments: HADOOP-10003.1.patch, HADOOP-10003.2.patch, 
 HADOOP-10003.3.patch, HADOOP-10003.4.patch, HADOOP-10003.5.patch, test.har.tar


 It looks like HarFileSystem.listLocatedStatus() doesn't work properly because 
 it is inheriting FilterFileSystem's implementation.  This is causing archive 
 unit tests to fail in Hive when using hadoop 2.1.1.
 If HarFileSystem overrides listLocatedStatus() to use FileSystem's 
 implementation, the Hive unit tests pass.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-9078) enhance unit-test coverage of class org.apache.hadoop.fs.FileContext

2013-10-01 Thread Robert Parker (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783392#comment-13783392
 ] 

Robert Parker commented on HADOOP-9078:
---

Thanks [~dennisyv] and [~iveselovsky] for the patches.  Both the branch-2 and 
trunk patches apply to trunk fine (as all the affected files are identical in 
both branch-2 and trunk), but the patches themselves differ; for example, 
FileContextMainOperationsBaseTest.java is different for testWorkingDirectory.  
Please reconcile these patches so they make the same change. 

 enhance unit-test coverage of class org.apache.hadoop.fs.FileContext
 

 Key: HADOOP-9078
 URL: https://issues.apache.org/jira/browse/HADOOP-9078
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: HADOOP-9078--b.patch, HADOOP-9078-branch-0.23.patch, 
 HADOOP-9078-branch-2--b.patch, HADOOP-9078-branch-2--c.patch, 
 HADOOP-9078-branch-2--N1.patch, HADOOP-9078-branch-2--N2.patch, 
 HADOOP-9078-branch-2--N3.patch, HADOOP-9078-branch-2--N4.patch, 
 HADOOP-9078-branch-2.patch, HADOOP-9078.patch, 
 HADOOP-9078-patch-from-[trunk-gd]-to-[fb-HADOOP-9078-trunk-gd]-N1.patch, 
 HADOOP-9078-trunk--N1.patch, HADOOP-9078-trunk--N2.patch, 
 HADOOP-9078-trunk--N6.patch, HADOOP-9078-trunk--N8.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10012) Secure Oozie jobs fail with delegation token renewal exception in HA setup

2013-10-01 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783439#comment-13783439
 ] 

Sanjay Radia commented on HADOOP-10012:
---

I am a little bit worried about the key name in the map 
{code}
Text alias = new Text(HA_DT_SERVICE_PREFIX + "//" + specificToken.getService());
ugi.addToken(alias, specificToken);
{code}

The original code added it using the unchanged service name.

 Secure Oozie jobs fail with delegation token renewal exception in HA setup
 --

 Key: HADOOP-10012
 URL: https://issues.apache.org/jira/browse/HADOOP-10012
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.1.1-beta
Reporter: Arpit Gupta
Assignee: Suresh Srinivas
 Attachments: HADOOP-10012.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10012) Secure Oozie jobs fail with delegation token renewal exception in HA setup

2013-10-01 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783455#comment-13783455
 ] 

Sanjay Radia commented on HADOOP-10012:
---

Turns out the key name in the map is not used to look up a token when connecting 
to a service. Instead the token selector grabs all tokens and uses the service 
name *inside* the token:
{code}
for (Token<? extends TokenIdentifier> token : tokens) {
  if (kindName.equals(token.getKind())
      && service.equals(token.getService())) {
    return (Token<TokenIdent>) token;
  }
}
{code}
I think changing the key in the map should be okay.
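A tiny self-contained sketch (stub classes, not Hadoop's real Token/selector types) of why the map key is irrelevant to selection, mirroring the loop quoted above:

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

// Stand-in for Hadoop's Token: the kind and service live *inside* the token.
class StubToken {
    final String kind, service;
    StubToken(String kind, String service) { this.kind = kind; this.service = service; }
}

public class SelectorSketch {
    // Scans all tokens and matches on the embedded kind/service,
    // ignoring whatever key the token was stored under.
    static StubToken select(String kind, String service, Collection<StubToken> tokens) {
        for (StubToken t : tokens) {
            if (kind.equals(t.kind) && service.equals(t.service)) {
                return t;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Map<String, StubToken> creds = new HashMap<>();
        StubToken t = new StubToken("HDFS_DELEGATION_TOKEN", "nn1:8020");
        creds.put("some-arbitrary-alias", t); // the key plays no role in selection
        System.out.println(select("HDFS_DELEGATION_TOKEN", "nn1:8020", creds.values()) == t); // prints "true"
    }
}
```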
Daryn added this for debugging assistance - quote from IM:
{quote}
I figured it should have a unique name just in case, for some reason, the 
client really did have a token for the physical service.  Plus to simplify 
debugging if something goes awry again.
it won't break anything, because nothing really looks for a token by its key 
other than some mr/yarn stuff (grumble)
{quote}

 +1 for the patch.

Todd/Atm, didn't you run into this bug with CDH4 and CDH5? (Even though CDH 
ships MR1, wouldn't it hit the same issue?)

 Secure Oozie jobs fail with delegation token renewal exception in HA setup
 --

 Key: HADOOP-10012
 URL: https://issues.apache.org/jira/browse/HADOOP-10012
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.1.1-beta
Reporter: Arpit Gupta
Assignee: Suresh Srinivas
 Attachments: HADOOP-10012.patch








[jira] [Commented] (HADOOP-10003) HarFileSystem.listLocatedStatus() fails

2013-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783453#comment-13783453
 ] 

Hudson commented on HADOOP-10003:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #4508 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4508/])
HADOOP-10003. HarFileSystem.listLocatedStatus() fails. Contributed by Jason 
Dere and suresh. (suresh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1528256)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HarFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystemBasics.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/test.har
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/test.har/.part-0.crc
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/test.har/_SUCCESS
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/test.har/_index
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/test.har/_masterindex
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/test.har/part-0


 HarFileSystem.listLocatedStatus() fails
 ---

 Key: HADOOP-10003
 URL: https://issues.apache.org/jira/browse/HADOOP-10003
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.1.1-beta
Reporter: Jason Dere
 Attachments: HADOOP-10003.1.patch, HADOOP-10003.2.patch, 
 HADOOP-10003.3.patch, HADOOP-10003.4.patch, HADOOP-10003.5.patch, test.har.tar


 It looks like HarFileSystem.listLocatedStatus() doesn't work properly because 
 it is inheriting FilterFileSystem's implementation.  This is causing archive 
 unit tests to fail in Hive when using hadoop 2.1.1.
 If HarFileSystem overrides listLocatedStatus() to use FileSystem's 
 implementation, the Hive unit tests pass.
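The shape of the bug can be sketched with a self-contained toy hierarchy (stub classes, not the real Hadoop FileSystem API): a filtering wrapper forwards the call to the wrapped filesystem, so the archive subclass must instead use the base class's generic implementation, which dispatches back to the subclass's own listStatus().

```java
class BaseFs {
    // Generic implementation built on top of listStatus(); correct for HAR.
    String listLocatedStatus() { return "base:" + listStatus(); }
    String listStatus() { return "base-entries"; }
}

class FilterFs extends BaseFs {
    final BaseFs raw;
    FilterFs(BaseFs raw) { this.raw = raw; }
    // Forwards to the wrapped fs -- wrong for an archive layered on top of it.
    @Override String listLocatedStatus() { return raw.listLocatedStatus(); }
    @Override String listStatus() { return "har-entries"; }
}

class HarFs extends FilterFs {
    HarFs(BaseFs raw) { super(raw); }
    // The fix: reuse the generic base implementation, which in turn calls
    // this class's own (inherited) listStatus().
    @Override String listLocatedStatus() { return "base:" + listStatus(); }
}

public class HarSketch {
    public static void main(String[] args) {
        BaseFs raw = new BaseFs();
        System.out.println(new FilterFs(raw).listLocatedStatus()); // prints "base:base-entries" (wrong for HAR)
        System.out.println(new HarFs(raw).listLocatedStatus());    // prints "base:har-entries" (correct)
    }
}
```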





[jira] [Updated] (HADOOP-10003) HarFileSystem.listLocatedStatus() fails

2013-10-01 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10003:
-

   Resolution: Fixed
Fix Version/s: 2.1.2-beta
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed the patch to 2.1.2 beta and all the branches leading up to it.

 HarFileSystem.listLocatedStatus() fails
 ---

 Key: HADOOP-10003
 URL: https://issues.apache.org/jira/browse/HADOOP-10003
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.1.1-beta
Reporter: Jason Dere
 Fix For: 2.1.2-beta

 Attachments: HADOOP-10003.1.patch, HADOOP-10003.2.patch, 
 HADOOP-10003.3.patch, HADOOP-10003.4.patch, HADOOP-10003.5.patch, test.har.tar


 It looks like HarFileSystem.listLocatedStatus() doesn't work properly because 
 it is inheriting FilterFileSystem's implementation.  This is causing archive 
 unit tests to fail in Hive when using hadoop 2.1.1.
 If HarFileSystem overrides listLocatedStatus() to use FileSystem's 
 implementation, the Hive unit tests pass.





[jira] [Commented] (HADOOP-8315) Support SASL-authenticated ZooKeeper in ActiveStandbyElector

2013-10-01 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783477#comment-13783477
 ] 

Aaron T. Myers commented on HADOOP-8315:


+1, the latest patch looks good to me.

 Support SASL-authenticated ZooKeeper in ActiveStandbyElector
 

 Key: HADOOP-8315
 URL: https://issues.apache.org/jira/browse/HADOOP-8315
 Project: Hadoop Common
  Issue Type: Improvement
  Components: auto-failover, ha
Affects Versions: Auto Failover (HDFS-3042)
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-8315.txt, hadoop-8315_v2.txt


 Currently, if you try to use SASL-authenticated ZK with the 
 ActiveStandbyElector, you run into a couple issues:
 1) We hit ZOOKEEPER-1437 - we need to wait until we see SaslAuthenticated 
 before we can make any requests
 2) We currently throw a fatalError when we see the SaslAuthenticated callback 
 on the connection watcher
 We need to wait for ZK-1437 upstream, and then upgrade to the fixed version 
 for #1. For #2 we just need to add a case there and ignore it.
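The fix for #2 can be sketched in a self-contained way (enum and handler names here are stand-ins, not the real ActiveStandbyElector code): treat SaslAuthenticated as a benign connection event instead of routing it to the fatal-error path.

```java
public class WatcherSketch {
    // Stand-in for ZooKeeper's Watcher.Event.KeeperState.
    enum KeeperState { SyncConnected, Disconnected, Expired, SaslAuthenticated }

    static String handle(KeeperState state) {
        switch (state) {
            case SyncConnected:
                return "connected";
            case SaslAuthenticated:
                // Previously this fell through to the fatal-error path;
                // the fix is simply to add a case and ignore the event.
                return "ignored";
            default:
                return "fatal";
        }
    }

    public static void main(String[] args) {
        System.out.println(handle(KeeperState.SaslAuthenticated)); // prints "ignored"
    }
}
```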





[jira] [Updated] (HADOOP-10012) Secure Oozie jobs fail with delegation token renewal exception in Namenode HA setup

2013-10-01 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10012:
-

Summary: Secure Oozie jobs fail with delegation token renewal exception in 
Namenode HA setup  (was: Secure Oozie jobs fail with delegation token renewal 
exception in HA setup)

 Secure Oozie jobs fail with delegation token renewal exception in Namenode HA 
 setup
 ---

 Key: HADOOP-10012
 URL: https://issues.apache.org/jira/browse/HADOOP-10012
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.1.1-beta
Reporter: Arpit Gupta
Assignee: Suresh Srinivas
 Attachments: HADOOP-10012.patch








[jira] [Updated] (HADOOP-10012) Secure Oozie jobs fail with delegation token renewal exception in Namenode HA setup

2013-10-01 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10012:
-

Attachment: HADOOP-10012.1.patch

The attached patch removes an unnecessary code change in TestDelegationToken.java.

 Secure Oozie jobs fail with delegation token renewal exception in Namenode HA 
 setup
 ---

 Key: HADOOP-10012
 URL: https://issues.apache.org/jira/browse/HADOOP-10012
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.1.1-beta
Reporter: Arpit Gupta
Assignee: Suresh Srinivas
 Attachments: HADOOP-10012.1.patch, HADOOP-10012.patch








[jira] [Commented] (HADOOP-9920) Should upgrade maven-surefire-plugin version to avoid hitting SUREFIRE-910

2013-10-01 Thread Ashish Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783511#comment-13783511
 ] 

Ashish Singh commented on HADOOP-9920:
--

+1 for the patch.

 Should upgrade maven-surefire-plugin version to avoid hitting SUREFIRE-910
 --

 Key: HADOOP-9920
 URL: https://issues.apache.org/jira/browse/HADOOP-9920
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.1.0-beta
Reporter: Yu Li
Assignee: Yu Li
 Attachments: HADOOP-9920.patch


 While running UTs against 2.1.0-beta on our own Jenkins server, the run was 
 interrupted at the hadoop-common project with the below exception:
 {noformat}
 ExecutionException; nested exception is 
 java.util.concurrent.ExecutionException: java.lang.RuntimeException: The 
 forked VM terminated without saying properly goodbye. VM crash or System.exit 
 called ?
 {noformat}
 Further checking shows we ran into 
 [SUREFIRE-910|http://jira.codehaus.org/browse/SUREFIRE-910], which reports the 
 same issue; it was fixed in Surefire 2.13, while our maven-surefire-plugin 
 version is still 2.12.3. We should upgrade to the latest release, 2.16.
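A hedged sketch of the pom change this implies (the plugin coordinates are standard Maven; where exactly the version is pinned in Hadoop's parent pom is an assumption):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <!-- was 2.12.3; 2.13 is the first release containing the SUREFIRE-910 fix -->
  <version>2.16</version>
</plugin>
```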





[jira] [Updated] (HADOOP-10012) Secure Oozie jobs fail with delegation token renewal exception in Namenode HA setup

2013-10-01 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10012:
-

Status: Patch Available  (was: Open)

 Secure Oozie jobs fail with delegation token renewal exception in Namenode HA 
 setup
 ---

 Key: HADOOP-10012
 URL: https://issues.apache.org/jira/browse/HADOOP-10012
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.1.1-beta
Reporter: Arpit Gupta
Assignee: Suresh Srinivas
 Attachments: HADOOP-10012.1.patch, HADOOP-10012.patch








[jira] [Updated] (HADOOP-8315) Support SASL-authenticated ZooKeeper in ActiveStandbyElector

2013-10-01 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-8315:


   Resolution: Fixed
Fix Version/s: 2.3.0
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to branch-2, branch-2.1, and trunk. I'm not sure if I set the Fix 
Versions right, since there's no 2.2.0 available to choose from. Feel free to 
update it if I got it wrong.

 Support SASL-authenticated ZooKeeper in ActiveStandbyElector
 

 Key: HADOOP-8315
 URL: https://issues.apache.org/jira/browse/HADOOP-8315
 Project: Hadoop Common
  Issue Type: Improvement
  Components: auto-failover, ha
Affects Versions: Auto Failover (HDFS-3042)
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 3.0.0, 2.3.0

 Attachments: hadoop-8315.txt, hadoop-8315_v2.txt


 Currently, if you try to use SASL-authenticated ZK with the 
 ActiveStandbyElector, you run into a couple issues:
 1) We hit ZOOKEEPER-1437 - we need to wait until we see SaslAuthenticated 
 before we can make any requests
 2) We currently throw a fatalError when we see the SaslAuthenticated callback 
 on the connection watcher
 We need to wait for ZK-1437 upstream, and then upgrade to the fixed version 
 for #1. For #2 we just need to add a case there and ignore it.





[jira] [Commented] (HADOOP-9758) Provide configuration option for FileSystem/FileContext symlink resolution

2013-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783539#comment-13783539
 ] 

Hudson commented on HADOOP-9758:


SUCCESS: Integrated in Hadoop-trunk-Commit #4510 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4510/])
move HADOOP-9758 to the branch-2.1.2 section (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1528288)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 Provide configuration option for FileSystem/FileContext symlink resolution
 --

 Key: HADOOP-9758
 URL: https://issues.apache.org/jira/browse/HADOOP-9758
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Andrew Wang
Assignee: Andrew Wang
 Fix For: 2.3.0

 Attachments: hadoop-9758-4.patch, hadoop-9758-5.patch, 
 hadoop-9758-6.patch, hdfs-4968-1.patch, hdfs-4968-2.patch, hdfs-4968-3.patch


 With FileSystem symlink support incoming in HADOOP-8040, some clients will 
 wish to not transparently resolve symlinks. This is somewhat similar to 
 O_NOFOLLOW in open(2).
 The rationale is a security model where a user can invoke a third-party 
 service running as a service user to operate on the user's data. For 
 instance, users might want to use Hive to query data in their homedirs, where 
 Hive runs as the Hive user and the data is readable by the Hive user. This 
 leads to a security issue with symlinks:
 # User Mallory invokes Hive to process data files in {{/user/mallory/hive/}}
 # Hive checks permissions on the files in {{/user/mallory/hive/}} and allows 
 the query to proceed.
 # RACE: Mallory replaces the files in {{/user/mallory/hive}} with symlinks 
 that point to user Ann's Hive files in {{/user/ann/hive}}. These files aren't 
 readable by Mallory, but she can create whatever symlinks she wants in her 
 own scratch directory.
 # Hive's MR jobs happily resolve the symlinks and access Ann's private data.
 This is also potentially useful for clients using FileContext, so let's add 
 it there too.
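The O_NOFOLLOW-style behavior can be sketched with a self-contained toy namespace (hypothetical names, not the HADOOP-9758 API): when symlink resolution is disabled, hitting a link raises an error instead of silently following it, which closes the race above.

```java
import java.util.HashMap;
import java.util.Map;

public class NoFollowSketch {
    // Toy namespace: path -> either "file:<data>" or "link:<target>".
    static final Map<String, String> ns = new HashMap<>();

    static String open(String path, boolean resolveSymlinks) {
        String entry = ns.get(path);
        if (entry == null) throw new IllegalArgumentException("no such path: " + path);
        if (entry.startsWith("link:")) {
            if (!resolveSymlinks) {
                // Analogue of open(2) with O_NOFOLLOW refusing a symlink.
                throw new IllegalStateException("symlink encountered: " + path);
            }
            return open(entry.substring("link:".length()), true);
        }
        return entry.substring("file:".length());
    }

    public static void main(String[] args) {
        ns.put("/user/ann/hive/data", "file:secret");
        ns.put("/user/mallory/hive/data", "link:/user/ann/hive/data");
        System.out.println(open("/user/mallory/hive/data", true)); // follows the link
        try {
            open("/user/mallory/hive/data", false);
        } catch (IllegalStateException e) {
            System.out.println("refused: " + e.getMessage());
        }
    }
}
```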





[jira] [Updated] (HADOOP-9758) Provide configuration option for FileSystem/FileContext symlink resolution

2013-10-01 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-9758:
-

Fix Version/s: (was: 2.3.0)
   2.1.2-beta

 Provide configuration option for FileSystem/FileContext symlink resolution
 --

 Key: HADOOP-9758
 URL: https://issues.apache.org/jira/browse/HADOOP-9758
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Andrew Wang
Assignee: Andrew Wang
 Fix For: 2.1.2-beta

 Attachments: hadoop-9758-4.patch, hadoop-9758-5.patch, 
 hadoop-9758-6.patch, hdfs-4968-1.patch, hdfs-4968-2.patch, hdfs-4968-3.patch


 With FileSystem symlink support incoming in HADOOP-8040, some clients will 
 wish to not transparently resolve symlinks. This is somewhat similar to 
 O_NOFOLLOW in open(2).
 The rationale is a security model where a user can invoke a third-party 
 service running as a service user to operate on the user's data. For 
 instance, users might want to use Hive to query data in their homedirs, where 
 Hive runs as the Hive user and the data is readable by the Hive user. This 
 leads to a security issue with symlinks:
 # User Mallory invokes Hive to process data files in {{/user/mallory/hive/}}
 # Hive checks permissions on the files in {{/user/mallory/hive/}} and allows 
 the query to proceed.
 # RACE: Mallory replaces the files in {{/user/mallory/hive}} with symlinks 
 that point to user Ann's Hive files in {{/user/ann/hive}}. These files aren't 
 readable by Mallory, but she can create whatever symlinks she wants in her 
 own scratch directory.
 # Hive's MR jobs happily resolve the symlinks and access Ann's private data.
 This is also potentially useful for clients using FileContext, so let's add 
 it there too.





[jira] [Commented] (HADOOP-8315) Support SASL-authenticated ZooKeeper in ActiveStandbyElector

2013-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783559#comment-13783559
 ] 

Hudson commented on HADOOP-8315:


SUCCESS: Integrated in Hadoop-trunk-Commit #4511 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4511/])
HADOOP-8315. Support SASL-authenticated ZooKeeper in ActiveStandbyElector. 
Contributed by Todd Lipcon (todd: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1528293)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
* /hadoop/common/trunk/hadoop-project/pom.xml


 Support SASL-authenticated ZooKeeper in ActiveStandbyElector
 

 Key: HADOOP-8315
 URL: https://issues.apache.org/jira/browse/HADOOP-8315
 Project: Hadoop Common
  Issue Type: Improvement
  Components: auto-failover, ha
Affects Versions: Auto Failover (HDFS-3042)
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 3.0.0, 2.3.0

 Attachments: hadoop-8315.txt, hadoop-8315_v2.txt


 Currently, if you try to use SASL-authenticated ZK with the 
 ActiveStandbyElector, you run into a couple issues:
 1) We hit ZOOKEEPER-1437 - we need to wait until we see SaslAuthenticated 
 before we can make any requests
 2) We currently throw a fatalError when we see the SaslAuthenticated callback 
 on the connection watcher
 We need to wait for ZK-1437 upstream, and then upgrade to the fixed version 
 for #1. For #2 we just need to add a case there and ignore it.





[jira] [Updated] (HADOOP-8315) Support SASL-authenticated ZooKeeper in ActiveStandbyElector

2013-10-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-8315:


Fix Version/s: (was: 2.3.0)
   2.1.2-beta

 Support SASL-authenticated ZooKeeper in ActiveStandbyElector
 

 Key: HADOOP-8315
 URL: https://issues.apache.org/jira/browse/HADOOP-8315
 Project: Hadoop Common
  Issue Type: Improvement
  Components: auto-failover, ha
Affects Versions: Auto Failover (HDFS-3042)
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 3.0.0, 2.1.2-beta

 Attachments: hadoop-8315.txt, hadoop-8315_v2.txt


 Currently, if you try to use SASL-authenticated ZK with the 
 ActiveStandbyElector, you run into a couple issues:
 1) We hit ZOOKEEPER-1437 - we need to wait until we see SaslAuthenticated 
 before we can make any requests
 2) We currently throw a fatalError when we see the SaslAuthenticated callback 
 on the connection watcher
 We need to wait for ZK-1437 upstream, and then upgrade to the fixed version 
 for #1. For #2 we just need to add a case there and ignore it.





[jira] [Commented] (HADOOP-10012) Secure Oozie jobs fail with delegation token renewal exception in Namenode HA setup

2013-10-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783596#comment-13783596
 ] 

Hadoop QA commented on HADOOP-10012:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12606225/HADOOP-10012.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3156//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3156//console

This message is automatically generated.

 Secure Oozie jobs fail with delegation token renewal exception in Namenode HA 
 setup
 ---

 Key: HADOOP-10012
 URL: https://issues.apache.org/jira/browse/HADOOP-10012
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.1.1-beta
Reporter: Arpit Gupta
Assignee: Suresh Srinivas
 Attachments: HADOOP-10012.1.patch, HADOOP-10012.patch








[jira] [Commented] (HADOOP-10012) Secure Oozie jobs fail with delegation token renewal exception in Namenode HA setup

2013-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783625#comment-13783625
 ] 

Hudson commented on HADOOP-10012:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #4512 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4512/])
HADOOP-10012. Secure Oozie jobs fail with delegation token renewal exception in 
Namenode HA setup. Contributed by Daryn Sharp and Suresh Srinivas. (suresh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1528301)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java


 Secure Oozie jobs fail with delegation token renewal exception in Namenode HA 
 setup
 ---

 Key: HADOOP-10012
 URL: https://issues.apache.org/jira/browse/HADOOP-10012
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.1.1-beta
Reporter: Arpit Gupta
Assignee: Suresh Srinivas
 Attachments: HADOOP-10012.1.patch, HADOOP-10012.patch








[jira] [Updated] (HADOOP-10012) Secure Oozie jobs fail with delegation token renewal exception in Namenode HA setup

2013-10-01 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10012:
-

Priority: Blocker  (was: Major)

 Secure Oozie jobs fail with delegation token renewal exception in Namenode HA 
 setup
 ---

 Key: HADOOP-10012
 URL: https://issues.apache.org/jira/browse/HADOOP-10012
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.1.1-beta
Reporter: Arpit Gupta
Assignee: Suresh Srinivas
Priority: Blocker
 Fix For: 2.1.2-beta

 Attachments: HADOOP-10012.1.patch, HADOOP-10012.patch








[jira] [Updated] (HADOOP-10012) Secure Oozie jobs fail with delegation token renewal exception in Namenode HA setup

2013-10-01 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10012:
-

   Resolution: Fixed
Fix Version/s: 2.1.2-beta
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed the patch to 2.1.2, branch-2 and trunk.

Thank you Daryn for the initial patch. Thank you Sanjay for the review.

 Secure Oozie jobs fail with delegation token renewal exception in Namenode HA 
 setup
 ---

 Key: HADOOP-10012
 URL: https://issues.apache.org/jira/browse/HADOOP-10012
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.1.1-beta
Reporter: Arpit Gupta
Assignee: Suresh Srinivas
Priority: Blocker
 Fix For: 2.1.2-beta

 Attachments: HADOOP-10012.1.patch, HADOOP-10012.patch





