[jira] [Created] (HADOOP-9683) Wrap IpcConnectionContext in RPC headers

2013-07-02 Thread Luke Lu (JIRA)
Luke Lu created HADOOP-9683:
---

 Summary: Wrap IpcConnectionContext in RPC headers
 Key: HADOOP-9683
 URL: https://issues.apache.org/jira/browse/HADOOP-9683
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Reporter: Luke Lu
Assignee: Daryn Sharp
Priority: Blocker


After HADOOP-9421, all RPC exchanges (including SASL) are wrapped in RPC 
headers except IpcConnectionContext, which is still sent as raw protobuf; this 
makes request pipelining (a desirable feature for things like HDFS-2856) 
impossible to achieve in a backward-compatible way. Let's finish the job and 
wrap IpcConnectionContext with the RPC request header, using a call id of 
SET_IPC_CONNECTION_CONTEXT. Or simply make it an optional field in the RPC 
request header that gets set for the first RPC call of a given stream.
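For illustration, a rough sketch of the first option (framing assumptions 
only, not a committed design; CONNECTION_CONTEXT_CALL_ID stands for a reserved 
call id such as SET_IPC_CONNECTION_CONTEXT, and the java.io and generated 
protobuf imports are assumed):

{code}
// Sketch only: frame the connection context exactly like a normal RPC request,
// i.e. [4-byte total length][delimited RpcRequestHeaderProto][delimited payload].
void sendConnectionContext(DataOutputStream out,
    RpcRequestHeaderProto header,          // built with CONNECTION_CONTEXT_CALL_ID
    IpcConnectionContextProto context) throws IOException {
  ByteArrayOutputStream buf = new ByteArrayOutputStream();
  header.writeDelimitedTo(buf);   // header first, as for every other exchange
  context.writeDelimitedTo(buf);  // payload is the existing connection context
  out.writeInt(buf.size());       // same length-prefixed framing as normal calls
  buf.writeTo(out);
  out.flush();
}
{code}

A pipelining-aware server can then dispatch on the call id instead of assuming 
the first frame on a stream is raw protobuf.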



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: write access to hbase wiki

2013-07-02 Thread Steve Loughran
done

On 1 July 2013 19:47, erman pattuk ermanpat...@su.sabanciuniv.edu wrote:

 Hi,

 I was planning to add our current project, BigSecret, to the powered-by
 list of HBase. https://wiki.apache.org/hadoop/PoweredBy
 I believe I need to get write permissions to do so. My username is
 ermanpattuk.

 Thanks,

 Erman Pattuk



Build failed in Jenkins: Hadoop-Common-trunk #817

2013-07-02 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-trunk/817/changes

Changes:

[cnauroth] HADOOP-9678. TestRPC#testStopsAllThreads intermittently fails on 
Windows. Contributed by Ivan Mitic.

[cmccabe] HADOOP-9676.  Make maximum RPC buffer size configurable (Colin 
Patrick McCabe)

[cmccabe] HADOOP-9414.  Refactor out FSLinkResolver and relevant helper methods.

[kihwal] HDFS-4888. Refactor and fix FSNamesystem.getTurnOffTip. Contributed by 
Ravi Prakash.

[tgraves] Preparing for 0.23.9 release

--
[...truncated 52042 lines...]
Adding reference: maven.compile.classpath
Adding reference: maven.runtime.classpath
Adding reference: maven.test.classpath
Adding reference: maven.plugin.classpath
Adding reference: maven.project
Adding reference: maven.project.helper
Adding reference: maven.local.repository
[DEBUG] Initialize Maven Ant Tasks
parsing buildfile 
jar:file:/home/jenkins/.m2/repository/org/apache/maven/plugins/maven-antrun-plugin/1.6/maven-antrun-plugin-1.6.jar!/org/apache/maven/ant/tasks/antlib.xml
 with URI = 
jar:file:/home/jenkins/.m2/repository/org/apache/maven/plugins/maven-antrun-plugin/1.6/maven-antrun-plugin-1.6.jar!/org/apache/maven/ant/tasks/antlib.xml
 from a zip file
parsing buildfile 
jar:file:/home/jenkins/.m2/repository/org/apache/ant/ant/1.8.1/ant-1.8.1.jar!/org/apache/tools/ant/antlib.xml
 with URI = 
jar:file:/home/jenkins/.m2/repository/org/apache/ant/ant/1.8.1/ant-1.8.1.jar!/org/apache/tools/ant/antlib.xml
 from a zip file
Class org.apache.maven.ant.tasks.AttachArtifactTask loaded from parent loader 
(parentFirst)
 +Datatype attachartifact org.apache.maven.ant.tasks.AttachArtifactTask
Class org.apache.maven.ant.tasks.DependencyFilesetsTask loaded from parent 
loader (parentFirst)
 +Datatype dependencyfilesets org.apache.maven.ant.tasks.DependencyFilesetsTask
Setting project property: test.build.dir - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target/test-dir
Setting project property: test.exclude.pattern - _
Setting project property: hadoop.assemblies.version - 3.0.0-SNAPSHOT
Setting project property: test.exclude - _
Setting project property: distMgmtSnapshotsId - apache.snapshots.https
Setting project property: project.build.sourceEncoding - UTF-8
Setting project property: distMgmtSnapshotsUrl - 
https://repository.apache.org/content/repositories/snapshots
Setting project property: distMgmtStagingUrl - 
https://repository.apache.org/service/local/staging/deploy/maven2
Setting project property: test.build.data - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target/test-dir
Setting project property: commons-daemon.version - 1.0.13
Setting project property: hadoop.common.build.dir - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/../../hadoop-common-project/hadoop-common/target
Setting project property: testsThreadCount - 4
Setting project property: maven.test.redirectTestOutputToFile - true
Setting project property: jdiff.version - 1.0.9
Setting project property: distMgmtStagingName - Apache Release Distribution 
Repository
Setting project property: project.reporting.outputEncoding - UTF-8
Setting project property: build.platform - Linux-i386-32
Setting project property: failIfNoTests - false
Setting project property: distMgmtStagingId - apache.staging.https
Setting project property: distMgmtSnapshotsName - Apache Development Snapshot 
Repository
Setting project property: ant.file - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/pom.xml
[DEBUG] Setting properties with prefix: 
Setting project property: project.groupId - org.apache.hadoop
Setting project property: project.artifactId - hadoop-common-project
Setting project property: project.name - Apache Hadoop Common Project
Setting project property: project.description - Apache Hadoop Common Project
Setting project property: project.version - 3.0.0-SNAPSHOT
Setting project property: project.packaging - pom
Setting project property: project.build.directory - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target
Setting project property: project.build.outputDirectory - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target/classes
Setting project property: project.build.testOutputDirectory - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target/test-classes
Setting project property: project.build.sourceDirectory - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/src/main/java
Setting project property: project.build.testSourceDirectory - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/src/test/java
Setting project property: localRepository -id: local
  url: file:///home/jenkins/.m2/repository/
   layout: none
Setting project property: settings.localRepository - 

[jira] [Created] (HADOOP-9684) the initialization may be missed for org.apache.ipc.Client$Connection

2013-07-02 Thread Hua xu (JIRA)
Hua xu created HADOOP-9684:
--

 Summary: the initialization may be missed for 
org.apache.ipc.Client$Connection
 Key: HADOOP-9684
 URL: https://issues.apache.org/jira/browse/HADOOP-9684
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.21.0, 1.0.3
Reporter: Hua xu


Today we saw that a TaskTracker kept throwing the same exception in our 
production environment:

2013-07-01 18:41:40,023 INFO org.apache.hadoop.mapred.TaskTracker: addFreeSlot 
: current free slots : 7
2013-07-01 18:41:43,026 INFO org.apache.hadoop.mapred.TaskTracker: 
LaunchTaskAction (registerTask): attempt_201208241212_27521_m_02_3 task's 
state:UNASSIGNED
2013-07-01 18:41:43,026 INFO org.apache.hadoop.mapred.TaskTracker: Trying to 
launch : attempt_201208241212_27521_m_02_3 which needs 1 slots
2013-07-01 18:41:43,026 INFO org.apache.hadoop.mapred.TaskTracker: In 
TaskLauncher, current free slots : 7 and trying to launch 
attempt_201208241212_27521_m_02_3 which needs 1 slots
2013-07-01 18:41:43,026 INFO 
org.apache.hadoop.mapreduce.server.tasktracker.Localizer: User-directories for 
the user sds are already initialized on this TT. Not doing anything.
2013-07-01 18:41:43,029 WARN org.apache.hadoop.mapred.TaskTracker: Error 
initializing attempt_201208241212_27521_m_02_3:
java.lang.NullPointerException

2013-07-01 18:41:43,029 ERROR org.apache.hadoop.mapred.TaskStatus: Trying to 
set finish time for task attempt_201208241212_27521_m_02_3 when no start 
time is set, stackTrace is : java.lang.Exception
at 
org.apache.hadoop.mapred.TaskStatus.setFinishTime(TaskStatus.java:195)
at 
org.apache.hadoop.mapred.MapTaskStatus.setFinishTime(MapTaskStatus.java:51)
at 
org.apache.hadoop.mapred.TaskTracker$TaskInProgress.kill(TaskTracker.java:2937)
at 
org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:2255)
at 
org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:2212)

  Then we viewed the log files of the TaskTracker and found that it had thrown 
several OutOfMemoryError: Java heap space errors about ten days ago. After 
that, the TaskTracker kept throwing this exception:


2013-06-22 12:39:42,296 INFO org.apache.hadoop.mapred.TaskTracker: 
LaunchTaskAction (registerTask): attempt_201208241212_26088_m_43_1 task's 
state:UNASSIGNED
2013-06-22 12:39:42,296 INFO org.apache.hadoop.mapred.TaskTracker: Trying to 
launch : attempt_201208241212_26088_m_43_1 which needs 1 slots
2013-06-22 12:39:42,296 INFO org.apache.hadoop.mapred.TaskTracker: In 
TaskLauncher, current free slots : 7 and trying to launch 
attempt_201208241212_26088_m_43_1 which needs 1 slots
2013-06-22 12:39:42,296 INFO 
org.apache.hadoop.mapreduce.server.tasktracker.Localizer: Initializing user sds 
on this TT.
2013-06-22 12:39:42,300 WARN org.apache.hadoop.mapred.TaskTracker: Error 
initializing attempt_201208241212_26088_m_43_1:
java.lang.NullPointerException
at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:630)
at org.apache.hadoop.ipc.Client.call(Client.java:886)
at 
org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
at $Proxy5.getFileInfo(Unknown Source)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy5.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:850)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:620)
at 
org.apache.hadoop.mapred.TaskTracker.localizeJobTokenFile(TaskTracker.java:3984)
at 
org.apache.hadoop.mapred.TaskTracker.localizeJobFiles(TaskTracker.java:1036)
at 
org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:977)
at 
org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:2247)
at 
org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:2212)

2013-06-22 12:39:42,300 ERROR org.apache.hadoop.mapred.TaskStatus: Trying to 
set finish time for task attempt_201208241212_26088_m_43_1 when no start 
time is set, stackTrace is : java.lang.Exception
at 
org.apache.hadoop.mapred.TaskStatus.setFinishTime(TaskStatus.java:195)
at 
org.apache.hadoop.mapred.MapTaskStatus.setFinishTime(MapTaskStatus.java:51)
at 
org.apache.hadoop.mapred.TaskTracker$TaskInProgress.kill(TaskTracker.java:2937)
at 

Re: KerberosName.rules are null during KerberosName.getShortName() in KerberosAuthenticationHandler

2013-07-02 Thread Alejandro Abdelnur
Hi Lulynn,

I've commented on the JIRA; seeing your email gives me a bit more context on 
what you are trying to do.

If I understand correctly, you are trying to use this outside of Hadoop. If
that is the case you should set the PREFIX.kerberos.name.rules=DEFAULT
(or a custom name.rules if you have one) in your hadoop-auth
AuthenticationFilter configuration.

This is required because you are not initializing UGI before initializing
the filter.
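
For example, a minimal Servlet 3.0 sketch (untested; the principal/keytab
values are placeholders for your environment, and if you configure the filter
via web.xml with a config.prefix you would prefix these parameter names
accordingly):

import java.util.EnumSet;
import javax.servlet.DispatcherType;
import javax.servlet.FilterRegistration;
import javax.servlet.ServletContext;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import org.apache.hadoop.security.authentication.server.AuthenticationFilter;

public class AuthFilterSetup implements ServletContextListener {
  @Override
  public void contextInitialized(ServletContextEvent sce) {
    ServletContext ctx = sce.getServletContext();
    FilterRegistration.Dynamic auth =
        ctx.addFilter("authFilter", AuthenticationFilter.class);
    auth.setInitParameter("type", "kerberos");
    // Placeholders: use the SPNEGO principal and keytab for your host.
    auth.setInitParameter("kerberos.principal",
        "HTTP/host.example.com@EXAMPLE.COM");
    auth.setInitParameter("kerberos.keytab",
        "/etc/security/keytabs/http.keytab");
    // The point of this thread: with no UGI initialization, set the rules here.
    auth.setInitParameter("kerberos.name.rules", "DEFAULT");
    auth.addMappingForUrlPatterns(
        EnumSet.of(DispatcherType.REQUEST), false, "/*");
  }

  @Override
  public void contextDestroyed(ServletContextEvent sce) {
  }
}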

Thanks.




On Mon, Jul 1, 2013 at 3:41 AM, lulynn_2008 lulynn_2...@163.com wrote:

  Hi All,

 I am trying to add Kerberos support to a web servlet via the Hadoop
 authentication classes. This is to make the web servlet authenticate its
 clients via Kerberos. I assume this should work, right?

 The whole design is to add AuthFilter at the server side and
 AuthenticatedURL.injectToken(conn, currentToken) when creating the connection
 at the client side.  But the process failed because of KerberosName.rules, so
 I made a fix based on the 2.0.4-alpha branch. Could you please help review it
 and give some suggestions? I think with this fix, we can add Kerberos support
 to any web servlet via the Hadoop authentication classes. I have opened
 HADOOP-9679 to track this issue and attached the patch.

 Error:
 The process failed during AuthenticationFilter.doFilter, with the following
 error:
 java.lang.NullPointerException
 at
 org.apache.hadoop.security.KerberosName.getShortName(KerberosName.java:384)
 at
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:328)
 at
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:302)
 at
 java.security.AccessController.doPrivileged(AccessController.java:310)
 at javax.security.auth.Subject.doAs(Subject.java:573)
 at
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:302)
 at
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:340)


 Root cause:
 This error happened because KerberosName.rules was not initialized. I found
 that this parameter is only initialized when UserGroupInformation (which
 manages Hadoop users and groups) is initialized, which happens when a Hadoop
 client (like Oozie) accesses Hadoop. But the servlet I am testing is not a
 Hadoop client, so currently there is no place that initializes it. I think we
 should make this work by giving KerberosName.rules the default value DEFAULT.

 FIX:
 Following is my draft fix, based on the hadoop-2.0.4-alpha branch. With this
 fix, my test web servlet supports Kerberos.
 --- a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/KerberosAuthenticationHandler.java
 +++ b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/KerberosAuthenticationHandler.java
 @@ -308,6 +308,10 @@ public AuthenticationToken run() throws Exception {
          } else {
            String clientPrincipal = gssContext.getSrcName().toString();
            KerberosName kerberosName = new KerberosName(clientPrincipal);
 +          if (!KerberosName.hasRulesBeenSet()) {
 +            LOG.warn("No rules applied to " + kerberosName.toString() + ". Using DEFAULT rules.");
 +            KerberosName.setRules("DEFAULT");
 +          }
            String userName = kerberosName.getShortName();
            token = new AuthenticationToken(userName, clientPrincipal, getType());
            response.setStatus(HttpServletResponse.SC_OK);





-- 
Alejandro


[jira] [Created] (HADOOP-9685) hadoop-config.cmd: builds a classpath that is too long on windows

2013-07-02 Thread Raja Aluri (JIRA)
Raja Aluri created HADOOP-9685:
--

 Summary: hadoop-config.cmd: builds a classpath that is too long on 
windows
 Key: HADOOP-9685
 URL: https://issues.apache.org/jira/browse/HADOOP-9685
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 1-win
Reporter: Raja Aluri
Assignee: Raja Aluri
 Fix For: 1-win


hadoop-config.cmd sets the classpath by listing each individual jar file in 
CLASSPATH. Downstream components often use hadoop-config.cmd to set the 
CLASSPATH for the Hadoop jars; after they add their own entries, the total 
classpath length frequently hits the Windows command-line length limit.
We should use classpath wildcards to reduce the length of the classpath.
http://docs.oracle.com/javase/6/docs/technotes/tools/windows/classpath.html
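
For example (a sketch; the directory layout shown is illustrative, not the 
exact branch-1-win layout), a single wildcard entry replaces dozens of 
individual jar entries because the JVM expands a trailing * to every jar in 
that directory:
{code}
set CLASSPATH=%HADOOP_HOME%\*;%HADOOP_HOME%\lib\*
{code}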



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: [VOTE] Release Apache Hadoop 0.23.9

2013-07-02 Thread Robert Evans
+1 downloaded the release.  Ran a couple of simple jobs and everything
worked.

On 7/1/13 12:20 PM, Thomas Graves tgra...@yahoo-inc.com wrote:

I've created a release candidate (RC0) for hadoop-0.23.9 that I would like
to release.

The RC is available at:
http://people.apache.org/~tgraves/hadoop-0.23.9-candidate-0/
The RC tag in svn is here:
http://svn.apache.org/viewvc/hadoop/common/tags/release-0.23.9-rc0/

The maven artifacts are available via repository.apache.org.

Please try the release and vote; the vote will run for the usual 7 days
til July 8th.

I am +1 (binding).

thanks,
Tom Graves



[jira] [Created] (HADOOP-9686) Easy access to final parameters in Configuration

2013-07-02 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-9686:
--

 Summary: Easy access to final parameters in Configuration
 Key: HADOOP-9686
 URL: https://issues.apache.org/jira/browse/HADOOP-9686
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.1.0-beta
Reporter: Jason Lowe


It would be nice if there were an easy way to get the final parameters within 
a Configuration.  This would allow clients who wrap Configuration to easily 
determine which properties should not be changed and to implement stricter 
semantics for them (e.g., throw an exception when attempts to change them are 
made).
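
One possible shape for this, as a sketch only (the method name and internals 
are assumptions, not a committed API; assumes java.util imports):
{code}
// Hypothetical addition to org.apache.hadoop.conf.Configuration: expose a
// read-only snapshot of the property names that were loaded as final.
public Set<String> getFinalParameters() {
  return Collections.unmodifiableSet(new HashSet<String>(finalParameters));
}
{code}
A wrapper could then consult this set and, for example, throw an exception 
before overwriting any key it contains.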

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9681) FileUtil.unTarUsingJava() should close the InputStream upon finishing

2013-07-02 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HADOOP-9681.
---

  Resolution: Fixed
   Fix Version/s: 2.1.0-beta
  1-win
  3.0.0
Target Version/s: 3.0.0, 1-win, 2.1.0-beta

I committed this to trunk, branch-2, branch-2.1-beta, and branch-1-win.  Thank 
you for contributing the fix, Chuan.

 FileUtil.unTarUsingJava() should close the InputStream upon finishing
 -

 Key: HADOOP-9681
 URL: https://issues.apache.org/jira/browse/HADOOP-9681
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 1-win, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 3.0.0, 1-win, 2.1.0-beta

 Attachments: HADOOP-9681-branch-1-win.patch, 
 HADOOP-9681-trunk.2.patch, HADOOP-9681-trunk.patch


 In the {{FileUtil.unTarUsingJava()}} method, we did not close the input 
 streams explicitly upon finishing. This could lead to a file handle leak on 
 Windows.
 I discovered this while investigating the unit test failure of 
 {{TestFSDownload.testDownloadArchive()}}. The FSDownload class uses 
 {{FileUtil.unTarUsingJava()}} to unpack a temporary archive file; later, the 
 temporary file should be deleted. Because of the file handle leak, the 
 {{File.delete()}} method fails, and the test case then fails because it 
 asserts that the temporary file should not exist.
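
 The general shape of such a fix, as a sketch only (not the committed patch; 
 unpackEntries stands for the existing extraction logic in FileUtil):
{code}
TarArchiveInputStream tis = null;
try {
  tis = new TarArchiveInputStream(
      new BufferedInputStream(new FileInputStream(inFile)));
  for (TarArchiveEntry entry = tis.getNextTarEntry(); entry != null;
      entry = tis.getNextTarEntry()) {
    unpackEntries(tis, entry, untarDir);
  }
} finally {
  // Close even on failure so Windows releases the file handle.
  IOUtils.cleanup(LOG, tis);
}
{code}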

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[DISCUSS] Hadoop SSO/Token Server Components

2013-07-02 Thread Larry McCay
All -

As a follow-up to the discussions that were had during Hadoop Summit, I would 
like to introduce the discussion topic around the moving parts of a Hadoop 
SSO/Token Service.
There are a couple of related JIRAs that can be referenced and may or may not 
be updated as a result of this discussion thread.

https://issues.apache.org/jira/browse/HADOOP-9533
https://issues.apache.org/jira/browse/HADOOP-9392

As the first aspect of the discussion, we should probably state the overall 
goals and scoping for this effort:
* An alternative authentication mechanism to Kerberos for user authentication
* A broader capability for integration into enterprise identity and SSO 
solutions
* Possibly the advertisement/negotiation of available authentication mechanisms
* Backward compatibility for the existing use of Kerberos
* No (or minimal) changes to existing Hadoop tokens (delegation, job, block 
access, etc)
* Pluggable authentication mechanisms across: RPC, REST and webui enforcement 
points
* Continued support for existing authorization policy/ACLs, etc
* Keeping more fine grained authorization policies in mind - like attribute 
based access control
- fine grained access control is a separate but related effort that we 
must not preclude with this effort
* Cross cluster SSO

In order to tease out the moving parts, here are a couple of high-level, 
simplified descriptions of the SSO interaction flow:
                           +------+
+------+   credentials 1   | SSO  |
|CLIENT|------------------>|SERVER|
+------+      :tokens      +------+
   2 |
     | access token
     V :requested resource
 +-------+
 |HADOOP |
 |SERVICE|
 +-------+

The above diagram represents the simplest interaction model for an SSO service 
in Hadoop.
1. client authenticates to SSO service and acquires an access token
  a. client presents credentials to an authentication service endpoint exposed 
by the SSO server (AS) and receives a token representing the authentication 
event and verified identity
  b. client then presents the identity token from 1.a. to the token endpoint 
exposed by the SSO server (TGS) to request an access token to a particular 
Hadoop service and receives an access token
2. client presents the Hadoop access token to the Hadoop service for which the 
access token has been granted and requests the desired resource or services
  a. access token is presented as appropriate for the service endpoint protocol 
being used
  b. Hadoop service token validation handler validates the token and verifies 
its integrity and the identity of the issuer

 +------+
 |  IdP |
 +------+
  1 ^ credentials
    | :idp_token
    |                      +------+
+------+    idp_token 2    | SSO  |
|CLIENT|------------------>|SERVER|
+------+      :tokens      +------+
   3 |
     | access token
     V :requested resource
 +-------+
 |HADOOP |
 |SERVICE|
 +-------+


The above diagram represents a slightly more complicated interaction model for 
an SSO service in Hadoop that removes Hadoop from the credential collection 
business.
1. client authenticates to a trusted identity provider within the enterprise 
and acquires an IdP specific token
  a. client presents credentials to an enterprise IdP and receives a token 
representing the authentication identity
2. client authenticates to SSO service and acquires an access token
  a. client presents idp_token to an authentication service endpoint exposed by 
the SSO server (AS) and receives a token representing the authentication event 
and verified identity
  b. client then presents the identity token from 2.a. to the token endpoint 
exposed by the SSO server (TGS) to request an access token to a particular 
Hadoop service and receives an access token
3. client presents the Hadoop access token to the Hadoop service for which the 
access token has been granted and requests the desired resource or services
  a. access token is presented as appropriate for the service endpoint protocol 
being used
  b. Hadoop service token validation handler validates the token and verifies 
its integrity and the identity of the issuer

Considering the above set of goals and high level interaction flow description, 
we can start to discuss the component inventory required to accomplish this 
vision:

1. SSO Server Instance: this component must be able to expose endpoints for 
both authentication of users by collecting and validating credentials and 
federation of identities represented by tokens from trusted IdPs within the 
enterprise. The endpoints should be composable so as to allow for multifactor 
authentication mechanisms. They will also need to return tokens that represent 
the authentication event and verified identity as well as access tokens for 
specific Hadoop services.

2. Authentication 

[jira] [Resolved] (HADOOP-9677) TestSetupAndCleanupFailure#testWithDFS fails on Windows

2013-07-02 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HADOOP-9677.
---

   Resolution: Fixed
Fix Version/s: 1-win

I committed this to branch-1-win.  Xi, thank you for the contribution.

 TestSetupAndCleanupFailure#testWithDFS fails on Windows
 ---

 Key: HADOOP-9677
 URL: https://issues.apache.org/jira/browse/HADOOP-9677
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Ivan Mitic
Assignee: Xi Fang
 Fix For: 1-win

 Attachments: HADOOP-9677.patch


 Exception:
 {noformat}
 junit.framework.AssertionFailedError: expected:2 but was:3
   at 
 org.apache.hadoop.mapred.TestSetupAndCleanupFailure.testSetupAndCleanupKill(TestSetupAndCleanupFailure.java:219)
   at 
 org.apache.hadoop.mapred.TestSetupAndCleanupFailure.testWithDFS(TestSetupAndCleanupFailure.java:282)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: creating 2.2.0 version in JIRA

2013-07-02 Thread Jason Lowe
I thought Arun intends for 2.2.0 to be created off of branch-2.1.0-beta 
and not off of branch-2.  As I understand it, only critical blockers 
will be the delta between 2.1.0-beta and 2.2.0, and items checked into 
branch-2 should be marked as fixed in 2.3.0.


Part of the confusion is that currently branch-2 builds as 
2.2.0-SNAPSHOT, but I believe Arun intended it to be 2.3.0-SNAPSHOT.


Jason

On 06/21/2013 12:05 PM, Alejandro Abdelnur wrote:

Thanks Suresh, didn't know that, will do.


On Fri, Jun 21, 2013 at 9:48 AM, Suresh Srinivas sur...@hortonworks.com wrote:


I have added it to the HDFS, HADOOP, MAPREDUCE projects. Can someone add it for
YARN?


On Fri, Jun 21, 2013 at 9:35 AM, Alejandro Abdelnur t...@cloudera.com

wrote:
When Arun created branch-2.1-beta he stated:

The expectation is that 2.2.0 will be limited to content in branch-2.1-beta 
and we stick to stabilizing it henceforth (I've deliberately not created 
2.2.0 fix-version on jira yet).

I am working on/committing some JIRAs that I'm putting in branch-2 (testcases 
and improvements), but I don't want to put them in branch-2.1-beta as they are 
not critical and I don't want to add unnecessary noise to the branch-2.1-beta 
release work.

Currently the branch-2 POMs have version 2.2.0, and the CHANGES.txt files do 
as well.

But because we did not create a JIRA version, I cannot close those JIRAs.

Can we please create the JIRA versions? Later we can rename them.

Thx


--
Alejandro




--
http://hortonworks.com/download/








Re: creating 2.2.0 version in JIRA

2013-07-02 Thread Alejandro Abdelnur
We need clarification on this then.

I was under the impression that branch-2 would be 2.2.0.

thx

On Tue, Jul 2, 2013 at 2:38 PM, Jason Lowe jl...@yahoo-inc.com wrote:

 I thought Arun intends for 2.2.0 to be created off of branch-2.1.0-beta
 and not off of branch-2.  As I understand it, only critical blockers will
 be the delta between 2.1.0-beta and 2.2.0, and items checked into branch-2
 should be marked as fixed in 2.3.0.

 Part of the confusion is that currently branch-2 builds as 2.2.0-SNAPSHOT,
 but I believe Arun intended it to be 2.3.0-SNAPSHOT.

 Jason


 On 06/21/2013 12:05 PM, Alejandro Abdelnur wrote:

 Thanks Suresh, didn't know that, will do.


 On Fri, Jun 21, 2013 at 9:48 AM, Suresh Srinivas sur...@hortonworks.com
 wrote:

 I have added it to the HDFS, HADOOP, MAPREDUCE projects. Can someone add it
 for
 YARN?


 On Fri, Jun 21, 2013 at 9:35 AM, Alejandro Abdelnur t...@cloudera.com

 wrote:
 When Arun created branch-2.1-beta he stated:

 The expectation is that 2.2.0 will be limited to content in branch-2.1-beta
 and we stick to stabilizing it henceforth (I've deliberately not created
 2.2.0 fix-version on jira yet).

 I am working on/committing some JIRAs that I'm putting in branch-2 (testcases
 and improvements), but I don't want to put them in branch-2.1-beta as they
 are not critical and I don't want to add unnecessary noise to the
 branch-2.1-beta release work.

 Currently the branch-2 POMs have version 2.2.0, and the CHANGES.txt files do
 as well.

 But because we did not create a JIRA version, I cannot close those JIRAs.

 Can we please create the JIRA versions? Later we can rename them.

 Thx


 --
 Alejandro



 --
 http://hortonworks.com/download/







-- 
Alejandro


[jira] [Created] (HADOOP-9687) Condor-Branch-1 TestJobTrackerQuiescence and TestFileLengthOnClusterRestart fail due to incorrect DFS path construction on Windows

2013-07-02 Thread Xi Fang (JIRA)
Xi Fang created HADOOP-9687:
---

 Summary: Condor-Branch-1 TestJobTrackerQuiescence and 
TestFileLengthOnClusterRestart fail due to incorrect DFS path construction 
on Windows
 Key: HADOOP-9687
 URL: https://issues.apache.org/jira/browse/HADOOP-9687
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
 Fix For: 1-win


TestJobTrackerQuiescence is a test case introduced in 
https://issues.apache.org/jira/browse/MAPREDUCE-4328. 
Here is the code generating a file path on DFS:
{code}
 final Path testDir = 
new Path(System.getProperty("test.build.data", "/tmp"), "jt-safemode");
{code}

This doesn't work on Windows because test.build.data contains a drive name 
with : (e.g. D:/hadoop/build/test). That is not a valid path name on DFS 
because the colon is disallowed (see DFSUtil#isValidName()).

A similar problem happens to 
TestFileLengthOnClusterRestart#testFileLengthWithHSyncAndClusterRestartWithOutDNsRegister()
{code}
  Path path = new Path(MiniDFSCluster.getBaseDir().getPath(), "test");
{code}
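
One possible direction for a fix, as a sketch (the exact path is a 
placeholder): build the DFS path from a fixed absolute root instead of reusing 
the local test.build.data directory, which carries a drive letter on Windows:
{code}
final Path testDir = new Path("/tmp", "jt-safemode");
{code}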

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9688) Add globally unique request ID to RPC requests

2013-07-02 Thread Suresh Srinivas (JIRA)
Suresh Srinivas created HADOOP-9688:
---

 Summary: Add globally unique request ID to RPC requests
 Key: HADOOP-9688
 URL: https://issues.apache.org/jira/browse/HADOOP-9688
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas


This is a subtask in hadoop-common related to HDFS-4942 to add unique request 
ID to RPC requests.
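
One way to get global uniqueness, as a sketch only (names are hypothetical, 
not the final design): pair a random per-client id with a monotonically 
increasing call id, so the (clientId, callId) pair identifies a request across 
clients and retries:
{code}
// Hypothetical sketch: 16 random bytes identify this client instance.
final UUID uuid = UUID.randomUUID();
final byte[] clientId = ByteBuffer.allocate(16)
    .putLong(uuid.getMostSignificantBits())
    .putLong(uuid.getLeastSignificantBits())
    .array();

// Each outgoing request then carries (clientId, callIdCounter.getAndIncrement()).
final AtomicInteger callIdCounter = new AtomicInteger();
{code}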

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: [VOTE] Release Apache Hadoop 2.1.0-beta

2013-07-02 Thread Ramya Sunil
-1.
Some of the CLI and distcp system tests which use hftp:// and webhdfs://
are failing on a secure cluster (HDFS-4841 and HDFS-4952/HDFS-4896). This is
a regression, and we need to make sure they work before we call a release.


On Wed, Jun 26, 2013 at 1:17 AM, Arun C Murthy a...@hortonworks.com wrote:

 Folks,

 I've created a release candidate (rc0) for hadoop-2.1.0-beta that I would
 like to get released.

 This release represents a *huge* amount of work done by the community (639
 fixes) which includes several major advances including:
 # HDFS Snapshots
 # Windows support
 # YARN API stabilization
 # MapReduce Binary Compatibility with hadoop-1.x
 # Substantial amount of integration testing with rest of projects in the
 ecosystem

 The RC is available at:
 http://people.apache.org/~acmurthy/hadoop-2.1.0-beta-rc0/
 The RC tag in svn is here:
 http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.1.0-beta-rc0

 The maven artifacts are available via repository.apache.org.

 Please try the release and vote; the vote will run for the usual 7 days.

 thanks,
 Arun

 --
 Arun C. Murthy
 Hortonworks Inc.
 http://hortonworks.com/





[jira] [Created] (HADOOP-9689) Implement HDFS Zero-copy reading

2013-07-02 Thread Jacques Nadeau (JIRA)
Jacques Nadeau created HADOOP-9689:
--

 Summary: Implement HDFS Zero-copy reading
 Key: HADOOP-9689
 URL: https://issues.apache.org/jira/browse/HADOOP-9689
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Jacques Nadeau


Leverage the work in https://issues.apache.org/jira/browse/HADOOP-8148 to 
support zero copy reading of columnar file formats for improved performance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9689) Implement HDFS Zero-copy reading

2013-07-02 Thread Jacques Nadeau (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacques Nadeau resolved HADOOP-9689.


Resolution: Duplicate

This was my mistake; I had the wrong JIRA project open. This should have been 
filed against Apache Drill, so I opened DRILL-146 instead. Sorry.

 Implement HDFS Zero-copy reading
 

 Key: HADOOP-9689
 URL: https://issues.apache.org/jira/browse/HADOOP-9689
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Jacques Nadeau

 Leverage the work in https://issues.apache.org/jira/browse/HADOOP-8148 to 
 support zero copy reading of columnar file formats for improved performance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9691) RPC clients can generate call ID using AtomicInteger instead of synchronizing on the Client instance.

2013-07-02 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-9691:
-

 Summary: RPC clients can generate call ID using AtomicInteger 
instead of synchronizing on the Client instance.
 Key: HADOOP-9691
 URL: https://issues.apache.org/jira/browse/HADOOP-9691
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor


As noted in discussion on HADOOP-9688, we can optimize generation of call ID in 
the RPC client code.  Currently, it synchronizes on the {{Client}} instance to 
coordinate access to a shared {{int}}.  We can switch this to {{AtomicInteger}} 
to avoid lock contention.
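
The change is small; as a sketch (field and method names hypothetical):
{code}
// Before: call id generation serializes on the Client instance.
private int counter;
private synchronized int nextCallId() {
  return counter++;
}

// After: lock-free increment; wraparound matches plain int overflow.
private final AtomicInteger counter = new AtomicInteger();
private int nextCallId() {
  return counter.getAndIncrement();
}
{code}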

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira