hadoop git commit: HDFS-10423. Increase default value of httpfs maxHttpHeaderSize. Contributed by Nicolae Popa.

2016-06-20 Thread atm
Repository: hadoop
Updated Branches:
  refs/heads/trunk 5370a6ffa -> aa1b583cf


HDFS-10423. Increase default value of httpfs maxHttpHeaderSize. Contributed by 
Nicolae Popa.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/aa1b583c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/aa1b583c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/aa1b583c

Branch: refs/heads/trunk
Commit: aa1b583cf99d1a7cfe554d1769fc4af252374663
Parents: 5370a6f
Author: Aaron T. Myers 
Authored: Mon Jun 20 13:46:11 2016 -0700
Committer: Aaron T. Myers 
Committed: Mon Jun 20 13:46:56 2016 -0700

--
 .../hadoop-hdfs-httpfs/src/main/conf/httpfs-env.sh   | 4 
 .../hadoop-hdfs-httpfs/src/main/libexec/httpfs-config.sh | 1 +
 .../hadoop-hdfs-httpfs/src/main/tomcat/server.xml| 1 +
 .../hadoop-hdfs-httpfs/src/main/tomcat/ssl-server.xml.conf   | 1 +
 .../hadoop-hdfs-httpfs/src/site/markdown/ServerSetup.md.vm   | 4 
 5 files changed, 11 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa1b583c/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-env.sh
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-env.sh 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-env.sh
index f012453..300d2ac 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-env.sh
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-env.sh
@@ -38,6 +38,10 @@
 #
 # export HTTPFS_HTTP_HOSTNAME=$(hostname -f)
 
+# The maximum size of Tomcat HTTP header
+#
+# export HTTPFS_MAX_HTTP_HEADER_SIZE=65536
+
 # The location of the SSL keystore if using SSL
 #
 # export HTTPFS_SSL_KEYSTORE_FILE=${HOME}/.keystore

http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa1b583c/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/libexec/httpfs-config.sh
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/libexec/httpfs-config.sh 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/libexec/httpfs-config.sh
index ba4b406..176dd32 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/libexec/httpfs-config.sh
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/libexec/httpfs-config.sh
@@ -42,6 +42,7 @@ function hadoop_subproject_init
   export HADOOP_CATALINA_HTTP_PORT="${HTTPFS_HTTP_PORT:-14000}"
   export 
HADOOP_CATALINA_ADMIN_PORT="${HTTPFS_ADMIN_PORT:-$((HADOOP_CATALINA_HTTP_PORT+1))}"
   export HADOOP_CATALINA_MAX_THREADS="${HTTPFS_MAX_THREADS:-150}"
+  export 
HADOOP_CATALINA_MAX_HTTP_HEADER_SIZE="${HTTPFS_MAX_HTTP_HEADER_SIZE:-65536}"
 
   export HTTPFS_SSL_ENABLED=${HTTPFS_SSL_ENABLED:-false}
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa1b583c/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/tomcat/server.xml
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/tomcat/server.xml 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/tomcat/server.xml
index a425bdd..67f2159 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/tomcat/server.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/tomcat/server.xml
@@ -71,6 +71,7 @@
 -->
 
 
 
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa1b583c/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/markdown/ServerSetup.md.vm
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/markdown/ServerSetup.md.vm 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/markdown/ServerSetup.md.vm
index 3c7f9d3..6a03a45 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/markdown/ServerSetup.md.vm
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/markdown/ServerSetup.md.vm
@@ -80,6 +80,8 @@ HttpFS preconfigures the HTTP and Admin ports in Tomcat's 
`server.xml` to 14000
 
 Tomcat logs are also preconfigured to go to HttpFS's `logs/` directory.
 
+HttpFS default value for the maxHttpHeaderSize parameter in Tomcat's 
`server.xml` is set to 65536 by default.
+
 The following environment variables (which can be set in HttpFS's 
`etc/hadoop/httpfs-env.sh` script) can be used to alter those values:
 
 * HTTPFS\_HTTP\_PORT
@@ -88,6 +90,8 @@ The following environment variables (which can be set in 
HttpFS's `etc/hadoop/ht
 
 * HADOOP\_LOG\_DIR
 
+* HTTPFS\_MAX\_HTTP\_HEADER\_SIZE
+
 HttpFS Configuration
 
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: 

[1/2] hadoop git commit: HDFS-9001. DFSUtil.getNsServiceRpcUris() can return too many entries in a non-HA, non-federated cluster. Contributed by Daniel Templeton.

2015-09-29 Thread atm
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 37d42fcec -> cb89ee931
  refs/heads/trunk 39285e6a1 -> 071733dc6


HDFS-9001. DFSUtil.getNsServiceRpcUris() can return too many entries in a 
non-HA, non-federated cluster. Contributed by Daniel Templeton.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/071733dc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/071733dc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/071733dc

Branch: refs/heads/trunk
Commit: 071733dc69a6f83c0cdca046b31ffd4f13304e93
Parents: 39285e6
Author: Aaron T. Myers <a...@apache.org>
Authored: Tue Sep 29 18:19:31 2015 -0700
Committer: Aaron T. Myers <a...@apache.org>
Committed: Tue Sep 29 18:19:31 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../java/org/apache/hadoop/hdfs/DFSUtil.java|  37 +++--
 .../org/apache/hadoop/hdfs/TestDFSUtil.java | 144 +--
 3 files changed, 130 insertions(+), 54 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/071733dc/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index dfd0b57..cedf1a7 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1468,6 +1468,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-9174. Fix findbugs warnings in FSOutputSummer.tracer and
 DirectoryScanner$ReportCompiler.currentThread. (Yi Liu via wheat9)
 
+HDFS-9001. DFSUtil.getNsServiceRpcUris() can return too many entries in a
+non-HA, non-federated cluster. (Daniel Templeton via atm)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/071733dc/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
index 5b11ac2..5d405ab 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
@@ -732,20 +732,29 @@ public class DFSUtil {
 }
   }
 }
-
-// Add the default URI if it is an HDFS URI.
-URI defaultUri = FileSystem.getDefaultUri(conf);
-// checks if defaultUri is ip:port format
-// and convert it to hostname:port format
-if (defaultUri != null && (defaultUri.getPort() != -1)) {
-  defaultUri = createUri(defaultUri.getScheme(),
-  NetUtils.createSocketAddr(defaultUri.getHost(), 
-  defaultUri.getPort()));
-}
-if (defaultUri != null &&
-HdfsConstants.HDFS_URI_SCHEME.equals(defaultUri.getScheme()) &&
-!nonPreferredUris.contains(defaultUri)) {
-  ret.add(defaultUri);
+
+// Add the default URI if it is an HDFS URI and we haven't come up with a
+// valid non-nameservice NN address yet.  Consider the servicerpc-address
+// and rpc-address to be the "unnamed" nameservice.  defaultFS is our
+// fallback when rpc-address isn't given.  We therefore only want to add
+// the defaultFS when neither the servicerpc-address (which is preferred)
+// nor the rpc-address (which overrides defaultFS) is given.
+if (!uriFound) {
+  URI defaultUri = FileSystem.getDefaultUri(conf);
+
+  // checks if defaultUri is ip:port format
+  // and convert it to hostname:port format
+  if (defaultUri != null && (defaultUri.getPort() != -1)) {
+defaultUri = createUri(defaultUri.getScheme(),
+NetUtils.createSocketAddr(defaultUri.getHost(),
+defaultUri.getPort()));
+  }
+
+  if (defaultUri != null &&
+  HdfsConstants.HDFS_URI_SCHEME.equals(defaultUri.getScheme()) &&
+  !nonPreferredUris.contains(defaultUri)) {
+ret.add(defaultUri);
+  }
 }
 
 return ret;
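
The comment above describes the precedence rule the patch enforces: an explicit servicerpc-address or rpc-address always wins, and fs.defaultFS is consulted only as a fallback when neither was found. A minimal standalone sketch of that rule, with illustrative names rather than Hadoop's DFSUtil API:

import java.net.URI;
import java.util.LinkedHashSet;
import java.util.Set;

public class DefaultFsFallbackSketch {
  // Mirrors the rule from the comment: prefer servicerpc-address, then
  // rpc-address, and fall back to fs.defaultFS only when neither was found.
  static Set<URI> collectNameNodeUris(URI serviceRpcAddr, URI rpcAddr, URI defaultFs) {
    Set<URI> ret = new LinkedHashSet<>();
    boolean uriFound = false;
    if (serviceRpcAddr != null) {
      ret.add(serviceRpcAddr);
      uriFound = true;
    } else if (rpcAddr != null) {
      ret.add(rpcAddr);
      uriFound = true;
    }
    if (!uriFound && defaultFs != null && "hdfs".equals(defaultFs.getScheme())) {
      ret.add(defaultFs);
    }
    return ret;
  }

  public static void main(String[] args) {
    URI rpc = URI.create("hdfs://nn1.example.com:8020");
    URI defaultFs = URI.create("hdfs://nn1.example.com:9000");
    // rpc-address present: defaultFS is ignored, so only one URI is returned.
    System.out.println(collectNameNodeUris(null, rpc, defaultFs));
    // Nothing explicit: defaultFS is used as the fallback.
    System.out.println(collectNameNodeUris(null, null, defaultFs));
  }
}

Under the old code the defaultFS URI was added regardless, so the first case could yield two entries for a single NameNode, which is the "too many entries" problem HDFS-9001 removes.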

http://git-wip-us.apache.org/repos/asf/hadoop/blob/071733dc/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
index 3435b7f..f22deaf 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
+++ 
b/hadoop-hdfs-project/

[2/2] hadoop git commit: HDFS-9001. DFSUtil.getNsServiceRpcUris() can return too many entries in a non-HA, non-federated cluster. Contributed by Daniel Templeton. (cherry picked from commit 071733dc69

2015-09-29 Thread atm
HDFS-9001. DFSUtil.getNsServiceRpcUris() can return too many entries in a 
non-HA, non-federated cluster. Contributed by Daniel Templeton.
(cherry picked from commit 071733dc69a6f83c0cdca046b31ffd4f13304e93)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cb89ee93
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cb89ee93
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cb89ee93

Branch: refs/heads/branch-2
Commit: cb89ee931d58251ab330a65d739c36c2d56dfcff
Parents: 37d42fc
Author: Aaron T. Myers <a...@apache.org>
Authored: Tue Sep 29 18:19:31 2015 -0700
Committer: Aaron T. Myers <a...@apache.org>
Committed: Tue Sep 29 18:20:00 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../java/org/apache/hadoop/hdfs/DFSUtil.java|  37 +++--
 .../org/apache/hadoop/hdfs/TestDFSUtil.java | 144 +--
 3 files changed, 130 insertions(+), 54 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cb89ee93/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index d82ee41..ba399ac 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1124,6 +1124,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-9174. Fix findbugs warnings in FSOutputSummer.tracer and
 DirectoryScanner$ReportCompiler.currentThread. (Yi Liu via wheat9)
 
+HDFS-9001. DFSUtil.getNsServiceRpcUris() can return too many entries in a
+non-HA, non-federated cluster. (Daniel Templeton via atm)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cb89ee93/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
index c290031..c599ac2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
@@ -736,20 +736,29 @@ public class DFSUtil {
 }
   }
 }
-
-// Add the default URI if it is an HDFS URI.
-URI defaultUri = FileSystem.getDefaultUri(conf);
-// checks if defaultUri is ip:port format
-// and convert it to hostname:port format
-if (defaultUri != null && (defaultUri.getPort() != -1)) {
-  defaultUri = createUri(defaultUri.getScheme(),
-  NetUtils.createSocketAddr(defaultUri.getHost(), 
-  defaultUri.getPort()));
-}
-if (defaultUri != null &&
-HdfsConstants.HDFS_URI_SCHEME.equals(defaultUri.getScheme()) &&
-!nonPreferredUris.contains(defaultUri)) {
-  ret.add(defaultUri);
+
+// Add the default URI if it is an HDFS URI and we haven't come up with a
+// valid non-nameservice NN address yet.  Consider the servicerpc-address
+// and rpc-address to be the "unnamed" nameservice.  defaultFS is our
+// fallback when rpc-address isn't given.  We therefore only want to add
+// the defaultFS when neither the servicerpc-address (which is preferred)
+// nor the rpc-address (which overrides defaultFS) is given.
+if (!uriFound) {
+  URI defaultUri = FileSystem.getDefaultUri(conf);
+
+  // checks if defaultUri is ip:port format
+  // and convert it to hostname:port format
+  if (defaultUri != null && (defaultUri.getPort() != -1)) {
+defaultUri = createUri(defaultUri.getScheme(),
+NetUtils.createSocketAddr(defaultUri.getHost(),
+defaultUri.getPort()));
+  }
+
+  if (defaultUri != null &&
+  HdfsConstants.HDFS_URI_SCHEME.equals(defaultUri.getScheme()) &&
+  !nonPreferredUris.contains(defaultUri)) {
+ret.add(defaultUri);
+  }
 }
 
 return ret;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cb89ee93/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
index 3435b7f..f22deaf 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop

hadoop git commit: Fix-up for HDFS-9072 - adding missing import of DFSTestUtil.

2015-09-16 Thread atm
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 a343aa928 -> e533555b5


Fix-up for HDFS-9072 - adding missing import of DFSTestUtil.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e533555b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e533555b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e533555b

Branch: refs/heads/branch-2
Commit: e533555b577c3678aa7a430c6b084973187de18a
Parents: a343aa9
Author: Aaron T. Myers 
Authored: Wed Sep 16 12:29:26 2015 -0700
Committer: Aaron T. Myers 
Committed: Wed Sep 16 12:30:31 2015 -0700

--
 .../src/test/java/org/apache/hadoop/tools/TestJMXGet.java   | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e533555b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestJMXGet.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestJMXGet.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestJMXGet.java
index aef18fb..278fbb8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestJMXGet.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestJMXGet.java
@@ -41,6 +41,7 @@ import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.FileUtil;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DFSTestUtil;
 import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.MiniDFSCluster;
 import org.apache.hadoop.hdfs.tools.JMXGet;



[1/2] hadoop git commit: HADOOP-12318. Expose underlying LDAP exceptions in SaslPlainServer. Contributed by Mike Yoder. (cherry picked from commit 28ea4c068ea0152af682058b02311bba81780770)

2015-08-12 Thread atm
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 65d22b368 -> 43f386e74
  refs/heads/trunk 3e715a4f4 -> 820f864a2


HADOOP-12318. Expose underlying LDAP exceptions in SaslPlainServer. Contributed 
by Mike Yoder.
(cherry picked from commit 28ea4c068ea0152af682058b02311bba81780770)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/43f386e7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/43f386e7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/43f386e7

Branch: refs/heads/branch-2
Commit: 43f386e7484ff7544ea5454bb0ba7453f3edfbcf
Parents: 65d22b3
Author: Aaron T. Myers a...@apache.org
Authored: Wed Aug 12 15:16:05 2015 -0700
Committer: Aaron T. Myers a...@apache.org
Committed: Wed Aug 12 15:16:36 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt   | 3 +++
 .../src/main/java/org/apache/hadoop/security/SaslPlainServer.java | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/43f386e7/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index bfad714..8be89ed 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -235,6 +235,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-12280. Skip unit tests based on maven profile rather than
 NativeCodeLoader.isNativeCodeLoaded (Masatake Iwasaki via Colin P. McCabe)
 
+HADOOP-12318. Expose underlying LDAP exceptions in SaslPlainServer. (Mike
+Yoder via atm)
+
   OPTIMIZATIONS
 
 HADOOP-11785. Reduce the number of listStatus operation in distcp

http://git-wip-us.apache.org/repos/asf/hadoop/blob/43f386e7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java
index 7d1b980..7c74f4a 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java
@@ -105,7 +105,7 @@ public class SaslPlainServer implements SaslServer {
 authz = ac.getAuthorizedID();
   }
 } catch (Exception e) {
-      throw new SaslException("PLAIN auth failed: " + e.getMessage());
+      throw new SaslException("PLAIN auth failed: " + e.getMessage(), e);
 } finally {
   completed = true;
 }
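
The one-character change above, passing e as the constructor's cause argument, is what actually surfaces the underlying LDAP failure. A small illustration of the difference, not Hadoop code:

import javax.security.sasl.SaslException;

public class CausePreservationDemo {
  public static void main(String[] args) {
    Exception ldapFailure = new RuntimeException("simulated LDAP bind failure");

    // Before the patch: only the message survives; the root cause and its stack trace are lost.
    SaslException without =
        new SaslException("PLAIN auth failed: " + ldapFailure.getMessage());
    // After the patch: the original exception travels along as the cause.
    SaslException with =
        new SaslException("PLAIN auth failed: " + ldapFailure.getMessage(), ldapFailure);

    System.out.println(without.getCause()); // null
    System.out.println(with.getCause());    // java.lang.RuntimeException: simulated LDAP bind failure
  }
}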



[2/2] hadoop git commit: HADOOP-12318. Expose underlying LDAP exceptions in SaslPlainServer. Contributed by Mike Yoder.

2015-08-12 Thread atm
HADOOP-12318. Expose underlying LDAP exceptions in SaslPlainServer. Contributed 
by Mike Yoder.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/820f864a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/820f864a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/820f864a

Branch: refs/heads/trunk
Commit: 820f864a26d90e9f4a3584577df581dcac20f9b6
Parents: 3e715a4
Author: Aaron T. Myers a...@apache.org
Authored: Wed Aug 12 15:16:05 2015 -0700
Committer: Aaron T. Myers a...@apache.org
Committed: Wed Aug 12 15:24:16 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt   | 3 +++
 .../src/main/java/org/apache/hadoop/security/SaslPlainServer.java | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/820f864a/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 7d7982f..e9be2e0 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -746,6 +746,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-12280. Skip unit tests based on maven profile rather than
 NativeCodeLoader.isNativeCodeLoaded (Masatake Iwasaki via Colin P. McCabe)
 
+HADOOP-12318. Expose underlying LDAP exceptions in SaslPlainServer. (Mike
+Yoder via atm)
+
   OPTIMIZATIONS
 
 HADOOP-11785. Reduce the number of listStatus operation in distcp

http://git-wip-us.apache.org/repos/asf/hadoop/blob/820f864a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java
index 7d1b980..7c74f4a 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java
@@ -105,7 +105,7 @@ public class SaslPlainServer implements SaslServer {
 authz = ac.getAuthorizedID();
   }
 } catch (Exception e) {
-      throw new SaslException("PLAIN auth failed: " + e.getMessage());
+      throw new SaslException("PLAIN auth failed: " + e.getMessage(), e);
 } finally {
   completed = true;
 }



hadoop git commit: HDFS-8657. Update docs for mSNN. Contributed by Jesse Yates.

2015-07-20 Thread atm
Repository: hadoop
Updated Branches:
  refs/heads/trunk e4f756260 -> ed01dc70b


HDFS-8657. Update docs for mSNN. Contributed by Jesse Yates.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ed01dc70
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ed01dc70
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ed01dc70

Branch: refs/heads/trunk
Commit: ed01dc70b2f4ff4bdcaf71c19acf244da0868a82
Parents: e4f7562
Author: Aaron T. Myers a...@apache.org
Authored: Mon Jul 20 16:40:06 2015 -0700
Committer: Aaron T. Myers a...@apache.org
Committed: Mon Jul 20 16:40:06 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  2 +
 .../markdown/HDFSHighAvailabilityWithNFS.md | 40 +++-
 .../markdown/HDFSHighAvailabilityWithQJM.md | 32 ++--
 3 files changed, 45 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ed01dc70/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 13d9969..cd32c0e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -341,6 +341,8 @@ Trunk (Unreleased)
 HDFS-8627. NPE thrown if unable to fetch token from Namenode
 (J.Andreina via vinayakumarb)
 
+HDFS-8657. Update docs for mSNN. (Jesse Yates via atm)
+
 Release 2.8.0 - UNRELEASED
 
   NEW FEATURES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ed01dc70/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithNFS.md
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithNFS.md
 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithNFS.md
index 626a473..cc53a38 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithNFS.md
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithNFS.md
@@ -65,18 +65,18 @@ This impacted the total availability of the HDFS cluster in 
two major ways:
 * Planned maintenance events such as software or hardware upgrades on the
   NameNode machine would result in windows of cluster downtime.
 
-The HDFS High Availability feature addresses the above problems by providing 
the option of running two redundant NameNodes in the same cluster in an 
Active/Passive configuration with a hot standby. This allows a fast failover to 
a new NameNode in the case that a machine crashes, or a graceful 
administrator-initiated failover for the purpose of planned maintenance.
+The HDFS High Availability feature addresses the above problems by providing 
the option of running two (or more, as of Hadoop 3.0.0) redundant NameNodes in 
the same cluster in an Active/Passive configuration with a hot standby(s). This 
allows a fast failover to a new NameNode in the case that a machine crashes, or 
a graceful administrator-initiated failover for the purpose of planned 
maintenance.
 
 Architecture
 
 
-In a typical HA cluster, two separate machines are configured as NameNodes. At 
any point in time, exactly one of the NameNodes is in an *Active* state, and 
the other is in a *Standby* state. The Active NameNode is responsible for all 
client operations in the cluster, while the Standby is simply acting as a 
slave, maintaining enough state to provide a fast failover if necessary.
+In a typical HA cluster, two or more separate machines are configured as 
NameNodes. At any point in time, exactly one of the NameNodes is in an *Active* 
state, and the others are in a *Standby* state. The Active NameNode is 
responsible for all client operations in the cluster, while the Standby is 
simply acting as a slave, maintaining enough state to provide a fast failover 
if necessary.
 
-In order for the Standby node to keep its state synchronized with the Active 
node, the current implementation requires that the two nodes both have access 
to a directory on a shared storage device (eg an NFS mount from a NAS). This 
restriction will likely be relaxed in future versions.
+In order for the Standby nodes to keep their state synchronized with the 
Active node, the current implementation requires that the nodes have access to 
a directory on a shared storage device (eg an NFS mount from a NAS). This 
restriction will likely be relaxed in future versions.
 
-When any namespace modification is performed by the Active node, it durably 
logs a record of the modification to an edit log file stored in the shared 
directory. The Standby node is constantly watching this directory for edits, 
and as it sees the edits, it applies

[1/3] hadoop git commit: HDFS-6440. Support more than 2 NameNodes. Contributed by Jesse Yates.

2015-06-23 Thread atm
Repository: hadoop
Updated Branches:
  refs/heads/trunk 122cad6ae -> 49dfad942


http://git-wip-us.apache.org/repos/asf/hadoop/blob/49dfad94/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAConfiguration.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAConfiguration.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAConfiguration.java
index c4a2988..62643ae 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAConfiguration.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAConfiguration.java
@@ -23,10 +23,12 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.fail;
 
 import java.io.IOException;
+import java.net.MalformedURLException;
 import java.net.URI;
 import java.net.URL;
 import java.util.Collection;
 
+import com.google.common.base.Joiner;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSUtil;
@@ -58,19 +60,23 @@ public class TestHAConfiguration {
 }
   }
 
-  private Configuration getHAConf(String nsId, String host1, String host2) {
+  private Configuration getHAConf(String nsId, String ... hosts) {
 Configuration conf = new Configuration();
 conf.set(DFSConfigKeys.DFS_NAMESERVICES, nsId);
-    conf.set(DFSUtil.addKeySuffixes(
-        DFSConfigKeys.DFS_HA_NAMENODES_KEY_PREFIX, nsId),
-        "nn1,nn2");
     conf.set(DFSConfigKeys.DFS_HA_NAMENODE_ID_KEY, "nn1");
+
+    String[] nnids = new String[hosts.length];
+    for (int i = 0; i < hosts.length; i++) {
+      String nnid = "nn" + (i + 1);
+      nnids[i] = nnid;
+      conf.set(DFSUtil.addKeySuffixes(
+          DFSConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY, nsId, nnid),
+          hosts[i] + ":12345");
+    }
+
     conf.set(DFSUtil.addKeySuffixes(
-        DFSConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY, nsId, "nn1"),
-        host1 + ":12345");
-    conf.set(DFSUtil.addKeySuffixes(
-        DFSConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY, nsId, "nn2"),
-        host2 + ":12345");
+        DFSConfigKeys.DFS_HA_NAMENODES_KEY_PREFIX, nsId),
+        Joiner.on(',').join(nnids));
 return conf;
   }
 
@@ -87,11 +93,28 @@ public class TestHAConfiguration {
 // 0.0.0.0, it should substitute the address from the RPC configuration
 // above.
 StandbyCheckpointer checkpointer = new StandbyCheckpointer(conf, fsn);
-    assertEquals(new URL("http", "1.2.3.2",
-        DFSConfigKeys.DFS_NAMENODE_HTTP_PORT_DEFAULT, ""),
-        checkpointer.getActiveNNAddress());
+    assertAddressMatches("1.2.3.2", checkpointer.getActiveNNAddresses().get(0));
+
+    //test when there are three NNs
+    // Use non-local addresses to avoid host address matching
+    conf = getHAConf("ns1", "1.2.3.1", "1.2.3.2", "1.2.3.3");
+
+    // This is done by the NN before the StandbyCheckpointer is created
+    NameNode.initializeGenericKeys(conf, "ns1", "nn1");
+
+    checkpointer = new StandbyCheckpointer(conf, fsn);
+    assertEquals("Got an unexpected number of possible active NNs", 2, checkpointer
+        .getActiveNNAddresses().size());
+    assertEquals(new URL("http", "1.2.3.2", DFSConfigKeys.DFS_NAMENODE_HTTP_PORT_DEFAULT, ""),
+        checkpointer.getActiveNNAddresses().get(0));
+    assertAddressMatches("1.2.3.2", checkpointer.getActiveNNAddresses().get(0));
+    assertAddressMatches("1.2.3.3", checkpointer.getActiveNNAddresses().get(1));
   }
-  
+
+  private void assertAddressMatches(String address, URL url) throws MalformedURLException {
+    assertEquals(new URL("http", address, DFSConfigKeys.DFS_NAMENODE_HTTP_PORT_DEFAULT, ""), url);
+  }
+
   /**
* Tests that the namenode edits dirs and shared edits dirs are gotten with
* duplicates removed

http://git-wip-us.apache.org/repos/asf/hadoop/blob/49dfad94/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java
index 76a62ff..3da37f5 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java
@@ -24,6 +24,7 @@ import static org.junit.Assert.fail;
 
 import java.io.IOException;
 import java.security.PrivilegedExceptionAction;
+import java.util.Random;
 import java.util.concurrent.TimeoutException;
 
 import 

[2/2] hadoop git commit: HDFS-8101. DFSClient use of non-constant DFSConfigKeys pulls in WebHDFS classes at runtime. Contributed by Sean Busbey. (cherry picked from commit 3fe61e0bb0d025a6acbb754027f7

2015-04-09 Thread atm
HDFS-8101. DFSClient use of non-constant DFSConfigKeys pulls in WebHDFS classes 
at runtime. Contributed by Sean Busbey.
(cherry picked from commit 3fe61e0bb0d025a6acbb754027f73f3084b2f4d1)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/edf2f52d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/edf2f52d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/edf2f52d

Branch: refs/heads/branch-2
Commit: edf2f52d6d2b0fd7f0ea5a63401a8affeb976949
Parents: 6d1cb34
Author: Aaron T. Myers a...@apache.org
Authored: Thu Apr 9 09:40:08 2015 -0700
Committer: Aaron T. Myers a...@apache.org
Committed: Thu Apr 9 09:45:02 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  8 +++--
 .../apache/hadoop/hdfs/TestDFSConfigKeys.java   | 37 
 3 files changed, 46 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/edf2f52d/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 2c4a3bf..132daa9 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -88,6 +88,9 @@ Release 2.8.0 - UNRELEASED
 
 HDFS-7979. Initialize block report IDs with a random number. (wang)
 
+HDFS-8101. DFSClient use of non-constant DFSConfigKeys pulls in WebHDFS
+classes at runtime. (Sean Busbey via atm)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/edf2f52d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index f88b221..d8b1692 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -25,7 +25,6 @@ import org.apache.hadoop.fs.CommonConfigurationKeys;
 import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
 import 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault;
 import 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaLruTracker;
-import org.apache.hadoop.hdfs.web.AuthFilter;
 import org.apache.hadoop.http.HttpConfig;
 
 /** 
@@ -156,7 +155,12 @@ public class DFSConfigKeys extends CommonConfigurationKeys 
{
   public static final String  DFS_NAMENODE_REPLICATION_STREAMS_HARD_LIMIT_KEY = "dfs.namenode.replication.max-streams-hard-limit";
   public static final int     DFS_NAMENODE_REPLICATION_STREAMS_HARD_LIMIT_DEFAULT = 4;
   public static final String  DFS_WEBHDFS_AUTHENTICATION_FILTER_KEY = "dfs.web.authentication.filter";
-  public static final String  DFS_WEBHDFS_AUTHENTICATION_FILTER_DEFAULT = AuthFilter.class.getName();
+  /* Phrased as below to avoid javac inlining as a constant, to match the behavior when
+     this was AuthFilter.class.getName(). Note that if you change the import for AuthFilter, you
+     need to update the literal here as well as TestDFSConfigKeys.
+   */
+  public static final String  DFS_WEBHDFS_AUTHENTICATION_FILTER_DEFAULT =
+      "org.apache.hadoop.hdfs.web.AuthFilter".toString();
   public static final String  DFS_WEBHDFS_ENABLED_KEY = "dfs.webhdfs.enabled";
   public static final boolean DFS_WEBHDFS_ENABLED_DEFAULT = true;
   public static final String  DFS_WEBHDFS_USER_PATTERN_KEY = "dfs.webhdfs.user.provider.user.pattern";
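
The comment in the patch relies on javac's treatment of constant expressions: a static final String initialized from a string literal is a compile-time constant (JLS 15.28) and gets copied into every class that references it, whereas an initializer such as "...".toString() or AuthFilter.class.getName() is evaluated at runtime, so callers keep reading the field from DFSConfigKeys. A small demonstration of the distinction, using an illustrative class rather than Hadoop code:

public class ConstantInliningDemo {
  // Compile-time constant: classes compiled against this field copy the literal
  // into their own bytecode, so changing the value later requires recompiling them.
  public static final String INLINED = "org.example.AuthFilter";

  // Not a constant expression: the .toString() call forces callers to read the
  // field at runtime, matching the old AuthFilter.class.getName() linkage.
  public static final String NOT_INLINED = "org.example.AuthFilter".toString();

  public static void main(String[] args) {
    System.out.println(INLINED.equals(NOT_INLINED)); // true: same value, different linkage
  }
}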

http://git-wip-us.apache.org/repos/asf/hadoop/blob/edf2f52d/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSConfigKeys.java
new file mode 100644
index 000..c7df891
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSConfigKeys.java
@@ -0,0 +1,37 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except

[1/2] hadoop git commit: HDFS-8101. DFSClient use of non-constant DFSConfigKeys pulls in WebHDFS classes at runtime. Contributed by Sean Busbey.

2015-04-09 Thread atm
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 6d1cb3422 -> edf2f52d6
  refs/heads/trunk 6495940ea -> 3fe61e0bb


HDFS-8101. DFSClient use of non-constant DFSConfigKeys pulls in WebHDFS classes 
at runtime. Contributed by Sean Busbey.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3fe61e0b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3fe61e0b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3fe61e0b

Branch: refs/heads/trunk
Commit: 3fe61e0bb0d025a6acbb754027f73f3084b2f4d1
Parents: 6495940
Author: Aaron T. Myers a...@apache.org
Authored: Thu Apr 9 09:40:08 2015 -0700
Committer: Aaron T. Myers a...@apache.org
Committed: Thu Apr 9 09:40:08 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  8 +++--
 .../apache/hadoop/hdfs/TestDFSConfigKeys.java   | 37 
 3 files changed, 46 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3fe61e0b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 74ed624..727bec7 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -406,6 +406,9 @@ Release 2.8.0 - UNRELEASED
 
 HDFS-7979. Initialize block report IDs with a random number. (wang)
 
+HDFS-8101. DFSClient use of non-constant DFSConfigKeys pulls in WebHDFS
+classes at runtime. (Sean Busbey via atm)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3fe61e0b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 3bb2ae6..d0ca125 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -25,7 +25,6 @@ import org.apache.hadoop.fs.CommonConfigurationKeys;
 import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
 import 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault;
 import 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaLruTracker;
-import org.apache.hadoop.hdfs.web.AuthFilter;
 import org.apache.hadoop.http.HttpConfig;
 
 /** 
@@ -157,7 +156,12 @@ public class DFSConfigKeys extends CommonConfigurationKeys 
{
   public static final String  DFS_NAMENODE_REPLICATION_STREAMS_HARD_LIMIT_KEY = "dfs.namenode.replication.max-streams-hard-limit";
   public static final int     DFS_NAMENODE_REPLICATION_STREAMS_HARD_LIMIT_DEFAULT = 4;
   public static final String  DFS_WEBHDFS_AUTHENTICATION_FILTER_KEY = "dfs.web.authentication.filter";
-  public static final String  DFS_WEBHDFS_AUTHENTICATION_FILTER_DEFAULT = AuthFilter.class.getName();
+  /* Phrased as below to avoid javac inlining as a constant, to match the behavior when
+     this was AuthFilter.class.getName(). Note that if you change the import for AuthFilter, you
+     need to update the literal here as well as TestDFSConfigKeys.
+   */
+  public static final String  DFS_WEBHDFS_AUTHENTICATION_FILTER_DEFAULT =
+      "org.apache.hadoop.hdfs.web.AuthFilter".toString();
   public static final String  DFS_WEBHDFS_USER_PATTERN_KEY = "dfs.webhdfs.user.provider.user.pattern";
   public static final String  DFS_WEBHDFS_USER_PATTERN_DEFAULT =
       HdfsClientConfigKeys.DFS_WEBHDFS_USER_PATTERN_DEFAULT;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3fe61e0b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSConfigKeys.java
new file mode 100644
index 000..c7df891
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSConfigKeys.java
@@ -0,0 +1,37 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you

[1/2] hadoop git commit: HADOOP-11722. Some Instances of Services using ZKDelegationTokenSecretManager go down when old token cannot be deleted. Contributed by Arun Suresh.

2015-03-17 Thread atm
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 ab34e6975 -> 85473cd61
  refs/heads/trunk 968425e9f -> fc90bf7b2


HADOOP-11722. Some Instances of Services using ZKDelegationTokenSecretManager 
go down when old token cannot be deleted. Contributed by Arun Suresh.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fc90bf7b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fc90bf7b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fc90bf7b

Branch: refs/heads/trunk
Commit: fc90bf7b27cc20486f2806670a14fd7d654b0a31
Parents: 968425e
Author: Aaron T. Myers a...@apache.org
Authored: Tue Mar 17 19:41:36 2015 -0700
Committer: Aaron T. Myers a...@apache.org
Committed: Tue Mar 17 19:41:36 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  4 
 .../ZKDelegationTokenSecretManager.java | 21 ++--
 2 files changed, 23 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fc90bf7b/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 3817054..a6bd68d 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -,6 +,10 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11720. [JDK8] Fix javadoc errors caused by incorrect or illegal
 tags in hadoop-tools. (Akira AJISAKA via ozawa)
 
+HADOOP-11722. Some Instances of Services using
+ZKDelegationTokenSecretManager go down when old token cannot be deleted.
+(Arun Suresh via atm)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fc90bf7b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
index ec522dcf..73c3ab8 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
@@ -55,6 +55,7 @@ import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.security.token.delegation.web.DelegationTokenManager;
 import org.apache.zookeeper.CreateMode;
 import org.apache.zookeeper.KeeperException;
+import org.apache.zookeeper.KeeperException.NoNodeException;
 import org.apache.zookeeper.ZooDefs.Perms;
 import org.apache.zookeeper.client.ZooKeeperSaslClient;
 import org.apache.zookeeper.data.ACL;
@@ -709,7 +710,15 @@ public abstract class ZKDelegationTokenSecretManager<TokenIdent extends Abstract
     try {
       if (zkClient.checkExists().forPath(nodeRemovePath) != null) {
         while(zkClient.checkExists().forPath(nodeRemovePath) != null){
-          zkClient.delete().guaranteed().forPath(nodeRemovePath);
+          try {
+            zkClient.delete().guaranteed().forPath(nodeRemovePath);
+          } catch (NoNodeException nne) {
+            // It is possible that the node might be deleted between the
+            // check and the actual delete.. which might lead to an
+            // exception that can bring down the daemon running this
+            // SecretManager
+            LOG.debug("Node already deleted by peer " + nodeRemovePath);
+          }
         }
       } else {
         LOG.debug("Attempted to delete a non-existing znode " + nodeRemovePath);
@@ -761,7 +770,15 @@ public abstract class ZKDelegationTokenSecretManager<TokenIdent extends Abstract
     try {
       if (zkClient.checkExists().forPath(nodeRemovePath) != null) {
         while(zkClient.checkExists().forPath(nodeRemovePath) != null){
-          zkClient.delete().guaranteed().forPath(nodeRemovePath);
+          try {
+            zkClient.delete().guaranteed().forPath(nodeRemovePath);
+          } catch (NoNodeException nne) {
+            // It is possible that the node might be deleted between the
+            // check and the actual delete.. which might lead to an
+            // exception that can bring down the daemon running this
+            // SecretManager
+            LOG.debug("Node already deleted by peer " + nodeRemovePath);
+          }
         }
       } else {
         LOG.debug("Attempted to remove a non-existing znode " + nodeRemovePath);
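
The pattern added in both hunks, looping while the znode is still visible but treating NoNodeException as success, protects against a peer instance deleting the node between the existence check and the delete call. A minimal sketch of the same idiom against the Curator API; the helper class and method names are illustrative, not Hadoop's:

import org.apache.curator.framework.CuratorFramework;
import org.apache.zookeeper.KeeperException.NoNodeException;

public final class TolerantZkDelete {
  private TolerantZkDelete() {}

  // Delete the znode if it is still there; losing the race to a peer is not an error.
  public static void deleteIfPresent(CuratorFramework zk, String path) throws Exception {
    while (zk.checkExists().forPath(path) != null) {
      try {
        zk.delete().guaranteed().forPath(path);
      } catch (NoNodeException alreadyGone) {
        // Another instance removed the node first; nothing left to do.
        return;
      }
    }
  }
}

Without the catch, the NoNodeException propagates out of the secret manager and, as the JIRA title says, can take the hosting service down with it.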



[2/2] hadoop git commit: HADOOP-11722. Some Instances of Services using ZKDelegationTokenSecretManager go down when old token cannot be deleted. Contributed by Arun Suresh. (cherry picked from commit

2015-03-17 Thread atm
HADOOP-11722. Some Instances of Services using ZKDelegationTokenSecretManager 
go down when old token cannot be deleted. Contributed by Arun Suresh.
(cherry picked from commit fc90bf7b27cc20486f2806670a14fd7d654b0a31)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/85473cd6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/85473cd6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/85473cd6

Branch: refs/heads/branch-2
Commit: 85473cd61a37a9b7614805bd83507cabe85eaeb0
Parents: ab34e69
Author: Aaron T. Myers a...@apache.org
Authored: Tue Mar 17 19:41:36 2015 -0700
Committer: Aaron T. Myers a...@apache.org
Committed: Tue Mar 17 19:42:31 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  4 
 .../ZKDelegationTokenSecretManager.java | 21 ++--
 2 files changed, 23 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/85473cd6/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 7f47197..0d1ffce 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -700,6 +700,10 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11720. [JDK8] Fix javadoc errors caused by incorrect or illegal
 tags in hadoop-tools. (Akira AJISAKA via ozawa)
 
+HADOOP-11722. Some Instances of Services using
+ZKDelegationTokenSecretManager go down when old token cannot be deleted.
+(Arun Suresh via atm)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/85473cd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
index ec522dcf..73c3ab8 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
@@ -55,6 +55,7 @@ import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.security.token.delegation.web.DelegationTokenManager;
 import org.apache.zookeeper.CreateMode;
 import org.apache.zookeeper.KeeperException;
+import org.apache.zookeeper.KeeperException.NoNodeException;
 import org.apache.zookeeper.ZooDefs.Perms;
 import org.apache.zookeeper.client.ZooKeeperSaslClient;
 import org.apache.zookeeper.data.ACL;
@@ -709,7 +710,15 @@ public abstract class ZKDelegationTokenSecretManager<TokenIdent extends Abstract
     try {
       if (zkClient.checkExists().forPath(nodeRemovePath) != null) {
         while(zkClient.checkExists().forPath(nodeRemovePath) != null){
-          zkClient.delete().guaranteed().forPath(nodeRemovePath);
+          try {
+            zkClient.delete().guaranteed().forPath(nodeRemovePath);
+          } catch (NoNodeException nne) {
+            // It is possible that the node might be deleted between the
+            // check and the actual delete.. which might lead to an
+            // exception that can bring down the daemon running this
+            // SecretManager
+            LOG.debug("Node already deleted by peer " + nodeRemovePath);
+          }
         }
       } else {
         LOG.debug("Attempted to delete a non-existing znode " + nodeRemovePath);
@@ -761,7 +770,15 @@ public abstract class ZKDelegationTokenSecretManager<TokenIdent extends Abstract
     try {
       if (zkClient.checkExists().forPath(nodeRemovePath) != null) {
         while(zkClient.checkExists().forPath(nodeRemovePath) != null){
-          zkClient.delete().guaranteed().forPath(nodeRemovePath);
+          try {
+            zkClient.delete().guaranteed().forPath(nodeRemovePath);
+          } catch (NoNodeException nne) {
+            // It is possible that the node might be deleted between the
+            // check and the actual delete.. which might lead to an
+            // exception that can bring down the daemon running this
+            // SecretManager
+            LOG.debug("Node already deleted by peer " + nodeRemovePath);
+          }
         }
       } else {
         LOG.debug("Attempted to remove a non-existing znode " + nodeRemovePath);



[1/2] hadoop git commit: HDFS-7682. {{DistributedFileSystem#getFileChecksum}} of a snapshotted file includes non-snapshotted content. Contributed by Charles Lamb.

2015-03-03 Thread atm
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 65bfde552 -> bce3d442f
  refs/heads/trunk e2262d3d1 -> f2d7a67a2


HDFS-7682. {{DistributedFileSystem#getFileChecksum}} of a snapshotted file 
includes non-snapshotted content. Contributed by Charles Lamb.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f2d7a67a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f2d7a67a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f2d7a67a

Branch: refs/heads/trunk
Commit: f2d7a67a2c1d9dde10ed3171fdec65dff885afcc
Parents: e2262d3
Author: Aaron T. Myers a...@apache.org
Authored: Tue Mar 3 18:08:59 2015 -0800
Committer: Aaron T. Myers a...@apache.org
Committed: Tue Mar 3 18:08:59 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +++
 .../java/org/apache/hadoop/hdfs/DFSClient.java  |  3 +++
 .../snapshot/TestSnapshotFileLength.java| 25 +---
 3 files changed, 28 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f2d7a67a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 4e7b919..7ff3c78 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1080,6 +1080,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-6565. Use jackson instead jetty json in hdfs-client.
 (Akira Ajisaka via wheat9)
 
+HDFS-7682. {{DistributedFileSystem#getFileChecksum}} of a snapshotted file
+includes non-snapshotted content. (Charles Lamb via atm)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f2d7a67a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index abcd847..aac7b51 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -2220,6 +2220,9 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
 
 // get block checksum for each block
 long remaining = length;
+if (src.contains(HdfsConstants.SEPARATOR_DOT_SNAPSHOT_DIR_SEPARATOR)) {
+  remaining = Math.min(length, blockLocations.getFileLength());
+}
     for(int i = 0; i < locatedblocks.size() && remaining > 0; i++) {
   if (refetchBlocks) {  // refetch to get fresh tokens
 blockLocations = callGetBlockLocations(namenode, src, 0, length);
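
The clamp added above caps the checksummed range at the length recorded for the snapshot, so bytes appended to the live file after the snapshot was taken are not mixed into the result. A tiny sketch of the arithmetic, not the DFSClient API:

public class SnapshotChecksumLengthSketch {
  // For a snapshot path, checksum only up to the length recorded at snapshot time.
  static long lengthToChecksum(boolean isSnapshotPath, long requestedLength, long snapshotFileLength) {
    return isSnapshotPath ? Math.min(requestedLength, snapshotFileLength) : requestedLength;
  }

  public static void main(String[] args) {
    // The live file grew to 4096 bytes after the snapshot captured it at 1024 bytes.
    System.out.println(lengthToChecksum(true, 4096, 1024));   // 1024
    System.out.println(lengthToChecksum(false, 4096, 1024));  // 4096
  }
}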

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f2d7a67a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotFileLength.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotFileLength.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotFileLength.java
index 98aafc1..d53140f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotFileLength.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotFileLength.java
@@ -20,8 +20,8 @@ package org.apache.hadoop.hdfs.server.namenode.snapshot;
 import java.io.ByteArrayOutputStream;
 import java.io.PrintStream;
 
-
 import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileChecksum;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.hdfs.AppendTestUtil;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
@@ -29,8 +29,9 @@ import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
 import static org.hamcrest.CoreMatchers.is;
-import static org.junit.Assert.*;
-
+import static org.hamcrest.CoreMatchers.not;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertThat;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FSDataInputStream;
@@ -103,17 +104,35 @@ public class TestSnapshotFileLength {
 Path file1snap1
 = SnapshotTestHelper.getSnapshotPath(sub, snapshot1, file1Name);
 
+final FileChecksum snapChksum1 = hdfs.getFileChecksum(file1snap1);
+assertThat(file and snapshot file checksums

[2/2] hadoop git commit: HDFS-7682. {{DistributedFileSystem#getFileChecksum}} of a snapshotted file includes non-snapshotted content. Contributed by Charles Lamb. (cherry picked from commit f2d7a67a2c

2015-03-03 Thread atm
HDFS-7682. {{DistributedFileSystem#getFileChecksum}} of a snapshotted file 
includes non-snapshotted content. Contributed by Charles Lamb.
(cherry picked from commit f2d7a67a2c1d9dde10ed3171fdec65dff885afcc)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bce3d442
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bce3d442
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bce3d442

Branch: refs/heads/branch-2
Commit: bce3d442ff08ee1e730b0bac112439d6a6931917
Parents: 65bfde5
Author: Aaron T. Myers a...@apache.org
Authored: Tue Mar 3 18:08:59 2015 -0800
Committer: Aaron T. Myers a...@apache.org
Committed: Tue Mar 3 18:09:31 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +++
 .../java/org/apache/hadoop/hdfs/DFSClient.java  |  3 +++
 .../snapshot/TestSnapshotFileLength.java| 25 +---
 3 files changed, 28 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bce3d442/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 69a410f..bff45bb 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -774,6 +774,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-6565. Use jackson instead jetty json in hdfs-client.
 (Akira Ajisaka via wheat9)
 
+HDFS-7682. {{DistributedFileSystem#getFileChecksum}} of a snapshotted file
+includes non-snapshotted content. (Charles Lamb via atm)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bce3d442/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index 6f96126..ba6a1d1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -2219,6 +2219,9 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
 
 // get block checksum for each block
 long remaining = length;
+if (src.contains(HdfsConstants.SEPARATOR_DOT_SNAPSHOT_DIR_SEPARATOR)) {
+  remaining = Math.min(length, blockLocations.getFileLength());
+}
     for(int i = 0; i < locatedblocks.size() && remaining > 0; i++) {
   if (refetchBlocks) {  // refetch to get fresh tokens
 blockLocations = callGetBlockLocations(namenode, src, 0, length);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bce3d442/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotFileLength.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotFileLength.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotFileLength.java
index 98aafc1..d53140f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotFileLength.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotFileLength.java
@@ -20,8 +20,8 @@ package org.apache.hadoop.hdfs.server.namenode.snapshot;
 import java.io.ByteArrayOutputStream;
 import java.io.PrintStream;
 
-
 import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileChecksum;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.hdfs.AppendTestUtil;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
@@ -29,8 +29,9 @@ import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
 import static org.hamcrest.CoreMatchers.is;
-import static org.junit.Assert.*;
-
+import static org.hamcrest.CoreMatchers.not;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertThat;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FSDataInputStream;
@@ -103,17 +104,35 @@ public class TestSnapshotFileLength {
 Path file1snap1
 = SnapshotTestHelper.getSnapshotPath(sub, snapshot1, file1Name);
 
+final FileChecksum snapChksum1 = hdfs.getFileChecksum(file1snap1);
+assertThat("file and snapshot file checksums are not equal",
+hdfs.getFileChecksum(file1
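
For readers skimming the patch: the substance of the DFSClient hunk is to stop feeding bytes written after the snapshot into the checksum by clamping the checksum range to the snapshot's recorded file length. A minimal standalone sketch of that clamping idea follows; the class, constant and parameter names (SNAPSHOT_DIR_SEPARATOR, snapshotLength) are illustrative only and are not the actual DFSClient members.

  // Sketch of the HDFS-7682 length-clamping idea: for a path under a
  // ".snapshot" directory, only the bytes that existed at snapshot time
  // should be checksummed, even if the live file has since grown.
  public final class SnapshotChecksumLength {
    private static final String SNAPSHOT_DIR_SEPARATOR = "/.snapshot/";

    /** Returns how many bytes of the file should be checksummed. */
    static long checksumLength(String src, long requestedLength, long snapshotLength) {
      if (src.contains(SNAPSHOT_DIR_SEPARATOR)) {
        // Mirrors remaining = Math.min(length, blockLocations.getFileLength()).
        return Math.min(requestedLength, snapshotLength);
      }
      return requestedLength;
    }

    public static void main(String[] args) {
      // Live file grew to 2048 bytes after the snapshot captured 1024 bytes.
      System.out.println(checksumLength("/d/.snapshot/s1/f", 2048, 1024)); // 1024
      System.out.println(checksumLength("/d/f", 2048, 1024));              // 2048
    }
  }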

[1/2] hadoop git commit: HADOOP-10626. Limit Returning Attributes for LDAP search. Contributed by Jason Hubbard. (cherry picked from commit 8709751e1ee9a2c5553823dcd715bd077052ad7f)

2015-01-27 Thread atm
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 8100c8a68 - 3c8d3816c
  refs/heads/trunk 4e15fc084 - 8bf6f0b70


HADOOP-10626. Limit Returning Attributes for LDAP search. Contributed by Jason 
Hubbard.
(cherry picked from commit 8709751e1ee9a2c5553823dcd715bd077052ad7f)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3c8d3816
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3c8d3816
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3c8d3816

Branch: refs/heads/branch-2
Commit: 3c8d3816c67d5309c56f2ee08876967e34b65ab6
Parents: 8100c8a
Author: Aaron T. Myers a...@apache.org
Authored: Tue Jan 27 13:50:45 2015 -0800
Committer: Aaron T. Myers a...@apache.org
Committed: Tue Jan 27 13:51:19 2015 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt   | 3 +++
 .../main/java/org/apache/hadoop/security/LdapGroupsMapping.java   | 2 ++
 2 files changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3c8d3816/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 43c5fc2..9b841ce 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -144,6 +144,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-4297. Enable Java assertions when running tests.
 (Tsz Wo Nicholas Sze via wheat9)
 
+HADOOP-10626. Limit Returning Attributes for LDAP search. (Jason Hubbard
+via atm)
+
   OPTIMIZATIONS
 
 HADOOP-11323. WritableComparator#compare keeps reference to byte array.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3c8d3816/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java
index c0c8d2b..d463ac7 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java
@@ -341,6 +341,8 @@ public class LdapGroupsMapping
 
 int dirSearchTimeout = conf.getInt(DIRECTORY_SEARCH_TIMEOUT, 
DIRECTORY_SEARCH_TIMEOUT_DEFAULT);
 SEARCH_CONTROLS.setTimeLimit(dirSearchTimeout);
+// Limit the attributes returned to only those required to speed up the 
search. See HADOOP-10626 for more details.
+SEARCH_CONTROLS.setReturningAttributes(new String[] {groupNameAttr});
 
 this.conf = conf;
   }
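
The whole fix is the single setReturningAttributes() call above. A small standalone JNDI sketch of the same tuning, assuming a made-up LDAP host, base DN, filter and group attribute ("cn"); only the SearchControls usage mirrors the patch, the rest is illustration.

  import javax.naming.NamingEnumeration;
  import javax.naming.NamingException;
  import javax.naming.directory.InitialDirContext;
  import javax.naming.directory.SearchControls;
  import javax.naming.directory.SearchResult;
  import java.util.Hashtable;

  public class LdapReturningAttributesExample {
    public static void main(String[] args) throws NamingException {
      Hashtable<String, String> env = new Hashtable<>();
      env.put(javax.naming.Context.INITIAL_CONTEXT_FACTORY,
          "com.sun.jndi.ldap.LdapCtxFactory");
      env.put(javax.naming.Context.PROVIDER_URL, "ldap://ldap.example.com:389");

      SearchControls controls = new SearchControls();
      controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
      controls.setTimeLimit(10_000);                        // directory search timeout, ms
      // The point of HADOOP-10626: ask the server for only the attribute we need,
      // instead of every attribute of every matching entry.
      controls.setReturningAttributes(new String[] {"cn"});

      InitialDirContext ctx = new InitialDirContext(env);
      NamingEnumeration<SearchResult> results = ctx.search(
          "ou=groups,dc=example,dc=com",
          "(member=uid=alice,ou=people,dc=example,dc=com)", controls);
      while (results.hasMore()) {
        System.out.println(results.next().getAttributes().get("cn"));
      }
      ctx.close();
    }
  }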



[2/2] hadoop git commit: HADOOP-10626. Limit Returning Attributes for LDAP search. Contributed by Jason Hubbard.

2015-01-27 Thread atm
HADOOP-10626. Limit Returning Attributes for LDAP search. Contributed by Jason 
Hubbard.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8bf6f0b7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8bf6f0b7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8bf6f0b7

Branch: refs/heads/trunk
Commit: 8bf6f0b70396e8f2d3b37e6da194b19f357e846a
Parents: 4e15fc0
Author: Aaron T. Myers a...@apache.org
Authored: Tue Jan 27 13:50:45 2015 -0800
Committer: Aaron T. Myers a...@apache.org
Committed: Tue Jan 27 13:53:35 2015 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt   | 3 +++
 .../main/java/org/apache/hadoop/security/LdapGroupsMapping.java   | 2 ++
 2 files changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8bf6f0b7/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index fce2c81..0396e7d 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -505,6 +505,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-4297. Enable Java assertions when running tests.
 (Tsz Wo Nicholas Sze via wheat9)
 
+HADOOP-10626. Limit Returning Attributes for LDAP search. (Jason Hubbard
+via atm)
+
   OPTIMIZATIONS
 
 HADOOP-11323. WritableComparator#compare keeps reference to byte array.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8bf6f0b7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java
index c0c8d2b..d463ac7 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java
@@ -341,6 +341,8 @@ public class LdapGroupsMapping
 
 int dirSearchTimeout = conf.getInt(DIRECTORY_SEARCH_TIMEOUT, 
DIRECTORY_SEARCH_TIMEOUT_DEFAULT);
 SEARCH_CONTROLS.setTimeLimit(dirSearchTimeout);
+// Limit the attributes returned to only those required to speed up the 
search. See HADOOP-10626 for more details.
+SEARCH_CONTROLS.setReturningAttributes(new String[] {groupNameAttr});
 
 this.conf = conf;
   }



[2/2] hadoop git commit: HADOOP-11332. KerberosAuthenticator#doSpnegoSequence should check if kerberos TGT is available in the subject. Contributed by Dian Fu. (cherry picked from commit 9d1a8f5897d58

2014-12-03 Thread atm
HADOOP-11332. KerberosAuthenticator#doSpnegoSequence should check if kerberos 
TGT is available in the subject. Contributed by Dian Fu.
(cherry picked from commit 9d1a8f5897d585bec96de32116fbd2118f8e0f95)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/534a021e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/534a021e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/534a021e

Branch: refs/heads/branch-2
Commit: 534a021e70ac2764617eeaf9dd8f93c7683a0b68
Parents: 58c9711
Author: Aaron T. Myers a...@apache.org
Authored: Wed Dec 3 18:53:45 2014 -0800
Committer: Aaron T. Myers a...@apache.org
Committed: Wed Dec 3 18:54:26 2014 -0800

--
 .../security/authentication/client/KerberosAuthenticator.java  | 6 +-
 hadoop-common-project/hadoop-common/CHANGES.txt| 3 +++
 2 files changed, 8 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/534a021e/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
index e4ebf1b..928866c 100644
--- 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
+++ 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
@@ -23,6 +23,8 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import javax.security.auth.Subject;
+import javax.security.auth.kerberos.KerberosKey;
+import javax.security.auth.kerberos.KerberosTicket;
 import javax.security.auth.login.AppConfigurationEntry;
 import javax.security.auth.login.Configuration;
 import javax.security.auth.login.LoginContext;
@@ -247,7 +249,9 @@ public class KerberosAuthenticator implements Authenticator 
{
 try {
   AccessControlContext context = AccessController.getContext();
   Subject subject = Subject.getSubject(context);
-  if (subject == null) {
+  if (subject == null
+  || (subject.getPrivateCredentials(KerberosKey.class).isEmpty()
+   && subject.getPrivateCredentials(KerberosTicket.class).isEmpty())) {
 LOG.debug("No subject in context, logging in");
 subject = new Subject();
 LoginContext login = new LoginContext("", subject,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/534a021e/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 62f7ea9..655216e 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -133,6 +133,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11342. KMS key ACL should ignore ALL operation for default key ACL
 and whitelist key ACL. (Dian Fu via wang)
 
+HADOOP-11332. KerberosAuthenticator#doSpnegoSequence should check if
+kerberos TGT is available in the subject. (Dian Fu via atm)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES
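
The new condition only reuses a Subject that actually carries Kerberos credentials and falls back to a fresh login otherwise. A minimal sketch of that check outside KerberosAuthenticator; hasKerberosCredentials() is an illustrative helper, not a real method of the class.

  import javax.security.auth.Subject;
  import javax.security.auth.kerberos.KerberosKey;
  import javax.security.auth.kerberos.KerberosTicket;
  import java.security.AccessController;

  public class SubjectTgtCheck {
    // True only if the Subject holds a Kerberos key or ticket, mirroring the
    // condition added by HADOOP-11332.
    static boolean hasKerberosCredentials(Subject subject) {
      return subject != null
          && (!subject.getPrivateCredentials(KerberosKey.class).isEmpty()
              || !subject.getPrivateCredentials(KerberosTicket.class).isEmpty());
    }

    public static void main(String[] args) {
      Subject subject = Subject.getSubject(AccessController.getContext());
      if (!hasKerberosCredentials(subject)) {
        System.out.println("No usable Kerberos credentials; would perform a fresh login");
      } else {
        System.out.println("Reusing Kerberos credentials from the current Subject");
      }
    }
  }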



[1/2] hadoop git commit: HADOOP-11332. KerberosAuthenticator#doSpnegoSequence should check if kerberos TGT is available in the subject. Contributed by Dian Fu.

2014-12-03 Thread atm
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 58c971164 - 534a021e7
  refs/heads/trunk 73fbb3c66 - 9d1a8f589


HADOOP-11332. KerberosAuthenticator#doSpnegoSequence should check if kerberos 
TGT is available in the subject. Contributed by Dian Fu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9d1a8f58
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9d1a8f58
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9d1a8f58

Branch: refs/heads/trunk
Commit: 9d1a8f5897d585bec96de32116fbd2118f8e0f95
Parents: 73fbb3c
Author: Aaron T. Myers a...@apache.org
Authored: Wed Dec 3 18:53:45 2014 -0800
Committer: Aaron T. Myers a...@apache.org
Committed: Wed Dec 3 18:53:45 2014 -0800

--
 .../security/authentication/client/KerberosAuthenticator.java  | 6 +-
 hadoop-common-project/hadoop-common/CHANGES.txt| 3 +++
 2 files changed, 8 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9d1a8f58/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
index e4ebf1b..928866c 100644
--- 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
+++ 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
@@ -23,6 +23,8 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import javax.security.auth.Subject;
+import javax.security.auth.kerberos.KerberosKey;
+import javax.security.auth.kerberos.KerberosTicket;
 import javax.security.auth.login.AppConfigurationEntry;
 import javax.security.auth.login.Configuration;
 import javax.security.auth.login.LoginContext;
@@ -247,7 +249,9 @@ public class KerberosAuthenticator implements Authenticator 
{
 try {
   AccessControlContext context = AccessController.getContext();
   Subject subject = Subject.getSubject(context);
-  if (subject == null) {
+  if (subject == null
+  || (subject.getPrivateCredentials(KerberosKey.class).isEmpty()
+   && subject.getPrivateCredentials(KerberosTicket.class).isEmpty())) {
 LOG.debug("No subject in context, logging in");
 subject = new Subject();
 LoginContext login = new LoginContext("", subject,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9d1a8f58/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 7a2159f..f53bceb 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -496,6 +496,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11342. KMS key ACL should ignore ALL operation for default key ACL
 and whitelist key ACL. (Dian Fu via wang)
 
+HADOOP-11332. KerberosAuthenticator#doSpnegoSequence should check if
+kerberos TGT is available in the subject. (Dian Fu via atm)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES



[2/2] hadoop git commit: HDFS-7331. Add Datanode network counts to datanode jmx page. Contributed by Charles Lamb.

2014-11-21 Thread atm
HDFS-7331. Add Datanode network counts to datanode jmx page. Contributed by 
Charles Lamb.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2d4f3e56
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2d4f3e56
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2d4f3e56

Branch: refs/heads/trunk
Commit: 2d4f3e567e4bb8068c028de12df118a4f3fa6343
Parents: b8c094b
Author: Aaron T. Myers a...@apache.org
Authored: Fri Nov 21 16:34:08 2014 -0800
Committer: Aaron T. Myers a...@apache.org
Committed: Fri Nov 21 16:36:39 2014 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  2 +
 .../hadoop/hdfs/server/datanode/DataNode.java   | 47 
 .../hdfs/server/datanode/DataNodeMXBean.java|  7 +++
 .../hdfs/server/datanode/DataXceiver.java   | 27 ++-
 .../server/datanode/TestDataNodeMetrics.java| 18 
 6 files changed, 94 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2d4f3e56/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 408a6ed..3f12cec 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -385,6 +385,9 @@ Release 2.7.0 - UNRELEASED
 
 HDFS-7420. Delegate permission checks to FSDirectory. (wheat9)
 
+HDFS-7331. Add Datanode network counts to datanode jmx page. (Charles Lamb
+via atm)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2d4f3e56/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index af18f4d..78cae9c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -155,6 +155,8 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   public static final float   
DFS_DATANODE_RAM_DISK_LOW_WATERMARK_PERCENT_DEFAULT = 10.0f;
   public static final String  DFS_DATANODE_RAM_DISK_LOW_WATERMARK_BYTES = 
"dfs.datanode.ram.disk.low.watermark.bytes";
   public static final long
DFS_DATANODE_RAM_DISK_LOW_WATERMARK_BYTES_DEFAULT = DFS_BLOCK_SIZE_DEFAULT;
+  public static final String  DFS_DATANODE_NETWORK_COUNTS_CACHE_MAX_SIZE_KEY = 
"dfs.datanode.network.counts.cache.max.size";
+  public static final int 
DFS_DATANODE_NETWORK_COUNTS_CACHE_MAX_SIZE_DEFAULT = Integer.MAX_VALUE;
 
   // This setting is for testing/internal use only.
   public static final String  DFS_DATANODE_DUPLICATE_REPLICA_DELETION = 
"dfs.datanode.duplicate.replica.deletion";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2d4f3e56/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
index a53698a..2ff6870 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -38,6 +38,8 @@ import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_IPC_ADDRESS_KEY;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_KERBEROS_PRINCIPAL_KEY;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_KEYTAB_FILE_KEY;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_MAX_LOCKED_MEMORY_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_NETWORK_COUNTS_CACHE_MAX_SIZE_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_NETWORK_COUNTS_CACHE_MAX_SIZE_DEFAULT;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_PLUGINS_KEY;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_SCAN_PERIOD_HOURS_DEFAULT;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_SCAN_PERIOD_HOURS_KEY;
@@ -77,6 +79,7 @@ import java.util.Map;
 import java.util.Set;
 import java.util.UUID;
 import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
 import

[1/2] hadoop git commit: HDFS-7331. Add Datanode network counts to datanode jmx page. Contributed by Charles Lamb. (cherry picked from commit ffa8c1a1b437cf0dc6d98a9b29161d12919e5afa)

2014-11-21 Thread atm
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 3e2e8eac2 - 3de3640e4
  refs/heads/trunk b8c094b07 - 2d4f3e567


HDFS-7331. Add Datanode network counts to datanode jmx page. Contributed by 
Charles Lamb.
(cherry picked from commit ffa8c1a1b437cf0dc6d98a9b29161d12919e5afa)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3de3640e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3de3640e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3de3640e

Branch: refs/heads/branch-2
Commit: 3de3640e4c1172f1e47565ac61a1f3183c354c79
Parents: 3e2e8ea
Author: Aaron T. Myers a...@apache.org
Authored: Fri Nov 21 16:34:08 2014 -0800
Committer: Aaron T. Myers a...@apache.org
Committed: Fri Nov 21 16:34:41 2014 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  2 +
 .../hadoop/hdfs/server/datanode/DataNode.java   | 47 
 .../hdfs/server/datanode/DataNodeMXBean.java|  7 +++
 .../hdfs/server/datanode/DataXceiver.java   | 27 ++-
 .../server/datanode/TestDataNodeMetrics.java| 18 
 6 files changed, 94 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3de3640e/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 034db7b..2a99645 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -128,6 +128,9 @@ Release 2.7.0 - UNRELEASED
 
 HDFS-7420. Delegate permission checks to FSDirectory. (wheat9)
 
+HDFS-7331. Add Datanode network counts to datanode jmx page. (Charles Lamb
+via atm)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3de3640e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 46b409e..590ba2a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -155,6 +155,8 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   public static final float   
DFS_DATANODE_RAM_DISK_LOW_WATERMARK_PERCENT_DEFAULT = 10.0f;
   public static final String  DFS_DATANODE_RAM_DISK_LOW_WATERMARK_BYTES = 
"dfs.datanode.ram.disk.low.watermark.bytes";
   public static final long
DFS_DATANODE_RAM_DISK_LOW_WATERMARK_BYTES_DEFAULT = DFS_BLOCK_SIZE_DEFAULT;
+  public static final String  DFS_DATANODE_NETWORK_COUNTS_CACHE_MAX_SIZE_KEY = 
"dfs.datanode.network.counts.cache.max.size";
+  public static final int 
DFS_DATANODE_NETWORK_COUNTS_CACHE_MAX_SIZE_DEFAULT = Integer.MAX_VALUE;
 
   // This setting is for testing/internal use only.
   public static final String  DFS_DATANODE_DUPLICATE_REPLICA_DELETION = 
"dfs.datanode.duplicate.replica.deletion";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3de3640e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
index 3f32d4b..7d49511 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -38,6 +38,8 @@ import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_IPC_ADDRESS_KEY;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_KERBEROS_PRINCIPAL_KEY;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_KEYTAB_FILE_KEY;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_MAX_LOCKED_MEMORY_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_NETWORK_COUNTS_CACHE_MAX_SIZE_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_NETWORK_COUNTS_CACHE_MAX_SIZE_DEFAULT;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_PLUGINS_KEY;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_SCAN_PERIOD_HOURS_DEFAULT;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_SCAN_PERIOD_HOURS_KEY;
@@ -77,6
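
The DataNode side of this change keeps per-host network error counts in a map whose size is capped by the new dfs.datanode.network.counts.cache.max.size key and exposes them through the datanode MXBean. A hedged sketch of that bounded-cache idea using Guava's CacheBuilder; the class, field and metric names below are illustrative and not the actual DataNode members.

  import com.google.common.cache.CacheBuilder;
  import com.google.common.cache.CacheLoader;
  import com.google.common.cache.LoadingCache;
  import java.util.Map;
  import java.util.concurrent.ConcurrentHashMap;
  import java.util.concurrent.ExecutionException;
  import java.util.concurrent.atomic.AtomicLong;

  public class DatanodeNetworkCountsSketch {
    private final LoadingCache<String, Map<String, AtomicLong>> counts;

    DatanodeNetworkCountsSketch(long maxHosts) {
      counts = CacheBuilder.newBuilder()
          .maximumSize(maxHosts)   // analogous to the new cache.max.size config key
          .build(new CacheLoader<String, Map<String, AtomicLong>>() {
            @Override
            public Map<String, AtomicLong> load(String host) {
              Map<String, AtomicLong> m = new ConcurrentHashMap<>();
              m.put("networkErrors", new AtomicLong(0));
              return m;
            }
          });
    }

    void incrNetworkErrors(String host) throws ExecutionException {
      counts.get(host).get("networkErrors").incrementAndGet();
    }

    Map<String, Map<String, AtomicLong>> snapshot() {
      return counts.asMap();       // the kind of view a JMX getter could expose
    }

    public static void main(String[] args) throws ExecutionException {
      DatanodeNetworkCountsSketch sketch = new DatanodeNetworkCountsSketch(64);
      sketch.incrNetworkErrors("host1.example.com");
      System.out.println(sketch.snapshot());
    }
  }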

[2/2] hadoop git commit: HADOOP-11157. ZKDelegationTokenSecretManager never shuts down listenerThreadPool. Contributed by Arun Suresh. (cherry picked from commit 07d489e6230682e0553840bb1a0e446acb9f8d

2014-11-17 Thread atm
HADOOP-11157. ZKDelegationTokenSecretManager never shuts down 
listenerThreadPool. Contributed by Arun Suresh.
(cherry picked from commit 07d489e6230682e0553840bb1a0e446acb9f8d19)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d35eba7b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d35eba7b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d35eba7b

Branch: refs/heads/branch-2
Commit: d35eba7b1ff98e2e542a6c6c5b389fcc20d885c7
Parents: 6eb88c2
Author: Aaron T. Myers a...@apache.org
Authored: Mon Nov 17 12:57:52 2014 -0800
Committer: Aaron T. Myers a...@apache.org
Committed: Mon Nov 17 13:02:59 2014 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt |   3 +
 .../ZKDelegationTokenSecretManager.java | 126 +++--
 .../TestZKDelegationTokenSecretManager.java | 275 +++
 3 files changed, 331 insertions(+), 73 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d35eba7b/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index ec26ac0..1942b9f 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -85,6 +85,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11294. Nfs3FileAttributes should not change the values of rdev,
 nlink and size in the constructor. (Brandon Li via wheat9)
 
+HADOOP-11157. ZKDelegationTokenSecretManager never shuts down
+listenerThreadPool. (Arun Suresh via atm)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d35eba7b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
index ebc45a5..d6bc995 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
@@ -29,6 +29,7 @@ import java.util.List;
 import java.util.Map;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
+import java.util.concurrent.TimeUnit;
 
 import javax.security.auth.login.AppConfigurationEntry;
 
@@ -38,6 +39,7 @@ import org.apache.curator.framework.CuratorFrameworkFactory;
 import org.apache.curator.framework.CuratorFrameworkFactory.Builder;
 import org.apache.curator.framework.api.ACLProvider;
 import org.apache.curator.framework.imps.DefaultACLProvider;
+import org.apache.curator.framework.recipes.cache.ChildData;
 import org.apache.curator.framework.recipes.cache.PathChildrenCache;
 import org.apache.curator.framework.recipes.cache.PathChildrenCache.StartMode;
 import org.apache.curator.framework.recipes.cache.PathChildrenCacheEvent;
@@ -48,6 +50,7 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.security.token.delegation.web.DelegationTokenManager;
 import org.apache.zookeeper.CreateMode;
 import org.apache.zookeeper.KeeperException;
@@ -80,6 +83,8 @@ public abstract class 
ZKDelegationTokenSecretManager<TokenIdent extends Abstract
   + "zkSessionTimeout";
   public static final String ZK_DTSM_ZK_CONNECTION_TIMEOUT = ZK_CONF_PREFIX
   + "zkConnectionTimeout";
+  public static final String ZK_DTSM_ZK_SHUTDOWN_TIMEOUT = ZK_CONF_PREFIX
+  + "zkShutdownTimeout";
   public static final String ZK_DTSM_ZNODE_WORKING_PATH = ZK_CONF_PREFIX
   + "znodeWorkingPath";
   public static final String ZK_DTSM_ZK_AUTH_TYPE = ZK_CONF_PREFIX
@@ -94,6 +99,7 @@ public abstract class 
ZKDelegationTokenSecretManager<TokenIdent extends Abstract
   public static final int ZK_DTSM_ZK_NUM_RETRIES_DEFAULT = 3;
   public static final int ZK_DTSM_ZK_SESSION_TIMEOUT_DEFAULT = 1;
   public static final int ZK_DTSM_ZK_CONNECTION_TIMEOUT_DEFAULT = 1;
+  public static final int ZK_DTSM_ZK_SHUTDOWN_TIMEOUT_DEFAULT = 1;
   public static final String ZK_DTSM_ZNODE_WORKING_PATH_DEAFULT = "zkdtsm";
 
   private static Logger LOG

[1/2] hadoop git commit: HADOOP-11157. ZKDelegationTokenSecretManager never shuts down listenerThreadPool. Contributed by Arun Suresh.

2014-11-17 Thread atm
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 6eb88c278 - d35eba7b1
  refs/heads/trunk bf8e4332c - bd8196e85


HADOOP-11157. ZKDelegationTokenSecretManager never shuts down 
listenerThreadPool. Contributed by Arun Suresh.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bd8196e8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bd8196e8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bd8196e8

Branch: refs/heads/trunk
Commit: bd8196e85e49d44de57237a59bcd7ceae4332c2e
Parents: bf8e433
Author: Aaron T. Myers a...@apache.org
Authored: Mon Nov 17 12:57:52 2014 -0800
Committer: Aaron T. Myers a...@apache.org
Committed: Mon Nov 17 13:02:49 2014 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt |   3 +
 .../ZKDelegationTokenSecretManager.java | 126 +++--
 .../TestZKDelegationTokenSecretManager.java | 275 +++
 3 files changed, 331 insertions(+), 73 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bd8196e8/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index bce342b..bc63c75 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -441,6 +441,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11294. Nfs3FileAttributes should not change the values of rdev,
 nlink and size in the constructor. (Brandon Li via wheat9)
 
+HADOOP-11157. ZKDelegationTokenSecretManager never shuts down
+listenerThreadPool. (Arun Suresh via atm)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bd8196e8/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
index ebc45a5..d6bc995 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
@@ -29,6 +29,7 @@ import java.util.List;
 import java.util.Map;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
+import java.util.concurrent.TimeUnit;
 
 import javax.security.auth.login.AppConfigurationEntry;
 
@@ -38,6 +39,7 @@ import org.apache.curator.framework.CuratorFrameworkFactory;
 import org.apache.curator.framework.CuratorFrameworkFactory.Builder;
 import org.apache.curator.framework.api.ACLProvider;
 import org.apache.curator.framework.imps.DefaultACLProvider;
+import org.apache.curator.framework.recipes.cache.ChildData;
 import org.apache.curator.framework.recipes.cache.PathChildrenCache;
 import org.apache.curator.framework.recipes.cache.PathChildrenCache.StartMode;
 import org.apache.curator.framework.recipes.cache.PathChildrenCacheEvent;
@@ -48,6 +50,7 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.security.token.delegation.web.DelegationTokenManager;
 import org.apache.zookeeper.CreateMode;
 import org.apache.zookeeper.KeeperException;
@@ -80,6 +83,8 @@ public abstract class 
ZKDelegationTokenSecretManager<TokenIdent extends Abstract
   + "zkSessionTimeout";
   public static final String ZK_DTSM_ZK_CONNECTION_TIMEOUT = ZK_CONF_PREFIX
   + "zkConnectionTimeout";
+  public static final String ZK_DTSM_ZK_SHUTDOWN_TIMEOUT = ZK_CONF_PREFIX
+  + "zkShutdownTimeout";
   public static final String ZK_DTSM_ZNODE_WORKING_PATH = ZK_CONF_PREFIX
   + "znodeWorkingPath";
   public static final String ZK_DTSM_ZK_AUTH_TYPE = ZK_CONF_PREFIX
@@ -94,6 +99,7 @@ public abstract class 
ZKDelegationTokenSecretManager<TokenIdent extends Abstract
   public static final int ZK_DTSM_ZK_NUM_RETRIES_DEFAULT = 3;
   public static final int ZK_DTSM_ZK_SESSION_TIMEOUT_DEFAULT = 1;
   public static final int ZK_DTSM_ZK_CONNECTION_TIMEOUT_DEFAULT = 1;
+  public static final int ZK_DTSM_ZK_SHUTDOWN_TIMEOUT_DEFAULT = 1;
   public static final String
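
The behavioural core of this patch is that the listener thread pool is now actually shut down, and is given a bounded, configurable amount of time to drain before being forced down. A minimal standalone sketch of that shutdown pattern; shutdownQuietly() is an illustrative helper, not the real ZKDelegationTokenSecretManager code.

  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;
  import java.util.concurrent.TimeUnit;

  public class ListenerPoolShutdownSketch {
    // Stop accepting new listener tasks, wait up to timeoutMs for in-flight
    // ones, then force termination; mirrors the zkShutdownTimeout idea.
    public static void shutdownQuietly(ExecutorService pool, long timeoutMs) {
      pool.shutdown();
      try {
        if (!pool.awaitTermination(timeoutMs, TimeUnit.MILLISECONDS)) {
          pool.shutdownNow();
        }
      } catch (InterruptedException ie) {
        pool.shutdownNow();
        Thread.currentThread().interrupt();
      }
    }

    public static void main(String[] args) {
      ExecutorService listenerThreadPool = Executors.newFixedThreadPool(2);
      shutdownQuietly(listenerThreadPool, 10_000L);
    }
  }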

[3/4] HADOOP-10714. AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call. Contributed by Juan Yu. (cherry picked from commit 6ba52d88ec11444cbac946ffadbc645acd0657de)

2014-11-05 Thread atm
http://git-wip-us.apache.org/repos/asf/hadoop/blob/9082fe4e/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/TestS3ADeleteManyFiles.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/TestS3ADeleteManyFiles.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/TestS3ADeleteManyFiles.java
new file mode 100644
index 000..c913a67
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/TestS3ADeleteManyFiles.java
@@ -0,0 +1,131 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  License); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an AS IS BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.scale;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.Timeout;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorCompletionService;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+
+import static org.junit.Assert.assertEquals;
+
+public class TestS3ADeleteManyFiles extends S3AScaleTestBase {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(TestS3ADeleteManyFiles.class);
+
+
+  @Rule
+  public Timeout testTimeout = new Timeout(30 * 60 * 1000);
+
+  @Test
+  public void testBulkRenameAndDelete() throws Throwable {
+final Path scaleTestDir = getTestPath();
+final Path srcDir = new Path(scaleTestDir, "src");
+final Path finalDir = new Path(scaleTestDir, "final");
+final long count = getOperationCount();
+ContractTestUtils.rm(fs, scaleTestDir, true, false);
+
+fs.mkdirs(srcDir);
+fs.mkdirs(finalDir);
+
+int testBufferSize = fs.getConf()
+.getInt(ContractTestUtils.IO_CHUNK_BUFFER_SIZE,
+ContractTestUtils.DEFAULT_IO_CHUNK_BUFFER_SIZE);
+// use Executor to speed up file creation
+ExecutorService exec = Executors.newFixedThreadPool(16);
+final ExecutorCompletionService<Boolean> completionService =
+new ExecutorCompletionService<Boolean>(exec);
+try {
+  final byte[] data = ContractTestUtils.dataset(testBufferSize, 'a', 'z');
+
+  for (int i = 0; i < count; ++i) {
+final String fileName = "foo-" + i;
+completionService.submit(new Callable<Boolean>() {
+  @Override
+  public Boolean call() throws IOException {
+ContractTestUtils.createFile(fs, new Path(srcDir, fileName),
+false, data);
+return fs.exists(new Path(srcDir, fileName));
+  }
+});
+  }
+  for (int i = 0; i < count; ++i) {
+final Future<Boolean> future = completionService.take();
+try {
+  if (!future.get()) {
+LOG.warn("cannot create file");
+  }
+} catch (ExecutionException e) {
+  LOG.warn("Error while uploading file", e.getCause());
+  throw e;
+}
+  }
+} finally {
+  exec.shutdown();
+}
+
+int nSrcFiles = fs.listStatus(srcDir).length;
+fs.rename(srcDir, finalDir);
+assertEquals(nSrcFiles, fs.listStatus(finalDir).length);
+ContractTestUtils.assertPathDoesNotExist(fs, "not deleted after rename",
+new Path(srcDir, "foo-" + 0));
+ContractTestUtils.assertPathDoesNotExist(fs, "not deleted after rename",
+new Path(srcDir, "foo-" + count / 2));
+ContractTestUtils.assertPathDoesNotExist(fs, "not deleted after rename",
+new Path(srcDir, "foo-" + (count - 1)));
+ContractTestUtils.assertPathExists(fs, "not renamed to dest dir",
+new Path(finalDir, "foo-" + 0));
+ContractTestUtils.assertPathExists(fs, "not renamed to dest dir",
+new Path(finalDir, "foo-" + count/2));
+ContractTestUtils.assertPathExists(fs, "not renamed to dest dir",
+new Path(finalDir, "foo-" + (count-1)));
+
+ContractTestUtils.assertDeleted(fs, finalDir, true, false);
+  }
+
+  @Test
+  public void testOpenCreate() throws IOException {
+Path dir = new 

[1/4] HADOOP-10714. AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call. Contributed by Juan Yu.

2014-11-05 Thread atm
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 f92ff24f5 - 9082fe4e2
  refs/heads/trunk 395275af8 - 6ba52d88e


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6ba52d88/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/TestS3ADeleteManyFiles.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/TestS3ADeleteManyFiles.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/TestS3ADeleteManyFiles.java
new file mode 100644
index 000..c913a67
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/TestS3ADeleteManyFiles.java
@@ -0,0 +1,131 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  License); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an AS IS BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.scale;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.Timeout;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorCompletionService;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+
+import static org.junit.Assert.assertEquals;
+
+public class TestS3ADeleteManyFiles extends S3AScaleTestBase {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(TestS3ADeleteManyFiles.class);
+
+
+  @Rule
+  public Timeout testTimeout = new Timeout(30 * 60 * 1000);
+
+  @Test
+  public void testBulkRenameAndDelete() throws Throwable {
+final Path scaleTestDir = getTestPath();
+final Path srcDir = new Path(scaleTestDir, "src");
+final Path finalDir = new Path(scaleTestDir, "final");
+final long count = getOperationCount();
+ContractTestUtils.rm(fs, scaleTestDir, true, false);
+
+fs.mkdirs(srcDir);
+fs.mkdirs(finalDir);
+
+int testBufferSize = fs.getConf()
+.getInt(ContractTestUtils.IO_CHUNK_BUFFER_SIZE,
+ContractTestUtils.DEFAULT_IO_CHUNK_BUFFER_SIZE);
+// use Executor to speed up file creation
+ExecutorService exec = Executors.newFixedThreadPool(16);
+final ExecutorCompletionService<Boolean> completionService =
+new ExecutorCompletionService<Boolean>(exec);
+try {
+  final byte[] data = ContractTestUtils.dataset(testBufferSize, 'a', 'z');
+
+  for (int i = 0; i < count; ++i) {
+final String fileName = "foo-" + i;
+completionService.submit(new Callable<Boolean>() {
+  @Override
+  public Boolean call() throws IOException {
+ContractTestUtils.createFile(fs, new Path(srcDir, fileName),
+false, data);
+return fs.exists(new Path(srcDir, fileName));
+  }
+});
+  }
+  for (int i = 0; i < count; ++i) {
+final Future<Boolean> future = completionService.take();
+try {
+  if (!future.get()) {
+LOG.warn("cannot create file");
+  }
+} catch (ExecutionException e) {
+  LOG.warn("Error while uploading file", e.getCause());
+  throw e;
+}
+  }
+} finally {
+  exec.shutdown();
+}
+
+int nSrcFiles = fs.listStatus(srcDir).length;
+fs.rename(srcDir, finalDir);
+assertEquals(nSrcFiles, fs.listStatus(finalDir).length);
+ContractTestUtils.assertPathDoesNotExist(fs, "not deleted after rename",
+new Path(srcDir, "foo-" + 0));
+ContractTestUtils.assertPathDoesNotExist(fs, "not deleted after rename",
+new Path(srcDir, "foo-" + count / 2));
+ContractTestUtils.assertPathDoesNotExist(fs, "not deleted after rename",
+new Path(srcDir, "foo-" + (count - 1)));
+ContractTestUtils.assertPathExists(fs, "not renamed to dest dir",
+new Path(finalDir, "foo-" + 0));
+ContractTestUtils.assertPathExists(fs, "not renamed to dest dir",
+new Path(finalDir, "foo-" + count/2));
+ContractTestUtils.assertPathExists(fs, "not renamed to dest dir",
+new Path(finalDir, "foo-" + (count-1)));
+
+

[2/4] git commit: HADOOP-10714. AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call. Contributed by Juan Yu.

2014-11-05 Thread atm
HADOOP-10714. AmazonS3Client.deleteObjects() need to be limited to 1000 entries 
per call. Contributed by Juan Yu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6ba52d88
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6ba52d88
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6ba52d88

Branch: refs/heads/trunk
Commit: 6ba52d88ec11444cbac946ffadbc645acd0657de
Parents: 395275a
Author: Aaron T. Myers a...@apache.org
Authored: Wed Nov 5 17:17:04 2014 -0800
Committer: Aaron T. Myers a...@apache.org
Committed: Wed Nov 5 17:17:04 2014 -0800

--
 .gitignore  |   1 +
 hadoop-common-project/hadoop-common/CHANGES.txt |   3 +
 .../src/site/markdown/filesystem/testing.md |  47 ---
 .../hadoop/fs/FileSystemContractBaseTest.java   |   6 +-
 .../fs/contract/AbstractContractDeleteTest.java |  27 ++
 .../fs/contract/AbstractContractMkdirTest.java  |  19 +
 .../fs/contract/AbstractContractRenameTest.java |  41 ++
 .../hadoop/fs/contract/ContractOptions.java |   7 +
 .../hadoop/fs/contract/ContractTestUtils.java   | 139 +++
 .../src/test/resources/contract/localfs.xml |   4 +
 hadoop-tools/hadoop-aws/pom.xml |   7 +
 .../org/apache/hadoop/fs/s3/S3Credentials.java  |   4 +-
 .../fs/s3a/BasicAWSCredentialsProvider.java |   8 +-
 .../org/apache/hadoop/fs/s3a/Constants.java |   7 +-
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java | 147 ---
 .../apache/hadoop/fs/s3a/S3AInputStream.java|  38 +-
 .../apache/hadoop/fs/s3a/S3AOutputStream.java   |  18 +-
 .../site/markdown/tools/hadoop-aws/index.md | 417 +++
 .../fs/contract/s3a/TestS3AContractRename.java  |  13 +-
 .../fs/s3/S3FileSystemContractBaseTest.java |  11 +-
 .../fs/s3a/S3AFileSystemContractBaseTest.java   | 327 ---
 .../org/apache/hadoop/fs/s3a/S3ATestUtils.java  |  51 +++
 .../fs/s3a/TestS3AFileSystemContract.java   | 105 +
 .../hadoop/fs/s3a/scale/S3AScaleTestBase.java   |  89 
 .../fs/s3a/scale/TestS3ADeleteManyFiles.java| 131 ++
 .../NativeS3FileSystemContractBaseTest.java |  11 +-
 .../TestJets3tNativeFileSystemStore.java|   3 +
 .../src/test/resources/contract/s3a.xml |   5 +
 .../hadoop-aws/src/test/resources/core-site.xml |  51 +++
 29 files changed, 1263 insertions(+), 474 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6ba52d88/.gitignore
--
diff --git a/.gitignore b/.gitignore
index 8b132cb..15c040c 100644
--- a/.gitignore
+++ b/.gitignore
@@ -21,3 +21,4 @@ 
hadoop-common-project/hadoop-common/src/test/resources/contract-test-options.xml
 hadoop-tools/hadoop-openstack/src/test/resources/contract-test-options.xml
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/tla/yarnregistry.toolbox
 yarnregistry.pdf
+hadoop-tools/hadoop-aws/src/test/resources/contract-test-options.xml

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6ba52d88/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 422bc3e..8567e1e 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -406,6 +406,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11267. TestSecurityUtil fails when run with JDK8 because of empty
 principal names. (Stephen Chu via wheat9)
 
+HADOOP-10714. AmazonS3Client.deleteObjects() need to be limited to 1000
+entries per call. (Juan Yu via atm)
+
 Release 2.6.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6ba52d88/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md 
b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
index bc66e67..444fb60 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
@@ -28,53 +28,6 @@ These filesystem bindings must be defined in an XML 
configuration file, usually
 
`hadoop-common-project/hadoop-common/src/test/resources/contract-test-options.xml`.
 This file is excluded should not be checked in.
 
-### s3://
-
-In `contract-test-options.xml`, the filesystem name must be defined in the 
property `fs.contract.test.fs.s3`. The standard configuration options to define 
the S3 authentication details must also be provided.
-
-Example
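
The S3 multi-object delete API accepts at most 1000 keys per request, which is the limit this change enforces in S3AFileSystem. An illustrative, SDK-free sketch of the batching follows; doDeleteObjects() is a placeholder for the real AmazonS3Client.deleteObjects() call and does not claim its signature.

  import java.util.ArrayList;
  import java.util.List;

  public class S3DeleteBatcher {
    private static final int MAX_ENTRIES_PER_DELETE = 1000;

    // Split an arbitrarily long key list into chunks of at most 1000 and
    // issue one delete request per chunk.
    static void deleteAll(List<String> keys) {
      for (int start = 0; start < keys.size(); start += MAX_ENTRIES_PER_DELETE) {
        int end = Math.min(start + MAX_ENTRIES_PER_DELETE, keys.size());
        doDeleteObjects(keys.subList(start, end));
      }
    }

    private static void doDeleteObjects(List<String> batch) {
      // Placeholder: the real code builds a multi-object delete request here.
      System.out.println("deleting " + batch.size() + " keys");
    }

    public static void main(String[] args) {
      List<String> keys = new ArrayList<>();
      for (int i = 0; i < 2500; i++) {
        keys.add("bucket/prefix/key-" + i);
      }
      deleteAll(keys);   // prints 1000, 1000, 500
    }
  }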

[4/4] git commit: HADOOP-10714. AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call. Contributed by Juan Yu. (cherry picked from commit 6ba52d88ec11444cbac946ffadbc645acd0657de)

2014-11-05 Thread atm
HADOOP-10714. AmazonS3Client.deleteObjects() need to be limited to 1000 entries 
per call. Contributed by Juan Yu.
(cherry picked from commit 6ba52d88ec11444cbac946ffadbc645acd0657de)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9082fe4e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9082fe4e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9082fe4e

Branch: refs/heads/branch-2
Commit: 9082fe4e206692695ae877d27c19cac87f6481dc
Parents: f92ff24
Author: Aaron T. Myers a...@apache.org
Authored: Wed Nov 5 17:17:04 2014 -0800
Committer: Aaron T. Myers a...@apache.org
Committed: Wed Nov 5 17:24:55 2014 -0800

--
 .gitignore  |   1 +
 hadoop-common-project/hadoop-common/CHANGES.txt |   3 +
 .../src/site/markdown/filesystem/testing.md |  47 ---
 .../hadoop/fs/FileSystemContractBaseTest.java   |   4 +-
 .../fs/contract/AbstractContractDeleteTest.java |  27 ++
 .../fs/contract/AbstractContractMkdirTest.java  |  19 +
 .../fs/contract/AbstractContractRenameTest.java |  41 ++
 .../hadoop/fs/contract/ContractOptions.java |   7 +
 .../hadoop/fs/contract/ContractTestUtils.java   | 139 +++
 .../src/test/resources/contract/localfs.xml |   4 +
 hadoop-tools/hadoop-aws/pom.xml |   7 +
 .../org/apache/hadoop/fs/s3/S3Credentials.java  |   4 +-
 .../fs/s3a/BasicAWSCredentialsProvider.java |   8 +-
 .../org/apache/hadoop/fs/s3a/Constants.java |   7 +-
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java | 147 ---
 .../apache/hadoop/fs/s3a/S3AInputStream.java|  38 +-
 .../apache/hadoop/fs/s3a/S3AOutputStream.java   |  18 +-
 .../site/markdown/tools/hadoop-aws/index.md | 417 +++
 .../fs/contract/s3a/TestS3AContractRename.java  |  13 +-
 .../fs/s3/S3FileSystemContractBaseTest.java |  11 +-
 .../fs/s3a/S3AFileSystemContractBaseTest.java   | 327 ---
 .../org/apache/hadoop/fs/s3a/S3ATestUtils.java  |  51 +++
 .../fs/s3a/TestS3AFileSystemContract.java   | 105 +
 .../hadoop/fs/s3a/scale/S3AScaleTestBase.java   |  89 
 .../fs/s3a/scale/TestS3ADeleteManyFiles.java| 131 ++
 .../NativeS3FileSystemContractBaseTest.java |  11 +-
 .../TestJets3tNativeFileSystemStore.java|   3 +
 .../src/test/resources/contract/s3a.xml |   5 +
 .../hadoop-aws/src/test/resources/core-site.xml |  51 +++
 29 files changed, 1262 insertions(+), 473 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9082fe4e/.gitignore
--
diff --git a/.gitignore b/.gitignore
index 8b132cb..15c040c 100644
--- a/.gitignore
+++ b/.gitignore
@@ -21,3 +21,4 @@ 
hadoop-common-project/hadoop-common/src/test/resources/contract-test-options.xml
 hadoop-tools/hadoop-openstack/src/test/resources/contract-test-options.xml
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/tla/yarnregistry.toolbox
 yarnregistry.pdf
+hadoop-tools/hadoop-aws/src/test/resources/contract-test-options.xml

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9082fe4e/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index be69d80..563ed84 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -63,6 +63,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11267. TestSecurityUtil fails when run with JDK8 because of empty
 principal names. (Stephen Chu via wheat9)
 
+HADOOP-10714. AmazonS3Client.deleteObjects() need to be limited to 1000
+entries per call. (Juan Yu via atm)
+
 Release 2.6.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9082fe4e/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md 
b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
index bc66e67..444fb60 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md
@@ -28,53 +28,6 @@ These filesystem bindings must be defined in an XML 
configuration file, usually
 
`hadoop-common-project/hadoop-common/src/test/resources/contract-test-options.xml`.
 This file is excluded should not be checked in.
 
-### s3://
-
-In `contract-test-options.xml`, the filesystem name must be defined in the 
property `fs.contract.test.fs.s3`. The standard configuration options to define 
the S3

[2/2] git commit: HADOOP-11272. Allow ZKSignerSecretProvider and ZKDelegationTokenSecretManager to use the same curator client. Contributed by Arun Suresh. (cherry picked from commit 8a261e68e4177b47b

2014-11-05 Thread atm
);
   Mockito.when(config.getInitParameterNames()).thenReturn(
 new 
Vector<String>(Arrays.asList(AuthenticationFilter.AUTH_TYPE)).elements());

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e96f0c6a/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 563ed84..735962f 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -66,6 +66,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-10714. AmazonS3Client.deleteObjects() need to be limited to 1000
 entries per call. (Juan Yu via atm)
 
+HADOOP-11272. Allow ZKSignerSecretProvider and
+ZKDelegationTokenSecretManager to use the same curator client. (Arun 
Suresh via atm)
+
 Release 2.6.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e96f0c6a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
index 82dd2da..ebc45a5 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
@@ -136,7 +136,11 @@ public abstract class 
ZKDelegationTokenSecretManager<TokenIdent extends Abstract
 conf.getLong(DelegationTokenManager.REMOVAL_SCAN_INTERVAL,
 DelegationTokenManager.REMOVAL_SCAN_INTERVAL_DEFAULT) * 1000);
 if (CURATOR_TL.get() != null) {
-  zkClient = CURATOR_TL.get();
+  zkClient =
+  CURATOR_TL.get().usingNamespace(
+  conf.get(ZK_DTSM_ZNODE_WORKING_PATH,
+  ZK_DTSM_ZNODE_WORKING_PATH_DEAFULT)
+  + "/" + ZK_DTSM_NAMESPACE);
   isExternalClient = true;
 } else {
   String connString = conf.get(ZK_DTSM_ZK_CONNECTION_STRING);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e96f0c6a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationFilter.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationFilter.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationFilter.java
index aa9ec99..fbd1129 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationFilter.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationFilter.java
@@ -18,6 +18,7 @@
 package org.apache.hadoop.security.token.delegation.web;
 
 import com.google.common.annotations.VisibleForTesting;
+
 import org.apache.curator.framework.CuratorFramework;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
@@ -46,6 +47,7 @@ import javax.servlet.ServletException;
 import javax.servlet.http.HttpServletRequest;
 import javax.servlet.http.HttpServletRequestWrapper;
 import javax.servlet.http.HttpServletResponse;
+
 import java.io.IOException;
 import java.io.Writer;
 import java.nio.charset.Charset;
@@ -156,14 +158,7 @@ public class DelegationTokenAuthenticationFilter
 
   @Override
   public void init(FilterConfig filterConfig) throws ServletException {
-// A single CuratorFramework should be used for a ZK cluster.
-// If the ZKSignerSecretProvider has already created it, it has to
-// be set here... to be used by the ZKDelegationTokenSecretManager
-ZKDelegationTokenSecretManager.setCurator((CuratorFramework)
-filterConfig.getServletContext().getAttribute(ZKSignerSecretProvider.
-ZOOKEEPER_SIGNER_SECRET_PROVIDER_CURATOR_CLIENT_ATTRIBUTE));
 super.init(filterConfig);
-ZKDelegationTokenSecretManager.setCurator(null);
 AuthenticationHandler handler = getAuthenticationHandler();
 AbstractDelegationTokenSecretManager dtSecretManager =
 (AbstractDelegationTokenSecretManager) 
filterConfig.getServletContext().
@@ -188,6 +183,19 @@ public class DelegationTokenAuthenticationFilter
 ProxyUsers.refreshSuperUserGroupsConfiguration(conf, PROXYUSER_PREFIX);
   }
 
+  @Override
+  protected void

[1/2] git commit: HADOOP-11272. Allow ZKSignerSecretProvider and ZKDelegationTokenSecretManager to use the same curator client. Contributed by Arun Suresh.

2014-11-05 Thread atm
(AuthenticationFilter.AUTH_TYPE)).thenReturn("kerberos");
   Mockito.when(config.getInitParameterNames()).thenReturn(
 new 
Vector<String>(Arrays.asList(AuthenticationFilter.AUTH_TYPE)).elements());

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8a261e68/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 8567e1e..55ef9d3 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -409,6 +409,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-10714. AmazonS3Client.deleteObjects() need to be limited to 1000
 entries per call. (Juan Yu via atm)
 
+HADOOP-11272. Allow ZKSignerSecretProvider and
+ZKDelegationTokenSecretManager to use the same curator client. (Arun 
Suresh via atm)
+
 Release 2.6.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8a261e68/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
index 82dd2da..ebc45a5 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
@@ -136,7 +136,11 @@ public abstract class 
ZKDelegationTokenSecretManager<TokenIdent extends Abstract
 conf.getLong(DelegationTokenManager.REMOVAL_SCAN_INTERVAL,
 DelegationTokenManager.REMOVAL_SCAN_INTERVAL_DEFAULT) * 1000);
 if (CURATOR_TL.get() != null) {
-  zkClient = CURATOR_TL.get();
+  zkClient =
+  CURATOR_TL.get().usingNamespace(
+  conf.get(ZK_DTSM_ZNODE_WORKING_PATH,
+  ZK_DTSM_ZNODE_WORKING_PATH_DEAFULT)
+  + "/" + ZK_DTSM_NAMESPACE);
   isExternalClient = true;
 } else {
   String connString = conf.get(ZK_DTSM_ZK_CONNECTION_STRING);
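
For context, the hunk above relies on Curator's namespace facility so that
several components can share one ZooKeeper client while keeping their znodes
separate. A minimal standalone sketch of that pattern, using stock Curator
APIs only; the connect string, namespace names, and paths are illustrative
and are not taken from the patch:

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class SharedCuratorSketch {
  public static void main(String[] args) throws Exception {
    // One shared ZK connection (and session) for the whole process.
    CuratorFramework shared = CuratorFrameworkFactory.newClient(
        "zk1:2181,zk2:2181", new ExponentialBackoffRetry(1000, 3));
    shared.start();

    // Each consumer works under its own namespace, so delegation-token
    // state and signer secrets stay separated on the same connection.
    CuratorFramework tokenView = shared.usingNamespace("zkdtsm/ZKDTSMRoot");
    CuratorFramework signerView = shared.usingNamespace("signer-secrets");

    tokenView.create().creatingParentsIfNeeded().forPath("/demo");
    signerView.create().creatingParentsIfNeeded().forPath("/demo");

    shared.close();
  }
}

The added lines above achieve the same isolation by deriving the token
manager's view from the externally supplied client via usingNamespace.
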

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8a261e68/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationFilter.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationFilter.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationFilter.java
index aa9ec99..fbd1129 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationFilter.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationFilter.java
@@ -18,6 +18,7 @@
 package org.apache.hadoop.security.token.delegation.web;
 
 import com.google.common.annotations.VisibleForTesting;
+
 import org.apache.curator.framework.CuratorFramework;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
@@ -46,6 +47,7 @@ import javax.servlet.ServletException;
 import javax.servlet.http.HttpServletRequest;
 import javax.servlet.http.HttpServletRequestWrapper;
 import javax.servlet.http.HttpServletResponse;
+
 import java.io.IOException;
 import java.io.Writer;
 import java.nio.charset.Charset;
@@ -156,14 +158,7 @@ public class DelegationTokenAuthenticationFilter
 
   @Override
   public void init(FilterConfig filterConfig) throws ServletException {
-// A single CuratorFramework should be used for a ZK cluster.
-// If the ZKSignerSecretProvider has already created it, it has to
-// be set here... to be used by the ZKDelegationTokenSecretManager
-ZKDelegationTokenSecretManager.setCurator((CuratorFramework)
-filterConfig.getServletContext().getAttribute(ZKSignerSecretProvider.
-ZOOKEEPER_SIGNER_SECRET_PROVIDER_CURATOR_CLIENT_ATTRIBUTE));
 super.init(filterConfig);
-ZKDelegationTokenSecretManager.setCurator(null);
 AuthenticationHandler handler = getAuthenticationHandler();
 AbstractDelegationTokenSecretManager dtSecretManager =
 (AbstractDelegationTokenSecretManager) 
filterConfig.getServletContext().
@@ -188,6 +183,19 @@ public class DelegationTokenAuthenticationFilter
 ProxyUsers.refreshSuperUserGroupsConfiguration(conf

[1/2] git commit: HADOOP-11187 NameNode - KMS communication fails after a long period of inactivity. Contributed by Arun Suresh. (cherry picked from commit d593035d50e9997f31ddd67275b6e68504f9ca3c)

2014-11-05 Thread atm
 HADOOP-11272. Allow ZKSignerSecretProvider and
 ZKDelegationTokenSecretManager to use the same curator client. (Arun 
Suresh via atm)
 
+HADOOP-11187 NameNode - KMS communication fails after a long period of
+inactivity. (Arun Suresh via atm)
+
 Release 2.6.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d698ed1d/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
index 5c332a8..e9e8af4 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
@@ -81,6 +81,8 @@ import com.google.common.base.Preconditions;
 public class KMSClientProvider extends KeyProvider implements CryptoExtension,
 KeyProviderDelegationTokenExtension.DelegationTokenExtension {
 
+  private static final String INVALID_SIGNATURE = "Invalid signature";
+
  private static final String ANONYMOUS_REQUESTS_DISALLOWED = "Anonymous 
requests are disallowed";
 
  public static final String TOKEN_KIND = "kms-dt";
@@ -453,7 +455,8 @@ public class KMSClientProvider extends KeyProvider 
implements CryptoExtension,
   throw ex;
 }
 if ((conn.getResponseCode() == HttpURLConnection.HTTP_FORBIDDEN
- && conn.getResponseMessage().equals(ANONYMOUS_REQUESTS_DISALLOWED))
+ && (conn.getResponseMessage().equals(ANONYMOUS_REQUESTS_DISALLOWED) ||
+conn.getResponseMessage().contains(INVALID_SIGNATURE)))
 || conn.getResponseCode() == HttpURLConnection.HTTP_UNAUTHORIZED) {
   // Ideally, this should happen only when there is an Authentication
   // failure. Unfortunately, the AuthenticationFilter returns 403 when it
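
The widened condition above is easier to read in isolation. A hedged sketch
of the same decision as a standalone predicate; the constant values mirror
the strings referenced in the patch, and the class and method names are made
up for illustration:

import java.io.IOException;
import java.net.HttpURLConnection;

final class KmsAuthRetrySketch {
  private static final String ANONYMOUS_REQUESTS_DISALLOWED =
      "Anonymous requests are disallowed";
  private static final String INVALID_SIGNATURE = "Invalid signature";

  /** True when the response looks like an authentication failure that a
   *  fresh handshake (new auth token) could fix, so the call is retried. */
  static boolean shouldRetryWithNewAuthToken(HttpURLConnection conn)
      throws IOException {
    int code = conn.getResponseCode();
    String msg = conn.getResponseMessage();
    boolean staleSignerOrAnonymous =
        code == HttpURLConnection.HTTP_FORBIDDEN
            && msg != null
            && (msg.equals(ANONYMOUS_REQUESTS_DISALLOWED)
                || msg.contains(INVALID_SIGNATURE));
    return staleSignerOrAnonymous
        || code == HttpURLConnection.HTTP_UNAUTHORIZED;
  }
}
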

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d698ed1d/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
 
b/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
index 9e76178..86e6484 100644
--- 
a/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
+++ 
b/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
@@ -900,6 +900,7 @@ public class TestKMS {
 keytab.getAbsolutePath());
 conf.set("hadoop.kms.authentication.kerberos.principal", "HTTP/localhost");
 conf.set("hadoop.kms.authentication.kerberos.name.rules", "DEFAULT");
+conf.set("hadoop.kms.authentication.token.validity", "1");
 
 for (KMSACLs.Type type : KMSACLs.Type.values()) {
   conf.set(type.getAclConfigKey(), type.toString());
@@ -930,11 +931,16 @@ public class TestKMS {
   @Override
   public Void run() throws Exception {
 KMSClientProvider kp = new KMSClientProvider(uri, conf);
+
+kp.createKey("k0", new byte[16],
+new KeyProvider.Options(conf));
+// This happens before rollover
 kp.createKey("k1", new byte[16],
 new KeyProvider.Options(conf));
-makeAuthTokenStale(kp);
+// Atleast 2 rollovers.. so should induce signer Exception
+Thread.sleep(3500);
 kp.createKey("k2", new byte[16],
-new KeyProvider.Options(conf));
+  new KeyProvider.Options(conf));
 return null;
   }
 });
@@ -958,15 +964,16 @@ public class TestKMS {
 KMSClientProvider kp = new KMSClientProvider(uri, conf);
 kp.createKey("k3", new byte[16],
 new KeyProvider.Options(conf));
-makeAuthTokenStale(kp);
+// Atleast 2 rollovers.. so should induce signer Exception
+Thread.sleep(3500);
 try {
   kp.createKey("k4", new byte[16],
   new KeyProvider.Options(conf));
-  Assert.fail("Shoud fail since retry count == 0");
+  Assert.fail("This should not succeed..");
 } catch (IOException e) {
   Assert.assertTrue(
-  "HTTP exception must be a 403 : " + e.getMessage(), e
-  .getMessage().contains

[2/2] git commit: HADOOP-11187 NameNode - KMS communication fails after a long period of inactivity. Contributed by Arun Suresh.

2014-11-05 Thread atm
();
-  Assert.assertEquals(HttpURLConnection.HTTP_FORBIDDEN, 
conn.getResponseCode());
-  Assert.assertEquals("Anonymous requests are disallowed", 
conn.getResponseMessage());
+  Assert.assertEquals(HttpURLConnection.HTTP_UNAUTHORIZED, 
conn.getResponseCode());
+  
Assert.assertTrue(conn.getHeaderFields().containsKey("WWW-Authenticate"));
+  Assert.assertEquals("Authentication required", 
conn.getResponseMessage());
 } finally {
   auth.stop();
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef5af4f8/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java
 
b/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java
index 3b6b958..c01c182 100644
--- 
a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java
+++ 
b/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java
@@ -537,11 +537,11 @@ public class TestAuthenticationFilter {
 }
 ).when(chain).doFilter(Mockito.<ServletRequest>anyObject(), 
Mockito.<ServletResponse>anyObject());
 
+  
Mockito.when(response.containsHeader("WWW-Authenticate")).thenReturn(true);
   filter.doFilter(request, response, chain);
 
   Mockito.verify(response).sendError(
   HttpServletResponse.SC_UNAUTHORIZED, "Authentication required");
-  Mockito.verify(response).setHeader("WWW-Authenticate", "dummyauth");
 } finally {
   filter.destroy();
 }
@@ -852,6 +852,7 @@ public class TestAuthenticationFilter {
   Mockito.when(request.getCookies()).thenReturn(new Cookie[]{cookie});
 
   HttpServletResponse response = Mockito.mock(HttpServletResponse.class);
+  
Mockito.when(response.containsHeader("WWW-Authenticate")).thenReturn(true);
   FilterChain chain = Mockito.mock(FilterChain.class);
 
   verifyUnauthorized(filter, request, response, chain);
@@ -930,6 +931,7 @@ public class TestAuthenticationFilter {
   Mockito.when(request.getCookies()).thenReturn(new Cookie[]{cookie});
 
   HttpServletResponse response = Mockito.mock(HttpServletResponse.class);
+  
Mockito.when(response.containsHeader("WWW-Authenticate")).thenReturn(true);
   FilterChain chain = Mockito.mock(FilterChain.class);
 
   verifyUnauthorized(filter, request, response, chain);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef5af4f8/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestPseudoAuthenticationHandler.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestPseudoAuthenticationHandler.java
 
b/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestPseudoAuthenticationHandler.java
index 91c1103..b52915d 100644
--- 
a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestPseudoAuthenticationHandler.java
+++ 
b/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestPseudoAuthenticationHandler.java
@@ -21,6 +21,7 @@ import org.mockito.Mockito;
 
 import javax.servlet.http.HttpServletRequest;
 import javax.servlet.http.HttpServletResponse;
+
 import java.util.Properties;
 
 public class TestPseudoAuthenticationHandler {
@@ -74,12 +75,8 @@ public class TestPseudoAuthenticationHandler {
   HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
   HttpServletResponse response = Mockito.mock(HttpServletResponse.class);
 
-  handler.authenticate(request, response);
-  Assert.fail();
-} catch (AuthenticationException ex) {
-  // Expected
-} catch (Exception ex) {
-  Assert.fail();
+  AuthenticationToken token = handler.authenticate(request, response);
+  Assert.assertNull(token);
 } finally {
   handler.destroy();
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef5af4f8/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 55ef9d3..8587f12 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -412,6 +412,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11272. Allow ZKSignerSecretProvider and
 ZKDelegationTokenSecretManager to use the same curator client. (Arun 
Suresh via atm)
 
+HADOOP-11187 NameNode - KMS communication

[3/3] git commit: HADOOP-11176. KMSClientProvider authentication fails when both currentUgi and loginUgi are a proxied user. Contributed by Arun Suresh. (cherry picked from commit 0e57aa3bf68937473693

2014-10-13 Thread atm
HADOOP-11176. KMSClientProvider authentication fails when both currentUgi and 
loginUgi are a proxied user. Contributed by Arun Suresh.
(cherry picked from commit 0e57aa3bf689374736939300d8f3525ec38bead7)
(cherry picked from commit f3132eee1011b750158169c099b26ce8f6e2d1f4)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/834533fd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/834533fd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/834533fd

Branch: refs/heads/branch-2.6
Commit: 834533fdfcdfe74b0a4c721bb44c5632a96e7160
Parents: e6102b1
Author: Aaron T. Myers a...@apache.org
Authored: Mon Oct 13 18:09:39 2014 -0700
Committer: Aaron T. Myers a...@apache.org
Committed: Mon Oct 13 18:10:40 2014 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |   3 +
 .../crypto/key/kms/KMSClientProvider.java   |  15 +-
 .../hadoop/crypto/key/kms/server/TestKMS.java   | 154 +--
 3 files changed, 155 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/834533fd/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index d08a86a..07241fa 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -466,6 +466,9 @@ Release 2.6.0 - UNRELEASED
 HADOOP-11193. Fix uninitialized variables in NativeIO.c
 (Xiaoyu Yao via wheat9)
 
+HADOOP-11176. KMSClientProvider authentication fails when both currentUgi
+and loginUgi are a proxied user. (Arun Suresh via atm)
+
 BREAKDOWN OF HDFS-6134 AND HADOOP-10150 SUBTASKS AND RELATED JIRAS
   
   HADOOP-10734. Implement high-performance secure random number sources.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/834533fd/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
index 97e458e..60faaa5 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
@@ -250,8 +250,8 @@ public class KMSClientProvider extends KeyProvider 
implements CryptoExtension,
   private SSLFactory sslFactory;
   private ConnectionConfigurator configurator;
   private DelegationTokenAuthenticatedURL.Token authToken;
-  private UserGroupInformation loginUgi;
   private final int authRetry;
+  private final UserGroupInformation actualUgi;
 
   @Override
   public String toString() {
@@ -335,7 +335,11 @@ public class KMSClientProvider extends KeyProvider 
implements CryptoExtension,
 KMS_CLIENT_ENC_KEY_CACHE_NUM_REFILL_THREADS_DEFAULT),
 new EncryptedQueueRefiller());
 authToken = new DelegationTokenAuthenticatedURL.Token();
-loginUgi = UserGroupInformation.getCurrentUser();
+actualUgi =
+(UserGroupInformation.getCurrentUser().getAuthenticationMethod() ==
+UserGroupInformation.AuthenticationMethod.PROXY) ? UserGroupInformation
+.getCurrentUser().getRealUser() : UserGroupInformation
+.getCurrentUser();
   }
 
   private String createServiceURL(URL url) throws IOException {
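
The constructor change above amounts to picking the UGI that actually holds
login credentials: the real user behind a proxy UGI, otherwise the current
user itself. A small sketch of that selection as a helper; the class and
method names are illustrative, not part of the patch:

import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

final class UgiSelectionSketch {
  /** Resolve the UGI under which KMS connections should be made. */
  static UserGroupInformation resolveActualUgi() throws IOException {
    UserGroupInformation current = UserGroupInformation.getCurrentUser();
    if (current.getAuthenticationMethod()
        == UserGroupInformation.AuthenticationMethod.PROXY) {
      // A proxied user has no Kerberos credentials of its own; use the
      // real (login) user behind it.
      return current.getRealUser();
    }
    return current;
  }
}
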
@@ -406,7 +410,7 @@ public class KMSClientProvider extends KeyProvider 
implements CryptoExtension,
   ? currentUgi.getShortUserName() : null;
 
   // creating the HTTP connection using the current UGI at constructor time
-  conn = loginUgi.doAs(new PrivilegedExceptionAction<HttpURLConnection>() {
+  conn = actualUgi.doAs(new PrivilegedExceptionAction<HttpURLConnection>() 
{
 @Override
 public HttpURLConnection run() throws Exception {
   DelegationTokenAuthenticatedURL authUrl =
@@ -456,8 +460,6 @@ public class KMSClientProvider extends KeyProvider 
implements CryptoExtension,
   // WWW-Authenticate header as well)..
   KMSClientProvider.this.authToken =
   new DelegationTokenAuthenticatedURL.Token();
-  KMSClientProvider.this.loginUgi =
-  UserGroupInformation.getCurrentUser();
   if (authRetryCount > 0) {
 String contentType = conn.getRequestProperty(CONTENT_TYPE);
 String requestMethod = conn.getRequestMethod();
@@ -474,9 +476,6 @@ public class KMSClientProvider extends KeyProvider 
implements

[2/3] git commit: HADOOP-11176. KMSClientProvider authentication fails when both currentUgi and loginUgi are a proxied user. Contributed by Arun Suresh. (cherry picked from commit 0e57aa3bf68937473693

2014-10-13 Thread atm
HADOOP-11176. KMSClientProvider authentication fails when both currentUgi and 
loginUgi are a proxied user. Contributed by Arun Suresh.
(cherry picked from commit 0e57aa3bf689374736939300d8f3525ec38bead7)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f3132eee
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f3132eee
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f3132eee

Branch: refs/heads/branch-2
Commit: f3132eee1011b750158169c099b26ce8f6e2d1f4
Parents: 8845517
Author: Aaron T. Myers a...@apache.org
Authored: Mon Oct 13 18:09:39 2014 -0700
Committer: Aaron T. Myers a...@apache.org
Committed: Mon Oct 13 18:10:23 2014 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |   3 +
 .../crypto/key/kms/KMSClientProvider.java   |  15 +-
 .../hadoop/crypto/key/kms/server/TestKMS.java   | 154 +--
 3 files changed, 155 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f3132eee/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 4ad8f66..5e0fb55 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -486,6 +486,9 @@ Release 2.6.0 - UNRELEASED
 HADOOP-11193. Fix uninitialized variables in NativeIO.c
 (Xiaoyu Yao via wheat9)
 
+HADOOP-11176. KMSClientProvider authentication fails when both currentUgi
+and loginUgi are a proxied user. (Arun Suresh via atm)
+
 BREAKDOWN OF HDFS-6134 AND HADOOP-10150 SUBTASKS AND RELATED JIRAS
   
   HADOOP-10734. Implement high-performance secure random number sources.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f3132eee/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
index 537fd97..5c332a8 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
@@ -251,8 +251,8 @@ public class KMSClientProvider extends KeyProvider 
implements CryptoExtension,
   private SSLFactory sslFactory;
   private ConnectionConfigurator configurator;
   private DelegationTokenAuthenticatedURL.Token authToken;
-  private UserGroupInformation loginUgi;
   private final int authRetry;
+  private final UserGroupInformation actualUgi;
 
   @Override
   public String toString() {
@@ -336,7 +336,11 @@ public class KMSClientProvider extends KeyProvider 
implements CryptoExtension,
 KMS_CLIENT_ENC_KEY_CACHE_NUM_REFILL_THREADS_DEFAULT),
 new EncryptedQueueRefiller());
 authToken = new DelegationTokenAuthenticatedURL.Token();
-loginUgi = UserGroupInformation.getCurrentUser();
+actualUgi =
+(UserGroupInformation.getCurrentUser().getAuthenticationMethod() ==
+UserGroupInformation.AuthenticationMethod.PROXY) ? UserGroupInformation
+.getCurrentUser().getRealUser() : UserGroupInformation
+.getCurrentUser();
   }
 
   private String createServiceURL(URL url) throws IOException {
@@ -407,7 +411,7 @@ public class KMSClientProvider extends KeyProvider 
implements CryptoExtension,
   ? currentUgi.getShortUserName() : null;
 
   // creating the HTTP connection using the current UGI at constructor time
-  conn = loginUgi.doAs(new PrivilegedExceptionAction<HttpURLConnection>() {
+  conn = actualUgi.doAs(new PrivilegedExceptionAction<HttpURLConnection>() 
{
 @Override
 public HttpURLConnection run() throws Exception {
   DelegationTokenAuthenticatedURL authUrl =
@@ -457,8 +461,6 @@ public class KMSClientProvider extends KeyProvider 
implements CryptoExtension,
   // WWW-Authenticate header as well)..
   KMSClientProvider.this.authToken =
   new DelegationTokenAuthenticatedURL.Token();
-  KMSClientProvider.this.loginUgi =
-  UserGroupInformation.getCurrentUser();
   if (authRetryCount > 0) {
 String contentType = conn.getRequestProperty(CONTENT_TYPE);
 String requestMethod = conn.getRequestMethod();
@@ -475,9 +477,6 @@ public class KMSClientProvider extends KeyProvider 
implements CryptoExtension,
   // Ignore the AuthExceptions.. since we are just using

[1/3] git commit: HADOOP-11176. KMSClientProvider authentication fails when both currentUgi and loginUgi are a proxied user. Contributed by Arun Suresh.

2014-10-13 Thread atm
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 88455173e -> f3132eee1
  refs/heads/branch-2.6 e6102b182 -> 834533fdf
  refs/heads/trunk cc93e7e68 -> 0e57aa3bf


HADOOP-11176. KMSClientProvider authentication fails when both currentUgi and 
loginUgi are a proxied user. Contributed by Arun Suresh.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0e57aa3b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0e57aa3b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0e57aa3b

Branch: refs/heads/trunk
Commit: 0e57aa3bf689374736939300d8f3525ec38bead7
Parents: cc93e7e
Author: Aaron T. Myers a...@apache.org
Authored: Mon Oct 13 18:09:39 2014 -0700
Committer: Aaron T. Myers a...@apache.org
Committed: Mon Oct 13 18:09:39 2014 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |   3 +
 .../crypto/key/kms/KMSClientProvider.java   |  15 +-
 .../hadoop/crypto/key/kms/server/TestKMS.java   | 154 +--
 3 files changed, 155 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0e57aa3b/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index b308f4e..fcc5385 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -830,6 +830,9 @@ Release 2.6.0 - UNRELEASED
 HADOOP-11193. Fix uninitialized variables in NativeIO.c
 (Xiaoyu Yao via wheat9)
 
+HADOOP-11176. KMSClientProvider authentication fails when both currentUgi
+and loginUgi are a proxied user. (Arun Suresh via atm)
+
 BREAKDOWN OF HDFS-6134 AND HADOOP-10150 SUBTASKS AND RELATED JIRAS
   
   HADOOP-10734. Implement high-performance secure random number sources.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0e57aa3b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
index 441683a..4c24f58 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
@@ -251,8 +251,8 @@ public class KMSClientProvider extends KeyProvider 
implements CryptoExtension,
   private SSLFactory sslFactory;
   private ConnectionConfigurator configurator;
   private DelegationTokenAuthenticatedURL.Token authToken;
-  private UserGroupInformation loginUgi;
   private final int authRetry;
+  private final UserGroupInformation actualUgi;
 
   @Override
   public String toString() {
@@ -336,7 +336,11 @@ public class KMSClientProvider extends KeyProvider 
implements CryptoExtension,
 KMS_CLIENT_ENC_KEY_CACHE_NUM_REFILL_THREADS_DEFAULT),
 new EncryptedQueueRefiller());
 authToken = new DelegationTokenAuthenticatedURL.Token();
-loginUgi = UserGroupInformation.getCurrentUser();
+actualUgi =
+(UserGroupInformation.getCurrentUser().getAuthenticationMethod() ==
+UserGroupInformation.AuthenticationMethod.PROXY) ? UserGroupInformation
+.getCurrentUser().getRealUser() : UserGroupInformation
+.getCurrentUser();
   }
 
   private String createServiceURL(URL url) throws IOException {
@@ -407,7 +411,7 @@ public class KMSClientProvider extends KeyProvider 
implements CryptoExtension,
   ? currentUgi.getShortUserName() : null;
 
   // creating the HTTP connection using the current UGI at constructor time
-  conn = loginUgi.doAs(new PrivilegedExceptionAction<HttpURLConnection>() {
+  conn = actualUgi.doAs(new PrivilegedExceptionAction<HttpURLConnection>() 
{
 @Override
 public HttpURLConnection run() throws Exception {
   DelegationTokenAuthenticatedURL authUrl =
@@ -457,8 +461,6 @@ public class KMSClientProvider extends KeyProvider 
implements CryptoExtension,
   // WWW-Authenticate header as well)..
   KMSClientProvider.this.authToken =
   new DelegationTokenAuthenticatedURL.Token();
-  KMSClientProvider.this.loginUgi =
-  UserGroupInformation.getCurrentUser();
   if (authRetryCount > 0) {
 String contentType = conn.getRequestProperty(CONTENT_TYPE);
 String requestMethod = conn.getRequestMethod();
@@ -475,9 +477,6 @@ public class KMSClientProvider extends

[1/2] git commit: HDFS-7026. Introduce a string constant for Failed to obtain user group info.... Contributed by Yongjun Zhang.

2014-10-09 Thread atm
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 ad47a27db -> 8cbacb37e
  refs/heads/trunk e532ed8fa -> cbd21fd13


HDFS-7026. Introduce a string constant for "Failed to obtain user group 
info..." Contributed by Yongjun Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cbd21fd1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cbd21fd1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cbd21fd1

Branch: refs/heads/trunk
Commit: cbd21fd13b321d042faeff00fa71c9becc0d6087
Parents: e532ed8
Author: Aaron T. Myers a...@apache.org
Authored: Thu Oct 9 18:52:28 2014 -0700
Committer: Aaron T. Myers a...@apache.org
Committed: Thu Oct 9 18:52:48 2014 -0700

--
 .../src/main/java/org/apache/hadoop/security/SecurityUtil.java| 2 ++
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java   | 3 ++-
 .../java/org/apache/hadoop/hdfs/web/resources/UserProvider.java   | 3 ++-
 .../hdfs/server/namenode/ha/TestDelegationTokensWithHA.java   | 2 +-
 5 files changed, 10 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cbd21fd1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java
index b5bf26f..27870c3 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java
@@ -57,6 +57,8 @@ import com.google.common.annotations.VisibleForTesting;
 public class SecurityUtil {
   public static final Log LOG = LogFactory.getLog(SecurityUtil.class);
  public static final String HOSTNAME_PATTERN = "_HOST";
+  public static final String FAILED_TO_GET_UGI_MSG_HEADER = 
+  "Failed to obtain user group information:";
 
   // controls whether buildTokenService will use an ip or host/ip as given
   // by the user
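
A benefit of the new constant is that the code building the message and the
code matching on it can no longer drift apart. A hedged sketch of both sides
referencing it; the helper class and the text appended after the header are
illustrative, not part of the patch:

import org.apache.hadoop.security.SecurityUtil;

final class UgiMessageSketch {
  // Producer side: prefix the failure detail with the shared header.
  static String buildFailureMessage(Exception cause) {
    return SecurityUtil.FAILED_TO_GET_UGI_MSG_HEADER + " " + cause.getMessage();
  }

  // Consumer side: detect the message with the same constant instead of a
  // hard-coded literal, as the WebHdfsFileSystem hunk further down does.
  static boolean isUgiFailure(String remoteExceptionMessage) {
    return remoteExceptionMessage != null
        && remoteExceptionMessage.startsWith(
            SecurityUtil.FAILED_TO_GET_UGI_MSG_HEADER);
  }
}
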

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cbd21fd1/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 475d865..4757784 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -376,6 +376,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7202. Should be able to omit package name of SpanReceiver on hadoop
 trace -add (iwasakims via cmccabe)
 
+HDFS-7026. Introduce a string constant for "Failed to obtain user group
+info..." (Yongjun Zhang via atm)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cbd21fd1/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
index 40312ec..1c3c802 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
@@ -354,7 +354,8 @@ public class WebHdfsFileSystem extends FileSystem
   // extract UGI-related exceptions and unwrap InvalidToken
   // the NN mangles these exceptions but the DN does not and may need
   // to re-fetch a token if either report the token is expired
-  if (re.getMessage().startsWith("Failed to obtain user group 
information:")) {
+  if (re.getMessage().startsWith(
+  SecurityUtil.FAILED_TO_GET_UGI_MSG_HEADER)) {
 String[] parts = re.getMessage().split(":\\s+", 3);
 re = new RemoteException(parts[1], parts[2]);
 re = ((RemoteException)re).unwrapRemoteException(InvalidToken.class);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cbd21fd1/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserProvider.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserProvider.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserProvider.java
index 44e8938..32b3369 100644
--- 
a/hadoop-hdfs-project/hadoop

[2/3] git commit: HADOOP-11161. Expose close method in KeyProvider to give clients of Provider implementations a hook to release resources. Contributed by Arun Suresh.

2014-10-08 Thread atm
HADOOP-11161. Expose close method in KeyProvider to give clients of Provider 
implementations a hook to release resources. Contributed by Arun Suresh.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2a51494c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2a51494c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2a51494c

Branch: refs/heads/trunk
Commit: 2a51494ce1b05fc494fb3a818a7a3526f3f40070
Parents: d996235
Author: Aaron T. Myers a...@apache.org
Authored: Wed Oct 8 17:58:53 2014 -0700
Committer: Aaron T. Myers a...@apache.org
Committed: Wed Oct 8 18:01:51 2014 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt   |  3 +++
 .../org/apache/hadoop/crypto/key/KeyProvider.java |  8 
 .../crypto/key/KeyProviderCryptoExtension.java|  9 -
 .../hadoop/crypto/key/kms/KMSClientProvider.java  | 11 +++
 .../apache/hadoop/crypto/key/kms/ValueQueue.java  | 14 +++---
 .../java/org/apache/hadoop/hdfs/DFSClient.java| 18 --
 6 files changed, 53 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2a51494c/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 404d978..0f40caf 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -811,6 +811,9 @@ Release 2.6.0 - UNRELEASED
 HADOOP-10404. Some accesses to DomainSocketWatcher#closed are not protected
 by the lock (cmccabe)
 
+HADOOP-11161. Expose close method in KeyProvider to give clients of
+Provider implementations a hook to release resources. (Arun Suresh via atm)
+
 BREAKDOWN OF HDFS-6134 AND HADOOP-10150 SUBTASKS AND RELATED JIRAS
   
   HADOOP-10734. Implement high-performance secure random number sources.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2a51494c/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java
index 36ccbad..dd2d5b9 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java
@@ -534,6 +534,14 @@ public abstract class KeyProvider {
 ) throws IOException;
 
   /**
+   * Can be used by implementing classes to close any resources
+   * that require closing
+   */
+  public void close() throws IOException {
+// NOP
+  }
+
+  /**
* Roll a new version of the given key generating the material for it.
* <p/>
* This implementation generates the key material and calls the

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2a51494c/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
index 968e341..7e95211 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
@@ -408,6 +408,13 @@ public class KeyProviderCryptoExtension extends
  ? (CryptoExtension) keyProvider
  : new DefaultCryptoExtension(keyProvider);
 return new KeyProviderCryptoExtension(keyProvider, cryptoExtension);
-  }  
+  }
+
+  @Override
+  public void close() throws IOException {
+if (getKeyProvider() != null) {
+  getKeyProvider().close();
+}
+  }
 
 }
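
With close() available on the provider chain, a caller that builds a provider
for a one-off operation can release its resources deterministically. A
minimal hedged sketch; the KMS URI and key name are placeholders:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.crypto.key.kms.KMSClientProvider;

final class ProviderCloseSketch {
  static void createOneKey() throws Exception {
    Configuration conf = new Configuration();
    KeyProvider provider =
        new KMSClientProvider(new URI("kms://http@kms-host:16000/kms"), conf);
    try {
      provider.createKey("demo-key", new KeyProvider.Options(conf));
    } finally {
      // New in this change: lets implementations such as the KMS client
      // shut down background resources instead of leaking them.
      provider.close();
    }
  }
}
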

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2a51494c/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
index d6abcbb..4f4e843 100644
--- 
a/hadoop-common-project

[3/3] git commit: HADOOP-11161. Expose close method in KeyProvider to give clients of Provider implementations a hook to release resources. Contributed by Arun Suresh. (cherry picked from commit d9556e

2014-10-08 Thread atm
HADOOP-11161. Expose close method in KeyProvider to give clients of Provider 
implementations a hook to release resources. Contributed by Arun Suresh.
(cherry picked from commit d9556e873ef4d3e68c4f0c991f856d1faa747f07)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/afaadd65
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/afaadd65
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/afaadd65

Branch: refs/heads/branch-2
Commit: afaadd65359ba54be38a118bfb5dcf4174416a27
Parents: f1feaae
Author: Aaron T. Myers a...@apache.org
Authored: Wed Oct 8 17:58:53 2014 -0700
Committer: Aaron T. Myers a...@apache.org
Committed: Wed Oct 8 18:02:00 2014 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt   |  3 +++
 .../org/apache/hadoop/crypto/key/KeyProvider.java |  8 
 .../crypto/key/KeyProviderCryptoExtension.java|  7 +++
 .../hadoop/crypto/key/kms/KMSClientProvider.java  | 11 +++
 .../apache/hadoop/crypto/key/kms/ValueQueue.java  | 14 +++---
 .../java/org/apache/hadoop/hdfs/DFSClient.java| 18 --
 6 files changed, 52 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/afaadd65/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index fe15cf5..819df99 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -470,6 +470,9 @@ Release 2.6.0 - UNRELEASED
 HADOOP-10404. Some accesses to DomainSocketWatcher#closed are not protected
 by the lock (cmccabe)
 
+HADOOP-11161. Expose close method in KeyProvider to give clients of
+Provider implementations a hook to release resources. (Arun Suresh via atm)
+
 BREAKDOWN OF HDFS-6134 AND HADOOP-10150 SUBTASKS AND RELATED JIRAS
   
   HADOOP-10734. Implement high-performance secure random number sources.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/afaadd65/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java
index a8b9414..9dd1d47 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java
@@ -534,6 +534,14 @@ public abstract class KeyProvider {
 ) throws IOException;
 
   /**
+   * Can be used by implementing classes to close any resources
+   * that require closing
+   */
+  public void close() throws IOException {
+// NOP
+  }
+
+  /**
* Roll a new version of the given key generating the material for it.
* <p/>
* This implementation generates the key material and calls the

http://git-wip-us.apache.org/repos/asf/hadoop/blob/afaadd65/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
index f800689..73c9885 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
@@ -410,4 +410,11 @@ public class KeyProviderCryptoExtension extends
 return new KeyProviderCryptoExtension(keyProvider, cryptoExtension);
   }
 
+  @Override
+  public void close() throws IOException {
+if (getKeyProvider() != null) {
+  getKeyProvider().close();
+}
+  }
+
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/afaadd65/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
index 5b7f109..c4c7e0c 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms

[1/3] git commit: HADOOP-11161. Expose close method in KeyProvider to give clients of Provider implementations a hook to release resources. Contributed by Arun Suresh. (cherry picked from commit d9556e

2014-10-08 Thread atm
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 f1feaae1d -> afaadd653
  refs/heads/branch-2.6 ab448565c -> f86c9c6c7
  refs/heads/trunk d99623528 -> 2a51494ce


HADOOP-11161. Expose close method in KeyProvider to give clients of Provider 
implementations a hook to release resources. Contributed by Arun Suresh.
(cherry picked from commit d9556e873ef4d3e68c4f0c991f856d1faa747f07)
(cherry picked from commit 3a2565c7be80cf6e9cdfec0f5460ed8ed2252768)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f86c9c6c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f86c9c6c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f86c9c6c

Branch: refs/heads/branch-2.6
Commit: f86c9c6c710c9460098b6919a39a287abecd2721
Parents: ab44856
Author: Aaron T. Myers a...@apache.org
Authored: Wed Oct 8 17:58:53 2014 -0700
Committer: Aaron T. Myers a...@apache.org
Committed: Wed Oct 8 18:00:37 2014 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt   |  3 +++
 .../org/apache/hadoop/crypto/key/KeyProvider.java |  8 
 .../crypto/key/KeyProviderCryptoExtension.java|  7 +++
 .../hadoop/crypto/key/kms/KMSClientProvider.java  | 11 +++
 .../apache/hadoop/crypto/key/kms/ValueQueue.java  | 14 +++---
 .../java/org/apache/hadoop/hdfs/DFSClient.java| 18 --
 6 files changed, 52 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f86c9c6c/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index e15a185..5136644 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -450,6 +450,9 @@ Release 2.6.0 - UNRELEASED
 HADOOP-10404. Some accesses to DomainSocketWatcher#closed are not protected
 by lock (cmccabe)
 
+HADOOP-11161. Expose close method in KeyProvider to give clients of
+Provider implementations a hook to release resources. (Arun Suresh via atm)
+
 BREAKDOWN OF HDFS-6134 AND HADOOP-10150 SUBTASKS AND RELATED JIRAS
   
   HADOOP-10734. Implement high-performance secure random number sources.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f86c9c6c/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java
index a8b9414..9dd1d47 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java
@@ -534,6 +534,14 @@ public abstract class KeyProvider {
 ) throws IOException;
 
   /**
+   * Can be used by implementing classes to close any resources
+   * that require closing
+   */
+  public void close() throws IOException {
+// NOP
+  }
+
+  /**
* Roll a new version of the given key generating the material for it.
* <p/>
* This implementation generates the key material and calls the

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f86c9c6c/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
index f800689..73c9885 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
@@ -410,4 +410,11 @@ public class KeyProviderCryptoExtension extends
 return new KeyProviderCryptoExtension(keyProvider, cryptoExtension);
   }
 
+  @Override
+  public void close() throws IOException {
+if (getKeyProvider() != null) {
+  getKeyProvider().close();
+}
+  }
+
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f86c9c6c/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java

git commit: HADOOP-11109. Site build is broken. Contributed by Jian He.

2014-09-18 Thread atm
Repository: hadoop
Updated Branches:
  refs/heads/trunk 8e7308449 -> 643457229


HADOOP-11109. Site build is broken. Contributed by Jian He.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/64345722
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/64345722
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/64345722

Branch: refs/heads/trunk
Commit: 64345722975a671869fcfd66a7263f831b36d068
Parents: 8e73084
Author: Aaron T. Myers a...@apache.org
Authored: Thu Sep 18 17:59:36 2014 -0700
Committer: Aaron T. Myers a...@apache.org
Committed: Thu Sep 18 18:00:39 2014 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt| 4 +++-
 hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm | 2 +-
 2 files changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/64345722/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index f21771b..90053fc 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -834,7 +834,9 @@ Release 2.6.0 - UNRELEASED
 HADOOP-11105. MetricsSystemImpl could leak memory in registered callbacks.
 (Chuan Liu via cnauroth)
 
-KMS: Support for multiple Kerberos principals. (tucu)
+HADOOP-10982. KMS: Support for multiple Kerberos principals. (tucu)
+
+HADOOP-11109. Site build is broken. (Jian He via atm)
 
 Release 2.5.1 - 2014-09-05
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/64345722/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
--
diff --git a/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm 
b/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
index cf7a557..e32893b 100644
--- a/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
+++ b/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
@@ -626,7 +626,7 @@ $ keytool -genkey -alias tomcat -keyalg RSA
 
   NOTE: If using HTTPS, the SSL certificate used by the KMS instance must
   be configured to support multiple hostnames (see Java 7
-  keytool SAN extension support for details on how to do this).
+  keytool SAN extension support for details on how to do this).
 
 *** HTTP Authentication Signature
 



git commit: HADOOP-11109. Site build is broken. Contributed by Jian He. (cherry picked from commit 0e2b64f2029cabbbf05a132625244427f8bf9518)

2014-09-18 Thread atm
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 ef693b541 -> 71e6a4a73


HADOOP-11109. Site build is broken. Contributed by Jian He.
(cherry picked from commit 0e2b64f2029cabbbf05a132625244427f8bf9518)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/71e6a4a7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/71e6a4a7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/71e6a4a7

Branch: refs/heads/branch-2
Commit: 71e6a4a735222c25bd0be7f6811863613ece3114
Parents: ef693b5
Author: Aaron T. Myers a...@apache.org
Authored: Thu Sep 18 17:59:36 2014 -0700
Committer: Aaron T. Myers a...@apache.org
Committed: Thu Sep 18 18:00:07 2014 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt| 4 +++-
 hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm | 2 +-
 2 files changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/71e6a4a7/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index b325980..a8e8b6d 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -499,7 +499,9 @@ Release 2.6.0 - UNRELEASED
 HADOOP-11105. MetricsSystemImpl could leak memory in registered callbacks.
 (Chuan Liu via cnauroth)
 
-KMS: Support for multiple Kerberos principals. (tucu)
+HADOOP-10982. KMS: Support for multiple Kerberos principals. (tucu)
+
+HADOOP-11109. Site build is broken. (Jian He via atm)
 
 Release 2.5.1 - 2014-09-05
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/71e6a4a7/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
--
diff --git a/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm 
b/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
index 5ab0bbe..2e8405f 100644
--- a/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
+++ b/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
@@ -626,7 +626,7 @@ $ keytool -genkey -alias tomcat -keyalg RSA
 
   NOTE: If using HTTPS, the SSL certificate used by the KMS instance must
   be configured to support multiple hostnames (see Java 7
-  keytool SAN extension support for details on how to do this).
+  keytool SAN extension support for details on how to do this).
 
 *** HTTP Authentication Signature
 



[2/2] git commit: HADOOP-10400. Incorporate new S3A FileSystem implementation. Contributed by Jordan Mendelson and Dave Wang.

2014-09-15 Thread atm
HADOOP-10400. Incorporate new S3A FileSystem implementation. Contributed by 
Jordan Mendelson and Dave Wang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/24d920b8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/24d920b8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/24d920b8

Branch: refs/heads/trunk
Commit: 24d920b80eb3626073925a1d0b6dcf148add8cc0
Parents: fc741b5
Author: Aaron T. Myers a...@apache.org
Authored: Mon Sep 15 08:27:07 2014 -0700
Committer: Aaron T. Myers a...@apache.org
Committed: Mon Sep 15 08:27:07 2014 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |3 +
 .../src/main/conf/log4j.properties  |5 +
 .../src/main/resources/core-default.xml |   86 ++
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml  |8 +
 hadoop-project/pom.xml  |   26 +-
 hadoop-tools/hadoop-aws/pom.xml |   10 +
 .../fs/s3a/AnonymousAWSCredentialsProvider.java |   37 +
 .../fs/s3a/BasicAWSCredentialsProvider.java |   51 +
 .../org/apache/hadoop/fs/s3a/Constants.java |   90 ++
 .../org/apache/hadoop/fs/s3a/S3AFileStatus.java |   62 ++
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java | 1019 ++
 .../apache/hadoop/fs/s3a/S3AInputStream.java|  207 
 .../apache/hadoop/fs/s3a/S3AOutputStream.java   |  208 
 .../services/org.apache.hadoop.fs.FileSystem|1 +
 .../hadoop/fs/contract/s3a/S3AContract.java |   43 +
 .../fs/contract/s3a/TestS3AContractCreate.java  |   38 +
 .../fs/contract/s3a/TestS3AContractDelete.java  |   31 +
 .../fs/contract/s3a/TestS3AContractMkdir.java   |   34 +
 .../fs/contract/s3a/TestS3AContractOpen.java|   31 +
 .../fs/contract/s3a/TestS3AContractRename.java  |   64 ++
 .../fs/contract/s3a/TestS3AContractRootDir.java |   35 +
 .../fs/contract/s3a/TestS3AContractSeek.java|   31 +
 .../fs/s3a/S3AFileSystemContractBaseTest.java   |  327 ++
 .../src/test/resources/contract/s3a.xml |  105 ++
 .../src/test/resources/contract/s3n.xml |7 +-
 hadoop-tools/hadoop-azure/pom.xml   |   10 +-
 26 files changed, 2552 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/24d920b8/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 051eac1..c2ae5ed 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -342,6 +342,9 @@ Release 2.6.0 - UNRELEASED
 HADOOP-10893. isolated classloader on the client side (Sangjin Lee via
 jlowe)
 
+HADOOP-10400. Incorporate new S3A FileSystem implementation. (Jordan
+Mendelson and Dave Wang via atm)
+
   IMPROVEMENTS
 
 HADOOP-10808. Remove unused native code for munlock. (cnauroth)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/24d920b8/hadoop-common-project/hadoop-common/src/main/conf/log4j.properties
--
diff --git a/hadoop-common-project/hadoop-common/src/main/conf/log4j.properties 
b/hadoop-common-project/hadoop-common/src/main/conf/log4j.properties
index ef9acbf..5fa21fa 100644
--- a/hadoop-common-project/hadoop-common/src/main/conf/log4j.properties
+++ b/hadoop-common-project/hadoop-common/src/main/conf/log4j.properties
@@ -174,6 +174,11 @@ 
log4j.appender.MRAUDIT.MaxBackupIndex=${mapred.audit.log.maxbackupindex}
 # Jets3t library
 log4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR
 
+# AWS SDK & S3A FileSystem
+log4j.logger.com.amazonaws=ERROR
+log4j.logger.com.amazonaws.http.AmazonHttpClient=ERROR
+log4j.logger.org.apache.hadoop.fs.s3a.S3AFileSystem=WARN
+
 #
 # Event Counter Appender
 # Sends counts of logging messages at different severity levels to Hadoop 
Metrics.
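
For orientation, a hedged sketch of how a client picks up the new filesystem
once credentials are configured; the bucket name and key values are
placeholders, and the property names are the ones documented in the
core-default.xml hunk that follows:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

final class S3AUsageSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.s3a.access.key", "YOUR_ACCESS_KEY");
    conf.set("fs.s3a.secret.key", "YOUR_SECRET_KEY");

    // s3a:// URIs resolve to S3AFileSystem through the FileSystem service
    // file listed in the diffstat above.
    FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
    for (FileStatus st : fs.listStatus(new Path("/"))) {
      System.out.println(st.getPath());
    }
    fs.close();
  }
}
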

http://git-wip-us.apache.org/repos/asf/hadoop/blob/24d920b8/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml 
b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
index 3cc7545..828dec2 100644
--- a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
+++ b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
@@ -690,6 +690,92 @@ for ldap providers in the same way as above does.
 </property>
 
 <property>
+  <name>fs.s3a.access.key</name>
+  <description>AWS access key ID. Omit for Role-based 
authentication.</description>
+</property>
+
+<property>
+  <name>fs.s3a.secret.key</name>
+  <description>AWS secret key. Omit for Role

[1/2] HADOOP-10400. Incorporate new S3A FileSystem implementation. Contributed by Jordan Mendelson and Dave Wang.

2014-09-15 Thread atm
Repository: hadoop
Updated Branches:
  refs/heads/trunk fc741b5d7 -> 24d920b80


http://git-wip-us.apache.org/repos/asf/hadoop/blob/24d920b8/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractSeek.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractSeek.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractSeek.java
new file mode 100644
index 000..d677ec4
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractSeek.java
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.contract.s3a;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.contract.AbstractContractSeekTest;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+
+public class TestS3AContractSeek extends AbstractContractSeekTest {
+
+  @Override
+  protected AbstractFSContract createContract(Configuration conf) {
+return new S3AContract(conf);
+  }
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/24d920b8/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3AFileSystemContractBaseTest.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3AFileSystemContractBaseTest.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3AFileSystemContractBaseTest.java
new file mode 100644
index 000..8455233
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3AFileSystemContractBaseTest.java
@@ -0,0 +1,327 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import static org.junit.Assume.*;
+
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystemContractBaseTest;
+import org.apache.hadoop.fs.Path;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.net.URI;
+import java.util.UUID;
+
+/**
+ *  Tests a live S3 system. If your keys and bucket aren't specified, all tests
+ *  are marked as passed.
+ *  <p/>
+ *  This uses BlockJUnit4ClassRunner because FileSystemContractBaseTest extends
+ *  TestCase, which uses the old JUnit 3 runner that doesn't ignore assumptions
+ *  properly, making it impossible to skip the tests if we don't have a valid
+ *  bucket.
+ **/
+public class S3AFileSystemContractBaseTest extends FileSystemContractBaseTest {
+  private static final int TEST_BUFFER_SIZE = 128;
+  private static final int MODULUS = 128;
+
+  protected static final Logger LOG = 
LoggerFactory.getLogger(S3AFileSystemContractBaseTest.class);
+
+  @Override
+  public void setUp() throws Exception {
+    Configuration conf = new Configuration();
+
+    URI testURI = URI.create(conf.get("test.fs.s3a.name"));
+
+    boolean liveTest = testURI != null && !testURI.equals("s3a:///");
+
+    // This doesn't work with our JUnit 3 style test cases, so instead we'll
+    // make this whole class not run by default
+    assumeTrue(liveTest);
+
+    fs = new S3AFileSystem();
+    fs.initialize(testURI, conf);

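A small sketch of how the liveTest guard in setUp() above is typically enabled: test.fs.s3a.name has to point at a real bucket, otherwise assumeTrue(false) skips the whole class. The bucket URI below is hypothetical; in practice the property would come from a test resource file rather than being set in code.

  import java.net.URI;
  import org.apache.hadoop.conf.Configuration;

  public class LiveTestGuardSketch {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      conf.set("test.fs.s3a.name", "s3a://my-test-bucket/");  // hypothetical bucket

      URI testURI = URI.create(conf.get("test.fs.s3a.name", "s3a:///"));
      // Same idea as the guard in setUp(): run only when a real bucket is configured.
      boolean liveTest = !testURI.equals(URI.create("s3a:///"));
      System.out.println("live S3A contract tests enabled: " + liveTest);
    }
  }
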
[1/2] HADOOP-10400. Incorporate new S3A FileSystem implementation. Contributed by Jordan Mendelson and Dave Wang.

2014-09-15 Thread atm
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 dd3e28d43 - a0c54aeb0


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a0c54aeb/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractSeek.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractSeek.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractSeek.java
new file mode 100644
index 000..d677ec4
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractSeek.java
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.contract.s3a;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.contract.AbstractContractSeekTest;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+
+public class TestS3AContractSeek extends AbstractContractSeekTest {
+
+  @Override
+  protected AbstractFSContract createContract(Configuration conf) {
+return new S3AContract(conf);
+  }
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a0c54aeb/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3AFileSystemContractBaseTest.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3AFileSystemContractBaseTest.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3AFileSystemContractBaseTest.java
new file mode 100644
index 000..8455233
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3AFileSystemContractBaseTest.java
@@ -0,0 +1,327 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import static org.junit.Assume.*;
+
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystemContractBaseTest;
+import org.apache.hadoop.fs.Path;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.net.URI;
+import java.util.UUID;
+
+/**
+ *  Tests a live S3 system. If your keys and bucket aren't specified, all tests
+ *  are marked as passed.
+ *  <p/>
+ *  This uses BlockJUnit4ClassRunner because FileSystemContractBaseTest extends
+ *  TestCase, which uses the old JUnit 3 runner that doesn't ignore assumptions
+ *  properly, making it impossible to skip the tests if we don't have a valid
+ *  bucket.
+ **/
+public class S3AFileSystemContractBaseTest extends FileSystemContractBaseTest {
+  private static final int TEST_BUFFER_SIZE = 128;
+  private static final int MODULUS = 128;
+
+  protected static final Logger LOG = 
LoggerFactory.getLogger(S3AFileSystemContractBaseTest.class);
+
+  @Override
+  public void setUp() throws Exception {
+    Configuration conf = new Configuration();
+
+    URI testURI = URI.create(conf.get("test.fs.s3a.name"));
+
+    boolean liveTest = testURI != null && !testURI.equals("s3a:///");
+
+    // This doesn't work with our JUnit 3 style test cases, so instead we'll
+    // make this whole class not run by default
+    assumeTrue(liveTest);
+
+    fs = new S3AFileSystem();
+    fs.initialize(testURI, conf);

[2/2] git commit: HADOOP-10400. Incorporate new S3A FileSystem implementation. Contributed by Jordan Mendelson and Dave Wang.

2014-09-15 Thread atm
HADOOP-10400. Incorporate new S3A FileSystem implementation. Contributed by 
Jordan Mendelson and Dave Wang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a0c54aeb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a0c54aeb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a0c54aeb

Branch: refs/heads/branch-2
Commit: a0c54aeb00c0bc38f7dfa3615ce6866023d1ef74
Parents: dd3e28d
Author: Aaron T. Myers a...@apache.org
Authored: Mon Sep 15 08:30:42 2014 -0700
Committer: Aaron T. Myers a...@apache.org
Committed: Mon Sep 15 08:30:42 2014 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |3 +
 .../src/main/conf/log4j.properties  |5 +
 .../src/main/resources/core-default.xml |   86 ++
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml  |8 +
 hadoop-project/pom.xml  |   26 +-
 hadoop-tools/hadoop-aws/pom.xml |   10 +
 .../fs/s3a/AnonymousAWSCredentialsProvider.java |   37 +
 .../fs/s3a/BasicAWSCredentialsProvider.java |   51 +
 .../org/apache/hadoop/fs/s3a/Constants.java |   90 ++
 .../org/apache/hadoop/fs/s3a/S3AFileStatus.java |   62 ++
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java | 1019 ++
 .../apache/hadoop/fs/s3a/S3AInputStream.java|  207 
 .../apache/hadoop/fs/s3a/S3AOutputStream.java   |  208 
 .../services/org.apache.hadoop.fs.FileSystem|1 +
 .../hadoop/fs/contract/s3a/S3AContract.java |   43 +
 .../fs/contract/s3a/TestS3AContractCreate.java  |   38 +
 .../fs/contract/s3a/TestS3AContractDelete.java  |   31 +
 .../fs/contract/s3a/TestS3AContractMkdir.java   |   34 +
 .../fs/contract/s3a/TestS3AContractOpen.java|   31 +
 .../fs/contract/s3a/TestS3AContractRename.java  |   64 ++
 .../fs/contract/s3a/TestS3AContractRootDir.java |   35 +
 .../fs/contract/s3a/TestS3AContractSeek.java|   31 +
 .../fs/s3a/S3AFileSystemContractBaseTest.java   |  327 ++
 .../src/test/resources/contract/s3a.xml |  105 ++
 .../src/test/resources/contract/s3n.xml |7 +-
 25 files changed, 2550 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a0c54aeb/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 2dababb..e3dd7d1 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -11,6 +11,9 @@ Release 2.6.0 - UNRELEASED
 HADOOP-10893. isolated classloader on the client side (Sangjin Lee via
 jlowe)
 
+HADOOP-10400. Incorporate new S3A FileSystem implementation. (Jordan
+Mendelson and Dave Wang via atm)
+
   IMPROVEMENTS
 
 HADOOP-10808. Remove unused native code for munlock. (cnauroth)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a0c54aeb/hadoop-common-project/hadoop-common/src/main/conf/log4j.properties
--
diff --git a/hadoop-common-project/hadoop-common/src/main/conf/log4j.properties 
b/hadoop-common-project/hadoop-common/src/main/conf/log4j.properties
index ef9acbf..5fa21fa 100644
--- a/hadoop-common-project/hadoop-common/src/main/conf/log4j.properties
+++ b/hadoop-common-project/hadoop-common/src/main/conf/log4j.properties
@@ -174,6 +174,11 @@ 
log4j.appender.MRAUDIT.MaxBackupIndex=${mapred.audit.log.maxbackupindex}
 # Jets3t library
 log4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR
 
+# AWS SDK & S3A FileSystem
+log4j.logger.com.amazonaws=ERROR
+log4j.logger.com.amazonaws.http.AmazonHttpClient=ERROR
+log4j.logger.org.apache.hadoop.fs.s3a.S3AFileSystem=WARN
+
 #
 # Event Counter Appender
 # Sends counts of logging messages at different severity levels to Hadoop 
Metrics.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a0c54aeb/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml 
b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
index ee3cbf0..cd953e3 100644
--- a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
+++ b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
@@ -682,6 +682,92 @@ for ldap providers in the same way as above does.
 </property>
 
 <property>
+  <name>fs.s3a.access.key</name>
+  <description>AWS access key ID. Omit for Role-based authentication.</description>
+</property>
+
+<property>
+  <name>fs.s3a.secret.key</name>
+  <description>AWS secret key. Omit for Role-based authentication.</description>
+</property>
+
+<property>

git commit: HDFS-6774. Make FsDataset and DataStore support removing volumes. Contributed by Lei Xu.

2014-08-29 Thread atm
Repository: hadoop
Updated Branches:
  refs/heads/trunk 15366d922 - 7eab2a29a


HDFS-6774. Make FsDataset and DataStore support removing volumes. Contributed 
by Lei Xu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7eab2a29
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7eab2a29
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7eab2a29

Branch: refs/heads/trunk
Commit: 7eab2a29a5706ce10912c12fa225ef6b27a82cbe
Parents: 15366d9
Author: Aaron T. Myers a...@apache.org
Authored: Fri Aug 29 12:59:23 2014 -0700
Committer: Aaron T. Myers a...@apache.org
Committed: Fri Aug 29 13:00:17 2014 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../server/datanode/BlockPoolSliceStorage.java  | 14 +++
 .../hdfs/server/datanode/DataStorage.java   | 27 ++
 .../server/datanode/fsdataset/FsDatasetSpi.java |  3 +
 .../datanode/fsdataset/impl/BlockPoolSlice.java |  2 +-
 .../impl/FsDatasetAsyncDiskService.java | 18 
 .../datanode/fsdataset/impl/FsDatasetImpl.java  | 69 +++
 .../datanode/fsdataset/impl/FsVolumeList.java   | 19 
 .../server/datanode/SimulatedFSDataset.java |  5 ++
 .../fsdataset/impl/TestFsDatasetImpl.java   | 92 ++--
 10 files changed, 245 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7eab2a29/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 957034b..88b19d8 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -427,6 +427,9 @@ Release 2.6.0 - UNRELEASED
 HDFS-6879. Adding tracing to Hadoop RPC (Masatake Iwasaki via Colin Patrick
 McCabe)
 
+HDFS-6774. Make FsDataset and DataStore support removing volumes. (Lei Xu
+via atm)
+
   OPTIMIZATIONS
 
 HDFS-6690. Deduplicate xattr names in memory. (wang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7eab2a29/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
index 88f858b..b7f688d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
@@ -202,6 +202,20 @@ public class BlockPoolSliceStorage extends Storage {
   }
 
   /**
+   * Remove storage directories.
+   * @param storageDirs a set of storage directories to be removed.
+   */
+  void removeVolumes(Set<File> storageDirs) {
+    for (Iterator<StorageDirectory> it = this.storageDirs.iterator();
+         it.hasNext(); ) {
+      StorageDirectory sd = it.next();
+      if (storageDirs.contains(sd.getRoot())) {
+        it.remove();
+      }
+    }
+  }
+
+  /**
* Set layoutVersion, namespaceID and blockpoolID into block pool storage
* VERSION file
*/

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7eab2a29/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
index 4b9656e..ceb2aa0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
@@ -337,6 +337,33 @@ public class DataStorage extends Storage {
   }
 
   /**
+   * Remove volumes from DataStorage.
+   * @param locations a collection of volumes.
+   */
+  synchronized void removeVolumes(Collection<StorageLocation> locations) {
+    if (locations.isEmpty()) {
+      return;
+    }
+
+    Set<File> dataDirs = new HashSet<File>();
+    for (StorageLocation sl : locations) {
+      dataDirs.add(sl.getFile());
+    }
+
+    for (BlockPoolSliceStorage bpsStorage : this.bpStorageMap.values()) {
+      bpsStorage.removeVolumes(dataDirs);
+    }
+
+    for (Iterator<StorageDirectory> it = this.storageDirs.iterator();
+         it.hasNext(); ) {
+      StorageDirectory sd = it.next();
+      if (dataDirs.contains(sd.getRoot

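The removeVolumes() methods above both use the same pruning idiom: collect the roots to drop into a Set<File>, then walk the storage directory list with an iterator and remove matches. A standalone sketch of that idiom with made-up directory paths (the real methods are package-private and are driven by the DataNode volume-reconfiguration path, not called directly like this):

  import java.io.File;
  import java.util.ArrayList;
  import java.util.Arrays;
  import java.util.HashSet;
  import java.util.Iterator;
  import java.util.List;
  import java.util.Set;

  public class RemoveVolumesSketch {
    public static void main(String[] args) {
      // Hypothetical volume roots currently in use.
      List<File> storageDirs = new ArrayList<>(Arrays.asList(
          new File("/data/1/dfs/dn"), new File("/data/2/dfs/dn")));

      // Roots the operator asked to remove.
      Set<File> toRemove = new HashSet<>();
      toRemove.add(new File("/data/2/dfs/dn"));

      for (Iterator<File> it = storageDirs.iterator(); it.hasNext(); ) {
        if (toRemove.contains(it.next())) {
          it.remove();  // same iterator-removal idiom as removeVolumes() above
        }
      }
      System.out.println("remaining volumes: " + storageDirs);
    }
  }
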
git commit: HDFS-6774. Make FsDataset and DataStore support removing volumes. Contributed by Lei Xu. (cherry picked from commit 7eab2a29a5706ce10912c12fa225ef6b27a82cbe)

2014-08-29 Thread atm
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 27086f594 - 135315b66


HDFS-6774. Make FsDataset and DataStore support removing volumes. Contributed 
by Lei Xu.
(cherry picked from commit 7eab2a29a5706ce10912c12fa225ef6b27a82cbe)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/135315b6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/135315b6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/135315b6

Branch: refs/heads/branch-2
Commit: 135315b66fba5d248a983ad5d05d7ab7da42b5fb
Parents: 27086f5
Author: Aaron T. Myers a...@apache.org
Authored: Fri Aug 29 12:59:23 2014 -0700
Committer: Aaron T. Myers a...@apache.org
Committed: Fri Aug 29 13:00:36 2014 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../server/datanode/BlockPoolSliceStorage.java  | 14 +++
 .../hdfs/server/datanode/DataStorage.java   | 27 ++
 .../server/datanode/fsdataset/FsDatasetSpi.java |  3 +
 .../datanode/fsdataset/impl/BlockPoolSlice.java |  2 +-
 .../impl/FsDatasetAsyncDiskService.java | 18 
 .../datanode/fsdataset/impl/FsDatasetImpl.java  | 69 +++
 .../datanode/fsdataset/impl/FsVolumeList.java   | 19 
 .../server/datanode/SimulatedFSDataset.java |  5 ++
 .../fsdataset/impl/TestFsDatasetImpl.java   | 92 ++--
 10 files changed, 245 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/135315b6/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 95feb33..5414aea 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -168,6 +168,9 @@ Release 2.6.0 - UNRELEASED
 HDFS-6879. Adding tracing to Hadoop RPC (Masatake Iwasaki via Colin Patrick
 McCabe)
 
+HDFS-6774. Make FsDataset and DataStore support removing volumes. (Lei Xu
+via atm)
+
   OPTIMIZATIONS
 
 HDFS-6690. Deduplicate xattr names in memory. (wang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/135315b6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
index bcee1df..45ca0be 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
@@ -202,6 +202,20 @@ public class BlockPoolSliceStorage extends Storage {
   }
 
   /**
+   * Remove storage directories.
+   * @param storageDirs a set of storage directories to be removed.
+   */
+  void removeVolumes(Set<File> storageDirs) {
+    for (Iterator<StorageDirectory> it = this.storageDirs.iterator();
+         it.hasNext(); ) {
+      StorageDirectory sd = it.next();
+      if (storageDirs.contains(sd.getRoot())) {
+        it.remove();
+      }
+    }
+  }
+
+  /**
* Set layoutVersion, namespaceID and blockpoolID into block pool storage
* VERSION file
*/

http://git-wip-us.apache.org/repos/asf/hadoop/blob/135315b6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
index 29616e7..9929199 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
@@ -313,6 +313,33 @@ public class DataStorage extends Storage {
   }
 
   /**
+   * Remove volumes from DataStorage.
+   * @param locations a collection of volumes.
+   */
+  synchronized void removeVolumes(Collection<StorageLocation> locations) {
+    if (locations.isEmpty()) {
+      return;
+    }
+
+    Set<File> dataDirs = new HashSet<File>();
+    for (StorageLocation sl : locations) {
+      dataDirs.add(sl.getFile());
+    }
+
+    for (BlockPoolSliceStorage bpsStorage : this.bpStorageMap.values()) {
+      bpsStorage.removeVolumes(dataDirs);
+    }
+
+    for (Iterator<StorageDirectory> it = this.storageDirs.iterator();
+         it.hasNext

svn commit: r1611489 - in /hadoop/common/trunk/hadoop-common-project/hadoop-common: CHANGES.txt src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java

2014-07-17 Thread atm
Author: atm
Date: Thu Jul 17 21:56:22 2014
New Revision: 1611489

URL: http://svn.apache.org/r1611489
Log:
HADOOP-10610. Upgrade S3n s3.fs.buffer.dir to support multi directories. 
Contributed by Ted Malaska.

Modified:
hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java

Modified: hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1611489r1=1611488r2=1611489view=diff
==
--- hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt Thu Jul 
17 21:56:22 2014
@@ -423,6 +423,9 @@ Release 2.6.0 - UNRELEASED
 HADOOP-10733. Fix potential null dereference in CredShell. (Ted Yu via
 omalley)
 
+HADOOP-10610. Upgrade S3n s3.fs.buffer.dir to support multi directories.
+(Ted Malaska via atm)
+
   OPTIMIZATIONS
 
   BUG FIXES

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java?rev=1611489r1=1611488r2=1611489view=diff
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java
 (original)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java
 Thu Jul 17 21:56:22 2014
@@ -50,6 +50,7 @@ import org.apache.hadoop.fs.FSInputStrea
 import org.apache.hadoop.fs.FileAlreadyExistsException;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocalDirAllocator;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.fs.s3.S3Exception;
@@ -225,6 +226,7 @@ public class NativeS3FileSystem extends 
 private OutputStream backupStream;
 private MessageDigest digest;
 private boolean closed;
+private LocalDirAllocator lDirAlloc;
 
 public NativeS3FsOutputStream(Configuration conf,
 NativeFileSystemStore store, String key, Progressable progress,
@@ -246,11 +248,10 @@ public class NativeS3FileSystem extends 
 }
 
 private File newBackupFile() throws IOException {
-      File dir = new File(conf.get("fs.s3.buffer.dir"));
-      if (!dir.mkdirs() && !dir.exists()) {
-        throw new IOException("Cannot create S3 buffer directory: " + dir);
+      if (lDirAlloc == null) {
+        lDirAlloc = new LocalDirAllocator("fs.s3.buffer.dir");
       }
-      File result = File.createTempFile("output-", ".tmp", dir);
+      File result = lDirAlloc.createTmpFileForWrite("output-",
+          LocalDirAllocator.SIZE_UNKNOWN, conf);
   result.deleteOnExit();
   return result;
 }



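A hedged sketch of what this change enables: fs.s3.buffer.dir may now list several local directories, and LocalDirAllocator spreads the temporary output files across them instead of requiring a single writable directory. Both paths below are hypothetical.

  import java.io.File;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.LocalDirAllocator;

  public class S3nBufferDirSketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // Comma-separated list of buffer directories; both paths are hypothetical.
      conf.set("fs.s3.buffer.dir", "/data/1/s3-tmp,/data/2/s3-tmp");

      LocalDirAllocator alloc = new LocalDirAllocator("fs.s3.buffer.dir");
      File tmp = alloc.createTmpFileForWrite("output-",
          LocalDirAllocator.SIZE_UNKNOWN, conf);
      System.out.println("buffer file placed at: " + tmp.getAbsolutePath());
    }
  }
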

svn commit: r1611490 - in /hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common: CHANGES.txt src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java

2014-07-17 Thread atm
Author: atm
Date: Thu Jul 17 21:58:07 2014
New Revision: 1611490

URL: http://svn.apache.org/r1611490
Log:
HADOOP-10610. Upgrade S3n s3.fs.buffer.dir to support multi directories. 
Contributed by Ted Malaska.

Modified:

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1611490r1=1611489r2=1611490view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
Thu Jul 17 21:58:07 2014
@@ -33,6 +33,9 @@ Release 2.6.0 - UNRELEASED
 HADOOP-10733. Fix potential null dereference in CredShell. (Ted Yu via
 omalley)
 
+HADOOP-10610. Upgrade S3n s3.fs.buffer.dir to support multi directories.
+(Ted Malaska via atm)
+
   OPTIMIZATIONS
 
   BUG FIXES

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java?rev=1611490r1=1611489r2=1611490view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java
 Thu Jul 17 21:58:07 2014
@@ -50,6 +50,7 @@ import org.apache.hadoop.fs.FSInputStrea
 import org.apache.hadoop.fs.FileAlreadyExistsException;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocalDirAllocator;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.fs.s3.S3Exception;
@@ -225,6 +226,7 @@ public class NativeS3FileSystem extends 
 private OutputStream backupStream;
 private MessageDigest digest;
 private boolean closed;
+private LocalDirAllocator lDirAlloc;
 
 public NativeS3FsOutputStream(Configuration conf,
 NativeFileSystemStore store, String key, Progressable progress,
@@ -246,11 +248,10 @@ public class NativeS3FileSystem extends 
 }
 
 private File newBackupFile() throws IOException {
-      File dir = new File(conf.get("fs.s3.buffer.dir"));
-      if (!dir.mkdirs() && !dir.exists()) {
-        throw new IOException("Cannot create S3 buffer directory: " + dir);
+      if (lDirAlloc == null) {
+        lDirAlloc = new LocalDirAllocator("fs.s3.buffer.dir");
       }
-      File result = File.createTempFile("output-", ".tmp", dir);
+      File result = lDirAlloc.createTmpFileForWrite("output-",
+          LocalDirAllocator.SIZE_UNKNOWN, conf);
   result.deleteOnExit();
   return result;
 }




svn commit: r1606042 - in /hadoop/common/trunk/hadoop-common-project/hadoop-nfs: ./ dev-support/ src/main/java/org/apache/hadoop/oncrpc/security/

2014-06-27 Thread atm
Author: atm
Date: Fri Jun 27 12:00:55 2014
New Revision: 1606042

URL: http://svn.apache.org/r1606042
Log:
HADOOP-10701. NFS should not validate the access permission only based on the 
user's primary group. Contributed by Harsh J.

Added:
hadoop/common/trunk/hadoop-common-project/hadoop-nfs/dev-support/

hadoop/common/trunk/hadoop-common-project/hadoop-nfs/dev-support/findbugsExcludeFile.xml
Modified:
hadoop/common/trunk/hadoop-common-project/hadoop-nfs/pom.xml

hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/security/CredentialsSys.java

hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/security/SecurityHandler.java

hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/security/SysSecurityHandler.java

Added: 
hadoop/common/trunk/hadoop-common-project/hadoop-nfs/dev-support/findbugsExcludeFile.xml
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/dev-support/findbugsExcludeFile.xml?rev=1606042view=auto
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-nfs/dev-support/findbugsExcludeFile.xml
 (added)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-nfs/dev-support/findbugsExcludeFile.xml
 Fri Jun 27 12:00:55 2014
@@ -0,0 +1,28 @@
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+<FindBugsFilter>
+  <!--
+    FindBugs is complaining about CredentialsSys#getAuxGIDs(...) returning
+    a mutable array, but it is alright in our case, and copies would be
+    more expensive instead.
+  -->
+  <Match>
+      <Class name="org.apache.hadoop.oncrpc.security.CredentialsSys" />
+      <Method name="getAuxGIDs" params="" returns="int[]" />
+      <Bug code="EI" />
+  </Match>
+</FindBugsFilter>

Modified: hadoop/common/trunk/hadoop-common-project/hadoop-nfs/pom.xml
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/pom.xml?rev=1606042r1=1606041r2=1606042view=diff
==
--- hadoop/common/trunk/hadoop-common-project/hadoop-nfs/pom.xml (original)
+++ hadoop/common/trunk/hadoop-common-project/hadoop-nfs/pom.xml Fri Jun 27 
12:00:55 2014
@@ -93,6 +93,18 @@
     </dependency>
   </dependencies>
 
+  <build>
+    <plugins>
+      <plugin>
+        <groupId>org.codehaus.mojo</groupId>
+        <artifactId>findbugs-maven-plugin</artifactId>
+        <configuration>
+          <excludeFilterFile>${basedir}/dev-support/findbugsExcludeFile.xml
+          </excludeFilterFile>
+        </configuration>
+      </plugin>
+    </plugins>
+  </build>
 
   <profiles>
     <profile>

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/security/CredentialsSys.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/security/CredentialsSys.java?rev=1606042r1=1606041r2=1606042view=diff
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/security/CredentialsSys.java
 (original)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/security/CredentialsSys.java
 Fri Jun 27 12:00:55 2014
@@ -58,6 +58,10 @@ public class CredentialsSys extends Cred
 return mUID;
   }
 
+  public int[] getAuxGIDs() {
+return mAuxGIDs;
+  }
+
   public void setGID(int gid) {
 this.mGID = gid;
   }
@@ -65,7 +69,7 @@ public class CredentialsSys extends Cred
   public void setUID(int uid) {
 this.mUID = uid;
   }
-  
+
   public void setStamp(int stamp) {
 this.mStamp = stamp;
   }

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/security/SecurityHandler.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/security/SecurityHandler.java?rev=1606042r1=1606041r2=1606042view=diff
==
--- 
hadoop

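For context, the access check this fix is about can be sketched independently of the NFS gateway code: membership has to be evaluated against the caller's primary GID and every auxiliary GID carried in the AUTH_SYS credentials (exposed above via the new getAuxGIDs() accessor), not the primary group alone. The GID values below are made up.

  import java.util.Arrays;

  public class NfsGroupCheckSketch {
    // True if the required gid matches the caller's primary GID or any auxiliary GID.
    static boolean isMember(int requiredGid, int primaryGid, int[] auxGids) {
      if (requiredGid == primaryGid) {
        return true;
      }
      return Arrays.stream(auxGids).anyMatch(gid -> gid == requiredGid);
    }

    public static void main(String[] args) {
      int[] auxGids = {100, 1001, 2000};                  // hypothetical values
      System.out.println(isMember(1001, 500, auxGids));   // true, via an aux GID
    }
  }
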
svn commit: r1606043 - in /hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs: ./ dev-support/ src/main/java/org/apache/hadoop/oncrpc/security/

2014-06-27 Thread atm
Author: atm
Date: Fri Jun 27 12:03:33 2014
New Revision: 1606043

URL: http://svn.apache.org/r1606043
Log:
HADOOP-10701. NFS should not validate the access permission only based on the 
user's primary group. Contributed by Harsh J.

Added:

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/dev-support/

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/dev-support/findbugsExcludeFile.xml
Modified:
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/pom.xml

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/security/CredentialsSys.java

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/security/SecurityHandler.java

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/security/SysSecurityHandler.java

Added: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/dev-support/findbugsExcludeFile.xml
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/dev-support/findbugsExcludeFile.xml?rev=1606043view=auto
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/dev-support/findbugsExcludeFile.xml
 (added)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/dev-support/findbugsExcludeFile.xml
 Fri Jun 27 12:03:33 2014
@@ -0,0 +1,28 @@
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+<FindBugsFilter>
+  <!--
+    FindBugs is complaining about CredentialsSys#getAuxGIDs(...) returning
+    a mutable array, but it is alright in our case, and copies would be
+    more expensive instead.
+  -->
+  <Match>
+      <Class name="org.apache.hadoop.oncrpc.security.CredentialsSys" />
+      <Method name="getAuxGIDs" params="" returns="int[]" />
+      <Bug code="EI" />
+  </Match>
+</FindBugsFilter>

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/pom.xml
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/pom.xml?rev=1606043r1=1606042r2=1606043view=diff
==
--- hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/pom.xml 
(original)
+++ hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/pom.xml 
Fri Jun 27 12:03:33 2014
@@ -93,6 +93,18 @@
     </dependency>
   </dependencies>
 
+  <build>
+    <plugins>
+      <plugin>
+        <groupId>org.codehaus.mojo</groupId>
+        <artifactId>findbugs-maven-plugin</artifactId>
+        <configuration>
+          <excludeFilterFile>${basedir}/dev-support/findbugsExcludeFile.xml
+          </excludeFilterFile>
+        </configuration>
+      </plugin>
+    </plugins>
+  </build>
 
   <profiles>
     <profile>

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/security/CredentialsSys.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/security/CredentialsSys.java?rev=1606043r1=1606042r2=1606043view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/security/CredentialsSys.java
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/security/CredentialsSys.java
 Fri Jun 27 12:03:33 2014
@@ -58,6 +58,10 @@ public class CredentialsSys extends Cred
 return mUID;
   }
 
+  public int[] getAuxGIDs() {
+return mAuxGIDs;
+  }
+
   public void setGID(int gid) {
 this.mGID = gid;
   }
@@ -65,7 +69,7 @@ public class CredentialsSys extends Cred
   public void setUID(int uid) {
 this.mUID = uid;
   }
-  
+
   public void setStamp(int stamp) {
 this.mStamp = stamp;
   }

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/security/SecurityHandler.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches

svn commit: r1604074 - in /hadoop/common/trunk/hadoop-common-project/hadoop-common/src: main/java/org/apache/hadoop/metrics2/source/ main/java/org/apache/hadoop/util/ site/apt/ test/java/org/apache/ha

2014-06-19 Thread atm
Author: atm
Date: Fri Jun 20 02:38:00 2014
New Revision: 1604074

URL: http://svn.apache.org/r1604074
Log:
HDFS-6403. Add metrics for log warnings reported by JVM pauses. Contributed by 
Yongjun Zhang.

Modified:

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetrics.java

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetricsInfo.java

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/JvmPauseMonitor.java

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/Metrics.apt.vm

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/source/TestJvmMetrics.java

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetrics.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetrics.java?rev=1604074r1=1604073r2=1604074view=diff
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetrics.java
 (original)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetrics.java
 Fri Jun 20 02:38:00 2014
@@ -38,6 +38,7 @@ import org.apache.hadoop.metrics2.lib.De
 import org.apache.hadoop.metrics2.lib.Interns;
 import static org.apache.hadoop.metrics2.source.JvmMetricsInfo.*;
 import static org.apache.hadoop.metrics2.impl.MsInfo.*;
+import org.apache.hadoop.util.JvmPauseMonitor;
 
 /**
  * JVM and logging related metrics.
@@ -65,6 +66,7 @@ public class JvmMetrics implements Metri
   ManagementFactory.getGarbageCollectorMXBeans();
   final ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();
   final String processName, sessionId;
+  private JvmPauseMonitor pauseMonitor = null;
   final ConcurrentHashMap<String, MetricsInfo[]> gcInfoCache =
       new ConcurrentHashMap<String, MetricsInfo[]>();
 
@@ -73,6 +75,10 @@ public class JvmMetrics implements Metri
 this.sessionId = sessionId;
   }
 
+  public void setPauseMonitor(final JvmPauseMonitor pauseMonitor) {
+this.pauseMonitor = pauseMonitor;
+  }
+
   public static JvmMetrics create(String processName, String sessionId,
   MetricsSystem ms) {
 return ms.register(JvmMetrics.name(), JvmMetrics.description(),
@@ -120,6 +126,15 @@ public class JvmMetrics implements Metri
 }
 rb.addCounter(GcCount, count)
   .addCounter(GcTimeMillis, timeMillis);
+
+if (pauseMonitor != null) {
+  rb.addCounter(GcNumWarnThresholdExceeded,
+  pauseMonitor.getNumGcWarnThreadholdExceeded());
+  rb.addCounter(GcNumInfoThresholdExceeded,
+  pauseMonitor.getNumGcInfoThresholdExceeded());
+  rb.addCounter(GcTotalExtraSleepTime,
+  pauseMonitor.getTotalGcExtraSleepTime());
+}
   }
 
   private MetricsInfo[] getGcInfo(String gcName) {

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetricsInfo.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetricsInfo.java?rev=1604074r1=1604073r2=1604074view=diff
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetricsInfo.java
 (original)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetricsInfo.java
 Fri Jun 20 02:38:00 2014
@@ -48,7 +48,10 @@ public enum JvmMetricsInfo implements Me
   LogFatal("Total number of fatal log events"),
   LogError("Total number of error log events"),
   LogWarn("Total number of warning log events"),
-  LogInfo("Total number of info log events");
+  LogInfo("Total number of info log events"),
+  GcNumWarnThresholdExceeded("Number of times that the GC warn threshold is exceeded"),
+  GcNumInfoThresholdExceeded("Number of times that the GC info threshold is exceeded"),
+  GcTotalExtraSleepTime("Total GC extra sleep time in milliseconds");
 
   private final String desc;
 

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/JvmPauseMonitor.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/JvmPauseMonitor.java?rev=1604074r1=1604073r2=1604074view=diff
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/JvmPauseMonitor.java

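A sketch, under assumptions, of how a daemon could wire the new GC-pause counters together using the setPauseMonitor() hook added above; the service and session names are made up, and the JvmPauseMonitor constructor shown is the Configuration-based one from this era of the code.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
  import org.apache.hadoop.metrics2.source.JvmMetrics;
  import org.apache.hadoop.util.JvmPauseMonitor;

  public class PauseMonitorWiringSketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();

      JvmMetrics jvmMetrics = JvmMetrics.create("MyService", "session-1",
          DefaultMetricsSystem.initialize("MyService"));  // hypothetical service name

      // The monitor feeds GcNumWarnThresholdExceeded, GcNumInfoThresholdExceeded and
      // GcTotalExtraSleepTime into the JvmMetrics source extended by this patch.
      JvmPauseMonitor pauseMonitor = new JvmPauseMonitor(conf);
      jvmMetrics.setPauseMonitor(pauseMonitor);
      pauseMonitor.start();

      Thread.sleep(5000);  // let the monitor observe the JVM for a short while
      pauseMonitor.stop();
    }
  }
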
svn commit: r1604076 - in /hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src: main/java/org/apache/hadoop/metrics2/source/ main/java/org/apache/hadoop/util/ site/apt/ test/java/o

2014-06-19 Thread atm
Author: atm
Date: Fri Jun 20 02:39:36 2014
New Revision: 1604076

URL: http://svn.apache.org/r1604076
Log:
HDFS-6403. Add metrics for log warnings reported by JVM pauses. Contributed by 
Yongjun Zhang.

Modified:

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetrics.java

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetricsInfo.java

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/JvmPauseMonitor.java

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/site/apt/Metrics.apt.vm

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/source/TestJvmMetrics.java

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetrics.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetrics.java?rev=1604076r1=1604075r2=1604076view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetrics.java
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetrics.java
 Fri Jun 20 02:39:36 2014
@@ -38,6 +38,7 @@ import org.apache.hadoop.metrics2.lib.De
 import org.apache.hadoop.metrics2.lib.Interns;
 import static org.apache.hadoop.metrics2.source.JvmMetricsInfo.*;
 import static org.apache.hadoop.metrics2.impl.MsInfo.*;
+import org.apache.hadoop.util.JvmPauseMonitor;
 
 /**
  * JVM and logging related metrics.
@@ -65,6 +66,7 @@ public class JvmMetrics implements Metri
   ManagementFactory.getGarbageCollectorMXBeans();
   final ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();
   final String processName, sessionId;
+  private JvmPauseMonitor pauseMonitor = null;
   final ConcurrentHashMap<String, MetricsInfo[]> gcInfoCache =
       new ConcurrentHashMap<String, MetricsInfo[]>();
 
@@ -73,6 +75,10 @@ public class JvmMetrics implements Metri
 this.sessionId = sessionId;
   }
 
+  public void setPauseMonitor(final JvmPauseMonitor pauseMonitor) {
+this.pauseMonitor = pauseMonitor;
+  }
+
   public static JvmMetrics create(String processName, String sessionId,
   MetricsSystem ms) {
 return ms.register(JvmMetrics.name(), JvmMetrics.description(),
@@ -120,6 +126,15 @@ public class JvmMetrics implements Metri
 }
 rb.addCounter(GcCount, count)
   .addCounter(GcTimeMillis, timeMillis);
+
+if (pauseMonitor != null) {
+  rb.addCounter(GcNumWarnThresholdExceeded,
+  pauseMonitor.getNumGcWarnThreadholdExceeded());
+  rb.addCounter(GcNumInfoThresholdExceeded,
+  pauseMonitor.getNumGcInfoThresholdExceeded());
+  rb.addCounter(GcTotalExtraSleepTime,
+  pauseMonitor.getTotalGcExtraSleepTime());
+}
   }
 
   private MetricsInfo[] getGcInfo(String gcName) {

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetricsInfo.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetricsInfo.java?rev=1604076r1=1604075r2=1604076view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetricsInfo.java
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetricsInfo.java
 Fri Jun 20 02:39:36 2014
@@ -48,7 +48,10 @@ public enum JvmMetricsInfo implements Me
   LogFatal("Total number of fatal log events"),
   LogError("Total number of error log events"),
   LogWarn("Total number of warning log events"),
-  LogInfo("Total number of info log events");
+  LogInfo("Total number of info log events"),
+  GcNumWarnThresholdExceeded("Number of times that the GC warn threshold is exceeded"),
+  GcNumInfoThresholdExceeded("Number of times that the GC info threshold is exceeded"),
+  GcTotalExtraSleepTime("Total GC extra sleep time in milliseconds");
 
   private final String desc;
 

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/JvmPauseMonitor.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/JvmPauseMonitor.java?rev=1604076r1=1604075r2=1604076view=diff

svn commit: r1601478 - in /hadoop/common/trunk/hadoop-common-project/hadoop-common: CHANGES.txt src/test/java/org/apache/hadoop/net/TestNetUtils.java

2014-06-09 Thread atm
Author: atm
Date: Mon Jun  9 18:53:19 2014
New Revision: 1601478

URL: http://svn.apache.org/r1601478
Log:
HADOOP-10664. TestNetUtils.testNormalizeHostName fails. Contributed by Aaron T. 
Myers.

Modified:
hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java

Modified: hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1601478r1=1601477r2=1601478view=diff
==
--- hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt Mon Jun 
 9 18:53:19 2014
@@ -536,6 +536,8 @@ Release 2.5.0 - UNRELEASED
 HADOOP-10647. String Format Exception in SwiftNativeFileSystemStore.java.
 (Gene Kim via stevel)
 
+HADOOP-10664. TestNetUtils.testNormalizeHostName fails. (atm)
+
 Release 2.4.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java?rev=1601478r1=1601477r2=1601478view=diff
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
 (original)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
 Mon Jun  9 18:53:19 2014
@@ -605,7 +605,7 @@ public class TestNetUtils {
   @Test
   public void testNormalizeHostName() {
     List<String> hosts = Arrays.asList(new String[] {"127.0.0.1",
-        "localhost", "3w.org", "UnknownHost123"});
+        "localhost", "1.kanyezone.appspot.com", "UnknownHost123"});
     List<String> normalizedHosts = NetUtils.normalizeHostNames(hosts);
 // when ipaddress is normalized, same address is expected in return
 assertEquals(normalizedHosts.get(0), hosts.get(0));
@@ -636,4 +636,4 @@ public class TestNetUtils {
     String gotStr = StringUtils.join(got, ", ");
 assertEquals(expectStr, gotStr);
   }
-}
\ No newline at end of file
+}




svn commit: r1601482 - /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

2014-06-09 Thread atm
Author: atm
Date: Mon Jun  9 19:01:39 2014
New Revision: 1601482

URL: http://svn.apache.org/r1601482
Log:
Moving CHANGES.txt entry for HADOOP-9099 to the correct section.

Modified:
hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

Modified: hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1601482r1=1601481r2=1601482view=diff
==
--- hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt Mon Jun 
 9 19:01:39 2014
@@ -297,9 +297,6 @@ Trunk (Unreleased)
 HADOOP-9394. Port findHangingTest.sh from HBase to Hadoop. (Andrew Wang
 via atm)
 
-HADOOP-9099. NetUtils.normalizeHostName fails on domains where 
-UnknownHost resolves to an IP address. (Ivan Mitic via suresh)
-
 HADOOP-9431 TestSecurityUtil#testLocalHostNameForNullOrWild on systems 
where hostname
 contains capital letters  (Chris Nauroth via sanjay)
 
@@ -536,6 +533,9 @@ Release 2.5.0 - UNRELEASED
 HADOOP-10647. String Format Exception in SwiftNativeFileSystemStore.java.
 (Gene Kim via stevel)
 
+HADOOP-9099. NetUtils.normalizeHostName fails on domains where
+UnknownHost resolves to an IP address. (Ivan Mitic via suresh)
+
 HADOOP-10664. TestNetUtils.testNormalizeHostName fails. (atm)
 
 Release 2.4.1 - UNRELEASED




svn commit: r1599436 - in /hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common: ./ src/main/java/org/apache/hadoop/security/ssl/ src/test/java/org/apache/hadoop/security/ssl/

2014-06-03 Thread atm
Author: atm
Date: Tue Jun  3 07:24:42 2014
New Revision: 1599436

URL: http://svn.apache.org/r1599436
Log:
HADOOP-10658. SSLFactory expects truststores being configured. Contributed by 
Alejandro Abdelnur.

Modified:

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/FileBasedKeyStoresFactory.java

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/ssl/TestSSLFactory.java

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1599436r1=1599435r2=1599436view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
Tue Jun  3 07:24:42 2014
@@ -178,6 +178,8 @@ Release 2.5.0 - UNRELEASED
 
 HADOOP-10630. Possible race condition in RetryInvocationHandler. (jing9)
 
+HADOOP-10658. SSLFactory expects truststores being configured. (tucu via 
atm)
+
 Release 2.4.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/FileBasedKeyStoresFactory.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/FileBasedKeyStoresFactory.java?rev=1599436r1=1599435r2=1599436view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/FileBasedKeyStoresFactory.java
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/FileBasedKeyStoresFactory.java
 Tue Jun  3 07:24:42 2014
@@ -188,33 +188,33 @@ public class FileBasedKeyStoresFactory i
 String locationProperty =
   resolvePropertyName(mode, SSL_TRUSTSTORE_LOCATION_TPL_KEY);
     String truststoreLocation = conf.get(locationProperty, "");
-    if (truststoreLocation.isEmpty()) {
-      throw new GeneralSecurityException("The property '" + locationProperty +
-        "' has not been set in the ssl configuration file.");
-    }
-
-    String passwordProperty = resolvePropertyName(mode,
-      SSL_TRUSTSTORE_PASSWORD_TPL_KEY);
-    String truststorePassword = conf.get(passwordProperty, "");
-    if (truststorePassword.isEmpty()) {
-      throw new GeneralSecurityException("The property '" + passwordProperty +
-        "' has not been set in the ssl configuration file.");
+    if (!truststoreLocation.isEmpty()) {
+      String passwordProperty = resolvePropertyName(mode,
+          SSL_TRUSTSTORE_PASSWORD_TPL_KEY);
+      String truststorePassword = conf.get(passwordProperty, "");
+      if (truststorePassword.isEmpty()) {
+        throw new GeneralSecurityException("The property '" + passwordProperty +
+            "' has not been set in the ssl configuration file.");
+      }
+      long truststoreReloadInterval =
+          conf.getLong(
+              resolvePropertyName(mode, SSL_TRUSTSTORE_RELOAD_INTERVAL_TPL_KEY),
+              DEFAULT_SSL_TRUSTSTORE_RELOAD_INTERVAL);
+
+      LOG.debug(mode.toString() + " TrustStore: " + truststoreLocation);
+
+      trustManager = new ReloadingX509TrustManager(truststoreType,
+          truststoreLocation,
+          truststorePassword,
+          truststoreReloadInterval);
+      trustManager.init();
+      LOG.debug(mode.toString() + " Loaded TrustStore: " + truststoreLocation);
+      trustManagers = new TrustManager[]{trustManager};
+    } else {
+      LOG.warn("The property '" + locationProperty + "' has not been set, " +
+          "no TrustStore will be loaded");
+      trustManagers = null;
     }
-    long truststoreReloadInterval =
-      conf.getLong(
-        resolvePropertyName(mode, SSL_TRUSTSTORE_RELOAD_INTERVAL_TPL_KEY),
-        DEFAULT_SSL_TRUSTSTORE_RELOAD_INTERVAL);
-
-    LOG.debug(mode.toString() + " TrustStore: " + truststoreLocation);
-
-    trustManager = new ReloadingX509TrustManager(truststoreType,
-                                                 truststoreLocation,
-                                                 truststorePassword,
-                                                 truststoreReloadInterval);
-    trustManager.init();
-    LOG.debug(mode.toString() + " Loaded TrustStore: " + truststoreLocation);
-
-    trustManagers = new TrustManager
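A minimal, self-contained Java sketch (not part of the patch) of the JDK behavior the new null-trust-manager branch presumably relies on: initializing an SSLContext with a null TrustManager array makes the JVM fall back to its default trust store, so a missing truststore configuration no longer has to be fatal.

import javax.net.ssl.SSLContext;

public class DefaultTrustFallback {
  public static void main(String[] args) throws Exception {
    SSLContext context = SSLContext.getInstance("TLS");
    // Passing null trust managers tells the JDK to use its default trust store
    // (typically $JAVA_HOME/lib/security/cacerts) instead of an explicit one.
    context.init(null /* key managers */, null /* trust managers */, null /* secure random */);
    System.out.println("Initialized " + context.getProtocol()
        + " context with JDK default trust managers");
  }
}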

svn commit: r1598451 - in /hadoop/common/trunk/hadoop-common-project/hadoop-common: CHANGES.txt src/main/bin/hadoop-daemon.sh

2014-05-29 Thread atm
Author: atm
Date: Fri May 30 01:52:17 2014
New Revision: 1598451

URL: http://svn.apache.org/r1598451
Log:
HADOOP-10638. Updating hadoop-daemon.sh to work as expected when nfs is started 
as a privileged user. Contributed by Manikandan Narayanaswamy.

Modified:
hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh

Modified: hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1598451r1=1598450r2=1598451view=diff
==
--- hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt Fri May 
30 01:52:17 2014
@@ -519,6 +519,9 @@ Release 2.5.0 - UNRELEASED
 HADOOP-10639. FileBasedKeyStoresFactory initialization is not using default
 for SSL_REQUIRE_CLIENT_CERT_KEY. (tucu)
 
+HADOOP-10638. Updating hadoop-daemon.sh to work as expected when nfs is
+started as a privileged user. (Manikandan Narayanaswamy via atm)
+
 Release 2.4.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh?rev=1598451r1=1598450r2=1598451view=diff
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh
 (original)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh
 Fri May 30 01:52:17 2014
@@ -87,6 +87,14 @@ if [ "$command" == "datanode" ] && [ "$E
   starting_secure_dn=true
 fi
 
+#Determine if we're starting a privileged NFS, if so, redefine the appropriate variables
+if [ "$command" == "nfs3" ] && [ "$EUID" -eq 0 ] && [ -n "$HADOOP_PRIVILEGED_NFS_USER" ]; then
+    export HADOOP_PID_DIR=$HADOOP_PRIVILEGED_NFS_PID_DIR
+    export HADOOP_LOG_DIR=$HADOOP_PRIVILEGED_NFS_LOG_DIR
+    export HADOOP_IDENT_STRING=$HADOOP_PRIVILEGED_NFS_USER
+    starting_privileged_nfs=true
+fi
+
 if [ "$HADOOP_IDENT_STRING" = "" ]; then
   export HADOOP_IDENT_STRING="$USER"
 fi
@@ -162,6 +170,9 @@ case $startStop in
       echo "ulimit -a for secure datanode user $HADOOP_SECURE_DN_USER" >> $log
       # capture the ulimit info for the appropriate user
       su --shell=/bin/bash $HADOOP_SECURE_DN_USER -c 'ulimit -a' >> $log 2>&1
+    elif [ "true" = "$starting_privileged_nfs" ]; then
+        echo "ulimit -a for privileged nfs user $HADOOP_PRIVILEGED_NFS_USER" >> $log
+        su --shell=/bin/bash $HADOOP_PRIVILEGED_NFS_USER -c 'ulimit -a' >> $log 2>&1
     else
       echo "ulimit -a for user $USER" >> $log
       ulimit -a >> $log 2>&1




svn commit: r1598452 - in /hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common: CHANGES.txt src/main/bin/hadoop-daemon.sh

2014-05-29 Thread atm
Author: atm
Date: Fri May 30 01:53:36 2014
New Revision: 1598452

URL: http://svn.apache.org/r1598452
Log:
HADOOP-10638. Updating hadoop-daemon.sh to work as expected when nfs is started 
as a privileged user. Contributed by Manikandan Narayanaswamy.

Modified:

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1598452r1=1598451r2=1598452view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
Fri May 30 01:53:36 2014
@@ -168,6 +168,9 @@ Release 2.5.0 - UNRELEASED
 HADOOP-10639. FileBasedKeyStoresFactory initialization is not using default
 for SSL_REQUIRE_CLIENT_CERT_KEY. (tucu)
 
+HADOOP-10638. Updating hadoop-daemon.sh to work as expected when nfs is
+started as a privileged user. (Manikandan Narayanaswamy via atm)
+
 Release 2.4.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh?rev=1598452r1=1598451r2=1598452view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh
 Fri May 30 01:53:36 2014
@@ -87,6 +87,14 @@ if [ "$command" == "datanode" ] && [ "$E
   starting_secure_dn=true
 fi
 
+#Determine if we're starting a privileged NFS, if so, redefine the appropriate variables
+if [ "$command" == "nfs3" ] && [ "$EUID" -eq 0 ] && [ -n "$HADOOP_PRIVILEGED_NFS_USER" ]; then
+    export HADOOP_PID_DIR=$HADOOP_PRIVILEGED_NFS_PID_DIR
+    export HADOOP_LOG_DIR=$HADOOP_PRIVILEGED_NFS_LOG_DIR
+    export HADOOP_IDENT_STRING=$HADOOP_PRIVILEGED_NFS_USER
+    starting_privileged_nfs=true
+fi
+
 if [ "$HADOOP_IDENT_STRING" = "" ]; then
   export HADOOP_IDENT_STRING="$USER"
 fi
@@ -162,6 +170,9 @@ case $startStop in
      echo "ulimit -a for secure datanode user $HADOOP_SECURE_DN_USER" >> $log
      # capture the ulimit info for the appropriate user
      su --shell=/bin/bash $HADOOP_SECURE_DN_USER -c 'ulimit -a' >> $log 2>&1
+    elif [ "true" = "$starting_privileged_nfs" ]; then
+        echo "ulimit -a for privileged nfs user $HADOOP_PRIVILEGED_NFS_USER" >> $log
+        su --shell=/bin/bash $HADOOP_PRIVILEGED_NFS_USER -c 'ulimit -a' >> $log 2>&1
     else
      echo "ulimit -a for user $USER" >> $log
      ulimit -a >> $log 2>&1




svn commit: r1596020 - in /hadoop/common/trunk/hadoop-common-project/hadoop-common: CHANGES.txt src/main/java/org/apache/hadoop/security/UserGroupInformation.java src/test/java/org/apache/hadoop/secur

2014-05-19 Thread atm
Author: atm
Date: Mon May 19 19:56:29 2014
New Revision: 1596020

URL: http://svn.apache.org/r1596020
Log:
HADOOP-10489. UserGroupInformation#getTokens and UserGroupInformation#addToken 
can lead to ConcurrentModificationException. Contributed by Robert Kanter.

Modified:
hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java

Modified: hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1596020r1=1596019r2=1596020view=diff
==
--- hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt Mon May 
19 19:56:29 2014
@@ -486,6 +486,9 @@ Release 2.5.0 - UNRELEASED
 HADOOP-10401. ShellBasedUnixGroupsMapping#getGroups does not always return
 primary group first (Akira AJISAKA via Colin Patrick McCabe)
 
+HADOOP-10489. UserGroupInformation#getTokens and UserGroupInformation
+#addToken can lead to ConcurrentModificationException (Robert Kanter via 
atm)
+
 Release 2.4.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java?rev=1596020r1=1596019r2=1596020view=diff
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
 (original)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
 Mon May 19 19:56:29 2014
@@ -1392,7 +1392,7 @@ public class UserGroupInformation {
* @param token Token to be added
* @return true on successful add of new token
*/
-  public synchronized boolean addToken(Token<? extends TokenIdentifier> token) {
+  public boolean addToken(Token<? extends TokenIdentifier> token) {
 return (token != null) ? addToken(token.getService(), token) : false;
   }
 
@@ -1403,10 +1403,11 @@ public class UserGroupInformation {
* @param token Token to be added
* @return true on successful add of new token
*/
-  public synchronized boolean addToken(Text alias,
-                                       Token<? extends TokenIdentifier> token) {
-    getCredentialsInternal().addToken(alias, token);
-    return true;
+  public boolean addToken(Text alias, Token<? extends TokenIdentifier> token) {
+    synchronized (subject) {
+      getCredentialsInternal().addToken(alias, token);
+      return true;
+    }
   }
   
   /**
@@ -1414,10 +1415,11 @@ public class UserGroupInformation {
* 
* @return an unmodifiable collection of tokens associated with user
*/
-  public synchronized
-  Collection<Token<? extends TokenIdentifier>> getTokens() {
-    return Collections.unmodifiableCollection(
-        new ArrayList<Token<?>>(getCredentialsInternal().getAllTokens()));
+  public Collection<Token<? extends TokenIdentifier>> getTokens() {
+    synchronized (subject) {
+      return Collections.unmodifiableCollection(
+          new ArrayList<Token<?>>(getCredentialsInternal().getAllTokens()));
+    }
   }
 
   /**
@@ -1425,23 +1427,27 @@ public class UserGroupInformation {
* 
* @return Credentials of tokens associated with this user
*/
-  public synchronized Credentials getCredentials() {
-Credentials creds = new Credentials(getCredentialsInternal());
-    Iterator<Token<?>> iter = creds.getAllTokens().iterator();
-while (iter.hasNext()) {
-  if (iter.next() instanceof Token.PrivateToken) {
-iter.remove();
+  public Credentials getCredentials() {
+synchronized (subject) {
+  Credentials creds = new Credentials(getCredentialsInternal());
+      Iterator<Token<?>> iter = creds.getAllTokens().iterator();
+  while (iter.hasNext()) {
+if (iter.next() instanceof Token.PrivateToken) {
+  iter.remove();
+}
   }
+  return creds;
 }
-return creds;
   }
   
   /**
* Add the given Credentials to this user.
* @param credentials of tokens and secrets
*/
-  public synchronized void addCredentials(Credentials credentials) {
-getCredentialsInternal().addAll(credentials);
+  public void addCredentials(Credentials credentials) {
+synchronized (subject) {
+  getCredentialsInternal().addAll(credentials);
+}
   }
 
   private synchronized Credentials
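A small standalone Java sketch (not Hadoop code) of the locking pattern the patch switches to: all mutation and snapshotting of a shared collection happens under one common lock, and callers only ever iterate a private copy, which is what closes the ConcurrentModificationException window.

import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class TokenSyncSketch {
  private final Object lock = new Object();            // stands in for the UGI's shared subject
  private final Map<String, String> tokens = new HashMap<String, String>();

  public void addToken(String alias, String token) {
    synchronized (lock) {                               // every write goes through the same lock
      tokens.put(alias, token);
    }
  }

  public Collection<String> getTokens() {
    synchronized (lock) {                               // snapshot under the same lock ...
      return Collections.unmodifiableCollection(
          new ArrayList<String>(tokens.values()));      // ... so callers iterate a private copy
    }
  }

  public static void main(String[] args) throws Exception {
    final TokenSyncSketch sketch = new TokenSyncSketch();
    Thread writer = new Thread(new Runnable() {
      public void run() {
        for (int i = 0; i < 10000; i++) {
          sketch.addToken("alias" + i, "token" + i);
        }
      }
    });
    writer.start();
    int seen = 0;
    for (int i = 0; i < 1000; i++) {
      for (String t : sketch.getTokens()) {             // safe: never iterates the live map
        seen++;
      }
    }
    writer.join();
    System.out.println("Iterated " + seen + " snapshot entries without a CME");
  }
}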

svn commit: r1596027 - in /hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common: CHANGES.txt src/main/java/org/apache/hadoop/security/UserGroupInformation.java src/test/java/org/apache/

2014-05-19 Thread atm
Author: atm
Date: Mon May 19 19:59:08 2014
New Revision: 1596027

URL: http://svn.apache.org/r1596027
Log:
HADOOP-10489. UserGroupInformation#getTokens and UserGroupInformation#addToken 
can lead to ConcurrentModificationException. Contributed by Robert Kanter.

Modified:

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1596027r1=1596026r2=1596027view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
Mon May 19 19:59:08 2014
@@ -151,6 +151,9 @@ Release 2.5.0 - UNRELEASED
 HADOOP-10401. ShellBasedUnixGroupsMapping#getGroups does not always return
 primary group first (Akira AJISAKA via Colin Patrick McCabe)
 
+HADOOP-10489. UserGroupInformation#getTokens and UserGroupInformation
+#addToken can lead to ConcurrentModificationException (Robert Kanter via 
atm)
+
 Release 2.4.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java?rev=1596027r1=1596026r2=1596027view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
 Mon May 19 19:59:08 2014
@@ -1343,7 +1343,7 @@ public class UserGroupInformation {
* @param token Token to be added
* @return true on successful add of new token
*/
-  public synchronized boolean addToken(Token<? extends TokenIdentifier> token) {
+  public boolean addToken(Token<? extends TokenIdentifier> token) {
 return (token != null) ? addToken(token.getService(), token) : false;
   }
 
@@ -1354,10 +1354,11 @@ public class UserGroupInformation {
* @param token Token to be added
* @return true on successful add of new token
*/
-  public synchronized boolean addToken(Text alias,
-                                       Token<? extends TokenIdentifier> token) {
-    getCredentialsInternal().addToken(alias, token);
-    return true;
+  public boolean addToken(Text alias, Token<? extends TokenIdentifier> token) {
+    synchronized (subject) {
+      getCredentialsInternal().addToken(alias, token);
+      return true;
+    }
   }
   
   /**
@@ -1365,10 +1366,11 @@ public class UserGroupInformation {
* 
* @return an unmodifiable collection of tokens associated with user
*/
-  public synchronized
-  Collection<Token<? extends TokenIdentifier>> getTokens() {
-    return Collections.unmodifiableCollection(
-        new ArrayList<Token<?>>(getCredentialsInternal().getAllTokens()));
+  public Collection<Token<? extends TokenIdentifier>> getTokens() {
+    synchronized (subject) {
+      return Collections.unmodifiableCollection(
+          new ArrayList<Token<?>>(getCredentialsInternal().getAllTokens()));
+    }
   }
 
   /**
@@ -1376,23 +1378,27 @@ public class UserGroupInformation {
* 
* @return Credentials of tokens associated with this user
*/
-  public synchronized Credentials getCredentials() {
-Credentials creds = new Credentials(getCredentialsInternal());
-    Iterator<Token<?>> iter = creds.getAllTokens().iterator();
-while (iter.hasNext()) {
-  if (iter.next() instanceof Token.PrivateToken) {
-iter.remove();
+  public Credentials getCredentials() {
+synchronized (subject) {
+  Credentials creds = new Credentials(getCredentialsInternal());
+      Iterator<Token<?>> iter = creds.getAllTokens().iterator();
+  while (iter.hasNext()) {
+if (iter.next() instanceof Token.PrivateToken) {
+  iter.remove();
+}
   }
+  return creds;
 }
-return creds;
   }
   
   /**
* Add the given Credentials to this user.
* @param credentials of tokens and secrets
*/
-  public synchronized void addCredentials(Credentials credentials) {
-getCredentialsInternal().addAll(credentials);
+  public void addCredentials(Credentials credentials

svn commit: r1595352 - in /hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src: main/java/org/apache/hadoop/oncrpc/RpcProgram.java test/java/org/apache/hadoop/oncrpc/TestFrameDecoder.

2014-05-16 Thread atm
Author: atm
Date: Fri May 16 21:25:05 2014
New Revision: 1595352

URL: http://svn.apache.org/r1595352
Log:
HDFS-6406. Add capability for NFS gateway to reject connections from 
unprivileged ports. Contributed by Aaron T. Myers.

Added:

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/test/resources/

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/test/resources/log4j.properties
Modified:

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/oncrpc/TestFrameDecoder.java

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java?rev=1595352r1=1595351r2=1595352view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java
 Fri May 16 21:25:05 2014
@@ -19,11 +19,14 @@ package org.apache.hadoop.oncrpc;
 
 import java.io.IOException;
 import java.net.DatagramSocket;
+import java.net.InetSocketAddress;
+import java.net.SocketAddress;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.oncrpc.RpcAcceptedReply.AcceptState;
 import org.apache.hadoop.oncrpc.security.Verifier;
+import org.apache.hadoop.oncrpc.security.VerifierNone;
 import org.apache.hadoop.portmap.PortmapMapping;
 import org.apache.hadoop.portmap.PortmapRequest;
 import org.jboss.netty.buffer.ChannelBuffer;
@@ -37,7 +40,7 @@ import org.jboss.netty.channel.SimpleCha
  * and implement {@link #handleInternal} to handle the requests received.
  */
 public abstract class RpcProgram extends SimpleChannelUpstreamHandler {
-  private static final Log LOG = LogFactory.getLog(RpcProgram.class);
+  static final Log LOG = LogFactory.getLog(RpcProgram.class);
   public static final int RPCB_PORT = 111;
   private final String program;
   private final String host;
@@ -45,6 +48,7 @@ public abstract class RpcProgram extends
   private final int progNumber;
   private final int lowProgVersion;
   private final int highProgVersion;
+  private final boolean allowInsecurePorts;
   
   /**
* If not null, this will be used as the socket to use to connect to the
@@ -61,10 +65,14 @@ public abstract class RpcProgram extends
* @param progNumber program number as defined in RFC 1050
* @param lowProgVersion lowest version of the specification supported
* @param highProgVersion highest version of the specification supported
+   * @param DatagramSocket registrationSocket if not null, use this socket to
+   *register with portmap daemon
+   * @param allowInsecurePorts true to allow client connections from
+   *unprivileged ports, false otherwise
*/
   protected RpcProgram(String program, String host, int port, int progNumber,
   int lowProgVersion, int highProgVersion,
-  DatagramSocket registrationSocket) {
+  DatagramSocket registrationSocket, boolean allowInsecurePorts) {
 this.program = program;
 this.host = host;
 this.port = port;
@@ -72,6 +80,9 @@ public abstract class RpcProgram extends
 this.lowProgVersion = lowProgVersion;
 this.highProgVersion = highProgVersion;
 this.registrationSocket = registrationSocket;
+this.allowInsecurePorts = allowInsecurePorts;
+    LOG.info("Will " + (allowInsecurePorts ? "" : "not ") + "accept client "
+        + "connections from unprivileged ports");
   }
 
   /**
@@ -133,43 +144,82 @@ public abstract class RpcProgram extends
   throws Exception {
 RpcInfo info = (RpcInfo) e.getMessage();
 RpcCall call = (RpcCall) info.header();
+
+SocketAddress remoteAddress = info.remoteAddress();
+if (!allowInsecurePorts) {
+  if (LOG.isDebugEnabled()) {
+        LOG.debug("Will not allow connections from unprivileged ports. " +
+            "Checking for valid client port...");
+  }
+  if (remoteAddress instanceof InetSocketAddress) {
+        InetSocketAddress inetRemoteAddress = (InetSocketAddress) remoteAddress;
+        if (inetRemoteAddress.getPort() > 1023) {
+          LOG.warn("Connection attempted from '" + inetRemoteAddress + "' "
+              + "which is an unprivileged port. Rejecting connection.");
+          sendRejectedReply(call, remoteAddress, ctx);
+          return;
+        } else {
+          if (LOG.isDebugEnabled()) {
+            LOG.debug("Accepting connection from '" + remoteAddress + "'");
+          }
+        }
+  } else {
+LOG.warn
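The accept/reject decision above hinges on the classic Unix rule that only root may bind ports 1023 and below. A tiny standalone sketch of that check (illustrative only, not the gateway's API):

import java.net.InetSocketAddress;

public class PrivilegedPortCheck {
  // Ports <= 1023 are reserved for privileged (root) processes on Unix-like systems,
  // so a client connecting from one of them is assumed to be trusted.
  static boolean isPrivileged(InetSocketAddress remote) {
    return remote.getPort() <= 1023;
  }

  public static void main(String[] args) {
    System.out.println(isPrivileged(new InetSocketAddress("127.0.0.1", 1011)));   // true
    System.out.println(isPrivileged(new InetSocketAddress("127.0.0.1", 49152)));  // false
  }
}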

svn commit: r1592133 - in /hadoop/common/trunk/hadoop-common-project/hadoop-common: CHANGES.txt src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java src/main/resources/core-def

2014-05-02 Thread atm
Author: atm
Date: Sat May  3 00:25:09 2014
New Revision: 1592133

URL: http://svn.apache.org/r1592133
Log:
HADOOP-10568. Add s3 server-side encryption. Contributed by David S. Wang.

Modified:
hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml

Modified: hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1592133r1=1592132r2=1592133view=diff
==
--- hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt Sat May 
 3 00:25:09 2014
@@ -439,6 +439,8 @@ Release 2.5.0 - UNRELEASED
 HADOOP-10562. Namenode exits on exception without printing stack trace
 in AbstractDelegationTokenSecretManager. (Arpit Agarwal)
 
+HADOOP-10568. Add s3 server-side encryption. (David S. Wang via atm)
+
 Release 2.4.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java?rev=1592133r1=1592132r2=1592133view=diff
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java
 (original)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java
 Sat May  3 00:25:09 2014
@@ -63,6 +63,8 @@ class Jets3tNativeFileSystemStore implem
   private boolean multipartEnabled;
   private long multipartCopyBlockSize;
   static final long MAX_PART_SIZE = (long)5 * 1024 * 1024 * 1024;
+
+  private String serverSideEncryptionAlgorithm;
   
   public static final Log LOG =
   LogFactory.getLog(Jets3tNativeFileSystemStore.class);
@@ -87,6 +89,7 @@ class Jets3tNativeFileSystemStore implem
 multipartCopyBlockSize = Math.min(
         conf.getLong("fs.s3n.multipart.copy.block.size", MAX_PART_SIZE),
         MAX_PART_SIZE);
+    serverSideEncryptionAlgorithm =
+        conf.get("fs.s3n.server-side-encryption-algorithm");
 
 bucket = new S3Bucket(uri.getHost());
   }
@@ -107,6 +110,7 @@ class Jets3tNativeFileSystemStore implem
   object.setDataInputStream(in);
       object.setContentType("binary/octet-stream");
   object.setContentLength(file.length());
+  object.setServerSideEncryptionAlgorithm(serverSideEncryptionAlgorithm);
   if (md5Hash != null) {
 object.setMd5Hash(md5Hash);
   }
@@ -130,6 +134,7 @@ class Jets3tNativeFileSystemStore implem
 object.setDataInputFile(file);
     object.setContentType("binary/octet-stream");
 object.setContentLength(file.length());
+object.setServerSideEncryptionAlgorithm(serverSideEncryptionAlgorithm);
 if (md5Hash != null) {
   object.setMd5Hash(md5Hash);
 }
@@ -156,6 +161,7 @@ class Jets3tNativeFileSystemStore implem
   object.setDataInputStream(new ByteArrayInputStream(new byte[0]));
       object.setContentType("binary/octet-stream");
   object.setContentLength(0);
+  object.setServerSideEncryptionAlgorithm(serverSideEncryptionAlgorithm);
   s3Service.putObject(bucket, object);
 } catch (S3ServiceException e) {
   handleS3ServiceException(e);
@@ -317,8 +323,11 @@ class Jets3tNativeFileSystemStore implem
   return;
 }
   }
+
+      S3Object dstObject = new S3Object(dstKey);
+      dstObject.setServerSideEncryptionAlgorithm(serverSideEncryptionAlgorithm);
   s3Service.copyObject(bucket.getName(), srcKey, bucket.getName(),
-  new S3Object(dstKey), false);
+  dstObject, false);
 } catch (ServiceException e) {
   handleServiceException(srcKey, e);
 }

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml?rev=1592133r1=1592132r2=1592133view=diff
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
 (original)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
 Sat May  3 00:25:09 2014
@@ -576,6 +576,14 @@
 </property>
 
 <property>
+  <name>fs.s3n.server-side-encryption-algorithm</name>
+  <value>
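A hedged usage sketch (not part of the patch): wiring the new property into a client-side Configuration. Only the property name comes from the diff above; the "AES256" value is an assumed example of an algorithm the object store would accept.

import org.apache.hadoop.conf.Configuration;

public class S3nSseConfigExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Property added by HADOOP-10568; every object written through
    // Jets3tNativeFileSystemStore is then tagged via setServerSideEncryptionAlgorithm().
    conf.set("fs.s3n.server-side-encryption-algorithm", "AES256"); // value is illustrative
    System.out.println(conf.get("fs.s3n.server-side-encryption-algorithm"));
  }
}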

svn commit: r1592134 - in /hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common: CHANGES.txt src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java src/main/resour

2014-05-02 Thread atm
Author: atm
Date: Sat May  3 00:26:23 2014
New Revision: 1592134

URL: http://svn.apache.org/r1592134
Log:
HADOOP-10568. Add s3 server-side encryption. Contributed by David S. Wang.

Modified:

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1592134r1=1592133r2=1592134view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
Sat May  3 00:26:23 2014
@@ -111,6 +111,8 @@ Release 2.5.0 - UNRELEASED
 HADOOP-10562. Namenode exits on exception without printing stack trace
 in AbstractDelegationTokenSecretManager. (Arpit Agarwal)
 
+HADOOP-10568. Add s3 server-side encryption. (David S. Wang via atm)
+
 Release 2.4.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java?rev=1592134r1=1592133r2=1592134view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java
 Sat May  3 00:26:23 2014
@@ -63,6 +63,8 @@ class Jets3tNativeFileSystemStore implem
   private boolean multipartEnabled;
   private long multipartCopyBlockSize;
   static final long MAX_PART_SIZE = (long)5 * 1024 * 1024 * 1024;
+
+  private String serverSideEncryptionAlgorithm;
   
   public static final Log LOG =
   LogFactory.getLog(Jets3tNativeFileSystemStore.class);
@@ -87,6 +89,7 @@ class Jets3tNativeFileSystemStore implem
 multipartCopyBlockSize = Math.min(
         conf.getLong("fs.s3n.multipart.copy.block.size", MAX_PART_SIZE),
         MAX_PART_SIZE);
+    serverSideEncryptionAlgorithm =
+        conf.get("fs.s3n.server-side-encryption-algorithm");
 
 bucket = new S3Bucket(uri.getHost());
   }
@@ -107,6 +110,7 @@ class Jets3tNativeFileSystemStore implem
   object.setDataInputStream(in);
       object.setContentType("binary/octet-stream");
   object.setContentLength(file.length());
+  object.setServerSideEncryptionAlgorithm(serverSideEncryptionAlgorithm);
   if (md5Hash != null) {
 object.setMd5Hash(md5Hash);
   }
@@ -130,6 +134,7 @@ class Jets3tNativeFileSystemStore implem
 object.setDataInputFile(file);
     object.setContentType("binary/octet-stream");
 object.setContentLength(file.length());
+object.setServerSideEncryptionAlgorithm(serverSideEncryptionAlgorithm);
 if (md5Hash != null) {
   object.setMd5Hash(md5Hash);
 }
@@ -156,6 +161,7 @@ class Jets3tNativeFileSystemStore implem
   object.setDataInputStream(new ByteArrayInputStream(new byte[0]));
       object.setContentType("binary/octet-stream");
   object.setContentLength(0);
+  object.setServerSideEncryptionAlgorithm(serverSideEncryptionAlgorithm);
   s3Service.putObject(bucket, object);
 } catch (S3ServiceException e) {
   handleS3ServiceException(e);
@@ -317,8 +323,11 @@ class Jets3tNativeFileSystemStore implem
   return;
 }
   }
+
+      S3Object dstObject = new S3Object(dstKey);
+      dstObject.setServerSideEncryptionAlgorithm(serverSideEncryptionAlgorithm);
   s3Service.copyObject(bucket.getName(), srcKey, bucket.getName(),
-  new S3Object(dstKey), false);
+  dstObject, false);
 } catch (ServiceException e) {
   handleServiceException(srcKey, e);
 }

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml?rev=1592134r1=1592133r2=1592134view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project

svn commit: r1591181 - in /hadoop/common/trunk/hadoop-common-project/hadoop-common: CHANGES.txt src/main/java/org/apache/hadoop/fs/PathIOException.java src/test/java/org/apache/hadoop/fs/shell/TestPat

2014-04-29 Thread atm
Author: atm
Date: Wed Apr 30 03:22:20 2014
New Revision: 1591181

URL: http://svn.apache.org/r1591181
Log:
HADOOP-10543. RemoteException's unwrapRemoteException method failed for 
PathIOException. Contributed by Yongjun Zhang.

Modified:
hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathIOException.java

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestPathExceptions.java

Modified: hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1591181r1=1591180r2=1591181view=diff
==
--- hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt Wed Apr 
30 03:22:20 2014
@@ -428,6 +428,9 @@ Release 2.5.0 - UNRELEASED
 HADOOP-10547. Give SaslPropertiesResolver.getDefaultProperties() public
 scope. (Benoy Antony via Arpit Agarwal)
 
+HADOOP-10543. RemoteException's unwrapRemoteException method failed for
+PathIOException. (Yongjun Zhang via atm)
+
 Release 2.4.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathIOException.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathIOException.java?rev=1591181r1=1591180r2=1591181view=diff
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathIOException.java
 (original)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathIOException.java
 Wed Apr 30 03:22:20 2014
@@ -40,7 +40,7 @@ public class PathIOException extends IOE
*  @param path for the exception
*/
   public PathIOException(String path) {
-this(path, EIO, null);
+this(path, EIO);
   }
 
   /**
@@ -59,7 +59,8 @@ public class PathIOException extends IOE
* @param error custom string to use an the error text
*/
   public PathIOException(String path, String error) {
-this(path, error, null);
+super(error);
+this.path = path;
   }
 
   protected PathIOException(String path, String error, Throwable cause) {

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestPathExceptions.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestPathExceptions.java?rev=1591181r1=1591180r2=1591181view=diff
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestPathExceptions.java
 (original)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestPathExceptions.java
 Wed Apr 30 03:22:20 2014
@@ -19,11 +19,13 @@
 package org.apache.hadoop.fs.shell;
 
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
 
 import java.io.IOException;
 
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.PathIOException;
+import org.apache.hadoop.ipc.RemoteException;
 import org.junit.Test;
 
 public class TestPathExceptions {
@@ -52,5 +54,25 @@ public class TestPathExceptions {
 assertEquals(new Path(path), pe.getPath());
     assertEquals("`" + path + "': " + error, pe.getMessage());
   }
-  
+
+  @Test
+  public void testRemoteExceptionUnwrap() throws Exception {
+PathIOException pe;
+RemoteException re;
+IOException ie;
+
+pe = new PathIOException(path);
+    re = new RemoteException(PathIOException.class.getName(), "test constructor1");
+ie = re.unwrapRemoteException();
+assertTrue(ie instanceof PathIOException);
+ie = re.unwrapRemoteException(PathIOException.class);
+assertTrue(ie instanceof PathIOException);
+
+    pe = new PathIOException(path, "constructor2");
+    re = new RemoteException(PathIOException.class.getName(), "test constructor2");
+ie = re.unwrapRemoteException();
+assertTrue(ie instanceof PathIOException);
+ie = re.unwrapRemoteException(PathIOException.class);
+assertTrue(ie instanceof PathIOException);
+  }
 }
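A plausible reading of why the constructor chain had to change (an editor's sketch, not taken from the commit): Throwable's cause may only be initialized once, so a constructor that eagerly passes a null cause up the chain can block RemoteException's unwrap path from attaching the real cause later. The JDK behavior in isolation:

public class InitCauseOnce {
  public static void main(String[] args) {
    Exception e = new Exception("boom");               // cause left uninitialized
    e.initCause(new RuntimeException("root"));         // first call succeeds
    try {
      e.initCause(new RuntimeException("again"));      // second call is rejected
    } catch (IllegalStateException expected) {
      System.out.println("initCause may be called at most once: " + expected.getMessage());
    }
  }
}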




svn commit: r1591182 - in /hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common: CHANGES.txt src/main/java/org/apache/hadoop/fs/PathIOException.java src/test/java/org/apache/hadoop/fs/s

2014-04-29 Thread atm
Author: atm
Date: Wed Apr 30 03:24:05 2014
New Revision: 1591182

URL: http://svn.apache.org/r1591182
Log:
HADOOP-10543. RemoteException's unwrapRemoteException method failed for 
PathIOException. Contributed by Yongjun Zhang.

Modified:

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathIOException.java

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestPathExceptions.java

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1591182r1=1591181r2=1591182view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
Wed Apr 30 03:24:05 2014
@@ -102,6 +102,9 @@ Release 2.5.0 - UNRELEASED
 HADOOP-10547. Give SaslPropertiesResolver.getDefaultProperties() public
 scope. (Benoy Antony via Arpit Agarwal)
 
+HADOOP-10543. RemoteException's unwrapRemoteException method failed for
+PathIOException. (Yongjun Zhang via atm)
+
 Release 2.4.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathIOException.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathIOException.java?rev=1591182r1=1591181r2=1591182view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathIOException.java
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathIOException.java
 Wed Apr 30 03:24:05 2014
@@ -40,7 +40,7 @@ public class PathIOException extends IOE
*  @param path for the exception
*/
   public PathIOException(String path) {
-this(path, EIO, null);
+this(path, EIO);
   }
 
   /**
@@ -59,7 +59,8 @@ public class PathIOException extends IOE
* @param error custom string to use an the error text
*/
   public PathIOException(String path, String error) {
-this(path, error, null);
+super(error);
+this.path = path;
   }
 
   protected PathIOException(String path, String error, Throwable cause) {

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestPathExceptions.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestPathExceptions.java?rev=1591182r1=1591181r2=1591182view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestPathExceptions.java
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestPathExceptions.java
 Wed Apr 30 03:24:05 2014
@@ -19,11 +19,13 @@
 package org.apache.hadoop.fs.shell;
 
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
 
 import java.io.IOException;
 
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.PathIOException;
+import org.apache.hadoop.ipc.RemoteException;
 import org.junit.Test;
 
 public class TestPathExceptions {
@@ -52,5 +54,25 @@ public class TestPathExceptions {
 assertEquals(new Path(path), pe.getPath());
     assertEquals("`" + path + "': " + error, pe.getMessage());
   }
-  
+
+  @Test
+  public void testRemoteExceptionUnwrap() throws Exception {
+PathIOException pe;
+RemoteException re;
+IOException ie;
+
+pe = new PathIOException(path);
+    re = new RemoteException(PathIOException.class.getName(), "test constructor1");
+ie = re.unwrapRemoteException();
+assertTrue(ie instanceof PathIOException);
+ie = re.unwrapRemoteException(PathIOException.class);
+assertTrue(ie instanceof PathIOException);
+
+    pe = new PathIOException(path, "constructor2");
+    re = new RemoteException(PathIOException.class.getName(), "test constructor2");
+ie = re.unwrapRemoteException();
+assertTrue(ie instanceof PathIOException);
+ie = re.unwrapRemoteException(PathIOException.class);
+assertTrue(ie instanceof PathIOException);
+  }
 }




svn commit: r1589915 - in /hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src: main/java/org/apache/hadoop/oncrpc/ test/java/org/apache/hadoop/oncrpc/

2014-04-24 Thread atm
Author: atm
Date: Fri Apr 25 00:19:34 2014
New Revision: 1589915

URL: http://svn.apache.org/r1589915
Log:
HDFS-6281. Provide option to use the NFS Gateway without having to use the 
Hadoop portmapper. Contributed by Aaron T. Myers.

Modified:

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleUdpClient.java

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/oncrpc/TestFrameDecoder.java

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java?rev=1589915r1=1589914r2=1589915view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java
 Fri Apr 25 00:19:34 2014
@@ -18,6 +18,7 @@
 package org.apache.hadoop.oncrpc;
 
 import java.io.IOException;
+import java.net.DatagramSocket;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -46,6 +47,12 @@ public abstract class RpcProgram extends
   private final int highProgVersion;
   
   /**
+   * If not null, this will be used as the socket to use to connect to the
+   * system portmap daemon when registering this RPC server program.
+   */
+  private final DatagramSocket registrationSocket;
+  
+  /**
* Constructor
* 
* @param program program name
@@ -56,13 +63,15 @@ public abstract class RpcProgram extends
* @param highProgVersion highest version of the specification supported
*/
   protected RpcProgram(String program, String host, int port, int progNumber,
-  int lowProgVersion, int highProgVersion) {
+  int lowProgVersion, int highProgVersion,
+  DatagramSocket registrationSocket) {
 this.program = program;
 this.host = host;
 this.port = port;
 this.progNumber = progNumber;
 this.lowProgVersion = lowProgVersion;
 this.highProgVersion = highProgVersion;
+this.registrationSocket = registrationSocket;
   }
 
   /**
@@ -105,14 +114,14 @@ public abstract class RpcProgram extends
   protected void register(PortmapMapping mapEntry, boolean set) {
 XDR mappingRequest = PortmapRequest.create(mapEntry, set);
 SimpleUdpClient registrationClient = new SimpleUdpClient(host, RPCB_PORT,
-mappingRequest);
+mappingRequest, registrationSocket);
 try {
   registrationClient.run();
 } catch (IOException e) {
       String request = set ? "Registration" : "Unregistration";
       LOG.error(request + " failure with " + host + ":" + port
-          + ", portmap entry: " + mapEntry);
-      throw new RuntimeException(request + " failure");
+          + ", portmap entry: " + mapEntry, e);
+      throw new RuntimeException(request + " failure", e);
 }
   }
 

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleUdpClient.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleUdpClient.java?rev=1589915r1=1589914r2=1589915view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleUdpClient.java
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleUdpClient.java
 Fri Apr 25 00:19:34 2014
@@ -27,43 +27,56 @@ import java.util.Arrays;
  * A simple UDP based RPC client which just sends one request to a server.
  */
 public class SimpleUdpClient {
+  
   protected final String host;
   protected final int port;
   protected final XDR request;
   protected final boolean oneShot;
+  protected final DatagramSocket clientSocket;
 
-  public SimpleUdpClient(String host, int port, XDR request) {
-this(host, port, request, true);
+  public SimpleUdpClient(String host, int port, XDR request,
+  DatagramSocket clientSocket) {
+this(host, port, request, true, clientSocket);
   }
 
-  public SimpleUdpClient(String host, int port, XDR request, Boolean oneShot) {
+  public SimpleUdpClient(String host, int port, XDR request, Boolean oneShot,
+  DatagramSocket clientSocket) {
 this.host = host;
 this.port = port;
 this.request = request;
 this.oneShot = oneShot;
+this.clientSocket = clientSocket;
   }
 
   public void run() throws
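A standalone sketch (not the NFS gateway's code) of why a caller-supplied DatagramSocket is useful here: the caller can create and bind the socket itself, for example to a reserved source port while still running as root, before handing it to a one-shot UDP client, rather than letting the client pick an arbitrary ephemeral port at send time.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class OneShotUdpSend {
  public static void main(String[] args) throws Exception {
    byte[] payload = "ping".getBytes("UTF-8");
    // Port 0 keeps this demo unprivileged; registering with a system portmap daemon
    // would instead bind a port below 1024, which requires root.
    DatagramSocket socket = new DatagramSocket(0);
    try {
      DatagramPacket packet = new DatagramPacket(payload, payload.length,
          InetAddress.getLoopbackAddress(), 9 /* discard port; nothing needs to listen */);
      socket.send(packet);
      System.out.println("sent from local port " + socket.getLocalPort());
    } finally {
      socket.close();
    }
  }
}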

svn commit: r1584227 - in /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src: main/java/org/apache/hadoop/tools/ main/java/org/apache/hadoop/tools/mapred/ test/java/org/apache/hadoop/tools/ test/java

2014-04-02 Thread atm
Author: atm
Date: Thu Apr  3 00:32:25 2014
New Revision: 1584227

URL: http://svn.apache.org/r1584227
Log:
HADOOP-10459. distcp V2 doesn't preserve root dir's attributes when -p is 
specified. Contributed by Yongjun Zhang.

Added:

hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpSystem.java
Modified:

hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java

hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java

hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java

hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java

hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java

hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestCopyListing.java

hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpViewFs.java

hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestFileBasedCopyListing.java

hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestGlobbedCopyListing.java

hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestIntegration.java

hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java

hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyCommitter.java

hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestUniformSizeInputFormat.java

hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/lib/TestDynamicInputFormat.java

Modified: 
hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java?rev=1584227r1=1584226r2=1584227view=diff
==
--- 
hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
 (original)
+++ 
hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
 Thu Apr  3 00:32:25 2014
@@ -40,6 +40,8 @@ import org.apache.hadoop.util.ToolRunner
 import java.io.IOException;
 import java.util.Random;
 
+import com.google.common.annotations.VisibleForTesting;
+
 /**
  * DistCp is the main driver-class for DistCpV2.
  * For command-line use, DistCp::main() orchestrates the parsing of 
command-line
@@ -87,7 +89,8 @@ public class DistCp extends Configured i
   /**
* To be used with the ToolRunner. Not for public consumption.
*/
-  private DistCp() {}
+  @VisibleForTesting
+  public DistCp() {}
 
   /**
* Implementation of Tool::run(). Orchestrates the copy of source file(s)
@@ -105,7 +108,7 @@ public class DistCp extends Configured i
 
 try {
   inputOptions = (OptionsParser.parse(argv));
-
+  setTargetPathExists();
       LOG.info("Input Options: " + inputOptions);
 } catch (Throwable e) {
       LOG.error("Invalid arguments: ", e);
@@ -170,6 +173,18 @@ public class DistCp extends Configured i
   }
 
   /**
+   * Set targetPathExists in both inputOptions and job config,
+   * for the benefit of CopyCommitter
+   */
+  private void setTargetPathExists() throws IOException {
+Path target = inputOptions.getTargetPath();
+FileSystem targetFS = target.getFileSystem(getConf());
+boolean targetExists = targetFS.exists(target);
+inputOptions.setTargetPathExists(targetExists);
+getConf().setBoolean(DistCpConstants.CONF_LABEL_TARGET_PATH_EXISTS, 
+targetExists);
+  }
+  /**
* Create Job object for submitting it, with all the configuration
*
* @return Reference to job object.

Modified: 
hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java?rev=1584227r1=1584226r2=1584227view=diff
==
--- 
hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
 (original)
+++ 
hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
 Thu Apr  3 00:32:25 2014
@@ -74,6 +74,9 @@ public class DistCpConstants {
*/
   public static final String CONF_LABEL_TARGET_FINAL_PATH = "distcp.target.final.path";
 
+  /* Boolean to indicate whether the target of distcp exists
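A hedged sketch of the pre-flight probe the driver now performs (everything below other than the FileSystem calls is illustrative): record whether the target existed before the job ran, so the committer can later decide whether the root directory's attributes should be preserved onto a directory DistCp itself created.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TargetExistsProbe {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path target = new Path(args.length > 0 ? args[0] : "/tmp/distcp-target");
    FileSystem targetFS = target.getFileSystem(conf);
    boolean targetExists = targetFS.exists(target);     // same probe DistCp#setTargetPathExists makes
    // The real patch stores this under DistCpConstants.CONF_LABEL_TARGET_PATH_EXISTS;
    // the literal key below is only a placeholder for this sketch.
    conf.setBoolean("sketch.target.path.exists", targetExists);
    System.out.println(target + " existed before the copy: " + targetExists);
  }
}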

svn commit: r1584227 - /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

2014-04-02 Thread atm
Author: atm
Date: Thu Apr  3 00:32:25 2014
New Revision: 1584227

URL: http://svn.apache.org/r1584227
Log:
HADOOP-10459. distcp V2 doesn't preserve root dir's attributes when -p is 
specified. Contributed by Yongjun Zhang.

Modified:
hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

Modified: hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1584227r1=1584226r2=1584227view=diff
==
--- hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt Thu Apr 
 3 00:32:25 2014
@@ -338,6 +338,9 @@ Release 2.5.0 - UNRELEASED
 HADOOP-10414. Incorrect property name for RefreshUserMappingProtocol in
 hadoop-policy.xml. (Joey Echeverria via atm)
 
+HADOOP-10459. distcp V2 doesn't preserve root dir's attributes when -p is
+specified. (Yongjun Zhang via atm)
+
 Release 2.4.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES




svn commit: r1584232 - in /hadoop/common/branches/branch-2/hadoop-common-project: ./ hadoop-auth/ hadoop-common/ hadoop-common/CHANGES.txt hadoop-common/src/ hadoop-common/src/main/docs/ hadoop-common

2014-04-02 Thread atm
Author: atm
Date: Thu Apr  3 00:49:52 2014
New Revision: 1584232

URL: http://svn.apache.org/r1584232
Log:
MAPREDUCE-5014. Extending DistCp through a custom CopyListing is not possible. 
(Contributed by Srikanth Sundarrajan)

Modified:
hadoop/common/branches/branch-2/hadoop-common-project/   (props changed)
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-auth/   (props 
changed)
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/   
(props changed)

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
  (props changed)
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/   
(props changed)

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/docs/
   (props changed)

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/
   (props changed)

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/core/
   (props changed)

Propchange: hadoop/common/branches/branch-2/hadoop-common-project/
--
  Merged /hadoop/common/trunk/hadoop-common-project:r1459690

Propchange: hadoop/common/branches/branch-2/hadoop-common-project/hadoop-auth/
--
  Merged /hadoop/common/trunk/hadoop-common-project/hadoop-auth:r1459690

Propchange: hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/
--
  Merged /hadoop/common/trunk/hadoop-common-project/hadoop-common:r1459690

Propchange: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt
--
  Merged 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt:r1459690

Propchange: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/
--
  Merged /hadoop/common/trunk/hadoop-common-project/hadoop-common/src:r1459690

Propchange: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/docs/
--
  Merged 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs:r1459690

Propchange: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/
--
  Merged 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java:r1459690

Propchange: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/core/
--
  Merged 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/core:r1459690




svn commit: r1584232 - in /hadoop/common/branches/branch-2: ./ hadoop-project/ hadoop-project/src/site/ hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/ hadoop-tools/hadoop-distcp/src

2014-04-02 Thread atm
Author: atm
Date: Thu Apr  3 00:49:52 2014
New Revision: 1584232

URL: http://svn.apache.org/r1584232
Log:
MAPREDUCE-5014. Extending DistCp through a custom CopyListing is not possible. 
(Contributed by Srikanth Sundarrajan)

Modified:
hadoop/common/branches/branch-2/   (props changed)
hadoop/common/branches/branch-2/hadoop-project/   (props changed)
hadoop/common/branches/branch-2/hadoop-project/pom.xml   (props changed)
hadoop/common/branches/branch-2/hadoop-project/src/site/   (props changed)

hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyListing.java

hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java

hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java

hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java

hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestCopyListing.java

hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestIntegration.java

Propchange: hadoop/common/branches/branch-2/
--
  Merged /hadoop/common/trunk:r1459690

Propchange: hadoop/common/branches/branch-2/hadoop-project/
--
  Merged /hadoop/common/trunk/hadoop-project:r1459690

Propchange: hadoop/common/branches/branch-2/hadoop-project/pom.xml
--
  Merged /hadoop/common/trunk/hadoop-project/pom.xml:r1459690

Propchange: hadoop/common/branches/branch-2/hadoop-project/src/site/
--
  Merged /hadoop/common/trunk/hadoop-project/src/site:r1459690

Modified: 
hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyListing.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyListing.java?rev=1584232r1=1584231r2=1584232view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyListing.java
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyListing.java
 Thu Apr  3 00:49:52 2014
@@ -30,6 +30,7 @@ import org.apache.hadoop.tools.util.Dist
 import org.apache.hadoop.security.Credentials;
 
 import java.io.IOException;
+import java.lang.reflect.Constructor;
 
 /**
  * The CopyListing abstraction is responsible for how the list of
@@ -193,14 +194,34 @@ public abstract class CopyListing extend
* @param credentials Credentials object on which the FS delegation tokens 
are cached
* @param options The input Options, to help choose the appropriate 
CopyListing Implementation.
* @return An instance of the appropriate CopyListing implementation.
+   * @throws java.io.IOException - Exception if any
*/
   public static CopyListing getCopyListing(Configuration configuration,
Credentials credentials,
-   DistCpOptions options) {
-if (options.getSourceFileListing() == null) {
-  return new GlobbedCopyListing(configuration, credentials);
-} else {
-  return new FileBasedCopyListing(configuration, credentials);
+   DistCpOptions options)
+  throws IOException {
+
+    String copyListingClassName = configuration.get(DistCpConstants.
+        CONF_LABEL_COPY_LISTING_CLASS, "");
+    Class<? extends CopyListing> copyListingClass;
+try {
+  if (! copyListingClassName.isEmpty()) {
+copyListingClass = configuration.getClass(DistCpConstants.
+CONF_LABEL_COPY_LISTING_CLASS, GlobbedCopyListing.class,
+CopyListing.class);
+  } else {
+if (options.getSourceFileListing() == null) {
+copyListingClass = GlobbedCopyListing.class;
+} else {
+copyListingClass = FileBasedCopyListing.class;
+}
+  }
+  copyListingClassName = copyListingClass.getName();
+  Constructor<? extends CopyListing> constructor = copyListingClass.
+  getDeclaredConstructor(Configuration.class, Credentials.class);
+  return constructor.newInstance(configuration, credentials);
+} catch (Exception e) {
+  throw new IOException("Unable to instantiate " + copyListingClassName, 
e);
 }
   }
 

Modified: 
hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
URL: 
http

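For illustration, a minimal sketch of how a job could use the new factory to plug in its own listing strategy. Only DistCpConstants.CONF_LABEL_COPY_LISTING_CLASS and CopyListing.getCopyListing() come from the change above; the class name com.example.MyFilteredCopyListing is hypothetical, and such a class would have to keep the (Configuration, Credentials) constructor that the factory looks up reflectively.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.Credentials;
    import org.apache.hadoop.tools.CopyListing;
    import org.apache.hadoop.tools.DistCpConstants;

    // Sketch only: select a custom CopyListing implementation by name.
    Configuration conf = new Configuration();
    conf.set(DistCpConstants.CONF_LABEL_COPY_LISTING_CLASS,
        "com.example.MyFilteredCopyListing");   // hypothetical implementation
    // options would be the DistCpOptions parsed for the job.
    CopyListing listing =
        CopyListing.getCopyListing(conf, new Credentials(), options);
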
svn commit: r1584233 - /hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt

2014-04-02 Thread atm
Author: atm
Date: Thu Apr  3 00:56:12 2014
New Revision: 1584233

URL: http://svn.apache.org/r1584233
Log:
HADOOP-10459. distcp V2 doesn't preserve root dir's attributes when -p is 
specified. Contributed by Yongjun Zhang.

Modified:

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1584233r1=1584232r2=1584233view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
Thu Apr  3 00:56:12 2014
@@ -33,6 +33,9 @@ Release 2.5.0 - UNRELEASED
 HADOOP-10414. Incorrect property name for RefreshUserMappingProtocol in
 hadoop-policy.xml. (Joey Echeverria via atm)
 
+HADOOP-10459. distcp V2 doesn't preserve root dir's attributes when -p is
+specified. (Yongjun Zhang via atm)
+
 Release 2.4.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES




svn commit: r1584233 - in /hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src: main/java/org/apache/hadoop/tools/ main/java/org/apache/hadoop/tools/mapred/ test/java/org/apache/hadoop/tool

2014-04-02 Thread atm
Author: atm
Date: Thu Apr  3 00:56:12 2014
New Revision: 1584233

URL: http://svn.apache.org/r1584233
Log:
HADOOP-10459. distcp V2 doesn't preserve root dir's attributes when -p is 
specified. Contributed by Yongjun Zhang.

Added:

hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpSystem.java
Modified:

hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java

hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java

hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java

hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java

hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java

hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestCopyListing.java

hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpViewFs.java

hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestFileBasedCopyListing.java

hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestGlobbedCopyListing.java

hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestIntegration.java

hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java

hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyCommitter.java

hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestUniformSizeInputFormat.java

hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/lib/TestDynamicInputFormat.java

Modified: 
hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java?rev=1584233r1=1584232r2=1584233view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
 Thu Apr  3 00:56:12 2014
@@ -40,6 +40,8 @@ import org.apache.hadoop.util.ToolRunner
 import java.io.IOException;
 import java.util.Random;
 
+import com.google.common.annotations.VisibleForTesting;
+
 /**
  * DistCp is the main driver-class for DistCpV2.
  * For command-line use, DistCp::main() orchestrates the parsing of 
command-line
@@ -87,7 +89,8 @@ public class DistCp extends Configured i
   /**
* To be used with the ToolRunner. Not for public consumption.
*/
-  private DistCp() {}
+  @VisibleForTesting
+  public DistCp() {}
 
   /**
* Implementation of Tool::run(). Orchestrates the copy of source file(s)
@@ -105,7 +108,7 @@ public class DistCp extends Configured i
 
 try {
   inputOptions = (OptionsParser.parse(argv));
-
+  setTargetPathExists();
   LOG.info("Input Options: " + inputOptions);
 } catch (Throwable e) {
   LOG.error("Invalid arguments: ", e);
@@ -170,6 +173,18 @@ public class DistCp extends Configured i
   }
 
   /**
+   * Set targetPathExists in both inputOptions and job config,
+   * for the benefit of CopyCommitter
+   */
+  private void setTargetPathExists() throws IOException {
+Path target = inputOptions.getTargetPath();
+FileSystem targetFS = target.getFileSystem(getConf());
+boolean targetExists = targetFS.exists(target);
+inputOptions.setTargetPathExists(targetExists);
+getConf().setBoolean(DistCpConstants.CONF_LABEL_TARGET_PATH_EXISTS, 
+targetExists);
+  }
+  /**
* Create Job object for submitting it, with all the configuration
*
* @return Reference to job object.

Modified: 
hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java?rev=1584233r1=1584232r2=1584233view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-tools/hadoop-distcp/src/main/java/org/apache

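A small, hedged sketch of what the recorded flag enables on the committer side: task-side code can now tell whether the target directory pre-existed and, if it did not, apply the source root's attributes to it when -p is in effect. Only DistCpConstants.CONF_LABEL_TARGET_PATH_EXISTS comes from the patch; the default value and the surrounding logic here are assumptions for illustration.

    // jobConf is the job's Configuration as seen by the committer.
    boolean targetExisted = jobConf.getBoolean(
        DistCpConstants.CONF_LABEL_TARGET_PATH_EXISTS, true);
    if (!targetExisted) {
      // The target root was created by this DistCp run, so the source root's
      // preserved attributes (permissions, ownership, ...) should be applied to it.
    }
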
svn commit: r1583729 - in /hadoop/common/trunk/hadoop-common-project/hadoop-common: CHANGES.txt src/main/conf/hadoop-policy.xml

2014-04-01 Thread atm
Author: atm
Date: Tue Apr  1 16:19:03 2014
New Revision: 1583729

URL: http://svn.apache.org/r1583729
Log:
HADOOP-10414. Incorrect property name for RefreshUserMappingProtocol in 
hadoop-policy.xml. Contributed by Joey Echeverria.

Modified:
hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-policy.xml

Modified: hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1583729r1=1583728r2=1583729view=diff
==
--- hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt Tue Apr 
 1 16:19:03 2014
@@ -335,6 +335,9 @@ Release 2.5.0 - UNRELEASED
 removes unused FileContext.getFileStatus(..) and fixes various javac
 warnings.  (szetszwo)
 
+HADOOP-10414. Incorrect property name for RefreshUserMappingProtocol in
+hadoop-policy.xml. (Joey Echeverria via atm)
+
 Release 2.4.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-policy.xml
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-policy.xml?rev=1583729r1=1583728r2=1583729view=diff
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-policy.xml
 (original)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-policy.xml
 Tue Apr  1 16:19:03 2014
@@ -85,7 +85,7 @@
   </property>
 
   <property>
-    <name>security.refresh.usertogroups.mappings.protocol.acl</name>
+    <name>security.refresh.user.mappings.protocol.acl</name>
     <value>*</value>
     <description>ACL for RefreshUserMappingsProtocol. Used to refresh
     users mappings. The ACL is a comma-separated list of user and




svn commit: r1583730 - in /hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common: CHANGES.txt src/main/conf/hadoop-policy.xml

2014-04-01 Thread atm
Author: atm
Date: Tue Apr  1 16:22:12 2014
New Revision: 1583730

URL: http://svn.apache.org/r1583730
Log:
HADOOP-10414. Incorrect property name for RefreshUserMappingProtocol in 
hadoop-policy.xml. Contributed by Joey Echeverria.

Modified:

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/conf/hadoop-policy.xml

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1583730r1=1583729r2=1583730view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
Tue Apr  1 16:22:12 2014
@@ -30,6 +30,9 @@ Release 2.5.0 - UNRELEASED
 
 HADOOP-10439. Fix compilation error in branch-2 after HADOOP-10426. 
(wheat9)
 
+HADOOP-10414. Incorrect property name for RefreshUserMappingProtocol in
+hadoop-policy.xml. (Joey Echeverria via atm)
+
 Release 2.4.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/conf/hadoop-policy.xml
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/conf/hadoop-policy.xml?rev=1583730r1=1583729r2=1583730view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/conf/hadoop-policy.xml
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/conf/hadoop-policy.xml
 Tue Apr  1 16:22:12 2014
@@ -85,7 +85,7 @@
   </property>
 
   <property>
-    <name>security.refresh.usertogroups.mappings.protocol.acl</name>
+    <name>security.refresh.user.mappings.protocol.acl</name>
     <value>*</value>
     <description>ACL for RefreshUserMappingsProtocol. Used to refresh
     users mappings. The ACL is a comma-separated list of user and




svn commit: r1580666 - in /hadoop/common/trunk/hadoop-common-project/hadoop-common: CHANGES.txt src/main/java/org/apache/hadoop/security/SaslRpcClient.java

2014-03-23 Thread atm
Author: atm
Date: Mon Mar 24 00:00:57 2014
New Revision: 1580666

URL: http://svn.apache.org/r1580666
Log:
HADOOP-10418. SaslRpcClient should not assume that remote principals are in the 
default_realm. Contributed by Aaron T. Myers.

Modified:
hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java

Modified: hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1580666r1=1580665r2=1580666view=diff
==
--- hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt Mon Mar 
24 00:00:57 2014
@@ -318,6 +318,9 @@ Release 2.5.0 - UNRELEASED
 HADOOP-10378. Typo in help printed by hdfs dfs -help.
 (Mit Desai via suresh)
 
+HADOOP-10418. SaslRpcClient should not assume that remote principals are in
+the default_realm. (atm)
+
 Release 2.4.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java?rev=1580666r1=1580665r2=1580666view=diff
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java
 (original)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java
 Mon Mar 24 00:00:57 2014
@@ -300,7 +300,9 @@ public class SaslRpcClient {
 }
 // construct server advertised principal for comparision
 String serverPrincipal = new KerberosPrincipal(
-authType.getProtocol() + "/" + authType.getServerId()).getName();
+authType.getProtocol() + "/" + authType.getServerId(),
+KerberosPrincipal.KRB_NT_SRV_HST).getName();
+
 boolean isPrincipalValid = false;
 
 // use the pattern if defined




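The behavioural difference hinges on the KerberosPrincipal name type, a plain JDK API. A minimal sketch with made-up host and realm values: with the one-argument constructor the realm is always taken from default_realm in krb5.conf, while KRB_NT_SRV_HST allows the realm to be derived from the host portion via the [domain_realm] mapping, which is what clients talking to a server in another realm need.

    import javax.security.auth.kerberos.KerberosPrincipal;

    // Old behaviour: realm forced to default_realm.
    KerberosPrincipal fromDefaultRealm =
        new KerberosPrincipal("nn/nn1.example.com");
    // New behaviour: realm may be resolved from the host's domain_realm mapping.
    KerberosPrincipal fromHostMapping =
        new KerberosPrincipal("nn/nn1.example.com", KerberosPrincipal.KRB_NT_SRV_HST);
    System.out.println(fromDefaultRealm.getName());
    System.out.println(fromHostMapping.getName());
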
svn commit: r1580667 - in /hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common: CHANGES.txt src/main/java/org/apache/hadoop/security/SaslRpcClient.java

2014-03-23 Thread atm
Author: atm
Date: Mon Mar 24 00:02:46 2014
New Revision: 1580667

URL: http://svn.apache.org/r1580667
Log:
HADOOP-10418. SaslRpcClient should not assume that remote principals are in the 
default_realm. Contributed by Aaron T. Myers.

Modified:

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1580667r1=1580666r2=1580667view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
Mon Mar 24 00:02:46 2014
@@ -15,6 +15,9 @@ Release 2.5.0 - UNRELEASED
 HADOOP-10378. Typo in help printed by hdfs dfs -help.
 (Mit Desai via suresh)
 
+HADOOP-10418. SaslRpcClient should not assume that remote principals are in
+the default_realm. (atm)
+
 Release 2.4.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java?rev=1580667r1=1580666r2=1580667view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java
 Mon Mar 24 00:02:46 2014
@@ -300,7 +300,9 @@ public class SaslRpcClient {
 }
 // construct server advertised principal for comparision
 String serverPrincipal = new KerberosPrincipal(
-authType.getProtocol() + "/" + authType.getServerId()).getName();
+authType.getProtocol() + "/" + authType.getServerId(),
+KerberosPrincipal.KRB_NT_SRV_HST).getName();
+
 boolean isPrincipalValid = false;
 
 // use the pattern if defined




svn commit: r1574172 - /hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml

2014-03-04 Thread atm
Author: atm
Date: Tue Mar  4 18:38:18 2014
New Revision: 1574172

URL: http://svn.apache.org/r1574172
Log:
Update an affiliation. 

Modified:
hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml

Modified: hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml?rev=1574172r1=1574171r2=1574172view=diff
==
--- hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml 
(original)
+++ hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml Tue 
Mar  4 18:38:18 2014
@@ -291,7 +291,7 @@
<tr>
  <td>stack</td>
  <td>Michael Stack</td>
- <td>StumbleUpon</td>
+ <td>Cloudera</td>
  <td>HBase</td>
  <td>-8</td>
</tr>




svn commit: r1572235 - in /hadoop/common/trunk/hadoop-common-project/hadoop-common: ./ src/main/java/org/apache/hadoop/fs/s3native/ src/main/resources/ src/test/java/org/apache/hadoop/fs/s3native/ src

2014-02-26 Thread atm
Author: atm
Date: Wed Feb 26 20:28:41 2014
New Revision: 1572235

URL: http://svn.apache.org/r1572235
Log:
HADOOP-9454. Support multipart uploads for s3native. Contributed by Jordan 
Mendelson and Akira AJISAKA.

Added:

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3native/TestJets3tNativeFileSystemStore.java

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/jets3t.properties
Modified:
hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml

Modified: hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1572235r1=1572234r2=1572235view=diff
==
--- hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt Wed Feb 
26 20:28:41 2014
@@ -343,6 +343,9 @@ Release 2.4.0 - UNRELEASED
 HADOOP-10348. Deprecate hadoop.ssl.configuration in branch-2, and remove
 it in trunk. (Haohui Mai via jing9)
 
+HADOOP-9454. Support multipart uploads for s3native. (Jordan Mendelson and
+Akira AJISAKA via atm)
+
   OPTIMIZATIONS
 
   BUG FIXES

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java?rev=1572235r1=1572234r2=1572235view=diff
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java
 (original)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java
 Wed Feb 26 20:28:41 2014
@@ -28,6 +28,9 @@ import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.io.InputStream;
 import java.net.URI;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -41,10 +44,13 @@ import org.jets3t.service.S3ServiceExcep
 import org.jets3t.service.ServiceException;
 import org.jets3t.service.StorageObjectsChunk;
 import org.jets3t.service.impl.rest.httpclient.RestS3Service;
+import org.jets3t.service.model.MultipartPart;
+import org.jets3t.service.model.MultipartUpload;
 import org.jets3t.service.model.S3Bucket;
 import org.jets3t.service.model.S3Object;
 import org.jets3t.service.model.StorageObject;
 import org.jets3t.service.security.AWSCredentials;
+import org.jets3t.service.utils.MultipartUtils;
 
 @InterfaceAudience.Private
 @InterfaceStability.Unstable
@@ -52,6 +58,12 @@ class Jets3tNativeFileSystemStore implem
   
   private S3Service s3Service;
   private S3Bucket bucket;
+
+  private long multipartBlockSize;
+  private boolean multipartEnabled;
+  private long multipartCopyBlockSize;
+  static final long MAX_PART_SIZE = (long)5 * 1024 * 1024 * 1024;
+  
   public static final Log LOG =
   LogFactory.getLog(Jets3tNativeFileSystemStore.class);
 
@@ -67,13 +79,27 @@ class Jets3tNativeFileSystemStore implem
 } catch (S3ServiceException e) {
   handleS3ServiceException(e);
 }
+multipartEnabled =
+conf.getBoolean("fs.s3n.multipart.uploads.enabled", false);
+multipartBlockSize = Math.min(
+conf.getLong("fs.s3n.multipart.uploads.block.size", 64 * 1024 * 1024),
+MAX_PART_SIZE);
+multipartCopyBlockSize = Math.min(
+conf.getLong("fs.s3n.multipart.copy.block.size", MAX_PART_SIZE),
+MAX_PART_SIZE);
+
 bucket = new S3Bucket(uri.getHost());
   }
   
   @Override
   public void storeFile(String key, File file, byte[] md5Hash)
 throws IOException {
-
+
+if (multipartEnabled && file.length() >= multipartBlockSize) {
+  storeLargeFile(key, file, md5Hash);
+  return;
+}
+
 BufferedInputStream in = null;
 try {
   in = new BufferedInputStream(new FileInputStream(file));
@@ -98,6 +124,31 @@ class Jets3tNativeFileSystemStore implem
 }
   }
 
+  public void storeLargeFile(String key, File file, byte[] md5Hash)
+  throws IOException {
+S3Object object = new S3Object(key);
+object.setDataInputFile(file);
+object.setContentType("binary/octet-stream");
+object.setContentLength(file.length());
+if (md5Hash != null) {
+  object.setMd5Hash(md5Hash);
+}
+
+ListStorageObject

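A hedged configuration sketch for the new code path: the three property names are the ones read in the diff above, while the values chosen here are only examples. Files at or above fs.s3n.multipart.uploads.block.size are routed through storeLargeFile, and both block sizes are capped internally at MAX_PART_SIZE (5 GB).

    Configuration conf = new Configuration();
    conf.setBoolean("fs.s3n.multipart.uploads.enabled", true);
    // Upload files of 64 MB or more in parts.
    conf.setLong("fs.s3n.multipart.uploads.block.size", 64L * 1024 * 1024);
    // Part size used for server-side copies; capped at 5 GB internally.
    conf.setLong("fs.s3n.multipart.copy.block.size", 5L * 1024 * 1024 * 1024);
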
svn commit: r1572237 - in /hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common: ./ src/main/java/org/apache/hadoop/fs/s3native/ src/main/resources/ src/test/java/org/apache/hadoop/fs/s

2014-02-26 Thread atm
Author: atm
Date: Wed Feb 26 20:31:06 2014
New Revision: 1572237

URL: http://svn.apache.org/r1572237
Log:
HADOOP-9454. Support multipart uploads for s3native. Contributed by Jordan 
Mendelson and Akira AJISAKA.

Added:

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3native/TestJets3tNativeFileSystemStore.java

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/resources/jets3t.properties
Modified:

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1572237r1=1572236r2=1572237view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
Wed Feb 26 20:31:06 2014
@@ -37,6 +37,9 @@ Release 2.4.0 - UNRELEASED
 HADOOP-10348. Deprecate hadoop.ssl.configuration in branch-2, and remove
 it in trunk. (Haohui Mai via jing9)
 
+HADOOP-9454. Support multipart uploads for s3native. (Jordan Mendelson and
+Akira AJISAKA via atm)
+
   OPTIMIZATIONS
 
   BUG FIXES

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java?rev=1572237r1=1572236r2=1572237view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java
 Wed Feb 26 20:31:06 2014
@@ -28,6 +28,9 @@ import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.io.InputStream;
 import java.net.URI;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -41,10 +44,13 @@ import org.jets3t.service.S3ServiceExcep
 import org.jets3t.service.ServiceException;
 import org.jets3t.service.StorageObjectsChunk;
 import org.jets3t.service.impl.rest.httpclient.RestS3Service;
+import org.jets3t.service.model.MultipartPart;
+import org.jets3t.service.model.MultipartUpload;
 import org.jets3t.service.model.S3Bucket;
 import org.jets3t.service.model.S3Object;
 import org.jets3t.service.model.StorageObject;
 import org.jets3t.service.security.AWSCredentials;
+import org.jets3t.service.utils.MultipartUtils;
 
 @InterfaceAudience.Private
 @InterfaceStability.Unstable
@@ -52,6 +58,12 @@ class Jets3tNativeFileSystemStore implem
   
   private S3Service s3Service;
   private S3Bucket bucket;
+
+  private long multipartBlockSize;
+  private boolean multipartEnabled;
+  private long multipartCopyBlockSize;
+  static final long MAX_PART_SIZE = (long)5 * 1024 * 1024 * 1024;
+  
   public static final Log LOG =
   LogFactory.getLog(Jets3tNativeFileSystemStore.class);
 
@@ -67,13 +79,27 @@ class Jets3tNativeFileSystemStore implem
 } catch (S3ServiceException e) {
   handleS3ServiceException(e);
 }
+multipartEnabled =
+conf.getBoolean("fs.s3n.multipart.uploads.enabled", false);
+multipartBlockSize = Math.min(
+conf.getLong("fs.s3n.multipart.uploads.block.size", 64 * 1024 * 1024),
+MAX_PART_SIZE);
+multipartCopyBlockSize = Math.min(
+conf.getLong("fs.s3n.multipart.copy.block.size", MAX_PART_SIZE),
+MAX_PART_SIZE);
+
 bucket = new S3Bucket(uri.getHost());
   }
   
   @Override
   public void storeFile(String key, File file, byte[] md5Hash)
 throws IOException {
-
+
+if (multipartEnabled && file.length() >= multipartBlockSize) {
+  storeLargeFile(key, file, md5Hash);
+  return;
+}
+
 BufferedInputStream in = null;
 try {
   in = new BufferedInputStream(new FileInputStream(file));
@@ -98,6 +124,31 @@ class Jets3tNativeFileSystemStore implem
 }
   }
 
+  public void storeLargeFile(String key, File file, byte[] md5Hash)
+  throws IOException {
+S3Object object = new S3Object(key);
+object.setDataInputFile(file);
+object.setContentType

svn commit: r1572241 - in /hadoop/common/branches/branch-2.4/hadoop-common-project/hadoop-common: ./ src/main/java/org/apache/hadoop/fs/s3native/ src/main/resources/ src/test/java/org/apache/hadoop/fs

2014-02-26 Thread atm
Author: atm
Date: Wed Feb 26 20:34:27 2014
New Revision: 1572241

URL: http://svn.apache.org/r1572241
Log:
HADOOP-9454. Support multipart uploads for s3native. Contributed by Jordan 
Mendelson and Akira AJISAKA.

Added:

hadoop/common/branches/branch-2.4/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3native/TestJets3tNativeFileSystemStore.java

hadoop/common/branches/branch-2.4/hadoop-common-project/hadoop-common/src/test/resources/jets3t.properties
Modified:

hadoop/common/branches/branch-2.4/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/branches/branch-2.4/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java

hadoop/common/branches/branch-2.4/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml

Modified: 
hadoop/common/branches/branch-2.4/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2.4/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1572241r1=1572240r2=1572241view=diff
==
--- 
hadoop/common/branches/branch-2.4/hadoop-common-project/hadoop-common/CHANGES.txt
 (original)
+++ 
hadoop/common/branches/branch-2.4/hadoop-common-project/hadoop-common/CHANGES.txt
 Wed Feb 26 20:34:27 2014
@@ -22,6 +22,9 @@ Release 2.4.0 - UNRELEASED
 HADOOP-10348. Deprecate hadoop.ssl.configuration in branch-2, and remove
 it in trunk. (Haohui Mai via jing9)
 
+HADOOP-9454. Support multipart uploads for s3native. (Jordan Mendelson and
+Akira AJISAKA via atm)
+
   OPTIMIZATIONS
 
   BUG FIXES

Modified: 
hadoop/common/branches/branch-2.4/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2.4/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java?rev=1572241r1=1572240r2=1572241view=diff
==
--- 
hadoop/common/branches/branch-2.4/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java
 (original)
+++ 
hadoop/common/branches/branch-2.4/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java
 Wed Feb 26 20:34:27 2014
@@ -28,6 +28,9 @@ import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.io.InputStream;
 import java.net.URI;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -41,10 +44,13 @@ import org.jets3t.service.S3ServiceExcep
 import org.jets3t.service.ServiceException;
 import org.jets3t.service.StorageObjectsChunk;
 import org.jets3t.service.impl.rest.httpclient.RestS3Service;
+import org.jets3t.service.model.MultipartPart;
+import org.jets3t.service.model.MultipartUpload;
 import org.jets3t.service.model.S3Bucket;
 import org.jets3t.service.model.S3Object;
 import org.jets3t.service.model.StorageObject;
 import org.jets3t.service.security.AWSCredentials;
+import org.jets3t.service.utils.MultipartUtils;
 
 @InterfaceAudience.Private
 @InterfaceStability.Unstable
@@ -52,6 +58,12 @@ class Jets3tNativeFileSystemStore implem
   
   private S3Service s3Service;
   private S3Bucket bucket;
+
+  private long multipartBlockSize;
+  private boolean multipartEnabled;
+  private long multipartCopyBlockSize;
+  static final long MAX_PART_SIZE = (long)5 * 1024 * 1024 * 1024;
+  
   public static final Log LOG =
   LogFactory.getLog(Jets3tNativeFileSystemStore.class);
 
@@ -67,13 +79,27 @@ class Jets3tNativeFileSystemStore implem
 } catch (S3ServiceException e) {
   handleS3ServiceException(e);
 }
+multipartEnabled =
+conf.getBoolean("fs.s3n.multipart.uploads.enabled", false);
+multipartBlockSize = Math.min(
+conf.getLong("fs.s3n.multipart.uploads.block.size", 64 * 1024 * 1024),
+MAX_PART_SIZE);
+multipartCopyBlockSize = Math.min(
+conf.getLong("fs.s3n.multipart.copy.block.size", MAX_PART_SIZE),
+MAX_PART_SIZE);
+
 bucket = new S3Bucket(uri.getHost());
   }
   
   @Override
   public void storeFile(String key, File file, byte[] md5Hash)
 throws IOException {
-
+
+if (multipartEnabled && file.length() >= multipartBlockSize) {
+  storeLargeFile(key, file, md5Hash);
+  return;
+}
+
 BufferedInputStream in = null;
 try {
   in = new BufferedInputStream(new FileInputStream(file));
@@ -98,6 +124,31 @@ class Jets3tNativeFileSystemStore implem
 }
   }
 
+  public void storeLargeFile(String key, File file, byte[] md5Hash)
+  throws IOException {
+S3Object object = new S3Object(key);
+object.setDataInputFile(file

svn commit: r1570776 - in /hadoop/common/trunk/hadoop-common-project/hadoop-common: ./ src/main/java/org/apache/hadoop/ipc/ src/main/java/org/apache/hadoop/security/

2014-02-21 Thread atm
Author: atm
Date: Sat Feb 22 01:09:54 2014
New Revision: 1570776

URL: http://svn.apache.org/r1570776
Log:
HADOOP-10070. RPC client doesn't use per-connection conf to determine server's 
expected Kerberos principal name. Contributed by Aaron T. Myers.

Modified:
hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ClientCache.java

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java

Modified: hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1570776r1=1570775r2=1570776view=diff
==
--- hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt Sat Feb 
22 01:09:54 2014
@@ -406,6 +406,9 @@ Release 2.4.0 - UNRELEASED
 
 HADOOP-10355. Fix TestLoadGenerator#testLoadGenerator. (Haohui Mai via 
jing9)
 
+HADOOP-10070. RPC client doesn't use per-connection conf to determine
+server's expected Kerberos principal name. (atm)
+
 Release 2.3.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java?rev=1570776r1=1570775r2=1570776view=diff
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
 (original)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
 Sat Feb 22 01:09:54 2014
@@ -542,8 +542,11 @@ public class Client {
 
 private synchronized AuthMethod setupSaslConnection(final InputStream in2, 
 final OutputStream out2) throws IOException, InterruptedException {
+  // Do not use Client.conf here! We must use ConnectionId.conf, since the
+  // Client object is cached and shared between all RPC clients, even those
+  // for separate services.
   saslRpcClient = new SaslRpcClient(remoteId.getTicket(),
-  remoteId.getProtocol(), remoteId.getAddress(), conf);
+  remoteId.getProtocol(), remoteId.getAddress(), remoteId.conf);
   return saslRpcClient.saslConnect(in2, out2);
 }
 
@@ -1480,21 +1483,31 @@ public class Client {
 private final boolean doPing; //do we need to send ping message
 private final int pingInterval; // how often sends ping to the server in 
msecs
 private String saslQop; // here for testing
+private final Configuration conf; // used to get the expected kerberos 
principal name
 
ConnectionId(InetSocketAddress address, Class<?> protocol, 
- UserGroupInformation ticket, int rpcTimeout, int maxIdleTime, 
- RetryPolicy connectionRetryPolicy, int 
maxRetriesOnSocketTimeouts,
- boolean tcpNoDelay, boolean doPing, int pingInterval) {
+ UserGroupInformation ticket, int rpcTimeout,
+ RetryPolicy connectionRetryPolicy, Configuration conf) {
   this.protocol = protocol;
   this.address = address;
   this.ticket = ticket;
   this.rpcTimeout = rpcTimeout;
-  this.maxIdleTime = maxIdleTime;
   this.connectionRetryPolicy = connectionRetryPolicy;
-  this.maxRetriesOnSocketTimeouts = maxRetriesOnSocketTimeouts;
-  this.tcpNoDelay = tcpNoDelay;
-  this.doPing = doPing;
-  this.pingInterval = pingInterval;
+
+  this.maxIdleTime = conf.getInt(
+  CommonConfigurationKeysPublic.IPC_CLIENT_CONNECTION_MAXIDLETIME_KEY,
+  
CommonConfigurationKeysPublic.IPC_CLIENT_CONNECTION_MAXIDLETIME_DEFAULT);
+  this.maxRetriesOnSocketTimeouts = conf.getInt(
+  
CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY,
+  
CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_DEFAULT);
+  this.tcpNoDelay = conf.getBoolean(
+  CommonConfigurationKeysPublic.IPC_CLIENT_TCPNODELAY_KEY,
+  CommonConfigurationKeysPublic.IPC_CLIENT_TCPNODELAY_DEFAULT);
+  this.doPing = conf.getBoolean(
+  CommonConfigurationKeys.IPC_CLIENT_PING_KEY,
+  CommonConfigurationKeys.IPC_CLIENT_PING_DEFAULT);
+  this.pingInterval = (doPing ? Client.getPingInterval(conf) : 0);
+  this.conf = conf;
 }
 
 InetSocketAddress getAddress() {
@@ -1572,19 +1585,8 @@ public class Client {
 max, retryInterval

svn commit: r1570777 - in /hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common: ./ src/main/java/org/apache/hadoop/ipc/ src/main/java/org/apache/hadoop/security/

2014-02-21 Thread atm
Author: atm
Date: Sat Feb 22 01:12:13 2014
New Revision: 1570777

URL: http://svn.apache.org/r1570777
Log:
HADOOP-10070. RPC client doesn't use per-connection conf to determine server's 
expected Kerberos principal name. Contributed by Aaron T. Myers.

Modified:

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ClientCache.java

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1570777r1=1570776r2=1570777view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
Sat Feb 22 01:12:13 2014
@@ -69,6 +69,9 @@ Release 2.4.0 - UNRELEASED
 
 HADOOP-10355. Fix TestLoadGenerator#testLoadGenerator. (Haohui Mai via 
jing9)
 
+HADOOP-10070. RPC client doesn't use per-connection conf to determine
+server's expected Kerberos principal name. (atm)
+
 Release 2.3.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java?rev=1570777r1=1570776r2=1570777view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
 Sat Feb 22 01:12:13 2014
@@ -542,8 +542,11 @@ public class Client {
 
 private synchronized AuthMethod setupSaslConnection(final InputStream in2, 
 final OutputStream out2) throws IOException, InterruptedException {
+  // Do not use Client.conf here! We must use ConnectionId.conf, since the
+  // Client object is cached and shared between all RPC clients, even those
+  // for separate services.
   saslRpcClient = new SaslRpcClient(remoteId.getTicket(),
-  remoteId.getProtocol(), remoteId.getAddress(), conf);
+  remoteId.getProtocol(), remoteId.getAddress(), remoteId.conf);
   return saslRpcClient.saslConnect(in2, out2);
 }
 
@@ -1480,21 +1483,31 @@ public class Client {
 private final boolean doPing; //do we need to send ping message
 private final int pingInterval; // how often sends ping to the server in 
msecs
 private String saslQop; // here for testing
+private final Configuration conf; // used to get the expected kerberos 
principal name
 
ConnectionId(InetSocketAddress address, Class<?> protocol, 
- UserGroupInformation ticket, int rpcTimeout, int maxIdleTime, 
- RetryPolicy connectionRetryPolicy, int 
maxRetriesOnSocketTimeouts,
- boolean tcpNoDelay, boolean doPing, int pingInterval) {
+ UserGroupInformation ticket, int rpcTimeout,
+ RetryPolicy connectionRetryPolicy, Configuration conf) {
   this.protocol = protocol;
   this.address = address;
   this.ticket = ticket;
   this.rpcTimeout = rpcTimeout;
-  this.maxIdleTime = maxIdleTime;
   this.connectionRetryPolicy = connectionRetryPolicy;
-  this.maxRetriesOnSocketTimeouts = maxRetriesOnSocketTimeouts;
-  this.tcpNoDelay = tcpNoDelay;
-  this.doPing = doPing;
-  this.pingInterval = pingInterval;
+
+  this.maxIdleTime = conf.getInt(
+  CommonConfigurationKeysPublic.IPC_CLIENT_CONNECTION_MAXIDLETIME_KEY,
+  
CommonConfigurationKeysPublic.IPC_CLIENT_CONNECTION_MAXIDLETIME_DEFAULT);
+  this.maxRetriesOnSocketTimeouts = conf.getInt(
+  
CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY,
+  
CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_DEFAULT);
+  this.tcpNoDelay = conf.getBoolean(
+  CommonConfigurationKeysPublic.IPC_CLIENT_TCPNODELAY_KEY,
+  CommonConfigurationKeysPublic.IPC_CLIENT_TCPNODELAY_DEFAULT);
+  this.doPing = conf.getBoolean(
+  CommonConfigurationKeys.IPC_CLIENT_PING_KEY,
+  CommonConfigurationKeys.IPC_CLIENT_PING_DEFAULT);
+  this.pingInterval = (doPing ? Client.getPingInterval(conf) : 0

svn commit: r1566965 - in /hadoop/common/trunk/hadoop-common-project/hadoop-common: ./ src/main/java/org/apache/hadoop/fs/s3/ src/main/java/org/apache/hadoop/fs/s3native/ src/test/java/org/apache/hado

2014-02-10 Thread atm
Author: atm
Date: Tue Feb 11 02:47:05 2014
New Revision: 1566965

URL: http://svn.apache.org/r1566965
Log:
HADOOP-10326. M/R jobs can not access S3 if Kerberos is enabled. Contributed by 
bc Wong.

Modified:
hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3/S3FileSystem.java

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3/S3FileSystemContractBaseTest.java

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3native/NativeS3FileSystemContractBaseTest.java

Modified: hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1566965r1=1566964r2=1566965view=diff
==
--- hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt Tue Feb 
11 02:47:05 2014
@@ -331,6 +331,9 @@ Release 2.4.0 - UNRELEASED
 HADOOP-10330. TestFrameDecoder fails if it cannot bind port 12345.
 (Arpit Agarwal)
 
+HADOOP-10326. M/R jobs can not access S3 if Kerberos is enabled. (bc Wong
+via atm)
+
 Release 2.3.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3/S3FileSystem.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3/S3FileSystem.java?rev=1566965r1=1566964r2=1566965view=diff
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3/S3FileSystem.java
 (original)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3/S3FileSystem.java
 Tue Feb 11 02:47:05 2014
@@ -443,6 +443,12 @@ public class S3FileSystem extends FileSy
return getConf().getLong("fs.s3.block.size", 64 * 1024 * 1024);
   }
 
+  @Override
+  public String getCanonicalServiceName() {
+// Does not support Token
+return null;
+  }
+
   // diagnostic methods
 
   void dump() throws IOException {

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java?rev=1566965r1=1566964r2=1566965view=diff
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java
 (original)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java
 Tue Feb 11 02:47:05 2014
@@ -733,4 +733,10 @@ public class NativeS3FileSystem extends 
   public Path getWorkingDirectory() {
 return workingDir;
   }
+
+  @Override
+  public String getCanonicalServiceName() {
+// Does not support Token
+return null;
+  }
 }

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3/S3FileSystemContractBaseTest.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3/S3FileSystemContractBaseTest.java?rev=1566965r1=1566964r2=1566965view=diff
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3/S3FileSystemContractBaseTest.java
 (original)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3/S3FileSystemContractBaseTest.java
 Tue Feb 11 02:47:05 2014
@@ -54,5 +54,10 @@ public abstract class S3FileSystemContra
 assertEquals("Double default block size", newBlockSize,
fs.getFileStatus(file).getBlockSize());
   }
-  
+
+  public void testCanonicalName() throws Exception {
+assertNull("s3 doesn't support security token and shouldn't have canonical 
name",
+   fs.getCanonicalServiceName());
+  }
+  }
+
 }

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3native/NativeS3FileSystemContractBaseTest.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3native/NativeS3FileSystemContractBaseTest.java?rev=1566965r1=1566964r2=1566965view=diff

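A short, hedged sketch of the effect: when delegation tokens are collected for a job, a filesystem whose canonical service name is null is skipped, so a Kerberos-enabled job that reads or writes s3n:// paths no longer fails trying to obtain a token for S3. The bucket name below is an example.

    Configuration conf = new Configuration();
    FileSystem s3 = FileSystem.get(URI.create("s3n://example-bucket/"), conf);
    if (s3.getCanonicalServiceName() == null) {
      // No delegation token is requested for this filesystem.
    }
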
svn commit: r1566966 - in /hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common: ./ src/main/java/org/apache/hadoop/fs/s3/ src/main/java/org/apache/hadoop/fs/s3native/ src/test/java/org

2014-02-10 Thread atm
Author: atm
Date: Tue Feb 11 02:49:07 2014
New Revision: 1566966

URL: http://svn.apache.org/r1566966
Log:
HADOOP-10326. M/R jobs can not access S3 if Kerberos is enabled. Contributed by 
bc Wong.

Modified:

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3/S3FileSystem.java

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3/S3FileSystemContractBaseTest.java

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3native/NativeS3FileSystemContractBaseTest.java

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1566966r1=1566965r2=1566966view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
Tue Feb 11 02:49:07 2014
@@ -33,6 +33,9 @@ Release 2.4.0 - UNRELEASED
 HADOOP-10330. TestFrameDecoder fails if it cannot bind port 12345.
 (Arpit Agarwal)
 
+HADOOP-10326. M/R jobs can not access S3 if Kerberos is enabled. (bc Wong
+via atm)
+
 Release 2.3.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3/S3FileSystem.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3/S3FileSystem.java?rev=1566966r1=1566965r2=1566966view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3/S3FileSystem.java
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3/S3FileSystem.java
 Tue Feb 11 02:49:07 2014
@@ -350,6 +350,12 @@ public class S3FileSystem extends FileSy
return getConf().getLong("fs.s3.block.size", 64 * 1024 * 1024);
   }
 
+  @Override
+  public String getCanonicalServiceName() {
+// Does not support Token
+return null;
+  }
+
   // diagnostic methods
 
   void dump() throws IOException {

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java?rev=1566966r1=1566965r2=1566966view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java
 Tue Feb 11 02:49:07 2014
@@ -699,4 +699,10 @@ public class NativeS3FileSystem extends 
   public Path getWorkingDirectory() {
 return workingDir;
   }
+
+  @Override
+  public String getCanonicalServiceName() {
+// Does not support Token
+return null;
+  }
 }

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3/S3FileSystemContractBaseTest.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3/S3FileSystemContractBaseTest.java?rev=1566966r1=1566965r2=1566966view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3/S3FileSystemContractBaseTest.java
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3/S3FileSystemContractBaseTest.java
 Tue Feb 11 02:49:07 2014
@@ -54,5 +54,10 @@ public abstract class S3FileSystemContra
 assertEquals("Double default block size", newBlockSize,
fs.getFileStatus(file).getBlockSize());
   }
-  
+
+  public void testCanonicalName() throws Exception {
+assertNull("s3 doesn't support security token and shouldn't have canonical 
name",
+   fs.getCanonicalServiceName());
+  }
+  }
+
 }

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3native

svn commit: r1562868 - in /hadoop/common/branches/branch-2.3/hadoop-common-project/hadoop-common: CHANGES.txt src/main/java/org/apache/hadoop/ipc/Server.java

2014-01-30 Thread atm
Author: atm
Date: Thu Jan 30 15:56:32 2014
New Revision: 1562868

URL: http://svn.apache.org/r1562868
Log:
HADOOP-10310. SaslRpcServer should be initialized even when no secret manager 
present. Contributed by Aaron T. Myers.

Modified:

hadoop/common/branches/branch-2.3/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/branches/branch-2.3/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java

Modified: 
hadoop/common/branches/branch-2.3/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2.3/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1562868r1=1562867r2=1562868view=diff
==
--- 
hadoop/common/branches/branch-2.3/hadoop-common-project/hadoop-common/CHANGES.txt
 (original)
+++ 
hadoop/common/branches/branch-2.3/hadoop-common-project/hadoop-common/CHANGES.txt
 Thu Jan 30 15:56:32 2014
@@ -367,6 +367,9 @@ Release 2.3.0 - UNRELEASED
 HADOOP-10252. HttpServer can't start if hostname is not specified. (Jimmy
 Xiang via atm)
 
+HADOOP-10310. SaslRpcServer should be initialized even when no secret
+manager present. (atm)
+
 Release 2.2.0 - 2013-10-13
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/branches/branch-2.3/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2.3/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java?rev=1562868r1=1562867r2=1562868view=diff
==
--- 
hadoop/common/branches/branch-2.3/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
 (original)
+++ 
hadoop/common/branches/branch-2.3/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
 Thu Jan 30 15:56:32 2014
@@ -2120,7 +2120,7 @@ public abstract class Server {
 // Create the responder here
 responder = new Responder();
 
-if (secretManager != null) {
+if (secretManager != null || UserGroupInformation.isSecurityEnabled()) {
   SaslRpcServer.init(conf);
 }
 




svn commit: r1562863 - in /hadoop/common/trunk/hadoop-common-project/hadoop-common: CHANGES.txt src/main/java/org/apache/hadoop/ipc/Server.java

2014-01-30 Thread atm
Author: atm
Date: Thu Jan 30 15:49:47 2014
New Revision: 1562863

URL: http://svn.apache.org/r1562863
Log:
HADOOP-10310. SaslRpcServer should be initialized even when no secret manager 
present. Contributed by Aaron T. Myers.

Modified:
hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java

Modified: hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1562863r1=1562862r2=1562863view=diff
==
--- hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt Thu Jan 
30 15:49:47 2014
@@ -665,6 +665,9 @@ Release 2.3.0 - UNRELEASED
 HADOOP-10288. Explicit reference to Log4JLogger breaks non-log4j users
 (todd)
 
+HADOOP-10310. SaslRpcServer should be initialized even when no secret
+manager present. (atm)
+
 Release 2.2.0 - 2013-10-13
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java?rev=1562863r1=1562862r2=1562863view=diff
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
 (original)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
 Thu Jan 30 15:49:47 2014
@@ -2206,7 +2206,7 @@ public abstract class Server {
 // Create the responder here
 responder = new Responder();
 
-if (secretManager != null) {
+if (secretManager != null || UserGroupInformation.isSecurityEnabled()) {
   SaslRpcServer.init(conf);
 }
 




svn commit: r1562867 - in /hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common: CHANGES.txt src/main/java/org/apache/hadoop/ipc/Server.java

2014-01-30 Thread atm
Author: atm
Date: Thu Jan 30 15:52:39 2014
New Revision: 1562867

URL: http://svn.apache.org/r1562867
Log:
HADOOP-10310. SaslRpcServer should be initialized even when no secret manager 
present. Contributed by Aaron T. Myers.

Modified:

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1562867r1=1562866r2=1562867view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
Thu Jan 30 15:52:39 2014
@@ -379,6 +379,9 @@ Release 2.3.0 - UNRELEASED
 HADOOP-10288. Explicit reference to Log4JLogger breaks non-log4j users
 (todd)
 
+HADOOP-10310. SaslRpcServer should be initialized even when no secret
+manager present. (atm)
+
 Release 2.2.0 - 2013-10-13
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java?rev=1562867r1=1562866r2=1562867view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
 Thu Jan 30 15:52:39 2014
@@ -2120,7 +2120,7 @@ public abstract class Server {
 // Create the responder here
 responder = new Responder();
 
-if (secretManager != null) {
+if (secretManager != null || UserGroupInformation.isSecurityEnabled()) {
   SaslRpcServer.init(conf);
 }
 




svn commit: r1561720 - in /hadoop/common/trunk/hadoop-common-project/hadoop-common: CHANGES.txt src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java

2014-01-27 Thread atm
Author: atm
Date: Mon Jan 27 16:18:04 2014
New Revision: 1561720

URL: http://svn.apache.org/r1561720
Log:
HADOOP-10203. Connection leak in Jets3tNativeFileSystemStore#retrieveMetadata. 
Contributed by Andrei Savu.

Modified:
hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java

Modified: hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1561720&r1=1561719&r2=1561720&view=diff
==
--- hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt Mon Jan 
27 16:18:04 2014
@@ -536,6 +536,9 @@ Release 2.4.0 - UNRELEASED
 HADOOP-10252. HttpServer can't start if hostname is not specified. (Jimmy
 Xiang via atm)
 
+HADOOP-10203. Connection leak in
+Jets3tNativeFileSystemStore#retrieveMetadata. (Andrei Savu via atm)
+
 Release 2.3.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java?rev=1561720&r1=1561719&r2=1561720&view=diff
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java
 (original)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java
 Mon Jan 27 16:18:04 2014
@@ -110,23 +110,29 @@ class Jets3tNativeFileSystemStore implem
   handleS3ServiceException(e);
 }
   }
-  
+
   @Override
   public FileMetadata retrieveMetadata(String key) throws IOException {
+StorageObject object = null;
 try {
   if(LOG.isDebugEnabled()) {
  LOG.debug("Getting metadata for key: " + key + " from bucket:" + bucket.getName());
   }
-  S3Object object = s3Service.getObject(bucket.getName(), key);
+  object = s3Service.getObjectDetails(bucket.getName(), key);
   return new FileMetadata(key, object.getContentLength(),
   object.getLastModifiedDate().getTime());
-} catch (S3ServiceException e) {
+
+} catch (ServiceException e) {
   // Following is brittle. Is there a better way?
-  if (e.getS3ErrorCode().matches("NoSuchKey")) {
+  if ("NoSuchKey".equals(e.getErrorCode())) {
 return null; //return null if key not found
   }
-  handleS3ServiceException(e);
+  handleServiceException(e);
   return null; //never returned - keep compiler happy
+} finally {
+  if (object != null) {
+object.closeDataInputStream();
+  }
 }
   }
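
The leak came from retrieveMetadata fetching the whole object and never closing its
data stream. The patched version asks only for the object details and still closes any
stream in a finally block. A minimal sketch of that pattern, assuming only the jets3t
calls that appear in the hunk above (the wrapper class, method name, and exception
translation are illustrative):

    import java.io.IOException;
    import org.jets3t.service.S3Service;
    import org.jets3t.service.ServiceException;
    import org.jets3t.service.model.StorageObject;

    class MetadataFetchSketch {
      static long contentLength(S3Service s3Service, String bucketName, String key)
          throws IOException {
        StorageObject object = null;
        try {
          // getObjectDetails avoids pulling the object body, unlike getObject
          object = s3Service.getObjectDetails(bucketName, key);
          return object.getContentLength();
        } catch (ServiceException e) {
          throw new IOException(e);
        } finally {
          if (object != null) {
            object.closeDataInputStream();  // release any underlying connection
          }
        }
      }
    }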
 




svn commit: r1561860 - in /hadoop/common/trunk/hadoop-common-project/hadoop-common: ./ dev-support/ src/main/java/org/apache/hadoop/util/ src/test/java/org/apache/hadoop/util/

2014-01-27 Thread atm
Author: atm
Date: Mon Jan 27 21:36:42 2014
New Revision: 1561860

URL: http://svn.apache.org/r1561860
Log:
HADOOP-10250. VersionUtil returns wrong value when comparing two versions. 
Contributed by Yongjun Zhang.

Added:

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java
Modified:
hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/trunk/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/VersionUtil.java

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestVersionUtil.java

Modified: hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1561860&r1=1561859&r2=1561860&view=diff
==
--- hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt Mon Jan 
27 21:36:42 2014
@@ -539,6 +539,9 @@ Release 2.4.0 - UNRELEASED
 HADOOP-10203. Connection leak in
 Jets3tNativeFileSystemStore#retrieveMetadata. (Andrei Savu via atm)
 
+HADOOP-10250. VersionUtil returns wrong value when comparing two versions.
+(Yongjun Zhang via atm)
+
 Release 2.3.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml?rev=1561860&r1=1561859&r2=1561860&view=diff
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
 (original)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
 Mon Jan 27 21:36:42 2014
@@ -364,4 +364,11 @@
   <Bug pattern="OBL_UNSATISFIED_OBLIGATION"/>
 </Match>
 
+ <!-- code from maven source, null value is checked at callee side. -->
+ <Match>
+   <Class name="org.apache.hadoop.util.ComparableVersion$ListItem" />
+   <Method name="compareTo" />
+   <Bug code="NP" />
+ </Match>
+
 </FindBugsFilter>

Added: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java?rev=1561860&view=auto
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java
 (added)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java
 Mon Jan 27 21:36:42 2014
@@ -0,0 +1,479 @@
+// Code source of this file: 
+//   http://grepcode.com/file/repo1.maven.org/maven2/
+// org.apache.maven/maven-artifact/3.1.1/
+//   org/apache/maven/artifact/versioning/ComparableVersion.java/
+//
+// Modifications made on top of the source:
+//   1. Changed
+//package org.apache.maven.artifact.versioning;
+//  to
+//package org.apache.hadoop.util;
+//   2. Removed author tags to clear hadoop author tag warning
+//author <a href="mailto:ken...@apache.org">Kenney Westerhof</a>
+//author <a href="mailto:hbout...@apache.org">Hervé Boutemy</a>
+//
+package org.apache.hadoop.util;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+import java.math.BigInteger;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Iterator;
+import java.util.List;
+import java.util.ListIterator;
+import java.util.Locale;
+import java.util.Properties;
+import java.util.Stack;
+
+/**
+ * Generic implementation of version comparison.
+ * 
+ * <p>Features:
+ * <ul>
+ * <li>mixing of '<code>-</code>' (dash) and '<code>.</code>' (dot) separators,</li>
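
The class is a near-verbatim copy of Maven's version comparator, so VersionUtil can
delegate its comparisons to it rather than its previous ad-hoc parsing. A small usage
sketch (only the ComparableVersion class added above is assumed; the ordering shown
follows the Maven semantics its javadoc describes):

    import org.apache.hadoop.util.ComparableVersion;

    public class VersionCompareSketch {
      public static void main(String[] args) {
        ComparableVersion release  = new ComparableVersion("2.3.0");
        ComparableVersion snapshot = new ComparableVersion("2.3.0-SNAPSHOT");
        // A -SNAPSHOT qualifier sorts before the plain release, so this prints
        // a positive number.
        System.out.println(release.compareTo(snapshot));
      }
    }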

svn commit: r1561861 - in /hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common: ./ dev-support/ src/main/java/org/apache/hadoop/util/ src/test/java/org/apache/hadoop/util/

2014-01-27 Thread atm
Author: atm
Date: Mon Jan 27 21:39:01 2014
New Revision: 1561861

URL: http://svn.apache.org/r1561861
Log:
HADOOP-10250. VersionUtil returns wrong value when comparing two versions. 
Contributed by Yongjun Zhang.

Added:

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java
Modified:

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/VersionUtil.java

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestVersionUtil.java

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1561861&r1=1561860&r2=1561861&view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
Mon Jan 27 21:39:01 2014
@@ -243,6 +243,9 @@ Release 2.4.0 - UNRELEASED
 HADOOP-10203. Connection leak in
 Jets3tNativeFileSystemStore#retrieveMetadata. (Andrei Savu via atm)
 
+HADOOP-10250. VersionUtil returns wrong value when comparing two versions.
+(Yongjun Zhang via atm)
+
 Release 2.3.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml?rev=1561861&r1=1561860&r2=1561861&view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
 Mon Jan 27 21:39:01 2014
@@ -364,4 +364,11 @@
   <Bug pattern="OBL_UNSATISFIED_OBLIGATION"/>
 </Match>
 
+ <!-- code from maven source, null value is checked at callee side. -->
+ <Match>
+   <Class name="org.apache.hadoop.util.ComparableVersion$ListItem" />
+   <Method name="compareTo" />
+   <Bug code="NP" />
+ </Match>
+
 </FindBugsFilter>

Added: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java?rev=1561861&view=auto
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java
 (added)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java
 Mon Jan 27 21:39:01 2014
@@ -0,0 +1,479 @@
+// Code source of this file: 
+//   http://grepcode.com/file/repo1.maven.org/maven2/
+// org.apache.maven/maven-artifact/3.1.1/
+//   org/apache/maven/artifact/versioning/ComparableVersion.java/
+//
+// Modifications made on top of the source:
+//   1. Changed
+//package org.apache.maven.artifact.versioning;
+//  to
+//package org.apache.hadoop.util;
+//   2. Removed author tags to clear hadoop author tag warning
+//author <a href="mailto:ken...@apache.org">Kenney Westerhof</a>
+//author <a href="mailto:hbout...@apache.org">Hervé Boutemy</a>
+//
+package org.apache.hadoop.util;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+import java.math.BigInteger;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Iterator;
+import java.util.List;
+import java.util.ListIterator;
+import java.util.Locale

svn commit: r1560450 - in /hadoop/common/trunk/hadoop-common-project/hadoop-common: CHANGES.txt src/main/java/org/apache/hadoop/http/HttpServer.java src/test/java/org/apache/hadoop/http/TestHttpServer

2014-01-22 Thread atm
Author: atm
Date: Wed Jan 22 18:10:59 2014
New Revision: 1560450

URL: http://svn.apache.org/r1560450
Log:
HADOOP-10252. HttpServer can't start if hostname is not specified. Contributed 
by Jimmy Xiang.

Modified:
hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java

Modified: hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1560450&r1=1560449&r2=1560450&view=diff
==
--- hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt Wed Jan 
22 18:10:59 2014
@@ -533,6 +533,9 @@ Release 2.4.0 - UNRELEASED
 
 HADOOP-10235. Hadoop tarball has 2 versions of stax-api JARs. (tucu)
 
+HADOOP-10252. HttpServer can't start if hostname is not specified. (Jimmy
+Xiang via atm)
+
 Release 2.3.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java?rev=1560450&r1=1560449&r2=1560450&view=diff
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
 (original)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
 Wed Jan 22 18:10:59 2014
@@ -455,7 +455,7 @@ public class HttpServer implements Filte
   public HttpServer(String name, String bindAddress, int port,
   boolean findPort, Configuration conf, AccessControlList adminsAcl, 
   Connector connector, String[] pathSpecs) throws IOException {
-this(new Builder().setName(name)
+this(new Builder().setName(name).hostName(bindAddress)
 .addEndpoint(URI.create("http://" + bindAddress + ":" + port))
 .setFindPort(findPort).setConf(conf).setACL(adminsAcl)
 .setConnector(connector).setPathSpec(pathSpecs));

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java?rev=1560450&r1=1560449&r2=1560450&view=diff
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java
 (original)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java
 Wed Jan 22 18:10:59 2014
@@ -524,6 +524,17 @@ public class TestHttpServer extends Http
 Assert.assertFalse(HttpServer.isInstrumentationAccessAllowed(context, 
request, response));
   }
 
+  @Test
+  @SuppressWarnings("deprecation")
+  public void testOldConstructor() throws Exception {
+HttpServer server = new HttpServer("test", "0.0.0.0", 0, false);
+try {
+  server.start();
+} finally {
+  server.stop();
+}
+  }
+
   @Test public void testBindAddress() throws Exception {
 checkBindAddress("localhost", 0, false).stop();
 // hang onto this one for a bit more testing
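
The bug was that the deprecated convenience constructor built the endpoint URI but
never told the Builder its hostname, so the server could not start; the one-line fix
threads bindAddress through hostName(). A minimal builder sketch along the same lines
(it assumes the Builder methods used in the hunk above plus a build() call; the name,
address, and port are illustrative):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.http.HttpServer;

    public class HttpServerStartSketch {
      public static void main(String[] args) throws Exception {
        HttpServer server = new HttpServer.Builder()
            .setName("test")
            .hostName("0.0.0.0")                          // the piece the old constructor omitted
            .addEndpoint(URI.create("http://0.0.0.0:0"))  // port 0: pick any free port
            .setFindPort(true)
            .setConf(new Configuration())
            .build();
        server.start();
        server.stop();
      }
    }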




svn commit: r1560451 - in /hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common: CHANGES.txt src/main/java/org/apache/hadoop/http/HttpServer.java src/test/java/org/apache/hadoop/http/Te

2014-01-22 Thread atm
Author: atm
Date: Wed Jan 22 18:13:22 2014
New Revision: 1560451

URL: http://svn.apache.org/r1560451
Log:
HADOOP-10252. HttpServer can't start if hostname is not specified. Contributed 
by Jimmy Xiang.

Modified:

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1560451&r1=1560450&r2=1560451&view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
Wed Jan 22 18:13:22 2014
@@ -237,6 +237,9 @@ Release 2.4.0 - UNRELEASED
 
 HADOOP-10235. Hadoop tarball has 2 versions of stax-api JARs. (tucu)
 
+HADOOP-10252. HttpServer can't start if hostname is not specified. (Jimmy
+Xiang via atm)
+
 Release 2.3.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java?rev=1560451&r1=1560450&r2=1560451&view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
 Wed Jan 22 18:13:22 2014
@@ -455,7 +455,7 @@ public class HttpServer implements Filte
   public HttpServer(String name, String bindAddress, int port,
   boolean findPort, Configuration conf, AccessControlList adminsAcl, 
   Connector connector, String[] pathSpecs) throws IOException {
-this(new Builder().setName(name)
+this(new Builder().setName(name).hostName(bindAddress)
 .addEndpoint(URI.create("http://" + bindAddress + ":" + port))
 .setFindPort(findPort).setConf(conf).setACL(adminsAcl)
 .setConnector(connector).setPathSpec(pathSpecs));

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java?rev=1560451&r1=1560450&r2=1560451&view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java
 Wed Jan 22 18:13:22 2014
@@ -524,6 +524,17 @@ public class TestHttpServer extends Http
 Assert.assertFalse(HttpServer.isInstrumentationAccessAllowed(context, 
request, response));
   }
 
+  @Test
+  @SuppressWarnings("deprecation")
+  public void testOldConstructor() throws Exception {
+HttpServer server = new HttpServer("test", "0.0.0.0", 0, false);
+try {
+  server.start();
+} finally {
+  server.stop();
+}
+  }
+
   @Test public void testBindAddress() throws Exception {
 checkBindAddress("localhost", 0, false).stop();
 // hang onto this one for a bit more testing




svn commit: r1556082 - in /hadoop/common/branches/branch-1: CHANGES.txt src/test/org/apache/hadoop/fs/TestCopyFiles.java src/tools/org/apache/hadoop/tools/DistCp.java

2014-01-06 Thread atm
Author: atm
Date: Tue Jan  7 00:24:09 2014
New Revision: 1556082

URL: http://svn.apache.org/r1556082
Log:
HDFS-5685. DistCp will fail to copy with -delete switch. Contributed by Yongjun 
Zhang.

Modified:
hadoop/common/branches/branch-1/CHANGES.txt

hadoop/common/branches/branch-1/src/test/org/apache/hadoop/fs/TestCopyFiles.java

hadoop/common/branches/branch-1/src/tools/org/apache/hadoop/tools/DistCp.java

Modified: hadoop/common/branches/branch-1/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-1/CHANGES.txt?rev=1556082&r1=1556081&r2=1556082&view=diff
==
--- hadoop/common/branches/branch-1/CHANGES.txt (original)
+++ hadoop/common/branches/branch-1/CHANGES.txt Tue Jan  7 00:24:09 2014
@@ -181,6 +181,9 @@ Release 1.3.0 - unreleased
 MAPREDUCE-5698. Backport MAPREDUCE-1285 to branch-1 (Yongjun Zhang via
 Sandy Ryza)
 
+HDFS-5685. DistCp will fail to copy with -delete switch. (Yongjun Zhang
+via atm)
+
 Release 1.2.2 - unreleased
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/branches/branch-1/src/test/org/apache/hadoop/fs/TestCopyFiles.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-1/src/test/org/apache/hadoop/fs/TestCopyFiles.java?rev=1556082&r1=1556081&r2=1556082&view=diff
==
--- 
hadoop/common/branches/branch-1/src/test/org/apache/hadoop/fs/TestCopyFiles.java
 (original)
+++ 
hadoop/common/branches/branch-1/src/test/org/apache/hadoop/fs/TestCopyFiles.java
 Tue Jan  7 00:24:09 2014
@@ -49,6 +49,10 @@ import org.apache.log4j.Level;
  * A JUnit test for copying files recursively.
  */
 public class TestCopyFiles extends TestCase {
+  
+  private static final String JT_STAGING_AREA_ROOT = "mapreduce.jobtracker.staging.root.dir";
+  private static final String JT_STAGING_AREA_ROOT_DEFAULT = "/tmp/hadoop/mapred/staging";
+
   {
 ((Log4JLogger)LogFactory.getLog("org.apache.hadoop.hdfs.StateChange")
 ).getLogger().setLevel(Level.OFF);
@@ -56,8 +60,9 @@ public class TestCopyFiles extends TestC
 ((Log4JLogger)FSNamesystem.LOG).getLogger().setLevel(Level.OFF);
 ((Log4JLogger)DistCp.LOG).getLogger().setLevel(Level.ALL);
   }
-  
-  static final URI LOCAL_FS = URI.create("file:///");
+
+  private static final String LOCAL_FS_STR = "file:///";
+  private static final URI LOCAL_FS_URI = URI.create(LOCAL_FS_STR);
   
   private static final Random RAN = new Random();
   private static final int NFILES = 20;
@@ -255,11 +260,11 @@ public class TestCopyFiles extends TestC
   /** copy files from local file system to local file system */
   public void testCopyFromLocalToLocal() throws Exception {
 Configuration conf = new Configuration();
-FileSystem localfs = FileSystem.get(LOCAL_FS, conf);
-MyFile[] files = createFiles(LOCAL_FS, TEST_ROOT_DIR+"/srcdat");
+FileSystem localfs = FileSystem.get(LOCAL_FS_URI, conf);
+MyFile[] files = createFiles(LOCAL_FS_URI, TEST_ROOT_DIR+"/srcdat");
 ToolRunner.run(new DistCp(new Configuration()),
-   new String[] {"file:///"+TEST_ROOT_DIR+"/srcdat",
- "file:///"+TEST_ROOT_DIR+"/destdat"});
+   new String[] {LOCAL_FS_STR+TEST_ROOT_DIR+"/srcdat",
+ LOCAL_FS_STR+TEST_ROOT_DIR+"/destdat"});
 assertTrue("Source and destination directories do not match.",
checkFiles(localfs, TEST_ROOT_DIR+"/destdat", files));
 deldir(localfs, TEST_ROOT_DIR+"/destdat");
@@ -305,11 +310,11 @@ public class TestCopyFiles extends TestC
   final FileSystem hdfs = cluster.getFileSystem();
   final String namenode = hdfs.getUri().toString();
  if (namenode.startsWith("hdfs://")) {
-MyFile[] files = createFiles(LOCAL_FS, TEST_ROOT_DIR+"/srcdat");
+MyFile[] files = createFiles(LOCAL_FS_URI, TEST_ROOT_DIR+"/srcdat");
 ToolRunner.run(new DistCp(conf), new String[] {
  "-log",
  namenode+"/logs",
- "file:///"+TEST_ROOT_DIR+"/srcdat",
+ LOCAL_FS_STR+TEST_ROOT_DIR+"/srcdat",
  namenode+"/destdat"});
 assertTrue("Source and destination directories do not match.",
checkFiles(cluster.getFileSystem(), "/destdat", files));
@@ -317,7 +322,7 @@ public class TestCopyFiles extends TestC
 hdfs.exists(new Path(namenode+"/logs")));
 deldir(hdfs, "/destdat");
 deldir(hdfs, "/logs");
-deldir(FileSystem.get(LOCAL_FS, conf), TEST_ROOT_DIR+"/srcdat");
+deldir(FileSystem.get(LOCAL_FS_URI, conf), TEST_ROOT_DIR+"/srcdat");
   }
 } finally {
   if (cluster != null) { cluster.shutdown(); }
@@ -329,7 +334,7 @@ public class TestCopyFiles extends TestC
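
The test refactoring above is the visible part of the backport; the scenario being
protected is a DistCp run that uses -delete to prune destination files missing from
the source. A hedged sketch of such an invocation, mirroring the ToolRunner style the
test uses (the paths are illustrative; in branch-1 DistCp the -delete switch is meant
to be combined with -update or -overwrite):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.tools.DistCp;
    import org.apache.hadoop.util.ToolRunner;

    public class DistCpDeleteSketch {
      public static void main(String[] args) throws Exception {
        // Copy srcdat over destdat, deleting anything under destdat that no
        // longer exists under srcdat.
        ToolRunner.run(new DistCp(new Configuration()),
            new String[] { "-update", "-delete",
                           "file:///tmp/srcdat", "file:///tmp/destdat" });
      }
    }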

svn commit: r1523147 - in /hadoop/common/trunk/hadoop-common-project/hadoop-common: CHANGES.txt src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java

2013-09-13 Thread atm
Author: atm
Date: Fri Sep 13 23:54:45 2013
New Revision: 1523147

URL: http://svn.apache.org/r1523147
Log:
HADOOP-9945. HAServiceState should have a state for stopped services. 
Contributed by Karthik Kambatla.

Modified:
hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java

Modified: hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1523147&r1=1523146&r2=1523147&view=diff
==
--- hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt Fri Sep 
13 23:54:45 2013
@@ -408,6 +408,9 @@ Release 2.1.1-beta - UNRELEASED
 HADOOP-9918. Add addIfService to CompositeService (Karthik Kambatla via
 Sandy Ryza)
 
+HADOOP-9945. HAServiceState should have a state for stopped services.
+(Karthik Kambatla via atm)
+
   OPTIMIZATIONS
 
   BUG FIXES

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java?rev=1523147&r1=1523146&r2=1523147&view=diff
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java
 (original)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java
 Fri Sep 13 23:54:45 2013
@@ -43,13 +43,15 @@ public interface HAServiceProtocol {
   public static final long versionID = 1L;
 
   /**
-   * An HA service may be in active or standby state. During
-   * startup, it is in an unknown INITIALIZING state.
+   * An HA service may be in active or standby state. During startup, it is in
+   * an unknown INITIALIZING state. During shutdown, it is in the STOPPING 
state
+   * and can no longer return to active/standby states.
*/
   public enum HAServiceState {
 INITIALIZING("initializing"),
 ACTIVE("active"),
-STANDBY("standby");
+STANDBY("standby"),
+STOPPING("stopping");
 
 private String name;
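
With the new constant, callers that poll an HA service can tell a clean shutdown apart
from the transient INITIALIZING phase. An illustrative sketch only (the enum shown
above is assumed; the helper method and its policy are hypothetical):

    import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;

    class HaStateSketch {
      // Only ACTIVE and STANDBY instances are usable; INITIALIZING may still
      // become usable, while STOPPING never will.
      static boolean canServe(HAServiceState state) {
        switch (state) {
          case ACTIVE:
          case STANDBY:
            return true;
          case STOPPING:      // new state: shutting down, will not return to active/standby
          case INITIALIZING:
          default:
            return false;
        }
      }
    }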
 




svn commit: r1523150 - in /hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common: CHANGES.txt src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java

2013-09-13 Thread atm
Author: atm
Date: Fri Sep 13 23:58:45 2013
New Revision: 1523150

URL: http://svn.apache.org/r1523150
Log:
HADOOP-9945. HAServiceState should have a state for stopped services. 
Contributed by Karthik Kambatla.

Modified:

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1523150&r1=1523149&r2=1523150&view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
Fri Sep 13 23:58:45 2013
@@ -129,6 +129,9 @@ Release 2.1.1-beta - UNRELEASED
 HADOOP-9918. Add addIfService to CompositeService (Karthik Kambatla via
 Sandy Ryza)
 
+HADOOP-9945. HAServiceState should have a state for stopped services.
+(Karthik Kambatla via atm)
+
   OPTIMIZATIONS
 
   BUG FIXES

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java?rev=1523150&r1=1523149&r2=1523150&view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java
 Fri Sep 13 23:58:45 2013
@@ -43,13 +43,15 @@ public interface HAServiceProtocol {
   public static final long versionID = 1L;
 
   /**
-   * An HA service may be in active or standby state. During
-   * startup, it is in an unknown INITIALIZING state.
+   * An HA service may be in active or standby state. During startup, it is in
+   * an unknown INITIALIZING state. During shutdown, it is in the STOPPING 
state
+   * and can no longer return to active/standby states.
*/
   public enum HAServiceState {
 INITIALIZING("initializing"),
 ACTIVE("active"),
-STANDBY("standby");
+STANDBY("standby"),
+STOPPING("stopping");
 
 private String name;
 




svn commit: r1523152 - in /hadoop/common/branches/branch-2.1-beta/hadoop-common-project/hadoop-common: CHANGES.txt src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java

2013-09-13 Thread atm
Author: atm
Date: Sat Sep 14 00:02:41 2013
New Revision: 1523152

URL: http://svn.apache.org/r1523152
Log:
HADOOP-9945. HAServiceState should have a state for stopped services. 
Contributed by Karthik Kambatla.

Modified:

hadoop/common/branches/branch-2.1-beta/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/branches/branch-2.1-beta/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java

Modified: 
hadoop/common/branches/branch-2.1-beta/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2.1-beta/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1523152&r1=1523151&r2=1523152&view=diff
==
--- 
hadoop/common/branches/branch-2.1-beta/hadoop-common-project/hadoop-common/CHANGES.txt
 (original)
+++ 
hadoop/common/branches/branch-2.1-beta/hadoop-common-project/hadoop-common/CHANGES.txt
 Sat Sep 14 00:02:41 2013
@@ -48,6 +48,9 @@ Release 2.1.1-beta - UNRELEASED
 HADOOP-9918. Add addIfService to CompositeService (Karthik Kambatla via
 Sandy Ryza)
 
+HADOOP-9945. HAServiceState should have a state for stopped services.
+(Karthik Kambatla via atm)
+
   OPTIMIZATIONS
 
   BUG FIXES

Modified: 
hadoop/common/branches/branch-2.1-beta/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2.1-beta/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java?rev=1523152&r1=1523151&r2=1523152&view=diff
==
--- 
hadoop/common/branches/branch-2.1-beta/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java
 (original)
+++ 
hadoop/common/branches/branch-2.1-beta/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java
 Sat Sep 14 00:02:41 2013
@@ -43,13 +43,15 @@ public interface HAServiceProtocol {
   public static final long versionID = 1L;
 
   /**
-   * An HA service may be in active or standby state. During
-   * startup, it is in an unknown INITIALIZING state.
+   * An HA service may be in active or standby state. During startup, it is in
+   * an unknown INITIALIZING state. During shutdown, it is in the STOPPING 
state
+   * and can no longer return to active/standby states.
*/
   public enum HAServiceState {
 INITIALIZING("initializing"),
 ACTIVE("active"),
-STANDBY("standby");
+STANDBY("standby"),
+STOPPING("stopping");
 
 private String name;
 




svn commit: r1523155 - /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

2013-09-13 Thread atm
Author: atm
Date: Sat Sep 14 00:12:43 2013
New Revision: 1523155

URL: http://svn.apache.org/r1523155
Log:
HADOOP-9960. Upgrade Jersey version to 1.9. Contributed by Karthik Kambatla.

Modified:
hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

Modified: hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1523155&r1=1523154&r2=1523155&view=diff
==
--- hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt Sat Sep 
14 00:12:43 2013
@@ -463,6 +463,8 @@ Release 2.1.1-beta - UNRELEASED
 HADOOP-9958. Add old constructor back to DelegationTokenInformation to
 unbreak downstream builds. (Andrew Wang)
 
+HADOOP-9960. Upgrade Jersey version to 1.9. (Karthik Kambatla via atm)
+
 Release 2.1.0-beta - 2013-08-22
 
   INCOMPATIBLE CHANGES




svn commit: r1523155 - /hadoop/common/trunk/hadoop-project/pom.xml

2013-09-13 Thread atm
Author: atm
Date: Sat Sep 14 00:12:43 2013
New Revision: 1523155

URL: http://svn.apache.org/r1523155
Log:
HADOOP-9960. Upgrade Jersey version to 1.9. Contributed by Karthik Kambatla.

Modified:
hadoop/common/trunk/hadoop-project/pom.xml

Modified: hadoop/common/trunk/hadoop-project/pom.xml
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-project/pom.xml?rev=1523155&r1=1523154&r2=1523155&view=diff
==
--- hadoop/common/trunk/hadoop-project/pom.xml (original)
+++ hadoop/common/trunk/hadoop-project/pom.xml Sat Sep 14 00:12:43 2013
@@ -59,6 +59,9 @@
 
 <hadoop.common.build.dir>${basedir}/../../hadoop-common-project/hadoop-common/target</hadoop.common.build.dir>
 <java.security.egd>file:///dev/urandom</java.security.egd>
 
+<!-- jersey version -->
+<jersey.version>1.9</jersey.version>
+
 <!-- ProtocolBuffer version, used to verify the protoc version and -->
 <!-- define the protobuf JAR version   -->
 <protobuf.version>2.5.0</protobuf.version>
@@ -365,12 +368,12 @@
   <dependency>
 <groupId>com.sun.jersey</groupId>
 <artifactId>jersey-core</artifactId>
-<version>1.8</version>
+<version>${jersey.version}</version>
   </dependency>
   <dependency>
 <groupId>com.sun.jersey</groupId>
 <artifactId>jersey-json</artifactId>
-<version>1.8</version>
+<version>${jersey.version}</version>
 <exclusions>
   <exclusion>
 <groupId>javax.xml.stream</groupId>
@@ -381,7 +384,7 @@
   <dependency>
 <groupId>com.sun.jersey</groupId>
 <artifactId>jersey-server</artifactId>
-<version>1.8</version>
+<version>${jersey.version}</version>
   </dependency>
 
   <dependency>
@@ -399,19 +402,19 @@
   <dependency>
 <groupId>com.sun.jersey.contribs</groupId>
 <artifactId>jersey-guice</artifactId>
-<version>1.8</version>
+<version>${jersey.version}</version>
   </dependency>
 
   <dependency>
 <groupId>com.sun.jersey.jersey-test-framework</groupId>
 <artifactId>jersey-test-framework-core</artifactId>
-<version>1.8</version>
+<version>${jersey.version}</version>
 <scope>test</scope>
   </dependency>
   <dependency>
 <groupId>com.sun.jersey.jersey-test-framework</groupId>
 <artifactId>jersey-test-framework-grizzly2</artifactId>
-<version>1.8</version>
+<version>${jersey.version}</version>
   </dependency>
 
   <dependency>




svn commit: r1523156 - /hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt

2013-09-13 Thread atm
Author: atm
Date: Sat Sep 14 00:15:07 2013
New Revision: 1523156

URL: http://svn.apache.org/r1523156
Log:
HADOOP-9960. Upgrade Jersey version to 1.9. Contributed by Karthik Kambatla.

Modified:

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1523156&r1=1523155&r2=1523156&view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
Sat Sep 14 00:15:07 2013
@@ -193,6 +193,8 @@ Release 2.1.1-beta - UNRELEASED
 HADOOP-9958. Add old constructor back to DelegationTokenInformation to
 unbreak downstream builds. (Andrew Wang)
 
+HADOOP-9960. Upgrade Jersey version to 1.9. (Karthik Kambatla via atm)
+
 Release 2.1.0-beta - 2013-08-22
 
   INCOMPATIBLE CHANGES




svn commit: r1523157 - /hadoop/common/branches/branch-2.1-beta/hadoop-project/pom.xml

2013-09-13 Thread atm
Author: atm
Date: Sat Sep 14 00:17:37 2013
New Revision: 1523157

URL: http://svn.apache.org/r1523157
Log:
HADOOP-9960. Upgrade Jersey version to 1.9. Contributed by Karthik Kambatla.

Modified:
hadoop/common/branches/branch-2.1-beta/hadoop-project/pom.xml

Modified: hadoop/common/branches/branch-2.1-beta/hadoop-project/pom.xml
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2.1-beta/hadoop-project/pom.xml?rev=1523157&r1=1523156&r2=1523157&view=diff
==
--- hadoop/common/branches/branch-2.1-beta/hadoop-project/pom.xml (original)
+++ hadoop/common/branches/branch-2.1-beta/hadoop-project/pom.xml Sat Sep 14 
00:17:37 2013
@@ -59,6 +59,9 @@
 
 <hadoop.common.build.dir>${basedir}/../../hadoop-common-project/hadoop-common/target</hadoop.common.build.dir>
 <java.security.egd>file:///dev/urandom</java.security.egd>
 
+<!-- jersey version -->
+<jersey.version>1.9</jersey.version>
+
 <!-- ProtocolBuffer version, used to verify the protoc version and -->
 <!-- define the protobuf JAR version   -->
 <protobuf.version>2.5.0</protobuf.version>
@@ -358,12 +361,12 @@
   <dependency>
 <groupId>com.sun.jersey</groupId>
 <artifactId>jersey-core</artifactId>
-<version>1.8</version>
+<version>${jersey.version}</version>
   </dependency>
   <dependency>
 <groupId>com.sun.jersey</groupId>
 <artifactId>jersey-json</artifactId>
-<version>1.8</version>
+<version>${jersey.version}</version>
 <exclusions>
   <exclusion>
 <groupId>javax.xml.stream</groupId>
@@ -374,7 +377,7 @@
   <dependency>
 <groupId>com.sun.jersey</groupId>
 <artifactId>jersey-server</artifactId>
-<version>1.8</version>
+<version>${jersey.version}</version>
   </dependency>
 
   <dependency>
@@ -392,19 +395,19 @@
   <dependency>
 <groupId>com.sun.jersey.contribs</groupId>
 <artifactId>jersey-guice</artifactId>
-<version>1.8</version>
+<version>${jersey.version}</version>
   </dependency>
 
   <dependency>
 <groupId>com.sun.jersey.jersey-test-framework</groupId>
 <artifactId>jersey-test-framework-core</artifactId>
-<version>1.8</version>
+<version>${jersey.version}</version>
 <scope>test</scope>
   </dependency>
   <dependency>
 <groupId>com.sun.jersey.jersey-test-framework</groupId>
 <artifactId>jersey-test-framework-grizzly2</artifactId>
-<version>1.8</version>
+<version>${jersey.version}</version>
   </dependency>
 
   <dependency>




svn commit: r1523157 - /hadoop/common/branches/branch-2.1-beta/hadoop-common-project/hadoop-common/CHANGES.txt

2013-09-13 Thread atm
Author: atm
Date: Sat Sep 14 00:17:37 2013
New Revision: 1523157

URL: http://svn.apache.org/r1523157
Log:
HADOOP-9960. Upgrade Jersey version to 1.9. Contributed by Karthik Kambatla.

Modified:

hadoop/common/branches/branch-2.1-beta/hadoop-common-project/hadoop-common/CHANGES.txt

Modified: 
hadoop/common/branches/branch-2.1-beta/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2.1-beta/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1523157&r1=1523156&r2=1523157&view=diff
==
--- 
hadoop/common/branches/branch-2.1-beta/hadoop-common-project/hadoop-common/CHANGES.txt
 (original)
+++ 
hadoop/common/branches/branch-2.1-beta/hadoop-common-project/hadoop-common/CHANGES.txt
 Sat Sep 14 00:17:37 2013
@@ -109,6 +109,8 @@ Release 2.1.1-beta - UNRELEASED
 HADOOP-9958. Add old constructor back to DelegationTokenInformation to
 unbreak downstream builds. (Andrew Wang)
 
+HADOOP-9960. Upgrade Jersey version to 1.9. (Karthik Kambatla via atm)
+
 Release 2.1.0-beta - 2013-08-22
 
   INCOMPATIBLE CHANGES




svn commit: r1509469 [1/2] - /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html

2013-08-01 Thread atm
Author: atm
Date: Thu Aug  1 23:36:44 2013
New Revision: 1509469

URL: http://svn.apache.org/r1509469
Log:
Fix line endings of releasenotes.html.

Modified:

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html


