Re: [VOTE] Release Apache Hadoop 2.7.2 RC1

2015-12-22 Thread Mingliang Liu
+1 (non-binding)

1. Download the src tar file and validate its integrity
2. Build and configure the local cluster
3. Operate HDFS from the command-line interface
4. Run several example MR jobs
5. Check logs

Thanks.

L

> On Dec 16, 2015, at 6:49 PM, Vinod Kumar Vavilapalli  
> wrote:
> 
> Hi all,
> 
> I've created a release candidate RC1 for Apache Hadoop 2.7.2.
> 
> As discussed before, this is the next maintenance release to follow up 2.7.1.
> 
> The RC is available for validation at: 
> http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/ 
> 
> 
> The RC tag in git is: release-2.7.2-RC1
> 
> The maven artifacts are available via repository.apache.org 
>  at 
> https://repository.apache.org/content/repositories/orgapachehadoop-1026/ 
> 
> 
> The release-notes are inside the tar-balls at location 
> hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I hosted 
> this at http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/releasenotes.html 
> for quick perusal.
> 
> As you may have noted,
> - The RC0-related voting thread got halted due to some critical issues. It 
> took a while again to get all those blockers out of the way. See the 
> previous voting thread [3] for details.
> - Before RC0, an unusually long 2.6.3 release caused 2.7.2 to slip by quite a 
> bit. This release's related discussion threads are linked below: [1] and [2].
> 
> Please try the release and vote; the vote will run for the usual 5 days.
> 
> Thanks,
> Vinod
> 
> [1]: 2.7.2 release plan: http://markmail.org/message/oozq3gvd4nhzsaes 
> 
> [2]: Planning Apache Hadoop 2.7.2 
> http://markmail.org/message/iktqss2qdeykgpqk 
> 
> [3]: [VOTE] Release Apache Hadoop 2.7.2 RC0: 
> http://markmail.org/message/5txhvr2qdiqglrwc
> 



Build failed in Jenkins: Hadoop-common-trunk-Java8 #846

2015-12-22 Thread Apache Jenkins Server
See 

Changes:

[cmccabe] HDFS-9589. Block files which have been hardlinked should be duplicated

--
[...truncated 5816 lines...]
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.335 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsByteArrays
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsStreams
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.335 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsStreams
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileUnsortedByteArrays
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.757 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileUnsortedByteArrays
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileStreams
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.275 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileStreams
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFile
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.086 sec - in 
org.apache.hadoop.io.file.tfile.TestTFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsJClassComparatorByteArrays
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.158 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsJClassComparatorByteArrays
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.95 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsStreams
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.207 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsStreams
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileSplit
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.911 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileSplit
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileComparator2
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.972 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileComparator2
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsByteArrays
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.138 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsByteArrays
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileByteArrays
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.025 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileByteArrays
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileSeqFileComparison
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.414 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileSeqFileComparison
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestVLong
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.413 sec - in 
org.apache.hadoop.io.file.tfile.TestVLong
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestTextNonUTF8
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.194 sec - in 
org.apache.hadoop.io.TestTextNonUTF8
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestArrayWritable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.214 sec - in 
org.apache.hadoop.io.TestArrayWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0

[jira] [Created] (HADOOP-12670) Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only

2015-12-22 Thread Elliott Clark (JIRA)
Elliott Clark created HADOOP-12670:
--

 Summary: Fix TestNetUtils and TestSecurityUtil when localhost is 
ipv6 only
 Key: HADOOP-12670
 URL: https://issues.apache.org/jira/browse/HADOOP-12670
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Elliott Clark
Assignee: Elliott Clark


{code}
  TestSecurityUtil.testBuildTokenServiceSockAddr:165 expected:<[127.0.0.]1:123> 
but was:<[0:0:0:0:0:0:0:]1:123>
  TestSecurityUtil.testBuildDTServiceName:148 expected:<[127.0.0.]1:123> but 
was:<[0:0:0:0:0:0:0:]1:123>
  
TestSecurityUtil.testSocketAddrWithName:326->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
 expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
  
TestSecurityUtil.testSocketAddrWithIP:333->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
 expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
  
TestSecurityUtil.testSocketAddrWithNameToStaticName:340->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
 expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
  TestNetUtils.testNormalizeHostName:639 expected:<[0:0:0:0:0:0:0:]1> but 
was:<[127.0.0.]1>
  TestNetUtils.testResolverLoopback:533->verifyInetAddress:496 
expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
{code}
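
A minimal standalone sketch (not from any patch) of the underlying cause: on an 
IPv6-only host, "localhost" resolves to the IPv6 loopback, so assertions that 
hard-code the IPv4 literal fail.
{code}
import java.net.InetAddress;

// Hypothetical demo class; prints "0:0:0:0:0:0:0:1:123" on an IPv6-only host
// and "127.0.0.1:123" on an IPv4 host, which is exactly the difference the
// failing assertions above are tripping over.
public class LoopbackCheck {
  public static void main(String[] args) throws Exception {
    InetAddress lo = InetAddress.getByName("localhost");
    System.out.println(lo.getHostAddress() + ":123");
  }
}
{code}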



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12671) Inconsistent configuration values and incorrect comments

2015-12-22 Thread Tianyin Xu (JIRA)
Tianyin Xu created HADOOP-12671:
---

 Summary: Inconsistent configuration values and incorrect comments
 Key: HADOOP-12671
 URL: https://issues.apache.org/jira/browse/HADOOP-12671
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf, documentation, fs/s3
Affects Versions: 2.6.2, 2.7.1
Reporter: Tianyin Xu


The following values in [core-default.xml | 
https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml]
 are wrong. 
{{fs.s3a.multipart.purge.age}}
{{fs.s3a.connection.timeout}}
{{fs.s3a.connection.establish.timeout}}
\\
\\

*1. {{fs.s3a.multipart.purge.age}}*
(in both {{2.6.2}} and {{2.7.1}})
In [core-default.xml | 
https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
 the value is {{86400}} ({{24}} hours), while in the code it is {{14400}} 
({{4}} hours).
\\
\\

*2. {{fs.s3a.connection.timeout}}*
(only appears in {{2.6.2}})
In [core-default.xml (2.6.2) | 
https://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-common/core-default.xml],
 the value is {{5000}}, while in the code it is {{5}}.
{code}
  // seconds until we give up on a connection to s3
  public static final String SOCKET_TIMEOUT = "fs.s3a.connection.timeout";
  public static final int DEFAULT_SOCKET_TIMEOUT = 5;
{code}
\\

*3. {{fs.s3a.connection.establish.timeout}}*
(only appears in {{2.7.1}})
In [core-default.xml (2.7.1)| 
https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
 the value is {{5000}}, while in the code it is {{5}}.
{code}
  // seconds until we give up trying to establish a connection to s3
  public static final String ESTABLISH_TIMEOUT = 
"fs.s3a.connection.establish.timeout";
  public static final int DEFAULT_ESTABLISH_TIMEOUT = 5;
{code}
\\

Btw, the code comments are wrong! The two parameters are in units of 
*milliseconds* instead of *seconds*...
{code}
-  // seconds until we give up on a connection to s3
+  // milliseconds until we give up on a connection to s3
...
-  // seconds until we give up trying to establish a connection to s3
+  // milliseconds until we give up trying to establish a connection to s3
{code}
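
As an illustration (not part of the report), the mismatch is easy to observe, 
because Configuration only falls back to the code default when the key is 
absent from core-default.xml:
{code}
import org.apache.hadoop.conf.Configuration;

// Hypothetical demo class: prints 5000 when core-default.xml is on the
// classpath, but 5 (the code default) when it is not -- a 1000x difference
// in the effective timeout, on top of the seconds-vs-milliseconds confusion.
public class TimeoutCheck {
  public static void main(String[] args) {
    Configuration conf = new Configuration(); // loads core-default.xml
    System.out.println(conf.getInt("fs.s3a.connection.timeout", 5));
  }
}
{code}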




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Common-trunk #2145

2015-12-22 Thread Apache Jenkins Server
See 

Changes:

[benoy] HDFS-9034. StorageTypeStats Metric should not count failed storage.

--
[...truncated 5395 lines...]
Running org.apache.hadoop.http.TestGlobalFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.025 sec - in 
org.apache.hadoop.http.TestGlobalFilter
Running org.apache.hadoop.crypto.TestOpensslCipher
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.188 sec - in 
org.apache.hadoop.crypto.TestOpensslCipher
Running org.apache.hadoop.crypto.TestCryptoCodec
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.368 sec - in 
org.apache.hadoop.crypto.TestCryptoCodec
Running org.apache.hadoop.crypto.TestCryptoStreams
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.245 sec - 
in org.apache.hadoop.crypto.TestCryptoStreams
Running org.apache.hadoop.crypto.TestCryptoStreamsForLocalFS
Tests run: 14, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 13.502 sec - 
in org.apache.hadoop.crypto.TestCryptoStreamsForLocalFS
Running org.apache.hadoop.crypto.key.TestKeyProviderCryptoExtension
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.985 sec - in 
org.apache.hadoop.crypto.key.TestKeyProviderCryptoExtension
Running org.apache.hadoop.crypto.key.TestValueQueue
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.18 sec - in 
org.apache.hadoop.crypto.key.TestValueQueue
Running org.apache.hadoop.crypto.key.TestKeyProviderFactory
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.573 sec - in 
org.apache.hadoop.crypto.key.TestKeyProviderFactory
Running org.apache.hadoop.crypto.key.TestKeyShell
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.507 sec - in 
org.apache.hadoop.crypto.key.TestKeyShell
Running org.apache.hadoop.crypto.key.TestKeyProviderDelegationTokenExtension
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.728 sec - in 
org.apache.hadoop.crypto.key.TestKeyProviderDelegationTokenExtension
Running org.apache.hadoop.crypto.key.TestKeyProvider
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.778 sec - in 
org.apache.hadoop.crypto.key.TestKeyProvider
Running org.apache.hadoop.crypto.key.kms.TestLoadBalancingKMSClientProvider
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.933 sec - in 
org.apache.hadoop.crypto.key.kms.TestLoadBalancingKMSClientProvider
Running org.apache.hadoop.crypto.key.TestCachingKeyProvider
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.088 sec - in 
org.apache.hadoop.crypto.key.TestCachingKeyProvider
Running org.apache.hadoop.crypto.TestCryptoStreamsNormal
Tests run: 14, Failures: 0, Errors: 0, Skipped: 8, Time elapsed: 6.755 sec - in 
org.apache.hadoop.crypto.TestCryptoStreamsNormal
Running org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.026 sec - 
in org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec
Running org.apache.hadoop.crypto.TestCryptoStreamsWithJceAesCtrCryptoCodec
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.973 sec - 
in org.apache.hadoop.crypto.TestCryptoStreamsWithJceAesCtrCryptoCodec
Running org.apache.hadoop.crypto.random.TestOsSecureRandom
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.63 sec - in 
org.apache.hadoop.crypto.random.TestOsSecureRandom
Running org.apache.hadoop.crypto.random.TestOpensslSecureRandom
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.182 sec - in 
org.apache.hadoop.crypto.random.TestOpensslSecureRandom
Running org.apache.hadoop.jmx.TestJMXJsonServlet
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.997 sec - in 
org.apache.hadoop.jmx.TestJMXJsonServlet
Running org.apache.hadoop.tracing.TestTraceUtils
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.517 sec - in 
org.apache.hadoop.tracing.TestTraceUtils
Running org.apache.hadoop.io.TestMD5Hash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.193 sec - in 
org.apache.hadoop.io.TestMD5Hash
Running org.apache.hadoop.io.serializer.TestSerializationFactory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.523 sec - in 
org.apache.hadoop.io.serializer.TestSerializationFactory
Running org.apache.hadoop.io.serializer.avro.TestAvroSerialization
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.663 sec - in 
org.apache.hadoop.io.serializer.avro.TestAvroSerialization
Running org.apache.hadoop.io.serializer.TestWritableSerialization
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.369 sec - in 
org.apache.hadoop.io.serializer.TestWritableSerialization
Running org.apache.hadoop.io.TestSecureIOUtils
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.764 sec - in 
org.apache.hadoop.io.TestSecureIOUtils
Running 

Build failed in Jenkins: Hadoop-common-trunk-Java8 #849

2015-12-22 Thread Apache Jenkins Server
See 

Changes:

[benoy] HDFS-9034. StorageTypeStats Metric should not count failed storage.

--
[...truncated 3835 lines...]
[INFO] Compiling 2 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.17:test (default-test) @ hadoop-minikdc ---
[INFO] Surefire report directory: 


---
 T E S T S
---

---
 T E S T S
---
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.minikdc.TestMiniKdc
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.206 sec - in 
org.apache.hadoop.minikdc.TestMiniKdc
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.minikdc.TestChangeOrgNameAndDomain
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.385 sec - in 
org.apache.hadoop.minikdc.TestChangeOrgNameAndDomain

Results :

Tests run: 6, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-minikdc 
---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-minikdc ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-minikdc ---
[INFO] 
Loading source files for package org.apache.hadoop.minikdc...
Constructing Javadoc information...
Standard Doclet version 1.8.0
Building tree for all the packages and classes...
Generating 

Generating 

Generating 

Generating 

Generating 

Generating 

Generating 

Generating 

Generating 

Building index for all the packages and classes...
Generating 

Generating 

Generating 

Building index for all classes...
Generating 

Generating 

[jira] [Resolved] (HADOOP-11990) DNS#reverseDns fails with a NumberFormatException when using an IPv6 DNS server

2015-12-22 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark resolved HADOOP-11990.

  Resolution: Not A Problem
Release Note: 
If your resolvers are IPv6 addresses, make sure that you use one of the Java 
versions listed in https://bugs.openjdk.java.net/browse/JDK-6991580:

jdk8u60, jdk8u65, or jdk9+

> DNS#reverseDns fails with a NumberFormatException when using an IPv6 DNS 
> server
> ---
>
> Key: HADOOP-11990
> URL: https://issues.apache.org/jira/browse/HADOOP-11990
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net
>Affects Versions: 2.5.1
> Environment: java version "1.7.0_45"
> Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
> Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
>Reporter: Benoit Sigoure
>  Labels: ipv6
>
> With this resolv.conf:
> {code}
> nameserver 192.168.1.1
> nameserver 2604:5500:3::3
> nameserver 2604:5500:3:3::3
> {code}
> Starting HBase yields the following:
> {code}
> Caused by: java.lang.NumberFormatException: For input string: "5500:3::3"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> at java.lang.Integer.parseInt(Integer.java:492)
> at java.lang.Integer.parseInt(Integer.java:527)
> at com.sun.jndi.dns.DnsClient.(DnsClient.java:122)
> at com.sun.jndi.dns.Resolver.(Resolver.java:61)
> at com.sun.jndi.dns.DnsContext.getResolver(DnsContext.java:570)
> at com.sun.jndi.dns.DnsContext.c_getAttributes(DnsContext.java:430)
> at 
> com.sun.jndi.toolkit.ctx.ComponentDirContext.p_getAttributes(ComponentDirContext.java:231)
> at 
> com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.getAttributes(PartialCompositeDirContext.java:139)
> at 
> com.sun.jndi.toolkit.url.GenericURLDirContext.getAttributes(GenericURLDirContext.java:103)
> at 
> javax.naming.directory.InitialDirContext.getAttributes(InitialDirContext.java:142)
> at org.apache.hadoop.net.DNS.reverseDns(DNS.java:84)
> at org.apache.hadoop.net.DNS.getHosts(DNS.java:241)
> at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:344)
> at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:362)
> at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:341)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getHostname(RSRpcServices.java:825)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.(RSRpcServices.java:782)
> at 
> org.apache.hadoop.hbase.master.MasterRpcServices.(MasterRpcServices.java:195)
> at 
> org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:477)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.(HRegionServer.java:492)
> at org.apache.hadoop.hbase.master.HMaster.(HMaster.java:333)
> at 
> org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster.(HMasterCommandLine.java:276)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:139)
> ... 7 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-12431) NameNode should bind on both IPv6 and IPv4 if running on dual-stack machine and IPv6 enabled

2015-12-22 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark resolved HADOOP-12431.

Resolution: Won't Fix
  Assignee: (was: Nemanja Matkovic)

Going to resolve this one as won't fix. We don't want to bind to ipv6 by 
default. Instead I'm going to open a documentation jira about how to set up a 
cluster with dual stack.

> NameNode should bind on both IPv6 and IPv4 if running on dual-stack machine 
> and IPv6 enabled
> 
>
> Key: HADOOP-12431
> URL: https://issues.apache.org/jira/browse/HADOOP-12431
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.7.1
>Reporter: Nate Edel
>  Labels: ipv6
>
> NameNode works properly on IPv4 or IPv6 single stack (assuming in the latter 
> case that scripts have been changed to disable preferIPv4Stack, and dependent 
> on the client/data node fix in HDFS-8078).  On dual-stack machines, NameNode 
> listens only on IPv4 (even ignoring preferIPv6Addresses being set.)
> Our initial use case for IPv6 is IPv6-only clusters, but ideally we'd support 
> binding to both the IPv4 and IPv6 machine addresses so that we can support 
> heterogeneous clusters (some dual-stack and some IPv6-only machines).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : Hadoop-common-trunk-Java8 #847

2015-12-22 Thread Apache Jenkins Server
See 



Build failed in Jenkins: Hadoop-Common-trunk #2144

2015-12-22 Thread Apache Jenkins Server
See 

Changes:

[kihwal] HDFS-7163. WebHdfsFileSystem should retry reads according to the

--
[...truncated 3875 lines...]
Generating 

Building index for all classes...
Generating 

Generating 

Generating 

Generating 

Generating 

[INFO] Building jar: 

[INFO] 
[INFO] 
[INFO] Building Apache Hadoop MiniKDC 3.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-minikdc ---
[INFO] Deleting 

[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-minikdc 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-minikdc ---
[INFO] There are 9 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[WARNING] Unable to locate Source XRef to link to - DISABLED
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-minikdc ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 

[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hadoop-minikdc ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hadoop-minikdc 
---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 2 source files to 

[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hadoop-minikdc ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hadoop-minikdc ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 2 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.17:test (default-test) @ hadoop-minikdc ---
[INFO] Surefire report directory: 


---
 T E S T S
---

---
 T E S T S
---
Running org.apache.hadoop.minikdc.TestMiniKdc
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.48 sec - in 
org.apache.hadoop.minikdc.TestMiniKdc
Running org.apache.hadoop.minikdc.TestChangeOrgNameAndDomain
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.88 sec - in 
org.apache.hadoop.minikdc.TestChangeOrgNameAndDomain

Results :

Tests run: 6, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-minikdc ---
[INFO] Building jar: 

Joining Strings

2015-12-22 Thread dam6923 .
Hello!

I have been looking through some of the Hadoop code and I see that
there are (at least) four different ways used to join a collection of
Strings with a delimiter:

1) org.apache.hadoop.util.StringUtils.join(...)
2) org.apache.commons.lang.StringUtils.join(...)
3) com.google.common.base.Joiner
4) Manual - StringBuilder/For-Loop

This question came to me when I was looking to replace some instances
of "Manual" with a library approach.  Is there a preferred way of
doing it?  The Hadoop StringUtils did not have the signature I was
looking for: StringUtils.join(Object[]).  However, Apache Commons and
Google Guava both support the signature.
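
For concreteness, here is a minimal sketch of the four variants side by side 
(assuming commons-lang and Guava on the classpath, and using fully qualified 
names to sidestep the StringUtils name clash):
{code}
import java.util.Arrays;
import java.util.List;

public class JoinDemo {
  public static void main(String[] args) {
    List<String> parts = Arrays.asList("a", "b", "c");
    // 1) Hadoop's own util (takes a separator and an Iterable)
    System.out.println(org.apache.hadoop.util.StringUtils.join(",", parts));
    // 2) Apache Commons Lang (fully qualified to avoid the import clash)
    System.out.println(org.apache.commons.lang.StringUtils.join(parts, ","));
    // 3) Google Guava
    System.out.println(com.google.common.base.Joiner.on(",").join(parts));
    // 4) Manual StringBuilder/for-loop
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < parts.size(); i++) {
      if (i > 0) {
        sb.append(",");
      }
      sb.append(parts.get(i));
    }
    System.out.println(sb);
  }
}
{code}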

I imagine one would want to "Eat their own dog food" and use the
Apache Commons library; however, there would be some issues on the
import list, trying to use two classes with the same name.  Should I
bring over the required join methods from Apache Commons and place
them into the Hadoop library?  Should we be using the Joiner?

Thanks.


[jira] [Resolved] (HADOOP-11582) org.apache.hadoop.net.TestDNS failing with NumberFormatException -IPv6 related?

2015-12-22 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark resolved HADOOP-11582.

Resolution: Not A Problem

This was fixed in a different jira.

> org.apache.hadoop.net.TestDNS failing with NumberFormatException -IPv6 
> related?
> ---
>
> Key: HADOOP-11582
> URL: https://issues.apache.org/jira/browse/HADOOP-11582
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net
>Affects Versions: 3.0.0
> Environment: OSX yosemite
>Reporter: Steve Loughran
>Assignee: Varun Saxena
>  Labels: ipv6
>
> {{org.apache.hadoop.net.TestDNS}} failing {{java.lang.NumberFormatException: 
> For input string: ":3246:9aff:fe80:438f"}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


FYI: windows 2.6.3 binaries

2015-12-22 Thread Steve Loughran

I've just built up and published on GitHub the Windows binaries to go with the 
2.6.3 release:

https://github.com/steveloughran/winutils/tree/master/hadoop-2.6.3/bin

These are just off the release commit, #95d8146, built on Windows Server 
2012/64-bit.

They aren't official ASF artifacts, but I've signed them with my GPG key.

Handy for anyone trying to run Spark standalone on Windows & other desktop-side 
apps.

-Steve


[jira] [Created] (HADOOP-12672) Split RPC timeout from IPC ping

2015-12-22 Thread Masatake Iwasaki (JIRA)
Masatake Iwasaki created HADOOP-12672:
-

 Summary: Split RPC timeout from IPC ping
 Key: HADOOP-12672
 URL: https://issues.apache.org/jira/browse/HADOOP-12672
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Masatake Iwasaki






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Common-trunk #2146

2015-12-22 Thread Apache Jenkins Server
See 

Changes:

[rohithsharmaks] YARN-4109. Exception on RM scheduler page loading with labels. 
(Mohammad

--
[...truncated 5415 lines...]
Running org.apache.hadoop.metrics2.impl.TestMetricsConfig
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.651 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsConfig
Running org.apache.hadoop.metrics2.impl.TestGraphiteMetrics
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.723 sec - in 
org.apache.hadoop.metrics2.impl.TestGraphiteMetrics
Running org.apache.hadoop.metrics2.impl.TestMetricsVisitor
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.769 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsVisitor
Running org.apache.hadoop.metrics2.source.TestJvmMetrics
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.14 sec - in 
org.apache.hadoop.metrics2.source.TestJvmMetrics
Running org.apache.hadoop.metrics2.sink.TestFileSink
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.847 sec - in 
org.apache.hadoop.metrics2.sink.TestFileSink
Running org.apache.hadoop.metrics2.sink.ganglia.TestGangliaSink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.507 sec - in 
org.apache.hadoop.metrics2.sink.ganglia.TestGangliaSink
Running org.apache.hadoop.metrics2.filter.TestPatternFilter
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.802 sec - in 
org.apache.hadoop.metrics2.filter.TestPatternFilter
Running org.apache.hadoop.log.TestLogLevel
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.242 sec - in 
org.apache.hadoop.log.TestLogLevel
Running org.apache.hadoop.log.TestLog4Json
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.738 sec - in 
org.apache.hadoop.log.TestLog4Json
Running org.apache.hadoop.jmx.TestJMXJsonServlet
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.74 sec - in 
org.apache.hadoop.jmx.TestJMXJsonServlet
Running org.apache.hadoop.ipc.TestIPCServerResponder
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.636 sec - in 
org.apache.hadoop.ipc.TestIPCServerResponder
Running org.apache.hadoop.ipc.TestRPCWaitForProxy
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.556 sec - in 
org.apache.hadoop.ipc.TestRPCWaitForProxy
Running org.apache.hadoop.ipc.TestSocketFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.645 sec - in 
org.apache.hadoop.ipc.TestSocketFactory
Running org.apache.hadoop.ipc.TestCallQueueManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.243 sec - in 
org.apache.hadoop.ipc.TestCallQueueManager
Running org.apache.hadoop.ipc.TestIdentityProviders
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.095 sec - in 
org.apache.hadoop.ipc.TestIdentityProviders
Running org.apache.hadoop.ipc.TestWeightedRoundRobinMultiplexer
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.885 sec - in 
org.apache.hadoop.ipc.TestWeightedRoundRobinMultiplexer
Running org.apache.hadoop.ipc.TestRPCCompatibility
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.711 sec - in 
org.apache.hadoop.ipc.TestRPCCompatibility
Running org.apache.hadoop.ipc.TestProtoBufRpc
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.879 sec - in 
org.apache.hadoop.ipc.TestProtoBufRpc
Running org.apache.hadoop.ipc.TestMultipleProtocolServer
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.784 sec - in 
org.apache.hadoop.ipc.TestMultipleProtocolServer
Running org.apache.hadoop.ipc.TestRPCCallBenchmark
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.049 sec - in 
org.apache.hadoop.ipc.TestRPCCallBenchmark
Running org.apache.hadoop.ipc.TestRetryCacheMetrics
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.837 sec - in 
org.apache.hadoop.ipc.TestRetryCacheMetrics
Running org.apache.hadoop.ipc.TestMiniRPCBenchmark
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.702 sec - in 
org.apache.hadoop.ipc.TestMiniRPCBenchmark
Running org.apache.hadoop.ipc.TestIPC
Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 96.592 sec - 
in org.apache.hadoop.ipc.TestIPC
Running org.apache.hadoop.ipc.TestDecayRpcScheduler
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.536 sec - in 
org.apache.hadoop.ipc.TestDecayRpcScheduler
Running org.apache.hadoop.ipc.TestFairCallQueue
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.985 sec - in 
org.apache.hadoop.ipc.TestFairCallQueue
Running org.apache.hadoop.ipc.TestServer
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.005 sec - in 
org.apache.hadoop.ipc.TestServer
Running org.apache.hadoop.ipc.TestRPC
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.857 sec - 
in org.apache.hadoop.ipc.TestRPC
Running 

[jira] [Created] (HADOOP-12666) Support Windows Azure Data Lake - as a file system in Hadoop

2015-12-22 Thread vishwajeet dusane (JIRA)
vishwajeet dusane created HADOOP-12666:
--

 Summary: Support Windows Azure Data Lake - as a file system in 
Hadoop
 Key: HADOOP-12666
 URL: https://issues.apache.org/jira/browse/HADOOP-12666
 Project: Hadoop Common
  Issue Type: New Feature
  Components: tools
Reporter: vishwajeet dusane


h2. Description
This JIRA describes a new file system implementation for accessing Windows 
Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
Hadoop applications such as MR, Hive, HBase, etc., to use the ADL store as 
input or output.
 
ADL is an ultra-high-capacity store, optimized for massive throughput, with 
rich management and security features. More details are available at 
https://azure.microsoft.com/en-us/services/data-lake-store/

h2. High level design
The ADL file system exposes RESTful interfaces compatible with the WebHdfs 
specification in 2.7.1.
At a high level, the code here extends the SWebHdfsFileSystem class to provide 
an implementation for accessing ADL storage; the scheme adl is used for 
accessing it over HTTPS. We use the URI scheme:
{code}adl:///path/to/file{code} 
to address individual files/folders. Tests are implemented mostly using a 
Contract implementation for the ADL functionality, with an option to test 
against real ADL storage if configured.
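
As an illustration (not from the patch), a client would then address the store 
through the standard FileSystem API; the implementation class name below is 
hypothetical:
{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AdlClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // fs.<scheme>.impl is Hadoop's standard binding key; the class name here
    // is a placeholder, not necessarily the one introduced by this JIRA.
    conf.set("fs.adl.impl", "org.apache.hadoop.fs.adl.AdlFileSystem");
    FileSystem fs = FileSystem.get(URI.create("adl:///path/to/file"), conf);
    System.out.println(fs.exists(new Path("/path/to/file")));
  }
}
{code}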

h2. Credits and history
This has been ongoing work for a while, and the early version of this work can 
be seen in. Credit for this work goes to the team: [~vishwajeet.dusane], 
[~snayak], [~srevanka], [~kiranch], [~chakrab], [~omkarksa], [~snvijaya], 
[~ansaiprasanna]  [~jsangwan]

h2. Test
Besides the Contract tests, we have used ADL as the additional file system in 
the current public preview release. Various customer and test workloads have 
been run against clusters with such configurations for quite some time. The 
current version reflects the version of the code tested and used in our 
production environment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : Hadoop-Common-trunk #2141

2015-12-22 Thread Apache Jenkins Server
See 



Re: [VOTE] Release Apache Hadoop 2.7.2 RC1

2015-12-22 Thread Junping Du
Checked the 2.7.2-RC1 tag, which matches exactly with the 2.7.2 branch.
Downloaded the release bits and deployed a single-node cluster, which works 
fine running some example jobs.
Built from src and checked signatures; all looks good.
Checked the release notes, which list 138 commits (HADOOP: 22, HDFS: 42, 
MAPREDUCE: 17, YARN: 57) and match exactly the JIRA fixed list in 2.7.2.
However, when I looked at our commit log and CHANGES.txt, I found some things 
we are missing:
1. HDFS-9470 and YARN-4424 are missing from the 2.7.2 branch and RC1 tag.
2. HADOOP-5323 and HDFS-8767 are missing in CHANGES.txt.
For 2, I think we can fix this when creating the final tag. For 1, I will let 
the release manager decide whether HDFS-9470 & YARN-4424 have to go into 
2.7.2. If not, I will +1 and we can fix the release notes later.

Thanks,

Junping

From: Chang Li 
Sent: Tuesday, December 22, 2015 2:35 PM
To: mapreduce-...@hadoop.apache.org
Cc: common-dev@hadoop.apache.org; yarn-...@hadoop.apache.org; 
hdfs-...@hadoop.apache.org; Vinod Kumar Vavilapalli
Subject: Re: [VOTE] Release Apache Hadoop 2.7.2 RC1

+1 (non binding)
Downloaded src and built on a single-node cluster.
Ran some MR jobs successfully.
Verified signature.

Thanks,
Chang

On Mon, Dec 21, 2015 at 9:34 PM, Tsuyoshi Ozawa  wrote:

> +1
>
> - downloaded src and bin tar balls and verified signatures.
> - built Tez and Spark with 2.7.2 artifacts and JDK7.
> - ran tests of Tez with 2.7.2 artifacts; they passed.
>
> FYI: YARN-4348, reported by Jian, is one of the critical issues of the 2.7.2
> release. It's better to release 2.7.3 as soon as possible after this
> release.
>
> Thanks,
> - Tsuyoshi
>
> On Tue, Dec 22, 2015 at 4:51 AM, Wangda Tan  wrote:
> > +1 (binding)
> >
> > - Build & deploy single-node Hadoop from source code
> > - Add/Remove node labels to queues/nodes
> > - Run distributed shell commands using default/specified node labels
> >
> > Thanks,
> > Wangda
> >
> >
> > On Mon, Dec 21, 2015 at 9:58 AM, Masatake Iwasaki <
> > iwasak...@oss.nttdata.co.jp> wrote:
> >
> >> +1(non-binding)
> >>
> >> - verified mds and signature of source and binary tarball
> >> - started 3 node cluster and ran example jobs such as wordcount and
> >> terasort
> >> - built from source tarball with -Pnative on CentOS 7 and OpenJDK 7
> >> - built site documentation and skimmed the contents
> >>
> >> Thanks,
> >> Masatake Iwasaki
> >>
> >>
> >>
> >> On 12/17/15 11:49, Vinod Kumar Vavilapalli wrote:
> >>
> >>> Hi all,
> >>>
> >>> I've created a release candidate RC1 for Apache Hadoop 2.7.2.
> >>>
> >>> As discussed before, this is the next maintenance release to follow up
> >>> 2.7.1.
> >>>
> >>> The RC is available for validation at:
> >>> http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/ <
> >>> http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/>
> >>>
> >>> The RC tag in git is: release-2.7.2-RC1
> >>>
> >>> The maven artifacts are available via repository.apache.org <
> >>> http://repository.apache.org/> at
> >>>
> https://repository.apache.org/content/repositories/orgapachehadoop-1026/
> >>> <
> https://repository.apache.org/content/repositories/orgapachehadoop-1026/
> >>> >
> >>>
> >>> The release-notes are inside the tar-balls at location
> >>> hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I
> >>> hosted this at
> >>> http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/releasenotes.html
> for
> >>> quick perusal.
> >>>
> >>> As you may have noted,
> >>>   - The RC0 related voting thread got halted due to some critical
> issues.
> >>> It took a while again to get all those blockers out of the way.
> See
> >>> the previous voting thread [3] for details.
> >>>   - Before RC0, an unusually long 2.6.3 release caused 2.7.2 to slip by
> >>> quite a bit. This release's related discussion threads are linked
> below:
> >>> [1] and [2].
> >>>
> >>> Please try the release and vote; the vote will run for the usual 5
> days.
> >>>
> >>> Thanks,
> >>> Vinod
> >>>
> >>> [1]: 2.7.2 release plan: http://markmail.org/message/oozq3gvd4nhzsaes
> <
> >>> http://markmail.org/message/oozq3gvd4nhzsaes>
> >>> [2]: Planning Apache Hadoop 2.7.2
> >>> http://markmail.org/message/iktqss2qdeykgpqk <
> >>> http://markmail.org/message/iktqss2qdeykgpqk>
> >>> [3]: [VOTE] Release Apache Hadoop 2.7.2 RC0:
> >>> http://markmail.org/message/5txhvr2qdiqglrwc
> >>>
> >>>
> >>>
> >>
>


[jira] [Created] (HADOOP-12667) s3a: Support createNonRecursive API

2015-12-22 Thread Sean Mackrory (JIRA)
Sean Mackrory created HADOOP-12667:
--

 Summary: s3a: Support createNonRecursive API
 Key: HADOOP-12667
 URL: https://issues.apache.org/jira/browse/HADOOP-12667
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Sean Mackrory
Assignee: Sean Mackrory


HBase and other clients rely on the createNonRecursive API, which was recently 
un-deprecated. S3A currently does not support it.
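
For context, a minimal sketch of the call in question (standard FileSystem API; 
the paths below are placeholders): unlike create(), createNonRecursive() must 
fail if the parent directory does not already exist, rather than creating it 
implicitly.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateNonRecursiveSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Throws if /existing-dir is missing, instead of silently creating it --
    // the semantics HBase relies on.
    FSDataOutputStream out = fs.createNonRecursive(
        new Path("/existing-dir/file"), true /* overwrite */,
        4096, (short) 1, 64 * 1024 * 1024, null /* progress */);
    out.close();
  }
}
{code}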



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Hadoop 2.7.2 RC1

2015-12-22 Thread Chang Li
+1 (non binding)
Downloaded src and built on a single-node cluster.
Ran some MR jobs successfully.
Verified signature.

Thanks,
Chang

On Mon, Dec 21, 2015 at 9:34 PM, Tsuyoshi Ozawa  wrote:

> +1
>
> - downloaded src and bin tar balls and verified signatures.
> - built Tez and Spark with 2.7.2 artifacts and JDK7.
> - ran tests of Tez with 2.7.2 artifacts; they passed.
>
> FYI: YARN-4348, reported by Jian, is one of the critical issues of the 2.7.2
> release. It's better to release 2.7.3 as soon as possible after this
> release.
>
> Thanks,
> - Tsuyoshi
>
> On Tue, Dec 22, 2015 at 4:51 AM, Wangda Tan  wrote:
> > +1 (binding)
> >
> > - Build & deploy single-node Hadoop from source code
> > - Add/Remove node labels to queues/nodes
> > - Run distributed shell commands using default/specified node labels
> >
> > Thanks,
> > Wangda
> >
> >
> > On Mon, Dec 21, 2015 at 9:58 AM, Masatake Iwasaki <
> > iwasak...@oss.nttdata.co.jp> wrote:
> >
> >> +1(non-binding)
> >>
> >> - verified mds and signature of source and binary tarball
> >> - started 3 node cluster and ran example jobs such as wordcount and
> >> terasort
> >> - built from source tarball with -Pnative on CentOS 7 and OpenJDK 7
> >> - built site documentation and skimmed the contents
> >>
> >> Thanks,
> >> Masatake Iwasaki
> >>
> >>
> >>
> >> On 12/17/15 11:49, Vinod Kumar Vavilapalli wrote:
> >>
> >>> Hi all,
> >>>
> >>> I've created a release candidate RC1 for Apache Hadoop 2.7.2.
> >>>
> >>> As discussed before, this is the next maintenance release to follow up
> >>> 2.7.1.
> >>>
> >>> The RC is available for validation at:
> >>> http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/ <
> >>> http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/>
> >>>
> >>> The RC tag in git is: release-2.7.2-RC1
> >>>
> >>> The maven artifacts are available via repository.apache.org <
> >>> http://repository.apache.org/> at
> >>>
> https://repository.apache.org/content/repositories/orgapachehadoop-1026/
> >>> <
> https://repository.apache.org/content/repositories/orgapachehadoop-1026/
> >>> >
> >>>
> >>> The release-notes are inside the tar-balls at location
> >>> hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I
> >>> hosted this at
> >>> http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/releasenotes.html
> for
> >>> quick perusal.
> >>>
> >>> As you may have noted,
> >>>   - The RC0 related voting thread got halted due to some critical
> issues.
> >>> It took a while again to get all those blockers out of the way.
> See
> >>> the previous voting thread [3] for details.
> >>>   - Before RC0, an unusually long 2.6.3 release caused 2.7.2 to slip by
> >>> quite a bit. This release's related discussion threads are linked
> below:
> >>> [1] and [2].
> >>>
> >>> Please try the release and vote; the vote will run for the usual 5
> days.
> >>>
> >>> Thanks,
> >>> Vinod
> >>>
> >>> [1]: 2.7.2 release plan: http://markmail.org/message/oozq3gvd4nhzsaes
> <
> >>> http://markmail.org/message/oozq3gvd4nhzsaes>
> >>> [2]: Planning Apache Hadoop 2.7.2
> >>> http://markmail.org/message/iktqss2qdeykgpqk <
> >>> http://markmail.org/message/iktqss2qdeykgpqk>
> >>> [3]: [VOTE] Release Apache Hadoop 2.7.2 RC0:
> >>> http://markmail.org/message/5txhvr2qdiqglrwc
> >>>
> >>>
> >>>
> >>
>


[jira] [Created] (HADOOP-12668) Modify HDFS embedded jetty server logic in HttpServer2.java to exclude weak ciphers through ssl-server.conf

2015-12-22 Thread Vijay Singh (JIRA)
Vijay Singh created HADOOP-12668:


 Summary: Modify HDFS embedded jetty server logic in 
HttpServer2.java to exclude weak ciphers through ssl-server.conf
 Key: HADOOP-12668
 URL: https://issues.apache.org/jira/browse/HADOOP-12668
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.7.1
Reporter: Vijay Singh
Assignee: Vijay Singh
Priority: Critical
 Fix For: 2.7.2


Currently the embedded Jetty server used across all Hadoop services is 
configured through the ssl-server.xml file from their respective configuration 
section. However, the SSL/TLS protocol used by these Jetty servers can be 
downgraded to weak cipher suites. This code change aims to add the following 
functionality:
1) Add logic in hadoop-common (HttpServer2.java and associated interfaces) to 
spawn Jetty servers with the ability to exclude weak cipher suites. I propose 
we make this configurable through ssl-server.xml, so each service can choose 
to disable specific ciphers.
2) Modify DFSUtil.java, used by HDFS code, to supply the new parameter 
ssl.server.exclude.cipher.list to the hadoop-common code, so it can exclude 
the ciphers supplied through this key.
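
A rough sketch of the proposed behaviour (not the actual patch; the 
ssl.server.exclude.cipher.list key is the one proposed above):
{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import javax.net.ssl.SSLServerSocket;
import javax.net.ssl.SSLServerSocketFactory;
import org.apache.hadoop.conf.Configuration;

public class ExcludeCipherSketch {
  public static void main(String[] args) throws Exception {
    // Read the exclusion list from ssl-server.xml, configurable per service.
    Configuration sslConf = new Configuration(false);
    sslConf.addResource("ssl-server.xml");
    List<String> excluded =
        Arrays.asList(sslConf.getTrimmedStrings("ssl.server.exclude.cipher.list"));
    // Drop the excluded suites from whatever the JVM enables by default.
    SSLServerSocketFactory factory =
        (SSLServerSocketFactory) SSLServerSocketFactory.getDefault();
    SSLServerSocket socket = (SSLServerSocket) factory.createServerSocket(0);
    List<String> enabled =
        new ArrayList<>(Arrays.asList(socket.getEnabledCipherSuites()));
    enabled.removeAll(excluded);
    socket.setEnabledCipherSuites(enabled.toArray(new String[0]));
    socket.close();
  }
}
{code}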



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : Hadoop-common-trunk-Java8 #850

2015-12-22 Thread Apache Jenkins Server
See 



Build failed in Jenkins: Hadoop-Common-trunk #2142

2015-12-22 Thread Apache Jenkins Server
See 

Changes:

[cmccabe] HDFS-9589. Block files which have been hardlinked should be duplicated

--
[...truncated 5560 lines...]
Tests run: 6, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 1.019 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractRename
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 1.134 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractMkdir
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 1.238 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractOpen
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 1.589 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractAppend
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractGetFileStatus
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.276 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractGetFileStatus
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.955 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractOpen
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractLoaded
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.735 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractLoaded
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.779 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractDelete
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.02 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractMkdir
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.653 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractRename
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.278 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractCreate
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractSetTimes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.373 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractSetTimes
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.436 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractSeek
Running org.apache.hadoop.fs.TestContentSummary
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.713 sec - in 
org.apache.hadoop.fs.TestContentSummary
Running org.apache.hadoop.fs.TestTrash
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.105 sec - in 
org.apache.hadoop.fs.TestTrash
Running org.apache.hadoop.fs.TestLocalFSFileContextCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.176 sec - in 
org.apache.hadoop.fs.TestLocalFSFileContextCreateMkdir
Running org.apache.hadoop.fs.TestLocalFsFCStatistics
Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 13.181 sec <<< 
FAILURE! - in org.apache.hadoop.fs.TestLocalFsFCStatistics
testStatisticsThreadLocalDataCleanUp(org.apache.hadoop.fs.TestLocalFsFCStatistics)
  Time elapsed: 10.387 sec  <<< ERROR!
java.util.concurrent.TimeoutException: Timed out waiting for condition. Thread 
diagnostics:
Timestamp: 2015-12-22 07:06:46,690

"Finalizer" daemon prio=8 tid=3 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:189)
"Signal Dispatcher" daemon prio=9 tid=4 runnable
java.lang.Thread.State: RUNNABLE
"main"  prio=5 tid=1 timed_waiting
java.lang.Thread.State: TIMED_WAITING
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1289)
at 
org.junit.internal.runners.statements.FailOnTimeout.evaluateStatement(FailOnTimeout.java:26)
at 
org.junit.internal.runners.statements.FailOnTimeout.evaluate(FailOnTimeout.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 

Build failed in Jenkins: Hadoop-Common-trunk #2143

2015-12-22 Thread Apache Jenkins Server
See 

Changes:

[cnauroth] HDFS-9458. TestBackupNode always binds to port 50070, which can cause

--
[...truncated 5431 lines...]
Running org.apache.hadoop.fs.TestStat
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.798 sec - in 
org.apache.hadoop.fs.TestStat
Running org.apache.hadoop.fs.TestFsOptions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.071 sec - in 
org.apache.hadoop.fs.TestFsOptions
Running org.apache.hadoop.fs.TestGlobExpander
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.053 sec - in 
org.apache.hadoop.fs.TestGlobExpander
Running org.apache.hadoop.fs.TestHarFileSystemBasics
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.444 sec - in 
org.apache.hadoop.fs.TestHarFileSystemBasics
Running org.apache.hadoop.fs.TestFileContextResolveAfs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.809 sec - in 
org.apache.hadoop.fs.TestFileContextResolveAfs
Running org.apache.hadoop.fs.TestFileSystemCanonicalization
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.456 sec - in 
org.apache.hadoop.fs.TestFileSystemCanonicalization
Running org.apache.hadoop.fs.TestFsShellCopy
Tests run: 16, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 1.671 sec - in 
org.apache.hadoop.fs.TestFsShellCopy
Running org.apache.hadoop.fs.TestSymlinkLocalFSFileContext
Tests run: 63, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 3.605 sec - in 
org.apache.hadoop.fs.TestSymlinkLocalFSFileContext
Running org.apache.hadoop.fs.TestFileContext
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.367 sec - in 
org.apache.hadoop.fs.TestFileContext
Running org.apache.hadoop.fs.TestDelegationTokenRenewer
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.68 sec - in 
org.apache.hadoop.fs.TestDelegationTokenRenewer
Running org.apache.hadoop.fs.TestFileSystemCaching
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.114 sec - in 
org.apache.hadoop.fs.TestFileSystemCaching
Running org.apache.hadoop.fs.TestFcLocalFsUtil
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.846 sec - in 
org.apache.hadoop.fs.TestFcLocalFsUtil
Running org.apache.hadoop.fs.TestLocalFsFCStatistics
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.914 sec - in 
org.apache.hadoop.fs.TestLocalFsFCStatistics
Running org.apache.hadoop.fs.TestTruncatedInputBug
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.742 sec - in 
org.apache.hadoop.fs.TestTruncatedInputBug
Running org.apache.hadoop.fs.TestDU
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.483 sec - in 
org.apache.hadoop.fs.TestDU
Running org.apache.hadoop.fs.TestFsShell
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.846 sec - in 
org.apache.hadoop.fs.TestFsShell
Running org.apache.hadoop.fs.TestLocalDirAllocator
Tests run: 36, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.42 sec - in 
org.apache.hadoop.fs.TestLocalDirAllocator
Running org.apache.hadoop.fs.viewfs.TestChRootedFileSystem
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.506 sec - in 
org.apache.hadoop.fs.viewfs.TestChRootedFileSystem
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemDelegationTokenSupport
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.855 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFileSystemDelegationTokenSupport
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAuthorityLocalFs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.718 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsWithAuthorityLocalFs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemDelegation
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.045 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFileSystemDelegation
Running org.apache.hadoop.fs.viewfs.TestViewFsConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.609 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsConfig
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.951 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem
Running org.apache.hadoop.fs.viewfs.TestFcPermissionsLocalFs
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.119 sec - in 
org.apache.hadoop.fs.viewfs.TestFcPermissionsLocalFs
Running org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.581 sec - in 
org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
Running org.apache.hadoop.fs.viewfs.TestViewFsURIs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.686 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsURIs
Running org.apache.hadoop.fs.viewfs.TestViewFsTrash
Tests run: 1,