[jira] [Created] (HADOOP-11090) [Umbrella] Issues with Java 8 in Hadoop

2014-09-12 Thread Mohammad Kamrul Islam (JIRA)
Mohammad Kamrul Islam created HADOOP-11090:
--

 Summary: [Umbrella] Issues with Java 8 in Hadoop
 Key: HADOOP-11090
 URL: https://issues.apache.org/jira/browse/HADOOP-11090
 Project: Hadoop Common
  Issue Type: Task
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam


Java 8 is coming quickly to various clusters. Making sure Hadoop works seamlessly 
with Java 8 is important for the Apache community.
  
This JIRA tracks the issues and experiences encountered during the Java 8 
migration. If you find a potential bug, please create a separate JIRA, either 
as a sub-task of or linked to this JIRA.
If you find a useful Hadoop or JVM configuration tuning, you can create a JIRA as 
well, or add a comment here.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11089) change "-tokenCacheFile" option to support non-local FS URIs.

2014-09-12 Thread zhihai xu (JIRA)
zhihai xu created HADOOP-11089:
--

 Summary: change "-tokenCacheFile" option to support non-local FS 
URIs.
 Key: HADOOP-11089
 URL: https://issues.apache.org/jira/browse/HADOOP-11089
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: zhihai xu
Assignee: zhihai xu


change "-tokenCacheFile" option to support non-local FS URIs.
The current code in GenericOptionsParser only supports local FS URIs; it would 
be better to support non-local FS URIs as well. 
{code}
FileSystem localFs = FileSystem.getLocal(conf);
Path p = localFs.makeQualified(new Path(fileName));
if (!localFs.exists(p)) {
  throw new FileNotFoundException("File " + fileName + " does not exist.");
}
{code}
We can change the above code to:
{code}
FileSystem localFs = FileSystem.getLocal(conf);
Path p = localFs.makeQualified(new Path(fileName));
if (!p.getFileSystem(conf).exists(p)) {
  throw new FileNotFoundException("File " + fileName + " does not exist.");
}
{code}
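For illustration, a minimal self-contained sketch of the proposed behavior (the 
class name, helper method, and sample URIs below are hypothetical, not part of 
the patch): Path.getFileSystem(conf) resolves the filesystem from the URI 
scheme, so both local and non-local token cache files can be checked.
{code}
import java.io.FileNotFoundException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TokenCacheFileCheck {

  // Verify that the token cache file exists, using the filesystem implied
  // by the URI scheme (file://, hdfs://, ...) instead of always the local one.
  static void checkExists(Configuration conf, String fileName)
      throws Exception {
    Path p = new Path(fileName);
    FileSystem fs = p.getFileSystem(conf);      // resolved from the URI scheme
    if (!fs.exists(fs.makeQualified(p))) {
      throw new FileNotFoundException("File " + fileName + " does not exist.");
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    checkExists(conf, "file:///tmp/tokens.bin");                    // local FS
    checkExists(conf, "hdfs://namenode:8020/user/foo/tokens.bin");  // non-local FS
  }
}
{code}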
This issue will depend on MAPREDUCE-6086.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Git repo ready to use

2014-09-12 Thread Colin McCabe
It's an issue with test-patch.sh.  See
https://issues.apache.org/jira/browse/HADOOP-11084

best,
Colin

On Mon, Sep 8, 2014 at 3:38 PM, Andrew Wang  wrote:
> We're still not seeing findbugs results show up on precommit runs. I see
> that we're archiving "../patchprocess/*", and Ted thinks that since it's
> not in $WORKSPACE it's not getting picked up. Can we get confirmation of
> this issue? If so, we could just add "patchprocess" to the toplevel
> .gitignore.
>
> On Thu, Sep 4, 2014 at 8:54 AM, Sangjin Lee  wrote:
>
>> That's good to know. Thanks.
>>
>>
>> On Wed, Sep 3, 2014 at 11:15 PM, Vinayakumar B 
>> wrote:
>>
>> > I think its still pointing to old svn repository which is just read only
>> > now.
>> >
>> > You can use latest mirror:
>> > https://github.com/apache/hadoop
>> >
>> > Regards,
>> > Vinay
>> > On Sep 4, 2014 11:37 AM, "Sangjin Lee"  wrote:
>> >
>> > > It seems like the github mirror at
>> > https://github.com/apache/hadoop-common
>> > > has stopped getting updates as of 8/22. Could this mirror have been
>> > broken
>> > > by the git transition?
>> > >
>> > > Thanks,
>> > > Sangjin
>> > >
>> > >
>> > > On Fri, Aug 29, 2014 at 11:51 AM, Ted Yu  wrote:
>> > >
>> > > > From https://builds.apache.org/job/Hadoop-hdfs-trunk/1854/console :
>> > > >
>> > > > ERROR: No artifacts found that match the file pattern
>> > > > "trunk/hadoop-hdfs-project/*/target/*.tar.gz". Configuration
>> > > > error? ERROR: 'trunk/hadoop-hdfs-project/*/target/*.tar.gz' doesn't
>> > > > match anything, but 'hadoop-hdfs-project/*/target/*.tar.gz' does.
>> > > > Perhaps that's what you mean?
>> > > >
>> > > >
>> > > > I corrected the path to hdfs tar ball.
>> > > >
>> > > >
>> > > > FYI
>> > > >
>> > > >
>> > > >
>> > > > On Fri, Aug 29, 2014 at 8:48 AM, Alejandro Abdelnur <
>> t...@cloudera.com
>> > >
>> > > > wrote:
>> > > >
>> > > > > it seems we missed updating the HADOOP precommit job to use Git, it
>> > was
>> > > > > still using SVN. I've just updated it.
>> > > > >
>> > > > > thx
>> > > > >
>> > > > >
>> > > > > On Thu, Aug 28, 2014 at 9:26 PM, Ted Yu 
>> wrote:
>> > > > >
>> > > > > > Currently patchprocess/ (contents shown below) is one level higher
>> > > > > > than ${WORKSPACE}
>> > > > > >
>> > > > > > diffJavadocWarnings.txt
>> > > > > > newPatchFindbugsWarningshadoop-hdfs.html
>> > > > > > patchFindBugsOutputhadoop-hdfs.txt
>> > > > > > patchReleaseAuditOutput.txt
>> > > > > > trunkJavadocWarnings.txt
>> > > > > > filteredPatchJavacWarnings.txt
>> > > > > > newPatchFindbugsWarningshadoop-hdfs.xml
>> > > > > > patchFindbugsWarningshadoop-hdfs.xml
>> > > > > > patchReleaseAuditWarnings.txt
>> > > > > > filteredTrunkJavacWarnings.txt
>> > > > > > patch
>> > > > > > patchJavacWarnings.txt
>> > > > > > testrun_hadoop-hdfs.txt
>> > > > > > jira
>> > > > > > patchEclipseOutput.txt
>> > > > > > patchJavadocWarnings.txt
>> > > > > > trunkJavacWarnings.txt
>> > > > > >
>> > > > > > Under Files to archive input box of
>> > PreCommit-HDFS-Build/configure, I
>> > > > > saw:
>> > > > > >
>> > > > > > '../patchprocess/*' doesn't match anything, but '*' does. Perhaps
>> > > > that's
>> > > > > > what you mean?
>> > > > > >
>> > > > > > I guess once patchprocess is moved back under ${WORKSPACE}, a lot
>> > of
>> > > > > things
>> > > > > > would be back to normal.
>> > > > > >
>> > > > > > Cheers
>> > > > > >
>> > > > > > On Thu, Aug 28, 2014 at 9:16 PM, Alejandro Abdelnur <
>> > > t...@cloudera.com
>> > > > >
>> > > > > > wrote:
>> > > > > >
>> > > > > > > i'm also seeing broken links for javadocs warnings.
>> > > > > > >
>> > > > > > > Alejandro
>> > > > > > > (phone typing)
>> > > > > > >
>> > > > > > > > On Aug 28, 2014, at 20:00, Andrew Wang <
>> > andrew.w...@cloudera.com
>> > > >
>> > > > > > wrote:
>> > > > > > > >
>> > > > > > > > I noticed that the JUnit test results aren't getting picked
>> up
>> > > > > > anymore. I
>> > > > > > > > suspect we just need to update the path to the surefire
>> output,
>> > > but
>> > > > > > based
>> > > > > > > > on a quick examination I'm not sure what that is.
>> > > > > > > >
>> > > > > > > > Does someone mind taking another look?
>> > > > > > > >
>> > > > > > > >
>> > > > > > > > On Thu, Aug 28, 2014 at 4:21 PM, Karthik Kambatla <
>> > > > > ka...@cloudera.com>
>> > > > > > > > wrote:
>> > > > > > > >
>> > > > > > > >> Thanks Giri and Ted for fixing the builds.
>> > > > > > > >>
>> > > > > > > >>
>> > > > > > > >>> On Thu, Aug 28, 2014 at 9:49 AM, Ted Yu <
>> yuzhih...@gmail.com
>> > >
>> > > > > wrote:
>> > > > > > > >>>
>> > > > > > > >>> Charles:
>> > > > > > > >>> QA build is running for your JIRA:
>> > > > > > > >>>
>> > > > >
>> https://builds.apache.org/job/PreCommit-hdfs-Build/7828/parameters/
>> > > > > > > >>>
>> > > > > > > >>> Cheers
>> > > > > > > >>>
>> > > > > > > >>>
>> > > > > > >  On Thu, Aug 28, 2014 at 9:41 AM, Charles Lamb <
>> > > > cl...@cloudera.com
>> > > > > >
>> > > > > > > >>> wrote:
>> > > > > >

[jira] [Created] (HADOOP-11088) TestKeyShell and TestCredShell assume UNIX path separator for JCEKS key store path

2014-09-12 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HADOOP-11088:
---

 Summary: TestKeyShell and TestCredShell assume UNIX path separator 
for JCEKS key store path
 Key: HADOOP-11088
 URL: https://issues.apache.org/jira/browse/HADOOP-11088
 Project: Hadoop Common
  Issue Type: Test
  Components: security
Affects Versions: 2.4.1
Reporter: Xiaoyu Yao


TestKeyShell and TestCredShell assume the UNIX path separator for the JCEKS key 
store path. This causes the tests to fail on Windows, which uses a different 
path separator. The fix should be something like:

{code}
-jceksProvider = "jceks://file" + tmpDir + "/keystore.jceks";
+final Path jksPath = new Path(tmpDir.toString(), "keystore.jceks");
+jceksProvider = "jceks://file" + jksPath.toUri();
{code}
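For context, a minimal standalone sketch (not part of the patch; the class name 
and temp directory are illustrative) of why routing the keystore location 
through org.apache.hadoop.fs.Path helps: Path normalizes the separators in its 
string form, so the resulting jceks URI should stay well-formed on Windows as 
well as on UNIX.
{code}
import java.io.File;

import org.apache.hadoop.fs.Path;

public class JceksProviderUriDemo {
  public static void main(String[] args) {
    // Hypothetical temp directory; on Windows its string form contains backslashes.
    File tmpDir = new File(System.getProperty("java.io.tmpdir"), "keystore-test");

    // Naive concatenation keeps the platform separator in the URI.
    String naive = "jceks://file" + tmpDir + "/keystore.jceks";

    // Building the location with Path normalizes the separators, which is
    // what the proposed fix relies on.
    Path jksPath = new Path(tmpDir.toString(), "keystore.jceks");
    String portable = "jceks://file" + jksPath.toUri();

    System.out.println(naive);
    System.out.println(portable);
  }
}
{code}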






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Common-0.23-Build #1070

2014-09-12 Thread Apache Jenkins Server
See 

--
[...truncated 8263 lines...]
Running org.apache.hadoop.io.TestBloomMapFile
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.883 sec
Running org.apache.hadoop.io.TestObjectWritableProtos
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.323 sec
Running org.apache.hadoop.io.TestTextNonUTF8
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.046 sec
Running org.apache.hadoop.io.nativeio.TestNativeIO
Tests run: 9, Failures: 0, Errors: 0, Skipped: 9, Time elapsed: 0.16 sec
Running org.apache.hadoop.io.TestSortedMapWritable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.195 sec
Running org.apache.hadoop.io.TestMapFile
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.654 sec
Running org.apache.hadoop.io.TestUTF8
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.318 sec
Running org.apache.hadoop.io.TestBoundedByteArrayOutputStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.042 sec
Running org.apache.hadoop.io.retry.TestRetryProxy
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.21 sec
Running org.apache.hadoop.io.retry.TestFailoverProxy
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.195 sec
Running org.apache.hadoop.io.TestSetFile
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.925 sec
Running org.apache.hadoop.io.serializer.TestWritableSerialization
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.314 sec
Running org.apache.hadoop.io.serializer.TestSerializationFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.279 sec
Running org.apache.hadoop.io.serializer.avro.TestAvroSerialization
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.536 sec
Running org.apache.hadoop.util.TestGenericOptionsParser
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.691 sec
Running org.apache.hadoop.util.TestReflectionUtils
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.523 sec
Running org.apache.hadoop.util.TestJarFinder
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.779 sec
Running org.apache.hadoop.util.TestPureJavaCrc32
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.31 sec
Running org.apache.hadoop.util.TestHostsFileReader
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.186 sec
Running org.apache.hadoop.util.TestShutdownHookManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.143 sec
Running org.apache.hadoop.util.TestDiskChecker
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.485 sec
Running org.apache.hadoop.util.TestStringUtils
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.14 sec
Running org.apache.hadoop.util.TestGenericsUtil
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.26 sec
Running org.apache.hadoop.util.TestAsyncDiskService
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.124 sec
Running org.apache.hadoop.util.TestProtoUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.078 sec
Running org.apache.hadoop.util.TestDataChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.187 sec
Running org.apache.hadoop.util.TestRunJar
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.127 sec
Running org.apache.hadoop.util.TestOptions
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.079 sec
Running org.apache.hadoop.util.TestShell
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.199 sec
Running org.apache.hadoop.util.TestIndexedSort
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.537 sec
Running org.apache.hadoop.util.TestStringInterner
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.115 sec
Running org.apache.hadoop.record.TestRecordVersioning
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.159 sec
Running org.apache.hadoop.record.TestBuffer
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.055 sec
Running org.apache.hadoop.record.TestRecordIO
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.178 sec
Running org.apache.hadoop.security.TestGroupFallback
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.434 sec
Running org.apache.hadoop.security.TestGroupsCaching
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.283 sec
Running org.apache.hadoop.security.TestProxyUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.359 sec
Running org.apache.hadoop.security.TestUserGroupInformation
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.679 sec
Running org.apache.hadoop.security.TestJNIGroupsMapping
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.14 sec
Running o