[jira] [Created] (HADOOP-12583) Sundry symlink problems on Solaris

2015-11-18 Thread Alan Burlison (JIRA)
Alan Burlison created HADOOP-12583:
--

 Summary: Sundry symlink problems on Solaris
 Key: HADOOP-12583
 URL: https://issues.apache.org/jira/browse/HADOOP-12583
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 2.7.1
 Environment: Solaris
Reporter: Alan Burlison
Priority: Minor


There are various filesystem test failures on Solaris:

{code}
  TestSymlinkLocalFSFileContext>TestSymlinkLocalFS.testDanglingLink:156 
expected:<[alanbur]> but was:<[]>
  
TestSymlinkLocalFSFileContext>TestSymlinkLocalFS.testSetTimesSymlinkToDir:233->SymlinkBaseTest.testSetTimesSymlinkToDir:1391
 expected:<1447788288000> but was:<3000>
  
TestSymlinkLocalFSFileContext>TestSymlinkLocalFS.testSetTimesSymlinkToFile:227->SymlinkBaseTest.testSetTimesSymlinkToFile:1376
 expected:<144778829> but was:<3000>
  TestSymlinkLocalFSFileSystem>TestSymlinkLocalFS.testDanglingLink:156 
expected:<[alanbur]> but was:<[]>
  
TestSymlinkLocalFSFileSystem>TestSymlinkLocalFS.testSetTimesSymlinkToDir:233->SymlinkBaseTest.testSetTimesSymlinkToDir:1391
 expected:<1447788416000> but was:<3000>
  
TestSymlinkLocalFSFileSystem>TestSymlinkLocalFS.testSetTimesSymlinkToFile:227->SymlinkBaseTest.testSetTimesSymlinkToFile:1376
 expected:<1447788417000> but was:<3000>
{code}

I'm not sure what the root cause is; most likely it stems from Linux-specific 
assumptions about how symlinks behave. Further investigation is needed.
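The two setTimes failures (a real epoch millisecond value expected, 3000 returned) revolve around setting times on a symlink rather than on its target. As a hedged illustration only, and not Hadoop's actual code path, Java NIO makes the follow/no-follow distinction explicit, and the no-follow case is exactly where OS-specific behavior tends to diverge:

```java
import java.nio.file.*;
import java.nio.file.attribute.*;

public class SymlinkTimesSketch {
    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("symtest");
        Path target = Files.createFile(dir.resolve("target"));
        Path link = Files.createSymbolicLink(dir.resolve("link"), target);

        FileTime t = FileTime.fromMillis(1447788288000L);

        // Default behavior follows the link: this sets the *target's* mtime.
        Files.setLastModifiedTime(link, t);

        // NOFOLLOW_LINKS operates on the link itself; whether this is
        // permitted, and at what timestamp granularity, is OS- and
        // filesystem-dependent -- one way Linux-only assumptions can
        // creep into tests.
        BasicFileAttributeView view = Files.getFileAttributeView(
                link, BasicFileAttributeView.class, LinkOption.NOFOLLOW_LINKS);
        view.setTimes(t, null, null);

        System.out.println(Files.getLastModifiedTime(target).toMillis());
    }
}
```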




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: continuing releases on Apache Hadoop 2.6.x

2015-11-18 Thread Chris Trezzo
Thanks Junping for the clarification! It was not my intention to violate
the rules. I would be happy to work with you and help you manage the
release in whatever way is most effective.

Chris

On Wednesday, November 18, 2015, Junping Du  wrote:

> Thanks Chris Trezzo for volunteering to help with the 2.6.3 release. I
> think Sangjin was asking for a committer to serve as release manager for
> 2.6.3 according to Apache rules:
> http://www.apache.org/dev/release-publishing.html.
> I would like to serve in that role and work closely with you and Sangjin
> on the 2.6.3 release if there are no objections from others.
>
> Thanks,
>
> Junping
> 
> From: Chris Trezzo >
> Sent: Wednesday, November 18, 2015 1:13 AM
> To: yarn-...@hadoop.apache.org 
> Cc: common-dev@hadoop.apache.org ;
> hdfs-...@hadoop.apache.org ; mapreduce-...@hadoop.apache.org
> 
> Subject: Re: continuing releases on Apache Hadoop 2.6.x
>
> Hi Sangjin,
>
> I would be happy to volunteer to work with you as a release manager for
> 2.6.3. Shooting for a time in early December seems reasonable to me. I also
> agree that if we miss that window, January would be the next best option.
>
> Thanks,
> Chris
>
> On Tue, Nov 17, 2015 at 5:10 PM, Sangjin Lee  > wrote:
>
> > I'd like to pick up this email discussion again. It is time that we
> started
> > thinking about the next release in the 2.6.x line. IMO we want to walk
> the
> > balance between maintaining a reasonable release cadence and getting a
> good
> > amount of high-quality fixes. The timeframe is a little tricky as the
> > holidays are approaching. If we have enough fixes accumulated in
> > branch-2.6, some time early December might be a good target for cutting
> the
> > first release candidate. Once we miss that window, I think we are looking
> > at next January. I'd like to hear your thoughts on this.
> >
> > It'd be good if someone can volunteer for the release manager for 2.6.3.
> > I'd be happy to help out in any way I can. Thanks!
> >
> > Regards,
> > Sangjin
> >
> > On Mon, Nov 2, 2015 at 11:45 AM, Vinod Vavilapalli <
> > vino...@hortonworks.com >
> > wrote:
> >
> > > Just to stress on the following, it is very important that any critical
> > > bug-fixes that we push into 2.8.0 or even trunk, we should consider
> them
> > > for 2.6.3 and 2.7.3 if it makes sense. This is the only way we can
> avoid
> > > extremely long release cycles like that of 2.6.1.
> > >
> > > Also, to clarify a little, use Target-version if you want a discussion
> of
> > > the backport, but if you do end up backporting patches after that, you
> > > should set the fix-version to be 2.6.1.
> > >
> > > Thanks
> > > +Vinod
> > >
> > >
> > > > On Nov 2, 2015, at 11:29 AM, Sangjin Lee  > wrote:
> > > >
> > > > As you may have seen, 2.6.2 is out
> > > > . I have also
> retargeted
> > > all
> > > > open issues that were targeted for 2.6.2 to 2.6.3.
> > > >
> > > > Continuing the discussion in the email thread here
> > > > , I'd like us to
> maintain
> > > the
> > > > cadence of monthly point releases in the 2.6.x line. It would be
> great
> > if
> > > > we can have 2.6.3 released before the year-end holidays.
> > > >
> > > > If you have any bugfixes and improvements that are targeted for 2.7.x
> > (or
> > > > 2.8) that you think are applicable to 2.6.x, please *set the target
> > > version
> > > > to 2.6.3* and merge them to branch-2.6. Please use your judgment in
> > terms
> > > > of the applicability and quality of the changes so that we can ensure
> > > each
> > > > point release is consistently better quality than the previous one.
> > > Thanks
> > > > everyone!
> > > >
> > > > Regards,
> > > > Sangjin
> > >
> > >
> >
>


Re: continuing releases on Apache Hadoop 2.6.x

2015-11-18 Thread Junping Du
Thanks Chris Trezzo for volunteering to help with the 2.6.3 release. I think 
Sangjin was asking for a committer to serve as release manager for 2.6.3 
according to Apache rules: http://www.apache.org/dev/release-publishing.html. 
I would like to serve in that role and work closely with you and Sangjin on 
the 2.6.3 release if there are no objections from others.

Thanks,

Junping

From: Chris Trezzo 
Sent: Wednesday, November 18, 2015 1:13 AM
To: yarn-...@hadoop.apache.org
Cc: common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
mapreduce-...@hadoop.apache.org
Subject: Re: continuing releases on Apache Hadoop 2.6.x

Hi Sangjin,

I would be happy to volunteer to work with you as a release manager for
2.6.3. Shooting for a time in early December seems reasonable to me. I also
agree that if we miss that window, January would be the next best option.

Thanks,
Chris

On Tue, Nov 17, 2015 at 5:10 PM, Sangjin Lee  wrote:

> I'd like to pick up this email discussion again. It is time that we started
> thinking about the next release in the 2.6.x line. IMO we want to walk the
> balance between maintaining a reasonable release cadence and getting a good
> amount of high-quality fixes. The timeframe is a little tricky as the
> holidays are approaching. If we have enough fixes accumulated in
> branch-2.6, some time early December might be a good target for cutting the
> first release candidate. Once we miss that window, I think we are looking
> at next January. I'd like to hear your thoughts on this.
>
> It'd be good if someone can volunteer for the release manager for 2.6.3.
> I'd be happy to help out in any way I can. Thanks!
>
> Regards,
> Sangjin
>
> On Mon, Nov 2, 2015 at 11:45 AM, Vinod Vavilapalli <
> vino...@hortonworks.com>
> wrote:
>
> > Just to stress on the following, it is very important that any critical
> > bug-fixes that we push into 2.8.0 or even trunk, we should consider them
> > for 2.6.3 and 2.7.3 if it makes sense. This is the only way we can avoid
> > extremely long release cycles like that of 2.6.1.
> >
> > Also, to clarify a little, use Target-version if you want a discussion of
> > the backport, but if you do end up backporting patches after that, you
> > should set the fix-version to be 2.6.1.
> >
> > Thanks
> > +Vinod
> >
> >
> > > On Nov 2, 2015, at 11:29 AM, Sangjin Lee  wrote:
> > >
> > > As you may have seen, 2.6.2 is out
> > > . I have also retargeted
> > all
> > > open issues that were targeted for 2.6.2 to 2.6.3.
> > >
> > > Continuing the discussion in the email thread here
> > > , I'd like us to maintain
> > the
> > > cadence of monthly point releases in the 2.6.x line. It would be great
> if
> > > we can have 2.6.3 released before the year-end holidays.
> > >
> > > If you have any bugfixes and improvements that are targeted for 2.7.x
> (or
> > > 2.8) that you think are applicable to 2.6.x, please *set the target
> > version
> > > to 2.6.3* and merge them to branch-2.6. Please use your judgment in
> terms
> > > of the applicability and quality of the changes so that we can ensure
> > each
> > > point release is consistently better quality than the previous one.
> > Thanks
> > > everyone!
> > >
> > > Regards,
> > > Sangjin
> >
> >
>


[jira] [Created] (HADOOP-12582) Using BytesWritable's getLength() and getBytes() instead of get() and getSize()

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created HADOOP-12582:
---

 Summary: Using BytesWritable's getLength() and getBytes() instead 
of get() and getSize()
 Key: HADOOP-12582
 URL: https://issues.apache.org/jira/browse/HADOOP-12582
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Tsuyoshi Ozawa


BytesWritable's deprecated methods, get() and getSize(), are still used in 
some tests: TestTFileSeek, TestTFileSeqFileComparison, TestSequenceFile, and so 
on. We could also remove the deprecated methods themselves if this is targeted 
at 3.0.0.

https://builds.apache.org/job/PreCommit-HADOOP-Build/8084/artifact/patchprocess/diff-compile-javac-root-jdk1.7.0_85.txt
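For reference, the replacement is mechanical, with one caveat worth keeping in mind: getBytes() returns the backing array, which may be longer than the valid data, so reads must be bounded by getLength(). The class below is a stand-in sketch of that contract (the real class is org.apache.hadoop.io.BytesWritable; the growth factor here is illustrative, not the actual implementation):

```java
import java.util.Arrays;

public class BytesWritableSketch {
    private byte[] bytes = new byte[0];
    private int length;

    void set(byte[] src) {
        if (bytes.length < src.length) {
            // Grow like the real class: capacity may exceed the valid length.
            bytes = Arrays.copyOf(src, Math.max(src.length * 2, 16));
        } else {
            System.arraycopy(src, 0, bytes, 0, src.length);
        }
        length = src.length;
    }

    byte[] getBytes() { return bytes; }   // replaces deprecated get()
    int getLength()   { return length; }  // replaces deprecated getSize()

    public static void main(String[] args) {
        BytesWritableSketch w = new BytesWritableSketch();
        w.set("abc".getBytes());
        // getBytes() hands back the whole backing array, so always slice
        // with getLength() before interpreting the contents.
        byte[] valid = Arrays.copyOf(w.getBytes(), w.getLength());
        System.out.println(new String(valid)); // prints "abc"
    }
}
```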





Re: continuing releases on Apache Hadoop 2.6.x

2015-11-18 Thread Haohui Mai
Hi,

I can help on releasing 2.6.3.

~Haohui


On Wed, Nov 18, 2015 at 8:20 AM, Chris Trezzo  wrote:
> Thanks Junping for the clarification! It was not my intention to violate
> the rules. I would be happy to work with you and help you manage the
> release in whatever way is most effective.
>
> Chris
>
> On Wednesday, November 18, 2015, Junping Du  wrote:
>
>> Thanks Chris Trezzo for volunteering to help with the 2.6.3 release. I
>> think Sangjin was asking for a committer to serve as release manager for
>> 2.6.3 according to Apache rules:
>> http://www.apache.org/dev/release-publishing.html.
>> I would like to serve in that role and work closely with you and Sangjin
>> on the 2.6.3 release if there are no objections from others.
>>
>> Thanks,
>>
>> Junping
>> 
>> From: Chris Trezzo >
>> Sent: Wednesday, November 18, 2015 1:13 AM
>> To: yarn-...@hadoop.apache.org 
>> Cc: common-dev@hadoop.apache.org ;
>> hdfs-...@hadoop.apache.org ; mapreduce-...@hadoop.apache.org
>> 
>> Subject: Re: continuing releases on Apache Hadoop 2.6.x
>>
>> Hi Sangjin,
>>
>> I would be happy to volunteer to work with you as a release manager for
>> 2.6.3. Shooting for a time in early December seems reasonable to me. I also
>> agree that if we miss that window, January would be the next best option.
>>
>> Thanks,
>> Chris
>>
>> On Tue, Nov 17, 2015 at 5:10 PM, Sangjin Lee > > wrote:
>>
>> > I'd like to pick up this email discussion again. It is time that we
>> started
>> > thinking about the next release in the 2.6.x line. IMO we want to walk
>> the
>> > balance between maintaining a reasonable release cadence and getting a
>> good
>> > amount of high-quality fixes. The timeframe is a little tricky as the
>> > holidays are approaching. If we have enough fixes accumulated in
>> > branch-2.6, some time early December might be a good target for cutting
>> the
>> > first release candidate. Once we miss that window, I think we are looking
>> > at next January. I'd like to hear your thoughts on this.
>> >
>> > It'd be good if someone can volunteer for the release manager for 2.6.3.
>> > I'd be happy to help out in any way I can. Thanks!
>> >
>> > Regards,
>> > Sangjin
>> >
>> > On Mon, Nov 2, 2015 at 11:45 AM, Vinod Vavilapalli <
>> > vino...@hortonworks.com >
>> > wrote:
>> >
>> > > Just to stress on the following, it is very important that any critical
>> > > bug-fixes that we push into 2.8.0 or even trunk, we should consider
>> them
>> > > for 2.6.3 and 2.7.3 if it makes sense. This is the only way we can
>> avoid
>> > > extremely long release cycles like that of 2.6.1.
>> > >
>> > > Also, to clarify a little, use Target-version if you want a discussion
>> of
>> > > the backport, but if you do end up backporting patches after that, you
>> > > should set the fix-version to be 2.6.1.
>> > >
>> > > Thanks
>> > > +Vinod
>> > >
>> > >
>> > > > On Nov 2, 2015, at 11:29 AM, Sangjin Lee > > wrote:
>> > > >
>> > > > As you may have seen, 2.6.2 is out
>> > > > . I have also
>> retargeted
>> > > all
>> > > > open issues that were targeted for 2.6.2 to 2.6.3.
>> > > >
>> > > > Continuing the discussion in the email thread here
>> > > > , I'd like us to
>> maintain
>> > > the
>> > > > cadence of monthly point releases in the 2.6.x line. It would be
>> great
>> > if
>> > > > we can have 2.6.3 released before the year-end holidays.
>> > > >
>> > > > If you have any bugfixes and improvements that are targeted for 2.7.x
>> > (or
>> > > > 2.8) that you think are applicable to 2.6.x, please *set the target
>> > > version
>> > > > to 2.6.3* and merge them to branch-2.6. Please use your judgment in
>> > terms
>> > > > of the applicability and quality of the changes so that we can ensure
>> > > each
>> > > > point release is consistently better quality than the previous one.
>> > > Thanks
>> > > > everyone!
>> > > >
>> > > > Regards,
>> > > > Sangjin
>> > >
>> > >
>> >
>>


Build failed in Jenkins: Hadoop-Common-trunk #2002

2015-11-18 Thread Apache Jenkins Server
See 

Changes:

[ozawa] Moved HADOOP-8419 in CHANGES.txt from 3.0.0 to 2.8.0.

[ozawa] HADOOP-12564. Upgrade JUnit3 TestCase to JUnit 4 in org.apache.hadoop.io

--
[...truncated 5396 lines...]
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.crypto.key.TestValueQueue.testgetAtMostPolicyALL(TestValueQueue.java:149)

Running org.apache.hadoop.crypto.key.TestKeyProviderFactory
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.237 sec - in 
org.apache.hadoop.crypto.key.TestKeyProviderFactory
Running org.apache.hadoop.crypto.key.TestCachingKeyProvider
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.306 sec - in 
org.apache.hadoop.crypto.key.TestCachingKeyProvider
Running org.apache.hadoop.crypto.TestCryptoStreamsNormal
Tests run: 14, Failures: 0, Errors: 0, Skipped: 8, Time elapsed: 9.716 sec - in 
org.apache.hadoop.crypto.TestCryptoStreamsNormal
Running org.apache.hadoop.crypto.random.TestOsSecureRandom
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.821 sec - in 
org.apache.hadoop.crypto.random.TestOsSecureRandom
Running org.apache.hadoop.crypto.random.TestOpensslSecureRandom
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.236 sec - in 
org.apache.hadoop.crypto.random.TestOpensslSecureRandom
Running org.apache.hadoop.crypto.TestCryptoStreamsWithJceAesCtrCryptoCodec
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.177 sec - 
in org.apache.hadoop.crypto.TestCryptoStreamsWithJceAesCtrCryptoCodec
Running org.apache.hadoop.crypto.TestOpensslCipher
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.263 sec - in 
org.apache.hadoop.crypto.TestOpensslCipher
Running org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.323 sec - 
in org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec
Running org.apache.hadoop.crypto.TestCryptoStreams
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.312 sec - 
in org.apache.hadoop.crypto.TestCryptoStreams
Running org.apache.hadoop.service.TestServiceLifecycle
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.588 sec - in 
org.apache.hadoop.service.TestServiceLifecycle
Running org.apache.hadoop.service.TestCompositeService
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.352 sec - in 
org.apache.hadoop.service.TestCompositeService
Running org.apache.hadoop.service.TestGlobalStateChangeListener
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.354 sec - in 
org.apache.hadoop.service.TestGlobalStateChangeListener
Running org.apache.hadoop.ha.TestActiveStandbyElectorRealZK
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.052 sec - in 
org.apache.hadoop.ha.TestActiveStandbyElectorRealZK
Running org.apache.hadoop.ha.TestHealthMonitor
Exception: java.lang.RuntimeException thrown from the UncaughtExceptionHandler 
in thread "Health Monitor for DummyHAService #3"
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.788 sec - in 
org.apache.hadoop.ha.TestHealthMonitor
Running org.apache.hadoop.ha.TestHealthMonitorWithDedicatedHealthAddress
Exception: java.lang.RuntimeException thrown from the UncaughtExceptionHandler 
in thread "Health Monitor for DummyHAService #3"
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.887 sec - in 
org.apache.hadoop.ha.TestHealthMonitorWithDedicatedHealthAddress
Running org.apache.hadoop.ha.TestZKFailoverController
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.072 sec - 
in org.apache.hadoop.ha.TestZKFailoverController
Running org.apache.hadoop.ha.TestZKFailoverControllerStress
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 95.624 sec - in 
org.apache.hadoop.ha.TestZKFailoverControllerStress
Running org.apache.hadoop.ha.TestActiveStandbyElector
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.233 sec - in 
org.apache.hadoop.ha.TestActiveStandbyElector
Running org.apache.hadoop.ha.TestSshFenceByTcpPort
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 4.099 sec - in 
org.apache.hadoop.ha.TestSshFenceByTcpPort
Running org.apache.hadoop.ha.TestHAAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.737 sec - in 
org.apache.hadoop.ha.TestHAAdmin
Running org.apache.hadoop.ha.TestFailoverController
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.338 sec - in 
org.apache.hadoop.ha.TestFailoverController
Running org.apache.hadoop.ha.TestShellCommandFencer
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.577 sec - in 
org.apache.hadoop.ha.TestShellCommandFencer
Running org.apache.hadoop.ha.TestNodeFencer
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.369 sec - in 

Build failed in Jenkins: Hadoop-common-trunk-Java8 #707

2015-11-18 Thread Apache Jenkins Server
See 

Changes:

[jlowe] Update CHANGES.txt to reflect commit of MR-6377 to branch-2.7 and

--
[...truncated 5876 lines...]
Running org.apache.hadoop.util.TestProtoUtil
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.321 sec - in 
org.apache.hadoop.util.TestProtoUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestLightWeightGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.199 sec - in 
org.apache.hadoop.util.TestLightWeightGSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestGSet
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.133 sec - in 
org.apache.hadoop.util.TestGSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestStringInterner
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.123 sec - in 
org.apache.hadoop.util.TestStringInterner
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestZKUtil
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.152 sec - in 
org.apache.hadoop.util.TestZKUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestStringUtils
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.327 sec - in 
org.apache.hadoop.util.TestStringUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestFindClass
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.797 sec - in 
org.apache.hadoop.util.TestFindClass
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestGenericOptionsParser
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.175 sec - in 
org.apache.hadoop.util.TestGenericOptionsParser
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestRunJar
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.467 sec - in 
org.apache.hadoop.util.TestRunJar
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestSysInfoLinux
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.267 sec - in 
org.apache.hadoop.util.TestSysInfoLinux
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestDirectBufferPool
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec - in 
org.apache.hadoop.util.TestDirectBufferPool
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestFileBasedIPList
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.231 sec - in 
org.apache.hadoop.util.TestFileBasedIPList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestIndexedSort
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.793 sec - in 
org.apache.hadoop.util.TestIndexedSort
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestIdentityHashStore
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.238 sec - in 
org.apache.hadoop.util.TestIdentityHashStore
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestMachineList
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.618 sec - in 
org.apache.hadoop.util.TestMachineList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestWinUtils
Tests run: 11, Failures: 0, Errors: 0, Skipped: 11, Time elapsed: 0.219 sec - 
in org.apache.hadoop.util.TestWinUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.hash.TestHash
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.352 sec - in 
org.apache.hadoop.util.hash.TestHash
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestSignalLogger
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.218 sec - in 

Jenkins build is back to normal : Hadoop-Common-trunk #2003

2015-11-18 Thread Apache Jenkins Server
See 



Jenkins build is back to normal : Hadoop-common-trunk-Java8 #708

2015-11-18 Thread Apache Jenkins Server
See 



[jira] [Created] (HADOOP-12581) ShellBasedIdMapping needs support for Solaris

2015-11-18 Thread Alan Burlison (JIRA)
Alan Burlison created HADOOP-12581:
--

 Summary: ShellBasedIdMapping needs support for Solaris
 Key: HADOOP-12581
 URL: https://issues.apache.org/jira/browse/HADOOP-12581
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.7.1
 Environment: Solaris
Reporter: Alan Burlison


ShellBasedIdMapping only supports Linux and OS X; support for Solaris needs 
to be added.

From looking at the Linux support in ShellBasedIdMapping, the same sequences 
of shell commands should work for Solaris as well, so all that's probably 
needed is to change the implementation of checkSupportedPlatform() to treat 
Linux and Solaris the same way, plus possibly some renaming of other methods 
to make it more obvious they are not Linux-only.
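A minimal sketch of the proposed checkSupportedPlatform() change (the method name comes from the report; the stand-in signature and OS grouping below are assumptions, not the actual Hadoop code):

```java
public class IdMappingPlatformSketch {
    // Solaris reports os.name as "SunOS"; it can be grouped with Linux
    // because both provide the same getent-style shell commands that
    // ShellBasedIdMapping already issues on Linux.
    static boolean isSupportedPlatform(String osName) {
        return osName.startsWith("Linux")
                || osName.startsWith("SunOS")
                || osName.startsWith("Mac");
    }

    public static void main(String[] args) {
        System.out.println(isSupportedPlatform("SunOS")); // prints "true"
    }
}
```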





[jira] [Created] (HADOOP-12580) Hadoop needs a SysInfo class for Solaris

2015-11-18 Thread Alan Burlison (JIRA)
Alan Burlison created HADOOP-12580:
--

 Summary: Hadoop needs a SysInfo class for Solaris
 Key: HADOOP-12580
 URL: https://issues.apache.org/jira/browse/HADOOP-12580
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: util
Affects Versions: 2.7.1
 Environment: Solaris
Reporter: Alan Burlison
Assignee: Alan Burlison


During testing multiple failures of the following sort are reported:

{code}
java.lang.UnsupportedOperationException: Could not determine OS
at org.apache.hadoop.util.SysInfo.newInstance(SysInfo.java:43)
at 
org.apache.hadoop.yarn.util.ResourceCalculatorPlugin.<init>(ResourceCalculatorPlugin.java:41)
at 
org.apache.hadoop.mapred.gridmix.DummyResourceCalculatorPlugin.<init>(DummyResourceCalculatorPlugin.java:32)
at 
org.apache.hadoop.mapred.gridmix.TestGridmixMemoryEmulation.testTotalHeapUsageEmulatorPlugin(TestGridmixMemoryEmulation.java:131)
{code}

This is because there is no SysInfo subclass for Solaris, from SysInfo.java

{code}
  public static SysInfo newInstance() {
if (Shell.LINUX) {
  return new SysInfoLinux();
}
if (Shell.WINDOWS) {
  return new SysInfoWindows();
}
throw new UnsupportedOperationException("Could not determine OS");
  }
{code}

An implementation of SysInfoSolaris needs to be written and plumbed in to 
SysInfo.
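A stand-in sketch of how the factory could gain a Solaris branch. SysInfoSolaris does not exist yet, and keying on os.name here is an assumption for the sketch; the real newInstance() dispatches on Shell's OS flags as shown above:

```java
public class SysInfoFactorySketch {
    interface SysInfo {}
    static class SysInfoLinux implements SysInfo {}
    static class SysInfoWindows implements SysInfo {}
    static class SysInfoSolaris implements SysInfo {}  // the missing piece

    // Mirrors SysInfo.newInstance(); Solaris identifies itself to the JVM
    // as "SunOS", so a third branch can dispatch to the new class.
    static SysInfo newInstance(String osName) {
        if (osName.startsWith("Linux"))   return new SysInfoLinux();
        if (osName.startsWith("Windows")) return new SysInfoWindows();
        if (osName.startsWith("SunOS"))   return new SysInfoSolaris();
        throw new UnsupportedOperationException("Could not determine OS");
    }

    public static void main(String[] args) {
        System.out.println(newInstance("SunOS").getClass().getSimpleName());
    }
}
```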





[GitHub] hadoop pull request: YARN-3477 timeline diagnostics

2015-11-18 Thread steveloughran
GitHub user steveloughran opened a pull request:

https://github.com/apache/hadoop/pull/47

YARN-3477 timeline diagnostics

YARN-3477 timeline diagnostics: add more details on why things are failing, 
including stack traces (at debug level sometimes)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/steveloughran/hadoop 
stevel/YARN-3477-timeline-diagnostics

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/47.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #47


commit 5278ac3de77e866e6528b5b6fb6f8d294c541a5f
Author: Steve Loughran 
Date:   2015-04-23T14:18:26Z

YARN-3477 TimelineClientImpl swallows exceptions

commit 7a3701b66ef415ff8c5f9fdeec4ebe292d0eab75
Author: Steve Loughran 
Date:   2015-04-24T12:03:09Z

YARN-3477 patch 002
# rethrowing runtime exception on timeout, but including the IOE as an 
inner exception
# using constant strings in the error messages
# clean up tests to (a) use those constant strings in tests, (b) throw the 
original exception on any mismatch, plus other improvements

commit 43b6b1fc126bff5b4be95bdd2fbab3bf686edde5
Author: Steve Loughran 
Date:   2015-11-18T22:25:18Z

YARN-3277 make sure there's spaces; chop line > 80 chars wide




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (HADOOP-12584) Disable directory browsing in HttpServer2

2015-11-18 Thread Robert Kanter (JIRA)
Robert Kanter created HADOOP-12584:
--

 Summary: Disable directory browsing in HttpServer2
 Key: HADOOP-12584
 URL: https://issues.apache.org/jira/browse/HADOOP-12584
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.8.0
Reporter: Robert Kanter
Assignee: Robert Kanter


We found a minor security issue with the YARN web UIs (or anything using 
{{HttpServer2}}). Currently, you can list the contents of the {{/static}} 
directory for the RM, NM, and JHS. This isn't a huge deal, but there are some 
ways to abuse it to get access to files on the host, though it would be 
pretty difficult. It's also good practice to disable directory listing on web 
apps.

Here are the URLs:
- http://HOST:8088/static/
- http://HOST:19888/static/
- http://HOST:8042/static/
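One way to disable the listings, sketched as an illustrative web.xml fragment: set the DefaultServlet's {{dirAllowed}} init parameter to false. {{dirAllowed}} is Jetty's standard listing switch, but the exact servlet class and how HttpServer2 wires it up depend on the bundled Jetty version, so treat this as an assumption about the shape of the fix rather than the committed patch:

```xml
<!-- Illustrative only: disable directory listings for static content. -->
<servlet>
  <servlet-name>static</servlet-name>
  <servlet-class>org.mortbay.jetty.servlet.DefaultServlet</servlet-class>
  <init-param>
    <param-name>dirAllowed</param-name>
    <param-value>false</param-value>
  </init-param>
</servlet>
```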





[jira] [Created] (HADOOP-12585) Removing deprecated methods in 3.0.0 release

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created HADOOP-12585:
---

 Summary: Removing deprecated methods in 3.0.0 release
 Key: HADOOP-12585
 URL: https://issues.apache.org/jira/browse/HADOOP-12585
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Tsuyoshi Ozawa


There are lots of deprecated methods in Hadoop, and the 3.0.0 release is a 
good time to remove them.





[jira] [Created] (HADOOP-12586) Dockerfile cannot work correctly behind a proxy

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created HADOOP-12586:
---

 Summary: Dockerfile cannot work correctly behind a proxy
 Key: HADOOP-12586
 URL: https://issues.apache.org/jira/browse/HADOOP-12586
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Tsuyoshi Ozawa


The {{apt-get}} command fails because there is no way to set a proxy.

{quote}
Step 7 : RUN apt-get update && apt-get install --no-install-recommends -y 
git curl ant make maven cmake gcc g++ protobuf-compiler libprotoc-dev   
  protobuf-c-compiler libprotobuf-dev build-essential libtool 
zlib1g-dev pkg-config libssl-dev snappy libsnappy-dev bzip2 libbz2-dev  
   libjansson-dev fuse libfuse-dev libcurl4-openssl-dev python 
python2.7 pylint openjdk-7-jdk doxygen
 ---> Running in 072a97b7fa45
Err http://archive.ubuntu.com trusty InRelease
  
Err http://archive.ubuntu.com trusty-updates InRelease
  
Err http://archive.ubuntu.com trusty-security InRelease
  
Err http://archive.ubuntu.com trusty Release.gpg
  Cannot initiate the connection to archive.ubuntu.com:80 
(2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 
2001:67c:1360:8c01::19 80]
Err http://archive.ubuntu.com trusty-updates Release.gpg
  Cannot initiate the connection to archive.ubuntu.com:80 
(2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 
2001:67c:1360:8c01::19 80]
Err http://archive.ubuntu.com trusty-security Release.gpg
  Cannot initiate the connection to archive.ubuntu.com:80 
(2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 
2001:67c:1360:8c01::19 80]
Reading package lists...
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty/InRelease  

W: Failed to fetch 
http://archive.ubuntu.com/ubuntu/dists/trusty-updates/InRelease  

W: Failed to fetch 
http://archive.ubuntu.com/ubuntu/dists/trusty-security/InRelease  

W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty/Release.gpg  
Cannot initiate the connection to archive.ubuntu.com:80 
(2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 
2001:67c:1360:8c01::19 80]

W: Failed to fetch 
http://archive.ubuntu.com/ubuntu/dists/trusty-updates/Release.gpg  Cannot 
initiate the connection to archive.ubuntu.com:80 (2001:67c:1360:8c01::19). - 
connect (101: Network is unreachable) [IP: 2001:67c:1360:8c01::19 80]

W: Failed to fetch 
http://archive.ubuntu.com/ubuntu/dists/trusty-security/Release.gpg  Cannot 
initiate the connection to archive.ubuntu.com:80 (2001:67c:1360:8c01::19). - 
connect (101: Network is unreachable) [IP: 2001:67c:1360:8c01::19 80]

W: Some index files failed to download. They have been ignored, or old ones 
used instead.
{quote}
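A common workaround, offered here as an assumption about the shape of the fix rather than the committed patch, is to accept proxy settings as build arguments so {{apt-get}} can reach the archives through a proxy. {{ARG}} is available in Docker 1.9 and later, and the proxy host below is a placeholder:

```dockerfile
# Accept proxy settings at build time; pass them with, e.g.:
#   docker build --build-arg http_proxy=http://proxy.example.com:3128 .
ARG http_proxy
ARG https_proxy
RUN apt-get update && \
    apt-get install --no-install-recommends -y git curl ant make maven
```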


