[jira] [Created] (HADOOP-12113) update test-patch branch to latest code

2015-06-23 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12113:
-

 Summary: update test-patch branch to latest code
 Key: HADOOP-12113
 URL: https://issues.apache.org/jira/browse/HADOOP-12113
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Allen Wittenauer


[~sekikn] and I have been working on GitHub.  We should update the codebase to 
reflect all of those changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12111) [Umbrella] Split test-patch off into its own TLP

2015-06-23 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597758#comment-14597758
 ] 

Vinayakumar B commented on HADOOP-12111:


This sounds really awesome. 
I would also love to get involved in this work.

-Thanks

 [Umbrella] Split test-patch off into its own TLP
 

 Key: HADOOP-12111
 URL: https://issues.apache.org/jira/browse/HADOOP-12111
 Project: Hadoop Common
  Issue Type: New Feature
  Components: yetus
Reporter: Allen Wittenauer

 Given test-patch's tendency to get forked into a variety of different 
 projects, it makes a lot of sense to make an Apache TLP so that everyone can 
 benefit from a common code base.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2015-06-23 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: (was: mergedpatch)

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2015-06-23 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: HADOOP-12113-HADOOP-12111.patch

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12113-HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2015-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597850#comment-14597850
 ] 

Hadoop QA commented on HADOOP-11820:


(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7022/console in case of 
problems.

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12113-HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12090) minikdc-related unit tests fail consistently on some platforms

2015-06-23 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597837#comment-14597837
 ] 

Sangjin Lee commented on HADOOP-12090:
--

Ping?

 minikdc-related unit tests fail consistently on some platforms
 --

 Key: HADOOP-12090
 URL: https://issues.apache.org/jira/browse/HADOOP-12090
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms, test
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
 Attachments: HADOOP-12090.001.patch, HADOOP-12090.002.patch


 On some platforms all unit tests that use minikdc fail consistently. Those 
 tests include TestKMS, TestSaslDataTransfer, 
 TestTimelineAuthenticationFilter, etc.
 Typical failures on the unit tests:
 {noformat}
 java.lang.AssertionError: 
 org.apache.hadoop.security.authentication.client.AuthenticationException: 
 GSSException: No valid credentials provided (Mechanism level: Cannot get a 
 KDC reply)
   at org.junit.Assert.fail(Assert.java:88)
   at 
 org.apache.hadoop.crypto.key.kms.server.TestKMS$8$4.run(TestKMS.java:1154)
   at 
 org.apache.hadoop.crypto.key.kms.server.TestKMS$8$4.run(TestKMS.java:1145)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1645)
   at 
 org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:261)
   at 
 org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:76)
 {noformat}
 The errors that cause this failure on the KDC server on the minikdc are a 
 NullPointerException:
 {noformat}
 org.apache.mina.filter.codec.ProtocolDecoderException: 
 java.lang.NullPointerException: message (Hexdump: ...)
   at 
 org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:234)
   at 
 org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
   at 
 org.apache.mina.core.filterchain.DefaultIoFilterChain.access$1200(DefaultIoFilterChain.java:48)
   at 
 org.apache.mina.core.filterchain.DefaultIoFilterChain$EntryImpl$1.messageReceived(DefaultIoFilterChain.java:802)
   at 
 org.apache.mina.core.filterchain.IoFilterAdapter.messageReceived(IoFilterAdapter.java:120)
   at 
 org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
   at 
 org.apache.mina.core.filterchain.DefaultIoFilterChain.fireMessageReceived(DefaultIoFilterChain.java:426)
   at 
 org.apache.mina.core.polling.AbstractPollingIoProcessor.read(AbstractPollingIoProcessor.java:604)
   at 
 org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:564)
   at 
 org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:553)
   at 
 org.apache.mina.core.polling.AbstractPollingIoProcessor.access$400(AbstractPollingIoProcessor.java:57)
   at 
 org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.run(AbstractPollingIoProcessor.java:892)
   at 
 org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:65)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: java.lang.NullPointerException: message
   at 
 org.apache.mina.filter.codec.AbstractProtocolDecoderOutput.write(AbstractProtocolDecoderOutput.java:44)
   at 
 org.apache.directory.server.kerberos.protocol.codec.MinaKerberosDecoder.decode(MinaKerberosDecoder.java:65)
   at 
 org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:224)
   ... 15 more
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2015-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597852#comment-14597852
 ] 

Hadoop QA commented on HADOOP-11820:


(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7023/console in case of 
problems.

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12113-HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12107) long running apps may have a huge number of StatisticsData instances under FileSystem

2015-06-23 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597851#comment-14597851
 ] 

Sangjin Lee commented on HADOOP-12107:
--

Thanks for pointing that out [~walter.k.su]. I think you're right that 
normally one {{Statistics}} instance is created per {{FileSystem}} class. That 
said, it is possible to create {{Statistics}} instances in other ways, simply 
by invoking the public constructor.
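
For illustration, a minimal sketch of that second path (a hypothetical driver, 
not code from any patch; only {{FileSystem.Statistics}} and its public 
constructor are the existing Hadoop API):

{code}
import org.apache.hadoop.fs.FileSystem;

public class ExtraStatisticsExample {
  public static void main(String[] args) {
    // Nothing prevents user code from creating a Statistics instance
    // directly, bypassing the usual one-instance-per-FileSystem-class
    // bookkeeping done by FileSystem.getStatistics().
    FileSystem.Statistics stats = new FileSystem.Statistics("hdfs");
    stats.incrementBytesRead(1024);
    System.out.println(stats);
  }
}
{code}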

 long running apps may have a huge number of StatisticsData instances under 
 FileSystem
 -

 Key: HADOOP-12107
 URL: https://issues.apache.org/jira/browse/HADOOP-12107
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Minor
 Attachments: HADOOP-12107.001.patch, HADOOP-12107.002.patch, 
 HADOOP-12107.003.patch


 We observed with some of our apps (non-mapreduce apps that use filesystems) 
 that they end up accumulating a huge memory footprint coming from 
 {{FileSystem$Statistics$StatisticsData}} (in the {{allData}} list of 
 {{Statistics}}).
 Although the thread reference from {{StatisticsData}} is a weak reference, 
 and thus can get cleared once a thread goes away, the actual 
 {{StatisticsData}} instances in the list won't get cleared until any of these 
 following methods is called on {{Statistics}}:
 - {{getBytesRead()}}
 - {{getBytesWritten()}}
 - {{getReadOps()}}
 - {{getLargeReadOps()}}
 - {{getWriteOps()}}
 - {{toString()}}
 It is quite possible to have an application that interacts with a filesystem 
 but does not call any of these methods on the {{Statistics}}. If such an 
 application runs for a long time and has a large amount of thread churn, the 
 memory footprint will grow significantly.
 The current workaround is either to limit the thread churn or to invoke these 
 operations occasionally to pare down the memory. However, this is still a 
 deficiency with {{FileSystem$Statistics}} itself in that the memory is 
 controlled only as a side effect of those operations.
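
 As a concrete illustration of the pattern described above, a minimal sketch 
 (a hypothetical reproducer, not from the issue; the path and iteration count 
 are made up):
 {code}
 import java.net.URI;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;

 public class StatisticsChurnSketch {
   public static void main(String[] args) throws Exception {
     final FileSystem fs =
         FileSystem.get(URI.create("file:///"), new Configuration());
     for (int i = 0; i < 100000; i++) {
       Thread t = new Thread(new Runnable() {
         public void run() {
           try {
             // Each new thread allocates its own StatisticsData entry
             // in the allData list of the shared Statistics object.
             fs.exists(new Path("/tmp"));
           } catch (Exception e) {
             // ignored for the sake of the sketch
           }
         }
       });
       t.start();
       t.join();
       // The thread dies and its weak reference clears, but the
       // StatisticsData entry stays in allData because none of the
       // aggregate getters (getBytesRead() etc.) is ever called.
     }
   }
 }
 {code}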



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12114) Make hadoop-tools/hadoop-pipes Native code -Wall-clean

2015-06-23 Thread Alan Burlison (JIRA)
Alan Burlison created HADOOP-12114:
--

 Summary: Make hadoop-tools/hadoop-pipes Native code -Wall-clean
 Key: HADOOP-12114
 URL: https://issues.apache.org/jira/browse/HADOOP-12114
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 2.7.0
Reporter: Alan Burlison
Assignee: Alan Burlison


As we specify -Wall as a default compilation flag, it would be helpful if the 
native code were -Wall-clean.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2015-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597847#comment-14597847
 ] 

Hadoop QA commented on HADOOP-11820:


(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7021/console in case of 
problems.

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12113-HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12112) Make hadoop-common-project Native code -Wall-clean

2015-06-23 Thread Alan Burlison (JIRA)
Alan Burlison created HADOOP-12112:
--

 Summary: Make hadoop-common-project Native code -Wall-clean
 Key: HADOOP-12112
 URL: https://issues.apache.org/jira/browse/HADOOP-12112
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 2.7.0
Reporter: Alan Burlison
Assignee: Alan Burlison


As we specify -Wall as a default compilation flag, it would be helpful if the 
native code were -Wall-clean.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12106) org.apache.hadoop.crypto.TestCryptoStreamsForLocalFS fails with IBM JVM

2015-06-23 Thread Tony Reix (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597584#comment-14597584
 ] 

Tony Reix commented on HADOOP-12106:


Since the issue deals more with the IBM JVM than with AIX, I've changed the 
title of the defect.
It would be useful to test Hadoop with the IBM JVM too, at least for these 
crypto tests.

 org.apache.hadoop.crypto.TestCryptoStreamsForLocalFS fails with IBM JVM
 ---

 Key: HADOOP-12106
 URL: https://issues.apache.org/jira/browse/HADOOP-12106
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0, 2.7.0
 Environment: Hadoop 2.6.0 and 2.7+
  - AIX/PowerPC/IBMJVM
  - Ubuntu/i386/IBMJVM
Reporter: Tony Reix
 Attachments: mvn.Test.TestCryptoStreamsForLocalFS.res20.AIX.Errors, 
 mvn.Test.TestCryptoStreamsForLocalFS.res20.Ubuntu-i386.IBMJVM.Errors, 
 mvn.Test.TestCryptoStreamsForLocalFS.res22.OpenJDK.Errors


 On AIX (IBM JVM available only), many sub-tests of
org.apache.hadoop.crypto.TestCryptoStreamsForLocalFS
 fail:
  Tests run: 13, Failures: 5, Errors: 1, Skipped: 
   - testCryptoIV
   - testSeek
   - testSkip
   - testAvailable
   - testPositionedRead
 When testing the SAME exact code on Ubuntu/i386:
   - with OpenJDK, all tests are OK
   - with the IBM JVM, tests randomly fail.
 The issue may be in the IBM JVM, or in some Hadoop code that does not 
 perfectly handle differences specific to the IBM JVM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12111) [Umbrella] Split test-patch off into its own TLP

2015-06-23 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597702#comment-14597702
 ] 

Chris Nauroth commented on HADOOP-12111:


I have done the following:
* Create a new feature branch in git named HADOOP-12111.
* Add hadoop-common-project/hadoop-common/CHANGES-HADOOP-12111.txt for tracking 
commits of sub-tasks without running into merge conflicts on the main 
CHANGES.txt.
* Create a HADOOP-12111 version in jira, so all sub-tasks can be resolved with 
fix version set to HADOOP-12111.

cc [~busbey], [~aw], [~ndimiduk], [~apurtell], [~abayer]

 [Umbrella] Split test-patch off into its own TLP
 

 Key: HADOOP-12111
 URL: https://issues.apache.org/jira/browse/HADOOP-12111
 Project: Hadoop Common
  Issue Type: New Feature
  Components: yetus
Reporter: Allen Wittenauer

 Given test-patch's tendency to get forked into a variety of different 
 projects, it makes a lot of sense to make an Apache TLP so that everyone can 
 benefit from a common code base.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12106) org.apache.hadoop.crypto.TestCryptoStreamsForLocalFS fails with IBM JVM

2015-06-23 Thread Tony Reix (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tony Reix updated HADOOP-12106:
---
Summary: org.apache.hadoop.crypto.TestCryptoStreamsForLocalFS fails with 
IBM JVM  (was: org.apache.hadoop.crypto.TestCryptoStreamsForLocalFS fails on 
AIX)

 org.apache.hadoop.crypto.TestCryptoStreamsForLocalFS fails with IBM JVM
 ---

 Key: HADOOP-12106
 URL: https://issues.apache.org/jira/browse/HADOOP-12106
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0, 2.7.0
 Environment: Hadoop 2.6.0 and 2.7+
  - AIX/PowerPC/IBMJVM
  - Ubuntu/i386/IBMJVM
Reporter: Tony Reix
 Attachments: mvn.Test.TestCryptoStreamsForLocalFS.res20.AIX.Errors, 
 mvn.Test.TestCryptoStreamsForLocalFS.res20.Ubuntu-i386.IBMJVM.Errors, 
 mvn.Test.TestCryptoStreamsForLocalFS.res22.OpenJDK.Errors


 On AIX (IBM JVM available only), many sub-tests of
org.apache.hadoop.crypto.TestCryptoStreamsForLocalFS
 fail:
  Tests run: 13, Failures: 5, Errors: 1, Skipped: 
   - testCryptoIV
   - testSeek
   - testSkip
   - testAvailable
   - testPositionedRead
 When testing the SAME exact code on Ubuntu/i386:
   - with OpenJDK, all tests are OK
   - with the IBM JVM, tests randomly fail.
 The issue may be in the IBM JVM, or in some Hadoop code that does not 
 perfectly handle differences specific to the IBM JVM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12036) Consolidate all of the cmake extensions in one directory

2015-06-23 Thread Alan Burlison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Burlison updated HADOOP-12036:
---
Attachment: HADOOP-12036.002.patch

Updated patch, as per discussion

 Consolidate all of the cmake extensions in one directory
 

 Key: HADOOP-12036
 URL: https://issues.apache.org/jira/browse/HADOOP-12036
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Allen Wittenauer
Assignee: Alan Burlison
 Attachments: HADOOP-12036.001.patch, HADOOP-12036.002.patch


 Rather than have a half-dozen redefinitions, custom extensions, etc, we 
 should move them all to one location so that the cmake environment is 
 consistent between the various native components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2015-06-23 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: HADOOP-12113-HADOOP-12111.patch

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12113-HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HADOOP-11820) aw jira testing, ignore

2015-06-23 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Comment: was deleted

(was: (!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7022/console in case of 
problems.)

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12113-HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HADOOP-11820) aw jira testing, ignore

2015-06-23 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Comment: was deleted

(was: (!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7023/console in case of 
problems.)

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HADOOP-11820) aw jira testing, ignore

2015-06-23 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Comment: was deleted

(was: (!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7021/console in case of 
problems.)

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2015-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597978#comment-14597978
 ] 

Hadoop QA commented on HADOOP-11820:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} site {color} | {color:green} 0m 0s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} @author {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipping @author checks as test-patch.sh has been 
patched. {color} |
| {color:green}+1{color} | {color:green} site {color} | {color:green} 0m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 9s 
{color} | {color:red} The applied patch generated 1 new shellcheck (v0.3.3) 
issues (total was 59, now 49). {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 2m 
56s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 3m 12s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12741323/HADOOP-12113-HADOOP-12111.patch
 |
| git revision | trunk / 41ae776 |
| Optional Tests | asflicense shellcheck site |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider 
upgrading.) |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7024/artifact/patchprocess/diffpatchshellcheck.txt
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7024/console |


This message was automatically generated.

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12113-HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12111) [Umbrella] Split test-patch off into its own TLP

2015-06-23 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597950#comment-14597950
 ] 

Allen Wittenauer commented on HADOOP-12111:
---

I've added:

{code}
--jira-user=hadoopqa \
--project=hadoop \
--branch-default=trunk \
{code}

to the current Hadoop Common Jenkins job so that our builds know that they are 
running underneath Hadoop.

 [Umbrella] Split test-patch off into its own TLP
 

 Key: HADOOP-12111
 URL: https://issues.apache.org/jira/browse/HADOOP-12111
 Project: Hadoop Common
  Issue Type: New Feature
  Components: yetus
Reporter: Allen Wittenauer

 Given test-patch's tendency to get forked into a variety of different 
 projects, it makes a lot of sense to make an Apache TLP so that everyone can 
 benefit from a common code base.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12090) minikdc-related unit tests fail consistently on some platforms

2015-06-23 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598041#comment-14598041
 ] 

Haohui Mai commented on HADOOP-12090:
-

Thanks for the work. I'll take a look later today.

 minikdc-related unit tests fail consistently on some platforms
 --

 Key: HADOOP-12090
 URL: https://issues.apache.org/jira/browse/HADOOP-12090
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms, test
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
 Attachments: HADOOP-12090.001.patch, HADOOP-12090.002.patch


 On some platforms all unit tests that use minikdc fail consistently. Those 
 tests include TestKMS, TestSaslDataTransfer, 
 TestTimelineAuthenticationFilter, etc.
 Typical failures on the unit tests:
 {noformat}
 java.lang.AssertionError: 
 org.apache.hadoop.security.authentication.client.AuthenticationException: 
 GSSException: No valid credentials provided (Mechanism level: Cannot get a 
 KDC reply)
   at org.junit.Assert.fail(Assert.java:88)
   at 
 org.apache.hadoop.crypto.key.kms.server.TestKMS$8$4.run(TestKMS.java:1154)
   at 
 org.apache.hadoop.crypto.key.kms.server.TestKMS$8$4.run(TestKMS.java:1145)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1645)
   at 
 org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:261)
   at 
 org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:76)
 {noformat}
 The errors that cause this failure on the KDC server on the minikdc are a 
 NullPointerException:
 {noformat}
 org.apache.mina.filter.codec.ProtocolDecoderException: 
 java.lang.NullPointerException: message (Hexdump: ...)
   at 
 org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:234)
   at 
 org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
   at 
 org.apache.mina.core.filterchain.DefaultIoFilterChain.access$1200(DefaultIoFilterChain.java:48)
   at 
 org.apache.mina.core.filterchain.DefaultIoFilterChain$EntryImpl$1.messageReceived(DefaultIoFilterChain.java:802)
   at 
 org.apache.mina.core.filterchain.IoFilterAdapter.messageReceived(IoFilterAdapter.java:120)
   at 
 org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
   at 
 org.apache.mina.core.filterchain.DefaultIoFilterChain.fireMessageReceived(DefaultIoFilterChain.java:426)
   at 
 org.apache.mina.core.polling.AbstractPollingIoProcessor.read(AbstractPollingIoProcessor.java:604)
   at 
 org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:564)
   at 
 org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:553)
   at 
 org.apache.mina.core.polling.AbstractPollingIoProcessor.access$400(AbstractPollingIoProcessor.java:57)
   at 
 org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.run(AbstractPollingIoProcessor.java:892)
   at 
 org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:65)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: java.lang.NullPointerException: message
   at 
 org.apache.mina.filter.codec.AbstractProtocolDecoderOutput.write(AbstractProtocolDecoderOutput.java:44)
   at 
 org.apache.directory.server.kerberos.protocol.codec.MinaKerberosDecoder.decode(MinaKerberosDecoder.java:65)
   at 
 org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:224)
   ... 15 more
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2015-06-23 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: (was: HADOOP-12113-HADOOP-12111.patch)

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2015-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597969#comment-14597969
 ] 

Hadoop QA commented on HADOOP-11820:


(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7024/console in case of 
problems.

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12113-HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2015-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598135#comment-14598135
 ] 

Hadoop QA commented on HADOOP-11820:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} site {color} | {color:green} 0m 0s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} @author {color} | {color:blue} 0m 1s 
{color} | {color:blue} Skipping @author checks as test-patch.sh has been 
patched. {color} |
| {color:green}+1{color} | {color:green} site {color} | {color:green} 0m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 11s 
{color} | {color:red} The applied patch generated 1 new shellcheck (v0.3.3) 
issues (total was 59, now 48). {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 3m 
44s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 4m 4s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12741345/HADOOP-12113-HADOOP-12111.patch
 |
| git revision | trunk / 41ae776 |
| Optional Tests | asflicense shellcheck site |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider 
upgrading.) |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7025/artifact/patchprocess/diffpatchshellcheck.txt
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7025/console |


This message was automatically generated.

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12113-HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12090) minikdc-related unit tests fail consistently on some platforms

2015-06-23 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598064#comment-14598064
 ] 

Sangjin Lee commented on HADOOP-12090:
--

Thanks!

 minikdc-related unit tests fail consistently on some platforms
 --

 Key: HADOOP-12090
 URL: https://issues.apache.org/jira/browse/HADOOP-12090
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms, test
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
 Attachments: HADOOP-12090.001.patch, HADOOP-12090.002.patch


 On some platforms all unit tests that use minikdc fail consistently. Those 
 tests include TestKMS, TestSaslDataTransfer, 
 TestTimelineAuthenticationFilter, etc.
 Typical failures on the unit tests:
 {noformat}
 java.lang.AssertionError: 
 org.apache.hadoop.security.authentication.client.AuthenticationException: 
 GSSException: No valid credentials provided (Mechanism level: Cannot get a 
 KDC reply)
   at org.junit.Assert.fail(Assert.java:88)
   at 
 org.apache.hadoop.crypto.key.kms.server.TestKMS$8$4.run(TestKMS.java:1154)
   at 
 org.apache.hadoop.crypto.key.kms.server.TestKMS$8$4.run(TestKMS.java:1145)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1645)
   at 
 org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:261)
   at 
 org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:76)
 {noformat}
 The errors that cause this failure on the KDC server on the minikdc are a 
 NullPointerException:
 {noformat}
 org.apache.mina.filter.codec.ProtocolDecoderException: 
 java.lang.NullPointerException: message (Hexdump: ...)
   at 
 org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:234)
   at 
 org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
   at 
 org.apache.mina.core.filterchain.DefaultIoFilterChain.access$1200(DefaultIoFilterChain.java:48)
   at 
 org.apache.mina.core.filterchain.DefaultIoFilterChain$EntryImpl$1.messageReceived(DefaultIoFilterChain.java:802)
   at 
 org.apache.mina.core.filterchain.IoFilterAdapter.messageReceived(IoFilterAdapter.java:120)
   at 
 org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
   at 
 org.apache.mina.core.filterchain.DefaultIoFilterChain.fireMessageReceived(DefaultIoFilterChain.java:426)
   at 
 org.apache.mina.core.polling.AbstractPollingIoProcessor.read(AbstractPollingIoProcessor.java:604)
   at 
 org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:564)
   at 
 org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:553)
   at 
 org.apache.mina.core.polling.AbstractPollingIoProcessor.access$400(AbstractPollingIoProcessor.java:57)
   at 
 org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.run(AbstractPollingIoProcessor.java:892)
   at 
 org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:65)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: java.lang.NullPointerException: message
   at 
 org.apache.mina.filter.codec.AbstractProtocolDecoderOutput.write(AbstractProtocolDecoderOutput.java:44)
   at 
 org.apache.directory.server.kerberos.protocol.codec.MinaKerberosDecoder.decode(MinaKerberosDecoder.java:65)
   at 
 org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:224)
   ... 15 more
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2015-06-23 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: HADOOP-12113-HADOOP-12111.patch

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12113-HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10679) Authorize webui access using ServiceAuthorizationManager

2015-06-23 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598144#comment-14598144
 ] 

Haohui Mai commented on HADOOP-10679:
-

Is it possible to separate the refactoring into another JIRA? That would 
facilitate the review process. Thanks!

 Authorize webui access using ServiceAuthorizationManager
 

 Key: HADOOP-10679
 URL: https://issues.apache.org/jira/browse/HADOOP-10679
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Reporter: Benoy Antony
Assignee: Benoy Antony
  Labels: BB2015-05-TBR
 Attachments: HADOOP-10679.patch, HADOOP-10679.patch, hadoop-10679.pdf


 Currently accessing Hadoop via RPC can be authorized using 
 _ServiceAuthorizationManager_. But there is no uniform authorization of the 
 HTTP access. Some of the servlets check for admin privilege. 
 This creates an inconsistency of authorization between access via RPC vs 
 HTTP. 
 The fix is to enable authorization of the webui access also using 
 _ServiceAuthorizationManager_. 
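
 A rough sketch of what such a filter could look like (the names 
 WebUiAuthFilter and WebUiProtocol below are illustrative only, not from any 
 patch; ServiceAuthorizationManager and its authorize()/refresh() methods are 
 the existing Hadoop API):
 {code}
 import java.net.InetAddress;
 import javax.servlet.Filter;
 import javax.servlet.FilterChain;
 import javax.servlet.FilterConfig;
 import javax.servlet.ServletException;
 import javax.servlet.ServletRequest;
 import javax.servlet.ServletResponse;
 import javax.servlet.http.HttpServletRequest;
 import javax.servlet.http.HttpServletResponse;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.authorize.AuthorizationException;
 import org.apache.hadoop.security.authorize.ServiceAuthorizationManager;

 public class WebUiAuthFilter implements Filter {
   // Placeholder marker interface standing in for the HTTP endpoints.
   public interface WebUiProtocol {}

   private final ServiceAuthorizationManager authManager =
       new ServiceAuthorizationManager();
   private final Configuration conf = new Configuration();

   public void init(FilterConfig fc) {
     // A PolicyProvider mapping WebUiProtocol to an ACL config key would be
     // registered here via authManager.refresh(conf, provider).
   }

   public void doFilter(ServletRequest req, ServletResponse resp,
       FilterChain chain) throws java.io.IOException, ServletException {
     HttpServletRequest httpReq = (HttpServletRequest) req;
     // getRemoteUser() may be null if authentication did not run;
     // a real filter would handle that case explicitly.
     UserGroupInformation user =
         UserGroupInformation.createRemoteUser(httpReq.getRemoteUser());
     try {
       // The same authorize() call that gates RPC access.
       authManager.authorize(user, WebUiProtocol.class, conf,
           InetAddress.getByName(req.getRemoteAddr()));
       chain.doFilter(req, resp);
     } catch (AuthorizationException e) {
       ((HttpServletResponse) resp).sendError(
           HttpServletResponse.SC_FORBIDDEN);
     }
   }

   public void destroy() {}
 }
 {code}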



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2015-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598130#comment-14598130
 ] 

Hadoop QA commented on HADOOP-11820:


(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7025/console in case of 
problems.

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12113-HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2015-06-23 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: (was: HADOOP-12113-HADOOP-12111.patch)

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HADOOP-11820) aw jira testing, ignore

2015-06-23 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Comment: was deleted

(was: (!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7024/console in case of 
problems.)

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12113-HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HADOOP-11820) aw jira testing, ignore

2015-06-23 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Comment: was deleted

(was: \\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} site {color} | {color:green} 0m 0s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} @author {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipping @author checks as test-patch.sh has been 
patched. {color} |
| {color:green}+1{color} | {color:green} site {color} | {color:green} 0m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 9s 
{color} | {color:red} The applied patch generated 1 new shellcheck (v0.3.3) 
issues (total was 59, now 49). {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 2m 
56s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 3m 12s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12741323/HADOOP-12113-HADOOP-12111.patch
 |
| git revision | trunk / 41ae776 |
| Optional Tests | asflicense shellcheck site |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider 
upgrading.) |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7024/artifact/patchprocess/diffpatchshellcheck.txt
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7024/console |


This message was automatically generated.)

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12049) Control http authentication cookie persistence via configuration

2015-06-23 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598183#comment-14598183
 ] 

Benoy Antony commented on HADOOP-12049:
---

+1.
If there are no further comments, I can commit this tomorrow.

 Control http authentication cookie persistence via configuration
 

 Key: HADOOP-12049
 URL: https://issues.apache.org/jira/browse/HADOOP-12049
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Benoy Antony
Assignee: hzlu
  Labels: patch
 Fix For: 3.0.0

 Attachments: HADOOP-12049.001.patch, HADOOP-12049.003.patch, 
 HADOOP-12049.005.patch, HADOOP-12049.007.patch


 During http authentication, a cookie is dropped. This is a persistent cookie. 
 The cookie is valid across browser sessions.
 For clusters which require enhanced security, it is desirable to have a 
 session cookie so that the cookie gets deleted when the user closes the 
 browser session.
 It should be possible to specify cookie persistence (session or persistent) 
 via configuration.
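
 For reference, a minimal sketch of the distinction being requested, using the 
 plain Servlet cookie API (the class and method names below are illustrative, 
 not from the patch; "hadoop.auth" is the cookie name used by hadoop-auth):
 {code}
 import javax.servlet.http.Cookie;
 import javax.servlet.http.HttpServletResponse;

 public final class AuthCookieSketch {
   public static void addAuthCookie(HttpServletResponse resp, String token,
       boolean persistent, int maxAgeSecs) {
     Cookie cookie = new Cookie("hadoop.auth", token);
     if (persistent) {
       // Persistent cookie: survives browser restarts until it expires.
       cookie.setMaxAge(maxAgeSecs);
     } else {
       // Session cookie: a negative max-age tells the browser to discard
       // the cookie when the browsing session ends.
       cookie.setMaxAge(-1);
     }
     resp.addCookie(cookie);
   }
 }
 {code}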



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12090) minikdc-related unit tests fail consistently on some platforms

2015-06-23 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598298#comment-14598298
 ] 

Haohui Mai commented on HADOOP-12090:
-

I think it might make more sense to upgrade apacheds to the latest version to 
resolve the issue. There are no backward compatibility concerns, as apacheds is 
only used by minikdc.

Tweaking the buffer size might work most of the time, but it does not seem to 
guarantee that the packet will not be fragmented due to timing issues.

 minikdc-related unit tests fail consistently on some platforms
 --

 Key: HADOOP-12090
 URL: https://issues.apache.org/jira/browse/HADOOP-12090
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms, test
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
 Attachments: HADOOP-12090.001.patch, HADOOP-12090.002.patch


 On some platforms all unit tests that use minikdc fail consistently. Those 
 tests include TestKMS, TestSaslDataTransfer, 
 TestTimelineAuthenticationFilter, etc.
 Typical failures on the unit tests:
 {noformat}
 java.lang.AssertionError: 
 org.apache.hadoop.security.authentication.client.AuthenticationException: 
 GSSException: No valid credentials provided (Mechanism level: Cannot get a 
 KDC reply)
   at org.junit.Assert.fail(Assert.java:88)
   at 
 org.apache.hadoop.crypto.key.kms.server.TestKMS$8$4.run(TestKMS.java:1154)
   at 
 org.apache.hadoop.crypto.key.kms.server.TestKMS$8$4.run(TestKMS.java:1145)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1645)
   at 
 org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:261)
   at 
 org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:76)
 {noformat}
 The errors that cause this failure on the KDC server on the minikdc are a 
 NullPointerException:
 {noformat}
 org.apache.mina.filter.codec.ProtocolDecoderException: 
 java.lang.NullPointerException: message (Hexdump: ...)
   at 
 org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:234)
   at 
 org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
   at 
 org.apache.mina.core.filterchain.DefaultIoFilterChain.access$1200(DefaultIoFilterChain.java:48)
   at 
 org.apache.mina.core.filterchain.DefaultIoFilterChain$EntryImpl$1.messageReceived(DefaultIoFilterChain.java:802)
   at 
 org.apache.mina.core.filterchain.IoFilterAdapter.messageReceived(IoFilterAdapter.java:120)
   at 
 org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
   at 
 org.apache.mina.core.filterchain.DefaultIoFilterChain.fireMessageReceived(DefaultIoFilterChain.java:426)
   at 
 org.apache.mina.core.polling.AbstractPollingIoProcessor.read(AbstractPollingIoProcessor.java:604)
   at 
 org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:564)
   at 
 org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:553)
   at 
 org.apache.mina.core.polling.AbstractPollingIoProcessor.access$400(AbstractPollingIoProcessor.java:57)
   at 
 org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.run(AbstractPollingIoProcessor.java:892)
   at 
 org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:65)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: java.lang.NullPointerException: message
   at 
 org.apache.mina.filter.codec.AbstractProtocolDecoderOutput.write(AbstractProtocolDecoderOutput.java:44)
   at 
 org.apache.directory.server.kerberos.protocol.codec.MinaKerberosDecoder.decode(MinaKerberosDecoder.java:65)
   at 
 org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:224)
   ... 15 more
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12036) Consolidate all of the cmake extensions in one directory

2015-06-23 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598295#comment-14598295
 ] 

Colin Patrick McCabe commented on HADOOP-12036:
---

bq. The first item is needed for 64-bit builds on Solaris. That's because 
CMAKE_SYSTEM_PROCESSOR is the same as the output of 'uname -p', which is 'i386' 
on Solaris x86, for example, because we support both 32- and 64-bit executables 
simultaneously. We have to fake it up this way to get CMake to build 64-bit. 
That's what the comment above the block in question is saying. Horrid, yes.

Ah, ok.

bq. The workaround \[for bzip2 library finding issues\] is to set 
CMAKE_LIBRARY_ARCHITECTURE to the correct value for the LP64 library location 
on the platform amd64 or sparcv9.

Good idea.  I see that the patch sets this for Solaris.  Should we do this for 
Linux as well?  Or should we do that in a follow-on, in case it's disruptive?

{code}
# Add in support other compilers here, if necessary.
if(NOT (CMAKE_COMPILER_IS_GNUCC AND CMAKE_COMPILER_IS_GNUCXX))

# Assume GCC and set the shared compiler flags.
else()
hadoop_add_compiler_flags(${GCC_SHARED_FLAGS})
endif()
{code}
Hmm, if I'm reading this right, I still don't think this will work for clang or 
icc?  Clang won't set CMAKE_COMPILER_IS_GNUCC or CMAKE_COMPILER_IS_GNUCXX, so 
it won't get the correct CFLAGS. Maybe let's just get rid of this if statement 
and add a comment saying that compilers which don't use the gcc flags should be 
special-cased here. (We don't support any such compilers yet, but we'd like to 
in the future.)

thanks

 Consolidate all of the cmake extensions in one directory
 

 Key: HADOOP-12036
 URL: https://issues.apache.org/jira/browse/HADOOP-12036
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Allen Wittenauer
Assignee: Alan Burlison
 Attachments: HADOOP-12036.001.patch, HADOOP-12036.002.patch


 Rather than have a half-dozen redefinitions, custom extensions, etc, we 
 should move them all to one location so that the cmake environment is 
 consistent between the various native components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12107) long running apps may have a huge number of StatisticsData instances under FileSystem

2015-06-23 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598388#comment-14598388
 ] 

Ming Ma commented on HADOOP-12107:
--

Thanks [~sjlee0]. Latest patch LGTM. [~cmccabe], [~jira.shegalov] any 
additional comments?

 long running apps may have a huge number of StatisticsData instances under 
 FileSystem
 -

 Key: HADOOP-12107
 URL: https://issues.apache.org/jira/browse/HADOOP-12107
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Minor
 Attachments: HADOOP-12107.001.patch, HADOOP-12107.002.patch, 
 HADOOP-12107.003.patch


 We observed with some of our apps (non-mapreduce apps that use filesystems) 
 that they end up accumulating a huge memory footprint coming from 
 {{FileSystem$Statistics$StatisticsData}} (in the {{allData}} list of 
 {{Statistics}}).
 Although the thread reference from {{StatisticsData}} is a weak reference, 
 and thus can get cleared once a thread goes away, the actual 
 {{StatisticsData}} instances in the list won't get cleared until any of these 
 following methods is called on {{Statistics}}:
 - {{getBytesRead()}}
 - {{getBytesWritten()}}
 - {{getReadOps()}}
 - {{getLargeReadOps()}}
 - {{getWriteOps()}}
 - {{toString()}}
 It is quite possible to have an application that interacts with a filesystem 
 but does not call any of these methods on the {{Statistics}}. If such an 
 application runs for a long time and has a large amount of thread churn, the 
 memory footprint will grow significantly.
 The current workaround is either to limit the thread churn or to invoke these 
 operations occasionally to pare down the memory. However, this is still a 
 deficiency with {{FileSystem$Statistics}} itself in that the memory is 
 controlled only as a side effect of those operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12107) long running apps may have a huge number of StatisticsData instances under FileSystem

2015-06-23 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598397#comment-14598397
 ] 

Gera Shegalov commented on HADOOP-12107:


Thanks for v3 [~sjlee0]! 

*FileSystem.java:*

*{{getThreadStatistics}}:*
Minimize the code executed under the monitor: pull reference creation out of 
the {{synchronized}} block, similar to what it was before. Note that 
{{currentThread}} is a native call.


*{{Cleaner#run}}*
Catch and log InterruptedException in the while loop, so that the thread does 
not die on a spurious wakeup. This is safe since it is a daemon thread.

Nits:
Can we be more specific in the naming, to the tune of STATS_DATA_CLEANER, 
STATS_DATA_REFQUEUE, StatsDataCleaner?

*{{testStatisticsThreadLocalDataCleanUp}}*
Since the test uses waits, pass some reasonable timeout to {{@Test}}.

Make {{int size}} and {{int maxSeconds}} final.
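
For reference, a minimal sketch of the {{Cleaner#run}} shape being requested 
(names such as StatsDataCleaner and refQueue follow the naming suggestion above 
and are assumptions, not necessarily what the patch uses):

{code}
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;

class StatsDataCleaner implements Runnable {
  private final ReferenceQueue<Thread> refQueue = new ReferenceQueue<Thread>();

  public void run() {
    while (true) {
      try {
        // Blocks until a tracked thread has been garbage collected.
        Reference<? extends Thread> ref = refQueue.remove();
        // ... prune the matching StatisticsData entry from allData ...
        ref.clear();
      } catch (InterruptedException e) {
        // Catch and log, per the review above: a spurious wakeup must not
        // kill this daemon cleaner thread.
        System.err.println("StatsDataCleaner interrupted; continuing: " + e);
      }
    }
  }
}
{code}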

 long running apps may have a huge number of StatisticsData instances under 
 FileSystem
 -

 Key: HADOOP-12107
 URL: https://issues.apache.org/jira/browse/HADOOP-12107
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Minor
 Attachments: HADOOP-12107.001.patch, HADOOP-12107.002.patch, 
 HADOOP-12107.003.patch


 We observed with some of our apps (non-mapreduce apps that use filesystems) 
 that they end up accumulating a huge memory footprint coming from 
 {{FileSystem$Statistics$StatisticsData}} (in the {{allData}} list of 
 {{Statistics}}).
 Although the thread reference from {{StatisticsData}} is a weak reference, 
 and thus can get cleared once a thread goes away, the actual 
 {{StatisticsData}} instances in the list won't get cleared until any of these 
 following methods is called on {{Statistics}}:
 - {{getBytesRead()}}
 - {{getBytesWritten()}}
 - {{getReadOps()}}
 - {{getLargeReadOps()}}
 - {{getWriteOps()}}
 - {{toString()}}
 It is quite possible to have an application that interacts with a filesystem 
 but does not call any of these methods on the {{Statistics}}. If such an 
 application runs for a long time and has a large amount of thread churn, the 
 memory footprint will grow significantly.
 The current workaround is either to limit the thread churn or to invoke these 
 operations occasionally to pare down the memory. However, this is still a 
 deficiency with {{FileSystem$Statistics}} itself in that the memory is 
 controlled only as a side effect of those operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12036) Consolidate all of the cmake extensions in one directory

2015-06-23 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14598333#comment-14598333
 ] 

Alan Burlison commented on HADOOP-12036:


I've been as conservative as I can with the existing Linux code, as I'm wary 
of breaking distros other than the one I'm using (CentOS), so I think doing it 
as a follow-up might be prudent.

And yes, as you say, it's much better to add an "XXX Add new compiler support 
here" comment; I'll make that change.

 Consolidate all of the cmake extensions in one directory
 

 Key: HADOOP-12036
 URL: https://issues.apache.org/jira/browse/HADOOP-12036
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Allen Wittenauer
Assignee: Alan Burlison
 Attachments: HADOOP-12036.001.patch, HADOOP-12036.002.patch


 Rather than have a half-dozen redefinitions, custom extensions, etc, we 
 should move them all to one location so that the cmake environment is 
 consistent between the various native components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12053) Harfs defaulturiport should be Zero ( should not -1)

2015-06-23 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HADOOP-12053:
---
 Target Version/s: 2.8.0
Affects Version/s: 2.7.0
   Status: Patch Available  (was: Open)

 Harfs defaulturiport should be Zero ( should not -1)
 

 Key: HADOOP-12053
 URL: https://issues.apache.org/jira/browse/HADOOP-12053
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HADOOP-12053.001.patch


 The harfs overrides the {{getUriDefaultPort}} method of {{AbstractFileSystem}} 
 and returns -1. But -1 cannot pass the {{checkPath}} method when 
 {{fs.defaultFS}} is set without a port (like hdfs://hacluster).
  *Test Code:* 
 {code}
 for (FileStatus file : files) {
   String[] edges = file.getPath().getName().split("-");
   if (applicationId.toString().compareTo(edges[0]) >= 0
       && applicationId.toString().compareTo(edges[1]) <= 0) {
     Path harPath = new Path("har://" + file.getPath().toUri().getPath());
     harPath = harPath.getFileSystem(conf).makeQualified(harPath);
     remoteAppDir = LogAggregationUtils.getRemoteAppLogDir(
         harPath, applicationId, appOwner,
         LogAggregationUtils.getRemoteNodeLogDirSuffix(conf));
     if (FileContext.getFileContext(remoteAppDir.toUri()).util()
         .exists(remoteAppDir)) {
       remoteDirSet.add(remoteAppDir);
     }
   }
 }
 {code}
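
 A hedged sketch of the direction the summary suggests (only
 {{getUriDefaultPort}} is taken from the description; the override shown is
 illustrative):
 {code}
 // Sketch: report 0 ("no default port") instead of -1 so that checkPath can
 // accept a defaultFS authority without an explicit port, e.g. hdfs://hacluster.
 @Override
 public int getUriDefaultPort() {
   return 0;  // was -1, which checkPath rejects for port-less URIs
 }
 {code}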



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12090) minikdc-related unit tests fail consistently on some platforms

2015-06-23 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14598370#comment-14598370
 ] 

Sangjin Lee commented on HADOOP-12090:
--

Thanks for the comments [~wheat9]. FWIW, it's not clear when the fixed version 
of apacheds (2.0.0-M21) will be released. Those of us who cannot wait for that 
release would need this patch internally to work around the issue.

Also, regarding packet fragmentation, it's correct that packets can be 
fragmented for other reasons as well. That said, the risk is mitigated by two 
facts: (1) typical Kerberos authentication request messages are much smaller 
(~500 bytes) than the proposed window size (64 KB), and (2) mini-kdc traffic 
goes over loopback connections, so there is little risk of fragmentation 
unless software (e.g. apacheds) sets the window size arbitrarily small.
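
If the mitigation is indeed to enlarge the decoder's read window (my reading 
of the discussion; the integration point below is an assumption), the MINA 
side could look roughly like:

{code}
// Hedged sketch: enlarge MINA's per-session read buffer so a typical Kerberos
// request (~500 bytes) arrives in a single read and the apacheds decoder never
// sees a fragmented message. The exact wiring into minikdc is an assumption.
NioSocketAcceptor acceptor = new NioSocketAcceptor();
acceptor.getSessionConfig().setReadBufferSize(64 * 1024);  // 64 KB window
{code}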

 minikdc-related unit tests fail consistently on some platforms
 --

 Key: HADOOP-12090
 URL: https://issues.apache.org/jira/browse/HADOOP-12090
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms, test
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
 Attachments: HADOOP-12090.001.patch, HADOOP-12090.002.patch


 On some platforms all unit tests that use minikdc fail consistently. Those 
 tests include TestKMS, TestSaslDataTransfer, 
 TestTimelineAuthenticationFilter, etc.
 Typical failures on the unit tests:
 {noformat}
 java.lang.AssertionError: 
 org.apache.hadoop.security.authentication.client.AuthenticationException: 
 GSSException: No valid credentials provided (Mechanism level: Cannot get a 
 KDC reply)
   at org.junit.Assert.fail(Assert.java:88)
   at 
 org.apache.hadoop.crypto.key.kms.server.TestKMS$8$4.run(TestKMS.java:1154)
   at 
 org.apache.hadoop.crypto.key.kms.server.TestKMS$8$4.run(TestKMS.java:1145)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1645)
   at 
 org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:261)
   at 
 org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:76)
 {noformat}
 The error that causes this failure on the minikdc KDC server is a 
 NullPointerException:
 {noformat}
 org.apache.mina.filter.codec.ProtocolDecoderException: 
 java.lang.NullPointerException: message (Hexdump: ...)
   at 
 org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:234)
   at 
 org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
   at 
 org.apache.mina.core.filterchain.DefaultIoFilterChain.access$1200(DefaultIoFilterChain.java:48)
   at 
 org.apache.mina.core.filterchain.DefaultIoFilterChain$EntryImpl$1.messageReceived(DefaultIoFilterChain.java:802)
   at 
 org.apache.mina.core.filterchain.IoFilterAdapter.messageReceived(IoFilterAdapter.java:120)
   at 
 org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
   at 
 org.apache.mina.core.filterchain.DefaultIoFilterChain.fireMessageReceived(DefaultIoFilterChain.java:426)
   at 
 org.apache.mina.core.polling.AbstractPollingIoProcessor.read(AbstractPollingIoProcessor.java:604)
   at 
 org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:564)
   at 
 org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:553)
   at 
 org.apache.mina.core.polling.AbstractPollingIoProcessor.access$400(AbstractPollingIoProcessor.java:57)
   at 
 org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.run(AbstractPollingIoProcessor.java:892)
   at 
 org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:65)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: java.lang.NullPointerException: message
   at 
 org.apache.mina.filter.codec.AbstractProtocolDecoderOutput.write(AbstractProtocolDecoderOutput.java:44)
   at 
 org.apache.directory.server.kerberos.protocol.codec.MinaKerberosDecoder.decode(MinaKerberosDecoder.java:65)
   at 
 org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:224)
   ... 15 more
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12036) Consolidate all of the cmake extensions in one directory

2015-06-23 Thread Alan Burlison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Burlison updated HADOOP-12036:
---
Attachment: HADOOP-12036.003.patch

Assume GCC by default

 Consolidate all of the cmake extensions in one directory
 

 Key: HADOOP-12036
 URL: https://issues.apache.org/jira/browse/HADOOP-12036
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Allen Wittenauer
Assignee: Alan Burlison
 Attachments: HADOOP-12036.001.patch, HADOOP-12036.002.patch, 
 HADOOP-12036.003.patch


 Rather than have a half-dozen redefinitions, custom extensions, etc, we 
 should move them all to one location so that the cmake environment is 
 consistent between the various native components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12107) long running apps may have a huge number of StatisticsData instances under FileSystem

2015-06-23 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14597297#comment-14597297
 ] 

Walter Su commented on HADOOP-12107:


bq. This will create one additional thread per FileSystem object.
The 003 patch is good. I just don't understand why the 001 patch creates one 
additional thread per FileSystem *object*.
I looked at {{FileSystem#getStatistics(..)}}. I think it creates one 
{{Statistics}} object per FileSystem *class*, so wouldn't the 001 patch create 
one additional thread per FileSystem *class*? I'd be grateful if somebody 
could guide me through this.
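
For reference, the per-class bookkeeping in question looks roughly like this 
(a paraphrase of the {{FileSystem}} internals as I read them, not a verbatim 
quote):

{code}
// Statistics are keyed by the FileSystem subclass, not by instance: every
// DistributedFileSystem, for example, shares one Statistics object.
private static final Map<Class<? extends FileSystem>, Statistics>
    statisticsTable =
        new IdentityHashMap<Class<? extends FileSystem>, Statistics>();

public static synchronized Statistics getStatistics(
    String scheme, Class<? extends FileSystem> cls) {
  Statistics result = statisticsTable.get(cls);
  if (result == null) {
    result = new Statistics(scheme);
    statisticsTable.put(cls, result);
  }
  return result;
}
{code}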

 long running apps may have a huge number of StatisticsData instances under 
 FileSystem
 -

 Key: HADOOP-12107
 URL: https://issues.apache.org/jira/browse/HADOOP-12107
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Minor
 Attachments: HADOOP-12107.001.patch, HADOOP-12107.002.patch, 
 HADOOP-12107.003.patch


 We observed with some of our apps (non-mapreduce apps that use filesystems) 
 that they end up accumulating a huge memory footprint coming from 
 {{FileSystem$Statistics$StatisticsData}} (in the {{allData}} list of 
 {{Statistics}}).
 Although the thread reference from {{StatisticsData}} is a weak reference, 
 and thus can get cleared once a thread goes away, the actual 
 {{StatisticsData}} instances in the list won't get cleared until any of these 
 following methods is called on {{Statistics}}:
 - {{getBytesRead()}}
 - {{getBytesWritten()}}
 - {{getReadOps()}}
 - {{getLargeReadOps()}}
 - {{getWriteOps()}}
 - {{toString()}}
 It is quite possible to have an application that interacts with a filesystem 
 but does not call any of these methods on the {{Statistics}}. If such an 
 application runs for a long time and has a large amount of thread churn, the 
 memory footprint will grow significantly.
 The current workaround is either to limit the thread churn or to invoke these 
 operations occasionally to pare down the memory. However, this is still a 
 deficiency with {{FileSystem$Statistics}} itself in that the memory is 
 controlled only as a side effect of those operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12036) Consolidate all of the cmake extensions in one directory

2015-06-23 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14597464#comment-14597464
 ] 

Alan Burlison commented on HADOOP-12036:


The workaround is to set CMAKE_LIBRARY_ARCHITECTURE to the correct value for 
the LP64 library location on the platform (amd64 or sparcv9).

Before:

Found ZLIB: /usr/lib/libz.so.1 (found version 1.2.8)
Looking for BZ2_bzCompressInit in /usr/lib/libbz2.so.1
Looking for BZ2_bzCompressInit in /usr/lib/libbz2.so.1 - not found

This is wrong: it finds the 32-bit rather than the 64-bit library, then feeds 
it into a 64-bit test compile, which unsurprisingly blows up.

After:

Found ZLIB: /usr/lib/amd64/libz.so.1 (found version 1.2.8)
Looking for BZ2_bzCompressInit in /usr/lib/amd64/libbz2.so.1
Looking for BZ2_bzCompressInit in /usr/lib/amd64/libbz2.so.1 - found

which is correct.

This and the other architecture-related stuff needs moving from HadoopJNI.cmake 
into HadoopCommon.cmake as all compilations need the tweaks, not just JNI code. 
I believe the same move is also needed for the Linux equivalents.

The other issue, warnings such as "Manually-specified variables were not used 
by the project ... REQUIRE_BZIP2", occurs because when the library *is* found 
the REQUIRE_BZIP2 variable isn't tested, and CMake isn't smart enough to see 
that it's used down a different code path. The easiest way to stop the 
warnings is to put a dummy assignment to REQUIRE_BZIP2 in the library-found 
code path.

 Consolidate all of the cmake extensions in one directory
 

 Key: HADOOP-12036
 URL: https://issues.apache.org/jira/browse/HADOOP-12036
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Allen Wittenauer
Assignee: Alan Burlison
 Attachments: HADOOP-12036.001.patch


 Rather than have a half-dozen redefinitions, custom extensions, etc, we 
 should move them all to one location so that the cmake environment is 
 consistent between the various native components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12053) Harfs defaulturiport should be Zero ( should not -1)

2015-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14598843#comment-14598843
 ] 

Hadoop QA commented on HADOOP-12053:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 12s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 38s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 45s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 17s | The applied patch generated  1 
new checkstyle issues (total was 32, now 33). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 39s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 12s | Tests passed in 
hadoop-common. |
| {color:green}+1{color} | tools/hadoop tests |   1m 11s | Tests passed in 
hadoop-azure. |
| | |  64m 37s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12741426/HADOOP-12053.002.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 49dfad9 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/7028/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7028/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-azure test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7028/artifact/patchprocess/testrun_hadoop-azure.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7028/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7028/console |


This message was automatically generated.

 Harfs defaulturiport should be Zero ( should not -1)
 

 Key: HADOOP-12053
 URL: https://issues.apache.org/jira/browse/HADOOP-12053
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HADOOP-12053.001.patch, HADOOP-12053.002.patch


 The harfs overrides the {{getUriDefaultPort}} method of {{AbstractFileSystem}} 
 and returns -1. But -1 cannot pass the {{checkPath}} method when 
 {{fs.defaultFS}} is set without a port (like hdfs://hacluster).
  *Test Code:* 
 {code}
 for (FileStatus file : files) {
   String[] edges = file.getPath().getName().split("-");
   if (applicationId.toString().compareTo(edges[0]) >= 0
       && applicationId.toString().compareTo(edges[1]) <= 0) {
     Path harPath = new Path("har://" + file.getPath().toUri().getPath());
     harPath = harPath.getFileSystem(conf).makeQualified(harPath);
     remoteAppDir = LogAggregationUtils.getRemoteAppLogDir(
         harPath, applicationId, appOwner,
         LogAggregationUtils.getRemoteNodeLogDirSuffix(conf));
     if (FileContext.getFileContext(remoteAppDir.toUri()).util()
         .exists(remoteAppDir)) {
       remoteDirSet.add(remoteAppDir);
     }
   }
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12053) Harfs defaulturiport should be Zero ( should not -1)

2015-06-23 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-12053:
--
Assignee: Gera Shegalov  (was: Brahma Reddy Battula)

 Harfs defaulturiport should be Zero ( should not -1)
 

 Key: HADOOP-12053
 URL: https://issues.apache.org/jira/browse/HADOOP-12053
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Gera Shegalov
 Attachments: HADOOP-12053.001.patch, HADOOP-12053.002.patch


 The harfs overrides the {{getUriDefaultPort}} method of {{AbstractFileSystem}} 
 and returns -1. But -1 cannot pass the {{checkPath}} method when 
 {{fs.defaultFS}} is set without a port (like hdfs://hacluster).
  *Test Code:* 
 {code}
 for (FileStatus file : files) {
   String[] edges = file.getPath().getName().split("-");
   if (applicationId.toString().compareTo(edges[0]) >= 0
       && applicationId.toString().compareTo(edges[1]) <= 0) {
     Path harPath = new Path("har://" + file.getPath().toUri().getPath());
     harPath = harPath.getFileSystem(conf).makeQualified(harPath);
     remoteAppDir = LogAggregationUtils.getRemoteAppLogDir(
         harPath, applicationId, appOwner,
         LogAggregationUtils.getRemoteNodeLogDirSuffix(conf));
     if (FileContext.getFileContext(remoteAppDir.toUri()).util()
         .exists(remoteAppDir)) {
       remoteDirSet.add(remoteAppDir);
     }
   }
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12053) Harfs defaulturiport should be Zero ( should not -1)

2015-06-23 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14598861#comment-14598861
 ] 

Brahma Reddy Battula commented on HADOOP-12053:
---

[~jira.shegalov] Thanks for working on this. Please fix the checkstyle 
comment; also, I think this may be an incompatible change.

 Harfs defaulturiport should be Zero ( should not -1)
 

 Key: HADOOP-12053
 URL: https://issues.apache.org/jira/browse/HADOOP-12053
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Gera Shegalov
 Attachments: HADOOP-12053.001.patch, HADOOP-12053.002.patch


 The harfs overrides the {{getUriDefaultPort}} method of {{AbstractFileSystem}} 
 and returns -1. But -1 cannot pass the {{checkPath}} method when 
 {{fs.defaultFS}} is set without a port (like hdfs://hacluster).
  *Test Code:* 
 {code}
 for (FileStatus file : files) {
   String[] edges = file.getPath().getName().split("-");
   if (applicationId.toString().compareTo(edges[0]) >= 0
       && applicationId.toString().compareTo(edges[1]) <= 0) {
     Path harPath = new Path("har://" + file.getPath().toUri().getPath());
     harPath = harPath.getFileSystem(conf).makeQualified(harPath);
     remoteAppDir = LogAggregationUtils.getRemoteAppLogDir(
         harPath, applicationId, appOwner,
         LogAggregationUtils.getRemoteNodeLogDirSuffix(conf));
     if (FileContext.getFileContext(remoteAppDir.toUri()).util()
         .exists(remoteAppDir)) {
       remoteDirSet.add(remoteAppDir);
     }
   }
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12053) Harfs defaulturiport should be Zero ( should not -1)

2015-06-23 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HADOOP-12053:
---
Attachment: HADOOP-12053.002.patch

 Harfs defaulturiport should be Zero ( should not -1)
 

 Key: HADOOP-12053
 URL: https://issues.apache.org/jira/browse/HADOOP-12053
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HADOOP-12053.001.patch, HADOOP-12053.002.patch


 The harfs overrides the {{getUriDefaultPort}} method of {{AbstractFileSystem}} 
 and returns -1. But -1 cannot pass the {{checkPath}} method when 
 {{fs.defaultFS}} is set without a port (like hdfs://hacluster).
  *Test Code:* 
 {code}
 for (FileStatus file : files) {
   String[] edges = file.getPath().getName().split("-");
   if (applicationId.toString().compareTo(edges[0]) >= 0
       && applicationId.toString().compareTo(edges[1]) <= 0) {
     Path harPath = new Path("har://" + file.getPath().toUri().getPath());
     harPath = harPath.getFileSystem(conf).makeQualified(harPath);
     remoteAppDir = LogAggregationUtils.getRemoteAppLogDir(
         harPath, applicationId, appOwner,
         LogAggregationUtils.getRemoteNodeLogDirSuffix(conf));
     if (FileContext.getFileContext(remoteAppDir.toUri()).util()
         .exists(remoteAppDir)) {
       remoteDirSet.add(remoteAppDir);
     }
   }
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12053) Harfs defaulturiport should be Zero ( should not -1)

2015-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14598527#comment-14598527
 ] 

Hadoop QA commented on HADOOP-12053:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m  3s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 33s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 36s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 17s | The applied patch generated  1 
new checkstyle issues (total was 8, now 9). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 38s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |  21m 19s | Tests failed in 
hadoop-common. |
| {color:green}+1{color} | tools/hadoop tests |   1m 10s | Tests passed in 
hadoop-azure. |
| | |  63m 20s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.fs.TestFcLocalFsUtil |
|   | hadoop.fs.TestLocalFsFCStatistics |
|   | hadoop.fs.TestSymlinkLocalFSFileContext |
|   | hadoop.fs.TestFileUtil |
|   | hadoop.fs.viewfs.TestViewFsWithAuthorityLocalFs |
|   | hadoop.fs.TestFileContextDeleteOnExit |
|   | hadoop.fs.TestS3_LocalFileContextURI |
|   | hadoop.fs.viewfs.TestChRootedFs |
|   | hadoop.fs.TestLocalFSFileContextCreateMkdir |
|   | hadoop.fs.viewfs.TestViewFsLocalFs |
|   | hadoop.fs.viewfs.TestFcCreateMkdirLocalFs |
|   | hadoop.fs.TestLocal_S3FileContextURI |
|   | hadoop.fs.viewfs.TestFcMainOperationsLocalFs |
|   | hadoop.fs.TestLocalFSFileContextMainOperations |
|   | hadoop.fs.viewfs.TestFcPermissionsLocalFs |
|   | hadoop.fs.TestFileContextResolveAfs |
|   | hadoop.fs.TestFcLocalFsPermission |
|   | hadoop.fs.shell.TestTextCommand |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12740699/HADOOP-12053.001.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 122cad6 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/7026/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7026/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-azure test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7026/artifact/patchprocess/testrun_hadoop-azure.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7026/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7026/console |


This message was automatically generated.

 Harfs defaulturiport should be Zero ( should not -1)
 

 Key: HADOOP-12053
 URL: https://issues.apache.org/jira/browse/HADOOP-12053
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HADOOP-12053.001.patch


 The harfs overrides the {{getUriDefaultPort}} method of {{AbstractFileSystem}} 
 and returns -1. But -1 cannot pass the {{checkPath}} method when 
 {{fs.defaultFS}} is set without a port (like hdfs://hacluster).
  *Test Code:* 
 {code}
 for (FileStatus file : files) {
   String[] edges = file.getPath().getName().split("-");
   if (applicationId.toString().compareTo(edges[0]) >= 0
       && applicationId.toString().compareTo(edges[1]) <= 0) {
     Path harPath = new Path("har://" + file.getPath().toUri().getPath());
     harPath = harPath.getFileSystem(conf).makeQualified(harPath);
     remoteAppDir = LogAggregationUtils.getRemoteAppLogDir(
         harPath, applicationId, appOwner,
         LogAggregationUtils.getRemoteNodeLogDirSuffix(conf));
     if (FileContext.getFileContext(remoteAppDir.toUri()).util()
         .exists(remoteAppDir)) {
       remoteDirSet.add(remoteAppDir);
     }
   }
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Commented] (HADOOP-12107) long running apps may have a huge number of StatisticsData instances under FileSystem

2015-06-23 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14598617#comment-14598617
 ] 

Gera Shegalov commented on HADOOP-12107:


+1, 004 LGTM

 long running apps may have a huge number of StatisticsData instances under 
 FileSystem
 -

 Key: HADOOP-12107
 URL: https://issues.apache.org/jira/browse/HADOOP-12107
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Minor
 Attachments: HADOOP-12107.001.patch, HADOOP-12107.002.patch, 
 HADOOP-12107.003.patch, HADOOP-12107.004.patch


 We observed with some of our apps (non-mapreduce apps that use filesystems) 
 that they end up accumulating a huge memory footprint coming from 
 {{FileSystem$Statistics$StatisticsData}} (in the {{allData}} list of 
 {{Statistics}}).
 Although the thread reference from {{StatisticsData}} is a weak reference, 
 and thus can get cleared once a thread goes away, the actual 
 {{StatisticsData}} instances in the list won't get cleared until any of these 
 following methods is called on {{Statistics}}:
 - {{getBytesRead()}}
 - {{getBytesWritten()}}
 - {{getReadOps()}}
 - {{getLargeReadOps()}}
 - {{getWriteOps()}}
 - {{toString()}}
 It is quite possible to have an application that interacts with a filesystem 
 but does not call any of these methods on the {{Statistics}}. If such an 
 application runs for a long time and has a large amount of thread churn, the 
 memory footprint will grow significantly.
 The current workaround is either to limit the thread churn or to invoke these 
 operations occasionally to pare down the memory. However, this is still a 
 deficiency with {{FileSystem$Statistics}} itself in that the memory is 
 controlled only as a side effect of those operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12107) long running apps may have a huge number of StatisticsData instances under FileSystem

2015-06-23 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-12107:
-
Attachment: HADOOP-12107.004.patch

patch v.4 posted

[~jira.shegalov], those are good suggestions.

Changes:
- made the cleaner thread uninterruptible
- reduced the locking scope in getThreadStatistics()
- renamed variables and a class to be more specific to stats data
- added a timeout to the test method
- made size and maxSeconds final

 long running apps may have a huge number of StatisticsData instances under 
 FileSystem
 -

 Key: HADOOP-12107
 URL: https://issues.apache.org/jira/browse/HADOOP-12107
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Minor
 Attachments: HADOOP-12107.001.patch, HADOOP-12107.002.patch, 
 HADOOP-12107.003.patch, HADOOP-12107.004.patch


 We observed with some of our apps (non-mapreduce apps that use filesystems) 
 that they end up accumulating a huge memory footprint coming from 
 {{FileSystem$Statistics$StatisticsData}} (in the {{allData}} list of 
 {{Statistics}}).
 Although the thread reference from {{StatisticsData}} is a weak reference, 
 and thus can get cleared once a thread goes away, the actual 
 {{StatisticsData}} instances in the list won't get cleared until any of these 
 following methods is called on {{Statistics}}:
 - {{getBytesRead()}}
 - {{getBytesWritten()}}
 - {{getReadOps()}}
 - {{getLargeReadOps()}}
 - {{getWriteOps()}}
 - {{toString()}}
 It is quite possible to have an application that interacts with a filesystem 
 but does not call any of these methods on the {{Statistics}}. If such an 
 application runs for a long time and has a large amount of thread churn, the 
 memory footprint will grow significantly.
 The current workaround is either to limit the thread churn or to invoke these 
 operations occasionally to pare down the memory. However, this is still a 
 deficiency with {{FileSystem$Statistics}} itself in that the memory is 
 controlled only as a side effect of those operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12107) long running apps may have a huge number of StatisticsData instances under FileSystem

2015-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14598570#comment-14598570
 ] 

Hadoop QA commented on HADOOP-12107:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  16m 18s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 29s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 38s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m  4s | The applied patch generated  1 
new checkstyle issues (total was 142, now 141). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 51s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  21m 41s | Tests passed in 
hadoop-common. |
| | |  60m 38s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12741393/HADOOP-12107.004.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 122cad6 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/7027/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7027/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7027/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7027/console |


This message was automatically generated.

 long running apps may have a huge number of StatisticsData instances under 
 FileSystem
 -

 Key: HADOOP-12107
 URL: https://issues.apache.org/jira/browse/HADOOP-12107
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Minor
 Attachments: HADOOP-12107.001.patch, HADOOP-12107.002.patch, 
 HADOOP-12107.003.patch, HADOOP-12107.004.patch


 We observed with some of our apps (non-mapreduce apps that use filesystems) 
 that they end up accumulating a huge memory footprint coming from 
 {{FileSystem$Statistics$StatisticsData}} (in the {{allData}} list of 
 {{Statistics}}).
 Although the thread reference from {{StatisticsData}} is a weak reference, 
 and thus can get cleared once a thread goes away, the actual 
 {{StatisticsData}} instances in the list won't get cleared until any of these 
 following methods is called on {{Statistics}}:
 - {{getBytesRead()}}
 - {{getBytesWritten()}}
 - {{getReadOps()}}
 - {{getLargeReadOps()}}
 - {{getWriteOps()}}
 - {{toString()}}
 It is quite possible to have an application that interacts with a filesystem 
 but does not call any of these methods on the {{Statistics}}. If such an 
 application runs for a long time and has a large amount of thread churn, the 
 memory footprint will grow significantly.
 The current workaround is either to limit the thread churn or to invoke these 
 operations occasionally to pare down the memory. However, this is still a 
 deficiency with {{FileSystem$Statistics}} itself in that the memory is 
 controlled only as a side effect of those operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)