[jira] [Resolved] (HADOOP-16007) Order of property settings is incorrect when includes are processed

2018-12-20 Thread Jason Lowe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-16007.
-
Resolution: Duplicate

This was fixed by HADOOP-15554.

> Order of property settings is incorrect when includes are processed
> ---
>
> Key: HADOOP-16007
> URL: https://issues.apache.org/jira/browse/HADOOP-16007
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.2.0, 3.1.1, 3.0.4
>    Reporter: Jason Lowe
>Assignee: Eric Payne
>Priority: Blocker
>
> If a configuration file sets a property and then includes another file that 
> sets the same property to a different value, the property is parsed 
> incorrectly. For example, consider the following configuration file:
> {noformat}
> <configuration xmlns:xi="http://www.w3.org/2001/XInclude">
>   <property>
>     <name>myprop</name>
>     <value>val1</value>
>   </property>
>   <xi:include href="/some/other/file.xml"/>
> </configuration>
> {noformat}
> with the contents of /some/other/file.xml as:
> {noformat}
> <configuration>
>   <property>
>     <name>myprop</name>
>     <value>val2</value>
>   </property>
> </configuration>
> {noformat}
> Parsing this configuration should result in myprop=val2, but it actually 
> results in myprop=val1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16016) TestSSLFactory#testServerWeakCiphers sporadically fails in precommit builds

2018-12-19 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-16016:
---

 Summary: TestSSLFactory#testServerWeakCiphers sporadically fails 
in precommit builds
 Key: HADOOP-16016
 URL: https://issues.apache.org/jira/browse/HADOOP-16016
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.3.0
Reporter: Jason Lowe


I have seen a couple of precommit builds across JIRAs fail in 
TestSSLFactory#testServerWeakCiphers with the error:
{noformat}
[ERROR]   TestSSLFactory.testServerWeakCiphers:240 Expected to find 'no cipher 
suites in common' but got unexpected 
exception:javax.net.ssl.SSLHandshakeException: No appropriate protocol 
(protocol is disabled or cipher suites are inappropriate)
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16007) Order of property settings is incorrect when includes are processed

2018-12-14 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-16007:
---

 Summary: Order of property settings is incorrect when includes are 
processed
 Key: HADOOP-16007
 URL: https://issues.apache.org/jira/browse/HADOOP-16007
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 3.1.1, 3.2.0, 3.0.4
Reporter: Jason Lowe


If a configuration file sets a property and then includes another file that 
sets the same property to a different value, the property is parsed 
incorrectly. For example, consider the following configuration file:
{noformat}
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
  <property>
    <name>myprop</name>
    <value>val1</value>
  </property>
  <xi:include href="/some/other/file.xml"/>
</configuration>
{noformat}
with the contents of /some/other/file.xml as:
{noformat}
<configuration>
  <property>
    <name>myprop</name>
    <value>val2</value>
  </property>
</configuration>
{noformat}
Parsing this configuration should result in myprop=val2, but it actually 
results in myprop=val1.
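
A minimal reproduction sketch in Java (the class name and the local 
/tmp/outer-conf.xml path are illustrative assumptions; the Configuration calls 
are the standard API):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class XIncludeOrderRepro {
  public static void main(String[] args) {
    // Hypothetical local copy of the configuration file shown above, which
    // sets myprop=val1 and then xi:includes /some/other/file.xml.
    Configuration conf = new Configuration(false);
    conf.addResource(new Path("/tmp/outer-conf.xml"));

    // The included definition is processed later, so this should print val2;
    // on affected releases it prints val1.
    System.out.println("myprop = " + conf.get("myprop"));
  }
}
{code}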



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.2.0 - RC0

2018-11-28 Thread Jason Lowe
Thanks for driving this release, Sunil!

+1 (binding)

- Verified signatures and digests
- Successfully performed a native build
- Deployed a single-node cluster
- Ran some sample jobs

Jason

On Fri, Nov 23, 2018 at 6:07 AM Sunil G  wrote:

> Hi folks,
>
>
>
> Thanks to all contributors who helped in this release [1]. I have created
>
> first release candidate (RC0) for Apache Hadoop 3.2.0.
>
>
> Artifacts for this RC are available here:
>
> http://home.apache.org/~sunilg/hadoop-3.2.0-RC0/
>
>
>
> RC tag in git is release-3.2.0-RC0.
>
>
>
> The maven artifacts are available via repository.apache.org at
>
> https://repository.apache.org/content/repositories/orgapachehadoop-1174/
>
>
> This vote will run 7 days (5 weekdays), ending on Nov 30 at 11:59 pm PST.
>
>
>
> 3.2.0 contains 1079 [2] fixed JIRA issues since 3.1.0. The feature additions
> below are the highlights of this release.
>
> 1. Node Attributes Support in YARN
>
> 2. Hadoop Submarine project for running Deep Learning workloads on YARN
>
> 3. Support service upgrade via YARN Service API and CLI
>
> 4. HDFS Storage Policy Satisfier
>
> 5. Support Windows Azure Storage - Blob file system in Hadoop
>
> 6. Phase 3 improvements for S3Guard and Phase 5 improvements for S3a
>
> 7. Improvements in Router-based HDFS federation
>
>
>
> Thanks to Wangda, Vinod, Marton for helping me in preparing the release.
>
> I have done some testing with my pseudo cluster. My +1 to start.
>
>
>
> Regards,
>
> Sunil
>
>
>
> [1]
>
>
> https://lists.apache.org/thread.html/68c1745dcb65602aecce6f7e6b7f0af3d974b1bf0048e7823e58b06f@%3Cyarn-dev.hadoop.apache.org%3E
>
> [2] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.2.0)
> AND fixVersion not in (3.1.0, 3.0.0, 3.0.0-beta1) AND status = Resolved
> ORDER BY fixVersion ASC
>


Re: [VOTE] Release Apache Hadoop 2.9.2 (RC0)

2018-11-19 Thread Jason Lowe
Thanks for driving this release, Akira!

+1 (binding)

- Verified signatures and digests
- Successfully performed native build from source
- Deployed a single-node cluster and ran some sample jobs

Jason

On Tue, Nov 13, 2018 at 7:02 PM Akira Ajisaka  wrote:

> Hi folks,
>
> I have put together a release candidate (RC0) for Hadoop 2.9.2. It
> includes 204 bug fixes and improvements since 2.9.1. [1]
>
> The RC is available at http://home.apache.org/~aajisaka/hadoop-2.9.2-RC0/
> Git signed tag is release-2.9.2-RC0 and the checksum is
> 826afbeae31ca687bc2f8471dc841b66ed2c6704
> The maven artifacts are staged at
> https://repository.apache.org/content/repositories/orgapachehadoop-1166/
>
> You can find my public key at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Please try the release and vote. The vote will run for 5 days.
>
> [1] https://s.apache.org/2.9.2-fixed-jiras
>
> Thanks,
> Akira
>
> -
> To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
>
>


[jira] [Created] (HADOOP-15822) zstd compressor can fail with a small output buffer

2018-10-05 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-15822:
---

 Summary: zstd compressor can fail with a small output buffer
 Key: HADOOP-15822
 URL: https://issues.apache.org/jira/browse/HADOOP-15822
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.9.0
Reporter: Jason Lowe
Assignee: Jason Lowe


TestZStandardCompressorDecompressor fails a couple of tests on my machine with 
the latest zstd library (1.3.5).  Compression can fail to finalize the stream 
when a small output buffer is used, resulting in a "failed to init" error, and 
decompression with a direct buffer can fail with an "invalid src size" error.
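
A rough round-trip sketch that exercises the small-buffer path (the buffer-size 
property name is an assumption; the rest is the stock CompressionCodec API):
{code}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.CompressionInputStream;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.io.compress.ZStandardCodec;

public class ZstdSmallBufferRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Assumed property name; a deliberately tiny buffer forces the compressor
    // to finish the frame across many small output chunks.
    conf.setInt("io.compression.codec.zstd.buffersize", 16);

    ZStandardCodec codec = new ZStandardCodec();
    codec.setConf(conf);

    byte[] input = new byte[1 << 20];  // 1 MB of zeros, compresses heavily

    ByteArrayOutputStream compressed = new ByteArrayOutputStream();
    CompressionOutputStream out = codec.createOutputStream(compressed);
    out.write(input);
    out.finish();   // the finalize step that can fail on affected versions
    out.close();

    CompressionInputStream in =
        codec.createInputStream(new ByteArrayInputStream(compressed.toByteArray()));
    byte[] roundTrip = new byte[input.length];
    IOUtils.readFully(in, roundTrip, 0, roundTrip.length);
    in.close();
  }
}
{code}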



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15820) ZStandardDecompressor native code sets an integer field as a long

2018-10-04 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-15820:
---

 Summary: ZStandardDecompressor native code sets an integer field 
as a long
 Key: HADOOP-15820
 URL: https://issues.apache.org/jira/browse/HADOOP-15820
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0-alpha2, 2.9.0
Reporter: Jason Lowe
Assignee: Jason Lowe


Java_org_apache_hadoop_io_compress_zstd_ZStandardDecompressor_init in 
ZStandardDecompressor.c sets the {{remaining}} field as a long when it actually 
is an integer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Next Hadoop Contributors Meetup on September 25th

2018-09-13 Thread Jason Lowe
I am happy to announce that Oath will be hosting the next Hadoop
Contributors meetup on Tuesday, September 25th at Yahoo Building G, 589
Java Drive, Sunnyvale CA from 8:00AM to 6:00PM.

The agenda will look roughly as follows:

08:00AM - 08:30AM Arrival and Check-in
08:30AM - 12:00PM A series of brief talks with some of the topics including:
  - HDFS scalability and security
  - Use cases and future directions for Docker on YARN
  - Submarine (Deep Learning on YARN)
  - Hadoop in the cloud
  - Oath's use of machine learning, Vespa, and Storm
11:45AM - 12:30PM Lunch Break
12:30PM - 02:00PM Brief talks series resume
02:00PM - 04:30PM Parallel breakout sessions to discuss topics suggested by
attendees.  Some proposed topics include:
  - Improved security credentials management for long-running YARN
applications
  - Improved management of parallel development lines
  - Proposals for the next bug bash
  - Tez shuffle handler and DAG aware scheduler overview
04:30PM - 06:00PM Closing Reception

RSVP at https://www.meetup.com/Hadoop-Contributors/events/254012512/ is
REQUIRED to attend and spots are limited.  Security will be checking the
attendee list as you enter the building.

We will host a Google Hangouts/Meet so people who are interested but unable
to attend in person can participate remotely.  Details will be posted to
the meetup event.

Hope to see you there!

Jason


Re: [VOTE] Release Apache Hadoop 2.8.5 (RC0)

2018-09-10 Thread Jason Lowe
Thanks for driving the release, Junping!

+1 (binding)

- Verified signatures and digests
- Successfully performed a native build from source
- Successfully deployed a single-node cluster with the timeline server
- Ran some sample jobs and examined the web UI and job logs

Jason

On Mon, Sep 10, 2018 at 7:00 AM, 俊平堵  wrote:

> Hi all,
>
>  I've created the first release candidate (RC0) for Apache
> Hadoop 2.8.5. This is our next point release to follow up 2.8.4. It
> includes 33 important fixes and improvements.
>
>
> The RC artifacts are available at:
> http://home.apache.org/~junping_du/hadoop-2.8.5-RC0
>
>
> The RC tag in git is: release-2.8.5-RC0
>
>
>
> The maven artifacts are available via repository.apache.org at:
>
> https://repository.apache.org/content/repositories/orgapachehadoop-1140
>
>
> Please try the release and vote; the vote will run for the usual 5
> working
> days, ending on 9/15/2018 PST time.
>
>
> Thanks,
>
>
> Junping
>


[jira] [Resolved] (HADOOP-15738) MRAppBenchmark.benchmark1() fails with NullPointerException

2018-09-10 Thread Jason Lowe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-15738.
-
Resolution: Duplicate

> MRAppBenchmark.benchmark1() fails with NullPointerException
> ---
>
> Key: HADOOP-15738
> URL: https://issues.apache.org/jira/browse/HADOOP-15738
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Oleksandr Shevchenko
>Priority: Minor
>
> MRAppBenchmark.benchmark1() fails with NullPointerException:
> 1. We do not set any queue for this test. As a result, we get the following 
> exception:
> {noformat}
> 2018-09-10 17:04:23,486 ERROR [Thread-0] rm.RMCommunicator 
> (RMCommunicator.java:register(177)) - Exception while registering
> java.lang.NullPointerException
> at org.apache.avro.util.Utf8$2.toUtf8(Utf8.java:123)
> at org.apache.avro.util.Utf8.getBytesFor(Utf8.java:172)
> at org.apache.avro.util.Utf8.<init>(Utf8.java:39)
> at 
> org.apache.hadoop.mapreduce.jobhistory.JobQueueChangeEvent.<init>(JobQueueChangeEvent.java:35)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.setQueueName(JobImpl.java:1167)
> at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.register(RMCommunicator.java:174)
> at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.serviceStart(RMCommunicator.java:122)
> at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.serviceStart(RMContainerAllocator.java:280)
> at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1293)
> at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at org.apache.hadoop.mapreduce.v2.app.MRApp.submit(MRApp.java:301)
> at org.apache.hadoop.mapreduce.v2.app.MRApp.submit(MRApp.java:285)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppBenchmark.run(MRAppBenchmark.java:72)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppBenchmark.benchmark1(MRAppBenchmark.java:194)
> {noformat}
> 2. We override the createSchedulerProxy method and do not set the application 
> priority that was added later by MAPREDUCE-6515, so we get the following error:
> {noformat}
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.handleJobPriorityChange(RMContainerAllocator.java:1025)
>  at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.getResources(RMContainerAllocator.java:880)
>  at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.heartbeat(RMContainerAllocator.java:286)
>  at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator$AllocatorRunnable.run(RMCommunicator.java:280)
>  at java.lang.Thread.run(Thread.java:748)
> {noformat}
> In both cases, the job never runs and the test hangs without finishing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-15738) MRAppBenchmark.benchmark1() fails with NullPointerException

2018-09-10 Thread Jason Lowe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reopened HADOOP-15738:
-

> MRAppBenchmark.benchmark1() fails with NullPointerException
> ---
>
> Key: HADOOP-15738
> URL: https://issues.apache.org/jira/browse/HADOOP-15738
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Oleksandr Shevchenko
>Priority: Minor
>
> MRAppBenchmark.benchmark1() fails with NullPointerException:
> 1. We do not set any queue for this test. As a result, we get the following 
> exception:
> {noformat}
> 2018-09-10 17:04:23,486 ERROR [Thread-0] rm.RMCommunicator 
> (RMCommunicator.java:register(177)) - Exception while registering
> java.lang.NullPointerException
> at org.apache.avro.util.Utf8$2.toUtf8(Utf8.java:123)
> at org.apache.avro.util.Utf8.getBytesFor(Utf8.java:172)
> at org.apache.avro.util.Utf8.<init>(Utf8.java:39)
> at 
> org.apache.hadoop.mapreduce.jobhistory.JobQueueChangeEvent.<init>(JobQueueChangeEvent.java:35)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.setQueueName(JobImpl.java:1167)
> at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.register(RMCommunicator.java:174)
> at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.serviceStart(RMCommunicator.java:122)
> at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.serviceStart(RMContainerAllocator.java:280)
> at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1293)
> at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at org.apache.hadoop.mapreduce.v2.app.MRApp.submit(MRApp.java:301)
> at org.apache.hadoop.mapreduce.v2.app.MRApp.submit(MRApp.java:285)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppBenchmark.run(MRAppBenchmark.java:72)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppBenchmark.benchmark1(MRAppBenchmark.java:194)
> {noformat}
> 2. We override the createSchedulerProxy method and do not set the application 
> priority that was added later by MAPREDUCE-6515, so we get the following error:
> {noformat}
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.handleJobPriorityChange(RMContainerAllocator.java:1025)
>  at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.getResources(RMContainerAllocator.java:880)
>  at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.heartbeat(RMContainerAllocator.java:286)
>  at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator$AllocatorRunnable.run(RMCommunicator.java:280)
>  at java.lang.Thread.run(Thread.java:748)
> {noformat}
> In both cases, the job never runs and the test hangs without finishing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 2.7.7

2018-07-16 Thread Jason Lowe
Thanks for driving this release, Steve!

+1 (binding)

- Verified signatures and digests
- Successfully performed a native build from source
- Deployed a single-node cluster
- Ran some sample jobs and checked web UI and logs

Jason


On Wed, Jul 11, 2018 at 10:05 AM, Steve Loughran 
wrote:

>
>
> Hi
>
> I've got RC0 of Hadoop 2.7.7 up for people to download and play with
>
> http://people.apache.org/~stevel/hadoop-2.7.7/RC0/
>
> Nexus artifacts 2.7.7 are up in staging
> https://repository.apache.org/content/repositories/orgapachehadoop-1135
>
> Git (signed) tag release-2.7.7-RC0, checksum c1aad84bd27cd79c3d1a7dd58202a8
> c3ee1ed3ac
>
> My current GPG key is 38237EE425050285077DB57AD22CF846DBB162A0
> you can download this direct (and it MUST be direct) from the ASF HTTPS
> site
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Please try the release and vote. The vote will run for 5 days.
>
> Thanks
>
> -Steve
>
> (I should add: this is my first ever attempt at a Hadoop release, please
> be (a) rigorous and (b) forgiving. Credit to Wei-Chiu Chuang for his
> advice).
>
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


Re: What's the difference between branch-3 and branch-3.0?

2018-07-06 Thread Jason Lowe
branch-3 has re-appeared yet again on June 2nd.  Looks like the
inadvertent branch-3 change from HDFS-11847 was somehow pushed again?

Anyway this branch needs to be deleted yet again as committers are
accidentally committing to it again thinking their change is going
into 3.0.x.  Is an INFRA ticket still the way to clean this up?

I'm starting to wonder if we need a push hook to prevent "branch-3"
pushes until we really need a branch-3 (i.e.: when Hadoop 4.x starts
development).  Or to generalize it a bit more, creation of branches with the
prefix "branch-" could be blocked by default but allowed if the first commit
fits a particular form like "Initial commit for the X.Y.Z release."  That way
it shouldn't be easy to accidentally create a new release line.

Jason

On Thu, Feb 22, 2018 at 2:55 PM, Chris Douglas  wrote:
> branch-3 has reappeared. Filed INFRA-16086 [1]. -C
>
> [1]: https://issues.apache.org/jira/browse/INFRA-16086
>
>
> On Wed, Jan 17, 2018 at 12:32 PM, Jason Lowe  wrote:
>> I filed INFRA-15859 to have branch-3 deleted.
>>
>> Jason
>>
>> On Wed, Jan 17, 2018 at 1:23 PM, Eric Payne <
>> eric.payne1...@yahoo.com.invalid> wrote:
>>
>>> +1 for removing the branch-3 branch. It should be done soon so more
>>> confusion can be avoided. Thanks Jason for tracking this down.
>>> -Eric
>>>
>>>
>>>   From: Jason Lowe 
>>>  To: Hadoop Common ; ma...@cloudera.com
>>>  Sent: Wednesday, January 17, 2018 12:42 PM
>>>  Subject: Re: What's the difference between branch-3 and branch-3.0?
>>>
>>> This was created accidentally when HDFS-11847 was committed.  As such we
>>> should delete the branch-3 branch and port over the commits that went into
>>> branch-3 instead of branch-3.0.  For the former, I'm assuming that requires
>>> an INFRA ticket since I would hope any committer would not have the ability
>>> to destroy a branch-* branch.  Unfortunately even after it's deleted I
>>> suspect we will see it reappear if someone pushes up their old copy of
>>> branch-3 again, so committers will need to be vigilant.
>>>
>>> I'll work on porting the missing changes below from branch-3 over to
>>> branch-3.0.  I'll wait for some more consensus on the branch-3 deletion
>>> before filing the INFRA ticket since deleting a branch shouldn't be done
>>> lightly.
>>>
>>> Jason
>>>
>>>
>>> commit 0802d8afa355d9a0683fdb2e9c4963e8fea8644f
>>> Author: Vinayakumar B 
>>> Date:  Wed Jan 17 14:16:48 2018 +0530
>>>
>>> HDFS-9049. Make Datanode Netty reverse proxy port to be configurable.
>>> Contributed by Vinayakumar B.
>>>
>>> (cherry picked from commit 09efdfe9e13c9695867ce4034aa6ec970c2032f1)
>>>
>>> commit db8345fa9cd124728d935f725525e2626438b4c1
>>> Author: Lei Xu 
>>> Date:  Tue Jan 16 15:15:11 2018 -0800
>>>
>>> HDFS-13004. TestLeaseRecoveryStriped.testLeaseRecovery is failing when
>>> safeLength is 0MB or larger than the test file. (Zsolt Venczel via lei)
>>>
>>> (cherry picked from commit 3bd9ea63df769345a9d02a404cfb61323a4cd7e3)
>>>
>>> commit 82741091a78d7ce62c240ec3e7f81a3a9a3fee36
>>> Author: Inigo Goiri 
>>> Date:  Mon Jan 15 12:21:24 2018 -0800
>>>
>>> HDFS-12919. RBF: Support erasure coding methods in RouterRpcServer.
>>> Contributed by Inigo Goiri.
>>>
>>> commit d3fbcd92fe53192a319683b9ac72179cb28bd978
>>> Author: Yiqun Lin 
>>> Date:  Sat Jan 6 14:31:08 2018 +0800
>>>
>>> HDFS-11848. Enhance dfsadmin listOpenFiles command to list files under
>>> a given path. Contributed by Yiqun Lin.
>>>
>>> commit ee44783515a55ab9fd368660c5cc2c2bc392132e
>>> Author: Manoj Govindassamy 
>>> Date:  Tue Jan 2 14:59:36 2018 -0800
>>>
>>> HDFS-11847. Enhance dfsadmin listOpenFiles command to list files
>>> blocking datanode decommissioning.
>>>
>>>
>>>
>>> On Wed, Jan 17, 2018 at 10:53 AM, Brahma Reddy Battula 
>>> wrote:
>>>
>>> > IMHO, we do not need to have *branch-3* until trunk moves. Shall we
>>> > remove it, as it creates confusion?
>>> >
>>> >
>>> >
>>> >
>>> > Brahma Reddy Battula
>>> >
>>> > On Wed, Jan 17, 2018 at 9:41 PM, Jason Lowe 
>>> > wrote:
>>> >
>>> > > I recently noticed som

[jira] [Created] (HADOOP-15406) hadoop-nfs dependencies for mockito and junit are not test scope

2018-04-23 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-15406:
---

 Summary: hadoop-nfs dependencies for mockito and junit are not 
test scope
 Key: HADOOP-15406
 URL: https://issues.apache.org/jira/browse/HADOOP-15406
 Project: Hadoop Common
  Issue Type: Bug
  Components: nfs
Reporter: Jason Lowe


hadoop-nfs depends on mockito-all and junit for its unit tests, but it does not 
mark those dependencies as being required only for tests (i.e., test scope).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 2.7.6 (RC0)

2018-04-16 Thread Jason Lowe
Thanks for driving the release, Konstatin!

+1 (binding)

- Verified signatures and digests
- Completed a native build from source
- Deployed a single-node cluster
- Ran some sample jobs

Jason

On Mon, Apr 9, 2018 at 6:14 PM, Konstantin Shvachko
 wrote:
> Hi everybody,
>
> This is the next dot release of Apache Hadoop 2.7 line. The previous one 2.7.5
> was released on December 14, 2017.
> Release 2.7.6 includes critical bug fixes and optimizations. See more
> details in Release Note:
> http://home.apache.org/~shv/hadoop-2.7.6-RC0/releasenotes.html
>
> The RC0 is available at: http://home.apache.org/~shv/hadoop-2.7.6-RC0/
>
> Please give it a try and vote on this thread. The vote will run for 5 days
> ending 04/16/2018.
>
> My up to date public key is available from:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Thanks,
> --Konstantin

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-13500) Concurrency issues when using Configuration iterator

2018-04-03 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reopened HADOOP-13500:
-

This is not a duplicate of HADOOP-13556.  That JIRA only changed the 
getPropsWithPrefix method which was not involved in the error reported by this 
JIRA or TEZ-3413.  AFAICT iterating a shared configuration object is still 
unsafe.

> Concurrency issues when using Configuration iterator
> 
>
> Key: HADOOP-13500
> URL: https://issues.apache.org/jira/browse/HADOOP-13500
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>    Reporter: Jason Lowe
>Assignee: Ajay Kumar
>Priority: Major
>
> It is possible to encounter a ConcurrentModificationException while trying to 
> iterate a Configuration object.  The iterator method walks the underlying 
> Properties object without proper synchronization, so another thread 
> simultaneously calling the set method can trigger it.
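
A minimal sketch of the race described above, assuming one writer thread and
one iterating thread (the key names and loop bounds are illustrative):
{code}
import java.util.Map;
import org.apache.hadoop.conf.Configuration;

public class ConfIteratorRace {
  public static void main(String[] args) throws Exception {
    final Configuration conf = new Configuration();

    Thread writer = new Thread(() -> {
      for (int i = 0; i < 100000; i++) {
        conf.set("test.key." + i, "value" + i);  // mutates the shared Properties
      }
    });
    writer.start();

    // Iterating walks the same Properties object without synchronization,
    // so this loop can throw ConcurrentModificationException.
    while (writer.isAlive()) {
      for (Map.Entry<String, String> entry : conf) {
        entry.getKey();
      }
    }
    writer.join();
  }
}
{code}
Because both threads touch the same underlying Properties without
synchronization, the for-each loop can fail with ConcurrentModificationException.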



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



CVE-2017-15713: Apache Hadoop MapReduce job history server vulnerability

2018-01-19 Thread Jason Lowe
CVE-2017-15713: Apache Hadoop MapReduce job history server vulnerability

Severity: Severe

Vendor: The Apache Software Foundation

Versions Affected:
  Hadoop 0.23.0 to 0.23.11
  Hadoop 2.0.0-alpha to 2.8.2
  Hadoop 3.0.0-alpha to 3.0.0-beta1

Users affected: Users running the MapReduce job history server daemon

Impact:  Vulnerability allows a cluster user to expose private files
owned by the user running the MapReduce job history server process.
The malicious user can construct a configuration file containing XML
directives that reference sensitive files on the MapReduce job history
server host.

Mitigation: Users should upgrade to Apache Hadoop 2.7.5, 2.8.3, 2.9.0, or 3.0.0.

Credit: This issue was discovered by Man Yue Mo of lgtm.com

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: What's the difference between branch-3 and branch-3.0?

2018-01-17 Thread Jason Lowe
I filed INFRA-15859 to have branch-3 deleted.

Jason

On Wed, Jan 17, 2018 at 1:23 PM, Eric Payne <
eric.payne1...@yahoo.com.invalid> wrote:

> +1 for removing the branch-3 branch. It should be done soon so more
> confusion can be avoided. Thanks Jason for tracking this down.
> -Eric
>
>
>   From: Jason Lowe 
>  To: Hadoop Common ; ma...@cloudera.com
>  Sent: Wednesday, January 17, 2018 12:42 PM
>  Subject: Re: What's the difference between branch-3 and branch-3.0?
>
> This was created accidentally when HDFS-11847 was committed.  As such we
> should delete the branch-3 branch and port over the commits that went into
> branch-3 instead of branch-3.0.  For the former, I'm assuming that requires
> an INFRA ticket since I would hope any committer would not have the ability
> to destroy a branch-* branch.  Unfortunately even after it's deleted I
> suspect we will see it reappear if someone pushes up their old copy of
> branch-3 again, so committers will need to be vigilant.
>
> I'll work on porting the missing changes below from branch-3 over to
> branch-3.0.  I'll wait for some more consensus on the branch-3 deletion
> before filing the INFRA ticket since deleting a branch shouldn't be done
> lightly.
>
> Jason
>
>
> commit 0802d8afa355d9a0683fdb2e9c4963e8fea8644f
> Author: Vinayakumar B 
> Date:  Wed Jan 17 14:16:48 2018 +0530
>
> HDFS-9049. Make Datanode Netty reverse proxy port to be configurable.
> Contributed by Vinayakumar B.
>
> (cherry picked from commit 09efdfe9e13c9695867ce4034aa6ec970c2032f1)
>
> commit db8345fa9cd124728d935f725525e2626438b4c1
> Author: Lei Xu 
> Date:  Tue Jan 16 15:15:11 2018 -0800
>
> HDFS-13004. TestLeaseRecoveryStriped.testLeaseRecovery is failing when
> safeLength is 0MB or larger than the test file. (Zsolt Venczel via lei)
>
> (cherry picked from commit 3bd9ea63df769345a9d02a404cfb61323a4cd7e3)
>
> commit 82741091a78d7ce62c240ec3e7f81a3a9a3fee36
> Author: Inigo Goiri 
> Date:  Mon Jan 15 12:21:24 2018 -0800
>
> HDFS-12919. RBF: Support erasure coding methods in RouterRpcServer.
> Contributed by Inigo Goiri.
>
> commit d3fbcd92fe53192a319683b9ac72179cb28bd978
> Author: Yiqun Lin 
> Date:  Sat Jan 6 14:31:08 2018 +0800
>
> HDFS-11848. Enhance dfsadmin listOpenFiles command to list files under
> a given path. Contributed by Yiqun Lin.
>
> commit ee44783515a55ab9fd368660c5cc2c2bc392132e
> Author: Manoj Govindassamy 
> Date:  Tue Jan 2 14:59:36 2018 -0800
>
> HDFS-11847. Enhance dfsadmin listOpenFiles command to list files
> blocking datanode decommissioning.
>
>
>
> On Wed, Jan 17, 2018 at 10:53 AM, Brahma Reddy Battula 
> wrote:
>
> > IMHO, we do not need to have *branch-3* until trunk moves. Shall we remove
> > it, as it creates confusion?
> >
> >
> >
> >
> > Brahma Reddy Battula
> >
> > On Wed, Jan 17, 2018 at 9:41 PM, Jason Lowe 
> > wrote:
> >
> > > I recently noticed some committers posting commits to branch-3 and
> > marking
> > > the JIRA as fixed in 3.0.1.  I thought branch-3.0 was tracking 3.0.x
> > > releases, including branch-3.0.1 as of now, so I am confused what
> > branch-3
> > > is for.  The versions in the poms between branch-3 and branch-3.0 both
> > say
> > > they are 3.0.1-SNAPSHOT.
> > >
> > > I recall we discussed _not_ creating branch-3 until it is necessary,
> and
> > it
> > > is only necessary for branch-3 to exist when trunk stops tracking 3.x
> > > releases (i.e.: when trunk moves to 4.0.0-SNAPSHOT).
> > >
> > > Jason
> > >
> >
> >
> >
> > --
> >
> >
> >
> > --Brahma Reddy Battula
> >
>
>
>
>


Re: What's the difference between branch-3 and branch-3.0?

2018-01-17 Thread Jason Lowe
This was created accidentally when HDFS-11847 was committed.  As such we
should delete the branch-3 branch and port over the commits that went into
branch-3 instead of branch-3.0.  For the former, I'm assuming that requires
an INFRA ticket since I would hope any committer would not have the ability
to destroy a branch-* branch.  Unfortunately even after it's deleted I
suspect we will see it reappear if someone pushes up their old copy of
branch-3 again, so committers will need to be vigilant.

I'll work on porting the missing changes below from branch-3 over to
branch-3.0.  I'll wait for some more consensus on the branch-3 deletion
before filing the INFRA ticket since deleting a branch shouldn't be done
lightly.

Jason


commit 0802d8afa355d9a0683fdb2e9c4963e8fea8644f
Author: Vinayakumar B 
Date:   Wed Jan 17 14:16:48 2018 +0530

HDFS-9049. Make Datanode Netty reverse proxy port to be configurable.
Contributed by Vinayakumar B.

(cherry picked from commit 09efdfe9e13c9695867ce4034aa6ec970c2032f1)

commit db8345fa9cd124728d935f725525e2626438b4c1
Author: Lei Xu 
Date:   Tue Jan 16 15:15:11 2018 -0800

HDFS-13004. TestLeaseRecoveryStriped.testLeaseRecovery is failing when
safeLength is 0MB or larger than the test file. (Zsolt Venczel via lei)

(cherry picked from commit 3bd9ea63df769345a9d02a404cfb61323a4cd7e3)

commit 82741091a78d7ce62c240ec3e7f81a3a9a3fee36
Author: Inigo Goiri 
Date:   Mon Jan 15 12:21:24 2018 -0800

HDFS-12919. RBF: Support erasure coding methods in RouterRpcServer.
Contributed by Inigo Goiri.

commit d3fbcd92fe53192a319683b9ac72179cb28bd978
Author: Yiqun Lin 
Date:   Sat Jan 6 14:31:08 2018 +0800

HDFS-11848. Enhance dfsadmin listOpenFiles command to list files under
a given path. Contributed by Yiqun Lin.

commit ee44783515a55ab9fd368660c5cc2c2bc392132e
Author: Manoj Govindassamy 
Date:   Tue Jan 2 14:59:36 2018 -0800

HDFS-11847. Enhance dfsadmin listOpenFiles command to list files
blocking datanode decommissioning.



On Wed, Jan 17, 2018 at 10:53 AM, Brahma Reddy Battula 
wrote:

> IMHO, we do not need to have *branch-3* until trunk moves. Shall we remove
> it, as it creates confusion?
>
>
>
>
> Brahma Reddy Battula
>
> On Wed, Jan 17, 2018 at 9:41 PM, Jason Lowe 
> wrote:
>
> > I recently noticed some committers posting commits to branch-3 and
> marking
> > the JIRA as fixed in 3.0.1.  I thought branch-3.0 was tracking 3.0.x
> > releases, including branch-3.0.1 as of now, so I am confused what
> branch-3
> > is for.  The versions in the poms between branch-3 and branch-3.0 both
> say
> > they are 3.0.1-SNAPSHOT.
> >
> > I recall we discussed _not_ creating branch-3 until it is necessary, and
> it
> > is only necessary for branch-3 to exist when trunk stops tracking 3.x
> > releases (i.e.: when trunk moves to 4.0.0-SNAPSHOT).
> >
> > Jason
> >
>
>
>
> --
>
>
>
> --Brahma Reddy Battula
>


What's the difference between branch-3 and branch-3.0?

2018-01-17 Thread Jason Lowe
I recently noticed some committers posting commits to branch-3 and marking
the JIRA as fixed in 3.0.1.  I thought branch-3.0 was tracking 3.0.x
releases, including branch-3.0.1 as of now, so I am confused what branch-3
is for.  The versions in the poms between branch-3 and branch-3.0 both say
they are 3.0.1-SNAPSHOT.

I recall we discussed _not_ creating branch-3 until it is necessary, and it
is only necessary for branch-3 to exist when trunk stops tracking 3.x
releases (i.e.: when trunk moves to 4.0.0-SNAPSHOT).

Jason


[jira] [Created] (HADOOP-15170) Add symlink support to FileUtil#unTarUsingJava

2018-01-12 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-15170:
---

 Summary: Add symlink support to FileUtil#unTarUsingJava 
 Key: HADOOP-15170
 URL: https://issues.apache.org/jira/browse/HADOOP-15170
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Jason Lowe
Priority: Minor


Now that JDK7 or later is required, we can leverage 
java.nio.file.Files.createSymbolicLink in FileUtil.unTarUsingJava to support 
archives that contain symbolic links.
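
A rough sketch of how the JDK call could be used for a symlink tar entry (the
helper name and the surrounding handling are illustrative assumptions, not the
actual FileUtil change):
{code}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class SymlinkExtractSketch {
  /**
   * Recreate a symlink tar entry on disk, roughly what a Java-based untar
   * could do for entries whose link name is non-empty.
   */
  static void createSymlinkEntry(File outputDir, String entryName,
      String linkTarget) throws IOException {
    File linkFile = new File(outputDir, entryName);
    if (linkFile.getParentFile() != null) {
      linkFile.getParentFile().mkdirs();     // ensure the parent directory exists
    }
    Files.createSymbolicLink(Paths.get(linkFile.getPath()),
        Paths.get(linkTarget));              // JDK7+ symlink creation
  }
}
{code}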




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: Apache Hadoop 3.0.1 Release plan

2018-01-09 Thread Jason Lowe
Is it necessary to cut the branch so far ahead of the release?  branch-3.0
is already a maintenance line for 3.0.x releases.  Is there a known
feature/improvement planned to go into branch-3.0 that is not desirable for
the 3.0.1 release?

I have found in the past that branching so early leads to many useful fixes
being unnecessarily postponed to future releases because committers forget
to cherry-pick to the new, relatively long-lived patch branch.  This becomes
especially true if blockers end up dragging out the ultimate release date,
which has historically been quite common.  My preference would be to cut
this branch as close to the RC as possible.

Jason


On Tue, Jan 9, 2018 at 1:17 PM, Lei Xu  wrote:

> Hi, All
>
> We have released Apache Hadoop 3.0.0 in December [1]. To further
> improve the quality of release, we plan to cut branch-3.0.1 branch
> tomorrow for the preparation of Apache Hadoop 3.0.1 release. The focus
> of 3.0.1 will be fixing blockers (3), critical bugs (1) and bug fixes
> [2].  No new features and improvement should be included.
>
> We plan to cut branch-3.0.1 tomorrow (Jan 10th) and vote for RC on Feb
> 1st, targeting for Feb 9th release.
>
> Please feel free to share your insights.
>
> [1] https://www.mail-archive.com/general@hadoop.apache.org/msg07757.html
> [2] https://issues.apache.org/jira/issues/?filter=12342842
>
> Best,
> --
> Lei (Eddy) Xu
> Software Engineer, Cloudera
>
> -
> To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
>
>


Re: [VOTE] Release Apache Hadoop 2.7.5 (RC1)

2017-12-12 Thread Jason Lowe
Thanks for driving the release, Konstantin!

+1 (binding)

- Verified signatures and digests
- Successfully performed a native build from source
- Deployed a single-node cluster
- Ran some sample jobs and checked the logs

Jason


On Thu, Dec 7, 2017 at 9:22 PM, Konstantin Shvachko 
wrote:

> Hi everybody,
>
> I updated CHANGES.txt and fixed documentation links.
> Also committed  MAPREDUCE-6165, which fixes a consistently failing test.
>
> This is RC1 for the next dot release of Apache Hadoop 2.7 line. The
> previous one 2.7.4 was release August 4, 2017.
> Release 2.7.5 includes critical bug fixes and optimizations. See more
> details in Release Note:
> http://home.apache.org/~shv/hadoop-2.7.5-RC1/releasenotes.html
>
> The RC0 is available at: http://home.apache.org/~shv/hadoop-2.7.5-RC1/
>
> Please give it a try and vote on this thread. The vote will run for 5 days
> ending 12/13/2017.
>
> My up to date public key is available from:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Thanks,
> --Konstantin
>


Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)

2017-12-12 Thread Jason Lowe
Thanks for driving this release, Junping!

+1 (binding)

- Verified signatures and digests
- Successfully performed native build from source
- Deployed a single-node cluster
- Ran some test jobs and examined the logs

Jason

On Tue, Dec 5, 2017 at 3:58 AM, Junping Du  wrote:

> Hi all,
>  I've created the first release candidate (RC0) for Apache Hadoop
> 2.8.3. This is our next maint release to follow up 2.8.2. It includes 79
> important fixes and improvements.
>
>   The RC artifacts are available at: http://home.apache.org/~junping_du/hadoop-2.8.3-RC0
>
>   The RC tag in git is: release-2.8.3-RC0
>
>   The maven artifacts are available via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1072
>
>   Please try the release and vote; the vote will run for the usual 5
> working days, ending on 12/12/2017 PST time.
>
> Thanks,
>
> Junping
>


Re: Same jenkins build running on 2 patches.

2017-12-01 Thread Jason Lowe
Is it possible to track the patch by JIRA attachment ID rather than assume
the most recent attachment is the right one?  I thought the admin precommit
build was kicking off the project-specific precommit build with an
attachment ID argument so the project precommit can be consistent with the
admin precommit build on what triggered the precommit process.  If the
patch is tracked by attachment ID then I think the build would remain
consistent even when users attach new patches in the middle of the
precommit process.

Jason


On Fri, Dec 1, 2017 at 2:43 PM, Allen Wittenauer 
wrote:

>
> > On Dec 1, 2017, at 12:18 PM, Rushabh Shah  wrote:
> > Can someone explain me what happened ?
>
> Yetus downloaded the patch to make sure it applied before
> bothering to do anything else to make sure it wasn’t going to burn cycles
> on the build boxes for no reason.  Docker mode was active so it then went
> to re-exec itself under Docker.  But it had to build the Docker image
> first. This can take anywhere from 9 minutes to 20 minutes, depending
> primarily on which branch’s Dockerfile was in use.  While this was going
> on, another two patches were uploaded.  Docker build finishes. When Yetus
> re-exec’ed itself under Docker, it re-grabs the patch (since the
> world—including Yetus itself!--may be different now that it is Docker).  In
> this case, it grabbed the last of the newly uploaded patches and attempted
> to churn its way through it.
>
> a) Uploading two patches at once has never ever worked and will
> likely never be made to work. (There are lots of reasons for this.)
>
> b) Before uploading a new patch, wait for the feedback or at least
> make sure the Jenkins job is actually past “Determining needed tests”
> before uploading a new one.  Just be aware that test output is going to get
> very hard to follow with all of the cross posting.
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


[jira] [Created] (HADOOP-15085) Output streams closed with IOUtils suppressing write errors

2017-12-01 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-15085:
---

 Summary: Output streams closed with IOUtils suppressing write 
errors
 Key: HADOOP-15085
 URL: https://issues.apache.org/jira/browse/HADOOP-15085
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jason Lowe


There are a few places in hadoop-common that are closing an output stream with 
IOUtils.cleanupWithLogger like this:
{code}
  try {
...write to outStream...
  } finally {
IOUtils.cleanupWithLogger(LOG, outStream);
  }
{code}
This suppresses any IOException that occurs during the close() method which 
could lead to partial/corrupted output without throwing a corresponding 
exception.  The code should either use try-with-resources or explicitly close 
the stream within the try block so the exception thrown during close() is 
properly propagated as exceptions during write operations are.
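
A minimal sketch of the try-with-resources form (the fs/outputPath stream
creation is an illustrative placeholder, not taken from a specific call site):
{code}
  // fs and outputPath are illustrative placeholders
  try (OutputStream outStream = fs.create(outputPath)) {
    ...write to outStream...
  }   // close() runs here; any IOException it throws is propagated
{code}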



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15078) dtutil ignores nonexistent files

2017-11-30 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-15078:
---

 Summary: dtutil ignores nonexistent files
 Key: HADOOP-15078
 URL: https://issues.apache.org/jira/browse/HADOOP-15078
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0-alpha1
Reporter: Jason Lowe


While investigating issues in HADOOP-15059 I ran the dtutil append command like 
this:
{noformat}
$ hadoop dtutil append -format protobuf foo foo.pb
{noformat}

expecting the append command to translate the existing tokens in file {{foo}} 
into the currently non-existent file {{foo.pb}}.  Instead the command executed 
without error and overwrote {{foo}} rather than creating {{foo.pb}} as I 
expected.  I now understand how append works, but it was very surprising to 
have dtutil _silently ignore_ filenames requested on the command-line.  At best 
it is a bit surprising to the user.  At worst it clobbers data the user did not 
expect to be overwritten.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 2.9.0 (RC3)

2017-11-17 Thread Jason Lowe
Thanks for putting this release together!

+1 (binding)

- Verified signatures and digests
- Successfully built from source including native
- Deployed to single-node cluster and ran some test jobs

Jason


On Mon, Nov 13, 2017 at 6:10 PM, Arun Suresh  wrote:

> Hi Folks,
>
> Apache Hadoop 2.9.0 is the first release of Hadoop 2.9 line and will be the
> starting release for Apache Hadoop 2.9.x line - it includes 30 New Features
> with 500+ subtasks, 407 Improvements, 790 Bug fixes new fixed issues since
> 2.8.2.
>
> More information about the 2.9.0 release plan can be found here:
> https://cwiki.apache.org/confluence/display/HADOOP/Roadmap#Roadmap-Version2.9
>
> New RC is available at: https://home.apache.org/~asuresh/hadoop-2.9.0-RC3/
>
> The RC tag in git is: release-2.9.0-RC3, and the latest commit id is:
> 756ebc8394e473ac25feac05fa493f6d612e6c50.
>
> The maven artifacts are available via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1068/
>
> We are carrying over the votes from the previous RC given that the delta is
> the license fix.
>
> Given the above - we are also going to stick with the original deadline for
> the vote : ending on Friday 17th November 2017 2pm PT time.
>
> Thanks,
> -Arun/Subru
>


Re: [VOTE] Release Apache Hadoop 2.8.2 (RC1)

2017-10-23 Thread Jason Lowe
My apologies, false alarm on the CHANGES.md and RELEASENOTES.md.  I was in
the process of reviewing the release and was interrupted, and when I
resumed I thought I had already downloaded the CHANGES and RELEASENOTES,
but in fact they were the old versions from a prior review of 2.8.0.  I
reviewed both of them for 2.8.2 (for real this time!) and they look
correct.  Again my apologies for the confusion.

Jason

On Mon, Oct 23, 2017 at 3:26 PM, Jason Lowe  wrote:

> +1 (binding)
>
> - Verified signatures and digests
> - Performed a native build from source
> - Deployed to a single-node cluster
> - Ran some sample jobs
>
> The CHANGES.md and RELEASENOTES.md both refer to release 2.8.0 instead of
> 2.8.2, and I do not see the list of JIRAs in CHANGES.md that have been
> committed since 2.8.1.  Since we're voting on the source bits rather than
> the change log I kept my vote as a +1 as I do see the 2.8.2 changes in the
> source code.
>
> Jason
>
>
> On Thu, Oct 19, 2017 at 7:42 PM, Junping Du  wrote:
>
>> Hi folks,
>>  I've created our new release candidate (RC1) for Apache Hadoop 2.8.2.
>>
>>  Apache Hadoop 2.8.2 is the first stable release of Hadoop 2.8 line
>> and will be the latest stable/production release for Apache Hadoop - it
>> includes 315 new fixed issues since 2.8.1 and 69 fixes are marked as
>> blocker/critical issues.
>>
>>   More information about the 2.8.2 release plan can be found here:
>> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release
>>
>>   New RC is available at: http://home.apache.org/~junping_du/hadoop-2.8.2-RC1
>>
>>   The RC tag in git is: release-2.8.2-RC1, and the latest commit id
>> is: 66c47f2a01ad9637879e95f80c41f798373828fb
>>
>>   The maven artifacts are available via repository.apache.org at:
>> https://repository.apache.org/content/repositories/orgapachehadoop-1064
>>
>>   Please try the release and vote; the vote will run for the usual 5
>> days, ending on 10/24/2017 6pm PST time.
>>
>> Thanks,
>>
>> Junping
>>
>>
>


Re: [VOTE] Release Apache Hadoop 2.8.2 (RC1)

2017-10-23 Thread Jason Lowe
+1 (binding)

- Verified signatures and digests
- Performed a native build from source
- Deployed to a single-node cluster
- Ran some sample jobs

The CHANGES.md and RELEASENOTES.md both refer to release 2.8.0 instead of
2.8.2, and I do not see the list of JIRAs in CHANGES.md that have been
committed since 2.8.1.  Since we're voting on the source bits rather than
the change log I kept my vote as a +1 as I do see the 2.8.2 changes in the
source code.

Jason


On Thu, Oct 19, 2017 at 7:42 PM, Junping Du  wrote:

> Hi folks,
>  I've created our new release candidate (RC1) for Apache Hadoop 2.8.2.
>
>  Apache Hadoop 2.8.2 is the first stable release of Hadoop 2.8 line
> and will be the latest stable/production release for Apache Hadoop - it
> includes 315 new fixed issues since 2.8.1 and 69 fixes are marked as
> blocker/critical issues.
>
>   More information about the 2.8.2 release plan can be found here:
> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release
>
>   New RC is available at: http://home.apache.org/~junping_du/hadoop-2.8.2-RC1
>
>   The RC tag in git is: release-2.8.2-RC1, and the latest commit id
> is: 66c47f2a01ad9637879e95f80c41f798373828fb
>
>   The maven artifacts are available via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1064
>
>   Please try the release and vote; the vote will run for the usual 5
> days, ending on 10/24/2017 6pm PST time.
>
> Thanks,
>
> Junping
>
>


[jira] [Created] (HADOOP-14902) LoadGenerator#genFile write close timing is incorrectly calculated

2017-09-22 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-14902:
---

 Summary: LoadGenerator#genFile write close timing is incorrectly 
calculated
 Key: HADOOP-14902
 URL: https://issues.apache.org/jira/browse/HADOOP-14902
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.4.0
Reporter: Jason Lowe


LoadGenerator#genFile's write close timing code looks like the following:
{code}
startTime = Time.now();
executionTime[WRITE_CLOSE] += (Time.now() - startTime);
{code}

That code will record a zero (or near-zero) write-close time because it does not 
actually close the file between the two timestamp lookups.
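
A sketch of what the timing presumably intends to measure, assuming the output
stream variable in genFile is named {{out}}:
{code}
startTime = Time.now();
out.close();                                   // time the actual close call
executionTime[WRITE_CLOSE] += (Time.now() - startTime);
{code}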




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14843) FsPermission symbolic parsing failed to detect invalid argument

2017-09-06 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-14843:
---

 Summary: FsPermission symbolic parsing failed to detect invalid 
argument
 Key: HADOOP-14843
 URL: https://issues.apache.org/jira/browse/HADOOP-14843
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.8.1, 2.7.4
Reporter: Jason Lowe


A user misunderstood the syntax format for the FsPermission symbolic 
constructor and passed the argument "-rwr" instead of "u=rw,g=r".  In 2.7 and 
earlier this was silently misinterpreted as mode 0777 and in 2.8 it oddly 
became mode .  In either case FsPermission should have flagged "-rwr" as an 
invalid argument rather than silently misinterpreting it.
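
For illustration, a hedged sketch of the intended usage versus the invalid
argument (the class name is hypothetical; it assumes the string-based
FsPermission constructor the report refers to):
{code}
import org.apache.hadoop.fs.permission.FsPermission;

public class FsPermissionSymbolicExample {
  public static void main(String[] args) {
    // Intended symbolic spec: user read/write, group read (mode 640).
    FsPermission ok = new FsPermission("u=rw,g=r");
    System.out.println("parsed as " + ok);

    // The misunderstood argument from the report; it should be rejected
    // rather than silently misinterpreted.
    try {
      FsPermission bad = new FsPermission("-rwr");
      System.out.println("parsed as " + bad);   // pre-fix behavior
    } catch (IllegalArgumentException expected) {
      System.out.println("rejected: " + expected.getMessage());
    }
  }
}
{code}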



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Merge feature branch YARN-5355 (Timeline Service v2) to trunk

2017-08-29 Thread Jason Lowe
+1 (binding)

I participated in the review for the reader authorization and verified that
ATSv2 has no significant impact when disabled.  Looking forward to seeing
the next increment in functionality in a release.  A big thank you to
everyone involved in this effort!

Jason


On Tue, Aug 22, 2017 at 1:32 AM, Vrushali Channapattan <
vrushalic2...@gmail.com> wrote:

> Hi folks,
>
> Per earlier discussion [1], I'd like to start a formal vote to merge
> feature branch YARN-5355 [2] (Timeline Service v.2) to trunk. The vote will
> run for 7 days, and will end August 29 11:00 PM PDT.
>
> We have previously completed one merge onto trunk [3] and Timeline Service
> v2 has been part of Hadoop release 3.0.0-alpha1.
>
> Since then, we have been working on extending the capabilities of Timeline
> Service v2 in a feature branch [2] for a while, and we are reasonably
> confident that the state of the feature meets the criteria to be merged
> onto trunk and we'd love folks to get their hands on it in a test capacity
> and provide valuable feedback so that we can make it production-ready.
>
> In a nutshell, Timeline Service v.2 delivers significant scalability and
> usability improvements based on a new architecture. What we would like to
> merge to trunk is termed "alpha 2" (milestone 2). The feature has a
> complete end-to-end read/write flow with security and read level
> authorization via whitelists. You should be able to start setting it up and
> testing it.
>
> At a high level, the following are the key features that have been
> implemented since alpha1:
> - Security via Kerberos Authentication and delegation tokens
> - Read side simple authorization via whitelist
> - Client configurable entity sort ordering
> - Richer REST APIs for apps, app attempts, containers, fetching metrics by
> timerange, pagination, sub-app entities
> - Support for storing sub-application entities (entities that exist outside
> the scope of an application)
> - Configurable TTLs (time-to-live) for tables, configurable table prefixes,
> configurable hbase cluster
> - Flow level aggregations done as dynamic (table level) coprocessors
> - Uses latest stable HBase release 1.2.6
>
> There are a total of 82 subtasks that were completed as part of this
> effort.
>
> We paid close attention to ensure that once disabled Timeline Service v.2
> does not impact existing functionality when disabled (by default).
>
> Special thanks to a team of folks who worked hard and contributed towards
> this effort with patches, reviews and guidance: Rohith Sharma K S, Varun
> Saxena, Haibo Chen, Sangjin Lee, Li Lu, Vinod Kumar Vavilapalli, Joep
> Rottinghuis, Jason Lowe, Jian He, Robert Kanter, Micheal Stack.
>
> Regards,
> Vrushali
>
> [1] http://www.mail-archive.com/yarn-dev@hadoop.apache.org/msg27383.html
> [2] https://issues.apache.org/jira/browse/YARN-5355
> [3] https://issues.apache.org/jira/browse/YARN-2928
> [4] https://github.com/apache/hadoop/commits/YARN-5355
>


Re: [DISCUSS] Branches and versions for Hadoop 3

2017-08-28 Thread Jason Lowe
Allen Wittenauer wrote:


> > On Aug 25, 2017, at 1:23 PM, Jason Lowe  wrote:
> >
> > Allen Wittenauer wrote:
> >
> > > Doesn't this place an undue burden on the contributor with the first
> incompatible patch to prove worthiness?  What happens if it is decided that
> it's not good enough?
> >
> > It is a burden for that first, "this can't go anywhere else but 4.x"
> change, but arguably that should not be a change done lightly anyway.  (Or
> any other backwards-incompatible change for that matter.)  If it's worth
> committing then I think it's perfectly reasonable to send out the dev
> announce that there's reason for trunk to diverge from 3.x, cut branch-3,
> and move on.  This is no different than Andrew's recent announcement that
> there's now a need for separating trunk and the 3.0 line based on what's
> about to go in.
>
> So, by this definition as soon as a patch comes in to remove
> deprecated bits there will be no issue with a branch-3 getting created,
> correct?
>

I think this gets back to the "if it's worth committing" part.  I feel the
community should collectively decide when it's worth taking the hit to
maintain the separate code line.  IMHO removing deprecated bits alone is
not reason enough to diverge the code base and the additional maintenance
that comes along with the extra code line.  A new feature is traditionally
the reason to diverge because that's something users would actually care
enough about to take the compatibility hit when moving to the version that
has it.  That also helps drive a timely release of the new code line
because users want the feature that went into it.


> >  Otherwise if past trunk behavior is any indication, it ends up mostly
> enabling people to commit to just trunk, forgetting that the thing they are
> committing is perfectly valid for branch-3.
>
> I'm not sure there was any "forgetting" involved.  We likely
> wouldn't be talking about 3.x at all if it wasn't for the code diverging
> enough.
>

I don't think it was the myriad of small patches that went only into trunk
over the last 6 years that drove this.  Instead I think it was simply that
an "important enough" feature went in, like erasure coding, that gathered
momentum behind this release.  Trunk sat ignored for basically 5+ years,
and plenty of patches went into just trunk that should have gone into at
least branch-2 as well.  I don't think we as a community did the
contributors any favors by putting their changes into a code line that
didn't see a release for a very long time.  Yes 3.x could have released
sooner to help solve that issue, but given the complete lack of excitement
around 3.x until just recently is there any reason this won't happen again
with 4.x?  Seems to me 4.x will need to have something "interesting enough"
to drive people to release it relative to 3.x, which to me indicates we
shouldn't commit things only to there until we have an interest to do so.

> > Given the number of committers that openly ignore discussions like
> this, who is going to verify that incompatible changes don't get in?
> >
> > The same entities who are verifying other bugs don't get in, i.e.: the
> committers and the Hadoop QA bot running the tests.
> >  Yes, I know that means it's inevitable that compatibility breakages
> will happen, and we can and should improve the automation around
> compatibility testing when possible.
>
> The automation only goes so far.  At least while investigating
> Yetus bugs, I've seen more than enough blatantly and purposefully ignored
> errors and warnings that I'm not convinced it will be effective. ("That
> javadoc compile failure didn't come from my patch!"  Um, yes, yes it did.)
> PR for features has greatly trumped code correctness for a few years now.
>

I totally agree here.  We can and should do better about this outside of
automation.  I brought up automation since I see it as a useful part of the
total solution along with better developer education, oversight, etc.  I'm
thinking specifically about tools that can report on public API signature
changes, but that's just one aspect of compatibility.  Semantic behavior is
not something a static analysis tool can automatically detect, and the only
way to automate some of that is something like end-to-end compatibility
testing.  Bigtop may cover some of this with testing of older versions of
downstream projects like HBase, Hive, Oozie, etc., and we could set up some
tests that stand up two different Hadoop clusters and run tests that verify
interop between them.  But the tests will never be exhaustive and we will
still need educated committers.

Re: [DISCUSS] Branches and versions for Hadoop 3

2017-08-25 Thread Jason Lowe
Allen Wittenauer wrote:


> Doesn't this place an undue burden on the contributor with the first
> incompatible patch to prove worthiness?  What happens if it is decided that
> it's not good enough?


It is a burden for that first, "this can't go anywhere else but 4.x"
change, but arguably that should not be a change done lightly anyway.  (Or
any other backwards-incompatible change for that matter.)  If it's worth
committing then I think it's perfectly reasonable to send out the dev
announce that there's reason for trunk to diverge from 3.x, cut branch-3,
and move on.  This is no different than Andrew's recent announcement that
there's now a need for separating trunk and the 3.0 line based on what's
about to go in.

I do not think it makes sense to pay for the maintenance overhead of two
nearly-identical lines with no backwards-incompatible changes between them
until we have the need.  Otherwise if past trunk behavior is any
indication, it ends up mostly enabling people to commit to just trunk,
forgetting that the thing they are committing is perfectly valid for
branch-3.  If we can agree that trunk and branch-3 should be equivalent
until an incompatible change goes into trunk, why pay for the commit
overhead and potential for accidentally missed commits until it is really
necessary?

> How many will it take before the dam will break?  Or is there a timeline
> going to be given before trunk gets set to 4.x?


I think the threshold count for the dam should be 1.  As soon as we have a
JIRA that needs to be committed to move the project forward and we cannot
ship it in a 3.x release then we create branch-3 and move trunk to 4.x.
As for a timeline going to 4.x, again I don't see it so much as a "baking
period" as a "when we need it" criteria.  If we need it in a week then we
should cut it in a week.  Or a year then a year.  It all depends upon when
that 4.x-only change is ready to go in.

> Given the number of committers that openly ignore discussions like this,
> who is going to verify that incompatible changes don't get in?
>

The same entities who are verifying other bugs don't get in, i.e.: the
committers and the Hadoop QA bot running the tests.  Yes, I know that means
it's inevitable that compatibility breakages will happen, and we can and
should improve the automation around compatibility testing when possible.
But I don't think there's a magic bullet for preventing all compatibility
bugs from being introduced, just like there isn't one for preventing
general bugs.  Does having a trunk branch separate but essentially similar
to branch-3 make this any better?

> Longer term:  what is the PMC doing to make sure we start doing major
> releases in a timely fashion again?  In other words, is this really an
> issue if we shoot for another major in (throws dart) 2 years?
>

If we're trying to do semantic versioning then we shouldn't have a regular
cadence for major releases unless we have a regular cadence of changes that
break compatibility.  I'd hope that's not something we would strive
towards.  I do agree that we should try to be better about shipping
releases, major or minor, in a more timely manner, but I don't agree that
we should cut 4.0 simply based on a duration since the last major release.
The release contents and community's desire for those contents should
dictate the release numbering and schedule, respectively.

Jason


On Fri, Aug 25, 2017 at 2:16 PM, Allen Wittenauer 
wrote:

>
> > On Aug 25, 2017, at 10:36 AM, Andrew Wang 
> wrote:
>
> > Until we need to make incompatible changes, there's no need for
> > a Hadoop 4.0 version.
>
> Some questions:
>
> Doesn't this place an undue burden on the contributor with the
> first incompatible patch to prove worthiness?  What happens if it is
> decided that it's not good enough?
>
> How many will it take before the dam will break?  Or is there a
> timeline going to be given before trunk gets set to 4.x?
>
> Given the number of committers that openly ignore discussions like
> this, who is going to verify that incompatible changes don't get in?
>
> Longer term:  what is the PMC doing to make sure we start doing
> major releases in a timely fashion again?  In other words, is this really
> an issue if we shoot for another major in (throws dart) 2 years?
> -
> To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
>
>


Re: Branch merges and 3.0.0-beta1 scope

2017-08-25 Thread Jason Lowe
Andrew Wang wrote:


> This means I'll cut branch-3 and
> branch-3.0, and move trunk to 4.0.0 before these VOTEs end. This will open
> up development for Hadoop 3.1.0 and 4.0.0.


I can see a need for branch-3.0, but please do not create branch-3.  Doing
so will relegate trunk back to the "patch purgatory" branch, a place where
patches won't see a release for years.  Unless something is imminently
going in that will break backwards compatibility and warrant a new 4.x
release, I don't see the need to distinguish trunk from the 3.x line.
Leaving trunk as the 3.x line means fewer branches to commit patches through
and more testing of every patch since trunk would remain an active area for
testing and releasing.  If we separate trunk and branch-3 then it's almost
certain only-trunk patches will start to accumulate and never get any
"real" testing until someone eventually decides it's time to go to Hadoop
4.x.  Looking back at trunk-as-3.x for an example, patches committed there
in the early days after branch-2 was cut didn't see a release for almost 6
years.

My apologies if I've missed a feature that is just going to miss the 3.0
release and will break compatibility when it goes in.  If so then we need
to cut branch-3, but if not then here's my plea to hold off until we do
need it.

Jason


On Thu, Aug 24, 2017 at 3:33 PM, Andrew Wang 
wrote:

> Glad to see the discussion continued in my absence :)
>
> From a release management perspective, it's *extremely* reasonable to block
> the inclusion of new features a month from the planned release date. A
> typical software development lifecycle includes weeks of feature freeze and
> weeks of code freeze. It is no knock on any developer or any feature to say
> that we should not include something in 3.0.0.
>
> I've been very open and clear about the goals, schedule, and scope of 3.0.0
> over the last year plus. The point of the extended alpha process was to get
> all our features in during alpha, and the alpha merge window has been open
> for a year. I'm unmoved by arguments about how long a feature has been
> worked on. None of these were part of the original 3.0.0 scope, and our
> users have been waiting even longer for big-ticket 3.0 items like JDK8 and
> HDFS EC that were part of the discussed scope.
>
> I see that two VOTEs have gone out since I was out. I still plan to follow
> the proposal in my original email. This means I'll cut branch-3 and
> branch-3.0, and move trunk to 4.0.0 before these VOTEs end. This will open
> up development for Hadoop 3.1.0 and 4.0.0.
>
> I'm reaching out to the lead contributor of each of these features
> individually to discuss. We need to close on this quickly, and email is too
> low bandwidth at this stage.
>
> Best,
> Andrew
>


Re: Question about how to best contribute

2017-08-09 Thread Jason Lowe
+1 for Steve's and Chris's sentiments.  Mass reformatting of existing code can 
make maintaining anything released prior to the makeover very difficult.  
Almost all of Apache Hadoop's users are not on trunk or branch-2, and I'm not 
sure we want large refactoring patches going into stability lines like 
branch-2.8, branch-2.7, and branch-2.6 where most of the users are.  We should 
definitely consider the maintenance costs of refactoring decisions.

Jason 

On Wednesday, August 9, 2017 4:55 AM, Steve Loughran 
 wrote:
 

 
> On 8 Aug 2017, at 21:33, Chris Douglas  wrote:
> 
> Lars-
> 
> Welcome!
> 
> As a mild refinement of enthusiasm for this proposal: when you
> approach a "cleanup", please consider the cost to tracing the lineage
> of changes in the codebase. Working on a project as large and
> long-running as Hadoop, we often need to trace what motivated a
> particular change using only the commit log and JIRA. Sifting through
> cosmetic changes that obscure the reasoning behind a module not worth
> the aesthetic benefits of consistently formatted code. As a strawman:
> hitting 100% checkstyle compliance would not improve our users'
> experience, so please use your judgement.
> 
> As you point out, we're not going to maintain perfect discipline going
> forward, either. Nitpicking our contributors beyond what is necessary
> to keep the code legible discourages them from continuing as
> contributors. As a general heuristic: the stricter the rule, the more
> automation is required to enforce it. This prevents everyone from
> burning out on minutiae.
> 
> All that said, if you propose a refactoring that makes it easier to
> maintain code that's developed more vestigial parts than functional
> ones (and we have more than a few of those), that is hugely valuable.
> -C


That reminds me of a few more issues

* Major cleanup patches invariably break pending patches from others (which we 
should review) and also make cherry-picking harder, which is why we tend to 
avoid things like a "let's fix the import order" patch for the sake of it.

* We can't treat things tagged @Private as stuff we can break on a whim. I know 
it'd be nice, but often they get picked up because they're the only way to do 
things...even the example YARN app does this. So changes there always need to 
go through a scan of the downstream apps. A few of us have IntelliJ set up to 
include all the main projects so we can find if/where a class or method gets 
used...and use that to temper our enthusiasm. Chris himself had to deal with 
this last week with the proposed removal of FileStatus.isDir in HADOOP-14726. 
We never want to break downstream code.


FWIW, I'd use any new style guide to manage future contribs, not reapply it to all 
existing code, except during other work. Even then, with caution.


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org


   

Re: [VOTE] Release Apache Hadoop 2.7.4 (RC0)

2017-08-02 Thread Jason Lowe
Thanks for driving the 2.7.4 release!
+1 (binding)
- Verified signatures and digests
- Successfully built from source including native
- Deployed to a single-node cluster and ran sample MapReduce jobs
Jason 

On Saturday, July 29, 2017 6:29 PM, Konstantin Shvachko 
 wrote:
 

 Hi everybody,

Here is the next release of Apache Hadoop 2.7 line. The previous stable
release 2.7.3 was available since 25 August, 2016.
Release 2.7.4 includes 264 issues fixed after release 2.7.3, which are
critical bug fixes and major optimizations. See more details in Release
Note:
http://home.apache.org/~shv/hadoop-2.7.4-RC0/releasenotes.html

The RC0 is available at: http://home.apache.org/~shv/hadoop-2.7.4-RC0/

Please give it a try and vote on this thread. The vote will run for 5 days
ending 08/04/2017.

Please note that my up-to-date public key is available from:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
Please don't forget to refresh the page if you've been there recently.
There are other places on Apache sites which may contain my outdated key.

Thanks,
--Konstantin


   

[jira] [Created] (HADOOP-14713) Audit for durations that should be measured via Time.monotonicNow

2017-08-01 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-14713:
---

 Summary: Audit for durations that should be measured via 
Time.monotonicNow
 Key: HADOOP-14713
 URL: https://issues.apache.org/jira/browse/HADOOP-14713
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jason Lowe


Whenever we are measuring a time delta or duration in the same process, the 
timestamps probably should be using Time.monotonicNow rather than Time.now or 
System.currentTimeMillis.  The latter two are directly reading the system clock 
which can move faster or slower than actual time if the system is undergoing a 
time adjustment (e.g. adjtime or an admin setting a new system time).

We should go through the code base and identify places where the code is using 
the system clock but really should be using monotonic time.
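
As an editorial illustration of the distinction (not part of the original report), a minimal sketch using only the Time methods named above; the surrounding class and the sleep are stand-ins for real work:

{noformat}
import org.apache.hadoop.util.Time;

public class DurationSketch {
  public static void main(String[] args) throws InterruptedException {
    // Wall-clock timestamps can jump if the system clock is adjusted,
    // so deltas computed from Time.now() may be wrong.
    long wallStart = Time.now();
    // Monotonic timestamps only move forward, so deltas stay correct.
    long monoStart = Time.monotonicNow();

    Thread.sleep(1000L);  // stand-in for the work being measured

    System.out.println("wall-clock delta (ms): " + (Time.now() - wallStart));
    System.out.println("monotonic delta (ms):  " + (Time.monotonicNow() - monoStart));
  }
}
{noformat}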



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: Apache Hadoop 2.8.2 Release Plan

2017-07-21 Thread Jason Lowe
+1 to base the 2.8.2 release off of the more recent activity on branch-2.8.  
Because branch-2.8.2 was cut so long ago it is missing a lot of fixes that are 
in branch-2.8.  There also are a lot of JIRAs that claim they are fixed in 
2.8.2 but are not in branch-2.8.2.  Having the 2.8.2 release be based on recent 
activity in branch-2.8 would solve both of these issues, and we'd only need to 
move the handful of JIRAs that have marked themselves correctly as fixed in 
2.8.3 to be fixed in 2.8.2.

Jason
 

On Friday, July 21, 2017 10:01 AM, Kihwal Lee 
 wrote:
 

Thanks for driving the next 2.8 release, Junping. While I was committing a 
blocker for 2.7.4, I noticed some of the JIRAs are back-ported to 2.7 but 
missing in branch-2.8.2.  Perhaps it is safer and easier to simply rebranch 
2.8.2.
Thanks,
Kihwal

On Thursday, July 20, 2017, 3:32:16 PM CDT, Junping Du  
wrote:

Hi all,
    Per Vinod's previous email, we just announced that Apache Hadoop 2.8.1 was 
released today; it is a special security release. Now we should work towards the 
2.8.2 release, which aims at production deployment. The focus obviously is on 
blocker/critical issues [1] and bug fixes, with *no* features or improvements. We 
currently have 13 blocker/critical issues, and 10 of them are Patch Available.

  I plan to cut an RC in a month, targeting a release before the end of August, to 
give enough time for outstanding blocker/critical issues. I will start moving 
out any tickets that are not blockers and/or won't fit the timeline. For the 
progress of the release effort, please refer to our release wiki [2].

  Please share thoughts if you have any. Thanks!

Thanks,

Junping

[1] 2.8.2 release Blockers/Criticals: https://s.apache.org/JM5x
[2] 2.8 Release wiki: 
https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release


From: Vinod Kumar Vavilapalli 
Sent: Thursday, July 20, 2017 1:05 PM
To: gene...@hadoop.apache.org
Subject: [ANNOUNCE] Apache Hadoop 2.8.1 is released

Hi all,

The Apache Hadoop PMC has released version 2.8.1. You can get it from this 
page: http://hadoop.apache.org/releases.html#Download
This is a security release in the 2.8.0 release line. It consists of 2.8.0 plus 
security fixes. Users on 2.8.0 are encouraged to upgrade to 2.8.1.

Please note that the 2.8.x release line is still not ready for 
production use. Critical issues are being ironed out via testing and downstream 
adoption. Production users should wait for a subsequent release in the 2.8.x 
line.

Thanks
+Vinod


-
To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org

   

[jira] [Created] (HADOOP-14669) GenericTestUtils.waitFor should use monotonic time

2017-07-18 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-14669:
---

 Summary: GenericTestUtils.waitFor should use monotonic time
 Key: HADOOP-14669
 URL: https://issues.apache.org/jira/browse/HADOOP-14669
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0-alpha4
Reporter: Jason Lowe
Priority: Trivial


GenericTestUtils.waitFor should be calling Time.monotonicNow rather than 
Time.now.  Otherwise, if the system clock adjusts during unit testing, the 
timeout period could be incorrect.
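
As an editorial illustration (not the actual GenericTestUtils code), a sketch of the deadline pattern the report asks for; the helper name and signature below are made up:

{noformat}
import java.util.function.BooleanSupplier;

/** Illustrative deadline wait built on a monotonic clock. */
public final class MonotonicWaitFor {
  public static void waitFor(BooleanSupplier check, long checkEveryMillis,
                             long timeoutMillis) throws InterruptedException {
    // System.nanoTime() is monotonic, so the deadline cannot be stretched or
    // shortened by a system clock adjustment.
    final long deadline = System.nanoTime() + timeoutMillis * 1_000_000L;
    while (!check.getAsBoolean()) {
      if (System.nanoTime() - deadline >= 0) {
        throw new IllegalStateException("Timed out after " + timeoutMillis + " ms");
      }
      Thread.sleep(checkEveryMillis);
    }
  }
}
{noformat}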



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: zstd compression

2017-07-17 Thread Jason Lowe
I think we are OK to leave support for the zstd codec in the Hadoop code base.  
I asked Chris Mattmann for clarification, noting that the support for the zstd 
codec requires the user to install the zstd headers and libraries and then 
configure it to be included in the native Hadoop build.  The Hadoop releases 
are not shipping any zstd code (e.g.: headers or libraries) nor does it require 
zstd as a mandatory dependency.  Here's what he said:


On Monday, July 17, 2017 11:07 AM, Chris Mattmann  wrote:

> Hi Jason,
> 
> This sounds like an optional dependency on a Cat-X software. This isn’t the 
> only type of compression
> that is allowed within Hadoop, correct? If it is truly optional and you have 
> gone to that level of detail
> below to make the user opt in, and if we are not shipping zstd with our 
> products (source code releases),
> then this is an acceptable usage.
> 
> Cheers,
> Chris


So I think we are in the clear with respect to zstd usage as long as we keep it 
as an optional codec where the user needs to get the headers and libraries for 
zstd and configure it into the native Hadoop build.

Jason

On Monday, July 17, 2017 9:44 AM, Sean Busbey  wrote:

I know that the HBase community is also looking at what to do about
our inclusion of zstd. We've had it in releases since late 2016. My
plan was to request that they relicense it.

Perhaps the Hadoop PMC could join HBase in the request?

On Sun, Jul 16, 2017 at 8:11 PM, Allen Wittenauer
 wrote:
>
> It looks like HADOOP-13578 added Facebook's zstd compression codec.  
> Unfortunately, that codec is using the same 3-clause BSD (LICENSE file) + 
> patent grant license (PATENTS file) that React is using and RocksDB was using.
>
> Should that code get reverted?
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>

-- 
busbey

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14412) HostsFileReader#getHostDetails is very expensive on large clusters

2017-05-11 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-14412:
---

 Summary: HostsFileReader#getHostDetails is very expensive on large 
clusters
 Key: HADOOP-14412
 URL: https://issues.apache.org/jira/browse/HADOOP-14412
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.8.0
Reporter: Jason Lowe
Assignee: Jason Lowe


After upgrading one of our large clusters to 2.8 we noticed many IPC server 
threads of the resourcemanager spending time in NodesListManager#isValidNode 
which in turn was calling HostsFileReader#getHostDetails.  The latter is 
creating complete copies of the include and exclude sets for every node 
heartbeat, and these sets are not small due to the size of the cluster.  These 
copies are causing multiple resizes of the underlying HashSets being filled and 
creating lots of garbage.
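
One way to avoid the per-heartbeat copies, sketched here purely as an editorial illustration (the class and method names are not the actual HostsFileReader code): publish an immutable snapshot when the hosts files are reloaded and hand out read-only references on the hot path.

{noformat}
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.atomic.AtomicReference;

public class HostsSnapshotSketch {
  private final AtomicReference<Set<String>> includes =
      new AtomicReference<Set<String>>(Collections.<String>emptySet());

  /** Called rarely, e.g. when the hosts file is (re)loaded. */
  public void refresh(Set<String> newIncludes) {
    includes.set(Collections.unmodifiableSet(new HashSet<>(newIncludes)));
  }

  /** Called on every node heartbeat: no copy, no HashSet resizing, no garbage. */
  public Set<String> getIncludes() {
    return includes.get();
  }
}
{noformat}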



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-04-18 Thread Jason Lowe
Thanks for the pointers, Sean!  According to the infrastructure team, 
apparently it was a typo in the protection scheme that allowed the trunk force 
push to go through.  
 
https://issues.apache.org/jira/browse/INFRA-13902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15971643#comment-15971643
   
Jason
 On Monday, April 17, 2017 3:05 PM, Sean Busbey  wrote:
 

 disallowing force pushes to trunk was done back in:

* August 2014: INFRA-8195
* February 2016: INFRA-11136

On Mon, Apr 17, 2017 at 11:18 AM, Jason Lowe
 wrote:
> I found at least one commit that was dropped, MAPREDUCE-6673.  I was able to 
> cherry-pick the original commit hash since it was recorded in the commit 
> email.
> This begs the question of why we're allowing force pushes to trunk.  I 
> thought we asked to have that disabled the last time trunk was accidentally 
> clobbered?
> Jason
>
>
>    On Monday, April 17, 2017 10:18 AM, Arun Suresh  wrote:
>
>
>  Hi
>
> I had the Apr-14 eve version of trunk on my local machine. I've pushed that.
> Don't know if anything was committed over the weekend though.
>
> Cheers
> -Arun
>
> On Mon, Apr 17, 2017 at 7:17 AM, Anu Engineer 
> wrote:
>
>> Hi Allen,
>>
>> https://issues.apache.org/jira/browse/INFRA-13902
>>
>> That happened with ozone branch too. It was an inadvertent force push.
>> Infra has advised us to force push the latest branch if you have it.
>>
>> Thanks
>> Anu
>>
>>
>> On 4/17/17, 7:10 AM, "Allen Wittenauer"  wrote:
>>
>> >Looks like someone reset HEAD back to Mar 31.
>> >
>> >Sent from my iPad
>> >
>> >> On Apr 16, 2017, at 12:08 AM, Apache Jenkins Server <
>> jenk...@builds.apache.org> wrote:
>> >>
>> >> For more details, see https://builds.apache.org/job/
>> hadoop-qbt-trunk-java8-linux-x86/378/
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> -1 overall
>> >>
>> >>
>> >> The following subsystems voted -1:
>> >>    docker
>> >>
>> >>
>> >> Powered by Apache Yetus 0.5.0-SNAPSHOT  http://yetus.apache.org
>> >>
>> >>
>> >>
>> >> -
>> >> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>> >> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>> >
>> >
>> >-
>> >To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
>> >For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>> >
>> >
>>
>>
>
>
>



-- 
busbey


   

Re: Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-04-17 Thread Jason Lowe
I found at least one commit that was dropped, MAPREDUCE-6673.  I was able to 
cherry-pick the original commit hash since it was recorded in the commit email.
This begs the question of why we're allowing force pushes to trunk.  I thought 
we asked to have that disabled the last time trunk was accidentally clobbered?
Jason
 

On Monday, April 17, 2017 10:18 AM, Arun Suresh  wrote:
 

 Hi

I had the Apr-14 eve version of trunk on my local machine. I've pushed that.
Don't know if anything was committed over the weekend though.

Cheers
-Arun

On Mon, Apr 17, 2017 at 7:17 AM, Anu Engineer 
wrote:

> Hi Allen,
>
> https://issues.apache.org/jira/browse/INFRA-13902
>
> That happened with ozone branch too. It was an inadvertent force push.
> Infra has advised us to force push the latest branch if you have it.
>
> Thanks
> Anu
>
>
> On 4/17/17, 7:10 AM, "Allen Wittenauer"  wrote:
>
> >Looks like someone reset HEAD back to Mar 31.
> >
> >Sent from my iPad
> >
> >> On Apr 16, 2017, at 12:08 AM, Apache Jenkins Server <
> jenk...@builds.apache.org> wrote:
> >>
> >> For more details, see https://builds.apache.org/job/
> hadoop-qbt-trunk-java8-linux-x86/378/
> >>
> >>
> >>
> >>
> >>
> >> -1 overall
> >>
> >>
> >> The following subsystems voted -1:
> >>    docker
> >>
> >>
> >> Powered by Apache Yetus 0.5.0-SNAPSHOT  http://yetus.apache.org
> >>
> >>
> >>
> >> -
> >> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> >> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> >
> >
> >-
> >To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> >For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> >
> >
>
>


   

Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-17 Thread Jason Lowe
+1 (binding)
- Verified signatures and digests
- Performed a native build from the release tag
- Deployed to a single-node cluster
- Ran some sample jobs
Jason
 

On Friday, March 17, 2017 4:18 AM, Junping Du  wrote:
 

 Hi all,
    With fix of HDFS-11431 get in, I've created a new release candidate (RC3) 
for Apache Hadoop 2.8.0.

    This is the next minor release to follow up 2.7.0, which was released more than 
a year ago. It comprises 2,900+ fixes, improvements, and new 
features. Most of these commits are being released for the first time in branch-2.

      More information about the 2.8.0 release plan can be found here: 
https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release

      New RC is available at: 
http://home.apache.org/~junping_du/hadoop-2.8.0-RC3

      The RC tag in git is: release-2.8.0-RC3, and the latest commit id is: 
91f2b7a13d1e97be65db92ddabc627cc29ac0009

      The maven artifacts are available via repository.apache.org at: 
https://repository.apache.org/content/repositories/orgapachehadoop-1057

      Please try the release and vote; the vote will run for the usual 5 days, 
ending on 03/22/2017 PDT time.

Thanks,

Junping

   

Re: Updated 2.8.0-SNAPSHOT artifact

2016-11-04 Thread Jason Lowe
At this point my preference would be to do the most expeditious thing to 
release 2.8, whether that's sticking with the branch-2.8 we have today or 
re-cutting it on branch-2.  Doing a quick JIRA query, there have been almost 2,400 
JIRAs resolved in 2.8.0 (1).  For many of them, it's well past time they saw a 
release vehicle.  If re-cutting the branch means we have to wrap up a few extra 
things that are still in-progress on branch-2 or add a few more blockers to the 
list before we release then I'd rather stay where we're at and ship it ASAP.

Jason
(1) 
https://issues.apache.org/jira/issues/?jql=project%20in%20%28hadoop%2C%20yarn%2C%20mapreduce%2C%20hdfs%29%20and%20resolution%20%3D%20Fixed%20and%20fixVersion%20%3D%202.8.0





On Tuesday, October 25, 2016 5:31 PM, Karthik Kambatla  
wrote:
 

 Is there value in releasing current branch-2.8? Aren't we better off
re-cutting the branch off of branch-2?

On Tue, Oct 25, 2016 at 12:20 AM, Akira Ajisaka 
wrote:

> It's been almost a year since branch-2.8 was cut.
> I'm thinking we need to release 2.8.0 ASAP.
>
> According to the following list, there are 5 blocker and 6 critical issues.
> https://issues.apache.org/jira/issues/?filter=12334985
>
> Regards,
> Akira
>
>
> On 10/18/16 10:47, Brahma Reddy Battula wrote:
>
>> Hi Vinod,
>>
>> Any plan for the first RC of branch-2.8? I think it has been a long time.
>>
>>
>>
>>
>> --Brahma Reddy Battula
>>
>> -Original Message-
>> From: Vinod Kumar Vavilapalli [mailto:vino...@apache.org]
>> Sent: 20 August 2016 00:56
>> To: Jonathan Eagles
>> Cc: common-dev@hadoop.apache.org
>> Subject: Re: Updated 2.8.0-SNAPSHOT artifact
>>
>> Jon,
>>
>> That is around the time when I branched 2.8, so I guess you were getting
>> SNAPSHOT artifacts till then from the branch-2 nightly builds.
>>
>> If you need it, we can set up SNAPSHOT builds. Or just wait for the first
>> RC, which is around the corner.
>>
>> +Vinod
>>
>> On Jul 28, 2016, at 4:27 PM, Jonathan Eagles  wrote:
>>>
>>> The latest snapshot was uploaded in Nov 2015, but check-ins are still coming
>>> in quite frequently.
>>> https://repository.apache.org/content/repositories/snapshots/org/apach
>>> e/hadoop/hadoop-yarn-api/
>>>
>>> Are there any plans to start producing updated SNAPSHOT artifacts for
>>> current hadoop development lines?
>>>
>>
>>
>> -
>> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>>
>>
>> -
>> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>>
>>
>
> -
> To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
>
>


   

Re: [VOTE] Release Apache Hadoop 2.6.5 (RC1)

2016-10-10 Thread Jason Lowe
+1 (binding)
- Verified signatures and digests
- Built native from source
- Deployed to a single-node cluster and ran some sample jobs
Jason
 

On Sunday, October 2, 2016 7:13 PM, Sangjin Lee  wrote:
 

 Hi folks,

I have pushed a new release candidate (R1) for the Apache Hadoop 2.6.5
release (the next maintenance release in the 2.6.x release line). RC1
contains fixes to CHANGES.txt, and is otherwise identical to RC0.

Below are the details of this release candidate:

The RC is available for validation at:
http://home.apache.org/~sjlee/hadoop-2.6.5-RC1/.

The RC tag in git is release-2.6.5-RC1 and its git commit is
e8c9fe0b4c252caf2ebf1464220599650f119997.

The maven artifacts are staged via repository.apache.org at:
https://repository.apache.org/content/repositories/orgapachehadoop-1050/.

You can find my public key at
http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS.

Please try the release and vote. The vote will run for the usual 5 days. I
would greatly appreciate your timely vote. Thanks!

Regards,
Sangjin


   

[jira] [Created] (HADOOP-13552) RetryInvocationHandler logs all remote exceptions

2016-08-26 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-13552:
---

 Summary: RetryInvocationHandler logs all remote exceptions
 Key: HADOOP-13552
 URL: https://issues.apache.org/jira/browse/HADOOP-13552
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.8.0
Reporter: Jason Lowe
Priority: Blocker


RetryInvocationHandler logs a warning for any exception that it does not retry. 
 There are many exceptions that the client can automatically handle, like 
FileNotFoundException, UnresolvedPathException, etc., so now every one of these 
generates a scary-looking stack trace as a warning, and then the program continues 
normally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 2.7.3 RC2

2016-08-22 Thread Jason Lowe
+1 (binding)
- Verified signatures and digests
- Successfully built from source with native support
- Deployed a single-node cluster
- Ran some sample jobs successfully

Jason

  From: Vinod Kumar Vavilapalli 
 To: "common-dev@hadoop.apache.org" ; 
hdfs-...@hadoop.apache.org; yarn-...@hadoop.apache.org; 
"mapreduce-...@hadoop.apache.org"  
Cc: Vinod Kumar Vavilapalli 
 Sent: Wednesday, August 17, 2016 9:05 PM
 Subject: [VOTE] Release Apache Hadoop 2.7.3 RC2
   
Hi all,

I've created a new release candidate RC2 for Apache Hadoop 2.7.3.

As discussed before, this is the next maintenance release to follow up 2.7.2.

The RC is available for validation at: 
http://home.apache.org/~vinodkv/hadoop-2.7.3-RC2/ 


The RC tag in git is: release-2.7.3-RC2

The maven artifacts are available via repository.apache.org 
 at 
https://repository.apache.org/content/repositories/orgapachehadoop-1046 


The release-notes are inside the tar-balls at location 
hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I hosted 
this at http://home.apache.org/~vinodkv/hadoop-2.7.3-RC2/releasenotes.html 
 for your 
quick perusal.

As you may have noted,
 - few issues with RC0 forced a RC1 [1]
 - few more issues with RC1 forced a RC2 [2]
 - a very long fix-cycle for the License & Notice issues (HADOOP-12893) caused 
2.7.3 (along with every other Hadoop release) to slip by quite a bit. This 
release's related discussion thread is linked below: [3].

Please try the release and vote; the vote will run for the usual 5 days.

Thanks,
Vinod

[1] [VOTE] Release Apache Hadoop 2.7.3 RC0: 
https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/index.html#26106 

[2] [VOTE] Release Apache Hadoop 2.7.3 RC1: 
https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/msg26336.html 

[3] 2.7.3 release plan: 
https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/msg24439.html 


   

[jira] [Created] (HADOOP-13500) Concurrency issues when using Configuration iterator

2016-08-16 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-13500:
---

 Summary: Concurrency issues when using Configuration iterator
 Key: HADOOP-13500
 URL: https://issues.apache.org/jira/browse/HADOOP-13500
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Reporter: Jason Lowe


It is possible to encounter a ConcurrentModificationException while trying to 
iterate a Configuration object.  The iterator method tries to walk the 
underlying Properties object without proper synchronization, so another thread 
simultaneously calling the set method can trigger the exception.
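
A rough editorial sketch of how the race can surface; it is timing-dependent, so it may take several runs to trigger:

{noformat}
import java.util.Map;
import java.util.concurrent.ConcurrentModificationException;
import org.apache.hadoop.conf.Configuration;

public class ConfIteratorRaceSketch {
  public static void main(String[] args) throws Exception {
    final Configuration conf = new Configuration();

    // Writer thread keeps mutating the configuration.
    Thread writer = new Thread(() -> {
      for (int i = 0; i < 1_000_000; i++) {
        conf.set("race.key." + i, "value");
      }
    });
    writer.start();

    try {
      // Iteration walks the backing properties, so a concurrent set()
      // can surface as a ConcurrentModificationException here.
      for (Map.Entry<String, String> entry : conf) {
        entry.getKey();
      }
    } catch (ConcurrentModificationException cme) {
      System.out.println("Reproduced the race: " + cme);
    }
    writer.join();
  }
}
{noformat}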



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 2.7.3 RC1

2016-08-15 Thread Jason Lowe
+1 (binding)
- Verified signatures and digests
- Built from source with native support
- Deployed a pseudo-distributed cluster
- Ran some sample jobs
Jason

  From: Vinod Kumar Vavilapalli 
 To: "common-dev@hadoop.apache.org" ; 
hdfs-...@hadoop.apache.org; yarn-...@hadoop.apache.org; 
"mapreduce-...@hadoop.apache.org"  
Cc: Vinod Kumar Vavilapalli 
 Sent: Friday, August 12, 2016 11:45 AM
 Subject: [VOTE] Release Apache Hadoop 2.7.3 RC1
   
Hi all,

I've created a release candidate RC1 for Apache Hadoop 2.7.3.

As discussed before, this is the next maintenance release to follow up 2.7.2.

The RC is available for validation at: 
http://home.apache.org/~vinodkv/hadoop-2.7.3-RC1/ 


The RC tag in git is: release-2.7.3-RC1

The maven artifacts are available via repository.apache.org 
 at 
https://repository.apache.org/content/repositories/orgapachehadoop-1045/ 


The release-notes are inside the tar-balls at location 
hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I hosted 
this at home.apache.org/~vinodkv/hadoop-2.7.3-RC1/releasenotes.html 
 for your 
quick perusal.

As you may have noted,
 - few issues with RC0 forced a RC1 [1]
 - a very long fix-cycle for the License & Notice issues (HADOOP-12893) caused 
2.7.3 (along with every other Hadoop release) to slip by quite a bit. This 
release's related discussion thread is linked below: [2].

Please try the release and vote; the vote will run for the usual 5 days.

Thanks,
Vinod

[1] [VOTE] Release Apache Hadoop 2.7.3 RC0: 
https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/index.html#26106 

[2]: 2.7.3 release plan: 
https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/msg24439.html 


   

Re: [Release thread] 2.6.5 release activities

2016-08-10 Thread Jason Lowe
Thanks for organizing this, Chris!
I don't believe HADOOP-13362 is needed since it's related to ContainerMetrics.  
ContainerMetrics weren't added until 2.7 by YARN-2984.
YARN-4794 looks applicable to 2.6.  The change drops right in except it has 
JDK7-isms (multi-catch clause), so it needs a slight change.
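
For illustration only (this is not the actual YARN-4794 diff), the kind of change involved: multi-catch is a Java 7 language feature, so a backport to branch-2.6, which presumably still targets Java 6, has to split the clause into one catch block per exception type.

{noformat}
import java.io.IOException;

public class MultiCatchBackportSketch {
  // Java 7 style: fine for branch-2.7 and later.
  static void java7Style() {
    try {
      mightFail();
    } catch (IOException | InterruptedException e) {
      System.err.println("handled: " + e);
    }
  }

  // Java 6 compatible equivalent for branch-2.6: one catch block per type.
  static void java6Style() {
    try {
      mightFail();
    } catch (IOException e) {
      System.err.println("handled: " + e);
    } catch (InterruptedException e) {
      System.err.println("handled: " + e);
    }
  }

  static void mightFail() throws IOException, InterruptedException {
    // placeholder for the real work
  }

  public static void main(String[] args) {
    java7Style();
    java6Style();
  }
}
{noformat}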

Jason

  From: Chris Trezzo 
 To: "common-dev@hadoop.apache.org" ; 
hdfs-...@hadoop.apache.org; "mapreduce-...@hadoop.apache.org" 
; "yarn-...@hadoop.apache.org" 
 
 Sent: Tuesday, August 9, 2016 7:32 PM
 Subject: [Release thread] 2.6.5 release activities
   
Based on the sentiment in the "[DISCUSS] 2.6.x line releases" thread, I
have moved forward with some of the initial effort in creating a 2.6.5
release. I am forking this thread so we have a dedicated 2.6.5 release
thread.

I have gone through the git logs and gathered a list of JIRAs that are in
branch-2.7 but are missing from branch-2.6. I limited the diff to issues
with a commit date after 1/26/2016. I did this because 2.6.4 was cut from
branch-2.6 around that date (http://markmail.org/message/xmy7ebs6l3643o5e)
and presumably issues that were committed to branch-2.7 before then were
already looked at as part of 2.6.4.

I have collected these issues in a spreadsheet and have given them an
initial triage on whether they are candidates for a backport to 2.6.5. The
spreadsheet is sorted by the status of the issues with the potential
backport candidates at the top. Here is a link to the spreadsheet:
https://docs.google.com/spreadsheets/d/1lfG2CYQ7W4q3olWpOCo6EBAey1WYC8hTRUemHvYPPzY/edit?usp=sharing

As of now, I have identified 16 potential backport candidates. Please take
a look at the list and let me know if there are any that you think should
not be on the list, or ones that you think I have missed. This was just an
initial high-level triage, so there could definitely be issues that are 
mislabeled.

As a side note: we still need to look at the pre-commit build for 2.6 and
follow up with an addendum for HADOOP-12800.

Thanks everyone!
Chris Trezzo


  

Re: [VOTE] Release Apache Hadoop 2.7.3 RC0

2016-08-05 Thread Jason Lowe
Both sound like real problems to me, and I think it's appropriate to file JIRAs 
to track them.
Jason


  From: Andrew Wang 
 To: Karthik Kambatla  
Cc: larry mccay ; Vinod Kumar Vavilapalli 
; "common-dev@hadoop.apache.org" 
; "hdfs-...@hadoop.apache.org" 
; "yarn-...@hadoop.apache.org" 
; "mapreduce-...@hadoop.apache.org" 

 Sent: Thursday, August 4, 2016 5:56 PM
 Subject: Re: [VOTE] Release Apache Hadoop 2.7.3 RC0
   
Could a YARN person please comment on these two issues, one of which Vinay
also hit? If someone already triaged or filed JIRAs, I missed it.

On Mon, Jul 25, 2016 at 11:52 AM, Andrew Wang 
wrote:

> I'll also add that, as a YARN newbie, I did hit two usability issues.
> These are very unlikely to be regressions, and I can file JIRAs if they
> seem fixable.
>
> * I didn't have SSH to localhost set up (new laptop), and when I tried to
> run the Pi job, it'd exit my window manager session. I feel there must be a
> more developer-friendly solution here.
> * If you start the NodeManager and not the RM, the NM has a handler for
> SIGTERM and SIGINT that blocked my Ctrl-C and kill attempts during startup.
> I had to kill -9 it.
>
> On Mon, Jul 25, 2016 at 11:44 AM, Andrew Wang 
> wrote:
>
>> I got asked this off-list, so as a reminder, only PMC votes are binding
>> on releases. Everyone is encouraged to vote on releases though!
>>
>> +1 (binding)
>>
>> * Downloaded source, built
>> * Started up HDFS and YARN
>> * Ran Pi job which as usual returned 4, and a little teragen
>>
>> On Mon, Jul 25, 2016 at 11:08 AM, Karthik Kambatla 
>> wrote:
>>
>>> +1 (binding)
>>>
>>> * Downloaded and build from source
>>> * Checked LICENSE and NOTICE
>>> * Pseudo-distributed cluster with FairScheduler
>>> * Ran MR and HDFS tests
>>> * Verified basic UI
>>>
>>> On Sun, Jul 24, 2016 at 1:07 PM, larry mccay  wrote:
>>>
>>> > +1 binding
>>> >
>>> > * downloaded and built from source
>>> > * checked LICENSE and NOTICE files
>>> > * verified signatures
>>> > * ran standalone tests
>>> > * installed pseudo-distributed instance on my mac
>>> > * ran through HDFS and mapreduce tests
>>> > * tested credential command
>>> > * tested webhdfs access through Apache Knox
>>> >
>>> >
>>> > On Fri, Jul 22, 2016 at 10:15 PM, Vinod Kumar Vavilapalli <
>>> > vino...@apache.org> wrote:
>>> >
>>> > > Hi all,
>>> > >
>>> > > I've created a release candidate RC0 for Apache Hadoop 2.7.3.
>>> > >
>>> > > As discussed before, this is the next maintenance release to follow
>>> up
>>> > > 2.7.2.
>>> > >
>>> > > The RC is available for validation at:
>>> > > http://home.apache.org/~vinodkv/hadoop-2.7.3-RC0/ <
>>> > > http://home.apache.org/~vinodkv/hadoop-2.7.3-RC0/>
>>> > >
>>> > > The RC tag in git is: release-2.7.3-RC0
>>> > >
>>> > > The maven artifacts are available via repository.apache.org <
>>> > > http://repository.apache.org/> at
>>> > > https://repository.apache.org/content/repositories/
>>> orgapachehadoop-1040/
>>> > <
>>> > > https://repository.apache.org/content/repositories/
>>> orgapachehadoop-1040/
>>> > >
>>> > >
>>> > > The release-notes are inside the tar-balls at location
>>> > > hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html.
>>> I
>>> > > hosted this at
>>> > > http://home.apache.org/~vinodkv/hadoop-2.7.3-RC0/releasenotes.html <
>>> > > http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/releasenotes.html
>>> >
>>> > for
>>> > > your quick perusal.
>>> > >
>>> > > As you may have noted, a very long fix-cycle for the License & Notice
>>> > > issues (HADOOP-12893) caused 2.7.3 (along with every other Hadoop
>>> > release)
>>> > > to slip by quite a bit. This release's related discussion thread is
>>> > linked
>>> > > below: [1].
>>> > >
>>> > > Please try the release and vote; the vote will run for the usual 5
>>> days.
>>> > >
>>> > > Thanks,
>>> > > Vinod
>>> > >
>>> > > [1]: 2.7.3 release plan:
>>> > > https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/
>>> msg24439.html
>>> > <
>>> > > http://markmail.org/thread/6yv2fyrs4jlepmmr>
>>> >
>>>
>>
>>
>


   

Re: [VOTE] Release Apache Hadoop 2.7.3 RC0

2016-07-25 Thread Jason Lowe
+1 (binding)
- Verified signatures and digests
- Built from source with native support
- Deployed a pseudo-distributed cluster
- Ran some sample jobs
Jason

  From: Vinod Kumar Vavilapalli 
 To: "common-dev@hadoop.apache.org" ; 
hdfs-...@hadoop.apache.org; yarn-...@hadoop.apache.org; 
"mapreduce-...@hadoop.apache.org"  
Cc: Vinod Kumar Vavilapalli 
 Sent: Friday, July 22, 2016 9:15 PM
 Subject: [VOTE] Release Apache Hadoop 2.7.3 RC0
   
Hi all,

I've created a release candidate RC0 for Apache Hadoop 2.7.3.

As discussed before, this is the next maintenance release to follow up 2.7.2.

The RC is available for validation at: 
http://home.apache.org/~vinodkv/hadoop-2.7.3-RC0/ 


The RC tag in git is: release-2.7.3-RC0

The maven artifacts are available via repository.apache.org 
 at 
https://repository.apache.org/content/repositories/orgapachehadoop-1040/ 


The release-notes are inside the tar-balls at location 
hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I hosted 
this at http://home.apache.org/~vinodkv/hadoop-2.7.3-RC0/releasenotes.html 
 for your 
quick perusal.

As you may have noted, a very long fix-cycle for the License & Notice issues 
(HADOOP-12893) caused 2.7.3 (along with every other Hadoop release) to slip by 
quite a bit. This release's related discussion thread is linked below: [1].

Please try the release and vote; the vote will run for the usual 5 days.

Thanks,
Vinod

[1]: 2.7.3 release plan: 
https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/msg24439.html 


   

[jira] [Reopened] (HADOOP-13362) DefaultMetricsSystem leaks the source name when a source unregisters

2016-07-11 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reopened HADOOP-13362:
-
  Assignee: Junping Du

Reopening to target a fix just for the DefaultMetricsSystem in 2.7 rather than 
pulling in the entire patch from YARN-5296 (and its dependencies).

> DefaultMetricsSystem leaks the source name when a source unregisters
> 
>
> Key: HADOOP-13362
> URL: https://issues.apache.org/jira/browse/HADOOP-13362
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.7.2
>    Reporter: Jason Lowe
>Assignee: Junping Du
>Priority: Critical
>
> Ran across a nodemanager that was spending most of its time in GC.  Upon 
> examination of the heap most of the memory was going to the map of names in 
> org.apache.hadoop.metrics2.lib.UniqueNames.  In this case the map had almost 
> 2 million entries.  Looking at a few of the map entries showed names like 
> "ContainerResource_container_e01_1459548490386_8560138_01_002020", 
> "ContainerResource_container_e01_1459548490386_2378745_01_000410", etc.
> Looks like the ContainerMetrics for each container will cause a unique name 
> to be registered with UniqueNames and the name will never be unregistered.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13362) DefaultMetricsSystem leaks the source name when a source unregisters

2016-07-11 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-13362.
-
Resolution: Duplicate

> DefaultMetricsSystem leaks the source name when a source unregisters
> 
>
> Key: HADOOP-13362
> URL: https://issues.apache.org/jira/browse/HADOOP-13362
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.7.2
>    Reporter: Jason Lowe
>Priority: Critical
>
> Ran across a nodemanager that was spending most of its time in GC.  Upon 
> examination of the heap most of the memory was going to the map of names in 
> org.apache.hadoop.metrics2.lib.UniqueNames.  In this case the map had almost 
> 2 million entries.  Looking at a few of the map entries showed names like 
> "ContainerResource_container_e01_1459548490386_8560138_01_002020", 
> "ContainerResource_container_e01_1459548490386_2378745_01_000410", etc.
> Looks like the ContainerMetrics for each container will cause a unique name 
> to be registered with UniqueNames and the name will never be unregistered.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13343) globStatus returns null for file path that exists but is filtered

2016-07-06 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-13343:
---

 Summary: globStatus returns null for file path that exists but is 
filtered
 Key: HADOOP-13343
 URL: https://issues.apache.org/jira/browse/HADOOP-13343
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.2
Reporter: Jason Lowe
Priority: Minor


If a file path without globs is passed to globStatus and the file exists but 
the specified input filter suppresses it then globStatus will return null 
instead of an empty array.  This makes it impossible for the caller to discern 
the difference between the file not existing at all vs. being suppressed by the 
filter, and is inconsistent with the way it handles globs that match an existing dir 
but fail to match anything within the dir.
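
An editorial sketch of the inconsistency described above; the paths and the reject-everything filter are invented for the example:

{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;

public class GlobStatusFilterSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    PathFilter rejectAll = new PathFilter() {
      @Override
      public boolean accept(Path path) {
        return false;  // suppress everything, just to expose the behavior
      }
    };

    // Non-glob path that exists but is suppressed by the filter:
    // returns null, which looks to the caller like "path does not exist".
    FileStatus[] direct = fs.globStatus(new Path("/tmp/existing-file"), rejectAll);
    System.out.println(direct == null ? "null" : "array of length " + direct.length);

    // Glob over an existing dir where nothing passes the filter:
    // returns an empty array instead, hence the inconsistency.
    FileStatus[] globbed = fs.globStatus(new Path("/tmp/*"), rejectAll);
    System.out.println(globbed == null ? "null" : "array of length " + globbed.length);
  }
}
{noformat}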



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-12966) TestNativeLibraryChecker is crashing

2016-03-25 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-12966:
---

 Summary: TestNativeLibraryChecker is crashing
 Key: HADOOP-12966
 URL: https://issues.apache.org/jira/browse/HADOOP-12966
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Jason Lowe


Precommit builds have reported TestNativeLibraryChecker failing.  The logs show 
the JVM is crashing in unicode_length:
{noformat}
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x7fdf71b45c90, pid=11625, tid=140597680293632
#
# JRE version: Java(TM) SE Runtime Environment (8.0_74-b02) (build 1.8.0_74-b02)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.74-b02 mixed mode linux-amd64 
compressed oops)
# Problematic frame:
# V  [libjvm.so+0xa90c90]  UTF8::unicode_length(char const*)+0x0
{noformat}





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12958) PhantomReference for filesystem statistics can trigger OOM

2016-03-23 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-12958:
---

 Summary: PhantomReference for filesystem statistics can trigger OOM
 Key: HADOOP-12958
 URL: https://issues.apache.org/jira/browse/HADOOP-12958
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.4, 2.7.3
Reporter: Jason Lowe
 Fix For: 2.7.3, 2.6.5


I saw an OOM that appears to have been caused by the phantom references 
introduced for file system statistics management.  I'll post details in a 
followup comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Hadoop 2.6.4 RC0

2016-02-08 Thread Jason Lowe
+1 (binding)
- verified signatures and digests
- built native from source
- deployed a single-node cluster and ran some sample MapReduce jobs.
Jason


  From: Junping Du 
 To: "hdfs-...@hadoop.apache.org" ; 
"yarn-...@hadoop.apache.org" ; 
"mapreduce-...@hadoop.apache.org" ; 
"common-dev@hadoop.apache.org"  
 Sent: Wednesday, February 3, 2016 1:01 AM
 Subject: [VOTE] Release Apache Hadoop 2.6.4 RC0
   
Hi community folks,
  I've created a release candidate RC0 for Apache Hadoop 2.6.4 (the next 
maintenance release to follow up 2.6.3) according to the email thread of the 2.6.4 release 
plan [1]. Below are the details of this release candidate:

The RC is available for validation at:
*http://people.apache.org/~junping_du/hadoop-2.6.4-RC0/
*

The RC tag in git is: release-2.6.4-RC0

The maven artifacts are staged via repository.apache.org at:
*https://repository.apache.org/content/repositories/orgapachehadoop-1028/?
*

You can find my public key at:
http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS

Please try the release and vote. The vote will run for the usual 5 days.

Thanks!


Cheers,

Junping


[1]: 2.6.4 release plan: http://markmail.org/message/fk3ud3c665lscvx5?


  

Re: [VOTE] Release Apache Hadoop 2.7.2 RC2

2016-01-19 Thread Jason Lowe
That's reasonable, especially if we don't take nearly as long for 2.7.3.  Note 
that there are almost 50 JIRAs already committed to 2.7.3, so hopefully we'll 
have a plan for that soon.
+1 (binding) for 2.7.2 RC2.
Jason


  From: Vinod Kumar Vavilapalli 
 To: mapreduce-...@hadoop.apache.org; Jason Lowe  
Cc: Hadoop Common ; "hdfs-...@hadoop.apache.org" 
; "yarn-...@hadoop.apache.org" 

 Sent: Tuesday, January 19, 2016 5:25 PM
 Subject: Re: [VOTE] Release Apache Hadoop 2.7.2 RC2
   
The JIRA YARN-4610 links YARN-3434 as the one causing the breakage, and 
YARN-3434 already exists in 2.7.1 itself. That categorizes the new issue as an 
existing bug.
If you agree with that sentiment, and given that there is a clear work-around, 
in the interest of progress of 2.7.2 (we have spent > 2 months on this now), 
I’d like to move forward.
Please LMK what you think.
Thanks,
+Vinod


On Jan 19, 2016, at 3:13 PM, Jason Lowe  wrote:
-1 (binding)
We have been running a release derived from 2.7 on some of our clusters, and we 
recently hit a bug where an application making large container requests can 
drastically slow down container allocations for other users in the same queue.  
See YARN-4610 for details.  Since 
yarn.scheduler.capacity.reservations-continue-look-all-nodes is on by default, 
I think we should fix this.  If we decide to ship 2.7.2 without that fix then 
the release notes should call out that JIRA and mention the workaround of 
setting yarn.scheduler.capacity.reservations-continue-look-all-nodes to false.
Jason


  From: Vinod Kumar Vavilapalli 
 To: Hadoop Common ; hdfs-...@hadoop.apache.org; 
yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org 
 Sent: Thursday, January 14, 2016 10:57 PM
 Subject: [VOTE] Release Apache Hadoop 2.7.2 RC2

Hi all,

I've created an updated release candidate RC2 for Apache Hadoop 2.7.2.

As discussed before, this is the next maintenance release to follow up 2.7.1.

The RC is available for validation at: 
http://people.apache.org/~vinodkv/hadoop-2.7.2-RC2/

The RC tag in git is: release-2.7.2-RC2

The maven artifacts are available via repository.apache.org 
<http://repository.apache.org/> at 
https://repository.apache.org/content/repositories/orgapachehadoop-1027 
<https://repository.apache.org/content/repositories/orgapachehadoop-1027>

The release-notes are inside the tar-balls at location 
hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I hosted 
this at http://people.apache.org/~vinodkv/hadoop-2.7.2-RC2/releasenotes.html 
<http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/releasenotes.html> for your 
quick perusal.

As you may have noted,
 - I terminated the RC1 related voting thread after finding out that we didn’t 
have a bunch of patches that are already in the released 2.6.3 version. After a 
brief discussion, we decided to keep the parallel 2.6.x and 2.7.x releases 
incremental, see [4] for this discussion.
 - The RC0 related voting thread got halted due to some critical issues. It 
took a while again for getting all those blockers out of the way. See the 
previous voting thread [3] for details.
 - Before RC0, an unusually long 2.6.3 release caused 2.7.2 to slip by quite a 
bit. This release's related discussion threads are linked below: [1] and [2].

Please try the release and vote; the vote will run for the usual 5 days.

Thanks,
Vinod

[1]: 2.7.2 release plan: http://markmail.org/message/oozq3gvd4nhzsaes 
<http://markmail.org/message/oozq3gvd4nhzsaes>
[2]: Planning Apache Hadoop 2.7.2 http://markmail.org/message/iktqss2qdeykgpqk 
<http://markmail.org/message/iktqss2qdeykgpqk>
[3]: [VOTE] Release Apache Hadoop 2.7.2 RC0: 
http://markmail.org/message/5txhvr2qdiqglrwc 
<http://markmail.org/message/5txhvr2qdiqglrwc>
[4] Retracted [VOTE] Release Apache Hadoop 2.7.2 RC1: 
http://markmail.org/thread/n7ljbsnquihn3wlw





  

Re: [VOTE] Release Apache Hadoop 2.7.2 RC2

2016-01-19 Thread Jason Lowe
-1 (binding)
We have been running a release derived from 2.7 on some of our clusters, and we 
recently hit a bug where an application making large container requests can 
drastically slow down container allocations for other users in the same queue.  
See YARN-4610 for details.  Since 
yarn.scheduler.capacity.reservations-continue-look-all-nodes is on by default, 
I think we should fix this.  If we decide to ship 2.7.2 without that fix then 
the release notes should call out that JIRA and mention the workaround of 
setting yarn.scheduler.capacity.reservations-continue-look-all-nodes to false.
Jason


  From: Vinod Kumar Vavilapalli 
 To: Hadoop Common ; hdfs-...@hadoop.apache.org; 
yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org 
 Sent: Thursday, January 14, 2016 10:57 PM
 Subject: [VOTE] Release Apache Hadoop 2.7.2 RC2
   
Hi all,

I've created an updated release candidate RC2 for Apache Hadoop 2.7.2.

As discussed before, this is the next maintenance release to follow up 2.7.1.

The RC is available for validation at: 
http://people.apache.org/~vinodkv/hadoop-2.7.2-RC2/

The RC tag in git is: release-2.7.2-RC2

The maven artifacts are available via repository.apache.org 
 at 
https://repository.apache.org/content/repositories/orgapachehadoop-1027 


The release-notes are inside the tar-balls at location 
hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I hosted 
this at http://people.apache.org/~vinodkv/hadoop-2.7.2-RC2/releasenotes.html 
 for your 
quick perusal.

As you may have noted,
 - I terminated the RC1 related voting thread after finding out that we didn’t 
have a bunch of patches that are already in the released 2.6.3 version. After a 
brief discussion, we decided to keep the parallel 2.6.x and 2.7.x releases 
incremental, see [4] for this discussion.
 - The RC0 related voting thread got halted due to some critical issues. It 
took a while again for getting all those blockers out of the way. See the 
previous voting thread [3] for details.
 - Before RC0, an unusually long 2.6.3 release caused 2.7.2 to slip by quite a 
bit. This release's related discussion threads are linked below: [1] and [2].

Please try the release and vote; the vote will run for the usual 5 days.

Thanks,
Vinod

[1]: 2.7.2 release plan: http://markmail.org/message/oozq3gvd4nhzsaes
[2]: Planning Apache Hadoop 2.7.2: http://markmail.org/message/iktqss2qdeykgpqk
[3]: [VOTE] Release Apache Hadoop 2.7.2 RC0: http://markmail.org/message/5txhvr2qdiqglrwc
[4]: Retracted [VOTE] Release Apache Hadoop 2.7.2 RC1: http://markmail.org/thread/n7ljbsnquihn3wlw

  

[jira] [Created] (HADOOP-12706) TestLocalFsFCStatistics fails occasionally

2016-01-13 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-12706:
---

 Summary: TestLocalFsFCStatistics fails occasionally
 Key: HADOOP-12706
 URL: https://issues.apache.org/jira/browse/HADOOP-12706
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Jason Lowe


TestLocalFsFCStatistics has been failing sometimes, and when it fails it 
appears to be from FCStatisticsBaseTest.testStatisticsThreadLocalDataCleanUp.  
The test is timing out when it fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Hadoop 2.7.2 RC1

2015-12-18 Thread Jason Lowe
+1 (binding)
- Verified signatures and digests
- Spot checked CHANGES.txt files
- Successfully performed a native build from source
- Deployed to a single-node cluster and ran sample jobs
We have been running with the fix for YARN-4354 on two of our clusters for some 
time with no issues, so I feel confident that prior blocker is now fixed.
Jason
 

  From: Vinod Kumar Vavilapalli 
 To: Hadoop Common ; hdfs-...@hadoop.apache.org; 
yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org 
Cc: Vinod Kumar Vavilapalli 
 Sent: Wednesday, December 16, 2015 8:49 PM
 Subject: [VOTE] Release Apache Hadoop 2.7.2 RC1
   
Hi all,

I've created a release candidate RC1 for Apache Hadoop 2.7.2.

As discussed before, this is the next maintenance release to follow up 2.7.1.

The RC is available for validation at: 
http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/ 


The RC tag in git is: release-2.7.2-RC1

The maven artifacts are available via repository.apache.org at 
https://repository.apache.org/content/repositories/orgapachehadoop-1026/

The release-notes are inside the tar-balls at location 
hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I hosted 
this at http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/releasenotes.html 
for quick perusal.

As you may have noted,
 - The RC0 related voting thread got halted due to some critical issues. It 
took a while again to get all those blockers out of the way. See the 
previous voting thread [3] for details.
 - Before RC0, an unusually long 2.6.3 release caused 2.7.2 to slip by quite a 
bit. This release's related discussion threads are linked below: [1] and [2].

Please try the release and vote; the vote will run for the usual 5 days.

Thanks,
Vinod

[1]: 2.7.2 release plan: http://markmail.org/message/oozq3gvd4nhzsaes
[2]: Planning Apache Hadoop 2.7.2: http://markmail.org/message/iktqss2qdeykgpqk
[3]: [VOTE] Release Apache Hadoop 2.7.2 RC0: http://markmail.org/message/5txhvr2qdiqglrwc


   

Re: [VOTE] Release Apache Hadoop 2.6.3 RC0

2015-12-16 Thread Jason Lowe
+1 (binding)
- Verified signatures and digests
- Successfully built from source with native code support
- Deployed to a single-node cluster and ran some test jobs
Jason

  From: Junping Du 
 To: Hadoop Common ; "hdfs-...@hadoop.apache.org" 
; "mapreduce-...@hadoop.apache.org" 
; "yarn-...@hadoop.apache.org" 
 
Cc: "junping...@apache.org" 
 Sent: Friday, December 11, 2015 6:16 PM
 Subject: [VOTE] Release Apache Hadoop 2.6.3 RC0
   

Hi all developers in hadoop community,
  I've created a release candidate RC0 for Apache Hadoop 2.6.3 (the next 
maintenance release to follow up 2.6.2) per the 2.6.3 release plan email thread 
[1]. Sorry for this RC coming a bit late, as several blocker issues were getting 
committed until yesterday. Below are the details:

The RC is available for validation at:
http://people.apache.org/~junping_du/hadoop-2.6.3-RC0/

The RC tag in git is: release-2.6.3-RC0

The maven artifacts are staged via repository.apache.org at:
https://repository.apache.org/content/repositories/orgapachehadoop-1025/

You can find my public key at:
http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS

Please try the release and vote. The vote will run for the usual 5 days.

Thanks and happy weekend!


Cheers,

Junping


[1]: 2.6.3 release plan: http://markmail.org/thread/nc2jogbgni37vu6y


 

[jira] [Resolved] (HADOOP-12594) Deadlock in metrics subsystem

2015-11-24 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-12594.
-
Resolution: Duplicate

Resolving as a duplicate of HADOOP-11361 now that it was reverted.

> Deadlock in metrics subsystem
> -
>
> Key: HADOOP-12594
> URL: https://issues.apache.org/jira/browse/HADOOP-12594
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.7.3
>    Reporter: Jason Lowe
>Priority: Blocker
> Attachments: HADOOP-12594.patch
>
>
> Saw a YARN ResourceManager process encounter a deadlock which appears to be 
> caused by the metrics subsystem.  Stack trace to follow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2015-11-24 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reopened HADOOP-11361:
-

Based on initial patch in HADOOP-12594 and earlier comments, I'm reverting this.

> Fix a race condition in MetricsSourceAdapter.updateJmxCache
> ---
>
> Key: HADOOP-11361
> URL: https://issues.apache.org/jira/browse/HADOOP-11361
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.5.1, 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-111361-003.patch, HADOOP-11361-002.patch, 
> HADOOP-11361.patch, HDFS-7487.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12594) Deadlock in metrics subsystem

2015-11-24 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-12594:
---

 Summary: Deadlock in metrics subsystem
 Key: HADOOP-12594
 URL: https://issues.apache.org/jira/browse/HADOOP-12594
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.7.1
Reporter: Jason Lowe
Priority: Critical


Saw a YARN ResourceManager process encounter a deadlock which appears to be 
caused by the metrics subsystem.  Stack trace to follow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Hadoop 2.7.2 RC0

2015-11-13 Thread Jason Lowe
-1 (binding)
Ran into public localization issues and filed YARN-4354. We need that resolved 
before the release is ready.  We will either need a timely fix or may have to 
revert YARN-2902 to unblock the release if my root-cause analysis is correct.  
I'll dig into this more today.

Jason

  From: Vinod Kumar Vavilapalli 
 To: common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org 
Cc: vino...@apache.org 
 Sent: Wednesday, November 11, 2015 10:31 PM
 Subject: [VOTE] Release Apache Hadoop 2.7.2 RC0
   
Hi all,


I've created a release candidate RC0 for Apache Hadoop 2.7.2.


As discussed before, this is the next maintenance release to follow up
2.7.1.


The RC is available for validation at:

http://people.apache.org/~vinodkv/hadoop-2.7.2-RC0/


The RC tag in git is: release-2.7.2-RC0


The maven artifacts are available via repository.apache.org at

https://repository.apache.org/content/repositories/orgapachehadoop-1023/


As you may have noted, an unusually long 2.6.3 release caused 2.7.2 to slip
by quite a bit. This release's related discussion threads are linked below:
[1] and [2].


Please try the release and vote; the vote will run for the usual 5 days.


Thanks,

Vinod


[1]: 2.7.2 release plan: http://markmail.org/message/oozq3gvd4nhzsaes

[2]: Planning Apache Hadoop 2.7.2
http://markmail.org/message/iktqss2qdeykgpqk


  

Re: [VOTE] Release Apache Hadoop 2.6.2

2015-10-26 Thread Jason Lowe
+1 (binding)
- Verified signatures and digests
- Performed native build from source
- Deployed a single-node cluster and ran some test jobs

Jason
  From: Sangjin Lee 
 To: "common-dev@hadoop.apache.org" ; 
"yarn-...@hadoop.apache.org" ; 
"hdfs-...@hadoop.apache.org" ; 
"mapreduce-...@hadoop.apache.org"  
Cc: Vinod Kumar Vavilapalli  
 Sent: Thursday, October 22, 2015 4:14 PM
 Subject: [VOTE] Release Apache Hadoop 2.6.2
   
Hi all,

I have created a release candidate (RC0) for Hadoop 2.6.2.

The RC is available at: http://people.apache.org/~sjlee/hadoop-2.6.2-RC0/

The RC tag in git is: release-2.6.2-RC0

The list of JIRAs committed for 2.6.2:
https://issues.apache.org/jira/browse/YARN-4101?jql=project%20in%20(HADOOP%2C%20HDFS%2C%20YARN%2C%20MAPREDUCE)%20AND%20fixVersion%20%3D%202.6.2

The maven artifacts are staged at
https://repository.apache.org/content/repositories/orgapachehadoop-1022/

Please try out the release candidate and vote. The vote will run for 5 days.

Thanks,
Sangjin


   

[jira] [Resolved] (HADOOP-12290) hadoop fs -ls command returns inconsistent results with wildcards

2015-07-30 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-12290.
-
Resolution: Invalid

This appears to be pilot error rather than a bug in Hadoop.  The wildcards are 
not quoted and therefore the shell is expanding them _before_ Hadoop even sees 
the wildcard.  You must be running on a Mac, which would explain why it's 
trying to lookup things like /Applications, /Library, /System, etc.  This needs 
to be something like:
{noformat}
hadoop fs -ls '/*'
{noformat}
to keep the shell from expanding it.

The same thing is occurring for the /t* case.

For the last case, the shell is not finding anything for /z* and therefore is 
passing it unexpanded to Hadoop, and Hadoop is expanding it to the various z* 
directories.  However I suspect all of those directories are empty, so it lists 
nothing as a result.

Closing as invalid.  Please reopen if there's a real issue here.

> hadoop fs -ls command returns inconsistent results with wildcards
> -
>
> Key: HADOOP-12290
> URL: https://issues.apache.org/jira/browse/HADOOP-12290
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>
> I cannot find any document for wildcard support for "hadoop fs -ls" cmd and 
> the expected behavior. So I did some experiments and got inconsistent results 
> below. This looks like a bug to me. But if we don't support wildcard for 
> "hadoop fs -ls", we should at least document it.
> On a single node cluster with "fs.default.name" configured as 
> hdfs://localhost:9000. 
> Root without wildcard: HDFS only.
> {code}
> $ hdfs dfs -ls /
> Found 11 items
> drwxrwxrwx   - xyao hadoop  0 2015-07-28 15:27 /data
> drwxr-xr-x   - xyao hadoop  0 2015-07-26 23:05 /noez
> drwxr-xr-x   - xyao hadoop  0 2015-07-29 17:33 /path3
> drwxrwxrwx   - xyao hadoop  0 2015-07-26 23:04 /tmp
> drwx--   - xyao hadoop  0 2015-07-26 23:03 /user
> drwxr-xr-x   - xyao hadoop  0 2015-07-29 17:34 /uu
> drwxr-xr-x   - xyao hadoop  0 2015-07-26 23:08 /z1_1
> drwxr-xr-x   - xyao hadoop  0 2015-07-26 21:43 /z1_2new
> drwxr-xr-x   - xyao hadoop  0 2015-07-26 22:00 /z2_0
> drwxr-xr-x   - xyao hadoop  0 2015-07-26 21:43 /z2_1
> drwxr-xr-x   - xyao hadoop  0 2015-07-26 21:55 /z2_2
> {code}
> Root with wildcard: HDFS and local. 
> {code}
> $ hadoop fs -ls /*
> ls: `/Applications': No such file or directory
> ls: `/Library': No such file or directory
> ls: `/Network': No such file or directory
> ls: `/System': No such file or directory
> ls: `/User Information': No such file or directory
> ls: `/Users': No such file or directory
> ls: `/Volumes': No such file or directory
> ls: `/bin': No such file or directory
> ls: `/dev': No such file or directory
> ls: `/etc': No such file or directory
> ls: `/home': No such file or directory
> ls: `/mach_kernel': No such file or directory
> ls: `/net': No such file or directory
> ls: `/opt': No such file or directory
> ls: `/private': No such file or directory
> ls: `/proc': No such file or directory
> ls: `/sbin': No such file or directory
> ls: `/test.jks': No such file or directory
> Found 3 items
> drwxrwxrwx   - xyao hadoop  0 2015-07-22 10:48 /tmp/test
> drwxrwxrwx   - xyao hadoop  0 2015-07-22 10:50 /tmp/test
> drwxrwxrwx   - xyao hadoop  0 2015-07-22 10:49 /tmp/test
> hello
> ls: `/usr': No such file or directory
> ls: `/var': No such file or directory
> {code}
> Wildcard with prefix 1: HDFS and Local. But HDFS goes one level down.
> {code}
> HW11217:hadoop-hdfs-project xyao$ hadoop fs -ls /t*
> ls: `/test.jks': No such file or directory
> Found 3 items
> drwxrwxrwx   - xyao hadoop  0 2015-07-22 10:48 /tmp/test
> drwxrwxrwx   - xyao hadoop  0 2015-07-22 10:50 /tmp/test
> drwxrwxrwx   - xyao hadoop  0 2015-07-22 10:49 /tmp/test
> hello
> {code}
> Wildcard and prefix 2: Empty result even though HDFS does have a few 
> directories starts with "z" as shown above. 
> {code}
> hadoop fs -ls /z*
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12191) Bzip2Factory is not thread safe

2015-07-06 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-12191:
---

 Summary: Bzip2Factory is not thread safe
 Key: HADOOP-12191
 URL: https://issues.apache.org/jira/browse/HADOOP-12191
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Reporter: Jason Lowe


Bzip2Factory.isNativeBzip2Loaded is not protected from multiple threads calling 
it simultaneously.  A thread can return false from this method despite logging 
that it was going to return true, due to manipulation of the static boolean by 
another thread calling the same method.
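To make the race concrete, here is a stripped-down sketch of an unsynchronized check-then-set on shared static state, with one possible synchronized fix; the class and field names are illustrative and not the actual Bzip2Factory code:
{code}
// Illustrative only -- not the real Bzip2Factory implementation.
public class NativeFlagRaceSketch {
  private static boolean checked = false;
  private static boolean nativeLoaded = false;

  // Racy: two threads can interleave between the read of 'checked' and the
  // writes below, so a caller may end up returning a value that another
  // thread rewrote after this thread decided what it was going to return.
  public static boolean isNativeLoaded(boolean available) {
    if (!checked) {
      nativeLoaded = available;
      checked = true;
    }
    return nativeLoaded;
  }

  // One possible fix: make the whole check-then-set atomic.
  public static synchronized boolean isNativeLoadedSafely(boolean available) {
    if (!checked) {
      nativeLoaded = available;
      checked = true;
    }
    return nativeLoaded;
  }
}
{code}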



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Hadoop 2.7.1 RC0

2015-07-01 Thread Jason Lowe
+1 (binding)
- Verified signatures and digests
- Successfully performed a native build from source
- Deployed a single-node cluster
- Ran sample MapReduce jobs

Jason
  From: Vinod Kumar Vavilapalli 
 To: common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org 
Cc: vino...@apache.org 
 Sent: Monday, June 29, 2015 3:45 AM
 Subject: [VOTE] Release Apache Hadoop 2.7.1 RC0
   
Hi all,

I've created a release candidate RC0 for Apache Hadoop 2.7.1.

As discussed before, this is the next stable release to follow up 2.6.0,
and the first stable one in the 2.7.x line.

The RC is available for validation at:
http://people.apache.org/~vinodkv/hadoop-2.7.1-RC0/

The RC tag in git is: release-2.7.1-RC0

The maven artifacts are available via repository.apache.org at
https://repository.apache.org/content/repositories/orgapachehadoop-1019/

Please try the release and vote; the vote will run for the usual 5 days.

Thanks,
Vinod

PS: It took 2 months instead of the planned [1] 2 weeks in getting this
release out: post-mortem in a separate thread.

[1]: A 2.7.1 release to follow up 2.7.0
http://markmail.org/thread/zwzze6cqqgwq4rmw


   

[jira] [Created] (HADOOP-12125) Retrying UnknownHostException on a proxy does not actually retry hostname resolution

2015-06-26 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-12125:
---

 Summary: Retrying UnknownHostException on a proxy does not 
actually retry hostname resolution
 Key: HADOOP-12125
 URL: https://issues.apache.org/jira/browse/HADOOP-12125
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Jason Lowe


When RetryInvocationHandler attempts to retry an UnknownHostException, the 
hostname is never actually re-resolved.  The InetSocketAddress in the ConnectionId 
has cached the fact that the hostname is unresolvable, and when the proxy tries 
to set up a new Connection object with that ConnectionId it checks whether the 
(cached) resolution result is unresolved and immediately throws.

The end result is we sleep and retry for no benefit.  The hostname resolution 
is never attempted again.
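A small standalone sketch of the java.net behavior behind this (the retry and proxy plumbing is omitted, and the host name is made up): an InetSocketAddress resolves once at construction and caches the result, so only a newly constructed address actually retries DNS:
{code}
import java.net.InetSocketAddress;

public class ReresolveSketch {
  public static void main(String[] args) throws InterruptedException {
    // Resolution is attempted once, when the object is constructed.
    InetSocketAddress cached =
        new InetSocketAddress("this-host-does-not-exist.example", 8020);
    System.out.println("first attempt unresolved?  " + cached.isUnresolved());

    Thread.sleep(1000);  // stand-in for the retry policy's sleep

    // Re-checking the cached instance never re-resolves the host name...
    System.out.println("cached still unresolved?   " + cached.isUnresolved());

    // ...a fresh InetSocketAddress is what actually triggers another lookup.
    InetSocketAddress fresh =
        new InetSocketAddress(cached.getHostName(), cached.getPort());
    System.out.println("fresh attempt unresolved?  " + fresh.isUnresolved());
  }
}
{code}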



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Hadoop 2.7.0 RC0

2015-04-14 Thread Jason Lowe
+1 (binding)
- Verified signatures and digests
- Built from source with native support
- Deployed to a single-node cluster and ran sample jobs
Jason

  From: Vinod Kumar Vavilapalli 
 To: common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org 
Cc: vino...@apache.org 
 Sent: Friday, April 10, 2015 6:44 PM
 Subject: [VOTE] Release Apache Hadoop 2.7.0 RC0
   
Hi all,

I've created a release candidate RC0 for Apache Hadoop 2.7.0.

 The RC is available at: http://people.apache.org/~vinodkv/hadoop-2.7.0-RC0/

The RC tag in git is: release-2.7.0-RC0

 The maven artifacts are available via repository.apache.org at
https://repository.apache.org/content/repositories/orgapachehadoop-1017/

As discussed before
 - This release will only work with JDK 1.7 and above
 - I’d like to use this as a starting release for 2.7.x [1], depending on
how it goes, get it stabilized and potentially use a 2.7.1 in a few
weeks as the stable release.

 Please try the release and vote; the vote will run for the usual 5 days.

 Thanks,
 Vinod

 [1]: A 2.7.1 release to follow up 2.7.0
http://markmail.org/thread/zwzze6cqqgwq4rmw

  

Re: Looking to a Hadoop 3 release

2015-03-05 Thread Jason Lowe
I'm OK with a 3.0.0 release as long as we are minimizing the pain of 
maintaining yet another release line and conscious of the incompatibilities 
going into that release line.
For the former, I would really rather not see a branch-3 cut so soon.  It's yet 
another line onto which to cherry-pick, and I don't see why we need to add this 
overhead at such an early phase.  We should only create branch-3 when there's 
an incompatible change that the community wants and it should _not_ go into the 
next major release (i.e.: it's for Hadoop 4.0).  We can develop 3.0 alphas and 
betas on trunk and release from trunk in the interim.  IMHO we need to stop 
treating trunk as a place to exile patches.

For the latter, I think as a community we need to evaluate the benefits of 
breaking compatibility against the costs of migrating.  Each time we break 
compatibility we create a hurdle for people to jump when they move to the new 
release, and we should make those hurdles worth their time.  For example, 
wire-compatibility has been mentioned as part of this.  Any feature that breaks 
wire compatibility better be absolutely amazing, as it creates a huge hurdle 
for people to jump.
To summarize:
+1 for a community-discussed roadmap of what we're breaking in Hadoop 3 and why 
it's worth it for users
-1 for creating branch-3 now, we can release from trunk until the next 
incompatibility for Hadoop 4 arrives
+1 for baking classpath isolation as opt-in on 2.x and eventually default on in 3.0
Jason
  From: Andrew Wang 
 To: "hdfs-...@hadoop.apache.org"  
Cc: "common-dev@hadoop.apache.org" ; 
"mapreduce-...@hadoop.apache.org" ; 
"yarn-...@hadoop.apache.org"  
 Sent: Wednesday, March 4, 2015 12:15 PM
 Subject: Re: Looking to a Hadoop 3 release
   
Let's not dismiss this quite so handily.

Sean, Jason, and Stack replied on HADOOP-11656 pointing out that while we
could make classpath isolation opt-in via configuration, what we really
want longer term is to have it on by default (or just always on). Stack in
particular points out the practical difficulties in using an opt-in method
in 2.x from a downstream project perspective. It's not pretty.

The plan that both Sean and Jason propose (which I support) is to have an
opt-in solution in 2.x, bake it there, then turn it on by default
(incompatible) in a new major release. I think this lines up well with my
proposal of some alphas and betas leading up to a GA 3.x. I'm also willing
to help with 2.x release management if that would help with testing this
feature.

Even setting aside classpath isolation, a new major release is still
justified by JDK8. Somehow this is being ignored in the discussion. Allen,
historically the voice of the user in our community, just highlighted it as
a major compatibility issue, and myself and Tucu have also expressed our
very strong concerns about bumping this in a minor release. 2.7's bump is a
unique exception, but this is not something to be cited as precedent or
policy.

Where does this resistance to a new major release stem from? As I've
described from the beginning, this will look basically like a 2.x release,
except for the inclusion of classpath isolation by default and target
version JDK8. I've expressed my desire to maintain API and wire
compatibility, and we can audit the set of incompatible changes in trunk to
ensure this. My proposal for doing alpha and beta releases leading up to GA
also gives downstreams a nice amount of time for testing and validation.

Regards,
Andrew



On Tue, Mar 3, 2015 at 2:32 PM, Arun Murthy  wrote:

> Awesome, looks like we can just do this in a compatible manner - nothing
> else on the list seems like it warrants a (premature) major release.
>
> Thanks Vinod.
>
> Arun
>
> 
> From: Vinod Kumar Vavilapalli 
> Sent: Tuesday, March 03, 2015 2:30 PM
> To: common-dev@hadoop.apache.org
> Cc: hdfs-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org;
> yarn-...@hadoop.apache.org
> Subject: Re: Looking to a Hadoop 3 release
>
> I started pitching in more on that JIRA.
>
> To add, I think we can and should strive for doing this in a compatible
> manner, whatever the approach. Marking and calling it incompatible before
> we see proposal/patch seems premature to me. Commented the same on JIRA:
> https://issues.apache.org/jira/browse/HADOOP-11656?focusedCommentId=14345875&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14345875
> .
>
> Thanks
> +Vinod
>
> On Mar 2, 2015, at 8:08 PM, Andrew Wang <andrew.w...@cloudera.com> wrote:
>
> Regarding classpath isolation, based on what I hear from our customers,
> it's still a big problem (even after the MR classloader work). The latest
> Jackson version bump was quite painful for our downstream projects, and the
> HDFS client still leaks a lot of dependencies. Would welcome more
> discussion of this on HADOOP-11656, Steve, Colin, and Haohui have already
> chimed in.
>
>


  

Re: 2.7 status

2015-02-13 Thread Jason Lowe
I'd like to see a 2.7 release sooner than later.  It has been almost 3 months 
since Hadoop 2.6 was released, and there have already been 634 JIRAs committed 
to 2.7.  That's a lot of changes waiting for an official release.
https://issues.apache.org/jira/issues/?jql=project%20in%20%28hadoop%2Chdfs%2Cyarn%2Cmapreduce%29%20AND%20fixversion%3D2.7.0%20AND%20resolution%3DFixed
Jason

  From: Sangjin Lee 
 To: "common-dev@hadoop.apache.org"  
 Sent: Tuesday, February 10, 2015 1:30 PM
 Subject: 2.7 status
   
Folks,

What is the current status of the 2.7 release? I know initially it started
out as a "java-7" only release, but looking at the JIRAs that is very much
not the case.

Do we have a certain timeframe for 2.7 or is it time to discuss it?

Thanks,
Sangjin


  

[jira] [Resolved] (HADOOP-11532) RAT checker complaining about PSD images

2015-02-02 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-11532.
-
Resolution: Duplicate

This is a duplicate of YARN-3113.

> RAT checker complaining about PSD images
> 
>
> Key: HADOOP-11532
> URL: https://issues.apache.org/jira/browse/HADOOP-11532
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>
> Jenkins is rejecting builds as {{Sorting icons.psd}} doesn't have an ASF 
> header.
> {code}
>  !? 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.9.4/images/Sorting
>  icons.psd
> Lines that start with ? in the release audit report indicate files that 
> do not have an Apache license header.
> {code}
> It's a layered photoshop image that either needs to be excluded from RAT or 
> cut from the source tree



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11473) test-patch says "-1 overall" even when all checks are +1

2015-01-12 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-11473:
---

 Summary: test-patch says "-1 overall" even when all checks are +1
 Key: HADOOP-11473
 URL: https://issues.apache.org/jira/browse/HADOOP-11473
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jason Lowe


I noticed recently that test-patch is posting "-1 overall" despite all 
sub-checks being +1.  See HDFS-7533 and HDFS-7598 for some examples.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11409) FileContext.getFileContext can stack overflow if default fs misconfigured

2014-12-15 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-11409:
---

 Summary: FileContext.getFileContext can stack overflow if default 
fs misconfigured
 Key: HADOOP-11409
 URL: https://issues.apache.org/jira/browse/HADOOP-11409
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Jason Lowe


If the default filesystem is misconfigured such that it doesn't have a scheme 
then FileContext.getFileContext(URI, Configuration) will call 
FileContext.getFileContext(Configuration) which in turn calls the former and we 
loop until the stack explodes.
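A stripped-down sketch of the mutual recursion being described, under the assumption that the misconfigured default FS value is something schemeless like //localhost:9000; this is not the actual FileContext code, just the shape of the loop:
{code}
import java.net.URI;

// Illustrative only: two overloads that delegate to each other when the
// default FS URI has no scheme, mirroring the loop described above.
public class DefaultFsLoopSketch {
  static Object getContext(URI defaultFsUri, Object conf) {
    if (defaultFsUri.getScheme() == null) {
      // Schemeless default FS: fall back to the no-URI overload...
      return getContext(conf);
    }
    return new Object();
  }

  static Object getContext(Object conf) {
    // ...which rebuilds the (still schemeless) default FS URI and calls back,
    // so the two methods recurse until a StackOverflowError is thrown.
    URI misconfigured = URI.create("//localhost:9000");  // assumed bad value
    return getContext(misconfigured, conf);
  }

  public static void main(String[] args) {
    getContext(new Object());
  }
}
{code}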



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Hadoop 2.6.0

2014-11-17 Thread Jason Lowe
+1 (binding)
- verified signatures and digests
- verified late-arriving fixes for YARN-2846 and MAPREDUCE-6156 were present
- built from source
- deployed to a single-node cluster
- ran some sample MapReduce jobs
Jason
  From: Arun C Murthy 
 To: "common-dev@hadoop.apache.org" ; 
"hdfs-...@hadoop.apache.org" ; 
"yarn-...@hadoop.apache.org" ; 
"mapreduce-...@hadoop.apache.org"  
 Sent: Thursday, November 13, 2014 5:08 PM
 Subject: [VOTE] Release Apache Hadoop 2.6.0 
   
Folks,

I've created another release candidate (rc1) for hadoop-2.6.0 based on the 
feedback.

The RC is available at: http://people.apache.org/~acmurthy/hadoop-2.6.0-rc1
The RC tag in git is: release-2.6.0-rc1

The maven artifacts are available via repository.apache.org at 
https://repository.apache.org/content/repositories/orgapachehadoop-1013.

Please try the release and vote; the vote will run for the usual 5 days.

thanks,
Arun




   

Re: [VOTE] Release Apache Hadoop 2.6.0

2014-11-13 Thread Jason Lowe
I just committed 2.6 blockers YARN-2846 and MAPREDUCE-6156, which should also be 
in the 2.6.0 rc1 build.
Jason
  From: Arun C Murthy 
 To: yarn-...@hadoop.apache.org 
Cc: mapreduce-...@hadoop.apache.org; Ravi Prakash ; 
"hdfs-...@hadoop.apache.org" ; 
"common-dev@hadoop.apache.org"  
 Sent: Wednesday, November 12, 2014 10:58 AM
 Subject: Re: [VOTE] Release Apache Hadoop 2.6.0
   
Sounds good. I'll create an rc1. Thanks.

Arun

On Nov 11, 2014, at 2:06 PM, Robert Kanter  wrote:

> Hi Arun,
> 
> We were testing the RC and ran into a problem with the recent fixes that
> were done for POODLE for Tomcat (HADOOP-11217 for KMS and HDFS-7274 for
> HttpFS).  Basically, in disabling SSLv3, we also disabled SSLv2Hello, which
> is required for older clients (e.g. Java 6 with openssl 0.9.8x) so they
> can't connect without it.  Just to be clear, it does not mean SSLv2, which
> is insecure.  This also affects the MR shuffle in HADOOP-11243.
> 
> The fix is super simple, so I think we should reopen these 3 JIRAs and put
> in addendum patches and get them into 2.6.0.
> 
> thanks
> - Robert
> 
> On Tue, Nov 11, 2014 at 1:04 PM, Ravi Prakash  wrote:
> 
>> Hi Arun!
>> We are very close to completion on YARN-1964 (DockerContainerExecutor).
>> I'd also like HDFS-4882 to be checked in. Do you think these issues merit
>> another RC?
>> ThanksRavi
>> 
>> 
>>    On Tuesday, November 11, 2014 11:57 AM, Steve Loughran <
>> ste...@hortonworks.com> wrote:
>> 
>> 
>> +1 binding
>> 
>> -patched slider pom to build against 2.6.0
>> 
>> -verified build did download, which it did at up to ~8Mbps. Faster than a
>> local build.
>> 
>> -full clean test runs on OS/X & Linux
>> 
>> 
>> Windows 2012:
>> 
>> Same thing. I did have to first build my own set of the windows native
>> binaries, by checking out branch-2.6.0; doing a native build, copying the
>> binaries and then purging the local m2 repository of hadoop artifacts to be
>> confident about what I was building against. For anyone who wants those native libs
>> they will be up on
>> https://github.com/apache/incubator-slider/tree/develop/bin/windows/ once
>> it syncs with the ASF repos.
>> 
>> afterwards: the tests worked!
>> 
>> 
>> On 11 November 2014 02:52, Arun C Murthy  wrote:
>> 
>>> Folks,
>>> 
>>> I've created a release candidate (rc0) for hadoop-2.6.0 that I would like
>>> to see released.
>>> 
>>> The RC is available at:
>>> http://people.apache.org/~acmurthy/hadoop-2.6.0-rc0
>>> The RC tag in git is: release-2.6.0-rc0
>>> 
>>> The maven artifacts are available via repository.apache.org at
>>> https://repository.apache.org/content/repositories/orgapachehadoop-1012.
>>> 
>>> Please try the release and vote; the vote will run for the usual 5 days.
>>> 
>>> thanks,
>>> Arun
>>> 
>>> 
>> 
>> 
>> 
>> 
>> 

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/hdp/







  

[jira] [Resolved] (HADOOP-11288) yarn.resourcemanager.scheduler.class wrongly set in yarn-default.xml documentation

2014-11-10 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-11288.
-
Resolution: Invalid

The CapacityScheduler is very much supported, and is actively being developed.  
Its selection as the default scheduler is intentional; see YARN-137.

> yarn.resourcemanager.scheduler.class wrongly set in yarn-default.xml 
> documentation
> --
>
> Key: HADOOP-11288
> URL: https://issues.apache.org/jira/browse/HADOOP-11288
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: DeepakVohra
>
> The yarn.resourcemanager.scheduler.class property is wrongly set to 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.
>  CapacityScheduler is not even supported. Should be 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Hadoop 2.5.1 RC0

2014-09-10 Thread Jason Lowe

+1 (binding)

- verified signatures and digests
- built from source
- examined CHANGES.txt for items fixed in 2.5.1
- deployed to a single-node cluster and ran some sample MR jobs

Jason

On 09/05/2014 07:18 PM, Karthik Kambatla wrote:

Hi folks,

I have put together a release candidate (RC0) for Hadoop 2.5.1.

The RC is available at: http://people.apache.org/~kasha/hadoop-2.5.1-RC0/
The RC git tag is release-2.5.1-RC0
The maven artifacts are staged at:
https://repository.apache.org/content/repositories/orgapachehadoop-1010/

You can find my public key at:
http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS

Please try the release and vote. The vote will run for the now usual 5
days.

Thanks
Karthik





[jira] [Created] (HADOOP-11007) Reinstate building of ant tasks support

2014-08-26 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-11007:
---

 Summary: Reinstate building of ant tasks support
 Key: HADOOP-11007
 URL: https://issues.apache.org/jira/browse/HADOOP-11007
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, fs
Affects Versions: 2.5.0
Reporter: Jason Lowe
Assignee: Jason Lowe


The ant tasks support from HADOOP-1508 is still present under 
hadoop-hdfs/src/ant/ but is no longer being built.  It would be nice if this 
was reinstated in the build and distributed as part of the release.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: [VOTE] Release Apache Hadoop 2.5.0 RC2

2014-08-10 Thread Jason Lowe

+1 (binding)

- verified signatures and digests
- built from source
- deployed a single-node cluster
- ran some sample jobs

Jason

On 08/06/2014 03:59 PM, Karthik Kambatla wrote:

Hi folks,

I have put together a release candidate (rc2) for Hadoop 2.5.0.

The RC is available at: http://people.apache.org/~kasha/hadoop-2.5.0-RC2/
The RC tag in svn is here:
https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.5.0-rc2/
The maven artifacts are staged at:
https://repository.apache.org/content/repositories/orgapachehadoop-1009/

You can find my public key at:
http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS

Please try the release and vote. The vote will run for the now usual 5
days.

Thanks





Re: [VOTE] Migration from subversion to git for version control

2014-08-10 Thread Jason Lowe

+1

Jason

On 08/08/2014 09:57 PM, Karthik Kambatla wrote:

I have put together this proposal based on recent discussion on this topic.

Please vote on the proposal. The vote runs for 7 days.

1. Migrate from subversion to git for version control.
2. Force-push to be disabled on trunk and branch-* branches. Applying
changes from any of trunk/branch-* to any of branch-* should be through
"git cherry-pick -x".
3. Force-push on feature-branches is allowed. Before pulling in a
feature, the feature-branch should be rebased on latest trunk and the
changes applied to trunk through "git rebase --onto" or "git cherry-pick
".
4. Every time a feature branch is rebased on trunk, a tag that
identifies the state before the rebase needs to be created (e.g.
tag_feature_JIRA-2454_2014-08-07_rebase). These tags can be deleted once
the feature is pulled into trunk and the tags are no longer useful.
5. The relevance/use of tags stay the same after the migration.

Thanks
Karthik

PS: Per Andrew Wang, this should be an "Adoption of New Codebase" kind of
vote and will be a Lazy 2/3 majority of PMC members.





[jira] [Created] (HADOOP-10945) 4-digit octal permissions throw a parse error

2014-08-07 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-10945:
---

 Summary: 4-digit octal permissions throw a parse error
 Key: HADOOP-10945
 URL: https://issues.apache.org/jira/browse/HADOOP-10945
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.2.0
Reporter: Jason Lowe


Providing a 4-digit octal number for fs permissions leads to a parse error, 
e.g.: -Dfs.permissions.umask-mode=0022
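A tiny sketch that exercises the parsing path via the Configuration API, assuming FsPermission.getUMask as the entry point that reads this key; the three-digit spelling is shown as the presumed-working form alongside the failing four-digit one:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.permission.FsPermission;

public class UmaskParseSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Three-digit octal form: presumed to parse cleanly.
    conf.set("fs.permissions.umask-mode", "022");
    System.out.println("umask(022)  -> " + FsPermission.getUMask(conf));

    // Four-digit octal form: the spelling reported above to trigger a parse error.
    conf.set("fs.permissions.umask-mode", "0022");
    System.out.println("umask(0022) -> " + FsPermission.getUMask(conf));
  }
}
{code}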




--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: [DISCUSS] Assume Private-Unstable for classes that are not annotated

2014-07-23 Thread Jason Lowe
I think that's a reasonable proposal as long as we understand it changes 
the burden from finding all the things that should be marked @Private to 
finding all the things that should be marked @Public. As Tom Graves 
pointed out in an earlier discussion about @LimitedPrivate, it may be 
impossible to do a straightforward task and use only interfaces marked 
@Public.  If users can't do basic things without straying from @Public 
interfaces then tons of code can break if we assume it's always fair 
game to change anything not marked @Public.  The "well you shouldn't 
have used a non-@Public interface" argument is not very useful in that 
context.


So as long as we're good about making sure officially supported features 
have corresponding @Public interfaces to wield them then I agree it will 
be easier to track those rather than track all the classes that should 
be @Private.  Hopefully if users understand that's how things work 
they'll help file JIRAs for interfaces that need to be @Public to get 
their work done.


Jason

On 07/22/2014 04:54 PM, Karthik Kambatla wrote:

Hi devs

As you might have noticed, we have several classes and methods in them that
are not annotated at all. This is seldom intentional. Avoiding incompatible
changes to all these classes can be considerable baggage.

I was wondering if we should add an explicit disclaimer in our
compatibility guide that says, "Classes without annotations are to be
considered @Private"

For methods, is it reasonable to say - "Class members without specific
annotations inherit the annotations of the class"?

Thanks
Karthik





Re: [VOTE] Release Apache Hadoop 2.4.1

2014-06-27 Thread Jason Lowe

+1

- Verified signatures and digests
- Built from source, installed on single-node cluster and ran some 
sample jobs


Jason

On 06/21/2014 01:51 AM, Arun C Murthy wrote:

Folks,

I've created another release candidate (rc1) for hadoop-2.4.1 based on the 
feedback that I would like to push out.

The RC is available at: http://people.apache.org/~acmurthy/hadoop-2.4.1-rc1
The RC tag in svn is here: 
https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.4.1-rc1

The maven artifacts are available via repository.apache.org.

Please try the release and vote; the vote will run for the usual 7 days.

thanks,
Arun



--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/hdp/







Re: [VOTE] Release Apache Hadoop 0.23.11

2014-06-25 Thread Jason Lowe

+1 (binding)

- Verified signatures and digests
- Deployed binary tarball to a single-node cluster and ran some MR 
example jobs
- Built from source, deployed to a single-node cluster and ran some MR 
example jobs


Jason

On 06/19/2014 10:14 AM, Thomas Graves wrote:

Hey Everyone,

There have been various bug fixes that have gone into
branch-0.23 since the 0.23.10 release.  We think it's time to do a 0.23.11.

This is also the last planned release off of branch-0.23 we plan on doing.

The RC is available at:
http://people.apache.org/~tgraves/hadoop-0.23.11-candidate-0/


The RC Tag in svn is here:
http://svn.apache.org/viewvc/hadoop/common/tags/release-0.23.11-rc0/

The maven artifacts are available via repository.apache.org.

Please try the release and vote; the vote will run for the usual 7 days
til June 26th.

I am +1 (binding).

thanks,
Tom Graves








[jira] [Reopened] (HADOOP-10468) TestMetricsSystemImpl.testMultiThreadedPublish fails intermediately

2014-06-24 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reopened HADOOP-10468:
-


Reopening this issue as it breaks all existing metrics2 property files.  Before 
this change the properties needed to be lower-cased but now they must be 
camel-cased (e.g.: namenode.* now must be NameNode.*).

The release note states that the metrics2 file became case-sensitive, but I 
don't believe that's the case.  MetricsConfig uses 
org.apache.commons.configuration.SubsetConfiguration which I think has always 
been case-sensitive.

I'm hoping there's a way we can fix the underlying issue without breaking 
existing metrics2 property files, because the way in which they break is 
silent.  The settings are simply ignored rather than an error being thrown for 
unrecognized/unhandled properties.
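To make the breakage concrete, here is a hypothetical hadoop-metrics2.properties fragment (the file sink and filename are made up; only the prefix casing is the point):
{noformat}
# Accepted before this change:
namenode.sink.file.filename=namenode-metrics.out

# Required after this change; the lower-cased form above is now silently ignored:
NameNode.sink.file.filename=namenode-metrics.out
{noformat}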

> TestMetricsSystemImpl.testMultiThreadedPublish fails intermediately
> ---
>
> Key: HADOOP-10468
> URL: https://issues.apache.org/jira/browse/HADOOP-10468
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.5.0
>
> Attachments: HADOOP-10468.000.patch, HADOOP-10468.001.patch
>
>
> {{TestMetricsSystemImpl.testMultiThreadedPublish}} can fail intermediately 
> due to the insufficient size of the sink queue:
> {code}
> 2014-04-06 21:34:55,269 WARN  impl.MetricsSinkAdapter 
> (MetricsSinkAdapter.java:putMetricsImmediate(107)) - Collector has a full 
> queue and can't consume the given metrics.
> 2014-04-06 21:34:55,270 WARN  impl.MetricsSinkAdapter 
> (MetricsSinkAdapter.java:putMetricsImmediate(107)) - Collector has a full 
> queue and can't consume the given metrics.
> 2014-04-06 21:34:55,271 WARN  impl.MetricsSinkAdapter 
> (MetricsSinkAdapter.java:putMetricsImmediate(107)) - Collector has a full 
> queue and can't consume the given metrics.
> {code}
> The unit test should increase the default queue size to avoid intermediate 
> failure.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: [VOTE] Change by-laws on release votes: 5 days instead of 7

2014-06-24 Thread Jason Lowe

+1 (binding)

Jason

On 06/24/2014 03:53 AM, Arun C Murthy wrote:

Folks,

  As discussed, I'd like to call a vote on changing our by-laws to change 
release votes from 7 days to 5.

  I've attached the change to by-laws I'm proposing.

  Please vote, the vote will the usual period of 7 days.

thanks,
Arun



[main]$ svn diff
Index: author/src/documentation/content/xdocs/bylaws.xml
===
--- author/src/documentation/content/xdocs/bylaws.xml   (revision 1605015)
+++ author/src/documentation/content/xdocs/bylaws.xml   (working copy)
@@ -344,7 +344,16 @@
  Votes are open for a period of 7 days to allow all active
  voters time to consider the vote. Votes relating to code
  changes are not subject to a strict timetable but should be
-made as timely as possible.
+made as timely as possible.
+
+ 
+  Product Release - Vote Timeframe
+   Release votes, alone, run for a period of 5 days. All other
+ votes are subject to the above timeframe of 7 days.
+ 
+   
+   
+
 
 
  




[jira] [Created] (HADOOP-10739) Renaming a file into a directory containing the same filename results in a confusing I/O error

2014-06-23 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-10739:
---

 Summary: Renaming a file into a directory containing the same 
filename results in a confusing I/O error
 Key: HADOOP-10739
 URL: https://issues.apache.org/jira/browse/HADOOP-10739
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.4.0
Reporter: Jason Lowe


Renaming a file to another existing filename says "File
exists" but colliding with a file in a directory results in the cryptic
"Input/output error".



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10622) Shell.runCommand can deadlock

2014-05-20 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-10622:
---

 Summary: Shell.runCommand can deadlock
 Key: HADOOP-10622
 URL: https://issues.apache.org/jira/browse/HADOOP-10622
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Jason Lowe
Priority: Critical


Ran into a deadlock in Shell.runCommand.  Stacktrace details to follow.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10474) Move o.a.h.record to hadoop-streaming

2014-05-19 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-10474.
-

   Resolution: Fixed
Fix Version/s: (was: 2.5.0)
   3.0.0

I reverted HADOOP-10485 and HADOOP-10474 from branch-2.

> Move o.a.h.record to hadoop-streaming
> -
>
> Key: HADOOP-10474
> URL: https://issues.apache.org/jira/browse/HADOOP-10474
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 3.0.0
>
> Attachments: HADOOP-10474.000.patch, HADOOP-10474.001.patch, 
> HADOOP-10474.002.patch
>
>
> The classes in o.a.h.record have been deprecated for more than a year and a 
> half. They should be removed. As the first step, the jira moves all these 
> classes into the hadoop-streaming project, which is the only user of these 
> classes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Reopened] (HADOOP-10474) Move o.a.h.record to hadoop-streaming

2014-05-16 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reopened HADOOP-10474:
-


Reopening this as Hive is an important part of the Hadoop stack.  Arguably we 
shouldn't remove something that hasn't been deprecated for at least one full 
major release.  org.apache.hadoop.record.* wasn't deprecated in 1.x so it seems 
premature to remove it in 2.x, especially in a minor release of 2.x.

Recommend we revert this, at least in branch-2.

> Move o.a.h.record to hadoop-streaming
> -
>
> Key: HADOOP-10474
> URL: https://issues.apache.org/jira/browse/HADOOP-10474
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.5.0
>
> Attachments: HADOOP-10474.000.patch, HADOOP-10474.001.patch, 
> HADOOP-10474.002.patch
>
>
> The classes in o.a.h.record have been deprecated for more than a year and a 
> half. They should be removed. As the first step, the jira moves all these 
> classes into the hadoop-streaming project, which is the only user of these 
> classes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: [VOTE] Release Apache Hadoop 2.4.0

2014-04-07 Thread Jason Lowe

Here's my late +1, was just finishing up looking at the release.

- Verified signatures and digests
- Examined LICENSE file
- Installed binary distribution, ran some sample MapReduce jobs and 
examined logs and job history

- Built from source

Jason

On 04/07/2014 03:04 PM, Arun C Murthy wrote:

With 11 +1s (4 binding) and no -1s the vote passes. Thanks to everyone who 
tried out the release and passed their feedback along.

I'll send a note out once I actually get the bits out and the site updated etc.

thanks,
Arun

On Mar 31, 2014, at 2:22 AM, Arun C Murthy  wrote:


Folks,

I've created a release candidate (rc0) for hadoop-2.4.0 that I would like to 
get released.

The RC is available at: http://people.apache.org/~acmurthy/hadoop-2.4.0-rc0
The RC tag in svn is here: 
https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.4.0-rc0

The maven artifacts are available via repository.apache.org.

Please try the release and vote; the vote will run for the usual 7 days.

thanks,
Arun


--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/







[jira] [Resolved] (HADOOP-9344) Configuration.writeXml can warn about deprecated properties user did not set

2014-03-03 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-9344.


Resolution: Duplicate

Looks like this was fixed by HADOOP-10178.

> Configuration.writeXml can warn about deprecated properties user did not set
> 
>
> Key: HADOOP-9344
> URL: https://issues.apache.org/jira/browse/HADOOP-9344
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.0.3-alpha, 0.23.5
>    Reporter: Jason Lowe
>Assignee: Rushabh S Shah
>
> When the configuration is serialized it can emit warnings about deprecated 
> properties that the user did not specify.  Converting the config to XML 
> causes all the properties in the config to be processed for deprecation, and 
> after HADOOP-8167 setting a proper config property also causes the deprecated 
> forms to be set.  Processing all the keys in the config for deprecation 
> therefore can trigger warnings for keys that were never specified by the 
> user, leaving users confused as to how their code could be triggering these 
> warnings.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

