Re: [VOTE] Release Apache Hadoop 2.4.0

2014-04-04 Thread Eli Collins
+1 for another RC. Quite a few issues have been found (a handful marked
blocker) and this is only the first release candidate; the point of having
multiple release candidates is to iterate with another one that addresses
the major issues found in the previous one.

On Fri, Apr 4, 2014 at 5:06 PM, Gera Shegalov  wrote:
> I built the release from the rc tag, enabled timeline history service and
> ran a sleep job on a pseudo-distributed cluster.
>
> I encourage another rc, for 2.4.0 (non-binding)
>
> 1) Despite the discussion on YARN-1701, timeline AHS still sets
> yarn.timeline-service.generic-application-history.fs-history-store.uri to a
> location under ${hadoop.log.dir} that is meant for local file system, but
> uses it on HDFS by default.
>
> 2) Critical patch for WebHdfs/Hftp to fix the filesystem contract HDFS-6143
> is not included
>
> 3) Several patches that already proved themselves useful for diagnostics in
> production and have been available for some months are still not included.
> MAPREDUCE-5044/YARN-1515 is the most obvious example. Our users need to see
> where the task container JVM got stuck when it was timed out by AM.
>
> Thanks,
>
> Gera
>
>
>
>
> On Fri, Apr 4, 2014 at 3:51 PM, Azuryy  wrote:
>
>> Arun,
>>
>> Do you mean you will cut another RC for 2.4?
>>
>>
>> Sent from my iPhone5s
>>
>> > On April 5, 2014, at 3:50, "Arun C. Murthy"  wrote:
>> >
>> > Thanks for helping Tsuyoshi. Pls mark them as Blockers and set the
>> fix-version to 2.4.1.
>> >
>> > Thanks again.
>> >
>> > Arun
>> >
>> >
>> >> On Apr 3, 2014, at 11:38 PM, Tsuyoshi OZAWA 
>> wrote:
>> >>
>> >> Hi,
>> >>
>> >> Updated a test result log based on the result of 2.4.0-rc0:
>> >> https://gist.github.com/oza/9965197
>> >>
>> >> IMO, there are some blockers to be fixed:
>> >> * MAPREDUCE-5815(TestMRAppMaster failure)
>> >> * YARN-1872(TestDistributedShell failure)
>> >> * HDFS: TestSymlinkLocalFSFileSystem failure on Linux (I cannot find
>> >> JIRA about this failure)
>> >>
>> >> Now I'm checking the problem reported by Azuryy.
>> >>
>> >> Thanks,
>> >> - Tsuyoshi
>> >>
>> >>> On Fri, Apr 4, 2014 at 8:55 AM, Tsuyoshi OZAWA <
>> ozawa.tsuyo...@gmail.com> wrote:
>> >>> Hi,
>> >>>
>> >>> Ran tests and confirmed that some tests(TestSymlinkLocalFSFileSystem)
>> fail.
>> >>> The log of the test failure is as follows:
>> >>>
>> >>> https://gist.github.com/oza/9965197
>> >>>
>> >>> Should we fix or disable the feature?
>> >>>
>> >>> Thanks,
>> >>> - Tsuyoshi
>> >>>
>>  On Mon, Mar 31, 2014 at 6:22 PM, Arun C Murthy 
>> wrote:
>>  Folks,
>> 
>>  I've created a release candidate (rc0) for hadoop-2.4.0 that I would
>> like to get released.
>> 
>>  The RC is available at:
>> http://people.apache.org/~acmurthy/hadoop-2.4.0-rc0
>>  The RC tag in svn is here:
>> https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.4.0-rc0
>> 
>>  The maven artifacts are available via repository.apache.org.
>> 
>>  Please try the release and vote; the vote will run for the usual 7
>> days.
>> 
>>  thanks,
>>  Arun
>> 
>>  --
>>  Arun C. Murthy
>>  Hortonworks Inc.
>>  http://hortonworks.com/
>> 
>> 
>> 
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> - Tsuyoshi
>> >>
>> >>
>> >>
>> >> --
>> >> - Tsuyoshi
>> >
>>


Re: [VOTE] Release Apache Hadoop 2.4.0

2014-04-04 Thread Gera Shegalov
I built the release from the rc tag, enabled timeline history service and
ran a sleep job on a pseudo-distributed cluster.

I encourage another RC for 2.4.0 (non-binding).

1) Despite the discussion on YARN-1701, the timeline AHS still sets
yarn.timeline-service.generic-application-history.fs-history-store.uri to a
location under ${hadoop.log.dir}, which is meant for the local file system,
but uses that path on HDFS by default (see the override sketch after this
list).

2) The critical patch for WebHdfs/Hftp to fix the filesystem contract,
HDFS-6143, is not included.

3) Several patches that have already proved useful for diagnostics in
production and have been available for some months are still not included.
MAPREDUCE-5044/YARN-1515 is the most obvious example. Our users need to see
where the task container JVM got stuck when it was timed out by the AM.
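
For reference, a minimal yarn-site.xml override along the lines of what
point 1 implies; the property name is taken from the message above, while
the idea of overriding it this way and the HDFS location shown are purely
illustrative:

    <!-- Hypothetical override: point the generic history store at an explicit
         HDFS location instead of the ${hadoop.log.dir}-derived default.
         The path below is an example only. -->
    <property>
      <name>yarn.timeline-service.generic-application-history.fs-history-store.uri</name>
      <value>hdfs://namenode:8020/yarn/timeline/generic-history</value>
    </property>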

Thanks,

Gera




On Fri, Apr 4, 2014 at 3:51 PM, Azuryy  wrote:

> Arun,
>
> Do you mean you will cut another RC for 2.4?
>
>
> Sent from my iPhone5s
>
> > On April 5, 2014, at 3:50, "Arun C. Murthy"  wrote:
> >
> > Thanks for helping Tsuyoshi. Pls mark them as Blockers and set the
> fix-version to 2.4.1.
> >
> > Thanks again.
> >
> > Arun
> >
> >
> >> On Apr 3, 2014, at 11:38 PM, Tsuyoshi OZAWA 
> wrote:
> >>
> >> Hi,
> >>
> >> Updated a test result log based on the result of 2.4.0-rc0:
> >> https://gist.github.com/oza/9965197
> >>
> >> IMO, there are some blockers to be fixed:
> >> * MAPREDUCE-5815(TestMRAppMaster failure)
> >> * YARN-1872(TestDistributedShell failure)
> >> * HDFS: TestSymlinkLocalFSFileSystem failure on Linux (I cannot find
> >> JIRA about this failure)
> >>
> >> Now I'm checking the problem reported by Azuryy.
> >>
> >> Thanks,
> >> - Tsuyoshi
> >>
> >>> On Fri, Apr 4, 2014 at 8:55 AM, Tsuyoshi OZAWA <
> ozawa.tsuyo...@gmail.com> wrote:
> >>> Hi,
> >>>
> >>> Ran tests and confirmed that some tests(TestSymlinkLocalFSFileSystem)
> fail.
> >>> The log of the test failure is as follows:
> >>>
> >>> https://gist.github.com/oza/9965197
> >>>
> >>> Should we fix or disable the feature?
> >>>
> >>> Thanks,
> >>> - Tsuyoshi
> >>>
>  On Mon, Mar 31, 2014 at 6:22 PM, Arun C Murthy 
> wrote:
>  Folks,
> 
>  I've created a release candidate (rc0) for hadoop-2.4.0 that I would
> like to get released.
> 
>  The RC is available at:
> http://people.apache.org/~acmurthy/hadoop-2.4.0-rc0
>  The RC tag in svn is here:
> https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.4.0-rc0
> 
>  The maven artifacts are available via repository.apache.org.
> 
>  Please try the release and vote; the vote will run for the usual 7
> days.
> 
>  thanks,
>  Arun
> 
>  --
>  Arun C. Murthy
>  Hortonworks Inc.
>  http://hortonworks.com/
> 
> 
> 
> >>>
> >>>
> >>>
> >>> --
> >>> - Tsuyoshi
> >>
> >>
> >>
> >> --
> >> - Tsuyoshi
> >
>


Re: Plans of moving towards JDK7 in trunk

2014-04-04 Thread Alejandro Abdelnur
So, you want to compile hdfs/yarn/mapred clients (and hadoop-common and
hadoop-auth) with JDK6 and the rest with JDK7?


On Fri, Apr 4, 2014 at 3:15 PM, Haohui Mai  wrote:

> I'm referring to the later case. Indeed migrating JDK7 for branch-2 is more
> difficult.
>
> I think one reasonable approach is to put the hdfs / yarn clients into
> separate jars. The client-side jars can only use JDK6 APIs, so that
> downstream projects running on top of JDK6 continue to work.
>
> The HDFS/YARN/MR servers need to be run on top of JDK7, and we're free to
> use JDK7 APIs inside them. Given the fact that there're way more code in
> the server-side compared to the client-side, having the ability to use JDK7
> in the server-side only might still be a win.
>
> The downside I can think of is that it might complicate the effort of
> publishing maven jars, but this should be an one-time issue.
>
> ~Haohui
>
>
> On Fri, Apr 4, 2014 at 2:37 PM, Alejandro Abdelnur  >wrote:
>
> > Haohui,
> >
> > Is the idea to compile/test with JDK7 and recommend it for runtime and
> stop
> > there? Or to start using JDK7 API stuff as well? If the later is the
> case,
> > then backporting stuff to branch-2 may break and patches may have to be
> > refactored for JDK6. Given that branch-2 got GA status not so long ago, I
> > assume it will be active for a while.
> >
> > What are your thoughts on this regard?
> >
> > Thanks
> >
> >
> > On Fri, Apr 4, 2014 at 2:29 PM, Haohui Mai  wrote:
> >
> > > Hi,
> > >
> > > There have been multiple discussions on deprecating supports of JDK6
> and
> > > moving towards JDK7. It looks to me that the consensus is that now
> hadoop
> > > is ready to drop the support of JDK6 and to move towards JDK7. Based on
> > the
> > > consensus, I wonder whether it is a good time to start the migration.
> > >
> > > Here are my understandings of the current status:
> > >
> > > 1. There is no more public updates of JDK6 since Feb 2013. Users no
> > longer
> > > get fixes of security vulnerabilities through official public updates.
> > > 2. Hadoop core is stuck with out-of-date dependency unless moving
> towards
> > > JDK7. (see
> > > http://hadoop.6.n7.nabble.com/very-old-dependencies-td71486.html)
> > > The implementation can also benefit from it thanks to the new
> > > functionalities in JDK7.
> > > 3. The code is ready for JDK7. Cloudera and Hortonworks have successful
> > > stories of supporting Hadoop on JDK7.
> > >
> > >
> > > It seems that the real work of moving to JDK7 is minimal. We only need
> to
> > > (1) make sure the jenkins are running on top of JDK7, and (2) to update
> > the
> > > minimum required Java version from 6 to 7. Therefore I propose that
> let's
> > > move towards JDK7 in trunk in the short term.
> > >
> > > Your feedbacks are appreciated.
> > >
> > > Regards,
> > > Haohui
> > >
> > >
> >
> >
> >
> > --
> > Alejandro
> >
>
>



-- 
Alejandro


Re: Plans of moving towards JDK7 in trunk

2014-04-04 Thread Haohui Mai
bq. It might not be as clear cut...

Totally agree. I think the key is that we can do the work incrementally:
introduce the JDK7 dependency on the server side only. To do this we need to
split the client-side code into separate jars. I've already proposed creating
an hdfs-client jar on the hdfs-dev mailing list.

bq.  I would have thought it could be easily achieved by marking certain
project poms with source/target 1.6 in their maven compiler plugin
configuration while upgrading the default setting to 1.7. Do you anticipate
more issues?

Correct me if I'm wrong, but I think that's enough. The work should be
minimal.
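
As a rough sketch of the per-module override being discussed (the module
split shown is illustrative, not a concrete proposal from this thread), the
parent pom could default the maven-compiler-plugin to 1.7 while a
client-facing module pins 1.6:

    <!-- Parent pom: default everything to Java 7 -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>1.7</source>
        <target>1.7</target>
      </configuration>
    </plugin>

    <!-- Client-side module pom (e.g. a future hdfs-client): stay on Java 6 -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>1.6</source>
        <target>1.6</target>
      </configuration>
    </plugin>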

~Haohui

On Fri, Apr 4, 2014 at 3:43 PM, Sangjin Lee  wrote:

> Please don't forget the mac os build on JDK 7. :)
>
>
> On Fri, Apr 4, 2014 at 3:15 PM, Haohui Mai  wrote:
>
> > I'm referring to the later case. Indeed migrating JDK7 for branch-2 is
> more
> > difficult.
> >
> > I think one reasonable approach is to put the hdfs / yarn clients into
> > separate jars. The client-side jars can only use JDK6 APIs, so that
> > downstream projects running on top of JDK6 continue to work.
> >
>
> It might not be as clear cut. For clients to run clean on JDK 6, not only
> the client projects/artifacts but also any of their dependencies must be
> free of JDK 7 code. And this obviously includes things like hadoop-common
> (or any downstream dependencies for that matter).
>
>
> >
> > The HDFS/YARN/MR servers need to be run on top of JDK7, and we're free to
> > use JDK7 APIs inside them. Given the fact that there're way more code in
> > the server-side compared to the client-side, having the ability to use
> JDK7
> > in the server-side only might still be a win.
> >
> > The downside I can think of is that it might complicate the effort of
> > publishing maven jars, but this should be an one-time issue.
> >
>
> Could you elaborate on why it would complicate maven jar publication?
> Perhaps I'm over-simplifying things, but I would have thought it could be
> easily achieved by marking certain project poms with source/target 1.6 in
> their maven compiler plugin configuration while upgrading the default
> setting to 1.7. Do you anticipate more issues?
>
>
> >
> > ~Haohui
> >
> >
> > On Fri, Apr 4, 2014 at 2:37 PM, Alejandro Abdelnur  > >wrote:
> >
> > > Haohui,
> > >
> > > Is the idea to compile/test with JDK7 and recommend it for runtime and
> > stop
> > > there? Or to start using JDK7 API stuff as well? If the later is the
> > case,
> > > then backporting stuff to branch-2 may break and patches may have to be
> > > refactored for JDK6. Given that branch-2 got GA status not so long
> ago, I
> > > assume it will be active for a while.
> > >
> > > What are your thoughts on this regard?
> > >
> > > Thanks
> > >
> > >
> > > On Fri, Apr 4, 2014 at 2:29 PM, Haohui Mai 
> wrote:
> > >
> > > > Hi,
> > > >
> > > > There have been multiple discussions on deprecating supports of JDK6
> > and
> > > > moving towards JDK7. It looks to me that the consensus is that now
> > hadoop
> > > > is ready to drop the support of JDK6 and to move towards JDK7. Based
> on
> > > the
> > > > consensus, I wonder whether it is a good time to start the migration.
> > > >
> > > > Here are my understandings of the current status:
> > > >
> > > > 1. There is no more public updates of JDK6 since Feb 2013. Users no
> > > longer
> > > > get fixes of security vulnerabilities through official public
> updates.
> > > > 2. Hadoop core is stuck with out-of-date dependency unless moving
> > towards
> > > > JDK7. (see
> > > > http://hadoop.6.n7.nabble.com/very-old-dependencies-td71486.html)
> > > > The implementation can also benefit from it thanks to the new
> > > > functionalities in JDK7.
> > > > 3. The code is ready for JDK7. Cloudera and Hortonworks have
> successful
> > > > stories of supporting Hadoop on JDK7.
> > > >
> > > >
> > > > It seems that the real work of moving to JDK7 is minimal. We only
> need
> > to
> > > > (1) make sure the jenkins are running on top of JDK7, and (2) to
> update
> > > the
> > > > minimum required Java version from 6 to 7. Therefore I propose that
> > let's
> > > > move towards JDK7 in trunk in the short term.
> > > >
> > > > Your feedbacks are appreciated.
> > > >
> > > > Regards,
> > > > Haohui
> > > >
> > > >
> > >
> >

Re: Plans of moving towards JDK7 in trunk

2014-04-04 Thread Sangjin Lee
Please don't forget the Mac OS build on JDK 7. :)


On Fri, Apr 4, 2014 at 3:15 PM, Haohui Mai  wrote:

> I'm referring to the later case. Indeed migrating JDK7 for branch-2 is more
> difficult.
>
> I think one reasonable approach is to put the hdfs / yarn clients into
> separate jars. The client-side jars can only use JDK6 APIs, so that
> downstream projects running on top of JDK6 continue to work.
>

It might not be as clear cut. For clients to run clean on JDK 6, not only
the client projects/artifacts but also any of their dependencies must be
free of JDK 7 code. And this obviously includes things like hadoop-common
(or any downstream dependencies for that matter).
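
One way to mechanically enforce that kind of constraint (not something
proposed in this thread, just an illustration) is the Animal Sniffer Maven
plugin, which fails a module's build if its compiled classes reference APIs
outside a given JDK signature, so at least the client artifacts themselves
stay JDK6-clean; the version numbers below are illustrative:

    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>animal-sniffer-maven-plugin</artifactId>
      <version>1.11</version>
      <configuration>
        <signature>
          <groupId>org.codehaus.mojo.signature</groupId>
          <artifactId>java16</artifactId>
          <version>1.1</version>
        </signature>
      </configuration>
      <executions>
        <execution>
          <id>check-jdk6-apis</id>
          <phase>verify</phase>
          <goals>
            <goal>check</goal>
          </goals>
        </execution>
      </executions>
    </plugin>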


>
> The HDFS/YARN/MR servers need to be run on top of JDK7, and we're free to
> use JDK7 APIs inside them. Given the fact that there're way more code in
> the server-side compared to the client-side, having the ability to use JDK7
> in the server-side only might still be a win.
>
> The downside I can think of is that it might complicate the effort of
> publishing maven jars, but this should be an one-time issue.
>

Could you elaborate on why it would complicate maven jar publication?
Perhaps I'm over-simplifying things, but I would have thought it could be
easily achieved by marking certain project poms with source/target 1.6 in
their maven compiler plugin configuration while upgrading the default
setting to 1.7. Do you anticipate more issues?


>
> ~Haohui
>
>
> On Fri, Apr 4, 2014 at 2:37 PM, Alejandro Abdelnur  >wrote:
>
> > Haohui,
> >
> > Is the idea to compile/test with JDK7 and recommend it for runtime and
> stop
> > there? Or to start using JDK7 API stuff as well? If the later is the
> case,
> > then backporting stuff to branch-2 may break and patches may have to be
> > refactored for JDK6. Given that branch-2 got GA status not so long ago, I
> > assume it will be active for a while.
> >
> > What are your thoughts on this regard?
> >
> > Thanks
> >
> >
> > On Fri, Apr 4, 2014 at 2:29 PM, Haohui Mai  wrote:
> >
> > > Hi,
> > >
> > > There have been multiple discussions on deprecating supports of JDK6
> and
> > > moving towards JDK7. It looks to me that the consensus is that now
> hadoop
> > > is ready to drop the support of JDK6 and to move towards JDK7. Based on
> > the
> > > consensus, I wonder whether it is a good time to start the migration.
> > >
> > > Here are my understandings of the current status:
> > >
> > > 1. There is no more public updates of JDK6 since Feb 2013. Users no
> > longer
> > > get fixes of security vulnerabilities through official public updates.
> > > 2. Hadoop core is stuck with out-of-date dependency unless moving
> towards
> > > JDK7. (see
> > > http://hadoop.6.n7.nabble.com/very-old-dependencies-td71486.html)
> > > The implementation can also benefit from it thanks to the new
> > > functionalities in JDK7.
> > > 3. The code is ready for JDK7. Cloudera and Hortonworks have successful
> > > stories of supporting Hadoop on JDK7.
> > >
> > >
> > > It seems that the real work of moving to JDK7 is minimal. We only need
> to
> > > (1) make sure the jenkins are running on top of JDK7, and (2) to update
> > the
> > > minimum required Java version from 6 to 7. Therefore I propose that
> let's
> > > move towards JDK7 in trunk in the short term.
> > >
> > > Your feedbacks are appreciated.
> > >
> > > Regards,
> > > Haohui
> > >
> > >
> >
> >
> >
> > --
> > Alejandro
> >
>
>


Re: Plans of moving towards JDK7 in trunk

2014-04-04 Thread Haohui Mai
I'm referring to the latter case. Indeed, migrating to JDK7 for branch-2 is
more difficult.

I think one reasonable approach is to put the hdfs / yarn clients into
separate jars. The client-side jars can only use JDK6 APIs, so that
downstream projects running on top of JDK6 continue to work.

The HDFS/YARN/MR servers would need to run on top of JDK7, and we're free to
use JDK7 APIs inside them. Given that there is far more code on the server
side than on the client side, having the ability to use JDK7 on the server
side only might still be a win.

The downside I can think of is that it might complicate the effort of
publishing Maven jars, but this should be a one-time issue.

~Haohui


On Fri, Apr 4, 2014 at 2:37 PM, Alejandro Abdelnur wrote:

> Haohui,
>
> Is the idea to compile/test with JDK7 and recommend it for runtime and stop
> there? Or to start using JDK7 API stuff as well? If the later is the case,
> then backporting stuff to branch-2 may break and patches may have to be
> refactored for JDK6. Given that branch-2 got GA status not so long ago, I
> assume it will be active for a while.
>
> What are your thoughts on this regard?
>
> Thanks
>
>
> On Fri, Apr 4, 2014 at 2:29 PM, Haohui Mai  wrote:
>
> > Hi,
> >
> > There have been multiple discussions on deprecating supports of JDK6 and
> > moving towards JDK7. It looks to me that the consensus is that now hadoop
> > is ready to drop the support of JDK6 and to move towards JDK7. Based on
> the
> > consensus, I wonder whether it is a good time to start the migration.
> >
> > Here are my understandings of the current status:
> >
> > 1. There is no more public updates of JDK6 since Feb 2013. Users no
> longer
> > get fixes of security vulnerabilities through official public updates.
> > 2. Hadoop core is stuck with out-of-date dependency unless moving towards
> > JDK7. (see
> > http://hadoop.6.n7.nabble.com/very-old-dependencies-td71486.html)
> > The implementation can also benefit from it thanks to the new
> > functionalities in JDK7.
> > 3. The code is ready for JDK7. Cloudera and Hortonworks have successful
> > stories of supporting Hadoop on JDK7.
> >
> >
> > It seems that the real work of moving to JDK7 is minimal. We only need to
> > (1) make sure the jenkins are running on top of JDK7, and (2) to update
> the
> > minimum required Java version from 6 to 7. Therefore I propose that let's
> > move towards JDK7 in trunk in the short term.
> >
> > Your feedbacks are appreciated.
> >
> > Regards,
> > Haohui
> >
> >
>
>
>
> --
> Alejandro
>



Re: Plans of moving towards JDK7 in trunk

2014-04-04 Thread Alejandro Abdelnur
Haohui,

Is the idea to compile/test with JDK7 and recommend it for runtime, and stop
there? Or to start using JDK7 APIs as well? If the latter is the case, then
backporting changes to branch-2 may break and patches may have to be
refactored for JDK6. Given that branch-2 got GA status not so long ago, I
assume it will be active for a while.

What are your thoughts in this regard?

Thanks


On Fri, Apr 4, 2014 at 2:29 PM, Haohui Mai  wrote:

> Hi,
>
> There have been multiple discussions on deprecating supports of JDK6 and
> moving towards JDK7. It looks to me that the consensus is that now hadoop
> is ready to drop the support of JDK6 and to move towards JDK7. Based on the
> consensus, I wonder whether it is a good time to start the migration.
>
> Here are my understandings of the current status:
>
> 1. There is no more public updates of JDK6 since Feb 2013. Users no longer
> get fixes of security vulnerabilities through official public updates.
> 2. Hadoop core is stuck with out-of-date dependency unless moving towards
> JDK7. (see
> http://hadoop.6.n7.nabble.com/very-old-dependencies-td71486.html)
> The implementation can also benefit from it thanks to the new
> functionalities in JDK7.
> 3. The code is ready for JDK7. Cloudera and Hortonworks have successful
> stories of supporting Hadoop on JDK7.
>
>
> It seems that the real work of moving to JDK7 is minimal. We only need to
> (1) make sure the jenkins are running on top of JDK7, and (2) to update the
> minimum required Java version from 6 to 7. Therefore I propose that let's
> move towards JDK7 in trunk in the short term.
>
> Your feedbacks are appreciated.
>
> Regards,
> Haohui
>
>



-- 
Alejandro


Plans of moving towards JDK7 in trunk

2014-04-04 Thread Haohui Mai
Hi,

There have been multiple discussions on deprecating support of JDK6 and
moving towards JDK7. It looks to me that the consensus is that Hadoop is now
ready to drop support of JDK6 and move towards JDK7. Based on that
consensus, I wonder whether it is a good time to start the migration.

Here are my understandings of the current status:

1. There have been no public updates of JDK6 since Feb 2013. Users no longer
get fixes for security vulnerabilities through official public updates.
2. Hadoop core is stuck with out-of-date dependencies unless it moves towards
JDK7. (see http://hadoop.6.n7.nabble.com/very-old-dependencies-td71486.html)
The implementation can also benefit from the new language and library
features in JDK7 (a brief illustration follows this list).
3. The code is ready for JDK7. Cloudera and Hortonworks have successful
stories of supporting Hadoop on JDK7.
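
A small, self-contained sketch of the kind of JDK7-only constructs point 2
alludes to (try-with-resources, multi-catch, the diamond operator, and
java.nio.file); this is illustrative only and not code from any Hadoop
module:

    import java.io.IOException;
    import java.nio.file.DirectoryIteratorException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.HashMap;
    import java.util.Map;

    public class Jdk7Sketch {
      public static void main(String[] args) {
        Map<String, Long> sizes = new HashMap<>();        // diamond operator
        Path dir = Paths.get(args.length > 0 ? args[0] : ".");
        // try-with-resources closes the directory stream automatically
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
          for (Path entry : stream) {
            sizes.put(entry.getFileName().toString(), Files.size(entry));
          }
        } catch (DirectoryIteratorException | IOException e) {  // multi-catch
          System.err.println("Failed to list " + dir + ": " + e);
        }
        System.out.println(sizes);
      }
    }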


It seems that the real work of moving to JDK7 is minimal. We only need to
(1) make sure the Jenkins builds are running on top of JDK7, and (2) update
the minimum required Java version from 6 to 7. Therefore I propose that we
move towards JDK7 in trunk in the short term.

Your feedback is appreciated.

Regards,
Haohui



Re: [VOTE] Release Apache Hadoop 2.4.0

2014-04-04 Thread Arun C. Murthy
Thanks for helping, Tsuyoshi. Pls mark them as Blockers and set the
fix-version to 2.4.1.

Thanks again.

Arun


> On Apr 3, 2014, at 11:38 PM, Tsuyoshi OZAWA  wrote:
> 
> Hi,
> 
> Updated a test result log based on the result of 2.4.0-rc0:
> https://gist.github.com/oza/9965197
> 
> IMO, there are some blockers to be fixed:
> * MAPREDUCE-5815(TestMRAppMaster failure)
> * YARN-1872(TestDistributedShell failure)
> * HDFS: TestSymlinkLocalFSFileSystem failure on Linux (I cannot find
> JIRA about this failure)
> 
> Now I'm checking the problem reported by Azuryy.
> 
> Thanks,
> - Tsuyoshi
> 
>> On Fri, Apr 4, 2014 at 8:55 AM, Tsuyoshi OZAWA  
>> wrote:
>> Hi,
>> 
>> Ran tests and confirmed that some tests(TestSymlinkLocalFSFileSystem) fail.
>> The log of the test failure is as follows:
>> 
>> https://gist.github.com/oza/9965197
>> 
>> Should we fix or disable the feature?
>> 
>> Thanks,
>> - Tsuyoshi
>> 
>>> On Mon, Mar 31, 2014 at 6:22 PM, Arun C Murthy  wrote:
>>> Folks,
>>> 
>>> I've created a release candidate (rc0) for hadoop-2.4.0 that I would like 
>>> to get released.
>>> 
>>> The RC is available at: http://people.apache.org/~acmurthy/hadoop-2.4.0-rc0
>>> The RC tag in svn is here: 
>>> https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.4.0-rc0
>>> 
>>> The maven artifacts are available via repository.apache.org.
>>> 
>>> Please try the release and vote; the vote will run for the usual 7 days.
>>> 
>>> thanks,
>>> Arun
>>> 
>>> --
>>> Arun C. Murthy
>>> Hortonworks Inc.
>>> http://hortonworks.com/
>>> 
>>> 
>>> 
>> 
>> 
>> 
>> --
>> - Tsuyoshi
> 
> 
> 
> -- 
> - Tsuyoshi



[jira] [Created] (HADOOP-10463) Bring RawLocalFileSystem test coverage to 100%

2014-04-04 Thread jay vyas (JIRA)
jay vyas created HADOOP-10463:
-

 Summary: Bring RawLocalFileSystem test coverage to 100%
 Key: HADOOP-10463
 URL: https://issues.apache.org/jira/browse/HADOOP-10463
 Project: Hadoop Common
  Issue Type: Test
  Components: fs
Reporter: jay vyas


RawLocalFileSystem coverage is at about 80% (measured with Cobertura) at the
moment.

A few notable untested code paths are:

* primitiveMkdir
* markSupported
* int read() 

Let's get it as close as possible to 100% coverage. In the process of
analyzing existing abstract tests which exercise RawLocalFileSystem, we will
also pave the way for HADOOP-10461.
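
A rough sketch of the kind of targeted test this could add, assuming a plain
JUnit 4 test under hadoop-common; the class name, temp-file location, and
assertions are illustrative and not part of this JIRA:

    import static org.junit.Assert.assertEquals;

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.RawLocalFileSystem;
    import org.junit.Test;

    public class TestRawLocalReadPaths {
      @Test
      public void testSingleByteRead() throws Exception {
        RawLocalFileSystem fs = new RawLocalFileSystem();
        fs.initialize(URI.create("file:///"), new Configuration());
        Path p = new Path(System.getProperty("java.io.tmpdir"),
            "raw-local-read-test.txt");
        try {
          FSDataOutputStream out = fs.create(p, true);
          out.writeBytes("hi");
          out.close();
          FSDataInputStream in = fs.open(p);
          // exercises the single-byte int read() path called out above
          assertEquals('h', in.read());
          assertEquals('i', in.read());
          assertEquals(-1, in.read());
          // markSupported() depends on the buffering wrapper; a real test
          // should pin down and assert the expected value rather than assume it
          in.markSupported();
          in.close();
        } finally {
          fs.delete(p, true);
          fs.close();
        }
      }
    }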



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Build failed in Jenkins: Hadoop-Common-trunk #1089

2014-04-04 Thread Apache Jenkins Server
See 

Changes:

[kihwal] Fixing an error in CHANGES.txt

[jeagles] HADOOP-10454. Provide FileContext version of har file system. (Kihwal 
Lee via jeagles)

[zjshen] MAPREDUCE-5818. Added "hsadmin" command into mapred.cmd. Contributed 
by Jian He.

[wheat9] HDFS-6190. Minor textual fixes in DFSClient. Contributed by Charles 
Lamb.

--
[...truncated 62545 lines...]
Adding reference: maven.local.repository
[DEBUG] Initialize Maven Ant Tasks
parsing buildfile 
jar:file:/home/jenkins/.m2/repository/org/apache/maven/plugins/maven-antrun-plugin/1.7/maven-antrun-plugin-1.7.jar!/org/apache/maven/ant/tasks/antlib.xml
 with URI = 
jar:file:/home/jenkins/.m2/repository/org/apache/maven/plugins/maven-antrun-plugin/1.7/maven-antrun-plugin-1.7.jar!/org/apache/maven/ant/tasks/antlib.xml
 from a zip file
parsing buildfile 
jar:file:/home/jenkins/.m2/repository/org/apache/ant/ant/1.8.2/ant-1.8.2.jar!/org/apache/tools/ant/antlib.xml
 with URI = 
jar:file:/home/jenkins/.m2/repository/org/apache/ant/ant/1.8.2/ant-1.8.2.jar!/org/apache/tools/ant/antlib.xml
 from a zip file
Class org.apache.maven.ant.tasks.AttachArtifactTask loaded from parent loader 
(parentFirst)
 +Datatype attachartifact org.apache.maven.ant.tasks.AttachArtifactTask
Class org.apache.maven.ant.tasks.DependencyFilesetsTask loaded from parent 
loader (parentFirst)
 +Datatype dependencyfilesets org.apache.maven.ant.tasks.DependencyFilesetsTask
Setting project property: test.build.dir -> 

Setting project property: test.exclude.pattern -> _
Setting project property: hadoop.assemblies.version -> 3.0.0-SNAPSHOT
Setting project property: test.exclude -> _
Setting project property: distMgmtSnapshotsId -> apache.snapshots.https
Setting project property: project.build.sourceEncoding -> UTF-8
Setting project property: java.security.egd -> file:///dev/urandom
Setting project property: distMgmtSnapshotsUrl -> 
https://repository.apache.org/content/repositories/snapshots
Setting project property: distMgmtStagingUrl -> 
https://repository.apache.org/service/local/staging/deploy/maven2
Setting project property: avro.version -> 1.7.4
Setting project property: test.build.data -> 

Setting project property: commons-daemon.version -> 1.0.13
Setting project property: hadoop.common.build.dir -> 

Setting project property: testsThreadCount -> 4
Setting project property: maven.test.redirectTestOutputToFile -> true
Setting project property: jdiff.version -> 1.0.9
Setting project property: build.platform -> Linux-i386-32
Setting project property: project.reporting.outputEncoding -> UTF-8
Setting project property: distMgmtStagingName -> Apache Release Distribution 
Repository
Setting project property: protobuf.version -> 2.5.0
Setting project property: failIfNoTests -> false
Setting project property: protoc.path -> ${env.HADOOP_PROTOC_PATH}
Setting project property: jersey.version -> 1.9
Setting project property: distMgmtStagingId -> apache.staging.https
Setting project property: distMgmtSnapshotsName -> Apache Development Snapshot 
Repository
Setting project property: ant.file -> 

[DEBUG] Setting properties with prefix: 
Setting project property: project.groupId -> org.apache.hadoop
Setting project property: project.artifactId -> hadoop-common-project
Setting project property: project.name -> Apache Hadoop Common Project
Setting project property: project.description -> Apache Hadoop Common Project
Setting project property: project.version -> 3.0.0-SNAPSHOT
Setting project property: project.packaging -> pom
Setting project property: project.build.directory -> 

Setting project property: project.build.outputDirectory -> 

Setting project property: project.build.testOutputDirectory -> 

Setting project property: project.build.sourceDirectory -> 

Setting project property: project.build.testSourceDirectory -> 

Setting project property: localRepository ->id: local
  url: file:///home/jenkins/.m2/repository/
   layout: none
Setting project property: settings.localRepository -> 
/home/jenkins/.m2/