RE: Plans of moving towards JDK7 in trunk

2014-04-08 Thread Ottenheimer, Davi
 From: Eli Collins [mailto:e...@cloudera.com]
 Sent: Monday, April 07, 2014 11:54 AM
 
 
 IMO we should not drop support for Java 6 in a minor update of a stable
 release (v2).  I don't think the larger Hadoop user base would find it
 acceptable that upgrading to a minor update caused their systems to stop
 working because they didn't upgrade Java. There are people still getting
 support for Java 6. ...
 
 Thanks,
 Eli

Hi Eli, 

Technically you are correct that those with extended support get critical 
security fixes for Java 6 until the end of 2016. I am curious whether many of 
those are in the Hadoop user base. Do you know? My guess is the vast majority 
fall under Oracle's official public end of life, which passed over 12 months 
ago. Even Premier support ended Dec 2013:

http://www.oracle.com/technetwork/java/eol-135779.html

The end of Java 6 support carries significant risk. It has to be considered in 
terms of serious security vulnerabilities such as CVE-2013-2465, which has a 
CVSS score of 10.0.

http://www.cvedetails.com/cve/CVE-2013-2465/

Since you mentioned "caused their systems to stop working" as an example of 
what would be a concern to Hadoop users, please note the CVE-2013-2465 
availability impact:

Complete (There is a total shutdown of the affected resource. The attacker can 
render the resource completely unavailable.)

This vulnerability was patched in Java 6 Update 51, but only after end of life. 
Apple pushed out the update specifically because of this vulnerability 
(http://support.apple.com/kb/HT5717), as did some other vendors privately, but 
for the majority of people, using Java 6 means carrying a ticking time bomb.

Allowing Java 6 to stay should be weighed as accepting that entire risk 
posture.

Davi


Re: Plans of moving towards JDK7 in trunk

2014-04-08 Thread Sandy Ryza
+1 for maintaining Java 6 support in branch-2.

Hadoop continuing to support Java 6 is not an endorsement of Java 6.  It's
an acknowledgement that many users of Hadoop 2 have Java 6 embedded in
their stack, and that upgrading is costly for some users and simply not an
option for others.  If a similar vulnerability were to be discovered in a
recent version of RHEL, I don't think it would make sense for Hadoop to
drop that version as a supported platform.

Assuming that we want to maintain Java 6 compatibility in branch-2, it
seems to me that we should do the same in trunk until we start seriously
planning a release of Hadoop 3.  Since we released 2.2 GA, trunk has mainly
been used as a staging area for changes that will go into branch-2.  The
larger the divergence between trunk and branch-2, the higher the overhead
for developers writing patches that need to go into both.  Eventually we'll
need to stomach this, but is there an advantage to doing so while Hadoop 3
is still remote?

-Sandy

On Tue, Apr 8, 2014 at 2:00 AM, Ottenheimer, Davi
davi.ottenhei...@emc.com wrote:

  From: Eli Collins [mailto:e...@cloudera.com]
  Sent: Monday, April 07, 2014 11:54 AM
 
 
  IMO we should not drop support for Java 6 in a minor update of a stable
  release (v2).  I don't think the larger Hadoop user base would find it
  acceptable that upgrading to a minor update caused their systems to stop
  working because they didn't upgrade Java. There are people still getting
  support for Java 6. ...
 
  Thanks,
  Eli

 Hi Eli,

 Technically you are correct that those with extended support get critical
 security fixes for Java 6 until the end of 2016. I am curious whether many of
 those are in the Hadoop user base. Do you know? My guess is the vast
 majority fall under Oracle's official public end of life, which passed over 12
 months ago. Even Premier support ended Dec 2013:

 http://www.oracle.com/technetwork/java/eol-135779.html

 The end of Java 6 support carries significant risk. It has to be considered in
 terms of serious security vulnerabilities such as CVE-2013-2465, which has a
 CVSS score of 10.0.

 http://www.cvedetails.com/cve/CVE-2013-2465/

 Since you mentioned "caused their systems to stop working" as an example of
 what would be a concern to Hadoop users, please note the CVE-2013-2465
 availability impact:

 Complete (There is a total shutdown of the affected resource. The
 attacker can render the resource completely unavailable.)

 This vulnerability was patched in Java 6 Update 51, but only after end of
 life. Apple pushed out the update specifically because of this vulnerability
 (http://support.apple.com/kb/HT5717), as did some other vendors privately,
 but for the majority of people, using Java 6 means carrying a ticking time
 bomb.

 Allowing Java 6 to stay should be weighed as accepting that entire risk
 posture.

 Davi



Build failed in Jenkins: Hadoop-Common-trunk #1093

2014-04-08 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-trunk/1093/changes

Changes:

[wheat9] HADOOP-10468. TestMetricsSystemImpl.testMultiThreadedPublish fails 
intermittently. Contributed by Haohui Mai.

[wheat9] HDFS-6143. WebHdfsFileSystem open should throw FileNotFoundException 
for non-existing paths. Contributed by Gera Shegalov.

[cnauroth] HDFS-6198. DataNode rolling upgrade does not correctly identify 
current block pool directory and replace with trash on Windows. Contributed by 
Chris Nauroth.

[wheat9] HDFS-6180. Dead node count / listing is very broken in JMX and old 
GUI. Contributed by Haohui Mai.

[cnauroth] HDFS-6197. Rolling upgrade rollback on Windows can fail attempting 
to rename edit log segment files to a destination that already exists. 
Contributed by Chris Nauroth.

[brandonli] HDFS-6181. Fix the wrong property names in NFS user guide. 
Contributed by Brandon Li

[kihwal] HDFS-6191. Disable quota checks when replaying edit log.

[szetszwo] HADOOP-10466. Lower the log level in UserGroupInformation.  
Contributed by Nicolas Liochon

--
[...truncated 62626 lines...]

Re: [VOTE] Release Apache Hadoop 2.4.0

2014-04-08 Thread Steve Loughran
another belated +1 binding

-set mvn to pull in the 2.4.0 artifacts

-rebuilt my Slider YARN code. The only compile failure was in our custom
AmFilterInitializer, which stopped compiling because YARN-1553 dropped a
static isSecure() method. I switched to inlining
WebAppUtils.getHttpSchemePrefix(conf) to create code that still compiles and
runs against 2.3.0 clusters as well as 2.4.0 (see the sketch below). This
filter isn't actually used right now -and may get dropped; I just patched the
code to handle it. YARN proxy filtering is pretty low-level code that most
people won't go near.

-successful test runs against YARN clusters on VMs, bringing up HBase
clusters:

  - 2.4.0-based CentOS VM, Java 7 (HDP 2.1 sandbox)
  - branch-2 Ubuntu VM running kerberized on Java 8
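
The inlined logic was roughly the following (a hedged sketch, not the actual
Slider patch -- yarn.http.policy is the standard YARN property key, but the
class and method names here are illustrative):

  import org.apache.hadoop.conf.Configuration;

  public final class SchemePrefix {
    // Read the YARN HTTP policy directly instead of calling a helper method
    // that only exists in some Hadoop releases, so the same code compiles
    // and runs against both 2.3.0 and 2.4.0.
    static String httpSchemePrefix(Configuration conf) {
      String policy = conf.get("yarn.http.policy", "HTTP_ONLY");
      return "HTTPS_ONLY".equals(policy) ? "https://" : "http://";
    }
  }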

Steve



Re: Plans of moving towards JDK7 in trunk

2014-04-08 Thread Eli Collins
On Tue, Apr 8, 2014 at 2:00 AM, Ottenheimer, Davi
davi.ottenhei...@emc.com wrote:
 From: Eli Collins [mailto:e...@cloudera.com]
 Sent: Monday, April 07, 2014 11:54 AM


 IMO we should not drop support for Java 6 in a minor update of a stable
 release (v2).  I don't think the larger Hadoop user base would find it
 acceptable that upgrading to a minor update caused their systems to stop
 working because they didn't upgrade Java. There are people still getting
 support for Java 6. ...

 Thanks,
 Eli

 Hi Eli,

 Technically you are correct that those with extended support get critical security 
 fixes for Java 6 until the end of 2016. I am curious whether many of those are in 
 the Hadoop user base. Do you know? My guess is the vast majority fall under 
 Oracle's official public end of life, which passed over 12 months ago. Even 
 Premier support ended Dec 2013:

 http://www.oracle.com/technetwork/java/eol-135779.html

 The end of Java 6 support carries significant risk. It has to be considered in 
 terms of serious security vulnerabilities such as CVE-2013-2465, which has a 
 CVSS score of 10.0.

 http://www.cvedetails.com/cve/CVE-2013-2465/

 Since you mentioned "caused their systems to stop working" as an example of 
 what would be a concern to Hadoop users, please note the CVE-2013-2465 
 availability impact:

 Complete (There is a total shutdown of the affected resource. The attacker 
 can render the resource completely unavailable.)

 This vulnerability was patched in Java 6 Update 51, but only after end of life. 
 Apple pushed out the update specifically because of this vulnerability 
 (http://support.apple.com/kb/HT5717), as did some other vendors privately, but 
 for the majority of people, using Java 6 means carrying a ticking time bomb.

 Allowing Java 6 to stay should be weighed as accepting that entire risk 
 posture.


There are some who get extended support, but I suspect many just have
an if-it's-not-broke mentality when it comes to production deployments.
The current code supports both java6 and java7 and so allows these
people to remain compatible, while enabling others to upgrade to the
java7 runtime. This seems like the right compromise for a stable
release series. Again, it absolutely makes sense for trunk (i.e. v3) to
require java7 or greater.


Re: Plans of moving towards JDK7 in trunk

2014-04-08 Thread Raymie Stata
Is there broad consensus that, by end of 3Q2014 at the latest, the
average contributor to Hadoop should be free to use Java7 features?
And start pulling in libraries that have a Java7 dependency?  And
start doing the janitorial work of taking advantage of the Java7
APIs?  Or do we think that the bulk of Hadoop work will be done
against Java6 APIs (and avoiding Java7-dependent libraries) through
the end of the year?

If the consensus is that we introduce Java7 into the bulk of Hadoop
coding, what's the plan for getting there?  The answer can't be "right
now, in trunk."  Even if we agreed to start allowing Java7
dependencies into trunk, as a practical matter this isn't enough.
Right now, if I'm a random Hadoop contributor, I'd be stupid to
contribute to trunk: I know that any stable release in the near term
will be from branch2, so if I want a prayer of seeing my change in a
stable release, I'd better contribute to branch2.

If we want a path to allowing Java7 dependencies by Q4, then we need
one of the following:

1) branch3 plan: The major Hadoop vendors (you know who you are)
commit to shipping a v3 of Hadoop in Q4 that allows Java7
dependencies and show signs of living up to that commitment (e.g., a
branch3 is created sometime soon).  This puts us all on a path towards
a real release of Hadoop that allows Java7 dependencies.

2) branch2 plan: deprecate Java6 as a runtime environment now,
publicly declare a time frame (e.g., 4Q2014) when _future development_
stops supporting Java6 runtime, and work with our customers in the
meantime to get them off a crazy-old version of Java (that's what
we're doing right now).

I don't see another path to allowing Java7 dependencies.  In the
current state of indecision, the smart programmer would be assuming no
Java7 dependencies into 2015.

On the one hand, I don't see the branch3 plan actually happening.
This is a big decision involving marketing, engineering, customer
support.  Plus it creates a problem for sales: Come summertime,
they'll have a hard time selling 2.x-based releases because they've
pre-announced support for 3.x.  It's just not going to happen.

On the other hand, I don't see the problem with the branch2 plan.  The
branch2 plan also requires the commitment from the major vendors, but
this decision is not nearly as galactic.  By the time 3Q2014 comes
along, this problem will be very rarified.  Also, don't forget that it
typically takes a customer 3-6 months to upgrade their Hadoop -- and a
customer who's afraid to shift off Java6 in 3Q2014 will probably take
a year to upgrade.  The branch2 plan implies a last Java6 release of
Hadoop in 3Q2014.  If we assume a Java7-averse customer will take a
year to upgrade to this release -- and then will take another year to
upgrade their cluster after that -- then they can be happily using
Java6 all the way into 2016.  (Another point, if 3Q2014 comes along
and vendors find they have so many customers still on Java6 that they
can't afford the discontinuity, then they can shift their MAJOR
version number of their product to communicate the discontinuity --
there's nothing that says that a vendor's versioning scheme must agree
exactly with Hadoop's.)

In short, we don't currently have a realistic path for introducing
Java7 dependencies into Hadoop.  Simply allowing them into trunk will
NOT solve this problem: any contributor who wants to see their code in
a stable release knows it'll have to flow through branch2 -- and thus
they'll have to avoid Java7 dependencies.  The branch2 plan is the
only plan proposed so far that gets us to Java7 dependencies by Q4.
And the important part of the branch2 plan is we make the decision
soon -- so we have time to notify folks and otherwise work that
decision out into the field.

  Raymie



On Tue, Apr 8, 2014 at 9:19 AM, Eli Collins e...@cloudera.com wrote:
 On Tue, Apr 8, 2014 at 2:00 AM, Ottenheimer, Davi
 davi.ottenhei...@emc.com wrote:
 From: Eli Collins [mailto:e...@cloudera.com]
 Sent: Monday, April 07, 2014 11:54 AM


 IMO we should not drop support for Java 6 in a minor update of a stable
 release (v2).  I don't think the larger Hadoop user base would find it
 acceptable that upgrading to a minor update caused their systems to stop
 working because they didn't upgrade Java. There are people still getting
 support for Java 6. ...

 Thanks,
 Eli

 Hi Eli,

 Technically you are correct that those with extended support get critical 
 security fixes for Java 6 until the end of 2016. I am curious whether many of 
 those are in the Hadoop user base. Do you know? My guess is the vast 
 majority fall under Oracle's official public end of life, which passed over 12 
 months ago. Even Premier support ended Dec 2013:

 http://www.oracle.com/technetwork/java/eol-135779.html

 The end of Java 6 support carries significant risk. It has to be considered in 
 terms of serious security vulnerabilities such as CVE-2013-2465, which has a 
 CVSS score of 10.0.

 http://www.cvedetails.com/cve/CVE-2013-2465/

 Since you 

Re: Plans of moving towards JDK7 in trunk

2014-04-08 Thread Karthik Kambatla
+1 to NOT breaking compatibility in branch-2.

I think it is reasonable to require JDK7 for trunk, if we limit use of
JDK7-only APIs to security fixes etc. If we make other optimizations (like
IO), it would be a pain to backport things to branch-2. I guess this all
depends on when we see ourselves shipping Hadoop-3. Any ideas on that?


On Tue, Apr 8, 2014 at 9:19 AM, Eli Collins e...@cloudera.com wrote:

 On Tue, Apr 8, 2014 at 2:00 AM, Ottenheimer, Davi
 davi.ottenhei...@emc.com wrote:
  From: Eli Collins [mailto:e...@cloudera.com]
  Sent: Monday, April 07, 2014 11:54 AM
 
 
  IMO we should not drop support for Java 6 in a minor update of a stable
  release (v2).  I don't think the larger Hadoop user base would find it
  acceptable that upgrading to a minor update caused their systems to stop
  working because they didn't upgrade Java. There are people still getting
  support for Java 6. ...
 
  Thanks,
  Eli
 
  Hi Eli,
 
  Technically you are correct that those with extended support get critical
 security fixes for Java 6 until the end of 2016. I am curious whether many of
 those are in the Hadoop user base. Do you know? My guess is the vast
 majority fall under Oracle's official public end of life, which passed over 12
 months ago. Even Premier support ended Dec 2013:
 
  http://www.oracle.com/technetwork/java/eol-135779.html
 
  The end of Java 6 support carries significant risk. It has to be considered in
 terms of serious security vulnerabilities such as CVE-2013-2465, which has a
 CVSS score of 10.0.
 
  http://www.cvedetails.com/cve/CVE-2013-2465/
 
  Since you mentioned "caused their systems to stop working" as an example of
 what would be a concern to Hadoop users, please note the CVE-2013-2465
 availability impact:
 
  Complete (There is a total shutdown of the affected resource. The
 attacker can render the resource completely unavailable.)
 
  This vulnerability was patched in Java 6 Update 51, but only after end of
 life. Apple pushed out the update specifically because of this
 vulnerability (http://support.apple.com/kb/HT5717), as did some other
 vendors privately, but for the majority of people, using Java 6 means
 carrying a ticking time bomb.
 
  Allowing Java 6 to stay should be weighed as accepting that entire risk
 posture.
 

 There are some who get extended support, but I suspect many just have
 an if-it's-not-broke mentality when it comes to production deployments.
 The current code supports both java6 and java7 and so allows these
 people to remain compatible, while enabling others to upgrade to the
 java7 runtime. This seems like the right compromise for a stable
 release series. Again, it absolutely makes sense for trunk (i.e. v3) to
 require java7 or greater.



[jira] [Created] (HADOOP-10469) ProxyUser improvements

2014-04-08 Thread Benoy Antony (JIRA)
Benoy Antony created HADOOP-10469:
-

 Summary: ProxyUser improvements
 Key: HADOOP-10469
 URL: https://issues.apache.org/jira/browse/HADOOP-10469
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Benoy Antony
Assignee: Benoy Antony


This is an umbrella jira which addresses a few enhancements to the proxyUser 
capability via sub-tasks.





[jira] [Created] (HADOOP-10470) Change synchronization mechanism in ProxyUsers to readwrite lock

2014-04-08 Thread Benoy Antony (JIRA)
Benoy Antony created HADOOP-10470:
-

 Summary: Change synchronization mechanism in ProxyUsers to 
readwrite lock
 Key: HADOOP-10470
 URL: https://issues.apache.org/jira/browse/HADOOP-10470
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Reporter: Benoy Antony
Assignee: Benoy Antony


Currently the _ProxyUsers_ class achieves synchronization via _synchronized_. 
The performance of _ProxyUsers.authorize_ can be improved by replacing this 
with a read/write lock.
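
A minimal sketch of the proposed direction (field and method shapes here are
simplified and illustrative, not the actual Hadoop code):

{code}
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ProxyUsersSketch {
  private static final ReadWriteLock LOCK = new ReentrantReadWriteLock();

  // Hot path: many concurrent authorize() calls share the read lock.
  public static void authorize(String proxyUser) {
    LOCK.readLock().lock();
    try {
      // ... check the proxy-user configuration maps ...
    } finally {
      LOCK.readLock().unlock();
    }
  }

  // Rare path: configuration refreshes take the exclusive write lock.
  public static void refreshConfiguration() {
    LOCK.writeLock().lock();
    try {
      // ... rebuild the configuration maps ...
    } finally {
      LOCK.writeLock().unlock();
    }
  }
}
{code}

Readers never block each other, so authorize() scales with concurrency; only a
refresh briefly excludes them.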






[jira] [Created] (HADOOP-10471) Reduce the visibility of constants in ProxyUsers

2014-04-08 Thread Benoy Antony (JIRA)
Benoy Antony created HADOOP-10471:
-

 Summary: Reduce the visibility of constants in ProxyUsers
 Key: HADOOP-10471
 URL: https://issues.apache.org/jira/browse/HADOOP-10471
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Reporter: Benoy Antony
Assignee: Benoy Antony


Most of the constants in ProxyUsers unnecessarily have public visibility. 
These constants should be made private, and their external usages should be 
replaced by the corresponding functions.
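
A hedged sketch of the pattern (the constant value and method shown are
illustrative, not a diff of the real class):

{code}
public class ProxyUsers {
  // Before: public static final String CONF_HADOOP_PROXYUSER = "hadoop.proxyuser";
  private static final String CONF_HADOOP_PROXYUSER = "hadoop.proxyuser";

  // External callers use a function instead of the raw constant.
  public static String getProxySuperuserGroupConfKey(String userName) {
    return CONF_HADOOP_PROXYUSER + "." + userName + ".groups";
  }
}
{code}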






Re: Plans of moving towards JDK7 in trunk

2014-04-08 Thread Sandy Ryza
It might make sense to try to enumerate the benefits of switching to Java7
APIs and dependencies.  IMO, the ones listed so far on this thread don't
make a compelling enough case to drop Java6 in branch-2 on any time frame,
even if this means supporting Java6 through 2015.  For example, the change
in RawLocalFileSystem semantics might be an incompatible change for
branch-2 anyway.


On Tue, Apr 8, 2014 at 10:05 AM, Karthik Kambatla ka...@cloudera.com wrote:

 +1 to NOT breaking compatibility in branch-2.

 I think it is reasonable to require JDK7 for trunk, if we limit use of
 JDK7-only APIs to security fixes etc. If we make other optimizations (like
 IO), it would be a pain to backport things to branch-2. I guess this all
 depends on when we see ourselves shipping Hadoop-3. Any ideas on that?


 On Tue, Apr 8, 2014 at 9:19 AM, Eli Collins e...@cloudera.com wrote:

  On Tue, Apr 8, 2014 at 2:00 AM, Ottenheimer, Davi
  davi.ottenhei...@emc.com wrote:
   From: Eli Collins [mailto:e...@cloudera.com]
   Sent: Monday, April 07, 2014 11:54 AM
  
  
   IMO we should not drop support for Java 6 in a minor update of a
 stable
   release (v2).  I don't think the larger Hadoop user base would find it
   acceptable that upgrading to a minor update caused their systems to
 stop
   working because they didn't upgrade Java. There are people still
 getting
   support for Java 6. ...
  
   Thanks,
   Eli
  
   Hi Eli,
  
   Technically you are correct that those with extended support get critical
  security fixes for Java 6 until the end of 2016. I am curious whether many of
  those are in the Hadoop user base. Do you know? My guess is the vast
  majority fall under Oracle's official public end of life, which passed over
  12 months ago. Even Premier support ended Dec 2013:
  
   http://www.oracle.com/technetwork/java/eol-135779.html
  
   The end of Java 6 support carries significant risk. It has to be considered
  in terms of serious security vulnerabilities such as CVE-2013-2465, which has
  a CVSS score of 10.0.
  
   http://www.cvedetails.com/cve/CVE-2013-2465/
  
   Since you mentioned "caused their systems to stop working" as an example of
  what would be a concern to Hadoop users, please note the CVE-2013-2465
  availability impact:
  
   Complete (There is a total shutdown of the affected resource. The
  attacker can render the resource completely unavailable.)
  
   This vulnerability was patched in Java 6 Update 51, but only after end of
  life. Apple pushed out the update specifically because of this
  vulnerability (http://support.apple.com/kb/HT5717), as did some other
  vendors privately, but for the majority of people, using Java 6 means
  carrying a ticking time bomb.
  
   Allowing Java 6 to stay should be weighed as accepting that entire risk
  posture.
  
 
  There are some who get extended support, but I suspect many just have
  an if-it's-not-broke mentality when it comes to production deployments.
  The current code supports both java6 and java7 and so allows these
  people to remain compatible, while enabling others to upgrade to the
  java7 runtime. This seems like the right compromise for a stable
  release series. Again, it absolutely makes sense for trunk (i.e. v3) to
  require java7 or greater.
 



Re: [VOTE] Release Apache Hadoop 2.4.0

2014-04-08 Thread sanjay Radia


+1 binding
Verified binaries, ran from the binary on a single-node cluster. Tested some 
HDFS CLIs and wordcount.

sanjay
On Apr 7, 2014, at 9:52 AM, Suresh Srinivas sur...@hortonworks.com wrote:

 +1 (binding)
 
 Verified the signatures and hashes for both src and binary tars. Built from
 the source, the binary distribution and the documentation. Started a single
 node cluster and tested the following:
 # Started HDFS cluster, verified the hdfs CLI commands such as ls, copying
 data back and forth, verified the namenode webUI, etc.
 # Ran some tests such as sleep job, TestDFSIO, NNBench etc.
 
 I agree with Arun's analysis. At this time, the bar for blockers should be
 quite high. We can do a dot release if people want some more bug fixes.
 
 
 On Mon, Mar 31, 2014 at 2:22 AM, Arun C Murthy a...@hortonworks.com wrote:
 
 Folks,
 
 I've created a release candidate (rc0) for hadoop-2.4.0 that I would like
 to get released.
 
 The RC is available at:
 http://people.apache.org/~acmurthy/hadoop-2.4.0-rc0
 The RC tag in svn is here:
 https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.4.0-rc0
 
 The maven artifacts are available via repository.apache.org.
 
 Please try the release and vote; the vote will run for the usual 7 days.
 
 thanks,
 Arun
 
 --
 Arun C. Murthy
 Hortonworks Inc.
 http://hortonworks.com/
 
 
 


[jira] [Created] (HADOOP-10472) KerberosAuthenticator should use org.apache.commons.logging.LogFactory instead of org.slf4j.LoggerFactory

2014-04-08 Thread Jing Zhao (JIRA)
Jing Zhao created HADOOP-10472:
--

 Summary: KerberosAuthenticator should use 
org.apache.commons.logging.LogFactory instead of org.slf4j.LoggerFactory
 Key: HADOOP-10472
 URL: https://issues.apache.org/jira/browse/HADOOP-10472
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor
 Attachments: HADOOP-10472.000.patch
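
A hedged sketch of the direction the summary describes (illustrative, not the
attached patch):

{code}
// Before: slf4j
//   import org.slf4j.Logger;
//   import org.slf4j.LoggerFactory;
//   private static Logger LOG = LoggerFactory.getLogger(KerberosAuthenticator.class);

// After: commons-logging
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class KerberosAuthenticator {
  private static final Log LOG = LogFactory.getLog(KerberosAuthenticator.class);
  // ... LOG.debug(...) and LOG.warn(...) call sites stay essentially the same
}
{code}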







[jira] [Created] (HADOOP-10473) TestCallQueueManager is still flaky

2014-04-08 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-10473:


 Summary: TestCallQueueManager is still flaky
 Key: HADOOP-10473
 URL: https://issues.apache.org/jira/browse/HADOOP-10473
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor


testSwapUnderContention counts the calls and then interrupts, as shown below.  
A call could be taken after its count is read but before the interrupt.
{code}
for (Taker t : consumers) {
  totalCallsConsumed += t.callsTaken;
  threads.get(t).interrupt();
}
{code}
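
One way to close the race (a hedged sketch using the names from the snippet
above; join() would require the enclosing test to handle InterruptedException)
is to stop every taker first and only then read the counters:

{code}
// Stop all consumers before reading their counters.
for (Taker t : consumers) {
  threads.get(t).interrupt();
}
for (Taker t : consumers) {
  threads.get(t).join();  // wait until the taker thread has really exited
}
// No taker can increment callsTaken any more, so the total is stable.
for (Taker t : consumers) {
  totalCallsConsumed += t.callsTaken;
}
{code}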





[jira] [Created] (HADOOP-10476) Bumping the findbugs version to 2.5.3

2014-04-08 Thread Haohui Mai (JIRA)
Haohui Mai created HADOOP-10476:
---

 Summary: Bumping the findbugs version to 2.5.3
 Key: HADOOP-10476
 URL: https://issues.apache.org/jira/browse/HADOOP-10476
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai


The findbugs version used by Hadoop is pretty old (1.3.9). The old version of 
findbugs itself has some bugs (like 
http://sourceforge.net/p/findbugs/bugs/918/, hit by HADOOP-10474). Furthermore, 
the newer version is able to catch more bugs.

It's a good time to bump the findbugs version to the latest stable version, 
2.5.3.





[jira] [Created] (HADOOP-10477) Clean up findbug warnings found by findbugs 2.0.2

2014-04-08 Thread Haohui Mai (JIRA)
Haohui Mai created HADOOP-10477:
---

 Summary: Clean up findbug warnings found by findbugs 2.0.2
 Key: HADOOP-10477
 URL: https://issues.apache.org/jira/browse/HADOOP-10477
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai


This is an umbrella jira to clean up the new findbugs warnings found by findbugs 
2.0.2.





[jira] [Created] (HADOOP-10478) Fix new findbugs warnings in hadoop-maven-plugins

2014-04-08 Thread Haohui Mai (JIRA)
Haohui Mai created HADOOP-10478:
---

 Summary: Fix new findbugs warnings in hadoop-maven-plugins
 Key: HADOOP-10478
 URL: https://issues.apache.org/jira/browse/HADOOP-10478
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Haohui Mai


The following findbugs warning needs to be fixed:

{noformat}
[INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ 
hadoop-maven-plugins ---
[INFO] BugInstance size is 1
[INFO] Error size is 0
[INFO] Total bugs: 1
[INFO] Found reliance on default encoding in new 
org.apache.hadoop.maven.plugin.util.Exec$OutputBufferThread(InputStream): new 
java.io.InputStreamReader(InputStream) 
[org.apache.hadoop.maven.plugin.util.Exec$OutputBufferThread] At 
Exec.java:[lines 89-114]
{noformat}
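
The usual fix for this class of warning is to name the charset explicitly
instead of relying on the platform default; a minimal sketch (choosing UTF-8
is an assumption, not something the warning itself dictates):

{code}
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.Charset;

class ExplicitCharset {
  // Before: new InputStreamReader(in) -- encoding varies with the platform.
  // After: behavior is identical on every platform.
  static InputStreamReader newReader(InputStream in) {
    return new InputStreamReader(in, Charset.forName("UTF-8"));
  }
}
{code}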





[jira] [Created] (HADOOP-10479) Fix new findbugs warnings in hadoop-minikdc

2014-04-08 Thread Haohui Mai (JIRA)
Haohui Mai created HADOOP-10479:
---

 Summary: Fix new findbugs warnings in hadoop-minikdc
 Key: HADOOP-10479
 URL: https://issues.apache.org/jira/browse/HADOOP-10479
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Haohui Mai


The following findbugs warnings need to be fixed:

{noformat}
[INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-minikdc ---
[INFO] BugInstance size is 2
[INFO] Error size is 0
[INFO] Total bugs: 2
[INFO] Found reliance on default encoding in 
org.apache.hadoop.minikdc.MiniKdc.initKDCServer(): new 
java.io.InputStreamReader(InputStream) [org.apache.hadoop.minikdc.MiniKdc] At 
MiniKdc.java:[lines 112-557]
[INFO] Found reliance on default encoding in 
org.apache.hadoop.minikdc.MiniKdc.main(String[]): new java.io.FileReader(File) 
[org.apache.hadoop.minikdc.MiniKdc] At MiniKdc.java:[lines 112-557]
{noformat}
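
The FileReader case differs slightly from the InputStreamReader one, because
java.io.FileReader has no charset overload at all; the replacement is to wrap
a FileInputStream instead (a sketch; UTF-8 is again an assumption):

{code}
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.InputStreamReader;
import java.nio.charset.Charset;

class ExplicitCharsetFileRead {
  // Before: new FileReader(file) -- always the platform default encoding.
  // FileReader takes no charset parameter, so wrap a FileInputStream.
  static InputStreamReader newFileReader(File file) throws FileNotFoundException {
    return new InputStreamReader(new FileInputStream(file),
        Charset.forName("UTF-8"));
  }
}
{code}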





[jira] [Created] (HADOOP-10480) Fix new findbugs warnings in hadoop-hdfs

2014-04-08 Thread Haohui Mai (JIRA)
Haohui Mai created HADOOP-10480:
---

 Summary: Fix new findbugs warnings in hadoop-hdfs
 Key: HADOOP-10480
 URL: https://issues.apache.org/jira/browse/HADOOP-10480
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Haohui Mai


The following findbugs warnings need to be fixed:

{noformat}
[INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-hdfs ---
[INFO] BugInstance size is 14
[INFO] Error size is 0
[INFO] Total bugs: 14
[INFO] Redundant nullcheck of curPeer, which is known to be non-null in 
org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp() 
[org.apache.hadoop.hdfs.BlockReaderFactory] At BlockReaderFactory.java:[lines 
68-808]
[INFO] Increment of volatile field 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.restartingNodeIndex in 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery()
 [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] At 
DFSOutputStream.java:[lines 308-1492]
[INFO] Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(DataOutputStream,
 DataInputStream, DataOutputStream, String, DataTransferThrottler, 
DatanodeInfo[]): new java.io.FileWriter(File) 
[org.apache.hadoop.hdfs.server.datanode.BlockReceiver] At 
BlockReceiver.java:[lines 66-905]
[INFO] b must be nonnull but is marked as nullable 
[org.apache.hadoop.hdfs.server.datanode.DatanodeJspHelper$2] At 
DatanodeJspHelper.java:[lines 546-549]
[INFO] Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(ReplicaMap,
 File, boolean): new java.util.Scanner(File) 
[org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
BlockPoolSlice.java:[lines 58-427]
[INFO] Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.loadDfsUsed():
 new java.util.Scanner(File) 
[org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
BlockPoolSlice.java:[lines 58-427]
[INFO] Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.saveDfsUsed():
 new java.io.FileWriter(File) 
[org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
BlockPoolSlice.java:[lines 58-427]
[INFO] Redundant nullcheck of f, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(String,
 Block[]) 
[org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl] At 
FsDatasetImpl.java:[lines 60-1910]
[INFO] Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.namenode.FSImageUtil.static initializer for 
FSImageUtil(): String.getBytes() 
[org.apache.hadoop.hdfs.server.namenode.FSImageUtil] At 
FSImageUtil.java:[lines 34-89]
[INFO] Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(String, 
byte[], boolean): new String(byte[]) 
[org.apache.hadoop.hdfs.server.namenode.FSNamesystem] At 
FSNamesystem.java:[lines 301-7701]
[INFO] Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.namenode.INode.dumpTreeRecursively(PrintStream): 
new java.io.PrintWriter(OutputStream, boolean) 
[org.apache.hadoop.hdfs.server.namenode.INode] At INode.java:[lines 51-744]
[INFO] Redundant nullcheck of fos, which is known to be non-null in 
org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.copyBlocksToLostFound(String,
 HdfsFileStatus, LocatedBlocks) 
[org.apache.hadoop.hdfs.server.namenode.NamenodeFsck] At 
NamenodeFsck.java:[lines 94-710]
[INFO] Found reliance on default encoding in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]):
 new java.io.PrintWriter(File) 
[org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB] At 
OfflineImageViewerPB.java:[lines 45-181]
[INFO] Found reliance on default encoding in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]):
 new java.io.PrintWriter(OutputStream) 
[org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB] At 
OfflineImageViewerPB.java:[lines 45-181]
{noformat}
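
Most of these are the same default-encoding pattern as in the sibling tasks,
but the volatile-increment warning is a genuine concurrency smell: ++ on a
volatile field is a non-atomic read-modify-write, so two threads can lose an
update. A hedged sketch of the standard remedy (the field name comes from the
warning text; the surrounding class and initial value are illustrative):

{code}
import java.util.concurrent.atomic.AtomicInteger;

class DataStreamerSketch {
  // Before: private volatile int restartingNodeIndex; ... restartingNodeIndex++;
  // After: the increment becomes a single atomic operation.
  private final AtomicInteger restartingNodeIndex = new AtomicInteger(-1);

  void onNodeRestart() {
    restartingNodeIndex.incrementAndGet();
  }
}
{code}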






[jira] [Created] (HADOOP-10482) Fix new findbugs warnings in hadoop-common

2014-04-08 Thread Haohui Mai (JIRA)
Haohui Mai created HADOOP-10482:
---

 Summary: Fix new findbugs warnings in hadoop-common
 Key: HADOOP-10482
 URL: https://issues.apache.org/jira/browse/HADOOP-10482
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Haohui Mai


The following findbugs warnings need to be fixed:

{noformat}
[INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-common ---
[INFO] BugInstance size is 97
[INFO] Error size is 0
[INFO] Total bugs: 97
[INFO] Found reliance on default encoding in 
org.apache.hadoop.conf.Configuration.getConfResourceAsReader(String): new 
java.io.InputStreamReader(InputStream) [org.apache.hadoop.conf.Configuration] 
At Configuration.java:[lines 169-2642]
[INFO] Null passed for nonnull parameter of set(String, String) in 
org.apache.hadoop.conf.Configuration.setPattern(String, Pattern) 
[org.apache.hadoop.conf.Configuration] At Configuration.java:[lines 169-2642]
[INFO] Format string should use %n rather than \n in 
org.apache.hadoop.conf.ReconfigurationServlet.printHeader(PrintWriter, String) 
[org.apache.hadoop.conf.ReconfigurationServlet] At 
ReconfigurationServlet.java:[lines 44-234]
[INFO] Format string should use %n rather than \n in 
org.apache.hadoop.conf.ReconfigurationServlet.printHeader(PrintWriter, String) 
[org.apache.hadoop.conf.ReconfigurationServlet] At 
ReconfigurationServlet.java:[lines 44-234]
[INFO] Found reliance on default encoding in new 
org.apache.hadoop.crypto.key.KeyProvider$Metadata(byte[]): new 
java.io.InputStreamReader(InputStream) 
[org.apache.hadoop.crypto.key.KeyProvider$Metadata] At 
KeyProvider.java:[lines 110-204]
[INFO] Found reliance on default encoding in 
org.apache.hadoop.crypto.key.KeyProvider$Metadata.serialize(): new 
java.io.OutputStreamWriter(OutputStream) 
[org.apache.hadoop.crypto.key.KeyProvider$Metadata] At 
KeyProvider.java:[lines 110-204]
[INFO] Redundant nullcheck of clazz, which is known to be non-null in 
org.apache.hadoop.fs.FileSystem.createFileSystem(URI, Configuration) 
[org.apache.hadoop.fs.FileSystem] At FileSystem.java:[lines 89-3017]
[INFO] Unread public/protected field: 
org.apache.hadoop.fs.HarFileSystem$Store.endHash 
[org.apache.hadoop.fs.HarFileSystem$Store] At HarFileSystem.java:[lines 
492-500]
[INFO] Unread public/protected field: 
org.apache.hadoop.fs.HarFileSystem$Store.startHash 
[org.apache.hadoop.fs.HarFileSystem$Store] At HarFileSystem.java:[lines 
492-500]
[INFO] Found reliance on default encoding in 
org.apache.hadoop.fs.HardLink.createHardLink(File, File): new 
java.io.InputStreamReader(InputStream) [org.apache.hadoop.fs.HardLink] At 
HardLink.java:[lines 51-546]
[INFO] Found reliance on default encoding in 
org.apache.hadoop.fs.HardLink.createHardLinkMult(File, String[], File, int): 
new java.io.InputStreamReader(InputStream) [org.apache.hadoop.fs.HardLink] At 
HardLink.java:[lines 51-546]
[INFO] Found reliance on default encoding in 
org.apache.hadoop.fs.HardLink.getLinkCount(File): new 
java.io.InputStreamReader(InputStream) [org.apache.hadoop.fs.HardLink] At 
HardLink.java:[lines 51-546]
[INFO] Bad attempt to compute absolute value of signed random integer in 
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(String,
 long, Configuration, boolean) 
[org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext] At 
LocalDirAllocator.java:[lines 247-549]
[INFO] Null passed for nonnull parameter of 
org.apache.hadoop.conf.Configuration.set(String, String) in 
org.apache.hadoop.fs.ftp.FTPFileSystem.initialize(URI, Configuration) 
[org.apache.hadoop.fs.ftp.FTPFileSystem] At FTPFileSystem.java:[lines 51-593]
[INFO] Redundant nullcheck of dirEntries, which is known to be non-null in 
org.apache.hadoop.fs.ftp.FTPFileSystem.delete(FTPClient, Path, boolean) 
[org.apache.hadoop.fs.ftp.FTPFileSystem] At FTPFileSystem.java:[lines 51-593]
[INFO] Redundant nullcheck of 
org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPClient, Path), which is 
known to be non-null in 
org.apache.hadoop.fs.ftp.FTPFileSystem.exists(FTPClient, Path) 
[org.apache.hadoop.fs.ftp.FTPFileSystem] At FTPFileSystem.java:[lines 51-593]
[INFO] Found reliance on default encoding in 
org.apache.hadoop.fs.shell.Display$AvroFileInputStream.read(): 
String.getBytes() [org.apache.hadoop.fs.shell.Display$AvroFileInputStream] At 
Display.java:[lines 259-309]
[INFO] Format string should use %n rather than \n in 
org.apache.hadoop.fs.shell.Display$Checksum.processPath(PathData) 
[org.apache.hadoop.fs.shell.Display$Checksum] At Display.java:[lines 169-196]
[INFO] Format string should use %n rather than \n in 
org.apache.hadoop.fs.shell.Display$Checksum.processPath(PathData) 
[org.apache.hadoop.fs.shell.Display$Checksum] At Display.java:[lines 169-196]
[INFO] Found reliance on default encoding in 
org.apache.hadoop.fs.shell.Display$TextRecordInputStream.read(): 
String.getBytes() 

Re: Plans of moving towards JDK7 in trunk

2014-04-08 Thread Raymie Stata
 It might make sense to try to enumerate the benefits of switching to
 Java7 APIs and dependencies.

  - Java7 introduced a huge number of language, byte-code, API, and
tooling enhancements!  Just to name a few: try-with-resources, newer
and stronger encryption methods, more scalable concurrency primitives
(see the example below).
 See http://www.slideshare.net/boulderjug/55-things-in-java-7

  - We can't update current dependencies, and we can't add cool new ones.

  - Putting language/APIs aside, don't forget that a huge amount of effort
goes into qualifying for Java6 (at least, I hope the folks claiming to
support Java6 are putting in such an effort :-).  Wouldn't Hadoop
users/customers be better served if qualification effort went into
Java7/8 versus Java6/7?
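
To make the first bullet concrete: with try-with-resources the Java 6
try/finally/close dance collapses to one line (a minimal, self-contained
example):

  import java.io.BufferedReader;
  import java.io.FileInputStream;
  import java.io.IOException;
  import java.io.InputStreamReader;

  public class FirstLine {
    // The reader is closed automatically, even if readLine() throws --
    // no finally block and no nested try around close().
    static String firstLine(String path) throws IOException {
      try (BufferedReader r = new BufferedReader(
          new InputStreamReader(new FileInputStream(path), "UTF-8"))) {
        return r.readLine();
      }
    }
  }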

Getting to Java7 as a development env (and Java8 as a runtime env)
seems like a no-brainer.  Question is: How?

On Tue, Apr 8, 2014 at 10:21 AM, Sandy Ryza sandy.r...@cloudera.com wrote:
 It might make sense to try to enumerate the benefits of switching to Java7
 APIs and dependencies.  IMO, the ones listed so far on this thread don't
 make a compelling enough case to drop Java6 in branch-2 on any time frame,
 even if this means supporting Java6 through 2015.  For example, the change
 in RawLocalFileSystem semantics might be an incompatible change for
 branch-2 anyway.


 On Tue, Apr 8, 2014 at 10:05 AM, Karthik Kambatla ka...@cloudera.com wrote:

 +1 to NOT breaking compatibility in branch-2.

 I think it is reasonable to require JDK7 for trunk, if we limit use of
 JDK7-only APIs to security fixes etc. If we make other optimizations (like
 IO), it would be a pain to backport things to branch-2. I guess this all
 depends on when we see ourselves shipping Hadoop-3. Any ideas on that?


 On Tue, Apr 8, 2014 at 9:19 AM, Eli Collins e...@cloudera.com wrote:

  On Tue, Apr 8, 2014 at 2:00 AM, Ottenheimer, Davi
  davi.ottenhei...@emc.com wrote:
   From: Eli Collins [mailto:e...@cloudera.com]
   Sent: Monday, April 07, 2014 11:54 AM
  
  
   IMO we should not drop support for Java 6 in a minor update of a
 stable
   release (v2).  I don't think the larger Hadoop user base would find it
   acceptable that upgrading to a minor update caused their systems to
 stop
   working because they didn't upgrade Java. There are people still
 getting
   support for Java 6. ...
  
   Thanks,
   Eli
  
   Hi Eli,
  
   Technically you are correct that those with extended support get critical
  security fixes for Java 6 until the end of 2016. I am curious whether many of
  those are in the Hadoop user base. Do you know? My guess is the vast
  majority fall under Oracle's official public end of life, which passed over
  12 months ago. Even Premier support ended Dec 2013:
  
   http://www.oracle.com/technetwork/java/eol-135779.html
  
   The end of Java 6 support carries significant risk. It has to be considered
  in terms of serious security vulnerabilities such as CVE-2013-2465, which has
  a CVSS score of 10.0.
  
   http://www.cvedetails.com/cve/CVE-2013-2465/
  
   Since you mentioned "caused their systems to stop working" as an example of
  what would be a concern to Hadoop users, please note the CVE-2013-2465
  availability impact:
  
   Complete (There is a total shutdown of the affected resource. The
  attacker can render the resource completely unavailable.)
  
   This vulnerability was patched in Java 6 Update 51, but only after end of
  life. Apple pushed out the update specifically because of this
  vulnerability (http://support.apple.com/kb/HT5717), as did some other
  vendors privately, but for the majority of people, using Java 6 means
  carrying a ticking time bomb.
  
   Allowing Java 6 to stay should be weighed as accepting that entire risk
  posture.
  
 
  There are some who get extended support, but I suspect many just have
  an if-it's-not-broke mentality when it comes to production deployments.
  The current code supports both java6 and java7 and so allows these
  people to remain compatible, while enabling others to upgrade to the
  java7 runtime. This seems like the right compromise for a stable
  release series. Again, it absolutely makes sense for trunk (i.e. v3) to
  require java7 or greater.
 



[jira] [Created] (HADOOP-10484) Remove o.a.h.conf.Reconfig*

2014-04-08 Thread Haohui Mai (JIRA)
Haohui Mai created HADOOP-10484:
---

 Summary: Remove o.a.h.conf.Reconfig*
 Key: HADOOP-10484
 URL: https://issues.apache.org/jira/browse/HADOOP-10484
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai


A search reveals that these classes have not been used by Hadoop or any 
downstream projects since 0.20. They have not been maintained since 2011.

This jira proposes to remove them from hadoop-common.





Re: [VOTE] Release Apache Hadoop 2.4.0

2014-04-08 Thread Tsuyoshi OZAWA
Hi Arun,

I apologize for the late response.
If I understand the known problems correctly, +1 for the release (non-binding).

* Ran examples on pseudo distributed cluster.
* Ran tests.
* Built from source.

Let's fix the problems in the target version (2.4.1).

Thanks,
- Tsuyoshi


On Wed, Apr 9, 2014 at 4:45 AM, sanjay Radia san...@hortonworks.com wrote:


 +1 binding
 Verified binaries, ran from the binary on a single-node cluster. Tested some
 HDFS CLIs and wordcount.

 sanjay
 On Apr 7, 2014, at 9:52 AM, Suresh Srinivas sur...@hortonworks.com wrote:

 +1 (binding)

 Verified the signatures and hashes for both src and binary tars. Built from
 the source, the binary distribution and the documentation. Started a single
 node cluster and tested the following:
 # Started HDFS cluster, verified the hdfs CLI commands such as ls, copying
 data back and forth, verified the namenode webUI, etc.
 # Ran some tests such as sleep job, TestDFSIO, NNBench etc.

 I agree with Arun's analysis. At this time, the bar for blockers should be
 quite high. We can do a dot release if people want some more bug fixes.


 On Mon, Mar 31, 2014 at 2:22 AM, Arun C Murthy a...@hortonworks.com wrote:

 Folks,

 I've created a release candidate (rc0) for hadoop-2.4.0 that I would like
 to get released.

 The RC is available at:
 http://people.apache.org/~acmurthy/hadoop-2.4.0-rc0
 The RC tag in svn is here:
 https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.4.0-rc0

 The maven artifacts are available via repository.apache.org.

 Please try the release and vote; the vote will run for the usual 7 days.

 thanks,
 Arun

 --
 Arun C. Murthy
 Hortonworks Inc.
 http://hortonworks.com/






-- 
- Tsuyoshi


[jira] [Created] (HADOOP-10486) Remove typedbytes support from hadoop-streaming

2014-04-08 Thread Haohui Mai (JIRA)
Haohui Mai created HADOOP-10486:
---

 Summary: Remove typedbytes support from hadoop-streaming
 Key: HADOOP-10486
 URL: https://issues.apache.org/jira/browse/HADOOP-10486
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai


The typedbytes support in hadoop-streaming is based upon the deprecated 
records package. Neither of them is actively maintained. This jira proposes to 
remove them.


