Hadoop-Hdfs-0.23-Build - Build # 816 - Still Failing

2013-12-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/816/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7896 lines...]
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3319,8]
 cannot find symbol
[ERROR] symbol  : method makeExtensionsImmutable()
[ERROR] location: class 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3330,10]
 cannot find symbol
[ERROR] symbol  : method 
ensureFieldAccessorsInitialized(java.lang.Class<org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto>,java.lang.Class<org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto.Builder>)
[ERROR] location: class com.google.protobuf.GeneratedMessage.FieldAccessorTable
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3335,31]
 cannot find symbol
[ERROR] symbol  : class AbstractParser
[ERROR] location: package com.google.protobuf
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3344,4]
 method does not override or implement a method from a supertype
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[4098,12]
 cannot find symbol
[ERROR] symbol  : method 
ensureFieldAccessorsInitialized(java.lang.Class<org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto>,java.lang.Class<org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto.Builder>)
[ERROR] location: class com.google.protobuf.GeneratedMessage.FieldAccessorTable
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[4371,104]
 cannot find symbol
[ERROR] symbol  : method getUnfinishedMessage()
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5264,8]
 getUnknownFields() in 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto 
cannot override getUnknownFields() in com.google.protobuf.GeneratedMessage; 
overridden method is final
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5284,19]
 cannot find symbol
[ERROR] symbol  : method 
parseUnknownField(com.google.protobuf.CodedInputStream,com.google.protobuf.UnknownFieldSet.Builder,com.google.protobuf.ExtensionRegistryLite,int)
[ERROR] location: class 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5314,15]
 cannot find symbol
[ERROR] symbol  : method 
setUnfinishedMessage(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto)
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5317,27]
 cannot find symbol
[ERROR] symbol  : method 
setUnfinishedMessage(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto)
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5323,8]
 cannot find symbol
[ERROR] symbol  : method makeExtensionsImmutable()
[ERROR] location: class 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto
[ERROR] 
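Every symbol javac reports missing above (`com.google.protobuf.Parser`, `AbstractParser`, `makeExtensionsImmutable()`, `getUnfinishedMessage()`, `ensureFieldAccessorsInitialized(...)`) belongs to protobuf-java 2.5.0's generated-code support, so the likely cause is sources generated by protoc 2.5.x being compiled against an older protobuf-java jar. As a hedged illustration (this helper is not part of the build; the symbol list is taken straight from the log), one can classify such javac lines:

```python
# Symbols flagged "cannot find symbol" in the log above; all belong to
# protobuf-java 2.5.0's generated-code support and are absent from 2.4.x.
PROTOBUF_25_SYMBOLS = {
    "Parser",
    "AbstractParser",
    "makeExtensionsImmutable",
    "getUnfinishedMessage",
    "setUnfinishedMessage",
    "ensureFieldAccessorsInitialized",
    "parseUnknownField",
}

def looks_like_protobuf_mismatch(symbol_line: str) -> bool:
    """True if a javac 'symbol :' line names one of the 2.5-era protobuf APIs."""
    # javac prints e.g. "symbol  : method makeExtensionsImmutable()"
    name = symbol_line.split(":", 1)[-1].strip()
    name = name.removeprefix("method ").removeprefix("class ")
    return name.split("(", 1)[0] in PROTOBUF_25_SYMBOLS
```

If most of the "cannot find symbol" errors hit this list, the usual fix is aligning the protoc binary and the protobuf-java dependency on the same release.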

Build failed in Jenkins: Hadoop-Hdfs-0.23-Build #816

2013-12-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/816/changes

Changes:

[jlowe] svn merge -c 1543973 FIXES: YARN-1053. Diagnostic message from 
ContainerExitEvent is ignored in ContainerImpl. Contributed by Omkar Vinit Joshi

[jeagles] HADOOP-10148. backport hadoop-10107 to branch-0.23 (Chen He via 
jeagles)

--
[...truncated 7703 lines...]
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[281,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[10533,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[10544,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[8357,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[8368,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[12641,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[12652,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[9741,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[9752,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[1781,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[1792,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5338,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5349,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[6290,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[6301,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package 

Re: [VOTE] Release Apache Hadoop 0.23.10

2013-12-10 Thread Thomas Graves
With 6 +1's (4 binding, 2 non-binding) the vote passes.  I'll publish this
today.

Tom

On 12/3/13 12:22 AM, Thomas Graves tgra...@yahoo-inc.com wrote:

Hey Everyone,

There have been lots of improvements and bug fixes that have gone into
branch-0.23 since the 0.23.9 release.  We think it's time to do a 0.23.10,
so I have created a release candidate (rc0) for a Hadoop-0.23.10 release.

The RC is available at:
http://people.apache.org/~tgraves/hadoop-0.23.10-rc0/


The RC Tag in svn is here:
http://svn.apache.org/viewvc/hadoop/common/tags/release-0.23.10-rc0/


The maven artifacts are available via repository.apache.org.


Please try the release and vote; the vote will run for the usual 7 days,
until December 9th.

I am +1 (binding).

thanks,
Tom Graves




[jira] [Created] (HDFS-5648) Get rid of perVolumeReplicaMap

2013-12-10 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-5648:
---

 Summary: Get rid of perVolumeReplicaMap
 Key: HDFS-5648
 URL: https://issues.apache.org/jira/browse/HDFS-5648
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: Heterogeneous Storage (HDFS-2832)
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


The perVolumeReplicaMap in FsDatasetImpl.java is not necessary and can be 
removed. We continue to use the existing volumeMap.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Created] (HDFS-5650) Remove AclReadFlag and AclWriteFlag in FileSystem API

2013-12-10 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-5650:


 Summary: Remove AclReadFlag and AclWriteFlag in FileSystem API
 Key: HDFS-5650
 URL: https://issues.apache.org/jira/browse/HDFS-5650
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai


AclReadFlag and AclWriteFlag are intended to capture various options used in 
getfacl and setfacl. These options determine whether the tool should traverse 
the filesystem recursively, follow symlinks, etc., but they are not part of 
the core ACL abstractions.

The client program has more information and more flexibility to implement these 
options. This jira proposes to remove these flags to simplify the APIs.
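The division of labor proposed here can be sketched as follows. Both `get_acl` and `list_status` below are hypothetical, flag-free stand-ins (not the real FileSystem API): they are all a client tool needs to implement recursive traversal on its own side.

```python
# Hedged sketch (not Hadoop code): with a flag-free per-path ACL call and an
# ordinary directory listing, the *tool* implements getfacl -R itself, so the
# core API needs no AclReadFlag/AclWriteFlag.

def get_acl_recursive(fs, path):
    """Yield (path, acl) for path and everything under it, tool-side."""
    yield path, fs.get_acl(path)          # hypothetical flag-free ACL call
    for child in fs.list_status(path):    # hypothetical directory listing
        if child.is_directory:
            yield from get_acl_recursive(fs, child.path)
        else:
            yield child.path, fs.get_acl(child.path)
```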





[jira] [Created] (HDFS-5651) remove dfs.namenode.caching.enabled

2013-12-10 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-5651:
--

 Summary: remove dfs.namenode.caching.enabled
 Key: HDFS-5651
 URL: https://issues.apache.org/jira/browse/HDFS-5651
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


We can remove dfs.namenode.caching.enabled and simply always enable caching, 
similar to how we do with snapshots and other features.  The main overhead is 
the size of the cachedBlocks GSet.  However, we can simply make the size of 
this GSet configurable, and people who don't want caching can set it to a very 
small value.
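The capacity-instead-of-flag idea can be sketched as follows. This is a hedged stand-in, not Hadoop's GSet: the point is only that a tiny configured capacity approximates "caching disabled" while costing almost no memory.

```python
# Hedged sketch (not Hadoop's implementation): a configurable capacity can
# replace the on/off switch -- users who don't want caching set it very small.

class BoundedBlockSet:
    """Minimal stand-in for the cachedBlocks GSet with a fixed capacity."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._blocks = set()

    def add(self, block_id: int) -> bool:
        # Reject new entries once the configured capacity is reached.
        if len(self._blocks) >= self.capacity:
            return False
        self._blocks.add(block_id)
        return True

# capacity 0 behaves like caching switched off
disabled = BoundedBlockSet(0)
assert not disabled.add(1001)
```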





Re: [VOTE] Merge HDFS-2832 Heterogeneous Storage Phase 1 to trunk

2013-12-10 Thread Arpit Agarwal
With 4 binding +1 votes, the vote to merge the branch HDFS-2832 into trunk
passes. The code will be merged soon. We will address any remaining issues
in trunk.

Thanks to everyone who voted and provided their feedback.

Regards,
Arpit



On Mon, Dec 9, 2013 at 10:51 AM, Andrew Wang andrew.w...@cloudera.com wrote:

 Thanks for clarifying that Arpit. I'm a +0.9 since I haven't reviewed
 enough to +1, but everything thus far looks great.

 Andrew


 On Fri, Dec 6, 2013 at 5:35 PM, Chen He airb...@gmail.com wrote:

  +1 nice feature for HDFS
 
 
  On Fri, Dec 6, 2013 at 7:32 PM, Arpit Agarwal aagar...@hortonworks.com
  wrote:
 
   Hi Andrew,
  
    Our plan as stated back in August was to do this work principally in two
    phases.

    https://issues.apache.org/jira/browse/HDFS-2832?focusedCommentId=13739041

    For the second phase which includes API support, we also need quota
    management. For changes of this scope, to do all the work at once while
    keeping the feature branch in sync with ongoing development in trunk is
    unmanageable. Hence we'd like to stick with the initial plan and develop
    in phases.

    Even for datanode caching the initial merge did not include the quota
    management changes which are happening subsequently.

    Going forward, we will stabilize the current changes in trunk in the 2.4
    time frame. Next we will add quota management and API support which can
    align with the 2.5 time frame, with the second merge potentially in
    March/April.
  
   Arpit
  
  
   On Fri, Dec 6, 2013 at 3:15 PM, Andrew Wang andrew.w...@cloudera.com
   wrote:
  
     Hi everyone,

     I'm still getting up to speed on the changes here (my fault for not
     following development more closely, other priorities etc etc), but the
     branch thus far is already quite impressive. It's quite an undertaking
     to turn the DN into a collection of Storages, along with the
     corresponding datastructure, tracking, and other changes in the NN and
     DN.

     Correct me if I'm wrong though, but this still leaves a substantial
     part of the design doc to be implemented. Looking at the list of
     remaining subtasks, it seems like we still can't specify a storage type
     for a file (HDFS-5229) or write a file to a given storage type
     (HDFS-5391), along with the corresponding client protocol changes. This
     leads me to two questions:

     - If this is merged, what can I do with the new code? Without client
     changes or the ability to create a file on a different storage type, I
     don't know how (for example) I could hand this to our QA team to test.
     I'm wondering why we want to merge now rather than when the branch is
     more feature complete.
     - What's the plan for the implementation of the remaining features? How
     many phases? What's the timeline for these phases? Particularly,
     related to the use cases presented in section 2 of the design doc.

     I'm also going to post some design doc questions to the JIRA, there are
     a few technical q's I'd like to get clarification on.

     Thanks,
     Andrew
   
   
On Wed, Dec 4, 2013 at 7:21 AM, Sirianni, Eric 
  eric.siria...@netapp.com
wrote:
   
 +1

 My team has been developing and testing against the HDFS-2832 branch for
 the past month.  It has proven to be quite stable.

 Eric

 -Original Message-
 From: Arpit Agarwal [mailto:aagar...@hortonworks.com]
 Sent: Monday, December 02, 2013 7:07 PM
 To: hdfs-dev@hadoop.apache.org; common-...@hadoop.apache.org
 Subject: [VOTE] Merge HDFS-2832 Heterogeneous Storage Phase 1 to
  trunk

 Hello all,

 I would like to call a vote to merge phase 1 of the Heterogeneous
   Storage
 feature into trunk.

 *Scope of the changes:*
 The changes allow exposing the DataNode as a collection of storages and
 set the foundation for subsequent work to present Heterogeneous Storages
 to applications. This allows DataNodes to send block and storage reports
 per-storage. In addition this change introduces the ability to add a
 'storage type' tag to the storage directories. This enables supporting
 different types of storages in addition to disk storage.

 Development of the feature is tracked in the jira
 https://issues.apache.org/jira/browse/HDFS-2832.

 *Details of development and testing:*
 Development has been done in a separate branch -
 https://svn.apache.org/repos/asf/hadoop/common/branches/HDFS-2832. The
 updated design is posted at -
 https://issues.apache.org/jira/secure/attachment/12615761/20131125-HeterogeneousStorage.pdf.
 The changes involve ~6K changed lines of code, with a third of those
 changes being to tests.

 Please see the test plan


   
  
 
 

[jira] [Created] (HDFS-5652) refactoring/uniforming invalid block token exception handling in DFSInputStream

2013-12-10 Thread Liang Xie (JIRA)
Liang Xie created HDFS-5652:
---

 Summary: refactoring/uniforming invalid block token exception 
handling in DFSInputStream
 Key: HDFS-5652
 URL: https://issues.apache.org/jira/browse/HDFS-5652
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.2.0, 3.0.0
Reporter: Liang Xie
Priority: Minor


See Junping's and Colin's comments on HDFS-5637.





[jira] [Created] (HDFS-5653) Log namenode hostname in various exceptions being thrown in a HA setup

2013-12-10 Thread Arpit Gupta (JIRA)
Arpit Gupta created HDFS-5653:
-

 Summary: Log namenode hostname in various exceptions being thrown 
in a HA setup
 Key: HDFS-5653
 URL: https://issues.apache.org/jira/browse/HDFS-5653
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ha
Affects Versions: 2.2.0
Reporter: Arpit Gupta


In an HA setup, any time we see an exception such as safemode or namenode in 
standby, we don't know which namenode it came from. The user has to go to the 
logs of the namenodes and determine which one was active and/or standby around 
the same time.
I think it would help with debugging if any such exceptions could include the 
namenode hostname so the user could know exactly which namenode served the 
request.
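As a small hedged illustration of the proposal (the host name and message are made up, and this is not the actual Hadoop client code), prefixing the originating namenode turns an ambiguous HA error into an attributable one:

```python
# Hedged sketch (not Hadoop code): an HA client that knows which namenode it
# called can include that address in the exception text it surfaces.

def with_namenode(nn_host: str, nn_port: int, message: str) -> str:
    """Prepend the namenode that served the request to an error message."""
    return f"Namenode {nn_host}:{nn_port}: {message}"

# e.g. with_namenode("nn1.example.com", 8020,
#                    "Operation category READ is not supported in state standby")
```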


