Re: [VOTE] Release Apache Hadoop 2.2.0

2013-10-14 Thread Suresh Srinivas
+1 (binding)


On Mon, Oct 7, 2013 at 12:00 AM, Arun C Murthy  wrote:

> Folks,
>
> I've created a release candidate (rc0) for hadoop-2.2.0 that I would like
> to get released - this release fixes a small number of bugs and some
> protocol/api issues which should ensure they are now stable and will not
> change in hadoop-2.x.
>
> The RC is available at:
> http://people.apache.org/~acmurthy/hadoop-2.2.0-rc0
> The RC tag in svn is here:
> http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.2.0-rc0
>
> The maven artifacts are available via repository.apache.org.
>
> Please try the release and vote; the vote will run for the usual 7 days.
>
> thanks,
> Arun
>
> P.S.: Thanks to Colin, Andrew, Daryn, Chris and others for helping nail
> down the symlinks-related issues. I'll release note the fact that we have
> disabled it in 2.2. Also, thanks to Vinod for some heavy-lifting on the
> YARN side in the last couple of weeks.
>
>
>
>
>
> --
> Arun C. Murthy
> Hortonworks Inc.
> http://hortonworks.com/
>



-- 
http://hortonworks.com/download/



[jira] [Created] (HDFS-5364) Add OpenFileCtx cache

2013-10-14 Thread Brandon Li (JIRA)
Brandon Li created HDFS-5364:


 Summary: Add OpenFileCtx cache
 Key: HDFS-5364
 URL: https://issues.apache.org/jira/browse/HDFS-5364
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Brandon Li
Assignee: Brandon Li


The NFS gateway can run out of memory when the stream timeout is set to a 
relatively long period (e.g., >1 minute) and a user uploads thousands of files in 
parallel. DFSClient creates a DataStreamer thread for each stream, so the gateway 
eventually runs out of memory from creating too many threads.

The NFS gateway should have an OpenFileCtx cache to limit the total number of 
open files.
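A minimal sketch of such a bounded cache (the LRU policy, the FileHandle key 
type, and the cleanup() hook are assumptions here, not the committed design):

{code}
// A minimal sketch, not the committed design: bound the number of open
// streams with an access-ordered LinkedHashMap that evicts the
// least-recently-used OpenFileCtx (synchronization omitted for brevity).
import java.util.LinkedHashMap;
import java.util.Map;

public class OpenFileCtxCache extends LinkedHashMap<FileHandle, OpenFileCtx> {
  private final int maxStreams;  // configured limit on open files

  public OpenFileCtxCache(int maxStreams) {
    super(16, 0.75f, true);  // true = access order, i.e. LRU iteration
    this.maxStreams = maxStreams;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<FileHandle, OpenFileCtx> eldest) {
    if (size() > maxStreams) {
      // assumed cleanup() hook: close the stream so its DataStreamer thread exits
      eldest.getValue().cleanup();
      return true;
    }
    return false;
  }
}
{code}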





[jira] [Created] (HDFS-5363) Create SPNEGO-authenticated connection in URLConnectionFactory instead of WebHdfsFileSystem

2013-10-14 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-5363:


 Summary: Create SPNEGO-authenticated connection in 
URLConnectionFactory instead of WebHdfsFileSystem
 Key: HDFS-5363
 URL: https://issues.apache.org/jira/browse/HDFS-5363
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5363.000.patch

Currently the WebHdfsFileSystem class creates the HTTP connections for URLs that 
require SPNEGO authentication. This patch moves that logic into 
URLConnectionFactory, the factory class that is supposed to create all 
HTTP connections for the WebHdfs client.
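A rough sketch of the intended shape, assuming Hadoop's AuthenticatedURL 
client performs the SPNEGO handshake (the method signature is illustrative, 
not the actual patch):

{code}
// Illustrative sketch: URLConnectionFactory becomes the single place that
// opens WebHdfs HTTP connections, SPNEGO-authenticated or not.
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import org.apache.hadoop.security.authentication.client.AuthenticatedURL;
import org.apache.hadoop.security.authentication.client.AuthenticationException;

public class URLConnectionFactory {
  public HttpURLConnection openConnection(URL url, boolean requiresSpnego)
      throws IOException, AuthenticationException {
    if (requiresSpnego) {
      // delegate the Kerberos/SPNEGO negotiation to the authentication client
      AuthenticatedURL.Token token = new AuthenticatedURL.Token();
      return new AuthenticatedURL().openConnection(url, token);
    }
    return (HttpURLConnection) url.openConnection();
  }
}
{code}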





[jira] [Created] (HDFS-5362) Add SnapshotException to terse exception group

2013-10-14 Thread Shinichi Yamashita (JIRA)
Shinichi Yamashita created HDFS-5362:


 Summary: Add SnapshotException to terse exception group
 Key: HDFS-5362
 URL: https://issues.apache.org/jira/browse/HDFS-5362
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 3.0.0
Reporter: Shinichi Yamashita
Priority: Minor


In trunk, the full stack trace of SnapshotException is written to the NameNode's 
log via the ipc.Server class. The message of SnapshotException is informative 
enough on its own, so the stack trace is unnecessary.
Therefore, SnapshotException should be added to the terse exception group of 
NameNodeRpcServer.
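Assuming the existing terse-exception hook on the RPC server, the change would 
be roughly a one-liner in NameNodeRpcServer:

{code}
// Sketch: register SnapshotException as terse so ipc.Server logs only its
// message instead of the full stack trace.
clientRpcServer.addTerseExceptions(
    org.apache.hadoop.hdfs.protocol.SnapshotException.class);
{code}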





[jira] [Resolved] (HDFS-5359) Allow LightWeightGSet#Iterator to remove elements

2013-10-14 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-5359.
---

   Resolution: Fixed
Fix Version/s: HDFS-4949

Committed to branch, thanks Colin.

> Allow LightWeightGSet#Iterator to remove elements
> -
>
> Key: HDFS-5359
> URL: https://issues.apache.org/jira/browse/HDFS-5359
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-4949
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: HDFS-4949
>
> Attachments: HDFS-5359-caching.001.patch
>
>
> Small modification to {{GSet#Iterator}} so that it can remove elements.





[jira] [Created] (HDFS-5361) Change the unit of StartupProgress 'PercentComplete' to percentage

2013-10-14 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-5361:
---

 Summary: Change the unit of StartupProgress 'PercentComplete' to 
percentage
 Key: HDFS-5361
 URL: https://issues.apache.org/jira/browse/HDFS-5361
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.1.0-beta
Reporter: Akira AJISAKA
Priority: Minor


Currently the 'PercentComplete' metric is reported as a ratio (maximum 1.0). This 
is confusing for users because the name includes "percent".
The metric value should be multiplied by 100.
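In effect, the getter only needs a factor of 100; a sketch, with illustrative 
field and method names:

{code}
// Report 0-100 instead of 0.0-1.0 so the value matches the metric's name.
public float getPercentComplete() {
  // cast before dividing to avoid integer truncation
  return 100.0f * ((float) completedCount / (float) totalCount);
}
{code}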





[jira] [Created] (HDFS-5360) Improvement of usage message of renameSnapshot and deleteSnapshot

2013-10-14 Thread Shinichi Yamashita (JIRA)
Shinichi Yamashita created HDFS-5360:


 Summary: Improvement of usage message of renameSnapshot and 
deleteSnapshot
 Key: HDFS-5360
 URL: https://issues.apache.org/jira/browse/HDFS-5360
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 3.0.0
Reporter: Shinichi Yamashita
Priority: Minor


When the arguments of the "hdfs dfs -createSnapshot" command are inappropriate, 
the following message is displayed.

{code}
[hadoop@trunk ~]$ hdfs dfs -createSnapshot
-createSnapshot: <snapshotDir> is missing.
Usage: hadoop fs [generic options] -createSnapshot <snapshotDir> 
[<snapshotName>]
{code}

On the other hand, the "-renameSnapshot" and "-deleteSnapshot" commands display 
the following messages, which are not helpful to the user.

{code}
[hadoop@trunk ~]$ hdfs dfs -renameSnapshot
renameSnapshot: args number not 3: 0

[hadoop@trunk ~]$ hdfs dfs -deleteSnapshot
deleteSnapshot: args number not 2: 0
{code}

This issue changes "-renameSnapshot" and "-deleteSnapshot" to output usage 
messages similar to that of "-createSnapshot".
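A sketch of one way to do that, following the NAME/USAGE convention of the 
FsShell commands (the exact class layout here is an assumption, not the 
eventual patch):

{code}
// Sketch: declare explicit NAME/USAGE strings, as -createSnapshot does, so a
// wrong argument count can be reported with a usage line rather than
// "args number not 3: 0". Assumes the imports of the enclosing FsShell
// command file (java.util.LinkedList, java.io.IOException, FsCommand).
public static class RenameSnapshot extends FsCommand {
  public static final String NAME = "renameSnapshot";
  public static final String USAGE = "<snapshotDir> <oldName> <newName>";

  @Override
  protected void processOptions(LinkedList<String> args) throws IOException {
    if (args.size() != 3) {
      // the shell's error path can then print NAME and USAGE for the user
      throw new IllegalArgumentException("Incorrect number of arguments.");
    }
    // the snapshot renaming itself is unchanged
  }
}
{code}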





[jira] [Created] (HDFS-5359) allow GSet#Iterator to remove elements

2013-10-14 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-5359:
--

 Summary: allow GSet#Iterator to remove elements
 Key: HDFS-5359
 URL: https://issues.apache.org/jira/browse/HDFS-5359
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-4949
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor


Small modification to {{GSet#Iterator}} so that it can remove elements.
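A sketch of the shape such a change could take (the cursor helpers below are 
hypothetical; the remove() support is the point):

{code}
// Sketch of the idea (not necessarily the committed patch): remember the
// element last returned by next() so remove() can delete it from the set.
private class SetIterator implements Iterator<E> {
  private E current;  // last element returned by next()

  @Override
  public boolean hasNext() {
    return peekNextElement() != null;  // hypothetical cursor helper
  }

  @Override
  public E next() {
    current = advanceCursor();  // hypothetical cursor helper
    return current;
  }

  @Override
  public void remove() {
    if (current == null) {
      throw new IllegalStateException("next() has not been called");
    }
    LightWeightGSet.this.remove(current);  // GSet#remove already exists
    current = null;
  }
}
{code}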





[jira] [Resolved] (HDFS-5358) Add replication field to PathBasedCacheDirective

2013-10-14 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-5358.
---

   Resolution: Fixed
Fix Version/s: HDFS-4949
 Hadoop Flags: Reviewed

Committed to branch, thanks Colin for the patch and Chris for the review.

> Add replication field to PathBasedCacheDirective
> 
>
> Key: HDFS-5358
> URL: https://issues.apache.org/jira/browse/HDFS-5358
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-4949
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: HDFS-4949
>
> Attachments: HDFS-5358-caching.001.patch, HDFS-5358-caching.002.patch
>
>
> Add a 'replication' field to PathBasedCacheDirective, so that administrators 
> can configure how many cached replicas of a block the cluster should try to 
> maintain.





Re: Build failed in Jenkins: Hadoop-Hdfs-trunk #1552

2013-10-14 Thread Ted Yu
I noticed that test failures in 'Hadoop HDFS' caused tests in other modules to
be skipped:

[INFO] Apache Hadoop HDFS  FAILURE
[1:44:29.257s]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED

Should we apply the '--fail-at-end' Maven option so that all tests are run?

See http://maven.apache.org/guides/mini/guide-multiple-modules.html
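
For reference, the invocation would then look like this ('-fae' is the short
form); Maven still fails the build at the end but keeps building and testing
the remaining modules in the reactor:

    mvn -fae clean test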

Cheers


On Mon, Oct 14, 2013 at 6:19 AM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See 
>
> Changes:
>
> [sandy] MAPREDUCE-5463. Deprecate SLOTS_MILLIS counters. (Tzuyoshi Ozawa
> via Sandy Ryza)
>
> [sandy] YARN-305. Fair scheduler logs too many "Node offered to app"
> messages. (Lohit Vijayarenu via Sandy Ryza)
>
> [sseth] MAPREDUCE-5329. Allow MR applications to use additional
> AuxServices, which are compatible with the default MapReduce shuffle.
> Contributed by Avner BenHanoch.
>
> --
> [...truncated 11425 lines...]
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.954 sec
> - in org.apache.hadoop.hdfs.TestSetrepIncreasing
> Running org.apache.hadoop.hdfs.TestEncryptedTransfer
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 79.573
> sec - in org.apache.hadoop.hdfs.TestEncryptedTransfer
> Running org.apache.hadoop.hdfs.TestDFSUpgrade
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.229 sec
> - in org.apache.hadoop.hdfs.TestDFSUpgrade
> Running org.apache.hadoop.hdfs.TestCrcCorruption
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.306 sec
> - in org.apache.hadoop.hdfs.TestCrcCorruption
> Running org.apache.hadoop.hdfs.TestHFlush
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.177 sec
> - in org.apache.hadoop.hdfs.TestHFlush
> Running org.apache.hadoop.hdfs.TestFileAppendRestart
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.167 sec
> - in org.apache.hadoop.hdfs.TestFileAppendRestart
> Running org.apache.hadoop.hdfs.TestDatanodeReport
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.518 sec
> - in org.apache.hadoop.hdfs.TestDatanodeReport
> Running org.apache.hadoop.hdfs.TestShortCircuitLocalRead
> Tests run: 10, Failures: 0, Errors: 0, Skipped: 10, Time elapsed: 0.195
> sec - in org.apache.hadoop.hdfs.TestShortCircuitLocalRead
> Running org.apache.hadoop.hdfs.TestFileInputStreamCache
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.205 sec
> - in org.apache.hadoop.hdfs.TestFileInputStreamCache
> Running org.apache.hadoop.hdfs.TestRestartDFS
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.3 sec -
> in org.apache.hadoop.hdfs.TestRestartDFS
> Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.577 sec
> - in org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
> Running org.apache.hadoop.hdfs.TestDFSRemove
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.897 sec
> - in org.apache.hadoop.hdfs.TestDFSRemove
> Running org.apache.hadoop.hdfs.TestHDFSTrash
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.345 sec
> - in org.apache.hadoop.hdfs.TestHDFSTrash
> Running org.apache.hadoop.hdfs.TestClientReportBadBlock
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.52 sec
> - in org.apache.hadoop.hdfs.TestClientReportBadBlock
> Running org.apache.hadoop.hdfs.TestQuota
> Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.236 sec
> - in org.apache.hadoop.hdfs.TestQuota
> Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.421 sec
> - in org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
> Running org.apache.hadoop.hdfs.TestDatanodeRegistration
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.058 sec
> - in org.apache.hadoop.hdfs.TestDatanodeRegistration
> Running org.apache.hadoop.hdfs.TestAbandonBlock
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.214 sec
> - in org.apache.hadoop.hdfs.TestAbandonBlock
> Running org.apache.hadoop.hdfs.TestDFSShell
> Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.489
> sec - in org.apache.hadoop.hdfs.TestDFSShell
> Running org.apache.hadoop.hdfs.TestListFilesInDFS
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.339 sec
> - in org.apache.hadoop.hdfs.TestListFilesInDFS
> Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 0.163 sec
> - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
> Running org.apache.hadoop.hdfs.TestPeerCache
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elap

[jira] [Resolved] (HDFS-5349) DNA_CACHE and DNA_UNCACHE should be by blockId only

2013-10-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HDFS-5349.


   Resolution: Fixed
Fix Version/s: HDFS-4949

> DNA_CACHE and DNA_UNCACHE should be by blockId only 
> 
>
> Key: HDFS-5349
> URL: https://issues.apache.org/jira/browse/HDFS-5349
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-4949
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: HDFS-4949
>
> Attachments: HDFS-5349-caching.001.patch, HDFS-5349-caching.002.patch
>
>
> DNA_CACHE and DNA_UNCACHE should be by blockId only.  We don't need length 
> and genstamp to know what the NN asked us to cache.
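
A sketch of what a blockId-only command could carry (the class name and field 
layout are hypothetical, not necessarily what was committed):

{code}
// Sketch: a cache/uncache command identified by block id alone; no length or
// generation stamp fields are needed for the DataNode to act on it.
public class BlockIdCommand extends DatanodeCommand {
  private final String blockPoolId;
  private final long[] blockIds;

  public BlockIdCommand(int action, String blockPoolId, long[] blockIds) {
    super(action);
    this.blockPoolId = blockPoolId;
    this.blockIds = blockIds;
  }

  public String getBlockPoolId() { return blockPoolId; }
  public long[] getBlockIds() { return blockIds; }
}
{code}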





[jira] [Created] (HDFS-5358) add 'replication' field to PathBasedCacheDirective

2013-10-14 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-5358:
--

 Summary: add 'replication' field to PathBasedCacheDirective
 Key: HDFS-5358
 URL: https://issues.apache.org/jira/browse/HDFS-5358
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: HDFS-4949
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Add a 'replication' field to PathBasedCacheDirective, so that administrators 
can configure how many cached replicas of a block the cluster should try to 
maintain.
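A sketch of the shape of the change (constructor and accessor are illustrative, 
not necessarily the committed API):

{code}
// Sketch: PathBasedCacheDirective gains a replication field that tells the
// cluster how many cached replicas of each block to try to maintain.
import org.apache.hadoop.fs.Path;

public class PathBasedCacheDirective {
  private final Path path;
  private final String pool;
  private final short replication;  // desired number of cached replicas

  public PathBasedCacheDirective(Path path, String pool, short replication) {
    this.path = path;
    this.pool = pool;
    this.replication = replication;
  }

  public short getReplication() {
    return replication;
  }
}
{code}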





[jira] [Created] (HDFS-5357) TestFileSystemAccessService failures in JDK7

2013-10-14 Thread Robert Parker (JIRA)
Robert Parker created HDFS-5357:
---

 Summary: TestFileSystemAccessService failures in JDK7
 Key: HDFS-5357
 URL: https://issues.apache.org/jira/browse/HDFS-5357
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.9
Reporter: Robert Parker


junit.framework.AssertionFailedError: Expected Exception: ServiceException got: 
ExceptionInInitializerError
at junit.framework.Assert.fail(Assert.java:47)
at 
org.apache.hadoop.test.TestExceptionHelper$1.evaluate(TestExceptionHelper.java:56)
at 
org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray2(ReflectionUtils.java:208)
at 
org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:159)
at 
org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:87)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:95)






Re: [VOTE] Release Apache Hadoop 2.2.0

2013-10-14 Thread Arpit Agarwal
+1 (non-binding)

- Verified md5/SHA checksums
- Installed the binary distribution on CentOS 6.4
- Ran a few MapReduce jobs on a single-node cluster
- Copied files to/from the cluster using HDFS commands

No issues encountered.


On Mon, Oct 7, 2013 at 12:00 AM, Arun C Murthy  wrote:

> Folks,
>
> I've created a release candidate (rc0) for hadoop-2.2.0 that I would like
> to get released - this release fixes a small number of bugs and some
> protocol/api issues which should ensure they are now stable and will not
> change in hadoop-2.x.
>
> The RC is available at:
> http://people.apache.org/~acmurthy/hadoop-2.2.0-rc0
> The RC tag in svn is here:
> http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.2.0-rc0
>
> The maven artifacts are available via repository.apache.org.
>
> Please try the release and vote; the vote will run for the usual 7 days.
>
> thanks,
> Arun
>
> P.S.: Thanks to Colin, Andrew, Daryn, Chris and others for helping nail
> down the symlinks-related issues. I'll release note the fact that we have
> disabled it in 2.2. Also, thanks to Vinod for some heavy-lifting on the
> YARN side in the last couple of weeks.
>
>
>
>
>
> --
> Arun C. Murthy
> Hortonworks Inc.
> http://hortonworks.com/
>



[jira] [Created] (HDFS-5356) MiniDFSCluster should close all open FileSystems in shutdown()

2013-10-14 Thread haosdent (JIRA)
haosdent created HDFS-5356:
--

 Summary: MiniDFSCluster should close all open FileSystems in 
shutdown()
 Key: HDFS-5356
 URL: https://issues.apache.org/jira/browse/HDFS-5356
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: haosdent
Priority: Critical








Hadoop-Hdfs-trunk - Build # 1552 - Still Failing

2013-10-14 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1552/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 11618 lines...]
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  FAILURE 
[1:44:29.257s]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [1.800s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 1:44:31.893s
[INFO] Finished at: Mon Oct 14 13:19:14 UTC 2013
[INFO] Final Memory: 34M/402M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating MAPREDUCE-5329
Updating MAPREDUCE-5463
Updating YARN-305
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #1552

2013-10-14 Thread Apache Jenkins Server
See 

Changes:

[sandy] MAPREDUCE-5463. Deprecate SLOTS_MILLIS counters. (Tzuyoshi Ozawa via 
Sandy Ryza)

[sandy] YARN-305. Fair scheduler logs too many "Node offered to app" messages. 
(Lohit Vijayarenu via Sandy Ryza)

[sseth] MAPREDUCE-5329. Allow MR applications to use additional AuxServices, 
which are compatible with the default MapReduce shuffle. Contributed by Avner 
BenHanoch.

--
[...truncated 11425 lines...]
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.954 sec - in 
org.apache.hadoop.hdfs.TestSetrepIncreasing
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 79.573 sec - 
in org.apache.hadoop.hdfs.TestEncryptedTransfer
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.229 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgrade
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.306 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.177 sec - in 
org.apache.hadoop.hdfs.TestHFlush
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.167 sec - in 
org.apache.hadoop.hdfs.TestFileAppendRestart
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.518 sec - in 
org.apache.hadoop.hdfs.TestDatanodeReport
Running org.apache.hadoop.hdfs.TestShortCircuitLocalRead
Tests run: 10, Failures: 0, Errors: 0, Skipped: 10, Time elapsed: 0.195 sec - 
in org.apache.hadoop.hdfs.TestShortCircuitLocalRead
Running org.apache.hadoop.hdfs.TestFileInputStreamCache
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.205 sec - in 
org.apache.hadoop.hdfs.TestFileInputStreamCache
Running org.apache.hadoop.hdfs.TestRestartDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.3 sec - in 
org.apache.hadoop.hdfs.TestRestartDFS
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.577 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.897 sec - in 
org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.345 sec - in 
org.apache.hadoop.hdfs.TestHDFSTrash
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.52 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestQuota
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.236 sec - in 
org.apache.hadoop.hdfs.TestQuota
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.421 sec - in 
org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Running org.apache.hadoop.hdfs.TestDatanodeRegistration
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.058 sec - in 
org.apache.hadoop.hdfs.TestDatanodeRegistration
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.214 sec - in 
org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestDFSShell
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.489 sec - 
in org.apache.hadoop.hdfs.TestDFSShell
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.339 sec - in 
org.apache.hadoop.hdfs.TestListFilesInDFS
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Tests run: 4, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 0.163 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Running org.apache.hadoop.hdfs.TestPeerCache
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.317 sec - in 
org.apache.hadoop.hdfs.TestPeerCache
Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 8.434 sec - in 
org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.008 sec - in 
org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 152.644 sec - 
in org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.9

Hadoop-Hdfs-0.23-Build - Build # 760 - Still Failing

2013-10-14 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/760/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7871 lines...]
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3313,27]
 cannot find symbol
[ERROR] symbol  : method 
setUnfinishedMessage(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto)
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3319,8]
 cannot find symbol
[ERROR] symbol  : method makeExtensionsImmutable()
[ERROR] location: class 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3330,10]
 cannot find symbol
[ERROR] symbol  : method 
ensureFieldAccessorsInitialized(java.lang.Class,java.lang.Class)
[ERROR] location: class com.google.protobuf.GeneratedMessage.FieldAccessorTable
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3335,31]
 cannot find symbol
[ERROR] symbol  : class AbstractParser
[ERROR] location: package com.google.protobuf
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3344,4]
 method does not override or implement a method from a supertype
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[4098,12]
 cannot find symbol
[ERROR] symbol  : method 
ensureFieldAccessorsInitialized(java.lang.Class,java.lang.Class)
[ERROR] location: class com.google.protobuf.GeneratedMessage.FieldAccessorTable
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[4371,104]
 cannot find symbol
[ERROR] symbol  : method getUnfinishedMessage()
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5264,8]
 getUnknownFields() in 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto 
cannot override getUnknownFields() in com.google.protobuf.GeneratedMessage; 
overridden method is final
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5284,19]
 cannot find symbol
[ERROR] symbol  : method 
parseUnknownField(com.google.protobuf.CodedInputStream,com.google.protobuf.UnknownFieldSet.Builder,com.google.protobuf.ExtensionRegistryLite,int)
[ERROR] location: class 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5314,15]
 cannot find symbol
[ERROR] symbol  : method 
setUnfinishedMessage(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto)
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5317,27]
 cannot find symbol
[ERROR] symbol  : method 
setUnfinishedMessage(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto)
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5323,8]
 cannot find symbol
[ERROR] symbol  : method makeExtensionsImmutable()
[ERROR] location: class 
org.apache.hadoop.hdfs

Build failed in Jenkins: Hadoop-Hdfs-0.23-Build #760

2013-10-14 Thread Apache Jenkins Server
See 

--
[...truncated 7678 lines...]
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[270,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[281,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[10533,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[10544,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[8357,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[8368,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[12641,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[12652,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[9741,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[9752,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[1781,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[1792,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[5338,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[5349,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[6290,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[6301,30]
 cannot find sym

[jira] [Created] (HDFS-5355) Hsync Metrics error

2013-10-14 Thread haosdent (JIRA)
haosdent created HDFS-5355:
--

 Summary: Hsync Metrics error
 Key: HDFS-5355
 URL: https://issues.apache.org/jira/browse/HDFS-5355
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: haosdent
Priority: Minor


An earlier comment in TestDataNodeMetrics.java reads "Expect two syncs, one from 
the hsync, one on close". In fact, sync is not called on close; instead, a single 
hsync call produces two syncs, one for the checksum file and one for the block 
file. I modified the related code so that the hsync metrics follow the same 
pattern as the hflush metrics.
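A sketch of the corrected expectation in the test (fs, path, and 
datanodeMetricsName are assumed to come from the test fixture, and the counter 
name is an assumption):

{code}
// Sketch: a single hsync triggers two syncs (checksum file + block file)
// and close() adds none, so after one hsync we expect a count of 2.
@Test
public void testHsyncCountedTwice() throws Exception {
  FSDataOutputStream out = fs.create(new Path("/hsync-metrics"));
  out.write(1);
  out.hsync();
  MetricsRecordBuilder rb = getMetrics(datanodeMetricsName);
  assertCounter("FsyncCount", 2L, rb);  // counter name is an assumption
  out.close();  // no additional sync expected here
}
{code}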


