[jira] [Resolved] (HDFS-4967) Generate block ID sequentially cannot work with QJM HA

2013-07-09 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HDFS-4967.
--

Resolution: Not A Problem

> Generate block ID sequentially cannot work with QJM HA
> --
>
> Key: HDFS-4967
> URL: https://issues.apache.org/jira/browse/HDFS-4967
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, hdfs-client, namenode
>Affects Versions: 3.0.0
>Reporter: Fengdong Yu
>Assignee: Arpit Agarwal
>
> There are two NameNodes: one active, the other acting as standby, with QJM 
> HA configured.
> After HDFS-4645 was committed to trunk, the following error appeared during 
> NameNode startup:
> {code}
> 2013-07-09 11:28:45,394 FATAL 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> java.lang.IllegalStateException: Cannot skip to less than the current value 
> (=1073741824), where newValue=0
> at 
> org.apache.hadoop.util.SequentialNumber.skipTo(SequentialNumber.java:58)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setLastAllocatedBlockId(FSNamesystem.java:5124)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.load(FSImageFormat.java:278)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:809)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:798)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:653)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:623)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:260)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:719)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:552)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:401)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:435)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:607)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:592)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1172)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1238)
> 2013-07-09 11:28:45,397 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1
> {code}
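
For context, a minimal sketch of the counter contract that produces this 
error. The class below mirrors the documented behavior of 
org.apache.hadoop.util.SequentialNumber (skipTo() rejects any value below the 
current one); it is a simplified illustration under that assumption, not the 
actual Hadoop source:

{code}
import java.util.concurrent.atomic.AtomicLong;

// Simplified sequential counter. skipTo() refuses to move backwards,
// because doing so could reissue already-allocated IDs. The startup
// failure above is this check firing: the counter already sits at
// 1073741824 (2^30) while the fsimage being loaded reports
// lastAllocatedBlockId = 0.
class SequentialCounter {
  private final AtomicLong current;

  SequentialCounter(long initialValue) {
    current = new AtomicLong(initialValue);
  }

  long next() {
    return current.incrementAndGet();
  }

  void skipTo(long newValue) {
    while (true) {
      long c = current.get();
      if (newValue < c) {
        throw new IllegalStateException("Cannot skip to less than the current value (="
            + c + "), where newValue=" + newValue);
      }
      if (current.compareAndSet(c, newValue)) {
        return;
      }
    }
  }
}
{code}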

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4972) [branch-0.23] permission check and operation are done in a separate lock for getBlockLocations()

2013-07-09 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-4972:


 Summary: [branch-0.23] permission check and operation are done in 
a separate lock for getBlockLocations()
 Key: HDFS-4972
 URL: https://issues.apache.org/jira/browse/HDFS-4972
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.8
Reporter: Kihwal Lee
Assignee: Kihwal Lee


For the getBlockLocations() call, the read lock is acquired for the permission 
check, but unlike other namenode methods, this happens outside the lock held 
for the actual operation. As a result, the lock is acquired and released 
twice. This has two implications:
- permissions can change between the two lock acquisitions
- lock fairness will penalize getBlockLocations()

This was fixed in trunk and branch-2 as part of HDFS-4679, but not in 
branch-0.23.
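
A minimal sketch of the locking shape at issue, using a plain 
ReentrantReadWriteLock in place of the real FSNamesystem lock (the class and 
method names below are illustrative, not the actual branch-0.23 code):

{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

class LockingSketch {
  // Fair queuing: every acquisition waits its turn, which is why
  // taking the lock twice penalizes the caller twice.
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock(true);

  // Problematic shape: two separate acquisitions. Permissions can
  // change in the window between the unlock and the second lock.
  void getBlockLocationsTwoLocks(String src) {
    lock.readLock().lock();
    try {
      checkPathAccess(src);
    } finally {
      lock.readLock().unlock();
    }
    lock.readLock().lock();
    try {
      lookUpBlocks(src);
    } finally {
      lock.readLock().unlock();
    }
  }

  // Fixed shape: one acquisition covers both the permission check
  // and the block lookup.
  void getBlockLocationsSingleLock(String src) {
    lock.readLock().lock();
    try {
      checkPathAccess(src);
      lookUpBlocks(src);
    } finally {
      lock.readLock().unlock();
    }
  }

  private void checkPathAccess(String src) { /* permission check stub */ }

  private void lookUpBlocks(String src) { /* block lookup stub */ }
}
{code}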

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4971) Move IO operations out of locking in OpenFileCtx

2013-07-09 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-4971:
---

 Summary: Move IO operations out of locking in OpenFileCtx
 Key: HDFS-4971
 URL: https://issues.apache.org/jira/browse/HDFS-4971
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Jing Zhao


Currently, some IO operations in OpenFileCtx (such as writing data to HDFS and 
dumping to local disk) may be performed while holding a lock, which can block 
the processing of incoming write requests. This jira aims to optimize 
OpenFileCtx by moving the IO operations out of the locked sections.
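
A minimal sketch of the intended refactoring (illustrative only; the real 
OpenFileCtx state is more involved): dequeue the pending data while holding 
the lock, then perform the slow IO with the lock released, so incoming write 
requests are no longer blocked for the duration of the IO:

{code}
import java.io.IOException;
import java.io.OutputStream;
import java.util.ArrayDeque;
import java.util.Queue;

class WriteContextSketch {
  private final Queue<byte[]> pendingWrites = new ArrayDeque<byte[]>();
  private final OutputStream out;

  WriteContextSketch(OutputStream out) {
    this.out = out;
  }

  // Callers enqueue incoming write requests; this only touches the
  // queue, so it returns quickly even while a flush is in progress.
  synchronized void enqueue(byte[] data) {
    pendingWrites.add(data);
  }

  // Before: the write happens inside the synchronized method, so
  // enqueue() blocks on the monitor for the duration of the IO.
  synchronized void flushHoldingLock() throws IOException {
    byte[] data;
    while ((data = pendingWrites.poll()) != null) {
      out.write(data); // slow IO performed under the lock
    }
  }

  // After: only the queue manipulation is synchronized; the IO runs
  // with no lock held.
  void flushOutsideLock() throws IOException {
    while (true) {
      byte[] data;
      synchronized (this) {
        data = pendingWrites.poll();
      }
      if (data == null) {
        break;
      }
      out.write(data); // slow IO performed with the lock released
    }
  }
}
{code}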

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: [VOTE] Release Apache Hadoop 0.23.9

2013-07-09 Thread Thomas Graves
Thanks everyone for voting.
The vote has passed with 9 +1's (4 non-binding and 5 binding) and 0 -1's.
I will push the release.

Tom

On 7/1/13 12:20 PM, "Thomas Graves" wrote:

>I've created a release candidate (RC0) for hadoop-0.23.9 that I would like
>to release.
>
>The RC is available at:
>http://people.apache.org/~tgraves/hadoop-0.23.9-candidate-0/
>The RC tag in svn is here:
>http://svn.apache.org/viewvc/hadoop/common/tags/release-0.23.9-rc0/
>
>The maven artifacts are available via repository.apache.org.
>
>Please try the release and vote; the vote will run for the usual 7 days,
>until July 8th.
>
>I am +1 (binding).
>
>thanks,
>Tom Graves



Re: Issues while setting up projects in Eclipse - Need Help

2013-07-09 Thread Zhijie Shen
Hi Vivek,


On Tue, Jul 9, 2013 at 8:14 AM, Vivek Ganesan wrote:

> Hi,
>
> I am trying to configure eclipse for hadoop development. I am working on my
> first JIRA.
>
> I followed the instructions at
> http://wiki.apache.org/hadoop/EclipseEnvironment But, I see that the
> following projects have build failures.  Is this expected?  Can I continue
> my programming ignoring these errors?
>
> hadoop-common
> hadoop-hdfs
> hadoop-mapreduce-client-common
> hadoop-yarn-api
> hadoop-yarn-server-common
> hadoop-yarn-server-nodemanager
>
> Or, am I missing something here?
>
I've run into a similar problem recently. The build path of some
sub-projects is not completely correct. You probably need to fix it in
Eclipse manually, or you can ignore the errors.

>
> Also, I wanted to ask if there is some IRC channel for HDFS development,
> where people can quickly communicate with each other regarding issues.
>
AFAIK, folks here seem to prefer using the mailing lists for discussion.

>
> Thank you.
>
> Regards,
> Vivek Ganesan
>



-- 
Zhijie Shen
Hortonworks Inc.
http://hortonworks.com/


[jira] [Created] (HDFS-4969) TestHttpFSFWithWebhdfsFileSystem has two test failures

2013-07-09 Thread Robert Kanter (JIRA)
Robert Kanter created HDFS-4969:
---

 Summary: TestHttpFSFWithWebhdfsFileSystem has two test failures
 Key: HDFS-4969
 URL: https://issues.apache.org/jira/browse/HDFS-4969
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test, webhdfs
Affects Versions: 3.0.0, 2.0.5-alpha
Reporter: Robert Kanter


org.apache.hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem has two test 
failures due to an NPE:

{noformat}
testOperation[7](org.apache.hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem)
  Time elapsed: 375 sec  <<< ERROR!
java.lang.NullPointerException
at org.apache.hadoop.hdfs.web.JsonUtil.toFileStatus(JsonUtil.java:251)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:629)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:639)
at 
org.apache.hadoop.fs.http.client.BaseTestHttpFSWith.testListStatus(BaseTestHttpFSWith.java:292)
at 
org.apache.hadoop.fs.http.client.BaseTestHttpFSWith.operation(BaseTestHttpFSWith.java:506)
at 
org.apache.hadoop.fs.http.client.BaseTestHttpFSWith.testOperation(BaseTestHttpFSWith.java:558)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.apache.hadoop.test.TestHdfsHelper$HdfsStatement.evaluate(TestHdfsHelper.java:74)
at 
org.apache.hadoop.test.TestDirHelper$1.evaluate(TestDirHelper.java:106)
at 
org.apache.hadoop.test.TestDirHelper$1.evaluate(TestDirHelper.java:106)
at 
org.apache.hadoop.test.TestJettyHelper$1.evaluate(TestJettyHelper.java:53)
at 
org.apache.hadoop.test.TestExceptionHelper$1.evaluate(TestExceptionHelper.java:42)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:24)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at 
org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at 
org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
{noformat}
{noformat}
testOperationDoAs[7](org.apache.hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem)
  Time elapsed: 401 sec  <<< ERROR!
java.lang.NullPointerException
at org
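
The trace points at JsonUtil.toFileStatus dereferencing a null map, which 
suggests the JSON response was missing the expected "FileStatus" object (the 
key under which the WebHDFS REST API wraps a file status). A hypothetical 
sketch of the kind of null guard that avoids this class of failure; the method 
shape below is an illustration, not the actual Hadoop code:

{code}
import java.util.Map;

class JsonParsingSketch {
  // Hypothetical parser: if the server response lacks the expected
  // "FileStatus" object, an unguarded json.get(...) chain throws an
  // NPE like the one in the traces above.
  static long fileLength(Map<?, ?> json) {
    if (json == null) {
      throw new IllegalStateException("missing JSON response");
    }
    Object status = json.get("FileStatus");
    if (!(status instanceof Map)) {
      // Fail with a descriptive error instead of an NPE deeper down.
      throw new IllegalStateException("response has no FileStatus object: " + json);
    }
    Object length = ((Map<?, ?>) status).get("length");
    if (!(length instanceof Number)) {
      throw new IllegalStateException("FileStatus has no length: " + status);
    }
    return ((Number) length).longValue();
  }
}
{code}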

Issues while setting up projects in Eclipse - Need Help

2013-07-09 Thread Vivek Ganesan
Hi,

I am trying to configure eclipse for hadoop development. I am working on my
first JIRA.

I followed the instructions at
http://wiki.apache.org/hadoop/EclipseEnvironment But, I see that the
following projects have build failures.  Is this expected?  Can I continue
my programming ignoring these errors?

hadoop-common
hadoop-hdfs
hadoop-mapreduce-client-common
hadoop-yarn-api
hadoop-yarn-server-common
hadoop-yarn-server-nodemanager

Or, am I missing something here?

Also, I wanted to ask if there is some IRC channel for HDFS development,
where people can quickly communicate with each other regarding issues.

Thank you.

Regards,
Vivek Ganesan


Build failed in Jenkins: Hadoop-Hdfs-trunk #1455

2013-07-09 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1455/

Changes:

[cnauroth] YARN-894. NodeHealthScriptRunner timeout checking is inaccurate on 
Windows. Contributed by Chuan Liu.

[tucu] HDFS-4841. FsShell commands using secure webhfds fail ClientFinalizer 
shutdown hook. (rkanter via tucu)

[vinodkv] YARN-791. Changed RM APIs and web-services related to nodes to ensure 
that both are consistent with each other. Contributed by Sandy Ryza.

[cnauroth] MAPREDUCE-5187. Create mapreduce command scripts on Windows. 
Contributed by Chuan Liu.

[suresh] HADOOP-9688. Adding a file missed in the commit 1500843

[suresh] HADOOP-9688. Add globally unique Client ID to RPC requests. 
Contributed by Suresh Srinivas.

[cnauroth] MAPREDUCE-5366. TestMRAsyncDiskService fails on Windows. Contributed 
by Chuan Liu.

--
[...truncated 10638 lines...]
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 110.218 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.347 sec
Running org.apache.hadoop.hdfs.web.TestJsonUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.171 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.983 sec
Running org.apache.hadoop.hdfs.web.TestAuthFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.401 sec
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.996 sec
Running org.apache.hadoop.hdfs.TestDFSClientRetries
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 145.762 sec
Running org.apache.hadoop.hdfs.TestListPathServlet
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.292 sec
Running org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 0.164 sec
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 116.272 sec
Running org.apache.hadoop.hdfs.TestFileCreationEmpty
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.254 sec
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.902 sec
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 81.769 sec
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.421 sec
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.094 sec
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.827 sec
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.325 sec
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.702 sec
Running org.apache.hadoop.hdfs.TestShortCircuitLocalRead
Tests run: 10, Failures: 0, Errors: 0, Skipped: 10, Time elapsed: 0.194 sec
Running org.apache.hadoop.hdfs.TestFileInputStreamCache
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.201 sec
Running org.apache.hadoop.hdfs.TestRestartDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.615 sec
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.829 sec
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.007 sec
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.79 sec
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.923 sec
Running org.apache.hadoop.hdfs.TestQuota
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.125 sec
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.539 sec
Running org.apache.hadoop.hdfs.TestDatanodeRegistration
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.323 sec
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.13 sec
Running org.apache.hadoop.hdfs.TestDFSShell
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.473 sec
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.457 sec
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Tests run: 4, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 0.163 sec
Running org.apache.hadoop.hdfs.TestPeerCache
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.317 

Hadoop-Hdfs-trunk - Build # 1455 - Still Failing

2013-07-09 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1455/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 10831 lines...]
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.0:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  FAILURE 
[1:32:17.423s]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [1.739s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 1:32:20.023s
[INFO] Finished at: Tue Jul 09 13:06:04 UTC 2013
[INFO] Final Memory: 37M/409M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on 
project hadoop-hdfs: ExecutionException; nested exception is 
java.util.concurrent.ExecutionException: java.lang.RuntimeException: The forked 
VM terminated without saying properly goodbye. VM crash or System.exit called ? 
-> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating MAPREDUCE-5366
Updating YARN-894
Updating MAPREDUCE-5187
Updating HADOOP-9688
Updating YARN-791
Updating HDFS-4841
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.

Jenkins build is back to stable : Hadoop-Hdfs-0.23-Build #663

2013-07-09 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/663/



Regarding hsync

2013-07-09 Thread Hemant Bhanawat
Hi, 

I am currently working on hadoop version 2.0.*. 

Currently, hsync does not update the file size on the namenode. So, if my 
process dies after calling hsync but before closing the file, the file is left 
with an inconsistent size on the namenode. I would like to fix this file size. 
Is there a way to do that? A workaround that I have come across is to reopen 
the file stream in append mode and then close it, which fixes the file size on 
the namenode. Is this a reliable solution? 

Thanks, 
Hemant
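
A minimal sketch of the append-and-close workaround described above, using the 
public FileSystem API (the path is illustrative, and whether this behaves 
reliably, e.g. when another client still holds the lease, is exactly the open 
question):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FixFileLength {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/data/partially-synced-file"); // illustrative path

    // Reopen in append mode and immediately close. Completing the
    // file forces the namenode to record the up-to-date length,
    // covering the bytes that were hsync'ed before the crash.
    // Note: append() will fail if another client still holds the
    // lease on the file.
    FSDataOutputStream out = fs.append(file);
    out.close();
  }
}
{code}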