All tests are failing because they can't write to /tmp:

    [exec] ======================================================================
    [exec] ======================================================================
    [exec]     Applying patch.
    [exec] ======================================================================
    [exec] ======================================================================
    [exec]
    [exec]
    [exec] (Stripping trailing CRs from patch.)
    [exec] patch: **** Can't create file /tmp/po2hai.T : Permission denied
    [exec] PATCH APPLICATION FAILED

When I look at /tmp, I see this:

-bash-3.00$ ls -lad /tmp
drwxr-xr-x   3 root     root         259 Jun  3 18:34 /tmp

I set it so it's writable by all:

-bash-3.00$ ls -lad /tmp
drwxrwxrwx   3 root     root         259 Jun  3 22:41 /tmp
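
(The exact chmod invocation isn't quoted above; a command along these lines
would produce that drwxrwxrwx mode. The conventional mode for /tmp is 1777,
which also sets the sticky bit.)

-bash-3.00$ chmod a+w /tmp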

Perhaps it'll stick?  And tests will start to work again?

St.Ack


Hadoop QA (JIRA) wrote:
[ https://issues.apache.org/jira/browse/HADOOP-3177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12602139#action_12602139 ]
Hadoop QA commented on HADOOP-3177:
-----------------------------------

-1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12383320/3177_20080603.patch
  against trunk revision 662913.

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 9 new or modified tests.

    -1 patch.  The patch command could not apply the patch.

Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2561/console

This message is automatically generated.

Expose DFSOutputStream.fsync API though the FileSystem interface
----------------------------------------------------------------

                Key: HADOOP-3177
                URL: https://issues.apache.org/jira/browse/HADOOP-3177
            Project: Hadoop Core
         Issue Type: Improvement
         Components: dfs
           Reporter: dhruba borthakur
           Assignee: Tsz Wo (Nicholas), SZE
        Attachments: 3177_20080603.patch


In the current code, there is a DFSOutputStream.fsync() API that allows a 
client to flush all buffered data to the datanodes and also persist block 
locations on the namenode. This API should be exposed through the generic 
FileSystem API in org.apache.hadoop.fs.
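
As a purely illustrative sketch (the interface and method names below are
assumptions, not the contents of 3177_20080603.patch), exposing the call
generically might look roughly like this:

    // Illustrative only: names and placement are assumptions, not the actual patch.
    package org.apache.hadoop.fs;

    import java.io.IOException;

    /** An output stream that can force its buffered data to durable storage. */
    public interface Syncable {
      /**
       * Flush client-side buffers and ask the underlying file system to make the
       * data durable; for DFS this means pushing buffered bytes to the datanodes
       * and persisting block locations on the namenode.
       */
      void fsync() throws IOException;
    }

A concrete stream such as DFSOutputStream (or a wrapper like FSDataOutputStream)
could then implement the interface and delegate to its existing fsync logic.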

