[ 
https://issues.apache.org/jira/browse/HADOOP-13327?focusedWorklogId=488298&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-488298
 ]

ASF GitHub Bot logged work on HADOOP-13327:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 22/Sep/20 14:26
            Start Date: 22/Sep/20 14:26
    Worklog Time Spent: 10m 
      Work Description: steveloughran commented on pull request #2102:
URL: https://github.com/apache/hadoop/pull/2102#issuecomment-696757197


   TestRawLocalContract create is failing until we do our own 
BufferedOutputStream passthrough of HADOOP-16830 / #2323.
   
   The HDFS checksum failures are about checksums not being found, which 
implies they weren't being written. That goes near output streams, doesn't it? 
So there is a risk this is a genuine regression and not just "HDFS tests being 
flaky".
   
   ```
   org.apache.hadoop.fs.PathIOException: `/striped/stripedFileChecksum1': Fail to get block checksum for LocatedStripedBlock{BP-1893408133-172.17.0.3-1600724641689:blk_-9223372036854775792_1001; getBlockSize()=37748736; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[127.0.0.1:33131,DS-5c8811e5-39af-4967-b3a7-3b76f95b0317,DISK], DatanodeInfoWithStorage[127.0.0.1:38603,DS-82754ee5-286a-402b-9861-b3ebbc149849,DISK], DatanodeInfoWithStorage[127.0.0.1:44379,DS-080d6ce9-cb6a-4f80-b37a-e63ecc31d9bc,DISK], DatanodeInfoWithStorage[127.0.0.1:34723,DS-c6c2aa4e-639b-4d85-9564-05631c8c5b79,DISK], DatanodeInfoWithStorage[127.0.0.1:40061,DS-54eb3c16-b9ce-4a5d-a7a3-f33b635579b0,DISK], DatanodeInfoWithStorage[127.0.0.1:45525,DS-3c9ac850-9273-4d2d-933c-2ac3b4b30308,DISK], DatanodeInfoWithStorage[127.0.0.1:45537,DS-9b1d805c-3b0e-4c84-ad0e-f454817b6829,DISK], DatanodeInfoWithStorage[127.0.0.1:44697,DS-e06897c8-c8a5-41d6-b5bc-57c7339bbf9b,DISK]]; indices=[1, 2, 3, 4, 5, 6, 7, 8]}
        at org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlocks(FileChecksumHelper.java:640)
        at org.apache.hadoop.hdfs.FileChecksumHelper$FileChecksumComputer.compute(FileChecksumHelper.java:252)
        at org.apache.hadoop.hdfs.DFSClient.getFileChecksumInternal(DFSClient.java:1851)
        at org.apache.hadoop.hdfs.DFSClient.getFileChecksumWithCombineMode(DFSClient.java:1871)
        at org.apache.hadoop.hdfs.DistributedFileSystem$34.doCall(DistributedFileSystem.java:1891)
        at org.apache.hadoop.hdfs.DistributedFileSystem$34.doCall(DistributedFileSystem.java:1888)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileChecksum(DistributedFileSystem.java:1905)
        at org.apache.hadoop.hdfs.TestFileChecksum.getFileChecksum(TestFileChecksum.java:584)
        at org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksumWithMissedDataBlocksRangeQuery(TestFileChecksum.java:295)
        at org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksumWithMissedDataBlocksRangeQuery1(TestFileChecksum.java:312)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
        at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.lang.Thread.run(Thread.java:748)
   ```
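
   The buffering hazard described above can be reproduced with plain 
java.io, independent of Hadoop: bytes written through a BufferedOutputStream 
stay invisible to the wrapped stream until the buffer is flushed, so any 
sync or checksum logic operating on the inner stream sees stale (or no) 
data. A minimal sketch of the generic failure mode, using only stdlib 
classes (the class name is hypothetical):

   ```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class BufferingVisibility {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream inner = new ByteArrayOutputStream();
        // Default-sized 8 KB buffer in front of the inner stream.
        BufferedOutputStream out = new BufferedOutputStream(inner, 8192);

        out.write("hello".getBytes());
        // The inner stream has not seen the bytes yet: they are sitting
        // in the buffer. Anything computing checksums or durability off
        // the inner stream at this point operates on zero bytes.
        System.out.println("before flush: " + inner.size()); // 0

        out.flush();
        // Only after flush() does the wrapped stream observe the data.
        System.out.println("after flush:  " + inner.size()); // 5
    }
}
   ```

   This is why a wrapper that buffers must forward flush/sync calls itself 
rather than rely on the inner stream's behaviour.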


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 488298)
    Time Spent: 2h 10m  (was: 2h)

> Add OutputStream + Syncable to the Filesystem Specification
> -----------------------------------------------------------
>
>                 Key: HADOOP-13327
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13327
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs
>    Affects Versions: 2.8.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HADOOP-13327-002.patch, HADOOP-13327-003.patch, 
> HADOOP-13327-branch-2-001.patch
>
>          Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Write down what a Filesystem output stream should do. While the core API is 
> defined in Java, that doesn't say what's expected about visibility, 
> durability, etc., and the Hadoop Syncable interface is entirely ours to define.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
