[ https://issues.apache.org/jira/browse/HDFS-483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hairong Kuang resolved HDFS-483.
--------------------------------

    Resolution: Fixed
    Fix Version/s: 0.22.0
                   0.21.0

> Data transfer (aka pipeline) implementation cannot tolerate exceptions
> ----------------------------------------------------------------------
>
>                 Key: HDFS-483
>                 URL: https://issues.apache.org/jira/browse/HDFS-483
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node, hdfs client
>            Reporter: Tsz Wo (Nicholas), SZE
>             Fix For: 0.21.0, 0.22.0
>
>         Attachments: h483_20090709.patch, h483_20090713.patch, h483_20090717.patch, h483_20090727.patch, h483_20090730.patch, h483_20090731.patch, h483_20090806.patch, h483_20090807.patch, h483_20090807b.patch, h483_20090810.patch, h483_20090818.patch, h483_20090819.patch, h483_20090819b.patch
>
>
> Data transfer was tested with simulated exceptions as below:
> # create a file with DFS
> # write 1 byte
> # close the file
> # open the same file
> # read the 1 byte and compare results
> The file was closed successfully, but we got an IOException ("Could not get block locations...") when the file was reopened for reading.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
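For reference, a minimal sketch of the reproduction steps quoted above, using the standard org.apache.hadoop.fs.FileSystem API. The class name, the test path, and the assumption that fs.defaultFS points at a running HDFS cluster are illustrative only, not part of the original report.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SingleByteWriteRead {
  public static void main(String[] args) throws Exception {
    // Assumes fs.defaultFS resolves to a running HDFS cluster.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path p = new Path("/tmp/hdfs483-onebyte");   // hypothetical test path

    // Steps 1-3: create the file, write a single byte, close it.
    FSDataOutputStream out = fs.create(p);
    out.write(0x55);
    out.close();

    // Steps 4-5: reopen the same file, read the byte back, compare.
    // With the bug described above, the reopen/read could fail with
    // IOException("Could not get block locations...").
    FSDataInputStream in = fs.open(p);
    int b = in.read();
    in.close();

    System.out.println(b == 0x55 ? "byte matches" : "byte differs: " + b);
  }
}
{code}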