[
https://issues.apache.org/jira/browse/HDFS-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17312160#comment-17312160
]
David Coste commented on HDFS-13546:
------------------------------------
Thanks a lot for sharing your workaround.
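For anyone else landing here: the guarded sync from the workaround below can be sketched outside Hadoop roughly as follows. This is only an illustration under my own assumptions, not Hadoop's actual code: {{IS_WINDOWS}} stands in for Hadoop's {{Shell.WINDOWS}}, and {{fsyncDirOrFile}} is a hypothetical helper name.

{code:java}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DirSyncSketch {
    // Stand-in for Hadoop's Shell.WINDOWS (assumption: a plain os.name check).
    static final boolean IS_WINDOWS =
            System.getProperty("os.name").toLowerCase().contains("windows");

    /**
     * Force a file or directory to stable storage. Directories are skipped on
     * Windows, where FileChannel.open on a directory throws
     * AccessDeniedException (the failure shown in the stack trace).
     */
    static void fsyncDirOrFile(Path path, boolean isDir) throws IOException {
        if (isDir && IS_WINDOWS) {
            return; // FileChannel cannot open a directory handle on Windows
        }
        try (FileChannel channel = FileChannel.open(path,
                isDir ? StandardOpenOption.READ : StandardOpenOption.WRITE)) {
            channel.force(true); // flush content and metadata to disk
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("dirsync");
        Path file = Files.createTempFile(dir, "blk", ".dat");
        fsyncDirOrFile(file, false); // regular file: synced on every platform
        fsyncDirOrFile(dir, true);   // directory: skipped on Windows
        System.out.println("sync completed");
    }
}
{code}

On Linux the directory branch works because a directory can be opened read-only and fsynced; on Windows that same open is what throws the AccessDeniedException, which is why the guard skips it.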
> local replica can't sync directory on Windows
> ---------------------------------------------
>
> Key: HDFS-13546
> URL: https://issues.apache.org/jira/browse/HDFS-13546
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Affects Versions: 3.1.0
> Environment: Windows 10 64bit
> JDK 1.8.0_172
> Hadoop 3.1.0
> HBase 2.0.0
> Reporter: SonixLegend
> Priority: Minor
>
> I run Hadoop and HBase on Windows as a development environment, but I got an
> error when I started the HBase master node on the same machine that was
> running HDFS.
> {code:java}
> 2018-05-11 18:34:52,320 INFO datanode.DataNode: PacketResponder: BP-471749493-192.168.154.244-1526032382905:blk_1073741850_1026, type=LAST_IN_PIPELINE: Thread is interrupted.
> 2018-05-11 18:34:52,320 INFO datanode.DataNode: PacketResponder: BP-471749493-192.168.154.244-1526032382905:blk_1073741850_1026, type=LAST_IN_PIPELINE terminating
> 2018-05-11 18:34:52,321 INFO datanode.DataNode: opWriteBlock BP-471749493-192.168.154.244-1526032382905:blk_1073741850_1026 received exception java.io.IOException: Failed to sync C:\hadoop\data\hdfs\data1\current\BP-471749493-192.168.154.244-1526032382905\current\rbw
> 2018-05-11 18:34:52,321 ERROR datanode.DataNode: LAPTOP-460HNFM9:9866:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:10842 dst: /127.0.0.1:9866
> java.io.IOException: Failed to sync C:\hadoop\data\hdfs\data1\current\BP-471749493-192.168.154.244-1526032382905\current\rbw
> 	at org.apache.hadoop.hdfs.server.datanode.LocalReplica.fsyncDirectory(LocalReplica.java:523)
> 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.flushOrSync(BlockReceiver.java:429)
> 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:809)
> 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:971)
> 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:890)
> 	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
> 	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
> 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289)
> 	at java.lang.Thread.run(Thread.java:748)
> Caused by: java.nio.file.AccessDeniedException: C:\hadoop\data\hdfs\data1\current\BP-471749493-192.168.154.244-1526032382905\current\rbw
> 	at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
> 	at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
> 	at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
> 	at sun.nio.fs.WindowsFileSystemProvider.newFileChannel(WindowsFileSystemProvider.java:115)
> 	at java.nio.channels.FileChannel.open(FileChannel.java:287)
> 	at java.nio.channels.FileChannel.open(FileChannel.java:335)
> 	at org.apache.hadoop.io.IOUtils.fsync(IOUtils.java:421)
> 	at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.dirSync(FileIoProvider.java:169)
> 	at org.apache.hadoop.hdfs.server.datanode.LocalReplica.fsyncDirectory(LocalReplica.java:521)
> 	... 8 more
> {code}
> I never got this error with Hadoop 3.0.0 and HBase 1.4.x. I found that this is
> a Windows issue: Windows cannot sync a directory, because FileChannel cannot
> open a directory with any permission option. I changed the code in
> IOUtils.fsync, and it works for me.
> {code:java}
> // Skip the sync for directories on Windows, where FileChannel cannot open a directory.
> if (!isDir || !Shell.WINDOWS) {
>   try (FileChannel channel = FileChannel.open(fileToSync.toPath(),
>       isDir ? StandardOpenOption.READ : StandardOpenOption.WRITE)) {
>     fsync(channel, isDir);
>   }
> }
> {code}
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]