[ https://issues.apache.org/jira/browse/HDFS-14890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944256#comment-16944256 ]

hirik edited comment on HDFS-14890 at 10/4/19 6:58 AM:
-------------------------------------------------------

Hi [~elgoiri] [~swagle],

Unable to start the DataNode after a previously successful start:

org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0
    at org.apache.hadoop.hdfs.server.datanode.checker.StorageLocationChecker.check(StorageLocationChecker.java:231)
    at org.apache.hadoop.hdfs.server.datanode.checker.StorageLocationChecker.check(StorageLocationChecker.java:231)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2799)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2714)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2688)
    at com.slog.dfs.hdfs.dn.DataNodeServiceImpl.delayedStart(DataNodeServiceImpl.java:144)
    at com.slog.startup.service.ServiceHandler.processDelayedStart(ServiceHandler.java:404)
    at com.slog.startup.service.ServiceHandler.startConfiguredServices(ServiceHandler.java:115)
    at com.slog.startup.server.AppServerImpl.startServer(AppServerImpl.java:91)
    at com.slog.startup.server.AbstractServerImpl.start(AbstractServerImpl.java:52)
    at com.slog.startup.server.AppServerImpl.start(AppServerImpl.java:37)
    at com.slog.startup.CommonStarter.startServer(CommonStarter.java:41)
    at com.slog.startup.ApplicationStarterDev.main(ApplicationStarterDev.java:28)

 

Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Ljava/lang/String;)Lorg/apache/hadoop/io/nativeio/NativeIO$POSIX$Stat;
    at org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Native Method)
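
The UnsatisfiedLinkError indicates the JVM never loaded the native hadoop library (hadoop.dll on Windows) that implements NativeIO$POSIX.stat, so every configured volume fails its check and the DataNode aborts. Below is a minimal diagnostic sketch, not part of Hadoop or the attached patch, that can be run on the affected host (assuming hadoop-common 3.2.1 on the classpath and java.library.path pointing at the directory holding hadoop.dll) to confirm whether the native library actually loads:

{code:java}
// Diagnostic sketch only: prints whether the native hadoop library that backs
// NativeIO$POSIX.stat was loaded. "false" here would explain the
// UnsatisfiedLinkError in the DataNode log above.
import org.apache.hadoop.io.nativeio.NativeIO;
import org.apache.hadoop.util.NativeCodeLoader;

public class NativeIoCheck {
  public static void main(String[] args) {
    System.out.println("native hadoop library loaded: "
        + NativeCodeLoader.isNativeCodeLoaded());
    System.out.println("NativeIO available: " + NativeIO.isAvailable());
  }
}
{code}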

 



> HDFS is not starting in Windows
> -------------------------------
>
>                 Key: HDFS-14890
>                 URL: https://issues.apache.org/jira/browse/HDFS-14890
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.2.1
>         Environment: Windows 10.
>            Reporter: hirik
>            Priority: Blocker
>         Attachments: HDFS-14890.01.patch
>
>
> Hi,
> HDFS NameNode and JournalNode are not starting on a Windows machine. The 
> related exception from the logs is below.
> Caused by: java.lang.UnsupportedOperationException
> at java.base/java.nio.file.Files.setPosixFilePermissions(Files.java:2155)
> at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:452)
> at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:591)
> at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:613)
> at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:188)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1206)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:422)
> at com.slog.dfs.hdfs.nn.NameNodeServiceImpl.delayedStart(NameNodeServiceImpl.java:147)
>  
> Code changes related to this issue: 
> [https://github.com/apache/hadoop/commit/07e3cf952eac9e47e7bd5e195b0f9fc28c468313#diff-1a56e69d50f21b059637cfcbf1d23f11]
>  
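
For the quoted NameNode/JournalNode failure, java.nio.file.Files.setPosixFilePermissions throws UnsupportedOperationException because the default Windows/NTFS file store does not provide the POSIX attribute view. A minimal sketch, assuming nothing about the attached HDFS-14890.01.patch, of one portable way to guard such a call (the helper name restrictToOwner and the permission string are illustrative only):

{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFileAttributeView;
import java.nio.file.attribute.PosixFilePermissions;

public class PortablePermissions {
  // Hypothetical helper: restrict a storage directory to its owner without
  // assuming a POSIX file system.
  static void restrictToOwner(Path dir) throws IOException {
    if (Files.getFileStore(dir)
        .supportsFileAttributeView(PosixFileAttributeView.class)) {
      // POSIX file systems (Linux, macOS): owner-only rwx.
      Files.setPosixFilePermissions(dir,
          PosixFilePermissions.fromString("rwx------"));
    } else {
      // Windows/NTFS: approximate owner-only access with java.io.File flags
      // instead of throwing UnsupportedOperationException.
      File f = dir.toFile();
      f.setReadable(false, false);   // clear for everyone...
      f.setWritable(false, false);
      f.setExecutable(false, false);
      f.setReadable(true, true);     // ...then re-enable for the owner only
      f.setWritable(true, true);
      f.setExecutable(true, true);
    }
  }
}
{code}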


