[ https://issues.apache.org/jira/browse/HDFS-9908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16781637#comment-16781637 ]

Stephen O'Donnell commented on HDFS-9908:
-----------------------------------------

Even with HADOOP-12973 in place, I have seen an instance of this problem occur 
again, in a slightly different part of the code, but I believe for the same 
reason - a disk error during the handshake process.

These are the cut-down logs:

{code}
2019-03-01 08:58:24,830 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-240961797-10.9.65.12-1392827522027 on volume /data/18/dfs/dn/current...
...
2019-03-01 08:58:27,029 WARN org.apache.hadoop.fs.CachingGetSpaceUsed: Could not get disk usage information
ExitCodeException exitCode=1: du: cannot read directory `/data/18/dfs/dn/current/BP-240961797-10.9.65.12-1392827522027/current/finalized/subdir149/subdir215': Permission denied
du: cannot read directory `/data/18/dfs/dn/current/BP-240961797-10.9.65.12-1392827522027/current/finalized/subdir149/subdir213': Permission denied
du: cannot read directory `/data/18/dfs/dn/current/BP-240961797-10.9.65.12-1392827522027/current/finalized/subdir97/subdir25': Permission denied

        at org.apache.hadoop.util.Shell.runCommand(Shell.java:601)
        at org.apache.hadoop.util.Shell.run(Shell.java:504)
        at org.apache.hadoop.fs.DU$DUShell.startRefresh(DU.java:61)
        at org.apache.hadoop.fs.DU.refresh(DU.java:53)
        at org.apache.hadoop.fs.CachingGetSpaceUsed.init(CachingGetSpaceUsed.java:84)
        at org.apache.hadoop.fs.GetSpaceUsed$Builder.build(GetSpaceUsed.java:166)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.<init>(BlockPoolSlice.java:145)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addBlockPool(FsVolumeImpl.java:881)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$2.run(FsVolumeList.java:412)
...
2019-03-01 08:58:27,043 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-240961797-10.9.65.12-1392827522027 on /data/18/dfs/dn/current: 2202ms
{code}

So we can see that a du error occurred and was logged (thanks to 
HADOOP-12973), and the block pool scan completed. However, in the 'add 
replicas to map' logic that runs next, we got another exception stemming from 
the same problem:

{code}
2019-03-01 08:58:27,564 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-240961797-10.9.65.12-1392827522027 on volume /data/18/dfs/dn/current...
...
2019-03-01 08:58:31,155 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Caught exception while adding replicas from /data/18/dfs/dn/current. Will throw later.
java.io.IOException: Invalid directory or I/O error occurred for dir: /data/18/dfs/dn/current/BP-240961797-10.9.65.12-1392827522027/current/finalized/subdir149/subdir215
        at org.apache.hadoop.fs.FileUtil.listFiles(FileUtil.java:1167)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:445)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:448)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:448)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.getVolumeMap(BlockPoolSlice.java:342)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(FsVolumeImpl.java:861)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$1.run(FsVolumeList.java:191)

< The message '2019-03-01 08:59:00,989 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-240961797-10.9.65.12-1392827522027 on volume xxx' did not appear for this volume as it failed >
{code}
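
For context on where that IOException comes from: {{java.io.File#listFiles}} 
returns null both when the path is not a directory and when it cannot be read 
(e.g. permission denied), and Hadoop's {{FileUtil.listFiles}} converts that 
null into the exception we see above. A paraphrased sketch (reconstructed from 
the stack trace, not copied from any particular release):

{code:title=FileUtil.java (paraphrased)}
  /**
   * List files in a directory, throwing IOException rather than returning
   * null when the listing fails. File#listFiles returns null both for a
   * non-directory and for an unreadable directory, so a permission-denied
   * subdir surfaces here as an IOException.
   */
  public static File[] listFiles(File dir) throws IOException {
    File[] files = dir.listFiles();
    if (files == null) {
      throw new IOException("Invalid directory or I/O error occurred for dir: "
          + dir.toString());
    }
    return files;
  }
{code}

So a directory that du could not read is also a directory the replica scan 
cannot list, which is why the same disk fault surfaces in both places.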

I believe this one is then thrown by the same logic that would have thrown the 
original du exception had it not been caught, and then the same problem 
occurs: the DN tries to add all the volumes again, finds them locked, and 
exits with an 'all volumes failed' error.

I believe this error is coming from:

DataNode.initBlockPool -> FsDatasetImpl.addBlockPool -> 
FsVolumeList.getAllVolumesMap -> Throws exception

Whereas with the original du issue, which no longer happens, the error path 
was:

DataNode.initBlockPool -> FsDatasetImpl.addBlockPool -> 
FsVolumeList.addBlockPool -> Throws exception
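
{{FsVolumeList#getAllVolumesMap}} (the method in the first path) is not quoted 
in this issue, but judging by the log messages above it follows the same 
pattern as the {{addBlockPool}} method quoted below: one thread per volume, 
per-volume exceptions collected, and the first one rethrown once all threads 
finish. A condensed paraphrase (signatures and details vary across versions):

{code:title=FsVolumeList.java (paraphrased)}
  void getAllVolumesMap(final String bpid, final ReplicaMap volumeMap,
      final RamDiskReplicaTracker ramDiskReplicaMap) throws IOException {
    final List<IOException> exceptions = Collections.synchronizedList(
        new ArrayList<IOException>());
    List<Thread> replicaAddingThreads = new ArrayList<Thread>();
    for (final FsVolumeImpl v : volumes) {
      Thread t = new Thread() {
        public void run() {
          try (FsVolumeReference ref = v.obtainReference()) {
            FsDatasetImpl.LOG.info("Adding replicas to map for block pool " +
                bpid + " on volume " + v + "...");
            v.getVolumeMap(bpid, volumeMap, ramDiskReplicaMap);
          } catch (ClosedChannelException e) {
            // ignore.
          } catch (IOException ioe) {
            FsDatasetImpl.LOG.info("Caught exception while adding replicas " +
                "from " + v + ". Will throw later.", ioe);
            exceptions.add(ioe);
          }
        }
      };
      replicaAddingThreads.add(t);
      t.start();
    }
    // ... join all the threads, then:
    if (!exceptions.isEmpty()) {
      throw exceptions.get(0);  // same single-exception rethrow as addBlockPool
    }
  }
{code}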

This occurred on CDH 5.9.1, but a quick check of the current trunk suggests the 
code path is pretty much the same in this area.
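
For what it's worth, here is a minimal standalone sketch of the behaviour I 
would expect instead (illustrative names only, not Hadoop APIs, and not 
necessarily what a patch here should look like): scan all volumes in parallel, 
collect the per-volume failures, and only abort when more volumes failed than 
the tolerance allows, mirroring the dfs.datanode.failed.volumes.tolerated 
semantics used elsewhere in the DN:

{code:title=TolerantVolumeScan.java (sketch)}
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/**
 * Illustrative sketch only: scan volumes in parallel, record per-volume
 * failures, and fail hard only when more volumes failed than tolerated
 * (cf. dfs.datanode.failed.volumes.tolerated). All names are hypothetical.
 */
public class TolerantVolumeScan {
  interface Volume {
    void scan() throws IOException;
  }

  static void scanAll(List<Volume> volumes, int failuresTolerated)
      throws IOException, InterruptedException {
    final List<IOException> exceptions =
        Collections.synchronizedList(new ArrayList<IOException>());
    List<Thread> threads = new ArrayList<Thread>();
    for (final Volume v : volumes) {
      Thread t = new Thread(() -> {
        try {
          v.scan();
        } catch (IOException ioe) {
          // Record the failure, but let the other volumes finish scanning.
          exceptions.add(ioe);
        }
      });
      threads.add(t);
      t.start();
    }
    for (Thread t : threads) {
      t.join();
    }
    // Abort only when the failure count exceeds the tolerance, rather than
    // rethrowing the first exception unconditionally.
    if (exceptions.size() > failuresTolerated) {
      throw exceptions.get(0);
    }
  }
}
{code}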

> Datanode should tolerate disk scan failure during NN handshake
> --------------------------------------------------------------
>
>                 Key: HDFS-9908
>                 URL: https://issues.apache.org/jira/browse/HDFS-9908
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.5.0
>         Environment: CDH5.3.3
>            Reporter: Wei-Chiu Chuang
>            Assignee: Wei-Chiu Chuang
>            Priority: Major
>         Attachments: HDFS-9908.001.patch, HDFS-9908.002.patch, 
> HDFS-9908.003.patch, HDFS-9908.004.patch, HDFS-9908.005.patch, 
> HDFS-9908.006.patch, HDFS-9908.007.patch
>
>
> DN may treat a disk scan failure exception as an NN handshake exception, and 
> this can prevent a DN from joining a cluster even if most of its disks are 
> healthy.
> During the NN handshake, the DN initializes block pools. It will create a 
> lock file per disk and then scan the volumes. However, if the scan throws an 
> exception due to disk failure, the DN will treat it as if the NN were 
> inconsistent with the local storage (see {{DataNode#initBlockPool}}). As a 
> result, it will attempt to reconnect to the NN again.
> However, at this point the DN has not deleted its lock files on the disks. 
> When it reconnects to the NN, it will think the same disks are already in 
> use, and the handshake will fail again because no disk can be used (due to 
> locking), and so on repeatedly. This will happen even if the DN has multiple 
> disks and only one of them fails: the DN will not be able to connect to the 
> NN despite just one failing disk. Note that it is possible to successfully 
> create a lock file on a disk and then hit an error while scanning that disk.
> We saw this on a CDH 5.3.3 cluster (which is based on Apache Hadoop 2.5.0, 
> and the same bug is still present on the 3.0.0 trunk branch). The root cause 
> is that the DN treats an internal error (a single disk failure) as an 
> external one (an NN handshake failure), and we should fix it.
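> The per-disk lock mentioned above is the in_use.lock file taken in 
> {{Storage.StorageDirectory#tryLock}}. A paraphrased sketch of the mechanism 
> (details vary by version):
> {code:title=Storage.java (paraphrased)}
>   FileLock tryLock() throws IOException {
>     File lockF = new File(root, "in_use.lock");
>     lockF.deleteOnExit();
>     RandomAccessFile file = new RandomAccessFile(lockF, "rws");
>     FileLock res;
>     try {
>       res = file.getChannel().tryLock();
>     } catch (OverlappingFileLockException oe) {
>       // The same JVM already holds the lock - which is exactly the retry
>       // case described above, since the lock is never released before the
>       // DN re-attempts the handshake.
>       file.close();
>       throw new IOException("Cannot lock storage " + root
>           + ". The directory is already locked");
>     }
>     if (res == null) {
>       // Another process holds the lock.
>       file.close();
>       throw new IOException("Cannot lock storage " + root
>           + ". The directory is already locked");
>     }
>     return res;
>   }
> {code}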
> {code:title=DataNode.java}
> /**
>    * One of the Block Pools has successfully connected to its NN.
>    * This initializes the local storage for that block pool,
>    * checks consistency of the NN's cluster ID, etc.
>    * 
>    * If this is the first block pool to register, this also initializes
>    * the datanode-scoped storage.
>    * 
>    * @param bpos Block pool offer service
>    * @throws IOException if the NN is inconsistent with the local storage.
>    */
>   void initBlockPool(BPOfferService bpos) throws IOException {
>     NamespaceInfo nsInfo = bpos.getNamespaceInfo();
>     if (nsInfo == null) {
>       throw new IOException("NamespaceInfo not found: Block pool " + bpos
>           + " should have retrieved namespace info before initBlockPool.");
>     }
>     
>     setClusterId(nsInfo.clusterID, nsInfo.getBlockPoolID());
>     // Register the new block pool with the BP manager.
>     blockPoolManager.addBlockPool(bpos);
>     
>     // In the case that this is the first block pool to connect, initialize
>     // the dataset, block scanners, etc.
>     initStorage(nsInfo);
>     // Exclude failed disks before initializing the block pools to avoid
>     // startup failures.
>     checkDiskError();
>     data.addBlockPool(nsInfo.getBlockPoolID(), conf);  <----- this line throws disk error exception
>     blockScanner.enableBlockPoolId(bpos.getBlockPoolId());
>     initDirectoryScanner(conf);
>   }
> {code}
> {{FsVolumeList#addBlockPool}} is the source of the exception.
> {code:title=FsVolumeList.java}
>   void addBlockPool(final String bpid, final Configuration conf)
>       throws IOException {
>     long totalStartTime = Time.monotonicNow();
>     
>     final List<IOException> exceptions = Collections.synchronizedList(
>         new ArrayList<IOException>());
>     List<Thread> blockPoolAddingThreads = new ArrayList<Thread>();
>     for (final FsVolumeImpl v : volumes) {
>       Thread t = new Thread() {
>         public void run() {
>           try (FsVolumeReference ref = v.obtainReference()) {
>             FsDatasetImpl.LOG.info("Scanning block pool " + bpid +
>                 " on volume " + v + "...");
>             long startTime = Time.monotonicNow();
>             v.addBlockPool(bpid, conf);
>             long timeTaken = Time.monotonicNow() - startTime;
>             FsDatasetImpl.LOG.info("Time taken to scan block pool " + bpid +
>                 " on " + v + ": " + timeTaken + "ms");
>           } catch (ClosedChannelException e) {
>             // ignore.
>           } catch (IOException ioe) {
>             FsDatasetImpl.LOG.info("Caught exception while scanning " + v +
>                 ". Will throw later.", ioe);
>             exceptions.add(ioe);
>           }
>         }
>       };
>       blockPoolAddingThreads.add(t);
>       t.start();
>     }
>     for (Thread t : blockPoolAddingThreads) {
>       try {
>         t.join();
>       } catch (InterruptedException ie) {
>         throw new IOException(ie);
>       }
>     }
>     if (!exceptions.isEmpty()) {
>       throw exceptions.get(0); <----- here's the origin of the exception
>     }
>     
>     long totalTimeTaken = Time.monotonicNow() - totalStartTime;
>     FsDatasetImpl.LOG.info("Total time to scan all replicas for block pool " +
>         bpid + ": " + totalTimeTaken + "ms");
>   }
> {code}


