[ https://issues.apache.org/jira/browse/HDFS-15431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ludun updated HDFS-15431:
-------------------------
    Description: 
Open a file with replication factor 2 and keep it open for writing.
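For reference, the writer side looks roughly like this with the plain HDFS client API (a minimal sketch; the path and buffer size are illustrative, not taken from the cluster above):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OpenFileWriter {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path p = new Path("/tmp/repro-open-file");  // illustrative path
    // Create with replication factor 2 and keep the stream open.
    FSDataOutputStream out = fs.create(p, true, 4096, (short) 2,
        fs.getDefaultBlockSize(p));
    out.writeBytes("data in the under-construction block\n");
    out.hflush();  // data becomes readable, but the block stays under construction
    // ... keep 'out' open while the DataNodes below are restarted ...
    Thread.sleep(Long.MAX_VALUE);
  }
}
{code}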

At first, the write pipeline goes to DN1 and DN2:
{code}
2020-06-23 14:22:51,379 | DEBUG | pipeline = [DatanodeInfoWithStorage[DN1:25009,DS-1dcbe5bd-f69a-422c-bea6-a41bda773084,DISK], DatanodeInfoWithStorage[DN2:25009,DS-7e434b35-0b10-44fa-9d3b-c3c938f1724d,DISK]] | DataStreamer.java:1757
{code}
After DN2 restarts, pipeline recovery moves the write to DN1 and DN3:
{code}
2020-06-23 14:24:04,559 | DEBUG | pipeline = [DatanodeInfoWithStorage[DN1:25009,DS-1dcbe5bd-f69a-422c-bea6-a41bda773084,DISK], DatanodeInfoWithStorage[DN3:25009,DS-1810c3d5-b6e8-4403-a0fc-071ea6e5489f,DISK]] | DataStreamer.java:1757
{code}
After DN1 restarts, the pipeline moves again, to DN3 and DN4:
{code}
2020-06-23 14:25:21,340 | DEBUG | pipeline = [DatanodeInfoWithStorage[DN3:25009,DS-1810c3d5-b6e8-4403-a0fc-071ea6e5489f,DISK], DatanodeInfoWithStorage[DN4:25009,DS-5fbb2232-e7c8-4186-8eb9-87a6aff86cef,DISK]] | DataStreamer.java:1757
{code}
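The restarts above can also be driven from a test; a rough sketch with MiniDFSCluster (shipped in the hadoop-hdfs test artifact; the DataNode indices are illustrative, and restarting the lone NameNode stands in for the active/standby failover):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class PipelineRecoveryRepro {
  public static void main(String[] args) throws Exception {
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(new Configuration()).numDataNodes(4).build();
    try {
      cluster.waitActive();
      // ... create the file with replication 2, write and hflush (see sketch above) ...
      cluster.restartDataNode(1);  // "DN2" restarts -> pipeline recovers to DN1+DN3
      // ... write + hflush so the recovered pipeline is actually used ...
      cluster.restartDataNode(0);  // "DN1" restarts -> pipeline recovers to DN3+DN4
      // ... write + hflush again, keeping the stream open ...
      cluster.restartNameNode();   // NameNode restart; block locations are rebuilt
      // ... now attempt to read the still-open file (see sketch below) ...
    } finally {
      cluster.shutdown();
    }
  }
}
{code}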
Now restart the active NameNode and try to read the file.

The NameNode returns LocatedBlocks that still point to DN1 and DN2, and the read fails with a "Can not obtain block" exception:
{code}
20/06/20 17:57:06 DEBUG hdfs.DFSClient: newInfo = LocatedBlocks{
  fileLength=0
  underConstruction=true
  blocks=[LocatedBlock{BP-1590194288-10.162.26.113-1587096223927:blk_1073895975_155796; getBlockSize()=53; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[DN1:25009,DS-1dcbe5bd-f69a-422c-bea6-a41bda773084,DISK], DatanodeInfoWithStorage[DN2:25009,DS-cd06a4f9-c25d-42ab-887b-f129707dba17,DISK]]}]
  lastLocatedBlock=LocatedBlock{BP-1590194288-10.162.26.113-1587096223927:blk_1073895975_155796; getBlockSize()=53; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[DN1:25009,DS-1dcbe5bd-f69a-422c-bea6-a41bda773084,DISK], DatanodeInfoWithStorage[DN2:25009,DS-cd06a4f9-c25d-42ab-887b-f129707dba17,DISK]]}
  isLastBlockComplete=false}
{code}
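The failing read itself is an ordinary open/read; a sketch (same illustrative path as above; the client retries the stale DN1/DN2 locations and eventually surfaces the error, e.g. as a BlockMissingException):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OpenFileReader {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path p = new Path("/tmp/repro-open-file");  // illustrative path
    try (FSDataInputStream in = fs.open(p)) {
      byte[] buf = new byte[1024];
      // Fails here: the NameNode handed back stale locations (DN1/DN2),
      // which no longer hold the under-construction replica.
      int n = in.read(buf);
      System.out.println("read " + n + " bytes");
    }
  }
}
{code}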
 

  was:
Open a file with replication factor 2 and keep it open for writing.

At first, the write pipeline goes to DN1 and DN2.

After DN2 restarts, pipeline recovery moves the write to DN1 and DN3.

After DN1 restarts, the pipeline moves again, to DN3 and DN4.

Now restart the active NameNode and try to read the file.

The NameNode returns LocatedBlocks that still point to DN1 and DN2, and the read fails with a "Can not obtain block" exception:
{code}
20/06/20 17:57:06 DEBUG hdfs.DFSClient: newInfo = LocatedBlocks{
  fileLength=0
  underConstruction=true
  blocks=[LocatedBlock{BP-1590194288-10.162.26.113-1587096223927:blk_1073895975_155796; getBlockSize()=53; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[DN1:25009,DS-1dcbe5bd-f69a-422c-bea6-a41bda773084,DISK], DatanodeInfoWithStorage[DN2:25009,DS-cd06a4f9-c25d-42ab-887b-f129707dba17,DISK]]}]
  lastLocatedBlock=LocatedBlock{BP-1590194288-10.162.26.113-1587096223927:blk_1073895975_155796; getBlockSize()=53; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[DN1:25009,DS-1dcbe5bd-f69a-422c-bea6-a41bda773084,DISK], DatanodeInfoWithStorage[DN2:25009,DS-cd06a4f9-c25d-42ab-887b-f129707dba17,DISK]]}
  isLastBlockComplete=false}
{code}
 


> Cannot read an open file after NameNode failover if pipeline recovery occurred
> --------------------------------------------------------------------------------
>
>                 Key: HDFS-15431
>                 URL: https://issues.apache.org/jira/browse/HDFS-15431
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>            Reporter: ludun
>            Priority: Major
>
> Open a file with replication factor 2 and keep it open for writing.
> At first, the write pipeline goes to DN1 and DN2:
> {code}
> 2020-06-23 14:22:51,379 | DEBUG | pipeline = [DatanodeInfoWithStorage[DN1:25009,DS-1dcbe5bd-f69a-422c-bea6-a41bda773084,DISK], DatanodeInfoWithStorage[DN2:25009,DS-7e434b35-0b10-44fa-9d3b-c3c938f1724d,DISK]] | DataStreamer.java:1757
> {code}
> After DN2 restarts, pipeline recovery moves the write to DN1 and DN3:
> {code}
> 2020-06-23 14:24:04,559 | DEBUG | pipeline = [DatanodeInfoWithStorage[DN1:25009,DS-1dcbe5bd-f69a-422c-bea6-a41bda773084,DISK], DatanodeInfoWithStorage[DN3:25009,DS-1810c3d5-b6e8-4403-a0fc-071ea6e5489f,DISK]] | DataStreamer.java:1757
> {code}
> After DN1 restarts, the pipeline moves again, to DN3 and DN4:
> {code}
> 2020-06-23 14:25:21,340 | DEBUG | pipeline = [DatanodeInfoWithStorage[DN3:25009,DS-1810c3d5-b6e8-4403-a0fc-071ea6e5489f,DISK], DatanodeInfoWithStorage[DN4:25009,DS-5fbb2232-e7c8-4186-8eb9-87a6aff86cef,DISK]] | DataStreamer.java:1757
> {code}
> Now restart the active NameNode and try to read the file.
> The NameNode returns LocatedBlocks that still point to DN1 and DN2, and the read fails with a "Can not obtain block" exception:
> {code}
> 20/06/20 17:57:06 DEBUG hdfs.DFSClient: newInfo = LocatedBlocks{
>   fileLength=0
>   underConstruction=true
>   blocks=[LocatedBlock{BP-1590194288-10.162.26.113-1587096223927:blk_1073895975_155796; getBlockSize()=53; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[DN1:25009,DS-1dcbe5bd-f69a-422c-bea6-a41bda773084,DISK], DatanodeInfoWithStorage[DN2:25009,DS-cd06a4f9-c25d-42ab-887b-f129707dba17,DISK]]}]
>   lastLocatedBlock=LocatedBlock{BP-1590194288-10.162.26.113-1587096223927:blk_1073895975_155796; getBlockSize()=53; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[DN1:25009,DS-1dcbe5bd-f69a-422c-bea6-a41bda773084,DISK], DatanodeInfoWithStorage[DN2:25009,DS-cd06a4f9-c25d-42ab-887b-f129707dba17,DISK]]}
>   isLastBlockComplete=false}
> {code}
>  


