[ https://issues.apache.org/jira/browse/HBASE-18743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

wenbang updated HBASE-18743:
----------------------------
    Description: 
We recently had a critical production issue in which HFiles that were still in 
use by a table were deleted.
This appears to have been caused by a condition in which a table's namespace and 
qualifier are both the same as the name of a default-namespace table, and the 
table was cloned from a snapshot. When the snapshot and the default-namespace 
table are deleted, HFiles that are still in use may be deleted.
For example:
Table in the default namespace: "t1"
New table, cloned from a snapshot, whose namespace is the same as the name of 
the default-namespace table: "t1:t1"
When the snapshot and the default-namespace table are deleted, HFiles still in 
use by the new table are also deleted.
This is because the table name obtained while creating the back-reference file 
is wrong, so the reference file cannot be found later and HFileCleaner deletes 
an HFile that is still in use, as long as the table has not yet been major 
compacted.
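For reference, a minimal sketch of the reproduction scenario using the public 
1.x Admin API; the names ("t1", "t1_snap", "cf") are illustrative and this 
assumes a cluster running an affected version:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CloneIntoSameNameNamespace {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Default-namespace table "t1".
      TableName defaultTable = TableName.valueOf("t1");
      HTableDescriptor desc = new HTableDescriptor(defaultTable);
      desc.addFamily(new HColumnDescriptor("cf"));
      admin.createTable(desc);
      // (write and flush some data here so the snapshot references real HFiles)

      // Snapshot "t1", then clone it into "t1:t1" -- a namespace equal to the
      // default table's name, with the same qualifier.
      admin.snapshot("t1_snap", defaultTable);
      admin.createNamespace(NamespaceDescriptor.create("t1").build());
      TableName cloned = TableName.valueOf("t1", "t1");
      admin.cloneSnapshot("t1_snap", cloned);

      // Deleting the snapshot and the default table leaves "t1:t1" depending on
      // the archived HFiles. With the bad TableName lookup, the back-reference
      // files are created under the wrong name, so HFileCleaner can remove
      // HFiles still in use before "t1:t1" is major compacted.
      admin.deleteSnapshot("t1_snap");
      admin.disableTable(defaultTable);
      admin.deleteTable(defaultTable);
    }
  }
}
{code}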

  was:
We recently had a critical production issue in which HFiles that were still in 
use by a table were deleted.
This appears to have been caused by a condition in which a table's namespace and 
qualifier are both the same as the name of a default-namespace table, and the 
table was cloned from a snapshot. When the snapshot and the default-namespace 
table are deleted, HFiles that are still in use may be deleted.
For example:
Table in the default namespace: "table1"
New table, cloned from a snapshot, whose namespace is the same as the name of 
the default-namespace table: "table1:table1"
When the snapshot and the default-namespace table are deleted, HFiles still in 
use by the new table are also deleted.
This is because the table name obtained while creating the back-reference file 
is wrong, so the reference file cannot be found later and HFileCleaner deletes 
an HFile that is still in use, as long as the table has not yet been major 
compacted.
Problem code:
{code:java}
  public static TableName valueOf(String namespaceAsString, String qualifierAsString) {
    if (namespaceAsString == null || namespaceAsString.length() < 1) {
      namespaceAsString = NamespaceDescriptor.DEFAULT_NAMESPACE_NAME_STR;
    }

    for (TableName tn : tableCache) {
      // Bug: the namespace is compared against the cached entry's full name
      // (getNameAsString()) instead of its namespace (getNamespaceAsString()).
      if (qualifierAsString.equals(tn.getQualifierAsString()) &&
          namespaceAsString.equals(tn.getNameAsString())) {
        return tn;
      }
    }

    return createTableNameIfNecessary(
        ByteBuffer.wrap(Bytes.toBytes(namespaceAsString)),
        ByteBuffer.wrap(Bytes.toBytes(qualifierAsString)));
  }
{code}

"namespaceAsString.equals(tn.getNameAsString()))" 
This code should be
"namespaceAsString.equals(tn.getNamespaceAsString()))"


> Files that are in use by a table which has the same name and namespace as a 
> default table cloned from snapshot may be deleted when that snapshot and 
> default table are deleted
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-18743
>                 URL: https://issues.apache.org/jira/browse/HBASE-18743
>             Project: HBase
>          Issue Type: Bug
>          Components: hbase
>    Affects Versions: 1.1.12
>            Reporter: wenbang
>            Priority: Critical
>         Attachments: HBASE_18743.patch
>
>
> We recently had a critical production issue in which HFiles that were still 
> in use by a table were deleted.
> This appears to have been caused by a condition in which a table's namespace 
> and qualifier are both the same as the name of a default-namespace table, and 
> the table was cloned from a snapshot. When the snapshot and the 
> default-namespace table are deleted, HFiles that are still in use may be 
> deleted.
> For example:
> Table in the default namespace: "t1"
> New table, cloned from a snapshot, whose namespace is the same as the name of 
> the default-namespace table: "t1:t1"
> When the snapshot and the default-namespace table are deleted, HFiles still 
> in use by the new table are also deleted.
> This is because the table name obtained while creating the back-reference 
> file is wrong, so the reference file cannot be found later and HFileCleaner 
> deletes an HFile that is still in use, as long as the table has not yet been 
> major compacted.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
