[ https://issues.apache.org/jira/browse/PHOENIX-3534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533112#comment-16533112 ]

ASF GitHub Bot commented on PHOENIX-3534:
-----------------------------------------

Github user JamesRTaylor commented on a diff in the pull request:

    https://github.com/apache/phoenix/pull/303#discussion_r200206389
  
    --- Diff: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java ---
    @@ -1457,28 +1791,110 @@ private static void getSchemaTableNames(Mutation row, byte[][] schemaTableNames)
                 schemaTableNames[2] = tName;
             }
         }
    -    
    +
         @Override
         public void createTable(RpcController controller, CreateTableRequest request,
                 RpcCallback<MetaDataResponse> done) {
             MetaDataResponse.Builder builder = MetaDataResponse.newBuilder();
             byte[][] rowKeyMetaData = new byte[3][];
             byte[] schemaName = null;
             byte[] tableName = null;
    +        String fullTableName = null;
             try {
                 int clientVersion = request.getClientVersion();
                 List<Mutation> tableMetadata = ProtobufUtil.getMutations(request);
                 MetaDataUtil.getTenantIdAndSchemaAndTableName(tableMetadata, rowKeyMetaData);
                 byte[] tenantIdBytes = rowKeyMetaData[PhoenixDatabaseMetaData.TENANT_ID_INDEX];
                 schemaName = rowKeyMetaData[PhoenixDatabaseMetaData.SCHEMA_NAME_INDEX];
                 tableName = rowKeyMetaData[PhoenixDatabaseMetaData.TABLE_NAME_INDEX];
    +            fullTableName = SchemaUtil.getTableName(schemaName, tableName);
    +            // TODO before creating a table we need to see if the table was previously created and then dropped
    +            // and clean up any parent->child links or child views
                 boolean isNamespaceMapped = MetaDataUtil.isNameSpaceMapped(tableMetadata, GenericKeyValueBuilder.INSTANCE,
                         new ImmutableBytesWritable());
                 final IndexType indexType = MetaDataUtil.getIndexType(tableMetadata, GenericKeyValueBuilder.INSTANCE,
                         new ImmutableBytesWritable());
    +            byte[] parentTenantId = null;
                 byte[] parentSchemaName = null;
                 byte[] parentTableName = null;
                 PTableType tableType = MetaDataUtil.getTableType(tableMetadata, GenericKeyValueBuilder.INSTANCE, new ImmutableBytesWritable());
    +            ViewType viewType = MetaDataUtil.getViewType(tableMetadata, GenericKeyValueBuilder.INSTANCE, new ImmutableBytesWritable());
    +
    +            // Load table to see if it already exists
    +            byte[] tableKey = SchemaUtil.getTableKey(tenantIdBytes, schemaName, tableName);
    +            ImmutableBytesPtr cacheKey = new ImmutableBytesPtr(tableKey);
    +            long clientTimeStamp = MetaDataUtil.getClientTimeStamp(tableMetadata);
    +            PTable table = null;
    +                   try {
    +                           // Get as of latest timestamp so we can detect if we have a newer table that already
    +               // exists without making an additional query
    +                           table = loadTable(env, tableKey, cacheKey, clientTimeStamp, HConstants.LATEST_TIMESTAMP,
    +                                           clientVersion);
    +                   } catch (ParentTableNotFoundException e) {
    +                           dropChildMetadata(e.getParentSchemaName(), e.getParentTableName(), e.getParentTenantId());
    +                   }
    +            if (table != null) {
    +                if (table.getTimeStamp() < clientTimeStamp) {
    +                    // If the table is older than the client time stamp and it's deleted,
    +                    // continue
    +                    if (!isTableDeleted(table)) {
    +                        builder.setReturnCode(MetaDataProtos.MutationCode.TABLE_ALREADY_EXISTS);
    +                        builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
    +                        builder.setTable(PTableImpl.toProto(table));
    +                        done.run(builder.build());
    +                        return;
    +                    }
    +                } else {
    +                    builder.setReturnCode(MetaDataProtos.MutationCode.NEWER_TABLE_FOUND);
    +                    builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
    +                    builder.setTable(PTableImpl.toProto(table));
    +                    done.run(builder.build());
    +                    return;
    +                }
    +            }
    +            
    +                   // check if the table was dropped, but had child views that have not yet
    +                   // been cleaned up by compaction
    +                   if (!Bytes.toString(schemaName).equals(QueryConstants.SYSTEM_SCHEMA_NAME)) {
    +                           dropChildMetadata(schemaName, tableName, tenantIdBytes);
    +                   }
    --- End diff --
    
    Minor - indentation issue here.
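
    For readers skimming the hunk above: the added block loads the SYSTEM.CATALOG row as of the latest timestamp and then decides, from a single timestamp comparison, whether to fail with TABLE_ALREADY_EXISTS, fail with NEWER_TABLE_FOUND, or proceed (after lazily dropping any parent->child links left behind by a previously failed DROP). The sketch below mirrors only that decision flow; CreateOutcome, CatalogRow, and check() are hypothetical stand-ins, not the Phoenix internals (loadTable, isTableDeleted, PTable) used in the actual patch.

        // Hypothetical, simplified sketch of the existence check added to createTable();
        // the types and helpers here are illustrative stand-ins, not Phoenix APIs.
        enum CreateOutcome { TABLE_ALREADY_EXISTS, NEWER_TABLE_FOUND, PROCEED }

        final class CreateTableCheck {
            static final class CatalogRow {
                final long timestamp;   // timestamp of the existing header row, if any
                final boolean deleted;  // true if the row is a tombstone
                CatalogRow(long timestamp, boolean deleted) {
                    this.timestamp = timestamp;
                    this.deleted = deleted;
                }
            }

            // Mirrors the added block: load as of LATEST_TIMESTAMP, then compare
            // against the client timestamp carried in the mutation.
            static CreateOutcome check(CatalogRow existing, long clientTimeStamp) {
                if (existing != null) {
                    if (existing.timestamp < clientTimeStamp) {
                        // An older, still-live row means the table really exists.
                        if (!existing.deleted) {
                            return CreateOutcome.TABLE_ALREADY_EXISTS;
                        }
                    } else {
                        // A row at or after the client timestamp wins outright.
                        return CreateOutcome.NEWER_TABLE_FOUND;
                    }
                }
                // Absent or deleted: the caller may proceed, after first cleaning up
                // any orphaned child metadata (dropChildMetadata in the patch).
                return CreateOutcome.PROCEED;
            }
        }

    In the patch itself, the proceed path is also where dropChildMetadata(...) runs for non-SYSTEM schemas, which is the lazy cleanup of leftover rows described in the issue below.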


> Support multi region SYSTEM.CATALOG table
> -----------------------------------------
>
>                 Key: PHOENIX-3534
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3534
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: James Taylor
>            Assignee: Thomas D'Silva
>            Priority: Major
>             Fix For: 5.0.0, 4.15.0
>
>         Attachments: PHOENIX-3534.patch
>
>
> Currently Phoenix requires that the SYSTEM.CATALOG table be single region 
> because of the server-side row locks being held for operations that impact a 
> table and all of its views. For example, adding/removing a column from a 
> base table pushes this change to all views.
> As an alternative to making the SYSTEM.CATALOG transactional (PHOENIX-2431), 
> when a new table is created, we can do a lazy cleanup of any rows that may be 
> left over from a failed DDL call (kudos to [~lhofhansl] for coming up with 
> this idea). To implement this efficiently, we'd also need to do PHOENIX-2051 
> so that we can efficiently find derived views.
> The implementation would rely on an optimistic concurrency model based on 
> checking our sequence numbers for each table/view before/after updating. Each 
> table/view row would be individually locked for its change (metadata for a 
> view or table cannot span regions due to our split policy), with the sequence 
> number being incremented under lock and then returned to the client.
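
To make the last paragraph of the description concrete, here is a rough sketch of the optimistic, sequence-number-based check it describes. All names here (CatalogRowAccess, readSequence, applyMetadataChange, and so on) are hypothetical placeholders rather than Phoenix or HBase APIs; the point is only the read, lock, re-check, increment-under-lock, return-to-client pattern.

    // Hypothetical sketch of the optimistic concurrency model described above.
    final class OptimisticMetadataUpdate {
        interface CatalogRowAccess {
            long readSequence(byte[] rowKey);             // current sequence number of the table/view row
            void lockRow(byte[] rowKey);                  // per-row lock, not a region-wide lock
            void unlockRow(byte[] rowKey);
            void applyMetadataChange(byte[] rowKey);      // e.g. the column add/drop mutations
            void writeSequence(byte[] rowKey, long seq);  // persist the incremented sequence number
        }

        /** Returns the new sequence number, or -1 if a concurrent change was detected. */
        static long update(CatalogRowAccess catalog, byte[] rowKey, long expectedSeq) {
            catalog.lockRow(rowKey);
            try {
                long current = catalog.readSequence(rowKey);
                if (current != expectedSeq) {
                    // Another client changed this table/view since we read its
                    // metadata; fail the DDL so the caller can refresh and retry.
                    return -1;
                }
                catalog.applyMetadataChange(rowKey);
                long newSeq = current + 1;                // incremented under the row lock
                catalog.writeSequence(rowKey, newSeq);
                return newSeq;                            // handed back to the client
            } finally {
                catalog.unlockRow(rowKey);
            }
        }
    }

Because each table or view row is locked individually, and a single table's or view's metadata cannot span regions under the split policy, this check does not require SYSTEM.CATALOG to stay single region.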



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
