[ https://issues.apache.org/jira/browse/PHOENIX-1408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200908#comment-14200908 ]

Samarth Jain commented on PHOENIX-1408:
---------------------------------------

When adding a column in a new column family, we still need to disable and re-enable 
the table, because HBaseAdmin.addColumn() performs the column addition 
asynchronously. For example, this test fails most of the time:

{code}
@Test
public void testAddColumnForNewColumnFamily() throws Exception {
    Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
    String ddl = "CREATE TABLE T (\n"
            + "ID1 VARCHAR(15) NOT NULL,\n"
            + "ID2 VARCHAR(15) NOT NULL,\n"
            + "CREATED_DATE DATE,\n"
            + "CREATION_TIME BIGINT,\n"
            + "LAST_USED DATE,\n"
            + "CONSTRAINT PK PRIMARY KEY (ID1, ID2)) SALT_BUCKETS = 8";
    Connection conn1 = DriverManager.getConnection(getUrl(), props);
    conn1.createStatement().execute(ddl);
    // Adding a column in a brand-new column family; this fails intermittently with
    // NoSuchColumnFamilyException because HBaseAdmin.addColumn() is asynchronous.
    ddl = "ALTER TABLE T ADD CF.STRING VARCHAR";
    conn1.createStatement().execute(ddl);
}
{code}

Stacktrace:
{code}
org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family CF does not exist in region T,\x02\x00\x00,1415307015015.d81ef29b43557977dacd52d89785a343. in table 'T', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.phoenix.coprocessor.ScanRegionObserver|1|', coprocessor$2 => '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|1|', coprocessor$3 => '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|1|', coprocessor$4 => '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|1|', coprocessor$5 => '|org.apache.phoenix.hbase.index.Indexer|1073741823|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder', coprocessor$6 => '|org.apache.hadoop.hbase.regionserver.LocalIndexSplitter|1|'}, {NAME => '0', DATA_BLOCK_ENCODING => 'FAST_DIFF', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'true', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
        at org.apache.hadoop.hbase.regionserver.HRegion.checkFamily(HRegion.java:5500)
        at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1933)
        at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1913)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3089)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29497)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
        at java.lang.Thread.run(Thread.java:662)

        at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:107)
        at org.apache.phoenix.iterate.ParallelIterators.getIterators(ParallelIterators.java:527)
        at org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:44)
        at org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:66)
        at org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:86)
        at org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
        at org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
        at org.apache.phoenix.compile.PostDDLCompiler$1.execute(PostDDLCompiler.java:211)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:1598)
        at org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:2105)
        at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:750)
        at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:260)
        at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:252)
        at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
        at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:250)
        at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1037)
        at org.apache.phoenix.end2end.AlterTableIT.testAddColumnForNewColumnFamily(AlterTableIT.java:931)
{code}


Essentially, as we increase the number of regions/salt buckets on the table, the 
frequency of the test failure increases.
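
One way to cope with the asynchronous HBaseAdmin.addColumn() without disabling the 
table would be to poll HBaseAdmin.getAlterStatus() until no region is still pending 
the schema change. This is only a sketch of the idea, not a proposed patch; the 
helper name, table/family parameters, and timeout are illustrative:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Pair;

public class AsyncAddColumnSketch {
    // Illustrative helper: add a column family with the table left enabled,
    // then wait until the asynchronous schema change has reached every region.
    static void addColumnFamilyAndWait(Configuration conf, String table, String family)
            throws Exception {
        HBaseAdmin admin = new HBaseAdmin(conf);
        try {
            // addColumn() returns before all regions have been updated.
            admin.addColumn(table, new HColumnDescriptor(family));
            TableName tn = TableName.valueOf(table);
            long deadline = System.currentTimeMillis() + 30000L; // arbitrary 30s cap
            while (System.currentTimeMillis() < deadline) {
                // getAlterStatus() reports (regions yet to be updated, total regions).
                Pair<Integer, Integer> status = admin.getAlterStatus(tn);
                if (status.getFirst() == 0) {
                    return; // every region now knows about the new family
                }
                Thread.sleep(200);
            }
            throw new IllegalStateException("Schema change did not finish in time");
        } finally {
            admin.close();
        }
    }
}
{code}

Whether getAlterStatus() is dependable enough here (it reflects the master's view of 
the pending alter) is exactly what a test like the one above would need to verify.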

> Don't disable table before modifying HTable metadata
> ----------------------------------------------------
>
>                 Key: PHOENIX-1408
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-1408
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: James Taylor
>            Assignee: Samarth Jain
>
> In 0.98, HBase supports modifying the HTable metadata without disabling the 
> table first. We should remove our calls to htable.disableTable() and 
> htable.enableTable() in ConnectionQueryServicesImpl when we modify the table 
> metadata. The only time we still need to disable the table is before we drop 
> it.
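
For reference, the kind of online (no disable/enable) metadata change the description 
refers to would look roughly like this against the 0.98 client API. This is a sketch 
only, with an illustrative table attribute and no error handling:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class OnlineModifyTableSketch {
    // Illustrative only: change table metadata on 0.98 without disabling the table.
    static void setTableAttribute(Configuration conf, String table, String key, String value)
            throws Exception {
        HBaseAdmin admin = new HBaseAdmin(conf);
        try {
            TableName tn = TableName.valueOf(table);
            HTableDescriptor desc = admin.getTableDescriptor(tn);
            desc.setValue(key, value);   // e.g. a Phoenix-managed table attribute
            admin.modifyTable(tn, desc); // online change; no disableTable()/enableTable()
            // Note: modifyTable() is asynchronous as well, so callers may still need
            // to wait (e.g. via getAlterStatus()) before relying on the new metadata.
        } finally {
            admin.close();
        }
    }
}
{code}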



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
