[ https://issues.apache.org/jira/browse/PHOENIX-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16418385#comment-16418385 ]

Sergey Soldatov commented on PHOENIX-4669:
------------------------------------------

[~brfrn169] We should add all column families that the parent table has, as well 
as the default '0', because if we have more than one index on a view, they will 
all be kept in the same physical table, so we have to create all of the column 
families. A simple test case to check:
{noformat}
create table a (i1 integer primary key, c2.i2 integer, c3.i2 integer, c4.i2 integer);
create view v1 as select * from a where c2.i2 = 1;
upsert into v1 (i1, c3.i2, c4.i2) values (1, 1, 1);
create index i1 on v1 (c3.i2);
create index i2 on v1 (c3.i2) include (c4.i2);
upsert into v1 (i1, c3.i2, c4.i2) values (2, 2, 2);
{noformat}
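
A quick way to check which families actually exist in the shared physical table 
is the HBase shell (a sketch; `_IDX_A` is the name Phoenix typically gives the 
shared physical table for view indexes on table `a`, matching the `_IDX_TBL` 
seen in the stack trace below):
{noformat}
hbase> describe '_IDX_A'
{noformat}
With the fix, the listed families should include the parent table's C2, C3 and 
C4 plus the default family '0'; without it, the default '0' family can be 
missing, which is exactly what the NoSuchColumnFamilyException below complains 
about.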

> NoSuchColumnFamilyException when creating index on views that are built on 
> tables which have named column family
> ----------------------------------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-4669
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-4669
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.14.0
>            Reporter: Toshihiro Suzuki
>            Priority: Major
>         Attachments: PHOENIX-4669-UT.patch, PHOENIX-4669.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with a named column family
> {code}
> CREATE TABLE TBL (COL1 VARCHAR PRIMARY KEY, CF.COL2 VARCHAR)
> {code}
> 2. Upsert data into the table
> {code}
> UPSERT INTO TBL VALUES ('AAA','BBB')
> {code}
> 3. Create a view on the table
> {code}
> CREATE VIEW VW AS SELECT * FROM TBL
> {code}
> 4. Create an index on the view
> {code}
> CREATE INDEX IDX ON VW (CF.COL2)
> {code}
> After following the above steps, I got the following error:
> {code}
> Exception in thread "main" org.apache.phoenix.execute.CommitException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> Column family 0 does not exist in region 
> _IDX_TBL,,1521731699609.99c9a48534aeca079c1ff614293fd13a. in table 
> '_IDX_TBL', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder',
>  METADATA => {'IS_VIEW_INDEX_TABLE' => '\x01'}}, {NAME => 'CF', BLOOMFILTER 
> => 'NONE', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 
> 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 
> 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
> REPLICATION_SCOPE => '0'}
>       at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:903)
>       at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:822)
>       at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2376)
>       at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36621)
>       at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>       at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>       at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>       at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> : 1 time, servers with issues: 10.0.1.3,57208,1521731670016
>       at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1107)
>       at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1432)
>       at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1273)
>       at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:663)
>       at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:659)
>       at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>       at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:659)
>       at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3513)
>       at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1362)
>       at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1674)
>       at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>       at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
>       at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
>       at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>       at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:380)
>       at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:368)
>       at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1794)
>       at Case00162740.main(Case00162740.java:20)
> Caused by: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> Column family 0 does not exist in region 
> _IDX_TBL,,1521731699609.99c9a48534aeca079c1ff614293fd13a. in table 
> '_IDX_TBL', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder',
>  METADATA => {'IS_VIEW_INDEX_TABLE' => '\x01'}}, {NAME => 'CF', BLOOMFILTER 
> => 'NONE', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 
> 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 
> 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
> REPLICATION_SCOPE => '0'}
>       at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:903)
>       at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:822)
>       at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2376)
>       at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36621)
>       at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>       at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>       at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>       at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> : 1 time, servers with issues: 10.0.1.3,57208,1521731670016
>       at 
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:297)
>       at 
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$2300(AsyncProcess.java:273)
>       at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.getErrors(AsyncProcess.java:1781)
>       at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:925)
>       at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:939)
>       at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1049)
>       ... 17 more
> {code}
> I will attach a unit test patch to reproduce this issue.
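
For completeness, the four reproduction steps can be driven from a single JDBC 
client (a sketch only; assumes a Phoenix/HBase cluster reachable at localhost, 
and mirrors the Case00162740.main frame in the trace above):
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class Phoenix4669Repro {
    public static void main(String[] args) throws Exception {
        // Assumes a local Phoenix cluster; adjust the quorum as needed
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE TBL (COL1 VARCHAR PRIMARY KEY, CF.COL2 VARCHAR)");
            stmt.execute("UPSERT INTO TBL VALUES ('AAA', 'BBB')");
            conn.commit();
            stmt.execute("CREATE VIEW VW AS SELECT * FROM TBL");
            // Before the fix, this throws a CommitException caused by
            // NoSuchColumnFamilyException on family '0' of _IDX_TBL
            stmt.execute("CREATE INDEX IDX ON VW (CF.COL2)");
        }
    }
}
{code}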



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
