[jira] [Commented] (PHOENIX-4608) Concurrent modification of bitset in ProjectedColumnExpression

2018-02-14 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365209#comment-16365209
 ] 

James Taylor commented on PHOENIX-4608:
---

Nice job tracking this down, [~sergey.soldatov]. A couple of comments:
- Where is the ProjectedColumnExpression being used by multiple threads? The 
one place I know of is when an UPSERT SELECT statement is run client side, 
but in that case we clone any expressions that store state. I just want to 
make sure we don't have a systemic issue.
- On the server side, each scan being processed should have its own copy of 
ProjectedColumnExpression.
- If we have to use the same ProjectedColumnExpression, I'd be more inclined 
to just instantiate the ValueBitSet in the evaluate method. It's like a 
regular bit set, so in most cases it will only hold a long array with a 
single element.
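A minimal sketch of the per-call instantiation suggested above, with java.util.BitSet standing in for Phoenix's ValueBitSet (class and method names here are illustrative, not Phoenix's actual API):

```java
import java.util.BitSet;

// Sketch: create the scratch bit set inside evaluate() instead of sharing
// a field, so each call (and therefore each thread) gets its own instance.
// java.util.BitSet stands in for Phoenix's ValueBitSet.
public class LocalBitSetDemo {
    static boolean evaluate(int position) {
        BitSet bitSet = new BitSet(); // cheap: typically backed by a one-element long[]
        bitSet.set(position);
        return bitSet.get(position);  // no other thread can clear this instance
    }

    public static void main(String[] args) {
        System.out.println(evaluate(3)); // true
    }
}
```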

> Concurrent modification of bitset in ProjectedColumnExpression
> --
>
> Key: PHOENIX-4608
> URL: https://issues.apache.org/jira/browse/PHOENIX-4608
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4608-v1.patch
>
>
> In ProjectedColumnExpression we use an instance of ValueBitSet to track 
> nulls during evaluate calls. A single instance of ProjectedColumnExpression 
> per column is shared across all threads running in parallel, so one thread 
> may call bitSet.clear() while another thread is using it in isNull at the 
> same time, wrongly concluding that the value is null. We saw this problem 
> with a query like 
> {noformat}
> upsert into C select trim (A.ID), B.B From (select ID, SUM(1) as B from T1 
> group by ID) as B join T2 as A on A.ID = B.ID;  
> {noformat}
> During execution the condition mentioned above occurs, we fail to advance 
> from the char column (A.ID) to the long column (B.B), and we get an 
> exception like 
> {noformat}
> Error: ERROR 201 (22000): Illegal data. BIGINT value -6908486506036322272 
> cannot be cast to Integer without changing its value (state=22000,code=201) 
> java.sql.SQLException: ERROR 201 (22000): Illegal data. BIGINT value 
> -6908486506036322272 cannot be cast to Integer without changing its value 
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:442)
>  
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>  
> at 
> org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:129) 
> at 
> org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:118)
>  
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:107) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:771)
>  
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:714)
>  
> at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
>  
> at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
>  
> at 
> org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
>  
> at 
> org.apache.phoenix.compile.UpsertCompiler$2.execute(UpsertCompiler.java:797) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:343) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:331) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) 
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:329)
>  
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1440) 
> at sqlline.Commands.execute(Commands.java:822) 
> at sqlline.Commands.sql(Commands.java:732) 
> at sqlline.SqlLine.dispatch(SqlLine.java:808) 
> at sqlline.SqlLine.begin(SqlLine.java:681) 
> at sqlline.SqlLine.start(SqlLine.java:398) 
> at sqlline.SqlLine.main(SqlLine.java:292)
> {noformat}
> Fortunately, bitSet is the only field we continuously modify in that class, 
> so we may fix this problem by making it ThreadLocal. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4576) Fix LocalIndexSplitMergeIT tests failing in master branch

2018-02-14 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365175#comment-16365175
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-4576:
--

Ping [~jamestaylor]
Before 1.4 we had a special method to create the scanner in the HBase client, 
so the exception surfaced directly in BaseResultIterators.getIterators, which 
we handle properly. Now scanner creation is deferred to the next() call, which 
is why we are hitting this issue.

> Fix LocalIndexSplitMergeIT tests failing in master branch
> -
>
> Key: PHOENIX-4576
> URL: https://issues.apache.org/jira/browse/PHOENIX-4576
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4576.patch
>
>
> Currently LocalIndexSplitMergeIT#testLocalIndexScanAfterRegionsMerge is 
> failing in the master branch. 





[jira] [Updated] (PHOENIX-4608) Concurrent modification of bitset in ProjectedColumnExpression

2018-02-14 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4608:
-
Description: 
In ProjectedColumnExpression we use an instance of ValueBitSet to track 
nulls during evaluate calls. A single instance of ProjectedColumnExpression 
per column is shared across all threads running in parallel, so one thread 
may call bitSet.clear() while another thread is using it in isNull at the 
same time, wrongly concluding that the value is null. We saw this problem 
with a query like 
{noformat}
upsert into C select trim (A.ID), B.B From (select ID, SUM(1) as B from T1 
group by ID) as B join T2 as A on A.ID = B.ID;  
{noformat}

During execution the condition mentioned above occurs, we fail to advance 
from the char column (A.ID) to the long column (B.B), and we get an 
exception like 
{noformat}
Error: ERROR 201 (22000): Illegal data. BIGINT value -6908486506036322272 
cannot be cast to Integer without changing its value (state=22000,code=201) 
java.sql.SQLException: ERROR 201 (22000): Illegal data. BIGINT value 
-6908486506036322272 cannot be cast to Integer without changing its value 
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:442)
 
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
 
at org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:129) 
at 
org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:118)
 
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:107) 
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:771)
 
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:714)
 
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
 
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
 
at 
org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
 
at org.apache.phoenix.compile.UpsertCompiler$2.execute(UpsertCompiler.java:797) 
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:343) 
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:331) 
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) 
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:329)
 
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1440) 
at sqlline.Commands.execute(Commands.java:822) 
at sqlline.Commands.sql(Commands.java:732) 
at sqlline.SqlLine.dispatch(SqlLine.java:808) 
at sqlline.SqlLine.begin(SqlLine.java:681) 
at sqlline.SqlLine.start(SqlLine.java:398) 
at sqlline.SqlLine.main(SqlLine.java:292)
{noformat}

Fortunately, bitSet is the only field we continuously modify in that class, so 
we may fix this problem by making it ThreadLocal. 

  was:
in ProjectedColumnExpression we are using an instance of ValueBitSet to track 
nulls during evaluate calls. We are using a single instance of 
ProjectedColumnExpression per column across all threads running in parallel, so 
it may happen that one thread call bitSet.clear() and another thread is using 
it in isNull at the same time, making a wrong assumption that the value is 
null.  We saw that problem when query like 
{noformat}
upsert into C select trim (A.ID), B.B From (select ID, SUM(1) as B from T1 
group by ID) as B join T2 as A on A.ID = B.ID;  
{noformat}

During the execution earlier mentioned condition happen and we don't advance 
from the char column (A.ID)  to int (B.B) and get an exception like
{noformat}
Error: ERROR 201 (22000): Illegal data. BIGINT value -6908486506036322272 
cannot be cast to Integer without changing its value (state=22000,code=201) 
java.sql.SQLException: ERROR 201 (22000): Illegal data. BIGINT value 
-6908486506036322272 cannot be cast to Integer without changing its value 
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:442)
 
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
 
at org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:129) 
at 
org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:118)
 
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:107) 
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:771)
 
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:714)
 
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
 
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
 
at 

[jira] [Updated] (PHOENIX-4608) Concurrent modification of bitset in ProjectedColumnExpression

2018-02-14 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4608:
-
Attachment: PHOENIX-4608-v1.patch

> Concurrent modification of bitset in ProjectedColumnExpression
> --
>
> Key: PHOENIX-4608
> URL: https://issues.apache.org/jira/browse/PHOENIX-4608
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4608-v1.patch
>
>
> In ProjectedColumnExpression we use an instance of ValueBitSet to track 
> nulls during evaluate calls. A single instance of ProjectedColumnExpression 
> per column is shared across all threads running in parallel, so one thread 
> may call bitSet.clear() while another thread is using it in isNull at the 
> same time, wrongly concluding that the value is null. We saw this problem 
> with a query like 
> {noformat}
> upsert into C select trim (A.ID), B.B From (select ID, SUM(1) as B from T1 
> group by ID) as B join T2 as A on A.ID = B.ID;  
> {noformat}
> During execution the condition mentioned above occurs, we fail to advance 
> from the char column (A.ID) to the int column (B.B), and we get an 
> exception like 
> {noformat}
> Error: ERROR 201 (22000): Illegal data. BIGINT value -6908486506036322272 
> cannot be cast to Integer without changing its value (state=22000,code=201) 
> java.sql.SQLException: ERROR 201 (22000): Illegal data. BIGINT value 
> -6908486506036322272 cannot be cast to Integer without changing its value 
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:442)
>  
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>  
> at 
> org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:129) 
> at 
> org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:118)
>  
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:107) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:771)
>  
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:714)
>  
> at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
>  
> at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
>  
> at 
> org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
>  
> at 
> org.apache.phoenix.compile.UpsertCompiler$2.execute(UpsertCompiler.java:797) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:343) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:331) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) 
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:329)
>  
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1440) 
> at sqlline.Commands.execute(Commands.java:822) 
> at sqlline.Commands.sql(Commands.java:732) 
> at sqlline.SqlLine.dispatch(SqlLine.java:808) 
> at sqlline.SqlLine.begin(SqlLine.java:681) 
> at sqlline.SqlLine.start(SqlLine.java:398) 
> at sqlline.SqlLine.main(SqlLine.java:292)
> {noformat}
> Fortunately, bitSet is the only field we continuously modify in that class, 
> so we may fix this problem by making it ThreadLocal. 





[jira] [Created] (PHOENIX-4608) Concurrent modification of bitset in ProjectedColumnExpression

2018-02-14 Thread Sergey Soldatov (JIRA)
Sergey Soldatov created PHOENIX-4608:


 Summary: Concurrent modification of bitset in 
ProjectedColumnExpression
 Key: PHOENIX-4608
 URL: https://issues.apache.org/jira/browse/PHOENIX-4608
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.0
Reporter: Sergey Soldatov
Assignee: Sergey Soldatov
 Fix For: 4.14.0


In ProjectedColumnExpression we use an instance of ValueBitSet to track 
nulls during evaluate calls. A single instance of ProjectedColumnExpression 
per column is shared across all threads running in parallel, so one thread 
may call bitSet.clear() while another thread is using it in isNull at the 
same time, wrongly concluding that the value is null. We saw this problem 
with a query like 
{noformat}
upsert into C select trim (A.ID), B.B From (select ID, SUM(1) as B from T1 
group by ID) as B join T2 as A on A.ID = B.ID;  
{noformat}

During execution the condition mentioned above occurs, we fail to advance 
from the char column (A.ID) to the int column (B.B), and we get an 
exception like 
{noformat}
Error: ERROR 201 (22000): Illegal data. BIGINT value -6908486506036322272 
cannot be cast to Integer without changing its value (state=22000,code=201) 
java.sql.SQLException: ERROR 201 (22000): Illegal data. BIGINT value 
-6908486506036322272 cannot be cast to Integer without changing its value 
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:442)
 
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
 
at org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:129) 
at 
org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:118)
 
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:107) 
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:771)
 
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:714)
 
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
 
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
 
at 
org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
 
at org.apache.phoenix.compile.UpsertCompiler$2.execute(UpsertCompiler.java:797) 
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:343) 
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:331) 
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) 
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:329)
 
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1440) 
at sqlline.Commands.execute(Commands.java:822) 
at sqlline.Commands.sql(Commands.java:732) 
at sqlline.SqlLine.dispatch(SqlLine.java:808) 
at sqlline.SqlLine.begin(SqlLine.java:681) 
at sqlline.SqlLine.start(SqlLine.java:398) 
at sqlline.SqlLine.main(SqlLine.java:292)
{noformat}

Fortunately, bitSet is the only field we continuously modify in that class, so 
we may fix this problem by making it ThreadLocal. 
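A minimal, self-contained sketch of the ThreadLocal approach described above, with java.util.BitSet standing in for Phoenix's ValueBitSet (class names are illustrative, not Phoenix's actual API):

```java
import java.util.BitSet;

// Model of the fix: the shared expression object keeps its scratch
// bit set in a ThreadLocal, so a clear() on one thread can never be
// observed by an isNull-style check running on another thread.
class SharedExpression {
    private final ThreadLocal<BitSet> bitSet = ThreadLocal.withInitial(BitSet::new);

    boolean evaluate(int position, boolean set) {
        BitSet bs = bitSet.get(); // per-thread instance
        bs.clear();
        if (set) {
            bs.set(position);
        }
        return bs.get(position);  // safe: no cross-thread clears possible
    }
}

public class ThreadLocalBitSetDemo {
    public static void main(String[] args) throws InterruptedException {
        SharedExpression expr = new SharedExpression();
        // One thread constantly clears its (own) bit set ...
        Thread clearer = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) expr.evaluate(5, false);
        });
        // ... while another relies on the bit it just set still being there.
        Thread checker = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) {
                if (!expr.evaluate(3, true)) throw new AssertionError("saw a concurrent clear");
            }
        });
        clearer.start(); checker.start();
        clearer.join(); checker.join();
        System.out.println("no cross-thread interference");
    }
}
```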





[jira] [Commented] (PHOENIX-2566) Support NOT NULL constraint for any column for immutable table

2018-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365153#comment-16365153
 ] 

Hudson commented on PHOENIX-2566:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #41 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/41/])
PHOENIX-2566 Support NOT NULL constraint for any column for immutable (jtaylor: 
rev e99b738b6b6e2d7487dd46e6374d2ce22a164869)
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateTableIT.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/SchemaUtil.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java


> Support NOT NULL constraint for any column for immutable table
> --
>
> Key: PHOENIX-2566
> URL: https://issues.apache.org/jira/browse/PHOENIX-2566
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.1.0
>
> Attachments: PHOENIX-2566_v1.patch
>
>
> Since write-once/append-only tables do not partially update rows, we can 
> support NOT NULL constraints for non-PK columns.





[jira] [Commented] (PHOENIX-4592) BaseResultIterators.getStatsForParallelizationProp() should use retry looking up the table without tenantId if cannot find the table using the tenantId

2018-02-14 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365135#comment-16365135
 ] 

James Taylor commented on PHOENIX-4592:
---

+1

> BaseResultIterators.getStatsForParallelizationProp() should use retry looking 
> up the table without tenantId if cannot find the table using the tenantId
> ---
>
> Key: PHOENIX-4592
> URL: https://issues.apache.org/jira/browse/PHOENIX-4592
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
> Attachments: PHOENIX-4592-4.x-HBase-0.98.patch, 
> PHOENIX-4592-v2-4.x-HBase-0.98.patch
>
>
> Running a query using a tenant-specific connection logs the following 
> warning:
> {code}
> 2018-02-09 17:41:45,497 WARN  [main] iterate.BaseResultIterators - Unable to 
> find parent table "X" of table "X" to determine USE_STATS_FOR_PARALLELIZATION
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=X
>   at 
> org.apache.phoenix.schema.PMetaDataImpl.getTableRef(PMetaDataImpl.java:71)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.getTable(PhoenixConnection.java:567)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getStatsForParallelizationProp(BaseResultIterators.java:1282)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.(BaseResultIterators.java:500)
>   at 
> org.apache.phoenix.iterate.SerialIterators.(SerialIterators.java:67)
>   at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:240)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:345)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:212)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:207)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:202)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:309)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:289)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:288)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:282)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1692)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:807)
>   at sqlline.SqlLine.begin(SqlLine.java:681)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:292)
> {code}
> The following code needs to be modified
> {code}
>  if (table.getType() == PTableType.INDEX && table.getParentName() != null) {
> PhoenixConnection conn = context.getConnection();
> String parentTableName = table.getParentName().getString();
> try {
> PTable parentTable =
> conn.getTable(new PTableKey(conn.getTenantId(), 
> parentTableName));
> useStats = parentTable.useStatsForParallelization();
> if (useStats != null) {
> return useStats;
> }
> } catch (TableNotFoundException e) {
> logger.warn("Unable to find parent table \"" + 
> parentTableName + "\" of table \""
> + table.getName().getString()
> + "\" to determine USE_STATS_FOR_PARALLELIZATION",
> e);
> }
> }
> {code}
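A self-contained model of the fallback the issue title asks for: on a miss with the tenant id, retry the lookup with a null (global) tenant. All names below are illustrative stand-ins, not Phoenix's actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Models the proposed behavior: look up a table by (tenantId, name);
// if that fails, retry with a null (global) tenant id, since the parent
// of a tenant view is typically a global table.
public class TenantFallbackDemo {
    static final Map<String, String> CATALOG = new HashMap<>();

    static String getTable(String tenantId, String name) {
        String key = tenantId + "|" + name;
        if (!CATALOG.containsKey(key)) {
            // Stands in for Phoenix's TableNotFoundException.
            throw new RuntimeException("Table undefined. tableName=" + name);
        }
        return CATALOG.get(key);
    }

    static String getTableWithFallback(String tenantId, String name) {
        try {
            return getTable(tenantId, name);
        } catch (RuntimeException e) {
            return getTable(null, name); // retry without the tenant id
        }
    }

    public static void main(String[] args) {
        CATALOG.put(null + "|PARENT", "globalParent");
        // Tenant "t1" has no tenant-scoped PARENT; the fallback finds the global one.
        System.out.println(getTableWithFallback("t1", "PARENT"));
    }
}
```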





[jira] [Updated] (PHOENIX-2566) Support NOT NULL constraint for any column for immutable table

2018-02-14 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2566:
--
Fix Version/s: 5.1.0

> Support NOT NULL constraint for any column for immutable table
> --
>
> Key: PHOENIX-2566
> URL: https://issues.apache.org/jira/browse/PHOENIX-2566
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.1.0
>
> Attachments: PHOENIX-2566_v1.patch
>
>
> Since write-once/append-only tables do not partially update rows, we can 
> support NOT NULL constraints for non-PK columns.





[jira] [Resolved] (PHOENIX-2566) Support NOT NULL constraint for any column for immutable table

2018-02-14 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2566.
---
Resolution: Fixed

> Support NOT NULL constraint for any column for immutable table
> --
>
> Key: PHOENIX-2566
> URL: https://issues.apache.org/jira/browse/PHOENIX-2566
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.1.0
>
> Attachments: PHOENIX-2566_v1.patch
>
>
> Since write-once/append-only tables do not partially update rows, we can 
> support NOT NULL constraints for non-PK columns.





[jira] [Commented] (PHOENIX-4533) Phoenix Query Server should not use SPNEGO principal to proxy user requests

2018-02-14 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365081#comment-16365081
 ] 

Josh Elser commented on PHOENIX-4533:
-

Good enough, Lev. I lifted the content into the markdown, edited it slightly, 
and have published it. Thanks!

> Phoenix Query Server should not use SPNEGO principal to proxy user requests
> ---
>
> Key: PHOENIX-4533
> URL: https://issues.apache.org/jira/browse/PHOENIX-4533
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lev Bronshtein
>Assignee: Lev Bronshtein
>Priority: Minor
> Fix For: 5.0.0, 4.14.0
>
> Attachments: PHOENIX-4533.1.patch, PHOENIX-4533.2.patch, 
> PHOENIX-4533.3.patch, PHOENIX-4533.squash.patch
>
>
> Currently the HTTP/ principal is used by various components in the Hadoop 
> ecosystem to perform SPNEGO authentication. Since there can only be one 
> HTTP/ principal per host, even outside of the Hadoop ecosystem, the keytab 
> containing key material for the local HTTP/ principal is shared among a few 
> applications. With so many applications having access to the HTTP/ 
> credentials, this increases the chances of an attack on the proxy user 
> capabilities of Hadoop. This JIRA proposes that two different keytabs can 
> be used to
> 1. Authenticate kerberized web requests
> 2. Communicate with the Phoenix back end
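A sketch of what splitting the two identities might look like in hbase-site.xml. The property names below are hypothetical stand-ins for illustration only; the actual keys are defined by the patch:

```xml
<!-- Illustrative only: property names are hypothetical, not necessarily
     the keys added by this patch. -->
<!-- Identity used to authenticate SPNEGO (kerberized web) requests. -->
<property>
  <name>phoenix.queryserver.http.kerberos.principal</name>
  <value>HTTP/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>phoenix.queryserver.http.keytab.file</name>
  <value>/etc/security/keytabs/spnego.service.keytab</value>
</property>
<!-- Separate identity used to talk to the Phoenix/HBase back end. -->
<property>
  <name>phoenix.queryserver.kerberos.principal</name>
  <value>phoenix/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>phoenix.queryserver.keytab.file</name>
  <value>/etc/security/keytabs/phoenix.service.keytab</value>
</property>
```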





[jira] [Commented] (PHOENIX-4607) Allow PhoenixInputFormat to use tenant-specific connections

2018-02-14 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16364972#comment-16364972
 ] 

James Taylor commented on PHOENIX-4607:
---

Yes, I'd expect this to work.

> Allow PhoenixInputFormat to use tenant-specific connections
> ---
>
> Key: PHOENIX-4607
> URL: https://issues.apache.org/jira/browse/PHOENIX-4607
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.13.0
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
>
> When using Phoenix's MapReduce integration, the actual connections for the 
> SELECT query are created by PhoenixInputFormat. While PhoenixInputFormat has 
> support for a few connection properties such as SCN, a TenantId is not one of 
> them. 
> Add the ability to specify a TenantId for the PhoenixInputFormat's 
> connections to use. 





[RESULT] [VOTE] Apache Phoenix 5.0.0-alpha rc1

2018-02-14 Thread Josh Elser

This vote passes with 3 (binding) +1's and 1 non-binding +1

Will start promoting shortly.

Big thank you to those who voted.

On 2/12/18 10:34 AM, Josh Elser wrote:

s/RC0/RC1/ below. I wasn't very diligent with my copy-paste-fix :)

The git-commit SHA1 is correct.

Please take a look if you can today!

On 2/9/18 10:34 AM, Josh Elser wrote:

Hello Everyone,

This is a call for a vote on Apache Phoenix 5.0.0-alpha rc1. Please 
notice that there are known issues with this release which deserve the 
"alpha" designation. These are staged on the website[1]. (Atomic 
upsert does work on my local installation with trivial testing)


Over rc0, this release contains the changes: PHOENIX-4586, 
PHOENIX-4546, PHOENIX-4549, PHOENIX-4582.


The RC is available at the standard location:

https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-5.0.0-alpha-HBase-2.0-rc1 



RC0 is based on the following commit: 
451d6a37d0d461b60edff36ceb42b17bb9610350


Signed with my key: 9E62822F4668F17B0972ADD9B7D5CD454677D66C, 
http://pgp.mit.edu/pks/lookup?op=get=0xB7D5CD454677D66C


Vote will be open for at least 72 hours (2018/02/12 1600GMT). Please 
vote:


[ ] +1 approve
[ ] +0 no opinion
[ ] -1 disapprove (and reason why)

Thanks,
The Apache Phoenix Team

[1] https://phoenix.apache.org/release_notes.html


[jira] [Commented] (PHOENIX-4607) Allow PhoenixInputFormat to use tenant-specific connections

2018-02-14 Thread Geoffrey Jacoby (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16364961#comment-16364961
 ] 

Geoffrey Jacoby commented on PHOENIX-4607:
--

[~jamestaylor], if MapReduce selects can use tenant-specific connections, will 
we automatically get the ability to use a tenant-view in the FROM clause of a 
MapReduce SELECT?

> Allow PhoenixInputFormat to use tenant-specific connections
> ---
>
> Key: PHOENIX-4607
> URL: https://issues.apache.org/jira/browse/PHOENIX-4607
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.13.0
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
>
> When using Phoenix's MapReduce integration, the actual connections for the 
> SELECT query are created by PhoenixInputFormat. While PhoenixInputFormat has 
> support for a few connection properties such as SCN, a TenantId is not one of 
> them. 
> Add the ability to specify a TenantId for the PhoenixInputFormat's 
> connections to use. 





[jira] [Created] (PHOENIX-4607) Allow PhoenixInputFormat to use tenant-specific connections

2018-02-14 Thread Geoffrey Jacoby (JIRA)
Geoffrey Jacoby created PHOENIX-4607:


 Summary: Allow PhoenixInputFormat to use tenant-specific 
connections
 Key: PHOENIX-4607
 URL: https://issues.apache.org/jira/browse/PHOENIX-4607
 Project: Phoenix
  Issue Type: New Feature
Affects Versions: 4.13.0
Reporter: Geoffrey Jacoby
Assignee: Geoffrey Jacoby


When using Phoenix's MapReduce integration, the actual connections for the 
SELECT query are created by PhoenixInputFormat. While PhoenixInputFormat has 
support for a few connection properties such as SCN, a TenantId is not one of 
them. 

Add the ability to specify a TenantId for the PhoenixInputFormat's connections 
to use. 
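A small sketch of the client side of this: Phoenix already recognizes a "TenantId" JDBC connection property, so the feature amounts to plumbing that value through to the connection PhoenixInputFormat opens. The helper and job-config wiring below are hypothetical:

```java
import java.util.Properties;

// Sketch only: assumes Phoenix's standard "TenantId" JDBC connection
// property; the MapReduce wiring (job config key, helper name) is
// hypothetical, not PhoenixInputFormat's actual API.
public class TenantPropsDemo {
    static Properties tenantProps(String tenantId) {
        Properties props = new Properties();
        props.setProperty("TenantId", tenantId); // makes the connection tenant-specific
        return props;
    }

    public static void main(String[] args) {
        Properties props = tenantProps("acme");
        // A real job would then open the connection the input format uses:
        // DriverManager.getConnection("jdbc:phoenix:zkquorum", props);
        System.out.println(props.getProperty("TenantId"));
    }
}
```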





[jira] [Commented] (PHOENIX-2566) Support NOT NULL constraint for any column for immutable table

2018-02-14 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16364864#comment-16364864
 ] 

Thomas D'Silva commented on PHOENIX-2566:
-

+1 LGTM

> Support NOT NULL constraint for any column for immutable table
> --
>
> Key: PHOENIX-2566
> URL: https://issues.apache.org/jira/browse/PHOENIX-2566
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-2566_v1.patch
>
>
> Since write-once/append-only tables do not partially update rows, we can 
> support NOT NULL constraints for non-PK columns.





[jira] [Commented] (PHOENIX-4592) BaseResultIterators.getStatsForParallelizationProp() should use retry looking up the table without tenantId if cannot find the table using the tenantId

2018-02-14 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16364851#comment-16364851
 ] 

Thomas D'Silva commented on PHOENIX-4592:
-

I have attached a v2 patch that reverts the change I made to 
USE_STATS_FOR_PARALLELIZATION that set the isMutableOnView property to false. 
I think the original behavior is correct (since we have tests for this). 

> BaseResultIterators.getStatsForParallelizationProp() should use retry looking 
> up the table without tenantId if cannot find the table using the tenantId
> ---
>
> Key: PHOENIX-4592
> URL: https://issues.apache.org/jira/browse/PHOENIX-4592
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
> Attachments: PHOENIX-4592-4.x-HBase-0.98.patch, 
> PHOENIX-4592-v2-4.x-HBase-0.98.patch
>
>
> Running a query using a tenant specific connection logs the following warning 
> :
> {code}
> 2018-02-09 17:41:45,497 WARN  [main] iterate.BaseResultIterators - Unable to 
> find parent table "X" of table "X" to determine USE_STATS_FOR_PARALLELIZATION
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=X
>   at 
> org.apache.phoenix.schema.PMetaDataImpl.getTableRef(PMetaDataImpl.java:71)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.getTable(PhoenixConnection.java:567)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getStatsForParallelizationProp(BaseResultIterators.java:1282)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.(BaseResultIterators.java:500)
>   at 
> org.apache.phoenix.iterate.SerialIterators.(SerialIterators.java:67)
>   at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:240)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:345)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:212)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:207)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:202)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:309)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:289)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:288)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:282)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1692)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:807)
>   at sqlline.SqlLine.begin(SqlLine.java:681)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:292)
> {code}
> The following code needs to be modified
> {code}
>  if (table.getType() == PTableType.INDEX && table.getParentName() != null) {
> PhoenixConnection conn = context.getConnection();
> String parentTableName = table.getParentName().getString();
> try {
> PTable parentTable =
> conn.getTable(new PTableKey(conn.getTenantId(), 
> parentTableName));
> useStats = parentTable.useStatsForParallelization();
> if (useStats != null) {
> return useStats;
> }
> } catch (TableNotFoundException e) {
> logger.warn("Unable to find parent table \"" + 
> parentTableName + "\" of table \""
> + table.getName().getString()
> + "\" to determine USE_STATS_FOR_PARALLELIZATION",
> e);
> }
> }
> {code}
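As a rough sketch of the fallback the issue title describes (retry the lookup without the tenantId when the tenant-specific lookup misses), with a plain `Map` standing in for the connection's metadata cache; the key format and method shape are simplified, not Phoenix's actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: parent tables of tenant views live in the global
// (tenant-less) cache, so a tenant-keyed lookup should fall back to a
// null-tenant lookup before warning and giving up.
public class TenantFallbackLookup {
    static String key(String tenantId, String table) {
        return (tenantId == null ? "" : tenantId) + "#" + table;
    }

    static Boolean useStatsForParallelization(Map<String, Boolean> cache,
                                              String tenantId, String parentTable) {
        Boolean useStats = cache.get(key(tenantId, parentTable));
        if (useStats == null && tenantId != null) {
            // Retry without the tenant id instead of treating this as
            // TableNotFoundException.
            useStats = cache.get(key(null, parentTable));
        }
        return useStats;
    }

    public static void main(String[] args) {
        Map<String, Boolean> cache = new HashMap<>();
        cache.put(key(null, "PARENT"), Boolean.TRUE); // global entry only
        System.out.println(useStatsForParallelization(cache, "tenant1", "PARENT")); // true
    }
}
```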





[jira] [Updated] (PHOENIX-4592) BaseResultIterators.getStatsForParallelizationProp() should use retry looking up the table without tenantId if cannot find the table using the tenantId

2018-02-14 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4592:

Attachment: PHOENIX-4592-v2-4.x-HBase-0.98.patch



[jira] [Commented] (PHOENIX-4533) Phoenix Query Server should not use SPNEGO principal to proxy user requests

2018-02-14 Thread Lev Bronshtein (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16364841#comment-16364841
 ] 

Lev Bronshtein commented on PHOENIX-4533:
-

Josh, is this what you are looking for?

$ svn diff
Index: site/publish/server.html
===
--- site/publish/server.html (revision 1824225)
+++ site/publish/server.html (working copy)
@@ -289,10 +289,20 @@
 unset
 
 
+ phoenix.queryserver.http.keytab.file
+ The keytab file to use. This 
configuration MUST be specified if phoenix.queryserver.kerberos.http.principal 
is configured
+ unset
+ 
+ 
 phoenix.queryserver.kerberos.principal
- The kerberos principal to use when 
authenticating.
+ The kerberos principal to use when 
authenticating. If phoenix.queryserver.kerberos.http.principal is not 
configured, the principal specified will also be used both to authenticate 
SPNEGO connections and to connect to HBase. Unless 
phoenix.queryserver.http.keytab.file is also specified, this configuration will 
be ignored
 unset
 
+ 
+ phoenix.queryserver.kerberos.http.principal
+ The kerberos principal to use when 
authenticating SPNEGO connections
+ unset
+ 
 
 phoenix.queryserver.dns.nameserver
 The DNS hostname

> Phoenix Query Server should not use SPNEGO principal to proxy user requests
> ---
>
> Key: PHOENIX-4533
> URL: https://issues.apache.org/jira/browse/PHOENIX-4533
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lev Bronshtein
>Assignee: Lev Bronshtein
>Priority: Minor
> Fix For: 5.0.0, 4.14.0
>
> Attachments: PHOENIX-4533.1.patch, PHOENIX-4533.2.patch, 
> PHOENIX-4533.3.patch, PHOENIX-4533.squash.patch
>
>
> Currently the HTTP/ principal is used by various components in the HADOOP 
> ecosystem to perform SPNEGO authentication.  Since there can only be one 
> HTTP/ per host, even outside of the Hadoop ecosystem, the keytab containing 
> key material for local HTTP/ principal is shared among a few applications.  
> With so many applications having access to the HTTP/ credentials, this 
> increases the chances of an attack on the proxy user capabilities of Hadoop.  
> This JIRA proposes that two different key tabs can be used to
> 1. Authenticate kerberized web requests
> 2. Communicate with the phoenix back end





[jira] [Commented] (PHOENIX-4605) Add TRANSACTION_PROVIDER and DEFAULT_TRANSACTION_PROVIDER instead of using boolean

2018-02-14 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16364815#comment-16364815
 ] 

James Taylor commented on PHOENIX-4605:
---

Yep, that’s more or less what I’m proposing. We’d need to potentially 
initialize both transaction engines when we establish the connection to a 
cluster, though. Or perhaps better would be to establish this lazily the first 
time a transaction starts.
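The lazy option could be sketched like this; the type names (TransactionProvider, TransactionContext) are illustrative stand-ins, not Phoenix's actual TAL types:

```java
import java.util.EnumMap;
import java.util.Map;
import java.util.function.Supplier;

// Sketch: establish a transaction engine per provider the first time a
// transaction starts, instead of initializing both Tephra and Omid when the
// cluster connection is created.
public class LazyTransactionServices {
    enum TransactionProvider { TEPHRA, OMID }

    interface TransactionContext {}

    private final Map<TransactionProvider, TransactionContext> engines =
            new EnumMap<>(TransactionProvider.class);

    // The expensive service-client initialization runs at most once per
    // provider, and only for providers that are actually used.
    synchronized TransactionContext begin(TransactionProvider provider,
                                          Supplier<TransactionContext> init) {
        return engines.computeIfAbsent(provider, p -> init.get());
    }

    public static void main(String[] args) {
        LazyTransactionServices services = new LazyTransactionServices();
        int[] initCalls = {0};
        Supplier<TransactionContext> tephraInit = () -> {
            initCalls[0]++; // stands in for the real service-client setup
            return new TransactionContext() {};
        };
        services.begin(TransactionProvider.TEPHRA, tephraInit);
        services.begin(TransactionProvider.TEPHRA, tephraInit);
        System.out.println(initCalls[0]); // 1: Omid was never touched
    }
}
```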

> Add TRANSACTION_PROVIDER and DEFAULT_TRANSACTION_PROVIDER instead of using 
> boolean
> --
>
> Key: PHOENIX-4605
> URL: https://issues.apache.org/jira/browse/PHOENIX-4605
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Priority: Major
>
> We should deprecate QueryServices.DEFAULT_TABLE_ISTRANSACTIONAL_ATTRIB and 
> instead have a QueryServices.DEFAULT_TRANSACTION_PROVIDER now that we'll have 
> two transaction providers: Tephra and Omid. Along the same lines, we should 
> add a TRANSACTION_PROVIDER column to SYSTEM.CATALOG  and stop using the 
> IS_TRANSACTIONAL table property. For backwards compatibility, we can assume 
> the provider is Tephra if the existing properties are set to true.





[jira] [Commented] (PHOENIX-4344) MapReduce Delete Support

2018-02-14 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16364802#comment-16364802
 ] 

James Taylor commented on PHOENIX-4344:
---

Yes, you’re right - that’s one of the limitations for indexes on views - the 
DML must be done on the leaf views. If you do that, everything will just work 
(famous last words :-) ).

> MapReduce Delete Support
> 
>
> Key: PHOENIX-4344
> URL: https://issues.apache.org/jira/browse/PHOENIX-4344
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.12.0
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
>
> Phoenix already has the ability to use MapReduce for asynchronous handling of 
> long-running SELECTs. It would be really useful to have this capability for 
> long-running DELETEs, particularly of tables with indexes where using HBase's 
> own MapReduce integration would be prohibitively complicated. 





[jira] [Comment Edited] (PHOENIX-4344) MapReduce Delete Support

2018-02-14 Thread Geoffrey Jacoby (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16364797#comment-16364797
 ] 

Geoffrey Jacoby edited comment on PHOENIX-4344 at 2/14/18 9:25 PM:
---

[~jamestaylor] - if I remember right, normal Phoenix deletes already have an 
issue where deleting from a base table won't delete from the views or their 
indexes – you have to delete from the view to get it to "do the right thing". 
Given that, would it be OK to require the user to use the view name in a DELETE 
MapReduce query if they want the view and its indexes to be updated? 

This could be changed in the future if Phoenix deletes get smarter about 
finding and deleting from child views/indexes. 

For the particular use case that [~akshita.malhotra] and I have in mind for 
this feature, the users will definitely know the views they want to delete 
from. 


was (Author: gjacoby):
[~jamestaylor] - if I remember right, normal Phoenix deletes already have an 
issue where deleting from a base table won't delete from the views – you have 
to delete from the view to get it to "do the right thing". Given that, would it 
be OK to require the user to use the view name in a DELETE MapReduce query if 
they want the view and its indexes to be updated? 

This could be changed in the future if Phoenix deletes get smarter about 
finding and deleting from child views/indexes. 

For the particular use case that [~akshita.malhotra] and I have in mind for 
this feature, the users will definitely know the views they want to delete 
from. 



[jira] [Commented] (PHOENIX-4344) MapReduce Delete Support

2018-02-14 Thread Geoffrey Jacoby (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16364797#comment-16364797
 ] 

Geoffrey Jacoby commented on PHOENIX-4344:
--

[~jamestaylor] - if I remember right, normal Phoenix deletes already have an 
issue where deleting from a base table won't delete from the views – you have 
to delete from the view to get it to "do the right thing". Given that, would it 
be OK to require the user to use the view name in a DELETE MapReduce query if 
they want the view and its indexes to be updated? 

This could be changed in the future if Phoenix deletes get smarter about 
finding and deleting from child views/indexes. 

For the particular use case that [~akshita.malhotra] and I have in mind for 
this feature, the users will definitely know the views they want to delete 
from. 



[jira] [Commented] (PHOENIX-4605) Add TRANSACTION_PROVIDER and DEFAULT_TRANSACTION_PROVIDER instead of using boolean

2018-02-14 Thread Geoffrey Jacoby (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16364534#comment-16364534
 ] 

Geoffrey Jacoby commented on PHOENIX-4605:
--

[~jamestaylor] - could there be a table-level override of the global 
hbase-site.xml setting, with a validation check to make sure that tables from 
two different TALs can't participate in a transaction with each other? That 
would allow you to have a global setting of "Tephra", but experiment with Omid 
on new tables. 



[jira] [Created] (PHOENIX-4606) SELECT DISTINCT cause OutOfOrderScannerNextException

2018-02-14 Thread Alexey Karpov (JIRA)
Alexey Karpov created PHOENIX-4606:
--

 Summary: SELECT DISTINCT cause OutOfOrderScannerNextException
 Key: PHOENIX-4606
 URL: https://issues.apache.org/jira/browse/PHOENIX-4606
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.7.0
Reporter: Alexey Karpov


I get the exception below from a SELECT DISTINCT query when DISTINCT is used 
on a VARCHAR column containing a value longer than 32,767 characters. The same 
happens with the GROUP BY operator.

The exception disappears if I either delete the long value or use 
DISTINCT(SUBSTR(description, 0, 32767)).

I use HDP 2.6.4 and Phoenix 4.7

 Full exception:

org.apache.phoenix.iterate.BaseResultIterators: Failed to execute task during 
cancel

java.util.concurrent.ExecutionException: 
org.apache.phoenix.exception.PhoenixIOException: Failed after retry of 
OutOfOrderScannerNextException: was there a rpc timeout?

   at java.util.concurrent.FutureTask.report(FutureTask.java:122)

   at java.util.concurrent.FutureTask.get(FutureTask.java:192)

   at 
org.apache.phoenix.iterate.BaseResultIterators.close(BaseResultIterators.java:900)

   at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:836)

   at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:719)

   at 
org.apache.phoenix.iterate.MergeSortResultIterator.getMinHeap(MergeSortResultIterator.java:72)

   at 
org.apache.phoenix.iterate.MergeSortResultIterator.minIterator(MergeSortResultIterator.java:93)

   at 
org.apache.phoenix.iterate.MergeSortResultIterator.next(MergeSortResultIterator.java:58)

   at 
org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)

   at 
org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:778)

   at 
org.apache.calcite.avatica.jdbc.JdbcResultSet.frame(JdbcResultSet.java:148)

   at 
org.apache.calcite.avatica.jdbc.JdbcResultSet.create(JdbcResultSet.java:101)

   at 
org.apache.calcite.avatica.jdbc.JdbcResultSet.create(JdbcResultSet.java:81)

   at 
org.apache.calcite.avatica.jdbc.JdbcMeta.prepareAndExecute(JdbcMeta.java:740)

   at 
org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:219)

   at 
org.apache.calcite.avatica.remote.Service$PrepareAndExecuteRequest.accept(Service.java:928)

   at 
org.apache.calcite.avatica.remote.Service$PrepareAndExecuteRequest.accept(Service.java:880)

   at 
org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:94)

   at 
org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)

   at 
org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:127)

   at 
org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)

   at 
org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)

   at 
org.apache.phoenix.shaded.org.eclipse.jetty.server.Server.handle(Server.java:499)

   at 
org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)

   at 
org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)

   at 
org.apache.phoenix.shaded.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)

   at 
org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)

   at 
org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)

   at java.lang.Thread.run(Thread.java:745)

Caused by: org.apache.phoenix.exception.PhoenixIOException: Failed after retry 
of OutOfOrderScannerNextException: was there a rpc timeout?

   at 
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:115)

   at 
org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:165)

   at 
org.apache.phoenix.iterate.OrderedResultIterator.getResultIterator(OrderedResultIterator.java:254)

   at 
org.apache.phoenix.iterate.OrderedResultIterator.peek(OrderedResultIterator.java:277)

   at 
org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:117)

   at 
org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:106)

   at java.util.concurrent.FutureTask.run(FutureTask.java:266)

   at 

[jira] [Commented] (PHOENIX-4605) Add TRANSACTION_PROVIDER and DEFAULT_TRANSACTION_PROVIDER instead of using boolean

2018-02-14 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16364326#comment-16364326
 ] 

James Taylor commented on PHOENIX-4605:
---

If we assume that either Tephra or Omid is in use, but not both, then we’re 
fine with having a config that defines which TAL implementation to use (or 
unset if transactions are disabled). If a user already has Tephra tables, this 
doesn’t work well, though, as they couldn’t try Omid without breaking their app.



Apache EU Roadshow CFP Closing Soon (23 February)

2018-02-14 Thread Sharan F

Hello Everyone

This is an initial reminder to let you all know that we are holding an 
Apache EU Roadshow co-located with FOSS Backstage in Berlin on 13th and 
14th June 2018. https://s.apache.org/tCHx


The Call for Proposals (CFP) for the Apache EU Roadshow is currently 
open and will close at the end of next week, so if you have been 
delaying making a submission because the closing date seemed a long way 
off, then it's time to start getting your proposals submitted.


So what are we looking for?
We will have 2 Apache Devrooms available during the 2-day Roadshow, so we 
are looking for projects, including incubating ones, to submit 
presentations, panel discussions, BoFs, or workshop proposals. The main 
focus of the Roadshow will be IoT, Cloud, Httpd and Tomcat, so if your 
project is involved in or around any of these technologies at Apache, 
then we are very interested in hearing from you.


Community and collaboration is important at Apache, so if your project is 
interested in organising a project sprint, meetup or hackathon during 
the Roadshow, then please submit it in the CFP, as we do have some space 
available to allocate for these.


If you want to submit a talk on open source community related 
topics such as the Apache Way, governance or legal aspects, then please 
submit these to the CFP for FOSS Backstage.


Tickets for the Apache EU Roadshow are included as part of the 
registration for FOSS Backstage, so to attend the Roadshow you will need 
to register for FOSS Backstage. Early Bird tickets are still available 
until the 21st February 2018.


Please see below for important URLs to remember:

-  To submit a CFP for the Apache EU Roadshow 
:http://apachecon.com/euroadshow18/ 


-  To submit a CFP for FOSS Backstage : 
https://foss-backstage.de/call-papers


-  To register to attend the Apache EU Roadshow and/or FOSS Backstage : 
https://foss-backstage.de/tickets


For further updates and information about the Apache EU Roadshow, please 
check http://apachecon.com/euroadshow18/


Thanks
Sharan Foga, VP Apache Community Development


[jira] [Commented] (PHOENIX-4605) Add TRANSACTION_PROVIDER and DEFAULT_TRANSACTION_PROVIDER instead of using boolean

2018-02-14 Thread Ohad Shacham (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16364028#comment-16364028
 ] 

Ohad Shacham commented on PHOENIX-4605:
---

initTxServiceClient calls the setTransactionClient function that was declared 
in the TAL. It uses the TransactionFactory to get the context, and the 
TransactionFactory generates the context according to the defined transaction 
processor.

Is it possible to set the transaction processor inside the TransactionFactory 
and leave this code as it is, reading an option from hbase-site.xml? This way 
we would have one variable that defines whether to use transactions and 
another (inside the TransactionFactory) that defines which transaction 
processor to use.

What do you say [~jamestaylor]?



[jira] [Commented] (PHOENIX-4602) OrExpression should can also push non-leading pk columns to scan

2018-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16363670#comment-16363670
 ] 

Hudson commented on PHOENIX-4602:
-

SUCCESS: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1817 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1817/])
PHOENIX-4602 OrExpression should can also push non-leading pk columns to 
(chenglei: rev 770b9e41037ffc581267d756fa0cca790e32a197)
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/compile/WhereOptimizerTest.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
* (edit) phoenix-core/src/test/java/org/apache/phoenix/util/TestUtil.java


> OrExpression should can also push non-leading pk columns to scan
> 
>
> Key: PHOENIX-4602
> URL: https://issues.apache.org/jira/browse/PHOENIX-4602
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.13.0
>Reporter: chenglei
>Assignee: chenglei
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4602_v2.patch
>
>
> Given following table:
> {code}
>     CREATE TABLE test_table (
>      PK1 INTEGER NOT NULL,
>      PK2 INTEGER NOT NULL,
>      PK3 INTEGER NOT NULL,
>  DATA INTEGER, 
>      CONSTRAINT TEST_PK PRIMARY KEY (PK1,PK2,PK3))
> {code}
> and a sql:
> {code}
>   select * from test_table t where (t.pk1 >=2 and t.pk1<5) and ((t.pk2 >= 4 
> and t.pk2 <6) or (t.pk2 >= 8 and t.pk2 <9))
> {code}
> Obviously, this is a typical case where the SQL should use a SkipScanFilter. 
> However, the query actually does not use a Skip Scan; it uses a Range Scan 
> and only pushes the leading pk column expression {{(t.pk1 >=2 and t.pk1<5)}} 
> to the scan. The explain plan is:
>  {code:sql}
>    CLIENT PARALLEL 1-WAY RANGE SCAN OVER TEST_TABLE [2] - [5]
>    SERVER FILTER BY ((PK2 >= 4 AND PK2 < 6) OR (PK2 >= 8 AND PK2 < 9))
> {code}
> I think the problem is caused by the 
> {{WhereOptimizer.KeyExpressionVisitor.orKeySlots}} method, in the following 
> line 763: because the pk2 column is not the leading pk column, the method 
> returns null, so the expression 
> {{((t.pk2 >= 4 and t.pk2 <6) or (t.pk2 >= 8 and t.pk2 <9))}} is not pushed 
> to the scan:
> {code:java}
> 757    boolean hasFirstSlot = true;
> 758    boolean prevIsNull = false;
> 759    // TODO: Do the same optimization that we do for IN if the childSlots 
> specify a fully qualified row key
> 760   for (KeySlot slot : childSlot) {
> 761      if (hasFirstSlot) {
> 762           // if the first slot is null, return null immediately
> 763           if (slot == null) {
> 764                return null;
> 765            }
> 766           // mark that we've handled the first slot
> 767           hasFirstSlot = false;
> 768      }
> {code}
> For the {{WhereOptimizer.KeyExpressionVisitor.orKeySlots}} method above, it 
> seems unnecessary to require that the PK Column in the OrExpression be the 
> leading PK Column; it should be enough to guarantee that there is only one 
> PK Column in the OrExpression.
>  
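The relaxed check described above could look roughly like this; KeySlot here is a toy stand-in for WhereOptimizer's inner class, not the real Phoenix type:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch: instead of bailing out when an OR's slot is not the
// leading PK column, accept the OR as long as every child constrains the
// same single PK column (so it can still become a skip scan on that column).
public class OrSlotCheckSketch {
    static final class KeySlot {
        final int pkPosition; // position of the PK column this slot constrains
        KeySlot(int pkPosition) { this.pkPosition = pkPosition; }
    }

    // Returns the shared PK position, or null if any child is not
    // expressible on the row key or the children span multiple PK columns.
    static Integer singlePkPosition(List<KeySlot> childSlots) {
        Integer position = null;
        for (KeySlot slot : childSlots) {
            if (slot == null) return null;
            if (position == null) {
                position = slot.pkPosition;
            } else if (position != slot.pkPosition) {
                return null;
            }
        }
        return position;
    }

    public static void main(String[] args) {
        // (pk2 >= 4 AND pk2 < 6) OR (pk2 >= 8 AND pk2 < 9): both children on PK2.
        System.out.println(singlePkPosition(Arrays.asList(new KeySlot(1), new KeySlot(1)))); // 1
        System.out.println(singlePkPosition(Arrays.asList(new KeySlot(1), new KeySlot(2)))); // null
    }
}
```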





[jira] [Commented] (PHOENIX-4526) PhoenixStorageHandler doesn't work with upper case in phoenix.rowkeys

2018-02-14 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16363640#comment-16363640
 ] 

Sergey Soldatov commented on PHOENIX-4526:
--

phoenix.rowkeys specifies *Hive* columns. All Hive columns are lowercase and 
should be specified that way.
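The pitfall can be shown with a small sketch: Hive stores its column names in lowercase, so a phoenix.rowkeys value written in uppercase never matches. Normalizing the property the same way Hive normalizes column names is one defensive fix (illustrative only, not necessarily what the attached patch does):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

// Sketch: parse the phoenix.rowkeys table property, lowercasing each entry
// so it matches Hive's lowercase column names.
public class RowKeysNormalizeSketch {
    static List<String> parseRowKeys(String rowKeysProp) {
        List<String> keys = new ArrayList<>();
        for (String key : rowKeysProp.split(",")) {
            keys.add(key.trim().toLowerCase(Locale.ROOT));
        }
        return keys;
    }

    public static void main(String[] args) {
        // The uppercase value from the CREATE TABLE above now matches Hive.
        System.out.println(parseRowKeys("USER_ID,MARRIED")); // [user_id, married]
    }
}
```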

> PhoenixStorageHandler doesn't work with upper case in phoenix.rowkeys
> -
>
> Key: PHOENIX-4526
> URL: https://issues.apache.org/jira/browse/PHOENIX-4526
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Choi JaeHwan
>Priority: Major
>  Labels: HivePhoenix
> Attachments: PHOENIX-4526.patch
>
>
> If you write the Phoenix row key in uppercase, you will get the following 
> error. Hive changes the field column names to lowercase, but the change is 
> not applied to the phoenix.rowkeys property.
> {code}
> CREATE TABLE `PROFILE_PHOENIX_CLONE4` (
>   USER_ID STRING COMMENT 'from deserializer'
>   ,MARRIED STRING COMMENT 'from deserializer'
>   ,USER_NAME STRING COMMENT 'from deserializer'
>   ,BIRTH STRING COMMENT 'from deserializer'
>   ,WEIGHT FLOAT COMMENT 'from deserializer'
>   ,HEIGHT DOUBLE COMMENT 'from deserializer'
>   ,CHILD STRING COMMENT 'from deserializer'
>   ,IS_MALE BOOLEAN COMMENT 'from deserializer'
>   ,PHONE STRING COMMENT 'from deserializer'
>   ,EMAIL STRING COMMENT 'from deserializer'
>   ,CREATE_TIME TIMESTAMP COMMENT 'from deserializer'
> ) COMMENT '한글 HBase 테이블'
> STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler' 
> TBLPROPERTIES (
>   "phoenix.table.name"="jackdb_PROFILE_PHOENIX_CLONE4"
>   ,"phoenix.zookeeper.quorum"="qa3.nexr.com,qa4.nexr.com,qa5.nexr.com"
>   ,"phoenix.rowkeys"="USER_ID,MARRIED"
>   ,"phoenix.zookeeper.client.port"="2181"
>   ,"phoenix.zookeeper.znode.parent"="/hbase"
>   
> ,"phoenix.column.mapping"="USER_ID:USER_ID,MARRIED:MARRIED,USER_NAME:USER_NAME,BIRTH:BIRTH,WEIGHT:WEIGHT,HEIGHT:HEIGHT,CHILD:CHILD,IS_MALE:IS_MALE,PHONE:PHONE,EMAIL:EMAIL,CREATE_TIME:CREATE_TIME"
>   ,"ndap.table.storageType"="PHOENIX"
>   ,"phoenix.table.options"="SALT_BUCKETS=10,DATA_BLOCK_ENCODING='DIFF'"
> )
> {code}
> {code}
> 2018-01-04T10:37:50,186 INFO  [HiveServer2-Background-Pool: Thread-10310]: 
> ql.Driver (Driver.java:execute(1735)) - Executing 
> command(queryId=hive_20180104103750_424baf0b-141a-450c-ae78-8f9be8a743a8): 
> CREATE TABLE `jackdb`.`PROFILE_PHOENIX_CLONE4` (
>   USER_ID STRING COMMENT 'from deserializer'
>   ,MARRIED STRING COMMENT 'from deserializer'
>   ,USER_NAME STRING COMMENT 'from deserializer'
>   ,BIRTH STRING COMMENT 'from deserializer'
>   ,WEIGHT FLOAT COMMENT 'from deserializer'
>   ,HEIGHT DOUBLE COMMENT 'from deserializer'
>   ,CHILD STRING COMMENT 'from deserializer'
>   ,IS_MALE BOOLEAN COMMENT 'from deserializer'
>   ,PHONE STRING COMMENT 'from deserializer'
>   ,EMAIL STRING COMMENT 'from deserializer'
>   ,CREATE_TIME TIMESTAMP COMMENT 'from deserializer'
> ) COMMENT '한글 HBase 테이블'
> STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler' 
> TBLPROPERTIES (
>   "phoenix.table.name"="jackdb_PROFILE_PHOENIX_CLONE4"
>   ,"phoenix.zookeeper.quorum"="qa3.nexr.com,qa4.nexr.com,qa5.nexr.com"
>   ,"phoenix.rowkeys"="USER_ID,MARRIED"
>   ,"phoenix.zookeeper.client.port"="2181"
>   ,"phoenix.zookeeper.znode.parent"="/hbase"
>   
> ,"phoenix.column.mapping"="USER_ID:USER_ID,MARRIED:MARRIED,USER_NAME:USER_NAME,BIRTH:BIRTH,WEIGHT:WEIGHT,HEIGHT:HEIGHT,CHILD:CHILD,IS_MALE:IS_MALE,PHONE:PHONE,EMAIL:EMAIL,CREATE_TIME:CREATE_TIME"
>   ,"ndap.table.storageType"="PHOENIX"
>   ,"phoenix.table.options"="SALT_BUCKETS=10,DATA_BLOCK_ENCODING='DIFF'"
> )
> 2018-01-04T10:37:50,189 INFO  [HiveServer2-Background-Pool: Thread-10310]: 
> ql.Driver (Driver.java:launchTask(2181)) - Starting task [Stage-0:DDL] in 
> serial mode
> 2018-01-04T10:37:50,224 INFO  [HiveServer2-Background-Pool: Thread-10310]: 
> plan.CreateTableDesc (CreateTableDesc.java:toTable(717)) - Use 
> StorageHandler-supplied org.apache.phoenix.hive.PhoenixSerDe for table 
> PROFILE_PHOENIX_CLONE4
> 2018-01-04T10:37:50,225 INFO  [HiveServer2-Background-Pool: Thread-10310]: 
> exec.DDLTask (DDLTask.java:createTable(4324)) - creating table 
> jackdb.PROFILE_PHOENIX_CLONE4 on null
> 2018-01-04T10:37:50,294 ERROR [HiveServer2-Background-Pool: Thread-10310]: 
> exec.DDLTask (DDLTask.java:failed(639)) - 
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1
>   at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:862)
>   at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:867)
>   at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4356)
>