[jira] [Commented] (PHOENIX-4219) Index gets out of sync on HBase 1.x

2017-09-26 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16181994#comment-16181994
 ] 

James Taylor commented on PHOENIX-4219:
---

Strange that the behavior is different in 0.98 versus 1.x. The test does not 
even mutate the same rows; it's essentially immutable data. It looks to be 
related to covered columns, as the test passes in master if the index has no 
covered columns.

> Index gets out of sync on HBase 1.x
> ---
>
> Key: PHOENIX-4219
> URL: https://issues.apache.org/jira/browse/PHOENIX-4219
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Vincent Poon
>Priority: Blocker
> Attachments: PHOENIX-4219_test.patch
>
>
> When writing batches in parallel with multiple background threads, it seems 
> the index sometimes gets out of sync.  This only happens on the master and 
> 4.x-HBase-1.2.
> The tests pass for 4.x-HBase-0.98.
> See the attached test, which writes with 2 background threads with batch size 
> of 100.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4219) Index gets out of sync on HBase 1.x

2017-09-26 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4219:
--
Priority: Blocker  (was: Major)

> Index gets out of sync on HBase 1.x
> ---
>
> Key: PHOENIX-4219
> URL: https://issues.apache.org/jira/browse/PHOENIX-4219
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Vincent Poon
>Priority: Blocker
> Attachments: PHOENIX-4219_test.patch
>
>
> When writing batches in parallel with multiple background threads, it seems 
> the index sometimes gets out of sync.  This only happens on the master and 
> 4.x-HBase-1.2.
> The tests pass for 4.x-HBase-0.98.
> See the attached test, which writes with 2 background threads with batch size 
> of 100.





[jira] [Commented] (PHOENIX-4234) Unable to find failed csv records in phoenix logs

2017-09-26 Thread suprita (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16181985#comment-16181985
 ] 

suprita commented on PHOENIX-4234:
--

Hi Ankit,

I am still not able to find the error in the YARN logs.

I am running the command:
HADOOP_CLASSPATH=/opt/cloudera/parcels/CDH-5.5.2-1.cdh5.5.2.p0.4/lib/hbase/hbase-protocol.jar:/etc/hbase/conf
 hadoop jar 
/opt/cloudera/parcels/CLABS_PHOENIX-4.5.2-1.clabs_phoenix1.2.0.p0.774/lib/phoenix/phoenix-1.2.0-client.jar
 org.apache.phoenix.mapreduce.CsvBulkLoadTool -Dfs.permissions.umask-mode=000 
--table G1V3IN_SEPT --input /user/mi841425/smallgstr1.csv --ignore-errors

Three of the records contain errors and are not being inserted into HBase 
because of the ignore-errors parameter, which is fine.

But we need to track those 3 records as well.

Please help with it.
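For what it's worth, a hypothetical sketch of the only place those rows seem to surface: with --ignore-errors the bulk-load mapper skips a bad row and logs it, so the aggregated YARN task logs (fetched with `yarn logs -applicationId <appId>`) can be filtered for those lines. The log message text below is an illustrative assumption, not verbatim Phoenix output.

```java
import java.util.List;
import java.util.stream.Collectors;

public class FailedCsvRecordFilter {
    public static void main(String[] args) {
        // Stand-in for the output of `yarn logs -applicationId <appId>`;
        // the exact message format is an assumption for illustration.
        List<String> taskLog = List.of(
                "INFO  mapreduce.CsvToKeyValueMapper: 97 records upserted",
                "ERROR mapreduce.CsvToKeyValueMapper: Error on record [a,b,]",
                "ERROR mapreduce.CsvToKeyValueMapper: Error on record [x,,z]");

        // Keep only the lines the mapper logged for skipped records.
        List<String> failed = taskLog.stream()
                .filter(line -> line.contains("Error on record"))
                .collect(Collectors.toList());

        System.out.println(failed.size());
    }
}
```

Counting (or extracting) those lines gives the skipped rows without re-running the load.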


Regards
Suprita Bothra



> Unable to find failed csv records in phoenix logs
> -
>
> Key: PHOENIX-4234
> URL: https://issues.apache.org/jira/browse/PHOENIX-4234
> Project: Phoenix
>  Issue Type: Bug
>Reporter: suprita bothra
>
> Unable to fetch information about missing records in a Phoenix table. How can 
> we fetch the missing records' info?
> While parsing a csv into HBase via bulk loading with MapReduce, using the 
> --ignore-errors option,
> csv records having errors are skipped, but we are unable to fetch the info 
> of records which were skipped/failed and didn't go into the table.
> There must be logs with such information. Please help in identifying whether 
> we can get logs of the failed records.





[jira] [Commented] (PHOENIX-4007) Surface time at which byte/row estimate information was computed in explain plan output

2017-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16181971#comment-16181971
 ] 

Hudson commented on PHOENIX-4007:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1813 (See 
[https://builds.apache.org/job/Phoenix-master/1813/])
PHOENIX-4007 Surface time at which byte/row estimate information was (samarth: 
rev 6d8357e9029e639de952d76493203e161e237adb)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/AggregatePlan.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ExplainPlanWithStatsEnabledIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/stats/StatisticsScanner.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsEnabledIT.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/NumberUtil.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/tuple/MultiKeyValueTuple.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/execute/HashJoinPlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/stats/StatisticsUtil.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/DelegateQueryPlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/ListJarsQueryPlan.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryWithOffsetIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/TraceQueryPlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/SortMergeJoinPlan.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/DelegateMutationPlan.java
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/filter/SkipScanBigFilterTest.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseUniqueNamesOwnClusterIT.java
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/schema/stats/StatisticsScannerTest.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/execute/UnionPlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/stats/GuidePostsInfoBuilder.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/stats/GuidePostsInfo.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/StatementPlan.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/execute/ScanPlan.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/StatsCollectorIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/query/ParallelIteratorsSplitTest.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ExplainPlanWithStatsDisabledIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/BaseMutationPlan.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/ArrayIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/BaseQueryPlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/stats/StatisticsWriter.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/stats/DefaultStatisticsCollector.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/LiteralResultIterationPlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java


> Surface time at which byte/row estimate information was computed in explain 
> plan output
> ---
>
> Key: PHOENIX-4007
> URL: https://issues.apache.org/jira/browse/PHOENIX-4007
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4007_v10.patch, PHOENIX-4007_v1.patch, 
> PHOENIX-4007_v2.patch, PHOENIX-4007_v3.patch, PHOENIX-4007_v4.patch, 
> PHOENIX-4007_v6.patch, PHOENIX-4007_v7.patch, PHOENIX-4007_v8.patch, 
> PHOENIX-4007_v9.patch
>
>
> As part of PHOENIX-3822, we surfaced byte and row estimates for queries in 
> explain plan. Since we collect this information through stats collection, it 
> would also be helpful to surface when this information was last updated to 
> reflect its freshness. We already store last_stats_update_time in 
> SYSTEM.STATS. So the task would be essentially surfacing 
> last_stats_update_time as another column in the explain plan result set.




[jira] [Commented] (PHOENIX-4138) Create a hard limit on number of indexes per table

2017-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16181896#comment-16181896
 ] 

Hudson commented on PHOENIX-4138:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1812 (See 
[https://builds.apache.org/job/Phoenix-master/1812/])
PHOENIX-4138 Create a hard limit on number of indexes per table (Churro 
(jamestaylor: rev 4969794bf0c91305d01c8f016abe95039642d46e)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/QueryServicesOptions.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateTableIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
* (add) 
phoenix-core/src/it/java/org/apache/phoenix/coprocessor/MetaDataEndpointImplTest.java


> Create a hard limit on number of indexes per table
> --
>
> Key: PHOENIX-4138
> URL: https://issues.apache.org/jira/browse/PHOENIX-4138
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rahul Shrivastava
>Assignee: churro morales
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4138.patch, PHOENIX-4138.v1.patch, 
> PHOENIX-4138.v2.patch, PHOENIX-4138_v3.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> There should be a config parameter to impose a hard limit on number of 
> indexes per table. There is a SQL Exception 
> https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java#L260
>  , but it gets triggered on the server side  
> (https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L1589)
>  . 
> We need a client-side limit that can be configured via a Phoenix config 
> parameter. For example, if a user creates more than, say, 30 indexes per 
> table, further index creation would be disallowed for that specific table.





[jira] [Commented] (PHOENIX-4239) Fix flapping test in PartialIndexRebuilderIT

2017-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16181897#comment-16181897
 ] 

Hudson commented on PHOENIX-4239:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1812 (See 
[https://builds.apache.org/job/Phoenix-master/1812/])
PHOENIX-4239 Fix flapping test in PartialIndexRebuilderIT (jamestaylor: rev 
176f541ceb36c74ecdb88d113132a4ff2e44a86b)
* (edit) phoenix-core/src/test/java/org/apache/phoenix/util/TestUtil.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java


> Fix flapping test in PartialIndexRebuilderIT
> 
>
> Key: PHOENIX-4239
> URL: https://issues.apache.org/jira/browse/PHOENIX-4239
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4239.patch
>
>
> To get more info on this flapper: 
> https://builds.apache.org/job/Phoenix-master/1810/testReport/junit/org.apache.phoenix.end2end.index/PartialIndexRebuilderIT/testIndexWriteFailureLeavingIndexActive/





[jira] [Commented] (PHOENIX-4138) Create a hard limit on number of indexes per table

2017-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16181882#comment-16181882
 ] 

Hadoop QA commented on PHOENIX-4138:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12889178/PHOENIX-4138_v3.patch
  against master branch at commit 5d9572736a991f19121477a0822d4b8bf26b4c69.
  ATTACHMENT ID: 12889178

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+
assertFalse(MetaDataEndpointImpl.execeededIndexQuota(PTableType.INDEX, 
parentTable, configuration));
+assertTrue(MetaDataEndpointImpl.execeededIndexQuota(PTableType.INDEX, 
parentTable, configuration));
+conn.createStatement().execute("CREATE LOCAL INDEX I_" + i + 
tableName + " ON " + tableName + "(COL1) INCLUDE (COL2,COL3,COL4)");
+conn.createStatement().execute("CREATE LOCAL INDEX I_" + 
maxIndexes + tableName + " ON " + tableName + "(COL1) INCLUDE 
(COL2,COL3,COL4)");
+static boolean execeededIndexQuota(PTableType tableType, PTable 
parentTable, Configuration configuration) {

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.UpsertValuesIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.PartialIndexRebuilderIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1486//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1486//console

This message is automatically generated.

> Create a hard limit on number of indexes per table
> --
>
> Key: PHOENIX-4138
> URL: https://issues.apache.org/jira/browse/PHOENIX-4138
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rahul Shrivastava
>Assignee: churro morales
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4138.patch, PHOENIX-4138.v1.patch, 
> PHOENIX-4138.v2.patch, PHOENIX-4138_v3.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> There should be a config parameter to impose a hard limit on number of 
> indexes per table. There is a SQL Exception 
> https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java#L260
>  , but it gets triggered on the server side  
> (https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L1589)
>  . 
> We need a client-side limit that can be configured via a Phoenix config 
> parameter. For example, if a user creates more than, say, 30 indexes per 
> table, further index creation would be disallowed for that specific table.





[jira] [Updated] (PHOENIX-3955) Ensure KEEP_DELETED_CELLS, REPLICATION_SCOPE, and TTL properties stay in sync between the physical data table and index tables

2017-09-26 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3955:
--
Description: We need to make sure that indexes inherit the 
REPLICATION_SCOPE, KEEP_DELETED_CELLS and TTL properties from the base table. 
Otherwise we can run into situations where the data was removed (or not 
removed) from the data table but was removed (or not removed) from the index. 
Or vice-versa. We also need to make sure that any ALTER TABLE SET TTL or ALTER 
TABLE SET KEEP_DELETED_CELLS statements propagate the properties to the indexes 
too.  (was: We need to make sure that indexes inherit the KEEP_DELETED_CELLS 
and TTL properties from the base table. Otherwise we can run into situations 
where the data was removed (or not removed) from the data table but was removed 
(or not removed) from the index. Or vice-versa. We also need to make sure that 
any ALTER TABLE SET TTL or ALTER TABLE SET KEEP_DELETED_CELLS statements 
propagate the properties to the indexes too.)

> Ensure KEEP_DELETED_CELLS, REPLICATION_SCOPE, and TTL properties stay in sync 
> between the physical data table and index tables
> --
>
> Key: PHOENIX-3955
> URL: https://issues.apache.org/jira/browse/PHOENIX-3955
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>
> We need to make sure that indexes inherit the REPLICATION_SCOPE, 
> KEEP_DELETED_CELLS and TTL properties from the base table. Otherwise we can 
> run into situations where the data was removed (or not removed) from the data 
> table but was removed (or not removed) from the index. Or vice-versa. We also 
> need to make sure that any ALTER TABLE SET TTL or ALTER TABLE SET 
> KEEP_DELETED_CELLS statements propagate the properties to the indexes too.





[jira] [Updated] (PHOENIX-3955) Ensure KEEP_DELETED_CELLS, REPLICATION_SCOPE, and TTL properties stay in sync between the physical data table and index tables

2017-09-26 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3955:
--
Summary: Ensure KEEP_DELETED_CELLS, REPLICATION_SCOPE, and TTL properties 
stay in sync between the physical data table and index tables  (was: Indexes 
should inherit the KEEP_DELETED_CELLS and TTL properties from the base table)

> Ensure KEEP_DELETED_CELLS, REPLICATION_SCOPE, and TTL properties stay in sync 
> between the physical data table and index tables
> --
>
> Key: PHOENIX-3955
> URL: https://issues.apache.org/jira/browse/PHOENIX-3955
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>
> We need to make sure that indexes inherit the KEEP_DELETED_CELLS and TTL 
> properties from the base table. Otherwise we can run into situations where 
> the data was removed (or not removed) from the data table but was removed (or 
> not removed) from the index. Or vice-versa. We also need to make sure that 
> any ALTER TABLE SET TTL or ALTER TABLE SET KEEP_DELETED_CELLS statements 
> propagate the properties to the indexes too.





[jira] [Resolved] (PHOENIX-4239) Fix flapping test in PartialIndexRebuilderIT

2017-09-26 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4239.
---
   Resolution: Fixed
 Assignee: James Taylor
Fix Version/s: 4.12.0

> Fix flapping test in PartialIndexRebuilderIT
> 
>
> Key: PHOENIX-4239
> URL: https://issues.apache.org/jira/browse/PHOENIX-4239
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4239.patch
>
>
> To get more info on this flapper: 
> https://builds.apache.org/job/Phoenix-master/1810/testReport/junit/org.apache.phoenix.end2end.index/PartialIndexRebuilderIT/testIndexWriteFailureLeavingIndexActive/





[jira] [Commented] (PHOENIX-4138) Create a hard limit on number of indexes per table

2017-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16181779#comment-16181779
 ] 

Hadoop QA commented on PHOENIX-4138:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12889159/PHOENIX-4138.v2.patch
  against master branch at commit 5d9572736a991f19121477a0822d4b8bf26b4c69.
  ATTACHMENT ID: 12889159

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+
assertFalse(MetaDataEndpointImpl.execeededIndexQuota(PTableType.INDEX, 
parentTable, configuration));
+assertTrue(MetaDataEndpointImpl.execeededIndexQuota(PTableType.INDEX, 
parentTable, configuration));
+conn1.createStatement().execute("CREATE INDEX " + indexTableNameOne + 
" ON " + tableName + "(COL1) INCLUDE (COL2,COL3,COL4)");
+// here we ensure we get a too many indexes error since we are only 
allowed a max of one index.
+conn1.createStatement().execute("CREATE INDEX " + 
indexTableNameTwo + " ON " + tableName + "(COL2) INCLUDE (COL1,COL3,COL4)");
+assertEquals("ERROR 1047 (43A04): Too many indexes have already 
been created on the physical table. tableName=T01", e.getMessage());
+static boolean execeededIndexQuota(PTableType tableType, PTable 
parentTable, Configuration configuration) {

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.PartialIndexRebuilderIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.UpsertValuesIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1484//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1484//console

This message is automatically generated.

> Create a hard limit on number of indexes per table
> --
>
> Key: PHOENIX-4138
> URL: https://issues.apache.org/jira/browse/PHOENIX-4138
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rahul Shrivastava
>Assignee: churro morales
> Attachments: PHOENIX-4138.patch, PHOENIX-4138.v1.patch, 
> PHOENIX-4138.v2.patch, PHOENIX-4138_v3.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> There should be a config parameter to impose a hard limit on number of 
> indexes per table. There is a SQL Exception 
> https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java#L260
>  , but it gets triggered on the server side  
> (https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L1589)
>  . 
> We need a client-side limit that can be configured via a Phoenix config 
> parameter. For example, if a user creates more than, say, 30 indexes per 
> table, further index creation would be disallowed for that specific table.





[jira] [Commented] (PHOENIX-4007) Surface time at which byte/row estimate information was computed in explain plan output

2017-09-26 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16181775#comment-16181775
 ] 

James Taylor commented on PHOENIX-4007:
---

+1. I like the new test. Nice work, [~samarthjain].

> Surface time at which byte/row estimate information was computed in explain 
> plan output
> ---
>
> Key: PHOENIX-4007
> URL: https://issues.apache.org/jira/browse/PHOENIX-4007
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4007_v10.patch, PHOENIX-4007_v1.patch, 
> PHOENIX-4007_v2.patch, PHOENIX-4007_v3.patch, PHOENIX-4007_v4.patch, 
> PHOENIX-4007_v6.patch, PHOENIX-4007_v7.patch, PHOENIX-4007_v8.patch, 
> PHOENIX-4007_v9.patch
>
>
> As part of PHOENIX-3822, we surfaced byte and row estimates for queries in 
> explain plan. Since we collect this information through stats collection, it 
> would also be helpful to surface when this information was last updated to 
> reflect its freshness. We already store last_stats_update_time in 
> SYSTEM.STATS. So the task would be essentially surfacing 
> last_stats_update_time as another column in the explain plan result set.





[jira] [Updated] (PHOENIX-4138) Create a hard limit on number of indexes per table

2017-09-26 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4138:
--
Attachment: PHOENIX-4138_v3.patch

Tweaked your patch, [~churromorales]. You can't override the 
MAX_INDEXES_PER_TABLE property because it'll impact other tests (the mini 
cluster is shared between many test classes). Also, you forgot to add a fail() 
call after the second index creation (without it, the test would pass even if 
the expected exception were never thrown).
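The missing fail() matters because a try/catch that expects an exception otherwise passes vacuously. A minimal, self-contained sketch of the pattern; the quota-exceeding call here is a hypothetical stand-in, not the real MetaDataEndpointImpl logic:

```java
import java.sql.SQLException;

public class ExpectedExceptionPattern {
    // Hypothetical stand-in for a CREATE INDEX call that exceeds the quota.
    static void createIndexBeyondQuota() throws SQLException {
        throw new SQLException(
                "ERROR 1047 (43A04): Too many indexes have already been created"
                        + " on the physical table.", "43A04", 1047);
    }

    public static void main(String[] args) throws Exception {
        try {
            createIndexBeyondQuota();
            // Without this fail(), the test passes even when no exception
            // is thrown at all.
            throw new AssertionError("Expected too-many-indexes SQLException");
        } catch (SQLException e) {
            if (e.getErrorCode() != 1047) {
                throw new AssertionError("Unexpected error code: " + e.getErrorCode());
            }
            System.out.println("caught expected error " + e.getErrorCode());
        }
    }
}
```

In JUnit the same shape is try { ... ; fail(...); } catch (SQLException e) { assertEquals(...); }.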

> Create a hard limit on number of indexes per table
> --
>
> Key: PHOENIX-4138
> URL: https://issues.apache.org/jira/browse/PHOENIX-4138
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rahul Shrivastava
>Assignee: churro morales
> Attachments: PHOENIX-4138.patch, PHOENIX-4138.v1.patch, 
> PHOENIX-4138.v2.patch, PHOENIX-4138_v3.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> There should be a config parameter to impose a hard limit on number of 
> indexes per table. There is a SQL Exception 
> https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java#L260
>  , but it gets triggered on the server side  
> (https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L1589)
>  . 
> We need a client-side limit that can be configured via a Phoenix config 
> parameter. For example, if a user creates more than, say, 30 indexes per 
> table, further index creation would be disallowed for that specific table.





[jira] [Updated] (PHOENIX-4239) Fix flapping test in PartialIndexRebuilderIT

2017-09-26 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4239:
--
Attachment: PHOENIX-4239.patch

> Fix flapping test in PartialIndexRebuilderIT
> 
>
> Key: PHOENIX-4239
> URL: https://issues.apache.org/jira/browse/PHOENIX-4239
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
> Attachments: PHOENIX-4239.patch
>
>
> To get more info on this flapper: 
> https://builds.apache.org/job/Phoenix-master/1810/testReport/junit/org.apache.phoenix.end2end.index/PartialIndexRebuilderIT/testIndexWriteFailureLeavingIndexActive/





[jira] [Updated] (PHOENIX-4239) Fix flapping test in PartialIndexRebuilderIT

2017-09-26 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4239:
--
Summary: Fix flapping test in PartialIndexRebuilderIT  (was: Display index 
state on assertion failure for PartialIndexRebuilderIT)

> Fix flapping test in PartialIndexRebuilderIT
> 
>
> Key: PHOENIX-4239
> URL: https://issues.apache.org/jira/browse/PHOENIX-4239
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>
> To get more info on this flapper: 
> https://builds.apache.org/job/Phoenix-master/1810/testReport/junit/org.apache.phoenix.end2end.index/PartialIndexRebuilderIT/testIndexWriteFailureLeavingIndexActive/





[jira] [Created] (PHOENIX-4239) Display index state on assertion failure for PartialIndexRebuilderIT

2017-09-26 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4239:
-

 Summary: Display index state on assertion failure for 
PartialIndexRebuilderIT
 Key: PHOENIX-4239
 URL: https://issues.apache.org/jira/browse/PHOENIX-4239
 Project: Phoenix
  Issue Type: Test
Reporter: James Taylor


To get more info on this flapper: 
https://builds.apache.org/job/Phoenix-master/1810/testReport/junit/org.apache.phoenix.end2end.index/PartialIndexRebuilderIT/testIndexWriteFailureLeavingIndexActive/





[jira] [Commented] (PHOENIX-4225) Using Google cache may lead to lock up on RS side.

2017-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16181651#comment-16181651
 ] 

Hadoop QA commented on PHOENIX-4225:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12889160/PHOENIX-4225-1.patch
  against master branch at commit 5d9572736a991f19121477a0822d4b8bf26b4c69.
  ATTACHMENT ID: 12889160

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+TenantCacheImpl cache = new TenantCacheImpl(memoryManager, 
maxServerCacheTimeToLive, ticker);
+TenantCacheImpl cache = new TenantCacheImpl(memoryManager, 
maxServerCacheTimeToLive, ticker);

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1485//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1485//console

This message is automatically generated.

> Using Google cache may lead to lock up on RS side. 
> ---
>
> Key: PHOENIX-4225
> URL: https://issues.apache.org/jira/browse/PHOENIX-4225
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4225-1.patch
>
>
> On the server side we are using a Google (Guava) cache with a bounded 
> lifetime. It is integrated with GlobalMemoryManager, which is used for almost 
> all tasks that require memory allocation. The problem is that when a cache 
> entry is removed, no removal notification is sent until the next write/get 
> operation happens. So in some cases, once a large cache has expired (but the 
> memory manager doesn't know that, since it relies on the notification), we 
> try to resend it and the memory manager gets stuck waiting for free space, 
> blocking all other operations with the memory manager.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4187) Use server timestamp for ROW_TIMESTAMP column when value is not specified

2017-09-26 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4187:

Fix Version/s: 4.12.0

> Use server timestamp for ROW_TIMESTAMP column when value is not specified
> -
>
> Key: PHOENIX-4187
> URL: https://issues.apache.org/jira/browse/PHOENIX-4187
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Thomas D'Silva
> Fix For: 4.12.0
>
>






[jira] [Updated] (PHOENIX-4225) Using Google cache may lead to lock up on RS side.

2017-09-26 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4225:
-
Attachment: PHOENIX-4225-1.patch

Easy fix: call cache cleanup whenever we try to get or add a cache. Two test 
cases added:
1. Check that after accessing an expired cache, the memory manager reports the 
correct amount of available memory.
2. Check that if a cache has expired and was never accessed, adding a new cache 
does not block the memory manager.
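The lazy-eviction behavior described above can be modeled with a small sketch (hypothetical class and method names; the real fix touches TenantCacheImpl and GlobalMemoryManager): a time-bounded cache that only processes evictions during cleanup, which the fix invokes on every get/add so the memory accounting catches up.

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Toy model of a time-bounded cache whose eviction accounting runs lazily,
// mirroring the Guava-cache behavior described above (names are hypothetical).
class LazyExpiringCache {
    static class Entry {
        final int size;
        final long expiresAt;
        Entry(int size, long expiresAt) { this.size = size; this.expiresAt = expiresAt; }
    }

    private final Map<String, Entry> entries = new HashMap<>();
    private long freeMemory;  // stands in for GlobalMemoryManager accounting
    private long now = 0;     // manual clock (ticker) for the sketch

    LazyExpiringCache(long totalMemory) { this.freeMemory = totalMemory; }

    void tick(long millis) { now += millis; }

    void put(String id, int size, long ttl) {
        cleanUp();  // the fix: reclaim expired entries on every add
        freeMemory -= size;
        entries.put(id, new Entry(size, now + ttl));
    }

    Entry get(String id) {
        cleanUp();  // ...and on every get
        return entries.get(id);
    }

    // Without these cleanUp() calls, an expired entry would keep freeMemory
    // understated until some later cache operation happened to run eviction.
    void cleanUp() {
        for (Iterator<Map.Entry<String, Entry>> it = entries.entrySet().iterator(); it.hasNext();) {
            Map.Entry<String, Entry> e = it.next();
            if (e.getValue().expiresAt <= now) {
                freeMemory += e.getValue().size;
                it.remove();
            }
        }
    }

    long freeMemory() { return freeMemory; }
}
```

The two test cases above correspond to: expired-then-accessed reports correct free memory, and expired-never-accessed does not block a subsequent add.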

> Using Google cache may lead to lock up on RS side. 
> ---
>
> Key: PHOENIX-4225
> URL: https://issues.apache.org/jira/browse/PHOENIX-4225
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4225-1.patch
>
>
> On the server side we use a Google cache with a lifetime bound. It is 
> integrated with GlobalMemoryManager, which is used for almost all tasks that 
> require memory allocation. The problem is that when a cache entry is 
> removed, the removal notification is not sent until the next write/get 
> operation happens. In some cases, once a large cache has been removed (but 
> the memory manager doesn't know that, since it relies on the notification), we 
> try to resend it, and the memory manager gets stuck waiting for free space, 
> blocking all other operations on the memory manager.





[jira] [Updated] (PHOENIX-4183) Disallow UPSERT that specifies ROW_TIMESTAMP column value for table with mutable index

2017-09-26 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4183:

Fix Version/s: 4.12.0

> Disallow UPSERT that specifies ROW_TIMESTAMP column value for table with 
> mutable index
> --
>
> Key: PHOENIX-4183
> URL: https://issues.apache.org/jira/browse/PHOENIX-4183
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Thomas D'Silva
> Fix For: 4.12.0
>
>
> When a table has a row_timestamp column, the user-provided value for the column 
> ends up being the timestamp of the corresponding mutation. This could be 
> problematic for our mutable indexes. Immutable tables and indexes are fine, 
> though. We should detect this when the UPSERT is performed and raise an error.





[jira] [Assigned] (PHOENIX-4183) Disallow UPSERT that specifies ROW_TIMESTAMP column value for table with mutable index

2017-09-26 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva reassigned PHOENIX-4183:
---

Assignee: Thomas D'Silva

> Disallow UPSERT that specifies ROW_TIMESTAMP column value for table with 
> mutable index
> --
>
> Key: PHOENIX-4183
> URL: https://issues.apache.org/jira/browse/PHOENIX-4183
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Thomas D'Silva
> Fix For: 4.12.0
>
>
> When a table has a row_timestamp column, the user-provided value for the column 
> ends up being the timestamp of the corresponding mutation. This could be 
> problematic for our mutable indexes. Immutable tables and indexes are fine, 
> though. We should detect this when the UPSERT is performed and raise an error.





[jira] [Updated] (PHOENIX-4138) Create a hard limit on number of indexes per table

2017-09-26 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated PHOENIX-4138:

Attachment: PHOENIX-4138.v2.patch

[~jamestaylor] added an integration test and fixed the header.  Hope this made 
it in time. 

> Create a hard limit on number of indexes per table
> --
>
> Key: PHOENIX-4138
> URL: https://issues.apache.org/jira/browse/PHOENIX-4138
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rahul Shrivastava
>Assignee: churro morales
> Attachments: PHOENIX-4138.patch, PHOENIX-4138.v1.patch, 
> PHOENIX-4138.v2.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> There should be a config parameter to impose a hard limit on the number of 
> indexes per table. There is a SQL exception 
> https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java#L260
>  , but it gets triggered on the server side  
> (https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L1589)
>  . 
> We need a client-side limit that can be configured via a Phoenix config 
> parameter: if a user creates more than, say, 30 indexes per table, no further 
> index creation would be allowed for that specific table.
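A client-side guard along these lines can be sketched as follows (hypothetical names; the actual patch would read the limit from a Phoenix config property and throw a SQLException with a dedicated error code):

```java
// Sketch of a client-side hard limit on indexes per table (hypothetical names).
class IndexLimitChecker {
    private final int maxIndexesPerTable;  // e.g. read from a client-side Phoenix config property

    IndexLimitChecker(int maxIndexesPerTable) { this.maxIndexesPerTable = maxIndexesPerTable; }

    // Called before CREATE INDEX is sent to the server.
    void checkCanCreateIndex(String tableName, int existingIndexCount) {
        if (existingIndexCount >= maxIndexesPerTable) {
            // Real code would throw a SQLException with a dedicated SQLExceptionCode.
            throw new IllegalStateException("Cannot create index: table " + tableName
                + " already has " + existingIndexCount + " indexes (limit "
                + maxIndexesPerTable + ")");
        }
    }
}
```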





[jira] [Commented] (PHOENIX-4007) Surface time at which byte/row estimate information was computed in explain plan output

2017-09-26 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181636#comment-16181636
 ] 

Samarth Jain commented on PHOENIX-4007:
---

[~jamestaylor] - does this look good now? The two test failures are unrelated 
to my patch. They fail without it too.

> Surface time at which byte/row estimate information was computed in explain 
> plan output
> ---
>
> Key: PHOENIX-4007
> URL: https://issues.apache.org/jira/browse/PHOENIX-4007
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4007_v10.patch, PHOENIX-4007_v1.patch, 
> PHOENIX-4007_v2.patch, PHOENIX-4007_v3.patch, PHOENIX-4007_v4.patch, 
> PHOENIX-4007_v6.patch, PHOENIX-4007_v7.patch, PHOENIX-4007_v8.patch, 
> PHOENIX-4007_v9.patch
>
>
> As part of PHOENIX-3822, we surfaced byte and row estimates for queries in 
> explain plan. Since we collect this information through stats collection, it 
> would also be helpful to surface when this information was last updated to 
> reflect its freshness. We already store last_stats_update_time in 
> SYSTEM.STATS. So the task would be essentially surfacing 
> last_stats_update_time as another column in the explain plan result set.





[jira] [Commented] (PHOENIX-4224) Automatic resending cache for HashJoin doesn't work when cache has expired on server side

2017-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181581#comment-16181581
 ] 

Hadoop QA commented on PHOENIX-4224:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12889150/PHOENIX-4224-2.patch
  against master branch at commit 5d9572736a991f19121477a0822d4b8bf26b4c69.
  ATTACHMENT ID: 12889150

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1483//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1483//console

This message is automatically generated.

> Automatic resending cache for HashJoin doesn't work when cache has expired on 
> server side 
> --
>
> Key: PHOENIX-4224
> URL: https://issues.apache.org/jira/browse/PHOENIX-4224
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Blocker
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4224-1.patch, PHOENIX-4224-2.patch
>
>
> The problem occurs when the cache has expired on the server side and the client 
> wants to resend it. The problem was introduced in PHOENIX-4010. The actual result 
> in this case is that the client doesn't send the cache, because of the following 
> check:
> {noformat}
>   if (cache.addServer(tableRegionLocation) ... )) {
>   success = addServerCache(table, 
> startkeyOfRegion, pTable, cacheId, cache.getCachePtr(), cacheFactory, 
> txState);
>   }
> {noformat}
> Since the region location hasn't changed, we don't actually send the cache 
> again, but produce a new scanner, which fails with the same error, so the client 
> falls into recursion. 
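The failure mode behind that check can be sketched as follows (hypothetical names; only the "server already recorded" set membership matters): addServer() returns true only the first time a region server is recorded, so after the server-side cache expires the guarded resend is silently skipped.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the resend bug described above (hypothetical names): the cache is
// only sent when addServer() reports a newly-seen server, so an expired
// server-side cache is never resent for an unchanged region location.
class ServerCacheSketch {
    private final Set<String> serversWithCache = new HashSet<>();

    boolean addServer(String regionLocation) {
        return serversWithCache.add(regionLocation);  // false if already present
    }

    // Buggy path: sends only when the server was not seen before.
    boolean maybeResendBuggy(String regionLocation) {
        if (addServer(regionLocation)) {
            return sendCache(regionLocation);
        }
        return false;  // expired cache is never resent; scan retries and recurses
    }

    // Fixed path: forget the stale entry first, then resend unconditionally.
    boolean resendAfterExpiry(String regionLocation) {
        serversWithCache.remove(regionLocation);
        addServer(regionLocation);
        return sendCache(regionLocation);
    }

    private boolean sendCache(String regionLocation) {
        return true;  // stands in for addServerCache(...) succeeding
    }
}
```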





[jira] [Commented] (PHOENIX-3815) Only disable indexes on which write failures occurred

2017-09-26 Thread Vincent Poon (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181579#comment-16181579
 ] 

Vincent Poon commented on PHOENIX-3815:
---

+1, @James Taylor

> Only disable indexes on which write failures occurred
> -
>
> Key: PHOENIX-3815
> URL: https://issues.apache.org/jira/browse/PHOENIX-3815
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Vincent Poon
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3815.0.98.v2.patch, 
> PHOENIX-3815.master.v2.patch, PHOENIX-3815.v1.patch, PHOENIX-3815_v3.patch
>
>
> We currently disable all indexes if any of them fail to be written to. We 
> should really only disable the ones on which the write failed.





[jira] [Comment Edited] (PHOENIX-3815) Only disable indexes on which write failures occurred

2017-09-26 Thread Vincent Poon (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181579#comment-16181579
 ] 

Vincent Poon edited comment on PHOENIX-3815 at 9/26/17 9:09 PM:


+1, [~jamestaylor]


was (Author: vincentpoon):
+1, @James Taylor

> Only disable indexes on which write failures occurred
> -
>
> Key: PHOENIX-3815
> URL: https://issues.apache.org/jira/browse/PHOENIX-3815
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Vincent Poon
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3815.0.98.v2.patch, 
> PHOENIX-3815.master.v2.patch, PHOENIX-3815.v1.patch, PHOENIX-3815_v3.patch
>
>
> We currently disable all indexes if any of them fail to be written to. We 
> should really only disable the ones on which the write failed.





[jira] [Commented] (PHOENIX-4229) Parent-Child linking rows in System.Catalog break tenant view replication

2017-09-26 Thread Geoffrey Jacoby (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181577#comment-16181577
 ] 

Geoffrey Jacoby commented on PHOENIX-4229:
--

[~jamestaylor] - That's at best tricky to get right. What's the exact format of 
the forward pointer, and can it be readily distinguished from other 
non-tenanted rows by its rowkey? 

Replication has to make a call about whether or not to replicate based on the 
table, row key, column family, and column qualifier of an edit. It has the 
values too, but no notion of holistically viewing several Cells in a WALEdit 
with the same rowkey as a row -- this is at the HBase level, not Phoenix. A WALEdit 
might contain multiple Cells, but there's no guarantee that they come from the 
same row, and, given that Phoenix splits batches up, no guarantee that all 
Cells of a changing row will be in the same WALEdit, right? 

Before the forward pointers were added, the logic was very simple: if the 
tenant-id field of the System.Catalog row key was non-empty, the edit 
replicated; if it wasn't, it didn't. 
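The pre-PHOENIX-2051 predicate described above can be sketched as a per-cell filter (hypothetical helper; this assumes the simplified row-key layout where the tenant id is the leading field of a null-byte-separated SYSTEM.CATALOG row key):

```java
// Sketch of the per-edit replication decision described above (hypothetical
// helper). Assumption: the tenant id is the first null-separated field of the
// SYSTEM.CATALOG row key, so a global (non-tenant) row starts with the separator.
class TenantReplicationFilter {
    private static final byte SEPARATOR = 0;

    // Replicate only edits whose row key carries a non-empty tenant id.
    static boolean shouldReplicate(byte[] rowKey) {
        return rowKey.length > 0 && rowKey[0] != SEPARATOR;
    }
}
```

The linking rows break this scheme precisely because they are global rows (empty tenant id) that nevertheless belong to a tenant view's metadata.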

> Parent-Child linking rows in System.Catalog break tenant view replication
> -
>
> Key: PHOENIX-4229
> URL: https://issues.apache.org/jira/browse/PHOENIX-4229
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0, 4.12.0
>Reporter: Geoffrey Jacoby
>
> PHOENIX-2051 introduced new Parent-Child linking rows to System.Catalog that 
> speed up view deletion. Unfortunately, this breaks assumptions in 
> PHOENIX-3639, which gives a way to replicate tenant views from one cluster to 
> another. (It assumes that all the metadata for a tenant view is owned by the 
> tenant -- the linking rows are not.) 
> PHOENIX-3639 was a workaround in the first place to the more fundamental 
> design problem that Phoenix places the metadata for both table schemas -- 
> which should never be replicated -- in the same table and column family as 
> the metadata for tenant views, which should be replicated. 
> Note that the linking rows also make it more difficult to ever split these 
> two datasets apart, as proposed in PHOENIX-3520.





[jira] [Commented] (PHOENIX-4233) IndexScrutiny test tool does not work for salted and shared index tables

2017-09-26 Thread Vincent Poon (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181562#comment-16181562
 ] 

Vincent Poon commented on PHOENIX-4233:
---

 [~jamestaylor] Created PHOENIX-4238 to add this to the MR-based scrutiny tool

> IndexScrutiny test tool does not work for salted and shared index tables
> 
>
> Key: PHOENIX-4233
> URL: https://issues.apache.org/jira/browse/PHOENIX-4233
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4233.patch
>
>
> Our IndexScrutiny test-only tool does not handle salted tables or local or 
> view indexes correctly.





[jira] [Created] (PHOENIX-4238) Add support for salted and shared index tables to IndexScrutinyTool MR

2017-09-26 Thread Vincent Poon (JIRA)
Vincent Poon created PHOENIX-4238:
-

 Summary: Add support for salted and shared index tables to 
IndexScrutinyTool MR
 Key: PHOENIX-4238
 URL: https://issues.apache.org/jira/browse/PHOENIX-4238
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.12.0
Reporter: Vincent Poon
Assignee: Vincent Poon


The IndexScrutinyTool MR job doesn't work for salted and shared tables. We 
should add support for these, similar to PHOENIX-4233.





[jira] [Commented] (PHOENIX-4214) Scans which write should not block region split or close

2017-09-26 Thread Vincent Poon (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181554#comment-16181554
 ] 

Vincent Poon commented on PHOENIX-4214:
---

Thanks [~jamestaylor], I'll take a look.  IIRC the WIP was working on 0.98, so 
it should be easy to adapt.  I'll try to get a patch up for 0.98 soon.

> Scans which write should not block region split or close
> 
>
> Key: PHOENIX-4214
> URL: https://issues.apache.org/jira/browse/PHOENIX-4214
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: PHOENIX-4214-4.x-HBase-0.98_v1.patch, 
> PHOENIX-4214.master.v1.patch, splitDuringUpsertSelect_wip.patch
>
>
> PHOENIX-3111 introduced a scan reference counter which is checked during 
> region preSplit and preClose.  However, a steady stream of UPSERT SELECT or 
> DELETE can keep the count above 0 indefinitely, preventing or greatly 
> delaying a region split or close.
> We should try to avoid starvation of the split / close request, and 
> fail/reject queries where appropriate.
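The starvation-avoidance idea can be sketched as a small gate (hypothetical names; the real implementation lives in UngroupedAggregateRegionObserver): once a split or close has been requested, stop admitting new writing scans so the reference count can drain to zero instead of staying above it indefinitely.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of starvation avoidance for split/close (hypothetical names): a
// pending split/close rejects new writing scans, letting in-flight ones drain.
class ScanReferenceGate {
    private final AtomicInteger activeScans = new AtomicInteger();
    private volatile boolean closing = false;

    // Called when a writing scan (UPSERT SELECT / DELETE) starts.
    boolean tryStartScan() {
        if (closing) {
            return false;  // reject: a split or close is pending
        }
        activeScans.incrementAndGet();
        return true;
    }

    void finishScan() { activeScans.decrementAndGet(); }

    // Called from preSplit/preClose; from now on, no new scans are admitted.
    void requestSplitOrClose() { closing = true; }

    boolean canSplitOrClose() { return closing && activeScans.get() == 0; }
}
```

A steady stream of new scans can no longer keep the count above zero, because every scan arriving after requestSplitOrClose() is rejected.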





[jira] [Updated] (PHOENIX-4224) Automatic resending cache for HashJoin doesn't work when cache has expired on server side

2017-09-26 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4224:
-
Attachment: PHOENIX-4224-2.patch

Added an integration test for the expired cache. The case of resending the cache 
is already covered by the rest of the tests in HashJoinCacheIT.

> Automatic resending cache for HashJoin doesn't work when cache has expired on 
> server side 
> --
>
> Key: PHOENIX-4224
> URL: https://issues.apache.org/jira/browse/PHOENIX-4224
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Blocker
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4224-1.patch, PHOENIX-4224-2.patch
>
>
> The problem occurs when the cache has expired on the server side and the client 
> wants to resend it. The problem was introduced in PHOENIX-4010. The actual result 
> in this case is that the client doesn't send the cache, because of the following 
> check:
> {noformat}
>   if (cache.addServer(tableRegionLocation) ... )) {
>   success = addServerCache(table, 
> startkeyOfRegion, pTable, cacheId, cache.getCachePtr(), cacheFactory, 
> txState);
>   }
> {noformat}
> Since the region location hasn't changed, we don't actually send the cache 
> again, but produce a new scanner, which fails with the same error, so the client 
> falls into recursion. 





[jira] [Commented] (PHOENIX-4214) Scans which write should not block region split or close

2017-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181546#comment-16181546
 ] 

Hudson commented on PHOENIX-4214:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1811 (See 
[https://builds.apache.org/job/Phoenix-master/1811/])
PHOENIX-4214 Scans which write should not block region split or close (jtaylor: 
rev 5d9572736a991f19121477a0822d4b8bf26b4c69)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/execute/UpsertSelectOverlappingBatchesIT.java


> Scans which write should not block region split or close
> 
>
> Key: PHOENIX-4214
> URL: https://issues.apache.org/jira/browse/PHOENIX-4214
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: PHOENIX-4214-4.x-HBase-0.98_v1.patch, 
> PHOENIX-4214.master.v1.patch, splitDuringUpsertSelect_wip.patch
>
>
> PHOENIX-3111 introduced a scan reference counter which is checked during 
> region preSplit and preClose.  However, a steady stream of UPSERT SELECT or 
> DELETE can keep the count above 0 indefinitely, preventing or greatly 
> delaying a region split or close.
> We should try to avoid starvation of the split / close request, and 
> fail/reject queries where appropriate.





[jira] [Commented] (PHOENIX-4229) Parent-Child linking rows in System.Catalog break tenant view replication

2017-09-26 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181536#comment-16181536
 ] 

James Taylor commented on PHOENIX-4229:
---

How about adding the forward pointer when the view is replicated? We have the 
back pointer as part of the child.

> Parent-Child linking rows in System.Catalog break tenant view replication
> -
>
> Key: PHOENIX-4229
> URL: https://issues.apache.org/jira/browse/PHOENIX-4229
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0, 4.12.0
>Reporter: Geoffrey Jacoby
>
> PHOENIX-2051 introduced new Parent-Child linking rows to System.Catalog that 
> speed up view deletion. Unfortunately, this breaks assumptions in 
> PHOENIX-3639, which gives a way to replicate tenant views from one cluster to 
> another. (It assumes that all the metadata for a tenant view is owned by the 
> tenant -- the linking rows are not.) 
> PHOENIX-3639 was a workaround in the first place to the more fundamental 
> design problem that Phoenix places the metadata for both table schemas -- 
> which should never be replicated -- in the same table and column family as 
> the metadata for tenant views, which should be replicated. 
> Note that the linking rows also make it more difficult to ever split these 
> two datasets apart, as proposed in PHOENIX-3520.





[jira] [Resolved] (PHOENIX-3815) Only disable indexes on which write failures occurred

2017-09-26 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-3815.
---
Resolution: Fixed

> Only disable indexes on which write failures occurred
> -
>
> Key: PHOENIX-3815
> URL: https://issues.apache.org/jira/browse/PHOENIX-3815
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Vincent Poon
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3815.0.98.v2.patch, 
> PHOENIX-3815.master.v2.patch, PHOENIX-3815.v1.patch, PHOENIX-3815_v3.patch
>
>
> We currently disable all indexes if any of them fail to be written to. We 
> should really only disable the ones on which the write failed.





[jira] [Commented] (PHOENIX-4214) Scans which write should not block region split or close

2017-09-26 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181499#comment-16181499
 ] 

James Taylor commented on PHOENIX-4214:
---

FYI, test passes on master and 4.x-HBase-1.2 branch.

> Scans which write should not block region split or close
> 
>
> Key: PHOENIX-4214
> URL: https://issues.apache.org/jira/browse/PHOENIX-4214
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: PHOENIX-4214-4.x-HBase-0.98_v1.patch, 
> PHOENIX-4214.master.v1.patch, splitDuringUpsertSelect_wip.patch
>
>
> PHOENIX-3111 introduced a scan reference counter which is checked during 
> region preSplit and preClose.  However, a steady stream of UPSERT SELECT or 
> DELETE can keep the count above 0 indefinitely, preventing or greatly 
> delaying a region split or close.
> We should try to avoid starvation of the split / close request, and 
> fail/reject queries where appropriate.





[jira] [Resolved] (PHOENIX-4233) IndexScrutiny test tool does not work for salted and shared index tables

2017-09-26 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4233.
---
Resolution: Fixed

> IndexScrutiny test tool does not work for salted and shared index tables
> 
>
> Key: PHOENIX-4233
> URL: https://issues.apache.org/jira/browse/PHOENIX-4233
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4233.patch
>
>
> Our IndexScrutiny test-only tool does not handle salted tables or local or 
> view indexes correctly.





[jira] [Commented] (PHOENIX-4214) Scans which write should not block region split or close

2017-09-26 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181474#comment-16181474
 ] 

James Taylor commented on PHOENIX-4214:
---

Fails on 4.x-HBase-1.1 branch with:
{code}
[INFO] Running org.apache.phoenix.execute.UpsertSelectOverlappingBatchesIT
[ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 125.877 
s <<< FAILURE! - in org.apache.phoenix.execute.UpsertSelectOverlappingBatchesIT
[ERROR] 
testSplitDuringUpsertSelect(org.apache.phoenix.execute.UpsertSelectOverlappingBatchesIT)
  Time elapsed: 41.12 s  <<< FAILURE!
junit.framework.AssertionFailedError: Waiting timed out after [30,000] msec
at 
org.apache.phoenix.execute.UpsertSelectOverlappingBatchesIT.testSplitDuringUpsertSelect(UpsertSelectOverlappingBatchesIT.java:209)

[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Failures: 
[ERROR]   UpsertSelectOverlappingBatchesIT.testSplitDuringUpsertSelect:209 
Waiting timed out after [30,000] msec
{code}

> Scans which write should not block region split or close
> 
>
> Key: PHOENIX-4214
> URL: https://issues.apache.org/jira/browse/PHOENIX-4214
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: PHOENIX-4214-4.x-HBase-0.98_v1.patch, 
> PHOENIX-4214.master.v1.patch, splitDuringUpsertSelect_wip.patch
>
>
> PHOENIX-3111 introduced a scan reference counter which is checked during 
> region preSplit and preClose.  However, a steady stream of UPSERT SELECT or 
> DELETE can keep the count above 0 indefinitely, preventing or greatly 
> delaying a region split or close.
> We should try to avoid starvation of the split / close request, and 
> fail/reject queries where appropriate.





[jira] [Updated] (PHOENIX-4214) Scans which write should not block region split or close

2017-09-26 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4214:
--
Attachment: PHOENIX-4214-4.x-HBase-0.98_v1.patch

[~vincentpoon] - this test is hanging on 0.98 (see attached). Please advise.

> Scans which write should not block region split or close
> 
>
> Key: PHOENIX-4214
> URL: https://issues.apache.org/jira/browse/PHOENIX-4214
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: PHOENIX-4214-4.x-HBase-0.98_v1.patch, 
> PHOENIX-4214.master.v1.patch, splitDuringUpsertSelect_wip.patch
>
>
> PHOENIX-3111 introduced a scan reference counter which is checked during 
> region preSplit and preClose.  However, a steady stream of UPSERT SELECT or 
> DELETE can keep the count above 0 indefinitely, preventing or greatly 
> delaying a region split or close.
> We should try to avoid starvation of the split / close request, and 
> fail/reject queries where appropriate.





[jira] [Commented] (PHOENIX-3061) IndexTool marks index as ACTIVE and exit 0 even if bulkload has error

2017-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181430#comment-16181430
 ] 

ASF GitHub Bot commented on PHOENIX-3061:
-

Github user SsnL closed the pull request at:

https://github.com/apache/phoenix/pull/178


> IndexTool marks index as ACTIVE and exit 0 even if bulkload has error
> -
>
> Key: PHOENIX-3061
> URL: https://issues.apache.org/jira/browse/PHOENIX-3061
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Tongzhou Wang
>Assignee: Tongzhou Wang
>Priority: Blocker
>  Labels: easyfix
> Fix For: 4.8.0
>
> Attachments: fix.diff, PHOENIX-3061_addendum2.patch, 
> PHOENIX-3061_addendum3.patch, PHOENIX-3061_addendum4.patch
>
>
> In `IndexTool`, the job exits with code 0 and marks the index table as ACTIVE 
> even though the MapReduce job had an error.
> See: 
> https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/IndexTool.java#L246-L256





[jira] [Commented] (PHOENIX-3097) Incompatibilities with HBase 0.98.6

2017-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181429#comment-16181429
 ] 

ASF GitHub Bot commented on PHOENIX-3097:
-

Github user SsnL closed the pull request at:

https://github.com/apache/phoenix/pull/185


> Incompatibilities with HBase 0.98.6
> ---
>
> Key: PHOENIX-3097
> URL: https://issues.apache.org/jira/browse/PHOENIX-3097
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Tongzhou Wang
>Assignee: Tongzhou Wang
> Fix For: 4.8.0
>
>
> Two places in the 0.98 code base are not compatible with HBase 0.98.6.
> 1. calls to `RegionCoprocessorEnvironment.getRegionInfo()`. Can be replaced 
> by `env.getRegion().getRegionInfo()`.
> 2. calls to `User.runAsLoginUser()`. Can be replaced by `try 
> {UserGroupInformation.getLoginUser().doAs()} catch ...`





[GitHub] phoenix pull request #178: [PHOENIX-3061] IndexTool exits 0 even if bulkload...

2017-09-26 Thread SsnL
Github user SsnL closed the pull request at:

https://github.com/apache/phoenix/pull/178


---


[GitHub] phoenix pull request #185: [PHOENIX-3097] Incompatibilities with HBase 0.98....

2017-09-26 Thread SsnL
Github user SsnL closed the pull request at:

https://github.com/apache/phoenix/pull/185


---


[jira] [Commented] (PHOENIX-4215) Partial index rebuild never complete after PHOENIX-3525 when rebuild period is configured

2017-09-26 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181395#comment-16181395
 ] 

James Taylor commented on PHOENIX-4215:
---

bq. In the testcase patch uploaded here PHOENIX-4215_testcase.patch setting 
period to 1000 milli seconds. With this able to reproduce the issue and meeting 
all the requirements what ever you are saying. As you already said you have a 
test that's cool.
For the record, [~rajeshbabu], your test did not uncover PHOENIX-4220. Without 
this fix, every batch would have written everything from the start of the batch 
until LATEST_TIMESTAMP (which obviously completely defeats the idea of time 
batching and would have performed horribly). We need to be more diligent in our 
testing, IMHO.

> Partial index rebuild never complete after PHOENIX-3525 when rebuild period 
> is configured
> -
>
> Key: PHOENIX-4215
> URL: https://issues.apache.org/jira/browse/PHOENIX-4215
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4215_testcase.patch, PHOENIX-4215_wip2.patch, 
> PHOENIX-4215_wip.patch
>
>
> Currently the default value of phoenix.index.failure.handling.rebuild.period 
> is Long.MAX_VALUE. When it is configured to something like an hour or a day, the 
> partial index rebuild never completes and the index is never usable until it is recreated. 





[jira] [Resolved] (PHOENIX-4215) Partial index rebuild never complete after PHOENIX-3525 when rebuild period is configured

2017-09-26 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4215.
---
Resolution: Duplicate

Duplicate of PHOENIX-4178

> Partial index rebuild never complete after PHOENIX-3525 when rebuild period 
> is configured
> -
>
> Key: PHOENIX-4215
> URL: https://issues.apache.org/jira/browse/PHOENIX-4215
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4215_testcase.patch, PHOENIX-4215_wip2.patch, 
> PHOENIX-4215_wip.patch
>
>
> Currently the default value of phoenix.index.failure.handling.rebuild.period 
> is Long.MAX_VALUE. When it is configured to something like an hour or a day, the 
> partial index rebuild never completes and the index is never usable until it is recreated. 





[jira] [Updated] (PHOENIX-4237) Allow sorting on (Java) collation keys for non-English locales

2017-09-26 Thread Shehzaad Nakhoda (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shehzaad Nakhoda updated PHOENIX-4237:
--
Description: 
Strings stored via Phoenix can be composed from a subset of the entire set of 
Unicode characters. The natural sort order for strings for different languages 
often differs from the order dictated by the binary representation of the 
characters of these strings. Java provides the idea of a Collator which given 
an input string and a (language) locale can generate a Collation Key which can 
then be used to compare strings in that natural order.

Salesforce has recently open-sourced grammaticus. IBM has open-sourced ICU4J 
some time ago. These technologies can be combined to provide a robust new 
Phoenix function that can be used in an ORDER BY clause to sort strings 
according to the user's locale.

  was:
Strings stored via Phoenix can be from the entire set of Unicode characters. 
The natural sort order for strings for different languages often differs from 
the order dictated by the binary representation of the characters of these 
strings. Java provides the idea of a Collator which given an input string and a 
(language) locale can generate a Collation Key which can then be used to 
compare strings in that natural order.

Salesforce has recently open-sourced grammaticus. IBM has open-sourced ICU4J 
some time ago. These technologies can be combined to provide a robust new 
Phoenix function that can be used in an ORDER BY clause to sort strings 
according to the user's locale.


> Allow sorting on (Java) collation keys for non-English locales
> --
>
> Key: PHOENIX-4237
> URL: https://issues.apache.org/jira/browse/PHOENIX-4237
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Shehzaad Nakhoda
>
> Strings stored via Phoenix can be composed from a subset of the entire set of 
> Unicode characters. The natural sort order for strings for different 
> languages often differs from the order dictated by the binary representation 
> of the characters of these strings. Java provides the idea of a Collator 
> which given an input string and a (language) locale can generate a Collation 
> Key which can then be used to compare strings in that natural order.
> Salesforce has recently open-sourced grammaticus. IBM has open-sourced ICU4J 
> some time ago. These technologies can be combined to provide a robust new 
> Phoenix function that can be used in an ORDER BY clause to sort strings 
> according to the user's locale.
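The Collator/CollationKey idea in the description can be illustrated with plain java.text (a minimal sketch; the proposed Phoenix function would additionally build on ICU4J and grammaticus, which are not shown here):

```java
import java.text.Collator;
import java.util.Arrays;
import java.util.Comparator;
import java.util.Locale;

public class CollationSort {
    // Sort strings by locale-aware collation keys rather than binary order.
    // Precomputing CollationKeys means each string is analyzed only once.
    static String[] sortByLocale(String[] words, Locale locale) {
        Collator collator = Collator.getInstance(locale);
        String[] sorted = words.clone();
        Arrays.sort(sorted, Comparator.comparing(collator::getCollationKey));
        return sorted;
    }

    // At PRIMARY strength the collator ignores accent and case differences.
    static boolean primaryEquals(String a, String b, Locale locale) {
        Collator collator = Collator.getInstance(locale);
        collator.setStrength(Collator.PRIMARY);
        return collator.compare(a, b) == 0;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(
                sortByLocale(new String[] {"péché", "peach", "pêche"}, Locale.FRENCH)));
        System.out.println(primaryEquals("résumé", "resume", Locale.US)); // true
    }
}
```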





[jira] [Updated] (PHOENIX-4237) Allow sorting on (Java) collation keys for non-English locales

2017-09-26 Thread Shehzaad Nakhoda (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shehzaad Nakhoda updated PHOENIX-4237:
--
Description: 
Strings stored via Phoenix can be from the entire set of Unicode characters. 
The natural sort order for strings for different languages often differs from 
the order dictated by the binary representation of the characters of these 
strings. Java provides the idea of a Collator which given an input string and a 
(language) locale can generate a Collation Key which can then be used to 
compare strings in that natural order.

Salesforce has recently open-sourced grammaticus. IBM has open-sourced ICU4J 
some time ago. These technologies can be combined to provide a robust new 
Phoenix function that can be used in an ORDER BY clause to sort strings 
according to the user's locale.

> Allow sorting on (Java) collation keys for non-English locales
> --
>
> Key: PHOENIX-4237
> URL: https://issues.apache.org/jira/browse/PHOENIX-4237
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Shehzaad Nakhoda
>
> Strings stored via Phoenix can be from the entire set of Unicode characters. 
> The natural sort order for strings for different languages often differs from 
> the order dictated by the binary representation of the characters of these 
> strings. Java provides the idea of a Collator which given an input string and 
> a (language) locale can generate a Collation Key which can then be used to 
> compare strings in that natural order.
> Salesforce has recently open-sourced grammaticus. IBM has open-sourced ICU4J 
> some time ago. These technologies can be combined to provide a robust new 
> Phoenix function that can be used in an ORDER BY clause to sort strings 
> according to the user's locale.





[jira] [Created] (PHOENIX-4237) Allow sorting on (Java) collation keys for non-English locales

2017-09-26 Thread Shehzaad Nakhoda (JIRA)
Shehzaad Nakhoda created PHOENIX-4237:
-

 Summary: Allow sorting on (Java) collation keys for non-English 
locales
 Key: PHOENIX-4237
 URL: https://issues.apache.org/jira/browse/PHOENIX-4237
 Project: Phoenix
  Issue Type: Improvement
Reporter: Shehzaad Nakhoda








[jira] [Commented] (PHOENIX-4232) Hide shadow cell and commit table access in TAL

2017-09-26 Thread Ohad Shacham (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181177#comment-16181177
 ] 

Ohad Shacham commented on PHOENIX-4232:
---

Thanks [~giacomotaylor].

I looked at Tephra's implementation today.

If we can read different cells and access the commit table from the filter's 
logic, then we can basically implement a filter, as the Tephra developers did, and 
on the client side add an attribute with the required info (transaction, shadow 
cell suffix, etc.). I assume the HTable of the commit table will also need to be 
serialized and transferred? Maybe also that of the original table, in case it 
cannot be extracted? I'm not sure how large these are and whether this will 
degrade performance; however, I assume we will need to do it in any such case.

This way we will have a general solution (an optimization, actually) that can be 
used in vanilla Omid, not only when Omid is used by Phoenix. We can create a 
coprocessor and return it from the TAL function "getCoProcessor" when Omid is 
used by Phoenix.

What do you think?

Thx,
Ohad

> Hide shadow cell and commit table access in TAL
> ---
>
> Key: PHOENIX-4232
> URL: https://issues.apache.org/jira/browse/PHOENIX-4232
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>  Labels: omid
>
> Omid needs to project the shadow cell column qualifier and then based on the 
> value, filter the row. If the shadow cell is not found, it needs to perform a 
> lookup in the commit table (the source of truth) to get the information 
> instead. For the Phoenix integration, there are likely two TAL methods that 
> can be added to handle this:
> # Add method call to new TAL method in preScannerOpen call on coprocessor 
> that projects the shadow cell qualifiers and sets the time range. This is 
> equivalent to the TransactionProcessor.preScannerOpen that Tephra does. It's 
> possible this work could be done on the client side as well, but it's more 
> likely that the stuff that Phoenix does may override this (but we could get 
> it to work if need be).
> # Add TAL method that returns a RegionScanner to abstract out the filtering 
> of the row (potentially querying commit table). This RegionScanner would be 
> added as the first in the chain in the 
> NonAggregateRegionScannerFactory.getRegionScanner() API.





[jira] [Closed] (PHOENIX-4235) Add Schema as option to JDBC connection strings

2017-09-26 Thread Brian Stincer (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brian Stincer closed PHOENIX-4235.
--

Duplicate of PHOENIX-2571

> Add Schema as option to JDBC connection strings
> ---
>
> Key: PHOENIX-4235
> URL: https://issues.apache.org/jira/browse/PHOENIX-4235
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Brian Stincer
>Priority: Minor
>
> Allow users to declare a default schema to use on connection.
> jdbc:phoenix:thin:url=://:[;scheme=][;option=value...]





[jira] [Resolved] (PHOENIX-4235) Add Schema as option to JDBC connection strings

2017-09-26 Thread Brian Stincer (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brian Stincer resolved PHOENIX-4235.

Resolution: Fixed

Duplicate of: PHOENIX-2571

> Add Schema as option to JDBC connection strings
> ---
>
> Key: PHOENIX-4235
> URL: https://issues.apache.org/jira/browse/PHOENIX-4235
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Brian Stincer
>Priority: Minor
>
> Allow users to declare a default schema to use on connection.
> jdbc:phoenix:thin:url=://:[;scheme=][;option=value...]





[jira] [Created] (PHOENIX-4236) Snapshot reads over transactional tables

2017-09-26 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4236:
-

 Summary: Snapshot reads over transactional tables
 Key: PHOENIX-4236
 URL: https://issues.apache.org/jira/browse/PHOENIX-4236
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: Akshita Malhotra


Since coprocessors are not invoked for snapshot reads, I'm not sure this works 
correctly for transactional tables. We need to either:
- Refactor the TransactionProcessor.preScannerOpen() method to be called prior 
to creating the RegionScanner, or
- Make sure the client does everything that coprocessor call does: 
setting max versions, setting the time range, projecting the family delete 
marker, and adding the visibility filter.






[jira] [Commented] (PHOENIX-4224) Automatic resending cache for HashJoin doesn't work when cache has expired on server side

2017-09-26 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181078#comment-16181078
 ] 

Josh Elser commented on PHOENIX-4224:
-

Some unit tests for the new methods added to {{ServerCache}} would be awesome.

{noformat}
+for(HRegionLocation loc : servers) {
+this.servers.put(loc, System.currentTimeMillis());
+}
{noformat}

Call {{System.currentTimeMillis()}} once and use it for all servers; calling it 
per server will be unnecessarily slow for large clusters.

+1 with a new unit test or two.
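A minimal sketch of the suggested fix (using String server names in place of HRegionLocation, and hypothetical names throughout):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CacheSendTimes {
    // Capture the timestamp once, then record it for every server,
    // instead of calling System.currentTimeMillis() once per server.
    static Map<String, Long> recordSendTime(List<String> servers) {
        long now = System.currentTimeMillis();
        Map<String, Long> sent = new HashMap<>();
        for (String server : servers) {
            sent.put(server, now);
        }
        return sent;
    }

    public static void main(String[] args) {
        Map<String, Long> sent = recordSendTime(List.of("rs1", "rs2", "rs3"));
        System.out.println(sent.size()); // 3, all with the same timestamp
    }
}
```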

> Automatic resending cache for HashJoin doesn't work when cache has expired on 
> server side 
> --
>
> Key: PHOENIX-4224
> URL: https://issues.apache.org/jira/browse/PHOENIX-4224
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Blocker
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4224-1.patch
>
>
> The problem occurs when the cache has expired on the server side and the client 
> wants to resend it. This problem was introduced in PHOENIX-4010. The actual result 
> in this case is that the client doesn't send the cache because of the following 
> check:
> {noformat}
>   if (cache.addServer(tableRegionLocation) ... )) {
>   success = addServerCache(table, 
> startkeyOfRegion, pTable, cacheId, cache.getCachePtr(), cacheFactory, 
> txState);
>   }
> {noformat}
> Since the region location hasn't changed, we don't actually send the cache 
> again, but produce a new scanner which will fail with the same error, and the 
> client will fall into recursion. 





[jira] [Commented] (PHOENIX-4138) Create a hard limit on number of indexes per table

2017-09-26 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181062#comment-16181062
 ] 

James Taylor commented on PHOENIX-4138:
---

Need this today if it's going to make 4.12, [~churromorales].

> Create a hard limit on number of indexes per table
> --
>
> Key: PHOENIX-4138
> URL: https://issues.apache.org/jira/browse/PHOENIX-4138
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rahul Shrivastava
>Assignee: churro morales
> Attachments: PHOENIX-4138.patch, PHOENIX-4138.v1.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> There should be a config parameter to impose a hard limit on the number of 
> indexes per table. There is a SQLException 
> (https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java#L260),
> but it gets triggered on the server side 
> (https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L1589).
> We need a client-side limit that can be configured via a Phoenix config 
> parameter. Something like: if a user creates more than, say, 30 indexes per 
> table, no further index creation would be allowed for that specific table. 
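A sketch of the kind of client-side guard described above (the class name, method names, and the default of 30 are illustrative, not Phoenix's actual config API; in Phoenix the limit would come from a configurable property):

```java
public class IndexLimitGuard {
    // Hypothetical default; in Phoenix this would be read from a
    // configurable property rather than a hard-coded constant.
    static final int DEFAULT_MAX_INDEXES_PER_TABLE = 30;

    // Returns true if creating one more index stays within the limit,
    // allowing the client to fail fast before any server-side work.
    static boolean canCreateIndex(int existingIndexes, int maxIndexes) {
        return existingIndexes < maxIndexes;
    }

    public static void main(String[] args) {
        System.out.println(canCreateIndex(29, DEFAULT_MAX_INDEXES_PER_TABLE)); // true
        System.out.println(canCreateIndex(30, DEFAULT_MAX_INDEXES_PER_TABLE)); // false
    }
}
```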





[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-09-26 Thread Karan Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180999#comment-16180999
 ] 

Karan Mehta commented on PHOENIX-3757:
--

[~an...@apache.org] 
Can you clarify the difference between online and offline upgrade?

> System mutex table not being created in SYSTEM namespace when namespace 
> mapping is enabled
> --
>
> Key: PHOENIX-3757
> URL: https://issues.apache.org/jira/browse/PHOENIX-3757
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
>  Labels: namespaces
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3757.001.patch, PHOENIX-3757.002.patch
>
>
> Noticed this issue while writing a test for PHOENIX-3756:
> The SYSTEM.MUTEX table is always created in the default namespace, even when 
> {{phoenix.schema.isNamespaceMappingEnabled=true}}. At a glance, it looks like 
> the logic for the other system tables isn't applied to the mutex table.





[jira] [Created] (PHOENIX-4235) Add Schema as option to JDBC connection strings

2017-09-26 Thread Brian Stincer (JIRA)
Brian Stincer created PHOENIX-4235:
--

 Summary: Add Schema as option to JDBC connection strings
 Key: PHOENIX-4235
 URL: https://issues.apache.org/jira/browse/PHOENIX-4235
 Project: Phoenix
  Issue Type: New Feature
Reporter: Brian Stincer
Priority: Minor


Allow users to declare a default schema to use on connection.

jdbc:phoenix:thin:url=://:[;scheme=][;option=value...]





[jira] [Resolved] (PHOENIX-4234) Unable to find failed csv records in phoenix logs

2017-09-26 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-4234.

Resolution: Invalid

> Unable to find failed csv records in phoenix logs
> -
>
> Key: PHOENIX-4234
> URL: https://issues.apache.org/jira/browse/PHOENIX-4234
> Project: Phoenix
>  Issue Type: Bug
>Reporter: suprita bothra
>
> Unable to fetch information about missing records in a Phoenix table. How can 
> we fetch the missing records' info?
> For example, while parsing a CSV into HBase via bulk loading with MapReduce, 
> using the --ignore-errors option to parse the CSV.
> CSV records having errors are skipped, but we are unable to fetch the info 
> of records which were skipped/failed and didn't go into the table.
> There must be logs with such information. Please help in identifying whether 
> we can get logs of the failed records.





[jira] [Commented] (PHOENIX-4234) Unable to find failed csv records in phoenix logs

2017-09-26 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180685#comment-16180685
 ] 

Ankit Singhal commented on PHOENIX-4234:


It should go into the standard YARN container logs (the YARN log-collector tool 
can be used to gather logs from all the machines in one place in HDFS). 
A log line would contain something like "ERROR: something  Error on record 
".

Note: the user mailing list (u...@phoenix.apache.org) should be used for asking 
these types of questions.

> Unable to find failed csv records in phoenix logs
> -
>
> Key: PHOENIX-4234
> URL: https://issues.apache.org/jira/browse/PHOENIX-4234
> Project: Phoenix
>  Issue Type: Bug
>Reporter: suprita bothra
>
> Unable to fetch information about missing records in a Phoenix table. How can 
> we fetch the missing records' info?
> For example, while parsing a CSV into HBase via bulk loading with MapReduce, 
> using the --ignore-errors option to parse the CSV.
> CSV records having errors are skipped, but we are unable to fetch the info 
> of records which were skipped/failed and didn't go into the table.
> There must be logs with such information. Please help in identifying whether 
> we can get logs of the failed records.





[jira] [Created] (PHOENIX-4234) Unable to find failed csv records in phoenix logs

2017-09-26 Thread suprita bothra (JIRA)
suprita bothra created PHOENIX-4234:
---

 Summary: Unable to find failed csv records in phoenix logs
 Key: PHOENIX-4234
 URL: https://issues.apache.org/jira/browse/PHOENIX-4234
 Project: Phoenix
  Issue Type: Bug
Reporter: suprita bothra


Unable to fetch information about missing records in a Phoenix table. How can we 
fetch the missing records' info?
For example, while parsing a CSV into HBase via bulk loading with MapReduce, 
using the --ignore-errors option to parse the CSV.

CSV records having errors are skipped, but we are unable to fetch the info of 
records which were skipped/failed and didn't go into the table.
There must be logs with such information. Please help in identifying whether we 
can get logs of the failed records.





[jira] [Commented] (PHOENIX-4007) Surface time at which byte/row estimate information was computed in explain plan output

2017-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180486#comment-16180486
 ] 

Hadoop QA commented on PHOENIX-4007:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12889010/PHOENIX-4007_v9.patch
  against master branch at commit 94601de5f5f966fb8bcd1a069409bee460bf2400.
  ATTACHMENT ID: 12889010

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
++ " ( k INTEGER, c1.a bigint,c2.b bigint CONSTRAINT pk 
PRIMARY KEY (k)) GUIDE_POSTS_WIDTH = 0");
++ " ( k INTEGER, c1.a bigint,c2.b bigint 
CONSTRAINT pk PRIMARY KEY (k)) GUIDE_POSTS_WIDTH="
++ " (orgId CHAR(15) NOT NULL, pk2 integer NOT NULL, 
c1.a bigint, c2.b bigint CONSTRAINT PK PRIMARY KEY "
+"CLIENT 1-CHUNK 0 ROWS 20 BYTES PARALLEL 1-WAY FULL SCAN OVER 
" + physicalTableName + "\n" +
+String stats = columnEncoded && !mutable  ? "4-CHUNK 1 ROWS 38 BYTES" 
: "3-CHUNK 0 ROWS 20 BYTES";
++ " ( k INTEGER, c1.a bigint,c2.b bigint 
CONSTRAINT pk PRIMARY KEY (k)) GUIDE_POSTS_WIDTH="
+// If there are no guide posts within the query range, we use 
the estimateInfoTimestamp
+while (intersectWithGuidePosts && (endKey.length == 0 || 
currentGuidePost.compareTo(endKey) <= 0)) {
+Scan newScan = scanRanges.intersectScan(scan, 
currentKeyBytes, currentGuidePostBytes, keyOffset,
+scans = addNewScan(parallelScans, scans, newScan, 
currentGuidePostBytes, false, regionLocation);

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.PartialIndexRebuilderIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.UpsertValuesIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1480//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1480//console

This message is automatically generated.

> Surface time at which byte/row estimate information was computed in explain 
> plan output
> ---
>
> Key: PHOENIX-4007
> URL: https://issues.apache.org/jira/browse/PHOENIX-4007
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4007_v10.patch, PHOENIX-4007_v1.patch, 
> PHOENIX-4007_v2.patch, PHOENIX-4007_v3.patch, PHOENIX-4007_v4.patch, 
> PHOENIX-4007_v6.patch, PHOENIX-4007_v7.patch, PHOENIX-4007_v8.patch, 
> PHOENIX-4007_v9.patch
>
>
> As part of PHOENIX-3822, we surfaced byte and row estimates for queries in 
> explain plan. Since we collect this information through stats collection, it 
> would also be helpful to surface when this information was last updated to 
> reflect its freshness. We already store last_stats_update_time in 
> SYSTEM.STATS. So the task would be essentially surfacing 
> last_stats_update_time as another column in the explain plan result set.





[jira] [Commented] (PHOENIX-4233) IndexScrutiny test tool does not work for salted and shared index tables

2017-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180414#comment-16180414
 ] 

Hudson commented on PHOENIX-4233:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1809 (See 
[https://builds.apache.org/job/Phoenix-master/1809/])
PHOENIX-4233 IndexScrutiny test tool does not work for salted and shared 
(jtaylor: rev 94601de5f5f966fb8bcd1a069409bee460bf2400)
* (edit) phoenix-core/src/test/java/org/apache/phoenix/util/IndexScrutiny.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/util/IndexScrutinyIT.java


> IndexScrutiny test tool does not work for salted and shared index tables
> 
>
> Key: PHOENIX-4233
> URL: https://issues.apache.org/jira/browse/PHOENIX-4233
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4233.patch
>
>
> Our IndexScrutiny test-only tool does not handle salted tables or local or 
> view indexes correctly.





[jira] [Commented] (PHOENIX-4230) Write index updates in postBatchMutateIndispensably for transactional tables

2017-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180413#comment-16180413
 ] 

Hudson commented on PHOENIX-4230:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1809 (See 
[https://builds.apache.org/job/Phoenix-master/1809/])
PHOENIX-4230 Write index updates in postBatchMutateIndispensably for (jtaylor: 
rev d13a2e5b27db8d2232a9fc9890a37052f0f9)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixTransactionalIndexer.java


> Write index updates in postBatchMutateIndispensably for transactional tables
> 
>
> Key: PHOENIX-4230
> URL: https://issues.apache.org/jira/browse/PHOENIX-4230
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-4230.patch, PHOENIX-4230_v2.patch
>
>
> This change was already made for non transactional tables. We should make the 
> same change for transactional tables to prevent RPCs while rows are locked.





[jira] [Commented] (PHOENIX-4007) Surface time at which byte/row estimate information was computed in explain plan output

2017-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180395#comment-16180395
 ] 

Hadoop QA commented on PHOENIX-4007:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12889019/PHOENIX-4007_v10.patch
  against master branch at commit 94601de5f5f966fb8bcd1a069409bee460bf2400.
  ATTACHMENT ID: 12889019

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:red}-1 javac{color}.  The patch appears to cause mvn compile goal to 
fail .

Compilation errors resume:
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-clean-plugin:2.5:clean (default-clean) on 
project phoenix-core: Failed to clean project: Failed to delete 
/home/jenkins/jenkins-slave/workspace/PreCommit-PHOENIX-Build/phoenix-core/target
 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :phoenix-core


Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1481//console

This message is automatically generated.

> Surface time at which byte/row estimate information was computed in explain 
> plan output
> ---
>
> Key: PHOENIX-4007
> URL: https://issues.apache.org/jira/browse/PHOENIX-4007
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4007_v10.patch, PHOENIX-4007_v1.patch, 
> PHOENIX-4007_v2.patch, PHOENIX-4007_v3.patch, PHOENIX-4007_v4.patch, 
> PHOENIX-4007_v6.patch, PHOENIX-4007_v7.patch, PHOENIX-4007_v8.patch, 
> PHOENIX-4007_v9.patch
>
>
> As part of PHOENIX-3822, we surfaced byte and row estimates for queries in 
> explain plan. Since we collect this information through stats collection, it 
> would also be helpful to surface when this information was last updated to 
> reflect its freshness. We already store last_stats_update_time in 
> SYSTEM.STATS. So the task would be essentially surfacing 
> last_stats_update_time as another column in the explain plan result set.





[jira] [Updated] (PHOENIX-4007) Surface time at which byte/row estimate information was computed in explain plan output

2017-09-26 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-4007:
--
Attachment: PHOENIX-4007_v10.patch

Thanks for the feedback, James. Hopefully this should do it. I had to resort to 
adding some extra state in DefaultStatisticsCollector just so that we can 
initialize statsWriter later. Not the biggest fan of this approach, but I am 
not sure what else can be done.

> Surface time at which byte/row estimate information was computed in explain 
> plan output
> ---
>
> Key: PHOENIX-4007
> URL: https://issues.apache.org/jira/browse/PHOENIX-4007
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4007_v10.patch, PHOENIX-4007_v1.patch, 
> PHOENIX-4007_v2.patch, PHOENIX-4007_v3.patch, PHOENIX-4007_v4.patch, 
> PHOENIX-4007_v6.patch, PHOENIX-4007_v7.patch, PHOENIX-4007_v8.patch, 
> PHOENIX-4007_v9.patch
>
>
> As part of PHOENIX-3822, we surfaced byte and row estimates for queries in 
> explain plan. Since we collect this information through stats collection, it 
> would also be helpful to surface when this information was last updated to 
> reflect its freshness. We already store last_stats_update_time in 
> SYSTEM.STATS. So the task would be essentially surfacing 
> last_stats_update_time as another column in the explain plan result set.





[jira] [Commented] (PHOENIX-4007) Surface time at which byte/row estimate information was computed in explain plan output

2017-09-26 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180320#comment-16180320
 ] 

James Taylor commented on PHOENIX-4007:
---

bq. In this patch I have removed the init method altogether and just calling 
initGuidePostDepth once in the constructor of DefaultStatisticsCollector
The problem with that is that we want to do as much of the stats work as 
possible asynchronously. The init method is called by the asynchronous thread. 
The constructor of DefaultStatisticsCollector is not. Let's keep the init 
method as it was and ideally delay the construction of StatisticsWriter until 
the initGuidePostDepth method is called.
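The lazy-construction pattern suggested above can be sketched like this (the class names are hypothetical stand-ins; the real DefaultStatisticsCollector and StatisticsWriter live in phoenix-core and do considerably more):

```java
import java.util.function.Supplier;

public class LazyStatsCollector {
    // Stand-in for StatisticsWriter; constructing the real one is the
    // expensive work we want to defer to the async stats thread.
    static class Writer { }

    private final Supplier<Writer> writerFactory;
    private Writer writer; // deliberately NOT built in the constructor

    LazyStatsCollector(Supplier<Writer> writerFactory) {
        this.writerFactory = writerFactory;
    }

    // Mirrors init()/initGuidePostDepth(): invoked from the asynchronous
    // stats thread, which is when the writer is first constructed.
    void init() {
        if (writer == null) {
            writer = writerFactory.get();
        }
    }

    boolean isInitialized() {
        return writer != null;
    }

    public static void main(String[] args) {
        LazyStatsCollector c = new LazyStatsCollector(Writer::new);
        System.out.println(c.isInitialized()); // false
        c.init();
        System.out.println(c.isInitialized()); // true
    }
}
```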

> Surface time at which byte/row estimate information was computed in explain 
> plan output
> ---
>
> Key: PHOENIX-4007
> URL: https://issues.apache.org/jira/browse/PHOENIX-4007
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4007_v1.patch, PHOENIX-4007_v2.patch, 
> PHOENIX-4007_v3.patch, PHOENIX-4007_v4.patch, PHOENIX-4007_v6.patch, 
> PHOENIX-4007_v7.patch, PHOENIX-4007_v8.patch, PHOENIX-4007_v9.patch
>
>
> As part of PHOENIX-3822, we surfaced byte and row estimates for queries in 
> explain plan. Since we collect this information through stats collection, it 
> would also be helpful to surface when this information was last updated to 
> reflect its freshness. We already store last_stats_update_time in 
> SYSTEM.STATS. So the task would be essentially surfacing 
> last_stats_update_time as another column in the explain plan result set.





[jira] [Commented] (PHOENIX-4007) Surface time at which byte/row estimate information was computed in explain plan output

2017-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180316#comment-16180316
 ] 

Hadoop QA commented on PHOENIX-4007:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12889010/PHOENIX-4007_v9.patch
  against master branch at commit 94601de5f5f966fb8bcd1a069409bee460bf2400.
  ATTACHMENT ID: 12889010

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines longer than 100:
++ " ( k INTEGER, c1.a bigint,c2.b bigint CONSTRAINT pk PRIMARY KEY (k)) GUIDE_POSTS_WIDTH = 0");
++ " ( k INTEGER, c1.a bigint,c2.b bigint CONSTRAINT pk PRIMARY KEY (k)) GUIDE_POSTS_WIDTH="
++ " (orgId CHAR(15) NOT NULL, pk2 integer NOT NULL, c1.a bigint, c2.b bigint CONSTRAINT PK PRIMARY KEY "
+"CLIENT 1-CHUNK 0 ROWS 20 BYTES PARALLEL 1-WAY FULL SCAN OVER " + physicalTableName + "\n" +
+String stats = columnEncoded && !mutable  ? "4-CHUNK 1 ROWS 38 BYTES" : "3-CHUNK 0 ROWS 20 BYTES";
++ " ( k INTEGER, c1.a bigint,c2.b bigint CONSTRAINT pk PRIMARY KEY (k)) GUIDE_POSTS_WIDTH="
+// If there are no guide posts within the query range, we use the estimateInfoTimestamp
+while (intersectWithGuidePosts && (endKey.length == 0 || currentGuidePost.compareTo(endKey) <= 0)) {
+Scan newScan = scanRanges.intersectScan(scan, currentKeyBytes, currentGuidePostBytes, keyOffset,
+scans = addNewScan(parallelScans, scans, newScan, currentGuidePostBytes, false, regionLocation);

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1479//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1479//console

This message is automatically generated.






[jira] [Updated] (PHOENIX-4007) Surface time at which byte/row estimate information was computed in explain plan output

2017-09-26 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-4007:
--
Attachment: PHOENIX-4007_v9.patch

Updated patch with additional tests that check for the exact timestamp of the 
guidepost. I needed to pass the guidepost width to StatsWriter. In this patch 
I have removed the init method altogether and just call initGuidePostDepth 
once in the constructor of DefaultStatisticsCollector.
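For context, the freshness column this issue proposes can be modeled with a small sketch. The class, field, and column ordering below are illustrative assumptions, not Phoenix's actual implementation: the idea is simply that the stats timestamp travels alongside the byte/row estimates so it can be emitted as one more column in the EXPLAIN result set.

```java
// Illustrative model only -- names here are assumptions, not Phoenix code.
// The stats timestamp (last_stats_update_time from SYSTEM.STATS) is carried
// alongside the byte/row estimates so that EXPLAIN can surface it as an
// additional column next to the estimates.
class EstimateInfo {
    final Long estimatedBytes;
    final Long estimatedRows;
    final Long estimateInfoTs; // when the estimates were last computed

    EstimateInfo(Long estimatedBytes, Long estimatedRows, Long estimateInfoTs) {
        this.estimatedBytes = estimatedBytes;
        this.estimatedRows = estimatedRows;
        this.estimateInfoTs = estimateInfoTs;
    }

    /** One EXPLAIN row: the plan text plus the three estimate columns. */
    Object[] toExplainRow(String planText) {
        return new Object[] { planText, estimatedBytes, estimatedRows, estimateInfoTs };
    }
}
```

A client reading the EXPLAIN result set could then distinguish stale estimates from fresh ones by inspecting the timestamp column.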



