[jira] [Commented] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applies local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131742#comment-16131742
 ] 

Hudson commented on PHOENIX-4094:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1737 (See 
[https://builds.apache.org/job/Phoenix-master/1737/])
PHOENIX-4094 ParallelWriterIndexCommitter incorrectly applies local (chenglei: 
rev ab67f30278bbf98338b2b347524e0bd17923257f)
* (add) 
phoenix-core/src/it/java/org/apache/hadoop/hbase/regionserver/wal/WALRecoveryRegionPostOpenIT.java


> ParallelWriterIndexCommitter incorrectly applies local updates to index tables 
> for 4.x-HBase-0.98
> 
>
> Key: PHOENIX-4094
> URL: https://issues.apache.org/jira/browse/PHOENIX-4094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: chenglei
>Assignee: chenglei
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4094_4.x-HBase-0.98_v1.patch, 
> PHOENIX-4094_v1.patch
>
>
> I used phoenix-4.x-HBase-0.98 in my hbase cluster. Once, when I restarted the 
> cluster, I noticed that some RegionServers logged plenty of 
> {{WrongRegionException}} errors like the following:
> {code:java}
> 2017-08-01 11:53:10,669 WARN  
> [rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
>  regionserver.HRegion: Failed getting lock in batch put, 
> row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
> org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out 
> of range for row lock on HRegion 
> BIZARCH_NS_PRODUCT.BIZTRACER_SPAN,90ffd783-b0a3-4f8a-81ef-0a7535fea197_0,1490066612493.463220cd8fad7254481595911e62d74d.,
>  startKey='90ffd783-b0a3-4f8a-81ef-0a7535fea197_0', 
> getEndKey()='917fc343-3331-47fa-907c-df83a6f302f7_0', 
> row='\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0'
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:3539)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3557)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2394)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2261)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2213)
> at 
> org.apache.phoenix.util.IndexUtil.writeLocalUpdates(IndexUtil.java:671)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:157)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:134)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The problem is caused by the ParallelWriterIndexCommitter.write method: in 
> the following line 151, if {{allowLocalUpdates}} is true, it writes the index 
> mutations to the current data table region unconditionally, which is 
> obviously inappropriate: 
> {code:java}
>  150try {
>  151  if (allowLocalUpdates && env != null) {
>  152   try {
>  153   throwFailureIfDone();
>  154   
> IndexUtil.writeLocalUpdates(env.getRegion(), mutations, true);
>  155   return null;
>  156   } catch (IOException ignord) {
>  157   // when it's failed we fall back to the 
> standard & slow way
>  158   if (LOG.isDebugEnabled()) {
>  159   LOG.debug("indexRegion.batchMutate 
> failed and fall back to HTable.batch(). Got error="
>  160   + ignord);
>  161   }
>  162   }
>  163   }
> {code}
> If a data table has a global index table, then when we replay the WALs to the 
> index table in the Indexer.postOpen method at the following line 691, where 
> the {{allowLocalUpdates}} parameter is true, the {{updates}} intended for the 
> global index table are incorrectly written to the current data table region:
> {code:java}
> 688// do the usual writer stuff, killing the server again, if we 
> can't manage to make the index
> 689// writes succeed again
> 690try {
> 691writer.writeAndKillYourselfOnFailure(updates, true);
> 692} catch (IOException e) {
> 693LOG.error("During WAL replay of outstanding index updates, 
> "
> 

[jira] [Commented] (PHOENIX-4099) Do not write table data again when replaying mutations for partial index rebuild

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131717#comment-16131717
 ] 

Hadoop QA commented on PHOENIX-4099:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12882479/PHOENIX-4099.patch
  against master branch at commit 649b737a81243adc43b508a90addc9a2962c6bc1.
  ATTACHMENT ID: 12882479

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
56 warning messages.

{color:red}-1 release audit{color}.  The applied patch generated 2 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 5 zombie test(s):   
at 
org.apache.phoenix.end2end.HashJoinIT.testJoinWithDifferentDateJoinKeyTypes(HashJoinIT.java:2411)
at 
org.apache.phoenix.end2end.HashJoinLocalIndexIT.testJoinWithLocalIndex(HashJoinLocalIndexIT.java:94)
at 
org.apache.phoenix.end2end.EncodeFunctionIT.testNullEncodingType(EncodeFunctionIT.java:120)
at 
org.apache.phoenix.end2end.AlterTableIT.testAddVarCols(AlterTableIT.java:657)

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1275//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1275//artifact/patchprocess/patchReleaseAuditWarnings.txt
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1275//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1275//console

This message is automatically generated.

> Do not write table data again when replaying mutations for partial index 
> rebuild
> 
>
> Key: PHOENIX-4099
> URL: https://issues.apache.org/jira/browse/PHOENIX-4099
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4099.patch, PHOENIX-4099_v2.patch
>
>
> There's no need to re-write the data table mutations when we're replaying 
> them to partially rebuild the index.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4099) Do not write table data again when replaying mutations for partial index rebuild

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131707#comment-16131707
 ] 

Hadoop QA commented on PHOENIX-4099:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12882508/PHOENIX-4099_v2.patch
  against master branch at commit ab67f30278bbf98338b2b347524e0bd17923257f.
  ATTACHMENT ID: 12882508

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 10 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
56 warning messages.

{color:red}-1 release audit{color}.  The applied patch generated 3 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+throw new IllegalArgumentException("Unknown ReplayWrite code 
of " + Bytes.toStringBinary(replayWriteBytes));
+
mutation.setAttribute(BaseScannerRegionObserver.REPLAY_WRITES, 
BaseScannerRegionObserver.REPLAY_TABLE_AND_INDEX_WRITES);
+  boolean resetTimeStamp = replayWrite == null && 
!isProbablyClientControlledTimeStamp(firstMutation);
+Pair statePair = 
state.getIndexUpdateState(maintainer.getAllColumns(), metaData.getReplayWrite() 
!= null, false, context);
+Pair statePair = 
state.getIndexUpdateState(cols, metaData.getReplayWrite() != null, true, 
context);
+
put.setAttribute(BaseScannerRegionObserver.REPLAY_WRITES, 
BaseScannerRegionObserver.REPLAY_ONLY_INDEX_WRITES);
+
del.setAttribute(BaseScannerRegionObserver.REPLAY_WRITES, 
BaseScannerRegionObserver.REPLAY_ONLY_INDEX_WRITES);

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.phoenix.hbase.index.covered.NonTxIndexBuilderTest
  
org.apache.phoenix.hbase.index.covered.update.TestIndexUpdateManager

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1276//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1276//artifact/patchprocess/patchReleaseAuditWarnings.txt
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1276//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1276//console

This message is automatically generated.

> Do not write table data again when replaying mutations for partial index 
> rebuild
> 
>
> Key: PHOENIX-4099
> URL: https://issues.apache.org/jira/browse/PHOENIX-4099
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4099.patch, PHOENIX-4099_v2.patch
>
>
> There's no need to re-write the data table mutations when we're replaying 
> them to partially rebuild the index.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applies local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread chenglei (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131699#comment-16131699
 ] 

chenglei commented on PHOENIX-4094:
---

Pushed the patch to 4.x-HBase-0.98, and pushed the IT tests to master, 
4.x-HBase-1.2 and 4.x-HBase-1.1 branches.

> ParallelWriterIndexCommitter incorrectly applies local updates to index tables 
> for 4.x-HBase-0.98
> 
>
> Key: PHOENIX-4094
> URL: https://issues.apache.org/jira/browse/PHOENIX-4094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: chenglei
>Assignee: chenglei
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4094_4.x-HBase-0.98_v1.patch, 
> PHOENIX-4094_v1.patch
>
>
> I used phoenix-4.x-HBase-0.98 in my hbase cluster. Once, when I restarted the 
> cluster, I noticed that some RegionServers logged plenty of 
> {{WrongRegionException}} errors like the following:
> {code:java}
> 2017-08-01 11:53:10,669 WARN  
> [rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
>  regionserver.HRegion: Failed getting lock in batch put, 
> row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
> org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out 
> of range for row lock on HRegion 
> BIZARCH_NS_PRODUCT.BIZTRACER_SPAN,90ffd783-b0a3-4f8a-81ef-0a7535fea197_0,1490066612493.463220cd8fad7254481595911e62d74d.,
>  startKey='90ffd783-b0a3-4f8a-81ef-0a7535fea197_0', 
> getEndKey()='917fc343-3331-47fa-907c-df83a6f302f7_0', 
> row='\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0'
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:3539)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3557)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2394)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2261)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2213)
> at 
> org.apache.phoenix.util.IndexUtil.writeLocalUpdates(IndexUtil.java:671)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:157)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:134)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The problem is caused by the ParallelWriterIndexCommitter.write method: in 
> the following line 151, if {{allowLocalUpdates}} is true, it writes the index 
> mutations to the current data table region unconditionally, which is 
> obviously inappropriate: 
> {code:java}
>  150try {
>  151  if (allowLocalUpdates && env != null) {
>  152   try {
>  153   throwFailureIfDone();
>  154   
> IndexUtil.writeLocalUpdates(env.getRegion(), mutations, true);
>  155   return null;
>  156   } catch (IOException ignord) {
>  157   // when it's failed we fall back to the 
> standard & slow way
>  158   if (LOG.isDebugEnabled()) {
>  159   LOG.debug("indexRegion.batchMutate 
> failed and fall back to HTable.batch(). Got error="
>  160   + ignord);
>  161   }
>  162   }
>  163   }
> {code}
> If a data table has a global index table, then when we replay the WALs to the 
> index table in the Indexer.postOpen method at the following line 691, where 
> the {{allowLocalUpdates}} parameter is true, the {{updates}} intended for the 
> global index table are incorrectly written to the current data table region:
> {code:java}
> 688// do the usual writer stuff, killing the server again, if we 
> can't manage to make the index
> 689// writes succeed again
> 690try {
> 691writer.writeAndKillYourselfOnFailure(updates, true);
> 692} catch (IOException e) {
> 693LOG.error("During WAL replay of outstanding index updates, 
> "
> 694+ "Exception is thrown instead of killing server 
> during index writing", e);
> 695}
> 696} finally {
> {code}
> However, the ParallelWriterIndexCommitter.write method in the master and other 
> 4.x 

[jira] [Updated] (PHOENIX-4099) Do not write table data again when replaying mutations for partial index rebuild

2017-08-17 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4099:
--
Attachment: PHOENIX-4099_v2.patch

When the REPLAY_AT connection property is used, we still need to write the data 
table mutations, so v2 of the patch writes a 1 in that case (which is what was 
written before) or a 2 (when replaying data writes for a partial index 
rebuild). Backward compatibility is fine because we're using the same value as 
before.
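
For illustration, a minimal sketch of how the two-valued attribute could be 
applied. The constant names match the lines quoted in the QA report earlier in 
this thread; the helper itself is hypothetical, not code from the patch:

{code:java}
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.phoenix.coprocessor.BaseScannerRegionObserver;

// Hypothetical helper (not from the patch): tag a mutation with a replay code.
final class ReplayWritesTagger {
    static void tagForReplay(Mutation mutation, boolean partialIndexRebuild) {
        // REPLAY_TABLE_AND_INDEX_WRITES ("1"): the REPLAY_AT case, where the
        // data table mutations still need to be written (the pre-patch value).
        // REPLAY_ONLY_INDEX_WRITES ("2"): partial index rebuild, where the
        // data table writes are skipped.
        byte[] code = partialIndexRebuild
                ? BaseScannerRegionObserver.REPLAY_ONLY_INDEX_WRITES
                : BaseScannerRegionObserver.REPLAY_TABLE_AND_INDEX_WRITES;
        mutation.setAttribute(BaseScannerRegionObserver.REPLAY_WRITES, code);
    }
}
{code}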

> Do not write table data again when replaying mutations for partial index 
> rebuild
> 
>
> Key: PHOENIX-4099
> URL: https://issues.apache.org/jira/browse/PHOENIX-4099
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4099.patch, PHOENIX-4099_v2.patch
>
>
> There's no need to re-write the data table mutations when we're replaying 
> them to partially rebuild the index.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4095) Prevent index from getting out of sync with data table during partial rebuild

2017-08-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131666#comment-16131666
 ] 

Hudson commented on PHOENIX-4095:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1736 (See 
[https://builds.apache.org/job/Phoenix-master/1736/])
PHOENIX-4095 Prevent index from getting out of sync with data table 
(jamestaylor: rev 649b737a81243adc43b508a90addc9a2962c6bc1)
* (delete) 
phoenix-core/src/test/java/org/apache/phoenix/hbase/index/covered/TestLocalTableState.java
* (delete) 
phoenix-core/src/test/java/org/apache/phoenix/hbase/index/covered/TestNonTxIndexBuilder.java
* (edit) phoenix-core/src/test/java/org/apache/phoenix/util/TestUtil.java
* (add) 
phoenix-core/src/test/java/org/apache/phoenix/hbase/index/covered/LocalTableStateTest.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/covered/NonTxIndexBuilder.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/hbase/index/Indexer.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsIT.java
* (delete) phoenix-core/src/it/java/org/apache/phoenix/util/TestUtilIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/OutOfOrderMutationsIT.java
* (delete) 
phoenix-core/src/test/java/org/apache/phoenix/hbase/index/covered/TestCoveredColumns.java
* (add) phoenix-core/src/test/java/org/apache/phoenix/util/IndexScrutiny.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/covered/example/CoveredColumnIndexer.java
* (add) phoenix-core/src/it/java/org/apache/phoenix/util/IndexScrutinyIT.java
* (add) 
phoenix-core/src/test/java/org/apache/phoenix/hbase/index/covered/CoveredColumnsTest.java
* (add) 
phoenix-core/src/test/java/org/apache/phoenix/hbase/index/covered/NonTxIndexBuilderTest.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/util/IndexManagementUtil.java


> Prevent index from getting out of sync with data table during partial rebuild
> -
>
> Key: PHOENIX-4095
> URL: https://issues.apache.org/jira/browse/PHOENIX-4095
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4095_v1.patch, PHOENIX-4095_v2.patch
>
>
> When there are many versions of a row, the partial index rebuilder is not 
> correctly updating the index.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4099) Do not write table data again when replaying mutations for partial index rebuild

2017-08-17 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131603#comment-16131603
 ] 

Thomas D'Silva commented on PHOENIX-4099:
-

+1

> Do not write table data again when replaying mutations for partial index 
> rebuild
> 
>
> Key: PHOENIX-4099
> URL: https://issues.apache.org/jira/browse/PHOENIX-4099
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4099.patch
>
>
> There's no need to re-write the data table mutations when we're replaying 
> them to partially rebuild the index.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4089) Prevent index from getting out of sync with data table under high concurrency

2017-08-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131573#comment-16131573
 ] 

Hudson commented on PHOENIX-4089:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1735 (See 
[https://builds.apache.org/job/Phoenix-master/1735/])
PHOENIX-4089 Prevent index from getting out of sync with data table 
(jamestaylor: rev ce6b891fd658f6593845d1155509d0f8a599336f)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ImmutableIndexIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/PostIndexDDLCompiler.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/BaseQueryPlan.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/hbase/index/Indexer.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/builder/BaseIndexBuilder.java
* (add) phoenix-core/src/it/java/org/apache/phoenix/util/TestUtilIT.java
* (edit) phoenix-core/src/test/java/org/apache/phoenix/util/TestUtil.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/builder/IndexBuildManager.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/builder/IndexBuilder.java
* (add) phoenix-core/src/test/java/org/apache/phoenix/util/RunUntilFailure.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
* (add) phoenix-core/src/test/java/org/apache/phoenix/util/Repeat.java


> Prevent index from getting out of sync with data table under high concurrency
> -
>
> Key: PHOENIX-4089
> URL: https://issues.apache.org/jira/browse/PHOENIX-4089
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4089_4.x-HBase-0.98.patch, 
> PHOENIX-4089_4.x-HBase-0.98_v2.patch, PHOENIX-4089_v1.patch, 
> PHOENIX_4089_v2.patch, PHOENIX_4089_v3.patch, PHOENIX-4089_v4.patch
>
>
> Under high concurrency, we're still seeing the index get out of sync with the 
> data table. It seems that the particular case is when the same Put occurs 
> with the same time stamp from different clients: based on the locking we do, 
> Phoenix thinks a different Put was the last one than HBase does, leading to 
> inconsistencies.
> The solution is to timestamp the cells on the server-side after the lock has 
> been taken. The new concurrent unit test passes 50x with this in place, while 
> it otherwise fails 1/10 of the time (or more on HBase 1.3).
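
A minimal sketch of the server-side timestamping idea (hypothetical helper, 
under the assumption that it runs after the row locks have been taken; the 
actual patch differs in detail):

{code:java}
import java.util.List;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical sketch: stamp cells with the server's clock once the row lock
// is held, so concurrent clients can't disagree about which Put was last.
final class ServerTimestamper {
    static void stampWithServerTime(Iterable<Mutation> mutations) {
        byte[] now = Bytes.toBytes(System.currentTimeMillis());
        for (Mutation m : mutations) {
            for (List<Cell> cells : m.getFamilyCellMap().values()) {
                for (Cell cell : cells) {
                    if (cell instanceof KeyValue) {
                        // updateLatestStamp only replaces LATEST_TIMESTAMP, so
                        // a timestamp explicitly set by the client is kept.
                        ((KeyValue) cell).updateLatestStamp(now);
                    }
                }
            }
        }
    }
}
{code}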



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4100) Simplify mutable secondary index implementation

2017-08-17 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4100:
-

 Summary: Simplify mutable secondary index implementation
 Key: PHOENIX-4100
 URL: https://issues.apache.org/jira/browse/PHOENIX-4100
 Project: Phoenix
  Issue Type: Improvement
Reporter: James Taylor


There's a lot of code for mutable secondary indexes, much of which is 
commented out (because it doesn't work under load). With all the latest 
patches, in particular PHOENIX-4089, we don't need most of it. Instead, we can 
do a simple scan to find the current value of the data row. We won't have 
concurrent mutations, thanks to our locking and to our timestamping of the 
rows.
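
As a rough sketch of that simplified read (hypothetical helper; the names and 
exact read path are assumptions, not the eventual implementation):

{code:java}
import java.io.IOException;

import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;

// Hypothetical sketch: from within the data region, read the current state of
// the row being mutated, which is all the index maintenance would then need.
final class CurrentRowReader {
    static Result readCurrentRow(RegionCoprocessorEnvironment env, byte[] rowKey)
            throws IOException {
        Get get = new Get(rowKey);
        get.setMaxVersions(1); // only the latest value of each column matters
        return env.getRegion().get(get);
    }
}
{code}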



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4099) Do not write table data again when replaying mutations for partial index rebuild

2017-08-17 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4099:
--
Attachment: PHOENIX-4099.patch

Simple optimization. Would you mind reviewing, [~tdsilva]?

> Do not write table data again when replaying mutations for partial index 
> rebuild
> 
>
> Key: PHOENIX-4099
> URL: https://issues.apache.org/jira/browse/PHOENIX-4099
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4099.patch
>
>
> There's no need to re-write the data table mutations when we're replaying 
> them to partially rebuild the index.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4099) Do not write table data again when replaying mutations for partial index rebuild

2017-08-17 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4099:
-

 Summary: Do not write table data again when replaying mutations 
for partial index rebuild
 Key: PHOENIX-4099
 URL: https://issues.apache.org/jira/browse/PHOENIX-4099
 Project: Phoenix
  Issue Type: Improvement
Reporter: James Taylor
Assignee: James Taylor
 Fix For: 4.12.0


There's no need to re-write the data table mutations when we're replaying them 
to partially rebuild the index.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-2460) Implement scrutiny command to validate whether or not an index is in sync with the data table

2017-08-17 Thread Vincent Poon (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131556#comment-16131556
 ] 

Vincent Poon commented on PHOENIX-2460:
---

[~jamestaylor] [~samarthjain] Mind taking a look at this PR when you get time?
https://github.com/apache/phoenix/pull/269

Thanks!

> Implement scrutiny command to validate whether or not an index is in sync 
> with the data table
> -
>
> Key: PHOENIX-2460
> URL: https://issues.apache.org/jira/browse/PHOENIX-2460
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Vincent Poon
> Attachments: PHOENIX-2460.patch
>
>
> We should have a process that runs to verify that an index is valid against a 
> data table and potentially fixes it if discrepancies are found. This could 
> either be an MR job or a low-priority background task.
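
As a toy illustration of the idea (not the proposed tool; the table and index 
names are invented, and a real scrutiny would compare actual rows rather than 
counts):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

// Toy sketch: compare the row count read through the data table (the NO_INDEX
// hint forces a data table scan) with the row count of the index itself.
public class ScrutinySketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            long dataRows = count(conn, "SELECT /*+ NO_INDEX */ COUNT(*) FROM MY_TABLE");
            long indexRows = count(conn, "SELECT COUNT(*) FROM MY_INDEX");
            if (dataRows != indexRows) {
                System.out.println("out of sync: data=" + dataRows + ", index=" + indexRows);
            }
        }
    }

    private static long count(Connection conn, String sql) throws Exception {
        try (ResultSet rs = conn.createStatement().executeQuery(sql)) {
            rs.next();
            return rs.getLong(1);
        }
    }
}
{code}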



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] phoenix pull request #269: PHOENIX-2460 Implement scrutiny command to valida...

2017-08-17 Thread vincentpoon
GitHub user vincentpoon opened a pull request:

https://github.com/apache/phoenix/pull/269

PHOENIX-2460 Implement scrutiny command to validate whether or not an…

… index is in sync with the data table

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vincentpoon/phoenix PHOENIX-2460

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/269.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #269


commit 7fab3f1ea6cc6829b107f358fc6faf5f6e0afa83
Author: Vincent 
Date:   2017-08-18T00:46:41Z

PHOENIX-2460 Implement scrutiny command to validate whether or not an index 
is in sync with the data table




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-4098) BaseUniqueNamesOwnClusterIT doesn't need to tear down minicluster

2017-08-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131487#comment-16131487
 ] 

Hudson commented on PHOENIX-4098:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1734 (See 
[https://builds.apache.org/job/Phoenix-master/1734/])
PHOENIX-4098 BaseUniqueNamesOwnClusterIT doesn't need to tear down (samarth: 
rev c92ddc451f93f34144be9aeda7cb9cedece450b3)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseUniqueNamesOwnClusterIT.java


> BaseUniqueNamesOwnClusterIT doesn't need to tear down minicluster
> -
>
> Key: PHOENIX-4098
> URL: https://issues.apache.org/jira/browse/PHOENIX-4098
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4098.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (PHOENIX-4092) Ensure index and table remain in sync when the table is mutating during index build

2017-08-17 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4092.
---
Resolution: Duplicate

Duplicate of PHOENIX-2582

> Ensure index and table remain in sync when the table is mutating during 
> index build
> 
>
> Key: PHOENIX-4092
> URL: https://issues.apache.org/jira/browse/PHOENIX-4092
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>
> There's code in MetaDataClient.buildIndex() which runs a "catchup" query 
> after the initial index population finishes to find any rows for inflight 
> writes made while the population is taking place. This is meant to handle the 
> case in which one client runs an UPSERT SELECT while another issues a CREATE 
> INDEX. Since the UPSERT SELECT began before the CREATE INDEX, index 
> maintenance will not be performed. The catchup query is meant to handle this 
> scenario, though it makes an assumption that it can wait long enough for any 
> such DML operations to complete prior to running the catchup query. Instead, 
> we should have a mechanism to wait until all inflight DML operations on a 
> table are complete.
> Note also that if an index is built asynchronously, there's no catchup query 
> run at all.
> We should increase the testing we have around this scenario and deal with 
> these corner cases. For one such test, see 
> ImmutableIndexIT.testCreateIndexDuringUpsertSelect().



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4095) Prevent index from getting out of sync with data table during partial rebuild

2017-08-17 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131466#comment-16131466
 ] 

Thomas D'Silva commented on PHOENIX-4095:
-

+1

> Prevent index from getting out of sync with data table during partial rebuild
> -
>
> Key: PHOENIX-4095
> URL: https://issues.apache.org/jira/browse/PHOENIX-4095
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4095_v1.patch, PHOENIX-4095_v2.patch
>
>
> When there are many versions of a row, the partial index rebuilder is not 
> correctly updating the index.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4095) Prevent index from getting out of sync with data table during partial rebuild

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131461#comment-16131461
 ] 

Hadoop QA commented on PHOENIX-4095:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12882459/PHOENIX-4095_v2.patch
  against master branch at commit ce6b891fd658f6593845d1155509d0f8a599336f.
  ATTACHMENT ID: 12882459

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
56 warning messages.

{color:red}-1 release audit{color}.  The applied patch generated 2 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+
serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_OVERLAP_FORWARD_TIME_ATTRIB,
 Long.toString(2000));
+private static boolean mutateRandomly(Connection conn, String 
fullTableName, int nRows) throws Exception {
+private static boolean hasInactiveIndex(PMetaData metaCache, PTableKey 
key) throws TableNotFoundException {
+private static boolean isAllActiveIndex(PMetaData metaCache, PTableKey 
key) throws TableNotFoundException {
+private static boolean mutateRandomly(Connection conn, String 
fullTableName, int nRows, boolean checkForInactive) throws SQLException, 
InterruptedException {
+conn.createStatement().execute("UPSERT INTO " + fullTableName + " 
VALUES(" + pk + "," + v1 + "," + v2 + ")");
+conn.createStatement().execute("UPSERT INTO " + fullTableName + " 
VALUES(" + pk + "," + v1 + "," + v2 + ")");
+conn.createStatement().execute("CREATE TABLE " + fullTableName + 
"(k INTEGER PRIMARY KEY, v1 INTEGER, v2 INTEGER) COLUMN_ENCODED_BYTES = 0, 
STORE_NULLS=true");
+conn.createStatement().execute("CREATE INDEX " + indexName + " ON 
" + fullTableName + " (v1) INCLUDE (v2)");
+HTableInterface metaTable = 
conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES);

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.phoenix.util.TestUtilIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1274//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1274//artifact/patchprocess/patchReleaseAuditWarnings.txt
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1274//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1274//console

This message is automatically generated.

> Prevent index from getting out of sync with data table during partial rebuild
> -
>
> Key: PHOENIX-4095
> URL: https://issues.apache.org/jira/browse/PHOENIX-4095
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4095_v1.patch, PHOENIX-4095_v2.patch
>
>
> When there are many versions of a row, the partial index rebuilder is not 
> correctly updating the index.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4092) Ensure index and table remain in sync when the table is mutating during index build

2017-08-17 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131445#comment-16131445
 ] 

Thomas D'Silva commented on PHOENIX-4092:
-

[~apurtell] had some suggestions related to this in an email conversation 
https://issues.apache.org/jira/browse/PHOENIX-2582?focusedCommentId=15113226&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15113226

> Ensure index and table remain in sync when the table is mutating during 
> index build
> 
>
> Key: PHOENIX-4092
> URL: https://issues.apache.org/jira/browse/PHOENIX-4092
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>
> There's code in MetaDataClient.buildIndex() which runs a "catchup" query 
> after the initial index population finishes to find any rows for inflight 
> writes made while the population is taking place. This is meant to handle the 
> case in which one client runs an UPSERT SELECT while another issues a CREATE 
> INDEX. Since the UPSERT SELECT began before the CREATE INDEX, index 
> maintenance will not be performed. The catchup query is meant to handle this 
> scenario, though it makes an assumption that it can wait long enough for any 
> such DML operations to complete prior to running the catchup query. Instead, 
> we should have a mechanism to wait until all inflight DML operations on a 
> table are complete.
> Note also that if an index is built asynchronously, there's no catchup query 
> run at all.
> We should increase the testing we have around this scenario and deal with 
> these corner cases. For one such test, see 
> ImmutableIndexIT.testCreateIndexDuringUpsertSelect().



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4095) Prevent index from getting out of sync with data table during partial rebuild

2017-08-17 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4095:
--
Attachment: PHOENIX-4095_v2.patch

Attaching v2, which fixes the test failures.

> Prevent index from getting out of sync with data table during partial rebuild
> -
>
> Key: PHOENIX-4095
> URL: https://issues.apache.org/jira/browse/PHOENIX-4095
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4095_v1.patch, PHOENIX-4095_v2.patch
>
>
> When there are many versions of a row, the partial index rebuilder is not 
> correctly updating the index.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3817) VerifyReplication using SQL

2017-08-17 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131425#comment-16131425
 ] 

Andrew Purtell commented on PHOENIX-3817:
-

If I do a replication of 1 rows from T1 to T2, GOODROWS=1. If I then 
delete the first row from T2 with sqlline, I'll get this in the hbase shell:

{noformat}
 row(s) in 0.6430 seconds
{noformat}

Correct. I deleted one row.

VerifyReplication seems confused:

{noformat}
org.apache.phoenix.mapreduce.VerifyReplicationTool$Verifier$Counter
BADROWS=1
ONLY_IN_SOURCE_TABLE_ROWS=100
{noformat}

That's not right. We are only missing one row.

> VerifyReplication using SQL
> ---
>
> Key: PHOENIX-3817
> URL: https://issues.apache.org/jira/browse/PHOENIX-3817
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Minor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3817.v1.patch, PHOENIX-3817.v2.patch, 
> PHOENIX-3817.v3.patch, PHOENIX-3817.v4.patch
>
>
> Certain use cases may copy or replicate a subset of a table to a different 
> table or cluster. For example, application topologies may map data for 
> specific tenants to different peer clusters.
> It would be useful to have a Phoenix VerifyReplication tool that accepts an 
> SQL query, a target table, and an optional target cluster. The tool would 
> compare data returned by the query on the different tables and update various 
> result counters (similar to HBase's VerifyReplication).
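
A minimal sketch of the comparison such a tool might run (hypothetical; it 
assumes both queries return rows ordered by primary key, with the key in 
column 1 and a single value in column 2). The counter names follow the ones 
mentioned in the comment above:

{code:java}
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Objects;

// Hypothetical sketch: walk two PK-ordered result sets in lockstep and bump
// counters, in the spirit of GOODROWS / BADROWS / ONLY_IN_SOURCE_TABLE_ROWS.
final class ReplicationVerifierSketch {
    long goodRows, badRows, onlyInSource, onlyInTarget;

    void verify(ResultSet source, ResultSet target) throws SQLException {
        boolean s = source.next(), t = target.next();
        while (s && t) {
            int cmp = source.getString(1).compareTo(target.getString(1));
            if (cmp == 0) {
                // Same key on both sides: compare the value column(s).
                if (Objects.equals(source.getString(2), target.getString(2))) goodRows++;
                else badRows++;
                s = source.next();
                t = target.next();
            } else if (cmp < 0) { onlyInSource++; s = source.next(); }
            else { onlyInTarget++; t = target.next(); }
        }
        while (s) { onlyInSource++; s = source.next(); } // leftovers in source
        while (t) { onlyInTarget++; t = target.next(); } // leftovers in target
    }
}
{code}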



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4095) Prevent index from getting out of sync with data table during partial rebuild

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131405#comment-16131405
 ] 

Hadoop QA commented on PHOENIX-4095:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12882453/PHOENIX-4095_v1.patch
  against master branch at commit c92ddc451f93f34144be9aeda7cb9cedece450b3.
  ATTACHMENT ID: 12882453

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1273//console

This message is automatically generated.

> Prevent index from getting out of sync with data table during partial rebuild
> -
>
> Key: PHOENIX-4095
> URL: https://issues.apache.org/jira/browse/PHOENIX-4095
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4095_v1.patch
>
>
> When there are many versions of a row, the partial index rebuilder is not 
> correctly updating the index.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4095) Prevent index from getting out of sync with data table during partial rebuild

2017-08-17 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4095:
--
Attachment: PHOENIX-4095_v1.patch

Please review, [~tdsilva]. This patch:
- ensures that the partial index rebuilder works under heavy load
- moves the logic to sort by time stamp into Indexer
- doesn't write to the WAL during index rebuilding
- uses a thread local to carry state between coprocessor calls (due to lack of 
an alternative; see the sketch below)
- adds a bunch of high-concurrency and other corner-case tests for the index 
rebuilder
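
A minimal sketch of the thread-local pattern referenced in the list above 
(names invented, not the patch's actual fields). It relies on HBase invoking 
the pre- and post-batch hooks for a given batch on the same handler thread:

{code:java}
import java.util.Collection;

import org.apache.hadoop.hbase.client.Mutation;

// Hypothetical sketch: carry per-batch state from preBatchMutate to
// postBatchMutate on the same handler thread via a ThreadLocal.
final class BatchStateHolder {
    private static final ThreadLocal<Collection<Mutation>> PENDING = new ThreadLocal<>();

    static void remember(Collection<Mutation> mutations) { // call from preBatchMutate
        PENDING.set(mutations);
    }

    static Collection<Mutation> pending() { // read from postBatchMutate
        return PENDING.get();
    }

    static void clear() { // always clear in a finally block to avoid leaks
        PENDING.remove();
    }
}
{code}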

> Prevent index from getting out of sync with data table during partial rebuild
> -
>
> Key: PHOENIX-4095
> URL: https://issues.apache.org/jira/browse/PHOENIX-4095
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4095_v1.patch
>
>
> When there are many versions of a row, the partial index rebuilder is not 
> correctly updating the index.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4088) SQLExceptionCode.java code beauty and typos

2017-08-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131352#comment-16131352
 ] 

Hudson commented on PHOENIX-4088:
-

SUCCESS: Integrated in Jenkins build Phoenix-master #1733 (See 
[https://builds.apache.org/job/Phoenix-master/1733/])
PHOENIX-4088 Clean up SQLExceptionCode (Csaba Skrabak) (elserj: rev 
f8bd40e9fdd57b3af5ed8dc4f08357b22b78b479)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java


> SQLExceptionCode.java code beauty and typos
> ---
>
> Key: PHOENIX-4088
> URL: https://issues.apache.org/jira/browse/PHOENIX-4088
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Trivial
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4088.patch
>
>
> * Fix typos in log message strings
> * Fix typo in enum constant name introduced in PHOENIX-2862
> * Organize line breaks around the last enum constants like they are in the 
> top ones



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applies local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131328#comment-16131328
 ] 

Hadoop QA commented on PHOENIX-4094:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12882393/PHOENIX-4094_v1.patch
  against master branch at commit b13413614fef3cdb87233fd1543081e7198d685f.
  ATTACHMENT ID: 12882393

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
57 warning messages.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+private static volatile Multimap<HTableInterfaceReference, Mutation> 
tableReferenceToMutation=null;
+serverProps.put("hbase.coprocessor.region.classes", 
IndexTableFailingRegionObserver.class.getName());
+serverProps.put(Indexer.RecoveryFailurePolicyKeyForTesting, 
ReleaseLatchOnFailurePolicy.class.getName());
+serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_ATTRIB, 
Boolean.FALSE.toString());
+Map<String, String> clientProps = 
Collections.singletonMap(QueryServices.TRANSACTIONS_ENABLED, 
Boolean.FALSE.toString());
+setUpTestDriver(new ReadOnlyProps(serverProps.entrySet().iterator()), 
new ReadOnlyProps(clientProps.entrySet().iterator()));
+public void 
preBatchMutate(ObserverContext<RegionCoprocessorEnvironment> observerContext, 
MiniBatchOperationInProgress<Mutation> miniBatchOp) throws IOException {
+if 
(observerContext.getEnvironment().getRegion().getRegionInfo().getTable().getNameAsString().contains(INDEX_TABLE_NAME)
 && failIndexTableWrite) {
+
if(Bytes.toString(family).startsWith(QueryConstants.LOCAL_INDEX_COLUMN_FAMILY_PREFIX)
 && failIndexTableWrite) {
+public void handleFailure(Multimap<HTableInterfaceReference, Mutation> 
attempted, Exception cause) throws IOException

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.DerivedTableIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.SequenceIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.CustomEntityDataIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.hbase.index.covered.example.EndToEndCoveredIndexingIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.TransactionalViewIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.MutableIndexReplicationIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.CreateTableIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.OrderByIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.FirstValuesFunctionIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.StatementHintsIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.RowValueConstructorIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.PowerFunctionEnd2EndIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.InListIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.IsNullIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.txn.RollbackIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.StatsCollectorIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.DynamicUpsertIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.QueryExecWithoutSCNIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.SubqueryUsingSortMergeJoinIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.RenewLeaseIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.ChildViewsUseParentViewIndexIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.TopNIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.EvaluationOfORIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.CsvBulkLoadToolIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.monitoring.PhoenixMetricsIT

[jira] [Resolved] (PHOENIX-4098) BaseUniqueNamesOwnClusterIT doesn't need to tear down minicluster

2017-08-17 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain resolved PHOENIX-4098.
---
   Resolution: Fixed
Fix Version/s: 4.12.0

> BaseUniqueNamesOwnClusterIT doesn't need to tear down minicluster
> -
>
> Key: PHOENIX-4098
> URL: https://issues.apache.org/jira/browse/PHOENIX-4098
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4098.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (PHOENIX-4098) BaseUniqueNamesOwnClusterIT doesn't need to tear down minicluster

2017-08-17 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain reassigned PHOENIX-4098:
-

Assignee: Samarth Jain

> BaseUniqueNamesOwnClusterIT doesn't need to tear down minicluster
> -
>
> Key: PHOENIX-4098
> URL: https://issues.apache.org/jira/browse/PHOENIX-4098
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4098.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4089) Prevent index from getting out of sync with data table under high concurrency

2017-08-17 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131320#comment-16131320
 ] 

Thomas D'Silva commented on PHOENIX-4089:
-

+1 LGTM

> Prevent index from getting out of sync with data table under high concurrency
> -
>
> Key: PHOENIX-4089
> URL: https://issues.apache.org/jira/browse/PHOENIX-4089
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4089_4.x-HBase-0.98.patch, 
> PHOENIX-4089_4.x-HBase-0.98_v2.patch, PHOENIX-4089_v1.patch, 
> PHOENIX_4089_v2.patch, PHOENIX_4089_v3.patch, PHOENIX-4089_v4.patch
>
>
> Under high concurrency, we're still seeing the index get out of sync with the 
> data table. It seems that the particular case is when the same Put occurs 
> with the same time stamp from different clients: based on the locking we do, 
> Phoenix thinks a different Put was the last one than HBase does, leading to 
> inconsistencies.
> The solution is to timestamp the cells on the server-side after the lock has 
> been taken. The new concurrent unit test passes 50x with this in place, while 
> it otherwise fails 1/10 of the time (or more on HBase 1.3).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applies local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131284#comment-16131284
 ] 

Andrew Purtell commented on PHOENIX-4094:
-

[~jamestaylor] Noted, following up on that

> ParallelWriterIndexCommitter incorrectly applies local updates to index tables 
> for 4.x-HBase-0.98
> 
>
> Key: PHOENIX-4094
> URL: https://issues.apache.org/jira/browse/PHOENIX-4094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: chenglei
>Assignee: chenglei
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4094_4.x-HBase-0.98_v1.patch, 
> PHOENIX-4094_v1.patch
>
>
> I used phoenix-4.x-HBase-0.98 in my hbase cluster. Once, when I restarted the 
> cluster, I noticed that some RegionServers logged plenty of 
> {{WrongRegionException}} errors like the following:
> {code:java}
> 2017-08-01 11:53:10,669 WARN  
> [rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
>  regionserver.HRegion: Failed getting lock in batch put, 
> row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
> org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out 
> of range for row lock on HRegion 
> BIZARCH_NS_PRODUCT.BIZTRACER_SPAN,90ffd783-b0a3-4f8a-81ef-0a7535fea197_0,1490066612493.463220cd8fad7254481595911e62d74d.,
>  startKey='90ffd783-b0a3-4f8a-81ef-0a7535fea197_0', 
> getEndKey()='917fc343-3331-47fa-907c-df83a6f302f7_0', 
> row='\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0'
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:3539)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3557)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2394)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2261)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2213)
> at 
> org.apache.phoenix.util.IndexUtil.writeLocalUpdates(IndexUtil.java:671)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:157)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:134)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The problem is caused by the ParallelWriterIndexCommitter.write method: in 
> the following line 151, if {{allowLocalUpdates}} is true, it writes the index 
> mutations to the current data table region unconditionally, which is 
> obviously inappropriate: 
> {code:java}
>  150try {
>  151  if (allowLocalUpdates && env != null) {
>  152   try {
>  153   throwFailureIfDone();
>  154   
> IndexUtil.writeLocalUpdates(env.getRegion(), mutations, true);
>  155   return null;
>  156   } catch (IOException ignord) {
>  157   // when it's failed we fall back to the 
> standard & slow way
>  158   if (LOG.isDebugEnabled()) {
>  159   LOG.debug("indexRegion.batchMutate 
> failed and fall back to HTable.batch(). Got error="
>  160   + ignord);
>  161   }
>  162   }
>  163   }
> {code}
> If a data table has a global index table, then when we replay the WALs to the 
> index table in the Indexer.postOpen method at the following line 691, where 
> the {{allowLocalUpdates}} parameter is true, the {{updates}} intended for the 
> global index table are incorrectly written to the current data table region:
> {code:java}
> 688// do the usual writer stuff, killing the server again, if we 
> can't manage to make the index
> 689// writes succeed again
> 690try {
> 691writer.writeAndKillYourselfOnFailure(updates, true);
> 692} catch (IOException e) {
> 693LOG.error("During WAL replay of outstanding index updates, 
> "
> 694+ "Exception is thrown instead of killing server 
> during index writing", e);
> 695}
> 696} finally {
> {code}
> However, the ParallelWriterIndexCommitter.write method in the master and 
> other 4.x branches is correct, as seen in the following line 150 and line 151 
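
(The quoted code block is truncated here. As a paraphrased sketch, not the 
verbatim source, the guard in those branches additionally checks that the 
table reference names the data region's own table before applying local 
updates:)

{code:java}
// Paraphrased sketch of the guard in the newer branches; exact code may differ.
if (allowLocalUpdates
        && env != null
        && tableReference.getTableName().equals(
            env.getRegion().getTableDesc().getNameAsString())) {
    try {
        throwFailureIfDone();
        IndexUtil.writeLocalUpdates(env.getRegion(), mutations, true);
        return null;
    } catch (IOException ignord) {
        // on failure, fall through to the standard (slower) HTable.batch() path
    }
}
{code}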

[jira] [Updated] (PHOENIX-4098) BaseUniqueNamesOwnClusterIT doesn't need to tear down minicluster

2017-08-17 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-4098:
--
Attachment: PHOENIX-4098.patch

> BaseUniqueNamesOwnClusterIT doesn't need to tear down minicluster
> -
>
> Key: PHOENIX-4098
> URL: https://issues.apache.org/jira/browse/PHOENIX-4098
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
> Attachments: PHOENIX-4098.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4098) BaseUniqueNamesOwnClusterIT doesn't need to tear down minicluster

2017-08-17 Thread Samarth Jain (JIRA)
Samarth Jain created PHOENIX-4098:
-

 Summary: BaseUniqueNamesOwnClusterIT doesn't need to tear down 
minicluster
 Key: PHOENIX-4098
 URL: https://issues.apache.org/jira/browse/PHOENIX-4098
 Project: Phoenix
  Issue Type: Bug
Reporter: Samarth Jain






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4089) Prevent index from getting out of sync with data table under high concurrency

2017-08-17 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131256#comment-16131256
 ] 

James Taylor commented on PHOENIX-4089:
---

[~tdsilva]
We still need the Indexer.isProbablyClientControlledTimeStamp method because 
tests still rely on being able to use CURRENT_SCN for DML. I've filed 
PHOENIX-4096 to change that.

For UPSERT SELECT, since we keep the scanner open that's doing the SELECT, it 
won't see new rows from the UPSERT. For example, the 
UpsertSelectAutoCommitIT.testUpsertSelectDoesntSeeUpsertedData passes fine with 
this change. One potential caveat is if the region splits while the UPSERT 
SELECT is running. I believe that fails today (see PHOENIX-3163 and 
PHOENIX-2903), but I've filed PHOENIX-4097 to deal with that once these others 
are fixed.

> Prevent index from getting out of sync with data table under high concurrency
> -
>
> Key: PHOENIX-4089
> URL: https://issues.apache.org/jira/browse/PHOENIX-4089
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4089_4.x-HBase-0.98.patch, 
> PHOENIX-4089_4.x-HBase-0.98_v2.patch, PHOENIX-4089_v1.patch, 
> PHOENIX_4089_v2.patch, PHOENIX_4089_v3.patch, PHOENIX-4089_v4.patch
>
>
> Under high concurrency, we're still seeing the index get out of sync with the
> data table. It seems that the particular case is when the same Put occurs
> with the same timestamp from different clients: based on the locking we do,
> Phoenix thinks a different Put was the last one than HBase does, leading to
> inconsistencies.
> The solution is to timestamp the cells on the server-side after the lock has 
> been taken. The new concurrent unit test passes 50x with this in place, while 
> it otherwise fails 1/10 of the time (or more on HBase 1.3).
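> A rough illustration of that approach (a sketch only, assuming Phoenix's
> EnvironmentEdgeManager and HBase 1.x's CellUtil.setTimestamp; the actual
> patch differs):
> {code:java}
> import java.io.IOException;
> import java.util.List;
> import org.apache.hadoop.hbase.Cell;
> import org.apache.hadoop.hbase.CellUtil;
> import org.apache.hadoop.hbase.client.Mutation;
> import org.apache.phoenix.util.EnvironmentEdgeManager;
>
> public class ServerTimestampSketch {
>     // Once the row lock is held, stamp every cell with one server-side
>     // timestamp so Phoenix and HBase agree on which Put was last.
>     public static void stampWithServerTime(Iterable<Mutation> mutations)
>             throws IOException {
>         long serverTs = EnvironmentEdgeManager.currentTimeMillis();
>         for (Mutation m : mutations) {
>             for (List<Cell> cells : m.getFamilyCellMap().values()) {
>                 for (Cell cell : cells) {
>                     CellUtil.setTimestamp(cell, serverTs);
>                 }
>             }
>         }
>     }
> }
> {code}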



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4097) Ensure new rows not seen if split occurs during UPSERT SELECT to same table

2017-08-17 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131258#comment-16131258
 ] 

James Taylor commented on PHOENIX-4097:
---

FYI, another split-related issue, [~aertoria].

> Ensure new rows not seen if split occurs during UPSERT SELECT to same table
> ---
>
> Key: PHOENIX-4097
> URL: https://issues.apache.org/jira/browse/PHOENIX-4097
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>
> Since we're running at the latest timestamp during an UPSERT SELECT, we
> should ensure that if a split occurs, we still do not see the new rows that
> have been upserted. If that's not possible, we should likely fail the UPSERT
> SELECT.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4097) Ensure new rows not seen if split occurs during UPSERT SELECT to same table

2017-08-17 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4097:
-

 Summary: Ensure new rows not seen if split occurs during UPSERT 
SELECT to same table
 Key: PHOENIX-4097
 URL: https://issues.apache.org/jira/browse/PHOENIX-4097
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor


Since we're running at the latest timestamp during an UPSERT SELECT, we should 
ensure that if a split occurs, we still do not see the new rows that have been 
upserted. If that's not possible, we should likely fail the UPSERT SELECT.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4096) Disallow DML operations on connections with CURRENT_SCN set

2017-08-17 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4096:
-

 Summary: Disallow DML operations on connections with CURRENT_SCN 
set
 Key: PHOENIX-4096
 URL: https://issues.apache.org/jira/browse/PHOENIX-4096
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor


We should make a connection read-only if CURRENT_SCN is set. It's really a bad 
idea to go back in time and update data, and it won't work with secondary 
indexing, potentially leading to your index and table getting out of sync.

For testing purposes, where we need to control the timestamp, we should rely on 
the EnvironmentEdgeManager instead to control the current time.
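
For illustration, a test-only manual clock might look like the following sketch 
(assuming an HBase/Phoenix-style EnvironmentEdge with injectEdge()/reset(); 
exact class and method names vary by version, so treat this as a sketch rather 
than the proposed change):

{code:java}
// Hypothetical manual clock: time advances only when the test says so.
public class ManualClock implements EnvironmentEdge {
    private volatile long currentTime = System.currentTimeMillis();

    public void incrementValue(long millis) {
        currentTime += millis;
    }

    @Override
    public long currentTimeMillis() {
        return currentTime;
    }
}

// In a test:
//   ManualClock clock = new ManualClock();
//   EnvironmentEdgeManager.injectEdge(clock);  // all "now" lookups use the clock
//   clock.incrementValue(1000);                // advance time deterministically
//   EnvironmentEdgeManager.reset();            // restore the default edge
{code}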



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-2048) change to_char() function to use HALF_UP rounding mode

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131013#comment-16131013
 ] 

Hadoop QA commented on PHOENIX-2048:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12850265/phoenix-2048.patch
  against master branch at commit b13413614fef3cdb87233fd1543081e7198d685f.
  ATTACHMENT ID: 12850265

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
56 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+String query = "select to_char(col_decimal, '" + pattern + "') 
from " + TO_CHAR_TABLE_NAME + " WHERE pk = " + pk;
+String query = "select to_char(col_date, '" + pattern + "') from " + 
TO_CHAR_TABLE_NAME + " WHERE pk = " + pk;
+String query = "select to_char(col_time, '" + pattern + "') from " + 
TO_CHAR_TABLE_NAME + " WHERE pk = " + pk;
+String query = "select to_char(col_timestamp, '" + pattern + "') from 
" + TO_CHAR_TABLE_NAME + " WHERE pk = " + pk;
+String query = "select to_char(col_integer, '" + pattern + "') from " 
+ TO_CHAR_TABLE_NAME + " WHERE pk = " + pk;
+String query = "select to_char(col_decimal, '" + pattern + "') from " 
+ TO_CHAR_TABLE_NAME + " WHERE pk = " + pk;

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1269//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1269//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1269//console

This message is automatically generated.

> change to_char() function to use HALF_UP rounding mode
> --
>
> Key: PHOENIX-2048
> URL: https://issues.apache.org/jira/browse/PHOENIX-2048
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: verify
>Reporter: Jonathan Leech
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-2048.patch
>
>
> The to_char() function uses the default rounding mode of Java's DecimalFormat,
> which is a strange one called HALF_EVEN: it rounds a '5' in the last
> position either up or down depending on the preceding digit.
> Change it to HALF_UP so it rounds the same way as the round() function does,
> or provide a way to override the behavior, e.g. globally, as a client
> config, or as an argument to the to_char() function.
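> The difference is easy to demonstrate with the JDK's DecimalFormat alone
> (plain Java, independent of Phoenix):
> {code:java}
> import java.math.RoundingMode;
> import java.text.DecimalFormat;
>
> public class HalfUpDemo {
>     public static void main(String[] args) {
>         DecimalFormat df = new DecimalFormat("#0.0");
>         // Default HALF_EVEN: a trailing '5' rounds toward the even neighbor,
>         // so the direction depends on the preceding digit.
>         System.out.println(df.format(0.25)); // prints 0.2
>         System.out.println(df.format(0.75)); // prints 0.8
>
>         df.setRoundingMode(RoundingMode.HALF_UP);
>         // HALF_UP behaves like round(): a '5' always rounds away from zero.
>         System.out.println(df.format(0.25)); // prints 0.3
>         System.out.println(df.format(0.75)); // prints 0.8
>     }
> }
> {code}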



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-2048) change to_char() function to use HALF_UP rounding mode

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131005#comment-16131005
 ] 

Hadoop QA commented on PHOENIX-2048:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12882386/PHOENIX-2048.patch
  against master branch at commit b13413614fef3cdb87233fd1543081e7198d685f.
  ATTACHMENT ID: 12882386

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
56 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+String query = "select to_char(col_decimal, '" + pattern + "') 
from " + TO_CHAR_TABLE_NAME + " WHERE pk = " + pk;
+String query = "select to_char(col_date, '" + pattern + "') from " + 
TO_CHAR_TABLE_NAME + " WHERE pk = " + pk;
+String query = "select to_char(col_time, '" + pattern + "') from " + 
TO_CHAR_TABLE_NAME + " WHERE pk = " + pk;
+String query = "select to_char(col_timestamp, '" + pattern + "') from 
" + TO_CHAR_TABLE_NAME + " WHERE pk = " + pk;
+String query = "select to_char(col_integer, '" + pattern + "') from " 
+ TO_CHAR_TABLE_NAME + " WHERE pk = " + pk;
+String query = "select to_char(col_decimal, '" + pattern + "') from " 
+ TO_CHAR_TABLE_NAME + " WHERE pk = " + pk;

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1270//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1270//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1270//console

This message is automatically generated.

> change to_char() function to use HALF_UP rounding mode
> --
>
> Key: PHOENIX-2048
> URL: https://issues.apache.org/jira/browse/PHOENIX-2048
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: verify
>Reporter: Jonathan Leech
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-2048.patch
>
>
> The to_char() function uses the default rounding mode of Java's DecimalFormat,
> which is a strange one called HALF_EVEN: it rounds a '5' in the last
> position either up or down depending on the preceding digit.
> Change it to HALF_UP so it rounds the same way as the round() function does,
> or provide a way to override the behavior, e.g. globally, as a client
> config, or as an argument to the to_char() function.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4089) Prevent index from getting out of sync with data table under high concurrency

2017-08-17 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130951#comment-16130951
 ] 

Thomas D'Silva commented on PHOENIX-4089:
-

[~jamestaylor]

In MutationState.validate we no longer set the timestamp to the table timestamp; 
it is set to the SCN or LATEST_TIMESTAMP, so do we still need the 
Indexer.isProbablyClientControlledTimeStamp method?

{code}
-return scn == null ? serverTimeStamp == QueryConstants.UNSET_TIMESTAMP 
? HConstants.LATEST_TIMESTAMP : serverTimeStamp : scn;
+return scn == null ? HConstants.LATEST_TIMESTAMP : scn;
{code}

Also will UPSERT SELECT to the same table work correctly with this change? I 
think we discussed something similar in PHOENIX-4051.

> Prevent index from getting out of sync with data table under high concurrency
> -
>
> Key: PHOENIX-4089
> URL: https://issues.apache.org/jira/browse/PHOENIX-4089
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4089_4.x-HBase-0.98.patch, 
> PHOENIX-4089_4.x-HBase-0.98_v2.patch, PHOENIX-4089_v1.patch, 
> PHOENIX_4089_v2.patch, PHOENIX_4089_v3.patch, PHOENIX-4089_v4.patch
>
>
> Under high concurrency, we're still seeing the index get out of sync with the
> data table. It seems that the particular case is when the same Put occurs
> with the same timestamp from different clients: based on the locking we do,
> Phoenix thinks a different Put was the last one than HBase does, leading to
> inconsistencies.
> The solution is to timestamp the cells on the server-side after the lock has 
> been taken. The new concurrent unit test passes 50x with this in place, while 
> it otherwise fails 1/10 of the time (or more on HBase 1.3).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-2370) ResultSetMetaData.getColumnDisplaySize() returns bad value for varchar and varbinary columns

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130912#comment-16130912
 ] 

Hadoop QA commented on PHOENIX-2370:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12877299/PHOENIX-2370.patch
  against master branch at commit b13413614fef3cdb87233fd1543081e7198d685f.
  ATTACHMENT ID: 12877299

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+assertEquals(PhoenixResultSetMetaData.DEFAULT_DISPLAY_WIDTH, 
rs.getMetaData().getColumnDisplaySize(5));
+"CREATE TABLE T (pk1 CHAR(15) not null PRIMARY KEY, VB10 
VARBINARY(10), VBHUGE VARBINARY(2147483647), VB VARBINARY) ");
+assertEquals(PhoenixResultSetMetaData.DEFAULT_DISPLAY_WIDTH, 
rs.getMetaData().getColumnDisplaySize(4));

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.NotQueryIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1268//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1268//console

This message is automatically generated.

> ResultSetMetaData.getColumnDisplaySize() returns bad value for varchar and 
> varbinary columns
> 
>
> Key: PHOENIX-2370
> URL: https://issues.apache.org/jira/browse/PHOENIX-2370
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.0
> Environment: Linux lnxx64r6 2.6.32-131.0.15.el6.x86_64 #1 SMP Tue May 
> 10 15:42:40 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Sergio Lob
>Assignee: Csaba Skrabak
>  Labels: newbie, verify
> Fix For: 4.12.0
>
> Attachments: PHOENIX-2370.patch
>
>
> ResultSetMetaData.getColumnDisplaySize() returns bad values for varchar and 
> varbinary columns. Specifically, for the following table:
> CREATE TABLE SERGIO (I INTEGER, V10 VARCHAR(10),
> VHUGE VARCHAR(2147483647), V VARCHAR, VB10 VARBINARY(10), VBHUGE 
> VARBINARY(2147483647), VB VARBINARY) ;
> 1. getColumnDisplaySize() returns 20 for all varbinary columns, no matter the 
> defined size. This should return the max possible size of the column, so:
>  getColumnDisplaySize() should return 10 for column VB10,
>  getColumnDisplaySize() should return 2147483647 for column VBHUGE,
>  getColumnDisplaySize() should return 2147483647 for column VB, assuming that 
> a column defined with no size should default to the maximum size.
> 2. getColumnDisplaySize() returns 40 for all varchar columns that are not 
> defined with a size, like in column V in the above CREATE TABLE.  I would 
> think that a VARCHAR column defined with no size parameter should default to 
> the maximum size possible, not to a random number like 40.
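> In JDBC terms the expectation reads like this (a sketch against the SERGIO
> table above; the printed "expected" values are the reporter's expected
> results, not the current behavior, and conn is assumed to be an open Phoenix
> connection):
> {code:java}
> import java.sql.Connection;
> import java.sql.ResultSet;
> import java.sql.ResultSetMetaData;
> import java.sql.SQLException;
>
> public class DisplaySizeSketch {
>     static void checkDisplaySizes(Connection conn) throws SQLException {
>         ResultSet rs = conn.createStatement()
>                 .executeQuery("SELECT VB10, VBHUGE, VB FROM SERGIO");
>         ResultSetMetaData md = rs.getMetaData();
>         System.out.println(md.getColumnDisplaySize(1)); // expected 10, reported 20
>         System.out.println(md.getColumnDisplaySize(2)); // expected 2147483647, reported 20
>         System.out.println(md.getColumnDisplaySize(3)); // expected 2147483647, reported 20
>     }
> }
> {code}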



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4095) Prevent index from getting out of sync with data table during partial rebuild

2017-08-17 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4095:
-

 Summary: Prevent index from getting out of sync with data table 
during partial rebuild
 Key: PHOENIX-4095
 URL: https://issues.apache.org/jira/browse/PHOENIX-4095
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: James Taylor
 Fix For: 4.12.0


When there are many versions of a row, the partial index rebuilder is not 
correctly updating the index.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4092) Ensure index and table remains in sync when the table is mutating during index build

2017-08-17 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4092:
--
Summary: Ensure index and table remains in sync when the table is mutating 
during index build  (was: Ensure index and table remains in sync when the table 
is mutating)

> Ensure index and table remains in sync when the table is mutating during 
> index build
> 
>
> Key: PHOENIX-4092
> URL: https://issues.apache.org/jira/browse/PHOENIX-4092
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>
> There's code in MetaDataClient.buildIndex() which runs a "catchup" query 
> after the initial index population finishes to find any rows for inflight 
> writes made while the population is taking place. This is meant to handle the 
> case in which one client runs an UPSERT SELECT while another issues a CREATE 
> INDEX. Since the UPSERT SELECT began before the CREATE INDEX, index 
> maintenance will not be performed. The catchup query is meant to handle this 
> scenario, though it makes an assumption that it can wait long enough for any 
> such DML operations to complete prior to running the catchup query. Instead, 
> we should have a mechanism to wait until all inflight DML operations on a 
> table are complete.
> Note also that if an index is built asynchronously, there's no catchup query 
> run at all.
> We should increase the testing we have around this scenario and deal with 
> these corner cases. For one such test, see 
> ImmutableIndexIT.testCreateIndexDuringUpsertSelect().



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applys local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130836#comment-16130836
 ] 

James Taylor commented on PHOENIX-4094:
---

+1. Good idea, [~comnetwork].

> ParallelWriterIndexCommitter incorrectly applys local updates to index tables 
> for 4.x-HBase-0.98
> 
>
> Key: PHOENIX-4094
> URL: https://issues.apache.org/jira/browse/PHOENIX-4094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: chenglei
>Assignee: chenglei
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4094_4.x-HBase-0.98_v1.patch, 
> PHOENIX-4094_v1.patch
>
>
> I used phoenix-4.x-HBase-0.98 in my HBase cluster. When I restarted my HBase
> cluster at a certain time, I noticed some RegionServers had plenty of
> {{WrongRegionException}}s like the following:
> {code:java}
> 2017-08-01 11:53:10,669 WARN  
> [rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
>  regionserver.HRegion: Failed getting lock in batch put, 
> row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
> org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out 
> of range for row lock on HRegion 
> BIZARCH_NS_PRODUCT.BIZTRACER_SPAN,90ffd783-b0a3-4f8a-81ef-0a7535fea197_0,1490066612493.463220cd8fad7254481595911e62d74d.,
>  startKey='90ffd783-b0a3-4f8a-81ef-0a7535fea197_0', 
> getEndKey()='917fc343-3331-47fa-907c-df83a6f302f7_0', 
> row='\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0'
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:3539)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3557)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2394)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2261)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2213)
> at 
> org.apache.phoenix.util.IndexUtil.writeLocalUpdates(IndexUtil.java:671)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:157)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:134)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The problem is caused by the ParallelWriterIndexCommitter.write method: at the
> following line 151, if {{allowLocalUpdates}} is true, it writes index
> mutations to the current data table region unconditionally, which is obviously
> inappropriate:
> {code:java}
>  150try {
>  151  if (allowLocalUpdates && env != null) {
>  152   try {
>  153   throwFailureIfDone();
>  154   
> IndexUtil.writeLocalUpdates(env.getRegion(), mutations, true);
>  155   return null;
>  156   } catch (IOException ignord) {
>  157   // when it's failed we fall back to the 
> standard & slow way
>  158   if (LOG.isDebugEnabled()) {
>  159   LOG.debug("indexRegion.batchMutate 
> failed and fall back to HTable.batch(). Got error="
>  160   + ignord);
>  161   }
>  162   }
>  163   }
> {code}
> If a data table has a global index table, then when we replay the WALs to the
> index table in the Indexer.postOpen method at the following
> line 691, where the {{allowLocalUpdates}} parameter is true, the {{updates}}
> parameter for the global index table would incorrectly be written to the
> current data table region:
> {code:java}
> 688// do the usual writer stuff, killing the server again, if we 
> can't manage to make the index
> 689// writes succeed again
> 690try {
> 691writer.writeAndKillYourselfOnFailure(updates, true);
> 692} catch (IOException e) {
> 693LOG.error("During WAL replay of outstanding index updates, 
> "
> 694+ "Exception is thrown instead of killing server 
> during index writing", e);
> 695}
> 696} finally {
> {code}
> However, the ParallelWriterIndexCommitter.write method in the master and other
> 4.x branches is correct, as shown by the following line 150 and line 151:
> {code:java}

[jira] [Commented] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applys local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread chenglei (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130824#comment-16130824
 ] 

chenglei commented on PHOENIX-4094:
---

[~jamestaylor], thank you for the review. The IT test in my patch can also be 
used for the master branch, so I uploaded another patch including just the IT 
test for master.

> ParallelWriterIndexCommitter incorrectly applys local updates to index tables 
> for 4.x-HBase-0.98
> 
>
> Key: PHOENIX-4094
> URL: https://issues.apache.org/jira/browse/PHOENIX-4094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: chenglei
>Assignee: chenglei
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4094_4.x-HBase-0.98_v1.patch, 
> PHOENIX-4094_v1.patch
>
>
> I used phoenix-4.x-HBase-0.98 in my HBase cluster. When I restarted my HBase
> cluster at a certain time, I noticed some RegionServers had plenty of
> {{WrongRegionException}}s like the following:
> {code:java}
> 2017-08-01 11:53:10,669 WARN  
> [rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
>  regionserver.HRegion: Failed getting lock in batch put, 
> row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
> org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out 
> of range for row lock on HRegion 
> BIZARCH_NS_PRODUCT.BIZTRACER_SPAN,90ffd783-b0a3-4f8a-81ef-0a7535fea197_0,1490066612493.463220cd8fad7254481595911e62d74d.,
>  startKey='90ffd783-b0a3-4f8a-81ef-0a7535fea197_0', 
> getEndKey()='917fc343-3331-47fa-907c-df83a6f302f7_0', 
> row='\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0'
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:3539)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3557)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2394)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2261)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2213)
> at 
> org.apache.phoenix.util.IndexUtil.writeLocalUpdates(IndexUtil.java:671)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:157)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:134)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The problem is caused by the ParallelWriterIndexCommitter.write method: at the
> following line 151, if {{allowLocalUpdates}} is true, it writes index
> mutations to the current data table region unconditionally, which is obviously
> inappropriate:
> {code:java}
>  150try {
>  151  if (allowLocalUpdates && env != null) {
>  152   try {
>  153   throwFailureIfDone();
>  154   
> IndexUtil.writeLocalUpdates(env.getRegion(), mutations, true);
>  155   return null;
>  156   } catch (IOException ignord) {
>  157   // when it's failed we fall back to the 
> standard & slow way
>  158   if (LOG.isDebugEnabled()) {
>  159   LOG.debug("indexRegion.batchMutate 
> failed and fall back to HTable.batch(). Got error="
>  160   + ignord);
>  161   }
>  162   }
>  163   }
> {code}
> If a data table has a global index table, then when we replay the WALs to the
> index table in the Indexer.postOpen method at the following
> line 691, where the {{allowLocalUpdates}} parameter is true, the {{updates}}
> parameter for the global index table would incorrectly be written to the
> current data table region:
> {code:java}
> 688// do the usual writer stuff, killing the server again, if we 
> can't manage to make the index
> 689// writes succeed again
> 690try {
> 691writer.writeAndKillYourselfOnFailure(updates, true);
> 692} catch (IOException e) {
> 693LOG.error("During WAL replay of outstanding index updates, 
> "
> 694+ "Exception is thrown instead of killing server 
> during index writing", e);
> 695}
> 696} finally {
> {code}
> However, 

[jira] [Updated] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applys local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread chenglei (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-4094:
--
Attachment: PHOENIX-4094_v1.patch

> ParallelWriterIndexCommitter incorrectly applys local updates to index tables 
> for 4.x-HBase-0.98
> 
>
> Key: PHOENIX-4094
> URL: https://issues.apache.org/jira/browse/PHOENIX-4094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: chenglei
>Assignee: chenglei
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4094_4.x-HBase-0.98_v1.patch, 
> PHOENIX-4094_v1.patch
>
>
> I used phoenix-4.x-HBase-0.98 in my HBase cluster. When I restarted my HBase
> cluster at a certain time, I noticed some RegionServers had plenty of
> {{WrongRegionException}}s like the following:
> {code:java}
> 2017-08-01 11:53:10,669 WARN  
> [rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
>  regionserver.HRegion: Failed getting lock in batch put, 
> row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
> org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out 
> of range for row lock on HRegion 
> BIZARCH_NS_PRODUCT.BIZTRACER_SPAN,90ffd783-b0a3-4f8a-81ef-0a7535fea197_0,1490066612493.463220cd8fad7254481595911e62d74d.,
>  startKey='90ffd783-b0a3-4f8a-81ef-0a7535fea197_0', 
> getEndKey()='917fc343-3331-47fa-907c-df83a6f302f7_0', 
> row='\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0'
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:3539)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3557)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2394)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2261)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2213)
> at 
> org.apache.phoenix.util.IndexUtil.writeLocalUpdates(IndexUtil.java:671)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:157)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:134)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The problem is caused by the ParallelWriterIndexCommitter.write method: at the
> following line 151, if {{allowLocalUpdates}} is true, it writes index
> mutations to the current data table region unconditionally, which is obviously
> inappropriate:
> {code:java}
>  150try {
>  151  if (allowLocalUpdates && env != null) {
>  152   try {
>  153   throwFailureIfDone();
>  154   
> IndexUtil.writeLocalUpdates(env.getRegion(), mutations, true);
>  155   return null;
>  156   } catch (IOException ignord) {
>  157   // when it's failed we fall back to the 
> standard & slow way
>  158   if (LOG.isDebugEnabled()) {
>  159   LOG.debug("indexRegion.batchMutate 
> failed and fall back to HTable.batch(). Got error="
>  160   + ignord);
>  161   }
>  162   }
>  163   }
> {code}
> If a data table has a global index table, then when we replay the WALs to the
> index table in the Indexer.postOpen method at the following
> line 691, where the {{allowLocalUpdates}} parameter is true, the {{updates}}
> parameter for the global index table would incorrectly be written to the
> current data table region:
> {code:java}
> 688// do the usual writer stuff, killing the server again, if we 
> can't manage to make the index
> 689// writes succeed again
> 690try {
> 691writer.writeAndKillYourselfOnFailure(updates, true);
> 692} catch (IOException e) {
> 693LOG.error("During WAL replay of outstanding index updates, 
> "
> 694+ "Exception is thrown instead of killing server 
> during index writing", e);
> 695}
> 696} finally {
> {code}
> However, the ParallelWriterIndexCommitter.write method in the master and other
> 4.x branches is correct, as shown by the following line 150 and line 151:
> {code:java}
> 147   try {
> 148   

[jira] [Commented] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applys local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130814#comment-16130814
 ] 

James Taylor commented on PHOENIX-4094:
---

+1. Thanks for the patch, [~comnetwork].

> ParallelWriterIndexCommitter incorrectly applys local updates to index tables 
> for 4.x-HBase-0.98
> 
>
> Key: PHOENIX-4094
> URL: https://issues.apache.org/jira/browse/PHOENIX-4094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: chenglei
>Assignee: chenglei
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4094_4.x-HBase-0.98_v1.patch
>
>
> I used phoenix-4.x-HBase-0.98 in my HBase cluster. When I restarted my HBase
> cluster at a certain time, I noticed some RegionServers had plenty of
> {{WrongRegionException}}s like the following:
> {code:java}
> 2017-08-01 11:53:10,669 WARN  
> [rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
>  regionserver.HRegion: Failed getting lock in batch put, 
> row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
> org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out 
> of range for row lock on HRegion 
> BIZARCH_NS_PRODUCT.BIZTRACER_SPAN,90ffd783-b0a3-4f8a-81ef-0a7535fea197_0,1490066612493.463220cd8fad7254481595911e62d74d.,
>  startKey='90ffd783-b0a3-4f8a-81ef-0a7535fea197_0', 
> getEndKey()='917fc343-3331-47fa-907c-df83a6f302f7_0', 
> row='\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0'
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:3539)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3557)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2394)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2261)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2213)
> at 
> org.apache.phoenix.util.IndexUtil.writeLocalUpdates(IndexUtil.java:671)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:157)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:134)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The problem is caused by the ParallelWriterIndexCommitter.write method: at the
> following line 151, if {{allowLocalUpdates}} is true, it writes index
> mutations to the current data table region unconditionally, which is obviously
> inappropriate:
> {code:java}
>  150try {
>  151  if (allowLocalUpdates && env != null) {
>  152   try {
>  153   throwFailureIfDone();
>  154   
> IndexUtil.writeLocalUpdates(env.getRegion(), mutations, true);
>  155   return null;
>  156   } catch (IOException ignord) {
>  157   // when it's failed we fall back to the 
> standard & slow way
>  158   if (LOG.isDebugEnabled()) {
>  159   LOG.debug("indexRegion.batchMutate 
> failed and fall back to HTable.batch(). Got error="
>  160   + ignord);
>  161   }
>  162   }
>  163   }
> {code}
> If a data table has a global index table, then when we replay the WALs to the
> index table in the Indexer.postOpen method at the following
> line 691, where the {{allowLocalUpdates}} parameter is true, the {{updates}}
> parameter for the global index table would incorrectly be written to the
> current data table region:
> {code:java}
> 688// do the usual writer stuff, killing the server again, if we 
> can't manage to make the index
> 689// writes succeed again
> 690try {
> 691writer.writeAndKillYourselfOnFailure(updates, true);
> 692} catch (IOException e) {
> 693LOG.error("During WAL replay of outstanding index updates, 
> "
> 694+ "Exception is thrown instead of killing server 
> during index writing", e);
> 695}
> 696} finally {
> {code}
> However, the ParallelWriterIndexCommitter.write method in the master and other
> 4.x branches is correct, as shown by the following line 150 and line 151:
> {code:java}
> 147   

[jira] [Commented] (PHOENIX-4088) SQLExceptionCode.java code beauty and typos

2017-08-17 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130807#comment-16130807
 ] 

Josh Elser commented on PHOENIX-4088:
-

Looks like the job got killed by something -- let me do a quick test locally 
(this is trivial enough of a change).

> SQLExceptionCode.java code beauty and typos
> ---
>
> Key: PHOENIX-4088
> URL: https://issues.apache.org/jira/browse/PHOENIX-4088
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Trivial
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4088.patch
>
>
> * Fix typos in log message strings
> * Fix typo in enum constant name introduced in PHOENIX-2862
> * Organize line breaks around the last enum constants like they are in the 
> top ones



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applys local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130798#comment-16130798
 ] 

Hadoop QA commented on PHOENIX-4094:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12882385/PHOENIX-4094_4.x-HBase-0.98_v1.patch
  against 4.x-HBase-0.98 branch at commit 
b13413614fef3cdb87233fd1543081e7198d685f.
  ATTACHMENT ID: 12882385

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1271//console

This message is automatically generated.

> ParallelWriterIndexCommitter incorrectly applys local updates to index tables 
> for 4.x-HBase-0.98
> 
>
> Key: PHOENIX-4094
> URL: https://issues.apache.org/jira/browse/PHOENIX-4094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: chenglei
>Assignee: chenglei
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4094_4.x-HBase-0.98_v1.patch
>
>
> I used phoenix-4.x-HBase-0.98 in my HBase cluster. When I restarted my HBase
> cluster at a certain time, I noticed some RegionServers had plenty of
> {{WrongRegionException}}s like the following:
> {code:java}
> 2017-08-01 11:53:10,669 WARN  
> [rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
>  regionserver.HRegion: Failed getting lock in batch put, 
> row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
> org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out 
> of range for row lock on HRegion 
> BIZARCH_NS_PRODUCT.BIZTRACER_SPAN,90ffd783-b0a3-4f8a-81ef-0a7535fea197_0,1490066612493.463220cd8fad7254481595911e62d74d.,
>  startKey='90ffd783-b0a3-4f8a-81ef-0a7535fea197_0', 
> getEndKey()='917fc343-3331-47fa-907c-df83a6f302f7_0', 
> row='\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0'
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:3539)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3557)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2394)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2261)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2213)
> at 
> org.apache.phoenix.util.IndexUtil.writeLocalUpdates(IndexUtil.java:671)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:157)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:134)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The problem is caused by the ParallelWriterIndexCommitter.write method: at the
> following line 151, if {{allowLocalUpdates}} is true, it writes index
> mutations to the current data table region unconditionally, which is obviously
> inappropriate:
> {code:java}
>  150try {
>  151  if (allowLocalUpdates && env != null) {
>  152   try {
>  153   throwFailureIfDone();
>  154   
> IndexUtil.writeLocalUpdates(env.getRegion(), mutations, true);
>  155   return null;
>  156   } catch (IOException ignord) {
>  157   // when it's failed we fall back to the 
> standard & slow way
>  158   if (LOG.isDebugEnabled()) {
>  159   LOG.debug("indexRegion.batchMutate 
> failed and fall back to HTable.batch(). Got error="
>  160   + ignord);
>  161   }
>  162   }
>  163   }
> {code}
> If a data table has a global index table, then when we replay the WALs to the
> index table in the Indexer.postOpen method at the following
> line 691, where the {{allowLocalUpdates}} parameter is true, 

[jira] [Updated] (PHOENIX-2048) change to_char() function to use HALF_UP rounding mode

2017-08-17 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-2048:
---
Attachment: (was: phoenix-2048.patch)

> change to_char() function to use HALF_UP rounding mode
> --
>
> Key: PHOENIX-2048
> URL: https://issues.apache.org/jira/browse/PHOENIX-2048
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: verify
>Reporter: Jonathan Leech
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-2048.patch
>
>
> The to_char() function uses the default rounding mode of Java's DecimalFormat,
> which is a strange one called HALF_EVEN: it rounds a '5' in the last
> position either up or down depending on the preceding digit.
> Change it to HALF_UP so it rounds the same way as the round() function does,
> or provide a way to override the behavior, e.g. globally, as a client
> config, or as an argument to the to_char() function.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-2048) change to_char() function to use HALF_UP rounding mode

2017-08-17 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-2048:
---
Attachment: PHOENIX-2048.patch

Whitespace errors fixed in [^PHOENIX-2048.patch].

> change to_char() function to use HALF_UP rounding mode
> --
>
> Key: PHOENIX-2048
> URL: https://issues.apache.org/jira/browse/PHOENIX-2048
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: verify
>Reporter: Jonathan Leech
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.12.0
>
> Attachments: phoenix-2048.patch, PHOENIX-2048.patch
>
>
> The to_char() function uses the default rounding mode of Java's DecimalFormat,
> which is a strange one called HALF_EVEN: it rounds a '5' in the last
> position either up or down depending on the preceding digit.
> Change it to HALF_UP so it rounds the same way as the round() function does,
> or provide a way to override the behavior, e.g. globally, as a client
> config, or as an argument to the to_char() function.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applys local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread chenglei (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-4094:
--
Attachment: (was: PHOENIX-4094-4.x-HBase-0.98_v1.patch)

> ParallelWriterIndexCommitter incorrectly applys local updates to index tables 
> for 4.x-HBase-0.98
> 
>
> Key: PHOENIX-4094
> URL: https://issues.apache.org/jira/browse/PHOENIX-4094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: chenglei
>Assignee: chenglei
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4094_4.x-HBase-0.98_v1.patch
>
>
> I used phoenix-4.x-HBase-0.98 in my HBase cluster. When I restarted my HBase
> cluster at a certain time, I noticed some RegionServers had plenty of
> {{WrongRegionException}}s like the following:
> {code:java}
> 2017-08-01 11:53:10,669 WARN  
> [rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
>  regionserver.HRegion: Failed getting lock in batch put, 
> row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
> org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out 
> of range for row lock on HRegion 
> BIZARCH_NS_PRODUCT.BIZTRACER_SPAN,90ffd783-b0a3-4f8a-81ef-0a7535fea197_0,1490066612493.463220cd8fad7254481595911e62d74d.,
>  startKey='90ffd783-b0a3-4f8a-81ef-0a7535fea197_0', 
> getEndKey()='917fc343-3331-47fa-907c-df83a6f302f7_0', 
> row='\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0'
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:3539)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3557)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2394)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2261)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2213)
> at 
> org.apache.phoenix.util.IndexUtil.writeLocalUpdates(IndexUtil.java:671)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:157)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:134)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The problem is caused by the ParallelWriterIndexCommitter.write method: at the
> following line 151, if {{allowLocalUpdates}} is true, it writes index
> mutations to the current data table region unconditionally, which is obviously
> inappropriate:
> {code:java}
>  150try {
>  151  if (allowLocalUpdates && env != null) {
>  152   try {
>  153   throwFailureIfDone();
>  154   
> IndexUtil.writeLocalUpdates(env.getRegion(), mutations, true);
>  155   return null;
>  156   } catch (IOException ignord) {
>  157   // when it's failed we fall back to the 
> standard & slow way
>  158   if (LOG.isDebugEnabled()) {
>  159   LOG.debug("indexRegion.batchMutate 
> failed and fall back to HTable.batch(). Got error="
>  160   + ignord);
>  161   }
>  162   }
>  163   }
> {code}
> If a data table has a global index table, then when we replay the WALs to the
> index table in the Indexer.postOpen method at the following
> line 691, where the {{allowLocalUpdates}} parameter is true, the {{updates}}
> parameter for the global index table would incorrectly be written to the
> current data table region:
> {code:java}
> 688// do the usual writer stuff, killing the server again, if we 
> can't manage to make the index
> 689// writes succeed again
> 690try {
> 691writer.writeAndKillYourselfOnFailure(updates, true);
> 692} catch (IOException e) {
> 693LOG.error("During WAL replay of outstanding index updates, 
> "
> 694+ "Exception is thrown instead of killing server 
> during index writing", e);
> 695}
> 696} finally {
> {code}
> However, the ParallelWriterIndexCommitter.write method in the master and other
> 4.x branches is correct, as shown by the following line 150 and line 151:
> {code:java}
> 147   try {
> 148   

[jira] [Updated] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applys local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread chenglei (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-4094:
--
Attachment: PHOENIX-4094_4.x-HBase-0.98_v1.patch

> ParallelWriterIndexCommitter incorrectly applys local updates to index tables 
> for 4.x-HBase-0.98
> 
>
> Key: PHOENIX-4094
> URL: https://issues.apache.org/jira/browse/PHOENIX-4094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: chenglei
>Assignee: chenglei
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4094_4.x-HBase-0.98_v1.patch
>
>
> I used phoenix-4.x-HBase-0.98 in my HBase cluster. When I restarted my HBase
> cluster at a certain time, I noticed some RegionServers had plenty of
> {{WrongRegionException}}s like the following:
> {code:java}
> 2017-08-01 11:53:10,669 WARN  
> [rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
>  regionserver.HRegion: Failed getting lock in batch put, 
> row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
> org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out 
> of range for row lock on HRegion 
> BIZARCH_NS_PRODUCT.BIZTRACER_SPAN,90ffd783-b0a3-4f8a-81ef-0a7535fea197_0,1490066612493.463220cd8fad7254481595911e62d74d.,
>  startKey='90ffd783-b0a3-4f8a-81ef-0a7535fea197_0', 
> getEndKey()='917fc343-3331-47fa-907c-df83a6f302f7_0', 
> row='\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0'
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:3539)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3557)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2394)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2261)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2213)
> at 
> org.apache.phoenix.util.IndexUtil.writeLocalUpdates(IndexUtil.java:671)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:157)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:134)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The problem is caused by the ParallelWriterIndexCommitter.write method: at the
> following line 151, if {{allowLocalUpdates}} is true, it writes index
> mutations to the current data table region unconditionally, which is obviously
> inappropriate:
> {code:java}
>  150try {
>  151  if (allowLocalUpdates && env != null) {
>  152   try {
>  153   throwFailureIfDone();
>  154   
> IndexUtil.writeLocalUpdates(env.getRegion(), mutations, true);
>  155   return null;
>  156   } catch (IOException ignord) {
>  157   // when it's failed we fall back to the 
> standard & slow way
>  158   if (LOG.isDebugEnabled()) {
>  159   LOG.debug("indexRegion.batchMutate 
> failed and fall back to HTable.batch(). Got error="
>  160   + ignord);
>  161   }
>  162   }
>  163   }
> {code}
> If a data table has a global index table, then when we replay the WALs to the
> index table in the Indexer.postOpen method at the following
> line 691, where the {{allowLocalUpdates}} parameter is true, the {{updates}}
> parameter for the global index table would incorrectly be written to the
> current data table region:
> {code:java}
> 688// do the usual writer stuff, killing the server again, if we 
> can't manage to make the index
> 689// writes succeed again
> 690try {
> 691writer.writeAndKillYourselfOnFailure(updates, true);
> 692} catch (IOException e) {
> 693LOG.error("During WAL replay of outstanding index updates, 
> "
> 694+ "Exception is thrown instead of killing server 
> during index writing", e);
> 695}
> 696} finally {
> {code}
> However, the ParallelWriterIndexCommitter.write method in the master and other
> 4.x branches is correct, as shown by the following line 150 and line 151:
> {code:java}
> 147   try {
> 148  

[jira] [Commented] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applys local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread chenglei (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130773#comment-16130773
 ] 

chenglei commented on PHOENIX-4094:
---

Uploaded my first patch; please help review it.

> ParallelWriterIndexCommitter incorrectly applys local updates to index tables 
> for 4.x-HBase-0.98
> 
>
> Key: PHOENIX-4094
> URL: https://issues.apache.org/jira/browse/PHOENIX-4094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: chenglei
>Assignee: chenglei
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4094-4.x-HBase-0.98_v1.patch
>
>
> I used phoenix-4.x-HBase-0.98 in my HBase cluster. When I restarted my HBase
> cluster at a certain time, I noticed some RegionServers had plenty of
> {{WrongRegionException}}s like the following:
> {code:java}
> 2017-08-01 11:53:10,669 WARN  
> [rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
>  regionserver.HRegion: Failed getting lock in batch put, 
> row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
> org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out 
> of range for row lock on HRegion 
> BIZARCH_NS_PRODUCT.BIZTRACER_SPAN,90ffd783-b0a3-4f8a-81ef-0a7535fea197_0,1490066612493.463220cd8fad7254481595911e62d74d.,
>  startKey='90ffd783-b0a3-4f8a-81ef-0a7535fea197_0', 
> getEndKey()='917fc343-3331-47fa-907c-df83a6f302f7_0', 
> row='\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0'
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:3539)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3557)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2394)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2261)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2213)
> at 
> org.apache.phoenix.util.IndexUtil.writeLocalUpdates(IndexUtil.java:671)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:157)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:134)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The problem is caused by the ParallelWriterIndexCommitter.write method: at the
> following line 151, if {{allowLocalUpdates}} is true, it writes index
> mutations to the current data table region unconditionally, which is obviously
> inappropriate:
> {code:java}
>  150try {
>  151  if (allowLocalUpdates && env != null) {
>  152   try {
>  153   throwFailureIfDone();
>  154   
> IndexUtil.writeLocalUpdates(env.getRegion(), mutations, true);
>  155   return null;
>  156   } catch (IOException ignord) {
>  157   // when it's failed we fall back to the 
> standard & slow way
>  158   if (LOG.isDebugEnabled()) {
>  159   LOG.debug("indexRegion.batchMutate 
> failed and fall back to HTable.batch(). Got error="
>  160   + ignord);
>  161   }
>  162   }
>  163   }
> {code}
> If a data table has a global index table, then when we replay the WALs to the
> index table in the Indexer.postOpen method at the following
> line 691, where the {{allowLocalUpdates}} parameter is true, the {{updates}}
> parameter for the global index table would incorrectly be written to the
> current data table region:
> {code:java}
> 688// do the usual writer stuff, killing the server again, if we 
> can't manage to make the index
> 689// writes succeed again
> 690try {
> 691writer.writeAndKillYourselfOnFailure(updates, true);
> 692} catch (IOException e) {
> 693LOG.error("During WAL replay of outstanding index updates, 
> "
> 694+ "Exception is thrown instead of killing server 
> during index writing", e);
> 695}
> 696} finally {
> {code}
> However, the ParallelWriterIndexCommitter.write method in the master and other
> 4.x branches is correct, as shown by the following line 150 and line 151:
> {code:java}
> 147 

[jira] [Updated] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applys local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread chenglei (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-4094:
--
Attachment: PHOENIX-4094-4.x-HBase-0.98_v1.patch

> ParallelWriterIndexCommitter incorrectly applys local updates to index tables 
> for 4.x-HBase-0.98
> 
>
> Key: PHOENIX-4094
> URL: https://issues.apache.org/jira/browse/PHOENIX-4094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: chenglei
>Assignee: chenglei
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4094-4.x-HBase-0.98_v1.patch
>
>
> I used phoenix-4.x-HBase-0.98 in my HBase cluster. When I restarted my HBase
> cluster at a certain time, I noticed some RegionServers had plenty of
> {{WrongRegionException}} errors like the following:
> {code:java}
> 2017-08-01 11:53:10,669 WARN  
> [rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
>  regionserver.HRegion: Failed getting lock in batch put, 
> row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
> org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out 
> of range for row lock on HRegion 
> BIZARCH_NS_PRODUCT.BIZTRACER_SPAN,90ffd783-b0a3-4f8a-81ef-0a7535fea197_0,1490066612493.463220cd8fad7254481595911e62d74d.,
>  startKey='90ffd783-b0a3-4f8a-81ef-0a7535fea197_0', 
> getEndKey()='917fc343-3331-47fa-907c-df83a6f302f7_0', 
> row='\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0'
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:3539)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3557)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2394)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2261)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2213)
> at 
> org.apache.phoenix.util.IndexUtil.writeLocalUpdates(IndexUtil.java:671)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:157)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:134)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The problem is caused by the ParallelWriterIndexCommitter.write method: at
> the following line 151, if {{allowLocalUpdates}} is true, it writes index
> mutations to the current data table region unconditionally, which is obviously
> inappropriate:
> {code:java}
>  150try {
>  151  if (allowLocalUpdates && env != null) {
>  152   try {
>  153   throwFailureIfDone();
>  154   
> IndexUtil.writeLocalUpdates(env.getRegion(), mutations, true);
>  155   return null;
>  156   } catch (IOException ignord) {
>  157   // when it's failed we fall back to the 
> standard & slow way
>  158   if (LOG.isDebugEnabled()) {
>  159   LOG.debug("indexRegion.batchMutate 
> failed and fall back to HTable.batch(). Got error="
>  160   + ignord);
>  161   }
>  162   }
>  163   }
> {code}
> If a data table has a global index table, then when we replay the WALs to the
> index table in the Indexer.postOpen method at the following
> line 691, where the {{allowLocalUpdates}} parameter is true, the {{updates}}
> for the global index table are incorrectly written to the
> current data table region:
> {code:java}
> 688// do the usual writer stuff, killing the server again, if we 
> can't manage to make the index
> 689// writes succeed again
> 690try {
> 691writer.writeAndKillYourselfOnFailure(updates, true);
> 692} catch (IOException e) {
> 693LOG.error("During WAL replay of outstanding index updates, 
> "
> 694+ "Exception is thrown instead of killing server 
> during index writing", e);
> 695}
> 696} finally {
> {code}
> However, the ParallelWriterIndexCommitter.write method in the master and other
> 4.x branches is correct, as shown in the following lines 150 and 151:
> {code:java}
> 147   try {
> 148  

[jira] [Commented] (PHOENIX-4088) SQLExceptionCode.java code beauty and typos

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130765#comment-16130765
 ] 

Hadoop QA commented on PHOENIX-4088:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12882146/PHOENIX-4088.patch
  against master branch at commit b13413614fef3cdb87233fd1543081e7198d685f.
  ATTACHMENT ID: 12882146

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
56 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
++ QueryServices.IS_NAMESPACE_MAPPING_ENABLED + " for enabling 
name space mapping isn't enabled."),
+INCONSISTENT_NAMESPACE_MAPPING_PROPERTIES(726, "43M10", " Inconsistent 
namespace mapping properties.."),

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1267//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1267//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1267//console

This message is automatically generated.

> SQLExceptionCode.java code beauty and typos
> ---
>
> Key: PHOENIX-4088
> URL: https://issues.apache.org/jira/browse/PHOENIX-4088
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Trivial
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4088.patch
>
>
> * Fix typos in log message strings
> * Fix typo in enum constant name introduced in PHOENIX-2862
> * Organize line breaks around the last enum constants like they are in the 
> top ones



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4088) SQLExceptionCode.java code beauty and typos

2017-08-17 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130753#comment-16130753
 ] 

Josh Elser commented on PHOENIX-4088:
-

Also, not sure why qa didn't run. Kicked it 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1267/console

> SQLExceptionCode.java code beauty and typos
> ---
>
> Key: PHOENIX-4088
> URL: https://issues.apache.org/jira/browse/PHOENIX-4088
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Trivial
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4088.patch
>
>
> * Fix typos in log message strings
> * Fix typo in enum constant name introduced in PHOENIX-2862
> * Organize line breaks around the last enum constants like they are in the 
> top ones



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (PHOENIX-2370) ResultSetMetaData.getColumnDisplaySize() returns bad value for varchar and varbinary columns

2017-08-17 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak reassigned PHOENIX-2370:
--

Assignee: Csaba Skrabak

> ResultSetMetaData.getColumnDisplaySize() returns bad value for varchar and 
> varbinary columns
> 
>
> Key: PHOENIX-2370
> URL: https://issues.apache.org/jira/browse/PHOENIX-2370
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.0
> Environment: Linux lnxx64r6 2.6.32-131.0.15.el6.x86_64 #1 SMP Tue May 
> 10 15:42:40 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Sergio Lob
>Assignee: Csaba Skrabak
>  Labels: newbie, verify
> Fix For: 4.12.0
>
> Attachments: PHOENIX-2370.patch
>
>
> ResultSetMetaData.getColumnDisplaySize() returns bad values for varchar and 
> varbinary columns. Specifically, for the following table:
> CREATE TABLE SERGIO (I INTEGER, V10 VARCHAR(10),
> VHUGE VARCHAR(2147483647), V VARCHAR, VB10 VARBINARY(10),
> VBHUGE VARBINARY(2147483647), VB VARBINARY);
> 1. getColumnDisplaySize() returns 20 for all varbinary columns, no matter the 
> defined size. This should return the max possible size of the column, so:
>  getColumnDisplaySize() should return 10 for column VB10,
>  getColumnDisplaySize() should return 2147483647 for column VBHUGE,
>  getColumnDisplaySize() should return 2147483647 for column VB, assuming that 
> a column defined with no size should default to the maximum size.
> 2. getColumnDisplaySize() returns 40 for all varchar columns that are not 
> defined with a size, like in column V in the above CREATE TABLE.  I would 
> think that a VARCHAR column defined with no size parameter should default to 
> the maximum size possible, not to a random number like 40.
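
A minimal JDBC sketch of how the reported values can be observed; the connection
URL is a placeholder, and the SERGIO table from the description is assumed to
already exist:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;

public class DisplaySizeCheck {
    public static void main(String[] args) throws SQLException {
        // Hypothetical Phoenix connection URL; adjust for the actual cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT V10, VHUGE, V, VB10, VBHUGE, VB FROM SERGIO")) {
            ResultSetMetaData md = rs.getMetaData();
            for (int i = 1; i <= md.getColumnCount(); i++) {
                // Per the report: 20 for every VARBINARY column and 40 for the
                // unsized VARCHAR column, instead of the declared/maximum sizes.
                System.out.println(md.getColumnName(i) + " -> "
                        + md.getColumnDisplaySize(i));
            }
        }
    }
}
{code}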



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (PHOENIX-2048) change to_char() function to use HALF_UP rounding mode

2017-08-17 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak reassigned PHOENIX-2048:
--

Assignee: Csaba Skrabak

> change to_char() function to use HALF_UP rounding mode
> --
>
> Key: PHOENIX-2048
> URL: https://issues.apache.org/jira/browse/PHOENIX-2048
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: verify
>Reporter: Jonathan Leech
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.12.0
>
> Attachments: phoenix-2048.patch
>
>
> The to_char() function uses the default rounding mode of Java's DecimalFormat,
> HALF_EVEN, which rounds a '5' in the last
> position either up or down depending on the preceding digit.
> Change it to HALF_UP so it rounds the same way the round() function does,
> or provide a way to override the behavior, e.g. globally, as a client
> config, or as an argument to the to_char() function.
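
A short, self-contained demonstration of the difference between the two rounding
modes with java.text.DecimalFormat; the inputs are chosen to be exactly
representable in binary so the trailing '5' is genuine:

{code:java}
import java.math.RoundingMode;
import java.text.DecimalFormat;

public class RoundingModeDemo {
    public static void main(String[] args) {
        DecimalFormat df = new DecimalFormat("0.0");
        // Default HALF_EVEN: a trailing 5 rounds toward the even neighbor.
        System.out.println(df.format(2.25)); // prints 2.2
        System.out.println(df.format(2.75)); // prints 2.8

        df.setRoundingMode(RoundingMode.HALF_UP);
        // HALF_UP: a trailing 5 always rounds away from zero, like round().
        System.out.println(df.format(2.25)); // prints 2.3
        System.out.println(df.format(2.75)); // prints 2.8
    }
}
{code}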



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4088) SQLExceptionCode.java code beauty and typos

2017-08-17 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130746#comment-16130746
 ] 

Csaba Skrabak commented on PHOENIX-4088:


[~elserj], thanks.

> SQLExceptionCode.java code beauty and typos
> ---
>
> Key: PHOENIX-4088
> URL: https://issues.apache.org/jira/browse/PHOENIX-4088
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Trivial
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4088.patch
>
>
> * Fix typos in log message strings
> * Fix typo in enum constant name introduced in PHOENIX-2862
> * Organize line breaks around the last enum constants like they are in the 
> top ones



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applys local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread chenglei (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130728#comment-16130728
 ] 

chenglei commented on PHOENIX-4094:
---

[~jamestaylor], just wait a while; I am running all the IT tests on my local
machine.

> ParallelWriterIndexCommitter incorrectly applys local updates to index tables 
> for 4.x-HBase-0.98
> 
>
> Key: PHOENIX-4094
> URL: https://issues.apache.org/jira/browse/PHOENIX-4094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: chenglei
>Assignee: chenglei
> Fix For: 4.12.0
>
>
> I used phoenix-4.x-HBase-0.98 in my HBase cluster. When I restarted my HBase
> cluster at a certain time, I noticed some RegionServers had plenty of
> {{WrongRegionException}} errors like the following:
> {code:java}
> 2017-08-01 11:53:10,669 WARN  
> [rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
>  regionserver.HRegion: Failed getting lock in batch put, 
> row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
> org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out 
> of range for row lock on HRegion 
> BIZARCH_NS_PRODUCT.BIZTRACER_SPAN,90ffd783-b0a3-4f8a-81ef-0a7535fea197_0,1490066612493.463220cd8fad7254481595911e62d74d.,
>  startKey='90ffd783-b0a3-4f8a-81ef-0a7535fea197_0', 
> getEndKey()='917fc343-3331-47fa-907c-df83a6f302f7_0', 
> row='\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0'
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:3539)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3557)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2394)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2261)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2213)
> at 
> org.apache.phoenix.util.IndexUtil.writeLocalUpdates(IndexUtil.java:671)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:157)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:134)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The problem is caused by the ParallelWriterIndexCommitter.write method: at
> the following line 151, if {{allowLocalUpdates}} is true, it writes index
> mutations to the current data table region unconditionally, which is obviously
> inappropriate:
> {code:java}
>  150try {
>  151  if (allowLocalUpdates && env != null) {
>  152   try {
>  153   throwFailureIfDone();
>  154   
> IndexUtil.writeLocalUpdates(env.getRegion(), mutations, true);
>  155   return null;
>  156   } catch (IOException ignord) {
>  157   // when it's failed we fall back to the 
> standard & slow way
>  158   if (LOG.isDebugEnabled()) {
>  159   LOG.debug("indexRegion.batchMutate 
> failed and fall back to HTable.batch(). Got error="
>  160   + ignord);
>  161   }
>  162   }
>  163   }
> {code}
> If a data table has a global index table, then when we replay the WALs to the
> index table in the Indexer.postOpen method at the following
> line 691, where the {{allowLocalUpdates}} parameter is true, the {{updates}}
> for the global index table are incorrectly written to the
> current data table region:
> {code:java}
> 688// do the usual writer stuff, killing the server again, if we 
> can't manage to make the index
> 689// writes succeed again
> 690try {
> 691writer.writeAndKillYourselfOnFailure(updates, true);
> 692} catch (IOException e) {
> 693LOG.error("During WAL replay of outstanding index updates, 
> "
> 694+ "Exception is thrown instead of killing server 
> during index writing", e);
> 695}
> 696} finally {
> {code}
> However, the ParallelWriterIndexCommitter.write method in the master and other
> 4.x branches is correct, as shown in the following lines 150 and 151:
> {code:java}
> 147   try {
> 148 

[jira] [Comment Edited] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applys local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread chenglei (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130667#comment-16130667
 ] 

chenglei edited comment on PHOENIX-4094 at 8/17/17 3:53 PM:


I wrote an IT test for Indexer.preWALRestore/postOpen to reproduce this issue in
my patch.


was (Author: comnetwork):
I wrote a IT test to reproduce this issue in my patch.

> ParallelWriterIndexCommitter incorrectly applys local updates to index tables 
> for 4.x-HBase-0.98
> 
>
> Key: PHOENIX-4094
> URL: https://issues.apache.org/jira/browse/PHOENIX-4094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: chenglei
>Assignee: chenglei
> Fix For: 4.12.0
>
>
> I used phoenix-4.x-HBase-0.98 in my HBase cluster. When I restarted my HBase
> cluster at a certain time, I noticed some RegionServers had plenty of
> {{WrongRegionException}} errors like the following:
> {code:java}
> 2017-08-01 11:53:10,669 WARN  
> [rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
>  regionserver.HRegion: Failed getting lock in batch put, 
> row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
> org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out 
> of range for row lock on HRegion 
> BIZARCH_NS_PRODUCT.BIZTRACER_SPAN,90ffd783-b0a3-4f8a-81ef-0a7535fea197_0,1490066612493.463220cd8fad7254481595911e62d74d.,
>  startKey='90ffd783-b0a3-4f8a-81ef-0a7535fea197_0', 
> getEndKey()='917fc343-3331-47fa-907c-df83a6f302f7_0', 
> row='\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0'
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:3539)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3557)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2394)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2261)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2213)
> at 
> org.apache.phoenix.util.IndexUtil.writeLocalUpdates(IndexUtil.java:671)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:157)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:134)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The problem is caused by the ParallelWriterIndexCommitter.write method: at
> the following line 151, if {{allowLocalUpdates}} is true, it writes index
> mutations to the current data table region unconditionally, which is obviously
> inappropriate:
> {code:java}
>  150try {
>  151  if (allowLocalUpdates && env != null) {
>  152   try {
>  153   throwFailureIfDone();
>  154   
> IndexUtil.writeLocalUpdates(env.getRegion(), mutations, true);
>  155   return null;
>  156   } catch (IOException ignord) {
>  157   // when it's failed we fall back to the 
> standard & slow way
>  158   if (LOG.isDebugEnabled()) {
>  159   LOG.debug("indexRegion.batchMutate 
> failed and fall back to HTable.batch(). Got error="
>  160   + ignord);
>  161   }
>  162   }
>  163   }
> {code}
> If a data table has a global index table, then when we replay the WALs to the
> index table in the Indexer.postOpen method at the following
> line 691, where the {{allowLocalUpdates}} parameter is true, the {{updates}}
> for the global index table are incorrectly written to the
> current data table region:
> {code:java}
> 688// do the usual writer stuff, killing the server again, if we 
> can't manage to make the index
> 689// writes succeed again
> 690try {
> 691writer.writeAndKillYourselfOnFailure(updates, true);
> 692} catch (IOException e) {
> 693LOG.error("During WAL replay of outstanding index updates, 
> "
> 694+ "Exception is thrown instead of killing server 
> during index writing", e);
> 695}
> 696} finally {
> {code}
> However, the ParallelWriterIndexCommitter.write method in the 

[jira] [Commented] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applys local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130691#comment-16130691
 ] 

James Taylor commented on PHOENIX-4094:
---

Please put together a patch, [~comnetwork]. FYI, [~apurtell] - we should check 
this and include this patch in our fork.

> ParallelWriterIndexCommitter incorrectly applys local updates to index tables 
> for 4.x-HBase-0.98
> 
>
> Key: PHOENIX-4094
> URL: https://issues.apache.org/jira/browse/PHOENIX-4094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: chenglei
>Assignee: chenglei
> Fix For: 4.12.0
>
>
> I used phoenix-4.x-HBase-0.98 in my HBase cluster. When I restarted my HBase
> cluster at a certain time, I noticed some RegionServers had plenty of
> {{WrongRegionException}} errors like the following:
> {code:java}
> 2017-08-01 11:53:10,669 WARN  
> [rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
>  regionserver.HRegion: Failed getting lock in batch put, 
> row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
> org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out 
> of range for row lock on HRegion 
> BIZARCH_NS_PRODUCT.BIZTRACER_SPAN,90ffd783-b0a3-4f8a-81ef-0a7535fea197_0,1490066612493.463220cd8fad7254481595911e62d74d.,
>  startKey='90ffd783-b0a3-4f8a-81ef-0a7535fea197_0', 
> getEndKey()='917fc343-3331-47fa-907c-df83a6f302f7_0', 
> row='\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0'
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:3539)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3557)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2394)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2261)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2213)
> at 
> org.apache.phoenix.util.IndexUtil.writeLocalUpdates(IndexUtil.java:671)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:157)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:134)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The problem is caused by the ParallelWriterIndexCommitter.write method: at
> the following line 151, if {{allowLocalUpdates}} is true, it writes index
> mutations to the current data table region unconditionally, which is obviously
> inappropriate:
> {code:java}
>  150try {
>  151  if (allowLocalUpdates && env != null) {
>  152   try {
>  153   throwFailureIfDone();
>  154   
> IndexUtil.writeLocalUpdates(env.getRegion(), mutations, true);
>  155   return null;
>  156   } catch (IOException ignord) {
>  157   // when it's failed we fall back to the 
> standard & slow way
>  158   if (LOG.isDebugEnabled()) {
>  159   LOG.debug("indexRegion.batchMutate 
> failed and fall back to HTable.batch(). Got error="
>  160   + ignord);
>  161   }
>  162   }
>  163   }
> {code}
> If a data table has a global index table, then when we replay the WALs to the
> index table in the Indexer.postOpen method at the following
> line 691, where the {{allowLocalUpdates}} parameter is true, the {{updates}}
> for the global index table are incorrectly written to the
> current data table region:
> {code:java}
> 688// do the usual writer stuff, killing the server again, if we 
> can't manage to make the index
> 689// writes succeed again
> 690try {
> 691writer.writeAndKillYourselfOnFailure(updates, true);
> 692} catch (IOException e) {
> 693LOG.error("During WAL replay of outstanding index updates, 
> "
> 694+ "Exception is thrown instead of killing server 
> during index writing", e);
> 695}
> 696} finally {
> {code}
> However, the ParallelWriterIndexCommitter.write method in the master and other
> 4.x branches is correct, as shown in the following lines 150 and 151:
> 

[jira] [Assigned] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applys local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-4094:
-

Assignee: chenglei

> ParallelWriterIndexCommitter incorrectly applys local updates to index tables 
> for 4.x-HBase-0.98
> 
>
> Key: PHOENIX-4094
> URL: https://issues.apache.org/jira/browse/PHOENIX-4094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: chenglei
>Assignee: chenglei
> Fix For: 4.12.0
>
>
> I used phoenix-4.x-HBase-0.98 in my HBase cluster. When I restarted my HBase
> cluster at a certain time, I noticed some RegionServers had plenty of
> {{WrongRegionException}} errors like the following:
> {code:java}
> 2017-08-01 11:53:10,669 WARN  
> [rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
>  regionserver.HRegion: Failed getting lock in batch put, 
> row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
> org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out 
> of range for row lock on HRegion 
> BIZARCH_NS_PRODUCT.BIZTRACER_SPAN,90ffd783-b0a3-4f8a-81ef-0a7535fea197_0,1490066612493.463220cd8fad7254481595911e62d74d.,
>  startKey='90ffd783-b0a3-4f8a-81ef-0a7535fea197_0', 
> getEndKey()='917fc343-3331-47fa-907c-df83a6f302f7_0', 
> row='\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0'
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:3539)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3557)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2394)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2261)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2213)
> at 
> org.apache.phoenix.util.IndexUtil.writeLocalUpdates(IndexUtil.java:671)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:157)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:134)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The problem is caused by the ParallelWriterIndexCommitter.write method: at
> the following line 151, if {{allowLocalUpdates}} is true, it writes index
> mutations to the current data table region unconditionally, which is obviously
> inappropriate:
> {code:java}
>  150try {
>  151  if (allowLocalUpdates && env != null) {
>  152   try {
>  153   throwFailureIfDone();
>  154   
> IndexUtil.writeLocalUpdates(env.getRegion(), mutations, true);
>  155   return null;
>  156   } catch (IOException ignord) {
>  157   // when it's failed we fall back to the 
> standard & slow way
>  158   if (LOG.isDebugEnabled()) {
>  159   LOG.debug("indexRegion.batchMutate 
> failed and fall back to HTable.batch(). Got error="
>  160   + ignord);
>  161   }
>  162   }
>  163   }
> {code}
> If a data table has a global index table, then when we replay the WALs to the
> index table in the Indexer.postOpen method at the following
> line 691, where the {{allowLocalUpdates}} parameter is true, the {{updates}}
> for the global index table are incorrectly written to the
> current data table region:
> {code:java}
> 688// do the usual writer stuff, killing the server again, if we 
> can't manage to make the index
> 689// writes succeed again
> 690try {
> 691writer.writeAndKillYourselfOnFailure(updates, true);
> 692} catch (IOException e) {
> 693LOG.error("During WAL replay of outstanding index updates, 
> "
> 694+ "Exception is thrown instead of killing server 
> during index writing", e);
> 695}
> 696} finally {
> {code}
> However, the ParallelWriterIndexCommitter.write method in the master and other
> 4.x branches is correct, as shown in the following lines 150 and 151:
> {code:java}
> 147   try {
> 148if (allowLocalUpdates
> 149&& env != 

[jira] [Updated] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applys local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4094:
--
Fix Version/s: 4.12.0

> ParallelWriterIndexCommitter incorrectly applys local updates to index tables 
> for 4.x-HBase-0.98
> 
>
> Key: PHOENIX-4094
> URL: https://issues.apache.org/jira/browse/PHOENIX-4094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: chenglei
> Fix For: 4.12.0
>
>
> I used phoenix-4.x-HBase-0.98 in my HBase cluster. When I restarted my HBase
> cluster at a certain time, I noticed some RegionServers had plenty of
> {{WrongRegionException}} errors like the following:
> {code:java}
> 2017-08-01 11:53:10,669 WARN  
> [rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
>  regionserver.HRegion: Failed getting lock in batch put, 
> row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
> org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out 
> of range for row lock on HRegion 
> BIZARCH_NS_PRODUCT.BIZTRACER_SPAN,90ffd783-b0a3-4f8a-81ef-0a7535fea197_0,1490066612493.463220cd8fad7254481595911e62d74d.,
>  startKey='90ffd783-b0a3-4f8a-81ef-0a7535fea197_0', 
> getEndKey()='917fc343-3331-47fa-907c-df83a6f302f7_0', 
> row='\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0'
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:3539)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3557)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2394)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2261)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2213)
> at 
> org.apache.phoenix.util.IndexUtil.writeLocalUpdates(IndexUtil.java:671)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:157)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:134)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The problem is caused by the ParallelWriterIndexCommitter.write method: at
> the following line 151, if {{allowLocalUpdates}} is true, it writes index
> mutations to the current data table region unconditionally, which is obviously
> inappropriate:
> {code:java}
>  150try {
>  151  if (allowLocalUpdates && env != null) {
>  152   try {
>  153   throwFailureIfDone();
>  154   
> IndexUtil.writeLocalUpdates(env.getRegion(), mutations, true);
>  155   return null;
>  156   } catch (IOException ignord) {
>  157   // when it's failed we fall back to the 
> standard & slow way
>  158   if (LOG.isDebugEnabled()) {
>  159   LOG.debug("indexRegion.batchMutate 
> failed and fall back to HTable.batch(). Got error="
>  160   + ignord);
>  161   }
>  162   }
>  163   }
> {code}
> If a data table has a global index table, then when we replay the WALs to the
> index table in the Indexer.postOpen method at the following
> line 691, where the {{allowLocalUpdates}} parameter is true, the {{updates}}
> for the global index table are incorrectly written to the
> current data table region:
> {code:java}
> 688// do the usual writer stuff, killing the server again, if we 
> can't manage to make the index
> 689// writes succeed again
> 690try {
> 691writer.writeAndKillYourselfOnFailure(updates, true);
> 692} catch (IOException e) {
> 693LOG.error("During WAL replay of outstanding index updates, 
> "
> 694+ "Exception is thrown instead of killing server 
> during index writing", e);
> 695}
> 696} finally {
> {code}
> However, the ParallelWriterIndexCommitter.write method in the master and other
> 4.x branches is correct, as shown in the following lines 150 and 151:
> {code:java}
> 147   try {
> 148if (allowLocalUpdates
> 149&& env != null
> 150

[jira] [Updated] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applys local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread chenglei (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-4094:
--
Description: 
I used phoenix-4.x-HBase-0.98 in my HBase cluster. When I restarted my HBase
cluster at a certain time, I noticed some RegionServers had plenty of
{{WrongRegionException}} errors like the following:

{code:java}
2017-08-01 11:53:10,669 WARN  
[rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
 regionserver.HRegion: Failed getting lock in batch put, 
row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out of 
range for row lock on HRegion 
BIZARCH_NS_PRODUCT.BIZTRACER_SPAN,90ffd783-b0a3-4f8a-81ef-0a7535fea197_0,1490066612493.463220cd8fad7254481595911e62d74d.,
 startKey='90ffd783-b0a3-4f8a-81ef-0a7535fea197_0', 
getEndKey()='917fc343-3331-47fa-907c-df83a6f302f7_0', 
row='\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0'
at 
org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:3539)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3557)
at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2394)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2261)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2213)
at 
org.apache.phoenix.util.IndexUtil.writeLocalUpdates(IndexUtil.java:671)
at 
org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:157)
at 
org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:134)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}

The problem is caused by the ParallelWriterIndexCommitter.write method: at
the following line 151, if {{allowLocalUpdates}} is true, it writes index
mutations to the current data table region unconditionally, which is obviously
inappropriate:

{code:java}
 150try {
 151  if (allowLocalUpdates && env != null) {
 152   try {
 153   throwFailureIfDone();
 154   IndexUtil.writeLocalUpdates(env.getRegion(), 
mutations, true);
 155   return null;
 156   } catch (IOException ignord) {
 157   // when it's failed we fall back to the 
standard & slow way
 158   if (LOG.isDebugEnabled()) {
 159   LOG.debug("indexRegion.batchMutate 
failed and fall back to HTable.batch(). Got error="
 160   + ignord);
 161   }
 162   }
 163   }
{code}

If a data table has a global index table, then when we replay the WALs to the
index table in the Indexer.postOpen method at the following
line 691, where the {{allowLocalUpdates}} parameter is true, the {{updates}}
for the global index table are incorrectly written to the
current data table region:

{code:java}
688// do the usual writer stuff, killing the server again, if we can't 
manage to make the index
689// writes succeed again
690try {
691writer.writeAndKillYourselfOnFailure(updates, true);
692} catch (IOException e) {
693LOG.error("During WAL replay of outstanding index updates, "
694+ "Exception is thrown instead of killing server 
during index writing", e);
695}
696} finally {
{code}

However, the ParallelWriterIndexCommitter.write method in the master and other 4.x
branches is correct, as shown in the following lines 150 and 151:
{code:java}
147   try {
148if (allowLocalUpdates
149&& env != null
150&& tableReference.getTableName().equals(
151
env.getRegion().getTableDesc().getNameAsString())) {
152try {
153throwFailureIfDone();
154IndexUtil.writeLocalUpdates(env.getRegion(), 
mutations, true);
155return null;
156} catch (IOException ignord) {
157// when it's failed we fall back to the 
standard & slow way
158if (LOG.isDebugEnabled()) {
159

[jira] [Comment Edited] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applys local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread chenglei (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130667#comment-16130667
 ] 

chenglei edited comment on PHOENIX-4094 at 8/17/17 3:19 PM:


I wrote an IT test to reproduce this issue in my patch.


was (Author: comnetwork):
I write a IT test to reproduce this issue in my patch.

> ParallelWriterIndexCommitter incorrectly applys local updates to index tables 
> for 4.x-HBase-0.98
> 
>
> Key: PHOENIX-4094
> URL: https://issues.apache.org/jira/browse/PHOENIX-4094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: chenglei
>
> I used phoenix-4.x-HBase-0.98 in my HBase cluster. When I restarted my HBase
> cluster at a certain time, I noticed some RegionServers had plenty of
> {{WrongRegionException}} errors like the following:
> {code:java}
> 2017-08-01 11:53:10,669 WARN  
> [rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
>  regionserver.HRegion: Failed getting lock in batch put, 
> row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
> org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out 
> of range for row lock on HRegion 
> BIZARCH_NS_PRODUCT.BIZTRACER_SPAN,90ffd783-b0a3-4f8a-81ef-0a7535fea197_0,1490066612493.463220cd8fad7254481595911e62d74d.,
>  startKey='90ffd783-b0a3-4f8a-81ef-0a7535fea197_0', 
> getEndKey()='917fc343-3331-47fa-907c-df83a6f302f7_0', 
> row='\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0'
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:3539)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3557)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2394)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2261)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2213)
> at 
> org.apache.phoenix.util.IndexUtil.writeLocalUpdates(IndexUtil.java:671)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:157)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:134)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The problem is caused by the ParallelWriterIndexCommitter.write method: at
> the following line 151, if {{allowLocalUpdates}} is true, it writes index
> mutations to the current data table region unconditionally, which is obviously
> inappropriate:
> {code:java}
>  150try {
>  151  if (allowLocalUpdates && env != null) {
>  152   try {
>  153   throwFailureIfDone();
>  154   
> IndexUtil.writeLocalUpdates(env.getRegion(), mutations, true);
>  155   return null;
>  156   } catch (IOException ignord) {
>  157   // when it's failed we fall back to the 
> standard & slow way
>  158   if (LOG.isDebugEnabled()) {
>  159   LOG.debug("indexRegion.batchMutate 
> failed and fall back to HTable.batch(). Got error="
>  160   + ignord);
>  161   }
>  162   }
>  163   }
> {code}
> When a data table has a global index table and we replay the WALs to the
> index table in the Indexer.postOpen method at the following
> line 691, where the {{allowLocalUpdates}} parameter is true, the {{updates}}
> for the global index table are incorrectly written to the
> current data table region:
> {code:java}
> 688// do the usual writer stuff, killing the server again, if we 
> can't manage to make the index
> 689// writes succeed again
> 690try {
> 691writer.writeAndKillYourselfOnFailure(updates, true);
> 692} catch (IOException e) {
> 693LOG.error("During WAL replay of outstanding index updates, 
> "
> 694+ "Exception is thrown instead of killing server 
> during index writing", e);
> 695}
> 696} finally {
> {code}
> However, the ParallelWriterIndexCommitter.write method in the master and other
> 4.x branches is correct, as shown in the following lines 150 and 151:
> {code:java}
> 

[jira] [Commented] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applys local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread chenglei (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130667#comment-16130667
 ] 

chenglei commented on PHOENIX-4094:
---

I wrote an IT test to reproduce this issue in my patch.

> ParallelWriterIndexCommitter incorrectly applys local updates to index tables 
> for 4.x-HBase-0.98
> 
>
> Key: PHOENIX-4094
> URL: https://issues.apache.org/jira/browse/PHOENIX-4094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: chenglei
>
> I used phoenix-4.x-HBase-0.98 in my HBase cluster. When I restarted my HBase
> cluster at a certain time, I noticed some RegionServers had plenty of
> {{WrongRegionException}} errors like the following:
> {code:java}
> 2017-08-01 11:53:10,669 WARN  
> [rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
>  regionserver.HRegion: Failed getting lock in batch put, 
> row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
> org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out 
> of range for row lock on HRegion 
> BIZARCH_NS_PRODUCT.BIZTRACER_SPAN,90ffd783-b0a3-4f8a-81ef-0a7535fea197_0,1490066612493.463220cd8fad7254481595911e62d74d.,
>  startKey='90ffd783-b0a3-4f8a-81ef-0a7535fea197_0', 
> getEndKey()='917fc343-3331-47fa-907c-df83a6f302f7_0', 
> row='\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0'
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:3539)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3557)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2394)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2261)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2213)
> at 
> org.apache.phoenix.util.IndexUtil.writeLocalUpdates(IndexUtil.java:671)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:157)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:134)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The problem is caused by the ParallelWriterIndexCommitter.write method: at
> the following line 151, if {{allowLocalUpdates}} is true, it writes index
> mutations to the current data table region unconditionally, which is obviously
> inappropriate:
> {code:java}
>  150try {
>  151  if (allowLocalUpdates && env != null) {
>  152   try {
>  153   throwFailureIfDone();
>  154   
> IndexUtil.writeLocalUpdates(env.getRegion(), mutations, true);
>  155   return null;
>  156   } catch (IOException ignord) {
>  157   // when it's failed we fall back to the 
> standard & slow way
>  158   if (LOG.isDebugEnabled()) {
>  159   LOG.debug("indexRegion.batchMutate 
> failed and fall back to HTable.batch(). Got error="
>  160   + ignord);
>  161   }
>  162   }
>  163   }
> {code}
> When a data table has a global index table and we replay the WALs to the
> index table in the Indexer.postOpen method at the following
> line 691, where the {{allowLocalUpdates}} parameter is true, the {{updates}}
> for the global index table are incorrectly written to the
> current data table region:
> {code:java}
> 688// do the usual writer stuff, killing the server again, if we 
> can't manage to make the index
> 689// writes succeed again
> 690try {
> 691writer.writeAndKillYourselfOnFailure(updates, true);
> 692} catch (IOException e) {
> 693LOG.error("During WAL replay of outstanding index updates, 
> "
> 694+ "Exception is thrown instead of killing server 
> during index writing", e);
> 695}
> 696} finally {
> {code}
> However, the ParallelWriterIndexCommitter.write method in the master and other
> 4.x branches is correct, as shown in the following lines 150 and 151:
> {code:java}
> 147   try {
> 148if (allowLocalUpdates
> 149&& env != null

[jira] [Updated] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applys local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread chenglei (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-4094:
--
Description: 
I used phoenix-4.x-HBase-0.98 in my HBase cluster. When I restarted my HBase
cluster at a certain time, I noticed some RegionServers had plenty of
{{WrongRegionException}} errors like the following:

{code:java}
2017-08-01 11:53:10,669 WARN  
[rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
 regionserver.HRegion: Failed getting lock in batch put, 
row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out of 
range for row lock on HRegion 
BIZARCH_NS_PRODUCT.BIZTRACER_SPAN,90ffd783-b0a3-4f8a-81ef-0a7535fea197_0,1490066612493.463220cd8fad7254481595911e62d74d.,
 startKey='90ffd783-b0a3-4f8a-81ef-0a7535fea197_0', 
getEndKey()='917fc343-3331-47fa-907c-df83a6f302f7_0', 
row='\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0'
at 
org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:3539)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3557)
at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2394)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2261)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2213)
at 
org.apache.phoenix.util.IndexUtil.writeLocalUpdates(IndexUtil.java:671)
at 
org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:157)
at 
org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:134)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}

The problem is caused by the ParallelWriterIndexCommitter.write method: at
the following line 151, if {{allowLocalUpdates}} is true, it writes index
mutations to the current data table region unconditionally, which is obviously
inappropriate:

{code:java}
 150try {
 151  if (allowLocalUpdates && env != null) {
 152   try {
 153   throwFailureIfDone();
 154   IndexUtil.writeLocalUpdates(env.getRegion(), 
mutations, true);
 155   return null;
 156   } catch (IOException ignord) {
 157   // when it's failed we fall back to the 
standard & slow way
 158   if (LOG.isDebugEnabled()) {
 159   LOG.debug("indexRegion.batchMutate 
failed and fall back to HTable.batch(). Got error="
 160   + ignord);
 161   }
 162   }
 163   }
{code}

When a data table has a global index table and we replay the WALs to the
index table in the Indexer.postOpen method at the following
line 691, where the {{allowLocalUpdates}} parameter is true, the {{updates}}
for the global index table are incorrectly written to the
current data table region:

{code:java}
688// do the usual writer stuff, killing the server again, if we can't 
manage to make the index
689// writes succeed again
690try {
691writer.writeAndKillYourselfOnFailure(updates, true);
692} catch (IOException e) {
693LOG.error("During WAL replay of outstanding index updates, "
694+ "Exception is thrown instead of killing server 
during index writing", e);
695}
696} finally {
{code}

However, the ParallelWriterIndexCommitter.write method in the master and other 4.x
branches is correct, as shown in the following lines 150 and 151:
{code:java}
147   try {
148if (allowLocalUpdates
149&& env != null
150&& tableReference.getTableName().equals(
151
env.getRegion().getTableDesc().getNameAsString())) {
152try {
153throwFailureIfDone();
154IndexUtil.writeLocalUpdates(env.getRegion(), 
mutations, true);
155return null;
156} catch (IOException ignord) {
157// when it's failed we fall back to the 
standard & slow way
158if (LOG.isDebugEnabled()) {
159

[jira] [Updated] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applys local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread chenglei (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-4094:
--
Description: 
I used phoenix-4.x-HBase-0.98 in my HBase cluster. When I restarted my HBase
cluster at a certain time, I noticed some RegionServers had plenty of error
logs like the following:

{code:java}
 2017-08-01 11:53:10,669 WARN  
[rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
 regionserver.HRegion: Failed getting lock in batch put, 
row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out of 
range for row lock on HRegion 
BIZARCH_NS_PRODUCT.BIZTRACER_SPAN,90ffd783-b0a3-4f8a-81ef-0a7535fea197_0,1490066612493.463220cd8fad7254481595911e62d74d.,
 startKey='90ffd783-b0a3-4f8a-81ef-0a7535fea197_0', 
getEndKey()='917fc343-3331-47fa-907c-df83a6f302f7_0', 
row='\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0'
at 
org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:3539)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3557)
at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2394)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2261)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2213)
at 
org.apache.phoenix.util.IndexUtil.writeLocalUpdates(IndexUtil.java:671)
at 
org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:157)
at 
org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:134)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}

The problem is caused by the ParallelWriterIndexCommitter.write method: at
the following line 151, if {{allowLocalUpdates}} is true, it writes index
mutations to the current data table region unconditionally, which is obviously
inappropriate:

{code:java}
 150try {
 151  if (allowLocalUpdates && env != null) {
 152   try {
 153   throwFailureIfDone();
 154   IndexUtil.writeLocalUpdates(env.getRegion(), 
mutations, true);
 155   return null;
 156   } catch (IOException ignord) {
 157   // when it's failed we fall back to the 
standard & slow way
 158   if (LOG.isDebugEnabled()) {
 159   LOG.debug("indexRegion.batchMutate 
failed and fall back to HTable.batch(). Got error="
 160   + ignord);
 161   }
 162   }
 163   }
{code}

When a data table has a global index table and we replay the WALs to the
index table in the Indexer.postOpen method at the following
line 691, where the {{allowLocalUpdates}} parameter is true, the {{updates}}
for the global index table are incorrectly written to the
current data table region:

{code:java}
688// do the usual writer stuff, killing the server again, if we can't 
manage to make the index
689// writes succeed again
690try {
691writer.writeAndKillYourselfOnFailure(updates, true);
692} catch (IOException e) {
693LOG.error("During WAL replay of outstanding index updates, "
694+ "Exception is thrown instead of killing server 
during index writing", e);
695}
696} finally {
{code}

However, the ParallelWriterIndexCommitter.write method in the master and other 4.x
branches is correct, as shown in the following lines 150 and 151:
{code:java}
147   try {
148if (allowLocalUpdates
149&& env != null
150&& tableReference.getTableName().equals(
151
env.getRegion().getTableDesc().getNameAsString())) {
152try {
153throwFailureIfDone();
154IndexUtil.writeLocalUpdates(env.getRegion(), 
mutations, true);
155return null;
156} catch (IOException ignord) {
157// when it's failed we fall back to the 
standard & slow way
158if (LOG.isDebugEnabled()) {
159

[jira] [Updated] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applys local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread chenglei (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-4094:
--
Description: 
I used phoenix-4.x-HBase-0.98 in my HBase cluster. When I restarted my
HBase cluster at a certain time, I noticed some RegionServers had a lot of error
logs like the following:

{code:java}
 2017-08-01 11:53:10,669 WARN  
[rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
 regionserver.HRegion: Failed getting lock in batch put, 
row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out of 
range for row lock on HRegion 
BIZARCH_NS_PRODUCT.BIZTRACER_SPAN,90ffd783-b0a3-4f8a-81ef-0a7535fea197_0,1490066612493.463220cd8fad7254481595911e62d74d.,
 startKey='90ffd783-b0a3-4f8a-81ef-0a7535fea197_0', 
getEndKey()='917fc343-3331-47fa-907c-df83a6f302f7_0', 
row='\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0'
at 
org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:3539)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3557)
at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2394)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2261)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2213)
at 
org.apache.phoenix.util.IndexUtil.writeLocalUpdates(IndexUtil.java:671)
at 
org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:157)
at 
org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:134)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}

The problem is caused by the ParallelWriterIndexCommitter.write method: at
the following line 151, if allowLocalUpdates is true, it writes index
mutations to the local region unconditionally, which is obviously inappropriate.

{code:java}
 150    try {
 151        if (allowLocalUpdates && env != null) {
 152            try {
 153                throwFailureIfDone();
 154                IndexUtil.writeLocalUpdates(env.getRegion(), mutations, true);
 155                return null;
 156            } catch (IOException ignord) {
 157                // when it's failed we fall back to the standard & slow way
 158                if (LOG.isDebugEnabled()) {
 159                    LOG.debug("indexRegion.batchMutate failed and fall back to HTable.batch(). Got error="
 160                            + ignord);
 161                }
 162            }
 163        }
{code}

When a data table has a global index table and we replay the WALs to the index table in the Indexer.postOpen method, the {{allowLocalUpdates}} parameter at the following line 691 is true, so the {{updates}} intended for the index table are incorrectly written to the current data table region:

{code:java}
688    // do the usual writer stuff, killing the server again, if we can't manage to make the index
689    // writes succeed again
690    try {
691        writer.writeAndKillYourselfOnFailure(updates, true);
692    } catch (IOException e) {
693        LOG.error("During WAL replay of outstanding index updates, "
694                + "Exception is thrown instead of killing server during index writing", e);
695    }
696    } finally {
{code}

However, the ParallelWriterIndexCommitter.write method in the master and the other 4.x branches is correct, as the following lines 150 and 151 show:
{code:java}
147    try {
148        if (allowLocalUpdates
149                && env != null
150                && tableReference.getTableName().equals(
151                        env.getRegion().getTableDesc().getNameAsString())) {
152            try {
153                throwFailureIfDone();
154                IndexUtil.writeLocalUpdates(env.getRegion(), mutations, true);
155                return null;
156            } catch (IOException ignord) {
157                // when it's failed we fall back to the standard & slow way
158                if (LOG.isDebugEnabled()) {
159                    LOG.debug("indexRegion.batchMutate failed and fall back to HTable.batch(). Got error="
160                            + ignord);
161                }
162            }
163        }
{code}

[jira] [Updated] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applys local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread chenglei (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-4094:
--
Description: 
I used phoenix-4.x-HBase-0.98 in my HBase cluster. When I restarted my HBase cluster at a certain time, I noticed some RegionServers had a lot of error logs like the following:

{code:java}
 2017-08-01 11:53:10,669 WARN  
[rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
 regionserver.HRegion: Failed getting lock in batch put, 
row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out of 
range for row lock on HRegion 
BIZARCH_NS_PRODUCT.BIZTRACER_SPAN,90ffd783-b0a3-4f8a-81ef-0a7535fea197_0,1490066612493.463220cd8fad7254481595911e62d74d.,
 startKey='90ffd783-b0a3-4f8a-81ef-0a7535fea197_0', 
getEndKey()='917fc343-3331-47fa-907c-df83a6f302f7_0', 
row='\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0'
at 
org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:3539)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3557)
at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2394)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2261)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2213)
at 
org.apache.phoenix.util.IndexUtil.writeLocalUpdates(IndexUtil.java:671)
at 
org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:157)
at 
org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:134)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}

The problem is caused by the ParallelWriterIndexCommitter.write method: at the following line 151, if allowLocalUpdates is true, it writes index mutations to the local region unconditionally, which is obviously inappropriate.

{code:java}
 150    try {
 151        if (allowLocalUpdates && env != null) {
 152            try {
 153                throwFailureIfDone();
 154                IndexUtil.writeLocalUpdates(env.getRegion(), mutations, true);
 155                return null;
 156            } catch (IOException ignord) {
 157                // when it's failed we fall back to the standard & slow way
 158                if (LOG.isDebugEnabled()) {
 159                    LOG.debug("indexRegion.batchMutate failed and fall back to HTable.batch(). Got error="
 160                            + ignord);
 161                }
 162            }
 163        }
{code}

When a data table has a global index table and we replay the WALs to the index table in the Indexer.postOpen method, the {{allowLocalUpdates}} parameter at the following line 691 is true, so the {{updates}} intended for the index table are incorrectly written to the current data table region:

{code:java}
688    // do the usual writer stuff, killing the server again, if we can't manage to make the index
689    // writes succeed again
690    try {
691        writer.writeAndKillYourselfOnFailure(updates, true);
692    } catch (IOException e) {
693        LOG.error("During WAL replay of outstanding index updates, "
694                + "Exception is thrown instead of killing server during index writing", e);
695    }
696    } finally {
{code}

However, the master and the other 4.x branches are correct:



  was:
I used phoenix-4.x-HBase-0.98 in my HBase cluster. When I restarted my HBase cluster at a certain time, I noticed some RegionServers had a lot of error logs like the following:

{code:java}
 2017-08-01 11:53:10,669 WARN  
[rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
 regionserver.HRegion: Failed getting lock in batch put, 
row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out of 
range for row lock on HRegion 
BIZARCH_NS_PRODUCT.BIZTRACER_SPAN,90ffd783-b0a3-4f8a-81ef-0a7535fea197_0,1490066612493.463220cd8fad7254481595911e62d74d.,
 startKey='90ffd783-b0a3-4f8a-81ef-0a7535fea197_0', 
getEndKey()='917fc343-3331-47fa-907c-df83a6f302f7_0', 
row='\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0'
at 
org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:3539)
{code}

[jira] [Updated] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applys local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread chenglei (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-4094:
--
Description: 
I used phoenix-4.x-HBase-0.98 in my HBase cluster. When I restarted my HBase cluster at a certain time, I noticed some RegionServers had a lot of error logs like the following:

{code:java}
 2017-08-01 11:53:10,669 WARN  
[rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
 regionserver.HRegion: Failed getting lock in batch put, 
row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out of 
range for row lock on HRegion 
BIZARCH_NS_PRODUCT.BIZTRACER_SPAN,90ffd783-b0a3-4f8a-81ef-0a7535fea197_0,1490066612493.463220cd8fad7254481595911e62d74d.,
 startKey='90ffd783-b0a3-4f8a-81ef-0a7535fea197_0', 
getEndKey()='917fc343-3331-47fa-907c-df83a6f302f7_0', 
row='\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0'
at 
org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:3539)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3557)
at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2394)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2261)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2213)
at 
org.apache.phoenix.util.IndexUtil.writeLocalUpdates(IndexUtil.java:671)
at 
org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:157)
at 
org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:134)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}

The problem is caused by the ParallelWriterIndexCommitter.write method: at the following line 151, if allowLocalUpdates is true, it writes index mutations to the local region unconditionally, which is obviously inappropriate.

{code:java}
 150    try {
 151        if (allowLocalUpdates && env != null) {
 152            try {
 153                throwFailureIfDone();
 154                IndexUtil.writeLocalUpdates(env.getRegion(), mutations, true);
 155                return null;
 156            } catch (IOException ignord) {
 157                // when it's failed we fall back to the standard & slow way
 158                if (LOG.isDebugEnabled()) {
 159                    LOG.debug("indexRegion.batchMutate failed and fall back to HTable.batch(). Got error="
 160                            + ignord);
 161                }
 162            }
 163        }
{code}

When a data table has a global index table and we replay the WALs to the index table in the Indexer.postOpen method, the {{allowLocalUpdates}} parameter at the following line 691 is true, so the {{updates}} intended for the index table are written to the current data table region:

{code:java}
688    // do the usual writer stuff, killing the server again, if we can't manage to make the index
689    // writes succeed again
690    try {
691        writer.writeAndKillYourselfOnFailure(updates, true);
692    } catch (IOException e) {
693        LOG.error("During WAL replay of outstanding index updates, "
694                + "Exception is thrown instead of killing server during index writing", e);
695    }
696    } finally {
{code}




> ParallelWriterIndexCommitter incorrectly applys local updates to index tables 
> for 4.x-HBase-0.98
> 
>
> Key: PHOENIX-4094
> URL: https://issues.apache.org/jira/browse/PHOENIX-4094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: chenglei
>
> I used phoenix-4.x-HBase-0.98 in my HBase cluster. When I restarted my 
> HBase cluster at a certain time, I noticed some RegionServers had a lot of 
> error logs like the following:
> {code:java}
>  2017-08-01 11:53:10,669 WARN  
> [rsync.slave005.bizhbasetest.sjs.ted,60020,1501511894174-index-writer--pool2-t786]
>  regionserver.HRegion: Failed getting lock in batch put, 
> row=\x10\x00\x00\x00913f0eed-6710-4de9-8bac-077a106bb9ae_0
> org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out 
> of range for row lock on HRegion 
> 

[jira] [Updated] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applys local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread chenglei (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-4094:
--
Affects Version/s: 4.11.0

> ParallelWriterIndexCommitter incorrectly applys local updates to index tables 
> for 4.x-HBase-0.98
> 
>
> Key: PHOENIX-4094
> URL: https://issues.apache.org/jira/browse/PHOENIX-4094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: chenglei
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4094) ParallelWriterIndexCommitter incorrectly applys local updates to index tables for 4.x-HBase-0.98

2017-08-17 Thread chenglei (JIRA)
chenglei created PHOENIX-4094:
-

 Summary: ParallelWriterIndexCommitter incorrectly applys local 
updates to index tables for 4.x-HBase-0.98
 Key: PHOENIX-4094
 URL: https://issues.apache.org/jira/browse/PHOENIX-4094
 Project: Phoenix
  Issue Type: Bug
Reporter: chenglei






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4093) org.apache.phoenix.exception.PhoenixIOException: java.net.SocketTimeoutException: callTimeout=60000, callDuration=60304:

2017-08-17 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4093:

Summary: org.apache.phoenix.exception.PhoenixIOException: 
java.net.SocketTimeoutException: callTimeout=60000, callDuration=60304:  (was: 
org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, 
exceptions:)

> org.apache.phoenix.exception.PhoenixIOException: 
> java.net.SocketTimeoutException: callTimeout=60000, callDuration=60304:
> 
>
> Key: PHOENIX-4093
> URL: https://issues.apache.org/jira/browse/PHOENIX-4093
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: Phoenix4.10
> HBase 1.2  CDH5.12
>Reporter: Jepson
>  Labels: performance
>
> SQL Error [101] [08000]: org.apache.phoenix.exception.PhoenixIOException: 
> Failed after attempts=36, exceptions:
> Thu Aug 17 10:51:48 UTC 2017, null, *java.net.SocketTimeoutException: 
> callTimeout=60000, callDuration=60304*: row '' on table 'DW:OMS_TIO_IDX' at 
> region=DW:OMS_TIO_IDX,,1502808904791.06aa2e941810212e9c8733e5f6bdb9ec., 
> hostname=hadoop44,60020,1502954074181, seqNum=8



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4093) org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:

2017-08-17 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4093:

Summary: org.apache.phoenix.exception.PhoenixIOException: Failed after 
attempts=36, exceptions:  (was:  
org.apache.phoenix.exception.PhoenixIOException 
org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, 
exceptions:)

> org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, 
> exceptions:
> --
>
> Key: PHOENIX-4093
> URL: https://issues.apache.org/jira/browse/PHOENIX-4093
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: Phoenix4.10
> HBase 1.2  CDH5.12
>Reporter: Jepson
>  Labels: performance
>
> SQL Error [101] [08000]: org.apache.phoenix.exception.PhoenixIOException: 
> Failed after attempts=36, exceptions:
> Thu Aug 17 10:51:48 UTC 2017, null, *java.net.SocketTimeoutException: 
> callTimeout=60000, callDuration=60304*: row '' on table 'DW:OMS_TIO_IDX' at 
> region=DW:OMS_TIO_IDX,,1502808904791.06aa2e941810212e9c8733e5f6bdb9ec., 
> hostname=hadoop44,60020,1502954074181, seqNum=8



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4093) org.apache.phoenix.exception.PhoenixIOException org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:

2017-08-17 Thread Jepson (JIRA)
Jepson created PHOENIX-4093:
---

 Summary:  org.apache.phoenix.exception.PhoenixIOException 
org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, 
exceptions:
 Key: PHOENIX-4093
 URL: https://issues.apache.org/jira/browse/PHOENIX-4093
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.10.0
 Environment: Phoenix4.10
HBase 1.2  CDH5.12
Reporter: Jepson


SQL Error [101] [08000]: org.apache.phoenix.exception.PhoenixIOException: 
Failed after attempts=36, exceptions:
Thu Aug 17 10:51:48 UTC 2017, null, *java.net.SocketTimeoutException: 
callTimeout=60000, callDuration=60304*: row '' on table 'DW:OMS_TIO_IDX' at 
region=DW:OMS_TIO_IDX,,1502808904791.06aa2e941810212e9c8733e5f6bdb9ec., 
hostname=hadoop44,60020,1502954074181, seqNum=8
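
The callDuration=60304 in the log slightly exceeds the callTimeout=60000, i.e. the scan RPC ran past the 60-second client timeout. A common mitigation (a sketch under assumed settings, not a confirmed fix for this report) is to raise the client-side timeouts when opening the Phoenix JDBC connection; phoenix.query.timeoutMs, hbase.rpc.timeout, and hbase.client.scanner.timeout.period are the usual knobs. The JDBC URL and the 10-minute value below are placeholders:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class TimeoutSketch {
    public static void main(String[] args) throws Exception {
        // Raise client-side timeouts so a long-running scan is not cancelled
        // at the default 60s RPC timeout. Values are illustrative (10 minutes).
        Properties props = new Properties();
        props.setProperty("phoenix.query.timeoutMs", "600000");             // Phoenix query timeout
        props.setProperty("hbase.rpc.timeout", "600000");                   // per-RPC timeout
        props.setProperty("hbase.client.scanner.timeout.period", "600000"); // scanner lease period

        // Placeholder ZooKeeper quorum; replace with the real connection string.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181", props)) {
            // run the long scan here
        }
    }
}
{code}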




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)