Build failed in Jenkins: Phoenix-Calcite #3

2016-08-01 Thread Apache Jenkins Server
See 

Changes:

[maryannxue] PHOENIX-2741 Support DROP SEQUENCE in Phoenix/Calcite Integration

--
[...truncated 10123 lines...]
Running org.apache.phoenix.end2end.RegexpSubstrFunctionIT
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.441 sec - in 
org.apache.phoenix.end2end.RegexpSplitFunctionIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.544 sec - in 
org.apache.phoenix.end2end.RegexpSubstrFunctionIT
Running org.apache.phoenix.end2end.ReverseFunctionIT
Running org.apache.phoenix.end2end.ReverseScanIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.012 sec - in 
org.apache.phoenix.end2end.PhoenixRuntimeIT
Running org.apache.phoenix.end2end.RoundFloorCeilFunctionsEnd2EndIT
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.141 sec - in 
org.apache.phoenix.end2end.ReverseFunctionIT
Running org.apache.phoenix.end2end.ServerExceptionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.515 sec - in 
org.apache.phoenix.end2end.ServerExceptionIT
Running org.apache.phoenix.end2end.SignFunctionEnd2EndIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.27 sec - in 
org.apache.phoenix.end2end.QueryMoreIT
Running org.apache.phoenix.end2end.SkipScanAfterManualSplitIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.613 sec - in 
org.apache.phoenix.end2end.ReverseScanIT
Running org.apache.phoenix.end2end.SkipScanQueryIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.749 sec - in 
org.apache.phoenix.end2end.SignFunctionEnd2EndIT
Running org.apache.phoenix.end2end.SortMergeJoinIT
Tests run: 99, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 224.218 sec - 
in org.apache.phoenix.end2end.HashJoinIT
Running org.apache.phoenix.end2end.SortMergeJoinMoreIT
Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.604 sec - 
in org.apache.phoenix.end2end.RoundFloorCeilFunctionsEnd2EndIT
Running org.apache.phoenix.end2end.SortOrderIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.988 sec - in 
org.apache.phoenix.end2end.SkipScanAfterManualSplitIT
Running org.apache.phoenix.end2end.SpooledSortMergeJoinIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.43 sec - in 
org.apache.phoenix.end2end.SortMergeJoinMoreIT
Running org.apache.phoenix.end2end.SpooledTmpFileDeleteIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 56.849 sec - 
in org.apache.phoenix.end2end.SkipScanQueryIT
Running org.apache.phoenix.end2end.SqrtFunctionEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.206 sec - in 
org.apache.phoenix.end2end.SpooledTmpFileDeleteIT
Running org.apache.phoenix.end2end.StatementHintsIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.624 sec - in 
org.apache.phoenix.end2end.StatementHintsIT
Running org.apache.phoenix.end2end.StddevIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.124 sec - in 
org.apache.phoenix.end2end.SqrtFunctionEnd2EndIT
Running org.apache.phoenix.end2end.StoreNullsIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.511 sec - in 
org.apache.phoenix.end2end.StddevIT
Running org.apache.phoenix.end2end.StringIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.273 sec - in 
org.apache.phoenix.end2end.StoreNullsIT
Running org.apache.phoenix.end2end.StringToArrayFunctionIT
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.886 sec - 
in org.apache.phoenix.end2end.StringIT
Running org.apache.phoenix.end2end.SubqueryIT
Tests run: 45, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 116.287 sec - 
in org.apache.phoenix.end2end.SortOrderIT
Running org.apache.phoenix.end2end.SubqueryUsingSortMergeJoinIT
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.657 sec - 
in org.apache.phoenix.end2end.StringToArrayFunctionIT
Running org.apache.phoenix.end2end.TenantIdTypeIT
Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 232.214 sec - 
in org.apache.phoenix.end2end.SortMergeJoinIT
Running org.apache.phoenix.end2end.TenantSpecificViewIndexIT
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 114.599 sec - 
in org.apache.phoenix.end2end.SubqueryUsingSortMergeJoinIT
Running org.apache.phoenix.end2end.TenantSpecificViewIndexSaltedIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.394 sec - in 
org.apache.phoenix.end2end.TenantSpecificViewIndexSaltedIT
Running org.apache.phoenix.end2end.TimezoneOffsetFunctionIT
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 145.504 sec - 
in org.apache.phoenix.end2end.SubqueryIT
Running org.apache.phoenix.end2end.ToDateFunctionIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.765 sec - in 
org.apache.phoenix.end2end.TimezoneOffsetFunctionIT
Running 

Build failed in Jenkins: Phoenix | Master #1349

2016-08-01 Thread Apache Jenkins Server
See 

Changes:

[tdsilva] PHOENIX-3120 AsyncIndexRebuilderTask fails for transactional tables

--
[...truncated 726 lines...]
Running org.apache.phoenix.tx.TxCheckpointIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 91.137 sec - in 
org.apache.phoenix.trace.PhoenixTracingEndToEndIT
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 154.713 sec - 
in org.apache.phoenix.tx.TxCheckpointIT
Tests run: 21, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 259.681 sec - 
in org.apache.phoenix.tx.TransactionIT
Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 639.492 sec - 
in org.apache.phoenix.end2end.index.LocalIndexIT
Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 865.007 sec - 
in org.apache.phoenix.end2end.index.MutableIndexIT

Results :

Tests run: 1070, Failures: 0, Errors: 0, Skipped: 5

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test 
(HBaseManagedTimeTableReuseTest) @ phoenix-core ---

---
 T E S T S
---
Running org.apache.phoenix.end2end.ArrayFillFunctionIT
Running org.apache.phoenix.end2end.AlterSessionIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.066 sec - in 
org.apache.phoenix.end2end.AlterSessionIT
Running org.apache.phoenix.end2end.ArrayToStringFunctionIT
Running org.apache.phoenix.end2end.ArraysWithNullsIT
Running org.apache.phoenix.end2end.AbsFunctionEnd2EndIT
Running org.apache.phoenix.end2end.ArithmeticQueryIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.726 sec - in 
org.apache.phoenix.end2end.AbsFunctionEnd2EndIT
Running org.apache.phoenix.end2end.AutoCommitIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.427 sec - in 
org.apache.phoenix.end2end.AutoCommitIT
Running org.apache.phoenix.end2end.CbrtFunctionEnd2EndIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.377 sec - in 
org.apache.phoenix.end2end.CbrtFunctionEnd2EndIT
Running org.apache.phoenix.end2end.ConvertTimezoneFunctionIT
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.16 sec - in 
org.apache.phoenix.end2end.ArrayFillFunctionIT
Running org.apache.phoenix.end2end.DecodeFunctionIT
Tests run: 36, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.259 sec - 
in org.apache.phoenix.end2end.ArrayToStringFunctionIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.764 sec - in 
org.apache.phoenix.end2end.ConvertTimezoneFunctionIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.032 sec - in 
org.apache.phoenix.end2end.DecodeFunctionIT
Running org.apache.phoenix.end2end.DynamicFamilyIT
Running org.apache.phoenix.end2end.DynamicUpsertIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.117 sec - in 
org.apache.phoenix.end2end.DynamicUpsertIT
Running org.apache.phoenix.end2end.FirstValueFunctionIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.809 sec - in 
org.apache.phoenix.end2end.DynamicFamilyIT
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.377 sec - 
in org.apache.phoenix.end2end.ArraysWithNullsIT
Running org.apache.phoenix.end2end.GetSetByteBitFunctionEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.092 sec - in 
org.apache.phoenix.end2end.GetSetByteBitFunctionEnd2EndIT
Running org.apache.phoenix.end2end.MD5FunctionIT
Running org.apache.phoenix.end2end.LikeExpressionIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.148 sec - in 
org.apache.phoenix.end2end.MD5FunctionIT
Running org.apache.phoenix.end2end.MinMaxAggregateFunctionIT
Running org.apache.phoenix.end2end.DistinctPrefixFilterIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.449 sec - in 
org.apache.phoenix.end2end.MinMaxAggregateFunctionIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.75 sec - in 
org.apache.phoenix.end2end.LikeExpressionIT
Running org.apache.phoenix.end2end.NthValueFunctionIT
Running org.apache.phoenix.end2end.OctetLengthFunctionEnd2EndIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.136 sec - in 
org.apache.phoenix.end2end.FirstValueFunctionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.352 sec - in 
org.apache.phoenix.end2end.OctetLengthFunctionEnd2EndIT
Running org.apache.phoenix.end2end.PrimitiveTypeIT
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.166 sec - in 
org.apache.phoenix.end2end.PrimitiveTypeIT
Running org.apache.phoenix.end2end.QueryMoreIT
Running org.apache.phoenix.end2end.PowerFunctionEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.46 sec - in 
org.apache.phoenix.end2end.PowerFunctionEnd2EndIT
Running org.apache.phoenix.end2end.RTrimFunctionIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time 

Build failed in Jenkins: Phoenix-4.x-HBase-1.1 #145

2016-08-01 Thread Apache Jenkins Server
See 

Changes:

[tdsilva] PHOENIX-3120 AsyncIndexRebuilderTask fails for transactional tables

--
[...truncated 697 lines...]
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 157.936 sec - 
in org.apache.phoenix.iterate.RoundRobinResultIteratorIT
Running org.apache.phoenix.tx.TxCheckpointIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 95.946 sec - in 
org.apache.phoenix.trace.PhoenixTracingEndToEndIT
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 160.276 sec - 
in org.apache.phoenix.tx.TxCheckpointIT
Tests run: 21, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 273.383 sec - 
in org.apache.phoenix.tx.TransactionIT
Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 664.011 sec - 
in org.apache.phoenix.end2end.index.LocalIndexIT
Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 877.916 sec - 
in org.apache.phoenix.end2end.index.MutableIndexIT

Results :

Tests run: 1070, Failures: 0, Errors: 0, Skipped: 5

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test 
(HBaseManagedTimeTableReuseTest) @ phoenix-core ---

---
 T E S T S
---
Running org.apache.phoenix.end2end.ArithmeticQueryIT
Running org.apache.phoenix.end2end.ArrayFillFunctionIT
Running org.apache.phoenix.end2end.AlterSessionIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.11 sec - in 
org.apache.phoenix.end2end.AlterSessionIT
Running org.apache.phoenix.end2end.ArraysWithNullsIT
Running org.apache.phoenix.end2end.ArrayToStringFunctionIT
Running org.apache.phoenix.end2end.AbsFunctionEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.801 sec - in 
org.apache.phoenix.end2end.AbsFunctionEnd2EndIT
Running org.apache.phoenix.end2end.AutoCommitIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.458 sec - in 
org.apache.phoenix.end2end.AutoCommitIT
Running org.apache.phoenix.end2end.CbrtFunctionEnd2EndIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.591 sec - in 
org.apache.phoenix.end2end.CbrtFunctionEnd2EndIT
Running org.apache.phoenix.end2end.ConvertTimezoneFunctionIT
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.326 sec - 
in org.apache.phoenix.end2end.ArrayFillFunctionIT
Running org.apache.phoenix.end2end.DecodeFunctionIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.947 sec - in 
org.apache.phoenix.end2end.ConvertTimezoneFunctionIT
Tests run: 36, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.528 sec - 
in org.apache.phoenix.end2end.ArrayToStringFunctionIT
Running org.apache.phoenix.end2end.DynamicFamilyIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.862 sec - in 
org.apache.phoenix.end2end.DynamicFamilyIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.394 sec - in 
org.apache.phoenix.end2end.DecodeFunctionIT
Running org.apache.phoenix.end2end.FirstValueFunctionIT
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.955 sec - 
in org.apache.phoenix.end2end.ArraysWithNullsIT
Running org.apache.phoenix.end2end.DynamicUpsertIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.123 sec - in 
org.apache.phoenix.end2end.DynamicUpsertIT
Running org.apache.phoenix.end2end.LikeExpressionIT
Running org.apache.phoenix.end2end.GetSetByteBitFunctionEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.123 sec - in 
org.apache.phoenix.end2end.GetSetByteBitFunctionEnd2EndIT
Running org.apache.phoenix.end2end.MD5FunctionIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.83 sec - in 
org.apache.phoenix.end2end.LikeExpressionIT
Running org.apache.phoenix.end2end.MinMaxAggregateFunctionIT
Running org.apache.phoenix.end2end.DistinctPrefixFilterIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.474 sec - in 
org.apache.phoenix.end2end.MinMaxAggregateFunctionIT
Running org.apache.phoenix.end2end.NthValueFunctionIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.287 sec - in 
org.apache.phoenix.end2end.MD5FunctionIT
Running org.apache.phoenix.end2end.OctetLengthFunctionEnd2EndIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.689 sec - in 
org.apache.phoenix.end2end.FirstValueFunctionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.476 sec - in 
org.apache.phoenix.end2end.OctetLengthFunctionEnd2EndIT
Running org.apache.phoenix.end2end.PrimitiveTypeIT
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.151 sec - in 
org.apache.phoenix.end2end.PrimitiveTypeIT
Running org.apache.phoenix.end2end.QueryMoreIT
Running org.apache.phoenix.end2end.PowerFunctionEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.465 sec - in 

Build failed in Jenkins: Phoenix-4.x-HBase-1.0 #592

2016-08-01 Thread Apache Jenkins Server
See 

Changes:

[tdsilva] PHOENIX-3120 AsyncIndexRebuilderTask fails for transactional tables

--
[...truncated 697 lines...]
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.804 sec - in 
org.apache.phoenix.iterate.RoundRobinResultIteratorIT
Running org.apache.phoenix.tx.TxCheckpointIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 91.755 sec - in 
org.apache.phoenix.trace.PhoenixTracingEndToEndIT
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 79.834 sec - 
in org.apache.phoenix.tx.TxCheckpointIT
Tests run: 21, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 138.431 sec - 
in org.apache.phoenix.tx.TransactionIT
Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 365.165 sec - 
in org.apache.phoenix.end2end.index.LocalIndexIT
Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 575.319 sec - 
in org.apache.phoenix.end2end.index.MutableIndexIT

Results :

Tests run: 1070, Failures: 0, Errors: 0, Skipped: 5

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test 
(HBaseManagedTimeTableReuseTest) @ phoenix-core ---

---
 T E S T S
---
Running org.apache.phoenix.end2end.AbsFunctionEnd2EndIT
Running org.apache.phoenix.end2end.ArrayFillFunctionIT
Running org.apache.phoenix.end2end.ArrayToStringFunctionIT
Running org.apache.phoenix.end2end.AlterSessionIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.085 sec - in 
org.apache.phoenix.end2end.AlterSessionIT
Running org.apache.phoenix.end2end.ArithmeticQueryIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.551 sec - in 
org.apache.phoenix.end2end.AbsFunctionEnd2EndIT
Running org.apache.phoenix.end2end.ArraysWithNullsIT
Running org.apache.phoenix.end2end.AutoCommitIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.887 sec - in 
org.apache.phoenix.end2end.AutoCommitIT
Running org.apache.phoenix.end2end.CbrtFunctionEnd2EndIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.434 sec - in 
org.apache.phoenix.end2end.CbrtFunctionEnd2EndIT
Running org.apache.phoenix.end2end.ConvertTimezoneFunctionIT
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.59 sec - in 
org.apache.phoenix.end2end.ArrayFillFunctionIT
Running org.apache.phoenix.end2end.DecodeFunctionIT
Tests run: 36, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.262 sec - 
in org.apache.phoenix.end2end.ArrayToStringFunctionIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.975 sec - in 
org.apache.phoenix.end2end.DecodeFunctionIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.486 sec - in 
org.apache.phoenix.end2end.ConvertTimezoneFunctionIT
Running org.apache.phoenix.end2end.DynamicFamilyIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.118 sec - in 
org.apache.phoenix.end2end.DynamicFamilyIT
Running org.apache.phoenix.end2end.DynamicUpsertIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.188 sec - in 
org.apache.phoenix.end2end.DynamicUpsertIT
Running org.apache.phoenix.end2end.FirstValueFunctionIT
Running org.apache.phoenix.end2end.GetSetByteBitFunctionEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.086 sec - in 
org.apache.phoenix.end2end.GetSetByteBitFunctionEnd2EndIT
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.441 sec - 
in org.apache.phoenix.end2end.ArraysWithNullsIT
Running org.apache.phoenix.end2end.MD5FunctionIT
Running org.apache.phoenix.end2end.LikeExpressionIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.386 sec - in 
org.apache.phoenix.end2end.MD5FunctionIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.171 sec - in 
org.apache.phoenix.end2end.LikeExpressionIT
Running org.apache.phoenix.end2end.MinMaxAggregateFunctionIT
Running org.apache.phoenix.end2end.NthValueFunctionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.873 sec - in 
org.apache.phoenix.end2end.MinMaxAggregateFunctionIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.574 sec - in 
org.apache.phoenix.end2end.FirstValueFunctionIT
Running org.apache.phoenix.end2end.OctetLengthFunctionEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.907 sec - in 
org.apache.phoenix.end2end.OctetLengthFunctionEnd2EndIT
Running org.apache.phoenix.end2end.PrimitiveTypeIT
Running org.apache.phoenix.end2end.PowerFunctionEnd2EndIT
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.155 sec - in 
org.apache.phoenix.end2end.PrimitiveTypeIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.561 sec - in 
org.apache.phoenix.end2end.PowerFunctionEnd2EndIT
Running org.apache.phoenix.end2end.QueryMoreIT
Running 

phoenix git commit: PHOENIX-2741 Support DROP SEQUENCE in Phoenix/Calcite Integration

2016-08-01 Thread maryannxue
Repository: phoenix
Updated Branches:
  refs/heads/calcite 8b8563f47 -> 5962a9362


PHOENIX-2741 Support DROP SEQUENCE in Phoenix/Calcite Integration


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/5962a936
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/5962a936
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/5962a936

Branch: refs/heads/calcite
Commit: 5962a93625a552a97f83a70d7b84e5a7ce76226e
Parents: 8b8563f
Author: maryannxue 
Authored: Mon Aug 1 15:20:30 2016 -0400
Committer: maryannxue 
Committed: Mon Aug 1 15:20:30 2016 -0400

--
 .../apache/phoenix/calcite/CalciteDDLIT.java|  3 +-
 phoenix-core/src/main/codegen/data/Parser.tdd   |  1 +
 .../src/main/codegen/includes/parserImpls.ftl   | 26 +++
 .../calcite/jdbc/PhoenixPrepareImpl.java| 15 +++
 .../phoenix/calcite/parse/SqlDropSequence.java  | 45 
 5 files changed, 89 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/5962a936/phoenix-core/src/it/java/org/apache/phoenix/calcite/CalciteDDLIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/calcite/CalciteDDLIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/calcite/CalciteDDLIT.java
index 970e68d..e0330dc 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/calcite/CalciteDDLIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/calcite/CalciteDDLIT.java
@@ -30,8 +30,9 @@ public class CalciteDDLIT extends BaseCalciteIT {
 start(PROPS).sql("create table t4(a bigint not null ROW_TIMESTAMP, b 
integer not null, c double constraint pk primary key(a,b)) 
SALT_BUCKET=4,VERSIONS=5 SPLIT ON('a','b')").execute();
 }
 
-@Test public void testCreateSequence() throws Exception {
+@Test public void testCreateAndDropSequence() throws Exception {
 start(PROPS).sql("create sequence if not exists s0 start with 2 
increment 3 minvalue 2 maxvalue 90 cycle cache 3").execute().close();
+start(PROPS).sql("drop sequence if exists s0").execute().close();
 }
 
 @Test public void testDropTable() throws Exception {

http://git-wip-us.apache.org/repos/asf/phoenix/blob/5962a936/phoenix-core/src/main/codegen/data/Parser.tdd
--
diff --git a/phoenix-core/src/main/codegen/data/Parser.tdd 
b/phoenix-core/src/main/codegen/data/Parser.tdd
index 04f073c..8ace311 100644
--- a/phoenix-core/src/main/codegen/data/Parser.tdd
+++ b/phoenix-core/src/main/codegen/data/Parser.tdd
@@ -53,6 +53,7 @@
 "SqlCreateTable()",
 "SqlCreateSequence()",
 "SqlDropTableOrDropView()",
+"SqlDropSequence()",
   ]
 
   # List of methods for parsing custom literals.

http://git-wip-us.apache.org/repos/asf/phoenix/blob/5962a936/phoenix-core/src/main/codegen/includes/parserImpls.ftl
--
diff --git a/phoenix-core/src/main/codegen/includes/parserImpls.ftl 
b/phoenix-core/src/main/codegen/includes/parserImpls.ftl
index 8256738..a6ae9a2 100644
--- a/phoenix-core/src/main/codegen/includes/parserImpls.ftl
+++ b/phoenix-core/src/main/codegen/includes/parserImpls.ftl
@@ -271,6 +271,32 @@ SqlNode SqlDropTableOrDropView() :
 }
 }
 
+/**
+ * Parses statement
+ *   DROP SEQUENCE
+ */
+SqlNode SqlDropSequence() :
+{
+SqlParserPos pos;
+SqlIdentifier sequenceName;
+boolean ifExists;
+}
+{
+<DROP> <SEQUENCE> { pos = getPos(); }
+(
+<IF> <EXISTS> { ifExists = true; }
+|
+{
+ifExists = false;
+}
+)
+sequenceName = DualIdentifier()
+{
+return new SqlDropSequence(pos.plus(getPos()), sequenceName,
+SqlLiteral.createBoolean(ifExists, SqlParserPos.ZERO));
+}
+}
+
 SqlNodeList ColumnDefList() :
 {
 SqlParserPos pos;

http://git-wip-us.apache.org/repos/asf/phoenix/blob/5962a936/phoenix-core/src/main/java/org/apache/phoenix/calcite/jdbc/PhoenixPrepareImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/calcite/jdbc/PhoenixPrepareImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/calcite/jdbc/PhoenixPrepareImpl.java
index cdec08e..0c95a8c 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/calcite/jdbc/PhoenixPrepareImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/calcite/jdbc/PhoenixPrepareImpl.java
@@ -38,6 +38,7 @@ import org.apache.calcite.util.Pair;
 import org.apache.phoenix.calcite.PhoenixSchema;
 import org.apache.phoenix.calcite.parse.SqlCreateSequence;
 import org.apache.phoenix.calcite.parse.SqlCreateTable;
+import 

phoenix git commit: PHOENIX-3120 AsyncIndexRebuilderTask fails for transactional tables (addendum)

2016-08-01 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 8a7bdb9c7 -> 15219d0fa


PHOENIX-3120 AsyncIndexRebuilderTask fails for transactional tables (addendum)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/15219d0f
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/15219d0f
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/15219d0f

Branch: refs/heads/4.x-HBase-0.98
Commit: 15219d0fa0c0bb0b3b44e7a01b9dce4745851fb7
Parents: 8a7bdb9
Author: Thomas D'Silva 
Authored: Mon Aug 1 12:00:27 2016 -0700
Committer: Thomas D'Silva 
Committed: Mon Aug 1 12:14:25 2016 -0700

--
 .../it/java/org/apache/phoenix/end2end/MutableIndexToolIT.java| 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/15219d0f/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableIndexToolIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableIndexToolIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableIndexToolIT.java
index c335ff8..cb41d2b 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableIndexToolIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableIndexToolIT.java
@@ -45,8 +45,9 @@ public class MutableIndexToolIT extends 
BaseOwnClusterHBaseManagedTimeIT {
 
 @BeforeClass
 public static void doSetup() throws Exception {
-Map<String, String> serverProps = Maps.newHashMapWithExpectedSize(1);
+Map<String, String> serverProps = Maps.newHashMapWithExpectedSize(2);
 serverProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, 
QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
+serverProps.put(QueryServices.ASYNC_INDEX_AUTO_BUILD_ATTRIB, 
Boolean.toString(false));
 setUpRealDriver(new ReadOnlyProps(serverProps.entrySet().iterator()), 
ReadOnlyProps.EMPTY_PROPS);
 }
 



phoenix git commit: PHOENIX-3120 AsyncIndexRebuilderTask fails for transactional tables (addendum)

2016-08-01 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 28e52ab3b -> c9e3d7d3d


PHOENIX-3120 AsyncIndexRebuilderTask fails for transactional tables (addendum)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/c9e3d7d3
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/c9e3d7d3
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/c9e3d7d3

Branch: refs/heads/4.x-HBase-1.1
Commit: c9e3d7d3dfd5a97dffdedeaa9382390367b5e5a4
Parents: 28e52ab
Author: Thomas D'Silva 
Authored: Mon Aug 1 12:00:27 2016 -0700
Committer: Thomas D'Silva 
Committed: Mon Aug 1 12:15:06 2016 -0700

--
 .../it/java/org/apache/phoenix/end2end/MutableIndexToolIT.java| 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/c9e3d7d3/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableIndexToolIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableIndexToolIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableIndexToolIT.java
index c335ff8..cb41d2b 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableIndexToolIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableIndexToolIT.java
@@ -45,8 +45,9 @@ public class MutableIndexToolIT extends 
BaseOwnClusterHBaseManagedTimeIT {
 
 @BeforeClass
 public static void doSetup() throws Exception {
-Map<String, String> serverProps = Maps.newHashMapWithExpectedSize(1);
+Map<String, String> serverProps = Maps.newHashMapWithExpectedSize(2);
 serverProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, 
QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
+serverProps.put(QueryServices.ASYNC_INDEX_AUTO_BUILD_ATTRIB, 
Boolean.toString(false));
 setUpRealDriver(new ReadOnlyProps(serverProps.entrySet().iterator()), 
ReadOnlyProps.EMPTY_PROPS);
 }
 



phoenix git commit: PHOENIX-3120 AsyncIndexRebuilderTask fails for transactional tables (addendum)

2016-08-01 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.0 a1106a72e -> 971426372


PHOENIX-3120 AsyncIndexRebuilderTask fails for transactional tables (addendum)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/97142637
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/97142637
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/97142637

Branch: refs/heads/4.x-HBase-1.0
Commit: 971426372d34d87a4ce5c6a8101af6f754037a96
Parents: a1106a7
Author: Thomas D'Silva 
Authored: Mon Aug 1 12:00:27 2016 -0700
Committer: Thomas D'Silva 
Committed: Mon Aug 1 12:14:45 2016 -0700

--
 .../it/java/org/apache/phoenix/end2end/MutableIndexToolIT.java| 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/97142637/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableIndexToolIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableIndexToolIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableIndexToolIT.java
index c335ff8..cb41d2b 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableIndexToolIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableIndexToolIT.java
@@ -45,8 +45,9 @@ public class MutableIndexToolIT extends 
BaseOwnClusterHBaseManagedTimeIT {
 
 @BeforeClass
 public static void doSetup() throws Exception {
-Map<String, String> serverProps = Maps.newHashMapWithExpectedSize(1);
+Map<String, String> serverProps = Maps.newHashMapWithExpectedSize(2);
 serverProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, 
QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
+serverProps.put(QueryServices.ASYNC_INDEX_AUTO_BUILD_ATTRIB, 
Boolean.toString(false));
 setUpRealDriver(new ReadOnlyProps(serverProps.entrySet().iterator()), 
ReadOnlyProps.EMPTY_PROPS);
 }
 



phoenix git commit: PHOENIX-3120 AsyncIndexRebuilderTask fails for transactional tables (addendum)

2016-08-01 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/master 7a27282f2 -> 545cc1c02


PHOENIX-3120 AsyncIndexRebuilderTask fails for transactional tables (addendum)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/545cc1c0
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/545cc1c0
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/545cc1c0

Branch: refs/heads/master
Commit: 545cc1c025ec56ef174f117f8d96212457f96363
Parents: 7a27282
Author: Thomas D'Silva 
Authored: Mon Aug 1 12:00:27 2016 -0700
Committer: Thomas D'Silva 
Committed: Mon Aug 1 12:17:18 2016 -0700

--
 .../it/java/org/apache/phoenix/end2end/MutableIndexToolIT.java| 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/545cc1c0/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableIndexToolIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableIndexToolIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableIndexToolIT.java
index c335ff8..cb41d2b 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableIndexToolIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableIndexToolIT.java
@@ -45,8 +45,9 @@ public class MutableIndexToolIT extends 
BaseOwnClusterHBaseManagedTimeIT {
 
 @BeforeClass
 public static void doSetup() throws Exception {
-Map<String, String> serverProps = Maps.newHashMapWithExpectedSize(1);
+Map<String, String> serverProps = Maps.newHashMapWithExpectedSize(2);
 serverProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, 
QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
+serverProps.put(QueryServices.ASYNC_INDEX_AUTO_BUILD_ATTRIB, 
Boolean.toString(false));
 setUpRealDriver(new ReadOnlyProps(serverProps.entrySet().iterator()), 
ReadOnlyProps.EMPTY_PROPS);
 }
 



phoenix git commit: PHOENIX-2231 Support CREATE SEQUENCE in Phoenix/Calcite Integration

2016-08-01 Thread maryannxue
Repository: phoenix
Updated Branches:
  refs/heads/calcite 50c797ffa -> 8b8563f47


PHOENIX-2231 Support CREATE SEQUENCE in Phoenix/Calcite Integration


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/8b8563f4
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/8b8563f4
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/8b8563f4

Branch: refs/heads/calcite
Commit: 8b8563f4725bac5c1807222103c17dd5ecaa54c1
Parents: 50c797f
Author: maryannxue 
Authored: Mon Aug 1 15:03:45 2016 -0400
Committer: maryannxue 
Committed: Mon Aug 1 15:03:45 2016 -0400

--
 .../apache/phoenix/calcite/CalciteDDLIT.java|  4 +
 phoenix-core/src/main/codegen/data/Parser.tdd   |  2 +
 .../src/main/codegen/includes/parserImpls.ftl   | 59 --
 .../calcite/jdbc/PhoenixPrepareImpl.java| 24 ++
 .../calcite/parse/SqlCreateSequence.java| 81 
 5 files changed, 165 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/8b8563f4/phoenix-core/src/it/java/org/apache/phoenix/calcite/CalciteDDLIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/calcite/CalciteDDLIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/calcite/CalciteDDLIT.java
index 22b7474..970e68d 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/calcite/CalciteDDLIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/calcite/CalciteDDLIT.java
@@ -30,6 +30,10 @@ public class CalciteDDLIT extends BaseCalciteIT {
 start(PROPS).sql("create table t4(a bigint not null ROW_TIMESTAMP, b 
integer not null, c double constraint pk primary key(a,b)) 
SALT_BUCKET=4,VERSIONS=5 SPLIT ON('a','b')").execute();
 }
 
+@Test public void testCreateSequence() throws Exception {
+start(PROPS).sql("create sequence if not exists s0 start with 2 
increment 3 minvalue 2 maxvalue 90 cycle cache 3").execute().close();
+}
+
 @Test public void testDropTable() throws Exception {
 start(PROPS).sql("create table t5(a varchar not null primary key, b 
varchar)").execute();
 start(PROPS).sql("drop table t5").execute();

http://git-wip-us.apache.org/repos/asf/phoenix/blob/8b8563f4/phoenix-core/src/main/codegen/data/Parser.tdd
--
diff --git a/phoenix-core/src/main/codegen/data/Parser.tdd 
b/phoenix-core/src/main/codegen/data/Parser.tdd
index 75df3fc..04f073c 100644
--- a/phoenix-core/src/main/codegen/data/Parser.tdd
+++ b/phoenix-core/src/main/codegen/data/Parser.tdd
@@ -38,6 +38,7 @@
"IF"
"ROW_TIMESTAMP"
"SPLIT"
+   "CACHE"
   ]
 
   # List of keywords from "keywords" section that are not reserved.
@@ -50,6 +51,7 @@
 "SqlPhoenixExplain()",
 "SqlCreateView()",
 "SqlCreateTable()",
+"SqlCreateSequence()",
 "SqlDropTableOrDropView()",
   ]
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/8b8563f4/phoenix-core/src/main/codegen/includes/parserImpls.ftl
--
diff --git a/phoenix-core/src/main/codegen/includes/parserImpls.ftl 
b/phoenix-core/src/main/codegen/includes/parserImpls.ftl
index 383e5b4..8256738 100644
--- a/phoenix-core/src/main/codegen/includes/parserImpls.ftl
+++ b/phoenix-core/src/main/codegen/includes/parserImpls.ftl
@@ -178,6 +178,60 @@ SqlNode SqlCreateTable() :
 
 /**
  * Parses statement
+ *   CREATE SEQUENCE
+ */
+SqlNode SqlCreateSequence() :
+{
+SqlParserPos pos;
+SqlIdentifier sequenceName;
+boolean ifNotExists = false;
+SqlLiteral startWith = null;
+SqlLiteral incrementBy = null;
+SqlLiteral minValue = null;
+SqlLiteral maxValue = null;
+boolean cycle = false;
+SqlLiteral cache = null;
+Integer v;
+}
+{
+    <CREATE> <SEQUENCE> { pos = getPos(); }
+    [
+        <IF> <NOT> <EXISTS> { ifNotExists = true; }
+    ]
+    sequenceName = DualIdentifier()
+    [
+        <START> [ <WITH> ]
+        v = UnsignedIntLiteral() { startWith = SqlLiteral.createExactNumeric(v.toString(), getPos()); }
+    ]
+    [
+        <INCREMENT> [ <BY> ]
+        v = UnsignedIntLiteral() { incrementBy = SqlLiteral.createExactNumeric(v.toString(), getPos()); }
+    ]
+    [
+        <MINVALUE>
+        v = UnsignedIntLiteral() { minValue = SqlLiteral.createExactNumeric(v.toString(), getPos()); }
+    ]
+    [
+        <MAXVALUE>
+        v = UnsignedIntLiteral() { maxValue = SqlLiteral.createExactNumeric(v.toString(), getPos()); }
+    ]
+    [
+        <CYCLE> { cycle = true; }
+    ]
+    [
+        <CACHE>
+        v = UnsignedIntLiteral() { cache = SqlLiteral.createExactNumeric(v.toString(), getPos()); }
+    ]
+    {
+        return new SqlCreateSequence(pos.plus(getPos()), sequenceName,
+   

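The test above shows the full option string the new grammar must accept. As an illustration only, here is a toy, hand-rolled sketch of that option grammar (an assumption of mine, not the generated Calcite parser; class and method names are hypothetical): each optional clause is consumed in order, mirroring the bracketed `[ ... ]` blocks in `SqlCreateSequence()`.

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.Map;

// Toy sketch of the CREATE SEQUENCE option grammar: sequential optional clauses,
// each with its own keyword(s), matching the structure of SqlCreateSequence().
public class SequenceOptionSketch {
    public static Map<String, Object> parse(String sql) {
        Deque<String> t = new ArrayDeque<>(Arrays.asList(sql.toLowerCase().trim().split("\\s+")));
        Map<String, Object> out = new LinkedHashMap<>();
        expect(t, "create");
        expect(t, "sequence");
        // [ <IF> <NOT> <EXISTS> ]
        if ("if".equals(t.peek())) { t.pop(); expect(t, "not"); expect(t, "exists"); out.put("ifNotExists", true); }
        out.put("name", t.pop());
        // [ <START> [ <WITH> ] n ]
        if ("start".equals(t.peek())) { t.pop(); if ("with".equals(t.peek())) t.pop(); out.put("startWith", Integer.parseInt(t.pop())); }
        // [ <INCREMENT> [ <BY> ] n ]
        if ("increment".equals(t.peek())) { t.pop(); if ("by".equals(t.peek())) t.pop(); out.put("incrementBy", Integer.parseInt(t.pop())); }
        if ("minvalue".equals(t.peek())) { t.pop(); out.put("minValue", Integer.parseInt(t.pop())); }
        if ("maxvalue".equals(t.peek())) { t.pop(); out.put("maxValue", Integer.parseInt(t.pop())); }
        if ("cycle".equals(t.peek())) { t.pop(); out.put("cycle", true); }
        if ("cache".equals(t.peek())) { t.pop(); out.put("cache", Integer.parseInt(t.pop())); }
        return out;
    }

    private static void expect(Deque<String> t, String kw) {
        if (!kw.equals(t.pop())) throw new IllegalArgumentException("expected " + kw);
    }

    public static void main(String[] args) {
        // The exact statement exercised by testCreateSequence above.
        Map<String, Object> m = parse(
            "create sequence if not exists s0 start with 2 increment 3 minvalue 2 maxvalue 90 cycle cache 3");
        if (!"s0".equals(m.get("name")) || (int) m.get("startWith") != 2
                || (int) m.get("incrementBy") != 3 || (int) m.get("cache") != 3
                || !Boolean.TRUE.equals(m.get("cycle")))
            throw new AssertionError(m);
        System.out.println("parsed ok");
    }
}
```

Every clause here is independently optional, which is why the real parser can accept anything from a bare `CREATE SEQUENCE s0` up to the fully qualified form in the test.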
Build failed in Jenkins: Phoenix-4.x-HBase-1.1 #144

2016-08-01 Thread Apache Jenkins Server
See 

Changes:

[rajeshbabu] PHOENIX-3111 Possible Deadlock/delay while building index, upsert

--
[...truncated 2112 lines...]
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.006 sec <<< 
FAILURE! - in org.apache.phoenix.end2end.index.AsyncIndexRegularBuildIT
org.apache.phoenix.end2end.index.AsyncIndexRegularBuildIT  Time elapsed: 0.005 
sec  <<< ERROR!
java.lang.RuntimeException: java.io.IOException: Shutting down
at 
org.apache.phoenix.end2end.index.AsyncIndexRegularBuildIT.doSetup(AsyncIndexRegularBuildIT.java:41)
Caused by: java.io.IOException: Shutting down
at 
org.apache.phoenix.end2end.index.AsyncIndexRegularBuildIT.doSetup(AsyncIndexRegularBuildIT.java:41)
Caused by: java.lang.RuntimeException: Master not initialized after 20ms 
seconds
at 
org.apache.phoenix.end2end.index.AsyncIndexRegularBuildIT.doSetup(AsyncIndexRegularBuildIT.java:41)

Running org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.006 sec <<< 
FAILURE! - in org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT
org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT  Time elapsed: 0.005 
sec  <<< ERROR!
java.lang.RuntimeException: java.io.IOException: Shutting down
at 
org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT.doSetup(ImmutableIndexWithStatsIT.java:52)
Caused by: java.io.IOException: Shutting down
at 
org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT.doSetup(ImmutableIndexWithStatsIT.java:52)
Caused by: java.lang.RuntimeException: Master not initialized after 20ms 
seconds
at 
org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT.doSetup(ImmutableIndexWithStatsIT.java:52)

Running org.apache.phoenix.end2end.index.MutableIndexFailureIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.004 sec <<< 
FAILURE! - in org.apache.phoenix.end2end.index.MutableIndexFailureIT
org.apache.phoenix.end2end.index.MutableIndexFailureIT  Time elapsed: 0.004 sec 
 <<< ERROR!
java.lang.RuntimeException: java.io.IOException: Shutting down
at 
org.apache.phoenix.end2end.index.MutableIndexFailureIT.doSetup(MutableIndexFailureIT.java:115)
Caused by: java.io.IOException: Shutting down
at 
org.apache.phoenix.end2end.index.MutableIndexFailureIT.doSetup(MutableIndexFailureIT.java:115)
Caused by: java.lang.RuntimeException: Master not initialized after 20ms 
seconds
at 
org.apache.phoenix.end2end.index.MutableIndexFailureIT.doSetup(MutableIndexFailureIT.java:115)

Running org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
Running org.apache.phoenix.end2end.index.MutableIndexReplicationIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.004 sec <<< 
FAILURE! - in org.apache.phoenix.end2end.index.MutableIndexReplicationIT
org.apache.phoenix.end2end.index.MutableIndexReplicationIT  Time elapsed: 0.004 
sec  <<< ERROR!
java.io.IOException: Shutting down
at 
org.apache.phoenix.end2end.index.MutableIndexReplicationIT.setupConfigsAndStartCluster(MutableIndexReplicationIT.java:170)
at 
org.apache.phoenix.end2end.index.MutableIndexReplicationIT.setUpBeforeClass(MutableIndexReplicationIT.java:108)
Caused by: java.lang.RuntimeException: Master not initialized after 20ms 
seconds
at 
org.apache.phoenix.end2end.index.MutableIndexReplicationIT.setupConfigsAndStartCluster(MutableIndexReplicationIT.java:170)
at 
org.apache.phoenix.end2end.index.MutableIndexReplicationIT.setUpBeforeClass(MutableIndexReplicationIT.java:108)

Running org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.004 sec <<< 
FAILURE! - in org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT
org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT  Time elapsed: 0.004 
sec  <<< ERROR!
java.lang.RuntimeException: java.io.IOException: Shutting down
at 
org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT.doSetup(ReadOnlyIndexFailureIT.java:119)
Caused by: java.io.IOException: Shutting down
at 
org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT.doSetup(ReadOnlyIndexFailureIT.java:119)
Caused by: java.lang.RuntimeException: Master not initialized after 20ms 
seconds
at 
org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT.doSetup(ReadOnlyIndexFailureIT.java:119)

Running org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.004 sec <<< 
FAILURE! - in org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
org.apache.phoenix.end2end.index.txn.TxWriteFailureIT  Time elapsed: 0.003 sec  
<<< ERROR!
java.lang.RuntimeException: java.io.IOException: Shutting down
at 

Build failed in Jenkins: Phoenix-4.x-HBase-1.0 #591

2016-08-01 Thread Apache Jenkins Server
See 

Changes:

[rajeshbabu] PHOENIX-3111 Possible Deadlock/delay while building index, upsert

--
[...truncated 704 lines...]

Results :

Tests run: 1070, Failures: 0, Errors: 0, Skipped: 5

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test 
(HBaseManagedTimeTableReuseTest) @ phoenix-core ---

---
 T E S T S
---
Running org.apache.phoenix.end2end.ArithmeticQueryIT
Running org.apache.phoenix.end2end.AbsFunctionEnd2EndIT
Running org.apache.phoenix.end2end.ArrayToStringFunctionIT
Running org.apache.phoenix.end2end.ArrayFillFunctionIT
Running org.apache.phoenix.end2end.AlterSessionIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.114 sec - in 
org.apache.phoenix.end2end.AlterSessionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.105 sec - in 
org.apache.phoenix.end2end.AbsFunctionEnd2EndIT
Running org.apache.phoenix.end2end.ArraysWithNullsIT
Running org.apache.phoenix.end2end.AutoCommitIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.876 sec - in 
org.apache.phoenix.end2end.AutoCommitIT
Running org.apache.phoenix.end2end.CbrtFunctionEnd2EndIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.495 sec - in 
org.apache.phoenix.end2end.CbrtFunctionEnd2EndIT
Running org.apache.phoenix.end2end.ConvertTimezoneFunctionIT
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.483 sec - in 
org.apache.phoenix.end2end.ArrayFillFunctionIT
Running org.apache.phoenix.end2end.DecodeFunctionIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.777 sec - in 
org.apache.phoenix.end2end.DecodeFunctionIT
Tests run: 36, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.124 sec - 
in org.apache.phoenix.end2end.ArrayToStringFunctionIT
Running org.apache.phoenix.end2end.DynamicFamilyIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.946 sec - in 
org.apache.phoenix.end2end.ConvertTimezoneFunctionIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.971 sec - in 
org.apache.phoenix.end2end.DynamicFamilyIT
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.666 sec - 
in org.apache.phoenix.end2end.ArraysWithNullsIT
Running org.apache.phoenix.end2end.FirstValueFunctionIT
Running org.apache.phoenix.end2end.DynamicUpsertIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.149 sec - in 
org.apache.phoenix.end2end.DynamicUpsertIT
Running org.apache.phoenix.end2end.GetSetByteBitFunctionEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.176 sec - in 
org.apache.phoenix.end2end.GetSetByteBitFunctionEnd2EndIT
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.01 sec - in 
org.apache.phoenix.end2end.ArithmeticQueryIT
Running org.apache.phoenix.end2end.MD5FunctionIT
Running org.apache.phoenix.end2end.LikeExpressionIT
Running org.apache.phoenix.end2end.MinMaxAggregateFunctionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.613 sec - in 
org.apache.phoenix.end2end.MinMaxAggregateFunctionIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.569 sec - in 
org.apache.phoenix.end2end.MD5FunctionIT
Running org.apache.phoenix.end2end.NthValueFunctionIT
Running org.apache.phoenix.end2end.OctetLengthFunctionEnd2EndIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.935 sec - in 
org.apache.phoenix.end2end.LikeExpressionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.62 sec - in 
org.apache.phoenix.end2end.OctetLengthFunctionEnd2EndIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.473 sec - in 
org.apache.phoenix.end2end.FirstValueFunctionIT
Running org.apache.phoenix.end2end.QueryMoreIT
Running org.apache.phoenix.end2end.PrimitiveTypeIT
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.262 sec - in 
org.apache.phoenix.end2end.PrimitiveTypeIT
Running org.apache.phoenix.end2end.RTrimFunctionIT
Running org.apache.phoenix.end2end.PowerFunctionEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.669 sec - in 
org.apache.phoenix.end2end.PowerFunctionEnd2EndIT
Running org.apache.phoenix.end2end.ReadOnlyIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.46 sec - in 
org.apache.phoenix.end2end.RTrimFunctionIT
Running org.apache.phoenix.end2end.RegexpSplitFunctionIT
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.026 sec - in 
org.apache.phoenix.end2end.NthValueFunctionIT
Running org.apache.phoenix.end2end.ReverseFunctionIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.131 sec - in 
org.apache.phoenix.end2end.RegexpSplitFunctionIT
Running org.apache.phoenix.end2end.RoundFloorCeilFunctionsEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, 

Build failed in Jenkins: Phoenix | Master #1348

2016-08-01 Thread Apache Jenkins Server
See 

Changes:

[rajeshbabu] PHOENIX-3111 Possible Deadlock/delay while building index, upsert

--
[...truncated 734 lines...]
Results :

Tests run: 1070, Failures: 0, Errors: 0, Skipped: 5

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test 
(HBaseManagedTimeTableReuseTest) @ phoenix-core ---

---
 T E S T S
---
Running org.apache.phoenix.end2end.AbsFunctionEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.71 sec - in 
org.apache.phoenix.end2end.AbsFunctionEnd2EndIT
Running org.apache.phoenix.end2end.ArraysWithNullsIT
Running org.apache.phoenix.end2end.ArithmeticQueryIT
Running org.apache.phoenix.end2end.ArrayFillFunctionIT
Running org.apache.phoenix.end2end.AlterSessionIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.076 sec - in 
org.apache.phoenix.end2end.AlterSessionIT
Running org.apache.phoenix.end2end.AutoCommitIT
Running org.apache.phoenix.end2end.ArrayToStringFunctionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.422 sec - in 
org.apache.phoenix.end2end.AutoCommitIT
Running org.apache.phoenix.end2end.CbrtFunctionEnd2EndIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.427 sec - in 
org.apache.phoenix.end2end.CbrtFunctionEnd2EndIT
Running org.apache.phoenix.end2end.ConvertTimezoneFunctionIT
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.059 sec - 
in org.apache.phoenix.end2end.ArrayFillFunctionIT
Running org.apache.phoenix.end2end.DecodeFunctionIT
Tests run: 36, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.076 sec - 
in org.apache.phoenix.end2end.ArrayToStringFunctionIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.008 sec - in 
org.apache.phoenix.end2end.ConvertTimezoneFunctionIT
Running org.apache.phoenix.end2end.DynamicFamilyIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.809 sec - in 
org.apache.phoenix.end2end.DynamicFamilyIT
Running org.apache.phoenix.end2end.DynamicUpsertIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.093 sec - in 
org.apache.phoenix.end2end.DynamicUpsertIT
Running org.apache.phoenix.end2end.FirstValueFunctionIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.067 sec - in 
org.apache.phoenix.end2end.DecodeFunctionIT
Running org.apache.phoenix.end2end.GetSetByteBitFunctionEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.106 sec - in 
org.apache.phoenix.end2end.GetSetByteBitFunctionEnd2EndIT
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.03 sec - in 
org.apache.phoenix.end2end.ArraysWithNullsIT
Running org.apache.phoenix.end2end.MD5FunctionIT
Running org.apache.phoenix.end2end.LikeExpressionIT
Running org.apache.phoenix.end2end.DistinctPrefixFilterIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.773 sec - in 
org.apache.phoenix.end2end.LikeExpressionIT
Running org.apache.phoenix.end2end.MinMaxAggregateFunctionIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.06 sec - in 
org.apache.phoenix.end2end.MD5FunctionIT
Running org.apache.phoenix.end2end.NthValueFunctionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.424 sec - in 
org.apache.phoenix.end2end.MinMaxAggregateFunctionIT
Running org.apache.phoenix.end2end.OctetLengthFunctionEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.353 sec - in 
org.apache.phoenix.end2end.OctetLengthFunctionEnd2EndIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.367 sec - in 
org.apache.phoenix.end2end.FirstValueFunctionIT
Running org.apache.phoenix.end2end.PrimitiveTypeIT
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.111 sec - in 
org.apache.phoenix.end2end.PrimitiveTypeIT
Running org.apache.phoenix.end2end.QueryMoreIT
Running org.apache.phoenix.end2end.PowerFunctionEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.414 sec - in 
org.apache.phoenix.end2end.PowerFunctionEnd2EndIT
Running org.apache.phoenix.end2end.RTrimFunctionIT
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.329 sec - in 
org.apache.phoenix.end2end.DistinctPrefixFilterIT
Running org.apache.phoenix.end2end.ReadOnlyIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.692 sec - in 
org.apache.phoenix.end2end.RTrimFunctionIT
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 74.296 sec - 
in org.apache.phoenix.end2end.ArithmeticQueryIT
Running org.apache.phoenix.end2end.RegexpSplitFunctionIT
Running org.apache.phoenix.end2end.ReverseFunctionIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.392 sec - in 
org.apache.phoenix.end2end.RegexpSplitFunctionIT
Running 

phoenix git commit: PHOENIX-3111 Possible Deadlock/delay while building index, upsert select, delete rows at server(Rajeshbabu)

2016-08-01 Thread rajeshbabu
Repository: phoenix
Updated Branches:
  refs/heads/4.8-HBase-0.98 3fc406698 -> 56318fb0d


PHOENIX-3111 Possible Deadlock/delay while building index, upsert select, 
delete rows at server(Rajeshbabu)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/56318fb0
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/56318fb0
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/56318fb0

Branch: refs/heads/4.8-HBase-0.98
Commit: 56318fb0da9748c20a76011524a2523343ba405d
Parents: 3fc4066
Author: Rajeshbabu Chintaguntla 
Authored: Mon Aug 1 17:51:01 2016 +0530
Committer: Rajeshbabu Chintaguntla 
Committed: Mon Aug 1 17:51:01 2016 +0530

--
 .../UngroupedAggregateRegionObserver.java   | 167 +--
 1 file changed, 151 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/56318fb0/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index 2931933..7c4cb33 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -39,12 +39,17 @@ import java.util.List;
 import java.util.Set;
 import java.util.concurrent.Callable;
 
+import javax.annotation.concurrent.GuardedBy;
+
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CoprocessorEnvironment;
+import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.RegionTooBusyException;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.HTableInterface;
@@ -89,7 +94,6 @@ import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableImpl;
 import org.apache.phoenix.schema.RowKeySchema;
 import org.apache.phoenix.schema.SortOrder;
-import org.apache.phoenix.schema.StaleRegionBoundaryCacheException;
 import org.apache.phoenix.schema.ValueSchema.Field;
 import org.apache.phoenix.schema.stats.StatisticsCollectionRunTracker;
 import org.apache.phoenix.schema.stats.StatisticsCollector;
@@ -105,7 +109,6 @@ import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.IndexUtil;
 import org.apache.phoenix.util.KeyValueUtil;
 import org.apache.phoenix.util.LogUtil;
-import org.apache.phoenix.util.MetaDataUtil;
 import org.apache.phoenix.util.ScanUtil;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.ServerUtil;
@@ -136,6 +139,37 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 public static final String DELETE_CQ = "DeleteCQ";
 public static final String DELETE_CF = "DeleteCF";
 public static final String EMPTY_CF = "EmptyCF";
+/**
+ * This lock is used for synchronizing the state of the
+ * {@link UngroupedAggregateRegionObserver#scansReferenceCount} and
+ * {@link UngroupedAggregateRegionObserver#isRegionClosing} variables, to avoid
+ * a possible deadlock in the following scenario:
+ * 1. We take the read lock when we start writing local indexes, deletes etc.
+ * 2. When the memstore reaches its threshold, flushes happen. Since they use the
+ * read (shared) lock, they proceed without any problem until someone tries to
+ * obtain the write lock.
+ * 3. At some moment we decide to split/bulkload/close and try to acquire the
+ * write lock.
+ * 4. From that moment on, all attempts to get the read lock are blocked, i.e. no
+ * more flushes happen. But we continue to fill the memstore with local index
+ * batches and finally we get a RegionTooBusyException (RTBE).
+ *
+ * The solution is to disallow, or delay, the operations that acquire the
+ * write lock:
+ * 1) In case of a split we just throw an IOException, so the split won't happen
+ * but no harm is done.
+ * 2) In case of a bulkload we fail it by throwing an exception.
+ * 3) In case of a region close by balancer/move we wait before closing the
+ * region and fail any query that writes after reading.
+ *
+ * See PHOENIX-3111 for more info.
+ */
+
+private final Object lock = new Object();
+/**
+ * To maintain the number of scans used for create index, delete and 
upsert 
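The javadoc in this diff describes counting in-flight scans so that a region close can drain writers instead of deadlocking against the region lock. A minimal standalone sketch of that counting idea (my simplification under stated assumptions, not the actual Phoenix implementation; class and method names are hypothetical):

```java
// Sketch: reference-count in-flight server-side writes so a region close can
// wait for them to drain, and new writes are rejected once closing has begun.
public class ScanReferenceCounter {
    private final Object lock = new Object();
    private int scansReferenceCount = 0;   // in-flight writes (index builds, deletes, upserts)
    private boolean isRegionClosing = false;

    /** Called before a scan starts writing; fails fast once the region is closing. */
    public void incrementScansReferenceCount() {
        synchronized (lock) {
            if (isRegionClosing) {
                throw new IllegalStateException("Region is closing; rejecting new scan");
            }
            scansReferenceCount++;
        }
    }

    /** Called when a scan finishes writing; wakes up a pending close. */
    public void decrementScansReferenceCount() {
        synchronized (lock) {
            scansReferenceCount--;
            if (scansReferenceCount == 0) {
                lock.notifyAll();
            }
        }
    }

    /** Called from the close hook: block until all in-flight scans have drained. */
    public void waitForScansToFinish() throws InterruptedException {
        synchronized (lock) {
            isRegionClosing = true;
            while (scansReferenceCount > 0) {
                lock.wait();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        ScanReferenceCounter c = new ScanReferenceCounter();
        c.incrementScansReferenceCount();
        Thread closer = new Thread(() -> {
            try { c.waitForScansToFinish(); } catch (InterruptedException ignored) { }
        });
        closer.start();
        Thread.sleep(100);                 // closer is now (very likely) blocked in wait()
        c.decrementScansReferenceCount();  // count hits 0, notifyAll wakes the closer
        closer.join(5000);
        if (closer.isAlive()) throw new AssertionError("close did not drain");
        System.out.println("drained");
    }
}
```

The point of the separate `lock` object is that the drain/close coordination never touches the region's own read/write lock, so flushes keep running while writers finish.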

phoenix git commit: PHOENIX-3111 Possible Deadlock/delay while building index, upsert select, delete rows at server(Rajeshbabu)

2016-08-01 Thread rajeshbabu
Repository: phoenix
Updated Branches:
  refs/heads/4.8-HBase-1.2 cb21c8175 -> 8786a3d4a


PHOENIX-3111 Possible Deadlock/delay while building index, upsert select, 
delete rows at server(Rajeshbabu)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/8786a3d4
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/8786a3d4
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/8786a3d4

Branch: refs/heads/4.8-HBase-1.2
Commit: 8786a3d4aca7fe3e0ab2df227b78ec17a8f7024e
Parents: cb21c81
Author: Rajeshbabu Chintaguntla 
Authored: Mon Aug 1 17:45:19 2016 +0530
Committer: Rajeshbabu Chintaguntla 
Committed: Mon Aug 1 17:45:19 2016 +0530

--
 .../UngroupedAggregateRegionObserver.java   | 167 +--
 1 file changed, 151 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/8786a3d4/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index d783670..eda59d1 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -39,12 +39,17 @@ import java.util.List;
 import java.util.Set;
 import java.util.concurrent.Callable;
 
+import javax.annotation.concurrent.GuardedBy;
+
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CoprocessorEnvironment;
+import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.RegionTooBusyException;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.HTableInterface;
@@ -88,7 +93,6 @@ import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableImpl;
 import org.apache.phoenix.schema.RowKeySchema;
 import org.apache.phoenix.schema.SortOrder;
-import org.apache.phoenix.schema.StaleRegionBoundaryCacheException;
 import org.apache.phoenix.schema.ValueSchema.Field;
 import org.apache.phoenix.schema.stats.StatisticsCollectionRunTracker;
 import org.apache.phoenix.schema.stats.StatisticsCollector;
@@ -104,7 +108,6 @@ import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.IndexUtil;
 import org.apache.phoenix.util.KeyValueUtil;
 import org.apache.phoenix.util.LogUtil;
-import org.apache.phoenix.util.MetaDataUtil;
 import org.apache.phoenix.util.ScanUtil;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.ServerUtil;
@@ -135,6 +138,37 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 public static final String DELETE_CQ = "DeleteCQ";
 public static final String DELETE_CF = "DeleteCF";
 public static final String EMPTY_CF = "EmptyCF";
+/**
+ * This lock is used for synchronizing the state of the
+ * {@link UngroupedAggregateRegionObserver#scansReferenceCount} and
+ * {@link UngroupedAggregateRegionObserver#isRegionClosing} variables, to avoid
+ * a possible deadlock in the following scenario:
+ * 1. We take the read lock when we start writing local indexes, deletes etc.
+ * 2. When the memstore reaches its threshold, flushes happen. Since they use the
+ * read (shared) lock, they proceed without any problem until someone tries to
+ * obtain the write lock.
+ * 3. At some moment we decide to split/bulkload/close and try to acquire the
+ * write lock.
+ * 4. From that moment on, all attempts to get the read lock are blocked, i.e. no
+ * more flushes happen. But we continue to fill the memstore with local index
+ * batches and finally we get a RegionTooBusyException (RTBE).
+ *
+ * The solution is to disallow, or delay, the operations that acquire the
+ * write lock:
+ * 1) In case of a split we just throw an IOException, so the split won't happen
+ * but no harm is done.
+ * 2) In case of a bulkload we fail it by throwing an exception.
+ * 3) In case of a region close by balancer/move we wait before closing the
+ * region and fail any query that writes after reading.
+ *
+ * See PHOENIX-3111 for more info.
+ */
+
+private final Object lock = new Object();
+/**
+ * To maintain the number of scans used for create index, delete and 
upsert 

phoenix git commit: PHOENIX-3111 Possible Deadlock/delay while building index, upsert select, delete rows at server(Rajeshbabu)

2016-08-01 Thread rajeshbabu
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 c37f73f65 -> 8a7bdb9c7


PHOENIX-3111 Possible Deadlock/delay while building index, upsert select, 
delete rows at server(Rajeshbabu)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/8a7bdb9c
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/8a7bdb9c
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/8a7bdb9c

Branch: refs/heads/4.x-HBase-0.98
Commit: 8a7bdb9c7e56dac4ebfa3a1a3877ba06eb70f572
Parents: c37f73f
Author: Rajeshbabu Chintaguntla 
Authored: Mon Aug 1 17:40:10 2016 +0530
Committer: Rajeshbabu Chintaguntla 
Committed: Mon Aug 1 17:40:10 2016 +0530

--
 .../UngroupedAggregateRegionObserver.java   | 167 +--
 1 file changed, 151 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/8a7bdb9c/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index 2931933..7c4cb33 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -39,12 +39,17 @@ import java.util.List;
 import java.util.Set;
 import java.util.concurrent.Callable;
 
+import javax.annotation.concurrent.GuardedBy;
+
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CoprocessorEnvironment;
+import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.RegionTooBusyException;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.HTableInterface;
@@ -89,7 +94,6 @@ import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableImpl;
 import org.apache.phoenix.schema.RowKeySchema;
 import org.apache.phoenix.schema.SortOrder;
-import org.apache.phoenix.schema.StaleRegionBoundaryCacheException;
 import org.apache.phoenix.schema.ValueSchema.Field;
 import org.apache.phoenix.schema.stats.StatisticsCollectionRunTracker;
 import org.apache.phoenix.schema.stats.StatisticsCollector;
@@ -105,7 +109,6 @@ import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.IndexUtil;
 import org.apache.phoenix.util.KeyValueUtil;
 import org.apache.phoenix.util.LogUtil;
-import org.apache.phoenix.util.MetaDataUtil;
 import org.apache.phoenix.util.ScanUtil;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.ServerUtil;
@@ -136,6 +139,37 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 public static final String DELETE_CQ = "DeleteCQ";
 public static final String DELETE_CF = "DeleteCF";
 public static final String EMPTY_CF = "EmptyCF";
+/**
+ * This lock is used for synchronizing the state of the
+ * {@link UngroupedAggregateRegionObserver#scansReferenceCount} and
+ * {@link UngroupedAggregateRegionObserver#isRegionClosing} variables, to avoid
+ * a possible deadlock in the following scenario:
+ * 1. We take the read lock when we start writing local indexes, deletes etc.
+ * 2. When the memstore reaches its threshold, flushes happen. Since they use the
+ * read (shared) lock, they proceed without any problem until someone tries to
+ * obtain the write lock.
+ * 3. At some moment we decide to split/bulkload/close and try to acquire the
+ * write lock.
+ * 4. From that moment on, all attempts to get the read lock are blocked, i.e. no
+ * more flushes happen. But we continue to fill the memstore with local index
+ * batches and finally we get a RegionTooBusyException (RTBE).
+ *
+ * The solution is to disallow, or delay, the operations that acquire the
+ * write lock:
+ * 1) In case of a split we just throw an IOException, so the split won't happen
+ * but no harm is done.
+ * 2) In case of a bulkload we fail it by throwing an exception.
+ * 3) In case of a region close by balancer/move we wait before closing the
+ * region and fail any query that writes after reading.
+ *
+ * See PHOENIX-3111 for more info.
+ */
+
+private final Object lock = new Object();
+/**
+ * To maintain the number of scans used for create index, delete and 
upsert 

phoenix git commit: PHOENIX-3111 Possible Deadlock/delay while building index, upsert select, delete rows at server(Rajeshbabu)

2016-08-01 Thread rajeshbabu
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.0 47da2cf2f -> a1106a72e


PHOENIX-3111 Possible Deadlock/delay while building index, upsert select, 
delete rows at server(Rajeshbabu)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/a1106a72
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/a1106a72
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/a1106a72

Branch: refs/heads/4.x-HBase-1.0
Commit: a1106a72ecbb1cbec8f8d6ab48dcbda65616c3f9
Parents: 47da2cf
Author: Rajeshbabu Chintaguntla 
Authored: Mon Aug 1 17:28:43 2016 +0530
Committer: Rajeshbabu Chintaguntla 
Committed: Mon Aug 1 17:28:43 2016 +0530

--
 .../UngroupedAggregateRegionObserver.java   | 168 +--
 1 file changed, 152 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/a1106a72/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index f49a992..feea2b7 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -39,12 +39,17 @@ import java.util.List;
 import java.util.Set;
 import java.util.concurrent.Callable;
 
+import javax.annotation.concurrent.GuardedBy;
+
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CoprocessorEnvironment;
+import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.RegionTooBusyException;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.HTableInterface;
@@ -61,6 +66,7 @@ import org.apache.hadoop.hbase.regionserver.ScanType;
 import org.apache.hadoop.hbase.regionserver.Store;
 import org.apache.hadoop.hbase.security.User;
 import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
 import org.apache.hadoop.io.WritableUtils;
 import org.apache.phoenix.coprocessor.generated.PTableProtos;
 import org.apache.phoenix.exception.DataExceedsCapacityException;
@@ -87,7 +93,6 @@ import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableImpl;
 import org.apache.phoenix.schema.RowKeySchema;
 import org.apache.phoenix.schema.SortOrder;
-import org.apache.phoenix.schema.StaleRegionBoundaryCacheException;
 import org.apache.phoenix.schema.ValueSchema.Field;
 import org.apache.phoenix.schema.stats.StatisticsCollectionRunTracker;
 import org.apache.phoenix.schema.stats.StatisticsCollector;
@@ -103,7 +108,6 @@ import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.IndexUtil;
 import org.apache.phoenix.util.KeyValueUtil;
 import org.apache.phoenix.util.LogUtil;
-import org.apache.phoenix.util.MetaDataUtil;
 import org.apache.phoenix.util.ScanUtil;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.ServerUtil;
@@ -134,6 +138,37 @@ public class UngroupedAggregateRegionObserver extends BaseScannerRegionObserver
 public static final String DELETE_CQ = "DeleteCQ";
 public static final String DELETE_CF = "DeleteCF";
 public static final String EMPTY_CF = "EmptyCF";
+/**
+ * This lock synchronizes access to the
+ * {@link UngroupedAggregateRegionObserver#scansReferenceCount} and
+ * {@link UngroupedAggregateRegionObserver#isRegionClosing} variables, which are
+ * used to avoid a possible deadlock in the following scenario:
+ * 1. We acquire the read lock when we start writing local indexes, deletes, etc.
+ * 2. When the memstore reaches its threshold, flushes happen. Since they use the
+ * read (shared) lock, they proceed without any problem until someone tries to
+ * obtain the write lock.
+ * 3. At some point we decide to split/bulkload/close and try to acquire the
+ * write lock.
+ * 4. From that moment on, all attempts to get the read lock are blocked, i.e. no
+ * more flushes happen. But we keep filling the memstore with local index batches
+ * and eventually get a RegionTooBusyException.
+ *
+ * The solution is to refuse or delay operations that acquire the write lock:
+ * 1) In case of split we just throw IOException so split won't happen but 
it will not 

phoenix git commit: PHOENIX-3111 Possible Deadlock/delay while building index, upsert select, delete rows at server(Rajeshbabu)

2016-08-01 Thread rajeshbabu
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 52f8b386d -> 28e52ab3b


PHOENIX-3111 Possible Deadlock/delay while building index, upsert select, 
delete rows at server(Rajeshbabu)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/28e52ab3
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/28e52ab3
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/28e52ab3

Branch: refs/heads/4.x-HBase-1.1
Commit: 28e52ab3bb6e58edb501f2a049bdb5747fcb91df
Parents: 52f8b38
Author: Rajeshbabu Chintaguntla 
Authored: Mon Aug 1 16:57:14 2016 +0530
Committer: Rajeshbabu Chintaguntla 
Committed: Mon Aug 1 16:57:14 2016 +0530

--
 .../UngroupedAggregateRegionObserver.java   | 167 +--
 1 file changed, 151 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/28e52ab3/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index d783670..eda59d1 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -39,12 +39,17 @@ import java.util.List;
 import java.util.Set;
 import java.util.concurrent.Callable;
 
+import javax.annotation.concurrent.GuardedBy;
+
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CoprocessorEnvironment;
+import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.RegionTooBusyException;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.HTableInterface;
@@ -88,7 +93,6 @@ import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableImpl;
 import org.apache.phoenix.schema.RowKeySchema;
 import org.apache.phoenix.schema.SortOrder;
-import org.apache.phoenix.schema.StaleRegionBoundaryCacheException;
 import org.apache.phoenix.schema.ValueSchema.Field;
 import org.apache.phoenix.schema.stats.StatisticsCollectionRunTracker;
 import org.apache.phoenix.schema.stats.StatisticsCollector;
@@ -104,7 +108,6 @@ import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.IndexUtil;
 import org.apache.phoenix.util.KeyValueUtil;
 import org.apache.phoenix.util.LogUtil;
-import org.apache.phoenix.util.MetaDataUtil;
 import org.apache.phoenix.util.ScanUtil;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.ServerUtil;
@@ -135,6 +138,37 @@ public class UngroupedAggregateRegionObserver extends BaseScannerRegionObserver
 public static final String DELETE_CQ = "DeleteCQ";
 public static final String DELETE_CF = "DeleteCF";
 public static final String EMPTY_CF = "EmptyCF";
+/**
+ * This lock synchronizes access to the
+ * {@link UngroupedAggregateRegionObserver#scansReferenceCount} and
+ * {@link UngroupedAggregateRegionObserver#isRegionClosing} variables, which are
+ * used to avoid a possible deadlock in the following scenario:
+ * 1. We acquire the read lock when we start writing local indexes, deletes, etc.
+ * 2. When the memstore reaches its threshold, flushes happen. Since they use the
+ * read (shared) lock, they proceed without any problem until someone tries to
+ * obtain the write lock.
+ * 3. At some point we decide to split/bulkload/close and try to acquire the
+ * write lock.
+ * 4. From that moment on, all attempts to get the read lock are blocked, i.e. no
+ * more flushes happen. But we keep filling the memstore with local index batches
+ * and eventually get a RegionTooBusyException.
+ *
+ * The solution is to refuse or delay operations that acquire the write lock:
+ * 1) In case of a split, we just throw an IOException, so the split does not
+ * happen; this causes no harm.
+ * 2) In case of a bulkload, we fail it by throwing an exception.
+ * 3) In case of a region close by balancer/move, we wait before closing the
+ * region and fail any query that writes after reading.
+ *
+ * See PHOENIX-3111 for more info.
+ */
+
+private final Object lock = new Object();
+/**
+ * To maintain the number of scans used for create index, delete and 
upsert 

phoenix git commit: PHOENIX-3111 Possible Deadlock/delay while building index, upsert select, delete rows at server(Rajeshbabu)

2016-08-01 Thread rajeshbabu
Repository: phoenix
Updated Branches:
  refs/heads/master 3251ac58a -> 7a27282f2


PHOENIX-3111 Possible Deadlock/delay while building index, upsert select, 
delete rows at server(Rajeshbabu)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/7a27282f
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/7a27282f
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/7a27282f

Branch: refs/heads/master
Commit: 7a27282f237ff0fdcbcfc06cbd6062a764703b20
Parents: 3251ac5
Author: Rajeshbabu Chintaguntla 
Authored: Mon Aug 1 16:49:18 2016 +0530
Committer: Rajeshbabu Chintaguntla 
Committed: Mon Aug 1 16:49:18 2016 +0530

--
 .../UngroupedAggregateRegionObserver.java   | 167 +--
 1 file changed, 151 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/7a27282f/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index d783670..eda59d1 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -39,12 +39,17 @@ import java.util.List;
 import java.util.Set;
 import java.util.concurrent.Callable;
 
+import javax.annotation.concurrent.GuardedBy;
+
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CoprocessorEnvironment;
+import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.RegionTooBusyException;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.HTableInterface;
@@ -88,7 +93,6 @@ import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableImpl;
 import org.apache.phoenix.schema.RowKeySchema;
 import org.apache.phoenix.schema.SortOrder;
-import org.apache.phoenix.schema.StaleRegionBoundaryCacheException;
 import org.apache.phoenix.schema.ValueSchema.Field;
 import org.apache.phoenix.schema.stats.StatisticsCollectionRunTracker;
 import org.apache.phoenix.schema.stats.StatisticsCollector;
@@ -104,7 +108,6 @@ import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.IndexUtil;
 import org.apache.phoenix.util.KeyValueUtil;
 import org.apache.phoenix.util.LogUtil;
-import org.apache.phoenix.util.MetaDataUtil;
 import org.apache.phoenix.util.ScanUtil;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.ServerUtil;
@@ -135,6 +138,37 @@ public class UngroupedAggregateRegionObserver extends BaseScannerRegionObserver
 public static final String DELETE_CQ = "DeleteCQ";
 public static final String DELETE_CF = "DeleteCF";
 public static final String EMPTY_CF = "EmptyCF";
+/**
+ * This lock synchronizes access to the
+ * {@link UngroupedAggregateRegionObserver#scansReferenceCount} and
+ * {@link UngroupedAggregateRegionObserver#isRegionClosing} variables, which are
+ * used to avoid a possible deadlock in the following scenario:
+ * 1. We acquire the read lock when we start writing local indexes, deletes, etc.
+ * 2. When the memstore reaches its threshold, flushes happen. Since they use the
+ * read (shared) lock, they proceed without any problem until someone tries to
+ * obtain the write lock.
+ * 3. At some point we decide to split/bulkload/close and try to acquire the
+ * write lock.
+ * 4. From that moment on, all attempts to get the read lock are blocked, i.e. no
+ * more flushes happen. But we keep filling the memstore with local index batches
+ * and eventually get a RegionTooBusyException.
+ *
+ * The solution is to refuse or delay operations that acquire the write lock:
+ * 1) In case of a split, we just throw an IOException, so the split does not
+ * happen; this causes no harm.
+ * 2) In case of a bulkload, we fail it by throwing an exception.
+ * 3) In case of a region close by balancer/move, we wait before closing the
+ * region and fail any query that writes after reading.
+ *
+ * See PHOENIX-3111 for more info.
+ */
+
+private final Object lock = new Object();
+/**
+ * To maintain the number of scans used for create index, delete and 
upsert select
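
The synchronization scheme described in the patch's Javadoc can be sketched in isolation as follows. This is a minimal illustration, not the committed code: the field names (`scansReferenceCount`, `isRegionClosing`, `lock`) mirror the patch, but the `RegionScanGuard` class and its method names are hypothetical, and the wait timeout is arbitrary.

```java
import java.io.IOException;

// Sketch of the guard: server-side write scans bump a reference count
// under a shared monitor; split refuses while scans are active, and
// close marks the region closing and waits for in-flight scans to drain.
class RegionScanGuard {
    private final Object lock = new Object();
    private int scansReferenceCount = 0;   // active index/delete/upsert scans
    private boolean isRegionClosing = false;

    void startScan() throws IOException {
        synchronized (lock) {
            if (isRegionClosing) {
                // Fail the query rather than write into a closing region.
                throw new IOException("Region is closing");
            }
            scansReferenceCount++;
        }
    }

    void finishScan() {
        synchronized (lock) {
            scansReferenceCount--;
            lock.notifyAll();              // wake a pending close
        }
    }

    // Split path: refuse instead of blocking, so flushes can continue.
    void checkSplit() throws IOException {
        synchronized (lock) {
            if (scansReferenceCount > 0) {
                throw new IOException("Active scans; retry split later");
            }
        }
    }

    // Close path: mark the region closing, then wait for scans to drain.
    void waitForScansOnClose() throws InterruptedException {
        synchronized (lock) {
            isRegionClosing = true;
            while (scansReferenceCount > 0) {
                lock.wait(1000);
            }
        }
    }
}
```

The key design point, per the Javadoc: the split path throws rather than waits, so the write lock is never requested while the read lock is held by flushes, and the memstore can always drain.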