Build failed in Jenkins: Phoenix | Master #1431

2016-10-03 Thread Apache Jenkins Server
See 

Changes:

[jamestaylor] PHOENIX-3253 Make changes to tests to support method level

--
[...truncated 743851 lines...]

Results :

Tests run: 0, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test 
(ParallelStatsDisabledTest) @ phoenix-flume ---

---
 T E S T S
---

Results :

Tests run: 0, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test 
(ClientManagedTimeTests) @ phoenix-flume ---

---
 T E S T S
---

Results :

Tests run: 0, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test 
(HBaseManagedTimeTests) @ phoenix-flume ---

---
 T E S T S
---
Running org.apache.phoenix.flume.RegexEventSerializerIT
Running org.apache.phoenix.flume.PhoenixSinkIT
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.374 sec - in 
org.apache.phoenix.flume.PhoenixSinkIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.637 sec - in 
org.apache.phoenix.flume.RegexEventSerializerIT

Results :

Tests run: 15, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test 
(NeedTheirOwnClusterTests) @ phoenix-flume ---

---
 T E S T S
---

Results :

Tests run: 0, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:verify (ParallelStatsEnabledTest) @ 
phoenix-flume ---
[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:verify (ParallelStatsDisabledTest) @ 
phoenix-flume ---
[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:verify (ClientManagedTimeTests) @ 
phoenix-flume ---
[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:verify (HBaseManagedTimeTests) @ 
phoenix-flume ---
[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:verify (NeedTheirOwnClusterTests) @ 
phoenix-flume ---
[INFO] 
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ phoenix-flume 
---
[INFO] Installing 

 to 
/home/jenkins/.m2/repository/org/apache/phoenix/phoenix-flume/4.9.0-HBase-1.2-SNAPSHOT/phoenix-flume-4.9.0-HBase-1.2-SNAPSHOT.jar
[INFO] Installing 
 to 
/home/jenkins/.m2/repository/org/apache/phoenix/phoenix-flume/4.9.0-HBase-1.2-SNAPSHOT/phoenix-flume-4.9.0-HBase-1.2-SNAPSHOT.pom
[INFO] Installing 

 to 
/home/jenkins/.m2/repository/org/apache/phoenix/phoenix-flume/4.9.0-HBase-1.2-SNAPSHOT/phoenix-flume-4.9.0-HBase-1.2-SNAPSHOT-sources.jar
[INFO] Installing 

 to 
/home/jenkins/.m2/repository/org/apache/phoenix/phoenix-flume/4.9.0-HBase-1.2-SNAPSHOT/phoenix-flume-4.9.0-HBase-1.2-SNAPSHOT-tests.jar
[INFO] 
[INFO] 
[INFO] Building Phoenix - Pig 4.9.0-HBase-1.2-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ phoenix-pig ---
[INFO] Deleting 

[INFO] 
[INFO] --- maven-checkstyle-plugin:2.13:check (validate) @ phoenix-pig ---
[INFO] 
[INFO] --- build-helper-maven-plugin:1.9.1:add-test-source (add-test-source) @ 
phoenix-pig ---
[INFO] Test Source directory: 
 added.
[INFO] 
[INFO] --- build-helper-maven-plugin:1.9.1:add-test-resource 
(add-test-resource) @ phoenix-pig ---
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ phoenix-pig ---
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
phoenix-pig ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.0:compile (default-compile) @ phoenix-pig ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 8 source files to 

Build failed in Jenkins: Phoenix-4.x-HBase-1.1 #219

2016-10-03 Thread Apache Jenkins Server
See 

Changes:

[jamestaylor] PHOENIX-3253 Make changes to tests to support method level

--
[...truncated 884 lines...]
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.658 sec - 
in org.apache.phoenix.end2end.ProductMetricsIT
Running org.apache.phoenix.end2end.SequenceBulkAllocationIT
Running org.apache.phoenix.end2end.QueryDatabaseMetaDataIT
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 303.725 sec - 
in org.apache.phoenix.end2end.CreateTableIT
Running org.apache.phoenix.end2end.SequenceIT
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.25 sec - in 
org.apache.phoenix.end2end.SequenceBulkAllocationIT
Running org.apache.phoenix.end2end.ScanQueryIT
Tests run: 126, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 174.293 sec - 
in org.apache.phoenix.end2end.QueryIT
Running org.apache.phoenix.end2end.ToNumberFunctionIT
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.709 sec - 
in org.apache.phoenix.end2end.ToNumberFunctionIT
Running org.apache.phoenix.end2end.TruncateFunctionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.159 sec - in 
org.apache.phoenix.end2end.TruncateFunctionIT
Running org.apache.phoenix.end2end.UpsertSelectIT
Running org.apache.phoenix.end2end.TopNIT
Tests run: 119, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 133.315 sec - 
in org.apache.phoenix.end2end.ScanQueryIT
Running org.apache.phoenix.end2end.UpsertValuesIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.881 sec - in 
org.apache.phoenix.end2end.TopNIT
Running org.apache.phoenix.end2end.VariableLengthPKIT
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 179.487 sec - 
in org.apache.phoenix.end2end.SequenceIT
Running org.apache.phoenix.end2end.salted.SaltedTableIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.794 sec - in 
org.apache.phoenix.end2end.salted.SaltedTableIT
Running org.apache.phoenix.rpc.UpdateCacheWithScnIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.458 sec - in 
org.apache.phoenix.rpc.UpdateCacheWithScnIT
Tests run: 50, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 168.993 sec - 
in org.apache.phoenix.end2end.VariableLengthPKIT
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 422.356 sec - 
in org.apache.phoenix.end2end.RowValueConstructorIT
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 388.679 sec - 
in org.apache.phoenix.end2end.QueryDatabaseMetaDataIT
Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 393.626 sec - 
in org.apache.phoenix.end2end.UpsertValuesIT
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 480.211 sec - 
in org.apache.phoenix.end2end.UpsertSelectIT

Results :

Tests run: 1356, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test 
(HBaseManagedTimeTests) @ phoenix-core ---

---
 T E S T S
---

Results :

Tests run: 0, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test 
(NeedTheirOwnClusterTests) @ phoenix-core ---

---
 T E S T S
---
Running 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
Running org.apache.phoenix.end2end.ConnectionUtilIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.4 sec - in 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
Running org.apache.phoenix.end2end.CsvBulkLoadToolIT
Running org.apache.phoenix.end2end.CountDistinctCompressionIT
Running org.apache.phoenix.end2end.ContextClassloaderIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.921 sec - in 
org.apache.phoenix.end2end.ConnectionUtilIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.896 sec - in 
org.apache.phoenix.end2end.ContextClassloaderIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.706 sec - in 
org.apache.phoenix.end2end.CountDistinctCompressionIT
Running org.apache.phoenix.end2end.IndexExtendedIT
Running org.apache.phoenix.end2end.QueryTimeoutIT
Running org.apache.phoenix.end2end.QueryWithLimitIT
Running org.apache.phoenix.end2end.SpillableGroupByIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.445 sec - in 
org.apache.phoenix.end2end.QueryTimeoutIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.829 sec - in 
org.apache.phoenix.end2end.QueryWithLimitIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.183 sec - in 
org.apache.phoenix.end2end.SpillableGroupByIT
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 168.321 sec - 
in 

Build failed in Jenkins: Phoenix-Calcite #22

2016-10-03 Thread Apache Jenkins Server
See 

Changes:

[maryannxue] Avoid adding redundant hooks; set materialization_enabled=true by

[maryannxue] Move PhoenixPrepareImpl

--
[...truncated 108680 lines...]
type mismatch:
ref:
BOOLEAN NOT NULL
input:
BIGINT NOT NULL
at 
org.apache.phoenix.end2end.SpillableGroupByIT.testScanUri(SpillableGroupByIT.java:125)

Running org.apache.phoenix.end2end.index.MutableIndexFailureIT
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 12.641 sec <<< 
FAILURE! - in org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT
testIndexCreationDeadlockWithStats(org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT)
  Time elapsed: 9.429 sec  <<< FAILURE!
java.lang.AssertionError
at 
org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT.testIndexCreationDeadlockWithStats(ImmutableIndexWithStatsIT.java:77)

Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.007 sec <<< 
FAILURE! - in org.apache.phoenix.end2end.index.MutableIndexFailureIT
org.apache.phoenix.end2end.index.MutableIndexFailureIT  Time elapsed: 0.006 sec 
 <<< ERROR!
java.sql.SQLException: ERROR 102 (08001): Malformed connection url. 
calcite:localhost:55189:/hbase;test=true;test=true
at 
org.apache.phoenix.end2end.index.MutableIndexFailureIT.doSetup(MutableIndexFailureIT.java:115)

Running org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
Running org.apache.phoenix.end2end.index.MutableIndexReplicationIT
Running org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.012 sec <<< 
FAILURE! - in org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT
org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT  Time elapsed: 0.012 
sec  <<< ERROR!
java.sql.SQLException: ERROR 102 (08001): Malformed connection url. 
calcite:localhost:49646:/hbase;test=true;test=true
at 
org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT.doSetup(ReadOnlyIndexFailureIT.java:119)

Running 
org.apache.phoenix.hbase.index.covered.EndToEndCoveredColumnsIndexBuilderIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.97 sec - in 
org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
Running org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.01 sec <<< 
FAILURE! - in org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
org.apache.phoenix.end2end.index.txn.TxWriteFailureIT  Time elapsed: 0.009 sec  
<<< ERROR!
java.sql.SQLException: ERROR 102 (08001): Malformed connection url. 
calcite:localhost:58423:/hbase;test=true;test=true
at 
org.apache.phoenix.end2end.index.txn.TxWriteFailureIT.doSetup(TxWriteFailureIT.java:86)

Running org.apache.phoenix.execute.PartialCommitIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.016 sec <<< 
FAILURE! - in org.apache.phoenix.execute.PartialCommitIT
org.apache.phoenix.execute.PartialCommitIT  Time elapsed: 0.006 sec  <<< ERROR!
java.sql.SQLException: ERROR 102 (08001): Malformed connection url. 
calcite:localhost:50911:/hbase;test=true;test=true
at 
org.apache.phoenix.execute.PartialCommitIT.doSetup(PartialCommitIT.java:91)

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.387 sec - in 
org.apache.phoenix.end2end.index.MutableIndexReplicationIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.2 sec - in 
org.apache.phoenix.hbase.index.covered.EndToEndCoveredColumnsIndexBuilderIT
Running org.apache.phoenix.hbase.index.covered.example.EndToEndCoveredIndexingIT
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 93.102 sec - 
in org.apache.phoenix.end2end.UserDefinedFunctionsIT
Running org.apache.phoenix.hbase.index.covered.example.FailWithoutRetriesIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.221 sec - in 
org.apache.phoenix.hbase.index.covered.example.FailWithoutRetriesIT
Running 
org.apache.phoenix.hbase.index.covered.example.EndtoEndIndexingWithCompressionIT
Running org.apache.phoenix.iterate.ScannerLeaseRenewalIT
Running org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT
Running org.apache.phoenix.monitoring.PhoenixMetricsIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.3 sec <<< 
FAILURE! - in org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT
testRoundRobinBehavior(org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT)
  Time elapsed: 1.189 sec  <<< ERROR!
java.sql.SQLException: Error while executing SQL "CREATE TABLE 
TESTROUNDROBINBEHAVIOR(K VARCHAR PRIMARY KEY)": java.sql.SQLException: ERROR 
1000 (42I00): Single column primary key may not be NULL. columnName=K
at 
org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT.testRoundRobinBehavior(RoundRobinResultIteratorWithStatsIT.java:66)
Caused by: java.lang.RuntimeException: 

phoenix git commit: PHOENIX-3253 Make changes to tests to support method level parallelization

2016-10-03 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/master 0494e54de -> 34ba28e60


PHOENIX-3253 Make changes to tests to support method level parallelization


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/34ba28e6
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/34ba28e6
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/34ba28e6

Branch: refs/heads/master
Commit: 34ba28e60f66cb7b537b60c9ef04b8a26036f010
Parents: 0494e54
Author: James Taylor 
Authored: Mon Oct 3 18:02:48 2016 -0700
Committer: James Taylor 
Committed: Mon Oct 3 18:02:48 2016 -0700

--
 .../apache/phoenix/end2end/GroupByCaseIT.java   | 31 +++-
 1 file changed, 17 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/34ba28e6/phoenix-core/src/it/java/org/apache/phoenix/end2end/GroupByCaseIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/GroupByCaseIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/GroupByCaseIT.java
index 48b926a..be59fd7 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/GroupByCaseIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/GroupByCaseIT.java
@@ -451,32 +451,35 @@ public class GroupByCaseIT extends ParallelStatsDisabledIT {
     public void testGroupByWithAliasWithSameColumnName() throws SQLException {
         Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
         Connection conn = DriverManager.getConnection(getUrl(), props);
-        String ddl = "create table test3 (pk integer primary key, col integer)";
+        String tableName1 = generateUniqueName();
+        String tableName2 = generateUniqueName();
+        String tableName3 = generateUniqueName();
+        String ddl = "create table " + tableName1 + " (pk integer primary key, col integer)";
         conn.createStatement().execute(ddl);
-        ddl = "create table test4 (pk integer primary key, col integer)";
+        ddl = "create table " + tableName2 + " (pk integer primary key, col integer)";
         conn.createStatement().execute(ddl);
-        ddl = "create table test5 (notPk integer primary key, col integer)";
+        ddl = "create table " + tableName3 + " (notPk integer primary key, col integer)";
         conn.createStatement().execute(ddl);
-        conn.createStatement().execute("UPSERT INTO test3 VALUES (1,2)");
-        conn.createStatement().execute("UPSERT INTO test4 VALUES (1,2)");
-        conn.createStatement().execute("UPSERT INTO test5 VALUES (1,2)");
-        conn.createStatement().executeQuery("select test3.pk as pk from test3 group by pk");
-        conn.createStatement().executeQuery("select test3.pk as pk from test3 group by test3.pk");
-        conn.createStatement().executeQuery("select test3.pk as pk from test3 as t group by t.pk");
-        conn.createStatement().executeQuery("select test3.col as pk from test3");
+        conn.createStatement().execute("UPSERT INTO " + tableName1 + " VALUES (1,2)");
+        conn.createStatement().execute("UPSERT INTO " + tableName2 + " VALUES (1,2)");
+        conn.createStatement().execute("UPSERT INTO " + tableName3 + " VALUES (1,2)");
+        conn.createStatement().executeQuery("select " + tableName1 + ".pk as pk from " + tableName1 + " group by pk");
+        conn.createStatement().executeQuery("select " + tableName1 + ".pk as pk from " + tableName1 + " group by " + tableName1 + ".pk");
+        conn.createStatement().executeQuery("select " + tableName1 + ".pk as pk from " + tableName1 + " as t group by t.pk");
+        conn.createStatement().executeQuery("select " + tableName1 + ".col as pk from " + tableName1);
         conn.createStatement()
-                .executeQuery("select test3.pk as pk from test3 join test5 on (test3.pk=test5.notPk) group by pk");
+                .executeQuery("select " + tableName1 + ".pk as pk from " + tableName1 + " join " + tableName3 + " on (" + tableName1 + ".pk=" + tableName3 + ".notPk) group by pk");
         try {
-            conn.createStatement().executeQuery("select test3.col as pk from test3 group by pk");
+            conn.createStatement().executeQuery("select " + tableName1 + ".col as pk from " + tableName1 + " group by pk");
             fail();
         } catch (AmbiguousColumnException e) {}
         try {
-            conn.createStatement().executeQuery("select col as pk from test3 group by pk");
+            conn.createStatement().executeQuery("select col as pk from " + tableName1 + " group by pk");
             fail();
         } catch (AmbiguousColumnException e) {}
         try {
             conn.createStatement()
-
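The renaming in this patch follows a single mechanical pattern: replace hard-coded table names (test3, test4, test5) with per-test unique names, so that tests executed with method-level parallelization never race on DDL for the same table. A minimal sketch of that idea is below; the counter-based generateUniqueName here is an illustrative stand-in, not Phoenix's actual BaseTest implementation:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the idea behind PHOENIX-3253: every test method derives its own
// table name, so concurrently running tests never CREATE/DROP the same table.
// This is a hypothetical stand-in for Phoenix's generateUniqueName() helper.
public class UniqueNameSketch {
    private static final AtomicInteger COUNTER = new AtomicInteger(0);

    // Returns a name such as "T000001" that is unique within this JVM run.
    public static String generateUniqueName() {
        return String.format("T%06d", COUNTER.incrementAndGet());
    }

    public static void main(String[] args) {
        String t1 = generateUniqueName();
        String t2 = generateUniqueName();
        // Two tests asking for a table name must never receive the same one.
        if (t1.equals(t2)) {
            throw new AssertionError("names collided");
        }
        String ddl = "create table " + t1 + " (pk integer primary key, col integer)";
        System.out.println(ddl);
        // prints: create table T000001 (pk integer primary key, col integer)
    }
}
```

The advantage over shared fixed names is that no cleanup coordination is needed between tests: each method owns its tables, so parallel forks cannot observe each other's DDL.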

Build failed in Jenkins: Phoenix | Master #1430

2016-10-03 Thread Apache Jenkins Server
See 

Changes:

[jamestaylor] PHOENIX-3253 Make changes to tests to support method level

--
[...truncated 733097 lines...]
2016-10-03 22:31:49,469 DEBUG 
[RpcServer.reader=1,bindAddress=pietas.apache.org,port=37335] 
org.apache.hadoop.hbase.ipc.RpcServer$Listener(857): 
RpcServer.listener,port=37335: DISCONNECTING client 67.195.81.190:39554 because 
read count=-1. Number of active connections: 1
2016-10-03 22:31:49,477 INFO  [RS:0;pietas:59494] 
org.apache.hadoop.hbase.regionserver.Leases(146): RS:0;pietas:59494 closing 
leases
2016-10-03 22:31:49,477 INFO  [RS:0;pietas:59494] 
org.apache.hadoop.hbase.regionserver.Leases(149): RS:0;pietas:59494 closed 
leases
2016-10-03 22:31:49,477 INFO  [RS:0;pietas:59494] 
org.apache.hadoop.hbase.ChoreService(323): Chore service for: 
pietas.apache.org,59494,1475533259465 had [[ScheduledChore: Name: 
MovedRegionsCleaner for region pietas.apache.org,59494,1475533259465 Period: 
12 Unit: MILLISECONDS]] on shutdown
2016-10-03 22:31:49,481 DEBUG [main-EventThread] 
org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): 
regionserver:59494-0x1578ca2b8fa0001, quorum=localhost:51458, baseZNode=/hbase 
Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/hbase/replication/rs/pietas.apache.org,59494,1475533259465
2016-10-03 22:31:49,482 INFO  [RS:0;pietas:59494] 
org.apache.hadoop.hbase.ipc.RpcServer(2277): Stopping server on 59494
2016-10-03 22:31:49,482 INFO  [RpcServer.listener,port=59494] 
org.apache.hadoop.hbase.ipc.RpcServer$Listener(761): 
RpcServer.listener,port=59494: stopping
2016-10-03 22:31:49,484 INFO  [RpcServer.responder] 
org.apache.hadoop.hbase.ipc.RpcServer$Responder(1003): RpcServer.responder: 
stopped
2016-10-03 22:31:49,484 INFO  [RpcServer.responder] 
org.apache.hadoop.hbase.ipc.RpcServer$Responder(906): RpcServer.responder: 
stopping
2016-10-03 22:31:49,487 DEBUG [main-EventThread] 
org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): 
regionserver:59494-0x1578ca2b8fa0001, quorum=localhost:51458, baseZNode=/hbase 
Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/hbase/rs/pietas.apache.org,59494,1475533259465
2016-10-03 22:31:49,487 DEBUG [main-EventThread] 
org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): 
regionserver:59494-0x1578ca2b8fa0001, quorum=localhost:51458, baseZNode=/hbase 
Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase/rs
2016-10-03 22:31:49,488 DEBUG [main-EventThread] 
org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): 
master:37335-0x1578ca2b8fa, quorum=localhost:51458, baseZNode=/hbase 
Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/hbase/rs/pietas.apache.org,59494,1475533259465
2016-10-03 22:31:49,489 INFO  [main-EventThread] 
org.apache.hadoop.hbase.zookeeper.RegionServerTracker(118): RegionServer 
ephemeral node deleted, processing expiration 
[pietas.apache.org,59494,1475533259465]
2016-10-03 22:31:49,489 INFO  [RS:0;pietas:59494] 
org.apache.hadoop.hbase.regionserver.HRegionServer(1104): stopping server 
pietas.apache.org,59494,1475533259465; zookeeper connection closed.
2016-10-03 22:31:49,489 INFO  [RS:0;pietas:59494] 
org.apache.hadoop.hbase.regionserver.HRegionServer(1107): RS:0;pietas:59494 
exiting
2016-10-03 22:31:49,490 INFO  [main-EventThread] 
org.apache.hadoop.hbase.master.ServerManager(612): Cluster shutdown set; 
pietas.apache.org,59494,1475533259465 expired; onlineServers=0
2016-10-03 22:31:49,490 DEBUG [M:0;pietas:37335] 
org.apache.hadoop.hbase.master.HMaster(1126): Stopping service threads
2016-10-03 22:31:49,490 DEBUG [main-EventThread] 
org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): 
master:37335-0x1578ca2b8fa, quorum=localhost:51458, baseZNode=/hbase 
Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase/rs
2016-10-03 22:31:49,490 INFO  [Shutdown of 
org.apache.hadoop.hbase.fs.HFileSystem@1550a26b] 
org.apache.hadoop.hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(191): 
Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1550a26b
2016-10-03 22:31:49,491 INFO  [main] 
org.apache.hadoop.hbase.util.JVMClusterUtil(317): Shutdown of 1 master(s) and 1 
regionserver(s) complete
2016-10-03 22:31:49,492 DEBUG [main-EventThread] 
org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): 
master:37335-0x1578ca2b8fa, quorum=localhost:51458, baseZNode=/hbase 
Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/hbase/master
2016-10-03 22:31:49,492 INFO  [M:0;pietas:37335] 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation(1709):
 Closing zookeeper sessionid=0x1578ca2b8fa0004
2016-10-03 22:31:49,493 DEBUG [main-EventThread] 
org.apache.hadoop.hbase.zookeeper.ZKUtil(370): master:37335-0x1578ca2b8fa, 
quorum=localhost:51458, baseZNode=/hbase Set watcher on znode that does not yet 
exist, 

Build failed in Jenkins: Phoenix-Calcite #21

2016-10-03 Thread Apache Jenkins Server
See 

Changes:

[maryannxue] PHOENIX-2827 Support OFFSET in Calcite-Phoenix (Eric Lomore)

[maryannxue] Remove warnings under calcite packages

--
[...truncated 112002 lines...]
org.apache.phoenix.end2end.index.MutableIndexFailureIT  Time elapsed: 0.004 sec 
 <<< ERROR!
java.sql.SQLException: ERROR 102 (08001): Malformed connection url. 
calcite:localhost:49358:/hbase;test=true;test=true
at 
org.apache.phoenix.end2end.index.MutableIndexFailureIT.doSetup(MutableIndexFailureIT.java:115)

Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 17.826 sec <<< 
FAILURE! - in org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT
testIndexCreationDeadlockWithStats(org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT)
  Time elapsed: 8.398 sec  <<< FAILURE!
java.lang.AssertionError
at 
org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT.testIndexCreationDeadlockWithStats(ImmutableIndexWithStatsIT.java:77)

Running org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
Running org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.007 sec <<< 
FAILURE! - in org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT
org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT  Time elapsed: 0.006 
sec  <<< ERROR!
java.sql.SQLException: ERROR 102 (08001): Malformed connection url. 
calcite:localhost:63712:/hbase;test=true;test=true
at 
org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT.doSetup(ReadOnlyIndexFailureIT.java:119)

Running org.apache.phoenix.end2end.index.MutableIndexReplicationIT
Running 
org.apache.phoenix.hbase.index.covered.EndToEndCoveredColumnsIndexBuilderIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.323 sec - in 
org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
Running org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.005 sec <<< 
FAILURE! - in org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
org.apache.phoenix.end2end.index.txn.TxWriteFailureIT  Time elapsed: 0.004 sec  
<<< ERROR!
java.sql.SQLException: ERROR 102 (08001): Malformed connection url. 
calcite:localhost:62853:/hbase;test=true;test=true
at 
org.apache.phoenix.end2end.index.txn.TxWriteFailureIT.doSetup(TxWriteFailureIT.java:86)

Running org.apache.phoenix.execute.PartialCommitIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.007 sec <<< 
FAILURE! - in org.apache.phoenix.execute.PartialCommitIT
org.apache.phoenix.execute.PartialCommitIT  Time elapsed: 0.006 sec  <<< ERROR!
java.sql.SQLException: ERROR 102 (08001): Malformed connection url. 
calcite:localhost:57528:/hbase;test=true;test=true
at 
org.apache.phoenix.execute.PartialCommitIT.doSetup(PartialCommitIT.java:91)

Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.147 sec - in 
org.apache.phoenix.hbase.index.covered.EndToEndCoveredColumnsIndexBuilderIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.404 sec - in 
org.apache.phoenix.end2end.index.MutableIndexReplicationIT
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 117.693 sec - 
in org.apache.phoenix.end2end.UserDefinedFunctionsIT
Running org.apache.phoenix.hbase.index.covered.example.FailWithoutRetriesIT
Running org.apache.phoenix.hbase.index.covered.example.EndToEndCoveredIndexingIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.475 sec - in 
org.apache.phoenix.hbase.index.covered.example.FailWithoutRetriesIT
Running 
org.apache.phoenix.hbase.index.covered.example.EndtoEndIndexingWithCompressionIT
Running org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT
Running org.apache.phoenix.iterate.ScannerLeaseRenewalIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 8.719 sec <<< 
FAILURE! - in org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT
testRoundRobinBehavior(org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT)
  Time elapsed: 1.045 sec  <<< ERROR!
java.sql.SQLException: Error while executing SQL "CREATE TABLE 
TESTROUNDROBINBEHAVIOR(K VARCHAR PRIMARY KEY)": java.sql.SQLException: ERROR 
1000 (42I00): Single column primary key may not be NULL. columnName=K
at 
org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT.testRoundRobinBehavior(RoundRobinResultIteratorWithStatsIT.java:66)
Caused by: java.lang.RuntimeException: java.sql.SQLException: ERROR 1000 
(42I00): Single column primary key may not be NULL. columnName=K
at 
org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT.testRoundRobinBehavior(RoundRobinResultIteratorWithStatsIT.java:66)
Caused by: java.sql.SQLException: ERROR 1000 (42I00): Single column primary key 
may not be NULL. columnName=K
at 

phoenix git commit: Move PhoenixPrepareImpl

2016-10-03 Thread maryannxue
Repository: phoenix
Updated Branches:
  refs/heads/calcite 3889209d1 -> 66f20ca9d


Move PhoenixPrepareImpl


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/66f20ca9
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/66f20ca9
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/66f20ca9

Branch: refs/heads/calcite
Commit: 66f20ca9dbc2242d4b0343e181aa092ef0f6a4f0
Parents: 3889209
Author: maryannxue 
Authored: Mon Oct 3 15:21:10 2016 -0700
Committer: maryannxue 
Committed: Mon Oct 3 15:21:10 2016 -0700

--
 .../phoenix/calcite/PhoenixPrepareImpl.java | 409 ++
 .../jdbc/PhoenixCalciteEmbeddedDriver.java  |   1 +
 .../calcite/jdbc/PhoenixPrepareImpl.java| 410 ---
 .../calcite/ExpressionFactoryValuesTest.java|   1 -
 4 files changed, 410 insertions(+), 411 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/66f20ca9/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixPrepareImpl.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixPrepareImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixPrepareImpl.java
new file mode 100644
index 000..2d6a84c
--- /dev/null
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixPrepareImpl.java
@@ -0,0 +1,409 @@
+package org.apache.phoenix.calcite;
+
+import java.sql.SQLException;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.calcite.adapter.enumerable.EnumerableRules;
+import org.apache.calcite.jdbc.CalcitePrepare;
+import org.apache.calcite.plan.RelOptCluster;
+import org.apache.calcite.plan.RelOptCostFactory;
+import org.apache.calcite.plan.RelOptPlanner;
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.prepare.CalcitePrepareImpl;
+import org.apache.calcite.rel.logical.LogicalSort;
+import org.apache.calcite.rel.rules.JoinCommuteRule;
+import org.apache.calcite.rel.rules.SortProjectTransposeRule;
+import org.apache.calcite.rel.rules.SortUnionTransposeRule;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.calcite.sql.SqlColumnDefInPkConstraintNode;
+import org.apache.calcite.sql.SqlColumnDefNode;
+import org.apache.calcite.sql.SqlIdentifier;
+import org.apache.calcite.sql.SqlIndexExpressionNode;
+import org.apache.calcite.sql.SqlKind;
+import org.apache.calcite.sql.SqlLiteral;
+import org.apache.calcite.sql.SqlNode;
+import org.apache.calcite.sql.SqlNodeList;
+import org.apache.calcite.sql.SqlOptionNode;
+import org.apache.calcite.sql.parser.SqlParser;
+import org.apache.calcite.sql.parser.SqlParserPos;
+import org.apache.calcite.sql.parser.SqlParserUtil;
+import org.apache.calcite.util.NlsString;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.phoenix.calcite.parse.SqlCreateIndex;
+import org.apache.phoenix.calcite.parse.SqlCreateSequence;
+import org.apache.phoenix.calcite.parse.SqlCreateTable;
+import org.apache.phoenix.calcite.parse.SqlDropIndex;
+import org.apache.phoenix.calcite.parse.SqlDropSequence;
+import org.apache.phoenix.calcite.parse.SqlDropTable;
+import org.apache.phoenix.calcite.parse.SqlUpdateStatistics;
+import org.apache.phoenix.calcite.parser.PhoenixParserImpl;
+import org.apache.phoenix.calcite.rel.PhoenixRel;
+import org.apache.phoenix.calcite.rel.PhoenixServerProject;
+import org.apache.phoenix.calcite.rel.PhoenixTemporarySort;
+import org.apache.phoenix.calcite.rules.PhoenixFilterScanMergeRule;
+import org.apache.phoenix.calcite.rules.PhoenixForwardTableScanRule;
+import org.apache.phoenix.calcite.rules.PhoenixJoinSingleValueAggregateMergeRule;
+import org.apache.phoenix.calcite.rules.PhoenixMergeSortUnionRule;
+import org.apache.phoenix.calcite.rules.PhoenixOrderedAggregateRule;
+import org.apache.phoenix.calcite.rules.PhoenixReverseTableScanRule;
+import org.apache.phoenix.calcite.rules.PhoenixSortServerJoinTransposeRule;
+import org.apache.phoenix.calcite.rules.PhoenixTableScanColumnRefRule;
+import org.apache.phoenix.compile.CreateIndexCompiler;
+import org.apache.phoenix.compile.CreateSequenceCompiler;
+import org.apache.phoenix.compile.CreateTableCompiler;
+import org.apache.phoenix.compile.MutationPlan;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixStatement;
+import org.apache.phoenix.jdbc.PhoenixStatement.Operation;
+import org.apache.phoenix.parse.ColumnDef;
+import org.apache.phoenix.parse.ColumnDefInPkConstraint;
+import org.apache.phoenix.parse.ColumnName;
+import org.apache.phoenix.parse.CreateIndexStatement;
+import org.apache.phoenix.parse.CreateSequenceStatement;
+import 

phoenix git commit: Avoid adding redundant hooks; set materialization_enabled=true by default

2016-10-03 Thread maryannxue
Repository: phoenix
Updated Branches:
  refs/heads/calcite 14c217bf8 -> 3889209d1


Avoid adding redundant hooks; set materialization_enabled=true by default


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/3889209d
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/3889209d
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/3889209d

Branch: refs/heads/calcite
Commit: 3889209d13f2154b96ee7c5b4e0017c324790899
Parents: 14c217b
Author: maryannxue 
Authored: Mon Oct 3 15:16:16 2016 -0700
Committer: maryannxue 
Committed: Mon Oct 3 15:16:16 2016 -0700

--
 .../calcite/jdbc/PhoenixCalciteFactory.java | 56 ++--
 .../jdbc/PhoenixCalciteEmbeddedDriver.java  |  4 ++
 .../calcite/jdbc/PhoenixPrepareImpl.java| 39 --
 3 files changed, 55 insertions(+), 44 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/3889209d/phoenix-core/src/main/java/org/apache/calcite/jdbc/PhoenixCalciteFactory.java
--
diff --git a/phoenix-core/src/main/java/org/apache/calcite/jdbc/PhoenixCalciteFactory.java b/phoenix-core/src/main/java/org/apache/calcite/jdbc/PhoenixCalciteFactory.java
index 014891b..6b00f04 100644
--- a/phoenix-core/src/main/java/org/apache/calcite/jdbc/PhoenixCalciteFactory.java
+++ b/phoenix-core/src/main/java/org/apache/calcite/jdbc/PhoenixCalciteFactory.java
@@ -35,14 +35,24 @@ import org.apache.calcite.jdbc.CalciteFactory;
 import org.apache.calcite.jdbc.Driver;
 import org.apache.calcite.linq4j.Enumerable;
 import org.apache.calcite.linq4j.Ord;
+import org.apache.calcite.prepare.Prepare.Materialization;
+import org.apache.calcite.rel.RelNode;
+import org.apache.calcite.runtime.Hook;
+import org.apache.calcite.runtime.Hook.Closeable;
 import org.apache.calcite.schema.SchemaPlus;
+import org.apache.calcite.tools.Program;
+import org.apache.calcite.tools.Programs;
+import org.apache.calcite.util.Holder;
 import org.apache.phoenix.calcite.PhoenixSchema;
+import org.apache.phoenix.calcite.rel.PhoenixRel;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.exception.SQLExceptionInfo;
 import org.apache.phoenix.execute.RuntimeContext;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 
+import com.google.common.base.Function;
 import com.google.common.collect.ImmutableList;
+import com.google.common.collect.Lists;
 import com.google.common.collect.Maps;
 
 public class PhoenixCalciteFactory extends CalciteFactory {
@@ -59,7 +69,8 @@ public class PhoenixCalciteFactory extends CalciteFactory {
 AvaticaFactory factory, String url, Properties info,
 CalciteSchema rootSchema, JavaTypeFactory typeFactory) {
 return new PhoenixCalciteConnection(
-(Driver) driver, factory, url, info, rootSchema, typeFactory);
+(Driver) driver, factory, url, info,
+CalciteSchema.createRootSchema(true, false), typeFactory);
 }
 
 @Override
@@ -108,14 +119,46 @@ public class PhoenixCalciteFactory extends CalciteFactory {
 }
 
private static class PhoenixCalciteConnection extends CalciteConnectionImpl {
-final Map runtimeContextMap =
+private final Map runtimeContextMap =
 new ConcurrentHashMap();
+private final List hooks = Lists.newArrayList();
 
public PhoenixCalciteConnection(Driver driver, AvaticaFactory factory, String url,
-Properties info, CalciteSchema rootSchema,
+Properties info, final CalciteSchema rootSchema,
 JavaTypeFactory typeFactory) {
-super(driver, factory, url, info,
-CalciteSchema.createRootSchema(true, false), typeFactory);
+super(driver, factory, url, info, rootSchema, typeFactory);
+
+hooks.add(Hook.PARSE_TREE.add(new Function() {
+@Override
+public Object apply(Object[] input) {
+// TODO Auto-generated method stub
+return null;
+}
+}));
+
+hooks.add(Hook.TRIMMED.add(new Function() {
+@Override
+public Object apply(RelNode root) {
+for (CalciteSchema schema : rootSchema.getSubSchemaMap().values()) {
+if (schema.schema instanceof PhoenixSchema) {
+((PhoenixSchema) schema.schema).defineIndexesAsMaterializations();
+for (CalciteSchema subSchema : 

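The pattern in the diff above (register hooks once in the connection constructor, keep the returned handles, and release them when the connection closes, rather than re-adding hooks per statement) can be sketched with a JDK-only stand-in. `DemoHook` below is a made-up miniature of Calcite's `Hook`/`Hook.Closeable`, not the real API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// DemoHook is an illustrative stand-in for Calcite's Hook: add() registers a
// listener and returns a handle that unregisters it when closed.
class DemoHook {
    private final List<Function<Object, Object>> listeners = new ArrayList<>();

    AutoCloseable add(Function<Object, Object> fn) {
        listeners.add(fn);
        return () -> listeners.remove(fn);
    }

    int listenerCount() {
        return listeners.size();
    }
}

public class HookLifecycleSketch {
    public static void main(String[] args) throws Exception {
        DemoHook parseTree = new DemoHook();

        // Register once (as the patched constructor does) and keep the handles.
        List<AutoCloseable> hooks = new ArrayList<>();
        hooks.add(parseTree.add(input -> null));
        System.out.println(parseTree.listenerCount()); // one live listener

        // On connection close, release every handle so listeners do not leak
        // and the same hook is never registered twice.
        for (AutoCloseable h : hooks) {
            h.close();
        }
        System.out.println(parseTree.listenerCount()); // back to zero
    }
}
```

Keeping the handles in a field makes the registration idempotent per connection, which is what "avoid adding redundant hooks" refers to in the commit message.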
phoenix git commit: PHOENIX-3253 Make changes to tests to support method level parallelization

2016-10-03 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 1cce661b8 -> 08d9c7154


PHOENIX-3253 Make changes to tests to support method level parallelization


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/08d9c715
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/08d9c715
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/08d9c715

Branch: refs/heads/4.x-HBase-0.98
Commit: 08d9c7154194de69ce688ad78357bffb8d34c92f
Parents: 1cce661
Author: James Taylor 
Authored: Mon Oct 3 13:46:53 2016 -0700
Committer: James Taylor 
Committed: Mon Oct 3 13:53:13 2016 -0700

--
 .../src/it/java/org/apache/phoenix/end2end/index/IndexIT.java| 4 
 1 file changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/08d9c715/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexIT.java
index 254046b..cb4310b 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexIT.java
@@ -230,6 +230,10 @@ public class IndexIT extends ParallelStatsDisabledIT {
testCreateIndexAfterUpsertStarted(false, 
 SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, generateUniqueName()),
 SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, generateUniqueName()));
+}
+
+@Test
+public void testCreateIndexAfterUpsertStartedTxnl() throws Exception {
 if (transactional) {
 testCreateIndexAfterUpsertStarted(true, 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, generateUniqueName()),



phoenix git commit: PHOENIX-3253 Make changes to tests to support method level parallelization

2016-10-03 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 976c97ac0 -> 73be315a2


PHOENIX-3253 Make changes to tests to support method level parallelization


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/73be315a
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/73be315a
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/73be315a

Branch: refs/heads/4.x-HBase-1.1
Commit: 73be315a2a80e8aaba9839eaa340281f92f6691f
Parents: 976c97a
Author: James Taylor 
Authored: Mon Oct 3 13:46:53 2016 -0700
Committer: James Taylor 
Committed: Mon Oct 3 13:52:05 2016 -0700

--
 .../src/it/java/org/apache/phoenix/end2end/index/IndexIT.java| 4 
 1 file changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/73be315a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexIT.java
index 254046b..cb4310b 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexIT.java
@@ -230,6 +230,10 @@ public class IndexIT extends ParallelStatsDisabledIT {
testCreateIndexAfterUpsertStarted(false, 
 SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, generateUniqueName()),
 SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, generateUniqueName()));
+}
+
+@Test
+public void testCreateIndexAfterUpsertStartedTxnl() throws Exception {
 if (transactional) {
 testCreateIndexAfterUpsertStarted(true, 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, generateUniqueName()),



phoenix git commit: PHOENIX-3253 Make changes to tests to support method level parallelization

2016-10-03 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/master 4c0aeb0d5 -> 0494e54de


PHOENIX-3253 Make changes to tests to support method level parallelization


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/0494e54d
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/0494e54d
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/0494e54d

Branch: refs/heads/master
Commit: 0494e54de2a9ff7e1115678658fb092c95602930
Parents: 4c0aeb0
Author: James Taylor 
Authored: Mon Oct 3 13:46:53 2016 -0700
Committer: James Taylor 
Committed: Mon Oct 3 13:47:38 2016 -0700

--
 .../src/it/java/org/apache/phoenix/end2end/index/IndexIT.java| 4 
 1 file changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/0494e54d/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexIT.java
index 254046b..cb4310b 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexIT.java
@@ -230,6 +230,10 @@ public class IndexIT extends ParallelStatsDisabledIT {
testCreateIndexAfterUpsertStarted(false, 
 SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, generateUniqueName()),
 SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, generateUniqueName()));
+}
+
+@Test
+public void testCreateIndexAfterUpsertStartedTxnl() throws Exception {
 if (transactional) {
 testCreateIndexAfterUpsertStarted(true, 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, generateUniqueName()),



Build failed in Jenkins: Phoenix-Calcite #20

2016-10-03 Thread Apache Jenkins Server
See 

Changes:

[maryannxue] Adapt column mapping sequence for Phoenix column family support

[maryannxue] Code refine

--
[...truncated 100207 lines...]
Running org.apache.phoenix.end2end.UserDefinedFunctionsIT
Running org.apache.phoenix.end2end.ViewIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.006 sec <<< FAILURE! - in org.apache.phoenix.end2end.ViewIT
org.apache.phoenix.end2end.ViewIT  Time elapsed: 0.005 sec  <<< ERROR!
java.sql.SQLException: ERROR 102 (08001): Malformed connection url. calcite:localhost:56322:/hbase;test=true;test=true

Running org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT
Running org.apache.phoenix.end2end.index.MutableIndexFailureIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.007 sec <<< FAILURE! - in org.apache.phoenix.end2end.index.MutableIndexFailureIT
org.apache.phoenix.end2end.index.MutableIndexFailureIT  Time elapsed: 0.006 sec  <<< ERROR!
java.sql.SQLException: ERROR 102 (08001): Malformed connection url. calcite:localhost:62303:/hbase;test=true;test=true
at org.apache.phoenix.end2end.index.MutableIndexFailureIT.doSetup(MutableIndexFailureIT.java:115)

Running org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
Running org.apache.phoenix.end2end.index.MutableIndexReplicationIT
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 18.074 sec <<< FAILURE! - in org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT
testIndexCreationDeadlockWithStats(org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT)  Time elapsed: 8.487 sec  <<< FAILURE!
java.lang.AssertionError
at org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT.testIndexCreationDeadlockWithStats(ImmutableIndexWithStatsIT.java:77)

Running org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.005 sec <<< FAILURE! - in org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT
org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT  Time elapsed: 0.005 sec  <<< ERROR!
java.sql.SQLException: ERROR 102 (08001): Malformed connection url. calcite:localhost:56178:/hbase;test=true;test=true
at org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT.doSetup(ReadOnlyIndexFailureIT.java:119)

Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.41 sec - in org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
Running org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.004 sec <<< FAILURE! - in org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
org.apache.phoenix.end2end.index.txn.TxWriteFailureIT  Time elapsed: 0.004 sec  <<< ERROR!
java.sql.SQLException: ERROR 102 (08001): Malformed connection url. calcite:localhost:63109:/hbase;test=true;test=true
at org.apache.phoenix.end2end.index.txn.TxWriteFailureIT.doSetup(TxWriteFailureIT.java:86)

Running org.apache.phoenix.hbase.index.covered.EndToEndCoveredColumnsIndexBuilderIT
Running org.apache.phoenix.execute.PartialCommitIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.006 sec <<< FAILURE! - in org.apache.phoenix.execute.PartialCommitIT
org.apache.phoenix.execute.PartialCommitIT  Time elapsed: 0.005 sec  <<< ERROR!
java.sql.SQLException: ERROR 102 (08001): Malformed connection url. calcite:localhost:57693:/hbase;test=true;test=true
at org.apache.phoenix.execute.PartialCommitIT.doSetup(PartialCommitIT.java:91)

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.263 sec - in org.apache.phoenix.end2end.index.MutableIndexReplicationIT
Running org.apache.phoenix.hbase.index.covered.example.FailWithoutRetriesIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.274 sec - in org.apache.phoenix.hbase.index.covered.EndToEndCoveredColumnsIndexBuilderIT
Running org.apache.phoenix.hbase.index.covered.example.EndToEndCoveredIndexingIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.907 sec - in org.apache.phoenix.hbase.index.covered.example.FailWithoutRetriesIT
Running org.apache.phoenix.hbase.index.covered.example.EndtoEndIndexingWithCompressionIT
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 100.776 sec - in org.apache.phoenix.end2end.UserDefinedFunctionsIT
Running org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT
Running org.apache.phoenix.iterate.ScannerLeaseRenewalIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 10.362 sec <<< FAILURE! - in org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT
testRoundRobinBehavior(org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT)  Time elapsed: 1.001 sec  <<< ERROR!
java.sql.SQLException: Error while executing SQL "CREATE TABLE TESTROUNDROBINBEHAVIOR(K VARCHAR PRIMARY KEY)": 

phoenix git commit: Remove warnings under calcite packages

2016-10-03 Thread maryannxue
Repository: phoenix
Updated Branches:
  refs/heads/calcite 4f01c91ec -> 14c217bf8


Remove warnings under calcite packages


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/14c217bf
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/14c217bf
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/14c217bf

Branch: refs/heads/calcite
Commit: 14c217bf8f173a87a5cc5f61b50351740942e84e
Parents: 4f01c91
Author: maryannxue 
Authored: Mon Oct 3 13:19:50 2016 -0700
Committer: maryannxue 
Committed: Mon Oct 3 13:19:50 2016 -0700

--
 .../src/main/java/org/apache/calcite/sql/SqlOptionNode.java| 1 -
 .../src/main/java/org/apache/phoenix/calcite/PhoenixSchema.java| 2 +-
 .../src/main/java/org/apache/phoenix/calcite/PhoenixTable.java | 2 --
 3 files changed, 1 insertion(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/14c217bf/phoenix-core/src/main/java/org/apache/calcite/sql/SqlOptionNode.java
--
diff --git a/phoenix-core/src/main/java/org/apache/calcite/sql/SqlOptionNode.java b/phoenix-core/src/main/java/org/apache/calcite/sql/SqlOptionNode.java
index 47654f3..bcec18a 100644
--- a/phoenix-core/src/main/java/org/apache/calcite/sql/SqlOptionNode.java
+++ b/phoenix-core/src/main/java/org/apache/calcite/sql/SqlOptionNode.java
@@ -26,7 +26,6 @@ import org.apache.phoenix.calcite.CalciteUtils;
 import org.apache.phoenix.calcite.rel.PhoenixRelImplementor;
 import org.apache.phoenix.calcite.rel.PhoenixRelImplementorImpl;
 import org.apache.phoenix.execute.RuntimeContext;
-import org.apache.phoenix.execute.RuntimeContextImpl;
 
 public class SqlOptionNode extends SqlNode {
 public final String familyName;

http://git-wip-us.apache.org/repos/asf/phoenix/blob/14c217bf/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixSchema.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixSchema.java b/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixSchema.java
index 46a3053..8606588 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixSchema.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixSchema.java
@@ -165,7 +165,7 @@ public class PhoenixSchema implements Schema {
 this.schemaName == null ? schema : parentSchema;
 func = ViewTable.viewMacro(schema, viewSql,
 CalciteSchema.from(viewSqlSchema).path(null),
-pTable.getViewType() == ViewType.UPDATABLE);
+null, pTable.getViewType() == ViewType.UPDATABLE);
 views.put(name, func);
 viewTables.add(tableRef);
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/14c217bf/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixTable.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixTable.java b/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixTable.java
index fb9f9c0..fe8d6de 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixTable.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixTable.java
@@ -18,7 +18,6 @@ import org.apache.calcite.rel.type.RelDataTypeFactory;
 import org.apache.calcite.schema.Statistic;
 import org.apache.calcite.schema.TranslatableTable;
 import org.apache.calcite.schema.impl.AbstractTable;
-import org.apache.calcite.sql.type.SqlTypeName;
 import org.apache.calcite.util.ImmutableBitSet;
 import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.client.Scan;
@@ -43,7 +42,6 @@ import org.apache.phoenix.schema.TableRef;
 import org.apache.phoenix.schema.PTable.IndexType;
 import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.stats.StatisticsUtil;
-import org.apache.phoenix.schema.types.PDataType;
 import org.apache.phoenix.util.SchemaUtil;
 import com.google.common.base.Preconditions;
 import com.google.common.collect.ImmutableList;



phoenix git commit: PHOENIX-2827 Support OFFSET in Calcite-Phoenix (Eric Lomore)

2016-10-03 Thread maryannxue
Repository: phoenix
Updated Branches:
  refs/heads/calcite 1193e5afe -> 4f01c91ec


PHOENIX-2827 Support OFFSET in Calcite-Phoenix (Eric Lomore)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/4f01c91e
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/4f01c91e
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/4f01c91e

Branch: refs/heads/calcite
Commit: 4f01c91ec78988058fbed1f31a51e2bf06db04a8
Parents: 1193e5a
Author: maryannxue 
Authored: Mon Oct 3 13:09:26 2016 -0700
Committer: maryannxue 
Committed: Mon Oct 3 13:10:18 2016 -0700

--
 .../org/apache/phoenix/calcite/CalciteIT.java   | 259 ++-
 .../phoenix/calcite/rel/PhoenixLimit.java   |  17 +-
 .../rel/PhoenixToEnumerableConverter.java   |   4 +-
 .../calcite/rules/PhoenixConverterRules.java|   3 +-
 .../phoenix/compile/ListJarsQueryPlan.java  |   2 +-
 .../org/apache/phoenix/compile/QueryPlan.java   |  11 +-
 .../apache/phoenix/compile/TraceQueryPlan.java  |   2 +-
 .../apache/phoenix/execute/AggregatePlan.java   |  17 +-
 .../phoenix/execute/ClientAggregatePlan.java|  15 +-
 .../apache/phoenix/execute/ClientScanPlan.java  |  15 +-
 .../apache/phoenix/execute/CorrelatePlan.java   |  12 +-
 .../phoenix/execute/DegenerateQueryPlan.java|   2 +-
 .../phoenix/execute/DelegateQueryPlan.java  |  12 +-
 .../apache/phoenix/execute/HashJoinPlan.java|   4 +-
 .../execute/LiteralResultIterationPlan.java |  16 +-
 .../org/apache/phoenix/execute/ScanPlan.java|  16 +-
 .../phoenix/execute/SortMergeJoinPlan.java  |  10 +-
 .../phoenix/execute/TupleProjectionPlan.java|   4 +-
 .../org/apache/phoenix/execute/UnionPlan.java   |  11 +-
 .../apache/phoenix/execute/UnnestArrayPlan.java |  10 +-
 .../apache/phoenix/jdbc/PhoenixStatement.java   |   2 +-
 .../query/ParallelIteratorsSplitTest.java   |   2 +-
 22 files changed, 300 insertions(+), 146 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/4f01c91e/phoenix-core/src/it/java/org/apache/phoenix/calcite/CalciteIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/calcite/CalciteIT.java b/phoenix-core/src/it/java/org/apache/phoenix/calcite/CalciteIT.java
index da6303b..abea491 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/calcite/CalciteIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/calcite/CalciteIT.java
@@ -280,19 +280,19 @@ public class CalciteIT extends BaseCalciteIT {
"  PhoenixServerProject(supplier_id=[$0], NAME=[$1])\n" +
"PhoenixTableScan(table=[[phoenix, Join, SupplierTable]], scanOrder=[FORWARD])\n")
 .close();
-
-start(false, 1000f).sql("SELECT \"order_id\", i.name, i.price, discount2, quantity FROM " + JOIN_ORDER_TABLE_FULL_NAME + " o LEFT JOIN " 
-+ JOIN_ITEM_TABLE_FULL_NAME + " i ON o.\"item_id\" = i.\"item_id\" limit 2")
-.explainIs("PhoenixToEnumerableConverter\n" +
-  "  PhoenixClientProject(order_id=[$0], NAME=[$4], PRICE=[$5], DISCOUNT2=[$6], QUANTITY=[$2])\n" +
-  "PhoenixLimit(fetch=[2])\n" +
-  "  PhoenixClientJoin(condition=[=($1, $3)], joinType=[left])\n" +
-  "PhoenixClientSort(sort0=[$1], dir0=[ASC])\n" +
-  "  PhoenixLimit(fetch=[2])\n" +
-  "PhoenixServerProject(order_id=[$0], item_id=[$2], QUANTITY=[$4])\n" +
-  "  PhoenixTableScan(table=[[phoenix, Join, OrderTable]])\n" +
-  "PhoenixServerProject(item_id=[$0], NAME=[$1], PRICE=[$2], DISCOUNT2=[$4])\n" +
-  "  PhoenixTableScan(table=[[phoenix, Join, ItemTable]], scanOrder=[FORWARD])\n")
+
+start(false, 1000f).sql("SELECT \"order_id\", i.name, i.price, discount2, quantity FROM " + JOIN_ORDER_TABLE_FULL_NAME + " o LEFT JOIN "
++ JOIN_ITEM_TABLE_FULL_NAME + " i ON o.\"item_id\" = i.\"item_id\" limit 2 offset 1")
+.explainIs("PhoenixToEnumerableConverter\n" +
+   "  PhoenixClientProject(order_id=[$0], NAME=[$4], PRICE=[$5], DISCOUNT2=[$6], QUANTITY=[$2])\n" +
+   "PhoenixLimit(offset=[1], fetch=[2])\n" +
+   "  PhoenixClientJoin(condition=[=($1, $3)], joinType=[left])\n" +
+   "PhoenixClientSort(sort0=[$1], dir0=[ASC])\n" +
+ 

phoenix git commit: Code refine

2016-10-03 Thread maryannxue
Repository: phoenix
Updated Branches:
  refs/heads/calcite 17888a272 -> 1193e5afe


Code refine


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/1193e5af
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/1193e5af
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/1193e5af

Branch: refs/heads/calcite
Commit: 1193e5afe6dae631d107c605616e4f78df1600bf
Parents: 17888a2
Author: maryannxue 
Authored: Mon Oct 3 12:01:56 2016 -0700
Committer: maryannxue 
Committed: Mon Oct 3 12:01:56 2016 -0700

--
 .../apache/phoenix/calcite/CalciteUtils.java| 31 ++--
 .../apache/phoenix/calcite/PhoenixTable.java| 25 ++--
 2 files changed, 32 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/1193e5af/phoenix-core/src/main/java/org/apache/phoenix/calcite/CalciteUtils.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/calcite/CalciteUtils.java b/phoenix-core/src/main/java/org/apache/phoenix/calcite/CalciteUtils.java
index df348c9..ae143f7 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/calcite/CalciteUtils.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/calcite/CalciteUtils.java
@@ -21,6 +21,7 @@ import org.apache.calcite.rel.RelNode;
 import org.apache.calcite.rel.core.JoinRelType;
 import org.apache.calcite.rel.core.Project;
 import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeFactory;
 import org.apache.calcite.rex.RexCall;
 import org.apache.calcite.rex.RexCorrelVariable;
 import org.apache.calcite.rex.RexDynamicParam;
@@ -144,7 +145,7 @@ public class CalciteUtils {
 }
 
 @SuppressWarnings("rawtypes")
-public static PDataType relDataTypeToPDataType(RelDataType relDataType) {
+public static PDataType relDataTypeToPDataType(final RelDataType relDataType) {
 SqlTypeName sqlTypeName = relDataType.getSqlTypeName();
 final boolean isArrayType = sqlTypeName == SqlTypeName.ARRAY;
 if (isArrayType) {
@@ -161,7 +162,33 @@ public class CalciteUtils {
 }
return PDataType.fromTypeId(ordinal + (isArrayType ? PDataType.ARRAY_TYPE_BASE : 0));
 }
-
+
+@SuppressWarnings("rawtypes")
+public static RelDataType pDataTypeToRelDataType(
+final RelDataTypeFactory typeFactory, final PDataType pDataType,
+final Integer maxLength, final Integer scale, final Integer arraySize) {
+final boolean isArrayType = pDataType.isArrayType();
+final PDataType baseType = isArrayType ?
+PDataType.fromTypeId(pDataType.getSqlType() - PDataType.ARRAY_TYPE_BASE)
+  : pDataType;
+final int sqlTypeId = baseType.getResultSetSqlType();
+final PDataType normalizedBaseType = PDataType.fromTypeId(sqlTypeId);
+final SqlTypeName sqlTypeName = SqlTypeName.valueOf(normalizedBaseType.getSqlTypeName());
+RelDataType type;
+if (maxLength != null && scale != null) {
+type = typeFactory.createSqlType(sqlTypeName, maxLength, scale);
+} else if (maxLength != null) {
+type = typeFactory.createSqlType(sqlTypeName, maxLength);
+} else {
+type = typeFactory.createSqlType(sqlTypeName);
+}
+if (isArrayType) {
+type = typeFactory.createArrayType(type, arraySize == null ? -1 : arraySize);
+}
+
+return type;
+}
+
 public static JoinType convertJoinType(JoinRelType type) {
 JoinType ret = null;
 switch (type) {

http://git-wip-us.apache.org/repos/asf/phoenix/blob/1193e5af/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixTable.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixTable.java b/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixTable.java
index 905441f..fb9f9c0 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixTable.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixTable.java
@@ -124,34 +124,15 @@ public class PhoenixTable extends AbstractTable implements TranslatableTable {
 return tableMapping.getMappedColumns();
 }
 
-@SuppressWarnings("rawtypes")
 @Override
 public RelDataType getRowType(RelDataTypeFactory typeFactory) {
final RelDataTypeFactory.FieldInfoBuilder builder = typeFactory.builder();
 final List columns = tableMapping.getMappedColumns();
 for (int i = 0; i < columns.size(); i++) {
 PColumn pColumn = columns.get(i);
-final 

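The precision/scale dispatch in `pDataTypeToRelDataType` above can be sketched without Calcite on the classpath. `createSqlType` below is a made-up, string-returning stand-in for `RelDataTypeFactory`'s overloads, and the type names are purely illustrative.

```java
public class TypeMapSketch {
    // Stand-ins for RelDataTypeFactory.createSqlType overloads: the rendered
    // string plays the role of the constructed RelDataType.
    static String createSqlType(String name) {
        return name;
    }

    static String createSqlType(String name, int precision) {
        return name + "(" + precision + ")";
    }

    static String createSqlType(String name, int precision, int scale) {
        return name + "(" + precision + ", " + scale + ")";
    }

    // Mirrors the null-checks in the diff: scale is only applied together
    // with a max length, and a bare type is built when neither is set.
    static String toRelType(String sqlTypeName, Integer maxLength, Integer scale) {
        if (maxLength != null && scale != null) {
            return createSqlType(sqlTypeName, maxLength, scale);
        } else if (maxLength != null) {
            return createSqlType(sqlTypeName, maxLength);
        }
        return createSqlType(sqlTypeName);
    }

    public static void main(String[] args) {
        System.out.println(toRelType("DECIMAL", 10, 2));    // DECIMAL(10, 2)
        System.out.println(toRelType("VARCHAR", 255, null)); // VARCHAR(255)
        System.out.println(toRelType("BIGINT", null, null)); // BIGINT
    }
}
```

The same shape extends naturally to the array case in the diff, where the element type is built first and then wrapped.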
phoenix git commit: Adapt column mapping sequence for Phoenix column family support

2016-10-03 Thread maryannxue
Repository: phoenix
Updated Branches:
  refs/heads/calcite 1ef5a298f -> 17888a272


Adapt column mapping sequence for Phoenix column family support


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/17888a27
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/17888a27
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/17888a27

Branch: refs/heads/calcite
Commit: 17888a2720c4314d0f8c5287b5fe7ca838509c39
Parents: 1ef5a29
Author: maryannxue 
Authored: Mon Oct 3 10:59:49 2016 -0700
Committer: maryannxue 
Committed: Mon Oct 3 10:59:49 2016 -0700

--
 .../apache/phoenix/calcite/BaseCalciteIT.java   | 18 -
 .../phoenix/calcite/CalciteLocalIndexIT.java|  6 +-
 .../apache/phoenix/calcite/TableMapping.java| 75 ++--
 3 files changed, 56 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/17888a27/phoenix-core/src/it/java/org/apache/phoenix/calcite/BaseCalciteIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/calcite/BaseCalciteIT.java b/phoenix-core/src/it/java/org/apache/phoenix/calcite/BaseCalciteIT.java
index e192dc6..05d825d 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/calcite/BaseCalciteIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/calcite/BaseCalciteIT.java
@@ -39,6 +39,7 @@ import java.text.DecimalFormat;
 import java.util.List;
 import java.util.Map;
 import java.util.Properties;
+import java.util.regex.Pattern;
 
 import org.apache.calcite.avatica.util.ArrayImpl;
 import org.apache.calcite.config.CalciteConnectionProperty;
@@ -168,12 +169,27 @@ public class BaseCalciteIT extends BaseHBaseManagedTimeIT {
 }
 
 public Sql explainIs(String expected) throws SQLException {
+return checkExplain(expected, true);
+}
+
+public Sql explainMatches(String expected) throws SQLException {
+return checkExplain(expected, false);
+}
+
+private Sql checkExplain(String expected, boolean exact) throws SQLException {
 final Statement statement = start.getConnection().createStatement();
 final ResultSet resultSet = statement.executeQuery(start.getExplainPlanString() + " " + sql);
 String explain = QueryUtil.getExplainPlan(resultSet);
 resultSet.close();
 statement.close();
-Assert.assertEquals(explain, expected);
+if (exact) {
+Assert.assertEquals(explain, expected);
+} else {
+Assert.assertTrue(
+"Explain plan \"" + explain
++ "\" does not match \"" + expected + "\"",
+explain.matches(expected));
+}
 return this;
 }
 
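The exact-vs-pattern split that `checkExplain` introduces above can be illustrated with a JDK-only sketch; the plan strings below are invented for illustration, and `checkExplain` here is a simplified stand-in for the test helper in the diff.

```java
public class ExplainCheck {
    // Mirrors the checkExplain() split above: exact string comparison, or
    // java.util.regex matching when parts of the plan (e.g. an optional
    // column-mapping prefix like "0:") vary between runs.
    static boolean checkExplain(String explain, String expected, boolean exact) {
        return exact ? explain.equals(expected) : explain.matches(expected);
    }

    public static void main(String[] args) {
        String plan = "PhoenixServerProject(0:A_STRING=[$0])";

        // Exact comparison fails once the optional "0:" prefix appears.
        System.out.println(checkExplain(plan,
                "PhoenixServerProject(A_STRING=[$0])", true));

        // A pattern with an optional group and escaped metacharacters
        // ((, ), [, ], $) still matches.
        System.out.println(checkExplain(plan,
                "PhoenixServerProject\\((0:)?A_STRING=\\[\\$0\\]\\)", false));
    }
}
```

Note how every regex metacharacter in the plan text (parentheses, brackets, `$`) has to be escaped, which is exactly what the `explainMatches` expectations later in this commit do.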

http://git-wip-us.apache.org/repos/asf/phoenix/blob/17888a27/phoenix-core/src/it/java/org/apache/phoenix/calcite/CalciteLocalIndexIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/calcite/CalciteLocalIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/calcite/CalciteLocalIndexIT.java
index 02cf2a1..7b6c279 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/calcite/CalciteLocalIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/calcite/CalciteLocalIndexIT.java
@@ -65,9 +65,9 @@ public class CalciteLocalIndexIT extends BaseCalciteIndexIT {
"  PhoenixTableScan(table=[[phoenix, IDX_FULL]])\n")*/
 .close();
 start(true, 1000f).sql("select a_string, b_string from aTable where a_string = 'a'")
-.explainIs("PhoenixToEnumerableConverter\n" +
-   "  PhoenixServerProject(A_STRING=[$0], B_STRING=[$3])\n" +
-   "PhoenixTableScan(table=[[phoenix, IDX1]], filter=[=($0, 'a')])\n")
+.explainMatches("PhoenixToEnumerableConverter\n" +
+   "  PhoenixServerProject\\((0:)?A_STRING=\\[\\$0\\], (0:)?B_STRING=\\[\\$3\\]\\)\n" +
+   "PhoenixTableScan\\(table=\\[\\[phoenix, IDX1\\]\\], filter=\\[=\\(\\$0, 'a'\\)\\]\\)\n")
 .close();
 start(true, 1000f).sql("select a_string, b_string from aTable where b_string = 'b'")
 .explainIs("PhoenixToEnumerableConverter\n" +

http://git-wip-us.apache.org/repos/asf/phoenix/blob/17888a27/phoenix-core/src/main/java/org/apache/phoenix/calcite/TableMapping.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/calcite/TableMapping.java 

Apache-Phoenix | Phoenix-4.8-HBase-1.2 | Build Successful

2016-10-03 Thread Apache Jenkins Server
Master branch build status Successful
Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.8-HBase-1.2

Last Successful Compiled Artifacts https://builds.apache.org/job/Phoenix-4.8-HBase-1.2/lastSuccessfulBuild/artifact/

Last Complete Test Report https://builds.apache.org/job/Phoenix-4.8-HBase-1.2/lastCompletedBuild/testReport/

Changes
[ankitsinghal59] PHOENIX-3159 CachingHTableFactory may close HTable during eviction even



Build times for last couple of runs. Latest build time is the right most | Legend: blue: normal, red: test failure, gray: timeout


Apache-Phoenix | Master | Build Successful

2016-10-03 Thread Apache Jenkins Server
Master branch build status Successful
Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/master

Last Successful Compiled Artifacts https://builds.apache.org/job/Phoenix-master/lastSuccessfulBuild/artifact/

Last Complete Test Report https://builds.apache.org/job/Phoenix-master/lastCompletedBuild/testReport/

Changes
No changes


Build times for last couple of runs. Latest build time is the right most | Legend: blue: normal, red: test failure, gray: timeout


Apache-Phoenix | 4.x-HBase-1.0 | Build Successful

2016-10-03 Thread Apache Jenkins Server
4.x-HBase-1.0 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-1.0

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-1.0/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-1.0/lastCompletedBuild/testReport/

Changes
[ankitsinghal59] PHOENIX-3159 CachingHTableFactory may close HTable during eviction even



Build times for last couple of runs. Latest build time is the right most | Legend: blue: normal, red: test failure, gray: timeout


[3/3] phoenix git commit: PHOENIX-3338 Move flapping test into test class marked as NotThreadSafe

2016-10-03 Thread jamestaylor
PHOENIX-3338 Move flapping test into test class marked as NotThreadSafe


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/577a6dee
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/577a6dee
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/577a6dee

Branch: refs/heads/4.x-HBase-0.98
Commit: 577a6dee5846c53acb22c4b742e4b780b442df6b
Parents: fc01284
Author: James Taylor 
Authored: Thu Sep 29 17:30:37 2016 -0700
Committer: James Taylor 
Committed: Mon Oct 3 09:51:06 2016 -0700

--
 .../phoenix/tx/NotThreadSafeTransactionIT.java  | 138 +++
 .../org/apache/phoenix/tx/TransactionIT.java| 126 -
 2 files changed, 138 insertions(+), 126 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/577a6dee/phoenix-core/src/it/java/org/apache/phoenix/tx/NotThreadSafeTransactionIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/tx/NotThreadSafeTransactionIT.java b/phoenix-core/src/it/java/org/apache/phoenix/tx/NotThreadSafeTransactionIT.java
index b50f424..404bb9e 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/tx/NotThreadSafeTransactionIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/tx/NotThreadSafeTransactionIT.java
@@ -18,21 +18,38 @@
 package org.apache.phoenix.tx;
 
 import static org.apache.phoenix.util.TestUtil.INDEX_DATA_SCHEMA;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 import static org.apache.phoenix.util.TestUtil.createTransactionalTable;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.Properties;
 
 import javax.annotation.concurrent.NotThreadSafe;
 
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.end2end.ParallelStatsDisabledIT;
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.TestUtil;
+import org.apache.tephra.TransactionContext;
+import org.apache.tephra.TransactionSystemClient;
+import org.apache.tephra.TxConstants;
+import org.apache.tephra.hbase.TransactionAwareHTable;
 import org.junit.Test;
 
 /**
@@ -190,4 +207,125 @@ public class NotThreadSafeTransactionIT extends ParallelStatsDisabledIT {
 }
 }
 
+@Test
+public void testExternalTxContext() throws Exception {
+ResultSet rs;
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+conn.setAutoCommit(false);
+String fullTableName = generateUniqueName();
+PhoenixConnection pconn = conn.unwrap(PhoenixConnection.class);
+
+TransactionSystemClient txServiceClient = pconn.getQueryServices().getTransactionSystemClient();
+
+Statement stmt = conn.createStatement();
+stmt.execute("CREATE TABLE " + fullTableName + "(K VARCHAR PRIMARY KEY, V1 VARCHAR, V2 VARCHAR) TRANSACTIONAL=true");
+HTableInterface htable = pconn.getQueryServices().getTable(Bytes.toBytes(fullTableName));
+stmt.executeUpdate("upsert into " + fullTableName + " values('x', 'a', 'a')");
+conn.commit();
+
+try (Connection newConn = DriverManager.getConnection(getUrl(), props)) {
+rs = newConn.createStatement().executeQuery("select count(*) from " + fullTableName);
+assertTrue(rs.next());
+assertEquals(1,rs.getInt(1));
+}
+
+// Use HBase level Tephra APIs to start a new transaction
+TransactionAwareHTable txAware = new TransactionAwareHTable(htable, TxConstants.ConflictDetection.ROW);
+TransactionContext txContext = new TransactionContext(txServiceClient, txAware);
+txContext.start();
+
+// Use HBase APIs to add a new row
+Put put = new Put(Bytes.toBytes("z"));
+put.add(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES, QueryConstants.EMPTY_COLUMN_VALUE_BYTES);
+put.add(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V1"), Bytes.toBytes("b"));
+txAware.put(put);
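The (truncated) test above drives a Phoenix transactional table through Tephra's HBase-level APIs: start a `TransactionContext`, write raw `Put`s through a `TransactionAwareHTable`, then commit so the change becomes visible. A toy, self-contained model of that lifecycle is sketched below; `TxSketch` is hypothetical and is not Tephra's API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical model of the begin -> buffered write -> commit lifecycle the
// test exercises: writes are invisible until finish() publishes them.
public class TxSketch {
    private final Map<String, String> committed = new HashMap<>();
    private Map<String, String> pending;

    void start() { pending = new HashMap<>(); }                   // like txContext.start()
    void put(String row, String v) { pending.put(row, v); }       // like txAware.put(put)
    void finish() { committed.putAll(pending); pending = null; }  // commit
    String get(String row) { return committed.get(row); }         // committed reads only

    // Same sequence as the test: the write is invisible before finish()
    // and visible after it.
    public static String demo() {
        TxSketch tx = new TxSketch();
        tx.start();
        tx.put("z", "b");
        String before = tx.get("z"); // null: not yet committed
        tx.finish();
        return before + "/" + tx.get("z");
    }
}
```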

[1/3] phoenix git commit: PHOENIX-3253 Make changes to tests to support method level parallelization

2016-10-03 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 fc01284b1 -> 1cce661b8


PHOENIX-3253 Make changes to tests to support method level parallelization


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/1cce661b
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/1cce661b
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/1cce661b

Branch: refs/heads/4.x-HBase-0.98
Commit: 1cce661b8eefde5cc7b5d7799ba5e148a91516a7
Parents: d5b3bce
Author: James Taylor 
Authored: Sun Oct 2 12:47:34 2016 -0700
Committer: James Taylor 
Committed: Mon Oct 3 09:51:06 2016 -0700

--
 .../apache/phoenix/end2end/AlterTableIT.java| 49 --
 .../phoenix/end2end/FlappingAlterTableIT.java   | 97 
 pom.xml | 28 +++---
 3 files changed, 114 insertions(+), 60 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/1cce661b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
index 0125a63..48f4217 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
@@ -1083,30 +1083,6 @@ public class AlterTableIT extends ParallelStatsDisabledIT {
 }
 
 @Test
-public void testAddColumnForNewColumnFamily() throws Exception {
-Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-String ddl = "CREATE TABLE " + dataTableFullName + " (\n"
-+"ID1 VARCHAR(15) NOT NULL,\n"
-+"ID2 VARCHAR(15) NOT NULL,\n"
-+"CREATED_DATE DATE,\n"
-+"CREATION_TIME BIGINT,\n"
-+"LAST_USED DATE,\n"
-+"CONSTRAINT PK PRIMARY KEY (ID1, ID2)) SALT_BUCKETS = 8";
-Connection conn1 = DriverManager.getConnection(getUrl(), props);
-conn1.createStatement().execute(ddl);
-ddl = "ALTER TABLE " + dataTableFullName + " ADD CF.STRING VARCHAR";
-conn1.createStatement().execute(ddl);
-try (HBaseAdmin admin = conn1.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
-HColumnDescriptor[] columnFamilies = admin.getTableDescriptor(Bytes.toBytes(dataTableFullName)).getColumnFamilies();
-assertEquals(2, columnFamilies.length);
-assertEquals("0", columnFamilies[0].getNameAsString());
-assertEquals(HColumnDescriptor.DEFAULT_TTL, columnFamilies[0].getTimeToLive());
-assertEquals("CF", columnFamilies[1].getNameAsString());
-assertEquals(HColumnDescriptor.DEFAULT_TTL, columnFamilies[1].getTimeToLive());
-}
-}
-
-@Test
 public void testSetHColumnProperties() throws Exception {
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
 String ddl = "CREATE TABLE " + dataTableFullName + " (\n"
@@ -1414,31 +1390,6 @@ public class AlterTableIT extends ParallelStatsDisabledIT {
 }
 }
 
-@Test
-public void testNewColumnFamilyInheritsTTLOfEmptyCF() throws Exception {
-Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-String ddl = "CREATE TABLE " + dataTableFullName + " (\n"
-+"ID1 VARCHAR(15) NOT NULL,\n"
-+"ID2 VARCHAR(15) NOT NULL,\n"
-+"CREATED_DATE DATE,\n"
-+"CREATION_TIME BIGINT,\n"
-+"LAST_USED DATE,\n"
-+"CONSTRAINT PK PRIMARY KEY (ID1, ID2)) SALT_BUCKETS = 8, TTL = 1000";
-Connection conn1 = DriverManager.getConnection(getUrl(), props);
-conn1.createStatement().execute(ddl);
-ddl = "ALTER TABLE " + dataTableFullName + " ADD CF.STRING VARCHAR";
-conn1.createStatement().execute(ddl);
-try (HBaseAdmin admin = conn1.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
-HTableDescriptor tableDesc = admin.getTableDescriptor(Bytes.toBytes(dataTableFullName));
-HColumnDescriptor[] columnFamilies = tableDesc.getColumnFamilies();
-assertEquals(2, columnFamilies.length);
-assertEquals("0", columnFamilies[0].getNameAsString());
-assertEquals(1000, columnFamilies[0].getTimeToLive());
-assertEquals("CF", columnFamilies[1].getNameAsString());
-assertEquals(1000, columnFamilies[1].getTimeToLive());
-}
-}
-
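The removed test above (relocated by this commit) asserts that a column family added via `ALTER TABLE ... ADD CF.STRING VARCHAR`, with no TTL of its own, inherits the 1000-second TTL of the empty ("0") column family. A toy model of that inheritance rule, using a hypothetical helper rather than Phoenix's implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model: family name -> TTL seconds. The empty ("0") family is created
// first and carries the table-level TTL; later families inherit it.
public class TtlInheritSketch {
    final Map<String, Integer> familyTtl = new LinkedHashMap<>();

    public TtlInheritSketch(int tableTtl) {
        familyTtl.put("0", tableTtl); // empty CF carries the table TTL
    }

    public void addFamily(String name) {
        familyTtl.put(name, familyTtl.get("0")); // inherit empty CF's TTL
    }
}
```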
 private static void assertImmutableRows(Connection conn, String fullTableName, boolean expectedValue) throws SQLException {

[2/3] phoenix git commit: PHOENIX-3253 Make changes to tests to support method level parallelization

2016-10-03 Thread jamestaylor
PHOENIX-3253 Make changes to tests to support method level parallelization


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/d5b3bced
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/d5b3bced
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/d5b3bced

Branch: refs/heads/4.x-HBase-0.98
Commit: d5b3bcedf34e716a54038410e29dafb21ac05ccf
Parents: 577a6de
Author: James Taylor 
Authored: Sun Oct 2 11:10:14 2016 -0700
Committer: James Taylor 
Committed: Mon Oct 3 09:51:06 2016 -0700

--
 .../phoenix/end2end/FlappingLocalIndexIT.java   | 300 +
 .../phoenix/end2end/index/BaseLocalIndexIT.java |  80 +
 .../phoenix/end2end/index/LocalIndexIT.java | 299 +
 .../phoenix/tx/FlappingTransactionIT.java   | 328 ++
 .../phoenix/tx/NotThreadSafeTransactionIT.java  | 331 ---
 pom.xml |   4 +-
 6 files changed, 712 insertions(+), 630 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/d5b3bced/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java
new file mode 100644
index 000..7509997
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java
@@ -0,0 +1,300 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.util.Properties;
+
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.phoenix.end2end.index.BaseLocalIndexIT;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.TestUtil;
+import org.junit.Test;
+
+public class FlappingLocalIndexIT extends BaseLocalIndexIT {
+
+public FlappingLocalIndexIT(boolean isNamespaceMapped) {
+super(isNamespaceMapped);
+}
+
+@Test
+public void testScanWhenATableHasMultipleLocalIndexes() throws Exception {
+String tableName = schemaName + "." + generateUniqueName();
+String indexName = "IDX_" + generateUniqueName();
+
+createBaseTable(tableName, null, "('e','i','o')");
+Connection conn1 = DriverManager.getConnection(getUrl());
+try {
+conn1.createStatement().execute("UPSERT INTO " + tableName + " values('b',1,2,4,'z')");
+conn1.createStatement().execute("UPSERT INTO " + tableName + " values('f',1,2,3,'a')");
+conn1.createStatement().execute("UPSERT INTO " + tableName + " values('j',2,4,2,'a')");
+conn1.createStatement().execute("UPSERT INTO " + tableName + " values('q',3,1,1,'c')");
+conn1.commit();
+conn1.createStatement().execute("CREATE LOCAL INDEX " + indexName + " ON " + tableName + "(v1)");
+conn1.createStatement().execute("CREATE LOCAL INDEX " + indexName + "2 ON " + tableName + "(k3)");
+conn1.commit();
+conn1 = DriverManager.getConnection(getUrl());
+ResultSet rs = conn1.createStatement().executeQuery("SELECT COUNT(*) FROM " + tableName);
+
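Tests in this commit build their table and index names with `generateUniqueName()`, which is what lets JUnit run test methods in parallel (PHOENIX-3253) without DDL collisions. A hypothetical stand-in for that helper, not Phoenix's actual implementation:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of a generateUniqueName()-style helper: a process-wide
// counter gives every concurrently running test method its own table and
// index names, so parallel methods never collide on DDL.
public class UniqueNameSketch {
    private static final AtomicInteger COUNTER = new AtomicInteger(0);

    public static String generateUniqueName() {
        return "T" + String.format("%06d", COUNTER.incrementAndGet());
    }
}
```

Because each call returns a fresh name, two methods that each create and drop their own table can safely run at the same time against one shared mini-cluster.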

[3/3] phoenix git commit: PHOENIX-3253 Make changes to tests to support method level parallelization

2016-10-03 Thread jamestaylor
PHOENIX-3253 Make changes to tests to support method level parallelization


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/976c97ac
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/976c97ac
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/976c97ac

Branch: refs/heads/4.x-HBase-1.1
Commit: 976c97ac085a8b96e40e7da05740568b2c4757a7
Parents: 7f1ccc2
Author: James Taylor 
Authored: Sun Oct 2 12:47:34 2016 -0700
Committer: James Taylor 
Committed: Mon Oct 3 09:20:31 2016 -0700

--
 .../apache/phoenix/end2end/AlterTableIT.java| 49 --
 .../phoenix/end2end/FlappingAlterTableIT.java   | 97 
 pom.xml | 28 +++---
 3 files changed, 114 insertions(+), 60 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/976c97ac/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
index 0125a63..48f4217 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
@@ -1083,30 +1083,6 @@ public class AlterTableIT extends ParallelStatsDisabledIT {
 }
 
 @Test
-public void testAddColumnForNewColumnFamily() throws Exception {
-Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-String ddl = "CREATE TABLE " + dataTableFullName + " (\n"
-+"ID1 VARCHAR(15) NOT NULL,\n"
-+"ID2 VARCHAR(15) NOT NULL,\n"
-+"CREATED_DATE DATE,\n"
-+"CREATION_TIME BIGINT,\n"
-+"LAST_USED DATE,\n"
-+"CONSTRAINT PK PRIMARY KEY (ID1, ID2)) SALT_BUCKETS = 8";
-Connection conn1 = DriverManager.getConnection(getUrl(), props);
-conn1.createStatement().execute(ddl);
-ddl = "ALTER TABLE " + dataTableFullName + " ADD CF.STRING VARCHAR";
-conn1.createStatement().execute(ddl);
-try (HBaseAdmin admin = conn1.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
-HColumnDescriptor[] columnFamilies = admin.getTableDescriptor(Bytes.toBytes(dataTableFullName)).getColumnFamilies();
-assertEquals(2, columnFamilies.length);
-assertEquals("0", columnFamilies[0].getNameAsString());
-assertEquals(HColumnDescriptor.DEFAULT_TTL, columnFamilies[0].getTimeToLive());
-assertEquals("CF", columnFamilies[1].getNameAsString());
-assertEquals(HColumnDescriptor.DEFAULT_TTL, columnFamilies[1].getTimeToLive());
-}
-}
-
-@Test
 public void testSetHColumnProperties() throws Exception {
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
 String ddl = "CREATE TABLE " + dataTableFullName + " (\n"
@@ -1414,31 +1390,6 @@ public class AlterTableIT extends ParallelStatsDisabledIT {
 }
 }
 
-@Test
-public void testNewColumnFamilyInheritsTTLOfEmptyCF() throws Exception {
-Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-String ddl = "CREATE TABLE " + dataTableFullName + " (\n"
-+"ID1 VARCHAR(15) NOT NULL,\n"
-+"ID2 VARCHAR(15) NOT NULL,\n"
-+"CREATED_DATE DATE,\n"
-+"CREATION_TIME BIGINT,\n"
-+"LAST_USED DATE,\n"
-+"CONSTRAINT PK PRIMARY KEY (ID1, ID2)) SALT_BUCKETS = 8, TTL = 1000";
-Connection conn1 = DriverManager.getConnection(getUrl(), props);
-conn1.createStatement().execute(ddl);
-ddl = "ALTER TABLE " + dataTableFullName + " ADD CF.STRING VARCHAR";
-conn1.createStatement().execute(ddl);
-try (HBaseAdmin admin = conn1.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
-HTableDescriptor tableDesc = admin.getTableDescriptor(Bytes.toBytes(dataTableFullName));
-HColumnDescriptor[] columnFamilies = tableDesc.getColumnFamilies();
-assertEquals(2, columnFamilies.length);
-assertEquals("0", columnFamilies[0].getNameAsString());
-assertEquals(1000, columnFamilies[0].getTimeToLive());
-assertEquals("CF", columnFamilies[1].getNameAsString());
-assertEquals(1000, columnFamilies[1].getTimeToLive());
-}
-}
-
 private static void assertImmutableRows(Connection conn, String fullTableName, boolean expectedValue) throws SQLException {
 PhoenixConnection pconn = conn.unwrap(PhoenixConnection.class);
 

[1/3] phoenix git commit: PHOENIX-3338 Move flapping test into test class marked as NotThreadSafe

2016-10-03 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 5db33c511 -> 976c97ac0


PHOENIX-3338 Move flapping test into test class marked as NotThreadSafe


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/ddce0bfd
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/ddce0bfd
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/ddce0bfd

Branch: refs/heads/4.x-HBase-1.1
Commit: ddce0bfd2e721d680bb987468fd65e7ca9f37165
Parents: 5db33c5
Author: James Taylor 
Authored: Thu Sep 29 17:30:37 2016 -0700
Committer: James Taylor 
Committed: Mon Oct 3 09:13:32 2016 -0700

--
 .../phoenix/tx/NotThreadSafeTransactionIT.java  | 138 +++
 .../org/apache/phoenix/tx/TransactionIT.java| 126 -
 2 files changed, 138 insertions(+), 126 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/ddce0bfd/phoenix-core/src/it/java/org/apache/phoenix/tx/NotThreadSafeTransactionIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/tx/NotThreadSafeTransactionIT.java b/phoenix-core/src/it/java/org/apache/phoenix/tx/NotThreadSafeTransactionIT.java
index b50f424..e0005e4 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/tx/NotThreadSafeTransactionIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/tx/NotThreadSafeTransactionIT.java
@@ -18,21 +18,38 @@
 package org.apache.phoenix.tx;
 
 import static org.apache.phoenix.util.TestUtil.INDEX_DATA_SCHEMA;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 import static org.apache.phoenix.util.TestUtil.createTransactionalTable;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.Properties;
 
 import javax.annotation.concurrent.NotThreadSafe;
 
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.end2end.ParallelStatsDisabledIT;
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.TestUtil;
+import org.apache.tephra.TransactionContext;
+import org.apache.tephra.TransactionSystemClient;
+import org.apache.tephra.TxConstants;
+import org.apache.tephra.hbase.TransactionAwareHTable;
 import org.junit.Test;
 
 /**
@@ -190,4 +207,125 @@ public class NotThreadSafeTransactionIT extends ParallelStatsDisabledIT {
 }
 }
 
+@Test
+public void testExternalTxContext() throws Exception {
+ResultSet rs;
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+conn.setAutoCommit(false);
+String fullTableName = generateUniqueName();
+PhoenixConnection pconn = conn.unwrap(PhoenixConnection.class);
+
+TransactionSystemClient txServiceClient = pconn.getQueryServices().getTransactionSystemClient();
+
+Statement stmt = conn.createStatement();
+stmt.execute("CREATE TABLE " + fullTableName + "(K VARCHAR PRIMARY KEY, V1 VARCHAR, V2 VARCHAR) TRANSACTIONAL=true");
+HTableInterface htable = pconn.getQueryServices().getTable(Bytes.toBytes(fullTableName));
+stmt.executeUpdate("upsert into " + fullTableName + " values('x', 'a', 'a')");
+conn.commit();
+
+try (Connection newConn = DriverManager.getConnection(getUrl(), props)) {
+rs = newConn.createStatement().executeQuery("select count(*) from " + fullTableName);
+assertTrue(rs.next());
+assertEquals(1,rs.getInt(1));
+}
+
+// Use HBase level Tephra APIs to start a new transaction
+TransactionAwareHTable txAware = new TransactionAwareHTable(htable, TxConstants.ConflictDetection.ROW);
+TransactionContext txContext = new TransactionContext(txServiceClient, txAware);
+txContext.start();
+
+// Use HBase APIs to add a new row
+Put put = new Put(Bytes.toBytes("z"));
+put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES, QueryConstants.EMPTY_COLUMN_VALUE_BYTES);
+

[2/3] phoenix git commit: PHOENIX-3253 Make changes to tests to support method level parallelization

2016-10-03 Thread jamestaylor
PHOENIX-3253 Make changes to tests to support method level parallelization


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/7f1ccc21
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/7f1ccc21
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/7f1ccc21

Branch: refs/heads/4.x-HBase-1.1
Commit: 7f1ccc2125be4a6d900f47113f09d7c0a4dba601
Parents: ddce0bf
Author: James Taylor 
Authored: Sun Oct 2 11:10:14 2016 -0700
Committer: James Taylor 
Committed: Mon Oct 3 09:16:21 2016 -0700

--
 .../phoenix/end2end/FlappingLocalIndexIT.java   | 300 +
 .../phoenix/end2end/index/BaseLocalIndexIT.java |  80 +
 .../phoenix/end2end/index/LocalIndexIT.java | 299 +
 .../phoenix/tx/FlappingTransactionIT.java   | 328 ++
 .../phoenix/tx/NotThreadSafeTransactionIT.java  | 331 ---
 pom.xml |   4 +-
 6 files changed, 712 insertions(+), 630 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/7f1ccc21/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java
new file mode 100644
index 000..7509997
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java
@@ -0,0 +1,300 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.util.Properties;
+
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.phoenix.end2end.index.BaseLocalIndexIT;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.TestUtil;
+import org.junit.Test;
+
+public class FlappingLocalIndexIT extends BaseLocalIndexIT {
+
+public FlappingLocalIndexIT(boolean isNamespaceMapped) {
+super(isNamespaceMapped);
+}
+
+@Test
+public void testScanWhenATableHasMultipleLocalIndexes() throws Exception {
+String tableName = schemaName + "." + generateUniqueName();
+String indexName = "IDX_" + generateUniqueName();
+
+createBaseTable(tableName, null, "('e','i','o')");
+Connection conn1 = DriverManager.getConnection(getUrl());
+try {
+conn1.createStatement().execute("UPSERT INTO " + tableName + " values('b',1,2,4,'z')");
+conn1.createStatement().execute("UPSERT INTO " + tableName + " values('f',1,2,3,'a')");
+conn1.createStatement().execute("UPSERT INTO " + tableName + " values('j',2,4,2,'a')");
+conn1.createStatement().execute("UPSERT INTO " + tableName + " values('q',3,1,1,'c')");
+conn1.commit();
+conn1.createStatement().execute("CREATE LOCAL INDEX " + indexName + " ON " + tableName + "(v1)");
+conn1.createStatement().execute("CREATE LOCAL INDEX " + indexName + "2 ON " + tableName + "(k3)");
+conn1.commit();
+conn1 = DriverManager.getConnection(getUrl());
+ResultSet rs = conn1.createStatement().executeQuery("SELECT COUNT(*) FROM " + tableName);
+

Jenkins build is back to normal : Phoenix-4.8-HBase-0.98 #31

2016-10-03 Thread Apache Jenkins Server
See 



Apache-Phoenix | Master | Build Successful

2016-10-03 Thread Apache Jenkins Server
Master branch build status Successful
Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/master

Last Successful Compiled Artifacts https://builds.apache.org/job/Phoenix-master/lastSuccessfulBuild/artifact/

Last Complete Test Report https://builds.apache.org/job/Phoenix-master/lastCompletedBuild/testReport/

Changes
[ankitsinghal59] PHOENIX-3159 CachingHTableFactory may close HTable during eviction even



Build times for last couple of runs. Latest build time is the right most | Legend: blue: normal, red: test failure, gray: timeout


Build failed in Jenkins: Phoenix-4.8-HBase-1.0 #29

2016-10-03 Thread Apache Jenkins Server
See 

Changes:

[ankitsinghal59] PHOENIX-3159 CachingHTableFactory may close HTable during eviction even

--
[...truncated 275 lines...]
Running org.apache.phoenix.expression.function.InstrFunctionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 sec - in org.apache.phoenix.expression.function.InstrFunctionTest
Running org.apache.phoenix.expression.function.BuiltinFunctionConstructorTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.086 sec - in org.apache.phoenix.expression.function.BuiltinFunctionConstructorTest
Running org.apache.phoenix.expression.function.ExternalSqlTypeIdFunctionTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.002 sec - in org.apache.phoenix.expression.function.ExternalSqlTypeIdFunctionTest
Running org.apache.phoenix.expression.RegexpSubstrFunctionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.006 sec - in org.apache.phoenix.expression.RegexpSubstrFunctionTest
Running org.apache.phoenix.expression.SignFunctionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.017 sec - in org.apache.phoenix.expression.SignFunctionTest
Running org.apache.phoenix.expression.SqrtFunctionTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.321 sec - in org.apache.phoenix.expression.NullValueTest
Running org.apache.phoenix.expression.SortOrderExpressionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.01 sec - in org.apache.phoenix.expression.SqrtFunctionTest
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.027 sec - in org.apache.phoenix.util.PhoenixRuntimeTest
Running org.apache.phoenix.expression.ArrayFillFunctionTest
Running org.apache.phoenix.expression.ArrayPrependFunctionTest
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.02 sec - in org.apache.phoenix.expression.ArrayFillFunctionTest
Running org.apache.phoenix.expression.RoundFloorCeilExpressionsTest
Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.135 sec - in org.apache.phoenix.expression.SortOrderExpressionTest
Running org.apache.phoenix.expression.PowerFunctionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.021 sec - in org.apache.phoenix.expression.PowerFunctionTest
Running org.apache.phoenix.expression.GetSetByteBitFunctionTest
Tests run: 37, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.603 sec - in org.apache.phoenix.expression.ArrayPrependFunctionTest
Running org.apache.phoenix.expression.ColumnExpressionTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.002 sec - in org.apache.phoenix.expression.ColumnExpressionTest
Running org.apache.phoenix.expression.RegexpSplitFunctionTest
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.669 sec - in org.apache.phoenix.expression.RoundFloorCeilExpressionsTest
Running org.apache.phoenix.expression.LnLogFunctionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.017 sec - in org.apache.phoenix.expression.LnLogFunctionTest
Running org.apache.phoenix.expression.AbsFunctionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.011 sec - in org.apache.phoenix.expression.AbsFunctionTest
Running org.apache.phoenix.expression.LikeExpressionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.117 sec - in org.apache.phoenix.expression.RegexpSplitFunctionTest
Running org.apache.phoenix.expression.ExpFunctionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.009 sec - in org.apache.phoenix.expression.ExpFunctionTest
Running org.apache.phoenix.expression.ILikeExpressionTest
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.019 sec - in org.apache.phoenix.expression.LikeExpressionTest
Running org.apache.phoenix.expression.ArrayAppendFunctionTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.013 sec - in org.apache.phoenix.expression.ILikeExpressionTest
Running org.apache.phoenix.expression.CbrtFunctionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.007 sec - in org.apache.phoenix.expression.CbrtFunctionTest
Running org.apache.phoenix.expression.ArrayConcatFunctionTest
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.379 sec - in org.apache.phoenix.expression.ArrayAppendFunctionTest
Running org.apache.phoenix.expression.StringToArrayFunctionTest
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.015 sec - in org.apache.phoenix.expression.StringToArrayFunctionTest
Running org.apache.phoenix.filter.DistinctPrefixFilterTest
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.016 sec - in org.apache.phoenix.filter.DistinctPrefixFilterTest
Running org.apache.phoenix.filter.SkipScanFilterTest
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time 

phoenix git commit: PHOENIX-3159 CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread

2016-10-03 Thread ankit
Repository: phoenix
Updated Branches:
  refs/heads/4.8-HBase-1.2 468ca2b93 -> 8dbff1ec0


PHOENIX-3159 CachingHTableFactory may close HTable during eviction even if it 
is getting used for writing by another thread


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/8dbff1ec
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/8dbff1ec
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/8dbff1ec

Branch: refs/heads/4.8-HBase-1.2
Commit: 8dbff1ec0725ae60636482b5086d27e3d11bd2fa
Parents: 468ca2b
Author: Ankit Singhal 
Authored: Mon Oct 3 19:15:58 2016 +0530
Committer: Ankit Singhal 
Committed: Mon Oct 3 19:15:58 2016 +0530

--
 .../hbase/index/table/CachingHTableFactory.java | 104 ---
 .../index/table/CoprocessorHTableFactory.java   |   6 ++
 .../hbase/index/table/HTableFactory.java|   4 +-
 .../hbase/index/write/IndexWriterUtils.java |   3 +
 .../write/ParallelWriterIndexCommitter.java |  21 ++--
 .../TrackingParallelWriterIndexCommitter.java   |  18 ++--
 .../hbase/index/write/FakeTableFactory.java |   9 +-
 .../index/write/TestCachingHTableFactory.java   |  37 ---
 .../hbase/index/write/TestIndexWriter.java  |  24 +++--
 .../index/write/TestParalleIndexWriter.java |  16 ++-
 .../write/TestParalleWriterIndexCommitter.java  |  15 ++-
 11 files changed, 197 insertions(+), 60 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/8dbff1ec/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
index 0c06e2b..d0df5b3 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
@@ -17,18 +17,30 @@
  */
 package org.apache.phoenix.hbase.index.table;
 
+import static org.apache.phoenix.hbase.index.write.IndexWriterUtils.HTABLE_KEEP_ALIVE_KEY;
+import static org.apache.phoenix.hbase.index.write.IndexWriterUtils.INDEX_WRITES_THREAD_MAX_PER_REGIONSERVER_KEY;
+
 import java.io.IOException;
 import java.util.Map;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.SynchronousQueue;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
 
 import org.apache.commons.collections.map.LRUMap;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
 import org.apache.hadoop.hbase.util.Bytes;
-
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.phoenix.execute.DelegateHTable;
 import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 
+import com.google.common.annotations.VisibleForTesting;;
+
 /**
  * A simple cache that just uses usual GC mechanisms to cleanup unused {@link HTableInterface}s.
  * When requesting an {@link HTableInterface} via {@link #getTable}, you may get the same table as
@@ -47,7 +59,7 @@ public class CachingHTableFactory implements HTableFactory {
   public class HTableInterfaceLRUMap extends LRUMap {
 
 public HTableInterfaceLRUMap(int cacheSize) {
-  super(cacheSize);
+  super(cacheSize, true);
 }
 
 @Override
@@ -58,12 +70,18 @@ public class CachingHTableFactory implements HTableFactory {
 + " because it was evicted from the cache.");
   }
   try {
-table.close();
+ synchronized (this) { // the whole operation of closing and checking the count should be atomic
+// and should not conflict with getTable()
+  if (((CachedHTableWrapper)table).getReferenceCount() <= 0) {
+table.close();
+return true;
+  }
+}
   } catch (IOException e) {
 LOG.info("Failed to correctly close HTable: " + Bytes.toString(table.getTableName())
 + " ignoring since being removed from queue.");
   }
-  return true;
+  return false;
 }
   }
 
@@ -73,38 +91,94 @@ public class CachingHTableFactory implements HTableFactory {
 
   private static final Log LOG = LogFactory.getLog(CachingHTableFactory.class);
   private static final String CACHE_SIZE_KEY = "index.tablefactory.cache.size";
-  private static final int DEFAULT_CACHE_SIZE = 10;
+  private static final 
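The truncated diff above shows the core of the fix: the LRUMap is switched to access order (`super(cacheSize, true)`), and eviction only closes a table whose reference count has dropped to zero. Below is a minimal, self-contained sketch of that reference-counting eviction pattern. It uses java.util.LinkedHashMap in place of the commons-collections LRUMap, and the names RefCountedTable, retain, and release are hypothetical illustrations, not identifiers from the patch.

```java
import java.io.Closeable;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for Phoenix's CachedHTableWrapper: a closeable
// resource that tracks how many writers currently hold it.
class RefCountedTable implements Closeable {
    private final AtomicInteger refCount = new AtomicInteger();
    private volatile boolean closed = false;

    void retain() { refCount.incrementAndGet(); }
    void release() { refCount.decrementAndGet(); }
    int getReferenceCount() { return refCount.get(); }
    boolean isClosed() { return closed; }

    @Override public void close() { closed = true; }
}

// Access-ordered LRU cache (the super(cacheSize, true) change) that refuses
// to close an entry while its reference count is still positive.
class RefCountingLruCache extends LinkedHashMap<String, RefCountedTable> {
    private final int cacheSize;

    RefCountingLruCache(int cacheSize) {
        super(16, 0.75f, true); // true = access order, like LRUMap(cacheSize, true)
        this.cacheSize = cacheSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, RefCountedTable> eldest) {
        if (size() <= cacheSize) return false;
        synchronized (this) { // close-and-check must be atomic w.r.t. getTable()
            if (eldest.getValue().getReferenceCount() <= 0) {
                eldest.getValue().close();
                return true; // safe to evict: no writer is using it
            }
        }
        return false; // still referenced: keep it rather than closing mid-write
    }

    synchronized RefCountedTable getTable(String name) {
        RefCountedTable t = get(name);
        if (t == null) {
            t = new RefCountedTable();
            put(name, t); // may trigger removeEldestEntry above
        }
        t.retain(); // caller must release() when done writing
        return t;
    }
}
```

With capacity 1, fetching a second table leaves the first cached (and open) until its last holder releases it; only the next insertion after that actually closes and evicts it, which is exactly the race PHOENIX-3159 closes.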

phoenix git commit: PHOENIX-3159 CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread

2016-10-03 Thread ankit
Repository: phoenix
Updated Branches:
  refs/heads/4.8-HBase-1.1 56eb4bd28 -> be90ecf24


PHOENIX-3159 CachingHTableFactory may close HTable during eviction even if it 
is getting used for writing by another thread


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/be90ecf2
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/be90ecf2
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/be90ecf2

Branch: refs/heads/4.8-HBase-1.1
Commit: be90ecf24db1064133fbe726e0d508a629d5d74d
Parents: 56eb4bd
Author: Ankit Singhal 
Authored: Mon Oct 3 19:14:47 2016 +0530
Committer: Ankit Singhal 
Committed: Mon Oct 3 19:14:47 2016 +0530

--
 .../hbase/index/table/CachingHTableFactory.java | 104 ---
 .../index/table/CoprocessorHTableFactory.java   |   6 ++
 .../hbase/index/table/HTableFactory.java|   4 +-
 .../hbase/index/write/IndexWriterUtils.java |   3 +
 .../write/ParallelWriterIndexCommitter.java |  21 ++--
 .../TrackingParallelWriterIndexCommitter.java   |  18 ++--
 .../hbase/index/write/FakeTableFactory.java |   9 +-
 .../index/write/TestCachingHTableFactory.java   |  37 ---
 .../hbase/index/write/TestIndexWriter.java  |  24 +++--
 .../index/write/TestParalleIndexWriter.java |  16 ++-
 .../write/TestParalleWriterIndexCommitter.java  |  15 ++-
 11 files changed, 197 insertions(+), 60 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/be90ecf2/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
index 0c06e2b..d0df5b3 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
@@ -17,18 +17,30 @@
  */
 package org.apache.phoenix.hbase.index.table;
 
+import static org.apache.phoenix.hbase.index.write.IndexWriterUtils.HTABLE_KEEP_ALIVE_KEY;
+import static org.apache.phoenix.hbase.index.write.IndexWriterUtils.INDEX_WRITES_THREAD_MAX_PER_REGIONSERVER_KEY;
+
 import java.io.IOException;
 import java.util.Map;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.SynchronousQueue;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
 
 import org.apache.commons.collections.map.LRUMap;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
 import org.apache.hadoop.hbase.util.Bytes;
-
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.phoenix.execute.DelegateHTable;
 import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 
+import com.google.common.annotations.VisibleForTesting;;
+
 /**
  * A simple cache that just uses usual GC mechanisms to cleanup unused {@link HTableInterface}s.
  * When requesting an {@link HTableInterface} via {@link #getTable}, you may get the same table as
@@ -47,7 +59,7 @@ public class CachingHTableFactory implements HTableFactory {
   public class HTableInterfaceLRUMap extends LRUMap {
 
 public HTableInterfaceLRUMap(int cacheSize) {
-  super(cacheSize);
+  super(cacheSize, true);
 }
 
 @Override
@@ -58,12 +70,18 @@ public class CachingHTableFactory implements HTableFactory {
 + " because it was evicted from the cache.");
   }
   try {
-table.close();
+ synchronized (this) { // the whole operation of closing and checking the count should be atomic
+// and should not conflict with getTable()
+  if (((CachedHTableWrapper)table).getReferenceCount() <= 0) {
+table.close();
+return true;
+  }
+}
   } catch (IOException e) {
 LOG.info("Failed to correctly close HTable: " + Bytes.toString(table.getTableName())
 + " ignoring since being removed from queue.");
   }
-  return true;
+  return false;
 }
   }
 
@@ -73,38 +91,94 @@ public class CachingHTableFactory implements HTableFactory {
 
   private static final Log LOG = LogFactory.getLog(CachingHTableFactory.class);
   private static final String CACHE_SIZE_KEY = "index.tablefactory.cache.size";
-  private static final int DEFAULT_CACHE_SIZE = 10;
+  private static final 
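The imports this diff adds (ThreadPoolExecutor, SynchronousQueue, TimeUnit) and the new HTABLE_KEEP_ALIVE_KEY / INDEX_WRITES_THREAD_MAX_PER_REGIONSERVER_KEY constants suggest the patch also sizes the index-writer threads with a bounded, keep-alive executor. The sketch below is an assumption-laden illustration of that executor configuration, not code from the patch; the IndexWriterPool name and the numeric defaults are invented for the example, and the real values come from the Configuration keys named in the diff.

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class IndexWriterPool {
    // Hypothetical factory: maxThreads and keepAliveSecs would be read from
    // INDEX_WRITES_THREAD_MAX_PER_REGIONSERVER_KEY and HTABLE_KEEP_ALIVE_KEY.
    public static ThreadPoolExecutor newPool(int maxThreads, long keepAliveSecs) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                maxThreads, maxThreads, keepAliveSecs, TimeUnit.SECONDS,
                // A SynchronousQueue hands tasks straight to a thread
                // instead of buffering them behind a queue.
                new SynchronousQueue<Runnable>());
        // Let even core threads die after the keep-alive window, so an idle
        // region server does not pin maxThreads live threads forever.
        pool.allowCoreThreadTimeOut(true);
        return pool;
    }
}
```

Note that with a SynchronousQueue, submissions beyond maxThreads busy workers are rejected rather than queued, so callers need a rejection policy or back-pressure.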

phoenix git commit: PHOENIX-3159 CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread

2016-10-03 Thread ankit
Repository: phoenix
Updated Branches:
  refs/heads/4.8-HBase-1.0 afe8591cc -> 0df931ad8


PHOENIX-3159 CachingHTableFactory may close HTable during eviction even if it 
is getting used for writing by another thread


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/0df931ad
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/0df931ad
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/0df931ad

Branch: refs/heads/4.8-HBase-1.0
Commit: 0df931ad8e62a0af6694f6ba3cd62e77aeecb1b7
Parents: afe8591
Author: Ankit Singhal 
Authored: Mon Oct 3 19:11:15 2016 +0530
Committer: Ankit Singhal 
Committed: Mon Oct 3 19:11:15 2016 +0530

--
 .../hbase/index/table/CachingHTableFactory.java | 104 ---
 .../index/table/CoprocessorHTableFactory.java   |   6 ++
 .../hbase/index/table/HTableFactory.java|   4 +-
 .../hbase/index/write/IndexWriterUtils.java |   3 +
 .../write/ParallelWriterIndexCommitter.java |  17 +--
 .../TrackingParallelWriterIndexCommitter.java   |  18 ++--
 .../hbase/index/write/FakeTableFactory.java |   9 +-
 .../index/write/TestCachingHTableFactory.java   |  37 ---
 .../hbase/index/write/TestIndexWriter.java  |  24 +++--
 .../index/write/TestParalleIndexWriter.java |  16 ++-
 .../write/TestParalleWriterIndexCommitter.java  |  15 ++-
 11 files changed, 197 insertions(+), 56 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/0df931ad/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
index 0c06e2b..d0df5b3 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
@@ -17,18 +17,30 @@
  */
 package org.apache.phoenix.hbase.index.table;
 
+import static org.apache.phoenix.hbase.index.write.IndexWriterUtils.HTABLE_KEEP_ALIVE_KEY;
+import static org.apache.phoenix.hbase.index.write.IndexWriterUtils.INDEX_WRITES_THREAD_MAX_PER_REGIONSERVER_KEY;
+
 import java.io.IOException;
 import java.util.Map;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.SynchronousQueue;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
 
 import org.apache.commons.collections.map.LRUMap;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
 import org.apache.hadoop.hbase.util.Bytes;
-
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.phoenix.execute.DelegateHTable;
 import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 
+import com.google.common.annotations.VisibleForTesting;;
+
 /**
  * A simple cache that just uses usual GC mechanisms to cleanup unused {@link HTableInterface}s.
  * When requesting an {@link HTableInterface} via {@link #getTable}, you may get the same table as
@@ -47,7 +59,7 @@ public class CachingHTableFactory implements HTableFactory {
   public class HTableInterfaceLRUMap extends LRUMap {
 
 public HTableInterfaceLRUMap(int cacheSize) {
-  super(cacheSize);
+  super(cacheSize, true);
 }
 
 @Override
@@ -58,12 +70,18 @@ public class CachingHTableFactory implements HTableFactory {
 + " because it was evicted from the cache.");
   }
   try {
-table.close();
+ synchronized (this) { // the whole operation of closing and checking the count should be atomic
+// and should not conflict with getTable()
+  if (((CachedHTableWrapper)table).getReferenceCount() <= 0) {
+table.close();
+return true;
+  }
+}
   } catch (IOException e) {
 LOG.info("Failed to correctly close HTable: " + Bytes.toString(table.getTableName())
 + " ignoring since being removed from queue.");
   }
-  return true;
+  return false;
 }
   }
 
@@ -73,38 +91,94 @@ public class CachingHTableFactory implements HTableFactory {
 
   private static final Log LOG = LogFactory.getLog(CachingHTableFactory.class);
   private static final String CACHE_SIZE_KEY = "index.tablefactory.cache.size";
-  private static final int DEFAULT_CACHE_SIZE = 10;
+  private static final 

phoenix git commit: PHOENIX-3159 CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread

2016-10-03 Thread ankit
Repository: phoenix
Updated Branches:
  refs/heads/4.8-HBase-0.98 54bc83839 -> 41ecf71b1


PHOENIX-3159 CachingHTableFactory may close HTable during eviction even if it 
is getting used for writing by another thread


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/41ecf71b
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/41ecf71b
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/41ecf71b

Branch: refs/heads/4.8-HBase-0.98
Commit: 41ecf71b1d4bb49ec41dec6609d06812dd00bb94
Parents: 54bc838
Author: Ankit Singhal 
Authored: Mon Oct 3 19:07:55 2016 +0530
Committer: Ankit Singhal 
Committed: Mon Oct 3 19:07:55 2016 +0530

--
 .../hbase/index/table/CachingHTableFactory.java | 104 ---
 .../index/table/CoprocessorHTableFactory.java   |   6 ++
 .../hbase/index/table/HTableFactory.java|   4 +-
 .../hbase/index/write/IndexWriterUtils.java |   3 +
 .../write/ParallelWriterIndexCommitter.java |  16 ++-
 .../TrackingParallelWriterIndexCommitter.java   |  18 ++--
 .../hbase/index/write/FakeTableFactory.java |   9 +-
 .../index/write/TestCachingHTableFactory.java   |  37 ---
 .../hbase/index/write/TestIndexWriter.java  |  24 +++--
 .../index/write/TestParalleIndexWriter.java |  16 ++-
 .../write/TestParalleWriterIndexCommitter.java  |  15 ++-
 11 files changed, 197 insertions(+), 55 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/41ecf71b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
index 0c06e2b..d0df5b3 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
@@ -17,18 +17,30 @@
  */
 package org.apache.phoenix.hbase.index.table;
 
+import static org.apache.phoenix.hbase.index.write.IndexWriterUtils.HTABLE_KEEP_ALIVE_KEY;
+import static org.apache.phoenix.hbase.index.write.IndexWriterUtils.INDEX_WRITES_THREAD_MAX_PER_REGIONSERVER_KEY;
+
 import java.io.IOException;
 import java.util.Map;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.SynchronousQueue;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
 
 import org.apache.commons.collections.map.LRUMap;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
 import org.apache.hadoop.hbase.util.Bytes;
-
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.phoenix.execute.DelegateHTable;
 import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 
+import com.google.common.annotations.VisibleForTesting;;
+
 /**
  * A simple cache that just uses usual GC mechanisms to cleanup unused {@link HTableInterface}s.
  * When requesting an {@link HTableInterface} via {@link #getTable}, you may get the same table as
@@ -47,7 +59,7 @@ public class CachingHTableFactory implements HTableFactory {
   public class HTableInterfaceLRUMap extends LRUMap {
 
 public HTableInterfaceLRUMap(int cacheSize) {
-  super(cacheSize);
+  super(cacheSize, true);
 }
 
 @Override
@@ -58,12 +70,18 @@ public class CachingHTableFactory implements HTableFactory {
 + " because it was evicted from the cache.");
   }
   try {
-table.close();
+ synchronized (this) { // the whole operation of closing and checking the count should be atomic
+// and should not conflict with getTable()
+  if (((CachedHTableWrapper)table).getReferenceCount() <= 0) {
+table.close();
+return true;
+  }
+}
   } catch (IOException e) {
 LOG.info("Failed to correctly close HTable: " + Bytes.toString(table.getTableName())
 + " ignoring since being removed from queue.");
   }
-  return true;
+  return false;
 }
   }
 
@@ -73,38 +91,94 @@ public class CachingHTableFactory implements HTableFactory {
 
   private static final Log LOG = LogFactory.getLog(CachingHTableFactory.class);
   private static final String CACHE_SIZE_KEY = "index.tablefactory.cache.size";
-  private static final int DEFAULT_CACHE_SIZE = 10;
+  private static 

phoenix git commit: PHOENIX-3159 CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread

2016-10-03 Thread ankit
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 c4abc7cf2 -> fc01284b1


PHOENIX-3159 CachingHTableFactory may close HTable during eviction even if it 
is getting used for writing by another thread


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/fc01284b
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/fc01284b
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/fc01284b

Branch: refs/heads/4.x-HBase-0.98
Commit: fc01284b16e02b849d200d594bc42c01aa915e21
Parents: c4abc7c
Author: Ankit Singhal 
Authored: Mon Oct 3 19:03:56 2016 +0530
Committer: Ankit Singhal 
Committed: Mon Oct 3 19:03:56 2016 +0530

--
 .../hbase/index/table/CachingHTableFactory.java | 104 ---
 .../index/table/CoprocessorHTableFactory.java   |   6 ++
 .../hbase/index/table/HTableFactory.java|   4 +-
 .../hbase/index/write/IndexWriterUtils.java |   3 +
 .../write/ParallelWriterIndexCommitter.java |  16 ++-
 .../TrackingParallelWriterIndexCommitter.java   |  18 ++--
 .../hbase/index/write/FakeTableFactory.java |   9 +-
 .../index/write/TestCachingHTableFactory.java   |  37 ---
 .../hbase/index/write/TestIndexWriter.java  |  24 +++--
 .../index/write/TestParalleIndexWriter.java |  16 ++-
 .../write/TestParalleWriterIndexCommitter.java  |  15 ++-
 11 files changed, 197 insertions(+), 55 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/fc01284b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
index 0c06e2b..d0df5b3 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
@@ -17,18 +17,30 @@
  */
 package org.apache.phoenix.hbase.index.table;
 
+import static org.apache.phoenix.hbase.index.write.IndexWriterUtils.HTABLE_KEEP_ALIVE_KEY;
+import static org.apache.phoenix.hbase.index.write.IndexWriterUtils.INDEX_WRITES_THREAD_MAX_PER_REGIONSERVER_KEY;
+
 import java.io.IOException;
 import java.util.Map;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.SynchronousQueue;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
 
 import org.apache.commons.collections.map.LRUMap;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
 import org.apache.hadoop.hbase.util.Bytes;
-
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.phoenix.execute.DelegateHTable;
 import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 
+import com.google.common.annotations.VisibleForTesting;;
+
 /**
  * A simple cache that just uses usual GC mechanisms to cleanup unused {@link HTableInterface}s.
  * When requesting an {@link HTableInterface} via {@link #getTable}, you may get the same table as
@@ -47,7 +59,7 @@ public class CachingHTableFactory implements HTableFactory {
   public class HTableInterfaceLRUMap extends LRUMap {
 
 public HTableInterfaceLRUMap(int cacheSize) {
-  super(cacheSize);
+  super(cacheSize, true);
 }
 
 @Override
@@ -58,12 +70,18 @@ public class CachingHTableFactory implements HTableFactory {
 + " because it was evicted from the cache.");
   }
   try {
-table.close();
+ synchronized (this) { // the whole operation of closing and checking the count should be atomic
+// and should not conflict with getTable()
+  if (((CachedHTableWrapper)table).getReferenceCount() <= 0) {
+table.close();
+return true;
+  }
+}
   } catch (IOException e) {
 LOG.info("Failed to correctly close HTable: " + Bytes.toString(table.getTableName())
 + " ignoring since being removed from queue.");
   }
-  return true;
+  return false;
 }
   }
 
@@ -73,38 +91,94 @@ public class CachingHTableFactory implements HTableFactory {
 
   private static final Log LOG = LogFactory.getLog(CachingHTableFactory.class);
   private static final String CACHE_SIZE_KEY = "index.tablefactory.cache.size";
-  private static final int DEFAULT_CACHE_SIZE = 10;
+  private static 

phoenix git commit: PHOENIX-3159 CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread

2016-10-03 Thread ankit
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 2c83fae49 -> 5db33c511


PHOENIX-3159 CachingHTableFactory may close HTable during eviction even if it 
is getting used for writing by another thread


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/5db33c51
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/5db33c51
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/5db33c51

Branch: refs/heads/4.x-HBase-1.1
Commit: 5db33c5119f49ed01bc6ea3e97124e64cbd2d022
Parents: 2c83fae
Author: Ankit Singhal 
Authored: Mon Oct 3 18:59:08 2016 +0530
Committer: Ankit Singhal 
Committed: Mon Oct 3 18:59:08 2016 +0530

--
 .../hbase/index/table/CachingHTableFactory.java | 104 ---
 .../index/table/CoprocessorHTableFactory.java   |   6 ++
 .../hbase/index/table/HTableFactory.java|   4 +-
 .../hbase/index/write/IndexWriterUtils.java |   3 +
 .../write/ParallelWriterIndexCommitter.java |  21 ++--
 .../TrackingParallelWriterIndexCommitter.java   |  18 ++--
 .../hbase/index/write/FakeTableFactory.java |   9 +-
 .../index/write/TestCachingHTableFactory.java   |  37 ---
 .../hbase/index/write/TestIndexWriter.java  |  24 +++--
 .../index/write/TestParalleIndexWriter.java |  16 ++-
 .../write/TestParalleWriterIndexCommitter.java  |  15 ++-
 11 files changed, 197 insertions(+), 60 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/5db33c51/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
index 0c06e2b..d0df5b3 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
@@ -17,18 +17,30 @@
  */
 package org.apache.phoenix.hbase.index.table;
 
+import static org.apache.phoenix.hbase.index.write.IndexWriterUtils.HTABLE_KEEP_ALIVE_KEY;
+import static org.apache.phoenix.hbase.index.write.IndexWriterUtils.INDEX_WRITES_THREAD_MAX_PER_REGIONSERVER_KEY;
+
 import java.io.IOException;
 import java.util.Map;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.SynchronousQueue;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
 
 import org.apache.commons.collections.map.LRUMap;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
 import org.apache.hadoop.hbase.util.Bytes;
-
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.phoenix.execute.DelegateHTable;
 import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 
+import com.google.common.annotations.VisibleForTesting;;
+
 /**
  * A simple cache that just uses usual GC mechanisms to cleanup unused {@link HTableInterface}s.
  * When requesting an {@link HTableInterface} via {@link #getTable}, you may get the same table as
@@ -47,7 +59,7 @@ public class CachingHTableFactory implements HTableFactory {
   public class HTableInterfaceLRUMap extends LRUMap {
 
 public HTableInterfaceLRUMap(int cacheSize) {
-  super(cacheSize);
+  super(cacheSize, true);
 }
 
 @Override
@@ -58,12 +70,18 @@ public class CachingHTableFactory implements HTableFactory {
 + " because it was evicted from the cache.");
   }
   try {
-table.close();
+ synchronized (this) { // the whole operation of closing and checking the count should be atomic
+// and should not conflict with getTable()
+  if (((CachedHTableWrapper)table).getReferenceCount() <= 0) {
+table.close();
+return true;
+  }
+}
   } catch (IOException e) {
 LOG.info("Failed to correctly close HTable: " + Bytes.toString(table.getTableName())
 + " ignoring since being removed from queue.");
   }
-  return true;
+  return false;
 }
   }
 
@@ -73,38 +91,94 @@ public class CachingHTableFactory implements HTableFactory {
 
   private static final Log LOG = LogFactory.getLog(CachingHTableFactory.class);
   private static final String CACHE_SIZE_KEY = "index.tablefactory.cache.size";
-  private static final int DEFAULT_CACHE_SIZE = 10;
+  private static final 

phoenix git commit: PHOENIX-3159 CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread

2016-10-03 Thread ankit
Repository: phoenix
Updated Branches:
  refs/heads/master 2f51568a7 -> 4c0aeb0d5


PHOENIX-3159 CachingHTableFactory may close HTable during eviction even if it 
is getting used for writing by another thread


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/4c0aeb0d
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/4c0aeb0d
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/4c0aeb0d

Branch: refs/heads/master
Commit: 4c0aeb0d530852bc12e5fcd930e336fb19434397
Parents: 2f51568
Author: Ankit Singhal 
Authored: Mon Oct 3 18:54:51 2016 +0530
Committer: Ankit Singhal 
Committed: Mon Oct 3 18:54:51 2016 +0530

--
 .../hbase/index/table/CachingHTableFactory.java | 104 ---
 .../index/table/CoprocessorHTableFactory.java   |   6 ++
 .../hbase/index/table/HTableFactory.java|   4 +-
 .../hbase/index/write/IndexWriterUtils.java |   3 +
 .../write/ParallelWriterIndexCommitter.java |  21 ++--
 .../TrackingParallelWriterIndexCommitter.java   |  18 ++--
 .../hbase/index/write/FakeTableFactory.java |   9 +-
 .../index/write/TestCachingHTableFactory.java   |  37 ---
 .../hbase/index/write/TestIndexWriter.java  |  24 +++--
 .../index/write/TestParalleIndexWriter.java |  16 ++-
 .../write/TestParalleWriterIndexCommitter.java  |  15 ++-
 11 files changed, 197 insertions(+), 60 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/4c0aeb0d/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
index 0c06e2b..d0df5b3 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
@@ -17,18 +17,30 @@
  */
 package org.apache.phoenix.hbase.index.table;
 
+import static org.apache.phoenix.hbase.index.write.IndexWriterUtils.HTABLE_KEEP_ALIVE_KEY;
+import static org.apache.phoenix.hbase.index.write.IndexWriterUtils.INDEX_WRITES_THREAD_MAX_PER_REGIONSERVER_KEY;
+
 import java.io.IOException;
 import java.util.Map;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.SynchronousQueue;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
 
 import org.apache.commons.collections.map.LRUMap;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
 import org.apache.hadoop.hbase.util.Bytes;
-
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.phoenix.execute.DelegateHTable;
 import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 
+import com.google.common.annotations.VisibleForTesting;
+
 /**
  * A simple cache that just uses usual GC mechanisms to cleanup unused {@link HTableInterface}s.
  * When requesting an {@link HTableInterface} via {@link #getTable}, you may get the same table as
@@ -47,7 +59,7 @@ public class CachingHTableFactory implements HTableFactory {
   public class HTableInterfaceLRUMap extends LRUMap {
 
 public HTableInterfaceLRUMap(int cacheSize) {
-  super(cacheSize);
+  super(cacheSize, true);
 }
 
 @Override
@@ -58,12 +70,18 @@ public class CachingHTableFactory implements HTableFactory {
 + " because it was evicted from the cache.");
   }
   try {
-table.close();
+synchronized (this) { // the whole operation of closing and checking the count should be atomic
+// and should not conflict with getTable()
+  if (((CachedHTableWrapper)table).getReferenceCount() <= 0) {
+table.close();
+return true;
+  }
+}
   } catch (IOException e) {
LOG.info("Failed to correctly close HTable: " + Bytes.toString(table.getTableName())
    + " ignoring since being removed from queue.");
   }
-  return true;
+  return false;
 }
   }
 
@@ -73,38 +91,94 @@ public class CachingHTableFactory implements HTableFactory {
 
   private static final Log LOG = LogFactory.getLog(CachingHTableFactory.class);
   private static final String CACHE_SIZE_KEY = "index.tablefactory.cache.size";
-  private static final int DEFAULT_CACHE_SIZE = 10;
+  private static final int 
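The fix quoted in the diff pairs a reference count on each cached table with an LRU eviction hook that refuses to close a table while another thread still has it checked out. As a rough illustration of that pattern (not the actual patch), here is a self-contained sketch: `CachedTable`, `RefCountingTableCache`, `acquire`, and `release` are hypothetical names, and `java.util.LinkedHashMap` stands in for both HBase's `HTableInterface` and commons-collections' `LRUMap`; only `getTable` and `getReferenceCount` come from the patch itself.

```java
import java.io.Closeable;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for the patch's CachedHTableWrapper around HTableInterface.
class CachedTable implements Closeable {
    final String name;
    private final AtomicInteger refCount = new AtomicInteger();
    volatile boolean closed = false;

    CachedTable(String name) { this.name = name; }
    void acquire() { refCount.incrementAndGet(); }
    void release() { refCount.decrementAndGet(); }
    int getReferenceCount() { return refCount.get(); }
    @Override public void close() { closed = true; }
}

// Sketch of the factory: lookup, refcount bump, and eviction all happen under
// one lock, so an in-use table can never be closed underneath a writer.
class RefCountingTableCache {
    private final int maxSize;
    // Access-order map: iteration starts at the least recently used entry.
    private final LinkedHashMap<String, CachedTable> cache =
            new LinkedHashMap<>(16, 0.75f, true);

    RefCountingTableCache(int maxSize) { this.maxSize = maxSize; }

    synchronized CachedTable getTable(String name) {
        CachedTable t = cache.computeIfAbsent(name, CachedTable::new);
        t.acquire();          // caller now holds a reference
        evictIdleTables();
        return t;
    }

    synchronized void returnTable(CachedTable t) { t.release(); }

    // Mirrors the patched removeLRU combined with LRUMap's scan behavior:
    // walk from the LRU end and close only entries whose reference count has
    // reached zero, skipping tables another thread is still writing through.
    private void evictIdleTables() {
        Iterator<Map.Entry<String, CachedTable>> it = cache.entrySet().iterator();
        while (it.hasNext() && cache.size() > maxSize) {
            CachedTable t = it.next().getValue();
            if (t.getReferenceCount() <= 0) {
                t.close();
                it.remove();
            }
        }
    }
}
```

The `super(cacheSize, true)` change in the diff enables commons-collections' `scanUntilRemovable` mode, which serves the same purpose as the skip-and-continue loop above: when `removeLRU` returns false because the table is still referenced, the map moves on to the next-least-recently-used entry instead of evicting the busy one.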

Build failed in Jenkins: Phoenix Compile Level Compatibility with HBase #70

2016-10-03 Thread Apache Jenkins Server
See 

--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-1 (Ubuntu ubuntu1 yahoo-not-h2 ubuntu docker) in 
workspace 
[Phoenix_Compile_Compat_wHBase] $ /bin/bash /tmp/hudson7121465499180079535.sh
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 386178
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 6
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 10240
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited
core id : 0
core id : 1
core id : 10
core id : 9
physical id : 0
physical id : 1
MemTotal:   49453340 kB
MemFree:10066316 kB
Filesystem  Size  Used Avail Use% Mounted on
udev 24G   12K   24G   1% /dev
tmpfs   4.8G  824K  4.8G   1% /run
/dev/dm-0   3.6T  790G  2.6T  23% /
none4.0K 0  4.0K   0% /sys/fs/cgroup
none5.0M 0  5.0M   0% /run/lock
none 24G   16K   24G   1% /run/shm
none100M 0  100M   0% /run/user
/dev/sda2   237M  114M  111M  51% /boot
3.0.4
apache-maven-2.2.1
apache-maven-3.0.4
apache-maven-3.0.4-jenkins
apache-maven-3.0.5
apache-maven-3.2.1
apache-maven-3.2.5
apache-maven-3.3.3
apache-maven-3.3.9
latest
latest2
latest3
maven


===
Verifying compile level compatibility with HBase 0.98 with Phoenix 4.x-HBase-0.98
===

Cloning into 'hbase'...
Switched to a new branch '0.98'
Branch 0.98 set up to track remote branch 0.98 from origin.

main:
 [exec] 
~/jenkins-slave/workspace/Phoenix_Compile_Compat_wHBase/hbase/hbase-common 
~/jenkins-slave/workspace/Phoenix_Compile_Compat_wHBase/hbase/hbase-common
 [exec] 
~/jenkins-slave/workspace/Phoenix_Compile_Compat_wHBase/hbase/hbase-common

main:
[mkdir] Created dir: 

 [exec] tar: hadoop-snappy-nativelibs.tar: Cannot open: No such file or 
directory
 [exec] tar: Error is not recoverable: exiting now
 [exec] Result: 2

main:
[mkdir] Created dir: 

 [copy] Copying 20 files to 

[mkdir] Created dir: 

[mkdir] Created dir: 


main:
[mkdir] Created dir: 

 [copy] Copying 17 files to 

[mkdir] Created dir: 


main:
[mkdir] Created dir: 

 [copy] Copying 1 file to 

[mkdir] Created dir: 


HBase pom.xml:

Got HBase version as 0.98.23-SNAPSHOT
Cloning into 'phoenix'...
Switched to a new branch '4.x-HBase-0.98'
Branch 4.x-HBase-0.98 set up to track remote branch 4.x-HBase-0.98 from origin.
ANTLR Parser Generator  Version 3.5.2
Output file 

 does not exist: must build 

PhoenixSQL.g


===
Verifying compile level compatibility with HBase branch-1.2 with Phoenix master