Apache-Phoenix | Master | Build Successful

2019-07-24 Thread Apache Jenkins Server
Master branch build status Successful
Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/master

Last Successful Compiled Artifacts https://builds.apache.org/job/Phoenix-master/lastSuccessfulBuild/artifact/

Last Complete Test Report https://builds.apache.org/job/Phoenix-master/lastCompletedBuild/testReport/

Changes
[larsh] PHOENIX-5406 Speed up ParameterizedIndexUpgradeToolIT.



Build times for last couple of runs. Latest build time is the rightmost. | Legend: blue: normal, red: test failure, gray: timeout


Apache Phoenix - Timeout crawler - Build https://builds.apache.org/job/Phoenix-master/2483/

2019-07-24 Thread Apache Jenkins Server
[...truncated 52 lines...]

Apache-Phoenix | 4.x-HBase-1.3 | Build Successful

2019-07-24 Thread Apache Jenkins Server
4.x-HBase-1.3 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-1.3

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/lastCompletedBuild/testReport/

Changes
[larsh] PHOENIX-5406 Speed up ParameterizedIndexUpgradeToolIT.



Build times for last couple of runs. Latest build time is the rightmost. | Legend: blue: normal, red: test failure, gray: timeout



Jenkins build is back to normal : Phoenix-4.x-HBase-1.3 #512

2019-07-24 Thread Apache Jenkins Server
See 




[phoenix] branch master updated: PHOENIX-5406 Speed up ParameterizedIndexUpgradeToolIT.

2019-07-24 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 4315fd2  PHOENIX-5406 Speed up ParameterizedIndexUpgradeToolIT.
4315fd2 is described below

commit 4315fd2a72474630742a7a88cff75ece9e3a591c
Author: Lars Hofhansl 
AuthorDate: Wed Jul 24 18:42:47 2019 -0700

PHOENIX-5406 Speed up ParameterizedIndexUpgradeToolIT.
---
 .../end2end/ParameterizedIndexUpgradeToolIT.java   | 17 
 .../apache/phoenix/index/IndexUpgradeToolTest.java | 48 ++
 2 files changed, 48 insertions(+), 17 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
index 2cde910..0f71733 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
@@ -18,7 +18,6 @@
 package org.apache.phoenix.end2end;
 
 import com.google.common.collect.Maps;
-import org.apache.commons.cli.CommandLine;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
 import org.apache.phoenix.hbase.index.IndexRegionObserver;
@@ -281,22 +280,6 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
         validate(true);
     }
 
-    @Test
-    public void testCommandLineParsing() {
-
-        String outputFile = "/tmp/index_upgrade_" + UUID.randomUUID().toString();
-        String [] args = {"-o", upgrade ? UPGRADE_OP : ROLLBACK_OP, "-tb",
-                INPUT_LIST, "-lf", outputFile, "-d"};
-        IndexUpgradeTool iut = new IndexUpgradeTool();
-
-        CommandLine cmd = iut.parseOptions(args);
-        iut.initializeTool(cmd);
-        Assert.assertEquals(iut.getDryRun(), true);
-        Assert.assertEquals(iut.getInputTables(), INPUT_LIST);
-        Assert.assertEquals(iut.getOperation(), upgrade ? UPGRADE_OP : ROLLBACK_OP);
-        Assert.assertEquals(iut.getLogFile(), outputFile);
-    }
-
 @After
 public void cleanup() throws SQLException {
 //TEST.MOCK1,TEST1.MOCK2,TEST.MOCK3
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java b/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
new file mode 100644
index 000..e985479
--- /dev/null
+++ b/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
@@ -0,0 +1,48 @@
+package org.apache.phoenix.index;
+
+import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.ROLLBACK_OP;
+import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.UPGRADE_OP;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.UUID;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.phoenix.mapreduce.index.IndexUpgradeTool;
+import org.junit.Assert;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameters;
+
+@RunWith(Parameterized.class)
+public class IndexUpgradeToolTest {
+    private static final String INPUT_LIST = "TEST.MOCK1,TEST1.MOCK2,TEST.MOCK3";
+    private final boolean upgrade;
+
+    public IndexUpgradeToolTest(boolean upgrade) {
+        this.upgrade = upgrade;
+    }
+
+    @Test
+    public void testCommandLineParsing() {
+
+        String outputFile = "/tmp/index_upgrade_" + UUID.randomUUID().toString();
+        String [] args = {"-o", upgrade ? UPGRADE_OP : ROLLBACK_OP, "-tb",
+                INPUT_LIST, "-lf", outputFile, "-d"};
+        IndexUpgradeTool iut = new IndexUpgradeTool();
+
+        CommandLine cmd = iut.parseOptions(args);
+        iut.initializeTool(cmd);
+        Assert.assertEquals(iut.getDryRun(), true);
+        Assert.assertEquals(iut.getInputTables(), INPUT_LIST);
+        Assert.assertEquals(iut.getOperation(), upgrade ? UPGRADE_OP : ROLLBACK_OP);
+        Assert.assertEquals(iut.getLogFile(), outputFile);
+    }
+
+    @Parameters(name = "IndexUpgradeToolTest_mutable={1}")
+    public static Collection data() {
+        return Arrays.asList(false, true);
+    }
+
+}
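A note for readers of the new test above: the JUnit `Parameterized` runner instantiates the class once per value returned by `data()` and runs every `@Test` method for each instance. Two details are easy to miss: `@Parameters(name = "...{1}")` references parameter index 1, but `data()` supplies only a single boolean, so `{0}` is the index that resolves; and `Assert.assertEquals` is declared as `(expected, actual)`, so the test passes its arguments in reversed order (equality still holds either way, but failure messages swap the labels). The following plain-Java sketch simulates what the runner does with the two parameter values; the helper names and the literal operation strings here are illustrative assumptions, not Phoenix API.

```java
import java.util.List;
import java.util.UUID;

// Plain-Java sketch of the Parameterized flow: run the same check once per
// value that data() would supply. buildArgs and the flag strings are
// hypothetical stand-ins; the real constants live in IndexUpgradeTool.
public class ParameterizedSketch {

    static final String UPGRADE_OP = "upgrade";    // assumed value
    static final String ROLLBACK_OP = "rollback";  // assumed value

    // Mirror of the argv the test hands to IndexUpgradeTool.parseOptions().
    static String[] buildArgs(boolean upgrade, String tables, String logFile) {
        return new String[] {
            "-o", upgrade ? UPGRADE_OP : ROLLBACK_OP,  // operation
            "-tb", tables,                             // input table list
            "-lf", logFile,                            // log file
            "-d"                                       // dry run
        };
    }

    public static void main(String[] args) {
        String tables = "TEST.MOCK1,TEST1.MOCK2,TEST.MOCK3";
        String logFile = "/tmp/index_upgrade_" + UUID.randomUUID();
        // The Parameterized runner would construct the test once per value.
        for (boolean upgrade : List.of(false, true)) {
            String[] argv = buildArgs(upgrade, tables, logFile);
            // JUnit's Assert.assertEquals takes (expected, actual) in that order.
            if (!(upgrade ? UPGRADE_OP : ROLLBACK_OP).equals(argv[1])) {
                throw new AssertionError("operation flag mismatch");
            }
        }
    }
}
```

Moving this check out of `ParameterizedIndexUpgradeToolIT` into a plain unit test is what speeds the IT up: argument parsing needs no minicluster, so it no longer pays the integration-test setup cost.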



[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5406 Speed up ParameterizedIndexUpgradeToolIT.

2019-07-24 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 04e71d9  PHOENIX-5406 Speed up ParameterizedIndexUpgradeToolIT.
04e71d9 is described below

commit 04e71d913d8abc4d42f18c57d631308e88fced74
Author: Lars Hofhansl 
AuthorDate: Wed Jul 24 18:42:47 2019 -0700

PHOENIX-5406 Speed up ParameterizedIndexUpgradeToolIT.
---
 .../end2end/ParameterizedIndexUpgradeToolIT.java   | 17 
 .../apache/phoenix/index/IndexUpgradeToolTest.java | 48 ++
 2 files changed, 48 insertions(+), 17 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
index 24c0f39..ceea647 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
@@ -18,7 +18,6 @@
 package org.apache.phoenix.end2end;
 
 import com.google.common.collect.Maps;
-import org.apache.commons.cli.CommandLine;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
 import org.apache.phoenix.hbase.index.IndexRegionObserver;
@@ -280,22 +279,6 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
         validate(true);
     }
 
-    @Test
-    public void testCommandLineParsing() {
-
-        String outputFile = "/tmp/index_upgrade_" + UUID.randomUUID().toString();
-        String [] args = {"-o", upgrade ? UPGRADE_OP : ROLLBACK_OP, "-tb",
-                INPUT_LIST, "-lf", outputFile, "-d"};
-        IndexUpgradeTool iut = new IndexUpgradeTool();
-
-        CommandLine cmd = iut.parseOptions(args);
-        iut.initializeTool(cmd);
-        Assert.assertEquals(iut.getDryRun(), true);
-        Assert.assertEquals(iut.getInputTables(), INPUT_LIST);
-        Assert.assertEquals(iut.getOperation(), upgrade ? UPGRADE_OP : ROLLBACK_OP);
-        Assert.assertEquals(iut.getLogFile(), outputFile);
-    }
-
 @After
 public void cleanup() throws SQLException {
 //TEST.MOCK1,TEST1.MOCK2,TEST.MOCK3
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java b/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
new file mode 100644
index 000..e985479
--- /dev/null
+++ b/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
@@ -0,0 +1,48 @@
+package org.apache.phoenix.index;
+
+import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.ROLLBACK_OP;
+import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.UPGRADE_OP;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.UUID;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.phoenix.mapreduce.index.IndexUpgradeTool;
+import org.junit.Assert;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameters;
+
+@RunWith(Parameterized.class)
+public class IndexUpgradeToolTest {
+    private static final String INPUT_LIST = "TEST.MOCK1,TEST1.MOCK2,TEST.MOCK3";
+    private final boolean upgrade;
+
+    public IndexUpgradeToolTest(boolean upgrade) {
+        this.upgrade = upgrade;
+    }
+
+    @Test
+    public void testCommandLineParsing() {
+
+        String outputFile = "/tmp/index_upgrade_" + UUID.randomUUID().toString();
+        String [] args = {"-o", upgrade ? UPGRADE_OP : ROLLBACK_OP, "-tb",
+                INPUT_LIST, "-lf", outputFile, "-d"};
+        IndexUpgradeTool iut = new IndexUpgradeTool();
+
+        CommandLine cmd = iut.parseOptions(args);
+        iut.initializeTool(cmd);
+        Assert.assertEquals(iut.getDryRun(), true);
+        Assert.assertEquals(iut.getInputTables(), INPUT_LIST);
+        Assert.assertEquals(iut.getOperation(), upgrade ? UPGRADE_OP : ROLLBACK_OP);
+        Assert.assertEquals(iut.getLogFile(), outputFile);
+    }
+
+    @Parameters(name = "IndexUpgradeToolTest_mutable={1}")
+    public static Collection data() {
+        return Arrays.asList(false, true);
+    }
+
+}



[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5406 Speed up ParameterizedIndexUpgradeToolIT.

2019-07-24 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new fca930c  PHOENIX-5406 Speed up ParameterizedIndexUpgradeToolIT.
fca930c is described below

commit fca930cc05150a25353af45ad372ea2252b3df90
Author: Lars Hofhansl 
AuthorDate: Wed Jul 24 18:42:47 2019 -0700

PHOENIX-5406 Speed up ParameterizedIndexUpgradeToolIT.
---
 .../end2end/ParameterizedIndexUpgradeToolIT.java   | 17 
 .../apache/phoenix/index/IndexUpgradeToolTest.java | 48 ++
 2 files changed, 48 insertions(+), 17 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
index 24c0f39..ceea647 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
@@ -18,7 +18,6 @@
 package org.apache.phoenix.end2end;
 
 import com.google.common.collect.Maps;
-import org.apache.commons.cli.CommandLine;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
 import org.apache.phoenix.hbase.index.IndexRegionObserver;
@@ -280,22 +279,6 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
         validate(true);
     }
 
-    @Test
-    public void testCommandLineParsing() {
-
-        String outputFile = "/tmp/index_upgrade_" + UUID.randomUUID().toString();
-        String [] args = {"-o", upgrade ? UPGRADE_OP : ROLLBACK_OP, "-tb",
-                INPUT_LIST, "-lf", outputFile, "-d"};
-        IndexUpgradeTool iut = new IndexUpgradeTool();
-
-        CommandLine cmd = iut.parseOptions(args);
-        iut.initializeTool(cmd);
-        Assert.assertEquals(iut.getDryRun(), true);
-        Assert.assertEquals(iut.getInputTables(), INPUT_LIST);
-        Assert.assertEquals(iut.getOperation(), upgrade ? UPGRADE_OP : ROLLBACK_OP);
-        Assert.assertEquals(iut.getLogFile(), outputFile);
-    }
-
 @After
 public void cleanup() throws SQLException {
 //TEST.MOCK1,TEST1.MOCK2,TEST.MOCK3
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java b/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
new file mode 100644
index 000..e985479
--- /dev/null
+++ b/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
@@ -0,0 +1,48 @@
+package org.apache.phoenix.index;
+
+import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.ROLLBACK_OP;
+import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.UPGRADE_OP;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.UUID;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.phoenix.mapreduce.index.IndexUpgradeTool;
+import org.junit.Assert;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameters;
+
+@RunWith(Parameterized.class)
+public class IndexUpgradeToolTest {
+    private static final String INPUT_LIST = "TEST.MOCK1,TEST1.MOCK2,TEST.MOCK3";
+    private final boolean upgrade;
+
+    public IndexUpgradeToolTest(boolean upgrade) {
+        this.upgrade = upgrade;
+    }
+
+    @Test
+    public void testCommandLineParsing() {
+
+        String outputFile = "/tmp/index_upgrade_" + UUID.randomUUID().toString();
+        String [] args = {"-o", upgrade ? UPGRADE_OP : ROLLBACK_OP, "-tb",
+                INPUT_LIST, "-lf", outputFile, "-d"};
+        IndexUpgradeTool iut = new IndexUpgradeTool();
+
+        CommandLine cmd = iut.parseOptions(args);
+        iut.initializeTool(cmd);
+        Assert.assertEquals(iut.getDryRun(), true);
+        Assert.assertEquals(iut.getInputTables(), INPUT_LIST);
+        Assert.assertEquals(iut.getOperation(), upgrade ? UPGRADE_OP : ROLLBACK_OP);
+        Assert.assertEquals(iut.getLogFile(), outputFile);
+    }
+
+    @Parameters(name = "IndexUpgradeToolTest_mutable={1}")
+    public static Collection data() {
+        return Arrays.asList(false, true);
+    }
+
+}



[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5406 Speed up ParameterizedIndexUpgradeToolIT.

2019-07-24 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new bbf3033  PHOENIX-5406 Speed up ParameterizedIndexUpgradeToolIT.
bbf3033 is described below

commit bbf3033c45e20df68e9627c30e508615cf954baa
Author: Lars Hofhansl 
AuthorDate: Wed Jul 24 18:42:47 2019 -0700

PHOENIX-5406 Speed up ParameterizedIndexUpgradeToolIT.
---
 .../end2end/ParameterizedIndexUpgradeToolIT.java   | 17 
 .../apache/phoenix/index/IndexUpgradeToolTest.java | 48 ++
 2 files changed, 48 insertions(+), 17 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
index 24c0f39..ceea647 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
@@ -18,7 +18,6 @@
 package org.apache.phoenix.end2end;
 
 import com.google.common.collect.Maps;
-import org.apache.commons.cli.CommandLine;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
 import org.apache.phoenix.hbase.index.IndexRegionObserver;
@@ -280,22 +279,6 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
         validate(true);
     }
 
-    @Test
-    public void testCommandLineParsing() {
-
-        String outputFile = "/tmp/index_upgrade_" + UUID.randomUUID().toString();
-        String [] args = {"-o", upgrade ? UPGRADE_OP : ROLLBACK_OP, "-tb",
-                INPUT_LIST, "-lf", outputFile, "-d"};
-        IndexUpgradeTool iut = new IndexUpgradeTool();
-
-        CommandLine cmd = iut.parseOptions(args);
-        iut.initializeTool(cmd);
-        Assert.assertEquals(iut.getDryRun(), true);
-        Assert.assertEquals(iut.getInputTables(), INPUT_LIST);
-        Assert.assertEquals(iut.getOperation(), upgrade ? UPGRADE_OP : ROLLBACK_OP);
-        Assert.assertEquals(iut.getLogFile(), outputFile);
-    }
-
 @After
 public void cleanup() throws SQLException {
 //TEST.MOCK1,TEST1.MOCK2,TEST.MOCK3
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java b/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
new file mode 100644
index 000..e985479
--- /dev/null
+++ b/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
@@ -0,0 +1,48 @@
+package org.apache.phoenix.index;
+
+import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.ROLLBACK_OP;
+import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.UPGRADE_OP;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.UUID;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.phoenix.mapreduce.index.IndexUpgradeTool;
+import org.junit.Assert;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameters;
+
+@RunWith(Parameterized.class)
+public class IndexUpgradeToolTest {
+    private static final String INPUT_LIST = "TEST.MOCK1,TEST1.MOCK2,TEST.MOCK3";
+    private final boolean upgrade;
+
+    public IndexUpgradeToolTest(boolean upgrade) {
+        this.upgrade = upgrade;
+    }
+
+    @Test
+    public void testCommandLineParsing() {
+
+        String outputFile = "/tmp/index_upgrade_" + UUID.randomUUID().toString();
+        String [] args = {"-o", upgrade ? UPGRADE_OP : ROLLBACK_OP, "-tb",
+                INPUT_LIST, "-lf", outputFile, "-d"};
+        IndexUpgradeTool iut = new IndexUpgradeTool();
+
+        CommandLine cmd = iut.parseOptions(args);
+        iut.initializeTool(cmd);
+        Assert.assertEquals(iut.getDryRun(), true);
+        Assert.assertEquals(iut.getInputTables(), INPUT_LIST);
+        Assert.assertEquals(iut.getOperation(), upgrade ? UPGRADE_OP : ROLLBACK_OP);
+        Assert.assertEquals(iut.getLogFile(), outputFile);
+    }
+
+    @Parameters(name = "IndexUpgradeToolTest_mutable={1}")
+    public static Collection data() {
+        return Arrays.asList(false, true);
+    }
+
+}



[phoenix] branch 4.14-HBase-1.4 updated: PHOENIX-5302: Different isNamespaceMappingEnabled for server / client causes TableNotFoundException

2019-07-24 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.4 by this push:
 new e7d9654  PHOENIX-5302: Different isNamespaceMappingEnabled for server 
/ client causes TableNotFoundException
e7d9654 is described below

commit e7d965401ae093317f3b8f2d3ff912e4a6abd392
Author: Chinmay Kulkarni 
AuthorDate: Tue Jul 16 16:24:30 2019 -0700

PHOENIX-5302: Different isNamespaceMappingEnabled for server / client 
causes TableNotFoundException
---
 .../SystemCatalogCreationOnConnectionIT.java   | 168 -
 .../phoenix/query/ConnectionQueryServicesImpl.java |  70 +++--
 2 files changed, 121 insertions(+), 117 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
index 59af533..99f1216 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
@@ -17,6 +17,7 @@
  */
 package org.apache.phoenix.end2end;
 
+import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_SCHEMA;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
@@ -26,6 +27,7 @@ import static org.apache.phoenix.query.BaseTest.generateUniqueName;
 import java.io.IOException;
 import java.sql.Connection;
 import java.sql.DriverManager;
+import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.HashMap;
@@ -39,6 +41,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseTestingUtility;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.NamespaceNotFoundException;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.phoenix.coprocessor.MetaDataProtocol;
 import org.apache.phoenix.exception.SQLExceptionCode;
@@ -47,7 +50,11 @@ import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixDriver;
 import org.apache.phoenix.jdbc.PhoenixEmbeddedDriver;
 import org.apache.phoenix.jdbc.PhoenixTestDriver;
-import org.apache.phoenix.query.*;
+import org.apache.phoenix.query.ConnectionQueryServices;
+import org.apache.phoenix.query.ConnectionQueryServicesImpl;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.query.QueryServicesTestImpl;
 import org.apache.phoenix.util.ReadOnlyProps;
 import org.apache.phoenix.util.UpgradeUtil;
 import org.junit.After;
@@ -85,7 +92,7 @@ public class SystemCatalogCreationOnConnectionIT {
 
     private static class PhoenixSysCatCreationServices extends ConnectionQueryServicesImpl {
 
-        public PhoenixSysCatCreationServices(QueryServices services, PhoenixEmbeddedDriver.ConnectionInfo connectionInfo, Properties info) {
+        PhoenixSysCatCreationServices(QueryServices services, PhoenixEmbeddedDriver.ConnectionInfo connectionInfo, Properties info) {
             super(services, connectionInfo, info);
         }
 
@@ -119,7 +126,7 @@ public class SystemCatalogCreationOnConnectionIT {
         private ConnectionQueryServices cqs;
         private final ReadOnlyProps overrideProps;
 
-        public PhoenixSysCatCreationTestingDriver(ReadOnlyProps props) {
+        PhoenixSysCatCreationTestingDriver(ReadOnlyProps props) {
             overrideProps = props;
         }
 
@@ -136,7 +143,7 @@ public class SystemCatalogCreationOnConnectionIT {
         // used ConnectionQueryServices instance. This is used only in cases where we need to test server-side
         // changes and don't care about client-side properties set from the init method.
         // Reset the Connection Query Services instance so we can create a new connection to the cluster
-        public void resetCQS() {
+        void resetCQS() {
            cqs = null;
        }
    }
@@ -176,7 +183,7 @@ public class SystemCatalogCreationOnConnectionIT {
        driver.getConnectionQueryServices(getJdbcUrl(), propsDoNotUpgradePropSet);
        hbaseTables = getHBaseTables();
        assertFalse(hbaseTables.contains(PHOENIX_SYSTEM_CATALOG) || hbaseTables.contains(PHOENIX_NAMESPACE_MAPPED_SYSTEM_CATALOG));
-        assertTrue(hbaseTables.size() == 0);
+        assertEquals(0, hbaseTables.size());
        assertEquals(1, countUpgradeAttempts);
    }
 
@@ -184,23 +191,6 @@ public class SystemCatalogCreationOnConnectionIT {
    /* Testing SYSTEM.CATALOG/SYSTEM:CATALOG creation/upgrade behavior for subsequent connections */
 
 
-// 

[phoenix] branch 4.14-HBase-1.3 updated: PHOENIX-5302: Different isNamespaceMappingEnabled for server / client causes TableNotFoundException

2019-07-24 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.3 by this push:
 new 3aee060  PHOENIX-5302: Different isNamespaceMappingEnabled for server 
/ client causes TableNotFoundException
3aee060 is described below

commit 3aee060cfed5a3182c13e19962db2f64a96eae3b
Author: Chinmay Kulkarni 
AuthorDate: Tue Jul 16 16:24:30 2019 -0700

PHOENIX-5302: Different isNamespaceMappingEnabled for server / client 
causes TableNotFoundException
---
 .../SystemCatalogCreationOnConnectionIT.java   | 168 -
 .../phoenix/query/ConnectionQueryServicesImpl.java |  70 +++--
 2 files changed, 121 insertions(+), 117 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
index 59af533..99f1216 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
@@ -17,6 +17,7 @@
  */
 package org.apache.phoenix.end2end;
 
+import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_SCHEMA;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
@@ -26,6 +27,7 @@ import static org.apache.phoenix.query.BaseTest.generateUniqueName;
 import java.io.IOException;
 import java.sql.Connection;
 import java.sql.DriverManager;
+import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.HashMap;
@@ -39,6 +41,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseTestingUtility;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.NamespaceNotFoundException;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.phoenix.coprocessor.MetaDataProtocol;
 import org.apache.phoenix.exception.SQLExceptionCode;
@@ -47,7 +50,11 @@ import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixDriver;
 import org.apache.phoenix.jdbc.PhoenixEmbeddedDriver;
 import org.apache.phoenix.jdbc.PhoenixTestDriver;
-import org.apache.phoenix.query.*;
+import org.apache.phoenix.query.ConnectionQueryServices;
+import org.apache.phoenix.query.ConnectionQueryServicesImpl;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.query.QueryServicesTestImpl;
 import org.apache.phoenix.util.ReadOnlyProps;
 import org.apache.phoenix.util.UpgradeUtil;
 import org.junit.After;
@@ -85,7 +92,7 @@ public class SystemCatalogCreationOnConnectionIT {
 
     private static class PhoenixSysCatCreationServices extends ConnectionQueryServicesImpl {
 
-        public PhoenixSysCatCreationServices(QueryServices services, PhoenixEmbeddedDriver.ConnectionInfo connectionInfo, Properties info) {
+        PhoenixSysCatCreationServices(QueryServices services, PhoenixEmbeddedDriver.ConnectionInfo connectionInfo, Properties info) {
             super(services, connectionInfo, info);
         }
 
@@ -119,7 +126,7 @@ public class SystemCatalogCreationOnConnectionIT {
         private ConnectionQueryServices cqs;
         private final ReadOnlyProps overrideProps;
 
-        public PhoenixSysCatCreationTestingDriver(ReadOnlyProps props) {
+        PhoenixSysCatCreationTestingDriver(ReadOnlyProps props) {
             overrideProps = props;
         }
 
@@ -136,7 +143,7 @@ public class SystemCatalogCreationOnConnectionIT {
         // used ConnectionQueryServices instance. This is used only in cases where we need to test server-side
         // changes and don't care about client-side properties set from the init method.
         // Reset the Connection Query Services instance so we can create a new connection to the cluster
-        public void resetCQS() {
+        void resetCQS() {
            cqs = null;
        }
    }
@@ -176,7 +183,7 @@ public class SystemCatalogCreationOnConnectionIT {
        driver.getConnectionQueryServices(getJdbcUrl(), propsDoNotUpgradePropSet);
        hbaseTables = getHBaseTables();
        assertFalse(hbaseTables.contains(PHOENIX_SYSTEM_CATALOG) || hbaseTables.contains(PHOENIX_NAMESPACE_MAPPED_SYSTEM_CATALOG));
-        assertTrue(hbaseTables.size() == 0);
+        assertEquals(0, hbaseTables.size());
        assertEquals(1, countUpgradeAttempts);
    }
 
@@ -184,23 +191,6 @@ public class SystemCatalogCreationOnConnectionIT {
    /* Testing SYSTEM.CATALOG/SYSTEM:CATALOG creation/upgrade behavior for subsequent connections */
 
 
-// 

Apache-Phoenix | 4.x-HBase-1.3 | Build Successful

2019-07-24 Thread Apache Jenkins Server
4.x-HBase-1.3 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-1.3

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/lastCompletedBuild/testReport/

Changes
[s.kadam] PHOENIX-5382 : Improved performance with Bulk operations over iterations



Build times for last couple of runs. Latest build time is the rightmost. | Legend: blue: normal, red: test failure, gray: timeout


Jenkins build is back to normal : Phoenix-4.x-HBase-1.4 #233

2019-07-24 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Phoenix-4.x-HBase-1.3 #511

2019-07-24 Thread Apache Jenkins Server
See 


Changes:

[s.kadam] PHOENIX-5382 : Improved performance with Bulk operations over iterations

--
[...truncated 453.88 KB...]
[INFO] Running org.apache.phoenix.util.Base62EncoderTest
[INFO] Running org.apache.phoenix.util.SequenceUtilTest
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.022 s 
- in org.apache.phoenix.util.Base62EncoderTest
[INFO] Tests run: 119, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.632 
s - in org.apache.phoenix.compile.WhereOptimizerTest
[INFO] Running org.apache.phoenix.util.MetaDataUtilTest
[INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.024 s - in org.apache.phoenix.util.SequenceUtilTest
[INFO] Running org.apache.phoenix.util.ColumnInfoTest
[INFO] Running org.apache.phoenix.util.csv.StringToArrayConverterTest
[INFO] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.02 s - in org.apache.phoenix.util.ColumnInfoTest
[INFO] Running org.apache.phoenix.util.csv.CsvUpsertExecutorTest
[INFO] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.13 s - in org.apache.phoenix.util.MetaDataUtilTest
[INFO] Running org.apache.phoenix.util.QualifierEncodingSchemeTest
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.001 s - in org.apache.phoenix.util.QualifierEncodingSchemeTest
[INFO] Running org.apache.phoenix.mapreduce.FormatToBytesWritableMapperTest
[INFO] Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.7 s - in org.apache.phoenix.compile.QueryOptimizerTest
[INFO] Running org.apache.phoenix.mapreduce.CsvBulkImportUtilTest
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.639 s - in org.apache.phoenix.util.PhoenixEncodeDecodeTest
[INFO] Running org.apache.phoenix.mapreduce.bulkload.TestTableRowkeyPair
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.004 s - in org.apache.phoenix.mapreduce.bulkload.TestTableRowkeyPair
[INFO] Running org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtilTest
[INFO] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.354 s - in org.apache.phoenix.mapreduce.FormatToBytesWritableMapperTest
[INFO] Running org.apache.phoenix.mapreduce.util.IndexColumnNamesTest
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.127 s - in org.apache.phoenix.util.json.JsonUpsertExecutorTest
[INFO] Running org.apache.phoenix.mapreduce.util.ColumnInfoToStringEncoderDecoderTest
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.675 s - in org.apache.phoenix.util.csv.StringToArrayConverterTest
[INFO] Running org.apache.phoenix.mapreduce.CsvToKeyValueMapperTest
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.016 s - in org.apache.phoenix.mapreduce.CsvToKeyValueMapperTest
[INFO] Running org.apache.phoenix.mapreduce.BulkLoadToolTest
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.052 s - in org.apache.phoenix.mapreduce.util.ColumnInfoToStringEncoderDecoderTest
[INFO] Running org.apache.phoenix.mapreduce.index.IndexScrutinyTableOutputTest
[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.026 s - in org.apache.phoenix.mapreduce.BulkLoadToolTest
[INFO] Running org.apache.phoenix.execute.MutationStateTest
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.024 s - in org.apache.phoenix.execute.MutationStateTest
[INFO] Running org.apache.phoenix.execute.LiteralResultIteratorPlanTest
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.014 s - in org.apache.phoenix.execute.LiteralResultIteratorPlanTest
[INFO] Running org.apache.phoenix.execute.CorrelatePlanTest
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.053 s - in org.apache.phoenix.execute.CorrelatePlanTest
[INFO] Running org.apache.phoenix.execute.UnnestArrayPlanTest
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.016 s - in org.apache.phoenix.execute.UnnestArrayPlanTest
[INFO] Running org.apache.phoenix.execute.DescVarLengthFastByteComparisonsTest
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.002 s - in org.apache.phoenix.execute.DescVarLengthFastByteComparisonsTest
[INFO] Running org.apache.phoenix.trace.TraceSpanReceiverTest
[INFO] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.715 s - in org.apache.phoenix.mapreduce.CsvBulkImportUtilTest
[INFO] Running org.apache.phoenix.query.QueryPlanTest
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.035 s - in org.apache.phoenix.util.csv.CsvUpsertExecutorTest
[INFO] Running org.apache.phoenix.query.KeyRangeClipTest
[INFO] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.58 s - in org.apache.phoenix.mapreduce.index.IndexScrutinyTableOutputTest
[INFO] Tests run: 2, 

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5382 : Improved performace with Bulk operations over iterations (addendum)

2019-07-24 Thread skadam
This is an automated email from the ASF dual-hosted git repository.

skadam pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 9a2c310  PHOENIX-5382 : Improved performace with Bulk operations over iterations (addendum)
9a2c310 is described below

commit 9a2c310d63b5b30a496ebf4541f4e46db0450c01
Author: Viraj Jasani 
AuthorDate: Wed Jul 24 18:21:06 2019 +0530

    PHOENIX-5382 : Improved performace with Bulk operations over iterations (addendum)

Signed-off-by: s.kadam 
---
 .../src/main/java/org/apache/phoenix/util/CSVCommonsLoader.java | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/util/CSVCommonsLoader.java b/phoenix-core/src/main/java/org/apache/phoenix/util/CSVCommonsLoader.java
index 59ed9cf..024e3cd 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/CSVCommonsLoader.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/CSVCommonsLoader.java
@@ -158,7 +158,11 @@ public class CSVCommonsLoader {
  * @return
  */
 public static char asControlCharacter(char delimiter) {
-return CTRL_CHARACTER_TABLE.getOrDefault(delimiter, delimiter);
+if(CTRL_CHARACTER_TABLE.containsKey(delimiter)) {
+return CTRL_CHARACTER_TABLE.get(delimiter);
+} else {
+return delimiter;
+}
 }
 
 /**


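For context on the one-line change above: Map.getOrDefault is a Java 8 API, and the explicit containsKey/get form it is replaced with behaves identically as long as the map stores no null values. A minimal standalone sketch (the class name and the map contents are hypothetical stand-ins, not the actual Phoenix table):

```java
import java.util.HashMap;
import java.util.Map;

public class ControlCharDemo {
    // Hypothetical stand-in for CSVCommonsLoader's CTRL_CHARACTER_TABLE:
    // maps a printable digit to the corresponding control character.
    static final Map<Character, Character> CTRL_CHARACTER_TABLE = new HashMap<>();
    static {
        CTRL_CHARACTER_TABLE.put('1', '\u0001');
        CTRL_CHARACTER_TABLE.put('2', '\u0002');
    }

    // Java 7-compatible form used by the patch: equivalent to
    // CTRL_CHARACTER_TABLE.getOrDefault(delimiter, delimiter)
    // when the map contains no null values.
    public static char asControlCharacter(char delimiter) {
        if (CTRL_CHARACTER_TABLE.containsKey(delimiter)) {
            return CTRL_CHARACTER_TABLE.get(delimiter);
        } else {
            return delimiter;
        }
    }

    public static void main(String[] args) {
        System.out.println((int) asControlCharacter('1')); // mapped to a control char
        System.out.println(asControlCharacter(','));       // unmapped: passed through
    }
}
```

Whether the motivation was Java 7 source compatibility on the 4.x branches is an assumption; the email itself gives no rationale.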

[phoenix] 02/02: PHOENIX-5382 : Improved performace with Bulk operations over iterations (addendum)

2019-07-24 Thread skadam
This is an automated email from the ASF dual-hosted git repository.

skadam pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit b962034e48314b51580a333f5f7ddf5b57e74a2d
Author: Viraj Jasani 
AuthorDate: Wed Jul 24 18:21:06 2019 +0530

    PHOENIX-5382 : Improved performace with Bulk operations over iterations (addendum)

Signed-off-by: s.kadam 
---
 .../src/main/java/org/apache/phoenix/util/CSVCommonsLoader.java | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/util/CSVCommonsLoader.java b/phoenix-core/src/main/java/org/apache/phoenix/util/CSVCommonsLoader.java
index 59ed9cf..024e3cd 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/CSVCommonsLoader.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/CSVCommonsLoader.java
@@ -158,7 +158,11 @@ public class CSVCommonsLoader {
  * @return
  */
 public static char asControlCharacter(char delimiter) {
-return CTRL_CHARACTER_TABLE.getOrDefault(delimiter, delimiter);
+if(CTRL_CHARACTER_TABLE.containsKey(delimiter)) {
+return CTRL_CHARACTER_TABLE.get(delimiter);
+} else {
+return delimiter;
+}
 }
 
 /**



[phoenix] 01/02: PHOENIX-4918 Apache Phoenix website Grammar page is running on an old version with cvs format fix (addendum)

2019-07-24 Thread skadam
This is an automated email from the ASF dual-hosted git repository.

skadam pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit 99b3b9f2d702345a68d0f8a4dc014da00657bf4a
Author: Xinyi 
AuthorDate: Wed Jul 3 17:24:43 2019 -0700

    PHOENIX-4918 Apache Phoenix website Grammar page is running on an old version with cvs format fix (addendum)

Signed-off-by: s.kadam 
---
 docs/phoenix.csv | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/phoenix.csv b/docs/phoenix.csv
index d881e5e..fc0c0c8 100644
--- a/docs/phoenix.csv
+++ b/docs/phoenix.csv
@@ -1974,7 +1974,7 @@ similarly to the single-argument ""TO_DATE"" function.
 ","
 TO_DATE('Sat, 3 Feb 2001 03:05:06 GMT', 'EEE, d MMM  HH:mm:ss z')
 TO_DATE('1970-01-01', '-MM-dd', 'GMT+1')
-date "1970-01-01 12:30:00"
+date '1970-01-01 12:30:00'
 "
 
 "Functions (Time and Date)","CURRENT_DATE","



[phoenix] branch 4.x-HBase-1.4 updated (2a670b0 -> b962034)

2019-07-24 Thread skadam
This is an automated email from the ASF dual-hosted git repository.

skadam pushed a change to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git.


from 2a670b0  PHOENIX-5360 Cleanup anonymous inner classes in WhereOptimizer
 new 99b3b9f  PHOENIX-4918 Apache Phoenix website Grammar page is running on an old version with cvs format fix (addendum)
 new b962034  PHOENIX-5382 : Improved performace with Bulk operations over iterations (addendum)

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/phoenix.csv | 2 +-
 .../src/main/java/org/apache/phoenix/util/CSVCommonsLoader.java | 6 +-
 2 files changed, 6 insertions(+), 2 deletions(-)



[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5382 : Improved performace with Bulk operations over iterations (addendum)

2019-07-24 Thread skadam
This is an automated email from the ASF dual-hosted git repository.

skadam pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new dbbd7ef  PHOENIX-5382 : Improved performace with Bulk operations over iterations (addendum)
dbbd7ef is described below

commit dbbd7ef4805caf3ce07b0aace44fda756324860c
Author: Viraj Jasani 
AuthorDate: Wed Jul 24 18:21:06 2019 +0530

    PHOENIX-5382 : Improved performace with Bulk operations over iterations (addendum)

Signed-off-by: s.kadam 
---
 .../src/main/java/org/apache/phoenix/util/CSVCommonsLoader.java | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/util/CSVCommonsLoader.java b/phoenix-core/src/main/java/org/apache/phoenix/util/CSVCommonsLoader.java
index 59ed9cf..024e3cd 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/CSVCommonsLoader.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/CSVCommonsLoader.java
@@ -158,7 +158,11 @@ public class CSVCommonsLoader {
  * @return
  */
 public static char asControlCharacter(char delimiter) {
-return CTRL_CHARACTER_TABLE.getOrDefault(delimiter, delimiter);
+if(CTRL_CHARACTER_TABLE.containsKey(delimiter)) {
+return CTRL_CHARACTER_TABLE.get(delimiter);
+} else {
+return delimiter;
+}
 }
 
 /**



[phoenix] branch 4.14-HBase-1.4 updated: PHOENIX-5382 : Improved performace with Bulk operations over iterations (addendum)

2019-07-24 Thread skadam
This is an automated email from the ASF dual-hosted git repository.

skadam pushed a commit to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.4 by this push:
 new 4967d01  PHOENIX-5382 : Improved performace with Bulk operations over iterations (addendum)
4967d01 is described below

commit 4967d0142fc4cf440993e10bebaf868f7fb99d4d
Author: Viraj Jasani 
AuthorDate: Wed Jul 24 17:03:34 2019 +0530

    PHOENIX-5382 : Improved performace with Bulk operations over iterations (addendum)

Signed-off-by: s.kadam 
---
 .../src/main/java/org/apache/phoenix/compile/FromCompiler.java  | 3 +--
 .../java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java  | 5 ++---
 .../src/main/java/org/apache/phoenix/util/CSVCommonsLoader.java | 6 +-
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
index eb4e6a8..dab0ef1 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
@@ -695,11 +695,10 @@ public class FromCompiler {
 throws SQLException {
 if (!dynColumns.isEmpty()) {
 List existingColumns = theTable.getColumns();
+// Need to skip the salting column, as it's added in the makePTable call below
 List allcolumns = new ArrayList<>(
 theTable.getBucketNum() == null ? existingColumns :
 existingColumns.subList(1, existingColumns.size()));
-// Need to skip the salting column, as it's added in the makePTable call below
-allcolumns.addAll(theTable.getBucketNum() == null ? existingColumns : existingColumns.subList(1, existingColumns.size()));
 // Position still based on with the salting columns
 int position = existingColumns.size();
 PName defaultFamilyName = PNameFactory.newName(SchemaUtil.getEmptyColumnFamily(theTable));
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index 5047991..1ab402e 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -3253,9 +3253,8 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
 mutateTable.setInt(6, numColumns + 1);
 mutateTable.execute();
 }
-List tableMetadata = new ArrayList<>(
-metaConnection.getMutationState().toMutations(metaConnection.getSCN()).next()
-.getSecond());
+List tableMetadata = new ArrayList<>();
+tableMetadata.addAll(metaConnection.getMutationState().toMutations(metaConnection.getSCN()).next().getSecond());
 metaConnection.rollback();
 PColumn column = new PColumnImpl(PNameFactory.newName("COLUMN_QUALIFIER"),
 PNameFactory.newName(DEFAULT_COLUMN_FAMILY_NAME), PVarbinary.INSTANCE, null, null, true, numColumns,
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/util/CSVCommonsLoader.java b/phoenix-core/src/main/java/org/apache/phoenix/util/CSVCommonsLoader.java
index 59ed9cf..024e3cd 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/CSVCommonsLoader.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/CSVCommonsLoader.java
@@ -158,7 +158,11 @@ public class CSVCommonsLoader {
  * @return
  */
 public static char asControlCharacter(char delimiter) {
-return CTRL_CHARACTER_TABLE.getOrDefault(delimiter, delimiter);
+if(CTRL_CHARACTER_TABLE.containsKey(delimiter)) {
+return CTRL_CHARACTER_TABLE.get(delimiter);
+} else {
+return delimiter;
+}
 }
 
 /**


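The FromCompiler hunk above fixes a duplicated-add bug: the columns were already handed to the ArrayList constructor, so the subsequent addAll from the same source appended every column a second time. A simplified, non-Phoenix sketch of the two patterns (names and types here are illustrative only):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DuplicateAddDemo {
    // Buggy pattern removed by the patch: seed the list via the
    // constructor AND addAll from the same source -> every element twice.
    public static int buggySize(List<String> columns) {
        List<String> all = new ArrayList<>(columns);
        all.addAll(columns);
        return all.size();
    }

    // Fixed pattern kept by the patch: build the list once.
    public static int fixedSize(List<String> columns) {
        List<String> all = new ArrayList<>(columns);
        return all.size();
    }

    public static void main(String[] args) {
        List<String> cols = Arrays.asList("PK", "COL1", "COL2");
        System.out.println(buggySize(cols)); // 6 -- every column duplicated
        System.out.println(fixedSize(cols)); // 3
    }
}
```

The ConnectionQueryServicesImpl hunk in the same commit goes the other way (constructor call split into an empty list plus addAll); both forms are behaviorally equivalent for a single source collection.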

[phoenix] branch 4.14-HBase-1.3 updated: PHOENIX-5382 : Improved performace with Bulk operations over iterations (addendum)

2019-07-24 Thread skadam
This is an automated email from the ASF dual-hosted git repository.

skadam pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.3 by this push:
 new 4aeb265  PHOENIX-5382 : Improved performace with Bulk operations over iterations (addendum)
4aeb265 is described below

commit 4aeb265ab55b21e1385b98f2e701fad46ee3b848
Author: Viraj Jasani 
AuthorDate: Wed Jul 24 17:03:34 2019 +0530

    PHOENIX-5382 : Improved performace with Bulk operations over iterations (addendum)

Signed-off-by: s.kadam 
---
 .../src/main/java/org/apache/phoenix/compile/FromCompiler.java  | 3 +--
 .../java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java  | 5 ++---
 .../src/main/java/org/apache/phoenix/util/CSVCommonsLoader.java | 6 +-
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
index eb4e6a8..dab0ef1 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
@@ -695,11 +695,10 @@ public class FromCompiler {
 throws SQLException {
 if (!dynColumns.isEmpty()) {
 List existingColumns = theTable.getColumns();
+// Need to skip the salting column, as it's added in the makePTable call below
 List allcolumns = new ArrayList<>(
 theTable.getBucketNum() == null ? existingColumns :
 existingColumns.subList(1, existingColumns.size()));
-// Need to skip the salting column, as it's added in the makePTable call below
-allcolumns.addAll(theTable.getBucketNum() == null ? existingColumns : existingColumns.subList(1, existingColumns.size()));
 // Position still based on with the salting columns
 int position = existingColumns.size();
 PName defaultFamilyName = PNameFactory.newName(SchemaUtil.getEmptyColumnFamily(theTable));
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index 5047991..1ab402e 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -3253,9 +3253,8 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
 mutateTable.setInt(6, numColumns + 1);
 mutateTable.execute();
 }
-List tableMetadata = new ArrayList<>(
-metaConnection.getMutationState().toMutations(metaConnection.getSCN()).next()
-.getSecond());
+List tableMetadata = new ArrayList<>();
+tableMetadata.addAll(metaConnection.getMutationState().toMutations(metaConnection.getSCN()).next().getSecond());
 metaConnection.rollback();
 PColumn column = new PColumnImpl(PNameFactory.newName("COLUMN_QUALIFIER"),
 PNameFactory.newName(DEFAULT_COLUMN_FAMILY_NAME), PVarbinary.INSTANCE, null, null, true, numColumns,
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/util/CSVCommonsLoader.java b/phoenix-core/src/main/java/org/apache/phoenix/util/CSVCommonsLoader.java
index 59ed9cf..024e3cd 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/CSVCommonsLoader.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/CSVCommonsLoader.java
@@ -158,7 +158,11 @@ public class CSVCommonsLoader {
  * @return
  */
 public static char asControlCharacter(char delimiter) {
-return CTRL_CHARACTER_TABLE.getOrDefault(delimiter, delimiter);
+if(CTRL_CHARACTER_TABLE.containsKey(delimiter)) {
+return CTRL_CHARACTER_TABLE.get(delimiter);
+} else {
+return delimiter;
+}
 }
 
 /**



Build failed in Jenkins: Phoenix Compile Compatibility with HBase #1068

2019-07-24 Thread Apache Jenkins Server
See 


--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on H33 (ubuntu xenial) in workspace 

[Phoenix_Compile_Compat_wHBase] $ /bin/bash /tmp/jenkins4330940719115561764.sh
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 386517
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 6
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 10240
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited
core id : 0
core id : 1
core id : 2
core id : 3
core id : 4
core id : 5
physical id : 0
physical id : 1
MemTotal:   98985828 kB
MemFree: 5094156 kB
Filesystem  Size  Used Avail Use% Mounted on
udev 48G 0   48G   0% /dev
tmpfs   9.5G 1002M  8.5G  11% /run
/dev/sda3   3.6T  511G  2.9T  15% /
tmpfs48G  440K   48G   1% /dev/shm
tmpfs   5.0M 0  5.0M   0% /run/lock
tmpfs48G 0   48G   0% /sys/fs/cgroup
/dev/sda2   473M  238M  211M  54% /boot
tmpfs   9.5G  4.0K  9.5G   1% /run/user/910
tmpfs   9.5G 0  9.5G   0% /run/user/1000
/dev/loop11  57M   57M 0 100% /snap/snapcraft/3022
/dev/loop3   57M   57M 0 100% /snap/snapcraft/3059
/dev/loop7   89M   89M 0 100% /snap/core/7169
/dev/loop8   89M   89M 0 100% /snap/core/7270
/dev/loop5   55M   55M 0 100% /snap/lxd/11320
/dev/loop1   55M   55M 0 100% /snap/lxd/11353
apache-maven-2.2.1
apache-maven-3.0.4
apache-maven-3.0.5
apache-maven-3.1.1
apache-maven-3.2.1
apache-maven-3.2.5
apache-maven-3.3.3
apache-maven-3.3.9
apache-maven-3.5.0
apache-maven-3.5.2
apache-maven-3.5.4
apache-maven-3.6.0
latest
latest2
latest3


===
Verifying compile level compatibility with HBase 0.98 with Phoenix 4.x-HBase-0.98
===

Cloning into 'hbase'...
Switched to a new branch '0.98'
Branch 0.98 set up to track remote branch 0.98 from origin.
[ERROR] Plugin org.codehaus.mojo:findbugs-maven-plugin:2.5.2 or one of its dependencies could not be resolved: Failed to read artifact descriptor for org.codehaus.mojo:findbugs-maven-plugin:jar:2.5.2: Could not transfer artifact org.codehaus.mojo:findbugs-maven-plugin:pom:2.5.2 from/to central (https://repo.maven.apache.org/maven2): Received fatal alert: protocol_version -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException
Build step 'Execute shell' marked build as failure
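The "Received fatal alert: protocol_version" error above is the classic symptom of an old JDK negotiating TLS 1.0 with repo.maven.apache.org, which requires TLS 1.2 or newer. A commonly cited workaround (an assumption on my part; the log does not show which JDK the job used) is to force the protocol via MAVEN_OPTS before re-running the build:

```shell
# Sketch of a workaround: force TLS 1.2 for Maven's HTTPS connections.
# JDK 7 and earlier default to TLS 1.0, which Maven Central rejects.
export MAVEN_OPTS="-Dhttps.protocols=TLSv1.2"
echo "$MAVEN_OPTS"   # verify the option is set before re-running mvn
```

Upgrading the JDK on the build node removes the need for this override entirely.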


Jenkins build is back to normal : Phoenix-4.x-HBase-1.5 #108

2019-07-24 Thread Apache Jenkins Server
See