Apache-Phoenix | master | HBase 2.2 | Build #16 FAILURE

2020-08-25 Thread Apache Jenkins Server

master branch  HBase 2.2  build #16 status FAILURE
Build #16 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/master/16/


[phoenix] branch 4.x updated: PHOENIX-6056 Migrate from builds.apache.org by August 15 (addendum: use stage level timout instead of global)

2020-08-25 Thread stoty
This is an automated email from the ASF dual-hosted git repository.

stoty pushed a commit to branch 4.x
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x by this push:
 new 3543183  PHOENIX-6056 Migrate from builds.apache.org by August 15 (addendum: use stage level timout instead of global)
3543183 is described below

commit 35431837cbd4fc39c1d5f5fc4652776db92c542b
Author: Istvan Toth 
AuthorDate: Wed Aug 26 08:27:52 2020 +0200

PHOENIX-6056 Migrate from builds.apache.org by August 15 (addendum: use stage level timout instead of global)
---
 Jenkinsfile | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index 981198f..e7ca83d 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -21,7 +21,6 @@ pipeline {
 
 options {
 buildDiscarder(logRotator(daysToKeepStr: '30'))
-timeout(time: 6, unit: 'HOURS')
 timestamps()
 }
 
@@ -51,8 +50,14 @@ pipeline {
 stages {
 
 stage('BuildAndTest') {
+options {
+timeout(time: 5, unit: 'HOURS')
+}
 steps {
-sh "mvn clean verify -Dhbase.profile=${HBASE_PROFILE} -B"
+sh """#!/bin/bash
+ulimit -S -u 6
+mvn clean verify -Dhbase.profile=${HBASE_PROFILE} -B
+"""
 }
 post {
 always {



[phoenix] branch master updated: PHOENIX-6056 Migrate from builds.apache.org by August 15 (addendum: use stage level timout instead of global)

2020-08-25 Thread stoty
This is an automated email from the ASF dual-hosted git repository.

stoty pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 1cb4f1f  PHOENIX-6056 Migrate from builds.apache.org by August 15 (addendum: use stage level timout instead of global)
1cb4f1f is described below

commit 1cb4f1f1ef93b5b6a0f7a34ed2caa40c9a556d75
Author: Istvan Toth 
AuthorDate: Wed Aug 26 08:22:06 2020 +0200

PHOENIX-6056 Migrate from builds.apache.org by August 15 (addendum: use stage level timout instead of global)
---
 Jenkinsfile | 12 ++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index 7bb97e9..058208a 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -21,7 +21,6 @@ pipeline {
 
 options {
 buildDiscarder(logRotator(daysToKeepStr: '30'))
-timeout(time: 6, unit: 'HOURS')
 timestamps()
 }
 
@@ -51,6 +50,9 @@ pipeline {
 stages {
 
 stage('RebuildHBase') {
+options {
+timeout(time: 30, unit: 'MINUTES')
+}
 environment {
 HBASE_VERSION = sh(returnStdout: true, script: "mvn help:evaluate -Dhbase.profile=${HBASE_PROFILE} -Dartifact=org.apache.phoenix:phoenix-core -Dexpression=hbase.version -q -DforceStdout").trim()
 }
@@ -65,8 +67,14 @@ pipeline {
 }
 
 stage('BuildAndTest') {
+options {
+timeout(time: 5, unit: 'HOURS')
+}
 steps {
-sh "mvn clean verify -Dhbase.profile=${HBASE_PROFILE} -B"
+sh """#!/bin/bash
+ulimit -S -u 6
+mvn clean verify -Dhbase.profile=${HBASE_PROFILE} -B
+"""
 }
 post {
 always {


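Taken together, the two addenda above replace the single global 6-hour `timeout` with per-stage limits. A minimal declarative-pipeline sketch of the resulting shape (stage names and durations follow the diffs; the rest of the Phoenix Jenkinsfile is elided, and the `echo` steps are placeholders):

```groovy
pipeline {
    agent none

    options {
        buildDiscarder(logRotator(daysToKeepStr: '30'))
        timestamps()
        // note: no global timeout(...) here any more
    }

    stages {
        stage('RebuildHBase') {
            options { timeout(time: 30, unit: 'MINUTES') }
            steps { echo 'rebuild HBase (elided)' }
        }
        stage('BuildAndTest') {
            options { timeout(time: 5, unit: 'HOURS') }
            steps { echo 'mvn clean verify (elided)' }
        }
    }
}
```

A stage-level `timeout` aborts only the stage that overruns, so a slow HBase rebuild and a long test run each get their own budget instead of competing for one global clock.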

Apache-Phoenix | jenkinstest | HBase 2.3 | Build #13 ABORTED

2020-08-25 Thread Apache Jenkins Server

jenkinstest branch  HBase 2.3  build #13 status ABORTED
Build #13 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-experiment/job/jenkinstest/13/


Apache-Phoenix | jenkinstest | HBase 2.2 | Build #13 FAILURE

2020-08-25 Thread Apache Jenkins Server

jenkinstest branch  HBase 2.2  build #13 status FAILURE
Build #13 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-experiment/job/jenkinstest/13/


Apache-Phoenix | jenkinstest | HBase 2.1 | Build #13 FAILURE

2020-08-25 Thread Apache Jenkins Server

jenkinstest branch  HBase 2.1  build #13 status FAILURE
Build #13 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-experiment/job/jenkinstest/13/


Apache-Phoenix | jenkinstest | HBase 2.1 | Build #12 ABORTED

2020-08-25 Thread Apache Jenkins Server

jenkinstest branch  HBase 2.1  build #12 status ABORTED
Build #12 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-experiment/job/jenkinstest/12/


[phoenix] branch 4.x updated: PHOENIX-6034 Optimize InListIT (#838)

2020-08-25 Thread ankit
This is an automated email from the ASF dual-hosted git repository.

ankit pushed a commit to branch 4.x
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x by this push:
 new 7575123  PHOENIX-6034 Optimize InListIT (#838)
7575123 is described below

commit 7575123f5638d2e85bc017322b945ab9e246ad49
Author: Ankit Singhal 
AuthorDate: Tue Aug 25 21:40:55 2020 -0700

PHOENIX-6034 Optimize InListIT (#838)
---
 .../java/org/apache/phoenix/end2end/InListIT.java  | 156 ++---
 1 file changed, 107 insertions(+), 49 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/InListIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/InListIT.java
index b0aee8f..c64fa79 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/InListIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/InListIT.java
@@ -26,15 +26,18 @@ import static org.junit.Assert.assertTrue;
 
 import java.sql.Connection;
 import java.sql.DriverManager;
+import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.sql.Statement;
-import java.sql.PreparedStatement;
 import java.util.Arrays;
 import java.util.Collections;
 import java.util.List;
 import java.util.Properties;
 
+import com.google.common.base.Function;
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
 import org.apache.phoenix.compile.QueryPlan;
 import org.apache.phoenix.iterate.ExplainTable;
 import org.apache.phoenix.schema.SortOrder;
@@ -43,27 +46,39 @@ import org.apache.phoenix.schema.types.PDataType;
 import org.apache.phoenix.schema.types.PInteger;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
+import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
 
-import com.google.common.base.Function;
-import com.google.common.base.Joiner;
-import com.google.common.collect.Lists;
-
 
 public class InListIT extends ParallelStatsDisabledIT {
private static final String TENANT_SPECIFIC_URL1 = getUrl() + ';' + TENANT_ID_ATTRIB + "=tenant1";
-String tableName;
-String descViewName;
-String ascViewName;
+private static boolean isInitialized = false;
+private static String tableName = generateUniqueName();
+private static String tableName2 = generateUniqueName();
+private static String descViewName = generateUniqueName();
+private static String ascViewName = generateUniqueName();
+private static String viewName1 = generateUniqueName();
+private static String viewName2 = generateUniqueName();
+private static String prefix = generateUniqueName();
 
 @Before
 public void setup() throws Exception {
-tableName = generateUniqueName();
-descViewName = generateUniqueName();
-ascViewName = generateUniqueName();
-buildSchema(tableName, generateUniqueName(), true);
-buildSchema(generateUniqueName(), generateUniqueName(), false);
+if(isInitialized){
+return;
+}
+initializeTables();
+isInitialized = true;
+}
+
+@After
+public void cleanUp() throws SQLException {
+deleteTenantData(descViewName);
+deleteTenantData(viewName1);
+deleteTenantData(viewName2);
+deleteTenantData(ascViewName);
+deleteTenantData(tableName);
+deleteTenantData(tableName2);
 }
 
 @Test
@@ -163,7 +178,7 @@ public class InListIT extends ParallelStatsDisabledIT {
  * @return  the table or view name that should be used to access the created table
  */
 private static String initializeAndGetTable(Connection baseConn, Connection conn, boolean isMultiTenant, PDataType pkType, int saltBuckets) throws SQLException {
-String tableName = generateUniqueName() + "in_test" + pkType.getSqlTypeName() + saltBuckets + (isMultiTenant ? "_multi" : "_single");
+String tableName = getTableName(isMultiTenant, pkType, saltBuckets);
 String tableDDL = createTableDDL(tableName, pkType, saltBuckets, isMultiTenant);
 baseConn.createStatement().execute(tableDDL);
 
@@ -179,6 +194,12 @@ public class InListIT extends ParallelStatsDisabledIT {
 }
 }
 
+private static String getTableName(boolean isMultiTenant, PDataType pkType, int saltBuckets) {
+return prefix + "init_in_test_" + pkType.getSqlTypeName() + saltBuckets + (isMultiTenant ? "_multi" : "_single");
+}
+
 private static final String TENANT_ID = "ABC";
private static final String TENANT_URL = getUrl() + ";" + PhoenixRuntime.TENANT_ID_ATTRIB + '=' + TENANT_ID;
 
@@ -189,15 +210,51 @@ public class InListIT extends ParallelStatsDisabledIT {
 
private static final List<String> HINTS = Arrays.asList("/*+ SKIP_SCAN */", "/*+ RANGE_SCAN */");
 
+private  void initializeTables() throws Exception {
+buildSchema(table

[phoenix] branch master updated: PHOENIX-6034 Optimize InListIT (#838)

2020-08-25 Thread ankit
This is an automated email from the ASF dual-hosted git repository.

ankit pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 674efb9  PHOENIX-6034 Optimize InListIT (#838)
674efb9 is described below

commit 674efb9e39f02e63872fdf1651723106b7bcfc1d
Author: Ankit Singhal 
AuthorDate: Tue Aug 25 21:40:55 2020 -0700

PHOENIX-6034 Optimize InListIT (#838)
---
 .../java/org/apache/phoenix/end2end/InListIT.java  | 156 ++---
 1 file changed, 107 insertions(+), 49 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/InListIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/InListIT.java
index 9e3c40a..93d645f 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/InListIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/InListIT.java
@@ -26,15 +26,18 @@ import static org.junit.Assert.assertTrue;
 
 import java.sql.Connection;
 import java.sql.DriverManager;
+import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.sql.Statement;
-import java.sql.PreparedStatement;
 import java.util.Arrays;
 import java.util.Collections;
 import java.util.List;
 import java.util.Properties;
 
+import com.google.common.base.Function;
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
 import org.apache.phoenix.compile.QueryPlan;
 import org.apache.phoenix.iterate.ExplainTable;
 import org.apache.phoenix.schema.SortOrder;
@@ -43,27 +46,39 @@ import org.apache.phoenix.schema.types.PDataType;
 import org.apache.phoenix.schema.types.PInteger;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
+import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
 
-import com.google.common.base.Function;
-import com.google.common.base.Joiner;
-import com.google.common.collect.Lists;
-
 
 public class InListIT extends ParallelStatsDisabledIT {
private static final String TENANT_SPECIFIC_URL1 = getUrl() + ';' + TENANT_ID_ATTRIB + "=tenant1";
-String tableName;
-String descViewName;
-String ascViewName;
+private static boolean isInitialized = false;
+private static String tableName = generateUniqueName();
+private static String tableName2 = generateUniqueName();
+private static String descViewName = generateUniqueName();
+private static String ascViewName = generateUniqueName();
+private static String viewName1 = generateUniqueName();
+private static String viewName2 = generateUniqueName();
+private static String prefix = generateUniqueName();
 
 @Before
 public void setup() throws Exception {
-tableName = generateUniqueName();
-descViewName = generateUniqueName();
-ascViewName = generateUniqueName();
-buildSchema(tableName, generateUniqueName(), true);
-buildSchema(generateUniqueName(), generateUniqueName(), false);
+if(isInitialized){
+return;
+}
+initializeTables();
+isInitialized = true;
+}
+
+@After
+public void cleanUp() throws SQLException {
+deleteTenantData(descViewName);
+deleteTenantData(viewName1);
+deleteTenantData(viewName2);
+deleteTenantData(ascViewName);
+deleteTenantData(tableName);
+deleteTenantData(tableName2);
 }
 
 @Test
@@ -163,7 +178,7 @@ public class InListIT extends ParallelStatsDisabledIT {
  * @return  the table or view name that should be used to access the created table
  */
 private static String initializeAndGetTable(Connection baseConn, Connection conn, boolean isMultiTenant, PDataType pkType, int saltBuckets) throws SQLException {
-String tableName = generateUniqueName() + "in_test" + pkType.getSqlTypeName() + saltBuckets + (isMultiTenant ? "_multi" : "_single");
+String tableName = getTableName(isMultiTenant, pkType, saltBuckets);
 String tableDDL = createTableDDL(tableName, pkType, saltBuckets, isMultiTenant);
 baseConn.createStatement().execute(tableDDL);
 
@@ -179,6 +194,12 @@ public class InListIT extends ParallelStatsDisabledIT {
 }
 }
 
+private static String getTableName(boolean isMultiTenant, PDataType pkType, int saltBuckets) {
+return prefix + "init_in_test_" + pkType.getSqlTypeName() + saltBuckets + (isMultiTenant ? "_multi" : "_single");
+}
+
 private static final String TENANT_ID = "ABC";
private static final String TENANT_URL = getUrl() + ";" + PhoenixRuntime.TENANT_ID_ATTRIB + '=' + TENANT_ID;
 
@@ -189,15 +210,51 @@ public class InListIT extends ParallelStatsDisabledIT {
 
private static final List<String> HINTS = Arrays.asList("/*+ SKIP_SCAN */", "/*+ RANGE_SCAN */");
 
+private  void initializeTables() throws Exception {
+buildSchema

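The restructuring above (in both the 4.x and master commits) swaps per-test schema creation for a single static initialization shared by every test method, with `@After` only deleting tenant data. A minimal, self-contained sketch of that run-once guard pattern; the class and method names here are hypothetical stand-ins, and JUnit plus the Phoenix `buildSchema(...)` calls are elided:

```java
// Hypothetical standalone illustration of the run-once guard that the
// InListIT patch adds to its @Before method.
public class RunOnceSetup {
    private static boolean isInitialized = false;
    private static int schemaBuilds = 0;

    // Called before every "test", like InListIT.setup(); the expensive
    // schema creation happens only on the first call in the JVM.
    public static void setup() {
        if (isInitialized) {
            return;
        }
        initializeTables();
        isInitialized = true;
    }

    // Stand-in for the buildSchema(...) calls in the real test.
    private static void initializeTables() {
        schemaBuilds++;
    }

    public static int schemaBuilds() {
        return schemaBuilds;
    }

    public static void main(String[] args) {
        setup();
        setup();
        setup();
        System.out.println(schemaBuilds); // prints 1: built exactly once
    }
}
```

The trade-off is the usual one for shared fixtures: table creation drops from once per test to once per class, but the tests now share mutable state, which is why the patch also adds an `@After` that deletes the data each test wrote.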
Apache-Phoenix | 4.x | HBase 1.4 | Build #11 ABORTED

2020-08-25 Thread Apache Jenkins Server

4.x branch  HBase 1.4  build #11 status ABORTED
Build #11 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/4.x/11/


Apache-Phoenix | 4.x | HBase 1.6 | Build #11 FAILURE

2020-08-25 Thread Apache Jenkins Server

4.x branch  HBase 1.6  build #11 status FAILURE
Build #11 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/4.x/11/


Apache-Phoenix | master | HBase 2.2 | Build #15 ABORTED

2020-08-25 Thread Apache Jenkins Server

master branch  HBase 2.2  build #15 status ABORTED
Build #15 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/master/15/


Apache-Phoenix | master | HBase 2.3 | Build #15 FAILURE

2020-08-25 Thread Apache Jenkins Server

master branch  HBase 2.3  build #15 status FAILURE
Build #15 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/master/15/


Apache-Phoenix | 4.x | HBase 1.3 | Build #11 FAILURE

2020-08-25 Thread Apache Jenkins Server

4.x branch  HBase 1.3  build #11 status FAILURE
Build #11 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/4.x/11/


Apache-Phoenix | master | HBase 2.1 | Build #15 FAILURE

2020-08-25 Thread Apache Jenkins Server

master branch  HBase 2.1  build #15 status FAILURE
Build #15 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/master/15/


[phoenix] branch 4.x updated: PHOENIX-6101 Avoid duplicate work between local and global indexes.

2020-08-25 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x by this push:
 new 81d404b  PHOENIX-6101 Avoid duplicate work between local and global indexes.
81d404b is described below

commit 81d404b4b02904211d95e86b492c8b52ee1bbcff
Author: Lars 
AuthorDate: Tue Aug 25 12:50:41 2020 -0700

PHOENIX-6101 Avoid duplicate work between local and global indexes.
---
 .../phoenix/hbase/index/IndexRegionObserver.java   | 106 -
 1 file changed, 63 insertions(+), 43 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
index 49b5509..bcf718c 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
@@ -43,6 +43,7 @@ import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HConstants.OperationStatusCode;
 import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.Durability;
 import org.apache.hadoop.hbase.client.Increment;
@@ -429,10 +430,20 @@ public class IndexRegionObserver extends BaseRegionObserver {
 return context.multiMutationMap.values();
 }
 
-public static void setTimestamp(Mutation m, long ts) throws IOException {
-for (List<Cell> cells : m.getFamilyCellMap().values()) {
-for (Cell cell : cells) {
-CellUtil.setTimestamp(cell, ts);
+public static void setTimestamps(MiniBatchOperationInProgress<Mutation> miniBatchOp, IndexBuildManager builder, long ts) throws IOException {
+for (Integer i = 0; i < miniBatchOp.size(); i++) {
+if (miniBatchOp.getOperationStatus(i) == IGNORE) {
+continue;
+}
+Mutation m = miniBatchOp.getOperation(i);
+// skip this mutation if we aren't enabling indexing
+if (!builder.isEnabled(m)) {
+continue;
+}
+for (List<Cell> cells : m.getFamilyCellMap().values()) {
+for (Cell cell : cells) {
+CellUtil.setTimestamp(cell, ts);
+}
 }
 }
 }
@@ -502,9 +513,6 @@ public class IndexRegionObserver extends BaseRegionObserver {
 if (!this.builder.isEnabled(m)) {
 continue;
 }
-// We update the time stamp of the data table to prevent overlapping time stamps (which prevents index
-// inconsistencies as this case isn't handled correctly currently).
-setTimestamp(m, now);
 if (m instanceof Put) {
 ImmutableBytesPtr rowKeyPtr = new ImmutableBytesPtr(m.getRow());
 Pair<Put, Put> dataRowState = context.dataRowStates.get(rowKeyPtr);
@@ -554,13 +562,13 @@ public class IndexRegionObserver extends BaseRegionObserver {
  * The index update generation for local indexes uses the existing index update generation code (i.e.,
  * the {@link IndexBuilder} implementation).
  */
-private void handleLocalIndexUpdates(ObserverContext<RegionCoprocessorEnvironment> c,
+private void handleLocalIndexUpdates(TableName table,
  MiniBatchOperationInProgress<Mutation> miniBatchOp,
  Collection<? extends Mutation> pendingMutations,
  PhoenixIndexMetaData indexMetaData) throws Throwable {
 ListMultimap<HTableInterfaceReference, Pair<Mutation, byte[]>> indexUpdates = ArrayListMultimap.<HTableInterfaceReference, Pair<Mutation, byte[]>>create();
 this.builder.getIndexUpdates(indexUpdates, miniBatchOp, pendingMutations, indexMetaData);
-byte[] tableName = c.getEnvironment().getRegion().getTableDesc().getTableName().getName();
+byte[] tableName = table.getName();
 HTableInterfaceReference hTableInterfaceReference = new HTableInterfaceReference(new ImmutableBytesPtr(tableName));
 List<Pair<Mutation, byte[]>> localIndexUpdates = indexUpdates.removeAll(hTableInterfaceReference);
@@ -685,10 +693,7 @@ public class IndexRegionObserver extends BaseRegionObserver {
  * unverified status. In phase 2, data table mutations are applied. In phase 3, the status for an index table row is
  * either set to "verified" or the row is deleted.
  */
-private boolean preparePreIndexMutations(ObserverContext<RegionCoprocessorEnvironment> c,
-  MiniBatchOperationInProgress<Mutation> miniBatchOp,
-  BatchMutateContext context,
-  Collection<? extends Mutation> pendingMutations,
+private void preparePreIndexMutations(BatchMutateContext context,
  

[phoenix] branch master updated: PHOENIX-6101 Avoid duplicate work between local and global indexes.

2020-08-25 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new c109a61  PHOENIX-6101 Avoid duplicate work between local and global indexes.
c109a61 is described below

commit c109a61890fd2ea14a7274808b43298b6e221b11
Author: Lars 
AuthorDate: Tue Aug 25 12:31:18 2020 -0700

PHOENIX-6101 Avoid duplicate work between local and global indexes.
---
 .../phoenix/hbase/index/IndexRegionObserver.java   | 107 -
 1 file changed, 63 insertions(+), 44 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
index 2d0cf51..50e1f68 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
@@ -41,6 +41,7 @@ import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HConstants.OperationStatusCode;
 import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.Durability;
 import org.apache.hadoop.hbase.client.Increment;
@@ -443,10 +444,20 @@ public class IndexRegionObserver implements RegionObserver, RegionCoprocessor {
 return context.multiMutationMap.values();
 }
 
-public static void setTimestamp(Mutation m, long ts) throws IOException {
-for (List<Cell> cells : m.getFamilyCellMap().values()) {
-for (Cell cell : cells) {
-CellUtil.setTimestamp(cell, ts);
+public static void setTimestamps(MiniBatchOperationInProgress<Mutation> miniBatchOp, IndexBuildManager builder, long ts) throws IOException {
+for (Integer i = 0; i < miniBatchOp.size(); i++) {
+if (miniBatchOp.getOperationStatus(i) == IGNORE) {
+continue;
+}
+Mutation m = miniBatchOp.getOperation(i);
+// skip this mutation if we aren't enabling indexing
+if (!builder.isEnabled(m)) {
+continue;
+}
+for (List<Cell> cells : m.getFamilyCellMap().values()) {
+for (Cell cell : cells) {
+CellUtil.setTimestamp(cell, ts);
+}
 }
 }
 }
@@ -516,10 +527,6 @@ public class IndexRegionObserver implements RegionObserver, RegionCoprocessor {
 if (!this.builder.isEnabled(m)) {
 continue;
 }
-// Unless we're replaying edits to rebuild the index, we update the time stamp
-// of the data table to prevent overlapping time stamps (which prevents index
-// inconsistencies as this case isn't handled correctly currently).
-setTimestamp(m, now);
 if (m instanceof Put) {
 ImmutableBytesPtr rowKeyPtr = new ImmutableBytesPtr(m.getRow());
 Pair<Put, Put> dataRowState = context.dataRowStates.get(rowKeyPtr);
@@ -569,13 +576,13 @@ public class IndexRegionObserver implements RegionObserver, RegionCoprocessor {
  * The index update generation for local indexes uses the existing index update generation code (i.e.,
  * the {@link IndexBuilder} implementation).
  */
-private void handleLocalIndexUpdates(ObserverContext<RegionCoprocessorEnvironment> c,
+private void handleLocalIndexUpdates(TableName table,
  MiniBatchOperationInProgress<Mutation> miniBatchOp,
  Collection<? extends Mutation> pendingMutations,
  PhoenixIndexMetaData indexMetaData) throws Throwable {
 ListMultimap<HTableInterfaceReference, Pair<Mutation, byte[]>> indexUpdates = ArrayListMultimap.<HTableInterfaceReference, Pair<Mutation, byte[]>>create();
 this.builder.getIndexUpdates(indexUpdates, miniBatchOp, pendingMutations, indexMetaData);
-byte[] tableName = c.getEnvironment().getRegion().getTableDescriptor().getTableName().getName();
+byte[] tableName = table.getName();
 HTableInterfaceReference hTableInterfaceReference = new HTableInterfaceReference(new ImmutableBytesPtr(tableName));
 List<Pair<Mutation, byte[]>> localIndexUpdates = indexUpdates.removeAll(hTableInterfaceReference);
@@ -702,10 +709,7 @@ public class IndexRegionObserver implements RegionObserver, RegionCoprocessor {
  * unverified status. In phase 2, data table mutations are applied. In phase 3, the status for an index table row is
  * either set to "verified" or the row is deleted.
  */
-private boolean preparePreIndexMutations(ObserverContext<RegionCoprocessorEnvironment> c,
-  MiniBatchOperationInProgress<Mutation> miniBatchOp,
-  BatchMutateContext context,
-   

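The PHOENIX-6101 diffs above move cell timestamping out of the per-mutation prepare loop (the removed `setTimestamp(m, now)` call) into a single batch-level `setTimestamps(...)` pass, so local and global index preparation no longer repeat the work. A simplified, self-contained model of that batch pass; `MiniBatchOperationInProgress`, `Mutation`, `Cell`, and `IndexBuildManager` are replaced with minimal stand-in types, and only the control flow mirrors the patch:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of the new setTimestamps(...) logic: walk the mini-batch,
// skip ignored or non-indexed mutations, and stamp every remaining cell.
public class SetTimestampsSketch {
    static class Cell { long ts = -1; }

    static class Mutation {
        final List<Cell> cells = new ArrayList<>();
        boolean indexed = true;   // stand-in for builder.isEnabled(m)
        boolean ignored = false;  // stand-in for OperationStatus IGNORE
    }

    static void setTimestamps(List<Mutation> miniBatch, long ts) {
        for (Mutation m : miniBatch) {
            if (m.ignored) {
                continue; // operation already marked IGNORE
            }
            if (!m.indexed) {
                continue; // indexing not enabled for this mutation
            }
            for (Cell cell : m.cells) {
                cell.ts = ts; // stand-in for CellUtil.setTimestamp(cell, ts)
            }
        }
    }

    public static void main(String[] args) {
        Mutation indexed = new Mutation();
        indexed.cells.add(new Cell());
        Mutation skipped = new Mutation();
        skipped.indexed = false;
        skipped.cells.add(new Cell());
        setTimestamps(List.of(indexed, skipped), 42L);
        // only the indexed mutation's cell gets the batch timestamp
        System.out.println(indexed.cells.get(0).ts + " " + skipped.cells.get(0).ts); // prints 42 -1
    }
}
```

Stamping the whole batch once keeps every indexed mutation in the batch on the same timestamp, which is what prevents the overlapping-timestamp index inconsistencies the removed comment described.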
Apache-Phoenix | 4.x | HBase 1.3 | Build #10 ABORTED

2020-08-25 Thread Apache Jenkins Server

4.x branch  HBase 1.3  build #10 status ABORTED
Build #10 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/4.x/10/


Apache-Phoenix | 4.x | HBase 1.4 | Build #10 ABORTED

2020-08-25 Thread Apache Jenkins Server

4.x branch  HBase 1.4  build #10 status ABORTED
Build #10 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/4.x/10/


Apache-Phoenix | 4.x | HBase 1.6 | Build #10 SUCCESS

2020-08-25 Thread Apache Jenkins Server

4.x branch  HBase 1.6  build #10 status SUCCESS
Build #10 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/4.x/10/


[phoenix-connectors] branch master updated: PHOENIX-6098 IndexPredicateAnalyzer wrongly handles pushdown predicates and residual predicates

2020-08-25 Thread stoty
This is an automated email from the ASF dual-hosted git repository.

stoty pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix-connectors.git


The following commit(s) were added to refs/heads/master by this push:
 new cb268ab  PHOENIX-6098 IndexPredicateAnalyzer wrongly handles pushdown predicates and residual predicates
cb268ab is described below

commit cb268abbd95a3a99dcb21af7ae85c88fa72724fd
Author: Toshihiro Suzuki 
AuthorDate: Tue Aug 25 10:01:20 2020 +0900

PHOENIX-6098 IndexPredicateAnalyzer wrongly handles pushdown predicates and residual predicates
---
 .../hive/ql/index/IndexPredicateAnalyzer.java| 20 
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/phoenix-hive3/src/main/java/org/apache/phoenix/hive/ql/index/IndexPredicateAnalyzer.java b/phoenix-hive3/src/main/java/org/apache/phoenix/hive/ql/index/IndexPredicateAnalyzer.java
index 4e77078..d9160b0 100644
--- a/phoenix-hive3/src/main/java/org/apache/phoenix/hive/ql/index/IndexPredicateAnalyzer.java
+++ b/phoenix-hive3/src/main/java/org/apache/phoenix/hive/ql/index/IndexPredicateAnalyzer.java
@@ -351,14 +351,18 @@ public class IndexPredicateAnalyzer {
 searchConditions, Object... nodeOutputs) throws SemanticException {
 
 if (FunctionRegistry.isOpAnd(expr)) {
-assert (nodeOutputs.length == 2);
-ExprNodeDesc residual1 = (ExprNodeDesc)nodeOutputs[0];
-ExprNodeDesc residual2 = (ExprNodeDesc)nodeOutputs[1];
-if (residual1 == null) { return residual2; }
-if (residual2 == null) { return residual1; }
-List<ExprNodeDesc> residuals = new ArrayList<ExprNodeDesc>();
-residuals.add(residual1);
-residuals.add(residual2);
+List<ExprNodeDesc> residuals = new ArrayList<>();
+// GenericUDFOPAnd can expect more than 2 arguments after HIVE-11398
+for (Object nodeOutput : nodeOutputs) {
+// The null value of nodeOutput means the predicate is pushed down to Phoenix. So
+// we don't need to add it to the residual predicate list
+if (nodeOutput != null) {
+residuals.add((ExprNodeDesc) nodeOutput);
+}
+}
+if (residuals.size() == 1) {
+return residuals.get(0);
+}
 return new ExprNodeGenericFuncDesc(TypeInfoFactory.booleanTypeInfo, FunctionRegistry.getGenericUDFForAnd(), residuals);
 }


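The PHOENIX-6098 fix replaces logic that assumed a two-child AND with a loop that folds any number of children, keeping only the ones Phoenix could not push down. A standalone sketch of that residual-folding idea; plain `String`s stand in for Hive's `ExprNodeDesc`, a formatted `"AND(...)"` string stands in for `ExprNodeGenericFuncDesc`, and the empty-residual case returning `null` is an addition for this sketch (the patch itself only distinguishes the one-residual and many-residual cases):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of the fixed AND handling in IndexPredicateAnalyzer:
// a null child means its predicate was fully pushed down to Phoenix,
// so only non-null children remain as the residual predicate.
public class ResidualAnd {
    static String combineResiduals(Object... nodeOutputs) {
        List<String> residuals = new ArrayList<>();
        for (Object nodeOutput : nodeOutputs) {
            if (nodeOutput != null) {
                residuals.add((String) nodeOutput);
            }
        }
        if (residuals.isEmpty()) {
            return null; // everything pushed down, nothing left to evaluate in Hive
        }
        if (residuals.size() == 1) {
            return residuals.get(0); // no need to wrap a single residual in an AND
        }
        return "AND(" + String.join(", ", residuals) + ")";
    }

    public static void main(String[] args) {
        // Three-way AND (possible after HIVE-11398) with the middle branch pushed down:
        System.out.println(combineResiduals("a > 1", null, "c = 3")); // prints AND(a > 1, c = 3)
    }
}
```

The old code's `assert (nodeOutputs.length == 2)` is exactly what HIVE-11398 broke: Hive may now hand `GenericUDFOPAnd` more than two arguments, so the analyzer must fold them generically.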

Apache-Phoenix | 4.x | HBase 1.4 | Build #9 ABORTED

2020-08-25 Thread Apache Jenkins Server

4.x branch  HBase 1.4  build #9 status ABORTED
Build #9 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/4.x/9/


Apache-Phoenix | master | HBase 2.2 | Build #14 FAILURE

2020-08-25 Thread Apache Jenkins Server

master branch  HBase 2.2  build #14 status FAILURE
Build #14 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/master/14/


Apache-Phoenix | master | HBase 2.3 | Build #14 FAILURE

2020-08-25 Thread Apache Jenkins Server

master branch  HBase 2.3  build #14 status FAILURE
Build #14 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/master/14/


Apache-Phoenix | master | HBase 2.1 | Build #14 FAILURE

2020-08-25 Thread Apache Jenkins Server

master branch  HBase 2.1  build #14 status FAILURE
Build #14 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/master/14/


Apache-Phoenix | jenkinstest | HBase 2.2 | Build #11 ABORTED

2020-08-25 Thread Apache Jenkins Server

jenkinstest branch  HBase 2.2  build #11 status ABORTED
Build #11 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-experiment/job/jenkinstest/11/


[phoenix] branch 4.x updated: PHOENIX-6056 Migrate from builds.apache.org by August 15 (addendum: fix node selection, clean up workDir, really use hbase.profile)

2020-08-25 Thread stoty
This is an automated email from the ASF dual-hosted git repository.

stoty pushed a commit to branch 4.x
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x by this push:
 new bad7aee  PHOENIX-6056 Migrate from builds.apache.org by August 15 (addendum: fix node selection, clean up workDir, really use hbase.profile)
bad7aee is described below

commit bad7aee54370f86996b9e124e48ff8de657ed28e
Author: Istvan Toth 
AuthorDate: Tue Aug 25 11:24:25 2020 +0200

PHOENIX-6056 Migrate from builds.apache.org by August 15 (addendum: fix node selection, clean up workDir, really use hbase.profile)
---
 Jenkinsfile | 17 ++---
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index 6529331..981198f 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -17,9 +17,7 @@
  */
 
 pipeline {
-agent {
-label 'Hadoop'
-}
+agent none
 
 options {
 buildDiscarder(logRotator(daysToKeepStr: '30'))
@@ -30,7 +28,9 @@ pipeline {
 stages {
 stage('MatrixBuild') {
 matrix {
-agent any
+agent {
+label 'Hadoop'
+}
 
 axes {
 axis {
@@ -52,12 +52,12 @@ pipeline {
 
 stage('BuildAndTest') {
 steps {
-sh "mvn clean verify -B"
+sh "mvn clean verify -Dhbase.profile=${HBASE_PROFILE} -B"
 }
 post {
 always {
-   junit '**/target/surefire-reports/TEST-*.xml'
-   junit '**/target/failsafe-reports/TEST-*.xml'
+junit '**/target/surefire-reports/TEST-*.xml'
+junit '**/target/failsafe-reports/TEST-*.xml'
 }
 }
 }
@@ -82,6 +82,9 @@ pipeline {
 """
)
 }
+cleanup {
+deleteDir()
+}
 }
 }
 }



[phoenix] branch master updated: PHOENIX-6056 Migrate from builds.apache.org by August 15 (addendum: really? fix node selection)

2020-08-25 Thread stoty
This is an automated email from the ASF dual-hosted git repository.

stoty pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new a34d568  PHOENIX-6056 Migrate from builds.apache.org by August 15 (addendum: really? fix node selection)
a34d568 is described below

commit a34d568de2c001fbebb72b0b748771680448b075
Author: Istvan Toth 
AuthorDate: Tue Aug 25 11:02:41 2020 +0200

PHOENIX-6056 Migrate from builds.apache.org by August 15 (addendum: really? fix node selection)
---
 Jenkinsfile | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index 1b7fb6d..7bb97e9 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -17,9 +17,7 @@
  */
 
 pipeline {
-agent {
-label 'Hadoop'
-}
+agent none
 
 options {
 buildDiscarder(logRotator(daysToKeepStr: '30'))
@@ -30,6 +28,9 @@ pipeline {
 stages {
 stage('MatrixBuild') {
 matrix {
+agent {
+label 'Hadoop'
+}
 
 axes {
 axis {



Apache-Phoenix | 4.x | HBase 1.6 | Build #9 FAILURE

2020-08-25 Thread Apache Jenkins Server

4.x branch  HBase 1.6  build #9 status FAILURE
Build #9 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/4.x/9/


Apache-Phoenix | jenkinstest | HBase 2.1 | Build #11 SUCCESS

2020-08-25 Thread Apache Jenkins Server

jenkinstest branch  HBase 2.1  build #11 status SUCCESS
Build #11 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-experiment/job/jenkinstest/11/


Apache-Phoenix | jenkinstest | HBase 2.3 | Build #11 FAILURE

2020-08-25 Thread Apache Jenkins Server

jenkinstest branch  HBase 2.3  build #11 status FAILURE
Build #11 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-experiment/job/jenkinstest/11/


Apache-Phoenix | 4.x | HBase 1.3 | Build #9 FAILURE

2020-08-25 Thread Apache Jenkins Server

4.x branch  HBase 1.3  build #9 status FAILURE
Build #9 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/4.x/9/