Build failed in Jenkins: Phoenix-4.x-HBase-1.2 #623

2019-02-28 Thread Apache Jenkins Server
See 


Changes:

[tdsilva] PHOENIX-5141 Use HBaseFactoryProvider.getConfigurationFactory to get the config in PhoenixRDD (addendum)

[tdsilva] PHOENIX-5141 Use HBaseFactoryProvider.getConfigurationFactory (addendum)

--
[...truncated 243.77 KB...]
[ERROR]   IndexToolIT.testSecondaryIndex:168 » IllegalArgument No network 'en*'/'eth*' i...
[ERROR]   NonColumnEncodedImmutableTxStatsCollectorIT>StatsCollectorIT.testGuidePostWidthUsedInDefaultStatsCollector:781->BaseTest.createTestTable:771->BaseTest.createTestTable:807 » IllegalArgument
[ERROR]   NonColumnEncodedImmutableTxStatsCollectorIT>StatsCollectorIT.testGuidePostWidthUsedInDefaultStatsCollector:781->BaseTest.createTestTable:771->BaseTest.createTestTable:807 » IllegalArgument
[ERROR]   NonColumnEncodedImmutableTxStatsCollectorIT>StatsCollectorIT.testNoDuplicatesAfterUpdateStatsWithDesc:270->StatsCollectorIT.testNoDuplicatesAfterUpdateStats:247 » IllegalArgument
[ERROR]   NonColumnEncodedImmutableTxStatsCollectorIT>StatsCollectorIT.testNoDuplicatesAfterUpdateStatsWithDesc:270->StatsCollectorIT.testNoDuplicatesAfterUpdateStats:247 » IllegalArgument
[ERROR]   NonColumnEncodedImmutableTxStatsCollectorIT>StatsCollectorIT.testNoDuplicatesAfterUpdateStatsWithSplits:265->StatsCollectorIT.testNoDuplicatesAfterUpdateStats:247 » IllegalArgument
[ERROR]   NonColumnEncodedImmutableTxStatsCollectorIT>StatsCollectorIT.testNoDuplicatesAfterUpdateStatsWithSplits:265->StatsCollectorIT.testNoDuplicatesAfterUpdateStats:247 » IllegalArgument
[ERROR]   NonColumnEncodedImmutableTxStatsCollectorIT>StatsCollectorIT.testRowCountAndByteCounts:603 » IllegalArgument
[ERROR]   NonColumnEncodedImmutableTxStatsCollectorIT>StatsCollectorIT.testRowCountAndByteCounts:603 » IllegalArgument
[ERROR]   NonColumnEncodedImmutableTxStatsCollectorIT>StatsCollectorIT.testSomeUpdateEmptyStats:176 » IllegalArgument
[ERROR]   NonColumnEncodedImmutableTxStatsCollectorIT>StatsCollectorIT.testSomeUpdateEmptyStats:176 » IllegalArgument
[ERROR]   NonColumnEncodedImmutableTxStatsCollectorIT>StatsCollectorIT.testUpdateEmptyStats:160 » IllegalArgument
[ERROR]   NonColumnEncodedImmutableTxStatsCollectorIT>StatsCollectorIT.testUpdateEmptyStats:160 » IllegalArgument
[ERROR]   NonColumnEncodedImmutableTxStatsCollectorIT>StatsCollectorIT.testUpdateStatsWithMultipleTables:281 » IllegalArgument
[ERROR]   NonColumnEncodedImmutableTxStatsCollectorIT>StatsCollectorIT.testUpdateStatsWithMultipleTables:281 » IllegalArgument
[ERROR]   NonColumnEncodedImmutableTxStatsCollectorIT>StatsCollectorIT.testUpdateStats:216 » IllegalArgument
[ERROR]   NonColumnEncodedImmutableTxStatsCollectorIT>StatsCollectorIT.testUpdateStats:216 » IllegalArgument
[ERROR]   NonColumnEncodedImmutableTxStatsCollectorIT>StatsCollectorIT.testWithMultiCF:505 » IllegalArgument
[ERROR]   NonColumnEncodedImmutableTxStatsCollectorIT>StatsCollectorIT.testWithMultiCF:505 » IllegalArgument
[ERROR]   StatsEnabledSplitSystemCatalogIT.testNonSaltedUpdatableViewWithIndex:118->testUpdatableViewWithIndex:174 » IllegalArgument
[ERROR]   StatsEnabledSplitSystemCatalogIT.testReadOnlyOnReadOnlyView:211 » IllegalArgument
[ERROR]   StatsEnabledSplitSystemCatalogIT.testSaltedUpdatableViewWithIndex:104->testUpdatableViewWithIndex:174 » IllegalArgument
[ERROR]   StatsEnabledSplitSystemCatalogIT.testUpdatableOnUpdatableView:136 » IllegalArgument
[ERROR]   SysTableNamespaceMappedStatsCollectorIT>StatsCollectorIT.testGuidePostWidthUsedInDefaultStatsCollector:781->BaseTest.createTestTable:771->BaseTest.createTestTable:807 » IllegalArgument
[ERROR]   SysTableNamespaceMappedStatsCollectorIT>StatsCollectorIT.testNoDuplicatesAfterUpdateStatsWithDesc:270->StatsCollectorIT.testNoDuplicatesAfterUpdateStats:247 » IllegalArgument
[ERROR]   SysTableNamespaceMappedStatsCollectorIT>StatsCollectorIT.testNoDuplicatesAfterUpdateStatsWithSplits:265->StatsCollectorIT.testNoDuplicatesAfterUpdateStats:247 » IllegalArgument
[ERROR]   SysTableNamespaceMappedStatsCollectorIT>StatsCollectorIT.testRowCountAndByteCounts:603 » IllegalArgument
[ERROR]   SysTableNamespaceMappedStatsCollectorIT>StatsCollectorIT.testSomeUpdateEmptyStats:176 » IllegalArgument
[ERROR]   SysTableNamespaceMappedStatsCollectorIT>StatsCollectorIT.testUpdateEmptyStats:160 » IllegalArgument
[ERROR]   SysTableNamespaceMappedStatsCollectorIT>StatsCollectorIT.testUpdateStatsWithMultipleTables:281 » IllegalArgument
[ERROR]   SysTableNamespaceMappedStatsCollectorIT>StatsCollectorIT.testUpdateStats:216 » IllegalArgument
[ERROR]   SysTableNamespaceMappedStatsCollectorIT>StatsCollectorIT.testWithMultiCF:505 » IllegalArgument
[ERROR]   ImmutableIndexIT.testDeleteFromNonPK:225 » IllegalArgument No network 'en*'/'e...
[ERROR]   ImmutableIndexIT.testDeleteFromPartialPK:183 » IllegalArgument No network 'en*...
[ERROR]   ImmutableIndexIT.testDropIfImmutableKeyValueColumn:140 »

[phoenix] branch master updated: PHOENIX-5141 Use HBaseFactoryProvider.getConfigurationFactory (addendum)

2019-02-28 Thread tdsilva
This is an automated email from the ASF dual-hosted git repository.

tdsilva pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 351db9a  PHOENIX-5141 Use HBaseFactoryProvider.getConfigurationFactory (addendum)
351db9a is described below

commit 351db9aa5e1158e12f920db5c618ba644dee8892
Author: Thomas D'Silva 
AuthorDate: Thu Feb 28 17:14:29 2019 -0800

PHOENIX-5141 Use HBaseFactoryProvider.getConfigurationFactory (addendum)
---
 .../src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
index 85a6d8a..ac3993a 100644
--- a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
+++ b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
@@ -45,7 +45,7 @@ class DataFrameFunctions(data: DataFrame) extends Serializable {
 val phxRDD = data.rdd.mapPartitions{ rows =>
  
// Create a within-partition config to retrieve the ColumnInfo list
-   @transient val partitionConfig = ConfigurationUtil.getOutputCon figuration(tableName, fieldArray, zkUrlFinal, tenantId)
+   @transient val partitionConfig = ConfigurationUtil.getOutputConfiguration(tableName, fieldArray, zkUrlFinal, tenantId)
@transient val columns = PhoenixConfigurationUtil.getUpsertColumnMetadataList(partitionConfig).toList
 
rows.map { row =>


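A note on the change itself: going through HBaseFactoryProvider.getConfigurationFactory instead of calling HBaseConfiguration.create directly makes the Configuration source pluggable, so tests or embedding applications can substitute their own factory. The Scala sketch below illustrates the shape of that indirection only; the trait body and default implementation are simplified assumptions, not Phoenix's actual org.apache.phoenix.query code.

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.hbase.HBaseConfiguration

    // Illustrative sketch of a pluggable configuration factory (assumed
    // shape; Phoenix's real interfaces differ in detail).
    trait ConfigurationFactory {
      def getConfiguration(): Configuration
      def getConfiguration(confToClone: Configuration): Configuration
    }

    object HBaseFactoryProvider {
      // The default factory delegates to HBaseConfiguration; a test harness
      // could swap in a factory that returns a pre-built Configuration.
      @volatile var getConfigurationFactory: ConfigurationFactory =
        new ConfigurationFactory {
          def getConfiguration(): Configuration = HBaseConfiguration.create()
          def getConfiguration(confToClone: Configuration): Configuration =
            HBaseConfiguration.create(confToClone)
        }
    }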

[phoenix] branch 4.x-HBase-1.2 updated: PHOENIX-5141 Use HBaseFactoryProvider.getConfigurationFactory (addendum)

2019-02-28 Thread tdsilva
This is an automated email from the ASF dual-hosted git repository.

tdsilva pushed a commit to branch 4.x-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.2 by this push:
 new 1c893a1  PHOENIX-5141 Use HBaseFactoryProvider.getConfigurationFactory (addendum)
1c893a1 is described below

commit 1c893a1a41898bd585c46002fb27d2f9ea5bea8a
Author: Thomas D'Silva 
AuthorDate: Thu Feb 28 17:14:29 2019 -0800

PHOENIX-5141 Use HBaseFactoryProvider.getConfigurationFactory (addendum)
---
 .../src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
index 85a6d8a..ac3993a 100644
--- a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
+++ b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
@@ -45,7 +45,7 @@ class DataFrameFunctions(data: DataFrame) extends Serializable {
 val phxRDD = data.rdd.mapPartitions{ rows =>
  
// Create a within-partition config to retrieve the ColumnInfo list
-   @transient val partitionConfig = ConfigurationUtil.getOutputCon figuration(tableName, fieldArray, zkUrlFinal, tenantId)
+   @transient val partitionConfig = ConfigurationUtil.getOutputConfiguration(tableName, fieldArray, zkUrlFinal, tenantId)
@transient val columns = PhoenixConfigurationUtil.getUpsertColumnMetadataList(partitionConfig).toList
 
rows.map { row =>



[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5141 Use HBaseFactoryProvider.getConfigurationFactory (addendum)

2019-02-28 Thread tdsilva
This is an automated email from the ASF dual-hosted git repository.

tdsilva pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new bae3b97  PHOENIX-5141 Use HBaseFactoryProvider.getConfigurationFactory (addendum)
bae3b97 is described below

commit bae3b977d8bae395a715784dfe454fef63b587e4
Author: Thomas D'Silva 
AuthorDate: Thu Feb 28 17:14:29 2019 -0800

PHOENIX-5141 Use HBaseFactoryProvider.getConfigurationFactory (addendum)
---
 .../src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
index 85a6d8a..ac3993a 100644
--- a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
+++ b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
@@ -45,7 +45,7 @@ class DataFrameFunctions(data: DataFrame) extends Serializable {
 val phxRDD = data.rdd.mapPartitions{ rows =>
  
// Create a within-partition config to retrieve the ColumnInfo list
-   @transient val partitionConfig = ConfigurationUtil.getOutputCon figuration(tableName, fieldArray, zkUrlFinal, tenantId)
+   @transient val partitionConfig = ConfigurationUtil.getOutputConfiguration(tableName, fieldArray, zkUrlFinal, tenantId)
@transient val columns = PhoenixConfigurationUtil.getUpsertColumnMetadataList(partitionConfig).toList
 
rows.map { row =>



[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5141 Use HBaseFactoryProvider.getConfigurationFactory (addendum)

2019-02-28 Thread tdsilva
This is an automated email from the ASF dual-hosted git repository.

tdsilva pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 6082306  PHOENIX-5141 Use HBaseFactoryProvider.getConfigurationFactory (addendum)
6082306 is described below

commit 60823064a9480713d8d340265332f7bb8523b927
Author: Thomas D'Silva 
AuthorDate: Thu Feb 28 17:14:29 2019 -0800

PHOENIX-5141 Use HBaseFactoryProvider.getConfigurationFactory (addendum)
---
 .../src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
index 85a6d8a..ac3993a 100644
--- a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
+++ b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
@@ -45,7 +45,7 @@ class DataFrameFunctions(data: DataFrame) extends Serializable {
 val phxRDD = data.rdd.mapPartitions{ rows =>
  
// Create a within-partition config to retrieve the ColumnInfo list
-   @transient val partitionConfig = ConfigurationUtil.getOutputCon figuration(tableName, fieldArray, zkUrlFinal, tenantId)
+   @transient val partitionConfig = ConfigurationUtil.getOutputConfiguration(tableName, fieldArray, zkUrlFinal, tenantId)
@transient val columns = PhoenixConfigurationUtil.getUpsertColumnMetadataList(partitionConfig).toList
 
rows.map { row =>



[phoenix] branch master updated: PHOENIX-5141 Use HBaseFactoryProvider.getConfigurationFactory to get the config in PhoenixRDD (addendum)

2019-02-28 Thread tdsilva
This is an automated email from the ASF dual-hosted git repository.

tdsilva pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 8be7602  PHOENIX-5141 Use HBaseFactoryProvider.getConfigurationFactory to get the config in PhoenixRDD (addendum)
8be7602 is described below

commit 8be7602f954c352d255601a54a36e58b0b0f08c1
Author: Thomas D'Silva 
AuthorDate: Thu Feb 28 16:52:36 2019 -0800

PHOENIX-5141 Use HBaseFactoryProvider.getConfigurationFactory to get the config in PhoenixRDD (addendum)
---
 .../main/scala/org/apache/phoenix/spark/ConfigurationUtil.scala   | 7 +++++--
 .../main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala  | 8 ++++----
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/ConfigurationUtil.scala b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/ConfigurationUtil.scala
index d555954..9377986 100644
--- a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/ConfigurationUtil.scala
+++ b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/ConfigurationUtil.scala
@@ -17,6 +17,7 @@ import org.apache.hadoop.conf.Configuration
 import org.apache.hadoop.hbase.{HBaseConfiguration, HConstants}
 import org.apache.phoenix.jdbc.PhoenixEmbeddedDriver
 import org.apache.phoenix.mapreduce.util.{ColumnInfoToStringEncoderDecoder, PhoenixConfigurationUtil}
+import org.apache.phoenix.query.HBaseFactoryProvider
 import org.apache.phoenix.util.{ColumnInfo, PhoenixRuntime}
 
 import scala.collection.JavaConversions._
@@ -28,8 +29,8 @@ object ConfigurationUtil extends Serializable {
 
 // Create an HBaseConfiguration object from the passed in config, if present
 val config = conf match {
-  case Some(c) => HBaseConfiguration.create(c)
-  case _ => HBaseConfiguration.create()
+  case Some(c) => HBaseFactoryProvider.getConfigurationFactory.getConfiguration(c)
+  case _ => HBaseFactoryProvider.getConfigurationFactory.getConfiguration()
 }
 
 // Set the tenantId in the config if present
@@ -41,6 +42,8 @@ object ConfigurationUtil extends Serializable {
 // Set the table to save to
 PhoenixConfigurationUtil.setOutputTableName(config, tableName)
 PhoenixConfigurationUtil.setPhysicalTableName(config, tableName)
+// disable property provider evaluation
+PhoenixConfigurationUtil.setPropertyPolicyProviderDisabled(config);
 
 // Infer column names from the DataFrame schema
 PhoenixConfigurationUtil.setUpsertColumnNames(config, Array(columns : _*))
diff --git a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
index 3b0289d..85a6d8a 100644
--- a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
+++ b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
@@ -28,7 +28,7 @@ class DataFrameFunctions(data: DataFrame) extends Serializable {
saveToPhoenix(parameters("table"), zkUrl = parameters.get("zkUrl"), tenantId = parameters.get("TenantId"), skipNormalizingIdentifier=parameters.contains("skipNormalizingIdentifier"))
}
-  def saveToPhoenix(tableName: String, conf: Configuration = new Configuration,
+  def saveToPhoenix(tableName: String, conf: Option[Configuration] = None,
 zkUrl: Option[String] = None, tenantId: Option[String] = None, skipNormalizingIdentifier: Boolean = false): Unit = {
 
 // Retrieve the schema field names and normalize to Phoenix, need to do this outside of mapPartitions
@@ -36,7 +36,7 @@ class DataFrameFunctions(data: DataFrame) extends Serializable {
 
 
 // Create a configuration object to use for saving
-@transient val outConfig = ConfigurationUtil.getOutputConfiguration(tableName, fieldArray, zkUrl, tenantId, Some(conf))
+@transient val outConfig = ConfigurationUtil.getOutputConfiguration(tableName, fieldArray, zkUrl, tenantId, conf)
 
 // Retrieve the zookeeper URL
 val zkUrlFinal = ConfigurationUtil.getZookeeperURL(outConfig)
@@ -45,9 +45,9 @@ class DataFrameFunctions(data: DataFrame) extends Serializable {
 val phxRDD = data.rdd.mapPartitions{ rows =>
  
// Create a within-partition config to retrieve the ColumnInfo list
-   @transient val partitionConfig = ConfigurationUtil.getOutputConfiguration(tableName, fieldArray, zkUrlFinal, tenantId)
+   @transient val partitionConfig = ConfigurationUtil.getOutputCon figuration(tableName, fieldArray, zkUrlFinal, tenantId)
@transient val columns = PhoenixConfigurationUtil.getUpsertColumnMetadataList(partitionConfig).toList
- 
+
rows.map { row =>
  val rec = new PhoenixRecordWritable(columns)
  row.toSeq.foreach { e => rec.add(e) }


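For callers of the phoenix-spark connector, the signature change above (conf: Configuration becomes conf: Option[Configuration] = None) means an explicit Configuration is now wrapped in Some(...). A hypothetical usage sketch, assuming the saveToPhoenix implicit comes from the phoenix-spark package object; the DataFrame, table name, and ZooKeeper quorum are placeholders:

    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.phoenix.spark._  // assumed: provides the saveToPhoenix implicit
    import org.apache.spark.sql.DataFrame

    val df: DataFrame = ???  // placeholder; comes from a SparkSession in real code

    val conf = HBaseConfiguration.create()
    conf.set("hbase.zookeeper.quorum", "zk-host:2181")  // placeholder quorum

    // Before this change: df.saveToPhoenix("OUTPUT_TABLE", conf)
    // After it, the parameter is optional and wrapped in Some(...):
    df.saveToPhoenix("OUTPUT_TABLE", conf = Some(conf))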

[phoenix] branch 4.x-HBase-1.2 updated: PHOENIX-5141 Use HBaseFactoryProvider.getConfigurationFactory to get the config in PhoenixRDD (addendum)

2019-02-28 Thread tdsilva
This is an automated email from the ASF dual-hosted git repository.

tdsilva pushed a commit to branch 4.x-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.2 by this push:
 new 7e89269  PHOENIX-5141 Use HBaseFactoryProvider.getConfigurationFactory to get the config in PhoenixRDD (addendum)
7e89269 is described below

commit 7e892690c702a5465f4ddaebced0a0b018c7b629
Author: Thomas D'Silva 
AuthorDate: Thu Feb 28 16:52:36 2019 -0800

PHOENIX-5141 Use HBaseFactoryProvider.getConfigurationFactory to get the config in PhoenixRDD (addendum)
---
 .../main/scala/org/apache/phoenix/spark/ConfigurationUtil.scala   | 7 +++++--
 .../main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala  | 8 ++++----
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/ConfigurationUtil.scala b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/ConfigurationUtil.scala
index d555954..9377986 100644
--- a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/ConfigurationUtil.scala
+++ b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/ConfigurationUtil.scala
@@ -17,6 +17,7 @@ import org.apache.hadoop.conf.Configuration
 import org.apache.hadoop.hbase.{HBaseConfiguration, HConstants}
 import org.apache.phoenix.jdbc.PhoenixEmbeddedDriver
 import org.apache.phoenix.mapreduce.util.{ColumnInfoToStringEncoderDecoder, PhoenixConfigurationUtil}
+import org.apache.phoenix.query.HBaseFactoryProvider
 import org.apache.phoenix.util.{ColumnInfo, PhoenixRuntime}
 
 import scala.collection.JavaConversions._
@@ -28,8 +29,8 @@ object ConfigurationUtil extends Serializable {
 
 // Create an HBaseConfiguration object from the passed in config, if present
 val config = conf match {
-  case Some(c) => HBaseConfiguration.create(c)
-  case _ => HBaseConfiguration.create()
+  case Some(c) => HBaseFactoryProvider.getConfigurationFactory.getConfiguration(c)
+  case _ => HBaseFactoryProvider.getConfigurationFactory.getConfiguration()
 }
 
 // Set the tenantId in the config if present
@@ -41,6 +42,8 @@ object ConfigurationUtil extends Serializable {
 // Set the table to save to
 PhoenixConfigurationUtil.setOutputTableName(config, tableName)
 PhoenixConfigurationUtil.setPhysicalTableName(config, tableName)
+// disable property provider evaluation
+PhoenixConfigurationUtil.setPropertyPolicyProviderDisabled(config);
 
 // Infer column names from the DataFrame schema
 PhoenixConfigurationUtil.setUpsertColumnNames(config, Array(columns : _*))
diff --git a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
index 3b0289d..85a6d8a 100644
--- a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
+++ b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
@@ -28,7 +28,7 @@ class DataFrameFunctions(data: DataFrame) extends Serializable {
saveToPhoenix(parameters("table"), zkUrl = parameters.get("zkUrl"), tenantId = parameters.get("TenantId"), skipNormalizingIdentifier=parameters.contains("skipNormalizingIdentifier"))
}
-  def saveToPhoenix(tableName: String, conf: Configuration = new Configuration,
+  def saveToPhoenix(tableName: String, conf: Option[Configuration] = None,
 zkUrl: Option[String] = None, tenantId: Option[String] = None, skipNormalizingIdentifier: Boolean = false): Unit = {
 
 // Retrieve the schema field names and normalize to Phoenix, need to do this outside of mapPartitions
@@ -36,7 +36,7 @@ class DataFrameFunctions(data: DataFrame) extends Serializable {
 
 
 // Create a configuration object to use for saving
-@transient val outConfig = ConfigurationUtil.getOutputConfiguration(tableName, fieldArray, zkUrl, tenantId, Some(conf))
+@transient val outConfig = ConfigurationUtil.getOutputConfiguration(tableName, fieldArray, zkUrl, tenantId, conf)
 
 // Retrieve the zookeeper URL
 val zkUrlFinal = ConfigurationUtil.getZookeeperURL(outConfig)
@@ -45,9 +45,9 @@ class DataFrameFunctions(data: DataFrame) extends Serializable {
 val phxRDD = data.rdd.mapPartitions{ rows =>
  
// Create a within-partition config to retrieve the ColumnInfo list
-   @transient val partitionConfig = ConfigurationUtil.getOutputConfiguration(tableName, fieldArray, zkUrlFinal, tenantId)
+   @transient val partitionConfig = ConfigurationUtil.getOutputCon figuration(tableName, fieldArray, zkUrlFinal, tenantId)
@transient val columns = PhoenixConfigurationUtil.getUpsertColumnMetadataList(partitionConfig).toList
- 
+
rows.map { row =>
  val rec = new PhoenixRecordWritable(columns)
  row.toSeq.foreach { e => rec.add(e) }



[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5141 Use HBaseFactoryProvider.getConfigurationFactory to get the config in PhoenixRDD (addendum)

2019-02-28 Thread tdsilva
This is an automated email from the ASF dual-hosted git repository.

tdsilva pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 022362c  PHOENIX-5141 Use HBaseFactoryProvider.getConfigurationFactory to get the config in PhoenixRDD (addendum)
022362c is described below

commit 022362cdccdd5d5e5470de84094dbaead55fa0c1
Author: Thomas D'Silva 
AuthorDate: Thu Feb 28 16:52:36 2019 -0800

PHOENIX-5141 Use HBaseFactoryProvider.getConfigurationFactory to get the config in PhoenixRDD (addendum)
---
 .../main/scala/org/apache/phoenix/spark/ConfigurationUtil.scala   | 7 +++++--
 .../main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala  | 8 ++++----
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/ConfigurationUtil.scala b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/ConfigurationUtil.scala
index d555954..9377986 100644
--- a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/ConfigurationUtil.scala
+++ b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/ConfigurationUtil.scala
@@ -17,6 +17,7 @@ import org.apache.hadoop.conf.Configuration
 import org.apache.hadoop.hbase.{HBaseConfiguration, HConstants}
 import org.apache.phoenix.jdbc.PhoenixEmbeddedDriver
 import org.apache.phoenix.mapreduce.util.{ColumnInfoToStringEncoderDecoder, PhoenixConfigurationUtil}
+import org.apache.phoenix.query.HBaseFactoryProvider
 import org.apache.phoenix.util.{ColumnInfo, PhoenixRuntime}
 
 import scala.collection.JavaConversions._
@@ -28,8 +29,8 @@ object ConfigurationUtil extends Serializable {
 
 // Create an HBaseConfiguration object from the passed in config, if present
 val config = conf match {
-  case Some(c) => HBaseConfiguration.create(c)
-  case _ => HBaseConfiguration.create()
+  case Some(c) => HBaseFactoryProvider.getConfigurationFactory.getConfiguration(c)
+  case _ => HBaseFactoryProvider.getConfigurationFactory.getConfiguration()
 }
 
 // Set the tenantId in the config if present
@@ -41,6 +42,8 @@ object ConfigurationUtil extends Serializable {
 // Set the table to save to
 PhoenixConfigurationUtil.setOutputTableName(config, tableName)
 PhoenixConfigurationUtil.setPhysicalTableName(config, tableName)
+// disable property provider evaluation
+PhoenixConfigurationUtil.setPropertyPolicyProviderDisabled(config);
 
 // Infer column names from the DataFrame schema
 PhoenixConfigurationUtil.setUpsertColumnNames(config, Array(columns : _*))
diff --git a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
index 3b0289d..85a6d8a 100644
--- a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
+++ b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
@@ -28,7 +28,7 @@ class DataFrameFunctions(data: DataFrame) extends Serializable {
saveToPhoenix(parameters("table"), zkUrl = parameters.get("zkUrl"), tenantId = parameters.get("TenantId"), skipNormalizingIdentifier=parameters.contains("skipNormalizingIdentifier"))
}
-  def saveToPhoenix(tableName: String, conf: Configuration = new Configuration,
+  def saveToPhoenix(tableName: String, conf: Option[Configuration] = None,
 zkUrl: Option[String] = None, tenantId: Option[String] = None, skipNormalizingIdentifier: Boolean = false): Unit = {
 
 // Retrieve the schema field names and normalize to Phoenix, need to do this outside of mapPartitions
@@ -36,7 +36,7 @@ class DataFrameFunctions(data: DataFrame) extends Serializable {
 
 
 // Create a configuration object to use for saving
-@transient val outConfig = ConfigurationUtil.getOutputConfiguration(tableName, fieldArray, zkUrl, tenantId, Some(conf))
+@transient val outConfig = ConfigurationUtil.getOutputConfiguration(tableName, fieldArray, zkUrl, tenantId, conf)
 
 // Retrieve the zookeeper URL
 val zkUrlFinal = ConfigurationUtil.getZookeeperURL(outConfig)
@@ -45,9 +45,9 @@ class DataFrameFunctions(data: DataFrame) extends Serializable {
 val phxRDD = data.rdd.mapPartitions{ rows =>
  
// Create a within-partition config to retrieve the ColumnInfo list
-   @transient val partitionConfig = ConfigurationUtil.getOutputConfiguration(tableName, fieldArray, zkUrlFinal, tenantId)
+   @transient val partitionConfig = ConfigurationUtil.getOutputCon figuration(tableName, fieldArray, zkUrlFinal, tenantId)
@transient val columns = PhoenixConfigurationUtil.getUpsertColumnMetadataList(partitionConfig).toList
- 
+
rows.map { row =>
  val rec = new PhoenixRecordWritable(columns)
  row.toSeq.foreach { e => rec.add(e) }



[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5141 Use HBaseFactoryProvider.getConfigurationFactory to get the config in PhoenixRDD (addendum)

2019-02-28 Thread tdsilva
This is an automated email from the ASF dual-hosted git repository.

tdsilva pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 946106a  PHOENIX-5141 Use HBaseFactoryProvider.getConfigurationFactory to get the config in PhoenixRDD (addendum)
946106a is described below

commit 946106a40375244f7eb3cb1a5a36a6e7b16615a7
Author: Thomas D'Silva 
AuthorDate: Thu Feb 28 16:52:36 2019 -0800

PHOENIX-5141 Use HBaseFactoryProvider.getConfigurationFactory to get the config in PhoenixRDD (addendum)
---
 .../main/scala/org/apache/phoenix/spark/ConfigurationUtil.scala   | 7 +++++--
 .../main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala  | 8 ++++----
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/ConfigurationUtil.scala b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/ConfigurationUtil.scala
index d555954..9377986 100644
--- a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/ConfigurationUtil.scala
+++ b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/ConfigurationUtil.scala
@@ -17,6 +17,7 @@ import org.apache.hadoop.conf.Configuration
 import org.apache.hadoop.hbase.{HBaseConfiguration, HConstants}
 import org.apache.phoenix.jdbc.PhoenixEmbeddedDriver
 import org.apache.phoenix.mapreduce.util.{ColumnInfoToStringEncoderDecoder, PhoenixConfigurationUtil}
+import org.apache.phoenix.query.HBaseFactoryProvider
 import org.apache.phoenix.util.{ColumnInfo, PhoenixRuntime}
 
 import scala.collection.JavaConversions._
@@ -28,8 +29,8 @@ object ConfigurationUtil extends Serializable {
 
 // Create an HBaseConfiguration object from the passed in config, if present
 val config = conf match {
-  case Some(c) => HBaseConfiguration.create(c)
-  case _ => HBaseConfiguration.create()
+  case Some(c) => HBaseFactoryProvider.getConfigurationFactory.getConfiguration(c)
+  case _ => HBaseFactoryProvider.getConfigurationFactory.getConfiguration()
 }
 
 // Set the tenantId in the config if present
@@ -41,6 +42,8 @@ object ConfigurationUtil extends Serializable {
 // Set the table to save to
 PhoenixConfigurationUtil.setOutputTableName(config, tableName)
 PhoenixConfigurationUtil.setPhysicalTableName(config, tableName)
+// disable property provider evaluation
+PhoenixConfigurationUtil.setPropertyPolicyProviderDisabled(config);
 
 // Infer column names from the DataFrame schema
 PhoenixConfigurationUtil.setUpsertColumnNames(config, Array(columns : _*))
diff --git a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
index 3b0289d..85a6d8a 100644
--- a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
+++ b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
@@ -28,7 +28,7 @@ class DataFrameFunctions(data: DataFrame) extends Serializable {
saveToPhoenix(parameters("table"), zkUrl = parameters.get("zkUrl"), tenantId = parameters.get("TenantId"), skipNormalizingIdentifier=parameters.contains("skipNormalizingIdentifier"))
}
-  def saveToPhoenix(tableName: String, conf: Configuration = new Configuration,
+  def saveToPhoenix(tableName: String, conf: Option[Configuration] = None,
 zkUrl: Option[String] = None, tenantId: Option[String] = None, skipNormalizingIdentifier: Boolean = false): Unit = {
 
 // Retrieve the schema field names and normalize to Phoenix, need to do this outside of mapPartitions
@@ -36,7 +36,7 @@ class DataFrameFunctions(data: DataFrame) extends Serializable {
 
 
 // Create a configuration object to use for saving
-@transient val outConfig = ConfigurationUtil.getOutputConfiguration(tableName, fieldArray, zkUrl, tenantId, Some(conf))
+@transient val outConfig = ConfigurationUtil.getOutputConfiguration(tableName, fieldArray, zkUrl, tenantId, conf)
 
 // Retrieve the zookeeper URL
 val zkUrlFinal = ConfigurationUtil.getZookeeperURL(outConfig)
@@ -45,9 +45,9 @@ class DataFrameFunctions(data: DataFrame) extends Serializable {
 val phxRDD = data.rdd.mapPartitions{ rows =>
  
// Create a within-partition config to retrieve the ColumnInfo list
-   @transient val partitionConfig = ConfigurationUtil.getOutputConfiguration(tableName, fieldArray, zkUrlFinal, tenantId)
+   @transient val partitionConfig = ConfigurationUtil.getOutputCon figuration(tableName, fieldArray, zkUrlFinal, tenantId)
@transient val columns = PhoenixConfigurationUtil.getUpsertColumnMetadataList(partitionConfig).toList
- 
+
rows.map { row =>
  val rec = new PhoenixRecordWritable(columns)
  row.toSeq.foreach { e => rec.add(e) }



[phoenix] branch master updated: PHOENIX-374: Enable access to dynamic columns in * or cf.* selection (Addendum)

2019-02-28 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new af9ae19  PHOENIX-374: Enable access to dynamic columns in * or cf.* selection (Addendum)
af9ae19 is described below

commit af9ae1933b48b00e7ff9c4d8417678338ce4a18b
Author: Chinmay Kulkarni 
AuthorDate: Thu Feb 28 15:47:42 2019 -0800

PHOENIX-374: Enable access to dynamic columns in * or cf.* selection (Addendum)
---
 phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
index a7936e0..cd961da 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
@@ -1291,7 +1291,9 @@ public class PTableImpl implements PTable {
 }
 String fam = Bytes.toString(family);
 if (column.isDynamic()) {
-this.colFamToDynamicColumnsMapping.putIfAbsent(fam, new ArrayList<>());
+if (!this.colFamToDynamicColumnsMapping.containsKey(fam)) {
+this.colFamToDynamicColumnsMapping.put(fam, new ArrayList<>());
+}
 this.colFamToDynamicColumnsMapping.get(fam).add(column);
 }
 }


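The replaced call, Map.putIfAbsent, is a default method that only exists on java.util.Map from Java 8 onward, so the containsKey/put guard is the more conservative spelling; the commit message does not state the motivation, so that reading is an inference. The same get-or-create-then-append pattern, rendered as a Scala sketch for consistency with the other examples in this digest (the type and variable names are placeholders, not Phoenix's):

    import scala.collection.mutable

    // Placeholder column type standing in for Phoenix's column representation.
    case class DynamicColumn(family: String, name: String)

    val colFamToDynamicColumns =
      mutable.Map.empty[String, mutable.ListBuffer[DynamicColumn]]

    def addDynamicColumn(fam: String, column: DynamicColumn): Unit = {
      // getOrElseUpdate plays the role of the containsKey/put guard above:
      // create the per-family list on first use, then append the column.
      colFamToDynamicColumns.getOrElseUpdate(fam, mutable.ListBuffer.empty) += column
    }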

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-374: Enable access to dynamic columns in * or cf.* selection (Addendum)

2019-02-28 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new f749240  PHOENIX-374: Enable access to dynamic columns in * or cf.* selection (Addendum)
f749240 is described below

commit f749240fe16011672df57779bc40f51dddefb011
Author: Chinmay Kulkarni 
AuthorDate: Thu Feb 28 15:47:42 2019 -0800

PHOENIX-374: Enable access to dynamic columns in * or cf.* selection (Addendum)
---
 phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
index 8b71c54..5f499d8 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
@@ -1291,7 +1291,9 @@ public class PTableImpl implements PTable {
 }
 String fam = Bytes.toString(family);
 if (column.isDynamic()) {
-this.colFamToDynamicColumnsMapping.putIfAbsent(fam, new ArrayList());
+if (!this.colFamToDynamicColumnsMapping.containsKey(fam)) {
+this.colFamToDynamicColumnsMapping.put(fam, new ArrayList());
+}
 this.colFamToDynamicColumnsMapping.get(fam).add(column);
 }
 }



[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-374: Enable access to dynamic columns in * or cf.* selection (Addendum)

2019-02-28 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new a4e213d  PHOENIX-374: Enable access to dynamic columns in * or cf.* selection (Addendum)
a4e213d is described below

commit a4e213de1674ec9d8132e0a3ea173c62c52dfd03
Author: Chinmay Kulkarni 
AuthorDate: Thu Feb 28 15:47:42 2019 -0800

PHOENIX-374: Enable access to dynamic columns in * or cf.* selection (Addendum)
---
 phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
index 8b71c54..5f499d8 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
@@ -1291,7 +1291,9 @@ public class PTableImpl implements PTable {
 }
 String fam = Bytes.toString(family);
 if (column.isDynamic()) {
-this.colFamToDynamicColumnsMapping.putIfAbsent(fam, new ArrayList());
+if (!this.colFamToDynamicColumnsMapping.containsKey(fam)) {
+this.colFamToDynamicColumnsMapping.put(fam, new ArrayList());
+}
 this.colFamToDynamicColumnsMapping.get(fam).add(column);
 }
 }



svn commit: r1854545 - in /phoenix/site: publish/language/datatypes.html publish/language/functions.html publish/language/index.html publish/news.html source/src/site/markdown/news.md

2019-02-28 Thread elserj
Author: elserj
Date: Thu Feb 28 19:50:22 2019
New Revision: 1854545

URL: http://svn.apache.org/viewvc?rev=1854545&view=rev
Log:
Add a news link to the nosql day blog post

Modified:
phoenix/site/publish/language/datatypes.html
phoenix/site/publish/language/functions.html
phoenix/site/publish/language/index.html
phoenix/site/publish/news.html
phoenix/site/source/src/site/markdown/news.md

Modified: phoenix/site/publish/language/datatypes.html
URL: http://svn.apache.org/viewvc/phoenix/site/publish/language/datatypes.html?rev=1854545&r1=1854544&r2=1854545&view=diff
==============================================================================
--- phoenix/site/publish/language/datatypes.html (original)
+++ phoenix/site/publish/language/datatypes.html Thu Feb 28 19:50:22 2019
@@ -1,7 +1,7 @@
@@ -987,7 +987,7 @@ syntax-end -->
 Back to top
-   Copyright 2018 <a href="http://www.apache.org">Apache Software Foundation</a>. All Rights Reserved.
+   Copyright 2019 <a href="http://www.apache.org">Apache Software Foundation</a>. All Rights Reserved.




Modified: phoenix/site/publish/language/functions.html
URL: http://svn.apache.org/viewvc/phoenix/site/publish/language/functions.html?rev=1854545&r1=1854544&r2=1854545&view=diff
==============================================================================
--- phoenix/site/publish/language/functions.html (original)
+++ phoenix/site/publish/language/functions.html Thu Feb 28 19:50:22 2019
@@ -1,7 +1,7 @@
@@ -2294,7 +2294,7 @@ syntax-end -->
 Back to top
-   Copyright 2018 <a href="http://www.apache.org">Apache Software Foundation</a>. All Rights Reserved.
+   Copyright 2019 <a href="http://www.apache.org">Apache Software Foundation</a>. All Rights Reserved.




Modified: phoenix/site/publish/language/index.html
URL: http://svn.apache.org/viewvc/phoenix/site/publish/language/index.html?rev=1854545&r1=1854544&r2=1854545&view=diff
==============================================================================
--- phoenix/site/publish/language/index.html (original)
+++ phoenix/site/publish/language/index.html Thu Feb 28 19:50:22 2019
@@ -1,7 +1,7 @@
@@ -2086,7 +2086,7 @@ syntax-end -->
 Back to top
-   Copyright 2018 <a href="http://www.apache.org">Apache Software Foundation</a>. All Rights Reserved.
+   Copyright 2019 <a href="http://www.apache.org">Apache Software Foundation</a>. All Rights Reserved.




Modified: phoenix/site/publish/news.html
URL: http://svn.apache.org/viewvc/phoenix/site/publish/news.html?rev=1854545&r1=1854544&r2=1854545&view=diff
==============================================================================
--- phoenix/site/publish/news.html (original)
+++ phoenix/site/publish/news.html Thu Feb 28 19:50:22 2019
@@ -1,7 +1,7 @@
@@ -171,6 +171,10 @@
 
+   <a href="https://blogs.apache.org/phoenix/entry/nosql-day-2019">NoSQL Day 2019 in Washington, DC</a> (February 28, 2019)
+
+   
+   
 <a href="https://blogs.apache.org/phoenix/entry/apache-phoenix-releases-next-major">Announcing Phoenix 5.0.0 released</a> (July 4, 2018)
 
@@ -506,7 +510,7 @@
 Back to top
-   Copyright 2018 <a href="http://www.apache.org">Apache Software Foundation</a>. All Rights Reserved.
+   Copyright 2019 <a href="http://www.apache.org">Apache Software Foundation</a>. All Rights Reserved.




Modified: phoenix/site/source/src/site/markdown/news.md
URL: http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/news.md?rev=1854545&r1=1854544&r2=1854545&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/news.md (original)
+++ phoenix/site/source/src/site/markdown/news.md Thu Feb 28 19:50:22 2019
@@ -1,6 +1,8 @@
 # Apache Phoenix News
 
+ [NoSQL Day 2019 in Washington, DC](https://blogs.apache.org/phoenix/entry/nosql-day-2019) (February 28, 2019)
+
  [Announcing Phoenix 5.0.0 released](https://blogs.apache.org/phoenix/entry/apache-phoenix-releases-next-major) (July 4, 2018)
 
  [PhoenixCon 2018 announced for June 18th, 2018](https://phoenix.apache.org/phoenixcon-2018) (March 24, 2018)




Build failed in Jenkins: Phoenix Compile Compatibility with HBase #922

2019-02-28 Thread Apache Jenkins Server
See 


--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on H25 (ubuntu xenial) in workspace 

[Phoenix_Compile_Compat_wHBase] $ /bin/bash /tmp/jenkins937428915450034979.sh
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 386407
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 6
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 10240
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited
core id : 0
core id : 1
core id : 2
core id : 3
core id : 4
core id : 5
physical id : 0
physical id : 1
MemTotal:   98957636 kB
MemFree:78998780 kB
Filesystem  Size  Used Avail Use% Mounted on
udev 48G 0   48G   0% /dev
tmpfs   9.5G 1010M  8.5G  11% /run
/dev/sda3   3.6T  282G  3.2T   9% /
tmpfs48G 0   48G   0% /dev/shm
tmpfs   5.0M 0  5.0M   0% /run/lock
tmpfs48G 0   48G   0% /sys/fs/cgroup
/dev/sda2   473M  191M  258M  43% /boot
/dev/loop1   28M   28M 0 100% /snap/snapcraft/1871
tmpfs   9.5G  4.0K  9.5G   1% /run/user/910
tmpfs   9.5G 0  9.5G   0% /run/user/1000
/dev/loop2   28M   28M 0 100% /snap/snapcraft/2374
/dev/loop9   92M   92M 0 100% /snap/core/6259
/dev/loop11  91M   91M 0 100% /snap/core/6350
/dev/loop8   56M   56M 0 100% /snap/snapcraft/2496
/dev/loop10  91M   91M 0 100% /snap/core/6405
/dev/loop4   53M   53M 0 100% /snap/lxd/10206
/dev/loop7   53M   53M 0 100% /snap/lxd/10218
/dev/loop12  53M   53M 0 100% /snap/lxd/10234
apache-maven-2.2.1
apache-maven-3.0.4
apache-maven-3.0.5
apache-maven-3.1.1
apache-maven-3.2.1
apache-maven-3.2.5
apache-maven-3.3.3
apache-maven-3.3.9
apache-maven-3.5.0
apache-maven-3.5.2
apache-maven-3.5.4
apache-maven-3.6.0
latest
latest2
latest3


===
Verifying compile level compatibility with HBase 0.98 with Phoenix 4.x-HBase-0.98
===

Cloning into 'hbase'...
Switched to a new branch '0.98'
Branch 0.98 set up to track remote branch 0.98 from origin.
[ERROR] Plugin org.codehaus.mojo:findbugs-maven-plugin:2.5.2 or one of its dependencies could not be resolved: Failed to read artifact descriptor for org.codehaus.mojo:findbugs-maven-plugin:jar:2.5.2: Could not transfer artifact org.codehaus.mojo:findbugs-maven-plugin:pom:2.5.2 from/to central (https://repo.maven.apache.org/maven2): Received fatal alert: protocol_version -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException
Build step 'Execute shell' marked build as failure