[jira] [Commented] (PHOENIX-4489) HBase Connection leak in Phoenix MR Jobs

2017-12-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16302171#comment-16302171
 ] 

James Taylor commented on PHOENIX-4489:
---

Yes, definitely a concern. Let's make sure there are no connection leaks. This 
may be the cause of the issue reported over on PHOENIX-4247. Perhaps this is a 
duplicate? 

FYI, [~kumarappan].

> HBase Connection leak in Phoenix MR Jobs
> 
>
> Key: PHOENIX-4489
> URL: https://issues.apache.org/jira/browse/PHOENIX-4489
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Karan Mehta
>
> Phoenix MR jobs use a custom class {{PhoenixInputFormat}} to determine the 
> splits and the parallelism of the work. The class directly opens an HBase 
> connection, which is not closed after use. Independently running MR 
> jobs are not affected; however, jobs that run through Phoenix-Spark 
> can leak connections if this is left unclosed (since those jobs run as a 
> part of the same JVM). 
> Apart from this, the connection should be instantiated with 
> {{HBaseFactoryProvider.getHConnectionFactory()}} instead of the default one. 
> This is useful if a separate client is trying to run jobs and wants to 
> provide a custom implementation of {{HConnection}}. 
> [~jmahonin] Any ideas?
> [~jamestaylor] [~vincentpoon] Any concerns around this?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Reopened] (PHOENIX-4247) Phoenix/Spark/ZK connection

2017-12-22 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reopened PHOENIX-4247:
---

Reopening, as this seems less like a question and more like a report of an 
issue. It would be nice to have a unit test to repro the issue, though.

> Phoenix/Spark/ZK connection
> ---
>
> Key: PHOENIX-4247
> URL: https://issues.apache.org/jira/browse/PHOENIX-4247
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: HBase 1.2 
> Spark 1.6 
> Phoenix 4.10 
>Reporter: Kumar Palaniappan
>
> After upgrading to CDH 5.9.1/Phoenix 4.10/Spark 1.6 from CDH 5.5.2/Phoenix 
> 4.6/Spark 1.5, streaming jobs that read data from Phoenix no longer release 
> their ZooKeeper connections, meaning that the number of connections from the 
> driver grows with each batch until the ZooKeeper limit on connections per IP 
> address is reached, at which point the Spark streaming job can no longer read 
> data from Phoenix.





[jira] [Commented] (PHOENIX-4487) Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13

2017-12-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16302168#comment-16302168
 ] 

James Taylor commented on PHOENIX-4487:
---

bq. However as discussed offline with [~twdsi...@gmail.com], we decided that 
since we _officially_ support only 2 client backward versions, this should be 
fine.
Please don't remove upgrade code without raising a discussion thread on the dev 
list. Vendor distros often need upgrade code to remain for longer than two minor 
releases back because their upgrades are infrequent. For example, CDH will go 
from 4.7 to 4.11. The HDP distro may be even further behind.
bq. The first line checks if the SYSTEM.MUTEX table exists or not. This can be in 2 
cases: either the table doesn't exist at all or it's being migrated to the SYSTEM 
namespace. There is no clear way of differentiating them and hence there arises 
a plausible race condition here.
How is it better to go from an unlikely but possible race condition to always 
throwing an exception and blocking the upgrade when the mutex table doesn't 
exist? Please file a separate issue for handling the race condition properly. 
This patch at least improves things.
bq. I don't see the need for checking if namespace mapping is enabled or not. 
That check is there because if namespaces are enabled, the earlier branch in the 
upgrade code path is executed and ensures that the mutex table is created. I'll 
add a comment.
bq. Also, how do we test this? Any UT or directly tried with a cluster?
Upgrade code needs to be manually checked. I'm hoping [~f.pompermaier] can help 
test this once this patch is committed.


> Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13
> -
>
> Key: PHOENIX-4487
> URL: https://issues.apache.org/jira/browse/PHOENIX-4487
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 4.13.1, 4.13.2-cdh5.11.2
>Reporter: Flavio Pompermaier
>Assignee: James Taylor
> Attachments: PHOENIX-4487.patch
>
>
> Upgrading from the official Cloudera parcel equipped with Phoenix 4.7 to the 
> latest unofficial parcel (4.13 on CDH 5.11.2), I hit the same error as 
> https://issues.apache.org/jira/browse/PHOENIX-4293 (apart from the fact that 
> the starting version was 4.7...).
> Creating the SYSTEM.MUTEX table fixed the problem.





[jira] [Updated] (PHOENIX-4489) HBase Connection leak in Phoenix MR Jobs

2017-12-22 Thread Karan Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karan Mehta updated PHOENIX-4489:
-
Description: 
Phoenix MR jobs use a custom class {{PhoenixInputFormat}} to determine the 
splits and the parallelism of the work. The class directly opens an HBase 
connection, which is not closed after use. Independently running MR jobs 
are not affected; however, jobs that run through Phoenix-Spark can 
leak connections if this is left unclosed (since those jobs run as a part of 
the same JVM). 

Apart from this, the connection should be instantiated with 
{{HBaseFactoryProvider.getHConnectionFactory()}} instead of the default one. This 
is useful if a separate client is trying to run jobs and wants to provide a 
custom implementation of {{HConnection}}. 

[~jmahonin] Any ideas?
[~jamestaylor] [~vincentpoon] Any concerns around this?

  was:
Phoenix MR jobs uses a custom class {{PhoenixInputFormat}} to determine the 
splits and the parallelism of the work. The class directly opens up a HBase 
connection, which is not closed after the usage. Independently running MR jobs 
should not have any concern, however jobs that run through Phoenix-Spark can 
cause leak issues if this is left unclosed (since those jobs run as a part of 
same JVM). 

Apart from this, the connection should be instantiated with 
{[HBaseFactoryProvider.getHConnectionFactory()}} instead of the default one. It 
can be useful if a separate client is trying to run jobs and wants to provide a 
custom implementation of {{HConnection}}. 

[~jmahonin] Any ideas?
[~jamestaylor] [~vincentpoon] Any concerns around this?


> HBase Connection leak in Phoenix MR Jobs
> 
>
> Key: PHOENIX-4489
> URL: https://issues.apache.org/jira/browse/PHOENIX-4489
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Karan Mehta
>
> Phoenix MR jobs use a custom class {{PhoenixInputFormat}} to determine the 
> splits and the parallelism of the work. The class directly opens an HBase 
> connection, which is not closed after use. Independently running MR 
> jobs are not affected; however, jobs that run through Phoenix-Spark 
> can leak connections if this is left unclosed (since those jobs run as a 
> part of the same JVM). 
> Apart from this, the connection should be instantiated with 
> {{HBaseFactoryProvider.getHConnectionFactory()}} instead of the default one. 
> This is useful if a separate client is trying to run jobs and wants to 
> provide a custom implementation of {{HConnection}}. 
> [~jmahonin] Any ideas?
> [~jamestaylor] [~vincentpoon] Any concerns around this?





[jira] [Created] (PHOENIX-4490) Phoenix Spark Module doesn't pass in user properties to create connection

2017-12-22 Thread Karan Mehta (JIRA)
Karan Mehta created PHOENIX-4490:


 Summary: Phoenix Spark Module doesn't pass in user properties to 
create connection
 Key: PHOENIX-4490
 URL: https://issues.apache.org/jira/browse/PHOENIX-4490
 Project: Phoenix
  Issue Type: Bug
Reporter: Karan Mehta


The Phoenix Spark module doesn't work correctly in a Kerberos environment. This is 
because whenever a new {{PhoenixRDD}} is built, it is always built with new 
and default properties. The following piece of code in {{PhoenixRelation}} is 
an example. This is the class used by Spark to create a {{BaseRelation}} before 
executing a scan. 
{code}
new PhoenixRDD(
  sqlContext.sparkContext,
  tableName,
  requiredColumns,
  Some(buildFilter(filters)),
  Some(zkUrl),
  new Configuration(),
  dateAsTimestamp
).toDataFrame(sqlContext).rdd
{code}

This would work fine in most cases where the Spark code runs on the same 
cluster as HBase, since the config object will pick up properties from classpath 
XML files. However, in an external environment we should use the user-provided 
properties and merge them before creating any {{PhoenixRelation}} or 
{{PhoenixRDD}}. As per my understanding, we should ideally provide properties 
in the {{DefaultSource#createRelation()}} method.

An example of where this fails: Spark tries to get the splits, to optimize the 
MR performance for loading data from the table, in the 
{{PhoenixInputFormat#generateSplits()}} method. Ideally, it should get all the 
config parameters from the {{JobContext}} being passed in, but it defaults to 
{{new Configuration()}}, irrespective of what the user passes in. Thus it fails to 
create a connection.
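For illustration, the merge described above can be sketched with plain {{java.util.Properties}} standing in for Hadoop's {{Configuration}}. This is a minimal sketch under that assumption; the class and property names are illustrative, not the actual Phoenix-Spark code:

```java
// Sketch: start from cluster defaults (what the classpath XML files would
// provide) and overlay user-supplied properties, instead of building from a
// bare `new Configuration()`. java.util.Properties stands in for Hadoop's
// Configuration to keep the example self-contained.
import java.util.Properties;

public class PropsMergeDemo {
    static Properties merge(Properties defaults, Properties userProps) {
        Properties merged = new Properties();
        merged.putAll(defaults);   // classpath / cluster defaults first
        merged.putAll(userProps);  // user-supplied values win on conflict
        return merged;
    }

    public static void main(String[] args) {
        Properties defaults = new Properties();
        defaults.setProperty("hbase.zookeeper.quorum", "localhost");
        defaults.setProperty("zookeeper.znode.parent", "/hbase");

        Properties user = new Properties();
        user.setProperty("hbase.zookeeper.quorum", "zk1.example.com"); // hypothetical host

        Properties merged = merge(defaults, user);
        System.out.println(merged.getProperty("hbase.zookeeper.quorum")); // zk1.example.com
        System.out.println(merged.getProperty("zookeeper.znode.parent")); // /hbase
    }
}
```

The key point is the merge order: user properties are applied last so they override defaults, which is what a Kerberized external client would need.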

[~jmahonin] [~maghamraviki...@gmail.com] 
Any ideas or advice? Let me know if I am missing anything obvious here.





[jira] [Commented] (PHOENIX-4489) HBase Connection leak in Phoenix MR Jobs

2017-12-22 Thread Karan Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16302159#comment-16302159
 ] 

Karan Mehta commented on PHOENIX-4489:
--

The fix seems straightforward, but I want to make sure it's correct before moving 
on.

> HBase Connection leak in Phoenix MR Jobs
> 
>
> Key: PHOENIX-4489
> URL: https://issues.apache.org/jira/browse/PHOENIX-4489
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Karan Mehta
>
> Phoenix MR jobs use a custom class {{PhoenixInputFormat}} to determine the 
> splits and the parallelism of the work. The class directly opens an HBase 
> connection, which is not closed after use. Independently running MR 
> jobs are not affected; however, jobs that run through Phoenix-Spark 
> can leak connections if this is left unclosed (since those jobs run as a 
> part of the same JVM). 
> Apart from this, the connection should be instantiated with 
> {{HBaseFactoryProvider.getHConnectionFactory()}} instead of the default one. 
> This is useful if a separate client is trying to run jobs and wants to 
> provide a custom implementation of {{HConnection}}. 
> [~jmahonin] Any ideas?
> [~jamestaylor] [~vincentpoon] Any concerns around this?





[jira] [Created] (PHOENIX-4489) HBase Connection leak in Phoenix MR Jobs

2017-12-22 Thread Karan Mehta (JIRA)
Karan Mehta created PHOENIX-4489:


 Summary: HBase Connection leak in Phoenix MR Jobs
 Key: PHOENIX-4489
 URL: https://issues.apache.org/jira/browse/PHOENIX-4489
 Project: Phoenix
  Issue Type: Bug
Reporter: Karan Mehta


Phoenix MR jobs use a custom class {{PhoenixInputFormat}} to determine the 
splits and the parallelism of the work. The class directly opens an HBase 
connection, which is not closed after use. Independently running MR jobs 
are not affected; however, jobs that run through Phoenix-Spark can 
leak connections if this is left unclosed (since those jobs run as a part of 
the same JVM). 

Apart from this, the connection should be instantiated with 
{{HBaseFactoryProvider.getHConnectionFactory()}} instead of the default one. This 
is useful if a separate client is trying to run jobs and wants to provide a 
custom implementation of {{HConnection}}. 

[~jmahonin] Any ideas?
[~jamestaylor] [~vincentpoon] Any concerns around this?
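The fix pattern amounts to ensuring the connection opened for split calculation is always closed, e.g. with try-with-resources. A minimal, self-contained sketch of that pattern follows; {{FakeConnection}} and every name here are illustrative stand-ins, not Phoenix's actual code:

```java
// Sketch of the leak and the fix. FakeConnection stands in for HBase's
// Connection; OPEN counts currently open connections so the leak is visible.
import java.util.concurrent.atomic.AtomicInteger;

public class SplitLeakDemo {
    static final AtomicInteger OPEN = new AtomicInteger();

    static class FakeConnection implements AutoCloseable {
        FakeConnection() { OPEN.incrementAndGet(); }
        int computeSplits() { return 4; }          // stand-in for split calculation
        @Override public void close() { OPEN.decrementAndGet(); }
    }

    // Leaky version: the connection is never closed (what the issue describes).
    static int leakySplits() {
        FakeConnection conn = new FakeConnection();
        return conn.computeSplits();               // conn leaks here
    }

    // Fixed version: try-with-resources closes the connection even on exception.
    static int safeSplits() {
        try (FakeConnection conn = new FakeConnection()) {
            return conn.computeSplits();
        }
    }

    public static void main(String[] args) {
        leakySplits();
        System.out.println("open after leaky call: " + OPEN.get());  // 1
        OPEN.set(0);
        safeSplits();
        System.out.println("open after safe call: " + OPEN.get());   // 0
    }
}
```

A leak like this only matters in a long-lived JVM (the Phoenix-Spark case); a standalone MR job's JVM exits and releases everything, which matches the observation above.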





[jira] [Commented] (PHOENIX-4448) Remove complete dependency from Hadoop Metrics

2017-12-22 Thread Karan Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16302151#comment-16302151
 ] 

Karan Mehta commented on PHOENIX-4448:
--

bq. I'm even more confused

Sorry about that. Here is my understanding; correct me wherever required. 
Metrics2 is the framework for transporting metrics (via JMX or custom sinks) as 
well as for creating/defining new metrics. We used this framework for transferring 
the HBase or Phoenix Apache HTrace traces from source to sink (the sink is where 
the traces are written to a Phoenix table). PHOENIX-3752 removed that source/sink 
logic and added a custom implementation. The currently attached patch just 
removes all the classes that are not used anywhere, and can be treated as 
a cleanup patch.

Now, could you clarify the scope of {{Hadoop Metrics2}}? Is using 
{{MetricHistogram}}, an interface from this library, a direct dependency? All 
the other classes that I have deleted in this patch used Metrics. The class 
{{MetricsIndexerSourceImpl}} currently creates histograms for index metrics. 
From what I understand, they are not dependent on HBase metrics. Are you 
suggesting that we should change them to use the HBase metrics API instead? 
FYI, at this point I am not sure how they are handled or exposed.

[~aertoria], do you have any insight into this?

> Remove complete dependency from Hadoop Metrics
> --
>
> Key: PHOENIX-4448
> URL: https://issues.apache.org/jira/browse/PHOENIX-4448
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Karan Mehta
>Assignee: Karan Mehta
> Attachments: PHOENIX-4448.001.patch
>
>
> PHOENIX-3757 removed the usage of the Hadoop Metrics API and the sink. However, 
> there are still some classes lying in place that are probably not used 
> anywhere. This JIRA tracks the cleanup of those classes.





[jira] [Commented] (PHOENIX-4487) Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13

2017-12-22 Thread Karan Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16302148#comment-16302148
 ] 

Karan Mehta commented on PHOENIX-4487:
--

I ran into this issue when one of the older clients connected to a fresh 
cluster and then a newer client tried connecting to the same cluster. However, 
as discussed offline with [~twdsi...@gmail.com], we decided that since we 
_officially_ support only 2 client backward versions, this should be fine. 
However, it is always good to fix things and make them as stable as 
possible. There is also a plausible race condition addressed below, which was 
also one of the reasons we decided not to create the SYSMUTEX table and instead 
throw the exception.

{code}
// Lines 3205 - 3211
if (!tableNames.contains(PhoenixDatabaseMetaData.SYSTEM_MUTEX_HBASE_TABLE_NAME)) {
    TableName mutexName = SchemaUtil.getPhysicalTableName(PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME, props);
    if (PhoenixDatabaseMetaData.SYSTEM_MUTEX_HBASE_TABLE_NAME.equals(mutexName) || !tableNames.contains(mutexName)) {
        createSysMutexTable(admin, props);
    }
}
{code}

The first line checks if the SYSTEM.MUTEX table exists or not. This can be in 2 
cases: either the table doesn't exist at all or it's being migrated to the SYSTEM 
namespace. There is no clear way of differentiating them, and hence there arises 
a plausible race condition here. If it's at all required, we can directly call 
the {{createSysMutexTable}} method, since it will check for the table according to 
client properties and create one if none of them exists.
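The race above is the classic check-then-act problem: another client can create the table between the existence check and the create call. An unconditional, idempotent create (treating "already exists" as success, which is what calling a guarded create method amounts to) avoids it. A self-contained sketch of the difference, with a {{ConcurrentHashMap}} standing in for the cluster's table catalog; all names are illustrative:

```java
// Sketch: check-then-create is racy; putIfAbsent-style idempotent create is not.
import java.util.concurrent.ConcurrentHashMap;

public class IdempotentCreateDemo {
    static final ConcurrentHashMap<String, Boolean> TABLES = new ConcurrentHashMap<>();

    // Racy: another client may create the table between the check and the put.
    static boolean checkThenCreate(String name) {
        if (!TABLES.containsKey(name)) {   // window for a race opens here
            TABLES.put(name, true);
            return true;
        }
        return false;
    }

    // Idempotent: create unconditionally and treat "already exists" as success,
    // analogous to always calling the create method and swallowing a
    // table-exists error.
    static boolean createIfAbsent(String name) {
        return TABLES.putIfAbsent(name, true) == null;
    }

    public static void main(String[] args) {
        System.out.println(createIfAbsent("SYSTEM.MUTEX")); // true: created
        System.out.println(createIfAbsent("SYSTEM.MUTEX")); // false: already there, no error
    }
}
```

Against a real cluster the same effect comes from attempting the create and handling the "table exists" outcome, rather than checking first.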

{code}
if (currentServerSideTableTimeStamp <= MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_10_0 &&
        !SchemaUtil.isNamespaceMappingEnabled(PTableType.SYSTEM,
                ConnectionQueryServicesImpl.this.getProps())) {
{code}

I don't see the need for checking whether namespace mapping is enabled or not. 
The SYSMUTEX table needs to be created for any client jumping from 4.7 to 
4.13 (since it was introduced in version 4.10). Namespaces can be either 
enabled or disabled. SYSMUTEX will be used once again when the client wants to 
migrate the SYSTEM tables to the SYSTEM namespace.

Please update the comments on this change to address the concern mentioned. 
[~jamestaylor] Also, how do we test this? Any UT or directly tried with a 
cluster? 

> Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13
> -
>
> Key: PHOENIX-4487
> URL: https://issues.apache.org/jira/browse/PHOENIX-4487
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 4.13.1, 4.13.2-cdh5.11.2
>Reporter: Flavio Pompermaier
>Assignee: James Taylor
> Attachments: PHOENIX-4487.patch
>
>
> Upgrading from the official Cloudera parcel equipped with Phoenix 4.7 to the 
> latest unofficial parcel (4.13 on CDH 5.11.2), I hit the same error as 
> https://issues.apache.org/jira/browse/PHOENIX-4293 (apart from the fact that 
> the starting version was 4.7...).
> Creating the SYSTEM.MUTEX table fixed the problem.





[jira] [Commented] (PHOENIX-4488) Cache config parameters for MetaDataEndPointImpl during initialization

2017-12-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16302016#comment-16302016
 ] 

Hadoop QA commented on PHOENIX-4488:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12903454/PHOENIX-4488.patch
  against master branch at commit 34693843abe4490b54fbd30512bf7d98d0f59c0d.
  ATTACHMENT ID: 12903454

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1688//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1688//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1688//console

This message is automatically generated.

> Cache config parameters for MetaDataEndPointImpl during initialization
> --
>
> Key: PHOENIX-4488
> URL: https://issues.apache.org/jira/browse/PHOENIX-4488
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.14.0, 4.13.2
>
> Attachments: PHOENIX-4488.patch
>
>
> For example, see this code (which is called often):
> {code}
> boolean blockWriteRebuildIndex = 
> env.getConfiguration().getBoolean(QueryServices.INDEX_FAILURE_BLOCK_WRITE, 
> QueryServicesOptions.DEFAULT_INDEX_FAILURE_BLOCK_WRITE);
> {code}





[jira] [Commented] (PHOENIX-4466) java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data

2017-12-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301978#comment-16301978
 ] 

Hudson commented on PHOENIX-4466:
-

SUCCESS: Integrated in Jenkins build Phoenix-master #1901 (See 
[https://builds.apache.org/job/Phoenix-master/1901/])
PHOENIX-4466 Relocate Avatica and hadoop-common in thin-client jar (elserj: rev 
34693843abe4490b54fbd30512bf7d98d0f59c0d)
* (edit) phoenix-queryserver-client/pom.xml


> java.lang.RuntimeException: response code 500 - Executing a spark job to 
> connect to phoenix query server and load data
> --
>
> Key: PHOENIX-4466
> URL: https://issues.apache.org/jira/browse/PHOENIX-4466
> Project: Phoenix
>  Issue Type: Bug
> Environment: HDP-2.6.3
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Minor
> Fix For: 5.0.0, 4.14.0
>
> Attachments: PHOENIX-4466-v2.patch, PHOENIX-4466.patch
>
>
> Steps to reproduce are as follows:
> 1. Start spark shell with 
> {code}
> spark-shell --jars /usr/hdp/current/phoenix-client/phoenix-thin-client.jar 
> {code}
> 2. Ran the following to load data 
> {code}
> scala> val query = 
> sqlContext.read.format("jdbc").option("driver","org.apache.phoenix.queryserver.client.Driver").option("url","jdbc:phoenix:thin:url=http://<query server 
> hostname>:8765;serialization=PROTOBUF").option("dbtable","").load 
> {code}
> This failed with the following exception 
> {code:java}
> java.sql.SQLException: While closing connection
>   at org.apache.calcite.avatica.Helper.createException(Helper.java:39)
>   at 
> org.apache.calcite.avatica.AvaticaConnection.close(AvaticaConnection.java:156)
>   at 
> org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:153)
>   at 
> org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:91)
>   at 
> org.apache.spark.sql.execution.datasources.jdbc.DefaultSource.createRelation(DefaultSource.scala:57)
>   at 
> org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
>   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:25)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:30)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:32)
>   at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:34)
>   at $iwC$$iwC$$iwC$$iwC.<init>(<console>:36)
>   at $iwC$$iwC$$iwC.<init>(<console>:38)
>   at $iwC$$iwC.<init>(<console>:40)
>   at $iwC.<init>(<console>:42)
>   at <init>(<console>:44)
>   at .<init>(<console>:48)
>   at .<clinit>(<console>)
>   at .<init>(<console>:7)
>   at .<clinit>(<console>)
>   at $print()
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
>   at 
> org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
>   at 
> org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
>   at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
>   at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
>   at 
> org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
>   at 
> org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
>   at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
>   at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
>   at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
>   at 
> org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
>   at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
>   at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>   at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>   at 
> scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
>   at 
> org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
>   at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
>   at org.apache.spark.repl.Main$.main(Main.scala:31)
>   at org.apache.spark.repl.Main.main(Main.scala)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> 

[jira] [Commented] (PHOENIX-4448) Remove complete dependency from Hadoop Metrics

2017-12-22 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301977#comment-16301977
 ] 

Josh Elser commented on PHOENIX-4448:
-

I'm even more confused, [~karanmehta93].

bq. \[these two classes\] are still using features offered by metrics2 library. 
How should we handle this?

This was your original question. There are two options:

# The classes are removed
# They're implemented in a manner that doesn't tie them to the Hadoop Metrics2 
API

My suggestion to look at the hbase-metrics-api, 
https://github.com/apache/hbase/tree/branch-2/hbase-metrics-api, is the only 
alternative I'm aware of to metrics collection in HBase that isn't tied to 
Hadoop Metrics2.

So, either we care about these metrics and want to preserve them, or we don't :)

> Remove complete dependency from Hadoop Metrics
> --
>
> Key: PHOENIX-4448
> URL: https://issues.apache.org/jira/browse/PHOENIX-4448
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Karan Mehta
>Assignee: Karan Mehta
> Attachments: PHOENIX-4448.001.patch
>
>
> PHOENIX-3757 removed the usage of the Hadoop Metrics API and the sink. However, 
> there are still some classes lying in place that are probably not used 
> anywhere. This JIRA tracks the cleanup of those classes.





[jira] [Commented] (PHOENIX-4488) Cache config parameters for MetaDataEndPointImpl during initialization

2017-12-22 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301956#comment-16301956
 ] 

Ethan Wang commented on PHOENIX-4488:
-

I see. Thanks!

> Cache config parameters for MetaDataEndPointImpl during initialization
> --
>
> Key: PHOENIX-4488
> URL: https://issues.apache.org/jira/browse/PHOENIX-4488
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.14.0, 4.13.2
>
> Attachments: PHOENIX-4488.patch
>
>
> For example, see this code (which is called often):
> {code}
> boolean blockWriteRebuildIndex = 
> env.getConfiguration().getBoolean(QueryServices.INDEX_FAILURE_BLOCK_WRITE, 
> QueryServicesOptions.DEFAULT_INDEX_FAILURE_BLOCK_WRITE);
> {code}





[jira] [Commented] (PHOENIX-4487) Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13

2017-12-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301957#comment-16301957
 ] 

Hadoop QA commented on PHOENIX-4487:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12903451/PHOENIX-4487.patch
  against master branch at commit 34693843abe4490b54fbd30512bf7d98d0f59c0d.
  ATTACHMENT ID: 12903451

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+if (currentServerSideTableTimeStamp <= 
MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_10_0 &&
+if (acquiredMutexLock = 
acquireUpgradeMutex(currentServerSideTableTimeStamp, mutexRowKey)) {
+TableName mutexName = 
SchemaUtil.getPhysicalTableName(PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME, 
props);
+if 
(PhoenixDatabaseMetaData.SYSTEM_MUTEX_HBASE_TABLE_NAME.equals(mutexName) || 
!tableNames.contains(mutexName)) {

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1687//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1687//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1687//console

This message is automatically generated.

> Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13
> -
>
> Key: PHOENIX-4487
> URL: https://issues.apache.org/jira/browse/PHOENIX-4487
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 4.13.1, 4.13.2-cdh5.11.2
>Reporter: Flavio Pompermaier
>Assignee: James Taylor
> Attachments: PHOENIX-4487.patch
>
>
> Upgrading from the official Cloudera parcel equipped with Phoenix 4.7 to the 
> latest unofficial parcel (4.13 on CDH 5.11.2), I hit the same error as 
> https://issues.apache.org/jira/browse/PHOENIX-4293 (apart from the fact that 
> the starting version was 4.7...).
> Creating the SYSTEM.MUTEX table fixed the problem.





[jira] [Commented] (PHOENIX-4488) Cache config parameters for MetaDataEndPointImpl during initialization

2017-12-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301945#comment-16301945
 ] 

James Taylor commented on PHOENIX-4488:
---

bq. Is MetaDataEndPointIT the IT that also covers execeededIndexQuota?
The end-to-end test is in CreateTableIT.testCreatingTooManyIndexesIsNotAllowed().
bq. Q2, nit: Patch line 95 (code line 501), trailing whitespaces.
Will fix.

> Cache config parameters for MetaDataEndPointImpl during initialization
> --
>
> Key: PHOENIX-4488
> URL: https://issues.apache.org/jira/browse/PHOENIX-4488
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.14.0, 4.13.2
>
> Attachments: PHOENIX-4488.patch
>
>
> For example, see this code (which is called often):
> {code}
> boolean blockWriteRebuildIndex = 
> env.getConfiguration().getBoolean(QueryServices.INDEX_FAILURE_BLOCK_WRITE, 
> QueryServicesOptions.DEFAULT_INDEX_FAILURE_BLOCK_WRITE);
> {code}





[jira] [Commented] (PHOENIX-4488) Cache config parameters for MetaDataEndPointImpl during initialization

2017-12-22 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301922#comment-16301922
 ] 

Ethan Wang commented on PHOENIX-4488:
-

[~jamestaylor]
Pretty straightforward! LGTM.

Q1, Is MetaDataEndPointIT the IT that also covers execeededIndexQuota?
Q2, nit: Patch line 95 (code line 501), trailing whitespaces.

Thanks

> Cache config parameters for MetaDataEndPointImpl during initialization
> --
>
> Key: PHOENIX-4488
> URL: https://issues.apache.org/jira/browse/PHOENIX-4488
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.14.0, 4.13.2
>
> Attachments: PHOENIX-4488.patch
>
>
> For example, see this code (which is called often):
> {code}
> boolean blockWriteRebuildIndex = 
> env.getConfiguration().getBoolean(QueryServices.INDEX_FAILURE_BLOCK_WRITE, 
> QueryServicesOptions.DEFAULT_INDEX_FAILURE_BLOCK_WRITE);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4488) Cache config parameters for MetaDataEndPointImpl during initialization

2017-12-22 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4488:
--
Attachment: PHOENIX-4488.patch

Please review this trivial patch, [~tdsilva] or [~aertoria]. I removed the test 
as we have an end2end one that provides better coverage.

> Cache config parameters for MetaDataEndPointImpl during initialization
> --
>
> Key: PHOENIX-4488
> URL: https://issues.apache.org/jira/browse/PHOENIX-4488
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.14.0, 4.13.2
>
> Attachments: PHOENIX-4488.patch
>
>
> For example, see this code (which is called often):
> {code}
> boolean blockWriteRebuildIndex = 
> env.getConfiguration().getBoolean(QueryServices.INDEX_FAILURE_BLOCK_WRITE, 
> QueryServicesOptions.DEFAULT_INDEX_FAILURE_BLOCK_WRITE);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4488) Cache config parameters for MetaDataEndPointImpl during initialization

2017-12-22 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4488:
-

 Summary: Cache config parameters for MetaDataEndPointImpl during 
initialization
 Key: PHOENIX-4488
 URL: https://issues.apache.org/jira/browse/PHOENIX-4488
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: James Taylor


For example, see this code (which is called often):
{code}
boolean blockWriteRebuildIndex = 
env.getConfiguration().getBoolean(QueryServices.INDEX_FAILURE_BLOCK_WRITE, 
QueryServicesOptions.DEFAULT_INDEX_FAILURE_BLOCK_WRITE);
{code}
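The fix the title describes amounts to reading the flag once at initialization and reusing the cached value on the hot path. The self-contained sketch below illustrates the pattern; the {{Configuration}} stand-in, class name, and key string are illustrative only, not Phoenix's actual classes or constants.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the PHOENIX-4488 idea: parse a config flag once at construction
// time and reuse the cached value on the hot path, instead of calling
// getBoolean() on every request. The Configuration stand-in and the key
// string below are illustrative, not Phoenix's actual classes or constants.
public class CachedConfigDemo {

    // Minimal stand-in for org.apache.hadoop.conf.Configuration that counts lookups.
    public static class Configuration {
        private final Map<String, String> props = new HashMap<>();
        public int lookups = 0;

        public void set(String key, String value) { props.put(key, value); }

        public boolean getBoolean(String key, boolean defaultValue) {
            lookups++; // each call models a relatively expensive parse
            String value = props.get(key);
            return value == null ? defaultValue : Boolean.parseBoolean(value);
        }
    }

    private final boolean blockWriteRebuildIndex;

    // Mirrors reading the flag once in the coprocessor's initialization.
    public CachedConfigDemo(Configuration conf) {
        this.blockWriteRebuildIndex =
                conf.getBoolean("phoenix.index.failure.block.write", false);
    }

    // Hot-path accessor: no config lookup, just the cached field.
    public boolean isBlockWriteRebuildIndex() {
        return blockWriteRebuildIndex;
    }

    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.set("phoenix.index.failure.block.write", "true");
        CachedConfigDemo endpoint = new CachedConfigDemo(conf);
        for (int i = 0; i < 1000; i++) {
            if (!endpoint.isBlockWriteRebuildIndex()) throw new AssertionError();
        }
        // Only one config parse happened, regardless of call count.
        if (conf.lookups != 1) throw new AssertionError();
        System.out.println("lookups=" + conf.lookups);
    }
}
```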



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4466) java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data

2017-12-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301858#comment-16301858
 ] 

Hadoop QA commented on PHOENIX-4466:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12903435/PHOENIX-4466-v2.patch
  against master branch at commit 412329a7415302831954891285d291055328c28b.
  ATTACHMENT ID: 12903435

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100 characters.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ConcurrentMutationsIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1686//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1686//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1686//console

This message is automatically generated.

> java.lang.RuntimeException: response code 500 - Executing a spark job to 
> connect to phoenix query server and load data
> --
>
> Key: PHOENIX-4466
> URL: https://issues.apache.org/jira/browse/PHOENIX-4466
> Project: Phoenix
>  Issue Type: Bug
> Environment: HDP-2.6.3
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Minor
> Fix For: 5.0.0, 4.14.0
>
> Attachments: PHOENIX-4466-v2.patch, PHOENIX-4466.patch
>
>
> Steps to reproduce are as follows:
> 1. Start spark shell with 
> {code}
> spark-shell --jars /usr/hdp/current/phoenix-client/phoenix-thin-client.jar 
> {code}
> 2. Ran the following to load data 
> {code}
> scala> val query = 
> sqlContext.read.format("jdbc").option("driver","org.apache.phoenix.queryserver.client.Driver").option("url","jdbc:phoenix:thin:url=http://<query server 
> hostname>:8765;serialization=PROTOBUF").option("dbtable","<table name>").load 
> {code}
> This failed with the following exception 
> {code:java}
> java.sql.SQLException: While closing connection
>   at org.apache.calcite.avatica.Helper.createException(Helper.java:39)
>   at 
> org.apache.calcite.avatica.AvaticaConnection.close(AvaticaConnection.java:156)
>   at 
> org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:153)
>   at 
> org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:91)
>   at 
> org.apache.spark.sql.execution.datasources.jdbc.DefaultSource.createRelation(DefaultSource.scala:57)
>   at 
> org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
>   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:25)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:30)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:32)
>   at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:34)
>   at $iwC$$iwC$$iwC$$iwC.<init>(<console>:36)
>   at $iwC$$iwC$$iwC.<init>(<console>:38)
>   at $iwC$$iwC.<init>(<console>:40)
>   at $iwC.<init>(<console>:42)
>   at <init>(<console>:44)
>   at .<init>(<console>:48)
>   at .<clinit>(<console>)
>   at .<init>(<console>:7)
>   at .<clinit>(<console>)
>   at $print(<console>)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
>   at 
> org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
>   at 
> org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
>   at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
>   at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
>   at 
> org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
>   at 
> org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
>   at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
>   at 

[jira] [Updated] (PHOENIX-4466) java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data

2017-12-22 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4466:

Fix Version/s: 4.14.0
   5.0.0

> java.lang.RuntimeException: response code 500 - Executing a spark job to 
> connect to phoenix query server and load data
> --
>
> Key: PHOENIX-4466
> URL: https://issues.apache.org/jira/browse/PHOENIX-4466
> Project: Phoenix
>  Issue Type: Bug
> Environment: HDP-2.6.3
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Minor
> Fix For: 5.0.0, 4.14.0
>
> Attachments: PHOENIX-4466-v2.patch, PHOENIX-4466.patch
>
>
> Steps to reproduce are as follows:
> 1. Start spark shell with 
> {code}
> spark-shell --jars /usr/hdp/current/phoenix-client/phoenix-thin-client.jar 
> {code}
> 2. Ran the following to load data 
> {code}
> scala> val query = 
> sqlContext.read.format("jdbc").option("driver","org.apache.phoenix.queryserver.client.Driver").option("url","jdbc:phoenix:thin:url=http://<query server 
> hostname>:8765;serialization=PROTOBUF").option("dbtable","<table name>").load 
> {code}
> This failed with the following exception 
> {code:java}
> java.sql.SQLException: While closing connection
>   at org.apache.calcite.avatica.Helper.createException(Helper.java:39)
>   at 
> org.apache.calcite.avatica.AvaticaConnection.close(AvaticaConnection.java:156)
>   at 
> org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:153)
>   at 
> org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:91)
>   at 
> org.apache.spark.sql.execution.datasources.jdbc.DefaultSource.createRelation(DefaultSource.scala:57)
>   at 
> org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
>   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:25)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:30)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:32)
>   at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:34)
>   at $iwC$$iwC$$iwC$$iwC.<init>(<console>:36)
>   at $iwC$$iwC$$iwC.<init>(<console>:38)
>   at $iwC$$iwC.<init>(<console>:40)
>   at $iwC.<init>(<console>:42)
>   at <init>(<console>:44)
>   at .<init>(<console>:48)
>   at .<clinit>(<console>)
>   at .<init>(<console>:7)
>   at .<clinit>(<console>)
>   at $print(<console>)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
>   at 
> org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
>   at 
> org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
>   at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
>   at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
>   at 
> org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
>   at 
> org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
>   at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
>   at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
>   at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
>   at 
> org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
>   at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
>   at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>   at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>   at 
> scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
>   at 
> org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
>   at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
>   at org.apache.spark.repl.Main$.main(Main.scala:31)
>   at org.apache.spark.repl.Main.main(Main.scala)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> 

[jira] [Updated] (PHOENIX-4487) Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13

2017-12-22 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4487:
--
Attachment: PHOENIX-4487.patch

Looks like PHOENIX-3757 broke this, [~karanmehta93]. If a client is upgrading 
from an older version, the SYSTEM.MUTEX table will not already exist, so the 
attempt to acquire the mutex will fail. Please review this fix.
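The order of operations the fix implies can be sketched as follows. All names here are illustrative; the real code works against HBaseAdmin and a checkAndPut on the mutex row, which this stand-in does not model.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the PHOENIX-4487 upgrade-path fix: ensure SYSTEM.MUTEX exists
// before trying to acquire the upgrade mutex, so clients coming from
// pre-mutex versions (e.g. 4.7) don't fail. Illustrative stand-in only.
public class MutexUpgradeSketch {

    private final Set<String> tables = new HashSet<>(); // stand-in for the cluster's table list

    public boolean acquireUpgradeMutex() {
        // The fix: create the table first when it is missing (idempotent in
        // the real patch, which swallows TableExistsException).
        if (!tables.contains("SYSTEM.MUTEX")) {
            tables.add("SYSTEM.MUTEX");
        }
        // Mutex-row acquisition elided; report success once the table exists.
        return tables.contains("SYSTEM.MUTEX");
    }

    public static void main(String[] args) {
        // Simulate an upgrade from a 4.7 cluster where SYSTEM.MUTEX never existed.
        MutexUpgradeSketch upgrade = new MutexUpgradeSketch();
        if (!upgrade.acquireUpgradeMutex()) throw new AssertionError("mutex acquisition failed");
        System.out.println("acquired");
    }
}
```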

> Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13
> -
>
> Key: PHOENIX-4487
> URL: https://issues.apache.org/jira/browse/PHOENIX-4487
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 4.13.1, 4.13.2-cdh5.11.2
>Reporter: Flavio Pompermaier
>Assignee: James Taylor
> Attachments: PHOENIX-4487.patch
>
>
> Upgrading from the official Cloudera parcel equipped with Phoenix 4.7 to the 
> latest unofficial parcel (4.13 on CDH 5.11.2), I hit the same error as 
> https://issues.apache.org/jira/browse/PHOENIX-4293 (apart from the fact that 
> the starting version was 4.7...).
> Creating the SYSTEM.MUTEX table manually fixed the problem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (PHOENIX-4487) Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13

2017-12-22 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-4487:
-

Assignee: James Taylor

> Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13
> -
>
> Key: PHOENIX-4487
> URL: https://issues.apache.org/jira/browse/PHOENIX-4487
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 4.13.1, 4.13.2-cdh5.11.2
>Reporter: Flavio Pompermaier
>Assignee: James Taylor
>
> Upgrading from the official Cloudera parcel equipped with Phoenix 4.7 to the 
> latest unofficial parcel (4.13 on CDH 5.11.2), I hit the same error as 
> https://issues.apache.org/jira/browse/PHOENIX-4293 (apart from the fact that 
> the starting version was 4.7...).
> Creating the SYSTEM.MUTEX table manually fixed the problem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4487) Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13

2017-12-22 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4487:
--
Summary: Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13  (was: 
Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13 on Cloudera)

> Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13
> -
>
> Key: PHOENIX-4487
> URL: https://issues.apache.org/jira/browse/PHOENIX-4487
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.1, 4.13.2-cdh5.11.2
>Reporter: Flavio Pompermaier
>
> Upgrading from the official Cloudera parcel equipped with Phoenix 4.7 to the 
> latest unofficial parcel (4.13 on CDH 5.11.2), I hit the same error as 
> https://issues.apache.org/jira/browse/PHOENIX-4293 (apart from the fact that 
> the starting version was 4.7...).
> Creating the SYSTEM.MUTEX table manually fixed the problem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4372) Distribution of Apache Phoenix 4.13 for CDH 5.11.2

2017-12-22 Thread Flavio Pompermaier (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301750#comment-16301750
 ] 

Flavio Pompermaier commented on PHOENIX-4372:
-

Here it is: https://issues.apache.org/jira/browse/PHOENIX-4487




> Distribution of Apache Phoenix 4.13 for CDH 5.11.2
> --
>
> Key: PHOENIX-4372
> URL: https://issues.apache.org/jira/browse/PHOENIX-4372
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.13.0
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Minor
>  Labels: cdh
> Attachments: PHOENIX-4372-v2.patch, PHOENIX-4372-v3.patch, 
> PHOENIX-4372-v4.patch, PHOENIX-4372-v5.patch, PHOENIX-4372-v6.patch, 
> PHOENIX-4372-v7.patch, PHOENIX-4372.patch
>
>
> Changes required on top of branch 4.13-HBase-1.2 for creating a parcel of 
> Apache Phoenix 4.13.0 for CDH 5.11.2 . 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4466) java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data

2017-12-22 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301749#comment-16301749
 ] 

Josh Elser commented on PHOENIX-4466:
-

Thanks, [~brfrn169]! I'll try to pull this down to double-check and then commit.

> java.lang.RuntimeException: response code 500 - Executing a spark job to 
> connect to phoenix query server and load data
> --
>
> Key: PHOENIX-4466
> URL: https://issues.apache.org/jira/browse/PHOENIX-4466
> Project: Phoenix
>  Issue Type: Bug
> Environment: HDP-2.6.3
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Minor
> Attachments: PHOENIX-4466-v2.patch, PHOENIX-4466.patch
>
>
> Steps to reproduce are as follows:
> 1. Start spark shell with 
> {code}
> spark-shell --jars /usr/hdp/current/phoenix-client/phoenix-thin-client.jar 
> {code}
> 2. Ran the following to load data 
> {code}
> scala> val query = 
> sqlContext.read.format("jdbc").option("driver","org.apache.phoenix.queryserver.client.Driver").option("url","jdbc:phoenix:thin:url=http://<query server 
> hostname>:8765;serialization=PROTOBUF").option("dbtable","<table name>").load 
> {code}
> This failed with the following exception 
> {code:java}
> java.sql.SQLException: While closing connection
>   at org.apache.calcite.avatica.Helper.createException(Helper.java:39)
>   at 
> org.apache.calcite.avatica.AvaticaConnection.close(AvaticaConnection.java:156)
>   at 
> org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:153)
>   at 
> org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:91)
>   at 
> org.apache.spark.sql.execution.datasources.jdbc.DefaultSource.createRelation(DefaultSource.scala:57)
>   at 
> org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
>   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:25)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:30)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:32)
>   at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:34)
>   at $iwC$$iwC$$iwC$$iwC.<init>(<console>:36)
>   at $iwC$$iwC$$iwC.<init>(<console>:38)
>   at $iwC$$iwC.<init>(<console>:40)
>   at $iwC.<init>(<console>:42)
>   at <init>(<console>:44)
>   at .<init>(<console>:48)
>   at .<clinit>(<console>)
>   at .<init>(<console>:7)
>   at .<clinit>(<console>)
>   at $print(<console>)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
>   at 
> org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
>   at 
> org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
>   at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
>   at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
>   at 
> org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
>   at 
> org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
>   at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
>   at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
>   at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
>   at 
> org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
>   at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
>   at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>   at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>   at 
> scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
>   at 
> org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
>   at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
>   at org.apache.spark.repl.Main$.main(Main.scala:31)
>   at org.apache.spark.repl.Main.main(Main.scala)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 

[jira] [Created] (PHOENIX-4487) Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13 on Cloudera

2017-12-22 Thread Flavio Pompermaier (JIRA)
Flavio Pompermaier created PHOENIX-4487:
---

 Summary: Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13 on 
Cloudera
 Key: PHOENIX-4487
 URL: https://issues.apache.org/jira/browse/PHOENIX-4487
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.13.1, 4.13.2-cdh5.11.2
Reporter: Flavio Pompermaier


Upgrading from the official Cloudera parcel equipped with Phoenix 4.7 to the 
latest unofficial parcel (4.13 on CDH 5.11.2), I hit the same error as 
https://issues.apache.org/jira/browse/PHOENIX-4293 (apart from the fact that 
the starting version was 4.7...).
Creating the SYSTEM.MUTEX table manually fixed the problem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4372) Distribution of Apache Phoenix 4.13 for CDH 5.11.2

2017-12-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301738#comment-16301738
 ] 

James Taylor commented on PHOENIX-4372:
---

Please file a JIRA for this issue, [~f.pompermaier], and I'll attempt to repro 
and fix.

> Distribution of Apache Phoenix 4.13 for CDH 5.11.2
> --
>
> Key: PHOENIX-4372
> URL: https://issues.apache.org/jira/browse/PHOENIX-4372
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.13.0
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Minor
>  Labels: cdh
> Attachments: PHOENIX-4372-v2.patch, PHOENIX-4372-v3.patch, 
> PHOENIX-4372-v4.patch, PHOENIX-4372-v5.patch, PHOENIX-4372-v6.patch, 
> PHOENIX-4372-v7.patch, PHOENIX-4372.patch
>
>
> Changes required on top of branch 4.13-HBase-1.2 for creating a parcel of 
> Apache Phoenix 4.13.0 for CDH 5.11.2 . 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4372) Distribution of Apache Phoenix 4.13 for CDH 5.11.2

2017-12-22 Thread Flavio Pompermaier (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301714#comment-16301714
 ] 

Flavio Pompermaier commented on PHOENIX-4372:
-

Could you also fix the problem that, when upgrading from 4.7 to 4.13, the
SYSTEM.MUTEX table is not automatically created? It's the only problem I
encountered during the upgrade on CDH.




> Distribution of Apache Phoenix 4.13 for CDH 5.11.2
> --
>
> Key: PHOENIX-4372
> URL: https://issues.apache.org/jira/browse/PHOENIX-4372
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.13.0
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Minor
>  Labels: cdh
> Attachments: PHOENIX-4372-v2.patch, PHOENIX-4372-v3.patch, 
> PHOENIX-4372-v4.patch, PHOENIX-4372-v5.patch, PHOENIX-4372-v6.patch, 
> PHOENIX-4372-v7.patch, PHOENIX-4372.patch
>
>
> Changes required on top of branch 4.13-HBase-1.2 for creating a parcel of 
> Apache Phoenix 4.13.0 for CDH 5.11.2 . 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4466) java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data

2017-12-22 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301697#comment-16301697
 ] 

Toshihiro Suzuki commented on PHOENIX-4466:
---

Thanks [~elserj]. I agree with you. I attached a new patch that adds a 
hadoop-common relocation.

With this patch, spark-shell runs successfully even when userClassPathFirst is 
specified. I think it is better and safer than the previous patch, as you 
mentioned.
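For readers following along, a relocation of that sort would live in the thin client's maven-shade-plugin configuration. The fragment below is a hedged sketch only; the shaded package prefix and the exact pattern list are assumptions, not the contents of the actual patch.

```xml
<!-- Sketch: relocate hadoop-common classes inside the shaded thin-client jar
     so they cannot conflict with Spark's own Hadoop classes on the classpath.
     Pattern and shadedPattern are illustrative. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <pattern>org.apache.hadoop</pattern>
        <shadedPattern>org.apache.phoenix.shaded.org.apache.hadoop</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```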

> java.lang.RuntimeException: response code 500 - Executing a spark job to 
> connect to phoenix query server and load data
> --
>
> Key: PHOENIX-4466
> URL: https://issues.apache.org/jira/browse/PHOENIX-4466
> Project: Phoenix
>  Issue Type: Bug
> Environment: HDP-2.6.3
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Minor
> Attachments: PHOENIX-4466-v2.patch, PHOENIX-4466.patch
>
>
> Steps to reproduce are as follows:
> 1. Start spark shell with 
> {code}
> spark-shell --jars /usr/hdp/current/phoenix-client/phoenix-thin-client.jar 
> {code}
> 2. Ran the following to load data 
> {code}
> scala> val query = 
> sqlContext.read.format("jdbc").option("driver","org.apache.phoenix.queryserver.client.Driver").option("url","jdbc:phoenix:thin:url=http://<query server 
> hostname>:8765;serialization=PROTOBUF").option("dbtable","<table name>").load 
> {code}
> This failed with the following exception 
> {code:java}
> java.sql.SQLException: While closing connection
>   at org.apache.calcite.avatica.Helper.createException(Helper.java:39)
>   at 
> org.apache.calcite.avatica.AvaticaConnection.close(AvaticaConnection.java:156)
>   at 
> org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:153)
>   at 
> org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:91)
>   at 
> org.apache.spark.sql.execution.datasources.jdbc.DefaultSource.createRelation(DefaultSource.scala:57)
>   at 
> org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
>   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:25)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:30)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:32)
>   at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:34)
>   at $iwC$$iwC$$iwC$$iwC.<init>(<console>:36)
>   at $iwC$$iwC$$iwC.<init>(<console>:38)
>   at $iwC$$iwC.<init>(<console>:40)
>   at $iwC.<init>(<console>:42)
>   at <init>(<console>:44)
>   at .<init>(<console>:48)
>   at .<clinit>(<console>)
>   at .<init>(<console>:7)
>   at .<clinit>(<console>)
>   at $print(<console>)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
>   at 
> org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
>   at 
> org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
>   at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
>   at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
>   at 
> org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
>   at 
> org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
>   at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
>   at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
>   at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
>   at 
> org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
>   at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
>   at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>   at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>   at 
> scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
>   at 
> org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
>   at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
>   at org.apache.spark.repl.Main$.main(Main.scala:31)
>   at org.apache.spark.repl.Main.main(Main.scala)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> 

Re: Questions regarding interacting with PQS using C# and Protobufs

2017-12-22 Thread Josh Elser

+1 I've used that library with success in the past.

You may also find the list of Requests/Responses helpful: 
https://calcite.apache.org/avatica/docs/protobuf_reference.html. There 
is a single HTTP endpoint which these are submitted to. Looking at a 
tcpdump of the thin JDBC driver or the source code for the Avatica 
driver may also be helpful in understanding the lifecycle.
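Concretely, every request body POSTed to that endpoint is one Avatica WireMessage envelope wrapping the serialized inner request. The excerpt below paraphrases the envelope from Avatica's common.proto; treat it as a sketch to orient a .NET implementation, not the authoritative schema (always check the reference linked above).

```proto
// Paraphrased from Avatica's common.proto: the envelope for every
// request/response exchanged with the Phoenix Query Server.
message WireMessage {
  string name = 1;           // fully-qualified name of the wrapped message type
  bytes wrapped_message = 2; // serialized inner request/response protobuf
}
```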


On 12/21/17 5:12 PM, 김영우 (YoungWoo Kim) wrote:

https://github.com/Azure/hdinsight-phoenix-sharp

Might be a good example for you.

- Youngwoo

On Friday, December 22, 2017 at 7:02 AM, Chinmay Kulkarni wrote:


Hi all,

I am trying to create a simple .net client to query data in HBase via
Phoenix using the Phoenix Query Server and am sort of struggling to find
documentation or examples for doing the same.

My understanding is that I can do this by sending POST requests to PQS in
which I send data using the protobuf format. Is this correct? Apache
Calcite's documentation also mentions using WireMessage APIs to achieve the
same. Can you please point me towards some resources to help me use
WireMessage in .net?

Thanks,
Chinmay





[jira] [Updated] (PHOENIX-4466) java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data

2017-12-22 Thread Toshihiro Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-4466:
--
Attachment: PHOENIX-4466-v2.patch

> java.lang.RuntimeException: response code 500 - Executing a spark job to 
> connect to phoenix query server and load data
> --
>
> Key: PHOENIX-4466
> URL: https://issues.apache.org/jira/browse/PHOENIX-4466
> Project: Phoenix
>  Issue Type: Bug
> Environment: HDP-2.6.3
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Minor
> Attachments: PHOENIX-4466-v2.patch, PHOENIX-4466.patch
>
>
> Steps to reproduce are as follows:
> 1. Start spark shell with 
> {code}
> spark-shell --jars /usr/hdp/current/phoenix-client/phoenix-thin-client.jar 
> {code}
> 2. Ran the following to load data 
> {code}
> scala> val query = 
> sqlContext.read.format("jdbc").option("driver","org.apache.phoenix.queryserver.client.Driver").option("url","jdbc:phoenix:thin:url=http://<query server 
> hostname>:8765;serialization=PROTOBUF").option("dbtable","<table name>").load 
> {code}
> This failed with the following exception 
> {code:java}
> java.sql.SQLException: While closing connection
>   at org.apache.calcite.avatica.Helper.createException(Helper.java:39)
>   at 
> org.apache.calcite.avatica.AvaticaConnection.close(AvaticaConnection.java:156)
>   at 
> org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:153)
>   at 
> org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:91)
>   at 
> org.apache.spark.sql.execution.datasources.jdbc.DefaultSource.createRelation(DefaultSource.scala:57)
>   at 
> org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
>   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:25)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:30)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:32)
>   at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:34)
>   at $iwC$$iwC$$iwC$$iwC.<init>(<console>:36)
>   at $iwC$$iwC$$iwC.<init>(<console>:38)
>   at $iwC$$iwC.<init>(<console>:40)
>   at $iwC.<init>(<console>:42)
>   at <init>(<console>:44)
>   at .<init>(<console>:48)
>   at .<clinit>(<console>)
>   at .<init>(<console>:7)
>   at .<clinit>(<console>)
>   at $print(<console>)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
>   at 
> org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
>   at 
> org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
>   at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
>   at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
>   at 
> org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
>   at 
> org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
>   at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
>   at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
>   at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
>   at 
> org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
>   at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
>   at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>   at 
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>   at 
> scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
>   at 
> org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
>   at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
>   at org.apache.spark.repl.Main$.main(Main.scala:31)
>   at org.apache.spark.repl.Main.main(Main.scala)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> 

[jira] [Commented] (PHOENIX-4481) Some IT tests are failing with wrong bytes count after updating statistics

2017-12-22 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301663#comment-16301663
 ] 

Ankit Singhal commented on PHOENIX-4481:


+1, [~rajeshbabu]

> Some IT tests are failing with wrong bytes count after updating statistics 
> ---
>
> Key: PHOENIX-4481
> URL: https://issues.apache.org/jira/browse/PHOENIX-4481
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4481.patch
>
>
> {noformat}
> [ERROR] testRowCountAndByteCounts[mutable = false, transactional = false, 
> isUserTableNamespaceMapped = false, columnEncoded = 
> true](org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT) 
>  Time elapsed: 2.431 s  <<< FAILURE!
> java.lang.AssertionError: expected:<48> but was:<52>
> [ERROR] testSomeUpdateEmptyStats[mutable = false, transactional = false, 
> isUserTableNamespaceMapped = false, columnEncoded = 
> true](org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT) 
>  Time elapsed: 2.31 s  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<...IENT 4-CHUNK 1 ROWS [38] BYTES 
> PARALLEL 3-WA...> but was:<...IENT 4-CHUNK 1 ROWS [42] BYTES PARALLEL 3-WA...>
> [ERROR] testWithMultiCF[mutable = false, transactional = false, 
> isUserTableNamespaceMapped = false, columnEncoded = 
> true](org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT) 
>  Time elapsed: 2.309 s  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<...T 26-CHUNK 25 ROWS 1[3902] BYTES 
> PARALLEL 1-WA...> but was:<...T 26-CHUNK 25 ROWS 1[4098] BYTES PARALLEL 
> 1-WA...>
> [ERROR] testRowCountAndByteCounts[mutable = false, transactional = false, 
> isUserTableNamespaceMapped = true, columnEncoded = 
> true](org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT) 
>  Time elapsed: 2.499 s  <<< FAILURE!
> java.lang.AssertionError: expected:<240> but was:<260>
> [ERROR] testSomeUpdateEmptyStats[mutable = false, transactional = false, 
> isUserTableNamespaceMapped = true, columnEncoded = 
> true](org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT) 
>  Time elapsed: 2.507 s  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<...IENT 4-CHUNK 1 ROWS [38] BYTES 
> PARALLEL 3-WA...> but was:<...IENT 4-CHUNK 1 ROWS [42] BYTES PARALLEL 3-WA...>
> [ERROR] testWithMultiCF[mutable = false, transactional = false, 
> isUserTableNamespaceMapped = true, columnEncoded = 
> true](org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT) 
>  Time elapsed: 2.54 s  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<...T 26-CHUNK 25 ROWS 1[3902] BYTES 
> PARALLEL 1-WA...> but was:<...T 26-CHUNK 25 ROWS 1[4098] BYTES PARALLEL 
> 1-WA...>
> [ERROR] Tests run: 26, Failures: 6, Errors: 0, Skipped: 4, Time elapsed: 
> 95.088 s <<< FAILURE! - in 
> org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
> [ERROR] testRowCountAndByteCounts[mutable = false, transactional = true, 
> isUserTableNamespaceMapped = false, columnEncoded = 
> true](org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT)  
> Time elapsed: 2.531 s  <<< FAILURE!
> java.lang.AssertionError: expected:<144> but was:<156>
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4372) Distribution of Apache Phoenix 4.13 for CDH 5.11.2

2017-12-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301601#comment-16301601
 ] 

James Taylor commented on PHOENIX-4372:
---

Let’s do the 4.13.2 release first, and then we can catch the cdh branch up with 1.2. 
An important one to include in the cdh release is PHOENIX-4382, which we can’t 
ship in a patch release but can in the cdh release since it’s our first one. 
That’ll hopefully get committed today, and I can create the 4.13 cdh branch 
afterwards. Sound ok?

> Distribution of Apache Phoenix 4.13 for CDH 5.11.2
> --
>
> Key: PHOENIX-4372
> URL: https://issues.apache.org/jira/browse/PHOENIX-4372
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.13.0
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Minor
>  Labels: cdh
> Attachments: PHOENIX-4372-v2.patch, PHOENIX-4372-v3.patch, 
> PHOENIX-4372-v4.patch, PHOENIX-4372-v5.patch, PHOENIX-4372-v6.patch, 
> PHOENIX-4372-v7.patch, PHOENIX-4372.patch
>
>
> Changes required on top of branch 4.13-HBase-1.2 for creating a parcel of 
> Apache Phoenix 4.13.0 for CDH 5.11.2 . 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (PHOENIX-4481) Some IT tests are failing with wrong bytes count after updating statistics

2017-12-22 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved PHOENIX-4481.
--
Resolution: Fixed

> Some IT tests are failing with wrong bytes count after updating statistics 
> ---
>
> Key: PHOENIX-4481
> URL: https://issues.apache.org/jira/browse/PHOENIX-4481
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4481.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4481) Some IT tests are failing with wrong bytes count after updating statistics

2017-12-22 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-4481:
-
Attachment: PHOENIX-4481.patch

> Some IT tests are failing with wrong bytes count after updating statistics 
> ---
>
> Key: PHOENIX-4481
> URL: https://issues.apache.org/jira/browse/PHOENIX-4481
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4481.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4481) Some IT tests are failing with wrong bytes count after updating statistics

2017-12-22 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-4481:
-
Attachment: (was: PHOENIX-4481.patch)

> Some IT tests are failing with wrong bytes count after updating statistics 
> ---
>
> Key: PHOENIX-4481
> URL: https://issues.apache.org/jira/browse/PHOENIX-4481
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4481.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4481) Some IT tests are failing with wrong bytes count after updating statistics

2017-12-22 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301504#comment-16301504
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-4481:
--

CellUtil.estimatedSerializedSizeOf adds an extra SIZEOF_INT for the length prefix 
of the serialized cell, which is why we are seeing the mismatches in the 
guidepost bytes counts.
{noformat}
  public static int estimatedSerializedSizeOf(final Cell cell) {
if (cell instanceof ExtendedCell) {
  return ((ExtendedCell) cell).getSerializedSize(true) + Bytes.SIZEOF_INT;
}
{noformat}
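To illustrate the arithmetic in plain Java (a minimal sketch, not Phoenix code; the class and method names below are hypothetical, with SIZEOF_INT standing in for Bytes.SIZEOF_INT = 4):

```java
// Sketch of the overshoot: estimatedSerializedSizeOf returns the serialized
// cell size plus a 4-byte int length prefix, so every cell counted with it
// inflates the guidepost byte count by SIZEOF_INT.
public class GuidepostSizeSketch {
    static final int SIZEOF_INT = 4; // stands in for Bytes.SIZEOF_INT

    // Models what estimatedSerializedSizeOf returns for a cell of this size.
    static int estimatedSerializedSizeOf(int serializedCellSize) {
        return serializedCellSize + SIZEOF_INT;
    }

    public static void main(String[] args) {
        // A single 48-byte cell is recorded as 52 bytes, matching the
        // "expected:<48> but was:<52>" failure above.
        int actual = 48;
        int recorded = estimatedSerializedSizeOf(actual);
        System.out.println(recorded);              // 52
        System.out.println(recorded - SIZEOF_INT); // 48: subtract the prefix
    }
}
```

This also explains why every failure above differs by a multiple of 4 (one SIZEOF_INT per cell), e.g. 260 - 240 = 20 bytes over 5 cells.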

> Some IT tests are failing with wrong bytes count after updating statistics 
> ---
>
> Key: PHOENIX-4481
> URL: https://issues.apache.org/jira/browse/PHOENIX-4481
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4481.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (PHOENIX-4481) Some IT tests are failing with wrong bytes count after updating statistics

2017-12-22 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301496#comment-16301496
 ] 

Rajeshbabu Chintaguntla edited comment on PHOENIX-4481 at 12/22/17 2:24 PM:


Here is the patch that gets the actual length of the cell. [~an...@apache.org] 
Please review.


was (Author: rajeshbabu):
Here is the patch gets the actual size. [~an...@apache.org] Please review.

> Some IT tests are failing with wrong bytes count after updating statistics 
> ---
>
> Key: PHOENIX-4481
> URL: https://issues.apache.org/jira/browse/PHOENIX-4481
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4481.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4481) Some IT tests are failing with wrong bytes count after updating statistics

2017-12-22 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-4481:
-
Attachment: PHOENIX-4481.patch

Here is the patch that gets the actual size. [~an...@apache.org] Please review.

> Some IT tests are failing with wrong bytes count after updating statistics 
> ---
>
> Key: PHOENIX-4481
> URL: https://issues.apache.org/jira/browse/PHOENIX-4481
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4481.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4372) Distribution of Apache Phoenix 4.13 for CDH 5.11.2

2017-12-22 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301307#comment-16301307
 ] 

Pedro Boado commented on PHOENIX-4372:
--

[~jamestaylor] any chance to look at PHOENIX-4464 ?

> Distribution of Apache Phoenix 4.13 for CDH 5.11.2
> --
>
> Key: PHOENIX-4372
> URL: https://issues.apache.org/jira/browse/PHOENIX-4372
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.13.0
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Minor
>  Labels: cdh
> Attachments: PHOENIX-4372-v2.patch, PHOENIX-4372-v3.patch, 
> PHOENIX-4372-v4.patch, PHOENIX-4372-v5.patch, PHOENIX-4372-v6.patch, 
> PHOENIX-4372-v7.patch, PHOENIX-4372.patch
>
>
> Changes required on top of branch 4.13-HBase-1.2 for creating a parcel of 
> Apache Phoenix 4.13.0 for CDH 5.11.2 . 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)