[jira] [Created] (PHOENIX-3858) Index maintenance not required for local indexes of table with immutable rows

2017-05-16 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created PHOENIX-3858:


 Summary: Index maintenance not required for local indexes of table 
with immutable rows
 Key: PHOENIX-3858
 URL: https://issues.apache.org/jira/browse/PHOENIX-3858
 Project: Phoenix
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 4.11.0


While preparing index mutations we scan the data region for index 
maintenance, which is not required in the case of immutable rows. This optimisation 
is very helpful for local indexes, but it was removed in the recent changes.
FYI [~jamestaylor] [~mujtabachohan] Found this while checking PHOENIX-3853. 
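For context, the optimisation being restored can be sketched as follows: with immutable rows, the index row can be derived entirely from the incoming data mutation, so no scan of the data region is needed to look up prior column values. This is a simplified, self-contained illustration; the class, method, and key layout here are hypothetical stand-ins, not Phoenix APIs:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: deriving an index row from a data mutation alone.
public class ImmutableIndexSketch {
    // For immutable rows the new index row is a pure function of the upsert:
    // no read of the existing data row (no data-region scan) is required.
    static Map<String, String> indexRowFor(Map<String, String> dataMutation, String indexedCol) {
        Map<String, String> indexRow = new HashMap<>();
        // Index row key = indexed column value + data row key (simplified layout).
        indexRow.put("rowkey", dataMutation.get(indexedCol) + "|" + dataMutation.get("rowkey"));
        return indexRow;
    }

    public static void main(String[] args) {
        Map<String, String> mutation = new HashMap<>();
        mutation.put("rowkey", "pk1");
        mutation.put("INDEXED_COL", "42");
        // Everything comes from the mutation itself.
        System.out.println(indexRowFor(mutation, "INDEXED_COL").get("rowkey")); // prints 42|pk1
    }
}
```

For mutable rows this shortcut is unsound, since a changed indexed value requires reading the old row to delete the stale index entry.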



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3823) Force cache update on MetaDataEntityNotFoundException

2017-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013473#comment-16013473
 ] 

Hadoop QA commented on PHOENIX-3823:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12868455/PHOENIX-3823.v3.patch
  against master branch at commit 1666e932d157be732946e02b474b6c342199bc0f.
  ATTACHMENT ID: 12868455

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
47 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+assertTrue(e.getMessage(), e.getMessage().contains("ERROR 
504 (42703): Undefined column. columnName="+dataTableFullName+".COL5"));
+String createQry = "create table "+tableName+" (k VARCHAR PRIMARY KEY, 
v1 VARCHAR, v2 VARCHAR)"
+"CREATE VIEW MY_VIEW (v43 VARCHAR) AS SELECT * FROM 
"+tableName+" WHERE v1 = 'value1'";
+String schemaNameStr = 
dataTable.getSchemaName()==null?null:dataTable.getSchemaName().getString();
+String tableNameStr = 
dataTable.getTableName()==null?null:dataTable.getTableName().getString();
+throw new ColumnNotFoundException(schemaNameStr, 
tableNameStr,null, WildcardParseNode.INSTANCE.toString());
+String schemaNameStr = 
table.getSchemaName()==null?null:table.getSchemaName().getString();
+String tableNameStr = 
table.getTableName()==null?null:table.getTableName().getString();
+throw new ColumnNotFoundException(schemaNameStr, tableNameStr, 
null, ref.getColumn().getName().getString());
+return new ColumnFamilyNotFoundException(info.getSchemaName(), 
info.getTableName(), info.getFamilyName());

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/873//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/873//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/873//console

This message is automatically generated.

> Force cache update on MetaDataEntityNotFoundException 
> --
>
> Key: PHOENIX-3823
> URL: https://issues.apache.org/jira/browse/PHOENIX-3823
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.10.0
>Reporter: James Taylor
>Assignee: Maddineni Sukumar
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3823.patch, PHOENIX-3823.v2.patch, 
> PHOENIX-3823.v3.patch
>
>
> When UPDATE_CACHE_FREQUENCY is used, clients will cache metadata for a period 
> of time which may cause the schema being used to become stale. If another 
> client adds a column or a new table or view, other clients won't see it. As a 
> result, the client will get a MetaDataEntityNotFoundException. Instead of 
> bubbling this up, we should retry after forcing a cache update on the tables 
> involved in the query.
> The above works well for references to entities that don't yet exist. 
> However, we cannot detect that entities which no longer exist are being 
> referred to until the cache expires. An exception is if a physical table is 
> dropped, which would be detected immediately; however, we would allow queries 
> and updates to columns which have been dropped until the cache entry expires 
> (which seems like a reasonable tradeoff IMHO). In addition, we won't start 
> using indexes on tables until the cache expires.
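The retry behaviour the description calls for can be sketched as below. The exception and callback types are simplified stand-ins for the actual Phoenix client classes, which are not shown in this thread:

```java
import java.util.concurrent.Callable;

public class RetrySketch {
    // Simplified stand-in for Phoenix's MetaDataEntityNotFoundException.
    static class MetaDataEntityNotFoundException extends Exception {}

    // Run the query once; on a metadata miss, force a cache update and retry once.
    static <T> T runWithCacheRetry(Callable<T> query, Runnable forceCacheUpdate) throws Exception {
        try {
            return query.call();
        } catch (MetaDataEntityNotFoundException e) {
            forceCacheUpdate.run();   // refresh cached metadata for the tables involved
            return query.call();      // retry once; a second miss propagates to the caller
        }
    }

    public static void main(String[] args) throws Exception {
        final boolean[] cacheFresh = {false};
        String result = runWithCacheRetry(
            () -> {
                if (!cacheFresh[0]) throw new MetaDataEntityNotFoundException();
                return "ok";
            },
            () -> cacheFresh[0] = true);
        System.out.println(result); // first attempt misses, the retry succeeds: prints ok
    }
}
```

Retrying exactly once keeps the behaviour bounded: if the entity is genuinely missing, the second attempt surfaces the exception to the caller as before.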





[jira] [Updated] (PHOENIX-3823) Force cache update on MetaDataEntityNotFoundException

2017-05-16 Thread Maddineni Sukumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maddineni Sukumar updated PHOENIX-3823:
---
Attachment: PHOENIX-3823.v3.patch

> Force cache update on MetaDataEntityNotFoundException 
> --
>
> Key: PHOENIX-3823
> URL: https://issues.apache.org/jira/browse/PHOENIX-3823
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.10.0
>Reporter: James Taylor
>Assignee: Maddineni Sukumar
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3823.patch, PHOENIX-3823.v2.patch, 
> PHOENIX-3823.v3.patch
>
>
> When UPDATE_CACHE_FREQUENCY is used, clients will cache metadata for a period 
> of time which may cause the schema being used to become stale. If another 
> client adds a column or a new table or view, other clients won't see it. As a 
> result, the client will get a MetaDataEntityNotFoundException. Instead of 
> bubbling this up, we should retry after forcing a cache update on the tables 
> involved in the query.
> The above works well for references to entities that don't yet exist. 
> However, we cannot detect that entities which no longer exist are being 
> referred to until the cache expires. An exception is if a physical table is 
> dropped, which would be detected immediately; however, we would allow queries 
> and updates to columns which have been dropped until the cache entry expires 
> (which seems like a reasonable tradeoff IMHO). In addition, we won't start 
> using indexes on tables until the cache expires.





[jira] [Commented] (PHOENIX-3850) Indexer.postOpen should not always log "Found some outstanding index updates that didn't succeed"

2017-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013449#comment-16013449
 ] 

Hadoop QA commented on PHOENIX-3850:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12868451/PHOENIX-3850_v2.patch
  against master branch at commit 1666e932d157be732946e02b474b6c342199bc0f.
  ATTACHMENT ID: 12868451

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
47 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/872//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/872//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/872//console

This message is automatically generated.

> Indexer.postOpen should not always log "Found some outstanding index updates 
> that didn't succeed"
> -
>
> Key: PHOENIX-3850
> URL: https://issues.apache.org/jira/browse/PHOENIX-3850
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: chenglei
>Priority: Minor
> Attachments: PHOENIX-3850_v2.patch
>
>
> When the RegionServer starts, I always find the region logs "Found some 
> outstanding index updates that didn't succeed during WAL replay - attempting 
> to replay now." That is because in the following Indexer.postOpen method, the 
> LOG.info in line 528 comes before the if statement in line 531, so the method 
> always logs "Found some outstanding index updates that didn't succeed..." no 
> matter whether there are outstanding index updates.
>  {code}
> 520  @Override
> 521  public void postOpen(final ObserverContext<RegionCoprocessorEnvironment> c) {
> 522    Multimap<HTableInterfaceReference, Mutation> updates =
>            failedIndexEdits.getEdits(c.getEnvironment().getRegion());
> 523
> 524    if (this.disabled) {
> 525      super.postOpen(c);
> 526      return;
> 527    }
> 528    LOG.info("Found some outstanding index updates that didn't succeed during"
> 529        + " WAL replay - attempting to replay now.");
> 530    //if we have no pending edits to complete, then we are done
> 531    if (updates == null || updates.size() == 0) {
> 532      return;
> 533    }
> 534
> 535    // do the usual writer stuff, killing the server again, if we can't manage to make the index
> 536    // writes succeed again
> 537    try {
> 538      writer.writeAndKillYourselfOnFailure(updates, true);
> 539    } catch (IOException e) {
> 540      LOG.error("During WAL replay of outstanding index updates, "
> 541          + "Exception is thrown instead of killing server during index writing", e);
> 542    }
> 543  }
> {code}
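The fix is simply to move the empty-updates guard ahead of the log statement. A minimal, self-contained model of the corrected ordering (plain Java collections stand in for the Multimap and coprocessor types):

```java
import java.util.ArrayList;
import java.util.List;

public class PostOpenSketch {
    static List<String> logged = new ArrayList<>();

    // Corrected ordering: return on an empty update set *before* logging.
    static void postOpen(List<String> updates, boolean disabled) {
        if (disabled) return;
        if (updates == null || updates.isEmpty()) return; // nothing to replay -> stay silent
        logged.add("Found some outstanding index updates that didn't succeed during"
                + " WAL replay - attempting to replay now.");
        // ... replay the outstanding updates here ...
    }

    public static void main(String[] args) {
        postOpen(new ArrayList<>(), false);   // empty: must not log
        System.out.println(logged.size());    // prints 0
        postOpen(List.of("edit1"), false);    // pending edits: now we log
        System.out.println(logged.size());    // prints 1
    }
}
```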





[jira] [Updated] (PHOENIX-3850) Indexer.postOpen should not always log "Found some outstanding index updates that didn't succeed"

2017-05-16 Thread chenglei (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-3850:
--
Attachment: PHOENIX-3850_v2.patch

> Indexer.postOpen should not always log "Found some outstanding index updates 
> that didn't succeed"
> -
>
> Key: PHOENIX-3850
> URL: https://issues.apache.org/jira/browse/PHOENIX-3850
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: chenglei
>Priority: Minor
> Attachments: PHOENIX-3850_v2.patch
>
>
> When the RegionServer starts, I always find the region logs "Found some 
> outstanding index updates that didn't succeed during WAL replay - attempting 
> to replay now." That is because in the following Indexer.postOpen method, the 
> LOG.info in line 528 comes before the if statement in line 531, so the method 
> always logs "Found some outstanding index updates that didn't succeed..." no 
> matter whether there are outstanding index updates.
>  {code}
> 520  @Override
> 521  public void postOpen(final ObserverContext<RegionCoprocessorEnvironment> c) {
> 522    Multimap<HTableInterfaceReference, Mutation> updates =
>            failedIndexEdits.getEdits(c.getEnvironment().getRegion());
> 523
> 524    if (this.disabled) {
> 525      super.postOpen(c);
> 526      return;
> 527    }
> 528    LOG.info("Found some outstanding index updates that didn't succeed during"
> 529        + " WAL replay - attempting to replay now.");
> 530    //if we have no pending edits to complete, then we are done
> 531    if (updates == null || updates.size() == 0) {
> 532      return;
> 533    }
> 534
> 535    // do the usual writer stuff, killing the server again, if we can't manage to make the index
> 536    // writes succeed again
> 537    try {
> 538      writer.writeAndKillYourselfOnFailure(updates, true);
> 539    } catch (IOException e) {
> 540      LOG.error("During WAL replay of outstanding index updates, "
> 541          + "Exception is thrown instead of killing server during index writing", e);
> 542    }
> 543  }
> {code}





[jira] [Updated] (PHOENIX-3850) Indexer.postOpen should not always log "Found some outstanding index updates that didn't succeed"

2017-05-16 Thread chenglei (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-3850:
--
Attachment: (was: PHOENIX-3850_v1.patch)

> Indexer.postOpen should not always log "Found some outstanding index updates 
> that didn't succeed"
> -
>
> Key: PHOENIX-3850
> URL: https://issues.apache.org/jira/browse/PHOENIX-3850
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: chenglei
>Priority: Minor
>
> When the RegionServer starts, I always find the region logs "Found some 
> outstanding index updates that didn't succeed during WAL replay - attempting 
> to replay now." That is because in the following Indexer.postOpen method, the 
> LOG.info in line 528 comes before the if statement in line 531, so the method 
> always logs "Found some outstanding index updates that didn't succeed..." no 
> matter whether there are outstanding index updates.
>  {code}
> 520  @Override
> 521  public void postOpen(final ObserverContext<RegionCoprocessorEnvironment> c) {
> 522    Multimap<HTableInterfaceReference, Mutation> updates =
>            failedIndexEdits.getEdits(c.getEnvironment().getRegion());
> 523
> 524    if (this.disabled) {
> 525      super.postOpen(c);
> 526      return;
> 527    }
> 528    LOG.info("Found some outstanding index updates that didn't succeed during"
> 529        + " WAL replay - attempting to replay now.");
> 530    //if we have no pending edits to complete, then we are done
> 531    if (updates == null || updates.size() == 0) {
> 532      return;
> 533    }
> 534
> 535    // do the usual writer stuff, killing the server again, if we can't manage to make the index
> 536    // writes succeed again
> 537    try {
> 538      writer.writeAndKillYourselfOnFailure(updates, true);
> 539    } catch (IOException e) {
> 540      LOG.error("During WAL replay of outstanding index updates, "
> 541          + "Exception is thrown instead of killing server during index writing", e);
> 542    }
> 543  }
> {code}





[jira] [Commented] (PHOENIX-3832) Local Index - Empty resultset for multi-tenant tables

2017-05-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013417#comment-16013417
 ] 

Hudson commented on PHOENIX-3832:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1612 (See 
[https://builds.apache.org/job/Phoenix-master/1612/])
PHOENIX-3832 Local Index - Empty resultset for multi-tenant tables 
(jamestaylor: rev 1666e932d157be732946e02b474b6c342199bc0f)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java


> Local Index - Empty resultset for multi-tenant tables
> -
>
> Key: PHOENIX-3832
> URL: https://issues.apache.org/jira/browse/PHOENIX-3832
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: James Taylor
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3832.patch
>
>
> Schema
> {noformat}
> CREATE TABLE IF NOT EXISTS T (PKA CHAR(15) NOT NULL, PKF CHAR(3) NOT NULL,
>  PKP CHAR(15) NOT NULL, CRD DATE NOT NULL, EHI CHAR(15) NOT NULL, STD_COL 
> VARCHAR, INDEXED_COL INTEGER,
>  CONSTRAINT PK PRIMARY KEY ( PKA, PKF, PKP, CRD DESC, EHI)) 
>  VERSIONS=1,MULTI_TENANT=true,IMMUTABLE_ROWS=true;
> CREATE LOCAL INDEX IF NOT EXISTS TIDX ON T (INDEXED_COL);
> {noformat}
> Query
> {noformat}
> SELECT * FROM T WHERE INDEXED_COL < 1
> {noformat}
> The resultset is empty for all columns other than the PK and the indexed 
> column. This is on a single-region table.





[jira] [Commented] (PHOENIX-3827) Make use of HBASE-15600 to write local index mutations along with data mutations atomically

2017-05-16 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013405#comment-16013405
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-3827:
--

[~jamestaylor] I am working on finding the root cause of the OnDuplicateKeyIT test 
failure, but in the meantime, after [~mujtabachohan] raised PHOENIX-3853, I am looking 
at the performance issue. I am wondering how the local indexes are slow. 
As for the patch, we collect the local index updates by removing them from the index 
updates if the table is the same, and add them to the list of ongoing mutations. Then 
what HBase does internally is, for every mutation in the batch, it gets the 
corresponding index mutations, acquires the row lock, and merges these cells into the 
data mutations to write atomically (the remaining story is the same as for normal 
writes). I don't think this is costlier than normal writes. Things I 
suspect:
1) while loading data I observed splits, which might have caused some slowness
2) or else the index updates preparation might have been slow 
Continuing testing to find the root cause of the slowness. 
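The write path described in the comment can be modelled as follows. Plain lists stand in for HBase's mutation batch; this illustrates the same-table merge performed by the patch, not the actual HBASE-15600 coprocessor API:

```java
import java.util.ArrayList;
import java.util.List;

public class AtomicLocalIndexSketch {
    // Split the index updates: mutations targeting the same table (local index)
    // join the data batch, so data and local-index rows are written under one
    // row lock, atomically. Updates for other tables take the normal index path.
    static List<String> mergeLocalIndexUpdates(List<String> batch,
                                               List<String[]> indexUpdates, // [table, mutation]
                                               String dataTable) {
        List<String> merged = new ArrayList<>(batch);
        for (String[] update : indexUpdates) {
            if (update[0].equals(dataTable)) {
                merged.add(update[1]); // local index: written with the data mutations
            }
            // else: would be handled by the separate (global) index writer
        }
        return merged;
    }

    public static void main(String[] args) {
        List<String> batch = new ArrayList<>(List.of("dataPut"));
        List<String[]> indexUpdates = List.of(
            new String[]{"T", "localIndexPut"},        // same table -> merged
            new String[]{"GLOBAL_IDX", "remotePut"});  // different table -> not merged
        System.out.println(mergeLocalIndexUpdates(batch, indexUpdates, "T"));
        // prints [dataPut, localIndexPut]
    }
}
```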

> Make use of HBASE-15600 to write local index mutations along with data 
> mutations atomically
> ---
>
> Key: PHOENIX-3827
> URL: https://issues.apache.org/jira/browse/PHOENIX-3827
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3827.patch, PHOENIX-3827_v2.patch, 
> PHOENIX-3827_v3.patch
>
>
> After HBASE-15600 we can add mutations to the same table from coprocessors, so 
> we can write the local index data along with the data mutations, making the write 
> atomic. We can do this in the 4.x-HBase-1.3 version.





[jira] [Commented] (PHOENIX-3655) Metrics for PQS

2017-05-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013404#comment-16013404
 ] 

ASF GitHub Bot commented on PHOENIX-3655:
-

Github user rahulsIOT commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/242#discussion_r116896889
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/QueryServicesOptions.java 
---
@@ -272,6 +278,11 @@
 public static final String DEFAULT_IMMUTABLE_STORAGE_SCHEME = 
ImmutableStorageScheme.SINGLE_CELL_ARRAY_WITH_OFFSETS.toString();
 public static final String 
DEFAULT_MULTITENANT_IMMUTABLE_STORAGE_SCHEME = 
ImmutableStorageScheme.ONE_CELL_PER_COLUMN.toString();
 
+public static final Integer DEFAULT_PHOENIX_PQS_REPORTING_INTERVAL_MS 
= 1; // 10 sec
+public static final String DEFAULT_PHOENIX_PQS_TYPE_OF_SINK = "file";
+public static final boolean DEFAULT_PHOENIX_QUERY_SERVER_METRICS = 
true;
--- End diff --

Made the metrics collection off by default.


> Metrics for PQS
> ---
>
> Key: PHOENIX-3655
> URL: https://issues.apache.org/jira/browse/PHOENIX-3655
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.8.0
> Environment: Linux 3.13.0-107-generic kernel, v4.9.0-HBase-0.98
>Reporter: Rahul Shrivastava
>Assignee: Rahul Shrivastava
> Fix For: 4.9.0
>
> Attachments: MetricsforPhoenixQueryServerPQS.pdf
>
>   Original Estimate: 240h
>  Remaining Estimate: 240h
>
> Phoenix Query Server runs as a separate process from its thin client. 
> Metrics collection is currently done by PhoenixRuntime.java, i.e. at the Phoenix 
> driver level. We need the following:
> 1. For every JDBC statement/prepared statement run by PQS, we need the 
> capability to collect metrics at the PQS level and push the data to an external 
> sink, i.e. file, JMX, or other external custom sources. 
> 2. Besides this, global metrics could be periodically collected and pushed to 
> the sink. 
> 3. PQS can be configured to turn on metrics collection and the type of 
> collection (runtime or global) via hbase-site.xml.
> 4. The sink could be configured via an interface in hbase-site.xml. 
> All metrics definitions: https://phoenix.apache.org/metrics.html
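A minimal shape for the pluggable sink described in points 1 and 4 might look like the sketch below. The interface, class, and config-string names here are hypothetical, chosen only to illustrate config-driven sink selection:

```java
public class PqsSinkSketch {
    // Hypothetical sink abstraction: one implementation per target (file, JMX, ...).
    interface MetricsSink {
        void put(String metricName, long value);
    }

    static class StdoutSink implements MetricsSink {
        public void put(String metricName, long value) {
            System.out.println(metricName + "=" + value);
        }
    }

    // Sink selection driven by a config string, as an hbase-site.xml property would supply.
    static MetricsSink sinkFor(String type) {
        switch (type) {
            case "stdout": return new StdoutSink();
            default: throw new IllegalArgumentException("unknown sink: " + type);
        }
    }

    public static void main(String[] args) {
        sinkFor("stdout").put("SELECT_SQL_COUNTER", 12); // prints SELECT_SQL_COUNTER=12
    }
}
```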





[GitHub] phoenix pull request #242: PQS metrics - https://issues.apache.org/jira/brow...

2017-05-16 Thread rahulsIOT
Github user rahulsIOT commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/242#discussion_r116896889
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/QueryServicesOptions.java 
---
@@ -272,6 +278,11 @@
 public static final String DEFAULT_IMMUTABLE_STORAGE_SCHEME = 
ImmutableStorageScheme.SINGLE_CELL_ARRAY_WITH_OFFSETS.toString();
 public static final String 
DEFAULT_MULTITENANT_IMMUTABLE_STORAGE_SCHEME = 
ImmutableStorageScheme.ONE_CELL_PER_COLUMN.toString();
 
+public static final Integer DEFAULT_PHOENIX_PQS_REPORTING_INTERVAL_MS 
= 1; // 10 sec
+public static final String DEFAULT_PHOENIX_PQS_TYPE_OF_SINK = "file";
+public static final boolean DEFAULT_PHOENIX_QUERY_SERVER_METRICS = 
true;
--- End diff --

Made the metrics collection off by default.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3655) Metrics for PQS

2017-05-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013382#comment-16013382
 ] 

ASF GitHub Bot commented on PHOENIX-3655:
-

Github user rahulsIOT commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/242#discussion_r116895997
  
--- Diff: 
phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/metrics/sink/PqsFileSink.java
 ---
@@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.queryserver.metrics.sink;
+
+
+import org.apache.phoenix.queryserver.metrics.PqsConfiguration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.io.UnsupportedEncodingException;
+import java.io.FileOutputStream;
+import java.io.PrintStream;
+
+
+public class PqsFileSink extends PqsSink {
+
+private PrintStream writer;
+private static final Logger LOG = 
LoggerFactory.getLogger(PqsFileSink.class);
+private String filename = PqsConfiguration.getFileSinkFilename();
+
+public PqsFileSink() {
+try {
+writer = filename == null ? System.out
+: new PrintStream(new FileOutputStream(new 
File(filename)),
+true, "UTF-8");
+} catch (FileNotFoundException e) {
+LOG.error("Error creating "+ filename, e);
--- End diff --

Not sure which version you are looking at but there is a finally statement 
below. 

finally {
    if (writer == null) {
        writer = System.out;
    }
}

If an exception is thrown, writer will be null, so we set writer to 
System.out. 


> Metrics for PQS
> ---
>
> Key: PHOENIX-3655
> URL: https://issues.apache.org/jira/browse/PHOENIX-3655
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.8.0
> Environment: Linux 3.13.0-107-generic kernel, v4.9.0-HBase-0.98
>Reporter: Rahul Shrivastava
>Assignee: Rahul Shrivastava
> Fix For: 4.9.0
>
> Attachments: MetricsforPhoenixQueryServerPQS.pdf
>
>   Original Estimate: 240h
>  Remaining Estimate: 240h
>
> Phoenix Query Server runs as a separate process from its thin client. 
> Metrics collection is currently done by PhoenixRuntime.java, i.e. at the Phoenix 
> driver level. We need the following:
> 1. For every JDBC statement/prepared statement run by PQS, we need the 
> capability to collect metrics at the PQS level and push the data to an external 
> sink, i.e. file, JMX, or other external custom sources. 
> 2. Besides this, global metrics could be periodically collected and pushed to 
> the sink. 
> 3. PQS can be configured to turn on metrics collection and the type of 
> collection (runtime or global) via hbase-site.xml.
> 4. The sink could be configured via an interface in hbase-site.xml. 
> All metrics definitions: https://phoenix.apache.org/metrics.html





[GitHub] phoenix pull request #242: PQS metrics - https://issues.apache.org/jira/brow...

2017-05-16 Thread rahulsIOT
Github user rahulsIOT commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/242#discussion_r116895997
  
--- Diff: 
phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/metrics/sink/PqsFileSink.java
 ---
@@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.queryserver.metrics.sink;
+
+
+import org.apache.phoenix.queryserver.metrics.PqsConfiguration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.io.UnsupportedEncodingException;
+import java.io.FileOutputStream;
+import java.io.PrintStream;
+
+
+public class PqsFileSink extends PqsSink {
+
+private PrintStream writer;
+private static final Logger LOG = 
LoggerFactory.getLogger(PqsFileSink.class);
+private String filename = PqsConfiguration.getFileSinkFilename();
+
+public PqsFileSink() {
+try {
+writer = filename == null ? System.out
+: new PrintStream(new FileOutputStream(new 
File(filename)),
+true, "UTF-8");
+} catch (FileNotFoundException e) {
+LOG.error("Error creating "+ filename, e);
--- End diff --

Not sure which version you are looking at but there is a finally statement 
below. 

finally {
    if (writer == null) {
        writer = System.out;
    }
}

If an exception is thrown, writer will be null, so we set writer to 
System.out. 




[jira] [Commented] (PHOENIX-3823) Force cache update on MetaDataEntityNotFoundException

2017-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013373#comment-16013373
 ] 

Hadoop QA commented on PHOENIX-3823:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12868427/PHOENIX-3823.v2.patch
  against master branch at commit d2575288d1542c5b6e8dbe65448a22cf59aca8ff.
  ATTACHMENT ID: 12868427

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
47 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+assertTrue(e.getMessage(), e.getMessage().contains("ERROR 
504 (42703): Undefined column. columnName="+dataTableFullName+"COL5"));
+String createQry = "create table "+tableName+" (k VARCHAR PRIMARY KEY, 
v1 VARCHAR, v2 VARCHAR)"
+"CREATE VIEW MY_VIEW (v43 VARCHAR) AS SELECT * FROM 
"+tableName+" WHERE v1 = 'value1'";
+String schemaNameStr = 
dataTable.getSchemaName()==null?null:dataTable.getSchemaName().getString();
+String tableNameStr = 
dataTable.getTableName()==null?null:dataTable.getTableName().getString();
+throw new ColumnNotFoundException(schemaNameStr, 
tableNameStr,null, WildcardParseNode.INSTANCE.toString());
+String schemaNameStr = 
table.getSchemaName()==null?null:table.getSchemaName().getString();
+String tableNameStr = 
table.getTableName()==null?null:table.getTableName().getString();
+throw new ColumnNotFoundException(schemaNameStr, tableNameStr, 
null, ref.getColumn().getName().getString());
+return new ColumnFamilyNotFoundException(info.getSchemaName(), 
info.getTableName(), info.getFamilyName());

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.AlterTableIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/871//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/871//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/871//console

This message is automatically generated.

> Force cache update on MetaDataEntityNotFoundException 
> --
>
> Key: PHOENIX-3823
> URL: https://issues.apache.org/jira/browse/PHOENIX-3823
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.10.0
>Reporter: James Taylor
>Assignee: Maddineni Sukumar
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3823.patch, PHOENIX-3823.v2.patch
>
>
> When UPDATE_CACHE_FREQUENCY is used, clients will cache metadata for a period 
> of time which may cause the schema being used to become stale. If another 
> client adds a column or a new table or view, other clients won't see it. As a 
> result, the client will get a MetaDataEntityNotFoundException. Instead of 
> bubbling this up, we should retry after forcing a cache update on the tables 
> involved in the query.
> The above works well for references to entities that don't yet exist. 
> However, we cannot detect references to entities that no longer exist until 
> the cache expires. An exception is a dropped physical table, which would be 
> detected immediately; however, we would allow queries and updates to columns 
> which have been dropped until the cache entry expires (which seems like a 
> reasonable tradeoff, IMHO). In addition, we won't start using indexes on 
> tables until the cache expires.
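The retry-on-stale-cache behavior proposed above can be sketched in plain Java. This is a minimal illustration, not Phoenix's actual client code: `StaleMetadataException` stands in for `MetaDataEntityNotFoundException`, and the cache-update hook is a placeholder for forcing a metadata refresh on the tables involved in the query.

```java
import java.util.concurrent.Callable;

public class RetryOnStaleMetadata {
    // Hypothetical stand-in for MetaDataEntityNotFoundException.
    static class StaleMetadataException extends RuntimeException {}

    static int attempts = 0; // used by the demo in main()

    // Run the query; on a stale-metadata failure, force a cache update
    // and retry exactly once instead of bubbling the exception up.
    static <T> T callWithOneRetry(Callable<T> query, Runnable forceCacheUpdate)
            throws Exception {
        try {
            return query.call();
        } catch (StaleMetadataException e) {
            forceCacheUpdate.run(); // refresh cached table metadata
            return query.call();    // retry once with fresh schema
        }
    }

    public static void main(String[] args) throws Exception {
        String result = callWithOneRetry(() -> {
            // First call simulates a client whose cached schema is stale.
            if (attempts++ == 0) throw new StaleMetadataException();
            return "ok";
        }, () -> System.out.println("cache updated"));
        System.out.println(result); // ok
    }
}
```

A second failure after the forced update still propagates, which matches the intent: genuinely missing entities should surface to the caller.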



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (PHOENIX-3832) Local Index - Empty resultset for multi-tenant tables

2017-05-16 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-3832.
---
Resolution: Fixed

> Local Index - Empty resultset for multi-tenant tables
> -
>
> Key: PHOENIX-3832
> URL: https://issues.apache.org/jira/browse/PHOENIX-3832
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: James Taylor
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3832.patch
>
>
> Schema
> {noformat}
> CREATE TABLE IF NOT EXISTS T (PKA CHAR(15) NOT NULL, PKF CHAR(3) NOT NULL,
>  PKP CHAR(15) NOT NULL, CRD DATE NOT NULL, EHI CHAR(15) NOT NULL, STD_COL 
> VARCHAR, INDEXED_COL INTEGER,
>  CONSTRAINT PK PRIMARY KEY ( PKA, PKF, PKP, CRD DESC, EHI)) 
>  VERSIONS=1,MULTI_TENANT=true,IMMUTABLE_ROWS=true;
> CREATE LOCAL INDEX IF NOT EXISTS TIDX ON T (INDEXED_COL);
> {noformat}
> Query
> {noformat}
> SELECT * FROM T WHERE INDEXED_COL < 1
> {noformat}
> Resultset is empty for all columns other than PK and indexed column. This is 
> on a single region table.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (PHOENIX-3857) Move to surefire 2.20

2017-05-16 Thread James Taylor (JIRA)
James Taylor created PHOENIX-3857:
-

 Summary: Move to surefire 2.20
 Key: PHOENIX-3857
 URL: https://issues.apache.org/jira/browse/PHOENIX-3857
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: James Taylor
 Fix For: 4.11.0


Move to surefire 2.20 which has some bug fixes we can eventually leverage for 
running our tests in parallel.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3832) Local Index - Empty resultset for multi-tenant tables

2017-05-16 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013329#comment-16013329
 ] 

Samarth Jain commented on PHOENIX-3832:
---

+1, looks good.

> Local Index - Empty resultset for multi-tenant tables
> -
>
> Key: PHOENIX-3832
> URL: https://issues.apache.org/jira/browse/PHOENIX-3832
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: James Taylor
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3832.patch
>
>
> Schema
> {noformat}
> CREATE TABLE IF NOT EXISTS T (PKA CHAR(15) NOT NULL, PKF CHAR(3) NOT NULL,
>  PKP CHAR(15) NOT NULL, CRD DATE NOT NULL, EHI CHAR(15) NOT NULL, STD_COL 
> VARCHAR, INDEXED_COL INTEGER,
>  CONSTRAINT PK PRIMARY KEY ( PKA, PKF, PKP, CRD DESC, EHI)) 
>  VERSIONS=1,MULTI_TENANT=true,IMMUTABLE_ROWS=true;
> CREATE LOCAL INDEX IF NOT EXISTS TIDX ON T (INDEXED_COL);
> {noformat}
> Query
> {noformat}
> SELECT * FROM T WHERE INDEXED_COL < 1
> {noformat}
> Resultset is empty for all columns other than PK and indexed column. This is 
> on a single region table.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (PHOENIX-3832) Local Index - Empty resultset for multi-tenant tables

2017-05-16 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-3832:
-

Assignee: James Taylor  (was: Rajeshbabu Chintaguntla)

> Local Index - Empty resultset for multi-tenant tables
> -
>
> Key: PHOENIX-3832
> URL: https://issues.apache.org/jira/browse/PHOENIX-3832
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: James Taylor
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3832.patch
>
>
> Schema
> {noformat}
> CREATE TABLE IF NOT EXISTS T (PKA CHAR(15) NOT NULL, PKF CHAR(3) NOT NULL,
>  PKP CHAR(15) NOT NULL, CRD DATE NOT NULL, EHI CHAR(15) NOT NULL, STD_COL 
> VARCHAR, INDEXED_COL INTEGER,
>  CONSTRAINT PK PRIMARY KEY ( PKA, PKF, PKP, CRD DESC, EHI)) 
>  VERSIONS=1,MULTI_TENANT=true,IMMUTABLE_ROWS=true;
> CREATE LOCAL INDEX IF NOT EXISTS TIDX ON T (INDEXED_COL);
> {noformat}
> Query
> {noformat}
> SELECT * FROM T WHERE INDEXED_COL < 1
> {noformat}
> Resultset is empty for all columns other than PK and indexed column. This is 
> on a single region table.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3832) Local Index - Empty resultset for multi-tenant tables

2017-05-16 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3832:
--
Attachment: PHOENIX-3832.patch

Please review, [~rajeshbabu] or [~samarthjain]. FYI, this was caused by the 
move of the index ID to be before the tenant ID. Thanks for the pointer, 
Rajeshbabu!

> Local Index - Empty resultset for multi-tenant tables
> -
>
> Key: PHOENIX-3832
> URL: https://issues.apache.org/jira/browse/PHOENIX-3832
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3832.patch
>
>
> Schema
> {noformat}
> CREATE TABLE IF NOT EXISTS T (PKA CHAR(15) NOT NULL, PKF CHAR(3) NOT NULL,
>  PKP CHAR(15) NOT NULL, CRD DATE NOT NULL, EHI CHAR(15) NOT NULL, STD_COL 
> VARCHAR, INDEXED_COL INTEGER,
>  CONSTRAINT PK PRIMARY KEY ( PKA, PKF, PKP, CRD DESC, EHI)) 
>  VERSIONS=1,MULTI_TENANT=true,IMMUTABLE_ROWS=true;
> CREATE LOCAL INDEX IF NOT EXISTS TIDX ON T (INDEXED_COL);
> {noformat}
> Query
> {noformat}
> SELECT * FROM T WHERE INDEXED_COL < 1
> {noformat}
> Resultset is empty for all columns other than PK and indexed column. This is 
> on a single region table.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3839) Prevent large aggregate queries from timing out

2017-05-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013316#comment-16013316
 ] 

Andrew Purtell commented on PHOENIX-3839:
-

If you are testing HBASE-18000, be sure to pick up the addendum too.

> Prevent large aggregate queries from timing out
> ---
>
> Key: PHOENIX-3839
> URL: https://issues.apache.org/jira/browse/PHOENIX-3839
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
>
> Large aggregate queries timeout in Phoenix, even with our renew lease code in 
> place. The only workaround is to increase the RPC timeout to be really high 
> which is not such a good idea. It's quite possible HBASE-18000 is the root 
> cause. Would it be possible to test that theory on master (i.e. with HBase 
> 1.3 plus the patch for HBASE-18000), [~samarthjain] & [~mujtabachohan]?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3855) Separate MutableIndexIT into multiple test classes

2017-05-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013312#comment-16013312
 ] 

Hudson commented on PHOENIX-3855:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1611 (See 
[https://builds.apache.org/job/Phoenix-master/1611/])
PHOENIX-3855 Separate MutableIndexIT into multiple test classes (jamestaylor: 
rev d2575288d1542c5b6e8dbe65448a22cf59aca8ff)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
* (add) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexSplitIT.java
* (add) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexSplitForwardScanIT.java
* (add) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexSplitReverseScanIT.java


> Separate MutableIndexIT into multiple test classes
> --
>
> Key: PHOENIX-3855
> URL: https://issues.apache.org/jira/browse/PHOENIX-3855
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3855.patch
>
>
> MutableIndexIT is taking 50 minutes to run which is causing our test suite to 
> hang. The two slowest tests are doing splits to ensure the queries succeed 
> still, but there's no need to call these eight times (for all the 
> combinations of parameterized tests). There's also no need to explicitly drop 
> the tables created since the table names are unique.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3856) StatementContext class constructor not honouring supplied scan object

2017-05-16 Thread Maddineni Sukumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maddineni Sukumar updated PHOENIX-3856:
---
Attachment: PHOENIX-3856.patch

> StatementContext class  constructor not honouring supplied scan object
> --
>
> Key: PHOENIX-3856
> URL: https://issues.apache.org/jira/browse/PHOENIX-3856
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Maddineni Sukumar
>Assignee: Maddineni Sukumar
>Priority: Minor
> Attachments: PHOENIX-3856.patch
>
>
> In the constructor below, we create an additional scan object instead of 
> using the supplied scan object. 
>  public StatementContext(PhoenixStatement statement, Scan scan) {
> this(statement, FromCompiler.EMPTY_TABLE_RESOLVER, new Scan(), new 
> SequenceManager(statement));
> }
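The bug pattern described in this issue can be reproduced with a simplified sketch. These are hypothetical stand-in classes, not Phoenix's actual `StatementContext` or HBase's `Scan`; the point is that delegating with `new Scan()` silently discards whatever the caller configured on the supplied object.

```java
public class ScanDelegation {
    // Hypothetical minimal stand-in for HBase's Scan.
    static class Scan {
        final String startRow;
        Scan() { this.startRow = ""; }
        Scan(String startRow) { this.startRow = startRow; }
    }

    final Scan scan;

    ScanDelegation(Scan scan) { this.scan = scan; }

    // Buggy pattern: the supplied scan is silently replaced by a fresh one,
    // so any settings the caller made (start row, filters, ...) are lost.
    static ScanDelegation buggy(Scan supplied) {
        return new ScanDelegation(new Scan());
    }

    // Fixed pattern: delegate using the caller-supplied object.
    static ScanDelegation fixed(Scan supplied) {
        return new ScanDelegation(supplied);
    }

    public static void main(String[] args) {
        Scan s = new Scan("row-17");
        System.out.println(buggy(s).scan.startRow.isEmpty()); // true: settings lost
        System.out.println(fixed(s).scan.startRow);           // row-17
    }
}
```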



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (PHOENIX-3856) StatementContext class constructor not honouring supplied scan object

2017-05-16 Thread Maddineni Sukumar (JIRA)
Maddineni Sukumar created PHOENIX-3856:
--

 Summary: StatementContext class  constructor not honouring 
supplied scan object
 Key: PHOENIX-3856
 URL: https://issues.apache.org/jira/browse/PHOENIX-3856
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.10.0
Reporter: Maddineni Sukumar
Assignee: Maddineni Sukumar
Priority: Minor


In the constructor below, we create an additional scan object instead of using 
the supplied scan object. 

 public StatementContext(PhoenixStatement statement, Scan scan) {
this(statement, FromCompiler.EMPTY_TABLE_RESOLVER, new Scan(), new 
SequenceManager(statement));
}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3808) Implement chaos tests using HBase's hbase-it facility

2017-05-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013296#comment-16013296
 ] 

Andrew Purtell commented on PHOENIX-3808:
-

Sorry [~jamestaylor], I got sidetracked. Let me try it out this week. Any 
suggestions for an IT test that will or can run 30 minutes or longer? Maybe we 
should also parameterize some so we can trigger runs of different durations? 
When testing HBase IT cases we run some of them for hours or days, 
IntegrationTestBigLinkedList especially. I am still thinking about porting 
that to Phoenix; it will be possible to do so with this new framework. 
Suggestions for a few candidate 'native' ITs for extended operation are 
welcome. 

> Implement chaos tests using HBase's hbase-it facility
> -
>
> Key: PHOENIX-3808
> URL: https://issues.apache.org/jira/browse/PHOENIX-3808
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>
> Implement chaos tests using HBase's hbase-it facility. Especially, 
> correctness testing with an active server killing monkey policy. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3823) Force cache update on MetaDataEntityNotFoundException

2017-05-16 Thread Maddineni Sukumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maddineni Sukumar updated PHOENIX-3823:
---
Attachment: PHOENIX-3823.v2.patch

> Force cache update on MetaDataEntityNotFoundException 
> --
>
> Key: PHOENIX-3823
> URL: https://issues.apache.org/jira/browse/PHOENIX-3823
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.10.0
>Reporter: James Taylor
>Assignee: Maddineni Sukumar
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3823.patch, PHOENIX-3823.v2.patch
>
>
> When UPDATE_CACHE_FREQUENCY is used, clients will cache metadata for a period 
> of time which may cause the schema being used to become stale. If another 
> client adds a column or a new table or view, other clients won't see it. As a 
> result, the client will get a MetaDataEntityNotFoundException. Instead of 
> bubbling this up, we should retry after forcing a cache update on the tables 
> involved in the query.
> The above works well for references to entities that don't yet exist. 
> However, we cannot detect references to entities that no longer exist until 
> the cache expires. An exception is a dropped physical table, which would be 
> detected immediately; however, we would allow queries and updates to columns 
> which have been dropped until the cache entry expires (which seems like a 
> reasonable tradeoff, IMHO). In addition, we won't start using indexes on 
> tables until the cache expires.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (PHOENIX-3855) Separate MutableIndexIT into multiple test classes

2017-05-16 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-3855.
---
Resolution: Fixed

> Separate MutableIndexIT into multiple test classes
> --
>
> Key: PHOENIX-3855
> URL: https://issues.apache.org/jira/browse/PHOENIX-3855
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3855.patch
>
>
> MutableIndexIT is taking 50 minutes to run which is causing our test suite to 
> hang. The two slowest tests are doing splits to ensure the queries succeed 
> still, but there's no need to call these eight times (for all the 
> combinations of parameterized tests). There's also no need to explicitly drop 
> the tables created since the table names are unique.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3839) Prevent large aggregate queries from timing out

2017-05-16 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013261#comment-16013261
 ] 

James Taylor commented on PHOENIX-3839:
---

[~mujtabachohan] - this is perhaps a new, different issue, as you mention it 
fails while building a global index? I'm wondering about a long-running SELECT 
COUNT(1) FROM T query with and without HBASE-18000.

> Prevent large aggregate queries from timing out
> ---
>
> Key: PHOENIX-3839
> URL: https://issues.apache.org/jira/browse/PHOENIX-3839
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
>
> Large aggregate queries timeout in Phoenix, even with our renew lease code in 
> place. The only workaround is to increase the RPC timeout to be really high 
> which is not such a good idea. It's quite possible HBASE-18000 is the root 
> cause. Would it be possible to test that theory on master (i.e. with HBase 
> 1.3 plus the patch for HBASE-18000), [~samarthjain] & [~mujtabachohan]?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3839) Prevent large aggregate queries from timing out

2017-05-16 Thread Mujtaba Chohan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013254#comment-16013254
 ] 

Mujtaba Chohan commented on PHOENIX-3839:
-

[~jamestaylor] Fails with and without HBASE-18000.

> Prevent large aggregate queries from timing out
> ---
>
> Key: PHOENIX-3839
> URL: https://issues.apache.org/jira/browse/PHOENIX-3839
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
>
> Large aggregate queries timeout in Phoenix, even with our renew lease code in 
> place. The only workaround is to increase the RPC timeout to be really high 
> which is not such a good idea. It's quite possible HBASE-18000 is the root 
> cause. Would it be possible to test that theory on master (i.e. with HBase 
> 1.3 plus the patch for HBASE-18000), [~samarthjain] & [~mujtabachohan]?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3855) Separate MutableIndexIT into multiple test classes

2017-05-16 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3855:
--
Attachment: PHOENIX-3855.patch

> Separate MutableIndexIT into multiple test classes
> --
>
> Key: PHOENIX-3855
> URL: https://issues.apache.org/jira/browse/PHOENIX-3855
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3855.patch
>
>
> MutableIndexIT is taking 50 minutes to run which is causing our test suite to 
> hang. The two slowest tests are doing splits to ensure the queries succeed 
> still, but there's no need to call these eight times (for all the 
> combinations of parameterized tests). There's also no need to explicitly drop 
> the tables created since the table names are unique.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (PHOENIX-3855) Separate MutableIndexIT into multiple test classes

2017-05-16 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-3855:
-

Assignee: James Taylor

> Separate MutableIndexIT into multiple test classes
> --
>
> Key: PHOENIX-3855
> URL: https://issues.apache.org/jira/browse/PHOENIX-3855
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3855.patch
>
>
> MutableIndexIT is taking 50 minutes to run which is causing our test suite to 
> hang. The two slowest tests are doing splits to ensure the queries succeed 
> still, but there's no need to call these eight times (for all the 
> combinations of parameterized tests). There's also no need to explicitly drop 
> the tables created since the table names are unique.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3855) Separate MutableIndexIT into multiple test classes

2017-05-16 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3855:
--
Fix Version/s: 4.11.0

> Separate MutableIndexIT into multiple test classes
> --
>
> Key: PHOENIX-3855
> URL: https://issues.apache.org/jira/browse/PHOENIX-3855
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Fix For: 4.11.0
>
>
> MutableIndexIT is taking 50 minutes to run which is causing our test suite to 
> hang. The two slowest tests are doing splits to ensure the queries succeed 
> still, but there's no need to call these eight times (for all the 
> combinations of parameterized tests). There's also no need to explicitly drop 
> the tables created since the table names are unique.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (PHOENIX-3855) Separate MutableIndexIT into multiple test classes

2017-05-16 Thread James Taylor (JIRA)
James Taylor created PHOENIX-3855:
-

 Summary: Separate MutableIndexIT into multiple test classes
 Key: PHOENIX-3855
 URL: https://issues.apache.org/jira/browse/PHOENIX-3855
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor


MutableIndexIT is taking 50 minutes to run which is causing our test suite to 
hang. The two slowest tests are doing splits to ensure the queries succeed 
still, but there's no need to call these eight times (for all the combinations 
of parameterized tests). There's also no need to explicitly drop the tables 
created since the table names are unique.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3655) Metrics for PQS

2017-05-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013209#comment-16013209
 ] 

ASF GitHub Bot commented on PHOENIX-3655:
-

Github user rahulsIOT commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/242#discussion_r116874151
  
--- Diff: 
phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/metrics/PqsMetricsSystem.java
 ---
@@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.queryserver.metrics;
+
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.phoenix.query.QueryServicesOptions;
+import org.apache.phoenix.queryserver.metrics.sink.PqsFileSink;
+import org.apache.phoenix.queryserver.metrics.sink.PqsSink;
+import org.apache.phoenix.queryserver.metrics.sink.PqsSlf4jSink;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static 
org.apache.phoenix.query.QueryServices.PHOENIX_PQS_FILE_SINK_FILENAME;
+import static 
org.apache.phoenix.query.QueryServices.PHOENIX_PQS_METRIC_REPORTING_INTERVAL_MS;
+import static 
org.apache.phoenix.query.QueryServices.PHOENIX_PQS_TYPE_OF_SINK;
+
+
+public class PqsMetricsSystem {
+
+public final static String statementReadMetrics = 
"Statement-RequestReadMetrics";
+public final static String overAllReadRequestMetrics = 
"Statement-OverAllReadRequestMetrics";
+public final static String connectionWriteMetricsForMutations = 
"Connection-WriteMetricsForMutations";
+public final static String connectionReadMetricsForMutations = 
"Connection-ReadMetricsForMutations";
+
+public enum MetricType {
+global,
+request
+}
+
+protected static final Logger LOG = 
LoggerFactory.getLogger(PqsMetricsSystem.class);
+
+public Thread getGlobalMetricThread() {
+return globalMetricThread;
+}
+
+private Thread globalMetricThread = null;
+
+
+public PqsMetricsSystem(String sinkType,String fileName, Integer 
reportingInterval){
+PqsGlobalMetrics pqsGlobalMetricsToJMX = null;
+try {
+pqsGlobalMetricsToJMX = new PqsGlobalMetrics(sinkType, 
fileName,reportingInterval);
+globalMetricThread = new Thread(pqsGlobalMetricsToJMX);
+globalMetricThread.setName("globalMetricsThread");
+globalMetricThread.start();
+}catch (Exception ex){
+LOG.error(" could not instantiate the PQS Metrics System");
+if (globalMetricThread!=null) {
+try {
+globalMetricThread.interrupt();
+} catch (Exception ine) {
+LOG.error(" unable to interrupt the global metrics 
thread",ine);
+}
+}
+
+}
+ }
+
+public static PqsSink getSinkObject(String typeOfSink,String filename){
+PqsSink pqsSink;
+switch(typeOfSink.toLowerCase()) {
+case "file":
+pqsSink = new PqsFileSink(filename);
--- End diff --

Hadoop metrics uses the concept of sources and sinks and ties them together 
with a metrics system. If you use a sink, you need a source of gauges, meters, 
counters, etc. But the only interface the Phoenix driver exposes is the static 
methods in PhoenixRuntime, which provide aggregate values for various metrics. 
So it is difficult to use Hadoop metrics or Codahale.  
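The polling approach this comment describes can be sketched generically: since the driver only exposes aggregate metric values through static methods (PhoenixRuntime-style), a background reporter periodically snapshots those aggregates into a sink instead of registering Hadoop-metrics sources. All names below are illustrative, not Phoenix's or PQS's actual API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class PollingMetricsReporter {
    // Minimal sink abstraction: receives a snapshot of aggregate values.
    interface Sink { void put(Map<String, Long> snapshot); }

    // Copy the current aggregate values from the source into the sink.
    static void pollOnce(Supplier<Map<String, Long>> source, Sink sink) {
        sink.put(new HashMap<>(source.get()));
    }

    // Run pollOnce every intervalMs on a daemon thread until interrupted,
    // mirroring the globalMetricsThread pattern in the patch under review.
    static Thread start(Supplier<Map<String, Long>> source, Sink sink,
                        long intervalMs) {
        Thread t = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                pollOnce(source, sink);
                try { Thread.sleep(intervalMs); }
                catch (InterruptedException e) { return; }
            }
        }, "globalMetricsThread");
        t.setDaemon(true);
        t.start();
        return t;
    }

    public static void main(String[] args) {
        Map<String, Long> aggregates = Map.of("Statement-RequestReadMetrics", 42L);
        pollOnce(() -> aggregates, snap -> System.out.println(snap));
        // prints {Statement-RequestReadMetrics=42}
    }
}
```

The tradeoff is that a polling reporter only sees aggregates at sample boundaries, which is exactly why it sidesteps the source/sink wiring Hadoop metrics expects.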


> Metrics for PQS
> ---
>
> Key: PHOENIX-3655
> URL: https://issues.apache.org/jira/browse/PHOENIX-3655
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.8.0
> Environment: Linux 3.13.0-107-generic kernel, v4.9.0-HBase-0.98
>Reporter: Rahul Shrivastava
>Assignee: Rahul Shrivastava
> Fix For: 4.9.0
>
> 

[GitHub] phoenix pull request #242: PQS metrics - https://issues.apache.org/jira/brow...

2017-05-16 Thread rahulsIOT
Github user rahulsIOT commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/242#discussion_r116874151
  
--- Diff: 
phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/metrics/PqsMetricsSystem.java
 ---
@@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.queryserver.metrics;
+
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.phoenix.query.QueryServicesOptions;
+import org.apache.phoenix.queryserver.metrics.sink.PqsFileSink;
+import org.apache.phoenix.queryserver.metrics.sink.PqsSink;
+import org.apache.phoenix.queryserver.metrics.sink.PqsSlf4jSink;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static 
org.apache.phoenix.query.QueryServices.PHOENIX_PQS_FILE_SINK_FILENAME;
+import static 
org.apache.phoenix.query.QueryServices.PHOENIX_PQS_METRIC_REPORTING_INTERVAL_MS;
+import static 
org.apache.phoenix.query.QueryServices.PHOENIX_PQS_TYPE_OF_SINK;
+
+
+public class PqsMetricsSystem {
+
+public final static String statementReadMetrics = 
"Statement-RequestReadMetrics";
+public final static String overAllReadRequestMetrics = 
"Statement-OverAllReadRequestMetrics";
+public final static String connectionWriteMetricsForMutations = 
"Connection-WriteMetricsForMutations";
+public final static String connectionReadMetricsForMutations = 
"Connection-ReadMetricsForMutations";
+
+public enum MetricType {
+global,
+request
+}
+
+protected static final Logger LOG = 
LoggerFactory.getLogger(PqsMetricsSystem.class);
+
+public Thread getGlobalMetricThread() {
+return globalMetricThread;
+}
+
+private Thread globalMetricThread = null;
+
+
+public PqsMetricsSystem(String sinkType,String fileName, Integer 
reportingInterval){
+PqsGlobalMetrics pqsGlobalMetricsToJMX = null;
+try {
+pqsGlobalMetricsToJMX = new PqsGlobalMetrics(sinkType, 
fileName,reportingInterval);
+globalMetricThread = new Thread(pqsGlobalMetricsToJMX);
+globalMetricThread.setName("globalMetricsThread");
+globalMetricThread.start();
+}catch (Exception ex){
+LOG.error(" could not instantiate the PQS Metrics System");
+if (globalMetricThread!=null) {
+try {
+globalMetricThread.interrupt();
+} catch (Exception ine) {
+LOG.error(" unable to interrupt the global metrics 
thread",ine);
+}
+}
+
+}
+ }
+
+public static PqsSink getSinkObject(String typeOfSink,String filename){
+PqsSink pqsSink;
+switch(typeOfSink.toLowerCase()) {
+case "file":
+pqsSink = new PqsFileSink(filename);
--- End diff --

Hadoop metrics uses the concept of sources and sinks and ties them together 
with a metrics system. If you use a sink, you need a source of gauges, meters, 
counters, etc. But the only interface the Phoenix driver exposes is the static 
methods in PhoenixRuntime, which provide aggregate values for various metrics. 
So it is difficult to use Hadoop metrics or Codahale.  


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Question about Git tag for 4.8.2 release and RC2

2017-05-16 Thread Lars George
Hi Josh,

Understood. According to this
https://github.com/git/git/blob/master/gitweb/gitweb.perl#L6029 the
"tag" link is only present if the tag is a... tag? Weird.

Lars

On Tue, May 16, 2017 at 5:23 PM, Josh Elser  wrote:
> LarsG,
>
> Hrm, I don't know enough about git-web to say why these entries in the
> "Tags" header don't have the "tag" anchor. Check the code itself, these tags
> exist. See you for yourself:
>
> $ git clone https://git-wip-us.apache.org/repos/asf/phoenix.git && cd
> phoenix
> $ git tag | fgrep 4.8.2
>
> And, again, the mis-naming was likely accidental. We should fix this.
>
> - Josh
>
>
> Lars George wrote:
>>
>> Hi Josh,
>>
>> I think the point LarsF was making is that there are tags missing for
>> 4.8.2, it is only the commits as you stated. See
>>
>> https://www.dropbox.com/s/xh3g008te08mwbg/Screenshot%202017-05-15%2008.32.45.png?dl=0
>> for reference of what we are seeing (which has no "tag" link to those
>> commits). No worries, it is OK to use the commit hashes, we were
>> merely wondering why they were not tagged, and also why the naming
>> changed.
>>
>> Cheers,
>> Lars
>>
>> On Tue, May 9, 2017 at 1:59 AM, Josh Elser  wrote:
>>>
>>> Hi Lars,
>>>
>>> The tags for 4.8.2 are:
>>>
>>>
>>> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=3d4205123f763fdfd875211d551da42deeb78412
>>>
>>> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=40de1f6cd5326b2c8ec2da5ad817696e050c2f3a
>>>
>>> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=e7c63d54c592bbed193841a68c0f8717831258d9
>>>
>>> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=cf6c4f7c44ac64ada9e61385c4b10ef00e06b014
>>>
>>> You can navigate to them via the git-web and the "tags" section. These
>>> get
>>> collapsed into a single "ref" (as both the official 4.8.2 release and rc2
>>> are the same thing -- rc2 was the vote that passed).
>>>
>>> Re: the missing "v" in "v4.x.y", I think this was just an accidental
>>> omission. We can probably rectify this (add a tag which follows the
>>> standard
>>> naming).
>>>
>>> Re: the missing tags on Github, this is probably just ASF infra having
>>> trouble. They control the mirroring process of all Git changes flowing to
>>> the Github mirror. Sometimes this process gets out of whack, I have no
>>> reason to believe this is anything other than that :)
>>>
>>>
>>> Lars Francke wrote:

 Hi everyone,

 I am the first to admit that I don't fully understand all of git so I'm
 hoping you guys can help me. We're trying to check out the release bits
 of
 the 4.8.2 release.

 On Github there's no tag to be found for this release.

 The links from the vote thread<


 https://lists.apache.org/thread.html/9d5725264517e4e93f610b8b117a735a75e90cf81d96154df9b2bda3@%3Cdev.phoenix.apache.org%3E>
 lead to 404s<


 https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;h=refs/tags/4.8.2-HBase-1.2-rc2
>
> .


 The "tags" overview in Git does have a commit for 4.8.2
 <https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tags> but the "tag"
 link at the end is missing.

 The "tag" also differs from previous releases in that the "v" at the
 beginning is missing.

 Compare this to 4.8.1:
 <https://lists.apache.org/thread.html/91683a340dd69cfcad15dd6ba6af1f497f317baadf13e7b22bc09e57@%3Cdev.phoenix.apache.org%3E>

 Can anyone shed some light on the issue?

 Thanks,
 Lars



[jira] [Commented] (PHOENIX-3827) Make use of HBASE-15600 to write local index mutations along with data mutations atomically

2017-05-16 Thread Mujtaba Chohan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013193#comment-16013193
 ] 

Mujtaba Chohan commented on PHOENIX-3827:
-

[~jamestaylor] There is no change in performance with PHOENIX-3827_v3.patch and 
it remains the same as described in PHOENIX-3853

> Make use of HBASE-15600 to write local index mutations along with data 
> mutations atomically
> ---
>
> Key: PHOENIX-3827
> URL: https://issues.apache.org/jira/browse/PHOENIX-3827
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3827.patch, PHOENIX-3827_v2.patch, 
> PHOENIX-3827_v3.patch
>
>
> After HBASE-15600 we can add mutations to the same table from coprocessors, so 
> we can write local index data along with the data mutations atomically. We can 
> do this in the 4.x-HBase-1.3 version.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3655) Metrics for PQS

2017-05-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013186#comment-16013186
 ] 

ASF GitHub Bot commented on PHOENIX-3655:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/242#discussion_r116866567
  
--- Diff: 
phoenix-queryserver/src/it/java/org/apache/phoenix/end2end/QueryServerBasicsIT.java
 ---
@@ -161,5 +196,45 @@ public void smokeTest() throws Exception {
 }
   }
 }
+assertTrue(" metrics file should contain global ",
+checkFileContainsMetricsData(global));
+assertTrue(" metrics file should contain overall statement level metrics",
+checkFileContainsMetricsData(overAllReadRequestMetrics));
+assertTrue(" metrics file should contain statement level read metrics",
+checkFileContainsMetricsData(requestReadMetrics));
+assertTrue(" metrics file should contain connection level write metrics",
+checkFileContainsMetricsData(writeMetricsMut));
+  }
+
+  private static boolean checkFileContainsMetricsData(String metricsType) throws Exception {
+FileReader fileReader = new FileReader(pqsSinkFile);
+try(BufferedReader br = new BufferedReader(fileReader)) {
+  String st;
+  while (( st = br.readLine() ) != null) {
+ObjectMapper mapper = new ObjectMapper();
+JsonNode actualObj = mapper.readTree(st);
+boolean contains = actualObj.get(metricsType)!= null?true:false;
+if (contains) {
+  return true; // the outputfile does contain metrics jsons
+}
+  }
+};
+return false;
   }
+
+  public static Thread getThreadByName(String threadName) {
+for (Thread t : Thread.getAllStackTraces().keySet()) {
+  if (t.getName().equals(threadName)) return t;
+}
+return null;
+  }
+
+  private static void stopGlobalThread(){
+//need to stop the global metrics thread
+Thread globalMetricsThread = getThreadByName("globalMetricsThread");
--- End diff --

This is pretty hokey -- I think you should encapsulate the state of 
stopping this thread in PqsMetricsSystem
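A minimal sketch of the encapsulation suggested above: the metrics system owns its worker thread's lifecycle and exposes a stop() method, so tests never have to hunt the thread down by name. The class and method names here (MetricsSystem, stop, isStopped) are illustrative assumptions, not the API of the actual patch.

```java
public class MetricsSystemStopDemo {
    // Illustrative sketch only: names below are assumptions, not the patch's API.
    static class MetricsSystem {
        private final Thread worker;
        private volatile boolean running = true;

        MetricsSystem() {
            // stands in for the global metrics reporter loop
            worker = new Thread(() -> {
                while (running) {
                    try { Thread.sleep(10); } catch (InterruptedException e) { return; }
                }
            }, "globalMetricsThread");
            worker.setDaemon(true);
            worker.start();
        }

        void stop() {
            running = false;
            worker.interrupt();
            try { worker.join(2000); } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        boolean isStopped() { return !worker.isAlive(); }
    }

    public static void main(String[] args) {
        MetricsSystem ms = new MetricsSystem();
        ms.stop(); // callers never touch the Thread directly
        System.out.println(ms.isStopped());
    }
}
```

With this, the test teardown becomes a single metricsSystem.stop() call instead of Thread.getAllStackTraces() lookups.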


> Metrics for PQS
> ---
>
> Key: PHOENIX-3655
> URL: https://issues.apache.org/jira/browse/PHOENIX-3655
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.8.0
> Environment: Linux 3.13.0-107-generic kernel, v4.9.0-HBase-0.98
>Reporter: Rahul Shrivastava
>Assignee: Rahul Shrivastava
> Fix For: 4.9.0
>
> Attachments: MetricsforPhoenixQueryServerPQS.pdf
>
>   Original Estimate: 240h
>  Remaining Estimate: 240h
>
> Phoenix Query Server (PQS) runs as a separate process from its thin client. 
> Metrics collection is currently done by PhoenixRuntime.java, i.e. at the 
> Phoenix driver level. We need the following:
> 1. For every JDBC statement/prepared statement run by PQS, the capability to 
> collect metrics at the PQS level and push the data to an external sink, 
> i.e. a file, JMX, or other external custom sources. 
> 2. In addition, global metrics could be collected periodically and pushed to 
> the sink. 
> 3. PQS can be configured via hbase-site.xml to turn on metrics collection and 
> the type of collection (runtime or global). 
> 4. The sink could be configured via an interface in hbase-site.xml. 
> All metrics are defined at https://phoenix.apache.org/metrics.html





[jira] [Commented] (PHOENIX-3655) Metrics for PQS

2017-05-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013182#comment-16013182
 ] 

ASF GitHub Bot commented on PHOENIX-3655:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/242#discussion_r116869793
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/QueryServicesOptions.java 
---
@@ -272,6 +278,11 @@
 public static final String DEFAULT_IMMUTABLE_STORAGE_SCHEME = 
ImmutableStorageScheme.SINGLE_CELL_ARRAY_WITH_OFFSETS.toString();
 public static final String 
DEFAULT_MULTITENANT_IMMUTABLE_STORAGE_SCHEME = 
ImmutableStorageScheme.ONE_CELL_PER_COLUMN.toString();
 
+public static final Integer DEFAULT_PHOENIX_PQS_REPORTING_INTERVAL_MS = 1; // 10 sec
+public static final String DEFAULT_PHOENIX_PQS_TYPE_OF_SINK = "file";
+public static final boolean DEFAULT_PHOENIX_QUERY_SERVER_METRICS = true;
--- End diff --

I think this should be off by default, something one has to opt in to 
enable.




[jira] [Commented] (PHOENIX-3655) Metrics for PQS

2017-05-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013184#comment-16013184
 ] 

ASF GitHub Bot commented on PHOENIX-3655:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/242#discussion_r116866233
  
--- Diff: 
phoenix-queryserver/src/it/java/org/apache/phoenix/end2end/QueryServerBasicsIT.java
 ---
@@ -84,8 +114,11 @@ public static void afterClass() throws Exception {
   assertEquals("query server didn't exit cleanly", 0, 
AVATICA_SERVER.getQueryServer()
 .getRetCode());
 }
+//need to stop the global metrics thread
+stopGlobalThread();
--- End diff --

put this in a finally block to make sure it's stopped.
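The point of the finally suggestion, in a minimal self-contained form (stopGlobalThread and the failing assertion are stand-ins for the test code under review, not the actual patch):

```java
public class FinallyCleanupDemo {
    static boolean stopped = false;

    // stand-in for the test's stopGlobalThread() helper
    static void stopGlobalThread() { stopped = true; }

    static void teardown(boolean exitCodeWasBad) {
        try {
            // stand-in for assertEquals("query server didn't exit cleanly", ...)
            if (exitCodeWasBad) throw new AssertionError("query server didn't exit cleanly");
        } finally {
            stopGlobalThread(); // runs even when the assertion above fails
        }
    }

    public static void main(String[] args) {
        try {
            teardown(true);
        } catch (AssertionError expected) {
            // the assertion failure still propagates to JUnit...
        }
        // ...but the metrics thread was stopped regardless
        System.out.println(stopped);
    }
}
```

Without the finally block, a failed exit-code assertion would leak the metrics thread into subsequent tests.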




[jira] [Commented] (PHOENIX-3655) Metrics for PQS

2017-05-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013180#comment-16013180
 ] 

ASF GitHub Bot commented on PHOENIX-3655:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/242#discussion_r116869974
  
--- Diff: 
phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/metrics/sink/PqsFileSink.java
 ---
@@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.queryserver.metrics.sink;
+
+
+import org.apache.phoenix.queryserver.metrics.PqsConfiguration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.io.UnsupportedEncodingException;
+import java.io.FileOutputStream;
+import java.io.PrintStream;
+
+
+public class PqsFileSink extends PqsSink {
+
+private PrintStream writer;
+private static final Logger LOG = LoggerFactory.getLogger(PqsFileSink.class);
+private String filename = PqsConfiguration.getFileSinkFilename();
+
+public PqsFileSink() {
+try {
+writer = filename == null ? System.out
+: new PrintStream(new FileOutputStream(new File(filename)), true, "UTF-8");
+} catch (FileNotFoundException e) {
+LOG.error("Error creating "+ filename, e);
--- End diff --

Ping: this still needs to be addressed.

e.g. what happens if /tmp is nonexistent or non-writable?
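One way to address this, sketched with a hypothetical openSink helper (the fallback-to-stdout policy is an assumption; the real fix might prefer failing fast): never leave writer null when the file cannot be created.

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.PrintStream;
import java.io.UnsupportedEncodingException;

public class SafeFileSinkDemo {
    // Hypothetical helper: always returns a usable stream, unlike the
    // constructor above, which leaves `writer` unset when FileOutputStream throws.
    static PrintStream openSink(String filename) {
        if (filename == null) return System.out;
        try {
            return new PrintStream(new FileOutputStream(new File(filename)), true, "UTF-8");
        } catch (FileNotFoundException | UnsupportedEncodingException e) {
            // e.g. the parent directory is nonexistent or non-writable
            System.err.println("Error creating " + filename + ", falling back to stdout: " + e);
            return System.out;
        }
    }

    public static void main(String[] args) {
        // a path whose parent directory should not exist
        PrintStream sink = openSink("/nonexistent-dir-for-demo/metrics.json");
        sink.println("{\"metric\":1}"); // safe: sink fell back to stdout, not null
    }
}
```

As written in the patch, a missing or non-writable directory would log the error and then NPE on the first write; the fallback (or an explicit startup failure) makes the behavior deliberate.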




[jira] [Commented] (PHOENIX-3655) Metrics for PQS

2017-05-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013183#comment-16013183
 ] 

ASF GitHub Bot commented on PHOENIX-3655:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/242#discussion_r116870173
  
--- Diff: 
phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/metrics/PqsMetricsSystem.java
 ---
@@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.queryserver.metrics;
+
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.phoenix.query.QueryServicesOptions;
+import org.apache.phoenix.queryserver.metrics.sink.PqsFileSink;
+import org.apache.phoenix.queryserver.metrics.sink.PqsSink;
+import org.apache.phoenix.queryserver.metrics.sink.PqsSlf4jSink;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static 
org.apache.phoenix.query.QueryServices.PHOENIX_PQS_FILE_SINK_FILENAME;
+import static 
org.apache.phoenix.query.QueryServices.PHOENIX_PQS_METRIC_REPORTING_INTERVAL_MS;
+import static 
org.apache.phoenix.query.QueryServices.PHOENIX_PQS_TYPE_OF_SINK;
+
+
+public class PqsMetricsSystem {
+
+public final static String statementReadMetrics = "Statement-RequestReadMetrics";
+public final static String overAllReadRequestMetrics = "Statement-OverAllReadRequestMetrics";
+public final static String connectionWriteMetricsForMutations = "Connection-WriteMetricsForMutations";
+public final static String connectionReadMetricsForMutations = "Connection-ReadMetricsForMutations";
--- End diff --

These names leave a bit to be desired for me. Reading them, I don't really 
know what information they're going to contain. How about instead..

* QueryReadMetrics
* GlobalQueryReadMetrics
* ConnectionMutationWriteMetrics
* ConnectionMutationReadMetrics

I don't have a good understanding of what these metrics contain; it just 
seemed like you copied the names from the internal Phoenix methods. If we're 
exposing them to users, they need to be self-explanatory (or have documentation 
explaining them -- maybe we have that?).




[jira] [Commented] (PHOENIX-3655) Metrics for PQS

2017-05-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013181#comment-16013181
 ] 

ASF GitHub Bot commented on PHOENIX-3655:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/242#discussion_r116868081
  
--- Diff: 
phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/PhoenixMetaFactoryImpl.java
 ---
@@ -68,7 +73,18 @@ public Meta create(List args) {
 "0 or 1 argument expected. Received " + Arrays.toString(args.toArray()));
   }
   // TODO: what about -D configs passed in from cli? How do they get pushed down?
-  return new JdbcMeta(url, info);
+  boolean isMetricOn = conf.getBoolean(PHOENIX_QUERY_SERVER_METRICS,
+  DEFAULT_PHOENIX_QUERY_SERVER_METRICS);
+  if (isMetricOn) {
+info.put("pqs_reporting_interval", PqsMetricsSystem.getReportingInterval(conf));
--- End diff --

Why this indirection with pqs_reporting_interval to 
phoenix.query.server.metrics.report.interval.ms internally? Just use our 
configuration properties all around.




[jira] [Commented] (PHOENIX-3655) Metrics for PQS

2017-05-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013185#comment-16013185
 ] 

ASF GitHub Bot commented on PHOENIX-3655:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/242#discussion_r116865709
  
--- Diff: 
phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/metrics/PqsMetricsSystem.java
 ---
@@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.queryserver.metrics;
+
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.phoenix.query.QueryServicesOptions;
+import org.apache.phoenix.queryserver.metrics.sink.PqsFileSink;
+import org.apache.phoenix.queryserver.metrics.sink.PqsSink;
+import org.apache.phoenix.queryserver.metrics.sink.PqsSlf4jSink;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static 
org.apache.phoenix.query.QueryServices.PHOENIX_PQS_FILE_SINK_FILENAME;
+import static 
org.apache.phoenix.query.QueryServices.PHOENIX_PQS_METRIC_REPORTING_INTERVAL_MS;
+import static 
org.apache.phoenix.query.QueryServices.PHOENIX_PQS_TYPE_OF_SINK;
+
+
+public class PqsMetricsSystem {
+
+public final static String statementReadMetrics = "Statement-RequestReadMetrics";
+public final static String overAllReadRequestMetrics = "Statement-OverAllReadRequestMetrics";
+public final static String connectionWriteMetricsForMutations = "Connection-WriteMetricsForMutations";
+public final static String connectionReadMetricsForMutations = "Connection-ReadMetricsForMutations";
+
+public enum MetricType {
+global,
+request
+}
+
+protected static final Logger LOG = LoggerFactory.getLogger(PqsMetricsSystem.class);
+
+public Thread getGlobalMetricThread() {
+return globalMetricThread;
+}
+
+private Thread globalMetricThread = null;
+
+
+public PqsMetricsSystem(String sinkType, String fileName, Integer reportingInterval){
+PqsGlobalMetrics pqsGlobalMetricsToJMX = null;
+try {
+pqsGlobalMetricsToJMX = new PqsGlobalMetrics(sinkType, fileName, reportingInterval);
+globalMetricThread = new Thread(pqsGlobalMetricsToJMX);
--- End diff --

IMO, it would be much nicer to use a `ExecutorService` to manage this 
instead of the thread directly.

This is missing an UncaughtExceptionHandler. If the metrics thread dies for 
some reason, it would go unnoticed.
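A sketch of both suggestions combined, using a hypothetical runReporter wrapper: an ExecutorService owns the reporter thread, and Future.get() surfaces the task's failure instead of letting it die unnoticed. In the real system the failure would be logged or the task restarted rather than returned as a string; this just makes the failure observable.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MetricsExecutorDemo {
    // Hypothetical wrapper: submits the reporter task and reports how it ended.
    static String runReporter(Runnable task) {
        ExecutorService exec = Executors.newSingleThreadExecutor(r -> {
            Thread t = new Thread(r, "globalMetricsThread");
            t.setDaemon(true);
            return t;
        });
        try {
            exec.submit(task).get(); // rethrows the task's exception instead of losing it
            return "reporter finished";
        } catch (ExecutionException e) {
            return "reporter died: " + e.getCause().getMessage();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return "interrupted";
        } finally {
            exec.shutdownNow();
        }
    }

    public static void main(String[] args) {
        // a bare Thread would swallow this; here the failure is observable
        System.out.println(runReporter(() -> {
            throw new IllegalStateException("sink unavailable"); // simulated failure
        }));
    }
}
```

For a periodic reporter, a ScheduledExecutorService with the same failure handling would replace the hand-rolled sleep loop as well.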




[jira] [Commented] (PHOENIX-3655) Metrics for PQS

2017-05-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013187#comment-16013187
 ] 

ASF GitHub Bot commented on PHOENIX-3655:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/242#discussion_r116866907
  
--- Diff: 
phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/metrics/PqsMetricsSystem.java
 ---
@@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.queryserver.metrics;
+
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.phoenix.query.QueryServicesOptions;
+import org.apache.phoenix.queryserver.metrics.sink.PqsFileSink;
+import org.apache.phoenix.queryserver.metrics.sink.PqsSink;
+import org.apache.phoenix.queryserver.metrics.sink.PqsSlf4jSink;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static 
org.apache.phoenix.query.QueryServices.PHOENIX_PQS_FILE_SINK_FILENAME;
+import static 
org.apache.phoenix.query.QueryServices.PHOENIX_PQS_METRIC_REPORTING_INTERVAL_MS;
+import static 
org.apache.phoenix.query.QueryServices.PHOENIX_PQS_TYPE_OF_SINK;
+
+
+public class PqsMetricsSystem {
+
+public final static String statementReadMetrics = "Statement-RequestReadMetrics";
+public final static String overAllReadRequestMetrics = "Statement-OverAllReadRequestMetrics";
+public final static String connectionWriteMetricsForMutations = "Connection-WriteMetricsForMutations";
+public final static String connectionReadMetricsForMutations = "Connection-ReadMetricsForMutations";
+
+public enum MetricType {
+global,
+request
+}
+
+protected static final Logger LOG = LoggerFactory.getLogger(PqsMetricsSystem.class);
+
+public Thread getGlobalMetricThread() {
+return globalMetricThread;
+}
+
+private Thread globalMetricThread = null;
+
+
+public PqsMetricsSystem(String sinkType, String fileName, Integer reportingInterval){
+PqsGlobalMetrics pqsGlobalMetricsToJMX = null;
+try {
+pqsGlobalMetricsToJMX = new PqsGlobalMetrics(sinkType, fileName, reportingInterval);
+globalMetricThread = new Thread(pqsGlobalMetricsToJMX);
+globalMetricThread.setName("globalMetricsThread");
+globalMetricThread.start();
+}catch (Exception ex){
+LOG.error(" could not instantiate the PQS Metrics System");
+if (globalMetricThread!=null) {
+try {
+globalMetricThread.interrupt();
+} catch (Exception ine) {
+LOG.error(" unable to interrupt the global metrics thread", ine);
+}
+}
+
+}
+ }
+
+public static PqsSink getSinkObject(String typeOfSink,String filename){
+PqsSink pqsSink;
+switch(typeOfSink.toLowerCase()) {
+case "file":
+pqsSink = new PqsFileSink(filename);
--- End diff --

Looking back at this.. is there a reason why didn't you use the Hadoop 
Metrics2 Sink classes?



[GitHub] phoenix pull request #242: PQS metrics - https://issues.apache.org/jira/brow...

2017-05-16 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/242#discussion_r116866233
  
--- Diff: 
phoenix-queryserver/src/it/java/org/apache/phoenix/end2end/QueryServerBasicsIT.java
 ---
@@ -84,8 +114,11 @@ public static void afterClass() throws Exception {
   assertEquals("query server didn't exit cleanly", 0, 
AVATICA_SERVER.getQueryServer()
 .getRetCode());
 }
+//need to stop the global metrics thread
+stopGlobalThread();
--- End diff --

put this in a finally block to make sure it's stopped.
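The reviewer's suggestion can be sketched as follows. This is an illustrative stand-in, not the PR's code: the failing `stopServer()` mimics the `@AfterClass` assertion on `AVATICA_SERVER.getQueryServer().getRetCode()` failing, and shows that with a finally block the metrics thread is still stopped.

```java
public class AfterClassSketch {

    public static boolean metricsThreadStopped = false;

    // Stand-in for stopping the Avatica server; the real @AfterClass asserts
    // on the server's return code and can throw here.
    static void stopServer() {
        throw new RuntimeException("query server didn't exit cleanly");
    }

    // Stand-in for the test's stopGlobalThread() helper.
    static void stopGlobalThread() {
        metricsThreadStopped = true;
    }

    // The reviewer's point: with the cleanup in a finally block, the metrics
    // thread is stopped even when the shutdown assertion throws.
    public static void afterClass() {
        try {
            stopServer();
        } finally {
            stopGlobalThread();
        }
    }

    public static void main(String[] args) {
        try {
            afterClass();
        } catch (RuntimeException expected) {
            // the shutdown check failed, but cleanup already ran in finally
        }
        System.out.println("metrics thread stopped: " + metricsThreadStopped);
    }
}
```

Running this prints that the metrics thread was stopped despite the simulated shutdown failure.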




[GitHub] phoenix pull request #242: PQS metrics - https://issues.apache.org/jira/brow...

2017-05-16 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/242#discussion_r116869974
  
--- Diff: 
phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/metrics/sink/PqsFileSink.java
 ---
@@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.queryserver.metrics.sink;
+
+
+import org.apache.phoenix.queryserver.metrics.PqsConfiguration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.io.UnsupportedEncodingException;
+import java.io.FileOutputStream;
+import java.io.PrintStream;
+
+
+public class PqsFileSink extends PqsSink {
+
+private PrintStream writer;
+private static final Logger LOG = 
LoggerFactory.getLogger(PqsFileSink.class);
+private String filename = PqsConfiguration.getFileSinkFilename();
+
+public PqsFileSink() {
+try {
+writer = filename == null ? System.out
+: new PrintStream(new FileOutputStream(new 
File(filename)),
+true, "UTF-8");
+} catch (FileNotFoundException e) {
+LOG.error("Error creating "+ filename, e);
--- End diff --

Ping: this still needs to be addressed.

e.g. what happens if /tmp is nonexistent or non-writable?
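One way to address the question above is to degrade to stdout when the configured file cannot be opened, instead of leaving `writer` null and failing later. This is a hedged sketch mirroring the `PqsFileSink` constructor, not the committed fix; the class and method names are illustrative.

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.PrintStream;
import java.io.UnsupportedEncodingException;

public class FileSinkSketch {

    private final PrintStream writer;

    public FileSinkSketch(String filename) {
        PrintStream out;
        try {
            out = filename == null
                    ? System.out
                    : new PrintStream(new FileOutputStream(new File(filename)), true, "UTF-8");
        } catch (FileNotFoundException | UnsupportedEncodingException e) {
            // Directory missing or not writable (e.g. no /tmp): fall back to
            // stdout rather than crash later with a NullPointerException.
            out = System.out;
        }
        this.writer = out;
    }

    public boolean usesStdout() {
        return writer == System.out;
    }

    public static void main(String[] args) {
        // A path inside a nonexistent directory triggers the fallback.
        FileSinkSketch sink = new FileSinkSketch("/nonexistent-dir-xyz/metrics.out");
        System.out.println("fell back to stdout: " + sink.usesStdout());
    }
}
```

A final `writer` field also removes the possibility of later writes hitting an uninitialized stream.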




[GitHub] phoenix pull request #242: PQS metrics - https://issues.apache.org/jira/brow...

2017-05-16 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/242#discussion_r116868081
  
--- Diff: 
phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/PhoenixMetaFactoryImpl.java
 ---
@@ -68,7 +73,18 @@ public Meta create(List args) {
 "0 or 1 argument expected. Received " + 
Arrays.toString(args.toArray()));
   }
   // TODO: what about -D configs passed in from cli? How do they get 
pushed down?
-  return new JdbcMeta(url, info);
+  boolean isMetricOn = conf.getBoolean(PHOENIX_QUERY_SERVER_METRICS,
+  DEFAULT_PHOENIX_QUERY_SERVER_METRICS);
+  if (isMetricOn) {
+info.put("pqs_reporting_interval", 
PqsMetricsSystem.getReportingInterval(conf));
--- End diff --

Why this indirection with pqs_reporting_interval to 
phoenix.query.server.metrics.report.interval.ms internally? Just use our 
configuration properties all around.




[GitHub] phoenix pull request #242: PQS metrics - https://issues.apache.org/jira/brow...

2017-05-16 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/242#discussion_r116865709
  
--- Diff: 
phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/metrics/PqsMetricsSystem.java
 ---
@@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.queryserver.metrics;
+
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.phoenix.query.QueryServicesOptions;
+import org.apache.phoenix.queryserver.metrics.sink.PqsFileSink;
+import org.apache.phoenix.queryserver.metrics.sink.PqsSink;
+import org.apache.phoenix.queryserver.metrics.sink.PqsSlf4jSink;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static 
org.apache.phoenix.query.QueryServices.PHOENIX_PQS_FILE_SINK_FILENAME;
+import static 
org.apache.phoenix.query.QueryServices.PHOENIX_PQS_METRIC_REPORTING_INTERVAL_MS;
+import static 
org.apache.phoenix.query.QueryServices.PHOENIX_PQS_TYPE_OF_SINK;
+
+
+public class PqsMetricsSystem {
+
+public final static String statementReadMetrics = 
"Statement-RequestReadMetrics";
+public final static String overAllReadRequestMetrics = 
"Statement-OverAllReadRequestMetrics";
+public final static String connectionWriteMetricsForMutations = 
"Connection-WriteMetricsForMutations";
+public final static String connectionReadMetricsForMutations = 
"Connection-ReadMetricsForMutations";
+
+public enum MetricType {
+global,
+request
+}
+
+protected static final Logger LOG = 
LoggerFactory.getLogger(PqsMetricsSystem.class);
+
+public Thread getGlobalMetricThread() {
+return globalMetricThread;
+}
+
+private Thread globalMetricThread = null;
+
+
+public PqsMetricsSystem(String sinkType,String fileName, Integer 
reportingInterval){
+PqsGlobalMetrics pqsGlobalMetricsToJMX = null;
+try {
+pqsGlobalMetricsToJMX = new PqsGlobalMetrics(sinkType, 
fileName,reportingInterval);
+globalMetricThread = new Thread(pqsGlobalMetricsToJMX);
--- End diff --

IMO, it would be much nicer to use an `ExecutorService` to manage this 
instead of the thread directly.

This is missing an UncaughtExceptionHandler. If the metrics thread dies for 
some reason, it would go unnoticed.
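The two suggestions can be combined: a single-thread `ExecutorService` whose `ThreadFactory` installs an `UncaughtExceptionHandler`, so a dying metrics task gets logged instead of vanishing. The sketch below is illustrative (names like `newMetricsExecutor` are not from the PR) and assumes Java 8+.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class MetricsExecutorSketch {

    public static final AtomicReference<Throwable> LAST_FAILURE = new AtomicReference<>();
    static final CountDownLatch HANDLED = new CountDownLatch(1);

    static ExecutorService newMetricsExecutor() {
        return Executors.newSingleThreadExecutor(runnable -> {
            Thread t = new Thread(runnable, "globalMetricsThread");
            t.setDaemon(true);
            // Invoked when a task submitted via execute() throws; real code
            // would LOG.error(...) instead of stashing the Throwable.
            t.setUncaughtExceptionHandler((thread, error) -> {
                LAST_FAILURE.set(error);
                HANDLED.countDown();
            });
            return t;
        });
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService exec = newMetricsExecutor();
        // Simulate the metrics loop dying unexpectedly.
        exec.execute(() -> { throw new IllegalStateException("metrics loop died"); });
        HANDLED.await(5, TimeUnit.SECONDS);
        exec.shutdownNow();
        System.out.println("caught: " + LAST_FAILURE.get().getMessage());
    }
}
```

Note that the handler fires for tasks started with `execute()`; with `submit()` the exception would instead be captured in the returned `Future`, which is another reason to manage the lifecycle through the executor.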




[GitHub] phoenix pull request #242: PQS metrics - https://issues.apache.org/jira/brow...

2017-05-16 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/242#discussion_r116870173
  
--- Diff: 
phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/metrics/PqsMetricsSystem.java
 ---
@@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.queryserver.metrics;
+
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.phoenix.query.QueryServicesOptions;
+import org.apache.phoenix.queryserver.metrics.sink.PqsFileSink;
+import org.apache.phoenix.queryserver.metrics.sink.PqsSink;
+import org.apache.phoenix.queryserver.metrics.sink.PqsSlf4jSink;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static 
org.apache.phoenix.query.QueryServices.PHOENIX_PQS_FILE_SINK_FILENAME;
+import static 
org.apache.phoenix.query.QueryServices.PHOENIX_PQS_METRIC_REPORTING_INTERVAL_MS;
+import static 
org.apache.phoenix.query.QueryServices.PHOENIX_PQS_TYPE_OF_SINK;
+
+
+public class PqsMetricsSystem {
+
+public final static String statementReadMetrics = 
"Statement-RequestReadMetrics";
+public final static String overAllReadRequestMetrics = 
"Statement-OverAllReadRequestMetrics";
+public final static String connectionWriteMetricsForMutations = 
"Connection-WriteMetricsForMutations";
+public final static String connectionReadMetricsForMutations = 
"Connection-ReadMetricsForMutations";
--- End diff --

These names leave a bit to be desired for me. Reading them, I don't really 
know what information they're going to contain. How about instead..

* QueryReadMetrics
* GlobalQueryReadMetrics
* ConnectionMutationWriteMetrics
* ConnectionMutationReadMetrics

I don't have a good understanding of what these metrics contain; it just 
seemed like you copied the names from the internal Phoenix methods. If we're 
exposing them to users, they need to be self-explanatory (or have documentation 
explaining them -- maybe we have that?).




[GitHub] phoenix pull request #242: PQS metrics - https://issues.apache.org/jira/brow...

2017-05-16 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/242#discussion_r116869793
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/QueryServicesOptions.java 
---
@@ -272,6 +278,11 @@
 public static final String DEFAULT_IMMUTABLE_STORAGE_SCHEME = 
ImmutableStorageScheme.SINGLE_CELL_ARRAY_WITH_OFFSETS.toString();
 public static final String 
DEFAULT_MULTITENANT_IMMUTABLE_STORAGE_SCHEME = 
ImmutableStorageScheme.ONE_CELL_PER_COLUMN.toString();
 
+public static final Integer DEFAULT_PHOENIX_PQS_REPORTING_INTERVAL_MS 
= 1; // 10 sec
+public static final String DEFAULT_PHOENIX_PQS_TYPE_OF_SINK = "file";
+public static final boolean DEFAULT_PHOENIX_QUERY_SERVER_METRICS = 
true;
--- End diff --

I think this should be off by default, something users have to opt in to 
enable.




[GitHub] phoenix pull request #242: PQS metrics - https://issues.apache.org/jira/brow...

2017-05-16 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/242#discussion_r116866567
  
--- Diff: 
phoenix-queryserver/src/it/java/org/apache/phoenix/end2end/QueryServerBasicsIT.java
 ---
@@ -161,5 +196,45 @@ public void smokeTest() throws Exception {
 }
   }
 }
+assertTrue(" metrics file should contain global ",
+checkFileContainsMetricsData(global));
+assertTrue(" metrics file should contain overall statement level 
metrics",
+checkFileContainsMetricsData(overAllReadRequestMetrics));
+assertTrue(" metrics file should contain statement level read metrics",
+checkFileContainsMetricsData(requestReadMetrics));
+assertTrue(" metrics file should contain connection level write 
metrics",
+checkFileContainsMetricsData(writeMetricsMut));
+  }
+
+  private static boolean checkFileContainsMetricsData(String metricsType) 
throws Exception {
+FileReader fileReader = new FileReader(pqsSinkFile);
+try(BufferedReader br = new BufferedReader(fileReader)) {
+  String st;
+  while (( st = br.readLine() ) != null) {
+ObjectMapper mapper = new ObjectMapper();
+JsonNode actualObj = mapper.readTree(st);
+boolean contains = actualObj.get(metricsType)!= null?true:false;
+if (contains) {
+  return true; // the outputfile does contain metrics jsons
+}
+  }
+};
+return false;
   }
+
+  public static Thread getThreadByName(String threadName) {
+for (Thread t : Thread.getAllStackTraces().keySet()) {
+  if (t.getName().equals(threadName)) return t;
+}
+return null;
+  }
+
+  private static void stopGlobalThread(){
+//need to stop the global metrics thread
+Thread globalMetricsThread = getThreadByName("globalMetricsThread");
--- End diff --

This is pretty hokey -- I think you should encapsulate the state of 
stopping this thread in PqsMetricsSystem.
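Encapsulating the lifecycle inside the metrics system, as suggested, might look like the sketch below: the test would call `stop()` instead of hunting for the thread by name via `Thread.getAllStackTraces()`. Method and field names here are illustrative, not the PR's API.

```java
public class MetricsLifecycleSketch {

    private Thread globalMetricThread;

    public synchronized void start(Runnable metricsLoop) {
        globalMetricThread = new Thread(metricsLoop, "globalMetricsThread");
        globalMetricThread.setDaemon(true);
        globalMetricThread.start();
    }

    // Tests (and server shutdown) call this instead of scanning all threads.
    public synchronized void stop() throws InterruptedException {
        if (globalMetricThread != null) {
            globalMetricThread.interrupt();   // interrupt is the stop signal
            globalMetricThread.join(5000);    // bounded wait for clean exit
            globalMetricThread = null;
        }
    }

    public synchronized boolean isRunning() {
        return globalMetricThread != null && globalMetricThread.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        MetricsLifecycleSketch metrics = new MetricsLifecycleSketch();
        metrics.start(() -> {
            try {
                Thread.sleep(60_000);         // stand-in for the reporting loop
            } catch (InterruptedException e) {
                // treat interrupt as the shutdown signal and exit
            }
        });
        metrics.stop();
        System.out.println("running after stop: " + metrics.isRunning());
    }
}
```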




[jira] [Commented] (PHOENIX-3827) Make use of HBASE-15600 to write local index mutations along with data mutations atomically

2017-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013177#comment-16013177
 ] 

Hadoop QA commented on PHOENIX-3827:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12868401/PHOENIX-3827_v3.patch
  against master branch at commit 442d8eb29f1f73fd104a323d9aa77f3a4ccfd8d1.
  ATTACHMENT ID: 12868401

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
46 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+public void write(Collection> toWrite) throws 
IndexWriteException, IOException {
+public void write(Multimap toWrite) 
throws SingleIndexWriteFailureException {
+public void write(Multimap toWrite) 
throws MultiIndexWriteFailureException {

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/870//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/870//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/870//console

This message is automatically generated.

> Make use of HBASE-15600 to write local index mutations along with data 
> mutations atomically
> ---
>
> Key: PHOENIX-3827
> URL: https://issues.apache.org/jira/browse/PHOENIX-3827
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3827.patch, PHOENIX-3827_v2.patch, 
> PHOENIX-3827_v3.patch
>
>
> After HBASE-15600 we can add mutations of the same table from coprocessors so 
> we can write local index data along with data mutations so it will be atomic. 
> This we can do in 4.x-HBase-1.3 version.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] phoenix pull request #236: Loadbalancer

2017-05-16 Thread rahulsIOT
Github user rahulsIOT commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/236#discussion_r116869437
  
--- Diff: 
phoenix-load-balancer/src/main/java/org/apache/phoenix/loadbalancer/service/PhoenixQueryServerNode.java
 ---
@@ -0,0 +1,79 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.loadbalancer.service;
+
+import org.codehaus.jackson.annotate.JsonProperty;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.codehaus.jackson.map.annotate.JsonRootName;
+
+import java.io.IOException;
+import java.nio.charset.StandardCharsets;
+
+/**
+ * Immutable class for defining the server location for
+ * Phoenix query instance. This data is stored as Node data
+ * in zookeeper
+ */
+public class PhoenixQueryServerNode {
+
+public void setHost(String host) {
+this.host = host;
+}
+
+public void setPort(String port) {
+this.port = port;
+}
+
+private String host;
+private String port;
--- End diff --

Josh, the benefit of this could be that we can pass more information, such 
as a measure of load on PQS (CPU, load averages, etc.), to the load balancer. 
The load balancer can then make a more intelligent decision on where to route 
the next request. But I think within this PhoenixQueryServerNode.java class, 
it is a good idea to use the HostAndPort class from Guava. 
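As background to the immutability point in this thread (the class javadoc calls `PhoenixQueryServerNode` immutable, yet it has setters), a truly immutable variant uses final fields and constructor-only initialization; Guava's `HostAndPort`, as suggested, could replace the two fields outright. The sketch below avoids the Guava dependency and is illustrative only.

```java
public final class QueryServerNodeSketch {

    private final String host;
    private final int port;

    public QueryServerNodeSketch(String host, int port) {
        this.host = host;
        this.port = port;
    }

    public String getHost() { return host; }
    public int getPort() { return port; }

    // A simple "host:port" rendering, similar to what Guava's
    // HostAndPort.toString() produces.
    @Override
    public String toString() { return host + ":" + port; }

    public static void main(String[] args) {
        QueryServerNodeSketch node = new QueryServerNodeSketch("pqs1.example.com", 8765);
        System.out.println(node);
    }
}
```

With no setters and final fields, an instance can be safely shared between the load balancer's ZooKeeper watcher and request-routing threads without synchronization.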




[GitHub] phoenix pull request #236: Loadbalancer

2017-05-16 Thread rahulsIOT
Github user rahulsIOT commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/236#discussion_r116865790
  
--- Diff: phoenix-load-balancer/pom.xml ---
@@ -0,0 +1,58 @@
+
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+  <parent>
+    <groupId>org.apache.phoenix</groupId>
+    <artifactId>phoenix</artifactId>
+    <version>4.10.0-HBase-1.2-SNAPSHOT</version>
+  </parent>
+  <artifactId>phoenix-queryserver-loadbalancer</artifactId>
+  <name>Phoenix Load Balancer</name>
+  <description>A Load balancer which routes calls to Phoenix Query Server</description>
+
+  
+${project.basedir}/..
+org.apache.phoenix.shaded
--- End diff --

done. 




[GitHub] phoenix pull request #236: Loadbalancer

2017-05-16 Thread rahulsIOT
Github user rahulsIOT commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/236#discussion_r116863959
  
--- Diff: phoenix-load-balancer/pom.xml ---
@@ -0,0 +1,58 @@
+
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+  <parent>
+    <groupId>org.apache.phoenix</groupId>
+    <artifactId>phoenix</artifactId>
+    <version>4.10.0-HBase-1.2-SNAPSHOT</version>
--- End diff --

ok. 




[jira] [Commented] (PHOENIX-3655) Metrics for PQS

2017-05-16 Thread Rahul Shrivastava (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013143#comment-16013143
 ] 

Rahul Shrivastava commented on PHOENIX-3655:


[~elserj] [~jamestaylor] 

I have made all the request code changes. Please review. 

thanks
Rahul


> Metrics for PQS
> ---
>
> Key: PHOENIX-3655
> URL: https://issues.apache.org/jira/browse/PHOENIX-3655
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.8.0
> Environment: Linux 3.13.0-107-generic kernel, v4.9.0-HBase-0.98
>Reporter: Rahul Shrivastava
>Assignee: Rahul Shrivastava
> Fix For: 4.9.0
>
> Attachments: MetricsforPhoenixQueryServerPQS.pdf
>
>   Original Estimate: 240h
>  Remaining Estimate: 240h
>
> Phoenix Query Server runs a separate process compared to its thin client. 
> Metrics collection is currently done by PhoenixRuntime.java i.e. at Phoenix 
> driver level. We need the following
> 1. For every JDBC statement/prepared statement run by PQS, we need the 
> capability to collect metrics at the PQS level and push the data to an 
> external sink, i.e. file, JMX, or other external custom sources. 
> 2. Besides this, global metrics could be periodically collected and pushed to 
> the sink. 
> 3. PQS can be configured to turn on metrics collection and the type of 
> collection (runtime or global) via hbase-site.xml. 
> 4. The sink could be configured via an interface in hbase-site.xml. 
> All metrics definition https://phoenix.apache.org/metrics.html





[jira] [Commented] (PHOENIX-3853) Local Index - Writes to local index are twice as slow as global and get exponentially slower with PHOENIX-3827_v2 patch

2017-05-16 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013051#comment-16013051
 ] 

James Taylor commented on PHOENIX-3853:
---

Let me work up a new patch for PHOENIX-3827. It looks to me like the 
allowLocalUpdates boolean would cause the ParallelWriterIndexCommitter to write 
the local indexes still (in addition to them being written through the new 
mechanism). That would explain the 2x write time.

> Local Index - Writes to local index are twice as slow as global and get 
> exponentially slower with PHOENIX-3827_v2 patch
> ---
>
> Key: PHOENIX-3853
> URL: https://issues.apache.org/jira/browse/PHOENIX-3853
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: HBase 1.3.1 4GB heap in local mode
>Reporter: Mujtaba Chohan
> Fix For: 4.11.0
>
> Attachments: batch_time.png
>
>
> HBase 1.3.1 with head of Phoenix 4.x with/without PHOENIX-3827 v2 patch 
> applied. This is with immutable non-covered local/global index on a single 
> column with varying batch size when writing data to base table plus index.
> !batch_time.png!
> | Batch Size | Local Index with PHOENIX-3827_v2 patch (sec)| Local Index 
> without PHOENIX-3827_v2.patch (sec)| Global (sec)| 
> | 100 | 0.02 | 0.03 | 0.013 | 
> | 1000 | 0.3 | 0.3 | 0.13 | 
> | 1 | 4.3 | 2.6 | 1.3 | 
> | 12500 | 8.1 | 3 | 1.6 | 
> | 15000 | 13.3 | 3.1 | 1.9 | 
> Schema and index
> {noformat}
> CREATE TABLE IF NOT EXISTS T (OID CHAR(15) NOT NULL, PKP CHAR(3) NOT NULL, 
> PIH CHAR(15) NOT NULL, FD DATE NOT NULL, SB CHAR(15) NOT NULL, BJ CHAR(15), 
> JR VARCHAR, FIELD VARCHAR, YM VARCHAR, WN VARCHAR, LG VARCHAR, XHJ VARCHAR, 
> HF VARCHAR, GA VARCHAR, MX VARCHAR, NZ DECIMAL, JV DECIMAL, AG DATE, KV DATE, 
> JK VARCHAR, DK VARCHAR, EU DATE, OE VARCHAR, DV INTEGER, IK VARCHAR 
> CONSTRAINT PK PRIMARY KEY ( OID, PKP, PIH, FD DESC, SB )) 
> VERSIONS=1,IMMUTABLE_ROWS=true
> CREATE INDEX IF NOT EXISTS IDXT ON T (JV)
> {noformat}
> Data CSV
> https://expirebox.com/download/1cea73af1831b5193f0539d6e3442292.html
> [~rajeshbabu], [~lhofhansl], [~jamestaylor]





[jira] [Commented] (PHOENIX-3808) Implement chaos tests using HBase's hbase-it facility

2017-05-16 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16012996#comment-16012996
 ] 

James Taylor commented on PHOENIX-3808:
---

Have you had a chance to confirm that these chaos tests run locally, 
[~apurtell]? Would you have some bandwidth to review this, [~samarthjain]?

> Implement chaos tests using HBase's hbase-it facility
> -
>
> Key: PHOENIX-3808
> URL: https://issues.apache.org/jira/browse/PHOENIX-3808
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>
> Implement chaos tests using HBase's hbase-it facility. Especially, 
> correctness testing with an active server killing monkey policy. 





[jira] [Commented] (PHOENIX-3572) Support FETCH NEXT| n ROWS from Cursor

2017-05-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16012988#comment-16012988
 ] 

ASF GitHub Bot commented on PHOENIX-3572:
-

Github user bijugs commented on the issue:

https://github.com/apache/phoenix/pull/229
  
Attached patch for the changes to the JIRA ticket.


> Support FETCH NEXT| n ROWS from Cursor
> --
>
> Key: PHOENIX-3572
> URL: https://issues.apache.org/jira/browse/PHOENIX-3572
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Biju Nair
>Assignee: Biju Nair
> Attachments: PHOENIX-3572.patch
>
>
> Implement required changes to support 
> - {{DECLARE}} and {{OPEN}} a cursor
> - query {{FETCH NEXT | n ROWS}} from the cursor
> - {{CLOSE}} the cursor
> Based on the feedback in [PR 
> #192|https://github.com/apache/phoenix/pull/192], implement the changes using 
> {{ResultSet}}.





[GitHub] phoenix issue #229: PHOENIX-3572 Support FETCH NEXT|n ROWS query on cursor

2017-05-16 Thread bijugs
Github user bijugs commented on the issue:

https://github.com/apache/phoenix/pull/229
  
Attached patch for the changes to the JIRA ticket.




[jira] [Updated] (PHOENIX-3572) Support FETCH NEXT| n ROWS from Cursor

2017-05-16 Thread Biju Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated PHOENIX-3572:
---
Attachment: PHOENIX-3572.patch

Patch for the changes attached.

> Support FETCH NEXT| n ROWS from Cursor
> --
>
> Key: PHOENIX-3572
> URL: https://issues.apache.org/jira/browse/PHOENIX-3572
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Biju Nair
>Assignee: Biju Nair
> Attachments: PHOENIX-3572.patch
>
>
> Implement required changes to support 
> - {{DECLARE}} and {{OPEN}} a cursor
> - query {{FETCH NEXT | n ROWS}} from the cursor
> - {{CLOSE}} the cursor
> Based on the feedback in [PR 
> #192|https://github.com/apache/phoenix/pull/192], implement the changes using 
> {{ResultSet}}.





[jira] [Commented] (PHOENIX-3822) Surface byte and row estimates in a machine readable way when doing EXPLAIN PLAN

2017-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16012975#comment-16012975
 ] 

Hadoop QA commented on PHOENIX-3822:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12868382/PHOENIX-3822.patch
  against master branch at commit 442d8eb29f1f73fd104a323d9aa77f3a4ccfd8d1.
  ATTACHMENT ID: 12868382

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 13 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
47 warning messages.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+   Pair info = 
PhoenixRuntime.getEstimatedRowsBytesScannedForQuery(conn, sql, 
Lists.newArrayList());
+   String sql = "SELECT ta.c1.a, ta.c2.b FROM " + tableA + " ta 
JOIN " + tableB + " tb ON ta.k = tb.k" ;
+   Pair info = 
PhoenixRuntime.getEstimatedRowsBytesScannedForQuery(conn, sql, 
Lists.newArrayList());
+   String sql = "SELECT /*+ USE_SORT_MERGE_JOIN */ ta.c1.a, 
ta.c2.b FROM " + tableA + " ta JOIN " + tableB + " tb ON ta.k = tb.k" ;
+   Pair info = 
PhoenixRuntime.getEstimatedRowsBytesScannedForQuery(conn, sql, 
Lists.newArrayList());
+ParallelIteratorFactory parallelIteratorFactory, GroupBy groupBy, 
Expression having) throws SQLException {
+   Pair p = 
ScanUtil.getEstimatedRowsAndBytesToScan(table.getTable(), context, 
table.getTable().getPhysicalName().getBytes());
+QueryPlan plan, HashJoinInfo joinInfo, SubPlan[] subPlans, boolean 
recompileWhereClause, List dependencies) throws SQLException {
+estimatedBytes = estimatedBytes == null ? 
subPlan.getInnerPlan().getEstimatedBytesToScan() : estimatedBytes + 
subPlan.getInnerPlan().getEstimatedBytesToScan();
+estimatedRows = estimatedRows == null ? 
subPlan.getInnerPlan().getEstimatedRowsToScan() : estimatedRows + 
subPlan.getInnerPlan().getEstimatedRowsToScan();

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ViewIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/869//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/869//artifact/patchprocess/patchReleaseAuditWarnings.txt
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/869//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/869//console

This message is automatically generated.

> Surface byte and row estimates in a machine readable way when doing EXPLAIN 
> PLAN
> 
>
> Key: PHOENIX-3822
> URL: https://issues.apache.org/jira/browse/PHOENIX-3822
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-3822.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3822) Surface byte and row estimates in a machine readable way when doing EXPLAIN PLAN

2017-05-16 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16012964#comment-16012964
 ] 

James Taylor commented on PHOENIX-3822:
---

Thanks for the patch, [~samarthjain]. Tests look good. We shouldn't copy/paste 
that gnarly 100 lines of code into ScanUtil.getEstimatedRowsAndBytesToScan(), 
though. Instead, I think we can refactor it into ScanUtil and have the new 
method return an instance of a new class such as ScanInfo to encapsulate the 
estimates plus the List information. If that's problematic, we 
should just leave the byte/row estimate info in BaseResultIterators and access 
it from QueryPlan and AggregatePlan like we currently do.
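If the refactoring goes the ScanInfo route, the value class could look something like the sketch below (class and field names are hypothetical, not Phoenix's actual API; the key-range list stands in for the elided List type mentioned in the comment above):

```java
import java.util.Collections;
import java.util.List;

// Hypothetical value class encapsulating the byte/row estimates plus the
// key ranges computed while preparing a scan.
final class ScanInfo {
    private final Long estimatedRows;     // null when no stats are available
    private final Long estimatedBytes;    // null when no stats are available
    private final List<byte[]> keyRanges; // stand-in for the elided List type

    ScanInfo(Long estimatedRows, Long estimatedBytes, List<byte[]> keyRanges) {
        this.estimatedRows = estimatedRows;
        this.estimatedBytes = estimatedBytes;
        this.keyRanges = keyRanges;
    }

    Long getEstimatedRows()     { return estimatedRows; }
    Long getEstimatedBytes()    { return estimatedBytes; }
    List<byte[]> getKeyRanges() { return keyRanges; }

    public static void main(String[] args) {
        ScanInfo info = new ScanInfo(100L, 4096L, Collections.<byte[]>emptyList());
        System.out.println(info.getEstimatedRows() + " rows, "
                + info.getEstimatedBytes() + " bytes");
    }
}
```

The refactored ScanUtil method would then return a single ScanInfo instance instead of callers reaching into BaseResultIterators for each piece separately.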

> Surface byte and row estimates in a machine readable way when doing EXPLAIN 
> PLAN
> 
>
> Key: PHOENIX-3822
> URL: https://issues.apache.org/jira/browse/PHOENIX-3822
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-3822.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3822) Surface byte and row estimates in a machine readable way when doing EXPLAIN PLAN

2017-05-16 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-3822:
--
Attachment: (was: PHOENIX-3822.patch)

> Surface byte and row estimates in a machine readable way when doing EXPLAIN 
> PLAN
> 
>
> Key: PHOENIX-3822
> URL: https://issues.apache.org/jira/browse/PHOENIX-3822
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-3822.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3822) Surface byte and row estimates in a machine readable way when doing EXPLAIN PLAN

2017-05-16 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-3822:
--
Attachment: PHOENIX-3822.patch

> Surface byte and row estimates in a machine readable way when doing EXPLAIN 
> PLAN
> 
>
> Key: PHOENIX-3822
> URL: https://issues.apache.org/jira/browse/PHOENIX-3822
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-3822.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3734) Refactor Phoenix to use TAL instead of direct calls to Tephra

2017-05-16 Thread Ohad Shacham (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16012899#comment-16012899
 ] 

Ohad Shacham commented on PHOENIX-3734:
---

Thanks [~giacomotay...@gmail.com].
I am currently using a Mac. I will try a Linux box and the other options you 
provided to see if they help.


> Refactor Phoenix to use TAL instead of direct calls to Tephra
> -
>
> Key: PHOENIX-3734
> URL: https://issues.apache.org/jira/browse/PHOENIX-3734
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ohad Shacham
>Assignee: Ohad Shacham
> Attachments: PHOENIX-3734.v3.patch
>
>
> Refactor Phoenix to use the new transaction abstraction layer instead of 
> direct calls to Tephra. Once this task is committed, Phoenix will continue 
> working with Tephra but will have the option of quickly integrating new 
> transaction processing engines.
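A minimal sketch of what such an abstraction layer could look like (interface and method names are illustrative only, not Phoenix's actual TAL API):

```java
// Illustrative sketch of a transaction abstraction layer (TAL); names are
// hypothetical, not Phoenix's actual API.
interface TransactionContext {
    void begin();
    void commit();
    void abort();
    boolean isTransactionRunning();
}

// Trivial in-memory implementation standing in for a Tephra-backed one; a new
// transaction processing engine would plug in by providing another
// implementation of the same interface.
class SimpleTransactionContext implements TransactionContext {
    private boolean running;

    public void begin()  { running = true; }
    public void commit() { running = false; }
    public void abort()  { running = false; }
    public boolean isTransactionRunning() { return running; }

    public static void main(String[] args) {
        TransactionContext tx = new SimpleTransactionContext();
        tx.begin();
        System.out.println("running: " + tx.isTransactionRunning()); // running: true
        tx.commit();
        System.out.println("running: " + tx.isTransactionRunning()); // running: false
    }
}
```

With this indirection in place, the Phoenix code paths call only the interface, so swapping Tephra for another engine becomes a matter of wiring in a different implementation.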



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3839) Prevent large aggregate queries from timing out

2017-05-16 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16012879#comment-16012879
 ] 

James Taylor commented on PHOENIX-3839:
---

Arghh - so it worked without HBASE-18000, but with it it fails?

> Prevent large aggregate queries from timing out
> ---
>
> Key: PHOENIX-3839
> URL: https://issues.apache.org/jira/browse/PHOENIX-3839
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
>
> Large aggregate queries timeout in Phoenix, even with our renew lease code in 
> place. The only workaround is to increase the RPC timeout to be really high 
> which is not such a good idea. It's quite possible HBASE-18000 is the root 
> cause. Would it be possible to test that theory on master (i.e. with HBase 
> 1.3 plus the patch for HBASE-18000), [~samarthjain] & [~mujtabachohan]?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3734) Refactor Phoenix to use TAL instead of direct calls to Tephra

2017-05-16 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16012871#comment-16012871
 ] 

James Taylor commented on PHOENIX-3734:
---

Sorry for the trouble, [~ohads]. Our unit tests seem to have become flakier 
lately. Are you running on a Mac or a Linux box? The Mac has always seemed 
flakier. Two contributing factors are the ASF infrastructure for the build bots 
seem overloaded and/or flaky and surefire doesn't give us any information when 
a test suite hangs or maybe crashes. You could try decreasing the 
parallelization to see if that helps: {{mvn verify -DnumForkedIT=1 
-DnumForkedUT=1}}. You can also try bumping up the memory and perm size: 
{{-Xmx3000m -XX:MaxPermSize=256m}}.

If you can do a local test run with similar results before and after your 
patch on each of the 4.x lines, I think it'd be ok to commit your patch.


> Refactor Phoenix to use TAL instead of direct calls to Tephra
> -
>
> Key: PHOENIX-3734
> URL: https://issues.apache.org/jira/browse/PHOENIX-3734
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ohad Shacham
>Assignee: Ohad Shacham
> Attachments: PHOENIX-3734.v3.patch
>
>
> Refactor Phoenix to use the new transaction abstraction layer instead of 
> direct calls to Tephra. Once this task is committed, Phoenix will continue 
> working with Tephra but will have the option of quickly integrating new 
> transaction processing engines.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3839) Prevent large aggregate queries from timing out

2017-05-16 Thread Mujtaba Chohan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16012864#comment-16012864
 ] 

Mujtaba Chohan commented on PHOENIX-3839:
-

With HBASE-18000/HBase 1.3.1

{noformat}
INFO  [regionserver/localhost/127.0.0.1:0.leaseChecker] 
regionserver.RSRpcServices: Scanner 930 lease expired on region 
T,\x7F\xFF\xFE\xB7\x8C\xD4\x09\xCF017404812WfC   
,1494955820050.44900eecad4214ee0078b490517d5c13.

Failed after attempts=36, exceptions:
null, java.net.SocketTimeoutException: callTimeout=6, callDuration=60119: 
Call to localhost/127.0.0.1:60579 failed on local exception: 
org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=2790, waitTime=60001, 
operationTimeout=6 expired. row '' on table 'T' at 
region=T,,1494955820050.6e64409f6db8c5fce401b4216dcb35fc., 
hostname=localhost,60579,1494955479513, seqNum=578

at 
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:116)
at 
org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:146)
at 
org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
at 
org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)
at 
org.apache.phoenix.iterate.LookAheadResultIterator.peek(LookAheadResultIterator.java:73)
at 
org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:121)
at 
org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:106)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}

While building a global index.

[~samarthjain]

> Prevent large aggregate queries from timing out
> ---
>
> Key: PHOENIX-3839
> URL: https://issues.apache.org/jira/browse/PHOENIX-3839
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
>
> Large aggregate queries timeout in Phoenix, even with our renew lease code in 
> place. The only workaround is to increase the RPC timeout to be really high 
> which is not such a good idea. It's quite possible HBASE-18000 is the root 
> cause. Would it be possible to test that theory on master (i.e. with HBase 
> 1.3 plus the patch for HBASE-18000), [~samarthjain] & [~mujtabachohan]?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3572) Support FETCH NEXT| n ROWS from Cursor

2017-05-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16012652#comment-16012652
 ] 

ASF GitHub Bot commented on PHOENIX-3572:
-

Github user ankitsinghal commented on the issue:

https://github.com/apache/phoenix/pull/229
  
Looks good to me as well. Thanks @bijugs for working on this. Let me 
commit this for you.


> Support FETCH NEXT| n ROWS from Cursor
> --
>
> Key: PHOENIX-3572
> URL: https://issues.apache.org/jira/browse/PHOENIX-3572
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Biju Nair
>Assignee: Biju Nair
>
> Implement required changes to support 
> - {{DECLARE}} and {{OPEN}} a cursor
> - query {{FETCH NEXT | n ROWS}} from the cursor
> - {{CLOSE}} the cursor
> Based on the feedback in [PR 
> #192|https://github.com/apache/phoenix/pull/192], implement the changes using 
> {{ResultSet}}.
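The cursor lifecycle described above would look roughly like this in SQL (syntax shown only to illustrate the feature; the exact Phoenix grammar may differ, and the table and cursor names are hypothetical):

```sql
DECLARE my_cursor CURSOR FOR SELECT id, name FROM my_table;
OPEN my_cursor;
FETCH NEXT 10 ROWS FROM my_cursor;  -- or FETCH NEXT FROM my_cursor for one row
CLOSE my_cursor;
```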



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] phoenix issue #229: PHOENIX-3572 Support FETCH NEXT|n ROWS query on cursor

2017-05-16 Thread ankitsinghal
Github user ankitsinghal commented on the issue:

https://github.com/apache/phoenix/pull/229
  
Looks good to me as well. Thanks @bijugs for working on this. Let me 
commit this for you.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Question about Git tag for 4.8.2 release and RC2

2017-05-16 Thread Josh Elser

LarsG,

Hrm, I don't know enough about git-web to say why these entries in the 
"Tags" header don't have the "tag" anchor. Check the repo itself; these 
tags exist. See for yourself:


$ git clone https://git-wip-us.apache.org/repos/asf/phoenix.git && cd 
phoenix

$ git tag | fgrep 4.8.2

And, again, the mis-naming was likely accidental. We should fix this.

- Josh

Lars George wrote:

Hi Josh,

I think the point LarsF was making is that there are tags missing for
4.8.2, it is only the commits as you stated. See
https://www.dropbox.com/s/xh3g008te08mwbg/Screenshot%202017-05-15%2008.32.45.png?dl=0
for reference of what we are seeing (which has no "tag" link to those
commits). No worries, it is OK to use the commit hashes, we were
merely wondering why they were not tagged, and also why the naming
changed.

Cheers,
Lars

On Tue, May 9, 2017 at 1:59 AM, Josh Elser  wrote:

Hi Lars,

The tags for 4.8.2 are:

https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=3d4205123f763fdfd875211d551da42deeb78412
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=40de1f6cd5326b2c8ec2da5ad817696e050c2f3a
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=e7c63d54c592bbed193841a68c0f8717831258d9
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=cf6c4f7c44ac64ada9e61385c4b10ef00e06b014

You can navigate to them via the git-web and the "tags" section. These get
collapsed into a single "ref" (as both the official 4.8.2 release and rc2
are the same thing -- rc2 was the vote that passed).

Re: the missing "v" in "v4.x.y", I think this was just an accidental
omission. We can probably rectify this (add a tag which follows the standard
naming).

Re: the missing tags on Github, this is probably just ASF infra having
trouble. They control the mirroring process of all Git changes flowing to
the Github mirror. Sometimes this process gets out of whack, I have no
reason to believe this is anything other than that :)


Lars Francke wrote:

Hi everyone,

I am the first to admit that I don't fully understand all of git so I'm
hoping you guys can help me. We're trying to check out the release bits of
the 4.8.2 release.

On Github there's no tag to be found for this release.

The links from the vote thread
<https://lists.apache.org/thread.html/9d5725264517e4e93f610b8b117a735a75e90cf81d96154df9b2bda3@%3Cdev.phoenix.apache.org%3E>
lead to 404s:
<https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;h=refs/tags/4.8.2-HBase-1.2-rc2>


The "tags" overview in Git does have a commit for 4.8.2
<https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tags> but the "tag"
link at the end is missing.

The "tag" also differs from previous releases in that the "v" at the
beginning is missing.

Compare this to 4.8.1:
<https://lists.apache.org/thread.html/91683a340dd69cfcad15dd6ba6af1f497f317baadf13e7b22bc09e57@%3Cdev.phoenix.apache.org%3E>

Can anyone shed some light on the issue?

Thanks,
Lars



[jira] [Commented] (PHOENIX-3734) Refactor Phoenix to use TAL instead of direct calls to Tephra

2017-05-16 Thread Ohad Shacham (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16012531#comment-16012531
 ] 

Ohad Shacham commented on PHOENIX-3734:
---

Hi [~giacomotay...@gmail.com] and [~tdsilva]

The failed tests pass when I run them in standalone mode; in addition, 
different tests fail in each regression run.
I also ran "mvn verify" on the master branch and got error messages. 
Could you please advise?

In addition, I tried to run the tests on branch 4.x-HBase-0.98 and the run 
got stuck in the middle.

Thx,
Ohad

> Refactor Phoenix to use TAL instead of direct calls to Tephra
> -
>
> Key: PHOENIX-3734
> URL: https://issues.apache.org/jira/browse/PHOENIX-3734
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ohad Shacham
>Assignee: Ohad Shacham
> Attachments: PHOENIX-3734.v3.patch
>
>
> Refactor Phoenix to use the new transaction abstraction layer instead of 
> direct calls to Tephra. Once this task is committed, Phoenix will continue 
> working with Tephra but will have the option of quickly integrating new 
> transaction processing engines.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: indexedWALEditCodec may break hbase replication

2017-05-16 Thread Anoop John
You might be hitting PHOENIX-2477. The fix is in 4.7.0. Can you please try 
with this version and report back?

-Anoop-

On Mon, May 15, 2017 at 10:20 PM, Yi Liang  wrote:
> Hi,
> I am trying to use Phoenix to put data, and it seems HBase replication does
> not work. Below are the detailed steps I followed. BTW, I am using HBase
> 1.2.4 and Phoenix 4.6.
>
> (1) Enable HBase replication, like setting hbase.replication=true.
> //HBase replication works fine without Phoenix after step (1)
>
> (2) Change WALCellCodec to IndexedWALEditCodec in hbase-site.xml on both
> source and destination cluster
> (3) Using Phoenix create command to create same table on both source and
> destination cluster, (Notice that I just create a normal table, and did not
> create index on this table)
> (4) Run add_peer to add the destination as a replication peer in the source
> cluster's hbase shell
> (5) For the table created in Phoenix, alter its replication_scope in hbase
> shell to enable its replication.
> (6) Using UPSERT command in Phoenix to put data.
> //After steps (2) to (4), the data is only inserted into the source cluster,
> not into the destination cluster
>
> (7) Change IndexedWALEditCodec back to WALCellCodec,
> //the replication works fine now; the data upserted at the source Phoenix is
> now replicated to the destination cluster.
>
> So, I guess there might be more configuration I need to do, or the
> IndexedWALEditCodec may break the replication.
>
> Thanks
>
> Yi
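For reference, the codec change in step (2) is normally made in hbase-site.xml on every RegionServer of both clusters; the property below is the one Phoenix's secondary-indexing documentation describes (shown here from memory, so verify it against the docs for your Phoenix version):

```xml
<!-- hbase-site.xml on every RegionServer of both source and destination -->
<property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.phoenix.hbase.index.wal.IndexedWALEditCodec</value>
</property>
<property>
  <name>hbase.replication</name>
  <value>true</value>
</property>
```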


[jira] [Updated] (PHOENIX-3854) phoenix can't read some data of hbase table

2017-05-16 Thread Hyunwoo,Han (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyunwoo,Han updated PHOENIX-3854:
-
Description: I created a Phoenix table on HBase and loaded data with 
phoenix psql.py and CsvBulkLoadTool (for example, the log showed "10 rows 
upserted"). When I query the table through Phoenix, only some of the data 
(for example, 6 of 10 rows) is returned. I checked my CSV file and my data 
preparation process, but there were no errors, so next I checked whether the 
data was in HBase. Surprisingly, the missing rows were in HBase, but querying 
through Phoenix does not return them (4 rows). How can I solve this 
situation? Please show me the answer. I would like to share the data for this 
case, but for security reasons I can't. Sorry.

> phoenix can't read some data of hbase table
> ---
>
> Key: PHOENIX-3854
> URL: https://issues.apache.org/jira/browse/PHOENIX-3854
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0
> Environment: CentOS release 6.6(64bit), HBase 1.2.4
>Reporter: Hyunwoo,Han
>
> I created phoenix table on hbase. and i input data with phoenix psql.py and 
> csvBulkLoadTool for phoenix table(for example: log was shown - 10 rows 
> upserted). and i enquiry data with phoenix table. but only some of data of 
> phoenix (for example : 6 of 10 rows) was returned. so i checked my csv file 
> and process of data manipulation. but there are not errors. so next i checked 
> if data were in hbase. Surprisingly, the other data of enqury were in hbase. 
> but enquiry with phoenix were not returned the other data (4 rows). How can i 
> solve this situation? Pls. show me the answer. I want to give the data for 
> this case. but for security problem, i can't give them. sorry.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3699) Test suite PhoenixSparkITTenantSpecific fails

2017-05-16 Thread Sneha Kanekar (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16011810#comment-16011810
 ] 

Sneha Kanekar commented on PHOENIX-3699:


Any update on this?

> Test suite PhoenixSparkITTenantSpecific fails
> -
>
> Key: PHOENIX-3699
> URL: https://issues.apache.org/jira/browse/PHOENIX-3699
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0, 4.10.0
> Environment: Ubuntu: 14.04
>Reporter: Sneha Kanekar
>  Labels: ppc64le, x86
>
> In project Phoenix-Spark, the test suite PhoenixSparkITTenantSpecific fails 
> with a Run Aborted error. I have executed the test on both x86 and ppc64le 
> architectures, and it fails on both.
> The error message is as follows:
> {code:borderStyle=solid}
> *** RUN ABORTED *** 
>   org.apache.phoenix.schema.TableAlreadyExistsException: ERROR 1013 (42M04): 
> Table already exists. tableName=TABLE1
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2311)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:957)
>   at 
> org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:211)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:358)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:341)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:340)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1511)
>   at 
> org.apache.phoenix.spark.AbstractPhoenixSparkIT$$anonfun$setupTables$1.apply(AbstractPhoenixSparkIT.scala:82)
>   at 
> org.apache.phoenix.spark.AbstractPhoenixSparkIT$$anonfun$setupTables$1.apply(AbstractPhoenixSparkIT.scala:80)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>   at 
> org.apache.phoenix.spark.AbstractPhoenixSparkIT.setupTables(AbstractPhoenixSparkIT.scala:80)
>   at 
> org.apache.phoenix.spark.AbstractPhoenixSparkIT.beforeAll(AbstractPhoenixSparkIT.scala:91)
>   at 
> org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
>   at 
> org.apache.phoenix.spark.AbstractPhoenixSparkIT.beforeAll(AbstractPhoenixSparkIT.scala:44)
>   at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
>   at 
> org.apache.phoenix.spark.AbstractPhoenixSparkIT.run(AbstractPhoenixSparkIT.scala:44)
>   at org.scalatest.Suite$class.callExecuteOnSuite$1(Suite.scala:1492)
>   at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1528)
>   at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1526)
>   at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>   at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
>   at org.scalatest.Suite$class.runNestedSuites(Suite.scala:1526)
>   at 
> org.scalatest.tools.DiscoverySuite.runNestedSuites(DiscoverySuite.scala:29)
>   at org.scalatest.Suite$class.run(Suite.scala:1421)
>   at org.scalatest.tools.DiscoverySuite.run(DiscoverySuite.scala:29)
>   at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:55)
>   at 
> org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$3.apply(Runner.scala:2563)
>   at 
> org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$3.apply(Runner.scala:2557)
>   at scala.collection.immutable.List.foreach(List.scala:381)
>   at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:2557)
>   at 
> org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1044)
>   at 
> org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1043)
>   at 
> org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:2722)
>   at 
> org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:1043)
>   at org.scalatest.tools.Runner$.main(Runner.scala:860)
>   at org.scalatest.tools.Runner.main(Runner.scala)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)