[jira] [Commented] (PHOENIX-3978) Expose mutation failures in our metrics

2017-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083373#comment-16083373
 ] 

Hadoop QA commented on PHOENIX-3978:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12876720/PHOENIX-3978-v2.patch
  against master branch at commit b7b571b7db0c58ff488e435d5a3cf6c45a41fe86.
  ATTACHMENT ID: 12876720

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
50 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+Map<String, Map<MetricType, Long>> mutationWriteMetrics = PhoenixRuntime.getWriteMetricInfoForMutationsSinceLastReset(con);
+assertEquals(expectedUncommittedStatementIndexes.length, mutationWriteMetrics.get(B_FAILURE_TABLE).get(MUTATION_BATCH_FAILED_SIZE).intValue());
+assertEquals(expectedUncommittedStatementIndexes.length, GLOBAL_MUTATION_BATCH_FAILED_COUNT.getMetric().getTotalSum());
+Map<String, Map<MetricType, Long>> mutationMetrics = PhoenixRuntime.getWriteMetricInfoForMutationsSinceLastReset(pConn);
+Map<String, Map<MetricType, Long>> readMetrics = PhoenixRuntime.getReadMetricInfoForMutationsSinceLastReset(pConn);
+Map<String, Map<MetricType, Long>> mutationMetrics = PhoenixRuntime.getWriteMetricInfoForMutationsSinceLastReset(pConn);
+Map<String, Map<MetricType, Long>> readMetrics = PhoenixRuntime.getReadMetricInfoForMutationsSinceLastReset(pConn);
+Map<String, Map<MetricType, Long>> mutationMetrics = PhoenixRuntime.getWriteMetricInfoForMutationsSinceLastReset(pConn);
+Map<String, Map<MetricType, Long>> readMetrics = PhoenixRuntime.getReadMetricInfoForMutationsSinceLastReset(pConn);
+Map<String, Map<MetricType, Long>> readMetrics = PhoenixRuntime.getRequestReadMetricInfo(rs);

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.ImmutableIndexIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.UpsertSelectIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.SaltedViewIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ViewIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.NotQueryIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ClientTimeArithmeticQueryIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1197//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1197//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1197//console

This message is automatically generated.

> Expose mutation failures in our metrics
> ---
>
> Key: PHOENIX-3978
> URL: https://issues.apache.org/jira/browse/PHOENIX-3978
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Thomas D'Silva
> Attachments: PHOENIX-3978-4.x-HBase-0.98-v2.patch, 
> PHOENIX-3978.patch, PHOENIX-3978-v2.patch
>
>
> We should be exposing whether a mutation has failed through our metrics 
> system. This should be done both within global and request level metrics. 
> The task basically boils down to:
> 1) Adding a new enum MUTATION_BATCH_FAILED_COUNTER in MetricType.
> 2) Adding a new enum GLOBAL_MUTATION_BATCH_FAILED_COUNTER in 
> GlobalClientMetrics
> 3) Adding a new CombinableMetric member called mutationBatchFailed to 
> MutationMetric class
> 4) Making sure that the two metrics are updated within the catch exception 
> block of MutationState#send()
> 5) Unit test in PhoenixMetricsIT
> FYI, [~tdsilva]
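As a concrete illustration of steps 2-4 above, here is a minimal sketch using 
the names from the task list (the call site and batch-size accounting are 
assumptions for illustration, not the committed patch):

{code}
// Inside the catch block of MutationState#send(), per step 4 (sketch):
try {
    hTable.batch(mutationList);
} catch (Exception e) {
    // Step 3: bump the request-level CombinableMetric on MutationMetric
    mutationBatchFailed.change(mutationList.size());
    // Step 2: bump the global client metric in GlobalClientMetrics
    GLOBAL_MUTATION_BATCH_FAILED_COUNTER.update(mutationList.size());
    throw e;
}
{code}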



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4010) Hash Join cache may not be sent to all regionservers when we have a stale HBase meta cache

2017-07-11 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082289#comment-16082289
 ] 

Ankit Singhal commented on PHOENIX-4010:


One option (discussed internally with [~devaraj] and [~sergey.soldatov]) is to 
resend the hash table cache to the region server and re-execute the query for 
that particular region. The catch is that the client then needs to hold on to 
the hash table caches for the duration of the query, which puts pressure on 
client memory.

WDYT [~giacomotaylor]/[~enis], @others any ideas?

Attaching a patch for the current option in the meantime.

> Hash Join cache may not be sent to all regionservers when we have a stale 
> HBase meta cache
> 
>
> Key: PHOENIX-4010
> URL: https://issues.apache.org/jira/browse/PHOENIX-4010
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4010.patch
>
>
> If the region locations changed and our HBase meta cache is not updated, then 
> we might not be sending the hash join cache to all region servers hosting the 
> regions.
> ConnectionQueryServicesImpl#getAllTableRegions
> {code}
> boolean reload = false;
> while (true) {
>     try {
>         // We could surface the package protected
>         // HConnectionImplementation.getNumberOfCachedRegionLocations
>         // to get the sizing info we need, but this would require a new class
>         // in the same package and a cast to this implementation class, so
>         // it's probably not worth it.
>         List<HRegionLocation> locations = Lists.newArrayList();
>         byte[] currentKey = HConstants.EMPTY_START_ROW;
>         do {
>             HRegionLocation regionLocation = connection.getRegionLocation(
>                     TableName.valueOf(tableName), currentKey, reload);
>             locations.add(regionLocation);
>             currentKey = regionLocation.getRegionInfo().getEndKey();
>         } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
>         return locations;
> {code}
> Skipping duplicate servers in ServerCacheClient#addServerCache
> {code}
> List<HRegionLocation> locations =
>         services.getAllTableRegions(cacheUsingTable.getPhysicalName().getBytes());
> int nRegions = locations.size();
> ...
> if (!servers.contains(entry) &&
>         keyRanges.intersectRegion(regionStartKey, regionEndKey,
>                 cacheUsingTable.getIndexType() == IndexType.LOCAL)) {
>     // Call RPC once per server
>     servers.add(entry);
> {code}
> For example, table 'T' has two regions, R1 and R2, originally hosted on 
> region server RS1. While the Phoenix/HBase connection is still active, R2 
> moves to RS2, but the stale meta cache still reports the old locations, i.e. 
> R1 and R2 on RS1. When we copy the hash table, we copy it for R1 and skip R2 
> because they appear to be on the same region server. The query then fails 
> because it cannot find the hash table cache on RS2 while processing R2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4010) Hash Join cache may not be sent to all regionservers when we have a stale HBase meta cache

2017-07-11 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4010:
---
Attachment: PHOENIX-4010.patch

> Hash Join cache may not be sent to all regionservers when we have a stale 
> HBase meta cache
> 
>
> Key: PHOENIX-4010
> URL: https://issues.apache.org/jira/browse/PHOENIX-4010
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4010.patch
>
>
> If the region locations changed and our HBase meta cache is not updated, then 
> we might not be sending the hash join cache to all region servers hosting the 
> regions.
> ConnectionQueryServicesImpl#getAllTableRegions
> {code}
> boolean reload = false;
> while (true) {
>     try {
>         // We could surface the package protected
>         // HConnectionImplementation.getNumberOfCachedRegionLocations
>         // to get the sizing info we need, but this would require a new class
>         // in the same package and a cast to this implementation class, so
>         // it's probably not worth it.
>         List<HRegionLocation> locations = Lists.newArrayList();
>         byte[] currentKey = HConstants.EMPTY_START_ROW;
>         do {
>             HRegionLocation regionLocation = connection.getRegionLocation(
>                     TableName.valueOf(tableName), currentKey, reload);
>             locations.add(regionLocation);
>             currentKey = regionLocation.getRegionInfo().getEndKey();
>         } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
>         return locations;
> {code}
> Skipping duplicate servers in ServerCacheClient#addServerCache
> {code}
> List<HRegionLocation> locations =
>         services.getAllTableRegions(cacheUsingTable.getPhysicalName().getBytes());
> int nRegions = locations.size();
> ...
> if (!servers.contains(entry) &&
>         keyRanges.intersectRegion(regionStartKey, regionEndKey,
>                 cacheUsingTable.getIndexType() == IndexType.LOCAL)) {
>     // Call RPC once per server
>     servers.add(entry);
> {code}
> For example, table 'T' has two regions, R1 and R2, originally hosted on 
> region server RS1. While the Phoenix/HBase connection is still active, R2 
> moves to RS2, but the stale meta cache still reports the old locations, i.e. 
> R1 and R2 on RS1. When we copy the hash table, we copy it for R1 and skip R2 
> because they appear to be on the same region server. The query then fails 
> because it cannot find the hash table cache on RS2 while processing R2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4010) Hash Join cache may not be sent to all regionservers when we have a stale HBase meta cache

2017-07-11 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4010:
---
Attachment: (was: PHOENIX-4010.patch)

> Hash Join cache may not be sent to all regionservers when we have a stale 
> HBase meta cache
> 
>
> Key: PHOENIX-4010
> URL: https://issues.apache.org/jira/browse/PHOENIX-4010
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.12.0
>
>
> If the region locations changed and our HBase meta cache is not updated, then 
> we might not be sending the hash join cache to all region servers hosting the 
> regions.
> ConnectionQueryServicesImpl#getAllTableRegions
> {code}
> boolean reload = false;
> while (true) {
>     try {
>         // We could surface the package protected
>         // HConnectionImplementation.getNumberOfCachedRegionLocations
>         // to get the sizing info we need, but this would require a new class
>         // in the same package and a cast to this implementation class, so
>         // it's probably not worth it.
>         List<HRegionLocation> locations = Lists.newArrayList();
>         byte[] currentKey = HConstants.EMPTY_START_ROW;
>         do {
>             HRegionLocation regionLocation = connection.getRegionLocation(
>                     TableName.valueOf(tableName), currentKey, reload);
>             locations.add(regionLocation);
>             currentKey = regionLocation.getRegionInfo().getEndKey();
>         } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
>         return locations;
> {code}
> Skipping duplicate servers in ServerCacheClient#addServerCache
> {code}
> List<HRegionLocation> locations =
>         services.getAllTableRegions(cacheUsingTable.getPhysicalName().getBytes());
> int nRegions = locations.size();
> ...
> if (!servers.contains(entry) &&
>         keyRanges.intersectRegion(regionStartKey, regionEndKey,
>                 cacheUsingTable.getIndexType() == IndexType.LOCAL)) {
>     // Call RPC once per server
>     servers.add(entry);
> {code}
> For example, table 'T' has two regions, R1 and R2, originally hosted on 
> region server RS1. While the Phoenix/HBase connection is still active, R2 
> moves to RS2, but the stale meta cache still reports the old locations, i.e. 
> R1 and R2 on RS1. When we copy the hash table, we copy it for R1 and skip R2 
> because they appear to be on the same region server. The query then fails 
> because it cannot find the hash table cache on RS2 while processing R2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4010) Hash Join cache may not be sent to all regionservers when we have a stale HBase meta cache

2017-07-11 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4010:
---
Attachment: PHOENIX-4010.patch

> Hash Join cache may not be sent to all regionservers when we have a stale 
> HBase meta cache
> 
>
> Key: PHOENIX-4010
> URL: https://issues.apache.org/jira/browse/PHOENIX-4010
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4010.patch
>
>
> If the region locations changed and our HBase meta cache is not updated, then 
> we might not be sending the hash join cache to all region servers hosting the 
> regions.
> ConnectionQueryServicesImpl#getAllTableRegions
> {code}
> boolean reload = false;
> while (true) {
>     try {
>         // We could surface the package protected
>         // HConnectionImplementation.getNumberOfCachedRegionLocations
>         // to get the sizing info we need, but this would require a new class
>         // in the same package and a cast to this implementation class, so
>         // it's probably not worth it.
>         List<HRegionLocation> locations = Lists.newArrayList();
>         byte[] currentKey = HConstants.EMPTY_START_ROW;
>         do {
>             HRegionLocation regionLocation = connection.getRegionLocation(
>                     TableName.valueOf(tableName), currentKey, reload);
>             locations.add(regionLocation);
>             currentKey = regionLocation.getRegionInfo().getEndKey();
>         } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
>         return locations;
> {code}
> Skipping duplicate servers in ServerCacheClient#addServerCache
> {code}
> List<HRegionLocation> locations =
>         services.getAllTableRegions(cacheUsingTable.getPhysicalName().getBytes());
> int nRegions = locations.size();
> ...
> if (!servers.contains(entry) &&
>         keyRanges.intersectRegion(regionStartKey, regionEndKey,
>                 cacheUsingTable.getIndexType() == IndexType.LOCAL)) {
>     // Call RPC once per server
>     servers.add(entry);
> {code}
> For example, table 'T' has two regions, R1 and R2, originally hosted on 
> region server RS1. While the Phoenix/HBase connection is still active, R2 
> moves to RS2, but the stale meta cache still reports the old locations, i.e. 
> R1 and R2 on RS1. When we copy the hash table, we copy it for R1 and skip R2 
> because they appear to be on the same region server. The query then fails 
> because it cannot find the hash table cache on RS2 while processing R2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4010) Hash Join cache may not be sent to all regionservers when we have a stale HBase meta cache

2017-07-11 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4010:
---
Attachment: (was: PHOENIX-4010.patch)

> Hash Join cache may not be sent to all regionservers when we have a stale 
> HBase meta cache
> 
>
> Key: PHOENIX-4010
> URL: https://issues.apache.org/jira/browse/PHOENIX-4010
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4010.patch
>
>
> If the region locations changed and our HBase meta cache is not updated, then 
> we might not be sending the hash join cache to all region servers hosting the 
> regions.
> ConnectionQueryServicesImpl#getAllTableRegions
> {code}
> boolean reload = false;
> while (true) {
>     try {
>         // We could surface the package protected
>         // HConnectionImplementation.getNumberOfCachedRegionLocations
>         // to get the sizing info we need, but this would require a new class
>         // in the same package and a cast to this implementation class, so
>         // it's probably not worth it.
>         List<HRegionLocation> locations = Lists.newArrayList();
>         byte[] currentKey = HConstants.EMPTY_START_ROW;
>         do {
>             HRegionLocation regionLocation = connection.getRegionLocation(
>                     TableName.valueOf(tableName), currentKey, reload);
>             locations.add(regionLocation);
>             currentKey = regionLocation.getRegionInfo().getEndKey();
>         } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
>         return locations;
> {code}
> Skipping duplicate servers in ServerCacheClient#addServerCache
> {code}
> List<HRegionLocation> locations =
>         services.getAllTableRegions(cacheUsingTable.getPhysicalName().getBytes());
> int nRegions = locations.size();
> ...
> if (!servers.contains(entry) &&
>         keyRanges.intersectRegion(regionStartKey, regionEndKey,
>                 cacheUsingTable.getIndexType() == IndexType.LOCAL)) {
>     // Call RPC once per server
>     servers.add(entry);
> {code}
> For example, table 'T' has two regions, R1 and R2, originally hosted on 
> region server RS1. While the Phoenix/HBase connection is still active, R2 
> moves to RS2, but the stale meta cache still reports the old locations, i.e. 
> R1 and R2 on RS1. When we copy the hash table, we copy it for R1 and skip R2 
> because they appear to be on the same region server. The query then fails 
> because it cannot find the hash table cache on RS2 while processing R2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: PreCommit busted again

2017-07-11 Thread Josh Elser

I seem to have gotten things back to normal. Shout if things are busted.

Also filed PHOENIX-4011 to fix some of the things in the 
test-patch.properties file.


On 7/11/17 1:46 PM, Josh Elser wrote:
Seems like the dev/test-patch.sh files just don't even exist for the 
4.x-HBase-0.98 branch. I'm a bit perplexed because it doesn't look like 
they ever actually existed on that branch.


On 7/11/17 1:30 PM, Josh Elser wrote:

Looking into it...


[jira] [Created] (PHOENIX-4011) Update precommit properties

2017-07-11 Thread Josh Elser (JIRA)
Josh Elser created PHOENIX-4011:
---

 Summary: Update precommit properties
 Key: PHOENIX-4011
 URL: https://issues.apache.org/jira/browse/PHOENIX-4011
 Project: Phoenix
  Issue Type: Task
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: 4.12.0


BRANCH_NAMES needs to be updated, as it still lists branches we no longer 
support and doesn't cover the 4.x-HBase-1.2 branch.

We're also still building against Hadoop 2.4.1, 2.5.2, and 2.6.0. I'm thinking 
that this list should really be 2.6.5, 2.7.3, 2.8.0 (and ideally, 3.0.0-alpha4, 
but I have no idea if that would require more work). I'm open to suggestions 
here.
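
For reference, a hypothetical test-patch.properties fragment reflecting this 
proposal (the variable names and exact branch list are assumptions, not the 
committed change):

{code}
BRANCH_NAMES="master 4.x-HBase-0.98 4.x-HBase-1.1 4.x-HBase-1.2"
HADOOP2_VERSIONS="2.6.5 2.7.3 2.8.0"
{code}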



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4009) Run UPDATE STATISTICS command by using MR integration on snapshots

2017-07-11 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082620#comment-16082620
 ] 

Samarth Jain commented on PHOENIX-4009:
---

FYI - [~elilevine], [~jfernando_sfdc], [~cody.mar...@gmail.com]. This could be 
an alternative and possibly a more efficient way of collecting stats more 
frequently without putting too much load on the cluster.

> Run UPDATE STATISTICS command by using MR integration on snapshots
> --
>
> Key: PHOENIX-4009
> URL: https://issues.apache.org/jira/browse/PHOENIX-4009
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>
> Now that we have the capability to run queries against table snapshots 
> through our map reduce integration, we can utilize this capability for stats 
> collection too. This would make our stats collection more resilient, resource 
> aware and less resource intensive. The bulk of the plumbing is already in 
> place. We would need to make sure that the integration doesn't barf when the 
> query is an UPDATE STATISTICS command.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3994) Index RPC priority still depends on the controller factory property in hbase-site.xml

2017-07-11 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082537#comment-16082537
 ] 

James Taylor commented on PHOENIX-3994:
---

Thanks, [~samarthjain]. +1 with the addition of the comments I mentioned before.

> Index RPC priority still depends on the controller factory property in 
> hbase-site.xml
> -
>
> Key: PHOENIX-3994
> URL: https://issues.apache.org/jira/browse/PHOENIX-3994
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Sergey Soldatov
>Priority: Critical
> Attachments: PHOENIX-3994.patch, PHOENIX-3994_v2.patch
>
>
> During PHOENIX-3360 we tried to remove the dependency on the 
> hbase.rpc.controllerfactory.class property in hbase-site.xml, since it 
> causes problems on the client side (if the client is using the server-side 
> configuration, all client requests may go out with index priority). The 
> committed solution sets the controller factory programmatically for the 
> coprocessor environment in the Indexer class, but it turns out this doesn't 
> work because the environment configuration is not used when the coprocessor 
> connection is created. We need to provide a better solution, since this 
> issue may cause accidental locks and failures that are hard to identify and 
> avoid.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4011) Update precommit properties

2017-07-11 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4011:

Attachment: PHOENIX-4011.patch

Drops old Phoenix branch names and adds in 4.x-HBase-1.2. Sets the versions of 
Hadoop we test against to 2.6.5, 2.7.3, and 2.8.0.

> Update precommit properties
> ---
>
> Key: PHOENIX-4011
> URL: https://issues.apache.org/jira/browse/PHOENIX-4011
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4011.patch
>
>
> BRANCH_NAMES needs to be updated, as it still lists branches we no longer 
> support and doesn't cover the 4.x-HBase-1.2 branch.
> We're also still building against Hadoop 2.4.1, 2.5.2, and 2.6.0. I'm 
> thinking that this list should really be 2.6.5, 2.7.3, 2.8.0 (and ideally, 
> 3.0.0-alpha4, but I have no idea if that would require more work). I'm open 
> to suggestions here.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4010) Hash Join cache may not be sent to all regionservers when we have a stale HBase meta cache

2017-07-11 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-4010:
--

 Summary: Hash Join cache may not be sent to all regionservers when 
we have a stale HBase meta cache
 Key: PHOENIX-4010
 URL: https://issues.apache.org/jira/browse/PHOENIX-4010
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal
Assignee: Ankit Singhal
 Fix For: 4.12.0


 If the region locations changed and our HBase meta cache is not updated, then 
we might not be sending the hash join cache to all region servers hosting the 
regions.
ConnectionQueryServicesImpl#getAllTableRegions
{code}
boolean reload = false;
while (true) {
    try {
        // We could surface the package protected
        // HConnectionImplementation.getNumberOfCachedRegionLocations
        // to get the sizing info we need, but this would require a new class
        // in the same package and a cast to this implementation class, so
        // it's probably not worth it.
        List<HRegionLocation> locations = Lists.newArrayList();
        byte[] currentKey = HConstants.EMPTY_START_ROW;
        do {
            HRegionLocation regionLocation = connection.getRegionLocation(
                    TableName.valueOf(tableName), currentKey, reload);
            locations.add(regionLocation);
            currentKey = regionLocation.getRegionInfo().getEndKey();
        } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
        return locations;
{code}

Skipping duplicate servers in ServerCacheClient#addServerCache

{code}
List<HRegionLocation> locations =
        services.getAllTableRegions(cacheUsingTable.getPhysicalName().getBytes());
int nRegions = locations.size();
...
if (!servers.contains(entry) &&
        keyRanges.intersectRegion(regionStartKey, regionEndKey,
                cacheUsingTable.getIndexType() == IndexType.LOCAL)) {
    // Call RPC once per server
    servers.add(entry);
{code}

For example, table 'T' has two regions, R1 and R2, originally hosted on region 
server RS1. While the Phoenix/HBase connection is still active, R2 moves to 
RS2, but the stale meta cache still reports the old locations, i.e. R1 and R2 
on RS1. When we copy the hash table, we copy it for R1 and skip R2 because 
they appear to be on the same region server. The query then fails because it 
cannot find the hash table cache on RS2 while processing R2.
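
A hypothetical sketch of the obvious mitigation: once a server reports a 
missing cache, walk the regions again with reload=true so the lookup bypasses 
the stale client-side meta cache (the same loop as above, with a forced 
refresh):

{code}
List<HRegionLocation> locations = Lists.newArrayList();
byte[] currentKey = HConstants.EMPTY_START_ROW;
do {
    // reload=true forces a meta lookup instead of trusting the cached location
    HRegionLocation loc = connection.getRegionLocation(
            TableName.valueOf(tableName), currentKey, true);
    locations.add(loc);
    currentKey = loc.getRegionInfo().getEndKey();
} while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
{code}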





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4010) Hash Join cache may not be sent to all regionservers when we have a stale HBase meta cache

2017-07-11 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4010:
---
Attachment: PHOENIX-4010.patch

> Hash Join cache may not be sent to all regionservers when we have a stale 
> HBase meta cache
> 
>
> Key: PHOENIX-4010
> URL: https://issues.apache.org/jira/browse/PHOENIX-4010
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4010.patch
>
>
> If the region locations changed and our HBase meta cache is not updated, then 
> we might not be sending the hash join cache to all region servers hosting the 
> regions.
> ConnectionQueryServicesImpl#getAllTableRegions
> {code}
> boolean reload = false;
> while (true) {
>     try {
>         // We could surface the package protected
>         // HConnectionImplementation.getNumberOfCachedRegionLocations
>         // to get the sizing info we need, but this would require a new class
>         // in the same package and a cast to this implementation class, so
>         // it's probably not worth it.
>         List<HRegionLocation> locations = Lists.newArrayList();
>         byte[] currentKey = HConstants.EMPTY_START_ROW;
>         do {
>             HRegionLocation regionLocation = connection.getRegionLocation(
>                     TableName.valueOf(tableName), currentKey, reload);
>             locations.add(regionLocation);
>             currentKey = regionLocation.getRegionInfo().getEndKey();
>         } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
>         return locations;
> {code}
> Skipping duplicate servers in ServerCacheClient#addServerCache
> {code}
> List<HRegionLocation> locations =
>         services.getAllTableRegions(cacheUsingTable.getPhysicalName().getBytes());
> int nRegions = locations.size();
> ...
> if (!servers.contains(entry) &&
>         keyRanges.intersectRegion(regionStartKey, regionEndKey,
>                 cacheUsingTable.getIndexType() == IndexType.LOCAL)) {
>     // Call RPC once per server
>     servers.add(entry);
> {code}
> For example, table 'T' has two regions, R1 and R2, originally hosted on 
> region server RS1. While the Phoenix/HBase connection is still active, R2 
> moves to RS2, but the stale meta cache still reports the old locations, i.e. 
> R1 and R2 on RS1. When we copy the hash table, we copy it for R1 and skip R2 
> because they appear to be on the same region server. The query then fails 
> because it cannot find the hash table cache on RS2 while processing R2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4010) Hash Join cache may not be sent to all regionservers when we have a stale HBase meta cache

2017-07-11 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4010:
---
Attachment: (was: PHOENIX-4010.patch)

> Hash Join cache may not be sent to all regionservers when we have a stale 
> HBase meta cache
> 
>
> Key: PHOENIX-4010
> URL: https://issues.apache.org/jira/browse/PHOENIX-4010
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4010.patch
>
>
> If the region locations changed and our HBase meta cache is not updated, then 
> we might not be sending the hash join cache to all region servers hosting the 
> regions.
> ConnectionQueryServicesImpl#getAllTableRegions
> {code}
> boolean reload = false;
> while (true) {
>     try {
>         // We could surface the package protected
>         // HConnectionImplementation.getNumberOfCachedRegionLocations
>         // to get the sizing info we need, but this would require a new class
>         // in the same package and a cast to this implementation class, so
>         // it's probably not worth it.
>         List<HRegionLocation> locations = Lists.newArrayList();
>         byte[] currentKey = HConstants.EMPTY_START_ROW;
>         do {
>             HRegionLocation regionLocation = connection.getRegionLocation(
>                     TableName.valueOf(tableName), currentKey, reload);
>             locations.add(regionLocation);
>             currentKey = regionLocation.getRegionInfo().getEndKey();
>         } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
>         return locations;
> {code}
> Skipping duplicate servers in ServerCacheClient#addServerCache
> {code}
> List<HRegionLocation> locations =
>         services.getAllTableRegions(cacheUsingTable.getPhysicalName().getBytes());
> int nRegions = locations.size();
> ...
> if (!servers.contains(entry) &&
>         keyRanges.intersectRegion(regionStartKey, regionEndKey,
>                 cacheUsingTable.getIndexType() == IndexType.LOCAL)) {
>     // Call RPC once per server
>     servers.add(entry);
> {code}
> For example, table 'T' has two regions, R1 and R2, originally hosted on 
> region server RS1. While the Phoenix/HBase connection is still active, R2 
> moves to RS2, but the stale meta cache still reports the old locations, i.e. 
> R1 and R2 on RS1. When we copy the hash table, we copy it for R1 and skip R2 
> because they appear to be on the same region server. The query then fails 
> because it cannot find the hash table cache on RS2 while processing R2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4010) Hash Join cache may not be sent to all regionservers when we have a stale HBase meta cache

2017-07-11 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082697#comment-16082697
 ] 

Devaraj Das commented on PHOENIX-4010:
--

Yeah, +1 on retrying the whole query. It seems simpler, and hopefully we 
wouldn't need to do it often anyway.

> Hash Join cache may not be sent to all regionservers when we have a stale 
> HBase meta cache
> 
>
> Key: PHOENIX-4010
> URL: https://issues.apache.org/jira/browse/PHOENIX-4010
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4010.patch
>
>
> If the region locations changed and our HBase meta cache is not updated, then 
> we might not be sending the hash join cache to all region servers hosting the 
> regions.
> ConnectionQueryServicesImpl#getAllTableRegions
> {code}
> boolean reload = false;
> while (true) {
>     try {
>         // We could surface the package protected
>         // HConnectionImplementation.getNumberOfCachedRegionLocations
>         // to get the sizing info we need, but this would require a new class
>         // in the same package and a cast to this implementation class, so
>         // it's probably not worth it.
>         List<HRegionLocation> locations = Lists.newArrayList();
>         byte[] currentKey = HConstants.EMPTY_START_ROW;
>         do {
>             HRegionLocation regionLocation = connection.getRegionLocation(
>                     TableName.valueOf(tableName), currentKey, reload);
>             locations.add(regionLocation);
>             currentKey = regionLocation.getRegionInfo().getEndKey();
>         } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
>         return locations;
> {code}
> Skipping duplicate servers in ServerCacheClient#addServerCache
> {code}
> List<HRegionLocation> locations =
>         services.getAllTableRegions(cacheUsingTable.getPhysicalName().getBytes());
> int nRegions = locations.size();
> ...
> if (!servers.contains(entry) &&
>         keyRanges.intersectRegion(regionStartKey, regionEndKey,
>                 cacheUsingTable.getIndexType() == IndexType.LOCAL)) {
>     // Call RPC once per server
>     servers.add(entry);
> {code}
> For example, table 'T' has two regions, R1 and R2, originally hosted on 
> region server RS1. While the Phoenix/HBase connection is still active, R2 
> moves to RS2, but the stale meta cache still reports the old locations, i.e. 
> R1 and R2 on RS1. When we copy the hash table, we copy it for R1 and skip R2 
> because they appear to be on the same region server. The query then fails 
> because it cannot find the hash table cache on RS2 while processing R2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3978) Expose mutation failures in our metrics

2017-07-11 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082793#comment-16082793
 ] 

Samarth Jain commented on PHOENIX-3978:
---

Thanks for the patch, [~tdsilva]. Looks good for the most part. A couple of 
comments:

In PartialCommitIT, maybe add a line to test global mutation failed metric too? 
{code}
+Map<String, Map<String, Long>> mutationWriteMetrics = PhoenixRuntime.getWriteMetricsForMutationsSinceLastReset(con);
+assertEquals(expectedUncommittedStatementIndexes.length, mutationWriteMetrics.get(B_FAILURE_TABLE).get(MUTATION_BATCH_FAILED_COUNT).intValue());
{code}

For backward compatibility, we should leave the old PhoenixRuntime methods and 
the corresponding aggregate() methods in the metric queues. We should mark 
them as deprecated, with a note to remove them in the next major release.
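
A sketch of the suggested compatibility shim (the method names are taken from 
the patch; the delegating body and imports are assumptions for illustration):

{code}
/**
 * @deprecated Use {@link #getWriteMetricInfoForMutationsSinceLastReset} instead.
 * To be removed in the next major release.
 */
@Deprecated
public static Map<String, Map<String, Long>> getWriteMetricsForMutationsSinceLastReset(
        Connection conn) throws SQLException {
    // Delegate to the new method and convert MetricType keys to metric names.
    Map<String, Map<String, Long>> byName = new HashMap<>();
    for (Map.Entry<String, Map<MetricType, Long>> table :
            getWriteMetricInfoForMutationsSinceLastReset(conn).entrySet()) {
        Map<String, Long> metrics = new HashMap<>();
        for (Map.Entry<MetricType, Long> metric : table.getValue().entrySet()) {
            metrics.put(metric.getKey().name(), metric.getValue());
        }
        byName.put(table.getKey(), metrics);
    }
    return byName;
}
{code}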

> Expose mutation failures in our metrics
> ---
>
> Key: PHOENIX-3978
> URL: https://issues.apache.org/jira/browse/PHOENIX-3978
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Thomas D'Silva
> Attachments: PHOENIX-3978.patch
>
>
> We should be exposing whether a mutation has failed through our metrics 
> system. This should be done both within global and request level metrics. 
> The task basically boils down to:
> 1) Adding a new enum MUTATION_BATCH_FAILED_COUNTER in MetricType.
> 2) Adding a new enum GLOBAL_MUTATION_BATCH_FAILED_COUNTER in 
> GlobalClientMetrics
> 3) Adding a new CombinableMetric member called mutationBatchFailed to 
> MutationMetric class
> 4) Making sure that the two metrics are updated within the catch exception 
> block of MutationState#send()
> 5) Unit test in PhoenixMetricsIT
> FYI, [~tdsilva]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] phoenix pull request #262: PHOENIX 153 implement TABLESAMPLE clause

2017-07-11 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/262#discussion_r126773527
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryWithTableSampleIT.java 
---
@@ -0,0 +1,261 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.Properties;
+
+import org.apache.phoenix.exception.PhoenixParserException;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.TestUtil;
+import org.junit.Before;
+import org.junit.Test;
+
+
+public class QueryWithTableSampleIT extends ParallelStatsEnabledIT {
+private String tableName;
+private String joinedTableName;
+
+@Before
+public void generateTableNames() {
+tableName = "T_" + generateUniqueName();
+joinedTableName = "T_" + generateUniqueName();
+}
+
+@Test(expected=PhoenixParserException.class)
+public void testSingleQueryWrongSyntax() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+try {
+prepareTableWithValues(conn, 100);
+String query = "SELECT i1, i2 FROM " + tableName + " tablesample 15 ";
+
+ResultSet rs = conn.createStatement().executeQuery(query);
+inspect(rs);
+} finally {
+conn.close();
+}
+}
+
+@Test(expected=PhoenixParserException.class)
+public void testSingleQueryWrongSamplingRate() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+try {
+prepareTableWithValues(conn, 100);
+String query = "SELECT i1, i2 FROM " + tableName + " tablesample (175) ";
+
+ResultSet rs = conn.createStatement().executeQuery(query);
+inspect(rs);
+} finally {
+conn.close();
+}
+}
+
+@Test
+public void testSingleQueryZeroSamplingRate() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+try {
+prepareTableWithValues(conn, 100);
+String query = "SELECT i1, i2 FROM " + tableName + " tablesample (0) ";
+ResultSet rs = conn.createStatement().executeQuery(query);
+assertFalse(rs.next());
+} finally {
+conn.close();
+}
+}
+
+@Test
+public void testSingleQuery() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+try {
+prepareTableWithValues(conn, 100);
+String query = "SELECT i1, i2 FROM " + tableName + " tablesample (45) ";
+ResultSet rs = conn.createStatement().executeQuery(query);
+
+assertTrue(rs.next());
+assertEquals(2, rs.getInt(1));
+assertEquals(200, rs.getInt(2));
+
+assertTrue(rs.next());
+assertEquals(6, rs.getInt(1));
+assertEquals(600, rs.getInt(2));
+
+} finally {
+conn.close();
+}
+}
+
+

[jira] [Created] (PHOENIX-4012) Disable distributed upsert select when table has global mutable secondary indexes

2017-07-11 Thread Samarth Jain (JIRA)
Samarth Jain created PHOENIX-4012:
-

 Summary: Disable distributed upsert select when table has global 
mutable secondary indexes
 Key: PHOENIX-4012
 URL: https://issues.apache.org/jira/browse/PHOENIX-4012
 Project: Phoenix
  Issue Type: Bug
Reporter: Samarth Jain


It can be re-enabled once PHOENIX-3995 is fixed.
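
A hypothetical sketch of the guard this implies (the helper name and call site 
are assumptions for illustration, not committed code): fall back to the 
client-driven upsert select path whenever the target table is mutable and has 
a global index.

{code}
// Skip the server-side (distributed) upsert select when a global mutable
// index exists, to avoid the cross-region-server deadlock in PHOENIX-3995.
private static boolean hasGlobalMutableIndex(PTable table) {
    if (table.isImmutableRows()) {
        return false;
    }
    for (PTable index : table.getIndexes()) {
        if (index.getIndexType() == IndexType.GLOBAL) {
            return true;
        }
    }
    return false;
}
{code}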



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] phoenix pull request #262: PHOENIX 153 implement TABLESAMPLE clause

2017-07-11 Thread aertoria
Github user aertoria commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/262#discussion_r126782179
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/QueryCompiler.java ---
@@ -539,6 +539,7 @@ protected QueryPlan 
compileSingleFlatQuery(StatementContext context, SelectState
 if (table.getViewStatement() != null) {
 viewWhere = new 
SQLParser(table.getViewStatement()).parseQuery().getWhere();
 }
+
--- End diff --

+1


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix pull request #262: PHOENIX 153 implement TABLESAMPLE clause

2017-07-11 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/262#discussion_r126774166
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/QueryCompiler.java ---
@@ -539,6 +539,7 @@ protected QueryPlan 
compileSingleFlatQuery(StatementContext context, SelectState
 if (table.getViewStatement() != null) {
 viewWhere = new 
SQLParser(table.getViewStatement()).parseQuery().getWhere();
 }
+
--- End diff --

Please revert changes to this file as there are only whitespace changes.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix pull request #262: PHOENIX 153 implement TABLESAMPLE clause

2017-07-11 Thread aertoria
Github user aertoria commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/262#discussion_r126779607
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryWithTableSampleIT.java 
---
@@ -0,0 +1,261 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.Properties;
+
+import org.apache.phoenix.exception.PhoenixParserException;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.TestUtil;
+import org.junit.Before;
+import org.junit.Test;
+
+
+public class QueryWithTableSampleIT extends ParallelStatsEnabledIT {
+private String tableName;
+private String joinedTableName;
+
+@Before
+public void generateTableNames() {
+tableName = "T_" + generateUniqueName();
+joinedTableName = "T_" + generateUniqueName();
+}
+
+@Test(expected=PhoenixParserException.class)
+public void testSingleQueryWrongSyntax() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+try {
+prepareTableWithValues(conn, 100);
+String query = "SELECT i1, i2 FROM " + tableName + " tablesample 15 ";
+
+ResultSet rs = conn.createStatement().executeQuery(query);
+inspect(rs);
+} finally {
+conn.close();
+}
+}
+
+@Test(expected=PhoenixParserException.class)
+public void testSingleQueryWrongSamplingRate() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+try {
+prepareTableWithValues(conn, 100);
+String query = "SELECT i1, i2 FROM " + tableName + " tablesample (175) ";
+
+ResultSet rs = conn.createStatement().executeQuery(query);
+inspect(rs);
+} finally {
+conn.close();
+}
+}
+
+@Test
+public void testSingleQueryZeroSamplingRate() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+try {
+prepareTableWithValues(conn, 100);
+String query = "SELECT i1, i2 FROM " + tableName + " tablesample (0) ";
+ResultSet rs = conn.createStatement().executeQuery(query);
+assertFalse(rs.next());
+} finally {
+conn.close();
+}
+}
+
+@Test
+public void testSingleQuery() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+try {
+prepareTableWithValues(conn, 100);
+String query = "SELECT i1, i2 FROM " + tableName + " tablesample (45) ";
+ResultSet rs = conn.createStatement().executeQuery(query);
+
+assertTrue(rs.next());
+assertEquals(2, rs.getInt(1));
+assertEquals(200, rs.getInt(2));
+
+assertTrue(rs.next());
+assertEquals(6, rs.getInt(1));
+assertEquals(600, rs.getInt(2));
+
+} finally {
+conn.close();
+}
+}
+
+@Test

[GitHub] phoenix pull request #262: PHOENIX 153 implement TABLESAMPLE clause

2017-07-11 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/262#discussion_r126779550
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/parse/SelectStatement.java ---
@@ -267,6 +267,12 @@ public LimitNode getLimit() {
 }
 
 @Override
+public Double getTableSamplingRate() {
+    if (fromTable == null || !(fromTable instanceof ConcreteTableNode)) return null;
+    return ((ConcreteTableNode) fromTable).getTableSamplingRate();
--- End diff --

Do we need this method? What happens in the case of a join, where there are 
multiple concrete tables? 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3598) Enable proxy access to Phoenix query server for third party on behalf of end users

2017-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082871#comment-16082871
 ] 

Hadoop QA commented on PHOENIX-3598:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12876432/PHOENIX-3598.002.patch
  against master branch at commit b0109feb92fdd9e19bb6f70412d0c476ec60d3d4.
  ATTACHMENT ID: 12876432

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
50 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+public static final TableName SYSTEM_SCHEMA_HBASE_TABLE_NAME = TableName.valueOf(SYSTEM_SCHEMA_NAME);
+public static final TableName SYSTEM_STATS_HBASE_TABLE_NAME = TableName.valueOf(SYSTEM_STATS_NAME);
+public static final TableName SYSTEM_SEQUENCE_HBASE_TABLE_NAME = TableName.valueOf(SYSTEM_SEQUENCE_NAME);
+public static final TableName SYSTEM_FUNCTION_HBASE_TABLE_NAME = TableName.valueOf(SYSTEM_FUNCTION_NAME);
+public static final String QUERY_SERVER_WITH_REMOTEUSEREXTRACTOR_ATTRIB = "phoenix.queryserver.withRemoteUserExtractor";
+public static final String QUERY_SERVER_REMOTEUSEREXTRACTOR_PARAM = "phoenix.queryserver.remoteUserExtractor.param";
+public static final String QUERY_SERVER_DISABLE_KERBEROS_LOGIN = "phoenix.queryserver.disable.kerberos.login";
+private static final List<TableName> SYSTEM_TABLE_NAMES = Arrays.asList(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME,
+conf.set(DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY, SERVICE_PRINCIPAL + "@" + KDC.getRealm());
+conf.set(DFSConfigKeys.DFS_DATANODE_KERBEROS_PRINCIPAL_KEY, SERVICE_PRINCIPAL + "@" + KDC.getRealm());

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1195//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1195//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1195//console

This message is automatically generated.

> Enable proxy access to Phoenix query server for third party on behalf of end 
> users
> --
>
> Key: PHOENIX-3598
> URL: https://issues.apache.org/jira/browse/PHOENIX-3598
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Jerry He
>Assignee: Shi Wang
> Attachments: 0001-PHOENIX-3598.patch, PHOENIX-3598.001.patch, 
> PHOENIX-3598.002.patch
>
>
> This JIRA tracks the follow-on work of CALCITE-1539 needed on Phoenix query 
> server side.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4000) Increase zookeeper session timeout in tests to prevent region server aborts

2017-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082874#comment-16082874
 ] 

Hudson commented on PHOENIX-4000:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1677 (See 
[https://builds.apache.org/job/Phoenix-master/1677/])
PHOENIX-4000 Increase zookeeper session timeout in tests to prevent (samarth: 
rev a752cd14851f1b1730c0ea3043a6dd0017fac5fc)
* (edit) phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java


> Increase zookeeper session timeout in tests to prevent region server aborts
> ---
>
> Key: PHOENIX-4000
> URL: https://issues.apache.org/jira/browse/PHOENIX-4000
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4000.patch
>
>
> In a local run I saw region server aborts happening because of the zookeeper 
> session timing out. This was likely because of a long-running GC cycle. 
> Changing the timeout right now rather than going down the path of tuning GC 
> settings.
> FATAL [RS:0;10.0.1.43:62677-EventThread] 
> org.apache.hadoop.hbase.regionserver.HRegionServer(1950): ABORTING region 
> server 10.0.1.43,62677,1499389843327: regionserver:62677-0x15d1a9954230001, 
> quorum=localhost:58946, baseZNode=/hbase regionserver:62677-0x15d1a9954230001 
> received expired from ZooKeeper, aborting
> org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode 
> = Session expired



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4011) Update precommit properties

2017-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082915#comment-16082915
 ] 

Hadoop QA commented on PHOENIX-4011:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12876663/PHOENIX-4011.patch
  against master branch at commit c9bc3a7e5e380b0f6225091429976c8d705d15d1.
  ATTACHMENT ID: 12876663

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
50 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ViewIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.SaltedViewIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.CastAndCoerceIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1196//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1196//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1196//console

This message is automatically generated.

> Update precommit properties
> ---
>
> Key: PHOENIX-4011
> URL: https://issues.apache.org/jira/browse/PHOENIX-4011
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4011.patch
>
>
> BRANCH_NAMES needs to be updated, as it still lists branches we no longer 
> support and doesn't cover the 4.x-HBase-1.2 branch.
> We're also still building against Hadoop 2.4.1, 2.5.2, and 2.6.0. I'm 
> thinking that this list should really be 2.6.5, 2.7.3, 2.8.0 (and ideally, 
> 3.0.0-alpha4, but I have no idea if that would require more work). I'm open 
> to suggestions here.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3994) Index RPC priority still depends on the controller factory property in hbase-site.xml

2017-07-11 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082968#comment-16082968
 ] 

Samarth Jain commented on PHOENIX-3994:
---

Looks like my checkin broke some tests. Taking a look.

> Index RPC priority still depends on the controller factory property in 
> hbase-site.xml
> -
>
> Key: PHOENIX-3994
> URL: https://issues.apache.org/jira/browse/PHOENIX-3994
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Sergey Soldatov
>Assignee: Samarth Jain
>Priority: Critical
> Fix For: 4.12.0, 4.11.1
>
> Attachments: PHOENIX-3994.patch, PHOENIX-3994_v2.patch
>
>
> During PHOENIX-3360 we tried to remove the dependency on the 
> hbase.rpc.controllerfactory.class property in hbase-site.xml, since it 
> causes problems on the client side (if the client is using the server-side 
> configuration, all client requests may go out with index priority). The 
> committed solution sets the controller factory programmatically for the 
> coprocessor environment in the Indexer class, but it turns out this doesn't 
> work because the environment configuration is not used when the coprocessor 
> connection is created. We need to provide a better solution, since this 
> issue may cause accidental locks and failures that are hard to identify and 
> avoid.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4008) UPDATE STATISTIC should run raw scan with all versions of cells

2017-07-11 Thread Samarth Jain (JIRA)
Samarth Jain created PHOENIX-4008:
-

 Summary: UPDATE STATISTIC should run raw scan with all versions of 
cells
 Key: PHOENIX-4008
 URL: https://issues.apache.org/jira/browse/PHOENIX-4008
 Project: Phoenix
  Issue Type: Bug
Reporter: Samarth Jain


In order to truly measure the size of the data when calculating guide posts, 
UPDATE STATISTICS should run a raw scan to take into account all versions of 
cells. We should also be setting the max versions on the scan.
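
A minimal sketch of what that means at the HBase API level (where exactly the 
stats collector builds its scan is elided here):

{code}
// Raw scan: include delete markers and not-yet-compacted deleted cells,
// and keep every cell version, so guide posts reflect the true data size.
Scan scan = new Scan();
scan.setRaw(true);
scan.setMaxVersions();
{code}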



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3994) Index RPC priority still depends on the controller factory property in hbase-site.xml

2017-07-11 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081709#comment-16081709
 ] 

Samarth Jain commented on PHOENIX-3994:
---

bq. Just to confirm, your patch causes our existing setting of the 
CUSTOM_CONTROLLER_CONF_KEY here in Indexer to take effect as we expect?
Yes. See the changes in BaseTest. Earlier we were setting the 
ServerRpcControllerFactory by default in our tests. After I removed that, 
PhoenixServerRpcIT was failing. With my change, it works again as expected 
(i.e. the index handler pool is used for handling index updates).

bq. Can you file an HBase JIRA if you haven't already and reference that JIRA 
in a comment on that method?
Done. HBASE-18359

bq. Will this be done in a separate JIRA?
Yes, it will be fixed as part of PHOENIX-3995. In the meantime, I will file 
another JIRA to disable distributed upsert select when a table has a global 
mutable index on it, since users can run into deadlocks until PHOENIX-3995 is 
fixed.

> Index RPC priority still depends on the controller factory property in 
> hbase-site.xml
> -
>
> Key: PHOENIX-3994
> URL: https://issues.apache.org/jira/browse/PHOENIX-3994
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Sergey Soldatov
>Priority: Critical
> Attachments: PHOENIX-3994.patch, PHOENIX-3994_v2.patch
>
>
> During PHOENIX-3360 we tried to remove the dependency on the 
> hbase.rpc.controllerfactory.class property in hbase-site.xml, since it causes 
> problems on the client side (if the client uses the server-side configuration, 
> all client requests may go out with index priority). The committed solution 
> sets the controller factory programmatically for the coprocessor environment 
> in the Indexer class, but it turns out that this doesn't work because the 
> environment configuration is not used when the coprocessor connection is 
> created. We need to provide a better solution, since this issue may cause 
> accidental locks and failures that are hard to identify and avoid. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4009) Run UPDATE STATISTICS command by using MR integration on snapshots

2017-07-11 Thread Samarth Jain (JIRA)
Samarth Jain created PHOENIX-4009:
-

 Summary: Run UPDATE STATISTICS command by using MR integration on 
snapshots
 Key: PHOENIX-4009
 URL: https://issues.apache.org/jira/browse/PHOENIX-4009
 Project: Phoenix
  Issue Type: Bug
Reporter: Samarth Jain


Now that we have the capability to run queries against table snapshots through 
our MapReduce integration, we can utilize it for stats collection too. This 
would make our stats collection more resilient, more resource-aware, and less 
resource intensive. The bulk of the plumbing is already in place. We would need 
to make sure that the integration doesn't barf when the query is an UPDATE 
STATISTICS command.
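
A hedged sketch of what wiring this up might look like; the snapshot-based 
setInput overload and the StatsCollectionWritable class named here are 
assumptions for illustration, not the actual integration:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;

// Hypothetical: run the stats-collection query over a restored snapshot of
// MY_TABLE instead of the live table, keeping load off the region servers.
Configuration conf = HBaseConfiguration.create();
Job job = Job.getInstance(conf, "update-statistics-over-snapshot");
PhoenixMapReduceUtil.setInput(job, StatsCollectionWritable.class, "MY_SNAPSHOT",
        "MY_TABLE", new Path("/tmp/snapshot-restore"), "SELECT * FROM MY_TABLE");
{code}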



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4010) Hash Join cache may not be sent to all regionservers when we have stale HBase meta cache

2017-07-11 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082525#comment-16082525
 ] 

James Taylor commented on PHOENIX-4010:
---

Since presumably this is relatively rare, we could retry the entire query from 
PhoenixStatement in the case of a HashCacheNotFoundException. We do that 
already for other classes of exceptions such as MetaDataEntityNotFoundException.
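
A minimal, self-contained sketch of that retry-once idea; CacheNotFoundException 
and runQuery() below are stand-ins for illustration, not PhoenixStatement's 
actual internals:

{code}
public class RetryOnMissingCacheSketch {
    // Stand-in for the hash-cache-not-found error surfaced to the client.
    static class CacheNotFoundException extends Exception {}

    static int attempts = 0;

    // Stand-in for the statement's execute path; fails once to simulate a
    // hash cache lost when a region moved after the cache was distributed.
    static String runQuery() throws CacheNotFoundException {
        if (attempts++ == 0) throw new CacheNotFoundException();
        return "rows";
    }

    public static void main(String[] args) throws Exception {
        String result;
        try {
            result = runQuery();
        } catch (CacheNotFoundException e) {
            // Rare path: re-execute the entire query, which rebuilds and
            // resends the join cache, mirroring the existing retry for
            // MetaDataEntityNotFoundException.
            result = runQuery();
        }
        System.out.println(result);
    }
}
{code}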

> Hash Join cache may not be sent to all regionservers when we have stale HBase 
> meta cache
> 
>
> Key: PHOENIX-4010
> URL: https://issues.apache.org/jira/browse/PHOENIX-4010
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4010.patch
>
>
>  If the region locations changed and our HBase meta cache is not updated, then 
> we might not send the hash join cache to all region servers hosting the 
> regions.
> ConnectionQueryServicesImpl#getAllTableRegions
> {code}
> boolean reload = false;
> while (true) {
>     try {
>         // We could surface the package projected
>         // HConnectionImplementation.getNumberOfCachedRegionLocations
>         // to get the sizing info we need, but this would require a new class
>         // in the same package and a cast to this implementation class, so
>         // it's probably not worth it.
>         List<HRegionLocation> locations = Lists.newArrayList();
>         byte[] currentKey = HConstants.EMPTY_START_ROW;
>         do {
>             HRegionLocation regionLocation = connection.getRegionLocation(
>                     TableName.valueOf(tableName), currentKey, reload);
>             locations.add(regionLocation);
>             currentKey = regionLocation.getRegionInfo().getEndKey();
>         } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
>         return locations;
> {code}
> Skipping duplicate servers in ServerCacheClient#addServerCache
> {code}
> List<HRegionLocation> locations =
>         services.getAllTableRegions(cacheUsingTable.getPhysicalName().getBytes());
> int nRegions = locations.size();
> ...
> if (!servers.contains(entry) &&
>         keyRanges.intersectRegion(regionStartKey, regionEndKey,
>                 cacheUsingTable.getIndexType() == IndexType.LOCAL)) {
>     // Call RPC once per server
>     servers.add(entry);
> {code}
> For example: table ’T’ has two regions, R1 and R2, originally hosted on 
> regionserver RS1. While the Phoenix/HBase connection is still active, R2 is 
> transitioned to RS2, but the stale meta cache will still give the old region 
> locations, i.e. R1 and R2 on RS1. When we start copying the hash table, we 
> copy it for R1 and skip R2, since they appear to be hosted on the same 
> regionserver. So a query on the table will fail, as it will be unable to find 
> the hash table cache on RS2 when processing region R2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4011) Update precommit properties

2017-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083299#comment-16083299
 ] 

Hudson commented on PHOENIX-4011:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1679 (See 
[https://builds.apache.org/job/Phoenix-master/1679/])
PHOENIX-4011 Use more reasonable branch names for Phoenix and Hadoop (elserj: 
rev b7b571b7db0c58ff488e435d5a3cf6c45a41fe86)
* (edit) dev/test-patch.properties


> Update precommit properties
> ---
>
> Key: PHOENIX-4011
> URL: https://issues.apache.org/jira/browse/PHOENIX-4011
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4011.patch
>
>
> BRANCH_NAMES needs to be updated as we're presently not supporting the 
> HBase-1.2 branch.
> We're also still building against Hadoop 2.4.1, 2.5.2, and 2.6.0. I'm 
> thinking that this list should really be 2.6.5, 2.7.3, 2.8.0 (and ideally, 
> 3.0.0-alpha4, but I have no idea if that would require more work). I'm open 
> to suggestions here.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Reopened] (PHOENIX-3994) Index RPC priority still depends on the controller factory property in hbase-site.xml

2017-07-11 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain reopened PHOENIX-3994:
---

> Index RPC priority still depends on the controller factory property in 
> hbase-site.xml
> -
>
> Key: PHOENIX-3994
> URL: https://issues.apache.org/jira/browse/PHOENIX-3994
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Sergey Soldatov
>Assignee: Samarth Jain
>Priority: Critical
> Fix For: 4.12.0, 4.11.1
>
> Attachments: PHOENIX-3994.patch, PHOENIX-3994_v2.patch
>
>
> During PHOENIX-3360 we tried to remove the dependency on the 
> hbase.rpc.controllerfactory.class property in hbase-site.xml, since it causes 
> problems on the client side (if the client uses the server-side configuration, 
> all client requests may go out with index priority). The committed solution 
> sets the controller factory programmatically for the coprocessor environment 
> in the Indexer class, but it turns out that this doesn't work because the 
> environment configuration is not used when the coprocessor connection is 
> created. We need to provide a better solution, since this issue may cause 
> accidental locks and failures that are hard to identify and avoid. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] phoenix issue #262: PHOENIX 153 implement TABLESAMPLE clause

2017-07-11 Thread aertoria
Github user aertoria commented on the issue:

https://github.com/apache/phoenix/pull/262
  
>  See ExplainTable and let's figure out the best place to add this.

Proposing to add this logic in the `BaseResultIterators`.`explain()` method, 
between **Line1080** and **Line1081** of `BaseResultIterators.java` 
(link below).


https://github.com/apache/phoenix/blob/b9bb918610c04e21b27df8d3fe1c42df508a96f0/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java


Reason for this location:
Currently, `BaseResultIterators` extends the abstract class `ExplainTable`. 
The explain string for the table part is essentially produced by these two 
classes, one after the other. The table-sampling information is stored in the 
`QueryPlan.Statement.fromtable` object, which resides in `BaseResultIterators`; 
its parent, the abstract class `ExplainTable`, does not have this info (unless 
we want to modify the `PTable` interface and get it from `tableref`).

In addition, the main table-sampling logic resides in `BaseResultIterators`, 
in `getParallelScans()`. Here we are just making it also explain the plan when 
it overrides `explain()`. All things considered, I think `BaseResultIterators` 
is the best place to put it. Please let me know your feedback!
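
For concreteness, a self-contained sketch of the shape of that change (names 
and plan strings here are illustrative, not the actual pull request):

```
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: the sampling step is appended right after the
// table-scan line, which is what overriding explain() in
// BaseResultIterators would accomplish.
public class TableSampleExplainSketch {
    static List<String> explain(Double samplingRate) {
        List<String> planSteps = new ArrayList<>();
        planSteps.add("CLIENT 3-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER PERSON");
        if (samplingRate != null) {
            planSteps.add("    TABLESAMPLING BY " + samplingRate);
        }
        return planSteps;
    }

    public static void main(String[] args) {
        explain(0.19).forEach(System.out::println);
    }
}
```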

Once implemented, it looks like this for a single-table select
```
CLIENT 3-CHUNK 30 ROWS 2370 BYTES PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER PERSON
TABLESAMPLING BY 0.19
```

And for a join select
```
CLIENT 9-CHUNK 30 ROWS 2370 BYTES PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER PERSON
TABLESAMPLING BY 0.65
PARALLEL INNER-JOIN TABLE 0
    CLIENT 2-CHUNK 1 ROWS 32 BYTES PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER US_POPULATION
    TABLESAMPLING BY 0.9504
AFTER-JOIN SERVER FILTER BY PERSON.ADDRESS > US_POPULATION.STATE
```


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3994) Index RPC priority still depends on the controller factory property in hbase-site.xml

2017-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083136#comment-16083136
 ] 

Hudson commented on PHOENIX-3994:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1678 (See 
[https://builds.apache.org/job/Phoenix-master/1678/])
PHOENIX-3994 Index RPC priority still depends on the controller factory 
(samarth: rev c9bc3a7e5e380b0f6225091429976c8d705d15d1)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/IndexWriterUtils.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/ParallelWriterIndexCommitter.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/rpc/PhoenixServerRpcIT.java
* (edit) phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java


> Index RPC priority still depends on the controller factory property in 
> hbase-site.xml
> -
>
> Key: PHOENIX-3994
> URL: https://issues.apache.org/jira/browse/PHOENIX-3994
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Sergey Soldatov
>Assignee: Samarth Jain
>Priority: Critical
> Fix For: 4.12.0, 4.11.1
>
> Attachments: PHOENIX-3994.patch, PHOENIX-3994_v2.patch
>
>
> During PHOENIX-3360 we tried to remove the dependency on the 
> hbase.rpc.controllerfactory.class property in hbase-site.xml, since it causes 
> problems on the client side (if the client uses the server-side configuration, 
> all client requests may go out with index priority). The committed solution 
> sets the controller factory programmatically for the coprocessor environment 
> in the Indexer class, but it turns out that this doesn't work because the 
> environment configuration is not used when the coprocessor connection is 
> created. We need to provide a better solution, since this issue may cause 
> accidental locks and failures that are hard to identify and avoid. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-3978) Expose mutation failures in our metrics

2017-07-11 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-3978:

Attachment: PHOENIX-3978-v2.patch
PHOENIX-3978-4.x-HBase-0.98-v2.patch

[~samarthjain]

Thanks for the review. I have attached a v2 patch.

> Expose mutation failures in our metrics
> ---
>
> Key: PHOENIX-3978
> URL: https://issues.apache.org/jira/browse/PHOENIX-3978
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Thomas D'Silva
> Attachments: PHOENIX-3978-4.x-HBase-0.98-v2.patch, 
> PHOENIX-3978.patch, PHOENIX-3978-v2.patch
>
>
> We should be exposing whether a mutation has failed through our metrics 
> system. This should be done for both global and request-level metrics. 
> The task basically boils down to:
> 1) Adding a new enum MUTATION_BATCH_FAILED_COUNTER in MetricType.
> 2) Adding a new enum GLOBAL_MUTATION_BATCH_FAILED_COUNTER in 
> GlobalClientMetrics.
> 3) Adding a new CombinableMetric member called mutationBatchFailed to the 
> MutationMetric class.
> 4) Making sure that the two metrics are updated within the catch-exception 
> block of MutationState#send() (see the sketch after this list).
> 5) Adding a unit test in PhoenixMetricsIT.
> FYI, [~tdsilva]
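
A minimal, self-contained sketch of the step-4 pattern; the AtomicLong below is 
a stand-in for GlobalClientMetrics and send() is a stand-in for 
MutationState#send(), so this is illustrative, not the attached patch:

{code}
import java.util.concurrent.atomic.AtomicLong;

public class MutationFailureMetricSketch {
    // Stand-in for GLOBAL_MUTATION_BATCH_FAILED_COUNTER in GlobalClientMetrics.
    static final AtomicLong GLOBAL_MUTATION_BATCH_FAILED_COUNTER = new AtomicLong();

    static void send(int batchSize) throws Exception {
        try {
            throw new Exception("simulated commit failure");
        } catch (Exception e) {
            // Step 4: bump the failure metric by the size of the failed batch
            // before rethrowing, so callers still see the original error.
            GLOBAL_MUTATION_BATCH_FAILED_COUNTER.addAndGet(batchSize);
            throw e;
        }
    }

    public static void main(String[] args) {
        try { send(3); } catch (Exception ignored) { }
        System.out.println("failed mutations: " + GLOBAL_MUTATION_BATCH_FAILED_COUNTER.get());
    }
}
{code}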



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] phoenix pull request #262: PHOENIX 153 implement TABLESAMPLE clause

2017-07-11 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/262#discussion_r126833016
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/parse/SelectStatement.java ---
@@ -267,6 +267,12 @@ public LimitNode getLimit() {
 }
 
 @Override
+public Double getTableSamplingRate(){
+   if(fromTable==null || !(fromTable instanceof ConcreteTableNode)) return null;
+   return ((ConcreteTableNode)fromTable).getTableSamplingRate();
--- End diff --

Sounds reasonable. Please add a comment explaining as you've done here.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix pull request #262: PHOENIX 153 implement TABLESAMPLE clause

2017-07-11 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/262#discussion_r126833346
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryWithTableSampleIT.java 
---
@@ -0,0 +1,261 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.Properties;
+
+import org.apache.phoenix.exception.PhoenixParserException;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.TestUtil;
+import org.junit.Before;
+import org.junit.Test;
+
+
+public class QueryWithTableSampleIT extends ParallelStatsEnabledIT {
+    private String tableName;
+    private String joinedTableName;
+
+    @Before
+    public void generateTableNames() {
+        tableName = "T_" + generateUniqueName();
+        joinedTableName = "T_" + generateUniqueName();
+    }
+
+    @Test(expected=PhoenixParserException.class)
+    public void testSingleQueryWrongSyntax() throws Exception {
+        Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
+        Connection conn = DriverManager.getConnection(getUrl(), props);
+        try {
+            prepareTableWithValues(conn, 100);
+            String query = "SELECT i1, i2 FROM " + tableName + " tablesample 15 ";
+
+            ResultSet rs = conn.createStatement().executeQuery(query);
+            inspect(rs);
+        } finally {
+            conn.close();
+        }
+    }
+
+    @Test(expected=PhoenixParserException.class)
+    public void testSingleQueryWrongSamplingRate() throws Exception {
+        Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
+        Connection conn = DriverManager.getConnection(getUrl(), props);
+        try {
+            prepareTableWithValues(conn, 100);
+            String query = "SELECT i1, i2 FROM " + tableName + " tablesample (175) ";
+
+            ResultSet rs = conn.createStatement().executeQuery(query);
+            inspect(rs);
+        } finally {
+            conn.close();
+        }
+    }
+
+    @Test
+    public void testSingleQueryZeroSamplingRate() throws Exception {
+        Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
+        Connection conn = DriverManager.getConnection(getUrl(), props);
+        try {
+            prepareTableWithValues(conn, 100);
+            String query = "SELECT i1, i2 FROM " + tableName + " tablesample (0) ";
+            ResultSet rs = conn.createStatement().executeQuery(query);
+            assertFalse(rs.next());
+        } finally {
+            conn.close();
+        }
+    }
+
+    @Test
+    public void testSingleQuery() throws Exception {
+        Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
+        Connection conn = DriverManager.getConnection(getUrl(), props);
+        try {
+            prepareTableWithValues(conn, 100);
+            String query = "SELECT i1, i2 FROM " + tableName + " tablesample (45) ";
+            ResultSet rs = conn.createStatement().executeQuery(query);
+
+            assertTrue(rs.next());
+            assertEquals(2, rs.getInt(1));
+            assertEquals(200, rs.getInt(2));
+
+            assertTrue(rs.next());
+            assertEquals(6, rs.getInt(1));
+            assertEquals(600, rs.getInt(2));
+
+        } finally {
+            conn.close();
+        }
+    }
+
+

[GitHub] phoenix pull request #262: PHOENIX 153 implement TABLESAMPLE clause

2017-07-11 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/262#discussion_r126833276
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryWithTableSampleIT.java 
---
@@ -0,0 +1,261 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.Properties;
+
+import org.apache.phoenix.exception.PhoenixParserException;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.TestUtil;
+import org.junit.Before;
+import org.junit.Test;
+
+
+public class QueryWithTableSampleIT extends ParallelStatsEnabledIT {
+private String tableName;
+private String joinedTableName;
+
+@Before
+public void generateTableNames() {
+tableName = "T_" + generateUniqueName();
+joinedTableName = "T_" + generateUniqueName();
+}
+
+    @Test(expected=PhoenixParserException.class)
+    public void testSingleQueryWrongSyntax() throws Exception {
+        Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
+        Connection conn = DriverManager.getConnection(getUrl(), props);
+        try {
+            prepareTableWithValues(conn, 100);
+            String query = "SELECT i1, i2 FROM " + tableName + " tablesample 15 ";
+
+            ResultSet rs = conn.createStatement().executeQuery(query);
+            inspect(rs);
+        } finally {
+            conn.close();
+        }
+    }
+
+    @Test(expected=PhoenixParserException.class)
+    public void testSingleQueryWrongSamplingRate() throws Exception {
+        Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
+        Connection conn = DriverManager.getConnection(getUrl(), props);
+        try {
+            prepareTableWithValues(conn, 100);
+            String query = "SELECT i1, i2 FROM " + tableName + " tablesample (175) ";
+
+            ResultSet rs = conn.createStatement().executeQuery(query);
+            inspect(rs);
+        } finally {
+            conn.close();
+        }
+    }
+
+    @Test
+    public void testSingleQueryZeroSamplingRate() throws Exception {
+        Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
+        Connection conn = DriverManager.getConnection(getUrl(), props);
+        try {
+            prepareTableWithValues(conn, 100);
+            String query = "SELECT i1, i2 FROM " + tableName + " tablesample (0) ";
+            ResultSet rs = conn.createStatement().executeQuery(query);
+            assertFalse(rs.next());
+        } finally {
+            conn.close();
+        }
+    }
+
+    @Test
+    public void testSingleQuery() throws Exception {
+        Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
+        Connection conn = DriverManager.getConnection(getUrl(), props);
+        try {
+            prepareTableWithValues(conn, 100);
+            String query = "SELECT i1, i2 FROM " + tableName + " tablesample (45) ";
+            ResultSet rs = conn.createStatement().executeQuery(query);
+
+            assertTrue(rs.next());
+            assertEquals(2, rs.getInt(1));
+            assertEquals(200, rs.getInt(2));
+
+            assertTrue(rs.next());
+            assertEquals(6, rs.getInt(1));
+            assertEquals(600, rs.getInt(2));
+
+        } finally {
+            conn.close();
+        }
+    }
+
+