[jira] [Commented] (PHOENIX-4455) Index table exists dirty data

2017-12-14 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292125#comment-16292125
 ] 

Ankit Singhal commented on PHOENIX-4455:


Can you try our Index Scrutiny tool and post the keys that do not match 
between the index table and the data table?
Details on how to run the Index Scrutiny tool can be found at 
https://phoenix.apache.org/secondary_indexing.html
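
A minimal invocation sketch (hedged: the flag names are per the secondary indexing page 
above and should be verified against your Phoenix release; the table names are placeholders):

{code}
# Hedged sketch -- DATA_TABLE / INDEX_TABLE are placeholders, and the flags
# should be checked against the Phoenix version you are running.
hbase org.apache.phoenix.mapreduce.index.IndexScrutinyTool \
    -dt DATA_TABLE \
    -it INDEX_TABLE \
    -o   # write the non-matching rows to output tables for inspection
{code}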


> Index table exists dirty data
> -
>
> Key: PHOENIX-4455
> URL: https://issues.apache.org/jira/browse/PHOENIX-4455
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: asko
>Priority: Critical
> Attachments: querypaln.png, select table.png
>
>
> The result of the first query has one record, but the next query returns two records 
> from the index table. It seems that the index has dirty data.
> Index column information and query plans:
> !querypaln.png!
> The results of the query:
> !select table.png!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4458) Region dead lock when executing duplicate key upsert data table(have local index)

2017-12-14 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292119#comment-16292119
 ] 

Ankit Singhal commented on PHOENIX-4458:


[~asko], as per your jstack, the index writers are using the same (default) handler 
pool that is used for the scan. 
So, until you upgrade to 4.12 (which has PHOENIX-3994 to automatically configure the 
RPC controller factory for the indexer), you need to manually set the configuration 
below in the server's hbase-site.xml (and ensure that this configuration does not 
exist in the client's hbase-site.xml).

{code}
<property>
  <name>hbase.region.server.rpc.scheduler.factory.class</name>
  <value>org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory</value>
  <description>Factory to create the Phoenix RPC Scheduler that uses separate
    queues for index and metadata updates</description>
</property>
<property>
  <name>hbase.rpc.controllerfactory.class</name>
  <value>org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory</value>
  <description>Factory to create the Phoenix RPC Scheduler that uses separate
    queues for index and metadata updates</description>
</property>
{code}

> Region dead lock when executing duplicate key upsert data table(have local 
> index)
> -
>
> Key: PHOENIX-4458
> URL: https://issues.apache.org/jira/browse/PHOENIX-4458
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: asko
>Priority: Critical
> Attachments: RegionDeadLockTest.java, jstack
>
>
> The attached file *RegionDeadLockTest.java* can reproduce this bug after running 
> for a few minutes.
> The region will hang and can no longer be read from or written to.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4372) Distribution of Apache Phoenix 4.13 for CDH 5.11.2

2017-12-14 Thread David New (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292057#comment-16292057
 ] 

David New commented on PHOENIX-4372:


Is there a corresponding Phoenix parcel version for CDH 5.13 (HBase 1.2)?

> Distribution of Apache Phoenix 4.13 for CDH 5.11.2
> --
>
> Key: PHOENIX-4372
> URL: https://issues.apache.org/jira/browse/PHOENIX-4372
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.13.0
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Minor
>  Labels: cdh
> Attachments: PHOENIX-4372-v2.patch, PHOENIX-4372-v3.patch, 
> PHOENIX-4372-v4.patch, PHOENIX-4372-v5.patch, PHOENIX-4372-v6.patch, 
> PHOENIX-4372-v7.patch, PHOENIX-4372.patch
>
>
> Changes required on top of branch 4.13-HBase-1.2 for creating a parcel of 
> Apache Phoenix 4.13.0 for CDH 5.11.2 . 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4450) When I use phoenix queary below my project appeared on such an error Can anyone help me?

2017-12-14 Thread David New (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David New updated PHOENIX-4450:
---
Priority: Critical  (was: Major)

> When I use phoenix queary below my project appeared on such an error Can 
> anyone help me?
> 
>
> Key: PHOENIX-4450
> URL: https://issues.apache.org/jira/browse/PHOENIX-4450
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: David New
>Priority: Critical
>  Labels: jdbc, phoenix, thin
>
>  
> {code:java}
> Class.forName("org.apache.phoenix.queryserver.client.Driver");
> Connection conn = DriverManager.getConnection(
>         "jdbc:phoenix:thin:url=http://192.168.0.1:8765;serialization=PROTOBUF;");
> String sqlerr = "  SELECT   TO_CHAR(TO_DATE(SUCCESS_TIME,?),'yyyy-MM-dd') as success, "
>         + "  COUNT(DISTINCT USER_ID) recharge_rs, "
>         + "  COUNT(ID) recharge_rc, "
>         + "  SUM(TO_NUMBER(ACTUAL_MONEY)) recharge_money "
>         + "  FROM   RECHARGE "
>         + "  WHERE   STATUS = 'success'   AND RECHARGE_WAY != 'admin' "
>         + "  GROUP BY   TO_CHAR(TO_DATE(SUCCESS_TIME,?),'yyyy-MM-dd') ";
> PreparedStatement pstmt = conn.prepareStatement(sqlerr);
> pstmt.setString(1, "yyyy-MM-dd");
> pstmt.setString(2, "yyyy-MM-dd");
> ResultSet rs = pstmt.executeQuery();
> while (rs.next()) {
>     System.out.println(rs.getString("success"));
> }
> {code}
> 
> {code:java}
> AvaticaClientRuntimeException: Remote driver error: RuntimeException: 
> java.sql.SQLException: ERROR 2004 (INT05): Parameter value unbound. Parameter 
> at index 1 is unbound -> SQLException: ERROR 2004 (INT05): Parameter value 
> unbound. Parameter at index 1 is unbound. Error -1 (0) null
> java.lang.RuntimeException: java.sql.SQLException: ERROR 2004 (INT05): 
> Parameter value unbound. Parameter at index 1 is unbound
>   at org.apache.calcite.avatica.jdbc.JdbcMeta.propagate(JdbcMeta.java:683)
>   at org.apache.calcite.avatica.jdbc.JdbcMeta.execute(JdbcMeta.java:880)
>   at 
> org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:254)
>   at 
> org.apache.calcite.avatica.remote.Service$ExecuteRequest.accept(Service.java:1032)
>   at 
> org.apache.calcite.avatica.remote.Service$ExecuteRequest.accept(Service.java:1002)
>   at 
> org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:94)
>   at 
> org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
>   at 
> org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:127)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.Server.handle(Server.java:499)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.sql.SQLException: ERROR 2004 (INT05): Parameter value 
> unbound. Parameter at index 1 is unbound
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:483)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.jdbc.PhoenixParameterMetaData.getParam(PhoenixParameterMetaData.java:89)
>   at 
> org.apache.phoenix.jdbc.PhoenixParameterMetaData.isSigned(PhoenixParameterMetaData.java:138)
>   at 
> org.apache.calcite.avatica.jdbc.JdbcMeta.parameters(JdbcMeta.java:270)
>   at org.apache.calcite.avatica.jdbc.JdbcMeta.signature(JdbcMeta.java:282)
>   at org.apache.calcite.avatica.jdbc.JdbcMeta.execute(JdbcMeta.java:856)
>   ... 15 more
>   at 
> org.apache.calcite.avatica.remote.Service$ErrorResponse.toException(Service.java:2476)
>   at 
> 

[jira] [Updated] (PHOENIX-4450) When I use phoenix queary below my project appeared on such an error Can anyone help me?

2017-12-14 Thread David New (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David New updated PHOENIX-4450:
---
Issue Type: Bug  (was: Test)

> When I use phoenix queary below my project appeared on such an error Can 
> anyone help me?
> 
>
> Key: PHOENIX-4450
> URL: https://issues.apache.org/jira/browse/PHOENIX-4450
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: David New
>  Labels: jdbc, phoenix, thin
>
>  
> {code:java}
> Class.forName("org.apache.phoenix.queryserver.client.Driver");
> Connection conn = DriverManager.getConnection(
>         "jdbc:phoenix:thin:url=http://192.168.0.1:8765;serialization=PROTOBUF;");
> String sqlerr = "  SELECT   TO_CHAR(TO_DATE(SUCCESS_TIME,?),'yyyy-MM-dd') as success, "
>         + "  COUNT(DISTINCT USER_ID) recharge_rs, "
>         + "  COUNT(ID) recharge_rc, "
>         + "  SUM(TO_NUMBER(ACTUAL_MONEY)) recharge_money "
>         + "  FROM   RECHARGE "
>         + "  WHERE   STATUS = 'success'   AND RECHARGE_WAY != 'admin' "
>         + "  GROUP BY   TO_CHAR(TO_DATE(SUCCESS_TIME,?),'yyyy-MM-dd') ";
> PreparedStatement pstmt = conn.prepareStatement(sqlerr);
> pstmt.setString(1, "yyyy-MM-dd");
> pstmt.setString(2, "yyyy-MM-dd");
> ResultSet rs = pstmt.executeQuery();
> while (rs.next()) {
>     System.out.println(rs.getString("success"));
> }
> {code}
> 
> {code:java}
> AvaticaClientRuntimeException: Remote driver error: RuntimeException: 
> java.sql.SQLException: ERROR 2004 (INT05): Parameter value unbound. Parameter 
> at index 1 is unbound -> SQLException: ERROR 2004 (INT05): Parameter value 
> unbound. Parameter at index 1 is unbound. Error -1 (0) null
> java.lang.RuntimeException: java.sql.SQLException: ERROR 2004 (INT05): 
> Parameter value unbound. Parameter at index 1 is unbound
>   at org.apache.calcite.avatica.jdbc.JdbcMeta.propagate(JdbcMeta.java:683)
>   at org.apache.calcite.avatica.jdbc.JdbcMeta.execute(JdbcMeta.java:880)
>   at 
> org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:254)
>   at 
> org.apache.calcite.avatica.remote.Service$ExecuteRequest.accept(Service.java:1032)
>   at 
> org.apache.calcite.avatica.remote.Service$ExecuteRequest.accept(Service.java:1002)
>   at 
> org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:94)
>   at 
> org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
>   at 
> org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:127)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.Server.handle(Server.java:499)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.sql.SQLException: ERROR 2004 (INT05): Parameter value 
> unbound. Parameter at index 1 is unbound
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:483)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.jdbc.PhoenixParameterMetaData.getParam(PhoenixParameterMetaData.java:89)
>   at 
> org.apache.phoenix.jdbc.PhoenixParameterMetaData.isSigned(PhoenixParameterMetaData.java:138)
>   at 
> org.apache.calcite.avatica.jdbc.JdbcMeta.parameters(JdbcMeta.java:270)
>   at org.apache.calcite.avatica.jdbc.JdbcMeta.signature(JdbcMeta.java:282)
>   at org.apache.calcite.avatica.jdbc.JdbcMeta.execute(JdbcMeta.java:856)
>   ... 15 more
>   at 
> org.apache.calcite.avatica.remote.Service$ErrorResponse.toException(Service.java:2476)
>   at 
> org.apache.calcite.avatica.remote.RemoteProtobufService._apply(RemoteProtobufService.java:63)
>   at 
> 

[jira] [Commented] (PHOENIX-4460) High GC / RS shutdown when we use select query with "IN" clause using 4.10 phoenix client on 4.13 phoenix server

2017-12-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291919#comment-16291919
 ] 

Hadoop QA commented on PHOENIX-4460:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12902178/PHOENIX-4460-v2.patch
  against master branch at commit 5cb02da74c15b0ae7c0fb4c880d60a2d1b6d18aa.
  ATTACHMENT ID: 12902178

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.rpc.PhoenixServerRpcIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1666//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1666//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1666//console

This message is automatically generated.

> High GC / RS shutdown when we use select query with "IN" clause using 4.10 
> phoenix client on 4.13 phoenix server
> 
>
> Key: PHOENIX-4460
> URL: https://issues.apache.org/jira/browse/PHOENIX-4460
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Blocker
> Fix For: 4.14.0, 4.13.2
>
> Attachments: PHOENIX-4460-v2.patch, PHOENIX-4460.patch
>
>
> We were able to reproduce the High GC / RS shutdown / phoenix KeyRange query 
> high object count issue on cluster today. 
> Main observation is that this is reproducible when firing lots of query
> select from xyz where abc in (?, ?, ...)  of this type with 4.10 phoenix 
> client hitting 4.13 phoenix on HBase server side
>  (4.10 client/4.10 server works fine, 4.13 client with 4.13 server works fine)
> We wrote a loader client (attached) with the below table/query , upserted 
> ~100 million rows and fired the query in parallel using 4-5 loader clients 
> with 16 threads each
> {code}
> TABLE:  = "CREATE TABLE " + TABLE_NAME_TEMPLATE 
>  + " (\n" + " TestKey varchar(255) PRIMARY KEY, TestVal1 varchar(200), 
> TestVal2 varchar(200), "  + "TestValue varchar(1))";
> QUERY: = "SELECT * FROM " +  TABLE_NAME_TEMPLATE + " WHERE TestKey IN (?, ?, 
> ?, ?, ?, ?, ?, ?, ?, ?)"
> {code}
> After running this client immediately within a min or two we see the 
> phoenix.query.KeyRange object count immediately going up to several lakhs and 
> keeps on increasing continuously. This count doesn't seem to come down even 
> after shutting down the clients 
> {code}
> -bash-4.1$ ~/current/bigdata-util/tools/Linux/jdk/jdk1.8.0_102_x64/bin/jmap 
> -histo:live 90725 | grep KeyRange
>   47:2748526596448  org.apache.phoenix.query.KeyRange
> 1851: 2 48  org.apache.phoenix.query.KeyRange$Bound
> 2434: 1 24  [Lorg.apache.phoenix.query.KeyRange$Bound;
> 3411: 1 16  org.apache.phoenix.query.KeyRange$1
> 3412: 1 16  org.apache.phoenix.query.KeyRange$2
> {code}
> After some time we also started seeing High GC issues and RegionServers 
> crashing
> Experiment Summary:
> - 4.13 client/4.13 Server --- Issue not reproducible (we do see KeyRange 
> count increasing upto few 100's)
> - 4.10 client/4.10 Server --- Issue not reproducible (we do see KeyRange 
> count increasing upto few 100's)
> - 4.10 client/4.13 Server --- Issue reproducible as described above



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4460) High GC / RS shutdown when we use select query with "IN" clause using 4.10 phoenix client on 4.13 phoenix server

2017-12-14 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291830#comment-16291830
 ] 

Lars Hofhansl commented on PHOENIX-4460:


Yep. +1

> High GC / RS shutdown when we use select query with "IN" clause using 4.10 
> phoenix client on 4.13 phoenix server
> 
>
> Key: PHOENIX-4460
> URL: https://issues.apache.org/jira/browse/PHOENIX-4460
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Blocker
> Fix For: 4.14.0, 4.13.2
>
> Attachments: PHOENIX-4460-v2.patch, PHOENIX-4460.patch
>
>
> We were able to reproduce the High GC / RS shutdown / phoenix KeyRange query 
> high object count issue on cluster today. 
> Main observation is that this is reproducible when firing lots of query
> select from xyz where abc in (?, ?, ...)  of this type with 4.10 phoenix 
> client hitting 4.13 phoenix on HBase server side
>  (4.10 client/4.10 server works fine, 4.13 client with 4.13 server works fine)
> We wrote a loader client (attached) with the below table/query , upserted 
> ~100 million rows and fired the query in parallel using 4-5 loader clients 
> with 16 threads each
> {code}
> TABLE:  = "CREATE TABLE " + TABLE_NAME_TEMPLATE 
>  + " (\n" + " TestKey varchar(255) PRIMARY KEY, TestVal1 varchar(200), 
> TestVal2 varchar(200), "  + "TestValue varchar(1))";
> QUERY: = "SELECT * FROM " +  TABLE_NAME_TEMPLATE + " WHERE TestKey IN (?, ?, 
> ?, ?, ?, ?, ?, ?, ?, ?)"
> {code}
> After running this client immediately within a min or two we see the 
> phoenix.query.KeyRange object count immediately going up to several lakhs and 
> keeps on increasing continuously. This count doesn't seem to come down even 
> after shutting down the clients 
> {code}
> -bash-4.1$ ~/current/bigdata-util/tools/Linux/jdk/jdk1.8.0_102_x64/bin/jmap 
> -histo:live 90725 | grep KeyRange
>   47:2748526596448  org.apache.phoenix.query.KeyRange
> 1851: 2 48  org.apache.phoenix.query.KeyRange$Bound
> 2434: 1 24  [Lorg.apache.phoenix.query.KeyRange$Bound;
> 3411: 1 16  org.apache.phoenix.query.KeyRange$1
> 3412: 1 16  org.apache.phoenix.query.KeyRange$2
> {code}
> After some time we also started seeing High GC issues and RegionServers 
> crashing
> Experiment Summary:
> - 4.13 client/4.13 Server --- Issue not reproducible (we do see KeyRange 
> count increasing upto few 100's)
> - 4.10 client/4.10 Server --- Issue not reproducible (we do see KeyRange 
> count increasing upto few 100's)
> - 4.10 client/4.13 Server --- Issue reproducible as described above



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (PHOENIX-4451) KeyRange has a very high allocation rate

2017-12-14 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved PHOENIX-4451.
-
Resolution: Duplicate

This is a symptom of PHOENIX-4460.

> KeyRange has a very high allocation rate
> 
>
> Key: PHOENIX-4451
> URL: https://issues.apache.org/jira/browse/PHOENIX-4451
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Andrew Purtell
>
> We are looking at sources for elevated GC pressure in production. During some 
> live heap analysis we noticed the KeyRange class appears to be an outlier in 
> terms of numbers of instances found in the live heap. I'm wondering if there 
> is some opportunity for object/allocation reuse here? Or perhaps it can be 
> converted into something amenable to escape analysis so we get stack 
> allocations instead of heap allocations? 
> This is Phoenix 4.13.0 on HBase 0.98.24:
> {noformat}
>  num #instances #bytes  class name
> --
>1: 189592127 20432451240  [B
>2:  77390411 1857369864  org.apache.phoenix.query.KeyRange
>3:  14732411  844013608  [C
>4:  15034590  481106880  java.util.HashMap$Node
>5:   2587783  433834912  [Ljava.lang.Object;
>6:   3336992  400439040  
> org.apache.hadoop.hbase.regionserver.ScanQueryMatcher
>7:   3336992  373743104  
> org.apache.hadoop.hbase.regionserver.StoreScanner
>8:  14729747  353513928  java.lang.String
>9:   5605941  269085168  java.util.TreeMap
>   10:   6511408  208365056  
> java.util.concurrent.ConcurrentHashMap$Node
>   11:   2250216  180176200  [Ljava.util.HashMap$Node;
>   12:   5463124  174819968  org.apache.hadoop.hbase.KeyValue
>   13:   5277319  168874208  java.util.Hashtable$Entry
>   14:   3336992  160175616  
> org.apache.hadoop.hbase.regionserver.ScanDeleteTracker
>   15:   1734848  138787840  
> org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$FastDiffSeekerState
>   16:   1668418  120126096  org.apache.hadoop.hbase.client.Scan
>   17:   2142254  119966224  
> org.apache.hadoop.hbase.regionserver.ScanWildcardColumnTracker
>  num #instances #bytes  class name
> --
>1: 189774274 20451239624  [B
>2:  77446115 1858706760  org.apache.phoenix.query.KeyRange
>3:  14741777  844584352  [C
>4:  15043664  481397248  java.util.HashMap$Node
>5:   2591421  434232680  [Ljava.lang.Object;
>6:   3339244  400709280  
> org.apache.hadoop.hbase.regionserver.ScanQueryMatcher
>7:   3339244  373995328  
> org.apache.hadoop.hbase.regionserver.StoreScanner
>8:  14739076  353737824  java.lang.String
>9:   5609708  269265984  java.util.TreeMap
>   10:   6513671  208437472  
> java.util.concurrent.ConcurrentHashMap$Node
>   11:   2251693  180293648  [Ljava.util.HashMap$Node;
>   12:   5477024  175264768  org.apache.hadoop.hbase.KeyValue
>   13:   5277320  168874240  java.util.Hashtable$Entry
>   14:   3339244  160283712  
> org.apache.hadoop.hbase.regionserver.ScanDeleteTracker
>   15:   1759096  140727680  
> org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$FastDiffSeekerState
>   16:   1669544  120207168  org.apache.hadoop.hbase.client.Scan
>   17:   2143728  120048768  
> org.apache.hadoop.hbase.regionserver.ScanWildcardColumnTracker
>  num #instances #bytes  class name
> --
>1: 189920309 20464274472  [B
>2:  77499190 1859980560  org.apache.phoenix.query.KeyRange
>3:  14748627  845142696  [C
>4:  15049176  481573632  java.util.HashMap$Node
>5:   2593838  434563512  [Ljava.lang.Object;
>6:   3340548  400865760  
> org.apache.hadoop.hbase.regionserver.ScanQueryMatcher
>7:   3340548  374141376  
> org.apache.hadoop.hbase.regionserver.StoreScanner
>8:  14745909  353901816  java.lang.String
>9:   5611921  269372208  java.util.TreeMap
>   10:   6545786  209465152  
> java.util.concurrent.ConcurrentHashMap$Node
>   11:   2252716  180374216  [Ljava.util.HashMap$Node;
>   12:   5484841  175514912  org.apache.hadoop.hbase.KeyValue
>   13:   5338662  170837184  java.util.Hashtable$Entry
>   14:   3340548  160346304  
> org.apache.hadoop.hbase.regionserver.ScanDeleteTracker
>   15:   1771616  141729280  
> org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$FastDiffSeekerState
>   16:   1670196  120254112  org.apache.hadoop.hbase.client.Scan
>   17:   

[jira] [Commented] (PHOENIX-4460) High GC / RS shutdown when we use select query with "IN" clause using 4.10 phoenix client on 4.13 phoenix server

2017-12-14 Thread Geoffrey Jacoby (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291819#comment-16291819
 ] 

Geoffrey Jacoby commented on PHOENIX-4460:
--

+1

> High GC / RS shutdown when we use select query with "IN" clause using 4.10 
> phoenix client on 4.13 phoenix server
> 
>
> Key: PHOENIX-4460
> URL: https://issues.apache.org/jira/browse/PHOENIX-4460
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Blocker
> Fix For: 4.14.0, 4.13.2
>
> Attachments: PHOENIX-4460-v2.patch, PHOENIX-4460.patch
>
>
> We were able to reproduce the High GC / RS shutdown / phoenix KeyRange query 
> high object count issue on cluster today. 
> Main observation is that this is reproducible when firing lots of query
> select from xyz where abc in (?, ?, ...)  of this type with 4.10 phoenix 
> client hitting 4.13 phoenix on HBase server side
>  (4.10 client/4.10 server works fine, 4.13 client with 4.13 server works fine)
> We wrote a loader client (attached) with the below table/query , upserted 
> ~100 million rows and fired the query in parallel using 4-5 loader clients 
> with 16 threads each
> {code}
> TABLE:  = "CREATE TABLE " + TABLE_NAME_TEMPLATE 
>  + " (\n" + " TestKey varchar(255) PRIMARY KEY, TestVal1 varchar(200), 
> TestVal2 varchar(200), "  + "TestValue varchar(1))";
> QUERY: = "SELECT * FROM " +  TABLE_NAME_TEMPLATE + " WHERE TestKey IN (?, ?, 
> ?, ?, ?, ?, ?, ?, ?, ?)"
> {code}
> After running this client immediately within a min or two we see the 
> phoenix.query.KeyRange object count immediately going up to several lakhs and 
> keeps on increasing continuously. This count doesn't seem to come down even 
> after shutting down the clients 
> {code}
> -bash-4.1$ ~/current/bigdata-util/tools/Linux/jdk/jdk1.8.0_102_x64/bin/jmap 
> -histo:live 90725 | grep KeyRange
>   47:2748526596448  org.apache.phoenix.query.KeyRange
> 1851: 2 48  org.apache.phoenix.query.KeyRange$Bound
> 2434: 1 24  [Lorg.apache.phoenix.query.KeyRange$Bound;
> 3411: 1 16  org.apache.phoenix.query.KeyRange$1
> 3412: 1 16  org.apache.phoenix.query.KeyRange$2
> {code}
> After some time we also started seeing High GC issues and RegionServers 
> crashing
> Experiment Summary:
> - 4.13 client/4.13 Server --- Issue not reproducible (we do see KeyRange 
> count increasing upto few 100's)
> - 4.10 client/4.10 Server --- Issue not reproducible (we do see KeyRange 
> count increasing upto few 100's)
> - 4.10 client/4.13 Server --- Issue reproducible as described above



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4460) High GC / RS shutdown when we use select query with "IN" clause using 4.10 phoenix client on 4.13 phoenix server

2017-12-14 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4460:

Attachment: PHOENIX-4460-v2.patch

[~jamestaylor]

Closing the KeyValueScanner before creating a new one in 
{{BaseScannerRegionObserver.preStoreScannerOpen}} seems to fix the issue
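
A rough sketch of that idea (hedged; this is not the committed patch, and 
createModifiedStoreScanner() is a hypothetical stand-in for the Phoenix logic that 
builds the replacement scanner):

{code:java}
import java.io.IOException;
import java.util.NavigableSet;

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.KeyValueScanner;
import org.apache.hadoop.hbase.regionserver.Store;

/** Hedged sketch only -- not the committed Phoenix patch. */
public class CloseBeforeReplaceObserver extends BaseRegionObserver {

    @Override
    public KeyValueScanner preStoreScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
            Store store, Scan scan, NavigableSet<byte[]> targetCols, KeyValueScanner s)
            throws IOException {
        if (s != null) {
            // Close the scanner HBase already opened before substituting our own;
            // if it is discarded without close(), its filter state stays reachable.
            s.close();
        }
        // Hypothetical helper standing in for the Phoenix logic that builds a
        // replacement StoreScanner with adjusted scan settings.
        return createModifiedStoreScanner(c, store, scan, targetCols);
    }

    private KeyValueScanner createModifiedStoreScanner(ObserverContext<RegionCoprocessorEnvironment> c,
            Store store, Scan scan, NavigableSet<byte[]> targetCols) throws IOException {
        return null; // placeholder: returning null falls back to HBase's default scanner
    }
}
{code}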

> High GC / RS shutdown when we use select query with "IN" clause using 4.10 
> phoenix client on 4.13 phoenix server
> 
>
> Key: PHOENIX-4460
> URL: https://issues.apache.org/jira/browse/PHOENIX-4460
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Blocker
> Fix For: 4.14.0, 4.13.2
>
> Attachments: PHOENIX-4460-v2.patch, PHOENIX-4460.patch
>
>
> We were able to reproduce the High GC / RS shutdown / phoenix KeyRange query 
> high object count issue on cluster today. 
> Main observation is that this is reproducible when firing lots of query
> select from xyz where abc in (?, ?, ...)  of this type with 4.10 phoenix 
> client hitting 4.13 phoenix on HBase server side
>  (4.10 client/4.10 server works fine, 4.13 client with 4.13 server works fine)
> We wrote a loader client (attached) with the below table/query , upserted 
> ~100 million rows and fired the query in parallel using 4-5 loader clients 
> with 16 threads each
> {code}
> TABLE:  = "CREATE TABLE " + TABLE_NAME_TEMPLATE 
>  + " (\n" + " TestKey varchar(255) PRIMARY KEY, TestVal1 varchar(200), 
> TestVal2 varchar(200), "  + "TestValue varchar(1))";
> QUERY: = "SELECT * FROM " +  TABLE_NAME_TEMPLATE + " WHERE TestKey IN (?, ?, 
> ?, ?, ?, ?, ?, ?, ?, ?)"
> {code}
> After running this client immediately within a min or two we see the 
> phoenix.query.KeyRange object count immediately going up to several lakhs and 
> keeps on increasing continuously. This count doesn't seem to come down even 
> after shutting down the clients 
> {code}
> -bash-4.1$ ~/current/bigdata-util/tools/Linux/jdk/jdk1.8.0_102_x64/bin/jmap 
> -histo:live 90725 | grep KeyRange
>   47:2748526596448  org.apache.phoenix.query.KeyRange
> 1851: 2 48  org.apache.phoenix.query.KeyRange$Bound
> 2434: 1 24  [Lorg.apache.phoenix.query.KeyRange$Bound;
> 3411: 1 16  org.apache.phoenix.query.KeyRange$1
> 3412: 1 16  org.apache.phoenix.query.KeyRange$2
> {code}
> After some time we also started seeing High GC issues and RegionServers 
> crashing
> Experiment Summary:
> - 4.13 client/4.13 Server --- Issue not reproducible (we do see KeyRange 
> count increasing upto few 100's)
> - 4.10 client/4.10 Server --- Issue not reproducible (we do see KeyRange 
> count increasing upto few 100's)
> - 4.10 client/4.13 Server --- Issue reproducible as described above



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4460) High GC / RS shutdown when we use select query with "IN" clause using 4.10 phoenix client on 4.13 phoenix server

2017-12-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291490#comment-16291490
 ] 

Hadoop QA commented on PHOENIX-4460:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12902112/PHOENIX-4460.patch
  against master branch at commit 5cb02da74c15b0ae7c0fb4c880d60a2d1b6d18aa.
  ATTACHMENT ID: 12902112

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.AlterSessionIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.AggregateIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1665//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1665//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1665//console

This message is automatically generated.

> High GC / RS shutdown when we use select query with "IN" clause using 4.10 
> phoenix client on 4.13 phoenix server
> 
>
> Key: PHOENIX-4460
> URL: https://issues.apache.org/jira/browse/PHOENIX-4460
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Blocker
> Fix For: 4.14.0, 4.13.2
>
> Attachments: PHOENIX-4460.patch
>
>
> We were able to reproduce the High GC / RS shutdown / phoenix KeyRange query 
> high object count issue on cluster today. 
> Main observation is that this is reproducible when firing lots of query
> select from xyz where abc in (?, ?, ...)  of this type with 4.10 phoenix 
> client hitting 4.13 phoenix on HBase server side
>  (4.10 client/4.10 server works fine, 4.13 client with 4.13 server works fine)
> We wrote a loader client (attached) with the below table/query , upserted 
> ~100 million rows and fired the query in parallel using 4-5 loader clients 
> with 16 threads each
> {code}
> TABLE:  = "CREATE TABLE " + TABLE_NAME_TEMPLATE 
>  + " (\n" + " TestKey varchar(255) PRIMARY KEY, TestVal1 varchar(200), 
> TestVal2 varchar(200), "  + "TestValue varchar(1))";
> QUERY: = "SELECT * FROM " +  TABLE_NAME_TEMPLATE + " WHERE TestKey IN (?, ?, 
> ?, ?, ?, ?, ?, ?, ?, ?)"
> {code}
> After running this client immediately within a min or two we see the 
> phoenix.query.KeyRange object count immediately going up to several lakhs and 
> keeps on increasing continuously. This count doesn't seem to come down even 
> after shutting down the clients 
> {code}
> -bash-4.1$ ~/current/bigdata-util/tools/Linux/jdk/jdk1.8.0_102_x64/bin/jmap 
> -histo:live 90725 | grep KeyRange
>   47:2748526596448  org.apache.phoenix.query.KeyRange
> 1851: 2 48  org.apache.phoenix.query.KeyRange$Bound
> 2434: 1 24  [Lorg.apache.phoenix.query.KeyRange$Bound;
> 3411: 1 16  org.apache.phoenix.query.KeyRange$1
> 3412: 1 16  org.apache.phoenix.query.KeyRange$2
> {code}
> After some time we also started seeing High GC issues and RegionServers 
> crashing
> Experiment Summary:
> - 4.13 client/4.13 Server --- Issue not reproducible (we do see KeyRange 
> count increasing upto few 100's)
> - 4.10 client/4.10 Server --- Issue not reproducible (we do see KeyRange 
> count increasing upto few 100's)
> - 4.10 client/4.13 Server --- Issue reproducible as described above



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4460) High GC / RS shutdown when we use select query with "IN" clause using 4.10 phoenix client on 4.13 phoenix server

2017-12-14 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291345#comment-16291345
 ] 

Lars Hofhansl commented on PHOENIX-4460:


Can we capture different client versions in a unit test? This might be a worthwhile 
thing to add.
(If that needs a whole lot of new plumbing then it's overkill.)

> High GC / RS shutdown when we use select query with "IN" clause using 4.10 
> phoenix client on 4.13 phoenix server
> 
>
> Key: PHOENIX-4460
> URL: https://issues.apache.org/jira/browse/PHOENIX-4460
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Blocker
> Fix For: 4.14.0, 4.13.2
>
> Attachments: PHOENIX-4460.patch
>
>
> We were able to reproduce the High GC / RS shutdown / phoenix KeyRange query 
> high object count issue on cluster today. 
> Main observation is that this is reproducible when firing lots of query
> select from xyz where abc in (?, ?, ...)  of this type with 4.10 phoenix 
> client hitting 4.13 phoenix on HBase server side
>  (4.10 client/4.10 server works fine, 4.13 client with 4.13 server works fine)
> We wrote a loader client (attached) with the below table/query , upserted 
> ~100 million rows and fired the query in parallel using 4-5 loader clients 
> with 16 threads each
> {code}
> TABLE:  = "CREATE TABLE " + TABLE_NAME_TEMPLATE 
>  + " (\n" + " TestKey varchar(255) PRIMARY KEY, TestVal1 varchar(200), 
> TestVal2 varchar(200), "  + "TestValue varchar(1))";
> QUERY: = "SELECT * FROM " +  TABLE_NAME_TEMPLATE + " WHERE TestKey IN (?, ?, 
> ?, ?, ?, ?, ?, ?, ?, ?)"
> {code}
> After running this client immediately within a min or two we see the 
> phoenix.query.KeyRange object count immediately going up to several lakhs and 
> keeps on increasing continuously. This count doesn't seem to come down even 
> after shutting down the clients 
> {code}
> -bash-4.1$ ~/current/bigdata-util/tools/Linux/jdk/jdk1.8.0_102_x64/bin/jmap 
> -histo:live 90725 | grep KeyRange
>   47:2748526596448  org.apache.phoenix.query.KeyRange
> 1851: 2 48  org.apache.phoenix.query.KeyRange$Bound
> 2434: 1 24  [Lorg.apache.phoenix.query.KeyRange$Bound;
> 3411: 1 16  org.apache.phoenix.query.KeyRange$1
> 3412: 1 16  org.apache.phoenix.query.KeyRange$2
> {code}
> After some time we also started seeing High GC issues and RegionServers 
> crashing
> Experiment Summary:
> - 4.13 client/4.13 Server --- Issue not reproducible (we do see KeyRange 
> count increasing upto few 100's)
> - 4.10 client/4.10 Server --- Issue not reproducible (we do see KeyRange 
> count increasing upto few 100's)
> - 4.10 client/4.13 Server --- Issue reproducible as described above



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4460) High GC / RS shutdown when we use select query with "IN" clause using 4.10 phoenix client on 4.13 phoenix server

2017-12-14 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291313#comment-16291313
 ] 

James Taylor commented on PHOENIX-4460:
---

Something must be holding onto the KeyRange list, preventing it from being GCed. This 
patch simply clears that list when the filtering is done, so that it can be GCed (and 
GCed earlier). 

Still need to figure out the *why* (it only happens with an old client / new server) 
and the *who* (is something holding on to the filter instance?), but I'm focusing 
mainly on a quick solution right now. Let's see if it helps - the test to repro looks 
pretty straightforward.
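
A hedged illustration of that kind of change (not the actual patch): drop the 
reference to the KeyRange slots once the filter reports it is done, so the list can 
be collected even if something retains the Filter instance.

{code:java}
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.filter.FilterBase;
import org.apache.phoenix.query.KeyRange;

/** Hedged sketch only -- not the actual PHOENIX-4460 patch. */
public class ReleasingSkipScanFilterSketch extends FilterBase {

    private List<List<KeyRange>> slots;  // per-scan key ranges, set at construction (omitted)
    private boolean isDone;

    @Override
    public ReturnCode filterKeyValue(Cell cell) throws IOException {
        // Real skip-scan logic omitted; the sketch only shows the release-on-done idea.
        return isDone ? ReturnCode.NEXT_ROW : ReturnCode.INCLUDE;
    }

    @Override
    public boolean filterAllRemaining() {
        if (isDone && slots != null) {
            // Filtering is finished; drop the (potentially large) KeyRange list so it
            // can be GCed even if something keeps a reference to the Filter itself.
            slots = null;
        }
        return isDone;
    }
}
{code}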

> High GC / RS shutdown when we use select query with "IN" clause using 4.10 
> phoenix client on 4.13 phoenix server
> 
>
> Key: PHOENIX-4460
> URL: https://issues.apache.org/jira/browse/PHOENIX-4460
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Blocker
> Fix For: 4.14.0, 4.13.2
>
> Attachments: PHOENIX-4460.patch
>
>
> We were able to reproduce the High GC / RS shutdown / phoenix KeyRange query 
> high object count issue on cluster today. 
> Main observation is that this is reproducible when firing lots of query
> select from xyz where abc in (?, ?, ...)  of this type with 4.10 phoenix 
> client hitting 4.13 phoenix on HBase server side
>  (4.10 client/4.10 server works fine, 4.13 client with 4.13 server works fine)
> We wrote a loader client (attached) with the below table/query , upserted 
> ~100 million rows and fired the query in parallel using 4-5 loader clients 
> with 16 threads each
> {code}
> TABLE:  = "CREATE TABLE " + TABLE_NAME_TEMPLATE 
>  + " (\n" + " TestKey varchar(255) PRIMARY KEY, TestVal1 varchar(200), 
> TestVal2 varchar(200), "  + "TestValue varchar(1))";
> QUERY: = "SELECT * FROM " +  TABLE_NAME_TEMPLATE + " WHERE TestKey IN (?, ?, 
> ?, ?, ?, ?, ?, ?, ?, ?)"
> {code}
> After running this client immediately within a min or two we see the 
> phoenix.query.KeyRange object count immediately going up to several lakhs and 
> keeps on increasing continuously. This count doesn't seem to come down even 
> after shutting down the clients 
> {code}
> -bash-4.1$ ~/current/bigdata-util/tools/Linux/jdk/jdk1.8.0_102_x64/bin/jmap 
> -histo:live 90725 | grep KeyRange
>   47:2748526596448  org.apache.phoenix.query.KeyRange
> 1851: 2 48  org.apache.phoenix.query.KeyRange$Bound
> 2434: 1 24  [Lorg.apache.phoenix.query.KeyRange$Bound;
> 3411: 1 16  org.apache.phoenix.query.KeyRange$1
> 3412: 1 16  org.apache.phoenix.query.KeyRange$2
> {code}
> After some time we also started seeing High GC issues and RegionServers 
> crashing
> Experiment Summary:
> - 4.13 client/4.13 Server --- Issue not reproducible (we do see KeyRange 
> count increasing upto few 100's)
> - 4.10 client/4.10 Server --- Issue not reproducible (we do see KeyRange 
> count increasing upto few 100's)
> - 4.10 client/4.13 Server --- Issue reproducible as described above



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (PHOENIX-4460) High GC / RS shutdown when we use select query with "IN" clause using 4.10 phoenix client on 4.13 phoenix server

2017-12-14 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291300#comment-16291300
 ] 

Lars Hofhansl edited comment on PHOENIX-4460 at 12/14/17 6:25 PM:
--

That's it? Can you explain? :)
Why would that be triggered only in the 4.10 client/4.13 server scenario?



was (Author: lhofhansl):
That's it? Can you explain? :)

> High GC / RS shutdown when we use select query with "IN" clause using 4.10 
> phoenix client on 4.13 phoenix server
> 
>
> Key: PHOENIX-4460
> URL: https://issues.apache.org/jira/browse/PHOENIX-4460
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Blocker
> Fix For: 4.14.0, 4.13.2
>
> Attachments: PHOENIX-4460.patch
>
>
> We were able to reproduce the High GC / RS shutdown / phoenix KeyRange query 
> high object count issue on cluster today. 
> Main observation is that this is reproducible when firing lots of query
> select from xyz where abc in (?, ?, ...)  of this type with 4.10 phoenix 
> client hitting 4.13 phoenix on HBase server side
>  (4.10 client/4.10 server works fine, 4.13 client with 4.13 server works fine)
> We wrote a loader client (attached) with the below table/query , upserted 
> ~100 million rows and fired the query in parallel using 4-5 loader clients 
> with 16 threads each
> {code}
> TABLE:  = "CREATE TABLE " + TABLE_NAME_TEMPLATE 
>  + " (\n" + " TestKey varchar(255) PRIMARY KEY, TestVal1 varchar(200), 
> TestVal2 varchar(200), "  + "TestValue varchar(1))";
> QUERY: = "SELECT * FROM " +  TABLE_NAME_TEMPLATE + " WHERE TestKey IN (?, ?, 
> ?, ?, ?, ?, ?, ?, ?, ?)"
> {code}
> After running this client immediately within a min or two we see the 
> phoenix.query.KeyRange object count immediately going up to several lakhs and 
> keeps on increasing continuously. This count doesn't seem to come down even 
> after shutting down the clients 
> {code}
> -bash-4.1$ ~/current/bigdata-util/tools/Linux/jdk/jdk1.8.0_102_x64/bin/jmap 
> -histo:live 90725 | grep KeyRange
>   47:2748526596448  org.apache.phoenix.query.KeyRange
> 1851: 2 48  org.apache.phoenix.query.KeyRange$Bound
> 2434: 1 24  [Lorg.apache.phoenix.query.KeyRange$Bound;
> 3411: 1 16  org.apache.phoenix.query.KeyRange$1
> 3412: 1 16  org.apache.phoenix.query.KeyRange$2
> {code}
> After some time we also started seeing High GC issues and RegionServers 
> crashing
> Experiment Summary:
> - 4.13 client/4.13 Server --- Issue not reproducible (we do see KeyRange 
> count increasing upto few 100's)
> - 4.10 client/4.10 Server --- Issue not reproducible (we do see KeyRange 
> count increasing upto few 100's)
> - 4.10 client/4.13 Server --- Issue reproducible as described above



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4460) High GC / RS shutdown when we use select query with "IN" clause using 4.10 phoenix client on 4.13 phoenix server

2017-12-14 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291300#comment-16291300
 ] 

Lars Hofhansl commented on PHOENIX-4460:


That's it? Can you explain? :)

> High GC / RS shutdown when we use select query with "IN" clause using 4.10 
> phoenix client on 4.13 phoenix server
> 
>
> Key: PHOENIX-4460
> URL: https://issues.apache.org/jira/browse/PHOENIX-4460
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Blocker
> Fix For: 4.14.0, 4.13.2
>
> Attachments: PHOENIX-4460.patch
>
>
> We were able to reproduce the High GC / RS shutdown / phoenix KeyRange query 
> high object count issue on cluster today. 
> Main observation is that this is reproducible when firing lots of query
> select from xyz where abc in (?, ?, ...)  of this type with 4.10 phoenix 
> client hitting 4.13 phoenix on HBase server side
>  (4.10 client/4.10 server works fine, 4.13 client with 4.13 server works fine)
> We wrote a loader client (attached) with the below table/query , upserted 
> ~100 million rows and fired the query in parallel using 4-5 loader clients 
> with 16 threads each
> {code}
> TABLE:  = "CREATE TABLE " + TABLE_NAME_TEMPLATE 
>  + " (\n" + " TestKey varchar(255) PRIMARY KEY, TestVal1 varchar(200), 
> TestVal2 varchar(200), "  + "TestValue varchar(1))";
> QUERY: = "SELECT * FROM " +  TABLE_NAME_TEMPLATE + " WHERE TestKey IN (?, ?, 
> ?, ?, ?, ?, ?, ?, ?, ?)"
> {code}
> After running this client immediately within a min or two we see the 
> phoenix.query.KeyRange object count immediately going up to several lakhs and 
> keeps on increasing continuously. This count doesn't seem to come down even 
> after shutting down the clients 
> {code}
> -bash-4.1$ ~/current/bigdata-util/tools/Linux/jdk/jdk1.8.0_102_x64/bin/jmap 
> -histo:live 90725 | grep KeyRange
>   47:2748526596448  org.apache.phoenix.query.KeyRange
> 1851: 2 48  org.apache.phoenix.query.KeyRange$Bound
> 2434: 1 24  [Lorg.apache.phoenix.query.KeyRange$Bound;
> 3411: 1 16  org.apache.phoenix.query.KeyRange$1
> 3412: 1 16  org.apache.phoenix.query.KeyRange$2
> {code}
> After some time we also started seeing High GC issues and RegionServers 
> crashing
> Experiment Summary:
> - 4.13 client/4.13 Server --- Issue not reproducible (we do see KeyRange 
> count increasing upto few 100's)
> - 4.10 client/4.10 Server --- Issue not reproducible (we do see KeyRange 
> count increasing upto few 100's)
> - 4.10 client/4.13 Server --- Issue reproducible as described above



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4460) High GC / RS shutdown when we use select query with "IN" clause using 4.10 phoenix client on 4.13 phoenix server

2017-12-14 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4460:
--
Attachment: PHOENIX-4460.patch

[~mujtabachohan] - would you mind giving this a try on a real cluster? 
Something must be holding onto the SkipScanFilter, so I may have a v2 that 
prevents that. This should work too, but let's try it.

FYI, [~lhofhansl].

> High GC / RS shutdown when we use select query with "IN" clause using 4.10 
> phoenix client on 4.13 phoenix server
> 
>
> Key: PHOENIX-4460
> URL: https://issues.apache.org/jira/browse/PHOENIX-4460
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Blocker
> Fix For: 4.14.0, 4.13.2
>
> Attachments: PHOENIX-4460.patch
>
>
> We were able to reproduce the High GC / RS shutdown / phoenix KeyRange query 
> high object count issue on cluster today. 
> Main observation is that this is reproducible when firing lots of query
> select from xyz where abc in (?, ?, ...)  of this type with 4.10 phoenix 
> client hitting 4.13 phoenix on HBase server side
>  (4.10 client/4.10 server works fine, 4.13 client with 4.13 server works fine)
> We wrote a loader client (attached) with the below table/query , upserted 
> ~100 million rows and fired the query in parallel using 4-5 loader clients 
> with 16 threads each
> {code}
> TABLE:  = "CREATE TABLE " + TABLE_NAME_TEMPLATE 
>  + " (\n" + " TestKey varchar(255) PRIMARY KEY, TestVal1 varchar(200), 
> TestVal2 varchar(200), "  + "TestValue varchar(1))";
> QUERY: = "SELECT * FROM " +  TABLE_NAME_TEMPLATE + " WHERE TestKey IN (?, ?, 
> ?, ?, ?, ?, ?, ?, ?, ?)"
> {code}
> After running this client immediately within a min or two we see the 
> phoenix.query.KeyRange object count immediately going up to several lakhs and 
> keeps on increasing continuously. This count doesn't seem to come down even 
> after shutting down the clients 
> {code}
> -bash-4.1$ ~/current/bigdata-util/tools/Linux/jdk/jdk1.8.0_102_x64/bin/jmap 
> -histo:live 90725 | grep KeyRange
>   47:2748526596448  org.apache.phoenix.query.KeyRange
> 1851: 2 48  org.apache.phoenix.query.KeyRange$Bound
> 2434: 1 24  [Lorg.apache.phoenix.query.KeyRange$Bound;
> 3411: 1 16  org.apache.phoenix.query.KeyRange$1
> 3412: 1 16  org.apache.phoenix.query.KeyRange$2
> {code}
> After some time we also started seeing High GC issues and RegionServers 
> crashing
> Experiment Summary:
> - 4.13 client/4.13 Server --- Issue not reproducible (we do see KeyRange 
> count increasing upto few 100's)
> - 4.10 client/4.10 Server --- Issue not reproducible (we do see KeyRange 
> count increasing upto few 100's)
> - 4.10 client/4.13 Server --- Issue reproducible as described above



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (PHOENIX-4460) High GC / RS shutdown when we use select query with "IN" clause using 4.10 phoenix client on 4.13 phoenix server

2017-12-14 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-4460:
-

Assignee: James Taylor

> High GC / RS shutdown when we use select query with "IN" clause using 4.10 
> phoenix client on 4.13 phoenix server
> 
>
> Key: PHOENIX-4460
> URL: https://issues.apache.org/jira/browse/PHOENIX-4460
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Blocker
> Fix For: 4.14.0, 4.13.2
>
>
> We were able to reproduce the High GC / RS shutdown / phoenix KeyRange query 
> high object count issue on cluster today. 
> Main observation is that this is reproducible when firing lots of query
> select from xyz where abc in (?, ?, ...)  of this type with 4.10 phoenix 
> client hitting 4.13 phoenix on HBase server side
>  (4.10 client/4.10 server works fine, 4.13 client with 4.13 server works fine)
> We wrote a loader client (attached) with the below table/query , upserted 
> ~100 million rows and fired the query in parallel using 4-5 loader clients 
> with 16 threads each
> {code}
> TABLE:  = "CREATE TABLE " + TABLE_NAME_TEMPLATE 
>  + " (\n" + " TestKey varchar(255) PRIMARY KEY, TestVal1 varchar(200), 
> TestVal2 varchar(200), "  + "TestValue varchar(1))";
> QUERY: = "SELECT * FROM " +  TABLE_NAME_TEMPLATE + " WHERE TestKey IN (?, ?, 
> ?, ?, ?, ?, ?, ?, ?, ?)"
> {code}
> After running this client immediately within a min or two we see the 
> phoenix.query.KeyRange object count immediately going up to several lakhs and 
> keeps on increasing continuously. This count doesn't seem to come down even 
> after shutting down the clients 
> {code}
> -bash-4.1$ ~/current/bigdata-util/tools/Linux/jdk/jdk1.8.0_102_x64/bin/jmap 
> -histo:live 90725 | grep KeyRange
>   47:2748526596448  org.apache.phoenix.query.KeyRange
> 1851: 2 48  org.apache.phoenix.query.KeyRange$Bound
> 2434: 1 24  [Lorg.apache.phoenix.query.KeyRange$Bound;
> 3411: 1 16  org.apache.phoenix.query.KeyRange$1
> 3412: 1 16  org.apache.phoenix.query.KeyRange$2
> {code}
> After some time we also started seeing High GC issues and RegionServers 
> crashing
> Experiment Summary:
> - 4.13 client/4.13 Server --- Issue not reproducible (we do see KeyRange 
> count increasing upto few 100's)
> - 4.10 client/4.10 Server --- Issue not reproducible (we do see KeyRange 
> count increasing upto few 100's)
> - 4.10 client/4.13 Server --- Issue reproducible as described above



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4460) High GC / RS shutdown when we use select query with "IN" clause using 4.10 phoenix client on 4.13 phoenix server

2017-12-14 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4460:
--
Fix Version/s: 4.13.2
   4.14.0

> High GC / RS shutdown when we use select query with "IN" clause using 4.10 
> phoenix client on 4.13 phoenix server
> 
>
> Key: PHOENIX-4460
> URL: https://issues.apache.org/jira/browse/PHOENIX-4460
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Blocker
> Fix For: 4.14.0, 4.13.2
>
>
> We were able to reproduce the High GC / RS shutdown / phoenix KeyRange query 
> high object count issue on cluster today. 
> Main observation is that this is reproducible when firing lots of query
> select from xyz where abc in (?, ?, ...)  of this type with 4.10 phoenix 
> client hitting 4.13 phoenix on HBase server side
>  (4.10 client/4.10 server works fine, 4.13 client with 4.13 server works fine)
> We wrote a loader client (attached) with the below table/query , upserted 
> ~100 million rows and fired the query in parallel using 4-5 loader clients 
> with 16 threads each
> {code}
> TABLE:  = "CREATE TABLE " + TABLE_NAME_TEMPLATE 
>  + " (\n" + " TestKey varchar(255) PRIMARY KEY, TestVal1 varchar(200), 
> TestVal2 varchar(200), "  + "TestValue varchar(1))";
> QUERY: = "SELECT * FROM " +  TABLE_NAME_TEMPLATE + " WHERE TestKey IN (?, ?, 
> ?, ?, ?, ?, ?, ?, ?, ?)"
> {code}
> After running this client immediately within a min or two we see the 
> phoenix.query.KeyRange object count immediately going up to several lakhs and 
> keeps on increasing continuously. This count doesn't seem to come down even 
> after shutting down the clients 
> {code}
> -bash-4.1$ ~/current/bigdata-util/tools/Linux/jdk/jdk1.8.0_102_x64/bin/jmap 
> -histo:live 90725 | grep KeyRange
>   47:2748526596448  org.apache.phoenix.query.KeyRange
> 1851: 2 48  org.apache.phoenix.query.KeyRange$Bound
> 2434: 1 24  [Lorg.apache.phoenix.query.KeyRange$Bound;
> 3411: 1 16  org.apache.phoenix.query.KeyRange$1
> 3412: 1 16  org.apache.phoenix.query.KeyRange$2
> {code}
> After some time we also started seeing High GC issues and RegionServers 
> crashing
> Experiment Summary:
> - 4.13 client/4.13 Server --- Issue not reproducible (we do see KeyRange 
> count increasing upto few 100's)
> - 4.10 client/4.10 Server --- Issue not reproducible (we do see KeyRange 
> count increasing upto few 100's)
> - 4.10 client/4.13 Server --- Issue reproducible as described above



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4460) High GC / RS shutdown when we use select query with "IN" clause using 4.10 phoenix client on 4.13 phoenix server

2017-12-14 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4460:
-

 Summary: High GC / RS shutdown when we use select query with "IN" 
clause using 4.10 phoenix client on 4.13 phoenix server
 Key: PHOENIX-4460
 URL: https://issues.apache.org/jira/browse/PHOENIX-4460
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Priority: Blocker


We were able to reproduce the high GC / RS shutdown / high Phoenix KeyRange object 
count issue on a cluster today. 

The main observation is that this is reproducible when firing lots of queries of the 
type {{select ... from xyz where abc in (?, ?, ...)}} with a 4.10 Phoenix client 
hitting 4.13 Phoenix on the HBase server side
 (4.10 client / 4.10 server works fine, 4.13 client with 4.13 server works fine).

We wrote a loader client (attached) with the table/query below, upserted ~100 million 
rows, and fired the query in parallel using 4-5 loader clients with 16 threads each:

{code}
TABLE = "CREATE TABLE " + TABLE_NAME_TEMPLATE
    + " (\n TestKey varchar(255) PRIMARY KEY, TestVal1 varchar(200), "
    + "TestVal2 varchar(200), TestValue varchar(1))";

QUERY = "SELECT * FROM " + TABLE_NAME_TEMPLATE
    + " WHERE TestKey IN (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)";
{code}
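
For reference, a hedged sketch of what such a loader thread might look like (this is 
not the attached client; the class, JDBC URL, table name and key generation below are 
illustrative assumptions):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.concurrent.ThreadLocalRandom;

/** Hedged sketch only -- not the attached loader client. */
public class InClauseLoaderSketch implements Runnable {

    // MY_TABLE and the thick-client URL are placeholders.
    private static final String QUERY =
            "SELECT * FROM MY_TABLE WHERE TestKey IN (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)";

    @Override
    public void run() {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             PreparedStatement stmt = conn.prepareStatement(QUERY)) {
            while (!Thread.currentThread().isInterrupted()) {
                for (int i = 1; i <= 10; i++) {
                    // Bind 10 keys per execution, mimicking the IN (?, ..., ?) pattern.
                    stmt.setString(i, "key-" + ThreadLocalRandom.current().nextInt(100_000_000));
                }
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) { /* drain results */ }
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
{code}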

After running this client, within a minute or two we see the phoenix.query.KeyRange 
object count going up to several lakhs (hundreds of thousands) and it keeps increasing 
continuously. The count does not seem to come down even after shutting down the 
clients:

{code}
-bash-4.1$ ~/current/bigdata-util/tools/Linux/jdk/jdk1.8.0_102_x64/bin/jmap 
-histo:live 90725 | grep KeyRange
  47:        274852       6596448  org.apache.phoenix.query.KeyRange
1851: 2 48  org.apache.phoenix.query.KeyRange$Bound
2434: 1 24  [Lorg.apache.phoenix.query.KeyRange$Bound;
3411: 1 16  org.apache.phoenix.query.KeyRange$1
3412: 1 16  org.apache.phoenix.query.KeyRange$2
{code}
After some time we also started seeing high GC issues and RegionServers crashing.

Experiment summary:
- 4.13 client / 4.13 server --- issue not reproducible (we do see the KeyRange count 
increasing up to a few hundred)
- 4.10 client / 4.10 server --- issue not reproducible (we do see the KeyRange count 
increasing up to a few hundred)
- 4.10 client / 4.13 server --- issue reproducible as described above






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


When the issues will be assigned?

2017-12-14 Thread cloud.pos...@gmail.com
I have found some critical problems and filed JIRA issues for them. When will these JIRAs be triaged and assigned?


[jira] [Updated] (PHOENIX-4459) Region assignments are failing for the test cases with extended clocks to support SCN

2017-12-14 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-4459:
-
Affects Version/s: (was: 5.0.0)
   Labels: HBase-2.0  (was: )
Fix Version/s: 5.0.0

> Region assignments are failing for the test cases with extended clocks to 
> support SCN
> -
>
> Key: PHOENIX-4459
> URL: https://issues.apache.org/jira/browse/PHOENIX-4459
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
>
> Test cases that use their own clock are failing with TableNotFoundException 
> during region assignment. The reason is that the meta scan returns no results 
> because of the past timestamps. Need to check in more detail. Because of the 
> region assignment failures during the create table procedure, the HBase 
> client waits for 30 minutes, so the other tests cannot continue running 
> either.
> {noformat}
> 2017-12-14 16:48:03,153 ERROR [ProcExecWrkr-9] 
> org.apache.hadoop.hbase.master.TableStateManager(135): Unable to get table 
> T08 state
> org.apache.hadoop.hbase.TableNotFoundException: T08
>   at 
> org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:175)
>   at 
> org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:132)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignProcedure.startTransition(AssignProcedure.java:161)
>   at 
> org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:294)
>   at 
> org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:85)
>   at 
> org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:845)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1452)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1221)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$800(ProcedureExecutor.java:77)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1731)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4459) Region assignments are failing for the test cases with extended clocks to support SCN

2017-12-14 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-4459:
-
Issue Type: Sub-task  (was: Bug)
Parent: PHOENIX-4338

> Region assignments are failing for the test cases with extended clocks to 
> support SCN
> -
>
> Key: PHOENIX-4459
> URL: https://issues.apache.org/jira/browse/PHOENIX-4459
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
>
> Test cases that use their own clock are failing with TableNotFoundException 
> during region assignment. The reason is that the meta scan returns no results 
> because of the past timestamps. Need to check in more detail. Because of the 
> region assignment failures during the create table procedure, the HBase 
> client waits for 30 minutes, so the other tests cannot continue running 
> either.
> {noformat}
> 2017-12-14 16:48:03,153 ERROR [ProcExecWrkr-9] 
> org.apache.hadoop.hbase.master.TableStateManager(135): Unable to get table 
> T08 state
> org.apache.hadoop.hbase.TableNotFoundException: T08
>   at 
> org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:175)
>   at 
> org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:132)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignProcedure.startTransition(AssignProcedure.java:161)
>   at 
> org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:294)
>   at 
> org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:85)
>   at 
> org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:845)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1452)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1221)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$800(ProcedureExecutor.java:77)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1731)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4459) Region assignments are failing for the test cases with extended clocks to support SCN

2017-12-14 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created PHOENIX-4459:


 Summary: Region assignments are failing for the test cases with 
extended clocks to support SCN
 Key: PHOENIX-4459
 URL: https://issues.apache.org/jira/browse/PHOENIX-4459
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


Test cases that use their own clock are failing with TableNotFoundException 
during region assignment. The reason is that the meta scan returns no results 
because of the past timestamps. Need to check in more detail. Because of the 
region assignment failures during the create table procedure, the HBase client 
waits for 30 minutes, so the other tests cannot continue running either. (A 
minimal sketch of how such tests pin their own clock appears after the log below.)
{noformat}
2017-12-14 16:48:03,153 ERROR [ProcExecWrkr-9] 
org.apache.hadoop.hbase.master.TableStateManager(135): Unable to get table 
T08 state
org.apache.hadoop.hbase.TableNotFoundException: T08
at 
org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:175)
at 
org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:132)
at 
org.apache.hadoop.hbase.master.assignment.AssignProcedure.startTransition(AssignProcedure.java:161)
at 
org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:294)
at 
org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:85)
at 
org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:845)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1452)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1221)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$800(ProcedureExecutor.java:77)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1731)
{noformat}
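
For reference, a minimal sketch of how a test can pin its own clock through the 
CurrentSCN connection property; the JDBC URL, timestamp value, and table DDL 
are placeholders and are not taken from the failing tests.

{code}
// Hypothetical illustration of a test opening a Phoenix connection with its own clock (SCN).
// The JDBC URL, timestamp, and table DDL are placeholders, not taken from the failing tests.
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;
import org.apache.phoenix.util.PhoenixRuntime;

public class ScnConnectionExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // All operations on this connection use this timestamp instead of the server clock,
        // which is how tests end up creating tables/regions at timestamps in the past.
        props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(1000L));
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181", props)) {
            conn.createStatement().execute(
                "CREATE TABLE EXAMPLE_T (K VARCHAR PRIMARY KEY, V VARCHAR)");
        }
    }
}
{code}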



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4458) Region dead lock when executing duplicate key upsert data table(have local index)

2017-12-14 Thread asko (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

asko updated PHOENIX-4458:
--
Attachment: RegionDeadLockTest.java
jstack

This code can reproduce the bug.

> Region dead lock when executing duplicate key upsert data table(have local 
> index)
> -
>
> Key: PHOENIX-4458
> URL: https://issues.apache.org/jira/browse/PHOENIX-4458
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: asko
>Priority: Critical
> Attachments: RegionDeadLockTest.java, jstack
>
>
> The attached file *RegionDeadLockTest.java* can reproduce this bug after 
> running for a few minutes.
> The region will hang and can no longer be read from or written to.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4458) Region dead lock when executing duplicate key upsert data table(have local index)

2017-12-14 Thread asko (JIRA)
asko created PHOENIX-4458:
-

 Summary: Region dead lock when executing duplicate key upsert data 
table(have local index)
 Key: PHOENIX-4458
 URL: https://issues.apache.org/jira/browse/PHOENIX-4458
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.11.0
Reporter: asko
Priority: Critical


The attached file *RegionDeadLockTest.java* can reproduce this bug after 
running for a few minutes.
The region will hang and can no longer be read from or written to. (An 
illustrative sketch of the kind of statements involved follows.)
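
For context, a minimal sketch (not the attached RegionDeadLockTest.java) of the 
kind of statements involved: a simple table, a local index on one column, and a 
repeated atomic ON DUPLICATE KEY upsert against the same row key. All names and 
the JDBC URL are placeholder assumptions.

{code}
// Hypothetical sketch of the scenario described above -- not the attached RegionDeadLockTest.java.
// Table name, column names, and JDBC URL are placeholder assumptions.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class DuplicateKeyUpsertExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181")) {
            conn.setAutoCommit(true);
            conn.createStatement().execute(
                "CREATE TABLE IF NOT EXISTS T (K VARCHAR PRIMARY KEY, CNT BIGINT)");
            conn.createStatement().execute(
                "CREATE LOCAL INDEX IF NOT EXISTS T_IDX ON T (CNT)");
            // Atomic upsert: writers repeatedly hitting the same row key exercise the
            // duplicate-key path on a table that also carries a local index.
            String upsert = "UPSERT INTO T (K, CNT) VALUES (?, 1) ON DUPLICATE KEY UPDATE CNT = CNT + 1";
            try (PreparedStatement ps = conn.prepareStatement(upsert)) {
                for (int i = 0; i < 1000; i++) {
                    ps.setString(1, "same-key");
                    ps.executeUpdate();
                }
            }
        }
    }
}
{code}

Running several copies of such a loop concurrently (as the attached test 
reportedly does) is what drives both the data-table write path and the local 
index write path on the same region at the same time.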



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)