[ https://issues.apache.org/jira/browse/PHOENIX-2417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15107311#comment-15107311 ]

ASF GitHub Bot commented on PHOENIX-2417:
-----------------------------------------

Github user JamesRTaylor commented on a diff in the pull request:

    https://github.com/apache/phoenix/pull/147#discussion_r50165631
  
    --- Diff: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataRegionObserver.java ---
    @@ -96,10 +111,109 @@ public void start(CoprocessorEnvironment env) throws IOException {
             rebuildIndexTimeInterval = env.getConfiguration().getLong(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_INTERVAL_ATTRIB,
                 QueryServicesOptions.DEFAULT_INDEX_FAILURE_HANDLING_REBUILD_INTERVAL);
         }
    -
    +    
    +    private static String getJdbcUrl(RegionCoprocessorEnvironment env) {
    +        String zkQuorum = env.getConfiguration().get(HConstants.ZOOKEEPER_QUORUM);
    +        String zkClientPort = env.getConfiguration().get(HConstants.ZOOKEEPER_CLIENT_PORT,
    +            Integer.toString(HConstants.DEFAULT_ZOOKEPER_CLIENT_PORT));
    +        String zkParentNode = env.getConfiguration().get(HConstants.ZOOKEEPER_ZNODE_PARENT,
    +            HConstants.DEFAULT_ZOOKEEPER_ZNODE_PARENT);
    +        return PhoenixRuntime.JDBC_PROTOCOL + PhoenixRuntime.JDBC_PROTOCOL_SEPARATOR + zkQuorum
    +            + PhoenixRuntime.JDBC_PROTOCOL_SEPARATOR + zkClientPort
    +            + PhoenixRuntime.JDBC_PROTOCOL_SEPARATOR + zkParentNode;
    +    }
     
         @Override
         public void postOpen(ObserverContext<RegionCoprocessorEnvironment> e) {
    +        final RegionCoprocessorEnvironment env = e.getEnvironment();
    +
    +        Runnable r = new Runnable() {
    +            @Override
    +            public void run() {
    +                HTableInterface metaTable = null;
    +                HTableInterface statsTable = null;
    +                try {
    +                    Thread.sleep(1000);
    +                    LOG.info("Stats will be deleted for upgrade 4.7 requirement!!");
    +                    metaTable = env.getTable(TableName.valueOf(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME));
    +                    List<Cell> columnCells = metaTable
    +                            .get(new Get(SchemaUtil.getTableKey(null, PhoenixDatabaseMetaData.SYSTEM_SCHEMA_NAME,
    +                                    PhoenixDatabaseMetaData.SYSTEM_CATALOG_TABLE)))
    +                            .getColumnCells(PhoenixDatabaseMetaData.TABLE_FAMILY_BYTES,
    +                                    QueryConstants.EMPTY_COLUMN_BYTES);
    +                    if (!columnCells.isEmpty()
    +                            && columnCells.get(0).getTimestamp() < MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_7_0) {
    --- End diff --
    --- End diff --
    
    This if is not needed.


> Compress memory used by row key byte[] of guideposts
> ----------------------------------------------------
>
>                 Key: PHOENIX-2417
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2417
>             Project: Phoenix
>          Issue Type: Sub-task
>            Reporter: James Taylor
>            Assignee: Ankit Singhal
>             Fix For: 4.7.0
>
>         Attachments: PHOENIX-2417.patch, PHOENIX-2417_encoder.diff, 
> PHOENIX-2417_v2_wip.patch, StatsUpgrade_wip.patch
>
>
> We've found that smaller guideposts are better in terms of minimizing any 
> increase in latency for point scans. However, this increases the amount of 
> memory significantly when caching the guideposts on the client. Guideposts are 
> equidistant row keys in the form of raw byte[] which are likely to have a 
> large percentage of their leading bytes in common (as they're stored in 
> sorted order). We should use a simple compression technique to mitigate this. 
> I noticed that Apache Parquet has a run-length encoding - perhaps we can use 
> that.
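The prefix-compression idea described above can be sketched roughly as follows. This is an illustrative, standalone encoding, not Phoenix's actual guidepost storage: the class and method names (GuidepostPrefixCodec, encode, decode) are hypothetical. Since the keys are sorted, each key is stored as the length of the prefix it shares with the previous key plus its remaining suffix bytes, which avoids repeating the common leading bytes.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of delta/prefix encoding for sorted row keys.
final class GuidepostPrefixCodec {

    // Encoded form of one key: how many leading bytes it shares with the
    // previous key, plus the bytes that differ.
    static final class Entry {
        final int sharedLen;
        final byte[] suffix;
        Entry(int sharedLen, byte[] suffix) {
            this.sharedLen = sharedLen;
            this.suffix = suffix;
        }
    }

    // Encode a sorted list of keys; each entry references the previous key.
    static List<Entry> encode(List<byte[]> sortedKeys) {
        List<Entry> out = new ArrayList<>();
        byte[] prev = new byte[0];
        for (byte[] key : sortedKeys) {
            int shared = 0;
            int max = Math.min(prev.length, key.length);
            while (shared < max && prev[shared] == key[shared]) {
                shared++;
            }
            out.add(new Entry(shared, Arrays.copyOfRange(key, shared, key.length)));
            prev = key;
        }
        return out;
    }

    // Rebuild the full keys by prepending each entry's shared prefix,
    // taken from the previously decoded key.
    static List<byte[]> decode(List<Entry> entries) {
        List<byte[]> out = new ArrayList<>();
        byte[] prev = new byte[0];
        for (Entry e : entries) {
            byte[] key = new byte[e.sharedLen + e.suffix.length];
            System.arraycopy(prev, 0, key, 0, e.sharedLen);
            System.arraycopy(e.suffix, 0, key, e.sharedLen, e.suffix.length);
            out.add(key);
            prev = key;
        }
        return out;
    }

    public static void main(String[] args) {
        List<byte[]> keys = Arrays.asList(
            "row001".getBytes(), "row002".getBytes(), "row010".getBytes());
        List<Entry> enc = encode(keys);
        List<byte[]> roundTrip = decode(enc);
        for (int i = 0; i < keys.size(); i++) {
            if (!Arrays.equals(keys.get(i), roundTrip.get(i))) {
                throw new AssertionError("round trip failed at " + i);
            }
        }
    }
}
```

The savings grow with guidepost density: the more guideposts per region, the longer the shared prefixes and the smaller each stored suffix. Parquet-style run-length encoding, as suggested in the description, is a further refinement on top of this kind of delta scheme.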



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
