Github user JamesRTaylor commented on a diff in the pull request:
    --- Diff: 
    @@ -850,19 +849,12 @@ private void addCoprocessors(byte[] tableName, HTableDescriptor descriptor, PTab
                         && !SchemaUtil.isMetaTable(tableName)
                         && !SchemaUtil.isStatsTable(tableName)) {
                     if (isTransactional) {
    -                    if (!descriptor.hasCoprocessor(PhoenixTransactionalIndexer.class.getName())) {
    -                        descriptor.addCoprocessor(PhoenixTransactionalIndexer.class.getName(), null, priority, null);
    -                    }
    --- End diff ---
    The coprocessor is already installed on existing tables, so removing the 
addCoprocessor call only affects newly created tables. Eventually, we could add 
code that runs at upgrade time to remove the coprocessor from existing tables, 
but we could only do that after we know all clients have been upgraded.
    If we want to handle the old-client/new-server situation, it's slightly 
more complicated (but not too bad). We have an optimization that conditionally 
performs an RPC before the batch mutation which carries all the index metadata 
(when more than a threshold number of mutations are being batched). That 
metadata is then cached on the region server and looked up via the UUID we 
store on the Put; this avoids having to attach the information to *every* 
single mutation. So we'd have to add this attribute to the ServerCache, or 
conditionally add it to the mutation, depending on whether we're doing the 
extra RPC.
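The two-path scheme above can be sketched in plain Java. This is a hypothetical illustration, not Phoenix's actual code: the names (`attachIndexMetadata`, `INDEX_METADATA_ATTRIB`, `CACHE_UUID_ATTRIB`, `THRESHOLD`, the `Mutation` and `serverCache` stand-ins) are all assumptions standing in for the real HBase/Phoenix types and the ServerCache RPC.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

public class IndexMetadataAttach {
    // Hypothetical threshold above which the extra ServerCache RPC is used.
    static final int THRESHOLD = 100;
    static final String INDEX_METADATA_ATTRIB = "IDX_MD";
    static final String CACHE_UUID_ATTRIB = "IDX_UUID";

    // Stand-in for an HBase Mutation carrying named attributes.
    static class Mutation {
        final Map<String, byte[]> attributes = new HashMap<>();
        void setAttribute(String name, byte[] value) { attributes.put(name, value); }
    }

    // Stand-in for the region-server-side cache populated by the extra RPC.
    static final Map<String, byte[]> serverCache = new HashMap<>();

    static void attachIndexMetadata(List<Mutation> batch, byte[] indexMetadata) {
        if (batch.size() > THRESHOLD) {
            // Large batch: ship the metadata once via the cache RPC; each
            // mutation carries only the small UUID used to look it up.
            String uuid = UUID.randomUUID().toString();
            serverCache.put(uuid, indexMetadata); // simulates the ServerCache RPC
            for (Mutation m : batch) {
                m.setAttribute(CACHE_UUID_ATTRIB, uuid.getBytes());
            }
        } else {
            // Small batch: skip the extra RPC and inline the metadata on
            // every mutation instead.
            for (Mutation m : batch) {
                m.setAttribute(INDEX_METADATA_ATTRIB, indexMetadata);
            }
        }
    }

    public static void main(String[] args) {
        List<Mutation> small = new ArrayList<>();
        for (int i = 0; i < 5; i++) small.add(new Mutation());
        attachIndexMetadata(small, "meta".getBytes());
        System.out.println(small.get(0).attributes.containsKey(INDEX_METADATA_ATTRIB));

        List<Mutation> large = new ArrayList<>();
        for (int i = 0; i < 200; i++) large.add(new Mutation());
        attachIndexMetadata(large, "meta".getBytes());
        System.out.println(large.get(0).attributes.containsKey(CACHE_UUID_ATTRIB));
        System.out.println(serverCache.size());
    }
}
```

An attribute like the one discussed here would follow the same fork: set it on the ServerCache entry when the extra RPC runs, otherwise set it directly on each mutation.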