ChinmaySKulkarni commented on a change in pull request #428: PHOENIX-374: Enable access to dynamic columns in * or cf.* selection
URL: https://github.com/apache/phoenix/pull/428#discussion_r251720559
##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/DynamicColumnWildcard.java
##########
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.coprocessor;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.client.Mutation;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.coprocessor.ObserverContext;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+import org.apache.hadoop.hbase.coprocessor.RegionObserver;
+import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.coprocessor.generated.DynamicColumnMetaDataProtos;
+import org.apache.phoenix.coprocessor.generated.PTableProtos;
+import org.apache.phoenix.schema.PColumn;
+import org.apache.phoenix.schema.PColumnImpl;
+import org.apache.phoenix.util.ServerUtil;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.NavigableMap;
+import java.util.Optional;
+
+/**
+ * Coprocessor for exposing dynamic columns in wildcard queries. This coprocessor will be
+ * registered to physical HBase tables only, i.e. PTableType.TABLE, and is driven by the
+ * client-side config QueryServices.WILDCARD_QUERY_DYNAMIC_COLS_ATTRIB.
+ * See PHOENIX-374.
+ */
+public class DynamicColumnWildcard implements RegionObserver, RegionCoprocessor {

Review comment:
   @twdsilva One problem with this approach is that `preBatchMutate` will now be called in every case, irrespective of the client-side config that enables dynamic-column data in wildcard queries. With the current implementation, even if this config is off (and thus no attributes are set on any of the mutations), we still have to:
   - In the worst case (all Puts), iterate over (number of mutations in `miniBatchOp`) * (number of column families in each mutation), though ultimately it is a no-op.
   - In the best case (all Deletes), iterate over (number of mutations in `miniBatchOp`), though ultimately it is a no-op.
   
   At best, I can set an additional attribute on the Put when we don't want to store any dynamic column metadata, and ensure that when the config is off we only ever iterate over (number of mutations in `miniBatchOp`) as a no-op. That still leaves the extra iteration over all the mutations, though. Is there a way to pass client-side configs to the server side via `miniBatchOp`? If so, I can completely short-circuit this logic when the config is off.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

With regards,
Apache Git Services
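As an illustration of the short-circuit idea discussed in the comment, the cost difference can be modeled without the HBase coprocessor API: each mutation carries an attribute map, and the hook skips the per-column-family work for any mutation that lacks a marker attribute. This is a simplified sketch, not the PR's implementation; the attribute name `DYN_COLS_METADATA` and the plain-`Map` stand-in for HBase's `Mutation`/`OperationWithAttributes` are assumptions for illustration only.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WildcardShortCircuitSketch {
    // Hypothetical marker attribute; the name is an assumption, not from the PR.
    static final String DYN_COLS_ATTR = "DYN_COLS_METADATA";

    /**
     * Counts column families that actually need dynamic-column processing.
     * Mutations without the marker attribute are skipped with a single O(1)
     * check, so a batch from a client with the config off costs only one
     * pass over the mutations rather than mutations * families.
     */
    static int familiesToProcess(List<Map<String, Object>> miniBatch) {
        int work = 0;
        for (Map<String, Object> mutation : miniBatch) {
            if (!mutation.containsKey(DYN_COLS_ATTR)) {
                continue; // cheap skip: no attribute, no per-family iteration
            }
            @SuppressWarnings("unchecked")
            List<String> families = (List<String>) mutation.get("families");
            work += families.size();
        }
        return work;
    }

    public static void main(String[] args) {
        // A plain Put with two column families: skipped entirely.
        Map<String, Object> plainPut = new HashMap<>();
        plainPut.put("families", Arrays.asList("cf1", "cf2"));

        // A Put flagged as carrying dynamic-column metadata: processed.
        Map<String, Object> dynColPut = new HashMap<>();
        dynColPut.put(DYN_COLS_ATTR, Boolean.TRUE);
        dynColPut.put("families", Arrays.asList("cf1"));

        System.out.println(familiesToProcess(Arrays.asList(plainPut, dynColPut))); // prints 1
    }
}
```

In the real coprocessor the equivalent cheap check would be a `mutation.getAttribute(...)` lookup inside `preBatchMutate`; the per-mutation pass remains, which is exactly the residual cost the comment points out short of passing the client-side config to the server.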
