ChinmaySKulkarni commented on a change in pull request #904:
URL: https://github.com/apache/phoenix/pull/904#discussion_r497719456



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
##########
@@ -139,6 +140,12 @@ private static void setValues(byte[][] values, int[] pkSlotIndex, int[] columnIn
         for (int i = 0, j = numSplColumns; j < values.length; j++, i++) {
             byte[] value = values[j];
             PColumn column = table.getColumns().get(columnIndexes[i]);
+            if (value.length >= maxCellSizeBytes) {

Review comment:
       What about use-cases that use the SINGLE_CELL_ARRAY_WITH_OFFSETS storage scheme, where all of a row's column values are packed into a single cell? In that case the HBase cell size limit would effectively apply to the combined size of all the columns (or something along those lines). Either we should expand this check to cover all storage schemes, or restrict its scope to ONE_CELL_PER_COLUMN only.
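
       As a rough illustration (not the patch's code, and assuming `PTable#getImmutableStorageScheme()` is available here), scoping the check by storage scheme could look something like this, reusing `table`, `value`, and `maxCellSizeBytes` from the quoted hunk:

```java
// Hypothetical sketch: only apply the per-cell check when each column value
// maps to its own HBase cell. Under SINGLE_CELL_ARRAY_WITH_OFFSETS the column
// values are packed into one cell, so the limit would have to be checked
// against their combined encoded size instead.
PTable.ImmutableStorageScheme scheme = table.getImmutableStorageScheme();
if (scheme == null || scheme == PTable.ImmutableStorageScheme.ONE_CELL_PER_COLUMN) {
    if (value.length >= maxCellSizeBytes) {
        // reject the oversized value here, as the patch does
    }
}
```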

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
##########
@@ -187,6 +213,9 @@ public static MutationState upsertSelect(StatementContext childContext, TableRef
         int maxSizeBytes =
                 services.getProps().getInt(QueryServices.MAX_MUTATION_SIZE_BYTES_ATTRIB,
                     QueryServicesOptions.DEFAULT_MAX_MUTATION_SIZE_BYTES);
+        int maxCellSizeBytes =

Review comment:
       I don't understand why we need another config for this. Introducing a Phoenix-level config that means the same thing as an existing HBase-level config will lead to confusion, and we would then have to keep the Phoenix config in sync whenever the underlying HBase config changes. Why not just use `hbase.client.keyvalue.maxsize` or `hbase.server.keyvalue.maxsize`?
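
       A minimal sketch of what reusing the HBase property could look like (this is an assumption about the intended change, not the PR's code; the 10 MB fallback mirrors the client-side default in recent HBase releases, while older releases may default to -1, meaning unlimited):

```java
// Sketch only: read HBase's existing client-side key-value size limit instead
// of introducing a new Phoenix-level property. The fallback value here is an
// assumption based on recent HBase defaults (10485760 bytes).
int maxCellSizeBytes =
        services.getProps().getInt("hbase.client.keyvalue.maxsize",
            10 * 1024 * 1024);
```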




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

