smiklosovic commented on code in PR #4622:
URL: https://github.com/apache/cassandra/pull/4622#discussion_r2816298045


##########
src/java/org/apache/cassandra/db/compression/CompressionDictionaryDetailsTabularData.java:
##########
@@ -278,15 +289,11 @@ private void validate()
             if (table == null)
                 throw new IllegalArgumentException("Table not specified.");
             if (tableId == null)
-                throw new IllegalArgumentException("Table id not specified");
+                throw new IllegalArgumentException("Table id not specified.");
             if (dictId <= 0)
                throw new IllegalArgumentException("Provided dictionary id must be positive but it is '" + dictId + "'.");
             if (dict == null || dict.length == 0)
                throw new IllegalArgumentException("Provided dictionary byte array is null or empty.");
-            if (dict.length > FileUtils.ONE_MIB)

Review Comment:
   I removed this here because, when I was testing import / export with created_at, I realized that we can not import a dictionary bigger than 1 MiB, BUT WE CAN TRAIN ONE.
   
   So we can train a dictionary larger than 1 MiB, but we can not import it back after export.
   
   It is possible to override the configuration via nodetool or CQL, and there we do not check the max size; we check it only on import ...
   
   I can revert this change and treat it more robustly in a completely different ticket, hardening size checks at all levels (CQL, nodetool, ...); that could go in even after 6.0-alpha1. If I remove the check here, we will at least not see the discrepancy I described above.
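   To make the discrepancy concrete, here is a minimal sketch of the direction the follow-up ticket could take: a single shared bound enforced in one place, so a dictionary that training produces will always pass import. The class and constant names are illustrative only, not the actual Cassandra API (the diff above only shows the `FileUtils.ONE_MIB` comparison being removed).
   
   ```java
   // Hypothetical sketch: one shared size bound used by every code path
   // (train, import, nodetool/CQL overrides), so exports always round-trip.
   public final class DictionarySizeCheck
   {
       // Assumed limit mirroring FileUtils.ONE_MIB from the removed check.
       public static final long MAX_DICTIONARY_SIZE = 1024L * 1024L;
   
       private DictionarySizeCheck() {}
   
       public static void validateSize(byte[] dict)
       {
           if (dict == null || dict.length == 0)
               throw new IllegalArgumentException("Provided dictionary byte array is null or empty.");
           if (dict.length > MAX_DICTIONARY_SIZE)
               throw new IllegalArgumentException("Dictionary is " + dict.length
                   + " bytes but the maximum allowed size is " + MAX_DICTIONARY_SIZE + " bytes.");
       }
   
       public static void main(String[] args)
       {
           validateSize(new byte[512]); // within the bound, passes silently
           try
           {
               validateSize(new byte[2 * 1024 * 1024]); // exceeds the bound
           }
           catch (IllegalArgumentException e)
           {
               System.out.println("rejected: " + e.getMessage());
           }
       }
   }
   ```
   
   The point of the sketch is that whichever limit is chosen, trainer and importer must consult the same constant; otherwise the train-but-cannot-import gap reappears.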



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
