kfaraz commented on PR #16512:
URL: https://github.com/apache/druid/pull/16512#issuecomment-2137851032

   > maybe it would make sense to just truncate that error message and throw a 
warn in OverlordResource when a task payload is very large?
   
   Yes, we should definitely do these two items even if we decide not to do the 
rest.
   
   > i don't think the config totally loses its purpose with a higher default, 
since you can still choose to lower it to limit the blast radius of large 
payloads.
   
   True, but I don't think most cluster operators would think of updating this 
config when they encounter a problem. They would be more likely to just increase 
`max_allowed_packet`, unless we update the error message thrown for 
`max_allowed_packet` to suggest using the new config instead.
   
   > I personally don't think a task payload above a certain size makes sense. 
for msq, would it really try to generate that large of a payload? 60 MB is huge 
for metadata
   
   I agree that 60 MiB is large enough, but I do recall some cases where users 
had to resort to increasing `max_allowed_packet`. For example:
   ```
   Could not send query: query size is >= to max_allowed_packet (67108864)
   ```
   
   ---
   
   In conclusion, we could do the following:
   - Truncate the error message thrown when we exceed `max_allowed_packet` and 
include info about using the new config.
   - Log a warning when the task payload crosses, say, 80% of the maximum limit.
   - Log a stronger warning and raise an alert when the task payload crosses 100% 
of the maximum limit, perhaps also mentioning that such task payloads may be 
rejected in future Druid releases.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
