gianm commented on issue #6469: URL: https://github.com/apache/druid/issues/6469#issuecomment-654955302
> @gianm In your example, if the user wants to limit the total size to 2TB, I think they should achieve that by reducing the size of the locations.

I think that's fair.

> @drcrallen in your example, whether `druid.server.maxSize` is dropped or not, the problem still exists. I think this is a separate problem. To solve it, maybe the coordinator needs to know the _max file size_ a historical can store. And that _max file size_ is not a configured property but a dynamic runtime value depending on the remaining space in each location.

It would have to change dynamically even within a single coordinator run. That level of communication between coordinators and historicals might end up being a lot of overhead. Maybe instead the historical could directly tell the coordinator the size of each of its locations; then the coordinator would know on its own what max file size is supported. Alternatively, we could use a hack like setting maxSize to `sum(locations) - (count(locations) - 1) * 5GB`.

I do agree it is a separate problem, though, and one that we don't necessarily need to solve at the same time as getting rid of the need for `druid.server.maxSize`.
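The hack mentioned above can be sketched roughly as follows; `derive_max_size` and the headroom constant are illustrative names only, not actual Druid code, and the 5GB figure is just the buffer suggested in this thread:

```python
# Illustrative sketch: derive an effective server maxSize from the
# per-location segment-cache sizes, rather than requiring the operator
# to configure druid.server.maxSize separately. Reserves 5GB of
# headroom for every location beyond the first, so a large segment
# that cannot be split across locations still has somewhere to fit.

HEADROOM_PER_EXTRA_LOCATION = 5 * 1024**3  # 5GB, as suggested above


def derive_max_size(location_sizes):
    """Given the maxSize of each segment-cache location (in bytes),
    return sum(locations) - (count(locations) - 1) * 5GB."""
    total = sum(location_sizes)
    headroom = (len(location_sizes) - 1) * HEADROOM_PER_EXTRA_LOCATION
    return total - headroom
```

For a single location this is just the location's own size; for two 1TiB locations it yields 2TiB minus 5GB of headroom.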
