I agree that `exec_mem_limit` is more easily understood by users as the "memory limit of a query on a BE". With the previous behavior, the relationship between concurrency and a query's memory usage was hard for users to understand and impossible for them to control.
However, we may need to be concerned about the following compatibility issue: if the memory limit is changed to apply directly at the query level, the default limit may cause previously working queries to fail with a "memory exceed limit" error. This issue alone is difficult to solve perfectly, so perhaps we need to explain the impact of this change in the next version's changelog.

--

Best Regards,
Mingyu Chen
Email: chenmin...@apache.org

At 2022-03-12 18:23:06, "Yi WU" <dataroar...@gmail.com> wrote:
> Sorry for the previous email, which included a wrong link to the discussion on GitHub.
>
> exec_mem_limit is a session variable that can be set by users. I think we should define it precisely so that users can understand it. For example: it is the maximum memory consumption of a query on a BE. If a query consumes memory beyond exec_mem_limit on a BE, it should fail due to memory allocation.
>
> I am not sure whether the above idea is acceptable.
>
> Currently, exec_mem_limit does not fully work, because some memory allocations bypass it by calling MemPool::allocate. In practice, exec_mem_limit works at the fragment instance level, not at the query level. Since fragment instances are related to tables, users cannot predict how many fragment instances will run on a BE, so the limit is difficult for users to understand.
>
> Should we let exec_mem_limit limit the memory consumption of a query on a BE?
>
> The same message is posted on the discussion:
>
> https://github.com/apache/incubator-doris/discussions/8455
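
To make the concern above concrete, here is a minimal sketch (not actual Doris code; all names are hypothetical) of why a per-fragment-instance limit is hard for users to reason about: the effective cap on a query scales with the number of fragment instances, which the user cannot predict.

```python
# Hypothetical illustration: with per-instance enforcement, each fragment
# instance may use up to exec_mem_limit, so the whole query may consume up
# to the sum across instances on a BE.

EXEC_MEM_LIMIT = 2 * 1024**3  # e.g. a 2 GB per-instance limit


def effective_query_limit(num_fragment_instances: int) -> int:
    # The real upper bound on the query's memory use on one BE is the
    # per-instance limit multiplied by the instance count.
    return num_fragment_instances * EXEC_MEM_LIMIT


# The same query against tables with different layouts can spawn a
# different number of instances, so the effective cap varies:
for n in (1, 4, 16):
    print(n, "instances ->", effective_query_limit(n) // 1024**3, "GB")
```

Enforcing the limit at the query level instead would make the cap equal to exec_mem_limit regardless of the instance count, which is why the change would break queries that previously relied on the larger aggregate headroom.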