This option controls how much memory each query can allocate, at most, for its sort operators on every drillbit. The higher the value, the less likely your queries are to spill to disk, and the faster they will finish. But keep in mind it is a per-query limit: if you set it to, say, 10% of your total available direct memory and then run 10 sort-heavy queries in parallel, you may run out of memory because nothing is left for the other operators.
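For reference, the option can be changed with Drill's standard option syntax; the value is in bytes (so 10 GB is 10737418240), and ALTER SYSTEM instead of ALTER SESSION would apply it cluster-wide. The 10 GB value here is just an illustration, not a recommendation:

```sql
-- Per-session override of the sort memory cap (value in bytes).
-- 10737418240 bytes = 10 GB.
ALTER SESSION SET `planner.memory.max_query_memory_per_node` = 10737418240;
```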
I guess you can set it to some high value, and if queries that use sort start running out of memory, you can lower it or run fewer queries in parallel.

Thanks

On Wed, Jun 1, 2016 at 2:00 PM, John Omernik <[email protected]> wrote:

> So for my Parquet issues it will not likely make a difference (it appears
> to be heap related and/or Parquet-writer related). Still, I would be very
> interested in guidelines here; keeping it at 2GB with such beefy nodes
> seems to be a waste.
>
> John
>
> On Wed, Jun 1, 2016 at 3:38 PM, Abdel Hakim Deneche <[email protected]> wrote:
>
> > I don't know of any specific guidelines for this option, but what I
> > do know is that it only affects the sort operator, and it's related to
> > direct memory, not heap memory.
> >
> > On Wed, Jun 1, 2016 at 1:20 PM, John Omernik <[email protected]> wrote:
> >
> > > I am reposting this question here as well (I posted on the MapR
> > > Community forums).
> > >
> > > The default, as I understand it, for the setting
> > > planner.memory.max_query_memory_per_node is 2G. The default heap
> > > memory setting in drill-env.sh is 4G and the default direct memory
> > > is 8G.
> > >
> > > Is there any advice on where I should set
> > > planner.memory.max_query_memory_per_node as the other numbers scale?
> > > I.e., does this setting coordinate more with heap, with direct, or
> > > with both? If I double my direct memory, should I double the setting?
> > > Are there any guidelines or methods for tuning this?
> > >
> > > I am currently running bits at 24GB of heap and 84GB of direct;
> > > should I take planner.memory.max_query_memory_per_node and x8, to
> > > put it at 16G? Thoughts?
> > >
> > > Thanks!
> > > John

--
Abdelhakim Deneche
Software Engineer
<http://www.mapr.com/>

Now Available - Free Hadoop On-Demand Training
<http://www.mapr.com/training?utm_source=Email&utm_medium=Signature&utm_campaign=Free%20available>
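A postscript on the sizing arithmetic discussed above: since the option is a per-query cap, a rough budget can be derived from total direct memory divided by the expected number of concurrent sort-heavy queries, with some fraction held back for other operators. This is not Drill's internal planner formula, just a back-of-envelope sketch of the reasoning in the thread; the function name and the 50% safety fraction are assumptions for illustration:

```python
# Back-of-envelope sizing for planner.memory.max_query_memory_per_node.
# NOT Drill's internal formula -- just the arithmetic implied by the
# thread: the option is a per-query cap, so several concurrent
# sort-heavy queries can together exceed a drillbit's direct memory.

GB = 1 << 30  # bytes in a gigabyte

def per_query_sort_budget(direct_memory_bytes, max_concurrent_queries,
                          safety_fraction=0.5):
    """Suggest a per-query sort cap that leaves headroom for other
    operators, assuming max_concurrent_queries sort-heavy queries run
    at once. safety_fraction is the share of direct memory allowed to
    sorts in aggregate (an assumption, not a Drill rule)."""
    return int(direct_memory_bytes * safety_fraction / max_concurrent_queries)

# Example from the thread: 84 GB of direct memory per drillbit,
# assuming up to 4 concurrent sort-heavy queries.
budget = per_query_sort_budget(84 * GB, max_concurrent_queries=4)
print(budget // GB)  # -> 10 (GB), in the ballpark of the "x8" John asked about
```

With the 8 GB direct-memory default and two concurrent queries, the same arithmetic gives the 2 GB default cap mentioned in the thread.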
