"Tom Lane" <[EMAIL PROTECTED]> writes:

> Setting it at conclusion is correct, I think, since if we ever changed
> the code to abandon TSS_BOUNDED state in the face of unexpected memory
> growth, it would be wrong to have set it in make_bounded_sort.

Actually, that would be pretty easy to do, and it strikes me as worth doing. It
isn't hard to contrive an example that overruns work_mem, though I'm not sure
how easy it would be to actually run out of RAM.

One thing, though: we would have to be less eager about switching to bounded
sort if we're running low on memory while accumulating tuples. We wouldn't
want to switch to bounded sort and then spill to disk right afterward. Here I
just added a 10% margin over the bound, on the assumption that the remaining
tuples usually won't be more than 10% larger than the first batch. Introducing
floating-point math in here kind of bothers me, though.
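
If the floating point is a concern, the same test could be written in integer
arithmetic. This is only a sketch, with hypothetical names standing in for the
Tuplesortstate fields and the LACKMEM() macro used in the patch below:

#include <stdbool.h>

/*
 * Sketch only: roughly the same 10% headroom as "memtupcount > bound * 1.1",
 * expressed in integer arithmetic.  "lackmem" stands in for LACKMEM(state).
 */
static bool
should_switch_to_bounded_heap(int memtupcount, int bound, bool lackmem)
{
	return memtupcount > bound * 2 ||
		(lackmem && memtupcount > bound + bound / 10);
}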

Index: src/backend/utils/sort/tuplesort.c
===================================================================
RCS file: /home/stark/src/REPOSITORY/pgsql/src/backend/utils/sort/tuplesort.c,v
retrieving revision 1.77
diff -c -r1.77 tuplesort.c
*** src/backend/utils/sort/tuplesort.c	7 Jun 2007 19:19:57 -0000	1.77
--- src/backend/utils/sort/tuplesort.c	1 Sep 2007 20:31:53 -0000
***************
*** 940,946 ****
  			 */
  			if (state->bounded &&
  				(state->memtupcount > state->bound * 2 ||
! 				 (state->memtupcount > state->bound && LACKMEM(state))))
  			{
  #ifdef TRACE_SORT
  				if (trace_sort)
--- 940,946 ----
  			 */
  			if (state->bounded &&
  				(state->memtupcount > state->bound * 2 ||
! 				 (state->memtupcount > state->bound * 1.1 && LACKMEM(state))))
  			{
  #ifdef TRACE_SORT
  				if (trace_sort)
***************
*** 970,975 ****
--- 970,976 ----
  			break;
  
  		case TSS_BOUNDED:
+ 
  			/*
  			 * We don't want to grow the array here, so check whether the
  			 * new tuple can be discarded before putting it in.  This should
***************
*** 991,996 ****
--- 992,1009 ----
  				tuplesort_heap_siftup(state, false);
  				tuplesort_heap_insert(state, tuple, 0, false);
  			}
+ 
+ 			/* If the later tuples were larger than the first batch, we could
+ 			 * be low on memory, in which case we have to give up on the
+ 			 * bounded sort and fail over to a disk sort
+ 			 */
+ 			if (LACKMEM(state)) 
+ 			{
+ 				REVERSEDIRECTION(state);
+ 				inittapes(state);
+ 				dumptuples(state, false);
+ 			}
+ 
  			break;
  
  		case TSS_BUILDRUNS:
***************
*** 1068,1075 ****
  			 * in memory, using a heap to eliminate excess tuples.  Now we have
  			 * to transform the heap to a properly-sorted array.
  			 */
! 			if (state->memtupcount > 1)
! 				sort_bounded_heap(state);
  			state->current = 0;
  			state->eof_reached = false;
  			state->markpos_offset = 0;
--- 1081,1087 ----
  			 * in memory, using a heap to eliminate excess tuples.  Now we have
  			 * to transform the heap to a properly-sorted array.
  			 */
! 			sort_bounded_heap(state);
  			state->current = 0;
  			state->eof_reached = false;
  			state->markpos_offset = 0;


-- 
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com