Re: AW: AW: [HACKERS] LIMIT in DECLARE CURSOR: request for comments

2000-11-03 Thread Tom Lane

Zeugswetter Andreas SB [EMAIL PROTECTED] writes:
 Right.  So what do you think about a hint that takes the form of a SET
 variable for the fetch percentage to assume for a DECLARE CURSOR?

 Since we don't have other hints that are embedded directly into the SQL
 that sounds perfect. 

 The not-so-offhand question for me is whether to use this percentage
 for non-cursor selects also. Imho both should (at least by default)
 behave the same.

Not at all, since in a non-cursor select you *must* retrieve all the
data.  I can't see any reason to optimize that on any other basis than
total execution time.

regards, tom lane



Re: AW: [HACKERS] LIMIT in DECLARE CURSOR: request for comments

2000-11-03 Thread Tom Lane

Zeugswetter Andreas SB [EMAIL PROTECTED] writes:
 I did understand this, but I still disagree. Whether this is what you want
 strongly depends on what the application does with the resulting rows.

Sure ...

 There is no way for the backend to know this, thus imho the app needs
 to give a hint. 

Right.  So what do you think about a hint that takes the form of a SET
variable for the fetch percentage to assume for a DECLARE CURSOR?

regards, tom lane
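For concreteness, the hint Tom proposes might look like this (the
variable name and value are hypothetical; no such setting exists yet):

```sql
-- Hypothetical setting: assume 10% of a cursor's result will be fetched
SET cursor_fetch_fraction = 0.10;

-- Subsequent cursors would then be planned with that fraction in mind
DECLARE c CURSOR FOR SELECT * FROM orders ORDER BY placed_at;
```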



AW: [HACKERS] LIMIT in DECLARE CURSOR: request for comments

2000-11-03 Thread Zeugswetter Andreas SB


  I'd say that normally you're not using cursors because you intend to throw
  away 80% or 90% of the result set, but instead you're using it because
  it's convenient in your programming environment (e.g., ecpg).  There are
  other ways of getting only some rows, this is not it.
 
 I didn't say I was assuming that the user would only fetch 10% of the
 rows.  Since what we're really doing is a linear interpolation between
 startup and total cost, what this is essentially doing is favoring low
 startup cost, but not to the complete exclusion of total cost.
 I think that that describes the behavior we want for a cursor pretty well.

I did understand this, but I still disagree. Whether this is what you want
strongly depends on what the application does with the resulting rows.
It is the correct assumption if the application needs a lot of time
to process each row. If the application's processing of each row is fast,
we still want the lowest total cost.

There is no way for the backend to know this, thus imho the app needs
to give a hint. 

Andreas
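To make the disagreement concrete: under the linear interpolation Tom
describes, a plan's estimated cost at fetch fraction f is
startup + f * (total - startup). With made-up cost numbers (all figures
hypothetical), the winning plan flips as f grows:

```sql
-- Plan A (index scan): startup 1, total 5000.
-- Plan B (sort):       startup 2000, total 2500.
-- Interpolated cost at f = 0.10 (the 10% default assumption):
SELECT 1.0    + 0.10 * (5000.0 - 1.0)    AS plan_a_at_10pct,  -- 500.9:  A wins
       2000.0 + 0.10 * (2500.0 - 2000.0) AS plan_b_at_10pct;  -- 2050.0
-- At f = 1.00 (Andreas's preferred default) the totals are 5000 vs. 2500: B wins.
```

Which default f is right is exactly what the backend cannot know without
a hint, which is Andreas's point.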



Re: AW: [HACKERS] LIMIT in DECLARE CURSOR: request for comments

2000-10-31 Thread Philip Warner

At 14:14 31/10/00 +0100, Zeugswetter Andreas SB wrote:
 
 Which is why I like the client being able to ask the 
 optimizer for certain kinds of solutions *explicitly*.

Yes, something like:
   set optimization to [first_rows|all_rows]


That's one way, useful for affecting all subsequent statements, but it
would be nice to also allow such things per statement, e.g. in comments:

/*++optimizer: fast_start, no_seq_scan */
select...

i.e. make all settable values dynamically settable within a statement. The
scope of the settings would probably have to depend on the abilities of the
optimizer - e.g. how would subselects and views be handled?



Philip Warner| __---_
Albatross Consulting Pty. Ltd.   |/   -  \
(A.B.N. 75 008 659 498)  |  /(@)   __---_
Tel: (+61) 0500 83 82 81 | _  \
Fax: (+61) 0500 83 82 82 | ___ |
Http://www.rhyme.com.au  |/   \|
 |----
PGP key available upon request,  |  /
and from pgp5.ai.mit.edu:11371   |/



AW: [HACKERS] LIMIT in DECLARE CURSOR: request for comments

2000-10-30 Thread Zeugswetter Andreas SB


 After thinking some more about yesterday's discussions, I propose that
 we adopt the following planning behavior for cursors:
 
 1. If DECLARE CURSOR does not contain a LIMIT, continue to plan on the
 basis of 10%-or-so fetch (I'd consider anywhere from 5% to 25% to be
 just as reasonable, if people want to argue about the exact number;
 perhaps a SET variable is in order?).  10% seems to be a reasonable
 compromise between delivering tuples promptly and not choosing a plan
 that will take forever if the user fetches the whole result.

Imho that was a wrong assumption in the first place. The default assumption
imho needs to be 100%. Especially if you fixed the LIMIT clause, enabling
people to optimize the few-rows-fetched case.

 3. If DECLARE CURSOR contains "LIMIT ALL", plan on the assumption that
 all tuples will be fetched, ie, select lowest-total-cost plan.
 
 (Note: LIMIT ALL has been in the grammar right along, but up to now
 it has been entirely equivalent to leaving out the LIMIT clause.  This
 proposal essentially suggests allowing it to act as a planner 
 hint that
 the user really does intend to fetch all the tuples.)
 
 Comments?

Imho an explicit statement to switch the optimizer mode from all-rows to
first-rows would be a lot easier to understand, and it is what other DB
vendors do.

Andreas
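Side by side, the two alternatives under discussion might look like this
(neither behavior is implemented in PostgreSQL at this point; the ALTER
SESSION statement is Oracle syntax, shown only as the vendor precedent
Andreas alludes to):

```sql
-- Tom's proposal: LIMIT ALL as a planner hint that all rows will be fetched
DECLARE c CURSOR FOR SELECT * FROM t ORDER BY k LIMIT ALL;

-- Andreas's preference, as other vendors do it (Oracle, not PostgreSQL):
ALTER SESSION SET OPTIMIZER_MODE = FIRST_ROWS;
```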



Re: AW: [HACKERS] LIMIT in DECLARE CURSOR: request for comments

2000-10-30 Thread Stephan Szabo


On Mon, 30 Oct 2000, Zeugswetter Andreas SB wrote:

 
  After thinking some more about yesterday's discussions, I propose that
  we adopt the following planning behavior for cursors:
  
  1. If DECLARE CURSOR does not contain a LIMIT, continue to plan on the
  basis of 10%-or-so fetch (I'd consider anywhere from 5% to 25% to be
  just as reasonable, if people want to argue about the exact number;
  perhaps a SET variable is in order?).  10% seems to be a reasonable
  compromise between delivering tuples promptly and not choosing a plan
  that will take forever if the user fetches the whole result.
 
 Imho that was a wrong assumption in the first place. The default assumption
 imho needs to be 100%. Especially if you fixed the LIMIT clause, enabling
 people to optimize the few-rows-fetched case.

But what if you're doing fetch 10 rows, fetch 10 rows, ...?
You're not limiting, because you want all of them, but you are only
pulling a small number at a time to, say, do expensive front-end processing.
It might make sense to actually pick a plan with lower startup cost and
higher per-row cost.  Although the full cost is higher, you get a better
turnaround time on the first set, and the cost difference per set may
be unnoticeable (it would depend on the particulars).
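The access pattern Stephan describes is ordinary cursor usage (standard
PostgreSQL syntax; the table and column names are made up):

```sql
BEGIN;
DECLARE c CURSOR FOR SELECT * FROM orders ORDER BY placed_at;

FETCH 10 FROM c;   -- expensive front-end processing on these rows...
FETCH 10 FROM c;   -- ...then the next batch, until FETCH returns no rows

CLOSE c;
COMMIT;
```

No LIMIT appears anywhere, yet a fast-startup plan returns the first
batch quickly even when its total cost is higher.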