Rick Hillegas wrote:
Thanks for the pointer to this presentation, Oyvind. It's a pretty
startling observation, though I'm not sure how to use it. I'd be
interested in hearing your thoughts about this some time.
The question raised in this presentation is whether the optimizers in the
Rick Hillegas wrote:
Hi Oyvind,
I agree that this is inelegant. As you note, this approach step by step
forces a plan which the current Derby optimizer is capable of
considering--with or without the covering index. Regardless of whether
we teach the optimizer some better tricks, I think it's worth beefing up
our
Cheers,
-Rick
That reminds me of a very entertaining presentation which was held at
VLDB this year:
Rick Hillegas wrote:
It might help to add another column to the index so that it covers both
the restriction and the ordering information. And if we could add a
primary key to a temporary table, then something like the following
might take us in the right direction:
create index time_index
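The statement is cut off in the archive; purely as a rough sketch of the
idea (the orders table and its status/order_time columns are hypothetical
stand-ins, not taken from the thread), a JDBC program might create such a
covering index like this:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CoveringIndexSketch {
        public static void main(String[] args) throws Exception {
            // Embedded Derby connection; the database name is arbitrary.
            Connection conn =
                DriverManager.getConnection("jdbc:derby:testdb;create=true");
            try (Statement s = conn.createStatement()) {
                // Hypothetical schema standing in for the one in the thread.
                s.execute("CREATE TABLE orders (order_time TIMESTAMP, "
                    + "status INT, note VARCHAR(100))");
                // Index the restriction column (status) first and the
                // ordering column (order_time) second, so a query such as
                //   SELECT order_time, status FROM orders
                //   WHERE status = 1 ORDER BY order_time
                // can be answered in index order, without a separate sort.
                s.execute("CREATE INDEX time_index ON orders (status, order_time)");
            }
            conn.close();
        }
    }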
SAD == Suavi Ali Demir [EMAIL PROTECTED] writes:
SAD Another little detail about optimization is that
SAD functions like Statement.setMaxRows() on the JDBC side may
SAD not be sufficient, since they are called after the SQL statement is
SAD prepared and returned as an object (after
I agree with Øystein: given that the standard JDBC API for the maximum
rows is Statement.setMaxRows, Derby should be able to take advantage
of it regardless of when it is set.
This doesn't imply that a plan gets reprepared, though that could be a
solution; it could be implemented as a plan
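For reference, the call under discussion is standard JDBC. A minimal
sketch (the table and query are hypothetical) shows why the limit only
becomes known after the plan has already been prepared:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class MaxRowsSketch {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection("jdbc:derby:testdb");
            // The access plan is chosen here, at prepare time ...
            PreparedStatement ps = conn.prepareStatement(
                "SELECT order_time, status FROM orders ORDER BY order_time");
            // ... but the row limit only arrives afterwards, which is why
            // the thread suggests treating it as a plan parameter (or
            // repreparing the statement).
            ps.setMaxRows(1000);
            try (ResultSet rs = ps.executeQuery()) {
                int n = 0;
                while (rs.next()) n++;   // the driver stops at 1000 rows
                System.out.println(n + " rows fetched");
            }
            ps.close();
            conn.close();
        }
    }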
*From:* Craig Russell [mailto:[EMAIL PROTECTED]]
*Sent:* Saturday, September 17, 2005 2:35 PM
*To:* Derby Discussion
*Subject:* Re: derby performance and 'order by'
Hi Scott,
How have you set up the test? Are you using ij and displaying all of
the data or using jdbc to access the data?
What do you do in 0.010 seconds? Do you read all of the rows into
memory, or just record the time until you get the first row? Are you
measuring the time taken to return all the
From: Sunitha Kambhampati [mailto:[EMAIL PROTECTED]]
Sent: Friday, September 16, 2005 5:55 PM
To: Derby Discussion
Subject: Re: derby performance and 'order by'
Scott Ogden wrote:
I have observed some interesting query performance behavior and am
hoping someone here can explain.
In my scenario, it appears that an existing index
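The query itself is truncated in the archive; as a hypothetical harness
for the kind of comparison being discussed (schema and predicates are
invented for illustration), one way to time the same scan with and
without the sort is:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class OrderByTiming {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection("jdbc:derby:testdb");
            time(conn, "SELECT * FROM orders WHERE status = 1");
            time(conn, "SELECT * FROM orders WHERE status = 1 ORDER BY order_time");
            conn.close();
        }

        static void time(Connection conn, String sql) throws SQLException {
            long start = System.currentTimeMillis();
            try (Statement s = conn.createStatement();
                 ResultSet rs = s.executeQuery(sql)) {
                int n = 0;
                while (rs.next()) n++;   // drain every row so any sort cost is paid
                System.out.println(sql + ": " + n + " rows in "
                    + (System.currentTimeMillis() - start) + " ms");
            }
        }
    }

This also bears on the measurement question above: the loop counts the
time to fetch every row, not just the time until the first row arrives.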
ng columns:
qualifiers:
None
optimizer estimated row count: 78377.51
optimizer estimated cost: 166745.12
--scott
-Original Message-
From: Sunitha Kambhampati [mailto:[EMAIL PROTECTED]]
Sent: Friday, September 16, 2005 5:55 PM
To: Derby Discussion
Subject: Re: derby performance and 'order by'
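The "optimizer estimated row count/cost" lines quoted above have the
shape of Derby's runtime statistics output. To capture that kind of plan
text programmatically, one can use the SYSCS_UTIL procedures (the query
here is a hypothetical stand-in):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PlanDump {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection("jdbc:derby:testdb");
            try (Statement s = conn.createStatement()) {
                // Turn on runtime statistics collection for this connection.
                s.execute("CALL SYSCS_UTIL.SYSCS_SET_RUNTIMESTATISTICS(1)");
                try (ResultSet rs = s.executeQuery(
                        "SELECT * FROM orders ORDER BY order_time")) {
                    while (rs.next()) { /* drain the result */ }
                }
                // Fetch the statistics text for the statement just run,
                // including estimated row count and cost lines like the
                // ones quoted above.
                try (ResultSet stats = s.executeQuery(
                        "VALUES SYSCS_UTIL.SYSCS_GET_RUNTIMESTATISTICS()")) {
                    if (stats.next()) System.out.println(stats.getString(1));
                }
            }
            conn.close();
        }
    }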
Suavi Ali Demir wrote:
Actually, it sounds like the problem of finding top 1000 rows out of
166333 rows is different than sorting 166333 rows and maybe it could be
optimized. There is no need to sort all 166333 but the information that
we are only looking 1000 rows would have to be passed
How about this:
Make one pass through the big chunk, which is 166333 rows (or could be
millions): for each row, decide whether or not it belongs to the final
1000-row chunk. To do this efficiently, the tricky part needs to be on
the 1000-row side.
Steps:
1. Keep and maintain max-min values for this
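The steps are cut off here; as a rough sketch of the same one-pass idea
(the bounded max-heap below is my stand-in for the "keep and maintain
max-min values" bookkeeping, with plain longs in place of rows and sort
keys):

    import java.util.Comparator;
    import java.util.PriorityQueue;

    public class TopNSketch {
        // Return the n smallest keys seen in a single pass over the input.
        static PriorityQueue<Long> topN(long[] keys, int n) {
            // Max-heap holding the n best keys so far; its head is the
            // current boundary value a new row must beat to get in.
            PriorityQueue<Long> heap =
                new PriorityQueue<>(n, Comparator.reverseOrder());
            for (long k : keys) {
                if (heap.size() < n) {
                    heap.offer(k);
                } else if (k < heap.peek()) {
                    heap.poll();    // evict the current boundary value
                    heap.offer(k);  // admit the better key
                }
            }
            return heap;
        }
    }

Each of the 166333 rows is compared against the boundary once, so the
work is roughly O(total * log 1000) instead of a full O(total * log total)
sort.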