I ended up forcing the join order using the optimizer override
"--DERBY-PROPERTIES joinOrder=fixed" to ensure that the join would go the way
it should.
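For reference, here is a sketch of how the override is applied to this query (the --DERBY-PROPERTIES comment must immediately follow the FROM keyword and runs to the end of the line; the tables are then joined in the order they are listed):

```sql
-- Sketch: forcing the good join order (CS, then CSCB, then CB).
-- The optimizer-override comment must come right after FROM.
SELECT CB.ID
FROM --DERBY-PROPERTIES joinOrder=FIXED
     PKG_9145E_V1.CONFIGURATION_SET CS
JOIN CORE_V1.CONFIGURATIONSET_CONFIGURATIONBUNDLE CSCB
  ON CSCB.CONFIGURATIONSET_ID = CS.ID
JOIN CORE_V1.CONFIGURATION_BUNDLE CB
  ON CB.ID = CSCB.CONFIGURATIONBUNDLE_ID
 AND CB.BUNDLE_NAME = 'SOAM_SYSTEM_CFG'
 AND CS.ID = 898103
```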
Here is the DDL:
CREATE TABLE CONFIGURATION_SET
(
ID INTEGER NOT NULL PRIMARY KEY,
DTYPE VARCHAR(64),
OPLOCK INTEGER NOT NULL DEFAULT 0,
CONFIGURATION_NUMBER INTEGER NOT NULL,
CREATED_BY VARCHAR(64) NOT NULL,
CREATED_ON TIMESTAMP NOT NULL,
CONFIGURABLE_HARDWARE_ID INTEGER
);
CREATE TABLE CONFIGURATION_BUNDLE
(
ID INTEGER NOT NULL PRIMARY KEY,
DTYPE VARCHAR(64),
OPLOCK INTEGER NOT NULL DEFAULT 0,
BUNDLE_NAME VARCHAR(64) NOT NULL
);
CREATE TABLE CONFIGURATIONSET_CONFIGURATIONBUNDLE
(
CONFIGURATIONBUNDLE_ID INTEGER NOT NULL,
CONFIGURATIONSET_ID INTEGER NOT NULL,
CONSTRAINT CONFIGURATIONSET_CONFIGURATIONBUNDLE_PK PRIMARY KEY
(CONFIGURATIONBUNDLE_ID,CONFIGURATIONSET_ID)
);
ALTER TABLE CONFIGURATIONSET_CONFIGURATIONBUNDLE
ADD CONSTRAINT CONFIGURATIONSET_CONFIGURATIONBUNDLE_FK_1
FOREIGN KEY (CONFIGURATIONBUNDLE_ID)
REFERENCES CONFIGURATION_BUNDLE(ID);
ALTER TABLE CONFIGURATIONSET_CONFIGURATIONBUNDLE
ADD CONSTRAINT CONFIGURATIONSET_CONFIGURATIONBUNDLE_FK_2
FOREIGN KEY (CONFIGURATIONSET_ID)
REFERENCES CONFIGURATION_SET(ID);
CREATE INDEX CONFIGURATION_BUNDLE_IX_1 ON CONFIGURATION_BUNDLE(DTYPE ASC);
CREATE INDEX CONFIGURATION_BUNDLE_IX_2 ON CONFIGURATION_BUNDLE(BUNDLE_NAME
ASC);
CREATE INDEX CONFIGURATION_SET_IX_1 ON CONFIGURATION_SET(DTYPE ASC);
A configuration set is a point-in-time snapshot of the configuration of a
network device. It is made up of configuration bundles, which are logical
organizations of configuration parameters. The bundles are named (IP
configuration, SNTP configuration, etc.), and there are probably 50 to 60 per
configuration set. A configuration bundle is shared among configuration sets
if it has not changed for a period of time.
So to your questions:
- The bundle name is not unique and is probably spread pretty evenly over a
range of about 60 discrete values. Now that I think about it, I guess an index
on this column is not buying much.
- Right now the query is a string that is built dynamically; it is not a
prepared statement. This is done using JPA's EntityManager.createNativeQuery.
So yes, it is probably being optimized every time.
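For comparison, the parameterized form (a sketch: the same query text with the literals replaced by "?" markers, as it would look in a prepared statement whose compiled plan can be reused):

```sql
-- Sketch: parameter markers instead of inlined literals, so the
-- statement compiles once and is reused with different values.
SELECT CB.ID
FROM PKG_9145E_V1.CONFIGURATION_SET CS
JOIN CORE_V1.CONFIGURATIONSET_CONFIGURATIONBUNDLE CSCB
  ON CSCB.CONFIGURATIONSET_ID = CS.ID
JOIN CORE_V1.CONFIGURATION_BUNDLE CB
  ON CB.ID = CSCB.CONFIGURATIONBUNDLE_ID
 AND CB.BUNDLE_NAME = ?
 AND CS.ID = ?
```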
Just a naive thought, however: the query is constrained by a single value of a
primary key. One would think this would have higher selectivity than a
constraint against an index column that allows duplicate values. I am ignorant
of the way the optimizer works, though. Also, I prototyped the query in
SquirrelSQL, saw that it returned what I needed, and then put it in place in
the code. It ran correctly for many hours, and then I noticed a major
performance hit and investigated.
I guess the best thing in cases like this, where I know better, is to use the
optimizer override to fix the plan and ensure it does not spontaneously
change. I was under the assumption that the index stats daemon would keep the
statistics up to date and prevent this kind of thing. I guess I still don't
fully understand all of this. Note that the database had hundreds of
thousands of configuration set and configuration bundle entries, and while
testing today I probably added another thousand or two. So the change in the
number of rows in the tables was probably 1 to 10 percent.
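(For what it's worth, the index statistics can also be refreshed by hand with the built-in procedure; a sketch, where NULL as the third argument means all indexes on the table:)

```sql
-- Sketch: force a refresh of the cardinality statistics that the
-- optimizer uses for its duplicate-count estimates.
CALL SYSCS_UTIL.SYSCS_UPDATE_STATISTICS('CORE_V1', 'CONFIGURATION_BUNDLE', NULL);
CALL SYSCS_UTIL.SYSCS_UPDATE_STATISTICS('CORE_V1', 'CONFIGURATIONSET_CONFIGURATIONBUNDLE', NULL);
```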
I do appreciate your time to comment on this!
Brett
________________________________________
From: Mike Matrigali [[email protected]]
Sent: Friday, March 23, 2012 8:06 PM
To: [email protected]
Subject: Re: Why is a very bad query plan being chosen and why did it change
from good to bad?
Bergquist, Brett wrote:
> Okay, I have in fact hit a problem that I think is Derby related that I need
> some input on where to look.
>
> While looking at syscs_diag.transaction table, I saw one of the queries that
> needs to be performed. So on a whim, I ran this same query in SquirrelSQL
> and took a look at the query execution plan. The query returns exactly 1 row.
>
> There is an index on CORE_V1.CONFIGURATION_BUNDLE.BUNDLE_NAME.
> CORE_V1.CONFIGURATION_SET.ID is the primary key.
> CORE_V1.CONFIGURATIONSET_CONFIGURATIONBUNDLE has a primary key of
> (CONFIGURATIONSET_ID, CONFIGURATIONBUNDLE_ID).
> CORE_V1.CONFIGURATION_BUNDLE.ID is the primary key.
>
> Here is the query plan:
>
> Statement Name:
> SQL_CURLH000C2
> Statement Text:
> SELECT CB.ID FROM PKG_9145E_V1.CONFIGURATION_SET CS JOIN
> CORE_V1.CONFIGURATIONSET_CONFIGURATIONBUNDLE CSCB ON CSCB.CONFIGURATIONSET_ID
> = CS.ID JOIN CORE_V1.CONFIGURATION_BUNDLE CB ON CB.ID =
> CSCB.CONFIGURATIONBUNDLE_ID AND CB.BUNDLE_NAME = 'SOAM_SYSTEM_CFG' AND CS.ID
> = 898103
> Parse Time: 0
> Bind Time: 0
> Optimize Time: 0
> Generate Time: 0
> Compile Time: 0
> Execute Time: 0
> Begin Compilation Timestamp : null
> End Compilation Timestamp : null
> Begin Execution Timestamp : null
> End Execution Timestamp : null
> Statement Execution Plan Text:
> Project-Restrict ResultSet (7):
> Number of opens = 1
> Rows seen = 1
> Rows filtered = 0
> restriction = false
> projection = true
> constructor time (milliseconds) = 0
> open time (milliseconds) = 0
> next time (milliseconds) = 0
> close time (milliseconds) = 0
> restriction time (milliseconds) = 0
> projection time (milliseconds) = 0
> optimizer estimated row count: 19561.00
> optimizer estimated cost: 206733.70
> Source result set:
> Nested Loop Exists Join ResultSet:
> Number of opens = 1
> Rows seen from the left = 436409
> Rows seen from the right = 1
> Rows filtered = 0
> Rows returned = 1
> constructor time (milliseconds) = 0
> open time (milliseconds) = 0
> next time (milliseconds) = 0
> close time (milliseconds) = 0
> optimizer estimated row count: 19561.00
> optimizer estimated cost: 206733.70
> Left result set:
> Nested Loop Join ResultSet:
> Number of opens = 1
> Rows seen from the left = 1
> Rows seen from the right = 436409
> Rows filtered = 0
> Rows returned = 436409
> constructor time (milliseconds) = 0
> open time (milliseconds) = 0
> next time (milliseconds) = 0
> close time (milliseconds) = 0
> optimizer estimated row count: 19561.00
> optimizer estimated cost: 53033.14
> Left result set:
> Index Scan ResultSet for CONFIGURATION_SET using
> constraint SQL081029110511950 at read committed isolation level using share
> row locking chosen by the optimizer
> Number of opens = 1
> Rows seen = 1
> Rows filtered = 0
> Fetch Size = 1
> constructor time (milliseconds) = 0
> open time (milliseconds) = 0
> next time (milliseconds) = 0
> close time (milliseconds) = 0
> next time in milliseconds/row = 0
>
> scan information:
> Bit set of columns fetched={0}
> Number of columns fetched=1
> Number of deleted rows visited=0
> Number of pages visited=3
> Number of rows qualified=1
> Number of rows visited=1
> Scan type=btree
> Tree height=3
> start position:
> >= on first 1 column(s).
> Ordered null semantics on the following
> columns:
> 0
> stop position:
> > on first 1 column(s).
> Ordered null semantics on the following
> columns:
> 0
> qualifiers:
> None
> optimizer estimated row count: 1.00
> optimizer estimated cost: 4.71
> Right result set:
> Index Row to Base Row ResultSet for
> CONFIGURATION_BUNDLE:
> Number of opens = 1
> Rows seen = 436409
> Columns accessed from heap = {2, 3}
> constructor time (milliseconds) = 0
> open time (milliseconds) = 0
> next time (milliseconds) = 0
> close time (milliseconds) = 0
> optimizer estimated row count: 19561.00
> optimizer estimated cost: 53028.43
> Index Scan ResultSet for CONFIGURATION_BUNDLE
> using index CONFIGURATION_BUNDLE_IX_2 at read committed isolation level using
> instantaneous share row locking chosen by the optimizer
> Number of opens = 1
> Rows seen = 436409
> Rows filtered = 0
> Fetch Size = 16
> constructor time (milliseconds) = 0
> open time (milliseconds) = 0
> next time (milliseconds) = 0
> close time (milliseconds) = 0
> next time in milliseconds/row = 0
>
> scan information:
> Bit set of columns fetched=All
> Number of columns fetched=2
> Number of deleted rows visited=193
> Number of pages visited=3979
> Number of rows qualified=436409
> Number of rows visited=436603
> Scan type=btree
> Tree height=5
> start position:
> >= on first 1 column(s).
> Ordered null semantics on the
> following columns:
> 0
> stop position:
> > on first 1 column(s).
> Ordered null semantics on the
> following columns:
> 0
> qualifiers:
> None
> optimizer estimated row count: 19561.00
> optimizer estimated cost: 53028.43
>
>
> Right result set:
> Index Scan ResultSet for CONFIGURATIONSET_CONFIGURATIONBUNDLE
> using constraint CONFIGURATIONSET_CONFIGURATIONBUNDLE_PK at read committed
> isolation level using share row locking chosen by the optimizer
> Number of opens = 436409
> Rows seen = 1
> Rows filtered = 0
> Fetch Size = 1
> constructor time (milliseconds) = 0
> open time (milliseconds) = 0
> next time (milliseconds) = 0
> close time (milliseconds) = 0
> next time in milliseconds/row = 0
>
> scan information:
> Bit set of columns fetched={0, 1}
> Number of columns fetched=2
> Number of deleted rows visited=0
> Number of pages visited=2186202
> Number of rows qualified=1
> Number of rows visited=436409
> Scan type=btree
> Tree height=5
> start position:
> >= on first 2 column(s).
> Ordered null semantics on the following columns:
> 0 1
> stop position:
> > on first 2 column(s).
> Ordered null semantics on the following columns:
> 0 1
> qualifiers:
> None
> optimizer estimated row count: 19561.00
> optimizer estimated cost: 153700.56
>
> It seems the optimizer decided to retrieve the
> CORE_V1.CONFIGURATION_BUNDLE table, filter down to the rows that matched the
> constraint (436409 of them), and then join that with the other tables.
>
> If it had chosen to retrieve the CORE_V1.CONFIGURATION_SET table first with
> the constraint, it would have returned 1 row, joined this with the
> CORE_V1.CONFIGURATIONSET_CONFIGURATIONBUNDLE table (which would have produced
> about 60 rows), and then joined with the CORE_V1.CONFIGURATION_BUNDLE table
> to produce 60 rows, which would then have been filtered by the constraint to
> 1 row. This would have come back nearly instantaneously.
>
> The funny thing is, the optimizer was choosing the optimal query plan for a
> while today. And then I went to test a bit more and everything stopped and I
> found this. Ouch that the query plan changed. More rows were added to all
> of the tables today.
>
> Note that I am running 10.8.2.2 and do have the IndexStats daemon enabled.
>
> So any idea why the query plan would change to a non-optimal one? I almost
> think I should drop the index on CORE_V1.CONFIGURATION_BUNDLE.BUNDLE_NAME so
> that it will not attempt to use this index in the optimizer...
>
> Brett
>
>
Quick answer is that query plans definitely can change, and in 10.8 we update
statistics dynamically. Those statistics are only used to determine how many
duplicates on average there are for the values in indexes. Even without
automatic statistics, query plans can change dynamically: we recompile them
every so often to make sure they are not out of date, and the query cache can
push old plans out.
Not enough info was posted to fully understand the query plan. We need the
exact DDL of the tables and indexes involved, or at least the exact DDL of
all the tables/indexes mentioned in the query plan. The first thing I would
look for is the exact index on BUNDLE_NAME; we are using
CONFIGURATION_BUNDLE_IX_2.
Is CB.BUNDLE_NAME unique? If not, how many duplicates per value are expected
(rough guess), and is that distribution unusual in some way, for example, 90%
of the rows having a single value?
I am not sure what is going on, but to make sure we are working on the same
issue, I want to ask something about the query. I don't look at query plans
very often, so I am not sure what gets printed when you print one out.
My question is whether you are always executing the query with the exact
parameter values inlined as strings, or whether the real query being executed
has "?" parameters that are substituted at execution time. This difference
causes very different paths through the optimizer.
With parameters inlined, it is likely the query is going to get optimized
every single time it is executed (at least every time any single character in
the query is different, like a different CS.ID). But Derby tends to do a much
better job at its distribution estimates when it has exact values. Note that
Derby only knows the distribution of a single value in a single index; it
knows nothing about how many rows will come out of a join based on those
distributions. It just uses some heuristics for that.