[ https://issues.apache.org/jira/browse/DERBY-3479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12575441#action_12575441 ]

A B commented on DERBY-3479:
----------------------------

> I noticed that other tests that depend on stable query plans (wisconsin and 
> StalePlansTest) set derby.storage.checkpointInterval=100000.

This is an interesting observation.  When I read it I assumed that this number 
was greater than the default and that by setting it we were avoiding 
checkpoints.  But it turns out that the opposite is true: the default 
checkpoint interval is 10*1024*1024, while 100000 is the *minimum* checkpoint 
interval allowed (see raw/LogToFile.java).  With the default interval we will 
not do any checkpoints during the running of predicatePushdown.sql; but with 
the minimum interval of 100000, we do THREE checkpoints (at least on my 
machine).
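
For reference, if I read LogToFile.java correctly the interval is measured in 
bytes of log written, so the default of 10*1024*1024 means "checkpoint after 
roughly 10 MB of log" while 100000 means "checkpoint after roughly 100 KB of 
log", i.e. about a hundred times more often.  That would explain why a script 
that never reaches the default threshold suddenly hits three checkpoints at 
the minimum setting.  For anyone who wants to poke at this outside the 
harness, here is a minimal sketch of setting the property in an ad-hoc 
embedded run (the property name and the SYSCS_UTIL procedure are real; the 
database name and class name are made up):

    // Minimal sketch (not harness code): boot the embedded engine with the
    // same property that wisconsin/StalePlansTest set.  "demoDB" and the
    // class name are placeholders.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CheckpointIntervalDemo {
        public static void main(String[] args) throws Exception {
            // System-wide setting; must be in place before the engine boots.
            System.setProperty("derby.storage.checkpointInterval", "100000");

            Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:derby:demoDB;create=true");

            // Alternative: store it as a database-wide property instead
            // (I believe this one is picked up on the next boot).
            Statement s = conn.createStatement();
            s.execute("CALL SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY("
                    + "'derby.storage.checkpointInterval', '100000')");
            s.close();
            conn.close();
        }
    }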

So I wonder whether the checkpoint does something that updates the row counts 
and/or statistics for the tables, thus affecting the optimizer's decisions.  I 
have no idea what the answer to that might be...
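
One cheap way to probe that (just a sketch; it only looks at the statistics 
side, not at the store's internal row-count estimates, and "demoDB" and "T1" 
are placeholder names) would be to force a checkpoint with 
SYSCS_UTIL.SYSCS_CHECKPOINT_DATABASE() and check whether anything visible in 
SYS.SYSSTATISTICS changes:

    // Sketch only: compare SYS.SYSSTATISTICS for a table before and after
    // an explicit checkpoint.  "demoDB" and "T1" are placeholders.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CheckpointStatsProbe {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
            Connection conn =
                    DriverManager.getConnection("jdbc:derby:demoDB");
            Statement s = conn.createStatement();

            dumpStats(s, "before checkpoint");
            s.execute("CALL SYSCS_UTIL.SYSCS_CHECKPOINT_DATABASE()");
            dumpStats(s, "after checkpoint");

            s.close();
            conn.close();
        }

        // Print the creation timestamp and column count of each statistics
        // row for table T1.
        private static void dumpStats(Statement s, String label)
                throws Exception {
            ResultSet rs = s.executeQuery(
                    "SELECT st.CREATIONTIMESTAMP, st.COLCOUNT "
                    + "FROM SYS.SYSSTATISTICS st, SYS.SYSTABLES t "
                    + "WHERE st.TABLEID = t.TABLEID AND t.TABLENAME = 'T1'");
            System.out.println("--- " + label);
            while (rs.next()) {
                System.out.println(rs.getTimestamp(1) + "  " + rs.getInt(2));
            }
            rs.close();
        }
    }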

> Perhaps changing the concurrency/timing in the buffer manager somehow changes 
> when the row count is flushed

While scanning LogToFile.java for the default checkpoint interval, I noticed 
the following:

        /////////////////////////////////////////////////////////////
        // setup checkpoint daemon and cache cleaner
        /////////////////////////////////////////////////////////////
        checkpointDaemon = rawStoreFactory.getDaemon();
        if (checkpointDaemon != null)
        {
            myClientNumber =
                checkpointDaemon.subscribe(this, true /*onDemandOnly */);

            // use the same daemon for the cache cleaner
            dataFactory.setupCacheCleaner(checkpointDaemon);
        }

Note how the *same* daemon service is used for checkpointing and for cleaning 
the cache.  The call to "setupCacheCleaner" ultimately ends up at 
CacheManager.java, which was modified by DERBY-2911.  So I wonder if this 
daemon "sharing" could explain a) why a different cache manager has an effect 
on predicatePushdown.sql, and/or b) why changing the checkpoint interval seems 
to alleviate the effect?
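
To make the timing angle concrete, here is a toy model (my own 
simplification, not Derby's DaemonService/Serviceable machinery; all names in 
it are invented) of how two subscribers sharing one background thread end up 
queued behind each other, so that changing either the cache manager or the 
checkpoint frequency shifts when the other one's work actually runs:

    // Toy model of the shared daemon: one background thread serving both
    // "checkpoint" and "cache cleaning" work.  Class and method names are
    // invented for illustration; this is not Derby code.
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class SharedDaemonModel {
        public static void main(String[] args) throws Exception {
            // One thread plays the role of the shared checkpointDaemon.
            ExecutorService daemon = Executors.newSingleThreadExecutor();

            Runnable checkpoint = new Runnable() {
                public void run() {
                    System.out.println("checkpoint: flush log + dirty pages");
                    sleep(200);   // a slow checkpoint...
                }
            };
            Runnable cacheCleaner = new Runnable() {
                public void run() {
                    System.out.println("cleaner: write dirty cache entries");
                    sleep(50);
                }
            };

            // Because both subscribers share the one thread, the cleaner's
            // work is queued behind the checkpoint (and vice versa), so the
            // moment at which pages and row counts hit disk depends on how
            // often checkpoints are scheduled.
            daemon.submit(checkpoint);
            daemon.submit(cacheCleaner);

            daemon.shutdown();
            daemon.awaitTermination(10, TimeUnit.SECONDS);
        }

        private static void sleep(long millis) {
            try { Thread.sleep(millis); } catch (InterruptedException e) { }
        }
    }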

I'm not sure whether that's useful information, as I'm well outside my area 
of expertise here, but I thought I'd mention it.

In the meantime, do you think it would be worth setting checkpointInterval to 
100000 to see if that gets predicatePushdown passing in the tinderbox again?

> Changed/unexpected query plan when running test 'lang/predicatePushdown.sql'
> ----------------------------------------------------------------------------
>
>                 Key: DERBY-3479
>                 URL: https://issues.apache.org/jira/browse/DERBY-3479
>             Project: Derby
>          Issue Type: Bug
>          Components: Regression Test Failure
>    Affects Versions: 10.4.0.0
>         Environment: OS: Solaris 10 6/06 s10x_u2wos_09a X86 64bits - SunOS 
> 5.10 Generic_118855-14
> JVM: Sun Microsystems Inc., java version "1.6.0_04", Java(TM) SE Runtime 
> Environment (build 1.6.0_04-b12), Java HotSpot(TM) Client VM (build 10.0-b19, 
> mixed mode)
>            Reporter: Ole Solberg
>
> Seen in tinderbox since r631930.
> See e.g. 
> http://dbtg.thresher.com/derby/test/tinderbox_trunk16/jvm1.6/testing/testlog/SunOS-5.10_i86pc-i386/631932-derbyall_diff.txt
>  :
> *** Start: predicatePushdown jdk1.6.0_04 derbyall:derbylang 2008-02-28 
> 14:02:49 ***
> 9285 del
> <             Rows seen from the left = 20
> 9285a9285
> >             Rows seen from the left = 10
> 9297 del
> <                     Rows seen from the right = 20
> 9297a9297
> >                     Rows seen from the right = 10
> 9299 del
> <                     Rows returned = 20
> 9299a9299
> >                     Rows returned = 10
> .
> .
> .

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
