On 12/12/11 16:13, Martin J. Evans wrote:
> On 09/12/11 18:02, Charles Jardine wrote:
>> On 09/12/11 14:01, Martin J. Evans wrote:
>>> Hi,
>>>
>>> If anyone is around who wrote or has worked on the object/collections
>>> support in DBD::Oracle I'd greatly appreciate it if you could take a
>>> quick look at this problem as the code in DBD::Oracle for this has
>>> defeated me so far.
>>>
>>> The problem is I have a query which uses types and collect and
>>> although it works fine initially once it has been run a number of
>>> times in the same connection it eventually goes from a 3s fetch time
>>> to a number of minutes. I have reduced it to the example below.
>>>
>>> I can run this code all day long in sqlplus without a problem so I
>>> don't think it is an Oracle issue.
>>
>> I have been re-writing parts of the object/collections
>> support with a view to correcting aspects of the storage management.
>> The existing code has some storage leaks, and some incorrect
>> freeing of Perl SVs, which can lead to crashes.
>>
>> The changes I have made so far do not fix your problem, but
>> the problem might provide me with a useful test case.
>>
>> Do you find, as I do, that you can chase your problem away by
>> adding 'NOT FINAL' to the end of each of the specifications
>> of the two objects (as opposed to collections)? I find this
>> helps with my version of the code. This intrigues me.
>> I would expect non-final types to be handled worse, not better.
>>
> 
> The problem does appear to go away when NOT FINAL is added.
> 
> However, it is slower when NOT FINAL is used.
> 
> I'd still like to track this down.

I wasn't suggesting NOT FINAL as a possible solution to your problem.
It is a waste of resources to declare an object type NOT FINAL unless
you actually need to use sub-types. I asked you to test NOT FINAL
because I had expected it to make matters worse, and was very surprised
when it did not.
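For reference, NOT FINAL goes at the end of the object type
specification, not the collection types. A minimal sketch (the type
name and attributes here are illustrative, not taken from your test
script):

    -- Hypothetical object type; NOT FINAL permits sub-types,
    -- at the cost of extra bookkeeping in the server.
    CREATE TYPE point_t AS OBJECT (
      x NUMBER,
      y NUMBER
    ) NOT FINAL;
    /

The default is FINAL, which is why adding the clause explicitly is
the only way to get the non-final behaviour.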

When I run your test script, the first 36 iterations are quick
(2 secs) and the 37th is slow (45 secs).

I have found I can chase the problem away in two different
ways. The slow-down does not occur if I change the script so it
uses the same statement handle for all iterations rather than
preparing a new one each time. This suggests that, if there is
a resource drain of any sort, the drain might be associated with
prepare/destroy rather than with fetch.
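In other words, restructuring the loop along these lines avoids the
slow-down. This is only a sketch; $dbh and $sql stand in for whatever
your real test script uses:

    # Prepare once, outside the loop, and reuse the handle.
    my $sth = $dbh->prepare($sql);
    for my $i (1 .. 50) {
        $sth->execute;
        while (my $row = $sth->fetchrow_arrayref) {
            # process $row
        }
    }

The slow path is the original shape, where $dbh->prepare($sql) sits
inside the loop and each handle is destroyed at the end of the
iteration.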

Another way of removing the slow-down is to re-authenticate,
with $dbh->func('uuuu', 'pppp', 'reauthenticate'), after
each iteration. This causes Oracle to create a new session
within the existing connection. Re-authentication is much
faster than creating a new connection. This might provide
a workaround for you.
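As a workaround it would look something like this, keeping your
per-iteration prepare but resetting the session each time ('uuuu' and
'pppp' are of course placeholders for the real credentials):

    for my $i (1 .. 50) {
        my $sth = $dbh->prepare($sql);
        $sth->execute;
        # ... fetch and process as before ...

        # New session within the existing connection; much cheaper
        # than disconnecting and reconnecting.
        $dbh->func('uuuu', 'pppp', 'reauthenticate');
    }

Note that re-authenticating discards session state (package state,
temporary tables, etc.), so it is only suitable if each iteration is
self-contained.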

All that said, I still don't have any real clue as to what
is going on. I had suspected that the pin counts on TDOs
(Type Descriptor Objects) might be a problem, but adding
the OCIObjectUnpin calls I thought might be necessary made
no difference.

I shall stare at the prepare/destroy code a bit more before
I give up.

-- 
Charles Jardine - Computing Service, University of Cambridge
c...@cam.ac.uk    Tel: +44 1223 334506, Fax: +44 1223 334679
