[ https://issues.apache.org/jira/browse/DERBY-4050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kathey Marsden updated DERBY-4050:
----------------------------------

    Attachment: ClobGrowth.java

Attached is the repro, ClobGrowth.java.  The output on trunk is shown below; the 
database grows to 359MB.

[C:/kmarsden/repro/clobgrowth] java ClobGrowth
Iterations to perform = 10000
Derby database created from main thread
New thread started
Derby connection from new thread
DeployThread update:1000
update:1000
DeployThread update:2000
update:2000
DeployThread update:3000
update:3000
DeployThread update:4000
update:4000
DeployThread update:5000
update:5000
DeployThread update:6000
update:6000
DeployThread update:7000
update:7000
DeployThread update:8000
update:8000
DeployThread update:9000
update:9000
SELECT * FROM new org.apache.derby.diag.SpaceTable('APP','CLOBTAB') t
CONGLOMERATENAME  |ISINDEX|NUMALLOCATEDPAGES|NUMFREEPAGES|NUMUNFILLEDPAGES|PAGESIZE|ESTIMSPACESAVING
----------------------------------------------------------------------------------------------------
CLOBTAB           |0      |11181            |2           |1               |32768   |65536
SQL090205165714850|1      |1                |0           |1               |4096    |0
Normal shutdown
Test complete
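
The allocated pages shown above are not reclaimed automatically.  As the 
description notes, the only way I know to get the space back is to compress 
the table, e.g. from ij (schema and table names as in the repro):

    CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'CLOBTAB', 1);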



On 10.4 and earlier the table still grows, but more slowly; on 10.4 the database reaches about 33MB:
Iterations to perform = 10000
Derby database created from main thread
New thread started
Derby connection from new thread
DeployThread update:1000
update:1000
DeployThread update:2000
update:2000
DeployThread update:3000
update:3000
DeployThread update:4000
update:4000
DeployThread update:5000
update:5000
DeployThread update:6000
update:6000
DeployThread update:7000
update:7000
DeployThread update:8000
update:8000
DeployThread update:9000
update:9000
SELECT * FROM new org.apache.derby.diag.SpaceTable('APP','CLOBTAB') t
CONGLOMERATENAME  |ISINDEX|NUMALLOCATEDPAGES|NUMFREEPAGES|NUMUNFILLEDPAGES|PAGESIZE|ESTIMSPACESAVING
----------------------------------------------------------------------------------------------------
CLOBTAB           |0      |973              |2           |1               |32768   |65536
SQL090205165416880|1      |1                |0           |1               |4096    |0
Normal shutdown
Test complete
[C:/kmarsden/repro/clobgrowth]
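
For reference, the core of what the attached repro does is roughly the 
following (a minimal sketch only; the attached ClobGrowth.java is 
authoritative, and the DDL, connection URL, and clob contents here are 
assumptions based on the description below):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.Arrays;

    public class ClobGrowthSketch {
        private static final int ITERATIONS = 10000;

        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
            Connection conn =
                DriverManager.getConnection("jdbc:derby:clobdb;create=true");
            Statement s = conn.createStatement();
            s.executeUpdate("CREATE TABLE CLOBTAB (ID INT PRIMARY KEY, C CLOB)");
            s.executeUpdate("INSERT INTO CLOBTAB VALUES (1, 'x'), (2, 'hello')");
            s.close();

            // Second thread: repeatedly update row 2 with a small clob.
            Thread deployThread = new Thread(new Runnable() {
                public void run() {
                    try {
                        Connection c =
                            DriverManager.getConnection("jdbc:derby:clobdb");
                        PreparedStatement ps = c.prepareStatement(
                            "UPDATE CLOBTAB SET C = ? WHERE ID = 2");
                        for (int i = 1; i <= ITERATIONS; i++) {
                            ps.setString(1, "hello");
                            ps.executeUpdate();
                            if (i % 1000 == 0)
                                System.out.println("DeployThread update:" + i);
                        }
                        ps.close();
                        c.close();
                    } catch (SQLException e) {
                        e.printStackTrace();
                    }
                }
            });
            deployThread.start();

            // Main thread: repeatedly update row 1 with a 33,000-character
            // clob.  Serializing the two update loops (e.g. on a shared
            // lock) makes the growth disappear.
            char[] big = new char[33000];
            Arrays.fill(big, 'a');
            String bigClob = new String(big);
            PreparedStatement ps = conn.prepareStatement(
                "UPDATE CLOBTAB SET C = ? WHERE ID = 1");
            for (int i = 1; i <= ITERATIONS; i++) {
                ps.setString(1, bigClob);
                ps.executeUpdate();
                if (i % 1000 == 0)
                    System.out.println("update:" + i);
            }
            ps.close();
            deployThread.join();

            try {
                DriverManager.getConnection("jdbc:derby:;shutdown=true");
            } catch (SQLException se) {
                // Derby always signals a clean shutdown via an exception.
                System.out.println("Normal shutdown");
            }
        }
    }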

> Multithreaded clob update causes growth in table that does not get reclaimed
> ----------------------------------------------------------------------------
>
>                 Key: DERBY-4050
>                 URL: https://issues.apache.org/jira/browse/DERBY-4050
>             Project: Derby
>          Issue Type: Bug
>          Components: Store
>    Affects Versions: 10.2.2.0, 10.3.3.0, 10.4.2.0, 10.5.0.0
>            Reporter: Kathey Marsden
>         Attachments: ClobGrowth.java
>
>
> Doing a multithreaded update of a Clob table causes table growth that is 
> not reclaimed except by compressing the table.  The reproduction uses a 
> two-row table and two threads: one thread repeatedly updates row 1 with a 
> 33,000-character clob, and the other repeatedly updates row 2 with a small 
> clob, "hello".  The problem occurs back to 10.2 but seems much worse on 
> trunk: after 10000 updates of each row the trunk database grew to 273MB, 
> while the 10.2 database grew only to 25MB.  If the updates are 
> synchronized there is no growth.
> I will attach the repro.
