[ 
https://issues.apache.org/jira/browse/JENA-801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14209812#comment-14209812
 ] 

Andy Seaborne commented on JENA-801:
------------------------------------

TDB itself does not need much heap because the indexes, when mmap'ed, do not count 
towards the heap total.  The rest of the app may need that space.

The critical numbers to show a difference for TransactionManager.QueueBatchSize 
are 0 and 1, though whether it will make an observable difference depends very 
much on the exact details of the access pattern.
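
For example, a minimal sketch of forcing immediate write-back (assuming 
QueueBatchSize is still the public static field on 
com.hp.hpl.jena.tdb.transaction.TransactionManager, as in the Jena 2.11.x 
sources, and that it is set before any dataset is opened):

    import com.hp.hpl.jena.query.Dataset;
    import com.hp.hpl.jena.query.ReadWrite;
    import com.hp.hpl.jena.tdb.TDBFactory;
    import com.hp.hpl.jena.tdb.transaction.TransactionManager;

    public class QueueBatchSizeExample {
        public static void main(String[] args) {
            // 0 = write committed changes back to the main database as soon as
            // no older transaction still needs the journalled state;
            // the default batches several commits before write-back.
            TransactionManager.QueueBatchSize = 0;

            Dataset ds = TDBFactory.createDataset("/path/to/tdb");
            ds.begin(ReadWrite.WRITE);
            try {
                // ... apply updates here ...
                ds.commit();
            } finally {
                ds.end();
            }
        }
    }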

Very long journal call chains arise when 2+ transactions (RR or RW) are always 
active and the system does not get a chance to flush updates (which are already 
safe and committed) to the main database.
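
As an illustration of the usage pattern that avoids this (a sketch using the 
standard Jena 2.x transaction API, not taken from the attached traces): end 
every read transaction promptly, so the system periodically sees a moment with 
no active transactions and can flush the journal.

    import com.hp.hpl.jena.query.Dataset;
    import com.hp.hpl.jena.query.QueryExecution;
    import com.hp.hpl.jena.query.QueryExecutionFactory;
    import com.hp.hpl.jena.query.ReadWrite;
    import com.hp.hpl.jena.query.ResultSetFormatter;
    import com.hp.hpl.jena.tdb.TDBFactory;

    public class PromptReaderExample {
        public static void main(String[] args) {
            Dataset ds = TDBFactory.createDataset("/path/to/tdb");
            ds.begin(ReadWrite.READ);
            try {
                QueryExecution qExec = QueryExecutionFactory.create(
                    "SELECT * { ?s ?p ?o } LIMIT 10", ds);
                try {
                    ResultSetFormatter.consume(qExec.execSelect());
                } finally {
                    qExec.close();
                }
            } finally {
                // Releasing the reader is what gives the system a chance to
                // write committed changes back to the main database.
                ds.end();
            }
        }
    }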

It would be better to have a high-water mark on the pending writebacks so that, 
when they reached that point, readers were held up to let the system clear the 
backlog, but that isn't in itself automatically a throughput improvement.
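
A rough sketch of what such a gate could look like (entirely hypothetical, not 
an existing TDB API; the class name WritebackGate, the threshold 
WRITEBACK_HIGH_WATER and the counter pendingWritebacks are made up for 
illustration):

    import java.util.concurrent.atomic.AtomicInteger;

    // Hypothetical high-water-mark gate: once the number of commits waiting to
    // be written back passes the threshold, new readers are held until the
    // backlog has been cleared.
    public class WritebackGate {
        private static final int WRITEBACK_HIGH_WATER = 100;  // illustrative threshold
        private final AtomicInteger pendingWritebacks = new AtomicInteger();
        private final Object lock = new Object();

        public void beforeStartReader() throws InterruptedException {
            synchronized (lock) {
                while (pendingWritebacks.get() > WRITEBACK_HIGH_WATER) {
                    lock.wait();                 // hold the reader up
                }
            }
        }

        public void onCommitQueued() {
            pendingWritebacks.incrementAndGet();
        }

        public void onWritebackDone(int flushed) {
            pendingWritebacks.addAndGet(-flushed);
            synchronized (lock) {
                lock.notifyAll();                // backlog reduced, release readers
            }
        }
    }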

> When the server is under load, many queries are piling up and seems to be in 
> some kind of dead lock.
> ----------------------------------------------------------------------------------------------------
>
>                 Key: JENA-801
>                 URL: https://issues.apache.org/jira/browse/JENA-801
>             Project: Apache Jena
>          Issue Type: Bug
>          Components: TDB
>    Affects Versions: TDB 0.9.4, Jena 2.11.2
>            Reporter: Bala Kolla
>         Attachments: 
> ThreadLocksInBlockMgrJournalAfterGuavaCacheInNodeTable.htm, 
> TracesWithManyItersOfBlockMgrJournal_Valid_Method.txt, 
> TracesWithManyItersOfBlockMgrJournal_getRead_Method.txt, 
> WAITDataReportShowingTheLockContention.zip, 
> WAITDataReportShowingTheLockContentionWithoutQueryFilter.zip
>
>
> We were testing our server with repositories of varied sizes, and in almost 
> all the cases when the server peaks its capacity (the maximum number of users 
> it can support), it seems like the queries are piling up because of lock 
> contention in NodeTableCache.
> Here are some details about the repository:
> size of indices on disk - 150GB
> type of hard disk used - SSD and HDD with high RAM (seeing the same result in 
> both cases)
> OS - Linux
> Details on the user load:
> We are trying to simulate a very active user load where all the users are 
> executing many use cases that result in many queries and updates on TDB.
> I would like to know the possible solutions to work around and avoid 
> this situation. I am thinking of the following; please let me know if there 
> is any other way to work around this bottleneck.
> Control the updates to the triple store so that we only do them when there are 
> not many queries pending. We would have to experiment with how this impacts 
> the use cases.
> Is there any other way to make this lock contention go away? Can we have 
> multiple instances of this cache? For example, many (90%) of our queries are 
> executed with a query scope (per project). So, can we have a separate 
> NodeTable cache for each query scope (a project in our case) and one for the 
> global scope? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
