[ 
https://issues.apache.org/jira/browse/ASTERIXDB-3574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17948309#comment-17948309
 ] 

ASF subversion and git services commented on ASTERIXDB-3574:
------------------------------------------------------------

Commit b0a5993b313976f29168c3204dce17ae5e420981 in asterixdb's branch 
refs/heads/master from Ritik Raj
[ https://gitbox.apache.org/repos/asf?p=asterixdb.git;h=b0a5993b31 ]

Revert "[ASTERIXDB-3574][STO] Taking resource-level lock instead of global lock"

Reason for Revert: caused correctness issues while caching/uncaching files

Ext-ref: MB-65695

Change-Id: I0bc0afaccaaf3519fa6b51df06b6077933f91461
Reviewed-on: https://asterix-gerrit.ics.uci.edu/c/asterixdb/+/19691
Integration-Tests: Jenkins <[email protected]>
Tested-by: Ali Alsuliman <[email protected]>
Reviewed-by: Ali Alsuliman <[email protected]>


> Enhance concurrency in DatasetLifecycleManager by replacing the global lock 
> with resource-specific locks where applicable.
> --------------------------------------------------------------------------------------------------------------------------
>
>                 Key: ASTERIXDB-3574
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-3574
>             Project: Apache AsterixDB
>          Issue Type: Bug
>          Components: STO - Storage
>    Affects Versions: 0.9.10
>            Reporter: Ritik Raj
>            Assignee: Ritik Raj
>            Priority: Major
>              Labels: triaged
>             Fix For: 0.9.10
>
>
> Currently, all dataset and index metadata operations (create, register, open, 
> close, and unregister) in DatasetLifecycleManager are synchronized using a 
> global lock. This limits concurrency.
> To improve performance, the global lock can be *downgraded* to a 
> resource-specific lock when operating on individual datasets or indexes, 
> allowing greater concurrency while preserving correctness.
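The per-resource locking idea described above can be sketched as follows. This is an illustrative sketch only, not the actual DatasetLifecycleManager code: the class and method names (ResourceLockManager, withWriteLock) are hypothetical, and the sketch does not model the cross-resource caching/uncaching invariants that the revert message cites as the reason the real change was backed out.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch: instead of one global monitor guarding every
// dataset/index operation, each resource id maps to its own lock, so
// operations on *different* resources can proceed concurrently while
// operations on the *same* resource still serialize.
public class ResourceLockManager {
    private final Map<String, ReentrantReadWriteLock> locks = new ConcurrentHashMap<>();

    private ReentrantReadWriteLock lockFor(String resourceId) {
        // computeIfAbsent creates the per-resource lock lazily and atomically.
        return locks.computeIfAbsent(resourceId, id -> new ReentrantReadWriteLock());
    }

    // Run an exclusive operation (e.g. register/unregister) on one resource.
    public void withWriteLock(String resourceId, Runnable op) {
        ReentrantReadWriteLock lock = lockFor(resourceId);
        lock.writeLock().lock();
        try {
            op.run();
        } finally {
            lock.writeLock().unlock();
        }
    }

    // Run a shared operation (e.g. a read-only lookup) on one resource.
    public void withReadLock(String resourceId, Runnable op) {
        ReentrantReadWriteLock lock = lockFor(resourceId);
        lock.readLock().lock();
        try {
            op.run();
        } finally {
            lock.readLock().unlock();
        }
    }
}
```

The subtlety, which the revert illustrates, is that per-resource locks only protect per-resource state; any invariant spanning multiple resources (such as a shared buffer-cache budget) still needs a global coordination point.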



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
