[ 
https://issues.apache.org/jira/browse/ASTERIXDB-3394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17846382#comment-17846382
 ] 

ASF subversion and git services commented on ASTERIXDB-3394:
------------------------------------------------------------

Commit a637aa1c2c8ed113ad06a1d8c877866558130a76 in asterixdb's branch 
refs/heads/master from Ritik Raj
[ https://gitbox.apache.org/repos/asf?p=asterixdb.git;h=a637aa1c2c ]

[ASTERIXDB-3394][STO] Follow up patch for addressing comments

Change-Id: I7beaa28e76e9a4c649dbcd331665479b1a582684
Reviewed-on: https://asterix-gerrit.ics.uci.edu/c/asterixdb/+/18284
Integration-Tests: Jenkins <[email protected]>
Tested-by: Jenkins <[email protected]>
Reviewed-by: Ritik Raj <[email protected]>
Reviewed-by: Wail Alkowaileet <[email protected]>


> Introducing concurrent size bound merge policy
> ----------------------------------------------
>
>                 Key: ASTERIXDB-3394
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-3394
>             Project: Apache AsterixDB
>          Issue Type: Improvement
>          Components: STO - Storage
>            Reporter: Ritik Raj
>            Priority: Major
>              Labels: triaged
>
> Introducing size-bound merge policy to account for cloud service providers' 
> max blob storage file size and to prevent exceeding that specified limit.
>  - Add a storage config to set the max mergeable component size.
>  - Add a new merge policy and make it our default.
>  - The new merge policy will schedule merges similarly to the current 
> ConcurrentMergePolicy, but it will also respect the max mergeable component size.
>  - Update the isMergeLagging check on max components to use the "effective" 
> mergeable components (those that can actually be merged) rather than the 
> current total number of disk components.
>  - If the maximum number of components is reached and no merge could be 
> scheduled, force a merge of the effective mergeable components.
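The selection logic sketched in the bullets above can be illustrated as follows. This is a minimal, hypothetical sketch: the class and method names, and the run-selection strategy, are illustrative assumptions and not the actual AsterixDB implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of size-bound merge selection: pick a run of adjacent
// disk components whose combined size stays within the max mergeable size,
// mirroring the "effective mergeable components" idea described above.
public class SizeBoundMergeSketch {

    // Returns the indices of a run of components eligible to merge under the
    // size bound, or an empty list if no merge can be scheduled.
    static List<Integer> selectMergeableRun(long[] componentSizes, long maxMergeableSize) {
        List<Integer> run = new ArrayList<>();
        long total = 0;
        for (int i = 0; i < componentSizes.length; i++) {
            if (componentSizes[i] > maxMergeableSize) {
                // A single oversized component can never participate in a merge.
                if (run.size() >= 2) break; // keep the run found so far
                run.clear();
                total = 0;
                continue;
            }
            if (total + componentSizes[i] > maxMergeableSize) {
                // Adding this component would exceed the bound.
                if (run.size() >= 2) break;
                run.clear(); // restart the candidate run at this component
                total = 0;
            }
            run.add(i);
            total += componentSizes[i];
        }
        // A merge needs at least two input components.
        return run.size() >= 2 ? run : List.of();
    }

    public static void main(String[] args) {
        long[] sizes = {40, 30, 25, 100, 10, 10};
        // Components 0..2 (total 95) fit within the bound of 100.
        System.out.println(selectMergeableRun(sizes, 100));
    }
}
```

Note how the count of "effective" mergeable components (the returned run) can be smaller than the total number of disk components, which is why an isMergeLagging check based on the raw component count could block flushes even when no further merge is actually schedulable.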



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
