[ 
https://issues.apache.org/jira/browse/ASTERIXDB-2276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16355681#comment-16355681
 ] 

ASF subversion and git services commented on ASTERIXDB-2276:
------------------------------------------------------------

Commit f4485553da14fff7c653c7724cb8b8256ada7b3b in asterixdb's branch 
refs/heads/master from [~mhubail]
[ https://git-wip-us.apache.org/repos/asf?p=asterixdb.git;h=f448555 ]

[ASTERIXDB-2276][CONF] Introduce Max Active Writable Datasets

- user model changes: no
- storage format changes: no
- interface changes: no

Details:
- Introduce number of metadata datasets parameter
  and use it to reserve memory for all metadata
  datasets.
- Default metadata dataset memory component number
  of pages to 32 pages.
- Introduce max active writable datasets and use
  it to automatically calculate the number of pages
  of a user dataset memory component.
- Remove the assumption of reserving an NC core
  for heartbeats.
- Fix for [ASTERIXDB-2279] by making the expected
  result order deterministic.

Change-Id: I9909c26b1e12b431f913e201d2c3d83769be7269
Reviewed-on: https://asterix-gerrit.ics.uci.edu/2362
Sonar-Qube: Jenkins <jenk...@fulliautomatix.ics.uci.edu>
Integration-Tests: Jenkins <jenk...@fulliautomatix.ics.uci.edu>
Tested-by: Jenkins <jenk...@fulliautomatix.ics.uci.edu>
Reviewed-by: Michael Blow <mb...@apache.org>
Contrib: Jenkins <jenk...@fulliautomatix.ics.uci.edu>


> Introduce max active writable datasets
> --------------------------------------
>
>                 Key: ASTERIXDB-2276
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2276
>             Project: Apache AsterixDB
>          Issue Type: Improvement
>            Reporter: Murtadha Hubail
>            Assignee: Murtadha Hubail
>            Priority: Major
>
> Currently, it is possible for an operation that tries to allocate memory for 
> its dataset to fail without the user knowing about it. For example, this 
> issue will happen if a feed is connected to a number of datasets that 
> exceeds the max global memory for datasets. The proposal is to add a 
> configurable max active writable datasets parameter and calculate the 
> memory component size based on it and the memory component page size. After 
> that, any operation could check the currently modified datasets and fail 
> early if that number exceeds the configured value.
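The budgeting scheme proposed above (and in the commit details) can be sketched roughly as follows. This is an illustrative Python sketch only, not AsterixDB's actual code or configuration keys; all names and the specific numbers below are assumptions chosen for the example.

```python
# Hypothetical sketch of the proposed scheme: reserve pages for metadata
# datasets, then split the remaining page budget evenly across the
# configured maximum number of active writable datasets, and fail early
# when that maximum would be exceeded. All constants are illustrative.

GLOBAL_MEMORY_BYTES = 512 * 1024 * 1024   # assumed global dataset memory budget
PAGE_SIZE_BYTES = 128 * 1024              # assumed memory component page size
METADATA_DATASETS = 14                    # assumed number of metadata datasets
METADATA_PAGES_PER_DATASET = 32           # default pages per metadata dataset (per commit)
MAX_ACTIVE_WRITABLE_DATASETS = 8          # the new configurable parameter


def user_dataset_component_pages():
    """Pages available to each user dataset's memory component after
    reserving memory for all metadata datasets."""
    total_pages = GLOBAL_MEMORY_BYTES // PAGE_SIZE_BYTES
    metadata_pages = METADATA_DATASETS * METADATA_PAGES_PER_DATASET
    return (total_pages - metadata_pages) // MAX_ACTIVE_WRITABLE_DATASETS


def check_can_modify(active_writable_count):
    """Fail early with a clear error instead of letting a later memory
    allocation fail silently."""
    if active_writable_count >= MAX_ACTIVE_WRITABLE_DATASETS:
        raise RuntimeError(
            "max active writable datasets exceeded: "
            f"{active_writable_count} >= {MAX_ACTIVE_WRITABLE_DATASETS}")
```

With these illustrative numbers, each user dataset's memory component gets (4096 - 448) // 8 = 456 pages, and the ninth concurrent writer is rejected up front rather than failing mid-operation.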



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)