[ https://issues.apache.org/jira/browse/GEODE-9493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17399818#comment-17399818 ]

ASF subversion and git services commented on GEODE-9493:
--------------------------------------------------------

Commit 1210cbc48c80df9d4f87a94cd417e424b85cbccf in geode's branch 
refs/heads/develop from Darrel Schneider
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=1210cbc ]

GEODE-9493: rework sizing of RedisString, RedisHash, RedisSet, and 
RedisSortedSet (#6727)

Removed SizeableObjectSizer and made the Sizeable redis classes abstract.
They now require subclasses that implement sizeKey, sizeValue, or sizeElement.
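The pattern described here could be sketched roughly as follows. All names and constants (SizeableCollection, ByteArraySet, PER_ELEMENT_OVERHEAD) are illustrative assumptions, not Geode's actual API: an abstract base class keeps a running int total, and each concrete subclass supplies the element-sizing logic for the element type it knows.

```java
// Hypothetical sketch of the abstract-sizeable pattern, not Geode's code.
abstract class SizeableCollection {
    private int sizeInBytes;

    // Each concrete subclass knows its element type and how to size it.
    abstract int sizeElement(byte[] element);

    void elementAdded(byte[] element) {
        sizeInBytes += sizeElement(element);
    }

    void elementRemoved(byte[] element) {
        sizeInBytes -= sizeElement(element);
    }

    int getSizeInBytes() {
        return sizeInBytes;
    }
}

class ByteArraySet extends SizeableCollection {
    // Assumed per-element bookkeeping overhead; the real value would come
    // from the JVM-config-aware computation described below.
    static final int PER_ELEMENT_OVERHEAD = 24;

    @Override
    int sizeElement(byte[] element) {
        return PER_ELEMENT_OVERHEAD + element.length;
    }
}
```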

Instead of having hard-coded constants stating the size of a redis base
class, the code now uses JvmSizeUtils.memoryOverhead(Class) to statically
compute the class size overhead.
This allows the size to be correct for different JVM configs.
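To illustrate why the overhead depends on JVM config: with compressed oops (the HotSpot default for heaps under roughly 32 GB) a reference costs 4 bytes and the object header 12, while without them a reference costs 8 and the header 16. The sketch below models that arithmetic; the method name echoes the commit's description but is not the real JvmSizeUtils implementation, and the constants are typical HotSpot values, not guaranteed ones.

```java
// Illustrative model of config-dependent per-object overhead.
class SizeSketch {
    static int memoryOverhead(int referenceFieldCount, boolean compressedOops) {
        int objectHeader = compressedOops ? 12 : 16;   // typical HotSpot values
        int referenceSize = compressedOops ? 4 : 8;
        int raw = objectHeader + referenceFieldCount * referenceSize;
        return (raw + 7) & ~7; // HotSpot aligns objects to 8 bytes
    }
}
```

The same class with two reference fields comes out to 24 bytes with compressed oops and 32 without, which is exactly the kind of difference a test-precomputed constant misses.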

Also simplified the sizing logic to use int and, where needed, casts instead
of narrow.
Narrow was more expensive and would still cause problems as the size goes
back down toward zero.
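The trade-off can be sketched like this (names are illustrative, not the removed Geode helper): a checked narrowing such as Math.toIntExact pays a branch and a possible exception on every update, whereas a plain (int) cast is free and the small sizing deltas keep a running int total correct.

```java
// Sketch contrasting a checked narrow with a plain cast.
class NarrowVsCast {
    // Checked narrowing: throws ArithmeticException if the long
    // does not fit in an int.
    static int narrow(long value) {
        return Math.toIntExact(value);
    }

    // Plain cast: no branch; fine when deltas are known to be small.
    static int castDelta(long value) {
        return (int) value;
    }
}
```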

The sizing of OrderStatisticsTree was simplified further: it no longer needs
an extra int field, since we do not keep track of its element sizes.

Also, the size of the hashing strategy object is no longer accounted for,
since it is a singleton.

OrderStatisticsTree has been simplified to not size the elements it contains.
It is currently only used by RedisSortedSet, which does not need the element
size computed there (it is already computed in the hash map the sorted set
also uses).
If in the future we need to size the elements of an OrderStatisticsTree as
well, we can figure out the best way to do that in that new context.


> redis data structure sizing is hard coded for compressed oops
> -------------------------------------------------------------
>
>                 Key: GEODE-9493
>                 URL: https://issues.apache.org/jira/browse/GEODE-9493
>             Project: Geode
>          Issue Type: Bug
>          Components: redis
>            Reporter: Darrel Schneider
>            Assignee: Darrel Schneider
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 1.15.0
>
>
> The sizing of the redis data structures (string, hash, set, sorted set) has 
> some constants that were precomputed by tests. Because the tests are run with 
> smaller heaps that use compressed oops, the size estimates end up being too 
> small for large heaps or when compressed oops are disabled.
> Also, a "strategy" object is currently counted in the size but should not be, 
> since it is a single canonical instance shared by all hash and set instances.
> Also, the way sizing is currently done does not take advantage of our knowing 
> the element type. By optimizing the code for "byte[]", for example, we can 
> compute the size faster and use less memory by storing less "sizing" state in 
> our objects. 
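The last point in the quoted description can be sketched as follows: when the element type is known to be byte[], the size can be recomputed on demand from the array length alone, so no per-element sizing state needs to be stored. The header constant is an assumption (typical HotSpot value), not Geode's actual figure.

```java
// Hypothetical byte[]-specialized sizer: no cached per-element state.
class ByteArraySizer {
    static final int ARRAY_HEADER = 16; // assumed array header size

    static int sizeOf(byte[] element) {
        // header + payload, rounded up to 8-byte object alignment
        return (ARRAY_HEADER + element.length + 7) & ~7;
    }
}
```

A generic sizer, by contrast, would have to walk the object or remember each element's size to subtract it on removal; the specialized path trades that memory for a trivial recomputation.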



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
