Hi All,

I am running a plain select-all query against my cube, which has around 9 dimensions
and 19 measures. The cube's source table is 230 MB, and with a cube expansion
rate of 500% the resulting cube is about 1.1 GB.

The query, run on top of Kylin, brings the Kylin server down. I plan to raise
the memory concern, but I am looking for opinions on whether some other cause
might be possible.

The stack trace is below:

The configured limit of 1,000 object references was reached while
attempting to calculate the size of the object graph. Severe performance
degradation could occur if the sizing operation continues. This can be
avoided by setting the CacheManger or Cache <sizeOfPolicy> elements
maxDepthExceededBehavior to "abort" or adding stop points with
@IgnoreSizeOf annotations. If performance degradation is NOT an issue at
the configured limit, raise the limit value using the CacheManager or Cache
<sizeOfPolicy> elements maxDepth attribute. For more information, see the
Ehcache configuration documentation.
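
The warning above points at two Ehcache knobs rather than a Kylin bug: the size-of engine hit its object-graph traversal limit while sizing a cached result. As a sketch only (assuming you can edit the ehcache.xml that your Kylin instance loads; the exact file location depends on your deployment), the <sizeOfPolicy> element can be set at the CacheManager level like this:

```xml
<!-- ehcache.xml (CacheManager-level policy; can also be set per <cache>) -->
<ehcache>
  <!-- maxDepth: how many object references the sizing engine may walk
       (the default is 1000, which is the limit the warning reports).
       maxDepthExceededBehavior: "continue" keeps sizing past the limit
       (and keeps logging this warning); "abort" stops sizing the graph. -->
  <sizeOfPolicy maxDepth="10000" maxDepthExceededBehavior="abort"/>

  <!-- ... your existing cache definitions ... -->
</ehcache>
```

Note this only silences or bounds the sizing work; if the select-all result set is genuinely too large to cache, the underlying memory pressure remains, so raising the server heap or narrowing the query may still be needed.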


Thanks!
