What I can infer so far is that queries work fine when the default limit (50k)
is applied in the Kylin query browser, and even a query for 500k records
comes back, albeit with noticeable latency.
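
For reference, the two cases look roughly like this (the table name below is
the Kylin sample table, used only as a placeholder for my actual cube):

    -- works fine under the default limit
    SELECT * FROM kylin_sales LIMIT 50000;

    -- high latency, but still returns
    SELECT * FROM kylin_sales LIMIT 500000;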

There are about 30 million records currently in my HBase table, and my guess
is that the aggregation query is trying to pull all the data from start to
end, which causes the server to go down. But I am not sure: even if the data
size is huge, why/how would it bring the Kylin server down without any error
in the log trace?
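
The failing query is an aggregation of roughly this shape (again with
placeholder table/column names from the Kylin sample schema), with no LIMIT,
so it has to touch every row:

    -- aggregates over all ~30M rows; this is the one that brings Kylin down
    SELECT part_dt, lstg_site_id,
           SUM(price) AS total_price,
           COUNT(*) AS trans_cnt
    FROM kylin_sales
    GROUP BY part_dt, lstg_site_id;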

Thanks!


On Fri, Jun 19, 2015 at 2:21 PM, Li Yang <[email protected]> wrote:

> We need more logs. Please post everything from the start of the query to
> its end as an attachment so we can analyze it.
>
> The log you posted is from the cache manager, by which time the query
> should already be finished.
>
>
>
> On Fri, Jun 19, 2015 at 3:24 PM, Vineet Mishra <[email protected]>
> wrote:
>
> > Hi All,
> >
> > I am running a plain select-all query against my cube, which has around
> > 9 dimensions and 19 measures. The cube's source table is 230 MB, and with
> > a cube expansion rate of 500% the cube itself is about 1.1 GB.
> >
> > The query I am running on top of Kylin brings the Kylin server down. I
> > am going to raise the memory concern, but I am looking for opinions on
> > whether some other cause might be possible.
> >
> > The stack trace is below:
> >
> > The configured limit of 1,000 object references was reached while
> > attempting to calculate the size of the object graph. Severe performance
> > degradation could occur if the sizing operation continues. This can be
> > avoided by setting the CacheManger or Cache <sizeOfPolicy> elements
> > maxDepthExceededBehavior to "abort" or adding stop points with
> > @IgnoreSizeOf annotations. If performance degradation is NOT an issue at
> > the configured limit, raise the limit value using the CacheManager or
> > Cache
> > <sizeOfPolicy> elements maxDepth attribute. For more information, see the
> > Ehcache configuration documentation.
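> >
> > In case it helps the analysis, this is the Ehcache setting the warning
> > refers to. A minimal ehcache.xml sketch (assuming Ehcache 2.x; the
> > maxDepth value here is illustrative only):
> >
> >     <ehcache>
> >         <!-- maxDepth raises the sizing limit; maxDepthExceededBehavior
> >              says what to do when the limit is hit -->
> >         <sizeOfPolicy maxDepth="100000" maxDepthExceededBehavior="abort"/>
> >     </ehcache>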
> >
> >
> > Thanks!
> >
>
