Sure, I will open a JIRA.

So, at eBay you are storing the data in the cubes forever?

Rebuilding the cube every few days seems very suboptimal, as it means we
have to spend a lot more resources again.
Even if I partitioned my cubes by month, such as cube_01, cube_02, I
would have to run parallel queries against all of them whenever my date
range spans months, and then re-aggregate in memory.
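For illustration, the cross-month fan-out I mean would look roughly like this. This is only a minimal sketch: `run_query` is a hypothetical callable standing in for however SQL actually gets submitted to Kylin (REST API, JDBC, ...), and the cube names are made up.

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def query_across_months(cubes, sql, run_query):
    """Fan one date-range query out to per-month cubes and merge the
    partial aggregates in memory.

    `run_query(cube, sql)` is a hypothetical helper that returns rows
    of (group_key, partial_count) for one cube.
    """
    totals = defaultdict(int)
    with ThreadPoolExecutor(max_workers=len(cubes)) as pool:
        # Query each monthly cube in parallel.
        for rows in pool.map(lambda c: run_query(c, sql), cubes):
            for key, count in rows:
                totals[key] += count  # re-aggregate across cubes
    return dict(totals)
```

This is exactly the extra work I would like to avoid: every cross-month query pays for the fan-out and the in-memory merge.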

On Fri, Jul 24, 2015 at 8:39 PM, Han, Luke <[email protected]> wrote:

> Could you please open a JIRA for this? We have one for the streaming case,
> but I think it makes sense to enable retention for batch as well.
>
> Currently, I would say you have to rebuild the cube every few days to
> discard old data.
> To minimize the impact, you can define two cubes with the same logical
> model, and build one first, then build the other one, say, 7 days later.
> Once the new one is done, disable the old one and purge its data; then
> repeat, again and again....
>
> Thanks.
>
> Sent from my iPhone
>
> > On Jul 24, 2015, at 22:22, vipul jhawar <[email protected]> wrote:
> >
> > Hi
> >
> > Would be interested to know what solutions you would recommend for
> > implementing data retention. Say we want to retain data for only the
> > last 90 days in the cube, what is the best option?
> >
> > Our daily size is > 60 GB, so we cannot store data forever and want to
> > limit it to a time range to support advanced analysis.
> >
> > Thanks
>
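
The two-cube rotation described above could be scripted along these lines. This is just a sketch of the control flow: the `rebuild`, `disable`, and `purge` callables are hypothetical wrappers around whatever Kylin admin operations you actually use (REST API, web UI, ...).

```python
def rotate(active, standby, rebuild, disable, purge):
    """One cycle of the alternating two-cube retention scheme: two cubes
    share the same logical model; while one serves queries, the other is
    rebuilt with only the data to keep, then the roles swap.

    `rebuild`, `disable`, and `purge` are hypothetical callables wrapping
    the corresponding Kylin admin operations.
    """
    rebuild(standby)        # build the fresh copy first
    disable(active)         # once it is done, take the old cube offline
    purge(active)           # then drop the old cube's stale data
    return standby, active  # roles swap for the next cycle
```

Run once per retention window (e.g. every 7 days, as suggested above), each call returning the cube pair in their new roles.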
