On Mon, Nov 16, 2015 at 9:00 PM, Sarnath <[email protected]> wrote:

> Also, if SQL parsing is CPU-intensive, it should not really take 100 ms
> unless some I/O is being performed...
>

It isn't the parsing.  It is the combinatorial explosion in the optimizer.
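
For a sense of scale (a rough back-of-the-envelope in Python, not tied
to any particular optimizer): the space of possible join orders alone
grows factorially with the number of tables, before any cost estimation
even starts.

    from math import factorial

    def left_deep_plans(n):
        # Left-deep join orders over n tables: n!
        return factorial(n)

    def bushy_plans(n):
        # All bushy join trees over n tables: (2n-2)! / (n-1)!
        return factorial(2 * n - 2) // factorial(n - 1)

    for n in (3, 5, 8, 12):
        print(f"{n:2d} tables: {left_deep_plans(n):,} left-deep, "
              f"{bushy_plans(n):,} bushy plans")

Already at 12 tables that is ~479 million left-deep orders, which is why
optimizers prune heuristically rather than enumerate, and why planning
time can dominate for complex queries.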


> BTW, does aggregated data also run into billions of rows? How large is
> the aggregated data from a billion-row table?
>

Yes. The issue is that you have aggregations across many combinations of
variables.

That means the number of rows in the cube datastores can be a significant
fraction of the size of the original data.  In fact, you could make it much
bigger than the original data (not that such a thing would make much sense).
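
To see why (a minimal sketch, assuming the usual full-cube model where
one cuboid is materialized per subset of dimensions): d dimensions give
2^d cuboids, and each cuboid has at most min(product of its dimensions'
cardinalities, source rows) rows.  Summing those caps over all subsets
can exceed the source row count.  The cardinalities and function name
below are made up for illustration.

    from itertools import combinations
    from math import prod

    def cube_size_upper_bound(cardinalities, source_rows):
        # One cuboid per subset of dimensions; each holds at most
        # min(product of the subset's cardinalities, source_rows) rows.
        dims = list(cardinalities)
        total = 0
        for r in range(len(dims) + 1):
            for subset in combinations(dims, r):
                total += min(prod(subset), source_rows)  # prod(()) == 1
        return total

    # Hypothetical: 10 dimensions, modest cardinalities, 1B source rows.
    cards = [50, 20, 365, 100, 12, 7, 30, 200, 5, 1000]
    rows = 1_000_000_000
    bound = cube_size_upper_bound(cards, rows)
    print(f"{2 ** len(cards)} cuboids, up to {bound:,} rows "
          f"({bound / rows:.1f}x the source data)")

With these made-up cardinalities, many of the larger cuboids hit the
source-row cap, so the bound comes out at many times the source data,
which is the effect described above.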

The number of similar aggregations also tends to increase the complexity of
query optimization, although there are good bounds on how that complexity
grows.
