I tried with a small table of 10 million entries... with 2 dimensions and
1 metric (product, branch, qty).  Each dimension has 1000 unique values...
Thus 1,000,000 (1 million) combinations are possible, and that's what is
being computed as the cube.

And then, I query the cube for a particular product id (select product,
branch, sum(qty) from table where product = pid group by product, branch).
The same with a particular branch in the where clause....
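To make the query above concrete, here is a minimal sketch using Python's sqlite3 on an in-memory table; the table name `sales` and the tiny sample rows are my own illustration, not from the original setup:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (product INTEGER, branch INTEGER, qty INTEGER)")
# A few illustrative rows: product 1 appears in two branches.
cur.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                [(1, 10, 5), (1, 10, 3), (1, 20, 7), (2, 10, 4)])
conn.commit()

pid = 1  # the particular product id being queried
cur.execute("""SELECT product, branch, SUM(qty)
               FROM sales
               WHERE product = ?
               GROUP BY product, branch
               ORDER BY branch""", (pid,))
rows = cur.fetchall()
print(rows)  # -> [(1, 10, 8), (1, 20, 7)]
```

The same shape of query works for slicing by branch instead: move `branch = ?` into the WHERE clause and keep the same GROUP BY.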

Also, even if SQL parsing is CPU intensive, it really shouldn't take 100 ms
unless some I/O is being performed...

Btw... does the aggregated data also run into billions? How big is the
aggregated data from a billion-row table?
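A rough back-of-envelope on that question, assuming only the two dimensions from the example above (the per-row byte estimate is my own guess, just for scale): the aggregate row count is bounded by the number of distinct dimension combinations, not by the fact-table size.

```python
# Upper bound on aggregated rows: product of dimension cardinalities.
products = 1000               # distinct product values (from the example)
branches = 1000               # distinct branch values
fact_rows = 1_000_000_000     # a billion-row fact table

max_aggregate_rows = products * branches
print(max_aggregate_rows)     # 1,000,000 -- regardless of fact_rows

# At an assumed ~20 bytes per aggregated row (two ids + one sum):
approx_mib = max_aggregate_rows * 20 / 2**20
print(round(approx_mib, 1))   # on the order of ~19 MiB
```

So even from a billion-row table, the (product, branch) aggregate tops out at 1 million rows; the aggregate only grows toward billions if the dimension cardinalities do.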
