[ https://issues.apache.org/jira/browse/HBASE-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12870456#action_12870456 ]

Andrew Purtell commented on HBASE-2605:
---------------------------------------

Yeah, a trillion is deliberately crazy, but it might make the test prohibitively 
expensive to run. A billion would be fine.

> full system application test scenario: sensor network mining simulation
> -----------------------------------------------------------------------
>
>                 Key: HBASE-2605
>                 URL: https://issues.apache.org/jira/browse/HBASE-2605
>             Project: Hadoop HBase
>          Issue Type: Sub-task
>            Reporter: Andrew Purtell
>
> Try to insert a trillion rows of simulated time series data as floating point 
> values stored as byte[]. Run scans while writing and over the final result. 
> Compute aggregates. Verify counts and metrics reported by the writers to 
> within some acceptable threshold. 
> Consider a mix of online updates and incremental bulk import (HBASE-1923). 
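For illustration only, here is a minimal sketch of what the writer side of such a test could look like. It assumes a current HBase client API (ConnectionFactory/Table/Put; the client API at the time of this issue differed) and uses hypothetical table and column names ("sensor", family "d", qualifier "v"). A real test driver would parallelize writers, batch puts, and pre-split the table.

{code:java}
import java.util.Random;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class SensorSimWriter {
  public static void main(String[] args) throws Exception {
    // Row count under test; the comment above suggests a billion rather than a trillion.
    long rows = args.length > 0 ? Long.parseLong(args[0]) : 1_000_000_000L;
    Configuration conf = HBaseConfiguration.create();
    byte[] fam = Bytes.toBytes("d");   // hypothetical column family
    byte[] qual = Bytes.toBytes("v");  // hypothetical qualifier
    Random rng = new Random(42);
    double sum = 0.0;                  // writer-side aggregate for later verification
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("sensor"))) {
      for (long i = 0; i < rows; i++) {
        long sensorId = i % 100_000;                 // simulated sensor id
        long ts = System.currentTimeMillis();
        // Key by sensor id + timestamp so per-sensor scans stay contiguous.
        byte[] row = Bytes.add(Bytes.toBytes(sensorId), Bytes.toBytes(ts));
        double reading = rng.nextGaussian();         // simulated time series value
        Put put = new Put(row);
        put.addColumn(fam, qual, Bytes.toBytes(reading)); // float value stored as byte[]
        table.put(put);
        sum += reading;
      }
    }
    // Counts and metrics the verifier would compare against scan-side aggregates.
    System.out.println("rows=" + rows + " sum=" + sum);
  }
}
{code}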

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
