[
https://issues.apache.org/jira/browse/HBASE-13071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346017#comment-14346017
]
stack commented on HBASE-13071:
-------------------------------
Wondering why we have a pool data member when we are passing pool to the
super class and the super class already has a getPool accessor.
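To illustrate the point, a minimal sketch (simplified stand-in classes, not the
real signatures):
{code:java}
import java.util.concurrent.ExecutorService;

// Simplified stand-ins, not the actual classes or signatures.
abstract class ClientScanner {
  private final ExecutorService pool;

  ClientScanner(ExecutorService pool /* , ... */) {
    this.pool = pool;
  }

  protected ExecutorService getPool() {
    return pool;
  }
}

class ClientAsyncPrefetchScanner extends ClientScanner {
  // A duplicate pool field here is unnecessary:
  // private final ExecutorService pool;

  ClientAsyncPrefetchScanner(ExecutorService pool /* , ... */) {
    super(pool /* , ... */);
    // this.pool = pool;  // redundant; the parent already holds it
  }

  void startPrefetch(Runnable prefetchTask) {
    getPool().execute(prefetchTask);  // the inherited accessor is enough
  }
}
{code}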
On your feedback, I understand that you add caching to ClientScanner, but why
not to AbstractClientScanner? A hierarchy in which AbstractClientScanner is
subclassed to make a ClientScanner (which is itself abstract), which is in turn
subclassed by ClientAsyncPrefetchScanner, is a little ugly; can we cut out the
ClientScanner tier?
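Roughly, the flattened shape being suggested would look like this (names and
members are illustrative only, not the posted patch):
{code:java}
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative only: caching lives in the abstract base, and the prefetching
// scanner extends it directly, with no intermediate ClientScanner tier.
abstract class AbstractClientScanner {
  // shared client-side result cache (Object stands in for Result here)
  protected final Queue<Object> cache = new ArrayDeque<>();

  public abstract Object next();

  public abstract void close();
}

class ClientAsyncPrefetchScanner extends AbstractClientScanner {
  @Override
  public Object next() {
    // serve from the shared cache; kick off an asynchronous refill when it runs low
    return cache.poll();
  }

  @Override
  public void close() {
    // stop/interrupt the prefetcher here
  }
}
{code}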
bq. How would you suggest to get a hold of the thread executing the prefetch,
so as to interrupt it on close?
You will only ever have a single prefetcher? If so, an executor pool is probably
overkill? Just start a single thread that you control?
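Something along these lines, for instance (a rough sketch with made-up names,
not the patch):
{code:java}
// Rough sketch: own the single prefetch thread so close() can interrupt it directly.
class SingleThreadPrefetcher implements AutoCloseable {
  private final Thread prefetcher;
  private volatile boolean closed = false;

  SingleThreadPrefetcher() {
    prefetcher = new Thread(this::prefetchLoop, "scanner-prefetcher");
    prefetcher.setDaemon(true);
    prefetcher.start();
  }

  private void prefetchLoop() {
    try {
      while (!closed) {
        // fetch the next batch from the server and refill the cache here;
        // the sleep stands in for blocking work that interrupt() cuts short
        Thread.sleep(100);
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();  // interrupted by close(): exit the loop
    }
  }

  @Override
  public void close() {
    closed = true;
    prefetcher.interrupt();  // we hold the thread reference, so we can stop it directly
  }
}
{code}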
Formatting irregularities are still in there...
Pictures coming.. they are provoking interesting questions (smile)
> Hbase Streaming Scan Feature
> ----------------------------
>
> Key: HBASE-13071
> URL: https://issues.apache.org/jira/browse/HBASE-13071
> Project: HBase
> Issue Type: New Feature
> Affects Versions: 0.98.11
> Reporter: Eshcar Hillel
> Attachments: HBASE-13071_98_1.patch, HBASE-13071_trunk_1.patch,
> HBASE-13071_trunk_2.patch, HBASE-13071_trunk_3.patch,
> HBASE-13071_trunk_4.patch, HBaseStreamingScanDesign.pdf,
> HbaseStreamingScanEvaluation.pdf
>
>
> A scan operation iterates over all rows of a table or a subrange of the
> table. The synchronous way in which the data is served at the client side
> hinders the speed at which the application traverses the data: it increases
> the overall processing time, and may cause great variance in the times the
> application waits for the next piece of data.
> The scanner next() method at the client side invokes an RPC to the
> regionserver and then stores the results in a cache. The application can
> specify how many rows will be transmitted per RPC; by default this is set to
> 100 rows.
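> For example, this per-RPC row count can be tuned through the scan's caching
> setting (a minimal sketch using the client API; the table name is made up):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.*;
>
> // Minimal sketch of the rows-per-RPC setting described above
> // (uses the Connection API; the table name is made up).
> public class ScanCachingExample {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = HBaseConfiguration.create();
>     try (Connection connection = ConnectionFactory.createConnection(conf);
>          Table table = connection.getTable(TableName.valueOf("my_table"))) {
>       Scan scan = new Scan();
>       scan.setCaching(100);  // rows transmitted per RPC; 100 is the default noted above
>       try (ResultScanner scanner = table.getScanner(scan)) {
>         for (Result result : scanner) {
>           // each next() is served from the client-side cache;
>           // a new RPC is issued only when the cache is empty
>         }
>       }
>     }
>   }
> }
> {code}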
> The cache can be considered a producer-consumer queue, where the hbase
> client pushes the data into the queue and the application consumes it.
> Currently this queue is synchronous, i.e., blocking. More specifically, when
> the application has consumed all the data from the cache (so the cache is
> empty), the hbase client retrieves additional data from the server and
> re-fills the cache with new data. During this time the application is blocked.
> Under the assumption that the application's processing time is comparable to
> the time it takes to retrieve the data, an asynchronous approach can reduce
> the time the application waits for data.
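> The asynchronous idea can be pictured as a bounded blocking queue that a
> background prefetcher keeps refilling, so the application rarely has to wait
> (a plain-Java sketch of the concept, not the actual patch):
> {code:java}
> import java.util.concurrent.ArrayBlockingQueue;
> import java.util.concurrent.BlockingQueue;
>
> // Plain-Java sketch of the producer-consumer idea, not the actual patch.
> public class PrefetchCacheSketch {
>   public static void main(String[] args) throws InterruptedException {
>     BlockingQueue<String> cache = new ArrayBlockingQueue<>(100);
>
>     // Producer: stands in for the hbase client fetching batches from the server.
>     Thread prefetcher = new Thread(() -> {
>       try {
>         for (int row = 0; row < 1000; row++) {
>           cache.put("row-" + row);  // blocks only when the cache is full
>         }
>       } catch (InterruptedException e) {
>         Thread.currentThread().interrupt();
>       }
>     });
>     prefetcher.start();
>
>     // Consumer: stands in for the application calling next().
>     for (int i = 0; i < 1000; i++) {
>       String row = cache.take();  // usually served immediately from the cache
>       // process row ...
>     }
>     prefetcher.join();
>   }
> }
> {code}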
> We attach a design document.
> We also have a patch that is based on a private branch, and some evaluation
> results of this code.