[
https://issues.apache.org/jira/browse/PHOENIX-1103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14069067#comment-14069067
]
Thanks for the patch, [~gabriel.reid]. I may have missed it, but don't you need
to not add a zero byte to the last key, because you want to process that row
again, so like this instead:
{code}
private PeekingResultIterator getResultIterator() throws SQLException {
    if (resultIterator == null) {
        singleChunkResultIterator = new SingleChunkResultIterator(
                new TableResultIterator(context, tableRef, scan), chunkSize);
        resultIterator = delegateIteratorFactory.newIterator(context,
                singleChunkResultIterator);
    } else if (resultIterator.peek() == null
            && !singleChunkResultIterator.isEndOfStreamReached()) {
        singleChunkResultIterator.close();
        try {
            this.scan = new Scan(scan);
        } catch (IOException e) {
            throw new PhoenixIOException(e);
        }
        scan.setStartRow(singleChunkResultIterator.getLastKey());
        singleChunkResultIterator = new SingleChunkResultIterator(
                new TableResultIterator(context, tableRef, scan), chunkSize);
        resultIterator = delegateIteratorFactory.newIterator(context,
                singleChunkResultIterator);
    }
    return resultIterator;
}
{code}
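To make the restart-key point concrete, here is a minimal, self-contained sketch (illustrative names only, not Phoenix or HBase APIs) of why the zero byte matters: a scan's start row is inclusive, so restarting at the last key itself revisits the rows that share it, while appending a zero byte would jump past them.

```java
import java.util.Arrays;
import java.util.List;

public class RestartKeySketch {
    // Index of the first row whose key sorts >= startKey, mimicking an
    // inclusive scan start row over keys held in sorted order.
    static int firstIndexAtOrAfter(List<String> sortedKeys, String startKey) {
        for (int i = 0; i < sortedKeys.size(); i++) {
            if (sortedKeys.get(i).compareTo(startKey) >= 0) {
                return i;
            }
        }
        return sortedKeys.size();
    }

    public static void main(String[] args) {
        // A hash join scan can return several rows with the same key ("b").
        List<String> keys = Arrays.asList("a", "b", "b", "c");
        // Restarting at the last key "b" itself revisits both "b" rows.
        System.out.println(firstIndexAtOrAfter(keys, "b"));   // 1
        // Appending a zero byte ("b\0") would skip straight to "c".
        System.out.println(firstIndexAtOrAfter(keys, "b\0")); // 3
    }
}
```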
Also minor, but consider making the two ImmutableBytesWritable pointers member
variables so you don't reallocate them again and again:
{code}
+    private boolean rowKeyChanged(Tuple lastTuple, Tuple newTuple) {
+        ImmutableBytesWritable oldKeyPtr = new ImmutableBytesWritable();
+        ImmutableBytesWritable newKeyPtr = new ImmutableBytesWritable();
+        lastTuple.getKey(oldKeyPtr);
+        newTuple.getKey(newKeyPtr);
+
+        return oldKeyPtr.compareTo(newKeyPtr) != 0;
+    }
{code}
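The reuse suggestion could look roughly like the sketch below. It is a hedged, self-contained illustration: KeyPtr and Tuple here are simplified stand-ins for Phoenix's ImmutableBytesWritable and Tuple, just to show the pattern of hoisting the pointers into member fields so each call repoints them instead of allocating.

```java
import java.util.Arrays;

public class RowKeyChangeSketch {
    // Minimal stand-in for ImmutableBytesWritable: a resettable byte window.
    static final class KeyPtr {
        byte[] bytes;
        int offset;
        int length;

        void set(byte[] b, int off, int len) {
            bytes = b;
            offset = off;
            length = len;
        }

        int compareTo(KeyPtr other) {
            return Arrays.compare(bytes, offset, offset + length,
                    other.bytes, other.offset, other.offset + other.length);
        }
    }

    // Simplified stand-in for Phoenix's Tuple.
    interface Tuple {
        void getKey(KeyPtr ptr);
    }

    // Member fields: allocated once, repointed on every call.
    private final KeyPtr oldKeyPtr = new KeyPtr();
    private final KeyPtr newKeyPtr = new KeyPtr();

    boolean rowKeyChanged(Tuple lastTuple, Tuple newTuple) {
        lastTuple.getKey(oldKeyPtr);
        newTuple.getKey(newKeyPtr);
        return oldKeyPtr.compareTo(newKeyPtr) != 0;
    }

    public static void main(String[] args) {
        RowKeyChangeSketch sketch = new RowKeyChangeSketch();
        Tuple a = ptr -> ptr.set("row1".getBytes(), 0, 4);
        Tuple b = ptr -> ptr.set("row2".getBytes(), 0, 4);
        System.out.println(sketch.rowKeyChanged(a, a)); // false: same key
        System.out.println(sketch.rowKeyChanged(a, b)); // true: key changed
    }
}
```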
> Remove hash join special case for ChunkedResultIterator
> -------------------------------------------------------
>
> Key: PHOENIX-1103
> URL: https://issues.apache.org/jira/browse/PHOENIX-1103
> Project: Phoenix
> Issue Type: Improvement
> Reporter: Gabriel Reid
> Assignee: Gabriel Reid
> Fix For: 5.0.0, 3.1, 4.1
>
> Attachments: PHOENIX-1103.patch
>
>
> This is a follow-up issue to PHOENIX-539. There is currently a special case
> that disables the ChunkedResultIterator for hash joins, because a hash join
> scan can return multiple rows with the same row key.
> As discussed in the comments of PHOENIX-539, the ChunkedResultIterator should
> be updated to only end a chunk between different row keys.
--
This message was sent by Atlassian JIRA
(v6.2#6252)