[
https://issues.apache.org/jira/browse/HIVE-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16215870#comment-16215870
]
Eugene Koifman edited comment on HIVE-17458 at 10/23/17 9:23 PM:
-----------------------------------------------------------------
On disabling LLAP cache
{noformat}
[2:07 PM] Sergey Shelukhin: OrcSplit.canUseLlapIo()
[2:07 PM] Sergey Shelukhin: in general, LlapAwareSplit
[2:07 PM] Sergey Shelukhin: is the cleanest way
[2:09 PM] Sergey Shelukhin: LlapRecordReader.create() is another place where
one could check, on lower level
[2:09 PM] Sergey Shelukhin: and return null
{noformat}
The current impl of canUseLlapIo() looks like it will disable LLAP IO for
"original" acid reads.
> VectorizedOrcAcidRowBatchReader doesn't handle 'original' files
> ---------------------------------------------------------------
>
> Key: HIVE-17458
> URL: https://issues.apache.org/jira/browse/HIVE-17458
> Project: Hive
> Issue Type: Improvement
> Affects Versions: 2.2.0
> Reporter: Eugene Koifman
> Assignee: Eugene Koifman
> Priority: Critical
> Attachments: HIVE-17458.01.patch, HIVE-17458.02.patch,
> HIVE-17458.03.patch, HIVE-17458.04.patch, HIVE-17458.05.patch
>
>
> VectorizedOrcAcidRowBatchReader will not be used for original files. This
> will likely look like a perf regression when converting a table from non-acid
> to acid until the table goes through a major compaction.
> With Load Data support, large files added via Load Data will likewise not get
> vectorized reads until major compaction.
> There is no reason why this should be the case. Just like
> OrcRawRecordMerger, VectorizedOrcAcidRowBatchReader can look at the other
> files in the logical tranche/bucket and calculate the offset for the RowBatch
> of the split. (Presumably getRecordReader().getRowNumber() works the same in
> vector mode).
> In this case we don't even need OrcSplit.isOriginal() - the reader can infer
> it from the file path... which in particular simplifies
> OrcInputFormat.determineSplitStrategies()
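To make the offset calculation described above concrete, here is a rough
sketch. It assumes row ids for original files are assigned per logical bucket
in a fixed, deterministic file order and that per-file row counts are available
(e.g. from ORC footers); OriginalFileOffsets, rowIdOffset and rowCountOf are
hypothetical names for illustration, not the actual Hive implementation.
{noformat}
import java.util.List;
import java.util.function.ToLongFunction;

final class OriginalFileOffsets {
  /**
   * First synthetic row id for rows of splitFile, computed by summing the
   * row counts of the original files that precede it in the same logical
   * tranche/bucket (listed in the order row ids are assigned).
   */
  static long rowIdOffset(List<String> bucketFilesInOrder,
                          String splitFile,
                          ToLongFunction<String> rowCountOf) {
    long offset = 0;
    for (String file : bucketFilesInOrder) {
      if (file.equals(splitFile)) {
        return offset;                        // this file's rows start here
      }
      offset += rowCountOf.applyAsLong(file); // skip all rows of earlier files
    }
    throw new IllegalArgumentException(splitFile + " is not in the bucket listing");
  }
}
{noformat}
In this sketch the reader would add the returned offset to
getRecordReader().getRowNumber() when synthesizing ROW__IDs for each batch of
an original-file split.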