[
https://issues.apache.org/jira/browse/HIVE-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15763251#comment-15763251
]
liyunzhang_intel commented on HIVE-9153:
----------------------------------------
[~lirui]: the [hive on spark
wiki|https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started]
says the recommended value for
"mapreduce.input.fileinputformat.split.maxsize" is 750000000 (750 MB). Is this
configuration suitable for all cases? For example, if the input data is 1.5 GB,
there will be 2 mappers when "mapreduce.input.fileinputformat.split.maxsize" is
750 MB, and 4 mappers when
"mapreduce.input.fileinputformat.split.maxsize" is 375 MB. But if the cluster can
run 4 mappers concurrently when the job starts, there should be no obvious
difference in execution time between setting
"mapreduce.input.fileinputformat.split.maxsize" to 750 MB or 375 MB, because
both cases finish in a single round of tasks.
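The mapper-count arithmetic above can be sketched as follows (a minimal illustration, assuming the split count is simply the ceiling of input size over max split size; the real Hadoop split computation also considers block boundaries and the min-split-size setting):

```python
import math

def num_mappers(input_bytes, max_split_bytes):
    # Each input split is at most max_split_bytes, so the number of
    # mappers is the ceiling of input size over max split size.
    return math.ceil(input_bytes / max_split_bytes)

# 1.5 GB input, matching the example in the comment (decimal GB/MB)
input_size = 1_500_000_000

print(num_mappers(input_size, 750_000_000))  # 2 mappers at 750 MB splits
print(num_mappers(input_size, 375_000_000))  # 4 mappers at 375 MB splits
```

With 4 executor slots available, both settings complete in one wave of tasks, which is why the two split sizes show no obvious execution-time difference in this case.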
Another question: could you kindly point me to some JIRA numbers for performance
tests of Hive on Spark, since you are familiar with this project? Currently I have
only found HIVE-9134.
> Perf enhancement on CombineHiveInputFormat and HiveInputFormat
> --------------------------------------------------------------
>
> Key: HIVE-9153
> URL: https://issues.apache.org/jira/browse/HIVE-9153
> Project: Hive
> Issue Type: Sub-task
> Components: Spark
> Reporter: Brock Noland
> Assignee: Rui Li
> Fix For: 1.1.0
>
> Attachments: HIVE-9153.1-spark.patch, HIVE-9153.1-spark.patch,
> HIVE-9153.2.patch, HIVE-9153.3.patch, screenshot.PNG
>
>
> The default InputFormat is {{CombineHiveInputFormat}} and thus HOS uses this.
> However, Tez uses {{HiveInputFormat}}. Since tasks are relatively cheap in
> Spark, it might make sense for us to use {{HiveInputFormat}} as well. We
> should evaluate this on a query which has many input splits such as {{select
> count(\*) from store_sales where something is not null}}.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)