[
https://issues.apache.org/jira/browse/HBASE-28756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17869017#comment-17869017
]
Hudson commented on HBASE-28756:
--------------------------------
Results for branch branch-3
[build #259 on
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/259/]:
(x) *{color:red}-1 overall{color}*
----
details (if available):
(/) {color:green}+1 general checks{color}
-- For more information [see general
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/259/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk17 hadoop3 checks{color}
-- For more information [see jdk17
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/259/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color}
-- See build output for details.
(x) {color:red}-1 client integration test{color}
-- Something went wrong with this stage, [check relevant console
output|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/259//console].
> RegionSizeCalculator ignores the size of the MemStore, which leads Spark to miss data
> -------------------------------------------------------------------------------------
>
> Key: HBASE-28756
> URL: https://issues.apache.org/jira/browse/HBASE-28756
> Project: HBase
> Issue Type: Bug
> Components: mapreduce
> Affects Versions: 2.6.0, 3.0.0-beta-1, 2.5.10
> Reporter: Sun Xin
> Assignee: Sun Xin
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.0.0-beta-2, 2.6.1, 2.5.11
>
>
> RegionSizeCalculator only considers the size of StoreFiles and ignores the
> size of the MemStore. For a new region whose writes have only reached the
> MemStore and have not yet been flushed, it will report a size of 0.
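> For illustration, the size today effectively comes from store files alone (a
> paraphrase of the calculator's logic, not the literal source; the helper name
> is ours):
> {code:java}
> import org.apache.hadoop.hbase.RegionMetrics;
> import org.apache.hadoop.hbase.Size;
>
> // Paraphrased current behavior: only store file size is counted.
> static long currentRegionSize(RegionMetrics metrics) {
>   // A region whose writes are still only in the MemStore has no store
>   // files yet, so this returns 0 and its InputSplit looks empty.
>   return (long) metrics.getStoreFileSize().get(Size.Unit.BYTE);
> }{code}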
> When we use TableInputFormat to read HBase table data in Spark, for example:
> {code:java}
> spark.sparkContext.newAPIHadoopRDD(
>   conf,
>   classOf[TableInputFormat],
>   classOf[ImmutableBytesWritable],
>   classOf[Result]){code}
> Spark defaults to ignoring empty InputSplits, which is determined by the
> configuration {{spark.hadoopRDD.ignoreEmptySplits}}:
> {code:java}
> private[spark] val HADOOP_RDD_IGNORE_EMPTY_SPLITS =
>   ConfigBuilder("spark.hadoopRDD.ignoreEmptySplits")
>     .internal()
>     .doc("When true, HadoopRDD/NewHadoopRDD will not create partitions for " +
>       "empty input splits.")
>     .version("2.3.0")
>     .booleanConf
>     .createWithDefault(true){code}
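> As a stopgap, keeping empty splits enabled avoids dropping those regions. This
> is our suggestion rather than part of the fix, and it trades some empty
> partitions for correctness:
> {code:java}
> import org.apache.spark.SparkConf;
>
> // Possible mitigation: keep partitions for empty input splits so regions
> // reported with size 0 are still scanned (accepting some empty tasks).
> SparkConf conf = new SparkConf()
>     .set("spark.hadoopRDD.ignoreEmptySplits", "false");
> {code}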
> The combination of these behaviors causes Spark to miss data. So we should
> consider both the StoreFile size and the MemStore size in RegionSizeCalculator.
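> A minimal sketch of that change, assuming the calculator keeps building its
> size map from RegionMetrics (the method names come from the public Admin API;
> the surrounding helper is our illustration, not the committed patch):
> {code:java}
> import java.io.IOException;
> import java.util.Map;
> import java.util.TreeMap;
> import org.apache.hadoop.hbase.RegionMetrics;
> import org.apache.hadoop.hbase.ServerName;
> import org.apache.hadoop.hbase.Size;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Admin;
> import org.apache.hadoop.hbase.util.Bytes;
>
> static Map<byte[], Long> regionSizes(Admin admin, ServerName server,
>     TableName table) throws IOException {
>   Map<byte[], Long> sizes = new TreeMap<>(Bytes.BYTES_COMPARATOR);
>   for (RegionMetrics m : admin.getRegionMetrics(server, table)) {
>     long storeFileBytes = (long) m.getStoreFileSize().get(Size.Unit.BYTE);
>     long memStoreBytes = (long) m.getMemStoreSize().get(Size.Unit.BYTE);
>     // Count unflushed data too, so a memstore-only region is not sized 0.
>     sizes.put(m.getRegionName(), storeFileBytes + memStoreBytes);
>   }
>   return sizes;
> }{code}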
--
This message was sent by Atlassian Jira
(v8.20.10#820010)