[
https://issues.apache.org/jira/browse/HIVE-25958?focusedWorklogId=730664&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-730664
]
ASF GitHub Bot logged work on HIVE-25958:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 22/Feb/22 07:10
Start Date: 22/Feb/22 07:10
Worklog Time Spent: 10m
Work Description: rbalamohan commented on a change in pull request #3037:
URL: https://github.com/apache/hive/pull/3037#discussion_r811633336
##########
File path: ql/src/java/org/apache/hadoop/hive/ql/stats/BasicStatsNoJobTask.java
##########
@@ -446,4 +473,86 @@ private void shutdownAndAwaitTermination(ExecutorService threadPool) {
@Override
public void setDpPartSpecs(Collection<Partition> dpPartSpecs) {
}
+
+ /**
+ * Utility class to process file level stats in parallel.
+ */
+ private static class FileStatProcessor implements Callable<FileStats> {
+
+ private final InputSplit dummySplit;
+ private final InputFormat<?, ?> inputFormat;
+ private final JobConf jc;
+ private final FileStatus file;
+
+ FileStatProcessor(FileStatus file, InputFormat<?, ?> inputFormat, InputSplit dummySplit, JobConf jc) {
+ this.file = file;
+ this.dummySplit = dummySplit;
+ this.inputFormat = inputFormat;
+ this.jc = jc;
+ }
+
+ @Override
+ public FileStats call() throws Exception {
+ try (org.apache.hadoop.mapred.RecordReader<?, ?> recordReader =
+ inputFormat.getRecordReader(dummySplit, jc, Reporter.NULL)) {
+ if (recordReader instanceof StatsProvidingRecordReader) {
+ StatsProvidingRecordReader statsRR;
+ statsRR = (StatsProvidingRecordReader) recordReader;
+ FileStats fileStats = new FileStats();
+ fileStats.setRawDataSize(statsRR.getStats().getRawDataSize());
Review comment:
Could we pass these 3-4 values via the constructor itself and expose only getters on FileStats? Also, mark the FileStats fields as final.
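The reviewer's suggestion could look like the sketch below: an immutable value class with final fields set once in the constructor and read through getters. This is illustrative only; the field set here (raw data size, row count, file size) is an assumption, not the actual FileStats definition in the PR.

```java
// Minimal sketch of an immutable FileStats, per the review comment.
// Field names are illustrative assumptions, not the PR's actual fields.
public class FileStats {
    private final long rawDataSize;
    private final long numRows;
    private final long fileSize;

    // All values are supplied up front; no setters exist afterwards.
    public FileStats(long rawDataSize, long numRows, long fileSize) {
        this.rawDataSize = rawDataSize;
        this.numRows = numRows;
        this.fileSize = fileSize;
    }

    public long getRawDataSize() { return rawDataSize; }
    public long getNumRows() { return numRows; }
    public long getFileSize() { return fileSize; }
}
```

Making the fields final also makes the object safe to hand across threads, which matters here since each FileStats is produced on a pool thread and consumed by the caller.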
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 730664)
Time Spent: 40m (was: 0.5h)
> Optimise BasicStatsNoJobTask
> ----------------------------
>
> Key: HIVE-25958
> URL: https://issues.apache.org/jira/browse/HIVE-25958
> Project: Hive
> Issue Type: Improvement
> Reporter: Rajesh Balamohan
> Priority: Major
> Labels: pull-request-available
> Time Spent: 40m
> Remaining Estimate: 0h
>
> When a large number of files is present, analyzing a table (for stats)
> takes much longer, especially on cloud platforms. Each file is read
> sequentially to compute stats, which can be optimized.
>
> {code:java}
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:293)
> at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:506)
> - locked <0x0000000642995b10> (a org.apache.hadoop.fs.s3a.S3AInputStream)
> at org.apache.hadoop.fs.s3a.S3AInputStream.readFully(S3AInputStream.java:775)
> - locked <0x0000000642995b10> (a org.apache.hadoop.fs.s3a.S3AInputStream)
> at org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:116)
> at org.apache.orc.impl.RecordReaderUtils.readDiskRanges(RecordReaderUtils.java:574)
> at org.apache.orc.impl.RecordReaderUtils$DefaultDataReader.readFileData(RecordReaderUtils.java:282)
> at org.apache.orc.impl.RecordReaderImpl.readAllDataStreams(RecordReaderImpl.java:1172)
> at org.apache.orc.impl.RecordReaderImpl.readStripe(RecordReaderImpl.java:1128)
> at org.apache.orc.impl.RecordReaderImpl.advanceStripe(RecordReaderImpl.java:1281)
> at org.apache.orc.impl.RecordReaderImpl.advanceToNextRow(RecordReaderImpl.java:1316)
> at org.apache.orc.impl.RecordReaderImpl.<init>(RecordReaderImpl.java:302)
> at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.<init>(RecordReaderImpl.java:68)
> at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rowsOptions(ReaderImpl.java:83)
> at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.createReaderFromFile(OrcInputFormat.java:367)
> at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.<init>(OrcInputFormat.java:276)
> at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:2027)
> at org.apache.hadoop.hive.ql.stats.BasicStatsNoJobTask$FooterStatCollector.run(BasicStatsNoJobTask.java:235)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> "HiveServer2-Background-Pool: Thread-5161" #5161 prio=5 os_prio=0 tid=0x00007f271217d800 nid=0x21b7 waiting on condition [0x00007f26fce88000]
> java.lang.Thread.State: TIMED_WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x00000006bee1b3a0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
> at java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1475)
> at org.apache.hadoop.hive.ql.stats.BasicStatsNoJobTask.shutdownAndAwaitTermination(BasicStatsNoJobTask.java:426)
> at org.apache.hadoop.hive.ql.stats.BasicStatsNoJobTask.aggregateStats(BasicStatsNoJobTask.java:338)
> at org.apache.hadoop.hive.ql.stats.BasicStatsNoJobTask.process(BasicStatsNoJobTask.java:121)
> at org.apache.hadoop.hive.ql.exec.StatsTask.execute(StatsTask.java:107)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:213)
> at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:105)
> at org.apache.hadoop.hive.ql.Executor.launchTask(Executor.java:361)
> at org.apache.hadoop.hive.ql.Executor.launchTasks(Executor.java:334)
> at org.apache.hadoop.hive.ql.Executor.runTasks(Executor.java:250) {code}
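The stack trace shows each file's footer being read one at a time on a pool thread while the caller blocks in awaitTermination. The optimization direction described above can be sketched as one Callable per file submitted to a fixed thread pool, with results aggregated afterwards. This is a self-contained illustration only: statsOf and totalRows are hypothetical names standing in for the real per-file footer read and aggregation, not Hive APIs.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of parallel per-file stats collection. statsOf() stands in
// for reading one file's stats (e.g. an ORC footer over S3).
public class ParallelStatsSketch {

    // Placeholder "stat": here just the path length, so the sketch runs
    // without a filesystem. The real task would open a RecordReader.
    static long statsOf(String file) {
        return file.length();
    }

    // Submit one task per file, then sum the per-file results.
    public static long totalRows(List<String> files, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<Long>> futures = new ArrayList<>();
            for (String f : files) {
                futures.add(pool.submit(() -> statsOf(f)));
            }
            long total = 0;
            for (Future<Long> fu : futures) {
                total += fu.get(); // blocks until that file's stats are ready
            }
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(totalRows(List.of("a.orc", "bb.orc"), 2));
    }
}
```

On high-latency object stores like S3, the per-file read is dominated by round trips rather than CPU, so overlapping many such reads is where the win comes from.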
--
This message was sent by Atlassian Jira
(v8.20.1#820001)