[
https://issues.apache.org/jira/browse/HIVE-25958?focusedWorklogId=730646&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-730646
]
ASF GitHub Bot logged work on HIVE-25958:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 22/Feb/22 05:32
Start Date: 22/Feb/22 05:32
Worklog Time Spent: 10m
Work Description: rbalamohan commented on a change in pull request #3037:
URL: https://github.com/apache/hive/pull/3037#discussion_r811588175
##########
File path: ql/src/java/org/apache/hadoop/hive/ql/stats/BasicStatsNoJobTask.java
##########
@@ -223,6 +227,16 @@ public void run() {
} else {
fileList = HiveStatsUtils.getFileStatusRecurse(dir, -1, fs);
}
+ ThreadPoolExecutor tpE = null;
+ ArrayList<Future<FileStats>> futures = null;
Review comment:
List instead?
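The reviewer is asking for the field to be declared against the `List` interface rather than the concrete `ArrayList`. A minimal sketch of that style (illustrative names, not the actual patch):

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class ListTyping {
    /**
     * Helper typed to the List interface, so callers can pass any
     * implementation (ArrayList, LinkedList, ...), which is the style
     * the review comment suggests for the futures collection.
     */
    static int countNonNull(List<String> items) {
        int n = 0;
        for (String s : items) {
            if (s != null) n++;
        }
        return n;
    }

    public static void main(String[] args) {
        // Declared as List; the concrete class stays an implementation detail.
        List<String> a = new ArrayList<>();
        a.add("x");
        a.add(null);
        System.out.println(countNonNull(a)); // 1
        System.out.println(countNonNull(new LinkedList<>())); // 0
    }
}
```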
##########
File path: ql/src/java/org/apache/hadoop/hive/ql/stats/BasicStatsNoJobTask.java
##########
@@ -223,6 +227,16 @@ public void run() {
} else {
fileList = HiveStatsUtils.getFileStatusRecurse(dir, -1, fs);
}
+ ThreadPoolExecutor tpE = null;
+ ArrayList<Future<FileStats>> futures = null;
+ int numThreads = HiveConf.getIntVar(jc, HiveConf.ConfVars.BASICSTATSTASKSMAXTHREADS);
+ if (fileList.size() > 1 && numThreads > 1) {
+ numThreads = Math.max(fileList.size(), numThreads);
Review comment:
Limit to 2*available processors? (i.e. instead of fileList.size(). In
case the file listing is 1k, it shouldn't spin up 1k threads.)
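The suggested capping could look roughly like the following (a sketch of the reviewer's idea with a hypothetical helper, not the actual Hive patch):

```java
public class PoolSizing {
    /**
     * Bound the worker count as the review comment suggests: never more
     * threads than files, and never more than twice the available
     * processors, regardless of how large the configured maximum is.
     * (Illustrative helper; parameter names are assumptions.)
     */
    static int effectiveThreads(int configuredMax, int fileCount) {
        int hardCap = 2 * Runtime.getRuntime().availableProcessors();
        return Math.min(Math.min(configuredMax, fileCount), hardCap);
    }

    public static void main(String[] args) {
        // Even with 1000 files and a generous config, the pool stays
        // bounded by 2 * availableProcessors().
        System.out.println(effectiveThreads(64, 1000));
    }
}
```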
##########
File path: ql/src/java/org/apache/hadoop/hive/ql/stats/BasicStatsNoJobTask.java
##########
@@ -232,28 +246,33 @@ public void run() {
if (file.getLen() == 0) {
numFiles += 1;
} else {
- org.apache.hadoop.mapred.RecordReader<?, ?> recordReader = inputFormat.getRecordReader(dummySplit, jc, Reporter.NULL);
- try {
- if (recordReader instanceof StatsProvidingRecordReader) {
- StatsProvidingRecordReader statsRR;
- statsRR = (StatsProvidingRecordReader) recordReader;
- rawDataSize += statsRR.getStats().getRawDataSize();
- numRows += statsRR.getStats().getRowCount();
- fileSize += file.getLen();
- numFiles += 1;
- if (file.isErasureCoded()) {
- numErasureCodedFiles++;
- }
- } else {
- throw new HiveException(String.format("Unexpected file found during reading footers for: %s ", file));
- }
- } finally {
- recordReader.close();
FileStatProcessor fsp = new FileStatProcessor(file, inputFormat, dummySplit, jc);
+ if (tpE != null) {
+ futures.add(tpE.submit(fsp));
Review comment:
Add exception handling? (e.g. kill/cancel the other tasks when any
task throws an exception)
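One way to realize the suggestion is to wait on the futures and, on the first failure, cancel whatever is still outstanding before rethrowing. A minimal sketch using `java.util.concurrent` (not the actual Hive code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CancelOnFailure {
    /**
     * Collect results from all futures; on the first failure, cancel the
     * remaining tasks (best effort) and rethrow, so one bad file does not
     * leave the rest of the pool grinding on.
     */
    static <T> List<T> getAllOrCancel(List<Future<T>> futures) throws Exception {
        List<T> results = new ArrayList<>();
        try {
            for (Future<T> f : futures) {
                results.add(f.get());
            }
        } catch (ExecutionException | InterruptedException e) {
            for (Future<T> f : futures) {
                f.cancel(true); // interrupt tasks that are still running
            }
            throw e;
        }
        return results;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        List<Future<Integer>> futures = new ArrayList<>();
        futures.add(pool.submit(() -> 1));
        futures.add(pool.submit(() -> 2));
        System.out.println(getAllOrCancel(futures)); // [1, 2]
        pool.shutdown();
    }
}
```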
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 730646)
Time Spent: 20m (was: 10m)
> Optimise BasicStatsNoJobTask
> ----------------------------
>
> Key: HIVE-25958
> URL: https://issues.apache.org/jira/browse/HIVE-25958
> Project: Hive
> Issue Type: Improvement
> Reporter: Rajesh Balamohan
> Priority: Major
> Labels: pull-request-available
> Time Spent: 20m
> Remaining Estimate: 0h
>
> When a large number of files is present, analyzing a table (for stats)
> takes a long time, especially on cloud platforms. Each file is read
> sequentially to compute stats, which can be optimized.
>
> {code:java}
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:293)
> at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:506)
> - locked <0x0000000642995b10> (a org.apache.hadoop.fs.s3a.S3AInputStream)
> at org.apache.hadoop.fs.s3a.S3AInputStream.readFully(S3AInputStream.java:775)
> - locked <0x0000000642995b10> (a org.apache.hadoop.fs.s3a.S3AInputStream)
> at org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:116)
> at org.apache.orc.impl.RecordReaderUtils.readDiskRanges(RecordReaderUtils.java:574)
> at org.apache.orc.impl.RecordReaderUtils$DefaultDataReader.readFileData(RecordReaderUtils.java:282)
> at org.apache.orc.impl.RecordReaderImpl.readAllDataStreams(RecordReaderImpl.java:1172)
> at org.apache.orc.impl.RecordReaderImpl.readStripe(RecordReaderImpl.java:1128)
> at org.apache.orc.impl.RecordReaderImpl.advanceStripe(RecordReaderImpl.java:1281)
> at org.apache.orc.impl.RecordReaderImpl.advanceToNextRow(RecordReaderImpl.java:1316)
> at org.apache.orc.impl.RecordReaderImpl.<init>(RecordReaderImpl.java:302)
> at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.<init>(RecordReaderImpl.java:68)
> at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rowsOptions(ReaderImpl.java:83)
> at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.createReaderFromFile(OrcInputFormat.java:367)
> at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.<init>(OrcInputFormat.java:276)
> at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:2027)
> at org.apache.hadoop.hive.ql.stats.BasicStatsNoJobTask$FooterStatCollector.run(BasicStatsNoJobTask.java:235)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> "HiveServer2-Background-Pool: Thread-5161" #5161 prio=5 os_prio=0 tid=0x00007f271217d800 nid=0x21b7 waiting on condition [0x00007f26fce88000]
> java.lang.Thread.State: TIMED_WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x00000006bee1b3a0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
> at java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1475)
> at org.apache.hadoop.hive.ql.stats.BasicStatsNoJobTask.shutdownAndAwaitTermination(BasicStatsNoJobTask.java:426)
> at org.apache.hadoop.hive.ql.stats.BasicStatsNoJobTask.aggregateStats(BasicStatsNoJobTask.java:338)
> at org.apache.hadoop.hive.ql.stats.BasicStatsNoJobTask.process(BasicStatsNoJobTask.java:121)
> at org.apache.hadoop.hive.ql.exec.StatsTask.execute(StatsTask.java:107)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:213)
> at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:105)
> at org.apache.hadoop.hive.ql.Executor.launchTask(Executor.java:361)
> at org.apache.hadoop.hive.ql.Executor.launchTasks(Executor.java:334)
> at org.apache.hadoop.hive.ql.Executor.runTasks(Executor.java:250)
> {code}
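The optimization described in the issue, i.e. replacing the sequential per-file footer read with one task per file on a bounded pool, can be sketched as follows. The `statsForFile` helper is a hypothetical stand-in for opening a stats-providing record reader on one file; the pool-and-aggregate shape is the point, not the Hive internals:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelFooterStats {
    /**
     * Hypothetical stand-in for per-file footer reading; in Hive this would
     * open a StatsProvidingRecordReader and pull row count / raw data size.
     * Here it simply pretends the stat equals the file length.
     */
    static long statsForFile(long fileLen) {
        return fileLen;
    }

    public static void main(String[] args) throws Exception {
        List<Long> fileLengths = List.of(10L, 20L, 30L);
        // Bounded pool: no more threads than files, capped by processors.
        ExecutorService pool = Executors.newFixedThreadPool(
                Math.min(fileLengths.size(),
                         2 * Runtime.getRuntime().availableProcessors()));
        List<Future<Long>> futures = new ArrayList<>();
        for (long len : fileLengths) {
            futures.add(pool.submit(() -> statsForFile(len))); // one task per file
        }
        long total = 0;
        for (Future<Long> f : futures) {
            total += f.get(); // aggregate on the caller thread
        }
        pool.shutdown();
        System.out.println(total); // 60
    }
}
```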
--
This message was sent by Atlassian Jira
(v8.20.1#820001)