[ https://issues.apache.org/jira/browse/HIVE-25976?focusedWorklogId=784080&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-784080 ]

ASF GitHub Bot logged work on HIVE-25976:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 23/Jun/22 08:51
            Start Date: 23/Jun/22 08:51
    Worklog Time Spent: 10m 
      Work Description: pvary commented on code in PR #3289:
URL: https://github.com/apache/hive/pull/3289#discussion_r904752539


##########
ql/src/java/org/apache/hadoop/hive/ql/exec/FetchTask.java:
##########
@@ -66,32 +70,13 @@ public void initialize(QueryState queryState, QueryPlan queryPlan, TaskQueue tas
     super.initialize(queryState, queryPlan, taskQueue, context);
     work.initializeForFetch(context.getOpContext());
 
+    cachingEnabled = HiveConf.getBoolVar(conf, HiveConf.ConfVars.HIVEFETCHTASKCACHING);
+    fetchedData = new ArrayList<>();
+
     try {
       // Create a file system handle
-      if (job == null) {
-        // The job config should be initilaized once per fetch task. In case of refetch, we should use the
-        // same config.
-        job = new JobConf(conf);
-      }
-
-      Operator<?> source = work.getSource();
-      if (source instanceof TableScanOperator) {
-        TableScanOperator ts = (TableScanOperator) source;
-        // push down projections
-        ColumnProjectionUtils.appendReadColumns(job, ts.getNeededColumnIDs(), ts.getNeededColumns(),
-                ts.getNeededNestedColumnPaths(), ts.getConf().hasVirtualCols());
-        // push down filters and as of information
-        HiveInputFormat.pushFiltersAndAsOf(job, ts, null);
-
-        AcidUtils.setAcidOperationalProperties(job, ts.getConf().isTranscationalTable(),
-            ts.getConf().getAcidOperationalProperties());
-      }
-      sink = work.getSink();
-      fetch = new FetchOperator(work, job, source, getVirtualColumns(source));
-      source.initialize(conf, new ObjectInspector[]{fetch.getOutputObjectInspector()});
-      totalRows = 0;
-      ExecMapper.setDone(false);
-
+      job = new JobConf(conf);

Review Comment:
   What happened to the comment?
   ```
   The job config should be initilaized once per fetch task. In case of refetch, we should use the same config.
   ```
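
For reference, a minimal sketch of the lazy one-time initialization that the removed comment described: create the JobConf only on the first call and reuse it on refetch. Class and method names below are illustrative, not the actual FetchTask code.

```java
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.mapred.JobConf;

// Illustrative only: the "initialize once, reuse on refetch" pattern that the
// removed comment documented. Not the actual FetchTask implementation.
class JobConfHolder {
  private JobConf job;

  JobConf getOrCreate(HiveConf conf) {
    if (job == null) {
      // The job config should be initialized once per fetch task.
      // On refetch we reuse the same config instead of rebuilding it.
      job = new JobConf(conf);
    }
    return job;
  }
}
```

The guard is what lets a refetch reuse the JobConf that was already set up, instead of constructing a fresh one each time initialize() runs.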





Issue Time Tracking
-------------------

    Worklog Id:     (was: 784080)
    Time Spent: 0.5h  (was: 20m)

> Cleaner may remove files being accessed from a fetch-task-converted reader
> --------------------------------------------------------------------------
>
>                 Key: HIVE-25976
>                 URL: https://issues.apache.org/jira/browse/HIVE-25976
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Zoltan Haindrich
>            Assignee: László Végh
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: fetch_task_conv_compactor_test.patch
>
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In a nutshell, the following happens:
> * the query is compiled in fetch-task-converted mode
> * no real execution happens, but the locks are released
> * HS2 communicates with the client and uses the fetch task to get the rows, which in this case reads files directly from the table's directory
> * the client sleeps between reads, so there is ample time for other events
> * the Cleaner wakes up and removes some files
> * on the next read the fetch task encounters a read error
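
Judging by the new cachingEnabled/fetchedData fields in the diff above, the patch appears to buffer the fetched rows so that later client reads are served from memory rather than re-reading files the Cleaner may have removed in the meantime. A minimal sketch of that idea, with hypothetical names (not the actual HIVE-25976 patch):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of row caching for a fetch-task-converted query:
// read all rows once, while the files are still present, and serve
// subsequent client fetches from the in-memory copy.
class CachedRowFetcher {
  interface RowSource {
    Object next(); // returns null when exhausted
  }

  private final List<Object> fetchedData = new ArrayList<>();
  private boolean cached = false;
  private int cursor = 0;

  List<Object> fetch(RowSource source, int maxRows) {
    if (!cached) {
      // Drain the source eagerly so later fetches never touch the files again.
      Object row;
      while ((row = source.next()) != null) {
        fetchedData.add(row);
      }
      cached = true;
    }
    List<Object> batch = new ArrayList<>();
    while (cursor < fetchedData.size() && batch.size() < maxRows) {
      batch.add(fetchedData.get(cursor++));
    }
    return batch;
  }
}
```

With the whole result buffered up front, the later fetch calls from the client no longer read the table directory, so a concurrent Cleaner run can no longer cause the read error described above.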



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
