comphead opened a new pull request, #3842:
URL: https://github.com/apache/datafusion-comet/pull/3842

   ## Which issue does this PR close?
   
   
   Closes https://github.com/apache/datafusion-comet/issues/3735
   
   Prerequisites for 
   
   ## Rationale for this change
   
   ### Problem
     When using Comet's native_datafusion scan (CometNativeScanExec), Spark's task-level input metrics (bytesRead, recordsRead) are always zero. These metrics feed the "Input" column in the Spark UI Stages tab and are aggregated by AppStatusListener for job-level reporting.

     Standard Spark reports input metrics in FileScanRDD.compute() by reading Hadoop FileSystem thread-local statistics via SparkHadoopUtil.get.getFSBytesReadOnThreadCallback(). Since the native DataFusion scan reads Parquet files entirely in Rust, it never touches Hadoop's Java I/O layer, so those thread-local counters are never incremented.
   
      ### What Comet already tracks
   
     The native side already tracks the relevant data:
   
     - bytes_scanned -- counted in parquet_read_cached_factory.rs via a 
DataFusion counter metric, incremented on every get_bytes() and 
get_byte_ranges() call.
     - output_rows -- tracked by DataFusion's ParquetExec.
   
     These flow back to the JVM via CometMetricNode.set_all_from_bytes() and 
appear as SQL-level metrics in the Spark UI operator details. However, they 
were never propagated to the task-level TaskMetrics.inputMetrics.
   
     ### Solution
   
      In the existing TaskCompletionListener inside CometExecRDD.compute(), after closing the iterator, read the final values of bytes_scanned and output_rows from the CometMetricNode tree and set them on TaskContext.taskMetrics().inputMetrics. This adds zero per-batch overhead -- metrics are written once at task completion.
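      The once-per-task propagation can be sketched as below. This is a simplified simulation, not Comet's actual code: SimTaskInputMetrics stands in for Spark's TaskMetrics.inputMetrics, and the native metric tree is flattened to a plain Map for illustration.

      ```scala
      // Hypothetical stand-in for Spark's task-level InputMetrics.
      final class SimTaskInputMetrics {
        var bytesRead: Long = 0L
        var recordsRead: Long = 0L
      }

      object MetricPropagationSketch {
        // Invoked once when the task finishes (the real code runs inside a
        // TaskCompletionListener, after the native iterator is closed).
        // Copies the final native metric values into the task-level input
        // metrics, so there is no per-batch bookkeeping.
        def onTaskCompletion(nativeMetrics: Map[String, Long],
                             input: SimTaskInputMetrics): Unit = {
          nativeMetrics.get("bytes_scanned").foreach(input.bytesRead = _)
          nativeMetrics.get("output_rows").foreach(input.recordsRead = _)
        }

        def main(args: Array[String]): Unit = {
          val input = new SimTaskInputMetrics
          onTaskCompletion(
            Map("bytes_scanned" -> 4096L, "output_rows" -> 100L), input)
          println(s"${input.bytesRead} ${input.recordsRead}")
        }
      }
      ```

      If a metric is absent from the tree (e.g. no native scan in the plan), the input metrics are simply left at zero, matching the pre-change behavior.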
   
     A findMetric helper on CometMetricNode performs a depth-first search 
through the metric tree, so it works whether the scan is standalone 
(CometNativeScanExec creates the RDD directly) or wrapped inside a larger 
native plan (CometNativeExec with Filter/Project above the scan).
   
      ### Changes
   
     - CometMetricNode.scala -- Added findMetric(name) for depth-first metric 
lookup in the node tree.
     - CometExecRDD.scala -- In the task completion listener, propagate 
bytes_scanned and output_rows to inputMetrics.setBytesRead / setRecordsRead.
      - CometTaskMetricsSuite.scala -- Added a test that compares input metrics from the native_datafusion scan against vanilla Spark (Comet disabled). Record counts must match exactly.
   
   
   
   ## What changes are included in this PR?
   
   
   ## How are these changes tested?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

