andygrove commented on PR #491:
URL: https://github.com/apache/datafusion-comet/pull/491#issuecomment-2145401283

   I hacked my local copy to call `getNormalizedQueryExecutionResult` instead of `getNormalizedResult` and was then able to run the example from the installation guide. :rocket:
   
   ```
   Welcome to
         ____              __
        / __/__  ___ _____/ /__
       _\ \/ _ \/ _ `/ __/  '_/
      /___/ .__/\_,_/_/ /_/\_\   version 3.5.1
         /_/
            
   Using Scala version 2.12.18 (OpenJDK 64-Bit Server VM, Java 11.0.22)
   Type in expressions to have them evaluated.
   Type :help for more information.
   
   scala> (0 until 10).toDF("a").write.mode("overwrite").parquet("/tmp/test")
   24/06/03 08:47:47 INFO src/lib.rs: Comet native library initialized
   24/06/03 08:47:47 WARN CometSparkSessionExtensions$CometExecRule: Comet cannot execute some parts of this plan natively because:
        - LocalTableScan is not supported
        - WriteFiles is not supported
        - Execute InsertIntoHadoopFsRelationCommand is not supported
   24/06/03 08:47:48 WARN MemoryManager: Total allocation exceeds 95.00% (1,020,054,720 bytes) of heap memory
   Scaling row group sizes to 95.00% for 8 writers
   24/06/03 08:47:48 WARN MemoryManager: Total allocation exceeds 95.00% (1,020,054,720 bytes) of heap memory
   Scaling row group sizes to 84.44% for 9 writers
   24/06/03 08:47:48 WARN MemoryManager: Total allocation exceeds 95.00% (1,020,054,720 bytes) of heap memory
   Scaling row group sizes to 76.00% for 10 writers
   24/06/03 08:47:48 WARN MemoryManager: Total allocation exceeds 95.00% (1,020,054,720 bytes) of heap memory
   Scaling row group sizes to 84.44% for 9 writers
   24/06/03 08:47:48 WARN MemoryManager: Total allocation exceeds 95.00% (1,020,054,720 bytes) of heap memory
   Scaling row group sizes to 95.00% for 8 writers
   
   scala> spark.read.parquet("/tmp/test").createOrReplaceTempView("t1")
   24/06/03 08:48:09 WARN CometSparkSessionExtensions$CometExecRule: Comet cannot execute some parts of this plan natively because Execute CreateViewCommand is not supported
   
   scala> spark.sql("select * from t1 where a > 5").explain
   == Physical Plan ==
   *(1) ColumnarToRow
   +- CometFilter [a#7], (isnotnull(a#7) AND (a#7 > 5))
      +- CometScan parquet [a#7] Batched: true, DataFilters: [isnotnull(a#7), (a#7 > 5)], Format: CometParquet, Location: InMemoryFileIndex(1 paths)[file:/tmp/test], PartitionFilters: [], PushedFilters: [IsNotNull(a), GreaterThan(a,5)], ReadSchema: struct<a:int>
   ```
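
   For anyone reproducing this outside the shell, the transcript above is equivalent to the following standalone sketch. Only the Spark calls shown in the log are taken as given; the object name, master URL, and the Comet config keys are assumptions based on the installation guide, not part of this PR:

   ```scala
   import org.apache.spark.sql.SparkSession

   object CometSmokeTest {
     def main(args: Array[String]): Unit = {
       // Assumes a local Spark 3.5.x with the Comet jar on the classpath;
       // the plugin/config keys here mirror the Comet installation guide.
       val spark = SparkSession.builder()
         .master("local[*]")
         .config("spark.plugins", "org.apache.spark.CometPlugin")
         .config("spark.comet.enabled", "true")
         .getOrCreate()
       import spark.implicits._

       // Same three steps as the spark-shell session above.
       (0 until 10).toDF("a").write.mode("overwrite").parquet("/tmp/test")
       spark.read.parquet("/tmp/test").createOrReplaceTempView("t1")
       // When Comet kicks in, the physical plan should show CometFilter/CometScan.
       spark.sql("select * from t1 where a > 5").explain()

       spark.stop()
     }
   }
   ```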


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscr...@datafusion.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

