andygrove commented on issue #3882:
URL: https://github.com/apache/datafusion-comet/issues/3882#issuecomment-4208680320

   > > [@andygrove](https://github.com/andygrove) thanks for creating this issue
   > > Adding some more details
   > > | Records | Comet Shuffle Write | Standard Shuffle Write | Bytes/Record (Comet) | Bytes/Record (Std) |
   > > |---|---|---|---|---|
   > > | 204,073,258 | 4.77 GB | 1.58 GB | **25.1 B/rec** | **8.3 B/rec** |
   > > Schema: 8 columns (7 String, 1 Timestamp)
   > > This was a simple scan -> shuffle (repartition) -> write pipeline.
   > > cc: [@parthchandra](https://github.com/parthchandra) [@mbutrovich](https://github.com/mbutrovich)
   > 
   > [@karuppayya](https://github.com/karuppayya) I tried creating a repro for this, but the Comet shuffle files I saw were much smaller than Spark's. Perhaps there is some difference in the data that I generated. Is there any chance that you could provide a repro?
   > 
   > edit: I created a PR with my benchmark scripts: [#3909](https://github.com/apache/datafusion-comet/pull/3909)
   
   I have now reproduced the issue by using shorter and unique strings.
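For reference, the bytes-per-record figures in the quoted table can be derived directly from the reported sizes and record count. A quick sanity check (assuming the GB figures are GiB, i.e. 2^30 bytes) matches the quoted 25.1 and 8.3 B/rec:

```python
# Sanity-check the bytes-per-record figures from the quoted table.
# Assumption: "GB" in the table means GiB (2**30 bytes).
GIB = 2**30
records = 204_073_258

comet_bytes = 4.77 * GIB     # Comet shuffle write size
std_bytes = 1.58 * GIB       # standard Spark shuffle write size

comet_bpr = comet_bytes / records
std_bpr = std_bytes / records

print(f"Comet:    {comet_bpr:.1f} B/rec")  # ~25.1 B/rec
print(f"Standard: {std_bpr:.1f} B/rec")    # ~8.3 B/rec
print(f"Ratio:    {comet_bpr / std_bpr:.1f}x")
```

So Comet's shuffle output is roughly 3x larger per record for this workload, which is consistent with the reported 4.77 GB vs 1.58 GB totals.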


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
