mbutrovich commented on code in PR #21184:
URL: https://github.com/apache/datafusion/pull/21184#discussion_r3002547737


##########
datafusion/core/tests/fuzz_cases/join_fuzz.rs:
##########
@@ -1125,6 +1128,138 @@ impl JoinFuzzTestCase {
     }
 }
 
+/// Fuzz test: compare SMJ (with spilling) against HJ (no spill) for filtered
+/// outer joins under memory pressure. This exercises the deferred filtering +
+/// spill read-back path that unit tests can't easily cover with random data.
+#[tokio::test]
+async fn test_filtered_join_spill_fuzz() {
+    let join_types = [JoinType::Left, JoinType::Right, JoinType::Full];
+
+    let runtime_spill = RuntimeEnvBuilder::new()
+        .with_memory_limit(4096, 1.0)
+        .with_disk_manager_builder(
+            DiskManagerBuilder::default().with_mode(DiskManagerMode::OsTmpDirectory),
+        )
+        .build_arc()
+        .unwrap();
+
+    for join_type in &join_types {
+        for (left_extra, right_extra) in [(true, true), (false, true), (true, false)] {
+            let input1 = make_staggered_batches_i32(1000, left_extra);
+            let input2 = make_staggered_batches_i32(1000, right_extra);
+
+            let schema1 = input1[0].schema();
+            let schema2 = input2[0].schema();
+            let filter = col_lt_col_filter(schema1.clone(), schema2.clone());
+
+            let on = vec![
+                (
+                    Arc::new(Column::new_with_schema("a", &schema1).unwrap()) as _,
+                    Arc::new(Column::new_with_schema("a", &schema2).unwrap()) as _,
+                ),
+                (
+                    Arc::new(Column::new_with_schema("b", &schema1).unwrap()) as _,
+                    Arc::new(Column::new_with_schema("b", &schema2).unwrap()) as _,
+                ),
+            ];
+
+            for batch_size in [2, 49, 100] {

Review Comment:
   The spill test is already somewhat slow (it added about 20 seconds to a suite that was already ~90 seconds) because it runs with a memory limit and disk spilling across multiple join types and extra-column combos. The smaller set of batch sizes was chosen intentionally to keep the run time reasonable: the combinatorics (3 join types * 3 extra-column combos * N batch sizes) make the full set expensive, and these three batch sizes cover the small, medium, and boundary cases. I can add more if we don't think these fuzz tests will become too expensive.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

