Biranavan-Parameswaran opened a new pull request, #2376:
URL: https://github.com/apache/systemds/pull/2376
## RollOperation: Single vs Multithreaded Performance Summary
### **Test Matrix Sizes**
Two ranges of matrix sizes were evaluated:
1. **Small/Medium Matrices:**
Rows = 2017–2516, Cols = 1001–1500
(Defined as: `MIN_ROWS = 2017`, `MIN_COLS = 1001`, plus up to +500 random)
2. **Large Matrices:**
Rows ≈ 8000–8500, Cols ≈ 4000–4500
(Same generation logic but higher baseline)
These two ranges allow observing how workload size impacts single-threaded
(**ST**) vs multithreaded (**MT**) performance.
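The size-generation logic described above can be sketched as follows (the constant names come from the description; the helper itself is illustrative, not the actual test code):

```java
import java.util.Random;

public class SizeGen {
    // Baselines from the test description; a random offset of up to +500
    // is added per dimension (yielding rows 2017-2516, cols 1001-1500).
    static final int MIN_ROWS = 2017;
    static final int MIN_COLS = 1001;
    static final int MAX_OFFSET = 500;

    static int[] randomSize(Random rnd) {
        int rows = MIN_ROWS + rnd.nextInt(MAX_OFFSET); // 2017..2516
        int cols = MIN_COLS + rnd.nextInt(MAX_OFFSET); // 1001..1500
        return new int[]{rows, cols};
    }
}
```

The large-matrix range uses the same scheme with higher baselines (~8000 rows, ~4000 cols).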
---
### **Dense Matrices**
- Multithreaded (**MT**, 10 cores) execution consistently improves
performance versus single-threaded (**ST**, 1 core).
- **Small/Medium matrices:** typically **2×–3.5× speedup**.
- **Large matrices:** typically **1.3×–4.7× speedup**, depending on layout
and shift.
- Occasional slowdowns are visible, most likely due to **JVM warm-up
effects** during early measurements.
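As an illustration of the MT strategy being benchmarked (a minimal sketch of a row-partitioned roll over a thread pool, not the SystemDS implementation), the rows can be split into contiguous blocks and each block shifted independently:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class ParallelRoll {
    // Roll rows downward by 'shift' (>= 0): output row (i + shift) % n
    // receives input row i. Rows are partitioned across k worker threads.
    // Illustrative sketch only, not the SystemDS RollOperation code.
    static double[][] roll(double[][] in, int shift, int k) throws Exception {
        int n = in.length;
        double[][] out = new double[n][];
        ExecutorService pool = Executors.newFixedThreadPool(k);
        List<Future<?>> tasks = new ArrayList<>();
        int blk = (n + k - 1) / k; // rows per thread, rounded up
        for (int t = 0; t < k; t++) {
            final int lo = t * blk, hi = Math.min(n, lo + blk);
            tasks.add(pool.submit(() -> {
                for (int i = lo; i < hi; i++)
                    out[(i + shift) % n] = in[i].clone();
            }));
        }
        for (Future<?> f : tasks)
            f.get(); // wait and propagate worker exceptions
        pool.shutdown();
        return out;
    }
}
```

Each output row is written by exactly one thread, so no synchronization is needed beyond joining the futures; this is what makes the dense case scale well once the per-thread work outweighs the pool overhead.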
---
### **Sparse Matrices (1% sparsity)**
- **Small/Medium sparse matrices:** MT is often **slower** than ST because
the workload is too small; overhead dominates.
- Example: 0.5 ms (ST) → 2.0 ms (MT), a **4× slowdown** (0.25× speedup)
- **Large sparse matrices:** MT becomes **consistently beneficial**,
commonly achieving **3×–4.5× speedups**.
- Example: 7.0 ms (ST) → 1.8 ms (MT) → **3.9× speedup**
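The sparse results suggest gating parallelism on the number of stored values. A hypothetical heuristic (the threshold constant is illustrative and not taken from SystemDS) could look like this:

```java
public class RollParallelism {
    // Hypothetical heuristic: fall back to single-threaded execution when
    // the number of stored (non-zero) values is too small for the thread
    // pool overhead to pay off. Threshold chosen for illustration only.
    static final long MIN_NNZ_PER_THREAD = 64 * 1024;

    static int chooseDegreeOfParallelism(long nnz, int maxThreads) {
        long k = nnz / MIN_NNZ_PER_THREAD;
        return (int) Math.max(1, Math.min(maxThreads, k));
    }
}
```

With a rule like this, small 1%-sparse matrices would run with k = 1 (avoiding the 4× slowdown above), while large sparse matrices would still use all available cores.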
---
### **Key Point**
- **Dense:** MT provides strong speedups for all matrix sizes. Occasional
outliers are due to **JVM warm-up**.
- **Sparse:** MT helps only when the matrix contains *enough actual stored
values*. Highly sparse matrices should remain **single-threaded (ST)** to avoid
overhead.
In practice, the decision should be driven by the number of stored values:
small sparse workloads should stay single-threaded, while dense and large
sparse workloads benefit from multithreading.
@janniklinde will review this PR.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]