Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18320
**jobs with many stages**:
I tested the code below:
```r
df <- createDataFrame(list(list(1L, 1, "1", 0.1)), c("a", "b", "c", "d"))
for (i in 0:90) {
  df <- gapply(df, "a", function(key, x) { x }, schema(df))
}
collect(df)
```
Running more iterations produced a `StackOverflowError` both on my local machine and on CentOS. This run created 18201 tasks across 92 stages, which lines up with 91 `gapply` shuffles at the default 200 shuffle partitions (91 × 200 = 18200 shuffle tasks) plus one task for the initial single-partition stage.
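As an aside, if one wanted to push past that depth, the usual workaround is to truncate the lineage periodically. A minimal sketch, assuming Spark 2.2+ where SparkR exposes `setCheckpointDir`/`checkpoint`; the checkpoint directory and the interval of 30 are arbitrary choices here, not part of the test above:

```r
# Sketch: checkpoint every 30 iterations to keep the lineage shallow and
# avoid the StackOverflowError from an ever-deepening plan.
setCheckpointDir("/tmp/spark-checkpoints")  # hypothetical path
df <- createDataFrame(list(list(1L, 1, "1", 0.1)), c("a", "b", "c", "d"))
for (i in 0:90) {
  df <- gapply(df, "a", function(key, x) { x }, schema(df))
  if (i %% 30 == 0) {
    df <- checkpoint(df)  # materializes the result and drops accumulated lineage
  }
}
collect(df)
```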
**jobs with long stages**:
I ran the code below:
```r
df <- createDataFrame(list(list(1L, 1, "1", 0.1)), c("a", "b", "c", "d"))
collect(dapply(repartition(df, 8), function(x) { x }, schema(df)))
```
after manually adding a delay to `worker.R` as below (the `+` lines are the insertion):
```diff
 outputCon <- socketConnection(
     port = port, blocking = TRUE, open = "wb", timeout = connectionTimeout)
+Sys.sleep(600L)
+
 # read the index of the current partition inside the RDD
 partition <- SparkR:::readInt(inputCon)
```
This took 10 minutes (the executors had the default 8 cores, so all 8 tasks ran the 600-second sleep concurrently).
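For comparison, a similar long-running stage can be simulated without patching `worker.R` by sleeping inside the `dapply` UDF itself. A minimal sketch, assuming the same session; note the sleep here happens after the worker has read its input, so it exercises a slightly different path than the manual patch above, which blocks before the partition index is read:

```r
# Sketch: hold each of the 8 tasks open for ~10 minutes from inside the UDF.
df <- createDataFrame(list(list(1L, 1, "1", 0.1)), c("a", "b", "c", "d"))
collect(dapply(repartition(df, 8), function(x) {
  Sys.sleep(600)  # keep the task (and its R worker) busy
  x
}, schema(df)))
```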
It looks like both were fine. Would this sufficiently address your concern, @felixcheung?