Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/6901#discussion_r32855391
--- Diff: docs/programming-guide.md ---
@@ -1144,9 +1144,11 @@ generate these on the reduce side. When data does not fit in memory Spark will s
to disk, incurring the additional overhead of disk I/O and increased garbage collection.
Shuffle also generates a large number of intermediate files on disk. As of Spark 1.3, these files
--- End diff --
I know this has been merged, but an annoying issue that I have found in docs
(including mine, so I am guilty too) is the use of this `as of Spark X` pattern.
Nobody remembers to search for it later, so it never gets updated. Instead we
should use the site variables Jekyll provides, e.g. `as of Spark
{{site.SPARK_VERSION_SHORT}}`.
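
For example, a minimal sketch of how the substitution would work (the version
number below is only a placeholder; `SPARK_VERSION_SHORT` is assumed to be set
in `docs/_config.yml` by the docs build):

```yaml
# docs/_config.yml -- populated by the docs build with the current release
SPARK_VERSION_SHORT: "1.4.0"
```

```markdown
<!-- docs/programming-guide.md -->
Shuffle also generates a large number of intermediate files on disk. As of
Spark {{site.SPARK_VERSION_SHORT}}, these files ...
```

Jekyll expands the variable when the docs are built, so the rendered pages
always show the version they were built for and nobody has to grep for
hard-coded version strings at release time.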