Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11595#discussion_r55493264
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -85,10 +85,13 @@ abstract class RDD[T: ClassTag](
private def sc: SparkContext = {
if (_sc == null) {
throw new SparkException(
- "RDD transformations and actions can only be invoked by the driver, not inside of other " +
- "transformations; for example, rdd1.map(x => rdd2.values.count() * x) is invalid because " +
- "the values transformation and count action cannot be performed inside of the rdd1.map " +
- "transformation. For more information, see SPARK-5063.")
+ "This RDD refers to empty SparkContext. It could happen in following cases: \n(1) RDD " +
--- End diff ---
Lots of typos:
"This RDD lacks a SparkContext" is more accurate.
"in following" -> "in the following"
"When Spark streaming" -> "When a Spark Streaming"
"if external RDD" -> "if an external RDD" -- but what is an external RDD? A
reference to an RDD not defined by the streaming job?
"is used in DStream operation" -> "is used in DStream operations".
Spark-13758 -> SPARK-13758
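
For readers following the thread, a minimal sketch of the nested-RDD pattern the original message warns about, and a common workaround (variable names are illustrative, not from the PR):

```scala
// Invalid: rdd2 is captured inside a transformation on rdd1, so its
// count() action would have to run on an executor, where no usable
// SparkContext exists. Spark raises the SparkException this diff edits
// (see SPARK-5063):
//
//   rdd1.map(x => rdd2.values.count() * x)

// Workaround: run the action on the driver first, and capture only the
// plain result (a Long) in the closure shipped to executors.
val count: Long = rdd2.values.count()   // executes on the driver
val result = rdd1.map(x => count * x)   // closure captures a Long, not an RDD
```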