Github user tdas commented on a diff in the pull request:

    https://github.com/apache/spark/pull/831#discussion_r12824726
  
    --- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
    @@ -118,8 +118,25 @@ abstract class RDD[T: ClassTag](
       // Methods and fields available on all RDDs
       // =======================================================================
     
    +  /** Accessor method which throws a runtime exception if null. This lets us have
    +    a clearer error message when attempting to perform operations on an RDD inside of
    +    a parallel operation, as the partitioner is marked as transient. */
    +  def getPartitioner: Option[Partitioner] = {
    +    partitioner match {
    +      case null => throw new SparkException("Actions on RDDs inside of another RDD operation are " +
    +          "not supported")
    +      case _ => partitioner
    +    }
    +  }
    +
       /** The SparkContext that created this RDD. */
    -  def sparkContext: SparkContext = sc
    +  def sparkContext: SparkContext = {
    --- End diff ---
    
    Without the partitioner check, what error does it throw? An NPE? In that case, can this be handled by the lookup() function itself, rather than introducing a non-intuitive check for the partitioner?
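
    For reference, here is a minimal sketch of what handling the null case inside lookup() could look like. This is illustrative only, not the actual patch: the standalone helper shape, type parameters, and the runJob-based fast path are assumptions modeled on the general structure of PairRDDFunctions.lookup (the exact runJob arity has varied across Spark versions).

        import scala.reflect.ClassTag
        import org.apache.spark.SparkException
        import org.apache.spark.rdd.RDD

        // Illustrative helper: owns the null-partitioner check itself instead of
        // routing every caller through a separate getPartitioner accessor.
        def lookup[K, V: ClassTag](rdd: RDD[(K, V)], key: K): Seq[V] = {
          rdd.partitioner match {
            case null =>
              // partitioner is @transient, so it deserializes to null when this
              // RDD is referenced from inside another RDD operation on an executor.
              throw new SparkException(
                "Actions on RDDs inside of another RDD operation are not supported")
            case Some(p) =>
              // Known partitioner: scan only the partition that can hold `key`.
              val index = p.getPartition(key)
              val process = (it: Iterator[(K, V)]) =>
                it.filter(_._1 == key).map(_._2).toSeq
              rdd.context.runJob(rdd, process, Seq(index)).head
            case None =>
              // No partitioner: fall back to a full scan of the RDD.
              rdd.filter(_._1 == key).map(_._2).collect()
          }
        }

    Folding the check into lookup() keeps RDD's public surface unchanged, but every partitioner-dependent action would then need the same guard, which is the trade-off the getPartitioner accessor avoids.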

