Github user mateiz commented on a diff in the pull request:

    https://github.com/apache/spark/pull/363#discussion_r11554379
  
    --- Diff: python/pyspark/rdd.py ---
    @@ -1387,6 +1387,95 @@ def _jrdd(self):
         def _is_pipelinable(self):
             return not (self.is_cached or self.is_checkpointed)
     
    +class Row(dict):
    +    """
    +    An extended L{dict} that takes a L{dict} in its constructor, and exposes those items as fields.
    +
    +    >>> r = Row({"hello": "world", "foo": "bar"})
    +    >>> r.hello
    +    'world'
    +    >>> r.foo
    +    'bar'
    +    """
    +
    +    def __init__(self, d):
    +        d.update(self.__dict__)
    +        self.__dict__ = d
    +        dict.__init__(self, d)
    +
    +class SchemaRDD(RDD):
    +    """
    +    An RDD of Row objects that has an associated schema. The underlying JVM object is a SchemaRDD,
    +    not a PythonRDD, so we can utilize the relational query API exposed by SparkSQL.
    +
    +    For normal L{RDD} operations (map, count, etc.) the L{SchemaRDD} is not operated on directly, as
    --- End diff --
    
    This will need to become `L{pyspark.rdd.RDD}` if you move SchemaRDD to a sql module.
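    
    For reference, here is a minimal self-contained sketch of the Row pattern from the diff above, runnable outside Spark (the class body mirrors the PR's code; the demo lines at the bottom are illustrative additions, not part of the PR):
    
        class Row(dict):
            """A dict whose constructor items are also readable as attributes."""
    
            def __init__(self, d):
                # Merge in any attributes already set on the instance
                # (a no-op for a fresh instance, whose __dict__ is empty).
                d.update(self.__dict__)
                # Point __dict__ at d so each key in d becomes an attribute.
                self.__dict__ = d
                # Copy d into the dict's own item storage as well. The two
                # stores are only synchronized here, at construction time.
                dict.__init__(self, d)
    
        # Demo (hypothetical usage, not from the PR):
        r = Row({"hello": "world", "foo": "bar"})
        assert r.hello == "world"    # attribute access
        assert r["foo"] == "bar"     # plain dict item access
    
    One caveat worth noting: because the attribute store and the item store are only merged in __init__, an item added after construction (e.g. r["x"] = 1) is not visible as r.x, and vice versa.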

