Github user MLnick commented on the pull request:

    https://github.com/apache/spark/pull/455#issuecomment-50111706
  
    You'd need to run the loading code off the master branch. It should be in
    the 1.1 release in a few weeks.
    Sent from Mailbox
    
    On Fri, Jul 25, 2014 at 4:14 AM, Russell Jurney <notificati...@github.com>
    wrote:
    
    > I got this to run and I'm able to get work done!
    > Does this code have to be run on the latest Spark code? Would it run on
    > 1.0?
    > On Tuesday, July 22, 2014, Eric Garcia <notificati...@github.com> wrote:
    >> @MLnick <https://github.com/MLnick>, I made a PR here: #1536
    >> <https://github.com/apache/spark/pull/1536>
    >> @rjurney <https://github.com/rjurney>, the updated code works for the
    >> .avro file you posted, though it is still not fully implemented for *all*
    >> data types. Note that any null values in your data will show up as an
    >> empty string "". For some reason I could not get Java null to convert to
    >> Python None.
    >>
    >> —
    >> Reply to this email directly or view it on GitHub
    >> <https://github.com/apache/spark/pull/455#issuecomment-49792170>.
    >>
    > -- 
    > Russell Jurney twitter.com/rjurney russell.jur...@gmail.com
    > datasyndrome.com
    > ---
    > Reply to this email directly or view it on GitHub:
    > https://github.com/apache/spark/pull/455#issuecomment-50101315
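
    For anyone wanting to try this, a rough sketch of reading an Avro file from
    PySpark with the converter support discussed here. The class names assume
    the example converter shipped with the Spark examples jar
    (pythonconverters.AvroWrapperToJavaConverter), so treat them as
    illustrative rather than the exact code in the PR, and note the caveat
    above that null fields may come back as "" instead of None:

        from pyspark import SparkContext

        sc = SparkContext(appName="AvroRead")

        # Each element arrives as a (record, null) pair: the key is the Avro
        # record converted to a Python dict, the value is a NullWritable that
        # can be ignored.
        avro_rdd = sc.newAPIHadoopFile(
            "path/to/data.avro",
            "org.apache.avro.mapreduce.AvroKeyInputFormat",
            "org.apache.avro.mapred.AvroKey",
            "org.apache.hadoop.io.NullWritable",
            keyConverter="org.apache.spark.examples.pythonconverters.AvroWrapperToJavaConverter")

        records = avro_rdd.map(lambda kv: kv[0])  # drop the NullWritable half
        print(records.take(1))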

