Github user nkronenfeld commented on the pull request:

    https://github.com/apache/spark/pull/5565#issuecomment-94330557
  
    Thanks for taking a look at it.
    
    I'll take a look and write up here exactly what API changes this makes. 
While I don't think there are many, there aren't none, and that's why I was 
looking for comments before going any further, once I'd figured out whether it 
was possible at all.
    
    As far as I could tell, I could not get this to compile without making 
these changes, for reasons I'll comment on individually once I've put together 
the list.
    
    As for the importance of unifying RDD and DStream: I would put that far 
above unifying RDD and DataFrame.  I only included DataFrame because someone 
had already made it inherit from a class they called RDDApi, which looked to me 
like a bit of prep work for exactly what I've done here.  RDD and DStream, 
while different things, are used for the same purpose.  One of the chief 
benefits of Spark, touted from early on, is that one can use the same code in 
varying circumstances: batch jobs, live jobs, streaming jobs, etc.  Yet from 
the beginning of the streaming side, despite being touted as a benefit, this 
was never really true.  I think making it really true is a huge upside.  I 
know that, for my company, the ability to take our batch jobs and apply them 
to streaming data without changing our code would be huge, and I can't imagine 
this isn't true for many other people.
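
    To make the kind of reuse I mean concrete, here is a minimal word-count 
sketch.  The shared parent trait named in the comments (DistributedData) is 
purely hypothetical, not necessarily the interface this PR introduces; the 
point is only that, today, identical logic must be written twice:

        import org.apache.spark.rdd.RDD
        import org.apache.spark.streaming.dstream.DStream

        // Today: the same word-count logic, duplicated once per type.
        def countBatch(lines: RDD[String]): RDD[(String, Int)] =
          lines.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)

        def countStream(lines: DStream[String]): DStream[(String, Int)] =
          lines.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)

        // With a common parent trait (hypothetically, DistributedData[T])
        // exposing the shared transformations, one definition could serve
        // batch and streaming alike:
        //   def count(lines: DistributedData[String]): DistributedData[(String, Int)] =
        //     lines.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)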

