I'm trying to create a DStream of DataFrames using PySpark. I receive data from Kafka as JSON strings, and I'm parsing these RDDs of strings into DataFrames.
My code is:

I get the following error at pyspark/streaming/util.py, line 64:

I've verified that the sqlContext is properly creating a DataFrame; the issue is the return value from the callback. Am I doing something wrong in the DStream transform? I suspect it may be a problem in the DStream implementation, given that it expects a `_jrdd` attribute.

--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/PySpark-Streaming-DataFrames-tp25095.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
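The `_jrdd` hint points at the likely cause: `DStream.transform` requires its callback to return an RDD, and a DataFrame has no `_jrdd` attribute. Below is a minimal sketch of the usual workaround, assuming a Spark 1.x-era `SQLContext`; the names `to_row`, `process`, `kafka_dstream`, and `sqlContext` are illustrative, not from the original post.

```python
import json


def to_row(line):
    # Parse one Kafka message (a JSON string) into a dict;
    # sqlContext.createDataFrame accepts an RDD of dicts.
    return json.loads(line)


def process(rdd, sqlContext):
    # Callback for DStream.transform. transform must return an RDD --
    # pyspark/streaming/util.py accesses `_jrdd` on the returned object,
    # and DataFrames do not have it -- so convert back with `.rdd`.
    if rdd.isEmpty():
        return rdd
    df = sqlContext.createDataFrame(rdd.map(to_row))
    # ... DataFrame operations here ...
    return df.rdd  # return an RDD, not the DataFrame itself


# Usage sketch (kafka_dstream and sqlContext assumed to exist):
# values = kafka_dstream.map(lambda kv: kv[1])
# values.transform(lambda rdd: process(rdd, sqlContext)).pprint()
#
# Alternatively, if the DataFrame is only needed per batch, do the
# conversion inside foreachRDD instead of transform:
# values.foreachRDD(
#     lambda rdd: sqlContext.createDataFrame(rdd.map(to_row)).show())
```

In other words, a DStream can only carry RDDs, so the DataFrame must either be converted back to its underlying RDD inside `transform`, or created and consumed entirely within `foreachRDD`.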
