Hello,
I'm using Spark Streaming to handle a fairly large data flow.

I'm solving a problem where we infer the schema from the data (we need more
specific data types than JSON provides), and quite often there are small
differences between the schemas we end up with.
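For example (purely illustrative, not our actual data), two micro-batches
might come out like this:

  // batch 1: {"id": 1, "price": "12.30"}           -> struct<id: long, price: decimal(10,2)>
  // batch 2: {"id": 2, "price": "7.10", "qty": 3}  -> struct<id: long, price: decimal(10,2), qty: int>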

Saving to Parquet files and reading them back so that Parquet's schema-merging
implementation does the merge is ridiculously slow, especially as the data
grows. SQL joins can do the trick, but they aren't much faster either,
especially when there are 20 RDDs waiting to be joined. Is there a more
efficient way to achieve this?
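
To make it concrete, what I effectively need is something along these lines,
just done more efficiently (a rough sketch, not our actual code; the helper
name is made up): pad each DataFrame with typed null columns so a plain union
works, instead of going through Parquet schema merging or joins.

  import org.apache.spark.sql.DataFrame
  import org.apache.spark.sql.functions.{col, lit}

  // Sketch: align every DataFrame to the union of all columns,
  // filling missing ones with typed nulls, then union them.
  def unionByAlignedColumns(dfs: Seq[DataFrame]): DataFrame = {
    // One representative StructField per distinct column name across all inputs.
    val allFields = dfs.flatMap(_.schema.fields)
      .groupBy(_.name)
      .map { case (_, fields) => fields.head }
      .toSeq

    val aligned = dfs.map { df =>
      val present = df.schema.fieldNames.toSet
      val cols = allFields.map { f =>
        if (present.contains(f.name)) col(f.name)
        else lit(null).cast(f.dataType).as(f.name)  // missing column -> typed null
      }
      df.select(cols: _*)
    }

    // Plain union of identically-shaped DataFrames, no join needed.
    // (unionAll in 1.x; union in Spark 2.x+)
    aligned.reduce(_ unionAll _)
  }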

Regards,
G



