Hi All,

I'm working on a real-time reporting project and I have a question about a
Structured Streaming job that streams records from a particular table and
has to join them to an existing table.

Stream ----> query/join to another DF/DS ---> update the Stream data record.

Now I'm not sure how to approach the mid layer (the query/join to another
DF/DS): should I create a DF from spark.read.format("jdbc"), or stream the
other table too and maintain its data in a memory sink, or is there a
better way to do it?
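For reference, here is a minimal sketch of the first option: a stream-static join where the lookup side is loaded once via JDBC. The connection details, table names, Kafka topic, and join key below are all hypothetical placeholders, and the static side is snapshotted at query start rather than refreshed per micro-batch (refreshing it would need something like foreachBatch with a re-read).

```scala
import org.apache.spark.sql.SparkSession

object StreamStaticJoinSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("stream-static-join-sketch")
      .getOrCreate()

    // Static side: read the existing table once via JDBC.
    // NOTE: hypothetical connection details; this snapshot is NOT
    // automatically refreshed on each micro-batch.
    val lookupDF = spark.read
      .format("jdbc")
      .option("url", "jdbc:postgresql://dbhost:5432/mydb")
      .option("dbtable", "lookup_table")
      .option("user", "user")
      .option("password", "password")
      .load()

    // Streaming side: records arriving on a hypothetical Kafka topic.
    val streamDF = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "records")
      .load()
      .selectExpr("CAST(key AS STRING) AS key",
                  "CAST(value AS STRING) AS value")

    // Stream-static join: Spark allows joining a streaming DataFrame
    // with a static one; the static side is reused in every micro-batch.
    val enriched = streamDF.join(lookupDF, Seq("key"))

    enriched.writeStream
      .format("console")
      .outputMode("append")
      .start()
      .awaitTermination()
  }
}
```

Whether this fits depends on how often the lookup table changes; if it changes frequently, re-reading it inside foreachBatch (or streaming it as well) may be the better trade-off.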

I'd like to know if anyone has faced a similar scenario and has any
suggestions on how to proceed.

Regards,
Satyajit.
