Why not add a trigger to your database table so that whenever it's updated,
the changes are pushed to Kafka (or a similar message queue), and then
consume that topic with normal Spark Streaming?
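
For example, here's a minimal sketch of that route in Scala, using the
direct Kafka API and assuming your trigger writes one message per changed
row to a Kafka topic; the topic name "db-changes", the broker address, and
the HDFS output path are placeholders, not anything from your setup:

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object DbChangeStream {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(
      new SparkConf().setAppName("DbChangeStream"), Seconds(10))

    val kafkaParams = Map("metadata.broker.list" -> "broker1:9092")
    val topics = Set("db-changes")

    // Each Kafka message is assumed to carry one change record as a string
    val changes = KafkaUtils
      .createDirectStream[String, String, StringDecoder, StringDecoder](
        ssc, kafkaParams, topics)
      .map(_._2)

    // Write each non-empty micro-batch of changes out to HDFS
    changes.foreachRDD { (rdd, time) =>
      if (!rdd.isEmpty()) {
        rdd.saveAsTextFile("hdfs:///tmp/db-changes/" + time.milliseconds)
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
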
You can also write a receiver-based architecture
<https://spark.apache.org/docs/latest/streaming-custom-receivers.html> for
this, but that will be a bit more time-consuming.
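
To give an idea of the work involved, here's a minimal sketch of such a
receiver, assuming your table has a monotonically increasing id column you
can poll on; the JDBC URL, table name, and column names are hypothetical:

import java.sql.DriverManager
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.receiver.Receiver

class JdbcPollingReceiver(jdbcUrl: String, pollIntervalMs: Long)
    extends Receiver[String](StorageLevel.MEMORY_AND_DISK_2) {

  @volatile private var lastSeenId = 0L

  def onStart(): Unit = {
    // Poll on a separate thread so onStart() returns immediately
    new Thread("JDBC Polling Receiver") {
      override def run(): Unit = poll()
    }.start()
  }

  def onStop(): Unit = { /* the polling loop below checks isStopped() */ }

  private def poll(): Unit = {
    val conn = DriverManager.getConnection(jdbcUrl)
    try {
      while (!isStopped()) {
        val stmt = conn.prepareStatement(
          "SELECT id, payload FROM my_table WHERE id > ? ORDER BY id")
        stmt.setLong(1, lastSeenId)
        val rs = stmt.executeQuery()
        while (rs.next()) {
          lastSeenId = rs.getLong("id")
          store(rs.getString("payload")) // hand each new row to Spark
        }
        rs.close()
        stmt.close()
        Thread.sleep(pollIntervalMs)
      }
    } finally {
      conn.close()
    }
  }
}

You would then hook it up with ssc.receiverStream(new
JdbcPollingReceiver(jdbcUrl, 5000)) and process the resulting DStream like
any other.
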
Another approach would be to use a normal (batch) Spark job that is
triggered whenever there's a change in your DB tables.
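
For instance, a one-shot job built on JdbcRDD, which partitions the table
scan by a numeric key; the connection string, table, bounds, and output
path below are assumptions for illustration:

import java.sql.DriverManager
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.JdbcRDD

object DbSnapshotJob {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("DbSnapshotJob"))

    // The query must contain exactly two '?' placeholders, which JdbcRDD
    // fills with per-partition lower/upper bounds on the id column
    val rows = new JdbcRDD(
      sc,
      () => DriverManager.getConnection(
        "jdbc:mysql://dbhost/mydb", "user", "pass"),
      "SELECT id, payload FROM my_table WHERE id >= ? AND id <= ?",
      lowerBound = 1L,
      upperBound = 1000000L,
      numPartitions = 10,
      mapRow = rs => (rs.getLong(1), rs.getString(2)))

    rows.map { case (id, payload) => id + "\t" + payload }
      .saveAsTextFile("hdfs:///tmp/my_table_snapshot")

    sc.stop()
  }
}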

Thanks
Best Regards

On Mon, Jul 13, 2015 at 9:43 PM, unk1102 <umesh.ka...@gmail.com> wrote:

> Hi, I have done Kafka streaming with Spark Streaming. I now have a use case
> where I would like to stream data from a database table. I see that JdbcRDD
> exists, but that is not what I am looking for: I need continuous streaming,
> like Java Spark Streaming, which runs continuously, listens for changes in
> a database table, and gives me those changes to process and store in HDFS.
> Please guide me; I am new to Spark. Thanks in advance.
