Hi,

I'm building aggregates over streaming data. When new data affects
previously processed aggregates, I need to update the affected rows or
delete them before writing the newly processed aggregates back to HDFS
(Hive Metastore) and an SAP HANA table. How would you do this when
writing out a complete DataFrame every interval is not an option?
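
What I have sketched so far for the HANA side looks roughly like the
snippet below. It is only a sketch, not working code: aggregatedDF, the
table AGG_TABLE, the key/value columns and the connection details are
placeholders, and I am assuming HANA's "UPSERT ... WITH PRIMARY KEY"
does an insert-or-update per row.

import java.sql.DriverManager
import org.apache.spark.sql.Row

// For each batch of recomputed aggregates, upsert only the affected
// rows into HANA instead of rewriting the whole table.
aggregatedDF.foreachPartition { (rows: Iterator[Row]) =>
  val conn = DriverManager.getConnection(
    "jdbc:sap://hana-host:30015/", "user", "password")  // placeholder connection
  val stmt = conn.prepareStatement(
    "UPSERT AGG_TABLE (GROUP_KEY, AGG_VALUE) VALUES (?, ?) WITH PRIMARY KEY")
  try {
    rows.foreach { row =>
      stmt.setString(1, row.getString(0))  // assumes a (key, value) schema
      stmt.setLong(2, row.getLong(1))
      stmt.addBatch()
    }
    stmt.executeBatch()                    // one round trip per partition
  } finally {
    stmt.close()
    conn.close()
  }
}

For the aggregates already sitting in HDFS/Hive I don't see an
equivalent, which is part of why I'm asking.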

Somewhat related is the question of custom JDBC SQL for writing to the SAP
HANA DB. How would you implement SAP HANA-specific commands when the
built-in JDBC DataFrame writer is not sufficient for your needs? In this
case I primarily want to do the incremental updates described above, and
maybe also send HANA-specific CREATE TABLE syntax for columnar store and
time tables.
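
For the table creation I was thinking of issuing the HANA-specific DDL
myself over a plain JDBC connection on the driver, and then only
appending through the built-in writer. Again just a sketch; the DDL,
table name and connection details are placeholders:

import java.sql.DriverManager

// Create the target table with HANA-specific DDL (columnar store) up
// front, so the generic JDBC writer never has to generate CREATE TABLE.
val ddlConn = DriverManager.getConnection(
  "jdbc:sap://hana-host:30015/", "user", "password")  // placeholder connection
try {
  ddlConn.createStatement().execute(
    "CREATE COLUMN TABLE AGG_TABLE (GROUP_KEY NVARCHAR(64) PRIMARY KEY, AGG_VALUE BIGINT)")
} finally {
  ddlConn.close()
}

// Afterwards, append-only writes can go through the built-in JDBC writer.
val props = new java.util.Properties()
props.setProperty("user", "user")                     // placeholder credentials
props.setProperty("password", "password")
aggregatedDF.write
  .mode("append")
  .jdbc("jdbc:sap://hana-host:30015/", "AGG_TABLE", props)

But that feels like working around the writer rather than with it, so I'd
be glad to hear if there is a cleaner hook for custom SQL.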

Thank you very much in advance. I'm a little stuck on this one. 

Regards
Sascha


