Issue with structured streaming custom data source V2

2019-09-04 Thread stevech.hu
Hi, I started building a custom data source using the Data Source V2 API to stream data incrementally from Azure Data Lake (HDFS) in Spark 2.4.0. I used Kafka as the reference to implement the DataSourceV2/MicroBatchReader/InputPartitionReader/InputPartition/OffsetV2 classes and got everything working
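The V2 micro-batch contract described above boils down to offset bookkeeping: the engine asks the source for its latest offset, plans a batch covering the half-open range between the last committed offset and that latest offset, and commits the end offset once the batch finishes. A minimal sketch of that pattern in plain Python (not the actual Spark Scala API; all names here are hypothetical):

```python
# Illustration of the offset-based micro-batch pattern that a DataSourceV2
# MicroBatchReader follows. A real implementation would extend Spark's
# MicroBatchReader and return InputPartitions; this sketch only shows the
# offset semantics.

class MicroBatchSource:
    def __init__(self, records):
        self.records = records  # the backing store, e.g. files in ADLS

    def latest_offset(self):
        # Highest offset currently available in the source.
        return len(self.records)

    def plan_batch(self, start, end):
        # Records belonging to one micro-batch: offsets in [start, end).
        return self.records[start:end]


def run_available_batches(source, committed=0):
    """Drain all currently available data, one micro-batch at a time."""
    batches = []
    while committed < source.latest_offset():
        end = source.latest_offset()
        batches.append(source.plan_batch(committed, end))
        committed = end  # commit the end offset after the batch completes
    return batches, committed
```

The key invariant, as in the Kafka source used as a reference, is that a batch is fully determined by its (start, end) offset pair, so a failed batch can be replanned from the last committed offset.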

Re: Even after VO fields are mapped using @Table and @Column annotations get error NoSuchElementException

2019-09-04 Thread Shyam P
Now I am getting a different error, as below: com.datastax.spark.connector.types.TypeConversionException: Cannot convert object [] of type class org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema to com.datastax.driver.core.LocalDate. at

Re: Structured Streaming: How to add a listener for when a batch is complete

2019-09-04 Thread Gourav Sengupta
Hi Natalie, if you are using Python, the streaming query listener is not available, but we can use the Spark UI REST APIs instead. For that to work, start the Spark session with the following setting: spark.sql("SET spark.sql.streaming.metricsEnabled=true") so that the UI exposes the streaming metrics as JSON results
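With `spark.sql.streaming.metricsEnabled=true` set, the driver UI's metrics servlet serves the streaming gauges as JSON at `/metrics/json`. A hedged sketch of polling it from Python, assuming the default driver UI port 4040 and a hypothetical query name; the helper names are my own:

```python
import json
from urllib.request import urlopen


def fetch_metrics(driver_ui="http://localhost:4040"):
    # Requires a running Spark driver UI; 4040 is the default port but may
    # differ if multiple applications run on the same host.
    with urlopen(driver_ui + "/metrics/json") as resp:
        return json.load(resp)


def streaming_gauges(metrics, query_name):
    # Keep only the gauges whose metric key mentions the streaming query,
    # e.g. inputRate-total or latency for that query.
    return {key: gauge.get("value")
            for key, gauge in metrics.get("gauges", {}).items()
            if query_name in key}
```

Polling this after each trigger interval is a rough substitute for the JVM-only `StreamingQueryListener.onQueryProgress` callback; `query.lastProgress` on the PySpark `StreamingQuery` handle is another option when you hold a reference to the query.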