Would it be possible to use the connector as a datasource in the dashboard
for creating gadgets?
On Sat, Aug 15, 2015 at 9:47 AM, Inosh Goonewardena in...@wso2.com wrote:
Hi,
1. Adding new Spark dialects for various DBs (WIP)
I have added new Spark JDBC dialects for the following DBs (a sketch of
what such a dialect looks like follows the list below).
- MySQL
- MSSQL
- Oracle
- PostgreSQL
- DB2
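For anyone curious what one of these dialects involves, here is a minimal
sketch against Spark's JdbcDialect API. The class name and the type mappings
are illustrative assumptions, not the actual Carbon implementation:

```scala
import java.sql.Types

import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects, JdbcType}
import org.apache.spark.sql.types._

// Illustrative MySQL dialect; names and mappings are assumptions,
// not the actual Carbon code.
case object ExampleMySQLDialect extends JdbcDialect {

  // Claim any JDBC URL that targets MySQL.
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:mysql")

  // MySQL quotes identifiers with backticks rather than double quotes.
  override def quoteIdentifier(colName: String): String = s"`$colName`"

  // Map Catalyst types to DB-specific SQL types where Spark's defaults
  // fall short.
  override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
    case StringType  => Some(JdbcType("TEXT", Types.VARCHAR))
    case BooleanType => Some(JdbcType("BIT(1)", Types.BIT))
    case _           => None // fall back to the default mappings
  }
}

// Once registered, Spark picks the dialect up for matching JDBC URLs.
JdbcDialects.registerDialect(ExampleMySQLDialect)
```

A dialect per DB is needed because identifier quoting and type names differ
across MySQL, MSSQL, Oracle, PostgreSQL and DB2.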
Hi Niranda,
No, not incremental data processing. My question was regarding deleting the
entire summary table's records and re-inserting them again. IMO, doing an
upsert would be more efficient than your approach above. Also, if there is no
other option, is the above re-insert done as a batch operation, or are you
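For context, the upsert Gihan is suggesting would, on MySQL for example, look
roughly like the sketch below. The connection details, table and column names
are all hypothetical:

```scala
import java.sql.DriverManager

// Hypothetical upsert into a summary table using MySQL's
// INSERT ... ON DUPLICATE KEY UPDATE; all names are assumptions.
val conn = DriverManager.getConnection(
  "jdbc:mysql://localhost:3306/analytics", "user", "pass")
try {
  val stmt = conn.prepareStatement(
    "INSERT INTO summary_stats (user_id, event_count) VALUES (?, ?) " +
      "ON DUPLICATE KEY UPDATE event_count = VALUES(event_count)")
  stmt.setString(1, "user-1")
  stmt.setLong(2, 42L)
  // Inserts the row, or updates it in place if user_id already exists.
  stmt.executeUpdate()
  stmt.close()
} finally {
  conn.close()
}
```

Only the affected rows are touched, which is the efficiency argument against a
full delete-and-reinsert.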
Hi Gihan,
Are we talking about incremental processing here? INSERT INTO/OVERWRITE
queries will normally be used to push analyzed data into summary tables.
In Spark jargon, INSERT OVERWRITE TABLE means completely deleting the
table's contents and recreating it. I'm a bit confused about the meaning of
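To make the distinction concrete, here is a small sketch of the two query
types, assuming an existing SQLContext and hypothetical events and
summary_stats tables:

```scala
import org.apache.spark.sql.SQLContext

// Assumes an existing SQLContext with registered `events` and
// `summary_stats` tables; both names are hypothetical.
def pushSummary(sqlCtx: SQLContext): Unit = {
  // INSERT INTO appends the query result to the rows already present.
  sqlCtx.sql(
    "INSERT INTO TABLE summary_stats " +
      "SELECT user_id, COUNT(*) FROM events GROUP BY user_id")

  // INSERT OVERWRITE first wipes summary_stats completely, then writes
  // the fresh result set: the delete-and-recreate behaviour described above.
  sqlCtx.sql(
    "INSERT OVERWRITE TABLE summary_stats " +
      "SELECT user_id, COUNT(*) FROM events GROUP BY user_id")
}
```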
Hi Niranda,
Are we going to solve those limitations before the GA? Especially limitation
no. 2: over time we can have a stats table with thousands of records, so are
we going to remove all the records and re-insert them every time the Spark
script runs?
Regards,
Gihan
Hi Niranda,
I'll have a look.
Thanks and Regards.
On Tue, Aug 11, 2015 at 7:13 AM, Niranda Perera nira...@wso2.com wrote:
Hi all,
We have implemented a custom Spark JDBC connector to be used in the Carbon
environment.
This enables the following:
1. Now, temporary tables can be created in the Spark environment by
specifying an analytics datasource (configured in
analytics-datasources.xml) and a table
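To illustrate point 1, creating such a temporary table from Spark SQL might
look like the following. The provider shorthand CarbonJDBC, the datasource
name and the table names are assumptions based on the description above, not
taken from the actual code:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Minimal sketch only; "CarbonJDBC", "WSO2_ANALYTICS_DB" and the table
// names are assumed for illustration.
val sc = new SparkContext(
  new SparkConf().setAppName("carbon-jdbc-example").setMaster("local[*]"))
val sqlCtx = new SQLContext(sc)

// Map a table from an analytics datasource (defined in
// analytics-datasources.xml) into the Spark environment.
sqlCtx.sql(
  """CREATE TEMPORARY TABLE plug_usage
    |USING CarbonJDBC
    |OPTIONS (dataSource "WSO2_ANALYTICS_DB", tableName "PLUG_USAGE")""".stripMargin)

// The mapped table can then be queried like any other Spark table.
sqlCtx.sql("SELECT * FROM plug_usage LIMIT 10").show()
```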