An update on release 1.15. We’re still on track for first RC on Monday.
We never seem to do a good job of verifying Calcite’s adapters. Can I have
volunteers to validate the Cassandra, Mongo, and Druid adapters? I plan to test
Calcite on Windows.
The following issues remain for the release:
Can you log a JIRA case to make "autoCommit" an accepted parameter in the
JDBC URL? I think it would solve this problem.
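For illustration only, accepting such a parameter comes down to surfacing it among the URL's connection properties. Below is a hypothetical, stdlib-only sketch of pulling `;key=value` pairs off a JDBC URL; the `autoCommit` property name and the suffix syntax here are assumptions about what the JIRA case might propose, not Avatica's actual URL grammar:

```java
import java.util.Properties;

public class UrlProps {
  // Parses trailing ";key=value" pairs from a JDBC URL into Properties --
  // the kind of place a proposed "autoCommit" parameter would surface.
  // (Hypothetical sketch; not Avatica's real parser.)
  static Properties parse(String url) {
    Properties props = new Properties();
    String[] parts = url.split(";");
    // parts[0] is the base URL; everything after a ';' is a property pair.
    for (int i = 1; i < parts.length; i++) {
      int eq = parts[i].indexOf('=');
      if (eq > 0) {
        props.setProperty(parts[i].substring(0, eq), parts[i].substring(eq + 1));
      }
    }
    return props;
  }
}
```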
> On Nov 22, 2017, at 8:52 AM, Christian Tzolov wrote:
>
> Hi Josh,
>
> Here is Zeppelin's JDBC Interpreter's forceful auto-commit
FYI,
you can use Apache Zeppelin's generic JDBC interpreter to plug in a
Calcite-based JDBC adapter.
Here are my results of plugging in the Geode-Calcite adapter:
https://www.linkedin.com/pulse/advanced-apache-geode-data-analytics-zeppelin-over-sqljdbc-tzolov/
Although I've been testing with
Hi Josh,
Here is Zeppelin's JDBC Interpreter's forceful auto-commit implementation:
http://bit.ly/2zrykP9
So if the connection does not have auto-commit = true, the JDBC interpreter
will forcefully call commit.
But Avatica's connection defaults to auto-commit = true and there is no way to
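The forceful-commit behavior described above reduces to a small JDBC pattern; here is a minimal sketch, assuming standard `java.sql.Connection` semantics (the `ForceCommit` class and `commitIfNeeded` method names are mine, not Zeppelin's actual code):

```java
import java.sql.Connection;
import java.sql.SQLException;

public class ForceCommit {
  // Sketch of the pattern described in this thread: after running a
  // statement, commit explicitly unless the connection is in auto-commit
  // mode, in which case the driver has already committed for us.
  static void commitIfNeeded(Connection conn) {
    try {
      if (!conn.getAutoCommit()) {
        conn.commit(); // forcefully commit when auto-commit is off
      }
    } catch (SQLException e) {
      throw new RuntimeException(e);
    }
  }
}
```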
Hey Christian,
Thanks for sharing this. Sounds cool.
I'm curious what you mean when you say that the Avatica connection
doesn't support commit. This was implemented in CALCITE-767.
Also, is there a reason that Zeppelin needs a property to control
autoCommit and can't use the
Hello,
I'm using Calcite to provide a read-only SQL interface for a Java
service that keeps its data in memory. However, most of the fields are
primitive types, and exposing them in a ProjectableFilterableTable forces
boxing of the field values since the scan method returns
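For context, ProjectableFilterableTable's scan method in Calcite returns an Enumerable<Object[]>, so every primitive field must be auto-boxed on its way into the Object[] row. A stdlib-only illustration of that cost (no Calcite dependency; the BoxingDemo class and toRow method are hypothetical names):

```java
public class BoxingDemo {
  // Mimics copying primitive-typed fields into an Object[] row, which is
  // what an Enumerable<Object[]>-returning scan forces: each int/long is
  // auto-boxed to an Integer/Long when stored in the row.
  static Object[] toRow(int id, long timestamp) {
    return new Object[] { id, timestamp }; // auto-boxing happens here
  }
}
```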