Re: How to create tables with JDBC, read with ODBC?

2018-09-10 Thread limabean
Thank you very much for the thorough discussion/explanation and the pending fix for public schemas. Much appreciated! As an aside, I have also contacted QLIK to see whether they will fix their product's behavior, which does not seem correct to me either.

Re: How to create tables with JDBC, read with ODBC?

2018-09-06 Thread limabean
Although I specify a lower-case "public" in the ODBC definition on Windows 10, the QLIK BI application, on its ODBC connection page, forces an upper-case "PUBLIC", as you can see in the screenshot, and as far as I can tell there are no options to change that. QlikOdbcPanel.png

How to create tables with JDBC, read with ODBC?

2018-09-06 Thread limabean
Scenario: the 64-bit ODBC driver cannot read data created via the Java thin driver. Ignite 2.6, running a single-node server on CentOS to test this. First: using IntelliJ to remotely run the sample code from the Ignite Getting Started page on SQL: First Ignite SQL Application
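For reference, a minimal sketch of creating a table through the JDBC thin driver, roughly in the spirit of that Getting Started page. The host, table definition, and template are illustrative assumptions, not the exact sample code:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateTableViaJdbcThin {
    public static void main(String[] args) throws Exception {
        // Ignite JDBC thin driver URL; localhost is an assumption for a single-node test.
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
             Statement stmt = conn.createStatement()) {
            // DDL issued this way lands in the PUBLIC schema by default,
            // which is the schema an ODBC client then has to reference.
            stmt.executeUpdate("CREATE TABLE IF NOT EXISTS City " +
                "(id LONG PRIMARY KEY, name VARCHAR) " +
                "WITH \"template=replicated\"");
            stmt.executeUpdate("INSERT INTO City (id, name) VALUES (1, 'Forest Hill')");
        }
    }
}

An ODBC consumer would then read the same table from the PUBLIC schema.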

Re: [ANNOUNCE] Apache Ignite 2.1.0 Released

2017-07-31 Thread limabean
I apologize if this is documented somewhere, but I am having trouble building the new 2.1 release from source. Suggestions on switches, environment, etc. are appreciated. I see the following: [INFO] ignite-appserver-test .. SUCCESS [ 0.166 s] [INFO]

Re: ODBC version is not supported issue

2017-07-26 Thread limabean
Thank you, Igor. Very helpful!

Re: ODBC version is not supported issue

2017-07-26 Thread limabean
Igor, Thanks for the prompt reply and for opening a Jira for us. Do you have an idea of how difficult something like this is to fix? Wallace and I have to support a legacy BI tool, and ODBC is (unfortunately) required for that tool. We use RazorSQL in our development process, which is

Re: Inserting Data From Spark to Ignite

2017-07-19 Thread limabean
One problem is that the peer class loading setting did not match between the server and the client (which is started inside the Spark container). That required a change in the server startup configuration, because the Spark client starts with a default of "true". It was a needle in a haystack that I found
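To illustrate the kind of mismatch described above, a hedged sketch of pinning the setting explicitly on the server so it matches the client embedded in the Spark container (the value itself is an assumption; the point is only that both sides must agree):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ServerWithMatchedPeerClassLoading {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Must match what the client node inside the Spark container starts with;
        // a server/client mismatch here fails in non-obvious ways.
        cfg.setPeerClassLoadingEnabled(true);

        Ignite ignite = Ignition.start(cfg);
        System.out.println("peerClassLoadingEnabled=" +
            ignite.configuration().isPeerClassLoadingEnabled());
    }
}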

Re: Inserting Data From Spark to Ignite

2017-07-18 Thread limabean
Here is the requested example code demonstrating one of the methods that is failing to write to Ignite. The README shows the full stack trace from the app as it runs in Spark 2.1. I first run the CreateIgniteCache program and verify, using a DB tool, that the cache and table are

Re: How does CacheStore persistence actually work?

2017-07-15 Thread limabean
…ped. On Tue, Jul 11, 2017 at 8:46 PM, limabean <[hidden email]> wrote: >> Hi, >> I saw this remark on the mailing list from vkulichenko: >> Generally CacheStore is designed to be a singl

Inserting Data From Spark to Ignite

2017-07-14 Thread limabean
Hoping to get some help on how to insert data from Spark to Ignite. The test cases in the build are too trivial to help. Here is Try 3: It fails with this error at the moment: diagnostics: User class threw exception: javax.cache.CacheException: class
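Not the failing code itself, but for context a minimal sketch of the ignite-spark write path, assuming a Spring config file named example-ignite.xml and a cache named testCache (both names are placeholders):

import java.util.Arrays;
import org.apache.ignite.spark.JavaIgniteContext;
import org.apache.ignite.spark.JavaIgniteRDD;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class SparkToIgniteSketch {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(
            new SparkConf().setAppName("spark-to-ignite"));

        // Starts an Ignite client inside the Spark application.
        JavaIgniteContext<Integer, String> ic =
            new JavaIgniteContext<>(sc, "example-ignite.xml");

        JavaPairRDD<Integer, String> pairs = sc.parallelizePairs(
            Arrays.asList(new Tuple2<>(1, "one"), new Tuple2<>(2, "two")));

        // Writes the pair RDD into the named Ignite cache.
        JavaIgniteRDD<Integer, String> igniteRdd = ic.fromCache("testCache");
        igniteRdd.savePairs(pairs);
    }
}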

How does CacheStore persistence actually work?

2017-07-11 Thread limabean
Hi, I saw this remark on the mailing list from vkulichenko: >> Generally CacheStore is designed to be a single store shared between all nodes. I want to develop my own CacheStore implementation for a data store. The data store will have a "contact point" or client running on each Ignite node
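A skeletal sketch of what such an implementation might look like, assuming the CacheStoreAdapter base class; a ConcurrentHashMap stands in here for the per-node "contact point" client of the real data store:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import javax.cache.Cache;
import org.apache.ignite.cache.store.CacheStoreAdapter;

public class ContactPointCacheStore extends CacheStoreAdapter<Long, String> {
    // Stand-in for the node-local client of the external data store.
    private static final ConcurrentMap<Long, String> BACKEND = new ConcurrentHashMap<>();

    @Override public String load(Long key) {
        return BACKEND.get(key); // read-through path
    }

    @Override public void write(Cache.Entry<? extends Long, ? extends String> entry) {
        BACKEND.put(entry.getKey(), entry.getValue()); // write-through path
    }

    @Override public void delete(Object key) {
        BACKEND.remove(key);
    }
}

Each node would then register the store via CacheConfiguration.setCacheStoreFactory(FactoryBuilder.factoryOf(ContactPointCacheStore.class)) together with setReadThrough(true) and setWriteThrough(true), which is how the store ends up instantiated on every node that holds the cache.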

Re: Data eviction to disk

2017-07-09 Thread limabean
Hi Userx, You might need to implement your own code to back Ignite with disk, but here are examples, discussions, and documentation around that topic that you can review: https://github.com/apache/ignite/tree/master/examples/src/main/java-lgpl/org/apache/ignite/examples/datagrid/store/hibernate

Re: listening for events on transactions

2016-06-06 Thread limabean
Hi Alexey, I did poke around with continuous queries before. Based on your recommendation I will take another look to see if they fit my architecture pattern. Transaction listeners/messages are a common pattern in other technologies. Here is a suggestion of how Ignite might evolve to be
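For reference, a minimal continuous-query sketch of the pattern being discussed: the local listener fires on each cache update and is a natural place to forward changes to an external system such as Kafka (cache name and key/value types are assumptions):

import javax.cache.event.CacheEntryEvent;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;

public class CacheChangeListener {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, String> cache = ignite.getOrCreateCache("txCache");

            ContinuousQuery<Long, String> qry = new ContinuousQuery<>();

            // Invoked on this node for each update; an external publisher
            // (e.g. to Kafka) could be called from here.
            qry.setLocalListener(events -> {
                for (CacheEntryEvent<? extends Long, ? extends String> e : events)
                    System.out.println("Changed: " + e.getKey() + " -> " + e.getValue());
            });

            // The returned cursor must stay open for as long as you want to listen.
            cache.query(qry);

            cache.put(1L, "hello"); // triggers the listener
        }
    }
}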

Re: listening for events on transactions

2016-06-06 Thread limabean
Hey Denis, Thank you for this suggestion. I plan to take a serious look at what you suggest, to see if it will work for me, and will let you know.

Re: listening for events on transactions

2016-06-01 Thread limabean
Hi Alexey, Thank you for the clarification. My final goal is to notify other processes, unrelated to the grid application, about changes to the data caches. For example, it would be nice to have a Kafka publisher registered as a transaction listener, and then, when it gets transaction events,

clarification on how to start transactions only on servers

2016-05-31 Thread limabean
I was reading this thread: http://apache-ignite-users.70518.x6.nabble.com/CacheStore-handles-persistence-in-client-node-for-transactional-cache-td3428.html#a3435 and found it confusing. This line in particular from the post stood out: >> By default TRANSACTIONAL cache invokes store

listening for events on transactions

2016-05-31 Thread limabean
Hi, The Ignite documentation implies that listeners can be set on transactions: "IgniteTransactions interface contains functionality for starting and completing transactions, as well as subscribing listeners or getting metrics."
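The starting/completing part of that quote is straightforward; a minimal sketch of it is below (the listener-subscription part is exactly what this thread is asking about, so it is not shown). Cache and key names are assumptions:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

public class TxSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Transactions require a TRANSACTIONAL (not ATOMIC) cache.
            CacheConfiguration<String, Integer> ccfg =
                new CacheConfiguration<>("accounts");
            ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
            IgniteCache<String, Integer> cache = ignite.getOrCreateCache(ccfg);

            try (Transaction tx = ignite.transactions().txStart(
                    TransactionConcurrency.PESSIMISTIC,
                    TransactionIsolation.REPEATABLE_READ)) {
                cache.put("alice", 100);
                cache.put("bob", 200);
                tx.commit(); // rolls back automatically if close() is reached without commit
            }
        }
    }
}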

Re: Using SQL to query Object field stored in Cache ?

2016-05-31 Thread limabean
Hi Alexei, I wanted to get back to you on this. Thank you for the detailed examples; they did help. Based on your recommendation, I ended up using the first approach, where a type field was added and a separate field stores the value, depending on the type of data stored. This
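A hedged reconstruction of the kind of value class that approach implies; the field names are assumptions, the idea is a type discriminator plus per-type value fields that SQL can filter on:

import org.apache.ignite.cache.query.annotations.QuerySqlField;

public class FlexibleValue {
    // Discriminator saying which of the value fields below is populated.
    @QuerySqlField(index = true)
    private String type;

    @QuerySqlField
    private Long longValue;     // used when type = 'LONG'

    @QuerySqlField
    private String stringValue; // used when type = 'STRING'

    public FlexibleValue(String type, Long longValue, String stringValue) {
        this.type = type;
        this.longValue = longValue;
        this.stringValue = stringValue;
    }
}

A query then selects the right field by type, e.g. SELECT stringValue FROM FlexibleValue WHERE type = 'STRING'.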