Re: The system cache size was slowly increased

2018-09-06 Thread Justin Ji
Who can give me some advice?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Connection reset by peer

2018-09-06 Thread Justin Ji
I also encountered this problem before, while doing performance testing.
The reason I got the exception was that the client's CPU was at 100%, so
the connection was occasionally closed.






Re: how does Ignite Client restart ContinuousQuery when Ignite cluster failed and restart

2018-09-06 Thread Hansen Qin
It is a Map storing a snapshot of the Ignite cache data. You can see it defined
in my code.

// NOTE: the archive stripped the original generic parameters;
// <String, Object> is used as a placeholder below.
private Map<String, Object> cacheMap = new ConcurrentHashMap<>();

public void continueQueryCache() {
    try {
        IgniteCache<String, Object> clientCache = makeIgniteCache();
        if (clientCache != null) {
            ContinuousQuery<String, Object> query = new ContinuousQuery<>();
            query.setIncludeExpired(true);

            query.setInitialQuery(new ScanQuery<>());

            // listener
            query.setLocalListener(new CacheEntryUpdatedListener<String, Object>() {
                @Override
                public void onUpdated(Iterable<CacheEntryEvent<? extends String, ? extends Object>> evts) {
                    for (CacheEntryEvent<? extends String, ? extends Object> e : evts) {
                        switch (e.getEventType()) {
                            case CREATED:
                            case UPDATED:
                                cacheMap.put(e.getKey(), e.getValue());
                                break;
                            case REMOVED:
                            case EXPIRED:
                                cacheMap.remove(e.getKey());
                                break;
                            default:
                                throw new IllegalStateException("Unknown type: " + e.getEventType());
                        }
                    }
                    LOG.info("reload [{}] objects from remote cache..", cacheMap.size());
                }
            });

            // start query
            QueryCursor<Entry<String, Object>> cursor = clientCache.query(query);
            for (Entry<String, Object> entry : cursor) {
                cacheMap.put(entry.getKey(), entry.getValue());
            }
        }
    } catch (Exception e) {
        LOG.error("continueQueryCache(): get cacheClient error", e);
    }
}
--
From: akurbanov 
Sent: Thursday, September 6, 2018 20:45
To: user 
Subject: Re: how does Ignite Client restart ContinuousQuery when Ignite cluster 
failed and restart

Hello,

Can you clarify what is "local cache" in your use-case? What are you using
as a client for Ignite?






Fwd: Query execution too long even after providing index

2018-09-06 Thread Prasad Bhalerao
Can we have an update on this?

-- Forwarded message -
From: Prasad Bhalerao 
Date: Wed, Sep 5, 2018, 11:34 AM
Subject: Re: Query execution too long even after providing index
To: 


Hi Andrey,

Can you please help me with this?

Thanks,
Prasad

On Tue, Sep 4, 2018 at 2:08 PM Prasad Bhalerao 
wrote:

>
> I tried changing SqlIndexMaxInlineSize to 32 bytes and 100 bytes using the
> cache config:
>
> ipContainerIpV4CacheCfg.setSqlIndexMaxInlineSize(32); // also tried 100
>
> But it did not improve the SQL execution time, which increases as the
> cache size grows.
>
> It is a simple range scan query. Which part of the execution process might
> take time in this case?
>
> Can you please advise?
>
> Thanks,
> Prasad
>
> On Mon, Sep 3, 2018 at 8:06 PM Andrey Mashenkov <
> andrey.mashen...@gmail.com> wrote:
>
>> Hi,
>>
>> Have you tried to increase index inlineSize? It is 10 bytes by default.
>>
>> Your indices use simple value types (Java primitives), and all columns
>> can be easily inlined.
>> It should be enough to increase inlineSize up to 32 bytes (3 longs + 1
>> int = 3*(8 /*long*/ + 1/*type code*/) + (4/*int*/ + 1/*type code*/)) to
>> inline all columns for the idx1, and up to 27 (3 longs) for idx2.
>>
>> You can try to benchmark queries with different inline sizes to find
>> optimal ratio between speedup and index size.
>>
>>
>>
>> On Mon, Sep 3, 2018 at 5:12 PM Prasad Bhalerao <
>> prasadbhalerao1...@gmail.com> wrote:
>>
>>> Hi,
>>> My cache has 1 million rows and the SQL is as follows.
>>> This SQL takes around 1.836 seconds to execute, and the time
>>> increases as I add data to this cache. Sometimes it takes more
>>> than 4 seconds.
>>>
>>> Is there any way to improve the execution time?
>>>
>>> *SQL:*
>>> SELECT id, moduleId,ipEnd, ipStart
>>> FROM IpContainerIpV4Data USE INDEX(ip_container_ipv4_idx1)
>>> WHERE subscriptionId = ?  AND moduleId = ? AND (ipStart
>>> <= ? AND ipEnd   >= ?)
>>> UNION ALL
>>> SELECT id, moduleId,ipEnd, ipStart
>>> FROM IpContainerIpV4Data USE INDEX(ip_container_ipv4_idx1)
>>> WHERE subscriptionId = ? AND moduleId = ? AND (ipStart<=
>>> ? AND ipEnd   >= ?)
>>> UNION ALL
>>> SELECT id, moduleId,ipEnd, ipStart
>>> FROM IpContainerIpV4Data USE INDEX(ip_container_ipv4_idx1)
>>> WHERE subscriptionId = ? AND moduleId = ? AND (ipStart>=
>>> ? AND ipEnd   <= ?)
>>>
>>> *Indexes are as follows:*
>>>
>>> public class IpContainerIpV4Data implements Data, 
>>> UpdatableData {
>>>
>>>   @QuerySqlField
>>>   private long id;
>>>
>>>   @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = 
>>> "ip_container_ipv4_idx1", order = 1)})
>>>   private int moduleId;
>>>
>>>   @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = 
>>> "ip_container_ipv4_idx1", order = 0),
>>>   @QuerySqlField.Group(name = "ip_container_ipv4_idx2", order = 0)})
>>>   private long subscriptionId;
>>>
>>>   @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = 
>>> "ip_container_ipv4_idx1", order = 3, descending = true),
>>>   @QuerySqlField.Group(name = "ip_container_ipv4_idx2", order = 2, 
>>> descending = true)})
>>>   private long ipEnd;
>>>
>>>   @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = 
>>> "ip_container_ipv4_idx1", order = 2),
>>>   @QuerySqlField.Group(name = "ip_container_ipv4_idx2", order = 1)})
>>>   private long ipStart;
>>>
>>> }
>>>
>>>
>>> *Execution Plan:*
>>>
>>> 2018-09-03 19:32:03,098 232176 [pub-#78%springDataNode%] INFO
>>> c.q.a.g.d.IpContainerIpV4DataGridDaoImpl - SELECT
>>> __Z0.ID AS __C0_0,
>>> __Z0.MODULEID AS __C0_1,
>>> __Z0.IPEND AS __C0_2,
>>> __Z0.IPSTART AS __C0_3
>>> FROM IP_CONTAINER_IPV4_CACHE.IPCONTAINERIPV4DATA __Z0 USE INDEX
>>> (IP_CONTAINER_IPV4_IDX1)
>>> /* IP_CONTAINER_IPV4_CACHE.IP_CONTAINER_IPV4_IDX1: SUBSCRIPTIONID =
>>> ?1
>>> AND MODULEID = ?2
>>> AND IPSTART <= ?3
>>> AND IPEND >= ?4
>>>  */
>>> WHERE ((__Z0.SUBSCRIPTIONID = ?1)
>>> AND (__Z0.MODULEID = ?2))
>>> AND ((__Z0.IPSTART <= ?3)
>>> AND (__Z0.IPEND >= ?4))
>>> 2018-09-03 19:32:03,098 232176 [pub-#78%springDataNode%] INFO
>>> c.q.a.g.d.IpContainerIpV4DataGridDaoImpl - SELECT
>>> __Z1.ID AS __C1_0,
>>> __Z1.MODULEID AS __C1_1,
>>> __Z1.IPEND AS __C1_2,
>>> __Z1.IPSTART AS __C1_3
>>> FROM IP_CONTAINER_IPV4_CACHE.IPCONTAINERIPV4DATA __Z1 USE INDEX
>>> (IP_CONTAINER_IPV4_IDX1)
>>> /* IP_CONTAINER_IPV4_CACHE.IP_CONTAINER_IPV4_IDX1: SUBSCRIPTIONID =
>>> ?5
>>> AND MODULEID = ?6
>>> AND IPSTART <= ?7
>>> AND IPEND >= ?8
>>>  */
>>> WHERE ((__Z1.SUBSCRIPTIONID = ?5)
>>> AND (__Z1.MODULEID = ?6))
>>> AND ((__Z1.IPSTART <= ?7)
>>> AND (__Z1.IPEND >= ?8))
>>> 2018-09-03 19:32:03,098 232176 [pub-#78%springDataNode%] INFO
>>> c.q.a.g.d.IpContainerIpV4DataGridDaoImpl - SELECT
>>> __Z2.ID AS __C2_0,
>>> __Z2.MODULEID AS __C2_1,
>>> __Z2.IPEND AS __C2_2,

Re: The system cache size was slowly increased

2018-09-06 Thread Justin Ji
The second question:
I run the Ignite nodes in a Docker container with the following command:
sudo -u docker docker run -v /mnt/logs/apps/ignite:/mnt/logs/apps/ignite -v
/opt/ignite/ext-libs:/opt/ignite/ext-libs -v
/opt/ignite/config:/opt/ignite/config -v
/var/lib/ignite/persistence:/var/lib/ignite/persistence --name ignite
--net=host -e
"CONFIG_URI=file:///opt/ignite/config/ignite-config-prod-us.xml" -e
"OPTION_LIBS=ignite-zookeeper,ignite-indexing,ignite-log4j2,ignite-rest-http"
-e "JVM_OPTS=-Xms2g -Xmx2g -XX:+AlwaysPreTouch -XX:+UseG1GC
-XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/mnt/heapdump/ignite
-XX:+ExitOnOutOfMemoryError -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
-XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10
-XX:GCLogFileSize=100M -Xloggc:/mnt/logs/apps/ignite/gc.log" -e
"EXTERNAL_LIBS=http://www.*.jar" -d apacheignite/ignite

In the command, we can see that the JVM heap size is 2 GB, but the Docker
container consumes more than 4 GB:
[jisen@w2_s_ignite_003 ~]$ sudo docker stats ignite
CONTAINER   CPU %   MEM USAGE / LIMITMEM %  
NET I/O BLOCK I/O   PIDS
ignite  0.35%   4.119GiB / 7.45GiB   55.29% 
0B / 0B 979kB / 250GB   114

So I want to know why the Ignite container consumes so much memory.
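One plausible accounting for part of the gap (my own sketch, assuming Ignite 2.x defaults, where the default off-heap data region is capped at roughly 20% of physical RAM):

```java
class MemoryBudget {
    static final double HEAP_GIB = 2.0;  // -Xmx2g from the command above
    static final double RAM_GIB = 7.45;  // container memory limit shown by docker stats

    // Ignite 2.x default data region max size: ~20% of physical RAM, off-heap.
    static double defaultRegionGib() {
        return 0.20 * RAM_GIB;
    }

    public static void main(String[] args) {
        double estimate = HEAP_GIB + defaultRegionGib();
        System.out.printf("heap + default off-heap region ~= %.2f GiB%n", estimate);
        // The remainder of the observed 4.1 GiB would be metaspace, thread
        // stacks, GC structures, and direct/checkpoint buffers.
    }
}
```
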





The system cache size was slowly increased

2018-09-06 Thread Justin Ji
Hi all -

We use Ignite in our production environment, but I found that the system
cache increased slowly and was never reclaimed. When the free system memory
dropped below 200 MB, the node seemed to stop working and our system could
not get any response from the server nodes. The image below is our server's
monitoring data:
 

Our server nodes configuration is:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/util
           http://www.springframework.org/schema/util/spring-util.xsd">

[the body of the configuration XML was stripped by the archive]

</beans>
and the client nodes cache configuration is:
TcpCommunicationSpi communicationSpi =
DefaultIgniteConfiguration.getTcpCommunicationSpi(ignitePort);
cfg.setCommunicationSpi(communicationSpi);

// device cache configuration
// BinaryObject, i.e. com.tuya.athena.ignite.domain.DeviceStatusIgniteVO
CacheConfiguration cacheCfg = new CacheConfiguration<>();
cacheCfg.setName("device_status");
// partitioned storage
cacheCfg.setCacheMode(CacheMode.PARTITIONED);
// backup count
cacheCfg.setBackups(1);
cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(DeviceStatusCacheStore.class));
cacheCfg.setWriteThrough(true);
cacheCfg.setWriteBehindEnabled(true);
// flush every minute
cacheCfg.setWriteBehindFlushFrequency(60 * 1000);
cacheCfg.setWriteBehindBatchSize(1024);
cacheCfg.setStoreKeepBinary(true);

cfg.setCacheConfiguration(cacheCfg);

ignite = Ignition.getOrStart(cfg);
ignite.cluster().active(true);

Is there anything inappropriate in my configuration? Looking forward to
your reply.




Re: Need IgniteConfiguration and XML at the same time? IgniteStormStreamer?

2018-09-06 Thread Saikat Maitra
Hi,

Yes, you can connect to a remote Ignite node (running locally on some 475XX
port, or on another host in the network) using IgniteClient. Please take a
look at the example available here:

https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/client/ClientPutGetExample.java

Regards
Saikat

On Thu, Sep 6, 2018 at 1:27 AM, monstereo  wrote:

> I thought that I could connect to a specific Ignite node by setting
> userAttributes in the XML configuration.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: SEVERE: Failed to resolve default logging config file: config/java.util.logging.properties

2018-09-06 Thread monstereo
I have solved it.
Thanks..





Re: Batch insert into ignite using jdbc thin driver taking long time

2018-09-06 Thread Sriveena Mattaparthi
Hi Ilya,

It did compile... but it is not inserting records into Ignite at all.

I waited 20 minutes for it to insert, then terminated it; there are no 
exceptions in the Ignite log.

Could you please suggest the right syntax for enabling streaming with the JDBC 
thin driver? My key concern is performance: inserting 1 million records with 
90 columns each should complete in less than 30 seconds.
Is that possible with Ignite?

Thanks & Regards,
Sriveena

From: Ilya Kasnacheev 
Sent: 06 September 2018 17:24:13
To: user@ignite.apache.org
Subject: Re: Batch insert into ignite using jdbc thin driver taking long time

Hello!

I'm not sure but it doesn't look right to me. Does it compile?

Note that you also need to SET STREAMING OFF after data load is complete.
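The combined pattern can be sketched as follows (a minimal illustration, not from the thread: the table, columns, and host are hypothetical placeholders):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

class StreamingBatchLoad {
    // Hypothetical endpoint and table; replace with your own.
    static final String URL = "jdbc:ignite:thin://127.0.0.1/";

    static void load(Object[][] rows) throws Exception {
        try (Connection conn = DriverManager.getConnection(URL)) {
            try (Statement s = conn.createStatement()) {
                // route the following INSERTs through the data streamer
                s.execute("SET STREAMING ON");
            }
            try (PreparedStatement ps =
                     conn.prepareStatement("INSERT INTO my_table (id, val) VALUES (?, ?)")) {
                for (Object[] r : rows) {
                    ps.setObject(1, r[0]);
                    ps.setObject(2, r[1]);
                    ps.addBatch();
                }
                ps.executeBatch();
            }
            try (Statement s = conn.createStatement()) {
                // flush any buffered data before closing the connection
                s.execute("SET STREAMING OFF");
            }
        }
    }
}
```
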


--
Ilya Kasnacheev


Thu, Sep 6, 2018 at 14:42, Sriveena Mattaparthi 
mailto:sriveena.mattapar...@ekaplus.com>>:

Thanks Ilya. Will try using together.



Please correct if the below statement work.

getConnection().createStatement().execute("SET STREAMING ON");



Regards,

Sriveena



From: Ilya Kasnacheev 
[mailto:ilya.kasnach...@gmail.com]
Sent: Thursday, September 06, 2018 3:28 PM
To: user@ignite.apache.org
Subject: Re: Batch insert into ignite using jdbc thin driver taking long time



Hello!



I don't see why not. Have you tried? Any difference in run time?



Regards,

--

Ilya Kasnacheev





Thu, Sep 6, 2018 at 6:20, Sriveena Mattaparthi 
mailto:sriveena.mattapar...@ekaplus.com>>:

Hi Ilya,



Can we combine both of them together? I mean:

1. SET STREAMING ON;

2.   JDBC thin driver batch insert (pstmt.executeBatch())



Thanks & Regards,
Sriveena



From: Ilya Kasnacheev 
[mailto:ilya.kasnach...@gmail.com]
Sent: Wednesday, September 05, 2018 7:02 PM
To: user@ignite.apache.org
Subject: Re: Batch insert into ignite using jdbc thin driver taking long time



Hello!



Have you tried streaming mode?



https://apacheignite-sql.readme.io/docs/set



Regards,

--

Ilya Kasnacheev





Wed, Sep 5, 2018 at 15:44, Sriveena Mattaparthi 
mailto:sriveena.mattapar...@ekaplus.com>>:

Hi,



I am trying to batch insert 30 records into an Ignite cache (on a remote server) 
using the JDBC thin driver.

It is taking nearly 4 minutes to complete this operation. Please advise.



Thanks & Regards,

Sriveena



“Confidentiality Notice: The contents of this email message and any attachments 
are intended solely for the addressee(s) and may contain confidential and/or 
privileged information and may be legally protected from disclosure. If you are 
not the intended recipient of this message or their agent, or if this message 
has been addressed to you in error, please immediately alert the sender by 
reply email and then delete this message and any attachments. If you are not 
the intended recipient, you are hereby notified that any use, dissemination, 
copying, or storage of this message or its attachments is strictly prohibited.”


Re: Instant data type mapping to SQLServer through Pojo store

2018-09-06 Thread Ilya Kasnacheev
Hello!

It seems to me that the MSSQL driver lacks explicit support for
java.time.Instant, so that's on the MSSQL end. You could use a blob cache
store instead.

Regards,
-- 
Ilya Kasnacheev
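One workaround sketch (my own illustration, not from the thread: persist the Instant through a String companion field that the driver can map as VARCHAR; the field names mirror the original post):

```java
import java.time.Instant;

class IndexMeta {
    // Not mapped to the store directly, since the MSSQL driver cannot convert it.
    private transient Instant indexCreated = Instant.now();

    // Map this field as java.lang.String -> VARCHAR in the POJO store instead.
    private String indexCreatedStr;

    // ISO-8601 text; Instant.toString()/parse() round-trip exactly.
    void beforeStore() { indexCreatedStr = indexCreated.toString(); }
    void afterLoad()   { indexCreated = Instant.parse(indexCreatedStr); }

    Instant getIndexCreated()   { return indexCreated; }
    String getIndexCreatedStr() { return indexCreatedStr; }

    public static void main(String[] args) {
        IndexMeta m = new IndexMeta();
        m.beforeStore();
        System.out.println(m.getIndexCreatedStr());
    }
}
```
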


Thu, Sep 6, 2018 at 18:48, michal23849 :

> Hi,
>
> I have a Class with Instant data type:
>
> private Instant indexCreated;
>
>
> Then I map it to VARCHAR in PojoStore:
>
> <bean class="org.apache.ignite.cache.store.jdbc.JdbcTypeField">
>     <property name="databaseFieldType">
>         <util:constant static-field="java.sql.Types.VARCHAR"/>
>     </property>
>     <property name="databaseFieldName" value="indexCreated" />
>     <property name="javaFieldType" value="java.time.Instant" />
>     <property name="javaFieldName" value="indexCreated" />
> </bean>
>
> And then I am getting the error from SQLServer JDBC driver:
> 2018-09-03T11:48:36,737 ERROR o.a.i.i.p.c.s.GridCacheWriteBehindStore
> [flusher-0-#46] Unable to update underlying store: CacheJdbcPojoStore []
> javax.cache.CacheException: Failed to set statement parameter name:
> indexCreated
> at
>
> org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.fillParameter(CacheAbstractJdbcStore.java:1391)
> ~[ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.fillValueParameters(CacheAbstractJdbcStore.java:1443)
> ~[ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.writeAll(CacheAbstractJdbcStore.java:1102)
> ~[ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore.updateStore(GridCacheWriteBehindStore.java:816)
> [ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore.applyBatch(GridCacheWriteBehindStore.java:726)
> [ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore.access$2400(GridCacheWriteBehindStore.java:76)
> [ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore$Flusher.flushCacheCoalescing(GridCacheWriteBehindStore.java:1147)
> [ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore$Flusher.body(GridCacheWriteBehindStore.java:1018)
> [ignite-core-2.6.0.jar:2.6.0]
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> [ignite-core-2.6.0.jar:2.6.0]
> at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
> Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: The conversion
> from UNKNOWN to UNKNOWN is unsupported.
> at
>
> com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:228)
> ~[mssql-jdbc-6.4.0.jre8.jar:?]
> at
>
> com.microsoft.sqlserver.jdbc.DataTypes.throwConversionError(DataTypes.java:1647)
> ~[mssql-jdbc-6.4.0.jre8.jar:?]
> at
>
> com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.setObject(SQLServerPreparedStatement.java:1868)
> ~[mssql-jdbc-6.4.0.jre8.jar:?]
> at
>
> com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.setObjectNoType(SQLServerPreparedStatement.java:1695)
> ~[mssql-jdbc-6.4.0.jre8.jar:?]
> at
>
> com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.setObject(SQLServerPreparedStatement.java:1704)
> ~[mssql-jdbc-6.4.0.jre8.jar:?]
> at
>
> org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.fillParameter(CacheAbstractJdbcStore.java:1385)
> ~[ignite-core-2.6.0.jar:2.6.0]
> ... 9 more
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Instant data type mapping to SQLServer through Pojo store

2018-09-06 Thread michal23849
Hi,

I have a Class with Instant data type:

private Instant indexCreated;


Then I map it to VARCHAR in PojoStore:

<bean class="org.apache.ignite.cache.store.jdbc.JdbcTypeField">
    <property name="databaseFieldType">
        <util:constant static-field="java.sql.Types.VARCHAR"/>
    </property>
    <property name="databaseFieldName" value="indexCreated" />
    <property name="javaFieldType" value="java.time.Instant" />
    <property name="javaFieldName" value="indexCreated" />
</bean>

And then I am getting the error from SQLServer JDBC driver:
2018-09-03T11:48:36,737 ERROR o.a.i.i.p.c.s.GridCacheWriteBehindStore
[flusher-0-#46] Unable to update underlying store: CacheJdbcPojoStore []
javax.cache.CacheException: Failed to set statement parameter name:
indexCreated
at
org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.fillParameter(CacheAbstractJdbcStore.java:1391)
~[ignite-core-2.6.0.jar:2.6.0]
at
org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.fillValueParameters(CacheAbstractJdbcStore.java:1443)
~[ignite-core-2.6.0.jar:2.6.0]
at
org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.writeAll(CacheAbstractJdbcStore.java:1102)
~[ignite-core-2.6.0.jar:2.6.0]
at
org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore.updateStore(GridCacheWriteBehindStore.java:816)
[ignite-core-2.6.0.jar:2.6.0]
at
org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore.applyBatch(GridCacheWriteBehindStore.java:726)
[ignite-core-2.6.0.jar:2.6.0]
at
org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore.access$2400(GridCacheWriteBehindStore.java:76)
[ignite-core-2.6.0.jar:2.6.0]
at
org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore$Flusher.flushCacheCoalescing(GridCacheWriteBehindStore.java:1147)
[ignite-core-2.6.0.jar:2.6.0]
at
org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore$Flusher.body(GridCacheWriteBehindStore.java:1018)
[ignite-core-2.6.0.jar:2.6.0]
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
[ignite-core-2.6.0.jar:2.6.0]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: The conversion
from UNKNOWN to UNKNOWN is unsupported.
at
com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:228)
~[mssql-jdbc-6.4.0.jre8.jar:?]
at
com.microsoft.sqlserver.jdbc.DataTypes.throwConversionError(DataTypes.java:1647)
~[mssql-jdbc-6.4.0.jre8.jar:?]
at
com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.setObject(SQLServerPreparedStatement.java:1868)
~[mssql-jdbc-6.4.0.jre8.jar:?]
at
com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.setObjectNoType(SQLServerPreparedStatement.java:1695)
~[mssql-jdbc-6.4.0.jre8.jar:?]
at
com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.setObject(SQLServerPreparedStatement.java:1704)
~[mssql-jdbc-6.4.0.jre8.jar:?]
at
org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.fillParameter(CacheAbstractJdbcStore.java:1385)
~[ignite-core-2.6.0.jar:2.6.0]
... 9 more






Re: How to create tables with JDBC, read with ODBC?

2018-09-06 Thread limabean
Although I specify lower-case public in the ODBC definition on Windows 10,
the QLIK BI application, on its ODBC connection page, forces an upper-case
"PUBLIC", as you can see in the screenshot, and as far as I can tell there
are no options to change that.

QlikOdbcPanel.png
  





Re: How to create tables with JDBC, read with ODBC?

2018-09-06 Thread David Robinson
Hi,
Here is my ODBC specification.
The schema is specified as Public, and this looks identical to the
example in the documentation:



On Thu, Sep 6, 2018 at 11:06 AM Вячеслав Коптилин 
wrote:

> Hi,
>
> > I have tried various things on the Java side to make the Public schema
> explicit, such as this:
> If I'm not mistaken, the schema can be specified as a parameter of the ODBC
> connection string.
> Please take a look at this page:
> https://apacheignite-sql.readme.io/docs/connection-string-and-dsn#section-connection-string-format
>
> Also, you can find an example here:
> https://github.com/apache/ignite/blob/master/modules/platforms/cpp/examples/odbc-example/src/odbc_example.cpp
>
> Thanks,
> S.
>
> Thu, Sep 6, 2018 at 16:59, limabean :
>
>> Scenario:
>> 64-bit ODBC driver cannot read data created from the Java Thin driver.
>>
>> Ignite 2.6.
>> Running a single node server on Centos to test this.
>>
>> First:
>> Using Intellij to remotely run the sample code from the Ignite Getting
>> started page here on SQL:
>> First Ignite SQL Application
>> https://apacheignite.readme.io/docs/getting-started
>>
>> This all works fine.  Tables created, data inserted, data read.  All as
>> expected.
>>
>> Next:
>> Using the ODBC 64-bit driver from Windows 10 to connect to the still
>> running
>> Ignite server to read the same tables (City, Person).   This does not
>> work.
>>
>> The ODBC driver appears to be able to get meta data - it gets the table
>> names from the PUBLIC schema and it understands the fields / field counts
>> in
>> each table.  However, the ODBC driver is unable to perform any select
>> operations on the tables. See the following stack trace as an example of
>> the
>> errors I am seeing:
>>
>>
>> [13:30:19]   ^-- default [initSize=256.0 MiB, maxSize=6.3 GiB,
>> persistenceEnabled=false]
>> [13:33:09,424][SEVERE][client-connector-#45][OdbcRequestHandler] Failed to
>> execute SQL query [reqId=0, req=OdbcQueryExecuteRequest [schema=PUBLIC,
>> sqlQry=SELECT COUNT(*) FROM ""PUBLIC"".CITY, timeout=0, args=[]]]
>> class org.apache.ignite.internal.processors.query.IgniteSQLException:
>> Failed
>> to parse query. Table  not found; SQL statement:
>> SELECT COUNT(*) FROM ""PUBLIC"".CITY [42102-195]
>> at
>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.prepareStatementAndCaches(IgniteH2Indexing.java:2026)
>>
>> at
>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.parseAndSplit(IgniteH2Indexing.java:1796)
>>
>> at
>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1652)
>>
>>
>> ---
>>
>> I have tried various things on the Java side to make the Public schema
>> explicit, such as this:
>> conn = DriverManager.getConnection("jdbc:ignite:thin://10.60.1.101/PUBLIC");
>>
>> // conn.setSchema("PUBLIC");
>>
>> but this does not help with the ODBC problem.  The Java stuff still works
>> fine.  Select statements in Java can be written like this and they still
>> work:
>>
>>  stmt.executeQuery("SELECT p.name, c.name " +
>>  " FROM PUBLIC.Person p, City c " +
>>  " WHERE p.city_id = c.id"))
>>
>>
>> Any advice on how this should be done (sample code?) is much appreciated.
>> Thank you.
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: How to create tables with JDBC, read with ODBC?

2018-09-06 Thread Ilya Kasnacheev
Hello!

Please try using public (lower case) as schema, since quotes force case
sensitivity.

Regards,
-- 
Ilya Kasnacheev


Thu, Sep 6, 2018 at 18:25, David Robinson :

> I have no control over the format of the query coming through the ODBC
> driver.
>
> That is done automatically, as far as I know, by the QLIK BI tool that is
> leveraging the ODBC driver to try to read data.
>
> Are you suggesting it is QLIK adding the extra quotes that is causing the
> problem with the H2 driver
> on the Ignite side?
>
> On Thu, Sep 6, 2018 at 11:07 AM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> SELECT COUNT(*) FROM ""PUBLIC"".CITY <-- I don't think you need any
>> quotes around PUBLIC.
>>
>> Regards,
>> Ilya.
>> --
>> Ilya Kasnacheev
>>
>>
>> Thu, Sep 6, 2018 at 16:59, limabean :
>>
>>> Scenario:
>>> 64-bit ODBC driver cannot read data created from the Java Thin driver.
>>>
>>> Ignite 2.6.
>>> Running a single node server on Centos to test this.
>>>
>>> First:
>>> Using Intellij to remotely run the sample code from the Ignite Getting
>>> started page here on SQL:
>>> First Ignite SQL Application
>>> https://apacheignite.readme.io/docs/getting-started
>>>
>>> This all works fine.  Tables created, data inserted, data read.  All as
>>> expected.
>>>
>>> Next:
>>> Using the ODBC 64-bit driver from Windows 10 to connect to the still
>>> running
>>> Ignite server to read the same tables (City, Person).   This does not
>>> work.
>>>
>>> The ODBC driver appears to be able to get meta data - it gets the table
>>> names from the PUBLIC schema and it understands the fields / field
>>> counts in
>>> each table.  However, the ODBC driver is unable to perform any select
>>> operations on the tables. See the following stack trace as an example of
>>> the
>>> errors I am seeing:
>>>
>>>
>>> [13:30:19]   ^-- default [initSize=256.0 MiB, maxSize=6.3 GiB,
>>> persistenceEnabled=false]
>>> [13:33:09,424][SEVERE][client-connector-#45][OdbcRequestHandler] Failed
>>> to
>>> execute SQL query [reqId=0, req=OdbcQueryExecuteRequest [schema=PUBLIC,
>>> sqlQry=SELECT COUNT(*) FROM ""PUBLIC"".CITY, timeout=0, args=[]]]
>>> class org.apache.ignite.internal.processors.query.IgniteSQLException:
>>> Failed
>>> to parse query. Table  not found; SQL statement:
>>> SELECT COUNT(*) FROM ""PUBLIC"".CITY [42102-195]
>>> at
>>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.prepareStatementAndCaches(IgniteH2Indexing.java:2026)
>>>
>>> at
>>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.parseAndSplit(IgniteH2Indexing.java:1796)
>>>
>>> at
>>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1652)
>>>
>>>
>>> ---
>>>
>>> I have tried various things on the Java side to make the Public schema
>>> explicit, such as this:
>>> conn = DriverManager.getConnection("jdbc:ignite:thin://
>>> 10.60.1.101/PUBLIC");
>>> // conn.setSchema("PUBLIC");
>>>
>>> but this does not help with the ODBC problem.  The Java stuff still works
>>> fine.  Select statements in Java can be written like this and they still
>>> work:
>>>
>>>  stmt.executeQuery("SELECT p.name, c.name " +
>>>  " FROM PUBLIC.Person p, City c " +
>>>  " WHERE p.city_id = c.id"))
>>>
>>>
>>> Any advice on how this should be done (sample code?) is much
>>> appreciated.
>>> Thank you.
>>>
>>>
>>>
>>> --
>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>
>>


Re: How to create tables with JDBC, read with ODBC?

2018-09-06 Thread David Robinson
I have no control over the format of the query coming through the ODBC
driver.

That is done automatically, as far as I know, by the QLIK BI tool that is
leveraging the ODBC driver to try to read data.

Are you suggesting it is QLIK adding the extra quotes that is causing the
problem with the H2 driver
on the Ignite side?

On Thu, Sep 6, 2018 at 11:07 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> SELECT COUNT(*) FROM ""PUBLIC"".CITY <-- I don't think you need any quotes
> around PUBLIC.
>
> Regards,
> Ilya.
> --
> Ilya Kasnacheev
>
>
> Thu, Sep 6, 2018 at 16:59, limabean :
>
>> Scenario:
>> 64-bit ODBC driver cannot read data created from the Java Thin driver.
>>
>> Ignite 2.6.
>> Running a single node server on Centos to test this.
>>
>> First:
>> Using Intellij to remotely run the sample code from the Ignite Getting
>> started page here on SQL:
>> First Ignite SQL Application
>> https://apacheignite.readme.io/docs/getting-started
>>
>> This all works fine.  Tables created, data inserted, data read.  All as
>> expected.
>>
>> Next:
>> Using the ODBC 64-bit driver from Windows 10 to connect to the still
>> running
>> Ignite server to read the same tables (City, Person).   This does not
>> work.
>>
>> The ODBC driver appears to be able to get meta data - it gets the table
>> names from the PUBLIC schema and it understands the fields / field counts
>> in
>> each table.  However, the ODBC driver is unable to perform any select
>> operations on the tables. See the following stack trace as an example of
>> the
>> errors I am seeing:
>>
>>
>> [13:30:19]   ^-- default [initSize=256.0 MiB, maxSize=6.3 GiB,
>> persistenceEnabled=false]
>> [13:33:09,424][SEVERE][client-connector-#45][OdbcRequestHandler] Failed to
>> execute SQL query [reqId=0, req=OdbcQueryExecuteRequest [schema=PUBLIC,
>> sqlQry=SELECT COUNT(*) FROM ""PUBLIC"".CITY, timeout=0, args=[]]]
>> class org.apache.ignite.internal.processors.query.IgniteSQLException:
>> Failed
>> to parse query. Table  not found; SQL statement:
>> SELECT COUNT(*) FROM ""PUBLIC"".CITY [42102-195]
>> at
>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.prepareStatementAndCaches(IgniteH2Indexing.java:2026)
>>
>> at
>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.parseAndSplit(IgniteH2Indexing.java:1796)
>>
>> at
>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1652)
>>
>>
>> ---
>>
>> I have tried various things on the Java side to make the Public schema
>> explicit, such as this:
>> conn = DriverManager.getConnection("jdbc:ignite:thin://10.60.1.101/PUBLIC");
>>
>> // conn.setSchema("PUBLIC");
>>
>> but this does not help with the ODBC problem.  The Java stuff still works
>> fine.  Select statements in Java can be written like this and they still
>> work:
>>
>>  stmt.executeQuery("SELECT p.name, c.name " +
>>  " FROM PUBLIC.Person p, City c " +
>>  " WHERE p.city_id = c.id"))
>>
>>
>> Any advice on how this should be done (sample code?) is much appreciated.
>> Thank you.
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: Fulltext matching

2018-09-06 Thread Ilya Kasnacheev
Hello!

Unfortunately, fulltext doesn't seem to have much traction, so I recommend
doing investigations on your side, possibly creating JIRA issues in the
process.

Regards,
-- 
Ilya Kasnacheev


пн, 3 сент. 2018 г. в 22:34, Courtney Robinson :

> Hi,
>
> We've got Ignite in production and decided to start using some fulltext
> matching as well.
> I've investigated and can't figure out why my queries are not matching.
>
> I construct a query entity e.g new QueryEntity(keyClass, valueClass) and
> in debug I can see it generates a list of fields
> e.g. a, b, c.a, c.b
> I then expected to be able to match on those fields that are marked as
> indexed. Everything is annotation driven. The appropriate fields have been
> annotated and appear to be detected as such
> when I inspect what gets put into the QueryEntityDescriptor. i.e. all
> expected indices and indexed fields are present.
>
> In GridLuceneIndex I see that the generated Lucene document has fields a, b
> (c.a and c.b are not included). Now a couple of questions arise:
>
> 1. Is there a way to get Ignite to index the nested fields as well so that
> c.a and c.b end up in the doc?
>
> 2. If you use a composite object as a key, its fields are extracted into
> the top level so if you have Key.a and Value.a you cannot index both since
> Key.a becomes a which collides with Value.a - can this be changed, are
> there any known reasons why it couldn't be (i.e. I'm happy to send a PR
> doing so - but I suspect the answer to this is linked to the answer to the
> first question)
>
> 3. The docs simply say you can use lucene syntax, I presume it means the
> syntax that appears in
> https://lucene.apache.org/core/2_9_4/queryparsersyntax.html is all valid
> - checking the code, that appears to be the case, as it uses
> a MultiFieldQueryParser in GridLuceneIndex. However, when I try to run a
> query such as a: - none of the indexed documents match. In debug
> mode I've enabled parser.setAllowLeadingWildcard(true); and if I do a
> simple searcher.search with * I get back the list of expected documents.
>
> What's even more odd is that I tried querying each of the 6 indexed fields as
> found in idxdFields in GridLuceneIndex and only one of them matches. For the
> other fields, typing the values exactly, as well as wildcard and other
> free-text forms, does not match.
>
> 4. I couldn't see a way to provide a custom GridLuceneIndex, I found the
> two cases where it's constructed in the code base and doesn't look like I
> can inject instances. Is it ok to construct and use a custom
> GridLuceneDirectory/IndexWriter/Searcher and so on in the same way
> GridLuceneIndex does it so I can do a custom IndexingSpi to change how
> indexing happens?
> There are a number of things I'd like to customise and from looking at the
> current impl. these things aren't injectable, I guess it's not considered a
> prime use case maybe.
>
> Yeah, the analyzer and a number of things would be handy to change.
> Ideally also want to customise how a field is indexed e.g. to be able to do
> term matches with lucene queries
>
> Looking at this impl as well it passes Integer.MAX_VALUE and pulls back
> all matches. That'll surely kill our nodes for some of the use cases we're
> considering.
> I'd also like to implement paging, the searcher API has a nice option to
> pass through a last doc it can continue from to potentially implement
> something like deep-paging.
>
> 5. If I were to do a custom IndexingSpi to make all of this happen, how do
> I get additional parameters through so that I could have paging params
> passed
>
> Ideally I could customise the indexing, searching and paging through
> standard Ignite means but I can't find any means of doing that in the
> current code and short of doing a custom IndexingSpi I think I've gone as
> far as I can debugging and could do with a few pointers of how to go about
> this.
>
> FYI, SQL isn't a great option for this part of the product, we're
> generating and compiling Java classes at runtime and generating SQL to do
> the queries is an order of magnitude more work than indexing the relatively
> few fields we need and then searching but off the bat the paging would be
> an issue as there can be several million matches to a query. Can't have
> Ignite pulling all of those into memory.
>
> Thanks in advance
>
> Courtney
>
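
On the deep-paging point in the message above: standalone Lucene supports
cursor-style paging via IndexSearcher.searchAfter, which a custom IndexingSpi
could build on. A hedged sketch (field name, page size, and the handle()
callback are illustrative; `directory` is an already-populated Lucene
Directory, and `searcher.doc` is the pre-9.x accessor):

```java
IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(directory));
Query query = new TermQuery(new Term("a", "someValue"));
ScoreDoc last = null;                        // cursor: last hit of the previous page
while (true) {
    // fetch the next page of up to 100 hits instead of Integer.MAX_VALUE at once
    TopDocs page = (last == null)
        ? searcher.search(query, 100)
        : searcher.searchAfter(last, query, 100);
    if (page.scoreDocs.length == 0)
        break;                               // no more matches
    for (ScoreDoc sd : page.scoreDocs)
        handle(searcher.doc(sd.doc));        // hypothetical per-document callback
    last = page.scoreDocs[page.scoreDocs.length - 1];
}
```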


Re: How to create tables with JDBC, read with ODBC?

2018-09-06 Thread Ilya Kasnacheev
Hello!

SELECT COUNT(*) FROM ""PUBLIC"".CITY <-- I don't think you need any quotes
around PUBLIC.

Regards,
Ilya.
-- 
Ilya Kasnacheev


чт, 6 сент. 2018 г. в 16:59, limabean :

> Scenario:
> 64-bit ODBC driver cannot read data created from the Java Thin driver.
>
> Ignite 2.6.
> Running a single node server on Centos to test this.
>
> First:
> Using Intellij to remotely run the sample code from the Ignite Getting
> started page here on SQL:
> First Ignite SQL Application
> https://apacheignite.readme.io/docs/getting-started
>
> This all works fine.  Tables created, data inserted, data read.  All as
> expected.
>
> Next:
> Using the ODBC 64-bit driver from Windows 10 to connect to the still
> running
> Ignite server to read the same tables (City, Person).   This does not work.
>
> The ODBC driver appears to be able to get meta data - it gets the table
> names from the PUBLIC schema and it understands the fields / field counts
> in
> each table.  However, the ODBC driver is unable to perform any select
> operations on the tables. See the following stack trace as an example of
> the
> errors I am seeing:
>
>
> [13:30:19]   ^-- default [initSize=256.0 MiB, maxSize=6.3 GiB,
> persistenceEnabled=false]
> [13:33:09,424][SEVERE][client-connector-#45][OdbcRequestHandler] Failed to
> execute SQL query [reqId=0, req=OdbcQueryExecuteRequest [schema=PUBLIC,
> sqlQry=SELECT COUNT(*) FROM ""PUBLIC"".CITY, timeout=0, args=[]]]
> class org.apache.ignite.internal.processors.query.IgniteSQLException:
> Failed
> to parse query. Table  not found; SQL statement:
> SELECT COUNT(*) FROM ""PUBLIC"".CITY [42102-195]
> at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.prepareStatementAndCaches(IgniteH2Indexing.java:2026)
>
> at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.parseAndSplit(IgniteH2Indexing.java:1796)
>
> at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1652)
>
>
> ---
>
> I have tried various things on the Java side to make the Public schema
> explicit, such as this:
> conn = DriverManager.getConnection("jdbc:ignite:thin://10.60.1.101/PUBLIC");
>
> // conn.setSchema("PUBLIC");
>
> but this does not help with the ODBC problem.  The Java stuff still works
> fine.  Select statements in Java can be written like this and they still
> work:
>
>  stmt.executeQuery("SELECT p.name, c.name " +
>  " FROM PUBLIC.Person p, City c " +
>  " WHERE p.city_id = c.id"))
>
>
> Any advice on how this should be done (sample code?) is much appreciated.
> Thank you.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How to create tables with JDBC, read with ODBC?

2018-09-06 Thread Вячеслав Коптилин
Hi,

> I have tried various things on the Java side to make the Public schema
explicit, such as this:
If I'm not mistaken, the schema can be specified as a parameter of the ODBC
connection string.
Please take a look at this page:
https://apacheignite-sql.readme.io/docs/connection-string-and-dsn#section-connection-string-format

Also, you can find an example here:
https://github.com/apache/ignite/blob/master/modules/platforms/cpp/examples/odbc-example/src/odbc_example.cpp

Thanks,
S.
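
For reference, a minimal connection string with the schema set explicitly
(the address is illustrative; SCHEMA is the documented parameter name):

```
DRIVER={Apache Ignite};ADDRESS=10.60.1.101:10800;SCHEMA=PUBLIC
```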

чт, 6 сент. 2018 г. в 16:59, limabean :

> Scenario:
> 64-bit ODBC driver cannot read data created from the Java Thin driver.
>
> Ignite 2.6.
> Running a single node server on Centos to test this.
>
> First:
> Using Intellij to remotely run the sample code from the Ignite Getting
> started page here on SQL:
> First Ignite SQL Application
> https://apacheignite.readme.io/docs/getting-started
>
> This all works fine.  Tables created, data inserted, data read.  All as
> expected.
>
> Next:
> Using the ODBC 64-bit driver from Windows 10 to connect to the still
> running
> Ignite server to read the same tables (City, Person).   This does not work.
>
> The ODBC driver appears to be able to get meta data - it gets the table
> names from the PUBLIC schema and it understands the fields / field counts
> in
> each table.  However, the ODBC driver is unable to perform any select
> operations on the tables. See the following stack trace as an example of
> the
> errors I am seeing:
>
>
> [13:30:19]   ^-- default [initSize=256.0 MiB, maxSize=6.3 GiB,
> persistenceEnabled=false]
> [13:33:09,424][SEVERE][client-connector-#45][OdbcRequestHandler] Failed to
> execute SQL query [reqId=0, req=OdbcQueryExecuteRequest [schema=PUBLIC,
> sqlQry=SELECT COUNT(*) FROM ""PUBLIC"".CITY, timeout=0, args=[]]]
> class org.apache.ignite.internal.processors.query.IgniteSQLException:
> Failed
> to parse query. Table  not found; SQL statement:
> SELECT COUNT(*) FROM ""PUBLIC"".CITY [42102-195]
> at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.prepareStatementAndCaches(IgniteH2Indexing.java:2026)
>
> at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.parseAndSplit(IgniteH2Indexing.java:1796)
>
> at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1652)
>
>
> ---
>
> I have tried various things on the Java side to make the Public schema
> explicit, such as this:
> conn = DriverManager.getConnection("jdbc:ignite:thin://10.60.1.101/PUBLIC");
>
> // conn.setSchema("PUBLIC");
>
> but this does not help with the ODBC problem.  The Java stuff still works
> fine.  Select statements in Java can be written like this and they still
> work:
>
>  stmt.executeQuery("SELECT p.name, c.name " +
>  " FROM PUBLIC.Person p, City c " +
>  " WHERE p.city_id = c.id"))
>
>
> Any advice on how this should be done (sample code?) is much appreciated.
> Thank you.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


How to create tables with JDBC, read with ODBC?

2018-09-06 Thread limabean
Scenario: 
64-bit ODBC driver cannot read data created from the Java Thin driver. 

Ignite 2.6. 
Running a single node server on Centos to test this. 

First:
Using Intellij to remotely run the sample code from the Ignite Getting
started page here on SQL: 
First Ignite SQL Application 
https://apacheignite.readme.io/docs/getting-started

This all works fine.  Tables created, data inserted, data read.  All as
expected. 

Next:
Using the ODBC 64-bit driver from Windows 10 to connect to the still running
Ignite server to read the same tables (City, Person).   This does not work.

The ODBC driver appears to be able to get meta data - it gets the table
names from the PUBLIC schema and it understands the fields / field counts in
each table.  However, the ODBC driver is unable to perform any select
operations on the tables. See the following stack trace as an example of the
errors I am seeing: 


[13:30:19]   ^-- default [initSize=256.0 MiB, maxSize=6.3 GiB,
persistenceEnabled=false] 
[13:33:09,424][SEVERE][client-connector-#45][OdbcRequestHandler] Failed to
execute SQL query [reqId=0, req=OdbcQueryExecuteRequest [schema=PUBLIC,
sqlQry=SELECT COUNT(*) FROM ""PUBLIC"".CITY, timeout=0, args=[]]] 
class org.apache.ignite.internal.processors.query.IgniteSQLException: Failed
to parse query. Table  not found; SQL statement: 
SELECT COUNT(*) FROM ""PUBLIC"".CITY [42102-195] 
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.prepareStatementAndCaches(IgniteH2Indexing.java:2026)
 
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.parseAndSplit(IgniteH2Indexing.java:1796)
 
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1652)
 

--- 

I have tried various things on the Java side to make the Public schema
explicit, such as this: 
conn = DriverManager.getConnection("jdbc:ignite:thin://10.60.1.101/PUBLIC"); 
// conn.setSchema("PUBLIC"); 

but this does not help with the ODBC problem.  The Java stuff still works
fine.  Select statements in Java can be written like this and they still
work:

 stmt.executeQuery("SELECT p.name, c.name " +
 " FROM PUBLIC.Person p, City c " +
 " WHERE p.city_id = c.id"))


Any advice on how this should be done (sample code?) is much appreciated. 
Thank you. 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Transition from FULL_ASYNC/PRIMARY_SYNC to FULL_SYNC

2018-09-06 Thread Dave Harvey
It is my understanding that for Ignite transactions to be ACID, we need to
have the caches configured as FULL_SYNC. [Some of the code seems to imply
that at least one of the caches in the transaction needs to be FULL_SYNC,
but that is outside the scope of my question.]

The initial load of our caches takes a long time because our
StreamReceiver is transforming the data and therefore needs to use
transactions. This phase is idempotent, and could easily be run as FULL_ASYNC.
 However, once the data is loaded, we need the guarantees associated with
FULL_SYNC.  Is there any way to accomplish that, short of adding the
ability to change this cache setting dynamically?   Is there any way to
force transactions on caches not configured as FULL_SYNC to be FULL_SYNC?


Thanks,

-DH

Disclaimer

The information contained in this communication from the sender is 
confidential. It is intended solely for use by the recipient and others 
authorized to receive it. If you are not the recipient, you are hereby notified 
that any disclosure, copying, distribution or taking action in relation of the 
contents of this information is strictly prohibited and may be unlawful.

This email has been scanned for viruses and malware, and may have been 
automatically archived by Mimecast Ltd, an innovator in Software as a Service 
(SaaS) for business. Providing a safer and more useful place for your human 
generated data. Specializing in; Security, archiving and compliance. To find 
out more visit the Mimecast website.


Re: Memory configurations for ignite client on EC2

2018-09-06 Thread akurbanov
What are the differences between linux and ec2 machines? What disks are used
on ec2 instances?

Could you also get the logs of the crashed nodes for analysis, because the
crash might be caused by different reasons.

Regards



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: how does Ignite Client restart ContinuousQuery when Ignite cluster failed and restart

2018-09-06 Thread akurbanov
Hello,

Can you clarify what "local cache" means in your use case? What are you using
as a client for Ignite?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Instant data type mapping to SQLServer through Pojo store

2018-09-06 Thread Ilya Kasnacheev
Hello!

Do you have any kind of reproducer? I have tried to add Instant to Cache
Store tests but didn't see any problems outright.

Regards,
-- 
Ilya Kasnacheev


пн, 3 сент. 2018 г. в 10:54, michal23849 :

> Hi,
>
> I have Instant data types in my Ignite data model and I want to map them to
> SQL Server.
>
> I don't have a problem with Date or Time types, but I can't map the Instant
> one.
> Is there any way to still map it (even to VARCHAR)? Or, if it is not among
> the supported data types (https://apacheignite-sql.readme.io/docs/data-types),
> will I be unable to map it through CacheJdbcPojoStoreFactory?
>
> Regards
> Michal
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: class org.apache.ignite.IgniteIllegalStateException: Ignite instance with provided name doesn't exist.

2018-09-06 Thread Ilya Kasnacheev
Hello!

You can use Thin Client, REST or SQL.

Regards,
-- 
Ilya Kasnacheev


чт, 6 сент. 2018 г. в 15:00, monstereo :

> Is there any way to get only the cache data from an Ignite node (without
> creating a new one)?
> I can get via ClientCache cache =
> igniteClient.getOrCreateCache("sampleCache");
> However, I could not iterate over it.
> I mean,  I want to:
>
> Iterator> iter =
> igniteCache.iterator();
> while(iter.hasNext()){
> System.out.println(iter.next().getValue());
> }
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
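
A sketch of how the iteration above could be done from the thin client:
ClientCache does not expose iterator(), but a ScanQuery cursor streams the
entries page by page instead. The cache name is taken from the question; the
address, the Object types, and the surrounding class are assumptions, and
this of course needs a running cluster:

```java
import javax.cache.Cache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ScanAllEntries {
    public static void main(String[] args) {
        ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
        try (IgniteClient igniteClient = Ignition.startClient(cfg)) {
            ClientCache<Object, Object> cache = igniteClient.getOrCreateCache("sampleCache");
            // ScanQuery fetches entries in pages rather than one bulk load
            try (QueryCursor<Cache.Entry<Object, Object>> cursor = cache.query(new ScanQuery<>())) {
                for (Cache.Entry<Object, Object> entry : cursor)
                    System.out.println(entry.getValue());
            }
        }
    }
}
```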


Re: Querying IgniteCache returns nothing when value class is located in a different package

2018-09-06 Thread Ilya Kasnacheev
Hello!

Yes!

However, if you use a simple name mapper (BinaryBasicNameMapper with
simpleName=true), this restriction is lifted.

Regards,

-- 
Ilya Kasnacheev


ср, 5 сент. 2018 г. в 21:26, max8795 :

> Hello,
>
> I ran into an issue where querying the IgniteCache returns nothing if the
> class is located in a different package than where I'm initializing the
> cache and running the query. For example, let's say I have the cache
> IgniteCache<KeyClass, ValueClass> inside my IgniteCacheConnector.java class
> and my package structure is as follows:
>
> *Case 1: (works)*
> If I run an SqlFieldsQuery inside IgniteCacheConnector.java, it returns the
> expected results.
> packageA
> IgniteCacheConnector.java
> ValueClass.java
> KeyClass.java
>
> *Case 2: (does not work)*
> However, If I change the package structure to the following, the same query
> returns nothing.
> packageA
> IgniteCacheConnector.java
> packageB
> ValueClass.java
> KeyClass.java
>
> *Case 3: (works)*
> I also tried the following package structure and the query works fine.
> packageA
> IgniteCacheConnector.java
> ValueClass.java
> packageB
> KeyClass.java
>
> It appears that the ValueClass (but not the KeyClass) must be in the same
> package as where I'm initializing the cache and running the query. Is this
> the expected behaviour?
>
> Thanks,
>
> Max
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Regards the JDBC Read-Through function

2018-09-06 Thread Ilya Kasnacheev
Hello!

First of all, you can implement only javax.cache.integration.CacheLoader (i.e. read-through without write-through).

And there you can use stored procedures or specific SQL.

Ignite includes two JDBC cache stores but you are not limited to those.

Regards,
-- 
Ilya Kasnacheev


чт, 6 сент. 2018 г. в 9:28, aa...@tophold.com :

> Hi Igniter,
>
> Is there any existing component that implements a read-through-only cache
> store?
>
> And can the read-through SQL come from a stored procedure or a specific
> customized SQL statement, as it may join different tables?
>
>
> Thanks for your time!
>
>
> Regards
> Aaron
>


Re: class org.apache.ignite.IgniteIllegalStateException: Ignite instance with provided name doesn't exist.

2018-09-06 Thread monstereo
Is there any way to get only the cache data from an Ignite node (without
creating a new one)?
I can get it via ClientCache cache = 
igniteClient.getOrCreateCache("sampleCache");
However, I could not iterate over it. 
I mean,  I want to:

Iterator> iter =
igniteCache.iterator();
while(iter.hasNext()){
System.out.println(iter.next().getValue());
}



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Connection reset by peer

2018-09-06 Thread Ilya Kasnacheev
Hello!

Looks like some network problems, such as forced network timeout in case of
inactivity.

Regards,
-- 
Ilya Kasnacheev


чт, 6 сент. 2018 г. в 5:14, Jeff Jiao :

> Hi guys,
>
> we are using Ignite 2.3.0
> we have a Ignite cluster in production which has 4 server nodes, recently
> we
> found that Ignite throws "Connection reset by peer" occasionally after some
> complex query, we know it is caused by connection closed while socket
> reading/writing, but why it closed?
>
> can you observe any abnormal info from our log below?
> Ignite server and client almost threw the exception at the same time, and
> this exception occurs only 3 seconds after we issued the query. i see
> bytesSent and Rcvd is very big, don't know whether it is related..
>
> Thanks.
>
>
>
> Server log
>
> 2018-09-06_00:28:13.996 [ERROR]
> [grid-nio-worker-tcp-comm-23-#192%PROD_IDEA_default_SZ_NewCluster%]
> [o.a.i.s.c.tcp.TcpCommunicationSpi] Failed to process selec
> tor key [ses=GridSelectorNioSessionImpl [worker=DirectNioClientWorker
> [super=AbstractNioClientWorker [idx=23, bytesRcvd=10166800,
> bytesSent=27235902413, bytesR
> cvd0=730, bytesSent0=119379534, select=true, super=GridWorker
> [name=grid-nio-worker-tcp-comm-23,
> igniteInstanceName=PROD_IDEA_default_SZ_NewCluster, finished=false,
> hashCode=1200501937, interrupted=false,
>
> runner=grid-nio-worker-tcp-comm-23-#192%PROD_IDEA_default_SZ_NewCluster%]]],
> writeBuf=java.nio.DirectByteBuffer[pos=4786 lim=32768 cap=32768],
> readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
> inRecovery=GridNioRecoveryDescriptor [acked=2784, resendCnt=0, rcvCnt=2261,
> sentCnt=2805, reserved=true, lastAck=2240, nodeLeft=false,
> node=TcpDiscoveryNode [id=9770d3e3-83e8-498c-a10b-c0bb991cbd60,
> addrs=[10.42.223.207, 127.0.0.1], sockAddrs=[/10.42.223.207:0,
> /127.0.0.1:0], discPort=0, order=335, intOrder=179,
> lastExchangeTime=1536094829004, loc=false,
> ver=2.3.0#20171028-sha1:8add7fd5,
> isClient=true], connected=true, connectCnt=0, queueLimit=4096,
> reserveCnt=1,
> pairedConnections=false], outRecovery=GridNioRecoveryDescriptor
> [acked=2784,
> resendCnt=0, rcvCnt=2261, sentCnt=2805, reserved=true, lastAck=2240,
> nodeLeft=false, node=TcpDiscoveryNode
> [id=9770d3e3-83e8-498c-a10b-c0bb991cbd60, addrs=[10.42.223.207, 127.0.0.1],
> sockAddrs=[/10.42.223.207:0, /127.0.0.1:0], discPort=0, order=335,
> intOrder=179, lastExchangeTime=1536094829004, loc=false,
> ver=2.3.0#20171028-sha1:8add7fd5, isClient=true], connected=true,
> connectCnt=0, queueLimit=4096, reserveCnt=1, pairedConnections=false],
> super=GridNioSessionImpl [locAddr=/26.2.17.163:47100,
> rmtAddr=/26.2.17.15:37836, createTime=1536094833590, closeTime=0,
> bytesSent=5894311985, bytesRcvd=2541962, bytesSent0=119379534,
> bytesRcvd0=730, sndSchedTime=1536163292987, lastSndTime=1536164892990,
> lastRcvTime=1536164892980, readsPaused=false,
> filterChain=FilterChain[filters=[GridNioCodecFilter
> [parser=o.a.i.i.util.nio.GridDirectParser@40ad221b, directMode=true],
> GridConnectionBytesVerifyFilter], accepted=true]]]
> java.io.IOException: Connection reset by peer
> at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
> at sun.nio.ch.IOUtil.read(IOUtil.java:192)
> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
> at
>
> org.apache.ignite.internal.util.nio.GridNioServer$DirectNioClientWorker.processRead(GridNioServer.java:1233)
> at
>
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2272)
> at
>
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2048)
> at
>
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1717)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at java.lang.Thread.run(Thread.java:748)
> 2018-09-06_00:28:13.997 [WARN ]
> [grid-nio-worker-tcp-comm-23-#192%PROD_IDEA_default_SZ_NewCluster%]
> [o.a.i.s.c.tcp.TcpCommunicationSpi] Closing NIO session bec
> ause of unhandled exception [cls=class o.a.i.i.util.nio.GridNioException,
> msg=Connection reset by peer]
>
>
>
> Client log
>
> 2018-09-06_00:28:13.995 [ERROR]
> [grid-nio-worker-tcp-comm-5-#181%PROD_IDEA_default_SZ_NewCluster%]
> [o.a.i.s.c.tcp.TcpCommunicationSpi] Failed to process select
> or key [ses=GridSelectorNioSessionImpl [worker=DirectNioClientWorker
> [super=AbstractNioClientWorker [idx=5, bytesRcvd=5830373027,
> bytesSent=2485475, bytesRcvd0
> =233844279, bytesSent0=9333, select=true, super=GridWorker
> [name=grid-nio-worker-tcp-comm-5,
> igniteInstanceName=PROD_IDEA_default_SZ_NewCluster, finished=false
> , hashCode=918657525, interrupted=false,
> 

Re: Batch insert into ignite using jdbc thin driver taking long time

2018-09-06 Thread Ilya Kasnacheev
Hello!

I'm not sure but it doesn't look right to me. Does it compile?

Note that you also need to SET STREAMING OFF after data load is complete.
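
Spelled out, the pattern looks like this (table and values are illustrative):

```sql
SET STREAMING ON;
-- bulk INSERTs run while streaming is enabled; writes are batched
-- and flushed to the cluster asynchronously
INSERT INTO City (id, name) VALUES (1, 'Forest Hill');
-- ... many more rows ...
SET STREAMING OFF; -- flushes buffered rows; run this before querying the data back
```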


-- 
Ilya Kasnacheev


чт, 6 сент. 2018 г. в 14:42, Sriveena Mattaparthi <
sriveena.mattapar...@ekaplus.com>:

> Thanks Ilya. Will try using together.
>
>
>
> Please correct if the below statement work.
>
> getConnection().createStatement().execute("SET STREAMING ON");
>
>
>
> Regards,
>
> Sriveena
>
>
>
> *From:* Ilya Kasnacheev [mailto:ilya.kasnach...@gmail.com]
> *Sent:* Thursday, September 06, 2018 3:28 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Batch insert into ignite using jdbc thin driver taking
> long time
>
>
>
> Hello!
>
>
>
> I don't see why not. Have you tried? Any difference in run time?
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
> чт, 6 сент. 2018 г. в 6:20, Sriveena Mattaparthi <
> sriveena.mattapar...@ekaplus.com>:
>
> Hi Ilya,
>
>
>
> Can we combine both of them together? I mean:
>
> 1. SET STREAMING ON;
>
> 2.   JDBC thin driver batch insert (pstmt.executeBatch())
>
>
>
> Thanks & Regards,
> Sriveena
>
>
>
> *From:* Ilya Kasnacheev [mailto:ilya.kasnach...@gmail.com]
> *Sent:* Wednesday, September 05, 2018 7:02 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Batch insert into ignite using jdbc thin driver taking
> long time
>
>
>
> Hello!
>
>
>
> Have you tried streaming mode?
>
>
>
> https://apacheignite-sql.readme.io/docs/set
> 
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
> ср, 5 сент. 2018 г. в 15:44, Sriveena Mattaparthi <
> sriveena.mattapar...@ekaplus.com>:
>
> Hi,
>
>
>
> I am trying to batch insert 30 records into the Ignite cache (on a remote
> server) using the JDBC thin driver.
>
> It is taking nearly 4mins to complete this operation. Please advise.
>
>
>
> Thanks & Regards,
>
> Sriveena
>
>
>
> “Confidentiality Notice: The contents of this email message and any
> attachments are intended solely for the addressee(s) and may contain
> confidential and/or privileged information and may be legally protected
> from disclosure. If you are not the intended recipient of this message or
> their agent, or if this message has been addressed to you in error, please
> immediately alert the sender by reply email and then delete this message
> and any attachments. If you are not the intended recipient, you are hereby
> notified that any use, dissemination, copying, or storage of this message
> or its attachments is strictly prohibited.”
>


Re: class org.apache.ignite.IgniteIllegalStateException: Ignite instance with provided name doesn't exist.

2018-09-06 Thread Ilya Kasnacheev
Hello!

You can only access Apache Ignite instances launched in the same JVM with
Ignition.ignite(). Come to think of it, how would you gain an Ignite object
that is in a different process?

Regards,
-- 
Ilya Kasnacheev


чт, 6 сент. 2018 г. в 10:44, monstereo :

> Note that I will work in production.
>
> Here is the igniteConfig.xml
> <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>     <property name="igniteInstanceName" value="sample"/>
>     <!-- other settings left at defaults -->
> </bean>
>
> I have started this node on terminal: here is the log
>
> 2018-09-06 10:39:22 INFO  IgniteKernal%as:95 -
>
> >>>
> +--+
> >>> Ignite ver. 2.5.0#20180523-sha1:86e110c750a340dc9be2d3964113
> >>>
> +--+
> >>> OS name: Linux 4.4.0-134-generic amd64
> >>> CPU(s): 4
> >>> Heap: 1.3GB
> >>> VM name: 15276@ubuntu
> >>> Ignite instance name: sample
>
>
> Now on the ide: I have written
>
> public static void main(String[] args) {
> Ignite node  = Ignition.ignite("sample");
> slf4jLogger.info("\n\n" + node.toString() + "\n\n");
> }
>
> but it gives me error:
> Exception in thread "main" class
> org.apache.ignite.IgniteIllegalStateException: Ignite instance with
> provided
> name doesn't exist. Did you call Ignition.start(..) to start an Ignite
> instance? [name=sample]
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


RE: Batch insert into ignite using jdbc thin driver taking long time

2018-09-06 Thread Sriveena Mattaparthi
Thanks Ilya. Will try using together.

Please correct if the below statement work.
getConnection().createStatement().execute("SET STREAMING ON");

Regards,
Sriveena

From: Ilya Kasnacheev [mailto:ilya.kasnach...@gmail.com]
Sent: Thursday, September 06, 2018 3:28 PM
To: user@ignite.apache.org
Subject: Re: Batch insert into ignite using jdbc thin driver taking long time

Hello!

I don't see why not. Have you tried? Any difference in run time?

Regards,
--
Ilya Kasnacheev


чт, 6 сент. 2018 г. в 6:20, Sriveena Mattaparthi 
mailto:sriveena.mattapar...@ekaplus.com>>:
Hi Ilya,

Can we combine both of them together? I mean:

1. SET STREAMING ON;

2.   JDBC thin driver batch insert (pstmt.executeBatch())

Thanks & Regards,
Sriveena

From: Ilya Kasnacheev 
[mailto:ilya.kasnach...@gmail.com]
Sent: Wednesday, September 05, 2018 7:02 PM
To: user@ignite.apache.org
Subject: Re: Batch insert into ignite using jdbc thin driver taking long time

Hello!

Have you tried streaming mode?

https://apacheignite-sql.readme.io/docs/set

Regards,
--
Ilya Kasnacheev


ср, 5 сент. 2018 г. в 15:44, Sriveena Mattaparthi 
mailto:sriveena.mattapar...@ekaplus.com>>:
Hi,

I am trying to batch insert 30 records into the Ignite cache (on a remote server) 
using the JDBC thin driver.
It is taking nearly 4mins to complete this operation. Please advise.

Thanks & Regards,
Sriveena

“Confidentiality Notice: The contents of this email message and any attachments 
are intended solely for the addressee(s) and may contain confidential and/or 
privileged information and may be legally protected from disclosure. If you are 
not the intended recipient of this message or their agent, or if this message 
has been addressed to you in error, please immediately alert the sender by 
reply email and then delete this message and any attachments. If you are not 
the intended recipient, you are hereby notified that any use, dissemination, 
copying, or storage of this message or its attachments is strictly prohibited.”


Re: Batch insert into ignite using jdbc thin driver taking long time

2018-09-06 Thread Ilya Kasnacheev
Hello!

I don't see why not. Have you tried? Any difference in run time?

Regards,
-- 
Ilya Kasnacheev


Thu, Sep 6, 2018 at 6:20, Sriveena Mattaparthi <
sriveena.mattapar...@ekaplus.com>:

> Hi Ilya,
>
>
>
> Can we combine both of them, I mean:
>
> 1. SET STREAMING ON;
>
> 2. JDBC thin driver batch insert (pstmt.executeBatch())
>
>
>
> Thanks & Regards,
> Sriveena
>
>
>
> *From:* Ilya Kasnacheev [mailto:ilya.kasnach...@gmail.com]
> *Sent:* Wednesday, September 05, 2018 7:02 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Batch insert into ignite using jdbc thin driver taking
> long time
>
>
>
> Hello!
>
>
>
> Have you tried streaming mode?
>
>
>
> https://apacheignite-sql.readme.io/docs/set
> 
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
> Wed, Sep 5, 2018 at 15:44, Sriveena Mattaparthi <
> sriveena.mattapar...@ekaplus.com>:
>
> Hi,
>
>
>
> I am trying to batch insert 30 records into ignite cache(on remote
> server) using jdbc thin driver.
>
> It is taking nearly 4mins to complete this operation. Please advise.
>
>
>
> Thanks & Regards,
>
> Sriveena
>
>
>
> “Confidentiality Notice: The contents of this email message and any
> attachments are intended solely for the addressee(s) and may contain
> confidential and/or privileged information and may be legally protected
> from disclosure. If you are not the intended recipient of this message or
> their agent, or if this message has been addressed to you in error, please
> immediately alert the sender by reply email and then delete this message
> and any attachments. If you are not the intended recipient, you are hereby
> notified that any use, dissemination, copying, or storage of this message
> or its attachments is strictly prohibited.”
>
>


class org.apache.ignite.IgniteIllegalStateException: Ignite instance with provided name doesn't exist.

2018-09-06 Thread monstereo
Think of it, I will work on production.

Here is the igniteConfig.xml


// other settings are left at defaults.

I have started this node on terminal: here is the log

2018-09-06 10:39:22 INFO  IgniteKernal%as:95 - 

>>> +--+
>>> Ignite ver. 2.5.0#20180523-sha1:86e110c750a340dc9be2d3964113
>>> +--+
>>> OS name: Linux 4.4.0-134-generic amd64
>>> CPU(s): 4
>>> Heap: 1.3GB
>>> VM name: 15276@ubuntu
>>> Ignite instance name: sample


Now, in the IDE, I have written:

public static void main(String[] args) {
    Ignite node = Ignition.ignite("sample");
    slf4jLogger.info("\n\n" + node.toString() + "\n\n");
}

but it gives me error:
Exception in thread "main" class
org.apache.ignite.IgniteIllegalStateException: Ignite instance with provided
name doesn't exist. Did you call Ignition.start(..) to start an Ignite
instance? [name=sample]
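
As a side note, Ignition.ignite(name) only looks up Ignite instances started inside the same JVM, so it cannot find a node launched from a terminal. One way to reach that node from the IDE is to start a client node that joins the same cluster. A sketch, with the local instance name chosen arbitrarily and discovery left at defaults:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ConnectToSample {
    public static void main(String[] args) {
        // Ignition.ignite("sample") resolves only nodes started in this JVM.
        // To reach the terminal-started "sample" node, join its cluster as a client.
        IgniteConfiguration cfg = new IgniteConfiguration()
            .setIgniteInstanceName("sample-client") // local name, need not match the server's
            .setClientMode(true);                   // client node holds no cache data

        try (Ignite client = Ignition.start(cfg)) {
            System.out.println("Nodes in topology: " + client.cluster().nodes().size());
        }
    }
}
```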






Memory configurations for ignite client on EC2

2018-09-06 Thread Sriveena Mattaparthi
Hi,

We are using Ignite for heavy data-join operations with persistence enabled. It was
performing well on Linux machines.
But once deployed to EC2 (the production environment), any data join performed
crashes the client application.

The EC2 instance has 32GB RAM, of which 24GB is assigned to the client application.
The Ignite server is on a different EC2 instance.

Are there any recommended memory settings for ignite client node?

Thanks & Regards,
Sriveena



Regards the JDBC Read-Through function

2018-09-06 Thread aa...@tophold.com
Hi Igniter, 

Is there any existing component that implements a read-through-only cache store?

The read-through SQL would come from a stored procedure or a specific customized
query, as it may join different tables.


Thanks for your time!


Regards
Aaron
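
I am not aware of a ready-made read-through-only store for arbitrary join SQL; a common sketch is a custom CacheStoreAdapter whose load() runs the customized query (the JDBC URL, table, and column names below are hypothetical), with read-through enabled and writes left as no-ops:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;
import javax.cache.integration.CacheLoaderException;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class ReadOnlyJoinStore extends CacheStoreAdapter<Long, String> {
    // Hypothetical join query; a stored-procedure call would work the same way.
    private static final String SQL =
        "SELECT a.name FROM accounts a JOIN users u ON u.id = a.user_id WHERE a.id = ?";

    @Override public String load(Long key) {
        try (Connection c = DriverManager.getConnection("jdbc:h2:mem:db");
             PreparedStatement ps = c.prepareStatement(SQL)) {
            ps.setLong(1, key);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null; // null => cache miss stays a miss
            }
        } catch (Exception e) {
            throw new CacheLoaderException(e);
        }
    }

    // Read-through only: writes and deletes are deliberate no-ops.
    @Override public void write(Cache.Entry<? extends Long, ? extends String> e) { /* no-op */ }
    @Override public void delete(Object key) { /* no-op */ }

    static CacheConfiguration<Long, String> cacheCfg() {
        return new CacheConfiguration<Long, String>("joined")
            .setReadThrough(true)   // cache misses invoke load()
            .setWriteThrough(false) // never push changes back to the database
            .setCacheStoreFactory(FactoryBuilder.factoryOf(ReadOnlyJoinStore.class));
    }
}
```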


Re: Need IgniteConfiguration and XML at the same time? IgniteStormStreamer?

2018-09-06 Thread monstereo
I thought that I could connect to a specific Ignite node by setting
userAttributes in the XML configuration.





Re: Need IgniteConfiguration and XML at the same time? IgniteStormStreamer?

2018-09-06 Thread monstereo
Thank you for your response; your comments helped me a lot.

But there is a point about IgniteStorm: one way or another I can run the code
via the IDE (using ignite-storm). However, when I converted it there was an error,
like you said, about backward compatibility. That's why I gave up on ignite-storm
and just do it simply with cache.put(...).

May I ask how to get the instance of a specific Ignite node? (Assume that in
terminal mode Ignite runs on port 47501; when I write Ignite ignite =
Ignition.ignite(), I just want to connect to that node, not create a new one.)
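
For context: Ignition.ignite() with no arguments only returns an instance started in the same JVM, so it cannot attach to a node running in another process. A sketch of joining that node's cluster as a client instead (the localhost address and port range are assumptions):

```java
import java.util.Collections;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class JoinExistingNode {
    public static void main(String[] args) {
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        // Point discovery at the already-running node (host:port range assumed).
        ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500..47502"));

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setClientMode(true) // join as a client; do not create a new data node
            .setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));

        try (Ignite ignite = Ignition.start(cfg)) {
            ignite.cluster().forServers().nodes()
                  .forEach(n -> System.out.println("Server node: " + n.id()));
        }
    }
}
```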


