Re: Concurrent merge into operations cause critical system error on ignite 2.7 node.

2019-01-15 Thread yangjiajun
Hello.

Please see the logs.

ignite-8bdefd7a.zip
 
 


ilya.kasnacheev wrote
> Hello!
> 
> Can you provide logs?
> 
> Regards,
> -- 
> Ilya Kasnacheev
> 
> 
> Sun, 13 Jan 2019 at 18:05, yangjiajun <1371549332@qq.com>:
> 
>> Hello.
>>
>> I have an Ignite 2.7 node with persistence enabled. I test concurrent
>> merge into operations on it and find that the concurrent operations below
>> can cause a critical system error:
>> 1.Thread 1 executes "merge INTO  city2(id,name,name1)
>> VALUES(1,'1','1'),(2,'1','1'),(3,'1','1')".
>> 2.Thread 2 executes "merge INTO  city2(id,name,name1)
>> VALUES(2,'1','1'),(1,'1','1')".
>>
>> But the following concurrent operations seem to cause no problem:
>> 1.Thread 1 executes "merge INTO  city2(id,name,name1)
>> VALUES(1,'1','1'),(2,'1','1'),(3,'1','1')".
>> 2.Thread 2 executes "merge INTO  city2(id,name,name1)
>> VALUES(1,'1','1'),(2,'1','1')".
>>
>> Is this a bug?
>>
>>
>>
>>
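For anyone trying to reproduce the quoted scenario, here is a minimal
two-thread JDBC sketch; the DDL, node address, and loop count are
assumptions, not details from the report. Thread 2 lists the keys in the
reverse order of thread 1, which is the variant reported to fail.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ConcurrentMergeRepro {
    // Assumes a local node with the thin JDBC driver on the default port and
    // a table: CREATE TABLE city2(id INT PRIMARY KEY, name VARCHAR, name1 VARCHAR)
    public static void main(String[] args) {
        new Thread(() -> run("MERGE INTO city2(id,name,name1) VALUES(1,'1','1'),(2,'1','1'),(3,'1','1')")).start();
        // Reversed key order relative to thread 1 - the failing variant.
        new Thread(() -> run("MERGE INTO city2(id,name,name1) VALUES(2,'1','1'),(1,'1','1')")).start();
    }

    static void run(String sql) {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
             Statement stmt = conn.createStatement()) {
            for (int i = 0; i < 1_000; i++)
                stmt.executeUpdate(sql);
        } catch (Exception e) {
            e.printStackTrace(); // surfaces the critical error, if it reproduces
        }
    }
}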







Is there a way to allow overwrite when set streaming on?

2019-01-15 Thread yangjiajun
Hello.

We can set streaming on while inserting data into Ignite using SQL. I want to
enable data overwrite in this mode. Is it possible?

https://apacheignite-sql.readme.io/docs/set
https://apacheignite.readme.io/docs/data-streamers#section-allow-overwrite
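For comparison, the Java data streamer API exposes the overwrite flag
directly; a minimal sketch, where the config path and the cache name are
assumptions:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class OverwriteStreamerSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start("client.xml"); // hypothetical config

        try (IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer("city2")) {
            // By default the streamer skips keys that already exist;
            // this flag makes it overwrite them instead.
            streamer.allowOverwrite(true);
            streamer.addData(1, "updated");
        } // close() flushes any buffered entries
    }
}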





failure due to IGNITE_BPLUS_TREE_LOCK_RETRIES

2019-01-15 Thread mahesh76private
On 2.7, we are regularly seeing the below message and then the nodes stop. 


[16:45:04,759][SEVERE][disco-event-worker-#63][] JVM will be halted
immediately due to the failure: [failureCtx=FailureContext
[type=CRITICAL_ERROR, err=class o.a.i.IgniteCheckedException: Maximum number
of retries 1000 reached for Put operation (the tree may be corrupted).
Increase IGNITE_BPLUS_TREE_LOCK_RETRIES system property if you regularly see
this message (current value is 1000).]]


Can you please throw some light on what this error is?
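In case it helps, the property named in the message can be raised before node
startup; a minimal sketch (the value 3000 and the config path are arbitrary
examples, and note the message itself warns the tree may be corrupted, so
raising the limit only treats the symptom):

import org.apache.ignite.Ignition;

public class StartWithMoreRetries {
    public static void main(String[] args) {
        // Must be set before the node starts; equivalent to passing
        // -DIGNITE_BPLUS_TREE_LOCK_RETRIES=3000 on the JVM command line.
        System.setProperty("IGNITE_BPLUS_TREE_LOCK_RETRIES", "3000");

        Ignition.start("config/ignite.xml"); // hypothetical config path
    }
}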







Baselined node rejoining crashes other baseline nodes - Duplicate Key Error

2019-01-15 Thread mahesh76private
I have two nodes on which we have 3 partitioned tables. Indexes are also
built on these tables.

For 24 hours the caches work fine. The tables are definitely distributed
across both nodes.

Node 2 reboots due to some issue - goes out of the baseline - comes back and
rejoins the baseline. The other baseline nodes then crash, and in the logs we
see a duplicate key error:

[10:38:35,437][INFO]tcp-disco-srvr-#2[TcpDiscoverySpi] TCP discovery
accepted incoming connection [rmtAddr=/192.168.1.7, rmtPort=45102]
[10:38:35,437][INFO]tcp-disco-srvr-#2[TcpDiscoverySpi] TCP discovery
spawning a new thread for connection [rmtAddr=/192.168.1.7, rmtPort=45102]
[10:38:35,437][INFO]tcp-disco-sock-reader-#12[TcpDiscoverySpi] Started
serving remote node connection [rmtAddr=/192.168.1.7:45102, rmtPort=45102]
[10:38:35,451][INFO]tcp-disco-sock-reader-#12[TcpDiscoverySpi] Finished
serving remote node connection [rmtAddr=/192.168.1.7:45102, rmtPort=45102
[10:38:35,457][SEVERE]tcp-disco-msg-worker-#3[TcpDiscoverySpi]
TcpDiscoverSpi's message worker thread failed abnormally. Stopping the node
in order to prevent cluster wide instability.
*java.lang.IllegalStateException: Duplicate key
at org.apache.ignite.cache.QueryEntity.checkIndexes(QueryEntity.java:223)
at org.apache.ignite.cache.QueryEntity.makePatch(QueryEntity.java:174)*


Logs and configurations are attached here:
https://issues.apache.org/jira/browse/IGNITE-8728
 
Please offer any suggestions.





Re: Ignite 2.7 Client node security credentials configuration

2019-01-15 Thread garima.j
I'm using IgniteConfiguration to provide the config while starting the node
and not the XML file. 

I can't find a way to set the SecurityCredentials inside it. 





Re: Ignite 2.7 Client node security credentials configuration

2019-01-15 Thread Evgenii Zhuravlev
Hi,

When you use the thick client, it uses essentially the same configuration
file as server nodes, so you can define the credentials there.
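A minimal sketch of that, assuming the server XML path; client mode can be
forced in code so the same file can be reused:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class ThickClientStart {
    public static void main(String[] args) {
        // Reuse the same XML the server nodes use ("config/ignite.xml" is an
        // assumption); forcing client mode here avoids editing the file.
        Ignition.setClientMode(true);

        Ignite client = Ignition.start("config/ignite.xml");
        System.out.println("Joined as client: " + client.cluster().localNode().isClient());
    }
}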

Best Regards,
Evgenii

Tue, 15 Jan 2019 at 09:33, garima.j :

> Hello,
>
> While going through the documents, I read that to enable basic
> authentication, I can enable the flag in XML.
>
> How do I provide the username/password when I connect to the cluster using
> a
> thick client node at another VM?
> I need to secure my cluster with a username/password so that no other node
> can join the cluster without correct credentials.
>
> Please help.
>
>
>
>


Re: IgniteInterruptedException on cache.get in a transaction inside runnable (ignite 2.6)

2019-01-15 Thread bintisepaha
The next time it happens, I will gather the logs.

Thanks,
Binti





Ignite 2.7 Client node security credentials configuration

2019-01-15 Thread garima.j
Hello,

While going through the documents, I read that to enable basic
authentication, I can enable the flag in XML.

How do I provide the username/password when I connect to the cluster using a
thick client node at another VM? 
I need to secure my cluster with a username/password so that no other node
can join the cluster without correct credentials. 

Please help. 





Re: Failed to wait for partition map exchange on cluster activation

2019-01-15 Thread Andrey Davydov
Hello,

You can find full log there:
https://drive.google.com/file/d/1FwCjsXMw5LQJnKO0x5GNJ2w9gVsDbXlc/view?usp=sharing

I can rerun tests with additional logging settings if needed

Andrey.






On Tue, Jan 15, 2019 at 6:23 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> Can you please upload the full verbose log somewhere?
>
> Regards,
> --
> Ilya Kasnacheev
>
>
Wed, 9 Jan 2019 at 20:43, Andrey Davydov :
>
>>
>>
>> Hello,
>>
>> I found in the test logs of my project that Ignite warns about a failed
>> partition map exchange. In the test environment, 3 Ignite 2.7 server nodes
>> run in the same JVM8 on Win10, using localhost networking.
>>
>>
>>
>> 2019-01-09 20:15:27,719 [sys-#164%TestNode-2%] INFO
>> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
>> - Affinity changes applied in 10 ms.
>>
>> 2019-01-09 20:15:27,719 [sys-#163%TestNode-1%] INFO
>> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
>> - Affinity changes applied in 10 ms.
>>
>> 2019-01-09 20:15:27,724 [sys-#164%TestNode-2%] INFO
>> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
>> - Full map updating for 5 groups performed in 4 ms.
>>
>> 2019-01-09 20:15:27,724 [sys-#163%TestNode-1%] INFO
>> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
>> - Full map updating for 5 groups performed in 5 ms.
>>
>> 2019-01-09 20:15:27,725 [sys-#163%TestNode-1%] INFO
>> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
>> - Finish exchange future [startVer=AffinityTopologyVersion [topVer=3,
>> minorTopVer=1], resVer=AffinityTopologyVersion [topVer=3, minorTopVer=1],
>> err=null]
>>
>> 2019-01-09 20:15:27,725 [sys-#164%TestNode-2%] INFO
>> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
>> - Finish exchange future [startVer=AffinityTopologyVersion [topVer=3,
>> minorTopVer=1], resVer=AffinityTopologyVersion [topVer=3, minorTopVer=1],
>> err=null]
>>
>> 2019-01-09 20:15:28,710 [db-checkpoint-thread-#157%TestNode-1%] INFO
>> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
>> - Checkpoint started [checkpointId=443748a9-c1a5-4b3b-96e4-04a0862829ec,
>> startPtr=FileWALPointer [idx=0, fileOff=929726, len=31143],
>> checkpointLockWait=0ms, checkpointLockHoldTime=6ms,
>> walCpRecordFsyncDuration=248ms, pages=204, reason='node started']
>>
>> 2019-01-09 20:15:28,713 [db-checkpoint-thread-#151%TestNode-0%] INFO
>> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
>> - Checkpoint started [checkpointId=cbc928e1-4ecd-40ae-9791-c6ba20c3669b,
>> startPtr=FileWALPointer [idx=0, fileOff=929726, len=31143],
>> checkpointLockWait=0ms, checkpointLockHoldTime=8ms,
>> walCpRecordFsyncDuration=257ms, pages=204, reason='node started']
>>
>> 2019-01-09 20:15:28,715 [db-checkpoint-thread-#146%TestNode-2%] INFO
>> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
>> - Checkpoint started [checkpointId=ef4c3d02-ca01-4d67-8128-48d4dc99aabc,
>> startPtr=FileWALPointer [idx=0, fileOff=929726, len=31143],
>> checkpointLockWait=0ms, checkpointLockHoldTime=22ms,
>> walCpRecordFsyncDuration=289ms, pages=204, reason='node started']
>>
>> 2019-01-09 20:15:30,788 [db-checkpoint-thread-#157%TestNode-1%] INFO
>> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
>> - Checkpoint finished [cpId=443748a9-c1a5-4b3b-96e4-04a0862829ec,
>> pages=204, markPos=FileWALPointer [idx=0, fileOff=929726, len=31143],
>> walSegmentsCleared=0, walSegmentsCovered=[], markDuration=1103ms,
>> pagesWrite=84ms, fsync=1992ms, total=3179ms]
>>
>> 2019-01-09 20:15:30,858 [db-checkpoint-thread-#151%TestNode-0%] INFO
>> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
>> - Checkpoint finished [cpId=cbc928e1-4ecd-40ae-9791-c6ba20c3669b,
>> pages=204, markPos=FileWALPointer [idx=0, fileOff=929726, len=31143],
>> walSegmentsCleared=0, walSegmentsCovered=[], markDuration=1213ms,
>> pagesWrite=79ms, fsync=2066ms, total=3358ms]
>>
>> 2019-01-09 20:15:30,998 [db-checkpoint-thread-#146%TestNode-2%] INFO
>> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
>> - Checkpoint finished [cpId=ef4c3d02-ca01-4d67-8128-48d4dc99aabc,
>> pages=204, markPos=FileWALPointer [idx=0, fileOff=929726, len=31143],
>> walSegmentsCleared=0, walSegmentsCovered=[], markDuration=1262ms,
>> pagesWrite=79ms, fsync=2203ms, total=3544ms]
>>
>> 2019-01-09 20:15:37,510 [exchange-worker-#44%TestNode-0%] WARN
>> org.apache.ignite.internal.diagnostic:118 - Failed to wait for partition
>> map exchange [topVer=AffinityTopologyVersion [topVer=3, minorTopVer=1],
>> 

Re: Visor "cache" command hangs when client node connects.

2019-01-15 Thread John Smith
Yeah so far it works great inside DC/OS with marathon.

On Tue, 15 Jan 2019 at 08:01, Ilya Kasnacheev 
wrote:

> Hello!
>
> I think there were people on the user list who were able to open a
> Dockerized Ignite cluster to outside clients. I recommend searching the
> archives.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
Mon, 14 Jan 2019 at 21:54, John Smith :
>
>> So if it's all running inside DC/OS it works with no issues. So I'm
>> wondering what the strategy would be if external clients want to connect,
>> with Ignite either inside the container env or outside... Just REST?
>>
>> On Fri., Jan. 11, 2019, 15:00 John Smith wrote:
>>> Yeah this doesn't work on the dev environment either, because the
>>> application is running in docker on bridge mode, but the cluster is on
>>> standard VM hosts. I'm using DC/OS...
>>> Does the Mesos deployment support DC/OS? If not I can create custom
>>> marathon docker images for it...
>>>
>>> On Fri, 11 Jan 2019 at 14:12, John Smith  wrote:
>>>
And it seems to stay like that indefinitely. I let it go for 5 minutes
 and nothing has printed to the console or logs.

 On Fri, 11 Jan 2019 at 12:49, John Smith 
 wrote:

> I can confirm I just tested it. There is no stack trace. Basically the
> client connects, no errors, the cache command hangs/pauses, I disconnect
> the client and cache command completes. I'm also 100% certain the client
> works when connecting to the cluster over wi-fi. I have been able to 
> create
> caches dynamically. Query the caches etc...
>
> On Fri, 11 Jan 2019 at 12:23, John Smith 
> wrote:
>
>> That's the thing... There is none. It just seems to pause and wait.
>> The moment I close my client application it just resumes...
>>
>> But other commands like top work fine...
>>
>> On Fri, 11 Jan 2019 at 12:15, Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> If they're on the same network it is not obvious what happens here,
>>> but I have just performed the steps you have mentioned without problems.
>>>
>>> Can you collect stack traces from all nodes when this hang happens?
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> Fri, 11 Jan 2019 at 20:12, Ilya Kasnacheev <
>>> ilya.kasnach...@gmail.com>:
>>>
 Hello!

 I'm afraid that visor will try to connect to your client and will
 wait until this is successful.

 Regards,
 --
 Ilya Kasnacheev


Fri, 11 Jan 2019 at 20:01, John Smith :

> Humm maybe not. The client is running on my laptop through the
> wi-fi. But the cluster and visor are on the dev network. But the 
> client on
> my laptop is capable of joining the cluster through the wi-fi and
> processing requests no problems.
>
> On Fri, 11 Jan 2019 at 10:56, Ilya Kasnacheev <
> ilya.kasnach...@gmail.com> wrote:
>
>> Hello!
>>
>> Are you sure that your Visor node is able to connect to client
>> node via communication port? Nodes in cluster need to be able to do 
>> that,
>> which is somewhat unexpected in case of client node.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Fri, 11 Jan 2019 at 18:36, John Smith :
>>
>>> Hi, sorry if this is a double post; I tried through Nabble and I
>>> don't think it came through...
>>>
>>> So using 2.7...
>>>
>>> I have a 3 node cluster started with ignite.sh and that works
>>> perfectly fine. I'm also able to connect to the cluster with visor 
>>> and I
>>> can also run top, cache etc... commands no problem. But the issue 
>>> arises
>>> only when an external client node connects
>>> using igniteConfig.setClientMode(true);
>>>
>>> 1- Start the cluster
>>> 2- Connect with visor
>>> 3- Run cache command (prints cache details, no problem)
>>> 4- Connect client application
>>> 5- Run cache command (seems to hang, doesn't crash)
>>> 6- Disconnect client app
>>> 7- Cache command completes and prints.
>>>
>>> Cache seems to be the only command that hangs/pauses when the
>>> client is connected.
>>>
>>> The cache config, in case it's needed...
>>>
>>> <?xml version="1.0" encoding="UTF-8"?>
>>> <beans xmlns="http://www.springframework.org/schema/beans"
>>>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>>>        xmlns:util="http://www.springframework.org/schema/util"
>>>        xsi:schemaLocation="
>>>            http://www.springframework.org/schema/beans
>>>            http://www.springframework.org/schema/beans/spring-beans.xsd

Re: Unable to activate an ignite cluster with multiple hosts.

2019-01-15 Thread Mikhail
Hi,

as I can see, you created a cluster without Ignite native persistence, so it
is already active.

Activation is required only when you create a cluster with persistence:
https://apacheignite.readme.io/docs/distributed-persistent-store
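A minimal sketch of the distinction (default data region, single node; a
sketch, not a definitive setup):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ActivationSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // With native persistence enabled, a new cluster starts INACTIVE
        // and must be activated once the baseline nodes are up.
        DataStorageConfiguration storage = new DataStorageConfiguration();
        storage.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
        cfg.setDataStorageConfiguration(storage);

        Ignite ignite = Ignition.start(cfg);
        ignite.cluster().active(true); // a no-op for a pure in-memory cluster
    }
}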

Thanks,
Mike.





Re: Recovering from a data region OOM condition

2019-01-15 Thread Mikhail
Which Ignite version do you use? Could you please share a reproducer with us?





Re: Failed to wait for partition map exchange on cluster activation

2019-01-15 Thread Ilya Kasnacheev
Hello!

Can you please upload the full verbose log somewhere?

Regards,
-- 
Ilya Kasnacheev


Wed, 9 Jan 2019 at 20:43, Andrey Davydov :

>
>
> Hello,
>
> I found in the test logs of my project that Ignite warns about a failed
> partition map exchange. In the test environment, 3 Ignite 2.7 server nodes
> run in the same JVM8 on Win10, using localhost networking.
>
>
>
> 2019-01-09 20:15:27,719 [sys-#164%TestNode-2%] INFO
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
> - Affinity changes applied in 10 ms.
>
> 2019-01-09 20:15:27,719 [sys-#163%TestNode-1%] INFO
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
> - Affinity changes applied in 10 ms.
>
> 2019-01-09 20:15:27,724 [sys-#164%TestNode-2%] INFO
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
> - Full map updating for 5 groups performed in 4 ms.
>
> 2019-01-09 20:15:27,724 [sys-#163%TestNode-1%] INFO
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
> - Full map updating for 5 groups performed in 5 ms.
>
> 2019-01-09 20:15:27,725 [sys-#163%TestNode-1%] INFO
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
> - Finish exchange future [startVer=AffinityTopologyVersion [topVer=3,
> minorTopVer=1], resVer=AffinityTopologyVersion [topVer=3, minorTopVer=1],
> err=null]
>
> 2019-01-09 20:15:27,725 [sys-#164%TestNode-2%] INFO
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
> - Finish exchange future [startVer=AffinityTopologyVersion [topVer=3,
> minorTopVer=1], resVer=AffinityTopologyVersion [topVer=3, minorTopVer=1],
> err=null]
>
> 2019-01-09 20:15:28,710 [db-checkpoint-thread-#157%TestNode-1%] INFO
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
> - Checkpoint started [checkpointId=443748a9-c1a5-4b3b-96e4-04a0862829ec,
> startPtr=FileWALPointer [idx=0, fileOff=929726, len=31143],
> checkpointLockWait=0ms, checkpointLockHoldTime=6ms,
> walCpRecordFsyncDuration=248ms, pages=204, reason='node started']
>
> 2019-01-09 20:15:28,713 [db-checkpoint-thread-#151%TestNode-0%] INFO
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
> - Checkpoint started [checkpointId=cbc928e1-4ecd-40ae-9791-c6ba20c3669b,
> startPtr=FileWALPointer [idx=0, fileOff=929726, len=31143],
> checkpointLockWait=0ms, checkpointLockHoldTime=8ms,
> walCpRecordFsyncDuration=257ms, pages=204, reason='node started']
>
> 2019-01-09 20:15:28,715 [db-checkpoint-thread-#146%TestNode-2%] INFO
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
> - Checkpoint started [checkpointId=ef4c3d02-ca01-4d67-8128-48d4dc99aabc,
> startPtr=FileWALPointer [idx=0, fileOff=929726, len=31143],
> checkpointLockWait=0ms, checkpointLockHoldTime=22ms,
> walCpRecordFsyncDuration=289ms, pages=204, reason='node started']
>
> 2019-01-09 20:15:30,788 [db-checkpoint-thread-#157%TestNode-1%] INFO
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
> - Checkpoint finished [cpId=443748a9-c1a5-4b3b-96e4-04a0862829ec,
> pages=204, markPos=FileWALPointer [idx=0, fileOff=929726, len=31143],
> walSegmentsCleared=0, walSegmentsCovered=[], markDuration=1103ms,
> pagesWrite=84ms, fsync=1992ms, total=3179ms]
>
> 2019-01-09 20:15:30,858 [db-checkpoint-thread-#151%TestNode-0%] INFO
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
> - Checkpoint finished [cpId=cbc928e1-4ecd-40ae-9791-c6ba20c3669b,
> pages=204, markPos=FileWALPointer [idx=0, fileOff=929726, len=31143],
> walSegmentsCleared=0, walSegmentsCovered=[], markDuration=1213ms,
> pagesWrite=79ms, fsync=2066ms, total=3358ms]
>
> 2019-01-09 20:15:30,998 [db-checkpoint-thread-#146%TestNode-2%] INFO
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
> - Checkpoint finished [cpId=ef4c3d02-ca01-4d67-8128-48d4dc99aabc,
> pages=204, markPos=FileWALPointer [idx=0, fileOff=929726, len=31143],
> walSegmentsCleared=0, walSegmentsCovered=[], markDuration=1262ms,
> pagesWrite=79ms, fsync=2203ms, total=3544ms]
>
> 2019-01-09 20:15:37,510 [exchange-worker-#44%TestNode-0%] WARN
> org.apache.ignite.internal.diagnostic:118 - Failed to wait for partition
> map exchange [topVer=AffinityTopologyVersion [topVer=3, minorTopVer=1],
> node=454d2051-cea6-4f2c-99a7-7c5698494175]. Dumping pending objects that
> might be the cause:
>
> 2019-01-09 20:15:37,510 [exchange-worker-#44%TestNode-0%] WARN
> org.apache.ignite.internal.diagnostic:118 - Ready affinity version:
> AffinityTopologyVersion [topVer=-1, minorTopVer=0]
>
> 2019-01-09 20:15:37,515 [exchange-worker-#44%TestNode-0%] WARN
> 

Re: read from igniteRDD and write to igniteRDD

2019-01-15 Thread Mikhail
Hi Mehdi

I think first you need to read the following doc:
https://apacheignite-fs.readme.io/docs/ignitecontext-igniterdd

It describes how to properly set up an Ignite cluster and create an
IgniteRDD, and it includes examples; a short Java sketch follows.
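As a hedged sketch of the read/write flow the doc describes (the cache name,
config path, and data are assumptions):

import java.util.Arrays;

import org.apache.ignite.spark.JavaIgniteContext;
import org.apache.ignite.spark.JavaIgniteRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class IgniteRddSketch {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext("local[2]", "ignite-rdd-sketch");

        // The Spring config path and cache name are assumptions.
        JavaIgniteContext<Integer, String> ic = new JavaIgniteContext<>(sc, "config/ignite.xml");
        JavaIgniteRDD<Integer, String> rdd = ic.fromCache("myCache");

        // Write: save key/value pairs into the Ignite cache.
        rdd.savePairs(sc.parallelize(Arrays.asList(1, 2, 3))
            .mapToPair(i -> new Tuple2<>(i, "value-" + i)));

        // Read: the same RDD view reflects the cache contents.
        rdd.foreach(e -> System.out.println(e._1() + " -> " + e._2()));

        sc.stop();
    }
}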

If you still have questions after reading the documentation, please
describe your case and what you want to implement in more detail.

Thanks,
Mike.





Re: Ignite Visor cache command freezes, when client node connects.

2019-01-15 Thread Ilya Kasnacheev
Hello!

Can you collect thread dumps during freeze?

Regards,
-- 
Ilya Kasnacheev


Fri, 11 Jan 2019 at 01:23, javadevmtl :

> Hi, using 2.7.3
>
> I start my client as...
>
> TcpDiscoverySpi spi = new TcpDiscoverySpi();
> TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
> ipFinder.setAddresses(addresses.getList());
> spi.setIpFinder(ipFinder);
>
> igniteConfig.setDiscoverySpi(spi);
>
> igniteConfig.setClientMode(true);
>
> I then also create a cache dynamically as...
>
> CacheConfiguration cacheCfg = new CacheConfiguration("DJTAZZ");
>
> cacheCfg.setCacheMode(CacheMode.REPLICATED);
> this.cache =
> igniteClient.getIgniteInstance().getOrCreateCache(cacheCfg);
>
> In ignitegridvisor.sh when I run the "cache" command it seems to freeze
> until I disconnect my application.
>
>
>
>
>


Re: Again Failed to get page IO instance

2019-01-15 Thread Ilya Kasnacheev
Hello!

It's hard to say judging from your stack trace alone. Why don't you try and
see?

Regards,
-- 
Ilya Kasnacheev


Tue, 8 Jan 2019 at 01:24, wengyao04 :

> Hi, we have run Ignite 2.6.0 and have 8 servers in our baseline
> topology.
> After we bounce the servers, our client starts writing updates using the
> invoke method, but we get this error:
> Caused by: java.lang.IllegalStateException: Item not found: 9
> at
>
> org.apache.ignite.internal.processors.cache.persistence.tree.io.AbstractDataPageIO.findIndirectItemIndex(AbstractDataPageIO.java:341)
> ~[ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.persistence.tree.io.AbstractDataPageIO.getDataOffset(AbstractDataPageIO.java:450)
> ~[ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.persistence.tree.io.AbstractDataPageIO.readPayload(AbstractDataPageIO.java:492)
> ~[ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:150)
> ~[ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:102)
> ~[ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.query.h2.database.H2RowFactory.getRow(H2RowFactory.java:61)
> ~[ignite-indexing-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.query.h2.database.H2Tree.createRowFromLink(H2Tree.java:149)
> ~[ignite-indexing-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.query.h2.database.io.H2LeafIO.getLookupRow(H2LeafIO.java:67)
> ~[ignite-indexing-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.query.h2.database.io.H2LeafIO.getLookupRow(H2LeafIO.java:33)
> ~[ignite-indexing-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.query.h2.database.H2Tree.getRow(H2Tree.java:167)
> ~[ignite-indexing-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.query.h2.database.H2Tree.getRow(H2Tree.java:46)
> ~[ignite-indexing-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.getRow(BPlusTree.java:4482)
> ~[ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.query.h2.database.H2Tree.compare(H2Tree.java:209)
> ~[ignite-indexing-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.query.h2.database.H2Tree.compare(H2Tree.java:46)
> ~[ignite-indexing-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.compare(BPlusTree.java:4469)
> ~[ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findInsertionPoint(BPlusTree.java:4389)
> ~[ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.access$1500(BPlusTree.java:83)
> ~[ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Search.run0(BPlusTree.java:278)
> ~[ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:4816)
> ~[ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:4801)
> ~[ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.readPage(PageHandler.java:158)
> ~[ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.persistence.DataStructure.read(DataStructure.java:332)
> ~[ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2336)
> ~[ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2348)
> ~[ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doPut(BPlusTree.java:2086)
> ~[ignite-core-2.6.0.jar:2.6.0]
>
> Do you think it is related to the issue mentioned in
> http://apache-ignite-users.70518.x6.nabble.com/And-again-Failed-to-get-page-IO-instance-page-content-is-corrupted-td20095.html#none
>
> and I see it is fixed in 2.7:
> [1] https://issues.apache.org/jira/browse/IGNITE-8659
> [2] https://issues.apache.org/jira/browse/IGNITE-5874
>
> Do you think [1] and [2] are the root cause? Thank you very much.
>
>
>
>


Re: Not able to load data from Cassandra database to Ignite Cache.

2019-01-15 Thread Ilya Kasnacheev
Hello!

> But here the requirement is first I need to load the data from cassandra
> database to ignite cache

Have you tried loadCache()?
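A minimal sketch of that call, assuming the cache and its Cassandra
CacheStore are already configured (the config path and cache name are
placeholders):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class LoadCacheSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start("config/ignite-cassandra.xml");
        IgniteCache<Object, Object> cache = ignite.cache("personCache");

        // Pulls data from the underlying CacheStore (Cassandra here) into
        // the cache; a null predicate means "load everything".
        cache.loadCache(null);
    }
}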

Regards,
-- 
Ilya Kasnacheev


Wed, 9 Jan 2019 at 16:08, Kiran Kumar :

> Configured three XML files: one for Cassandra connections, one for
> persistence, and one default.xml where both the Cassandra and persistence
> bean ids are configured; also updated the cache store configuration.
>
> I was able to save data to cassandra using *cache.put*.
>
> But here the requirement is that first I need to load the data from the
> Cassandra database into the Ignite cache, then perform dataframe streaming,
> and then save the new data to Cassandra from the Ignite cache.
>
> Is there any way to add dynamic POJOs in KeyPersistence?
>
>
>
>


Re: SQLFieldsQuery timeout is not working

2019-01-15 Thread Taras Ledkov
Looks like your query scans more than 4K rows to produce the 200-row result
set.


On 15.01.2019 17:02, garima.j wrote:

Hi,

The number of rows in the table is 300k. In the SQL query, I specify the
limit as 10.

If I increase the limit to 200, it throws QueryCancelledException and times
out. Is the timeout dependent on the result set size as well?

Also, is there any way to customize the number of rows H2 scans between
timeout checks (instead of 4K rows)?





--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: SQLFieldsQuery timeout is not working

2019-01-15 Thread Taras Ledkov

Hi,

The timeout isn't dependent on the result set size, but it is checked once
per 4K scanned rows.


On 15.01.2019 17:02, garima.j wrote:

Hi,

The number of rows in the table is 300k. In the SQL query, I specify the
limit as 10.

If I increase the limit to 200, it throws QueryCancelledException and times
out. Is the timeout dependent on the result set size as well?

Also, is there any way to customize the number of rows H2 scans between
timeout checks (instead of 4K rows)?





--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: IgniteInterruptedException on cache.get in a transaction inside runnable (ignite 2.6)

2019-01-15 Thread Ilya Kasnacheev
Hello!

I have no idea; maybe some kind of remote operation failed and caused a
domino effect. Do you have logs?

Regards,
-- 
Ilya Kasnacheev


Tue, 15 Jan 2019 at 17:17, bintisepaha :

> Sorry, we are still on 2.3 :)
> The timeout is 5 minutes on this transaction and it fails pretty quickly.
> At the same time, many threads fail at the exact same operation across
> nodes.
>
> Was there a similar bug in 2.3? We will be upgrading to 2.7 soon.
>
>
>
>
>


Re: IgniteInterruptedException on cache.get in a transaction inside runnable (ignite 2.6)

2019-01-15 Thread bintisepaha
Sorry, we are still on 2.3 :)
The timeout is 5 minutes on this transaction and it fails pretty quickly. At
the same time, many threads fail at the exact same operation across nodes.

Was there a similar bug in 2.3? We will be upgrading to 2.7 soon.






Re: SQLFieldsQuery timeout is not working

2019-01-15 Thread garima.j
Hi,

The number of rows in the table is 300k. In the SQL query, I specify the
limit as 10.

If I increase the limit to 200, it throws QueryCancelledException and times
out. Is the timeout dependent on the result set size as well?

Also, is there any way to customize the number of rows H2 scans between
timeout checks (instead of 4K rows)?





Re: Ignite Data Streamer Kafka version 2.0

2019-01-15 Thread Alexey Kukushkin
Hi Mahesh,

I do not think the Kafka streamer uses any 2.x-specific Kafka client APIs.
From what you say, I think kafka-client-1.1 (which Ignite 2.7 uses) cannot
connect to a Kafka 2.x cluster. Can you manually replace kafka-client-1.1.jar
with the kafka-client-2.0 jar on the Kafka streamer side and see if it fixes
your issue? BTW, what does your issue look like? What is the error message?





RE: SQLFieldsQuery timeout is not working

2019-01-15 Thread Stanislav Lukyanov
Hi,

What’s your Ignite version?
Can you share Ignite and cache configs and the query SQL?

Thanks,
Stan 

From: garima.j
Sent: 15 January 2019 14:18
To: user@ignite.apache.org
Subject: SQLFieldsQuery timeout is not working

Hello, 

I'm using the below code to execute a SQL fields query : 

SqlFieldsQuery qry = new
SqlFieldsQuery(jfsIgniteSQLFilter.getSQLQuery()).setTimeout(timeout, TimeUnit.MILLISECONDS);

List<List<?>> listFromCache = cache.query(qry).getAll();

The query doesn't timeout at all. My timeout is 5 milliseconds and the data
is retrieved in 168 ms without timing out.

Please let me know what am I missing.






Re: SQLFieldsQuery timeout is not working

2019-01-15 Thread Taras Ledkov

Hi,

How many rows does the result set contain? How many rows are scanned to
produce the result?

Ignite uses H2 as the SQL frontend.
H2 checks the timeout after every 4K scanned rows.

On 15.01.2019 14:18, garima.j wrote:

Hello,

I'm using the below code to execute a SQL fields query :

SqlFieldsQuery qry = new
SqlFieldsQuery(jfsIgniteSQLFilter.getSQLQuery()).setTimeout(timeout, TimeUnit.MILLISECONDS);
List<List<?>> listFromCache = cache.query(qry).getAll();

The query doesn't timeout at all. My timeout is 5 milliseconds and the data
is retrieved in 168 ms without timing out.

Please let me know what am I missing.





--
Taras Ledkov
Mail-To: tled...@gridgain.com



RE: Extra console output from logs.

2019-01-15 Thread Stanislav Lukyanov
Hi,

First, try to disable IGNITE_QUIET.

If you are still seeing duplicated messages after that, make sure you don’t
have multiple slf4j adapters on the classpath.
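For the first step, a minimal sketch (the same flag can be passed as
-DIGNITE_QUIET=false on the command line; the config path is an assumption):

import org.apache.ignite.Ignition;

public class VerboseStart {
    public static void main(String[] args) {
        // Disable quiet mode so startup output goes through the configured
        // logger instead of the abbreviated [HH:mm:ss] console prints.
        System.setProperty("IGNITE_QUIET", "false");

        Ignition.start("config/ignite.xml"); // hypothetical config path
    }
}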

Let me know if that helps.

Thanks,
Stan 

From: javadevmtl
Sent: 9 January 2019 21:43
To: user@ignite.apache.org
Subject: Re: Extra console output from logs.

More precisely this is what we see...

This line is good:
{"appTimestamp":"2019-01-09T18:29:34.298+00:00","threadName":"vert.x-worker-thread-0","level":"INFO","loggerName":"org.apache.ignite.internal.IgniteKernal%xx-dev","message":"\n\n>>>
   
__    \n>>>   /  _/ ___/ |/ /  _/_  __/ __/  \n>>> 
_/ // (7 7// /  / / / _/\n>>> /___/\\___/_/|_/___/ /_/ /___/   \n>>>
\n>>> ver. 2.7.0#20181130-sha1:256ae401\n>>> 2018 Copyright(C) Apache
Software Foundation\n>>> \n>>> Ignite documentation:
http://ignite.apache.org\n"}

The below shouldn't print:
[13:29:34]__   
[13:29:34]   /  _/ ___/ |/ /  _/_  __/ __/ 
[13:29:34]  _/ // (7 7// /  / / / _/   
[13:29:34] /___/\___/_/|_/___/ /_/ /___/  
[13:29:34] 
[13:29:34] ver. 2.7.0#20181130-sha1:256ae401
[13:29:34] 2018 Copyright(C) Apache Software Foundation
[13:29:34] 
[13:29:34] Ignite documentation: http://ignite.apache.org
[13:29:34] 
[13:29:34] Quiet mode.
[13:29:34]   ^-- Logging by 'Slf4jLogger
[impl=Logger[o.a.i.i.IgniteKernal%xx-dev], quiet=true]'
[13:29:34]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
or "-v" to ignite.{sh|bat}
[13:29:34] 






Re: IgniteInterruptedException on cache.get in a transaction inside runnable (ignite 2.6)

2019-01-15 Thread Ilya Kasnacheev
Hello!

You are saying you're on 2.6 but stack trace shows 2.3:

~[ignite-core-2.3.0.jar:2.3.0]

Other than that, there is no reason why tasks submitted to a pool would NOT
be interrupted on timeout if a 'get' operation fails to finish in time. We
need more context.

Regards,
-- 
Ilya Kasnacheev


Mon, 14 Jan 2019 at 23:38, bintisepaha :

>
> Hi folks, we are getting this error in existing code in ignite 2.6.0.
> The cache.get is on a replicated/transactional cache and holds only a
> single key/value pair. It has been used like this for a while in
> production.
> The code is executed in a runnable and wrapped in a
> pessimistic/repeatable_read transaction.
>
> The below line throws an exception. Any idea what could be causing this?
>
> Date positionStartDate = (Date)
> posStartDateCache.get("positionStartDate");
>
> [14 Jan 2019 14:55:49.690 EST] [pub-#12352%DataGridServer-Staging%] ERROR
> 11223 (TradeOrdersLoaderForMatching.java:69) Exception received while
> loading tradeOrders for key: TraderTidSettlementKey [traderId=6671,
> instrumentId=60083, settlement=null]
> javax.cache.CacheException: class
> org.apache.ignite.IgniteInterruptedException: Got interrupted while waiting
> for future to complete.
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1287)
> ~[ignite-core-2.3.0.jar:2.3.0]
> at
>
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:1648)
> ~[ignite-core-2.3.0.jar:2.3.0]
> at
>
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.get(IgniteCacheProxyImpl.java:831)
> ~[ignite-core-2.3.0.jar:2.3.0]
> at
>
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.get(GatewayProtectedCacheProxy.java:662)
> ~[ignite-core-2.3.0.jar:2.3.0]
> at
> com.tudor.datagridI.utils.Util.getPositionStartDate(Util.java:27)
> ~[data-grid-server-ignite.jar:?]
> at
>
> com.tudor.datagridI.server.cachestore.springjdbc.TradeOrderTradeCacheLoader.loadingFromSingleTrimDb1(TradeOrderTradeCacheLoader.java:34)
> ~[data-grid-server-ignite.jar:?]
> at
>
> com.tudor.datagridI.server.matching.LoadTradeOrdersForMatchingLoader.loadCache(LoadTradeOrdersForMatchingLoader.java:46)
> ~[data-grid-server-ignite.jar:?]
> at
>
> com.tudor.datagridI.server.matching.LoadTradeOrdersForMatchingLoader.loadCache(LoadTradeOrdersForMatchingLoader.java:55)
> ~[data-grid-server-ignite.jar:?]
> at
>
> com.tudor.datagridI.server.matching.TradeOrdersLoaderForMatching.addTradeOrdersForTrader(TradeOrdersLoaderForMatching.java:79)
> ~[data-grid-server-ignite.jar:?]
> at
>
> com.tudor.datagridI.server.matching.TradeOrdersLoaderForMatching.loadTradeOrders(TradeOrdersLoaderForMatching.java:65)
> ~[data-grid-server-ignite.jar:?]
> at
>
> com.tudor.datagridI.server.matching.TradeOrdersLoaderForMatching.run(TradeOrdersLoaderForMatching.java:87)
> ~[data-grid-server-ignite.jar:?]
> at
>
> org.apache.ignite.internal.processors.closure.GridClosureProcessor$C4.execute(GridClosureProcessor.java:1944)
> ~[ignite-core-2.3.0.jar:2.3.0]
> at
>
> org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:566)
> ~[ignite-core-2.3.0.jar:2.3.0]
> at
>
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6631)
> ~[ignite-core-2.3.0.jar:2.3.0]
> at
>
> org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:560)
> ~[ignite-core-2.3.0.jar:2.3.0]
> at
>
> org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:489)
> ~[ignite-core-2.3.0.jar:2.3.0]
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> ~[ignite-core-2.3.0.jar:2.3.0]
> at
>
> org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1181)
> ~[ignite-core-2.3.0.jar:2.3.0]
> at
>
> org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1913)
> ~[ignite-core-2.3.0.jar:2.3.0]
> at
>
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555)
> ~[ignite-core-2.3.0.jar:2.3.0]
> at
>
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183)
> ~[ignite-core-2.3.0.jar:2.3.0]
> at
>
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
> ~[ignite-core-2.3.0.jar:2.3.0]
> at
>
> org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1090)
> ~[ignite-core-2.3.0.jar:2.3.0]
> at
>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [?:1.8.0_112]
> at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 

Re: NPE when start

2019-01-15 Thread Ilya Kasnacheev
Hello!

Can you provide a small reproducer project?

Regards,
-- 
Ilya Kasnacheev


Sat, 12 Jan 2019 at 09:35, wangsan :

> When a client node starts with ZK discovery and persistence enabled, some
> null pointer exceptions are thrown (when the node starts on a new machine).
>
> The exception traces as follows:
>
> 12:26:03.288 [zk-172_22_29_108_SEARCH_NODE_8100-EventThread] ERROR
> o.a.i.i.p.c.GridContinuousProcessor  - Failed to unmarshal continuous
> routine handler, ignore routine
> [routineId=31b253bb-df3a-45f2-b658-3917b82993b2,
> srcNodeId=76b83b66-3858-49bd-97d8-38c49333e6f5]
> org.apache.ignite.IgniteCheckedException: Failed to unmarshal object with
> optimized marshaller
> at
>
> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:9968)
> at
>
> org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.startDiscoveryDataRoutine(GridContinuousProcessor.java:568)
> at
>
> org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.onGridDataReceived(GridContinuousProcessor.java:529)
> at
>
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$5.onExchange(GridDiscoveryManager.java:888)
> at
>
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.processLocalJoin(ZookeeperDiscoveryImpl.java:2946)
> at
>
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.processBulkJoin(ZookeeperDiscoveryImpl.java:2772)
> at
>
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.processNewEvents(ZookeeperDiscoveryImpl.java:2638)
> at
>
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.processNewEvents(ZookeeperDiscoveryImpl.java:2610)
> at
>
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.access$2000(ZookeeperDiscoveryImpl.java:108)
> at
>
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl$ZkWatcher.processResult(ZookeeperDiscoveryImpl.java:4120)
> at
>
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperClient$DataCallbackWrapper.processResult(ZookeeperClient.java:1163)
> at
>
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:569)
> at
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)
> Caused by: org.apache.ignite.binary.BinaryObjectException: Failed to
> unmarshal object with optimized marshaller
> at
>
> org.apache.ignite.internal.binary.BinaryUtils.doReadOptimized(BinaryUtils.java:1780)
> at
>
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1962)
> at
>
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
> at
>
> org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:310)
> at
>
> org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:99)
> at
>
> org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:82)
> at
>
> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:9962)
> ... 12 common frames omitted
> Caused by: org.apache.ignite.IgniteCheckedException: Failed to deserialize
> object with given class loader:
> [clsLdr=sun.misc.Launcher$AppClassLoader@42a57993, err=null]
> at
>
> org.apache.ignite.internal.marshaller.optimized.OptimizedMarshaller.unmarshal0(OptimizedMarshaller.java:236)
> at
>
> org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:94)
> at
>
> org.apache.ignite.internal.binary.BinaryUtils.doReadOptimized(BinaryUtils.java:1777)
> ... 18 common frames omitted
> Caused by: java.lang.NullPointerException: null
> at
>
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.aliveServerNodes(GridDiscoveryManager.java:1858)
> at
>
> org.apache.ignite.internal.processors.marshaller.ClientRequestFuture.<init>(ClientRequestFuture.java:80)
> at
>
> org.apache.ignite.internal.processors.marshaller.MarshallerMappingTransport.requestMapping(MarshallerMappingTransport.java:138)
> at
>
> org.apache.ignite.internal.MarshallerContextImpl.getClassName(MarshallerContextImpl.java:375)
> at
>
> org.apache.ignite.internal.MarshallerContextImpl.getClass(MarshallerContextImpl.java:344)
> at
>
> org.apache.ignite.internal.marshaller.optimized.OptimizedMarshallerUtils.classDescriptor(OptimizedMarshallerUtils.java:264)
> at
>
> org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObject0(OptimizedObjectInputStream.java:341)
> at
>
> org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObjectOverride(OptimizedObjectInputStream.java:198)
> at java.io.ObjectInputStream.readObject(ObjectInputStream.java:416)

Re: Concurrent merge into operations cause critical system error on ignite 2.7 node.

2019-01-15 Thread Ilya Kasnacheev
Hello!

Can you provide logs?

Regards,
-- 
Ilya Kasnacheev


Sun, 13 Jan 2019 at 18:05, yangjiajun <1371549...@qq.com>:

> Hello.
>
> I have an Ignite 2.7 node with persistence enabled. I test concurrent
> merge into operations on it and find that the concurrent operations below
> can cause a critical system error:
> 1.Thread 1 executes "merge INTO  city2(id,name,name1)
> VALUES(1,'1','1'),(2,'1','1'),(3,'1','1')".
> 2.Thread 2 executes "merge INTO  city2(id,name,name1)
> VALUES(2,'1','1'),(1,'1','1')".
>
> But the following concurrent operations seem to cause no problem:
> 1.Thread 1 executes "merge INTO  city2(id,name,name1)
> VALUES(1,'1','1'),(2,'1','1'),(3,'1','1')".
> 2.Thread 2 executes "merge INTO  city2(id,name,name1)
> VALUES(1,'1','1'),(2,'1','1')".
>
> Is this a bug?
>
>
>
>


Re: Insert performance

2019-01-15 Thread Ilya Kasnacheev
Hello!

This might be the point of IgniteDataStreamer (bigger latency, bigger
throughput).
However, you should be feeding data in parallel to IgniteDataStreamer, from
multiple threads, for optimal performance.

There are a lot of tuning considerations, such as thread pools, etc.
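As a hedged sketch of the parallel-feeding pattern (the config path, cache
name, thread count, and volumes are all assumptions):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class ParallelStreamerSketch {
    public static void main(String[] args) throws InterruptedException {
        Ignite ignite = Ignition.start("client.xml"); // hypothetical config
        ExecutorService pool = Executors.newFixedThreadPool(8);

        // IgniteDataStreamer is thread-safe, so all threads share one instance;
        // addData() is asynchronous and batches entries per destination node.
        try (IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer("myCache")) {
            for (int t = 0; t < 8; t++) {
                final int offset = t * 1_000_000;
                pool.submit(() -> {
                    for (int i = 0; i < 1_000_000; i++)
                        streamer.addData(offset + i, "value-" + i);
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
        } // close() flushes the remaining buffered entries
    }
}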

Regards,
-- 
Ilya Kasnacheev


Mon, 14 Jan 2019 at 23:34, yann Blazart :

> Hello,
>
> I'm doing tests to get the best performance while using Ignite.
>
> I have to insert a lot of data, then run some queries.
>
> First, in my tests, I'm using objects with @SqlFields annotations, and
> cache.putAll is faster than IgniteDataStreamer - is that normal? I'm using
> cache.putAll to send 5000 objects at a time.
>
> Second, when I run my insertion test (30 million objects) using two server
> nodes and one client node on the same machine, I reach 500 inserts per
> millisecond, which is excellent!
>
> But if I use two different machines across my network, performance goes
> down to 200 inserts per millisecond. Yet my network is far from busy
> (only 80Mb/s used out of the 200Mb/s available in a simple test).
>
> Is there any reason?
>
> Thanks in advance,
>
> Regards.
>


Re: MVCC and continuous query

2019-01-15 Thread Ilya Kasnacheev
Hello!

A Continuous Query is not an event.

We have three separate mechanisms with similar functions: events, cache
store, and continuous queries. It is possible that some are supported and
some aren't.
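For what it's worth, a minimal sketch of the continuous-query mechanism
(cache name and types are assumptions); it registers a local listener rather
than going through the IgniteEvents facility:

import javax.cache.event.CacheEntryEvent;

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;

public class CqSketch {
    static QueryCursor<?> listen(IgniteCache<Integer, String> cache) {
        ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

        // Called on the local node for every matching update.
        qry.setLocalListener(events -> {
            for (CacheEntryEvent<? extends Integer, ? extends String> e : events)
                System.out.println("Updated: " + e.getKey() + " -> " + e.getValue());
        });

        // Keep the returned cursor open for as long as updates are wanted.
        return cache.query(qry);
    }
}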

Regards.
-- 
Ilya Kasnacheev


Tue, 15 Jan 2019 at 03:28, Cindy Xing :

> As per
> https://apacheignite.readme.io/docs/multiversion-concurrency-control,
> continuous query can be done against ignite with mvcc enabled
> (transactional_snapshot). However, on the same page, it also mentions that
> events are not supported with MVCC.
>
> I am wondering what mechanism continuous query uses. Is it events? If so,
> this seems contradictory to me.
>
> Thanks
> Cindy
>
>
>
>
>


RE: Thread got interrupted while trying to acquire table lock & Gotinterrupted while waiting for future to complete

2019-01-15 Thread Stanislav Lukyanov
Looks like the thread just ended.

Do you see a similar issue? Do you have a reproducer?

Stan 

From: bintisepaha
Sent: 14 January 2019 23:07
To: user@ignite.apache.org
Subject: Re: Thread got interrupted while trying to acquire table lock & 
Gotinterrupted while waiting for future to complete

Was there any resolution to this?







Re: Extra console output from logs.

2019-01-15 Thread Ilya Kasnacheev
Hello!

As far as I can see, Ignite is going to print its logo to stderr or stdout
regardless of the configured logger, along with some other output.

Regards,
-- 
Ilya Kasnacheev


Thu, 10 Jan 2019 at 21:25, javadevmtl :

> Nobody has experienced this? I'm not trying to disable logs. I'm just
> getting double the output...
>
>
>
>


Re: Visor "cache" command hangs when client node connects.

2019-01-15 Thread Ilya Kasnacheev
Hello!

I think there were people on the user list who were able to open a
Dockerized Ignite cluster to outside clients. I recommend searching the
archives.

Regards,
-- 
Ilya Kasnacheev


Mon, 14 Jan 2019 at 21:54, John Smith :

> So if it's all running inside DC/OS it works with no issues. So I'm
> wondering what the strategy would be if external clients want to connect,
> with Ignite either inside the container env or outside... Just REST?
>
> On Fri., Jan. 11, 2019, 15:00 John Smith wrote:
>> Yeah this doesn't work on the dev environment either, because the
>> application is running in docker on bridge mode, but the cluster is on
>> standard VM hosts. I'm using DC/OS...
>> Does the Mesos deployment support DC/OS? If not I can create custom
>> marathon docker images for it...
>>
>> On Fri, 11 Jan 2019 at 14:12, John Smith  wrote:
>>
>>> And it seems to stay like that indefinitely. I let it go for 5 minutes
>>> and nothing has printed to the console or logs.
>>>
>>> On Fri, 11 Jan 2019 at 12:49, John Smith  wrote:
>>>
 I can confirm I just tested it. There is no stack trace. Basically the
 client connects, no errors, the cache command hangs/pauses, I disconnect
 the client and cache command completes. I'm also 100% certain the client
 works when connecting to the cluster over wi-fi. I have been able to create
 caches dynamically. Query the caches etc...

 On Fri, 11 Jan 2019 at 12:23, John Smith 
 wrote:

> That's the thing... There is none. It just seems to pause and wait.
> The moment I close my client application it just resumes...
>
> But other commands like top work fine...
>
> On Fri, 11 Jan 2019 at 12:15, Ilya Kasnacheev <
> ilya.kasnach...@gmail.com> wrote:
>
>> Hello!
>>
>> If they're on the same network it is not obvious what happens here,
>> but I have just performed the steps you have mentioned without problems.
>>
>> Can you collect stack traces from all nodes when this hang happens?
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Fri, 11 Jan 2019 at 20:12, Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com>:
>>
>>> Hello!
>>>
>>> I'm afraid that visor will try to connect to your client and will
>>> wait until this is successful.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
Fri, 11 Jan 2019 at 20:01, John Smith :
>>>
 Humm maybe not. The client is running on my laptop through the
 wi-fi. But the cluster and visor are on the dev network. But the 
 client on
 my laptop is capable of joining the cluster through the wi-fi and
 processing requests no problems.

 On Fri, 11 Jan 2019 at 10:56, Ilya Kasnacheev <
 ilya.kasnach...@gmail.com> wrote:

> Hello!
>
> Are you sure that your Visor node is able to connect to client
> node via communication port? Nodes in cluster need to be able to do 
> that,
> which is somewhat unexpected in case of client node.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Fri, 11 Jan 2019 at 18:36, John Smith :
>
>> Hi, sorry if this is a double post; I tried through Nabble and I
>> don't think it came through...
>>
>> So using 2.7...
>>
>> I have a 3 node cluster started with ignite.sh and that works
>> perfectly fine. I'm also able to connect to the cluster with visor 
>> and I
>> can also run top, cache etc... commands no problem. But the issue 
>> arises
>> only when an external client node connects
>> using igniteConfig.setClientMode(true);
>>
>> 1- Start the cluster
>> 2- Connect with visor
>> 3- Run cache command (prints cache details, no problem)
>> 4- Connect client application
>> 5- Run cache command (seems to hang, doesn't crash)
>> 6- Disconnect client app
>> 7- Cache command completes and prints.
>>
>> Cache seems to be the only command that hangs/pauses when the
>> client is connected.
>>
>> The cache config, in case it's needed...
>>
>> <?xml version="1.0" encoding="UTF-8"?>
>> <beans xmlns="http://www.springframework.org/schema/beans"
>>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>>        xmlns:util="http://www.springframework.org/schema/util"
>>        xsi:schemaLocation="
>>            http://www.springframework.org/schema/beans
>>            http://www.springframework.org/schema/beans/spring-beans.xsd
>>            http://www.springframework.org/schema/util
>>            http://www.springframework.org/schema/util/spring-util.xsd">
>>     <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>> 
>>
>> 
>> 

Re: Ignite documentation error regarding RAM usage

2019-01-15 Thread Artem Budnikov

Hi,

The information on the page you referenced is correct. The default max 
size of the default data region is either 20% of RAM or 256MB, whichever 
is larger. The log message changed in 
https://issues.apache.org/jira/browse/IGNITE-7824 refers to something else.


Perhaps someone from the community could explain what exactly it is
that should be more than 80%. Anyone?
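In the meantime, anyone who would rather not depend on the default can set
the region size explicitly; a minimal sketch (the 4GB figure is just an
example, not a recommendation):

import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class RegionSizeSketch {
    public static void main(String[] args) {
        // Setting the size explicitly side-steps the documented default of
        // max(20% of RAM, 256MB) for the default data region.
        DataRegionConfiguration region = new DataRegionConfiguration();
        region.setMaxSize(4L * 1024 * 1024 * 1024); // 4GB, example value

        DataStorageConfiguration storage = new DataStorageConfiguration();
        storage.setDefaultDataRegionConfiguration(region);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storage);
    }
}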


--

Artem

On 14.01.2019 21:10, Loredana Radulescu Ivanoff wrote:

Hello,

Following this bug fix - 
https://issues.apache.org/jira/browse/IGNITE-7824, I think the 
documentation needs to be updated too, right? It still mentions 20% 
RAM, and it should be 80%.


This is the documentation I'm referring to:

https://apacheignite.readme.io/docs/memory-configuration



SQLFieldsQuery timeout is not working

2019-01-15 Thread garima.j
Hello, 

I'm using the below code to execute a SQL fields query : 

SqlFieldsQuery qry = new
SqlFieldsQuery(jfsIgniteSQLFilter.getSQLQuery()).setTimeout(timeout, TimeUnit.MILLISECONDS);

List<List<?>> listFromCache = cache.query(qry).getAll();

The query doesn't timeout at all. My timeout is 5 milliseconds and the data
is retrieved in 168 ms without timing out.

Please let me know what am I missing.





Re: Partitions stuck in MOVING state after upgrade to 2.7

2019-01-15 Thread Dmitry Lazurkin


I think I can comment out these lines in
org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition:

    if (grp.walEnabled())
        ctx.wal().log(new PartitionMetaStateRecord(grp.groupId(), id, state(), updateCounter()));

PS. https://issues.apache.org/jira/browse/IGNITE-10226

On 1/10/19 7:33 PM, dilaz03 wrote:
> There is a problem with Kubernetes, because the scheduler can restart an
> Ignite node at any time.
>
> Thank you.
>
>
>