Re: ApacheCon talk available

2020-10-15 Thread Andrey Gura
Saikat,

thanks a lot! *applause*

On Fri, Oct 16, 2020 at 12:11 AM Saikat Maitra  wrote:
>
> Hi,
>
> My talk about Data Streaming using Apache Flink and Apache Ignite in
> ApacheCon is available now
>
> https://www.youtube.com/watch?v=n74HMmTz5i0
>
> Regards,
> Saikat


Re: Cache metrics on server nodes does not update correctly

2020-03-13 Thread Andrey Gura
Hi,

Apache Ignite 2.7.6 doesn't contain the bug with aggregation of cache
hits/misses.

I'm not sure the described problem is related to IGNITE-3495 [1],
so it makes sense to file a new issue.

[1] https://issues.apache.org/jira/browse/IGNITE-3495
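For reference, a minimal sketch of reading the hit/miss counters once statistics are enabled on the cache. This assumes a cache named "test" with statistics enabled on all nodes and an already started `Ignite` instance named `ignite`; it is an illustration, not code from this thread.

```java
// Sketch, assuming CacheConfiguration.setStatisticsEnabled(true) on all nodes.
IgniteCache<String, String> cache = ignite.cache("test");

cache.get("absent-key"); // a lookup for a missing key should increment misses

// Aggregated, cluster-wide counters...
CacheMetrics clusterMetrics = cache.metrics();
System.out.println("misses: " + clusterMetrics.getCacheMisses());

// ...vs. counters of the local node only.
CacheMetrics localMetrics = cache.localMetrics();
System.out.println("local misses: " + localMetrics.getCacheMisses());
```

Comparing `metrics()` with `localMetrics()` on each server node is a quick way to see whether the problem is in local accounting or in cluster-wide aggregation.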

On Thu, Mar 12, 2020 at 8:21 PM Dominik Przybysz  wrote:
>
> Hi,
> I used ignite in version 2.7.6 (but I have also seen this behaviour on other 
> 2.7.x versions) and there aren't any near or local cache.
> I expect that if I ask distributed cache about key which does not exist then 
> the miss metric will be incremented.
>
>
> Wed, 11 Mar 2020 at 11:35, Andrey Gura  wrote:
>>
>> Denis,
>>
>> I'm not sure I understand what the expected behavior should be.
>> There are local and aggregated cluster-wide metrics. I don't know
>> which one is used by Visor because I have never used it :)
>>
>> Also, it would be great to know which version of Apache Ignite is used
>> in the described case. I remember a bug with metrics aggregation during
>> the discovery metrics message round trip.
>>
>> On Wed, Mar 11, 2020 at 12:05 AM Denis Magda  wrote:
>> >
>> > @Nikolay Izhikov , @Andrey Gura ,
>> > could you folks check out this thread?
>> >
>> > I have a feeling that what Dominik is describing was talked out before and
>> > rather some sort of a limitation than an issue with the current
>> > implementation.
>> >
>> > -
>> > Denis
>> >
>> >
>> > On Tue, Mar 3, 2020 at 11:41 PM Dominik Przybysz 
>> > wrote:
>> >
>> > > Hi,
>> > > I am trying to use partitioned cache on server nodes to which I connect
>> > > with client node. Statistics of cache in the cluster are updated, but 
>> > > only
>> > > for hits metric - misses metric is always 0.
>> > >
>> > > To reproduce this problem I created cluster of two nodes:
>> > >
>> > > Server node 1 adds 100 random test cases and prints cache statistics
>> > > continuously:
>> > >
>> > > import java.util.Arrays;
>> > > import java.util.Random;
>> > > import java.util.UUID;
>> > > import org.apache.ignite.Ignite;
>> > > import org.apache.ignite.IgniteCache;
>> > > import org.apache.ignite.Ignition;
>> > > import org.apache.ignite.cache.CacheAtomicityMode;
>> > > import org.apache.ignite.cache.CacheMode;
>> > > import org.apache.ignite.configuration.CacheConfiguration;
>> > > import org.apache.ignite.configuration.IgniteConfiguration;
>> > > import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
>> > > import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
>> > > import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
>> > >
>> > > public class IgniteClusterNode1 {
>> > >     public static void main(String[] args) throws InterruptedException {
>> > >         IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
>> > >
>> > >         CacheConfiguration<String, String> cacheConfiguration = new CacheConfiguration<>();
>> > >         cacheConfiguration.setName("test");
>> > >         cacheConfiguration.setCacheMode(CacheMode.PARTITIONED);
>> > >         cacheConfiguration.setAtomicityMode(CacheAtomicityMode.ATOMIC);
>> > >         cacheConfiguration.setStatisticsEnabled(true);
>> > >         igniteConfiguration.setCacheConfiguration(cacheConfiguration);
>> > >
>> > >         TcpCommunicationSpi communicationSpi = new TcpCommunicationSpi();
>> > >         communicationSpi.setLocalPort(47500);
>> > >         igniteConfiguration.setCommunicationSpi(communicationSpi);
>> > >
>> > >         TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
>> > >         discoverySpi.setLocalPort(47100);
>> > >         discoverySpi.setLocalPortRange(100);
>> > >         TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
>> > >         ipFinder.setAddresses(Arrays.asList("127.0.0.1:47100..47200",
>> > >             "127.0.0.1:48100..48200"));
>> > >         discoverySpi.setIpFinder(ipFinder); // the IP finder must be attached to the SPI
>> > >         igniteConfiguration.setDiscoverySpi(discoverySpi);
>> > >
>> > >         try (Ignite ignite = Ignition.start(igniteConfiguration)) {
>> > >             try (IgniteCache<String, String> cache = ignite.getOrCreateCache("test")) {
>> > >                 new Random().ints(1000).map(i -> Math.abs(i % 1000))
>> > >                     .distinct().limit(100).forEach(i -> {
>> > >                         String key = "data_" + i;
>> > >                         String value = UUID.randomUUID().toString();
>> > >                         cache.put(key, value);
>> > >                     });
>> > >             }
>> > >             while (true) {
>> > >                 System.out.println(ignite.cache("test").metrics());
>> > >                 Thread.sleep(5000);
>> > >             }
>> > >         }
>> > >     }
>> > > }
>> > >
>> > > Server node 2 only prints cache statistics continuously:
>> > >
>> > > 

Re: JVM tuning parameters

2019-04-01 Thread Andrey Gura
Hi,

I don't see any mention of OOM. The provided log message reports blocking
of the db-checkpoint-thread; I think the worker is trying to acquire the
checkpoint read lock.
The stack trace corresponds to the thread that detected the blocking. The
failure handler prints a thread dump to the log, and that thread dump can
help in analyzing the problem.

Also, a more detailed description of the case is required (is it just the
creation of 400 tables, or is data also being added to the tables?).

And finally... 1 core is too hard a restriction, from my point of view.
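If the blocking turns out to be benign (e.g. long checkpoints on slow disks), the detection threshold and the reaction to it are configurable. A hedged sketch, assuming Ignite 2.7 or later, where `setSystemWorkerBlockedTimeout` is available:

```java
// Sketch (Ignite 2.7+): tune how long a system worker such as
// db-checkpoint-thread may stay silent before SYSTEM_WORKER_BLOCKED
// is raised, and which handler reacts to it.
IgniteConfiguration cfg = new IgniteConfiguration();

// Allow up to 60 seconds of worker inactivity before the failure
// handler is invoked (the default is tied to failureDetectionTimeout).
cfg.setSystemWorkerBlockedTimeout(60_000L);

// Default handler: try to stop the node; halt the JVM if stopping hangs.
cfg.setFailureHandler(new StopNodeOrHaltFailureHandler(false, 0));
```

Raising the timeout only hides the symptom, of course; the thread dump should still be inspected to see why the checkpoint thread stalls.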

On Sat, Mar 30, 2019 at 9:56 AM Denis Magda  wrote:
>
> Hi,
>
> How does the JVM error look like?
>
> Apart from that, Andrey, Igniters, the failure handler fired off but I have 
> no clue from the shared logs about what happened or how it is connected to the 
> Java heap issues. Should I expect to see anything from the logs not added to the 
> thread?
>
> --
> Denis Magda
>
>
> On Thu, Mar 28, 2019 at 6:58 AM ashfaq  wrote:
>>
>> Hi Team,
>>
>> We are installing ignite on kubernetes environment with native persistence
>> enabled. When we try to create around 400 tables using the sqlline end point
>> ,  the pods are restarting after creating 200 tables with jvm heap error so
>> we have increased the java heap size from 1GB to 2GB and this time it failed
>> at 300 tables.
>>
>> We would like to know how can we arrive at jvm heap size . Also we want to
>> know how do we configure such that the pods are not restarted and the
>> cluster is stable.
>>
>> Below are the current values that we have used.
>>
>> cpu - 1core
>> xms - 1GB
>> xmx - 2GB
>> RAM - 3GB
>>
>> Below is the error log:
>>
>> "Critical system error detected. Will be handled accordingly to configured
>> handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
>> super=AbstractFailureHandler [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED]]],
>> failureCtx=FailureContext [type=SYSTEM_WORKER_BLOCKED, err=class
>> o.a.i.IgniteException: GridWorker [name=db-checkpoint-thread,
>> igniteInstanceName=null, finished=false, heartbeatTs=1553771825864]]] class
>> org.apache.ignite.IgniteException: GridWorker [name=db-checkpoint-thread,
>> igniteInstanceName=null, finished=false, heartbeatTs=1553771825864]
>> at
>> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1831)
>> at
>> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1826)
>> at
>> org.apache.ignite.internal.worker.WorkersRegistry.onIdle(WorkersRegistry.java:233)
>> at
>> org.apache.ignite.internal.util.worker.GridWorker.onIdle(GridWorker.java:297)
>> at
>> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.lambda$new$0(ServerImpl.java:2663)
>> at
>> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorker.body(ServerImpl.java:7181)
>> at
>> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2700)
>> at
>> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
>> at
>> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerThread.body(ServerImpl.java:7119)
>> at
>> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/


[ANNOUNCE] Apache Ignite 2.6.0 Released

2018-07-18 Thread Andrey Gura
The Apache Ignite Community is pleased to announce the release of
Apache Ignite 2.6.0.

Apache Ignite [1] is a memory-centric distributed database, caching,
and processing platform for transactional, analytical, and streaming
workloads delivering in-memory speeds at petabyte scale.

This release fixes several critical issues and brings in some improvements:
https://ignite.apache.org/releases/2.6.0/release_notes.html

Download the latest Ignite version from here:
https://ignite.apache.org/download.cgi

Please let us know [2] if you encounter any problems.

Regards,
Andrey Gura on behalf of Apache Ignite community

[1] https://ignite.apache.org
[2] https://ignite.apache.org/community/resources.html#ask


Re: Affinity Key field is not identified if binary configuration is used on cache key object

2017-08-04 Thread Andrey Gura
Could you please clarify what "I already did LoadCache
before running the program" means?

Also, it would be good if you could share a minimal reproducer.

On Fri, Aug 4, 2017 at 4:55 PM, kotamrajuyashasvi
 wrote:
> Hi
> All nodes use same config xml and pojos.
>
> On Aug 4, 2017 7:22 PM, "agura [via Apache Ignite Users]" <[hidden email]>
> wrote:
>>
>> Hi
>>
>> It seems that you have different configuration on nodes (e.g. one node
>> has cacheKeyConfiguration while other doesn't). Isn't it?
>>
>> On Fri, Aug 4, 2017 at 12:54 PM, kotamrajuyashasvi
>> <[hidden email]> wrote:
>>
>> > Hi
>> >
>> > Thanks for the response. When I put cacheKeyConfiguration in ignite
>> > configuration, the affinity was working. But when I call Cache.Get() in
>> > client program I'm getting the following error.
>> >
>> > "Java exception occurred
>> > [cls=org.apache.ignite.binary.BinaryObjectException, msg=Binary type has
>> > different affinity key fields [typeName=PersonPK,
>> > affKeyFieldName1=customer_ref, affKeyFieldName2=null]]"
>> >
>> > I already did LoadCache before running the program.
>> >
>> >
>> >
>> > --
>> > View this message in context:
>> > http://apache-ignite-users.70518.x6.nabble.com/Affinity-Key-field-is-not-identified-if-binary-configuration-is-used-on-cache-key-object-tp15959p15990.html
>> > Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>>
>> 
>> If you reply to this email, your message will be added to the discussion
>> below:
>>
>> http://apache-ignite-users.70518.x6.nabble.com/Affinity-Key-field-is-not-identified-if-binary-configuration-is-used-on-cache-key-object-tp15959p15995.html
>
>
> 
> View this message in context: Re: Affinity Key field is not identified if
> binary configuration is used on cache key object
>
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Affinity Key field is not identified if binary configuration is used on cache key object

2017-08-04 Thread Andrey Gura
Hi

It seems that you have different configurations on your nodes (e.g. one node
has cacheKeyConfiguration while another doesn't). Is that the case?
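For illustration, the affinity key declaration has to be identical on every node. A minimal sketch, using the type and field names taken from the error message later in this thread (PersonPK / customer_ref):

```java
// Sketch: declare the affinity key field for the binary key type.
// This block must be present in the IgniteConfiguration of ALL nodes
// (servers and clients), otherwise the BinaryObjectException
// "Binary type has different affinity key fields" is thrown.
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setCacheKeyConfiguration(
    new CacheKeyConfiguration("PersonPK", "customer_ref"));
```

The error text `affKeyFieldName1=customer_ref, affKeyFieldName2=null` is exactly what a node that lacks this configuration produces when it meets binary metadata from a node that has it.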

On Fri, Aug 4, 2017 at 12:54 PM, kotamrajuyashasvi
 wrote:
> Hi
>
> Thanks for the response. When I put cacheKeyConfiguration in ignite
> configuration, the affinity was working. But when I call Cache.Get() in
> client program I'm getting the following error.
>
> "Java exception occurred
> [cls=org.apache.ignite.binary.BinaryObjectException, msg=Binary type has
> different affinity key fields [typeName=PersonPK,
> affKeyFieldName1=customer_ref, affKeyFieldName2=null]]"
>
> I already did LoadCache before running the program.
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Affinity-Key-field-is-not-identified-if-binary-configuration-is-used-on-cache-key-object-tp15959p15990.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Explicit setting of write synchronization mode FULL_SYNC needed for replicated caches?

2017-08-04 Thread Andrey Gura
Hi Muthu,

You understand correctly. If you have write-then-read logic that can
be executed on a backup node for a particular key, then you should use the
FULL_SYNC write synchronization mode.

Another way to get similar behaviour is to set the readFromBackup property
to false. In this case you can still use the PRIMARY_SYNC
synchronization mode, but you should understand what consistency
guarantees you have in case of node failure.
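The two alternatives above can be sketched as cache configuration (the cache name is a placeholder):

```java
// Sketch: two ways to get read-your-writes behaviour on a REPLICATED cache.
CacheConfiguration<Object, Object> ccfg = new CacheConfiguration<>("myCache");
ccfg.setCacheMode(CacheMode.REPLICATED);

// Option 1: every write waits for backups before returning.
ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);

// Option 2: keep the default PRIMARY_SYNC, but force reads to go
// to the primary copy instead of a possibly stale backup:
// ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.PRIMARY_SYNC);
// ccfg.setReadFromBackup(false);
```

Option 1 trades write latency for consistency; option 2 keeps writes fast but gives up the locality benefit of reading from backups.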

On Fri, Aug 4, 2017 at 4:48 AM, Muthu  wrote:
> Hi Folks,
>
> I understand from the docs (https://apacheignite.readme.io/docs/cache-modes)
> that replicated caches are implemented using partitioned caches where every
> key has a primary copy and is also backed up on all other nodes in the
> cluster & that when data is queried lookups would be made from both primary
> & backup on the node for serving the query.
>
> But i see that the default cache write synchronization mode is PRIMARY_SYNC,
> where client will not wait for backups to be updated. Does that mean i have
> to explicitly set it to FULL_SYNC for replicated caches since responses rely
> on lookup of primary & backup?
>
> Regards,
> Muthu
>
> -- The real danger with modern technology isn't that machines will begin to
> think like people, but that people will begin to think like machines.
> -- Faith is to believe what you do not see; the reward of this faith is to
> see what you believe.


Re: Issue with Ignite + Zeppelin

2017-07-20 Thread Andrey Gura
Zeppelin works well with Ignite. It seems you have some configuration or
setup issues.

On 20 July 2017 at 9:56 AM, "Megha Mittal" <
meghamittal1...@gmail.com> wrote:

> Hi,
>
> I added ignite-core jar in the jdbc folder of ignite and even provided jar
> path as an artifact under dependencies section in Zeppelin, but still no
> success.
>
> Can you suggest me some other UI tool that I can coonect to my java
> application. I am not using any xml configurations for Ignite.
>
> Thanks.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Issue-with-Ignite-Zeppelin-tp14990p15161.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Inserting Data From Spark to Ignite

2017-07-18 Thread Andrey Gura
Please try using another JDBC driver version. You can select it via the
JDBC connection string. Check out this article [1].

[1] https://apacheignite.readme.io/docs/jdbc-driver#jdbc-connection
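For context, the driver flavor is selected by the scheme of the JDBC URL. A sketch of the two forms available around Ignite 2.0 (host, port, cache name and config path are placeholder values, not taken from this thread):

```
# Client-node ("cfg") driver: starts an embedded Ignite client node
# from the given XML configuration and talks through it.
jdbc:ignite:cfg://cache=TESTCACHE@file:///path/to/ignite-jdbc-config.xml

# Legacy driver: connects through the Ignite client connector port.
jdbc:ignite://127.0.0.1:11211/TESTCACHE
```

Later 2.x releases also added a thin driver (`jdbc:ignite:thin://host`), which is usually the least troublesome option when it is available.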


On Tue, Jul 18, 2017 at 7:02 PM, limabean  wrote:
> Here is the requested example code demonstrating one of the methods that is
> failing to write to Ignite.
> The readme shows the full stack trace appearing from the app as it runs in
> Spark 2.1
>
> I first run the CreateIgniteCache program and verify using a DB tool that
> the Cache and Table are successfully created in Ignite as expected.  The DB
> tool shows a "schema name" (cache name) of TESTCACHE and a single table in
> the schema called SAMPLETABLE.  SAMPLETABLE has the 3 columns with the
> correct types as expected.
>
> Next I run the TestCacheWrite2 program which receives a
>
>>> java.sql.SQLFeatureNotSupportedException: Updates are not supported.
>
> exception (with this particular example, I was able to work through the
> "cannot connect" error I repeated previously.
>
> This is on apache-ignite-2.0.0-src built from source.  It is my
> understanding this version supports updates.
> Updates (data inserts, etc) work from Java programs.  Spark is failing.
>
> Thanks for the help and advice.
>
> Sample Project:
> https://github.com/graben1437/SparkToIgniteTest/tree/master
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Inserting-Data-From-Spark-to-Ignite-tp14937p15066.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Issue with Ignite + Zeppelin

2017-07-18 Thread Andrey Gura
First of all, I don't recommend using Zeppelin 0.7.1 with Ignite 2.0,
because different versions of Ignite can have backward
compatibility issues.

It's very strange that you don't have any errors in the logs. Could you
please try another JDBC driver? See the docs here [1].


[1] https://apacheignite.readme.io/docs/jdbc-driver#jdbc-connection

On Tue, Jul 18, 2017 at 3:07 PM, Megha Mittal  wrote:
> Hi,
>
> I am using binary release of Zeppelin-0.7.1 .
>
> Here is the cache configuration :
>
> CacheConfiguration itemCacheConfiguration = new
> CacheConfiguration("Item");
>
> itemCacheConfiguration.setMemoryPolicyName(igniteProperties.getMemoryPolicyName());
> itemCacheConfiguration.setCacheMode(CacheMode.PARTITIONED);
>
> itemCacheConfiguration.setIndexedTypes(ItemKey.class, Item.class);
> itemCacheConfiguration.setBackups(1);
> itemCacheConfiguration.setStatisticsEnabled(true);
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Issue-with-Ignite-Zeppelin-tp14990p15056.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Issue with Ignite + Zeppelin

2017-07-18 Thread Andrey Gura
Did you build Zeppelin from sources or use a binary release? From the Ignite
configuration I see that you use Ignite 2.x, but there is still no
Zeppelin release that supports Ignite 2.x.

Also, I asked you to share the cache configuration, but I see only the
Ignite configuration. Could you please share the configuration for the cache Item?



On Tue, Jul 18, 2017 at 2:16 PM, Megha Mittal  wrote:
> Hi,
>
> When I execute the paragraph on zeppelin notebook, I get these logs in
> Zeppelin :
>
> INFO [2017-07-18 16:43:03,550] ({pool-2-thread-17}
> SchedulerFactory.java[jobStarted]:131) - Job
> paragraph_1500273041300_616329143 started by scheduler
> org.apache.zeppelin.interpreter.remote.RemoteInterpretershared_session474696442
>  INFO [2017-07-18 16:43:03,551] ({pool-2-thread-17}
> Paragraph.java[jobRun]:362) - run paragraph 20170717-120041_1127772031 using
> ignite.ignitesql
> org.apache.zeppelin.interpreter.LazyOpenInterpreter@491f00d6
>  WARN [2017-07-18 16:43:03,556] ({pool-2-thread-17}
> NotebookServer.java[afterStatusChange]:2058) - *Job
> 20170717-120041_1127772031 is finished, status: ERROR, exception: null,
> result: %text Failed to establish connection.*
>  INFO [2017-07-18 16:43:03,565] ({pool-2-thread-17}
> SchedulerFactory.java[jobFinished]:137) - Job
> paragraph_1500273041300_616329143 finished by scheduler
> org.apache.zeppelin.interpreter.remote.RemoteInterpretershared_session474696442
>
>
> Also, here are my configurations are :
>
> IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
>
> TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
> TcpDiscoveryS3IpFinder ipFinder = new TcpDiscoveryS3IpFinder();
> ipFinder.setAwsCredentials(new
> BasicAWSCredentials(awsProperties.getAccessKey(),
> awsProperties.getSecretKey()));
> ipFinder.setBucketName(awsProperties.getIgniteBucketName());
> discoverySpi.setIpFinder(ipFinder);
> igniteConfiguration.setDiscoverySpi(discoverySpi);
>
> MemoryPolicyConfiguration memoryPolicyConfiguration = new
> MemoryPolicyConfiguration();
>
> memoryPolicyConfiguration.setName(igniteProperties.getMemoryPolicyName());
>
> memoryPolicyConfiguration.setInitialSize(igniteProperties.getMemoryRegionInitialSize());
>
> memoryPolicyConfiguration.setMaxSize(igniteProperties.getMemoryRegionMaxSize());
>
> memoryPolicyConfiguration.setPageEvictionMode(DataPageEvictionMode.DISABLED);
> memoryPolicyConfiguration.setMetricsEnabled(true);
> MemoryConfiguration memoryConfiguration = new MemoryConfiguration();
> memoryConfiguration.setMemoryPolicies(memoryPolicyConfiguration);
> igniteConfiguration.setMemoryConfiguration(memoryConfiguration);
>
>
> igniteConfiguration.setClientMode(igniteProperties.getClientModeEnabled());
> igniteConfiguration.setCommunicationSpi(new
> TcpCommunicationSpi().setMessageQueueLimit(igniteProperties.getMessageQueueLimit()));
>
>
>
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Issue-with-Ignite-Zeppelin-tp14990p15052.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Issue with Ignite + Zeppelin

2017-07-18 Thread Andrey Gura
Hi,

What about the Zeppelin Ignite interpreter logs? At least that log should
contain something helpful.

Also, could you please share your cache configuration?



On Tue, Jul 18, 2017 at 10:33 AM, Megha Mittal
 wrote:
> Hi,
>
> I am using TcpDiscoveryS3IpFinder for ip discovery of servers and client.
> Can this be the issue that zeppelin is not able to create the connection ?
>
> Also, for the logs on server, i get normal messages like :
>
> 2017-07-18 12:58:57,026 INFO IgniteKernal.info - FreeList [name=null,
> buckets=256, dataPages=2737, reusePages=0]
> 2017-07-18 12:59:57,048 INFO IgniteKernal.info -
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=05598393, name=null, uptime=07:05:49:553]
> ^-- H/N/C [hosts=1, nodes=1, CPUs=4]
> ^-- CPU [cur=0.33%, avg=0.2%, GC=0%]
> ^-- PageMemory [pages=8023]
> ^-- Heap [used=211MB, free=58.71%, comm=512MB]
> ^-- Non heap [used=106MB, free=-1%, comm=108MB]
> ^-- Public thread pool [active=0, idle=0, qSize=0]
> ^-- System thread pool [active=0, idle=6, qSize=0]
> ^-- Outbound messages queue [size=0]
>
>  No other log is there even while zeppelin tries to make a connection.
>
> Let me know if any other information is required.
>
> Thanks.
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Issue-with-Ignite-Zeppelin-tp14990p15045.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Re:Weird index out bound Exception

2017-07-17 Thread Andrey Gura
> Also what's the different between the #getOrCreateCache  and #cache.

#getOrCreateCache() returns the cache if it already exists, or creates it
and then returns the newly created cache.

#cache() just returns the cache by name, or null if the cache doesn't exist.

> So if a pure client query side which API should prefer?

Just use the #cache(String name) method.
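A short sketch of the difference, assuming a started `Ignite` instance and an illustrative cache name:

```java
// Sketch: #getOrCreateCache vs. #cache on a pure client/query side.
IgniteCache<String, String> a = ignite.getOrCreateCache("test"); // creates if absent
IgniteCache<String, String> b = ignite.cache("test");            // returns existing

IgniteCache<String, String> missing = ignite.cache("no-such-cache");
// #cache() never creates anything, so a client using it cannot
// accidentally start an empty cache with default configuration.
assert missing == null;
```

That is why `#cache()` is preferable on a client: if the name is wrong you get an obvious `null` instead of a silently created, misconfigured cache.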

On Mon, Jul 17, 2017 at 4:26 AM, aa...@tophold.com  wrote:
> Also what's the different between the #getOrCreateCache  and #cache.
>
> So if a pure client query side which API should prefer?  very appreciate
> your time. I'm a newer to the Ignite.
>
>
> Regards
> 
> aa...@tophold.com
>
>
> From: aa...@tophold.com
> Sent: 2017-07-17 09:16
> To: user; aaron
> Subject: Re: Re: Weird index out bound Exception
> Also If my server side cache with cache storage in backend, my client side
> also need to configuration this?
>
> as I notice client throw a exception : Spring application context resource
> is not injected;   as my client maybe is a very Simple, not include all the
> configuration available in product.
>
> so Client should not be aware of the server side cache storage.
>
>
> ---
>
>
> Failed to wait for completion of partition map exchange (preloading will not
> start): GridDhtPartitionsExchangeFuture [dummy=false, forcePreload=false,
> reassign=false, discoEvt=DiscoveryCustomEvent [customMsg=null,
> affTopVer=AffinityTopologyVersion [topVer=4, minorTopVer=1],
> super=DiscoveryEvent [evtNode=TcpDiscoveryNode
> [id=a39278cf-4950-49b8-b516-72b821c45088, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1,
> 192.168.1.58, 192.168.56.1, 2001:0:9d38:6ab8:3c42:1ec4:8a6f:e895],
> sockAddrs=[Aaron/192.168.56.1:0, /0:0:0:0:0:0:0:1:0, /127.0.0.1:0,
> /192.168.1.58:0, /2001:0:9d38:6ab8:3c42:1ec4:8a6f:e895:0], discPort=0,
> order=4, intOrder=0, lastExchangeTime=1500253677140, loc=true,
> ver=2.0.0#20170430-sha1:d4eef3c6, isClient=true], topVer=4,
> nodeId8=a39278cf, msg=null, type=DISCOVERY_CUSTOM_EVT,
> tstamp=1500253679239]], crd=TcpDiscoveryNode
> [id=51e73bad-3528-4634-b062-83b57dbdbd0a, addrs=[127.0.0.1, 192.168.1.58,
> 192.168.56.1], sockAddrs=[Aaron/192.168.56.1:47500, /127.0.0.1:47500,
> /192.168.1.58:47500], discPort=47500, order=1, intOrder=1,
> lastExchangeTime=1500253678911, loc=false, ver=2.0.0#20170430-sha1:d4eef3c6,
> isClient=false], exchId=GridDhtPartitionExchangeId
> [topVer=AffinityTopologyVersion [topVer=4, minorTopVer=1], nodeId=a39278cf,
> evt=DISCOVERY_CUSTOM_EVT], added=true, initFut=GridFutureAdapter
> [ignoreInterrupts=false, state=DONE, res=false, hash=146566071], init=false,
> lastVer=null, partReleaseFut=null, affChangeMsg=null, skipPreload=true,
> clientOnlyExchange=false, initTs=1500253679239, centralizedAff=false,
> changeGlobalStateE=null, exchangeOnChangeGlobalState=false,
> forcedRebFut=null, evtLatch=0,
> remaining=[51e73bad-3528-4634-b062-83b57dbdbd0a], srvNodes=[TcpDiscoveryNode
> [id=51e73bad-3528-4634-b062-83b57dbdbd0a, addrs=[127.0.0.1, 192.168.1.58,
> 192.168.56.1], sockAddrs=[Aaron/192.168.56.1:47500, /127.0.0.1:47500,
> /192.168.1.58:47500], discPort=47500, order=1, intOrder=1,
> lastExchangeTime=1500253678911, loc=false, ver=2.0.0#20170430-sha1:d4eef3c6,
> isClient=false]], super=GridFutureAdapter [ignoreInterrupts=false,
> state=DONE, res=class o.a.i.IgniteException: Spring application context
> resource is not injected., hash=1325230042]]
> class org.apache.ignite.IgniteCheckedException: Spring application context
> resource is not injected.
> at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7242)
> at
> org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:258)
> at
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:206)
> at
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:158)
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1812)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: class org.apache.ignite.IgniteException: Spring application
> context resource is not injected.
> at
> org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory.create(CacheJdbcPojoStoreFactory.java:171)
> at
> org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory.create(CacheJdbcPojoStoreFactory.java:100)
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.createCache(GridCacheProcessor.java:1458)
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1931)
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1833)
> at
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.onCacheChangeRequest(CacheAffinitySharedManager.java:379)
> at
> 

Re: yardstick-ignite:ERROR: Driver process has not started on dserver1 during 10.0 seconds.

2017-07-07 Thread Andrey Gura
Hi,

Are there any errors in the driver and server logs?

Also, please note that you should have passwordless SSH access to localhost.

On Fri, Jul 7, 2017 at 3:29 PM, 罗 辉  wrote:
> Hello there
>
>I am new to Yarkstick, when I was running benckmark-run-all.sh with the
> attached property file, I usually see below error:
>
> ERROR: Driver process has not started on dserver1 during 10.0 seconds.
> Type "--help" for usage.
> <20:26:05> Driver is started on dserver1 with id=0
> <20:26:05> Driver is stopped on dserver1
> <20:26:06> Server is stopped on dserver2
> <20:26:06> Server is stopped on dserver3
> <20:26:06> Server is stopped on dserver4
> <20:26:06> Server is stopped on dserver5
>
> what is the reason this error takes place?
> thanks for any idea.
>


Re: How to set timeout for client node

2017-07-07 Thread Andrey Gura
Hi Gracelin,

I'm a bit confused. This particular ClassCastException can only happen
in user code because, as I can see, result set processing is already
in progress, so I don't understand the relation between the
connection timeout and the hang. You should handle this
exception correctly and simply close your result set.
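The exception itself can be avoided on the client side by not assuming the column's Java type. A minimal, self-contained sketch of the defensive pattern; the `Short` value is a stand-in for what `ResultSet.getObject()` might return for a SMALLINT column (this is illustrative code, not from the thread):

```java
public class SafeNumericRead {
    public static void main(String[] args) {
        // Stand-in for ResultSet.getObject(column): some drivers map
        // SMALLINT to java.lang.Short rather than Integer.
        Object raw = Short.valueOf((short) 42);

        // Casting through Number avoids the ClassCastException that a
        // hard (Integer) cast throws when the driver returns a Short.
        int value = ((Number) raw).intValue();
        System.out.println(value); // prints 42
    }
}
```

When the exception does occur, wrap result-set processing in try/finally (or try-with-resources) so the ResultSet and connection are closed and the client exits instead of hanging.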

On Fri, Jul 7, 2017 at 2:20 PM, Gracelin Priya  wrote:
> Hello there,
>
>
>
> Test tool  which populates the gridgain cluster cache , had an error and it
> hanged @ the below point. It is not getting terminated automatically. It had
> to be killed for the client session to exit it.
>
>
>
> Exception in thread "main" java.lang.ClassCastException: java.lang.Short
> cannot be cast to java.lang.Integer
>
> at
> org.apache.ignite.internal.jdbc2.JdbcResultSet.getInt(JdbcResultSet.java:229)
>
>
>
>
>
> Is there any setting in gridgain xml/java which I can set so that the client
> connection should get disconnected after say 3 seconds if any error is seen.
>
>
>
> Any help on this is appreciated.
>
>
>
> Regards,
>
> Priya
>
>


Re: Exception when Ignite server starts up and also when a new node joins the cluster

2017-06-15 Thread Andrey Gura
Hi,

No problems with the provided configuration; there are no errors.
Could you try running your example in different environment setups?

On Thu, Jun 15, 2017 at 7:52 AM, jaipal  wrote:
> Hi Agura.
>
> You can try with this  configuration
> 
> but it is with out write behind
> I have been using jdk7 (build 1.7.0_45-b18) on Red Hat Linux 7.3.
>
> Regards
> Jaipal
>
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Exception-when-Ignite-server-starts-up-and-also-when-a-new-node-joins-the-cluster-tp13684p13797.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Exception when Ignite server starts up and also when a new node joins the cluster

2017-06-14 Thread Andrey Gura
Hi,

I can't run Ignite with the provided configuration because some beans, like
writeBehind_one, don't exist. Without the cache stores I can't reproduce the
problem. Could you please provide a minimal reproducer so that the
problem can be investigated?

Also, could you please share environment details that could be helpful
(OS, JDK implementation, etc.)?

On Wed, Jun 14, 2017 at 8:22 AM, jaipal  wrote:
> I am getting the following exception when starting Ignite in server mode(2.0)
> and from then Ignite is stopping all the caches.
>
> Exception in thread "main" class org.apache.ignite.IgniteException:
> Attempted to release write lock while not holding it [lock=7eff50271580,
> state=0002
> 2639
> at
> org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:949)
> at org.apache.ignite.Ignition.start(Ignition.java:325)
> at com.gvc.impl.IgniteEDSBootstrap.main(IgniteEDSBootstrap.java:31)
> Caused by: class org.apache.ignite.IgniteCheckedException: Attempted to
> release write lock while not holding it [lock=7eff50271580,
> state=00022639
> at
> org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7242)
> at
> org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:258)
> at
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:231)
> at
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:158)
> at
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:150)
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager.onKernalStart0(GridCachePartitionExchangeManager.java:463)
> at
> org.apache.ignite.internal.processors.cache.GridCacheSharedManagerAdapter.onKernalStart(GridCacheSharedManagerAdapter.java:108)
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.onKernalStart(GridCacheProcessor.java:911)
> at
> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1013)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1895)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1647)
> at
> org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1075)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:595)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:519)
> at org.apache.ignite.Ignition.start(Ignition.java:322)
> ... 1 more
> Caused by: java.lang.IllegalMonitorStateException: Attempted to release
> write lock while not holding it [lock=7eff50271580,
> state=00022639
> at
> org.apache.ignite.internal.util.OffheapReadWriteLock.writeUnlock(OffheapReadWriteLock.java:259)
> at
> org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl.writeUnlock(PageMemoryNoStoreImpl.java:495)
> at
> org.apache.ignite.internal.processors.cache.database.tree.util.PageHandler.writeUnlock(PageHandler.java:379)
> at
> org.apache.ignite.internal.processors.cache.database.tree.util.PageHandler.writePage(PageHandler.java:288)
> at
> org.apache.ignite.internal.processors.cache.database.tree.util.PageHandler.initPage(PageHandler.java:225)
> at
> org.apache.ignite.internal.processors.cache.database.DataStructure.init(DataStructure.java:328)
> at
> org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.initTree(BPlusTree.java:796)
> at
> org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.initTree(BPlusTree.java:781)
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataTree.<init>(IgniteCacheOffheapManagerImpl.java:1423)
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.createCacheDataStore0(IgniteCacheOffheapManagerImpl.java:728)
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.createCacheDataStore(IgniteCacheOffheapManagerImpl.java:706)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.<init>(GridDhtLocalPartition.java:163)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.createPartition(GridDhtPartitionTopologyImpl.java:718)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.initPartitions0(GridDhtPartitionTopologyImpl.java:405)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.beforeExchange(GridDhtPartitionTopologyImpl.java:569)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:844)
>   

Re: Application Stack for Heavy transactions sustainability for analytic engine

2017-05-30 Thread Andrey Gura
Hi,

There is no silver-bullet answer for your case. Scalability will
depend on your MongoDB or MySQL setup, data schema, storage engines, etc.
They are different databases with different ways of ensuring scalability,
so you should evaluate those approaches and perhaps build some POCs with
performance tests.

On Tue, May 23, 2017 at 6:46 PM, Pothanaboyina  wrote:
> Hi,
>
> I have a requirement where I collect almost 8 million records which
> should be processed simultaneously and remain available for regular reads.
> we thought of two options
>
> 1)mongo db as database and apache ignite as cache with write back approach.
> 2)mysqldb as database and apache ignite as cache with write back approach.
>
> Please suggest a scalable architecture for my requirement.
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Application-Stack-for-Heavy-transactions-sustainability-for-analytic-engine-tp13095.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Persistent data store error

2017-05-23 Thread Andrey Gura
Hi,

Make sure that you have the cache store configuration on both server and
client nodes.

Also, use only one type of node configuration: declarative (Spring XML)
or programmatic.

What version of Ignite do you use? I need it in order to match the
provided stack trace against the proper version of the code.


On Tue, May 23, 2017 at 2:16 PM, debasish pradhan  wrote:
> hi ,
>
>
> Please find the config file for client .
>
> import com.mchange.v2.c3p0.ComboPooledDataSource;
> import java.io.InputStream;
> import java.math.BigDecimal;
> import java.sql.Types;
> import java.util.ArrayList;
> import java.util.Arrays;
> import java.util.LinkedHashMap;
> import java.util.Properties;
> import javax.cache.configuration.Factory;
> import javax.sql.DataSource;
> import org.apache.ignite.cache.CacheAtomicityMode;
> import org.apache.ignite.cache.CacheMode;
> import org.apache.ignite.cache.QueryEntity;
> import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
> import org.apache.ignite.cache.store.jdbc.JdbcType;
> import org.apache.ignite.cache.store.jdbc.JdbcTypeField;
> import org.apache.ignite.cache.store.jdbc.dialect.BasicJdbcDialect;
> import org.apache.ignite.configuration.BinaryConfiguration;
> import org.apache.ignite.configuration.CacheConfiguration;
> import org.apache.ignite.configuration.IgniteConfiguration;
> import org.apache.ignite.configuration.OdbcConfiguration;
> import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
> import
> org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
> //import com.timesten.jdbc.JdbcOdbcConnection;
> //import com.timesten.jdbc.TimesTenDataSource;
>
> public class ClientConfigurationFactory {
> /** Secret properties loading. **/
> private static final Properties props = new Properties();
>
> static {
> try (InputStream in =
> IgniteConfiguration.class.getClassLoader().getResourceAsStream("secret.properties"))
> {
> props.load(in);
> }
> catch (Exception ignored) {
> // No-op.
> }
> }
>
> /** Helper class for datasource creation. **/
> public static class DataSources {
> public static final ComboPooledDataSource INSTANCE_dataStore =
> createdataStore();
>
> private static ComboPooledDataSource createdataStore() {
> ComboPooledDataSource dataStore = new ComboPooledDataSource();
>
>
> dataStore.setJdbcUrl("jdbc:timesten:client:TTC_Server=localhost;TCP_PORT=53397;TTC_Server_DSN=EMSDSN;UID=test;PWD=test;TTC_Timeout=180");
> dataStore.setUser("kodiak");
> dataStore.setPassword("kodiak");
>
> return dataStore;
> }
> }
>
>
> /**
>  * Configure grid.
>  *
>  * @return Ignite configuration.
>  * @throws Exception If failed to construct Ignite configuration
> instance.
>  **/
> public static IgniteConfiguration createConfiguration() throws Exception
> {
> IgniteConfiguration cfg = new IgniteConfiguration();
>
> cfg.setClientMode(true);
> cfg.setGridName("TestDB1");
>
> TcpDiscoverySpi discovery = new TcpDiscoverySpi();
>
> TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
>
> ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47510"));
>
> discovery.setIpFinder(ipFinder);
>
> cfg.setDiscoverySpi(discovery);
>
> BinaryConfiguration binaryCfg = new BinaryConfiguration();
>
> binaryCfg.setCompactFooter(false);
>
> cfg.setBinaryConfiguration(binaryCfg);
>
> cfg.setOdbcConfiguration(new OdbcConfiguration());
>
> cfg.setCacheConfiguration(cacheEmployeeCache());
>
> return cfg;
> }
>
> /**
>  * Create configuration for cache "EmployeeCache".
>  *
>  * @return Configured cache.
>  * @throws Exception if failed to create cache configuration.
>  **/
> public static CacheConfiguration cacheEmployeeCache() throws Exception {
> CacheConfiguration ccfg = new CacheConfiguration();
>
> ccfg.setName("EmployeeCache");
> ccfg.setCacheMode(CacheMode.PARTITIONED);
> ccfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
>
> CacheJdbcPojoStoreFactory cacheStoreFactory = new
> CacheJdbcPojoStoreFactory();
>
> cacheStoreFactory.setDataSourceFactory(new Factory() {
> /** {@inheritDoc} **/
> @Override public DataSource create() {
> return DataSources.INSTANCE_dataStore;
> };
> });
>
> cacheStoreFactory.setDialect(new BasicJdbcDialect());
>
> cacheStoreFactory.setTypes(jdbcTypeEmployee(ccfg.getName()));
>
> ccfg.setCacheStoreFactory(cacheStoreFactory);
>
> ccfg.setReadThrough(true);
> ccfg.setWriteThrough(true);
>
> ArrayList qryEntities = new ArrayList<>();
>
> QueryEntity qryEntity = new QueryEntity();
>
> 

Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-05-22 Thread Andrey Gura
Hi,

the issue mentioned earlier is related to both pessimistic and optimistic
transactions, so it is still possible that it applies to your case.

Anyway, I don't have any other ideas about the problem. It would be
great to have a reproducer in order to debug the issue, but from my
point of view migrating to a newer version is the best way forward.

On Fri, May 19, 2017 at 12:23 PM, yfernando  wrote:
> The issue you mention doesn't seem to be related to the issue we are having
> as,
> - The entire grid does not lock up as in the situation described in the bug
> - There are no threads blocked or locked when our key lock occurs
> - The bug seems to occur on Optimistic locking whereas our scenario occurs
> on Pessimistic
>
>
> agura wrote
>> There was a problem with incorrect transaction timeout handling [1]
>> that was fixed in Ignite 1.8. It is possible that it is your case.
>>
>> [1] https://issues.apache.org/jira/browse/IGNITE-2797
>
> Another observation we had was that the key that was locked occurred on a
> cache that gets loaded at startup and never changes content. The cache does
> however take part in transactions in terms of cache reads (using
> IgniteCache.get() )
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Pessimistic-TXN-did-not-release-lock-on-a-key-all-subsequent-txns-failed-tp10536p13021.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Affinity Function in Apache Ignite

2017-05-18 Thread Andrey Gura
Hi,

1. RendezvousAffinityFunction is used by default, with 1024 partitions.
2. The implementation is simple enough; just look at the source code of
the RendezvousAffinityFunction class, starting from the assignPartitions method.
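For reference, the highest-random-weight (rendezvous) idea can be sketched in a few lines of plain Java. This is an illustrative model only, not Ignite's actual implementation: the hash function and node ids here are made up.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Objects;

// Toy model of rendezvous (highest-random-weight) hashing: each node id is
// hashed together with the partition number, and the node with the highest
// hash owns the partition.
public class Hrw {
    public static String ownerOf(int partition, List<String> nodeIds) {
        String owner = null;
        int bestHash = Integer.MIN_VALUE;
        for (String id : nodeIds) {
            int h = Objects.hash(id, partition); // stand-in for Ignite's real hash
            if (h > bestHash) {
                bestHash = h;
                owner = id;
            }
        }
        return owner;
    }

    public static void main(String[] args) {
        List<String> nodes = Arrays.asList("node-a", "node-b", "node-c");
        String owner = ownerOf(42, nodes);

        // Key property: removing a node other than the owner never moves
        // the partition, so only data of the departed node is rebalanced.
        List<String> fewer = new ArrayList<>(nodes);
        fewer.remove(owner.equals("node-a") ? "node-b" : "node-a");
        System.out.println(owner.equals(ownerOf(42, fewer))); // prints true
    }
}
```

The stability shown in main is the reason this scheme is attractive for partition-to-node mapping: topology changes cause minimal partition movement.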

On Thu, May 18, 2017 at 9:46 AM, rishi007bansod
 wrote:
> Hi,
>1. What is default affinity function used in ignite for key to partition
> to node mapping?
>2. Also exactly how mapping is done in RendezvousAffinityFunction using
> highest random weight algorithm?
>
> Thanks
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Affinity-Function-in-Apache-Ignite-tp12991.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How to set cache name in Kafka connect?

2017-05-17 Thread Andrey Gura
Hi,

you should get the data streamer using the cache name. Your example
should be modified like this:

Ignition.setClientMode(true);

try (Ignite ignite =
Ignition.start("D:/Applns/apache-ignite-fabric-1.6.0-bin/apache-ignite-fabric-1.6.0-bin/examples/config/example-ignite.xml"))
{
String cacheName = "Democache"; // NAME OF YOUR CACHE

CacheConfiguration cfg = new CacheConfiguration<>();
cfg.setName(cacheName);

IgniteCache cache = ignite.getOrCreateCache(cfg);

KafkaStreamer kafkaStreamer = new KafkaStreamer<>(); // declared outside try so the finally block can stop it

try (IgniteDataStreamer stmr = ignite.dataStreamer(cacheName)) // USE CACHE NAME HERE
{
stmr.allowOverwrite(true);

kafkaStreamer.setIgnite(ignite);
kafkaStreamer.setStreamer(stmr);
kafkaStreamer.setTopic("cachetest3");
kafkaStreamer.setThreads(4);

Properties settings = new Properties();
settings.put("bootstrap.servers",
"192.168.15.120:9092");
settings.put("group.id", "test");
settings.put("zookeeper.connect",
"192.168.15.120:2181");

settings.put("key.serializer","org.apache.kafka.common.serialization.StringSerializer");

settings.put("key.deserializer","org.apache.kafka.common.serialization.StringDeserializer");

settings.put("value.serializer","org.apache.kafka.common.serialization.StringSerializer");

settings.put("value.deserializer","org.apache.kafka.common.serialization.StringDeserializer");

kafka.consumer.ConsumerConfig config =
new ConsumerConfig(settings);
kafkaStreamer.setConsumerConfig(config);

StringDecoder strDecoder = new StringDecoder(new
VerifiableProperties());
kafkaStreamer.setKeyDecoder(strDecoder);
kafkaStreamer.setValueDecoder(strDecoder);

kafkaStreamer.start();
} finally {
kafkaStreamer.stop();
}

}

On Wed, May 17, 2017 at 3:52 PM, Humphrey  wrote:
> Take a look here.
>
> http://apache-ignite-users.70518.x6.nabble.com/Kindly-tell-me-where-to-find-these-jar-files-td12649.html
>
> Here I posted some code.
>
> Humphrey
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/How-to-set-cache-name-in-Kafka-connect-tp12959p12969.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How to get number of cache entries in a particular cache, in a single ignite instance?

2017-05-17 Thread Andrey Gura
Hi,

you can use a "select count(*) from TableName" query. In order to limit
the query to a particular local node, you should set the local flag to
true and execute the query on that node. In other words, you should do
something like this on your node:


IgniteCache cache = ignite.getOrCreateCache(CacheName); // get cache proxy

Query qry = new SqlFieldsQuery("select count(*) from TableName"); // create query
qry.setLocal(true); // make query local

QueryCursor cursor = cache.query(qry); // execute query


If the table and the cache are the same thing in your case, then you
can just query the local cache size without any SQL:

IgniteCache cache = ignite.getOrCreateCache(CacheName); // get cache proxy

int size = cache.localSize(CachePeekMode.ALL);


On Wed, May 17, 2017 at 12:49 PM, blasteralfred  wrote:
> Hi everyone,
>
> I am learning Ignite, and would like to know if this is possible. Whenever a
> new entry is pushed to cache, I want to confirm it by looking the count,
> just like `numrows` of an sql table. Is this possible? I am trying to build
> a stub, which takes cache name as input and return the number of entries in
> it. Kindly help me if somebody know something regarding this.
>
> Thank you.
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/How-to-get-number-of-cache-entries-in-a-particular-cache-in-a-single-ignite-instance-tp12966.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: how to save data to local node and share it with other nodes

2017-05-17 Thread Andrey Gura
Hi,

You can provide an IgnitePredicate instance to your CacheConfiguration
(via CacheConfiguration.setNodeFilter). It allows the cache to be
created only on nodes that satisfy this predicate implementation, but
the cache itself will be available from all nodes that want to operate
on the data in this cache.
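As a self-contained sketch of the idea only (this models the concept, it is not the Ignite API; the node ids and the "data.host" attribute are made up — in real code you would pass an IgnitePredicate over ClusterNode instances to CacheConfiguration.setNodeFilter):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Conceptual model: a node filter is a predicate over cluster nodes that
// decides which nodes host cache data, while every node can still access
// the cache remotely.
public class NodeFilterModel {
    /** Returns the ids of nodes that pass the filter and therefore hold data. */
    public static List<String> hostingNodes(Map<String, Boolean> nodeAttrs,
                                            Predicate<String> filter) {
        List<String> hosts = new ArrayList<>();
        for (String id : nodeAttrs.keySet())
            if (filter.test(id))
                hosts.add(id);
        return hosts;
    }

    public static void main(String[] args) {
        // Node id -> value of a hypothetical "data.host" attribute.
        Map<String, Boolean> attrs = new LinkedHashMap<>();
        attrs.put("a", true);
        attrs.put("b", false);
        attrs.put("c", true);

        // Only nodes flagged as data hosts store cache entries.
        Predicate<String> nodeFilter = id -> Boolean.TRUE.equals(attrs.get(id));

        // Data lives on a and c; node b can still read and write the cache.
        System.out.println(hostingNodes(attrs, nodeFilter)); // prints [a, c]
    }
}
```

The design point is the separation between where data is stored (filtered nodes) and where the cache can be used (all nodes).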

On Tue, May 16, 2017 at 9:09 PM, Libo Yu  wrote:
> Hi all,
>
>
>
> I am trying to figure out how to do this with Ignite Cache:
>
>
>
> I have ignite cache installed on several application servers. Those
> application servers may need the same data from time to time.
>
> Here is my scenario:
>
> After server A gets some data, it saves the data to the cache which is local
> to A. When B needs the same data, B can get data from
>
> A.
>
>
>
> The issue with the Ignite affinity function is that when A tries to save
> the data, the data's partition may be on a different server. That makes
> it really inefficient. Is there a way to save the data locally and share
> it cluster-wide? Thanks.
> clusterwise? Thanks.
>
>
>
> Libo
>
>


Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-05-16 Thread Andrey Gura
Hi,

There was a problem with incorrect transaction timeout handling [1]
that was fixed in Ignite 1.8. It is possible that it is your case.

[1] https://issues.apache.org/jira/browse/IGNITE-2797

On Thu, May 11, 2017 at 1:51 AM, bintisepaha  wrote:
> Hey guys, we had a key lock issue again on 1.7.0. here is a suspicious thread
> dump. Is this helpful for tracking down our issue further?
> we did not see any topology changes or any other exceptions.
>
> Attaching the entire thread dump too tdump.zip
> 
>
> "pub-#7%DataGridServer-Production%" Id=47 in WAITING on
> lock=org.apache.ignite.internal.util.future.GridFutureAdapter$ChainFuture@2094df59
>   at sun.misc.Unsafe.park(Native Method)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>   at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>   at
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:159)
>   at
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:117)
>   at
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4800)
>   at
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4783)
>   at
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1395)
>   at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.get(IgniteCacheProxy.java:956)
>   at
> com.tudor.datagridI.server.tradegen.OrderHolderSaveRunnable.updatePosition(OrderHolderSaveRunnable.java:790)
>   at
> com.tudor.datagridI.server.tradegen.OrderHolderSaveRunnable.cancelPosition(OrderHolderSaveRunnable.java:805)
>   at
> com.tudor.datagridI.server.tradegen.OrderHolderSaveRunnable.cancelExistingTradeOrderForPositionUpdate(OrderHolderSaveRunnable.java:756)
>   at
> com.tudor.datagridI.server.tradegen.OrderHolderSaveRunnable.processOrderHolders(OrderHolderSaveRunnable.java:356)
>   at
> com.tudor.datagridI.server.tradegen.OrderHolderSaveRunnable.run(OrderHolderSaveRunnable.java:109)
>   at
> org.apache.ignite.internal.processors.closure.GridClosureProcessor$C4V2.execute(GridClosureProcessor.java:2184)
>   at
> org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:509)
>   at
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6521)
>   at
> org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:503)
>   at
> org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:456)
>   at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>   at
> org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1161)
>   at
> org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1766)
>   at
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1238)
>   at
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:866)
>   at
> org.apache.ignite.internal.managers.communication.GridIoManager.access$1700(GridIoManager.java:106)
>   at
> org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:829)
>   at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
>
>   Locked synchronizers: count = 1
>  <1237e0be>  - java.util.concurrent.ThreadPoolExecutor$Worker@1237e0be
>
>
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Pessimistic-TXN-did-not-release-lock-on-a-key-all-subsequent-txns-failed-tp10536p12611.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: OOM Issue with Eviction Event listener - Ignite 1.9.0

2017-05-16 Thread Andrey Gura
Hi,

anyway, such an increase in memory consumption looks too suspicious to me.
Could you please provide more information about your use case: cluster
topology, Ignite and cache configurations? A simple reproducer would be
great in order to analyse the problem.

On Mon, May 1, 2017 at 11:05 PM, Pradeep Badiger
<pradeepbadi...@fico.com> wrote:
> Hi Andrey,
>
> I tried with -Xms3072m -Xmx3072m and I still get OOM. If I try with a 4GB heap 
> setting then it works with GC running continuously. With 8GB, it works 
> without any issues.
>
> Is it just that it needs more heap to perform unmarshalling and give the 
> evicted entry to the listener?
>
> [15:33:08] Ignite node started OK (id=3a14ca7d, 
> grid=IgniteWithCacheEvictListener)
> [15:33:08] Topology snapshot [ver=1, servers=1, clients=0, CPUs=8, heap=3.0GB]
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at java.util.Arrays.copyOf(Arrays.java:3236)
> at java.lang.StringCoding.safeTrim(StringCoding.java:79)
> at java.lang.StringCoding.encode(StringCoding.java:365)
> at java.lang.String.getBytes(String.java:941)
> at 
> org.apache.ignite.internal.binary.BinaryWriterExImpl.doWriteString(BinaryWriterExImpl.java:435)
> at 
> org.apache.ignite.internal.binary.BinaryWriterExImpl.writeStringField(BinaryWriterExImpl.java:1102)
> at 
> org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.write(BinaryFieldAccessor.java:506)
> at 
> org.apache.ignite.internal.binary.BinaryClassDescriptor.write(BinaryClassDescriptor.java:784)
> at 
> org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal0(BinaryWriterExImpl.java:206)
> at 
> org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:147)
> at 
> org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:134)
> at 
> org.apache.ignite.internal.binary.GridBinaryMarshaller.marshal(GridBinaryMarshaller.java:239)
> at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.marshalToBinary(CacheObjectBinaryProcessorImpl.java:521)
> at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.toBinary(CacheObjectBinaryProcessorImpl.java:914)
> at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.toCacheObject(CacheObjectBinaryProcessorImpl.java:859)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheContext.toCacheObject(GridCacheContext.java:1792)
> at 
> org.apache.ignite.internal.processors.cache.local.atomic.GridLocalAtomicCache.updateAllInternal(GridLocalAtomicCache.java:834)
> at 
> org.apache.ignite.internal.processors.cache.local.atomic.GridLocalAtomicCache.put0(GridLocalAtomicCache.java:147)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2276)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2253)
> at 
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.put(IgniteCacheProxy.java:1375)
> at 
> com.example.IgniteWithCacheEvictListener.main(IgniteWithCacheEvictListener.java:63)
>
> -Original Message-
> From: Andrey Gura [mailto:ag...@apache.org]
> Sent: Monday, May 01, 2017 11:34 AM
> To: user@ignite.apache.org
> Subject: Re: OOM Issue with Eviction Event listener - Ignite 1.9.0
>
> Hi,
>
> The listener just leads to instantiation of additional Event objects. You 
> should give your Java process more memory.
>
> On Mon, May 1, 2017 at 5:54 PM, Pradeep Badiger <pradeepbadi...@fico.com> 
> wrote:
>> Hi,
>>
>>
>>
>> I am facing an OOM Exception when eviction policy is turned on. I have
>> attached an eclipse project that has two test programs. One is set
>> with eviction event listener and another one is not.
>>
>>
>>
>> The test program with eviction listener throws an OOM error almost
>> immediately after the ignite is initialized. The one without the
>> listener works fine.
>>
>>
>>
>> I ran both the test programs with -Xmx512m -Xms512m.
>>
>>
>>
>> Can someone let me know if there are any issues with my configurations?
>>
>>
>>
>> Thanks,
>>
>> Pradeep V.B.
>>
>> This email and any files transmitted with it are confidential,
>> proprietary and intended solely for the individual or entity to whom they 
>> are addressed.
>> If you have received this email in error please delete it immediately.
> This email and any files transmitted with it are confidential, proprietary 
> and intended solely for the individual or entity to whom they are addressed. 
> If you have received this email in error please delete it immediately.


Re: vertx-ignite

2017-05-10 Thread Andrey Gura
Anil,

What version of vertx-ignite, or of Ignite itself, do you use?

The provided ignite.xml is missing the minimal configuration that is
mandatory for the Ignite cluster manager for Vert.x (see
default-ignite.xml for an example).


On Tue, May 2, 2017 at 9:18 AM, Anil <anilk...@gmail.com> wrote:
>
> Hi Andrey,
>
> Apologies for late reply. I don't have any exact reproduce. I can see this
> log frequently in our logs.
>
> attached the ignite.xml.
>
> Thanks.
>
>
>
> On 26 April 2017 at 18:32, Andrey Gura <ag...@apache.org> wrote:
>>
>> Anil,
>>
>> what kind of lock do you mean? What are the steps to reproduce? What
>> version of vertx-ignite do you use, and what is your configuration?
>>
>> On Wed, Apr 26, 2017 at 2:16 PM, Anil <anilk...@gmail.com> wrote:
>> > HI,
>> >
>> > I am using vertx-ignite and when node is left the topology, lock is not
>> > getting released and whole server is not responding.
>> >
>> > 2017-04-26 04:09:15 WARN  vertx-blocked-thread-checker
>> > BlockedThreadChecker:57 - Thread
>> > Thread[vert.x-worker-thread-82,5,ignite]
>> > has been blocked for 2329981 ms, time limit is 6
>> > io.vertx.core.VertxException: Thread blocked
>> > at sun.misc.Unsafe.park(Native Method)
>> > at
>> > java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>> > at
>> >
>> > java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>> > at
>> >
>> > java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>> > at
>> >
>> > java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>> > at
>> >
>> > org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:161)
>> > at
>> >
>> > org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:119)
>> > at
>> >
>> > org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.get0(GridDhtAtomicCache.java:488)
>> > at
>> >
>> > org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4663)
>> > at
>> >
>> > org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1388)
>> > at
>> >
>> > org.apache.ignite.internal.processors.cache.IgniteCacheProxy.get(IgniteCacheProxy.java:1117)
>> > at io.vertx.spi.cluster.ignite.impl.MapImpl.get(MapImpl.java:81)
>> > at
>> > io.vertx.core.impl.HAManager.chooseHashedNode(HAManager.java:590)
>> > at io.vertx.core.impl.HAManager.checkSubs(HAManager.java:519)
>> > at io.vertx.core.impl.HAManager.nodeLeft(HAManager.java:305)
>> > at io.vertx.core.impl.HAManager.access$100(HAManager.java:107)
>> > at io.vertx.core.impl.HAManager$1.nodeLeft(HAManager.java:157)
>> > at
>> >
>> > io.vertx.spi.cluster.ignite.IgniteClusterManager.lambda$null$4(IgniteClusterManager.java:254)
>> > at
>> >
>> > io.vertx.spi.cluster.ignite.IgniteClusterManager$$Lambda$36/837728834.handle(Unknown
>> > Source)
>> > at
>> >
>> > io.vertx.core.impl.ContextImpl.lambda$executeBlocking$1(ContextImpl.java:271)
>> > at
>> > io.vertx.core.impl.ContextImpl$$Lambda$13/116289363.run(Unknown
>> > Source)
>> > at io.vertx.core.impl.TaskQueue.lambda$new$0(TaskQueue.java:60)
>> > at io.vertx.core.impl.TaskQueue$$Lambda$12/443290224.run(Unknown
>> > Source)
>> > at
>> >
>> > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> > at
>> >
>> > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> > at java.lang.Thread.run(Thread.java:745)
>> >
>> > was it a known issue ?
>> >
>> > Thanks
>
>


Re: How to execute ignite jdbc query on specific node

2017-05-09 Thread Andrey Gura
OK, it means that only sub-queries will be executed by affinity on
collocated data. So the answer to your question is no, it is not
possible to execute the whole query on one node at the moment.


On Tue, May 9, 2017 at 7:45 PM, Ajay  wrote:
> I tried with
> jdbc:ignite:cfg://cache=persons:collocated=true@file:///dir/ignite_client_config.xml
> but the query was still executed on both servers.
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/How-to-execute-ignite-jdbc-query-on-specific-node-tp12562p12571.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: https://issues.apache.org/jira/browse/IGNITE-3401

2017-05-09 Thread Andrey Gura
No, it isn't fixed yet. It should be fixed in Ignite 2.1, I hope.

On Tue, May 9, 2017 at 6:52 PM, Ranjit Sahu  wrote:
> Hi Team,
>
> Is this issue fixed ? If yes on which version? IS there any work around to
> avoid this ?
>
> Thanks,
> Ranjit


Re: How to execute ignite jdbc query on specific node

2017-05-09 Thread Andrey Gura
You can try to add the "collocated=true" parameter to your JDBC
connection string, but note that it will affect all queries executed
via that JDBC connection.

On Tue, May 9, 2017 at 7:25 PM, Ajay  wrote:
> Hi
>
> Thanks quick reply,i did subscription.
>
> Coming to question:
>
> I loaded data using an affinity key, so all keys with the same affinityKey
> will be stored on the same node, right? If yes, now I want to select a
> specific key, and I observed in the logs that the query was executed on
> both servers. If the data resides on a single node, why was the query
> executed on both servers? Is there any way to execute the query only on
> the node where the data resides instead of on all nodes?
>
>
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/How-to-execute-ignite-jdbc-query-on-specific-node-tp12562p12567.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: OOM Issue with Eviction Event listener - Ignite 1.9.0

2017-05-01 Thread Andrey Gura
Hi,

The listener just leads to instantiation of additional Event objects.
You should give your Java process more memory.

On Mon, May 1, 2017 at 5:54 PM, Pradeep Badiger  wrote:
> Hi,
>
>
>
> I am facing an OOM Exception when eviction policy is turned on. I have
> attached an eclipse project that has two test programs. One is set with
> eviction event listener and another one is not.
>
>
>
> The test program with eviction listener throws an OOM error almost
> immediately after the ignite is initialized. The one without the listener
> works fine.
>
>
>
> I ran both the test programs with -Xmx512m -Xms512m.
>
>
>
> Can someone let me know if there are any issues with my configurations?
>
>
>
> Thanks,
>
> Pradeep V.B.
>
> This email and any files transmitted with it are confidential, proprietary
> and intended solely for the individual or entity to whom they are addressed.
> If you have received this email in error please delete it immediately.


Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-04-26 Thread Andrey Gura
Hi,

the fix is merged into ignite-2.0, which will be released soon.

On Tue, Apr 25, 2017 at 4:52 PM, bintisepaha  wrote:
> We have not been able to migrate to 1.8 yet due to business reasons, and now
> we are planning to migrate to 1.9 directly and are testing it.
>
> Do you have an ETA for the fix? Also do you have the code for reproducing
> it?
>
> We are also changing our clients to become long running servers so that the
> topology does not change a lot.
>
> Thanks,
> Binti
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Pessimistic-TXN-did-not-release-lock-on-a-key-all-subsequent-txns-failed-tp10536p12237.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: vertx-ignite

2017-04-26 Thread Andrey Gura
Anil,

what kind of lock do you mean? What are the steps to reproduce? What
version of vertx-ignite do you use, and what is your configuration?

On Wed, Apr 26, 2017 at 2:16 PM, Anil  wrote:
> HI,
>
> I am using vertx-ignite and when node is left the topology, lock is not
> getting released and whole server is not responding.
>
> 2017-04-26 04:09:15 WARN  vertx-blocked-thread-checker
> BlockedThreadChecker:57 - Thread Thread[vert.x-worker-thread-82,5,ignite]
> has been blocked for 2329981 ms, time limit is 6
> io.vertx.core.VertxException: Thread blocked
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> at
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:161)
> at
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:119)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.get0(GridDhtAtomicCache.java:488)
> at
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4663)
> at
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1388)
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.get(IgniteCacheProxy.java:1117)
> at io.vertx.spi.cluster.ignite.impl.MapImpl.get(MapImpl.java:81)
> at io.vertx.core.impl.HAManager.chooseHashedNode(HAManager.java:590)
> at io.vertx.core.impl.HAManager.checkSubs(HAManager.java:519)
> at io.vertx.core.impl.HAManager.nodeLeft(HAManager.java:305)
> at io.vertx.core.impl.HAManager.access$100(HAManager.java:107)
> at io.vertx.core.impl.HAManager$1.nodeLeft(HAManager.java:157)
> at
> io.vertx.spi.cluster.ignite.IgniteClusterManager.lambda$null$4(IgniteClusterManager.java:254)
> at
> io.vertx.spi.cluster.ignite.IgniteClusterManager$$Lambda$36/837728834.handle(Unknown
> Source)
> at
> io.vertx.core.impl.ContextImpl.lambda$executeBlocking$1(ContextImpl.java:271)
> at io.vertx.core.impl.ContextImpl$$Lambda$13/116289363.run(Unknown
> Source)
> at io.vertx.core.impl.TaskQueue.lambda$new$0(TaskQueue.java:60)
> at io.vertx.core.impl.TaskQueue$$Lambda$12/443290224.run(Unknown
> Source)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
>
> was it a known issue ?
>
> Thanks


Re: Unable to inject spring configuration on Cachestore

2017-04-24 Thread Andrey Gura
Hi Guilherme,

Ignite creates a Spring application context only if it is started via the
Ignition.start(springUrlCfg) method, and it can work only with an XML
application context.

Try to use IgniteSpringBean. This class implements the
ApplicationContextAware interface, so your Spring application context
will be propagated to the Ignite instance.

Another way is to use the IgniteSpring.start() factory method, which takes
an optional Spring application context.
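Both approaches can be sketched as follows. This is a minimal sketch, assuming the ignite-spring module is on the classpath; the bean wiring and class names are illustrative, not taken from the original question.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCheckedException;
import org.apache.ignite.IgniteSpring;
import org.apache.ignite.IgniteSpringBean;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class IgniteSetup {

    // Option 1: IgniteSpringBean implements ApplicationContextAware, so
    // Spring injects its own context into the bean, and Ignite can later
    // resolve @SpringResource fields in cache stores against that context.
    @Bean
    public IgniteSpringBean ignite(IgniteConfiguration cfg) {
        IgniteSpringBean bean = new IgniteSpringBean();
        bean.setConfiguration(cfg);
        return bean;
    }

    // Option 2: start Ignite explicitly and hand it the context yourself.
    public static Ignite startWithContext(IgniteConfiguration cfg,
                                          ApplicationContext ctx)
        throws IgniteCheckedException {
        return IgniteSpring.start(cfg, ctx);
    }
}
```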

On Mon, Apr 24, 2017 at 1:09 PM, Guilherme Melo  wrote:
> Hello,
> I am trying to utilize @SpringResource and
> @SpringApplicationContextResource, however both are null on my CacheStore,
> has anyone encountered this before?
>
> Below is my config:
>
> @Bean
> public IgniteConfiguration getIgniteConfiguration() {
> IgniteConfiguration cfg = new IgniteConfiguration();
> cfg.setClientMode(false);
> CacheConfiguration orderCache = new CacheConfiguration();
> orderCache.setName("testCache");
> orderCache.setBackups(1);
> orderCache.setCacheMode(CacheMode.PARTITIONED);
>
> orderCache.setCacheWriterFactory(FactoryBuilder.factoryOf(CacheStoreWithInjectedBeans.class));
> orderCache.setReadThrough(true);
> orderCache.setWriteThrough(true);
>
> cfg.setCacheConfiguration(
> orderCache);
> return cfg;
> }
>
> and the class:
>
> public class CacheStoreWithInjectedBeans implements CacheWriter<Object, String> {
> @SpringResource
> private MyService myService;
> @SpringApplicationContextResource
> private Object appCtx;
> @Override
> public void write(Cache.Entry entry)
> throws CacheWriterException {
> myService.apply(entry.getValue());
> }
> @Override
> public void writeAll(Collection<Cache.Entry<? extends Object, ? extends String>> entries) throws CacheWriterException {
>
> }
> @Override
> public void delete(Object key) throws CacheWriterException {
>
> }
> @Override
> public void deleteAll(Collection keys) throws CacheWriterException {
>
> }
>
> }
>
> Cheers,
> Guilherme Melo
> www.gmelo.org


Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-04-24 Thread Andrey Gura
Hi Binti,

unfortunately I don't have any other ideas why this NPE happens
(except for the two mentioned above). But I believe that the fix for this
problem will prevent the NPE in all other cases.

As for the key remaining locked: in some cases the mentioned NPE led to it.

BTW, did you migrate to Ignite 1.8?


On Fri, Apr 21, 2017 at 10:39 PM, bintisepaha  wrote:
> Andrey, we never start a txn on the client side. The key that gets locked on
> our end and stays locked even after a successful txn is never read or
> updated in a txn from the client side. Are you also able to reproduce the
> key remaining locked issue?
>
> Thanks,
> Binti
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Pessimistic-TXN-did-not-release-lock-on-a-key-all-subsequent-txns-failed-tp10536p12164.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-04-20 Thread Andrey Gura
Hi,

I have a simple reproducer and JIRA ticket is created:
https://issues.apache.org/jira/browse/IGNITE-5041

The workaround looks strange, but it should work: call
getOrCreateCache() on the client node for all your caches, or disable
deadlock detection (see the ticket)
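Both workarounds can be sketched as follows. The cache names are placeholders, and the system property name is my assumption of the deadlock-detection switch involved; treat this as a sketch, not the ticket's exact instructions.

```java
import org.apache.ignite.Ignite;

public final class DeadlockWorkaround {

    // Workaround 1: touch every cache on the client node at startup so the
    // client has a started cache context for each of them. Deadlock-detection
    // messages mentioning any cache can then be unmarshalled without an NPE.
    static void warmUpCaches(Ignite ignite) {
        for (String name : new String[] {"cache1", "cache2"})
            ignite.getOrCreateCache(name);
    }

    // Workaround 2 (assumed property name): disable deadlock detection by
    // setting its iteration limit to zero before the node starts.
    static void disableDeadlockDetection() {
        System.setProperty("IGNITE_TX_DEADLOCK_DETECTION_MAX_ITERS", "0");
    }
}
```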


On Thu, Apr 20, 2017 at 12:32 AM, bintisepaha  wrote:
> This is positive news Andrey. Thanks a lot.
>
> Please keep us posted about reproducing this. We are definitely not using
> node filters...and we suspect topology changes to be causing issues, but
> irrespective of that, we are not able to reproduce it. we also do not see
> deadlock issues reported anywhere. the last time we got a key lock last
> week, we did not see the NPE but only topology change for client.
>
> Thanks,
> Binti
>
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Pessimistic-TXN-did-not-release-lock-on-a-key-all-subsequent-txns-failed-tp10536p12094.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-04-19 Thread Andrey Gura
Hi,

I've reproduced the problem and got exactly the same stack traces for
NullPointerException and IgniteTxTimeoutCheckedException that you
mentioned earlier.

But my case looks rather complex. I started three nodes, with cache1 on
nodes N1, N2 and N3, and cache2 on nodes N1 and N2. Then a deadlock
was created between nodes N1 and N2 in which both caches are
participants (without a transaction timeout). Finally, I tried to
update a key (one that participates in the deadlock) for cache2 from N3 in
a transaction with a timeout. As a result, deadlock detection receives a
message from nodes N1 and/or N2 that contains information about cache1,
which isn't started on N3. This leads to the NPE and the timeout.

I think a similar situation can happen in case of a deadlock between
two caches on server nodes and an attempt to update a key from a client node.
I will try this idea tomorrow, hopefully.

This problem should exist on any Ignite version starting from 1.7.

On Mon, Apr 17, 2017 at 11:46 PM, bintisepaha  wrote:
> Looking further, I see this in the failed exception stack trace. The topology
> did change but it is only a client that joined, do you think that has any
> correlation to the key being locked?
>
> [INFO ] 2017-04-13 14:15:44.360 [pub-#44%DataGridServer-Production%]
> OrderHolderSaveRunnable - Updating PositionKey: PositionId [fundAbbrev=BVI,
> clearBrokerId=12718, insIid=679675, strategy=AFI, traderId=6531,
> valueDate=19000101]
> [14:15:46] Topology snapshot [ver=1980, servers=16, clients=82, CPUs=273,
> heap=850.0GB]
> [ERROR] 2017-04-13 14:15:54.348 [pub-#44%DataGridServer-Production%]
> OrderHolderSaveRunnable - Received Exception - printing on Entry
> javax.cache.CacheException: class
> org.apache.ignite.transactions.TransactionTimeoutException: Failed to
> acquire lock within provided timeout for transaction [timeout=1,
> tx=GridNearTxLocal [mappings=IgniteTxMappingsImpl [],
> nearLocallyMapped=false, colocatedLocallyMapped=false, needCheckBackup=null,
> hasRemoteLocks=true, thread=pub-#44%DataGridServer-Production%,
> mappings=IgniteTxMappingsImpl [], super=GridDhtTxLocalAdapter
> [nearOnOriginatingNode=false, nearNodes=[], dhtNodes=[], explicitLock=false,
> super=IgniteTxLocalAdapter [completedBase=null, sndTransformedVals=false,
> depEnabled=false, txState=IgniteTxStateImpl [activeCacheIds=GridLongList
> [idx=2, arr=[2062286236,812449097]], txMap={IgniteTxKey
> [key=KeyCacheObjectImpl [val=OrderKey [traderId=6531, orderId=12382604],
> hasValBytes=true], cacheId=2062286236]=IgniteTxEntry [key=KeyCacheObjectImpl
> [val=OrderKey [traderId=6531, orderId=12382604], hasValBytes=true],
> cacheId=2062286236, partId=-1, txKey=IgniteTxKey [key=KeyCacheObjectImpl
> [val=OrderKey [traderId=6531, orderId=12382604], hasValBytes=true],
> cacheId=2062286236], val=[op=READ, val=CacheObjectImpl [val=TradeOrder
> [orderKey=OrderKey [traderId=6531, orderId=12382604], insIid=679675,
> clearBrokerId=12718, strategy=AFI, time=2017-04-13 13:30:00.0,
> settlement=2017-04-19 00:00:00.0, quantity=-6800.0, insType=STK, version=1,
> userId=3081, created=2017-04-13 13:29:47.831, status=open, allocFund=STD,
> isAlloc=Y, clearAgent=MSCOEPB, execBroker=DBKSE, initiate=L,
> notes=ClOrdId[20170413-Y47D580RHH99], allocRule=H2L, comType=T, comTurn=N,
> comImplied=N, trdCur=USD, trdFreeze=N, kindFlag=, lastRepo=, exCpn=,
> generatedTime=Thu Apr 13 14:15:02 EDT 2017, batchMatchFlag=N,
> commission=0.003, trdRate=1.0, gross=, delivInstruct=null, startflys=3,
> parentId=null, linkId=null, repo=N, repoRate=null, repoCalendar=null,
> repoStartDate=null, repoEndDate=null, xiid=null, quantityCurr=null,
> masterOrderId=null, unfilledQty=800.0, avgFillPrice=18.0021324, psRuleId=6,
> origDate=2017-04-13 00:00:00.0, postingId=2, executingUserId=5647,
> repoCloseDate=1900-01-01 00:00:00.0, repoPrice=0.0, directFxFlag=N, tax=0.0,
> fixStatusId=58, txnTypeId=0, yield=null, valueDate=null,
> interestOnlyRepoFlag=null, orderGroupId=0, fundingDate=2017-04-19
> 00:00:00.0, execBrokerId=12038, branchBrokerId=7511, fillOrigUserId=3081,
> initialMargin=null, cmmsnChgUserId=0, cmmsnChgReasonId=0, fixingSourceId=0,
> orderDesignationId=0, riskRewardId=0, placementTime=2017-04-13 13:29:47.657,
> initialInvestment=0.0, equityFxBrokerTypeId=0, execBranchBrokerId=0,
> createUserId=3081, targetAllocFlag=N, pvDate=null, pvFactor=null, pvId=0,
> executionTypeId=0, borrowScheduleId=0, borrowScheduleTypeId=0,
> marketPrice=null, interestAccrualDate=null, sourceAppId=103,
> initiatingUserId=6531, isDiscretionary=Y, traderBsssc=S, clearingBsssc=S,
> executingBsssc=S, shortsellBanApproverUserId=null, intendedQuantity=-7600.0,
> lastUpdated=2017-04-13 14:15:02.147, traderStrategyId=24686,
> businessDate=2017-04-13 00:00:00.0, firstExecutionTime=2017-04-13
> 13:29:47.657, doNotBulkFlag=null, trimDb=trim_grn, trades=[Trade
> [tradeKey=TradeKey [tradeId=263603637, tradeId64=789971421, traderId=6531],
> orderId=12382604, ftbId=2023850, quantity=-985.0, fundAbbrev=TRCP,
> 

Re: Ignite on FreeBSD 11 and OpenJDK

2017-04-13 Thread Andrey Gura
Kamil,

thanks a lot for your help. As I said before, I need some time to
analyse the problem. After that I'll create a JIRA ticket and share the
link to it in this thread.

Thank you again!

On Wed, Apr 12, 2017 at 11:56 PM, Kamil Misuth <ki...@ethome.sk> wrote:
> Hi Andrey,
>
> I've built ignite-3477-master (commit hash 5839f481b7) today.
> Apart from the fact that some other Configuration APIs changed,
> CacheMemoryMode disapeared. I guess this is related to
> http://apache-ignite-developers.2346864.n4.nabble.com/IGNITE-4758-introducing-cache-memory-policies-td14957.html
>
> Anyway, I've got the following exception on the first try, since
> ignite-indexing 2.0.0 is compiled against H2 1.4.191 and Spring Boot
> 1.5.2.RELEASE managed H2 version is 1.4.193 and apparently org.h2.result.Row
> changed from abstract class to an interface between the two versions.
>
> Caused by: java.lang.IncompatibleClassChangeError: class
> org.apache.ignite.internal.processors.query.h2.opt.GridH2Row has interface
> org.h2.result.Row as super class
>
> After fixing the dependency issue, I've tried to create two node cluster and
> JVM SIGSEGVed in DirectNioClientWorker just as before.
>
> Attaching crash logs from both nodes (running on the same machine FreeBSD
> 11) to this e-mail.
>
> Kamil
>
>
> On 2017-04-12 00:49, Kamil Misuth wrote:
>>
>> Sure thing.
>>
>> I will check out ignite-3477-master as soon as I have some time tomorrow.
>>
>> Kamil
>>
>>
>> On 04/11/2017 05:48 PM, Andrey Gura wrote:
>>>
>>> Thanks for provided information. I need additional time for problem
>>> investigation.
>>>
>>> You can also try code from ignite-3477-master branch. This branch
>>> contains many memory related fixes but it isn't stable yet.
>>>
>>> On Mon, Apr 10, 2017 at 11:37 PM, kimec.ethome.sk <ki...@ethome.sk>
>>> wrote:
>>>>
>>>> Hi Andrey,
>>>>
>>>> sorry, I've got ahead of my self.
>>>>
>>>> I am on FreeBSD 11.0-RELEASE-p1 amd64
>>>> With OpenJDK Runtime Environment 1.8.0_121-b13 Oracle Corporation
>>>> OpenJDK
>>>> 64-Bit Server VM 25.121-b13
>>>> hw.model: Intel(R) Core(TM) i7-4702MQ CPU @ 2.20GHz
>>>> hw.machine_arch: amd64
>>>> hw.ncpu: 8
>>>> hw.physmem: 8251813888
>>>>
>>>> Core dump is 1 GB, so I guess that is no go. I am attaching crash log to
>>>> this e-mail.
>>>> I have uploaded the project I've used during my testing here
>>>> https://github.com/kimec/ignite-spring-boot .
>>>> The sample works perfectly well with stock ignite-core on Linux OpenJDK
>>>> 8
>>>> xs64 CentOS 7 .
>>>>
>>>> Kamil
>>>>
>>>>
>>>>
>>>> On 2017-04-10 12:24, Andrey Gura wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> could you please share core dump file? If not, it would be helpful to
>>>>> know what is CPU architecture on this server.
>>>>>
>>>>> On Mon, Apr 10, 2017 at 2:53 AM, Kamil Misuth <ki...@ethome.sk> wrote:
>>>>>>
>>>>>> Greetings,
>>>>>>
>>>>>> OpenJDK (7 and 8) HotSpot JVM SIGSEGVs on FreeBSD 11 as soon as node
>>>>>> joins a
>>>>>> topology and starts to communicate via DirectNioClientWorker.
>>>>>> The root cause is DirectByteBufferStreamImpl (both versions) which
>>>>>> uses
>>>>>> GridUnsafe.getXXX/putXXX(Object object, offset, value) methods to
>>>>>> manipulate
>>>>>> DirectByteBuffer, whereas it should really be using
>>>>>> GridUnsafe.getXXX/putXXX(address, value), since DirectByteBuffer is
>>>>>> allocated on C heap (off java heap).
>>>>>> Notice that at least one instance of the same problem is known to
>>>>>> exist
>>>>>> in
>>>>>> another project using Unsafe
>>>>>> https://issues.apache.org/jira/browse/CASSANDRA-8325 .
>>>>>> The OpenJDK source of Unsafe is more or less clear on this
>>>>>>
>>>>>>
>>>>>> http://hg.openjdk.java.net/jdk8u/jdk8u60/jdk/file/935758609767/src/share/classes/sun/misc/Unsafe.java#l391
>>>>>> I have prepared a simple fix here
>>>>>>
>>>>>>
>>>>>> https://github.com/apache/ignite/compare/1.9.0-rc2...kimec:freebsd-support
>>>>>>  .
>>>>>> However, I am not sure if the solution is right in regard to overall
>>>>>> ignite
>>>>>> performance.
>>>>>> I've tried to compile ignite-core with tests and after applying my
>>>>>> changes
>>>>>> was able to pass all the basic stuff until the performance test stage
>>>>>> at
>>>>>> which point my machine run out of RAM and swap space (some 10 GB)...
>>>>>> Not
>>>>>> sure if this is how the tests are supposed to be. After compiling with
>>>>>> -DskipTests I was able to create FreeBSD 11 - CentOS 7 two node
>>>>>> cluster
>>>>>> and
>>>>>> everything seemed OK (the two nodes shared an IGFS instance backed by
>>>>>> replicated caches).
>>>>>> Please note that OpenJDK on different systems as well as Oracle JDK
>>>>>> (via
>>>>>> Linux compatility layer) on FreeBSD seem to be more forgiving and does
>>>>>> not
>>>>>> SIGSEGV.
>>>>>> I've based my branch on 1.9.0-rc2 since tag 1.9.0 has already POM with
>>>>>> version 2.0.
>>>>>>
>>>>>> Kamil


Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-04-12 Thread Andrey Gura
As far as I know, no one else has such problems.

On Wed, Apr 12, 2017 at 4:15 PM, bintisepaha  wrote:
> Thanks for trying, is the node filter an issue someone else is seeing?
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Pessimistic-TXN-did-not-release-lock-on-a-key-all-subsequent-txns-failed-tp10536p11905.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite on FreeBSD 11 and OpenJDK

2017-04-11 Thread Andrey Gura
Thanks for provided information. I need additional time for problem
investigation.

You can also try code from ignite-3477-master branch. This branch
contains many memory related fixes but it isn't stable yet.

On Mon, Apr 10, 2017 at 11:37 PM, kimec.ethome.sk <ki...@ethome.sk> wrote:
> Hi Andrey,
>
> sorry, I've got ahead of my self.
>
> I am on FreeBSD 11.0-RELEASE-p1 amd64
> With OpenJDK Runtime Environment 1.8.0_121-b13 Oracle Corporation OpenJDK
> 64-Bit Server VM 25.121-b13
> hw.model: Intel(R) Core(TM) i7-4702MQ CPU @ 2.20GHz
> hw.machine_arch: amd64
> hw.ncpu: 8
> hw.physmem: 8251813888
>
> Core dump is 1 GB, so I guess that is no go. I am attaching crash log to
> this e-mail.
> I have uploaded the project I've used during my testing here
> https://github.com/kimec/ignite-spring-boot .
> The sample works perfectly well with stock ignite-core on Linux OpenJDK 8
> xs64 CentOS 7 .
>
> Kamil
>
>
>
> On 2017-04-10 12:24, Andrey Gura wrote:
>>
>> Hi,
>>
>> could you please share core dump file? If not, it would be helpful to
>> know what is CPU architecture on this server.
>>
>> On Mon, Apr 10, 2017 at 2:53 AM, Kamil Misuth <ki...@ethome.sk> wrote:
>>>
>>> Greetings,
>>>
>>> OpenJDK (7 and 8) HotSpot JVM SIGSEGVs on FreeBSD 11 as soon as node
>>> joins a
>>> topology and starts to communicate via DirectNioClientWorker.
>>> The root cause is DirectByteBufferStreamImpl (both versions) which uses
>>> GridUnsafe.getXXX/putXXX(Object object, offset, value) methods to
>>> manipulate
>>> DirectByteBuffer, whereas it should really be using
>>> GridUnsafe.getXXX/putXXX(address, value), since DirectByteBuffer is
>>> allocated on C heap (off java heap).
>>> Notice that at least one instance of the same problem is known to exist
>>> in
>>> another project using Unsafe
>>> https://issues.apache.org/jira/browse/CASSANDRA-8325 .
>>> The OpenJDK source of Unsafe is more or less clear on this
>>>
>>> http://hg.openjdk.java.net/jdk8u/jdk8u60/jdk/file/935758609767/src/share/classes/sun/misc/Unsafe.java#l391
>>> I have prepared a simple fix here
>>>
>>> https://github.com/apache/ignite/compare/1.9.0-rc2...kimec:freebsd-support .
>>> However, I am not sure if the solution is right in regard to overall
>>> ignite
>>> performance.
>>> I've tried to compile ignite-core with tests and after applying my
>>> changes
>>> was able to pass all the basic stuff until the performance test stage at
>>> which point my machine run out of RAM and swap space (some 10 GB)... Not
>>> sure if this is how the tests are supposed to be. After compiling with
>>> -DskipTests I was able to create FreeBSD 11 - CentOS 7 two node cluster
>>> and
>>> everything seemed OK (the two nodes shared an IGFS instance backed by
>>> replicated caches).
>>> Please note that OpenJDK on different systems as well as Oracle JDK (via
>>> Linux compatility layer) on FreeBSD seem to be more forgiving and does
>>> not
>>> SIGSEGV.
>>> I've based my branch on 1.9.0-rc2 since tag 1.9.0 has already POM with
>>> version 2.0.
>>>
>>> Kamil
>>>
>


Re: Lots of cache creation become slow

2017-04-11 Thread Andrey Gura
Creation of each cache requires creation and initialization of
internal data structures, which leads to increased GC pressure. Could
you enable GC logs and look at the result? I think you will find long GC
pauses.

In order to reduce memory consumption by created caches at creation
time we can do the following:

- Decrease the cache start size (see cacheConfiguration.setStartSize();
the default value is 1,500,000). It's safe, but can lead to some performance
penalty while the cache grows dynamically;
- For atomic caches we can decrease the size of the deferred delete queue
via a JVM parameter (-DIGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE=<size>; the
default value is 200,000 entries per partition). It's a static
configuration and can't be changed at runtime;
- The number of partitions also affects memory consumption and cache creation
time. But it also affects data distribution, so it should be configured
carefully. Caches with the default configuration use
RendezvousAffinityFunction with 1024 partitions. This parameter can't be
changed after cache creation and start.
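The knobs above can be sketched in code; the concrete numbers are illustrative, not recommendations.

```java
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;

public final class CacheTuning {

    static CacheConfiguration<String, Object> smallCache(String name) {
        CacheConfiguration<String, Object> ccfg = new CacheConfiguration<>(name);

        // Smaller initial internal map size (default is 1,500,000 entries).
        ccfg.setStartSize(100_000);

        // Fewer partitions than the default 1024; fixed once the cache starts.
        ccfg.setAffinity(new RendezvousAffinityFunction(false, 256));

        // For atomic caches, the deferred delete queue is a static JVM
        // setting: -DIGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE=<size>
        return ccfg;
    }
}
```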

Does it work for you?

On Tue, Apr 11, 2017 at 12:53 PM, ctranxuan
 wrote:
> Hi,
> We are trying for test purposes to create lots of cache on a machine.
> Something like:
>
> for (int i = 0; i < 1; i++) {
>   IgniteCache cache;
>   cache = ignite.getOrCreateCache("cache-" +
> i).withAsync();
>   LOGGER.info("starting to read cache #" + cache.getName());
> }
>
> What we are noticing is that after ~200 caches created, the cache creation
> becomes slower and slower. For instance, when reaching the 2000th cache, it
> takes between 2-3 seconds. For instance, here some logs:
>
> 09:34:27.857 starting to read cache #cache-2087
> 09:34:29.621 starting to read cache #cache-2088
> 09:34:31.450 starting to read cache #cache-2089
> 09:34:33.127 starting to read cache #cache-2090
>
> That's may be a naive way of doing that and may be a naive question: but is
> it normal that the more cache is created, the more time the creation takes?
>
> The program is run on a 12GB RAM machine with 4 CPUs with a Java 8
> (1.8.0_121).
>
> Thanks in advance for the answers!
>
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Lots-of-cache-creation-become-slow-tp11875.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-04-10 Thread Andrey Gura
Hi,

unfortunately I haven't had a chance to reproduce it. But, anyway,
I'm going to reproduce the case with node filters. I still don't have any
idea about your case. Hopefully I'll get some ideas while reproducing
and fixing the problem that can happen when a cache exists only on some
subset of nodes.

On Thu, Apr 6, 2017 at 6:41 PM, bintisepaha  wrote:
> Andrey,
>
> We start the caches only at server start up time. 2 of our caches are
> replicated, all other caches are partitioned with 1 backup. Do you think
> replicated caches might be causing this issue, when clients leave and join
> the cluster?
>
> All server nodes start with the same cache config. All client nodes start
> with the same config but have no knowledge of cache creation.
>
> We don't use any node filters. For IgniteCompute we start the Runnable on
> server nodes, but for cache creation there are no filters.
>
> Any luck reproducing? Has anyone else had an issue where a txn seems like it
> is finished but the key lock is not released?
>
> Thanks,
> Binti
>
>
>
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Pessimistic-TXN-did-not-release-lock-on-a-key-all-subsequent-txns-failed-tp10536p11784.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite on FreeBSD 11 and OpenJDK

2017-04-10 Thread Andrey Gura
Hi,

could you please share core dump file? If not, it would be helpful to
know what is CPU architecture on this server.

On Mon, Apr 10, 2017 at 2:53 AM, Kamil Misuth  wrote:
> Greetings,
>
> OpenJDK (7 and 8) HotSpot JVM SIGSEGVs on FreeBSD 11 as soon as node joins a
> topology and starts to communicate via DirectNioClientWorker.
> The root cause is DirectByteBufferStreamImpl (both versions) which uses
> GridUnsafe.getXXX/putXXX(Object object, offset, value) methods to manipulate
> DirectByteBuffer, whereas it should really be using
> GridUnsafe.getXXX/putXXX(address, value), since DirectByteBuffer is
> allocated on C heap (off java heap).
> Notice that at least one instance of the same problem is known to exist in
> another project using Unsafe
> https://issues.apache.org/jira/browse/CASSANDRA-8325 .
> The OpenJDK source of Unsafe is more or less clear on this
> http://hg.openjdk.java.net/jdk8u/jdk8u60/jdk/file/935758609767/src/share/classes/sun/misc/Unsafe.java#l391
> I have prepared a simple fix here
> https://github.com/apache/ignite/compare/1.9.0-rc2...kimec:freebsd-support .
> However, I am not sure if the solution is right in regard to overall ignite
> performance.
> I've tried to compile ignite-core with tests and after applying my changes
> was able to pass all the basic stuff until the performance test stage at
> which point my machine run out of RAM and swap space (some 10 GB)... Not
> sure if this is how the tests are supposed to be. After compiling with
> -DskipTests I was able to create FreeBSD 11 - CentOS 7 two node cluster and
> everything seemed OK (the two nodes shared an IGFS instance backed by
> replicated caches).
> Please note that OpenJDK on different systems as well as Oracle JDK (via
> Linux compatility layer) on FreeBSD seem to be more forgiving and does not
> SIGSEGV.
> I've based my branch on 1.9.0-rc2 since tag 1.9.0 has already POM with
> version 2.0.
>
> Kamil
>
>


Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-04-04 Thread Andrey Gura
It's obviously a bug. The NPE occurs if a node doesn't have a cache but
receives a response message about this cache on another node during
deadlock detection. This is possible, for example, when you have a node
filter and caches start on some subset of the cluster's nodes. Is that
your case?

I'll try to reproduce this problem, create a JIRA ticket and answer here.

On Mon, Apr 3, 2017 at 8:23 PM, bintisepaha  wrote:
> Sorry for the late response. We do not close/destroy caches dynamically.
> could you please explain this NPE?
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Pessimistic-TXN-did-not-release-lock-on-a-key-all-subsequent-txns-failed-tp10536p11676.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Client near cache with Apache Flink

2017-04-03 Thread Andrey Gura
What is the source of your data in the case *without* the Ignite cache? I'm
trying to understand your design without Ignite. What does your code
do instead of nearCache.get(key)?

Also, the code snippet that creates the cache should still fail because it
tries to create a near cache for an original cache that doesn't exist yet.

On Mon, Apr 3, 2017 at 7:03 PM, nragon
 wrote:
> Hi,
>
> The code is not all there but i think it's the important one.
> Cache access: this.nearCache.get(key); (Mentioned above)
> Cache name comes from table.getName() which, for instance, can be
> "LK_SUBS_UPP_INST".
> Data in ignite cache: String key and Object[] value. For now I only have one
> record for each cache (3 total)
> +==+
> |Key Class | Key |Value Class |
> Value  |
> +==+
> | java.lang.String | 1   | java.lang.Object[] | size=10, values=[1, 1, null,
> null, null, null, null, null, null, null] |
> +--+
>
> The cache is loaded with datastreamer from oracle database. Again, loading
> is not the problem.
> I'm trying to understand if this (this.nearCache.get(key)) is the best way
> to access for fast lookups.
>
> Thanks
>
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Client-near-cache-with-Apache-Flink-tp11627p11670.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Client near cache with Apache Flink

2017-04-03 Thread Andrey Gura
Hi,

it still doesn't describe your use case in detail. How exactly do you
use the Ignite cache in your system? What kind of data do you cache, and
what is the source of this data in the case without Ignite?

Also, it looks strange to me that you try to create a near cache with a
name that doesn't match the original cache name. This code should
fail at runtime.
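For completeness, a sketch of the client-side pattern that does work: create the near cache against the existing server cache, using the same name ("LK_SUBS_UPP_INST" and the value type are taken from the earlier message; everything else is illustrative).

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.configuration.NearCacheConfiguration;

public final class NearCacheLookup {

    static IgniteCache<String, Object[]> nearCacheFor(Ignite client) {
        // The name must match an existing server cache; a near cache is a
        // client-local front for it, not an independent cache.
        return client.createNearCache("LK_SUBS_UPP_INST",
            new NearCacheConfiguration<String, Object[]>());
    }

    static Object[] lookup(IgniteCache<String, Object[]> near, String key) {
        return near.get(key); // served from local near entries after the first read
    }
}
```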

On Sat, Apr 1, 2017 at 3:49 AM, nragon
 wrote:
> Starting with ignite:
>- 4 nodes managed by yarn
>- Node configurations are as above
>- 3 off heap caches with one record each
>
> Flink:
>- 5 task managers with 2 slots each and 2gb
>
> In my current job I'm reading from kafka, mapping some business rules with
> flink map functions and sink to hbase. This job without ignite cache
> processes 30k/s events. When I add ignite on flink map function to enrich
> data with cache.get(key) the performance drops to 2k/s.
> I'm bypassing data enrichment at first test(30k/s, just assigning a random
> value)
> Near cache should pull the one record from cache and from there it should be
> really quick right?
> Just wondering if the configurations above are the correct oned for fast
> lookup caches.
>
> Thanks
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Client-near-cache-with-Apache-Flink-tp11627p11636.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-03-27 Thread Andrey Gura
It's a very strange NPE. It looks like the cache is closed or destroyed. Is
it possible that you start and close/destroy caches dynamically?

On Fri, Mar 24, 2017 at 5:43 PM, bintisepaha  wrote:
> Hey Andrey, Thanks a lot for getting back.
>
> These errors were a result of a bad client connected to grid.
>
> We have been running clients that leave and join the cluster constantly in
> order to see if we can reproduce this issue. Last night we saw this issue
> again. Here is one of the errors that a sys thread has on a client node that
> initiates a transaction. The client node was not restarted or disconnected.
> It kept working fine.
> We do not restart these clients but there are some otehr clietns that leave
> and join the cluster.
>
> Do you think this is helpful in locating the cause?
>
> Exception in thread "sys-#41%DataGridServer-Production%"
> java.lang.NullPointerException
> at
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxKey.finishUnmarshal(IgniteTxKey.java:92)
> at
> org.apache.ignite.internal.processors.cache.transactions.TxLocksResponse.finishUnmarshal(TxLocksResponse.java:190)
> at
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxManager$DeadlockDetectionListener.unmarshall(IgniteTxManager.java:2427)
> at
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxManager$DeadlockDetectionListener.onMessage(IgniteTxManager.java:2317)
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1238)
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:866)
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.access$1700(GridIoManager.java:106)
> at
> org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:829)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> [20:02:18] Topology snapshot [ver=7551, servers=16, clients=53, CPUs=217,
> heap=740.0GB]
> [20:02:22] Topology snapshot [ver=7552, servers=16, clients=52, CPUs=213,
> heap=740.0GB]
> [20:02:28] Topology snapshot [ver=7553, servers=16, clients=53, CPUs=217,
> heap=740.0GB]
> [20:02:36] Topology snapshot [ver=7554, servers=16, clients=54, CPUs=217,
> heap=740.0GB]
> [20:02:40] Topology snapshot [ver=7555, servers=16, clients=53, CPUs=217,
> heap=740.0GB]
> [20:02:41] Topology snapshot [ver=7556, servers=16, clients=54, CPUs=217,
> heap=740.0GB]
> [20:02:48] Topology snapshot [ver=7557, servers=16, clients=53, CPUs=217,
> heap=740.0GB]
>
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Pessimistic-TXN-did-not-release-lock-on-a-key-all-subsequent-txns-failed-tp10536p11433.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-03-23 Thread Andrey Gura
Hi,

From your error list I see many connection errors. Why do they happen?

> how ignite treats cluster topology when transaction are running on server 
> nodes

When Ignite executes an operation within a transaction, it maps all
involved keys to their affinity nodes and then propagates the
transaction to those nodes. If the transaction is not yet finished and
one of the involved nodes leaves the topology, the transaction fails
and is rolled back.

Client nodes do not affect transactions when they join or leave the
cluster, except when the client node is the transaction initiator or
has a local cache that participates in the transaction.


On Tue, Mar 21, 2017 at 11:40 PM, bintisepaha  wrote:
> Sorry for keep following up on this, but this is becoming a major issue for
> us and we need to understand how ignite treats cluster topology when
> transaction are running on server nodes. How do clients joining and leaving
> the cluster affects txns?
>
> Thanks,
> Binti
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Pessimistic-TXN-did-not-release-lock-on-a-key-all-subsequent-txns-failed-tp10536p11346.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
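The key-to-node mapping described in the reply above can be illustrated with a deliberately simplified sketch. Ignite's real implementation is RendezvousAffinityFunction (1024 partitions by default); the class below and its modulo scheme are illustrative assumptions only, not Ignite code.

```java
// Simplified sketch of deterministic key-to-node mapping (illustrative only;
// Ignite's actual implementation is RendezvousAffinityFunction).
final class SimpleAffinity {
    private final int partitions;
    private final int nodes;

    SimpleAffinity(int partitions, int nodes) {
        this.partitions = partitions;
        this.nodes = nodes;
    }

    /** Every key deterministically falls into exactly one partition. */
    int partition(Object key) {
        return Math.floorMod(key.hashCode(), partitions);
    }

    /** Each partition is owned by exactly one primary node. */
    int primaryNode(Object key) {
        return partition(key) % nodes;
    }
}
```

Because the mapping is deterministic, a transaction over a fixed key set always touches the same primary nodes, which is why a node leaving the topology mid-transaction forces a rollback.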


Re: Aggregation of log files

2017-03-10 Thread Andrey Gura
Rishi,

I think the log aggregation task can't be solved by log4j
configuration alone, but you can reduce the log size and the number of
log files by using a different log level.

Also, if your application logs something frequently, you can throttle
it using the GridLogThrottle class (or its LT alias).

On Fri, Mar 10, 2017 at 6:00 PM, Rishi Yagnik <rishiyag...@gmail.com> wrote:
> Andre,
>
> I am asking is there a way to aggregate log file with log4j
> /log4j-2configuration, currently I believe it s creating log file per
> session which produces 30-50k log files.
>
> This also creates problem in debugging when something gets wrong which log
> files to see since it s generating log file in humongous amount.
>
> Let us stick to Ignite here I know there are many tools out there which can
> do the job for me, I see it s a problem when you are monitoring ignite
> every 3 min to get different statistic, it is hard to get into root of the
> problem due to log file generated for every session.
>
> Any thoughts further ..
>
>
>
> On Fri, Mar 10, 2017 at 8:54 AM, Andrey Gura <ag...@apache.org> wrote:
>>
>> There are many ways to aggregate logs but it is all about special
>> instruments like LogStash, Kibana, etc, not about Ignite.
>>
>> I'm sure you will find a lot of information in your favorite search
>> engine.
>>
>> On Fri, Mar 10, 2017 at 5:17 PM, ignite_user2016 <rishiyag...@gmail.com>
>> wrote:
>> > Any reply would be helpful here..
>> >
>> >
>> >
>> > --
>> > View this message in context:
>> > http://apache-ignite-users.70518.x6.nabble.com/Aggregation-of-log-files-tp11102p6.html
>> > Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>
>
>
>
> --
> Rishi Yagnik
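As a concrete illustration of "reduce log size using a different log level", here is a minimal log4j2 sketch that raises the level for Ignite internals and rolls everything into a bounded set of files instead of one file per session. The file names, sizes, and levels are assumptions for illustration, not recommendations.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
  <Appenders>
    <!-- One bounded rolling file instead of a file per session. -->
    <RollingFile name="ignite" fileName="logs/ignite.log"
                 filePattern="logs/ignite-%i.log.gz">
      <PatternLayout pattern="%d{ISO8601} %-5p [%t] %c{1} - %m%n"/>
      <Policies>
        <SizeBasedTriggeringPolicy size="100 MB"/>
      </Policies>
      <DefaultRolloverStrategy max="10"/>
    </RollingFile>
  </Appenders>
  <Loggers>
    <!-- Quieter level for Ignite internals reduces total volume. -->
    <Logger name="org.apache.ignite" level="warn"/>
    <Root level="info">
      <AppenderRef ref="ignite"/>
    </Root>
  </Loggers>
</Configuration>
```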


Re: Aggregation of log files

2017-03-10 Thread Andrey Gura
There are many ways to aggregate logs, but that is a job for dedicated
tools like Logstash, Kibana, etc., not for Ignite.

I'm sure you will find a lot of information in your favorite search engine.

On Fri, Mar 10, 2017 at 5:17 PM, ignite_user2016  wrote:
> Any reply would be helpful here..
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Aggregation-of-log-files-tp11102p6.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: generic class support

2017-03-09 Thread Andrey Gura
Generics exist only at compile time. You should know which types will
be used at runtime and configure indexing accordingly. If your entries
can contain different types, just use java.lang.Object.

On Thu, Mar 9, 2017 at 4:37 AM, shawn.du  wrote:
> Hi,
>
> see below configuration:
> <bean class="org.apache.ignite.configuration.CacheConfiguration">
>   <property name="indexedTypes">
>     <list>
>       <value>java.lang.String</value>
>       <value>com.example.MyClass</value>
>     </list>
>   </property>
> </bean>
>
> MyClass is a generic class. T can be Integer/Long/byte[]/String or another
> simple data type.
>
> public class MyClass<T>
> {
> @QuerySqlField
>  private T value;
> }
>
> Does Ignite support this?
>
> Thanks
> Shawn
>
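The "generics are compile time" point can be seen directly: after erasure there is a single runtime class for every parameterization, which is why indexing must be configured against a concrete runtime type such as java.lang.Object. A self-contained illustration (MyBox is a hypothetical stand-in for the MyClass above):

```java
// Illustrates type erasure: MyBox<Integer> and MyBox<String> share one runtime
// class, so a query engine cannot distinguish them by class alone.
class MyBox<T> {
    private final T value;

    MyBox(T value) {
        this.value = value;
    }

    // Declared as Object: any T is acceptable, matching the java.lang.Object advice.
    Object value() {
        return value;
    }
}
```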


Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-03-08 Thread Andrey Gura
Your logs should contain an exception with a stack trace like the
piece of log below. You can try searching the log for the
"TransactionDeadlockException" or "Deadlock detected" patterns.

javax.cache.CacheException: class
org.apache.ignite.transactions.TransactionTimeoutException: Failed to
acquire lock within provided timeout for transaction [timeout=800,
tx=org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter$10@647b61d9]
at 
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1440)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.cacheException(IgniteCacheProxy.java:2183)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.putAll(IgniteCacheProxy.java:1430)
at 
org.apache.ignite.internal.processors.cache.transactions.TxPessimisticDeadlockDetectionTest$1.run(TxPessimisticDeadlockDetectionTest.java:322)
at 
org.apache.ignite.testframework.GridTestUtils$8.call(GridTestUtils.java:1092)
at 
org.apache.ignite.testframework.GridTestThread.run(GridTestThread.java:86)
Suppressed: class org.apache.ignite.IgniteException: Transaction
has been already completed.
at 
org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:924)
at 
org.apache.ignite.internal.processors.cache.transactions.TransactionProxyImpl.close(TransactionProxyImpl.java:279)
at 
org.apache.ignite.internal.processors.cache.transactions.TxPessimisticDeadlockDetectionTest$1.run(TxPessimisticDeadlockDetectionTest.java:325)
... 2 more
Caused by: class org.apache.ignite.IgniteCheckedException:
Transaction has been already completed.
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.finishDhtLocal(IgniteTxHandler.java:786)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.finish(IgniteTxHandler.java:728)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processNearTxFinishRequest(IgniteTxHandler.java:687)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$3.apply(IgniteTxHandler.java:157)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$3.apply(IgniteTxHandler.java:155)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:827)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:369)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:293)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$000(GridCacheIoManager.java:95)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:238)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1082)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:710)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$1700(GridIoManager.java:102)
at 
org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:673)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: class
org.apache.ignite.transactions.TransactionTimeoutException: Failed to
acquire lock within provided timeout for transaction [timeout=800,
tx=org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter$10@647b61d9]
at 
org.apache.ignite.internal.util.IgniteUtils$13.apply(IgniteUtils.java:831)
at 
org.apache.ignite.internal.util.IgniteUtils$13.apply(IgniteUtils.java:828)
... 6 more
Caused by: class org.apache.ignite.transactions.TransactionDeadlockException:
Deadlock detected:

K1: TX1 holds lock, TX2 waits lock.
K2: TX3 holds lock, TX1 waits lock.
K3: TX2 holds lock, TX3 waits lock.

Transactions:

TX1 [txId=GridCacheVersion [topVer=100472269, time=1488992349410,
order=1488992271778, nodeOrder=5],
nodeId=52c630e7-3b59-4d58-9291-dd8bd9e4, threadId=637]
TX2 [txId=GridCacheVersion [topVer=100472269, time=1488992349410,
order=1488992271778, nodeOrder=6],
nodeId=652f18b9-8948-4d66-a1a8-f45346c5, threadId=636]
TX3 [txId=GridCacheVersion [topVer=100472269, time=1488992349410,
order=1488992271806, nodeOrder=4],
nodeId=5810f06c-f9a5-40b2-91ec-e9ec97a3, threadId=635]

Keys:

K1 [key=3, cache=cache]
K2 [key=2, cache=cache]
K3 [key=1, cache=cache]

at 
org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedLockFuture$LockTimeoutObject$1.apply(GridDhtColocatedLockFuture.java:1355)
at 
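The K1/K2/K3 report above is a cycle in a wait-for graph (TX2 waits on TX1, TX1 on TX3, TX3 on TX2). A minimal sketch of detecting such a cycle — illustrative only, not Ignite's actual detector, and assuming each transaction waits on at most one other:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Wait-for graph: maps a waiting transaction to the transaction holding the
// lock it needs. Revisiting a node while following edges means a deadlock.
final class WaitForGraph {
    private final Map<String, String> waitsOn = new HashMap<>();

    void addEdge(String waiter, String holder) {
        waitsOn.put(waiter, holder);
    }

    boolean hasDeadlock(String start) {
        Set<String> seen = new HashSet<>();
        for (String tx = start; tx != null; tx = waitsOn.get(tx)) {
            if (!seen.add(tx))
                return true; // came back to a transaction already on the path
        }
        return false;
    }
}
```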

Re: accurate offheap metrics

2017-03-08 Thread Andrey Gura
Hi,

The JVM allocates more memory than specified by the Xms and Xmx
parameters. In my case, 2-4 GB more was allocated for 4-8 GB heaps. I
think it is related to preloaded classes for faster startup of new JVM
processes.

Ignite's internal structures that provide the off-heap cache also
allocate off-heap memory, but the MBean doesn't take it into account.
There are two such structures: the off-heap map implementation and the
off-heap LRU eviction policy implementation. Both are, of course,
created for each off-heap cache. There is an issue about this problem [1]

[1] https://issues.apache.org/jira/browse/IGNITE-4797

On Tue, Mar 7, 2017 at 6:59 PM, lawrencefinn  wrote:
> What's the best way to get accurate metrics of offheap memory used?  I've
> tried a bunch of different ways such as utilizing jmx, utilizing the API and
> iterating through each cache on each node to get offHeapAllocatedSize, using
> rest api, etc...  None of the values add up right.  I see my ignite process
> taking up 4GB, max heap is 2GB, and all the calculations are around 300MB.
> Are indexes not taken into account?
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/accurate-offheap-metrics-tp11057.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: [ANNOUNCE] Apache Ignite 1.9.0 Released

2017-03-07 Thread Andrey Gura
JFYI

Also, Vert.x 3.4.0 was released today with an Apache Ignite 1.9-based
cluster manager for Vert.x in HA/clustered mode.

On Tue, Mar 7, 2017 at 3:10 AM, Denis Magda  wrote:
> The Apache Ignite Community is pleased to announce the release of Apache 
> Ignite 1.9.0.
>
> Apache Ignite In-Memory Data Fabric [1] is a high-performance, integrated and 
> distributed in-memory platform for computing and transacting on large-scale 
> data sets in real-time, orders of magnitude faster than possible with 
> traditional disk-based or flash-based technologies.
>
> The Fabric is a collection of independent and well integrated components some 
> of which are the following:
> Data Grid
> SQL Grid
> Compute Grid
> Streaming & CEP
> Service Grid
>
>
> In this release the community provided an integration with Kubernetes cluster 
> manager, improved performance of core and SQL Grid components, expanded Data 
> Modification Language support to the level of .NET and C++ API, integrated 
> with .NET TransactionScope API and more.
>
> Learn more details from our blog post: 
> https://blogs.apache.org/ignite/entry/apache-ignite-1-9-released
>
> The full list of the changes can be found here [2].
>
> Please visit this page if you’re ready to try the release out:
> https://ignite.apache.org/download.cgi
>
> Please let us know [3] if you encounter any problems.
>
> Regards,
>
> The Apache Ignite Community
>
> [1] https://ignite.apache.org
> [2] https://github.com/apache/ignite/blob/master/RELEASE_NOTES.txt
> [3] https://ignite.apache.org/community/resources.html#ask


Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-03-06 Thread Andrey Gura
Hi,

you can learn more about deadlock detection here [1]

The mentioned ticket is about optimistic transactions. Deadlock
detection for pessimistic transactions (your case) was implemented in
the 1.7 release.

I don't think this is your case, because on a
TransactionDeadlockException the transaction should be rolled back and
all locks released. But it is possible that there is a bug in this
place.

By the way, do you have the same issue on 1.8?

[1] https://apacheignite.readme.io/docs/transactions#deadlock-detection

On Mon, Mar 6, 2017 at 5:16 PM, bintisepaha  wrote:
> Andrey, Could you please tell us a little bit more about the deadlock
> detection feature in 1.8?
>
> https://issues.apache.org/jira/browse/IGNITE-2969
>
> How would we able to know that there is a deadlock? Do you think that is the
> case for us now in 1.7 but we can't be sure because we have no such feature?
>
> Thanks,
> Binti
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Pessimistic-TXN-did-not-release-lock-on-a-key-all-subsequent-txns-failed-tp10536p11038.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
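The situation described above — a TransactionTimeoutException carrying a TransactionDeadlockException as its cause — can be recognized by walking the exception's cause chain. A generic helper in plain Java (the class name is hypothetical):

```java
import java.util.Optional;

// Walks a Throwable's cause chain looking for a specific exception type,
// e.g. a TransactionDeadlockException inside a TransactionTimeoutException.
final class CauseFinder {
    static <T extends Throwable> Optional<T> findCause(Throwable t, Class<T> type) {
        // Guard against self-referential causes to avoid an infinite loop.
        for (Throwable c = t; c != null; c = (c.getCause() == c ? null : c.getCause())) {
            if (type.isInstance(c))
                return Optional.of(type.cast(c));
        }
        return Optional.empty();
    }
}
```

In Ignite client code one could catch CacheException and call findCause(e, TransactionDeadlockException.class) to decide whether a retry with a consistent key ordering is warranted.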


Re: Renaming a cache

2017-02-28 Thread Andrey Gura
Hi, Steve

A cache can't be renamed. The only way to copy cache entries is to
read all entries from one cache and insert them into another cache.

Of course, you can implement a partition-aware copy in order to reduce
network traffic and the performance penalty.

On Tue, Feb 28, 2017 at 4:46 PM, steve.hostettler
 wrote:
> Hello all,
>
> currently, we have one cache per validity date. The validity date being part
> of the cache name. We are considering delta loading and therefore I would
> like to know whether it is possible to rename or quickly copy a cache
> (without doing it at a  per entry level).
>
> Many thanks in advance
>
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Renaming-a-cache-tp10943.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
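The read-all-and-insert copy suggested above can be sketched with java.util.Map standing in for the two caches (real code would iterate the source IgniteCache and feed an IgniteDataStreamer); the fixed batch size below is an assumption to bound memory, not an Ignite API:

```java
import java.util.HashMap;
import java.util.Map;

// Copies all entries from src to dst in fixed-size batches so that a large
// cache is never materialized in memory at once.
final class CacheCopy {
    static <K, V> void copyInBatches(Map<K, V> src, Map<K, V> dst, int batchSize) {
        Map<K, V> batch = new HashMap<>();
        for (Map.Entry<K, V> e : src.entrySet()) {
            batch.put(e.getKey(), e.getValue());
            if (batch.size() >= batchSize) {
                dst.putAll(batch);
                batch.clear();
            }
        }
        if (!batch.isEmpty())
            dst.putAll(batch); // flush the final partial batch
    }
}
```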


Re: cache namespace support

2017-02-28 Thread Andrey Gura
Hi,

There is no such concept as a namespace in Ignite. I think both ways
are acceptable, but it depends on your requirements; there is no
silver bullet.

On Tue, Feb 28, 2017 at 4:16 AM, shawn.du  wrote:
> Hi,
>
> I have a use case like below:
> I want to store key-value caches in Ignite, but each key-value pair has a
> namespace. Keys are unique within a namespace.
> Each cache entry has the same data type for both key and value.
> When querying, we always query by key and namespace. Sometimes we query
> all entries of a namespace or remove all entries in one namespace.
>
> I think there are two solutions:
> #1 Create a cache for each namespace, e.g. use the namespace as the cache name.
>   Using this approach there may be many caches in Ignite. IMO it is a bit
>   dirty; I mean that when using ignitevisor to show caches you will see
>   many caches.
> #2 Create one cache where each entry has a namespace, key and value, using
>   (namespace, key) as the cache key. This solution is clean, but you have
>   to create a static class/complex configuration, and you have to use SQL
>   to query or remove all data in a namespace.
>
> I think this is a very common use case. Can Ignite support it at a lower
> layer, for a cleaner solution and better performance?
> welcome your comments.
>
> Thanks
> Shawn
>
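Solution #2 from the question above hinges on a composite key with correct equals/hashCode. A minimal sketch (the class name is illustrative; in Ignite such a class would typically also mark the namespace field with @AffinityKeyMapped to colocate a namespace's entries, which is an assumption here, not something from the thread):

```java
import java.util.Objects;

// Composite (namespace, key) cache key for solution #2; name is illustrative.
final class NamespacedKey {
    private final String namespace;
    private final String key;

    NamespacedKey(String namespace, String key) {
        this.namespace = Objects.requireNonNull(namespace);
        this.key = Objects.requireNonNull(key);
    }

    String namespace() { return namespace; }

    String key() { return key; }

    @Override public boolean equals(Object o) {
        if (this == o)
            return true;
        if (!(o instanceof NamespacedKey))
            return false;
        NamespacedKey other = (NamespacedKey) o;
        return namespace.equals(other.namespace) && key.equals(other.key);
    }

    @Override public int hashCode() {
        return Objects.hash(namespace, key);
    }
}
```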


Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-02-28 Thread Andrey Gura
> The long pause is between 2 different threads, so isn't that normal?

Usually it is not normal, so some investigation is needed to
understand why this pause happens.

> Also the 9990 ms and 10 ms used earlier for some previous step in the tn, is
this how ignite breaks out the time?

When Ignite starts a transaction on the initiating node, it passes the
timeout to the transaction. After mapping the keys to cluster nodes,
Ignite starts transactions on the remote nodes and passes the
remaining transaction time to them. So it is possible that this is
your case.

> Is there any way we can force release a locked key without restarting the
whole cluster?

There is no such way.

On Mon, Feb 27, 2017 at 5:39 PM, bintisepaha  wrote:
> Andrey, thanks for getting back.
> The long pause is between 2 different threads, so isn't that normal?
>
> Also the 9990 ms and 10 ms used earlier for some previous step in the tn, is
> this how ignite breaks out the time? we have always seen the timeouts for
> the value in ms that we set in our code. Also the stack trace does not come
> from our code which it usually does on genuine timeouts that do not leave
> the key locked.
>
> We are trying 1.8 in UAT environment and will release to production soon.
> Unfortunately this issue does not happen in UAT and we have no way of
> reproducing it.
>
> Is there any way we can force release a locked key without restarting the
> whole cluster?
>
> Thanks,
> Binti
>
>
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Pessimistic-TXN-did-not-release-lock-on-a-key-all-subsequent-txns-failed-tp10536p10910.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-02-27 Thread Andrey Gura
I still don't see any cause of the timeout in the logs, but there is
one suspiciously long pause between two log records:

[INFO ] 2017-02-22 15:45:52.807 [pub-#1%DataGridServer-Production%]
OrderHolderSaveRunnable - Begin TXN for orderHolder save, saving 1
orders
[ERROR] 2017-02-22 15:46:01.406 [pub-#2%DataGridServer-Production%]
OrderHolderSaveRunnable - javax.cache.CacheException: class
org.apache.ignite.transactions.TransactionTimeoutException: Cache
transaction time
d out (was rolled back automatically)

It would be great to get thread dumps during this pause.

Did you try Apache Ignite 1.8 release?


On Thu, Feb 23, 2017 at 5:03 PM, bintisepaha  wrote:
> This is the actual error that looks like it is not coming from our code
>
> 109714 Feb 22, 2017 3:46:17 PM org.apache.ignite.logger.java.JavaLogger
> error
> 109715 SEVERE:  Failed to acquire lock for request:
> GridNearLockRequest [topVer=AffinityTopologyVersion [topVer=3153,
> minorTopVer=32], miniId=acdced25a51-c5e64ee6-1079-4b90-bb7b-5ec14032a859,
> implicitTx=false, implicitSingleTx=false, onePhaseCommit=false,
> dhtVers=[null], subjId=f6663b00-24fc-4515-91ac-20c3b47d90ec, taskNameHash=0,
> hasTransforms=false, syncCommit=true, accessTtl=-1, retVal=true,
> firstClientReq=false, filter=null, super=GridDistributedLockRequest
> [nodeId=f6663b00-24fc-4515-91ac-20c3b47d90ec, nearXidVer=GridCacheVersion
> [topVer=98913254, time=1487796367155, order=1487785731866, nodeOrder=7],
> threadId=57, futId=8cdced25a51-c5e64ee6-1079-4b90-bb7b-5ec14032a859,
> timeout=9990, isInTx=true, isInvalidate=false, isRead=true,
> isolation=REPEATABLE_READ, retVals=[true], txSize=0, flags=0, keysCnt=1,
> super=GridDistributedBaseMessage [ver=GridCacheVersion
> [topVer=98913254, time=1487796367155, order=1487785731866, nodeOrder=7],
> committedVers=null, rolledbackVers=null, cnt=0, super=GridCacheMessage
> [msgId=2688455, depInfo=null, err=null, skipPrepare=false,
> cacheId=812449097, cacheId=812449097
> 109716 class
> org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException:
> Failed to acquire lock within provided timeout for transaction
> [timeout=9990, tx=org.apache.ignite.internal.processors.cache.dis
> tributed.dht.GridDhtTxLocalAdapter$1@7f2a8c8a]
> 109717 at
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter$PostLockClosure1.apply(IgniteTxLocalAdapter.java:3924)
> 109718 at
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter$PostLockClosure1.apply(IgniteTxLocalAdapter.java:3874)
> 109719 at
> org.apache.ignite.internal.util.future.GridEmbeddedFuture$2.applyx(GridEmbeddedFuture.java:91)
> 109720 at
> org.apache.ignite.internal.util.future.GridEmbeddedFuture$AsyncListener1.apply(GridEmbeddedFuture.java:297)
> 109721 at
> org.apache.ignite.internal.util.future.GridEmbeddedFuture$AsyncListener1.apply(GridEmbeddedFuture.java:290)
> 109722 at
> org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:263)
> 109723 at
> org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListeners(GridFutureAdapter.java:251)
> 109724 at
> org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:381)
> 109725 at
> org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:347)
> 109726 at
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLockFuture.onComplete(GridDhtLockFuture.java:752)
> 109727 at
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLockFuture.access$600(GridDhtLockFuture.java:79)
> 109728 at
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLockFuture$LockTimeoutObject.onTimeout(GridDhtLockFuture.java:1116)
> 109729 at
> org.apache.ignite.internal.processors.timeout.GridTimeoutProcessor$TimeoutWorker.body(GridTimeoutProcessor.java:159)
> 109730 at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> 109731 at java.lang.Thread.run(Thread.java:745)
> 109732
> 109733 Feb 22, 2017 3:46:17 PM org.apache.ignite.logger.java.JavaLogger
> error
> 

Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-02-27 Thread Andrey Gura
Hi

The timeout value of 9990 is OK: it is just the next step of the
transaction, and 10 ms had already been spent on the previous steps of
the transaction.

On Thu, Feb 23, 2017 at 8:48 AM, bintisepaha  wrote:
> Andrey,
>
> I finally have an error that might help. this happened again in production
> today for us.
> Ignite-Console-3.zip
> 
>
> This is the last update that threw an error, after this error every update
> just times out.
>
> The timeout=9990 in this error, none of our transactions have this timeout.
> Do you think this is an ignite bug? If you look at the stack trace, this is
> not happening due to our code.
>
> Although there is an error on marshaling. How can we further narrow it down?
>
> Thanks,
> Binti
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Pessimistic-TXN-did-not-release-lock-on-a-key-all-subsequent-txns-failed-tp10536p10828.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
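The 9990 in the error above is consistent with simple remaining-budget bookkeeping: a 10 000 ms timeout minus the ~10 ms already spent before the lock request went out. A sketch of that accounting (class and method names are illustrative, not Ignite internals):

```java
// Tracks a transaction's remaining timeout budget as steps consume time.
final class TxTimeoutBudget {
    private long remainingMs;

    TxTimeoutBudget(long totalMs) {
        this.remainingMs = totalMs;
    }

    /** Consumes elapsed time and returns the budget passed to the next step. */
    long consume(long elapsedMs) {
        remainingMs = Math.max(0, remainingMs - elapsedMs);
        return remainingMs;
    }
}
```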


Re: Node fauliure

2017-02-24 Thread Andrey Gura
Hi, Anil

Could you please provide the crash dump? In your case it is the
/opt/ignite-manager/api/hs_err_pid18543.log file.

On Fri, Feb 24, 2017 at 9:05 AM, Anil  wrote:
> Hi ,
>
> I see the node is down with following error while running compute task
>
>
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x7facd5cae561, pid=18543, tid=0x7fab8a9ea700
> #
> # JRE version: OpenJDK Runtime Environment (8.0_111-b14) (build
> 1.8.0_111-8u111-b14-3~14.04.1-b14)
> # Java VM: OpenJDK 64-Bit Server VM (25.111-b14 mixed mode linux-amd64
> compressed oops)
> # Problematic frame:
> # J 8676 C2
> org.apache.ignite.internal.processors.query.h2.opt.GridH2KeyValueRowOffheap.getOffheapValue(I)Lorg/h2/value/Value;
> (290 bytes) @ 0x7facd5cae561 [0x7facd5cae180+0x3e1]
> #
> # Failed to write core dump. Core dumps have been disabled. To enable core
> dumping, try "ulimit -c unlimited" before starting Java again
> #
> # An error report file with more information is saved as:
> # /opt/ignite-manager/api/hs_err_pid18543.log
> #
> # If you would like to submit a bug report, please visit:
> #   http://bugreport.java.com/bugreport/crash.jsp
> #
>
>
> I have two 2 caches on 4 node cluster each cache is configured with 10 gb
> off heap.
>
> ComputeTask performs the following execution and it is broad casted to all
> nodes.
>
>for (Integer part : parts) {
> ScanQuery scanQuery = new ScanQuery();
> scanQuery.setLocal(true);
> scanQuery.setPartition(part);
>
> Iterator<Cache.Entry> iterator =
> cache.query(scanQuery).iterator();
>
> while (iterator.hasNext()) {
> Cache.Entry row = iterator.next();
> String eqId =   row.getValue().getEqId();
> try {
> QueryCursor<Entry> pdCursor =
> detailsCache.query(new SqlQuery<AffinityKey, PersonDetail>(PersonDetail.class,
> "select * from DETAIL_CACHE.PersonDetail where eqId = ? order by enddate
> desc").setLocal(true).setArgs(eqId));
> Long prev = null;
> for (Entry d : pdCursor) {
> // populate person info into person detail
> dataStreamer.addData(new AffinityKey(detaildId, eqId),
> d);
> }
> pdCursor.close();
> }catch (Exception ex){
> }
> }
>
> }
>
>
> Please let me know if you see any issues with approach or any
> configurations.
>
> Thanks.
>


Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-02-21 Thread Andrey Gura
There is nothing suspicious in the stack trace.

You can check whether a key is locked using the
IgniteCache.isLocalLocked() method. For remote nodes you can run a
task that performs this check.

Could you please provide full logs for analysis?

On Tue, Feb 21, 2017 at 6:48 PM, bintisepaha  wrote:
> Andrey, thanks for getting back.
> I am attaching the stack trace. Don't think the cause is a deadloc, but the
> trace is long so maybe I am missing out something, let me know if you find
> something useful.
>
> We cannot ourselves reproduce this issue as there are no errors on the prior
> successful update. It feels like the txn was marked successful, but on one
> of the keys the lock was not released. and later when we try to access the
> key, its locked, hence the exceptions.
>
> No messages in the logs for long running txns or futures.
> By killing the node that holds the key, the lock is not released.
>
> Is there a way to query ignite to see if the locks are being held on a
> particular key? Any code we can run to salvage such locks?
>
> Any other suggestions?
>
> Thanks,
> Binti
>
>
>
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Pessimistic-TXN-did-not-release-lock-on-a-key-all-subsequent-txns-failed-tp10536p10764.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignit Cache Stopped

2017-02-21 Thread Andrey Gura
I think it is just the H2 wrapper for string values.

On Tue, Feb 21, 2017 at 8:21 AM, Anil <anilk...@gmail.com> wrote:
> Thanks Andrey.
>
> I see node is down even gc log looks good. I will try to reproduce.
>
> May I know what is the org.h2.value.ValueString objects in the attached the
> screenshot ?
>
> Thanks.
>
> On 20 February 2017 at 18:37, Andrey Gura <ag...@apache.org> wrote:
>>
>> Anil,
>>
>> No, it doesn't. Only client should left topology in this case.
>>
>> On Mon, Feb 20, 2017 at 3:44 PM, Anil <anilk...@gmail.com> wrote:
>> > Hi Andrey,
>> >
>> > Does client ignite gc impact ignite cluster topology ?
>> >
>> > Thanks
>> >
>> > On 17 February 2017 at 22:56, Andrey Gura <ag...@apache.org> wrote:
>> >>
>> >> From GC logs at the end of files I see Full GC pauses like this:
>> >>
>> >> 2017-02-17T04:29:22.118-0800: 21122.643: [Full GC (Allocation Failure)
>> >>  10226M->8526M(10G), 26.8952036 secs]
>> >>[Eden: 0.0B(512.0M)->0.0B(536.0M) Survivors: 0.0B->0.0B Heap:
>> >> 10226.0M(10.0G)->8526.8M(10.0G)], [Metaspace:
>> >> 77592K->77592K(1120256K)]
>> >>
>> >> Your heap is exhausted. During GC, discovery doesn't receive heartbeats
>> >> and nodes are stopped due to segmentation. Please check your nodes'
>> >> logs for NODE_SEGMENTED pattern. If it is your case try to tune GC or
>> >> reduce load on GC (see for details [1])
>> >>
>> >> [1] https://apacheignite.readme.io/docs/jvm-and-system-tuning
>> >>
>> >> On Fri, Feb 17, 2017 at 6:35 PM, Anil <anilk...@gmail.com> wrote:
>> >> > Hi Andrey,
>> >> >
>> >> > The query execution time is very high when the limit is 10000+250.
>> >> >
>> >> > 10 GB of heap memory for both client and servers. I have attached the
>> >> > gc
>> >> > logs of 4 servers. Could you please take a look ? thanks.
>> >> >
>> >> >
>> >> > On 17 February 2017 at 20:52, Anil <anilk...@gmail.com> wrote:
>> >> >>
>> >> >> Hi Andrey,
>> >> >>
>> >> >> I checked GClogs  and everything looks good.
>> >> >>
>> >> >> Thanks
>> >> >>
>> >> >> On 17 February 2017 at 20:45, Andrey Gura <ag...@apache.org> wrote:
>> >> >>>
>> >> >>> Anil,
>> >> >>>
>> >> >>> IGNITE-4003 isn't related with your problem.
>> >> >>>
>> >> >>> I think that nodes are going out of topology due to long GC pauses.
>> >> >>> You can easily check this using GC logs.
>> >> >>>
>> >> >>> On Fri, Feb 17, 2017 at 6:04 PM, Anil <anilk...@gmail.com> wrote:
>> >> >>> > Hi,
>> >> >>> >
>> >> >>> > We noticed whenever long running queries fired, nodes are going
>> >> >>> > out
>> >> >>> > of
>> >> >>> > topology and entire ignite cluster is down.
>> >> >>> >
>> >> >>> > In my case, a filter criteria could get 5L records. So each API
>> >> >>> > request
>> >> >>> > could fetch 250 records. When page number is getting increased
>> >> >>> > the
>> >> >>> > query
>> >> >>> > execution time is high and entire cluster is down
>> >> >>> >
>> >> >>> >  https://issues.apache.org/jira/browse/IGNITE-4003 related to
>> >> >>> > this ?
>> >> >>> >
>> >> >>> > Can we set seperate thread pool for queries executions, compute
>> >> >>> > jobs
>> >> >>> > and
>> >> >>> > other services instead of common public thread pool ?
>> >> >>> >
>> >> >>> > Thanks
>> >> >>> >
>> >> >>> >
>> >> >>
>> >> >>
>> >> >
>> >
>> >
>
>
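The GC-pause segmentation scenario in this thread is usually attacked with explicit heap sizing plus GC logging, so pauses are visible before nodes get dropped. An illustrative Java 8-era launch line — the flag values are assumptions for a 10 GB heap, not recommendations from the linked tuning guide:

```shell
# Fixed heap, G1 with a pause target, and GC logging for post-mortem analysis.
java -server \
  -Xms10g -Xmx10g \
  -XX:+UseG1GC -XX:MaxGCPauseMillis=200 \
  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
  -Xloggc:/var/log/ignite/gc.log \
  -cp "$IGNITE_HOME/libs/*" \
  org.apache.ignite.startup.cmdline.CommandLineStartup config/ignite.xml
```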


Re: Ignit Cache Stopped

2017-02-20 Thread Andrey Gura
Anil,

No, it doesn't. Only the client would leave the topology in this case.

On Mon, Feb 20, 2017 at 3:44 PM, Anil <anilk...@gmail.com> wrote:
> Hi Andrey,
>
> Does client ignite gc impact ignite cluster topology ?
>
> Thanks
>
> On 17 February 2017 at 22:56, Andrey Gura <ag...@apache.org> wrote:
>>
>> From GC logs at the end of files I see Full GC pauses like this:
>>
>> 2017-02-17T04:29:22.118-0800: 21122.643: [Full GC (Allocation Failure)
>>  10226M->8526M(10G), 26.8952036 secs]
>>[Eden: 0.0B(512.0M)->0.0B(536.0M) Survivors: 0.0B->0.0B Heap:
>> 10226.0M(10.0G)->8526.8M(10.0G)], [Metaspace:
>> 77592K->77592K(1120256K)]
>>
>> Your heap is exhausted. During GC, discovery doesn't receive heartbeats
>> and nodes are stopped due to segmentation. Please check your nodes'
>> logs for NODE_SEGMENTED pattern. If it is your case try to tune GC or
>> reduce load on GC (see for details [1])
>>
>> [1] https://apacheignite.readme.io/docs/jvm-and-system-tuning
>>
>> On Fri, Feb 17, 2017 at 6:35 PM, Anil <anilk...@gmail.com> wrote:
>> > Hi Andrey,
>> >
>> > The query execution time is very high when the limit is 10000+250.
>> >
>> > 10 GB of heap memory for both client and servers. I have attached the gc
>> > logs of 4 servers. Could you please take a look ? thanks.
>> >
>> >
>> > On 17 February 2017 at 20:52, Anil <anilk...@gmail.com> wrote:
>> >>
>> >> Hi Andrey,
>> >>
>> >> I checked GClogs  and everything looks good.
>> >>
>> >> Thanks
>> >>
>> >> On 17 February 2017 at 20:45, Andrey Gura <ag...@apache.org> wrote:
>> >>>
>> >>> Anil,
>> >>>
>> >>> IGNITE-4003 isn't related with your problem.
>> >>>
>> >>> I think that nodes are going out of topology due to long GC pauses.
>> >>> You can easily check this using GC logs.
>> >>>
>> >>> On Fri, Feb 17, 2017 at 6:04 PM, Anil <anilk...@gmail.com> wrote:
>> >>> > Hi,
>> >>> >
>> >>> > We noticed whenever long running queries fired, nodes are going out
>> >>> > of
>> >>> > topology and entire ignite cluster is down.
>> >>> >
>> >>> > In my case, a filter criteria could get 5L records. So each API
>> >>> > request
>> >>> > could fetch 250 records. When page number is getting increased the
>> >>> > query
>> >>> > execution time is high and entire cluster is down
>> >>> >
>> >>> >  https://issues.apache.org/jira/browse/IGNITE-4003 related to this ?
>> >>> >
>> >>> > Can we set separate thread pools for query execution, compute jobs
>> >>> > and
>> >>> > other services instead of common public thread pool ?
>> >>> >
>> >>> > Thanks
>> >>> >
>> >>> >
>> >>
>> >>
>> >
>
>
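The tuning doc Andrey links boils down to giving the JVM headroom and a pause-oriented collector; a hedged sketch of node JVM options (all values are illustrative, not a recommendation for this particular cluster):

```shell
# Illustrative JVM options for an Ignite server node; heap sizes and pause
# targets must be tuned per host. G1 bounds pause times at some throughput
# cost, and GC logging makes pauses like the 26s one above visible early.
JVM_OPTS="-Xms10g -Xmx10g \
 -XX:+UseG1GC -XX:MaxGCPauseMillis=500 \
 -Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps"
echo "$JVM_OPTS"
```

These flags would typically be exported before launching ignite.sh so the startup script picks them up.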


Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-02-20 Thread Andrey Gura
Hi,

Actually, deadlock detection for pessimistic transactions was
introduced in the Apache Ignite 1.7 release. But it is still a good idea
to try the same code on the newest release.

In case of a transaction timeout, deadlock detection should be triggered,
and the TransactionTimeoutException should contain a
TransactionDeadlockException as its cause if a deadlock was detected. Is
that what you see?

If a transaction doesn't release its locks (or non-transactional locks),
you will get log messages about long-running transactions or futures,
because partition map exchange can't finish in this case. Do you see
messages like that?

Is it possible to create a reproducer project that we can analyze?


On Sat, Feb 18, 2017 at 1:42 AM, bintisepaha  wrote:
> Thanks Andrew,
>
> The same thing happened again today. Clearly the key is locked, we get the
> timeout exceptions. But prior update to the same has not thrown any
> exceptions. Suddenly one update fails with timeout exceptions and we are
> notified due to those exceptions that the key is locked.
>
> We will upgrade to 1.8, but in the meantime is there a way to free up this
> locked key using some code?
>
> We try killing nodes, but we have one back up and it looks like the lock is
> carried over too, which would be the right thing to do.
>
> Outside the transaction we can read this key (dirty read). This is becoming
> an issue for us, since its a production system and the only way to free up
> is to restart the cluster.
>
> Please point us in a direction where we can avoid this or free it up.
>
> Thanks,
> Binti
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Pessimistic-TXN-did-not-release-lock-on-a-key-all-subsequent-txns-failed-tp10536p10713.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignit Cache Stopped

2017-02-17 Thread Andrey Gura
From the GC logs at the end of the files I see Full GC pauses like this:

2017-02-17T04:29:22.118-0800: 21122.643: [Full GC (Allocation Failure)
 10226M->8526M(10G), 26.8952036 secs]
   [Eden: 0.0B(512.0M)->0.0B(536.0M) Survivors: 0.0B->0.0B Heap:
10226.0M(10.0G)->8526.8M(10.0G)], [Metaspace:
77592K->77592K(1120256K)]

Your heap is exhausted. During GC the discovery SPI doesn't receive
heartbeats and nodes are stopped due to segmentation. Please check your
nodes' logs for the NODE_SEGMENTED pattern. If that is your case, try to
tune GC or reduce the load on it (see [1] for details).

[1] https://apacheignite.readme.io/docs/jvm-and-system-tuning

On Fri, Feb 17, 2017 at 6:35 PM, Anil <anilk...@gmail.com> wrote:
> Hi Andrey,
>
> The query execution time is very high when limit 1+250 .
>
> 10 GB of heap memory for both client and servers. I have attached the gc
> logs of 4 servers. Could you please take a look ? thanks.
>
>
> On 17 February 2017 at 20:52, Anil <anilk...@gmail.com> wrote:
>>
>> Hi Andrey,
>>
>> I checked the GC logs and everything looks good.
>>
>> Thanks
>>
>> On 17 February 2017 at 20:45, Andrey Gura <ag...@apache.org> wrote:
>>>
>>> Anil,
>>>
>>> IGNITE-4003 isn't related with your problem.
>>>
>>> I think that nodes are going out of topology due to long GC pauses.
>>> You can easily check this using GC logs.
>>>
>>> On Fri, Feb 17, 2017 at 6:04 PM, Anil <anilk...@gmail.com> wrote:
>>> > Hi,
>>> >
>>> > We noticed whenever long running queries fired, nodes are going out of
>>> > topology and entire ignite cluster is down.
>>> >
>>> > In my case, a filter criteria could get 5L records. So each API request
>>> > could fetch 250 records. When page number is getting increased the
>>> > query
>>> > execution time is high and entire cluster is down
>>> >
>>> >  https://issues.apache.org/jira/browse/IGNITE-4003 related to this ?
>>> >
>>> > Can we set separate thread pools for query execution, compute jobs
>>> > and
>>> > other services instead of common public thread pool ?
>>> >
>>> > Thanks
>>> >
>>> >
>>
>>
>


Re: Ignit Cache Stopped

2017-02-17 Thread Andrey Gura
Anil,

IGNITE-4003 isn't related to your problem.

I think the nodes are dropping out of the topology due to long GC pauses.
You can easily check this in the GC logs.

On Fri, Feb 17, 2017 at 6:04 PM, Anil  wrote:
> Hi,
>
> We noticed that whenever long-running queries are fired, nodes drop out
> of the topology and the entire Ignite cluster goes down.
>
> In my case, a filter criterion could match 5L (500,000) records, and each
> API request fetches 250 of them. As the page number increases, the query
> execution time grows and the entire cluster goes down.
>
>  https://issues.apache.org/jira/browse/IGNITE-4003 related to this ?
>
> Can we set separate thread pools for query execution, compute jobs and
> other services instead of common public thread pool ?
>
> Thanks
>
>


Re: Execution of Compute jobs

2017-02-16 Thread Andrey Gura
It seems that some kind of custom solution would be simpler. The load
balancer doesn't help because the map stage is always performed once for
all the jobs that you pass to the call() method.

On Thu, Feb 16, 2017 at 4:09 PM, Anil <anilk...@gmail.com> wrote:
>
> Thanks Andrey.
>
> Is there any way to change the behavior? Assign tasks (as many as the
> cluster size) and assign the remaining tasks as the running ones complete
> on the nodes? Will the Ignite load balancer help here? Please advise.
>
>
> Thanks
>
> On 16 February 2017 at 18:35, Andrey Gura <ag...@apache.org> wrote:
>>
>> Hi,
>>
>> when you invoke call(), your jobs are mapped to nodes and then sent to
>> the mapped nodes for execution. So the answer is yes, jobs will be
>> assigned to nodes even if the number of jobs is greater than the number
>> of nodes. All 9 jobs will be assigned at the beginning.
>>
>> On Thu, Feb 16, 2017 at 9:12 AM, Anil <anilk...@gmail.com> wrote:
>> > Hi,
>> >
>> > Does ignite assign the jobs when ignite.compute().call(jobs) is executed
>> > even number of jobs greater than number of nodes?
>> >
>> > Lets say i have to execute the 9 jobs on 4 node cluster. Will 9 jobs
>> > assigned to nodes in the beginning itself ? or assign each job to each
>> > node
>> > and when node finishes the job assign the other job ?
>> >
>> > Thanks.
>> >
>> >
>> >
>
>


Re: Execution of Compute jobs

2017-02-16 Thread Andrey Gura
Hi,

when you invoke call(), your jobs are mapped to nodes and then sent to
the mapped nodes for execution. So the answer is yes, jobs will be
assigned to nodes even if the number of jobs is greater than the number
of nodes. All 9 jobs will be assigned at the beginning.

On Thu, Feb 16, 2017 at 9:12 AM, Anil  wrote:
> Hi,
>
> Does ignite assign the jobs when ignite.compute().call(jobs) is executed
> even number of jobs greater than number of nodes?
>
> Lets say i have to execute the 9 jobs on 4 node cluster. Will 9 jobs
> assigned to nodes in the beginning itself ? or assign each job to each node
> and when node finishes the job assign the other job ?
>
> Thanks.
>
>
>
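A plain-JDK sketch of the queueing behaviour the question asks about (this is not Ignite's API; the fixed pool stands in for the "assign the next job only when a worker finishes" semantics of the custom solution Andrey hints at):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BoundedDispatch {
    // Run 9 jobs on a pool of 4 workers. Jobs beyond the pool size wait in
    // the queue until a worker becomes free, unlike compute().call(...),
    // which maps all jobs to nodes up front.
    static int runAll() {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Callable<Integer>> jobs = new ArrayList<>();
            for (int i = 0; i < 9; i++) {
                final int id = i;
                jobs.add(() -> id * id);
            }
            int sum = 0;
            for (Future<Integer> f : pool.invokeAll(jobs))
                sum += f.get();
            return sum;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(runAll()); // squares of 0..8 sum to 204
    }
}
```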


Re: Register CacheEvent Remote Listener

2017-02-13 Thread Andrey Gura
Yes, it's possible. Moreover, Ignite doesn't support event listening
for one particular cache only (except for the possibility of filtering
cache events by cache name).

Just register your listener and you will receive events for all caches:

ignite.events().remoteListen(lsnr, filter, types);

On Mon, Feb 13, 2017 at 6:57 AM, shawn.du  wrote:
> Hi,
>
> I want to only register listeners for cacheEvent, but I don't want to
> specify the particular cache.
> it is possible?
>
> like this code:
> ignite.events().remoteListen(null, new IgnitePredicate()
> {
>
> }
>
> Thanks
> Shawn
>


Re: Locking Threads

2017-02-07 Thread Andrey Gura
In the lock implementation we keep a reference to the thread that owns
the lock in order to prevent attempts to release it from another thread.

For key locks we also keep the thread ID for thread identification.

On Tue, Feb 7, 2017 at 6:23 PM, styriver  wrote:
> My understanding is that locking is performed on a thread and cache key
> basis. I wanted to know when you lock a thread how are you identifying it.
> By thread name or thread id?
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Locking-Threads-tp10466p10482.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
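The same owner-tracking idea exists in the JDK's own locks; a small sketch (JDK only, not Ignite internals) showing that ownership is tied to the thread itself, not to its name:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockOwnerDemo {
    // Returns {heldByOwner, heldByOtherThread} for a lock taken by the
    // calling thread. Ownership follows the thread object/ID, not its name.
    static boolean[] check() {
        final ReentrantLock lock = new ReentrantLock();
        final boolean[] held = new boolean[2];
        lock.lock();
        try {
            held[0] = lock.isHeldByCurrentThread(); // true: we own it
            Thread other = new Thread(
                () -> held[1] = lock.isHeldByCurrentThread()); // false
            other.start();
            other.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        } finally {
            lock.unlock();
        }
        return held;
    }

    public static void main(String[] args) {
        boolean[] h = check();
        System.out.println(h[0] + " " + h[1]); // true false
    }
}
```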


Re: Locking Threads

2017-02-07 Thread Andrey Gura
Hi,

I'm afraid I don't understand the question. Do you mean the
IgniteCache.lock() implementation?

On Tue, Feb 7, 2017 at 5:31 AM, styriver  wrote:
> Hello
>
> In your locking implementation, when a thread is locked, what reference is
> used? Is it locked by thread name or some other identifier?
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Locking-Threads-tp10466.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How to deploy cache to selected cluster group

2017-02-06 Thread Andrey Gura
David,

you can use CacheConfiguration.setNodeFilter() to define the set of
nodes on which the cache will be created.

On Mon, Feb 6, 2017 at 12:03 PM, Andrey Mashenkov
 wrote:
> Hi David,
>
> Please, take a look at  Ignite.services(ClusterGroup grp) [1]
>
> [1] http://apacheignite.gridgain.org/docs/service-grid#igniteservices
>
> On Mon, Feb 6, 2017 at 9:38 AM, davida  wrote:
>>
>> Hi all,
>>
>> Is there a way to deploy cache to predefined cluster group(defined by
>> custom
>> attributes) instead of deploying to every server node ?
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-ignite-users.70518.x6.nabble.com/How-to-deploy-cache-to-selected-cluster-group-tp10437.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>
>
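In Spring XML the same idea looks roughly like this; the filter class name is hypothetical, standing in for your own IgnitePredicate<ClusterNode> implementation that checks a custom node attribute:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="groupCache"/>
    <!-- Hypothetical predicate matching only nodes of the target group -->
    <property name="nodeFilter">
        <bean class="com.example.DataNodeFilter"/>
    </property>
</bean>
```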


Re: Create cache in apache ignite

2017-02-06 Thread Andrey Gura
Hi,

yes, you can configure a cache using Spring XML. To get started, please
see this article and the related ones:
https://apacheignite.readme.io/docs/cache-modes

On Mon, Feb 6, 2017 at 11:36 AM, Debasis Pradhan  wrote:
> Hi,
>
> For testing purposes I want to create a cache with 3-4 columns. Can I
> create the cache using Spring XML? Please provide links or docs if any.
>
> --
> Thanks
> D
> b
>
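A minimal Spring XML sketch of a cache definition (names and values are illustrative; the cache-modes page linked above covers the full set of options):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="cacheConfiguration">
        <bean class="org.apache.ignite.configuration.CacheConfiguration">
            <property name="name" value="testCache"/>
            <property name="cacheMode" value="PARTITIONED"/>
            <property name="backups" value="1"/>
        </bean>
    </property>
</bean>
```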


Re: Partitions within single node in Apache Ignite

2016-12-26 Thread Andrey Gura
The number of partitions depends on the configured affinity function.

RendezvousAffinityFunction with 1024 partitions is the default for
replicated and partitioned caches (on a single node too). So, for
example, you can run a scan query per partition in concurrent threads.
Another example of parallel processing is IgniteCompute.affinityRun()
and IgniteCompute.affinityCall(), which can take a partition ID as a
parameter.

A local cache always has exactly one partition.

On Sun, Dec 25, 2016 at 11:04 AM, rishi007bansod
 wrote:
> Is there any concept of partitions in Ignite for parallel processing of data
> within single node. That is, can there be more than one partitions within
> one node for multi-threading queries within single node. If there is any
> option, how can we set it?
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Partitions-within-single-node-in-Apache-Ignite-tp9726.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
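Setting the partition count explicitly is a cache-level affinity setting; a hedged XML sketch (1024 is already the default, shown here only to make the knob visible):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <property name="affinity">
        <bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
            <property name="partitions" value="1024"/>
        </bean>
    </property>
</bean>
```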


Re: enabling log4j, read previous post, still have issues

2016-12-12 Thread Andrey Gura
Hi

please read the `README.txt` file in the `IGNITE_HOME/libs` directory of
Ignite. You should copy the `IGNITE_HOME/libs/optional/ignite-log4j`
directory into the `IGNITE_HOME/libs/` directory.

On Fri, Dec 9, 2016 at 11:33 PM, rjdamore  wrote:
> Hi,
> I've tried enabling log4j simply by including the ignite-log4j library.
> However, it does not seem to have an effect.  I've also read that to enable
> Log4J module we should move the "optional/ignite-log4j" folder to the libs
> directory before running "ignite.bat".  My question is in order to avoid the
> :
> Can't load log handler "org.apache.ignite.logger.java.JavaLoggerFileHandler"
> message,
>
> should I include the igntie-log4j.jar file in the Ignite-Home directory
> structure, or in my application structure?  Also; where is this
> optional/ignite-log4j folder? I have the binary distribution and the
> source-built ignite installations, and I cannot locate this folder in either
> of them. Is this a folder that I create myself?
>
> Sorry if this is a naive question, but the actual error message for the log
> handler is preventing us from using ignite in a distributed system. The log
> is interrupting another applications ability to log to the same err.txt
> file. I'm trying to get this message eliminated.
> Thanks for any help.
>
> The message i get is the same as :
> http://apache-ignite-users.70518.x6.nabble.com/Can-t-load-log-handler-quot-org-apache-ignite-logger-java-JavaLoggerFileHandler-quot-td4490.html
>
> However the solution in that thread is not working for me at all.
>
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/enabling-log4j-read-previous-post-still-have-issues-tp9466.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
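The step from the README can be scripted; the directory layout below is a stand-in created just for the demo (point IGNITE_HOME at your real installation instead):

```shell
# Stand-in layout for the demo (replace with your real IGNITE_HOME).
IGNITE_HOME=$(mktemp -d)
mkdir -p "$IGNITE_HOME/libs/optional/ignite-log4j"
touch "$IGNITE_HOME/libs/optional/ignite-log4j/ignite-log4j.jar"

# The actual step: copy the optional module into libs/ before ignite.sh runs.
cp -r "$IGNITE_HOME/libs/optional/ignite-log4j" "$IGNITE_HOME/libs/"
ls "$IGNITE_HOME/libs/ignite-log4j"
```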


Re: NullPointerException on ScanQuery

2016-12-12 Thread Andrey Gura
Hi,

I've looked at your code.

First of all, you have races in your code. For example, you start two
threads and destroy the caches before the threads finish, which leads to
the "cache closed" error. Moreover, you stop the application before any
thread has finished, which leads to a topology change and the NPE.

Second, I don't understand why you use threads at all. Usually you
should start an Ignite cluster, connect to it using a client node and
run the scan query.

Starting several instances of Ignite in one JVM makes sense only for
test purposes. In any case, you should start the Ignite instances,
create the caches, and only after that run the threads with your tasks.

On Mon, Dec 12, 2016 at 11:23 AM, Alper Tekinalp  wrote:
> Hi Andrey.
>
> Did you able to look to the code?
>
> Regards.
>
> On Thu, Dec 8, 2016 at 10:05 AM, Alper Tekinalp  wrote:
>
>> Hi.
>>
>> Could you please share your reproducer example?
>>
>>
>> I added classes to repoduce the error. It also throws cache closed errors
>> I am ok with it. But others.
>>
>> --
>> Alper Tekinalp
>>
>> Software Developer
>> Evam Streaming Analytics
>>
>> Atatürk Mah. Turgut Özal Bulv.
>> Gardenya 5 Plaza K:6 Ataşehir
>> 34758 İSTANBUL
>>
>> Tel:  +90 216 455 01 53 Fax: +90 216 455 01 54
>> www.evam.com.tr
>> 
>>
>
>
>
> --
> Alper Tekinalp
>
> Software Developer
> Evam Streaming Analytics
>
> Atatürk Mah. Turgut Özal Bulv.
> Gardenya 5 Plaza K:6 Ataşehir
> 34758 İSTANBUL
>
> Tel:  +90 216 455 01 53 Fax: +90 216 455 01 54
> www.evam.com.tr
> 


Re: NullPointerException on ScanQuery

2016-12-07 Thread Andrey Gura
Could you please share your reproducer example?

On Wed, Dec 7, 2016 at 3:11 PM, Alper Tekinalp  wrote:
> Hi Andrey.
>
>> The difference is that the first method returns all primary partitions
>> for the given node ID and topology version, while the second returns all
>> node IDs that own the given partition on the given topology version.
>
>
> So if I get some partitions [1,3,5] with first method for node X, I expect
> to get X form second method with 1,3 or 5. But in my case it seems I get
> null.
>
>>
>> Did you executed query on stable topology?
>
>
> Yes, the topology is not changing.
>
>>
>> Is it reproducible problem?
>
>
> Yes, I get the same error once in a few tries.
>
> Regards.
>
> --
> Alper Tekinalp
>
> Software Developer
> Evam Streaming Analytics
>
> Atatürk Mah. Turgut Özal Bulv.
> Gardenya 5 Plaza K:6 Ataşehir
> 34758 İSTANBUL
>
> Tel:  +90 216 455 01 53 Fax: +90 216 455 01 54
> www.evam.com.tr


Re: NullPointerException on ScanQuery

2016-12-07 Thread Andrey Gura
The difference is that the first method returns all primary partitions for
the given node ID and topology version, while the second returns all node
IDs that own the given partition on the given topology version.

Did you execute the query on a stable topology? Is the problem reproducible?

On Wed, Dec 7, 2016 at 12:06 PM, Alper Tekinalp  wrote:

> Hi all.
>
> It seems one method uses affinity:
>
>  cctx.affinity().primaryPartitions(n.id(), topologyVersion());
>
> other uses topology API.
>
> cctx.topology().owners(part, topVer)
>
> Are these two fully consistent?
>
> On Tue, Dec 6, 2016 at 4:58 PM, Alper Tekinalp  wrote:
>
> > Hi all.
> >
> > We have 2 servers and a cache X. On both servers a method is running
> > reqularly and run a ScanQurey on that cache. We get partitions for that
> > query via
> >
> > ignite.affinity(cacheName).primaryPartitions(ignite.
> cluster().localNode())
> >
> > and run the query on each partitions. When cache has been destroyed by
> > master server on second server we get:
> >
> > javax.cache.CacheException: class org.apache.ignite.
> IgniteCheckedException:
> > null
> > at org.apache.ignite.internal.processors.cache.
> > IgniteCacheProxy.query(IgniteCacheProxy.java:740)
> > at com.intellica.evam.engine.event.future.FutureEventWorker.
> > processFutureEvents(FutureEventWorker.java:117)
> > at com.intellica.evam.engine.event.future.FutureEventWorker.run(
> > FutureEventWorker.java:66)
> > Caused by: class org.apache.ignite.IgniteCheckedException: null
> > at org.apache.ignite.internal.processors.query.
> GridQueryProcessor.
> > executeQuery(GridQueryProcessor.java:1693)
> > at org.apache.ignite.internal.processors.cache.
> > IgniteCacheProxy.query(IgniteCacheProxy.java:494)
> > at org.apache.ignite.internal.processors.cache.
> > IgniteCacheProxy.query(IgniteCacheProxy.java:732)
> > ... 2 more
> > Caused by: java.lang.NullPointerException
> > at org.apache.ignite.internal.processors.cache.query.
> > GridCacheQueryAdapter$ScanQueryFallbackClosableIterator.init(
> > GridCacheQueryAdapter.java:712)
> > at org.apache.ignite.internal.processors.cache.query.
> > GridCacheQueryAdapter$ScanQueryFallbackClosableIterator.(
> > GridCacheQueryAdapter.java:677)
> > at org.apache.ignite.internal.processors.cache.query.
> > GridCacheQueryAdapter$ScanQueryFallbackClosableIterator.(
> > GridCacheQueryAdapter.java:628)
> > at org.apache.ignite.internal.processors.cache.query.
> > GridCacheQueryAdapter.executeScanQuery(GridCacheQueryAdapter.java:548)
> > at org.apache.ignite.internal.processors.cache.
> > IgniteCacheProxy$2.applyx(IgniteCacheProxy.java:497)
> > at org.apache.ignite.internal.processors.cache.
> > IgniteCacheProxy$2.applyx(IgniteCacheProxy.java:495)
> > at org.apache.ignite.internal.util.lang.IgniteOutClosureX.
> > apply(IgniteOutClosureX.java:36)
> > at org.apache.ignite.internal.processors.query.
> GridQueryProcessor.
> > executeQuery(GridQueryProcessor.java:1670)
> > ... 4 more
> >
> > for a while until cache is closed on that server too.
> >
> > The corresponding line is:
> >
> > 710:final ClusterNode node = nodes.poll();
> > 711:
> > 712:if (*node*.isLocal()) {
> >
> > Obviously node is null. nodes is a dequeue fill by following method:
> >
> > private Queue fallbacks(AffinityTopologyVersion
> > topVer) {
> > Deque fallbacks = new LinkedList<>();
> > Collection owners = new HashSet<>();
> >
> > for (ClusterNode node : cctx.topology().owners(part,
> topVer)) {
> > if (node.isLocal())
> > fallbacks.addFirst(node);
> > else
> > fallbacks.add(node);
> >
> > owners.add(node);
> > }
> >
> > for (ClusterNode node : cctx.topology().moving(part)) {
> > if (!owners.contains(node))
> > fallbacks.add(node);
> > }
> >
> > return fallbacks;
> > }
> >
> > There errors occurs before cache closed on second server. So checking if
> > cache closed is not enough.
> >
> > Why when we take partitions for local node we get some partitions but
> > ignite cant find any owner for that partition?
> > Is our method for getting partitions wrong?
> > Is there any way to avoid that?
> >
> > Best regards.
> > --
> > Alper Tekinalp
> >
> > Software Developer
> > Evam Streaming Analytics
> >
> > Atatürk Mah. Turgut Özal Bulv.
> > Gardenya 5 Plaza K:6 Ataşehir
> > 34758 İSTANBUL
> >
> > Tel:  +90 216 455 01 53 Fax: +90 216 455 01 54
> > www.evam.com.tr
> > 
> >
>
>
>
> --
> Alper Tekinalp
>
> Software Developer
> Evam Streaming Analytics
>
> Atatürk Mah. Turgut Özal Bulv.
> Gardenya 5 Plaza K:6 Ataşehir
> 34758 İSTANBUL
>
> Tel:  +90 216 455 01 53 Fax: +90 216 455 01 54
> www.evam.com.tr
> 
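The ordering trick in the quoted fallbacks() method, where the local node is pushed to the front so it is polled first, is plain JDK Deque behaviour:

```java
import java.util.Deque;
import java.util.LinkedList;

public class FallbackOrder {
    // Mirrors the quoted fallbacks() logic: remote owners are appended,
    // the local node is pushed to the front, so poll() returns it first.
    static String first() {
        Deque<String> fallbacks = new LinkedList<>();
        fallbacks.add("remote-1");
        fallbacks.addFirst("local");
        fallbacks.add("remote-2");
        return fallbacks.poll();
    }

    public static void main(String[] args) {
        System.out.println(first()); // local
    }
}
```

The NPE in the report happens when this deque is empty and poll() returns null, i.e. no owner was found for a partition the node believed it held.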

Re: Question about backups

2016-11-14 Thread Andrey Gura
Hi,

A replicated cache has a number of backups equal to the number of cluster
nodes minus one, because one node is the primary. The number of nodes in
the cluster can change over time, so we use the Integer.MAX_VALUE magic
number for backups in the case of a replicated cache. The number of
backups doesn't depend on the cache memory mode, and you should not
specify which node is primary and which is backup; that is the affinity
function's responsibility. If you set the backups parameter to zero (or
any other value) for a replicated cache, it will be ignored.

In fact, partitioned and replicated caches are implemented in the same
way, though there are some implementation differences because our focus
is high performance.


On Tue, Nov 15, 2016 at 1:43 AM, styriver  wrote:

> Hello I am dumping the cache configuration for my defined caches. I am
> seeing
> this as the backup number
> memMode=OFFHEAP_TIERED cacheMode=REPLICATED, atomicityMode=TRANSACTIONAL,
> atomicWriteOrderMode=null, backups=2147483647
>
> I am not setting the backups property in any of my configurations so this
> must be the default. This is the same number for both the OFFHEAP_TIERED
> and
> ONHEAP_TIERED. We have two server nodes and am not specifying any of the
> nodes as primary or backup. Wondering what the implications of having this
> number set to this value. Is it only applicable if the cacheMode is
> PARTIONED? Like to know if I should set this to zero or not?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Question-about-backups-tp8968.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>
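The 2147483647 in the dumped configuration is simply Integer.MAX_VALUE, the sentinel described above for "as many backups as there are other nodes"; a one-line check:

```java
public class BackupsSentinel {
    public static void main(String[] args) {
        // The backups value dumped for a REPLICATED cache is Java's
        // Integer.MAX_VALUE, not a number anyone configured explicitly.
        System.out.println(2147483647 == Integer.MAX_VALUE); // true
    }
}
```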


Re: Multithreading SQL queries in Apache Ignite

2016-11-14 Thread Andrey Gura
From my point of view, it depends. Only performance measurements can give
the answer.

On Mon, Nov 14, 2016 at 5:06 PM, rishi007bansod 
wrote:

> I have set it to default value i.e. double the number of cores. But will it
> improve performance if I increase it further?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Multithreading-SQL-queries-in-
> Apache-Ignite-tp8944p8949.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Multithreading SQL queries in Apache Ignite

2016-11-14 Thread Andrey Gura
Hi,

Of course, all requests are processed concurrently by the server. At the
moment, queries use the Ignite system pool for execution. You can adjust
the size of this pool if needed using the
IgniteConfiguration.setSystemThreadPoolSize() method.

On Mon, Nov 14, 2016 at 4:10 PM, rishi007bansod 
wrote:

> In my case I have data present on 1 server node and 25 clients connected to
> this server, concurrently firing sql queries. So, does Ignite by default
> parallelizes these queries or do we have to do some settings? Can we apply
> some kind of multithreading on server side to handle these queries for
> performance improvement?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Multithreading-SQL-queries-in-
> Apache-Ignite-tp8944.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>
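The pool-size knob Andrey mentions, as a Spring XML sketch (the value is illustrative; the default is derived from the CPU count):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Queries were executed in the system pool in these Ignite versions. -->
    <property name="systemThreadPoolSize" value="32"/>
</bean>
```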


Re: Partitioning on a non-uniform cluster

2016-11-14 Thread Andrey Gura
Hi,

In order to implement a resource-aware affinity function you can use node
attributes (available via the ClusterNode interface). But node attributes
must be initialized before the Ignite node starts (except for the default
node attributes that can be found in the IgniteNodeAttributes class).

Also, you can start one Ignite instance on the 16GB machines and two
instances on the 32GB ones. In this case you should configure
RendezvousAffinityFunction with the excludeNeighbors == true flag in
order to increase cluster reliability.

On Mon, Nov 14, 2016 at 3:09 PM, Krzysztof  wrote:

> Hello,
>
> Judging by the documentation and some discussions on this list, can you
> confirm that Ignite cache does not take into account different memory
> settings, i.e. if we have various nodes with 16GB and 32GB allocated for
> cache, there would be no two times more partitions assigned to larger
> nodes?
>
> In order to not to underutilize larger nodes or overfill smaller nodes we
> would have to develop our own affinity strategy via AffinityFunction in
> order to make it cache-size aware?
>
> RendezvousAffinityFunction seems to be completely resource-blind?
>
> Could you please clarify what would be the best way to achieve balanced
> distribution cluster memory-wise?
>
> Thanks
> Krzysztof
>
>
>
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Partitioning-on-a-non-uniform-cluster-tp8940.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>
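Node attributes for a custom, resource-aware affinity function are set before the node starts; a hedged XML sketch (the attribute key is made up; any key your AffinityFunction implementation reads would do):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="userAttributes">
        <map>
            <!-- Hypothetical attribute a custom affinity function could read -->
            <entry key="cache.mem.gb" value="32"/>
        </map>
    </property>
</bean>
```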


Re: Failed to execute SQL from ODBC, Indexing is disabled for cache

2016-11-13 Thread Andrey Gura
Hi,

Your XML configuration doesn't contain an indexing configuration, so you
can't query data from the cache. Just add the indexing configuration to
your cache configuration like this:










<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="confluentCache"/>
    <property name="indexedTypes">
        <list>
            <value>java.lang.String</value>
            <value>java.lang.String</value>
        </list>
    </property>
</bean>






It should work.


Regarding programmatic configuration: I need all of the code in order to
find the problem.



On Sat, Nov 12, 2016 at 4:16 PM, austin solomon  wrote:

> Hi,
>
> I am using Kafka file source connector to fetch the data from csv file and
> sinking the data into Ignite's cache, The data has been saved in cache
> sucessfully, but when i tried to query the cache I got an error
>
> [SEVERE][odbc-#103%null%][OdbcRequestHandler] Failed to execute SQL query
> [reqId=11, req=OdbcQueryExecuteRequest [cacheName=confluentCache,
> sqlQry=select * from confluentCache, args=[]]]
> javax.cache.CacheException: Indexing is disabled for cache: confluentCache.
> Use setIndexedTypes or setTypeMetadata methods on CacheConfiguration to
> enable.
>
>
> My xml configuration is like this
>
>
> 
> 
>
>  class="org.apache.ignite.configuration.CacheConfiguration">
> 
> 
> 
> 
> 
> 
>
>
> 
>  class="org.apache.ignite.configuration.OdbcConfiguration">
> 
>
> I tried to set the values programatically like this in IgniteSinkTask.java
> file :
>
> cacheName = props.get(IgniteSinkConstants.CACHE_NAME);
> igniteConfigFile = props.get(IgniteSinkConstants.
> CACHE_CFG_PATH);
>
> CacheConfiguration cfg = new
> CacheConfiguration<>(cacheName);
> cfg.setIndexedTypes(String.class, String.class);
>
>  But no luck, can any one tell me how to setIndexedTypes when data is being
> sink to cache.
>
> Thanks,
> Austin
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Failed-to-execute-SQL-from-ODBC-
> Indexing-is-disabled-for-cache-tp8924.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Creating cache on client node in xml not working

2016-11-13 Thread Andrey Gura
The zero-deployment feature doesn't work for cache stores. You should
provide the cache store configuration and implementation on each node in
the cluster.

On Fri, Nov 11, 2016 at 8:26 PM, Evans, Charlie <charlie.ev...@vimpelcom.com
> wrote:

> Hi,
>
>
> No the server does not. I was hoping the bean would be added to the server
> automatically somehow. I guess not?
>
>
> If I had multiple nodes as servers would they all need access to the bean
> xml. So the same xml file on each node?
>
>
> Thanks
> ------
> *From:* Andrey Gura <ag...@apache.org>
> *Sent:* 11 November 2016 18:05:30
> *To:* user@ignite.apache.org
> *Subject:* Re: Creating cache on client node in xml not working
>
> Hi,
>
> Does your Ignite server have cassandraAdminDataSource and other Cassandra
> related beans in classpath?
>
> On Fri, Nov 11, 2016 at 7:53 PM, Evans, Charlie <
> charlie.ev...@vimpelcom.com> wrote:
>
>> Hi all,
>>
>>
>> I've been trying to create a cache in my application with Cassandra as
>> the persistent storage.
>>
>>
>> My current setup is:
>>
>> - starting Ignite on the server with default configs.
>>
>> - application connects to the ignite server as a client and attempts to
>> load the cache configuration and create the cache
>>
>> - the cache configuration uses CassandraCacheStoreFactory.
>>
>>
>> I'm aware I cannot do this programmatically because DataSource is not
>> serializable (until 1.8) so have been trying to use xml files.
>>
>>
>> When my application starts it seems to create the cache (
>> [17:30:44,732][INFO][main][GridCacheProcessor] Started cache
>> [name=ctntimestamp, mode=PARTITIONED]) but later just gets stuck with "
>> [WARNING][main][GridCachePartitionExchangeManager] Still waiting for
>> initial partition map exchange" every 40 seconds. In the logs for the
>> server I see the error message
>>
>> "class org.apache.ignite.IgniteCheckedException: Spring bean with
>> provided name doesn't exist , beanName=cassandraAdminDataSource]".
>>
>> My xml file is below and is in the src/main/resources folder. It is
>> loaded with Ignition.start(getClass.getResource("/ignite-cass.xml"))
>>
>> Any ideas what the problem could be?
>>
>> P.S. When will 1.8 be released? I tried doing it all programmatically
>> with 1.8 SNAPSHOT and it works fine.
>>
>> 
>>
>> http://www.springframework.org/schema/beans;
>>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance; 
>> xmlns:util="http://www.springframework.org/schema/util;
>>xsi:schemaLocation="http://www.springframework.org/schema/beans
>> http://www.springframework.org/schema/beans/spring-beans.xsd
>> http://www.springframework.org/schema/util
>> http://www.springframework.org/schema/util/spring-util.xsd;>
>>
>> > class="com.datastax.driver.core.policies.TokenAwarePolicy">
>> > type="com.datastax.driver.core.policies.LoadBalancingPolicy">
>> > class="com.datastax.driver.core.policies.RoundRobinPolicy"/>
>> 
>> 
>>
>> 
>> 127.0.0.1
>> 
>>
>> > class="org.apache.ignite.cache.store.cassandra.datasource.DataSource">
>> 
>> 
>> 
>> 
>> 
>>
>> > class="org.apache.ignite.cache.store.cassandra.persistence.KeyValuePersistenceSettings">
>> 
>> 
>> 
>> 
>> 
>>
>> 
>> > class="org.apache.ignite.configuration.IgniteConfiguration">
>> 
>>
>> 
>>
>> 
>> 
>> 
>> > class="org.apache.ignite.configuration.CacheConfiguration">
>> 
>> 
>> 
>> 
>> > class="org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory">
>> > value="cassandraAdminDataSource"/>
>> > value="cache1_persistence_settings"/>
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> > class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>> 
>> > class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>> 
>> 
>> 127.0.0.1
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>>
>>
>>
>>
>


Re: Creating cache on client node in xml not working

2016-11-11 Thread Andrey Gura
Hi,

Does your Ignite server have the cassandraAdminDataSource and other
Cassandra-related beans on its classpath?

On Fri, Nov 11, 2016 at 7:53 PM, Evans, Charlie  wrote:

> Hi all,
>
>
> I've been trying to create a cache in my application with Cassandra as the
> persistent storage.
>
>
> My current setup is:
>
> - starting Ignite on the server with default configs.
>
> - application connects to the ignite server as a client and attempts to
> load the cache configuration and create the cache
>
> - the cache configuration uses CassandraCacheStoreFactory.
>
>
> I'm aware I cannot do this programmatically because DataSource is not
> serializable (until 1.8) so have been trying to use xml files.
>
>
> When my application starts it seems to create the cache (
> [17:30:44,732][INFO][main][GridCacheProcessor] Started cache
> [name=ctntimestamp, mode=PARTITIONED]) but later just gets stuck with "
> [WARNING][main][GridCachePartitionExchangeManager] Still waiting for
> initial partition map exchange" every 40 seconds. In the logs for the
> server I see the error message
>
> "class org.apache.ignite.IgniteCheckedException: Spring bean with
> provided name doesn't exist , beanName=cassandraAdminDataSource]".
>
> My xml file is below and is in the src/main/resources folder. It is loaded
> with Ignition.start(getClass.getResource("/ignite-cass.xml"))
>
> Any ideas what the problem could be?
>
> P.S. When will 1.8 be released? I tried doing it all programmatically with
> 1.8 SNAPSHOT and it works fine.
>
> 
>
> http://www.springframework.org/schema/beans;
>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance; 
> xmlns:util="http://www.springframework.org/schema/util;
>xsi:schemaLocation="http://www.springframework.org/schema/beans
> http://www.springframework.org/schema/beans/spring-beans.xsd
> http://www.springframework.org/schema/util
> http://www.springframework.org/schema/util/spring-util.xsd;>
>
>  class="com.datastax.driver.core.policies.TokenAwarePolicy">
>  type="com.datastax.driver.core.policies.LoadBalancingPolicy">
> 
> 
> 
>
> 
> 127.0.0.1
> 
>
>  class="org.apache.ignite.cache.store.cassandra.datasource.DataSource">
> 
> 
> 
> 
> 
>
>  class="org.apache.ignite.cache.store.cassandra.persistence.KeyValuePersistenceSettings">
> 
> 
> 
> 
> 
>
> 
>  class="org.apache.ignite.configuration.IgniteConfiguration">
> 
>
> 
>
> 
> 
> 
>  class="org.apache.ignite.configuration.CacheConfiguration">
> 
> 
> 
> 
>  class="org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory">
>  value="cassandraAdminDataSource"/>
>  value="cache1_persistence_settings"/>
> 
> 
> 
> 
> 
> 
> 
> 
> 
>  class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
> 
> 
> 127.0.0.1
> 
> 
> 
> 
> 
> 
> 
> 
>
>
>
>


Re: How is maxMemorySize calculated in the presence of backups?

2016-11-11 Thread Andrey Gura
Josh,

Eviction policies track entries on the local node only, regardless of whether
that node is primary or backup for a particular entry. So the policy will
evict entries on a particular node once that node holds 1G of heap.
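As a concrete illustration, here is a minimal sketch of such a configuration, assuming the on-heap eviction API from the Ignite 1.x line discussed in this thread; the cache name and value type are placeholders:

```java
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class EvictionConfigSketch {
    public static CacheConfiguration<Integer, byte[]> cacheConfig() {
        CacheConfiguration<Integer, byte[]> ccfg = new CacheConfiguration<>("myCache");
        ccfg.setCacheMode(CacheMode.PARTITIONED);
        ccfg.setBackups(1);

        LruEvictionPolicy<Integer, byte[]> lru = new LruEvictionPolicy<>();
        // The limit is enforced per node, over whatever entries (primary or
        // backup) that node happens to hold, not over the primary copies alone.
        lru.setMaxMemorySize(1024L * 1024 * 1024); // 1 GB per node
        ccfg.setEvictionPolicy(lru);

        return ccfg;
    }
}
```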

On Fri, Nov 11, 2016 at 6:49 PM, Josh Cummings 
wrote:

> For example, if I have an LruEvictionPolicy on a cache with one backup,
> and I say the maxMemorySize is 1G, will it evict when the primary has 1G or
> when the primary and the backup together have 1G?
>
> --
>
> *JOSH CUMMINGS*
>
> Principal Engineer
>
> [image: Workfront] 
>
> *O*  801.477.1234  |  *M*  8015562751
>
> joshcummi...@workfront.com | www.workfront.com
> Address   |  Twitter
>   |  LinkedIn
>   |  Facebook
> 
>
> [image: Workfront] 
>


Re: Ignite Jdbc connection

2016-11-11 Thread Andrey Gura
Hi,


1. The Ignite client node is thread-safe, and you can create multiple
statements to execute queries. So, from my point of view, you should close the
connection only when you have finished all your queries.
2. Could you please clarify your question?
3. I don't think pooling is required.
4. The Ignite client will try to reconnect to the cluster if a
server node fails. All you need is a proper IP finder configuration.
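To illustrate points 1 and 3, here is a minimal sketch, assuming a running cluster and an `ignite-jdbc.xml` config on the classpath; the table name is a placeholder. Note that closing the connection stops the client node, so it is done once, at the end:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class IgniteJdbcSketch {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.ignite.IgniteJdbcDriver");
        // One connection == one client node; keep it open for the app's lifetime
        // and run as many statements over it as you need.
        try (Connection conn =
                 DriverManager.getConnection("jdbc:ignite:cfg://ignite-jdbc.xml")) {
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("select count(*) from Person")) {
                while (rs.next())
                    System.out.println(rs.getLong(1));
            }
            // more statements can reuse the same connection here...
        } // closing the connection shuts the client node down
    }
}
```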


On Thu, Nov 10, 2016 at 5:01 PM, Anil  wrote:

> Any help in understanding below ?
>
> On 10 November 2016 at 16:31, Anil  wrote:
>
>> I have couple of questions on ignite jdbc connection. Could you please
>> clarify ?
>>
>> 1. Should connection be closed like other jdbc db connection ? - I see
>> connection close is shutdown of ignite client node.
>> 2. Connection objects are not getting released and all connections are
>> busy ?
>> 3. Connection pool is really required for ignite client ? i hope one
>> ignite connection can handle number of queries in parallel.
>> 4. What is the recommended configuration for ignite client to support
>> failover ?
>>
>> Thanks.
>>
>
>


Re: Class objects are fetched as string when JDBC api is used

2016-11-07 Thread Andrey Gura
Hi,

You'll get the same behaviour with a SqlFieldsQuery. If you want a correct
string representation of your objects, you should override the toString()
method or query specific fields separately.
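For illustration, a minimal sketch of such an override; the class name mirrors the model from the question, but the fields are invented:

```java
public class ToStringDemo {
    static class TdyFeedModel {
        final double sales;
        final int units;

        TdyFeedModel(double sales, int units) {
            this.sales = sales;
            this.units = units;
        }

        // Without this override, printing the object yields the default
        // Object.toString() form, e.g. "TdyFeedModel@2306117a".
        @Override public String toString() {
            return "TdyFeedModel{sales=" + sales + ", units=" + units + "}";
        }
    }

    public static void main(String[] args) {
        System.out.println(new TdyFeedModel(9.5, 3));
        // prints: TdyFeedModel{sales=9.5, units=3}
    }
}
```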

On Sun, Nov 6, 2016 at 5:56 PM, chevy  wrote:

> Hi,
>
>   After adding data to cache, i am try to fetch it using JDBC api and below
> statement -
> *rs = conn.createStatement().executeQuery("select * from SalesModel where
> _key=" + storeId);*
>
> Issue: Values other than class objects is retrieved correctly but class
> objects are taken as strings as shown below -
>
> *TODAYDATA*: "com.target.ignite.model.TdyFeedModel@2306117a",
> *WEEKTODATE*: "com.target.ignite.model.WtdFeedModel@47de79ca",
> STOREID: 1234,
> *MONTHTODATE*: "com.target.ignite.model.MtdFeedModel@c3179f1",
> *PREVIOUSDAY*: "com.target.ignite.model.WtdFeedModel@4f1c611a",
> ID: "2016-10-24,1234",
> _KEY: 3,
> *_VAL*: "com.target.ignite.model.SalesModel@75c7cc6c",
> SALESDATE: "2016-10-24"
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Class-objects-are-fetched-as-string-
> when-JDBC-api-is-used-tp8720.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Null column values - bug

2016-11-03 Thread Andrey Gura
String.valueOf(Object) returns the "null" string for a null argument, by contract.
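A self-contained demonstration of that contract, and of the cast-based fix from the quoted message:

```java
public class ValueOfNullDemo {
    public static void main(String[] args) {
        Object col = null; // a null column value from the result set

        // String.valueOf(Object) is specified to return the four-character
        // string "null" when its argument is null, not a null reference.
        String viaValueOf = String.valueOf(col);
        System.out.println(viaValueOf == null); // false
        System.out.println(viaValueOf);         // the string "null"

        // A plain cast preserves the null reference, which is what a JDBC
        // getString() caller expects for a SQL NULL.
        String viaCast = (String) col;
        System.out.println(viaCast == null);    // true
    }
}
```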

On Thu, Nov 3, 2016 at 5:33 PM, Anil  wrote:

> HI ,
>
> null values are returned as "null" with ignite jdbc result set.
>
>  private <T> T getTypedValue(int colIdx, Class<T> cls) throws SQLException
> {
> ensureNotClosed();
> ensureHasCurrentRow();
>
> try {
> T val = cls == String.class ? (T)String.valueOf(curr.get(colIdx
> - 1)) : (T)curr.get(colIdx - 1);
>
> wasNull = val == null;
>
> return val;
> }
> catch (IndexOutOfBoundsException ignored) {
> throw new SQLException("Invalid column index: " + colIdx);
> }
> catch (ClassCastException ignored) {
> throw new SQLException("Value is an not instance of " +
> cls.getName());
> }
> }
>
>
> if a column value is null (curr.get(colIdx - 1) return null but
> String.valueOf( (curr.get(colIdx - 1) ) is not null it is "null".
>
> ArrayList obj = new ArrayList();
>   obj.add(null);
> System.out.println(null == (String)String.valueOf(obj.get(0)));
>
> above Sysout is false.
>
> Fix :
>
> Object colValue = curr.get(colIdx - 1);
>
> T val = cls == String.class ? (String) colValue : (T) colValue;
>
> or return (T) colValue
>
>
> please let me know if you see any issues. thanks
>
>
>


Re: Swap Problem

2016-11-02 Thread Andrey Gura
Hi,

The remaining entries in swap will be moved into memory only if they are
requested by an operation with get semantics.

On Wed, Nov 2, 2016 at 12:35 PM, Level D <724172...@qq.com> wrote:

> In additon, after all those operations have been done, will the 20 pieces
> of data be deleted from hard disk?
>
>
> -- Original --
> *From: * "Level D";<724172...@qq.com>;
> *Date: * Wed, Nov 2, 2016 05:22 PM
> *To: * "user";
> *Subject: * Swap Problem
>
> Hi all,
>
> Here's my case.
>
> The swapEnabled is true.
> The swapEnabled is true.
> I have 120 entries: 100 are stored in memory and the other 20 on disk.
> If I delete all the data in memory and 10 of the entries on disk, will the
> remaining 10 entries be moved to memory?
>
> Regards,
>
> Zhou.
>
>>
>


Re: Loading Hbase data into Ignite

2016-10-11 Thread Andrey Gura
Hi,

HBase regions don't map to Ignite nodes due to architectural differences.
Each HBase region contains rows in some range of keys, sorted
lexicographically, while the distribution of keys in Ignite depends on the
affinity function and the key's hash code. Also, how would you remap a region
to nodes if the region gets split?

Of course, you can get the node ID for a given key in the cluster, but because
HBase keeps rows sorted by key lexicographically, you would have to perform a
full scan of the HBase table. So the simplest way to parallelize data loading
from HBase into Ignite is to scan the regions concurrently and stream all rows
through one or more DataStreamers.
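A rough sketch of that approach, using the HBase 1.x client API; the table, cache, and column names are placeholders, and each region's key range would be scanned by a separate worker:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

public class HBaseToIgniteLoader {
    // Streams one key range (e.g. one region's [start, stop) rows) into the cache.
    static void loadRange(Ignite ignite, Connection hbase,
                          byte[] start, byte[] stop) throws Exception {
        try (Table table = hbase.getTable(TableName.valueOf("records"));
             IgniteDataStreamer<String, String> streamer =
                 ignite.dataStreamer("recordsCache")) {
            Scan scan = new Scan();
            scan.setStartRow(start);
            scan.setStopRow(stop);
            try (ResultScanner rs = table.getScanner(scan)) {
                for (Result row : rs)
                    // The streamer batches and routes entries to their
                    // affinity nodes; Ignite decides placement, not HBase.
                    streamer.addData(
                        Bytes.toString(row.getRow()),
                        Bytes.toString(row.getValue(
                            Bytes.toBytes("cf"), Bytes.toBytes("val"))));
            }
        }
    }
}
```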


On Tue, Oct 11, 2016 at 4:11 PM, Anil  wrote:

> HI,
>
> we have around 18 M records in hbase which needs to be loaded into ignite
> cluster.
>
> i was looking at
>
> http://apacheignite.gridgain.org/v1.7/docs/data-loading
>
> https://github.com/apache/ignite/tree/master/examples
>
> is there any approach where each ignite node loads the data of one hbase
> region ?
>
> Do you have any recommendations ?
>
> Thanks.
>


Re: How off heap works

2016-09-12 Thread Andrey Gura
Yes, the LRU policy will be used for the OFFHEAP_TIERED cache memory mode.

If you allocate at 3gb/sec, it is possible that this leads to allocation of a
large number of objects on the heap. Maybe the JVM tuning recommendations will
help you [1].

[1]
https://apacheignite.readme.io/docs/jvm-and-system-tuning#jvm-tuning-for-clusters-with-off_heap-caches


On Mon, Sep 12, 2016 at 6:12 AM, Anmol Rattan <anmolrat...@gmail.com> wrote:

> It is off heap tiered with max off heap of 40gb or so and in some cases, 0
> (no limit)
> No eviction policy, by default will be LRU?
>
> I would expect the internal data structures not to cause such large heap
> growth, unless there is a lag moving entries from on-heap to off-heap
>
> On Sep 12, 2016 5:23 AM, "Andrey Gura" <ag...@gridgain.com> wrote:
>
>> Hi,
>>
>> what cache model do you use? Do you have configured eviction policy for
>> your cache?
>>
>> Keep in mind that every entry allocates heap memory before it is evicted
>> to offheap memory. Moreover, each entry stored in offheap consumes some
>> heap memory due to the internal data structures' overhead.
>>
>> On Sun, Sep 11, 2016 at 8:59 PM, Anmol Rattan <anmolrat...@gmail.com>
>> wrote:
>>
>>> Hi,
>>>
>>> We are dealing with a large grid of 7 server name nodes where each node
>>> has multiple local caches. Data allocation rate at a few nodes is around
>>> 3gb/sec which will be moved to off heap cache (as gathered from young gen
>>> gc). However, heap consumption is very fast, one node even shows 50gb
>>> consumption in 5 mins only.
>>>
>>> Can you help us understand how off heap in ignite works. Is there lag or
>>> transfer at regular interval from on heap to off heap. Pointers to why heap
>>> is showing such a rapid growth though all caches are off heap tiered.
>>>
>>> Thanks.
>>>
>>
>>
>>
>> --
>> Andrey Gura
>> GridGain Systems, Inc.
>> www.gridgain.com
>>
>


-- 
Andrey Gura
GridGain Systems, Inc.
www.gridgain.com


Re: How off heap works

2016-09-11 Thread Andrey Gura
Hi,

What cache mode do you use? Do you have an eviction policy configured for
your cache?

Keep in mind that every entry allocates heap memory before it is evicted to
offheap memory. Moreover, each entry stored in offheap consumes some heap
memory due to the internal data structures' overhead.

On Sun, Sep 11, 2016 at 8:59 PM, Anmol Rattan <anmolrat...@gmail.com> wrote:

> Hi,
>
> We are dealing with a large grid of 7 server name nodes where each node
> has multiple local caches. Data allocation rate at a few nodes is around
> 3gb/sec which will be moved to off heap cache (as gathered from young gen
> gc). However, heap consumption is very fast, one node even shows 50gb
> consumption in 5 mins only.
>
> Can you help us understand how off heap in ignite works. Is there lag or
> transfer at regular interval from on heap to off heap. Pointers to why heap
> is showing such a rapid growth though all caches are off heap tiered.
>
> Thanks.
>



-- 
Andrey Gura
GridGain Systems, Inc.
www.gridgain.com


Re: understanding Locks usage

2016-09-08 Thread Andrey Gura
Sam,

as a workaround you can use IgniteQueue to implement a distributed
exclusive lock. You can find an example of such a solution in the vertx-ignite
project: see the IgniteClusterManager class [1], the getLockWithTimeout() method
and the LockImpl class.

[1]
https://github.com/vert-x3/vertx-ignite/blob/master/src/main/java/io/vertx/spi/cluster/ignite/IgniteClusterManager.java
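The underlying idea can be sketched as follows. This is not the vertx-ignite implementation, just the basic pattern: a capacity-1 queue can hold at most one token, so the node that successfully offers the token owns the lock, and removing the token releases it:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteQueue;
import org.apache.ignite.configuration.CollectionConfiguration;

public class QueueLockSketch {
    private final IgniteQueue<String> queue;

    QueueLockSketch(Ignite ignite, String name) {
        // Bounded, cluster-wide queue with capacity 1: only one token fits.
        this.queue = ignite.queue(name, 1, new CollectionConfiguration());
    }

    // offer() succeeds only while the queue is empty, i.e. the lock is free.
    boolean tryLock() {
        return queue.offer("token");
    }

    // Removing the token lets some other node's offer() succeed.
    void unlock() {
        queue.poll();
    }
}
```

Note the caveat raised in the quoted question: if the holder dies before calling unlock(), the token stays in the queue, so a real implementation needs a timeout or failure-detection mechanism on top of this.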

On Thu, Sep 8, 2016 at 1:49 AM, javastuff@gmail.com <
javastuff@gmail.com> wrote:

> With key vs. entry lock, I am talking about lock granularity.
> A key lock is a kind of synchronized block or method whose processing logic is
> unrelated to the key or cached data being processed, but to a different shared
> resource. It is a kind of distributed mutex, but note that it does not need
> all the other features of distributed cache access semantics.
> An entry lock is a distributed cache lock with full data access semantics. It
> can be seen as a row-level lock in a DB system.
> It is good to have, but not necessary; an entry lock can achieve what a key
> lock needs to do.
>
> A few last questions on locks:
> 1. Is there a way to have a time-to-live for a lock? What happens if the
> thread/system gets killed before unlocking?
> 2. Locks work only in TRANSACTIONAL mode; is there an approximate benchmark
> that
> can be shared for FETCH/PUT/FETCHALL/PUTALL comparing TRANSACTIONAL vs
> ATOMIC?
>
> For the key lock scenario above, I can have a separate cache just for locks
> (Transactional) and a separate cache for data (Atomic), but I want to decide
> if that is really needed.
>
> Thanks,
> -Sam
>
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/understanding-Locks-usage-tp7489p7596.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Andrey Gura
GridGain Systems, Inc.
www.gridgain.com


Re: Spring application context resource is not injected exception while starting ignite in jdbc driver mode

2016-08-29 Thread Andrey Gura
Yes, SQL queries are read-only, so DB updates aren't possible.

CRUD operations using SQL are in progress now.

On Mon, Aug 29, 2016 at 10:24 AM, san <hv.sanj...@gmail.com> wrote:

> Hi,
>
> please let know "SQL Queries" are read-only?  is cache and database update
> possible using SQL Queries?
>
> https://apacheignite.readme.io/v1.7/docs/sql-queries
>
> Regards,
> Sanjeev
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Spring-application-context-resource-
> is-not-injected-exception-while-starting-ignite-in-jdbc-
> driver-me-tp7299p7364.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Andrey Gura
GridGain Systems, Inc.
www.gridgain.com


Re: Spring application context resource is not injected exception while starting ignite in jdbc driver mode

2016-08-26 Thread Andrey Gura
Val,

Why do we need the store on a client node in the case of a partitioned or replicated cache?

On Fri, Aug 26, 2016 at 4:53 AM, vkulichenko <valentin.kuliche...@gmail.com>
wrote:

> Hi,
>
> This happens because JDBC driver tries to initialize the store. This is
> needed for regular client nodes, but for the driver this doesn't make much
> sense. I created a ticket: https://issues.apache.org/
> jira/browse/IGNITE-3771
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Spring-application-context-resource-
> is-not-injected-exception-while-starting-ignite-in-jdbc-
> driver-me-tp7299p7328.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Andrey Gura
GridGain Systems, Inc.
www.gridgain.com


Re: Spring application context resource is not injected exception while starting ignite in jdbc driver mode

2016-08-24 Thread Andrey Gura
Hi,

could you please provide the exception and stack trace? Also, please make sure
that your Spring configuration file has a <beans> root element.

On Wed, Aug 24, 2016 at 4:27 PM, san <hv.sanj...@gmail.com> wrote:

> I am facing an issue while starting ignite in JDBC driver mode. i have my
> cache in remote node and i need to access using plain sql queries. Hence i
> am using ignite jdbc driver. please find the code snippet.
>
> Class.forName("org.apache.ignite.IgniteJdbcDriver");
> Connection conn =
> DriverManager.getConnection("jdbc:ignite:cfg://ignite-jdbc.xml");
> ResultSet rs = conn.createStatement().executeQuery("select * from users");
>
> my xml also simple. since cache is already loaded in remote node.
>
>class="org.apache.ignite.configuration.IgniteConfiguration">
>
> 
>
> 
>
>
>
> 
>  class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
> 
>  class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.
> TcpDiscoveryVmIpFinder">
> 
> 
> 127.0.0.1:47500..47549
> 
> 
> 
> 
> 
> 
>
> please do the needful..
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Spring-application-context-resource-
> is-not-injected-exception-while-starting-ignite-in-jdbc-
> driver-me-tp7272.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Andrey Gura
GridGain Systems, Inc.
www.gridgain.com


Re: Ignite - SQL queries aggregation

2016-07-25 Thread Andrey Gura
The Ignite SQL engine is built on top of the H2 engine and supports
aggregation functions like COUNT, MAX, etc.

Ignite also supports user-defined functions. See the QuerySqlFunction annotation
documentation and its usages in Ignite tests (e.g.
https://github.com/apache/ignite/blob/68891e89dd0e0f19321d6a4d45ae7372279b8b08/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/IgniteCacheAbstractQuerySelfTest.java#L1877
)

To define functions, create a class with a set of functions, where each
function is represented as a method annotated with @QuerySqlFunction, and
configure the cache using the
CacheConfiguration.setSqlFunctionClasses() method.
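A minimal sketch of those steps; the function, cache name, and value type are placeholders:

```java
import org.apache.ignite.cache.query.annotations.QuerySqlFunction;
import org.apache.ignite.configuration.CacheConfiguration;

public class SqlFunctions {
    // Callable from SQL as SQUARE(x) once the class is registered on the cache.
    @QuerySqlFunction
    public static int square(int x) {
        return x * x;
    }

    // Registration: point the cache configuration at the class holding the
    // annotated methods.
    public static CacheConfiguration<Integer, Object> cacheConfig() {
        CacheConfiguration<Integer, Object> ccfg = new CacheConfiguration<>("people");
        ccfg.setSqlFunctionClasses(SqlFunctions.class);
        return ccfg;
    }
}
```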

On Mon, Jul 25, 2016 at 7:41 PM, M Singh <mans2si...@yahoo.com> wrote:

> Hi:
>
> I wanted to find out if ignite sql/other queries support aggregation or
> udf functions (like count, etc).
>
> Thanks
>



-- 
Andrey Gura
GridGain Systems, Inc.
www.gridgain.com


Re: Ignite - CEP

2016-07-24 Thread Andrey Gura
Mans,

You can configure indexes for your entities that take into account the entity
creation time. Then you can periodically query the cache data using a SQL
query with time bounds (e.g. the last 10 minutes). But be aware that if
entries are added to the cache at a high rate, memory could be exhausted.
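Such a time-bounded query could be sketched as follows, assuming a hypothetical Event type with an indexed `ts` (creation time) field and a `name` field:

```java
import java.util.List;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class LastTenMinutes {
    // Top 5 event names over the last 10 minutes; older entries remain in the
    // cache and can still be queried with different time bounds.
    static List<List<?>> topRecent(IgniteCache<Long, ?> cache) {
        SqlFieldsQuery qry = new SqlFieldsQuery(
            "select name, count(*) c from Event " +
            "where ts > ? group by name order by c desc limit 5");
        qry.setArgs(System.currentTimeMillis() - 10 * 60 * 1000);
        return cache.query(qry).getAll();
    }
}
```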

On Sun, Jul 24, 2016 at 8:44 PM, M Singh <mans2si...@yahoo.com> wrote:

> Hi:
>
> I am trying to understand how CEP works in Ignite and from my
> understanding, we need to create a sliding window with time (or some other
> criteria) based eviction policy.  At the expiration of the policy the entry
> will be removed from the cache.
>
> I wanted to find out if there is a way to still keep the item in the cache
> but still allow a sliding window without using the eviction policy.  The
> scenario I have is that I would like to query, say, the top 5 items in the
> last 10 minutes but still be able to query the previous windows.
>
> Thanks
>
> Mans
>



-- 
Andrey Gura
GridGain Systems, Inc.
www.gridgain.com


  1   2   >