ignite used too much memory

2016-10-25 Thread Shawn Du
Hi,

 

In my Ignite server I have several caches, and each cache has about 10k
entries.

I build the entries as binary objects. Each entry has just 3 or 4 fields,
each field is short (less than 20 bytes), but I enable an index for each field.

Most entries have an expiry time set. The expiry time is short, about 90
seconds.

After running for 2 hours, 8 GB of memory is used and Ignite runs out of memory.


I built Ignite from source, using yesterday's GitHub master branch code.
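
For reference, a minimal sketch of the kind of cache described above: binary entries with a few short indexed fields and a 90-second expiry. The cache name, type name and field names here are hypothetical:

    CacheConfiguration<String, BinaryObject> ccfg = new CacheConfiguration<>("metricsCache");

    // A few short fields, each one indexed.
    QueryEntity entity = new QueryEntity("java.lang.String", "Metric");
    LinkedHashMap<String, String> fields = new LinkedHashMap<>();
    fields.put("host", "java.lang.String");
    fields.put("name", "java.lang.String");
    fields.put("value", "java.lang.String");
    entity.setFields(fields);
    entity.setIndexes(Arrays.asList(new QueryIndex("host"), new QueryIndex("name"), new QueryIndex("value")));
    ccfg.setQueryEntities(Collections.singletonList(entity));

    // Short expiry for every entry created in this cache.
    ccfg.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.SECONDS, 90)));

    IgniteCache<String, BinaryObject> cache = ignite.getOrCreateCache(ccfg).withKeepBinary();

    // Entries built via the binary object builder.
    BinaryObject metric = ignite.binary().builder("Metric")
        .setField("host", "h1")
        .setField("name", "cpu")
        .setField("value", "42")
        .build();

    cache.put("h1:cpu", metric);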

 

My questions:

1)   Does expired data release its memory?

2)   How does Ignite build the index? How much memory does it cost?

 

Thanks

Shawn

 

 

 



Re: Kafka Streamer

2016-10-25 Thread Anil
Hi Val,

In my case the Kafka message key and value (the actual message) are strings, so I
used StringDecoder for both. But the value is not the cache value, and the key
only maintains the order of the messages in Kafka; it is not the actual cache key.
The message is a JSON object which is transformed into a number of cache entries.

I have created a custom Kafka data streamer with a custom multiple-tuple
extractor implementation, and it looks good and is working.
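
For reference, a minimal sketch of such a multiple-tuple extractor, where one Kafka message produces several cache entries. The parsing below is only a stand-in for real JSON handling, and all names are hypothetical:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.ignite.stream.StreamMultipleTupleExtractor;

    // One incoming message like "k1=v1;k2=v2" becomes several cache entries.
    public class MessageToEntriesExtractor implements StreamMultipleTupleExtractor<String, String, String> {
        @Override public Map<String, String> extract(String msg) {
            Map<String, String> entries = new HashMap<>();

            for (String part : msg.split(";")) {
                String[] kv = part.split("=", 2);

                // Cache key and value both come from the payload, not from the Kafka record key.
                entries.put(kv[0], kv[1]);
            }

            return entries;
        }
    }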

Thanks.

On 25 October 2016 at 23:56, vkulichenko 
wrote:

> Anil,
>
> Decoders convert binary message from Kafka to a key-value pair. Streamer
> then redirects this pair to cache. Why doesn't this work for you?
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Kafka-Streamer-tp8432p8481.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: How to avoid data skew in collocate data with data

2016-10-25 Thread ght230
How can I increase the number of partitions?

And if data skew has happened, how can I rebalance it?
By using FairAffinityFunction()?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-avoid-data-skew-in-collocate-data-with-data-tp8454p8491.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


imbalance heap used among nodes

2016-10-25 Thread Alec Lee
Hello all, I've just done a test to establish my understanding of the heap.

What I did: I started a 2-node cluster and a 3-node cluster (2 server nodes on the same
host). Before I started the nodes, I noticed a difference in the initial heap
size of each node, but not a big one.

Once the server nodes were up, I launched a client node to write data into
a cache (writing to a file at the same time). In total I wrote 500,000 entries into the
cache, 118 MB by file size. After writing into the cache, I found that one node takes
about 481 MB and another takes about 858 MB. That much I can understand,
since Ignite stores data as key-value objects, which costs more memory.
However, I would like to calculate a stable ratio between heap used and actual file
size, and with the imbalanced heap usage among nodes that seems to be a hard task. My
guess was that the first node I started is the primary node and will take
more heap, but after a few tests it seems to be random; there is no rule to tell which one
is the primary node.

Any idea?
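
One way to see how entries are split between primary and backup copies on each node, which often explains uneven heap usage (a sketch; the cache name is hypothetical):

    // Run this on every server node (e.g. via ignite.compute().broadcast(...)) and compare the counts.
    IgniteCache<Object, Object> cache = ignite.cache("testCache");

    int primary = cache.localSize(CachePeekMode.PRIMARY);
    int backup = cache.localSize(CachePeekMode.BACKUP);

    System.out.println("primary entries: " + primary + ", backup entries: " + backup);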


thanks

AL

Re: spark SQL thriftserver over ignite and cassandra

2016-10-25 Thread Igor Sapego
Vincent,

That's right, our ODBC driver does not support using HTTP(S) as a transport
currently.

Best Regards,
Igor

On Mon, Oct 17, 2016 at 9:40 PM, vincent gromakowski <
vincent.gromakow...@gmail.com> wrote:

> Hi
> I mean using HTTPS transport instead of binary (thrift?) transport.
>
> 2016-10-17 19:10 GMT+02:00 Igor Sapego :
>
>> Hi Vincent,
>>
>> Can you please explain what do you mean by HTTP(S) support for the ODBC?
>>
>> I'm not quite sure I get it.
>>
>> Best Regards,
>> Igor
>>
>> On Thu, Oct 6, 2016 at 9:59 AM, vincent gromakowski <
>> vincent.gromakow...@gmail.com> wrote:
>>
>>> Thanks
>>>
>>> Starting the thriftserver with igniterdd tables doesn't seem very hard.
>>> Implementing a security layer over ignite cache may be harder as I need to:
>>> - get username from thriftserver
>>> - intercept each request and check permissions
>>> Maybe spark will also be able to handle permissions...
>>>
>>> I will keep you informed
>>>
>>> Le 6 oct. 2016 00:12, "Denis Magda"  a écrit :
>>>
 Vincent,

 Please see below

 On Oct 5, 2016, at 4:31 AM, vincent gromakowski <
 vincent.gromakow...@gmail.com> wrote:

 Hi
 thanks for your explanations. Please find inline more questions

 Vincent

 2016-10-05 3:33 GMT+02:00 Denis Magda :

> Hi Vincent,
>
> See my answers inline
>
> On Oct 4, 2016, at 12:54 AM, vincent gromakowski <
> vincent.gromakow...@gmail.com> wrote:
>
> Hi,
> I know that Ignite has SQL support but:
> - ODBC driver doesn't seem to provide HTTP(S) support, which is easier
> to integrate on corporate networks with rules, firewalls, proxies
>
>
> *Igor Sapego*, what URIs are supported presently?
>
> - The SQL engine doesn't seem to scale like Spark SQL would. For
> instance, Spark won't generate OOM if the dataset (source or result) doesn't
> fit in memory. From the Ignite side, it's not clear…
>
>
> OOM is not related to the scalability topic at all. This is about the
> application’s logic.
>
> The Ignite SQL engine scales out perfectly along with your cluster.
> Moreover, Ignite supports indexes, which give you O(log N) running
> time complexity for your SQL queries, while in the case of Spark you will face
> full scans (O(N)) all the time.
>
> However, to benefit from Ignite SQL queries you have to put all the
> data in-memory. Ignite doesn’t go to a CacheStore (Cassandra, relational
> database, MongoDB, etc) while a SQL query is executed and won’t preload
> anything from an underlying CacheStore. Automatic preloading works for
> key-value queries like cache.get(key).
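
A minimal sketch of the indexed-field pattern described above; the class and field names are hypothetical:

    public class Person implements Serializable {
        @QuerySqlField(index = true)
        private long companyId;

        @QuerySqlField
        private String name;
    }

    // With the index on companyId this runs in O(log N) on each node
    // instead of scanning the whole cache.
    QueryCursor<Cache.Entry<Long, Person>> cur =
        cache.query(new SqlQuery<Long, Person>(Person.class, "companyId = ?").setArgs(42L));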
>


 This is an issue because I will potentially have to query TB of data.
 If I use Spark thriftserver backed by IgniteRDD, does it solve this point
 and can I get automatic preloading from C* ?


 IgniteRDD will load missing tuples (key-value pairs) from Cassandra
 because essentially IgniteRDD is an IgniteCache and Cassandra is a
 CacheStore. The only thing that is left to check is whether the Spark
 thriftserver can work with IgniteRDDs. Hope you will be able to figure this out
 and share your feedback with us.



> - Spark thrift can manage multi tenancy: different users can connect
> to the same SQL engine and share cache. In Ignite it's one cache per user,
> so a big waste of RAM.
>
>
> Everyone can connect to an Ignite cluster and work with the same set
> of distributed caches. I’m not sure why you need to create caches with the
> same content for every user.
>

 It's a security issue: the Ignite cache doesn't provide multiple user
 accounts per cache. I am thinking of using Spark to authenticate multiple
 users and then having Spark use a shared account on the Ignite cache.


 Basically, Ignite provides basic security interfaces and some
 implementations which you can rely on by building your secure solution.
 This article can be useful for your case
 http://smartkey.co.uk/development/securing-an-apache-ignite-cluster/

 —
 Denis


> If you need a real multi-tenancy support where cacheA is allowed to be
> accessed by a group of users A only and cacheB by users from group B then
> you can take a look at GridGain which is built on top of Ignite
> https://gridgain.readme.io/docs/multi-tenancy
>
>
>
 OK but I am evaluating open source only solutions (kylin, druid,
 alluxio...), it's a constraint from my hierarchy

>
> What I want to achieve is :
> - use Cassandra for data store as it provides idempotence (HDFS/hive
> doesn't), resulting in exactly once semantic without any duplicates.
> - use Spark SQL thriftserver in multi tenancy for large scale adhoc
> analytics queries (> TB) from an ODBC 

Re: Random SSL unsupported record version

2016-10-25 Thread vkulichenko
Hi,

I think I found the reason for this error. Here is the ticket that you can
watch: https://issues.apache.org/jira/browse/IGNITE-4110

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Random-SSL-unsupported-record-version-tp8236p8487.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: re: org.apache.spark.Logging not found because of dependencies mismatches

2016-10-25 Thread vkulichenko
Hi,

Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply.


Sai wrote
> I'm trying to architect a solution leveraging the in-memory fabric provided by
> Apache Ignite.  We use version 1.7.0, and our Spark application has a
> dependency on Spark 2.0 and ignite-spark.  ignite-spark has a
> dependency on Spark 1.5.2, and while compiling I get the
> org.apache.spark.Logging not found error.
> 
> 1. in the same JVM, spark worker has 2 dependencies on the same library
> (artifact) and it's my understanding that first one gets loaded and used
> by class loader.  Is this correct?  If so, does it create a conflict as
> anything deprecated won't work assuming 2.0 is the one that's loaded 
> 2. how do we resolve this one?
> 3. when does Ignite plan to update it to Spark 2.0 for ignite-spark?
> 
> thanks

Spark 2.0 is not supported right now, here is the ticket for upgrade:
https://issues.apache.org/jira/browse/IGNITE-3710. Feel free to pick it up
;)

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/re-org-apache-spark-Logging-not-found-because-of-dependencies-mismatches-tp8477p8486.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: OFFHEAP_VALUES mode: How to put data object into caches and query them from caches

2016-10-25 Thread vkulichenko
Hi,

Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply.


howfree wrote
> This memory mode allows us to store keys on-heap and values off-heap, but
> it doesn't support indexedTypes.
> 
> I am confused about how to put data objects into caches and then query them
> from caches.
> 
> I didn't find code examples.
> 
> Could anyone give me some examples? Thanks.

Any memory mode, including OFFHEAP_VALUES, supports the indexedTypes property.
Actually, you can always switch between memory modes without any code
changes, because the memory mode only defines the way entries are stored on
server nodes. Having said that, you can refer to CacheQueryExample and run
it with different memory modes.
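
A minimal sketch of such a setup; the cache, class and field names are only an example:

    CacheConfiguration<Integer, Person> ccfg = new CacheConfiguration<>("personCache");

    ccfg.setMemoryMode(CacheMemoryMode.OFFHEAP_VALUES); // keys on-heap, values off-heap
    ccfg.setIndexedTypes(Integer.class, Person.class);  // Person has @QuerySqlField-annotated fields

    IgniteCache<Integer, Person> cache = ignite.getOrCreateCache(ccfg);

    cache.put(1, new Person("John", 30));

    // SQL queries work the same way regardless of the memory mode.
    List<List<?>> rows = cache.query(
        new SqlFieldsQuery("select name from Person where age >= ?").setArgs(25)).getAll();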

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/OFFHEAP-VALUES-mode-How-to-put-data-object-into-caches-and-query-them-from-caches-tp8452p8485.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How to continuously subscribe for event?

2016-10-25 Thread vkulichenko
Hi,

Can you create a small example project (e.g. on GitHub) that reproduces the
issue?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-continuously-subscribe-for-event-tp8438p8482.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: expired policy didn't work

2016-10-25 Thread vkulichenko
Hi,

The withExpiryPolicy() method returns another instance of IgniteCache, which you
should use to make sure the policy takes effect:

cache.withExpiryPolicy(...).put(...);

Another option is to configure policy on cache startup via
CacheConfiguration.setExpiryPolicyFactory(...).
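
A minimal sketch of both options; the cache name and the 90-second duration are only an example:

    // Option 1: per-operation policy; note that the put() goes through the returned instance.
    IgniteCache<Integer, String> withExpiry =
        cache.withExpiryPolicy(new CreatedExpiryPolicy(new Duration(TimeUnit.SECONDS, 90)));

    withExpiry.put(1, "value");

    // Option 2: policy for the whole cache, set at startup.
    CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
    ccfg.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.SECONDS, 90)));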

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/expired-policy-didn-t-work-tp8419p8483.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Kafka Streamer

2016-10-25 Thread vkulichenko
Anil,

Decoders convert binary message from Kafka to a key-value pair. Streamer
then redirects this pair to cache. Why doesn't this work for you?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Kafka-Streamer-tp8432p8481.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Killing a node under load stalls the grid with ignite 1.7

2016-10-25 Thread Vladislav Pyatkov
Hi,

An incorrect implementation of CacheStore is the most probable reason, because
the entry being stored is locked. You need to avoid locking one entry from
within the operation on another.

You will need to rewrite the code and re-check; I think the issue will then be resolved.
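
A minimal sketch of a store that stays within the entries it is given and never calls back into other caches; the value class and the persistence helpers are hypothetical and the actual database code is omitted:

    public class PersonStore extends CacheStoreAdapter<Long, Person> {
        @Override public Person load(Long key) {
            // Read from the underlying database only; do not call ignite.cache(...).get(...) here.
            return readFromDb(key);
        }

        @Override public void write(javax.cache.Cache.Entry<? extends Long, ? extends Person> e) {
            // Use only the entry that was passed in; touching other caches here can
            // block on entries that are currently locked by the same update.
            writeToDb(e.getKey(), e.getValue());
        }

        @Override public void delete(Object key) {
            deleteFromDb((Long)key);
        }

        // Hypothetical persistence helpers.
        private Person readFromDb(Long key) { return null; }
        private void writeToDb(Long key, Person p) { }
        private void deleteFromDb(Long key) { }
    }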

On Tue, Oct 25, 2016 at 1:05 AM, bintisepaha  wrote:

> Hi, actually we use a lot of caches from the cache store's writeAll().
> To confirm whether that is the cause of the grid stall, we would have to
> completely change our design.
>
> Can someone confirm that this is the cause of the grid stalling? Does referencing
> cache.get from a cache store and then killing or bringing up nodes lead to
> a stall?
>
> We see a node blocked on the flusher thread while doing a cache.get() when the
> grid is stalled; if we kill that node, the grid starts functioning. But we
> would like to understand whether we are using write-behind incorrectly, or whether there are
> some rebalancing or write-behind settings we can use that might save
> us from something like this.
>
> Thanks,
> Binti
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Killing-a-node-under-load-stalls-the-
> grid-with-ignite-1-7-tp8130p8449.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Composite affinity key

2016-10-25 Thread Sergej Sidorov
Yes, sure. The query will be split into multiple map queries and a single
reduce query. Then all the map queries are executed on all data nodes,
providing results to the reducing node, which will in turn run the reduce
query over these intermediate results.
For more information check [1]

[1] http://apacheignite.gridgain.org/docs/sql-queries

Sergej



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Composite-affinity-key-tp8462p8479.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How to avoid data skew in collocate data with data

2016-10-25 Thread Vladislav Pyatkov
Hi,

I agree with Alexey: you need to increase the number of partitions.

In addition, it looks like your affinity key has bad selectivity.
Why is so much data bound to 553 and 551, but only a small amount bound to 542 and 530?

I recommend selecting another key as the affinity key, or accepting that imbalance.

On Tue, Oct 25, 2016 at 4:24 PM, Alexey Kuznetsov 
wrote:

> Hi!
>
> How many partitions configured in your cache?
> As far as I see - 11 partitions?
> Could you try to configure more (64, 128, 256)?
> And see how data will be distributed?
>
> By default Ignite caches configured with 1024 partitions.
>
>
>
> On Tue, Oct 25, 2016 at 8:20 PM, ght230  wrote:
>
>> When data skew happened, what can I do to rebalance all the data to the 3
>> nodes.
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/How-to-avoid-data-skew-in-collocate-data-
>> with-data-tp8454p8473.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>
>
> --
> Alexey Kuznetsov
>



-- 
Vladislav Pyatkov


Re: Cant events listen from client Node

2016-10-25 Thread Vladislav Pyatkov
Hi

Could you please provide the source code as an example?

On Tue, Oct 25, 2016 at 4:18 PM, Labard  wrote:

> I have already enabled this event into configuration
>
>class="org.apache.ignite.configuration.IgniteConfiguration">
>
> 
>
> 
>
> 
>  static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT"/>
> 
>
> 
> 
>  class="org.apache.ignite.configuration.CacheConfiguration">
> 
> 
> 
> 
>
> ru.at_consulting.dmp.vip.georgia.ignite.CTL_File.Key
>
> ru.at_consulting.dmp.vip.georgia.ignite.CTL_File
> 
> 
> 
>
>  class="org.apache.ignite.configuration.CacheConfiguration">
> 
> 
> 
> 
>
> ru.at_consulting.dmp.vip.georgia.ignite.CTL_File_Type.Key
>
> ru.at_consulting.dmp.vip.georgia.ignite.CTL_File_Type
> 
> 
> 
> 
> 
>
> this work for server node but still does not work for client node.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Cant-events-listen-from-client-Node-tp8470p8472.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Kafka Streamer

2016-10-25 Thread Anil
Should we use a tuple extractor? I still need to look at the code and give it a try.

On 25 October 2016 at 09:18, Anil  wrote:

> No, Val. A message cannot be converted into a number of cache entries using a
> value decoder. Am I wrong?
>
> Thanks.
>
> On 25 October 2016 at 02:42, vkulichenko 
> wrote:
>
>> Hi,
>>
>> There are keyDecoder and valueDecoder that you can specify when creating
>> the
>> KafkaStreamer. Is that what you're looking for?
>>
>> -Val
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/Kafka-Streamer-tp8432p8447.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


Re: How to avoid data skew in collocate data with data

2016-10-25 Thread Alexey Kuznetsov
Hi!

How many partitions are configured in your cache?
As far as I can see - 11 partitions?
Could you try to configure more (64, 128, 256)
and see how the data will be distributed?

By default Ignite caches are configured with 1024 partitions.
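
A minimal sketch of setting a larger partition count through the affinity function when creating the cache; the cache name and the count of 256 are only an example:

    CacheConfiguration<Object, Object> ccfg = new CacheConfiguration<>("myCache");

    // Spread the data over 256 partitions instead of the small custom count.
    ccfg.setAffinity(new RendezvousAffinityFunction(false, 256));

    ignite.getOrCreateCache(ccfg);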



On Tue, Oct 25, 2016 at 8:20 PM, ght230  wrote:

> When data skew happened, what can I do to rebalance all the data to the 3
> nodes.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/How-to-avoid-data-skew-in-collocate-
> data-with-data-tp8454p8473.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Alexey Kuznetsov


Re: How to avoid data skew in collocate data with data

2016-10-25 Thread ght230
When data skew has happened, what can I do to rebalance all the data across the 3
nodes?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-avoid-data-skew-in-collocate-data-with-data-tp8454p8473.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Cant events listen from client Node

2016-10-25 Thread Labard
I have already enabled this event into configuration

  
















   
ru.at_consulting.dmp.vip.georgia.ignite.CTL_File.Key
   
ru.at_consulting.dmp.vip.georgia.ignite.CTL_File









   
ru.at_consulting.dmp.vip.georgia.ignite.CTL_File_Type.Key
   
ru.at_consulting.dmp.vip.georgia.ignite.CTL_File_Type






This works for a server node but still does not work for a client node.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cant-events-listen-from-client-Node-tp8470p8472.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Cant events listen from client Node

2016-10-25 Thread Vladislav Pyatkov
Hi,

If you want to handle EVT_CACHE_OBJECT_PUT from a client node, you need to
enable the event in the configuration:

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    ...
    <property name="includeEventTypes">
        <list>
            <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT"/>
        </list>
    </property>
    ...
</bean>

For additional information, look at the article [1].

[1]: https://apacheignite.readme.io/docs/events
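
A sketch of the equivalent programmatic configuration, assuming the node is started from code:

    IgniteConfiguration cfg = new IgniteConfiguration();

    // Enable the cache put event so remote listeners receive it.
    cfg.setIncludeEventTypes(EventType.EVT_CACHE_OBJECT_PUT);

    Ignite ignite = Ignition.start(cfg);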

On Tue, Oct 25, 2016 at 2:48 PM, Labard  wrote:

> Hello
> I have Ignite cluster and one client node.
> I want to listen "put into cache" events in my client node, but something
> does not work.
> If I use server node instead (just remove client property from config) it
> works exellent.
>
> My listener code:
>
> ignite.events(ignite.cluster().forCacheNodes(CTL_FILES)).
> remoteListen((uuid,
> evt) -> {
> if (evt.cacheName().equals(CTL_FILES)) {
> final CTL_File value = (CTL_File) evt.newValue();
> if (value.getFileStatus().equals(TaskStatus.NEW)) {
> loadFile(value.getFileName(), count++,
> value.getFileTypeName() + "_TOPIC", ERROR_TOPIC);
> }
> }
> return true;
> }, (IgnitePredicate) cacheEvent ->
> cacheEvent.cacheName().equals(CTL_FILES), EventType.EVT_CACHE_OBJECT_PUT);
>
> What's the problem?
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Cant-events-listen-from-client-Node-tp8470.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Cant events listen from client Node

2016-10-25 Thread Labard
Hello,
I have an Ignite cluster and one client node.
I want to listen for "put into cache" events in my client node, but something
does not work.
If I use a server node instead (just removing the client property from the config), it
works perfectly.

My listener code:

ignite.events(ignite.cluster().forCacheNodes(CTL_FILES)).remoteListen(
    (uuid, evt) -> {
        if (evt.cacheName().equals(CTL_FILES)) {
            final CTL_File value = (CTL_File) evt.newValue();

            if (value.getFileStatus().equals(TaskStatus.NEW)) {
                loadFile(value.getFileName(), count++,
                    value.getFileTypeName() + "_TOPIC", ERROR_TOPIC);
            }
        }

        return true;
    },
    (IgnitePredicate) cacheEvent -> cacheEvent.cacheName().equals(CTL_FILES),
    EventType.EVT_CACHE_OBJECT_PUT);

What's the problem?
 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cant-events-listen-from-client-Node-tp8470.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Composite affinity key

2016-10-25 Thread Anil
Agreed, makes sense.

One quick question: will those affinity-mapped objects be retrieved when
"select * from Person" is fired?

On 25 October 2016 at 15:25, Sergi Vladykin 
wrote:

>
>> cache.put(department.getKey(), department);
>> cache.put(person.getKey(), person);
>>
>>
> As a side note: it is actually a bad practice to have a key as a field in
> a value, because this way it will be stored in cache twice.
>
> Sergi
>
>
>
>
>> Thanks,
>> Sergej
>>
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/Composite-affinity-key-tp8462p8464.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


Re: Composite affinity key

2016-10-25 Thread Anil
Thanks, Sergej. I was trying the same. I will test this for my use case.



On 25 October 2016 at 14:51, Sergej Sidorov 
wrote:

> Hi, Anil!
>
> What do you mean by "composite affinity key"? What problem you want to
> solve?
> If you want to use several fields as an affinity key, then you need to
> create special class and use that class in entity key class.
>
> For example:
>
> class DepartmentAffinityKey {
> private long companyId;
>
> private long departmentId;
>
> // setters, getters, equals & hashCode
> }
>
> class DepartmentKey {
> private long departmentId;
>
> @AffinityKeyMapped
> private DepartmentAffinityKey affinityKey;
>
> // setters, getters, equals & hashCode
> }
>
> class PersonKey {
> private long personId;
>
> @AffinityKeyMapped
> private DepartmentAffinityKey affinityKey;
>
> // setters, getters, equals & hashCode
> }
>
> class Department {
> private DepartmentKey key;
> // ...
> }
>
> class Person {
> private PersonKey key;
> // ...
> }
>
> cache.put(department.getKey(), department);
> cache.put(person.getKey(), person);
>
>
> Thanks,
> Sergej
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Composite-affinity-key-tp8462p8464.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Composite affinity key

2016-10-25 Thread Sergi Vladykin
>
>
> cache.put(department.getKey(), department);
> cache.put(person.getKey(), person);
>
>
As a side note: it is actually a bad practice to have a key as a field in a
value, because this way it will be stored in cache twice.

Sergi




> Thanks,
> Sergej
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Composite-affinity-key-tp8462p8464.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Ignite client thread amount control

2016-10-25 Thread Vladislav Pyatkov
No, it is not. One Ignite client instance can support several
parallel queries.
You can use the client like this:

Ignition.setClientMode(true);
Ignite ignite = Ignition.start(cfg);

for (int i = 0; i < threads.length; i++) {
    threads[i] = new Thread() {
        @Override public void run() {
            try (QueryCursor cursor = ignite.cache(name).query(new SqlFieldsQuery(sql))) {
                for (Object obj : cursor) {
                    // Process the row.
                }
            }
        }
    };

    threads[i].start();
}

for (Thread thread : threads)
    thread.join();

On Tue, Oct 25, 2016 at 6:12 AM, Jeff Jiao  wrote:

> Hi vkulichenko,
>
> Thanks for the reply! I already subscribed.
>
> What if I have multiple users querying at the same time? Does one user hold the
> Ignite client while the others just wait?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Ignite-client-thread-amount-control-tp8434p8455.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Composite affinity key

2016-10-25 Thread Sergej Sidorov
Hi, Anil!

What do you mean by "composite affinity key"? What problem do you want to
solve?
If you want to use several fields as an affinity key, then you need to
create a special class and use that class in the entity key class.

For example:

class DepartmentAffinityKey {
private long companyId;

private long departmentId;

// setters, getters, equals & hashCode
}

class DepartmentKey {
private long departmentId;

@AffinityKeyMapped
private DepartmentAffinityKey affinityKey;

// setters, getters, equals & hashCode
}

class PersonKey {
private long personId;

@AffinityKeyMapped
private DepartmentAffinityKey affinityKey;

// setters, getters, equals & hashCode
}

class Department {
private DepartmentKey key;
// ...
}

class Person {
private PersonKey key;
// ...
}

cache.put(department.getKey(), department);
cache.put(person.getKey(), person);
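
A quick way to check that the collocation works as intended (a sketch; the cache name here is hypothetical):

    Affinity<Object> aff = ignite.affinity("personCache");

    // Entities sharing the same DepartmentAffinityKey map to the same primary node.
    ClusterNode depNode = aff.mapKeyToNode(department.getKey());
    ClusterNode personNode = aff.mapKeyToNode(person.getKey());

    assert depNode.equals(personNode);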


Thanks,
Sergej




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Composite-affinity-key-tp8462p8464.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Threads got stuck

2016-10-25 Thread Yakov Zhdanov
Alper, thanks for clarification, this will definitely help after we get the
info I requested. This is the only way to go with the investigation.

--Yakov

2016-10-25 11:20 GMT+03:00 Alper Tekinalp :

> Hi Yakov.
>
> I should also mention that we load cache data from one server and wait the
> data to be replicated to others. Can that cause such a situation, too?
>
> On Tue, Oct 25, 2016 at 11:14 AM, Yakov Zhdanov 
> wrote:
>
>> Alper,
>>
>> There can be multiple reasons.
>>
>> Can you please reproduce the issue one more time, collect and share the
>> following with us:
>>
>> 1. collect all the logs from all the nodes - clients and servers
>> 2. take threaddumps of all JVMs (from all nodes) with jstack -l 
>>
>> --Yakov
>>
>> 2016-10-25 10:49 GMT+03:00 Alper Tekinalp :
>>
>>> Hi.
>>>
>>> There is also a few logs as :
>>>
>>>  Failed to register marshalled class for more than 10 times in a row
>>> (may affect performance).
>>>
>>> Can it be releated?
>>>
>>> On Tue, Oct 25, 2016 at 10:32 AM, Alper Tekinalp  wrote:
>>>
 Hi all.

 We have 3 servers and cache configuration like:

 >>> name="DEFAULT">
 
 
 
 
 
 
 >>> value="#{evamProperties['topology.cache.partition.size']}"/>
 
 
 
 
 
 
 
 

 For our worker threads we check heartbeat and if a thread did not sent
 heart beat for 10 minutes we consider it as stucked and interrrupt and
 recreate it.

 As I can see all our worker threads are stucked in cache.put() state
 and interrupted and recreated regularly.

 What can be the reason we are stucked at put? Following is stacktrace
 for interruption error.

 javax.cache.CacheException: class 
 org.apache.ignite.IgniteInterruptedException:
 Failed to wait for asynchronous operation permit (thread got interrupted).
 at org.apache.ignite.internal.processors.cache.GridCacheUtils.c
 onvertToCacheException(GridCacheUtils.java:1502)
 at org.apache.ignite.internal.processors.cache.IgniteCacheProxy
 .cacheException(IgniteCacheProxy.java:2021)
 at org.apache.ignite.internal.processors.cache.IgniteCacheProxy
 .put(IgniteCacheProxy.java:1221)
 at com.intellica.project.helper.ee.ConfigManagerHelperEE.setSta
 te(ConfigManagerHelperEE.java:90)
 at com.intellica.project.helper.ee.StateMachineConfigManagerEEI
 mpl.store(StateMachineConfigManagerEEImpl.java:53)
 at com.evelopers.unimod.runtime.AbstractEventProcessor.storeCon
 fig(AbstractEventProcessor.java:175)
 at com.evelopers.unimod.runtime.AbstractEventProcessor.process(
 AbstractEventProcessor.java:130)
 at com.evelopers.unimod.runtime.AbstractEventProcessor.process(
 AbstractEventProcessor.java:80)
 at com.evelopers.unimod.runtime.ModelEngine.process(ModelEngine
 .java:199)
 at com.evelopers.unimod.runtime.StrictHandler.handle(StrictHand
 ler.java:46)
 at com.intellica.evam.engine.server.worker.AbstractScenarioWork
 er.runScenarioLogic(AbstractScenarioWorker.java:172)
 at com.intellica.evam.engine.server.worker.AbstractScenarioWork
 er.runScenario(AbstractScenarioWorker.java:130)
 at com.intellica.evam.engine.server.worker.AsyncWorker.processE
 vent(AsyncWorker.java:156)
 at com.intellica.evam.engine.server.worker.AsyncWorker.run(Asyn
 cWorker.java:88)
 Caused by: class org.apache.ignite.IgniteInterruptedException: Failed
 to wait for asynchronous operation permit (thread got interrupted).
 at org.apache.ignite.internal.util.IgniteUtils$2.apply(IgniteUt
 ils.java:747)
 at org.apache.ignite.internal.util.IgniteUtils$2.apply(IgniteUt
 ils.java:745)
 ... 14 more
 Caused by: java.lang.InterruptedException
 at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquir
 eSharedInterruptibly(AbstractQueuedSynchronizer.java:1301)
 at java.util.concurrent.Semaphore.acquire(Semaphore.java:317)
 at org.apache.ignite.internal.processors.cache.GridCacheAdapter
 .asyncOpAcquire(GridCacheAdapter.java:4597)
 at org.apache.ignite.internal.processors.cache.distributed.dht.
 atomic.GridDhtAtomicCache.asyncOp(GridDhtAtomicCache.java:683)
 at org.apache.ignite.internal.processors.cache.distributed.dht.
 atomic.GridDhtAtomicCache.updateAsync0(GridDhtAtomicCache.java:1014)
 at org.apache.ignite.internal.processors.cache.distributed.dht.
 atomic.GridDhtAtomicCache.putAsync0(GridDhtAtomicCache.java:484)
 at org.apache.ignite.internal.processors.cache.GridCacheAdapter
 

Composite affinity key

2016-10-25 Thread Anil
Hi,

Does ignite supports composite affinity key ?

Thanks


Re: Threads got stuck

2016-10-25 Thread Alper Tekinalp
Hi Yakov.

I should also mention that we load the cache data from one server and wait for the
data to be replicated to the others. Can that cause such a situation, too?

On Tue, Oct 25, 2016 at 11:14 AM, Yakov Zhdanov  wrote:

> Alper,
>
> There can be multiple reasons.
>
> Can you please reproduce the issue one more time, collect and share the
> following with us:
>
> 1. collect all the logs from all the nodes - clients and servers
> 2. take threaddumps of all JVMs (from all nodes) with jstack -l 
>
> --Yakov
>
> 2016-10-25 10:49 GMT+03:00 Alper Tekinalp :
>
>> Hi.
>>
>> There is also a few logs as :
>>
>>  Failed to register marshalled class for more than 10 times in a row (may
>> affect performance).
>>
>> Can it be releated?
>>
>> On Tue, Oct 25, 2016 at 10:32 AM, Alper Tekinalp  wrote:
>>
>>> Hi all.
>>>
>>> We have 3 servers and cache configuration like:
>>>
>>> >> name="DEFAULT">
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> >> value="#{evamProperties['topology.cache.partition.size']}"/>
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>>
>>> For our worker threads we check heartbeat and if a thread did not sent
>>> heart beat for 10 minutes we consider it as stucked and interrrupt and
>>> recreate it.
>>>
>>> As I can see all our worker threads are stucked in cache.put() state and
>>> interrupted and recreated regularly.
>>>
>>> What can be the reason we are stucked at put? Following is stacktrace
>>> for interruption error.
>>>
>>> javax.cache.CacheException: class 
>>> org.apache.ignite.IgniteInterruptedException:
>>> Failed to wait for asynchronous operation permit (thread got interrupted).
>>> at org.apache.ignite.internal.processors.cache.GridCacheUtils.c
>>> onvertToCacheException(GridCacheUtils.java:1502)
>>> at org.apache.ignite.internal.processors.cache.IgniteCacheProxy
>>> .cacheException(IgniteCacheProxy.java:2021)
>>> at org.apache.ignite.internal.processors.cache.IgniteCacheProxy
>>> .put(IgniteCacheProxy.java:1221)
>>> at com.intellica.project.helper.ee.ConfigManagerHelperEE.setSta
>>> te(ConfigManagerHelperEE.java:90)
>>> at com.intellica.project.helper.ee.StateMachineConfigManagerEEI
>>> mpl.store(StateMachineConfigManagerEEImpl.java:53)
>>> at com.evelopers.unimod.runtime.AbstractEventProcessor.storeCon
>>> fig(AbstractEventProcessor.java:175)
>>> at com.evelopers.unimod.runtime.AbstractEventProcessor.process(
>>> AbstractEventProcessor.java:130)
>>> at com.evelopers.unimod.runtime.AbstractEventProcessor.process(
>>> AbstractEventProcessor.java:80)
>>> at com.evelopers.unimod.runtime.ModelEngine.process(ModelEngine
>>> .java:199)
>>> at com.evelopers.unimod.runtime.StrictHandler.handle(StrictHand
>>> ler.java:46)
>>> at com.intellica.evam.engine.server.worker.AbstractScenarioWork
>>> er.runScenarioLogic(AbstractScenarioWorker.java:172)
>>> at com.intellica.evam.engine.server.worker.AbstractScenarioWork
>>> er.runScenario(AbstractScenarioWorker.java:130)
>>> at com.intellica.evam.engine.server.worker.AsyncWorker.processE
>>> vent(AsyncWorker.java:156)
>>> at com.intellica.evam.engine.server.worker.AsyncWorker.run(Asyn
>>> cWorker.java:88)
>>> Caused by: class org.apache.ignite.IgniteInterruptedException: Failed
>>> to wait for asynchronous operation permit (thread got interrupted).
>>> at org.apache.ignite.internal.util.IgniteUtils$2.apply(IgniteUt
>>> ils.java:747)
>>> at org.apache.ignite.internal.util.IgniteUtils$2.apply(IgniteUt
>>> ils.java:745)
>>> ... 14 more
>>> Caused by: java.lang.InterruptedException
>>> at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquir
>>> eSharedInterruptibly(AbstractQueuedSynchronizer.java:1301)
>>> at java.util.concurrent.Semaphore.acquire(Semaphore.java:317)
>>> at org.apache.ignite.internal.processors.cache.GridCacheAdapter
>>> .asyncOpAcquire(GridCacheAdapter.java:4597)
>>> at org.apache.ignite.internal.processors.cache.distributed.dht.
>>> atomic.GridDhtAtomicCache.asyncOp(GridDhtAtomicCache.java:683)
>>> at org.apache.ignite.internal.processors.cache.distributed.dht.
>>> atomic.GridDhtAtomicCache.updateAsync0(GridDhtAtomicCache.java:1014)
>>> at org.apache.ignite.internal.processors.cache.distributed.dht.
>>> atomic.GridDhtAtomicCache.putAsync0(GridDhtAtomicCache.java:484)
>>> at org.apache.ignite.internal.processors.cache.GridCacheAdapter
>>> .putAsync(GridCacheAdapter.java:2541)
>>> at org.apache.ignite.internal.processors.cache.distributed.dht.
>>> atomic.GridDhtAtomicCache.put(GridDhtAtomicCache.java:461)
>>> at org.apache.ignite.internal.processors.cache.GridCacheAdapter
>>> .put(GridCacheAdapter.java:2215)
>>> at 

Re: Threads got stuck

2016-10-25 Thread Yakov Zhdanov
Alper,

There can be multiple reasons.

Can you please reproduce the issue one more time, collect and share the
following with us:

1. collect all the logs from all the nodes - clients and servers
2. take thread dumps of all JVMs (from all nodes) with jstack -l <pid>

--Yakov

2016-10-25 10:49 GMT+03:00 Alper Tekinalp :

> Hi.
>
> There is also a few logs as :
>
>  Failed to register marshalled class for more than 10 times in a row (may
> affect performance).
>
> Can it be releated?
>
> On Tue, Oct 25, 2016 at 10:32 AM, Alper Tekinalp  wrote:
>
>> Hi all.
>>
>> We have 3 servers and cache configuration like:
>>
>> > name="DEFAULT">
>> 
>> 
>> 
>> 
>> 
>> 
>> > value="#{evamProperties['topology.cache.partition.size']}"/>
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>>
>> For our worker threads we check heartbeat and if a thread did not sent
>> heart beat for 10 minutes we consider it as stucked and interrrupt and
>> recreate it.
>>
>> As I can see all our worker threads are stucked in cache.put() state and
>> interrupted and recreated regularly.
>>
>> What can be the reason we are stucked at put? Following is stacktrace for
>> interruption error.
>>
>> javax.cache.CacheException: class 
>> org.apache.ignite.IgniteInterruptedException:
>> Failed to wait for asynchronous operation permit (thread got interrupted).
>> at org.apache.ignite.internal.processors.cache.GridCacheUtils.c
>> onvertToCacheException(GridCacheUtils.java:1502)
>> at org.apache.ignite.internal.processors.cache.IgniteCacheProxy
>> .cacheException(IgniteCacheProxy.java:2021)
>> at org.apache.ignite.internal.processors.cache.IgniteCacheProxy
>> .put(IgniteCacheProxy.java:1221)
>> at com.intellica.project.helper.ee.ConfigManagerHelperEE.setSta
>> te(ConfigManagerHelperEE.java:90)
>> at com.intellica.project.helper.ee.StateMachineConfigManagerEEI
>> mpl.store(StateMachineConfigManagerEEImpl.java:53)
>> at com.evelopers.unimod.runtime.AbstractEventProcessor.storeCon
>> fig(AbstractEventProcessor.java:175)
>> at com.evelopers.unimod.runtime.AbstractEventProcessor.process(
>> AbstractEventProcessor.java:130)
>> at com.evelopers.unimod.runtime.AbstractEventProcessor.process(
>> AbstractEventProcessor.java:80)
>> at com.evelopers.unimod.runtime.ModelEngine.process(ModelEngine
>> .java:199)
>> at com.evelopers.unimod.runtime.StrictHandler.handle(StrictHand
>> ler.java:46)
>> at com.intellica.evam.engine.server.worker.AbstractScenarioWork
>> er.runScenarioLogic(AbstractScenarioWorker.java:172)
>> at com.intellica.evam.engine.server.worker.AbstractScenarioWork
>> er.runScenario(AbstractScenarioWorker.java:130)
>> at com.intellica.evam.engine.server.worker.AsyncWorker.processE
>> vent(AsyncWorker.java:156)
>> at com.intellica.evam.engine.server.worker.AsyncWorker.run(Asyn
>> cWorker.java:88)
>> Caused by: class org.apache.ignite.IgniteInterruptedException: Failed to
>> wait for asynchronous operation permit (thread got interrupted).
>> at org.apache.ignite.internal.util.IgniteUtils$2.apply(IgniteUt
>> ils.java:747)
>> at org.apache.ignite.internal.util.IgniteUtils$2.apply(IgniteUt
>> ils.java:745)
>> ... 14 more
>> Caused by: java.lang.InterruptedException
>> at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquir
>> eSharedInterruptibly(AbstractQueuedSynchronizer.java:1301)
>> at java.util.concurrent.Semaphore.acquire(Semaphore.java:317)
>> at org.apache.ignite.internal.processors.cache.GridCacheAdapter
>> .asyncOpAcquire(GridCacheAdapter.java:4597)
>> at org.apache.ignite.internal.processors.cache.distributed.dht.
>> atomic.GridDhtAtomicCache.asyncOp(GridDhtAtomicCache.java:683)
>> at org.apache.ignite.internal.processors.cache.distributed.dht.
>> atomic.GridDhtAtomicCache.updateAsync0(GridDhtAtomicCache.java:1014)
>> at org.apache.ignite.internal.processors.cache.distributed.dht.
>> atomic.GridDhtAtomicCache.putAsync0(GridDhtAtomicCache.java:484)
>> at org.apache.ignite.internal.processors.cache.GridCacheAdapter
>> .putAsync(GridCacheAdapter.java:2541)
>> at org.apache.ignite.internal.processors.cache.distributed.dht.
>> atomic.GridDhtAtomicCache.put(GridDhtAtomicCache.java:461)
>> at org.apache.ignite.internal.processors.cache.GridCacheAdapter
>> .put(GridCacheAdapter.java:2215)
>> at org.apache.ignite.internal.processors.cache.IgniteCacheProxy
>> .put(IgniteCacheProxy.java:1214)
>> ... 11 more
>>
>>
>> --
>> Alper Tekinalp
>>
>> Software Developer
>> Evam Streaming Analytics
>>
>> Atatürk Mah. Turgut Özal Bulv.
>> Gardenya 5 Plaza K:6 Ataşehir
>> 34758 İSTANBUL
>>
>> Tel:  +90 216 455 01 53 Fax: +90 216 455 01 54
>> www.evam.com.tr
>> 
>>
>
>
>
> --
> 

Re: Threads got stuck

2016-10-25 Thread Alper Tekinalp
Hi.

There are also a few log messages like:

 Failed to register marshalled class for more than 10 times in a row (may
affect performance).

Can it be related?

On Tue, Oct 25, 2016 at 10:32 AM, Alper Tekinalp  wrote:

> Hi all.
>
> We have 3 servers and cache configuration like:
>
>  name="DEFAULT">
> 
> 
> 
> 
> 
> 
>  value="#{evamProperties['topology.cache.partition.size']}"/>
> 
> 
> 
> 
> 
> 
> 
> 
>
> For our worker threads we check heartbeat and if a thread did not sent
> heart beat for 10 minutes we consider it as stucked and interrrupt and
> recreate it.
>
> As I can see all our worker threads are stucked in cache.put() state and
> interrupted and recreated regularly.
>
> What can be the reason we are stucked at put? Following is stacktrace for
> interruption error.
>
> javax.cache.CacheException: class 
> org.apache.ignite.IgniteInterruptedException:
> Failed to wait for asynchronous operation permit (thread got interrupted).
> at org.apache.ignite.internal.processors.cache.GridCacheUtils.
> convertToCacheException(GridCacheUtils.java:1502)
> at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.
> cacheException(IgniteCacheProxy.java:2021)
> at org.apache.ignite.internal.processors.cache.
> IgniteCacheProxy.put(IgniteCacheProxy.java:1221)
> at com.intellica.project.helper.ee.ConfigManagerHelperEE.setState(
> ConfigManagerHelperEE.java:90)
> at com.intellica.project.helper.ee.StateMachineConfigManagerEEImp
> l.store(StateMachineConfigManagerEEImpl.java:53)
> at com.evelopers.unimod.runtime.AbstractEventProcessor.
> storeConfig(AbstractEventProcessor.java:175)
> at com.evelopers.unimod.runtime.AbstractEventProcessor.process(
> AbstractEventProcessor.java:130)
> at com.evelopers.unimod.runtime.AbstractEventProcessor.process(
> AbstractEventProcessor.java:80)
> at com.evelopers.unimod.runtime.ModelEngine.process(
> ModelEngine.java:199)
> at com.evelopers.unimod.runtime.StrictHandler.handle(
> StrictHandler.java:46)
> at com.intellica.evam.engine.server.worker.AbstractScenarioWorker.
> runScenarioLogic(AbstractScenarioWorker.java:172)
> at com.intellica.evam.engine.server.worker.AbstractScenarioWorker.
> runScenario(AbstractScenarioWorker.java:130)
> at com.intellica.evam.engine.server.worker.AsyncWorker.
> processEvent(AsyncWorker.java:156)
> at com.intellica.evam.engine.server.worker.AsyncWorker.run(
> AsyncWorker.java:88)
> Caused by: class org.apache.ignite.IgniteInterruptedException: Failed to
> wait for asynchronous operation permit (thread got interrupted).
> at org.apache.ignite.internal.util.IgniteUtils$2.apply(
> IgniteUtils.java:747)
> at org.apache.ignite.internal.util.IgniteUtils$2.apply(
> IgniteUtils.java:745)
> ... 14 more
> Caused by: java.lang.InterruptedException
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.
> acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1301)
> at java.util.concurrent.Semaphore.acquire(Semaphore.java:317)
> at org.apache.ignite.internal.processors.cache.GridCacheAdapter.
> asyncOpAcquire(GridCacheAdapter.java:4597)
> at org.apache.ignite.internal.processors.cache.distributed.
> dht.atomic.GridDhtAtomicCache.asyncOp(GridDhtAtomicCache.java:683)
> at org.apache.ignite.internal.processors.cache.distributed.
> dht.atomic.GridDhtAtomicCache.updateAsync0(GridDhtAtomicCache.java:1014)
> at org.apache.ignite.internal.processors.cache.distributed.
> dht.atomic.GridDhtAtomicCache.putAsync0(GridDhtAtomicCache.java:484)
> at org.apache.ignite.internal.processors.cache.
> GridCacheAdapter.putAsync(GridCacheAdapter.java:2541)
> at org.apache.ignite.internal.processors.cache.distributed.
> dht.atomic.GridDhtAtomicCache.put(GridDhtAtomicCache.java:461)
> at org.apache.ignite.internal.processors.cache.
> GridCacheAdapter.put(GridCacheAdapter.java:2215)
> at org.apache.ignite.internal.processors.cache.
> IgniteCacheProxy.put(IgniteCacheProxy.java:1214)
> ... 11 more
>
>
> --
> Alper Tekinalp
>
> Software Developer
> Evam Streaming Analytics
>
> Atatürk Mah. Turgut Özal Bulv.
> Gardenya 5 Plaza K:6 Ataşehir
> 34758 İSTANBUL
>
> Tel:  +90 216 455 01 53 Fax: +90 216 455 01 54
> www.evam.com.tr
> 
>



-- 
Alper Tekinalp

Software Developer
Evam Streaming Analytics

Atatürk Mah. Turgut Özal Bulv.
Gardenya 5 Plaza K:6 Ataşehir
34758 İSTANBUL

Tel:  +90 216 455 01 53 Fax: +90 216 455 01 54
www.evam.com.tr



Threads got stuck

2016-10-25 Thread Alper Tekinalp
Hi all.

We have 3 servers and cache configuration like:


















For our worker threads we check heartbeats, and if a thread has not sent a
heartbeat for 10 minutes we consider it stuck, interrupt it and
recreate it.

As far as I can see, all our worker threads get stuck in cache.put() and are
interrupted and recreated regularly.

What can be the reason we are stuck at put? The following is the stack trace for
the interruption error.

javax.cache.CacheException: class
org.apache.ignite.IgniteInterruptedException: Failed to wait for
asynchronous operation permit (thread got interrupted).
at
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1502)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.cacheException(IgniteCacheProxy.java:2021)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.put(IgniteCacheProxy.java:1221)
at
com.intellica.project.helper.ee.ConfigManagerHelperEE.setState(ConfigManagerHelperEE.java:90)
at
com.intellica.project.helper.ee.StateMachineConfigManagerEEImpl.store(StateMachineConfigManagerEEImpl.java:53)
at
com.evelopers.unimod.runtime.AbstractEventProcessor.storeConfig(AbstractEventProcessor.java:175)
at
com.evelopers.unimod.runtime.AbstractEventProcessor.process(AbstractEventProcessor.java:130)
at
com.evelopers.unimod.runtime.AbstractEventProcessor.process(AbstractEventProcessor.java:80)
at
com.evelopers.unimod.runtime.ModelEngine.process(ModelEngine.java:199)
at
com.evelopers.unimod.runtime.StrictHandler.handle(StrictHandler.java:46)
at
com.intellica.evam.engine.server.worker.AbstractScenarioWorker.runScenarioLogic(AbstractScenarioWorker.java:172)
at
com.intellica.evam.engine.server.worker.AbstractScenarioWorker.runScenario(AbstractScenarioWorker.java:130)
at
com.intellica.evam.engine.server.worker.AsyncWorker.processEvent(AsyncWorker.java:156)
at
com.intellica.evam.engine.server.worker.AsyncWorker.run(AsyncWorker.java:88)
Caused by: class org.apache.ignite.IgniteInterruptedException: Failed to
wait for asynchronous operation permit (thread got interrupted).
at
org.apache.ignite.internal.util.IgniteUtils$2.apply(IgniteUtils.java:747)
at
org.apache.ignite.internal.util.IgniteUtils$2.apply(IgniteUtils.java:745)
... 14 more
Caused by: java.lang.InterruptedException
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1301)
at java.util.concurrent.Semaphore.acquire(Semaphore.java:317)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.asyncOpAcquire(GridCacheAdapter.java:4597)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.asyncOp(GridDhtAtomicCache.java:683)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAsync0(GridDhtAtomicCache.java:1014)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.putAsync0(GridDhtAtomicCache.java:484)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.putAsync(GridCacheAdapter.java:2541)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put(GridDhtAtomicCache.java:461)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2215)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.put(IgniteCacheProxy.java:1214)
... 11 more


-- 
Alper Tekinalp

Software Developer
Evam Streaming Analytics

Atatürk Mah. Turgut Özal Bulv.
Gardenya 5 Plaza K:6 Ataşehir
34758 İSTANBUL

Tel:  +90 216 455 01 53 Fax: +90 216 455 01 54
www.evam.com.tr



Re: Ignite Jdbc connection

2016-10-25 Thread Anil
Thank you Manu. This is really helpful.

On 24 October 2016 at 20:07, Manu  wrote:

> If you use the Ignite JDBC driver, then to ensure that you always get a valid Ignite
> instance before calling an Ignite operation, I recommend using a datasource
> implementation that validates the connection before calls and creates new ones
> otherwise.
>
> For common operations with an Ignite instance, I use this method to ensure a
> *good* Ignite instance and don't wait for or control reconnection... maybe
> there are some other mechanisms... but who cares? ;)
>
> public Ignite getIgnite() {
>     if (this.ignite != null) {
>         try {
>             // Ensure this ignite instance is STARTED and connected.
>             this.ignite.getOrCreateCache("default");
>         } catch (IllegalStateException e) {
>             this.ignite = null;
>         } catch (IgniteClientDisconnectedException cause) {
>             this.ignite = null;
>         } catch (CacheException e) {
>             if (e.getCause() instanceof IgniteClientDisconnectedException) {
>                 this.ignite = null;
>             } else if (e.getCause() instanceof IgniteClientDisconnectedCheckedException) {
>                 this.ignite = null;
>             } else {
>                 throw e;
>             }
>         }
>     }
>     if (this.ignite == null) {
>         this.createIgniteInstance();
>     }
>     return ignite;
> }
>
> Also, you can wait for reconnection using this catch block instead of the one
> above... but as I said... who cares?... sometimes reconnection waits are not
> desirable...
> [...]
> try {
>     // Ensure this ignite instance is STARTED and connected.
>     this.ignite.getOrCreateCache("default");
> } catch (IllegalStateException e) {
>     this.ignite = null;
> } catch (IgniteClientDisconnectedException cause) {
>     LOG.warn("Client disconnected from cluster. Waiting for reconnect...");
>     cause.reconnectFuture().get(); // Wait for reconnect.
> } catch (CacheException e) {
>     if (e.getCause() instanceof IgniteClientDisconnectedException) {
>         LOG.warn("Client disconnected from cluster. Waiting for reconnect...");
>         IgniteClientDisconnectedException cause = (IgniteClientDisconnectedException)e.getCause();
>         cause.reconnectFuture().get(); // Wait for reconnect.
>     } else if (e.getCause() instanceof IgniteClientDisconnectedCheckedException) {
>         LOG.warn("Client disconnected from cluster. Waiting for reconnect...");
>         IgniteClientDisconnectedCheckedException cause = (IgniteClientDisconnectedCheckedException)e.getCause();
>         cause.reconnectFuture().get(); // Wait for reconnect.
>     } else {
>         throw e;
>     }
> }
> [...]
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Ignite-Jdbc-connection-tp8431p8441.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>