Re: Spelling and grammar check of subtitles for Ignite videos

2020-04-16 Thread Ivan Pavlukhin
Maxim,

> Unfortunately, no one responded, so I will do it on my own.

Did you miss the message from Ilya (see the quote below)?

> I could proofread them if nobody with native English steps in.

> I would suggest creating a separate Apache Ignite YouTube channel with creds
> access for PMCs and uploading the videos there.

While the idea might be good, I suppose an author is free to
choose any placement for the translated videos he would like.

Best regards,
Ivan Pavlukhin

Wed, Apr 8, 2020 at 01:24, Kseniya Romanova :
>
> Hi Max!
>
> While choosing the videos for translation, please note that some meetup talks
> were already transformed into webinars in English, like the talk "How Apache
> Ignite Powers Real-Time Subscriber Offers for a Leading Telecommunications
> Company" by Alexey Bednov & Fedor Loginov. It was first presented at the Apache
> Ignite Moscow Meetup and is already scheduled as a global webinar on June 3rd [1].
>
> It makes sense to discuss the choice first and take only up-to-date materials
> that were never translated into English.
>
> [1] 
> https://www.gridgain.com/resources/webinars/apache-ignite-powers-real-time-subscriber-offers-at-telecommunications-company
>
> Wed, Apr 8, 2020 at 00:46, Maksim Stepachev :
>>
>> Unfortunately, no one responded, so I will do it on my own.
>>
>> Fri, Apr 3, 2020, 17:07 Ilya Kasnacheev :
>>>
>>> Hello!
>>>
>>> I could proofread them if nobody with native English steps in.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> Fri, Apr 3, 2020 at 13:58, Maksim Stepachev :
>>>>
>>>> Hi, everyone!
>>>>
>>>> I'm going to translate excellent Russian videos about Apache Ignite into
>>>> English and dub them after that. They will be uploaded to my YouTube
>>>> channel. I need help with grammar checking.
>>>>
>>>>  If somebody wants to take part, please let me know.
>>>>
>>>>


Re: Read through not working as expected in case of Replicated cache

2020-03-04 Thread Ivan Pavlukhin
Hi Prasad,

Answering your questions:
1. The Ignite documentation portal has a "suggest edits" link. You can
suggest an improvement there. Another option is to create a ticket to
improve the documentation.
2. You can find both approaches in [1]. You can see a different
picture if cache.put is called.

[1] https://gist.github.com/pavlukhin/a94489c6296ace497be950598d7493c5
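For question 2, a minimal sketch of such a localPeek check (assumptions: an
Ignite instance at hand, a cache named "myCache", and the key in question;
run it on each server node, e.g. via ignite.compute().broadcast(...)):

IgniteCache<Integer, Integer> cache = ignite.cache("myCache");
// localPeek inspects only the local copy and never triggers read-through.
Integer primaryCopy = cache.localPeek(key, CachePeekMode.PRIMARY);
Integer backupCopy = cache.localPeek(key, CachePeekMode.BACKUP);
System.out.println("primary=" + primaryCopy + ", backup=" + backupCopy);

After a read-through cache.get(key) only the primary node is expected to
print a non-null value; after an explicit cache.put(key, value) the backups
should print it too.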

Best regards,
Ivan Pavlukhin

Mon, Mar 2, 2020 at 12:57, Prasad Bhalerao :
>
> Hi Ivan,
>
> Thank you for the clarification.
>
> So the behavior is the same for REPLICATED as well as PARTITIONED caches.
>
> 1) Can we please have this behavior documented on the Ignite web page? This will
> help users avoid confusion and design their caches effectively.
>
> 2)  You said "You can check it using IgniteCache.localPeek method (ask if 
> more details how to do it are needed)".  Can you please explain this in 
> detail?
>
>
> Regard,
> Prasad
>
> On Mon, Mar 2, 2020 at 2:45 PM Ivan Pavlukhin  wrote:
>>
>> Hi Prasad,
>>
>> AFAIK, when a value is read through it is not sent to backup nodes. You
>> can check it using the IgniteCache.localPeek method (ask if more details
>> on how to do it are needed).
>>
>> I usually think about a read-through cache in the following way. There
>> is an underlying storage with the "real" data, and the cache is used to
>> speed up access. Some kind of invalidation mechanism might be used, but it
>> is assumed fine to read values from the cache which are not consistent with
>> the backing storage at some point.
>>
>> Consequently, it seems there is no need to distribute values from the
>> underlying storage over all replicas, because if a value is absent a
>> reader will receive the actual value from the underlying storage.
>>
>> Best regards,
>> Ivan Pavlukhin
>>
>> Mon, Mar 2, 2020 at 10:41, Prasad Bhalerao :
>> >
>> > Hi Ivan/Denis,
>> >
>> > Are you saying that when a value is loaded into a cache from an underlying
>> > storage using the read-through approach, the value is loaded only on the
>> > primary node and does not get replicated to its backup nodes?
>> >
>> > I am under the impression that when a value is loaded into a cache using
>> > the read-through approach, the key/value pair gets replicated on all backup
>> > nodes as well, irrespective of a REPLICATED or PARTITIONED cache.
>> > Please correct me if I am wrong.
>> >
>> > I think the key/value must get replicated on all backup nodes when it is
>> > read through from the underlying storage; otherwise the user will have to
>> > add the same key/value explicitly using a cache.put(key,value) operation so
>> > that it gets replicated on all of its backup nodes. This is what I am doing
>> > right now as a workaround to solve this issue.
>> >
>> > I will try to explain my use case again.
>> >
>> > I have a few replicated caches for which read-through is enabled but
>> > write-through is disabled. The underlying tables for these caches are
>> > updated by different systems. Whenever these tables are updated by a 3rd
>> > party system I want to reload the "cache entries".
>> >
>> > I achieve this using the steps given below:
>> > 1) The 3rd party system sends an update message (which contains the key) to
>> > our service by invoking our REST API.
>> > 2) Delete the entry from the cache using the cache().remove(key) method. (The
>> > entry is just removed from the cache but is still present in the DB, as
>> > write-through is false.)
>> > 3) Invoke the cache().get(key) method for the same key as in step 2 to reload
>> > the entry.
>> >
>> > Thanks,
>> > Prasad
>> >
>> >
>> > On Sat, Feb 29, 2020 at 4:49 AM Denis Magda  wrote:
>> >
>> > > Ivan, thanks for stepping in.
>> > >
>> > > Prasad, is Ivan's assumption correct that you query the data with SQL under
>> > > the observed circumstances? My guess is that you were referring to the
>> > > key-value APIs, given that the issue is gone when write-through is
>> > > enabled.
>> > >
>> > > -
>> > > Denis
>> > >
>> > >
>> > > On Fri, Feb 28, 2020 at 2:30 PM Ivan Pavlukhin 
>> > > wrote:
>> > >

Re: Read through not working as expected in case of Replicated cache

2020-03-02 Thread Ivan Pavlukhin
Hi Prasad,

AFAIK, when a value is read through it is not sent to backup nodes. You
can check it using the IgniteCache.localPeek method (ask if more details
on how to do it are needed).

I usually think about a read-through cache in the following way. There
is an underlying storage with the "real" data, and the cache is used to
speed up access. Some kind of invalidation mechanism might be used, but it
is assumed fine to read values from the cache which are not consistent with
the backing storage at some point.

Consequently, it seems there is no need to distribute values from the
underlying storage over all replicas, because if a value is absent a
reader will receive the actual value from the underlying storage.

Best regards,
Ivan Pavlukhin

Mon, Mar 2, 2020 at 10:41, Prasad Bhalerao :
>
> Hi Ivan/Denis,
>
> Are you saying that when a value is loaded into a cache from an underlying
> storage using the read-through approach, the value is loaded only on the
> primary node and does not get replicated to its backup nodes?
>
> I am under the impression that when a value is loaded into a cache using
> the read-through approach, the key/value pair gets replicated on all backup
> nodes as well, irrespective of a REPLICATED or PARTITIONED cache.
> Please correct me if I am wrong.
>
> I think the key/value must get replicated on all backup nodes when it is
> read through from the underlying storage; otherwise the user will have to
> add the same key/value explicitly using a cache.put(key,value) operation so
> that it gets replicated on all of its backup nodes. This is what I am doing
> right now as a workaround to solve this issue.
>
> I will try to explain my use case again.
>
> I have a few replicated caches for which read-through is enabled but
> write-through is disabled. The underlying tables for these caches are
> updated by different systems. Whenever these tables are updated by a 3rd
> party system I want to reload the "cache entries".
>
> I achieve this using the steps given below:
> 1) The 3rd party system sends an update message (which contains the key) to
> our service by invoking our REST API.
> 2) Delete the entry from the cache using the cache().remove(key) method. (The
> entry is just removed from the cache but is still present in the DB, as
> write-through is false.)
> 3) Invoke the cache().get(key) method for the same key as in step 2 to reload
> the entry.
>
> Thanks,
> Prasad
>
>
> On Sat, Feb 29, 2020 at 4:49 AM Denis Magda  wrote:
>
> > Ivan, thanks for stepping in.
> >
> > Prasad, is Ivan's assumption correct that you query the data with SQL under
> > the observed circumstances? My guess is that you were referring to the
> > key-value APIs, given that the issue is gone when write-through is
> > enabled.
> >
> > -
> > Denis
> >
> >
> > On Fri, Feb 28, 2020 at 2:30 PM Ivan Pavlukhin 
> > wrote:
> >
> > > As I understand it, the issue here is the combination of read-through and
> > > SQL. SQL queries do not read from the underlying storage when read-through
> > > is configured. And the observed result happens because a query from a
> > > client node over a REPLICATED cache picks a random server node (a kind of
> > > load-balancing) to retrieve data. The following happens in the described
> > > case:
> > > 1. The value is loaded into the cache from the underlying storage on the
> > > primary node when cache.get is called.
> > > 2. The query is executed multiple times, and when the chosen node is the
> > > primary node the value is observed. On other nodes the value is
> > > absent.
> > >
> > > Actually, the behavior for a PARTITIONED cache is similar, but the
> > > inconsistency is not observed because SQL queries read data from the
> > > primary node there. If the primary node leaves the cluster then an SQL
> > > query will not see the value anymore. So, the same inconsistency will
> > > appear.
> > >
> > > Best regards,
> > > Ivan Pavlukhin
> > >
> > > Fri, Feb 28, 2020 at 13:23, Prasad Bhalerao <
> > > prasadbhalerao1...@gmail.com>:
> > > >
> > > > Can someone please comment on this?
> > > >
> > > > On Wed, Feb 26, 2020 at 6:04 AM Denis Magda  wrote:
> > > >
> > > > > Ignite Dev team,
> > > > >
> > > > > This sounds like an issue in our replicated cache implementation rather
> > > > > than an expected behavior. Especially if partitioned caches don't have
> > > > > such specificity.
> > > > >
> > > > > Who can expla

Re: ClassCastException on Thin client when get cache value with List and Map

2020-02-05 Thread Ivan Pavlukhin
Hi,

Indeed, it is a bug. We are already working on it in the scope of
https://issues.apache.org/jira/browse/IGNITE-12468

Wed, Feb 5, 2020 at 11:42, PereTang :
>
> I use the thin java client.
>
>
> I create an ArrayList and put Person objects into it.
> --
> try (IgniteClient igniteClient = Ignition.startClient(new
> ClientConfiguration().setAddresses("127.0.0.1:10800"))) {
>     final ClientCache<String, List<Person>> demo =
>         igniteClient.getOrCreateCache("demo");
>     final List<Person> personList = new java.util.ArrayList<>();
>     personList.add(new Person("apache", 100));
>     personList.add(new Person("Ignite", 13));
>     demo.put("test", personList);
> }
> --
>
> And when I take it out of the list
> --
> try (IgniteClient igniteClient = Ignition.startClient(new
> ClientConfiguration().setAddresses("127.0.0.1:10800"))) {
>     final ClientCache<String, List<Person>> demo =
>         igniteClient.getOrCreateCache("demo");
>     final List<Person> personList = demo.get("test");
>     final Person person = personList.get(0);
> }
> --
>
> I get the following exceptions:
> --
> java.lang.ClassCastException: class
> org.apache.ignite.internal.binary.BinaryObjectImpl cannot be cast to class
> com.peeandgee.Person (org.apache.ignite.internal.binary.BinaryObjectImpl and
> com.peeandgee.Person are in unnamed module of loader 'app')
> --
>
> However, there is no exception if I modify the code as follows:
> --
> BinaryObject bo = (BinaryObject) map.get(0);
> Person person = bo.deserialize();
> --
>
>
> Same issue with Map: "ClassCastException on thinClient in Apache Ignite",
> https://stackoverflow.com/questions/59299316/classcastexception-on-thinclient-in-apache-ignite
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Custom IgniteConfiguration does not work

2020-02-02 Thread Ivan Pavlukhin
A side note: a minor improvement for the Ignite codebase could be making
the IgniteConfiguration class final.
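For reference, a minimal sketch of the user-attributes approach Ilya
suggests below (the attribute name is hypothetical):

// On the node that carries the extra properties:
IgniteConfiguration cfg = new IgniteConfiguration()
    .setUserAttributes(java.util.Collections.singletonMap("myCheck.enabled", "true"));
Ignite ignite = Ignition.start(cfg);

// Reading it back, locally or from any ClusterNode in the topology:
Object val = ignite.cluster().localNode().attribute("myCheck.enabled");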

Thu, Jan 23, 2020 at 15:01, Ilya Kasnacheev :
>
> Hello!
>
> I don't think you are supposed to inherit from IgniteConfiguration. Why would 
> you want to?
>
> If you want to pass some data around, you can try using e.g. 
> IgniteConfiguration.setUserAttributes.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Wed, Jan 22, 2020 at 22:30, Hemambara :
>>
>> I am trying to extend the IgniteConfiguration class to enhance its
>> properties, as I need them for some custom checks. I have created a config
>> like the one below and started it using Ignition.start().
>>
>> But when I call GridKernalContext.config() I am not getting a
>> MyCustomIgniteConfiguration instance. I am still getting an IgniteConfiguration
>> reference.
>>
>> (XML configuration snippet stripped by the mailing list archive.)
>>
>> This is happening because IgnitionEx has the code below in its
>> initializeConfiguration() method. So even if I extend the configuration it
>> creates a new instance and uses that.
>>
>> The question is: is there any reason why it is not honoring what the client
>> provides in the config XML? Seems like a bug to me.
>>
>>  private IgniteConfiguration initializeConfiguration(IgniteConfiguration
>> cfg) throws IgniteCheckedException {
>> IgniteConfiguration myCfg = new IgniteConfiguration(cfg);
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: REPLICATED caches network overhead?

2020-02-01 Thread Ivan Pavlukhin
Hi Steve,

A bit of additional info: the "replicatedOnly" flag will not be needed in
the upcoming 2.8. A related ticket: [1].

[1] https://issues.apache.org/jira/browse/IGNITE-6295
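For anyone on 2.7.x hitting the same overhead, a minimal sketch of setting
the flag (the cache and table names are assumed):

SqlFieldsQuery qry = new SqlFieldsQuery("SELECT * FROM MyTable")
    // Hint that the query touches only REPLICATED caches, so it can be
    // answered locally instead of being map-reduced over the cluster.
    .setReplicatedOnly(true);
cache.query(qry).getAll();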

Sat, Jan 4, 2020 at 20:32, steve.hostettler :
>
> So to close this one, the problem was that I did not set the
> replicatedOnly flag on the query.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Re:Re: ignite with spring data jpa use grammar IN

2019-12-19 Thread Ivan Pavlukhin
Hi,

I tried a named parameter and indeed it does not work. Actually,
the @Query annotation is an Ignite custom one. Consequently the @Param
annotation is not supported, so named arguments and list
arguments do not work =(
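A possible escape hatch until that is supported: bypass the repository and
run the SQL directly, spreading the list through the TABLE function that the
Ignite SQL docs recommend for IN clauses (an untested sketch; the cache,
table, and column names are assumed):

SqlFieldsQuery qry = new SqlFieldsQuery(
    "SELECT p.* FROM Person p JOIN TABLE(sal BIGINT = ?) s ON p.salary = s.sal");
// The whole array binds to the single '?' of the TABLE function.
qry.setArgs(new Object[] {new Long[] {1000L, 2000L}});
cache.query(qry).getAll();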

Thu, Dec 19, 2019 at 16:52, shishkovilja :
>
> Unfortunately, the code placed at this link doesn't work; it throws the same
> exception as yours. I made changes in the reply at Nabble, so you haven't seen
> the attached exception:
>
> Caused by: org.h2.jdbc.JdbcSQLException:
> Syntax error in SQL statement "SELECT ""PersonCache"".""PERSON""._KEY,
> ""PersonCache"".""PERSON""._VAL FROM PERSON WHERE SALARY IN (:[*]SAL) ";
> expected "NOT, EXISTS, INTERSECTS, SELECT, FROM, WITH"; SQL statement:
> SELECT "PersonCache"."PERSON"._KEY, "PersonCache"."PERSON"._VAL FROM Person
> WHERE salary IN (:sal) [42001-197]
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: ignite with spring data jpa use grammar IN

2019-12-19 Thread Ivan Pavlukhin
Ilya S.,

Actually, I expected it to work because a named parameter should be
expanded into multiple "?" parameters.

Could you please share a ready-to-run project reproducing the problem?
I can take a look.

Thu, Dec 19, 2019 at 14:59, shishkovilja :
>
> I tried to send it with @Param, but the error is the same, and ":param_name"
> stays in the query string, i.e. it seems that the query is not properly parsed...
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: ignite with spring data jpa use grammar IN

2019-12-19 Thread Ivan Pavlukhin
I found an example [1]

public interface EmployeeRepository extends CrudRepository<Employee, Long> {
  @Query("SELECT e FROM Employee e WHERE e.dept = :dept AND "
  + "(SELECT COUNT(DISTINCT e2.salary) FROM Employee e2 "
  + "WHERE e.salary < e2.salary AND e2.dept = :dept) < :topSalNum "
  + "ORDER BY e.salary DESC")
  List<Employee> findByDeptTopNSalaries(@Param("topSalNum") long topSalaryNum,
      @Param("dept") String dept);
}

[1] 
https://www.logicbig.com/tutorials/spring-framework/spring-data/query-named-parameters.html

Thu, Dec 19, 2019 at 14:53, Ivan Pavlukhin :
>
> Single "?" placeholder assumes single value replacement. Query "SELECT
> * FROM Person WHERE salary IN (?)" returns empty result set because it
> is the same as "SELECT * FROM Person WHERE salary = ?". But there is
> no salary which is equal to any List. Spreading List (and
> other collections) into multiple parameters in query was possible with
> NamedParameteJdbcTemplace (by using placeholder ":parName" instead of
> "?"). There could be something similar in Spring data.
>
> Thu, Dec 19, 2019 at 14:29, shishkovilja :
> >
> > Oh, sorry, it seems to be my mistake; in this case no exception occurs, but
> > the resulting iterator is empty.
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>
>
> --
> Best regards,
> Ivan Pavlukhin



-- 
Best regards,
Ivan Pavlukhin


Re: ignite with spring data jpa use grammar IN

2019-12-19 Thread Ivan Pavlukhin
Single "?" placeholder assumes single value replacement. Query "SELECT
* FROM Person WHERE salary IN (?)" returns empty result set because it
is the same as "SELECT * FROM Person WHERE salary = ?". But there is
no salary which is equal to any List. Spreading List (and
other collections) into multiple parameters in query was possible with
NamedParameteJdbcTemplace (by using placeholder ":parName" instead of
"?"). There could be something similar in Spring data.

Thu, Dec 19, 2019 at 14:29, shishkovilja :
>
> Oh, sorry, it seems to be my mistake; in this case no exception occurs, but
> the resulting iterator is empty.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: ignite with spring data jpa use grammar IN

2019-12-19 Thread Ivan Pavlukhin
Hi,

Clause "IN ?" seems broken to me. Do write your query manually?
Unfortunately I am not familiar with Spring JPA. But with spring
JdbcTemplate "?" can be only replaced with singular argument.
NamedParameterJdbcTemplate is able to expand plural (e.g. List) named
parameters.

Thu, Dec 19, 2019 at 13:05, Ilya Kasnacheev :
>
> Hello!
>
> I don't think that we support Spring JPA.
>
> Is it possible to tune the SQL dialect with JPA? In this case, your best bet is
> to use the H2 dialect.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Thu, Dec 19, 2019 at 10:43, 张耀文 :
>>
>> Hello, first sorry for my poor English.
>> When I use JPA with 'IN', it causes a problem. How can I resolve it? I am
>> confused about the expression IN ?[*]. Thanks.
>>
>> ignite:2.7.6
>> spring jpa:ignite-spring-data_2.0
>> spring boot:2.0.9.RELEASE
>> h2:1.4.197
>>
>> jpa method: List findByShipVisitIdIsIn(List 
>> shipVisitIdList); or findByShipVisitIdIn
>>
>> issue track:
>> Caused by: org.h2.jdbc.JdbcSQLException: Syntax error in SQL statement 
>> "SELECT ""BerthplanShipVisitCache"".""BerthplanShipVisit""._KEY, 
>> ""BerthplanShipVisitCache"".""BerthplanShipVisit""._VAL FROM 
>> ""BerthplanShipVisit"" WHERE ((""BerthplanShipVisit"".""shipVisitId"" IN 
>> ?[*]))"; expected "("; SQL statement:
>> SELECT "BerthplanShipVisitCache"."BerthplanShipVisit"._KEY, 
>> "BerthplanShipVisitCache"."BerthplanShipVisit"._VAL FROM 
>> "BerthplanShipVisit" WHERE (("BerthplanShipVisit"."shipVisitId" IN ?)) 
>> [42001-197]
>> at org.h2.message.DbException.getJdbcSQLException(DbException.java:357)
>> at org.h2.message.DbException.getSyntaxError(DbException.java:217)
>> at org.h2.command.Parser.getSyntaxError(Parser.java:555)
>> at org.h2.command.Parser.read(Parser.java:3518)
>> at org.h2.command.Parser.readCondition(Parser.java:2433)
>> at org.h2.command.Parser.readAnd(Parser.java:2342)
>> at org.h2.command.Parser.readExpression(Parser.java:2334)
>> at org.h2.command.Parser.readTerm(Parser.java:3252)
>> at org.h2.command.Parser.readFactor(Parser.java:2587)
>> at org.h2.command.Parser.readSum(Parser.java:2574)
>> at org.h2.command.Parser.readConcat(Parser.java:2544)
>> at org.h2.command.Parser.readCondition(Parser.java:2370)
>> at org.h2.command.Parser.readAnd(Parser.java:2342)
>> at org.h2.command.Parser.readExpression(Parser.java:2334)
>> at org.h2.command.Parser.readTerm(Parser.java:3252)
>> at org.h2.command.Parser.readFactor(Parser.java:2587)
>> at org.h2.command.Parser.readSum(Parser.java:2574)
>> at org.h2.command.Parser.readConcat(Parser.java:2544)
>> at org.h2.command.Parser.readCondition(Parser.java:2370)
>> at org.h2.command.Parser.readAnd(Parser.java:2342)
>> at org.h2.command.Parser.readExpression(Parser.java:2334)
>> at org.h2.command.Parser.parseSelectSimple(Parser.java:2291)
>> at org.h2.command.Parser.parseSelectSub(Parser.java:2133)
>> at org.h2.command.Parser.parseSelectUnion(Parser.java:1946)
>> at org.h2.command.Parser.parseSelect(Parser.java:1919)
>> at org.h2.command.Parser.parsePrepared(Parser.java:463)
>> at org.h2.command.Parser.parse(Parser.java:335)
>> at org.h2.command.Parser.parse(Parser.java:311)
>> at org.h2.command.Parser.prepareCommand(Parser.java:278)
>> at org.h2.engine.Session.prepareLocal(Session.java:611)
>> at org.h2.engine.Session.prepareCommand(Session.java:549)
>> at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1247)
>> at org.h2.jdbc.JdbcPreparedStatement.<init>(JdbcPreparedStatement.java:76)
>> at org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:694)
>> at 
>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.prepare0(IgniteH2Indexing.java:539)
>> at 
>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.prepareStatement(IgniteH2Indexing.java:509)
>> at 
>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.prepareStatement(IgniteH2Indexing.java:476)
>> at 
>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.prepareStatementAndCaches(IgniteH2Indexing.java:2635)
>> ... 89 common frames omitted
>>
>>
>>



-- 
Best regards,
Ivan Pavlukhin


Re: Transaction operations using the Ignite Thin Client Protocol

2019-12-09 Thread Ivan Pavlukhin
It is worth noting that users should be cautious with SQL transactions,
as fair SQL transactions are supported only in TRANSACTIONAL_SNAPSHOT
mode, which is still a kind of experimental feature [1].

[1] https://apacheignite.readme.io/docs/multiversion-concurrency-control
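For the key-value side, a minimal sketch of what transactions over the thin
client protocol look like with the Java thin client (2.8-era API; the cache
name is assumed):

try (IgniteClient client = Ignition.startClient(
        new ClientConfiguration().setAddresses("127.0.0.1:10800"))) {
    ClientCache<Integer, String> cache = client.getOrCreateCache("myCache");
    // Transactions go over the thin client protocol:
    try (ClientTransaction tx = client.transactions().txStart()) {
        cache.put(1, "one");
        cache.put(2, "two");
        tx.commit(); // without commit(), close() rolls the transaction back
    }
}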

Fri, Dec 6, 2019 at 15:12, Ilya Kasnacheev :
>
> Hello!
>
> As we have discussed privately, ODBC is actually the C++ thin SQL client, and
> it has supported transactions since 2.7.
>
> C++ code should look at using ODBC to communicate with Ignite.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Mon, Dec 2, 2019 at 16:26, Igor Sapego :
>>
>> Ivan,
>>
>> You are right. Though we now have transactions support in the thin client
>> protocol, it is currently implemented only for Java. Also, the C++ thin
>> client is yet to support SQL.
>>
>> Best Regards,
>> Igor
>>
>>
>> On Sat, Nov 30, 2019 at 9:35 AM Ivan Pavlukhin  wrote:
>>>
>>> Igor,
>>>
>>> Could you please elaborate whether C++ thin client is going to have
>>> transactions support in 2.8? AFAIR, it was implemented only for Java
>>> thin client.
>>>
>>> Fri, Nov 29, 2019 at 18:29, Stephen Darlington
>>> :
>>>
>>> >
>>> > The ticket says “Fix version: 2.8” so I would assume it would be 
>>> > available then. Currently planned for late January.
>>> >
>>> > > On 29 Nov 2019, at 13:58, dkurzaj  wrote:
>>> > >
>>> > > Hello,
>>> > >
>>> > > Since this improvement : 
>>> > > https://issues.apache.org/jira/browse/IGNITE-9410
>>> > > is resolved, I'd assume that it is now possible to do SQL transactions 
>>> > > using
>>> > > the C++ thin client, though I'm not sure it is yet since I did not find
>>> > > documentation about that. Would someone happen to know more about this
>>> > > subject?
>>> > >
>>> > > Thank you!
>>> > >
>>> > > Dorian
>>> > >
>>> > >
>>> > >
>>> > > --
>>> > > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>> >
>>> >
>>>
>>>
>>> --
>>> Best regards,
>>> Ivan Pavlukhin



-- 
Best regards,
Ivan Pavlukhin


Re: Possible race on node shutdown

2019-12-09 Thread Ivan Pavlukhin
Andrey,

Sounds reasonable. Would you mind creating a ticket in Jira?

Mon, Dec 9, 2019 at 13:33, Andrey Davydov :
>
> The AFTER_NODE_STOP state was ignored because the JVM halts. In our case it is
> not a problem.
> I think that there are some scenarios when AFTER_NODE_STOP is important.
>
> On Thu, Dec 5, 2019 at 11:05 AM Ivan Pavlukhin  wrote:
>>
>> Hi Andrey,
>>
>> Do you see the exception only in logs, or does your code experience the
>> exception? It looks like the exception is treated as a failure, but
>> actually it is not (as it is a normal Ignite node stop). I would like to
>> understand how critical it is for users.
>>
>> Wed, Dec 4, 2019 at 19:25, Andrey Davydov :
>> >
>> > Hello,
>> >
>> >
>> >
>> > Yesterday we got an error on node shutdown while testing our system:
>> >
>> >
>> >
>> > 2019-12-03 18:49:53,653 [pool-228-thread-1] INFO   
>> > r.s.d.m.c.m.PoisonPill:54 - Poison pill works on 
>> > e45190a6-de52-4e43-b710-1ee8cd1f60a6
>> >
>> > 2019-12-03 18:49:53,657 [pool-228-thread-1] INFO   
>> > r.s.d.m.c.m.ClusterLifecycleBean:61 - IgniteLifecycleBean 331582218 handle 
>> > event: BEFORE_NODE_STOP
>> >
>> > 2019-12-03 18:49:53,658 [pool-228-thread-1] INFO   
>> > r.s.d.m.c.w.j.JettyStarter:145 - Stop http interface.
>> >
>> > 2019-12-03 18:49:53,658 [pool-228-thread-1] INFO   
>> > r.s.d.m.c.w.j.JettyStarter:149 - Http interface stopped.
>> >
>> > 2019-12-03 18:49:53,658 [pool-228-thread-1] INFO   
>> > r.s.d.m.c.m.ClusterLifecycleBean:105 - IgniteLifecycleBean 331582218 
>> > finish handle event: BEFORE_NODE_STOP
>> >
>> > [18:49:53] Topology snapshot [ver=4, locNode=a8ee74de, servers=2, 
>> > clients=0, state=INACTIVE, CPUs=4, offheap=7.2GB, heap=4.0GB]
>> >
>> > [18:49:53] Coordinator changed [prev=TcpDiscoveryNode 
>> > [id=e45190a6-de52-4e43-b710-1ee8cd1f60a6, addrs=[127.0.0.1], 
>> > sockAddrs=[/127.0.0.1:47500], discPort=47500, order=1, intOrder=1, 
>> > lastExchangeTime=1575398986885, loc=false, 
>> > ver=2.7.5#20190603-sha1:be4f2a15, isClient=false], cur=TcpDiscoveryNode 
>> > [id=a8ee74de-6d37-4413-9418-08ec903bb974, addrs=[127.0.0.1], 
>> > sockAddrs=[/127.0.0.1:47501], discPort=47501, order=2, intOrder=2, 
>> > lastExchangeTime=1575398993662, loc=true, 
>> > ver=2.7.5#20190603-sha1:be4f2a15, isClient=false]]
>> >
>> > [18:49:53]   ^-- Baseline [id=0, size=3, online=2, offline=1]
>> >
>> > [18:49:53] Topology snapshot [ver=4, locNode=67aa9f86, servers=2, 
>> > clients=0, state=INACTIVE, CPUs=4, offheap=7.2GB, heap=4.0GB]
>> >
>> > [18:49:53] Coordinator changed [prev=TcpDiscoveryNode 
>> > [id=e45190a6-de52-4e43-b710-1ee8cd1f60a6, addrs=[127.0.0.1], 
>> > sockAddrs=[/127.0.0.1:47500], discPort=47500, order=1, intOrder=1, 
>> > lastExchangeTime=1575398989326, loc=false, 
>> > ver=2.7.5#20190603-sha1:be4f2a15, isClient=false], cur=TcpDiscoveryNode 
>> > [id=a8ee74de-6d37-4413-9418-08ec903bb974, addrs=[127.0.0.1], 
>> > sockAddrs=[/127.0.0.1:47501], discPort=47501, order=2, intOrder=2, 
>> > lastExchangeTime=1575398989326, loc=false, 
>> > ver=2.7.5#20190603-sha1:be4f2a15, isClient=false]]
>> >
>> > [18:49:53]   ^-- Baseline [id=0, size=3, online=2, offline=1]
>> >
>> > 2019-12-03 18:49:53,689 [grid-nio-worker-tcp-comm-1-#1107%TestNode-0%] 
>> > ERROR  :135 - Critical system error detected. Will be handled accordingly 
>> > to configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, 
>> > timeout=0, super=AbstractFailureHandler 
>> > [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED, 
>> > SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext 
>> > [type=SYSTEM_WORKER_TERMINATION, err=java.lang.InterruptedException]]
>> >
>> > java.lang.InterruptedException: null
>> >
>> >  at 
>> > org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2158)
>> >  ~[ignite-core-2.7.5.jar:2.7.5]
>> >
>> >  at 
>> > org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1794)
>> >  [ignite-core-2.7.5.jar:2.7.5]
>> >
>> >  at 
>> > org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120) 
>> > [ignite-core-2.7.5.jar:2.7.5]
>> >
>> >  at java.lang.Thread.run(Thread.java:748) [?:1.8.0_232]
>> >
>> > 2019-12-03 18:49:54,033 [grid-nio-worker-tcp-comm-1-#1107%TestNode-0%] 
>> > ERROR  :127 - JVM will be halted immediately due to the failure: 
>> > [failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION, 
>> > err=java.lang.InterruptedException]]
>> >
>> >
>> >
>> > We can’t find any way to reproduce it.
>> >
>> >
>> >
>> > Andrey.
>> >
>> >
>>
>>
>>
>> --
>> Best regards,
>> Ivan Pavlukhin



-- 
Best regards,
Ivan Pavlukhin


Re: Possible race on node shutdown

2019-12-05 Thread Ivan Pavlukhin
Hi Andrey,

Do you see the exception only in logs, or does your code experience the
exception? It looks like the exception is treated as a failure, but
actually it is not (as it is a normal Ignite node stop). I would like to
understand how critical it is for users.

Wed, Dec 4, 2019 at 19:25, Andrey Davydov :
>
> Hello,
>
>
>
> Yesterday we got an error on node shutdown while testing our system:
>
>
>
> 2019-12-03 18:49:53,653 [pool-228-thread-1] INFO   r.s.d.m.c.m.PoisonPill:54 
> - Poison pill works on e45190a6-de52-4e43-b710-1ee8cd1f60a6
>
> 2019-12-03 18:49:53,657 [pool-228-thread-1] INFO   
> r.s.d.m.c.m.ClusterLifecycleBean:61 - IgniteLifecycleBean 331582218 handle 
> event: BEFORE_NODE_STOP
>
> 2019-12-03 18:49:53,658 [pool-228-thread-1] INFO   
> r.s.d.m.c.w.j.JettyStarter:145 - Stop http interface.
>
> 2019-12-03 18:49:53,658 [pool-228-thread-1] INFO   
> r.s.d.m.c.w.j.JettyStarter:149 - Http interface stopped.
>
> 2019-12-03 18:49:53,658 [pool-228-thread-1] INFO   
> r.s.d.m.c.m.ClusterLifecycleBean:105 - IgniteLifecycleBean 331582218 finish 
> handle event: BEFORE_NODE_STOP
>
> [18:49:53] Topology snapshot [ver=4, locNode=a8ee74de, servers=2, clients=0, 
> state=INACTIVE, CPUs=4, offheap=7.2GB, heap=4.0GB]
>
> [18:49:53] Coordinator changed [prev=TcpDiscoveryNode 
> [id=e45190a6-de52-4e43-b710-1ee8cd1f60a6, addrs=[127.0.0.1], 
> sockAddrs=[/127.0.0.1:47500], discPort=47500, order=1, intOrder=1, 
> lastExchangeTime=1575398986885, loc=false, ver=2.7.5#20190603-sha1:be4f2a15, 
> isClient=false], cur=TcpDiscoveryNode 
> [id=a8ee74de-6d37-4413-9418-08ec903bb974, addrs=[127.0.0.1], 
> sockAddrs=[/127.0.0.1:47501], discPort=47501, order=2, intOrder=2, 
> lastExchangeTime=1575398993662, loc=true, ver=2.7.5#20190603-sha1:be4f2a15, 
> isClient=false]]
>
> [18:49:53]   ^-- Baseline [id=0, size=3, online=2, offline=1]
>
> [18:49:53] Topology snapshot [ver=4, locNode=67aa9f86, servers=2, clients=0, 
> state=INACTIVE, CPUs=4, offheap=7.2GB, heap=4.0GB]
>
> [18:49:53] Coordinator changed [prev=TcpDiscoveryNode 
> [id=e45190a6-de52-4e43-b710-1ee8cd1f60a6, addrs=[127.0.0.1], 
> sockAddrs=[/127.0.0.1:47500], discPort=47500, order=1, intOrder=1, 
> lastExchangeTime=1575398989326, loc=false, ver=2.7.5#20190603-sha1:be4f2a15, 
> isClient=false], cur=TcpDiscoveryNode 
> [id=a8ee74de-6d37-4413-9418-08ec903bb974, addrs=[127.0.0.1], 
> sockAddrs=[/127.0.0.1:47501], discPort=47501, order=2, intOrder=2, 
> lastExchangeTime=1575398989326, loc=false, ver=2.7.5#20190603-sha1:be4f2a15, 
> isClient=false]]
>
> [18:49:53]   ^-- Baseline [id=0, size=3, online=2, offline=1]
>
> 2019-12-03 18:49:53,689 [grid-nio-worker-tcp-comm-1-#1107%TestNode-0%] ERROR  
> :135 - Critical system error detected. Will be handled accordingly to 
> configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, 
> timeout=0, super=AbstractFailureHandler 
> [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED, 
> SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=java.lang.InterruptedException]]
>
> java.lang.InterruptedException: null
>
>  at 
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2158)
>  ~[ignite-core-2.7.5.jar:2.7.5]
>
>  at 
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1794)
>  [ignite-core-2.7.5.jar:2.7.5]
>
>  at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120) 
> [ignite-core-2.7.5.jar:2.7.5]
>
>  at java.lang.Thread.run(Thread.java:748) [?:1.8.0_232]
>
> 2019-12-03 18:49:54,033 [grid-nio-worker-tcp-comm-1-#1107%TestNode-0%] ERROR  
> :127 - JVM will be halted immediately due to the failure: 
> [failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION, 
> err=java.lang.InterruptedException]]
>
>
>
> We can’t find any way to reproduce it.
>
>
>
> Andrey.
>
>



-- 
Best regards,
Ivan Pavlukhin


Re: Alter table issue

2019-12-03 Thread Ivan Pavlukhin
Hi Shravya,

It is a known issue. You can find more details in the ticket [1].

[1] https://issues.apache.org/jira/browse/IGNITE-6611
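Until that ticket is fixed, a common workaround (a sketch only; it assumes
the table from the question below) is to re-add the column under a new name,
so the stale long values are not bound to it:

cache.query(new SqlFieldsQuery(
    "ALTER TABLE person DROP COLUMN order_id")).getAll();
// A fresh column name avoids the type clash with the old binary metadata:
cache.query(new SqlFieldsQuery(
    "ALTER TABLE person ADD COLUMN order_id_str VARCHAR(64)")).getAll();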

Tue, Dec 3, 2019 at 07:54, Shravya Nethula <
shravya.neth...@aline-consulting.com>:

> Hi,
>
> I added a new column in an existing table using the following query:
> *ALTER TABLE person ADD COLUMN (order_id LONG)*
>
> Now, I am trying to change the datatype of the new column, so I tried
> executing the following queries:
>
> *ALTER TABLE person DROP COLUMN (order_id) *
>
> *ALTER TABLE person ADD COLUMN (order_id VARCHAR(64)) *
>
> Now when I try to insert a row with a "varchar" value in the "order_id"
> column, it throws the following error:
> *Error: Wrong value has been set
> [typeName=SQL_PUBLIC_PERSON_fc7e0bd5_d052_43c1_beaf_fb01b65f2f96,
> fieldName=ORDER_ID, fieldType=long, assignedValueType=String]*
>
> This link, https://apacheignite-sql.readme.io/docs/alter-table, says:
> "The command does not remove actual data from the cluster, which means that
> if the column 'name' is dropped, the value of 'name' will still be
> stored in the cluster. This limitation is to be addressed in the next
> releases."
>
> I saw that there is a limitation in ALTER TABLE. So in which release will
> this support be provided? What is the tentative date?
>
>
> Regards,
>
> Shravya Nethula,
>
> BigData Developer,
>
>
> Hyderabad.
>
>

-- 
Best regards,
Ivan Pavlukhin


Re: Transaction operations using the Ignite Thin Client Protocol

2019-11-29 Thread Ivan Pavlukhin
Igor,

Could you please elaborate on whether the C++ thin client is going to have
transactions support in 2.8? AFAIR, it was implemented only for the Java
thin client.

Fri, Nov 29, 2019 at 18:29, Stephen Darlington
:

>
> The ticket says “Fix version: 2.8” so I would assume it would be available 
> then. Currently planned for late January.
>
> > On 29 Nov 2019, at 13:58, dkurzaj  wrote:
> >
> > Hello,
> >
> > Since this improvement : https://issues.apache.org/jira/browse/IGNITE-9410
> > is resolved, I'd assume that it is now possible to do SQL transactions using
> > the C++ thin client, though I'm not sure it is yet since I did not find
> > documentation about that. Would someone happen to know more about this
> > subject?
> >
> > Thank you!
> >
> > Dorian
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>


--
Best regards,
Ivan Pavlukhin


Re: Baseline topology - Data inconsistency

2019-11-19 Thread Ivan Pavlukhin
As I wrote before, I hope that the split M5 cluster will start as
inactive. You can think of it as an administrator's responsibility to
check that there are no split clusters upon activation. If both
clusters are active, then yes, they will act separately and the data will
diverge.

Tue, Nov 19, 2019 at 16:46, userx :
>
> Sorry if the question wasn't clear.
>
> What I wanted to ask is: say M5 comes up in a different cluster, so that
> eventually there will be two clusters:
>
> Cluster 1 :- M1 to M4
> Cluster 2:- M5
>
> Then if they share a common cache (before M5 went down) and there are two
> client requests
>
> Cluster 1:- commonCache.put("1", 1)
> Cluster 2:- commonCache.put("1", 2)
>
> isn't that a conflicting situation? There are two different values of "1".
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Baseline topology - Data inconsistency

2019-11-19 Thread Ivan Pavlukhin
Sorry, I did not get the question from the last message. Could you
please rephrase?

Tue, Nov 19, 2019 at 14:43, userx :
>
> So, take the case you highlighted where M5 starts in a different cluster
> because of network issues. In such a case there is a possibility that if M5
> tries to join the first cluster (M1 to M4), there is an exception related
> to a "Mismatch of hashid of M5".
>
> So what happens when M1 receives cache.put("1",1) and M5 receives
> cache.put("1",2)?
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Baseline topology - Data inconsistency

2019-11-19 Thread Ivan Pavlukhin
The expected behavior is that there will be one merged cluster. As my
understanding goes, if you have a net split (M5 cannot reach any
node from M1 to M4 by network) then M5 can start in a separate cluster
alone. I hope that such a cluster will be inactive, but you definitely
should check it manually. You can try it as follows:
1. Shutdown M5.
2. Shutdown cluster (M1-M4).
3. Start M5.

Tue, Nov 19, 2019 at 07:24, userx :

>
> So one quick question Ivan,
>
> When M5 comes up, do we have
>
> a) two clusters -  (M1 to M4) and (M5 alone)
> b) one merged cluster (M1 to M5)
>
> What I am trying to ask is: is it possible, without manual intervention,
> to have different clusters if M5 was part of the original cluster (M1 to
> M5) during the activation?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



--
Best regards,
Ivan Pavlukhin


Re: Baseline topology - Data inconsistency

2019-11-18 Thread Ivan Pavlukhin
In the described case node M5 should rebalance the entry <1, 2> from node M1,
as M5 has stale data. The decision to perform rebalance or not is
based on an internal mechanism called the "partition update counter". In
the described case the partition for key "1" on M5 will have an update counter
less than the corresponding one on M1. So, rebalance should take place. If the
counters are equal, there will be no rebalance for the partition.

Mon, Nov 18, 2019 at 11:03, userx :
>
> Any pointers ?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: recoveryBallotBoxes in MvccProcessorImpl memory leak?

2019-11-18 Thread Ivan Pavlukhin
Hi,

Thank you for digging deeper! I have no good ideas about problems in curCrd.local().

Have you tried to reproduce the leak by starting/stopping a huge number
of client nodes?
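A minimal sketch of such a reproducer attempt (the configuration is assumed;
watch MvccProcessorImpl.recoveryBallotBoxes on the servers while it runs):

for (int i = 0; i < 100_000; i++) {
    IgniteConfiguration cfg = new IgniteConfiguration()
        .setIgniteInstanceName("client-" + i)
        .setClientMode(true);
    // Joining and leaving the topology should be enough to trigger the
    // discovery events that create RecoveryBallotBox instances.
    try (Ignite client = Ignition.start(cfg)) {
        // no-op
    }
}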

Sun, Nov 17, 2019 at 13:16, mvkarp :
>
> The only other thing I can think of, if it's through onDiscovery(), is that
> curCrd.local() is somehow returning true. However, I am unable to find
> exactly how local() is determined, since there appears to be a big chain.
>
> I know that the leaking server is on a different physical
> node and has a completely different node ID
> (b-b-b-b-b-b) from what the MVCC coordinator has
> (mvccCrd=a--a-a-a).
>
> Is there any way that curCrd.local() could be returning true on the
> leaking server JVM? I am trying to investigate how local() is determined and
> what could cause it to be true.
>
>
> Ivan Pavlukhin wrote
> > But currently I suspect that you faced a leak in
> > MvccProcessorImpl.onDiscovery on non MVCC coordinator nodes. Do you
> > think that there is other reason in you case?
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: recoveryBallotBoxes in MvccProcessorImpl memory leak?

2019-11-16 Thread Ivan Pavlukhin
> But the population might be through processRecoveryFinishedMessage() - which 
> does not do any check for isLocal() and goes straight to processing message 
> since waitForCoordinatorInit() on a MvccRecoveryFinishedMessage always 
> returns false?

All nodes send messages intended for handling in
processRecoveryFinishedMessage() to the MVCC coordinator node. So, we
assume that only the coordinator receives such messages. If you are
interested, you can find the other side of the code in
IgniteTxManager.NodeFailureTimeoutObject, where "recovery finished"
messages are issued.

But currently I suspect that you faced a leak in
MvccProcessorImpl.onDiscovery on non-MVCC-coordinator nodes. Do you
think that there is another reason in your case?

Sat, Nov 16, 2019 at 12:30, mvkarp :
>
> 1 & 2. Actually, looking at latest Master on the release of 2.7.5 and the
> current Master version, it is 'pickMvccCoordinator' function which returns
> the coordinator (this is same function that selects node that is not Client
> and Ignite version >= 2.7). curCrd is then assigned the return variable of
> pickMvccCoordinator, which becomes the active Mvcc coordinator. So looks
> like it does become active, but not sure the effect of that yet.
>
> 3. Assuming it is then active, looks like there are two entry points into
> recoveryBallotBoxes. Through  onDiscovery() and via
> processRecoveryFinishedMessage().
>
> Is it possible that onDiscovery() does not populate recoveryBallotBoxes as
> there is curCrd0.local() check - so processing will only be done if MVCC
> coordinator is local - thus a node that is actually a MVCC coordinator will
> clear out the recoveryBallotBoxes (which is the explicit check that you
> mentioned).
>
> But the population might be through processRecoveryFinishedMessage() - which
> does not do any check for isLocal() and goes straight to processing message
> since waitForCoordinatorInit() on a MvccRecoveryFinishedMessage always
> returns false?
>
>
> Ivan Pavlukhin wrote
> > 1. The MVCC coordinator should not do anything when there are no MVCC
> > caches; actually, it should not be active in such a case. Basically, the MVCC
> > coordinator is needed to have a consistent order between transactions.
> > 2. In 2.7.5 an "assigned coordinator" is always selected, but it does not
> > mean that it is active. The MvccProcessorImpl.curCrd variable corresponds
> > to the active MVCC coordinator.
> > 3. If that statement is true, then it should be rather easy to
> > reproduce the problem by starting and stopping client nodes
> > frequently. recoveryBallotBoxes was not assumed to be populated on
> > nodes other than the MVCC coordinator. If it happens then we have found a bug.
> > Actually, the code in master is different and has an explicit check
> > that recoveryBallotBoxes is populated only on the MVCC coordinator.
> >
> > Thu, Nov 14, 2019 at 15:42, mvkarp liquid_ninja2k@ :
> >>
> >> Hi, after investigating I have few questions regarding this issue.
> >>
> >> 1. Having lack of knowledge in what MVCC coordinator is used for, are you
> >> able to shed some light on the role and purpose of the MVCC coordinator?
> >> What does the MVCC coordinator do, why is one selected? Should an MVCC
> >> coordinator be selected regardless of MVCC being disabled? (i.e. is it
> >> used
> >> for any other base features and is it just the way Ignite is meant to
> >> work)
> >>
> >> 2. Following on from this, after looking at the code of the
> >> MvccProcessorImpl.java class in Ignite 2.7.5 Github, it looks like an
> >> MVCC
> >> coordinator is ALWAYS selected and assigns one of the server nodes as the
> >> MVCC coordinator, regardless of having TRANSACTIONAL_SNAPSHOT cache or
> >> not
> >> (mvccEnabled can be false but a MVCC coordinator is still be selected).
> >>
> >> https://github.com/apache/ignite/blob/ignite-2.7.5/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccProcessorImpl.java
> >>
> >> On Line 861, in assignMvccCoordinator method, it loops through all nodes
> >> in
> >> the cluster with only these two conditions.
> >>
> >> *if (!node.isClient() && supportsMvcc(node))*
> >>
> >> It only checks that the node is not a client and that supportsMvcc holds
> >> (which is true for all versions > 2.7). It does not check mvccEnabled at all.
> >>
> >>
> >> Can you confirm the above is intentional/expected or if there is another
> >> piece of code I am missing?
> >>
> >>
> >> 3.

Re: recoveryBallotBoxes in MvccProcessorImpl memory leak?

2019-11-15 Thread Ivan Pavlukhin
1. The MVCC coordinator should not do anything when there are no MVCC
caches; actually, it should not be active in such a case. Basically, the MVCC
coordinator is needed to have a consistent order between transactions.
2. In 2.7.5 an "assigned coordinator" is always selected, but it does not
mean that it is active. The MvccProcessorImpl.curCrd variable corresponds
to the active MVCC coordinator.
3. If that statement is true, then it should be rather easy to
reproduce the problem by starting and stopping client nodes
frequently. recoveryBallotBoxes was not assumed to be populated on
nodes other than the MVCC coordinator. If it happens then we have found a bug.
Actually, the code in master is different and has an explicit check
that recoveryBallotBoxes is populated only on the MVCC coordinator.

Thu, Nov 14, 2019 at 15:42, mvkarp :
>
> Hi, after investigating I have few questions regarding this issue.
>
> 1. Having lack of knowledge in what MVCC coordinator is used for, are you
> able to shed some light on the role and purpose of the MVCC coordinator?
> What does the MVCC coordinator do, why is one selected? Should an MVCC
> coordinator be selected regardless of MVCC being disabled? (i.e. is it used
> for any other base features and is it just the way Ignite is meant to work)
>
> 2. Following on from this, after looking at the code of the
> MvccProcessorImpl.java class in Ignite 2.7.5 Github, it looks like an MVCC
> coordinator is ALWAYS selected and assigns one of the server nodes as the
> MVCC coordinator, regardless of having TRANSACTIONAL_SNAPSHOT cache or not
> (mvccEnabled can be false, but an MVCC coordinator is still selected).
>
> https://github.com/apache/ignite/blob/ignite-2.7.5/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/mvcc/MvccProcessorImpl.java
>
> On Line 861, in assignMvccCoordinator method, it loops through all nodes in
> the cluster with only these two conditions.
>
> *if (!node.isClient() && supportsMvcc(node))*
>
> It only checks that the node is not a client and that supportsMvcc holds (which
> is true for all versions > 2.7). It does not check mvccEnabled at all.
>
>
> Can you confirm the above is intentional/expected or if there is another
> piece of code I am missing?
>
>
> 3. As extra information, the node that happens to be selected as MVCC
> coordinator does not get the leak. But every other client/server gets the
> leak.
>
>
>
> Ivan Pavlukhin wrote
> > Hi,
> >
> > I suspect the following here. Some node treats itself as an MVCC
> > coordinator and creates a new RecoveryBallotBox when each client node
> > leaves. Some (maybe all) other nodes think that MVCC is disabled and
> > do not send a vote (assumed for the aforementioned ballot box) to the MVCC
> > coordinator. Consequently, a memory leak.
> >
> > The following could be done:
> > 1. Figure out why some node treats itself as the MVCC coordinator and others
> > think that MVCC is disabled.
> > 2. Try to introduce some defensive measures in the Ignite code to protect
> > from the leak in a long-running cluster.
> >
> > As a last-chance workaround I can suggest writing custom code which
> > cleans the recoveryBallotBoxes map from time to time (most likely using
> > reflection).
> >
> > Mon, Nov 11, 2019 at 08:53, mvkarp liquid_ninja2k@ :
> >>
> >> We have clients frequently stopping and starting in short-lived client JVM
> >> processes as required for our purposes; this seems to lead to a huge bunch
> >> of PMEs (but no rebalancing) and topology changes (topVer=300,000+).
> >>
> >> Still cannot figure out why this map won't clear (there are no exceptions
> >> or errors at all in the entire log).
> >>
> >>
> >>
> >> --
> >> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
> >
> >
> >
> > --
> > Best regards,
> > Ivan Pavlukhin
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: recoveryBallotBoxes in MvccProcessorImpl memory leak?

2019-11-11 Thread Ivan Pavlukhin
Hi,

My first thought is deploying a service [1] (either dynamically via
Ignite.services().deploy() or statically via
IgniteConfiguration.setServiceConfiguration()) that clears the problematic
map periodically.

[1] https://apacheignite.readme.io/docs/service-grid
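A rough sketch of such a service (heavily assumption-laden: it touches the
internal IgniteKernal, GridKernalContext and MvccProcessorImpl classes and
the private recoveryBallotBoxes field, all of which may change between
versions, and clearing the map could interfere with genuine MVCC recovery):

public class BallotBoxCleanerService implements Service {
    @IgniteInstanceResource
    private Ignite ignite;

    @Override public void init(ServiceContext ctx) { /* no-op */ }

    @Override public void cancel(ServiceContext ctx) { /* no-op */ }

    @Override public void execute(ServiceContext ctx) throws Exception {
        while (!ctx.isCancelled()) {
            // GridKernalContext.coordinators() returns the MVCC processor.
            Object mvccProc = ((IgniteKernal)ignite).context().coordinators();
            Field f = mvccProc.getClass().getDeclaredField("recoveryBallotBoxes");
            f.setAccessible(true);
            ((Map<?, ?>)f.get(mvccProc)).clear();
            Thread.sleep(60_000); // clean roughly once a minute
        }
    }
}

Deployed, for example, as a node singleton:
ignite.services().deployNodeSingleton("ballotBoxCleaner", new BallotBoxCleanerService());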

Mon, Nov 11, 2019 at 13:20, mvkarp :
>
> Hi,
>
> Would you have any suggestion on how to implement a last chance workaround
> for this issue for the server JVM?
>
>
> Ivan Pavlukhin wrote
> > Hi,
> >
> > I suspect the following here. Some node treats itself as an MVCC
> > coordinator and creates a new RecoveryBallotBox when each client node
> > leaves. Some (maybe all) other nodes think that MVCC is disabled and
> > do not send a vote (assumed for the aforementioned ballot box) to the MVCC
> > coordinator. Consequently, a memory leak.
> >
> > The following could be done:
> > 1. Figure out why some node treats itself as the MVCC coordinator and others
> > think that MVCC is disabled.
> > 2. Try to introduce some defensive measures in the Ignite code to protect
> > from the leak in a long-running cluster.
> >
> > As a last-chance workaround I can suggest writing custom code which
> > cleans the recoveryBallotBoxes map from time to time (most likely using
> > reflection).
> >
> Mon, Nov 11, 2019 at 08:53, mvkarp liquid_ninja2k@ :
> >>
>> We have clients frequently stopping and starting in short-lived client JVM
>> processes as required for our purposes; this seems to lead to a huge bunch
>> of PMEs (but no rebalancing) and topology changes (topVer=300,000+).
>>
>> Still cannot figure out why this map won't clear (there are no exceptions
>> or errors at all in the entire log).
> >>
> >>
> >>
> >> --
> >> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
> >
> >
> >
> > --
> > Best regards,
> > Ivan Pavlukhin
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: recoveryBallotBoxes in MvccProcessorImpl memory leak?

2019-11-10 Thread Ivan Pavlukhin
Hi,

I suspect the following here. Some node treats itself as an MVCC
coordinator and creates a new RecoveryBallotBox when each client node
leaves. Some (maybe all) other nodes think that MVCC is disabled and
do not send a vote (assumed for the aforementioned ballot box) to the MVCC
coordinator. Consequently, a memory leak.

The following could be done:
1. Figure out why some node treats itself as the MVCC coordinator and others
think that MVCC is disabled.
2. Try to introduce some defensive measures in the Ignite code to protect
from the leak in a long-running cluster.

As a last-chance workaround I can suggest writing custom code which
cleans the recoveryBallotBoxes map from time to time (most likely using
reflection).

Mon, Nov 11, 2019 at 08:53, mvkarp :
>
> We have clients frequently stopping and starting in short-lived client JVM
> processes as required for our purposes; this seems to lead to a huge bunch
> of PMEs (but no rebalancing) and topology changes (topVer=300,000+).
>
> Still cannot figure out why this map won't clear (there are no exceptions
> or errors at all in the entire log).
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: How to insert data?

2019-11-08 Thread Ivan Pavlukhin
Hi Boris,

Perhaps the confusing point here is that Ignite separates the key and value
parts in its KV storage. And there is always an underlying KV storage in
Ignite. So you cannot have the cache key and value in a single POJO. I
tried your class with annotations and ran INSERTs with it. The trick
here is the "_key" column.
class DataX {
    @QuerySqlField
    private Integer key;

    @QuerySqlField
    private String value;

    public DataX(int key, String value) {
        this.key = key;
        this.value = value;
    }

    public int getKey() {
        return key;
    }

    public void setKey(int key) {
        this.key = key;
    }

    public String getValue() {
        return value;
    }

    public void setValue(String value) {
        this.value = value;
    }
}

IgniteCache<Integer, DataX> cache = ignite.createCache(new
CacheConfiguration<Integer, DataX>(DEFAULT_CACHE_NAME)
    .setIndexedTypes(Integer.class, DataX.class));

cache.query(new SqlFieldsQuery("insert into DataX(_key, key, value)
values(1, 42, 'value42')"));

System.err.println(cache.query(new SqlFieldsQuery("select * from
DataX")).getAll());

Wed, Nov 6, 2019 at 11:46, BorisBelozerov :
>
> When I run your code on one node, it runs OK.
> But when I run your code in a cluster, it doesn't work: I can select but can't
> insert or do other things.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Issue querying id column only

2019-11-05 Thread Ivan Pavlukhin
> SELECT id FROM people;
>
>
>
> Denis
>
> On 17 Oct 2019, 09:02 +0300, Kurt Semba , wrote:
>
> Hi all,
>
>
>
> Is it possible for a table through the SQL interface to only return some 
> subset of data if querying against a specific column?
>
>
>
> e.g.
>
>
>
> We have a cache configuration defined based on Java SQL Query annotations 
> that contains an id field and some other string fields. The value of the id 
> field in all entries also matches the value of the cache entry key.
>
>
>
> The table contains 3 entries, however if I execute “select id from table” 
> through sqlline, I only am able to see 1 entry. However, if I execute “select 
> id, name from table”, I see all of them.
>
>
>
> Are there any steps I can take to better diagnose this?
>
>
>
> Thank you
>
> Kurt
>
>



--
Best regards,
Ivan Pavlukhin


Re: Issue with adding nested index dynamically

2019-11-05 Thread Ivan Pavlukhin
Hi Hemambara,

You can write an email to d...@ignite.apache.org with a reference to
the issue and a description of the fix. It might be that someone will
be ready to do a review and merge it.

вт, 5 нояб. 2019 г. в 15:57, Hemambara :
>
> Okay, so the issue you are facing is an incorrect data type, which is valid,
> so it's not an issue then.
>
> Yes, agreed that it requires more testing, but I feel the fix that is going
> in is safe and good to do. This fix is really important for us to proceed
> further. I have tested a few other scenarios and it's working fine for me. Is
> there any way we can plan to merge? Or do you not want to do it now?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Issue with adding nested index dynamically

2019-11-05 Thread Ivan Pavlukhin
Hi Hemambara,

Check my example [1].
1. Launch step1.
2. Uncomment code and enable Address.number field.
3. Launch step2.

Actually, the problem here is a type mismatch for "number": int vs
varchar. Working with nested fields can be really tricky; consequently
there is not much activity to improve their support. Applying the proposed
fix requires thorough testing and review. A lot of end-to-end
scenarios should be covered by tests.

[1] https://gist.github.com/pavlukhin/53d8a23b48ca0018481a203ceb06065f

Fri, Nov 1, 2019 at 11:28, Ivan Pavlukhin :
>
> Hi Hemambara,
>
> I appologize but I will be able to share a problematic example only on
> next week.
>
> Thu, Oct 31, 2019 at 19:49, Hemambara :
> >
> > I did not face any issue. It's working fine for me. Can you share your code
> > and the exception that you are getting?
> >
> > I tried like below and it worked for me.
> > ((Person)cache.get(1)).address.community)
> >
> >
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>
>
> --
> Best regards,
> Ivan Pavlukhin



-- 
Best regards,
Ivan Pavlukhin


Re: recoveryBallotBoxes in MvccProcessorImpl memory leak?

2019-11-01 Thread Ivan Pavlukhin
Hi,

Sounds like a bug. It would be great to have a ticket with a reproducer.

Fri, Nov 1, 2019 at 03:25, mvkarp :
>
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2658/heapanalysisMAT.jpg>
>
> I've attached an Eclipse MAT heap analysis. As you can see MVCC is disabled
> (there are no TRANSACTIONAL_SNAPSHOT caches in the cluster)
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Issue with adding nested index dynamically

2019-11-01 Thread Ivan Pavlukhin
Hi Hemambara,

I apologize, but I will be able to share a problematic example only
next week.

чт, 31 окт. 2019 г. в 19:49, Hemambara :
>
> I did not face any issue. It's working fine for me. Can you share your code
> and the exception that you are getting?
>
> I tried like below and it worked for me.
> ((Person)cache.get(1)).address.community)
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Issue with adding nested index dynamically

2019-10-31 Thread Ivan Pavlukhin
Hi Hemambara,

I checked some more scenarios with your fix. In a scenario:
1. Register QueryEntity with classes
class Person { String name; Address address; }
class Address { String street; }
2. cache.put(1, new Person(...))
3. Add column dynamically
alter table add column "Address.number"
4. UPDATE Person set "Address.number" = 666
5. Update class Address to be
class Address { String street; int number; }
6. Retrieving a value from cache
cache.get(1).address.number
results in a weird exception.

I checked the same scenario with flat objects (without nesting) and it
finished successfully.

I am afraid I cannot merge the proposed fix as is. The thing with
nesting SQL fields is quite complicated and requires thorough
analysis. Unfortunately, I do not have enough time to do a
sufficient analysis.

ср, 30 окт. 2019 г. в 13:31, Hemambara :
>
> Hello Ivan, please let me know if you get a chance to check the code and
> merge it.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Issue with adding nested index dynamically

2019-10-28 Thread Ivan Pavlukhin
Hi Hemambara,

It is really good that we have a common understanding now.
> I could not merge pull request as it is saying that only users with "Write 
> access" can merge pull requests

By merge I meant merging branches in your fork repository. It is
really inconvenient to work with a bunch of PRs simultaneously. It would
be great if there were only one PR.

пт, 25 окт. 2019 г. в 14:43, Hemambara :
>
> Ohh yes, you are right. Without the fix, it is failing to query the cache. With
> the other approach that I attached earlier, it failed to join the node;
> there I have the CacheConfiguration in an xml file for both server and client nodes.
> Not sure if that matters. Either way it is failing.
>
> Also, this is my bad, my apologies: I forgot to check in one change in
> GridQueryProcessor.java where it was using the QueryBinaryProperty constructor;
> instead I am using QueryUtils.buildBinaryProperty to build it. Now I have
> checked in all the changes along with test cases.
>
> Can you please review pull requests:
> https://github.com/apache/ignite/pull/7008 - Change in QueryUtils
> https://github.com/apache/ignite/pull/7009 - Test case for QueryUtilsTest
> https://github.com/apache/ignite/pull/7010 - Change in
> GridQueryProcessor.java
> https://github.com/apache/ignite/pull/7011 - Test case to make sure it is
> adding nested field
>
> I could not merge the pull requests as it says that only users with "Write
> access" can merge pull requests. Can you please review and merge these pull
> requests? Please let me know if you see any issues.
>
> Appreciate your help on this.
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Issue with adding nested index dynamically

2019-10-25 Thread Ivan Pavlukhin
Hi Hemambara,

I tried adding a 2nd node. I saw the following:
1. Without the fix the 2nd node is able to join the cluster but fails to execute "select *".
2. With and without the fix I always got null for "Address.street".

Could you please consolidate all changes into one pull request and
remove links to the irrelevant ones from the ticket [1]? Please add an
end-to-end test ensuring that the added column has a non-null value to that
PR as well.

[1] https://issues.apache.org/jira/browse/IGNITE-12261

чт, 24 окт. 2019 г. в 21:30, Hemambara :
>
> The result should not be the same; could you please share your logs? A couple
> of issues here.
>
> 1) *Without the fix* - I got null instead of the "Address.street" value
> [[1, Person{name='john', address=Address{street='baker', number=221}}, john,
> Address{street='baker', number=221}, *null*]]
>
>    *With the fix*
>[[1, Person{name='john', address=Address{street='baker', number=221}},
> john, Address{street='baker', number=221},* baker*]]
>
> 2) Also, you are trying with only one node. Could you please try to *bring
> up another node* using the code below. It will just start another node and
> try to join the existing one. *Without the fix it cannot join, and with the
> fix it will join.*
>
>
> @Test
> public void test(){
> TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi()
> .setIpFinder(new
> TcpDiscoveryVmIpFinder().setAddresses(Collections.singleton("127.0.0.1:47500..47509")));
>
> IgniteConfiguration igniteConfig = new
> IgniteConfiguration().setDiscoverySpi(discoverySpi);
>
> try(Ignite ignite = Ignition.start(igniteConfig)){
> IgniteCache<Integer, Person> cache = ignite.getOrCreateCache(new
> CacheConfiguration<Integer, Person>("cache")
> .setIndexedTypes(Integer.class, Person.class));
>
> System.err.println(cache.query(new SqlFieldsQuery("Select * from
> Person")).getAll());
>
> }
>
> }
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Does ignite support FULL OUTER JOIN ?

2019-10-24 Thread Ivan Pavlukhin
Hi,

FULL OUTER JOIN is not supported, as it is not supported in H2 [1]
(Ignite uses H2 internally for query processing).

[1] https://github.com/h2database/h2database/issues/457
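
As a workaround, a full outer join can usually be emulated with a UNION ALL
of a LEFT JOIN and the unmatched rows of a RIGHT JOIN. A minimal sketch,
assuming the person/city schema from the question (the cache name "person"
and the projected columns are hypothetical):

import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class FullOuterJoinWorkaround {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // All LEFT JOIN rows, plus the RIGHT JOIN rows that have no
            // match on the left side -- together they behave like a FULL OUTER JOIN.
            SqlFieldsQuery qry = new SqlFieldsQuery(
                "SELECT p.city_id, c.city_id FROM person p " +
                "LEFT JOIN city c ON p.city_id = c.city_id " +
                "UNION ALL " +
                "SELECT p.city_id, c.city_id FROM person p " +
                "RIGHT JOIN city c ON p.city_id = c.city_id " +
                "WHERE p.city_id IS NULL");

            List<List<?>> rows = ignite.cache("person").query(qry).getAll();
            rows.forEach(System.out::println);
        }
    }
}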

чт, 24 окт. 2019 г. в 14:03, DS :
>
> I am giving the syntax as :
>
> SELECT * FROM person  Outer join  city ON person.city_id = city.city_id
>
> The same query works well with inner, left, and right joins.
> Please correct me if I am doing something wrong.
>
> P.S.
> Both tables have an index on the city_id column.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Issue with adding nested index dynamically

2019-10-24 Thread Ivan Pavlukhin
Hi Hemambara,

I rewrote my example in the form of a JUnit test [1]. And I suppose that
it should pass to make the feature useful. Could you please check it and
tell me if my understanding is wrong? I tried this test with the proposed
fix and without it. The result was the same.

[1] 
https://gist.githubusercontent.com/pavlukhin/3ac27c2def1731a5158222a0d542dfa6/raw/0fd0b0ee7d51649c8b79d36358ce5a80940254b8/NestedFieldUnitTest.java

ср, 23 окт. 2019 г. в 23:36, Hemambara :
>
> Hi Ivan, thanks for the quick reply.
>
> Yes, it works perfectly as needed with the fix.
>
> Person.Address.Street will not work because
> person.getPerson().getAddress().getStreet() does not exist. It has to be
> person.getAddress().getStreet(). So column name should be "Address.street"
>
> Any other name does the trick as long as an alias option is provided in
> SQL, but in SqlFieldsQuery I cannot provide an alias option, so the only option
> that we are left with is to use the field name as the alias name by default. That way
> we will be able to rejoin the node.
>
> Also, just creating the field name as "street" will not work, as there could
> be another column with the same name.
>
> Ex: Person.Address.Street and Person.College.Address.Street. If we use
> street here, then which street are we referring to? This fix will take
> the whole name and use it in the query to avoid ambiguity.
>
>
> Please follow the instructions that I mentioned in my previous email.
> If you execute those two programs with and without the fix, you can get a
> better idea.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Issue with adding nested index dynamically

2019-10-23 Thread Ivan Pavlukhin
System.err.println(cache.query(new SqlFieldsQuery("Select * from
> Person")).getAll());
>
> }
>
> }
> }
>
> class Person {
> String name;
> Address address;
>
> Person(String name, Address address){
> this.name = name;
> this.address = address;
> }
>
> @Override
> public String toString() {
> return "Person{" +
> "name='" + name + '\'' +
> ", address=" + address +
> '}';
> }
> }
>
> class Address {
> String street;
> int number;
>
> public Address(String street, int number){
> this.street = street;
> this.number = number;
> }
>
> @Override
> public String toString() {
> return "Address{" +
> "street='" + street + '\'' +
> ", number=" + number +
> '}';
> }
> }
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Issue with adding nested index dynamically

2019-10-22 Thread Ivan Pavlukhin
Hi Hemambara,

I created an example [1] and checked with the proposed fix and without
it. It seems the thing does not work in either case.

And I think that supporting such dynamic nested fields properly is not
an easy thing in general. My first thought is that you need to
implement object-relational mapping in your application code.

[1] https://gist.github.com/pavlukhin/3ac27c2def1731a5158222a0d542dfa6

вт, 22 окт. 2019 г. в 01:16, Hemambara :
>
> Can you please provide an update
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Optimistic Serializable SQL Transactions

2019-10-21 Thread Ivan Pavlukhin
Justin,

Good to know that. Some comments:

> Their SQL is built on top of the key-value interface so they get parallel 
> optimistic transactions "for free"
AFAIK Cockroach transactions do not use RocksDB transactional
capabilities at all. Moreover, I do not know if your case is possible
with Cockroach: if transaction A updated a value for key 1, then
transaction B would block on reading the value for the same key (I
believe it is due to SERIALIZABLE isolation). Also, there is a special
case -- deadlock. In case of deadlock (e.g. tx A locked key 1, tx B
locked key 2, A wants to update key 2, B wants to update key 1), one of
the transactions will be aborted shortly.

And a bit of purism: I cannot call Cockroach transactions "optimistic"
because they employ locking and waiting during updates, while truly
optimistic transactions do not.

сб, 19 окт. 2019 г. в 13:17, Justin Moore :
>
> If I've read this right -- 
> https://www.cockroachlabs.com/blog/how-cockroachdb-distributes-atomic-transactions/
>  -- Cockroach achieves it by using a convention of writing "intents" to 
> provisionally "change" any given key with a timestamp suffix from the hybrid 
> logical clock which orders transactions.  When reading a given key, they'll 
> actually do a range scan of sorts over all the key's write intents to see 
> which is the latest that successfully committed -- subsequently they'll 
> promote write intents to genuine write "effects" and remove prior records.  
> Their SQL is built on top of the key-value interface so they get parallel 
> optimistic transactions "for free" -- I think YugaByte's "provisional writes" 
> work much the same way 
> (https://docs.yugabyte.com/latest/architecture/transactions/distributed-txns/).
>
> Seeing that Ignite would support the optimistic transactions for its
> key-value cache, I assumed you were already doing something similar and I was
> kind of surprised that SQL optimistic transactions weren't naturally 
> supported (hence my question).
>
> On Sat, Oct 19, 2019 at 12:06 AM Ivan Pavlukhin  wrote:
>>
>> Hi Justin,
>>
>> Thank you for sharing a details about your use case. Quite interesting.
>>
>> It seems that in Ignite something like that is possible with
>> optimistic cache transactions (key-value API). Technically it seems to
>> be achievable when transaction protocol accumulates transaction
>> read-write set in memory area allocated for each transaction. Ignite
>> key-value transactions behaves so. But MVCC transactions was designed
>> to support SQL update operations as well which possible be larger than
>> available memory. So, new row versions are written in the same place
>> where a regular (already committed) data resides. Supporting multiple
>> not committed versions will complexify existing implementation a lot
>> (do not have good ideas from scratch how to implement it).
>>
>> Really interesting if cockroachdb supports something like this.
>>
>> сб, 19 окт. 2019 г. в 03:36, Justin Moore :
>> >
>> > Thanks Ivan,
>> >
>> > First, apologies, I don't have a simple example to share...
>> >
>> > I'm trying to explore the possibility of answering "what-if" hypothetical 
>> > queries using (SQL) transactions that don't commit (i.e. transactions that 
>> > always rollback/abort).  Each what-if transaction would be like a 
>> > self-contained alternate reality that can surface information from an 
>> > alternate future -- but to do so effectively, the transaction ought to 
>> > process all statements, even though it will never (succeed to) commit, so 
>> > it must not abort early/unintentionally.
>> >
>> > I think a couple of other "distributed SQL" offerings 
>> > (cockroachDB/yugabyte) are architected in a way that makes this possible, 
>> > where traditionally "unclustered" databases (Postgres) generally seemed to 
>> > rely on locking that would prevent the capability.  I was -- and still am 
>> > -- looking for other (probably distributed) options that might make this 
>> > feasible, which lead me to the Ignite docs.
>> >
>> > My specific use case is to do something kind of like a Log Sorted Merge 
>> > representation with a twist, using a shared database and a shared Write 
>> > Ahead Log (the twist is that the WAL is not strictly append-only).  So 
>> > concurrent clients would assemble the current state on demand by applying 
>> > the log to the database -- which is a "snapshot" practically speaking -- 
>> > in a transaction that would not be committed (a separate "compaction"
>> > process would apply a prefix of the log to be persisted permanently).

Re: Optimistic Serializable SQL Transactions

2019-10-18 Thread Ivan Pavlukhin
Hi Justin,

Thank you for sharing the details about your use case. Quite interesting.

It seems that in Ignite something like that is possible with
optimistic cache transactions (key-value API). Technically it seems to
be achievable when the transaction protocol accumulates the transaction
read-write set in a memory area allocated for each transaction. Ignite
key-value transactions behave this way. But MVCC transactions were designed
to support SQL update operations as well, which may be larger than
available memory. So, new row versions are written in the same place
where the regular (already committed) data resides. Supporting multiple
uncommitted versions would complicate the existing implementation a lot
(I do not have good ideas offhand for how to implement it).

It is really interesting whether cockroachdb supports something like this.
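
For illustration, a minimal sketch of the key-value flavor mentioned above --
an OPTIMISTIC SERIALIZABLE transaction that is always rolled back, as in the
"what-if" scenario (the cache name and values are hypothetical):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

public class WhatIfTxSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, Integer> cache = ignite.getOrCreateCache(
                new CacheConfiguration<Integer, Integer>("whatIf")
                    .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL));

            cache.put(1, 100);

            try (Transaction tx = ignite.transactions().txStart(
                TransactionConcurrency.OPTIMISTIC, TransactionIsolation.SERIALIZABLE)) {
                // Reads and writes accumulate in the transaction's read-write
                // set and stay invisible to other transactions.
                int speculative = cache.get(1) + 42;
                cache.put(1, speculative);

                // A "what-if" transaction never commits: the speculative state
                // is discarded here. If tx.commit() were called instead, a
                // conflicting read-write set would make it throw
                // TransactionOptimisticException.
                tx.rollback();
            }
        }
    }
}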

сб, 19 окт. 2019 г. в 03:36, Justin Moore :
>
> Thanks Ivan,
>
> First, apologies, I don't have a simple example to share...
>
> I'm trying to explore the possibility of answering "what-if" hypothetical 
> queries using (SQL) transactions that don't commit (i.e. transactions that 
> always rollback/abort).  Each what-if transaction would be like a 
> self-contained alternate reality that can surface information from an 
> alternate future -- but to do so effectively, the transaction ought to 
> process all statements, even though it will never (succeed to) commit, so it 
> must not abort early/unintentionally.
>
> I think a couple of other "distributed SQL" offerings (cockroachDB/yugabyte) 
> are architected in a way that makes this possible, where traditionally 
> "unclustered" databases (Postgres) generally seemed to rely on locking that 
> would prevent the capability.  I was -- and still am -- looking for other 
> (probably distributed) options that might make this feasible, which lead me 
> to the Ignite docs.
>
> My specific use case is to do something kind of like a Log Sorted Merge 
> representation with a twist, using a shared database and a shared Write Ahead 
> Log (the twist is that the WAL is not strictly append-only).  So concurrent 
> clients would assemble the current state on demand by applying the log to the 
> database -- which is a "snapshot" practically speaking -- in a transaction 
> that would not be committed (a separate "compaction" process would apply a 
> prefix of the log to be persisted permanently).  As such, concurrent clients 
> are going to be trying to do the exact same writes to the database in their 
> transactions -- they need not commit but all other statements should be 
> executed.
>
> Sorry if it's a bit confusing...
>
> Cheers,
> Justin
>
> On Fri, Oct 18, 2019 at 6:31 AM Ivan Pavlukhin  wrote:
>>
>> Hi,
>>
>> Currently there is no activity on optimistic transaction support for SQL.
>>
>> A transaction will be aborted on the first write conflict. Maybe I got
>> it wrong, but what is the benefit of aborting later (on commit)
>> instead of earlier (on write conflict)? Perhaps a scenario written in
>> terms of Ignite operations (cache.get, cache.put, cache.query) can
>> illustrate your problem better for my understanding.
>>
>> пт, 18 окт. 2019 г. в 06:58, Justin Moore :
>> >
>> > Hi,
>> >
>> > Very new to Ignite and just trying to assess its capabilities.
>> >
>> > tl;dr: are optimistic serializable sql transactions on the roadmap?
>> >
>> > It seems that currently only pessimistic repeatable_read sql transactions 
>> > are supported (in beta as of the current version -- 2.7).  I'm trying to 
>> > confirm that, in Ignite, if I started two concurrent transactions (from 
>> > the same snapshot) where one intends to execute statements #1 
>> > (compare-and-set), #2 (read-only), and #3 (whatever else), while the other 
>> > intends to execute the exact same update statements #1, #2, and #3, but 
>> > also a subsequent #4, understanding that (all but) one of those 
>> > transactions would probably fail to commit, I'm looking to clarify whether 
>> > or not the failing one would throw/abort even before reaching statement #2 
>> > (which might be a read returning values)?
>> >
>> > If I'm reading the docs correctly it seems that in pessimistic 
>> > repeatable_read mode the transaction would fail one of the transactions at 
>> > statement #1 (due to write conflict), but if it could have been optimistic 
>> > serializable, the transaction would simply rollback at the point a commit 
>> > was attempted.  Please correct me if I'm wrong.
>> >
>> > Lastly, are optimistic serializable sql transactions on the roadmap?
>> >
>> > Thanks
>>
>>
>>
>> --
>> Best regards,
>> Ivan Pavlukhin



-- 
Best regards,
Ivan Pavlukhin


Re: Issue with adding nested index dynamically

2019-10-17 Thread Ivan Pavlukhin
Hi Hemambara,

Could you provide a little bit more detail about your problem?
I would like to see your Java classes and QueryEntity configuration.
Ideally there would be a runnable reproducer (a JUnit test or a class
with a main method).

Also, I would like to understand how soon you are expecting a fix in an Ignite release.
As far as I know, the next Ignite release is planned for the beginning of next year.

I tried the following and it worked for me:

class User {
String userName;
Address address;

public User(String userName, Address address) {
this.userName = userName;
this.address = address;
}
}

class Address {
String streetName;
String zipcode;

public Address(String streetName, String zipcode) {
this.streetName = streetName;
this.zipcode = zipcode;
}
}

@Test
public void testSqlPojoWithoutAnnotations() throws Exception {
IgniteEx grid = startGrid(0);

LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("userName", "String");
fields.put("address.zipcode", "String");

IgniteCache<Integer, User> cache = grid.createCache(new
CacheConfiguration<Integer, User>(DEFAULT_CACHE_NAME)
.setQueryEntities(Collections.singleton(new
QueryEntity(Integer.class, User.class)
.setFields(fields))));

cache.put(1, new User("ivan", new Address("a", "123")));

System.err.println(cache.query(new SqlFieldsQuery("select * from
User")).getAll());

System.err.println(cache.query(new SqlFieldsQuery("select zipcode
from User")).getAll());

cache.query(new SqlFieldsQuery("create index on User(zipcode)"));

System.err.println(cache.query(new SqlFieldsQuery("select zipcode
from User")).getAll());
}

ср, 16 окт. 2019 г. в 10:59, Hemambara :
>
> Hello Ivan, can you please check and provide updates on this
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Issue with adding nested index dynamically

2019-10-17 Thread Ivan Pavlukhin
I created a gist with my code [1] as it does not render well inline.

[1] https://gist.github.com/pavlukhin/362c0e40d4a010a8f7c0795368e53efb

чт, 17 окт. 2019 г. в 15:01, Ivan Pavlukhin :
>
> Hi Hemambara,
>
> Could you provide a little bit more detail about your problem?
> I would like to see your Java classes and QueryEntity configuration.
> Ideally there would be a runnable reproducer (a JUnit test or a class
> with a main method).
>
> Also, I would like to understand how soon you are expecting a fix in an
> Ignite release.
> As far as I know, the next Ignite release is planned for the beginning of next year.
>
> I tried the following and it worked for me:
>
> class User {
> String userName;
> Address address;
>
> public User(String userName, Address address) {
> this.userName = userName;
> this.address = address;
> }
> }
>
> class Address {
> String streetName;
> String zipcode;
>
> public Address(String streetName, String zipcode) {
> this.streetName = streetName;
> this.zipcode = zipcode;
> }
> }
>
> @Test
> public void testSqlPojoWithoutAnnotations() throws Exception {
> IgniteEx grid = startGrid(0);
>
> LinkedHashMap<String, String> fields = new LinkedHashMap<>();
> fields.put("userName", "String");
> fields.put("address.zipcode", "String");
>
> IgniteCache<Integer, User> cache = grid.createCache(new
> CacheConfiguration<Integer, User>(DEFAULT_CACHE_NAME)
> .setQueryEntities(Collections.singleton(new
> QueryEntity(Integer.class, User.class)
> .setFields(fields))));
>
> cache.put(1, new User("ivan", new Address("a", "123")));
>
> System.err.println(cache.query(new SqlFieldsQuery("select * from
> User")).getAll());
>
> System.err.println(cache.query(new SqlFieldsQuery("select zipcode
> from User")).getAll());
>
> cache.query(new SqlFieldsQuery("create index on User(zipcode)"));
>
> System.err.println(cache.query(new SqlFieldsQuery("select zipcode
> from User")).getAll());
> }
>
> ср, 16 окт. 2019 г. в 10:59, Hemambara :
> >
> > Hello Ivan, can you please check and provide updates on this
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>
>
> --
> Best regards,
> Ivan Pavlukhin



-- 
Best regards,
Ivan Pavlukhin


Re: Confusion with Events

2019-10-07 Thread Ivan Pavlukhin
> cacheManager
> .getCache(event.cacheName())
> .evict(event.key());
> }
>     catch (Exception x)
> {
> log.error("Callback error: {}", x.getMessage(), x);
> }
>
> //Return the event
> return true;
> });
>
> return true;
> }
> },
> EVT_CACHE_ENTRY_EVICTED,
> EVT_CACHE_OBJECT_EXPIRED);
>
>
>
> -
> KJQ
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Confusion with Events

2019-10-03 Thread Ivan Pavlukhin
Hi KJQ,

Thank you for sharing the details. Now things are clearer for me.

I suppose you should enable the needed event types. You can call
setIncludeEventTypes(EventType.EVT_CACHE_OBJECT_PUT,
EventType.EVT_CACHE_OBJECT_EXPIRED) on your IgniteConfiguration. Also,
I noticed that you use ig.compute() on the Ignite instance (ig) local to
your calling code. But you actually need to inject the Ignite instance
on the node where the event listener is called; this can be done with
@IgniteInstanceResource, as the sketch below shows. You can find such an
injection in ComputeFibonacciContinuationExample in the Ignite sources.
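
A minimal sketch of both points -- enabling the event types on the
configuration and injecting the local Ignite instance into the remote filter
(the class names are hypothetical):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.events.CacheEvent;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;
import org.apache.ignite.resources.IgniteInstanceResource;

public class EventInjectionSketch {
    /** Remote filter; runs on the node where the event fires. */
    static class ExpiredFilter implements IgnitePredicate<CacheEvent> {
        /** Injected on the executing node, not captured from the caller. */
        @IgniteInstanceResource
        private transient Ignite ignite;

        @Override public boolean apply(CacheEvent evt) {
            // Use the injected instance here, e.g.
            // ignite.compute().broadcastAsync(...).
            return true; // keep listening
        }
    }

    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration()
            // Events are disabled by default; enable only the needed types.
            .setIncludeEventTypes(
                EventType.EVT_CACHE_OBJECT_PUT,
                EventType.EVT_CACHE_OBJECT_EXPIRED);

        Ignite ignite = Ignition.start(cfg);

        ignite.events(ignite.cluster().forServers())
            .remoteListen(null, new ExpiredFilter(),
                EventType.EVT_CACHE_OBJECT_PUT,
                EventType.EVT_CACHE_OBJECT_EXPIRED);
    }
}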

вт, 1 окт. 2019 г. в 18:24, KJQ :
>
> Thank you for the quick response - it is much appreciated.  We are still
> working through our Ignite integration; so far it has been nice, but it is
> definitely a learning curve.
>
> So, we have an environment that is deployed as services in a K8S cluster.
> Each service uses Caffeine as the in-memory cache (I know Ignite has a
> near-cache but cannot make that change now).  Caffeine is also being tested
> inside of Reactive Flux/Mono calls making it harder to change quickly.  Each
> service, a deployed pod, is also an Ignite "server" node as well.  We use
> Ignite, partitioned, as the primary cache (and some distributed compute with
> Drools).
>
> Because all of the nodes use Caffeine, which becomes just a near-cache when
> Ignite is included, we would like Ignite to raise an "expired" event to all
> the nodes and evict that item from Caffeine (before Ignite, Caffeine was
> used as the only in-memory cache per service) - we want to clean up the local
> caches on all the nodes.  Each Ignite cache configuration has an expiration
> policy set up.
>
> 1) I tried using the `addCacheEntryListenerConfiguration` with each Ignite
> cache thinking this was a better choice because (I thought) it was backed by
> the continuous query and would not require me to explicitly use events.
> But, it looked like that only fired on the node where the operation happened
> (i.e. locally).  Maybe I could broadcast the event from within this listener
> to the other nodes?
>
> 2) My next attempt is using the "remoteListen()."  If I understand you
> correctly, I do not need a "local listener" but the "remote listener" should
> broadcast a message when it is triggered?
>
> *A couple of things I noticed in my test below:*
> - If I take out the PUT it seems I never see any callback notifications
> - Without the local event listener I do not see any expiration messages
> (possibly because of where the data is being cached in the test - local vs.
> remote node)
>
> Basically, I would like to do this:
> 1) R/W to Ignite cache with an "expiration" policy
> 2) When Ignite decides to "expire" an item raise an event to all Ignite
> nodes
> 3) From the event, evict the cache item locally.
>
> This is what I have right now for testing:
>
> ig.events(
> ig.cluster().forServers())
> .remoteListen(
>   new IgniteBiPredicate<UUID, CacheEvent>()
>   {
> @Override
> public boolean apply(UUID uuid, CacheEvent evt)
> {
> log.debug("Received local event {} {} {}  // {} //
> {} ",
>   evt.cacheName(),
>   evt.name(),
>   evt.key(),
>   evt.node().id(),
>   evt.eventNode().id());
> return true; // Continue listening.
>   }
>   },
> new IgnitePredicate<CacheEvent>()
> {
> @Override
> public boolean apply(final CacheEvent evt)
> {
> log.debug("Received remote event {} {} {}  / {} {} ",
>   evt.cacheName(),
>   evt.name(),
>   evt.key(),
>   evt.node().id(),
>   evt.eventNode().id());
>
> //Broadcast the callback
> ig.compute().broadcastAsync(new IgniteCallable<Object>()
> {
> @Override
> public Object call() throws Exception
> {
> log.debug("Received callback {} {} {}  / {} {} ",
>  evt.cacheName(),
>  evt.name(),
>  evt.key(),
>  evt.node().id(),
>      evt.eventNode().id());
> return null;
> }
> });
>
> return true;
> }
> },
> EVT_CACHE_OBJECT_PUT,
> EVT_CACHE_OBJECT_EXPIRED);
>
>
>
>
>
>
> -
> KJQ
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Confusion with Events

2019-10-01 Thread Ivan Pavlukhin
Hi KJQ,

The following comes to my mind:
1. Use
IgniteEvents#remoteListen(org.apache.ignite.lang.IgniteBiPredicate,
org.apache.ignite.lang.IgnitePredicate, int...) with a null first
predicate (locLsnr), because notifying the caller does not seem needed.
2. Use IgniteCompute#broadcastAsync(org.apache.ignite.lang.IgniteCallable)
in the remote event handler (rmtFilter) to notify all nodes about events;
a sketch of both suggestions follows below.

But there can be the following caveats:
1. Such broadcasts can lead to poor performance; buffering events
before broadcasting might help.
2. Event listeners can also be triggered for backup partition
updates; perhaps backup notifications should be filtered to avoid
duplicate broadcasts.

Also, more details about your use case would help to develop a good
solution. Currently the use case is not fully clear to me.
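
A rough sketch of suggestions (1) and (2) combined; the eviction callback
body is hypothetical, and it assumes the relevant event types are enabled via
IgniteConfiguration#setIncludeEventTypes:

import java.util.UUID;
import org.apache.ignite.Ignite;
import org.apache.ignite.events.CacheEvent;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;
import org.apache.ignite.resources.IgniteInstanceResource;

public class BroadcastOnExpirySketch {
    static class BroadcastingFilter implements IgnitePredicate<CacheEvent> {
        @IgniteInstanceResource
        private transient Ignite ignite;

        @Override public boolean apply(CacheEvent evt) {
            String cacheName = evt.cacheName();
            Object key = evt.key();

            // Suggestion (2): notify every node so each can evict the entry
            // from its local near cache (Caffeine in this thread).
            ignite.compute(ignite.cluster().forServers()).broadcastAsync(() -> {
                // Evict 'key' from the local cache named 'cacheName' here.
            });

            return true; // keep listening
        }
    }

    static UUID register(Ignite ignite) {
        // Suggestion (1): null local listener -- the caller does not need to
        // be notified, the remote filter does all the work.
        return ignite.events(ignite.cluster().forServers())
            .remoteListen(null, new BroadcastingFilter(),
                EventType.EVT_CACHE_OBJECT_EXPIRED);
    }
}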

вт, 1 окт. 2019 г. в 03:38, KJQ :
>
> I have some questions regarding Cache Listeners/Events.
>
> We have a system that used a lot of "Caffeine" based caches spread across
> multiple services (in K8S).  Basically "near-caches" (without a backing
> store).  We are now trying to fit Ignite behind those usages.
>
> *What we are trying to do is, when Ignite /expires/ an entry, receive the
> event on all the nodes and evict it from Caffeine*.
>
> Is one of these approaches below correct? And/or how can I accomplish this?
> Is there a better/easier way?
>
> 1) I tried registering a CacheListener with each cache configuration but
> that seemed to only fire where the cache event was fired:
>
> config.addCacheEntryListenerConfiguration(new
> IgniteExpiredListener<>(cacheManagerProvider));
>
> 2) I am experimenting with cache events as well like this below.
>
> ig.events(
> ig.cluster().forServers())
> .remoteListen(
> new IgniteBiPredicate<UUID, CacheEvent>()
> {
> @Override
> public boolean apply(UUID uuid, CacheEvent evt)
> {
> log.debug("Received local event "
>   + evt.name()
>   + ", key="
>   + evt.key()
>   + ", at="
>   + evt.node().consistentId().toString()
>   + ", "
>   +
> evt.eventNode().consistentId().toString() );
> cm.getCache(evt.cacheName()).evict(evt.key());
> return true; // Continue listening.
> }
> },
> new IgnitePredicate<CacheEvent>()
> {
> @Override
> public boolean apply(final CacheEvent evt)
> {
> log.debug("Received remote event "
>   + evt.name()
>   + ", key="
>   + evt.key()
>   + ", at="
>   + evt.node().consistentId().toString()
>   + ", "
>   +
> evt.eventNode().consistentId().toString() );
> return true;
> }
> },
> EVTS_CACHE);
>
>
>
>
> -
> KJQ
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Getting javax.cache.CacheException after upgrading to Ignite 2.7 (IGNITE-10884)

2019-10-01 Thread Ivan Pavlukhin
>>>>> My cache configuration is as follows. I am using TRANSACTIONAL and not  
>>>>> TRANSACTIONAL_SNAPSHOT.
>>>>>
>>>>>
>>>>>
>>>>> private CacheConfiguration<DefaultDataAffinityKey, IpContainerIpV4Data> ipContainerIPV4CacheCfg() {
>>>>>
>>>>>   CacheConfiguration<DefaultDataAffinityKey, IpContainerIpV4Data> ipContainerIpV4CacheCfg = new
>>>>> CacheConfiguration<>(CacheName.IP_CONTAINER_IPV4_CACHE.name());
>>>>>   
>>>>> ipContainerIpV4CacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
>>>>>   ipContainerIpV4CacheCfg.setWriteThrough(ENABLE_WRITE_THROUGH);
>>>>>   ipContainerIpV4CacheCfg.setReadThrough(false);
>>>>>   ipContainerIpV4CacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
>>>>>   
>>>>> ipContainerIpV4CacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
>>>>>   ipContainerIpV4CacheCfg.setBackups(1);
>>>>>   Factory<IpContainerIpV4CacheStore> storeFactory =
>>>>> FactoryBuilder.factoryOf(IpContainerIpV4CacheStore.class);
>>>>>   ipContainerIpV4CacheCfg.setCacheStoreFactory(storeFactory);
>>>>>   ipContainerIpV4CacheCfg.setIndexedTypes(DefaultDataAffinityKey.class, 
>>>>> IpContainerIpV4Data.class);
>>>>>   
>>>>> ipContainerIpV4CacheCfg.setCacheStoreSessionListenerFactories(cacheStoreSessionListenerFactory());
>>>>>   ipContainerIpV4CacheCfg.setSqlIndexMaxInlineSize(84);
>>>>>   RendezvousAffinityFunction affinityFunction = new 
>>>>> RendezvousAffinityFunction();
>>>>>   affinityFunction.setExcludeNeighbors(true);
>>>>>   ipContainerIpV4CacheCfg.setAffinity(affinityFunction);
>>>>>   ipContainerIpV4CacheCfg.setStatisticsEnabled(true);
>>>>>
>>>>>   return ipContainerIpV4CacheCfg;
>>>>> }
>>>>>
>>>>>
>>>>> Thanks,
>>>>> Prasad
>>>>>
>>>>> On Wed, Jan 9, 2019 at 5:45 PM Павлухин Иван  wrote:
>>>>>>
>>>>>> Hi Prasad,
>>>>>>
>>>>>> > javax.cache.CacheException: Only pessimistic repeatable read 
>>>>>> > transactions are supported at the moment.
>>>>>> The exception you mentioned should happen only for a cache with
>>>>>> TRANSACTIONAL_SNAPSHOT atomicity mode configured. Have you configured
>>>>>> TRANSACTIONAL_SNAPSHOT atomicity for any cache? As Denis mentioned
>>>>>> there are number of bugs related to TRANSACTIONAL_SNAPSHOT, e.g. [1].
>>>>>>
>>>>>> [1] https://issues.apache.org/jira/browse/IGNITE-10520
>>>>>>
>>>>>> вс, 6 янв. 2019 г. в 20:03, Denis Magda :
>>>>>> >
>>>>>> > Hello,
>>>>>> >
>>>>>> > Ignite versions prior to 2.7 never supported transactions for SQL 
>>>>>> > queries. You were enlisting SQL in transactions at your own risk. 
>>>>>> > Ignite version 2.7 introduced true transactional support for SQL based 
>>>>>> > on MVCC. Presently it's in beta with GA to be available around Q2-Q3 
>>>>>> > this year. The community is working on optimizations.
>>>>>> >
>>>>>> > Please refer to this docs for more details:
>>>>>> > https://apacheignite.readme.io/docs/multiversion-concurrency-control
>>>>>> > https://apacheignite-sql.readme.io/docs/transactions
>>>>>> > https://apacheignite-sql.readme.io/docs/multiversion-concurrency-control
>>>>>> >
>>>>>> > --
>>>>>> > Denis
>>>>>> >
>>>>>> > On Sat, Jan 5, 2019 at 7:48 PM Prasad Bhalerao 
>>>>>> >  wrote:
>>>>>> >>
>>>>>> >> Can someone please explain if anything has changed in ignite 2.7.
>>>>>> >>
>>>>>> >> Started getting this exception after upgrading to 2.7.
>>>>>> >>
>>>>>> >>
>>>>>> >> -- Forwarded message -
>>>>>> >> From: Prasad Bhalerao 
>>>>>> >> Date: Fri 4 Jan, 2019, 8:41 PM
>>>>>> >> Subject: Re: Getting javax.cache.CacheException after upgrading to 
>>>>>> >> Ignite
>>>>>> >> 2.7
>>>>>> >> To: 
>>>>>> >>
>>>>>> >>
>>>>>> >> Can someone please help me with this?
>>>>>> >>
>>>>>> >> On Thu 3 Jan, 2019, 7:15 PM Prasad Bhalerao
>>>>>> >> wrote:
>>>>>> >>
>>>>>> >> > Hi
>>>>>> >> >
>>>>>> >> > After upgrading to 2.7 version I am getting following exception. I 
>>>>>> >> > am
>>>>>> >> > executing a SELECT sql inside optimistic transaction with 
>>>>>> >> > serialization
>>>>>> >> > isolation level.
>>>>>> >> >
>>>>>> >> > 1) Has anything changed from version 2.6 to 2.7? This worked fine
>>>>>> >> > prior to
>>>>>> >> > version 2.7.
>>>>>> >> >
>>>>>> >> > After changing it to Pessimistic and isolation level to 
>>>>>> >> > REPEATABLE_READ it
>>>>>> >> > works fine.
>>>>>> >> >
>>>>>> >> >
>>>>>> >> >
>>>>>> >> >
>>>>>> >> >
>>>>>> >> >
>>>>>> >> > *javax.cache.CacheException: Only pessimistic repeatable read
>>>>>> >> > transactions are supported at the moment.
>>>>>> >> > at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:697)
>>>>>> >> > at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:636)
>>>>>> >> > at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:388)
>>>>>> >> > at com.qualys.agms.grid.dao.AbstractDataGridDAO.getFieldResultsByCriteria(AbstractDataGridDAO.java:85)*
>>>>>> >> >
>>>>>> >> > Thanks,
>>>>>> >> > Prasad
>>>>>> >> >
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Best regards,
>>>>>> Ivan Pavlukhin



-- 
Best regards,
Ivan Pavlukhin


Re: nodes are restarting when i try to drop a table created with persistence enabled

2019-09-30 Thread Ivan Pavlukhin
Hi,

The stack trace and exception message have some valuable details:
org.apache.ignite.internal.mem.IgniteOutOfMemoryException: Failed to
find a page for eviction [segmentCapacity=126515, loaded=49628,
maxDirtyPages=37221, dirtyPages=49627, cpPages=0, pinnedInSegment=1,
failedToPrepare=49628]

I see the following:
1. Not all data fits in the data region memory.
2. The exception occurs when the underlying cache is destroyed
(the IgniteCacheOffheapManagerImpl.stopCache/removeCacheData call in the stack
trace).
3. A page for replacement to disk was not found (loaded=49628,
failedToPrepare=49628). Almost all pages are dirty (dirtyPages=49627).

Answering several questions could help:
1. Does the same occur if IgniteCache.destroy() is called instead of DROP
TABLE? (See the sketch below.)
2. Does the same occur if SQL is not enabled for the cache?
3. It would be nice to see the IgniteConfiguration and CacheConfiguration
causing the problems.
4. We need to figure out why almost all pages are dirty. It might be a clue.
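
A quick way to check question 1, assuming the table was created via CREATE
TABLE so that the underlying cache got the default SQL_<SCHEMA>_<TABLE> name
(the table name and configuration path below are hypothetical):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class DestroyInsteadOfDrop {
    public static void main(String[] args) {
        // Start with the same configuration that reproduces the problem.
        try (Ignite ignite = Ignition.start("ignite-config.xml")) {
            // For CREATE TABLE the cache name defaults to
            // SQL_<SCHEMA>_<TABLE>, e.g. SQL_PUBLIC_MYTABLE.
            IgniteCache<?, ?> cache = ignite.cache("SQL_PUBLIC_MYTABLE");

            if (cache != null)
                cache.destroy(); // cache-level equivalent of DROP TABLE
        }
    }
}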